Who Bears the Risk If AI Gets It Wrong?

Dear Will & AiME,

We’re embedding AI into more of our products and services, including tools that make recommendations and automate decisions. Internally, we’re wondering: if an AI-driven outcome causes harm, delay, or loss, who actually bears that risk — us, our vendor, or our customer? How should this be handled in commercial contracts?

— General Counsel at a B2B Software Company

Short answer 💡

AI risk is largely determined by contract, with liability depending on how responsibilities are allocated among vendors, customers, and developers. Businesses should ensure their agreements clearly reflect how AI systems are built, deployed, and used to avoid unexpected exposure.

Dear General Counsel at a B2B Software Company,

The short answer is that AI risk is contractual risk. How you draft, negotiate, and structure agreements today will largely determine how exposed you are tomorrow.

Why AI Changes the Liability Conversation

Traditional software contracts assume predictable behavior. AI doesn’t always fit that model. Machine learning systems evolve, rely on probabilistic outputs, and may behave differently depending on data, context, or user inputs.

That creates tension in contracts that were built for deterministic tools. When something goes wrong, parties quickly discover that no one clearly “owns” the risk unless the agreement says so.

Key Contract Provisions for Allocating AI Risk

Warranties:

AI vendors increasingly narrow warranties to focus on performance “as designed,” not outcomes. Customers, conversely, often expect assurances of accuracy, compliance, or reliability.

A practical approach is to align warranties with what the AI is actually intended to do, and what it is not. Overpromising on outcomes tied to AI decision-making can create exposure that’s difficult to manage.

Indemnification:

Indemnity clauses are another battleground. Common questions include:

  • Who indemnifies for IP infringement arising from AI outputs?

  • Who bears responsibility if AI-generated content causes regulatory or consumer harm?

  • Does indemnity apply when customers misuse or overly rely on AI outputs?

Clear carve-outs and responsibility-sharing are critical, particularly where training data, prompts, or downstream use are controlled by different parties.

Limitation of Liability:

Many contracts cap damages, but AI can lead to losses that exceed traditional expectations, especially in regulated industries or high-stakes decision environments.

Businesses should think carefully about whether AI-related claims are capped, excluded, or treated differently from standard software claims. In some cases, tiered caps or AI-specific exclusions make sense.

Allocating Risk Across the AI Ecosystem

Most AI solutions involve multiple players: model developers, platform providers, integrators, and end users. Risk allocation works best when each party bears responsibility for what it controls.

  • Developers may take responsibility for model architecture and training.

  • Vendors may assume risk for deployment, integration, and security.

  • Customers may retain responsibility for how outputs are used in business decisions.

Trying to push all AI risk onto one party rarely works and often stalls deals.

The Role of Insurance in AI Risk Management

Insurance markets are responding. Some companies are exploring expanded cyber, technology E&O, or bespoke AI-related coverage. These products are evolving, but they can help backstop residual risk that contracts don’t fully address.

Insurance shouldn’t replace good drafting, but it can complement a well-structured risk strategy.

Completely avoiding AI risk isn’t realistic. Success comes from identifying risks early, allocating them clearly, and pricing them appropriately through contracts and insurance.

— Will & AiME

Three Takeaways:

  1. AI-driven products shift how liability is assessed, making contract drafting more critical than ever.

  2. Indemnities, warranties, and liability caps should reflect who controls data, models, and decision-making.

  3. Contracts and insurance together form the foundation of a sustainable AI risk strategy.

Will Schultz & AiME

Will Schultz is an intellectual property and technology attorney and chair of Merchant & Gould’s Internet, Cybersecurity, and E-Commerce practice. He advises businesses on AI, online platforms, digital assets, and emerging technology law, drawing on experience as both a lawyer and entrepreneur.

https://www.merchantgould.com/people/william-d-schultz/