Insurance carriers quietly back away from covering AI outputs

Several major insurance carriers have begun to back away from providing cybersecurity and other insurance to companies using AI to run internal processes, insiders say.

While there’s no standard response to customer use of AI in the insurance market, many carriers are now quietly declining to write policies for claims related to AI-generated outputs in cybersecurity and errors and omissions (E&O) coverage, these observers say. Other insurance carriers are jacking up prices to cover AI-related claims, they say.

Dozens of insurance carriers appear to be rethinking coverage for mistakes related to AI, says Connor Deeks, CEO of Codestrap, an AI development and consulting firm that works with insurance firms.

Many insurance companies aren’t comfortable with covering AI outputs because they can’t track the reasoning path the AI took to come up with a result, he says.

“That’s playing out downstream with insurance companies basically carving out coverage, whether that’s across cybersecurity or E&O,” he says. “All of these vibe-coded solutions and these AI systems that people have constructed have inherent risk baked into the cake now, and you can’t actually see the full process.”

Carriers' concerns about AI workloads first surfaced publicly in November 2025, when the Financial Times reported that three major carriers, AIG, Great American, and W.R. Berkley, had filed requests with US regulators to offer insurance policies that exclude liabilities tied to AI tools such as chatbots and agents. At the time, those requests appeared to be preemptive moves to preserve the option of excluding AI mistakes at some point in the future.

But now, many carriers seem to be moving forward with plans to exclude AI mistakes from policies, Deeks says. Several carriers he’s been in contact with are moving to limit or end coverage for AI-related business disruptions and liabilities, he adds. The irony is that many insurance carriers are embracing AI for their own internal purposes.

Deeks’ company has a vested interest in AI insurance coverage — Codestrap markets its AI coding platform as traceable and therefore insurable — but other industry insiders have also seen similar carrier decisions.

Carriers find exclusions

It’s still unclear how many carriers will refuse to insure AI workloads, but several are now writing policies that exclude coverage for AI-related business disruptions, says Jason Bishara, financial practice leader at global carrier NSI Insurance Group.

“The risk appetite is changing among the carriers, and it’s always constantly evolving,” he says. “With regard to AI, there are carriers that are just removing it from their risk appetite and declining to quote altogether.”

While some carriers have declined to cover AI outputs, others are building in rate hikes to cover the increased risk, Bishara says. He doesn’t have numbers on the extent of the hikes, but says they are significant.

“Every business has insurance, and every business now is using AI to some extent,” he adds. “Are you seeing those liabilities and exclusions within these policies and an aversion to it from the carriers? The answer is yes.”

Carriers are also treating AI vendors differently than AI users, he says. In many cases, carriers decline to cover AI vendors altogether, while for companies that merely use the technology, they write policies with carve-outs excluding AI-related claims.

“If you’re an AI-related company or specifically an AI company, there’s a good chance that you’ll get a declination at this point,” he adds.

In recent months, many carriers have been asking detailed questions about how customers are using AI to better understand the risk of insuring potential mishaps, he says. Ultimately, this increased scrutiny will make it more difficult for companies to buy insurance for AI workloads.

“For everybody leveraging AI right now, you’re seeing questions like, ‘What are your AI policies? What are your procedures? How are you leveraging AI within your business?’” Bishara adds. “We’re getting a lot of questions from the underwriters on, ‘How do you leverage AI within your business?’”

Coverage in flux

Phil Karecki, CTO for the insurance sector at managed services provider Ensono, also sees some carriers backing away from covering AI outputs, although he’s not sure whether it’s a major trend. Insurance carriers continuously experiment with how to provide coverage, he notes.

Carriers have tried to separate tightly governed AI deployments from more experimental projects when determining whether to provide coverage, he says.

“You’ve got this bifurcation of AI, the governed generative and the autonomous pieces,” he says. “It’s no longer, ‘Are you using AI?’ It’s asking, ‘Are you using governed AI? How are you governing it? How are you keeping it safe and secure?’”

Carriers have been trying to determine whether covering AI workloads can be profitable for them, Karecki adds. Governed AI tools operating in a bounded decision-making process will be more insurable, while experimental AI systems with no monitoring and no easy rollback will be difficult to cover, he notes.

“There’s a repositioning versus a pullback, and that’s very common to the industry, and they will at times open up coverage just to see if it’s this type of insurance that will sell,” he says. “They will assess the results and what needs to change so they can decide whether to re-enter this marketplace or abandon it completely.”

In some cases, whether an AI system is insurable may come down to the circumstances of the individual customer. Carriers generally don’t want to get out of the business of providing insurance, Karecki says.

“What they’re working for right now is, ‘How do I make this profitable, and is this sector insurable?’” he says. “They make those decisions on every application regardless, but now, depending upon what they’re being asked to insure, the questions will follow. ‘What are you using AI for? How are you governing it? What risks does that introduce?’”

It makes sense that some carriers have begun to question whether to cover AI outputs, given the current level of unreliability of most AI systems, says Dorian Smiley, CTO at Codestrap.

“The math says these models should be deterministic, like given the same input, you should get the same output,” he says. “But you can get very different output from the same input, and they can’t know if the answer that they’re giving you is actually correct.”
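Smiley's point can be illustrated with a small sketch. Production LLMs typically decode with temperature sampling, so even a fixed input produces a distribution over outputs rather than a single answer. The toy model below (my own illustration, not Codestrap's code; the logit values are made up) shows how repeated sampling from identical inputs yields different results.

```python
import math
import random

def softmax(logits):
    # Convert raw model scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    # Temperature-scaled sampling: the standard decoding step in LLMs.
    # Any temperature > 0 makes the output stochastic, so the same
    # input logits can yield different tokens on each call.
    scaled = [x / temperature for x in logits]
    probs = softmax(scaled)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Identical input, repeated sampling: the chosen token varies run to run.
logits = [2.0, 1.5, 0.5]  # hypothetical scores for three candidate tokens
samples = [sample_token(logits, temperature=1.0) for _ in range(10)]
```

At temperature near zero the distribution collapses toward the highest-scoring token and output becomes effectively deterministic, which is why carriers' questions about governance often focus on how tightly decoding and review are controlled.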

In most cases, AI models lack inductive reasoning and can’t review their own work, yet many organizations are talking about deploying hundreds of autonomous agents and treating them like digital employees, he notes.

“The idea that these agents are going to become employees, autonomous people working in your organization, is insane,” he says. “You would never hire a person that can’t learn new information, can’t reliably retrieve information, or check their own work.”

NSI’s Bishara has advice for IT and business leaders seeking insurance coverage for their AI workloads: Be honest about how you’re using AI. Companies that conceal their AI risks may have claims rejected when something goes wrong, he says.

“If you don’t fully disclose these things appropriately in the way in which you’re functioning and operating, it could be utilized as an excuse to deny a claim at a later date,” he says. “You don’t want a carrier to come back and say, ‘We didn’t underwrite to that risk. We asked these questions, and you didn’t disclose it.’”
