AI Accountability Gets a Boost with New Insurance Offering
Toronto-based startup Armilla has launched a pioneering insurance policy designed specifically for the AI era, underwritten by Lloyd’s of London.
This coverage aims to protect companies from financial losses stemming from AI-related errors or malfunctions.
While Lloyd’s and Armilla are tapping into the booming AI market to grow their business—much like they have done with previous emerging risks—the move highlights an important reality: AI, despite its transformative potential, remains a significant business risk.
For companies hoping AI will lower operational costs, this new insurance sounds a cautionary note.
Integrating AI could, in fact, increase expenses such as insurance premiums.
Armilla’s policy is structured to cover legal costs and potential damages if a company faces lawsuits related to harm caused by its AI products.
CEO Karthik Ramakrishnan suggests that beyond risk mitigation, this insurance could encourage wider AI adoption by easing fears around technology failures.
For example, in 2024 Air Canada faced costly repercussions after its AI chatbot mistakenly offered unauthorised discounts; a court ruling forced the airline to honour those offers.
This is wild. Not only is Air Canada being forced to honor a refund invented by its AI chatbot, but they tried to get out of it by claiming the bot is "responsible for its own actions." Everyone wants AI, but no one wants to be responsible for it.
— Reid Southen (@Rahll) February 17, 2024
Had Air Canada been insured under Armilla’s policy, some of those losses might have been mitigated.
However, the coverage is selective—Armilla only insures AI systems after thorough evaluation to ensure an acceptable risk profile, refusing to cover “lemon” models prone to failure.
This contrasts with some existing insurers who offer limited AI-related protection as part of broader technology errors and omissions policies.
Ultimately, this new product reflects the evolving landscape where AI’s power is balanced by the very real risks it introduces—and the growing need for businesses to manage those risks proactively.
Risks of Trusting AI’s Made-Up Data in Decision-Making
The impact of companies relying on AI-generated falsehoods—known as hallucinations—can be profound, resulting in misguided decisions, financial setbacks, and reputational damage, according to industry news site PYMNTS.
🚨 Hallucinations are one of the most serious legal challenges in AI, as they can lead to prohibitive liability costs for companies developing or deploying AI. What AI companies don't want to admit is that hallucinations may NEVER go away:
"The company found that o3 — its most… pic.twitter.com/apF02spaEM
— Luiza Jarovsky (@LuizaJarovsky) May 8, 2025
The outlet also raises critical questions about accountability when AI systems produce such errors.
This concern aligns with insights from MJ Jiang, Chief Strategy Officer at Credibly, who recently told Inc that hallucinations in AI cannot be fully eliminated, only mitigated.
Jiang warns that companies face significant legal risks from these AI-induced mistakes and should proactively consider who bears responsibility if an AI error leads to harm.
She emphasises the importance of establishing robust mitigation strategies to minimise these risks.
In fact, she thinks that:
“…because GenAI cannot explain to you how it came up with the output, human governance will be essential in businesses where the use cases are of higher risk to the business.”
Business leaders and experts alike caution that adopting AI is far from risk-free and advocate for thorough preparation to ensure compliance and manage potential legal challenges.
Incorporating these considerations into your AI strategy and budget is essential for navigating the complex risks of AI implementation.