Compliance · February 2026 · 12 min read

AI Insurance: Do You Need It? What Covers AI Mistakes?

Insurance policy documents. Photo by Vlad Deep on Pexels

Your business is using AI. Maybe it drafts client communications. Maybe it handles customer enquiries. Maybe it processes data that informs business decisions. The question nobody is asking: if the AI gets something wrong and a client suffers a loss, does your insurance actually cover it?

The honest answer for most Australian SMEs is: probably not entirely. Traditional business insurance policies were written before AI entered the workplace. Professional indemnity covers your professional errors. Public liability covers physical injury and property damage. Cyber insurance covers data breaches. But AI creates new categories of risk that sit in the gaps between these policies.

Lockton, one of the world's largest insurance brokerages, has already flagged AI misuse and data exposure as an emerging risk category requiring specific coverage. Other major brokers are following. This is not a theoretical concern. It is a practical insurance gap that Australian businesses need to address now.

The Insurance Gap AI Creates

$290K: partial refund Deloitte paid the Australian Government after AI hallucinated references

Zero: AI vendors that accept liability for the accuracy of their outputs

$50M: maximum penalty under the Australian Privacy Act for serious or repeated breaches

Traditional insurance assumes a human made the decision or performed the work. When AI is involved, liability gets complicated. Consider these scenarios.

An accounting firm uses AI to prepare tax advice. The AI misinterprets a client's situation and the advice results in an ATO penalty. Who is liable? The accountant, who relied on the AI output. But does their professional indemnity policy cover advice that was substantially generated by a third-party AI tool?

A real estate agency uses an AI chatbot to answer tenant enquiries. The chatbot provides incorrect information about a tenant's rights under the Residential Tenancies Act. The tenant acts on that information and suffers a financial loss. Is this covered under the agency's professional indemnity or public liability?

A marketing consultancy uses AI to generate ad copy that inadvertently makes a misleading claim under Australian Consumer Law. The ACCC investigates. Is the consultancy's PI policy going to cover the legal defence costs and any penalties?

The answer to all of these depends entirely on the specific policy wording, and most policies were not written with these scenarios in mind.

Your Existing Policies: What They Do and Do Not Cover

Professional Indemnity

Professional indemnity insurance covers you when your professional advice or service causes a client financial loss. It typically covers negligent acts, errors, and omissions. If you use AI to assist in delivering your professional service and the AI makes an error, your PI policy may cover the claim, but only if you can demonstrate you exercised reasonable care in using and verifying the AI output.

The risk is that an insurer argues you were negligent in relying on AI without adequate verification. AI hallucinations are well documented, and a reasonable professional should know that AI outputs require checking. If you treated AI output as verified fact and passed it to a client, the insurer has grounds to question whether you met your professional standard of care.

Public Liability

Public liability covers physical injury and property damage caused by your business activities. AI creates new scenarios here: an AI-controlled system that causes physical harm, an AI recommendation that leads to unsafe conditions, or an AI-managed access system that fails. Most public liability policies do not explicitly address AI-mediated harm, which creates ambiguity in how claims would be assessed.

Cyber Insurance

Cyber insurance covers data breaches, ransomware, and business interruption from cyber incidents. If AI is involved in a data breach, either because the AI tool was compromised or because an employee inadvertently fed sensitive data into an AI platform, your cyber policy may cover the response costs. But feeding customer data into ChatGPT and having it appear in someone else's conversation is a scenario most cyber policies were not designed to address.

Management Liability / Directors and Officers

Directors have governance responsibilities around AI. If a company deploys AI without adequate risk assessment, policies, or oversight, and it causes harm, directors may face personal liability claims. D&O insurance may cover these claims, but increasingly insurers are asking about AI governance practices during the underwriting process.

Real Cases Where AI Insurance Gaps Appeared

Deloitte and the hallucinated references. Deloitte Australia had to partially refund $290,000 to the Australian Government after an AI-generated report contained fabricated academic references. As a major consulting firm, Deloitte's insurance likely covered this. But for a small consultancy facing a similar claim, the outcome could be very different.

The Victorian solicitor. A Victorian solicitor was disciplined by the legal profession regulator after submitting court documents containing AI-hallucinated case citations. The solicitor's professional indemnity insurance covered the legal costs, but the reputational damage and regulatory consequences were not insurable.

AI chatbot policy inventions. Multiple businesses globally have faced claims after AI chatbots promised discounts, refunds, or policy provisions that did not exist. In one Canadian case, an airline was held to honour a refund policy that its chatbot fabricated. These claims fall awkwardly between professional indemnity and general liability.

Discriminatory AI decisions. AI systems that discriminate against protected groups create liability under anti-discrimination legislation. If your AI hiring tool, pricing algorithm, or customer service system treats people differently based on protected characteristics, you face both regulatory penalties and civil claims. Most general liability policies do not explicitly cover discrimination by AI systems.

What You Should Do Right Now

1. Audit Your AI Use

List every way your business uses AI. Include tools your staff use independently, not just official company systems. Consider AI in customer communications, data processing, content creation, decision support, and any automated systems. You cannot assess insurance coverage for risks you have not identified.

2. Review Your Current Policies

Read the exclusions section of every policy. Look for language about technology errors, automated systems, software failures, and third-party tools. Some policies already contain exclusions for losses arising from automated decision-making or reliance on computer-generated outputs. If you find these exclusions, raise them with your broker immediately.

3. Talk to Your Broker

Tell your insurance broker exactly how you are using AI. Ask specifically whether your current professional indemnity covers errors in AI-assisted professional work, whether your public liability covers harm caused by AI systems, whether your cyber policy covers data exposure through AI tools, and whether any existing exclusions could apply to AI-related claims.

4. Document Your AI Governance

Insurers are increasingly looking at AI governance practices during underwriting. Having a documented AI compliance framework shows you take the risks seriously. This includes an AI usage policy, data handling procedures, human review requirements for AI outputs, and incident response plans for AI failures.

5. Consider Additional Coverage

Depending on your AI use, you may need to add specific endorsements to existing policies or take out additional coverage. Technology errors and omissions insurance is one option for businesses that develop or deploy AI systems. For businesses using AI in client services, a specific rider on your PI policy addressing AI-assisted work may be appropriate. Your broker can advise on the most cost-effective approach.

The Emerging AI Insurance Market

Dedicated AI insurance products are beginning to appear globally. These policies specifically address AI-related liabilities including algorithmic errors, biased outputs, data processing failures, and autonomous decision-making consequences. In Australia, Lockton has been at the forefront of advising on AI risk, and several specialty insurers are developing products for the local market.

For most Australian SMEs in 2026, a dedicated AI policy is not yet necessary or available. The practical approach is to ensure your existing coverage addresses the specific AI risks your business faces. This means having an honest conversation with your broker and potentially adding endorsements to your current policies.

What is not acceptable is ignoring the question entirely. AI is changing how your business operates, and your insurance needs to keep pace with those changes. The cost of a broker conversation is negligible compared to discovering a coverage gap when a claim arrives.

Reducing Your Risk Before Insurance

Insurance is the last line of defence, not the first. The most effective risk reduction comes from good AI practices. Always have a human review AI outputs before they reach clients. Document your verification processes. Train staff on the limitations of AI tools. Maintain records of when and how AI was used in client work.

These practices not only reduce the likelihood of an AI-related claim but also strengthen your position if a claim does arise. Being able to demonstrate that you had reasonable processes in place for AI oversight makes it much harder for an insurer to argue negligence.
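As an illustration only (not a requirement from any insurer or regulator), record-keeping like this can be as simple as a shared register capturing what tool was used, for which client matter, and who verified the output. A minimal sketch in Python, with all field names and the file location being hypothetical choices:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical register location and fields; adapt to your own practice.
LOG_FILE = Path("ai_usage_log.csv")
FIELDS = ["timestamp", "tool", "task", "client_matter", "reviewer", "review_outcome"]

def log_ai_use(tool: str, task: str, client_matter: str,
               reviewer: str, review_outcome: str) -> None:
    """Append one record of AI-assisted work, including who verified it and how."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "task": task,
            "client_matter": client_matter,
            "reviewer": reviewer,
            "review_outcome": review_outcome,
        })

# Example entry: a human reviewer corrected the AI draft before it went out.
log_ai_use("ChatGPT", "draft tax summary", "MATTER-1042",
           "J. Smith", "corrected two figures before sending")
```

The point is not the tooling but the habit: a dated, attributable trail showing human review happened is exactly the evidence that counters a negligence argument.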

Not Sure Where Your AI Risks Are?

Our Free AI Audit helps identify where you are using AI, where the gaps are, and what governance you need.

Frequently Asked Questions

Does my professional indemnity insurance cover AI mistakes?

It depends on the policy wording and how AI was used. Most professional indemnity policies cover negligent acts, errors, or omissions in the provision of professional services. If AI produced incorrect advice that you then delivered to a client without adequate review, the insurer may argue that relying on unverified AI output constitutes negligence on your part. Some insurers are already adding AI-specific exclusions or requiring disclosure of AI use in professional services. Check your policy wording carefully and ask your broker about AI-related scenarios.

Can I buy dedicated AI insurance in Australia?

Dedicated AI insurance products are emerging but not yet widely available in Australia as of early 2026. Lockton, Marsh, and several specialty insurers are developing AI-specific coverage that addresses gaps in traditional policies. For most SMEs, the current approach is to review existing professional indemnity, public liability, and cyber insurance policies for AI-related gaps, then work with a broker to add endorsements or riders that specifically address AI use. Purpose-built AI liability products are expected to become mainstream within the next 12 to 18 months.

Who is liable if AI gives my client incorrect information?

If AI-generated information causes a client financial loss, you remain liable under Australian Consumer Law and potentially under professional negligence. The AI vendor's terms of service almost always exclude liability for the accuracy of outputs. Your client's claim is against you, not against the AI provider. The legal position is clear: using AI does not transfer your duty of care. If you are a financial adviser, accountant, lawyer, or any professional providing advice, you must verify AI outputs before passing them to clients. Failure to do so is likely to be treated as a breach of your professional obligations.

Do I need to tell my insurer that I'm using AI?

Yes. Most insurance policies require you to disclose material changes to how you operate your business. Introducing AI into client-facing processes, decision-making, or professional service delivery is a material change. Failing to disclose AI use could give your insurer grounds to deny a claim. Contact your broker, explain how you are using AI, and ask them to confirm coverage. This conversation also helps identify gaps before they become problems. Some insurers may adjust premiums or add conditions, but non-disclosure is a far bigger risk than a premium increase.

FlowWorks Team
AI Automation & Consulting · Melbourne, Australia

Find out what's costing your business the most.

A 30-minute conversation. No pitch. No obligation. We'll identify your highest-impact automation opportunities before you spend a dollar.

Get your AI Readiness Review
1300 484 044 · ops@flowworks.com.au · 470 St Kilda Rd, Melbourne VIC 3004