Your business is using AI. Maybe it drafts client communications. Maybe it handles customer enquiries. Maybe it processes data that informs business decisions. The question nobody is asking: if the AI gets something wrong and a client suffers a loss, does your insurance actually cover it?
The honest answer for most Australian SMEs is: probably not entirely. Traditional business insurance policies were written before AI entered the workplace. Professional indemnity covers your professional errors. Public liability covers physical injury and property damage. Cyber insurance covers data breaches. But AI creates new categories of risk that sit in the gaps between these policies.
Lockton, one of the world's largest insurance brokerages, has already flagged AI misuse and data exposure as an emerging risk category requiring specific coverage. Other major brokers are following. This is not a theoretical concern. It is a practical insurance gap that Australian businesses need to address now.
Key figures: the partial refund Deloitte paid the Australian Government after AI hallucinated references; the number of AI vendors that accept liability for the accuracy of their outputs; the maximum penalties under the Australian Privacy Act for serious breaches.
Traditional insurance assumes a human made the decision or performed the work. When AI is involved, liability gets complicated. Consider these scenarios.
An accounting firm uses AI to prepare tax advice. The AI misinterprets a client's situation and the advice results in an ATO penalty. Who is liable? The accountant, who relied on the AI output. But does their professional indemnity policy cover advice that was substantially generated by a third-party AI tool?
A real estate agency uses an AI chatbot to answer tenant enquiries. The chatbot provides incorrect information about a tenant's rights under the Residential Tenancies Act. The tenant acts on that information and suffers a financial loss. Is this covered under the agency's professional indemnity or public liability?
A marketing consultancy uses AI to generate ad copy that inadvertently makes a misleading claim under Australian Consumer Law. The ACCC investigates. Will the consultancy's PI policy cover the legal defence costs and any penalties? The answer to all three scenarios depends entirely on the specific policy wording, and most policies were not written with these situations in mind.
Professional indemnity insurance covers you when your professional advice or service causes a client financial loss. It typically covers negligent acts, errors, and omissions. If you use AI to assist in delivering your professional service and the AI makes an error, your PI policy may cover the claim, but only if you can demonstrate you exercised reasonable care in using and verifying the AI output.
The risk is that insurers argue you were negligent in relying on AI without adequate verification. AI hallucinations are well documented, and a reasonable professional should know that AI outputs require checking. If you treated AI output as verified fact and passed it to a client, the insurer has grounds to question whether you met your professional standard of care.
Public liability covers physical injury and property damage caused by your business activities. AI creates new scenarios here: an AI-controlled system that causes physical harm, an AI recommendation that leads to unsafe conditions, or an AI-managed access system that fails. Most public liability policies do not explicitly address AI-mediated harm, which creates ambiguity in how claims would be assessed.
Cyber insurance covers data breaches, ransomware, and business interruption from cyber incidents. If AI is involved in a data breach, either because the AI tool was compromised or because an employee inadvertently fed sensitive data into an AI platform, your cyber policy may cover the response costs. But feeding customer data into ChatGPT and having it appear in someone else's conversation is a scenario most cyber policies were not designed to address.
Directors and officers have governance responsibilities around AI. If a company deploys AI without adequate risk assessment, policies, or oversight, and it causes harm, directors may face personal liability claims. Directors and officers (D&O) insurance may cover these claims, but insurers are increasingly asking about AI governance practices during the underwriting process.
Deloitte and the hallucinated references. Deloitte Australia had to partially refund $290,000 to the Australian Government after an AI-generated report contained fabricated academic references. As a major consulting firm, Deloitte likely had insurance that covered the loss. But for a small consultancy facing a similar claim, the outcome could be very different.
The Victorian solicitor. A Victorian solicitor was disciplined by the legal profession regulator after submitting court documents containing AI-hallucinated case citations. The solicitor's professional indemnity insurance covered the legal costs, but the reputational damage and regulatory consequences were not insurable.
AI chatbot policy inventions. Multiple businesses globally have faced claims after AI chatbots promised discounts, refunds, or policy provisions that did not exist. In one Canadian case, an airline was held to honour a refund policy that its chatbot fabricated. These claims fall awkwardly between professional indemnity and general liability.
Discriminatory AI decisions. AI systems that discriminate against protected groups create liability under anti-discrimination legislation. If your AI hiring tool, pricing algorithm, or customer service system treats people differently based on protected characteristics, you face both regulatory penalties and civil claims. Most general liability policies do not explicitly cover discrimination by AI systems.
List every way your business uses AI. Include tools your staff use independently, not just official company systems. Consider AI in customer communications, data processing, content creation, decision support, and any automated systems. You cannot assess insurance coverage for risks you have not identified.
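If it helps to make that inventory concrete, the sketch below shows one way a small business might keep it in a structured register. The fields, tool names, and example entries are illustrative assumptions only, not a prescribed format; a shared spreadsheet serves exactly the same purpose.

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class AIUsageRecord:
    """One row in a simple AI usage register (illustrative fields only)."""
    tool: str               # the AI tool or system in use
    business_function: str  # where it is used: client comms, enquiries, reporting
    data_handled: str       # what goes in: personal info, client financials, none
    human_review: bool      # is output checked by a person before it is relied on?
    owner: str              # who in the business is responsible for this use

# Hypothetical example entries; real tools and uses will differ per business.
register = [
    AIUsageRecord("Drafting assistant", "Client emails and letters",
                  "Client names and matter details", True, "Practice manager"),
    AIUsageRecord("Website chatbot", "Customer enquiries",
                  "Enquiry text from the public", False, "Marketing lead"),
]

# Write the register to a CSV so it can be handed to a broker or auditor.
with open("ai_usage_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(register[0]).keys())
    writer.writeheader()
    for record in register:
        writer.writerow(asdict(record))
```

The tooling does not matter; what matters is having a single, current list you can put in front of your broker.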
Read the exclusions section of every policy. Look for language about technology errors, automated systems, software failures, and third-party tools. Some policies already contain exclusions for losses arising from automated decision-making or reliance on computer-generated outputs. If you find these exclusions, raise them with your broker immediately.
Tell your insurance broker exactly how you are using AI. Ask specifically whether your current professional indemnity covers errors in AI-assisted professional work, whether your public liability covers harm caused by AI systems, whether your cyber policy covers data exposure through AI tools, and whether any existing exclusions could apply to AI-related claims.
Insurers are increasingly looking at AI governance practices during underwriting. Having a documented AI compliance framework shows you take the risks seriously. This includes an AI usage policy, data handling procedures, human review requirements for AI outputs, and incident response plans for AI failures.
Depending on your AI use, you may need to add specific endorsements to existing policies or take out additional coverage. Technology errors and omissions insurance is one option for businesses that develop or deploy AI systems. For businesses using AI in client services, a specific rider on your PI policy addressing AI-assisted work may be appropriate. Your broker can advise on the most cost-effective approach.
Dedicated AI insurance products are beginning to appear globally. These policies specifically address AI-related liabilities including algorithmic errors, biased outputs, data processing failures, and autonomous decision-making consequences. In Australia, Lockton has been at the forefront of advising on AI risk, and several specialty insurers are developing products for the local market.
For most Australian SMEs in 2026, a dedicated AI policy is not yet necessary or available. The practical approach is to ensure your existing coverage addresses the specific AI risks your business faces. This means having an honest conversation with your broker and potentially adding endorsements to your current policies.
What is not acceptable is ignoring the question entirely. AI is changing how your business operates, and your insurance needs to keep pace with those changes. The cost of a broker conversation is negligible compared to discovering a coverage gap when a claim arrives.
Insurance is the last line of defence, not the first. The most effective risk reduction comes from good AI practices. Always have a human review AI outputs before they reach clients. Document your verification processes. Train staff on the limitations of AI tools. Maintain records of when and how AI was used in client work.
These practices not only reduce the likelihood of an AI-related claim but also strengthen your position if a claim does arise. Being able to demonstrate that you had reasonable processes in place for AI oversight makes it much harder for an insurer to argue negligence.
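For the record-keeping side, even a lightweight audit trail goes a long way. The sketch below, with hypothetical file names and fields, shows the kind of entry worth capturing each time AI contributes to client work: which tool was used, for what, who verified the output, and when.

```python
import json
from datetime import datetime, timezone

def log_ai_assisted_work(tool: str, task: str, reviewer: str, verified: bool,
                         notes: str = "", path: str = "ai_work_log.jsonl") -> None:
    """Append one record of AI-assisted client work to a JSON Lines log file.

    The file name and fields are illustrative; the substance is the same
    whether the record lives in a practice management system or a spreadsheet.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # which AI tool produced the draft or analysis
        "task": task,          # what it was used for
        "reviewer": reviewer,  # the person who checked the output
        "verified": verified,  # whether the output was verified before use
        "notes": notes,        # e.g. corrections made, sources checked
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record that a drafted advice letter was reviewed before sending.
log_ai_assisted_work(
    tool="Drafting assistant",
    task="First draft of client advice letter",
    reviewer="J. Smith",
    verified=True,
    notes="Figures re-checked against client file before sending",
)
```

A log like this is exactly the kind of evidence that supports a claim, and it costs minutes per matter to maintain.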
Our Free AI Audit helps identify where you are using AI, where the gaps are, and what governance you need.