Compliance · March 2026 · 11 min read

When AI Gets It Wrong: Hallucinations and Business Liability

Error warning computer screen. Photo by Benjamin Farren on Pexels

In October 2025, Deloitte Australia repaid the Australian Government $290,000, a partial refund of its contract fee, after an AI-generated report contained fabricated academic references. The report looked professional. The citations appeared real. But when the government checked, several of the sources did not exist. The AI had invented them.

Around the same time, a Victorian solicitor was disciplined by the legal services commissioner after submitting court documents that included AI-generated case citations. The cases were fake. The solicitor had trusted ChatGPT to produce accurate legal references and submitted them without verification.

These are not cautionary tales from overseas. They happened in Australia, to established professionals and a Big Four consulting firm. If Deloitte and a practising lawyer can be caught out, your business can too. The question is not whether AI will make mistakes. It will. The question is whether you are prepared for when it does.

What Are AI Hallucinations and Why Do They Happen?

An AI hallucination is output from an AI system that is confident, coherent, and completely wrong. The term “hallucination” is misleading because it suggests the AI is seeing things that are not there. In reality, hallucination is not a malfunction; it is a direct consequence of how large language models are designed to work.

AI models like ChatGPT, Gemini, and Claude do not retrieve facts from a database. They predict the most probable next word based on patterns in their training data. When the model encounters a question it does not have a clear answer for, it does not say “I do not know.” It generates the most statistically likely response, which may be entirely fabricated.
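To make that concrete, here is a toy sketch in Python. It is not a real model, and the candidate answers and probabilities are invented purely for illustration. It shows why a system built to pick the most probable continuation will produce a confident answer even when it has no correct answer to give.

```python
# Toy illustration only: a language model scores possible continuations and
# emits the most probable one. The candidate answers and probabilities below
# are invented purely for this example.
next_token_probs = {
    "Smith v Jones (a plausible-sounding case)": 0.41,
    "Brown v Keating (another plausible-sounding case)": 0.35,
    "I do not know": 0.02,  # rarely the highest-scoring continuation
}

def generate(probabilities: dict) -> str:
    # Greedy decoding: return whichever continuation scores highest,
    # with no check that it corresponds to anything real.
    return max(probabilities, key=probabilities.get)

print(generate(next_token_probs))  # prints a confident answer, true or not
```

The point is not the code itself: it is that "I do not know" is almost never the statistically likely response, so the model gives you something else instead.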

CNBC described this as “silent failure at scale” in their March 2026 investigation. The failures are silent because the AI does not flag its own uncertainty. It presents invented facts with the same confidence as verified ones. At scale, this means businesses using AI across multiple processes are generating errors they may never detect.

Common types of hallucination include: fabricated statistics and percentages, invented academic papers and journal articles, fake legal case citations, made-up company names or products, and incorrect technical specifications. The more specialised the topic, the more likely the AI is to hallucinate, because there is less training data to draw accurate patterns from.

The Legal Risk for Australian Businesses

Under Australian law, you are responsible for the accuracy of information you publish, provide to clients, or rely upon in business decisions. This applies regardless of whether the information was generated by a human employee or an AI system.

Professional liability

If you are a consultant, accountant, lawyer, or other professional who provides advice based on AI-generated research, and that research contains hallucinated facts, you face the same professional liability as if you had fabricated the information yourself. The Victorian solicitor case proves this is not theoretical. “The AI told me” is not a defence.

Australian Consumer Law

If AI-generated marketing content makes false claims about your products or services, you are liable under the Australian Consumer Law for misleading or deceptive conduct. The ACCC does not care whether a human or an AI wrote the claim. If it is misleading, it is a breach.

Contractual liability

If an AI-generated proposal or contract includes incorrect terms, specifications, or commitments, and a client accepts those terms, you may be contractually bound to deliver what the AI promised. This is why AI-generated client-facing documents must always be reviewed by someone with authority to commit your business.

Negligence

If your business uses AI to make decisions that affect people (hiring, lending, service provision) and those decisions are based on hallucinated data, you face potential negligence claims. The upcoming Privacy Act amendments (December 2026) will add specific transparency requirements for automated decision-making, making this risk even more tangible.

The Real-World Cost

  • $290,000: the partial refund Deloitte Australia made to the government after AI fabricated academic references in a report
  • Unsupervised practice ban: the Victorian solicitor who submitted AI-hallucinated case citations to court can no longer practise without supervision
  • 49%: the share of customers who still prefer human support over AI, partly because of trust concerns around accuracy

For an SME, a $290,000 refund could be business-ending. But the reputational damage may be worse than the financial cost. Once a client discovers you provided information generated by AI that turned out to be false, the trust is extremely difficult to rebuild.

How to Use AI Without Getting Burned

The answer is not to avoid AI. The answer is to use it properly. Here is the framework we use with clients.

Use AI for structure, not facts. AI is excellent at creating outlines, drafting frameworks, organising ideas, and producing first drafts. It is unreliable for factual claims, statistics, and citations. Use it to build the skeleton of a document, then fill in verified facts yourself.

Verify everything client-facing. Any document, email, proposal, or report that goes to a client must be reviewed by a human before sending. This is not optional. It is your professional obligation. Build this review step into your workflow so it happens automatically, not as an afterthought.

Never trust AI citations. If an AI produces a statistic, case study, or reference, check the source. Copy the citation into Google or Google Scholar. If you cannot find the paper, assume the AI made it up. This takes 30 seconds per citation and prevents the exact mistake that caught out Deloitte.
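For teams that want to semi-automate that check, here is a minimal sketch that looks up a cited academic title against Crossref's public search API (api.crossref.org). It is an illustration, not a complete verifier: the example title is hypothetical, the matching is deliberately crude, and a human should still confirm anything that matters.

```python
# Minimal sketch: query Crossref's public API for a cited title and see
# whether anything closely matching comes back. A miss is a strong signal
# the citation needs human checking; a hit is not proof the claim is right.
import requests

def citation_found(cited_title: str) -> bool:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    cited_words = set(cited_title.lower().split())
    for item in items:
        title = (item.get("title") or [""])[0].lower()
        # Crude overlap check: most of the cited title's words appear in a real title.
        if cited_words and len(cited_words & set(title.split())) >= 0.8 * len(cited_words):
            return True
    return False

# Hypothetical example title, not a real reference.
print(citation_found("Machine learning adoption in Australian small business compliance"))
```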

Match AI tasks to AI strengths. AI excels at tasks where perfect accuracy is not critical: brainstorming, first drafts, email tone adjustments, data summarisation, and creative content. It struggles where precision matters: legal citations, financial calculations, medical information, and technical specifications. Assign tasks accordingly.

Build a review culture, not a blame culture. When (not if) an AI output contains an error, the question should be “how did this get past review?” not “who used AI?” If your team is afraid to admit they used AI, they will use it secretly and skip the verification step. That is where the real danger lies.

Building Hallucination Safeguards Into Your Business

A proper AI usage policy should specifically address hallucination risk. At minimum, it should cover the points below (a simple way to encode them is sketched after the list):

  • Which tasks AI can be used for without review (internal brainstorming, personal research)
  • Which tasks require human review before output leaves the business (anything client-facing)
  • Which tasks AI should not be used for at all (legal citations, financial advice, medical recommendations)
  • A mandatory citation verification step for any AI-generated research
  • An incident reporting process for when hallucinations are detected after the fact
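Here is a minimal sketch of how those rules could be written down in a machine-checkable form, for example inside an internal tool or a pre-send checklist. The task names and categories are illustrative assumptions, not a standard.

```python
# Minimal sketch: encode the usage policy as data so the rule for a given
# task can be looked up consistently. Task names are illustrative only.
POLICY = {
    "internal_brainstorming":  "no_review_required",
    "personal_research":       "no_review_required",
    "client_proposal":         "human_review_required",
    "marketing_copy":          "human_review_required",
    "legal_citations":         "ai_use_prohibited",
    "financial_advice":        "ai_use_prohibited",
    "medical_recommendations": "ai_use_prohibited",
}

def review_requirement(task: str) -> str:
    # Default to requiring human review when a task is not explicitly listed.
    return POLICY.get(task, "human_review_required")

print(review_requirement("client_proposal"))    # human_review_required
print(review_requirement("new_unlisted_task"))  # human_review_required (safe default)
```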

The goal is not to prevent AI use. It is to prevent unverified AI output from reaching places where it can cause harm. Our AI governance service helps businesses build these safeguards so they can use AI confidently without the liability risk.

Using AI? Make Sure You Are Protected.

Take our Free AI Audit to assess your current AI risk profile. You will get specific recommendations on where AI is safe to use in your business and where you need safeguards.

Frequently Asked Questions

What is an AI hallucination?

An AI hallucination is when an AI system generates content that sounds plausible but is factually wrong. This includes inventing statistics, fabricating case studies, citing academic papers that do not exist, or creating fake legal precedents. It happens because AI predicts likely-sounding text rather than verified facts.

Is my business liable for errors in AI-generated content?

Yes. Under Australian law, you are responsible for the accuracy of information you publish or provide to clients, regardless of whether a human or an AI generated it. If an AI-generated report, proposal, or communication contains false information and a client relies on it to their detriment, your business faces the same liability as if a human employee made the error.

How can I reduce the risk of AI hallucinations?

You cannot eliminate them entirely, but you can manage the risk. Always have a human review AI outputs before they reach clients. Use AI for drafting, not for final output. Cross-check any statistics, citations, or factual claims. Implement confidence thresholds that flag uncertain outputs for review. And never use AI for tasks where accuracy is critical without a verification step.

Do some AI tools hallucinate less than others?

All large language models hallucinate to some degree. However, newer models (GPT-4o, Claude 3.5, Gemini Pro) hallucinate less frequently than older ones. Tools with web access can verify facts in real time, reducing (but not eliminating) hallucination risk. The most reliable approach is human verification regardless of which tool you use.

FlowWorks Team
AI Automation & Consulting · Melbourne, Australia

Find out what's costing your business the most.

A 30-minute conversation. No pitch. No obligation. We'll identify your highest-impact automation opportunities before you spend a dollar.

Get your AI Readiness Review
1300 484 044 · ops@flowworks.com.au · 470 St Kilda Rd, Melbourne VIC 3004