In October 2025, Deloitte Australia had to partially refund the Australian Government $290,000 after an AI-generated report contained fabricated academic references. The report looked professional. The citations appeared real. But when the government checked, several of the sources did not exist. The AI had invented them.
Around the same time, a Victorian solicitor was disciplined by the Legal Services Commissioner after submitting court documents that included AI-generated case citations. The cases were fake. The solicitor had trusted ChatGPT to produce accurate legal references and submitted them without verification.
These are not cautionary tales from overseas. They happened in Australia, to established professionals and a Big Four consulting firm. If Deloitte and a practising lawyer can be caught out, your business can too. The question is not whether AI will make mistakes. It will. The question is whether you are prepared for when it does.
An AI hallucination occurs when an AI system generates output that is confident, coherent, and completely wrong. The term “hallucination” is misleading because it suggests a malfunction, as if the AI were seeing things that are not there. In reality, hallucination is a by-product of how large language models are designed to work.
AI models like ChatGPT, Gemini, and Claude do not retrieve facts from a database. They predict the most probable next word based on patterns in their training data. When the model encounters a question it does not have a clear answer for, it does not say “I do not know.” It generates the most statistically likely response, which may be entirely fabricated.
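The mechanics can be shown with a toy sketch. This is an illustration only, not how any real model is implemented, and the probability table is invented for the example. The point is that generation always picks a continuation; there is no built-in “I do not know” branch.

```python
# Toy illustration (invented numbers): a model completing the sentence
# "The report was published in ...". It emits whichever continuation has
# the highest probability under its learned patterns, with no mechanism
# for declining to answer when those patterns are wrong.
next_word_probs = {"1969": 0.40, "1971": 0.35, "never": 0.25}

answer = max(next_word_probs, key=next_word_probs.get)
print(answer)  # the single most likely token, stated with full confidence
```

Whether “1969” is true plays no part in the selection; the output looks identical either way, which is why hallucinated facts arrive with the same fluency as verified ones.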
CNBC described this as “silent failure at scale” in their March 2026 investigation. The failures are silent because the AI does not flag its own uncertainty. It presents invented facts with the same confidence as verified ones. At scale, this means businesses using AI across multiple processes are generating errors they may never detect.
Common types of hallucination include: fabricated statistics and percentages, invented academic papers and journal articles, fake legal case citations, made-up company names or products, and incorrect technical specifications. The more specialised the topic, the more likely the AI is to hallucinate, because there is less training data to draw accurate patterns from.
Under Australian law, you are responsible for the accuracy of information you publish, provide to clients, or rely upon in business decisions. This applies regardless of whether the information was generated by a human employee or an AI system.
If you are a consultant, accountant, lawyer, or other professional who provides advice based on AI-generated research, and that research contains hallucinated facts, you face the same professional liability as if you had fabricated the information yourself. The Victorian solicitor case proves this is not theoretical. “The AI told me” is not a defence.
If AI-generated marketing content makes false claims about your products or services, you are liable under the Australian Consumer Law for misleading or deceptive conduct. The ACCC does not care whether a human or an AI wrote the claim. If it is misleading, it is a breach.
If an AI-generated proposal or contract includes incorrect terms, specifications, or commitments, and a client accepts those terms, you may be contractually bound to deliver what the AI promised. This is why AI-generated client-facing documents must always be reviewed by someone with authority to commit your business.
If your business uses AI to make decisions that affect people (hiring, lending, service provision) and those decisions are based on hallucinated data, you face potential negligence claims. The upcoming Privacy Act amendments (December 2026) will add specific transparency requirements for automated decision-making, making this risk even more tangible.
$290,000 — the partial refund Deloitte Australia made to the government after AI fabricated academic references in a report

Banned from unsupervised practice — the outcome for the Victorian solicitor who submitted AI-hallucinated case citations to court

Many customers still prefer human support over AI, partly because of trust concerns around accuracy
For an SME, a $290,000 refund could be business-ending. But the reputational damage may be worse than the financial cost. Once a client discovers you provided information generated by AI that turned out to be false, the trust is extremely difficult to rebuild.
The answer is not to avoid AI. The answer is to use it properly. Here is the framework we use with clients.
Use AI for structure, not facts. AI is excellent at creating outlines, drafting frameworks, organising ideas, and producing first drafts. It is unreliable for factual claims, statistics, and citations. Use it to build the skeleton of a document, then fill in verified facts yourself.
Verify everything client-facing. Any document, email, proposal, or report that goes to a client must be reviewed by a human before sending. This is not optional. It is your professional obligation. Build this review step into your workflow so it happens automatically, not as an afterthought.
Never trust AI citations. If an AI produces a statistic, case study, or reference, check the source. Copy the citation into Google. If the paper does not exist, the AI made it up. This takes 30 seconds per citation and prevents the exact mistake that caught out Deloitte.
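For teams that handle AI-drafted documents in volume, the first pass of that check can be scripted. A minimal sketch follows; the helper name and sample citations are ours, not from any library, and anything the script flags still needs the manual 30-second check before the document ships.

```python
def flag_unverified(ai_citations, verified_sources):
    """Return AI-supplied citations that do not appear in a
    human-verified reference list (illustrative helper only)."""
    # Normalise whitespace and case so trivial differences don't
    # cause false alarms.
    verified = {c.strip().lower() for c in verified_sources}
    return [c for c in ai_citations if c.strip().lower() not in verified]

# Hypothetical example data: two citations from an AI draft, one of
# which a human has already verified against the original source.
ai_cites = ["Smith (2021), J. Fin. Studies", "Jones (2019), Aust. Tax Review"]
checked = ["smith (2021), j. fin. studies"]

print(flag_unverified(ai_cites, checked))  # only the unchecked Jones citation
```

The script cannot tell you a citation is real; it can only tell you which ones nobody has verified yet, which is exactly the list a human reviewer should work through.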
Match AI tasks to AI strengths. AI excels at tasks where perfect accuracy is not critical: brainstorming, first drafts, email tone adjustments, data summarisation, and creative content. It struggles where precision matters: legal citations, financial calculations, medical information, and technical specifications. Assign tasks accordingly.
Build a review culture, not a blame culture. When (not if) an AI output contains an error, the question should be “how did this get past review?” not “who used AI?” If your team is afraid to admit they used AI, they will use it secretly and skip the verification step. That is where the real danger lies.
A proper AI usage policy should specifically address hallucination risk. At minimum, it should cover which tasks AI may and may not be used for, a mandatory human review step for anything client-facing, a requirement to verify every citation and statistic against its source, and a clear expectation that staff disclose when they have used AI so that review actually happens.
The goal is not to prevent AI use. It is to prevent unverified AI output from reaching places where it can cause harm. Our AI governance service helps businesses build these safeguards so they can use AI confidently without the liability risk.
Take our Free AI Audit to assess your current AI risk profile. You will get specific recommendations on where AI is safe to use in your business and where you need safeguards.