Compliance · March 2026 · 13 min read

When AI Makes a Bad Decision: Who Is Liable in Australia?


Your AI chatbot tells a customer they qualify for a refund they do not qualify for. Your AI hiring tool screens out qualified candidates based on patterns that amount to age discrimination. Your AI financial analysis produces a recommendation that costs a client money because the underlying data was wrong. In each case, someone suffers a loss because of an AI decision. The question every business needs to answer: who pays?

In Australia, the short answer is you do. The business that deploys the AI. Not the AI vendor (usually). Not the model developer. Not the cloud provider. You. Because under existing Australian law, using AI does not transfer your legal obligations. It just changes how you fulfil them. And if you fulfil them badly, the liability sits with your business regardless of whether a human or an algorithm made the call.

This is not theoretical. Deloitte had to refund $290,000 after AI-generated content in a government report contained fabricated references. A Victorian solicitor faced disciplinary action for submitting AI-hallucinated case citations. Air Canada was held liable for its chatbot's incorrect fare information. The precedents are being set, and they all point in the same direction: the business is responsible for what its AI does.

The Responsibility Gap

$290K: Deloitte refund for AI-hallucinated references in a government report
Zero: AI-specific liability laws in Australia as of 2026
$50M: maximum Privacy Act penalty for serious breaches involving AI decisions

When a human employee makes a mistake, the liability chain is clear. The employee acted, the business is vicariously liable, and the question is whether reasonable care was taken. When AI makes a mistake, the chain fractures. Multiple parties are involved: the business that deployed the AI, the vendor that sold the tool, the developer who built the model, and whoever supplied the training data that shaped its behaviour. This is the responsibility gap.

Australian law has not yet created AI-specific liability rules. Instead, courts are applying existing legal frameworks to AI situations. Understanding how each framework applies is essential for managing your risk.

How Existing Law Applies to AI Decisions

Australian Consumer Law

Under the Australian Consumer Law, any representation made by your business to a consumer must be accurate. This includes representations made by your AI systems. If your chatbot states a refund policy, that is your business making a representation. If your AI pricing tool advertises a price, that is your business making an offer. The ACCC has made clear that businesses cannot hide behind "the AI said it, not us." Your AI is your agent, and its communications are your communications.

Negligence

To establish negligence, a claimant must show that you owed them a duty of care, that you breached that duty, and that the breach caused their loss. Deploying AI without adequate testing, monitoring, or human oversight may constitute a breach of the duty of care. If a reasonable business in your position would have had human review of AI outputs before they reached customers, and you did not, that is potentially negligent. The standard is not perfection. It is reasonableness. And what counts as reasonable is tightening as AI becomes more common and the risks become better understood.

Product Liability

If your business sells a product that includes AI (a software product with AI features, for example), product liability law may apply. Under the Competition and Consumer Act, a product must be fit for purpose and of acceptable quality. An AI product that consistently produces harmful outputs may fail both tests. This is more relevant for businesses that develop and sell AI products than for those that use AI tools internally, but the distinction blurs when AI-generated content or decisions are delivered to clients as part of a service.

Privacy Act

The amended Privacy Act creates specific obligations around automated decision-making. From December 2026, businesses must disclose when AI makes decisions that substantially affect individuals, explain the logic behind those decisions, and provide a pathway for human review. Non-compliance exposes you to penalties up to $50 million for serious breaches.
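What does capturing those three obligations look like day to day? Below is a minimal sketch of a per-decision record; the structure and field names are illustrative assumptions, not terms drawn from the Act, and you should map them to your own systems with legal advice.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    """One record per AI decision that substantially affects an individual:
    what was decided, the logic you can explain, and the human review path."""
    decision_id: str
    made_at: datetime
    system_name: str           # which AI tool produced the decision
    decision_summary: str      # what was decided, in plain language
    logic_summary: str         # the explanation you would give the individual
    inputs_used: list[str]     # categories of personal information relied on
    disclosed_to_individual: bool
    human_review_contact: str  # the pathway for requesting human review

# Hypothetical example entry for an AI-assisted credit decision.
record = AutomatedDecisionRecord(
    decision_id="credit-2026-0142",
    made_at=datetime.now(timezone.utc),
    system_name="loan-screening-v3",
    decision_summary="Application declined",
    logic_summary="Debt-to-income ratio above the approval threshold",
    inputs_used=["income", "existing liabilities", "repayment history"],
    disclosed_to_individual=True,
    human_review_contact="reviews@yourbusiness.example",
)
```

If you cannot populate the logic summary for a given decision, that is itself a signal: a system whose reasoning you cannot explain should not be making that decision unsupervised.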

Anti-Discrimination Law

If your AI makes decisions that disproportionately disadvantage people based on protected characteristics (age, gender, race, disability), you are liable under the relevant anti-discrimination Acts regardless of whether the discrimination was intentional. AI discrimination is your discrimination. The algorithm is not a separate legal entity with its own liability. It is a tool you chose to use, and its biases are your responsibility to manage.

The ACCC Position

The Australian Competition and Consumer Commission has been clear: existing consumer protection laws apply to AI. Businesses cannot use AI as a shield against accountability. The ACCC is particularly focused on misleading conduct through AI-generated content, unfair contract terms related to AI services, and safety issues arising from AI products. Their position is that the consumer's relationship is with the business, not with the technology the business uses. If the technology causes harm, the business is accountable.

For practical purposes, this means every AI-generated communication, recommendation, or decision that reaches a customer must be treated with the same care as if a human employee produced it. The compliance standard does not lower just because the work was automated.

International Precedents Worth Watching

The Air Canada chatbot case. The company's chatbot told a customer he could claim a bereavement fare discount after booking. Air Canada argued it was not responsible for the chatbot's answer. The tribunal rejected that argument, held the airline accountable for the information its chatbot provided, and ordered it to honour the discount. The case is now the standard citation for the principle that businesses are bound by their AI's representations.

EU AI Act. The European Union has implemented the world's first comprehensive AI regulation, classifying AI systems by risk level and imposing obligations accordingly. High-risk AI (used in hiring, credit decisions, healthcare) faces the strictest requirements. While not directly applicable in Australia, the EU framework influences the direction of Australian regulatory thinking.

US FTC enforcement actions. The US Federal Trade Commission has taken action against companies whose AI tools caused consumer harm, focusing on deceptive practices, discrimination, and unfair data use. These actions, while in a different jurisdiction, signal the global trend toward holding businesses accountable for AI outcomes.

Practical Risk Management for SMEs

1. Human oversight on high-stakes decisions. Any AI decision that affects a customer's money, access, rights, or wellbeing must have human review before it is actioned. This is non-negotiable. Automated customer service responses, hiring decisions, pricing changes, and financial recommendations all fall into this category. A minimal sketch of what a review gate can look like appears after this list.

2. Document your AI governance. Record what AI tools you use, what decisions they influence, what data they access, what testing you conducted before deployment, and what monitoring processes are in place. If a dispute arises, documented governance demonstrates that you took reasonable care. Undocumented AI usage looks like negligence.

3. Review your insurance. Talk to your broker about whether your professional indemnity, public liability, and cyber insurance policies cover AI-related incidents. Many policies written before AI have gaps. Identify them before you need to make a claim.

4. Disclose AI use transparently. Update your terms of service, privacy policy, and customer communications to disclose where AI is used. Transparency reduces both legal risk and reputational risk. Customers who know AI is involved can make informed decisions.

5. Test for bias and errors regularly. Audit your AI outputs for patterns that suggest discrimination, inaccuracy, or quality decline. Silent failures are a liability risk because you cannot demonstrate reasonable care if you were not checking whether the AI was working correctly. A simple disparity check, sketched below, is one place to start.
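To make steps 1 and 2 concrete, here is a minimal sketch of a review gate: high-stakes AI outputs are queued for a person before they reach a customer, and every decision is written to an append-only log you can point to later. All names, categories, and the confidence threshold are illustrative assumptions, not a definitive implementation.

```python
import json
from datetime import datetime, timezone

# Illustrative categories; tailor these to the decisions your AI actually makes.
HIGH_STAKES_CATEGORIES = {"refund", "hiring", "pricing", "financial_advice"}

def requires_human_review(category: str, confidence: float) -> bool:
    """Gate rule: anything high-stakes, or anything the model is unsure
    about, goes to a person before it is actioned (step 1)."""
    return category in HIGH_STAKES_CATEGORIES or confidence < 0.9

def log_decision(entry: dict, path: str = "ai_decision_log.jsonl") -> None:
    """Append-only record of inputs, outputs, and review status (step 2:
    documented governance you can produce if a dispute arises)."""
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def handle_ai_output(category: str, ai_output: str, confidence: float) -> str:
    if requires_human_review(category, confidence):
        status = "queued_for_human_review"  # a person signs off before send
    else:
        status = "auto_approved"
    log_decision({
        "category": category,
        "ai_output": ai_output,
        "model_confidence": confidence,
        "status": status,
    })
    return status

# A chatbot-drafted refund answer never reaches the customer without
# sign-off, no matter how confident the model is.
print(handle_ai_output("refund", "You are eligible for a full refund.", 0.97))
```

The design choice worth copying is that the gate keys off the category of decision, not just model confidence: a confident wrong answer about a refund is exactly the failure mode the Air Canada case illustrates.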
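For step 5, one widely used screening heuristic for indirect discrimination is the four-fifths rule: if any group's selection rate falls below 80 per cent of the highest group's rate, the disparity warrants investigation. The sketch below applies it to hypothetical outcomes from an AI resume screen; the data and threshold are illustrative, and passing this check is a starting point, not legal clearance.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """Flag groups whose selection rate is below 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < 0.8 * best]

# Hypothetical monthly audit of an AI resume screen.
outcomes = {
    "under_40": (45, 100),    # 45% advanced to interview
    "40_and_over": (24, 80),  # 30% advanced to interview
}
flagged = four_fifths_check(outcomes)
if flagged:
    print(f"Investigate possible adverse impact for: {flagged}")
```

Run a check like this on a schedule and keep the results. The audit trail itself is evidence of reasonable care; its absence is evidence of the opposite.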

The Bottom Line

There is no AI exemption in Australian law. When AI makes a decision that causes harm, your business bears the liability under consumer law, privacy law, anti-discrimination law, and negligence principles. The responsibility gap is real, but the practical answer is clear: if you deploy it, you own the outcomes. Human oversight, documented governance, appropriate insurance, and regular monitoring are not optional extras. They are the baseline for any business using AI in ways that affect other people.

Want to Deploy AI Responsibly?

Our Free AI Audit identifies automation opportunities while flagging compliance and governance considerations.

Frequently Asked Questions

Who is liable when an AI system makes a bad decision in Australia?

Under current Australian law, the business that deploys the AI is primarily liable. You cannot outsource legal responsibility to an algorithm. If your AI chatbot gives wrong advice, your AI hiring tool discriminates, or your AI system makes an error that causes financial loss to a customer, your business bears the liability. The AI vendor may share some responsibility depending on contract terms, but the customer-facing entity is the first point of accountability. This applies under Australian Consumer Law, the Privacy Act, anti-discrimination legislation, and general negligence principles.

What is the responsibility gap?

The responsibility gap refers to the disconnect between who makes a decision and who is accountable for it. When a human employee makes a bad decision, the chain of accountability is clear: the employee, their manager, and ultimately the business. When AI makes a bad decision, the chain is murkier. Did the AI vendor build a faulty model? Did the business configure it incorrectly? Did the training data contain biases? Did the employee who deployed it fail to supervise adequately? Current Australian law does not have AI-specific liability rules, so courts apply existing frameworks like negligence, product liability, and consumer law. This creates uncertainty that businesses need to manage proactively.

What happened in the Deloitte AI case?

Deloitte Australia had to partially refund approximately $290,000 to the Australian government after AI-generated content in a report contained fabricated academic references. The case demonstrated that professional services firms are liable for AI outputs they deliver to clients, regardless of whether a human or AI produced the content. It also showed that AI hallucinations are not obvious to the people using the outputs. The fabricated references looked legitimate. The lesson for SMEs is that human review of AI outputs is not optional, especially when those outputs are delivered to clients or used to make decisions affecting others.

How can a small business reduce its AI liability risk?

Five practical steps. First, maintain human oversight for any AI decision that affects customers, employees, or finances. Never let AI operate autonomously in high-stakes situations. Second, document your AI governance: what tools you use, what decisions they inform, and what review processes are in place. Third, check your insurance coverage. Many professional indemnity policies were written before AI and may not cover AI-specific failures. Fourth, include AI disclosures in your terms of service and privacy policy so customers know AI is involved. Fifth, keep records of AI inputs and outputs so you can demonstrate due diligence if a dispute arises.

FlowWorks Team
AI Automation & Consulting · Melbourne, Australia