Compliance · March 2026 · 11 min read

AI-Generated Content Rules in Australia: What Your Business Must Disclose

[Figure: the three AI content transparency mechanisms, labelling, watermarking, and metadata, overlaid on a map of Australia]

In November 2025, the Australian Government released voluntary guidance titled Being Clear About AI-Generated Content. Published by the National AI Centre and the Department of Industry, Science and Resources, the guide outlines best practice approaches to transparency for businesses that create, modify, or publish AI-generated content.

The guidance is voluntary. But the obligations it sits alongside are not. The Australian Consumer Law already prohibits misleading and deceptive conduct, and the Treasury's October 2025 review confirmed that existing consumer protections apply fully to AI-generated content. The ACCC does not need new legislation to take action if an AI chatbot gives false advice or an AI-generated product description misleads a customer.

This guide breaks down what the government is recommending, what the law already requires, and what practical steps your business should take. If you are using AI to create marketing copy, customer communications, product descriptions, images, or any public-facing content, this applies to you.

Why This Matters Now

AI-generated content is everywhere. Marketing teams use it for blog posts, social media, and email campaigns. Customer service teams use it for chatbot responses and support articles. Design teams use it for product images and visual assets. According to CPA Australia's 2025 Business Technology Report, 92% of surveyed companies used AI tools in 2025, up from 72% the prior year.

The problem is not that businesses are using AI. The problem is that most have no process for disclosing when they do, and no system for catching errors before they reach customers. AI models hallucinate. They present fabricated statistics as fact, invent product features that do not exist, and generate claims that cannot be substantiated.

The Air Canada case is the most cited example. The airline's chatbot told a bereaved passenger he could book a full-fare ticket and claim the bereavement discount retroactively, which the airline's policy did not allow. The British Columbia Civil Resolution Tribunal held Air Canada liable for the chatbot's misrepresentation. The lesson: if AI says it on your behalf, you own the consequences.

What the Law Already Requires

You do not need to wait for new legislation. The Australian Consumer Law already covers AI-generated content through several existing provisions.

Misleading and deceptive conduct (Section 18). Conduct in trade or commerce that is misleading or deceptive, or likely to mislead or deceive, is prohibited. This applies to AI-generated content. If your AI creates a product description that overstates features, or a chatbot provides incorrect information about your services, you are in breach. Intent is irrelevant. The Treasury's October 2025 review confirmed that AI hallucinations can constitute deceptive conduct.

False or misleading representations (Section 29). Making false claims about the quality, value, or characteristics of goods or services is prohibited. AI-generated marketing copy that invents statistics, fabricates testimonials, or claims capabilities your product does not have falls squarely within this provision.

AI-washing. The ACCC has flagged "AI-washing" as an emerging concern: businesses overstating the role or capability of AI in their products to attract customers. Claiming your product "uses AI" when it runs a basic rules engine, or overstating the accuracy or sophistication of your AI systems, could constitute a misleading representation.

Penalties. Maximum penalties under the Australian Consumer Law include fines of up to $50 million, three times the benefit obtained, or 30% of adjusted turnover during the breach period, whichever is greatest. These are not theoretical. The ACCC has listed AI-related deceptive conduct as an enforcement priority for 2025-26.

The Three Transparency Mechanisms

The government's Being Clear About AI-Generated Content guidance recommends three transparency mechanisms. Each serves a different purpose, and for higher-risk content, the guidance recommends using them in combination.

[Figure: three pillars of AI content transparency, labelling (an eye), watermarking (a fingerprint), and metadata (document layers)]

1. Labelling

Labelling means adding visible text to tell users that content was generated or modified by AI and where it came from. This is the most straightforward transparency mechanism. A label might appear as a footnote, a banner, or a disclosure statement.

Examples:

A text disclaimer on AI-generated marketing copy: "This content was drafted with AI assistance and reviewed by our team."

A visible overlay on AI-generated images indicating the content is synthetic

An in-app disclosure when a chatbot response is AI-generated rather than written by a human

Labels are easy to implement but also easy to remove. On their own, they rely on good faith. For higher-risk content, combine labelling with watermarking or metadata to make the disclosure more durable.
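
As an illustration, here is a minimal Python sketch of a labelling step in a publishing pipeline. The disclosure wording, the `label_ai_content` helper, and the reviewer field are illustrative assumptions, not text or structure prescribed by the guidance.

```python
# Illustrative sketch only: the disclosure wording and reviewer field are
# assumptions, not language mandated by the government guidance.
AI_DISCLOSURE = "This content was drafted with AI assistance and reviewed by our team."

def label_ai_content(body: str, reviewed_by: str | None = None) -> str:
    """Append a visible AI disclosure to outbound copy.

    Refuses to label copy that has not been human-reviewed, so the
    disclosure is never attached to unchecked content.
    """
    if reviewed_by is None:
        raise ValueError("AI-generated copy must be human-reviewed before labelling")
    return f"{body}\n\n{AI_DISCLOSURE}"

print(label_ai_content("Our new widget ships in March.", reviewed_by="J. Smith"))
```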

2. Watermarking

Watermarking embeds information directly into the content so you can trace its origin or verify its authenticity. Watermarks can be visible (a semi-transparent overlay on an image, an audible tone in audio) or invisible (hidden data that requires specialised tools to detect).

Examples:

Visible watermarks on AI-generated product images or marketing visuals

Invisible watermarks embedded in AI-generated text that can be detected by verification tools

Audio watermarks in AI-generated voice content or podcasts

Watermarking is harder to remove than labelling, which makes it more reliable for content that will be shared or redistributed. However, not all AI tools support watermarking natively. You may need to add watermarks as a separate step in your content workflow.
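
As a sketch of what that separate step might look like, the snippet below uses the Pillow imaging library to stamp a visible, semi-transparent label on an image. The label text, placement, and opacity are illustrative choices; invisible or C2PA-based watermarking would require different tooling.

```python
# A minimal visible watermark using Pillow (pip install Pillow).
# The label text, placement, and opacity are illustrative assumptions.
from PIL import Image, ImageDraw

def watermark_visible(in_path: str, out_path: str, text: str = "AI-generated image") -> None:
    base = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Semi-transparent white text in the bottom-left corner.
    draw.text((12, base.height - 28), text, fill=(255, 255, 255, 160))
    Image.alpha_composite(base, overlay).convert("RGB").save(out_path)

# Example: watermark_visible("product.png", "product_watermarked.jpg")
```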

3. Metadata Recording

Metadata is descriptive information about a piece of content that travels with the content file. It can record who created the content, when it was created, what tools were used, and whether it has been edited. Standards such as C2PA (developed by the Coalition for Content Provenance and Authenticity) provide a structured approach to provenance metadata for AI content.

Examples:

Recording in the file metadata that an image was generated using a specific AI tool and date

Logging the AI model, prompt, and human review status for generated documents

Using C2PA-compliant metadata to create a verifiable chain of content provenance

Metadata supports the credibility of both labels and watermarks. It provides an audit trail that regulators or consumers can verify. The challenge is that metadata can be stripped from files when they are copied, screenshotted, or reformatted. Use metadata alongside other mechanisms, not as a standalone solution.
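
For illustration, here is a minimal Python sketch that records provenance in a PNG text chunk using Pillow. The field names (`ai_provenance`, `human_reviewed_by`) are our own assumptions, and a production workflow would more likely attach a C2PA manifest; as noted above, this metadata survives copying of the file but not screenshots or reformatting.

```python
# Records basic provenance in a PNG text chunk (pip install Pillow).
# Field names are illustrative assumptions, not a C2PA-compliant schema.
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def record_provenance(in_path: str, out_path: str, tool: str, reviewer: str) -> None:
    record = {
        "generator": tool,  # e.g. the AI model or product that made the image
        "created": datetime.now(timezone.utc).isoformat(),
        "human_reviewed_by": reviewer,
    }
    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps(record))
    Image.open(in_path).save(out_path, pnginfo=meta)

# Reading it back for an audit check:
# json.loads(Image.open("image_out.png").text["ai_provenance"])
```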

A Risk-Based Approach to AI Content Transparency

Not all AI-generated content carries the same risk. A brainstorming draft for an internal meeting is different from a financial recommendation sent to a client. The guidance recommends a proportionate approach: match your transparency mechanisms to the potential impact of the content.

[Figure: risk assessment framework with three tiers, higher, medium, and lower risk]

Higher risk

Content that could influence decisions about individuals or create significant harm if inaccurate

Examples: Legal documents, financial advice, health information, hiring assessments, insurance recommendations, regulatory filings

Use all three mechanisms: label, watermark, and metadata. Require human review before publication. Maintain an audit log.

Medium risk

Content that represents the business publicly and could affect consumer decisions

Examples: Marketing copy, product descriptions, customer service responses, social media posts, email campaigns

Use labelling at minimum. Add metadata recording where practical. Human review recommended before publication.

Lower risk

Internal content with limited external impact

Examples: Internal meeting summaries, research notes, brainstorming drafts, code documentation

Internal disclosure that AI was used. No external labelling required unless the content is later published.
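
To make the tiers enforceable rather than aspirational, a publishing pipeline can encode them as a lookup table. The sketch below is one way to express the mapping above in Python; the tier names follow the guidance, while the exact mechanism sets reflect this article's reading of it.

```python
# The three tiers above as a policy table a pipeline can enforce.
# Mechanism sets are this article's interpretation, not prescribed rules.
from enum import Enum

class Risk(Enum):
    HIGHER = "higher"
    MEDIUM = "medium"
    LOWER = "lower"

POLICY = {
    Risk.HIGHER: {"mechanisms": {"label", "watermark", "metadata"}, "human_review": True},
    Risk.MEDIUM: {"mechanisms": {"label", "metadata"}, "human_review": True},  # review recommended
    Risk.LOWER:  {"mechanisms": set(), "human_review": False},  # internal disclosure only
}

def required_controls(risk: Risk) -> dict:
    """Look up the transparency controls a content item must carry."""
    return POLICY[risk]

assert "watermark" in required_controls(Risk.HIGHER)["mechanisms"]
```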

What This Means for Common Use Cases

AI chatbots and customer service. If customers interact with an AI chatbot, they should know they are communicating with AI. Disclose this at the start of the interaction, not buried in your terms of service. Any information the chatbot provides must be accurate. If the chatbot cannot verify a claim, it should say so rather than guessing. Establish a clear escalation path to a human agent.
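
A common pattern is to wrap the model behind a handler that discloses AI involvement up front and escalates instead of guessing. The sketch below shows the shape of it; `call_model` and `can_substantiate` are hypothetical stand-ins for your model API and your claim-verification step, not real library calls.

```python
# Disclosure-and-escalation pattern for a customer-facing chatbot.
# call_model() and can_substantiate() are hypothetical stand-ins.
AI_DISCLOSURE = "You're chatting with an AI assistant. Ask for a person at any time."
ESCALATION = "I can't confirm that, so I'm connecting you with a team member."

APPROVED_CLAIMS = {"Our standard plan is $29 per month."}  # vetted, substantiated claims

def call_model(message: str) -> tuple[str, bool]:
    """Hypothetical stand-in for your chatbot model API."""
    return ("Our standard plan is $29 per month.", True)

def can_substantiate(answer: str) -> bool:
    """Hypothetical check that a claim matches an approved source."""
    return answer in APPROVED_CLAIMS

def handle_message(session: dict, message: str) -> str:
    answer, confident = call_model(message)
    if not confident or not can_substantiate(answer):
        answer = ESCALATION  # route to a human rather than guess
    if not session.get("disclosed"):  # disclose at the start, not in the T&Cs
        session["disclosed"] = True
        answer = f"{AI_DISCLOSURE}\n\n{answer}"
    return answer

print(handle_message({}, "How much is the standard plan?"))
```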

Marketing and content creation. If you use AI to draft blog posts, email campaigns, social media content, or product descriptions, review every piece for accuracy before publication. AI-generated statistics should be verified against primary sources. Claims about your products or services must reflect reality. Consider adding a disclosure where AI played a significant role in content creation.

Images and visual content. AI-generated images are increasingly realistic. If you use AI to create product images, marketing visuals, or social media graphics, consider whether a reasonable consumer would assume the images are photographs of real products or scenarios. If so, disclosure is advisable. Watermarking is particularly relevant for visual content.

Professional services output. Accounting firms, legal practices, and consulting firms that use AI to draft reports, analysis, or advice have a higher duty of care. The content must be reviewed by a qualified professional before it reaches the client. Relying on AI-generated analysis without verification could expose the firm to professional negligence claims as well as consumer law liability.

Voice AI and phone systems. If your business uses AI to answer phone calls, the caller should be told they are speaking with an AI assistant at the start of the call. The AI should not impersonate a real person. Any commitments or information provided during the call must be accurate and aligned with your actual policies and offerings.

The Privacy Act Connection

AI content transparency does not exist in isolation. It sits alongside the recent amendments to the Privacy Act 1988, which introduce mandatory transparency requirements for automated decision-making from December 2026. If your AI systems process personal information to generate content, both frameworks apply.

For example, if you use AI to personalise marketing emails based on customer data, you need to consider both the consumer law obligation (is the content accurate and not misleading?) and the privacy obligation (have you disclosed that AI is processing personal information to generate personalised communications?).

The practical advice is to build one framework that addresses both. An AI usage policy that covers data handling, content review, and disclosure requirements will satisfy the intent of both the consumer law guidance and the privacy reforms. Our AI usage policy template provides a starting point.

AI Content Transparency Checklist

A practical checklist for implementing the government's recommendations in your business.

Audit all AI tools currently used to generate or modify content across the organisation

Classify each type of AI-generated content by risk level (higher, medium, lower)

Implement labelling for all external-facing AI-generated content

Evaluate watermarking options for higher-risk visual, audio, and video content

Establish metadata recording processes for AI-generated documents and media

Review marketing copy and product descriptions for accuracy, ensuring AI outputs do not contain hallucinated claims

Update terms of service and privacy policy to disclose AI use in content creation

Create an internal AI content policy specifying when disclosure is required and what mechanisms to use

Train staff on the difference between AI-assisted content and human-created content, and when disclosure applies

Establish a human review process for all higher-risk AI-generated content before it is published or shared

Document your transparency practices for audit and regulatory purposes (a minimal audit-log sketch follows this checklist)

Schedule quarterly reviews of AI content practices as tools and guidance evolve
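
As a starting point for the documentation item above, here is a minimal sketch of an append-only audit trail in JSON Lines format. The field names are illustrative assumptions, not a prescribed schema; the point is a timestamped, reviewable record for every published item.

```python
# Append-only audit trail for AI-generated content (JSON Lines).
# Field names are illustrative assumptions, not a prescribed schema.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("ai_content_audit.jsonl")

def log_publication(content_id: str, risk_tier: str, mechanisms: list[str], reviewer: str) -> None:
    entry = {
        "content_id": content_id,
        "risk_tier": risk_tier,    # higher / medium / lower
        "mechanisms": mechanisms,  # e.g. ["label", "metadata"]
        "reviewed_by": reviewer,
        "published_at": datetime.now(timezone.utc).isoformat(),
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

log_publication("blog-2026-03", "medium", ["label", "metadata"], "J. Smith")
```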

What We Recommend

The guidance is voluntary today. But the regulatory direction is clear. The EU AI Act already mandates disclosure for AI-generated content. Canada and the UK are moving in the same direction. Australia will follow. Businesses that build transparency practices now will be ahead when voluntary becomes mandatory.

Start with the basics. Add a human review step to every AI content workflow. Verify any statistics, claims, or product information before publication. Disclose AI involvement where consumers would reasonably want to know.

Then build toward maturity. Implement metadata recording so you have an audit trail. Explore watermarking for visual and audio content. Create an internal policy that specifies when and how your team discloses AI use. Review it quarterly as the tools and the regulatory landscape evolve. For a broader view of AI governance in Australia, see our AI governance guide.

Need help with AI content governance? Our AI governance service helps Australian businesses build practical transparency frameworks. From usage policies to content review processes, we help you stay compliant without slowing your team down.

Explore AI governance services

Frequently Asked Questions

Is it illegal to use AI-generated content in Australia?

No. Using AI-generated content is not illegal. However, under the Australian Consumer Law, businesses must ensure that AI-generated content is not misleading or deceptive. If AI-generated content contains inaccuracies, hallucinations, or creates a false impression about your products or services, the business is liable regardless of whether a human or AI produced it.

Do I have to label AI-generated content in Australia?

Currently, labelling is voluntary under the government's guidance. However, the Australian Consumer Law already requires that content not be misleading. If a reasonable consumer would be misled by not knowing content was AI-generated, failing to disclose could constitute deceptive conduct. The guidance recommends labelling as best practice, and the direction of regulation globally is toward mandatory disclosure.

What transparency mechanisms does the Australian Government recommend?

The government guidance recommends three mechanisms: labelling (visible text disclosing AI involvement), watermarking (embedding traceable information into content), and metadata recording (including details about how content was created and by what tools). For higher-risk content, the guidance recommends using multiple mechanisms together.

What are the penalties for misleading AI-generated content?

Under the Australian Consumer Law, penalties for misleading or deceptive conduct include fines of up to $50 million, three times the value of the benefit obtained, or 30% of adjusted turnover, whichever is greatest. The ACCC has signalled that AI-related deceptive conduct is an enforcement priority for 2025-26. These penalties apply whether content was created by a human or by AI.

Does this guidance apply to small businesses?

Yes. The Australian Consumer Law applies to all businesses regardless of size. Even if your business falls under the Privacy Act small business exemption, the prohibition on misleading and deceptive conduct has no turnover threshold. If you use AI to generate any public-facing content, this guidance is relevant.

What about AI features embedded in software I already use?

If platforms you use, such as CRM systems, email marketing tools, or accounting software, have AI features that generate content on your behalf, you are still responsible for the accuracy and transparency of that content. The fact that AI is embedded in a third-party tool does not transfer your obligations to the vendor.

FlowWorks Team · AI Automation & Consulting · Melbourne, Australia