Compliance · April 2026 · 10 min read

AI Policy Template for Australian Businesses


An AI policy is a written document that tells your team exactly how they can and cannot use AI tools at work. It covers which tools are approved, what data can go into them, who is responsible when things go wrong, and how the policy gets reviewed. If your business uses AI in any form and you do not have a policy, you are operating with unnecessary risk.

The Privacy Act 2026 amendments make this more urgent than ever. The small business exemption is being removed. Automated decision-making transparency is now a legal requirement. Without a documented AI policy, you cannot demonstrate compliance, and you cannot protect your business from the very real risks of uncontrolled AI use.

This guide walks you through every section your AI policy needs, with practical examples and a structure you can adapt to your business today.

Why Every Australian Business Needs an AI Policy

Your team is already using AI. The question is whether they are using it with guardrails or without them. A 2025 Microsoft survey found that 78% of knowledge workers use AI at work, and more than half of them brought their own tools without telling their employer. That is shadow AI, and it is happening in your business right now.

Without a policy, you have no idea what data your team is pasting into ChatGPT, Gemini, or any other tool. Customer records, financial data, employee information, legal documents. All of it could be flowing into AI systems with no oversight, no consent, and no data processing agreements in place.

A policy gives you control. It does not slow your team down. It gives them clarity on what they can do, so they can use AI confidently and productively without putting the business at risk.

What Your AI Policy Should Cover

A good AI policy is not a 50-page legal document. It is a clear, practical guide that fits your business. Here are the seven sections every AI policy needs:

  • Purpose and Scope: why the policy exists, who it applies to, and what it covers
  • Data Classification: green, amber, and red tiers for what data can enter AI tools
  • Approved Tools List: vetted AI tools with approved use cases and restrictions
  • Acceptable Use Guidelines: what staff can and cannot do with AI, with specific examples
  • Incident Response: what to do when something goes wrong with an AI tool
  • Roles and Accountability: who owns the policy, who enforces it, who reviews it
  • Review Schedule: when and how the policy gets updated

Purpose and Scope

Start with why the policy exists and who it applies to. This section should be two or three paragraphs at most. State that the policy governs all use of AI tools by employees, contractors, and any third parties acting on behalf of the business. Define what you mean by AI tools broadly: generative AI (ChatGPT, Claude, Gemini), AI features embedded in existing software (Xero, HubSpot, Salesforce), and any automated decision-making systems.

Be specific about scope. Does the policy apply to personal devices used for work? Does it cover AI tools used by vendors on your behalf? For most businesses, the answer to both should be yes.

Data Classification: The Traffic Light System

This is the most important section of your policy. Your team needs to know, in clear terms, what data they can put into AI tools and what is off-limits. A three-tier traffic light system works well:

Green: Safe to use

Examples: Publicly available information, internal process documentation, generic marketing copy, anonymised data sets

Rule: Can be entered into any approved AI tool without additional approval

Amber: Use with caution

Examples: Customer names and contact details, internal financial summaries, employee feedback (non-identifying), vendor contracts

Rule: Requires manager approval and must only be used in approved enterprise-tier tools with data processing agreements

Red: Never use

Examples: Health records, credit card numbers, tax file numbers, passwords, employee performance reviews, legal privileged documents

Rule: Must never be entered into any AI tool under any circumstances
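If your team builds internal tooling around the policy, the traffic light tiers can be encoded directly so scripts and chat integrations can check data before it leaves the business. The sketch below is illustrative only: the category names are placeholders, and you would replace them with the categories from your own data inventory. Note the deliberate default: anything unlisted is treated as red.

```python
# Minimal sketch of the traffic light tiers as a lookup table.
# Category names are illustrative placeholders; adapt them to your
# own data inventory before relying on this in any tooling.

TIERS = {
    "green": {"public info", "process docs", "generic marketing copy", "anonymised data"},
    "amber": {"customer contact details", "internal financial summaries", "vendor contracts"},
    "red": {"health records", "credit card numbers", "tax file numbers", "passwords"},
}

def classify(category: str) -> str:
    """Return the tier for a data category. Unknown categories default to red,
    so anything not explicitly classified is treated as off-limits."""
    for tier, categories in TIERS.items():
        if category in categories:
            return tier
    return "red"

print(classify("passwords"))      # red
print(classify("public info"))    # green
print(classify("board minutes"))  # red (unlisted, so safest tier applies)
```

The fail-closed default mirrors the policy itself: if staff are unsure which tier data belongs to, they should treat it as red until someone accountable says otherwise.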

Approved Tools List

Maintain a list of every AI tool your business has vetted and approved. For each tool, document: the tool name, the approved use cases, the data tier it can access (green, amber, or red), whether the vendor has a data processing agreement in place, and whether the vendor uses your data to train their models.

This list should be a living document. Review it quarterly. When vendors change their terms (and they do, often without notice), your approved status needs to be reassessed. When your team wants to use a new tool, direct them to your vendor assessment process before they start.

Any AI tool not on the approved list should be considered unapproved. Make this explicit in the policy. Using unapproved tools should trigger a conversation, not punishment, but it needs to be addressed.
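Whether you keep the register in a spreadsheet or a lightweight script, each entry should capture the same fields. Here is one possible shape, sketched in Python. The tool name and field choices are hypothetical examples, not a prescribed format; track whatever your vendor assessment process actually requires.

```python
# Illustrative entry shape for an approved-tools register.
# The tool name below is a made-up example, not a real vendor.
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    approved_uses: list
    max_data_tier: str           # "green" or "amber"; red data never enters any tool
    has_dpa: bool                # data processing agreement in place?
    trains_on_your_data: bool    # does the vendor train models on your inputs?
    last_reviewed: str           # revisit quarterly, per the policy

register = [
    ApprovedTool(
        name="ExampleChat Enterprise",  # hypothetical vendor
        approved_uses=["drafting", "summarising"],
        max_data_tier="amber",
        has_dpa=True,
        trains_on_your_data=False,
        last_reviewed="2026-04-01",
    ),
]

def is_approved(tool_name: str) -> bool:
    """Anything not in the register is unapproved by default."""
    return any(t.name == tool_name for t in register)

print(is_approved("ExampleChat Enterprise"))  # True
print(is_approved("RandomNewTool"))           # False
```

The default matters more than the format: absence from the register means unapproved, which keeps the burden of proof on vetting rather than on catching violations.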

Acceptable Use Guidelines

This is where you get specific. Avoid vague language like “use AI responsibly.” Instead, give concrete dos and don’ts your team can act on immediately.

Acceptable uses

  • Drafting internal communications, meeting summaries, and process documentation
  • Research and brainstorming using publicly available information
  • Generating first drafts of marketing content (with human review before publishing)
  • Summarising long documents or reports for internal use
  • Code assistance and debugging (with code review before deployment)

Prohibited uses

  • Entering customer personal information into unapproved AI tools
  • Using AI to make automated decisions about employees without human oversight
  • Submitting legal, financial, or medical advice generated by AI without professional review
  • Using AI-generated content in client deliverables without disclosure (where required)
  • Sharing proprietary business data, trade secrets, or confidential strategies with AI tools

Incident Response Process

Things will go wrong. An employee will paste sensitive data into the wrong tool. An AI-generated email will contain incorrect information that goes to a client. A vendor will change their data handling terms without telling you. Your policy needs to cover what happens next.

Define a simple escalation path: who the employee reports to, how quickly they need to report it, what immediate containment steps to take (such as deleting the conversation or revoking API access), and who decides whether external parties need to be notified. For privacy breaches, you may have obligations under the Notifiable Data Breaches scheme: suspected eligible breaches must be assessed within 30 days, and the OAIC and affected individuals notified as soon as practicable.

Document every incident, even minor ones. Patterns in minor incidents often reveal systemic gaps in your policy before a major breach occurs.
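Capturing every incident in a consistent shape is what makes those patterns visible. A minimal record might look like the sketch below; the field names and the sample entry are illustrative assumptions, not a required schema.

```python
# Sketch of a minimal, consistent incident record.
# Field names are illustrative; align them with your own escalation path.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncident:
    reported_on: date
    reported_by: str
    tool: str
    summary: str
    data_tier_involved: str        # green / amber / red
    containment_steps: list        # e.g. conversation deleted, API key revoked
    oaic_notification_needed: bool # assessed against the NDB scheme
    status: str = "open"

log = []

# Hypothetical example entry: minor incidents get logged too,
# because patterns in small events reveal policy gaps early.
log.append(AIIncident(
    reported_on=date(2026, 4, 2),
    reported_by="A. Staff",
    tool="ExampleChat",  # hypothetical tool name
    summary="Customer email pasted into an unapproved chatbot",
    data_tier_involved="amber",
    containment_steps=["conversation deleted", "account access reviewed"],
    oaic_notification_needed=False,
))
```

A structure like this also makes the six-monthly policy review easier: you can sort the log by tool or data tier and see at a glance where the policy is being misunderstood.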

Aligning Your Policy with Australian Regulations

Your AI policy does not exist in a vacuum. It needs to align with the regulatory landscape that applies to your business. At a minimum, your policy should reference and align with the Privacy Act 2026 (including the automated decision-making provisions), your existing privacy policy, and any industry-specific requirements from bodies like APRA, AHPRA, or the Legal Services Council.

If your business is pursuing or considering ISO 42001 certification, your AI policy should map to the standard’s requirements. Even if certification is not on your radar, aligning with ISO 42001 principles gives your policy a solid structural foundation.

The OAIC’s guidance on AI and privacy is the benchmark your compliance will be measured against. Make sure your policy reflects the regulator’s expectations.

Making Your AI Policy Stick

The biggest risk with any policy is that nobody reads it. A PDF buried in a shared drive does not change behaviour. Here is how to make your AI policy operational:

Launch it properly. Run a team session to walk through the policy. Explain why it exists, not just what it says. Answer questions. Make it clear this is about protecting the team, not restricting them.

Make it findable. Pin it to your Slack channel. Add it to your onboarding checklist. Put a one-page summary on the wall. If people cannot find the policy, they will not follow it.

Review and refresh. Set a calendar reminder for six-monthly reviews. When the review happens, ask your team what is working and what is not. Policies that evolve with your team are the ones that last. Our AI governance consulting can help you build and embed a policy that your team actually follows.

Not sure where your business stands on AI readiness?

Our AI Readiness Review assesses your current AI usage, identifies gaps, and gives you a clear action plan. It is the fastest way to understand what your policy needs to cover.

Get your AI Readiness Review

Frequently Asked Questions

What should an AI policy include?

An effective AI policy should include five core sections: scope and purpose, data classification rules, an approved tools list, acceptable use guidelines, and an incident response process. It should also name the person or team responsible for AI governance and set a review schedule. The best policies are short, specific, and written in plain language that your team will actually read.

Is an AI policy legally required in Australia?

There is no standalone law requiring an AI policy. However, the Privacy Act 2026 amendments create obligations around automated decision-making, data handling, and transparency that are very difficult to meet without a documented policy. Industry regulators like APRA and AHPRA also expect documented governance for AI use in regulated sectors. In practice, having no policy is a compliance risk.

How often should we update our AI policy?

Review your AI policy at least every six months. AI tools evolve quickly, and new ones appear constantly. You should also trigger a review whenever there is a significant change: new regulations, a major AI incident in your industry, a new tool adoption, or a change in the data your business processes. Quarterly reviews are ideal for fast-moving industries.

Can we use a generic AI policy template?

A generic template is a useful starting point, but it must be customised. Your policy needs to reflect your specific industry regulations, the AI tools your team actually uses, and the types of data your business handles. A financial services firm and a marketing agency have very different risk profiles. Using a template without tailoring it creates a false sense of compliance.

Who should own the AI policy in a small business?

In a small business, the AI policy owner is typically the founder, managing director, or operations manager. The key is that someone specific is accountable. In larger SMEs, this might be the IT manager or a dedicated compliance lead. Whoever owns it needs authority to enforce the policy and a direct line to leadership when issues arise.

FlowWorks Team
AI Automation & Consulting · Melbourne, Australia
Get started

Find out what's costing your business the most.

A 30-minute conversation. No pitch. No obligation. We'll identify your highest-impact automation opportunities before you spend a dollar.

Get your AI Readiness Review
1300 484 044 · ops@flowworks.com.au · 470 St Kilda Rd, Melbourne VIC 3004