An AI policy is a written document that tells your team exactly how they can and cannot use AI tools at work. It covers which tools are approved, what data can go into them, who is responsible when things go wrong, and how the policy gets reviewed. If your business uses AI in any form and you do not have a policy, you are operating with unnecessary risk.
The Privacy Act 2026 amendments make this more urgent than ever. The small business exemption is being removed. Automated decision-making transparency is now a legal requirement. Without a documented AI policy, you cannot demonstrate compliance, and you cannot protect your business from the very real risks of uncontrolled AI use.
This guide walks you through every section your AI policy needs, with practical examples and a structure you can adapt to your business today.
Your team is already using AI. The question is whether they are using it with guardrails or without them. A 2025 Microsoft survey found that 78% of knowledge workers use AI at work, and more than half of them brought their own tools without telling their employer. That is shadow AI, and it is happening in your business right now.
Without a policy, you have no idea what data your team is pasting into ChatGPT, Gemini, or any other tool. Customer records, financial data, employee information, legal documents. All of it could be flowing into AI systems with no oversight, no consent, and no data processing agreements in place.
A policy gives you control. It does not slow your team down. It gives them clarity on what they can do, so they can use AI confidently and productively without putting the business at risk.
A good AI policy is not a 50-page legal document. It is a clear, practical guide that fits your business. Here are the seven sections every AI policy needs:
| Section | What It Covers |
|---|---|
| Purpose and Scope | Why the policy exists, who it applies to, and what it covers |
| Data Classification | Green, amber, and red tiers for what data can enter AI tools |
| Approved Tools List | Vetted AI tools with approved use cases and restrictions |
| Acceptable Use Guidelines | What staff can and cannot do with AI, with specific examples |
| Incident Response | What to do when something goes wrong with an AI tool |
| Roles and Accountability | Who owns the policy, who enforces it, who reviews it |
| Review Schedule | When and how the policy gets updated |
Start with why the policy exists and who it applies to. This section should be two or three paragraphs at most. State that the policy governs all use of AI tools by employees, contractors, and any third parties acting on behalf of the business. Define what you mean by AI tools broadly: generative AI (ChatGPT, Claude, Gemini), AI features embedded in existing software (Xero, HubSpot, Salesforce), and any automated decision-making systems.
Be specific about scope. Does the policy apply to personal devices used for work? Does it cover AI tools used by vendors on your behalf? For most businesses, the answer to both should be yes.
Data classification is the most important section of your policy. Your team needs to know, in clear terms, what data they can put into AI tools and what is off-limits. A three-tier traffic light system works well:
**Green tier: use freely.**
Examples: Publicly available information, internal process documentation, generic marketing copy, anonymised data sets.
Rule: Can be entered into any approved AI tool without additional approval.

**Amber tier: approval required.**
Examples: Customer names and contact details, internal financial summaries, employee feedback (non-identifying), vendor contracts.
Rule: Requires manager approval and must only be used in approved enterprise-tier tools with data processing agreements.

**Red tier: never.**
Examples: Health records, credit card numbers, tax file numbers, passwords, employee performance reviews, legally privileged documents.
Rule: Must never be entered into any AI tool under any circumstances.
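If AI requests flow through any internal tooling, the red tier can be partially enforced in software rather than left to memory. Below is a minimal Python sketch; the patterns and function names are our own illustrations, and a handful of regexes is no substitute for a proper data loss prevention tool.

```python
import re

# Illustrative red-tier patterns only. A real deployment needs a proper
# data loss prevention (DLP) tool; these regexes will miss plenty.
RED_TIER_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def screen_for_red_tier(text: str) -> list[str]:
    """Return the names of any red-tier patterns found in the text."""
    return [name for name, pattern in RED_TIER_PATTERNS.items()
            if pattern.search(text)]

# Block the prompt before it ever reaches the AI tool.
prompt = "Customer password: hunter2. Draft a password reset email."
found = screen_for_red_tier(prompt)
if found:
    raise ValueError(f"Red-tier data detected ({', '.join(found)}); see the AI policy.")
```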
Maintain a list of every AI tool your business has vetted and approved. For each tool, document: the tool name, the approved use cases, the highest data tier it can access (green or amber; red data never enters any tool), whether the vendor has a data processing agreement in place, and whether the vendor uses your data to train their models.
This list should be a living document. Review it quarterly. When vendors change their terms (and they do, often without notice), your approved status needs to be reassessed. When your team wants to use a new tool, direct them to your vendor assessment process before they start.
Any AI tool not on the list is unapproved by default. Make this explicit in the policy. Using an unapproved tool should trigger a conversation, not punishment, but it needs to be addressed.
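One way to keep the register honest is to store it as structured data rather than prose, so "is this tool approved for this data tier?" becomes a single lookup. A minimal Python sketch follows; the field names, the sample vendor, and its terms are illustrative assumptions, not real vendor facts.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    """One entry in the approved AI tools register."""
    name: str
    approved_use_cases: list[str]
    max_data_tier: str        # "green" or "amber" (red data never enters any tool)
    dpa_in_place: bool        # is a data processing agreement signed?
    trains_on_our_data: bool  # does the vendor train models on our inputs?
    last_reviewed: date       # re-check at least quarterly

# Sample entry only; confirm each vendor's actual terms yourself.
REGISTER = [
    ApprovedTool(
        name="ExampleChat Enterprise",
        approved_use_cases=["drafting", "summarising internal documents"],
        max_data_tier="amber",
        dpa_in_place=True,
        trains_on_our_data=False,
        last_reviewed=date(2026, 1, 15),
    ),
]

def is_approved(tool_name: str, data_tier: str) -> bool:
    """Anything not on the register is unapproved by default."""
    order = ["green", "amber"]
    for tool in REGISTER:
        if tool.name == tool_name and data_tier in order:
            return order.index(data_tier) <= order.index(tool.max_data_tier)
    return False
```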
Acceptable use guidelines are where you get specific. Avoid vague language like “use AI responsibly.” Instead, give concrete dos and don’ts your team can act on immediately.
Things will go wrong. An employee will paste sensitive data into the wrong tool. An AI-generated email will contain incorrect information that goes to a client. A vendor will change their data handling terms without telling you. Your policy needs to cover what happens next.
Define a simple escalation path: who the employee reports to, how quickly they need to report it, what immediate containment steps to take (such as deleting the conversation or revoking API access), and who decides whether external parties need to be notified. For privacy breaches, the Notifiable Data Breaches scheme may require you to notify the OAIC and affected individuals as soon as practicable, and you have at most 30 days to assess whether a suspected breach is notifiable.
Document every incident, even minor ones. Patterns in minor incidents often reveal systemic gaps in your policy before a major breach occurs.
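Patterns only emerge if every incident is logged with the same fields. Here is a minimal Python sketch of one log record, with illustrative field names:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncident:
    """One entry in the AI incident log; minor incidents go in too."""
    reported_at: datetime
    reported_by: str
    tool: str                      # which AI tool was involved
    description: str               # what happened, in plain language
    data_tier_involved: str        # "green", "amber", or "red"
    containment_steps: list[str] = field(default_factory=list)
    escalated_to: str = ""         # who decided on external notification
    possibly_notifiable: bool = False  # might this trigger the NDB scheme?
```

Even a spreadsheet with these columns does the job; the consistent structure matters more than the tooling.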
Your AI policy does not exist in a vacuum. It needs to align with the regulatory landscape that applies to your business. At a minimum, your policy should reference and align with the Privacy Act 2026 (including the automated decision-making provisions), your existing privacy policy, and any industry-specific requirements from bodies like APRA, AHPRA, or the Legal Services Council.
If your business is pursuing or considering ISO 42001 certification, your AI policy should map to the standard’s requirements. Even if certification is not on your radar, aligning with ISO 42001 principles gives your policy a solid structural foundation.
The OAIC’s guidance on AI and privacy is the benchmark your compliance will be measured against. Make sure your policy reflects the regulator’s expectations.
The biggest risk with any policy is that nobody reads it. A PDF buried in a shared drive does not change behaviour. Here is how to make your AI policy operational:
**Launch it properly.** Run a team session to walk through the policy. Explain why it exists, not just what it says. Answer questions. Make it clear this is about protecting the team, not restricting them.
**Make it findable.** Pin it to your Slack channel. Add it to your onboarding checklist. Put a one-page summary on the wall. If people cannot find the policy, they will not follow it.
**Review and refresh.** Set a calendar reminder for six-monthly reviews. When the review happens, ask your team what is working and what is not. Policies that evolve with your team are the ones that last. Our AI governance consulting can help you build and embed a policy that your team actually follows.
Our AI Readiness Review assesses your current AI usage, identifies gaps, and gives you a clear action plan. It is the fastest way to understand what your policy needs to cover.
Get your AI Readiness Review

**What should an AI policy include?**

An effective AI policy covers the seven core sections outlined above: purpose and scope, data classification rules, an approved tools list, acceptable use guidelines, an incident response process, roles and accountability (naming the person or team responsible for AI governance), and a review schedule. The best policies are short, specific, and written in plain language that your team will actually read.
**Is an AI policy legally required?**

There is no standalone law requiring an AI policy. However, the Privacy Act 2026 amendments create obligations around automated decision-making, data handling, and transparency that are very difficult to meet without a documented policy. Industry regulators like APRA and AHPRA also expect documented governance for AI use in regulated sectors. In practice, having no policy is a compliance risk.
**How often should you review your AI policy?**

Review your AI policy at least every six months. AI tools evolve quickly, and new ones appear constantly. You should also trigger a review whenever there is a significant change: new regulations, a major AI incident in your industry, adoption of a new tool, or a change in the data your business processes. Quarterly reviews are ideal for fast-moving industries.
**Can you use a generic AI policy template?**

A generic template is a useful starting point, but it must be customised. Your policy needs to reflect your specific industry regulations, the AI tools your team actually uses, and the types of data your business handles. A financial services firm and a marketing agency have very different risk profiles. Using a template without tailoring it creates a false sense of compliance.
**Who should own the AI policy?**

In a small business, the AI policy owner is typically the founder, managing director, or operations manager. The key is that someone specific is accountable. In larger SMEs, this might be the IT manager or a dedicated compliance lead. Whoever owns it needs authority to enforce the policy and a direct line to leadership when issues arise.