AI compliance in Australia is not a single obligation. It is a combination of requirements from the Privacy Act 1988, the Australian Consumer Law, anti-discrimination legislation, industry-specific regulations, and the emerging Guardrails for AI in Australia framework. For most businesses, navigating this landscape feels overwhelming.
This checklist breaks it down into 10 concrete steps. Each step includes specific tasks you can assign, track, and complete. Work through them in order: the early steps (audit and classification) inform everything that follows.
The compliance deadline for the Privacy Act's automated decision-making provisions is December 2026. But compliance is not a switch you flip on a deadline date. It is a capability you build over months. Starting now gives you time to do it properly rather than rushing to meet a regulatory deadline. For the full regulatory context, visit our Privacy Act and AI compliance hub.
73% of Australian businesses using AI have no formal governance in place (CSIRO, 2024). The average cost of a data breach in Australia is AUD $4.26 million (IBM, 2024). This checklist helps you close the governance gap before it becomes an expensive problem.
Before you can govern AI, you need to know what AI you are using. This is the foundation of every other compliance step.
List every AI tool in use across the organisation (ChatGPT, Copilot, Gemini, Claude, Jasper, etc.)
Identify AI features embedded in existing software (Xero AI, HubSpot AI, Salesforce Einstein, Microsoft 365 Copilot)
Survey staff to uncover unofficial or personal AI tool use (shadow AI)
Document the purpose and function of each AI tool
Record what data each AI tool accesses and processes
Note whether each tool's data is used for model training
Not all AI use carries the same risk. A risk classification helps you focus compliance efforts where they matter most.
Categorise each AI system as low, medium, or high risk based on its access to personal data and decision-making role
High risk: AI that makes or influences decisions about individuals (hiring, credit, customer service outcomes)
Medium risk: AI that processes personal information but does not make decisions (data analysis, reporting)
Low risk: AI that does not access personal information (content drafting from public data, code generation)
Document the risk classification and the reasoning behind each rating
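The three tiers above reduce to a simple decision rule. A sketch, assuming risk is driven only by the two factors this checklist names (personal data access and decision-making role); your documented classification should still record the reasoning behind each rating.

```python
def classify_risk(accesses_personal_data: bool, influences_decisions: bool) -> str:
    """Assign a risk tier from the two factors used in this checklist.

    High:   makes or influences decisions about individuals
    Medium: processes personal information but makes no decisions
    Low:    no access to personal information
    """
    if influences_decisions:
        return "high"
    if accesses_personal_data:
        return "medium"
    return "low"

# Examples matching the checklist definitions
assert classify_risk(True, True) == "high"     # e.g. hiring screening tool
assert classify_risk(True, False) == "medium"  # e.g. reporting over customer data
assert classify_risk(False, False) == "low"    # e.g. code generation
```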
The Privacy Act 1988 and its upcoming amendments set the baseline for how you handle personal information in AI systems.
Confirm that your privacy policy discloses AI use in data processing and decision-making
Verify that consent mechanisms cover AI-specific data processing where required
Assess whether each AI system complies with the data minimisation principle (APP 3)
Check that personal information used by AI systems is limited to the stated purpose (APP 6)
Confirm that cross-border data flows from AI tools comply with APP 8 (overseas disclosure)
Prepare for the December 2026 automated decision-making transparency obligations
Data is the fuel for AI systems. Controlling what goes in is the most effective way to control what comes out.
Establish a data classification framework (Public, Internal, Confidential, Restricted)
Define which data tiers can be used with which AI tools
Prohibit Restricted data (TFNs, health records, credit card numbers) from entering any external AI tool
Implement technical controls where possible (data loss prevention tools, network restrictions)
Require anonymisation or de-identification before processing personal information with AI
Establish data retention and deletion procedures for AI interactions and outputs
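The tier-to-tool rules above can be expressed as a lookup that technical controls (or a pre-submission check) enforce. The tool categories and the mapping itself are illustrative assumptions; set them to match your own vendor assessments. Note that Restricted data appears in no entry, reflecting the prohibition above.

```python
# Which data classifications may enter which category of AI tool.
ALLOWED = {
    "external_free":       {"Public"},
    "external_enterprise": {"Public", "Internal"},
    "onshore_private":     {"Public", "Internal", "Confidential"},
    # "Restricted" (TFNs, health records, credit card numbers) is absent
    # from every entry: it must not enter any external AI tool.
}

def is_permitted(data_tier: str, tool_category: str) -> bool:
    """Check a proposed AI interaction against the data classification rules."""
    return data_tier in ALLOWED.get(tool_category, set())

assert not is_permitted("Restricted", "external_enterprise")
assert is_permitted("Internal", "external_enterprise")
```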
If you use third-party AI tools, you are still responsible for how they handle your data. Vendor assessment is a core compliance requirement.
Review vendor terms of service and data processing agreements for each AI tool
Confirm whether vendor AI tools use your input data for model training (and opt out where possible)
Verify where data is stored and processed (onshore vs. offshore)
Assess vendor security certifications and practices (SOC 2, ISO 27001, encryption standards)
Document the vendor assessment findings for each AI tool in use
Establish a process for ongoing vendor monitoring and periodic reassessment
The Privacy Act amendments will require human review capability for significant automated decisions. Building this now avoids a scramble later.
Identify all AI systems that make or influence decisions about individuals
Designate staff with the authority and competence to review automated decisions
Document the human review process for each high-risk AI system
Ensure reviewing staff have access to sufficient information to conduct genuine reviews
Establish a timeline for human review responses (the OAIC expects reasonable timeframes)
Create a mechanism for individuals to request human review of AI-driven decisions
An AI usage policy gives your team clear rules about what is acceptable. Without one, every staff member is making their own judgement calls about AI use.
Define approved AI tools and their permitted uses
Specify data classification rules for AI interactions
Document prohibited uses (e.g. entering client data into free-tier tools, AI-only decision-making)
Establish quality assurance requirements for AI outputs
Include incident reporting procedures for AI-related issues
Require staff acknowledgement of the policy
Policies only work if people understand them. Training is what turns a document into a practice.
Conduct initial AI usage policy training for all staff
Include practical examples and scenarios relevant to your industry
Cover data classification rules and what can and cannot be shared with AI tools
Train staff on how to identify and report AI-related incidents
Explain the Privacy Act obligations and why the policy exists
Schedule annual refresher training and training for new starters
Record training completion for compliance documentation
A formal risk assessment documents the risks associated with your AI use and the controls you have in place to mitigate them.
For each AI system, identify the risks: data breach, inaccurate outputs, bias, privacy violation, IP leakage
Assess the likelihood and potential impact of each risk
Document the controls in place to mitigate each risk
Identify residual risks and determine whether they are acceptable
Assign risk owners for each AI system
Schedule regular risk reassessment (quarterly recommended)
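Likelihood and impact are conventionally combined in a 5-by-5 scoring matrix. A sketch follows; the scales and the rating thresholds are illustrative assumptions, not regulatory values, so calibrate them to your own risk appetite.

```python
# A conventional 5x5 likelihood-by-impact matrix.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_rating(score: int) -> str:
    if score >= 15:
        return "high"    # escalate: needs additional controls or sign-off
    if score >= 8:
        return "medium"  # mitigate and monitor at quarterly reassessment
    return "low"         # acceptable residual risk; document and accept

# e.g. data breach via an unapproved free-tier tool: likely x major = 16 -> high
assert risk_rating(risk_score("likely", "major")) == "high"
```

Recording the score alongside the controls in place makes the residual-risk decision (accept or mitigate further) explicit at each quarterly reassessment.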
Regulators expect documented evidence of compliance. Good documentation also makes ongoing governance easier and cheaper.
Create an AI register listing all AI systems, their risk classification, and responsible persons
Document all policies, procedures, and training materials
Maintain records of AI system decisions and outcomes for audit purposes
Log all AI-related incidents and their resolution
Schedule quarterly governance reviews and document findings
Keep vendor assessment records current
Maintain an evidence trail showing ongoing compliance efforts
Completing the checklist is the starting point, not the finish line. AI compliance is an ongoing practice that requires regular attention. Here is what ongoing compliance looks like in practice.
Quarterly reviews. Review your AI register, risk assessments, and policy effectiveness every quarter. AI tools change, new tools are adopted, and regulations evolve. Quarterly reviews ensure your compliance posture stays current.
Incident response. When an AI-related incident occurs (and eventually one will), your response should be swift, documented, and consistent with your Notifiable Data Breaches obligations under Part IIIC of the Privacy Act. Having procedures in place before an incident occurs is what separates prepared businesses from panicking ones.
Regulatory monitoring. The AI regulatory landscape in Australia is evolving quickly. The Guardrails for AI in Australia framework, OAIC guidance updates, and potential new legislation all need tracking. Assign someone in your organisation to monitor regulatory developments, or work with a governance partner who does this for you.
Annual training refresh. Staff turnover, new AI tools, and evolving policies all mean that annual training refreshers are essential. Training is not a one-time event. It is a recurring investment in your organisation's compliance capability.
Need help working through the checklist? Our AI governance service covers every item on this checklist: from initial audit through to policy development, vendor assessment, staff training, and ongoing compliance support. We build frameworks that are proportionate to your size and practical to maintain.
Get help with AI compliance

For a small business with a handful of AI tools, you can complete the initial audit and documentation in 2 to 4 weeks. Mid-size businesses with more complex AI use typically need 4 to 8 weeks. This includes the AI audit, policy development, vendor review, and initial staff training. Ongoing compliance activities (quarterly reviews, monitoring, policy updates) require a few hours per quarter.
It depends on your internal capability. If you have someone with experience in privacy compliance and AI governance, you can work through most of the checklist internally. However, the privacy impact assessment, vendor contract review, and compliance mapping steps benefit from specialist input. Many businesses use a hybrid approach: completing the straightforward items internally and engaging a governance specialist for the more technical elements.
The biggest risk is not knowing what AI you are using. Shadow AI, where staff use unapproved AI tools without the organisation's knowledge, is widespread. A 2024 CSIRO survey found that 73% of Australian businesses using AI had no formal governance in place. If you do not know what tools your team is using and what data they are sharing, you cannot comply with the Privacy Act or any other regulation.
Yes. AI features embedded in platforms like Xero, HubSpot, Salesforce, and Microsoft 365 are still AI systems that process your data and potentially make decisions about your customers. The Privacy Act does not distinguish between standalone AI tools and embedded AI features. You are responsible for understanding how those features work, what data they access, and whether they comply with your obligations.