The Office of the Australian Information Commissioner (OAIC) is the national regulator for privacy and freedom of information. As AI adoption has accelerated across Australian businesses, the OAIC has published guidance making clear that existing privacy law already applies to AI systems that handle personal information.
This is not future regulation. It is the current state of play. If your business uses AI in any way that touches personal data (customer information, employee records, patient data, lead details), the Australian Privacy Principles (APPs) apply right now.
This guide breaks down the OAIC's key publications on AI, explains which Australian Privacy Principles are most relevant, and provides practical steps for compliance. For a broader view of Australia's AI governance landscape, see our AI governance guide.
The OAIC has addressed AI and privacy through several key publications and statements. The core message is consistent: AI does not exist in a regulatory vacuum. Here are the key themes.
The OAIC has been explicit: businesses do not need to wait for new AI-specific legislation. The Privacy Act 1988 and the Australian Privacy Principles already cover AI systems that collect, use, store, or disclose personal information. The fact that a process is automated does not exempt it from privacy obligations. If anything, automation requires greater attention to privacy because of the scale and speed at which personal information can be processed.
The OAIC expects businesses to be transparent about how they use AI. This means telling individuals when AI is being used to make decisions about them, explaining what data the AI uses, and providing information about how the AI system works in general terms. "We use AI" is not enough. People have a right to understand how their information is being processed and what influence AI has on outcomes that affect them.
The OAIC recommends conducting privacy impact assessments (PIAs) before deploying AI systems that handle personal information. A PIA helps you identify privacy risks, assess their severity, and put controls in place before problems occur. For high-risk AI uses (automated decision-making that affects individuals, profiling, sensitive data processing), the OAIC considers a PIA essential, not optional.
The OAIC has emphasised that automated decision-making must include appropriate human oversight, particularly for decisions that significantly affect individuals. The Privacy Act reforms taking effect in 2026 formalise part of this expectation by requiring businesses to disclose in their privacy policies when automated decisions are made that significantly affect individuals, and the OAIC continues to expect a pathway for human review of those decisions.
There are 13 APPs in total, but six are particularly relevant when deploying AI systems. Here is what each means in practice.
Under APP 1 (open and transparent management of personal information), you must have a clear, up-to-date privacy policy that explains how you handle personal information, including through AI systems. If you use AI to process customer data, your privacy policy needs to say so. Vague statements about "using technology to improve services" are not sufficient.
Under APP 3 (collection of solicited personal information), you can only collect personal information that is reasonably necessary for your functions or activities. This applies to AI training data, inputs to AI systems, and any data collected as a byproduct of AI processing. If your AI chatbot collects conversation data, that collection must be necessary and disclosed.
APP 5 (notification of collection) requires that when you collect personal information (including through AI systems), you tell people what you are collecting, why, who you might share it with, and how they can access it. If an AI system is collecting or inferring information about individuals, notification requirements apply.
Under APP 6 (use or disclosure), personal information can only be used for the primary purpose it was collected for, or a secondary purpose the individual would reasonably expect that is related to the primary purpose (directly related, for sensitive information). Using customer support data to train an AI model is a secondary use that may require consent.
Under APP 10 (quality of personal information), you must take reasonable steps to ensure personal information is accurate, up-to-date, and complete. If AI systems are making decisions based on personal data, inaccurate data can lead to unfair outcomes. This principle puts the responsibility on you to ensure data quality.
APP 11 (security of personal information) requires you to protect personal information from misuse, interference, and loss, and from unauthorised access, modification, and disclosure. This extends to AI systems: how is data secured when processed by AI? What happens to data after AI processing? Are third-party AI providers meeting your security requirements?
The OAIC's guidance is principles-based, not prescriptive. That means the specific actions depend on your business, your AI use cases, and the type of data involved. Here is a practical framework.
Map every AI tool and system in your organisation. Include third-party AI features embedded in software you already use. For each, document what personal information it accesses, how that data is processed, and where it is stored.
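As a sketch of how one inventory entry might be captured, the structure below uses a Python dataclass; the field names and the example chatbot entry are illustrative assumptions, not an OAIC-prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in an AI-use inventory (illustrative structure, not an OAIC template)."""
    tool_name: str                       # the AI tool or embedded AI feature
    vendor: str                          # who operates the underlying model or service
    personal_info_categories: list[str]  # types of personal information the tool can access
    purpose: str                         # why the tool processes that information
    storage_location: str                # where data is stored or processed
    third_party_sharing: bool            # whether personal information leaves your organisation
    owner: str                           # the person internally accountable for this tool

# Hypothetical example entry for a customer-support chatbot
support_bot = AIToolRecord(
    tool_name="Support chatbot",
    vendor="ExampleAI (hypothetical)",
    personal_info_categories=["name", "email", "conversation history"],
    purpose="Answer customer support queries",
    storage_location="Hosted offshore; region to be confirmed with the vendor",
    third_party_sharing=True,
    owner="Customer Support Lead",
)

print(support_bot)
```

Even a spreadsheet with these columns does the job; the point is that every tool, including embedded AI features, has a documented entry and an owner.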
Your privacy policy must reflect your actual AI use. Add clear statements about which AI systems process personal information, what types of data are involved, and the purpose of the processing. Avoid jargon.
For any AI system that handles personal information, conduct a PIA. This does not need to be a 50-page document. A structured assessment of data flows, risks, and controls is sufficient for most SMEs.
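For most SMEs that structured assessment can be as simple as a list of identified risks, each with a rating and a control. The sketch below is one hypothetical way to record it in Python; the fields and ratings are assumptions, not the OAIC's PIA methodology.

```python
from dataclasses import dataclass

@dataclass
class PIARisk:
    """A single privacy risk identified in a lightweight PIA (illustrative only)."""
    data_flow: str    # where personal information moves in the AI system
    risk: str         # what could go wrong
    likelihood: str   # e.g. "low" / "medium" / "high"
    impact: str       # severity for affected individuals
    controls: str     # mitigations in place or planned
    status: str       # e.g. "open", "mitigated", "accepted"

pia = [
    PIARisk(
        data_flow="Customer emails forwarded to an AI summarisation tool",
        risk="Sensitive details sent to a third-party model provider",
        likelihood="medium",
        impact="high",
        controls="Strip attachments and contact details before sending; vendor terms reviewed",
        status="mitigated",
    ),
]

# Surface anything still open and high-impact for sign-off before deployment
for item in pia:
    if item.status == "open" and item.impact == "high":
        print(f"Unresolved high-impact risk: {item.risk}")
```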
Only feed AI systems the personal information they actually need. If your AI chatbot does not need to know a customer's date of birth to answer their question, do not pass it. Review your data flows and strip out unnecessary fields.
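One way to enforce this in practice is an allowlist applied to every payload before it reaches an AI service. The sketch below assumes a Python integration layer; the field names and the `send_to_ai` call are illustrative and not tied to any particular AI provider's API.

```python
# Fields the chatbot genuinely needs to answer a support query (illustrative allowlist)
ALLOWED_FIELDS = {"first_name", "enquiry_text", "product", "order_status"}

def minimise(record: dict) -> dict:
    """Return only the allowlisted fields, dropping everything else before it reaches the AI service."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

customer_record = {
    "first_name": "Dana",
    "last_name": "Nguyen",
    "date_of_birth": "1987-04-12",   # not needed to answer the question, so it never leaves
    "email": "dana@example.com",
    "enquiry_text": "Where is my order?",
    "product": "Standing desk",
    "order_status": "shipped",
}

payload = minimise(customer_record)
# send_to_ai(payload)  # hypothetical call to whatever AI integration you actually use
print(payload)  # date_of_birth, last_name and email have been stripped out
```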
If you use third-party AI tools, assess their privacy practices. Where is data processed? Is it stored? Is it used to train models? What security measures are in place? Document your assessment and review it regularly.
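A simple way to make that assessment repeatable is to record the answers in one place and flag the gaps. The structure and follow-up rules below are illustrative assumptions; they mirror the questions in this step rather than any vendor's actual terms.

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    """Answers to the core due-diligence questions for a third-party AI tool (illustrative)."""
    vendor: str
    processing_location: str   # where is data processed?
    data_retained: bool        # is it stored after processing?
    used_for_training: bool    # is your data used to train the vendor's models?
    security_measures: str     # what protections are in place?
    last_reviewed: str         # when you last checked the above

def gaps(assessment: VendorAssessment) -> list[str]:
    """Flag answers that warrant follow-up before relying on the tool."""
    issues = []
    if assessment.used_for_training:
        issues.append("Vendor trains on your data; confirm opt-out or contractual limits")
    if assessment.data_retained:
        issues.append("Vendor retains data; check retention period and deletion process")
    if not assessment.security_measures.strip():
        issues.append("No documented security measures")
    return issues

assessment = VendorAssessment(
    vendor="ExampleAI (hypothetical)",
    processing_location="United States",
    data_retained=True,
    used_for_training=False,
    security_measures="Encryption in transit and at rest; SOC 2 report provided",
    last_reviewed="2025-06-30",
)

for issue in gaps(assessment):
    print(issue)
```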
For any AI-driven decision that significantly affects an individual (credit decisions, insurance, employment screening), ensure there is a mechanism for human review. Document your oversight process and make it accessible to affected individuals.
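In code, that mechanism often looks like a routing rule: significant decisions never go straight from the model to the individual. The sketch below is a hypothetical Python example; the decision types, confidence threshold, and review queue are assumptions for illustration, not a recommended configuration.

```python
# Decision types this business treats as significantly affecting an individual (illustrative)
SIGNIFICANT_DECISIONS = {"credit", "insurance", "employment_screening"}

review_queue: list[dict] = []  # stands in for whatever case-management system you use

def route_decision(decision_type: str, ai_outcome: str, confidence: float) -> str:
    """Send significant or low-confidence AI outcomes to a human reviewer instead of applying them automatically."""
    if decision_type in SIGNIFICANT_DECISIONS or confidence < 0.8:
        review_queue.append({
            "type": decision_type,
            "ai_recommendation": ai_outcome,
            "status": "pending human review",
        })
        return "queued for human review"
    return ai_outcome  # low-stakes outcome can be applied automatically

print(route_decision("employment_screening", "reject", confidence=0.95))  # queued for human review
print(route_decision("faq_routing", "billing team", confidence=0.92))     # applied automatically
print(len(review_queue), "case(s) awaiting review")
```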
Your staff need to understand their privacy obligations when using AI. This includes knowing which tools are approved, what data can be entered into AI systems, and how to handle privacy concerns. Regular, practical training is more effective than annual compliance presentations.
For a step-by-step compliance framework, our AI compliance checklist covers the full scope of requirements. For businesses looking to formalise their AI governance, our guide to the Privacy Act and AI provides the detailed legal context.
FlowWorks helps Australian businesses build AI governance frameworks that meet OAIC expectations and Privacy Act requirements. We can audit your AI use, conduct privacy impact assessments, and build practical compliance processes.
Get in touch