The Privacy Act 1988 is undergoing its most significant reform since it was enacted. For businesses using AI, these changes are not abstract policy discussions. They create concrete, enforceable obligations that will reshape how you collect data, make decisions, and interact with customers.
The reforms, driven by the Attorney-General's Privacy Act Review and informed by OAIC guidance, introduce specific requirements around automated decision-making, data minimisation, and consent. The compliance deadline for the core provisions is December 2026. That may sound like plenty of time, but building compliant processes, updating privacy notices, training staff, and auditing AI systems takes months, not weeks.
This guide breaks down what is changing, what it means for your business, and what you should be doing now to prepare. For the broader governance picture, see our Privacy Act and AI compliance hub.
The Privacy Act was written in 1988, before the internet was mainstream, before smartphones existed, and long before AI became a standard business tool. While it has been amended over the years, the core framework was not designed for a world where algorithms make decisions about people at scale.
In 2020, the Attorney-General's Department launched a comprehensive review of the Act. The resulting report, published in 2022, contained 116 proposals for reform. The government has accepted the majority of these proposals, and the Privacy Act Amendment Bill is now progressing through Parliament.
In parallel, the OAIC has been issuing updated guidance on how existing privacy obligations apply to AI. The message is clear: even before the amendments take formal effect, regulators expect organisations to be applying privacy principles to their AI systems. Waiting for the legislation to be finalised before taking action is a risk, not a strategy.
The reforms introduce or strengthen several provisions that directly affect how businesses can use AI. Here are the six changes that will have the most practical impact.
Organisations that use AI to make or substantially assist in making decisions about individuals must disclose that AI is involved. This applies to any decision that could reasonably be expected to significantly affect the rights, interests, or wellbeing of a person. The OAIC has indicated that "substantially assist" is interpreted broadly: if AI generates a recommendation that a human routinely approves without independent assessment, that counts as automated decision-making.
Organisations must be able to explain, in plain language, how their AI systems reach decisions. This includes the type of personal information used as inputs, the logic or methodology applied, and the factors that most significantly influenced the outcome. "The algorithm decided" is not an acceptable explanation. Regulators expect a level of detail that allows affected individuals to understand and, where necessary, challenge the decision.
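To make this concrete, the sketch below shows one possible way to capture an explanation in a structured form and render it in plain language. The field names, the example decision, and the contact address are all hypothetical; neither the Act nor the OAIC prescribes a particular format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical structure for recording how an automated decision was reached.
# Field names are illustrative only; no format is mandated by the Privacy Act.
@dataclass
class DecisionExplanation:
    decision: str                    # the outcome communicated to the individual
    personal_info_inputs: List[str]  # types of personal information used
    methodology: str                 # plain-language description of the logic
    key_factors: List[str]           # factors that most influenced the outcome
    review_contact: str              # where to request human review

    def to_plain_language(self) -> str:
        return (
            f"Decision: {self.decision}\n"
            f"Information used: {', '.join(self.personal_info_inputs)}\n"
            f"How it was assessed: {self.methodology}\n"
            f"Main factors: {', '.join(self.key_factors)}\n"
            f"To request human review, contact: {self.review_contact}"
        )

# Invented example: a lending decision referred for manual assessment.
example = DecisionExplanation(
    decision="Loan application referred for manual assessment",
    personal_info_inputs=["income history", "repayment history", "employment status"],
    methodology="A scoring model compared your repayment and income history "
                "against our lending criteria.",
    key_factors=["irregular income over the past 12 months"],
    review_contact="privacy@example.com",
)
print(example.to_plain_language())
```

Keeping a record like this for each significant decision also makes it easier to respond when an individual exercises their right to human review.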
The reforms strengthen consent requirements for AI-related data processing. Where personal information is used for AI training, profiling, or automated decision-making, organisations will need to obtain specific, informed consent. This goes beyond the current approach of bundled consent in lengthy terms and conditions. Consent must be voluntary, current, and informed by clear disclosure of how AI will use the data.
While data minimisation is already required under the Australian Privacy Principles, the reforms give it sharper teeth in the AI context. Organisations must demonstrate that each data input to an AI system is reasonably necessary for the specific purpose. Collecting broad datasets "in case they might be useful" for future AI applications will no longer be defensible.
Individuals affected by significant automated decisions will have the right to request human review. The reviewing human must have sufficient authority, competence, and access to information to conduct a genuine review, not a rubber stamp. Organisations must establish and publicise the process for requesting review.
The Privacy Act currently exempts businesses with annual turnover under $3 million from most obligations. The Attorney-General's Department has recommended removing or significantly narrowing this exemption. If it is removed, tens of thousands of small businesses using AI tools will come under the Act for the first time.
The penalty regime under the Privacy Act was significantly strengthened in November 2022 following the Optus and Medibank data breaches. These enhanced penalties apply to the new AI-related provisions as well.
Serious interference with privacy: up to $50 million, three times the benefit obtained, or 30% of adjusted turnover during the relevant period, whichever is greatest.
Repeated minor breaches: infringement notices of up to $313,000 per contravention for bodies corporate.
Failure to comply with an OAIC direction: civil penalty proceedings in the Federal Court.
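To see how the "whichever is greatest" rule plays out for serious interference, the short sketch below compares the three limbs for an invented scenario. The figures are purely illustrative and are not legal advice.

```python
# Illustrative only: compares the three limbs of the maximum penalty for a
# serious interference with privacy, using invented figures.
def max_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Return the greatest of the three statutory limbs."""
    return max(
        50_000_000,               # fixed limb: $50 million
        3 * benefit_obtained,     # three times the benefit obtained
        0.30 * adjusted_turnover, # 30% of adjusted turnover for the relevant period
    )

# Hypothetical breach: $5m benefit obtained, $400m adjusted turnover.
print(f"${max_penalty(5_000_000, 400_000_000):,.0f}")  # -> $120,000,000
```

In that hypothetical, the turnover limb dominates, which is why large organisations cannot treat the $50 million figure as a ceiling.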
The Office of the Australian Information Commissioner has published guidance on AI and privacy that signals the regulator's expectations. Even before the formal amendments take effect, the OAIC expects organisations to apply existing Australian Privacy Principles to their AI systems.
Privacy Impact Assessments. The OAIC recommends conducting a Privacy Impact Assessment (PIA) before deploying any AI system that processes personal information. A PIA should identify the personal information flows, assess the privacy risks, and document the mitigation measures in place.
Proactive disclosure. Rather than waiting for individuals to ask whether AI is involved in decisions about them, the OAIC expects organisations to proactively disclose AI use in their privacy policies and at the point of data collection.
Vendor due diligence. If you use third-party AI tools, the OAIC expects you to have assessed how those vendors handle personal information. You remain responsible for data that you share with AI platforms, even if the processing happens on the vendor's infrastructure.
Ongoing monitoring. Compliance is not a one-time event. The OAIC expects organisations to monitor their AI systems on an ongoing basis for privacy risks, accuracy, and alignment with stated purposes. Regular reviews, at least quarterly, are considered good practice.
Conduct an AI audit. Identify every AI system in use across your organisation, including features embedded in existing software. Document what personal information each system accesses, how it processes that data, and what decisions it makes or influences. Our AI compliance checklist provides a structured approach.
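One way to keep the audit usable is a simple register with one entry per system. The sketch below shows a possible structure; the fields, vendor name, and example entry are hypothetical rather than any mandated format.

```python
# Hypothetical AI system register for an internal audit.
# Fields are illustrative; adapt them to your own audit needs.
register = [
    {
        "system": "CRM lead-scoring feature",
        "vendor": "Example CRM Pty Ltd",  # hypothetical vendor
        "personal_information": ["contact details", "purchase history"],
        "purpose": "Prioritise sales follow-up",
        "decision_influence": "Recommends which leads staff contact first",
        "automated_decision": False,       # staff decide; the AI only ranks
        "pia_completed": True,
        "human_review_process": "Sales manager can override rankings",
        "last_reviewed": "2025-11-01",
    },
    # ...one entry per AI system or AI-enabled feature in use
]

# Quick check: flag systems that influence decisions but lack a completed PIA.
for entry in register:
    if entry["decision_influence"] and not entry["pia_completed"]:
        print(f"Review needed: {entry['system']}")
```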
Update your privacy policy. Add clear disclosures about how your organisation uses AI to process personal information and make decisions. Be specific about the types of AI systems in use, the personal information they access, and the purposes for which they are used.
Create an AI usage policy. Establish clear internal guidelines for how staff use AI tools. This should cover approved tools, prohibited uses, data classification rules, and incident reporting procedures. See our AI usage policy template for a practical starting framework.
Establish human review processes. For every AI system that makes or influences decisions about individuals, set up a documented process for human review. Designate staff with the authority and training to conduct genuine reviews and overrides.
Review vendor contracts. Check that your agreements with AI vendors address data handling, security, and compliance obligations. Ensure you understand where data is processed, who has access, and what happens to data when the engagement ends.
Train your team. Staff need to understand the basics of responsible AI use and your organisation's specific policies. This does not require deep technical training. It means ensuring everyone knows what they can and cannot do with AI tools, and how to report concerns.
Need help preparing for the Privacy Act changes? Our AI governance service helps Australian businesses build practical compliance frameworks. From policy development to staff training, we cover everything you need before the December 2026 deadline.
Explore AI governance services

The automated decision-making transparency obligations have a compliance deadline of December 2026. Organisations should begin preparing now, as building compliant processes takes time. The OAIC has indicated that it expects organisations to demonstrate good faith progress toward compliance well before the deadline.
The small business exemption (for businesses with annual turnover under $3 million) is under active review as part of the Privacy Act reforms. The Attorney-General's Department has recommended removing or significantly narrowing this exemption. Even if it remains, the Australian Consumer Law and anti-discrimination legislation already apply to all businesses regardless of size.
Maximum penalties under the Privacy Act were increased in 2022 to whichever is greatest: $50 million, three times the value of the benefit obtained from the breach, or 30% of adjusted turnover during the relevant period. The OAIC can also issue infringement notices, accept enforceable undertakings, and seek civil penalty orders through the Federal Court.
Using AI features built into platforms like Xero, HubSpot, Salesforce, or Microsoft 365 Copilot does not exempt you. If those features process personal information or contribute to decisions about individuals, you are still responsible for compliance. The Privacy Act places obligations on the organisation that collects and controls the data, not on the software vendor alone.