The deadline is December 2026 — roughly nine months away. Businesses that have not started preparing risk penalties of $50 million or more for serious breaches. This guide covers exactly what you need to do, step by step.
Australia's Privacy Act is getting its most significant overhaul in decades, and artificial intelligence is at the centre of it. The amendments, scheduled to take effect in December 2026, introduce new obligations around automated decision-making that will affect virtually every business using AI — from customer service chatbots to algorithmic pricing to automated resume screening.
If your business uses any form of AI or automated decision-making that affects individuals, you will be subject to obligations that did not exist twelve months ago. The good news is that compliance is entirely achievable if you start now. The bad news is that the window is closing, and the penalties for non-compliance are severe.
This guide breaks down exactly what is changing, who is affected, and the seven steps you need to take to be compliant by December 2026. We have distilled complex legislation into practical, actionable guidance that any business owner or manager can follow.
The Privacy Act amendments introduce several key changes that directly affect how businesses can use AI and automated systems. The most significant changes relate to automated decision-making transparency, enhanced individual rights, and stronger enforcement powers for the Office of the Australian Information Commissioner (OAIC).
Automated decision-making transparency. Businesses must now disclose when automated systems are used to make decisions that significantly affect individuals. This includes decisions about credit, insurance, employment, housing, and access to services. The disclosure must be meaningful — explaining not just that automation is used, but how it works, what data it considers, and how individuals can challenge the decision.
Right to explanation. Individuals now have the right to request an explanation of how an automated decision was made about them. This means your AI systems need to be explainable — you cannot simply say “the algorithm decided.” You need to be able to articulate, in plain language, what factors were considered and how they influenced the outcome.
Right to human review. For decisions that significantly affect individuals, there must be a pathway to request human review of automated decisions. This is not optional — it is a right that individuals can exercise, and businesses must have processes in place to honour it within reasonable timeframes.
Enhanced enforcement. The OAIC has been given stronger enforcement powers and increased funding to investigate and penalise non-compliance. Maximum penalties have been significantly increased, and the commissioner has new tools including infringement notices and enforceable undertakings that can be deployed more quickly than full court proceedings.
The short answer: more businesses than you think. The Privacy Act applies to organisations with annual revenue over $3 million, as well as health service providers, organisations that trade in personal information, and certain other entities regardless of revenue.
But the automated decision-making provisions cast a wider net than many businesses realise. You are likely affected if you use any of the following: AI-powered chatbots that handle customer enquiries; automated lead scoring or qualification systems; algorithmic pricing or recommendation engines; automated credit or risk assessment; AI-driven recruitment screening; automated tenant selection or assessment; marketing personalisation that uses personal data; or any system that makes decisions about individuals without meaningful human oversight.
Even if you use third-party AI tools rather than building your own, the compliance obligation falls on you — the entity making decisions about individuals. You cannot outsource accountability to your software vendor.
Your privacy policy needs a substantial update. For each type of automated decision your business makes, you must disclose the following information in clear, plain language that an ordinary person can understand.
First, identify and describe each automated decision-making process — what it does, what decisions it makes, and who it affects. Second, explain what personal information is used as input to the decision. Third, describe the logic involved — not the source code, but a meaningful explanation of how the system reaches its conclusions. Fourth, explain the potential consequences of the decision for the individual. Fifth, provide clear instructions for how to request human review of an automated decision, including who to contact and expected response times.
Generic statements are not sufficient. A privacy policy that simply says “we may use automated systems to process your information” does not meet the new requirements. Each type of automated decision needs its own specific, meaningful disclosure.
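One way to keep each disclosure specific rather than generic is to maintain a structured record per automated decision, mirroring the five elements above, and render those records into your privacy policy. The sketch below is purely illustrative — the decision, data fields, contact address, and timeframes are hypothetical placeholders, not regulatory wording.

```python
# Illustrative only: one disclosure record per automated decision,
# covering the five required elements described above. All values
# shown (process name, inputs, email, timeframe) are hypothetical.
DISCLOSURES = [
    {
        "process": "Automated loan pre-approval",
        "inputs": ["income", "credit history", "employment status"],
        "logic": "Applications are scored against repayment-capacity "
                 "thresholds; scores below the cutoff are declined.",
        "consequences": "A declined pre-approval means the application "
                        "is not progressed without manual assessment.",
        "human_review": "Email privacy@example.com; reviews are "
                        "completed within 30 days.",
    },
]

def render_disclosure(d: dict) -> str:
    """Render one structured record as plain-language policy text."""
    return (
        f"{d['process']}\n"
        f"Data used: {', '.join(d['inputs'])}\n"
        f"How it works: {d['logic']}\n"
        f"Consequences: {d['consequences']}\n"
        f"Request human review: {d['human_review']}"
    )
```

Keeping disclosures as data rather than free text also makes it easier to audit that every automated decision in your inventory has a matching policy entry.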
The Australian Government’s framework for AI assurance (GfAA) outlines six practices that form the foundation of responsible AI use. While not all are legally mandated (yet), they represent the standard that regulators, courts, and customers will hold you to. Building your compliance programme around these six practices positions you well beyond the minimum legal requirements.
Designate clear responsibility for AI outcomes within your organisation. Someone needs to own AI governance — whether that is a dedicated role or an addition to existing responsibilities.
Be open about how AI is used in your business. This goes beyond privacy policy disclosures — it means being willing to explain AI decisions to affected individuals in plain language.
Provide clear pathways for individuals to challenge AI-driven decisions. Make it easy to request human review, and ensure the review process is genuine and accessible.
Actively test for and mitigate bias in your AI systems. This includes bias in training data, algorithmic bias, and bias in how decisions are implemented and communicated.
Apply the Australian Privacy Principles to all AI data processing. Minimise data collection, ensure data quality, implement strong security measures, and respect individual rights over their data.
Ensure AI systems perform as intended, with appropriate testing, monitoring, and fallback mechanisms. Implement ongoing monitoring to detect degradation, drift, or unintended behaviours.
Here is the practical, seven-step process we recommend for achieving compliance before December 2026. This is the same framework we use with FlowWorks clients, adapted for businesses at any stage of AI adoption.
Start by mapping every instance where your business uses AI or automated decision-making. This includes obvious applications like chatbots and recommendation engines, but also less obvious ones — automated credit scoring, resume screening, fraud detection, pricing algorithms, and any system that makes or influences decisions about individuals without direct human involvement. Most businesses are surprised by how many automated decisions they are already making.
Not all automated decisions carry the same risk. The amendments distinguish between decisions that have a significant effect on individuals (high-risk) and those with minimal impact (low-risk). A decision that affects someone's access to credit, insurance, employment, or housing is high-risk. A product recommendation on your website is low-risk. High-risk decisions face stricter transparency and review requirements.
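The triage step above can be captured as a simple classification rule in your audit tooling. This is a minimal sketch, assuming the high-risk domains named in this guide (credit, insurance, employment, housing, access to services); the class and function names are hypothetical, and a real audit would record more context than this.

```python
from dataclasses import dataclass

# Decision domains this guide identifies as significantly affecting
# individuals. Illustrative list — confirm against the legislation.
HIGH_RISK_DOMAINS = {
    "credit", "insurance", "employment", "housing", "access_to_services",
}

@dataclass
class AutomatedDecision:
    name: str
    domain: str          # e.g. "credit", "marketing", "recruitment"
    human_in_loop: bool  # genuine human review before the decision applies

def classify_risk(decision: AutomatedDecision) -> str:
    """Rough triage: high-risk decisions need the stricter transparency
    and human-review obligations; low-risk ones face lighter duties."""
    if decision.domain in HIGH_RISK_DOMAINS and not decision.human_in_loop:
        return "high"
    return "low"
```

Running every item from your step-1 inventory through a rule like this gives you a defensible, documented basis for which systems get the heavier compliance treatment.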
Your privacy policy must now explicitly disclose where automated decision-making is used, what types of decisions are made, what data is used as inputs, the logic involved in the decision (in plain language), and how individuals can request a human review. Generic statements like “we may use automated systems” are no longer sufficient. You need specific, meaningful disclosure for each type of automated decision.
For high-risk automated decisions, you must provide a mechanism for individuals to request human review. This means having a clear process, trained staff who can actually review and override automated decisions, and reasonable response timeframes. The review cannot be a rubber stamp — the human reviewer must have the authority and information to genuinely reconsider the decision.
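A human review pathway ultimately needs a queue, a deadline, and a reviewer who can genuinely override the automated outcome. The sketch below shows that shape in miniature; the 30-day response window is a hypothetical placeholder for whatever timeframe your own policy commits to.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ReviewRequest:
    decision_id: str
    requested_at: datetime
    original_outcome: str
    due_by: datetime = field(init=False)

    def __post_init__(self):
        # Hypothetical SLA: respond within 30 days of the request.
        self.due_by = self.requested_at + timedelta(days=30)

class HumanReviewQueue:
    """Minimal human-review pathway: the reviewer sees the original
    outcome and may uphold or override it — not a rubber stamp."""

    def __init__(self):
        self.pending: list[ReviewRequest] = []
        self.resolved: list[tuple[ReviewRequest, str]] = []

    def submit(self, request: ReviewRequest) -> None:
        self.pending.append(request)

    def resolve(self, decision_id: str, reviewer_outcome: str) -> str:
        """Record the reviewer's decision, which replaces the automated
        outcome whether it upholds or overrides it."""
        req = next(r for r in self.pending if r.decision_id == decision_id)
        self.pending.remove(req)
        self.resolved.append((req, reviewer_outcome))
        return reviewer_outcome
```

The important design point is that `resolve` records the reviewer's outcome independently of the original one — an audit trail showing the review was genuine, not automatic confirmation.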
The amendments strengthen requirements around the data that feeds your AI systems. You need to ensure the data is accurate, relevant, up-to-date, and collected with appropriate consent. If your AI system uses data for a purpose beyond what it was originally collected for, you may need fresh consent. Establish regular data quality audits and document your data lineage — where it comes from, how it is processed, and how it feeds into automated decisions.
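Documenting data lineage can be as lightweight as one record per data field, capturing where it came from, why it was collected, and which decisions consume it. The sketch below is illustrative — field names and the secondary-use rule are assumptions, and whether a given secondary use actually requires fresh consent is a legal question, not something code can decide.

```python
from dataclasses import dataclass

@dataclass
class DataLineageRecord:
    field_name: str
    source: str              # where the data was collected from
    collected_for: str       # the purpose stated at collection time
    used_by: list[str]       # automated decisions consuming this field
    last_quality_audit: str  # ISO date of the most recent audit

def flags_secondary_use(record: DataLineageRecord,
                        decision_purpose: str) -> bool:
    """Flag fields whose original collection purpose differs from the
    purpose of the decision now consuming them — candidates for a
    fresh-consent review, not an automatic legal conclusion."""
    return decision_purpose != record.collected_for
```

A periodic sweep over these records — checking `last_quality_audit` dates and running `flags_secondary_use` against each decision — gives you the audit trail the amendments expect.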
For high-risk AI applications, conduct a formal impact assessment that evaluates potential harms, identifies affected individuals, and documents mitigation measures. This is not a one-time exercise — impact assessments should be reviewed whenever the AI system is significantly updated or when you become aware of unintended consequences. Document everything. The OAIC will expect to see evidence of ongoing assessment, not just a tick-box exercise completed once.
Compliance is not just a legal or IT issue — it requires awareness across your organisation. Everyone who uses, manages, or is affected by AI systems needs to understand the new obligations. This includes front-line staff who interact with customers about automated decisions, technical teams who build and maintain AI systems, and leadership who approve AI deployments. Develop training materials, run workshops, and make compliance part of your AI governance framework.
The consequences of non-compliance are significant and have been substantially increased under the amendments. The maximum civil penalty for serious or repeated breaches is now the greater of $50 million, three times the value of the benefit obtained from the breach, or 30% of the organisation’s adjusted turnover in the relevant period.
Beyond financial penalties, the OAIC can issue enforceable undertakings requiring specific remediation actions, infringement notices for less serious breaches, and public determinations that damage your reputation. In practice, the reputational damage from a public privacy breach finding often exceeds the financial penalty.
The OAIC has indicated that it will take a proportionate approach to enforcement — prioritising education and compliance assistance before resorting to penalties. However, this grace period will not last indefinitely, and businesses that have made no effort to comply will face the full force of enforcement when the transition period ends.
Misconception: “This only applies to big tech companies.”
Reality: The Privacy Act applies to all organisations with annual revenue over $3 million, and many smaller organisations depending on their activities. If you use AI to make decisions about individuals — customers, employees, tenants, clients — you are likely covered.
Misconception: “We do not use AI, so this does not affect us.”
Reality: Many businesses use automated decision-making without calling it AI. If you use automated credit checks, algorithmic pricing, automated email filtering that affects customer service, or any software that makes decisions without direct human involvement, you are affected.
Misconception: “We can just add a paragraph to our privacy policy and be done.”
Reality: The amendments require meaningful transparency, not boilerplate language. You need specific disclosures about each type of automated decision, the data used, the logic applied, and how to request human review. Generic statements will not satisfy the OAIC.
Misconception: “We have until December 2026, so we have plenty of time.”
Reality: Nine months may sound like a lot, but implementing meaningful AI governance takes time — especially if you need to audit existing systems, update processes, train staff, and build human review mechanisms. Businesses that start now will be ready. Those that wait until September will scramble.
The maximum penalty for serious or repeated breaches of the Privacy Act is currently $50 million, three times the value of the benefit obtained from the breach, or 30% of adjusted turnover — whichever is greater. While the OAIC is more likely to issue enforcement notices and require remediation for first-time issues, the penalties for deliberate or negligent non-compliance are severe. More practically, non-compliance creates litigation risk, reputational damage, and loss of customer trust.
Yes. If you use a third-party AI tool that makes decisions about individuals — even if you did not build the AI — you are responsible for compliance. This means you need to understand how the tool works, what data it uses, and what decisions it makes. You cannot outsource your compliance obligations to your software vendor.
An automated decision is any decision made by a computer system without meaningful human involvement. This includes fully automated decisions (no human in the loop) and substantially automated decisions (where a human is technically involved but routinely follows the system's recommendation without genuine review). The key test is whether the decision has a significant effect on the individual — affecting their rights, access to services, or treatment.
FlowWorks offers AI governance consulting that covers the full compliance journey. We audit your existing AI usage, classify decisions by risk, update your privacy documentation, build human review mechanisms into your automated workflows, conduct impact assessments, and train your team. We also build compliant AI automation from the ground up — so every new system meets the December 2026 requirements from day one. Visit our AI governance services page to learn more.
The clock is ticking. Businesses that start their compliance journey now will be ready, tested, and confident by December 2026. Those that wait risk scrambling, cutting corners, or facing enforcement action.
FlowWorks can guide you through every stage of that journey, from initial audit to policy updates to training. Book a consultation to understand where your business stands and what you need to do next.
Book a compliance consultation