If your business uses AI to score leads, screen tenants, shortlist job applicants, approve loan applications, or triage customer enquiries, the amended Privacy Act has something specific to say about it. The automated decision-making provisions, introduced through the Privacy and Other Legislation Amendment Act 2024, create enforceable transparency obligations for any organisation that uses AI to make or substantially assist in decisions about individuals.
The compliance deadline is December 2026. That sounds distant, but auditing your AI systems, updating privacy policies, building human review processes, and training staff takes months of sustained effort. Organisations that wait until mid-2026 to start will be scrambling.
This guide explains what automated decision-making means under the Act, which decisions are covered, what you must disclose, how to handle human review requests, and what penalties apply for non-compliance. For the broader picture of how the Privacy Act reforms affect AI use, see our Privacy Act 2026 changes overview.
The amended Privacy Act defines automated decision-making broadly. It covers any decision that is made solely by an automated system, or any decision that is substantially assisted by an automated system, where the decision could reasonably be expected to significantly affect the rights, interests, or wellbeing of an individual.
“Automated system” is not limited to what most people think of as AI. It includes machine learning models, rule-based algorithms, scoring systems, and any computational process that evaluates personal information and produces an output that influences a decision about a person. If a system takes personal information as an input and produces a score, ranking, recommendation, classification, or decision as an output, it is likely covered.
The legislation draws an important distinction between three categories of AI involvement in decisions. Understanding which category your systems fall into determines the level of obligation that applies.
Solely automated decisions. These are decisions made entirely by an AI system with no human involvement at any stage. Examples include automated credit scoring that approves or declines an application without a person reviewing it, or algorithmic tenant screening that ranks applicants and selects one automatically.
Substantially automated decisions. These are decisions where AI generates a recommendation, score, or ranking that a human then acts on. If the human routinely accepts the AI output without conducting an independent assessment, the OAIC treats this as functionally automated. A recruitment system that shortlists candidates via CV screening, which a hiring manager then approves without reviewing the rejected applications, falls into this category.
AI-informed decisions. These are decisions where AI provides one input among several, and a qualified person genuinely weighs the AI output against other factors before deciding. A financial adviser who uses AI-generated market analysis alongside their own research and client knowledge is making an AI-informed decision. These carry lighter obligations, but the AI component must still be disclosed.
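As an illustration of this three-way triage (the category names, function, and parameters below are our own shorthand for the distinctions described above, not statutory terms), the classification logic might be sketched as:

```python
from enum import Enum

class DecisionCategory(Enum):
    SOLELY_AUTOMATED = "solely automated"
    SUBSTANTIALLY_AUTOMATED = "substantially automated"
    AI_INFORMED = "AI-informed"

def classify_decision(human_reviews: bool,
                      human_genuinely_weighs_output: bool) -> DecisionCategory:
    """Rough triage of an AI-assisted decision process.

    Both parameters are illustrative simplifications: whether a human
    "genuinely weighs" the output is a judgment call about independent
    assessment, not a boolean you can read off a system log.
    """
    if not human_reviews:
        # No human involvement at any stage.
        return DecisionCategory.SOLELY_AUTOMATED
    if not human_genuinely_weighs_output:
        # A human who routinely accepts the AI output without independent
        # assessment is treated as functionally automated.
        return DecisionCategory.SUBSTANTIALLY_AUTOMATED
    return DecisionCategory.AI_INFORMED
```

The key design point is that "human-in-the-loop" alone never resolves the category; the second question, about genuine independent assessment, does the real work.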
The provisions apply to any automated decision that could reasonably be expected to “significantly affect” an individual. The OAIC has provided guidance on what this means in practice, and the threshold is lower than many businesses expect.
Decisions clearly covered: Credit scoring and loan approvals, insurance underwriting and claims decisions, employment decisions (hiring, promotion, termination), tenancy applications and rental scoring, healthcare triage and treatment prioritisation, access to government services, and educational assessments.
Decisions likely covered: Customer service chatbots that determine the level of service an individual receives, dynamic pricing that adjusts costs based on customer profiles, marketing personalisation that affects what offers an individual sees, automated warranty and refund decisions, and appointment scheduling algorithms that prioritise certain patients over others.
Decisions unlikely to be covered: Basic spam filtering, internal analytics that do not affect individuals directly, and automated formatting or data entry that does not involve decisions about people. The key test is whether the output of the automated process affects an identifiable individual in a way that matters to them.
The amended Act requires organisations to include specific AI-related disclosures in their privacy policies. These are not optional additions. They are enforceable obligations with penalties for non-compliance.
What personal information AI uses. For each AI system that makes or assists in decisions about individuals, you must disclose the categories of personal information that serve as inputs. This means specifying whether the system uses names, contact details, financial records, employment history, health information, behavioural data, or other categories. “We use personal information for automated processing” is not sufficient.
What decisions AI makes solely. You must identify every decision that is made entirely by an automated system without human involvement. For each one, explain the purpose of the automated decision and the types of outcomes it can produce.
What decisions AI substantially assists. You must also disclose decisions where AI generates a recommendation, score, or output that a human then acts on, particularly where the human routinely follows the AI output. The OAIC has made clear that labelling a process “human-in-the-loop” does not automatically exempt it from the transparency provisions.
The general logic of the AI system. You must be able to explain, in plain language, how the AI system reaches its outputs. This does not require disclosing proprietary algorithms, but it does require explaining the methodology, the key factors that influence outcomes, and any known limitations.
How to request human review. Your privacy policy must clearly state how an individual can request a human review of an automated decision, including who to contact and what the review process involves. For more on structuring your overall AI governance approach, see our AI governance services.
The amended Act gives individuals the right to request that a human review any automated decision that significantly affects them. This is not a suggestion. It is a legally enforceable right, and organisations must establish processes to honour it.
The reviewer must be qualified. The person conducting the review must have sufficient expertise to assess the AI's output critically. A junior staff member clicking “approve” on a screen does not constitute genuine review. The reviewer needs domain knowledge relevant to the decision being reviewed.
The reviewer must have authority. The reviewer must have the actual authority to override the AI's decision. If the review process does not permit overrides, it is not a genuine review.
The reviewer must have access to information. The reviewer needs access to the same information the AI used, plus any additional context the individual provides. They must be able to understand why the AI reached its conclusion and assess whether that conclusion was reasonable.
Reasonable timeframes. The Act does not specify exact timeframes for completing reviews, but the OAIC has indicated that reviews must be completed within a “reasonable period.” For time-sensitive decisions such as credit applications or healthcare triage, this likely means days, not weeks.
Record keeping. You must keep records of human review requests, the review process followed, the outcome, and the reasoning. These records may be requested by the OAIC during an investigation or assessment.
The phrase “significantly affect the rights, interests, or wellbeing” of an individual is the trigger for the automated decision-making provisions. The OAIC has provided guidance on interpreting this test, drawing on both the explanatory memorandum to the legislation and international precedents.
Financial impact. Any decision that affects someone's access to credit, insurance, pricing, or financial services is likely to substantially affect them. This includes loan approvals, insurance premium calculations, and dynamic pricing based on personal profiles.
Employment impact. Decisions about hiring, promotion, performance assessment, rostering, and termination all substantially affect individuals. This covers the full employment lifecycle, from CV screening through to performance management algorithms.
Access to services. If an automated decision determines whether someone receives a service, what level of service they receive, or how quickly they receive it, that is likely to be a substantial effect. Healthcare appointment prioritisation and customer service tiering are clear examples.
Housing. Automated tenant screening, rental application scoring, and property access decisions all substantially affect individuals. Housing is a basic need, and the OAIC has signalled that it considers automated housing decisions to be high-risk.
The cumulative effect test. The OAIC has noted that even decisions that seem minor individually may substantially affect someone when considered cumulatively. If your AI systems collectively build a profile that determines how an individual is treated across multiple touchpoints, the aggregate effect may trigger the provisions even if no single decision seems significant on its own.
The automated decision-making provisions are sector-neutral, but their practical impact varies by industry. Here is how the obligations apply to the sectors where automated decisions are most common.
Financial services. Automated credit scoring, fraud detection algorithms, AI-driven loan pre-approvals, client risk profiling, and anti-money laundering screening. If your practice uses AI to categorise clients by risk level or to flag suspicious transactions, those are automated decisions about individuals.
Legal services. AI-powered document triage that prioritises or deprioritises matters, conflict-of-interest screening, automated client intake questionnaires that determine eligibility for services, and predictive analytics that estimate case outcomes. Even a chatbot that routes enquiries to different practice areas based on the nature of the legal issue is making a decision about an individual.
Real estate. Automated tenant screening and scoring, rental application ranking, AI-generated property valuations used for lending decisions, and algorithmic matching of buyers to properties based on financial profiles. Any system that filters, ranks, or scores prospective tenants or buyers is caught by the provisions.
Healthcare. Appointment prioritisation algorithms, triage chatbots that assess symptoms and direct patients to different care pathways, AI-assisted diagnostic tools, and automated patient risk scoring. If an AI system determines that one patient should be seen before another, or that a patient's symptoms suggest a particular condition, those are decisions that substantially affect the individual.
Recruitment and HR. CV screening and shortlisting algorithms, automated skills assessments, AI-generated candidate rankings, personality profiling tools, and video interview analysis. Recruitment is one of the areas where the OAIC has signalled it will focus enforcement, given the direct impact on individuals' livelihoods.
Retail and e-commerce. Dynamic pricing algorithms that adjust prices based on customer profiles, personalised marketing that targets or excludes individuals based on AI profiling, automated warranty claim decisions, and chatbot-driven returns and refund approvals. If the system decides whether a customer gets a refund without a human reviewing the case, that is an automated decision.
The penalty regime for privacy breaches was significantly strengthened in November 2022, and these enhanced penalties apply to the new automated decision-making provisions. The OAIC has a range of enforcement tools at its disposal, from compliance notices through to civil penalty proceedings.
Serious interference with privacy: up to $50 million, three times the benefit obtained, or 30% of adjusted turnover, whichever is greatest.
Failure to provide automated decision-making transparency: compliance notices from the OAIC, with escalation to civil penalty proceedings for continued non-compliance.
Repeated minor breaches: infringement notices of up to $313,000 per contravention for bodies corporate.
Failure to comply with an OAIC direction: civil penalty proceedings in the Federal Court, plus potential enforceable undertakings.
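The “whichever is greatest” formula for serious interference is simply the maximum of three figures, which can be expressed as:

```python
def max_civil_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Maximum civil penalty for serious interference with privacy:
    the greatest of $50 million, three times the benefit obtained
    from the contravention, or 30% of adjusted turnover."""
    return max(50_000_000, 3 * benefit_obtained, 0.30 * adjusted_turnover)
```

For example, with no quantifiable benefit and $500 million adjusted turnover, the turnover limb governs and the cap is $150 million; for small organisations, the $50 million floor almost always governs.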
Beyond formal penalties, the reputational damage from a publicised enforcement action can be significant. The OAIC publishes its determinations, and media coverage of privacy enforcement is increasing. For a complete view of what compliance looks like across all AI obligations, see our AI compliance checklist for Australian businesses.
Getting compliant with the automated decision-making provisions is a structured process. Here are the ten steps that will take you from wherever you are now to a defensible compliance position before December 2026.
Step 1: Inventory your AI systems. List every tool, platform feature, and custom system that uses AI or algorithms to process personal information. Include features embedded in existing software such as CRM lead scoring, accounting platform fraud detection, and email marketing segmentation. Document what personal information each system accesses and what outputs it generates.
Step 2: Classify each system. For each system, determine whether it makes solely automated decisions, substantially assists decisions, or merely informs decisions. Be honest about whether humans genuinely review AI outputs or simply rubber-stamp them. If a human approves AI recommendations more than 90% of the time without independent assessment, the OAIC is likely to treat that as substantially automated.
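The rubber-stamping heuristic in this step can be checked against your review logs. This sketch assumes a hypothetical log format where each entry records whether the human accepted the AI output without independent assessment:

```python
def acceptance_rate(review_log: list[bool]) -> float:
    """Fraction of AI recommendations a human approved unchanged.

    `review_log` is a hypothetical per-decision record: True where the
    human accepted the AI output without independent assessment.
    """
    return sum(review_log) / len(review_log) if review_log else 0.0

def likely_substantially_automated(review_log: list[bool],
                                   threshold: float = 0.9) -> bool:
    # Above roughly 90% unexamined acceptance, the process is likely to
    # be treated as substantially automated rather than merely AI-assisted.
    return acceptance_rate(review_log) > threshold
```

The threshold is a rule of thumb drawn from the guidance above, not a statutory bright line; a lower acceptance rate does not by itself prove genuine review.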
Step 3: Map your data flows. For each AI system, document what personal information goes in, where it is processed, what happens to it during processing, and what comes out. Include data shared with third-party AI vendors. This mapping is essential for your privacy policy disclosures and for responding to access requests.
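One lightweight way to capture this mapping is a structured record per system. The fields and example values below are illustrative suggestions, not a prescribed format:

```python
# Illustrative data-flow record for one AI system. Field names and the
# example system are our own; adapt them to your inventory.
ai_system_map = {
    "system": "CRM lead scoring",
    "inputs": ["contact details", "behavioural data", "purchase history"],
    "processing_location": "third-party vendor infrastructure",  # hypothetical
    "transformations": "inputs scored against a trained model",
    "outputs": ["lead score 0-100", "priority tier"],
    "third_party_sharing": ["CRM vendor"],
}
```

Keeping one such record per system means your privacy policy disclosures and access-request responses can both be generated from the same source of truth.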
Step 4: Update your privacy policy. Your privacy policy must now disclose: which decisions are made solely by AI, which decisions AI substantially assists, what personal information each AI system uses, the general logic of how the AI reaches its outputs, and how individuals can request human review. Generic statements like “we may use automated systems” are insufficient.
Step 5: Establish human review processes. For every AI system that makes or substantially assists in decisions that significantly affect individuals, designate qualified staff who can conduct genuine reviews. These reviewers must have the authority to override the AI, access to the same information the AI used, and sufficient training to assess the AI's output critically. Document the review process and make it accessible to affected individuals.
Step 6: Document explainability. For each AI system, prepare a plain-language explanation of how it works. You do not need to reveal proprietary algorithms, but you must be able to explain: the type of personal information used as inputs, the general methodology or logic, the factors that most significantly influence the outcome, and any known limitations or biases. Keep these explanations current as systems change.
Step 7: Review vendor contracts. If you use third-party AI tools, ensure your contracts address data handling, processing locations, security standards, and compliance obligations. Confirm that your vendors can support your transparency and explainability requirements. You remain responsible for decisions made using vendor AI, even if the processing happens on their infrastructure.
Step 8: Conduct Privacy Impact Assessments. The OAIC recommends a Privacy Impact Assessment (PIA) for any AI system that processes personal information. A PIA identifies privacy risks, assesses their severity and likelihood, and documents the mitigation measures in place. For high-risk systems such as those making decisions about creditworthiness, employment, or healthcare, a PIA should be considered mandatory.
Step 9: Train your staff. Staff who use AI tools need to understand their obligations under the amended Act. This includes knowing which systems are classified as automated decision-making, when and how to conduct genuine human reviews, how to respond to individuals who request information about AI decisions, and how to report concerns about AI system behaviour.
Step 10: Monitor and maintain. Compliance is not a one-off exercise. Schedule quarterly reviews of your AI systems to check for accuracy, bias, and alignment with stated purposes. Monitor whether human review processes are functioning as intended. Update your privacy policy and internal documentation whenever systems change. Keep records of your compliance activities.
Your privacy policy is the primary vehicle for meeting the automated decision-making transparency obligations. Here is what needs to change.
Add a dedicated AI and automated decision-making section. Do not bury AI disclosures in general data processing descriptions. Create a clearly labelled section that individuals can find easily. Title it something like “How we use AI and automated systems” or “Automated decision-making.”
List each AI system individually. For each automated decision-making system, describe: the purpose of the system, the type of personal information it uses, whether the decision is made solely by AI or substantially assisted by AI, the general logic of how the system works, and the potential outcomes of the decision.
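As a sketch of how these per-system disclosures could be kept consistent across a policy, here is a simple template renderer. The template wording and field names are our own, not prescribed by the Act or the OAIC:

```python
# Illustrative disclosure template; wording and fields are our own.
DISCLOSURE_TEMPLATE = (
    "Purpose: {purpose}\n"
    "Personal information used: {inputs}\n"
    "Decision mode: {mode}\n"
    "General logic: {logic}\n"
    "Possible outcomes: {outcomes}\n"
)

def render_disclosure(purpose: str, inputs: list[str], mode: str,
                      logic: str, outcomes: list[str]) -> str:
    """Render one per-system disclosure covering the five elements the
    policy must describe: purpose, inputs, decision mode, logic, outcomes."""
    return DISCLOSURE_TEMPLATE.format(
        purpose=purpose,
        inputs=", ".join(inputs),
        mode=mode,  # e.g. "made solely by AI" or "substantially assisted by AI"
        logic=logic,
        outcomes=", ".join(outcomes),
    )
```

Generating each disclosure from the same template makes it easy to verify that no system is missing an element, and to regenerate the section whenever a system changes.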
Include a human review request process. Clearly state how an individual can request human review of an automated decision. Provide a specific contact method (email address, phone number, or online form) and indicate the expected timeframe for a response.
Use plain language. The OAIC has repeatedly emphasised that privacy policies must be written in language that an average person can understand. Avoid legal jargon, technical terminology, and unnecessarily complex sentence structures. If your privacy policy requires a law degree to understand, it does not meet the standard.
Keep it current. Your privacy policy is a living document. Every time you add, remove, or modify an AI system that affects individuals, update the policy. Schedule quarterly reviews to ensure the policy accurately reflects your current AI systems. For broader guidance on building internal AI policies, see our AI usage policy template.
Need help preparing for the automated decision-making provisions? Our AI governance service helps Australian businesses audit their AI systems, build compliant transparency disclosures, and establish human review processes. We handle the complexity so you can focus on running your business.
Explore AI governance services