When your AI hiring tool automatically filters out applicants over 50, that is age discrimination. When your AI chatbot gives shorter, less helpful responses to customers with non-standard English, that is indirect racial discrimination. When your AI pricing algorithm charges higher premiums in postcodes with higher migrant populations, that is potentially unlawful too.
The uncomfortable truth about AI discrimination is that it usually happens without anyone intending it. AI systems learn patterns from historical data, and historical data contains the biases of the humans who created it. If your training data reflects decades of hiring preferences that favoured certain demographics, the AI will replicate those preferences. It does not know it is discriminating. It is simply optimising for the patterns it was trained on.
Australian anti-discrimination law does not care about intent. It cares about outcomes. If your AI system produces discriminatory outcomes, you are liable. This applies whether you built the AI yourself, bought it from a vendor, or use a free tool. Research from the University of Melbourne confirms that algorithmic discrimination in recruitment is already a documented problem in Australia. Here is what SMEs need to understand.
- Racial Discrimination Act 1975 (Cth): covers race, colour, descent, national or ethnic origin
- Sex Discrimination Act 1984 (Cth): covers sex, gender identity, intersex status, sexual orientation, pregnancy
- Disability Discrimination Act 1992 (Cth): covers physical, intellectual, psychiatric, sensory, neurological, and learning disabilities
- Age Discrimination Act 2004 (Cth): covers discrimination based on age in employment, education, and services
These four Commonwealth Acts, plus the Fair Work Act 2009 and equivalent state and territory legislation, create a comprehensive framework that applies to AI. The key principle is simple: if a human cannot lawfully make a decision based on a protected characteristic, neither can an AI system deployed by that human or their business.
Both direct and indirect discrimination are covered. Direct discrimination occurs when an AI system explicitly treats someone differently because of a protected characteristic. Indirect discrimination occurs when AI applies a seemingly neutral condition or requirement that has a disproportionate impact on people with a protected characteristic. Most AI discrimination is indirect, which makes it harder to detect but no less unlawful.
AI hiring bias is the most documented area of AI discrimination. Resume screening tools trained on historical hiring data often penalise women (because historically fewer women were hired in certain roles), older workers (because the training data skews toward younger hires), people with disabilities (because employment gaps or non-traditional career paths are flagged as negative signals), and ESL speakers (because natural language processing struggles with non-standard grammar).
Jobs and Skills Australia has warned that AI recruitment is becoming the norm in Australia and risks leaving real talent behind. University of South Australia research confirms that algorithmic bias in hiring is measurable and significant. Already, 62% of Australian organisations use AI in some part of their recruitment process.
AI customer service systems can discriminate in several ways. Language models perform better with standard Australian English, creating a two-tier service experience. Voice AI systems may struggle with certain accents, leading to worse outcomes for customers from non-English-speaking backgrounds. Chatbots may provide different quality responses based on postcode, name, or communication style.
AI pricing algorithms that use postcode as a variable can indirectly discriminate by race or ethnicity, because some postcodes correlate strongly with specific demographic groups. AI credit scoring that considers employment history can discriminate against people with disabilities who have employment gaps. Insurance pricing algorithms that use health data can discriminate against people with certain disabilities or genetic conditions.
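Proxy effects like this can often be surfaced with a quick statistical check. The sketch below joins quoted premiums to public demographic data by postcode and measures the correlation; all column names, figures, and the join itself are hypothetical, and a real analysis would use your own pricing logs alongside ABS Census tables.

```python
import pandas as pd

# Hypothetical pricing data: average quoted premium by postcode.
quotes = pd.DataFrame({
    "postcode":    [2000, 2166, 2170, 2750, 3000, 3175],
    "avg_premium": [820, 1140, 1095, 870, 805, 1120],
})

# Hypothetical demographic data (in practice, sourced from ABS Census).
census = pd.DataFrame({
    "postcode":          [2000, 2166, 2170, 2750, 3000, 3175],
    "overseas_born_pct": [38, 62, 58, 30, 40, 60],
})

# Join on postcode and measure how premiums track demographic makeup.
merged = quotes.merge(census, on="postcode")
corr = merged["avg_premium"].corr(merged["overseas_born_pct"])
print(f"premium vs overseas-born share: r = {corr:.2f}")

# A strong correlation does not prove unlawful discrimination, but it
# does tell you postcode is acting as a demographic proxy and the
# pricing model warrants closer scrutiny.
```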
AI-driven ad targeting can exclude protected groups from seeing opportunities. If your AI marketing tool targets job ads only to people aged 25 to 40, or shows financial products primarily to specific ethnic groups based on browsing data, this constitutes discrimination in the provision of services. The fact that the AI made the targeting decision autonomously does not shift liability away from the advertiser.
Direct discrimination is straightforward: the AI system uses a protected characteristic as an explicit factor. If an AI hiring tool filters out applicants whose resume mentions "maternity leave" or "disability pension", that is direct discrimination based on sex and disability respectively.
Indirect discrimination is more subtle and far more common with AI. It occurs when a seemingly neutral requirement disproportionately affects people with a protected characteristic. For example, an AI hiring tool that requires "unbroken employment history" indirectly discriminates against women (who may have career breaks for childcare) and people with disabilities (who may have health-related gaps). The requirement appears neutral but its impact is not.
Indirect discrimination is only lawful if the requirement is "reasonable" in the circumstances. Using AI to screen for unbroken employment history would need to be justified as genuinely necessary for the role, not merely a convenient proxy for reliability.
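To make "disproportionate impact" concrete, here is a small worked example with invented numbers: it computes pass rates for an employment-history screen and the ratio between the worst and best rates. The 0.8 threshold borrows the US "four-fifths" benchmark purely as an illustration; Australian law applies a reasonableness test, not a fixed ratio.

```python
# Worked example: does an "unbroken employment history" screen have a
# disparate impact? All numbers below are invented for illustration.
screened = {
    # group: (passed_screen, total_applicants)
    "men":   (160, 200),   # 80% pass rate
    "women": (110, 200),   # 55% pass rate
}

rates = {g: passed / total for g, (passed, total) in screened.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"pass rates: {rates}")               # {'men': 0.8, 'women': 0.55}
print(f"impact ratio: {impact_ratio:.2f}")  # 0.69

# A ratio well below 1.0 signals disproportionate impact. (The 0.8
# benchmark comes from US practice; Australian law has no fixed numeric
# test, but a gap this large would need genuine justification under the
# "reasonableness" requirement.)
if impact_ratio < 0.8:
    print("Disproportionate impact - review whether the screen is reasonable")
```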
When you buy an AI tool from a vendor, who is responsible if it discriminates? Under Australian law, the answer is clear: you are. The business that deploys the AI system bears the liability, not the company that built it. Your contract with the AI vendor may include indemnities, but these do not protect you from regulatory action or complaints to the Australian Human Rights Commission.
This means due diligence on AI vendors matters. Before adopting an AI tool that makes decisions about people, ask the vendor what bias testing they have conducted, what data the model was trained on, whether it has been validated for the Australian context, and what ongoing monitoring is in place. If the vendor cannot answer these questions, that is a red flag.
AI insurance considerations are closely linked to discrimination risk. Your professional indemnity and public liability policies may not explicitly cover discrimination claims arising from AI decisions, so it is worth confirming with your broker or insurer exactly what is covered before an incident occurs.
1. Map your AI decision points. Identify every place AI makes or influences decisions about people: hiring, customer service, pricing, marketing targeting, performance assessment, and access to services. These are your risk points.
2. Test for disparate outcomes. For each AI decision point, check whether outcomes differ across protected characteristics. Do acceptance rates differ across age groups? Are customer satisfaction scores lower for customers from certain backgrounds? Quantify the differences (a minimal monitoring sketch follows this list).
3. Maintain human oversight. Never let AI make final decisions about people without human review. This is both a legal safeguard and a practical one. Human reviewers can catch discriminatory patterns that automated monitoring might miss.
4. Document everything. Keep records of your AI vendors, what bias testing was done, what monitoring is in place, and what actions you took when issues were identified. This documentation is your defence if a complaint is made.
5. Include AI in your compliance checklist. Anti-discrimination should be a standing item in your AI governance framework, not an afterthought. Review AI systems for bias at least annually, or whenever the system is updated or retrained.
6. Train your team. Staff who use AI tools need to understand that AI can produce discriminatory outputs and that they have a responsibility to flag concerns. A culture where people feel comfortable raising issues about AI bias is more effective than any compliance checklist.
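For step 2, a monitoring check can be as simple as comparing favourable-outcome rates across groups at each decision point. This is a minimal sketch with an invented decision log and an illustrative threshold; adapt the decision points, groups, and column names to your own systems.

```python
import pandas as pd

# Hypothetical decision log: one row per AI-influenced decision.
# Column names, decision points, and groups are all illustrative.
log = pd.DataFrame({
    "decision_point": ["hiring", "hiring", "hiring", "hiring", "pricing", "pricing"],
    "group":          ["under_40", "over_40", "under_40", "over_40", "metro", "regional"],
    "favourable":     [1, 0, 1, 1, 1, 0],
})

# Favourable-outcome rate per group at each decision point.
rates = log.groupby(["decision_point", "group"])["favourable"].mean()

# Flag any decision point where the worst-off group's rate falls below
# 80% of the best-off group's rate (an illustrative threshold only).
for point, grp_rates in rates.groupby(level=0):
    ratio = grp_rates.min() / grp_rates.max()
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{point}: impact ratio {ratio:.2f} -> {status}")
```

Run against a real decision log, a check like this belongs in the annual (or post-retraining) review described in step 5, with flagged decision points escalated to human reviewers under step 3.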
Australia does not yet have AI-specific anti-discrimination legislation. But the existing framework already covers AI discrimination through the four Commonwealth Acts plus the Fair Work Act. The Australian Human Rights Commission has been increasingly active on AI issues, and the Office of the Australian Information Commissioner (OAIC) is addressing AI through privacy and automated decision-making frameworks.
The direction of travel is clear: more regulation, not less. The NSW Digital Work Systems Act already requires employers to consider how digital systems affect workers. The Privacy Act reforms introduce automated decision-making transparency obligations by December 2026. Businesses that address AI discrimination proactively will be well ahead of those scrambling to comply when new rules arrive.
Our Free AI Audit helps identify where your business uses AI, where the compliance gaps are, and what to prioritise.