Compliance · March 2026 · 12 min read

AI and Hiring Bias: What Australian Employers Must Avoid

Job interview recruitment. Photo by Vitaly Gariev on Pexels

62% of Australian organisations now use AI in some part of their recruitment process, according to Jobs and Skills Australia. Resume screening, candidate ranking, video interview analysis, and automated assessments are standard at many companies. The promise is efficiency: screen 500 applications in minutes instead of hours.

The problem is that these systems discriminate. University of South Australia research found that AI recruitment tools systematically disadvantage women, older workers, people with disability, and candidates from non-English-speaking backgrounds. Bayside Group reported similar findings. AKS Law published a detailed analysis of the legal risks.

Jobs and Skills Australia put it bluntly: as AI recruitment becomes the norm, Australia risks leaving real talent behind. If you are using AI in hiring, you need to understand the bias risks and your legal exposure.

How AI Recruitment Bias Actually Works

62% of Australian organisations use AI in recruitment
4+ federal anti-discrimination laws apply to AI hiring decisions
December 2026: automated decision-making transparency obligations take effect

AI does not intend to discriminate. The bias is structural, baked into the training data and the criteria the system optimises for.

Historical data bias. If your past hiring data shows you predominantly hired men for engineering roles, an AI trained on that data will learn that male candidates are “better” for those roles. It is not making a conscious decision. It is pattern-matching against biased historical outcomes. Amazon discovered this the hard way when their AI hiring tool systematically downgraded resumes containing the word “women’s” (as in “women’s chess club captain”).

Proxy discrimination. Even when protected characteristics (age, gender, ethnicity) are removed from the data, AI finds proxies. Suburb of residence correlates with socioeconomic status and ethnicity. University attended correlates with age and class. Hobbies and interests can signal gender. The AI does not need to see your age to discriminate based on it.
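One way to see whether proxy leakage is happening in your own data is to test how well the "neutral" features your screener uses can predict a protected attribute. Here is a minimal sketch of that idea; the file and column names are hypothetical, and the point is the technique, not the specific fields. If a simple model beats the majority-class baseline by a wide margin, those features are carrying the protected attribute.

```python
# A minimal proxy-leakage check: can a simple model recover a protected
# attribute (here, gender) from the features the screening tool sees?
# File name and column names are hypothetical examples.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

applicants = pd.read_csv("applicants.csv")  # hypothetical historical data

# Features the screening tool actually uses -- no protected attributes.
neutral_features = ["suburb", "university", "years_experience", "hobbies"]
X = pd.get_dummies(applicants[neutral_features])
y = applicants["gender"]  # protected attribute, used only for this test

# If accuracy sits well above the majority-class baseline, the "neutral"
# features encode the protected attribute and can drive proxy discrimination.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
baseline = y.value_counts(normalize=True).max()
print(f"Proxy model accuracy: {scores.mean():.2f} vs baseline {baseline:.2f}")
```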

Language and communication bias. AI tools that assess written communication or video interviews often penalise non-native English speakers, people with speech impediments, neurodiverse candidates, and anyone who communicates differently from the “standard” the system was trained on.

Disability discrimination. Video analysis tools that assess body language, eye contact, and facial expressions can systematically disadvantage candidates with physical disabilities, autism spectrum conditions, or anxiety disorders. The AI reads their natural behaviour as “poor communication” when it is simply different communication.

Your Legal Obligations

The critical legal principle is this: you cannot outsource liability to an AI system or an AI vendor. If your AI recruitment tool discriminates, you are liable. Not the software company. Not the algorithm. You.

The relevant legislation includes the Racial Discrimination Act 1975, the Sex Discrimination Act 1984, the Age Discrimination Act 2004, the Disability Discrimination Act 1992, and the Fair Work Act 2009. All of these apply to hiring decisions regardless of whether the decision was made by a human, an AI, or a combination of both.

From December 2026, the amended Privacy Act adds specific transparency obligations for automated decision-making. You will need to disclose in your privacy policy what decisions are made by computer programs and how those decisions substantially affect individuals. Recruitment screening is a textbook example.

The OAIC guidance already recommends disclosing AI involvement in recruitment. While not yet mandatory in all contexts, getting ahead of the December 2026 deadline protects you from both legal risk and reputational damage.

How to Use AI in Recruitment Responsibly

AI in recruitment is not inherently wrong. Used properly, it can actually reduce bias by applying consistent criteria and removing human biases like the “halo effect” or affinity bias. The key is implementation.

Audit your tools regularly. At least annually, compare AI screening outcomes across demographic groups. If 80% of female candidates are screened out at stage one but only 40% of male candidates, you have a problem. Your vendor should be able to provide bias testing data. If they refuse, switch vendors.
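Here is a minimal sketch of that audit, using the figures above. The four-fifths threshold is a common adverse-impact heuristic borrowed from US practice, not an Australian legal test; treat any group falling below it as a prompt for investigation.

```python
# Compare stage-one pass rates per demographic group and flag any group
# whose rate falls below four-fifths of the best group's rate.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, passed_screening) pairs from one screening round."""
    totals, passed = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        passed[group] += ok  # bool counts as 0 or 1
    return {g: passed[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# The article's example: 80% of women screened out vs 40% of men.
outcomes = ([("female", True)] * 20 + [("female", False)] * 80
            + [("male", True)] * 60 + [("male", False)] * 40)
rates = selection_rates(outcomes)   # {'female': 0.2, 'male': 0.6}
print(flag_disparity(rates))        # ['female'] -- 0.2/0.6 is well below 0.8
```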

Keep humans in the loop. Never let AI make the final hiring decision. Use AI to assist with screening and shortlisting, but ensure a human reviews every candidate the AI rejects. This is good practice today and aligns with where automated decision-making regulation is heading.
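A minimal sketch of that routing rule, with a hypothetical score cutoff and candidate fields: the AI can promote candidates to a shortlist, but it can never finally reject anyone.

```python
# AI may shortlist; every AI rejection goes to a mandatory human queue.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float  # 0.0-1.0 from the screening tool

def route(candidates: list[Candidate], cutoff: float = 0.7):
    """Split into an AI-assisted shortlist and a human-review queue."""
    shortlist = [c for c in candidates if c.ai_score >= cutoff]
    human_review = [c for c in candidates if c.ai_score < cutoff]
    return shortlist, human_review

shortlist, human_review = route([Candidate("A. Nguyen", 0.82),
                                 Candidate("B. Smith", 0.41)])
# Nobody in `human_review` is declined until a recruiter has looked.
```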

Be transparent with candidates. Tell applicants that AI is involved in the screening process. Explain what the AI assesses and provide an alternative pathway for candidates who want human-only review. This is increasingly expected and will be legally required from December 2026.

Review your criteria. The AI is only as fair as the criteria you give it. Review what the system screens for and whether those criteria are genuinely job-relevant or merely proxies for protected characteristics. “Cultural fit” is notoriously biased. “Demonstrated ability to perform the core tasks listed in the role description” is much fairer.

Provide reasonable adjustments. If your AI tool includes video interviews, timed assessments, or communication analysis, offer alternative formats for candidates with disability. Under the Disability Discrimination Act, failure to make reasonable adjustments is itself discrimination.

Document everything. Keep records of your AI tool selection, bias testing results, audit findings, and the steps you have taken to mitigate risk. If a discrimination claim is made, your documentation is your defence.
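One way to keep those records producible on demand is a structured audit log. This sketch uses field names of our own choosing rather than any prescribed format, and the tool and vendor names are placeholders.

```python
# Append one structured record per bias audit so the trail is queryable
# if a discrimination claim is ever made. Field names are suggestions only.
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class BiasAuditRecord:
    tool: str
    vendor: str
    audit_date: str
    groups_compared: list[str]
    selection_rates: dict[str, float]
    findings: str
    mitigations: list[str] = field(default_factory=list)

record = BiasAuditRecord(
    tool="ResumeScreener vX",        # hypothetical tool name
    vendor="ExampleVendor Pty Ltd",  # hypothetical vendor
    audit_date=str(date.today()),
    groups_compared=["female", "male"],
    selection_rates={"female": 0.20, "male": 0.60},
    findings="Female pass rate below 4/5 of male rate at stage one",
    mitigations=["Removed suburb feature", "Re-weighted training data"],
)
with open("bias_audits.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```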

Is Your Business AI-Compliant?

Our Free AI Audit includes a compliance assessment covering recruitment, privacy, and governance requirements for Australian businesses.

Frequently Asked Questions

Can AI recruitment tools discriminate?

Yes. AI recruitment tools can discriminate against women, older workers, people with disability, non-native English speakers, and candidates from certain ethnic backgrounds. The discrimination is often unintentional, embedded in the training data or the criteria the system is optimised for. Amazon famously abandoned an AI hiring tool that systematically downgraded female candidates.

Am I liable if my AI recruitment tool discriminates?

Yes. Under Australian law, the employer remains liable for discriminatory hiring outcomes regardless of whether a human or an AI system made the decision. The Racial Discrimination Act, Sex Discrimination Act, Age Discrimination Act, Disability Discrimination Act, and Fair Work Act all apply. You cannot outsource liability to an AI vendor.

How can I detect bias in my AI recruitment tool?

Run regular audits comparing outcomes across demographic groups. Check whether the AI screens out a disproportionate number of candidates from particular age groups, genders, or cultural backgrounds. Test the system with identical resumes where only demographic indicators change. Ask your vendor for bias testing documentation. If they cannot provide it, that is a red flag.
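A minimal sketch of that paired-resume test follows. The score_resume function is a toy stand-in for whatever scoring hook your tool or vendor exposes, not a real API; the swap list is illustrative.

```python
# Score a resume, swap only demographic indicators, score it again.
# A consistent nonzero gap across many resumes means the tool is keying
# on demographic signals rather than job-relevant content.
import re

def score_resume(text: str) -> float:
    # Toy keyword scorer so the sketch runs end-to-end; replace with the
    # real scoring call your tool exposes.
    keywords = ["led", "managed", "python"]
    return sum(text.lower().count(k) for k in keywords)

SWAPS = [("Michael", "Michelle"), ("his", "her"), ("he", "she")]

def counterfactual_gap(resume: str) -> float:
    swapped = resume
    for a, b in SWAPS:
        swapped = re.sub(rf"\b{a}\b", b, swapped)  # whole words only
    return score_resume(resume) - score_resume(swapped)

print(counterfactual_gap("Michael led the data team; he managed 5 engineers."))
```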

Should I tell candidates that AI is used in screening?

Yes, and you may soon be legally required to. The amended Privacy Act includes automated decision-making transparency obligations taking effect December 2026. Even before that deadline, best practice and the OAIC guidance recommend disclosing AI involvement in recruitment. Many candidates already suspect it and appreciate transparency.

FlowWorks Team
AI Automation & Consulting · Melbourne, Australia
Get started

Find out what's costing your business the most.

A 30-minute conversation. No pitch. No obligation. We'll identify your highest-impact automation opportunities before you spend a dollar.

Get your AI Readiness Review
1300 484 044 · ops@flowworks.com.au · 470 St Kilda Rd, Melbourne VIC 3004