Nearly half of Australian workplaces now use some form of AI to monitor staff, and 64% use automated performance assessment tools. The technology has outpaced the law, and most employers have no idea where the legal boundaries sit.
WorkTime’s 2026 analysis of Australian employee monitoring laws identified AI monitoring as one of the questions employers ask most often. Dentons published a detailed legal analysis, and the OAIC updated its workplace monitoring guidance. The message from all three is the same: you can monitor, but the rules are stricter than you think.
Getting this wrong is not just a legal risk. Overly aggressive monitoring destroys trust, increases turnover, and can create the exact productivity problems you were trying to solve. This guide covers what the law actually says, state by state, and how to implement AI monitoring that is both legal and effective.
~50% of Australian workplaces use AI-powered monitoring tools
64% use automated performance assessment and tracking tools
14 days minimum written notice required in NSW before surveillance begins
Australia does not have a single national workplace surveillance law. Instead, you are dealing with a patchwork of state and territory legislation, the federal Privacy Act, and general employment law principles. The rules depend on where your employees work, not where your business is headquartered.
The Workplace Surveillance Act 2005 (NSW) is the most detailed piece of workplace monitoring legislation in Australia. It covers three types of surveillance: camera, computer, and tracking.
For computer surveillance (which covers most AI monitoring tools), employers must give employees at least 14 days' written notice before monitoring begins. The notice must specify the kind of surveillance, how it will be carried out, when it will start, and whether it will be continuous or intermittent.
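The 14-day clock is easy to miscount when rolling monitoring out across multiple teams. As a minimal sketch, assuming the period runs from the day after notice is given (confirm the exact counting rule with your legal adviser), the earliest lawful start date can be computed like this:

```python
from datetime import date, timedelta

NOTICE_PERIOD_DAYS = 14  # minimum written notice under the NSW Act

def earliest_monitoring_start(notice_given: date) -> date:
    """Earliest date computer surveillance may begin.

    Assumes the 14-day period starts the day after notice is given;
    verify the counting rule for your situation with legal counsel.
    """
    return notice_given + timedelta(days=NOTICE_PERIOD_DAYS + 1)

# Notice given 2 March 2026 -> monitoring may begin 17 March 2026
print(earliest_monitoring_start(date(2026, 3, 2)))  # → 2026-03-17
```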
Covert surveillance is only permitted with an authority from a magistrate, and only when the employer has reasonable grounds to suspect an employee is involved in unlawful activity. You cannot use covert monitoring to check productivity or investigate policy breaches.
The NSW Digital Work Systems Act 2026 adds another layer, requiring employers to consider how digital monitoring systems may create psychosocial hazards for workers.
The Workplace Privacy Act 2011 (ACT) takes a similar approach to NSW but uses different terminology and thresholds. Employers must establish a workplace privacy policy, consult with employees, and provide reasonable notice before introducing surveillance.
The ACT legislation explicitly covers visual, data, and tracking surveillance. Unlike the NSW Act, it does not specify a minimum notice period, but the consultation requirement means you cannot simply announce monitoring and start the next day.
Victoria, Queensland, South Australia, Western Australia, Tasmania, and the Northern Territory do not have specific workplace surveillance legislation. This does not mean you can do whatever you want. The federal Privacy Act 1988 applies nationwide, and the Australian Privacy Principles (APPs) regulate how personal information (including monitoring data) is collected, used, and stored.
The OAIC’s guidance makes clear that employee monitoring data is personal information under the Privacy Act. You must have a legitimate purpose, collect only what is necessary, store it securely, and give employees access to their own data on request. The general protections provisions of the Fair Work Act also apply, meaning monitoring cannot be used to discriminate, bully, or take adverse action against an employee.
Modern AI monitoring tools go far beyond checking whether someone is at their desk. The sophistication of these systems catches many employers off guard when they realise what they have actually deployed.
Keystroke and activity logging records every key pressed, application opened, website visited, and file accessed. AI analyses patterns to generate productivity scores and flag anomalies.
Screen capture and recording takes periodic screenshots or continuous video of employee screens. AI can analyse these for content, flagging sensitive data, off-task activity, or policy violations.
Communication analysis scans emails, chat messages, and meeting transcripts for sentiment, tone, and keywords. Some systems claim to predict employee disengagement or flight risk from communication patterns.
Location and movement tracking uses GPS, Wi-Fi, and Bluetooth to track where employees are during work hours. AI analyses movement patterns, time spent in locations, and proximity to other employees.
Automated performance scoring combines data from multiple sources to generate performance scores, efficiency metrics, and productivity rankings. 64% of Australian workplaces now use some form of automated performance assessment.
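To make that last category concrete, here is a minimal sketch of how such tools typically aggregate signals into a single score. The metric names, weights, and normalisation are hypothetical, not any vendor's actual method:

```python
# Illustrative only: signal names and weights are hypothetical.
def performance_score(signals: dict[str, float],
                      weights: dict[str, float]) -> float:
    """Weighted average of activity signals normalised to 0-1."""
    total_weight = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total_weight

signals = {"active_hours": 0.8, "tasks_closed": 0.6, "response_time": 0.9}
weights = {"active_hours": 0.2, "tasks_closed": 0.5, "response_time": 0.3}
print(round(performance_score(signals, weights), 2))  # → 0.73
```

Note that the weighting is invisible to the employee being scored, which is exactly why the human review obligation discussed below matters: a score like this is an opaque proxy, not a performance assessment.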
If you are implementing or already using AI monitoring tools, here is what you need to have in place.
1. Legitimate business purpose. Document why you are monitoring. Protecting confidential data, ensuring workplace safety, and complying with regulatory obligations are legitimate. Micromanaging how fast someone types is not.
2. Proportionality. The monitoring must be proportionate to the risk. Monitoring a finance team’s access to banking systems is proportionate. Recording every second of every employee’s screen all day is unlikely to be.
3. Written policy. Create a clear workplace AI and monitoring policy that explains what is monitored, why, how the data is used, who has access, how long it is retained, and how employees can raise concerns.
4. Notice and consent. In NSW, provide 14 days' written notice. In other states, provide reasonable notice. Update employment contracts for new hires. Get written acknowledgement from existing staff.
5. Data security. Monitoring data is sensitive personal information. Store it securely, limit access to authorised personnel, and have a retention and destruction schedule. The Privacy Act requires you to destroy personal information when it is no longer needed.
6. Review and audit. Review your monitoring practices at least annually. Check whether the data being collected is still necessary. Audit who has accessed monitoring data and why. The AI compliance checklist provides a broader framework.
7. Human review. Never use AI monitoring as the sole basis for disciplinary action, termination, or performance reviews. The automated decision-making provisions of the amended Privacy Act require human involvement in decisions that substantially affect individuals. Always have a human review AI-generated insights before acting on them.
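Of the steps above, the retention and destruction schedule (step 5) is the one most easily automated. A minimal sketch of a purge routine, assuming a hypothetical 90-day retention period and records stamped with a `collected_at` datetime:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # hypothetical -- set this from your written policy

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only monitoring records within the retention window.

    The Privacy Act requires destroying personal information
    once it is no longer needed for a permitted purpose.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["collected_at"] >= cutoff]

stale = {"collected_at": datetime.now(timezone.utc) - timedelta(days=200)}
fresh = {"collected_at": datetime.now(timezone.utc)}
print(len(purge_expired([stale, fresh])))  # → 1
```

In practice this would run on a schedule against your monitoring data store, with the purge itself logged so your annual audit (step 6) can confirm the policy is actually being enforced.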
Compliance is the floor, not the ceiling. The bigger risk with AI monitoring is what it does to your workplace culture.
Research consistently shows that excessive monitoring decreases productivity rather than increasing it. Employees who feel watched become anxious, risk-averse, and less creative. They optimise for the metrics being tracked rather than the outcomes that matter. If you measure keystrokes, people will type more. That does not mean they are producing better work.
The businesses that get monitoring right use it sparingly and transparently. They monitor systems and data, not people. They track outcomes, not activity. They use AI to identify risks (security breaches, compliance failures) rather than to rank employee productivity.
If employees discover they have been monitored without their knowledge, the trust damage is irreversible. No amount of productivity data is worth losing your best people.
Our Free AI Audit assesses your current AI practices against compliance requirements, so you can identify gaps before they become problems.