If you sit on a board in Australia, AI is already your problem. Not because you chose to adopt it, but because your organisation is almost certainly using it, whether through official channels or through staff using ChatGPT on their phones to draft emails and analyse data.
The governance question is not whether to allow AI. It is whether you have oversight of how AI is being used, what risks it creates, and whether your organisation is compliant with existing laws that apply to AI. Because here is the uncomfortable truth: directors can be personally liable if AI systems cause harm and the board failed to exercise reasonable oversight.
Australia’s approach to AI governance is standards-led, meaning the government is using voluntary frameworks and existing legislation rather than rushing to pass a standalone AI Act. For directors, this means you cannot wait for an “AI law” to tell you what to do. You need to act now based on existing obligations.
AI is not a technology issue. It is a risk management, compliance, and strategic governance issue. Every AI system your organisation uses makes or influences decisions that affect customers, employees, and stakeholders. When those decisions go wrong, the accountability trail leads to the board.
Privacy risk. Recent amendments to the Privacy Act 1988 require transparency when automated decision-making substantially affects individuals, and those requirements take effect in December 2026. If your organisation uses AI to assess customer applications, screen job candidates, or personalise services, you will have disclosure obligations, and preparing for them demands board-level awareness now.
Discrimination risk. 62% of Australian organisations use AI in recruitment, and these systems have documented biases against women, older workers, and people from non-English speaking backgrounds. The Racial Discrimination Act, Sex Discrimination Act, Age Discrimination Act, and Fair Work Act all apply regardless of whether a human or an AI made the biased decision.
Workplace safety risk. The NSW Digital Work Systems Act 2026 specifically requires organisations to consider how digital systems (including AI) create psychosocial hazards for workers. This is likely to be adopted by other states.
Reputational risk. When Deloitte partially refunded a $290,000 fee to the Australian government after AI fabricated academic references in a report, the reputational damage extended well beyond the financial cost. A Victorian solicitor was disciplined for submitting AI-hallucinated case citations. These failures reflect on leadership, not just on the individuals involved.
Australia does not have a standalone AI Act. The government’s position, articulated through the Department of Industry, Science and Resources, is a “standards-led” approach. This means relying on voluntary frameworks, existing legislation, and industry standards rather than creating new AI-specific regulation.
The key frameworks directors should know about include the AI Ethics Principles (eight principles published by the Australian Government), the AI6 Framework (six essential practices for responsible AI from the National AI Centre), ISO 42001 (the international standard for AI management systems), and the Responsible AI Index, which benchmarks Australian organisations against responsible AI practice.
The Responsible AI Index 2025 scored Australia at 43 out of 100, with 65% of organisations still in early stages of AI governance. This means most boards are behind, but it also means there is still time to get ahead of the curve. Our comprehensive AI governance guide covers the full landscape.
Most boards cannot answer the most basic question: what AI is in use across the organisation? Staff are using AI tools that were never formally approved. Customer-facing AI may have been deployed by the marketing team without governance review. Third-party vendors may be using AI in services you procure. Start by creating an AI register that lists every AI system, its purpose, what data it accesses, and who is accountable for it.
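As a minimal sketch of what such a register can look like in practice, here is an illustrative Python structure; the field names (system_name, accountable_owner, and so on) are assumptions for the example, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRegisterEntry:
    """One row in an AI register (field names are illustrative only)."""
    system_name: str           # e.g. "third-party CV screening tool"
    purpose: str               # what the system does or influences
    data_accessed: list[str]   # categories of data the system can read
    accountable_owner: str     # the named person answerable for the system
    governance_approved: bool  # whether it passed a formal review
    last_reviewed: date        # date of the most recent compliance review

register = [
    AIRegisterEntry(
        system_name="Recruitment screening tool",
        purpose="Shortlisting job applicants",
        data_accessed=["CVs", "application forms"],
        accountable_owner="Head of People and Culture",
        governance_approved=False,
        last_reviewed=date(2025, 6, 30),
    ),
]

# Surface entries that never went through governance review.
for entry in register:
    if not entry.governance_approved:
        print(f"Unapproved AI system: {entry.system_name} "
              f"(owner: {entry.accountable_owner})")
```

A spreadsheet serves the same purpose; what matters is that every system has a named owner and a review date.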
This is different from knowing what tools are in use. The question is what decisions those tools are making or influencing. Is AI screening job applicants? Assessing customer credit risk? Determining pricing? Prioritising support tickets? Each of these carries different risk profiles and different legal obligations.
The Privacy Act, Consumer Law, anti-discrimination legislation, and workplace safety laws all apply to AI use. The board should have a compliance assessment that maps each AI system against applicable legislation. Our AI compliance checklist provides a starting point.
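As a rough illustration of that mapping exercise, the sketch below pairs hypothetical use-case labels with legislation that may be relevant; the pairings are assumptions for the example, not legal advice or an exhaustive list.

```python
# Illustrative only: legislation that *may* apply to common AI use cases.
# A hypothetical starting point, not an authoritative or complete map.
APPLICABLE_LEGISLATION = {
    "recruitment screening": [
        "Sex Discrimination Act", "Age Discrimination Act", "Fair Work Act",
    ],
    "customer application assessment": [
        "Privacy Act 1988 (automated decision-making transparency)",
        "Australian Consumer Law",
    ],
    "service personalisation": ["Privacy Act 1988"],
}

def laws_to_review(use_case: str) -> list[str]:
    """Return legislation to assess; unmapped cases get escalated."""
    return APPLICABLE_LEGISLATION.get(
        use_case, ["Unmapped use case: escalate for legal review"]
    )

print(laws_to_review("recruitment screening"))
```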
Some AI applications are low risk (using AI to draft internal meeting summaries). Others are high risk (using AI to make decisions about customer eligibility or employee performance). The board should define clear risk categories and approval requirements for each level.
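To show how that tiering can be made explicit, here is a minimal sketch with assumed tier names and approval rules; each board would set its own categories and thresholds.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. drafting internal meeting summaries
    MEDIUM = "medium"  # e.g. prioritising support tickets
    HIGH = "high"      # e.g. customer eligibility or employee performance

# Hypothetical approval requirements per tier.
APPROVAL_REQUIREMENTS = {
    RiskTier.LOW: "Manager sign-off and an entry in the AI register",
    RiskTier.MEDIUM: "Documented risk assessment before deployment",
    RiskTier.HIGH: "Formal review, board notification, ongoing monitoring",
}

def approval_required(tier: RiskTier) -> str:
    return APPROVAL_REQUIREMENTS[tier]

print(approval_required(RiskTier.HIGH))
```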
When (not if) an AI system produces a harmful outcome, how does the organisation respond? Who is notified? How is the affected party treated? How is the system corrected? An incident response plan for AI is as important as your cyber security incident plan.
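One way to make such a plan operational is to capture every incident in a fixed shape that mirrors those questions, so nothing is skipped under pressure; the record structure below is an assumed illustration, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncident:
    """A minimal incident record mirroring the response questions above."""
    system_name: str
    what_happened: str
    detected_at: datetime
    parties_notified: list[str] = field(default_factory=list)
    affected_party_remedy: str = ""   # how the affected person was treated
    system_correction: str = ""       # how the system itself was fixed

incident = AIIncident(
    system_name="Recruitment screening tool",
    what_happened="Qualified applicant auto-rejected on a biased criterion",
    detected_at=datetime(2026, 3, 2, 9, 15),
    parties_notified=["System owner", "Privacy officer", "Board risk committee"],
    affected_party_remedy="Application manually re-reviewed; applicant informed",
    system_correction="Criterion removed; vendor asked to re-test for bias",
)
```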
A 20-person professional services firm does not need the same AI governance framework as a bank. The key principle is proportionality: governance should be appropriate to the scale of AI use and the level of risk.
For small to medium organisations, a practical governance framework includes an AI usage policy (what is permitted, what is prohibited), an AI register (what systems are in use), a risk assessment process for new AI deployments, regular compliance reviews (quarterly at minimum), and an incident reporting procedure.
For larger organisations or those in regulated industries, add formal AI ethics review for high-risk applications, third-party auditing of AI systems, staff training programmes with certification, and board reporting on AI risk metrics.
Directors do not need to be AI experts. They do need to be informed enough to ask the right questions, challenge management when AI risks are not being addressed, and ensure the organisation has appropriate governance proportionate to its AI use.
The duty of care and diligence under the Corporations Act 2001 requires directors to inform themselves about matters that materially affect the organisation. AI is now one of those matters. Ignorance is not a defence.
The good news: taking proactive steps now puts your board ahead of the 65% of Australian organisations still in the early stages of AI governance. The governance frameworks exist. The question is whether your board has the will to implement them.
Our Free AI Audit provides a governance baseline for boards looking to understand their current AI posture and identify gaps.