Compliance · February 2026 · 12 min read

AI Readiness for Boards: What Directors Need to Know

[Image: Boardroom meeting. Photo by cottonbro studio on Pexels]

If you sit on a board in Australia, AI is already your problem. Not because you chose to adopt it, but because your organisation is almost certainly using it already, whether through official channels or through staff using ChatGPT on their phones to draft emails and analyse data.

The governance question is not whether to allow AI. It is whether you have oversight of how AI is being used, what risks it creates, and whether your organisation is compliant with existing laws that apply to AI. Because here is the uncomfortable truth: directors can be personally liable if AI systems cause harm and the board failed to exercise reasonable oversight.

Australia’s approach to AI governance is standards-led, meaning the government is using voluntary frameworks and existing legislation rather than rushing to pass a standalone AI Act. For directors, this means you cannot wait for an “AI law” to tell you what to do. You need to act now based on existing obligations.

Why AI Is a Board-Level Issue

AI is not just a technology issue. It is a risk management, compliance, and strategic governance issue. Every AI system your organisation uses makes or influences decisions that affect customers, employees, and stakeholders. When those decisions go wrong, the accountability trail leads to the board.

Privacy risk. The Privacy Act 1988, as amended, requires transparency when automated decision-making substantially affects individuals. If your organisation uses AI to assess customer applications, screen job candidates, or personalise services, you will have disclosure obligations when the automated decision-making requirements take effect in December 2026, and the board should be across them now.

Discrimination risk. 62% of Australian organisations use AI in recruitment, and these systems have documented biases against women, older workers, and people from non-English speaking backgrounds. The Racial Discrimination Act, Sex Discrimination Act, Age Discrimination Act, and Fair Work Act all apply regardless of whether a human or an AI made the biased decision.

Workplace safety risk. The NSW Digital Work Systems Act 2026 specifically requires organisations to consider how digital systems (including AI) create psychosocial hazards for workers. This is likely to be adopted by other states.

Reputational risk. When Deloitte had to partially refund $290,000 to the Australian government after AI fabricated academic references in a report, the reputational damage extended well beyond the financial cost. A Victorian solicitor was disciplined for using AI-hallucinated case citations. These failures reflect on leadership, not just on the individuals involved.

The Australian Governance Landscape

Australia does not have a standalone AI Act. The government’s position, articulated through the Department of Industry, Science and Resources, is a “standards-led” approach. This means relying on voluntary frameworks, existing legislation, and industry standards rather than creating new AI-specific regulation.

The key frameworks directors should know about include the AI Ethics Principles (eight principles published by the Australian Government), the AI6 Framework (six essential practices for responsible AI from the National AI Centre), ISO 42001 (the international standard for AI management systems), and the Responsible AI Index which benchmarks Australian organisations against responsible AI practice.

The Responsible AI Index 2025 scored Australia at 43 out of 100, with 65% of organisations still in early stages of AI governance. This means most boards are behind, but it also means there is still time to get ahead of the curve. Our comprehensive AI governance guide covers the full landscape.

Five Questions Every Board Should Be Asking

1. What AI Systems Are We Using?

Most boards cannot answer this question. Staff are using AI tools that were never formally approved. Customer-facing AI may have been deployed by the marketing team without governance review. Third-party vendors may be using AI in services you procure. Start by creating an AI register that lists every AI system, its purpose, what data it accesses, and who is accountable for it.
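The register itself need not be complicated; a shared spreadsheet is often enough. As an illustrative sketch only (the field names and example entries are our own suggestions, not a prescribed standard), the structure might look like this:

```python
from dataclasses import dataclass

# Illustrative AI register entry; field names are suggestions, not a standard.
@dataclass
class AIRegisterEntry:
    system: str          # name of the AI tool or service
    purpose: str         # what it is used for
    data_accessed: str   # categories of data the system touches
    owner: str           # person accountable for the system
    approved: bool       # whether it has been through governance review

# Hypothetical entries for illustration
register = [
    AIRegisterEntry("ChatGPT", "Drafting emails and summaries",
                    "Internal documents", "COO", False),
    AIRegisterEntry("Recruitment screener", "Shortlisting job applicants",
                    "Candidate CVs", "Head of HR", True),
]

# Surface unapproved systems for board attention
unapproved = [e.system for e in register if not e.approved]
```

Even this minimal structure forces the two questions boards most often cannot answer: who owns each system, and whether it was ever formally approved.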

2. What Decisions Are Being Made or Influenced by AI?

This is different from knowing what tools are in use. The question is what decisions those tools are making or influencing. Is AI screening job applicants? Assessing customer credit risk? Determining pricing? Prioritising support tickets? Each of these carries different risk profiles and different legal obligations.

3. Are We Compliant with Existing Laws?

The Privacy Act, Consumer Law, anti-discrimination legislation, and workplace safety laws all apply to AI use. The board should have a compliance assessment that maps each AI system against applicable legislation. Our AI compliance checklist provides a starting point.

4. What Is Our Risk Appetite for AI?

Some AI applications are low risk (using AI to draft internal meeting summaries). Others are high risk (using AI to make decisions about customer eligibility or employee performance). The board should define clear risk categories and approval requirements for each level.
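As a sketch of how risk categories can drive approval requirements (the tier names and approval rules below are our own assumptions; the example use cases come from the article):

```python
# Illustrative risk tiers; tier names and approval levels are assumptions,
# not a prescribed framework.
RISK_TIERS = {
    "low":  {"examples": ["drafting internal meeting summaries"],
             "approval": "team lead"},
    "high": {"examples": ["customer eligibility decisions",
                          "employee performance assessment"],
             "approval": "board or delegated risk committee"},
}

def required_approval(use_case: str) -> str:
    """Return the approval level for a proposed AI use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["approval"]
    # Anything unclassified must be assessed before deployment
    return "risk assessment required before classification"
```

The design point is the default: any use case not already classified should trigger an assessment, rather than slipping through as implicitly low risk.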

5. Do We Have an Incident Response Plan?

When (not if) an AI system produces a harmful outcome, how does the organisation respond? Who is notified? How is the affected party treated? How is the system corrected? An incident response plan for AI is as important as your cyber security incident plan.

Building a Proportionate Governance Framework

A 20-person professional services firm does not need the same AI governance framework as a bank. The key principle is proportionality: governance should be appropriate to the scale of AI use and the level of risk.

For small to medium organisations, a practical governance framework includes an AI usage policy (what is permitted, what is prohibited), an AI register (what systems are in use), a risk assessment process for new AI deployments, regular compliance reviews (quarterly at minimum), and an incident reporting procedure.

For larger organisations or those in regulated industries, add formal AI ethics review for high-risk applications, third-party auditing of AI systems, staff training programmes with certification, and board reporting on AI risk metrics.

The Director’s Responsibility

Directors do not need to be AI experts. They do need to be informed enough to ask the right questions, challenge management when AI risks are not being addressed, and ensure the organisation has appropriate governance proportionate to its AI use.

The duty of care and diligence under the Corporations Act 2001 requires directors to inform themselves about matters that materially affect the organisation. AI is now one of those matters. Ignorance is not a defence.

The good news: taking proactive steps now puts your board ahead of 65% of Australian organisations that are still in early stages. The governance frameworks exist. The question is whether your board has the will to implement them.

Assess Your Organisation’s AI Readiness

Our Free AI Audit provides a governance baseline for boards looking to understand their current AI posture and identify gaps.

Frequently Asked Questions

Can directors be personally liable for AI failures?

Under existing Australian corporate law, directors have a duty of care and diligence. If an AI system causes harm and the board failed to exercise reasonable oversight, directors could face personal liability. This includes failing to understand what AI systems the organisation uses, not having governance frameworks in place, ignoring known risks, or not ensuring compliance with applicable laws like the Privacy Act. The standard is whether a reasonable director in that position would have taken steps to manage AI risk.

Does Australia have AI-specific laws?

Australia does not have a standalone AI Act. However, existing legislation already applies to AI. The Privacy Act 1988 (with 2026 amendments on automated decision-making), the Australian Consumer Law, anti-discrimination legislation, and the Corporations Act 2001 all create obligations that apply when AI is used. The NSW Digital Work Systems Act 2026 is the first state-level AI-specific law. The Australian Government uses a standards-led approach through voluntary frameworks like the AI Ethics Principles and the AI6 responsible AI framework.

What should a board-level AI governance framework include?

At minimum: an AI register listing all AI systems in use and their risk level, clear accountability for who approves new AI deployments, a risk assessment process for evaluating AI systems before deployment, regular review cycles to ensure AI systems are performing as expected, incident response procedures for when AI goes wrong, and compliance monitoring for relevant legislation (Privacy Act, Consumer Law, anti-discrimination). The framework should be proportionate to the organisation's size and the risk level of its AI use.

How often should the board review AI governance?

Quarterly review is the minimum recommended frequency. This should include an update on the AI register (new systems added, old ones retired), review of any incidents or near-misses, compliance status against applicable legislation, and progress on the AI governance roadmap. Annually, the board should conduct a comprehensive review of the entire AI governance framework, benchmark against current best practice, and update policies to reflect legislative changes.

FlowWorks Team
AI Automation & Consulting · Melbourne, Australia