Fifth Quadrant and the National Artificial Intelligence Centre released the 2025 Australian Responsible AI Index in August 2025. It is the most comprehensive assessment to date of how Australian organisations are approaching AI governance, ethics, and responsible implementation.
We read the full report so you do not have to. Here are the numbers that matter, what they mean for Australian businesses, and what you should be doing about it. For practical next steps, our AI governance in Australia guide covers the essentials.
The short version: Australia scores 43 out of 100 on responsible AI maturity. That is down one point from 2024. And if you are a smaller business, you are likely further behind than you think.
The Index divides organisations into four maturity segments based on their score across 45 responsible AI practices. The distribution tells a clear story: most Australian businesses are still figuring this out.
Source: Fifth Quadrant / National AI Centre, Australian Responsible AI Index 2025. n=418.
65% of organisations are in the Emerging or Developing stages. That means nearly two-thirds of Australian businesses using AI have limited governance in place. They are experimenting with AI tools, maybe using ChatGPT or automating a few processes, but without formal policies, risk assessments, or oversight structures.
Only 12% qualify as Leading. These are predominantly larger organisations (46% have 1000+ employees) that have been using AI for more than four years, with dedicated governance committees and comprehensive documentation.
This is where it gets concerning for smaller businesses. The report found a clear 7-point maturity gap between SMEs and enterprise organisations, and the gap is not closing.
Source: Fifth Quadrant / National AI Centre, 2025. SMEs n=97, Enterprise n=107.
The practical implication: if you are running a business with 20 to 99 employees and using AI in any capacity, there is a strong chance you are in the Emerging or Developing category. Your team is probably using generative AI tools, maybe some chatbots or virtual assistants, but without formal guidelines governing how those tools should be used, what data can go into them, or who is accountable when something goes wrong.
The report breaks down RAI maturity by industry. Customer-facing and technology-driven sectors lead, while traditional industries lag behind.
Source: Fifth Quadrant / National AI Centre, 2025. n=418.
Transport and Logistics trails at 38, with 69% of organisations in the Developing stage and only 5% in Leading. Professional Services sits at 41, with 51% still Developing. Construction is at 42, with 64% in the Developing stage. These are the industries where FlowWorks works most frequently, and the numbers confirm what we see on the ground: businesses are adopting AI tools faster than they are adopting the governance frameworks to use them safely.
Here is an interesting finding: 90% of organisations report having an AI strategy. That sounds encouraging until you look deeper.
In 2024, 49% said their AI strategy covered all divisions. In 2025, that dropped to 37%. More organisations are acknowledging that their strategy only covers some parts of the business.
The report suggests this reflects growing awareness of what a real AI strategy actually requires. As frameworks like the Voluntary AI Safety Standard (VAISS) gain visibility, organisations are being more honest about whether their "strategy" is genuinely comprehensive or just a document sitting in a shared drive.
The Voluntary AI Safety Standard was released by the Australian Government in September 2024. It is not law yet, but it is the clearest signal of where regulation is heading. The report found that 33% of organisations have already adopted it, with Leading organisations at 64%.
The connection between standards adoption and business outcomes is clear. 95% of organisations that have implemented AI standards also have an AI strategy, compared to 83% of those without standards. Standards create strategic clarity.
The report identifies which responsible AI practices are most and least commonly implemented. The pattern is revealing: organisations are good at documentation but bad at accountability.
Source: Fifth Quadrant / National AI Centre, 2025. n=418.
The gap is telling. Organisations can explain what their AI does (documentation and explainability are the top practices) but they have not built the structures to ensure it does it responsibly. Only 33% have an AI risk or governance committee. Only 25% have designated someone as responsible for AI governance. And only 21% are checking whether the AI tools they buy from vendors are actually transparent about how they work.
This is the governance gap, and it starts at the top: the difference between knowing you should do something and actually doing it. It is where most Australian businesses are stuck. Our guide on AI governance for boards explains what directors need to do.
One of the most useful sections of the report is the outcomes data. Organisations that have implemented AI standards and guidelines report measurable business benefits.
Source: Fifth Quadrant / National AI Centre, 2025. Base: organisations with AI standards/guidelines, n=393.
This is not about compliance for the sake of compliance. The data shows that responsible AI governance correlates with better business outcomes: organisations that implement standards report higher efficiency, lower costs, better compliance, and stronger customer trust.
If you are a small or medium business using AI in any capacity, here is what we recommend based on the report findings.
Take stock of how your business uses AI today. Which tools, which processes, which decisions are AI-influenced? Most businesses are surprised by how much AI has crept into their operations without formal oversight.
You do not need a 50-page governance framework on day one. Start with a simple document that outlines which AI tools are approved, what data can go into them, and who is responsible for reviewing AI outputs. This alone puts you ahead of 75% of organisations.
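That starting document can be as simple as a register of approved tools, the data classes each is allowed to handle, and a named owner. As an illustrative sketch only (the tool names, data classes, and roles below are hypothetical, not from the report), the same idea expressed as a small lookup:

```python
# Illustrative sketch of a minimal AI tool register: which tools are
# approved, what data may go into them, and who owns the review of outputs.
# All entries here are hypothetical examples, not recommendations.
from dataclasses import dataclass


@dataclass
class AITool:
    name: str
    approved: bool
    permitted_data: set[str]  # e.g. {"public", "internal"}; never customer PII
    owner: str                # the person accountable for reviewing outputs


REGISTER = [
    AITool("ChatGPT (web)", approved=True,
           permitted_data={"public"}, owner="Ops Manager"),
    AITool("Internal chatbot", approved=True,
           permitted_data={"public", "internal"}, owner="IT Lead"),
]


def may_use(tool_name: str, data_class: str) -> bool:
    """Return True only if the tool is approved for this data class."""
    for tool in REGISTER:
        if tool.name == tool_name:
            return tool.approved and data_class in tool.permitted_data
    return False  # unlisted tools are not approved by default


print(may_use("ChatGPT (web)", "customer_pii"))  # False: PII not permitted
```

The point is not the code itself but the discipline it encodes: every tool is either on the list with an owner and data rules, or it is not approved.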
The 10 guardrails of the Voluntary AI Safety Standard are the clearest framework available. Even if you only implement 5 of the 10 initially, you are building on a foundation that is likely to become a regulatory requirement.
The report shows this is the biggest gap. Someone in your business needs to own AI governance. It does not have to be a full-time role, but it needs to be someone's responsibility.
A fresh set of eyes on your AI practices will identify risks and opportunities you have missed. This is what our AI Readiness Review is designed to do.
Our AI Governance service helps Australian businesses build responsible AI frameworks aligned with the VAISS guardrails and Privacy Act requirements.
Get Your AI Readiness Review

The Australian Responsible AI Index is a comprehensive assessment developed by Fifth Quadrant and sponsored by the National Artificial Intelligence Centre. It tracks responsible AI maturity across five dimensions: fairness, accountability, transparency, explainability, and safety. Organisations are scored out of 100 based on 45 identified responsible AI practices. The 2025 report surveyed 418 organisations across Australia.
The Voluntary AI Safety Standard was released by the Australian Government in September 2024. It consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. The guardrails cover accountability, risk management, data governance, testing, human oversight, transparency, contestability, supply chain transparency, compliance, and stakeholder engagement. 33% of Australian organisations have adopted it so far.
There is a clear maturity gap. Smaller organisations (20 to 99 employees) score 40 out of 100, while enterprise organisations (1000+ employees) score 47. Only 28% of smaller businesses are in the more mature stages (scoring 50+) compared to 45% of enterprise. Smaller businesses have implemented an average of 13 responsible AI practices versus 16 for enterprise.
The most implemented practices are: maintaining comprehensive documentation (49%), developing materials to explain AI inputs and decision-making (42%), reviewing training data for bias (41%), and informing stakeholders about AI use (41%). The least implemented include designated responsible AI roles (25%), human rights assessments (25%), and vendor transparency assessments (21%).
Yes. With Privacy Act amendments introducing AI transparency requirements from 2026, governance is becoming mandatory rather than optional. The data shows clear business value: 52% of organisations with AI governance report increased operational efficiency, 46% report cost savings, and 44% report improved compliance. Organisations with AI standards are also more likely to have a clear AI strategy (95% vs 83%).
Fifth Quadrant and National Artificial Intelligence Centre. Australian Responsible AI Index 2025. Published 26 August 2025. Sponsored by the Australian Government Department of Industry, Science and Resources. Survey of 418 Australian organisations, conducted April to May 2025.