In October 2025, Australia's National AI Centre (part of CSIRO) updated its Guidance for AI Adoption with a framework called AI6, short for the "AI Six": six essential practices that any organisation should follow when adopting AI. Think of them as the minimum standard for responsible AI use in Australia.
Most Australian SMEs have never heard of the AI6 framework. That is a problem, because while it is technically voluntary, it represents where regulation is heading. The practices align closely with obligations that already exist under the Privacy Act, anti-discrimination legislation, and Australian Consumer Law. Ignoring them now means scrambling later.
The good news is that the AI6 framework is refreshingly practical. It was designed with small and medium businesses in mind, not just enterprises with dedicated AI ethics teams. Here is what each practice involves and how a 10 to 50 person business can implement it without hiring a governance specialist.
These six practices cover the full spectrum of responsible AI. They are not sequential steps but concurrent obligations. Every AI system you use should be assessed against all six. The depth of implementation scales with the risk level of what the AI is doing.
Practice 1: Transparency

What it means: Be open about when and how AI is used. Customers, employees, and stakeholders should know when they are interacting with AI, what data AI uses, and how AI influences decisions that affect them.
What it looks like for SMEs: If your website uses an AI chatbot, label it as AI. If your hiring process includes AI screening, tell candidates. If AI helps generate client reports or recommendations, disclose the AI involvement. This does not mean qualifying every email with "this was written with AI assistance." It means being transparent about AI in contexts where it materially affects outcomes.
AI content transparency rules are already tightening in Australia. The ACCC and OAIC have both issued guidance on disclosure obligations. Getting transparency right now avoids compliance scrambling later.
Practice 2: Fairness

What it means: AI should not produce outcomes that discriminate against individuals or groups based on protected characteristics. This covers both direct discrimination and indirect or systemic bias in AI outputs.
What it looks like for SMEs: Review your AI tools for potential bias, particularly in hiring, customer service, pricing, and marketing. Ask vendors about their bias testing processes. Monitor outcomes across different customer or applicant groups. If you notice patterns that suggest certain groups are being treated differently, investigate and address the issue.
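As an illustration, monitoring outcomes across groups can be as simple as comparing positive-outcome rates. The sketch below is hypothetical: the group labels, the data, and the 0.8 disparity threshold are all illustrative assumptions, not a legal standard.

```python
# Hypothetical sketch: compare AI screening outcomes across applicant groups.
# Group labels, data, and the 0.8 threshold are illustrative assumptions.

def outcome_rates(records):
    """Return the positive-outcome rate per group from (group, passed) pairs."""
    totals, passes = {}, {}
    for group, passed in records:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + (1 if passed else 0)
    return {g: passes[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < best * threshold]

# 8 of 10 group-A applicants pass; 5 of 10 group-B applicants pass
records = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 5 + [("B", False)] * 5
rates = outcome_rates(records)
print(rates)                  # {'A': 0.8, 'B': 0.5}
print(flag_disparity(rates))  # ['B']
```

A flagged group is a prompt to investigate, not proof of discrimination; the cause may be the data, the tool, or the process around it.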
Anti-discrimination law already applies to AI decisions. The fairness practice in AI6 aligns with your existing legal obligations under four Commonwealth anti-discrimination Acts.
Practice 3: Accountability

What it means: Someone in your organisation must be responsible for AI decisions and their outcomes. AI does not make anyone less accountable. If anything, using AI increases accountability obligations because you need to demonstrate that appropriate oversight was in place.
What it looks like for SMEs: Assign a person (it can be the business owner in a small team) as the AI accountability lead. This person approves new AI tools, monitors outcomes, responds to complaints, and ensures compliance with the other five practices. Document who is responsible for what. When something goes wrong, there should be a clear chain of responsibility. Directors and board members have specific AI governance responsibilities.
Practice 4: Privacy

What it means: AI systems must handle personal data in compliance with the Privacy Act and protect that data from unauthorised access. This includes data used to train AI, data processed by AI, and data generated by AI.
What it looks like for SMEs: Know what data your AI tools collect and process. Understand where that data is stored and who has access. Do not feed customer personal information into AI tools without checking their data handling policies. Implement data minimisation: only provide AI with the data it needs for the specific task, not more. The Privacy Act reforms are making these obligations stricter, with new automated decision-making transparency requirements taking effect in December 2026.
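Data minimisation can be enforced mechanically with an allow-list of task-relevant fields. The field names and allow-list below are assumptions for illustration, not from any particular tool.

```python
# Illustrative sketch: strip fields an AI tool does not need before sending data.
# The field names and allow-list are assumptions for this example.

ALLOWED_FIELDS = {"enquiry_text", "product", "order_status"}  # what the task needs

def minimise(record, allowed=ALLOWED_FIELDS):
    """Return a copy of `record` containing only task-relevant fields."""
    return {k: v for k, v in record.items() if k in allowed}

customer_record = {
    "name": "Jane Citizen",            # not needed to answer the enquiry
    "email": "jane@example.com",       # not needed either
    "enquiry_text": "Where is my order?",
    "order_status": "shipped",
}
print(minimise(customer_record))
# {'enquiry_text': 'Where is my order?', 'order_status': 'shipped'}
```

The same pattern works at the spreadsheet level: decide the allowed columns once per AI use case, and pass only those.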
Practice 5: Human oversight

What it means: Humans must maintain meaningful control over AI systems. This does not mean rubber-stamping every AI output, but it does mean having the ability to intervene, override, or shut down AI systems when they are not performing as intended.
What it looks like for SMEs: Define which AI decisions require human review before action. Set thresholds: AI can handle routine customer queries autonomously, but anything involving a complaint, a refund over a certain amount, or a sensitive topic gets escalated to a human. Build in circuit breakers that pause AI systems if error rates exceed acceptable levels. The level of oversight should match the risk level of the decision.
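The escalation rules and circuit breaker described above can be sketched in a few lines. The thresholds here (a $100 refund limit, a 5% error rate) are placeholder values; yours should reflect your own risk appetite.

```python
# Illustrative sketch of escalation rules and a simple circuit breaker.
# REFUND_LIMIT and MAX_ERROR_RATE are placeholder thresholds, not recommendations.

REFUND_LIMIT = 100.00    # refunds above this amount go to a human
MAX_ERROR_RATE = 0.05    # pause the AI if its error rate exceeds 5%

def needs_human(query_type, refund_amount=0.0):
    """Route complaints, sensitive topics, and large refunds to a person."""
    if query_type in {"complaint", "sensitive"}:
        return True
    return refund_amount > REFUND_LIMIT

class CircuitBreaker:
    """Pause the AI system when errors exceed an acceptable rate."""
    def __init__(self, max_error_rate=MAX_ERROR_RATE):
        self.max_error_rate = max_error_rate
        self.errors = 0
        self.total = 0

    def record(self, was_error):
        self.total += 1
        self.errors += 1 if was_error else 0

    def tripped(self):
        return self.total > 0 and self.errors / self.total > self.max_error_rate

breaker = CircuitBreaker()
for outcome in [False] * 18 + [True] * 2:   # 2 errors in 20 interactions = 10%
    breaker.record(outcome)

print(needs_human("refund", refund_amount=250.0))  # True: over the limit
print(breaker.tripped())                           # True: 10% exceeds the 5% cap
```

In practice the "trip" action might be as simple as switching the chatbot to a contact form until someone reviews the errors.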
Practice 6: Reliability

What it means: AI systems should perform consistently and as intended. They should be tested before deployment, monitored during operation, and updated when performance degrades. Reliability includes both accuracy and robustness: the system should work correctly and should handle unexpected inputs gracefully.
What it looks like for SMEs: Test AI tools before going live with real customers or real decisions. Monitor ongoing performance with simple metrics: accuracy rate, error rate, customer satisfaction with AI interactions, and time saved versus extra work created. If performance drops, investigate before the problem compounds. Measuring AI performance properly ensures you catch reliability issues early.
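Catching degradation early mostly means comparing live metrics to a pre-launch baseline. The metric names, figures, and 10% tolerance below are invented for illustration.

```python
# Illustrative sketch: flag metrics that have worsened against a baseline.
# Metric names, values, and the 10% tolerance are assumptions for this example.

def degraded(baseline, current, tolerance=0.10):
    """Return metrics more than `tolerance` (10%) worse than baseline.
    Higher is better for every metric in this sketch."""
    return [m for m in baseline if current[m] < baseline[m] * (1 - tolerance)]

baseline  = {"accuracy": 0.92, "csat": 4.4}   # from pre-launch testing
this_week = {"accuracy": 0.78, "csat": 4.3}   # from live monitoring
print(degraded(baseline, this_week))  # ['accuracy']
```

A flagged metric is the trigger to investigate before the problem compounds, per the practice above.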
Week 1: Audit. List every AI tool and system your business uses. For each one, note what it does, what data it accesses, who uses it, and what decisions it influences. This gives you the baseline.
Week 2: Assess. Score each AI system against the six practices. Where are you already compliant? Where are the gaps? Prioritise gaps based on risk: systems that affect people (customers, employees, candidates) get addressed first.
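The assessment step can be kept as lightweight as a scoring grid. The tools, scores, and 0-2 scale below are made-up examples of how such a grid might look; a spreadsheet works just as well.

```python
# Illustrative sketch: score each AI system 0-2 against the six practices
# and surface the gaps. Tool names, scores, and the scale are made up.

PRACTICES = ["transparency", "fairness", "accountability",
             "privacy", "human oversight", "reliability"]

# 0 = no measures in place, 1 = partial, 2 = adequate for the risk level
scores = {
    "website chatbot":   {"transparency": 2, "fairness": 2, "accountability": 1,
                          "privacy": 1, "human oversight": 2, "reliability": 1},
    "CV screening tool": {"transparency": 0, "fairness": 1, "accountability": 1,
                          "privacy": 2, "human oversight": 0, "reliability": 1},
}

def gaps(system_scores, threshold=2):
    """List the practices scoring below `threshold` for each system."""
    return {name: [p for p in PRACTICES if s[p] < threshold]
            for name, s in system_scores.items()}

for system, missing in gaps(scores).items():
    print(f"{system}: gaps in {', '.join(missing) or 'none'}")
```

In this made-up grid, the CV screening tool would be addressed first: it affects people directly and scores zero on transparency and human oversight.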
Week 3: Document. Write a simple AI policy that addresses each of the six practices. This does not need to be a 50-page document. A clear two to three page policy that covers your approach to transparency, fairness, accountability, privacy, human oversight, and reliability is sufficient for most SMEs. The AI compliance checklist provides a structured starting point.
Ongoing: Monitor. Review your AI systems against the framework quarterly. Update your policy when you adopt new AI tools or when the regulatory landscape changes. The AI6 framework is a living standard, not a one-off compliance exercise.
The AI6 framework sits alongside several other governance frameworks. ISO 42001 provides a certifiable AI management system for organisations that need formal certification. The Responsible AI Index provides benchmarking data. And the SafeAI-Aus framework aligns with the Victorian AI Assurance Standards (VAISS).
For most Australian SMEs, the AI6 framework is the right starting point. It is practical, proportionate, and aligned with where Australian regulation is heading. If you outgrow it, ISO 42001 certification is the natural next step. But for a business with 5 to 50 employees, AI6 provides more than enough structure to use AI responsibly and stay ahead of compliance requirements.
Our Free AI Audit evaluates your current AI practices against key governance criteria and identifies your next steps.