An AI risk assessment is a structured process for identifying what can go wrong when your business uses AI, how likely it is to happen, how much damage it could cause, and what you are doing to prevent it. Every business using AI needs one. Without it, you are making decisions about risk without actually understanding the risk.
The regulatory pressure is real. The Privacy Act 2026 amendments require transparency around automated decision-making and introduce penalties of up to $50 million for serious breaches. But compliance aside, a risk assessment protects your business from the operational, financial, and reputational damage that comes from uncontrolled AI use.
This guide gives you a complete, practical process for conducting an AI risk assessment. No jargon. No theory. Just the steps you need to follow.
Before you can assess risk, you need to know what you are assessing. Build a complete inventory of every AI tool your business uses. This includes standalone generative AI tools like ChatGPT, Claude, and Gemini. It includes AI features embedded in your existing software, such as smart compose in Gmail, AI analytics in HubSpot, or automated categorisation in Xero. And it includes any AI tools your vendors or contractors are using on your behalf.
For each tool, document: what the tool does, who uses it, what data it accesses, where the data is processed (Australia, US, EU, or elsewhere), and whether the vendor has a data processing agreement in place. This inventory is the foundation of everything that follows.
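If you prefer to keep the inventory in a structured form rather than free text, the fields above map naturally onto a simple record. This is a sketch only; the tool name, users, and region shown are assumptions for illustration, not recommendations:

```python
from dataclasses import dataclass

# One inventory entry, using the fields listed above.
# The example values are hypothetical.

@dataclass
class AIToolRecord:
    name: str
    purpose: str            # what the tool does
    users: list             # who uses it
    data_accessed: str      # what data it touches
    processing_region: str  # Australia, US, EU, or elsewhere
    dpa_in_place: bool      # vendor data processing agreement signed?

inventory = [
    AIToolRecord(
        name="ChatGPT",
        purpose="Drafting and brainstorming",
        users=["Marketing"],
        data_accessed="No client data permitted",
        processing_region="US",
        dpa_in_place=False,
    ),
]

# A quick gap check: which tools still lack a data processing agreement?
missing_dpa = [t.name for t in inventory if not t.dpa_in_place]
print("Tools missing a DPA:", missing_dpa)
```

The same structure works equally well as columns in a spreadsheet; the point is that every tool gets the same fields filled in.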
Do not skip the shadow AI check. Survey your team directly. Ask them what tools they use. You will find tools that nobody in leadership knew about. That is not a criticism of your team. It is a reflection of how easy AI tools are to adopt and how quickly the landscape moves. Our AI governance framework guide includes a detailed audit checklist you can use.
AI risks fall into predictable categories. For each AI tool in your inventory, assess it against the following risk areas:
| Category | Description | Likelihood | Impact |
|---|---|---|---|
| Data Privacy | Personal or sensitive data entered into AI tools without consent or appropriate safeguards | High | High |
| Accuracy / Hallucination | AI generates incorrect information that is used in client deliverables or decisions | High | High |
| Security | Sensitive business data leaked through AI tool APIs, logs, or training pipelines | Medium | High |
| Bias and Discrimination | AI tool produces outputs that disadvantage certain groups in hiring, lending, or service delivery | Medium | High |
| Regulatory Non-Compliance | Automated decisions made without required transparency or human oversight under the Privacy Act | Medium | High |
| Vendor / Third-Party Risk | AI vendor changes terms, suffers a breach, or uses your data for model training | Medium | Medium |
| Reputational | AI-generated content or decisions cause public embarrassment or loss of customer trust | Low-Medium | High |
| Operational Dependency | Business becomes overly reliant on an AI tool that experiences outages or discontinuation | Low | Medium |
Identifying risks is not enough. You need to prioritise them. Use a simple 5-point scoring system for both likelihood and impact. Multiply the two scores to get a risk rating. A risk with a likelihood of 4 and an impact of 5 has a rating of 20 out of 25. That is a priority.
| Score | Likelihood | Impact |
|---|---|---|
| 1 - Rare | Less than 5% chance per year | Minimal disruption, no financial or legal consequence |
| 2 - Unlikely | 5-20% chance per year | Minor disruption, small financial cost, no regulatory action |
| 3 - Possible | 20-50% chance per year | Moderate disruption, noticeable financial cost, potential regulatory inquiry |
| 4 - Likely | 50-80% chance per year | Significant disruption, material financial cost, regulatory investigation likely |
| 5 - Almost Certain | Over 80% chance per year | Severe disruption, major financial loss, regulatory penalties, reputational damage |
Focus first on risks with a rating of 12 or above. These are the ones that need immediate mitigation. Risks rated 6 to 11 should be addressed within your next quarterly review. Anything below 6 should be monitored but is unlikely to need urgent action.
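For teams that track scores in a spreadsheet or script, the scoring and thresholds above take only a few lines to encode. The risk names and scores below are illustrative assumptions, not an assessment of your business:

```python
# 5-point likelihood x 5-point impact, rated out of 25,
# bucketed by the 12+ / 6-11 / below-6 thresholds described above.

def risk_rating(likelihood: int, impact: int) -> int:
    """Multiply likelihood (1-5) by impact (1-5) to get a rating out of 25."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores must be between 1 and 5")
    return likelihood * impact

def priority(rating: int) -> str:
    """Translate a rating into the action buckets used in this guide."""
    if rating >= 12:
        return "immediate mitigation"
    if rating >= 6:
        return "next quarterly review"
    return "monitor"

# Hypothetical example risks
risks = [
    ("Data privacy", 4, 5),
    ("Vendor risk", 3, 3),
    ("Operational dependency", 2, 2),
]
for name, likelihood, impact in risks:
    rating = risk_rating(likelihood, impact)
    print(f"{name}: {rating}/25 -> {priority(rating)}")
```

Running this prints, for example, `Data privacy: 20/25 -> immediate mitigation`, matching the likelihood-4, impact-5 example in the text.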
For every risk that scores above your threshold, document a specific mitigation. Vague statements like “we will monitor this” are not mitigations. A mitigation is a concrete action that reduces either the likelihood or the impact of the risk.
Each mitigation should have an owner: someone specific who is accountable for implementing and maintaining the control. Without ownership, mitigations become aspirational statements rather than actual protections.
Your risk register is the single document that brings everything together. It is a living record of every identified AI risk, its score, its mitigation, its owner, and its current status. For most SMEs, a well-structured spreadsheet works perfectly.
Columns to include: risk ID, risk category, description, AI tool affected, likelihood score (1-5), impact score (1-5), combined risk rating, mitigation, owner, status (open/mitigated/accepted), last reviewed date, and next review date.
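As a concrete sketch, here is that register written out as a CSV with exactly the columns listed above. The sample row (risk ID, mitigation, owner, dates) is hypothetical, included only to show the shape:

```python
import csv
import io

# Column names follow the list above.
COLUMNS = [
    "risk_id", "risk_category", "description", "ai_tool",
    "likelihood", "impact", "risk_rating", "mitigation",
    "owner", "status", "last_reviewed", "next_review",
]

# One illustrative register entry (all values are assumptions).
row = {
    "risk_id": "R-001",
    "risk_category": "Data Privacy",
    "description": "Client data pasted into a public chatbot",
    "ai_tool": "ChatGPT",
    "likelihood": 4,
    "impact": 5,
    "risk_rating": 4 * 5,  # likelihood x impact
    "mitigation": "Approved-tools policy plus staff training",
    "owner": "Operations Manager",
    "status": "open",
    "last_reviewed": "2026-01-15",
    "next_review": "2026-04-15",
}

# Write the register to an in-memory CSV (swap io.StringIO for a real file).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

The same columns drop straight into a spreadsheet; the CSV form just makes it easy to sort by `risk_rating` or filter on `status` at review time.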
Review the register quarterly. Update scores when circumstances change. Add new risks as you adopt new tools. Close risks that are no longer relevant. The register is your evidence of ongoing risk management, which matters when the OAIC comes asking questions.
Your risk assessment does not happen in isolation. It sits within a regulatory framework that is tightening quickly. The Privacy Act 2026 amendments are the headline change, but they are not the only one.
Key regulatory requirements: The Privacy Act 2026 requires transparency around automated decisions about individuals. The OAIC has published specific guidance on AI and privacy. APRA expects regulated entities to manage AI as operational risk. ISO 42001 provides the international benchmark for AI management systems. Your risk assessment should demonstrate alignment with all applicable requirements.
For regulated industries like financial services, healthcare, and legal, the bar is higher. Your risk assessment needs to account for sector-specific requirements on top of general privacy obligations. If you serve government clients, you will also need to align with the Department of Industry, Science and Resources' voluntary AI Safety Standard, which is becoming less voluntary with each procurement cycle.
Assessing tools, not use cases. The same AI tool can be low-risk in one context and high-risk in another. ChatGPT used for brainstorming marketing slogans is low-risk. ChatGPT used to draft personalised financial advice is high-risk. Assess how each tool is being used, not just what the tool is.
Treating it as a one-off exercise. A risk assessment done once and filed away is almost worthless. AI tools change. Your usage changes. Regulations change. The assessment must be a living process with scheduled reviews.
Ignoring embedded AI. Many businesses assess ChatGPT and Copilot but forget about the AI features in their CRM, accounting software, and email platform. These embedded AI tools process your data too, and they need to be included in your assessment.
No ownership. A risk register without named owners for each risk and mitigation is a document, not a management system. Someone needs to be accountable. In a small business, that might be the founder or operations manager. In a larger organisation, it should be distributed across teams with a central coordinator.
Our AI Readiness Review includes a full risk assessment of your current AI usage, with prioritised recommendations and a practical action plan.
An AI risk assessment is a structured process for identifying, scoring, and mitigating the risks that come with using AI tools in your business. It covers data privacy, accuracy, bias, security, compliance, and reputational risks. The output is a risk register that documents each risk, its likelihood, its potential impact, and the specific controls you have in place to manage it.
Conduct a full AI risk assessment at least annually, with quarterly reviews of your risk register. You should also reassess whenever you adopt a new AI tool, change how an existing tool is used, experience an AI-related incident, or face new regulatory requirements. The Privacy Act 2026 amendments make regular risk assessment a practical necessity for compliance.
The top risks are data privacy breaches (customer data entering AI tools without consent), accuracy failures (AI hallucinations in client-facing content), compliance gaps (failing to meet Privacy Act automated decision-making requirements), and security vulnerabilities (sensitive data exposed through AI APIs). For regulated industries, there are additional risks around sector-specific obligations from APRA, AHPRA, and other bodies.
Yes. The Privacy Act 2026 removes the small business exemption, meaning every Australian business that handles personal data must comply with privacy obligations including those related to AI. A risk assessment does not need to be complex for a small business. A simple spreadsheet covering your AI tools, the data they access, and your mitigation steps is a solid starting point.
An AI audit is a point-in-time review of what AI tools your business is using and how. A risk assessment goes further. It evaluates the potential negative outcomes of that AI use, scores them by likelihood and impact, and documents specific controls to manage each risk. Think of the audit as the discovery phase and the risk assessment as the analysis and action phase. You need both.