Guide · Mar 10, 2026 · 9 min read

AI and Cyber Security for Small Businesses: How to Use AI Without Putting Your Data at Risk

AI adoption among Australian small businesses has accelerated dramatically. Tools like ChatGPT, Gemini, and Copilot are now part of the daily workflow for thousands of SMEs across the country. Teams use them to draft emails, analyse data, generate reports, and speed up research that used to take hours. The Australian Cyber Security Centre has flagged this rapid adoption as a key area of concern for businesses that have not yet put proper safeguards in place.

But here is the problem: most small businesses have no data security guardrails around AI usage. No policy. No approved tools list. No guidance on what employees should and should not type into these tools. And in many cases, no awareness that the data they enter into a free AI chatbot could be stored, reviewed by third parties, or used to train future models.

This guide covers the real risks, the practical steps to mitigate them, and how to build an internal AI usage policy that actually works. It is written for Australian business owners, managers, and IT leads who want to use AI productively without exposing their business or their clients to unnecessary risk.

The Real Risks of Using AI Without Guardrails

The risks are not theoretical. They are happening now, in businesses of every size. Understanding them is the first step to managing them.

Data leaks via prompts. When an employee pastes a client contract into ChatGPT to "summarise the key terms," that data is transmitted to OpenAI's servers. Depending on the plan and settings, it may be stored, reviewed by staff, or used as training data. The employee meant well. The outcome is a potential data breach.

Third-party model training. Many AI providers use your inputs to improve their models unless you explicitly opt out. This means your proprietary business information could influence responses given to other users, including competitors. Free-tier tools are particularly prone to this.

Supply chain exposure. AI tools often rely on third-party infrastructure, plugins, and integrations. Each layer adds a potential point of failure or data exposure. A plugin that connects your AI tool to your CRM might be processing data through servers in a jurisdiction with weaker privacy protections.

Shadow AI. This is the biggest risk for most SMEs. Employees are using AI tools that IT and management do not know about. Personal ChatGPT accounts, browser extensions with AI features, free transcription tools, image generators. Each one is a potential data leak that falls outside your security perimeter. You cannot manage risk you cannot see.

What the Australian Government Says

The Australian Signals Directorate (ASD) and cyber.gov.au have published guidance on AI and cyber security for small businesses that is directly relevant. Here is the plain English summary.

Assess before you deploy. Before adopting any AI tool, evaluate what data it will access, where that data is processed and stored, and what the provider's data handling policies are. This applies to every tool, not just the ones your IT team provisions.

Treat AI tools like any other third-party service. The same due diligence you would apply to a new cloud provider or SaaS tool should apply to AI services. Review their security certifications, data processing agreements, and incident response capabilities.

Do not share sensitive or classified information with public AI services. This guidance is aimed at government agencies, but the principle applies equally to businesses handling client data, financial records, or proprietary information.

Implement usage policies and training. The ASD recommends that all organisations establish clear policies for AI tool usage and ensure staff understand the risks. The Australian Government has also published an AI policy guide and template to help organisations get started. A policy is only useful if people know it exists and understand what it requires. The OAIC has also confirmed that obligations under the Privacy Act and the Australian Privacy Principles apply to data processed by AI tools, just as they do for any other form of data processing.

10-Point AI Security Checklist for Small Businesses

This checklist is designed to be practical and actionable. You do not need a dedicated security team to implement these steps. A business owner or office manager can work through most of them in an afternoon.

1. Read the terms of service before you use any AI tool

Most people skip this. Do not. Look specifically for clauses about data retention, model training, and third-party sharing. If the provider reserves the right to train on your inputs, assume everything you type becomes part of their dataset.

2. Disable training on your data wherever possible

OpenAI, Google, and most major providers now offer settings to opt out of model training. In ChatGPT, go to Settings > Data Controls and turn off "Improve the model for everyone." For enterprise plans, training is typically off by default. Check every tool your team uses.

3. Use enterprise or business tiers, not free accounts

Free-tier AI tools often have weaker data protections. Business and enterprise plans typically offer stricter data handling, audit logs, and contractual commitments around data processing. The cost difference is small compared to the risk of a data breach.

4. Never paste client-identifiable information into public AI tools

This includes names, email addresses, phone numbers, financial details, health information, and any data that could identify an individual. If you need AI to analyse client data, use a tool with a proper data processing agreement in place.
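If your team does need a quick safety net, a simple redaction pass before anything is pasted into a chat window can catch the most obvious identifiers. The sketch below uses illustrative regular expressions only; the patterns are placeholders, and a real deployment should use a proper PII detection library tuned to your own data.

```python
import re

# Illustrative patterns only -- a real deployment would use a dedicated
# PII detection library and patterns tuned to your own records.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU-style numbers
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Note that the name "Jane" survives: regex alone cannot catch names,
# which is exactly why redaction is a safety net, not a substitute for policy.
print(redact("Contact Jane on 0412 345 678 or jane@example.com"))
# -> Contact Jane on [PHONE REDACTED] or [EMAIL REDACTED]
```

This is a last line of defence, not a green light: redacted text pasted into a public tool is still subject to that tool's data handling terms.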

5. Establish clear categories for what can and cannot be shared

Create a simple traffic-light system. Green: general business questions, marketing copy drafts, publicly available data. Amber: internal processes, anonymised data, general financial queries. Red: client PII, passwords, contracts, proprietary code, legal documents.
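The traffic-light check can even be automated as a pre-flight step in internal tooling. This is a minimal sketch; the categories and keywords below are placeholders you would replace with terms that match your own business data.

```python
# Placeholder keywords -- replace with terms relevant to your business.
CLASSIFICATION = {
    "red": ["client name", "password", "contract", "source code", "tfn"],
    "amber": ["internal process", "anonymised", "financial"],
}

def traffic_light(description: str) -> str:
    """Return 'red', 'amber', or 'green' for a described piece of data.

    Checks the most restrictive category first, so anything matching a
    red keyword is flagged red even if it also matches amber terms.
    """
    desc = description.lower()
    for level in ("red", "amber"):
        if any(keyword in desc for keyword in CLASSIFICATION[level]):
            return level
    return "green"

print(traffic_light("draft marketing copy"))          # -> green
print(traffic_light("client name and phone number"))  # -> red
```

Even a crude check like this makes the policy tangible: staff see the classification at the moment they are about to share data, not in a document they read once.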

6. Audit which AI tools your team is actually using

Shadow AI is real. Your team is almost certainly using AI tools you do not know about. Run a quick survey or check browser extensions and app installs. You cannot secure what you cannot see.

7. Use your own API integrations instead of copy-pasting into chat windows

When you connect to AI models via API, you get far more control over data handling. API usage typically comes with stronger data protections and no model training on your inputs. This is how FlowWorks builds automations for clients.
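One advantage of the API route is that every payload is constructed in your own code before it is sent, so you can inspect, redact, and log exactly what leaves your network. The sketch below builds a request body without sending it; the field names follow OpenAI's chat-completions format as one example, but check your own provider's documentation, and note that providers such as OpenAI state API inputs are not used for training by default (verify this in the current policy for your plan).

```python
import json

def build_request(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Build the JSON body for a chat-style API request.

    Nothing here leaves your network until you explicitly POST it --
    which is the point: an API integration gives you a checkpoint
    where payloads can be reviewed, redacted, and logged.
    """
    payload = {
        "model": model,  # example model name; substitute your provider's
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_request("Summarise this quarter's publicly released figures.")
print(body)
```

In a real integration this payload would pass through your redaction and logging steps before being sent with your API client of choice.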

8. Train your team, not just once, but regularly

AI tools change fast. A policy written six months ago may already be outdated. Run brief quarterly refreshers covering new tools, updated risks, and any changes to your internal policy.

9. Log AI usage for sensitive workflows

For regulated industries or high-risk processes, keep a record of what AI tools were used, what data was provided, and what outputs were generated. This is essential for compliance with the Privacy Act and industry-specific regulations.
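A usage log does not need special software. The sketch below appends one JSON record per AI interaction to a plain text file (JSON Lines format); the field names are suggestions only, and you should align them with whatever your compliance obligations actually require.

```python
import datetime
import json
from pathlib import Path

LOG_FILE = Path("ai_usage_log.jsonl")  # example path; store somewhere access-controlled

def log_ai_usage(tool: str, user: str, data_class: str, purpose: str) -> dict:
    """Append one audit record per AI interaction, as a JSON line."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "data_classification": data_class,  # e.g. green / amber / red
        "purpose": purpose,
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_ai_usage("ChatGPT Team", "j.smith", "amber", "Summarise anonymised survey results")
```

An append-only log like this is cheap to keep and invaluable when you need to reconstruct what data was exposed after an incident.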

10. Have an incident response plan

If someone accidentally pastes sensitive data into a public AI tool, what happens next? Define the steps: who to notify, how to assess the exposure, and what remediation looks like. Treat it with the same seriousness as any other data breach.

What Should Never Go Into an AI Tool

Regardless of the tool, the plan tier, or the provider's assurances, certain types of data should never be entered into any AI chat interface. If your team remembers nothing else from this guide, remember this list.

Client names, addresses, email addresses, or phone numbers
Financial records, bank details, credit card numbers, or tax file numbers
Passwords, API keys, access tokens, or security credentials
Employee personal information, payroll data, or HR records
Legal documents, contracts, or terms under negotiation
Proprietary source code, trade secrets, or unreleased product details
Health records or any information covered by the Privacy Act's Australian Privacy Principles
Internal strategic documents, board minutes, or confidential business plans

If you need AI to process any of the above, the right approach is to use secure, enterprise-grade integrations with proper data processing agreements, encryption, and access controls. This is fundamentally different from pasting data into a chat window. The OAIC's guidance on commercially available AI products provides further detail on your obligations when handling personal information through AI tools.

How to Build an Internal AI Usage Policy

Your AI policy does not need to be long or complex. It needs to be clear, specific, and practical. Here is a framework that works for businesses of 5 to 200 employees. For a broader look at governance frameworks, see our guide on AI governance in Australia.

1. Purpose and scope

State why the policy exists and who it covers. Keep it simple: "This policy governs how [Company Name] employees use AI tools in the course of their work. It applies to all staff, contractors, and anyone accessing company systems."

2. Approved tools list

Maintain a list of AI tools that have been reviewed and approved for business use. Include the tier (free vs business), what each tool may be used for, and any restrictions. Update this list quarterly.

3. Data classification rules

Define what data can be entered into AI tools using the traffic-light system: Green (safe to use), Amber (use with caution and anonymisation), Red (never enter into any AI tool). Give specific examples for each category relevant to your business.

4. Acceptable use guidelines

Spell out what AI tools may be used for (drafting emails, brainstorming, research, summarising public documents) and what they may not be used for (processing client data, making final decisions on legal or financial matters, generating content published without human review).

5. Output review requirements

All AI-generated content that will be shared externally, used in client deliverables, or relied upon for business decisions must be reviewed by a qualified person before use. AI is a drafting tool, not an authority.

6. Incident reporting procedure

Explain what to do if someone accidentally shares restricted data with an AI tool. Include who to contact, what to document, and the timeline for reporting. Make it clear that reporting is expected and will not be punished if done promptly.

7. Review and update schedule

Set a review date. AI tools and their data practices change rapidly. A policy that is reviewed every six months will stay relevant. One that sits in a drawer for two years will not.

The most effective policies are the ones that make the right thing easy. If your approved AI tools are genuinely useful and accessible, people will use them. If your policy is reasonable and clearly explained, people will follow it. If you want help assessing whether your business is ready to formalise its AI practices, our AI readiness assessment is a good starting point.

Frequently Asked Questions

Is it safe to put business data into ChatGPT?

It depends on your plan and your settings. On the free tier, OpenAI may use your inputs to improve their models unless you opt out. On ChatGPT Team and Enterprise plans, your data is not used for training by default. Regardless of the plan, you should never enter client-identifiable information, passwords, or proprietary data into any AI chat tool. If you need AI to process sensitive data, use API integrations with proper data processing agreements.

Do AI tools train on my business data?

Some do, some do not. It depends entirely on the tool, the plan tier, and your settings. OpenAI's free ChatGPT tier can use your conversations for training unless you opt out. Google's Gemini for Workspace does not train on business data. Anthropic's Claude business plans do not train on your inputs. Always check the specific tool's data usage policy, and opt out of training wherever the option exists.

What should an AI usage policy cover for a small business?

At minimum, your policy should cover: which AI tools are approved for use, what types of data can and cannot be entered, who is responsible for reviewing AI-generated outputs, how to report accidental data exposure, and a review schedule to keep the policy current. You do not need a 50-page document. A clear, practical two- to three-page policy that your team actually reads is far more effective.

What does the Australian government say about AI and data security?

The Australian Signals Directorate (ASD) and cyber.gov.au have published guidance recommending that organisations assess AI tools before deployment, avoid sharing sensitive data with public AI services, and implement usage policies. The Australian Government has also released an AI policy guide and template for organisations. The Privacy Act requires businesses to protect personal information regardless of whether it is processed by a human or an AI tool. The OAIC has also published guidance confirming that using AI does not reduce your obligations under the Australian Privacy Principles.

How do I stop employees from using unapproved AI tools at work?

You cannot eliminate shadow AI entirely, but you can reduce it significantly. Start by providing approved alternatives that actually meet your team's needs. If people are using ChatGPT because their work tools are slow, blocking ChatGPT will not fix the problem. Combine approved tools with clear guidelines, regular training, and a culture where people feel comfortable asking before trying something new.

Need Help Securing Your AI Workflow?

We help Australian businesses implement AI tools with proper security, governance, and data protection from day one. Book a call and we will review your current AI usage and identify any gaps.

Get in touch
FlowWorks Team
AI Automation & Consulting · Melbourne, Australia
1300 484 044 · ops@flowworks.com.au · Melbourne, Australia