Compliance · March 2026 · 12 min read

AI and Data Privacy: Where Does Your Customer Data Actually Go?


Your team is using AI tools. You know it because you encouraged it, or you suspect it because you have noticed cleaner emails and faster turnaround on reports. Either way, customer data is going somewhere, and most business owners have no idea where.

Here is the number that should get your attention: 34.8% of employee inputs into ChatGPT now contain sensitive data. That is up from 11% in 2023. It includes customer names, email addresses, financial information, health records, and business-confidential data. Every one of those inputs potentially leaves your control and enters a third-party system with its own data handling policies.

This is not an argument against using AI. It is an argument for understanding exactly where your data goes, what rights you retain over it, and what the OAIC expects you to do about it.

The Consumer vs API Distinction That Changes Everything

34.8% of ChatGPT inputs contain sensitive data, up from 11% in 2023.
Zero AI-specific exemptions in the Australian Privacy Act.
$50M maximum penalty for serious privacy breaches under the amended Act.

The most important distinction in AI data privacy is one that most business owners do not know exists: the difference between consumer versions and API access.

Consumer versions (the ChatGPT website, Claude.ai, Gemini in your browser) typically store your conversations, may use them to improve their models, and retain data according to policies that most users never read. When your employee types a customer's financial details into the free version of ChatGPT, that data is stored on OpenAI's servers, potentially used for model training, and retained indefinitely.

API access (used by developers and automation platforms) has different rules. OpenAI's API does not train on your data by default. Anthropic's Claude API retains data for 30 days for safety monitoring and then deletes it. Google's Gemini API terms vary by product and tier. This matters because most business AI automation is built on APIs, which offer significantly better data protection than having staff paste information into consumer chat interfaces.
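
To make the distinction concrete, here is a minimal sketch of what API access looks like, using OpenAI's official Python SDK. The model name and prompt are illustrative only; the point is that requests made this way fall under OpenAI's API terms, where inputs are not used for training by default.

```python
# Minimal sketch of API access via OpenAI's official Python SDK
# (pip install openai). Requests made this way fall under the API
# terms: no training on inputs by default, 30-day log retention.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You draft short customer updates."},
        {"role": "user", "content": "Confirm invoice 1042 was paid on 12 March."},
    ],
)
print(response.choices[0].message.content)
```

The same call shape applies whether you write the code yourself or an automation platform makes it on your behalf. Either way, the data handling is governed by the API terms rather than the consumer privacy policy.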

Business tiers (ChatGPT Team, ChatGPT Enterprise, Claude for Business) sit in the middle. They provide data isolation and do not train on your inputs, but they store conversations on the provider's servers. The data handling is better than consumer tiers but not as controlled as direct API access through your own infrastructure.

Where Each Platform Sends Your Data

OpenAI (ChatGPT)

Free and Plus tiers: conversations stored on US servers, used for model training unless you opt out (Settings > Data Controls > Improve the model).
Team and Enterprise: data not used for training, stored in isolated environments, still on US servers.
API: no training on your data by default, 30-day log retention, data processed in the US.

Anthropic (Claude)

Consumer (claude.ai): conversations stored, may be used for safety research and model improvement.
Business tiers: data not used for training.
API: 30-day retention for safety monitoring, no model training, data processed in the US (with some European and Asia-Pacific availability depending on tier).

Google (Gemini)

Consumer Gemini: data stored, may be reviewed by humans for quality improvement.
Workspace Gemini: governed by your Google Workspace agreement and data processing terms.
Vertex AI (API): enterprise data protection, no model training on your data, regional processing options available including Australia.
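
If regional processing matters for your compliance assessment, Vertex AI lets you pin the processing location when you initialise the SDK. A hedged sketch, assuming the google-cloud-aiplatform package and a GCP project of your own; regional model availability varies, so check Google's documentation before relying on it.

```python
# Sketch: pinning Vertex AI processing to an Australian region
# (pip install google-cloud-aiplatform). Regional model availability
# varies, so verify your chosen model is served in this location.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(
    project="your-gcp-project-id",    # placeholder: your own project
    location="australia-southeast1",  # Sydney region
)

model = GenerativeModel("gemini-1.5-flash")  # illustrative model name
result = model.generate_content("Draft a polite payment reminder.")
print(result.text)
```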

Microsoft (Copilot)

Consumer Copilot: data stored by Microsoft, subject to their consumer privacy policy.
Microsoft 365 Copilot: processed within your Microsoft 365 tenant, subject to your enterprise data processing agreement; data does not leave your tenant boundary.
Azure OpenAI Service: enterprise controls, regional deployment options, no model training on your data.

Your Obligations Under the Privacy Act

The Privacy Act amendments do not create a separate AI regime. They apply existing principles to new technology. Under the Australian Privacy Principles, you must:

APP 1: Open and transparent management. Your privacy policy must disclose that you use AI tools to process personal information. If you are feeding customer data into ChatGPT, Claude, or any other AI system, your privacy policy needs to say so. Most business privacy policies written before 2024 do not mention AI.

APP 6: Use and disclosure. Personal information collected for one purpose should not be used for another without consent. If you collected a customer's email for invoicing and then feed it into an AI marketing tool, that is a secondary use, permitted only with consent or where the customer would reasonably expect it.

APP 8: Cross-border disclosure. If personal information is sent to an AI provider overseas, you are responsible for ensuring they handle it in accordance with Australian privacy standards. Most major AI providers are US-based. You need to assess whether their data handling meets Australian requirements before sending personal information to their systems.

APP 11: Security. You must take reasonable steps to protect personal information from misuse, interference, loss, and unauthorised access. Allowing staff to paste customer data into unvetted AI tools without security controls may breach this obligation.
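
One concrete "reasonable step" is stripping obvious identifiers before text leaves your systems. A minimal sketch, assuming simple regex patterns; a production setup would use a dedicated PII-detection library, since regexes miss names and free-form identifiers.

```python
# Minimal redaction sketch: mask obvious identifiers before sending
# text to an external AI service. Illustrative patterns only; real
# implementations should use a dedicated PII-detection library.
import re

PATTERNS = {
    "PHONE": re.compile(r"(?:\+?61|0)[2-478](?:[ -]?\d){8}"),  # AU numbers
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),       # tax file numbers
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call Jane on 0412 345 678 or email jane@example.com"))
# -> Call Jane on [PHONE] or email [EMAIL]
```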

The Practical Checklist

1. Audit your AI usage. Identify every AI tool your team uses and what data goes into each one. This includes browser-based tools, mobile apps, browser extensions, and any AI features built into your existing software. You cannot manage what you do not know about.

2. Classify your data. Not all data needs the same protection. Create three tiers: public information (can go into any AI tool), internal business data (approved tools only), and personal/sensitive information (enterprise-tier AI tools with data processing agreements only). The sketch after this checklist shows one way to make those tiers enforceable.

3. Choose the right tier. For any workflow involving personal information, use business-tier AI tools or API access. The cost difference between free and business tiers is trivial compared to the cost of a privacy breach. ChatGPT Team is $25 per user per month. That is the cheapest insurance policy you will ever buy.

4. Update your privacy policy. Disclose AI usage in clear, specific terms. State which AI tools you use, what personal information they process, and where data is stored. The compliance checklist covers this in detail.

5. Create an AI usage policy. Set rules for what data can go into which tools. Make it specific enough that employees do not have to guess. An AI usage policy template is the fastest way to get this in place.

6. Review vendor agreements. Check the terms of service for every AI tool you use. Look for data retention periods, training data usage, data location, and sub-processor lists. If a vendor's terms are unacceptable, switch to one whose terms are.

7. Train your team. Train every employee on data classification and AI usage rules. The biggest privacy risks come from well-meaning staff who do not understand the implications of pasting customer data into consumer AI tools.
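
The classification tiers from step 2 only work if something enforces them. A minimal sketch of a tier gate, with hypothetical tool names and example approvals; the idea is that any automation checks the gate before sending data to a tool.

```python
# Sketch of a data-tier gate for AI tools. Tool names and approved
# tiers are examples only; populate from your own approved-tools list.
from enum import IntEnum

class DataTier(IntEnum):
    PUBLIC = 1     # published prices, marketing copy
    INTERNAL = 2   # business data with no personal information
    PERSONAL = 3   # customer names, contact details, financial data

# Highest tier each tool is approved to receive (example values).
APPROVED_TOOLS = {
    "chatgpt_free": DataTier.PUBLIC,
    "chatgpt_team": DataTier.INTERNAL,
    "openai_api": DataTier.PERSONAL,  # covered by a data processing agreement
}

def can_send(tool: str, tier: DataTier) -> bool:
    """Allow a request only if the tool is approved for this data tier."""
    return tier <= APPROVED_TOOLS.get(tool, DataTier.PUBLIC)

assert can_send("openai_api", DataTier.PERSONAL)
assert not can_send("chatgpt_free", DataTier.PERSONAL)
```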

What Happens When It Goes Wrong

Samsung banned employee use of ChatGPT after staff uploaded proprietary source code. Law firms have faced discipline after client-confidential information appeared in AI conversations. A European data protection authority fined a company for feeding employee data into AI tools without proper notice or consent.

For Australian SMEs, the risks are proportionate but real. A customer who discovers their health information was processed by an AI tool without consent has grounds for a complaint to the OAIC. A client who learns their financial data was used to train an AI model may have grounds for legal action. The reputational damage alone, regardless of formal penalties, can be significant for a small business that depends on trust.

The good news is that compliance is straightforward. Use business-tier tools for sensitive data. Disclose AI usage in your privacy policy. Train your team. The cost of getting this right is minimal. The cost of getting it wrong is not.

The Bottom Line

Your customer data goes wherever your AI tools send it. The consumer version of ChatGPT stores it on US servers and may use it for training. The API version does not. The Privacy Act does not care whether a human or an AI processed the data. Your obligations remain the same. Know where your data goes, choose the right tier of AI tools, and tell your customers what you are doing. This is not about avoiding AI. It is about using it like a professional.

Not Sure If Your AI Setup Is Privacy-Compliant?

Our Free AI Audit assesses your data handling practices alongside automation opportunities. Takes 2 minutes.

Frequently Asked Questions

Does ChatGPT store my data and use it for training?

It depends on the version you use. Free and Plus versions of ChatGPT store your conversations and may use them to train future models unless you opt out in settings. ChatGPT Team and Enterprise versions do not use your data for training. The API (used by developers and automation platforms) does not train on your data by default. This distinction matters enormously. If your staff are using the free version to process customer information, that data may be retained by OpenAI and used in ways you cannot control. Business versions provide data isolation but cost more.

What are my obligations under Australian privacy law when using AI?

Under the Privacy Act 1988 and the Australian Privacy Principles, you must tell customers what personal information you collect, how you use it, and who you share it with. If you feed customer data into AI tools, that counts as a use and potentially a disclosure to a third party. You need to update your privacy policy to reflect AI usage, ensure the AI provider has adequate data protection, and confirm that overseas data transfers comply with APP 8. The OAIC has made clear that existing privacy obligations apply to AI. There is no AI exemption.

Is there a privacy difference between consumer AI tools and API access?

Yes, significantly. When you use ChatGPT, Claude, or Gemini through their consumer interfaces, your inputs may be stored and potentially used for model training. When you access the same models through their APIs, data handling is typically more restrictive. OpenAI's API does not train on your data by default. Anthropic's API has a 30-day retention window. Google's API terms vary by product. For businesses handling sensitive customer data, the API route is almost always safer. The trade-off is that APIs require technical implementation rather than simple browser access.

How do I audit the AI tools my team is already using?

Start by surveying your team directly. Ask what AI tools they use for work, which ones they have accounts for, and what types of data they input. Many employees use AI tools without realising they are sending customer data to third parties. Check browser extensions, installed applications, and subscription billing for AI services. Create an approved tools list specifying which AI platforms are sanctioned for which data types. Implement a simple classification system: public data can go into any tool, internal data only into approved tools, and customer personal information only into enterprise-tier AI services with data protection agreements.

FlowWorks Team
AI Automation & Consulting · Melbourne, Australia