SmartCompany reported in early 2026 that deepfake scams have arrived in Australia with force. “Deepfake bosses to perfect emails” was the headline, and it was no exaggeration. Security Brief AU followed up with its analysis of AI’s 2026 security fallout, confirming that small businesses are now the primary target.
The scams have changed. The Nigerian prince email with broken English is gone. In its place: a perfectly written email from your “accountant” asking you to update banking details. A phone call that sounds exactly like your business partner asking for an urgent transfer. A video call with a deepfake of your CEO instructing the finance team to pay an invoice.
These are not theoretical scenarios. They are happening to Australian businesses right now. And small businesses, with fewer security layers and less formal verification processes, are the easiest targets.
[Statistics panel: share of deepfake scam attempts targeting small businesses via email; seconds of audio needed to create a convincing voice clone of any person; total lost to scams by Australians in 2023 (ACCC), a trend AI is accelerating.]
Traditional phishing emails were easy to spot: bad grammar, generic greetings, obvious urgency. AI has eliminated all of these tells. Modern phishing emails are grammatically perfect, personally addressed, and written in a style that matches the sender they are impersonating.
An AI can scrape your website, LinkedIn profiles, and social media to learn your team’s names, roles, writing style, and current projects. It then generates an email that reads exactly like a message from your colleague, supplier, or accountant. “Hi Sarah, following up on the invoice discussion from yesterday. The client has changed their bank details. Can you update the payment to this BSB and account number?”
That email takes less than a second to generate and costs nothing. At scale, scammers send thousands of these daily. Even a 1% success rate is profitable.
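The economics can be sketched with a back-of-envelope calculation. Every figure below is an assumption chosen for illustration (the article only states the 1% success rate), not sourced data:

```python
# Back-of-envelope scam economics. All figures are illustrative
# assumptions, not data from the article or any report.
emails_per_day = 5000    # assumed volume a single scammer can send
success_rate = 0.01      # the 1% success rate mentioned above
avg_transfer_aud = 8000  # assumed average fraudulent payment

# Generation cost is effectively zero, so revenue is nearly pure profit.
daily_take = emails_per_day * success_rate * avg_transfer_aud
print(f"Assumed daily take: ${daily_take:,.0f} AUD")  # → $400,000 AUD
```

Even with far more conservative assumptions, the attacker's cost per attempt is so close to zero that almost any success rate is profitable.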
Voice cloning is the one that catches people off guard. AI can now clone any voice from a short audio sample. Your voicemail greeting, a YouTube video, a podcast appearance, even a snippet from a phone call. The cloned voice is then used to call your team and impersonate you.
Imagine your bookkeeper receiving a phone call that sounds exactly like you saying, “I need you to pay this invoice urgently. I am in a meeting so I cannot email you the details, but the account number is...” The voice is yours. The cadence is yours. The urgency feels real.
For a small business where everyone knows the boss’s voice, this is devastatingly effective. The technology is commercially available and improves every month.
The most sophisticated variant uses deepfake video in real-time video calls. A scammer joins a Zoom or Teams call wearing a deepfake of a trusted person (your CEO, your client, your accountant). They discuss business, reference real details scraped from public sources, and then make a financial request.
In 2024, a Hong Kong finance worker was tricked into transferring US$25 million after a deepfake video call with what appeared to be the company’s CFO and other colleagues. The same technology is now accessible to small-time scammers and is being deployed against Australian businesses.
Large corporations have multi-factor authentication, dedicated security teams, formal payment authorisation processes, and employee training programmes. Most small businesses have none of these.
In a 10-person business, the person who pays invoices is often the same person who answers the phone. There is no second approval step. There is no security team reviewing email headers. The boss says “pay it” and it gets paid. Scammers know this.
Cybersecurity Ventures reports that SMEs are now the fastest-growing target for AI-powered scams, precisely because the return on effort is highest. It takes the same amount of effort to deepfake a call to a small business as to a large corporation, but the small business is far more likely to fall for it.
You do not need enterprise-level security. But you do need basic processes that AI cannot bypass.
Establish a verbal verification protocol. Any request involving money (payment changes, new invoices, bank detail updates) requires verbal confirmation via a known phone number. Not the number in the email. Not the number from the call. The number you have saved in your contacts. This single step defeats most AI scams.
Create a code word system. Agree on a code word with your team and key suppliers, to be used in any unusual financial request. AI can clone a voice and mimic a writing style, but it cannot guess an internal code word that has never been written down publicly.
Require dual authorisation for payments above a threshold. No single person should be able to authorise payments above a set amount (we suggest $2,000 for most SMEs) without a second approval. This is good financial hygiene regardless of AI scams.
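The dual-authorisation rule is simple enough to express in a few lines. This is a minimal sketch assuming approvers are identified by name; the function names are illustrative and the $2,000 threshold is the article's suggested figure, not a standard:

```python
DUAL_AUTH_THRESHOLD_AUD = 2000  # suggested cut-off for most SMEs

def required_approvals(amount: float) -> int:
    """Payments at or above the threshold need a second approver."""
    return 2 if amount >= DUAL_AUTH_THRESHOLD_AUD else 1

def can_pay(amount: float, approvers: list[str]) -> bool:
    """Approve only when enough *distinct* people have signed off."""
    return len(set(approvers)) >= required_approvals(amount)

# A $5,000 invoice approved by one person is blocked:
print(can_pay(5000, ["owner"]))                # False
# The same invoice with a second approver goes through:
print(can_pay(5000, ["owner", "bookkeeper"]))  # True
# Small payments still need a single sign-off:
print(can_pay(150, ["bookkeeper"]))            # True
```

Note the `set()` call: the same person approving twice does not count as dual authorisation, which is exactly the loophole a deepfake caller would try to exploit.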
Train your team. Even the best process fails if your team does not know it exists. Spend 30 minutes showing them examples of AI-generated phishing emails, playing voice clone demos, and walking through the verification process. Do this quarterly, not just once.
Be cautious with voice and video online. Limit the amount of audio and video of key personnel available publicly. Voicemail greetings, video testimonials, and podcast appearances all provide source material for voice cloning. Consider whether a text-based voicemail greeting is sufficient.
For a comprehensive cyber security checklist including AI-specific threats, see our detailed guide.
AI can protect your business and threaten it. Our Free AI Audit assesses both your opportunity and your risk profile, so you can move forward with confidence.