
The conversational AI market is exploding. Grand View Research suggests it’s set to jump from $11.58 billion in 2024 to $41.39 billion by 2030, a massive 23.7% annual growth rate. While businesses use AI to boost customer service, cybercriminals are jumping in too, launching slick impersonation scams.

These scams are spreading fast. A report from the Identity Theft Resource Center shows a 148% spike in impersonation scams between April 2024 and March 2025 as scammers spin up fake business websites, create lifelike AI chatbots, and build voice agents that sound just like real company reps. In 2024 alone, the Federal Trade Commission reported $2.95 billion in losses due to impersonation scams.

Heimdal breaks down how scammers fake customer service, points out the industries they hit hardest, and shares simple ways to double-check who you’re talking to before giving up your personal info.

The technology behind the deception

Today’s AI scams blend high-tech tools with surprisingly simple methods, making impersonation easier than ever.

How AI powers slick impersonation

Voice cloning has gotten scarily accurate. McAfee notes that scammers need just three seconds of audio to clone someone's voice. In its survey of 7,000 people, 70% said they weren't confident they could tell a cloned voice from the real one.

AI chatbots have also leveled up. They mirror tone, language, and responses so well they’re almost impossible to tell apart from real customer service reps.

Fake website creation has exploded, too. AI can whip up polished product descriptions, realistic images, and legit-looking fake reviews in minutes.

Fraud is easier than ever

Starting an AI scam is cheap and fast. According to a 2024 Deloitte study, scamming software sells on the dark web for as little as $20, and consumer advice charity Advice Direct Scotland found that an AI scam can go live in under two minutes.

These low costs and easy access have supercharged growth. Open-source fraud reporting platform Chainabuse reports that generative AI scams quadrupled between May 2024 and April 2025. 

More than 38,000 new scam pages popped up every day in the first half of 2024, according to Security Boulevard. The mix of powerful tech and easy access makes AI business impersonation one of the fastest-growing threats facing consumers today.

Industries under siege: Most targeted sectors

No industry is completely safe from AI impersonation scams, but some face much bigger risks because of the sensitive data and money they handle.

Financial services: The primary target

The financial sector sits at the top of scammers’ hit list. The Financial Crimes Enforcement Network warned U.S. banks in November 2024 about the surge in AI-powered identity fraud. 

Deloitte expects U.S. banking fraud losses to soar from $12.3 billion in 2023 to $40 billion by 2027. Signicat’s Battle Against AI-driven Identity Fraud report, based on February 2024 data, found that AI is behind more than 42% of detected fraud attempts.

E-commerce and retail

Online shopping platforms are another favorite target. According to Juniper Research, e-commerce fraud is projected to rocket from $44.3 billion in 2024 to a staggering $107 billion by 2029. Microsoft has exposed scams using fake shopping sites and AI chatbots designed to harvest payment details and personal info.

Most impersonated entities

Scammers love going after businesses. The Identity Theft Resource Center’s 2025 report notes that about 51% of impersonation scams target businesses directly, while another 21% focus on financial institutions, both rich in the data scammers crave.

As AI scams keep evolving, businesses in these industries need to stay alert and rethink their defenses to keep up with this fast-moving threat.

Anatomy of modern AI scams: Real-world case studies 

Looking at actual incidents shows just how sophisticated and convincing AI-powered scams have become, even fooling cautious, tech-savvy individuals.

The $25 million deepfake video conference scam

In one of the most shocking cases to date, a Hong Kong finance worker was tricked into transferring 200 million Hong Kong dollars (about $25.6 million) after attending a deepfake video call with what appeared to be the company’s chief financial officer and other senior colleagues.

The employee initially suspected a phishing attempt but was convinced by a highly realistic video conference. Scammers used publicly available video footage to create AI-generated versions of each participant, perfectly mimicking voices and facial expressions to make the fake meeting appear completely authentic.

Tech company CEO impersonations

Cybercriminals have increasingly targeted tech companies by impersonating top executives. At password manager LastPass, an employee received calls, texts, and WhatsApp messages from someone posing as the CEO. The voice was cloned using audio taken from YouTube videos.

At cloud security firm Wiz, scammers used an AI-generated voice clone of the CEO to leave voicemails for dozens of employees, asking for sensitive credentials. In both cases, the impersonations were realistic enough to nearly fool seasoned security professionals.

Consumer-facing scams

AI scams aren’t limited to corporate environments. In Canada, three men lost a combined 373,000 Canadian dollars (over $273,000) after being convinced by deepfake videos featuring what appeared to be Justin Trudeau and Elon Musk promoting a fake investment scheme.

Voice cloning scams are also widespread. In the McAfee study, 10% of respondents received a message from an AI voice clone. Of those targeted, 77% reported financial losses.

The ‘scam sweatshop’ operation

According to The Sunday Post, authorities in Scotland uncovered so-called AI “scam sweatshops,” where criminals generated hyperpersonalized fraud campaigns in under two minutes using freely available AI apps. These operations swindled over 700,000 pounds (more than $945,000) from Scots through highly targeted voice and text-based scams.

These real-world examples highlight a sobering reality: AI-driven scams are no longer crude or obvious; they are highly advanced and often indistinguishable from legitimate interactions.

Regulatory response: The FTC fights back

As AI-powered impersonation scams have exploded, regulators have scrambled to keep up. Leading the charge, the FTC has rolled out new rules to protect both consumers and businesses.

The Government and Business Impersonation Rule

Law firm WilmerHale explains that the FTC’s Impersonation Rule, which took effect in April 2024, makes it illegal to materially and falsely pose as a government agency or business. This landmark rule gives the FTC the power to move fast against scammers running fake websites, pushing fraudulent chatbots, or using AI voice agents to mislead people.

Violators face fines of up to $53,088 per violation. The rule also allows the FTC to drag scammers into federal court to secure refunds for victims, a big step in helping people get their money back.

First-year results

The FTC didn’t waste time. In its first year, the agency filed five enforcement actions under the new rule and shut down 13 fake websites posing as the commission itself.

The FTC also launched “Operation AI Comply,” a crackdown on AI-powered fraud. This effort has targeted AI chatbots offering fake “legal advice” and tools flooding review sites with phony testimonials, all designed to erode public trust.

Proposed extensions

Scams keep evolving, and the FTC knows it. ReadWrite notes that the agency has proposed expanding the rule to cover impersonation of individuals, a direct move against voice cloning and deepfake scams that can mimic real people almost perfectly.

These regulatory moves mark a strong first step. But they also show that fighting AI scams will require constant vigilance from both regulators and the public.

Red flags: How to spot AI impersonation

Even the most polished AI scams leave small tells. Learning to catch these clues can help you avoid falling for them.

Chatbot warning signs

Response patterns: If a chatbot replies instantly and flawlessly every time, be cautious. While quick responses are normal, perfect spelling and grammar — combined with robotic or awkward phrasing — often point to AI, not a human.

Behavioral red flags: Be wary if the bot repeats itself often or keeps pushing one solution. Real reps usually offer options and handle specific questions smoothly. AI bots tend to struggle when the conversation goes off-script.

Technical signs: Bots often have uniform response delays, no matter how complex the question is. They’re also available 24/7 without normal staffing patterns.
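For the technically inclined, here is a minimal Python sketch of the timing heuristic described above: it flags a conversation whose reply delays barely vary across questions of different difficulty. The sample-size and variance thresholds are illustrative assumptions, not calibrated values.

```python
import statistics

def looks_automated(reply_delays_sec: list[float]) -> bool:
    """Flag suspiciously uniform reply timing as one weak signal of a bot.

    Human agents take noticeably longer on harder questions, so near-constant
    delays hint at automation. Thresholds here are illustrative assumptions.
    """
    if len(reply_delays_sec) < 5:
        return False  # too few samples to judge
    return statistics.stdev(reply_delays_sec) < 0.5  # seconds

# Five replies that each arrived about a second after the question was sent
print(looks_automated([1.1, 1.0, 1.2, 1.1, 1.0]))  # True
```

A low variance alone proves nothing; treat it as one more data point alongside the behavioral signs above.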

Voice cloning detection

Audio quality issues: Listen for weird pauses, odd tone shifts, or strange audio glitches. AI voices usually miss the natural emotion and flow of real speech.

Conversation patterns: Scammers using cloned voices often keep calls short and urgent to avoid questions. If someone you know sounds “off” or acts strangely, don’t ignore it.

Website and email verification

Visual inspection: Real business websites generally show full contact details, including a physical address, phone number, and official email. Look for security badges and seals from trusted organizations.

Communication channels: When in doubt, go straight to the source. Call or email using contact info from official statements or the company’s main website, not links from pop-ups or emails.
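One extra check that takes seconds is inspecting a site's TLS certificate. A freshly issued certificate isn't proof of fraud (legitimate sites renew constantly), but combined with other red flags it adds to the picture. Here is a minimal sketch using only Python's standard library; the domain shown is a placeholder, and you should type the address yourself rather than following an emailed link.

```python
import socket
import ssl
from datetime import datetime

def inspect_certificate(hostname: str, port: int = 443) -> None:
    """Print basic TLS certificate details for a domain you typed yourself."""
    context = ssl.create_default_context()  # validates against system CA store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    subject = dict(item[0] for item in cert["subject"])
    issuer = dict(item[0] for item in cert["issuer"])
    issued = datetime.strptime(cert["notBefore"], "%b %d %H:%M:%S %Y %Z")
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    print(f"Site:    {subject.get('commonName')}")
    print(f"Issuer:  {issuer.get('organizationName')}")
    print(f"Issued:  {issued:%Y-%m-%d}  Expires: {expires:%Y-%m-%d}")

inspect_certificate("example.com")  # placeholder domain
```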

Spotting these signals and taking a moment to double-check can stop a scam before it even starts.

Protection strategies: Your defense against AI scams

Once you learn to spot impersonation attempts, the next step is building strong defenses. A mix of smart habits and proactive strategies can make a huge difference in keeping you safe.

Immediate verification steps

Multichannel confirmation: Always double-check unexpected requests, even if the number seems familiar. If a chatbot or caller asks for sensitive info or urgent payments, hang up or close the chat. Then, reach out directly through an official phone number or email from the company’s website.

Family and business protocols: Set up a “safe word” with family to confirm emergencies. For businesses, employers can implement dual approval for transactions so no single person can approve large payments alone.
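To make the dual-approval idea concrete, here is a minimal Python sketch of how such a control can work in software. The class name, the $10,000 threshold, and the two-approver rule are assumptions for the example, not a reference to any specific payment system.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount_usd: float
    payee: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requested_by:
            raise ValueError("requester cannot approve their own payment")
        self.approvals.add(approver)

    def can_release(self, dual_approval_threshold: float = 10_000.0) -> bool:
        # Large payments need two distinct approvers; smaller ones need one.
        required = 2 if self.amount_usd >= dual_approval_threshold else 1
        return len(self.approvals) >= required

# A $25,000 transfer stays blocked until a second person signs off
request = PaymentRequest(25_000.0, "Acme Supplies", requested_by="alice")
request.approve("bob")
print(request.can_release())  # False: still needs a second approver
request.approve("carol")
print(request.can_release())  # True
```

The point of the design is that a single convincing deepfake call to one employee can never move money on its own.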

Digital hygiene practices

Voice protection: Consider using an automated voicemail greeting instead of your own voice to cut down cloning risks. Avoid posting voice recordings online, too; according to McAfee’s report, 53% of adults share voice data online weekly without thinking about the risks.

Information sharing: Never share passwords, Social Security numbers, or financial details over chat, email, or phone unless you’re absolutely sure who you’re talking to. Be extra cautious with urgent or pushy requests.

Business security measures

Employee training: Teach employees about new AI impersonation tactics. Regularly update them on scam trends and make sure they know the steps to verify any requests involving sensitive data or large payments.

Technical safeguards: Use multifactor authentication to reduce the risk of unauthorized access, and review financial statements and account activity regularly to catch suspicious transactions early.
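For context on what multifactor authentication does under the hood, here is a minimal sketch of the time-based one-time password (TOTP) algorithm that authenticator apps implement (RFC 6238), using only Python's standard library. The Base32 secret below is a made-up demo value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    message = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret; prints a 6-digit code
```

Because the code changes every 30 seconds and never travels over chat or email, a scammer who phishes a password alone still can't log in.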

Combining sharp habits, solid tech tools, and clear protocols gives you the best defense against fast-evolving AI scams.


Staying ahead of the AI arms race 

AI has completely reshaped the fraud game. Putting advanced tools into almost anyone’s hands allows scammers to pull off schemes that used to require elite hacking skills. Because of this, old-school detection methods just can’t keep up.

But despite these challenges, consumers still have strong ways to fight back. Using solid verification habits and staying skeptical are some of the best defenses for keeping personal and financial info safe.

On the regulatory side, the FTC’s tough enforcement of the Impersonation Rule shows the government is serious about stopping AI-powered scams. New proposals, such as expanding the rule to cover individual impersonation, show policymakers are adjusting to keep pace with fast-changing threats.

Looking forward, AI scams will only get more advanced, so our awareness and defenses need to evolve too. Staying informed, regularly updating security habits, and sharing what you learn with others will be key to staying safe.

If you think you’ve run into an AI impersonation scam, report it at ReportFraud.ftc.gov. Your quick action protects you and helps authorities spot new threats, keeping others from getting caught in the same traps.

This story was produced by Heimdal and reviewed and distributed by Stacker.

Written by: Evan Ullman for Heimdal

If you liked this article, follow us on LinkedIn, X, Facebook, and YouTube for more cybersecurity news and topics.

