Social engineering and AI-driven fraud are climbing to the top of global security concerns. The World Economic Forum lists them among the biggest cybersecurity threats of 2025. And the threat is no longer just spam emails with obvious typos. Today’s scams are targeted, convincing, and increasingly powered by artificial intelligence.
We’re not just talking about phishing links or fake support calls. We’re talking about deepfaked voicemails from loved ones. Phony messages that sound like your boss. Emails that mirror your own writing style. AI makes it easy to personalize deception on a massive scale.
In this article, Heimdal breaks down where social engineering started, how it’s evolving with AI, and who’s most likely to fall for it. We’ll highlight real-world examples and finish with straightforward steps to help individuals and organizations protect themselves.
Origins and evolution of social engineering
Social engineering relies on psychological manipulation. Scammers trick people into revealing personal information or taking actions they wouldn’t normally consider. In short, it’s persuading someone to act against their own interests, often by exposing private or confidential information.
The concept isn’t new; early scams go back to in-person cons. But the digital shift changed everything. The internet gave scammers far more reach, making scams faster, broader, and more convincing.
According to the FBI’s 2024 IC3 Report, this explosion in connectivity has dramatically expanded the scale of attacks.
Who is most vulnerable?
No one is immune to social engineering, but some people and places see more damage than others.
Older adults face the highest losses. In 2024, individuals 60 and older reported the most complaints to the Internet Crime Complaint Center. They also lost more money than any other age group: over $4.8 billion, up 43% from 2023. Phishing/spoofing and tech support scams hit this group hardest.
Where you live also matters. California, Texas, Florida, and New York had the most reported complaints and the highest losses last year. California saw over $2.5 billion in losses, while Texas lost more than $1.3 billion and Florida about $1 billion.
Organizations are just as vulnerable. The WEF’s 2025 Outlook highlights how critical sectors, such as government, healthcare, finance, and infrastructure, face heightened cyber risks.
How scams build on traditional methods and evolve with technology
Today’s scams build on familiar tricks but are more convincing. Classic methods still dominate: phishing, business email compromise (BEC), romance scams, and fake tech support calls remain go-to techniques.
In 2024, the FBI received over 193,000 phishing and spoofing complaints, and BEC scams caused $2.77 billion in losses. AI is making scams like these harder to spot.
AI-powered scams use several techniques:
- Deepfakes. Scammers fake a loved one’s voice or mimic an executive in a video.
- Hyper-personalized phishing. AI crafts clean, accurate, and targeted emails.
- Automation. Large-scale attacks launch in seconds with little effort.
The cost is staggering. According to the IC3 report, investment fraud (often AI-driven) led to $6.57 billion in losses in 2024. Cryptocurrency fraud reached $9.3 billion, with adults over 60 most affected. What was once obvious is now polished and personal.
Real-life examples of AI-driven social engineering
Urgency and fear remain core tools among scammers. One common scam begins with a fake call or message claiming a loved one is in danger.
Victims are pressured to act fast: send money, share banking info, or buy gift cards. Empathy-driven scams work just as well. Romance scams build trust and then ask for money.
The latest twist is generative AI tools. Criminals now use them to build fake identities and clone online profiles. The IC3 warns that this tactic is spreading, especially in financial fraud, with criminals using AI-generated text, images, audio, and video.
Whether it’s fear or empathy, the goal is the same. Scammers hope to convince victims that a problem is real and get them to act before thinking.
How to identify and avoid AI-driven social engineering scams
Stopping these scams starts with spotting the signs.
For individuals:
- Verify money or data requests through a separate, trusted communication channel.
- Question urgency. Don’t act on impulse when receiving unsolicited messages.
- Look for deepfake signs, such as robotic speech or visual glitches.
- Use strong passwords and enable MFA (a short sketch after this list shows how MFA’s one-time codes work).
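One reason MFA helps: the codes are derived from a shared secret plus the current time, so a stolen password alone isn’t enough. Below is a minimal sketch of how those time-based one-time passwords (TOTP) work, using the open-source pyotp library. This is a hedged illustration of the mechanism, not a recommendation of any specific product.

```python
# A minimal sketch of the time-based one-time passwords (TOTP) behind
# most authenticator apps, using the pyotp library.
import pyotp

# Generate a shared secret. In practice the service creates this and
# shows it to the user once, usually as a QR code scanned by an app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Both sides derive the same 6-digit code from the secret and the current
# time, so the code rotates every 30 seconds and is useless to a scammer
# who captures it later.
code = totp.now()
print("Current code:", code)
print("Verifies:", totp.verify(code))  # True only within the validity window
```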
For organizations:
- Train employees to detect phishing and fraud attempts.
- Require verification for fund transfers and sensitive data requests.
- Use strong email filtering and anti-phishing tools (see the sketch after this list for one basic check).
- Report scams at IC3.gov or contact your local FBI office.
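For teams wondering where email filtering gets its signal, much of it rests on DNS-based sender authentication. Here is a minimal sketch, using the open-source dnspython library, that checks whether a domain publishes SPF and DMARC records, two controls that help mail filters reject spoofed senders. The domain shown is a placeholder.

```python
# A minimal sketch (requires the dnspython package) that checks whether a
# domain publishes SPF and DMARC records. "example.com" is a placeholder.
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode() for r in answers]

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself; DMARC under _dmarc.<domain>.
    spf = any(r.startswith("v=spf1") for r in txt_records(domain))
    dmarc = any(r.startswith("v=DMARC1") for r in txt_records(f"_dmarc.{domain}"))
    print(f"{domain}: SPF {'present' if spf else 'missing'}, "
          f"DMARC {'present' if dmarc else 'missing'}")

check_email_auth("example.com")  # replace with your own domain
```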
Staying ahead of the scam
Social engineering isn’t slowing down. It’s adapting, and fast. AI gives scammers new tools to make old tricks far more believable. What used to be low-effort deception is now hyper-targeted, high-tech manipulation.
But while the tools may be new, the core defense remains the same: awareness, verification, and quick reporting. Think before acting, pause when something feels rushed, ask questions, and don’t be afraid to confirm through another channel. Technology can help, too. MFA, strong passwords, and smart filters all put up real barriers between scammers and their targets.
The FBI urges everyone, individuals and businesses included, to report scams and share information. Even one report could help someone else avoid the same trap. Staying ahead of AI-driven scams requires preparation. The more you know, the harder it is to be fooled.
This story was produced by Heimdal and reviewed and distributed by Stacker.
Written by: Evan Ullman for Heimdal