- AI impersonation scams use voice cloning and deepfake video to convincingly mimic trusted people
- Cybercriminals target people and businesses through calls, video meetings, messages, and emails
- Experts say that independently verifying identities and using multi-factor authentication are key to protecting yourself
Imagine getting a frantic call from your best friend. Their voice is shaky as they tell you they’ve been in an accident and urgently need money. You recognize the voice instantly; after all, you’ve known them for years. But what if that voice isn’t actually real?
In 2025, scammers are increasingly using AI to clone voices, mimic faces, and impersonate people you trust the most.
The rise in this type of scam has been staggering. According to Moonlock, AI scams have surged by 148% this year, with criminals using advanced tools that make their deception nearly impossible to detect.
So how can you stay safe from this growing sci-fi threat? Here’s everything you need to know, including what cybersecurity experts are recommending.
What are AI impersonation scams?
AI impersonation scams are a fast-growing form of fraud where criminals use artificial intelligence to mimic a person’s voice, face, or typing style with alarming accuracy.
These scams often rely on voice cloning, which is a technology that can recreate someone’s speech patterns with just a few seconds of recorded audio.
The samples aren’t hard to find; they can be pulled from voicemails, interviews, or social media videos. According to Montclair State University, even short clips from a podcast or online class can be enough to build a convincing AI impersonation of someone’s voice.
Some scams take this even further, using deepfake video to simulate live calls. For instance, Forbes reports that scammers have impersonated company executives in video meetings, convincing staff to authorize large wire transfers.
Experts say the rapid growth of AI impersonation scams in 2025 comes down to three factors: better technology, lower costs, and wider accessibility.
Armed with these digital forgeries, attackers assume the identity of someone you trust, such as a family member, a boss, or even a government official. They then request valuable, confidential information, or skip the extra step and ask for urgent payments.
These impersonated voices can be highly convincing, which is what makes these scams particularly nefarious. As the US Senate Judiciary Committee recently warned, even trained professionals can be tricked.
Who is affected by AI impersonation scams?
AI impersonation scams can happen across phone calls, video calls, messaging apps, and emails, often catching victims off guard in the middle of their daily routines. Criminals use voice cloning to make so-called “vishing” calls, which are phone scams that sound like a trusted person.
The FBI recently warned about AI-generated calls pretending to be US politicians, including Senator Marco Rubio, to spread misinformation and solicit a public reaction.

On the corporate side of “vishing,” cybercriminals have staged deepfake video meetings posing as company executives. In a 2024 case, threat actors posed as the CFO of UK-based engineering company Arup and tricked its employees into authorizing transfers totaling a whopping $25 million.
These attacks generally scrape pictures and videos from LinkedIn, corporate websites, and social media in order to craft a convincing impersonation.
AI impersonation is getting more sophisticated, too – and fast. The email provider Paubox found that nearly 48% of AI-generated phishing attempts, including voice and video clones, successfully sidestep detection by current email and call security systems.
How to stay safe from AI impersonation scams
Experts say that AI impersonation scams succeed because they create a false sense of urgency in their victims. Criminals exploit your instinct to trust familiar voices or faces.
The most important defense is to simply slow down; take your time to confirm their identity before you act. The Take9 initiative says that simply pausing for nine seconds can go a long way toward staying safe.
If you receive a suspicious call or video from someone you know, hang up and call them back on the number you already have. As cybersecurity analyst Ashwin Raghu told Business Insider, scammers count on people reacting in the moment, and calling back eliminates that urgency.

It’s also important to watch for subtle red flags. Deepfake videos can have a few tells, such as unnatural mouth movements, flickering backgrounds, or eye contact that feels a little ‘off’. Similarly, AI-generated voices can have unusual pauses or inconsistent background noise, even if they sound convincing at first.
Adding extra layers of security can help, too. Multi-factor authentication (MFA) makes it harder for scammers to get into your accounts even if they successfully steal your credentials.
Cybersecurity expert Jacqueline Jayne told The Australian that your best bet is to pair direct verification with some form of MFA — particularly during periods of high scam activity, such as during tax season.
AI offers a ton of mind-boggling capabilities, but it also gives scammers powerful new ways to deceive. By staying vigilant, verifying suspicious requests, and talking openly about these threats, you can reduce the risk of being caught off guard — no matter how real the deepfake may seem.