Picture this: You answer the phone one day and hear a voice that sounds like your child’s. They tell you that they have been kidnapped and need cash for a ransom right away. You scramble to help—only to realize that the voice on the other end of the line doesn’t belong to your child but instead is part of a sophisticated, terrifying new AI scam that uses deepfake phone calls.

That’s what happened to Arizona mother Jennifer DeStefano, who recently testified about her experience before the Senate. And unfortunately, her story is all too common. As artificial intelligence (AI) technology becomes cheaper and more accessible, criminals are increasingly using it to impersonate the voices of our friends and loved ones to trick us into sending them money. According to the Federal Trade Commission, scammers stole more than $12.5 billion from Americans in 2024, with imposter scams accounting for $2.95 billion of those losses.

The good news? You can beat scammers at their own game. Reader’s Digest spoke with five cybersecurity experts, including the head of the Identity Theft Resource Center, to learn how to spot these new AI scam calls, how they can put your personal information at risk and what to do if you become a target. Read on to find out how to protect yourself and stop scammers in their tracks.

What is the new AI scam call, exactly?

A clever scammer with a good AI program needs little more than a few seconds of a loved one’s recorded voice to clone it and make it say whatever the script demands. From there, they can play the audio over the phone to convince a victim that someone they love is in a desperate situation and needs money immediately.

These aren’t your typical phone scams—they’re far more advanced. In one of the most common examples, parents or grandparents get a call from what sounds like their child or grandchild claiming to need money for ransom or bail, like the AI deepfake scam DeStefano encountered. “We have seen parents targeted and extorted for money out of fear that their child is in danger,” says Nico Dekens, director of intelligence and collection innovation at ShadowDragon.

Eva Velasquez, CEO of the Identity Theft Resource Center, says the center also receives reports of AI scam calls that convince victims a relative needs money to cover damages from a car accident or other mishap. Other variations use a manager or executive’s voice in a voicemail instructing an employee to pay a fake invoice, or mimic law enforcement and government officials demanding that the target share sensitive information over the phone.

As you can see, this type of AI-powered phishing can take many forms, but the through line is the sense of urgency the scammer creates. The goal is to get you to panic and make an impulsive decision.

How does this AI scam work?

It may take a few steps to pull together an AI scam, but the tech speeds up the process to such an extent that these cons are worryingly easy to produce compared with voice scams of the past. Here are the three steps involved:

Step 1: Collect the recording

To carry out an AI scam call, criminals first must find a five- to 10-second audio recording of a loved one’s voice, such as a clip from YouTube or a post on Facebook or Instagram. Yep, that’s all it takes for an AI algorithm to create an eerily accurate clone. To build the clone, the scammer feeds the recording into an AI tool that learns the person’s voice patterns, pitch and tone—and, crucially, can simulate that voice saying whatever the scammer types.

These tools are widely available and cheap or even free to use, which makes them even more dangerous, according to experts. For example, voice-generation models like Microsoft’s VALL-E need only a three-second audio “training” clip of someone speaking to create a replica of their voice. “As you can imagine, this gives a new superpower to scammers, and they started to take advantage of that,” says Aleksander Madry, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory.

Step 2: Give a script to the AI

Once the AI software learns the person’s voice, con artists can tell it to create an audio file of that cloned voice saying anything they want. Their next step is to call you and play the AI-generated clip (also called a deepfake). The call may appear to come from a local area code to convince you to answer, but don’t be fooled—scammers can spoof their phone numbers. Many phone-based fraud scams originate from countries with large call-center operations, like India, the Philippines or even Russia, according to Velasquez.

Step 3: Set the trap

The scammer will tell you that your loved one is in danger and that you must send money immediately in an untraceable way, such as with cash, via a wire transfer or using gift cards. Although this is a telltale sign of a scam, most victims will panic and agree to send the money. “The nature of these scams plays off of fear, so in the moment of panic these scams create for their victims, it is also emotional and challenging to take the extra moment to consider that it might not be real,” Dekens says.

Scammers are also relying on the element of surprise, according to Karim Hijazi, the managing director of SCP & CO, a private investment firm focused on emerging technology platforms. “The scammers rely on an adequate level of surprise in order to catch the target off guard,” he says. “Presently, this tactic is not well known, so most people are easily tricked into believing they are indeed speaking to their loved one, boss, co-worker or a law enforcement professional.”

How has AI made scams easier to run—and harder to spot?

Imposter scams are nothing new, but artificial intelligence has made them more sophisticated and convincing. “AI did not change much in terms of why people do scams—it just provided a new avenue to execute them,” Madry says. “Be it blackmail, scam, misinformation or disinformation, they now can be much cheaper to execute and more persuasive.”

While AI has been around for decades for both criminal and everyday use—think: AI password cracking and AI assistants like Alexa and Siri—it was expensive and required a massive amount of computing power to run. As a result, shady characters needed a lot of time and expertise with specialized software to impersonate someone’s voice using AI. But that’s not the case anymore. “Now, all of this is available for anyone who just spends some time watching tutorials on YouTube or reading how-to docs and is willing to tinker a bit with the AI systems they can download from the internet,” says Madry.

On top of that, Velasquez notes that scammers running earlier imposter phone scams had to blame a poor connection or a bad accident to explain why their voice sounded different. But today’s technology “has become so good that it is almost impossible for the human ear to be able to tell that the voice on the other end of the phone is not the person it purports to be,” says Alex Hamerstone, a director with the security consulting firm TrustedSec.

How can you avoid AI scams?

They may be quicker to create than past imposter scams, but AI scam calls are still labor-intensive for criminals, so your odds of being targeted are low, according to Velasquez. Most con artists want to use attacks that they can automate and repeat over and over again, and “you can’t do that with voice clones because it requires the victim to know and recognize a voice, not just some random voice,” she says.

The problem is that these attacks will continue to increase as the technology improves, making it easier to locate targets and clone voices. And there’s another reason now is the best time for criminals to run these cons: “As these kinds of capabilities are new to our society, we have not yet developed the right instincts and precautions to not fully trust what is being said via phone, [especially] if we are convinced that this is the voice of a person we trust,” Madry says.

That’s why it’s important to take proper precautions to boost your online security and avoid being targeted in the first place. Here are a few tips to protect yourself from AI scams, according to our experts:

Make your social media accounts private

Before sharing audio and video clips of yourself on Facebook, Instagram, YouTube or other social media accounts, Velasquez recommends tightening your privacy settings so only people you know and trust can see your posts. If you keep your posts open to everyone, review and remove audio and video recordings of yourself and loved ones to thwart scammers who may seek to capture your voice, she says.

Use multifactor authentication

Setting up multifactor authentication for your online accounts can also make it more difficult for fraudsters to access them. This system requires you to verify your identity with an extra credential—such as a single-use, time-sensitive code you receive on your phone via text or generate with an authenticator app—in addition to a username and password when you log in to your account.
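For the curious, the time-sensitive codes that authenticator apps produce follow a published standard called TOTP (RFC 6238): your app and the website share a secret key, and each side derives a short code from that key plus the current 30-second window, which is why every code expires so quickly. Below is a minimal Python sketch of that calculation, purely illustrative, with a made-up secret rather than anything from a real account.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (TOTP, RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step            # advances every 30 seconds
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Made-up example secret; a real one comes from the QR code a website
# shows when you enroll an authenticator app.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because each code is tied to the current time window, a code a scammer manages to steal becomes useless within a minute or so, which is part of what makes this extra layer effective.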

If you use biometric tools for verification, opt for ones that rely on your face or fingerprint rather than your voice, so you aren’t handing criminals the raw material for an AI deepfake scam.

Assign a secret phrase

Hijazi suggests coming up with a secret phrase or password that you can exchange with your loved one ahead of time. That way, if you receive a call alleging that they have been kidnapped or need money right away, you can authenticate that you are indeed speaking to the real person and aren’t being tricked by an AI clone. “This does take some advance planning, but it’s a free and effective proactive measure,” Hijazi says.

Erase your digital footprint

Finally, you can avoid being targeted by these scams in the first place by disappearing from the internet, or at least shrinking your digital footprint as much as anyone can these days. Scammers often rely on the trail of breadcrumbs you leave about yourself online, from your pet’s name to your high school mascot, to learn about your life and build a scam around it.

“There are vast amounts of information freely and publicly available about almost every one of us,” Hamerstone says. “It is very simple to find out people’s family members, associations and employers, and these are all things that a scammer can use to create a convincing scam.”

The solution is simple, according to Hamerstone: “Limiting the amount of information we share about ourselves publicly is one of the few ways that we can lower the risk of these types of scams.”

Online tools like DeleteMe can automatically remove your name, address and other personal details from data brokers, which will make it more difficult for scammers to target you. Google even offers a Results About You tool that will alert you when your personal info appears in its search results and make it easy to request its removal.

What should you do if you receive an AI scam call?

If you get a call from a loved one who is demanding money, don’t panic—that’s exactly what the scammer on the other end wants. “It can be scary and disturbing to hear a loved one in distress, but the most important thing to remember is not to overreact,” Velasquez says.

Instead, experts recommend taking these steps before agreeing to send money to someone over the phone:

  1. Call your loved one directly using a trusted phone number.
  2. If you can’t reach them, try to contact them through a family member, friend or colleague.
  3. Ask the caller to verify a detail that only they would know, such as the secret phrase mentioned above.
  4. Alert law enforcement. They can help you verify whether the call you received is legitimate or a scam, Dekens says.
  5. Listen for any audio abnormalities, such as unusual voice modulation or synthetic-sounding voices, to identify a scammer. “Deepfake audio can lack natural intonation or exhibit glitches, like sounding angry or sad,” Dekens says. Hijazi also points out that this technology is not “conversational” yet and will likely fail to keep up if you continue asking questions.
  6. If you determine that the call is a scam, write down or screenshot the phone number that called you.
  7. Block the number on your phone and place it on your do-not-call list to avoid receiving a call from it again.
  8. Dekens suggests putting the scam caller on speaker on your phone and recording the audio with a secondary phone. “It is a good way to preserve evidence,” he says.
  9. Report the call to your mobile phone carrier so the company can take appropriate action.

While AI scams are becoming more common, you do not have to become a victim. Being aware of these scams and staying vigilant will go a long way in keeping you—and your personal information—safe.

About the experts

  • Nico Dekens is the director of intelligence and collection innovation at ShadowDragon. He has more than 20 years of experience as an intelligence analyst with Dutch law enforcement.
  • Eva Velasquez is the CEO of the Identity Theft Resource Center and the former vice president of operations for the San Diego Better Business Bureau.
  • Aleksander Madry, PhD, is a faculty member at the Massachusetts Institute of Technology (MIT) and researcher at MIT’s Computer Science and Artificial Intelligence Laboratory. He’s currently on leave and working as a researcher with OpenAI.
  • Karim Hijazi is the managing director of SCP & CO, a private investment firm focused on emerging technology platforms.
  • Alex Hamerstone is the advisory solutions director for TrustedSec, a cybersecurity company.

Why trust us

Reader’s Digest has published hundreds of articles on personal technology, arming readers with the knowledge to protect themselves against cybersecurity threats and internet scams as well as revealing the best tips, tricks and shortcuts for computers, cellphones, apps, texting, social media and more. We rely on credentialed experts with personal experience and know-how as well as primary sources including tech companies, professional organizations and academic institutions. For this piece on the newest AI scam, Brooke Nelson Alexander tapped her experience as an Emmy-nominated reporter who covers tech and cybersecurity for Reader’s Digest. We verify all facts and data and revisit them over time to ensure they remain accurate and up to date. Read more about our team, our contributors and our editorial policies.

Sources:

  • Nico Dekens, director of intelligence and collection innovation at ShadowDragon
  • Eva Velasquez, CEO of the Identity Theft Resource Center
  • Aleksander Madry, PhD, faculty member at the Massachusetts Institute of Technology (MIT) and researcher at MIT’s Computer Science and Artificial Intelligence Laboratory
  • Karim Hijazi, managing director of SCP & CO
  • Alex Hamerstone, advisory solutions director for TrustedSec
  • Federal Trade Commission: “New FTC Data Show a Big Jump in Reported Losses to Fraud to $12.5 Billion in 2024”
  • U.S. Senate Committee on the Judiciary: “Subcommittee on Human Rights and the Law: Artificial Intelligence and Human Rights”