How Crooks are Using AI in Social Engineering Attacks

By Rebecca Herold

Last updated: December 29, 2023

Here is one of the many questions we receive from readers of our free monthly Privacy Professor Tips awareness publication, our LinkedIn connections, and listeners of our Data Security and Privacy with the Privacy Professor radio show and podcast.

We provided a short answer to it within the January 2024 Tips. However, we wanted to expand upon that information in this blog post. Here is the question:

How is artificial intelligence (AI) being used within social engineering tactics?

First, you may be asking, what is social engineering?

Social engineering is a term that broadly covers the tactics used to manipulate individuals, often by exploiting their trust or naivety, into divulging sensitive information, such as login credentials or financial data.

AI is being used much more frequently, and in new and unexpected ways, not only to launch social engineering attacks, but also to facilitate many new types of cyberattacks. In addition to my own research in this area, one of our Privacy & Security Brainiacs team members, Noah Herold, has also researched how AI is being used in social engineering and other cybercrime tactics.

Criminals do not need much clean audio to train AI to create a realistic voice model. Less than an hour is enough for most AI models, and that requirement keeps shrinking as the models used to impersonate voices become more powerful. Criminals can use published audio, such as podcasts, videos, and audio news reports, to train AI to impersonate specific individuals and initiate such scams. AI is also being used for a wide range of other social engineering tactics.
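One practical takeaway: you can get a rough sense of your own exposure by tallying how much of your voice is already public. The Python sketch below simply sums the durations of WAV clips in a hypothetical folder of downloaded recordings and compares the total against the rough one-hour figure mentioned above; keep in mind that modern cloning tools often need far less.

```python
# Illustrative only: tally how much public audio of a voice exists.
# "public_clips" is a placeholder folder of downloaded .wav recordings;
# the one-hour threshold is the rough ballpark cited above.
import wave
from pathlib import Path

CLONE_THRESHOLD_SECONDS = 60 * 60  # rough upper bound; real tools need less

def total_audio_seconds(folder: str) -> float:
    """Sum the duration of every .wav file in a folder."""
    total = 0.0
    for path in Path(folder).glob("*.wav"):
        with wave.open(str(path), "rb") as clip:
            total += clip.getnframes() / clip.getframerate()
    return total

seconds = total_audio_seconds("public_clips")
print(f"{seconds / 60:.1f} minutes of public audio found")
if seconds >= CLONE_THRESHOLD_SECONDS:
    print("More than enough material for most voice-cloning models.")
```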

Impersonating others in phone calls

In a separate blog post published today, we covered a long list of ways that AI is being used to spoof callers in phone calls for social engineering, along with how to spot those scams and avoid being victimized by them. Please see it as a supplement to this blog post.

AI use in other social engineering tactics

It’s now possible to use AI to filter your voice in near real time rather than just recording sound bites to use for a script. For example, in one video, someone jokingly pretends to be the developer of the video game he is playing. NOTE: The language in that video is not safe for work (NSFW) environments.
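To make the streaming aspect concrete, here is a minimal Python sketch of the same structure a real-time voice filter uses: capture microphone audio in small blocks, transform each block, and play it straight back. Where a real attack would run a neural voice-conversion model, this substitutes a simple ring-modulator "robot voice" effect; it assumes the third-party sounddevice and numpy packages and a working microphone and speaker.

```python
# A minimal sketch of near-real-time voice filtering: mic audio is
# modified block by block and played straight back. A real voice changer
# would swap a neural model in where the ring modulator is.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000
MOD_FREQ = 90.0  # Hz; the modulation tone that distorts the voice
phase = 0.0

def callback(indata, outdata, frames, time, status):
    """Apply the effect to each incoming audio block, keeping phase continuous."""
    global phase
    t = (np.arange(frames) + phase) / SAMPLE_RATE
    outdata[:] = indata * np.sin(2 * np.pi * MOD_FREQ * t)[:, None]
    phase += frames

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    print("Speaking into the mic now plays back a filtered voice.")
    sd.sleep(10_000)  # run for 10 seconds
```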

Crooks are also using social engineering in extortion attempts, demanding ransom after claiming to have kidnapped someone, or threatening to kill someone. The crooks often show photos or videos that seemingly show the person being impersonated, or play an audio clip of the person, as “proof” that the crook has them or is targeting them. The crooks then demand a large amount of money to release and/or not harm the person. Such extortion attempts are occurring daily.

Specific to these social engineering tactics, Noah and I put together the following list describing a few signs that video and photo images were likely AI generated, and that the person contacting you is likely trying to social engineer you. If this happens to you, look closely at any so-called “evidence” the person provides.

  • Are there any small details that are incorrect? A common AI defect is too many or too few fingers or toes on people. Extra eyes, arms, legs, or other body parts, or blurry body parts, are also common indicators of AI-generated images.

  • Look at the clothes, jewelry, hairstyles, and other characteristics of the purported person. Do they match what the person was last seen wearing? Would such attributes be uncharacteristic for that person?

  • Videos where the person never blinks, blinks too frequently, or blinks in an unnatural rhythm are often AI generated (see the sketch after this list for one way to check this).

  • Facial, skin, or hair irregularities, faces that seem blurrier than the surroundings in which they appear, and an abnormally soft, airbrushed look are also indicators of AI-generated images.

  • Does the lighting look artificial or inconsistent? AI algorithms often fail to match the lighting of the generated deepfake imagery to the lighting of the original footage from which the impersonation was made.

  • The audio of AI-generated videos is often slightly, almost imperceptibly, out of sync with the mouth movements of the impersonated individual.
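For the blink check mentioned in the list above, here is a rough Python sketch of one common approach: track the eye aspect ratio (EAR) across video frames using face landmarks, and count how often it dips below a blink threshold. It assumes the third-party opencv-python and mediapipe packages, uses landmark indices commonly cited for MediaPipe's FaceMesh, and "suspect.mp4" is a placeholder file name. A clip of any real length with zero blinks, or blinks at a rigid, machine-like interval, is a red flag.

```python
# Rough sketch: count blinks via the eye aspect ratio (EAR) over a video.
# Zero blinks in a long clip, or perfectly regular blinks, is suspicious.
import cv2
import mediapipe as mp
import numpy as np

# MediaPipe FaceMesh landmark indices commonly used for the left eye (P1..P6)
LEFT_EYE = [33, 160, 158, 133, 153, 144]
EAR_BLINK_THRESHOLD = 0.2  # typical rule-of-thumb value

def eye_aspect_ratio(pts):
    """EAR = (sum of vertical eyelid distances) / (2 * horizontal eye width)."""
    p1, p2, p3, p4, p5, p6 = pts
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    return vertical / (2.0 * np.linalg.norm(p1 - p4))

blinks, closed = 0, False
cap = cv2.VideoCapture("suspect.mp4")  # placeholder file name
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        pts = [np.array([lm[i].x, lm[i].y]) for i in LEFT_EYE]
        if eye_aspect_ratio(pts) < EAR_BLINK_THRESHOLD:
            if not closed:
                blinks, closed = blinks + 1, True
        else:
            closed = False
cap.release()
print(f"Blinks detected: {blinks}")
```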

To identify when hackers are attempting to spoof or clone a voice, voice liveness detection (VOID) is increasingly being embedded in the firmware of mobile computing devices, or in the voice-recognition software used on them. Other tools will become available soon as well.
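To give a feel for how such spectral liveness checks work (this is not the actual VOID algorithm, which feeds many spectral features to a trained classifier), here is a toy Python sketch measuring one crude cue: the fraction of a recording's power in the low-frequency band, which tends to differ between live human speech and loudspeaker or synthetic playback. The file name and cutoff frequency are illustrative assumptions, and it requires the third-party numpy and scipy packages.

```python
# Toy illustration of a spectral liveness cue, not a real detector:
# compare how much of a recording's power sits below a low-frequency cutoff.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def low_freq_power_ratio(path: str, cutoff_hz: float = 300.0) -> float:
    """Fraction of total signal power below cutoff_hz."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:               # mix stereo down to mono
        samples = samples.mean(axis=1)
    freqs, psd = welch(samples.astype(float), fs=rate, nperseg=2048)
    return psd[freqs < cutoff_hz].sum() / psd.sum()

ratio = low_freq_power_ratio("caller.wav")  # placeholder file name
print(f"Low-frequency power ratio: {ratio:.3f}")
# A real system like VOID trains a classifier over many such spectral
# features rather than relying on any single hand-picked threshold.
```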