Heimdal

In the age of technological advancement, it's not just tech-savvy online bad actors you have to watch out for: fake videos known as "deepfakes," created with AI-driven software, are becoming increasingly hard to spot. In this article, we'll look at what deepfakes are and how to spot them, so you can protect yourself from misleading or malicious content.

What Is a Deepfake?

Deepfakes are fabricated video and audio recordings designed to make people appear to have said or done things they never actually said or did. "Deep" refers to the "deep learning" technology used to produce the media, and "fake" to its artificial nature. Most of the time, one person's face is superimposed onto another person's body, or genuine footage of someone is altered so convincingly that the manipulated version passes for the real thing.

The term was born in 2017, when a Reddit user going by "deepfakes" posted fake adult videos featuring the faces of Hollywood celebrities. Later, the user also published the machine learning code used to create the videos.

How Deepfake Works

Understanding how deepfakes are created and what they are used for is also important when it comes to spotting and avoiding them. Here's what you should know:

How Are Deepfakes Created?

Deepfakes are created by using artificial intelligence algorithms to swap one person's face for another in images or videos. The technique relies on deep learning, a branch of machine learning that trains multi-layered neural networks on large amounts of data.

  • First, the AI requires two sets of input images: pictures of the deepfake target and pictures of the original source. These training sets can range from a variety of random faces to thousands of pictures of a single person.
  • The AI then learns to reproduce the output faces, capturing each person's distinctive and subtle expressional nuances. The target individual's fine facial expressions and traits must be captured by the AI for the result to be convincing.
  • In the final step, the face swap is performed: a shared encoder compresses a frame of person A into a compact representation, and a decoder trained on person B reconstructs that frame with B's face, carrying over A's movements and expressions. To make a deepfake video convincing, this must be done frame by frame.
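The shared-encoder, per-identity-decoder architecture described above can be sketched in a few lines. This is a minimal illustration with random, untrained weights standing in for trained layers (the 64×64 image size, 128-dimensional latent code, and `encode`/`decode` helpers are all assumptions made for the example), not a working deepfake generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    # Random weight matrix standing in for a trained neural-network layer.
    return rng.normal(scale=0.1, size=(in_dim, out_dim))

# One shared encoder: compresses any face image into a small latent code.
W_enc = layer(64 * 64, 128)

# One decoder per identity: reconstructs a face from the latent code.
W_dec_a = layer(128, 64 * 64)   # would be trained only on faces of person A
W_dec_b = layer(128, 64 * 64)   # would be trained only on faces of person B

def encode(face):
    return np.tanh(face.reshape(-1) @ W_enc)

def decode(code, W_dec):
    return (code @ W_dec).reshape(64, 64)

# The face swap: encode a frame of person A, then decode it with
# person B's decoder, so B's appearance carries A's expression.
# A real deepfake repeats this for every frame of the video.
frame_of_a = rng.random((64, 64))
latent = encode(frame_of_a)       # compact representation of the frame
swapped = decode(latent, W_dec_b) # reconstructed with B's "face"
print(swapped.shape)              # (64, 64)
```

Because both decoders read from the same latent space, whatever expression the encoder captures from person A is reproduced on person B's face, which is the core trick behind face-swap deepfakes.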

What Are Deepfakes Used for?

While the technology has plenty of potential for creative purposes (like creating comedic videos or realistic characters in movies and video games), it also has enormous implications for misinformation and more sinister applications. Here are some areas where deepfakes could have a highly negative impact:

#Politics

Deepfakes could influence elections since they can put words into politicians’ mouths and make them look like they’ve done or said certain things which, in fact, they haven’t. Deepfake producers could target popular social media channels, where the content shared can instantly become viral.

#Justice

Fake evidence could be used against people in criminal trials, leading to accusations of crimes they did not commit. Thus, the wrong people could go to jail, while, on the other hand, guilty people could be set free on the strength of false proof.

#Stock Market

Deepfakes could be used to manipulate stock prices if altered footage of influential people making certain statements were distributed. Imagine what would happen if a fake video showed the CEO of a company such as Apple, Amazon, or Google admitting to something illegal. For instance, back in 2008, Apple's stock dropped 10 points after a false rumor emerged that Steve Jobs had suffered a major heart attack.

#Online Bullying

Deepfake technology could also be used to amplify cyberbullying, especially now that it is becoming widely available. People can easily become victims when manipulated media of them is posted online, or they can be blackmailed by cybercriminals who threaten to leak the footage unless, for instance, they pay a certain sum of money.

#Companies

Someone could use deepfakes to spread false statements about your business in order to destabilize and degrade it. Malicious actors could make it look like you, or someone within your organization, is admitting to consumer fraud, bribery, sexual abuse, or any other wrongdoing you can think of. Obviously, these kinds of false statements can destroy your company's reputation and be difficult to disprove.

How to Spot a Deepfake

Deepfakes may be difficult to spot, but there are some tell-tale signs that you can look for:

  • Blinking – research suggests that subjects in deepfake videos often blink less frequently, or less naturally, than real people do.
  • Face borders – watch out for blurry edges around the face that subtly blend into the background, especially when the head moves.
  • Artificial-looking skin – if the face looks unnaturally smooth, as if it has been edited, that may be another warning sign. Also watch out for a facial skin tone that differs slightly from the rest of the body.
  • Slow speech and odd intonation – sometimes the person being impersonated talks unusually slowly, or the voice doesn't quite match the real person's.
  • An overall strange look and feel – in the end, trust your instincts. Sometimes you can simply tell something isn't right.
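The blinking cue above underpins one well-known detection heuristic: tracking the eye aspect ratio (EAR), a value that drops toward zero whenever the eye closes, so an implausibly low blink rate across a video becomes a red flag. Here is a minimal sketch, assuming six (x, y) eye landmarks are already available from a face-landmark detector; the sample coordinates and the 0.2 threshold are illustrative assumptions, not calibrated values:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered p1..p6:
    p1/p4 are the horizontal corners, p2/p6 and p3/p5 the
    upper/lower lid points. The ratio falls as the eye closes."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # (sum of the two vertical lid distances) / (2 x corner-to-corner width)
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

# Hypothetical landmark sets for an open and a nearly closed eye.
open_eye   = [(0, 0), (2, 3.0), (4, 3.0), (6, 0), (4, -3.0), (2, -3.0)]
closed_eye = [(0, 0), (2, 0.4), (4, 0.4), (6, 0), (4, -0.4), (2, -0.4)]

EAR_THRESHOLD = 0.2  # common heuristic cutoff; tune per dataset

print(eye_aspect_ratio(open_eye))    # well above the threshold
print(eye_aspect_ratio(closed_eye))  # well below the threshold
```

In a real detector you would compute the EAR for every frame, count threshold crossings as blinks, and flag footage whose blink rate is far below what humans normally exhibit.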

Due to current gaps in the law, producers of deepfakes largely escape criminal liability. However, the DEEP FAKES Accountability Act (short for the "Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act" – yes, you've correctly identified an acronym right there) aims to criminalize this type of fake media.

In short, anyone who creates a deepfake would be required to disclose that the footage is altered, and failing to do so would be a crime. Regulations of this kind are essential to protect deepfake victims, and the general public, from distorted information.

No matter how good technological deepfake detection solutions become, they won't prevent manipulated media from being shared and reaching large numbers of people. So, for companies, the best way to avoid negative consequences is to teach employees how to identify fake footage and to question anything that seems suspicious inside the organization.

Train Your Employees

Deepfakes can be covered in your regular cybersecurity training. For instance, if employees receive an unexpected call from the "CEO" asking them to transfer $1 million to a bank account, they should first question whether the person on the other end of the line is who they claim to be. A good countermeasure would be to have a few security questions in place that must be asked to verify a caller's identity.

Monitor Your Brand’s Online Presence

Your brand’s presence is probably already being monitored online. So, make sure your designated people keep an eye on fake content involving your organization and if anything suspicious is brought to light, they do their best to take it down as soon as possible and mitigate the damage. This brings us to the next point.

Be Transparent

If you become a victim of deepfakes, ensure that your audience is aware of the targeted attack. Trying to ignore what happened or assuming that people didn’t believe what they’ve seen or heard won’t make the issue disappear. Therefore, your PR efforts should be centered around communicating that someone from your company has been impersonated and highlighting the artificial nature of the distributed footage. Never let misinformation erode your public’s confidence!

Final Thoughts  

As technology continues to evolve, it's becoming easier and easier to create fake videos and images that look real. Deepfakes are AI-generated media that can depict realistic-looking people, statements, and events that never actually happened.

While deepfakes can be used for good, like creating realistic characters in movies or video games, they can also be used for malicious purposes, like fabricating news stories or spreading misinformation.

The fact that they are getting easier to create and harder to detect shows just how important it is to be able to spot deepfakes, so you know when you're seeing something that isn't real.

P.S. Did you enjoy this article? Follow us on LinkedIn, Twitter, Facebook, and Youtube to keep up to date with everything we post!


Author Profile

Elena Georgescu

Communications & Social Media Coordinator | Heimdal®


Elena Georgescu is a cybersecurity specialist within Heimdal™ and her main interests are mobile security, social engineering, and artificial intelligence. In her free time, she studies Psychology and Marketing. Some of her guest posts on other websites include: cybersecurity-magazine.com, cybersecuritymagazine.com, techpatio.com

