Blog · Privacy · 4 Nov 2024
Author: Samir Yawar · 8 min read

What is a Deepfake in Cyber Security


Have you seen Donald Trump being called a “complete dipshit” by Barack Obama? As real as it looks and sounds, that never happened. What you saw was a deepfake video, and it raises a pressing question: what is a deepfake in cyber security, and how does it affect you?

With advances in artificial intelligence, deepfakes have emerged as a potent cyber threat, creating hyper-realistic images and videos of events that never occurred. In this article, we’ll explain what deepfakes are, how they’re created, why they’re a cybersecurity concern, and what can be done to counteract their risks.

What is a Deepfake in Cyber Security?

A deepfake is an artificial image or video generated by a form of machine learning known as “deep learning,” hence the name. This technology can create highly realistic but fake media that can be difficult for the human eye to discern as fake. In the sections below, we provide two explanations for deepfake technology—one for general readers and another for those interested in the technical details.

Simple Explanation

Deep learning is a subset of machine learning where algorithms learn by example. Think of it as similar to how a child learns to recognize objects through repeated exposure. In the case of deepfakes, an algorithm is trained on countless real images and learns to create new, realistic images that mimic these examples.
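To make "learning by example" concrete, here is a deliberately tiny sketch: a single perceptron (the simplest ancestor of the deep networks behind deepfakes) is shown labelled points repeatedly and nudges its weights until it classifies them correctly. The data, learning rate, and epoch count are all invented for the demo.

```python
# Toy "learning by example": a perceptron sees labelled points over and over
# and adjusts its weights a little each time it gets one wrong.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1         # nudge weights toward the example
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(params, point):
    w1, w2, b = params
    x1, x2 = point
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Learn a simple AND-style rule purely from examples, never from explicit code.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
params = train_perceptron(data)
```

A deepfake model works on the same principle, just with millions of parameters and millions of example images instead of three weights and four points.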

Technical Explanation

Deep learning utilizes “hidden layers” within neural networks, loosely inspired by how the brain processes information. These neural networks, especially convolutional neural networks (CNNs), excel in tasks like image recognition, making them ideal for creating deepfakes.

The deepfake creation process typically involves two algorithms: one that generates fake images and another that detects fakes. These models work in tandem, improving each other's accuracy over time, leading to the creation of fake images and videos so realistic that humans often can’t tell the difference.

How is a Deepfake Different from Photoshop or Face-Swapping?

Photoshopped or digitally altered images are commonplace and often harmless. For example, apps that let users swap faces or alter appearances are usually easy to recognize as modified and are intended for fun. Deepfakes, however, employ advanced machine learning to make images or videos that look genuine to the human eye, posing unique security and ethical risks.

Why Are Deepfakes Dangerous?

In our digital world, people often form opinions based on online content, making deepfakes a powerful tool for misinformation.

  1. Small-Scale Threats: Scammers might use deepfakes to impersonate family members, tricking victims into sending money in emergencies. These personalized scams exploit the realistic quality of deepfakes to make the appeals more convincing.

  2. Large-Scale Threats: On a larger scale, deepfake videos of political leaders could incite violence, manipulate public opinion, or even impact global diplomacy.

Are Deepfakes Only Limited to Videos?

Not at all. Deepfake technology has expanded to include realistic fake photos and even audio, enabling cybercriminals to mimic voices with disturbing accuracy. For example, a fraudster could create a deepfake of a CEO’s voice to request a money transfer, exploiting the trust in audio communication. This expansion of deepfake technology increases their potential as a cybersecurity threat.

The Technology Behind Deepfakes

The process of creating deepfakes usually involves a “generative adversarial network” (GAN), which uses two AI algorithms working together. One algorithm, the “generator,” produces synthetic images, while the other, the “discriminator,” evaluates their realism. Over time, this feedback loop results in highly convincing fake media.
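The generator/discriminator feedback loop can be illustrated with a deliberately tiny, deterministic toy. This is not a real GAN (those are deep neural networks trained on images); here the "generator" is a single number it pushes toward the real data, and the "discriminator" is a moving threshold between real and fake. All values are invented for the demo.

```python
# Toy adversarial loop: each round the discriminator redraws its boundary,
# and the generator adjusts its output to get past it. Over many rounds the
# fake output becomes indistinguishable from the real target.

REAL_MEAN = 10.0        # stand-in for "real images"
fake = 0.0              # the generator's current output

for step in range(50):
    # Discriminator: place its decision boundary halfway between
    # the real data and what the generator currently produces.
    threshold = (REAL_MEAN + fake) / 2
    if fake < threshold:                  # discriminator calls the sample fake...
        fake += 0.4 * (threshold - fake)  # ...so the generator moves toward "real"
```

After 50 rounds the generator's output sits within a fraction of a percent of the real target, which is the same dynamic that drives GAN-generated images toward photorealism.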

How Can You Spot a Deepfake?

Detecting deepfakes can be challenging, but some red flags include unnatural blinking, patchy skin tones, and flickering edges around faces. However, as deepfake technology improves, spotting these clues becomes more difficult.
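One of those red flags, unnatural blinking, lends itself to a simple heuristic sketch. Assume a facial-landmark pipeline has already produced a per-frame "eye openness" score (the scores, thresholds, and blink-rate cutoff below are all illustrative, not from any real detector): count blinks and flag clips whose blink rate is suspiciously low.

```python
# Hedged toy detector: early deepfakes often blinked rarely or not at all.

def count_blinks(openness, closed_below=0.2):
    """Count transitions from open to closed eyes across the clip."""
    blinks, eyes_closed = 0, False
    for score in openness:
        if score < closed_below and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif score >= closed_below:
            eyes_closed = False
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_minute=5):
    seconds = len(openness) / fps
    rate = count_blinks(openness) / seconds * 60
    return rate < min_blinks_per_minute

# A 10-second clip (300 frames) with three blinks vs. one with none.
normal = [1.0] * 300
for start in (50, 150, 250):
    normal[start:start + 3] = [0.1, 0.05, 0.1]
static = [1.0] * 300
```

Real detection systems combine many such cues, and as the next paragraph notes, modern deepfakes increasingly defeat any single hand-written rule.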

To address this, tech firms and universities are investing in AI models capable of detecting fake media. In 2020, major tech companies even launched a Deepfake Detection Challenge to advance detection methods.

Could Deepfakes Undermine Trust Altogether?

Deepfakes risk creating a “zero-trust society” where people may no longer distinguish truth from fiction. This could erode trust in media, politics, and even legal evidence. For instance, if deepfakes mimic biometric data like face or voice recognition, they could also pose personal security risks, as they can deceive systems relying on these forms of authentication.

Are deepfakes illegal?

Deepfakes themselves aren’t inherently illegal, but those who create or distribute them can easily violate laws. Depending on its content, a deepfake could infringe on copyright, breach data protection regulations, or be considered defamatory if it subjects the individual to ridicule.

In the UK, sharing private or sexual images without consent, as in cases of revenge porn, is a specific criminal offence that can carry a sentence of up to two years in prison. The details differ by region: Scotland’s revenge porn statute explicitly covers deepfakes, making it an offence to share, or threaten to share, photos or videos that depict someone in an intimate situation. In England, you can report deepfake incidents to the police online.

Can Deepfakes Be Used for Good?

While many deepfakes are used for manipulation, they have beneficial applications too. Some museums, for example, use deepfakes to bring historical figures to life, enhancing visitor experiences. In the medical field, deepfake voice-cloning can restore the voices of people who have lost them to illness.

Shallowfakes: A Related Threat

Unlike deepfakes, shallowfakes are videos that are subtly edited or shown out of context. Though simpler, they are still impactful and can spread misinformation, as demonstrated by slowed-down videos of public figures that mislead viewers about their behavior.

What’s the Solution?

Ironically, AI may offer the best solutions for combating deepfakes. Detection algorithms can be trained to recognize digital manipulations that are invisible to the human eye. Additionally, blockchain-based tracking for media could help verify the authenticity of images and videos, although this approach is still in development.
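The idea behind blockchain-based media tracking can be sketched with content hashing: a publisher registers a cryptographic fingerprint of the original file, and anyone can later check whether a copy still matches. In this minimal sketch the "ledger" is just a Python dict standing in for an append-only chain, and the media IDs and byte strings are invented.

```python
# Minimal content-provenance sketch using SHA-256 fingerprints.
import hashlib

ledger = {}  # media_id -> registered hash (stand-in for a blockchain)

def register(media_id, data: bytes):
    """Publisher records the fingerprint of the genuine file."""
    ledger[media_id] = hashlib.sha256(data).hexdigest()

def verify(media_id, data: bytes) -> bool:
    """Anyone can check a copy against the registered fingerprint."""
    return ledger.get(media_id) == hashlib.sha256(data).hexdigest()

original = b"frame bytes of the genuine video"
register("press-briefing-2024", original)

tampered = original + b" with a swapped face"
```

Even a one-byte alteration changes the hash completely, so a deepfaked copy of a registered video fails verification, although getting publishers to register media in the first place is the unsolved part.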

Last but not least, proper security awareness training can instill proactive habits that aid in identifying, reporting, and fighting deepfakes wherever you encounter them.

Conclusion - Vigilance is Key in the Fight Against Deepfakes

Deepfakes are expected to grow in prevalence and sophistication. While not all deepfakes are harmful, it’s essential to approach media with a critical eye. Be particularly cautious of videos or messages that solicit money or sensitive information, or those featuring out-of-character behavior from public figures.

As with all cybersecurity threats, awareness and vigilance are key to minimizing the risks deepfakes pose, for individuals and organizations alike.

Samir Yawar / Content Lead
Samir wants a world where people can instinctively whack online scams and feel accomplished without the need for psychic powers. As an ISC2 member, he is doing his bit to turn cybersecurity awareness training into a fun concept with simple, approachable and accessible content. Reach out to him at X @yawarsamir
Frequently Asked Questions

What is a deepfake?
A deepfake is a realistic-looking image, audio, or video created using artificial intelligence, specifically a technology called deep learning. Deepfakes can make it appear as though people are saying or doing things they never actually did.

Are deepfakes dangerous?
Yes, deepfakes can spread misinformation, deceive people, and damage reputations. They’re often used in misinformation campaigns, scams, and even personal attacks. However, they’re also used in entertainment and other benign applications.

How can I spot a deepfake?
Spotting deepfakes can be challenging. Poor-quality deepfakes might show unnatural blinking, odd lighting, or distorted facial features. High-quality detection often requires specialized AI tools that look for patterns invisible to the human eye.