Have you seen Donald Trump being called a “complete dipshit” by Barack Obama? As real as it looks and sounds, that never happened. What you saw was a deepfake video. That raises two questions: what is a deepfake in cybersecurity, and how does it affect you?
With advances in artificial intelligence, deepfakes have emerged as a potent cyber threat, creating hyper-realistic images and videos of events that never occurred. In this article, we’ll explain what deepfakes are, how they’re created, why they’re a cybersecurity concern, and what can be done to counteract their risks.
A deepfake is an artificial image or video generated by a form of machine learning known as “deep learning,” hence the name. This technology can create highly realistic but fake media that can be difficult for the human eye to discern as fake. In the sections below, we provide two explanations for deepfake technology—one for general readers and another for those interested in the technical details.
Deep learning is a subset of machine learning where algorithms learn by example. Think of it as similar to how a child learns to recognize objects through repeated exposure. In the case of deepfakes, an algorithm is trained on countless real images and learns to create new, realistic images that mimic these examples.
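To make “learning by example” concrete, here is a minimal sketch of the idea, stripped down to a single-parameter model. The underlying rule (y = 2x + 1), the noise level, and the learning rate are all invented for illustration; real deepfake models learn millions of parameters from images rather than two numbers from points, but the principle of nudging parameters until outputs match the examples is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
# "Examples": noisy observations of an underlying rule (y = 2x + 1).
# The learner never sees the rule itself, only the examples.
x = rng.uniform(-1, 1, 200)
y = 2 * x + 1 + rng.normal(0, 0.05, 200)

w, b = 0.0, 0.0   # the model starts knowing nothing
lr = 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # gradient descent: nudge the parameters in whatever direction
    # makes the predictions match the examples a little better
    w -= lr * np.mean(err * x)
    b -= lr * np.mean(err)

print(f"learned rule: y ≈ {w:.2f}·x + {b:.2f}")
```

After enough exposure to examples, the model has internalized the rule well enough to produce realistic new outputs, which is exactly what a deepfake generator does with faces instead of numbers.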
Deep learning utilizes “hidden layers” within neural networks, loosely analogous to how the brain processes information. These neural networks, especially convolutional neural networks (CNNs), excel at tasks like image recognition, making them well suited to creating deepfakes.
The deepfake creation process typically involves two algorithms: one that generates fake images and another that detects fakes. These models work in tandem, improving each other's accuracy over time, leading to the creation of fake images and videos so realistic that humans often can’t tell the difference.
Photoshopped or digitally altered images are commonplace and often harmless. For example, apps that let users swap faces or alter appearances are usually easy to recognize as modified and are intended for fun. Deepfakes, however, employ advanced machine learning to make images or videos that look genuine to the human eye, posing unique security and ethical risks.
In our digital world, people often form opinions based on online content, making deepfakes a powerful tool for misinformation.
Small-Scale Threats: Scammers might use deepfakes to impersonate family members, tricking victims into sending money in emergencies. These personalized scams exploit the realistic quality of deepfakes to make the appeals more convincing.
Large-Scale Threats: On a larger scale, deepfake videos of political leaders could incite violence, manipulate public opinion, or even impact global diplomacy.
Deepfakes are not limited to video. The technology has expanded to include realistic fake photos and even audio, enabling cybercriminals to mimic voices with disturbing accuracy. For example, a fraudster could create a deepfake of a CEO’s voice to request a money transfer, exploiting the trust placed in audio communication. This expansion increases deepfakes’ potential as a cybersecurity threat.
The process of creating deepfakes usually involves a “generative adversarial network” (GAN), which uses two AI algorithms working together. One algorithm, the “generator,” produces synthetic images, while the other, the “discriminator,” evaluates their realism. Over time, this feedback loop results in highly convincing fake media.
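The adversarial loop described above can be sketched in a few dozen lines. This is a deliberately tiny GAN: the “images” are just numbers drawn from a bell curve around 4, the generator and discriminator are single-parameter-pair models, and all the learning rates and step counts are arbitrary choices for illustration, not anything a production system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: samples from N(4, 1). The generator starts out
# producing samples near 0 and must learn to mimic the real ones.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

w_g, b_g = 1.0, 0.0   # generator:      x = w_g * z + b_g
w_d, b_d = 0.1, 0.0   # discriminator:  p = sigmoid(w_d * x + b_d)

lr, batch = 0.03, 32
for step in range(3000):
    # --- discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    real = real_batch(batch)
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    # gradients of the binary cross-entropy loss, averaged over the batch
    w_d -= lr * np.mean(-(1 - p_real) * real + p_fake * fake)
    b_d -= lr * np.mean(-(1 - p_real) + p_fake)

    # --- generator update: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    p_fake = sigmoid(w_d * fake + b_d)
    dx = -(1 - p_fake) * w_d          # gradient of -log D(fake) w.r.t. fake
    w_g -= lr * np.mean(dx * z)
    b_g -= lr * np.mean(dx)

fakes = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean ≈ {np.mean(fakes):.2f} (real mean is 4.0)")
```

The feedback loop is visible in the two alternating updates: the discriminator sharpens its ability to tell real from fake, and the generator shifts its output toward whatever the discriminator currently accepts, until the fake distribution sits on top of the real one.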
Detecting deepfakes can be challenging, but some red flags include unnatural blinking, patchy skin tones, and flickering edges around faces. However, as deepfake technology improves, spotting these clues becomes more difficult.
To address this, tech firms and universities are investing in AI models capable of detecting fake media. In 2019, Facebook, Microsoft, and other major tech companies launched the Deepfake Detection Challenge to advance detection methods, with results announced in 2020.
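At their core, such detection models are classifiers trained to separate real media from fake. The toy sketch below captures only that core idea: the two “artifact features” (think blink rate and edge-flicker score) and their cluster positions are entirely made up for illustration; real detectors learn thousands of subtler features directly from pixels.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 2-D "artifact features" per video frame. Real and fake
# frames form different clusters (purely synthetic numbers).
real = rng.normal([0.30, 0.10], 0.05, size=(500, 2))
fake = rng.normal([0.10, 0.25], 0.05, size=(500, 2))
X = np.vstack([real, fake])
y = np.array([0] * 500 + [1] * 500)   # 0 = real, 1 = fake

# Logistic-regression detector trained by gradient descent
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of "fake"
    grad = p - y                          # cross-entropy gradient
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * np.mean(grad)

acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.0%}")
```

On cleanly separated synthetic clusters like these, the detector is near-perfect; the real arms race comes from deepfake generators learning to suppress exactly the artifacts that detectors rely on.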
Deepfakes risk creating a “zero-trust society” where people may no longer distinguish truth from fiction. This could erode trust in media, politics, and even legal evidence. For instance, if deepfakes mimic biometric data like face or voice recognition, they could also pose personal security risks, as they can deceive systems relying on these forms of authentication.
Deepfakes themselves aren’t inherently illegal, but those who create or distribute them can easily violate laws. Depending on its content, a deepfake could infringe on copyright, breach data protection regulations, or be considered defamatory if it subjects the individual to ridicule.
In the UK, sharing private or sexual images without consent, as in cases of revenge porn, is a specific criminal offence that can carry a sentence of up to two years in prison. The law differs by region: Scotland’s revenge porn statute explicitly covers deepfakes, making it an offence to share or threaten to share photos or videos that depict someone in an intimate setting. In England, deepfake incidents can be reported to the police online.
While many deepfakes are used for manipulation, they have beneficial applications too. Some museums, for example, use deepfakes to bring historical figures to life, enhancing visitor experiences. In the medical field, deepfake voice-cloning can restore the voices of people who have lost them to illness.
Unlike deepfakes, shallowfakes are videos that are subtly edited or shown out of context. Though simpler, they are still impactful and can spread misinformation, as demonstrated by slowed-down videos of public figures that mislead viewers about their behavior.
Ironically, AI may offer the best solutions for combating deepfakes. Detection algorithms can be trained to recognize digital manipulations that are invisible to the human eye. Additionally, blockchain-based tracking for media could help verify the authenticity of images and videos, although this approach is still in development.
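The provenance idea behind blockchain-based tracking can be illustrated with a toy append-only ledger. Everything here is a simplification for illustration: the `ProvenanceLedger` class, the `newsroom-camera-07` source label, and the sample bytes are all invented, and a real system would distribute the chain across many parties rather than keep it in one Python list. What the sketch does show is the core mechanism: each entry commits to a cryptographic hash of the media and to the previous entry, so any edit to a registered file, or to the ledger itself, is detectable.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only ledger: each entry commits to the media's hash
    and to the previous entry, so tampering breaks the chain."""

    def __init__(self):
        self.entries = []

    def register(self, media: bytes, source: str) -> str:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {"media_hash": sha256(media), "source": source, "prev": prev}
        record["entry_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.entries.append(record)
        return record["media_hash"]

    def verify_chain(self) -> bool:
        """Recompute every entry hash; any tampering with the ledger fails."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("media_hash", "source", "prev")}
            if e["prev"] != prev or e["entry_hash"] != sha256(
                json.dumps(body, sort_keys=True).encode()
            ):
                return False
            prev = e["entry_hash"]
        return True

    def is_authentic(self, media: bytes) -> bool:
        """True only if this exact file was registered and the chain is intact."""
        h = sha256(media)
        return self.verify_chain() and any(
            e["media_hash"] == h for e in self.entries
        )

ledger = ProvenanceLedger()
original = b"raw bytes of the original video clip"
ledger.register(original, source="newsroom-camera-07")

tampered = original + b" plus a deepfake edit"
print(ledger.is_authentic(original))   # registered file verifies: True
print(ledger.is_authentic(tampered))   # any modification fails: False
```

Because a single changed byte produces a completely different SHA-256 digest, a viewer who trusts the ledger can check whether a circulating video matches a registered original, which is the verification property the article refers to.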
Last but not least, proper security awareness training can instill proactive habits that aid in identifying, reporting, and countering deepfakes wherever you encounter them.
Deepfakes are expected to grow in prevalence and sophistication. While not all deepfakes are harmful, it’s essential to approach media with a critical eye. Be particularly cautious of videos or messages that solicit money or sensitive information, or those featuring out-of-character behavior from public figures.
As with all cybersecurity threats, awareness and vigilance are key to minimizing the risks posed by deepfakes, for individuals and organizations alike.