Security Deepfakes Are on the Rise: What This Means for Corporate IT Security

In early 2020, a Hong Kong bank manager received a call from a company director asking him to authorize transfers to the tune of $35 million. Recognizing the director’s voice and convinced by the stated reason for the transfer (an upcoming acquisition), he began moving the money. However, the request was entirely fraudulent – the bank manager had never spoken to the director. Instead, he was duped by a worrying new technology dubbed “deep voice,” a subset of deepfake technology.

Cybercriminals are increasingly leveraging security deepfakes to facilitate business email compromise (BEC) fraud and to bypass multi-factor authentication (MFA) protocols and know-your-customer (KYC) ID verification. And as deepfake technology becomes more sophisticated and accessible, this trend will only continue. For example, just last year, the FBI warned that malicious actors would undoubtedly leverage “synthetic content,” like deepfakes, for cyber operations over the next 18 months.

But just how do bad actors leverage deepfakes? And what does this mean for corporate IT security? Let’s get into it.  

Security Deepfakes, Explained

Deepfakes use artificial intelligence and machine learning to create compelling images, videos, and audio hoaxes. They are a type of synthetic (computer-generated) media and can be so convincing at mimicking a real person that they can fool both people and algorithms.  

Here, the specific technologies at play are deep learning and generative adversarial networks (GANs). In simple terms, two neural networks (computing systems inspired by how the human brain works) compete against each other to create increasingly convincing media. The goal of neural network A (the generator) is to produce an image that neural network B (the discriminator) cannot distinguish from real training data. And the goal of neural network B is not to be fooled in this way. The result? Scarily convincing generated images.
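The adversarial loop described above can be sketched in a toy one-dimensional form. Everything here – the Gaussian “real” data, the linear generator and discriminator, the parameter names and learning rate – is an illustrative assumption for brevity; real deepfake GANs use deep convolutional networks over images, not scalars.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only, not a real deepfake generator).
# "Real" data are samples from N(4, 1); the generator G(z) = mu + sg*z learns
# to mimic them while the discriminator D(x) = sigmoid(w1*x + w0) learns to
# tell real from fake. Gradients are derived by hand for this tiny model.

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_toy_gan(steps=2000, batch=64, lr=0.05):
    mu, sg = 0.0, 1.0   # generator parameters
    w1, w0 = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        z = rng.standard_normal(batch)
        x_real = 4.0 + rng.standard_normal(batch)   # "real" data ~ N(4, 1)
        x_fake = mu + sg * z

        # Discriminator step: ascend log D(real) + log(1 - D(fake))
        d_real = sigmoid(w1 * x_real + w0)
        d_fake = sigmoid(w1 * x_fake + w0)
        w1 += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
        w0 += lr * np.mean((1 - d_real) - d_fake)

        # Generator step: ascend log D(fake) (non-saturating loss)
        d_fake = sigmoid(w1 * x_fake + w0)
        mu += lr * np.mean((1 - d_fake) * w1)
        sg += lr * np.mean((1 - d_fake) * w1 * z)
    return mu, sg, w1, w0
```

After training, the generator’s mean drifts toward the real data’s mean – the same pressure that, at scale, pushes deepfake generators toward photorealism.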

The introduction of GANs has significantly advanced deepfakes, but two other prominent technologies are also contributing to deepfakes’ rise – 5G and cloud computing. These technologies allow video streams to be manipulated in real time, opening the door to live-streaming and video conferencing fraud.

How Security Deepfakes Bypass Cybersecurity Controls

Defending corporate networks in a world where high-profile data breaches are a daily occurrence is no easy task. Organizations today rely on robust IT security protocols and tools, including AI-driven network security, stringent network access controls, zero trust principles, and more. However, while companies work hard to strengthen their IT security, cybercriminals work hard to find a way around it. It’s a constant game of cat and mouse.

Deepfakes are particularly concerning because they can dramatically increase the effectiveness of phishing and BEC attacks – something that organizations are already struggling to combat. For example, according to Cisco’s 2021 Cybersecurity Threat Trends report, around 90% of data breaches occur due to phishing.

Deepfake Phishing Attacks

Much of the security threat around deepfake phishing revolves around their use in business email compromise attacks. Why? Because BEC attacks are the highest-grossing form of all phishing attacks for cybercriminals.

In a business email compromise attack, cybercriminals send convincing-looking emails attempting to trick a targeted employee into releasing funds or revealing sensitive information. And unlike in traditional phishing attacks, these emails aren’t sent out indiscriminately – they are specifically crafted to appeal to specific individuals.  

These types of attacks rely on trust and urgency. For example, when you get a request from your boss asking you to transfer funds, you trust that it’s a legitimate request, and you feel compelled to act quickly to avoid disappointing them. Cybercriminals love when people act quickly because it leaves less room for doubt and critical thinking, and they use several tactics to try and ramp up the urgency in their messages.  

But security deepfakes work by targeting the other component – trust. A voicemail or video message from a senior-ranking employee is even more convincing than a carefully crafted email. And deepfakes still seem in the realm of science fiction for many people. Most employees won’t stop to think that a cybercriminal has trained an algorithm on audio recordings of their boss freely available online.

The rise of hybrid and distributed workforces is also contributing to the success of this type of attack. It’s no longer unusual for employees to receive high-impact requests without ever speaking to someone face to face.

Remote Identification Verification

Security deepfakes are becoming increasingly successful at bypassing remote identification verification checks. For example, recent academic research found that deepfakes are around five times more effective at spoofing verification solutions than traditional methods like 3D masks and printed photos.

Know-your-customer (KYC) verification checks, where companies often use video or images to verify that customers are who they claim to be, are also highly vulnerable to deepfakes. Unlike with a sophisticated BEC attack, cybercriminals only need minimal source material to conduct a face swap that can fool biometric identification systems.

Combating Security Deepfakes

Unfortunately, deepfake technology is advancing faster than the systems we use to detect it. Current detection relies mainly on algorithms that look for abnormalities in skin, eyes, and hair, background discrepancies, and unusual pixel compositions. However, cybercriminals are also becoming increasingly adept at evading these detections.
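To make the idea of pixel-level detection concrete, here is a deliberately simple heuristic: measuring how much of an image’s spectral energy sits at high frequencies, since some research has reported characteristic frequency-domain artifacts in GAN-generated images. This score, its name, and the radius parameter are all illustrative assumptions – a production detector would be a trained model, not a single statistic.

```python
import numpy as np

# Toy frequency-artifact score (illustrative only, not a real deepfake
# detector): the fraction of an image's spectral energy that lies outside
# a low-frequency disc centered on the DC component.

def high_freq_energy_ratio(img: np.ndarray, radius_frac: float = 0.25) -> float:
    """Return the fraction of total spectral energy at high frequencies."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)          # distance from DC
    low = dist <= radius_frac * min(h, w)            # low-frequency disc
    return float(spectrum[~low].sum() / spectrum.sum())
```

A smooth natural gradient scores near zero, while noise-like high-frequency content scores much higher – the kind of statistical asymmetry detection algorithms try to exploit, and the kind attackers increasingly learn to suppress.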

So what does this mean going forward? First, we could see AI utilized to combat deepfake threats. For example, sufficiently advanced AI systems could crunch existing video and audio files and compare them to new material to see if a video was created by splicing together existing clips. Additionally, blockchains could be used to verify whether content has been manipulated from its original version. 
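The blockchain idea above can be sketched as a simple hash chain for content provenance. This is an assumed toy design, not any specific product: each block commits to a media file’s hash and to the previous block, so tampering with either the registered content or the ledger itself becomes detectable.

```python
import hashlib
import json
import time

# Minimal hash-chain sketch of blockchain-style content provenance
# (illustrative assumption, not a production system).

class ProvenanceChain:
    def __init__(self):
        # Genesis block anchors the chain.
        self.blocks = [{"prev": "0" * 64, "content_hash": "", "ts": 0}]

    @staticmethod
    def _block_hash(block) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def register(self, content: bytes) -> str:
        """Record a hash of the original media in a new block."""
        block = {
            "prev": self._block_hash(self.blocks[-1]),
            "content_hash": hashlib.sha256(content).hexdigest(),
            "ts": time.time(),
        }
        self.blocks.append(block)
        return block["content_hash"]

    def is_registered(self, content: bytes) -> bool:
        """Check whether this exact content was ever registered."""
        h = hashlib.sha256(content).hexdigest()
        return any(b["content_hash"] == h for b in self.blocks[1:])

    def verify_chain(self) -> bool:
        """Detect tampering with the ledger itself via the prev-hash links."""
        return all(
            self.blocks[i]["prev"] == self._block_hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )
```

A manipulated video’s hash simply won’t match any registered entry, and rewriting an old ledger entry breaks every subsequent prev-hash link.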

However, this technology isn’t likely to be available to the average organization any time soon. With this in mind, companies should focus their efforts on educating employees about the existence of deepfakes, so they are more likely to second-guess the authenticity of an unexpected video or voicemail request. At the same time, companies should encourage employees not to act quickly on unusual requests and instead take the time to verify the request’s legitimacy.


Try Portnox Cloud for Free Today

Gain access to all of Portnox's powerful zero trust access control capabilities free for 30 days!