Features 07.01.2025

The Truth About Deepfakes, and How to Tackle Them

Cheap, convincing and ubiquitous – how do we stop AI-powered fakes?

As deepfakes become increasingly realistic, Kate O’Flaherty asks what damage they are causing, and what is being done to tackle them.

As artificial intelligence (AI) becomes more capable and accessible, the market for deepfakes is thriving. Research shows that deepfake videos can influence people’s perception of individuals, even if they are aware of the technology and know when they’re being exposed to it.

This is a major concern, as deepfakes proliferate. Threat actors are using AI to create images and videos to influence elections, while fake video and audio help them carry out CEO fraud, in which employees are tricked into transferring cash to criminals. Meanwhile, deepfake sextortion scams involve criminals stealing someone’s likeness to create fake pornographic content. And National Crime Agency director Alex Murray has warned that the volume of deepfake and AI-generated child abuse images is doubling every six months.

It’s clear that industry, government and other stakeholders need to pull together to tackle the threat.

Realistic deepfakes

AI used for image and video processing has “drastically improved” and is “available to the masses”, says Matt Aldridge, principal solutions consultant at OpenText Cybersecurity.

“We are now in a period where edits or even uniquely generated content can be indistinguishable from the real thing. This now extends to videos where such creativity was, until recently, impossible,” he tells Assured Intelligence.

As deepfakes continue to improve, it’s inevitable that more people will take advantage, argues Jake Moore, global cybersecurity advisor at Eset. He says the speed at which deepfakes are shared online is a major problem.

“Social media platforms need to be held accountable for the videos shared on their platforms and offer warnings and awareness advice where possible,” he tells Assured Intelligence.

Psychological damage

The impact of deepfakes can be huge, in some cases leading to psychological damage. For example, they can be used as a means of revenge, says Will Richmond-Coggan, a partner at national law firm Freeths.

“Revenge-type deepfakes are by their very nature designed to undermine or diminish someone’s standing in society by creating a fake scenario including a real person, without their permission or knowledge,” he tells Assured Intelligence.

“Often this will involve pornographic content, which is deeply distressing for the individuals affected, most of whom are female.”

“The truth is that AI-powered tools can be generated by anyone with enough time, money and training data”

In India, journalist Rana Ayyub was the victim of what has been labelled “deepfake pornographic slander” as far back as 2018, when a video bearing her likeness was produced in a bid to silence her.

“I started throwing up. I just didn’t know what to do. In a country like India, I knew this was a big deal,” Ayyub wrote in an article for the Huffington Post. “I didn’t know how to react, I just started crying.

“I asked [a source from the ruling BJP party] why it was circulating within political circles and he told me people within the party had been passing it on. Before I could even gather myself, my phone started beeping and I saw I had more than 100 Twitter notifications, all sharing the video.”

Often the victims do not have the means to have the content removed or their name publicly cleared, says Freeths’ Richmond-Coggan.

“This carries with it psychological damage, not only by being the victim of the deepfake itself, but as a result of being helpless in the aftermath,” he adds.

Deepfake content is also used to further political agendas – for example, by faking celebrity support for a political candidate or artificially creating something that would embarrass a political candidate, says Richmond-Coggan.

Another use of deepfake technology is to create audio or video content mimicking company CEOs. In January, Assured Intelligence revealed how audio deepfakes are increasingly tricking employees into transferring large sums of cash to criminals.

What’s being done to stop it?

Across the globe, lawmakers are working to tackle the scourge. In April, the UK government announced a new law to prosecute anyone making a sexually explicit deepfake. California has also passed a number of laws to address deepfakes and sextortion.

“Of course, making something a crime doesn’t automatically mean criminals won’t do it,” says Dane Sherrets, solutions architect at HackerOne.

In data protection law, the principles of transparency and consent have been “a pervasive theme” for over 50 years, adds Freeths’ Richmond-Coggan. Ultimately, he says he would like to see “strict rules around the creation of deepfakes and clear labelling, so that deepfake content is easily and clearly identifiable”.

From a transparency perspective, the provenance of data used to train AI models is extremely important, says Frederic Werner, head of strategic engagement at the UN’s International Telecommunication Union (ITU) and co-founder of AI for Good.

“Knowing where the data comes from helps to establish authenticity and reliability and understand potential issues surrounding accuracy, bias, or licensing,” he tells Assured Intelligence. “Being able to verify the authenticity and owner of generative AI multimedia content is a necessity to protect the digital rights of the person or organisation.”
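
Werner’s point about verifying the authenticity and owner of AI-generated media is, at heart, a digital-signature problem. As a rough illustration of the idea behind content-provenance schemes such as C2PA (not any particular standard’s actual implementation), the creator signs a cryptographic hash of the media file at the point of creation, and anyone holding the matching public key can later check that the file has not been altered. The file name and the choice of Ed25519 via Python’s cryptography library below are purely illustrative assumptions.

```python
# Simplified sketch of media provenance: sign a file's hash when it is created,
# then verify it later. Real schemes (e.g. C2PA) embed far richer, signed
# metadata; this only shows the underlying sign/verify idea.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_of(path: str) -> bytes:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# Creator side: generate a key pair and sign the media file's digest.
creator_key = Ed25519PrivateKey.generate()
signature = creator_key.sign(sha256_of("video.mp4"))  # hypothetical file name

# Verifier side: with the creator's public key, confirm the file is unchanged.
public_key = creator_key.public_key()
try:
    public_key.verify(signature, sha256_of("video.mp4"))
    print("File matches the digest the creator signed")
except InvalidSignature:
    print("File has been altered, or the signature is not from this creator")
```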

“It’s about empowering individuals and organisations to make informed decisions in an increasingly digital world” James Tucker

Technology to verify participants on Teams or Zoom calls is currently being tested, but no equivalent exists for other uses of deepfake technology, says Eset’s Moore. He says sextortion has become “a worrying problem” among young people, with “long-lasting effects on the victim’s mental health”.

AI companies offering off-the-shelf deepfake products are well aware of the issue and have put measures in place to limit and block the creation of pornographic material. However, the truth is that AI-powered tools can be generated by anyone with enough time, money and training data.

“Plus there are underground alternatives without the same morals, so the problem becomes difficult to police,” Moore says.

Several startups and AI firms have developed sophisticated detection tools, analysing elements such as pixel inconsistencies and mismatched audio-visual synchronisation, explains James Tucker, head of CISO international at Zscaler. These technologies are becoming increasingly accessible and could soon be integrated into mainstream devices, he says.

“For instance, Apple and Android may soon offer built-in tools to identify AI-generated media,” Tucker adds.
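
To give a sense of the kind of signal Tucker describes, the sketch below runs a basic error level analysis, one classic pixel-inconsistency check: re-save the image at a known JPEG quality and look at how unevenly the compression error is spread. It is a toy heuristic, not how any vendor’s detector actually works, and the file name is a placeholder; production tools combine many such signals with trained models.

```python
# Toy error level analysis (ELA): one pixel-inconsistency heuristic that
# deepfake detectors may build on. Requires Pillow and numpy.
import numpy as np
from PIL import Image, ImageChops

def ela_spread(path: str, quality: int = 90) -> float:
    """Re-save the image as JPEG at a known quality and measure how unevenly
    the compression error is distributed across the frame."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg").convert("RGB")
    diff = np.asarray(ImageChops.difference(original, resaved), dtype=np.float32)
    # Heavily edited or pasted-in regions often compress differently from the
    # rest of the frame, so a large spread is one (weak) indicator of tampering.
    return float(diff.std())

if __name__ == "__main__":
    score = ela_spread("suspect_frame.jpg")  # hypothetical input frame
    print(f"ELA spread: {score:.2f} (higher = more uneven compression error)")
```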

Security vendors are also getting involved. A dedicated tool called Trend Micro Deepfake Inspector “scans for AI-face swapping scams and alerts users in real-time,” explains the firm’s senior antivirus threat researcher, David Sancho.

What the future holds

It’s widely acknowledged that the issue needs to be dealt with promptly, before technology advances make the problem worse. Deepfake pornography can have a traumatic effect on victims, and as technology improves, it becomes easier for malicious individuals to create fake footage of anyone they want, says Eset’s Moore.

“Although the recent criminalising of deepfake pornography creation and sharing may not stop the problem, it helps send a clear message to people that it is extremely damaging to all those involved and could significantly impact the creator as well,” he argues.

Yet video editing and production tools will continue to mature as AI technology advances, says OpenText’s Aldridge. He describes “the scariest scenario” as one in which AI is leveraged to automatically edit out the artefacts and glitches that can be used to differentiate between real and fake.

“We are now in a period where edits or even uniquely generated content can be indistinguishable from the real thing” Matt Aldridge

To combat these dangers, the IT industry needs to develop better detection tools, such as apps that can help verify whether content is real or AI-generated, says Zscaler’s Tucker. Alongside technology, there must be stricter regulations that mandate transparency, such as requiring clear labels on AI-generated content, and enhanced public awareness, he says.

“It’s about empowering individuals and organisations to make informed decisions in an increasingly digital world,” Tucker continues.

Education needs to improve in order to promote awareness among the general populace, agrees Eset’s Moore.

“As technology improves, we are inevitably going to see more deepfakes generated with little thought about the victims from those creating the content, or viewing it,” he concludes.
