
Government Can’t Manage the Deepfake Threat – Execs Need to Shoulder the Responsibility Instead

Any faith in the ability of governments to deal with deepfakes is misplaced

CEOs need to stop trusting the government to handle the deepfake threat and instead double their investment in deepfake verification tools, argues Michael Marcotte

The cybersecurity landscape is shifting beneath executives’ feet. In the time it takes to find solid ground and neutralise one threat, new technologies emerge, making what was previously stable and secure now vulnerable and exposed.

Right now, cybersecurity professionals are losing sleep over the proliferation of deepfakes. The emergence of easily accessible generative AI tools such as Midjourney means that the number of deepfake videos online has ballooned, growing at an estimated 900% per year according to the World Economic Forum.

There have already been several high-profile corporate casualties. One finance worker recently transferred $25 million to fraudsters after a video call with a deepfake ‘chief financial officer’. Deepfakes could feasibly be used to gain unauthorised access to anything from sensitive financial data to intellectual property.

Equally, malicious actors could deploy deepfakes of senior management to cause reputational harm, undermining consumer and investor confidence. Sophisticated criminals could even use this for market manipulation, shorting the company’s stock before releasing the fake.

So, what has the response been in the face of this new calibre of threat?

Lacklustre is the first word that springs to mind. Corporations have been happy to leave this to the government, trusting that it can legislate deepfakes out of existence.

“In the current state of acute geopolitical tension, believing that the CCP will stop Chinese hackers from disrupting US industry and business is delusional”

Any faith in the ability of governments to deal with deepfakes is misplaced. Delegating full responsibility to them means trusting that every state will police deepfakes with equal rigour. But in the current state of acute geopolitical tension, believing that the CCP will stop Chinese hackers from disrupting US industry and business is delusional.

CEOs must step up and take responsibility for insulating their firms. Otherwise, they may find their own likenesses in the next deepfake used to gain access to corporate secrets. So what can they do?

The first thing they need to do is allocate proper funding to cybersecurity. According to Kaspersky, 25% of management teams acknowledge that they currently underfund cybersecurity – and those are only the ones that admit it.

I want to see corporates double their investment in cybersecurity. With this new war chest, they can deploy various preventative technologies to insulate their firms properly.

The first of these should be biometric signatures. Facial and vocal recognition are currently the preserve of only the most technologically advanced companies; they need to become industry standard.

To the human eye, deepfakes can seem like perfect representations of their targets – but they aren’t. There are often tiny flaws and inconsistencies at the most granular level, imperceptible to human eyes and ears: misplaced pixels, or unnatural vocal fluctuations.

Biometric recognition tools, however, can pick up these artefacts and match the face or voice in question against stored biometric data from genuine employees, verifying whether or not a video or recording is a deepfake.
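At its core, that check reduces to comparing an embedding extracted from the live call against one captured when the employee enrolled. The Python sketch below illustrates the idea only: the 512-dimensional random vectors stand in for the output of a real face- or speaker-recognition model, and the 0.8 threshold is an illustrative placeholder that would need tuning on genuine and impostor data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(live_embedding: np.ndarray,
                  enrolled_embedding: np.ndarray,
                  threshold: float = 0.8) -> bool:
    """Accept the caller only if their face/voice embedding sits close
    enough to the embedding captured at enrolment. In production the
    embeddings come from a recognition model; here they are just vectors."""
    return cosine_similarity(live_embedding, enrolled_embedding) >= threshold

# Demo with random vectors standing in for real model output.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)
genuine = enrolled + rng.normal(scale=0.1, size=512)   # same person, new capture
impostor = rng.normal(size=512)                        # deepfake or impostor

print(verify_caller(genuine, enrolled))   # True
print(verify_caller(impostor, enrolled))  # False
```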

Another promising avenue is to fight fire with fire, turning AI back on its own creations. There is no tool better suited to recognising AI-generated deepfakes than AI itself. Using the flood of deepfakes already on the internet as training data, AI systems can learn to recognise unusual facial movements, inconsistent lighting and shadows, or mismatched audio and lip movements.

“There is no tool better suited to recognise AI-generated deepfakes than AI itself”
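As a sketch of what that training looks like in practice, the snippet below fine-tunes a small off-the-shelf image classifier to label video frames as real or fake. The random tensors are placeholders for frames from a labelled corpus such as FaceForensics++; the architecture, batch size and loop length are illustrative choices, not a recommendation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Binary real-vs-fake frame classifier: a standard ResNet backbone with a
# two-class head. Random tensors stand in for frames from a labelled corpus.
model = resnet18(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

frames = torch.randn(8, 3, 224, 224)   # placeholder video frames
labels = torch.randint(0, 2, (8,))     # 0 = real, 1 = deepfake

model.train()
for _ in range(3):                     # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()

# At inference, average per-frame fake probabilities across a clip.
model.eval()
with torch.no_grad():
    fake_prob = model(frames).softmax(dim=1)[:, 1].mean()
print(f"estimated probability clip is fake: {fake_prob:.2f}")
```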

AI is also a significant part of the third preventative technology that CEOs need to start rolling out. Forensic media toolkits use AI systems, among other technologies, to verify if an image, video, or recording has been tampered with. These tools examine the metadata embedded in a deepfake, such as timestamps, device information, and editing history, to check for evidence of manipulation. AI can be used alongside human experts for multimodal analysis and prevention.
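As a first-pass illustration of the metadata side, the sketch below uses the Pillow library to pull the EXIF fields a forensic review would sanity-check. Real forensic toolkits go much further, into compression artefacts and editing history; the filename here is, of course, a placeholder.

```python
from PIL import Image, ExifTags

def inspect_metadata(path: str) -> dict:
    """Pull basic EXIF fields worth sanity-checking: the capture timestamp,
    the device that claims to have taken the image, and any editing
    software recorded in the file. Missing or contradictory fields are
    not proof of tampering, but they are cheap first-pass signals."""
    exif = Image.open(path).getexif()
    readable = {ExifTags.TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}
    return {key: readable.get(key)
            for key in ("DateTime", "Make", "Model", "Software")}

print(inspect_metadata("suspect_image.jpg"))
```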

These technologies are available now. But executives have been happy to cede responsibility to the government, content with daft calls for ‘increased regulation’. Regulation has not stopped, and will not stop, this new breed of cybercriminal. CEOs need to double their investment in deepfake verification tools now.

This will require some expenditure, but the threat of losing cash, IP and trade secrets to hackers – to say nothing of the reputational damage – more than justifies the investment.

Michael Marcotte is one of the world’s pre-eminent experts in digital identity, cybersecurity and business intelligence technology. He joined EchoStar, the multi-billion-dollar satellite communications firm, in 2006, serving as global CIO, global CDO and president of Hughes Cloud Services. He was one of the first people to serve as chief digital officer at a major international corporation. He left in 2014 and has since applied his expertise at a range of firms in technology, cybersecurity and venture capital. He is also co-founder of the National Cybersecurity Center (NCC) and was founder and chairman of the NCC’s Rapid Response Center Board. He has been a senior advisor to several heads of state and US Senators.
