
Features 08.04.2025
Could Content Credentials Be the Answer to the Deepfake Threat?
Momentum is building behind an initiative to enhance digital content transparency
Can you believe what you see online? Unfortunately for many people today, the answer is increasingly a resounding “no”. Deepfakes are bad news for many reasons. But for CISOs, they pose an outsized threat through their potential to amplify social engineering, business email compromise (BEC) and even social media scams.
There is certainly no silver bullet for a challenge that’s only set to grow as the technology behind deepfakes gets better and cheaper. But an initiative dubbed Content Credentials has already won many plaudits, including from the NSA and the UK’s National Cyber Security Centre (NCSC). It could yet help society in general, and businesses in particular, to push back against a rising tide of online fakery.
Deepfakes have been circulating online for several years. These digitally altered or completely synthetic pieces of audio, video and image-based content were initially viewed with curiosity as fairly harmless, easy-to-spot fakes. But the technology has rapidly matured, supercharged by generative AI (GenAI), to the point where threat actors are now using it in everything from sextortion and online scams to child abuse material. The UK government is frantically looking for answers to what it describes as “a growing menace and an evolving threat”, citing forecasts that eight million deepfakes will be shared in 2025, up from 500,000 two years ago.
“Threat actors are now using deepfakes in everything from sextortion and online scams to child abuse material.”
From a CISO perspective, there are several potential risks associated with malicious use of the technology, including:
Brand damage: Deepfake videos of CEOs and senior executives circulated online could be used to tarnish the corporate brand directly, in order perhaps to influence the stock price, or even to perpetuate investment scams and other fraud.
Direct financial loss: While the above scenario could also create significant financial risk, there are other more direct tactics threat actors can use to make money. One is by amplifying business email compromise (BEC) scams. Instead of sending an email to a finance team member, requesting a fund transfer, a cybercriminal could send manipulated audio or video impersonating a supplier or C-suite executive. The FBI has been warning about this for several years. BEC cost $2.9bn in 2023 alone, and the figure continues to rise.
Unauthorised access: Beyond BEC, deepfakes could also be deployed to amplify social engineering in an attempt to gain access to sensitive data and/or systems. One such technique spotted in the wild is the fake employee threat, which has already deceived one cybersecurity company. Faked images, or even video, could be used to add credibility to a candidate that would otherwise be turned away, such as a nation state operative or cybercriminal.
Account takeover/creation fraud: Cybercriminals are also using stolen biometrics data to create deepfakes of customers, in order to open new accounts or hijack existing ones. This is especially concerning for financial services firms. According to one study, deepfakes now account for a quarter of fraudulent attempts to pass motion-based biometrics checks.
The challenge is that deepfakes are increasingly difficult to tell apart from the real thing. And the technology is being commoditised on the cybercrime underground, lowering the barrier to entry for would-be fakers. There have even been warnings that deepfakes could be made more realistic still if large language models (LLMs) were trained on stolen or scraped personal information – to create an evil “digital twin” of a victim. The deepfake would be used as the front end of this avatar, which would look, sound and act like the real person in BEC, fake employee and other scams.
Against this backdrop, many cybersecurity professionals are getting behind Content Credentials. Backed by the likes of Adobe, Google, Microsoft and – most recently – Cloudflare, the initiative was first proposed by the Coalition for Content Provenance and Authenticity (C2PA). Currently being fast-tracked to become global standard ISO 22144, it works as a content provenance and authentication mechanism.
“The idea is to create trust among content consumers through greater transparency.”
A Content Credential is a set of “tamper-evident”, cryptographically signed metadata attached to a piece of content at the time of capture, editing or directly before publishing. If that content is edited and/or processed over time, it may accrue more Content Credentials, enabling the individual who has altered it to identify themselves and what they did. The idea, as the NSA puts it, is to create trust among content consumers through greater transparency, just as nutrition labels do with food.
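The mechanics of that “tamper-evident” binding can be sketched in a few lines of code. The snippet below is a deliberately simplified illustration only: real Content Credentials (the C2PA specification) use X.509 certificate chains and CBOR-encoded manifests, not HMAC over JSON, and the key and field names here are hypothetical. It shows the core idea, though: metadata is cryptographically bound to a hash of the content, so any alteration to either the media or its provenance record invalidates the signature.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real C2PA signers use certificate-backed
# private keys, not a shared secret like this.
SIGNING_KEY = b"issuer-private-key"

def attach_credential(content: bytes, assertions: dict) -> dict:
    """Bind signed metadata to content at capture or edit time."""
    manifest = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "assertions": assertions,  # e.g. who edited it, with what tool
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict) -> bool:
    """Tamper-evident check: any change to content or metadata fails."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_hash"]
                == hashlib.sha256(content).hexdigest())

image = b"\x89PNG...raw capture bytes"
cred = attach_credential(image, {"tool": "CameraApp 1.0",
                                 "action": "capture"})
print(verify_credential(image, cred))            # True: untouched
print(verify_credential(image + b"edit", cred))  # False: content altered
```

Each subsequent edit would append a further signed manifest in the same way, producing the chain of credentials the standard describes.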
The initiative is evolving as potential weaknesses are discovered. For example, recognising that trust in the metadata itself is paramount, efforts were made to enhance preservation and retrieval of this information. Thus, Durable Content Credentials were born, incorporating digital watermarking of media and “a robust media fingerprint matching system”.
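The “media fingerprint matching” component can also be sketched. The toy example below is an assumption-laden simplification: it uses a basic average-hash over a 4x4 brightness grid, whereas production systems use far more robust perceptual fingerprints and invisible watermarks. The point it illustrates is that, unlike metadata, a fingerprint survives re-encoding, so media stripped of its credential can still be matched back to the original record.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Perceptual fingerprint: 1 bit per pixel vs. mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 35, 225],
            [11, 198, 28, 230]]
# Re-encoded copy: slight pixel noise, original metadata long gone
reencoded = [[12, 198, 31, 221],
             [14, 211, 24, 214],
             [13, 204, 36, 224],
             [10, 199, 29, 231]]

# Hypothetical registry mapping fingerprints to stored credentials
fingerprint_db = {average_hash(original): "credential-for-original"}

distance = hamming(average_hash(reencoded), next(iter(fingerprint_db)))
print(distance <= 2)  # near-match: the credential can be re-attached
```

A small Hamming distance between fingerprints signals a near-duplicate, letting a verification service retrieve and re-display the original Content Credential even after the metadata itself was lost.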
If the standard takes off, it could be a game changer, argues Andy Parsons, senior director for content authenticity at Adobe.
“We’ve seen great momentum for real-world applications of Content Credentials which includes being integrated into the recently launched Samsung Galaxy S25. They are also supported by all ‘big five’ camera manufacturers – Canon, Fujifilm, Leica, Nikon, and Sony – as well as by the BBC for BBC Verify,” he tells Assured Intelligence.
“Where social media and other websites do not yet retain visible Content Credentials when content is posted on their platforms, we have released the Adobe Content Authenticity extension for Google Chrome to allow end users to view Content Credentials on any website.”
“The approach helps organisations filter out manipulated content, make informed decisions and reduce exposure to misinformation” Will Allen
Cloudflare’s head of AI audit and media privacy, Will Allen, adds that it could be used as a “trusted authentication tool” to tackle BEC, social media scams and other deepfake content.
“This approach helps organisations filter out manipulated content, make informed decisions and reduce exposure to misinformation,” he tells Assured Intelligence.
However, there are still limits to the initiative’s potential impact, especially given the growing speed, quality and accessibility of deepfake tools.
Although there is “active work underway” to support live video, according to Adobe’s Parsons, that support is not yet finalised. This could leave the door open for threat actors using real-time deepfake tools for BEC fraud. Trend Micro senior threat researcher, David Sancho, adds that until all sources watermark their content, the potential for a high rate of false negatives is amplified.
“Often, once you see it, you can’t unsee it. This is more relevant for disinformation campaigns, but also for some scams,” he continues. “The criminals may also be able to remove fingerprinting metadata from synthetic media.”
While Content Credentials offers a helping hand in the form of additional data points to study, it’s not a silver bullet.
“Instead, to stop BEC, a company needs to implement strong processes that force finance employees to double/triple check money transfers beyond a certain amount, especially out of working hours,” Sancho continues. “This makes BEC a much more difficult proposition for the criminal because they must fool two or three people, not only one.”
Cloudflare’s Allen admits that uptake remains the key to success.
“The biggest challenge is adoption – making it easier for users to inspect and verify Content Credentials,” he says. “For this to be truly effective, verification needs to be effortless and accessible, wherever users encounter media – whether on social media platforms, websites, or apps.”
Adobe’s Parsons claims that its Content Authenticity Initiative (CAI) now has over 4,000 members, but agrees that end-user awareness will be key to its success.
“The more places Content Credentials show up, the more valuable they become. We also need to help build more healthy assessment of digital content and grow awareness of tools that are available,” he concludes. “Therefore, ensuring people are better educated to check for credentials and to be sceptical of content without them becomes even more essential.”
To implement Content Credentials, the NSA says organisations should consider: