Features 08.02.2024

The NCSC’s Artificial Intelligence Warning: What Next for CISOs?

The latest report from the NCSC claims AI will “almost certainly increase the volume and heighten the impact of cyber attacks over the next two years.”

The near-term impact of artificial intelligence on the threat landscape could be significant, warns the UK’s leading security agency. What does that mean for corporate strategy, asks Phil Muncaster?

Concern around artificial intelligence (AI) being used as an aid by cyber criminals has been building for some time. But when the cybersecurity vendor community drives the narrative, it’s sometimes tricky to pick out the genuine threats from the hype (or, worse, scaremongering). That’s why the latest report from the National Cyber Security Centre (NCSC) is so significant. When the GCHQ outpost claims AI will “almost certainly increase the volume and heighten the impact of cyber attacks over the next two years”, it’s time to pay attention.

Fortunately, AI’s more sophisticated malign uses will be restricted to a select group of threat actors in the near term. And there are things that network defenders can do to mitigate the main risks highlighted by the NCSC.

What’s the NCSC got to say?

Described by the NCSC as “the most acute cyber threat facing UK organisations and businesses”, ransomware is singled out in its threat assessment as one of the primary beneficiaries of malicious AI tooling. It’s not as if ransomware actors need any help: new blockchain analysis of cryptocurrency payments to threat actors finds that they extorted at least $1bn from victims last year, an all-time high.

Meanwhile, UK organisations continue to be breached at an alarming rate. Government figures claim that 59% of mid-sized and 69% of large businesses were attacked between January 2022 and January 2023.

In short, threat actors already seem to have an unfair advantage. The input of AI tools could swing the pendulum decisively in their favour unless cybersecurity teams can respond. Specifically, the NCSC warns that:

  • The main current AI “capability uplift” is in social engineering. Generative AI (GenAI) can help threat actors craft highly convincing phishing messages with virtually no grammatical mistakes. This provides a significant advantage to threat actors with few cyber skills of their own.
  • The ability to “summarise data at pace” will enable threat actors to identify high-value vulnerable assets for examination and exfiltration, enhancing the value and impact of cyber attacks.
  • AI is improving the efficiency of existing techniques for malware and exploit development, vulnerability research, and lateral movement. However, this capability will likely remain solely in the hands of sophisticated actors, as it requires human expertise and large volumes of good-quality exploit data for training.
  • Threat actors will be able to analyse exfiltrated data faster and more effectively and use it to train AI models.
  • Expertise, equipment, time and financial resources are essential to exploit AI for advanced cyber operations. Capable state actors, commercial spyware companies, and some criminal groups will benefit most from AI for this reason.
  • Malicious use of AI will make defenders’ jobs harder by making social engineering lures more difficult to spot. It will also shrink the window between a patch being released and threat actors exploiting unpatched systems.
  • In time, GenAI-as-a-service will be highly disruptive by democratising these capabilities to a wider range of criminals. The process has arguably already begun with dark web tools like WormGPT and HackGPT.
  • As successful exfiltrations accumulate, the data available to train malicious AI models will “almost certainly” improve, driving faster and more precise cyber operations.

More than GenAI

The experts that Assured Intelligence spoke to agree with the NCSC’s assessment. While much of the media focus has been on GenAI, there’s plenty more for CISOs to be worried about, argues Jason Nurse, CybSafe’s director of science and research and reader in cybersecurity at the University of Kent.

“The automation capabilities of AI allow cyber criminals to streamline the process of identifying vulnerabilities within software and organisational systems” Jason Nurse

“The automation capabilities of AI allow cyber criminals to streamline the process of identifying vulnerabilities within software and organisational systems. It reduces the need for constant human oversight on the part of attackers, making it easier for them to launch and sustain attacks,” he tells Assured Intelligence.

“We have also seen scammers integrate ChatGPT into dating apps, displaying capabilities to get past security protocols like CAPTCHA, analyse inboxes, respond to messages and ask for phone numbers.”

Nurse also cites reports of “adaptive malware,” capable of analysing victim systems and evolving to avoid detection. Given its relative scarcity, such technology is not “immediately concerning”, he counters; deepfakes are the more pressing threat. That technology could be used in voice phishing, business email compromise (BEC), and even to bypass biometric-based identity verification systems.

Deepfaking it

Tech firm Sumsub recorded a tenfold annual increase in deepfakes detected globally last year, with the crypto and fintech sectors accounting for the vast majority (96%) of cases. The firm’s head of artificial intelligence and machine learning, Pavel Goldman-Kalaydin, argues that organisations must embrace multi-layered methods such as “behavioural anti-fraud measures and transaction monitoring” to mitigate the threat.

He adds that protecting against data exfiltration – through encryption, access controls and continuous monitoring – will be critical to denying threat actors the data they need to train their AI models.

“This data encompasses sensitive and proprietary information, such as personally identifiable information (PII), financial records and intellectual property. It could also include information extracted from raw logs, further emphasising the importance of protecting sensitive data and the vast number of logs containing potentially valuable information,” he tells Assured Intelligence.

“Threat actors can leverage AI to analyse and summarise this data quickly, enabling them to identify high-value assets for exfiltration.”
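To make the continuous monitoring Goldman-Kalaydin describes concrete, here is a minimal sketch in Python of one common pattern: flagging hosts whose outbound data volume deviates sharply from their own historical baseline, a crude exfiltration signal. The host names, thresholds and data source are illustrative assumptions, not a production design.

    # Minimal sketch: flag hosts whose daily outbound traffic deviates
    # sharply from their own baseline (a crude exfiltration signal).
    # Hostnames, volumes and the 3-sigma threshold are all assumptions.
    from statistics import mean, stdev

    def flag_exfiltration(history: dict[str, list[int]],
                          today: dict[str, int],
                          sigma: float = 3.0) -> list[str]:
        """Return hosts whose outbound bytes today exceed their
        historical mean by more than `sigma` standard deviations."""
        suspicious = []
        for host, volumes in history.items():
            if len(volumes) < 2:
                continue  # not enough baseline data to judge
            mu, sd = mean(volumes), stdev(volumes)
            if today.get(host, 0) > mu + sigma * max(sd, 1):
                suspicious.append(host)
        return suspicious

    # Example: one workstation suddenly uploads ~50x its usual volume.
    history = {"ws-101": [120_000, 95_000, 110_000, 130_000],
               "ws-102": [400_000, 380_000, 420_000, 390_000]}
    today = {"ws-101": 5_600_000, "ws-102": 410_000}
    print(flag_exfiltration(history, today))  # ['ws-101']

Real deployments would of course baseline far richer signals (destinations, time of day, protocols), but the underlying idea – each host judged against its own history – is the same.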

Where do CISOs go from here?

The NCSC report highlights the various levels of uplift that AI could give different threat actor types. In doing so, it differentiates between: “highly capable” state actors, commercial companies selling to states, “capable” state actors, organised crime, hacktivists, opportunistic cyber criminals, and hackers-for-hire. For Goldman-Kalaydin, understanding who the organisation’s main adversaries are likely to be is a key first step for CISOs looking to craft a response.

“Understanding adversaries’ skill levels and motivations informs strategic decisions for defending against attacks” Pavel Goldman-Kalaydin

“Identifying adversaries is vital for CISOs, especially considering the rising threat of deepfakes. Understanding adversaries’ skill levels and motivations informs strategic decisions for defending against attacks,” he explains.

The University of Kent’s Nurse agrees, claiming that “the better understanding cybersecurity teams have of their opponents, the more impactful decisions can be regarding tech investment and strategy.”

However, at a high level, the solution in most cases will be the same: “positive cybersecurity culture, investment in impactful tech solutions and strong processes and protocols before, during and after cyber events.”

Culture is particularly crucial, says Nurse. “If employees are not aware of the importance of cyber hygiene and are not able to openly discuss and highlight suspicious activity without fear of punishment, technological safeguards will only get them so far,” he argues.

“In practice, this means openly discussing cyber threats and providing training, not punishing but celebrating people for voicing cybersecurity concerns, and reducing perceptions of an ‘us vs them’ mentality that prevents employees engaging with senior management on issues of great importance.”

When it comes to training, voice cloning and AI-generated phishing will be key additions to user awareness programmes going forward. But given the increasing sophistication of these efforts, user scepticism will be important, Nurse says.

“This means encouraging a culture in which members are naturally inquisitive about the communications they receive. Were you expecting an email from this person? Is it their usual email address? Do they use the organisation’s email signature?” he explains. “Training providers must elevate their education to include the inevitability of AI-based attacks.”
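Some of those questions can also be automated. The snippet below is a hypothetical sketch, using only Python’s standard library, of two checks that mirror Nurse’s questions: does the Reply-To domain match the From domain, and did the message pass SPF according to the receiving server’s Authentication-Results header? The header names are standard; the pass/fail policy itself is an assumption for illustration, not a complete anti-phishing control.

    # Illustrative sketch of automating two of the sender checks above.
    # The flagging policy is an assumption, not a full control.
    from email import message_from_string
    from email.utils import parseaddr

    def basic_sender_checks(raw_message: str) -> list[str]:
        msg = message_from_string(raw_message)
        warnings = []
        from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
        reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
        if reply_domain and reply_domain != from_domain:
            warnings.append(f"Reply-To domain ({reply_domain}) "
                            f"differs from From domain ({from_domain})")
        auth = msg.get("Authentication-Results", "")
        if "spf=pass" not in auth.lower():
            warnings.append("No SPF pass recorded by the receiving server")
        return warnings

    raw = ("From: CEO <ceo@example.com>\n"
           "Reply-To: urgent@attacker.test\n"
           "Authentication-Results: mx.example.com; spf=fail\n"
           "Subject: Wire transfer\n\nPlease act now.")
    for w in basic_sender_checks(raw):
        print("WARNING:", w)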

As AI empowers threat actors, organisations should ultimately look to innovative tooling of their own, says Acronis CISO Kevin Reed.

“AI has already been used for over a decade in protection tools to find anomalies and block unwanted access,” he tells Assured Intelligence. “AI helps to quickly find outliers in large data sets and takes immediate mitigation actions to lower the potential impact. This AI versus AI fight will increase in the next few years where a rapid initial response becomes important.”
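As a rough illustration of the outlier detection Reed describes, the sketch below trains an Isolation Forest (via scikit-learn) on simple per-session features and flags the handful that behave abnormally. The features and figures are synthetic assumptions invented for demonstration; commercial tools combine far richer telemetry.

    # Minimal sketch of the "find outliers in large data sets" pattern:
    # an Isolation Forest over synthetic per-session features.
    # Feature choices and values below are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Columns: [logins per hour, KB sent, failed auth attempts]
    normal = rng.normal(loc=[5, 200, 1], scale=[2, 50, 1], size=(500, 3))
    # A few sessions with exfiltration-like behaviour.
    anomalies = rng.normal(loc=[40, 9000, 15], scale=[5, 500, 3], size=(5, 3))
    sessions = np.vstack([normal, anomalies])

    model = IsolationForest(contamination=0.01, random_state=0)
    labels = model.fit_predict(sessions)  # -1 = outlier, 1 = inlier
    print("Flagged sessions:", np.where(labels == -1)[0])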

It seems the AI arms race has only just begun.
