
Features 02.09.2025
How CISOs Can Stay on the Right Side of a Yawning AI Divide
The next stage of malicious AI use is upon us. CISOs should take note.
Predictions about the future of the threat landscape are usually loudly proclaimed and then quietly forgotten about. But the National Cyber Security Centre (NCSC) was pretty accurate with its first crystal ball-gazing report into AI in January 2024. Now it’s back with another two-year lookahead. And once again, it makes for a nervy read – especially for CISOs working in critical national infrastructure (CNI).
Most worryingly, the NCSC predicts that there will “almost certainly” be a growing digital divide between organisations that can keep pace with AI-powered threats, and those that can’t. The question for security leaders is how to make sure they are on the right side of this widening chasm.
The NCSC makes several predictions in its report, using the Professional Head of Intelligence Assessment (PHIA) probability yardstick, which puts “highly likely” at 80-90% certainty and “almost certain” at 95-100%. The headline finding is perhaps the most uncontroversial: that AI will “almost certainly continue to make elements of cyber-intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats.”
“The time between disclosure and exploitation has shrunk to days and AI will almost certainly reduce this further” – NCSC
More interesting is how it sees the threat landscape evolving over the coming two years. First, threat actors will increasingly be able to circumvent safety features built into legitimate AI tools, via jailbreak-as-a-service offerings. Or else they will be able to repurpose open-source models to do their bidding, the report claims.
To what end will these tools be put? The NCSC cites victim reconnaissance, vulnerability research and exploit development (VRED), social engineering, basic malware generation, and processing exfiltrated data. Of these, the most significant use case is “highly likely” to be VRED, with intelligent, automated systems discovering and exploiting vulnerabilities en masse in victim environments.
This is where the divide emerges. While these capabilities increase the volume of attacks against unpatched systems, only those developers and system owners using AI to find and patch bugs first will be able to mitigate the risk effectively.
“System owners already face a race in identifying and mitigating disclosed vulnerabilities before threat actors can exploit them,” notes the NCSC. “The time between disclosure and exploitation has shrunk to days and AI will almost certainly reduce this further. This will highly likely contribute to an increased threat to CNI or CNI supply chains, particularly any operational technology with lower levels of security.”
That’s not the only AI threat facing network defenders. Aside from using automated tools to identify and exploit vulnerabilities, threat actors will also deploy AI to make “rapid changes to malware and supporting infrastructure”, in order to better evade detection. Once again, only those with “AI assistance for defence” stand a chance of tackling this challenge, the NCSC warns.
AI will also be a target for attack as it makes its way into more enterprise IT environments. Direct and indirect prompt injection, software vulnerabilities, and supply chain attacks on such systems could facilitate access to wider systems. Poorly engineered AI, rushed to market without adequate security built in, will make the adversary’s job even easier.
As if that weren’t enough for security teams to think about, there’s one more prediction from the agency. AI is also “highly likely” to improve the ability of skilled cyber actors to discover zero-day vulnerabilities and exploitation techniques.
When it comes to protecting the growing AI attack surface, visibility is the first key step, according to Chris Hosking, cloud and AI security evangelist at SentinelOne. He argues that security teams need to understand which AI models are currently in use and what services and infrastructure are being built.
“AI should be a partner that makes your security team smarter and faster, not something that replaces their judgment” – Gerald Beuchelt
“We should first recognise that most organisations have a bifurcated AI attack surface. There are risks associated with the adoption of AI across the enterprise, such as the use of popular LLMs by marketing, or AI-assisted coding by development teams,” he tells Assured Intelligence.
“Often this adoption comes outside of IT’s oversight, and security teams can’t protect what they can’t see. At the same time, many organisations are building their own AI services into their products, which introduces further security considerations.”
IEEE senior member and Ulster University cybersecurity professor Kevin Curran recommends blending traditional security with AI-specific defences.
“Ultimately, securing this environment begins with the data pipeline. Poisoned or manipulated datasets can compromise an entire model. Another key step is to implement data encryption such as AES-256 for sensitive data both at rest and in transit,” he tells Assured Intelligence.
“CISOs must ensure data integrity through provenance checks, sanitisation and strong access controls. Adversarial training, watermarking and ongoing monitoring are examples of methods that help to reduce the risk of manipulation, theft or misuse.”
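For illustration only, the sketch below shows one way to apply Curran’s encryption-at-rest point, using AES-256 in GCM mode via Python’s cryptography library. The dataset contents and inline key generation are assumptions made for the example; in practice the key would come from a KMS or HSM, never be created alongside the data.

```python
# Illustrative sketch only: protecting a sensitive dataset at rest with
# AES-256-GCM via the "cryptography" package. The file contents and inline
# key generation are assumptions for the example; in practice the key would
# be issued and stored by a KMS or HSM, never generated next to the data.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_dataset(plaintext: bytes, key: bytes) -> bytes:
    """Return nonce + ciphertext. GCM also authenticates the data,
    so tampering with the stored blob is detected at decryption time."""
    nonce = os.urandom(12)  # must be unique for every encryption with this key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)


def decrypt_dataset(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # raises if tampered


key = AESGCM.generate_key(bit_length=256)  # 256-bit key = AES-256
blob = encrypt_dataset(b"training records ...", key)
assert decrypt_dataset(blob, key) == b"training records ..."
```

A mode like GCM matters here because it pairs confidentiality with integrity, which supports the provenance and tamper-detection checks Curran describes.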
APIs and endpoints are another key point of exposure, one that can be exploited for model extraction, prompt injection or other adversarial attacks. Rate limiting and input validation are vital to mitigating risk here, says Curran.
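As a rough sketch of that rate limiting and input validation in front of a model-serving endpoint, the following assumes a simple per-client request window and a basic prompt check; the thresholds, blocklist pattern and the call_model() stub are illustrative assumptions, not recommendations or any product’s defaults.

```python
# Hedged sketch of rate limiting and input validation for a model-serving
# endpoint. The window, thresholds, blocklist pattern and call_model() stub
# are all illustrative assumptions.
import re
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30      # slows bulk queries used for model extraction
MAX_PROMPT_CHARS = 4_000
INJECTION_PATTERN = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)

_history: dict[str, list[float]] = defaultdict(list)


def allow_request(client_id: str) -> bool:
    """Naive fixed-window limiter: at most N requests per client per window."""
    now = time.monotonic()
    recent = [t for t in _history[client_id] if now - t < WINDOW_SECONDS]
    _history[client_id] = recent
    if len(recent) >= MAX_REQUESTS_PER_WINDOW:
        return False
    recent.append(now)
    return True


def validate_prompt(prompt: str) -> str:
    """Basic input validation before the prompt ever reaches the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt too long")
    if INJECTION_PATTERN.search(prompt):
        raise ValueError("prompt matches a known injection pattern")
    return prompt


def call_model(prompt: str) -> str:
    return "stub response"  # stand-in for the real inference call


def handle(client_id: str, prompt: str) -> str:
    if not allow_request(client_id):
        raise RuntimeError("rate limit exceeded")
    return call_model(validate_prompt(prompt))
```

Pattern matching alone will not stop determined prompt injection, which is why Curran pairs these controls with broader infrastructure and supply chain measures.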
“CISOs should also consider the importance of securing surrounding infrastructure and the supply chain including managing open source dependencies, cloud platforms and containerised environments,” he adds.
“Implement continuous vulnerability management and strict identity controls under a zero-trust model to mitigate risks to these areas. Beyond technology, governance and compliance play a central role. Frameworks such as the NIST AI Risk Management Framework and the OWASP Top 10 for LLMs can provide direction for a CISO, while incident response plans must adapt to AI-specific threats.”
There’s also plenty for security teams to do to mitigate the threat of AI-powered attacks, argues Acronis CISO Gerald Beuchelt. He cites an increase in social engineering and BEC attacks, which rose from 20% of detections between January and May 2024 to 26% a year later – most likely due to AI use.
“The truth is that people can’t keep up with the volume and speed of today’s threats. We need AI to fight AI,” he tells Assured Intelligence. “What works is using and training AI tools to spot patterns we’d never catch on our own, filter out the noise, and even kick off an automated response before a human steps in. That way, the focus can be on the real, high-risk problems instead of drowning in alerts.”
“Ultimately, securing the AI environment begins with the data pipeline” – Kevin Curran
AI can also help on the people and process side, Beuchelt adds.
“AI should be a partner that makes your security team smarter and faster, not something that replaces their judgment,” he says. “As capabilities advance, agentic AI will bring more autonomy, which makes setting clear objectives and maintaining oversight even more important to ensure it strengthens the team rather than creating new risks.”
IEEE’s Curran agrees that AI tools will help reduce alert fatigue and free up analysts for complex cases, while scaling anomaly detection and triage to meet a surge in automated threats.
“As part of scaling their response, security leaders should also expand their threat intelligence sharing and collaboration,” he continues. “Since AI risks evolve quickly, no single company can keep pace alone. Cybersecurity departments should plug into sector-wide information exchanges, ISACs, and open-source intelligence on AI-specific vulnerabilities.”
Governance and workforce changes are another important consideration.
“Departments must train staff on AI-specific risks, develop playbooks for AI-related incidents, and adopt frameworks like NIST’s AI RMF or the EU AI Act for structured oversight,” says Curran. “In combining automation, proactive governance, and strong collaboration, CISOs can expand their capacity to match the speed and complexity of AI-driven threats.”
The NCSC is right to highlight the threat to CNI, argues SentinelOne’s Hosking.
“CNI presents a uniquely attractive target because any disruption has disproportionate consequences, and AI increases both the attack surface and the pace of attacks,” he argues.
“CNI presents a uniquely attractive target because any disruption has disproportionate consequences” – Chris Hosking
“Misconfigurations, leaked credentials or insecure endpoints can be exploited faster and at greater scale by adversaries using AI. At the same time, many CNI operators face budget and staffing challenges. Without investment in AI-driven defences – such as posture management, automated detection and prioritisation – these organisations risk falling behind.”
IEEE’s Curran says the best way for CNI to close the digital divide here is via AI-specific threat modelling, real-time monitoring and zero-trust security.
“To further ensure resilience, organisations should focus on redundancy, manual overrides, and adversarial robustness training,” he adds. “Public-private partnerships with agencies like CISA and cybersecurity firms can further enhance threat intelligence sharing and help mitigate attacks.”
Acronis’s Beuchelt concludes on a note of cautious optimism.
“The good news is we’re not powerless. The same AI that criminals use can also help defenders spot unusual behaviour faster, automate routine checks, and strengthen resilience,” he argues.
“For CNI organisations, the focus has to be on building basic cyber hygiene, layering defences, and then looking at how AI can be deployed responsibly to extend limited resources. If we don’t keep pace, that digital divide the NCSC warns about will only get wider, and that’s not a risk any of us can afford when it comes to critical infrastructure.”