
Features 25.02.2025
How to Usher in AI Growth Without the Cyber Pain
The government’s new AI plan does little to address potential risks
What goes around, comes around. In his 1963 “White Heat of Technology” speech, Labour party leader Harold Wilson advocated modernising the British economy through investment in technology. After his party’s election the following year, he spent his first term delivering on those promises.
Over 60 years later, a new Labour party is once again pushing for technological transformation. In January, it announced the AI Opportunities Action Plan, a strategy to maximise AI’s potential for economic growth. But as the UK joins the US in embracing AI’s commercial possibilities, how can it balance opportunity with risk?
The UK’s plan makes bold commitments across three broad areas. The first prepares the ground for AI development by planning a 20-fold expansion of AI computing infrastructure within five years. This includes the development of a new national supercomputing facility.
Accompanying this is an investment in AI Growth Zones, which will benefit from streamlined planning permissions. There is a slew of third-party datacentre construction commitments, along with an AI Energy Council to help power them all. Also on the agenda is a centralised data repository – the National Data Library – which will support AI research.
“How can the UK balance opportunity with risk?”
The second pillar sees the expansion of AI services in the public sector, tasking each government department with finding areas where AI can streamline and improve public services.
Finally, a dedicated team will ensure that, once on track to become an AI superpower, the UK stays there by making it an attractive place for companies supporting the AI ecosystem to do business. This will include “guaranteeing firms access to energy and data”, according to a government missive.
Yet as the power of technology increases, so do its potential risks. Any plan advocating its rapid, widespread development and adoption should therefore also evaluate those risks.
If you asked ChatGPT to summarise the government’s approach here, it might well read “sit tight”. The Department for Science, Innovation and Technology (DSIT) will explain its approach to sustainability and security risk this spring, the plan says. Each department will talk to its regulator to work out what it plans to do and what it needs.
Among the 50 recommendations laid down by government AI czar Matt Clifford (all of which the government accepted), the word “risk” appears in just three. “Regulation” makes it into seven, but the language leans heavily towards using regulation to support innovation rather than to check it.
“Regulation or not, it’s important to recall AI’s risks and their potential reliefs”
It’s worth contrasting this language with that of the Biden administration’s now-rescinded October 2023 Executive Order, Safe, Secure, and Trustworthy Development and Use of AI. That order took an aggressive approach to AI regulation, setting clear guardrails both within the federal government and for foundational large language model (LLM) companies.
DSIT will rely heavily on what used to be called the AI Safety Institute (AISI) – which the plan says it will continue to fund – for its AI risk work. The AISI evolved from the Frontier AI Taskforce that Clifford helped design. It advises on best practices, and contributes to creating AI regulations that balance innovation and safety.
One problem now concerning experts is that the AI Safety Institute is no more. The government renamed it the AI Security Institute on February 14 this year. It will no longer address issues such as bias or misinformation, focusing instead on “serious AI risks with security implications”, such as fraud and child abuse.
The timing of this announcement was important. It was made just three days after the AI Action Summit in Paris, at which new US vice president JD Vance scolded the EU for its focus on “excessive regulation”. This was toxic for innovation, he warned, adding that AI should remain free from “ideological bias” and not serve as a tool for “authoritarian censorship”.
Vance’s speech followed a US about-face on AI policy. In January, President Trump rescinded Biden’s AI safety EO, replacing it with his own: Removing Barriers to American Leadership in Artificial Intelligence. This emphasises growth and innovation over regulation.
The US U-turn on AI worries Cody Venzke, senior policy counsel for the National Political Advocacy Division at the American Civil Liberties Union.
“This administration is full steam ahead with AI,” he tells Assured Intelligence, voicing concerns over reports that Elon Musk’s DOGE team is already feeding cloud-based AI systems data from government systems over which it has gained control.
“There’s no indication that the AI that is being used on these databases has gone through any sort of safety checks, and various DOGE-affiliated individuals such as those running the GSA have made it very clear this is going to be an AI-first administration,” he adds.
Regulation or not, it’s important to recall AI’s risks and their potential reliefs, says Dominique Shelton Leipzig, founder and CEO of legal advisory firm Global Data Innovation and a board member at the International Association of Privacy Professionals (IAPP).
“There are certain steps that are necessary to ensure that you’ll be known for your innovation and not known for some kind of AI incident that’s bad for growth, brand and reputation,” she tells Assured Intelligence.
Examples of companies failing to account for those risks are plentiful. She provides two – Rite Aid’s disastrous facial recognition system, which misidentified paying customers as shoplifters, and the disgruntled customer who got logistics firm DPD’s chatbot to swear and write disparaging poems about the company. No one wants such bad publicity.
How do organisations identify and manage AI risk to stop themselves hitting the headlines?
“Adversaries may attempt to influence AI models to generate misleading or harmful outputs” – Ken Huang
The risks are categorised in the Cloud Security Alliance’s LLM Threats Taxonomy, says Ken Huang, research fellow and co-chair of the organisation’s AI Safety Working Group. Incidentally, they include the deceptive content and bias that the UK’s AISI has now dropped from its remit.
Some of the risks that Huang highlights focus on attacking the AI models themselves.
“Adversaries may attempt to influence AI models to generate misleading or harmful outputs. This can result in financial losses, reputational damage and regulatory violations,” he says. One way they can do this is by poisoning the new data that existing AI models consume to expand their learning.
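To make the poisoning risk concrete, here is a minimal, hypothetical sketch of one common defence: screening new fine-tuning records against an allow-list of sources and a baseline label distribution before they ever reach the model. The record format, source names and thresholds are illustrative assumptions, not drawn from the CSA taxonomy or any particular product.

```python
# A minimal, hypothetical sketch of one data-poisoning defence: screening a batch of
# new fine-tuning records before they are added to the training set. The record
# format, thresholds and allow-list are illustrative assumptions.
from collections import Counter

TRUSTED_SOURCES = {"internal-crm", "support-tickets"}  # assumed allow-list
MAX_LABEL_SHIFT = 0.15  # assumed tolerance for label-distribution drift

def screen_batch(new_records: list[dict], baseline_label_freq: dict[str, float]) -> list[dict]:
    """Drop records from unknown sources and reject the batch if its label
    distribution drifts sharply from the trusted baseline (a crude poisoning signal)."""
    vetted = [r for r in new_records if r.get("source") in TRUSTED_SOURCES]

    counts = Counter(r["label"] for r in vetted)
    total = sum(counts.values()) or 1
    for label, baseline in baseline_label_freq.items():
        observed = counts.get(label, 0) / total
        if abs(observed - baseline) > MAX_LABEL_SHIFT:
            raise ValueError(f"Suspicious label shift for '{label}': {observed:.2f} vs {baseline:.2f}")
    return vetted

if __name__ == "__main__":
    baseline = {"legitimate": 0.9, "fraud": 0.1}
    batch = [
        {"source": "internal-crm", "text": "refund request", "label": "legitimate"},
        {"source": "unknown-scrape", "text": "buy now!!!", "label": "fraud"},  # dropped: untrusted source
        {"source": "support-tickets", "text": "card declined", "label": "legitimate"},
    ]
    print(screen_batch(batch, baseline))
```

In practice such checks sit alongside provenance tracking and human review; the point of the sketch is simply that new training data should be treated as untrusted input, not waved straight through.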
Another danger is data disclosure. Researchers have already persuaded LLMs to confess their secrets. Integration with poorly secured applications and APIs can exacerbate risks like these by expanding the attack surface.
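One way organisations lower that disclosure risk when calling hosted models is to strip obvious secrets from prompts before they leave the building. The sketch below is a hypothetical illustration of the idea; the regex patterns and the send_to_llm() stub are assumptions made for demonstration, not a real data-loss-prevention tool or any vendor’s API.

```python
# A minimal, hypothetical sketch of reducing data-disclosure risk when integrating a
# hosted LLM: redact obvious secrets and personal data from prompts before sending.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9_]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a placeholder before the prompt is sent out."""
    for name, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {name.upper()}]", prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    # Stub standing in for a call to an external LLM API (assumption).
    return f"(model response to: {prompt!r})"

if __name__ == "__main__":
    raw = "Customer jane.doe@example.com reports token sk_live_ABCDEF1234567890XYZ failing."
    print(send_to_llm(redact(raw)))
```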
“AI adoption often relies on third-party components, frameworks, and data sources,” warns Huang. “A compromised supply chain can introduce vulnerabilities that attackers exploit to compromise AI integrity.”
One part of that software supply chain is the LLMs themselves. Authorities ranging from South Korea to the State of New York are already moving to ban the hosted version of China’s DeepSeek LLM over security concerns, for example.
As the price of producing foundational LLMs continues to drop – even DeepSeek is now getting the DeepSeek treatment – these models will become more like commodities, experts warn. Leipzig says the way for them to differentiate themselves is through greater accuracy and better support for reliable enterprise integration. Companies will pay top dollar to work with AI models that won’t embarrass them, or worse.
Understanding these risks is just one part of a responsible organisation’s job; the other is addressing them. In her book Trust. Responsible AI, Innovation, Privacy and Data Leadership, Leipzig breaks down AI risk management into five areas, based on the TRUST acronym:
Triaging: Evaluating each AI risk against the specific use case, so that the appropriate ones can be mitigated.
Righteous data: Ensuring that the application uses high-quality, responsibly managed data for AI training.
Uninterrupted testing: Constant monitoring and auditing of AI output to spot emerging issues, such as the degradation of an AI model (a minimal sketch of this step follows the list).
Supervision: Keeping a human in the loop to oversee and correct any such issues with the AI application.
Technical documentation: Documenting systems for transparency and accountability.
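As a rough illustration of the “Uninterrupted testing” step, the sketch below replays a fixed evaluation set against a deployed model and escalates to a human supervisor when accuracy drifts below a recorded baseline. The evaluation cases, the model_predict() stub and the tolerance are all assumptions made for the example; Leipzig’s book does not prescribe this specific mechanism.

```python
# A minimal, hypothetical sketch of "Uninterrupted testing": re-running a fixed
# evaluation set against a deployed model and flagging degradation for human review.
EVAL_SET = [
    {"prompt": "Is a refund due after 30 days?", "expected": "no"},
    {"prompt": "Is a refund due after 10 days?", "expected": "yes"},
    {"prompt": "Can orders be cancelled before dispatch?", "expected": "yes"},
]
BASELINE_ACCURACY = 1.0   # accuracy recorded when the model was signed off
MAX_DEGRADATION = 0.05    # assumed tolerance before a human is alerted

def model_predict(prompt: str) -> str:
    # Stub standing in for a call to the deployed model (assumption).
    return "yes" if "10 days" in prompt or "before dispatch" in prompt else "no"

def audit_model() -> None:
    correct = sum(model_predict(case["prompt"]) == case["expected"] for case in EVAL_SET)
    accuracy = correct / len(EVAL_SET)
    if accuracy < BASELINE_ACCURACY - MAX_DEGRADATION:
        print(f"ALERT: accuracy fell to {accuracy:.0%} - escalate to a human reviewer")
    else:
        print(f"OK: accuracy {accuracy:.0%} within tolerance")

if __name__ == "__main__":
    audit_model()  # in practice this would run on a schedule, not once
```

The design choice worth noting is that the alert hands control back to a person, which is exactly where the framework’s Supervision step picks up.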
Chris Dimitriadis, chief global strategy officer at IT governance association ISACA, argues that human skills are critical to help minimise AI risk. Not only should cybersecurity experts be involved at every stage of AI development and deployment, but everyday non-technical staff should also be educated in how to use it appropriately.
“Our research found that despite nearly three quarters of European organisations reporting that their staff use AI at work, only 30% provide training to employees in tech-related positions, while 40% offer no training at all,” he tells Assured Intelligence. “To improve privacy and security while using AI programs, widespread education on the security risks is imperative – as well as formal training and clear company policies for AI use in the workplace.”
While Europe doubles down on AI regulation, the White House appears to have left responsible AI firmly in the rear-view mirror. It remains to be seen just how hard the UK will hit the brakes as it accelerates towards AI innovation. With the road this uneven, it behoves organisations to take a clear-eyed look at their own risk tolerance. The race to AI is well underway, and there’s no time like the present.