
Features 04.03.2025
Is It Time to Worry About GenAI? How to Keep Your Business Safe
GenAI offers risk as well as opportunity for organisations
Features 04.03.2025
GenAI offers risk as well as opportunity for organisations
Of all the new technologies emerging in recent years, generative artificial intelligence (GenAI) has arguably been the most revolutionary. The success of platforms like ChatGPT rocketed the technology into the mainstream, normalising everyday use. We now live in a world of prompts and AI-generated content.
However, with every advancement like this, there’s a darker side. Already, the FBI has warned of a surge in fraud driven by GenAI, and companies are also in the crosshairs. Fortunately, there’s already a lot that organisations can do to mitigate GenAI risk.
Is GenAI even worth the risk? If it’s a threat, why not ban the technology altogether? Unfortunately, this isn’t a realistic option, according to Marc Lueck, CISO in residence at Zscaler EMEA.
“When GenAI initially became popular a couple of years ago, organisations’ first response was to block its use to stop harm,” he tells Assured Intelligence. “However, this initial reaction became untenable quite fast and organisations today have to ensure that GenAI is [safe to use], rather than blocking it.”
In other words, GenAI is already here, and so widespread that the only option is risk management, not banishment. Among the biggest threats are:
“The success of platforms like ChatGPT rocketed the technology into the mainstream, normalising everyday use”
Prompt leaks: When the input to a GenAI system is exposed. For example, if an employee uploads a series of classified documents to a large language model (LLM), and a bad actor gets hold of this prompt, they’ll have access to potentially sensitive information.
Theft of LLM training data: Companies will often use their own proprietary datasets in LLMs, ensuring the GenAI model is tailored specifically to their needs. If attackers access this data, it could be used in follow-on fraud or for extortion.
Data poisoning: Malicious actors manipulate training data, in order to sabotage the system, cause disruption to the service or force its outputs to align with the hackers’ goals.
Prompt injection attacks: Adversaries craft malicious inputs disguised as regular prompts, in order to bypass built-in safety guardrails. This could lead to a gamut of issues, including leaking data or executing unauthorised commands.
Social engineering and deepfakes: Here, GenAI is a tool for attack rather than the target itself. It empowers threat actors with the ability to impersonate others, in targeting employees or consumers. Threats range from business email compromise (BEC) attacks to large-scale deepfake scams on social media.
There are several ways organisations can use technology to reduce the threat of GenAI. The first, and broadest point, is to stay on the bleeding edge of innovation.
“Businesses need to upgrade their protections at pace and ensure that they are keeping up to date with evolving methods of attack from bad actors,” says Chris Dimitriadis, chief global strategy officer at ISACA, a global association for cybersecurity professionals.
In the case of issues such as prompt injections, Zscaler’s Lueck suggests adopting new security tools with the ability to do prompt recording or prompt management “to help to support employees to shape good prompts and stop bad prompts from happening.”
Alongside this, Lueck argues that if organisations are concerned about their data being held in models they don’t have full control over and want “a more nuanced level of control”, they could decide to “create their own GenAI application”.
ISACA’s Dimitriadis is clear about the importance of AI governance and employee awareness training.
“All decisions involving AI need to be vetted to ensure they comply with in-market regulations, which are designed to protect business and industry from the dark side of AI,” he tells Assured Intelligence.
“It must be an enterprise-wide effort, with the C-Suite wrapping the need for AI governance, cybersecurity defences, and skills provision into their core business strategies.”
Manoj Bhatt, a cybersecurity consultant who has worked with Accenture and the Ministry of Justice, warns that more GenAI capabilities are being built into third-party software.
“This is where organisations have to become more effective in training employees to jump over the suspension of disbelief bridge”Quentyn Taylor
“Often these are automatically enabled via backend toggles, which security teams may not be aware of,” he tells Assured Intelligence. “Processes should be put in place to ensure that software is checked regularly for this, to ensure sensitive data isn’t being uploaded into GenAI systems, or new and unknown attack vectors have opened up.”
Cyber Security Unity founder, Lisa Venture, adds that when it comes to threats like prompt injection, it’s crucial to implement clear usage policies, outlining precisely when and how these tools can be used. Building on this – and with an eye on theft of LLM training data – she tells Assured Intelligence that “organisations should enforce confidentiality agreements for all personnel and partners with access”.
When it comes to data poisoning, she suggests that “businesses should adopt strict validation processes for all incoming datasets and periodically audit them for anomalies.”
Combined, this set of processes – which involve clear usage policies, confidentiality contracts, and auditing strategies – should help to reduce GenAI’s threats. But there’s more.
One of the largest attack vectors for cyber-criminals is people – and GenAI is no different. So how can organisations ensure staff are equipped to handle the technology?
“The C-Suite must wrap AI governance, cybersecurity defences and skills provision into their core business strategies”Chris Dimitriadis
Ventura believes that “companies should focus on employee awareness through security training”. This should cover a wide range of elements, such as how to design secure prompts and the ability to spot potentially compromised data.
ISACA’s Dimitriadis agrees.
“The key to doing this lies in skills,” he says. “Failure to [train everyone] will leave pockets of vulnerability in even the strongest of business defence networks.”
This is especially important when it comes to deepfakes and social engineering.
Quentyn Taylor – senior director of product, information security, and global incident response at Canon – says people need to be prepared for the onset of increasingly realistic synthetic content.
“This is where organisations have to become more effective in training employees to jump over the suspension of disbelief bridge,” he tells Assured Intelligence. “It needs to get the human being into a mental state where they’re not naturally accepting [what they see].”
There’s no doubt that GenAI is an emerging risk to business and something organisations should prepare for, but many of the experts Assured Intelligence spoke to urge balance.
“While it’s easy to recognise the risks associated with GenAI, it’s important to note that the majority of current security incidents do not yet involve AI,” Taylor says. “This is because attackers find traditional methods like purchasing credentials on the dark web or phishing to be highly effective.”
GenAI is here to stay, but organisations that are focused and prepared shouldn’t need to rewrite the rules of security to mitigate the associated risks. This is a time for evolution, not revolution.