Features 28.04.2023

Halt and Catch Fire: Why Pausing AI Development Could be a Cybersecurity Disaster

Calls to stop the training of artificial intelligence models could backfire big time. Hear from cybersecurity experts about why.

The rise of generative artificial intelligence could lead to more sophisticated cyber crimes. But calls to ‘pause’ the industry could make things even worse, according to cybersecurity experts.  Eugene Yiga reports

Generative artificial intelligence products that create pictures from words or write essays for lazy kids might seem innocent enough. Still, the fact that these tools can also create realistic fake images, videos, audio, and text is a concern. Indeed, as deepfakes become even more convincing, hackers could find it much easier to commit their crimes.

“ChatGPT has raised concerns about its impact on cybersecurity, particularly the way hackers can use it to carry out new cybersecurity attacks,” says Sivan Tehila, founder and CEO of Onyxia Cyber. “For example, an experiment conducted by Check Point demonstrated how, with a simple prompt, ChatGPT could generate a sophisticated phishing email with ease.”

For Dan Shiebler, Head of Machine Learning at Abnormal Security, large language models like GPT-4 present two types of severe and immediate risks. First, the models are powerful tools for spammers to flood the internet with low-quality content and for criminals to uplevel their social engineering scams. The second is that the models are too powerful for businesses not to use and too unpolished for businesses to use safely.

“The tendency of these models to hallucinate false information or fail to calibrate their certainty poses a major risk of misinformation,” Shiebler says. “Furthermore, businesses that employ these models risk cyber criminals injecting malicious instructions into their prompts. This is a major new security risk that most businesses are not prepared to deal with.”

Reminiscent of Einstein

Given all this, it’s no wonder that the Future of Life Institute is calling for a six-month pause on what it calls ‘giant AI experiments’. But, for Shiebler, the interesting thing about the open letter is how diverse the people that signed it and their motivations are.

Elon Musk has been pretty vocal in believing that artificial general intelligence (computers figuring out how to make themselves better and therefore exploding in capability) is an imminent danger, whereas AI sceptics like Gary Marcus are clearly coming to this letter from a different angle,” he says. “Personally, I don’t think this letter will achieve much. The cat is already out of the bag on these models. The limiting factor in generating them is money and time, and both of these will fall rapidly. We need to prepare businesses to use these models safely and securely, not try to stop the clock on their development.”

As deepfakes become even more convincing, hackers could find it much easier to commit their crimes

Barmak Meftah, co-founder and general partner of Ballistic Ventures, agrees. While he admits that “there’s a strong case to be made in support of the letter,” he also finds it hard to imagine how a six-month pause would be applied or controlled. Government regulation? A commercial AI consortium that sets the rules of engagement? Or something else entirely?

“It’s a beautiful sentiment, reminiscent of Einstein’s effort to rally scientists around the threats of nuclear weapons over 75 years ago,” says Roger Thornton, a co-founder and general partner of Ballistic Ventures. “Frankly, I would start with eliminating texting and driving before AI. The reality is that it’s never going to happen. If some small subset agrees to it, that’s wonderful. Merciless competitors and nefarious actors are not going to slow down. All this would mean the kind-hearted people who support this would find themselves behind.”

Protect .vs. exploit arms race

Indeed, even if a coalition of today’s leading AI companies agrees on a pause, it can’t be guaranteed or even enforced across geopolitical boundaries, let alone the darker corners of the internet. This could give cyber criminals, nation-states, and other bad actors a significant leg up. According to John Ayers, VP of Offensive Security at Cyderes, it would offer “a potential advantage their legitimate counterparts frankly cannot afford”.

“Artificial intelligence enables meaningful advancements, but its adoption across all industries and touching most products increases the complexity of the technological landscape and thus opens up myriad new ways to attack,” he says. “In the background of AI innovation is an arms race between protecting the technology-driven status quo and exploiting it, as threat actors can weaponise technology almost as fast as it emerges. If the business world takes a timeout on its AI experiments, this does not mean cyber threats will do the same.”

Since the criminals don’t follow national cyber crime laws and international norms that aim to prioritise cybersecurity over cyber attacks, they aren’t going to slow down anytime soon. And since they are already sophisticated regarding things like business email compromise, vendor invoice fraud, and executive impersonations, we can expect them to become even savvier when working with AI, especially given that Meta’s LLaMA model was leaked online.

It’s going to be necessary to deploy AI in defence to detect AI-generated phishing emails

What does this mean? That we’re about to be inundated with AI-driven phishing campaigns and AI-generated malware that could run circles around traditional anti-malware technology, leaving security teams overwhelmed.

“It’s going to be necessary to deploy AI in defence to detect AI-generated phishing emails and malware, to detect anomalous activity on networks, and to provide the opportunity to respond quickly,” says Andrew Robinson, co-founder and chief security officer at 6clicks. “If defenders pause for six months, it’s unlikely we will be prepared for the impending onslaught.”

Zero evidence of safety controls

So, what’s the solution? As Turing Prize winner Yann LeCun said in a recent webinar, it’s about regulating products, not research. For Tehila, that means practical things like ensuring new models are released securely, regularly updating security protocols based on new AI capabilities, and adopting AI-based platforms and protection systems to counter the evolving threat landscape effectively. In some cases, it might mean going even further.

“ChatGPT has been acquiring user data with zero evidence of any privacy or safety controls in place to protect those users,” says Davi Ottenheimer, VP of Trust and Digital Ethics for Inrupt. “We’ve seen Italy ban this tech and Replika as well, so the company was clearly processing user data in unsafe ways. While development may not pause despite calls for it, a temporary ban is still necessary and useful, even with widespread VPN access.

The extra step shifts a lot of the risk and burden. In other words, someone going to the trouble of setting up a particular and safe environment to access the service is the opposite of what we have seen with ChatGPT, which is people being exposed far too easily to unsafe AI without having any idea of how dangerous it is.”

“It’s as if stakeholders didn’t listen to everyone shouting from the hilltops that safe learning is critical to a healthy society” Davi Ottenheimer

AI and centralised data storage often overlook user privacy and control. That needs to change. But, despite the potential dangers, regulation in the US has been slow to address the risks posed by OpenAI. That’s why it’s crucial to involve key stakeholders early to ensure that AI is developed responsibly. This means collaboration between legal, security, engineering, science, and product leaders.

“ChatGPT only just clarified that it would not use data submitted by customers to train or improve its models unless they are expressly opt-in,” Ottenheimer says. “Being so late shows a serious regulation gap and an almost blind disregard for the planning and execution of data acquisition. It’s as if stakeholders didn’t listen to everyone shouting from the hilltops that safe learning is critical to a healthy society. But we tend to see an innovation boom that comes along with regulation, so the right stakeholders can bring appropriate guardrails to force better engineering.”

Another idea is for security teams and leaders to proactively implement CISA’s guidance to include monitoring for any shifts in online activity related to their business or sector. Innovators could also seize the opportunity to unlock the potential of new technical solutions: frameworks like DISARM and tools like GPTZero can and should be integrated into cognitive security operations centres.

“Multi-disciplinary teams skilled in AI, cognitive security, threat hunting, counterintelligence, and other fields can find creative ways to use these tools to detect misinformation, disinformation, and malinformation campaigns,” says Sean Guillory, Senior Robotics Process Automation Bot Developer at Booz Allen Hamilton. By the way, if the concept of malinformation is new to you, it’s the deliberate publication of private information for personal or private interest and the intentional manipulation of genuine content. “Building such capabilities now would empower policymakers, private-sector entities, and society at large to resist and counter threats more effectively in the future.”

A “cybersecurity disaster”

The bottom line is that calls for pausing AI development may be well-intentioned, but it risks creating a cybersecurity disaster if legitimate actors fall behind their criminal counterparts. Indeed, rather than stopping the clock on AI development, our challenge is to harness its power responsibly, securely, and ethically. So, let’s focus on regulating AI products, fostering responsible AI development, and deploying robust security measures to counter evolving threats. That’s something we can all agree is a good thing; no signatures to an open letter required.

Latest articles

Be an insider. Sign up now!