Features 06.02.2024
ChatGPT: Six Ways Generative AI Will Impact Cybersecurity in 2024
It’s been more than a year since ChatGPT burst into the mainstream, but what’s the impact on cybersecurity?
It’s hard to believe that the generative AI tool ChatGPT burst into the mainstream more than a year ago. While artificial intelligence (AI) has been around for a long time, the launch of ChatGPT made waves because it’s efficient, free, and easy to use. Unsurprisingly, chat-based generative AI has since become the underlying technology behind a wave of new products and services.
But with new and exciting technology inevitably comes danger. Cyber criminals are finding new ways to use ChatGPT and its generative AI peers to launch cyber attacks and amplify their impact.
We’ve broken down the reality of ChatGPT, including the good, the bad, and the ugly. Read on to understand three ways generative AI will boost security in 2024 and three ways it will be a threat.
1: Quickly analyse data to thwart attacks and predict threats
According to Jake Moore, global cybersecurity advisor at ESET, generative AI has an “exceptional ability” to process and analyse vast quantities of data at speed “with a level of detail that is unattainable for human operators”. “The technology excels in identifying patterns and anomalies within data, which is critical for early detection of potential security threats,” he says.
As the technology improves, so do its adaptive learning capabilities, which Moore says will “play a crucial role” in 2024. “The technology is constantly evolving, and a continuous learning process ensures AI models remain up-to-date and effective against changing security threats.”
Expect to see more of this in 2024, with the tech increasingly able to predict future issues. “By anticipating and adapting to new types of attacks, generative AI provides a dynamic defence mechanism that also prepares for future threats,” Moore says.
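To make the idea concrete, here is a minimal sketch of the kind of pattern-and-anomaly spotting Moore describes, using scikit-learn’s IsolationForest. The feature set, baseline data and threshold are illustrative assumptions, not a production design:

```python
# Minimal anomaly-detection sketch: flag unusual login events.
# Each event is reduced to numeric features - (hour of day,
# KB transferred, failed attempts) - chosen purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy "normal" baseline: office-hours logins, modest transfer sizes.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(13, 2, 500),    # hour of day
    rng.normal(200, 50, 500),  # KB transferred
    rng.poisson(0.2, 500),     # failed attempts before success
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A 3 a.m. login with a huge transfer after repeated failures
# should score as anomalous.
suspicious = np.array([[3, 5000, 6]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

The point is not the toy model but the workflow: a system that has learned what “normal” looks like can flag that 3 a.m. login long before a human analyst would spot it in the logs.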
2: Powerful security tools and better insights
Generative AI will offer a boost to security tools, making them “a lot more powerful” in 2024, Gabriel Hopkins, chief product officer at Ripjar, predicts. “Generative AI can quickly scan infrastructure for potential weaknesses and entry points. It could also be trained on past vulnerable code examples and penetration-testing techniques.”
“They adapt and learn, identifying vulnerabilities a human might miss” Camden Woollven
AI-powered tools such as DeepHack and DeepExploit can help automate the “often tedious and complex process” of penetration testing, says Camden Woollven, group head of AI at GRC International Group. “They don’t just follow scripts; they adapt and learn, identifying vulnerabilities a human might miss.”
Better still, the nature of AI means it can run constantly in the background. “This means security professionals’ time can be shifted and manual tasks reduced,” says Hopkins.
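As a toy illustration of that always-on scanning (the internals of tools such as DeepHack and DeepExploit aren’t public, so this is a hedged sketch of the general idea rather than how they work), here is a background loop that checks a host for ports open outside an approved baseline:

```python
# Toy background scanner: periodically checks which of a host's ports
# answer, and reports anything outside an approved baseline.
# The host, port range and interval are illustrative assumptions;
# only scan hosts you own or are authorised to test.
import socket
import time

HOST = "127.0.0.1"
EXPECTED_OPEN = {22, 443}        # approved baseline for this host
PORTS_TO_CHECK = range(1, 1025)

def open_ports(host, ports, timeout=0.2):
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                found.add(port)
    return found

while True:
    unexpected = open_ports(HOST, PORTS_TO_CHECK) - EXPECTED_OPEN
    if unexpected:
        print(f"ALERT: unexpected open ports on {HOST}: {sorted(unexpected)}")
    time.sleep(3600)  # re-scan hourly, quietly, in the background
```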
3: Helping to plug the talent gap
It’s no secret that there’s a skills shortage in cybersecurity, and experts say generative AI will help plug this gap in 2024. The technology will boost security culture, initiating a drive towards self-service cybersecurity, says David Corlette, vice president of product management at VIPRE Security Group.
“At the moment, if an employee receives an email containing a questionable link or attachment, the only option is to forward it to the infosec team and wait for them to find time to look at it,” explains Corlette.
It’s annoying for everyone involved, which is why the process will change in the future, he predicts. Soon, it’ll be possible for employees to query an embedded generative AI tool in their own words and language to check if a link or attachment is malicious. “They will receive the answer in a concise, easy-to-understand way, and the burden on the infosecurity team would be reduced,” he says.
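A self-service check of the kind Corlette describes might be wired up roughly like this sketch, which sends a suspicious email to a large language model for a plain-language verdict. The OpenAI client, model name and prompt wording here are illustrative assumptions, not the embedded tool he has in mind:

```python
# Sketch of a self-service "is this email safe?" helper built on an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and system prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(suspicious_email: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You help employees spot phishing. In two or three "
                         "plain sentences, say whether this email looks "
                         "malicious and why. Never tell the user to click "
                         "anything in it.")},
            {"role": "user", "content": suspicious_email},
        ],
    )
    return response.choices[0].message.content

# Defanged URL ("hxxp") so the sample itself is safe to handle.
print(triage("Your parcel is held. Pay the fee at "
             "hxxp://royal-mail.example-delivery.xyz"))
```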
Cybersecurity teams can use similar capabilities to automate time and resource-intensive processes, says Corlette. “Processes that today are very manual – such as uploading potentially malicious samples to online analysis tool VirusTotal – will soon be accessible by asking an AI bot to do the research for you.”
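That manual VirusTotal step is already scriptable today, and it is exactly the sort of call an AI bot would wrap. Here is a minimal sketch using VirusTotal’s public v3 REST API to look up an existing report by file hash (the file path and environment variable are illustrative assumptions):

```python
# Look up an existing VirusTotal report for a file by its SHA-256.
# Uses VirusTotal's public v3 API; the VT_API_KEY variable and the
# file path are illustrative placeholders.
import hashlib
import os
import requests

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

file_hash = sha256_of("suspect_attachment.bin")
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{file_hash}",
    headers={"x-apikey": os.environ["VT_API_KEY"]},
)
if resp.status_code == 200:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"Engines flagging file as malicious: {stats['malicious']}")
else:
    print("No existing report; the sample would need uploading for analysis.")
```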
But every coin has two sides. Here are three ways generative AI will strengthen attackers in 2024.
1: Better, scarier deepfakes
Recent advancements in AI-powered tools have led to a resurgence in deepfakes. Moore says it is now easier than ever to manipulate images, videos and sound to powerful effect. “Using this technology to mimic well-known political figures is extremely easy, and content can be made in little to no time,” he says.
“There is the potential that any criminal group or nation-state will be able to create deepfake media to meddle with the many general elections going on over the next 12 months” Jake Moore
The technology can also spoof company CEOs: voice-cloning attacks have seen firms lose millions of pounds.
Deepfakes currently look impressive, but a noticeable flaw often gives the game away, such as a strange head movement or a glitch in the sound. However, as the technology advances, the fakes are becoming increasingly hard to identify, says Moore. “Distinguishing between real and fake has become more challenging – especially in political situations or when there is a more powerful narrative or agenda to prove.”
With better deepfakes comes the risk that attacks will ramp up in 2024, Moore predicts. “There is the potential that any criminal group or nation-state will be able to create deepfake media to meddle with the many general elections going on over the next 12 months,” he warns.
2: AI-enhanced malware
Generative AI helps improve software development, but attackers can also use it to write malware. A whole business model is starting to emerge on underground forums.
Take, for example, WormGPT. Available on the dark market, WormGPT is a chatbot interface designed to simplify malware generation. Much like the ransomware-as-a-service model that makes complex attacks available to just about anyone, generative AI enables the concept of malware-as-a-service. This “is essentially malware as self-service”, making it easily accessible to criminals, says Greg Day, SVP and global field CISO at Cybereason.
Generative AI can enhance malware so it’s stealthier and less likely to be noticed until it’s too late. Woollven cites the example of DeepLocker, an IBM Research proof of concept for malware that uses AI to conceal its intent until it reaches the intended target. “For a business, this could mean seemingly benign software running on the network is a ticking time bomb, waiting for the right moment to strike.”
3: Convincing phishing attacks
One of the most talked-about criminal uses of generative AI is email phishing, where attackers use the technology to enhance attacks.
The figures say it all: there has been a 1,265% increase in malicious phishing emails since the launch of ChatGPT in November 2022, according to SlashNext’s State of Phishing 2023 report. As the technology improves, you’ll need to be more suspicious and sceptical of incoming emails than ever before.
Generative AI helps adversaries to craft “really convincing, personalised phishing messages”, says Dr Farshad Badie, vice-dean of the faculty of computer science and informatics at the Berlin School of Business and Innovation. “It copies how people talk and think, making deceptive content that slips past normal security filters.”
The technology also lowers the technical barrier to creating convincing profile pictures and helps write “impeccable text”, says James McQuiggan, security awareness advocate at KnowBe4.
In 2024, cyber criminals will use AI to craft “frighteningly convincing” emails, bypassing spam filters and tricking recipients into disclosing sensitive information, says Woollven. “These messages will be tailored to individual targets, using data scraped from social media and other sources to increase their effectiveness.”
One year on, ChatGPT and its generative AI peers are set to boost efficiency and your security too, but only if you approach the technology correctly. At the same time, recognise that generative AI will be used in ever more cyber attacks in 2024. Train staff regularly on the changing threats, and ensure you have the tools and strategies in place to keep your business safe.