As artificial intelligence (AI) tools are continuously improving how they imitate human behaviour, cyber criminals are simultaneously exploiting the technology to employ advanced psychological manipulation tactics, explains Samudhra Sendhil…
With nearly half (49%) of cybersecurity professionals concerned about ChatGPT's potential to help hackers improve their technical knowledge and develop specialised skills, it's clear that this technological evolution has a dark side.
There’s no denying that AI tools enable a new generation of fraudsters to deploy highly convincing tactics. AI-enabled scammers can pose as real people or brands, communicating near-flawlessly with the voice and characteristics of the person or organisation they’re impersonating.
The risk is exceptionally high right now because the (increasingly subtle) warning signs aren’t as well known and are less widely reported. By now, most people know not to click links in emails from people they don’t know. But the tell-tale signs of generative AI-powered social engineering attacks aren’t yet common knowledge.
AI is supercharging social engineering attacks
With that in mind, let's look at some common methods and techniques cyber criminals use in their social engineering attacks.
- Data scraping
Data scraping is the automated extraction of information from websites. Scammers have long used automated tools to collect various forms of valuable data, including text, images, and links. Now, however, they can also employ intelligent AI algorithms to analyse that scraped data, allowing them to spot patterns and personal details that help them build tailor-made messages, offers and requests. What would typically take a human many hours can now be done by AI in minutes, creating honed messages that are much harder to identify as a scam.
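To make the mechanics concrete, here is a minimal sketch of automated data scraping using only the Python standard library. A real scraper would fetch pages over HTTP and crawl at scale; this example parses a small in-memory HTML snippet so it stays self-contained, and the sample page and class names are hypothetical.

```python
# Minimal illustrative scraper: collects hyperlinks and visible text
# from an HTML document using only the standard library.
from html.parser import HTMLParser

class LinkAndTextScraper(HTMLParser):
    """Gathers href targets and stripped text chunks as the page is parsed."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text_chunks = []

    def handle_starttag(self, tag, attrs):
        # Record the destination of every anchor tag.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        # Keep only non-empty visible text.
        cleaned = data.strip()
        if cleaned:
            self.text_chunks.append(cleaned)

# Hypothetical page content, standing in for a fetched profile page.
sample_page = """
<html><body>
  <p>Jane Doe works at Example Corp.</p>
  <a href="https://example.com/profile/jane">Profile</a>
</body></html>
"""

scraper = LinkAndTextScraper()
scraper.feed(sample_page)
print(scraper.links)        # ['https://example.com/profile/jane']
print(scraper.text_chunks)  # ['Jane Doe works at Example Corp.', 'Profile']
```

The point is how little code the extraction step takes; the AI-driven part described above sits downstream, mining the collected text for exploitable personal detail.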
- Sentiment analysis
Sentiment analysis is an advanced AI technique that detects emotional nuances in text. With this ability built into the process, scammers can identify keywords, phrases, and even writing styles that strike a chord with victims who are vulnerable or distressed. Cyber criminals can then craft messages that appear genuine and empathetic, taking advantage of unsuspecting individuals.
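A toy lexicon-based scorer illustrates the core idea behind sentiment analysis: mapping words to emotional weights and aggregating them. Production systems use trained language models rather than word lists; the tiny lexicon below is a hypothetical example for demonstration only.

```python
# Crude sentiment scoring: positive words add 1, negative words
# subtract 1. Real analysers are statistical models, not word lists.
POSITIVE = {"happy", "great", "secure", "trusted"}
NEGATIVE = {"worried", "scared", "urgent", "desperate", "lost"}

def sentiment_score(text: str) -> int:
    """Return a crude score: >0 positive, <0 negative, 0 neutral."""
    score = 0
    for word in text.lower().split():
        token = word.strip(".,!?")  # drop trailing punctuation
        if token in POSITIVE:
            score += 1
        elif token in NEGATIVE:
            score -= 1
    return score

print(sentiment_score("I am worried and scared, please help, it's urgent!"))
# -3: a strongly negative message, marking a distressed author
```

Even this crude signal shows how automation can flag distressed-sounding messages at scale, which is exactly the vulnerability-spotting capability the attack relies on.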
- Social media profiling
While the risks of an overly public social media profile are well publicised, many people remain lax with their privacy controls. Cyber criminals can use smart AI algorithms to dive deep into people's social media profiles, uncovering details about their lives, passions, and connections. As a result, they can create elaborate profiles of their victims, weaving together shared interests, common experiences, and mutual connections, which they can exploit.
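The cross-referencing step can be sketched very simply: once profile data has been collected, finding the shared interests and mutual connections that make a pretext convincing is little more than set intersection. The profiles below are fabricated examples.

```python
# Hypothetical scraped profile data for a victim and a fake persona.
victim = {
    "interests": {"hiking", "photography", "crypto"},
    "connections": {"alice", "bob", "carol"},
}
attacker_persona = {
    "interests": {"photography", "crypto", "cooking"},
    "connections": {"bob", "dave"},
}

# Overlaps become the raw material for a tailored approach:
# "We both know Bob and we're both into photography..."
shared_interests = victim["interests"] & attacker_persona["interests"]
mutual_connections = victim["connections"] & attacker_persona["connections"]

print(sorted(shared_interests))    # ['crypto', 'photography']
print(sorted(mutual_connections))  # ['bob']
```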
- Deepfake personas
Generative AI is accelerating the volume of deepfakes (AI-generated false videos). Deepfake detection firm DeepMedia estimates that some 500,000 deepfake videos and voice recordings will be posted globally by the end of 2023. These digital doppelgangers look and sound just like real people, engaging victims in conversations that feel strikingly genuine.
UK finance expert Martin Lewis was recently the target of an AI-powered scam. In a widely circulated ad on social media, a deepfake video impersonating Martin attempts to solicit money for a supposed investment scheme. Playing on authority bias, the cyber criminals sought to exploit his followers and extract money. It's now difficult to tell whether the person on the other side of the screen is a trusted friend or high-profile finance expert, or just a scammer. The lines blur, and scepticism wavers.
Two tools to defeat AI-enabled cybercrime
We’re at the beginning of a steep learning curve and a new, unpleasant era in cyber crime. But we do have the tools to fight back in terms of both technology and public information.
The technology used to execute next-gen social engineering attacks can also be deployed to identify them before they succeed. Generative AI has great potential as a powerful defence strategy, combined with invaluable human judgment to thwart malicious intent. Advanced threat intelligence systems and behaviour-based analytics can proactively detect and mitigate the risks AI-generated content poses.
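One simple form of the behaviour-based analytics mentioned above is baseline anomaly detection: flagging activity that deviates sharply from a user's historical pattern. Real systems model many signals at once; this sketch uses a single z-score on daily message counts, and all of the numbers are hypothetical.

```python
# Hedged sketch of behaviour-based anomaly detection: compare new
# activity against a statistical baseline built from past behaviour.
import statistics

# Hypothetical baseline: messages sent per day over the past week.
baseline_daily_messages = [12, 15, 11, 14, 13, 12, 16]
mean = statistics.mean(baseline_daily_messages)
stdev = statistics.stdev(baseline_daily_messages)

def is_anomalous(count: int, threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations above the mean."""
    return (count - mean) / stdev > threshold

print(is_anomalous(14))  # False: within the normal range
print(is_anomalous(90))  # True: could indicate an automated campaign
```

A spike like the second case might reflect an AI-generated phishing run from a compromised account; the value of the approach is that it does not need to understand the content, only the deviation in behaviour.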
User education must also take centre stage. Technology leaders must equip individuals with the knowledge and awareness to recognise and report suspicious activities. Training users in cybersecurity best practices and fostering a culture of ongoing education helps cultivate a resilient and vigilant community.
The more comprehensively we grasp the psychological strategies used by cyber criminals, the more effectively we can safeguard those who are most susceptible. To do this, researchers, industry professionals, and policymakers must join forces to take on the ever-evolving threat landscape. We can cultivate a united front against AI-powered social engineering attacks by sharing knowledge, expertise, and resources.
Samudhra Sendhil is an enterprise analyst at ManageEngine. Samudhra has worked in various parts of the world, including the United States, the United Kingdom, South Korea, and Malaysia. This multicultural background equips her with a holistic understanding of the enterprise landscape, enabling her to conduct nuanced analysis and effectively communicate ideas.