
Blogs & Opinions 19.06.2025
Global Cyber Scam Networks Require New Approaches
As cyber threats grow more sophisticated, resilience requires more than just technology
“It spreads like a cancer,” were the words of the United Nations Office on Drugs and Crime’s (UNODC) regional analyst, John Wojcik, last month, referring to the global expansion of cyber scam networks. His comments followed the release of a new UN report into the issue. The report, and Wojcik’s striking words, made headlines, but many of us with lengthy tenures in the cybersecurity industry were unsurprised.
Gone are the days when cyber scammers launched isolated attacks. Today's attackers operate as industrialised, well-funded enterprises in a digital domain where jurisdictional borders have become meaningless. They aggressively target individuals and organisations worldwide, using sophisticated tactics such as social engineering and emotional manipulation, and leveraging technology to amass vast wealth.
Facing up to this new reality demands a shift in cybersecurity best practice – away from purely technical defences and towards a comprehensive approach centred on human awareness.
The growth of international cybercrime rings has been driven largely by the availability of Cybercrime-as-a-Service (CaaS) models. Sophisticated attack tools that once required extensive technical knowledge are now available through subscription services, one-time purchases, or profit-sharing arrangements.
Criminal marketplaces now sell everything from phishing kits and ransomware subscriptions to access to compromised corporate networks. These offerings are packaged much like legitimate corporate software, complete with intuitive user interfaces, tutorials and even customer support.
As a result, the barrier to entry for cyber crime has dropped dramatically. Criminals can now operate across borders, stay anonymous, and face minimal risk of being identified or held accountable.
While ransomware and phishing may be lucrative revenue streams for these gangs, business email compromise (BEC) is the golden goose. The FBI’s latest annual Internet Crime Report highlights just how profitable it is: of the $16.6 billion that Americans lost to cyber fraud in 2024, nearly $2.8 billion – roughly a sixth – was lost to BEC alone. Unlike ransomware, which requires complex technical infrastructure, BEC exploits human psychology and organisational trust, impersonating executives or vendors to trick victims into transferring money or sensitive data. The result is high success rates and massive payoffs with minimal technical effort.
Advances in AI – particularly in how it can be used to target the human element – are supercharging cyber scammers, and demand a fundamental overhaul of how we approach cybersecurity awareness.
Just as legitimate businesses use customer data to personalise offers and improve engagement, cyber criminals are now using AI to do the same, with far more sinister intent. Scam networks are leveraging AI to build detailed victim profiles, making their attacks more convincing, more targeted, and far harder to detect.
A study published in Harvard Business Review details how, when used for phishing attacks, AI enables attackers to craft hyper-personalised messages for specific job roles and functions, improving click-through rates and reducing suspicion. AI essentially eliminates the traditional trade-off between mass campaigns and effective targeting, enabling personalised attacks at scale.
Likewise, through the analysis of digital footprints, scammers can identify the specific fears or concerns of their targets and then craft content designed to trigger alarm or panic responses.
When these targeting capabilities work in concert with the latest deepfake audio and video technologies, the results can be devastating to individuals and organisations alike.
Early last year, UK engineering firm Arup lost £20 million to a deepfake scam impersonating senior management. And Arup is hardly the only business being targeted in this way. As its global CIO, Rob Greig, said at the time: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes. What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”
These AI-driven attacks are becoming the new standard, and if businesses and individuals are to protect themselves, cybersecurity strategies have to adapt.
Relying solely on technological defences is clearly no longer enough. Attackers are increasingly skilled at exploiting human vulnerabilities, making it essential for organisations to build cyber resilience that extends beyond the security team. This means equipping every user, not just with rules, but with a real understanding of attacker tactics and how to respond.
Resilience comes from more than just awareness: it requires continuous education, practical training, and empowering people to act securely in their roles and personal lives. Only then can we begin to reduce the impact of social engineering and digital fraud.
Effective approaches for businesses include continuous education, practical training grounded in real attacker tactics, and a culture that empowers people to question and report suspicious requests.
As cyber threats grow more sophisticated, resilience won’t come from technology alone. It will come from people – empowered, informed, and united by a shared sense of responsibility. The organisations that lead tomorrow will be those that invest in their human firewall today.