Features 18.03.2026
AI Autopsy: Vibe Coding Fuels Global FortiGate Compromises
Why CISOs must ruthlessly enforce the fundamentals
A new wave of cyber attacks has demonstrated that you don’t need a zero-day vulnerability to compromise hundreds of enterprise networks. You just need a generative AI (GenAI) assistant and a bunch of targets with sub-par security posture.
This wasn’t a highly sophisticated advanced persistent threat (APT) group with state-sponsored resources. It was an opportunistic, financially motivated actor who utilised commercial large language models (LLMs) to scale a credential-based attack to unprecedented levels. The Russian-speaking threat actor compromised over 600 FortiGate devices across 55 countries in just five weeks, according to AWS.
The campaign highlights a critical shift in the modern threat landscape. The AI platforms the adversary used did not discover novel vulnerabilities or write complex exploits. Instead, they acted as a powerful force multiplier, providing an assembly line that helped a low- to medium-skilled attacker orchestrate parallel scanning, automate post-exploitation tasks, and dynamically generate lateral movement plans.
The campaign succeeded by systematically scanning for FortiGate management interfaces exposed to the internet. Using basic brute-force techniques against administrative and VPN accounts lacking multi-factor authentication (MFA), the attacker was able to extract device configurations. These configuration files were a gold mine of data, containing SSL-VPN user credentials with recoverable passwords, administrative credentials, internal network topologies, and LDAP bind configurations that stored passwords in an encrypted format.
“The big concern is how AI is now lowering the skill set required and barriers to entry for this type of attack”
Security researchers were able to track the full scope of this campaign after discovering a misconfigured server hosted in Switzerland that exposed the attacker’s entire operational toolkit. The open directory hosted 1,402 files across 139 subdirectories, containing stolen firewall backups, Active Directory mapping data, credential dumps, and AI-generated attack-planning documents.
To process this data at scale, the threat actor deployed a custom, Docker-based orchestrator written in Go, named CHECKER2. This tool was used to scan thousands of VPNs in parallel, with logs showing it evaluated over 2,500 potential targets across more than 100 countries. CHECKER2 automated the post-VPN reconnaissance workflow, mapping target networks, classifying them by size, and running service discovery with open-source tools such as the “gogo” port scanner and the Nuclei vulnerability scanner.
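CHECKER2 itself is written in Go and has not been published, but the parallel fan-out pattern the researchers describe is straightforward, and defenders can use the same pattern to audit their own estate for exposed management interfaces. The sketch below is illustrative only: the hosts, ports, and worker count are assumptions, not details from the campaign.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(targets: list[tuple[str, int]]) -> dict:
    # Fan out connection checks across a thread pool, mirroring how an
    # orchestrator evaluates thousands of endpoints in parallel.
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(lambda t: (t, is_reachable(*t)), targets)
    return dict(results)
```

Run against your own address space, anything that comes back reachable on a management or SSL-VPN port deserves immediate scrutiny: those are exactly the interfaces this campaign hunted for.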
What makes this campaign unique is how the reconnaissance data was used. The threat actor built a custom Python-based Model Context Protocol (MCP) server named ARXON. An MCP server acts as an intermediate layer that ingests raw data, feeds it into language models, and then uses the generated output to direct other tools. ARXON ingested the internal network data gathered by CHECKER2 and queried models such as DeepSeek and Claude to generate structured, step-by-step attack plans automatically.
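ARXON is custom tooling and has not been released, but the intermediate-layer pattern described above can be sketched in a few lines: ingest recon data, turn it into a model prompt, and parse the response into steps for downstream tools. All names here are hypothetical and the model call is stubbed out; a real MCP server would additionally expose tools and resources over the protocol itself.

```python
import json

def build_prompt(recon: dict) -> str:
    """Summarise recon data into a planning prompt for the model."""
    return "Plan next steps for network:\n" + json.dumps(recon, indent=2)

def query_model(prompt: str) -> str:
    # Stub standing in for an LLM API call (e.g. to DeepSeek or Claude).
    # Here we return a fixed, structured response for illustration.
    return json.dumps({"steps": ["enumerate hosts", "review auth logs"]})

def plan_from_recon(recon: dict) -> list[str]:
    """Full loop: recon data in, structured step list out."""
    response = query_model(build_prompt(recon))
    return json.loads(response)["steps"]
```

The point is how little glue is required: once reconnaissance output is machine-readable, chaining it through a model into an actionable plan is trivial, which is exactly why this pattern scales.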
The AI analysed the targets’ infrastructure and provided instructions for the next steps. These included how to gain domain admin access, identify credential search locations on domain controllers, find methods to identify IT staff, and exploitation steps for lateral movement. Researchers noted that the threat actor rapidly evolved their operations. In December 2025, they were using a publicly available MCP tool called HexStrike, but within eight weeks, they had transitioned to the fully automated, custom-built ARXON system.
Through “vibe coding”, the actor generated numerous scripts to parse configurations, extract credentials, and orchestrate their attacks. However, as Assured CISO Nick Harris points out, the real danger lies in how AI acts as an efficiency driver rather than a master developer.
“While the clickbait suggested AI was part of the attack path, this is an example of AI allowing for efficiencies in the process. The irony is that the forensics identified poor code quality, so while the attacker made life easy for himself with vibe coding, it might have caused issues later,” he tells Assured Intelligence. “What this vibe coding allowed wasn’t an AI-generated attack but standard automation via custom tooling … The big concern is how AI is now lowering the skill set required and barriers to entry for this type of attack, making these events ever more likely”.
“Every exposed edge that you have is actually a credential decision point” Rishi Kaushal
Rik Turner, chief analyst of cybersecurity at Omdia, notes the sheer scale of the threat. “GenAI helps with the three ‘vs’ of cyberattacks, namely their volume, velocity, and variety,” he tells Assured Intelligence, adding that actors can now rapidly iterate and evade defenders. “This is particularly handy in things like brute-force attacks against passwords, where it now takes a fraction of the time it took previously to change a single digit in a putative password.”
Rishi Kaushal, CIO at Entrust, agrees that AI has fundamentally shifted the economics of attackers. “The attackers didn’t become smarter. They just became faster and cheaper at the same time,” he tells Assured Intelligence.
Once inside the network, the attacker utilised their AI-generated plans to target Active Directory. They performed DCSync attacks and harvested NTLM hashes using open-source toolkits such as Meterpreter, Mimikatz, and Impacket. In some instances, the AI coding agent was even pre-approved via a .json configuration file to autonomously execute these offensive tools on the victim’s network, without requiring human approval for each command.
This automated lateral movement highlights a critical vulnerability in how organisations manage identity at the network edge. “Every exposed edge that you have is actually a credential decision point,” Kaushal explains. He stresses that a valid credential remains the fastest way into an enterprise, making it imperative to implement strong identity telemetry. Defenders must move beyond simple perimeter controls to spot unusual authentication patterns, enforce continuous monitoring, and manage complex certificate lifecycles. This is especially important as AI agents themselves begin to require distinct, manageable identities.
“The fundamentals actually matter even more in the era of AI-assisted attacks” Rick Turner
For Omdia’s Turner, this means a firm pivot toward zero trust principles. “I’ve long argued that zero trust is justified institutional paranoia, but the need for it increases manifold with AI in the picture,” he says. Turner suggests that traditional VPNs are no longer fit for purpose, and organisations should instead look to zero-trust network access (ZTNA) technology as a replacement.
He defines this defensive posture as “security against AI” – one of four distinct ways cybersecurity and AI interact.
“Security against AI … needs to be multifaceted, but at the very least there should be both stringent checks on the way in … and continuous monitoring of their activity once they’re in, with a view to detecting anything anomalous in their behaviour,” Turner explains. Because attackers can easily blend in with legitimate tooling by using compromised credentials, continuous authorisation is rapidly becoming a necessity, he adds.
The attacker’s AI-generated plans specifically targeted internal storage and backup infrastructure. Using custom PowerShell scripts and attempting to exploit known vulnerabilities, they sought to extract privileged credentials and execute ransomware precursors.
As Entrust’s Kaushal points out, ransomware operators target backups for two key reasons. They do so to harvest encrypted data for potential post-quantum decryption down the line, and to break an organisation’s business model by removing their ability to recover from an attack.
To counter this, “immutability is not an optional thing anymore. It’s a requirement”, Kaushal warns. Backups must be completely isolated, heavily encrypted, and run on entirely separate administrative identities and access controls. Crucially, the cryptographic keys for these backups must be managed separately to ensure that a total compromise of the storage environment does not also yield the keys.
Omdia’s Turner agrees, noting that organisations need to prepare for “hardened recovery”. This ensures that “the restored environment is itself secured, patched and configured to resist further attacks”. It disables unused services and removes default credentials to prevent reinfection during the recovery phase.
This global FortiGate campaign serves as a stark warning to CISOs. Despite the hype surrounding AI, the most severe risks currently stem from traditional security vulnerabilities and poor credential hygiene. As Turner argues, “the fundamentals actually matter even more in the era of AI-assisted attacks”.
Organisations running edge appliances must take immediate action by ensuring their management interfaces are strictly cordoned off from the public internet. All default and common credentials should be changed, and multi-factor authentication (MFA) rigorously enforced for all administrative and remote access points, he continues. Furthermore, API keys should be rotated, and cloud roles reviewed to ensure least-privilege principles are upheld.
As AI dramatically lowers the barrier to entry for cyber criminals, the threat landscape will only become more saturated with malicious tools and actors. The organisations that adapt quickest will not be those that simply buy more defensive AI. They will be the ones that rigorously defend their identities, implement robust security telemetry, and actively harden their perimeters against automated, high-speed attacks.