Features 05.11.2025
How ‘Vibe-Coded’ Copycat Attacks Could Muddy the Cyber Waters
What happens when threat actors feed security blogs into AI?
Vibe-coding is an AI-assisted software development technique in which users describe what a programme should do in plain language, and the model automatically generates the underlying code. However, according to a new report, it can be turned to nefarious purposes as readily as legitimate ones.
By leveraging existing threat analysis, criminals can generate convincing imitations of known malware in minutes. Not only does this lower the bar for threat actors; it also creates a new class of copycat attacks that closely resemble the original, confusing attribution and evading conventional detection.
The idea that good threat reporting might itself be fuelling cybercrime is uncomfortable for an industry striving for openness and information sharing. It raises challenging questions about the way we talk about threats.
Robert McArdle, director of forward-looking threat research (FTR) at Trend Micro and one of the report’s authors, says the discovery emerged from his own team’s use of generative AI.
“Like every tech company, we were using AI to produce quick proof-of-concept code,” he tells Assured Intelligence. “You give it a specification, and the more detailed that specification, the better the result. Then it dawned on us: a lot of the detailed papers that we and others put out are effectively a specification for what the malware does.”
“The more detailed the write-up, the easier malware creation becomes” Robert McArdle
To test the theory, Trend Micro fed its own published research into a generative model. The AI was able to produce convincing reproductions of the described malware – close enough that, with a little human refinement, they could have functioned as workable threats.
“You still need to do some work,” McArdle says. “But does it help you? Yes, it does. The more detailed the write-up, the easier it becomes.”
This has far-reaching implications. Security blogs, advisories and technical deep-dives are intended as a defence against opacity – a way to demystify threat techniques and raise the general level of preparedness. However, those same write-ups now serve as ready-made design briefs for AI tools capable of generating new malicious code.
Vibe-coding also scrambles one of the industry’s most useful, and fallible, disciplines: attribution.
For years, investigators have tracked groups based on their coding style, language choices, infrastructure and preferred targets. Subtle quirks in a malware sample could link it to a known adversary, building up profiles that help law enforcement and analysts understand who’s attacking whom and why.
But AI muddies that picture. When a model trained on public malware samples generates a close approximation of an existing campaign, it’s easy to misread the signal.
“Less mature companies often base attribution on what the malware looks like,” says McArdle. “If the file resembles something they’ve seen before, they assume it’s the same actor. But with vibe-coding, that connection no longer holds.”
Trend Micro employs a “diamond model” of attribution, assessing four key elements before identifying a culprit: adversary, infrastructure, tools or capabilities, and victims. “If you can link all four, then you’re probably right,” McArdle says. “But if you’re just matching files, you’ll be wrong a lot more often now.”
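To make that four-way check concrete, here is a minimal sketch of how it might be scored in code. The class, weights and threshold are hypothetical illustrations, not Trend Micro's actual methodology:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """One confidence score (0-1) per pillar of the diamond model."""
    adversary: float       # overlap with a known group's tradecraft
    infrastructure: float  # shared C2 domains or IP ranges
    capabilities: float    # tooling and malware similarity
    victims: float         # consistent targeting profile

def attribute(e: Evidence, threshold: float = 0.7) -> bool:
    """Attribute only when ALL four pillars independently clear the bar.
    A strong file-level (capabilities) match alone is not enough --
    exactly the trap a vibe-coded copycat exploits."""
    pillars = (e.adversary, e.infrastructure, e.capabilities, e.victims)
    return all(score >= threshold for score in pillars)

# A copycat sample: near-identical code, but nothing else lines up.
copycat = Evidence(adversary=0.1, infrastructure=0.2,
                   capabilities=0.95, victims=0.3)
print(attribute(copycat))  # False: the tools match, the rest does not
```

Matching files alone, by contrast, would have scored this sample as a confident hit.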
The report’s publication reignites an ethical debate that has simmered in cybersecurity for years: does transparency help or hinder? Should the industry publish detailed malware analyses when those same details can be used to regenerate the attack?
McArdle’s view is pragmatic. “It probably still benefits defenders more than attackers,” he says. “But we should ask ourselves, when we’re reporting low-level technical details, does the reader really need to know this? Are we including it because it’s useful, or because we want to show off our analysis?”
“The future of detection may rest less on identifying who wrote the code and more on how it behaves”
He points out that restricting access to detailed intelligence isn’t straightforward. Some have suggested private sharing between vendors, but this would exclude many practitioners who rely on public reports. “You’d end up creating a second-class tier of defenders,” he says. “The people who need this information the most are often the ones without a big vendor partnership.”
Tim Chase, field CISO at Orca Security, agrees that open reporting should continue. “The attackers are going to have their ways of sharing anyway,” he tells Assured Intelligence. “If we stop publishing, we just blind the good guys.”
For Chase, collaboration is a fundamental part of the defensive ecosystem. “I’ve always been a big fan of sector-specific sharing groups – the ISACs and industry collectives,” he adds. “Even competitors work together in those forums because everyone benefits from the collective visibility.”
Still, he acknowledges that some details may need tighter handling. “In critical infrastructure, maybe you limit distribution to verified participants through programmes like CISA or InfraGard. But for the broader industry, I think open reporting still does more good than harm.”
If vibe-coded attacks obscure the identity of the adversary, defenders can no longer rely on attribution or known signatures as their primary line of defence. Chase argues that this shift should prompt organisations to adopt more behaviour-based and runtime security models.
“Static detection isn’t enough,” he says. “If a piece of malware suddenly starts creating user accounts or talking to other machines, you should be able to flag that based on behaviour, not a file hash.”
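A toy version of that distinction is sketched below. The event categories and rule set are invented for illustration; a production system would draw on real endpoint telemetry:

```python
# Minimal behaviour-based rule engine: flag what a process *does*,
# not what its binary hashes to.
SUSPICIOUS = {
    ("process", "create_user"),         # new local accounts appearing
    ("network", "lateral_connection"),  # talking to peer machines
}

def flag(events):
    """Return the suspicious behaviours seen in an event stream."""
    return [e for e in events if (e["category"], e["action"]) in SUSPICIOUS]

stream = [
    {"category": "file", "action": "read", "path": "/etc/hosts"},
    {"category": "process", "action": "create_user", "name": "svc_tmp"},
    {"category": "network", "action": "lateral_connection", "dst": "10.0.0.12"},
]
print(flag(stream))  # catches both behaviours without consulting a hash
```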
“If we stop publishing, we just blind the good guys” Tim Chase
He suggests many security teams have underinvested in two key areas: source-code management and runtime protection.
“Since SolarWinds, we’ve seen that attackers are going after the source,” Chase argues. “So you need proper controls around how your code is stored and what open source components you’re pulling in. Then at runtime, you need systems that look for abnormal activity – not just static binaries.”
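One simple control of the kind Chase describes is a dependency allowlist: flag anything installed that never went through review. The sketch below uses Python's standard library; the allowlist itself is hypothetical, and a real pipeline would pair this with a vulnerability scanner such as pip-audit or an SBOM process:

```python
from importlib.metadata import distributions

# Hypothetical allowlist of open-source packages that have been vetted.
APPROVED = {"requests", "cryptography", "urllib3"}

def unvetted_packages():
    """List installed distributions absent from the approved set."""
    installed = {dist.metadata["Name"].lower() for dist in distributions()}
    return sorted(installed - APPROVED)

if __name__ == "__main__":
    strays = unvetted_packages()
    if strays:
        print("Unreviewed dependencies:", ", ".join(strays))
```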
Chase frames it as an arms race between attacker and defender automation. “They’re using AI to create attacks. We have to use AI to respond,” he explains. “It’s the same pattern we saw with the move to cloud – we had to evolve from manual remediation to DevSecOps. Now we need to evolve again, to AI-speed defence.”
That means security operations centres (SOCs) will increasingly need to rely on AI agents to triage alerts, identify patterns and even take limited autonomous action. “Ultimately, I’d like to see self-healing systems,” says Chase. “If an AI detects an abnormal process in a container, it should be able to pause or isolate it automatically until a human can review.”
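What that human-in-the-loop isolation step might look like is sketched below, assuming a Docker host; the anomaly check itself is stubbed out, since in practice it would be the AI detector Chase describes:

```python
import subprocess

def is_anomalous(container_id: str) -> bool:
    """Placeholder for the behavioural/AI detector."""
    return True  # stubbed: always flags, for demonstration only

def quarantine(container_id: str) -> None:
    """Freeze a suspect container until an analyst reviews it.
    `docker pause` suspends all processes without destroying state,
    so evidence is preserved for the human follow-up."""
    subprocess.run(["docker", "pause", container_id], check=True)
    print(f"{container_id} paused pending analyst review")

if is_anomalous("web-1"):
    quarantine("web-1")
```

Pausing rather than killing the container reflects the "until a human can review" caveat: the workload is contained, but nothing is lost.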
Ironically, the same generative techniques that make attacks easier to launch may also make them easier to catch.
“The industry doesn’t need to talk less. It just needs to think harder about who’s listening”
McArdle notes that vibe-coded malware often borrows from existing behaviours the model has seen before. “At the file level, it’s still hard to spot,” he says. “But when you look at what the malware actually does – the behaviour layer – those patterns repeat. The more they rely on what’s been seen before, the more predictable they become.”
In other words, heuristic and behavioural detections might actually get stronger as AI-generated attacks proliferate. “If you had a detection for the last one,” McArdle adds, “it might work for the next one too.”
That’s small comfort for analysts still grappling with attribution chaos. But it suggests the future of detection may rest less on identifying who wrote the code and more on how it behaves.
For all the talk of new frontiers, the cat-and-mouse dynamic remains the same – only faster. Every new defensive technique eventually gets reverse-engineered, just as every new exploit inspires a patch. The difference now is speed: AI has transformed what used to be a slow evolutionary cycle into a near-real-time feedback loop.
Both McArdle and Chase see opportunity in that acceleration. AI may have lowered the barrier to entry for attackers, but defenders are also better equipped to scale response and automate insight. “The technology we’re using today is the worst version we’ll ever have,” McArdle notes. “In six months, it will be drastically better.”
Transparency still matters, but it needs maturity, discipline and context. Vibe-coding isn’t a reason to stop sharing threat intelligence; it’s a reminder to share it purposefully, with an understanding of how it might be reused. The industry doesn’t need to talk less. It just needs to think harder about who’s listening.