Features 02.12.2025

Is Agentic AI Set to Transform SecOps?

Autonomous SOCs could soon become standard industry practice

Are AI agents a game changer or a potential risk for the SOC? Danny Palmer investigates

Could the security operations centre (SOC) be about to get an AI-powered reboot? Chief information security officers (CISOs) certainly think so. According to Omdia’s 2025 cybersecurity decision-maker survey, many believe the autonomous SOC could become the industry standard in as little as two years.

It makes sense. After all, many of the tasks SOC employees perform – detecting, triaging and analysing potential cybersecurity threats – are exactly the kinds of jobs which AI excels at. By autonomously analysing alerts, AI agents could accelerate threat detection, freeing up human teams to focus on more complex, meaningful activities. But not everyone is convinced AI is a SecOps silver bullet.

Introducing the AI-augmented SOC

A typical SOC can be a notoriously hectic, stressful environment, with alert overload almost a given.

“Today’s SOC teams face unprecedented challenges – the top challenge being scale. The amount of data we collect – from endpoints, cloud services, identities, and SaaS apps – has exploded,” Box CISO, Heather Ceylan, tells Assured Intelligence. “Our SOC has to separate the signals from the noise in near real time, without burning out the team or missing the one event that really matters.”

The problem is that SOC alerts come in faster than humans can realistically deal with them. And this challenge is only growing as cyber criminals exploit AI to make their campaigns more effective. It’s well documented that they’re using large language models (LLMs) to make phishing emails appear more legitimate and deploying automation to scale attacks.

“If we get that balance right, AI becomes a powerful augmentation, not a risky shortcut” Heather Ceylan

Meanwhile, the proliferation of cloud applications means employees can log in from almost anywhere. That’s good for productivity, but also provides new entry points for attackers to exploit.

For example, an employee who’s usually based in the UK suddenly logs in from the other side of the world. That’s not necessarily malicious activity in itself – maybe they’re on a work trip, or checking emails while on holiday. But it’s unusual. In this scenario, a SOC analyst needs to take time out to investigate the issue. In the meantime, that employee may be locked out of their account. However, with an AI-assisted SOC, triaging alerts could become almost instant.

“There are lots of things that humans do really well. But as a human, I don’t have all the signals to understand all the behavioural patterns, no matter the situation. Agentic AI is uniquely equipped to solve that problem at the speed and scale that attacks are occurring,” says Mike Britton, CIO and former CISO at Abnormal AI.

“That’s where an agent can examine an alert, take some additional steps to understand the full context, then send it to a human for human review,” he tells Assured Intelligence.
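The agent workflow Britton describes — enrich an alert with context, then close it or hand it to a human — could be sketched, in deliberately simplified form, like this. All of the field names and thresholds below are illustrative assumptions, not a description of any vendor's actual product:

```python
from dataclasses import dataclass

@dataclass
class LoginAlert:
    user: str
    country: str           # where the login came from
    home_country: str      # where the user normally works
    known_trip: bool       # e.g. cross-checked against a travel/HR system
    trusted_device: bool   # device previously seen for this user

def triage(alert: LoginAlert) -> str:
    """Enrich an unusual-location login with context, then decide:
    close it automatically or escalate to a human analyst."""
    if alert.country == alert.home_country:
        return "close: expected location"
    # Unusual location: check mitigating context before locking anyone out
    if alert.known_trip and alert.trusted_device:
        return "close: travel confirmed, device recognised"
    if alert.trusted_device:
        return "escalate-low: unusual location, known device"
    return "escalate-high: unusual location, unknown device"
```

A real agent would draw on far richer signals (session history, impossible-travel timing, behavioural baselines), but the shape is the same: automated context-gathering first, human review only where the context doesn't resolve the alert.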

Supercharging security investigations

Not all alerts are linked to malicious activity. They might indicate unexpected but otherwise legitimate behaviour, or they might be false positives. But if trained on the correct data, an AI-powered SOC can help humans prioritise the most significant alerts.

“A great thing AI brings is that a machine can read and analyse logs faster than people can. If we’re looking for a needle in a haystack, AI can determine the needle from the hay and get us to the most important alerts. I think that’s where we’re going to gain a lot of value with AI,” Netscout CISO, Debby Briggs, tells Assured Intelligence.

“It’s using AI to get rid of the hay so we can look at the needle. It’s being able to learn from the way they handle those needles, those issues, and start automating some of that – because a machine doesn’t need to sleep.”
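Stripped to its essentials, "getting rid of the hay" is a scoring-and-filtering problem: rate each alert on a handful of signals, drop the obvious noise, and surface what's left in priority order. The signals and weights below are invented for illustration only:

```python
# Illustrative alert prioritisation: score each alert on simple signals,
# discard low-scoring noise, and surface the rest highest-priority first.
def score(alert: dict) -> int:
    s = 0
    if alert.get("severity") == "high":
        s += 3
    if alert.get("asset_critical"):   # touches a business-critical system
        s += 2
    if alert.get("seen_before"):      # repeat of a known benign pattern
        s -= 2
    return s

def prioritise(alerts: list[dict], threshold: int = 2) -> list[dict]:
    """Keep alerts scoring at or above the threshold, best first."""
    kept = [a for a in alerts if score(a) >= threshold]
    return sorted(kept, key=score, reverse=True)
```

Briggs's further point — that the system should learn from how analysts handle the needles — is what distinguishes an agentic approach from a static rule set like this one: the weights would adjust over time rather than being hand-tuned.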

No silver bullet

Nevertheless, even advocates of deploying AI in the SOC believe there are challenges. First, the technology is a double-edged sword: cyber criminals are using it to conduct attacks as much as network defenders are deploying it to stop them.

“You need to understand your risk appetite. What are you willing to have AI work on?” Mike Britton

Second, AI isn’t a silver bullet. Organisations can’t simply procure an AI solution and hope for the best. Businesses seriously considering building an autonomous SOC should first understand exactly what they want to use the technology for and why.

“There’s a fever pitch to get AI deployed, and people are just running around, scrambling, saying ‘I need AI agents’. There’s pressure around deploying something without truly understanding the problem you’re trying to solve. But you need to understand that,” says Abnormal AI’s Britton.

AI agents in the SOC should focus on specific tasks, such as triaging alerts or monitoring for suspicious behaviour. Even if AI were powerful enough to run the entire SOC, CISOs shouldn’t hand over full control, because some prompts or commands may result in unintended consequences, Britton continues.

“You need to understand your risk appetite,” he says. “What are you willing to have AI work on? What data, access and permissions are you willing to give it? How will you control it?”
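One simple way to encode the risk appetite Britton describes — what the AI may do on its own versus what needs sign-off — is an explicit allowlist of autonomous actions. The action names here are hypothetical examples, not drawn from any real product:

```python
# Hypothetical guardrail: actions the AI agent may take autonomously
# form an explicit allowlist; anything not listed is queued for a human.
ALLOWED_AUTONOMOUS = {"enrich_alert", "close_false_positive", "tag_for_review"}

def route_action(action: str) -> str:
    """Decide how a requested agent action should be handled."""
    if action in ALLOWED_AUTONOMOUS:
        return "execute"
    return "require_human_approval"
```

The design choice matters: a deny-by-default allowlist means new or unexpected agent behaviours (say, disabling an account) fail safe to human review rather than executing silently.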

The risk of a cybersecurity skills drain

While advocates of AI-assisted SOCs speak enthusiastically about how AI agents can take over the “grunt work” performed by the most junior SOC analysts, this also risks creating a paradox. If all junior-level work is outsourced to an algorithm, what happens to those roles in the future?

“One of my biggest concerns with AI in the SOC is that it’s tough right now to find people to work there. So yes, AI is going to help us with that problem, as it can take over the Level 1 SOC analyst work,” says Netscout’s Briggs.

“The autonomous SOC could become the industry standard in as little as two years”

“But if we do that and I need to hire Level 2 and Level 3 people, where are the people graduating from cybersecurity degrees going to get that experience? And how does someone who graduates from college today get that experience if they don’t have that work?”

It’s a challenge which many industries are grappling with, but one which could have significant consequences for the future of cybersecurity.

“There’s a cultural and skills shift. Analysts need to learn how to work with AI agents, question their output, and design effective prompts and playbooks,” says Box’s Ceylan. “Leaders need to set clear policies on where automation is allowed, where it is not, and how we measure success. If we get that balance right, AI becomes a powerful augmentation, not a risky shortcut.”

If these challenges are met head-on, the technology can improve security operations and help cybersecurity staff tackle the challenges of an ever-more-connected, AI-enabled world. But even the most enthusiastic advocates agree that human talent will always remain an essential part of the SOC.

“Humans are always going to need to be involved – machines taking over isn’t going to happen,” Netscout’s Briggs concludes.
