
Blogs & Opinions 22.05.2025
How API Security Can Bolster Trust and Mitigate Agentic AI Risk
With greater autonomy comes a greater risk of something going wrong
Agentic AI signifies a major leap forward for artificial intelligence. It means systems that are capable of perceiving, reasoning and acting autonomously. But giving AI more agency will require guardrails and governance that extend beyond the security model we see today. Network defenders will need to think hard about how to allow AI to access data and how to secure the wider ecosystem.
There is no AI without Application Programming Interfaces (APIs), the pieces of code that enable different software systems to communicate with each other in real time by providing access to data. Since AI agents rely heavily on APIs for data exchange and functionality, securing these interactions is paramount. However, APIs are notoriously open to abuse and readily subverted. A recent report found there were 150 billion API attacks over the past two years, with AI-powered APIs partly to blame.
“Agentic AI introduces a new layer of complexity, whereby every agent behaves like a bidirectional API.” James Sherlow
Agentic AI introduces a new layer of complexity, whereby every agent behaves like a bidirectional API, increasing the level of risk. As a result, these AI-driven exchanges threaten to inadvertently expose internal systems, create significant vulnerabilities, and imperil valuable data assets.
It’s a problem highlighted in the recently updated OWASP Top 10 for Large Language Models. LLM06:2025, Excessive Agency, has been expanded to reflect the threat posed by AI that has excessive functionality, permissions or, in the case of agentic AI, autonomy. The example given is of an AI application that has the ability to delete files and proceeds to do so because parameters or permissions have not been set. So how can organisations bolster their defences when agentic AI may not even be under their control?
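To make the excessive-agency example concrete, here is a minimal sketch in Python of the kind of guardrail OWASP is pointing at. The tool names, policy model and dispatch function are illustrative assumptions rather than any particular framework’s API: the idea is simply that an agent’s tool calls are checked against an explicit allowlist, and destructive actions such as deleting files additionally require human approval.

```python
# A minimal sketch, not a specific framework's API: tool names, the policy
# model and the dispatch function are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    allowed_tools: set[str] = field(default_factory=set)      # explicit allowlist
    require_approval: set[str] = field(default_factory=set)   # destructive tools


def invoke_tool(policy: ToolPolicy, tool_name: str, approved: bool = False) -> None:
    """Refuse any tool call that is not explicitly permitted by the policy."""
    if tool_name not in policy.allowed_tools:
        raise PermissionError(f"Agent is not permitted to call '{tool_name}'")
    if tool_name in policy.require_approval and not approved:
        raise PermissionError(f"'{tool_name}' requires human approval before running")
    print(f"Running {tool_name}...")  # placeholder for the real tool dispatch


policy = ToolPolicy(
    allowed_tools={"read_file", "summarise", "delete_file"},
    require_approval={"delete_file"},
)

invoke_tool(policy, "read_file")            # runs
try:
    invoke_tool(policy, "delete_file")      # blocked until a human signs off
except PermissionError as err:
    print(err)
```

The point is not the specific mechanism but that the permission boundary sits outside the model, so an agent cannot grant itself the autonomy to act destructively.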
External AI learns by sending out crawler bots to collect data across the internet, often without obtaining permission. Detecting these bots has traditionally been problematic because AI agents are generally obfuscated, making it difficult to tell which LLM they originate from. But detection is possible, and it can help security teams determine what type of AI is being used and then restrict or prevent it from harvesting organisational data.
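As a rough illustration of what that detection can look like, the sketch below screens inbound requests against a handful of publicly documented AI crawler user-agent tokens. The list is illustrative rather than exhaustive, and because many agents obfuscate themselves, real deployments pair this kind of check with IP verification and behavioural analysis.

```python
# A minimal sketch of user-agent screening for AI crawler bots. The token list
# is illustrative, not exhaustive, and spoofed user agents will evade it.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Google-Extended")


def classify_request(user_agent: str) -> str:
    """Label a request as an AI crawler if its user agent matches a known token."""
    ua = (user_agent or "").lower()
    if any(token.lower() in ua for token in AI_CRAWLER_TOKENS):
        return "ai-crawler"
    return "unclassified"


def should_block(user_agent: str, allow_ai_crawlers: bool = False) -> bool:
    """Block AI crawlers unless the organisation has chosen to allow them."""
    return classify_request(user_agent) == "ai-crawler" and not allow_ai_crawlers


print(should_block("Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"))  # True
print(should_block("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))                         # False
```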
“Since AI agents rely heavily on APIs for data exchange and functionality, securing these interactions is paramount.” James Sherlow
Internally, there’s also the problem of unsanctioned or “shadow AI”. As these AI tools also use APIs to access data, they are discoverable. They can then be blocked, or accepted and managed alongside other AI agents. However, even AI agents that are verified can overstep the line. So it’s important to analyse the interactions being made via the API to determine whether access requests fall within acceptable boundaries and can be considered legitimate, or whether they should be classed as potentially malicious.
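One simple way to picture that analysis: give each discovered and sanctioned agent a declared scope and a rate ceiling, then flag anything that falls outside it. The sketch below assumes a hypothetical per-agent policy table; the agent identifier, endpoints and thresholds are illustrative.

```python
# A minimal sketch of boundary checks on agent API traffic. The policy table,
# agent identifiers and limits are illustrative assumptions.
import time
from collections import defaultdict, deque

AGENT_POLICIES = {
    "reporting-agent": {
        "endpoints": {("GET", "/api/v1/reports")},   # declared scope
        "max_per_minute": 60,                        # simple rate ceiling
    },
}
_recent_calls: dict[str, deque] = defaultdict(deque)


def is_within_bounds(agent_id: str, method: str, path: str) -> bool:
    """Flag requests from unknown agents, out-of-scope endpoints or excessive rates."""
    policy = AGENT_POLICIES.get(agent_id)
    if policy is None:
        return False                                 # unknown or shadow agent
    if (method, path) not in policy["endpoints"]:
        return False                                 # outside its declared scope
    now = time.time()
    window = _recent_calls[agent_id]
    window.append(now)
    while window and now - window[0] > 60:
        window.popleft()
    return len(window) <= policy["max_per_minute"]


print(is_within_bounds("reporting-agent", "GET", "/api/v1/reports"))    # True
print(is_within_bounds("reporting-agent", "DELETE", "/api/v1/users"))   # False
```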
Agentic AI introduces unique challenges that go beyond foundational API security, requiring governance and monitoring to mitigate unpredictable behaviour. The broad permissions necessary for their complex tasks must be carefully managed, with granular access controls and regular audits to minimise overprivileged access. Additionally, organisations’ reliance on third-party APIs and event-driven architectures like WebSockets demands extra security measures, including risk assessments and fallback mechanisms to handle disruptions.
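On the fallback point, the mechanism can be as simple as a timeout plus a safe default, so a disruption in a third-party API degrades the agent’s behaviour gracefully rather than stalling it. The endpoint and fallback value below are placeholders, not any particular vendor’s interface.

```python
# A minimal sketch of a fallback wrapper around a third-party API call; the URL
# and fallback payload are illustrative placeholders.
import urllib.error
import urllib.request


def fetch_with_fallback(url: str, timeout_s: float = 2.0, fallback: bytes = b"{}") -> bytes:
    """Return the upstream response, or a safe fallback if the API is slow or down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.read()
    except (urllib.error.URLError, TimeoutError):
        return fallback  # degrade gracefully instead of letting the agent hang


# data = fetch_with_fallback("https://third-party.example/api/quotes")
```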
It’s clear we are on the cusp of a new era, with Gartner predicting that by 2028, 33% of enterprise software applications will include agentic AI—enabling 15% of day-to-day work decisions to be made autonomously. Knowing that this change is coming, organisations must begin to plan now for how they will mitigate risk, facilitate collaboration between development and security teams, and harness innovation to secure AI. It’s time to get to work.
Prior to Cequence Security, James led the Security Systems Engineering team at Palo Alto Networks in Western Europe. Before that, he led the Systems Engineering Team at ConSentry, focusing on application visibility, control and security in both the wired and wireless Local Area Network (LAN). He has previously helped pioneer the next generation of cloud native application delivery at Avi Networks, which was acquired by VMware.