Blogs & Opinions 28.08.2025

More Autonomy, More Risk: What Businesses Need to Know About ChatGPT Agent

AI assistants are evolving, with major implications for cybersecurity

Jonathan Lee explains why early adopters of agentic AI should always keep a human in the loop

Ever since OpenAI fired the starting pistol on the race to artificial general intelligence (AGI), experts have been predicting the coming of agentic AI. Whereas generative AI (GenAI) is focused on content creation and summarisation, AI agents are all about working autonomously to complete tasks for their human masters. Yet this autonomy – as well as their deeper knowledge of humans, and wider integration into digital ecosystems – marks agents out as a potential security risk.

We recently evaluated the new ChatGPT agent to see where customers could be exposed, and concluded that strong user oversight remains essential.

Jonathan Lee, Trend Micro

What’s new with ChatGPT?

OpenAI’s ChatGPT changed the game when it launched at the end of 2022. Its powerful processing and ability to converse in convincingly natural human-like language made it a stunning success. In just two months, 100 million users had downloaded it. Yet businesses are moving more cautiously. In North America (16%), APAC (16%) and Europe (12%), less than a fifth have actually proven the business value of GenAI and built capabilities using the technology, according to Boston Consulting Group (BCG).

This promises to change with agentic AI, which offers much more than GenAI. AI agents are designed to be dynamic, goal-oriented systems that operate independently to finish tasks. The most advanced versions will not only complete pre-assigned tasks but also be able to anticipate the needs of their masters.

The new ChatGPT agent is a big step in this direction. By using a virtual computer and registered accounts, it can browse the internet, generate and execute code and process information in a way previous digital assistants could not. Combining advanced planning and reasoning enables it to adapt to changing circumstances and unfamiliar tasks – everything from planning and booking trips to creating editable presentations. But more power could mean more risk for organisations that decide to use it.

Autonomous challenges

Unlike earlier, reactive versions of ChatGPT, the new agent doesn’t necessarily need to wait for its user to issue instructions. It works proactively and with little supervision required. But that could present threat actors with an opportunity to maliciously manipulate its actions.

An attacker could embed a malicious prompt within a webpage the agent visits, for example. In this scenario, the prompt, which is hidden in text or metadata, turns the agent’s actions toward unintended outcomes. It could be instructed to delete files, share sensitive information via email with unauthorised users, order unintended items, or otherwise do the threat actor’s bidding – with potentially serious financial, reputational and compliance implications.

“Shadow AI accounted for 20% of breaches over the past year, according to IBM. The impact of agentic systems could be even greater.”

It’s true that OpenAI has deployed a series of safeguards to help mitigate these risks. For example, the agent asks for explicit user confirmation before it takes critical actions with real-world consequences. And it requires active supervision for sensitive tasks like sending an email. OpenAI also blocks high-risk actions like bank transfers. However, such guardrails are not a silver bullet. Humans tasked with supervising agents may be distracted and/or may approve actions reflexively without properly understanding the implications.

Deeper insight means greater risk

The ChatGPT agent also differs from traditional GenAI in that, rather than holding static information like users’ language preferences, it learns continuously through doing. And while adapting over time in this way to become more useful, it interacts natively with diverse sources of information. These could include the user’s email inbox and calendar, customer databases, corporate social media accounts, and many others.

Threat actors could exploit this to access sensitive information from connected accounts and logged-in websites. Once again, OpenAI has added privacy controls to mitigate these risks. For example, users can clear their browsing history and log-out of active sessions. And the agent itself doesn’t store passwords and other sensitive information when working autonomously.

Control is key

However, the broader risks remain. Security teams will therefore need to manage the digital identities of AI agents and their related accounts as rigorously as their human-linked counterparts – striking the appropriate balance between convenience and control, in line with their corporate risk appetite.

“ChatGPT is a snapshot of the future. Organisations wishing to use it, or harness the full power of agentic AI in the years to come, must carefully weigh the risks against the potential gains”

ChatGPT is a snapshot of the future. Organisations wishing to use it, or harness the full power of agentic AI in the years to come, must carefully weigh the risks against the potential gains. And understand what vendor guardrails there are, and where they fall short. They should also do so before it’s too late. Better to allow a secure and managed agentic tool in the organisation than risk unsanctioned AI proliferating.

Shadow AI accounted for 20% of breaches over the past year, according to IBM. The impact of agentic systems could be even greater. In these early days, it would be preferable to maintain human supervision over sensitive operations, lest agents expose the organisation to breaches, leaks and compliance risks. Autonomous AI will in time do wonders for corporate users. But only if it can be trusted first.

Jonathan Lee is Trend Micro’s UK Director of Cyber Strategy, focused on engagement with UK government, the wider public and private sectors and critical national infrastructure on all matters related to cybersecurity, resilience and public policy. With nearly 30 years’ experience, Jonathan has a passion for helping organisations benefit from the power that technology has in transforming society in a secure manner.

Latest articles

Be an insider. Sign up now!