05.12.2023

ChatGPT Turns One: A Story of Trust, Sensitive Data and Skills

One year older, but is it one year wiser?

In just one year, ChatGPT has earned insane hype, 180 million users, and a hefty dose of concern around the chatbot’s ability to enhance and assist hacking. Chris Denbigh-White takes advantage of ChatGPT’s first birthday to analyse its performance to date

Initially considered a mot du jour when it entered the public consciousness one year ago, ChatGPT has since become the mot d’année (or ‘word of the year’ for any non-French speakers out there). With over 180 million active users (and rising), ChatGPT is “intelligent enough” to pass the bar exam and is allegedly responsible for a 1,265% rise in phishing emails. It’s easy to see that ChatGPT is doing more than writing LinkedIn content for corporate marketers. Its impact has been ubiquitous.

Within the cybersecurity industry, ChatGPT has the potential to solve the “many eyes on glass*” problem by helping analysts triage and respond to incidents more efficiently and effectively than ever before.

As it stands, there is no comprehensive and cohesive agreement among governments on how to systematically assess the risks of this technology, whether that means understanding its potential manipulation by criminals or its applications in defence. Noteworthy steps have been taken, such as the EU AI Act (the first regulation on artificial intelligence), President Biden’s executive order, the global AI Safety Summit, and the guidelines for the secure development of AI jointly released by the UK NCSC and CISA on November 26th, yet significant progress remains to be made.

Taking sensitive data, skills and regulation in turn, let’s assess some of the most significant impacts ChatGPT has had over the past year.

Considering data loss

ChatGPT is an incredibly useful tool, and it’s easy and intuitive to use. The flip side is that the more useful and straightforward a tool is, the easier it becomes for people to feed it information, including sensitive company information (deliberately or inadvertently), which is then swallowed up by the platform with the user having no real understanding of how it will be stored or used. This issue will resonate particularly with individuals and organisations bound by data sovereignty requirements.

For a social media executive at a small company, prompting ChatGPT to make a corporate Instagram post “more informal” or “sound more like Buzzfeed” is largely harmless. Compare this to a chief financial officer struggling with their latest quarterly report, copying and pasting sensitive financial information into the platform and prompting it to sugar-coat the numbers for the board.

The concern with ChatGPT’s data flows is a lack of agreement and understanding about where inputted data ends up. Many companies have yet to draw a clear line that stops them losing control of the data their employees feed in, and this data loss risk has kept many corporate cyber and data professionals up at night.
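
To make the concern concrete, here is a minimal sketch of one kind of guardrail security teams have experimented with: a pre-submission filter that redacts obviously sensitive patterns before a prompt ever leaves the corporate network. This is my illustration rather than a control the author describes; the SENSITIVE_PATTERNS table and redact_prompt helper are hypothetical names, and the regexes are deliberately crude stand-ins for the far richer detection real DLP tooling performs.

```python
import re

# Illustrative patterns only: real data loss prevention tooling uses
# classifiers, document fingerprinting and context, not bare regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact known-sensitive patterns before a prompt is sent to an
    external LLM; return the cleaned text plus the names of any patterns
    that fired, so the event can be logged for audit."""
    fired = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            fired.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, fired

if __name__ == "__main__":
    text = "Polish this for the board: card 4111 1111 1111 1111, contact cfo@example.com"
    redacted, hits = redact_prompt(text)
    print(redacted)  # sensitive spans replaced with [REDACTED-...] placeholders
    print(hits)      # ['credit_card', 'email'], ready for an audit trail
```

A filter like this doesn’t solve the underlying trust problem, but it shows where a company might draw the line described above: at the network edge, before the data ever reaches the platform.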

LLMs will not displace jobs but reskill workers

One of the most significant conversations has been around ChatGPT swallowing up jobs and leaving workers with specific skill sets redundant in the face of a technology that can do their jobs just as well. This is akin to the scribes and calligraphers who bemoaned the arrival of the printing press: they felt printed pages lacked the beauty of hand-formed characters and saw the press as little more than cheating. Of course, the rest is history…

LLMs (large language models, the foundation of ChatGPT) may well be embraced in the fullness of time. I find doomsday predictions of vast swathes of the population left unemployed fanciful. As with any significant technological shift, there will be a period of refactoring in the labour market. People will quickly begin repurposing their current skill sets to complement LLMs, finding opportunities to work alongside the technology that free them up for more creative and rewarding tasks.

The black box around regulation

The key thing to remember is that LLM technology is still in its infancy. As with anything new, we can’t trust it until we get to know it, and from an AI/LLM perspective, getting to know it means regulation, control and agreement.

Currently, LLMs are a regulatory black box; like the friend at the pub quiz who is convinced of an answer but has no network signal to check it, we are largely running blind. For AI in general, CISA and the National Cyber Security Centre recently launched landmark guidelines on AI cybersecurity, but these are still very early days. Regulation of this technology remains a massive bone of contention and is unlikely to be completely settled anytime soon.

For cybersecurity professionals, it’s all about collaboration and doing the right thing. Generative AI can already make our industry more effective and efficient, and it has massive potential to do even more. Still, we need to adopt a repeatable framework across the board that ensures we understand the risks and take a sensible (and documented) approach to these advanced technologies.

Another year on – will we be another year wiser?

This next year will likely bring more debate, advantages, controversies and speculation around ChatGPT and LLMs in general. The better we understand data flows, how best to complement workers, and how to reach agreement on a standardised risk assessment framework for LLMs, the more useful this technology will become for the cybersecurity industry and many others.

I’m excited for the year ahead for the ChatGPT platform, LLMs and Generative AI. Let’s make 2024 the year we get it right, or at the very least, the year we move the needle up and to the right!

*Assured Intelligence asked ChatGPT what ‘many eyes on glass problem’ means, and this was its response: “The ‘many eyes on glass’ problem refers to a situation in which numerous individuals or stakeholders are observing a situation or dataset, typically displayed on a screen or dashboard (referred to as “glass” in this context), with each person having their interpretation, analysis, or understanding of the information presented.”


Chris Denbigh-White is Next’s chief security officer. As a former police and intelligence officer, he has built his career in system design, defence, and governance. Chris most recently served as VP of information security at Deutsche Bank. Chris is also an active contributor to the advisory board of the SANS Institute and has played a role in developing the (ISC)2 CISSP exam.
