
Shadow AI and the AI Apps That Are a CISO’s Nightmare

Forty per cent of the enterprise AI apps we studied involve uploading content, code, files, or spreadsheets for AI assistance. This raises huge data leakage questions.

CISOs have enough on their plate without worrying about the plethora of AI-powered apps making their kingdom vulnerable. Alastair Paterson offers a checklist of practical steps to help them out

ChatGPT was released just one year ago, quickly attracting 100 million users within its first two months. While ChatGPT garners significant media attention, it’s part of a larger, lesser-acknowledged ecosystem: over 10,000 AI-powered applications are currently available, with about 1,000 new ones added monthly.

A ‘wild west’ of AI apps is emerging

Most of these AI-powered apps come from independent developers and are, in many ways, mimicking the ‘app rush’ of the early days of mobile app stores. However, unlike that heyday of the late 2000s, this rapidly growing sprawl has no oversight or control. In most cases, it’s unclear who the developers behind these apps are and where they are based, let alone how they will handle data and comply with regulations such as GDPR. It’s truly a ‘wild west’ and a nightmare for any CISO.

40% of enterprise AI apps ask users to upload content

We looked at the 1,000 most popular apps to see what they are being used for – and it’s easy to see why employees might be tempted to try them given the promised productivity gains. A recent Boston Consulting Group study estimates AI can boost a worker’s performance by as much as 40%.

“Blocking access to AI tools might put that employee or company at a competitive disadvantage”

Just over 20% of the apps we studied are dedicated to ‘customer service’ – these provide tools to handle customer inquiries, feedback, and support tickets automatically. This group is followed very closely (19.7%) by ‘content creation & copywriting’ apps, with the same share also dedicated to ‘general productivity’ (time tracking, calendar management, task prioritisation, etc). Code assistants (15%) and email assistants (13.7%) complete the ‘top 5’, with the latter particularly troubling from a security point of view, given that they essentially demand access to a user’s inbox.

Overall, 40% of the 1,000 enterprise apps we looked at involve uploading content, code, files, or spreadsheets for AI assistance. This raises huge data leakage questions.

CISOs face risk either way

Studies indicate that 71% of employees have already used generative AI for work, so it’s not difficult to understand why many believe the risks of shadow AI will outweigh the dangers of shadow IT. The risks are different, however. In the early days of shadow IT, blocking employee access to untrusted devices or data sources was a sensible way of mitigating risk. With AI, the space is moving so quickly that blocking access to AI tools might put that employee or company at a competitive disadvantage. The CISO therefore faces risk with either option. So what can be done about it?

Practical steps

The last couple of weeks have seen a pledge from 18 countries to make AI systems ‘secure by design’. This is encouraging, and given time, we should see the AI app ecosystem settle into a smaller set of trusted apps from reputable providers. However, right now, there is a need for practical steps. Where should we start? This checklist may help:

  • Shadow AI: Get visibility over which apps and ‘shadow AI’ tools employees are using, and understand why they are using them, to inform threat models. Understand key use cases and value drivers and provide safe alternatives to high-risk applications (one way to build this inventory is sketched after the list).
  • Vendor Assessments: Be curious and demanding of your vendors. The Hacker News published a list of questions to ask vendors, which is worth checking out: https://thehackernews.com/2023/11/ai-solutions-are-new-shadow-it.html
  • Create and Refine Policies: Most companies now have an AI policy. Ensure it is iterated on and updated in line with how your employees actually use the tools. Educating and nudging employees in the right direction is as critical as ever.
  • Monitor for Policy Violations: Implement systems to detect policy breaches. While many organisations have rules stipulating that sensitive data cannot be shared, few can actually monitor for violations (a minimal detection sketch follows below).
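
To make the first step concrete, here is a minimal sketch of how a shadow AI inventory might be built from web proxy logs. Everything in it is an assumption rather than anything from this article: the log format (CSV lines of timestamp, user, domain) and the tiny domain watchlist are hypothetical placeholders you would swap for your own gateway’s export and a maintained list.

```python
# Minimal sketch: inventory shadow AI usage from web proxy logs.
# Assumed (hypothetical) input format: CSV lines of "timestamp,user,domain".
import csv
from collections import Counter

# Hypothetical, deliberately incomplete watchlist of AI app domains.
AI_APP_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
}

def shadow_ai_usage(log_path: str) -> Counter:
    """Count visits per (user, AI app) pair found in the proxy log."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for timestamp, user, domain in csv.reader(f):
            app = AI_APP_DOMAINS.get(domain.strip().lower())
            if app:
                usage[(user, app)] += 1
    return usage

if __name__ == "__main__":
    for (user, app), hits in shadow_ai_usage("proxy.csv").most_common():
        print(f"{user}: {app} ({hits} visits)")
```

The output is a ranked list of who is using what, which is exactly the evidence needed to start the ‘understand why’ conversations rather than reach for a blanket ban.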

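For the monitoring step, a similarly hedged sketch: a regex pass over text before it leaves for an AI app. The patterns below are illustrative placeholders, not a real DLP ruleset; a production system would pair something like this with proper classifiers and, as noted above, employee education.

```python
# Minimal sketch: flag likely policy violations in text bound for an
# AI app. Patterns are illustrative placeholders, not a real ruleset.
import re

POLICY_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def policy_violations(prompt: str) -> list[str]:
    """Return the names of any policy patterns the prompt matches."""
    return [name for name, rx in POLICY_PATTERNS.items() if rx.search(prompt)]

hits = policy_violations("Summarise this: card 4111 1111 1111 1111")
if hits:
    print("Hold the upload; matched rules:", ", ".join(hits))
```
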
Not everything AI-related is necessarily a security risk. Still, we are seeing a classic case of AI advancing so fast that everyone, including governments, regulators, organisations and even AI researchers themselves, is scrambling to catch up. Just as shadow IT is now carefully managed, shadow AI will likely evolve to a point where it gives CISOs fewer sleepless nights. Right now, though, we are in the eye of the storm, and organisations need to address the issue urgently.


Alastair Paterson is the CEO and co-founder of Harmonic Security. Before this, he co-founded and was CEO of the cybersecurity company Digital Shadows from its inception in 2011 until its acquisition by ReliaQuest/KKR for $160m in July 2022. Alastair led the company to become an international, industry-recognised leader in threat intelligence and digital risk protection.
