Blogs & Opinions 26.12.2023
Shadow AI and the AI Apps That Are a CISO’s Nightmare
Forty per cent of enterprise apps involve uploading content, code, files, or spreadsheets for AI assistance. This raises huge data leakage questions.
ChatGPT was released just one year ago, quickly attracting 100 million users within its first two months. While ChatGPT garners significant media attention, it’s part of a larger, lesser-acknowledged ecosystem: over 10,000 AI-powered applications are currently available, with roughly 1,000 new ones added each month.
Most of these AI-powered apps come from independent developers and, in many ways, mimic the ‘app rush’ of the early days of the mobile app stores. Unlike that late-2000s heyday, however, this rapidly growing sprawl has no oversight or control. In most cases, it’s unclear who the developers behind these apps are and where they are based, let alone how they will handle data or comply with regulations such as GDPR. It’s truly a ‘wild west’ – and a nightmare for any CISO.
We looked at the 1,000 most popular apps to find out what they are being used for – and it’s easy to see why employees might be tempted to try them, given the promised productivity gains. A recent Boston Consulting Group study estimates that AI can boost a worker’s performance by as much as 40%.
“Blocking access to AI tools might put that employee or company at a competitive disadvantage”
Just over 20% of the apps we studied are dedicated to ‘customer service’ – these provide tools to handle customer inquiries, feedback, and support tickets automatically. This group is followed very closely (19.7%) by ‘content creation & copywriting’ apps, with the same share again dedicated to ‘general productivity’ (time tracking, calendar management, task prioritisation, etc). Code assistants (15%) and email assistants (13.7%) complete the top five, the latter particularly troubling from a security point of view given that they essentially demand access to a user’s inbox.
Overall, 40% of the 1,000 enterprise apps we looked at involve uploading content, code, files, or spreadsheets for AI assistance. This raises huge data leakage questions.
Studies indicate that 71% of employees have already used generative AI for work, so it’s not difficult to understand why many believe the risks of shadow AI will outweigh the dangers of shadow IT. The risks are different, however. In the early days of shadow IT, blocking employee access to untrusted devices or data sources was a sensible way of mitigating the risk. With AI, the space is moving so quickly that blocking access to AI tools might put that employee or company at a competitive disadvantage. The CISO therefore potentially faces risk with either option. So what can be done about it?
The last couple of weeks have seen a pledge from 16 countries to make AI systems ‘secure by design’. This is encouraging and, given time, we should see the AI app ecosystem settle down into a smaller set of trusted apps from reputable providers. Right now, however, there is a need for practical steps. Where should we start? This checklist may help:
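One practical first step is simple visibility: finding out which AI apps employees are already using before deciding what, if anything, to block. The sketch below is a minimal, illustrative example of that idea rather than a finished tool – it assumes web proxy logs exported as CSV, and both the column names (user, dest_host, method) and the starter domain list are hypothetical placeholders to be replaced with your own.

```python
# Illustrative sketch only: surface shadow-AI usage from web proxy logs
# without blocking anything. Column names and the AI domain list are
# hypothetical -- substitute values from your own proxy or CASB export.
import csv
from collections import defaultdict

AI_DOMAINS = {          # hypothetical starter list, not exhaustive
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
    "claude.ai",
}
UPLOAD_METHODS = {"POST", "PUT"}  # requests that look like uploads


def scan(log_path):
    """Tally who contacted AI domains and who sent upload-like requests."""
    visits = defaultdict(set)    # user -> AI domains contacted
    uploads = defaultdict(int)   # user -> count of upload-like requests
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if not any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                continue
            visits[row["user"]].add(host)
            if row["method"].upper() in UPLOAD_METHODS:
                uploads[row["user"]] += 1
    return visits, uploads


if __name__ == "__main__":
    visits, uploads = scan("proxy_log.csv")
    for user in sorted(visits):
        note = f" ({uploads[user]} upload-like requests)" if uploads[user] else ""
        print(f"{user}: {', '.join(sorted(visits[user]))}{note}")
```

Even a crude report like this turns an invisible problem into a measurable one, and it supports a monitoring-first posture rather than the blanket blocking that risks the competitive disadvantage described above.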
Not everything AI-related is necessarily a security risk. Still, we are seeing a classic case of AI advancing so fast that everyone – governments, regulators, organisations and even AI researchers themselves – is scrambling to catch up. Just as shadow IT is now carefully managed, shadow AI will likely evolve to a point where it gives CISOs fewer sleepless nights. For now, though, we are in the eye of the storm, and organisations need to address the issue urgently.