Blogs & Opinions 19.04.2023

ChatGPT for the Legal Industry: Friend or Foe?

The legal sector has much to gain from using ChatGPT but also much to lose. Lawrence Perret-Hall weighs up the advantages and disadvantages

ChatGPT is leading the surge in generative artificial intelligence (AI). The chatbot, created by OpenAI and released in November 2022, has been trained on vast amounts of data and fine-tuned to understand and generate natural language text. The result? An AI-powered chatbot that, as of March 2023, is said to exhibit ‘human-level performance’.

It’s no wonder some organisations, including those in the legal sector, seek ways to leverage these AI tools to drive efficiency. For example, PwC and magic circle law firm Allen & Overy recently introduced their own AI chatbots to help speed up the work of lawyers on tasks like carrying out due diligence, analysing contracts and drafting documents and memos to clients.

Others, however, are questioning the security of this technology and considering how it can be used for malicious purposes. Europol has warned of the criminal use of ChatGPT, while the National Cyber Security Centre (NCSC) has highlighted the potential insecurity of the information users give in their prompts and queries. For those in the legal sector wanting to incorporate generative AI into their day-to-day, is it safe to do so? And what can firms do to protect themselves against the probable threats of AI-powered chatbots?

Malicious use of AI chatbots

Cybersecurity experts are under no illusion that cyber criminals will already be using AI chatbots maliciously. Research from BlackBerry revealed that over half of IT professionals predict that a successful cyber attack credited to ChatGPT is less than six months away.

One of the critical risks to the legal sector is ChatGPT’s power as a phishing tool. It makes drafting phishing emails quicker, generating text in multiple languages and in a particular style, such as that of a CEO, without the poor spelling and grammatical errors that were previously tell-tale signs of an attack. Phishing is the most frequent cyber incident experienced by law firms, making the ability of AI chatbots to produce highly realistic and convincing phishing emails extremely concerning for an industry that carries out most of its work via email.

Efficiency vs security

There are also risks associated with lawyers using AI chatbots to speed up their everyday work, principally the security and privacy concerns around including sensitive information and data in queries. All prompts and queries users submit to AI chatbots are stored by and visible to their providers (OpenAI, in the case of ChatGPT). Any stored information is therefore at risk of public exposure, whether maliciously, via a hack or leak, or non-maliciously, by being made accessible accidentally.

By the very nature of lawyers’ work, using AI chatbots to expedite tasks is risky. For example, the sensitivity of the information required to draft a contract or client memo makes inputting it into a potentially insecure online platform a security and privacy concern.

Combatting evolving threats

The good news, however, is that there are several things law firms can do to mitigate these evolving risks.

First, cybersecurity awareness training and phishing simulations have never been more important. Conducted frequently, training helps establish a ‘security and privacy first’ mindset across firms, from trainee to partner. It is a vital investment for the sector, ensuring the criticality of security is articulated to employees who may well be using AI chatbots in the workplace.

Second, legal firms need to assess their cyber resilience on an evolving and continuous basis. Ultimately, you don’t know what you don’t know, so conducting regular cybersecurity audits and gaining an independent evaluation from external experts is one of the best ways firms can uncover gaps in their security posture and give in-house teams time and support to remediate vulnerabilities.

Third, use caution. Some firms, like Mishcon de Reya, have chosen to ban ChatGPT altogether. However, this technology isn’t going anywhere any time soon, and those who find a way to implement generative AI safely will gain a competitive advantage. By following cybersecurity best practices, educating staff and bolstering their security posture, law firms are best placed to drive efficiency while protecting their reputations, data and clients.

Lawrence Perret-Hall is commercial director at CYFOR Secure. Lawrence is deeply knowledgeable in incident response, supporting enterprises and SMEs in managing complex cyber incidents, organising multi-disciplinary responses and leading stakeholder engagement.
