May 2, 2023

Lessons from our CTO on How to Protect Privacy & Confidentiality as Law Firms Embrace GPT

As generative AI tools like ChatGPT become integral to law practice, they raise concerns about privacy, confidentiality, and security. The data an AI model uses is crucial, especially when it includes private client information, so using legal tech built with the right safeguards is essential. Our CTO, Imre Gelens, explains how to make sure your law firm uses GPT safely.

According to the report “ChatGPT and Generative AI within Law Firms”, published on 17 April 2023 by Thomson Reuters, 93% of lawyers have heard of or read about generative AI or ChatGPT. The same report quotes the Head of Employment and Partner at law firm Sternberg Reed: “Within the next six months everybody at the firm will be using it and there’s absolutely no way you’re going to stop that, because people will get more in tune with what’s happening and how quickly this technology is developing.”

These numbers and quotes make one thing clear: generative AI will soon be something you cannot imagine your law practice doing without. Understandably, however, the use of legal tech and AI in law firms raises questions about privacy, confidentiality and security.

So it is up to us, the ones developing the legal tech, to shed some light on how privacy, confidentiality and security are safeguarded.

How data is used by AI models

Training an AI model involves providing it with a large set of examples, such as images or text, and teaching it to identify patterns in the data that allow it to make accurate predictions or decisions. In the case of a model like GPT, this means feeding the model an enormous amount of text, asking questions about that text and then assigning a score to the answers. Once the model has been trained, it can be used to make predictions on new data it has never seen before: you can ask it new questions and it will be able to answer them accurately. What is important to realize is that during the training phase a lot of data is ingested by the model, but during the prediction phase (when you ask ChatGPT a question, for example) the model itself stores no data. It simply calculates the response and sends it back to the asker. However, the provider typically does store your prompts so they can be used for further training of the model later.
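To make that distinction concrete, here is a minimal sketch of what happens at prediction time, using the openai Python library as it existed in spring 2023 (the prompt is a made-up example). The prompt is sent over, the model calculates a response and returns it, and nothing is persisted on your side; whether the provider logs the prompt for later training is a matter of their data-usage policy, not of this code.

```python
# Minimal sketch of a prediction (inference) call: the model computes an
# answer and returns it. The model itself stores nothing; any logging of
# the prompt happens on the provider's side, per their data-usage policy.
import openai

openai.api_key = "sk-..."  # placeholder; load from a secret store in practice

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize the limitation period for contract claims."}
    ],
)

# The answer comes back in the API response object.
print(response["choices"][0]["message"]["content"])
```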

Privacy, Confidentiality and Security

The privacy and confidentiality concern expressed by lawyers centers on the data needed by large language models like OpenAI’s, particularly in use cases that involve private client data. When it comes to the confidentiality of the source material used to generate GPT’s output, there is a major difference between pasting confidential information into OpenAI’s website to ask a legal question and using legal technology that has been built with the right safeguards in place.

When using ChatGPT, all the information you share about your client and their case, as well as the response ChatGPT gives to your question, is stored and can be used by OpenAI to train its models later, unless you explicitly opt out. This means that information about your case and client can end up in the OpenAI database and can be accessed by OpenAI personnel. On top of that, even if you opt out, there is currently no version of the OpenAI platform hosted outside of the US, which means your data will be stored outside of the EU, with all the associated risks.

However, it is possible to benefit from the AI power of GPT and similar Large Language Models (LLMs) without your client’s data being stored and used for training purposes. Uncover uses only AI models that are either developed in-house or for which it can be guaranteed that the data is not used by third parties for re-training later. In the case of OpenAI, this means that the data used for answering questions is used solely for that purpose and will not be included in any future versions of the GPT models. Furthermore, we use our own deployments of these models in cloud data centers located in Europe, which mitigates the risks associated with data leaving the EU.
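As an illustration, a self-managed, EU-hosted deployment can be reached like this. This is a hedged sketch only: the resource name, deployment name and region below are hypothetical examples of an Azure OpenAI setup, one common way to run GPT models inside a European data center, and not a description of Uncover’s actual infrastructure.

```python
# Hedged sketch: calling your own GPT deployment in an EU region via
# Azure OpenAI. Resource name and deployment name are hypothetical.
# Requests go to your own Azure resource in Europe, and per Microsoft's
# data-usage terms the prompts are not used to retrain the base models.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-eu-resource.openai.azure.com/"  # hypothetical EU-hosted resource
openai.api_version = "2023-05-15"
openai.api_key = "..."  # key for your Azure resource, via a secret store

response = openai.ChatCompletion.create(
    engine="gpt-35-deployment",  # hypothetical name of your own model deployment
    messages=[{"role": "user", "content": "Which clauses govern termination?"}],
)
print(response["choices"][0]["message"]["content"])
```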

Since it is possible to benefit from OpenAI’s language models without your clients’ data being used to train them, we advise law firms seeking legal tech solutions to only use software that is guaranteed not to train the language models on the information its lawyers feed into it.

Taking the right measures when using Large Language Models is a must. However, tooling does not work in isolation, so the risks concerning data security and privacy go beyond this. When evaluating legal tech for your firm, a simple way to verify that the right security measures were taken into account in the design of the technology is to check whether the right certifications are in place. One certification to look for is ISO 27001, the internationally recognized standard providing a framework for establishing, implementing, maintaining, and continually improving an Information Security Management System (ISMS). Obtaining an ISO 27001 certification demonstrates an organization's commitment to implementing a robust ISMS and following best practices to protect sensitive information from potential threats. At Uncover we are not only ISO 27001 certified, but our compliance is continuously monitored, making it more than a once-a-year checkbox activity.

Another important measure is the safeguarding of personal information in accordance with the GDPR, the comprehensive data protection regulation of the European Union (EU). A GDPR certification is a formal recognition that an organization has implemented the necessary processes, policies, and technical measures to comply with the GDPR.

Don’t leave the backdoor open 

Web security has improved a lot over the last decade. Encryption is now the standard across the web and most products and tools ship with very good default security practices. With the technology improving, data loss rarely happens through technology failure anymore; nowadays it can mainly be attributed to human error. Doing a thorough analysis of third-party vendors but then using passwords like "FirmName2023!" is mopping the floor with the tap running. Effectively safeguarding your data therefore begins with implementing robust password management practices and training staff to recognize phishing attempts. Introducing a password manager in your firm, avoiding easily guessable phrases or words, and enabling multi-factor authentication are still among the highest-value things a firm can do for its data security.
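As a toy illustration of why such passwords fail, the sketch below (the function and rules are our own invention, not a real password policy engine) flags any password built from the firm name plus a recent year, which is exactly the pattern attackers try first.

```python
# Toy illustration: flag passwords that follow the predictable
# "firm name + recent year" pattern, like "FirmName2023!".
from datetime import date

def is_guessable(password: str, firm_name: str) -> bool:
    """Return True if the password combines the firm name with a recent year."""
    lowered = password.lower()
    recent_years = {str(y) for y in range(date.today().year - 5, date.today().year + 2)}
    contains_firm = firm_name.lower() in lowered
    contains_year = any(year in lowered for year in recent_years)
    return contains_firm and contains_year

print(is_guessable("FirmName2023!", "FirmName"))                 # True: trivially guessable
print(is_guessable("correct-horse-battery-staple", "FirmName"))  # False
```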

To sum it all up, privacy, confidentiality and security are to be taken seriously, not only when using AI but when using technology in general. It is about making sure that law firms, the lawyers themselves, and the legal technology providers whose services they use are aware of the information their clients entrust to them and take appropriate measures to safeguard it.
