According to the 2019 ABA Legal Technology Survey, only 8% of respondents used tools incorporating Artificial Intelligence. Within a few years, AI has become a focal point of conversation within the legal industry and beyond. With the advent of generative AI-powered technologies like ChatGPT, awareness and use of AI are rapidly increasing, too.  

According to the Financial Times, senior judges have recently warned the judiciary in England and Wales to limit the use of Artificial Intelligence in conducting legal research and not to entrust information gathering on cases to online chatbots. The official guidance, recently published specifically for magistrates, tribunal panels, and judges, emphasised the risk that AI tools could make factual errors or draw on law from foreign jurisdictions if asked for assistance with cases.  

The country’s second most senior judge, Sir Geoffrey Vos, said AI offered “great opportunities for the justice system, but because it’s so new, we need to make sure that judges at all levels understand [it properly]”. The use of advanced technologies like AI by the judiciary in England and Wales is difficult for the public to scrutinise, partly because judges are not required to explain the preparatory work they may have undertaken to produce a judgment.  

The official guidance 

The guidance highlighted that while judges could use AI for some administrative or repetitive tasks, it was not recommended for legal research, except to remind judges of material with which they were already familiar. The guidance said, “Information provided by AI tools may be inaccurate, incomplete, misleading or out of date,” pointing out that such tools relied heavily on US law. “Even if it purports to represent English law, it may not do so.” 

AI has started to disrupt the broader legal profession, with some organisations using AI to assist in drafting contracts. In one of the prime examples of the dangers of courtroom use, a lawyer in New York was sanctioned after he admitted using ChatGPT to develop a brief for a case; the brief included invented citations and opinions.  

The risks 

The guidance also warned of privacy risks. Judges were asked to remember that information loaded into a publicly available AI chatbot “should be seen as being published worldwide”. Vos, the Master of the Rolls, said there was no suggestion that any judicial officer had fed sensitive case-specific information to a chatbot, and that the guidance was issued to avoid doubt.  

He added that in the long term, AI offered “significant opportunities in developing a better, quicker and more cost-effective digital justice system”. AI would not assist in decision-making, he said, until the judiciary was “absolutely sure that the people we serve would have confidence in that approach — and we’re miles away from that”.  

The deputy head of civil justice, Lord Justice Birss, said AI could be used to help judges make provisional assessments of costs, a data-heavy and time-consuming task. The document also noted that some litigants unrepresented by lawyers depended on AI tools because they had limited access to professional advice. The guidance told judges to be alert to submissions that may have been prepared with the assistance of a chatbot, and said judges should also be aware of the dangers posed by “deepfake” technology.  
