On 7 March, the ABA Judicial Division, in collaboration with Thomson Reuters, organized a webinar on the challenges lawyers face when using generative artificial intelligence tools such as ChatGPT, in particular their tendency to produce inaccurate information, a phenomenon known as “hallucination.”
The webinar covered several approaches to the problem. One suggestion was to use specialized legal databases rather than broad web searches such as Google to minimize inaccuracies. Professor Joshua Fairfield of Washington and Lee University demonstrated ChatGPT’s errors when answering legal queries, highlighting the importance of precise prompts. Mark Davies, a corporate lawyer, emphasized that the AI’s reliance on vast, general data sources can lead to incorrect responses, stressing the need for legal-specific models. Finally, Emily Colbert of Thomson Reuters shared the company’s progress on AI-Assisted Research, which grounds answers in a trusted legal database to mitigate hallucination risks, though she cautioned against expecting AI systems to eliminate inaccuracies entirely at this stage. The discussion underscored the importance of careful, informed use of AI tools in legal contexts to manage the attendant risks.
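To make the grounding idea concrete, the sketch below shows, in minimal Python, what retrieval-grounded prompting looks like: the model is confined to passages pulled from a vetted source set rather than left to answer from general training data. Everything here is hypothetical for illustration, including the `TRUSTED_PASSAGES` store, the passage text, and the `retrieve` and `build_grounded_prompt` functions; it does not reflect Thomson Reuters’ actual system or any real legal database.

```python
"""Illustrative sketch of grounding answers in a trusted corpus.
All names, passages, and functions are hypothetical examples,
not a real product API."""

# A stand-in "trusted database": a few hand-written passages keyed by topic.
TRUSTED_PASSAGES = {
    "statute-of-limitations": (
        "Hypothetical excerpt: claims for breach of a written contract "
        "must be brought within four years of the breach."
    ),
    "summary-judgment": (
        "Hypothetical excerpt: summary judgment is appropriate when there "
        "is no genuine dispute as to any material fact."
    ),
}


def retrieve(query: str) -> list[str]:
    """Return passages whose key terms overlap the query (toy retrieval)."""
    terms = set(query.lower().split())
    return [
        text
        for key, text in TRUSTED_PASSAGES.items()
        if terms & set(key.split("-"))
    ]


def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that restricts the model to retrieved passages,
    the core idea behind reducing hallucination via a trusted corpus."""
    passages = retrieve(question)
    if not passages:
        return (
            f"Question: {question}\n"
            "No supporting passages were found; state that the source "
            "set does not cover this question rather than guessing."
        )
    sources = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer strictly from the passages below. If they do not answer "
        "the question, say so and cite nothing else.\n"
        f"Passages:\n{sources}\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    print(build_grounded_prompt(
        "What is the statute of limitations for contract claims?"
    ))
```

Real systems replace the keyword lookup with semantic search over a curated legal corpus, but the design choice is the same one the panelists described: constrain the model to verifiable sources rather than relying on its general training data.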