The Japan Federation of Bar Associations (JFBA) has endorsed a significant international declaration, the “G7 Bars’ Statement on Artificial Intelligence (AI),” which was officially supported by bar associations and law societies across the G7 countries (the USA, Germany, Canada, France, Italy, the UK, and Japan) and the EU. This statement, formalized on March 21, 2024,…
11th Circuit concurrence makes ‘modest proposal’ for use of AI-powered large language models in legal interpretation
In a recent concurrence to the 11th Circuit’s decision in Snell v. United Specialty Ins. Co., Judge Kevin Newsom explored the potential for AI-powered large language models (LLMs) to aid in legal interpretation. While the case itself concerned an insurance dispute over whether a ground-level trampoline project counted as “landscaping” under the policy terms, Judge…
New artificial intelligence tech regulations do not need to reinvent the wheel
Brett McGrath, President of the Law Society of New South Wales (NSW), emphasized the importance of not “reinventing the wheel” in the regulation of artificial intelligence (AI) during his address to an NSW Upper House Inquiry into AI. McGrath advocated for the NSW government to consider both domestic and international efforts in AI regulation, suggesting…
Implementation of the European Artificial Intelligence Act in Ireland
The Law Society of Ireland has made a submission to the Irish Government on how it believes the European Artificial Intelligence Act (AI Act) should be implemented in Ireland. The EU’s AI Act, which is designed to regulate and foster responsible AI development, officially came into force on August 1, 2024. The AI Act classifies…
Chief Justice Andrew Bell flags generative AI as major challenge for justice system
In a landmark address marking the bicentenary of the Supreme Court of NSW, Chief Justice Andrew Bell highlighted generative AI as a pivotal challenge facing the justice system. Speaking to an assembly of senior judges from Australia, New Zealand, and Singapore, Bell emphasised the evolving complexities that AI introduces to the work of the courts….
American Bar Association’s artificial intelligence task force releases law school survey
A recent survey conducted by the American Bar Association (ABA) and its Task Force on Law and Artificial Intelligence reveals that a significant number of law schools are actively incorporating artificial intelligence (AI) technologies into their curricula. The survey, which included 29 law schools, found that 55% of these institutions now offer AI-specific classes, and…
Response from the British Columbia profession to a Law Institute Consultation Paper on artificial intelligence and civil liability
The British Columbia Branch of the Canadian Bar Association (CBABC) has submitted a comprehensive response to the British Columbia Law Institute’s (BCLI) consultation paper on artificial intelligence and civil liability. In its response, the CBABC considers the recommendations of the BCLI on the application of tort law to AI software. The CBABC’s thoughtful response highlights…
Victorian Legal Services Board 2024 Risk Outlook
In its “2024 Risk Outlook,” the Victorian Legal Services Board (VLSB) outlines the major risks facing Victoria’s legal sector in 2024. Cybersecurity remains a critical concern, as the Board has noted the growth of cyberattacks over the past year. Law practices are urged to implement robust security measures like multi-factor authentication and regular software updates whilst…
Law Society of Ontario provides comprehensive guidelines on the use of generative AI in legal practices
The April 2024 Futures Committee Report of the Law Society of Ontario provides comprehensive guidelines on the use of generative AI in legal practices. It emphasises the rapid evolution and integration of generative AI technologies, urging legal professionals to embrace these advancements while considering the ethical and professional implications. The report outlines the potential of…
One of the key challenges for the legal profession when considering AI is its tendency to produce inaccurate information
On 7 March, the ABA Judicial Division, in collaboration with Thomson Reuters, organised a webinar to address the challenges encountered by lawyers when using generative artificial intelligence software, such as ChatGPT, and in particular its tendency to produce inaccurate information, a phenomenon referred to as “hallucination.” The webinar covered various approaches to this problem. One…