
US State Bars are Issuing Generative AI Guidance: So, What’s the Verdict?


Generative AI has made headlines for over a year now. What is this new technology? What can it do? How can it help in business? Is it safe? Is it ethical? These questions come up constantly in articles, blogs, podcasts, and industry conferences. In the legal field, it is becoming clear that this technology is here to stay and will change the game for legal services delivery. With safe and responsible usage, there is nothing to fear about using these tools. While any new endeavour carries some level of risk, deploying a strong legal services management framework mitigates it. That framework can be built through a combination of measures, including internal policy creation, tool testing, partnering with consultants, and clear communication.

When lawyers use generative AI – or any technology for that matter – ethical obligations must remain top of mind. This applies both to internal usage and to partnerships with outside consultants. It is becoming clear that generative AI will enhance capabilities for functions like contract management, legal research, eDiscovery, brief drafting, and compliance. After one major mishap made national news last year, in which a lawyer submitted fake citations generated by ChatGPT, state bars have started to issue guidance on the technology's use. The verdict: lawyers can use it but must uphold their core ethical duties.

Trending Guidance

Only the California, Florida, and most recently New York state bars have directly addressed generative AI, but the majority of states will likely follow suit. When a topic is this front and centre and holds real potential to transform the practice of law, states weigh in to ensure lawyers practicing within their borders have a clear understanding of what constitutes ethical behaviour. This happened with cloud computing, when several state bars addressed the appropriateness of adopting that technology. The consensus was that cloud solutions are beneficial and appropriate if lawyers take reasonable care to maintain client confidentiality and data security.

The same trend is materialising with generative AI, with each of these states concluding that lawyers can use it but must uphold their core ethical duties. Below are some highlights.

California

California released its guidelines, “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law,” on Nov. 16, 2023. They are specifically described as guiding principles rather than best practices. The bar recognises the unique challenges and risks that generative AI brings and aims to help lawyers navigate usage while remaining ethical.

California’s practical guidance is comprehensive, covering:

  • the duty of confidentiality;
  • duties of competence and diligence;
  • the duty to comply with the law;
  • the duty to supervise lawyers and nonlawyers;
  • responsibilities of subordinate lawyers;
  • communication regarding generative AI usage;
  • charging for work produced by generative AI and generative AI costs;
  • candour to the tribunal;
  • meritorious claims and contentions;
  • the prohibition on discrimination, harassment, and retaliation; and
  • professional responsibilities owed to other jurisdictions.

A few highlights include not inputting confidential client information into generative AI solutions that lack adequate confidentiality and security protections, having senior lawyers establish clear policies around generative AI usage, and disclosing to clients the intention to use generative AI.

Florida

The Florida Bar issued Ethics Opinion 24-1 on Jan. 19, 2024. Similar to California’s guidance, it allows lawyers to use generative AI as long as they take precautions. Florida’s opinion has a narrower focus, covering only four areas: confidentiality, accurate and competent services, improper billing practices, and lawyer advertising.

To uphold these ethical duties, the Florida bar states:

  • Lawyers must ensure that the confidentiality of client information is protected when using generative AI by researching the program’s policies on data retention, data sharing, and self-learning.
  • Lawyers remain responsible for their work product and professional judgment and must develop policies and practices to verify that the use of generative AI is consistent with the lawyer’s ethical obligations.
  • Use of generative AI does not permit a lawyer to engage in improper billing practices such as double-billing.
  • Generative AI chatbots that communicate with clients or third parties must comply with restrictions on lawyer advertising and must include a disclaimer indicating that the chatbot is an AI program and not a lawyer or employee of the law firm.
  • Lawyers should be mindful of the duty to maintain technological competence and educate themselves regarding the risks and benefits of new technology.

This is just an overview of what to consider when using this technology to aid in the practice of law. The opinion thoroughly addresses each duty.

New York

On April 4, 2024, the New York State Bar Association’s Task Force on Artificial Intelligence released a 92-page report that advises lawyers to disclose to clients when AI tools are employed in their cases. The report speaks to the benefits of AI in improving access to justice and enhancing the efficiency of legal services. However, it cautions legal professionals that over-reliance on AI can come at the expense of human judgement and expertise. The report underscores the importance of transparency and explainability in AI systems, urging legal professionals to understand the underlying mechanisms of AI tools to ensure they align with ethical standards. It also highlights the need for ongoing education to equip legal professionals with the skills necessary to navigate the complexities of AI. Finally, it addresses the crucial role of regulatory oversight in governing AI applications: as AI systems become increasingly complex and pervasive, robust regulatory frameworks are essential to safeguard against potential harms such as bias, discrimination, and privacy infringements.

Future Expectations

There is little doubt that more states will issue opinions or other guidance on this topic. In 2023, other states created task forces focused on responsible and ethical generative AI usage, so additional opinions are likely this year or in 2025. As the technology matures and use cases grow, the legal industry will continue to be optimistic about its potential to improve legal practice. Ethical guidance can help law firms and legal departments craft internal policies and consider the best path forward. It will be interesting to see how future opinions overlap and how the courts eventually rule on issues involving this technology. Until there are more rulings on the topic, it is essential to work with legal services providers who thoroughly understand these ethical implications to ensure compliance.

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.
