

AI and the Legal Profession: Hot Topics Before US Legal and Regulatory Bodies

The artificial intelligence (AI) wave continues to gain momentum as more organizations explore potential use cases. Generative AI in particular seems to be the topic of the year in the news, at work, and at home. Two common emotions weave their way into every AI discussion – curiosity and fear. What business processes can these tools enhance? How can teams get buy-in from their organization? These are the questions posed by those eager to learn more and find beneficial ways to integrate AI tools.

Looking at the latter, there has also been skepticism about what generative AI can do and how risky it is to use this technology in business. Security, discriminatory outcomes, and privacy are all top concerns. What many are discovering, however, is that it is not the tool itself driving errors but the human behind the technology. Understanding how to deploy AI in a safe and responsible manner will breed success and highlights the importance of the human component.

Regulatory and legal bodies are recognizing that AI is here to stay. They are analyzing the benefits and risks from both human and technological standpoints. New legislation, regulatory rules, and court guidance can help organizations navigate AI and use these tools safely. Here are three hot topics in this area to monitor.
 

#1: State Legislation

It is unsurprising that the states have started to consider and pass legislation on AI usage. This has materialized in several ways. One example is states addressing AI through their broader consumer privacy laws. Many enacted laws provide opt-out rights for automated decision-making technology and profiling. Others require organizations to conduct risk assessments for certain processing activities, and AI usage can fall within the prescribed categories.

Some states have additional laws on the books addressing narrow categories of AI usage. For example, California's Bolstering Online Transparency Act requires organizations to disclose that communication is occurring via a bot when attempting to incentivize a sale or influence votes in an election. A few states also have laws placing restrictions on using AI for hiring purposes, including the Illinois AI Video Interview Act, and Maryland's workplace AI law addresses the use of facial recognition technology during pre-employment interviews.

The above is only a snapshot and the beginning of what is to come. Since generative AI has taken center stage, more states have started to introduce AI bills. As of early September, there were 12 active proposed bills; several others introduced over the past few years have failed. The proposed legislation ranges from bills focused on generative AI to measures aimed at mitigating unlawful discrimination in automated decision tools, and much more.

Absent a federal law regulating AI, the states will continue to attempt to pass their own. Some will be tailored to specific topics or folded into broader laws, while other states may attempt comprehensive legislation regulating AI. However, legislators and analysts are struggling to define parameters that embrace innovation without opening the risk floodgates. The likely result is a patchwork of AI guidance that parallels consumer data privacy regulation.

A few predictions for 2024: more narrowly focused AI bills, legislators and courts seeking education on use cases and risk assessment, and a push for federal legislation. A few states have already created an unofficial group to collaborate on broad AI parameters. If this proves productive, new bills will likely share some uniformity in definitions and regulatory focus.
 

#2: SEC Conflict of Interest Rule

Regulatory bodies also recognize the importance of addressing AI and related technologies, with the Securities and Exchange Commission (SEC) recently voting to propose a new rule. If adopted, the rule would regulate AI usage by broker-dealers and investment advisers. The SEC recognizes that firms have accelerated their use of new technologies, and its fact sheet on the topic provides succinct reasoning for the proposal:
 

“When the use of these technologies is optimized for investor interests, it can bring benefits in market access, efficiency, and returns. To the extent that firms use these technologies to optimize in a manner that places their interests ahead of investor interests, investors can suffer harm. Due to the scalability of these technologies and the potential for firms to reach a broad audience at a rapid speed, any resulting conflicts of interest could cause harm to investors in a more pronounced fashion and on a broader scale than previously possible.”


The proposal contains three main requirements. First, firms must neutralize conflicts of interest that arise when AI tools advance the firm's interests over those of its investor clients. Second, firms must implement policies and procedures designed to prevent violations and ensure compliance with the rules. Third, firms must maintain clear records when dealing with a conflict situation falling under the purview of the rules. According to the intended definition of covered technologies, the rules would apply to the major AI categories – reactive, limited-memory, and theory-of-mind.

The public comment period ends on October 10, so changes may be forthcoming. There has been some opposition that could influence the next steps. For example, some opponents view the rule as a way to ban certain technologies. Others argue that the rules could deprive investors of AI's benefits if firms stop using a tool to avoid the costs associated with compliance. Interested parties should watch whether the SEC formally adopts the rule over the coming months.
 

#3: Court Disclosure and Certification

Most have heard the story of the lawyer who used ChatGPT to help draft a brief that ended up citing fake cases the tool created. This incident has prompted several judges to implement standing orders requiring counsel to submit generative AI certifications. Whether such disclosure is necessary remains a prominent issue. Proponents argue that certification can promote responsible AI use by ensuring lawyers review the information that AI tools generate; it puts the court on notice, saves judicial resources, helps lawyers maintain their reputations, and avoids delay.

However, many in the legal community have spoken out about the potential downsides of these orders. Some think they are duplicative. Another concern is that the lack of consistency could cause confusion or make lawyers reluctant to use the technology in ways that could be beneficial. The Judicature article "Is Disclosure and Certification of the Use of Generative AI Really Necessary?" discussed potential alternatives. First, it may be better for district courts to issue local rules on the use of generative AI tools to promote uniformity and avoid adverse consequences. Second, providing public notice may be a better route than creating new rules; such notice would point to the already existing obligation to verify factual and legal representations in court filings, including those drafted with the assistance of generative AI tools.

While it will be interesting to see how the certification issue evolves, the simple fact is that fact-checking and quality control over technology output will always remain a human responsibility. Lawyers must remain technologically competent and check their work or risk court sanctions, ethical violations, reputational harm, and lost business.
 

Conclusion

AI is not going anywhere and will continue to be a hot topic for years to come. Staying apprised of issues before legislators, regulators, and the courts is not optional. It is imperative for safely integrating emerging technologies into everyday business operations and maintaining compliance across the board.
 

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.
