


Defensibility Considerations With AI: Best and Worst-Case Scenarios
The use of AI in eDiscovery introduces opportunities and challenges, particularly when it comes to defensibility. Legal practitioners must be prepared to justify their use of AI tools under both ideal and adversarial conditions. Analyzing best- and worst-case scenarios provides a framework for assessing AI defensibility. In favorable circumstances, AI review is held to the same standard as human review, allowing for efficient and confidential workflows. In more contentious situations, the producing party may face scrutiny and demands for transparency. It is crucial to consider what practical cooperation looks like between parties, emphasizing the importance of reasonableness, transparency, and adherence to established principles.
What All eDiscovery Workflows Have in Common
Meet-and-confer discussions are a requirement in litigation and are essential to ensuring that both sides understand the parameters of discovery and raise concerns early on.
Discussions should cover the sources and formats of electronically stored information (ESI), methods for data preservation and collection, and protocols for search and review. Proactively addressing these topics helps ensure that discovery is conducted in a manner that is reasonable, proportional, and defensible, while also minimizing the risk of disputes or sanctions later in the case.
Another commonality is the potential for mistakes in manual or technology-assisted review (TAR). These risks necessitate quality control measures and documentation. The presence of mistakes alone does not undermine the defensibility of a workflow; what matters is whether the overall process was reasonable and proportional.
Precision, recall, and elusion rates are applicable across these methodologies. This consistency allows courts and parties to assess a review process without bias toward the technology used. The key is transparency and the ability to demonstrate thoughtful design and execution.
Best-Case Scenario: AI-Facilitated Review Held to the Same Standard as Human Review
In the best-case scenario, AI review has the same level of trust and scrutiny as traditional human review. This approach aligns with Sedona Principle 6, which states that the producing party is best situated to determine the appropriate technologies and methods for its production.
There is no requirement to disclose evaluation metrics such as precision and recall, although calculating them is recommended. Just as with human review, the absence of shared metrics does not imply a lack of rigor, only that the producing party is not compelled to expose its internal quality control measures unless a deficiency is alleged.
However, use of AI for the purposes of searching documents and designating them for preservation is risky, whether disclosed to opposing counsel or not. AI may prove to be useful in the identification of custodians and data stores subject to legal hold, so long as no filtering beyond date ranges is applied. If, for any reason, there is a loss of ESI that should have been subject to a legal hold, then the producing party must demonstrate that it took reasonable steps to avoid loss.
Worst-Case Scenario: The Use of AI Is Scrutinized in Every Possible Way
In the worst-case scenario, the opposing party challenges production adequacy and demands full transparency into the AI review process. This includes requests to disclose the prompts used to guide AI under the argument that these prompts influence the scope and nature of the review. While this level of scrutiny is uncommon, it may arise in high-stakes litigation or when there is a history of discovery disputes.
The producing party may also face demands to disclose every step of the workflow, including preprocessing, filtering, and post-review validation. The burden of such disclosure can impact timelines and risk potential exposure of privileged information.
Another demand is access to “the AI reasoning or decision-making process.” While this may be infeasible given the opaque nature of large language models (LLMs), many tools log the chain of thought “reasoning” output by the underlying LLM in use. Ultimately, courts may balance the requesting party’s need for transparency with the producing party’s right to maintain confidentiality over its tools and methods.
Opposing counsel could also argue that AI may have biases that skew review results. In such cases, the producing party may need to demonstrate that steps were taken to assess and mitigate bias, even if such evaluations are not typically required in human review.
While these demands can be burdensome, they underscore the importance of maintaining thorough documentation and being prepared to defend the process if challenged.
Throughout every step of discovery, it is critical to maintain audit trails that document every action taken. If questions arise about the integrity of the process, a well-maintained audit trail provides evidence that appropriate procedures were followed.
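As a minimal illustration of the kind of audit trail described above (a hypothetical sketch, not a schema from the article or from any particular eDiscovery platform), each action can be recorded as a JSON line that embeds a hash of its own contents, making later tampering detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(actor: str, action: str, doc_id: str, detail: str) -> str:
    """Build one tamper-evident audit log line as JSON.

    The SHA-256 hash covers the record's contents, so any later edit
    to the line can be detected by recomputing the hash.
    All field names here are illustrative, not a standard.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "doc_id": doc_id,
        "detail": detail,
    }
    payload = json.dumps(record, sort_keys=True)
    record["sha256"] = hashlib.sha256(payload.encode()).hexdigest()
    return json.dumps(record, sort_keys=True)

# Example: record that a model classified a document as responsive.
line = audit_entry("review-model-v1", "classify", "DOC-00042",
                   "predicted responsive, score=0.91")
```

Appending such lines to a write-once log gives each step of the workflow a timestamped, verifiable record that can be produced if the integrity of the process is questioned.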
Evaluating With Established Metrics
Evaluating the review process in eDiscovery requires applying established metrics to ensure it is effective and defensible. Three of the most critical metrics are unbiased estimates of recall, precision, and elusion for the review population. These three metrics should be used to evaluate any review workflow, whether technology-assisted or otherwise.
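The three metrics can be sketched concretely. The following (an illustrative example, with counts invented for demonstration) computes precision, recall, and elusion from the standard confusion-matrix counts; in practice these counts would come from an unbiased sample of the review population:

```python
def review_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute precision, recall, and elusion from confusion-matrix counts.

    tp: responsive documents the review flagged (true positives)
    fp: non-responsive documents the review flagged (false positives)
    fn: responsive documents the review missed (false negatives)
    tn: non-responsive documents correctly excluded (true negatives)

    Elusion is the fraction of the discard pile (documents the review
    did NOT flag) that is actually responsive.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    elusion = fn / (fn + tn) if (fn + tn) else 0.0
    return {"precision": precision, "recall": recall, "elusion": elusion}

# Hypothetical sample: 1,000 flagged docs (800 truly responsive),
# 9,000 discarded docs (100 of which were actually responsive).
metrics = review_metrics(tp=800, fp=200, fn=100, tn=8900)
# precision = 800/1000, recall = 800/900, elusion = 100/9000
```

Reporting all three together matters: high precision alone can mask poor recall, and a low elusion rate is often the most direct evidence that little responsive material remains in the unproduced set.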
In Practice: What Does Cooperation Look Like?
Cooperation between parties is essential to avoid disputes and delays. For the producing party, this means being transparent about the use of LLM technology. Sharing high-level information about the technology used and the evaluation metrics obtained can help build trust and avoid conflict.
At the same time, the requesting party must act reasonably. Just as it would be inappropriate to demand the opposing party’s review protocol or seed set in a traditional TAR workflow, it is equally unreasonable to demand exhaustive details about an LLM-based review process without a specific basis for concern. Cooperation requires recognizing the balance between transparency and strategic confidentiality.
As the legal community adapts to AI, a shared commitment to fairness, efficiency, and defensibility will be critical in shaping the future of eDiscovery. Ultimately, defensibility in eDiscovery, whether using AI or not, rests on principles of reasonableness, proportionality, and good faith.
The original, full version of this blog is on ACEDS’ website and can be viewed here.
Lilith Bat-Leah has extensive experience managing, delivering, and consulting on eDiscovery, including identification, preservation, collection, processing, review, analysis, and production of digital data. She also has experience in research and development of eDiscovery software. Lilith is a regular participant in negotiations around eDiscovery and has provided expert testimony in domestic and international court proceedings. She specializes in the application of statistics, analytics, machine learning, and data science within the context of eDiscovery. Lilith writes and speaks on various topics, including ESI protocols, statistical sampling, and technology-assisted review. She co-chairs the DMLR working group with MLCommons, serves as an advisor to the Common Crawl Foundation, sits on the ACEDS New York Chapter board, and is a member of Sedona Conference Working Groups 1 and 13. Lilith was also a founding board member of the ACEDS Chicago Chapter and served on the EDRM Global Advisory Council. She is a graduate of Northwestern University, magna cum laude.

Ronald J. Hedges, Principal, Ronald J. Hedges LLC
Ron served as a United States Magistrate Judge in the District of New Jersey for over 20 years. He speaks and writes on a variety of topics, many of which are related to electronic information, including procedural and substantive criminal law, information governance, litigation management, and the integration of new technologies such as artificial intelligence (AI) into existing information governance policies and procedures. He was a member of the AI task forces of the New Jersey and New York state bar associations and is now a member of the permanent AI committees of both Bars. Ron is also a member of the Founders Circle of the Georgetown Law Advanced eDiscovery Institute.
The contents of this article are intended to convey general information only and not to provide legal advice or opinions.