Don’t Trust Cupid, Trust the Black Box
- Cyber Breach
- 7 Mins
Valentine’s Day is here. The holiday is easily recognizable since pink and red cards, chocolates, and stuffed animal menageries appear on displays at your local pharmacy about one week after New Year’s. Often, the day centers on feelings of love or friendship and celebrates human relationships. Legend says that on Valentine’s Day, Cupid instinctively knows our true love and unites the pair with his bow and arrow. However, the modern era is not so trusting of a winged baby in a diaper to pick our soulmate. Instead, people have turned to matchmaking sites to help them sort through the dating pool and find the right person.
Many sign up for these sites hoping the apps might make the perfect match, but those same users rarely consider how these services actually work. How does a computer know your true match from just a few questions you answer? These sites use artificial intelligence (AI) to pair like-minded people, and they seem to do a pretty good job of it. Millions of people use AI to help them find their soulmate, yet many in the legal profession have not yet swiped right on AI to help them in their work.
Historically, the legal industry has been hesitant to adopt new technology, eventually coming around after realizing the benefits and recognizing the importance of emerging technologies in business. There was mass skepticism at the introduction of email, legal research platforms, and the cloud, yet they are now critical components of legal practice. The 21st century has brought AI into the arena, and this technology has proven more difficult for the legal industry to adopt than its predecessors. What seems most daunting is the "black box" of AI: not understanding what is happening under the hood.
Without truly grasping what is going on in the background, many lawyers are fearful of integrating AI into their practice and accepting it as a regular business tool.
Here are some key reasons why the legal industry should toss this notion and embrace the AI black box to help with tasks like eDiscovery review, contract analytics, privilege logs, and much more.
Our brains can essentially be viewed as black boxes collecting data. A person’s judgment, knowledge, and experience determine how they make decisions and what outcomes they reach. The same thinking should apply to AI solutions: a black box of technology that uses data and algorithms to determine outcomes.
Think of it in this context: a lawyer is presented with a stack of documents to review for privilege purposes and makes a call about which information to withhold or redact. Or the lawyer is asked to determine, via manual review, which documents to produce in discovery. There is no way to know what is going on in the lawyer’s mind, but the court and opposing parties would not generally question privilege or discovery calls unless something appeared highly suspect. Instead, they would rely on the lawyer’s expertise and ethical obligations to make such determinations. So why question a machine essentially performing the same process? In fact, people can often get more insight into decision-making from an AI solution than from human judgment calls.
AI should not be feared. These solutions produce measurable outcomes and are often more consistent than human review. Lawyers can look back at how a given result came about and use these data-backed insights to defend any questions about methodology or disclosure. Questioning a human about their decisions simply would not provide the same clarity. For example, a lawyer using technology-assisted review (TAR) can compare project output against the original training set to verify why certain documents were flagged as relevant while others were discarded. The ability to thoroughly train these programs improves reliability as well, since there is less room for human error.
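To make the idea of "measurable outcomes" concrete, here is a minimal, hypothetical sketch of the kind of check a TAR workflow enables: comparing the model’s relevance calls against a human-labeled control set to compute precision and recall. The document IDs and the helper function are invented for illustration, not drawn from any specific review platform.

```python
# Hypothetical sketch: validating a TAR pass by comparing the model's
# relevance calls against a human-labeled control set.

def precision_recall(predicted_relevant, truly_relevant):
    """Return (precision, recall) for a set of predicted-relevant doc IDs."""
    true_positives = predicted_relevant & truly_relevant
    precision = len(true_positives) / len(predicted_relevant) if predicted_relevant else 0.0
    recall = len(true_positives) / len(truly_relevant) if truly_relevant else 0.0
    return precision, recall

# Invented document IDs standing in for a review project's output.
model_flagged = {"DOC-001", "DOC-002", "DOC-004", "DOC-007"}
human_labeled = {"DOC-001", "DOC-002", "DOC-003", "DOC-007"}

p, r = precision_recall(model_flagged, human_labeled)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.75
```

Numbers like these are exactly the kind of data-backed record a lawyer can point to when methodology is questioned; no comparable metric exists for a purely manual review.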
Additionally, the AI technology lawyers use generally relies on supervised machine learning. In supervised learning, human reviewers label a sample of training documents, and the algorithm learns from those labels to classify and rank the remaining data. This differs from the unsupervised machine learning used for clustering, near-duplicate detection, and latent semantic indexing, which receives no labeled input from the user and does not rank or classify documents against human-coded categories. AI rooted in supervised machine learning should ease the minds of hesitant lawyers, as it keeps a human element in the loop, helping these solutions learn patterns and predict outcomes on future data.
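The supervised-learning loop described above can be sketched in a few lines. The following toy example is hypothetical and not any vendor’s actual algorithm: human reviewers supply labels ("responsive" or "not responsive") on a handful of example documents, and a simple Naive Bayes classifier learns from those labels to predict a label for a new document.

```python
import math
from collections import Counter, defaultdict

# Hypothetical supervised-learning sketch: a tiny Naive Bayes text
# classifier trained on human-labeled example documents.

def tokenize(text):
    return text.lower().split()

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs coded by human reviewers."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of training docs
    for text, label in labeled_docs:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def predict(model, text):
    word_counts, label_counts = model
    total_docs = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior plus log likelihood with add-one (Laplace) smoothing.
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training_set = [  # labels supplied by human reviewers (invented examples)
    ("merger agreement draft terms", "responsive"),
    ("merger negotiation schedule", "responsive"),
    ("office lunch menu friday", "not responsive"),
    ("holiday party lunch invite", "not responsive"),
]
model = train(training_set)
print(predict(model, "draft merger terms attached"))  # responsive
```

The human element is in the `training_set`: the machine never decides what "responsive" means on its own, it only generalizes the reviewers’ coding decisions to unseen documents.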
Lawyers have trusted many processes and technologies in the past without truly understanding the science. Even something as seemingly simple as email involves code and infrastructure that goes beyond the legal mind. Lawyers, along with the rest of the world, simply trust that it will function as expected. People also accept there is room for error, such as when servers are busy or an email glitches. The same thinking applies to everyday functions like flipping a light switch. People may not understand the complex wiring, transformers, and grid systems behind it, but they certainly rely on the switch without much question. So why not carry this thinking over to AI, especially when the technology can transform the legal industry and improve overall workflows?
Lastly, AI is everywhere. It is so seamless in everyday life that people often do not realize they are using it. "Frictionless AI" highlights that people generally do not question things they are comfortable with. Take Amazon, for example: millions of people around the world use it, and all of them are using AI. Amazon applies AI for several purposes, most notably to understand the context of why people search for specific items. That is why each customer sees different advertised products and why the site can predict what a customer may need next or which search results would be the best match. If people accept AI for functions like this, why not use it to improve legal operations and make better business decisions?
It is normal to question things we do not understand and be skeptical of technology when it’s difficult to grasp precisely how it works. But just as we’ve trusted apps to find our Valentine, suggest new products for our home, and provide tailored recommendations to us, so should we invite AI to make work easier. Who knows? Picking the right AI solution for your business may just be a perfect match.
To learn more about how AI can help with legal challenges, check out our blog on using AI in privilege review: https://www.epiqglobal.com/en-us/thinking/blog/applying-ai-in-privilege-review
The contents of this article are intended to convey general information only and not to provide legal advice or opinions.