
Nothing to Fear Here – Using AI Tools Responsibly Breeds Success


The news is flooded with stories about artificial intelligence (AI) tools: some shining a light on the benefits, others intended to provoke fear and panic. While the latter has bred some scepticism about AI usage in business, it is worth taking a step back and looking at what actually drives the errors. Often it is not the tool itself but the human behind the tech. The reality is that many organisations are using AI successfully and without mishap.

Every organisation – regardless of industry – should understand how to use AI safely and responsibly. Most are likely already aware of the usual risks of AI tools and other emerging technologies: inherent bias, cybersecurity gaps, and lack of transparency, to name a few. However, many forget to factor in the human component. It is crucial to understand the benefits and risks of both the human and the technological contributions to changing workflows. This helps teams build strategies and systems that draw out the best in both while effectively managing the potential downsides.

AI in the News

A notable AI story in the media recently concerned a New York lawyer who used ChatGPT to help draft a brief, with disastrous results. The output included convincingly cited cases that did not in fact exist. After opposing counsel discovered the false citations and raised the issue with the court, the lawyer asked ChatGPT itself to verify the accuracy of the decisions, and the tool fabricated further detail, attributing the existence of the cases to legal research search engines. The lawyer responded that this was his first time using ChatGPT as a supplement to legal research and that he was unaware the tool could create false information.

In this case, the court recently imposed sanctions, including a USD 5,000 fine imposed jointly and severally on counsel and their law firms, and an order to write to each judge falsely identified as the author of the fake opinions and to provide them with the transcript of the hearing before the court, including copies of the fake opinions.

Since this incident, two federal judges now require counsel to file generative AI certifications with any document submitted to the court. Counsel must attest either that they did not use generative AI or that, if they did, a human checked the resulting information for accuracy. Legal analysts across the country have weighed in on this requirement, some calling it duplicative overkill and others finding it necessary given the new risks this technology presents.

Other instances of “scary” AI in the news include stories about the technology being put to harmful scientific uses and predictions that AI will replace jobs. While headlines like these can heighten fear about using these tools for business purposes, there are just as many stories on the other side of the coin that explore the benefits of AI and show how responsible usage can limit risk.

Using AI Responsibly

From culling documents in a review dataset to facial recognition software and HR recruitment tools, AI has proved highly beneficial across industries. With generative AI now on the scene, organisations are considering how this technology can also benefit their businesses. It is crucial to examine both the benefits and the risks in order to pinpoint the best use cases. In many instances the benefits will outweigh the risks, and there are best practices that will curb the fear. Learning how to use these tools safely and responsibly is the key.

Staying educated is the most important way to use AI responsibly. Keep up with news about things going awry and treat each incident as a learning experience. In the case described above, for example, the lawyer could have mitigated the risk much earlier had he checked the sources at the outset rather than subsequently asking ChatGPT to justify its own research. Simply running the citations through a legal research database after using generative AI as a starting point would have exposed the fake cases and avoided potentially sanctionable behaviour. The fact that the lawyer used ChatGPT was not the issue; the manner in which he used it was. Fact-checking and quality control of the technology’s output will always remain the responsibility of human lawyers, who must stay technologically competent and check their work or face court sanctions, ethical violations, reputational harm, and lost business.
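To make that verification step concrete, the sketch below shows a simple human-in-the-loop check that flags any citation that cannot be confirmed against a trusted source before a brief is filed. It is a minimal illustration in Python; the KNOWN_CASES set and the case names are hypothetical placeholders standing in for a query to a real legal research database, not an actual API.

    # Minimal sketch of pre-filing citation verification.
    # KNOWN_CASES stands in for a trusted legal research database;
    # in practice, citation_exists would query that service instead.

    KNOWN_CASES = {
        "Real Case v. Example Corp., 123 F.3d 456",  # placeholder verified authority
    }

    def citation_exists(citation: str) -> bool:
        # Illustrative lookup only; replace with a real database query.
        return citation in KNOWN_CASES

    draft_citations = [
        "Real Case v. Example Corp., 123 F.3d 456",
        "Fabricated v. Nonexistent, 999 F.3d 1",  # the kind of cite a model can invent
    ]

    unconfirmed = [c for c in draft_citations if not citation_exists(c)]
    if unconfirmed:
        print("Do not file; verify these citations manually:", unconfirmed)

The point is not the code itself but the workflow it enforces: generative AI output is a starting point, and nothing reaches the court until a human has confirmed every authority.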

Turning to the fear of job loss, a recent Goldman Sachs report indicated that AI could replace the equivalent of 300 million full-time jobs. News like this raises alarm at first read, but it is worth remembering that even when technology has “replaced” jobs in the past, it has opened up new roles and different opportunities. AI is currently used in most industries as a supplemental tool; the human component is still necessary, it may just look different.

It is also crucial to remember that there are more generative AI tools available than ChatGPT. Tech companies are building systems designed for specific industries, as well as models that can be deployed privately within an organisation’s own infrastructure, as sketched below.
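As an illustration of the private-deployment option, the sketch below assumes an organisation hosts its own model behind an OpenAI-compatible endpoint (a pattern supported by self-hosted servers such as vLLM), so prompts and documents never leave its infrastructure. The endpoint URL, key, and model name are placeholders, not real services.

    # Sketch: sending generative AI requests to a privately hosted model
    # instead of a public service. Assumes a self-hosted server exposing
    # an OpenAI-compatible API; all identifiers below are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://internal-llm.example.local/v1",  # internal endpoint
        api_key="internal-placeholder-key",               # not a public API key
    )

    response = client.chat.completions.create(
        model="internal-model",  # whatever model the organisation hosts
        messages=[{"role": "user", "content": "Summarise this clause: ..."}],
    )
    print(response.choices[0].message.content)

Keeping inference in-house in this way mitigates some of the risks noted earlier, particularly cybersecurity gaps, because sensitive data never crosses the organisation’s boundary.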

Conclusion

The takeaway here is that generative AI can be just as beneficial as any other tool when organisations account for the human component and use it responsibly. Delegate tasks wisely, with a practical understanding of the limitations as well as the advantages of all the different resources for delivering services, both AI and human. Doing so will allow organisations to reap the game-changing benefits this technology offers with confidence. Given all the attention, more AI regulation is on the horizon, and organisations must continue to monitor developments in this area and implement policies governing appropriate tech usage for business tasks.

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.
