

Is the AI Revolution Creating Information Governance Problems?

  • Information governance
  • 5 Mins

The world is powered by data and technology. With growing focus and new legislation on data privacy, organizations need to keep governance at the forefront when reconciling the use of emerging technologies with compliance and privacy considerations. This becomes particularly important when organizations use platforms, solutions, and other technologies powered by artificial intelligence (AI) for business purposes. Examples include creditors automating decisions on consumer approvals and employers using AI technology to assist with hiring determinations. The question becomes: what framework should be placed around these processes, and how can organizations implement governance that avoids violations of laws like the EU’s General Data Protection Regulation (GDPR), as well as the broader data mismanagement that can lead to a myriad of other issues?

Keep reading for help on how to answer these big questions.

Getting a Better Grasp on AI

To make smarter governance decisions, the first step is expanding knowledge about how AI operates. While it is not necessary to be an algorithm expert, it is important to understand that this technology can accumulate an extremely large amount of consumer data, which in turn can trigger various compliance obligations. Organizations using AI for business functions already understand the basics: this machine learning technology relies on trained algorithms that learn to detect trends and automate a variety of human tasks. To improve AI governance, however, there needs to be a larger focus on the data these systems rely upon to make inferences. Where does this information come from? What are the collection and storage protocols? Are the resulting patterns accurate? Most importantly, what is happening with sensitive consumer data? Evaluating these questions minimizes the chance that AI usage runs afoul of legal obligations, especially those rooted in privacy.

Having a handle on this aspect of AI will also contribute to stronger information governance practices. When an organization better understands the tools it deploys in the regular course of business, it gains more insight into what data those tools collect and store. This strengthens information governance, brings unknown obligations to the surface, and lessens the risk of noncompliant behavior.
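One practical way to answer these questions is to keep an internal inventory of the data each AI tool touches. The sketch below is a minimal, hypothetical catalog entry written in Python; the field names, categories, and review logic are illustrative assumptions, not a regulatory checklist.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AIDataSourceRecord:
    """One entry in a hypothetical inventory of data feeding an AI tool."""
    system_name: str             # the vendor platform or internal model
    data_categories: List[str]   # e.g., ["contact details", "credit history"]
    source: str                  # where the data originates
    storage_location: str        # server, region, or vendor environment
    lawful_basis: str            # consent, contract, legitimate interest, or "unknown"
    contains_sensitive_data: bool
    retention_period_days: int
    last_reviewed: date = field(default_factory=date.today)

def flag_for_review(records: List[AIDataSourceRecord]) -> List[AIDataSourceRecord]:
    """Surface entries most likely to trigger compliance obligations."""
    return [r for r in records
            if r.contains_sensitive_data or r.lawful_basis == "unknown"]
```

Even a lightweight record like this makes it easier to say where data came from, how long it is kept, and which entries involve sensitive information.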

Data Governance

As organizations navigate the digital age and the accompanying data boom, governance should be a top concern. When dealing with AI technology, there needs to be a greater level of accountability and transparency. Organizations can look to their general data privacy framework and expand it to address AI’s unique challenges. Pay close attention to compliance obligations under privacy laws and determine how AI fits into them. One definite issue to address is that large amounts of private consumer data could be living on a company server simply because an algorithm made an inference. If that data is accurate, there is the concern of having failed to obtain consumer consent, which violates the GDPR and other new privacy laws. If the data is inaccurate, then under these laws consumers should be able to challenge the inferences made about them.

So, how can organizations remedy this moving forward? A mixture of preventative measures and auditing needs to be deployed. As touched on above, the first step is setting a specific framework around privacy and AI that provides guidelines on how to handle collected consumer data and the resulting inferences. Proper responses, like whether to delete or retain data or to provide consumer notification, will be situation dependent. Take the example of a company using AI to help recruit and hire new talent. A process like this has the potential to acquire large volumes of sensitive data and to make inferences about why someone is suitable or unqualified for a certain position. A data retention timeline and a notification to potential applicants that a profile will be generated could be two key components of an AI privacy framework, and they can help spark the creation of information policies for everyone in the organization who uses these solutions. In general, self-reporting can address several of the consumer privacy issues that AI usage poses for organizations.
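To make the retention timeline and applicant notification ideas concrete, here is a minimal sketch in Python. The retention period, notice wording, and function names are hypothetical assumptions for illustration, not legal requirements, and a real framework would set them per jurisdiction and purpose.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative retention window for applicant data; adjust per policy and jurisdiction.
APPLICANT_DATA_RETENTION = timedelta(days=180)

def is_past_retention(collected_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True once applicant data has outlived the assumed retention window."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at > APPLICANT_DATA_RETENTION

def applicant_notice(role: str) -> str:
    """Hypothetical notice telling applicants that an AI-assisted profile will be generated."""
    return (
        f"As part of your application for {role}, an automated tool will generate "
        "a candidate profile from the information you provide. You may request "
        "details of this processing or ask for human review at any time."
    )
```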

Next, there needs to be some level of governance around what is done with sensitive data. One option is creating GDPR-style protocols that grant consumers the right to delete data collected by these solutions and to challenge inaccurate inferences that AI makes about them. Using the recruiting example, say bad data is fed into the solution and renders an individual unqualified for the job. Allowing potential applicants access to this information, or the right to request human review, upholds the principle of accuracy. It also provides a level of screening while preserving the advantages of using AI for the task, such as cutting down the time a fully manual review would take. Other suggestions for AI governance include conducting internal audits, creating steering committees, appointing individuals to review AI ethics within the organization, checking in periodically with employees to ensure they follow AI privacy policies and protocols, updating education as new technologies emerge and compliance obligations change, performing risk assessments, and clearly defining objectives in AI project plans.
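A framework like this ultimately needs a simple intake path for consumer requests. The sketch below shows one hypothetical way to route deletion, challenge, and human-review requests in Python; the request types and handler behavior are assumptions for illustration, and a real system would connect each branch to internal data stores and review workflows.

```python
from dataclasses import dataclass
from typing import Literal

# Request categories assumed for illustration; terminology varies by regulation.
RequestType = Literal["delete_data", "challenge_inference", "human_review"]

@dataclass
class ConsumerRequest:
    consumer_id: str
    request_type: RequestType
    details: str = ""

def handle_request(req: ConsumerRequest) -> str:
    """Route a consumer rights request; each branch would call real internal systems."""
    if req.request_type == "delete_data":
        # Erase the consumer's records from the AI tool and any downstream stores.
        return f"Deletion queued for consumer {req.consumer_id}"
    if req.request_type == "challenge_inference":
        # Flag the disputed inference and suppress it pending review.
        return f"Inference for consumer {req.consumer_id} flagged as disputed"
    # human_review: hand the automated decision back to a person.
    return f"Human review requested for consumer {req.consumer_id}"
```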

What is Next for Data Privacy and AI?

Data privacy is always evolving, especially as new technologies emerge and integrate into the business world. With AI governance, there will undoubtedly be challenges and a learning curve. A major obstacle to anticipate when creating AI governance plans is what to do when an organization collects data because it thinks the data might eventually be valuable to run through an AI solution. This is where the AI black box dilemma arises, since it is unknown whether the data will present an advantageous business opportunity until it is actually run through the program. Another challenge is harmonizing the GDPR’s push for data minimization with AI’s reliance on big data. While issues like these pose potential conflicts, creating committees to test solutions like self-reporting and increased transparency will help carve out a path for better AI governance and increase the effectiveness of AI oversight as we all navigate the digital era.

For more information on AI and information governance, please check out our latest podcast, “Would you bury your driver's license?”

To learn how we are working with our clients to streamline eDiscovery and investigation review work with pre-built AI models, read on here.

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.
