Explainable AI (XAI) and new laws for data governance

Something serious is happening across advanced economies, and lawmakers are moving faster than ever before to regulate Artificial Intelligence (AI). What does this mean for your governance, risk, and compliance (GRC) team’s existing and planned investments in AI for autoclassification?

Why all the fuss?

ChatGPT exploded onto the scene in November 2022, and almost immediately, regulators became concerned. The new tool was adopted quickly by users in both personal and professional capacities, and security breaches and privacy incidents soon followed. While tech leaders around the world were sounding the alarm about the existential risks to humanity posed by such powerful, but fallible, AI, legislators quickly homed in on the practical problems.

We have had best practices for AI for a long time. Some include:

  • Recommendation on the Ethics of Artificial Intelligence (UNESCO): This states that AI systems should be auditable and traceable, and developed with Transparency and Explainability (T&E). AI systems must not displace ultimate human responsibility and accountability.
  • OECD AI Principles: These state that AI actors should commit to transparency and responsible disclosure regarding AI systems, so that those affected by an AI system can understand the outcome, and those adversely affected can challenge it, based on plain, easy-to-understand information about the factors and logic that served as the basis for the prediction, recommendation, or decision.
  • G20 AI Principles: These draw from, and align with, the OECD principles and recommendations.

These standards were designed to protect the most vulnerable in our communities from bias, disadvantage, and harm caused by adverse outcomes. They essentially require any decision arrived at using AI to be explainable and transparent, so it can be challenged if it is unfair. Many people have been harmed by bad algorithms, from the Horizon Post Office scandal to Robodebt. But these standards have always been essentially voluntary recommendations, and most AI systems are still very opaque, even those used by governments and large regulated corporations.

What’s happened?

Since the flurry of discourse on the implications of ChatGPT and other emerging AI hit the headlines, governments have had to respond. They have commenced the process of implementing new, binding laws requiring AI systems to be transparent and explainable. Canada has proposed the AI and Data Act. The EU’s Artificial Intelligence Act is in progress. The United States has a draft Artificial Intelligence Bill of Rights. All of these broadly require that:

  • Algorithm-based decisions which affect individuals cannot be made without human oversight
  • Algorithmic or automated decisions must be explainable where they have a detrimental impact
  • The explanation of how a decision was reached has to be meaningful, useful, and valid

So what does this mean for you?

The issues for governance teams

If you have adopted AI to help make decisions, such as whether to share someone’s data, protect it, preserve it, or destroy it, you will need to be able to explain how the AI arrived at the recommendation or outcome.

If you are currently using ‘black box’ AI, such as machine learning, neural networks, predictive AI, and Large Language Models (LLMs) like ChatGPT, to classify data, you may not be able to do so for much longer. If you can’t show how the AI reached the classification decision, and explain it simply enough that it can be challenged, you will likely be barred from using it whenever the outcome of the decision has any effect on an individual. For example:

  • Your AI classified and sentenced someone’s personnel file, and told your records team it could be disposed of under law. It was destroyed, but it contained information the former staff member actually needed for a compensation case. They want to know exactly why it was destroyed.
  • Your AI determined a dataset was NOT classified or sensitive, and it was then shared with a third party. This breached the privacy of an individual, and you can’t explain exactly why the algorithm decided it was safe to share.
  • Your AI determined a dataset WAS sensitive, and so you declined to return it as part of a Subject Access Request or other discovery. The requestor wants an explanation of why the AI thought this content was so risky it couldn’t be shared with them.

But there’s a simple solution, right? We just need human oversight: someone to carefully review the decisions the AI recommends before enacting them. The problem is that with black box AI, the governance team can’t see the why or wherefore of the algorithm, so you will have to go back to the source material and check it ‘by hand’.

In which case… have we actually saved any time or effort? Are we getting value for money from our AI, when we have to manually check the machine’s workings each time by referring back to the original source material?

So, can we use AI for information governance?

Yes, we can. Explainable AI (XAI), also called ‘white box’ AI, is a well-developed field and is increasingly used around the world. Castlepoint is built using an XAI approach called Rules as Code. In this model, the AI is trained on the actual policies and regulations, not on the source data. It then matches data it finds in the environment to those rules, showing exactly why an item or record falls within the scope of a particular obligation.
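To make this concrete, here is a minimal sketch of how a rules-as-code classifier can carry its own explanation. The rules, citations, and patterns below are hypothetical examples invented for illustration; this is not Castlepoint’s actual rule set or implementation, just a demonstration of how a match can record exactly which obligation applied and why.

```python
# A minimal, illustrative rules-as-code sketch.
# All rule names, citations, and patterns are hypothetical examples.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    obligation: str      # the policy or regulatory obligation the rule encodes
    citation: str        # where the obligation comes from
    patterns: list[str]  # expressions that bring content into scope

# Hypothetical rules, written directly from the governing instruments
RULES = [
    Rule(
        obligation="Retain personnel records for 7 years after separation",
        citation="Example Records Authority, class 2.1 (hypothetical)",
        patterns=[r"\bpersonnel file\b", r"\bemployment history\b"],
    ),
    Rule(
        obligation="Restrict sharing of health information",
        citation="Example Privacy Act, s. 12 (hypothetical)",
        patterns=[r"\bmedical record\b", r"\bdiagnosis\b"],
    ),
]

def classify(text: str) -> list[dict]:
    """Return every rule the text falls under, with the evidence for each match."""
    findings = []
    for rule in RULES:
        hits = [m.group(0)
                for p in rule.patterns
                for m in re.finditer(p, text, re.IGNORECASE)]
        if hits:
            findings.append({
                "obligation": rule.obligation,
                "citation": rule.citation,
                "matched_terms": hits,  # the explanation: exactly why this rule applied
            })
    return findings

if __name__ == "__main__":
    sample = "Personnel file for J. Smith, including employment history and diagnosis."
    for finding in classify(sample):
        print(finding)
```

Because each classification carries the rule’s citation and the matched terms, the ‘why’ behind the decision is available to anyone who needs to review, audit, or challenge it.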

As well as being inherently explainable and transparent, this model is also much more efficient and scalable. With black-box models, you usually need to curate large amounts of training data and then supervise the learning process. This is a significant burden for governance teams, and it has to be repeated for every new policy or rule. XAI is much simpler to implement, and can be up and running in hours, without putting that curation and training burden on your team.
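Continuing the hypothetical sketch above, bringing a new policy into scope is a matter of encoding it as another rule. There is no training corpus to curate and no model to retrain.

```python
# Hypothetical example only: a new obligation is encoded directly as a rule,
# reusing the Rule class from the sketch above. No retraining is required.
RULES.append(
    Rule(
        obligation="Flag defence-related material for security review",
        citation="Example Security Policy, part 4 (hypothetical)",
        patterns=[r"\bsecurity clearance\b", r"\bdefence contract\b"],
    )
)
```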

When do I need to tackle this?

A few examples of new, and newly prioritised, regulations and directives are included below. Depending on your jurisdiction, you may already be required to demonstrate explainability for your AI systems. If not yet, then soon. If you’re not sure how to take steps towards ethical AI, or whether your current investment is future-proofed for these new laws, contact our expert regulatory team for advice and support.

Canada

  • AI and Data Act (Digital Charter Implementation Act – proposed): Rules for the responsible development and deployment of artificial intelligence, including requiring AI systems to manage the risk of bias
  • Bill 64 (Quebec privacy amendments – proposed): Includes transparency and explainability obligations for automated decision making
  • Consumer Privacy Protection Act (CPPA – proposed): Includes transparency and explainability obligations for automated decision making

European Union

  • Artificial Intelligence Act (in progress): Regulates AI data quality, transparency, human oversight, and accountability, based on the risk classification of the AI system
  • Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data (Amending Protocol): Includes rights for individuals in an algorithmic decision-making context
  • General Data Protection Regulation (GDPR) Article 22: An algorithm-based decision which produces legal effects or significantly affects the data subject may not be based solely on automated processing of data. A form of appeal should be provided when automated decision-making processes are used (as under the earlier Data Protection Directive). Individuals have the right to contest any automated decision made on a solely algorithmic basis

United Kingdom

  • Data Ethics Framework: A planned mandatory transparency obligation on all public sector organisations using algorithms that have an impact on significant decisions affecting individuals, resulting from research commissioned by the Centre for Data Ethics and Innovation
  • Data Protection Act: Allows automated decisions to be challenged by affected stakeholders
  • Understanding artificial intelligence ethics and safety – FAST Track Principles (Alan Turing Institute): Ensure fairness, accountability, sustainability, and transparency in AI. AI systems should be fully answerable and auditable. Organisations should be able to explain to affected stakeholders how and why a model performed the way it did in a specific context

United States

  • Artificial Intelligence Bill of Rights (draft): Addresses algorithmic discrimination, notice and explanation, and data privacy in AI systems. Automated systems should provide explanations that are technically valid, meaningful, and useful. Individuals should be able to opt out of automated systems in favour of a human alternative
  • California Privacy Rights Act: Requires businesses to provide “meaningful information about the logic involved in [automated] decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer”
  • NIST AI Risk Management Framework (AI RMF 1.0): Characteristics of trustworthy AI systems include: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed

New Zealand

  • Algorithm Charter for Aotearoa New Zealand: Maintain transparency by clearly explaining how decisions are informed by algorithms, including plain English documentation of the algorithm
  • Treaty of Waitangi/Te Tiriti and Māori Ethics Guidelines for AI, Algorithms, Data and IoT: Application of the Treaty principles, including the need for transparency that ensures “Māori people, whānau, hapū, Iwi and organisations are clear about how AI learning is generated and why this information is used to inform decisions that affect Māori”
  • Trustworthy AI in Aotearoa AI Principles: The operation and impacts of an AI system should be transparent, traceable, auditable, and generally explainable to a degree appropriate to its use and potential risk profile, so outcomes can be understood and challenged, particularly where they relate to people. AI stakeholders should retain an appropriate level of human oversight of AI systems and their outputs