The new government ethical AI principles and what they mean for information management automation

Yesterday, Minister for Industry, Science and Technology Karen Andrews announced the AI Ethics Framework, consisting of a set of principles and guidance on how and when to apply them. In essence, the government hopes these principles will be used by anyone designing or implementing an AI-based system that produces outcomes that could impact people, the environment, or society.

We know that information and records management impacts people in several ways. Firstly, if we can’t find our existing information, we can fail to provide the best outcomes – never forget that the Australian government deported one woman and locked another one up unlawfully, largely because they could not find existing records about them.

Secondly, if we keep high-risk information (such as sensitive personal information, commercial-in-confidence records, classified information or sensitive financial information) longer than we need to, we increase our risk. The longer we hold it, the more likely it is to be caught up in an accidental or malicious disclosure. The more unnecessary high-risk information we maintain, the greater the impact of a breach.

On the flip side, if we dispose of high-value information too soon, we can’t provide the best outcomes for customers and stakeholders, and we can’t make best use of our own intellectual property. Information is an expensive asset to create, and if we don’t curate it properly we lose that investment.

And finally, if we can’t guarantee the integrity of our information, we can’t rely on it. When we need to show evidence or reasoning behind a significant decision, or show that a record has not been altered over time, we are at a loss without good records controls behind those documents or datasets.

So what does this mean for automated AI records and information systems? A few things:

  • We need to design our AI systems to be good at discovery, and be sure that they are comprehensive in their searches. If we are going to rely on AI to find information, we need to trust that it has looked everywhere and found everything. In our Castlepoint automation case studies, we have reduced the cost of FOI actions by nearly 99%, so we know AI can speed up discovery – but we also need to show that we have found everything that was requested.
  • We need to use AI to find risk data. Just applying a Records Authority or two is not enough for records management compliance. We also need to apply other policies to really understand our risk data. Remember, we don’t have to dispose of things when they reach minimum retention, and we often don’t bother – but for risky information, we should. Knowing where risk is helps target and focus records disposition activities.
  • We also need to make sure AI understands our business. We can’t base our AI models on old Business Classification Schemes or outdated information architectures – we need to always keep a weather eye on what’s important to our business right now and going forward. AI runs the risk of locking in old models and just repeating them ad infinitum, essentially trapping us in the past.
  • Finally, our AI systems have to show their working, and be evidence-based and transparent. One of the principles of the Ethics Framework, and in fact of the existing Administrative Review Council guidance on use of expert systems, is that machines should not automate the exercise of discretion. Where a decision will impact people, society or the environment in a significant way, there needs to be human oversight. The machine needs to show you how it reached a recommendation, so that you can easily determine whether that recommendation is in fact valid, before it is executed. This is how we keep systems transparent, contestable and accountable.
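The "show your working" point above can be sketched in code. This is a minimal illustrative sketch, not any particular product's implementation: all class names, rule labels and evidence fields here are hypothetical. The idea is simply that a recommendation object carries its own evidence trail, and the system refuses to execute it until a human has reviewed and approved it – the machine recommends, but does not exercise discretion.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Recommendation:
    """A disposition recommendation that carries its own evidence trail."""
    record_id: str
    action: str                 # e.g. "destroy", "retain", "review"
    rule_applied: str           # the retention rule that triggered it
    evidence: List[str] = field(default_factory=list)  # why the rule matched
    approved_by: Optional[str] = None  # set only after human review

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

def execute(rec: Recommendation) -> str:
    # The system will not act on a recommendation no human has reviewed.
    if rec.approved_by is None:
        raise PermissionError(f"{rec.record_id}: no human approval recorded")
    return f"{rec.action} executed on {rec.record_id} (approved by {rec.approved_by})"

rec = Recommendation(
    record_id="DOC-1042",
    action="destroy",
    rule_applied="Retention class 7.2 (minimum 7 years, elapsed)",
    evidence=["last modified 2011-03-04", "no legal hold",
              "classified: routine correspondence"],
)

try:
    execute(rec)  # blocked: discretion has not yet been exercised by a person
except PermissionError as err:
    print(err)

rec.approve("records.manager")
print(execute(rec))
```

Because the evidence list travels with the recommendation, the reviewer can check the machine's reasoning before approving – which is what keeps the system transparent, contestable and accountable.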

Artificial Intelligence is a necessary next step for records and information management. We can’t continue to manage exponentially growing data, increasing regulation, and higher compliance expectations of citizens and partners without it. But we have to do it right. These ethical principles are an important step in grounding the burgeoning AI movement in the reality of the human condition.