Ethical Use of AI – a new problem

The new Australian Signals Directorate Director-General, Ms Rachel Noble PSM, spoke today at the ANU National Security College. Castlepoint CEO Rachael Greaves was invited to attend as a representative of the Australian information security technology ecosystem.

The topic was ‘Long histories, short memories: the transparently secret ASD in 2020’, and the speech is available on the National Security College YouTube channel.

ASD is taking steps towards partnering with industry on national security, embedding staff across vendors and government to make sure the best use is made of technology and security intelligence. Ms Noble was asked by the Head of the ANU National Security College, Professor Rory Medcalf, ‘what are the emerging or critical technologies that would worry you from a security perspective’; in other words, what capability should be ‘sovereign’, developed by and for Australians? Ms Noble’s advice: all sectors should focus on the fundamentals:

  • Where is my data?
  • What is happening to it?
  • Who is it being shared with, and am I ok with that?
  • Who has access to it?
  • Who is controlling it?
  • How is it being used?

Answering the questions

We now have technology that can answer all of these questions. Castlepoint reads everything in the network, continually, and identifies the risky and valuable information. The system tracks and records everything that happens to that information, and alerts on activities that may be problematic. It shows how data items, systems, and whole information functions are being used across the enterprise, giving organisations a complete picture of their environment for the first time. As Ms Noble says, getting these fundamental questions answered is the most important thing we can do for security, from the personal level all the way to the national scale.
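
To make the pattern concrete, here is a minimal, hypothetical sketch of the kind of ‘read, classify, track, alert’ loop described above. The risk terms, policy rules, and data structures are assumptions chosen for illustration only; they are not Castlepoint’s actual implementation or API.

```python
# Illustrative sketch only: classify content by simple risk rules, keep an
# audit trail of every access, and alert on activity that breaks a policy.
# All terms, labels, and rules here are assumed for the example.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Assumed classification rules: term found in content -> risk label.
RISK_TERMS = {"tax file number": "PII", "medical record": "Health", "secret": "Classified"}

@dataclass
class AuditEvent:
    item_id: str
    user: str
    action: str        # e.g. "read", "share", "delete"
    timestamp: str

@dataclass
class DataItem:
    item_id: str
    location: str      # which system or share the item lives in (where is my data?)
    content: str
    risk_labels: set = field(default_factory=set)
    history: list = field(default_factory=list)   # what is happening to it?

def classify(item: DataItem) -> None:
    """Tag the item with any risk labels whose terms appear in its content."""
    lowered = item.content.lower()
    for term, label in RISK_TERMS.items():
        if term in lowered:
            item.risk_labels.add(label)

def record_event(item: DataItem, user: str, action: str, authorised_users: set) -> list:
    """Append an audit event and return any alerts the event triggers."""
    event = AuditEvent(item.item_id, user, action, datetime.now(timezone.utc).isoformat())
    item.history.append(event)
    alerts = []
    # Simple illustrative policy: flag risky items touched by unauthorised users,
    # and flag any sharing of classified material.
    if item.risk_labels and user not in authorised_users:
        alerts.append(f"ALERT: {user} performed '{action}' on risky item {item.item_id}")
    if action == "share" and "Classified" in item.risk_labels:
        alerts.append(f"ALERT: classified item {item.item_id} shared by {user}")
    return alerts

if __name__ == "__main__":
    item = DataItem("DOC-001", "hr-share", "Employee tax file number: 123 456 789")
    classify(item)                      # what is this data, and how risky is it?
    print(item.risk_labels)             # {'PII'}
    for alert in record_event(item, "guest", "read", authorised_users={"hr_officer"}):
        print(alert)                    # who has access to it, and am I ok with that?
```

Even at this toy scale, the sketch shows why the fundamentals listed above matter: the value is not in any one rule, but in keeping the classification, the audit trail, and the alerting connected for every item in the environment.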

A sovereign capability?

Now that we have this capability to read and understand everything inside a network, have we really solved a problem, or just traded it for a new one?

Castlepoint has command and control over all data, which is a risky power to have. We can manage that risk effectively with secure platforms, access management, and tight governance. But is this the only risk? Once we can see and understand everything, we face two new risks.

  1. Plausible deniability. Once Castlepoint identifies data spills, handling breaches, and weak controls, who in the organisation is available to remediate them? Once we use technology to understand risk at ‘machine speed’, as Ms Noble describes it, how do the humans keep up? Once we have identified issues that were previously obscured, failing to address them leaves us liable in a way we may not have been while we could plead ignorance. For this reason, we need to either take a measured, stepwise approach to fully understanding our data, or increase human resources before we start. We address this by working with clients to develop privacy and breach response plans alongside the rollout, and by providing expert advice and support. ASD and other national security bodies must plan for the resource impact curve that comes with adopting more powerful tools.
  2. Abuse of power. We know the saying ‘absolute power corrupts absolutely’. Castlepoint is one solution to the problem of fraud and misuse of information, because of the transparency of its auditing and monitoring. But who watches the watchmen? At this stage, the only thing preventing our company from selling this capability to those who would misuse it is our own ethics. As home-grown technology such as Castlepoint attracts a more global client base, what steps should ASD and Defence be taking to oversee or limit this?

In our culture, we tend to think of every problem as having a practical and final solution. But this is not universally true. Many other cultures see things more holistically and more cyclically: every problem may have a solution, but that solution will inevitably lead to new problems. We must always be thinking ahead to how the solution of today may become the threat of tomorrow. At Castlepoint this is a core part of our governance, with a long planning horizon that considers risk and ramifications. Partnering with leaders in security thinking, such as the National Security College and ASD, is one way to keep testing those ideas and protecting against the risks they expose.