AI about more than just harnessing technology

Why it’s time to rethink existing machine learning, neural network, and predictive AI solutions, and shift towards more ‘white box’ models.

This article was originally published in Government News

The transformative tide of the Artificial Intelligence (AI) revolution is washing over the world, promising to reshape societies and systems. Australia finds itself grappling with this wave, with a report from the Minister for Industry, Science and Resources calling the nation “relatively weak” in the type of AI powering global entities like ChatGPT.

Although the country faces a potential shortfall of skilled workers and computing power, the real journey extends far beyond mastering this technology. On the frontier of the digital revolution, Australia is not just confronting AI as a technological force, but as a societal one.

With the Federal Government’s deadline for input on responsible AI just past, regulating this transformative technology to address societal concerns is an impending challenge for Australia and the rest of the world, given that AI’s advances don’t respect national boundaries. However, it’s crucial to emphasise that AI extends beyond generative applications like ChatGPT. The true AI revolution began when regulators and governments started adopting AI auto-classification for data risk and value, outpacing the wider market.

Today, AI’s applications are vast, spanning rules-as-code, machine learning, and governance, all areas in which the government continues to develop expertise.


Rachael Greaves, CEO, Castlepoint Systems

Navigating ethics and compliance

The potential of AI to democratise services and information, improve healthcare, combat climate change, and address many other global challenges cannot be overstated. We must also appreciate the potential for AI to create new professions we cannot yet conceive of, just as the internet gave rise to app developers and social media managers.

The ultimate challenge is ensuring these benefits are distributed equitably across society.

As Australia follows in the footsteps of the European Union, the United States, the United Kingdom, and New Zealand in implementing AI regulations, we’re entering a period of introspection. Technologies that affect citizens will need to be analysed for compliance, creating a possible upheaval of existing algorithm-supported services. This shift towards new regulation brings with it a ripple effect of pressure and anxiety that may, ironically, inhibit AI adoption more than the supposed lack of skilled workers or computing power. To this end, Australia’s foray into AI regulation must include safeguards against inevitable security risks and mandate transparency to protect consumers and businesses.

Minister Ed Husic is not reinventing the wheel with AI regulation. The concepts of responsible AI have been around for many years, and Australia was one of the first economies to publish a Responsible AI Ethics Framework.

These existing frameworks, along with those from UNESCO, the G20, and the OECD, embody a universal understanding of the standards necessary when an algorithm impacts a person: transparency, explainability, and contestability. They are our tools to ensure any AI system, no matter how sophisticated or simple, can be subject to scrutiny.

AI’s potency does not lie solely with innovative generative systems. Any algorithm that is inaccurate and not subject to transparency or explainability can cause irreparable harm. Cases such as Robodebt in Australia and the Horizon scandal in the UK illustrate how automation accelerates the propagation of damaging algorithms. The debate continues over the thresholds for ‘high-risk’ systems, but the concept of ‘harm’ is universal, and inevitably linked to AI.

As Australia approaches AI regulation, the strategies of the EU, likely one of the first jurisdictions to enforce an AI Act, will be pivotal. Even in a post-Brexit world, the UK did not abandon or reimagine the EU’s General Data Protection Regulation (GDPR), a sign of the potential global reach of the EU’s AI Act.

There may also need to be a substantial pivot by AI companies: rethinking existing machine learning, neural network, and predictive AI solutions, and shifting towards more ‘white box’ models. However, if vendors can meet the expectations of one significant jurisdiction, they are likely to meet the expectations of all. As it stands, only three per cent of companies are ‘mature’ with regard to responsible AI, according to a report from the Gradient Institute and Fifth Quadrant, which describes “little change in the overall performance of Australian organisations in developing and implementing Responsible AI systems” since 2021.

The AI journey is about more than harnessing technological power. It’s about ensuring the responsible use of a tool with the potential to reshape society profoundly. In this journey, the narrative we shape around AI will be a testament to our commitment to transparent and responsible innovation, a commitment that will determine our position in the future of AI.