Australia’s AI moment: Responsible adoption and strategic capability

Dr Mark Pedersen, Vice Chair of the ACS AI & Ethics Committee, summarises Australia’s approach to responsible AI adoption with unambiguous clarity, and highlights how ACS members can seize what he describes as Australia’s AI moment.

Australia’s AI journey is entering a pivotal phase. The opportunity for productivity and innovation is clear; the challenge lies in ensuring that AI is adopted in ways that align with our values, protect our rights, strengthen trust, and build strategic capability.

A national framework for responsible AI adoption

The National Artificial Intelligence Centre (NAIC) recently launched its Guidance for AI Adoption at the AI Leadership Summit, a practical framework designed to help Australian organisations navigate AI adoption responsibly.

At the heart of the guidance are six essential practices:

  1. Decide who is accountable
  2. Understand impacts and plan accordingly
  3. Measure and manage risk
  4. Be transparent about AI usage
  5. Test and monitor AI-enabled systems
  6. Maintain human control

This guidance builds on the NAIC’s Voluntary AI Safety Standard by providing practical implementation guidelines and supporting tools, including an AI policy template, an example AI risk assessment process, and an AI register template.
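
To make the register idea concrete, the sketch below shows the kind of fields such a template might capture, mapped to the six practices above. It is a minimal illustration only; the field names are this article’s assumptions, not the NAIC’s actual template.

    # Illustrative sketch only: field names are assumptions, not the NAIC's template.
    from dataclasses import dataclass

    @dataclass
    class AIRegisterEntry:
        """One row in an organisational AI register (hypothetical schema)."""
        system_name: str          # what the AI-enabled system is
        accountable_owner: str    # who is accountable (practice 1)
        purpose_and_impacts: str  # intended use and affected stakeholders (practice 2)
        risk_rating: str          # outcome of the risk assessment (practice 3)
        users_notified: bool      # transparency about AI usage (practice 4)
        monitoring_plan: str      # how the system is tested and monitored (practice 5)
        human_override: str       # where a human can intervene (practice 6)

    entry = AIRegisterEntry(
        system_name="Resume screening assistant",
        accountable_owner="Head of People & Culture",
        purpose_and_impacts="Shortlists applications; affects job applicants",
        risk_rating="High",
        users_notified=True,
        monitoring_plan="Quarterly bias audit and drift checks",
        human_override="Recruiter reviews every shortlist before release",
    )
    print(entry)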

The guidance also includes pointers to relevant legislation covering the development and use of AI, which help organisations establish their risks and obligations. The goal is to create a sound starting point for AI adoption, reduce confusion about which regulatory requirements may apply, and help organisations begin the journey towards more comprehensive AI governance standards, such as ISO/IEC 42001, if required. The pattern is similar to the relationship between the Essential Eight and ISO 27001 in cybersecurity.

For ACS members and the broader ICT profession, this is significant. The NAIC’s AI Adoption Guidance is a clear roadmap for both public and private sector organisations: embed responsible practices early, build governance frameworks for AI use, and engage transparently so that the benefits of AI can be realised, public trust strengthened, and those benefits distributed fairly. Organisations that can demonstrate alignment with these practices will be better placed to build trust with stakeholders, manage risk and harness the benefits of AI at scale.

 

Sovereign AI, copyright and strategic autonomy

Recently, the topic of sovereign AI capability has been in the spotlight, both nationally and at the state level, as the public and private sectors have debated what might be required to be “makers rather than takers” in the AI arena. 

There’s a clear call to develop the capability to build our own AI solutions. Notable local initiatives include Sovereign Australia AI’s Australis LLM and Maincode’s Matilda LLM, and startup Sunflower.ai recently showcased its live captioning and translation solution at SXSW Sydney.

The sustainability of AI infrastructure is an increasingly important consideration, and there is a growing need for local capability to build and deploy AI solutions that don’t rely on massive compute infrastructure. Part of the solution is to use smaller, more specialised models, in tandem with breakthroughs in how computation is done. On that front, Australia is in the race to deliver practical quantum computing: Silicon Quantum recently announced working examples of how quantum computing can accelerate deep learning and reduce energy consumption at Telstra.
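
To illustrate the “smaller, more specialised models” point, the sketch below runs a compact, task-specific model locally using the open-source Hugging Face transformers library. The library and model choice are illustrative assumptions for this article, not part of the initiatives named above.

    # Illustrative sketch: a small, task-specific model running locally,
    # rather than a frontier-scale model behind a remote API.
    # Requires: pip install transformers torch
    from transformers import pipeline

    # DistilBERT fine-tuned for sentiment analysis: roughly 66M parameters,
    # small enough to run on a laptop CPU.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    result = classifier("The new AI adoption guidance is genuinely useful.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]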

Of course, international players will always be in the mix. So a key element of the sovereign AI discussion is how we retain control of data and intellectual property, and what role sustainable local infrastructure should play. This October, the federal government announced that it will not proceed with a broad “text and data mining” exception to copyright law, which would have allowed large-scale use of protected works to train AI without permission. In doing so, Attorney-General Michelle Rowland emphasised that protecting Australia’s creative industries remains a vital part of the national AI agenda.

For ACS members, this underscores a dual message: on the one hand, Australia needs to build capability, infrastructure and trusted AI systems; on the other, we must safeguard our rights, content and cultural assets. Strategies for AI adoption must be developed in sync with legal, governance and ethical frameworks that reflect the Australian context.

 

Implications for ACS members and the ICT profession

The release of the NAIC’s AI Adoption Guidance and the broader policy context provide several takeaways for ACS members:

  • Governance, ethics and risk roles will expand: With the six practices as a baseline, professionals with expertise in data governance, model oversight, ethics, auditing and AI risk management will be in increasing demand.

  • Organisations must shift mindset: It’s not enough to purchase or deploy an off-the-shelf model. The questions now include: What is our accountability? What processes govern the lifecycle of the model? How do we measure impacts? How do we ensure human oversight? (A minimal illustration of that last question appears after this list.)

  • Domestic capability matters: While the national narrative around sovereign AI may still be evolving, the implication is clear: Australia wants to operate AI systems in a way that is aligned with our values, rights and rules. Professionals who can bridge technology, governance, and policy will be well placed.

  • Ethical and social dimensions are integral: The Guidance emphasises human-centred design, fairness, inclusion, and rights protection. This elevates the role of tech professionals beyond technical deployment into shaping how AI interacts with society.
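
On the human-oversight question, the sketch below shows one minimal form such a control can take: an AI recommendation is held until a named reviewer explicitly approves it, and the decision is logged for auditability. The pattern is a hypothetical illustration, not a prescribed mechanism from the Guidance.

    # Minimal human-in-the-loop gate (hypothetical illustration, not from the Guidance):
    # no AI recommendation takes effect until a named human reviewer signs off.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        subject: str
        action: str
        model_confidence: float

    def apply_with_human_control(rec: Recommendation, reviewer: str) -> bool:
        """Show the recommendation to a human and act only on explicit approval."""
        print(f"{rec.action} recommended for {rec.subject} "
              f"(model confidence {rec.model_confidence:.0%})")
        approved = input(f"{reviewer}, approve? [y/N] ").strip().lower() == "y"
        # Log the human decision alongside the model output for auditability.
        print(f"AUDIT: reviewer={reviewer} approved={approved}")
        return approved

    rec = Recommendation("loan application #1042", "decline", 0.87)
    if apply_with_human_control(rec, reviewer="credit officer"):
        print("Action executed after human approval.")
    else:
        print("Action blocked pending human review.")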

 

Looking ahead

Australia’s future with AI is less about racing to replicate global-scale lab models and more about adopting the right frameworks to scale responsibly, create value aligned with Australian interests, and build trust across society. 

For ACS members, the call is to engage proactively: review your organisation’s AI readiness, embed governance structures, align your work with the six essential practices, and position yourself as a steward of responsible AI, not just an implementer.

In doing so, Australia can move beyond being a passive user of AI to being an active shaper of its impact, anchored in our ecosystem, our rights, our values, and our economic potential.

Mark Pedersen 

Vice Chair, ACS AI & Ethics Committee