What ACS members need to know about the updated Policy for the Responsible Use of AI in Government

The Australian Government has released Version 2.0 of its Policy for the Responsible Use of AI in Government, effective 15 December 2025.

This updated policy sets a stronger national direction for how government agencies adopt, govern and oversee artificial intelligence. For ACS members working across technology, policy, service delivery and AI development, the policy marks an important shift in how government will lead by example in responsible AI practice.

A more rigorous governance framework for AI in the public sector

The policy recognises that AI is rapidly transforming government operations, improving data analysis, enhancing service delivery and enabling more responsive decision-making. However, it also acknowledges the need for government to be held to a higher ethical and safety standard than the private sector.

Version 2.0 strengthens the governance framework by requiring agencies to:

  • establish transparent AI strategies
  • designate accountable officials and accountable use-case owners
  • create and maintain an internal AI use-case register
  • conduct risk-based impact assessments for in-scope AI use cases

These measures are designed to ensure that AI is adopted with clear accountability, oversight and public trust in mind.

Clear expectations on transparency and accountability

Agencies must now publish an AI transparency statement explaining how they adopt and use AI, update it annually, and notify the Digital Transformation Agency (DTA) of any changes. This requirement creates consistency in how government communicates its AI use and lifts public trust through visible, ongoing disclosure.

Each agency must also:

  • define its strategic position on AI adoption within six months
  • designate responsible officials who oversee compliance and implementation

These changes embed responsibility within senior leadership, ensuring AI adoption is not left solely to technical teams.

What counts as an in-scope AI use case?

An AI use case is in scope if its failure or misuse could cause more than insignificant harm, if it influences administrative decisions, if it affects the public without human review, or if it uses sensitive or classified data.

Examples requiring careful assessment include:

  • automated decision-making
  • recruitment tools
  • systems used in justice, health, education or border control
  • AI involved in critical infrastructure

Early-stage experimentation is excluded, provided it does not introduce security or privacy risks.

Scaling requirements based on risk

Agencies must undertake an AI use case impact assessment for every in-scope AI application. High-risk use cases require formal governance through a designated board or senior executive, must be reviewed annually, and must have their residual risks and mitigations transparently documented and reported to the DTA. For medium-risk use cases, agencies are encouraged to apply enhanced governance where appropriate.

This risk-tiered approach ensures that the most consequential uses of AI receive the highest level of oversight.

Building capability across the APS

All government staff must complete mandatory responsible AI training within 12 months. Additional specialised training is encouraged for teams working in AI procurement, development, deployment or policy. The policy also strongly recommends that agencies adopt the Australian Government AI Technical Standard.

This is a significant signal of the skills uplift required across the public sector.

What this means for ACS members

This policy has direct relevance for ACS members working with or alongside government agencies. Members should expect:

  • greater demand for professionals with AI governance, risk, ethics and assurance expertise
  • clearer procurement guidance and higher expectations for vendors delivering AI systems
  • increased collaboration between public and private sectors to build AI capability
  • opportunities to support government with AI training, technical standards and responsible AI implementation

For those working in or consulting to government agencies, understanding the policy’s requirements, particularly around transparency, accountable use-case ownership, risk assessment and incident management, will be essential.

How ACS members can get involved

ACS members can play an active role by:

  • supporting agencies to build AI capability and prepare staff for mandatory training
  • contributing expertise to AI governance, impact assessments and risk-mitigation strategies
  • aligning products, services and consulting approaches to the policy’s accountability and transparency standards
  • participating in ACS forums, working groups and thought leadership activities on responsible AI

ACS will continue to advocate for strong, practical frameworks that balance innovation with community protection, while helping members stay ahead of regulatory shifts in AI adoption.

Read the complete document here: Download the full policy