The use of artificial intelligence (AI) is growing across the public sector as entities seek to leverage its potential to deliver services more efficiently and effectively. As a result, it is increasingly being incorporated into new and existing processes and information systems.
At the same time, AI introduces ethical risks that require attention from executive leaders. There are 8 principles that provide a foundation for responsible use of AI, setting expectations around wellbeing, human values, safety, fairness, transparency, accountability, privacy, and contestability. It’s important for leaders to have confidence in how their entities identify and manage these risks.
Our report, Managing the ethical risks of artificial intelligence (Report 2: 2025–26), examined how Queensland Government entities identify and manage ethical risks when using AI.
Practical tips for executive management
Our audit highlighted effective practices and areas needing attention across the entities we examined. These insights are particularly useful for public sector leaders. In this blog, we focus on practical actions that can help leaders strengthen oversight, manage ethical risks, and ensure AI is used responsibly.
Have you updated your governance arrangements?
A good place to start is by checking whether your governance structures support oversight of AI. Consider who is accountable for decisions, how risks are monitored, and whether staff understand their responsibilities when using AI.
The Queensland Government introduced the AI governance policy in September 2024. It sets expectations for departments and statutory authorities to align governance with ISO/IEC 38507 Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organizations. This standard can help leaders identify whether their governance arrangements align with international best practice.
Leaders should regularly review and update governance arrangements to keep pace with AI use across their entity. This includes confirming that roles and responsibilities are clear, oversight committees are functioning effectively, and reporting lines provide timely information on risks and controls.
Questions you can ask include:
Do you have visibility of how AI is used?
AI is not just appearing in new systems. It is also being added to existing programs, sometimes running in the background without users knowing. Because AI is so widely available and accessible, staff may be using it to support decision-making without leaders being aware. This is why it is important for decision-makers to have visibility of where and how AI is applied, the data it is using, and the decisions it is supporting. This means ensuring your entity has processes, systems, and controls to proactively identify what AI is being used, intentionally or otherwise.
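To make this concrete, the sketch below shows what one entry in an entity-wide register of AI use might capture. The structure and field names are illustrative assumptions, not a prescribed schema; the point is that every AI capability, including those embedded in existing systems, is recorded alongside the data it draws on, the decisions it supports, and who is accountable for it.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseRecord:
    """One entry in an entity-wide register of AI use (illustrative schema)."""
    system_name: str                    # the program or system where AI appears
    ai_capability: str                  # what the AI does: summarisation, scoring, triage, ...
    embedded_in_existing_system: bool   # AI added to an existing program, not a new build
    data_used: list[str] = field(default_factory=list)            # data the AI draws on
    decisions_supported: list[str] = field(default_factory=list)  # decisions it informs
    accountable_owner: str = ""         # who is accountable for its use
    last_reviewed: date | None = None   # None means it has never been reviewed

def unreviewed(register: list[AIUseRecord]) -> list[AIUseRecord]:
    """Surface entries that have never been reviewed -- candidates for early attention."""
    return [r for r in register if r.last_reviewed is None]
```

A register like this gives leaders a single view of intentional and incidental AI use, and a simple query such as unreviewed() highlights where oversight has not yet caught up.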
Good visibility helps leaders identify the potential benefits and risks early to make informed decisions about how AI is applied.
Questions you can ask include:
Are ethical risks understood and managed?
Ethical risk assessments are a key tool for identifying and responding to ethical considerations. They help determine how closely an AI system should be monitored, what controls are needed, and how oversight should be applied.
AI systems differ in purpose, complexity, and potential impact, so understanding these factors allows governance and risk management to be tailored appropriately. While your entity may already do some risk assessments for privacy and security, using an ethical framework, such as the Foundational AI Risk Assessment (FAIRA), can help you identify a broader range of risks, including the following (a simple sketch of how this can feed into oversight follows the list):
bias in decision‑making
lack of transparency
unintended impacts on individuals and communities.
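As a minimal sketch of how an ethical risk assessment can feed directly into oversight decisions, the Python below records findings against illustrative risk areas and maps the worst rating to a monitoring tier. It is not FAIRA’s actual template; the areas, scales, and tiers are assumptions an entity would replace with its own risk matrix.

```python
from dataclasses import dataclass
from enum import Enum

class EthicalRiskArea(Enum):
    """Illustrative risk areas a broader ethical assessment can cover."""
    BIAS_IN_DECISION_MAKING = "bias in decision-making"
    LACK_OF_TRANSPARENCY = "lack of transparency"
    COMMUNITY_IMPACT = "unintended impacts on individuals and communities"

@dataclass
class EthicalRiskFinding:
    area: EthicalRiskArea
    likelihood: int   # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int       # assumed scale: 1 (minimal) to 5 (severe)
    mitigation: str   # planned control, e.g. "human review of outputs"

    @property
    def rating(self) -> int:
        # Simple likelihood x impact score; entities would use their own matrix.
        return self.likelihood * self.impact

def monitoring_tier(findings: list[EthicalRiskFinding]) -> str:
    """Map the highest-rated finding to how closely the system should be monitored."""
    worst = max((f.rating for f in findings), default=0)
    if worst >= 15:
        return "enhanced oversight"
    if worst >= 8:
        return "standard monitoring"
    return "routine review"
```

The useful idea is the link itself: the assessment’s output directly determines how closely the system is monitored and what controls are required, as described above.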
Questions you can ask include:
Are mitigation strategies in place and working?
Understanding the controls in place, such as human review of AI outputs, limits on the use of sensitive data, or configuring AI systems to align with the entity’s context and values, is an important part of actively managing ethical risks.
These strategies should be regularly tested and reviewed. Regular checks, internal audits, or independent reviews can provide assurance that AI systems operate safely, fairly, and transparently, and that any gaps or unintended outcomes are addressed promptly. Leaders should consider whether the information and assurance they receive is sufficient to understand the effectiveness of controls. If you are not confident in the current level of oversight, seek additional reporting or an independent review.
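As one concrete illustration of a control named above, the sketch below routes AI outputs to a human reviewer before they are acted on. The fields, function, and threshold are hypothetical; real routing rules would come out of your entity’s ethical risk assessment.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    content: str
    confidence: float        # model-reported confidence, 0.0 to 1.0 (assumed available)
    uses_sensitive_data: bool

def requires_human_review(output: AIOutput, threshold: float = 0.9) -> bool:
    """Illustrative policy: anything touching sensitive data, or falling below
    a confidence threshold, is held for a person to review before action."""
    return output.uses_sensitive_data or output.confidence < threshold

# Example: a low-confidence recommendation is held for review, not auto-actioned.
draft = AIOutput(content="Recommend approval", confidence=0.72, uses_sensitive_data=False)
assert requires_human_review(draft)
```

A control like this is only effective if it is tested: a regular check might sample recent outputs and confirm that everything meeting these conditions actually received a human review.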
Questions you can ask include:
|
Do you have a plan to build staff capability?
Executive leaders should have a clear view of how their entity is developing AI capability across the workforce. They need to know whether there is a structured plan to equip staff with the necessary skills and knowledge, and to ensure staff use AI in ways that align with organisational values.
All staff play a key role in managing the ethical risks of AI. Policies and controls only work if staff understand and are willing to apply them.
Questions you can ask include:
|
Are you committed to continuous improvement?
Managing ethical risks in AI is an ongoing process. AI technology and its applications evolve quickly, and so must the way ethical risks are managed. Executive management should actively champion continuous improvement, ensuring AI risk management evolves as technology and ethical expectations change.
Questions you can ask include:
What advice and guidance is available?
A growing body of advice and guidance is available to help executive leaders manage the ethical risks associated with AI.
Appendix C of our report, Checklist for managing ethical risks in artificial intelligence, provides practical questions for those responsible for governance to assess their current arrangements, identify gaps, and implement improvements to manage AI ethically and effectively. Use this checklist to progressively strengthen oversight, embed ethical risk management into everyday AI use, and maintain public trust in government services.
Australia’s AI Ethics Principles – These 8 voluntary principles provide a foundation for responsible use of AI. They set expectations around fairness, transparency, accountability, privacy, and contestability, among other values.
Voluntary AI Safety Guardrails – Released by the Australian Government, these guardrails give practical advice on safe design, testing, and deployment of AI. They encourage entities to put limits in place for high-risk applications, strengthen transparency for users, and ensure humans remain ‘in the loop’ for critical decisions.
National AI Assurance Framework – This framework highlights when risks should be reassessed during the AI lifecycle, from design and testing through to deployment and ongoing use. Building these checkpoints into governance helps entities stay ahead of emerging risks rather than reacting to them later.
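One way to build such checkpoints into governance is to define the lifecycle stages and the events at each stage that trigger a reassessment. The stages below follow the broad lifecycle described above, but the triggers are illustrative assumptions an entity would tailor to its own context.

```python
from enum import Enum, auto

class LifecycleStage(Enum):
    """AI lifecycle stages at which ethical risks are reassessed."""
    DESIGN = auto()
    TESTING = auto()
    DEPLOYMENT = auto()
    ONGOING_USE = auto()

# Illustrative reassessment triggers per stage (assumptions, not the framework's wording).
REASSESSMENT_TRIGGERS: dict[LifecycleStage, list[str]] = {
    LifecycleStage.DESIGN: ["new use case proposed", "new data source added"],
    LifecycleStage.TESTING: ["testing reveals bias or error patterns"],
    LifecycleStage.DEPLOYMENT: ["go-live", "material configuration change"],
    LifecycleStage.ONGOING_USE: ["periodic review due", "model or vendor update"],
}
```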
By applying these resources, entities can ensure that AI systems are safe, ethical, and aligned with Queenslanders’ expectations.
Resources
Reports
Managing the ethical risks of artificial intelligence | Queensland Audit Office
Fact sheets
Delivering successful technology projects | Queensland Audit Office
Learnings for ICT projects | Queensland Audit Office
Blogs
Setting up technology projects for success | Queensland Audit Office