Interim Guidance on Government Use of Public Generative AI Tools in Australia

The “Interim Guidance on Government Use of Public Generative AI Tools,” updated on November 22, 2023, is a comprehensive document directing Australian Public Service (APS) staff in the responsible use of generative AI tools. The guidance, which will continue to be updated in response to public feedback and evolving policy needs, sets out principles and practical advice to ensure that the deployment of these tools aligns with ethical, legal, and operational standards.

Key Highlights

1. Context and Purpose

  • The guidance is iterative, influenced by public consultation on AI use in Australia.
  • It aims to align with agency-specific policies and ICT obligations.
  • The document addresses the unique opportunities and risks posed by rapidly evolving generative AI technologies.

2. Golden Rules

  • APS staff should be able to explain and take ownership of decisions made with AI assistance.
  • Caution is advised against entering sensitive information into public AI tools.

3. Principles in Practice

  • Accountability: Staff should critically analyze AI outputs and ensure human oversight.
  • Transparency and Explainability: Usage of AI tools should be clear and justifiable.
  • Privacy Protection and Security: Sensitive information must not be entered into public AI tools.
  • Fairness and Human-Centered Values: Consider potential biases in AI outputs.
  • Human, Societal, and Environmental Wellbeing: Use AI tools in line with APS values and for community wellbeing.

4. Tactical Guidance – Dos and Don’ts

  • Align AI tool use with departmental policies and ICT obligations.
  • Exercise caution with AI-generated files and links to avoid security risks (a simple link-checking sketch follows this list).
  • Continuously monitor the performance of AI tools against intended purposes.
  • Report non-compliance to appropriate authorities within the agency.
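
The guidance itself does not prescribe any tooling, but the point about treating AI-generated links with caution can be made concrete with a small pre-check that only links pointing to pre-approved domains are opened without further review. This is a minimal sketch under stated assumptions: the allowlisted domains are illustrative placeholders, not an official APS list.

```python
from urllib.parse import urlparse

# Illustrative allowlist only -- an agency would maintain its own approved sources.
TRUSTED_DOMAINS = {"abs.gov.au", "data.gov.au", "legislation.gov.au"}


def is_trusted_link(url: str) -> bool:
    """Return True only if the link's host is (a subdomain of) a trusted domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)


# Treat anything outside the allowlist as needing manual security review.
for link in ["https://www.abs.gov.au/statistics", "http://example-download.xyz/file.exe"]:
    print(link, "->", "trusted" if is_trusted_link(link) else "review before opening")
```

A check like this only supplements the guidance's requirement to report concerns through agency channels; it does not replace an agency's own security processes.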

5. Use Cases

  • Generating initial content without revealing sensitive details.
  • Creating documents or presentations with non-sensitive information.
  • Utilizing AI-powered search tools while safeguarding organizational and tender-related information.
  • Analyzing publicly available datasets while being wary of AI-generated biases or inaccuracies.

The guidance provides specific use cases to illustrate how APS staff can responsibly and effectively use generative AI tools in their work. These use cases highlight practical applications of the guidance principles while addressing potential risks and ethical considerations. More detailed descriptions follow:

1. Generating ‘First Pass’ Content

  • Scenario: Nick, an APS staff member, needs to develop a project plan.
  • Use of AI: He considers using ChatGPT to create a baseline project plan.
  • Guidance: Nick is advised to avoid entering any sensitive details about the project, such as names, specific requirements, or staff involved, to prevent the unintentional disclosure of confidential information (a minimal prompt-screening sketch follows this use case).
  • Purpose: The AI tool can provide a template or starting point, but sensitive or specific details must be kept confidential.
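
The guidance leaves it to staff to keep sensitive details out of prompts; a lightweight pre-submission screen can make that habit easier to follow. The sketch below is illustrative only: the protective-marking pattern, contact-detail patterns, and the blocked code names are assumptions for the example, not an official APS checklist.

```python
import re

# Illustrative, assumed patterns only -- not an official APS list.
SENSITIVE_PATTERNS = {
    "protective marking": re.compile(r"\b(OFFICIAL[:\s]*Sensitive|PROTECTED|SECRET)\b", re.IGNORECASE),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
}

# Hypothetical internal terms the drafter knows should never leave the agency.
BLOCKED_TERMS = {"project aurora", "tender 2023-14"}


def screen_prompt(prompt: str) -> list[str]:
    """Return reasons the prompt should NOT be pasted into a public AI tool."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(f"possible {label} detected")
    lowered = prompt.lower()
    findings.extend(f"blocked term: {term!r}" for term in BLOCKED_TERMS if term in lowered)
    return findings


if __name__ == "__main__":
    draft = "Draft a generic project plan template for a small software rollout."
    problems = screen_prompt(draft)
    print("OK to use a public tool" if not problems else f"Review first: {problems}")
```

A screen like this can only flag obvious markers; the guidance's expectation that staff exercise judgement about what is sensitive still applies.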

2. Generating Files or Documents

  • Scenario: Stephanie needs to prepare an urgent PowerPoint presentation.
  • Use of AI: She plans to use a public AI platform to quickly generate the presentation slides.
  • Guidance: Stephanie should not input any sensitive or classified data into the AI tool. She is permitted to use non-sensitive public information or request generic slide templates.
  • Purpose: The AI can assist in creating basic structures or templates, but the content must be managed to avoid security breaches.

3. Using Generative AI-Powered Search

  • Scenario: Roque is drafting technical requirements for an urgent tender.
  • Use of AI: He wants to use Google Bard to confirm technical specifications like monitor resolutions.
  • Guidance: Roque should avoid inputting specific details about his agency’s requirements or the tender process. He must also validate any information obtained for accuracy and appropriateness.
  • Purpose: The tool can be used for general information gathering but should not be applied in a way that could reveal or imply sensitive project details.

4. Exploring Datasets

  • Scenario: Angus is conducting basic data analysis on a publicly available dataset.
  • Use of AI: He considers using ChatGPT Plus for generating insights from the data.
  • Guidance: Since the data is public and non-sensitive, Angus can use the AI tool. However, he must check the AI’s output for accuracy and bias before using the insights (a simple cross-check is sketched after this use case).
  • Purpose: The AI can assist in data analysis, but human oversight is necessary to ensure the integrity and fairness of the analysis.
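
The guidance does not specify how outputs should be verified; one simple approach is to recompute any figure the AI asserts directly from the published dataset before citing it. The file name, column name, and claimed value below are placeholders assumed for the example.

```python
import pandas as pd

# Placeholder file and column names -- substitute the actual public dataset.
df = pd.read_csv("public_dataset.csv")

# Suppose the AI tool claimed the average of the 'value' column is 42.7.
claimed_mean = 42.7
actual_mean = df["value"].mean()

# Accept the AI's figure only if it matches the source data within a small tolerance.
if abs(actual_mean - claimed_mean) <= 0.05 * abs(actual_mean):
    print(f"Claim consistent with source data (actual mean = {actual_mean:.2f}).")
else:
    print(f"Claim does NOT match source data (actual mean = {actual_mean:.2f}); "
          "do not reuse the AI output without correction.")
```

Recomputing headline figures is only a first step; checking for bias (for example, comparing results across relevant subgroups) still requires the analyst's own judgement.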

Key Takeaways from the Use Cases

  • Confidentiality and Sensitivity: In all cases, the paramount concern is the protection of sensitive and confidential information.
  • Human Oversight: While AI can assist in initial stages or provide templates, human judgment and oversight are crucial in finalizing any government document or decision.
  • Ethical Use: The use cases reinforce the need for ethical and responsible use of AI, aligning with the broader principles of transparency, accountability, and fairness.
  • Contextual Application: Each scenario demonstrates how the context of AI use influences the approach and safeguards needed.

These use cases illustrate how APS staff can leverage generative AI tools in their workflows while adhering to established guidelines and ethical standards, ensuring that the use of these tools remains responsible, secure, and consistent with APS values.

6. Future Enhancements

  • The document mentions the development of a risk framework to assist in assessing the use of AI tools.

7. Legal and Ethical Considerations

  • Emphasis is placed on adhering to existing legislation, ethical principles, and data governance, especially concerning Indigenous data.

8. Context-Specific Risks

  • The varied nature of government activities means the risks of using generative AI tools are context-specific.

9. Training and Awareness

  • Staff are encouraged to undertake training to understand AI tool limitations and to critically analyze outputs.

10. Intellectual Property and Copyright

  • Considerations for IP rights and copyright issues are noted, suggesting legal advice for certain uses.

Conclusion

This document serves as an essential guide for APS staff, striking a balance between the potential of generative AI tools and the requirement for responsible, secure, and ethical use. It demonstrates a proactive stance towards the management of emerging technologies in government operations, with a strong emphasis on accountability, transparency, and the wellbeing of the Australian community.
