Artificial Intelligence Usage Policy
Version: 20 August 2025
1 Introduction and purpose
(a) This Artificial Intelligence Usage Policy establishes guidelines and requirements for the responsible use of artificial intelligence (AI) technologies within our software products and services.
(b) The purpose of this policy is to ensure that AI usage in our products aligns with our organizational values, ethical standards, legal obligations, and regulatory requirements.
(c) Responsible AI usage is essential to maintain trust with our users, protect security and privacy, and ensure compliance with applicable laws and regulations.
(d) This policy is aligned with the Digital Transformation Agency's (DTA) Policy for the responsible use of AI in government, which provides guidance on the ethical development and use of AI in the public sector.
(e) Our leadership team demonstrates commitment to the responsible and secure use of AI by providing necessary resources, establishing clear roles and responsibilities, and ensuring comprehensive security practices throughout our development process.
2 Scope
(a) This policy applies to all AI-enabled features and functionalities incorporated within our software products and services.
(b) The policy covers all forms of artificial intelligence technologies used in our products, including but not limited to: machine learning algorithms, natural language processing tools, automated decision-making systems, computer vision applications, and generative AI solutions.
(c) This policy applies to all phases of the AI lifecycle within our products, from conception and development through deployment, operation, and updates.
3 Definitions
(a) Artificial Intelligence (AI): Systems designed to perceive their environment, reason over collected information, and take actions that maximize their chance of successfully achieving specified goals.
(b) Machine Learning: A subset of AI that enables systems to learn patterns from data and improve their performance without being explicitly programmed.
(c) Automated Decision-Making: Processes where decisions are made by technological means without substantial human involvement.
(d) Algorithmic Transparency: The principle that the functioning of algorithms should be explainable and understandable to relevant stakeholders.
(e) Training Data: The dataset used to develop and teach an AI model to recognize patterns and make predictions.
(f) Algorithm: A set of rules or instructions given to an AI system to help it learn from data and make decisions.
(g) Model: A computational representation of patterns learned from data that can make predictions or decisions when given new inputs.
(h) Inference: The process by which an AI system applies learned patterns to new data to generate outputs or predictions. An illustrative code sketch of the relationship between training data, a model, and inference follows these definitions.
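The following minimal Python sketch illustrates how definitions (e), (g), and (h) relate in practice. The dataset values and the use of scikit-learn are illustrative assumptions only, not a statement about any particular product.

```python
# Minimal sketch: training data -> model -> inference.
# Dataset values and the choice of scikit-learn are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Training data (definition (e)): examples the system learns patterns from.
X_train = [[0.2, 1.0], [0.4, 0.8], [0.9, 0.1], [0.7, 0.3]]
y_train = [0, 0, 1, 1]

# Fitting produces a model (definition (g)): a representation of learned patterns.
model = LogisticRegression()
model.fit(X_train, y_train)

# Inference (definition (h)): applying learned patterns to new inputs.
print(model.predict([[0.5, 0.5]]))  # a prediction for an unseen input
```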
4 Principles for responsible AI usage
This section aligns with and builds upon the Digital Transformation Agency's responsible AI principles to ensure our AI practices meet high standards for ethical and responsible use.
(a) Ethical standards: All AI systems in our products are developed, deployed, and used in accordance with ethical principles including fairness, accountability, responsibility, and respect for human autonomy and dignity.
(b) Data privacy and security: AI systems in our products comply with relevant data protection regulations, including the Privacy Act 1988 and, where applicable, the GDPR, with appropriate security controls to protect personal and sensitive information.
(c) Non-discrimination and fairness: Our AI systems are designed to avoid unfair bias and discrimination against individuals or groups based on protected characteristics, with regular testing for bias implemented throughout the system lifecycle; an illustrative bias test is sketched at the end of this section.
(d) Transparency and explainability: AI-driven features in our products, especially those impacting user experience or decisions, are designed to be explainable in clear language, with appropriate documentation available.
(e) Human oversight: Our AI systems operate under appropriate human supervision, with procedures for human review, especially for high-risk scenarios or unexpected outcomes.
(f) Confidentiality, integrity, and availability: AI systems in our products are designed, deployed, and operated with appropriate controls to ensure the confidentiality, integrity, and availability of information they process.
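To make principle (c) concrete, the following minimal Python sketch shows one common bias test, the demographic parity difference: the gap in positive-outcome rates between groups. The group labels, example predictions, and the 0.1 tolerance are illustrative assumptions; appropriate metrics and thresholds are selected per use case.

```python
# Minimal sketch of one bias test: demographic parity difference.
# Group labels, predictions, and the 0.1 tolerance are illustrative assumptions.
from collections import defaultdict

def demographic_parity_difference(groups, predictions):
    """Largest gap in positive-outcome rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

groups = ["A", "A", "A", "B", "B", "B"]
predictions = [1, 1, 0, 1, 0, 0]
gap = demographic_parity_difference(groups, predictions)
if gap > 0.1:  # example tolerance only
    print(f"Potential disparity detected: gap = {gap:.2f}")
```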
5 AI development and deployment guidelines
(a) All data used for AI training and operation adheres to established quality standards, with proper documentation of sources, collection methodologies, and limitations; an illustrative documentation record is sketched at the end of this section.
(b) Our development teams conduct thorough bias assessments during development and after deployment, implementing mitigation strategies for any identified biases.
(c) Each AI feature in our products has clearly defined roles and responsibilities for development, monitoring, and oversight to ensure accountability.
(d) Before deployment, all significant AI features undergo a formal impact assessment examining potential effects on users and operations, with particular attention to high-risk applications.
(e) All data used in and produced by AI systems is classified and handled according to appropriate security standards based on its sensitivity.
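The following minimal Python sketch shows the kind of dataset documentation described in (a) combined with the sensitivity classification described in (e). The field names, classification levels, and example values are illustrative assumptions, not a mandated schema.

```python
# Minimal sketch of dataset documentation (item (a)) with a sensitivity
# classification (item (e)). Fields and levels are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    PERSONAL = "personal"    # personal information under the Privacy Act 1988
    SENSITIVE = "sensitive"  # heightened handling controls required

@dataclass
class DatasetRecord:
    name: str
    source: str                   # where the data came from
    collection_method: str        # how it was gathered
    known_limitations: list[str]  # documented gaps or biases
    classification: Sensitivity   # drives downstream handling controls

record = DatasetRecord(
    name="support-tickets-2024",
    source="internal helpdesk export",
    collection_method="automated export with user identifiers removed",
    known_limitations=["English-language tickets only"],
    classification=Sensitivity.INTERNAL,
)
```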
6 User rights and consent
(a) Users are clearly informed when they are interacting with or being subject to decisions made by AI systems, with notification provided at the earliest appropriate opportunity.
(b) Explicit consent is obtained before using personal data in AI systems, with clear explanations of how data will be used and procedures for withdrawing consent; an illustrative consent record is sketched at the end of this section.
(c) Users affected by significant AI-driven features have the right to request and receive an understandable explanation of how the system works.
(d) Where feasible and legally required, users are provided with mechanisms to opt out of AI processing, with alternative non-automated options made available.
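The following minimal Python sketch shows one way the consent lifecycle in (b) might be recorded, including withdrawal. The record structure, field names, and check are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of recording, checking, and withdrawing consent (item (b)).
# Structure and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str  # what the personal data will be used for
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        """Consent counts only while it has not been withdrawn."""
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        """Withdrawal takes effect immediately and is timestamped."""
        self.withdrawn_at = datetime.now(timezone.utc)

record = ConsentRecord(
    user_id="u-123",
    purpose="personalized recommendations",
    granted_at=datetime.now(timezone.utc),
)
record.withdraw()
assert not record.is_active()  # AI processing of this data must stop here
```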
7 Monitoring and maintenance
(a) All operational AI systems in our products undergo regular monitoring to assess continued accuracy, reliability, fairness, and compliance with this policy; an illustrative monitoring check is sketched at the end of this section.
(b) AI systems are audited regularly and after significant changes to assess continued compliance with this policy and applicable regulations, with updates implemented to address identified issues.
(c) Performance metrics, incidents, and changes are documented throughout the AI system lifecycle to support transparency and continuous improvement.
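The following minimal Python sketch shows a periodic monitoring check of the kind described in (a), with each check logged to support the documentation described in (c). The accuracy metric, the 0.05 degradation tolerance, and the logger name are illustrative assumptions.

```python
# Minimal sketch of a periodic accuracy check (item (a)), logged for the
# audit trail (item (c)). Metric and tolerance are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitoring")

def accuracy_degraded(live: float, baseline: float,
                      tolerance: float = 0.05) -> bool:
    """Flag the system for review if live accuracy drifts below baseline."""
    degraded = live < baseline - tolerance
    # Every check is documented, pass or fail, to support item (c).
    log.info("live=%.3f baseline=%.3f degraded=%s", live, baseline, degraded)
    return degraded

if accuracy_degraded(live=0.88, baseline=0.95):
    log.warning("Degradation detected: trigger review under section 8")
```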
8 Incident response and remediation
(a) We have established channels for users to report unexpected behaviors, potential biases, security concerns, or ethical issues related to AI systems in our products.
(b) Upon identification of issues, our teams implement appropriate corrective actions, including system modifications, additional controls, or temporary suspension of AI operations if necessary; an illustrative suspension control is sketched at the end of this section.
(c) Following incidents, we conduct formal reviews to identify root causes and implement improvements to prevent similar issues in the future.
(d) When AI incidents affect users, timely and transparent communication is provided, including information about remediation actions.
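The following minimal Python sketch illustrates the temporary-suspension control mentioned in (b): a feature flag checked before any AI-driven path runs, with a non-AI fallback. The flag store, feature name, and functions are hypothetical, introduced only for illustration.

```python
# Minimal sketch of a temporary suspension control (item (b)): a feature
# flag with a non-AI fallback. All names here are hypothetical.
ai_feature_enabled = {"summarization": True}

def suspend(feature: str) -> None:
    """Corrective action: disable the AI path until the issue is resolved."""
    ai_feature_enabled[feature] = False

def run_ai_summary(text: str) -> str:
    return "AI summary of: " + text[:50]  # stand-in for a real AI call

def summarize(text: str) -> str:
    if not ai_feature_enabled["summarization"]:
        return text[:200]  # non-AI fallback while the feature is suspended
    return run_ai_summary(text)

suspend("summarization")
print(summarize("A long support transcript ..."))  # takes the fallback path
```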
9 Compliance and legal considerations
(a) All AI features in our products comply with relevant Australian laws and regulations, including the Privacy Act 1988, anti-discrimination legislation, and sector-specific requirements.
(b) Any third-party AI components or systems incorporated into our products are contractually required to comply with this policy and applicable regulations.
(c) All AI systems handling personal information must adhere to the Australian Privacy Principles, with particular attention to data collection, use, disclosure, and security requirements.
(d) Our AI compliance framework is regularly reviewed and updated to reflect changes in laws and regulations affecting AI deployment and use.
10 Ongoing improvement and responsibility
(a) We maintain specific AI-related responsibilities across our development and product teams to ensure accountability for AI systems in our products.
(b) Our staff involved in AI development receive appropriate training on responsible AI practices, ethical considerations, and applicable regulatory requirements.
(c) We continuously evaluate emerging AI technologies and best practices to improve the safety, reliability, and effectiveness of AI features in our products.
11 Feedback and reporting
(a) Users are encouraged to provide feedback on AI features through our established support channels.
(b) We welcome reports about potential ethical issues, policy violations, or concerns related to AI systems in our products.
(c) All reports concerning AI ethics or policy issues are promptly investigated, with appropriate corrective actions implemented when necessary.
12 Policy review and updates
This policy is subject to ongoing review to ensure it remains current and effective:
(a) Updates will be made upon significant changes to applicable regulations, industry standards, or best practices.
(b) This policy is reviewed annually, whether or not changes are required.
(c) Users will be notified of significant policy changes through appropriate communication channels.