Queensland government agencies1 must handle personal information2 in accordance with the privacy principles3 in the Information Privacy Act 2009 (Qld) (IP Act). This includes when using generative artificial intelligence systems (generative AI) such as Microsoft Copilot.
This guideline explains how the use of Copilot and other generative AI systems interacts with agency privacy obligations.
This guideline focuses on Microsoft Copilot because it is integrated into the Microsoft products and services used extensively across the Queensland public sector. The issues it discusses, however, apply to any generative AI system used by Queensland government agencies.
Generative AI is the common term used to describe computer systems which generate content in response to user prompts. A user prompt is the information the user inputs into the generative AI system, eg a question or request to carry out a task, which prompts the system to generate output.
Microsoft Copilot and Microsoft 365 Copilot4 are generative AI systems. Microsoft 365 Copilot (M365 Copilot) is integrated into the Microsoft 365 environment and is available in Word, Excel, PowerPoint, Outlook and other applications.
Microsoft Copilot is similar to M365 Copilot, but instead of being integrated into Microsoft 365, it is available on the web and through the Edge browser, as well as being embedded in Windows.
Copilot has the potential to be a useful tool, but its potential utility comes with privacy and security risks which must be managed.
Agencies are responsible for their use of Copilot or other generative AI systems. If an agency's use of Copilot results in a privacy or information security incident, the agency cannot avoid responsibility by claiming the incident was 'caused by the generative AI'.5
Any use of Copilot that involves personal information, whether as user input or generated output, must comply with the privacy principles, including those that govern the collection, accuracy, security, use, disclosure and overseas transfer of personal information.
Agencies should not use personal information in Copilot or other generative AI systems simply because those systems are available. They should only be used where they are the best way to address an identified problem.
Generation of incorrect data, commonly referred to as ‘hallucinations’, is a known risk of using generative AI systems, which may be trained using inaccurate information.6
The definition of personal information in the IP Act does not require it to be correct, which means the privacy principles apply regardless of its accuracy.
An agency collects personal information when it acquires it directly from the individual or indirectly from another source. If an agency uses Copilot to generate personal information that did not previously exist, for example by asking it to infer something about an individual based on existing data, it has collected personal information about that individual.
Collection of personal information in the context of Copilot or other generative AI systems must comply with the privacy principles.7
Agencies can only collect personal information which is necessary for—or, for non-health agencies, directly related to—the agency's functions or activities. Complying with this obligation in the context of Copilot may present practical difficulties, as it is not possible to predict what personal information Copilot will generate in response to a user prompt.
Agencies should consider the best way to limit the potential breadth of Copilot's response when crafting user prompts. Where Copilot generates personal information which is unnecessary for, or unrelated to, the intended purpose, agencies should consider whether the information can be discarded, subject to applicable public records obligations.8
Collection of personal information must be lawful and fair. Using Copilot to generate or infer personal information is likely to be lawful, unless there is a limitation on doing so in an agency's legislation. However, using Copilot to generate or infer personal information is inherently less fair than collecting it directly from an individual or indirectly from another entity that collected it from the individual.
For directly collected personal information, the individual would generally have been aware of its collection and had input into what they provided. Where an agency uses Copilot to generate or infer personal information, the individual does not know the information has been collected, has no control over what Copilot generates or infers, and has no opportunity to provide corrections or context, or to challenge incorrect information.
This does not mean that collection of personal information using Copilot will automatically be unfair, but agencies need to carefully consider the possibility.
A health agency cannot use Copilot to generate or infer information about an individual if it could reasonably and practicably be collected directly from the individual.9 Additionally, health agencies must take reasonable steps to make an individual aware of the matters in NPP 1(3) if they indirectly collect their personal information using Copilot.10
All agencies must take reasonable steps to ensure the personal information they collect, use, and disclose is accurate, complete and up to date,11 and non-health agencies must take reasonable steps to ensure the personal information they collect and use is relevant.12
In the context of Copilot or other generative AI systems, this will require an awareness of the potential for generative AI systems to produce incorrect or irrelevant data, an assessment of the risk that this will occur, and careful crafting of user prompts to limit the possibility of incorrect or irrelevant data being returned.
Content generated by generative AI systems like Copilot must be carefully assessed to determine its accuracy and relevance before it is used, eg relied on to make a decision in relation to an individual, or disclosed.
Personal information can only be used for the purpose it was collected or for one of the permitted secondary purposes.13 Use has a very broad definition,14 meaning most things an agency does with personal information will be a use. This includes inputting personal information into Copilot or other generative AI systems as part of a user prompt or asking them to generate or infer personal information about an individual.
Agencies must ensure they only use personal information in the context of Copilot as part of the primary purpose for which the information was collected or in circumstances where one of the exceptions can be met.
Personal information can only be disclosed15 as permitted by the IP Act.16 Additionally, agencies:
M365 Copilot is integrated into Microsoft 365. If an agency's Microsoft 365 environment complies with the IP Act's disclosure, overseas transfer, and security requirements, the use of M365 Copilot will generally reflect this compliance.
However, there may be a risk of uncontrolled or unauthorised data exposure, including by way of overseas transfer, depending on the system configuration and use of enabled plugins. Agencies should refer to the Controlling the exposure of data with M365 Copilot and Microsoft Copilot in Queensland Government guideline for guidance on mitigating the possibility of uncontrolled or unauthorised data exposure.
Agencies could also consider disabling Copilot entirely.
The IP Act gives individuals the right to access and correct their personal information.18 This includes personal information which is part of a user prompt and personal information generated or inferred by Copilot or other generative AI systems.
All agencies have transparency obligations in relation to the personal information they collect, hold, and use.19 If an agency is routinely using Copilot or other generative AI systems, particularly where they are being used to make or inform decisions, it should consider including details in its privacy policies or other transparency documents.
Agencies may also be subject to other transparency obligations in relation to the use of generative AI systems. For example, the Use of generative AI for government information sheet requires content produced using generative AI tools to be clearly identified as such.
Any agency considering adopting Copilot or other generative AI systems as part of the agency's day to day practice should also consider conducting a Privacy Impact Assessment (PIA).
A PIA will assist an agency to understand and evaluate the potential privacy impacts of adopting such a system and enable the identification and mitigation of potential privacy risks.
All generative AI systems have the potential to produce biased, discriminatory, or otherwise harmful materials.20 Agencies should consider their obligations under the Human Rights Act 2019 (Qld) (HR Act) and the Anti-Discrimination Act 1991 (Qld) when using Copilot or other generative AI systems.
Agencies should also consider the HR Act, which contains a right to privacy, when assessing the privacy impacts of Copilot or other generative AI systems.
Current as at: 12 September 2024