Artificial intelligence (AI) is computer technology that uses large amounts of data to mimic human intelligence and perform requested tasks. It can be used to draft documents, interpret data, and help make decisions. The emergence of AI technologies presents opportunities and challenges for almost every organization. Cities may be able to make gains in efficiency by using AI but need to be cautious about the potential pitfalls of exposing nonpublic data and entrusting important government functions to AI services.
Cities must comply with the Data Practices Act when using AI
When considering the use of AI in municipal operations, compliance with the Minnesota Government Data Practices Act (MGDPA) is necessary. Government data is defined as all data collected, created, received, maintained, or disseminated by a government entity, regardless of physical form, storage media, or conditions of use. A city would need to be responsive to any data request pertaining to data created with the assistance of AI.
Understand the risk level before entering data into an AI service
In most circumstances, when government data is entered into an AI service, a copy of that data remains with the service to help it grow its intelligence and be more responsive. For that reason, cities must know the classification of any data they intend to use and should use only low-risk data, as described below, with AI services.
- Low risk: Data that is defined by Minnesota Statutes Chapter 13 as “public” and intended to be available to the public.
- Moderate risk: Data that does not meet the definition of low-risk or high-risk. This includes but is not limited to system security information, not public names, not public addresses, not public phone numbers, and IP addresses.
- High risk: Data that is highly sensitive and/or protected by law or regulation. This includes but is not limited to protected health information, Social Security Administration data, criminal justice information, government-issued ID numbers (e.g., Social Security numbers, driver’s license numbers, state ID card numbers, passport numbers), federal tax information, account data, and bank account numbers.
Entering moderate- or high-risk data into an AI service could be considered a data breach. Preventing improper access to or dissemination of data is a critical concern because violations carry civil or criminal penalties. Cities should assume that any data entered into AI platforms like ChatGPT, Google Bard, or Microsoft Bing will be retained by the service.
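To make the classification step concrete, here is a minimal Python sketch of how a city IT team might screen data before it reaches an external AI service. The risk tiers, the check_before_ai_use helper, and the identifier patterns are illustrative assumptions, not part of the MGDPA or League guidance; actual classification decisions belong with the city's responsible authority.

```python
import re
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"            # public data under Minn. Stat. ch. 13
    MODERATE = "moderate"  # e.g., not public names, addresses, IP addresses
    HIGH = "high"          # e.g., SSNs, health data, bank account numbers

# Illustrative patterns for a few high-risk identifiers. A real review
# is done by the city's responsible authority, not by regex alone.
HIGH_RISK_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible account number": re.compile(r"\b\d{9,17}\b"),
}

def check_before_ai_use(text: str, classified_level: RiskLevel) -> bool:
    """Return True only if the data is cleared for an external AI service.

    Assumes anything entered into the service may be retained by it,
    so anything above low risk is rejected outright.
    """
    if classified_level is not RiskLevel.LOW:
        return False
    # Backstop scan for identifiers that should never leave the city.
    for label, pattern in HIGH_RISK_PATTERNS.items():
        if pattern.search(text):
            print(f"Blocked: text appears to contain a {label}.")
            return False
    return True

if __name__ == "__main__":
    print(check_before_ai_use("Draft agenda for the 2024 council meeting", RiskLevel.LOW))  # True
    print(check_before_ai_use("Resident SSN: 123-45-6789", RiskLevel.LOW))                  # False
```

A gate like this is a backstop, not a substitute for classifying data under Chapter 13 before it is used.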
For more information:
- Access the National League of Cities’ conversation on ChatGPT.
- Download the League of Minnesota Cities memo on Data Practices.
Other important considerations when using AI
- Generative AI tends to fabricate information (sometimes called "hallucinating") when it does not have an answer, and it can produce inaccurate responses when the data it draws on is incomplete or inaccurate. When using AI, subject matter experts still need to review any work generated for accuracy and completeness.
- Because current AI systems are built on human-generated data, they inherit human bias. When the data used to train or inform the system contains preexisting prejudices or underrepresents certain groups, the system cannot compensate for that. If using AI to help make decisions, cities should consider whether the results would have a discriminatory effect on certain residents because the underlying data was flawed. For example, when using calls-for-service data to determine how to allocate resources, cities need to consider whether some areas don't request service because of cultural norms or distrust in government, as illustrated in the sketch below.
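The following hypothetical Python sketch illustrates the kind of check described above: it compares each neighborhood's share of service requests to its share of population and flags areas whose requests are disproportionately low, which may signal underreporting rather than low need. The neighborhood names, counts, and the 0.5 cutoff are invented for illustration.

```python
# Hypothetical data: flag neighborhoods whose share of service requests
# falls far below their share of population. A low ratio may reflect
# underreporting (distrust, cultural norms) rather than low need.
population = {"Northside": 12000, "Downtown": 8000, "Lakeview": 10000}
service_requests = {"Northside": 40, "Downtown": 210, "Lakeview": 150}

total_pop = sum(population.values())
total_requests = sum(service_requests.values())

UNDERREPORTING_THRESHOLD = 0.5  # invented cutoff for illustration

for area in population:
    pop_share = population[area] / total_pop
    request_share = service_requests[area] / total_requests
    ratio = request_share / pop_share
    flag = "  <- review before using in allocation" if ratio < UNDERREPORTING_THRESHOLD else ""
    print(f"{area}: requests-to-population ratio {ratio:.2f}{flag}")
```

Running this flags Northside (ratio 0.25), prompting the question of whether residents there are underserved or simply underreporting, before the data drives any allocation decision.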
City policies should include language about AI
While cities are considering how to use AI in their work, adopting a policy governing its use is a natural first step. The League has a computer use model policy that includes a reference to AI, acknowledging considerations for the transfer of government data to third-party entities. We encourage cities to review any AI policy regularly because the technology is changing rapidly. Sample language that can be included in an existing computer use policy or within human resources policies is:
“Employees may use low-risk data with Artificial Intelligence (AI) technology to perform their work. Low-risk data is defined by Minnesota Statutes Chapter 13 as ‘public’ and is intended to be available to the public. The use of AI technologies often relies on the transfer and collection of data to third-party entities. If an employee is unsure of the data classification, they must review the data with the city’s responsible authority or their designee, prior to using the technology. All data created with the use of AI is to be retained according to the city’s records retention schedule.”
Download the LMC Computer Use model policy (doc).