Updated April 10, 2026
Artificial intelligence (AI) is technology that uses large amounts of data to mimic human intelligence and perform requested tasks that influence real or virtual environments. It can be used to draft documents, interpret data, make predictions or recommendations, and help make decisions. The emergence of AI technologies presents opportunities and challenges for almost every organization. Cities may be able to gain efficiency by using AI but need to be mindful of the potential risks, including exposing nonpublic data to third parties and entrusting important government functions to AI services.
Cities must comply with the Data Practices Act when using AI
When considering the use of AI in municipal operations, cities must comply with the Minnesota Government Data Practices Act (MGDPA). Government data is defined as all data collected, created, received, maintained, or disseminated by a government entity, regardless of physical form, storage media, or conditions of use. A city must respond to any data request pertaining to data created with the assistance of AI.
Understand the risk level before entering data into an AI service
In most circumstances, when government data is entered into an AI service, a copy of that data is retained by the service to help it improve its responses. For that reason, cities must know the classification of the data they intend to use and should only use low-risk data, as described below, with AI services.
- Low risk: Data that is defined by Minnesota Statutes Chapter 13 as “public” and intended to be available to the public.
- Moderate risk: Data that does not meet the definition of low-risk or high-risk. This includes but is not limited to system security information, not public names, not public addresses, not public phone numbers, and IP addresses.
- High risk: Data that is highly sensitive and/or protected by law or regulation. This includes but is not limited to protected health information, Social Security Administration data, criminal justice information, government-issued ID numbers (e.g., Social Security numbers, driver’s license numbers, state ID card numbers, passport numbers), federal tax information, account data, and bank account numbers.
Uploading, sharing, or disseminating moderate- or high-risk data to an AI platform could be considered a data breach. Preventing improper access to or dissemination of data is a critical concern because violations can carry civil or criminal penalties. Cities should assume any data used on AI platforms like ChatGPT, Microsoft Copilot, or Google Gemini will be retained by the service.
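The pre-upload check described above can be illustrated with a minimal sketch. This is a hypothetical example, not a League tool: the pattern names, the `screen_for_ai_use` function, and the two regular expressions are assumptions for illustration, and pattern matching alone cannot substitute for a classification decision by the city's responsible authority.

```python
import re

# Hypothetical patterns for two of the high-risk identifiers listed above.
# A real workflow would rely on the city's official data classifications,
# not pattern matching alone.
HIGH_RISK_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "bank account number": re.compile(r"\b\d{9,17}\b"),
}

def screen_for_ai_use(text: str) -> list[str]:
    """Return reasons the text should NOT be sent to an AI service.

    An empty list means no high-risk patterns were detected; it does not
    mean the data is public under Minnesota Statutes Chapter 13.
    """
    findings = []
    for label, pattern in HIGH_RISK_PATTERNS.items():
        if pattern.search(text):
            findings.append(f"possible {label} detected")
    return findings
```

For example, a draft containing an SSN-like string such as `123-45-6789` would be flagged, while text with no matching patterns would still need a human classification review before upload.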
For more information:
- Download the League of Minnesota Cities memo on Data Practices.
- Access the National League of Cities’ AI Toolkit for members.
Other important considerations when using AI
- Generative AI can make up information when it does not have an answer. These “hallucinations” are inaccurate responses that occur when the underlying data is incomplete, leading to false conclusions. When using AI, it is important for subject matter experts to review any generated work for accuracy and completeness.
- AI provides responses based on large amounts of data, some of which may be outdated or inaccurate. This can lead to incorrect responses, as AI systems do not verify the information within each data source.
- Because they are built on human-generated data, current AI systems have biases. When the data used to inform an AI system reflects preexisting prejudices or underrepresents certain groups, the system cannot compensate for that. If using AI to help in decision-making, cities should consider whether the results have a discriminatory effect on certain residents because the underlying data was flawed. For example, when using calls-for-service data to determine how to allocate resources, cities need to consider whether some areas don’t request service because of cultural norms or distrust in government.
- AI platforms or services should not replace important city functions and responsibilities. For example, while AI may assist city clerks with drafting minutes, a clerk should still review and edit the draft.
City policies should include language about AI
As cities consider how to use AI in their work, adopting a policy governing its use is a natural first step. The League has a computer use model policy that includes a reference to AI, acknowledging considerations for the transfer of government data to third-party entities. We encourage cities to review any policy regularly, as this is a rapidly changing technology. Sample language that can be included in an existing computer use policy or within human resources policies is:
“Employees may use low-risk data with Artificial Intelligence (AI) technology to perform their work. Low-risk data is defined by Minnesota Statutes Chapter 13 as ‘public’ and is intended to be available to the public. The use of AI technologies often relies on the transfer and collection of data to third-party entities. If an employee is unsure of the data classification, they must review the data with the city’s responsible authority or their designee, prior to using the technology. All data created with the use of AI is to be retained according to the city’s records retention schedule.”