With AI making its way into various industry verticals, it’s no surprise that government agencies are also embracing this technology. However, the use of generative AI, such as ChatGPT, in local governmental operations is raising eyebrows across the United States.
The Alameda County government has been educating its employees about the risks associated with generative AI. However, it has yet to adopt a formal policy.
Recently, the US Environmental Protection Agency made headlines when it prevented its employees from using ChatGPT on cybersecurity grounds.
Meanwhile, US State Department staff in Guinea were found to have used AI chatbots to draft social media posts and speeches. These developments illustrate the complicated relationship between generative AI and government.
Maine, on the other hand, took a more drastic step, banning its executive branch employees from using generative AI for the rest of the year over cybersecurity concerns.
Vermont, however, is encouraging government staff to use AI chatbots to learn new programming languages and write internal-facing code.
In California, San Jose has been proactive in this regard. It has developed a comprehensive set of guidelines to govern the use of generative AI. It has made it mandatory for municipal employees to fill out a form every time they use AI tools.
Challenges of Accountability and Transparency
The adoption of generative AI in government comes with its own set of challenges. Given that government agencies need to adhere to strict laws to ensure transparency and uphold civic responsibility, it’s imperative for them to balance the benefits of AI with accountability.
“Everybody cares about accountability, but it’s ramped up to a different level when you are literally the government.” — Jim Loter, interim chief technology officer of Seattle
One notable incident occurred when an assistant superintendent in Mason City, Iowa, used ChatGPT to determine which books should be removed from the district’s libraries. The ensuing controversy demonstrated the serious accountability issues government employees face when using AI tools.
It’s worth noting that any input entered as a prompt into an AI tool may be subject to public records disclosure laws. AI tools may also use this data to further train their models, which can lead to the disclosure of sensitive information.
This poses specific challenges for agencies working on healthcare and criminal justice.
Ethical Concerns and Potential Benefits of Generative AI
Despite growing concerns over generative AI tools, government agencies are exploring ways to leverage them. For instance, in Arizona, the Maricopa County Superior Court is exploring how AI could make legal documents more understandable to the public.
However, in San Jose, using generative AI to create public documents is considered “high risk.” These tools aren’t free from the risk of misinforming users.
The earliest policies governing generative AI tools have emerged at the city and state levels, and officials are eager to learn from one another’s experiences.
Experts believe that proper guidance from the federal government is necessary to maintain ethical standards and consistency in this regard.