The key to managing artificial intelligence, according to Indiana Chief Information Officer Tracy Barns, resembles a parenting philosophy: start out restrictive and loosen up as the technology proves itself.
Barns told StateScoop the state is taking a more restrictive posture, at least initially, securing AI tools before allowing their use.
“We actually blocked generative AI from the network,” Barns said. “That was the starting point: hey, until we understand this more, until we understand what's going on here, while we figure out what the right policies are and what the right protections and security are, let's at least make sure that sensitive data is not input into the AI models that are out there. That had to be locked down.”
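Network-level blocking of this kind typically amounts to a deny-list applied at a web proxy or DNS layer. The sketch below is purely illustrative, not Indiana's actual configuration, and the blocked domains are hypothetical examples of public generative-AI services:

```python
# Illustrative sketch of a proxy-style deny-list check for outbound
# requests to public generative-AI services. Domains are examples only.
from urllib.parse import urlparse

BLOCKED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def is_blocked(url: str) -> bool:
    """Return True if the request targets a blocked generative-AI host."""
    host = urlparse(url).hostname or ""
    # Match the host exactly or as a subdomain of a blocked domain.
    return host in BLOCKED_AI_DOMAINS or any(
        host.endswith("." + d) for d in BLOCKED_AI_DOMAINS
    )
```

In practice such rules live in firewall or secure-web-gateway policy rather than application code; the logic shown is the same either way.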
Barns isn't the only one taking this approach. Other state technology officials told StateScoop it's important to ensure the accessibility and cybersecurity of digital services. Meanwhile, AI vendors are trying to help states identify security risks when implementing AI in digital services.
“Big challenge”
Washington, D.C.'s Office of the Chief Technology Officer is prioritizing accessibility for “everyone in the district,” said citywide CTO Mike Rupert. He said it was important to ensure residents could access city services using older devices.
“That's a big challenge we've always had. Unlike Nike or Adidas, we can't roll out something that 10% of people don't have access to,” Rupert said. “It's like a flashy redesign of an app or something like that. We can't do that, because it's not going to work for everyone. It has to work on older phones and older browsers. Those are the kinds of things that we've always really considered when deploying tools, and I think that's probably going to be the same with AI. We can't leave anyone out.”
Stephen Miller, D.C.'s interim chief technology officer, told StateScoop that the district's technology office is “100% committed to transparency,” ensuring the public knows when the city releases content created by generative AI and when a user is interacting with a chatbot.
Rupert said all of the city's websites meet federal Section 508 compliance standards, as well as WCAG 2.2 standards, which focus on accessibility for users who may have low vision, learning disabilities or mobility disabilities. He said all city websites are scanned every three days.
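Automated scans like the ones Rupert describes run batteries of rule checks against each page. As a minimal sketch, assuming nothing about D.C.'s actual tooling, here is one such check: flagging `<img>` tags that lack an `alt` attribute, a basic requirement under both Section 508 and WCAG:

```python
# Illustrative sketch of one automated accessibility check: find <img>
# elements with no alt attribute. Real scanners cover far more rules.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) of each offending <img>

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())

checker = MissingAltChecker()
checker.feed('<p><img src="map.png"><img src="seal.png" alt="City seal"></p>')
print(len(checker.violations))  # prints 1: one image lacks alt text
```

A recurring scan would run checks like this across every page on a schedule and report the violations for remediation.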
Miller predicted that AI would help.
“AI tools are generally helpful. They're going to help district governments beyond what governments can do on their own,” Miller said. “These AI tools will be built into users' computers and mobile phones, so when it comes to accessibility, users will be able to work on their devices and receive advice on how to make things more accessible. … It's part of our responsibility to give them the right information and make sure things stay open. But in general, this revolution is expanding beyond the reach of governments, and I think we'll be able to take advantage of that.”
“Own the algorithm”
In January, Maryland announced four major IT initiatives: an executive order on the responsible use of AI and generative AI, a new digital services team, a cybersecurity partnership with the National Guard and a digital accessibility policy. Officials said the push applies to internal systems and public-facing digital services alike. Maryland CIO Katie Savage gave the example of using AI to predict which services communities struggling with drug addiction will need.
“We want to make sure that our technology development is secure and accessible, and we want to understand the data sources behind the algorithms that we create,” Savage said. “We want to make sure that we own the data, we own the algorithms, and we understand the third-party terms of service. That's something we can really control internally. But it's not just about the state of Maryland; as long as residents interact with something, it will have an impact.”
Barns, Indiana's chief information officer, said all content published by the state's government agencies undergoes “regular and rigorous reviews” to ensure the information complies with accessibility standards.
“There's no question that AI, given where it's going and how rapidly the market is adopting it, has the power to make these things more accessible and more usable,” Barns said. “One thing I will say is that Indiana is not going to let AI be the complete determining factor in whether or not we can support that consumption.”
“Force multiplier”
James Collins, a former chief information officer for the state of Delaware who now works in Microsoft's state, local and higher education business, told StateScoop that AI is enabling cybersecurity threats and “makes data more difficult to defend,” but that it also helps defenders.
“Our experience [at Microsoft] is that many of our customers do not have sufficient resources to adequately protect their enterprises,” Collins said, referring to the company's security chatbot product. “We want to put tools in their hands that allow them to gather the right insights and take the most important actions based on information from their environment. We also want to use AI to automate parts of the response. So it's a force multiplier in the security space. AI enables some attacks, but it also helps mitigate those attacks, and we're putting the tools in our customers' hands to deal with it.”
Savage said Maryland is considering using AI to analyze cybersecurity incident reports. She said her agency is conducting a cybersecurity assessment and wants to make sure the state's new task force has the support it needs to fix the problems it finds.
“Now we can let constituent services know how urgent this issue is,” Savage said. “[AI] provides an analysis of how often these problems are occurring.”
Washington, D.C., government officials are also focused on the safety and accountability of AI tool outputs. Miller said AI security is a “core element” of the city's current efforts, and that tools are tested prior to release, with humans kept in the loop.
“For three years, [Rupert] and I and many others here at OCTO have worked to make sure the D.C. government is a trusted source of information,” Miller said. “When you come to DC.gov looking for a particular service, you know that service is offered by DC.gov. We don't think that's going to change with AI.”
Barns said Indiana is working to ensure its employees have the knowledge and skills to leverage AI “in the right way.”
“It's essential that our agencies and others start to really understand what AI means, how AI tools are being developed, where there are concerns and, more importantly, where the opportunities are to engage and improve the systems and solutions for the services that we create and deliver,” Barns said. “We can't just sit back and rely on the message from the vendor that it works and that it's safe in order for us to say, 'Okay, let's move on.'”