State’s AI chatbot journey began with collaboration

According to agency officials, collaboration across the Department of State has been as important to the ongoing development and deployment of its internal artificial intelligence chatbot as the technology on which the system is built.

Matthew Graviss, State’s chief data and artificial intelligence officer, joined Gharun Lacy, deputy assistant secretary and deputy director of the Diplomatic Security Service for cyber and technology security, to give Nextgov/FCW an update on the internal AI-powered chatbot intended to improve the department’s work.

Both Graviss and Lacy emphasized that collaboration has been the key to the chatbot’s success.

“I think it’s a really amazing partnership that (serves) as a model for other agencies, because we have a chief of cybersecurity, a chief of diplomatic technology, a chief of analytics and artificial intelligence, all working closely together — to the point where we meet as a leadership team every few weeks,” Graviss said. “Our AI modernization is a team game.”

State’s quest to build and scale AI technology for internal use has been roughly a year-and-a-half process. Graviss said the effort started out as a research resource before moving to generative offerings, and that his team tried several different AI models before settling on Microsoft Azure’s OpenAI service.

State has also contracted other software vendors to help with the chatbot. Palantir is helping to develop the user interface and further integrate the large language model, Deloitte is helping to analyze chatbot prompts and output responses, and Bright Star is conducting independent verification and validation audits.

With the chatbot currently serving 10,000 State employees, Graviss said the model will continue to evolve, with new features and analytics capabilities released on a regular basis. This will reach approximately 270 of the department’s missions around the world, supporting diplomatic efforts.

“From a user perspective, we want to ensure that our diplomats are comfortable using generative AI on a daily basis,” Graviss said. “Part of the philosophy of generative AI that is valuable to the department is that our currency is words. At the end of the day, at the Department of State, we read, write, and engage. It is extremely powerful. So, having worked in multiple agencies, I have not seen technology fulfill the mission that generative AI does in diplomacy.”

Keeping diplomats engaged on the ground and off their screens is fundamental to State’s development of AI offerings. Graviss said the chatbot’s main goal is to consolidate the extensive research documented each year by State analysts, ranging from congressional reports to regional executive summaries for diplomats to specialized topics such as human trafficking and human rights reports.

“Staff spend about 150,000 hours creating these three reports, and these three reports really make an impact,” Graviss said, noting that State’s AI products focus on supporting the research process, not the writing process. The model currently focuses on collecting State analysts’ research documents throughout the year and summarizing them into specialized information chapters.

“The advantage of the research tool is the ability… to use generative artificial intelligence to summarize information, translate it into English and automatically recommend which sections of reports are relevant to a given piece of material,” Graviss said.

State’s AI modernization is not taking place in a vacuum. As more emerging tools become add-on applications to the chatbot, Lacy described an updated approach to cybersecurity that prioritizes balancing vulnerability risks with a willingness to adopt new technologies.

“One of the biggest things we’ve learned on this journey is that we tend to see emerging technologies as an opportunity to… make sure our foundations are solid, and that we’re ready for emerging technologies,” Lacy said. “We like to get everything early, and this gives us a chance… to break it down and expose the weak spots. It also helps us understand how we can use it.”

Lacy added that State’s strategy to involve multiple sub-agencies in developing the chatbot, particularly in-house cybersecurity specialists, has helped accelerate the adoption and advancement of the technology.

“We’re showing that introducing security measures earlier speeds up the business process,” he said. “It’s not what people think. It doesn’t slow things down; it speeds things up.”

A strong cybersecurity approach will be critical to the department’s overall security posture, because the chatbot handles sensitive data that cannot be used in public software. Graviss emphasized that internal State staff play a key role in training the chatbot on permissible prompts tailored to their specific job needs and how they operate in the field. Multiple teams at State — like Lacy’s — are currently overseeing alpha testing, or comprehensive software assessments, that bring together experts in areas such as diplomatic technology, the State Analyst Center, and diplomatic and cyber security to ensure the chatbot is properly trained and securely deployed.

“One of the main recipients of this technology right now is my organization,” Lacy said. “We use it for defense purposes, so as we become familiar with it, it helps us internally improve how we will do vulnerability assessments, how we will respond to incidents, and how we can do all of this work without disruption, because we are users. We don’t have to survey anyone anymore. We use it ourselves and see the development happen organically, so the biggest lesson for us is that the feedback loop can be much shorter.”

Beyond the technical approach needed to launch a secure and intuitive AI product, both Lacy and Graviss emphasized the importance of leadership support as a way to quickly implement updates in a complex federal agency such as State. They cited Secretary Antony Blinken’s earlier support for the adoption of artificial intelligence technologies at the agency as a key foundation for other teams to collaborate in purchasing, testing and integrating emerging technologies.

“Leadership attention can really impact adoption,” Graviss said. “What I mean is, if the CIO and (deputy secretary) for cybersecurity and the chief artificial intelligence officer meet weekly or biweekly about generative AI, you can make real, meaningful progress, and you can make it quickly.”