
Growing Army Demand for GenAI, Focus on Integration

Demand for generative AI is skyrocketing across the military. In fact, Army leaders are having to put a little “appetite suppressant” on it. In the early stages of GenAI’s rollout, the service supported teams that had organic AI skill sets but lacked the necessary business-process expertise, and resources quickly became a problem.

The Army is now focused on building a solid foundation for service-wide GenAI implementation, including developing cost-containment strategies and clear processes to help Soldiers and civilian employees more strategically integrate GenAI into their work.

But after some early successes with large language models, Army leaders say they are seeing a tendency for organizations to “jump at it as the answer” without fully understanding whether the technology is actually the best option.

“We have more people at the gate than we can handle right now. We’re just making sure that we create a foundation that makes sense and, number one, we can help people make sure that they don’t jump in there and run out of money and don’t get what they’re looking for. Number two, let’s use models that don’t support their requirements, which is going to be disappointing. Or number three, I think it’s more important to start using technology that really doesn’t enhance the type of work that they’re doing. So we’re trying to balance all of that now,” Army Chief Information Officer Leonel Garciga told Federal News Network.

Earlier this month, the Army rolled out a generative AI capability to its secure cArmy cloud. Developed by Ask Sage, the platform enables Soldiers and civilians to securely access Azure Gov OpenAI services, allowing them to apply OpenAI models to existing Army data.

However, the platform doesn’t support just one language model. A key feature of Ask Sage is that users can switch between models without having to re-ingest or retrain on their data.
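The article doesn’t describe the underlying plumbing, but a minimal sketch of what model-agnostic access to Azure Government OpenAI deployments can look like is below, written in Python with the openai SDK. The endpoint, deployment names, and the `ask` helper are illustrative assumptions, not the Army’s or Ask Sage’s actual configuration.

```python
# Minimal sketch, not the Army's or Ask Sage's actual setup: querying Azure
# (Government) OpenAI chat deployments and swapping models for the same prompt.
# The endpoint, deployment names, and environment variables are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. an *.openai.azure.us resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def ask(deployment: str, question: str, context: str) -> str:
    """Send the same question and supporting data to any deployed model."""
    resp = client.chat.completions.create(
        model=deployment,  # the only thing that changes when swapping models
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content or ""

context = "Excerpt pulled from an existing document store."
question = "Summarize the key obligations in this clause."

# Route the same prompt and data to different model deployments.
for deployment in ("gpt-4o", "gpt-35-turbo"):
    print(deployment, "->", ask(deployment, question, context))
```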

The platform also includes existing plugins and agents that users can implement out of the box without having to reinvent the wheel.

“You want to be able to understand the benefits of each model and be able to swap without being locked into a single vendor. Because Ask Sage can run anywhere, and we’re deploying it on cArmy at Impact Level 5, we can do that on Amazon, Azure, Google, but we also support 150 models with text, images, video, audio—that gives teams a lot of options,” said Ask Sage CEO Nic Chaillan, who served as the Air Force’s first chief software officer from 2019 to 2021.

The new software is already helping the military with software development, procurement, cybersecurity and even aviation safety.

For example, Army Contracting Command is already exploring how to use AI to create better contracts, identify commonalities across contracts, and find opportunities to integrate with the broader contracting community and defense industrial base.

“Just their first few runs, they definitely got a lot of great feedback from Army Contracting Command about where they were making some progress in understanding contract conflicts that they hadn’t seen before. We were able to actually take that and turn that into some decisions and changes in contracts that ultimately both save the government money and make it easier for vendors to deliver more efficiently,” Garciga said.

The Army has also seen immediate benefits in how its legal teams navigate new rules and regulations. The software is already helping legal teams quickly search federal law and existing Army and DoD policies, reducing manual work and significantly speeding up legal review.

As the number of use cases has grown, Garciga said, a key takeaway has been the importance of using cloud-native application programming interfaces (APIs) for underlying models, which often have built-in safeguards to ensure ethical use of the technology.
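What those built-in safeguards look like varies by provider. As one hedged example, Azure OpenAI reports a content-filter finish reason when its filters block a completion, which an application can check explicitly; the sketch below reuses the hypothetical client from the earlier example.

```python
# Sketch: checking a cloud model API's built-in safety signal before using output.
# Reuses the hypothetical Azure OpenAI `client` from the earlier sketch; Azure
# OpenAI reports finish_reason == "content_filter" when its filters intervene.
def safe_ask(deployment: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": prompt}],
    )
    choice = resp.choices[0]
    if choice.finish_reason == "content_filter":
        # Surface the intervention instead of silently returning truncated text.
        return "Blocked by the provider's content filter."
    return choice.message.content or ""
```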

In addition, centralizing libraries and data models has proven cost-effective, eliminating the need for the Army to duplicate efforts across multiple systems and pay multiple times for the same data.

“One of the challenges of jumping from model to model, or maybe even cloud providers, is replicating data and paying for it multiple times. We’ve been working a lot to make sure that we understand what that looks like and how we maximize this idea of centralized libraries or curated topics or even document domains, so we’re not replicating data and paying for it,” Garciga said.
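The article doesn’t say how those centralized libraries are built. One common pattern, sketched below under the assumption of an embedding-based index (the embedding model and in-memory index are illustrative, not the Army’s actual design), is to embed a document collection once and reuse the same index regardless of which chat model consumes the retrieved context.

```python
# Sketch of a "pay for it once" shared library: embed documents a single time
# and reuse the index for any model, instead of re-ingesting data per model or
# per cloud. Embedding model name and the in-memory index are assumptions.
import numpy as np

EMBED_MODEL = "text-embedding-3-small"

def build_index(docs: list[str]) -> np.ndarray:
    """Embed each document once; the vectors become the shared library."""
    resp = client.embeddings.create(model=EMBED_MODEL, input=docs)
    return np.array([item.embedding for item in resp.data])

def top_k(query: str, docs: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    """Pull the k most relevant documents from the shared index."""
    q = np.array(client.embeddings.create(model=EMBED_MODEL, input=[query]).data[0].embedding)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

docs = ["Policy memo on contract modifications.", "Aviation safety bulletin.", "Cyber directive."]
index = build_index(docs)  # embedding cost is paid once
retrieved = "\n".join(top_k("What changed for contract modifications?", docs, index))
# `retrieved` can now feed any of the chat deployments above with no duplicated data.
```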

Developing personas—profiles of different users, such as policy analysts or resource managers—helps the service improve the learning curve for new AI users while expanding the use of AI tools to more people.
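The article doesn’t detail how those personas are defined; in practice they often amount to little more than reusable system prompts. The persona names and wording below are illustrative only, not official Army roles.

```python
# Sketch: personas as reusable system prompts that shape responses for a given
# kind of user. Persona names and wording are illustrative, not official roles.
PERSONAS = {
    "policy_analyst": (
        "You help policy analysts compare draft policies against existing "
        "Army and DoD regulations and cite the relevant sections."
    ),
    "resource_manager": (
        "You help resource managers summarize budget documents and flag "
        "funding risks in plain language."
    ),
}

def ask_as(persona: str, deployment: str, question: str) -> str:
    """Prepend the persona's system prompt to an otherwise ordinary request."""
    resp = client.chat.completions.create(
        model=deployment,
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content or ""
```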

“Our goal is to continue these proofs of concept over the next few months to really start to understand this space enough to enforce a requirement that will ultimately, at some point, lead to some acquisitions. I think the good thing about this space from what we’re learning right now is that no one is doing a one-size-fits-all thing. There’s so much work and opportunity here that I think it’s going to be up to us to make sure that we have the right use cases and are really disciplined in how we compete so that our partners will work with us,” Garciga said.
