
Industry and government should speak the same language in AI

We already know that AI will augment and improve everything — but the design has one major flaw: what design?

We now live in a rapidly changing, technologically advanced world (note: given the pace of change, some of the information in this article may no longer be current by the time you read it). While AI provides an evolutionary path into the future, it will cause some headaches along the way. The lack of standards and policies is a big part of the problem; a few exist, but we have a long way to go. The ability to create training models and make them available across platforms, with appropriate policies in place, is essential to the evolution of artificial intelligence in government.

As nationally recognized AI guru Dr. Lisa Palmer recently pointed out to me, vendors with their own large language models (LLMs) add a lot of value. I would suggest that we also need a second model or context engine, or even a hybrid design, that provides agency-specific context: a well-defined, standardized component that can be carried across platforms. However, such well-defined models or context engines, common to solutions, platforms and providers, do not currently exist. This is a big problem for large-scale solutions.
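To make the idea concrete, here is a minimal sketch, under the assumption of a hypothetical vendor-neutral format; the names AgencyContext, VendorModel and answer_with_context are illustrative and are not part of any existing standard. The point is simply that the agency-specific context lives in its own portable layer, separate from whatever proprietary model a vendor ships.

```python
# A minimal sketch, not a real standard: agency-specific context kept in a
# portable, vendor-neutral structure, combined with any vendor's model at
# request time.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class AgencyContext:
    """Agency-specific knowledge in a hypothetical vendor-neutral format."""
    agency_id: str
    glossary: dict[str, str] = field(default_factory=dict)  # local terms and definitions
    policies: list[str] = field(default_factory=list)       # policy text or references
    examples: list[dict] = field(default_factory=list)      # curated question/answer pairs


class VendorModel(Protocol):
    """Whatever proprietary LLM a vendor ships; only this call surface is shared."""
    def generate(self, prompt: str) -> str: ...


def answer_with_context(model: VendorModel, ctx: AgencyContext, question: str) -> str:
    """Prepend the portable agency context to a request against any vendor model."""
    preamble = (
        f"Agency: {ctx.agency_id}\n"
        f"Policies: {'; '.join(ctx.policies)}\n"
        f"Local terminology: {ctx.glossary}\n"
    )
    return model.generate(preamble + "\n" + question)
```

In this sketch the vendor's model stays proprietary; only the thin context layer would need to follow a common specification.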

For smaller implementations, such as an Internet of Things (IoT) HVAC solution, proprietary vendor models are very similar and switching solutions is not a major concern. However, when implementing systems at scale, whatever the domain, the unique AI training models must somehow be made available to new solutions.

It is well known that the purpose of government agencies is to serve the public, and the systems they purchase should provide a return on investment and serve a good purpose. The problem today is that learning in a unique environment takes time (I call it “break-in” time). If governments cannot serve society as well because new systems are delayed by the slow AI ramp-up that comes with proprietary models and learning, the impact will be felt in both costs and benefits. Each system has a slightly different LLM, but they all share a common structural requirement: a unique training model, specific to the ecosystem in which the solution operates, that can help provide agency-specific context and outcomes.

If this learning can be captured in a standard secondary model or context engine design, it can be carried into a new solution. This would increase flexibility in purchasing new solutions while letting vendors retain the proprietary models that showcase their product features. I am convinced that this design improvement would reduce the risk and disruption involved in deciding to change solutions.
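Building on the earlier sketch (and the same hypothetical AgencyContext format, which is an assumption, not an existing standard), the portability described here could be as simple as exporting the context from the outgoing solution and importing it into the new one, rather than retraining from scratch.

```python
# Continuing the hypothetical sketch above: if the agency context follows a
# standard, serializable format, switching vendors means exporting it from the
# old solution and importing it into the new one.
import json
from dataclasses import asdict


def export_context(ctx: AgencyContext, path: str) -> None:
    """Write the agency context to a vendor-neutral JSON file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(ctx), f, indent=2)


def import_context(path: str) -> AgencyContext:
    """Load the same context into any new solution that honors the format."""
    with open(path, encoding="utf-8") as f:
        return AgencyContext(**json.load(f))
```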

It has become obvious that this is a problem, and OpenAI recently announced a model specification that could help. While this is great, I believe a public entity should be responsible for developing such a specification. I'm not a fan of too much oversight, but I do believe it's important to help guide AI to an acceptable level of maturity.

In January 2023, the National Institute of Standards and Technology (NIST) published its first AI Risk Management Framework. This is another great start, but much work remains on the technical specifications. The International Organization for Standardization (ISO) is also working on a project to provide artificial intelligence guidelines.

Another great initiative is the GovAI Coalition, led by the city of San Jose. The coalition aims to help public agencies at all levels of government below the federal level (see the GSA for federal agencies) create policies, templates and baseline documents that other agencies can use as a starting point when creating their own regional policies.

In an ideal world, governments would be able to solicit proposals from vendors and reduce AI “start-up” time by making these agency-unique models and context engines available across platforms. We need to build a solid base for AI technology before we dive too far into the deep end. Take, for example, the von Neumann architecture for general-purpose computing: yes, it has changed, but the basic concept has not. And an interesting fact: it was developed before the advent of the Internet!