
Like state and local governments before it, the federal government remains open to artificial intelligence

New federal action is accelerating work on comprehensive artificial intelligence (AI) policy, though progress at the federal level has been slower than in state and local governments.

Many state and local governments have implemented their own AI policies and guidelines, in part because of the lack of such a policy at the federal level. The federal government has actually taken action on AI through an executive order, but industry reactions have been mixed.

Last week, the White House announced progress on the executive order. Federal agencies have completed all required 270-day actions in the executive order on schedule, work on longer timelines is progressing, and the AI Safety Institute is seeking public comment on technical guidance for AI developers.

Additionally, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) has published policy recommendations on open AI models, as mandated by the executive order.

The NTIA’s report, Dual-Use Foundation Models With Widely Available Model Weights, calls on the U.S. government not to restrict the wide availability of open model weights in today’s largest AI systems, and instead to develop new capabilities to monitor for risks.

“Open-weight models,” according to a press release Tuesday, allow developers to build on existing work, making AI tools and their development more widely accessible to smaller companies, researchers, and even individuals.

The report’s authors call on the U.S. government to develop an ongoing program to collect and evaluate further evidence on the risks and benefits.

Such evidence, the report says, should include research on the security of AI models and on the current and future capabilities of dual-use foundation models. It also recommends establishing risk-specific metrics, informed by that evidence, along with benchmarks for when action is warranted. If action proves necessary, the report says, that could justify restricting access to models or other risk mitigation measures.

A specific focus on open models is rarely seen in AI policy at the state and local level. In the private sector, however, IBM has advocated for an open-source approach to AI development.

What’s more, state governments like Indiana’s have implemented AI policies that can evolve as needs change. Local governments, including Seattle and Boston, have also established short-term policies that position their cities to adapt as AI capabilities change.

Also this week at the federal level, the Senate advanced a bipartisan bill that would establish guardrails for AI procurement. The bill would require agencies to assess and address AI risks before purchasing the technology.

“The bipartisan PREPARED for AI Act lays a solid foundation by codifying transparency, risk assessments, and other safeguards that will help agencies make smarter, more informed procurement decisions,” Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, said in a statement.

The PREPARED for AI Act closely resembles California’s GenAI procurement, use and training guidelines. Those guidelines include mandates for the purchase and implementation of GenAI tools across state government, including ongoing monitoring and annual review.