Properly implement AI policy through a learning period moratorium

While some critics of artificial intelligence (AI) want to halt AI development, what is most needed today is a pause on the overzealous regulatory proposals that could destroy America’s leadership in computational science and algorithmic technologies. With more than 700 federal and state AI legislative proposals threatening to drown AI innovators in a tsunami of bureaucracy, Congress should consider adopting a “learning period” moratorium that would limit burdensome new federal AI mandates, as well as the emerging patchwork of inconsistent state and local laws.

The time to do so is now, as the race for AI supremacy against China heats up and other countries invest heavily to counter the United States. Burying our AI innovators in layers of bureaucracy would reduce domestic entrepreneurship and investment, deprive citizens of many life-enhancing innovations, and limit economic growth. Equally troubling is how over-regulation could weaken our technological base and potentially even our national security.

Mountains of bureaucracy

Unfortunately, many lawmakers seem unaware of these threats, advancing extreme AI proposals based on far-reaching hypotheticals and dystopian science-fiction plots. This fear-based thinking has led governments to propose far-reaching controls on algorithmic technologies. Colorado just became the first state to enact a comprehensive AI regulatory scheme, which Gov. Jared Polis (D) signed into law even though he worried that state laws like it could create a “complex compliance regime for all AI creators and implementers” and a patchwork of mandates that will “disrupt innovation and deter competition.” California is also rapidly moving on a major bill that would place onerous restrictions on “frontier” AI models and create a new bureaucracy to administer regulations.

Overregulation is also a threat at the federal level, with more than 100 AI-related measures pending in Congress. The Biden administration is simultaneously pursuing unilateral regulation of artificial intelligence through its “Blueprint for an AI Bill of Rights,” a massive, 110-plus-page executive order, and a litany of new agency directives based on vague notions of “algorithmic fairness.”

Much of this effort rests on the assumption that the government can preemptively legislate “responsible AI,” forcing innovators to take new ideas through a maze of bureaucrats to obtain permission before innovating. Earlier this year, a top technology official in the Biden administration called for a “government artificial intelligence audit system” and suggested the need for an “army of auditors” to ensure “algorithmic accountability.” The resulting layers of technocratic meddling could lead to a death-by-a-thousand-cuts scenario for AI creators.

Undermining the winning formula

This is the exact opposite of the more flexible, market-based approach that the Clinton administration and Congress wisely developed in the 1990s for the Internet, digital commerce, and online speech. Rooted in political restraint, this framework protected the freedom to innovate without needing a bureaucrat’s blessing to launch the next great app or speech platform.

If American innovators and values are to shape today’s most important technology, we cannot shoot ourselves in the foot as the global AI race heats up. Congress should put an end to overzealous micromanagement before it’s too late. In the past, lawmakers have used forbearance and moratorium requirements to protect innovation and competition, although with mixed success.

The Telecommunications Act of 1996 provided that no state or local statute, regulation, or other legal requirement may “prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service.” The Act included other specific limits on state and local regulation, as well as a provision requiring the Federal Communications Commission (FCC) and state regulators to forbear from regulating in certain cases to increase competition.

Another part of the Communications Act, intended to “encourage the provision of new technologies and services to the public,” states that any party opposing an innovation “shall have the burden to demonstrate that such proposal is inconsistent with the public interest” and requires the FCC to reach a decision within one year. Unfortunately, the FCC largely ignores both this provision and the forbearance requirements of the Telecommunications Act, continuing to overregulate the communications and media markets.

Federal moratoriums have done a better job of protecting new technologies from bureaucratic interference and excessive taxation. Congress passed the Internet Tax Freedom Act of 1998 (made permanent in 2016) to stop the spread of “multiple and discriminatory taxes on electronic commerce” and Internet access. Similarly, the Commercial Space Launch Amendments Act of 2004 ensured that federal regulators would not undermine the emerging market for commercial crewed spaceflight.

How to construct an AI moratorium and preemption

These and other provisions could provide a template for developing an AI regulatory moratorium or preemption. An AI learning-period moratorium should block the creation of any new regulatory bureaucracy for general-purpose AI, prevent new licensing regimes, bar open-ended algorithmic liability, and preempt the confusing state and local enactments that would disrupt the creation of a competitive national market in advanced algorithmic services.

An AI learning-period moratorium would bring many benefits. First, it would create space for new types of algorithmic innovation to develop. This is particularly important for smaller AI companies and the open source AI market, which could be decimated by premature over-regulation of a still-developing sector.

Second, a moratorium on AI regulation would give policymakers and technology experts a chance to determine which issues require greater analysis and potential regulation. This pragmatic approach to policy would limit the harm of hasty decisions and help us gain knowledge by testing predictions and policies before introducing new laws.

However, a learning-period moratorium on new AI regulations does not mean zero regulation. Many existing laws and regulations already cover AI-based practices that implicate civil rights, consumer protection, environmental protection, intellectual property, and national security. Policymakers can continue to enforce those policies where harms occur and fill gaps where necessary, or pursue less restrictive approaches such as transparency- and education-based measures.

A federal AI preemption standard will need to carve out some areas of traditional state authority, including education, insurance, and law enforcement. Drawing those lines will be a challenge, however, because AI, “the most important general-purpose technology of our time,” touches almost every field. For better or worse, some sectors and issues should be left to state and local governments.

Where a national framework proves unworkable, state and local governments should develop a harmonized, lightweight framework of their own – perhaps in the form of multi-state agreements – to avoid burdening the development of a highly competitive and innovative national market for AI companies and technologies.

Review of existing regulatory capacity

While formulating an AI moratorium, Congress should also require the federal government’s 439 departments and agencies to do two other things. First, agencies should study and review existing policies that may already address algorithmic innovations in their field, and consider how AI systems may already be over-regulated under current law. Second, agencies should identify additional ways AI technologies can help improve government services. (It would be prudent for state and local governments to conduct a similar review, although federal law need not require it.)

The Trump administration’s Office of Management and Budget (OMB) recommended some of these ideas to agency heads in a November 2020 guidance memo. Federal agencies “must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” the OMB memo said. Forbearance from new regulations may be appropriate to support AI innovation and development, and “agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.”

Unfortunately, following recent executive orders and announcements from the Biden administration, agencies have instead been encouraged to consider how to expand their regulatory ambitions toward artificial intelligence, even though Congress has not approved such actions.

Conclusion

For the United States to remain a global leader in algorithmic technologies and computational capabilities, AI policy must be based on patience and humility, not a rush to overregulate. Policymakers must avoid blocking America’s innovative potential and instead pause the panic-based AI regulatory policies being considered today.

As momentum builds in the next great technology race with China and the rest of the world, it is imperative that our nation establish the preconditions for growth and prosperity by re-embracing the culture of innovation that made us a world leader in advanced computing.