
The Asilomar conference and contemporary artificial intelligence controversies: lessons in regulation

Will humans still be important in the age of artificial intelligence (AI)?

The growing power of artificial intelligence is heading for a head-on collision with the long-standing foundations of higher education, work, and the law. Academic and political discourse has exploded, with concerns about AI models ranging from the grossly exaggerated to the rationally justified.

Amid this confusion, a growing chorus of tech leaders has proposed a moratorium on artificial intelligence development, hoping to create space for an appropriate ethical framework to catch up with the technology. The implication was that no further progress would be made until an acceptable set of standards for the safe use of AI was in place.

However, the prevailing “move fast and break things” ethos soon reasserted itself in Big Tech, and the development of AI models seemed to accelerate instead. Even Elon Musk, who initially expressed solidarity and signed the letter proposing the moratorium, launched an artificial intelligence startup just last year.

We now face a crossroads: do we allow artificial intelligence to overtake traditionally human-led efforts, or do we frantically install speed bumps?

Our decision is certainly not unique in the cascade of historical events that have defined the past century, and one controversy in particular comes to mind: the regulation of recombinant DNA technology. That debate, which once consumed geneticists and politicians alike, provides a useful historical precedent for regulating modern artificial intelligence research.

Recombinant DNA technology

In the early 1970s, recombinant DNA (rDNA) technology emerged as a pioneering innovation, boosted by grandiose claims that genetic engineering could create disease-resistant crops or artificially produced insulin with few side effects. However, concerns quickly arose about protecting the general public and laboratory personnel from the potential biological hazards of such experiments. Fears intensified that individual scientists would create deadly new diseases, or that some laboratory experiment would produce a variant of Frankenstein’s monster. These concerns were reinforced by vocal opposition from prominent scientists such as Dr. George Wald, co-winner of the 1967 Nobel Prize in Physiology or Medicine.

This rising wave of concern culminated in a moratorium on rDNA research in July 1974, advocated by leading experts in the rDNA regulatory movement. Chief among them was biochemist and gene-splicing pioneer Paul Berg, who led discussions between government and academia. In perhaps the most famous example of scientists regulating their own research, the moratorium reflected the intense struggle between innovation and ethics that dominated the second half of the 20th century. In many ways, the existential and moral struggles of Paul Berg and his colleagues resemble those that occupy contemporary proponents and opponents of artificial intelligence.

1975 Asilomar Conference

On February 24–27, 1975, the Asilomar Conference gathered over one hundred scientists, lawyers, and selected journalists on California’s Monterey Peninsula to discuss whether the moratorium on recombinant DNA technology should be lifted. Their question: what guidelines would permit safe, “contained” experiments and reduce the risks of rDNA technology?

Scientists recognized the need to approach rDNA technology in a way that not only addressed public concerns but also preserved autonomy in research. Self-governance was an attractive principle because it kept a chaotic patchwork of federal legislation out of scientific decision-making. Conference participants were nonetheless plagued by uncertainty, with many fearing that strict legislation would rush in to fill the policy void. As the discussions continued, Congress stood ready to impose strict regulations if no common standards were adopted by the end of the conference.

After many heated sessions of disagreement, several recommendations were combined into a tentative compromise: the use of biologically contained bacteria unable to survive outside the laboratory, the classification of experiments by required safety level, and the suspension of experiments involving known carcinogens, toxin-producing genes, and antibiotic resistance genes. Paul Berg took the lead in synthesizing and proposing these recommendations, which eventually led to the formal adoption of the Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules by the US National Institutes of Health (NIH).

The crux of the Asilomar debate centered on the appropriate level of interference and oversight for a governing body to exercise in uncharted territory. Conference participants praised their own fulfillment of their social responsibilities and hailed the avoidance of government-imposed regulation as a boon for future research. As one molecular biologist put it, it was “a truly amazing time for scientists who actually put restrictions on their work.” The Asilomar conference was initially perceived as a great success, and at one point was even considered the gold standard for self-regulation in science. It also produced an overall improvement in laboratory safety practices.

However, the efficacy and motives of the Asilomar conference have since been questioned. Many of the imagined genetic horrors that the convention was intended to prevent turned out to be exaggerated or technically infeasible, and the “restraint” shown by scientists has been met with increasing skepticism as time has allowed for reflection on Asilomar. Scientists were motivated in part by a desire to avoid regulation and maintain favorable public relations, as is evident in first-hand accounts from participating researchers. Many scientists feared that indecision and failure to agree on rigorous guidelines would invite “heavy legislation” from Congress, and that their agreed guidelines were “probably the fastest path to science that we know of.” Thus, the research restrictions approved at Asilomar were partly rooted in an ulterior motive: reducing the perceived need for government oversight.

Asilomar Conference as a Lens for Assessing Contemporary Debates on Artificial Intelligence

Echoes of these hidden agendas and calls to action can be seen in today’s artificial intelligence landscape, where technology leaders are once again calling for a moratorium. An open letter signed by more than 30,000 people, including Elon Musk and political economist Daron Acemoglu, shows how concerns have skyrocketed over “AI systems with human-competitive intelligence,” which the letter warns “can pose profound risks to society and humanity.”

The Asilomar AI Principles, developed at the 2017 Beneficial AI conference, echo the concerns expressed by participants in the 1975 convention. The decision to hold the AI conference at Asilomar, California, where regulation had fallen into the hands of genetics researchers just a few decades earlier, was no accident. The group of researchers led by Paul Berg provides a useful historical comparison for assessing measured responses to today’s AI tools.

A particularly critical lesson of the Asilomar conference on recombinant DNA is that oversight and innovation are not necessarily incompatible. The 1975 guidelines were implemented in ways that encouraged oversight, with new safeguards required at universities and the creation of bodies such as the US National Institutes of Health’s Recombinant DNA Advisory Committee. Public scrutiny also quickly followed the proceedings, as roughly 15 percent of Asilomar participants were members of the press, giving the public a glimpse into the decision-making process behind this extremely sensitive issue. The conference therefore increased the transparency of rDNA research for both regulators and the general public.

However, a key difference between the genetic-modification research of 1975 and today’s artificial intelligence technology lies in the institutions involved. In the 1970s, many scientists involved in recombinant DNA research worked within and collaborated across academic institutions. By contrast, most AI developers and software engineers today work for private companies, blurring the distinction between public responsibility and private-sector interests. This dilemma is not exclusive to artificial intelligence, as many issues in science and technology are entangled with vested economic interests.

A handful of powerful tech companies, such as OpenAI, currently drive the development of generative AI tools. Big Tech rivals can acquire AI-related intellectual property (IP), granting companies ownership of certain generative AI components and tools. That intellectual property may be held in reserve or used as a “competitive weapon in lawsuits against rivals.” The accumulation of these patents by Big Tech competitors further undermines the prospects for open cooperation and agreed rules of conduct.

For example, OpenAI has declined to share specific details about the training of its GPT-4 model, citing the “competitive landscape and the safety implications” of large-scale models. Chatbot developers have admitted that their AI tools have deep flaws, yet they feel compelled to release products quickly to stay ahead of rivals.

As several reports indicate, government policy risks “entrenching” the power of a few large technology companies rather than curbing it. Vigorous oversight spanning both the public and private sectors is increasingly necessary to ensure that appropriate legislation is adopted.

Last year, several experts warned Congress not to place the future of artificial intelligence solely in the hands of a few of the most powerful technology companies. Rigorous public oversight and scrutiny, combined with firm government regulation, are essential to ensure that transformative AI systems are developed responsibly and in the best interests of society.

Decision-making at the 1975 Asilomar conference was influenced by interests beyond the simple public good, including the desire to avoid regulation and preserve the freedom to continue experimenting, which made it harder to respond to more dire assessments of potential threats. Such self-interest should not be given priority in the age of artificial intelligence.

However, this dilemma does not mean that completely halting AI development is the only viable solution. Rather, it calls for installing speed bumps to slow the dangerous race toward ever more powerful, unpredictable AI models. The emphasis needs to shift toward increasing the accuracy and transparency of powerful artificial intelligence systems. AI developers need to revise their policies to encourage open collaboration with competitors and policymakers. And to counter the risk of self-interested regulatory standards, a robust AI governance framework is needed.

As with recombinant DNA technology, the choice before us is clear: do we actively shape the future of transformative AI, or do we let it shape us? The stakes could not be higher, and the lessons of the 1975 Asilomar conference loom large. We have the opportunity to enjoy a long “AI summer”, reaping the benefits of our innovations while developing them for the clear benefit of all and giving society time to adapt. Ethics must catch up with innovation. Let’s not rush unprepared into a dangerous fall.