
OpenAI’s New Fundraising Is Shaking Up Silicon Valley

All of this makes OpenAI sound like a typical tech sensation: a hot startup relying on intrepid investors to develop a new way of doing things that it hopes will change the world. Think Google, Facebook, or Uber. But its significance goes further. Generative artificial intelligence (AI), the technology OpenAI is built on, is changing the rules of the game in Silicon Valley itself.

The new technology poses three big challenges: many venture capital (VC) firms cannot afford the massive sums that companies like OpenAI need to train and run generative AI models; the technology scales differently from what they are used to; and it may rely on unfamiliar ways of making money. In short, generative AI is wreaking havoc in the home of America’s disruptive leaders. Enjoy the schadenfreude.

The first shock to venture capitalists is the size of the checks required to fund creators of large language models (LLMs) like the ones powering ChatGPT. The average size of a VC fund raised in America last year was about $150 million, according to PitchBook, a data firm. OpenAI is looking to raise more than 40 times that from investors. The biggest checks for LLMs are therefore being written not by the VC industry but by tech giants. Since 2019, Microsoft has invested $13 billion in OpenAI; Amazon has invested $4 billion in Anthropic, one of OpenAI’s main rivals.

The tech giants aren’t just offering money. Their cloud services provide the computing power to train startups’ LLMs and also distribute their products: OpenAI’s via Microsoft’s Azure cloud and Anthropic’s via Amazon Web Services. Microsoft is expected to invest more in OpenAI’s latest funding round. Apple (which will offer ChatGPT to iPhone users) and Nvidia (which sells OpenAI huge numbers of chips) are also likely to participate, as are sovereign wealth funds, a sign of just how much money it takes to get a seat at the table.

A few venture capitalists are undeterred by the high entry fee. OpenAI’s fundraising is led by Thrive Capital, a New York-based investment firm that has made other big bets on high-profile startups, including Stripe, a payments company recently valued at $65 billion. Big Silicon Valley investors like Sequoia Capital and Andreessen Horowitz helped underwrite some of the $6 billion raised in May by Mr. Musk’s xAI and contributed to the $1 billion raised this month by Safe Superintelligence, a model-building startup with scant revenue that is run by Ilya Sutskever, a co-founder of OpenAI.

But the sums involved mean that some VCs are adopting a new modus operandi. Traditionally, venture capital firms spread capital thinly across a range of startups, knowing that outsized returns from a few winners will more than cover the losses on the rest. In the era of generative AI, where startups with access to the most capital, computing power, data, and researchers have a big advantage, some are concentrating their bets on established startups rather than kissing a lot of frogs.

The second big challenge to recent VC practice is the way the new technology scales. Funding LLMs is starting to look more like the early days of Silicon Valley, when venture capitalists backed companies solving hard scientific problems, such as chipmakers, than like the more recent business of backing internet startups.

One of the venture mantras of the past decade has been “blitzscaling.” Because most internet companies’ software is cheap to build and even cheaper to run, startups can focus their money and attention on growing as quickly as possible. These days the talk is of “scaling laws”: the more computing power and data you throw at AI, the smarter the models become. That means investing a ton of money up front to develop a competitive product, or coming up with a new approach.

In a recent blog post, Ethan Mollick of the Wharton School at the University of Pennsylvania grouped the state-of-the-art LLMs into four loose “generations,” each requiring ten times more computing power and data than the previous one. He calculated that in 2022, when ChatGPT was released, training models typically cost $10 million or less. Some of the most cutting-edge models developed since then could cost $100 million or more. Those coming soon could cost $1 billion. He thinks training costs will eventually exceed $10 billion. While experts argue about how predictable these scaling laws are, training costs continue to rise (see chart).
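To make the arithmetic concrete, the back-of-the-envelope calculation below (a rough illustrative sketch in Python, not part of Mollick’s analysis; the ten-fold multiplier per generation and the $10 million starting figure are assumptions drawn loosely from his estimates) simply compounds the cost one generation at a time:

# Illustrative sketch: training cost if each model "generation" needs
# roughly ten times the compute, and money, of the one before it.
# The starting figure and the multiplier are assumptions, not measured data.
BASE_COST_USD = 10_000_000   # ballpark cost of a 2022-era training run
COST_MULTIPLIER = 10         # assumed cost growth per generation

cost = BASE_COST_USD
for label in ["Gen 1 (2022)", "Gen 2", "Gen 3", "Gen 4"]:
    print(f"{label}: roughly ${cost:,.0f} to train")
    cost *= COST_MULTIPLIER

# Prints $10m, $100m, $1bn and $10bn for the four generations, matching
# the rough trajectory described above.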

(Chart: The Economist)

Reasoning is also getting more expensive. On September 12, OpenAI introduced a pair of new models, called o1 (nicknamed Strawberry), that are designed to take multiple “reasoning” steps to produce a more accurate answer to a query, relying heavily on a process called reinforcement learning. (Ask the latest version of ChatGPT how many rs are in strawberry, and it immediately answers two. Turn on o1 and, after four seconds of what it calls “thinking,” it gives the correct answer of three.) This step-by-step approach, especially useful for complex subjects like math and science, improves as more computing power is devoted to working through the answer.

As LLMs become more computationally intensive, the people who develop them are furiously looking for ways to cut their costs. Meanwhile, many VC firms are being priced out of model-building altogether. Instead of putting money into the models themselves, some are funding startups that build on top of them, such as those offering coding tools, virtual healthcare, or customer support.

This is driving a third major change in the VC playbook, as the industry works out how startups that rely on expensive LLMs can become profitable. Digital advertising, Silicon Valley’s preferred monetization model for decades, is hard to incorporate into generative AI tools without undermining their credibility with users. Subscriptions can also be tricky. Software companies typically charge per user per month. But as companies deploy AI agents that do work once done by humans, the number of seats to charge for could shrink.

Impertinence

OpenAI still has its skeptics. They struggle to see how its revenue growth can justify such a stratospheric valuation, especially given the competition from smaller, cheaper models, some of which are at least partially open source. Big investments from deep-pocketed funds are often a sign of inflated expectations. Scientific breakthroughs in model-building could upend the industry. Skeptics also say that OpenAI’s rapid turnover of top talent underscores lingering concerns about corporate governance and safety, following the ouster and subsequent reinstatement of Mr. Altman less than a year ago.

It certainly won’t be easy for the would-be hectocorn to keep galloping ahead of its rivals. Anthropic, backed by Amazon, is investing heavily; Google, Meta, and xAI have strong offerings of their own. The competition is fierce. If the rest of Silicon Valley wants in on the action, it will have to think differently.

© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com