
This Week in AI: With Chevron’s Collapse, AI Regulation Seems Dead

Hello everyone, and welcome to TechCrunch’s regular newsletter dedicated to artificial intelligence.

This week in AI, the U.S. Supreme Court struck down “Chevron deference,” a 40-year-old doctrine on the authority of federal agencies that required courts to defer to agencies’ interpretations of the laws Congress passes.

Chevron deference allowed agencies to make their own rules when Congress left aspects of its statutes ambiguous. Courts will now be expected to exercise their own legal judgment instead, and the consequences could be far-reaching. Axios’ Scott Rosenberg writes that Congress (hardly the most functional body these days) must now effectively try to predict the future with its legislation, because agencies can no longer apply basic rules to new enforcement circumstances.

And that could put an end to attempts at nationwide regulation of AI once and for all.

Congress has already struggled to pass even a basic AI policy framework, to the point that state regulators on both sides of the aisle have felt compelled to step in. Any rules it writes now will have to be incredibly specific if they are to survive legal challenges, a seemingly intractable task given the speed and unpredictability with which the AI industry is evolving.

Justice Elena Kagan addressed the issue of artificial intelligence during oral arguments:

Imagine that Congress enacts an artificial intelligence bill and it has all kinds of delegations in it. Just by the nature of things, and especially the nature of the subject matter, there are going to be all kinds of places where, even though there’s no explicit delegation, Congress has in effect left a gap. … (Do) we want the courts to fill that gap, or do we want the agency to fill that gap?

Courts will now fill that gap. Or federal lawmakers will deem the exercise futile and shelve their AI bills. Either way, regulating AI in the U.S. just became orders of magnitude harder.

News

Google’s green AI costs: Google has released its 2024 Environmental Report, an 80-plus-page document detailing the company’s efforts to apply technology to environmental issues and to mitigate its own negative impacts. But it sidesteps the question of how much energy Google’s AI uses, Devin writes. (AI is notoriously energy-hungry.)

Figma disables design feature: Figma CEO Dylan Field said Figma would temporarily disable its “Make Design” AI feature after its output allegedly copied the design of Apple’s Weather app.

Meta changes AI label: After Meta began applying “Made with AI” labels to photos in May, photographers complained that the company was mistakenly tagging real photos. Meta is now changing the label to “AI info” across all of its apps in an attempt to placate critics, Ivan reports.

Robot cats, dogs and birds: Brian writes about how New York State is giving away thousands of robotic animals to seniors amid a “loneliness epidemic.”

Apple brings AI to Vision Pro: Apple’s plans go beyond the previously announced Apple Intelligence launches for iPhone, iPad, and Mac. According to Bloomberg’s Mark Gurman, the company is also working to bring the features to its Vision Pro mixed-reality headsets.

Research paper of the week

Text-generating models like OpenAI’s GPT-4o have become a staple of technology. It’s rare to come across an app that doesn’t use them these days, whether to write emails or to write code.

But despite their popularity, how these models “understand” and generate human-sounding text isn’t settled science. To peel back the layers, researchers at Northeastern University looked at tokenization, the process of breaking text down into units called tokens that models can work with more easily.

Today’s text-generating models process text as a sequence of tokens drawn from a fixed “token vocabulary,” where a token might correspond to a whole word (“fish”) or to a fragment of a larger word (“sal” and “mon” in “salmon”). The token vocabulary available to a model is typically determined before training, based on the characteristics of the data used to train it. But the researchers found evidence that models also develop an implicit vocabulary that maps groups of tokens, for example multi-token words like “northeast” and phrases like “break a leg,” to semantically meaningful “units.”
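To make the tokenization step concrete, here’s a minimal sketch using OpenAI’s open source tiktoken library. This is just for illustration, not something the paper itself uses, and the exact splits depend on the particular vocabulary a model was trained with:

import tiktoken

# Load the token vocabulary associated with GPT-4o. A vocabulary like
# this is fixed before training and doesn't change afterward.
enc = tiktoken.encoding_for_model("gpt-4o")

for word in ["fish", "salmon", "northeast"]:
    ids = enc.encode(word)                   # text -> token IDs
    pieces = [enc.decode([i]) for i in ids]  # token IDs -> text fragments
    print(f"{word!r} -> {len(ids)} token(s): {pieces}")

# Common words usually map to a single token; rarer words get split
# into sub-word fragments, roughly the "sal" + "mon" idea above.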

Building on this evidence, the researchers developed a technique to “probe” the implicit vocabulary of any open model. From Meta’s Llama 2, they extracted phrases like “Lancaster,” “World Cup players” and “Royal Navy,” as well as more obscure terms like “Bundesliga players.”
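The probing method itself isn’t reproduced here, but its premise is easy to sanity-check: those phrases span multiple tokens in Llama 2’s stock vocabulary. A quick sketch, assuming access to the gated meta-llama/Llama-2-7b-hf checkpoint on Hugging Face:

from transformers import AutoTokenizer

# Any Llama 2 tokenizer behaves the same way; the official checkpoint
# is gated, so access must be requested from Meta first.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

for phrase in ["Lancaster", "World Cup players", "Royal Navy"]:
    ids = tok.encode(phrase, add_special_tokens=False)
    print(f"{phrase!r} -> {len(ids)} tokens: {tok.convert_ids_to_tokens(ids)}")

# The paper's claim is that the model internally treats some of these
# multi-token sequences as single semantic units, even though the
# tokenizer never assigned them a vocabulary entry of their own.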

The work has not yet been peer-reviewed, but the researchers believe it could be a first step toward understanding how lexical representations are formed in models — and could serve as a useful tool for discovering what a model “knows.”

Model of the week

Meta’s research team has trained several models to create 3D assets (i.e., 3D shapes with textures) from text descriptions, suitable for use in projects like apps and video games. While there’s no shortage of shape-generating models, Meta says these are “state of the art” and support physically based rendering, which lets developers “relight” objects to simulate one or more light sources.

The researchers combined two models, AssetGen and TextureGen, both inspired by Meta’s Emu image generator, into a single shape-generating pipeline called 3DGen. AssetGen converts text prompts (e.g., “a t-rex in a green wool sweater”) into a 3D mesh, while TextureGen ups the “quality” of the mesh and adds textures to produce the final shape.

Image Credits: Meta

3DGen, which can also be used to retexture existing shapes, takes about 50 seconds from start to finish to generate one new shape.
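Meta hasn’t released a public 3DGen API, so the following is only a hypothetical outline of the two-stage pipeline the paper describes; the function names and the Mesh type are invented for illustration:

from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list = field(default_factory=list)  # 3D vertex positions
    faces: list = field(default_factory=list)     # triangle indices
    textures: dict = field(default_factory=dict)  # physically based rendering maps

def generate_mesh(prompt: str) -> Mesh:
    # Stage 1 (AssetGen's role): turn a text prompt into a rough 3D mesh.
    return Mesh()  # stand-in for the actual model call

def refine_and_texture(mesh: Mesh, prompt: str) -> Mesh:
    # Stage 2 (TextureGen's role): up the mesh quality and add PBR
    # textures so developers can "relight" the object.
    return mesh  # stand-in for the actual model call

def text_to_3d(prompt: str) -> Mesh:
    # The pipeline simply chains the two stages: AssetGen's output
    # becomes TextureGen's input.
    return refine_and_texture(generate_mesh(prompt), prompt)

asset = text_to_3d("a t-rex in a green wool sweater")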

“By combining (the strengths of these models), 3DGen achieves very high-quality synthesis of 3D objects from text prompts in less than a minute,” the researchers wrote in a technical paper. “Evaluation by professional 3D artists shows that 3DGen’s output is preferred over industry alternatives a majority of the time, especially for complex prompts.”

Meta seems poised to fold tools like 3DGen into its metaverse game development efforts. According to job listings, the company is looking to research and prototype VR, AR and mixed-reality games created with generative AI tech, including, presumably, custom shape generators.

Grab bag

Apple could gain an observer seat on OpenAI’s board as a result of the partnership between the two companies announced last month.

Bloomberg reports that Phil Schiller, Apple’s executive in charge of the App Store and Apple events, will join OpenAI’s board as a second observer after Microsoft’s Dee Templeton.

If the arrangement goes ahead, it would be a remarkable show of power by Apple, which plans to integrate OpenAI’s AI-powered chatbot platform ChatGPT into many of its devices later this year as part of a broader suite of AI features.

Apple reportedly isn’t paying OpenAI to integrate ChatGPT, the thinking being that the PR exposure is as valuable as, or more valuable than, cash. In fact, OpenAI may end up paying Apple: Apple is said to be mulling a deal under which it would receive a share of the revenue from any premium ChatGPT features OpenAI brings to Apple platforms.

As my colleague Devin Coldewey has pointed out, this puts OpenAI’s close collaborator and major investor Microsoft in the awkward position of effectively subsidizing Apple’s ChatGPT integration, with nothing to show for it. Apple is clearly getting what it wants, even if that means disputes its partners have to iron out.