
AI Briefing: Senators Propose New Privacy, Transparency, Copyright Laws

The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. Some lawmakers focused on the harms AI could accelerate, such as online surveillance, fraud, hyper-targeted ads and discriminatory business practices, while others warned that regulation could entrench tech giants and burden smaller companies.

Artificial intelligence could compound the consumer risks already associated with social media and digital advertising, according to U.S. Sen. Maria Cantwell (D-Wash.). Just as data fueled the growth of online advertising, Cantwell worries that tech companies will train AI models on sensitive data and use that information against consumers. She pointed to a restaurant in her home state that reportedly allocated reservations based on potential diners’ income data.

“If they don’t have enough money to buy a bottle of wine, they’ll give the reservation to someone else,” Cantwell said. “Without strong privacy laws, once the public data is exhausted, there’s nothing to stop them from using our private data… I’m very concerned that the ability to collect massive amounts of personal data about people and draw conclusions about them very quickly at very low cost could be used in harmful ways, such as charging consumers different prices for the same product.”

Cantwell and other lawmakers also hope to enact new federal transparency standards to protect intellectual property and guard against risks associated with AI-generated content. On Thursday, Cantwell and Sens. Marsha Blackburn (R-Tenn.) and Martin Heinrich (D-N.M.) introduced the COPIED Act, short for the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, bipartisan legislation meant to protect publishers, actors and other artists and to mitigate the risks of AI-generated disinformation.

The COPIED Act would direct the National Institute of Standards and Technology (NIST) to develop transparency standards for AI models, create standards for content provenance, including the detection and watermarking of synthetic content, and establish cybersecurity standards that prohibit tampering with content provenance data. The bill would also bar AI companies from using protected content to train AI models or generate content without permission, allow individuals and companies to sue violators, and empower the Federal Trade Commission and state attorneys general to enforce its provisions.

Blackburn said privacy laws and legislation like the COPIED Act are more important than ever in helping people protect themselves. She said proposals like the No Fakes Act are also needed to keep people from falling victim to AI deepfakes. “Who owns the virtual you?” she asked.

Major organizations have already backed the COPIED Act, including the News/Media Alliance, the National Newspaper Association, the National Association of Broadcasters, SAG-AFTRA, Nashville Songwriters Association International and the Recording Academy. According to the bill’s text, the COPIED Act would apply to platforms, including social media companies, search engines, content platforms and other technology companies, that generate more than $50 million in annual revenue and have at least 25 million users for more than three months.

One of the expert witnesses who testified at Thursday’s hearing was Ryan Calo, a law professor at the University of Washington and co-founder of the UW Tech Policy Lab. He argued that companies were already experimenting with using customer data to charge different prices, citing examples like Amazon charging repeat customers more and Uber showing users higher prices when their phone battery was low. “This is the world of using AI to extract consumer surplus, and it’s not a good world. And it’s one that can be solved with data minimization,” he said.

Calo and other witnesses said new data minimization laws could help protect consumers from having their data collected, shared and misused. Udbhav Tiwari, Mozilla’s director of global product policy, said building privacy features into AI models early could help. Another witness, Amba Kak, co-executive director of the AI Now Institute, warned that something as subtle as the tone of someone’s voice could be used to predict different outcomes.

“You don’t have to be clairvoyant to see that all roads could lead us to the same advertising technologies that got us here,” Kak said. “This is the moment to act.”

Without federal privacy laws, people can’t know who has their data or how it’s being used, said Sen. Jacky Rosen (D-Nev.). Without uniform regulations, she said, “the data supply chain is riddled with holes.”

Some lawmakers have warned that AI regulations could inadvertently hurt small businesses. Another expert witness, Morgan Reed, president of ACT | The App Association, which represents thousands of app developers and connected-device makers, said a single federal privacy law would make compliance easier for small businesses than navigating a growing patchwork of state privacy laws. Reed noted that AI and privacy rules affect not just small tech companies but also small businesses that use technology.

“The reality is that small businesses have been faster adopters of (AI),” Reed said. “More than 90% of my members are using generative AI tools today, which translates to an average 80% increase in productivity. And our members who are developing these solutions are more agile than their larger competitors… Their experiences should play an important role in informing policymakers about how new laws should address AI development and use.”

U.S. Sen. Ted Cruz, a Texas Republican, was among the committee members who cautioned against sweeping regulation of AI. During his opening remarks, Cruz acknowledged the need for federal rules on AI and privacy, but said the regulations should be more focused on solving specific problems. One example is the Take It Down Act, a bipartisan bill he co-sponsors with U.S. Sen. Amy Klobuchar, a Minnesota Democrat. The bill, introduced last month, would target bad actors who create and publish AI-generated, explicit deepfakes of real people.

“Our goal should not be to enact any single standard for data protection, but to create an appropriate standard that protects privacy while not stifling American technological innovation,” Cruz said.

Prompts and Products: AI News and Announcements

  • AWS and Writer have unveiled new tools for their separate platforms that aim to make it easier to build enterprise-class generative AI applications and improve their accuracy.
  • eBay has introduced new advertising tools, including AI-generated campaign recommendations based on market trends.
  • The House Judiciary Committee has accused the Global Alliance for Responsible Media (GARM) of violating antitrust laws by using its market influence to steer advertisers away from right-wing platforms.
  • The U.S. Department of Justice announced the results of an investigation accusing Russian entities of using artificial intelligence-generated images and text to spread disinformation on social media platforms, including X (Twitter).
  • Microsoft gave up its observer seat on the OpenAI board, and Apple abandoned plans to take a similar observer role.
  • Omnicom has launched a new artificial intelligence (AI)-powered content platform called ArtBotAI that uses advanced language models to help marketers optimize creative for campaigns.
  • Anthropic has unveiled a new way for developers to experiment with prompts when building generative AI applications with the startup’s Claude AI models.
