
Stable Diffusion Used to Create AI-Generated Child Sexual Abuse Material

The Prompt is a weekly digest of the buzziest AI startups, biggest breakthroughs, and business deals. To get it in your inbox, sign up here.

Welcome back to The Prompt.

Another artificial intelligence (AI) startup has been (partially) acquired by a tech giant.

Amazon announced Friday that it was hiring the co-founders and about a quarter of the employees of AI robotics firm Covariant. The e-commerce giant also obtained a non-exclusive license to the company’s AI models, which it plans to integrate with its fleet of industrial robots. Founded in 2017, Covariant has raised more than $240 million in funding from investors including Index Ventures and Radical Ventures.

The announcement follows similar deals over the past few months, with major tech companies hiring founders and teams from popular AI startups like Inflection, Adept, and Character AI.

Now let’s move on to the headlines.


ETHICS + LAW

Facial recognition company Clearview AI was fined $30 million by the Dutch data protection authority for scraping billions of photos of people from the internet without their knowledge or consent and building an “illegal database” of photos. Jack Mulcaire, Clearview’s chief legal officer, said the company has no customers in the EU and that the decision is “unlawful.” The company’s facial recognition tools have been used by law enforcement agencies in hundreds of child abuse cases, Forbes reported last year.

Two voice actors, Karissa Vacker and Mark Boyett, have sued AI voice generation startup ElevenLabs, alleging it used hours of copyrighted audiobook narration to create customized synthetic voices that sound similar to their own and to train its underlying AI model on the recordings. According to the filing, the company removed one of the AI-generated voices from its platform last year after the actor contacted it, but a “technical challenge” prevented it from removing the voice from its API for months, allowing other websites to create duplicates of the voice. The company did not respond to Forbes’ request for comment.

POLITICS + ELECTIONS

Convicted fraudsters and conspiracy theorists Jacob Wohl and Jack Burkman used fake names to secretly launch an AI lobbying firm called LobbyMatic, Politico reports. The duo also falsely claimed in demo screenshots that companies like Microsoft, Pfizer and Palantir had used the AI platform to generate insights and analyze legislation, according to 404 Media. Late last year, the company also created a fake profile to post blogs on Medium.

AI DEAL OF THE WEEK

ChatGPT creator OpenAI is in talks to raise several billion dollars in a round that would value the AI giant at $100 billion, the Wall Street Journal reported last week. Investment firm Thrive Capital, founded by billionaire Josh Kushner, is leading the round and plans to invest $1 billion in the company. Tech giants like Apple, Nvidia and Microsoft are reportedly also taking part in the round.

It’s also worth noting that AI coding startup Codeium, which was named to the Next Billion Dollar Startups list in August, has raised $150 million at a valuation of $1.25 billion.


DEEP DIVE

For many children visiting Disney World in Orlando, Florida, it was the trip of a lifetime. For the man who filmed them with a GoPro, it was something more nefarious: an opportunity to create child abuse images.

The man, Justin Culmo, who was arrested in mid-2023, admitted to creating thousands of illegal images of children filmed at the amusement park and at least one high school using a version of the Stable Diffusion AI model, according to federal agents who presented the case to a group of law enforcement officers in Australia earlier this month. Forbes obtained details of the presentation from a source close to the investigation.

Culmo has been charged with a series of child abuse offenses in Florida, including abusing his two daughters, secretly filming minors and distributing child sexual abuse material (CSAM) on the dark web. He has not been charged with producing AI CSAM, which is also a crime under U.S. law. At the time of publication, his lawyers had not responded to requests for comment. He pleaded not guilty last year, and a jury trial is scheduled for October.

“This is not just a blatant invasion of privacy, it is a deliberate attack on the safety of children in our communities,” said Jim Cole, a former Homeland Security agent who tracked the defendant’s online activity during 25 years as a child exploitation investigator. “This case clearly demonstrates the ruthless exploitation that AI can enable when used by someone with the intent to do harm.”

The case is one of a growing number in which AI has been used to transform photos of real children into realistic depictions of abuse. In August, the Justice Department unveiled charges against Army soldier Seth Herrera, accusing him of using generative AI tools to create sexualized images of children. Earlier this year, Forbes reported that Wisconsin resident Steven Anderegg had been accused of using Stable Diffusion to produce CSAM from images of children that were ordered on Instagram. In July, British nonprofit the Internet Watch Foundation (IWF) reported that it had detected more than 3,500 AI CSAM images online this year.

Read the full story on Forbes.


WEEKLY DEMO

AI-generated five-star reviews are flooding mobile app stores and smart TVs, according to media transparency firm DoubleVerify, making it harder to decide which apps are worth downloading. Scammers use AI tools to give high ratings to fraudulent apps that constantly display ads, even when the phone is off, to make money. But some warning signs, such as unusual formatting and similar writing styles across different reviews, can help you spot fake app reviews.


AI INDEX

200 million

People are using ChatGPT at least once a week, OpenAI said. That’s twice as many users as it announced last November.


MODEL BEHAVIOR

An AI assistant called Lindy AI recently rickrolled a human customer who asked for a video tutorial on how to set up the assistant. In response, the email chatbot hallucinated and pulled the classic internet prank, directing the customer to Rick Astley’s 1987 music video “Never Gonna Give You Up.”