AI trained on photos of children highlights need for regulation

Published: July 11, 2024

Photo from This Is Engineering via Pexels

Artificial Intelligence Learns from Your Kids’ Photos

Author: Movieguide® Contributor

Last month, Wired exposed an AI imaging tool that was using photos of children without their consent — a development that underscores the growing need for tighter regulation of AI companies.

“So AI and all of these technologies can be the best thing that happens to humanity, helping us solve all of these really hard problems, and they can be the worst thing that happens to us. They can put our children at risk, they can put the world at risk… and now is the time when we have to navigate toward those better outcomes,” AI expert Jamie Metzl told Fox News.

A Wired article, citing a report by Human Rights Watch, explained that more than 170 photos of children were “repurposed” and used to train artificial intelligence.

“Their privacy is violated first when their photo is scraped and put into these datasets. Then these AI tools are trained on that data and therefore they can create realistic images of children,” said Hye Jung Han, a child rights and technology researcher at Human Rights Watch. “The technology is being developed in such a way that any child who has any photo or video of themselves online is now at risk because any malicious actor can take that photo and then use these tools to manipulate it in any way they want.”

As artificial intelligence becomes more widespread, situations like this will become more common.

“We have these large AI systems, these large learning models, and the way they learn is by taking all the digital information they can capture, and the more information — the more data they can capture — the smarter they become, and that means they’re going to consume the internet,” Metzl added. “So we need the right kind of governance and regulation to protect us, it’s not that we should just let these companies run wild, there has to be regulation, and there isn’t.”

Fortunately, many AI experts agree with Metzl. Last summer, leading AI figures, including OpenAI CEO Sam Altman, testified before Congress about the need for strong regulation of the industry.

In response, President Biden signed an executive order in October 2023 aimed at protecting Americans from the spread of AI-generated misinformation and disinformation, while also providing funding for government agencies to develop strong defenses against the negative aspects of this technology.

While the executive order was a step in the right direction, more work clearly needs to be done to steer the technology toward positive uses, especially given its role in creating deepfakes that are being used to blackmail millions of people.

Movieguide® previously reported:

Artificial intelligence makes it easier for pedophiles to threaten and abuse children, while making it harder for law enforcement to identify perpetrators.

“The use of AI to sexually abuse children will make it harder for us to identify real children who need protection and will further normalise abuse,” Graeme Biggar, director-general of the UK’s National Crime Agency (NCA), said in a recent speech. “And that matters because we estimate that viewing these images – real or AI-generated – significantly increases the risk that offenders will move on to sexually abuse children themselves.”

“There is no doubt that our job is becoming more difficult as big tech companies expand their services, including implementing end-to-end encryption, in ways they know will make it harder for law enforcement to detect and investigate crimes and protect children,” he continued.

With generative AI, sex offenders can now easily create hyper-realistic child sexual abuse images and videos that can be easily viewed and shared.
