
Age verification comes to social media as the end of the age of unregulated use approaches

If trends continue, social media will follow cigarettes’ lead: an activity that benefited early on from lax age verification laws but has over time been identified as a public health threat, leading to increasingly stringent regulations. Moves to impose appropriate age restrictions on social media platforms have prompted some companies to tighten age safeguards. But as with cigarettes, they have also prompted increased lobbying efforts funded by big tech companies that have a lot to lose among young users.

A UK report shows improvement following the introduction of the children’s code

A new report shows that legislation and increased regulation in the UK are having an impact as social media companies make what researchers call “a range of improvements” to protect children’s safety and privacy. A statement from the London School of Economics and Political Science (LSE), which collaborated on the report through its Digital Futures for Children center, said its team discovered 128 changes related to children’s safety and privacy made by Meta, Google, TikTok and Snap between 2017 and 2024.

Many of them appear to have been motivated by the Age-Appropriate Design Code (AADC), which came into force in 2021. That year, the four companies introduced a total of 42 changes.

More than 60 percent of the 128 changes concerned default settings – that is, products shipped to children with safer, more private configurations out of the box rather than relying on users to change them.

“This report illustrates the effective impact of regulation in protecting children’s safety and privacy online,” says Steve Wood, founder of PrivacyX Consulting, former Deputy Information Commissioner and author of the report. “The study highlights a shift toward substantive design changes that include security by default – from private account settings to limits on targeted advertising.”

However, there is still concern that the reliance on parental controls – the second most common type of protection measure identified – means many of the potential benefits go unrealized, since relatively few parents actually use them.

Wood says the team aims to repeat the study in 2025 to assess further progress. Meanwhile, the researchers made eleven recommendations for improving child safety laws and regulations, included in the report “The impact of regulations on children’s digital lives.”

$6.5 million age assurance trial begins in Australia

Australia’s age assurance trial is set to begin. According to a LinkedIn post from the Age Verification Providers Association (AVPA), the A$6.5 million (US$4.3 million) study will examine “both age verification and age estimation technologies to examine their effectiveness in protecting children from exposure to pornography and other harmful content on the internet.”

AVPA hopes the study will answer some of the lingering questions plaguing age assurance technologies, such as whether they threaten user privacy, how much they cost, and whether children will simply use VPNs to bypass age checks. “It is essential that a broad range of stakeholders from all states and territories are closely involved in the design, conduct and evaluation of each study,” the post says. AVPA recommends that civil society groups, industry associations, a range of regulatory bodies and affected platforms, apps and websites be represented on the advisory board.

The organization warns against duplicating studies already conducted elsewhere, particularly in the EU. It notes that eSafety’s age verification roadmap already incorporates the results of a similar European Commission-funded project from 2021–2022, euConsent, which produced AgeAware. “It was a simple proof of concept that showed that it was possible to check age on one website once and then reuse it on other sites, even if the age checking service was provided by a competing provider. While the results of this have been very positive, there is no point in simply repeating it in Australia as it would provide very little marginal benefit.”

Australia has indicated it is moving towards what eSafety Commissioner Julie Inman Grant calls a “double-blind tokenization approach.” The AVPA notes that euConsent has announced its intention for its AgeAware program to “lead the age assurance industry toward using an ecosystem in which age assurance providers create tokens that users can store on their smartphone, tablet, computer, or any other connected device, and digital services can then check whether users meet the age requirements.”
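To make the token idea concrete, here is a minimal, purely illustrative sketch of the flow the AVPA describes: an age assurance provider issues a signed token asserting only a yes/no age claim, and a digital service verifies it without learning anything else about the user. This is not the actual euConsent or AgeAware protocol; the key, claim fields, and HMAC scheme below are stand-in assumptions (a real deployment would use public-key signatures and a published specification).

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared key for illustration only; a real scheme would
# use asymmetric signatures so the verifier holds no signing secret.
PROVIDER_KEY = b"demo-secret-shared-with-verifier"

def issue_token(meets_threshold: bool, threshold: int = 16) -> str:
    """Age assurance provider issues a token carrying only a boolean
    claim -- no name, birthdate, or identity is included."""
    claim = {"over": threshold, "ok": meets_threshold,
             "exp": int(time.time()) + 3600}  # short-lived token
    body = base64.urlsafe_b64encode(json.dumps(claim).encode())
    sig = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def check_token(token: str, threshold: int = 16) -> bool:
    """Digital service verifies the signature and reads only the
    yes/no claim; it learns nothing else about the user."""
    body, sig = token.split(".")
    expected = hmac.new(PROVIDER_KEY, body.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or corrupted token
    claim = json.loads(base64.urlsafe_b64decode(body))
    return (claim["ok"] and claim["over"] >= threshold
            and claim["exp"] > time.time())

token = issue_token(True)
print(check_token(token))  # True
```

The “double-blind” property the commissioner refers to goes further than this sketch: the provider should not learn which service checked the token, and the service should not learn which provider issued it.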

The age assurance debate on social media could open the door to porn

Most people would agree that hardcore pornography should be age-restricted. The question is less clear-cut for social media platforms, most of which technically have age restrictions – Instagram, for example, uses age checks from Yoti – but many of which also have a large share of users who are too young under the platforms’ own policies.

The Australian trial is examining whether the current standard cut-off age of 13 for social media users should be raised to 16. Even then, says Iain Corby, executive director of the AVPA, younger users may still slip through. “For facial age estimation, the average error of the best-in-class is a year and a half, so if we tried to control for 16-year-olds’ access, we’d have to expect a fair number of 14-year-olds and quite a few 15-year-olds to get through,” says Corby.
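Corby’s point can be illustrated with a rough simulation. Assuming, purely for the sake of illustration, that estimation error is zero-mean Gaussian noise with a mean absolute error of 1.5 years (the figure he cites; for a Gaussian, sigma = MAE × √(π/2)), the sketch below estimates how often underage users would clear a 16-year cutoff. Real estimator error is not Gaussian and varies by age and demographic, so treat the numbers as a back-of-the-envelope check, not a benchmark.

```python
import math
import random

# Assumed error model: zero-mean Gaussian with MAE = 1.5 years.
# For a zero-mean Gaussian, sigma = MAE * sqrt(pi / 2).
MAE = 1.5
SIGMA = MAE * math.sqrt(math.pi / 2)  # about 1.88 years
random.seed(0)

def pass_rate(true_age: float, cutoff: int = 16,
              trials: int = 100_000) -> float:
    """Fraction of users of a given true age whose *estimated* age
    clears the cutoff under the assumed error model."""
    passed = sum(1 for _ in range(trials)
                 if true_age + random.gauss(0, SIGMA) >= cutoff)
    return passed / trials

for age in (14, 15, 16):
    print(age, round(pass_rate(age), 3))
```

Under these assumptions, roughly one in seven 14-year-olds and close to a third of 15-year-olds would be waved through, while 16-year-olds are rejected about half the time – which is why deployments typically pair estimation with a buffer age or a fallback verification method.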

But some believe adding social media to the debate could slow efforts to restrict access to pornographic sites in Australia, which government officials have linked to a nationwide crisis of violence against women. An article in the Courier Mail quotes Melinda Tankard Reist, movement director of Collective Shout, as saying the issue is “too urgent” to wait on the social media question. “Big tech companies have done untold damage,” she says. “Every day of delay means millions more children are at risk.”

New York bills fought with nearly $1 million in lobbying

In the US, social media companies are showing their less compliant side, backed by deep lobbying pockets. The New York Post reports that Google and Meta are leading a “fierce push to destroy New York legislation aimed at protecting children online.”

According to the Post, big tech companies and their allies have already spent $823,235 lobbying lawmakers to kill Senate Bill S7694, which would establish the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, and Senate Bill S7695A, which would establish the New York Children’s Privacy Act. Among other things, the bills would tighten the rules around algorithmic feeds for younger users.

Danny Weiss, chief advocacy officer at Common Sense Media and a supporter of the bills, says the companies are “spending a lot of money to oppose these bills as if they were an existential threat to New York.”

The biggest spender in the lobbying push? Meta, the parent company of Facebook and Instagram, which issued a statement opposing the bills on the rather flimsy grounds that “teens move interchangeably between multiple sites and apps, and different laws in different states will mean inconsistent experiences for teens and their parents” online.

