FTC’s social media report needs a rewrite

The staff report in a nutshell

On September 19, 2024, the FTC voted 5–0 to release a staff report on the data collection and use practices of major social media and video streaming services.

The report assessed information collected by FTC staff from nine companies, including some of the largest social media and video streaming services: Amazon.com, Inc., which owns the Twitch gaming platform; Facebook, Inc. (now Meta Platforms, Inc.); YouTube LLC; Twitter, Inc. (now X Corp.); Snap Inc.; ByteDance Ltd., owner of the video-sharing platform TikTok; Discord Inc.; Reddit, Inc.; and WhatsApp Inc.

The orders requested information about how the companies collect, track, and use personal and demographic information; how they determine which advertisements and other content are shown to consumers; whether and how they apply algorithms or data analysis to personal and demographic information; and how their practices affect children and teenagers.

According to FTC Chair Lina Khan:

“The report shows how social media and video streaming companies are collecting vast amounts of Americans’ personal data and making billions of dollars a year from it. While these practices are lucrative for companies, they can endanger people’s privacy and freedoms and expose them to a range of harms, from identity theft to harassment. Of particular concern is that several companies are failing to adequately protect children and teenagers online. The report’s findings are timely, especially as state and federal policymakers consider legislation to protect people from data breaches.”

The report outlines specific staff concerns about the social media and video streaming companies’ operations, including:

· Collection and indefinite retention of troves of data, including information from data brokers, about both users and non-users of their platforms, subject to “woefully inadequate” data management controls;

· Company business models that encourage mass collection of user data for data monetization, especially through targeted advertising, posing risks to user privacy;

· Inadequate corporate oversight of automated, data-powered systems, and obstacles that prevent users from opting out of having their data used;

· Failure to adequately protect children and teenagers who use social media and video streaming sites; and

· The risk that companies amassing large amounts of user data will achieve anti-competitive market dominance, which could lead them to prioritize data acquisition at the expense of user privacy.

The report’s key recommendations include:

· Congress should enact comprehensive federal privacy legislation to limit surveillance, address basic protections, and provide consumers with data rights;

· Congress should enact additional legal privacy protections for children and teenagers over the age of 13;

· Companies should limit data collection, implement specific and enforceable data minimization and retention policies, limit data sharing with third parties and affiliates, delete consumer data when no longer needed, and adopt consumer-friendly privacy policies that are clear, simple and easily understandable;

· Companies should not collect sensitive information through privacy-invasive ad-tracking technologies; and

· Companies should carefully review their policies and practices regarding ad targeting based on sensitive categories.

Staff report issues

FTC staff could have provided a neutral description and analysis of the data collection practices of the major social media and streaming companies. Instead, the staff report reads primarily as a litany of concerns about theoretical (not actually demonstrated) harms to consumers. The report also essentially ignores the significant economic benefits of business data use.

It is worth noting that although FTC Commissioners Melissa Holyoak and Andrew Ferguson agreed to release the report, each issued a separate statement dissenting from important aspects of the report’s analysis.

Commissioner Holyoak’s three key concerns

Noting that “businesses pay close attention to what the Commission votes,” Commissioner Holyoak highlighted three concerns:

“First, the (R)eport may implicate freedom of speech on the Internet. To the extent the (R)eport’s analysis concerns protecting children and teenagers online, or content that is plainly harmful (e.g., content promoting self-harm), I am deeply sympathetic. As previously noted, the (R)eport says it does not “support any attempts to censor or moderate content based on political views.” But because parts of the (R)eport cover how social media companies design or modify their algorithms and artificial intelligence in ways that could affect content recommendation or moderation, I have serious concerns.

Second, the so-called “recommendations” in the (R)eport in fact seek to regulate private conduct through sub-regulatory guidance, and at times misstate the requirements of applicable law. We should not dictate, or otherwise attempt to change, private-sector behavior in a guidance document.

Third, I note that, despite the many descriptive observations the (R)eport makes, key factual and policy questions remain to be investigated. More analysis is needed before we can conclude that the (R)eport’s recommendations will, without qualification, ultimately produce good outcomes for consumers or competition. Indeed, it is particularly worrying that these recommendations overlap with the Commission’s pending (2022) rulemaking (on commercial surveillance and data privacy) and appear designed, at least in part, to lay the groundwork for such a rulemaking while circumventing, and potentially undermining, the public process.”

Commissioner Ferguson’s concerns

Commissioner Ferguson’s criticism of the report focused on its treatment of targeted advertising and artificial intelligence and noted its omission of any discussion of political censorship:

“The (R)eport’s claim that consumers may face “significant risk[s]” and suffer “extreme harm[]” from viewing targeted advertising is unfounded and amounts to an unwarranted attack on the online economy designed to justify heavy-handed regulation.

The (R)eport also calls for expanding AI safety departments at these companies and giving the bureaucrats who run them binding authority over the engineers and business leaders who actually innovate and create new products. Conveniently, no mention is made of the stunningly poor decisions being made by the AI safety bureaucracy, which are already having a detrimental impact.

Equally disappointing is what the (R)eport left out. . . . The (FTC) orders (requests for information) asked about the companies’ content moderation policies, but the (R)eport says nothing about the pervasive political censorship and election interference carried out, under the guise of “content moderation,” by the companies under investigation. There is no mention in the (R)eport of the banning of politicians (including Donald Trump during his term as President of the United States), of the removal and demonetization of users who question the Silicon Valley political consensus, or of one of the most brazen acts of election interference in recent history: the social media companies’ coordinated suppression of the Hunter Biden laptop story in the run-up to the 2020 presidential election.”

The report’s shortcomings: an overall assessment

The staff report provides new information about social media companies’ data collection and use practices that may interest future researchers and Congress. That is valuable.

However, the report offers highly critical, subjective characterizations of business practices, along with recommendations that merely reflect those characterizations. It provides no empirical support for its conclusions, and it ignores economic analysis pointing to the beneficial aspects of the very practices it criticizes.

In particular, the report largely ignores the substantial benefits of data retention and, especially, of targeted advertising. More effective targeted advertising and the retention of data for monetization enable digital companies to improve the performance and quality of platform services, benefiting individual and business users alike.

Additionally, staff report language that casts doubt on data retention, without focusing on specific harms in specific cases, may discourage efficient and profitable uses of platform data. That would slow innovation and economic growth tied to digital platforms.

Overall, the report reflects an unfounded belief that a benevolent government knows better than successful digital companies and can “improve” those platforms’ practices, an example of what the eminent economist Harold Demsetz called the “nirvana fallacy.”

Finally, it is ironic that the report dwells on the risks of data sharing without recognizing that many antitrust interventionists indirectly endorse data sharing by promoting platform interoperability as a way to increase competition.

Next step for the FTC

The severe criticism of the staff report’s analysis by two of the five FTC commissioners, combined with the report’s lack of rigor, its subjectivity, and its analytical shortcomings, severely limits its usefulness. In its current form, the report reflects poorly on the FTC. The Commission should direct staff to rewrite it, eliminating its subjective conclusions and recommendations. The new report should also include an economic analysis discussing the social benefits and costs of data retention and use.