Good Faith Assumption on the Internet

A scholar argues that online platforms should support positive user experiences through self-regulation.

Disinformation, cyberbullying, and online harassment – these bad faith acts are probably the first that come to mind for many people when they consider what characterizes today’s online environment.

Despite the prevalence of bad faith behavior, many online platforms assume that users are acting in good faith. For example, Wikipedia states on its project page that assuming good faith “is the assumption that editors’ changes and comments are made in good faith – that is, the assumption that people do not intentionally try to harm Wikipedia, even if their actions are harmful.”

But is it time for policymakers and online platforms to reconsider the assumption of good faith?

At least one law professor says no. Policymakers should not intervene, and platforms should maintain their good faith assumptions, argues Eric Goldman in a recent article.

Goldman, a professor at Santa Clara University School of Law, explains that government intervention can be counterproductive in curbing bad faith actions. Instead, he argues that online platforms should self-regulate, and he proposes several tools they could use to foster a positive online community.

Goldman identifies two features of the early Internet that allowed online platforms to assume that users were acting in good faith: the homogeneous demographics of platform users and their small numbers. Goldman argues that the lack of user diversity made it easier for platform designers to anticipate and discourage bad faith activity. Users were also less likely to engage in bad faith activities because, with a relatively small user population, there was less money and fame at stake.

However, over the past three decades, platform users have diversified and the number of users has increased, making it difficult for online platforms to continue to rely on users’ good faith, Goldman argues.

Goldman acknowledges that bad faith actors dominate today’s online communities. They spread disinformation for political or financial gain. Additionally, they exploit the anonymity of the online environment to engage in illegal activities such as cyberbullying or harassment.

Goldman highlights the fact that in the face of rampant harassment and disinformation online, policymakers around the world are increasingly requiring online platforms to manage user-generated content. For example, the UK’s Online Safety Act imposes a duty of care on online platforms to prevent harmful behavior. Similarly, the EU Digital Services Act requires online platforms to mitigate harm from user-generated content by establishing a reporting system.

Goldman argues that such regulations are problematic because they undermine the assumption of good faith rather than encourage it. Because bad faith activities inevitably occur on the Internet, Goldman notes, such regulations force online platforms to view each user as a potential source of liability. As a result, online platforms must tighten their content moderation practices, deterring good faith actors along with bad faith ones, Goldman explains.

Instead, Goldman argues that the US regulatory framework – Section 230 of the Communications Decency Act – is more appropriate than its European counterparts because it allows online platforms to manage user-generated content through self-regulation.

Section 230’s provisions ensure that platforms will not be liable for user-generated content they distribute, or for decisions to remove offensive or harmful user-generated content – such as obscenity, violence, and harassment – provided that the platforms act in good faith.

This structure allows online platforms to assume users’ good faith without imposing liability for the inevitable bad faith actions, Goldman argues. Furthermore, Goldman explains that Section 230 gives online platforms incentives to experiment with different site designs to attract good faith users and combat bad faith actions, knowing that they are immune from liability.

Because Section 230 allows online platforms to use a variety of tools to attract good faith users through self-regulation, Goldman argues that online platforms’ design choices should be guided by the platforms’ business objectives, not legal considerations. Online platforms have a strong incentive to respond to user interests because they benefit from user engagement. Ultimately, users benefit when platforms are able to determine the best solution for their communities, Goldman says.

Goldman proposes several self-regulatory mechanisms that he believes would enable online platforms to detect and deter bad faith actors.

First, online platforms can adopt a “trust and safety by design” approach. Trust and safety refers to a set of business practices that protect platform users from harmful content and behavior. Under this approach, the platform’s internal trust and safety and content moderation teams work during the early stages of platform development to minimize the presence and impact of bad faith actors once the platform launches, Goldman explains.

Second, online platforms can choose a user-centered approach. For example, YouTube allows users to report problematic content posted by other users. However, this approach can sometimes do more harm than good, because users can abuse the reporting system to have lawful content removed, Goldman warns.

Third, Goldman argues that online platforms can nudge users toward positive behaviors through design choices. Instagram, for example, encourages users who are about to post content that may violate its Community Guidelines to reconsider by sending them notifications reminding them of the rules.

Goldman notes that market mechanisms can also serve as promising design tools. For example, online platforms that pay users for content may discourage bad faith uploads by paying only users with a positive reputation, or by introducing friction into the content uploading process to make illegal uploads less profitable.

Finally, Goldman emphasizes the importance of platforms recruiting people from diverse backgrounds to their development teams. Goldman notes that a homogeneous development team can have significant blind spots. Diversifying such teams and taking different perspectives seriously during the software development process, on the other hand, can help platforms develop more comprehensive and effective content moderation plans, Goldman explains.

Goldman concludes by acknowledging that while self-regulation is preferable to government intervention, it is imperfect. He acknowledges that online platforms cannot produce perfect designs from scratch, and he urges online platforms to continually revisit their designs in light of new developments, evidence, and experience.