
Big tech companies were open to regulating internet safety – why did the New Zealand government abandon the idea?

The Coalition Government has abandoned efforts to modernize New Zealand’s outdated internet safety rules, despite qualified support for the changes from social media and tech giants.

The aim of the Safer Online Services and Media Platforms project, led by the Department of Internal Affairs, was to develop a new framework for regulating what can be published on online platforms and other forms of media (such as news) in New Zealand.

It concerned the sharing of harmful content online, such as child sexual abuse material, age-inappropriate content, bullying and harassment, and the promotion of self-harm. It also aimed to improve the regulation of online services and media platforms more generally.

Announcing the project’s suspension in May, Internal Affairs Minister Brooke van Velden argued that illegal content was already subject to scrutiny and that the concepts of “harm” and “emotional well-being” were subjective and open to interpretation. She also framed it as a matter of freedom of speech:

The principle of freedom of speech is important to this coalition government and is an important factor to consider in a digital world. On this basis, the Department will not continue to work on regulating content on the Internet.

But when we reviewed submissions from tech and social media companies on the proposed framework, we found that companies like Facebook, Reddit and X (formerly Twitter) generally supported regulation – within limits.

Freedom of speech is important: Internal Affairs Minister Brooke van Velden talks to the media in parliament. Getty Images

Regulation of online media

The Safer Online Services and Media Platforms project has been running since 2021. Last year, the Department of Internal Affairs invited public submissions on the proposed framework.

The proposed regulations would create a new, more streamlined model for industry regulation. They proposed codes of conduct, overseen by an independent regulator, to curb online harm and protect public safety. Safety standards would apply to online platforms and other media.

Currently, at least ten different government organizations have some responsibility for managing online services and responding to harmful content, and their remits often overlap. Other areas are barely regulated at all: New Zealand law, for example, does not require social media companies to meet any safety standards.

Other countries are also grappling with how to regulate harmful digital content, online services and media platforms. Ireland, Canada, the United Kingdom and Australia have each developed their own laws to regulate the online space.

Outdated regulations

We analyzed submissions from some of the dominant companies in the technology sector, including Google (including YouTube), Meta, Snap, Reddit, TikTok and X Corp. Our goal was to see what these companies had to say about regulations that directly impact their core businesses.

All agreed that the current system is outdated and needs modernizing. Google, for example, argued:

Content regulation was developed for a different technological era, focusing on media such as radio and television broadcasts. It is therefore appropriate to update the regulatory framework to make it fit for purpose and to reflect both technological and social changes.

These companies have already introduced their own safeguarding policies and signed up to the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms.

Importantly, none of the companies claimed that self-regulation efforts were sufficient.

According to the submissions, the preferred option was a code focused on objectives rather than rigid rules that would be too prescriptive. Companies stressed that the implementation and enforcement of any new code must be “proportionate”.

Snap stated:

Internet regulation is most effective when it is based on general principles that businesses of all sizes can follow and implement proportionately.

Proportionality is typically a legal test used to decide whether a right, such as freedom of expression, can be limited in the interests of another public good. Notably, only Meta and X Corp mentioned protecting free speech in their submissions.

Most submissions stated that they would trust an independent regulator to develop one overarching code, with the caveat that the regulator must be truly independent of all industry players as well as the government of the day.

Reddit stated:

we are also concerned about the proposal that codes of conduct should be developed by industry rather than government or the relevant regulatory agency.

The submissions also indicated that industry should be consulted throughout the design process.

A missed opportunity

In their submissions on the proposed regulatory framework, the companies differed on how the codes should be designed, whether they should cover legal but harmful content, who should bear the burden of implementation, and what the penalties for non-compliance should be.

It is worth noting, however, that all of them supported regulatory change.

The decision to abandon this framework represents a missed opportunity to protect future generations from some of the harms of online media.