Out of Harm’s Way: Ofcom to regulate harmful social media content

March 19, 2020

3 min read


What's going on here?

Ofcom is to be given regulatory powers over the content posted on social media.

What does this mean?

This appointment follows the consultation on the Online Harms White Paper. The white paper sets out the government’s plans, informed by responses from the public and relevant bodies, for dealing with “online harms”. These include protecting children from content that is harmful or dangerous. 

One theme to emerge from the consultation was the need for an independent regulator to police content on social media. By appointing Ofcom, the government aims to “avoid fragmentation of the regulatory landscape and enable quick progress on this important issue”. (Gov.uk)

Currently, Ofcom’s regulatory scope covers television and radio broadcasters. Should Ofcom start regulating social media content, its role may include requiring posts to be taken down and punishing the websites or companies that fail to remove content, whether through fines or prosecution. The legislation, and therefore the regulator’s exact remit, is yet to be drafted.

What's the big picture effect?

The move from self-regulation to an independent third party is an important one. Self-regulation by social media networks can be problematic, as the networks themselves decide how harmful content is handled. As a result, harmful content has slipped through the net with little consequence for the businesses concerned. With an independent regulator, harmful content should be removed more consistently, and the punishments for failing to remove it should carry more weight. 

A potential pitfall, however, is the sheer volume of content on the Internet. Not only are there many “big-name” social media networks, but there are also countless other websites where users can post content. Then there is the amount of content posted on each site: 82 years’ worth of video is uploaded to YouTube every day (TrustedReviews), while around 500 million tweets are posted daily on Twitter (Omnicore). This raises the question: can all of this content be regulated effectively? And can it be regulated by a body accustomed to overseeing only a finite number of broadcasters? 

It may transpire that Ofcom will rely on “flagged” content, meaning posts that users have reported as harmful. The regulator could then investigate that content and decide whether or not it needs to be removed, much as it currently handles complaints from the public about television content. 

In other countries, statutory regulation of social media networks is already in place and appears to be working. For example, in Germany, social media platforms with more than two million registered German users must review and remove illegal content within 24 hours of it being posted or face fines of up to €50m (£42m). In Australia, social media platforms that fail to remove harmful content can face criminal penalties, including jail sentences of up to three years and fines worth up to 10% of the company’s global turnover.

Giving Ofcom regulatory powers to monitor harmful content online is very much a positive step towards making the Internet a safer place. Whilst the volume of content may already have surpassed “manageable” levels, the prospect of harmful content being removed, and of social media companies being punished when it is not, points in the right direction. It will be important to keep an eye out for the legislation that follows.

Report written by Harina Chandhok

