Money Talks: Advertisers strike deal with social media giants to curb proliferation of hate speech

October 8, 2020

3 min read

What's going on here?

Facebook, YouTube and Twitter have agreed to be subject to a new self-regulatory framework for moderation of harmful online content as part of their efforts to curb hate speech and end the advertising boycott against them.

What does this mean?

In July 2020, over 1,000 brands, including Coca-Cola, Starbucks and Unilever, pulled their advertisements from Facebook, YouTube and Twitter’s platforms. The boycott, organised under the hashtag “#StopHateForProfit”, sought to condemn the platforms’ failure to combat hate speech. Participating brands were keen to protect brand safety by preventing their advertisements from appearing alongside harmful content. In addition to a loss of ad revenue (approximately 98.5% of Facebook’s revenue comes from advertising), the boycott caused significant reputational damage for the social media companies by shining a light on their inadequate content moderation practices.

The deal, negotiated by the World Federation of Advertisers (WFA), will see the social media giants subjected to a new self-regulatory content moderation framework. The system provides common definitions for eleven categories of harmful content, including hate speech, pornography and illegal drug use. It also sets a “brand safety floor” for each category: a minimum standard of suitability, below which content will be deemed inappropriate for adjacent advertising and must be removed expeditiously. The agreement also empowers external auditors to oversee the system.

What's the big picture effect?

On the face of it, the agreement appears to be a positive step for social media platforms in the fight against harmful content. However, there are concerns that the substantial and immediate progress this issue demands is unlikely to materialise if left to the platforms’ self-regulation.

The system’s effectiveness in eradicating harmful content will depend on the social media companies’ commitment to upholding their end of the bargain. The fact that it took a mass advertising boycott to spur them into action suggests that they are moving to protect their finances and reputations rather than to shield users from harmful content or safeguard advertisers’ brand safety. These skewed incentives mean the platforms may deliver on their promises only to the extent that doing so serves their own interests.

Stephan Loerke, CEO of the WFA, described advertisers as “the funders of the online ecosystem”, and they play a critical role in driving positive change and accountability in the tech industry. Should the social media companies fall short of the agreement, several advertisers have expressed their willingness to call on lawmakers to intervene. Thus far, the UK government has been reluctant to legislate in this area: in February 2020 it set out its intention to move away from self-regulation and appoint Ofcom as an internet watchdog, but those plans were put on hold due to the pandemic. Other national legislatures have been more hands-on. In Germany, platforms can be fined up to €50m for failing to remove illegal content within 24 hours. In Australia, companies can face fines of up to 10% of their global turnover, and their executives can be sentenced to up to three years in prison.

In the coming weeks, all eyes will be on Facebook, YouTube and Twitter. If they prove unable to follow their own rules, stronger action will be needed to stop the spread of harmful content online.

Report written by Isobel Deane
