News Nudges: Facebook directs users to authoritative news sources to combat COVID-19 infodemic
May 11, 2020
2 min read
What's going on here?
Facebook has announced new measures to help combat misleading information about COVID-19 on its platform. The company will direct users who have previously interacted with misinformation about the pandemic on Facebook to information from more reliable sources such as the World Health Organisation (WHO).
What does this mean?
Social media posts featuring misleading information about COVID-19 have been widespread since the emergence of the outbreak, with posts spreading falsehoods about the origins of the virus, potential cures and health advice. Although Facebook has ramped up measures to monitor and remove this type of content, a good deal of misinformation remains on the platform and is often only discovered after it has been widely distributed.
The platform is now opting for a new approach. By displaying a notification on the News Feed of users who have interacted with misleading information about COVID-19, Facebook is actively directing users towards reliable sources. These measures appear to be having an effect: since the end of March, over one billion Facebook users have been directed to official sources of information about COVID-19, and approximately one hundred million users have gone on to view this information.
What's the big picture effect?
Social media companies across the board are taking a proactive approach to addressing misinformation on their platforms. As well as its action on the News Feed, Facebook has introduced measures to curb the spread of misinformation on WhatsApp. Users are now limited to forwarding a message to just one chat at a time, in an attempt to slow the spread of viral messages containing misinformation about COVID-19.
Similarly, YouTube has committed to removing content asserting a link between 5G and the COVID-19 pandemic, and Twitter has said it will remove false and dangerous content about COVID-19. Despite some apparent success for these measures, critics argue that the danger of misleading content online is so significant that social media platforms must do more to combat its spread. This is highlighted by recent arson attacks in the UK and the deaths of hundreds of people in Iran who drank methanol in an attempt to cure the virus.
The willingness of Facebook in particular to combat misinformation has not gone unnoticed among campaigners. Non-profit activist group Avaaz, for example, has advocated that these measures should be extended to other content shown on the platform. The company has typically been reluctant to remove content on politically contentious issues, including gun rights and immigration, arguing that doing so could limit free speech. For example, at the Silicon Slopes Tech Summit in January 2020, Facebook CEO Mark Zuckerberg defended the company's decision to resist pressure to ban political ads as standing up for free expression.
Ultimately, it is easier for Facebook to determine what is false or unacceptable with regard to information about COVID-19 than it is about politically contentious issues. Looking forward, we are likely to see more measures on social media platforms to limit misinformation about COVID-19, but it does seem less probable that this will become a lasting mechanism to combat misinformation online more generally.
Report written by Jasmine Kobewka