Digital Lockdown: Facebook’s lockdown on COVID-19 misinformation
March 7, 2021
What's going on here?
On Monday 8 February, Facebook announced plans to expand its misinformation policy to include false anti-vaccine claims.
What does this mean?
This aggressive policy change allows Facebook to take down accounts making false claims related to COVID-19 and its vaccines. Popular claims include: that the virus is man-made; that the vaccines contain microchips; and that vaccines cause autism. Such claims have been repeatedly debunked by health experts and have damaging effects on global health. Facebook will assess posts on whether they “contribute to imminent physical harm”, but has refused to define what actually falls within this definition. The policy appears to have been deployed immediately: Robert F. Kennedy Jr, a prominent figure within the anti-vax community, had his Instagram account removed the following Wednesday for “repeatedly sharing debunked claims about coronavirus”, according to a Facebook spokesperson.
However, many critics believe the policy is too little, too late. Imran Ahmed, CEO of the Centre for Countering Digital Hate, commented that “the removal of these accounts is long overdue”. Ahmed also criticised Facebook’s policy for failing to tackle the problem at its roots. Whilst Kennedy’s Instagram account has now been deactivated, the high-profile anti-vaxxer is still free to post on Facebook to a following of over 306,000 users. Facebook later admitted that it only takes down accounts based on the contents of their posts, not on whether they are linked to other banned accounts.
What's the big picture effect?
For many users, Facebook’s policy expansion will be a welcome development. However, the announcement does raise some interesting questions regarding freedom of speech and censorship. As freedom of speech is often regarded as the beating heart of western democracy, any development that suppresses expression should be scrutinised. The new policy effectively makes Facebook an “arbiter of truth”, empowering the company to decide what does, and does not, constitute “imminent physical harm”. This role is somewhat ironic, as only two years ago the social media giant was under investigation for its part in the Cambridge Analytica scandal. It is therefore easy to question whether a company with such a shady past should be able to restrict expression, or whether this is an area in need of legislative intervention.
Nevertheless, it is hard to criticise Facebook’s attempt to create a safer online environment. The plethora of misinformation perpetuated through accounts like “vaccinefreedom” has undoubtedly undermined public confidence. A recent YouGov poll revealed that nearly one in five British adults would refuse the COVID-19 vaccine. This will have a significant impact on the UK’s ability to fight the virus, as a 69% vaccination rate is needed to achieve herd immunity. It is therefore important that social media platforms adopt a proactive approach to prevent false claims from influencing public behaviour. Given the impact misinformation could have on the vaccine’s effectiveness, and on public health more broadly, Facebook’s new policy must be seen as a step in the right direction.
On the whole, Facebook’s new policy is an admirable attempt to fight misinformation. It will help improve the accuracy of posts available to users by deactivating accounts that push unsubstantiated claims. This development will hopefully improve the public’s confidence in the COVID-19 vaccine, pushing us one step closer to a time when lockdowns and social distancing are a long-forgotten memory.
Report written by Luke Cuthbert