LittleLaw Looks At… The Legal Framework for Deepfakes

September 27, 2021

7 min read


What’s going on here?

Coinciding with the democratisation of technology (the rapid increase in its accessibility), “deepfakes” have risen to prominence. The term “deepfake” is an apt amalgamation of “deep learning” and “fake”: two concepts which exist independently of one another, but which, combined, take on a more ominous meaning. A deepfake is a video in which somebody’s face (and typically voice) is superimposed onto another’s body, changing the content of their expressions and speech to serve any agenda imaginable, often with nefarious intent. Indeed, with remarkable ease, we can now change people’s words and actions.

Deepfakes raise logistical, ethical, and legal considerations, which can be seen from the perspective of the individual, or more systemically. Indeed, in our social media age in which real news is so readily claimed to be “fake news”, deepfakes have the potential to muddy an already hazy political landscape.

Deepfakes pose five challenges: 

  1. There is currently no cohesive or coherent legislative framework to counter deepfakes. This leaves individuals, organisations and our civil society unprotected; 
  2. Deepfakes are easy to create. The democratisation of technology, the internet and editing software, when paired with the ease of disseminating information via social media, has exacerbated this issue; 
  3. Deepfake detection techniques are effectively non-existent, further enabling deepfake dissemination; 
  4. The consequences of deepfakes are often nefarious and extreme for those targeted; and
  5. There are legitimate business purposes for deepfake technology, which requires regulation to strike a balance between maintaining commercial practice while protecting individuals and society.¹

Here, we will consider the personal and the political, to establish what can be done to legislate against the deepfake.

The Personal

The most relevant legal frameworks which may afford deepfake victims some protections are those of defamation, passing off and copyright.


Despite there being no legal framework tackling deepfakes head on, existing causes of action may be used to combat them. In particular, a claimant may be able to bring a claim in defamation. To do so, a claimant must establish that there has been a publication of a statement (communicated to a third party), which identifies or refers to the claimant, which tends to lower the claimant in the estimation of right-thinking members of society, and which has caused or is likely to cause serious harm to the claimant’s reputation.²

This substantive legal action was codified in the Defamation Act 2013, which potentially provides deepfake victims with legal recourse. Unfortunately for potential claimants, the case of Lachaux v Independent Print Ltd & Ors³ saw the introduction of a higher harm threshold, which curbs claimants’ chances of success via this avenue. Importantly, what counts as “serious harm” in the deepfake realm remains to be seen, adding yet further ambiguity.

A deepfake content creator might raise statutory defences in response to a defamation claim. These include truth and honest opinion, which, given the often malicious nature of deepfakes, would be unlikely to succeed. Privilege and publication on a matter of public interest also operate as defences; however, these again appear inapplicable to deepfakes. As such, if a claimant can establish their claim, it appears that a deepfake defendant may lack any cohesive defence.

The primary remedy for defamation is damages, though an injunction or a statement in open court may also be of merit. General damages are recoverable for loss of reputation and any consequential loss, typically calculated with regard to the breadth of the deepfake’s dissemination.

Given the often nefarious nature of deepfakes, such remedies may be unable to truly compensate a claimant. Furthermore, the high threshold for establishing a defamation claim, and the potential legal costs of doing so, make defamation claims burdensome for any victim.


Another avenue for a deepfake victim might be the tort of passing off. Despite image rights not being legally recognised in the UK, case law protects individuals whose image or reputation is unfairly used for commercial purposes. This avenue of recourse appears relevant predominantly to people in the public eye.

In Fenty v Arcadia,⁴ Rihanna used the tort of passing off to establish a claim against Topman for its use of her image on one of its t-shirts, which became an era-defining wardrobe piece in the UK. Topman had used Rihanna’s image without her consent, satisfying the tort of passing off because the public would have assumed that Rihanna had licensed the use of her image to Topman, which was not the case.⁵ If a deepfake were used commercially, passing off may therefore offer its victims protection.

Importantly, the remedies available to a successful claimant in passing off are more wide-ranging than those in defamation. They include injunctions, damages, an account of profits, and delivery up of the infringing deepfake item, though once a deepfake has been disseminated, delivery up may prove difficult. Again, a claim in passing off is likely inapposite for any victim who is not in the public eye.


Finally, copyright legislation has often been discussed as a vehicle to combat deepfakes. UK law, founded in the Copyright, Designs and Patents Act 1988,⁶ protects copyright owners against the use of copyrighted material without the owner’s licence or an assignment of rights. The CDPA also provides protection for alleged infringers by way of “fair dealing” under sections 29 and 30, covering non-commercial research and private study, criticism or review, and the reporting of current events. Frustratingly, “fair dealing” is not defined in statute: Lord Denning held that “it is impossible to define what is ‘fair dealing’”;⁷ rather, the concept is to be considered on a case-by-case basis. There is therefore yet further ambiguity for any claimant seeking to bring a copyright claim against a deepfake creator.

Although the UK notion of fair dealing has drawn criticism for its overly rigid and retrospective nature,⁸ copyright has been said to provide “ample room” to deal with deepfakes.⁹ This will most likely apply to deepfakes created with legitimate or commercial intent. In a recent case with cross-application to the deepfake discussion, it was held appropriate to consider a creator’s intent when assessing fair dealing,¹⁰ an approach which addresses the more malicious side of deepfake creation rather than commercial usage.

It appears that all three avenues of recourse (defamation, passing off and copyright) fail to provide clear and effective protection against deepfake footage. While passing off and copyright are more appropriate for those in the public eye, the lay person victimised by deepfake technology is left vulnerable.

The Political

In May 2019, a deepfake video of Nancy Pelosi appeared to show the U.S. Speaker of the House of Representatives slurring drunkenly through a speech. The video, initially posted by a “Trump Superfan”, was widely disseminated on Facebook and YouTube, and was ultimately shared on Twitter by Trump himself. Despite the video’s swift debunking, it had already been viewed and shared millions of times, and Trump’s tweet remained. Indeed, recent elections, including the 2016 U.S. Presidential Election, have seen political entities effectively microtarget individuals with news, discourse and rhetoric that meets their biases. The same political intent could employ deepfake technology in the run-up to an election to cast doubt in otherwise undecided voters, or in the electorate at large.

Shamir Allibhai, CEO of Amber, a deepfake-detection technology startup, argues that society is in an ever more unfair battle against nefarious actors employing powerful AI tools for ill intent. This “post-fact” world has the potential to undo much of the last century’s progress toward peace and stability, arguably achieved through evidence-based conclusions. Indeed, Allibhai believes that in response to political deepfakes, society will be forced to become more cynical, in what has been dubbed “the liar’s dividend”.¹¹ In any event, it has been posited that the initial disinformation can still have a greater effect than any subsequent debunking: declaring a deepfake to be false fails to completely eradicate its impact. The loss of citizens’ confidence in the trustworthiness of information has the ultimate potential to deconstruct democracy.¹²

Littlelaw verdict: Knee "deep" in a legal mess

Whether personal or political, the implications of deepfake technology pose an existential threat. Playing on consumers’ desire for exciting or unanticipated news and information, deepfakes, disseminated via social media platforms, can reach a broad audience. Importantly, the readiness with which media content is shared prior to any fact-checking further enables their spread.

Legislation has proven unable to effectively protect the lay person from deepfake victimisation; however, those in the public eye may have avenues of recourse.

Politically, deepfake technology, used effectively, may have the ability to affect elections, political discourse, or the political climate of any country. In nations as divided as the U.S. or the U.K., the actual fake news of deepfakes has the potential to dismantle democracy.

Legislation tends to lag behind technological advances, however with regards to deepfakes, law makers ought to dig deeper.

Report written by Matt Bryan
