Deepfakes: Legal Implications and Global Regulation

Essex University law student Raksha Sunder explores the rise of deepfakes, their legal implications, and the potential impact of global regulation on this digital frontier. In 2018, Motherboard’s Sam Cole exposed a Reddit user known as ‘deepfakes’ who was using AI algorithms to create fake pornographic videos by swapping celebrities’ faces onto the bodies of adult film performers. The revelation came just as the technology was gaining traction.
By 2019, deepfake technology had spread beyond Reddit, with apps that could digitally remove a person’s clothing from a photo. Deepfakes are now closely associated with malicious uses, most notably fake pornography, which carries significant legal implications. In the UK, for example, the Online Safety Act 2023 criminalizes the sharing of non-consensual deepfake pornography.


The risk of political deepfakes generating convincing fake news that could destabilize political environments is also a concern. The European Union’s Code of Practice on Disinformation highlights these dangers and calls for measures to combat the spread of manipulative deepfake content.


During the 2020 US presidential election, the nonpartisan advocacy group RepresentUs released two deepfake advertisements. The videos featured North Korean leader Kim Jong-un and Russian President Vladimir Putin claiming they didn’t need to intervene in US elections. RepresentUs aimed to raise awareness of voter suppression, defend voting rights, and boost turnout, despite concerns that such technology could sow confusion and interfere with elections.


In India, a politician reportedly used deepfake technology in a 2020 campaign video to make it appear that he was speaking Haryanvi, the Hindi dialect of his target audience. More recently, a deepfake circulated making it appear that singer Taylor Swift had endorsed Donald Trump, causing a media frenzy until Swift clarified that she was in fact endorsing Kamala Harris. The incident demonstrated the disruptive potential of deepfakes in shaping public opinion.


As governments worldwide debate how to regulate AI and deepfake technology, significant gaps remain in the laws and regulations introduced to meet the challenges deepfakes pose. The DEEPFAKES Accountability Act (H.R. 3230), introduced in the US, is a significant step toward governing deepfake technology. The proposed law includes provisions for labeling deepfake content and requires producers to disclose when an image or video has been altered with artificial intelligence.


The purpose of the Bill is to halt the spread of malicious deepfakes that threaten individuals, circulate false information, or disrupt democratic processes. Social media platforms such as YouTube and Instagram have standards intended to prevent harmful content, but these are often poorly enforced owing to the limitations of automated detection and the inefficiency of manual review. As a result, users can monetize deepfake content that evades detection, profiting while violating both the law and platform guidelines.


The European Union (EU) has implemented measures such as the General Data Protection Regulation (GDPR) and the Code of Practice on Disinformation to combat deepfakes. Deepfakes may fall within the GDPR’s scope if they use personal information or photos without consent. Under Article 4 of the GDPR, a person’s voice or likeness in a deepfake can constitute personal data. Article 6 requires a lawful basis, such as the data subject’s consent, for processing that data. The voluntary Code of Practice on Disinformation, introduced in 2018, urges tech companies to demonetize misinformation and promote transparency in political advertising to curb the spread of deepfakes. However, its reliance on voluntary compliance limits its effectiveness.


A direct global regulatory structure targeting deepfake technology could significantly advance the fight against these deceptive tools. This could build upon existing agreements like the Convention on Cybercrime (Budapest Convention) of the Council of Europe, which sets guidelines for national cybercrime laws and fosters international collaboration. A treaty emphasizing disclosure and consent, similar to Section 104 of the DEEPFAKES Accountability Act, could be applied globally to address the creation and spread of deepfakes.


However, merely requiring disclosure from deepfake creators is insufficient to tackle the escalating challenges these technologies pose. A more effective approach would be to establish international guidelines that include penalties for the misuse of AI and deepfakes. Creators would be required to disclose when they have altered content and to account for any harm their creations cause, following the approach taken with other digital threats such as cybercrime and online fraud. By imposing strict punishments on those who misuse deepfakes to deceive or damage reputations, a more robust defense against their adverse effects can be established.


Another approach to addressing the misuse of deepfake technology is through international data protection agreements, similar to the EU-US Data Privacy Framework.


The increasing sophistication of deepfakes has raised significant concerns about the protection of personal data and the blurring of reality. Such agreements would standardize the protection of personal data used in deepfakes across borders, preventing data laundering by ensuring consistent safeguards regardless of jurisdiction.


By incorporating a mechanism similar to the European Arrest Warrant (EAW), these agreements could enable the swift transfer of suspects involved in deepfake crimes to the country where the offense occurred. This would prevent perpetrators from evading justice by exploiting weaker legal systems in other countries.


The costs associated with deepfakes are higher than ever as they continue to blur the distinction between fact and fiction. The days of “seeing is believing” are coming to an end, and if the legal system doesn’t keep up, we might live in a society where reality is nothing more than a digital illusion.


Raksha Sunder, a law student at the University of Essex with a keen interest in corporate law, is the Vice President of the Essex Law Society. She enjoys competing in writing competitions during her free time. The Legal Cheek Journal, which discusses these pressing legal issues, is sponsored by LPC Law.


