Deepfake
Sébastien Lachaussée & Elisa Martin-Winkel

Deepfake: what legal protection?

In May 2024, actress Scarlett Johansson accused OpenAI of knowingly copying her voice, without her knowledge, for "Sky", one of the voices that interacts vocally with users in the latest version of ChatGPT. The actress denounced the practice: "At a time when we are all grappling with deepfakes and the protection of our own image, our own work, our own identity, I think these questions deserve absolute clarity" (Le Monde, May 2024). The artist Taylor Swift was also the victim of deepfakes when false and sexually explicit images of her circulated online. These examples highlight the importance of examining the laws that apply to such new content.

The development of artificial intelligence (AI) has made it possible to convincingly reproduce a person's voice or likeness, without their consent, in order to generate content. This is known as a "deepfake", defined as "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful" (art. 3(60) of the European AI Regulation mentioned below).

In principle, deepfakes are permitted under freedom of expression if they reproduce elements that are in the public domain or not subject to third-party rights. Nevertheless, they may infringe the rights of third parties, and in particular personality rights. Personality rights are "rights inherent to the human person which belong by right to every physical person (innate and inalienable) for the protection of his or her primordial interests" (G. CORNU, Vocabulaire juridique, PUF, 1987). Among these rights are the right to privacy and the right to one's likeness. They enable an individual to control, to a certain extent, the use of his or her name, likeness or public image.

The creation of deepfakes must therefore be monitored. To this end, it is possible to include clauses on artificial intelligence in contracts. For example, a performer's contract with his or her label may set limits on the digital enhancement of the performer's voice or the modification of his or her likeness. At present, however, many contracts are silent on the subject and set no limits.

It is therefore necessary to establish a legislative framework. At the French national level, the misuse of AI may be punished under criminal law where it constitutes an invasion of privacy (art. 226-1 of the French Penal Code – CP) or identity theft (art. 226-4-1 CP). The legislator has also intervened to target deepfakes more specifically. The SREN Act of May 21, 2024 adapts the offence of "montage" (editing) to deepfakes: it punishes bringing to the attention of the public or a third party "visual or sound content generated by algorithmic processing and representing the image or words of a person" without that person's authorization, where the content does not clearly appear to be AI-generated or the use of AI is not expressly mentioned (art. 226-8 CP). In addition, French law creates a new offence punishing sexually explicit deepfakes (art. 226-8-1 CP).

If no criminal offence is established, the use of deepfakes may nevertheless be challenged under civil law. In that case, Article 9 of the French Civil Code may apply: it guarantees the right to privacy and protects other aspects of one's personality, such as likeness and voice.

In addition, the European legislator has recently introduced a transparency obligation to ensure that end users are aware that they are interacting with AI. The European Artificial Intelligence Regulation of June 13, 2024 does not classify generative AI as high-risk, but as a type of "general-purpose AI". Accordingly, deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake must disclose that the content has been "artificially generated or manipulated" (art. 50(4) of the Regulation). The provider remains responsible for documentation and compliance, which requires it to make available "a sufficiently detailed summary about the content used for training of the general-purpose AI model" (art. 53(1)(d) of the Regulation).

The first legally binding international treaty on AI was adopted on May 17, 2024. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law recognizes the principles of transparency, human dignity, privacy and individual autonomy as safeguards against the misuse of AI, principles which must be upheld whenever AI is used.

To understand how this legal framework applies to your projects, it is wise to seek the assistance of a legal professional who can advise you and help you defend and protect your rights.
