The rapid advancement of artificial intelligence (AI) has made it increasingly easy, and widely accessible, to create highly realistic “deepfakes”. These are manipulated images, videos, or audio recordings that convincingly replicate a particular person’s appearance or voice.

Positive Use of Deepfakes

This technology has significant and legitimate creative and commercial applications, subject (where applicable) to the consent of the person or their estate in the event they are deceased. Some examples include:

Accessibility and Assistive Technology

  • Voice cloning for individuals who have lost the ability to speak due to illness (such as the actor Val Kilmer), enabling them to communicate in a voice that resembles their own.

Media production

  • Recreating historical figures for documentaries.
  • De-aging or aging actors.
  • Continuing projects when an actor is unavailable or has passed away (subject to consent from the actor or their estate). For example, James Earl Jones (the voice of Darth Vader) or Ian Holm (the character Rook in Alien: Romulus (2024)).
  • Synchronising lip movements to make foreign-language translated content appear natural.

Entertainment

  • Video games, VR experiences and similar.

Education and Training

  • Enabling historical figures to speak in interactive learning environments.
  • Use of role-play scenarios for training purposes. For example, the ‘Second Nature’ AI role play sales training platform.

Privacy

  • Replacing real faces in documentaries or news to protect the identities of individuals.

Marketing and Advertising (subject to the consent of the subject person)

  • Personalised video messages from celebrities or other individuals.
  • Licensed digital avatars of public figures for advertising or promotional campaigns.

Misuse of Deepfakes

While this technology has positive benefits (as detailed above), it also presents significant risks to individuals and organisations. Deepfakes can be used to commit fraud, spread misinformation, damage reputations, enable harassment, or impersonate individuals without their consent.

For example:

Non-consensual pornography

  • Creating explicit videos or images of individuals, often celebrities or private persons, without their permission, leading to harassment, emotional distress, and reputational harm.

Fraud and financial scams

  • Impersonating executives or family members using deepfake audio or video to trick employees into transferring money or divulging sensitive information.
  • In 2024, the British engineering company Arup was the victim of a deepfake scam in which £20 million was transferred to criminals following a video conference call in which all attendees other than the targeted employee were AI-generated.

Political manipulation and misinformation

  • Fabricating videos of politicians to influence elections or public opinion.
  • Spreading fake news to create confusion or social unrest.
  • For example, in 2020, a manipulated video of Belgian Prime Minister Sophie Wilmès was circulated, falsely showing her making statements about COVID-19.

Cyberbullying and harassment

  • Targeting individuals with manipulated content to humiliate, intimidate, or coerce them.
  • This is a growing issue in schools around the world.

Identity theft

  • Using deepfake-generated likenesses to bypass facial recognition security systems or steal personal information.

Reputational damage

  • Creating videos or images that falsely portray someone engaging in illegal or unethical behaviour, potentially harming careers or relationships.
  • In 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy was circulated that falsely showed him ordering troops to surrender.

Extortion and blackmail

  • Threatening to release fabricated content unless victims comply with demands, exploiting fear of public exposure.

The online misuse of personal images and likenesses is a serious risk to privacy, identity protection, and digital security, particularly as manipulated content can be difficult to detect and may circulate widely or ‘go viral’ before it can be challenged or removed.

Copyright in your image

The concept of image rights exists in many jurisdictions around the world and refers, broadly, to a group of rights relating to an individual’s personality. This includes their image, likeness, or characteristics associated with them. The primary function of image rights is to prevent unauthorised persons from exploiting or making use of an individual’s image.

Image rights have long formed the basis of commercialised revenue streams for celebrities and famous people, in which their name, signature, likeness, silhouette or other identifying feature is commercialised on everything from coffee to jawline-strengthening face massagers:

De’Longhi‘s “Perfetto” global campaign - 2022

Facial Fitness Pao advertising campaign - 2014

While copyright in a photo of a person can be established relatively easily, the concept of copyright in a person’s image has been more problematic.

Alright, Alright, Alright

Until relatively recently, the misuse of another person’s image was generally confined to celebrities and famous people, and whether they had secured trade mark protection for their name, signature or other identifying indicia was a key factor in their means of enforcement.

For example, the actor Matthew McConaughey (through his company J.K. Livin Brands Inc.) has registered various trade marks in the United States across a range of goods and services for various indicia associated with him. This includes his catchphrase ‘Alright, Alright, Alright’ and also motion marks depicting him in various poses.

A recent example is a seven-second motion mark:

US registration no. 7931810 in class 9 for:

“Downloadable audio-visual media content, namely, downloadable audio and video recordings, in the field of self-help, human growth and spirituality; Downloadable audio-visual media content, namely, downloadable audio and video recordings in the field of entertainment featuring television series, comedies, and drama”

The description of the mark is:

“The mark is a motion mark. The mark consists of the actor Matthew McConaughey with beige face and hands, brown hair and facial hair, white teeth, and black eyelashes, wearing a white button down shirt, silver necklace, silver watch, and silver rings. He is standing outdoors on a porch with brown wooden slats and a silver metal fence, with green trees and bushes, tan sand, white clouds, blue sea, and blue sky in the background. The motion consists of the actor standing and facing forward with his arms raised and palms open. Next, his arms open further wider. Then he glances to the side and places one hand on his hip and lowers the other hand out of the frame, with his body positioned at an angle, and turns his face forward. The duration of the motion is approximately 7 seconds.”

These and other registrations are steps taken by Matthew McConaughey, and by others applying a similar strategy, to prevent the misuse by AI of a likeness and image. However, not everyone has the resources to engage in a multi-mark/multi-class trade mark registration strategy.

The rapid emergence of AI, both in technical ability and ease of access, means that practically anyone can generate a ‘deepfake’ (admittedly of varying quality). Crucially, anyone with an online presence (which is true for most people in today’s digital world) can themselves become a victim, even if they are not a celebrity.

What is the common person on the street to do?

A Danish approach

In June 2025, Denmark’s Minister for Culture, Jakob Engel-Schmidt, announced proposed amendments to the Danish Copyright Act addressing the use of AI-generated content which replicates individuals’ faces, bodies, and voices.

These changes are intended to safeguard people’s reputations and digital identities from misuse through generative AI. Two new provisions have been introduced, affecting both performing artists and the general public:

Section 65a

  • This prohibits unauthorised AI-generated imitations of artistic performances, ensuring that performers maintain control over their creative expression.

Section 73a

  • This extends comparable protections to all individuals, granting them the right to object to and seek removal of unauthorised digital reproductions of their likeness or voice.

Under the proposed law, individuals will hold copyright-like protection over representations of their own likeness, including for 50 years after their death.

However, the provisions do not apply to caricature or satire, unless such content amounts to misinformation intended to cause serious harm.

Although the proposed legislation primarily targets the use of photos, facial images, and voices, it applies broadly to any recognisable depiction of a person, including distinctive physical features such as silhouettes.

The proposed amendments grant Danish citizens three principal rights:

Right of removal

  • Individuals may require the removal of AI-generated content featuring their face, voice, or body that was created or published without their consent, regardless of its intended purpose.

Right to compensation

  • Individuals may claim damages for harm resulting from unauthorised use, even without having to demonstrate specific financial loss.

Platform liability

  • Technology platforms may face substantial fines if they fail to act promptly and effectively after being notified of unlawful content.

A UAE approach

Whether, and to what extent, other countries follow the approach of Denmark will become apparent in the near future.

From a UAE perspective, the long-standing emphasis on privacy, unauthorised use of imagery, and damage to reputation has been addressed within various national laws for many years.

In a 2024 article, we explored the provisions of UAE law in relation to unauthorised photography in the UAE (see https://hadefpartners.com/news-insights/insights/no-photos-no-videos-is-photographing-or-videoing-of-people-in-a-public-place-illegal/). As an extension of this, and with regards to deepfakes, the provisions of the Federal Law by Decree No. (31) of 2021 Promulgating the Crimes and Penalties Law (Penal Code) are particularly helpful.

Article 425 of the Penal Code provides that:

Shall be punishable by confinement for a period not exceeding (2) two years or by a fine not exceeding (20,000) twenty thousand Dirhams any individual who, through any means of publicity charges another person with an incident susceptible of making him subject to punishment or exposing him to public hatred or contempt.

The punishment shall be confinement and a fine or one of these two penalties if the act of libel is committed against a public officer or any person to whom a public service is assigned, during the performance of his function or public service, due to or on the occasion of such performance; or if the act of libel violates the honour or takes from the reputation of families, or if it is noticeable that the act is intended for the attainment of an unlawful objective.

In the event where the act of libel is expressed by publication in newspaper or printed matters, this shall be considered a circumstance of aggravation.

In combination, Article 43 of Federal Decree-Law No. 34 of 2021 Concerning the Fight Against Rumours and Cybercrime (the Cybercrimes Law) provides that:

Whoever uses an information network, ITE, or an information system and insults another or attributes a quality to him that would make that person subject to punishment or contempt by third parties shall be punished with imprisonment and/ or a fine of not less than (AED 250,000) two hundred fifty thousand dirhams or more than (AED 500,000) five hundred thousand dirhams.

These provisions, which fall within defamation principles, are broad in nature and intended to give flexibility in the protection of the reputation and honour of both an individual and also of their families.

As a result, where a deepfake has the effect of exposing the victim to “punishment or exposing him to public hatred or contempt”, that would arguably fall within Article 425 of the Penal Code and, if done via an electronic network (such as the Internet, WhatsApp or another social media conduit), also within the provisions of the Cybercrimes Law.

What is particularly helpful about these provisions is their criminal element and the access to justice they provide.

While it is possible for a victim to pursue a civil claim for damages (as in other countries), the ability to file a criminal complaint means that a lack of financial resources is not a barrier to protecting their rights.

In this regard, and from an access to justice perspective, the legislative framework in the UAE currently provides an effective and accessible means of both deterring and combatting deepfakes.

It will be interesting to see to what extent other countries begin to deploy a similar approach, either alone or in combination with changes to national copyright laws such as those currently being considered in Denmark.

Thanks to Katie Meldrum and Naduli de Silva (interns) for their research for this article.
