Denmark Charts a New Course: Deepfakes and Copyright as Brand‑Protection Risk
In an era when generative‑AI tools can create extremely realistic video, audio and image impersonations (“deepfakes”), the intersection of identity, brand reputation and digital liability is becoming central to any brand‑protection program. The latest signal comes from Denmark, which has proposed amending its copyright law to give individuals “ownership” over their likeness, voice and physical features. The proposal carries significant implications for brands globally, including in the U.S.
What Denmark Is Doing
Denmark has proposed to amend its Copyright Act to introduce two novel provisions:
- Section 65 a: Prohibits making realistic AI‑generated imitations of a performing artist’s performance available to the public without consent.
- Section 73 a: Extends protections to all natural persons by banning the public availability of realistic digital imitations of a person’s personal, physical characteristics (appearance, voice, movements) without consent.
Key elements include treating likeness and voice rights as intellectual property, a 50‑year post‑mortem term, explicit exceptions for parody and satire, a notice‑and‑takedown mechanism for removing unauthorized deepfakes, and potential platform liability for non‑compliance. The law is expected to be enacted in late 2025 or early 2026.
Why This Matters for Brand Protection
From a brand‑protection standpoint, Denmark’s move signifies that digital identity (voice, face, movement) is now a brand asset. Key takeaways include:
1. Likeness risk becomes actionable IP risk: unauthorized digital imitations can be enforced against like any other intellectual property.
2. Platforms face increased responsibility for takedowns, with potential liability for non‑compliance.
3. Enforcement will remain uneven across jurisdictions, so proactive global monitoring is essential.
4. The framework previews the direction U.S. regulation is likely to take.
5. Consent for, and clear labeling of, AI‑generated marketing content are becoming baseline requirements.
What About the U.S.? Can Similar Protections Apply by Contract or Statute?
Statutory Landscape
At the federal level, the Take It Down Act (May 2025) criminalizes the publication of non‑consensual intimate images, including AI‑generated deepfakes, and requires platforms to remove them within 48 hours of a valid request. At the state level, Tennessee’s ELVIS Act protects voice likenesses, and right‑of‑publicity statutes in several states cover name, image, and likeness. The U.S., however, still lacks a comprehensive federal IP‑style right in a person’s likeness or voice.
Contractual and Brand Protection Tools
Brands can and should contract around this risk:
- Explicit consent for AI and likeness use in performer/influencer contracts.
- Indemnities for unauthorized deepfake misuse.
- Monitoring and takedown procedures.
- Use restrictions and labeling for synthetic media.
- DMCA‑style escalation workflows for impersonations (a minimal workflow sketch follows this list).
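To make the monitoring‑and‑escalation idea concrete, below is a minimal Python sketch of how an impersonation case might be tracked from detection through takedown. The class, field, and stage names are illustrative assumptions rather than a reference to any particular platform’s API; the 48‑hour default mirrors the Take It Down Act’s removal deadline and should be tuned to each platform’s actual terms of service.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Stage(Enum):
    DETECTED = "detected"
    NOTICE_SENT = "notice_sent"
    ESCALATED = "escalated"   # e.g., legal letter or registrar complaint
    RESOLVED = "resolved"

@dataclass
class ImpersonationCase:
    """One suspected deepfake impersonation, tracked through takedown."""
    url: str
    platform: str
    detected_at: datetime
    stage: Stage = Stage.DETECTED
    notice_sent_at: datetime | None = None

    def send_notice(self, now: datetime) -> None:
        # Record that a takedown notice went out to the hosting platform.
        self.stage = Stage.NOTICE_SENT
        self.notice_sent_at = now

    def needs_escalation(self, now: datetime,
                         sla: timedelta = timedelta(hours=48)) -> bool:
        # Escalate if the platform has not acted within the response window.
        return (self.stage is Stage.NOTICE_SENT
                and self.notice_sent_at is not None
                and now - self.notice_sent_at > sla)
```

In practice, a monitoring system would open an `ImpersonationCase` for each flagged URL, call `send_notice()` when the takedown request goes out, and poll `needs_escalation()` on a schedule to trigger legal follow‑up.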
What Brands Should Do to Anticipate Regulatory Shift
To align with these developments, brands should:
1. Map deepfake exposure across executives and endorsers.
2. Maintain a rights inventory and consent database (see the sketch after this list).
3. Implement AI detection and monitoring systems.
4. Establish takedown workflows.
5. Update policies and contracts for AI‑use clauses.
6. Align global enforcement strategies.
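As one way to operationalize steps 1 and 2, the rights inventory can be modeled as simple consent records keyed to each person. This is a minimal Python sketch; every field name here is an illustrative assumption about what such a record might capture, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    """Documented consent for one person's AI/likeness use."""
    person: str                 # executive, endorser, or performer
    asset_types: list[str]      # e.g. ["voice", "face", "movement"]
    permitted_uses: str         # scope, e.g. "EU social campaigns only"
    ai_synthesis_allowed: bool  # explicit consent to generate synthetic media
    expires: date | None        # None = no fixed term agreed

def consent_gaps(inventory: list[ConsentRecord],
                 today: date) -> list[ConsentRecord]:
    """Flag people whose AI-synthesis consent is missing or has lapsed."""
    return [r for r in inventory
            if not r.ai_synthesis_allowed
            or (r.expires is not None and r.expires < today)]
```

Running `consent_gaps()` against the inventory on a schedule gives early warning before a campaign uses likeness rights that were never granted or have expired.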
Final Thoughts
The Danish initiative marks a turning point: “your voice, your face, your movement” are becoming codified assets. For brand‑protection firms like ThornCrest, this expands the frontier of risk beyond counterfeit goods into digital impersonations. Although the U.S. regime is still evolving, proactive contractual and operational safeguards can mitigate exposure and protect brand integrity.