
Illegal deepfake porn is a social plague – here’s how Europe can fight it

Like the sun rising and setting, some things are inevitable. Consider technology: whenever something new comes along, humans invariably find a way to abuse it. In recent years, that mantle has fallen to artificial intelligence (AI) and one of its most disturbing side effects — the rise of non-consensual deepfake pornography.

The idea is as simple as it is terrifying: using digital technology to create fake, explicit images or videos of someone. While it’s been bubbling away in the dark corners of the internet for a few years now, recent improvements in AI tools mean this type of content is becoming ever easier to create — and significantly worse for victims.

Fortunately, the authorities are taking note. The UK has announced a first-of-its-kind law aimed at directly combating non-consensual deepfake pornography via an amendment to the Criminal Justice Bill. Meanwhile, the EU has a number of laws and directives it can use to combat this sinister practice. At least, that’s the hope.

The question is whether legal regulation is an effective tool in the fight against non-consensual deepfake pornography — and whether there’s any way to eliminate it completely.

A Word About Terminology


At this point you may be wondering why we use the term “non-consensual deepfake pornography” instead of the more commonly used “deepfake porn.”

Well, Professor Victoria Baines — a BCS Fellow and leading authority in cybersecurity — explains that shortening the term to “deepfake porn” is seen by cybersecurity advocates as “minimising harmful behaviour through shortcuts.”

As Baines notes, “the bottom line is that this is online abuse, not pornography.” The clearer we are about this problem, the better our chances of combating it. And with that in mind, let’s take a look at how governments are currently dealing with non-consensual deepfake pornography.

What laws apply in the UK?

Baines says that even ahead of the Criminal Justice Bill amendments in the UK, “it is already an offence under s.188 of the Online Safety Act to share intimate images without the other person’s consent.”

The legislation’s wording makes it illegal to share media that “shows or appears to show” another person in an intimate state. While this broadly covers non-consensual deepfake pornography, the problem is that this is not its primary purpose.

According to Baines, this is exactly what the newly proposed Criminal Justice Amendment Bill aims to fix. It aims to “criminalise the creation of intimate images using digital technology without consent, regardless of whether the creator intends to share them”.

In other words, the upcoming amendment directly addresses the problem of non-consensual deepfake pornography. While existing laws could be used to prosecute criminals who create it, this new amendment tackles it head on.

How the EU is tackling non-consensual deepfake pornography

“The EU has no specific rules on non-consensual deepfake pornography,” Professor Cristina Vanberghen tells TNW.

Vanberghen is a senior expert at the European Commission, where she focuses on artificial intelligence, the DMA, the DSA, and cybersecurity policy. She says that non-consensual deepfake pornography is illegal under existing laws, in particular “affirmative interpretations of the principles around GDPR, the DSA, national laws and proposed measures such as those existing in AI.”

In practice, “using someone’s photos and videos in deepfakes without their consent can be considered a breach of the GDPR,” and the DSA “imposes stricter obligations on online platforms to quickly remove illegal content and disinformation, which may also apply to deepfake pornography.”

According to Asha Allen, Secretary General and Director of CDT Europe, the EU has opened another avenue to combat this illegal content: its adoption of the gender-based violence directive.

Under the directive, Allen says, it is a crime to “create and then disseminate false images that create the appearance of a person engaging in non-consensual sexual activity.”

On paper this is a great step, but there is an important difference between a directive of this type and a regulation. In the EU’s own words, a regulation — such as those Vanberghen discussed — is a binding legal act that must be applied in its entirety throughout the EU.

A directive, on the other hand, sets a goal. It is then “up to individual countries to develop their own laws on how to achieve it.” In the case of the gender-based violence directive, member states have until June 14, 2027, to adopt it into their national law or policy. Understandably, this raises a whole range of issues.

The need for clarity on non-consensual deepfake pornography

“Common policies around deepfake pornography are key,” Vanberghen says. They need to set “clear boundaries and repercussions to discourage malicious behavior” and provide victims with legal avenues for protection and recourse.

The problem with a directive like the one on gender-based violence is that it may produce inconsistent provisions across jurisdictions. That, in turn, may mean weaker repercussions for perpetrators of non-consensual deepfake pornography — which puts victims at risk.

Take the UK’s Criminal Justice Bill amendment as an example. The End Violence Against Women Coalition (EVAW) points out that “the threshold for this new law is based on the intention of the perpetrator,” not on whether the victim consented to the content’s creation.

Andrea Simon, director of EVAW, says this would create a “huge loophole in the law” that would “give perpetrators a ‘get out of jail free’ card, because proving intent in court is so difficult.” As the law stands, the prosecution would have to prove that the creator specifically intended to cause distress, humiliation, or suffering. Simon believes this would “ultimately prevent victims from accessing justice.”

And that’s the point — even in places where laws exist against distributing deepfake content without the victim’s consent, more clarity is still needed to properly protect victims.

Pushing regulations through in the EU

Two things seem clear: the EU needs specific, deliberate regulation against non-consensual deepfake porn, and such regulation will eventually arrive. The problem, Allen explains, is that “the EU lawmaking process is inherently lengthy,” because it has to involve 27 countries, seven political groups, and the European Council. Things don’t happen quickly in the EU for a reason.

But even if direct regulations on non-consensual deepfake pornography are introduced, that doesn’t mean they will immediately solve every problem.

Bill Eichner of the European Policy Analysis Centre tells TNW that Europe “tends to regulate and then struggle to enforce the rules because of the fragmented nature of the European Union.” As an example, he points to the GDPR and how it gave “a general say on Google and Meta to Ireland and on Amazon to Luxembourg,” even though neither country had any intention of introducing severe restrictions.

In the case of newer regulations, such as the DSA, Eichner says they have “increased enforcement in Brussels” and made administration more centralized. But he believes the problem is resource-based, since the part of the European Commission that deals with regulations such as the DSA often consists of “just a handful of officials.”

Combined with the structure of the EU, this could make enforcement a nightmare – and there is no reason to think that combating non-consensual deepfake pornography would be any different.

Using Technology to Fight Non-Consensual Deepfake Pornography

“I think stopping deepfake porn poses serious challenges, similar to cybersecurity,” Vanberghen says. But that doesn’t mean we can’t fight it.

Vanberghen points to the development of AI-based tools that can detect deepfake content, allowing operators to eliminate it quickly and effectively.

Allen shares a similar view, but stresses that the creation of these tools requires thorough research to ensure the techniques used are “effective, proportionate and produce fair results”.

Unfortunately, non-consensual deepfake pornography is unlikely to disappear from society entirely. As Vanberghen says, “While complete eradication may be unachievable, significant reductions are achievable with proactive measures and concerted efforts across sectors.”

BCS’s Baines supports this idea. She stresses that, in addition to “technical measures and legal deterrents, we will need to try to reduce the stigma of being deepfaked by raising awareness that these are not real images.”

Coordinated action to combat deepfake abuse

The idea is that alongside technical measures, there needs to be social and educational pressure against this illegal content. This, combined with more funding for those prosecuting perpetrators, could significantly reduce the harm it causes.

Ultimately, non-consensual deepfake porn won’t go away on its own. It will take a concerted effort across all facets of government and society to highlight it for what it is: abuse.

Europe-wide laws against the creation of non-consensual deepfake pornography are necessary, but alone they are not enough. There also needs to be a framework to enforce them. Technology can play a key role here, but a cultural shift is needed too — much like the one that made drunk driving socially unacceptable.

Yes, as technology evolves, it will inevitably be used for malicious purposes. However, it is not that simple. The same tools that enable malicious activity can also prevent it. We may not be able to stop the sun from rising or setting, but we can influence how people use technology. Hopefully, that will happen soon.