ROB SCHMITZ, HOST:
This is ALL THINGS CONSIDERED from NPR News. I'm Rob Schmitz. Here in Germany, a story of betrayal, sex and artificial intelligence has captivated a nation, and it just may have implications for all of us. Forty-four-year-old celebrity Collien Fernandes has accused her fellow celebrity ex-husband, Christian Ulmen, of spreading images of her that were later transformed by artificial intelligence into pornographic videos. They were then circulated over the internet. Ulmen denies the allegations. Thousands of Germans have taken to the streets, demanding stricter laws for those behind these so-called deepfake videos. But how do you police something that's becoming easier to create and more difficult to trace?
We're joined now by Harvard law professor Rebecca Tushnet, codirector of the Berkman Klein Center for Internet and Society. Welcome, Rebecca.
REBECCA TUSHNET: Thank you.
SCHMITZ: So, Rebecca, I feel like these AI platforms are evolving so quickly that it's a little difficult to keep up. How easy is it to use these platforms to create these deepfake videos using someone's image?
TUSHNET: So the interesting thing to me is that if you look at the timeline of this, the allegations indicate that, you know, the software was being used at a time when it wasn't necessarily as widespread as it is now. But this capability has been available at the high end for a really long time. Lucasfilm, for example, has been able to do this for a while - think of the last few "Star Wars" movies.
The difference is there's a proliferation of, like, nudify (ph) apps. So basically, once you have the basic idea of doing this, it is possible to train a model specifically to create pornographic images. And even though the big names that you've heard of - like Anthropic, OpenAI and so on - have guardrails against this, it is not all that hard for someone to go, you know, train their own model. So, you know, it's very hard to put back in the bottle. And there are people who can make money off of it, which makes it that much harder.
SCHMITZ: So in the case here in Germany, the allegation involves someone known personally to the victim. First of all, how common is that? And then how much harder does that make these cases to prove?
TUSHNET: It's a good question because a lot of times we don't necessarily know. Often, when there's enough evidence, it is possible to trace back to someone who may tangentially know the person - sometimes an employer, sometimes somebody they knew from a previous job or a failed date. But it's often a surprising person. I have to say the ex-husband is, you know, a particularly surprising one. But it's because some men feel an entitlement to do what they want. And the issue is often figuring out who it was, which can be difficult, and then, you know, trying to minimize the harm caused by the dissemination.
SCHMITZ: The TAKE IT DOWN Act, which Congress passed in 2025 in the U.S., requires platforms to remove nonconsensual deepfake content within 48 hours of a complaint. I mean, that seems pretty straightforward, right?
TUSHNET: Right. The issue is that it doesn't necessarily stop the images from moving elsewhere or, you know, from being tweaked a little bit and reposted. And I think everyone understands this. This is to give you a tool to stop the bleeding. But frankly, I don't actually believe that criminal law alone, you know, will solve the problem.
SCHMITZ: So what more needs to be done to protect people from this?
TUSHNET: So I think we face a bunch of different problems. One of them is just a lack of strong social norms against this. So...
SCHMITZ: Right.
TUSHNET: ...The analogy that I like to draw is actually drunk driving in the U.S. So there was a time when drunk driving was not considered wrong, even post-criminalization. So what had to happen was a social shift that says, you know, this is not something a responsible person does. And that doesn't mean that nobody drives drunk, right? It still happens. But the conversation around it changed. And I think media literacy education, you know, anti-bullying education - those are the kinds of steps that can make the long-term change.
But I want to be clear, like, it's still going to be possible to do this. In fact, you know, historically, it was possible to do this. There were photo-manipulated, you know, pornographic images before. What we can try and do is get it more under control.
SCHMITZ: Right. I mean, in the parallel you make to drunk driving, one of the factors in reducing it was also really harsh penalties for folks who were caught drinking and driving. I'm wondering, would that type of law work in the United States? I mean, this is what they're talking about doing here in Germany, for example.
TUSHNET: I'm a little hesitant about that 'cause in the U.S., we tend to overcriminalize and kind of undersocialize. I'm guided by that because, although this situation is obviously terrible, it's unusual compared with the cases that are causing the most disruption, which involve teenage girls. And the people who do that are usually teenage boys. And so I'm not usually a supporter of sending a teenager to prison for 20 years.
You know, there are reasons why we say, you know, teenagers don't have fully developed brains. They can still do incredible harm, and I don't in any way want to minimize that. Part of what we do is, you know, we now try and teach them, for example, how to drive. We have graduated licensing requirements. So I think we need to do things like that for responsible internet use.
SCHMITZ: Rebecca, this is such a disturbing trend, and I'm wondering, are we seeing more of this, or will we see more of this, as AI platforms proliferate more and more?
TUSHNET: My honest guess is that what we will continue to have is a situation where the well-capitalized, you know, prominent ones - possibly with the exception of Grok - are built with guardrails that prevent this kind of thing, or at least try very hard to prevent it. But there's a sort of substrate of scammy (ph) little apps that offer, and sometimes deliver, the ability to do this for people who are willing to go looking.
SCHMITZ: That was Rebecca Tushnet, codirector of the Berkman Klein Center for Internet and Society at Harvard Law School. Rebecca, thank you.
TUSHNET: Thank you.

Transcript provided by NPR, Copyright NPR.