A fountain of fakes

The hype around artificial intelligence tends to focus on extreme Terminator-style apocalypse scenarios, but there’s a very worrying use of AI that’s already causing a lot of trouble: the creation of realistic fakes. The same tech that makes Johnny Cash sing Taylor Swift, that pulls John Lennon’s voice out of an old demo tape or that puts actors into movies they were never in can be used for considerably more wicked purposes.

Here’s the Houston Chronicle on an elderly man who got a call from someone claiming to be a police officer, telling him his son-in-law was in jail. The phone was passed to the “son-in-law”, who begged for bail money. The money was transferred, but there was no police officer and no son-in-law on the line: the son-in-law’s voice had reportedly been cloned by an AI tool good enough to fool his own relatives.

I can’t vouch for the veracity of the story, although I assume the newspaper fact-checked it. But I know enough about AI and things-called-AI to know how powerful and realistic these tools can be. Here’s Johnny:

[embedded video: an AI-generated Johnny Cash singing Taylor Swift]

In New Jersey, teenage boys have been accused of creating fake pornographic images of their female classmates. As the WSJ reports, there is a lot of confusion over whether this is even illegal, and disagreement among parents over how it should be addressed: some are (rightly, in my opinion) demanding serious consequences while others shrug it off with “boys will be boys”. Given how realistic the results can be, I don’t see why this should be treated any less seriously than if the images were real: it’s still a form of sexual abuse.

And this tech isn’t just being used for sexual abuse. Earlier this year, students in New York used AI to make a fake video of a school principal going on a racist rant; last week, a deepfake showed model Bella Hadid apparently supporting the Israeli government and apologising for her previous remarks about the plight of the Palestinian people.

This is a new version of an old problem: technology’s ability to introduce new threats faster than we can decide whether or how that technology should be regulated. And while it’s still possible to spot fakes, it’s getting harder with each new generation. As these systems evolve, they need less and less input to do what they do. The fake video of Bella Hadid repurposed an existing video of her (talking about Lyme disease); future fakes will need only a couple of photos.

The solution, I think, isn’t just to regulate the technology – although the howls of protest from the pro-AI crowd only make it clearer that we need to do that too. It’s to regulate the behaviour. The law may not know its ChatGPT from its DALL·E, but it doesn’t have to, any more than it needs to know the difference between an AK-47 and an AR-15. It’s not the tool that matters here; it’s the person who wields it to harm others.

