The AI Threat (Taylor’s Version)
January 30, 2024

Howdy, Meteor readers,

This email almost didn’t make it to you today. This morning, I woke up and saw that Calvin Klein launched a new campaign featuring Idris Elba, and knowing that my day could not get any better, I went back to bed. But it did, in fact, get better. I went down an Idris Elba rabbit hole (recommend), and found that the actor/model/DJ/rapper is also the face of a campaign aimed at pushing the British government to ban the sale of “zombie-style” knives, the weapon of choice for young people committing knife crimes (which are currently on the rise there).

In today’s newsletter, we take a look at non-consensual, AI-generated pornography. Plus Florida’s latest attack on trans residents, dueling rappers, and the death of an icon.

Buying a CK coat,

Shannon Melero

WHAT’S GOING ON

Karma is the girls in the seats: Last week, Xwitter was flooded with pornographic images of Taylor Swift, all of which were AI-generated deepfakes. The response from her fanbase was immediate, with thousands of Swifties reporting and burying the images online with a level of coordination that military tactical units will someday study. And while I’ll take every opportunity to write about Swifties saving the day, this incident has also brought to the forefront the pervasiveness and dangers of AI porn.

Swift certainly isn’t the first person to be targeted by the dark forces that produce AI porn. Just a few days before false images of Swift spread online, actress Xochitl Gomez, who’s still a minor, discovered that her face had been edited into sexually explicit images and circulated on Xwitter. Gomez has been trying to get the images removed without success. (Swifties, activate!) Meanwhile, Xwitter put a temporary block on the search terms “Taylor” and “Swift” less than a day after those images went viral.

TAYLOR SWIFT IN NEW YORK EARLIER THIS MONTH.
(PHOTO BY GOTHAM VIA GETTY IMAGES)

Swift herself is considering legal action (and Missouri recently introduced the Taylor Swift Act), potentially bringing the kind of high-profile attention that could be a major tool in the fight against AI-generated porn. But court cases take time, and this crime is proliferating fast. AP reported in December that 143,000 new deepfake videos—many featuring underage girls—were posted online in 2023 alone, more than in all previous years combined. (The tech to make these kinds of videos has been available to general users since 2017; there was even a now-defunct app called DeepNude that made it as easy as pushing a button.)

In an interview with Slate, Sophie Maddocks, a cyber sexual violence researcher, explained that in a seven-month span in 2023, “there was a ten-fold increase of AI-generated nude images online,” which victimized preteen boys as well as young girls. While there is a federal bill on the table that would criminalize the production of deepfakes, Maddocks calls the current legal protections in place—including a handful of state bills—a “patchwork in terms of civil and criminal recourse.” Victims have few options unless they can pay the exorbitant lawyer fees and spend the time it takes to scrub these images from the internet—which platforms like Facebook and Xwitter aren’t in a rush to do.

Maddocks also points to an issue that feminist technologists have long called out: What would it look like “if we had created these AI tools in a social environment that did center consent and that did prioritize consent”? Technological developments that consider the implications for those most likely to be harmed? Imagine that!

AND:
PROTESTORS IN KISUMU, KENYA. (PHOTO BY BRIAN ONGORO VIA GETTY IMAGES)
FOLLOW THE METEOR Thank you for reading The Meteor! Got this from a friend? Subscribe using their share code or sign up for your own copy, sent Tuesdays and Thursdays.