Taylor Swift is the latest victim of ‘disgusting’ AI trend

Fake pornographic, nude and sexually suggestive images of Taylor Swift went viral on social media Thursday, prompting her legions of fans to report the responsible accounts en masse and to try to bury the images in a deluge of supportive posts for the artist.

One account responsible for sharing a thread of the photos — @Real_Nafu — has already been suspended by X. Other accounts — @xCharlotteAI and @AIcelebimages, which are seemingly operated by the same user — remained active as of Thursday morning.

The @AIcelebimages account has since posted AI-generated photos of Ariana Grande and Emma Watson. Both accounts advertise a store, apparently operated by “the first fully functional AI influencer,” that sells AI-generated images of female celebrities.

Related: George Carlin resurrected – without permission – by self-described ‘comedy AI’

“I’m not a Swifty, but the spread of Taylor Swift AI pictures should be stopped. She is a Human being with feelings no matter how rich she is it still hurts,” one X user wrote. “PROTECT TAYLOR SWIFT.”

Other users called the images “disgusting” and implored people not to share them.

“If you think sexualized non-consensual AI-generated photos of Taylor Swift being spread online isn’t an issue, I want you to think what that means for countless women and children who aren’t Taylor Swift who’ve also been subjected to digital rape and AI porn,” one user said.

Swift’s publicist, Tree Paine, did not immediately respond to a request for comment. 

Related: The ethics of artificial intelligence: A path toward responsible AI

Deepfake technology: The weaponization of information

Deepfake technology isn’t new, but advances in artificial intelligence have made it more accessible and more powerful than ever before.

In the past year, AI-generated deepfake fraud has ventured beyond images into video and audio. Cases of AI-generated fraudulent phone calls have recently been on the rise; a fake robocall, seemingly from President Joe Biden, urged voters not to vote in the New Hampshire primary, according to NBC.

Last year, a mother received a phone call from people who claimed to have kidnapped her daughter and demanded a ransom payment. The screams on the other end of the line were utterly convincing; in reality, her daughter was safe at home in bed.

Several weeks ago, a self-described “comedy AI” consumed each of George Carlin’s comedy specials to resurrect the late comic legend for what appeared to be a new special, released 15 years after his death without the consent of his family.

Deepfake images of politicians and public figures — including the Pope and former President Donald Trump — have likewise proliferated recently on social media.

And deepfake porn has been around for years: the company Sensity AI found in 2021 that more than 90% of deepfake videos online were nonconsensual porn, and that 90% of those videos featured women.

Motherboard reported on AI-assisted deepfake celebrity porn in 2017. The issue has only escalated since then. 

An August investigation by 404 Media found that users of the site Civitai can browse thousands of AI models, many trained on images of real people, that can be made to produce pornographic scenarios.

“Our investigation shows the current state of the non-consensual AI porn supply chain: specific Reddit communities that are being scraped for images, the platforms that monetize these AI models and images and the open source technology that makes it possible to easily generate non-consensual sexual images of celebrities, influencers, YouTubers and athletes,” 404 Media wrote.

“We live in a really troubling time where it’s not just about fake news, it’s about how people are actually weaponizing not just the AI technology, they’re really weaponizing information,” United Nations AI advisor Neil Sahota told TheStreet.

Related: Human creativity persists in the era of generative AI

AI regulatory efforts

Biden’s executive order on AI, unveiled in October, contains a set of protections against AI-generated fraud.

“The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content,” the order said.

While experts at the time called the order a good first step, it was criticized for lacking clarity on enforcement. 

States including Texas, New York, Minnesota, Virginia, Georgia and Hawaii have passed legislation banning nonconsensual deepfake pornography. 

Representative Tom Kean (R-NJ) recently reintroduced a bill that would make the dissemination of nonconsensual deepfake porn a federal crime. He first introduced the bill in November, in the wake of an incident at a New Jersey high school in which male students allegedly used AI to generate pornographic images of female students.

“We’re living in a time of hyper change. We’re at a point where we’ll experience 100 years’ worth of change in the next 10 years,” Sahota said. “And we as a society are not ready to handle that. Historically, we’ve been very reactive; we don’t have that luxury anymore.”

This comes amid mounting concerns over the responsible, ethical use of AI technology, even as companies race to ship ever more powerful iterations of their models; researchers have expressed concern over algorithmic discrimination, hallucinations, AI-assisted fraud, copyright infringement, concentrations of corporate power and severe economic disruption.

Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

