Fake Explicit Images of Taylor Swift Flood Social Media – The New York Times

Fake, sexually explicit images of Taylor Swift, likely generated by artificial intelligence, spread quickly across social media platforms this week, disturbing fans who saw them and prompting renewed calls from lawmakers to protect women and to crack down on the platforms and technologies that disseminate such images.

An image shared by a user on X, formerly Twitter, was viewed 47 million times before the account was suspended on Thursday. X suspended several accounts that posted fake images of Ms. Swift, but the images were shared on other social media platforms and continued to spread despite those companies' efforts to remove them.

While X said it was working to remove the images, fans of the pop superstar flooded the platform in protest. They posted related keywords along with the phrase “Protect Taylor Swift” to drown out the explicit images and make them harder to find.

Reality Defender, a cybersecurity company focused on AI detection, determined with 90 percent certainty that the images were created using a diffusion model, an AI-driven technology accessible through more than 100,000 apps and publicly available models, said Ben Colman, the company's co-founder and chief executive.

As the AI industry booms, companies are eager to release tools that let users create images, videos, text and audio recordings from simple prompts. The AI tools are hugely popular, but they have made it easier and cheaper than ever to create so-called deepfakes that depict people doing or saying things they never did.

Researchers now fear that deepfakes are becoming a powerful disinformation force, allowing everyday internet users to create non-consensual nude images or embarrassing depictions of political candidates. Artificial intelligence was used to create fake robocalls from President Biden during the New Hampshire primary, and Ms. Swift appeared in deepfake ads this month hawking cookware.

“There has always been a dark undercurrent of the Internet, non-consensual pornography of various kinds,” said Oren Etzioni, a computer science professor at the University of Washington who studies deepfake detection. “Now it’s a new strain of it that’s particularly harmful.”

“We will see a tsunami of these AI-generated explicit images. The people who created this see it as a success,” Mr. Etzioni said.

X said it has a zero-tolerance policy toward the content. “Our teams are actively removing all identified images and taking appropriate action against the accounts responsible for publishing these images,” a representative said in a statement. “We are closely monitoring the situation to ensure any further violations are addressed immediately and the content is removed.”

Since Elon Musk purchased the service in 2022, X has seen a rise in problematic content such as harassment, disinformation and hate speech. Mr. Musk has relaxed the site's content rules and has laid off, fired or accepted the resignations of employees who worked to remove such content. The platform also reinstated accounts that had previously been suspended for rule violations.

Although many of the companies that make generative AI tools prohibit their users from creating explicit images, people are finding ways to break the rules. “It's an arms race, and it seems that every time someone develops a guardrail, someone else figures out how to jailbreak,” Mr. Etzioni said.

According to 404 Media, a technology news site, the images come from a channel on the messaging app Telegram dedicated to producing such images. But the deepfakes attracted widespread attention after they were posted on X and other social media services, where they spread quickly.

Some states have restricted pornographic and political deepfakes. But the restrictions haven't had much of an impact, and there are no federal regulations on such deepfakes, Mr. Colman said. Platforms have tried to crack down on deepfakes by encouraging users to report them, but that method hasn't worked, he added. By the time they are reported, millions of users have already seen them.

“The toothpaste is already out of the tube,” he said.

Ms. Swift's publicist, Tree Paine, did not immediately respond to requests for comment late Thursday.

Ms. Swift's deepfakes sparked renewed calls for action from lawmakers. Rep. Joe Morelle, a New York Democrat who introduced a bill last year that would make sharing such images a federal crime, said on X that what happened to Ms. Swift happens to "women everywhere, every day."

“I have repeatedly warned that AI could be used to generate non-consensual intimate images,” Sen. Mark Warner, a Democrat from Virginia and chairman of the Senate Intelligence Committee, said of the images on X. “This is an unfortunate situation.”

Rep. Yvette D. Clarke, a Democrat from New York, said advances in artificial intelligence have made deepfakes easier and cheaper to create.

“What happened to Taylor Swift is nothing new,” she said.