The recent emergence of explicit, AI-generated images of Taylor Swift on social media has sparked outrage among her fans and highlighted the potential for harm posed by mainstream artificial intelligence technology. While the images circulated predominantly on the social media site X, formerly known as Twitter, they were also found on sites such as Instagram and Reddit, and have since been removed from those platforms. The images continue to spread, however, through other, less regulated channels.
The images, which show the singer in sexually suggestive and explicit positions, were viewed tens of millions of times before being removed from social platforms. Swift’s enormous contingent of loyal “Swifties” expressed their outrage on social media this week, bringing the issue to the forefront.
X’s policies ban the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm,” and the company did not respond to CNN’s request for comment. Nine US states currently have laws against the creation or sharing of non-consensual deepfake images, synthetic media created to mimic a person’s likeness.
Researchers at Graphika, a firm that studies how communities are manipulated online, traced the fake Swift images to a 4chan message board that is “increasingly” dedicated to posting “offensive” AI-generated content. A daily challenge on the board appears to have sparked the deluge of explicit AI-generated Taylor Swift images, with 4chan users making a game of exploiting popular AI image generators. According to Graphika, users shared techniques for crafting image prompts, evading bans, and successfully generating sexually explicit celebrity images.
In the 4chan community where these images originated, Taylor Swift is not even the most frequently targeted public figure. OpenAI has denied that any of the Swift images were created using DALL-E, while Microsoft has said it is investigating whether any of its AI tools were used.
Attempting to stop the spread, X took the drastic step of blocking all searches for “Taylor Swift” for two days. But experts such as Carlos López G., a senior analyst at Graphika, warn that platforms risk being inundated with offensive content for as long as 4chan users keep challenging one another to subvert image generators’ filters.
AI images can be generated from text prompts without the subject’s consent, raising serious privacy concerns. For now, close inspection of an AI-generated image can still reveal clues that it is not real, but experts say it is only a matter of time before real and AI-generated images become visually indistinguishable.
AI-generated deepfakes, realistic but fake images, video, and audio produced by machine-learning techniques, have also been used increasingly to create fake celebrity endorsements. At the federal level, the center said last month, at least eight bills seek to regulate deepfakes and similar “synthetic media.”
The assault on Taylor Swift’s public image is a reminder of how much easier deepfakes have become to make in recent years. Deepfakes disproportionately target young women, and many deepfake apps are marketed as a way for ordinary people to make funny videos and memes. At their core, however, such deepfakes are an attack on privacy.
The explicit AI-generated images of Taylor Swift have reignited debate over the implications of artificial intelligence and the need for more effective regulation of such content. The technology is advancing quickly, and the potential for misuse is growing with it. Lawmakers and social media platforms must act to protect individuals’ privacy and safety, and to ensure that such images are not used to spread misinformation.