Pornographic deepfakes of Taylor Swift went viral on X (formerly Twitter) this week, highlighting the dangers of AI-generated imagery online. Synthetic or manipulated media that may deceive people isn't allowed on X, according to its policy, and the platform's safety team posted on Friday that it's "actively removing all identified images and taking appropriate actions against the accounts responsible for posting them."

By Saturday, users noticed that X had attempted to curb the problem by blocking "Taylor Swift" from being searched — but not certain related terms, The Verge reported. Mashable was also able to produce the error page for the terms "Taylor Swift AI" and "Taylor AI." The terms "Swift AI," "Taylor AI Swift," and "Taylor Swift deepfake" remain searchable on the platform, though, with manipulated images still displayed on the "Media" tab.

As Mashable culture reporter Meera Navlakha pointed out in an article about the deepfakes of Swift, major social media platforms are struggling to contain AI-generated content. The speed and ease with which these images can be created has inundated platforms like X in recent months. Making Swift's name unsearchable suggests that X doesn't know how to handle the array of deepfake imagery and video on its platform.

On Friday, White House press secretary Karine Jean-Pierre called the situation "alarming." She also said there should be legislation addressing it, hinting that the issue of AI image moderation may soon come before Congress.