Last week, AI-generated images depicting superstar Taylor Swift in sexually suggestive and explicit positions spread across the internet, sparking horror and condemnation. Experts say it's a wake-up call showing we need real regulation of AI now.
Mohit Rajhans, a media and tech consultant with Think Start, told CTV News Channel on Sunday that "we've turned into the wild west online" when it comes to generating and spreading AI content.
“The train has left the station, artificial general intelligence is here, and it’s going to be up to us now to figure out how we’re going to regulate it.”
It reportedly took 17 hours for X to take down the fake images circulating on the platform.
The terms "Taylor Swift," "Taylor Swift AI," and "Taylor AI" currently bring up error messages if a user attempts to search them on X. The company has said this is a temporary measure as it evaluates safety on the platform.
But the deepfaked pornographic images of the singer were viewed tens of millions of times before social media sites took action. Deepfakes are AI-generated images and videos of false situations featuring real people. The big danger is that they are significantly more realistic than a photoshopped image.
“There’s a lot of potential harassment and misinformation that gets spread if this technology is not regulated,” Rajhans said.
The targeting of Swift is part of a disturbing trend of AI being used to generate pornographic images of people without their consent, a form of image-based abuse often called "revenge porn," which is predominantly used against women and girls.
While AI has been misused for years, Rajhans said there’s definitely a “Taylor effect” in making people sit up and pay attention to the problem.
“What’s happened is…because of the use of Taylor Swift’s image to do everything from sell products that she’s not affiliated with to doctor her (image) into various sexual acts, more people have become aware of how rampant this technology is,” he said.
Even the White House is paying attention, commenting Friday that action needs to be taken.
In a statement Friday, White House press secretary Karine Jean-Pierre said the spreading of fake nudes of Swift was “alarming” and that legislative action was being considered to better address these situations in the future.
"There should be legislation, obviously, to deal with this issue," she said, without specifying which legislation the administration supports.
SAG-AFTRA, the union that represents thousands of actors and performers, said in a statement Saturday that it supports legislation introduced last year by U.S. Rep. Joe Morelle, called the Preventing Deepfakes of Intimate Images Act.
“The development and dissemination of fake images — especially those of a lewd nature — without someone’s consent must be made illegal,” the union said in the statement.
In the White House briefing, Jean-Pierre added that social media platforms “have an important role to play in enforcing their own rules” in order to prevent the spreading of “non-consensual intimate imagery of real people.”
Rajhans said Sunday that it’s clear social media companies need to step up in dealing with deepfakes.
“We need to hold social media companies accountable,” he said. “There has to be some heavy fines associated with some of these social media companies. They’ve made a lot of money off of people using social media.”
He pointed out that if people upload a song that doesn’t belong to them, there are ways it can get flagged on social media sites.
“So why are they not using this technology right now in an effort to moderate social media so that deepfakes can’t penetrate?” he said.
A 2023 report on deepfakes found that 98 per cent of all deepfake videos online were pornographic in nature—and 99 per cent of the individuals targeted by deepfake pornography were women. South Korean singers and actresses were disproportionately targeted, constituting 53 per cent of individuals targeted in deepfake pornography.
The report highlighted that technology exists now that allows users to make a 60-second deepfake pornographic video for free and in less than half an hour.
The sheer speed of progress in the AI world is working against efforts to manage the technology's repercussions, Rajhans said.
“It’s getting so pedestrian level that you and I can just make memes and share them and no one can know the difference between (if) it’s actual fact or it’s something that’s been recreated,” he said.
“This is not just about Taylor Swift. This is about harassment, this is about sharing fake news, this is about a whole culture that needs to be educated about how this technology is being used.”
It’s unknown how long it could take to see Canadian legislation curtailing deepfakes.
The Canadian Security Intelligence Service called deepfakes a "threat to a Canadian future" in a 2023 report, which concluded that "collaboration amongst partner governments, allies, academics, and industry experts is essential to both maintaining the integrity of globally distributed information and addressing the malicious application of evolving AI."
A proposed regulatory framework for AI systems in Canada, called the Artificial Intelligence and Data Act, is currently being examined in the House of Commons, but it wouldn't take effect this year. If the bill gains royal assent, a consultation process will start to clarify AIDA, with the framework coming into effect no sooner than 2025.