NEW YORK (AP) — Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.
However, experts fear the dark side of readily available tools could exacerbate something that primarily harms women: non-consensual deepfake pornography.
Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn performers.
Since then, deepfake creators have circulated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a host of websites. And some offer users the ability to create their own images, essentially allowing anyone to turn whomever they wish into sexual fantasies without their consent, or use the technology to harm former partners.
Experts say the problem grew as it became easier to create sophisticated and visually compelling deepfakes. And they say things could get worse with the development of generative AI tools that train on billions of images from around the web and spit out novel content using existing data.
“The reality is that technology will continue to spread, evolve and become as easy as pressing a button,” said Adam Dodge, the founder of EndTAB, a group that provides training on technology-enhanced abuse. “And as long as that happens, people will no doubt … continue to abuse this technology to harm others, primarily through online sexual violence, deepfake pornography, and fake nudity.”
Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when, out of curiosity, she searched Google for an image of herself. To this day, Martin says she doesn’t know who created the fake images, or the videos of her having sex that she would later find. She suspects someone likely took a photo posted on her social media page or elsewhere and doctored it into porn.
Horrified, Martin contacted various websites over several years in an effort to get the images taken down. Some didn’t respond. Others took them down, but she soon found them again.
“You can’t win,” Martin said. “It’s something that will always be out there. It’s like it ruined you forever.”
The more she spoke up, she said, the more the problem escalated. Some people even told her the way she dressed and posted pictures on social media contributed to the harassment – essentially blaming her for the pictures and not the creators.
Eventually, Martin turned her attention to legislation, advocating for a national law in Australia that would fine companies A$555,000 (US$370,706) if they fail to comply with removal notices for such content from online safety regulators.
But governing the internet is next to impossible when countries have their own laws for content that is sometimes made on the other side of the world. Martin, currently a solicitor and legal scholar at the University of Western Australia, believes the problem has to be controlled through some sort of global solution.
In the meantime, the makers of some AI models say they’re already restricting access to explicit images.
OpenAI says it removed explicit content from data used to train the DALL-E imaging tool, limiting users’ ability to create these types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks certain keywords from being used and encourages users to report problematic images to moderators.
Startup Stability AI, meanwhile, rolled out an update in November that removes the ability to create explicit images with its image generator, Stable Diffusion. The changes came after reports that some users were using the technology to create celebrity-inspired nude images.
Motez Bishara, a spokesperson for Stability AI, said the filter uses a combination of keywords and other techniques like image recognition to detect nudity, returning a blurred image instead. But it is possible for users to manipulate the software and generate whatever they want, since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on top of Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”
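Bishara’s description amounts to a two-stage check: screen the text prompt against a keyword block list, then run the generated image through an image-recognition model and blur anything it flags as nudity. The sketch below is only a minimal illustration of that general pattern, not Stability AI’s actual filter; the keyword list, the stubbed-out classifier and the blur radius are all assumptions for demonstration.

```python
from PIL import Image, ImageFilter

# Illustrative block list only; a real system would rely on a much larger, curated set.
BLOCKED_KEYWORDS = {"nude", "nsfw", "explicit"}


def prompt_is_blocked(prompt: str) -> bool:
    """Reject a text prompt if it contains any blocked keyword."""
    tokens = prompt.lower().split()
    return any(word in tokens for word in BLOCKED_KEYWORDS)


def image_looks_explicit(image: Image.Image) -> bool:
    """Stand-in for the image-recognition step described above.
    A production filter would run a trained nudity classifier here."""
    return False  # placeholder: always treated as safe in this sketch


def filter_output(prompt: str, image: Image.Image) -> Image.Image | None:
    """Refuse blocked prompts; blur any generated image the classifier flags."""
    if prompt_is_blocked(prompt):
        return None  # request refused before anything is returned
    if image_looks_explicit(image):
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```

The point of the two stages is that either check alone is easy to evade: prompt filtering misses paraphrased requests, and an image classifier only acts after something explicit has already been generated.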
Some social media companies have also tightened their rules to better protect their platforms from harmful materials.
TikTok said last month that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. The company had previously barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.
Gaming platform Twitch also recently updated its policy on explicit deepfake images after a popular streamer known as Atrioc was discovered to have a deepfake porn website open in his browser during a livestream in late January. The site featured fake images of fellow Twitch streamers.
Twitch already prohibited explicit deepfakes, but now showing even a glimpse of such content, including when it’s intended to express outrage, “will be removed and result in enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an immediate ban.
Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.
Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is not widespread, but a 2019 report by AI firm DeepTrace Labs found it was almost exclusively weaponized against women, and the individuals most targeted were Western actresses, followed by South Korean K-pop singers.
The same app, which was removed by Google and Apple, had run ads on Meta’s platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement that the company’s policy restricts both AI-generated and non-AI adult content and that it has banned the app’s site from advertising on its platforms.
In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down, which allows teens to report explicit images and videos of themselves from across the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child protection groups.
“When people ask our senior management, what are the boulders coming down the hill that we’re concerned about? The first is end-to-end encryption and what that means for child protection. And second is AI, and deepfakes in particular,” said Gavin Portnoy, a spokesman for the National Center for Missing and Exploited Children, which operates the Take It Down tool.
“We haven’t been able to formulate a direct answer to that yet,” Portnoy said.