
Last week, the office of the San Francisco City Attorney filed a landmark lawsuit. It accuses 16 "nudify" websites of violating United States laws on non-consensual intimate images and child abuse material.

"Nudify" sites and apps are easy to use. They let anyone upload a photo of a real person to generate a fake but photorealistic image of what they might look like undressed. Within seconds, someone's photo becomes an explicit image.

In the first half of 2024, the 16 websites named in the lawsuit were visited more than 200 million times. One of the sites says, "imagine wasting time taking her out on dates, when you can just use [redacted site] to get her nudes."

These sites are also advertised on social media, where advertising of nudify apps and sites has increased by 2,400% since the start of this year.

What can victims do?

Even if the images look fake, deepfake abuse can cause significant harm. It can damage a person's reputation and career prospects. It can have detrimental mental and physical health effects, including social isolation, self-harm and a loss of trust in others.

Many victims don't even know their images have been created or shared. If they do, they might successfully report the content to mainstream platforms, but struggle to get it removed from private personal devices or from "rogue" websites that have few protections in place.

Victims can make a report if fake intimate images of them are shared without their consent.

If they're in Australia, or if the perpetrator is based in Australia, the victim can report to the eSafety Commissioner, who can work on their behalf to have the content taken down.

What can digital platforms do?

Digital platforms have policies prohibiting the non-consensual sharing of sexualized deepfakes. But the policies are not always consistently enforced.

Although most nudify apps have been removed from app stores, some are still around. Some "only" let users create near-nude images—say, in a bikini or underwear.

Tech companies can do a lot to stop the spread. Social media, video-sharing platforms and porn sites can ban or remove nudify ads. They can block keywords, such as "undress" or "nudify," as well as issue warnings to people using these search terms.
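To make the keyword-blocking idea concrete, here is a minimal Python sketch of how a platform might screen ad copy or search queries. The blocked-term list, function name and moderation actions are purely illustrative, not any platform's actual policy or API, and a real system would also have to handle misspellings, other languages and deliberate obfuscation.

```python
# Minimal sketch of keyword screening for ads or search queries.
# BLOCKED_TERMS and the returned actions are illustrative only.
BLOCKED_TERMS = {"undress", "nudify", "deepnude"}

def screen_text(text: str) -> str:
    """Return a moderation action for a piece of ad copy or a search query."""
    tokens = set(text.lower().split())
    if tokens & BLOCKED_TERMS:
        # Block the ad, or show a warning/deterrence message to the searcher.
        return "block_and_warn"
    return "allow"

print(screen_text("best app to nudify photos"))  # -> block_and_warn
print(screen_text("photo editing tutorial"))     # -> allow
```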

More broadly, technology companies can use tools to detect fake images. Companies behind the development of AI image-generator tools need to incorporate "guardrails" to prevent the creation of harmful or illegal content.

Watermarking and labeling of synthetic and AI-generated content are important—but not very effective once images have been shared. Digital hashing can also prevent the future sharing of non-consensual content.
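As a rough illustration of how digital hashing can block re-uploads, the sketch below stores only a fingerprint of a reported image and rejects identical copies at upload time. The function names and blocklist are hypothetical, and exact-match SHA-256 is used for simplicity; production tools such as Microsoft's PhotoDNA or Meta's PDQ use perceptual hashes that still match after resizing or re-encoding.

```python
# Simplified hash-matching sketch: only fingerprints of reported images are
# stored, never the images themselves. Exact-match SHA-256 is used here for
# illustration; real deployments use perceptual hashes robust to re-encoding.
import hashlib

known_hashes: set[str] = set()  # hypothetical blocklist of reported-image hashes

def register_reported_image(image_bytes: bytes) -> None:
    """A victim (or trusted service) reports an image; store only its hash."""
    known_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def is_blocked(upload_bytes: bytes) -> bool:
    """Check a new upload against the blocklist before it is published."""
    return hashlib.sha256(upload_bytes).hexdigest() in known_hashes

# Usage: once an image is reported, identical re-uploads are refused.
register_reported_image(b"reported-image-bytes")
print(is_blocked(b"reported-image-bytes"))    # True  -> refuse upload
print(is_blocked(b"some-other-image-bytes"))  # False -> allow
```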

Some platforms already use such tools to address deepfake abuse. They're part of the solution, but we shouldn't rely on them to fix the problem.

Search engines play a role, too. They can reduce the visibility of nudify and non-consensual deepfake sites. Last month, Google announced several measures to address deepfake abuse. When someone reports non-consensual explicit deepfakes, Google can prevent the content from appearing in search results and remove duplicate images.

Governments can also introduce laws and regulatory frameworks to address deepfake abuse. This can include blocking access to nudify and deepfake sites, although such blocks can be circumvented with VPNs.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: AI 'nudify' sites are being sued for victimizing people. How can we battle deepfake abuse? (2024, August 21) retrieved 21 August 2024 from https://techxplore.com/news/2024-08-ai-nudify-sites-sued-victimizing.html
