The U.S. Federal Trade Commission has moved to put new rules in place targeting impersonation, citing the rising threat of scams enabled by generative artificial intelligence.
The agency is seeking public comment on a proposed rule that would make companies liable if they "know or have reason to know" their technology, including tools used to make AI-generated content, "is being used to harm consumers through impersonation," according to an FTC statement Thursday.
The FTC also said it finalized a rule covering impersonations of businesses and the government, such as using company logos in scam emails or sending messages that appear to come from a government address. Under the rule, the commission can file court cases seeking to force scammers to return money obtained through such schemes.
The FTC said that complaints about impersonation fraud are surging, and that it is concerned AI "threatens to turbocharge this scourge."
"Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale," FTC Chair Lina Khan said in a statement, adding that voice cloning and other AI tools have made scams more feasible.
The rapid development of generative AI, which can produce voice, video or text in a variety of styles, has dazzled Silicon Valley. At the same time, the technology has raised privacy and security concerns because of its ability to impersonate individuals, as in a recent robocall that mimicked President Joe Biden's voice.
© 2024 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.