The EU called on Facebook, TikTok and other tech titans on Tuesday to crack down on deepfakes and other AI-generated content by using clear labels ahead of Europe-wide polls in June.
The recommendation is part of a raft of guidelines published under a landmark content law by the European Commission for digital giants to tackle risks to elections including disinformation.
The EU executive has unleashed a string of measures to clamp down on big tech, especially regarding content moderation.
Its biggest tool is the Digital Services Act (DSA) under which the bloc has designated 22 digital platforms as "very large" including Instagram, Snapchat, YouTube and X.
There has been feverish excitement over artificial intelligence since OpenAI's ChatGPT arrived on the scene in late 2022, but the EU's concerns over the technology's harms have grown in parallel.
Brussels especially fears the impact of Russian "manipulation" and "disinformation" on elections taking place in the bloc's 27 member states on June 6-9.
In the new guidelines, the commission said the largest platforms "should assess and mitigate specific risks linked to AI, for example by clearly labeling content generated by AI (such as deepfakes)".
To diminish those risks, the commission recommends that big platforms promote official information on elections and "reduce the monetisation and virality of content that threatens the integrity of electoral processes".
"With today's guidelines we are making full use of all the tools offered by the DSA to ensure platforms comply with their obligations and are not misused to manipulate our elections, while safeguarding freedom of expression," said the EU's top tech enforcer, Thierry Breton.
While the guidelines are not legally binding, platforms must explain what other "equally effective" measures they are taking to limit the risks if they do not adhere to them.
The EU can ask for more information, and if regulators do not believe there is full compliance, they can hit the firms with probes that could lead to hefty fines.
'Trusted' information
Under the new guidelines, the commission also said political advertising "should be clearly labeled as such" before a tougher law on the issue comes into force in 2025.
It also urges platforms to put in place mechanisms "to reduce the impact of incidents that could have a significant effect on the election outcome or turnout".
The EU will conduct "stress-tests" with relevant platforms in late April, it said.
X has already been under investigation since December over content moderation.
And the commission on March 14 pressed Facebook, Instagram, TikTok and four other platforms to provide more information on how they are countering AI risks to polls.
In the past few weeks, several of the companies including Meta have outlined their plans.
TikTok on Tuesday announced further measures it is taking, including push notifications from April that will direct users to "trusted and authoritative" information about the June vote.
TikTok has around 142 million monthly active users in the EU and is increasingly used as a source of political information by young people.
© 2024 AFP