The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.
EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises fresh challenges for the fight against disinformation.
Jourova said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed the 27-nation bloc's voluntary agreement on combating disinformation to dedicate efforts to tackling the AI problem.
Online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, should build safeguards to prevent malicious actors from generating disinformation, Jourova said at a briefing in Brussels.
Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to recognize such content and clearly label this to users, she said.
Google, Microsoft, Meta and TikTok did not respond immediately to requests for comment.
Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, "I don't see any right for the machines to have the freedom of speech."
The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won’t take effect for several years.
Officials in the EU, which also is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative AI.

Recent examples of debunked deepfakes include a realistic picture of Pope Francis in a white puffy jacket and an image of billowing black smoke next to a building accompanied by a claim that it showed an explosion near the Pentagon.

Politicians have even enlisted AI to warn about its dangers. Danish Prime Minister Mette Frederiksen used OpenAI's ChatGPT to craft the opening of a speech to Parliament last week, saying it was written "with such conviction that few of us would believe that it was a robot and not a human behind it."

European and U.S. officials said last week that they're drawing up a voluntary code of conduct for artificial intelligence that could be ready within weeks as a way to bridge the gap before the EU's AI rules take effect.

Similar voluntary commitments in the bloc's disinformation code will become legal obligations by the end of August under the EU's Digital Services Act, which will force the biggest tech companies to better police their platforms to protect users from hate speech, disinformation and other harmful material. Jourova said, however, that those companies should start labeling AI-generated content immediately.
Most of those digital giants have already signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.
Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company. The exit drew a stern rebuke, with Jourova calling it a mistake.
"Twitter has chosen the hard way. They chose confrontation," she said. "Make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently."
Twitter will face a major test later this month when European Commissioner Thierry Breton heads to its San Francisco headquarters with a team to carry out a "stress test," meant to measure the platform's ability to comply with the Digital Services Act. Breton, who's in charge of digital policy, told reporters Monday that he also will visit other Silicon Valley tech companies, including OpenAI.