A coalition of leading technology companies unveiled an agreement today aimed at curbing the spread of AI-generated misinformation during the 2024 election cycle. The accord, signed by over 20 firms including Microsoft, Meta, Google and OpenAI, lays out a set of commitments to assess risks from AI models and address the dissemination of manipulated content.
The rise of deepfakes and synthetic media has raised alarms ahead of a packed 2024 global election calendar: more than 40 countries, representing some 4 billion voters, will head to the polls this year. Meanwhile, the number of AI-generated deepfakes has jumped 900% year over year, exacerbating concerns.
Under the new commitments, companies pledged to evaluate potential harms from AI systems and seek to detect the distribution of false or misleading content. They also agreed to provide transparency around these efforts. However, the voluntary principles allow flexibility in their application across different services.
While experts call the accord a step in the right direction, they note it lacks concrete enforcement mechanisms. “I don’t see enough specifics,” said Josh Becker, a state senator in California. “We will likely need legislation that sets clear standards.” Critics also point out that existing detection methods have failed to keep pace with advances in synthetic media generation.
The need for vigilance around election-related AI misuse comes as new tools, such as OpenAI’s text-to-video model Sora, make it easier to generate deceptive images, audio and video. Still, executives such as IBM’s chief privacy and trust officer, Christina Montgomery, stressed the importance of “cooperative measures to protect people and societies.”
The commitments represent the industry’s latest attempt to self-police the growing risks of AI technology. But their ultimate effectiveness remains uncertain ahead of this year’s elections, and tougher regulation of AI may yet prove necessary to combat information manipulation.