OpenAI lays out its disinformation strategy ahead of the 2024 elections

As the US prepares for the 2024 presidential election, OpenAI has shared its plans to curb election-related misinformation worldwide, with a focus on increasing transparency about the origins of information. One highlight is the use of cryptography, as standardized by the Coalition for Content Provenance and Authenticity (C2PA), to encode the provenance of images generated by DALL-E 3. This will allow the platform to better recognize AI-generated images using a provenance classifier, helping voters assess the reliability of certain content.
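The C2PA approach works by embedding a signed manifest inside the image file itself. As a rough illustration only (a heuristic sketch, not OpenAI's or the C2PA's actual implementation), the snippet below scans a JPEG's application segments for an embedded "c2pa" JUMBF payload, which the C2PA specification places in APP11 segments; real verification would also validate the manifest's cryptographic signatures with a full C2PA SDK:

```python
# Heuristic sketch: detect an embedded C2PA manifest in raw JPEG bytes.
# Assumption (from the C2PA spec, not the article): manifests are stored
# in JPEG APP11 (0xFFEB) segments as JUMBF boxes labeled "c2pa".
# This only detects presence; it does NOT verify signatures.

def has_c2pa_manifest(data: bytes) -> bool:
    """Return True if a JPEG byte stream appears to carry a C2PA manifest."""
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not a marker; entropy-coded image data begins here
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes the 2 length bytes
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 carrying a C2PA JUMBF box
            return True
        i += 2 + length  # advance past marker + segment
    return False
```

A verifier would go further and parse the manifest, then check its certificate chain and content hashes, but the lookup above shows where the provenance data physically lives in the file.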

This approach is similar to, if not better than, DeepMind's SynthID, which invisibly watermarks AI-generated images and audio as part of Google's own election content strategy, announced last month. Meta's AI image generator also adds an invisible watermark to its content, although the company has not yet communicated its plans for combating election-related misinformation.

OpenAI says it will soon work with journalists, researchers, and platforms to get feedback on its provenance classifier. On a related note, ChatGPT users will start seeing real-time news from around the world, complete with attribution and links. Users will also be directed to CanIVote.org, the authoritative online resource on U.S. voting procedures, when they ask procedural questions such as where or how to vote.

Additionally, OpenAI reiterates its existing policies against impersonation attempts in the form of deepfakes and chatbots, as well as content designed to distort the voting process or discourage people from voting. The company also bans applications built for political campaigning, and its new GPTs allow users to report potential violations.

OpenAI says that the lessons learned from these early efforts, if at all successful (and that's a very big "if"), will help it roll out similar strategies around the world. The company plans further related announcements in the coming months.