Generative AI can fuel election misinformation, report warns

Platforms that generate images through artificial intelligence (AI) can be used as tools for electoral and political disinformation, even when they have policies against creating misleading content. That is the conclusion of researchers at the Center for Countering Digital Hate (CCDH), a non-profit organization that monitors online hate speech, in a report published this Wednesday (6).

For those in a hurry:

  • Researchers at the Center for Countering Digital Hate (CCDH) warn that generative AI platforms can be used as tools for electoral and political disinformation;
  • Using these platforms, they were able to generate images of US President Joe Biden hospitalized and of election workers destroying ballot boxes, for example. According to the researchers, such images can serve as “photographic evidence” to spread electoral disinformation;
  • The CCDH tested several platforms, including OpenAI's ChatGPT Plus and Microsoft's Image Creator, and succeeded in producing misleading images in 41% of tests, especially with prompts asking for depictions of voter fraud;
  • In total, 20 companies, including OpenAI and Microsoft, have agreed to work together to combat misinformation in the 2024 elections around the world. Midjourney did not sign the initial commitment and had the worst performance in the tests, generating false content in 65% of cases.

The researchers were able to use generative AI tools to create images of US President Joe Biden lying in a hospital bed and of election workers destroying ballot boxes, for example. “The potential for such AI-generated images to serve as 'photographic evidence' could exacerbate the spread of false claims, posing a significant challenge to preserving election integrity,” CCDH researchers wrote in the report.

The organization's researchers tested ChatGPT Plus (OpenAI), Image Creator (Microsoft), Midjourney and DreamStudio (Stability AI), all platforms that let users generate images from text commands (prompts). The information comes from the Reuters news agency.

Generative AI and its potential for disinformation

(Image: illustration of a humanoid robot interacting with holographic content, representing artificial intelligence. Credit: Pedro Spadoni via DALL-E/Olhar Digital)

In total, 20 technology companies, including OpenAI, Microsoft and Stability AI, have committed to working together to prevent misleading AI content from interfering in the 2024 elections around the world. Midjourney, however, was not among the initial signatories.

The CCDH found that the AI tools produced misleading images in 41% of tests. According to the report, the platforms were most susceptible to prompts asking for depictions of electoral fraud. ChatGPT Plus and Image Creator did block all prompts requesting images of the candidates themselves.

Midjourney performed the worst, generating misleading images in 65% of tests. The CCDH also noted that some Midjourney-generated images are publicly available, and that there is already evidence of their use to create false political content.

(Image: Reproduction/Midjourney)

Midjourney founder David Holz said that updates related to the US elections will be implemented soon, and noted that previously created images do not reflect the platform's current moderation practices. Stability AI has updated its policies to prohibit fraud and the promotion of misinformation.

OpenAI said it works to prevent abusive use of its tools. Microsoft, a partner of the ChatGPT developer, did not respond to the news agency's request for comment.
