
OpenAI won’t let politicians use its tech for campaigning, for now

  • Science
  • January 16, 2024

Artificial intelligence company OpenAI laid out its plans and policies to try to stop people from using its technology to spread disinformation and lies about elections, as billions of people in some of the world’s biggest democracies head to the polls this year.

The company, which makes the popular ChatGPT chatbot and the DALL-E image generator and provides AI technology to many companies, including Microsoft, said in a Monday blog post that it wouldn’t allow people to use its tech to build applications for political campaigns and lobbying, to discourage people from voting, or to spread misinformation about the voting process. OpenAI said it would also begin putting embedded watermarks — a tool to detect AI-created photographs — into images made with its DALL-E image generator “early this year.”

“We work to anticipate and prevent relevant abuse — such as misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates,” OpenAI said in the blog post.

Political parties, state actors and opportunistic internet entrepreneurs have used social media for years to spread false information and influence voters. But activists, politicians and AI researchers have expressed concern that chatbots and image generators could increase the sophistication and volume of political misinformation.

OpenAI’s measures come after other tech companies have also updated their election policies to grapple with the AI boom. In December, Google said it would restrict the kind of answers its AI tools give to election-related questions. It also said it would require political campaigns that buy ad spots from it to disclose when they use AI. Facebook parent Meta also requires political advertisers to disclose if they used AI.

But the companies have struggled to administer their own election misinformation policies. Though OpenAI bars using its products to create targeted campaign materials, an August report by The Washington Post showed these policies were not enforced.

There have already been high-profile instances of election-related lies being generated by AI tools. In October, The Washington Post reported that Amazon’s Alexa home speaker was falsely declaring that the 2020 presidential election was stolen and full of election fraud.

Sen. Amy Klobuchar (D-Minn.) has expressed concern that ChatGPT could interfere with the electoral process by, for example, telling people to go to a fake address when asked what to do if lines are too long at a polling location.

If a country wanted to influence the U.S. political process, it could, for example, build human-sounding chatbots that push divisive narratives in American social media spaces, rather than having to pay human operatives to do it. Chatbots could also craft personalized messages tailored to each voter, potentially increasing their effectiveness at low cost.

In the blog post, OpenAI said it was “working to understand how effective our tools might be for personalized persuasion.” The company recently opened its “GPT Store,” which allows anyone to easily train a chatbot using data of their own.

Generative AI tools do not have an understanding of what is true or false. Instead, they predict what a good answer might be to a question based on crunching through billions of sentences ripped from the open internet. Often, they provide humanlike text full of helpful information. They also regularly make up untrue information and pass it off as fact.

Images made by AI have already shown up all over the web, including in Google search results, presented as real images. They’ve also started appearing in U.S. election campaigns. Last year, an ad released by Florida Gov. Ron DeSantis’s campaign used what appeared to be AI-generated images of Donald Trump hugging former White House coronavirus adviser Anthony S. Fauci. It’s unclear which image generator was used to make the images.

Other companies, including Google and Photoshop maker Adobe, have said they will also use watermarks in images generated by their AI tools. But the technology isn’t a magic cure for the spread of fake AI images. Visible watermarks can be easily cropped or edited out. Embedded, cryptographic ones, which are not visible to the human eye, can be distorted simply by flipping the image or changing its color.
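As a rough illustration of why invisible watermarks are fragile, the sketch below hides a bit pattern in an image’s least-significant bits and then shows that a simple horizontal flip scrambles it. This is a hypothetical toy scheme chosen only to make the point; it is not the watermarking used by OpenAI, Google or Adobe, whose embedded marks are more sophisticated but, as noted above, can still be disrupted by simple edits.

```python
# Toy illustration: a least-significant-bit (LSB) watermark and how a trivial
# edit destroys it. Hypothetical example only, not any company's real scheme.
import numpy as np

rng = np.random.default_rng(0)

def embed(image, bits):
    # Clear the least-significant bit of every pixel, then write the watermark bit into it.
    return (image & np.uint8(0xFE)) | bits.astype(np.uint8)

def extract(image):
    # Read back the least-significant bit of every pixel.
    return image & np.uint8(1)

# A fake 64x64 grayscale "photo" and a random 0/1 watermark pattern.
photo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

marked = embed(photo, watermark)
print("intact copy matches:", np.array_equal(extract(marked), watermark))  # True

# The kind of trivial edit mentioned above: flip the image horizontally.
flipped = marked[:, ::-1]
recovered = extract(flipped)
print("bits still in place after flip: {:.0%}".format((recovered == watermark).mean()))  # roughly 50%
```

After the flip, the extracted bits no longer line up with the original pattern, so the mark is effectively unreadable even though the picture looks unchanged to a viewer.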

Tech companies say they are working to fix this and make their watermarks tamper-proof, but for now none seems to have figured out how to do that effectively.

Cat Zakrzewski contributed to this report.
