How to detect AI deepfakes

  • Science
  • April 5, 2024

AI-generated images are everywhere. They’re being used to make nonconsensual pornography, muddy the truth during elections and promote products on social media using celebrity impersonations.

When Princess Catherine released a video last month disclosing that she had cancer, social media went abuzz with the latest baseless claim that artificial intelligence was used to manipulate the video. Both BBC Studios, which shot the video, and Kensington Palace denied AI was involved. But it didn’t stop the speculation.

Experts say the problem is only going to get worse. Today, the quality of some fake images is so good that they’re nearly impossible to distinguish from real ones. In one prominent case, a finance manager at a Hong Kong bank wired about $25.6 million to fraudsters who used AI to pose as the worker’s bosses on a video call. And the tools to make these fakes are free and widely available.

A growing group of researchers, academics and start-up founders are working on ways to track and label AI content. Using a variety of methods and forming alliances with news organizations, Big Tech companies and even camera manufacturers, they hope to keep AI images from further eroding the public’s ability to understand what’s true and what isn’t.

“A year ago, we were still seeing AI images and they were goofy,” said Rijul Gupta, founder and CEO of DeepMedia AI, a deepfake detection start-up. “Now they’re perfect.”

Here’s a rundown of the major methods being developed to hold back the AI image apocalypse.

Digital watermarks aren’t new. They’ve been used for years by record labels and movie studios that want to protect their content from piracy. But they’ve become one of the most popular ideas for dealing with the wave of AI-generated images.

When President Biden signed a landmark executive order on AI in October, he directed the government to develop standards for companies to follow in watermarking their images.

Some companies already put visible labels on images made by their AI generators. OpenAI affixes five small colored boxes in the bottom-right corner of images made with its DALL-E image generators. But the labels can easily be cropped or photoshopped out of the image. Other popular AI image-generation tools like Stable Diffusion don’t even add a label.

So the industry is focusing more on unseen watermarks that are baked into the image itself. They’re not visible to the human eye but could be detected by, say, a social media platform, which would then label them before viewers see them.
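
To make the idea concrete, here is a minimal sketch of an invisible watermark using a deliberately naive least-significant-bit (LSB) scheme, written in Python for illustration. The function names and the eight-bit tag are hypothetical, and production watermarks are far more sophisticated; the point is only to show how a mark can be hidden in pixel values, how a platform could check for it, and how easily a weak scheme breaks.

    # Minimal illustration of an invisible watermark using a naive LSB scheme.
    # Hypothetical example only; real watermarks are far more robust.
    import numpy as np

    WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

    def embed_watermark(pixels: np.ndarray) -> np.ndarray:
        """Hide the tag in the least-significant bit of the first few pixels."""
        flat = pixels.flatten().copy()
        n = len(WATERMARK_BITS)
        flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS  # overwrite only the lowest bit
        return flat.reshape(pixels.shape)

    def detect_watermark(pixels: np.ndarray) -> bool:
        """Check whether the expected tag is present in the lowest bit plane."""
        n = len(WATERMARK_BITS)
        return bool(np.array_equal(pixels.flatten()[:n] & 1, WATERMARK_BITS))

    image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # toy grayscale image
    marked = embed_watermark(image)
    print(detect_watermark(marked))        # True: the platform's check finds the mark
    print(detect_watermark(marked[::-1]))  # almost certainly False: flipping breaks it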

They’re far from perfect, though. Earlier versions of watermarks could be easily removed or tampered with simply by changing the colors in an image or even flipping it on its side. Google, which provides image-generation tools to its consumer and business customers, said last year that it had developed a watermarking technology called SynthID that could withstand tampering.

But in a February paper, researchers at the University of Maryland showed that the watermarking approaches developed by Google and other tech giants for their AI images could be beaten.

“That is not going to solve the problem,” said Soheil Feizi, one of the researchers.

Developing a robust watermarking system that Big Tech and social media platforms agree to abide by should help significantly reduce the problem of deepfakes misleading people online, said Nico Dekens, director of intelligence at cybersecurity company ShadowDragon, a start-up that makes tools to help people run investigations using images and social media posts from the internet.

“Watermarking will definitely help,” Dekens said. But “it’s certainly not a waterproof solution, because anything that’s digitally pieced together can be hacked or spoofed or altered,” he said.

On top of watermarking AI images, the tech industry has begun talking about labeling real images as well, layering data into each pixel right when a photo is taken by a camera to provide a record of what the industry calls its “provenance.”

Even before OpenAI released ChatGPT in late 2022 and kicked off the AI boom, camera makers Nikon and Leica began developing ways to imprint special “metadata” recording when and by whom a photo was taken directly into the image at the moment of capture. Canon and Sony have begun similar programs, and Qualcomm, which makes computer chips for smartphones, says it has a similar project to add metadata to images taken on phone cameras.

News organizations like the BBC, Associated Press and Thomson Reuters are working with the camera companies to build systems to check for the authenticating data before publishing photos.
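
In simplified form, the flow works like this: the camera signs a record of the image at the moment of capture, and a publisher verifies that record before running the photo. The Python sketch below illustrates the concept with a hash of the pixel data and an HMAC signature; the key handling, field names and record format are invented for illustration and do not reflect the C2PA standard or any manufacturer's actual implementation.

    # Simplified illustration of capture-time provenance: the camera signs a record of
    # the image, and a publisher verifies it before trusting the photo. Hypothetical
    # field names and key handling; not any vendor's real system.
    import hashlib
    import hmac
    import json

    CAMERA_KEY = b"hypothetical-camera-secret"  # in reality, a hardware-protected key

    def sign_capture(image_bytes: bytes, camera: str, taken_at: str) -> dict:
        """Build the provenance record a camera would attach at capture time."""
        record = {
            "camera": camera,
            "taken_at": taken_at,
            "sha256": hashlib.sha256(image_bytes).hexdigest(),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_capture(image_bytes: bytes, record: dict) -> bool:
        """Publisher-side check: valid signature and pixels that still match the hash."""
        unsigned = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, record["signature"])
                and hashlib.sha256(image_bytes).hexdigest() == record["sha256"])

    photo = b"raw image bytes from the sensor"
    provenance = sign_capture(photo, "ExampleCam X1", "2024-04-05T12:00:00Z")
    print(verify_capture(photo, provenance))            # True: untouched photo
    print(verify_capture(photo + b"edit", provenance))  # False: the pixels changed

The whole scheme depends on the signing step staying out of attackers' hands, which is the weakness described below.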

Social media sites could pick up the system, too, labeling real and fake images as such, helping users know what they’re looking at, similar to how some platforms label content that might contain anti-vaccine disinformation or government propaganda. The sites could even prioritize real content in algorithmic recommendations or allow users to filter out AI content.

But building a system where real images are verified and labeled on social media or a news website might have unintended effects. Hackers could figure out how the camera companies apply the metadata to the image and add it to fake images, which would then get a pass on social media because of the fake metadata.

“It’s dangerous to believe there are actual solutions against malignant attackers,” said Vivien Chappelier, head of research and development at Imatag, a start-up that helps companies and news organizations put watermarks and labels on real images to ensure they aren’t misused. But making it harder to accidentally spread fake images, or giving people more context about what they’re seeing online, is still helpful.

“What we are trying to do is raise the bar a bit,” Chappelier said.

Adobe — which has long sold photo- and video-editing software and is now offering AI image-generation tools to its customers — has been pushing for a standard for AI companies, news organizations and social media platforms to follow in identifying and labeling real images and deepfakes.

AI images are here to stay and different methods will have to be combined to try to control them, said Dana Rao, Adobe’s general counsel.

Some companies, including Reality Defender and Deep Media, have built tools that detect deepfakes based on the foundational technology used by AI image generators.

By showing tens of millions of images labeled as fake or real to an AI algorithm, the model learns to distinguish between the two, building an internal “understanding” of what elements might give away an image as fake. Images are run through this model, and if it detects those elements, it pronounces the image AI-generated.
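
A stripped-down version of that idea might look like the sketch below: a tiny Python/PyTorch binary classifier trained to separate real from fake. The network, the random batch and the single training step are placeholders invented for illustration; neither company has published its architecture, and real detectors train on enormous labeled datasets.

    # Minimal sketch of a deepfake detector: a binary classifier trained on images
    # labeled real or fake. The tiny network and random batch are placeholders only.
    import torch
    import torch.nn as nn

    class DeepfakeDetector(nn.Module):
        """Small convolutional net that outputs the probability an image is AI-generated."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.sigmoid(self.classifier(self.features(x).flatten(1)))

    model = DeepfakeDetector()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    # One placeholder training step on random data; a real system loops over a huge
    # labeled dataset of real photos and AI-generated images.
    images = torch.rand(8, 3, 64, 64)             # batch of 8 RGB images
    labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"training loss: {loss.item():.3f}")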

The tools can also highlight which parts of the image the AI thinks give it away as fake. While humans might classify an image as AI-generated based on a weird number of fingers, the AI often zooms in on a patch of light or shadow that it deems doesn’t look quite right.
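
One common way to produce that kind of highlight, sketched below with a stand-in model, is a gradient-based saliency map: compute how sensitive the "fake" score is to each input pixel and surface the regions it depends on most. Commercial tools use their own, typically unpublished, explanation methods, so this is an illustration of the general idea, not of any vendor's product.

    # Hedged sketch of highlighting suspicious regions with input-gradient saliency.
    # The stand-in model is a placeholder; in practice it would be a trained detector.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1), nn.Sigmoid())

    image = torch.rand(1, 3, 64, 64, requires_grad=True)
    score = model(image)                   # probability the image is fake
    score.sum().backward()                 # gradient of the score w.r.t. every pixel
    saliency = image.grad.abs().max(dim=1).values[0]  # per-pixel influence, (64, 64)
    row, col = divmod(int(saliency.argmax()), saliency.shape[1])
    print(f"fake score {score.item():.2f}; most influential pixel at row {row}, col {col}")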

There are other things to look for, too, such as whether a person has a vein visible in the anatomically correct place, said Ben Colman, founder of Reality Defender. “You’re either a deepfake or a vampire,” he said.

Colman envisions a world where scanning for deepfakes is just a regular part of a computer’s cybersecurity software, in the same way that email applications like Gmail now automatically filter out obvious spam. “That’s where we’re going to go,” Colman said.

But it’s not easy. Some warn that reliably detecting deepfakes will probably become impossible, as the tech behind AI image generators changes and improves.

“If the problem is hard today, it will be much harder next year,” said Feizi, the University of Maryland researcher. “It will be almost impossible in five years.”

Even if all these methods are successful and Big Tech companies get fully on board, people will still need to be critical about what they see online.

“Assume nothing, believe no one and nothing, and doubt everything,” said Dekens, the open-source investigations researcher. “If you’re in doubt, just assume it’s fake.”

With elections coming up in the United States and other major democracies this year, the tech may not be ready for the amount of disinformation and AI-generated fake imagery that will be posted online.

“The most important thing they can do for these elections coming up now is tell people they shouldn’t believe everything they see and hear,” said Rao, the Adobe general counsel.
