08/15/2024

See Why AI Detection Tools Can Fail to Catch Election Deepfakes

The science of detecting manipulations is in its early stages

Content created by artificial intelligence is flooding the web, making it less clear than ever what's real this election. From former president Donald Trump falsely claiming that images from a Vice President Kamala Harris rally were AI-generated to a spoofed robocall of President Joe Biden telling voters not to cast their ballots, the rise of AI is fueling rampant misinformation.

Deepfake detectors have been marketed as a silver bullet for identifying AI fakes, or "deepfakes." Social media giants use them to label fake content on their platforms. Government officials are pressuring the private sector to pour millions into building the software, fearing deepfakes could disrupt elections or allow foreign adversaries to incite domestic turmoil.

But the science of detecting manipulated content is in its early stages. An April study by the Reuters Institute for the Study of Journalism found that many deepfake detector tools can be easily duped with simple software tricks or editing techniques.

Read the complete article from The Washington Post.
