
With elections looming worldwide here’s how to identify and investigate AI audio deepfakes

Nieman Lab: “…Media manipulation investigators told GIJN that fake AI-generated audio simulations — in which a real voice is cloned by a machine learning tool to state a fake message — could emerge as an even bigger threat to elections in 2024 and 2025 than fabricated videos. One reason is that, like so-called cheapfakes, audio deepfakes are easier and cheaper to produce. (Cheapfakes have already been widely used in election disinformation, and involve video purportedly from one place that was actually from another, and where short audio clips are crudely spliced into videos, or the closed captions blatantly edited.)

Another advantage they offer bad actors is that they can be used in automated robocalls to target (especially) older, highly active voters with misinformation. And tracing the origin of robocalls remains a global blind spot for investigative reporters. (The overwhelming majority of deepfake traffic on the internet is driven by misogyny and personal vindictiveness: to humiliate individual women with fake sexualized imagery — but this tactic is also increasingly being used to attack women journalists.)

“AI audio fakes can pose a significant threat,” emphasizes Olga Yurkova, journalism trainer and cofounder of StopFake.org, an independent Ukrainian fact-checking organization. “They are easier and cheaper to create than deepfake videos, and there are fewer contextual clues to detect with the naked eye. Also, they have a greater potential to spread, for example, in WhatsApp chats.” She adds: “Analysis is more complex, and voice generation tools are more advanced than video generation tools. Even with voice samples and spectral analysis skills, it takes time, and there is no guarantee that the result will be accurate. In addition, there are many opportunities to fake audio without resorting to deepfake technology.”

Data journalism trainer Samantha Sunne says newsrooms need constant vigilance in elections — both for the sudden threat of comparatively under-researched AI audio fakes, and because “deepfake technology is changing quickly and so are the detection and monitoring tools.”

Fact-checking organizations and some pro-democracy NGOs have mobilized to help citizen groups and newsrooms analyze suspicious viral election content. For instance, a human rights empowerment nonprofit called Witness conducted a pilot Deepfakes Rapid Response project in the past year, using a network of about 40 research and commercial experts to analyze dozens of suspicious clips. In an interview with GIJN, the manager of the Rapid Response project, Shirin Anlen, said AI audio fakes appear to be both the easiest to make and the hardest to detect — and that they seem tailor-made for election mischief…”
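Yurkova’s mention of spectral analysis points to a practical first step a newsroom can take before escalating a clip to experts: rendering its spectrogram for manual inspection. Below is a minimal sketch, assuming the open-source librosa and matplotlib Python libraries; the filename suspect_clip.wav is a placeholder, not something from the article. Abrupt vertical seams in the image can suggest crude splices, and unnaturally smooth or band-limited high frequencies sometimes accompany synthetic speech, though no single visual cue is proof of fakery.

```python
# Minimal sketch: visual spectral inspection of a suspect audio clip.
# Assumes librosa and matplotlib are installed; "suspect_clip.wav" is
# a hypothetical placeholder path, not a file from the article.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load the clip at its native sample rate (sr=None avoids resampling).
y, sr = librosa.load("suspect_clip.wav", sr=None)

# Short-time Fourier transform -> magnitude spectrogram in decibels.
stft = librosa.stft(y)
db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

# Plot on a log-frequency axis for manual review. Look for abrupt
# vertical seams (possible splice points) and oddly clean or missing
# high-frequency detail (sometimes seen in synthetic speech).
fig, ax = plt.subplots(figsize=(10, 4))
img = librosa.display.specshow(db, sr=sr, x_axis="time", y_axis="log", ax=ax)
ax.set_title("Spectrogram of suspect clip (manual inspection aid)")
fig.colorbar(img, ax=ax, format="%+2.0f dB")
plt.tight_layout()
plt.show()
```

This is an inspection aid, not a detector: as the experts quoted above stress, even skilled spectral analysis takes time and offers no guarantee of an accurate verdict, so suspicious clips should still go to specialist verification networks.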
