Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

How to Identify and Investigate AI Audio Deepfakes, a Major 2024 Election Threat

Global Investigative Journalism Network: “In October 2023, an AI-synthesized impersonation of the voice of an opposition leader helped swing the election in Slovakia to a pro-Russia candidate. Another AI audio fake was layered onto a real video clip of a candidate in Pakistan, supposedly calling on voters to boycott the general election in February 2024. Ahead of the Bangladeshi elections in January, several fakes created with inexpensive, commercial AI generators gained traction with voters by smearing rivals of the incumbent prime minister. And, in the US, an audio clip masquerading as the voice of President Joe Biden urged voters not to vote in one key state’s primary election.

Experts agree that the historic election year of 2024 is set to be the year of AI-driven deepfakes, with potentially disastrous consequences for at-risk democracies. Recent research suggests that, in general, about half of the public cannot tell the difference between real and AI-generated imagery, and that voters cannot reliably detect speech deepfakes — and the technology has only improved since then.

Deepfakes range from subtle image changes using synthetic media and voice cloning of digital recordings to hired digital avatars and sophisticated “face-swaps” built with customized tools. (The overwhelming majority of deepfake traffic on the internet is driven by misogyny and personal vindictiveness: to humiliate individual women with fake sexualized imagery — but this tactic is also increasingly being used to attack women journalists.)…”