Truth, Lies, and Automation – How Language Models Could Change Disinformation

Georgetown University’s Walsh School of Foreign Service, Center for Security and Emerging Technology – Truth, Lies, and Automation: How Language Models Could Change Disinformation. By Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova. May 2021. “Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness…

Mitigating the dangers of automation in disinformation is challenging. Since GPT-3’s writing blends in so well with human writing, the best way to thwart adversary use of systems like GPT-3 in disinformation campaigns is to focus on the infrastructure used to propagate the campaign’s messages, such as fake accounts on social media, rather than on determining the authorship of the text itself. Such mitigations are worth considering because our study shows there is a real prospect of automated tools generating content for disinformation campaigns. In particular, our results are best viewed as a low-end estimate of what systems like GPT-3 can offer. Adversaries who are unconstrained by ethical concerns and buoyed with greater resources and technical capabilities will likely be able to use systems like GPT-3 more fully than we have, though it is hard to know whether they will choose to do so. In particular, with the right infrastructure, they will likely be able to harness the scalability that such automated systems offer, generating many messages and flooding the information landscape with the machine’s most dangerous creations. Our study shows the plausibility—but not inevitability—of such a future, in which automated messages of division and deception cascade across the internet. While more developments are yet to come, one fact is already apparent: humans now have able help in mixing truth and lies in the service of disinformation…”