
Uncovering Similarities in News Organisations’ AI Guidelines

Oxford Internet Institute: “In the ever-evolving landscape of news reporting, the integration of Artificial Intelligence (AI) has recently taken centre stage. A significant catalyst for this transformation was the public debut of ChatGPT, a Large Language Model (LLM) by US-based OpenAI, in November 2022. This development has prompted many news organisations to turn their attention to AI, teetering on the edge of anticipation and apprehension. While the full impact of AI in journalism is still unfolding, many publishers are actively exploring the potential of the technology. Yet, many of these uses carry risks. AI-powered recommendation engines can discriminate against certain groups of users. Texts produced by LLMs are prone to factual errors and distortions, while AI-generated images may be mistaken for real by audiences. The list of problems is long. If newsrooms decide to publish AI output without taking precautions, they may be putting their journalistic credibility at risk. In response, news organisations have started to draw up AI guidelines as one way of countering some of these issues and of ensuring the ethical use of AI. Yet, despite some pioneering work studying the content of such guidelines, questions remain. Amid calls to regulate AI more tightly, including in the news, how advanced are current efforts? Where do efforts converge or diverge, and what are the blind spots? We set out to study this question, looking at AI guidelines from 52 publishers in 12 countries around the world. In this post, we share our perspective on the key findings and implications of our new pre-print (which has not yet been peer-reviewed).”
