
Global policymakers don’t understand AI enough to regulate it.

The Print: “Tech companies must step up now. When software is built to prioritise speed over safety, its creators delay dealing with possible negative consequences. On 11 April 2023, China released a comprehensive draft of measures to regulate generative artificial intelligence, which can automatically turn basic user inputs into creative outputs such as text, images or videos. AI has dominated news cycles for its far-reaching consequences: proponents argue that it can bring industry-altering efficiencies, while sceptics worry about its capacity to cause job losses, spread misinformation and violate copyright. The Chinese draft measures follow a recent public call to pause research into generative AI so that regulators get enough “time to catch up”. There is as yet no international consensus on how generative AI should be regulated. Several multilateral organisations, such as the International Telecommunication Union and the Organisation for Economic Co-operation and Development, have released non-binding AI guidelines. Jurisdictions such as Italy and Spain are investigating the pitfalls of generative AI, and the US National Institute of Standards and Technology has released a voluntary AI Risk Management Framework. China intends to hold AI service providers responsible for machine-generated content, including filtering inappropriate content, auditing user prompts, verifying user identities, preventing algorithmic discrimination and blocking the generation of “fake” news…”
