A.I. Joe: The Dangers of Artificial Intelligence and the Military

Public Citizen: “The U.S. Department of Defense (DOD) and the military-industrial complex are rushing to embrace an artificial intelligence (AI)-driven future. There’s nothing particularly surprising or inherently worrisome about this trend. AI is already in widespread use and evolving generative AI technologies are likely to suffuse society, remaking jobs, organizational arrangements and machinery. At the same time, AI poses manifold risks to society and military applications present novel problems and concerns, as the Pentagon itself recognizes. This report outlines some of the primary concerns around military applications of AI use. It begins with a brief overview of the Pentagon’s AI policy. Then it reviews:

  • The grave dangers of autonomous weapons – “killer robots” programmed to make their own decisions about use of lethal force.
  • The imperative of ensuring that decisions to use nuclear weapons can be made only by humans, not automated systems.
  • How AI intelligence processing can increase, not diminish, the use of violence.
  • The risks of using deepfakes on the battlefield.

The report then reviews how military AI start-ups are crusading for Pentagon contracts, including by following the tried-and-true tactic of relying on revolving door relationships. The report concludes with a series of recommendations:

  1. The United States should pledge not to develop or deploy autonomous weapons, and should support a global treaty banning such weapons.
  2. The United States should codify the commitment that only humans can launch nuclear weapons.
  3. Deepfakes should be banned from the battlefield.
  4. Spending for AI technologies should come from the already bloated and wasteful Pentagon budget, not additional appropriations.”
