Human decisions still needed in artificial intelligence for war
Author: Denise Garcia, Northeastern University
US President Joe Biden should not heed the advice of the National Security Commission on Artificial Intelligence (NSCAI) to reject calls for a global ban on autonomous weapons. Instead, Biden should work on an innovative approach to prevent humanity from relinquishing its judgement to algorithms during war.
The NSCAI maintains that a global treaty prohibiting the development, deployment and use of artificial intelligence (AI)-enabled weapons systems is not in the interests of the United States and would harm international security. It argues that Russia and China are unlikely to comply with such a treaty, and that a global ban would increase pressure on law-abiding nations while enabling others to use AI military systems in unsafe and unethical ways.
This is an unsophisticated way of thinking through a complex problem. Negotiations and conversations at the United Nations have been occurring on this matter since 2014. The voices of AI scientists, Nobel Peace Laureates and civil society were not represented as part of the NSCAI’s advice. If science argues against AI weapons, it is difficult to maintain that their development and use would benefit US interests and international security.
Instead of following the NSCAI’s advice, President Biden could take the lead and create an innovative international treaty requiring human control over AI military systems. This means that AI could continue to be used in some aspects of military operations including mobility, surveillance and intelligence, homing, navigation, interoperability and target image discrimination. But when it comes to target acquisition and the decision to kill, states would be required by the treaty to retain human decision-making. This positive obligation should be legally binding.
The militarisation of AI seems to be inescapable, and all the major powers are well advanced in their pursuit of this technology. The United States seems to have no choice but to confront this evolving and volatile reality. Yet this does not mean that the licence to kill should be delegated to an algorithm.
The NSCAI fumbles in its argument supporting AI for war when it suggests that human control should be maintained over the activation of nuclear weapon systems. It recommends that the President seek commitments to human control from Russia and China. But this is unambitious for a superpower poised to regain leadership on the world stage. Additionally, the NSCAI's advice seems misaligned with President Biden's large-scale and bold 'diplomacy first' and 'America is back' goals.
Worryingly, throughout the report the NSCAI recommends millions of dollars be allocated to develop AI war capacities. Tax dollars for rebuilding US diplomacy and the State Department should be given top priority.
The current US approach contrasts with that of the European Union, which is promoting the use of AI and new technologies to address global problems rather than create new ones. Peace has deteriorated markedly in the last decade. The global economic impact of war and violence was US$14.5 trillion in purchasing power parity terms in 2019, according to the Institute for Economics and Peace. The world's financial and intellectual resources should be put towards creating more structures for cooperation and frameworks for strengthening world peace and security.
Instead, the NSCAI outlines a duplication of an expensive and flawed strategy adopted during the Cold War: embrace competition and fill US arsenals with destructive weapon systems. This is no moment for warmongering and peddling failed strategies. Most countries around the world have called for international regulations to govern the use of AI in war by retaining human decision-making over the use of force.
Regrettably, the NSCAI’s report only mentions international law four times in its 756 pages, which shows a lamentable disregard for the rule of law. But it mentions ‘values’ 161 times. If the democratic US values of freedom, privacy, liberty and civil rights are to be upheld, an international treaty requiring human control of decisions over life and death in war must be established to advance those values.
The NSCAI has missed the chance to outline a genuinely transformative 21st-century blueprint for the use of AI for the common good, one which would see the United States lead as the champion state. The NSCAI report advises President Biden to embrace the AI ‘race’. Yet such competition and arms races have not served humanity well in the past. During the Cold War, the same ideas led to the accumulation of 70,000 nuclear weapons. Many were later deactivated and destroyed, leaving the world today with 13,410 warheads. Their exceedingly injurious humanitarian, public health and environmental consequences — along with exorbitant maintenance costs — render them practically unworkable and useless in a densely populated world.
There is a fleeting opportunity for the United States to lead the charge in AI weaponry management and to encourage developments that will be advantageous both for itself and for humanity. This small window of opportunity exists because President Biden inspires cooperation.
There are innumerable instances where China, Russia and the United States have cooperated meaningfully: the International Space Station, the 1968 Nuclear Non-Proliferation Treaty, the banning of chemical and biological warfare and the 2015 Paris Agreement. The United States should capitalise on these concrete instances of cooperation and seek out many more. But the window of opportunity for this kind of innovative out-of-the-box multilateral diplomacy may be short and demands urgent action now.
Denise Garcia is Professor at the College of Social Sciences and Humanities, Northeastern University, Boston.

The post Human decisions still needed in artificial intelligence for war first appeared on East Asia Forum.