In warfare, the scenario emerging is AI versus AI. Humans would fall short

by Atul Pant

The United States Air Force Chief, Gen. David L. Goldfein, created a sensation at the Dubai Airshow in November 2019 by revealing plans for automation of the kill chain during lethal engagements.

In this design, humans would enter the picture only at the last stage of target engagement, while the rest of the kill chain – detection of objects, their identification, the decision to initiate lethal engagement and the assignment of targets to weapons platforms – would be fully automated. This development is in sync with the need for quicker response times against future threats.
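
As a rough illustration of this division of labour, a minimal sketch of such a pipeline is given below – every stage automated except the final engagement decision. All names, data structures and placeholder logic are assumptions made for illustration, not a description of any fielded system:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A detected object being followed by sensors (illustrative only)."""
    track_id: int
    position: tuple
    classification: str = "unknown"
    hostile: bool = False

def detect(sensor_feed):
    """Stage 1: detection - turn raw sensor returns into tracks."""
    return [Track(track_id=i, position=pos) for i, pos in enumerate(sensor_feed)]

def identify(track):
    """Stage 2: identification - classify the track (stand-in logic)."""
    track.classification = "fast_mover"   # placeholder for a real classifier
    track.hostile = True                  # placeholder for IFF / threat rules
    return track

def should_engage(track):
    """Stage 3: decision to initiate lethal engagement."""
    return track.hostile

def assign_platform(track, platforms):
    """Stage 4: assignment of the target to a weapons platform."""
    return platforms[0]                   # placeholder for real optimisation

def human_authorises(track, platform):
    """Stage 5: the only human step - final engagement authority."""
    answer = input(f"Engage track {track.track_id} with {platform}? [y/N] ")
    return answer.strip().lower() == "y"

def kill_chain(sensor_feed, platforms):
    """Stages 1-4 run unattended; a human gates only the final stage."""
    for track in map(identify, detect(sensor_feed)):
        if should_engage(track):
            platform = assign_platform(track, platforms)
            if human_authorises(track, platform):
                print(f"Engaging track {track.track_id} with {platform}")

# Example run with two fabricated sensor returns and one platform.
kill_chain(sensor_feed=[(12.0, 4.5), (80.3, 9.1)], platforms=["battery_1"])
```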

To quote Goldfein, “In most kill chains today there is a human in every step of the loop, but the future would require humans on the loop – not in the loop, making final decisions for lethal or non-lethal fires.”

It is also argued that, in future, even this last stage would be automated, with the human merely ‘on the loop’ – overseeing the operation with veto power.

Rapidly advancing technology has, of late, begun to radically transform long-standing concepts and beliefs. Warfare is no exception, with some of its seemingly enduring concepts changing under the impact of technology. Keeping humans in the chain of killing and destruction is one such issue. The term for this – used widely since the proliferation of computers and networking in warfare – is keeping the ‘human in the loop’.

Leaving the decision to kill or destroy entirely to machines is invariably seen as cold-blooded and a gross violation of human ethics. Even the military community is generally strongly opposed to the idea. Yet in certain cases such automation exists even now, with humans on the loop.

In air defence and missile defence, for example, the time available to make a decision is extremely limited, especially against modern high-speed aircraft and missiles, and most of the time it is an ‘either them or us’ situation.

Here, computers are often trusted to open lethal fire, with a human operator sitting ready at the ‘abort’ button to exercise veto power. A majority of veterans and doyens in the military still contend that a human will always remain involved in the decision to kill, and that this will never change.
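
The abort-button arrangement described above can be sketched as follows: the engagement proceeds on its own unless the operator vetoes within a short window. The function name, the five-second window and the use of standard input as the ‘abort button’ are all assumptions for illustration; a real system would use a dedicated control and set the window by the threat’s closing speed:

```python
import select
import sys
import time

def engage_with_veto(track_id: int, veto_window_s: float = 5.0) -> bool:
    """Fire automatically unless the operator aborts within the window.

    Illustrative only: pressing Enter stands in for the abort button,
    and polling stdin with select() assumes a POSIX terminal.
    """
    print(f"Track {track_id}: engaging in {veto_window_s:.0f}s - press Enter to ABORT")
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:
        # Poll stdin briefly so the countdown keeps running.
        ready, _, _ = select.select([sys.stdin], [], [], 0.1)
        if ready:
            sys.stdin.readline()
            print(f"Track {track_id}: engagement aborted by operator")
            return False
    print(f"Track {track_id}: no veto received, engagement proceeding")
    return True

engage_with_veto(track_id=7)
```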

The game-changer in recent years has been artificial intelligence (AI), which is no longer unfamiliar and is evolving to become more capable, credible and trustworthy with time. Better electronics, higher computational power and better sensors are speeding up military activities and compressing the Observe, Orient, Decide, Act (OODA) cycle, leaving ever less time for decision making and action. Better propellants, which give weapons higher speeds, and long-range kinetic killers are further shortening the time available for defence.

With miniaturised electronics and advanced software, multitudes of deceptive weapon designs are also evolving, often in the garb of something else. The number of entities that could pose a threat is increasing by the day and will multiply in future.

Human capacity to tackle all such threats effectively in a complex threat environment will fall well short of what is needed.

Autonomy brought about by AI has therefore become an inescapable feature, as it provides what humans could flounder at – timely analysis of a threat and effective action to counter it in a complex environment. Humans would eventually be removed even from the final stages of the threat-neutralising cycle.

AI would perform all the activities in the OODA cycle and also choose the right weapons to tackle the threat, where needed.
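
Schematically, each pass through the cycle would then be machine-executed end to end, with weapon selection folded into the Act step. The sketch below is purely notional; every function and the stub sensor are placeholders, not a real architecture:

```python
class StubSensor:
    """Placeholder sensor producing a canned return (illustration only)."""
    def read(self):
        return {"bearing": 42.0, "speed": 900.0}

def observe(sensors):
    """Observe: gather raw returns from every sensor."""
    return [s.read() for s in sensors]

def orient(raw_returns):
    """Orient: fuse returns into an assessed threat picture."""
    return [{"id": i, "threat": True} for i, _ in enumerate(raw_returns)]

def decide(picture):
    """Decide: select which assessed threats to act on."""
    return [entry for entry in picture if entry["threat"]]

def act(threats, weapons):
    """Act: match each threat to a weapon and engage. zip() truncates
    to the shorter list, so unmatched threats simply wait a cycle."""
    for threat, weapon in zip(threats, weapons):
        print(f"Threat {threat['id']} assigned to {weapon}")

def ooda_cycle(sensors, weapons):
    """One fully automated pass through Observe-Orient-Decide-Act."""
    act(decide(orient(observe(sensors))), weapons)

ooda_cycle([StubSensor(), StubSensor()], weapons=["interceptor_A", "interceptor_B"])
```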

On the other hand, AI is also making offensive weapons formidable – deadlier, speedier, smarter and more difficult to counter. Against such weapons, humans would invariably fall short. The scenario emerging in warfare is thus AI versus AI.

Soon the only role left for the human would be to exercise good judgement in vetoing or aborting an engagement where needed.

Lethal Autonomous Weapon Systems (LAWS) are already emerging as products of AI and other associated modern technologies mentioned earlier. LAWS have been described as the third revolution in warfare, after gunpowder and nuclear weapons.

The debates on LAWS, however, focus mainly on the ethics of warfare, including responsibility and accountability if things go wrong. Would responsibility and accountability rest with the system designer, the software designer, the programmer, the operator, the commander, or with no human at all?

In spite of all such apprehensions and criticisms, the United States Air Force (USAF) has, as its chief's statement shows, moved a step closer to full automation of the kill chain. Roboticist Ronald C. Arkin believes that as AI improves, its wartime decision making is likely to be superior to that of humans, and more ‘humane’.

It is also pertinent to note that AI-infused weapons are not going to remain the forte of militaries alone; non-state actors will acquire them too. There is no dearth of malevolently inclined, talented individuals ready to work for unscrupulous organisations or terrorist outfits.

Considering the loss of tactical advantage that would result from not adopting automation and autonomous weapon systems, militaries will invariably be obliged to adopt full automation of the kill chain while keeping humans on the loop.

As far as national-level stances and policies are concerned, until about halfway through the previous decade, ‘human in the loop’ was a strongly held position among militaries as well as policymakers. Both the US and China had earlier issued statements implying a continuation of such a policy in the future development of the means of warfare.

However, since 2016, both countries have started indicating a change of policy.

Robert Work, the 32nd US Deputy Secretary of Defense, stated that while the US will “always prioritise human control,” it will also allow war machines to “independently compose and select among different courses of action to accomplish assigned goals based on its knowledge and understanding of the world, itself, and the situation.”

China too had earlier called for a legally binding protocol on the non-use of LAWS, on the lines of the 1995 protocol on blinding laser weapons, but changed its outlook in 2016 and called for the responsible use of LAWS in accordance with the United Nations Charter and the laws of armed conflict.

At the modern, accelerated pace of technological development, things are evolving rapidly and unforeseeably. With the decision cycle quickening and the number of threats increasing, the loss of tactical advantage – or the inability to counter threats effectively – that comes from keeping a human in the loop is going to be a crucial issue, one that needs serious thought from policymakers, strategists and militaries.

Such a changeover to automation of the kill cycle, with humans on the loop, seems inevitable in the not-too-distant future.