The Algorithmic Sentinel: India’s Push for Autonomous Weaponry and the Ghost of Human Oversight

The Defence Research and Development Organisation (DRDO) and various public sector undertakings have formally embarked on the development of lethal autonomous weapons systems (LAWS), according to a landmark report tabled in the Lok Sabha on 30 March 2026.
This revelation signals a significant shift in India’s military strategy, integrating artificial intelligence across the full spectrum of warfare. Beyond lethal strike capabilities, the initiative encompasses command-and-control systems, cybersecurity, and even human-behaviour analysis to bolster national security.
Technological advancements highlighted in the report include underwater autonomous vehicles engineered for sophisticated target classification and AI-enabled missile systems. One of the more controversial developments is the Face Recognition System under Disguise (FRSD), designed to identify individuals at sensitive locations even when they attempt to obscure their features.
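To give a sense of how recognition systems of this kind are typically structured, the sketch below shows a deliberately simplified, hypothetical embedding-matching step of the sort a face-recognition pipeline might use. The function names, the 0.6 threshold, and the randomly generated "enrolled" vectors are illustrative assumptions only; the actual FRSD design has not been published, and the hard part of disguise robustness lies in the upstream embedding model, not in this matching step.

```python
from __future__ import annotations

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(probe_embedding: np.ndarray,
             gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the best-matching enrolled identity, or None if nothing clears the threshold.

    A disguise-robust system would produce `probe_embedding` from a model trained to
    rely on features that survive occlusion (e.g. the periocular region); that model
    is outside the scope of this sketch.
    """
    best_id, best_score = None, threshold
    for person_id, enrolled in gallery.items():
        score = cosine_similarity(probe_embedding, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 128-dimensional enrolled embeddings standing in for a watchlist.
    gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
    # A partially occluded probe: the enrolled vector plus noise standing in for a disguise.
    probe = gallery["person_a"] + 0.3 * rng.normal(size=128)
    print(identify(probe, gallery))  # expected: "person_a"
```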
This broad AI push aims to achieve "force multiplication," extending the reach of the battlefield while theoretically reducing the physical exposure of Indian troops to direct combat.
However, the Ministry of Defence has candidly acknowledged the inherent volatility of these technologies. The report admits that AI and machine learning techniques are "not amenable for verified decision making," a technical limitation that could lead to catastrophic "unintended outcomes." To manage this risk, the ministry suggested a semi-automatic framework where AI provides recommendations rather than final execution orders, though it noted a global lack of consensus on what truly defines "levels of autonomy."
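To make the proposed "semi-automatic" framework concrete, here is a minimal, hypothetical sketch of a human-in-the-loop engagement loop in which the model only recommends and a commander must explicitly approve any action. All names, the 0.9 confidence threshold, and the messages are invented for illustration and do not describe any actual DRDO or Ministry of Defence system.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"


@dataclass
class Recommendation:
    target_id: str
    confidence: float  # the model's self-reported confidence, not a verified guarantee
    rationale: str     # human-readable summary of why the model flagged the target


def model_recommend(sensor_track: dict) -> Recommendation:
    """Placeholder for an AI model: it only recommends, it never executes."""
    return Recommendation(
        target_id=sensor_track["id"],
        confidence=sensor_track.get("score", 0.0),
        rationale="signature matched hostile profile (illustrative only)",
    )


def human_review(rec: Recommendation) -> Decision:
    """The commander remains the sole authority: execution requires explicit approval."""
    print(f"Target {rec.target_id} | confidence={rec.confidence:.2f} | {rec.rationale}")
    answer = input("Authorise engagement? [y/N] ").strip().lower()
    return Decision.APPROVE if answer == "y" else Decision.REJECT


def engagement_loop(sensor_track: dict) -> None:
    rec = model_recommend(sensor_track)
    # Low-confidence recommendations never reach a human, reflecting the report's
    # caution that model outputs are not "verified" decisions.
    if rec.confidence < 0.9:
        print(f"Recommendation for {rec.target_id} below threshold; no action taken.")
        return
    if human_review(rec) is Decision.APPROVE:
        print(f"Engagement order logged for {rec.target_id} under the commander's authority.")
    else:
        print(f"Engagement declined; {rec.target_id} remains under observation.")


if __name__ == "__main__":
    engagement_loop({"id": "track-042", "score": 0.93})
```

The design choice the ministry appears to be pointing at is visible in the structure itself: the model's output is a data object, not an action, and the only code path that results in an engagement passes through an explicit human decision.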
On the international stage, India’s stance remains complex and somewhat resistant to restrictive regulation. While participating in UN discussions under the Convention on Certain Conventional Weapons, New Delhi has consistently argued against a legally binding ban on autonomous weapons, viewing such proposals as premature.
India’s voting record at the UN reflects this caution: it supports continued discussion but has abstained from or voted against resolutions mandating strict human-control norms or independent reporting, fearing the stigmatisation of emerging technologies.
Military leaders have expressed sharp concerns regarding the "cognitive offloading" of lethal decisions to algorithms. At the recent India AI Impact Summit, high-ranking officials stressed that command responsibility is absolute and cannot be transferred to a machine.
Examples were cited where commanders intervened to stop machine-recommended strikes that failed to account for civilian evacuations, highlighting the vital necessity of human judgment in the face of algorithmic speed.
The stakes of this technological race are underscored by recent global conflicts in which AI-assisted targeting, such as the Lavender system reportedly used in Gaza, has been linked to significant civilian casualties under minimal human oversight.
As India advances its autonomous capabilities, the current absence of a dedicated oversight body, specific domestic legislation, or a formalised military doctrine remains a critical gap. Without these safeguards, the risk remains that algorithmic errors could translate into irreversible lethal force without a clear line of accountability.
Standing Committee On Communications And Information Technology