India is pushing its military to adopt unmanned combat systems. Admiral R. Hari Kumar, chief of the Indian Navy (IN), emphasised the significance of autonomous systems in building a “futureproof” navy, months after the Indian Army announced the introduction of “swarm drones” into its mechanised forces. Speaking at the Navy Day press conference last month, Admiral Kumar listed steps to improve the Navy’s operational prowess, including a move to acquire a fleet of armed drones. He asserted that the IN has a responsibility to closely monitor Chinese vessel activity in the Indian Ocean region, and that military drones are valuable tools in “navigating the volatile security situation” in the littorals.
In fact, the IN has been working to increase monitoring of Indian territorial waters. In July 2022, two years after leasing MQ-9B Sea Guardian drones from the US, the military published an unclassified version of its “unmanned roadmap” for the introduction of remote autonomous platforms, including undersea vehicles. A major driver of the venture is underwater domain awareness, increasingly seen as a vital component of maritime deterrence in the Eastern Indian Ocean. In the wake of the clash in Ladakh in June 2020, a growing number of Indian scholars and military planners believe that China’s undersea presence in the Indian Ocean is on the verge of crossing a major threshold. Recent reports of Chinese drones spotted in the waters off Indonesian islands suggest that the People’s Liberation Army Navy may have been studying the operational environment of the Indian Ocean. Chinese research and survey vessels have already been deployed in greater numbers in the waters around India’s Andaman and Nicobar Islands. As it became increasingly aware of the risks posed by a foreign undersea presence in Indian waters, the IN aspired to acquire its own autonomous underwater vehicles (AUVs) with dual surveillance and strike capabilities.
However, the navy’s interest in armed undersea drones has intrigued maritime observers. Underwater vehicles are widely used for search and exploration, and AUVs have proved useful for tasks such as ship surveys and mine identification, but India’s naval planners have traditionally not regarded underwater drones as combat assets.
No longer, it would appear. Indian analysts and decision-makers seem only recently to have recognised the warfighting potential of autonomous underwater platforms powered by artificial intelligence (AI). As the fourth industrial revolution (4IR) shapes a new age in warfare, Indian observers are starting to recognise the likely effects of disruptive technology on the maritime domain. Many believe that AI, enabled by deep learning, data analytics, and cloud computing, is poised to change the maritime battlefront and perhaps bring about a revolution in Indian naval affairs.
There is, however, a darker side to the use of intelligent machines in maritime conflict: can robots comprehend the laws of war? For all their growing use in armed conflict, exemplified by the armed American “Predator” drones, artificially intelligent unmanned warfare systems create issues of law, ethics, and responsibility, and the technology is more complex than the fascinating narrative around AI in battle suggests. The first problem with artificially intelligent combat systems is an ethical one. Even as AI makes warfare more lethal, it increases the risk of shared liability among networked systems. This is especially true when weapon algorithms are sourced from abroad, and when the satellite and link systems that enable combat solutions are not under the user’s control. AI can also compromise the control, safety, and accountability of weapon systems.
As if that weren’t complicated enough, AI systems are prone to bias toward particular types of data. Confidence in automated fighting solutions is undermined by biases in data gathering, in the instructions for processing data, and in the choice of probabilistic outcomes. Furthermore, AI appears to automate weapon systems in ways that go beyond the rules of war.
Such harms are not merely hypothetical. Opponents of emerging technology in warfare argue that it puts both military personnel and civilians at risk. A system that targets people based on probabilistic evaluations, made by computers drawing only on machine-learned experience (measuring discrepancies between results and expectations at every stage of computation), is problematic: the computer neither has access to all the information needed to make an informed decision nor understands that it needs more information to arrive at an ideal solution. And since a computer cannot be held responsible for an inadvertent use of force in a theatre of combat, no one can be held accountable.
The doctrinal paradox raises similar issues. It is difficult to incorporate AI-driven warfighting strategies into doctrine, especially when many of the technologies are still at an early stage of research and it remains unclear how effective AI might be in conflict. In light of the successful deployment of armed drones in the Azerbaijan-Armenia conflict and the war in Ukraine, some argue that military doctrine should anticipate the routine employment of unmanned assets in conflict. But modifying doctrine to account for AI is hard, because military doctrine is based on a conventional conception of conflict. If war is a normative construct, there are laws and moral guidelines that must be obeyed. Military leaders know that “proportionality” in the use of force is crucial and that the “necessity” of deploying force in war must be demonstrated. It would be a mistake to treat the Azerbaijan-Armenia war or the Ukraine conflict as a template for doctrine.
Underwater combat drones raise legal issues that are no less complicated. Unmanned maritime systems may or may not qualify as “ships” under the UN Convention on the Law of the Sea, and even if they do, it is doubtful that they can be categorised as warships. Yet their authorised use is not necessarily prohibited, in peacetime or during armed conflict. Imagine a Chinese naval or survey vessel in a neighbouring country’s territorial waters, and an Indian unmanned drone that decides to engage it. The engagement would not necessarily be against the law, but it would be unethical, and it would set a precedent for China to reciprocate.
The IN’s own capacity is another constraint on the advancement of AI. Although technology absorption in the navy has grown over time in some areas, a large gap remains in vital technologies such as systems engineering, aerial and undersea sensors, weapon systems, and high-tech components. Despite launching numerous AI initiatives, the navy is still concentrating on noncombat applications, including training, logistics, inventory management, maritime domain awareness, and predictive maintenance. According to India’s maritime managers, the IN is still at a point on its evolutionary curve where integrating AI into combat systems could be dangerous. Many believe that proceeding gradually is the wisest course of action.
It is important to recognise that the use of AI in battle affects both combat effectiveness and warfighting ethics. Because AI-infused unmanned systems on the maritime battlefront carry real risk, the military must employ them in accordance with national and international law. India’s naval command would do well to steer clear of the dilemma of fielding AI-powered underwater systems whose use might be acceptable in a tactical setting but contravenes the core legal precepts of “humanity,” “military necessity,” and “proportionality.”