AI Out Of The Shadows And Into Warfare
The development of Autonomous Weapons Systems and Artificial Intelligence (AI)-assisted planning and management of battlefields needs checks and balances so that human input into any decision on the use of lethal force remains paramount, argues
Sanjay Badri Maharaj

The race for technological superiority in warfare, strategy and planning is inexorably drawing artificial intelligence (AI) into the equation. Across many countries there is renewed emphasis on the development of autonomous weapons systems (AWS), which is progressing rapidly, though the deployment of these systems is as yet extremely limited. Some consider the development and weaponisation of AI to be destabilising, as it raises issues that may question and challenge decision makers.

One view is that artificial intelligence is leading towards what some have termed a new algorithmic battlefield, one which respects neither boundaries nor borders and which may do away with the need for human intervention. Some fear that linking AI in cyberspace, geospace and space (CGS) to weapons systems would create a situation in which CGS could acquire, engage and eliminate a target without recourse to any form of human intervention. In the more pessimistic view, this would present an unprecedented challenge for humanity, as it effectively outsources warfare to remote and completely unaccountable weapons systems that raise a number of legal and ethical issues. These concerns may be overblown, but there are legitimate questions as to how such systems would be integrated and operated, and how regular armed forces and decision makers would use them on a battlefield without limits.

Yet this is already an issue even with human intervention, as seen in the extensive use of armed drones in counter-terror operations: while they have greatly reduced friendly casualties and inflicted losses on hostiles, their record in avoiding civilian casualties has been less than stellar.

AI in planning

One concern the US Army War College is considering at present is how far AI should be integrated into, or used to supplant, human command. At issue is trust: whether such systems can be allowed to dictate the launch of munitions against hostile targets. The US Navy, for example, grapples with this question with its Aegis air and missile defence system, in which the detection of targets, the computation of interception solutions and even the launch of interceptor missiles can proceed entirely without human interaction. This mode was intended to deal with salvoes of hostile targets, but it presents major challenges in an environment where friendly or civilian aircraft might be in the vicinity, and could lead to tragic consequences if a civilian airliner is mistaken for a hostile target. This happened in 1988, when the USS Vincennes shot down an Iranian airliner with the loss of all aboard. While one can understand Aegis being given such a mode so that the system can cope if its human operators are overwhelmed in a target-rich environment, such systems carry an inevitable risk of tragedy.

However, the US Army, for example, has no intention of allowing a machine to dictate when to “pull the trigger”. It is, on the other hand, developing AI for target recognition and intends to use it on systems as diverse as drones, tank gun-sights and infantry goggles. Moreover, the US armed services are exploring predictive maintenance algorithms that warn mechanics to fix failing components before humans can detect the flaws. Work is also ongoing on cognitive electronic warfare systems that work out the best way to jam enemy radar, as well as on new AI airspace management systems that enable strike fighters, helicopters and artillery shells to converge on the same target without fratricidal collisions and with perhaps fewer friendly-fire incidents.

Geopolitics, September 2019