How Are Autonomous Systems Used In Defence?
Post By: Ryan King On: 04-06-2024 - Industry Trends - Manufacturing
Using robotics in warfare may sound like a concept from science fiction – and for a long time, it was. But over the last few years military robots and autonomous weapons systems have become an everyday reality. With robust ethical arguments both for and against, RAS (Robotic and Autonomous Systems) are a growing – if controversial – trend in defence technology.
What Are RAS?
Often referred to as uncrewed systems, RAS perform a range of functions in warfare and peacekeeping. While they may seem entirely new, many have deep historical roots.
Take drones, which are commonly deployed for targeted attacks as well as reconnaissance, surveillance and monitoring. Their role in modern warfare is the latest chapter in a long tradition dating back to the experimental pilotless aircraft developed during the First World War.
Bomb disposal robots are another prominent case. These have been used in some form by both the police and the armed forces since the 1970s. Remotely controlled by a human bomb disposal expert, the most recent versions use “advanced haptic feedback” to give their operators an immersive feel for the conditions. They’re also much better equipped to navigate challenging landscapes than their predecessors. Some can even climb stairs.
These are just two of the most publicly visible examples of RAS, but they are fairly typical. Most RAS technologies used in the field today are semi-autonomous – uncrewed systems controlled and operated at a distance by humans.
How Are They Used?
RAS are sometimes deployed for tasks that fall under the “four Ds”: Dull, Dirty, Dangerous and Demanding. Drones and bomb disposal robots can undertake repetitive and difficult work without fatigue and they don’t suffer in unpleasant conditions. They’re also an effective way to keep human beings out of hostile and risky environments while ensuring that crucial logistical tasks are carried out. But that’s not the full extent of their usefulness from a defence perspective.
In The Air
In today’s conflicts, autonomous and (more commonly) semi-autonomous systems, including drones, are often used to attack enemy infrastructure. One of the first high-profile military uses of drones took place during the Yom Kippur War of 1973, when Israeli forces deployed unarmed decoy drones to draw Egyptian anti-aircraft fire, depleting Egypt’s missile stocks with no casualties on the Israeli side. UCAVs (Unmanned Combat Aerial Vehicles) have been a prominent feature of other major conflicts, including the Gulf War, and drone attacks featured strongly on both sides of the 2020 Nagorno-Karabakh war, as they do in the ongoing war in Ukraine.
On The Ground
Uncrewed ground warfare has a very long history, starting with the use of radio-controlled “land torpedoes” by the French Army during World War I. The Soviet Teletank and the single-use German Goliath demolition vehicle were both put into use in the 1930s and 1940s, while other projects, such as the British remote-controlled Matilda infantry tank, were developed but never deployed.
The first ALV (autonomous land vehicle) was developed in the United States in the mid-1980s. Today, ALVs are mostly used for reconnaissance and bomb disposal. Their ability to navigate unfamiliar, unpredictable and/or hostile conditions is still limited. But their potential to preserve combatant lives means that their capacities are undergoing constant development.
What’s Next?
The possibilities – and pitfalls – of AI are a current topic in a wide range of sectors, and defence is no exception. It’s not a theoretical question, either. AI is already used in the military sector, mostly for routine tasks such as data collection and processing, in fields ranging from procurement and supply to intelligence and surveillance. AI analysis is also used to simplify and streamline systems maintenance. These applications don’t have a high public profile, and they raise few ethical concerns within the industry or among the wider public.
More controversial is the idea that AI could be used to identify and remove threats – including enemy and terrorist combatants – automatically, with no human involvement. AI-operated weapons systems would be autonomous in the true sense, and they are the subject of intense discussion. Some experts claim that removing human intervention also means removing human error, as well as minimising casualties. Others argue that human conscience and moral judgement cannot and must not be replaced, especially when it comes to decisions that result in loss of life. They also point out that AI algorithms make errors of their own. In a military context, these could have grave consequences for civilian populations.
It’s not yet certain whether an effective, fully autonomous weapons system can be developed, at least in the near future. But the moral, legal and ethical questions it raises are fundamental to the future of defence.
An Evolving Field With A Strong History
The First World War was a landmark in world history because it was the first global, mass-scale and industrialised conflict. It’s no surprise that many developments that are standard today, such as drone warfare and unmanned vehicles, date back to that period. In that respect, RAS are far from a new phenomenon and the drive to maximise strategic impact while minimising casualties is certainly not a recent concern.
The mainstreaming of semi-autonomous defence systems has raised questions about human responsibility in conflict. With artificial intelligence entering the picture, the relationship between people and systems is becoming more sophisticated. Military personnel already make practical and strategic decisions based on data collected and processed by AI. The pressing question is whether that decision-making capacity can and should be delegated.
To some, this is a natural and desirable progression in a field that increasingly relies on automation to lower the human and economic cost of conflict. To others, it’s a dangerous move that changes the whole basis of warfare, enabling difficult and potentially damaging decisions to be made with no human involvement. While technology develops rapidly, the ethical debate looks set to continue for many years to come.