Should drones be allowed to carry out missions on their own?

Somewhere in the skies over Tajikistan, four of the most sophisticated US warplanes ever built are on a mission of the utmost importance: a warlord has captured a number of nuclear warheads and they need to be destroyed. Three of the planes are manned but one of them is an Unmanned Combat Air Vehicle, or UCAV; a drone! However, the pilots soon realise that if they do attack then radioactive dust particles thrown into the air by the explosion will rain down on a nearby village and then across the border into Pakistan. The mission is scrubbed and the planes turn back to their aircraft carrier.

But the UCAV refuses the order and decides the mission is too important to be abandoned.

Going against its instructions, the UCAV attacks the target, destroying the warlord’s nuclear weapons but also irradiating hundreds of thousands of people. It’s not finished yet, however, and decides to attack another target. This time in Russia…


An RAF Reaper Remotely Piloted Vehicle (RPV) (www.raf.mod.uk)

So goes the story of the 2005 science-fiction action movie, Stealth. The film was a box-office flop dismissed by critics, but to many military observers around the world it raised a question that had until then been confined largely to science fiction. Could we really develop weapon systems that can, in theory, identify and attack a target without any human intervention and, if so, should we?

The answer to the first question is undeniably yes. The most cutting-edge combat aircraft, such as the US Army’s AH-64E Apache Guardian attack helicopter or the RAF’s Typhoon FGR4, have sensors so advanced that they can detect and identify a hostile target such as an enemy vehicle or aircraft at great distances and present the information to the pilot. The pilot then selects what he deems to be the appropriate weapon to prosecute the target and can essentially allow the aircraft’s computers to carry out the attack. It would therefore not be difficult to design software that takes over the decision-making process for attacking whatever the aircraft’s sensors have detected.

But here’s the catch!

The aircraft’s computer systems identify a target by taking the sensor data and trying to match it against whatever information exists in their own digital memory. The computer knows what a T-55 tank is supposed to look like and if, for example, an infra-red image returns a similar-looking vehicle, it will reason that it is a T-55. A pilot, however, can look at the same image and determine what it actually is through logic and reasoning rather than relying solely on the onboard sensors. It may very well be a T-55 tank, but it could also be a truck whose image is distorted because it is crammed full of refugees. The drone may be programmed to attack anything that looks like a T-55, but the pilot can take into account the fact that the vehicle is travelling in a convoy of refugee vehicles and is therefore less likely to be a tank, or at the very least that this warrants further investigation. Even if it is proven to be a tank, the pilot can decide that attacking it is not worth the civilian loss of life and abort. An attack by an autonomous drone that mistakenly caused heavy loss of civilian life would be a political and human disaster.
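To make the distinction concrete, here is a minimal sketch of the two decision processes. Everything in it is invented for illustration: the data structure, the function names and the 0.8 match threshold are assumptions, not how any real targeting system works.

```python
# A purely hypothetical sketch: signature matching alone versus a
# decision that can weigh context the signature library knows nothing about.
from dataclasses import dataclass

@dataclass
class SensorContact:
    match_score: float        # 0.0-1.0 similarity to a stored T-55 signature
    in_civilian_convoy: bool  # context from outside the sensor library

def naive_drone_decision(contact: SensorContact) -> str:
    # The drone only compares sensor data against its stored signatures:
    # anything that "looks like" a T-55 above the threshold gets attacked.
    if contact.match_score >= 0.8:
        return "ATTACK"
    return "IGNORE"

def human_in_the_loop_decision(contact: SensorContact) -> str:
    # A pilot can weigh wider context: a strong T-55 match travelling in a
    # refugee convoy warrants further investigation, not an immediate strike.
    if contact.match_score >= 0.8 and contact.in_civilian_convoy:
        return "INVESTIGATE_FURTHER"
    if contact.match_score >= 0.8:
        return "ATTACK"
    return "IGNORE"

# The refugee truck from the example above:
truck = SensorContact(match_score=0.85, in_civilian_convoy=True)
print(naive_drone_decision(truck))        # ATTACK -- the disaster case
print(human_in_the_loop_decision(truck))  # INVESTIGATE_FURTHER
```

The point of the sketch is not the threshold itself but that the naive decision has no slot for information that was never programmed into it.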

It is the fear of that very mistake being made by an autonomous drone that drives the groups demanding stronger international laws against fully autonomous weapon systems. This movement flourished in the early 2000s as drones took centre stage in the War on Terror in Afghanistan and Iraq, and in 2009 the rather science-fiction-sounding International Committee for Robot Arms Control (ICRAC) was founded. The committee is composed of experts in the fields of robotics and international law and aims to address what it views as the growing dangers of increasingly autonomous weapon systems.

In 2010, the committee issued a statement in Berlin, Germany, outlining many of its recommended restrictions on autonomous weapon systems such as UCAVs. These included limiting unmanned weapons’ ability to make any of the following decisions independently of human control:

  • The decision to kill or use lethal force against a human being.
  • The decision to use injurious or incapacitating force against a human being.
  • The decision to initiate combat or violent engagement between military units.
  • The decision to initiate war or warfare between states or against non-state actors.

In 2014, supporters of the committee’s Berlin statement felt they had won their biggest victory to date when, on February 27th of that year, the European Parliament voted 534-49 in favour of a resolution calling for a ban on the development, production and use of fully autonomous weapons that enable attacks to be carried out without human intervention. The committee had wanted the restrictions to go further, including limits on the range and payload of all drones, even those under human control from the ground, but this met much stronger opposition from European governments, many of whom, such as France and the UK, place great emphasis on drone capabilities.


The MQ-9 Reaper RPV has carried out the bulk of RAF drone strikes in Iraq and Syria

Proponents of more sophisticated drones, however, argue that no drone, regardless of its sophistication, is truly autonomous. A human decision has already been made to launch the drone against enemy forces, and therefore the intention for the drone to kill has been displayed before it even takes off. The autonomous drone carries out the mission on behalf of its human commanders purely within the confines of its programming, in the same way that a human can decide to fire a bullet at an enemy soldier; the human has no further control over the bullet, but it is still carrying out the human’s intent to kill. Proponents also argue that a manned aircraft is in theory the less stable option, because the human occupant is just as fallible as an automated weapon system, if not more so. A pilot may be subject to moral or psychological pressures that inhibit them from carrying out the mission even when the attack on the target is justified. Alternatively, a psychologically unbalanced pilot may have no regard for civilian lives whatsoever, increasing the death toll on the ground.

Consider a scenario in which an air strike has been ordered on a terrorist weapons factory in Syria, viewed from two perspectives: that of a manned aircraft and that of an autonomous drone. Both drone and pilot would carry out a risk assessment before deploying weapons, looking at potential threats to the aircraft and potential collateral damage to civilians. Once this assessment is complete, the appropriate weapon would be selected and the attack carried out. Proponents of autonomous systems argue that the drone is the safer option because, if it detected what it determined to be civilians in the blast zone, it would be unable to violate its mission parameters and would abort. The human pilot, on the other hand, can still drop the weapon if he so chooses, as could the operator of a human-controlled drone. There is also the more obvious concern with a manned aircraft: the risk to the pilot from enemy defences.
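Stated in code, the proponents’ argument reduces to a hard constraint versus human discretion. The sketch below is entirely hypothetical; the assessment fields, function names and rules are assumptions made only to illustrate the contrast.

```python
# Hypothetical sketch: a mission parameter as a hard constraint versus
# the same check left to human discretion. All names are invented.
from dataclasses import dataclass

@dataclass
class StrikeAssessment:
    civilians_in_blast_zone: bool  # what the sensors / pilot believe is true

def autonomous_drone(assessment: StrikeAssessment) -> str:
    # The mission parameter is absolute: the drone cannot choose to
    # release a weapon once civilians are detected in the blast zone.
    if assessment.civilians_in_blast_zone:
        return "ABORT"
    return "RELEASE_WEAPON"

def manned_aircraft(assessment: StrikeAssessment, pilot_releases: bool) -> str:
    # The pilot sees the same assessment but retains discretion, and may
    # still release the weapon, rightly or wrongly.
    if assessment.civilians_in_blast_zone and not pilot_releases:
        return "ABORT"
    return "RELEASE_WEAPON"
```

Of course, the strength of that hard constraint depends entirely on how reliably something like `civilians_in_blast_zone` can actually be determined, which is exactly the problem raised next.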

The problem, of course, is that a great deal depends on the quality of the drone’s programming and sensors. Just what kind of parameters should be programmed into the drone to define a civilian? There is also the opposite risk: the drone could misinterpret hostile forces as civilians and abort, with the result that weapons developed in the factory are later used against allied forces or even civilian targets in western cities. There are more basic moral concerns as well, such as war appearing to become cleaner and therefore less repulsive (at least to the country operating the autonomous drones) since bombing missions can be carried out without the risk of losing sons and daughters to enemy forces.

No one would argue against the appeal of being able to defend your own country without risking the lives of your troops. Many of us in the UK remember the scenes of C-17s landing at RAF Brize Norton and coffins draped in the British flag being unloaded live on the BBC during the years of operations in Afghanistan and Iraq, and no one wants to see that again. The political fallout of heavy casualties can force an early withdrawal of troops even if the military objective has not been completed, regardless of the wider consequences. But could this bloodless type of war actually increase the chances of military conflict? The ICRAC argues that autonomous drones take away the human decision to initiate armed conflict: they operate on a set of restrictions limited to their own situation, ignorant of the wider scenario, and, most importantly, they are free of the implications of their actions, unlike a human, who could be prosecuted for illegally initiating combat. In this regard there is indeed a higher chance of conflict being unintentionally initiated by autonomous, weaponised drones, and if this were to occur between two technologically sophisticated nations it would only be a matter of time before the drones were defeated and lives were lost as troops and manned aircraft and ships went into battle.

It’s this nightmare scenario that is driving the campaign to restrict truly autonomous drones. One of the most advanced drones currently in development is the UK’s BAE Systems Taranis: a high-performance warplane that, when development is complete, will be able to conduct air defence and strike missions with prowess equal to that of a manned aircraft such as the Lockheed Martin F-35 Lightning II. Yet even Taranis is only semi-autonomous. It still requires human intervention to make decisions, but beyond that the drone carries out the mission itself, in much the same way that an Air Marshal at a command centre has passed instructions to pilots in combat in the past. This balance of man and machine would appear to offer the best of both worlds: all the advantages of an unmanned aircraft while retaining the human factor.

There is still one problem, however.

The operation of even a semi-autonomous drone relies on communication between the drone and its command centre, and any wireless link can be broken, whether through malfunction or enemy jamming. If a semi-autonomous drone were to lose contact with its command centre, should it be allowed to carry out the mission on its own, or should it be programmed to return to base? The latter would best appease current international feeling on the subject, but what of the aforementioned terrorist weapons factory scenario, in which a Taranis aborting the mission would ultimately result in civilian deaths in the UK from terrorist action?
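The dilemma can be written down as a simple pre-programmed fallback policy. This is a minimal sketch under invented assumptions: the states, the policy names and the 30-second timeout are all hypothetical.

```python
# Hypothetical sketch of lost-link behaviour in a semi-autonomous drone.
# The key point: whichever branch executes was chosen by a programmer
# long before the mission, not by a human in the moment.
from enum import Enum, auto

class LinkState(Enum):
    CONNECTED = auto()
    LOST = auto()

class LostLinkPolicy(Enum):
    RETURN_TO_BASE = auto()    # appeases current international opinion
    CONTINUE_MISSION = auto()  # completes the strike, unsupervised

LINK_TIMEOUT_S = 30.0  # invented value: silence before declaring lost link

def next_action(link: LinkState, seconds_since_contact: float,
                policy: LostLinkPolicy) -> str:
    # Within the grace period the drone still waits for human instruction.
    if link is LinkState.CONNECTED or seconds_since_contact < LINK_TIMEOUT_S:
        return "AWAIT_HUMAN_INSTRUCTION"
    # Link declared lost: the pre-programmed policy now decides everything.
    if policy is LostLinkPolicy.RETURN_TO_BASE:
        return "ABORT_AND_RETURN"
    return "EXECUTE_MISSION_AUTONOMOUSLY"

# One minute of silence; the outcome depends entirely on the policy:
print(next_action(LinkState.LOST, 60.0, LostLinkPolicy.RETURN_TO_BASE))
print(next_action(LinkState.LOST, 60.0, LostLinkPolicy.CONTINUE_MISSION))
```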

The fact of the matter is that, as free-thinking human beings, we are naturally suspicious of entirely automated weapon systems. No matter how well programmed or advanced a drone is, there will always be a question hanging over whether we can trust it to carry out our military intentions exactly. It is also important that someone be accountable for the use of military force; otherwise human life as a whole is devalued, which would only lead to more suffering. One final point, however: human beings armed with guns have been responsible for more unintentional deaths in combat than any other weapon, and for that fact alone we shouldn’t completely dismiss the advantages technology offers us in the decision-making process. Automated systems have the potential, if the programming is sophisticated enough, to significantly reduce collateral damage in combat. One thing is for sure: the use of drones/UCAVs/RPVs by western forces will only increase in the years to come, and consequently so will the debate.


