China is developing a new low-cost ‘suicide drone’ that is despatched in swarms to attack targets such as troops, tanks and other armoured vehicles. A swarm of the fixed-wing unmanned aerial vehicles was tested last month by the developer, a research institute which is part of the state-owned China Electronics Technology Group Corporation, according to a video released by the company.
The unmanned aircraft appear to be similar to the CH-901, China’s first tactical attack drone. The CH-901 is small but can loiter for up to 120 minutes and dive towards a target at 93 miles per hour before detonating.
In the video, multiple drones are seen being fired from a launcher mounted on the back of a modified version of a light tactical vehicle, as well as at least two from helicopters.
The test underscores how the drone swarm threat is becoming ever-more real and will present increasingly serious challenges and threats in future conflicts.
Paul Scharre, an expert on drone warfare, told The Times:
“We can’t see from the Chinese video whether the drones are communicating and co-ordinating with each other. It could just be a launch of drones like the launch of missiles from a multiple-launch rocket system. However, the test shows that China is developing swarm drone systems and they could be operational in a few years.”
In August, an artificial intelligence algorithm was pitted against a human pilot in simulated fighter jet dogfights. The AI pilot won 5-0. Mark Lewis, the Pentagon’s director of research and engineering for modernisation, said that the advantage of an AI pilot is that it will be prepared to
“do things that a human pilot wouldn’t do”.
In a blog published earlier this week by the International Observatory of Human Rights, Pauline Canham writes:
“Removing the human from battlefield operations is given as a significant advantage by operating states, claiming that machines are less likely to make mistakes and offer higher levels of precision and lower risk to military personnel.”
It is precisely this, she says, that illustrates the argument against fully autonomous weapons.
“They don’t have human attributes that indeed include fear for themselves, but also compassion towards others. They are, in effect, weapons of dehumanisation, with no ability to recognise the humanity in those they fight against, or any way to distinguish between combatants and civilians.”
Yet despite this, China is one of many countries racing to develop the technology.
The US has deployed drones for decapitation strikes in the Middle East, including the strike that killed top Iranian commander Qassem Soleimani in January. Two American companies, Northrop Grumman and Raytheon, are developing ways to integrate drone swarms, with the UAVs able to co-ordinate strike operations. The Pentagon’s Defence Advanced Research Projects Agency is also experimenting with swarms of drones and ground robots to provide back-up for infantry units.
As Canham writes, however, there are growing calls for a ban on fully autonomous weapons and a treaty. Out of 97 countries that have publicly elaborated their views on killer robots since 2013, the vast majority regard human control and decision-making as critical to the acceptability and legality of weapons systems.
Most of these countries have expressed their desire for a new treaty to retain human control over the use of force, including 30 that explicitly seek to ban fully autonomous weapons.
The International Observatory of Human Rights is proud to be a member of the Campaign to Stop Killer Robots, a coalition of non-governmental organisations working to ban fully autonomous weapons and thereby retain meaningful human control over the use of force.