- This is the first known case of a self-hunting drone being used against people.
- A United Nations security report on the Second Libyan Civil War says it was a Kargu-2 quadcopter.
We all knew this day was coming. You can only mess around creating wildly advanced robots and practically sentient artificial intelligence for so long before something like military drones starts being used to autonomously attack humans.
Which, according to a recent United Nations security report, is exactly what happened last year.
The robot in question this time is a Kargu-2 quadcopter produced by defense contractor STM, and the incident reportedly took place in March 2020 in Libya during the ongoing Second Libyan Civil War.
The report states…
Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (above) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.
Turkey reportedly supplied the drones used against the Haftar Affiliated Forces (HAF) led by Libyan Field Marshal Khalifa Haftar – a violation of a United Nations arms embargo.
“The UN report implying first use of autonomous weapons against soldiers paints an uncertain picture — however, that’s the point,” national security consultant Zachary Kallenborn told Popular Mechanics.
“The first use of autonomous weapons in war won’t be heralded with a giant fireball in the sky and dark words on how humanity has become Death, Destroyer of Worlds. First use of autonomous weapons may just look like an ordinary drone. The event illustrates a key challenge in any attempt to regulate or ban autonomous weapons: how can we be sure they were even used?”
"The world needs to debate the growing threat of drone swarms. This debate shouldn’t wait until lethal drone swarms are used in war or in a terrorist attack but should happen now." – @ZKallenborn
— Bulletin of the Atomic Scientists (@BulletinAtomic), June 1, 2021
According to Kallenborn, “The Kargu is a ‘loitering’ drone that can use machine learning-based object classification to select and engage targets, with swarming capabilities in development to allow 20 drones to work together.
“The UN report calls the Kargu-2 a lethal autonomous weapon. Its maker, STM, touts the weapon’s ‘anti-personnel’ capabilities in a grim video showing a Kargu model in a steep dive toward a target in the middle of a group of mannequins. (If anyone was killed in an autonomous attack, it would likely represent a historic first known case of artificial intelligence-based autonomous weapons being used to kill. The UN report heavily implies they were, noting that lethal autonomous weapons systems contributed to significant casualties of the manned Pantsir S-1 surface-to-air missile system, but is not explicit on the matter.)”
“Autonomous weapon risk is complicated, variable, and multi-dimensional—the what, where, when, why, and how of use all matter,” he continues. “On the high-risk end of the spectrum are autonomous nuclear weapons and the use of collaborative, autonomous swarms in heavily urban environments to kill enemy infantry; on the low end are autonomy-optional weapons used in unpopulated areas as defensive weapons and only used when death is imminent. Where states draw the line depends on how their militaries and societies balance risk of error against military necessity. But to draw a line at all requires a shared understanding of where risk lies.”