When we think about autonomous weapons and artificial intelligence, we often talk about Terminator, Skynet, and the other highly glamorised killing machines portrayed in film and media. But in light of the recent open letter signed by more than 1,000 experts, we will explore exactly what these experts mean by autonomous weapons and the applications behind them.
We previously explored the differences between the types of AI: Artificial Narrow Intelligence (ANI), which specialises in one job; Artificial General Intelligence (AGI), which looks a lot like human intelligence, with the ability to plan, learn and comprehend complex ideas; and Artificial Super Intelligence (ASI), which would demonstrate an intelligence many hundreds of times more powerful than ours. We don't yet know how this last type of intelligence will manifest itself.
When we consider which of these levels of intelligence could be built into an autonomous machine, we are very close to, if not already past, the point where computer processing power can be attached to a vehicle with imaging software and ballistics and sent out to 'hunt' anything it is programmed to identify.
Realistically, we aren't waiting for full-bodied androids to roam around before machines can be weaponised, and this is the point that thousands of AI and robotics experts, along with Stephen Hawking, Elon Musk, Steve Wozniak and philosophers such as Noam Chomsky, are putting across in their open letter to world leaders.
The danger is that such weapons can and will be produced cheaply and at huge scale. The letter spells out the implications, describing autonomous weapons as the "Kalashnikovs of tomorrow" and warning that "it will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group."
There are obvious and catastrophic repercussions that could unfold if the world's military powers push ahead with autonomous weapon development. The letter concludes by proposing an outright ban on autonomous weapons, whilst also being careful to highlight the benefits that AI could bring to humanity.
"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."
Again, the distinction between different levels of intelligence is a key factor here. Even if any "autonomous machines" are created with the idea that a human ultimately controls the final kill switch, they are still going to be just as "effective" as if they were controlled entirely by artificial intelligence.
There are very positive routes that artificial intelligence can travel down, and obviously very negative ones too. But something that always needs to be remembered is that weapons development is constant, and as long as organisations such as DARPA continue to research artificial intelligence, the two paths are likely to converge at some point in the future.
We commend Stephen Hawking and the Future of Life Institute for putting this letter together, and we can hope for an international treaty in a similar vein to the Chemical Weapons Convention, under which many states agreed not to use chemical weapons. However, the effectiveness, profitability and precision in targeting that autonomous weapons promise may be too large a draw for states to sign up.