Rise of the Machines
Russian weapons maker Kalashnikov is working on an automated gun system that uses artificial intelligence to make "shoot/no shoot" decisions. But exactly how this AI or any other decides who is a combatant and who isn't is at the heart of a raging debate over allowing autonomous weapons on battlefields filled with both soldiers and civilians.
The Kalashnikov "combat module" will pair a 7.62-millimeter machine gun with a camera attached to a computer system. According to TASS, the module uses "neural network technologies that enable it to identify targets and make decisions." A key part of neural network technology is the ability to learn from past mistakes.
Neural networks are computer systems that learn much like animal brains: they learn from examples and then apply that learning to future decisions. A battlefield robot, for example, might store images of soldiers, guerrillas, and unarmed civilians in an onboard database. When its cameras image a human being, the neural network would compare the person to that database. If the person carries a firearm or wears an enemy uniform, the robot would open fire. Ideally, if it saw no weapon at all, it would judge the target a civilian and hold fire.
The problem with example-based learning in warfare is that mistakes are permanent and irreversible, and a neural network may never get the chance to apply the lesson learned. If the robot misjudges a rocket launcher as a broomstick and holds fire, the rocket launcher blows it up, and a destroyed robot can't help future robots discern a soldier with a rocket from a civilian with a broomstick. If a civilian's pitchfork is misidentified as a rocket launcher, he gets riddled with bullets. The neural network can then adjust itself to recognize pitchforks, which improves the robot. But the civilian is still dead.
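To make the stakes concrete, here is a minimal sketch of the kind of shoot/no-shoot rule described above. Everything here is illustrative: the function name, the feature labels, and the marker set are assumptions, since the real module's internals are not public, and an actual system would classify camera images rather than text labels.

```python
# Hypothetical shoot/no-shoot decision rule, sketched from the article's
# description. A real system would run a neural network over camera
# frames; here we stand in a list of already-detected feature labels.

def shoot_decision(detected_features):
    """Return True (engage) only if the detected person appears to carry
    a weapon or wear an enemy uniform; otherwise treat as a civilian."""
    hostile_markers = {"firearm", "rocket_launcher", "enemy_uniform"}
    return bool(hostile_markers & set(detected_features))

# The article's two failure modes are misclassifications upstream of this
# rule: a rocket launcher labeled "broomstick" yields False (robot is
# destroyed); a pitchfork labeled "rocket_launcher" yields True (civilian
# is killed). The decision logic itself can be "correct" either way.
print(shoot_decision(["enemy_uniform"]))        # True
print(shoot_decision(["pitchfork"]))            # False
print(shoot_decision(["rocket_launcher"]))      # True
```

The point the sketch makes is the article's point: the rule is only as good as the labels feeding it, and a wrong label at this step is irreversible.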