Monday, August 28, 2017

Weapons of Doubt

Last week, Elon Musk again warned against Killer Robots.  This time, instead of just him on stage, he got 115 of his closest friends (really, leading experts in the field of AI and autonomous weapons) to publish a letter to the UN.  In it, he and all the rest warn against the use of weapons that are not only unmanned, but also not directly controlled by a human being.

Is he wrong?  No.  There is terrific potential for misuse around these things.  Think of mean ol' Mrs. Jenkins who always yells at you to pick up after your pooch.  If she sets up a Kalashni-Bot(tm) on her front yard, the neighborhood population of pups is going to take a sharp downturn.

What I find a bit disingenuous is that Mr. Musk is the person voicing these concerns.  After all, he is one of the leaders in autonomous vehicles with Tesla.  And the area-awareness software and sensor suite that he is plunking into his cars could easily be modified to drive something more sinister.

But let's put that aside and think a bit about what those rules might be.


As I've said before, autonomous everything is coming.  On a long enough time scale, everything from cars to ceviche will be automated.  And this is a good thing.  Once all of the work is done by robots, we humans are free to create, pursue dreams and just be.  No more punching time cards because that's the most reliable way to put food on the table.  Unfortunately, weapons sit somewhere between cars and ceviche.

Furthermore, I agree that automation needs to be regulated.  Not in some we-must-save-the-jobs-for-the-people kind of way, but in such a way that we, the citizens of the world, can be confident that the output of all of this automation is safe: safe materials, safe processing and safe delivery are all part of this.

When applied to automated weapons, this gets tricky fast.  The materials are now bullets.  Not safe.  The delivery is out of the barrel of a gun.  Not safe.  Which leaves the processing.  Can that be made safe?  And what does that mean in the context of a weapon anyway?

For me, it means that the weapon has very narrow parameters under which it is allowed to engage a target.  Here are a few off the top of my head:

  • It must repeatedly warn a potential target that it is being targeted.  And that warning must come in a form that can be understood in highly stressful situations.  It must be loud, but not damagingly so.  It must be delivered in the manner with the highest likelihood of being understood by the target: in a language or set of symbols that the target can properly interpret.  Such warnings should include warning shots as a last resort prior to active engagement.
  • It must have very narrow parameters for determining that a target is in fact a target.  Being in the wrong place and not wearing the properly coded RFID chip is not enough.  The weapon must be able to determine that the potential target has dangerous intent toward it or the asset it has been assigned to protect.
  • If there is any doubt, then the weapon must stand down.  Any doubt.  Stand down.  Period.
That last rule may all but hamstring these types of weapons.  There is almost always doubt.  But if rules similar to these are implemented (plus many more; I'm bashing these out after dinner and a beer, so more thought is needed), then these weapons might be safer than some testosterone-fueled grunt bored on guard duty.  A rough sketch of what that logic might look like follows.
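To make that concrete, here is the sketch, in Python.  To be clear, all of it is hypothetical: the names (Assessment, may_engage), the thresholds, and the shape of the inputs are just my rules above translated into code, not a description of any real system.

    # Purely illustrative: a "may I engage?" gate built from the rules above.
    # Every name and threshold here is hypothetical.

    from dataclasses import dataclass

    REQUIRED_WARNINGS = 3   # assumed: minimum clear warnings before engagement
    CONFIDENCE_FLOOR = 1.0  # "any doubt" means nothing short of certainty

    @dataclass
    class Assessment:
        warnings_delivered: int  # warnings issued in a form the target understands
        hostile_intent: bool     # target acted against the weapon or its asset
        confidence: float        # 0.0 to 1.0, from the (hypothetical) sensors

    def may_engage(a: Assessment) -> bool:
        """True only if every rule is satisfied; anything else is a stand-down."""
        # Rule 1: the target must have been repeatedly and clearly warned.
        if a.warnings_delivered < REQUIRED_WARNINGS:
            return False
        # Rule 2: narrow parameters.  Demonstrated hostile intent, not merely
        # being in the wrong place without the right RFID chip.
        if not a.hostile_intent:
            return False
        # Rule 3: any doubt at all means stand down.  Period.
        return a.confidence >= CONFIDENCE_FLOOR

Notice what a confidence floor of 1.0 does: with noisy, real-world sensors it is effectively never met, so the gate almost never opens.  That is the hamstringing, expressed in one line of code.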

Of course, rules without enforcement are a waste of time.  And when the stakes are the lives of innocents, I find it hard not to think in terms of equally serious consequences for those responsible for violating these rules.  No fines.  Most fines are no more than a corporate wrist slap.  Hard time and the potential for the death penalty.

But who is responsible?  The person who built the weapon and the underlying code?  The person who ordered that the weapon be released into the combat zone (or any other kind of zone)?  The person who bought the thing (if they are different from either of the first two)?  Yes.  All of them should be held responsible.  Collectively.  And all should be aware that their lives are on the line if they screw up.

Ultimately, these types of weapons need to be monitored by a human.  One that can work the same way that the weapons work: all the time, without fatigue or rest.  Doing that may mean uploading a human mind into the OS for these things, if that is even possible.

Fortunately, Elon is working on doing something very close to just that.
