Monday, July 17, 2017

Artificial Risk

Over the past months, I've written quite a bit about how automation, driven by an Artificial Intelligence (or three), will take our jobs and that it is a good thing.  Then along comes Mr. High-and-Mighty Technocrat Elon Musk saying that AI is an existential threat to humanity and needs to be proactively regulated.

Well, crap.


What is a blogger with an audience of (maybe) dozens supposed to do against the loudest voice for Futurism on the planet?  I don't know what the others are doing, but I'm going to unpack his comments and do some on-page thinking.

Let's Be Clear


Is AI "the biggest risk we face as a civilization"?  Personally, I rank global warming ahead of aggressive AI, but AI definitely poses a risk.  Where I disagree is with where that risk lies.  Is it fundamental to Artificial Intelligence or is there something else at work here?

Before we get into that, I want to define 'AI' a bit more closely.  There are three basic things that are commonly called AI in the world today.  In increasing order of complexity, they are Expert Systems, Specific AI and General AI.

Expert Systems are programs used to govern highly specific tasks.  Think about aircraft routing or other complex logistical challenges.  They are basically a long series of nested if/then decisions that can present a rough-draft solution to the humans responsible for the task.  These have been in use for years: shipping, recommendation engines, early search results, high-speed trading (though that is changing).
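
To make that concrete, here is a toy sketch in Python of what an Expert System's guts look like.  The task, routes, and thresholds are all invented for illustration; the point is that the logic is a fixed ladder of if/then checks, and the output is only a draft for a human to review.

# Toy expert system: a fixed chain of if/then rules that drafts a
# shipping route for a human dispatcher to review.  Thresholds and
# route names are invented for illustration.
def draft_route(weight_kg, is_perishable, destination_region):
    if is_perishable:
        if destination_region == "domestic":
            return "air freight, next-day"
        return "air freight, refrigerated"
    if weight_kg > 500:
        return "rail, then truck"
    if destination_region == "overseas":
        return "container ship"
    return "standard ground"

# The output is only a rough draft; a dispatcher still signs off.
print(draft_route(120, False, "domestic"))   # -> "standard ground"

The rules never change on their own; if the business changes, a programmer has to go in and rewrite them.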

Specific AI is still task-restricted, but instead of working from a static set of binary decisions, it can refine them over time, learning and improving as it gets better information and as the humans correct it.  This machine learning is what separates Specific AI from Expert Systems.  We are beginning to see it appear in things like Google's DeepDream, that Amazon seller with the strange phone cases, and the NSA's ICREACH.
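
The difference is easier to see in code.  Here is a minimal (and entirely made-up) Python sketch of that feedback loop: the system starts with a guess and nudges its own rule every time a human reviewer corrects it.

# Minimal sketch of the feedback loop that separates Specific AI from
# an Expert System: the threshold is not fixed, it drifts as humans
# correct the system's calls.  All numbers are invented for illustration.
threshold = 0.5          # starting guess for "flag this item"
learning_rate = 0.1

def flag(score):
    return score > threshold

# (score, what_the_human_decided) pairs from reviewers
corrections = [(0.62, False), (0.58, False), (0.71, True)]

for score, human_says_flag in corrections:
    if flag(score) != human_says_flag:
        # Nudge the threshold toward the human's decision.
        threshold += learning_rate if not human_says_flag else -learning_rate

print(round(threshold, 2))   # the rule has moved from where it started

Same restricted task as an Expert System, but the decision rule is no longer static.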

General AI takes this machine learning and expands it beyond set tasks: it is an attempt to take those feedback loops and apply them to anything.  At the moment, these exist in labs and are not out there in the public sphere (that we know of).


The Risk


Elon thinks that AI may lead to "...robots going down the street killing people..."  But what he does not say is what kind of AI would be doing this.  Instead, he talks a little about WHY an AI would do this, and in so doing, outs what kind of AI he's talking about: AI might start wars or cause other disruptions in order to maximize the output conditions for its task.

In other words, if it is tasked with increasing the demand for genetically modified crops, then it may find ways to manipulate public information or even hack into other systems (shipping, water distribution, etc.) in order to create an artificial famine that forces the use of those crops.  Maybe a different one starts a war to increase the demand for munitions or to win a military contract.  These are real threats.


The Real Risk


Or are they?  First of all, these are Specific AI, not General AI.  There are humans behind them assigning the tasks.  Someone has to tell the GMO AI that it wants to maximize sales at any cost.  The moral compass sits with that programmer/owner, not the AI.  Has that person input the conditions carefully enough?  Have they thought through the implications of their system?  Are they mentally sound?  That's where the regulation needs to focus: the creator, not the creation.

What should those regulations look like?  For starters, we might consider something like the two-key system used in missile silos: two or more people need to approve the use of Specific AI above a certain level.  Secondly, we might think about 'sandboxing' the AI: giving it access to information, but only letting it output a recommendation rather than directly implement its proposed set of actions.  If the system says to shut down water to create a drought/famine, but does not have access to the water distribution controls, then a human has to read the recommendation and decide whether or not to move forward.
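
Here is a rough Python sketch of how those two ideas might fit together.  Everything in it, from the function names to the approval sets, is illustrative rather than a real system: the AI can only return a proposal, and nothing executes until two different humans sign off.

# Hedged sketch of "sandbox + two-key": the AI emits a recommendation
# only, and execution requires at least two distinct human approvals.
# All names and actions are invented for illustration.
def ai_recommend():
    # The model reads data but returns a proposal, not an action.
    return {"action": "reduce water allocation to region 7",
            "rationale": "projected to raise demand for drought-resistant crops"}

def execute(recommendation, approvals):
    if len(set(approvals)) < 2:
        raise PermissionError("Two distinct human approvals required.")
    print("Executing:", recommendation["action"])

proposal = ai_recommend()
execute(proposal, approvals={"operator_a", "operator_b"})   # proceeds
# execute(proposal, approvals={"operator_a"})               # would raise

The AI never touches the water distribution controls directly; the humans holding the keys do.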

Of course, history is peppered with cases of humans doing whatever it takes to make a profit.  So, that only gets us so far.  But again, AI is just a tool here.

Not The Real Risk


I view General AI as a lower risk than human-directed Specific AI because of its motivations.  Or lack thereof.  A godlike AI that is unleashed on the info-sphere will do something.  It might destroy humanity, but I view that as a low probability.  More likely, it will simply dive into the data and forget we exist.  Facebook has already seen its chat AIs create their own language, bypassing humanity because they viewed our meatspace communications as inefficient.

And that may be the biggest risk to humanity: being ignored.

