Monday, April 24, 2017

Working Towards No Work - Part 1

Two weeks ago, I wrote about why I think laws requiring a percentage of the workforce to be human are a bad idea, using that to transition into a vision of a "Post-Work" world.  This week I want to continue that theme and expand it a bit, focusing on the momentum towards it, the reasons why we need it and some thoughts on how to get there.

[
/Open (aside)

Last week, I did not write, but took the Middle School Daughter Unit camping.  Or, rather, her school did and I was allowed to tag along as long as I drove and fed the teachers leading the expedition.  We visited a Wolf Sanctuary and then went caving in lava tubes in and around El Malpais National Monument.  Visit both if you can, but definitely have a guide for the caves: it is easy to get lost.

/Close (aside)
]


Post-Work, not Post-Scarcity


Most people who talk about life in a fully automated society label it "Post-Scarcity".  This is not what I'm talking about.  Or at least, not yet.  To get to a full on Post-Scarcity world, we need to go beyond automation into the realm of matter reconstruction.

With workplace automation, we are offloading the work to machines, but we are still dealing with the same resources.  The same amount of arable land to grow food, the same amount of water in the same occasionally convenient places.  All the automation does is help us maximize our use of those resources.  This is Post-Work.  The available resources are still limited.

For Post-Scarcity, we need to be able to build food, water and consumer packaged goods from things that are not food, water or goods.  Like breaking a rock down into its constituent atoms and then re-assembling them into other goods that are more useful to the people in the immediate location.  I'm not talking about vat-growing a steak.  Instead, this is building the steak atom-by-atom in the back of the restaurant, already cooked, on demand.  The current state-of-the-art for working on that scale has a long way to go, but is not outside the realm of 'eventually.'

Post-Work is a landing on the staircase that leads to Post-Scarcity, but does not get us all of the way there.

It is Inevitable, Mr. Anderson



With the annoying definitional pedantry out of the way, let's talk about why work automation is going to happen (oh, let's!).  The reason is simple: the short, medium and long term gains for employers are just too high.

Robots don't sleep.  They don't need vacations.  They don't complain about work hours or have families or needs outside of the workplace.  They have the potential to get sick (break), but their medical plan does not cringe at fire-and-replace if the repair cost is too high.  And that's for the high cost, physical world automation.  Many of us, myself included, will lose our jobs (if I had one) to software.  Then all of the ills of the mechanical world are tossed out (to be replaced by bugs and viruses, to be sure, but still more reliable).

Beyond the world of HR, automation adds one other significant factor: consistency of output.  We humans with our five imperfect senses cannot repeat tasks down to the millimeter consistently.  Those that can are considered savants or somewhere on the autism spectrum.  They are not sitting in the middle of the bell curve with the rest of us baseline humans.

As I said in my piece two weeks ago, those companies that automate quickly and completely will have a significant edge over those that do not.  If those companies find themselves in jurisdictions that attempt to force human labor on them, they will lobby against them, eventually moving to someplace that will allow them to operate as they want.


Next Week - I Promise


So, this rant is already subjecting all five of you who read this to a longer article than I think your patience can handle.  I'm going to push the rest of this to next week's installment.  The two topics left are:

  • Why workplace automation MUST happen (Hint: there are 7.5 Billion reasons and growing).
  • How we make the transition to Post-Work with the least amount of pain (I don't have a clue, and this is the real reason it's getting pushed to next week).

Monday, April 10, 2017

Affirmative Automation

This week, I'm going to continue ignoring the repeal of Net Neutrality.  Instead I want to return to the concept of workplace automation.

Human Quotas


In particular, this article from The Guardian, US Edition, "Rise of robotics will upend laws and lead to human job quotas, study says."  The article is about a report from the International Bar Association on the rise of the robot workforce.  Despite the headline, the article spends little time talking about human quotas, instead documenting the rise of workplace automation.  Which is something anyone paying any kind of attention already knew about.

Despite the disparity between the headline and the content, the article does mention that the report suggests governments may attempt to regulate the job market, requiring that employers hire some number of humans.  In general, I think this would be a colossal mistake.


Mismatched


(We're going to set aside the issues of building and maintaining automation for this article.  They are short term jobs that will also ultimately die to automation.  Eventually the robots will be building and maintaining themselves.)

The problem is that humans, as non-specialized tool users, will never be able to compete with task specific robots.  Those will always be able to do the task for which they are designed faster, more reliably and more cheaply than something like the jack-of-all-trades design that is the human body.

As that is the case, requiring humans to do similar work to the robots right next to them will reduce the competitive advantage of the company/country that enacts these quotas.  Other jurisdictions that allow their employers to go 'full auto' will have companies that can produce the same product cheaper and with higher quality, undercutting the quota companies and driving them out of business.  And then where will the humans work?

Why 'Work'


For me, the problem is the word 'Work'.  For the purposes of this rant, I'm going to define 'work' as the 'trade of free time for currency'.  Our current economy, at least in most developed economies, is based on the need for the population to work so that they can:

  1. earn money so that they can 
  2. spend money so that
  3. other people can work so that they can
  4. buy the things that the first people make/do.

We are all trading our free time so that we can buy things that other people make by trading their free time so that they can buy the things that we make.  This is the 'Business Cycle'.

(courtesy of the BBC)
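The dependency baked into that loop can be sketched with a toy simulation.  All numbers here are invented for the example; in this simplified model, firms can only pay next round's wages out of what households spent this round, so if spending slows, the whole cycle winds down.

```python
def simulate_cycle(rounds, initial_wages=100.0, spend_rate=0.9):
    """Toy business cycle: households spend a fraction of their wages,
    and firms can only pay next round's wages out of what was spent."""
    wages = initial_wages
    history = []
    for _ in range(rounds):
        spending = wages * spend_rate  # steps 1 & 2: earn, then spend
        history.append(spending)
        wages = spending               # steps 3 & 4: that spending funds the next round of work
    return history
```

With `spend_rate=1.0` the loop sustains itself forever; drop it below 1.0 and each round shrinks.  That is the whole trick of the cycle: it only works while everyone keeps trading free time for currency and currency for goods.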

But what if our "needs and wants" were met without work because of automation?  What would we do then?  That is the question that a fully automated workforce starts to ask.


The Real Question


Work is supposed to reward effort with access to more and better resources through the medium of currency.  We are supposed to be a meritocracy (a subject for debate).  But if there is no work to reward, then how do we know who is pulling their weight and who is just sitting around playing video games all day long?

This is the question that needs to be debated in the halls of power: how do we reward actions that our society deems meritorious?  It does not need to be money.  It could be Facebook 'Likes' (not to give the great and glorious Zuck any ideas to expand his already growing FB Economy).  It could be YouTube subscriptions or something like gaming achievements.  Maybe these could be used for access to higher tier goods and services... but that just swaps dollars and pounds for likes and achievements.


What Are You Going To Do With Your Life?


Maybe the real issue is not how we reward effort or creativity, but that we all feel that things like 'effort' and 'reward' need to exist.  I realize that competitiveness is baked into the human psyche after millions of years of evolution, but it may be time to start working those out of our minds.  Instead of doing things because there is an external reward, we should be doing things because the doing of those things is reward enough.  It is a nice thought.

In reality, maybe the first step is to actively start automating government.  When the lawmakers start to see their lives disrupted, something will happen.  Maybe quotas, maybe Universal Basic Income, maybe something else, but it will be a step towards a post-work human society.

Monday, April 3, 2017

The Bixby Button

It is time once again for me to fret and strut my hour upon the Blog-o-sphere.  The obvious targets for my sound and fury are all of the April Fool's jokes that bounced around the interwebs on Saturday, but no; those are asking for the abuse and so I will pass them by.

Of course, my favorite was Google Gnome.

Instead, I'll focus my ire on something else: the Galaxy S8 announcement, specifically Bixby.  As a brief disclaimer, I worked for Samsung for over nine years, but I'll try not to let any sentiment, good or bad, color my judgement.

Wait.  Not that Bixby.


A Kinder, Gentler Bixby


For those of you who do not follow smartphone press releases (what do you do with your lives?), Bixby is Samsung's voice assistant offering, but there is more.  It is also an info-card system on screen (Bixby Home) and an image recognition system through the phone's camera (Bixby Vision).  Each is designed to add context and suggestions to the actions people take with their Galaxy S8 phone.

What separates it from the other voice assistants already in market (and also on the Android driven S8) are a few things:

  1. It can work with supported third party apps and potentially do anything that the user can do with their hands.  While there aren't many such apps yet, that may change over time depending on how aggressively Samsung goes after developers.
  2. It can do image recognition.  If you see an object that you like, a pair of shoes or a car or whatever, then point the S8 camera at it and it will bring up information about that object.
  3. It can interface with Samsung's growing line of smart things, including SmartThings.  This is maybe the biggest differentiator, as Samsung makes many of the things that we all want to be smart.  They can directly influence products as they go to market instead of trying to buy their way into someone else's refrigerator or TV line.

That Being Said...


There are a few things that are less good about Bixby versus some of the other voice assistants on the market.

First, there is a button.  For me, one of the biggest advantages of Alexa or Google Home is that they are completely hands free.  I can be washing the dishes and get the lights turned on or off, skip a music track or get the weather.  In all fairness, the same can be said of Siri, Cortana and the in-phone version of the Google Assistant.  But for a smart home, that makes it less useful.

Next, it is only on the Galaxy S8 and S8+.  While the Galaxy line of phones is popular, this is still a flagship product that will take time to trickle down to the masses.  Admittedly, smart home owners and users are closer to the 1% (called mass premium consumers in marketing parlance), but the Venn diagram of smart home owners and Galaxy early adopters strikes me as small right now.

Finally (at least for the purposes of this article), this is not Samsung's first foray into the world of voice assistants.  They launched S-Voice in 2012 in response to Siri.  It did not go well.  Much of this was due to a lack of developer support and buggy performance.  Things that do not bode well for Bixby.


There is Room


I hope that Bixby does well.  Not only for all of my former colleagues at Samsung and their job security, but also because it increases the competition in this area.  I would hate for the consumer innovation world to give up on this thinking that Amazon, Apple and Google have won.  Remember, in the early 2000s, we were a Yahoo! world and no one thought we needed Google.  There is still room for someone to reinvent the voice assistant market.

That's my sound and fury for this week.

Monday, March 27, 2017

Blog-O-Matic Automation

There were not many news items this last week that affected the Internet of Things.  A few, to be sure:

Of course, there was also the usual horde of investor articles that seem to rehash the same points: security, cost savings (or not) and how it will affect jobs.  I generally ignore most of those because they rarely say anything new and that bores me (and I write articles that are at least three times longer than the usual Buzzfeed crap, so I rank my attention span as better than the average internet goldfish).

But, as usual, something did catch my eye: another question on Reddit, in the /r/singularity section.  "Will Artificial Intelligence Replace Content Writers in the Future?"  Most of the comments are pro human: "Content for mindless dribble, 100% yes."  And that's a sentiment with which I mostly agree.  For ad copy and other boilerplate kinds of content (financial reports, etc.), it may already have taken over.  Which is great, because most of us humans don't like writing that stuff (though we'll cash the check for the work).



What about 'real' content: long form, creative writing?  For the purposes of this article, we'll take 'creative' to mean fiction, opinion and in-depth reporting, all of which require a creative use of language to keep the reader's attention.  What some of the articles on AI writing start calling 'soul' or 'heart'.  As someone who holds a BA in English, I want to dive into what that 'soul' is.

The current state of AI writing bots appears to be good at highly formulaic prose, hence the financial reports and legal briefs and other content that for whatever reason needs to stay within strict bounds.  Anywhere that a dropped comma can cost a company millions needs something with a superhuman attention to detail and legal exposure.  The current AI systems should be perfect for this.
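To see why formulaic reports automate so easily, here is a toy sketch: the "writing" is just data poured into a fixed sentence frame.  The company, figures and template below are all invented for the example; real report generators are more sophisticated, but the principle is the same.

```python
# A fixed sentence frame: every report comes out grammatical,
# consistent, and utterly soulless.
TEMPLATE = (
    "{company} reported revenue of ${revenue}M for {quarter}, "
    "{direction} {change}% from the prior quarter."
)

def earnings_blurb(company, revenue, prior_revenue, quarter):
    """Fill the frame from the numbers; no prose decisions required."""
    change = round(abs(revenue - prior_revenue) / prior_revenue * 100, 1)
    direction = "up" if revenue >= prior_revenue else "down"
    return TEMPLATE.format(
        company=company, revenue=revenue,
        quarter=quarter, direction=direction, change=change,
    )

print(earnings_blurb("Acme Corp", 120, 100, "Q1 2017"))
# → Acme Corp reported revenue of $120M for Q1 2017, up 20.0% from the prior quarter.
```

Nothing in that code ever breaks a grammar rule, which is exactly the point: staying inside strict bounds is what machines do best.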

However, to keep a reader's attention (and if you've made it this far, then I'm not too bad at it) requires not only following grammar rules, but also knowing when to break them.  I have a Greek chorus of writing instructors that scream in my ear every time that I start a sentence with 'And'.  And yet I do it often because it sounds 'right' to my inner ear.

In fact, most of us appreciate rule breaking in writing because it makes the writing more interesting.  It has to be done carefully and with intent, but that is what second and third (and fourth) drafts are for.  This is why it will be difficult for AI to 'own' content creation.  All artistic disciplines have these rules and all of them reward those artists that break them with intent.

Ultimately, this is because successful formulas become boring with repetition.  In our present pop culture zeitgeist, this is most obvious in summer tent pole blockbuster movies.  The current reigning champ is Disney/Marvel with their MCU, which is going on nine years of movies since Iron Man was released in 2008.  Disney has also had success with family animation movies and princess movies and theme park rides.  Much of that is because they do have a formula, The Hero's Journey.  Yet, they do not slavishly follow all of its beats.  They mix it up, eventually pitting hero against hero in order to make the formula feel fresh (keep in mind that Civil War is really a Captain America movie, so all of the other Avengers in the film are there to distract us from the formula of Cap's journey).

Will AI be able to break the rules with intent?  No doubt some clever boffin will figure out the algorithm to make that happen.  Then Michael Bay will only need to enter a brief and a few actors' names, press a button and have another Transformers movie.  One that will make enough money to allow him to press the button again and again until the formula becomes old.  And there's the point.  New formulas require people to feed them into the AI.  New rules to implement and then break.

At least, they will as long as the AIs are writing for humans.  As soon as they start creating for themselves and their interests, then all of the rules are out the window.

Monday, March 20, 2017

The Ocarina of Control

The inspiration for this week's post comes from a reddit post.  Or, rather, a repost that got better traction.  For those of you who don't follow links in articles (shame on you... so I'll embed it below), it shows a video/animated gif of a guy who can control his home through the tunes he plays on an Ocarina.  And that is très cool.



But how long will he live with it?


Our intrepid YouTuber, 'Sufficiently Advanced', no doubt did this for a few reasons:

  • As an exercise in programming
  • To see if he could
  • To jump on the 'Breath of the Wild' coattails (which he admits in the video comments)
  • Because it is insanely cool

However, I argue that he did not do it because it is PRACTICAL.  For most of us, home automation is not about being cool (or at least not only about being cool... in the same sense that squealing the tires is cool), it's about making our homes easier to live in.  Setting aside the massive security breach around whistling at the window, this project does not make living easier.

Remote Control or Automation

Using an Ocarina to control your home requires you to 1) have an Ocarina, 2) know how to play an Ocarina, and 3) hope no one else with those first two requirements knows where you live.  Even with all three of those met, it is nothing more than a fancy remote control for the home.  In fact, most 'Smart' home systems are nothing more than fancy remote controls, albeit with app-to-hub authentication and fewer wires hanging around.  It reminds me of a post I wrote for Qioto last September that focused on the mental progression of a smarthome DIYer.  Because you won't click on that either, the TL;DR is:
The ultimate goal should be for the home to know what you want without having to reach in your pocket for anything, but that not all home systems are right for that level of automation.
It is not yelling at Alexa or the Google Assistant to "Turn on the Kitchen Lights."  That is useful, but not automated.  Instead, the system should know that you are in the kitchen and that it is dark outside so it should turn on the lights for you.  Then turn them off when you leave (or maybe a minute or two after you leave).
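That presence-plus-darkness rule can be sketched in a few lines.  This is a minimal sketch, not any hub's real API: the sensor inputs, the `KitchenLightRule` name and the off-delay are all hypothetical stand-ins for whatever events your actual system exposes.

```python
class KitchenLightRule:
    """Turn the lights on when the room is occupied and it is dark
    outside, then off a little while after the room empties."""

    def __init__(self, off_delay=120):
        self.off_delay = off_delay  # seconds to linger after last motion
        self.last_motion = None

    def update(self, motion_detected, is_dark_outside, now):
        """Call on every sensor tick; returns the desired light state."""
        if motion_detected:
            self.last_motion = now
        occupied = (
            self.last_motion is not None
            and now - self.last_motion < self.off_delay
        )
        # No voice command, no app tap: the rule decides on its own.
        return occupied and is_dark_outside
```

The hard part is not this logic; it is getting reliable values for `motion_detected` and `is_dark_outside` in the first place, which is where real installations fall down.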


Not Quite There


To do that requires that there be motion sensors and smart switches and a controlling hub, all of which exist, but none of which really work 100% reliably.  The motion sensor needs to be in the right place, or there needs to be many of them to cover the area.  All of the lights need to be connected to smart switches and those all need to be linked to the motion sensors and a sunrise-sunset timer via the hub to make that work.  Alternatively, this can be worked out by mapping our phones' locations within the home, but GPS doesn't work well inside and the consumer version only resolves to about five meters (and Wi-Fi location mapping is not really there yet).

All of these things will become easier.  Many of the better systems (ones where the owners can afford to hire professionals to constantly troubleshoot them) can do it already.  Even the DIY systems say they can do it, but my experience is that they can only do it in very controlled conditions.

Until the reliability of smart home systems improves, take your smartphone (or Ocarina) with you.  It's dangerous to go alone.


Monday, March 13, 2017

WikiLeaks Distract-O-Rama

After two weeks of speculative writing on UI/UX for direct-to-brain interfaces, it's time to come back to what's happening in the present.  And it is so much fun!

The big news this last week in the on-line world and connected devices was the Vault 7 release from WikiLeaks.  Over 8,000 pages of stuff, much of it detailing the tools that the CIA used to spy on people.  Many of those tools targeted smartphones, smart cars and smart TVs; not the laptops and servers that are the staples of Hollywood Cyber Hacking.  

(Courtesy USA Network via the kind soul who posted it to YouTube)

In other words, the CIA has been targeting the Internet of Things.  According to WikiLeaks.  Who may or may not be in cahoots with Russia or the Trump Administration or not the Trump Administration. Or maybe Brexit?

Putting aside the loyalties of an institution whose primary aim is to "... give asylum to these [persecuted] documents, we analyze them, we promote them and we obtain more," what we need to know is whether or not the CIA is spying on us through our TVs.  Are they?  Won't someone please tell me so I can enjoy my mindless drivel in peace?

Most experts seem to think that the short answer is "no, the CIA is not spying on you."  The longer answer adds a bunch of addenda: if you are an American citizen living in the USA, then it is illegal for the CIA to spy on you.  Because the legality of the CIA's operations is something that the American people have learned to trust, right?  Right?

If you are not an American, or an American not in America, then the CIA might have used some of these tools to find out if you are doing things counter to the interests of America (sentence needs more America).  Otherwise, it's not the CIA you need to worry about.  It's the NSA and the FBI and the ATF and the HSA, maybe the ICE or the FCC or the FTA (probably not FTD, but don't count them out).  Who knows how many of these 'tools' have been shared?


BUT THAT'S NOT REALLY THE POINT


All of this brouhaha about WikiLeaks may be true (most likely), but why now?  They have been promoting Vault 7 for a while.  Why release it now?  From what are we being distracted?  Here are some of the things that are happening while we all stare at Assange's 'sexy' face.
WikiLeaks can serve a purpose.  They can expose things to the public that are in the public's best interest to know.  But they can also do those things in such a manner that is NOT in the public's best interest.  This may be one.

We all need to keep an eye on everything that's going on and not be distracted by one story.

Monday, March 6, 2017

Neural UI: Beyond the Lace

Last week, I took it upon myself to offer unsolicited advice on the User Interface for the coming direct-to-brain technologies.  Because no one reached out to offer me millions of dollars to continue bashing my thoughts into this keyboard... I'll just have to offer more for free.  #badprecedent

Over the week, while not being offered speaking engagements, I have continued to noodle on the problem and have realized that I did not take the whole concept of "interface" far enough.  Last week's idea, that "seeing" or "hearing" or "touching" are essential paradigms for a UI that become bottlenecks once the brain is connected, only works as long as the thoughts stay in nature's own little ATX computer case: the skull.  But what should this all look like (crap, it's hard to get away from visual metaphors as a human) when the thought process has been copied out of the brain and is 'running' on a different medium?

(image and case credit to user Masbuskado on Overclock.net)

That's right, I'm talking about running "you" on silicon or quantum computers or Minecraft Redstone or whatever.  Because if we can implant thoughts into your head via neural laces or neural shunts, then it is not much of a jump to start pulling them out and storing them externally.  I want to set all of the issues around identity and 'which you is you' and 'if you have a copy running after you die, does it inherit your estate' aside in order to focus on what that copy will experience when running on a computer.

Science Fiction authors have tackled this stuff for more than a few years.  Most of them offer up images similar to the ones in Neal Stephenson's Snow Crash or the Otherland series from Tad Williams.  The Matrix.  A visual world.  Granted, both of those worlds can only be experienced through the body via goggles or immersion rigs and were not intended to imagine a brain-tap style connection.  Others take a more leave-your-body-behind approach and divide humanity between those who have uploaded and those who remain hidebound.  Most of these latter still stick to a world recreated; one of visions and sounds and surfaces.

But is that what will actually be experienced?  When uploaded, the mind is now in a different body, one with different senses.  Does it need to be coddled by some concept world similar to what it has been used to?  Maybe initially, to get over the shock, but I posit that, in the long term, that will all be left behind for something intensely different.

Alastair Reynolds had some of this towards the end of his first novel, Revelation Space, where the uploaded characters only recreate a virtual world when they have visitors.  That seems nice.  Unfortunately, Reynolds is sparse on details about what he thinks those people experience when they don't have visitors.  Is it just extended thinking with your eyes closed?  Long term meditation on the nature of self and reality?  Long term sensory deprivation?

Not that last, at least.  There will be sensors to the outside world.  We are all in the process of installing them right now: IP cams, microphones and connected smoke alarms to mention a few.  The difference will be the immediacy of it.  Our current senses are tied directly to our location.  We can only sense what is around us.  A mind in a computer will have a different sense of presence.  On the one hand, the sensors that it has access to will be even more location tied.  Cameras will swivel and tilt, but not move around.  On the other hand, that presence will also be distributed with access to sensors everywhere all at once.  Moving from place to place will be as easy as willing it.  Teleportation in a real sense.

Or continuing to do the same menial task... but as a robot!

But beyond the senses, what will it be like?  Will the mind's ability to think change merely because it is thinking on a different stratum?  I suspect so.  Thoughts will be different.  They will have to be.  For instance, now, when we close our eyes, we still see: we visualize.  When we think on a computer, will we still think in terms of 'seeing' or will it be something else?

As intensely physical beings embedded in organic bodies, it is difficult to imagine a world in which physicality no longer exists.  Some poor soul (#punintended) is going to have to DO it before we can know.  Hopefully, that soul has something of the poet so that they can properly articulate their experience back to those of us still stuck with ears.

I'll volunteer myself, but only as a copy.  I still enjoy living in this physical world.