2016: The Cubs, AI, headaches and hangovers

For decades, artificial intelligence (AI) has been plodding along a hype curve filled with aspiration and disappointment, fueled by (and fueling) pop culture fascination and fear. Whenever there seemed to be an AI breakthrough, we were disheartened to learn that it would be a “few more years” until the potential could be truly realized. Like the Chicago Cubs, artificial intelligence would have to wait until next year.

A few years ago, Cubs ownership changed hands and the franchise started to rebuild in the wake of the transition. New faces emerged along with new ideas and new branding. Investment patterns shifted, and long-standing relationships were tested and redefined. The result: the Cubs don’t have to wait until next year. The hype around the team finally became real in 2016. “Cubs win! Cubs win! The wait is over!” Chicago Cubs fans descended into a happy hangover after game 7.

The shift in AI strangely parallels that of the Cubs. A few years ago, IBM’s Watson surfaced in popular culture, signaling a change in thinking about artificial intelligence and what it meant. AI was rebranded as machine learning and active assistance. Technologies once limited to laboratories (or Japan) entered the mainstream and are expanding to larger and larger populations (just take a look at Alexa, Cortana and Siri). AI-driven toys like Anki’s Cozmo are reaching audiences with real machine learning that Sony with its Aibo could only dream of and the creators of the original Teddy Ruxpin could never achieve.

For all of its growth and realization of decades of hyped potential, AI has hit an inflection point. As with the Cubs, 2016 was when promises were realized and the hype machine around “next year will be the year” collapsed. We’re now faced with “okay, now what?” And that “now what” came fast, with many, many problems. 2016 has been the realization of a dream and a very real view of the headaches and hangovers that come with it.

Algorithms and machine learning have both come under intense scrutiny after the US election. The analysis, backlash and handwringing have been fast and furious. But what transpired across the digital landscape in the run-up to and aftermath of the election (and Brexit, too) should not have come as a surprise to anyone. Utopian technologists need to realize that bias in algorithm design is very real (just take a look at the Facebook psychology experiment scandal in 2012), and that manipulation of machine learning does happen and, if left unchecked, can rapidly deteriorate to the point of dystopia (remember Microsoft Tay and its quick descent into hate and racism? A quality 2016 migraine…). People are becoming nervous. I think we’re now getting another headache that goes with the hangover.

If 2016 is any indication, hangovers will become the new norm as utopian visions fall apart in practical reality, and our headaches will grow along with the fear. Not political fear, but the fear embedded in the very technologies that are designed to make our lives better, and the headaches that come with understanding how to transform idealism into practical reality.

Here’s the problem: code is cold. It has no ego and no moral judgment. Its decisions are based on rules and priorities, programmed and learned. It doesn’t understand right and wrong. It does not want; it chooses with no thought of consequences. It only understands do and not do. Our utopian future will be shaped, if not defined, by code. Although designed by humans, once in execution code can take on a life of its own without human constraints. The only constraints are those that the code knows, which surfaces Asimov’s First Law of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
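To see how little that law means to code that was never given it, here’s a toy sketch in Python (illustrative only; every name and rule here is hypothetical, not anyone’s real system). The program evaluates exactly the rules it was handed, in the order it was handed them, and nothing else exists for it:

```python
# A toy rule engine (illustrative only; all names and rules are hypothetical).
# It "decides" exactly as programmed. Asimov's First Law exists for this code
# only if someone writes it in as a rule; otherwise it simply isn't there.

RULES = [
    # (condition, action) pairs, checked in priority order
    (lambda state: state["obstacle_ahead"], "brake"),
    (lambda state: state["lane_clear"], "accelerate"),
]

def decide(state):
    """Return the first action whose condition fires: no ethics, no judgment,
    no thought of consequences, just do and not do."""
    for condition, action in RULES:
        if condition(state):
            return action
    return "do_nothing"  # unprogrammed situations get an unconsidered default

print(decide({"obstacle_ahead": False, "lane_clear": True}))  # -> accelerate
```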

Decisions are increasingly made based on code, not morals and ethics. Don’t agree? What about the VW Group pollution scandal? That was code running in hundreds of thousands of cars, hidden away until discovered. The algorithm was doing what it was designed to do – cheat emissions testing systems around the world. Okay, so humans decided to secretly cheat; that’s not AI behavior – that’s just checking sensors and reacting to inputs. Bad algorithms by bad people, which have hurt millions. But add a little machine learning to that and there’s the beginning of a solid headache.
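This is emphatically not VW’s actual code, just a deliberately simplified sketch of what “checking sensors and reacting to inputs” can look like. The signals and thresholds below are hypothetical, but the shape of the trick really is this small:

```python
# NOT VW's actual code: a simplified sketch of how a "defeat device" works in
# principle. All signals and thresholds here are hypothetical. There is no
# learning involved, just rules reacting to sensor inputs.

def emissions_mode(steering_angle_deg, speed_kmh, minutes_at_steady_speed):
    # A dyno test tends to look like: wheels turning, steering wheel dead
    # straight, speed held to a scripted pattern for a long stretch.
    looks_like_a_test = (
        abs(steering_angle_deg) < 1.0
        and speed_kmh > 0
        and minutes_at_steady_speed > 10
    )
    # Behave only when someone appears to be watching.
    return "full_emissions_controls" if looks_like_a_test else "performance_mode"

print(emissions_mode(steering_angle_deg=0.0, speed_kmh=50,
                     minutes_at_steady_speed=15))
# -> full_emissions_controls
```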

So what about learned behavior (aside from Tay)? Autonomous cars are very, very cool but have, in my opinion, been rushed to market without real consideration of the consequences. Mercedes, in 2016, made the business choice to let its future cars kill us if they choose to. There will be a biased, learned behavior in every Mercedes car on the road (by design, people in Mercedes cars are assumed to have a greater life value than people outside of Mercedes cars). Unlike VW, at least Mercedes admits that the option to kill people will be in its code. Now, as a pedestrian, I have to worry about distracted drivers and movie-grade killer cars.
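To be clear about what “biased by design” means in practice, here’s a purely illustrative sketch, not Mercedes’ code. The weights are hypothetical; the point is that a moral judgment gets frozen into arithmetic, and the arithmetic then runs coldly, every time:

```python
# Purely illustrative, not Mercedes' code: a sketch of how an occupant-first
# bias can be baked into a crash-avoidance choice. The weights below are
# hypothetical; the bias is the point.

OCCUPANT_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 0.5  # the bias: people outside the car "count" for less

def expected_harm(option):
    return (option["occupants_at_risk"] * OCCUPANT_WEIGHT
            + option["pedestrians_at_risk"] * PEDESTRIAN_WEIGHT)

def choose(options):
    # Pick the maneuver with the lowest weighted harm, with no second thoughts.
    return min(options, key=expected_harm)

swerve = {"name": "swerve", "occupants_at_risk": 1, "pedestrians_at_risk": 0}
brake = {"name": "brake", "occupants_at_risk": 0, "pedestrians_at_risk": 1}

# One life at risk either way, but the weights break the tie against the
# pedestrian: brake scores 0.5, swerve scores 1.0.
print(choose([swerve, brake])["name"])  # -> brake
```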

At least all the Cubs have to deal with is coming up with a winning season next year. The hangover will pass. AI and machine learning, however, are in the midst of a drunken stupor, and will have one heck of a hangover and a raging headache to deal with.

This post was inspired by this article on VentureBeat: “Why AI assistants need to stay neutral”.