banner by Sarah Burns

Me, Myself and AI

by Tara Canahap

“Do you know what the Turing test is?”

I’m sitting on my couch over winter break, decompressing from finals week with a pint of Ben & Jerry’s in hand and the TV blaring. “Ex Machina” was the movie of choice. A bearded Oscar Isaac, who plays a tech-giant C.E.O. extraordinaire, is asking this question of his nervous-looking intern. The intern responds, the realization dawning on him that his boss has done the seemingly impossible: created a conscious robot.

“It’s when a human interacts with a computer, and if the human doesn’t know they’re interacting with a computer… the test is passed… the computer has artificial intelligence.”

My interest is piqued.

If you’re a human with access to the Internet, you have encountered this buzzword before: artificial intelligence (AI). Up until this cinematic moment, my knowledge of this field had been vague. I recalled seeing Facebook videos flashing clips of prototype robots taking stabs at elementary-school-level conversation, but besides that, artificial intelligence seemed possible only in science fiction movies. However, artificial intelligence is very much a reality, one that is quickly and seamlessly working its way into our everyday lives. In fact, our very own hand-held Siri has been enhanced with AI, as have other personal assistants, including Facebook’s M and Amazon’s Echo. To us, “artificial intelligence” just feels like smarter technology, better equipped to cater to our needs. Merriam-Webster defines artificial intelligence as “the capability of a machine to imitate intelligent human behavior.” Under this definition, we would consider things like navigation apps to have artificial intelligence: we assign the app the task of getting us to a location, and it responds with the best route, even offering alternatives that avoid tolls and minimize traffic.
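For readers who like to peek under the hood, here is a toy sketch of what “best route” can mean in practice: a classic shortest-path search over an invented map. The street names, travel times, and toll flags are all made up for illustration; real navigation apps layer live traffic data and far richer models on top of searches like this.

```python
# A toy of the "assign a task, get the best route" idea: Dijkstra's
# shortest-path search over a made-up road network. Everything here is
# invented for illustration.
import heapq

# roads[a] = list of (neighbor, minutes, is_toll_road)
roads = {
    "home":     [("highway", 10, True), ("main_st", 18, False)],
    "highway":  [("downtown", 8, True)],
    "main_st":  [("downtown", 12, False)],
    "downtown": [],
}

def best_route(start, goal, avoid_tolls=False):
    """Return (total_minutes, route) for the fastest path, optionally toll-free."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, node, route = heapq.heappop(queue)
        if node == goal:
            return minutes, route
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost, toll in roads[node]:
            if avoid_tolls and toll:
                continue  # pruning toll roads is all "avoid tolls" means here
            heapq.heappush(queue, (minutes + cost, nxt, route + [nxt]))
    return None  # no route satisfies the constraints

print(best_route("home", "downtown"))                    # (18, ['home', 'highway', 'downtown'])
print(best_route("home", "downtown", avoid_tolls=True))  # (30, ['home', 'main_st', 'downtown'])
```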

However, to developers, AI has taken on a much more complex role. Rather than mechanically carrying out advanced computer programming, machines are expected to learn: to produce fluid results that draw from actual, live context, and to go further still, building a larger, fuller picture of a task. This type of intelligence rests on “artificial neural networks,” simplified systems loosely modeled on the way our own human brains process information. As the machine gets more input from its environment, it processes the information and teaches itself, rather than being explicitly programmed.
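As a rough illustration of that shift from explicit instructions to learning from examples, here is a classroom-scale neural network in Python. It is never told the rule for the XOR function; it is only shown four example inputs and outputs, and it nudges its own weights until its guesses match. Nothing here resembles the scale of the networks behind modern assistants, but the structure of the exercise is the point: the programmer writes the learning procedure, and the behavior comes from the data.

```python
# A minimal neural network that learns XOR from examples instead of being
# programmed with the rule. Classroom-scale sketch only.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the desired outputs (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small random starting weights for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for step in range(20_000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to shrink the error a little.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(output, 2).ravel())  # should approach [0, 1, 1, 0]
```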

This type of “artificial general intelligence,” as Google Chief Executive Sundar Pichai calls it, is the ultimate goal. Pichai says, “Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context.” Just imagine the possibilities: if you are deciding where to take your mother out to eat for her birthday, you could simply ask your personal AI for advice, and it would implicitly take into account its accumulated knowledge of your mother’s tastes, your past choices, your budget, and which restaurants in town are popular and well reviewed. Incredibly, we wouldn’t even have to lift a finger.

Even more incredible is the news that Google Translate’s newest artificial intelligence has found a way to translate between languages more efficiently by unexpectedly creating its own intermediary language, beyond even the developers’ understanding. For example, a Google Translate AI that has been taught to translate German to French, and also French to English, can now translate directly from German to English without having to pass through French first. In an exceedingly globalized market, it seems Google is closing in on an answer to the age-old problem of the language barrier.
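To make the “intermediary language” idea concrete, here is a deliberately tiny illustration. Everything in it is invented, and Google’s system learns its shared representation inside a neural network rather than from hand-written tables, but the routing is the same in spirit: each language maps into one shared space of meanings, so a pairing that was never taught directly can still be recovered.

```python
# A toy of the shared "interlingua" idea: every language encodes into one
# common set of concept IDs, so German -> English falls out even though no
# German-English table exists anywhere below. (All vocabulary is invented.)

# Encoders: language-specific word -> shared concept ID
encode = {
    "de": {"hund": "DOG", "katze": "CAT", "haus": "HOUSE"},
    "fr": {"chien": "DOG", "chat": "CAT", "maison": "HOUSE"},
    "en": {"dog": "DOG", "cat": "CAT", "house": "HOUSE"},
}

# Decoders: shared concept ID -> word, derived by inverting each encoder
decode = {lang: {concept: word for word, concept in table.items()}
          for lang, table in encode.items()}

def translate(word: str, src: str, dst: str) -> str:
    """Translate by routing through the shared concept space."""
    return decode[dst][encode[src][word]]

print(translate("hund", "de", "en"))  # "dog", with no direct German-English pairing ever written
```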

This technology has taken on an unprecedented trajectory in terms of what it is capable of, and it appears that now is a golden age for computer science, as consumers’ lives are transformed by the so-called “AI revolution.” To AI ethics expert and professor Kay Firth-Butterfield, the term “revolution” is fitting to describe this movement, but only for lack of a better one. In an interview for CXOTALK, Firth-Butterfield says, “the transformation of our society… is happening ten times faster, and at three hundred times the scale, or roughly three thousand times faster than the impact of the industrial revolution… It’s the speed and the real, core underpinning that AI is contributing that makes discussion so important.”

And discussion there has been. Most notable is renowned physicist Stephen Hawking’s 2016 speech at the opening of the Leverhulme Centre for the Future of Intelligence, where he famously said: “success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.” In our excitement about the very real potential this technology could afford us, we rush past the fact that AI may be developing faster than we can realistically comprehend. For example, though each AI prototype seems different from the next, there is a striking similarity among all of them: unpredictability. By its very nature, AI is self-learning, which means a million possibilities can arise from a single training session. Some AIs come through training with flying colors, while others don’t, and it is often beyond our comprehension why either result occurs. This unpredictability is the most exhilarating mystery of the age, as evidenced by the “interlingua,” or common language, that arose suddenly inside Google Translate.

Exhilarating as it is, we should exercise some of Hawking’s cautious worry. If we have no way of predicting what our technology is capable of, how can we reliably and responsibly integrate it into the consumer market? On a more fundamental level, if our goal is to create AI with the capabilities of human intelligence, what do we do with technology that exceeds our own objectivity, our own capacity for knowledge? Hawking boldly interjects, “I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”

Even just considering our navigation technology, our apps have already surpassed what we, as humans, can do: predict traffic patterns, calculate arrival times, and create multi-faceted, efficient routes. According to a study by experts at Oxford University, nearly half of all U.S. jobs are at high risk of being replaced by AI-driven automation, with another 20 percent at medium risk. Technology has always been controversial in its implied potential to replace human input, and there has always been pushback against development that might make us obsolete. In our excitement about all the possible applications of artificial intelligence, we’ve neglected the fact that we’re working towards a more refined, flawless version of the human mind. On a larger, ethical stage, if we accept that our brains are just electricity and physics, does that mean our machine counterparts therefore have conscious thought? Feelings? Inner life?

Alan Turing, in his influential 1950 paper on artificial intelligence, proposed his Turing test as a thought experiment. “Ex Machina” paints a picture on screen of what that test would look like. And we, as human consumers in an age brimming with the promise of advancing AI, are living it.