Trendcasting Artificial Intelligence

To truly understand Artificial Intelligence, we have to understand where we are assigning it too much potential and where we aren’t giving it enough credit.

Artificial intelligence, or AI for short, began as a field of study in the 1950s, not long after the introduction of the first digital computer in 1942. In those early days we sought to develop intelligence that could rival that of humans: a superhuman digital intelligence that could reason and make decisions as we do, but without the shortcomings inherent in human thought. We wanted to strip out the emotions, inconsistencies, and errors of human decision-making while accelerating the rate at which decisions could be made. Moreover, this type of omnipresent artificial intelligence would be applied broadly to the full range of decisions confronted by humankind. Today we refer to this type of artificial intelligence as “General AI” or “Artificial General Intelligence,” but in those early days it was simply the type of intelligence we wanted to build.

In 1965 Herbert Simon, one of the early AI scientists, predicted that “machines will be capable, within twenty years, of doing any work a man can do.” Simon missed the mark, and this type of overzealousness led to the long AI Winter of the 1970s and 1980s. Unrealizable expectations, an inability to commercialize and monetize General AI, and its seemingly slow progress all contributed. Technological innovation tends to move very slowly until suddenly it doesn’t.

Developments over the last decade in “deep learning,” using massive amounts of data to optimize decision engines with incredible accuracy, have reinvigorated interest in AI. AI research today primarily focuses on applying large amounts of data and computing power to narrowly defined domains. Deep learning is used to optimize single objective functions like “achieve checkmate,” “win Go,” or “maximize speech recognition accuracy.” In recent years we’ve seen significant progress in “Narrow AI,” and that has us excited, and a little scared, that general AI, and specifically untethered general AI, is just around the corner. We find ourselves in one of those periods of seemingly sudden progress. But AI has been progressing since those early days in the 1950s. We’ve just tended to discount many of the advances made over the last six decades.
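To make “optimize a single objective function” concrete, here is a minimal sketch of the pattern underneath deep learning: a loop that repeatedly nudges parameters to shrink one narrowly defined loss. The toy model, data, and learning rate are invented for illustration; real systems run the same kind of loop at vastly greater scale.

```python
# Illustrative sketch: narrow AI as optimization of a single objective.
# The "objective" here is a toy squared-error loss; chess engines, Go
# programs, and speech recognizers optimize far richer objectives, but
# the shape of the loop is the same.

def loss(w, data):
    # One narrowly defined objective: how far off are our predictions?
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w, data):
    # Derivative of the loss with respect to the single parameter w.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up observations
w = 0.0
for step in range(200):
    w -= 0.05 * gradient(w, data)  # nudge the parameter downhill

print(f"learned weight: {w:.2f}, loss: {loss(w, data):.4f}")
```

Nothing in that loop knows what the data means; it only knows how to push one number lower, which is exactly what makes it narrow.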

The earliest forms of narrow AI could do one thing really well, but couldn’t get any better at it until they were reprogrammed and updated with new capabilities. In this way, your computer, myriad software applications, and even a basic calculator were very simple narrow AI systems. But we often overlook these developments as AI. We don’t like to think that basic calculators are all that special or deserve to be considered intelligent. As roboticist Rodney Brooks put it, “every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.’” Or as computer scientist Larry Tesler noted, “Intelligence is whatever machines haven’t done yet.” This has become known as Tesler’s Theorem. The “AI Effect” paraphrases this idea to say that “Artificial Intelligence is whatever hasn’t been done yet.” In other words, AI doesn’t get any of the credit for advances made, but holds all of the expectations of developments yet to come.

The massive growth of digital information has created a situation ripe for AI applications. Today’s AI advances are driven by machine learning coupled with massive computational processing power. We’ve shifted from trying to perfectly duplicate the logic of an expert to a probabilistic approach that offers some flexibility. Rather than defining all of the rules ex ante, we apply statistical techniques to uncover the rules embedded in data.
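A small, hypothetical sketch of that shift, using scikit-learn purely as an illustration: the first function encodes an expert’s rule by hand, while the second lets a statistical model uncover a similar rule from labeled examples. The features, thresholds, and data are made up.

```python
# Hypothetical example: spam filtering two ways.
from sklearn.linear_model import LogisticRegression

# Ex-ante approach: an expert writes the rule by hand.
def expert_rule(num_links, num_exclamations):
    return num_links > 3 and num_exclamations > 2  # brittle, fixed logic

# Probabilistic approach: let the data suggest the rule.
# Features: [num_links, num_exclamations]; labels: 1 = spam, 0 = not spam.
X = [[5, 4], [6, 3], [4, 5], [0, 1], [1, 0], [2, 1]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)

print(expert_rule(5, 3))              # the hand-written verdict
print(model.predict([[5, 3]]))        # the learned rule's verdict
print(model.predict_proba([[5, 3]]))  # with a probability, not a certainty
```

The learned model isn’t smarter than the expert; it simply lets the data, rather than a programmer, set the boundaries of the rule.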

Machine learning is enabling previously simple AI systems to learn (i.e., improve) within their programmed field. Algorithms can identify new information from the inputs they receive and create new outputs. These AI systems get better at what they were intended to do, but they don’t jump out of that domain. Digital assistants, for example, get better at deciphering speech, but that won’t enable them to drive your car. While we call it learning, it is a narrow form of learning. So I can teach Alexa to play my favorite band or deliver me the nightly news, but I can’t get Alexa to have a favorite band of her own or have an opinion on what to do about North Korea.

We are building discrete AI systems to be very good at discrete problems. These systems are inherently poor, by design, at autonomously learning new skills, adapting to changing environments, and ultimately outwitting humans. While general AI is still the goal for some, commercial forces will keep narrow AI applications the focus for many decades. Moreover, general AI will never be an outgrowth of narrow AI applications. Narrow AI applications are not designed to adapt their knowledge to other problems. These systems have programmed logic and parameters that make them best in class at solving a discrete set of problems. Narrow AI systems lack the context required in general AI environments. Interact with any digital assistant today and this is immediately and abundantly clear. And while the perception of context appears to be improving, we are far from general AI.

As Rodney Brooks noted, there is “a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines ‘know’ whether an image is that of a cat or not, or to ‘know’ what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.”

In practice we don’t want AI systems that think like humans. We want hyper-focused AI applications that are extremely proficient within a narrowly defined field. We want to solve discrete problems. The AI systems that will flourish in the years to come will decipher large amounts of data to solve previously difficult, but discrete, problems. We’ve assigned too much potential to general AI, and not given enough credit to the narrow AI applications that are changing how we live, work, and communicate.
