Insights From The Inevitable: The Brain Behind IBM Watson on Machine Learning vs Machine Understanding

IBM Watson team founder David Ferrucci, Ph.D., led his team to their landmark Jeopardy! success in 2011. An award-winning artificial intelligence (AI) researcher, Ferrucci drew on more than 25 years of work and a passion to see computers fluently think, learn, and communicate when he founded Elemental Cognition in 2015.

David has more than 100 patents and publications. An IBM Fellow, he has worked at IBM Research and Bridgewater Associates directing their AI research. He received his Ph.D. in Computer Science from Rensselaer Polytechnic Institute.

On December 8, David presented The Brain Behind IBM Watson: On Machine Learning Vs Machine Understanding as part of Text IQ’s “The Inevitable 2020 Series.”

“There's so much knowledge out there. Machines should be able to sift through the noise, help us overcome our biases, and afford efficient access to explicable reasoning over the world's knowledge. I'm going to focus a lot on what it means to actually understand this content and to give us reasoning and not shallow predictions.”

―Ferrucci

One focus of Dr. Ferrucci’s talk is the move from what is referred to as “machine (or deep) learning” (ML) to “deep understanding.”

Dumb Machine

Machine learning is a “fascinating and critical technology that has made tremendous advances in recent years,” notes Ferrucci. It has “modest roots in regression,” which, at its simplest, uses a set of variables to make predictions. The advances made over the years can be seen in the field of image recognition, aided by progress in neural networks and the consequent ability to move beyond simple linear functions: the machine develops new non-linear functions with significant predictive power. The machine truly appears “intelligent” (even smart enough to beat the all-time Jeopardy! champion). But...
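Those “modest roots in regression” can be sketched in a few lines. The toy example below (illustrative only, not from the talk) recovers the hidden function y ≈ 2x + 1 from noisy samples, the simple linear ancestor of the non-linear functions deep networks learn:

```python
import numpy as np

# Toy data: y depends linearly on x, plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=50)

# Regression "finds the function": solve least squares for slope and intercept.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(slope, intercept)  # close to 2.0 and 1.0
```

A neural network does the same job with far more flexible, non-linear function families, which is what makes it feel "intelligent" when the learned function happens to be powerful.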

“When we step back and think about what intelligence is, intelligence starts to look like a really powerful function finder, powerful in that it can efficiently find these complex relationships or complex functions. What once had to be modeled in programs specifically by humans can be learned automatically. Humans just annotate the right output for a given input. The deep learning looks at enough of that training data and finds that function...

Of course, the big question that leaves us though, is...while it's doing these amazing things...what sort of intelligence is it actually building?”

―Ferrucci

Ultimately, the question is: what is the difference between pattern recognition and understanding? Finding that answer requires explicability.

In an “adversarial example” presented by Ferrucci, a photograph of a Persian cat is correctly identified as a Persian cat with an 87% confidence level. The same image, after a small adversarial perturbation, was identified as a toaster with a 98% confidence level. Why? We can’t know, because the machine can’t tell us.
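The mechanics behind such flips can be sketched with a toy linear classifier (illustrative only; not the actual image model from the talk). In high dimensions, a per-pixel change far too small to see can swing the overall score completely, the pattern popularized by fast-gradient-sign attacks:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
d = 10_000                       # number of "pixels"
w = rng.normal(0.0, 1.0, d)     # toy linear classifier weights

# An input the classifier scores very confidently toward class "cat".
x = 0.001 * np.sign(w)
p_cat = sigmoid(w @ x)

# Adversarial nudge: shift each pixel by a tiny amount *against* the
# weights. Per-pixel the change is invisible, but summed over 10,000
# pixels the score swings completely, flipping a confident prediction.
epsilon = 0.002
x_adv = x - epsilon * np.sign(w)
p_cat_adv = sigmoid(w @ x_adv)

print(p_cat > 0.99, p_cat_adv < 0.01)  # True True
```

The classifier is very confident both times, and it cannot explain either answer, which is exactly Ferrucci's point about pattern recognition without understanding.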

Super Parrot

Clearly, “if intelligence is a function finder, then [machines] are finding a very different function than we are. One that may have little to do with how we engage with, and reason about, the world,” says Ferrucci. And importantly, the machine can’t tell us how or why. Ferrucci asks a question familiar to litigators from when technology-assisted review (TAR) was first introduced: “What sort of model is it building, and how useful is that model when I need to probe it and understand it?”

“We are stuck with the proverbial black box,” says David, “[a] model that is inexplicable. It is this inexplicability that underpins the concerns with pure machine learning approaches – particularly in critical decision making.” (Particularly if counsel has to attest to the accuracy of the results and defend those results in court.)

“The general ability for machines to read, reason, and understand remains a grand challenge,” says David, “and today's state-of-the-art natural language processing (NLP) fails to produce rational, explicable understanding.” As he proffers: if machines really could “understand,” expertise across all domains would already have been commoditized. We would simply need to give a machine all the knowledge ever recorded and ask it to read it, understand it, and discuss it.

“ML approaches get it right or wrong without explanations because they’re not building a rational understanding. As a result, they lack support for humans to understand and improve the underlying rationale because it is just a shallow prediction based on how text occurs,” insists Ferrucci. “What I like to call ‘super parrots.’"

Explicable Intelligence

“A richer architecture that, at its core, is knowledge and reasoning.”

―Ferrucci

So, is true language understanding possible? Is explicable intelligence in artificial intelligence on the horizon?

David poses a most fundamental question: “What does that even mean?” What is understanding? What is comprehension? It isn’t “predicting phrases that might co-occur.”

“One of the basic ideas that we laid out for everyone was when you read a story, the simplest story, [the machine] should be able to do what you would expect a good student to do, which is be able to build a spatial map: Relative position of entities, timelines, cause and effect, motivation. Who did what, and why did they do it?"
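One way to picture the “good student” output Ferrucci describes is as explicit, queryable structure rather than token predictions. The sketch below is purely illustrative; the class names and the story are hypothetical, not Elemental Cognition's actual representation:

```python
from dataclasses import dataclass, field

# Illustrative structures only -- not Elemental Cognition's real model.
@dataclass
class Event:
    actor: str         # who did it
    action: str        # what they did
    time: int          # position on the story timeline
    motivation: str    # why they did it

@dataclass
class StoryModel:
    positions: dict = field(default_factory=dict)  # entity -> location (spatial map)
    events: list = field(default_factory=list)     # ordered timeline
    causes: list = field(default_factory=list)     # (cause, effect) pairs

story = StoryModel()
story.positions["Ana"] = "soccer field"
story.events.append(Event("Ana", "kicked the ball", 1, "wanted to score"))
story.events.append(Event("Ben", "blocked the shot", 2, "wanted to defend"))
story.causes.append(("Ana kicked the ball", "Ben blocked the shot"))

# The point: "who did what, and why" is now explicit and inspectable,
# unlike a prediction buried in a black-box model.
print(story.events[0].actor, "-", story.events[0].motivation)
```

A statistical language model may produce a plausible next sentence; a structure like this can be probed, corrected, and used to justify an answer.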

Ferrucci presents some of the work being done and his successes at Elemental Cognition. “What we did was we went from an architecture where you get the state of the art today, which is the statistical language modeling and black box stuff, to a much richer architecture that at its core is knowledge and reasoning."

“We still do knowledge discovery using deep learning over large corpora, but we translate that into formal knowledge and reasoning so that we can now produce answers, explanation, logical explanations, and explain them.”

David takes us through his work at Elemental Cognition, demonstrating this new approach, its efficacy, and his vision for the future, and includes a live demo.

Here and throughout his presentation and accompanying slides, world-renowned AI researcher David Ferrucci provides understanding and perspective on machine learning and natural language understanding that is accessible to novice and initiated alike. He separates reality from hype and signal from noise, and does so with a keen sense of humor as well.

You will find the full presentation here and The Inevitable 2020 Series lineup here.