Singularity Summit 2012

AI and the Barrier of Meaning

[Video]


For more transcripts, videos and audio of Singularity Summit talks visit intelligence.org/singularitysummit

Speaker: Melanie Mitchell

Transcriber(s): Ethan Dickinson and Jeremy Miller


Moderator: Melanie Mitchell is next. She is a professor of Computer Science at Portland State University. While working on her PhD, she co-authored "Copycat," which is an artificial intelligence program that creates analogies. In addition to her work on AI, she has done extensive research in the areas of evolutionary computing, and cellular automata. She is the author or editor of five books, with titles including, "Analogy-Making as Perception," and "An Introduction to Genetic Algorithms." Professor Mitchell's most recent book, "Complexity: A Guided Tour," won the 2010 Phi Beta Kappa Science Book Award. Please give a warm welcome to Professor Melanie Mitchell.

[applause]

Melanie Mitchell: In 1985, the mathematician Gian-Carlo Rota asked, "I wonder whether or when AI will ever crash the barrier of meaning." In my talk today, I'm going to explore what it might mean to understand or perceive meaning, why meaning is still a barrier for AI, and what it would take to unlock this barrier. I'm going to try to do all this in 20 minutes.

The question of understanding meaning has been brought to the fore for a much wider community than the AI community by IBM's Watson computer, which appeared on television and gave at least the strong appearance of understanding natural language. One of the things I'm going to ask is to what extent it understood language, and what it even means to understand.

The first thing to do is give my own definition of understanding, which is "The ability to fluidly adapt one's knowledge so as to perceive appropriate analogies in new situations." Here I'm bringing analogy to the forefront of what it means to understand, and I'm going to expand on that.


When we think about analogy, typically we think about very high-level thinking, maybe even the source of rare intellectual insight. Darwin made a famous analogy when he said that biological competition is like economic competition. Yukawa won a Nobel Prize for making an analogy between the nuclear force and the electromagnetic force, in particular an analogy between electrons and a then-unknown particle that mediates the nuclear force. Von Neumann perhaps jump-started the computer age by thinking about the computer as an analogy to the brain. And, the reverse of that, Simon and Newell jump-started artificial intelligence by saying that the brain, in fact, is like a computer.

These are all very high-level analogies made by maybe the smartest among us. But actually, the way I view analogy, and where I think its real import lies, is in more everyday thought, the kinds of things we say every day. For example, it's the source for what we might call "labels" for situations. An example: you hear a piece of music and you say, "That sounds like Mozart," so you're in some sense making an analogy between something you know of Mozart and this new piece of music that you're hearing. "It's another Vietnam": people will describe some abstract situation, some political, economic, or military situation, as being "another Vietnam." We say this as just natural, ordinary language, but it's really making quite an abstract analogy between two abstract situations.

Another example might be a "cover-up." We recognize this abstract concept of "cover-up" in many different situations, very flexibly. Idioms, like "the pot calling the kettle black." That's something we learn as children and are able to apply to different kinds of situations. Or things like, "People who live in glass houses shouldn't throw stones." We say these things without even knowing consciously that we're making quite sophisticated analogies, but that's exactly what we're doing.

In fact, in even more ordinary situations, you have probably heard yourself say things like, "Been there, done that." That word "that" in "done that" can be used very flexibly, in the same way that you might say, "The same thing happened to me," in a situation where it's not at all the same thing that happened to your friend, but it's analogous in some more abstract way. "If I were in your shoes..." or even "Ditto."

All these things are hidden analogies at the core of all of our conceptualization, or what you might call "abstraction" or "categorization." I see all of these as lying on a spectrum of cognitive analogy-making. Again, "understanding" is the ability to fluidly adapt one's knowledge to different situations so as to perceive appropriate analogies. I haven't defined what I mean by "appropriate"; I'm going to try to do that a little bit later. But to sum it up in a slogan, I would say that analogy is the vehicle by which concepts mean.

I want to tell you a little bit about my own journey to thinking about meaning and artificial intelligence. Back in the early days of computers, my father was a computer designer. This is a picture of him working on a very early computer in 1959. He helped found an early start-up computer company called Scientific Data Systems, which made mainframe computers. It got bought out by Xerox in the 60s, and he ended up retiring at a relatively young age and rebuilding one of these computers, the Sigma 5, in our den. I grew up with this as kind of like another sibling in the family.

[laughter]

Melanie: It wore a button that said "I pray in Fortran."

[laughter]

Melanie: As a young child, I wondered, "Does the computer actually pray in Fortran? While we're asleep, maybe it's in there praying?" This struck me as a real possibility, and it carried me through my intellectual development and got me interested in the questions, "Do computers think? Can they think? How can they think?"

In 1984, I actually decided to go into the field of artificial intelligence. I went to graduate school at the University of Michigan to work with Douglas Hofstadter, who was working on modeling how people make analogies, and who ended up writing a book about all of our work together, with a number of other graduate students, called "Fluid Concepts and Creative Analogies," subtitled "Computer Models of the Fundamental Mechanisms of Thought." This book has a nice distinction, in that it was the very first book ever sold on Amazon. So that was kind of a coup. [laughs]

One of the programs that I worked on with Hofstadter was called Copycat. Another student, Jim Marshall, worked on an extension called Metacat. These were computer programs that explored human-like understanding and meaning via analogy. What they did was make analogies in very restricted domains, but they made them in the way we believed humans might make them in more real-world situations.

An example. We used strings of letters to represent abstract situations. We have a situation up there on the top, "a b c changes to a b d." Then we have another situation, "p p q q r r," and these are situations with objects, relationships between objects, actions that take place, a change of a letter. We want to say, "What's the analogous change from 'a b c to a b d' to 'p p q q r r?'"

OK, what I'm going to do is show you a little movie of Copycat running, and you'll see that the way it works is that many different, relatively simple agents try to formulate relationships, structures, correspondences, and so on. Please start the movie. [clip begins, showing different lines and boxes appearing and disappearing to connect and group letters]

So you'll see the action of lots of simple explorations. The program is stochastic, meaning it makes random decisions, and those decisions get increasingly channeled into more and more focused explorations. You can see it structuring the bottom string by recognizing that there are groupings of letters, and that maybe the groupings of letters are more important than the individual letters. It notices that "a b c" is an increasing sequence in the alphabet, and it tries to make correspondences between the concepts it finds in the top string and the concepts it finds in the bottom string. It has a network of concepts, not shown here, that interacts with the perceptual agents to try to make coherent sense of the situation. [clip finishes] It finally comes up with an answer, "p p q q s s."

What it's doing here is discovering that the change in the top string is that the rightmost letter is changed to its successor. If we took that literally and applied it to the bottom string, we would get "p p q q r s," which seems like a very literal-minded answer. Instead it says, "Well, 'letter' in the bottom situation is really not to be taken completely literally, but a little bit flexibly." So I put it in quotes here, and in fact it gets translated into the notion of letter-group, which we call a "conceptual slippage."
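As a rough illustration of those two readings, here is a toy Python sketch. It is only a sketch of the before-and-after of the slippage, not Copycat itself, which arrives at its answers through populations of stochastic agents and a concept network rather than hand-coded rules like these:

def successor(ch):
    # Next letter in the alphabet; no wraparound, as in Copycat's letter world.
    if ch == "z":
        raise ValueError("'z' has no successor in this domain")
    return chr(ord(ch) + 1)

def literal_answer(s):
    # Literal reading: change only the rightmost letter to its successor.
    return s[:-1] + successor(s[-1])

def slipped_answer(s):
    # Slipped reading: "letter" slips to "letter-group", so the whole
    # rightmost group of repeated letters changes to its successor.
    i = len(s) - 1
    while i > 0 and s[i - 1] == s[i]:
        i -= 1
    return s[:i] + successor(s[i]) * (len(s) - i)

print(literal_answer("ppqqrr"))   # ppqqrs -- the literal-minded answer
print(slipped_answer("ppqqrr"))   # ppqqss -- the answer after the slippage

The point of the sketch is only to show what the slippage changes; in Copycat, deciding whether to slip at all is precisely what the program has to discover for itself.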

Another example: if we just turn the bottom string around and reverse it, the structure changes a bit. Here's what the program comes up with on a particular run. [Clip shows bottom string of letters: “r r q q o o”.] It says, OK, on top we change the rightmost letter to its successor, but here, because the string is reversed, we not only have to make a slippage between "letter" and "letter-group," but also between "successor" and "predecessor."

The last example is this problem: "a b c changes to a b d; what does x y z change to?" The program doesn't get the answer "x y a," because it doesn't know about circularity in the alphabet. [Clip shows bottom string of letters: “w y z”.] Here's one of the answers it comes up with, where it says, "a is at the front of the alphabet, z is at the end of the alphabet, so they correspond, and c corresponds to x," and so now we have multiple slippages, where we change the leftmost letter to its predecessor.
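Continuing the same toy sketch (again, hand-coded purely for illustration, not Copycat's actual mechanism), the literal rule breaks on "x y z" because this alphabet has no circularity, and the mirrored pair of slippages just described repairs the reading:

def successor(ch):
    if ch == "z":
        raise ValueError("'z' has no successor in this domain")
    return chr(ord(ch) + 1)

def predecessor(ch):
    if ch == "a":
        raise ValueError("'a' has no predecessor in this domain")
    return chr(ord(ch) - 1)

def literal_answer(s):
    # Change the rightmost letter to its successor.
    return s[:-1] + successor(s[-1])

def mirrored_answer(s):
    # Slippages: rightmost -> leftmost, successor -> predecessor.
    return predecessor(s[0]) + s[1:]

try:
    print(literal_answer("xyz"))
except ValueError as err:
    print("literal rule fails:", err)   # no circularity in this alphabet
print(mirrored_answer("xyz"))           # wyz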


The program is stochastic, so it can also come up with other answers to this problem. One of the ones it comes up with is "x y d," where it says, "Well, just change the rightmost letter to a d," and does the same thing down here. That's a bit of what we might call a "blockheaded" answer. Then it can get answers like this [clip shows bottom string of letters: “d y z”], where it actually figures out that the rightmost letter corresponds to the leftmost letter, but it keeps the changed letter literally as "d." So it's making a creative leap in one sense, but keeping something very rigid and literal in another sense, so this might be called a "crackpot" answer.

[laughter]

Melanie: Understanding is a continuum, obviously. Maybe it's even a step function. We might have what we could call shallow understanding, which is the literal application of our concepts to a new domain, or deeper understanding, where we are actually able to fluidly adapt our concepts to a new situation by using conceptual slippages.

In short, I believe that the work we did on Copycat is not limited to this letter domain, but really extends more generally to many different domains, and the mechanisms that we modeled will actually work in more realistic domains. I'll talk about that in a minute. But just as a definition for now: deep understanding is the ability to make appropriate analogies in a vast array of new, previously unseen situations.

I wrote a book called "Analogy-Making as Perception," which showed Copycat working on a vast array of different situations in this limited letter-string world. My advisor Doug Hofstadter went even further and said analogy isn't just perception, but is actually the core of cognition. He has a new book coming out very soon that gives his view on this whole idea of analogy as the core of cognition.

Coming back to a program like Watson: I was really impressed by Watson when I saw it on television, but the question I kept asking was, does it cross the barrier of meaning? A number of people talked about Watson as if it actually did, in some sense, understand meaning. From the IBM press release: "Watson understands natural language, breaking down the barrier between people and machines." Dan Cerutti, one of the main people at IBM promoting this project, said, "Watson... is able to understand ambiguous human language... I know some people who can't do that."

[laughter]

Melanie: Ray Kurzweil, last year at this event, said that "There's no way Watson could answer these Jeopardy queries if it had no understanding of language. It is understanding some fairly subtle language, including puns, similes, and metaphors." So this is a pretty strong claim.

On the other side though, Stanley Fish, a philosopher writing in the New York Times, said that Watson's performance "is a wholly formal achievement that involves no knowledge (the computer doesn't know anything in the relevant sense of 'know')."

Who's right here? I wanted to show you a couple of clips from Watson – can you start this movie please – to try to get this idea across.

[clip starts playing]

Host: Good morning everybody, thank you for being here. What do you say we play Jeopardy? Let's get right into the Jeopardy round. These categories: "A Man, a Plan, a Canal... Erie!", "Chicks Dig Me," "Children's Book Titles," "My Michelle," "M.C. 5," and finally "Vocabulary." Ken, you're in the first position, please make a selection.

Ken Jennings: I've never said this on TV, "Chicks Dig Me" for 200 please sir.

[laughter]

Host: "Kathleen Kenyon's excavation of this city mentioned in Joshua showed the walls had been repaired 17 times." Watson!

Watson: What is Jericho?

Host: Correct.

[clip stops playing]

Melanie: OK. "Chicks Dig Me." It's a pun, it's a category about female archaeologists. A typical Jeopardy query. OK, can you show the next video?

[clip starts playing]

Host: Watson!

Watson: What is Crete?

Host: Yes.

Watson: Let's finish "Chicks Dig Me."

[laughter]

[clip stops playing]

Melanie: OK, so Watson said, "Let's finish 'Chicks Dig Me,'" and everybody laughed. So why did they laugh? I asked a lot of people this question, why are the people laughing? Everybody said about the same thing, "Well, Watson, you know, it's like a computer, it has this robotic voice, and it has no idea what 'chicks' are, what it means that 'chicks dig me,' and it's kind of funny."


What does it mean to understand the meaning of a concept, to understand "chicks dig me"? It means being able to recognize situations in which the concept applies, being able to fluidly use the concept in new situations, and being able to make appropriate analogies. So when Ken Jennings said "chicks dig me," he had a certain understanding of it that allowed him, for instance, to know that there are certain situations where "chicks dig me" is maybe an appropriate label [picture showing Bill Murray with a speech bubble with the phrase "chicks dig me" and a checkmark], and other kinds of situations where maybe it's not that appropriate [picture showing Mitt Romney shaking hands with Barack Obama, with a speech bubble with the phrase "chicks dig me" and an X through the picture].

[laughter]

Melanie: Did Watson understand the meaning of "chicks dig me"? I would say no, at least not in any deep way, in the sense in which I'm defining understanding. But then another question: did Copycat understand the meaning of "a b c goes to a b d"? It's a much less complicated concept, but I would say yes, it did, in its own simple domain, because it was able to take that concept and apply it fluidly to a vast array of other situations in its domain. So in a way, Copycat is chipping away at this barrier of meaning.


I wanted to tell you a little bit about how I'm trying to extend these ideas from Copycat and analogy to a more realistic domain. What we're working on now is visual concepts via analogy. Here's a simple visual concept that we've all seen: the concept of walking a dog. We might have a paradigm, a semantic network, or what you might call an ontology for dog walking, where you have a person holding a leash attached to a dog that's walking.

OK, but then there are some issues, like here are a lot of dogs, and here are people running, not walking. So we might have to extend our concepts from "dog" to "dog group," or from "walking" to "running," allowing conceptual slippage. Here's a picture of somebody walking a very unhappy-looking cat.

[laughter]

Melanie: OK, walking an iguana, with its tail taking the place of a leash. These are all in the same general category of this dog-walking concept. Here's a different kind of dog walking [picture of a woman on a bicycle with two dogs running next to her on leashes]. Here's a dog that's riding a skateboard. Somebody on a Segway. This is a device you can buy for $800 where you power your bike with a treadmill and you also walk your dog on the treadmill.

[laughter]

[Shows four other pictures: a dog running on a leash that extends into a car driving next to it, A dog on a leash extending into a helicopter hovering above it, a dog holding a leash in its teeth that is attached to a horse, and a dog holding a leash in its teeth that is attached to a second dog]

Melanie: There are all kinds of dog-walking. It goes on and on and on. This really illustrates the remarkable fluidity of human concepts: we recognize these visual concepts, or any kind of abstract concept, in a very fluid way. We have to start thinking about how people make the right kinds of conceptual slippages, and how we could get computers to do the same.
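To give a rough flavor of what making the right conceptual slippages might look like computationally, here is a small, purely illustrative Python sketch: a hand-built dog-walking frame, a few invented slippage costs, and a function that scores how well a new scene fits the frame. The roles, concepts, and numbers are all assumptions made up for this example; they do not come from the talk or from any actual system.

# All names and costs below are invented for illustration only.
DOG_WALKING_FRAME = {"agent": "person", "tether": "leash",
                     "companion": "dog", "gait": "walking"}

# Lower cost means a more natural slippage; an exact match costs nothing.
SLIPPAGE_COST = {
    ("dog", "dog-group"): 0.1,
    ("dog", "cat"): 0.3,
    ("dog", "iguana"): 0.5,
    ("leash", "tail"): 0.4,
    ("walking", "running"): 0.2,
    ("walking", "cycling"): 0.3,
}

def fit(frame, scene):
    # Sum the slippage costs over the frame's roles; return None if some
    # role has no known slippage at all.
    total = 0.0
    for role, expected in frame.items():
        observed = scene.get(role)
        if observed == expected:
            continue
        cost = SLIPPAGE_COST.get((expected, observed))
        if cost is None:
            return None
        total += cost
    return total

cat_walk = {"agent": "person", "tether": "leash",
            "companion": "cat", "gait": "walking"}
bike_dogs = {"agent": "person", "tether": "leash",
             "companion": "dog-group", "gait": "cycling"}

print(fit(DOG_WALKING_FRAME, cat_walk))    # 0.3 -- a slightly stretched instance
print(fit(DOG_WALKING_FRAME, bike_dogs))   # 0.4 -- two slippages, still dog-walking

The only point of the toy is the shape of the problem: an exact match costs nothing, a natural slippage costs a little, and a scene with no sensible slippage doesn't fit at all. The hard, unsolved part is getting a program to learn such frames and discover which slippages are appropriate, rather than having them written in by hand.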

Here's a more abstract idea of dog-walking –

[laughter]

Melanie: – which we've all done with our yo-yos. Here's dog walking at different levels [picture of a dog doing the yo-yo trick of "walking the dog" while itself being walked].

[laughter]

Melanie: If you go to the Internet you get even more abstract metaphors: "What dog walking taught me about networking," "Meditate like you walk the dog," "Your lawyer on a short leash." There's a thing you can put on top of your wedding cake to illustrate a metaphor about marriage, and so on.

The singularity has been called "the appearance of smarter-than-human intelligence," but my version is a little different. What I'm looking for is the appearance of a machine that crosses the barrier of meaning, that uses concepts as fluidly as humans do, in a broad range of domains.

When will AI cross the barrier of meaning? I don't think it is going to be very soon. Marvin Minsky, a pioneer of AI, said that one thing we've learned from six decades of research on AI is that "easy things are hard." Easy things for humans, like fluid concepts, are the hardest things for computers. How will AI cross the barrier of meaning? Eventually, a key component will be programs that make their own analogies.

Let me finish by making an analogy, and that is that analogy is the key to the barrier of meaning.

Thank you.

[applause]

[Q&A begins]

Man 1: Can you put up your definition of meaning, and tell us why Watson doesn't satisfy it?

Melanie: Well my definition of meaning is to be able to use knowledge and concepts in a fluid way, making appropriate analogies in new situations that haven't been seen before. Watson is able to perhaps make sense in some way of concepts like "chicks dig me," but I don't think it has the ability to fluidly recognize an appropriate "chicks dig me" situation across a vast array of different situations.

[next question]

Man 2: Are there specific improvements we need to make in terms of technique to get to this singularity-like meaning machine, or is it just a matter of needing to keep making faster computers? Have we discovered enough about the processes?


Melanie: No, we haven't discovered enough about the processes.

Man 2: What's left, roughly?

Melanie: We have to understand how concepts are structured, how to learn concepts in such a way that we can then apply them in this flexible, fluid way to new situations. We don't know how to do that yet. People are working on that, I'm trying to work on that, but I think it's still quite a hard and unsolved problem.

[next question]

Woman 1: I'm impressed that you're going to visual inputs, but to make understanding you also have to have sensory inputs like touch and scent, and all kinds of other things that help you make meaning. How would a machine ever really understand "chicks dig me" at that level, on a sensory –

Melanie: I agree that eventually, to understand at the level of human understanding, we need to have the same sensory inputs as humans. I think people are trying to do that; they're trying to give computers different kinds of sensory inputs: visual, auditory, haptic, and so on. I'm not sure that any complete combination of those is really required for meaning. Not all humans have all those senses, but I think that obviously some kind of way to sense the world is essential.

[next question]

Man 3: I wonder if you could get a solution to the problem if you were to educate a computer the way children are educated. When a child is born it knows nothing about language, so if you just had a massive database and massive computing power, and took the thing through "Dick, Sally, and Jane" and "this is a cat," and sequentially made the computer learn the information in the same order that children learn it, couldn't you get the computer to understand meaning in that scenario?

Melanie: That's a very good question. I think it's unknown whether or not children know anything about language when they're born. There's some thought that there are certain brain structures that have evolved, that we have intrinsically and don't learn. That's still an open question. I do think that the process of learning and development is absolutely essential for understanding meaning, and we don't know exactly how to get computers to learn in that way yet.

[next question]

Man 4: Do you have any favorite exercises or practices for developing your analogizing mind?

Melanie: [laughs] That's a good question. I don't, but what I've learned to do is to listen for analogies, I've learned to become a collector of analogies. I think that helps me think about what analogy is, its breadth and how to better understand how the mind works by looking at it through the lens of analogy.

[next question]

Man 5: In what sense do you think that Copycat is doing a better job with meaning in its domain of new letter sequences than Watson is in its domain of encountering new Jeopardy queries? It seems to me that they're both pretty narrow, but not entirely narrow domains.

Melanie: I don't think Watson is making any kind of analogies. I don't think it notices that two questions might be similar in some kind of abstract structure. It's able to categorize questions by type, in terms of being a pun, or a historical question, or something like that. To that extent it can do some kind of analogy, but I don't think it has the kind of breadth to apply its knowledge in new domains that it hasn't seen before.

[next question]

Man 6: Tying in the earlier talk we saw from Professor Kahneman: analogy is one of the greatest bias-introducing things we humans do; we make faulty analogies all the time. Is it a good idea to have a computer that makes a lot of analogies to try to discern the facts or reality in a certain situation? It could conceivably make the same mistakes we would, but on a worse scale.

Melanie: I personally think that we're going to make mistakes. We have to have this ability to make analogies in order to be intelligent; it's really the source of our intelligence. We will make mistakes, but those mistakes are in some sense a reflection of our ability to think. So when people asked Professor Kahneman, "How do we fix these bugs in our thinking?", I'm not sure that fixing them is possible without sacrificing something very essential in the way that we think. I'm not so concerned about the biases that analogy-making introduces; I think that's part and parcel of the whole power of our ability to think in this very fluid way.

[next question]

Woman 2: I'm not quite sure how to phrase this, but... do you think there's a connection between understanding and a sense of humor?

Melanie: Totally! Yes, I think that's a really good point. A lot of our humor comes from analogy, but sort of from bad analogies [laughs]. "Chicks dig me" is one example: it's funny in that context because it sets up an expectation in our minds, but then we figure out it's actually being taken very literally. People laugh when I give the letter-string example of "p p q q r d," because we've set up this expectation of a situation and then we see this very literal interpretation of it. I think that that sort of – bad analogies are a lot of what lies behind humor.

[next question]

Woman 3: To what extent do you believe that a hierarchical model analogous to our biological cortex is sufficient for meaning?

Melanie: Sorry, you said a hierarchical model...

Woman 3: Analogous to our biological cortex.

Melanie: ...analogous to our biological cortex.

I think our biological cortex, along with all the other parts of the brain and our senses, is what gives rise to meaning. If we are able to develop something analogous to that, I believe that will be sufficient for meaning. That "analogous" [laughs] part of it is the key, what that means. That's what I think people don't yet know: how much we really need to imitate.

Moderator: Professor Melanie Mitchell, thank you so much for being here.

Melanie: Thank you very much.

[applause]