Singularity Summit 2012

Q&A: Psychologist Daniel Kahneman, the Pioneer of Heuristics



For more transcripts, videos and audio of Singularity Summit talks visit intelligence.org/singularitysummit

Speaker: Daniel Kahneman

Transcribers: Ethan Dickinson and Jeremy Miller


Moderator: Welcome back for day two of the Singularity Summit. I hope everyone enjoyed day one, I hope you all had a great night last night. You may have noticed that the WiFi is currently off in the building. That is by design, and that’s because we'll be doing a live conversation with Professor Daniel Kahneman to kick off day two of the Singularity Summit.

Daniel Kahneman is a Nobel Laureate and a professor emeritus of psychology and public affairs at Princeton's Woodrow Wilson School. He is most famous for his work on the psychology of judgment and decision-making, behavioral economics, and hedonic psychology. His book, "Thinking, Fast and Slow," was published last year and quickly became a bestseller. Impressively, Mr. Kahneman's Nobel Prize was awarded for his work in economics, though he never took a single course in the subject. Please join me in welcoming, live via Skype, from Princeton, New Jersey, Professor Daniel Kahneman.

[applause]

Moderator: Good morning Professor Kahneman, how are you?

Daniel Kahneman: Just fine.

Moderator: Thank you for being with us.

Daniel: [xx 1:17] in New York actually.

Moderator: Oh, in New York, I thought you were in Princeton today. Well it's great to have you here, and the magic of the Internet brings you right – you're right up on the big screen in front of everyone. We're so thrilled you could be here.

We've seen your Sackler lecture, so we're familiar with some of the background of your work. Of course you've described very interestingly a System 1 and a System 2 in the human brain. System 1 being highly associative in its nature, and automatic and very quick. That's the "thinking fast." System 2, more deliberate, slow, you might call that "reasoning." Incredibly interesting paradigm. How did you first become interested in the paradigm, and can you describe a little bit for us the interaction between those two systems?

Daniel: We started out, my late colleague Amos Tversky and I, looking at incorrect intuitions in the domain of statistics and probabilistic thinking. What was interesting was that we were both teachers of statistics, and yet we recognized wrong intuitions in our own thinking. This really was the beginning of this dichotomy between intuitive thinking and the more deliberate, and in some cases correct, thinking.

This evolved over the years, and it evolved quite a bit, and now I have essentially a metaphor of two systems, one which essentially is our associative memory, which delivers feelings and interpretations and impressions, and usually very quickly, and is highly context-sensitive. The other one is executively controlled and tends to be lazy and to operate by the law of least effort, but ultimately, in most cases, really controls all behavior.

It's the interaction between these two systems that's of interest. We are mostly conscious of our System 2. We identify with that part of us which is deliberate and conscious. But in fact, as I describe it, most of the action occurs in System 1, and System 2 quite often elaborates and rationalizes on what has in a sense already been determined automatically in associative memory.

Moderator: You describe System 2 in some cases as being kind of lazy and not really wanting to do much work, and trusting System 1, the intuitive side of our minds, until the stakes are high. But you also talk in some cases about how we can train our System 2 to be more engaged. You've obviously spent a lot of time thinking about this. Has it impacted the way that you think? Have you developed a System 2 that is more engaged, less lazy, and more at-the-ready than the man on the street's might be?

Daniel: Not at all.

Moderator: [laughs]

Daniel: Really, my book is not intended as a self-help book, and I am the poster child for how ineffective that thing is, because I really do not think that there has been a large influence of all this research on the way I think or on my decisions. In some cases I can slow myself down and actually think about things, but most of the time, like everybody else, I act intuitively.

Moderator: That's funny.

In the book, you describe your hopes for a world in which people are at least more aware of the fact that they're usually operating on intuition. You said that you were hoping for a world in which water cooler gossip could make more use of rationality concepts. For example, "I can't believe the boss is betting the whole company on this. Yeah, we had a great quarter, but hasn't he ever heard of regression to the mean?" Are you aware of any groups that have really been able to do this successfully, and what the results have been?

Daniel: I'm not aware of any groups that have applied my language, or the language I proposed, successfully. But actually, many successful companies have developed cultures of their own and language of their own, and that is a familiar phenomenon for people who study organizations. Not my field, but I have read about it. Companies that have a strong culture tend to have a language to describe mistakes, to describe challenges, and that language actually structures in some important ways how the companies function.

Moderator: In terms of the structure of the brain and how it relates to System 1 and System 2 – which is obviously an abstraction or a metaphor – how much do we know about the biology that underlies the two systems?

Daniel: More is known than I know. I know very little. This is a psychological book, and not a book about the neuroscience of it. From my very limited understanding, a fair amount is known at the moment about executive control, which is what I call System 2. The areas in the brain that are critical in executive control have been identified, and in some cases, in various ways, these areas can be trained, to a limited extent at least. Those are also areas that are identified with intelligent functioning. Intelligence and self-control and executive control of actions and thoughts are beginning to be reasonably well understood by neuroscientists. About System 1, I don't think that there is any particular location for System 1.

Moderator: You mentioned that, to some degree, we can train our System 2. How much can we do that? How much do we know about that? Is System 2 generally stable in terms of how engaged it is for given individuals?

Daniel: There are individual differences in the operation of System 2, and we recognize them. There are differences in self-control, which are quite obvious, and there is a test that I mentioned that has become quite famous actually over the last decade... I'm blocking on it, but I'll give you the most familiar example of it, which is that a bat and a ball together cost a dollar ten, and the bat costs a dollar more than the ball, how much does the ball cost?

That's an interesting problem because everybody, just about, has an immediate association which is that the answer could be 10 cents. In fact, that answer is wrong. Anybody who says 10 cents, you know something very important about them, you know they haven't trained themselves. That's true of about 50 percent of students at Princeton or Harvard or MIT, so it's not an issue of ability. It really is an expression of the laziness of System 2.
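(The arithmetic behind the riddle, spelled out for the reader: with $b$ the price of the ball in dollars,

$$ b + (b + 1.00) = 1.10 \implies 2b = 0.10 \implies b = 0.05, $$

so the ball costs 5 cents and the bat $1.05. The intuitive 10-cent answer fails the stated constraint, since it implies a $1.10 bat and a $1.20 total.)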

It turns out that there are some fascinating correlates of passing that riddle and others like it in people's attitude toward deferred gratification and to other things. There are individual differences. It's not a huge amount of variance. There is also some flexibility. People evidently can be taught something about self-control; there is definitely work on that. There are effects of education, obviously education has an effect on that. It is not completely fixed. There are probably genetic individual differences. It's affected by culture, and by training, and by context.

Moderator: Great. Turning a little bit more toward your work on biases, which is something that I know is incredibly relevant and of intense interest in this audience, you've identified an array of biases – this question is actually from the audience, from Johan Edstrom – what bias do you think, or maybe it's a couple, are most important in terms of their influence on life success? Which are the ones that we should be most aware of if we want to be successful in the world?

Daniel: I think clearly the most significant bias is not one that I have studied: it's optimism. I think that people are generally optimistic, and another important thing: optimistic people have a better chance of becoming influential, and influential people tend to be quite optimistic, and biased in that direction. They tend to have an illusion of control, that they control their environment more than they actually do. That allows them to influence people, to get things done, and sometimes it gets them and all of us into trouble.

Moderator: Obviously we can try to promote this idea, but it seems this is deeply ingrained in us. We are inclined always to follow those who lead with conviction. Do you see any way to mitigate that?

Daniel: It's going to be very difficult to train the intuitive response to forcefulness and optimism. That intuitive response is absolutely overwhelming. The debate that we saw last week is an example of what happens to intuitions when you see somebody appearing more forceful than the other. That's not going to change.

It is possible, I think, for organizations to set up systems that can put the brake on overly optimistic visions of the future and illusions of control. It involves to some extent legitimizing dissent and protecting pessimists. Pessimists are really a valuable resource in organizations, and they need protection. By protecting them and protecting their voice, allowing them to be heard, although nobody likes to hear them, an organization to some extent can protect itself against true mistakes.

Moderator: That leads in perfectly to the next question from the audience, which is from Kip Werking, and he noted that in your article "Why Hawks Win," you point out an array of biases that all seem to favor the hawks – and this is hawks, of course, in the context of war versus peace, hawks versus doves. Conviction, the ability to inspire, some tendency perhaps toward group conflict, all kind of feeding into that. So Kip's wondering, what do you think about the concept of free will? Is that potentially another concept which we are all drawn to because of biases? He's thinking of biases including the fundamental attribution error, illusion of control, et cetera, et cetera. Do you think that we are biased in a way to think that we have more free will than we really do?

Daniel: Whether or not we have free will is a very complicated question. What is certain from psychological research of recent years is that we feel we have free will even when we clearly don't, so that when you cause people to act in a certain way, you quite often can create a situation where they will feel they had free will. So there is an illusion of free will. Whether or not there is free will is a question that I don't feel really equipped to discuss, I don't feel anyone is equipped to discuss very well. That's my skepticism.

Moderator: OK, another question from the audience, coming from Patrick Tucker. You have on many occasions talked about how people make poor predictions when they use the inside view. That is, when I look at the particulars of my situation and I try to think, "What can I accomplish? How will my situation play out?", I tend to focus on the low-level details, when the question I could ask that would in general produce much better results is "How have similar situations turned out in the past?" – just looking at the reference class tends to produce a better result. Patrick is wondering if you have any opinion about how the Big Data movement and the self-tracking, self-quantification movements that are probably most popular here in the San Francisco Bay Area might influence our ability to use reference classes, or might they make the inside view more powerful?

Daniel: I think that's an excellent point, and I think that at least in principle there is an opportunity here. There is an opportunity for people to discover regularities in their own life, and there will be very soon, if there isn't already, an opportunity to look at a distribution of cases and the outcomes of similar cases. Just adopting the outside view as a procedure, as a process – that's not going to be intuitive, I think. But you may learn that under certain conditions it is a good idea to consult the outside view, and there certainly will be a lot of information that will support that.

A good example is medical diagnosis, where the physician may have intuitions about the state of the patient from looking at the patient and various cues, but supplementing that with statistics, which can be almost instantly available, is likely, in some cases at least, to prevent mistakes.
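To make that point concrete, here is a minimal sketch, with hypothetical numbers rather than anything from the talk, of how base-rate statistics can check a confident clinical impression:

    # A minimal sketch with hypothetical numbers (not from the talk) of
    # supplementing a clinical impression with statistics, via Bayes' rule.

    def posterior(prevalence: float, sensitivity: float, specificity: float) -> float:
        """P(condition | positive finding)."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # A finding that feels conclusive (90% sensitive, 95% specific), applied
    # to a rare condition (1% prevalence), still leaves the patient more
    # likely healthy than not.
    print(posterior(prevalence=0.01, sensitivity=0.90, specificity=0.95))  # ~0.15

The base rate over similar cases – the outside view – corrects an inside view that would otherwise overweight the vivid positive finding.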

Moderator: Next question from the audience is a great one. Simply, can you share a couple of your favorite stories or surprises that you have discovered over the years in investigating biases – funny mistakes people make. I'm sure you've got a ton of these stories.

Daniel: I think my favorite experience, was a story I tell in the book, actually, about training instructors in the Israeli Air Force flight school. I was telling them that using positive reinforcement is actually more effective than screaming at people, than punishment, as a way to teach skills. Somebody piped up and said, "What you're saying may be true for the birds –" I'd been speaking of pigeons, "– but it's not true for people, and I know that when a cadet performs an aerobatic procedure, then if he does it poorly and I scream at him he'll do better next time, and if he does it very well and I praise him he'll do worse next time. So don't tell me that punishment doesn't work and praise does, because I know the opposite is the case."

This is a clean case of regression to the mean: it happens whether you scream or praise. Extremely good performance tends to be followed by deterioration, and extremely weak performance tends to be followed by improvement. But we live under a perverse contingency in which we are really punished for praising people and rewarded for punishing people. That was actually for me the most dramatic example of encountering a bias and discovering it.
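The effect is easy to reproduce in a minimal simulation, under an assumed model in which each flight's score is a fixed skill plus independent random noise (an illustrative assumption, not something from the talk). No praise or punishment appears anywhere in the model, yet bad flights are "followed by improvement" and good flights by "deterioration":

    # Regression to the mean with no feedback at all: performance is a
    # constant skill plus noise, so extremes are followed by more ordinary
    # scores simply because the noise does not repeat.
    import random

    random.seed(0)
    skill = 0.0  # the cadet's true ability, unchanged from flight to flight
    flights = [skill + random.gauss(0, 1) for _ in range(100_000)]

    after_bad = [flights[i + 1] for i in range(len(flights) - 1) if flights[i] < -1.5]
    after_good = [flights[i + 1] for i in range(len(flights) - 1) if flights[i] > 1.5]

    print(sum(after_bad) / len(after_bad))    # near 0: "improvement" after a bad flight
    print(sum(after_good) / len(after_good))  # near 0: "deterioration" after a good one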

Moderator: Another question from the audience. You have noted on several occasions, and even said about yourself this morning, that experts are subject to all the same biases and subject to the same inclination toward intuitive thinking that we all are, despite their expertise in the field. What consequences do you think that has for public discourse? Should we, as the public, be challenging experts more, or are they still the experts and we should be deferring to them?

Daniel: I think that what we should know is what the limits of expertise are. I'm not denying expertise; expertise is wonderful and there is a lot of it around and we should respect it. But there are cases in which people believe that their expertise extends beyond the true limits of what they do know. It's very difficult for experts to know where their expertise ends.

There are a few questions that we should always ask when we're considering expert judgment. The first of these is, is the topic about which the expert is pronouncing one that lends itself to expertise? Personally, I have grave doubts about long-term prediction of any kind, and I think that expertise in long-term prediction is a questionable concept. I know that this is an issue for this audience.

There are domains where people have expertise and have made judgments. We have to know, when they're making a new judgment, whether they have expertise in that domain, and whether they have had a decent opportunity for feedback to learn about that domain. In quite a few cases you find that experts just don't do that spontaneously, and that the audience does not spontaneously question them. Pundits on television are a good example. They seem to believe that they know what they're talking about, and the audience seems to share that belief, and in many cases the belief is quite misplaced.

Moderator: I want to come back in a moment to the topic of long-term predictions and take your temperature on that. This is a conference with the theme of the singularity, the singularity being the emergence of smarter-than-human intelligence, and obviously there are a lot of big predictions being made.

A couple more down-to-earth questions first. Are there examples of people who have, for whatever reason – possibly it's just the way that they were born, or possibly they have been able to train themselves into it – empowered their System 2 so much that they begin to be dysfunctional on the other end of the spectrum, where they are no longer able to react as intuitively as life sometimes demands?

Daniel: I think undoubtedly there are pathological and near-pathological states in which System 2 is actually deficient and over-controlling. Obsessive-compulsive disorder is, in some sense, a disease of System 2. I think that there are people who are paralyzed by analyzing things too much. So yes, certainly, you can have too much of System 2.

Moderator: You also mentioned feedback as a way of, first of all, identifying experts. Expert judgment is more reliable when it has been validated by feedback, and certainly as we think about how to train ourselves to be better thinkers, seeking out feedback proactively would seem to be a great strategy. What suggestions can you give us for doing that? Tools, strategies, things that we can take out of this and say, "Here's how I'm going to seek feedback on my own beliefs."

Daniel: The problem is that for an important problem, and for important actions, feedback tends to be very poor, and tends to be far delayed. One obvious example: decisions about the economy, or economic policies, have their effects years later, and the causality of economic observations is always debated, because there are so many factors, and it's very difficult to trace the contribution of a particular action to a particular outcome. Looking at feedback is very, very hard just where it is most important.

But what is really quite interesting is that many organizations that make many decisions, and I have in mind financial organizations, very rarely question their own decisions or follow the true outcomes. You might have an organization that picks stocks for a living, and it would be rare that they would have a backward look at their decisions, to see if the stock that they rejected actually did better or worse than the stock that they picked. There are many opportunities for analyzing feedback that are not in fact exploited.

Moderator: Venturing now into territory where feedback will be quite hard to come by, and a little closer to the theme of the conference, the singularity: Roy Amara, the past president of the Institute for the Future, famously said that we tend to overestimate the effect of technology in the short run, but underestimate it in the long run. That is, we think that change will be coming sooner than it in fact does, but over the long term, change exceeds our expectations.

Projecting this statement onto the System 1/System 2 paradigm, it would seem that the short-run stories of change possibly suffer from the inside-view problem, but the long-run stories of change may violate too many of our established associations, and so they may seem implausible. Do you think that that is a reasonable model of that statement, and does that statement ring true to you?

Daniel: I think that's certainly the case, that we are wired for short-term prediction of the future. That's what we're good at. We can do some extrapolation of the stories that we tell about the present. We are reasonably good at doing that.

Whether long-term prediction is possible or not, I think depends primarily on the world, it depends primarily on reality, whether reality is predictable or not. If it is unpredictable in principle, then no matter how clever we are, we're not going to be very good at it. If it is predictable in principle, at least to some extent, then we might learn how to predict. The main question I think is about the world, not about our mind, in this context.

Moderator: Here's a follow-up which does still talk about our minds, and I think you're right to point out that if the world is fundamentally unpredictable, there's not a lot we can do. But I'm wondering what you think are the most important biases that people should be thinking about as they consider this concept of the singularity. It's obviously a far-term future event, something that we don't expect tomorrow or next week, but maybe 30 years from now, 100 years from now, whatever. And it's something that feels quite shocking and implausible. So what are the big biases, if you could make a checklist of things people should be checking against?

Daniel: I think there is a major bias, which is believing in scenarios, and it's a very common thing. We tell stories about the future, and we evaluate those stories by their coherence, and to a very large extent by our inability to conceive of alternative scenarios. A few scenarios that come to our mind tend to dominate our expectations. That is the risk, I think, in any attempt to predict the long-term future: that we can tell stories.

The stories we tell, to judge by past experience, are almost always wrong. Something else happens that we didn't expect. That seems to be the story. But we do tend to believe in some scenarios. The singularity scenario is particularly appealing, because in one sense it looks like an inevitability, but I think there is a lot of experience that inevitable things don't always happen.

Moderator: Indeed. We've got time for one more question, and this one is again somewhat speculative... very speculative. Some of the folks here have been thinking about and advocating the idea that an artificial intelligence of some sort, whether it's more limited as a tool or possibly a more advanced form that can even rival or surpass our own intelligence, could be the best available tool to mitigate our own human biases. Do you have any hope that that might work? Does that seem fanciful to you, or does it sound like the right approach?

Daniel: You remind me that when I was a student, which was a very long time ago, one of my professors was a logician. He was asked about his predictions for when there would be a computer that could understand language, and very confidently he said, "Never. And by 'never' I mean at least 20 years."

[laughter]

Daniel: I think our abilities to predict the future and what is possible and what is impossible are extraordinarily limited, so I'm not going to venture very far along that path.

Moderator: OK. We are unfortunately out of time. Professor Daniel Kahneman, thank you so much for joining us live via Skype here at the Singularity Summit. It's been a real pleasure to talk to you.

[applause]

Daniel: It was my pleasure to be with you. Thank you.

Moderator: Thank you.