Singularity Summit 2012

Rationality and the Future

[Video]


For more transcripts, videos and audio of Singularity Summit talks visit intelligence.org/singularitysummit

Speaker: Julia Galef

Transcriber(s): Ethan Dickinson


Moderator: Julia Galef is our next speaker. She is a San Francisco-based writer and public speaker specializing in science, rationality, and design. As the president and co-founder of the nonprofit Center for Applied Rationality, she is dedicated to harnessing breakthroughs in the study of cognition to improve human decision-making through the teaching of math- and science-based techniques. She holds a degree from Columbia University and serves on the board of directors of the New York City Skeptics, where she is co-host of the podcast "Rationally Speaking". Her writing can be found on the websites Rationally Speaking and 3 Quarks Daily. Please join me in welcoming Julia Galef.

Julia Galef: Hi, good morning. My story begins on a hot day in Burma in the 1920s, where a young British officer named Eric Blair was stationed. He received some disturbing news one morning: a work elephant had gotten loose from its chains and was going on a rampage, and he was needed to track it down and, if necessary, subdue it. So he set off in pursuit of the beast with a gun. As he tracked it, a crowd of interested onlookers amassed behind him, curious to see what would happen.

When he finally found it, it was in a field grazing peacefully, and he realized two things. First, that the elephant was clearly calm now and posed no danger to anyone, and that he had neither the desire nor the justification for killing it. And second, that the crowd of now nearly 2,000 interested onlookers behind him expected him to shoot the elephant, that they would be disappointed and irritated if he didn't, and that he would look weak and foolish. Blair shot the elephant in the head. That failed to kill it, so he shot it again, and again, until he finally gave up and left it there to die a slow and painful death.

Eric Blair later adopted the pen name George Orwell, and he wrote about this experience in an essay titled "Shooting an Elephant." The last line of the essay is, "I often wondered whether any of the others grasped that I had done it solely to avoid looking a fool."

What I want to highlight about this story is how clearly you can see this tension between these two competing decision-making processes, wrestling for control of George Orwell's decision, one of which is programmed to avoid looking foolish in front of other primates, the other of which can reason about the fact that the elephant is not a danger, and that shooting it would be unnecessary and cruel.

This first process, the one that wants to avoid looking weak, is an example of one of many decision-making processes that were selected for in the environment of evolutionary adaptedness, because they helped our genes spread, but which today get triggered in modern contexts where they don't actually apply, like in front of crowds of strangers who you don't need to impress and who you're never going to see again.

In this case Orwell, instead of reasoning about his values, deferred to this instinctive process that was optimized for a different goal, genetic proliferation, and in a different time and place. When we make decisions about our future, it's crucial that we avoid making the same mistake.

You're probably familiar with some of the ways in which our brains work sub-optimally in the modern environment, like the fact that we constantly crave sugar and fat, which was of course helpful to our ancestors in motivating them to get enough calories on the savannah, but which today, in our modern age of abundance, produces things like this: [slide displaying "deep fried butter"].

The thing is, this also holds true of the way that we reason and make decisions. When you encounter a claim, someone makes a claim to you – anything, "The moon is made of green cheese," or "Marijuana causes cancer," whatever – there are all sorts of questions that you could ask that would help you determine how much credence to put in that claim. Like, "Where did this person get their evidence?" or "How would the world look differently if this were true versus if it were false?"

But our brains don't automatically ask any of those questions. Instead, our brains automatically ask questions like, "How symmetrical are the facial features of the person making this claim?" I wish that were an exaggeration, but research has repeatedly shown that you are much more likely to be given the benefit of the doubt by a jury if you're on trial, or by the electorate if you're running for office, or by adults if you're a child, if you have symmetric and classically handsome features.

What else do our intuitive systems of epistemology find persuasive? Well, for one, personal experience and anecdote. Take the theory that vaccines are responsible for the rise in autism diagnoses, which has been disproven beyond any reasonable doubt by study after large epidemiological study. Yet to tens of thousands of parents, like celebrity Jenny McCarthy, that scientific evidence doesn't hold a candle to the evidentiary weight of a single emotionally fraught example of a child who developed autism after being vaccinated.

What else? One of the most ubiquitous decision-making heuristics we use is the question "How do I feel emotionally about this issue?" That was probably a really good proxy for issues that involved darkness, or that tiger over there. But now that the issues we're considering are more sophisticated, our emotional affect surrounding an issue becomes an increasingly bad proxy for what the consequences of a decision or a policy actually would be.

For example, natural food. People like the idea of natural food. It has warm, positive emotional affect surrounding it, but it's only very vaguely related to the questions that we really care about, like "Is it healthier?" or "How much better for the environment is it?" Conversely, we have negative associations with nuclear power. It conjures up creepy things like mushroom clouds and mutations. But those emotional associations are again only tangentially related to the real question that we should care about, which is "How many lives does it cost or save relative to other potential sources of energy?"

This tendency to use our emotional affect as a guide to the decisions we should be making is especially dangerous when it comes to decisions that involve large-scale loss of human life. This is a scene from one of my favorite movies, "The Third Man," starring Orson Welles. Welles plays a man named Harry Lime who has been profiting in post-war Vienna from the sale of counterfeit medicine. As a result, hundreds of children have been dying from meningitis. In this scene, he's at the top of a Ferris wheel with his old childhood friend Holly, and Holly is taking him to task for his actions. Play the clip please.

[video clip begins]

Harry Lime: You oughta leave this thing alone.

Holly: Have you ever seen any of your victims?

Harry: You know I never feel comfortable on these sort of things. "Victims." [inaudible 8:05] be melodramatic. Look down there. Would you really feel any pity if one of those dots stopped moving forever? If I offered you 20,000 pounds for every dot that stopped would you really, old man, tell me to keep my money? Or would you calculate how many dots you could afford to spare – free of income tax, [inaudible 8:25] free of income tax.

[video clip ends]

Julia: Hopefully, none of us are poised to become the next Harry Lime. But we do share one thing with him, which is our ability to remain completely emotionless in the face of large-scale loss of human life, if that information is presented to us in an abstract form.

A psychologist named Paul Slovic, who's actually on the board of advisors of my organization, has a really disturbing body of research showing that we care less, and we are less willing to donate money, to save a large group of people, than to save a single person. This is something that, before Slovic proved it, before Welles illustrated it, was observed pithily by a German writer named Kurt Tucholsky. You may have seen this quote before, "One man's death, that is a catastrophe. A hundred thousand dead, that is a statistic."

The corollary to this phenomenon is that we can barely distinguish emotionally between catastrophes of vastly different orders of magnitude. I couldn't possibly say this better than Singularity Institute's own Eliezer Yudkowsky, so I'm just going to quote him here. "The human brain cannot release enough neurotransmitters to feel an emotion a thousand times as strong as the grief of one funeral. A prospective risk going from 10 million deaths to 100 million deaths does not multiply by 10 the strength of our determination to stop it. It adds one more zero on paper for our eyes to glaze over."

Unfortunately, the decisions that we have to make today are increasingly of a kind that our brains are not equipped to make. Questions of how we should approach long-term risks like global warming or nuclear proliferation, questions about what might happen if we pursue unprecedented technologies like artificial intelligence or nanotechnology, or what might happen if we don't pursue them, and what the attendant probabilities are. These are unprecedentedly complex, sophisticated, and abstract decisions, and the stakes are unprecedentedly high; we can't afford to get these decisions wrong.

That's why we founded the Center for Applied Rationality, or CFAR for short. We are a nonprofit organization devoted to training people to avoid cognitive biases, the systematic errors that cognitive scientists have learned the human brain tends to make, and especially to training people who want to take an active role in shaping the future of our society, of our civilization.

Today I'm going to share with you three principles that are crucial if you want to actually improve human rationality.

Principle number one. People don't like being told they're not rational.

[laughter]

Julia: Yeah I know, we were surprised too!

This is a comic strip from "Calvin and Hobbes", and if you can't read it, the first box Calvin is sitting at his cardboard stand, with a sign that says, "A swift kick in the butt, $1". Hobbes asks "How's business?", and Calvin says "Terrible." Hobbes says "That's hard to believe," and Calvin says "I know, I can't understand it. Everybody I know needs what I'm selling." Just cross out the phrase "A swift kick in the butt" and replace it with "rationality", and you'll have some sense of the challenge that we're up against here.

But that brings me to principle number two. If you want people to become more rational, you have to teach them how to use rationality on things that they care about. Their finances, their health, their personal relationships, their career choices, their day-to-day happiness. If people are going to go through the work, and it is work, of learning and practicing rationality... with a few exceptions, it's going to be because they expect it to pay off for things that are important to them. And it does pay off. It pays off to notice the cognitive biases, the errors that your brain is making.

Just to pick one example, we had a student recently who was deliberating whether he should accept a job offer in Silicon Valley that would have him making about 70K per year more than he currently was, but the job would mean moving away from his hometown, where all of his friends and family were. At CFAR we suggested reframing the question by flipping it around and asking himself, "If I were already at this job, would I be willing to take a 70K per year pay cut in order to move to my hometown and be close to my friends and family?" Then the answer was a clear "Oh, no." [laughs] Which indicated that his reluctance had been motivated much more than he realized by what's called the "status quo bias," a preference for whatever currently happens to be the case.

It also pays off to notice the beliefs that are rattling around in your brain that you've picked up over the years without consciously evaluating them, from your family, or your culture, or the fiction that you read, or maybe some person with a particularly symmetrical face.

I teach a class at our regular workshops that I call "Epistemic Spring Cleaning." Every time I teach it, we end up with a board full of things that we've realized we've internalized, and in many cases had actually been acting on, without ever having thought to ourselves, "Do I have good reason to believe this?" For example, "money doesn't buy happiness," or "intellectual pursuits are noble." This is not to say you end up always rejecting the beliefs you uncover during your epistemic spring cleaning, just that if a belief is going to be guiding your life decisions, it's worth actually thinking about rationally for a minute or two at least.

The other reason it's important to train people to use rationality on real-life domains is the issue of "domain transfer," which is one of the biggest hurdles to actually becoming more rational. "Domain transfer" refers to the difficulty of learning something in one context, like an artificial problem in a classroom, and then recognizing when and knowing how to apply it in a real-life domain where it actually applies.

I can personally testify to the difficulty of domain transfer, because when I was an undergraduate and I was considering going into graduate school in economics, I went and talked to a bunch of my professors to ask them "How do you like academia? Would you recommend it to a young student like me?" They were generally very positive, and encouraging. It wasn't until years later that it occurred to me that I had only been surveying people who liked academia enough to stay in it, and that my sample was incredibly biased, which is a classic, textbook example of what's called the "selection bias," where your sample is not representative of the population you're interested in. And I was a statistics major, which gives you some sense of the scale of the problem of domain transfer.

This brings me to principle number three. In teaching rationality, community is key. That's not just because it's fun to learn rationality with people who are becoming your friends. There's another more fundamental reason. I would argue the most important skill in rationality is not probabilistic thinking, or learning to avoid the selection bias, or the status quo bias or any other bias. It's the skill of actually wanting to figure out the truth, more than you want to win this particular argument, or more than you want to enjoy the feeling of having turned out to have been right all along. Some people seem to have been born with this skill, or have been brought up with it. But what about the rest of us?

Well, we're all primates, and as we know, primates enjoy the approval of, and feel influenced by, other primates around them. What we're trying to do at CFAR, in the background of all of the specific classes that we teach on specific skills and biases, is create this culture, in which changing your mind in response to evidence is a virtue that's admired and applauded. I can't tell you how gratifying it's been seeing this community develop, and how inspiring it is being around people who say things to each other like, "Oh you're right, I hadn't thought of that, I'm going to change my mind now," and then actually do change their mind, not just give lip service to the idea of changing their mind and then continue on their merry way as before.

I think my favorite example of this ideal comes from Richard Dawkins, from his days in the zoology department at Oxford. What you're looking at here is the Golgi apparatus, a cellular structure that's responsible for distributing macromolecules around the cell. There was an elderly professor in the department at Oxford who was famous for arguing that the Golgi apparatus wasn't real, that it was illusory, an artifact of observation.

One day, while Dawkins was in the department, a visiting professor came from the States to give a talk presenting new and very compelling evidence that the Golgi apparatus was in fact real. During this talk as you can imagine, everyone was glancing over at this old professor to see, "How's he taking this, what's he going to say?" At the end of the talk, the old professor marched right up to the front of the lecture hall, and he extended his hand to the visiting professor, and he said "My dear fellow, I wish to thank you. I have been wrong these fifteen years."

Dawkins describes how the lecture hall exploded into applause. He says, "The memory of this incident still brings a lump to my throat." I'll be honest, it brings a lump to my throat too, every time I retell that story. That is the kind of person who I want to be, that's the kind of person who I want to inspire other people to be, and that is the kind of person who I want making important decisions about the future of our world.

I'm going to leave you with one more thought about why this project of improving human rationality is so inspiring to me personally. This is a scene from one of my other favorite movies, "Blade Runner." This is Rutger Hauer playing Roy, who is a replicant, essentially a very sophisticated, organic robot, created by humanity to serve as a soldier defending their colonies. Roy is reaching the end of his very short pre-programmed life, and he's confronting the fact that he is essentially just a creation of humanity to serve their ends without regard for his desires or needs. The poignancy of his death scene really comes from the contrast between that bitter truth that he's confronting and the fact that nevertheless he still feels deeply like his life has meaning, and that, for lack of a better word, he has a soul.

To me this is poignant, because that's essentially the position that we, as human beings, have found ourselves in over the last 100 years. This is the bitter pill that science has offered us in response to our questions about what the meaning of it all is and where we came from. It turns out the answer is: we're survival machines, created by ancient replicators to serve their end of making as many copies of themselves as possible. The decision-making processes that are programmed into our brains are not optimized for our interests; they're optimized for the interest of our genes, of proliferating themselves.

And to make matters even creepier, the genes don't care about us. They don't much care whether we stay healthy and safe after we finish making copies of them. They don't care whether we're happy. In fact, it's to their advantage... we're better at making copies of them when we're constantly on this thing called the "hedonic treadmill," where we want more and more things, and yet once we get them we're dissatisfied with them and we want more things. And our genes don't care about strangers on the other side of the world, or about the distant future of humanity.

But we care. We, as individuals, care. That's why this project of figuring out how to install new processes in our brains, processes that allow us to optimize for our goals and not for the goals of these ancient replicators, is so important and so inspiring to me. It feels to me like a crucial step in the process toward self-determination as a species.

This idea is beautifully expanded upon in the book "The Robot's Rebellion" by Keith Stanovich, also a pioneer in the field of rationality, and also on our board of advisors. He has this wonderful quote which I'll leave you with, "If you don't want to be the captive of your genes, then you had better be rational."

We are CFAR. Please visit our website and learn more about us at appliedrationality.org, and while you're at the summit, please come up and talk to any of the members of our team. There's Anna, our Executive Director; I'm the President; Michael is our Curriculum Developer; Cat is a Volunteer Instructor; and Eliezer has been an invaluable consultant and inspiration to us.

Thank you so much.

[applause]