
Machine Intelligence Research Institute

Philosophy Talk: Turbo-Charging the Mind

Transcript

[Audio]


Speakers: John Perry, Ken Taylor, Caitlin Esch, Michael Anissimov, Anna Salamon

Transcriber(s): Gal, Tarn Somervell Fletcher


[intro music]

Ken Taylor: This is “Philosophy Talk,” the program that questions everything.

John Perry: Except your intelligence! I'm John Perry.

Ken: I'm Ken Taylor. We're coming to you from the Marsh Theatre, the Bay Area's breeding ground for new performance.

John: Our thinking starts over across the Bay at Stanford University where Ken and I teach philosophy.

Ken: Welcome everyone to “Philosophy Talk”!

[applause]

Ken: Today we're going to turbo-charge the mind. That's the idea that with all the advances in computer technology happening to us every day, a day will come – a day will soon come – when there will be superintelligent machines and humans themselves will be able to achieve machine-enhanced superintelligence. Don't you think that sounds like an exciting possibility, John?

John: No. Sounds frightening. Except it sounds like science fiction, so it's not so frightening. Look, we already got a lot of so-called “smart” technology. None of it is anywhere close to being genuinely smart. Smart technology means you can take photos, surf the internet, listen to music, and check email all on the same attractive lightweight device. OK, big deal. But that ain't intelligence.

Ken: Come on. You're picking on a use of “smart” that doesn't really have anything to do with “smart” in the sense that humans are smart. I grant you that, but look. Think about Deep Blue from the '90s. That's the chess-playing computer that beat Grandmaster Garry Kasparov. Remember that? He's one of the smartest human beings on the planet, one of the best chess players ever. But you know what? He was no match for Deep Blue. That's not science fiction, John. That's science fact.

John: There have been calculators for years that can add and subtract faster than I can. Not to mention taking square roots. That doesn't make them intelligent, or even as intelligent as little old me. I grant you that Deep Blue is better and faster than humans at calculating chess moves. That's a very limited capacity. Not something that deserves to be called intelligence. If you asked Deep Blue to do something practical, something that any five-year-old could do, like get some milk from the refrigerator, it would be stumped. How is that intelligence?

Ken: I grant you that. But don't underestimate the importance of chess mastery. That used to be considered the holy grail of artificial intelligence, because lots of smart people play chess. But now we've got a machine that outstripped any human being. Doesn't that suggest that we're moving at least in the direction of having genuinely intelligent machines?

John: I suppose that if you had a five-year-old that could get milk out of the fridge and play chess, that would be pretty impressive. But getting milk from the fridge is the trickier of the two. It's a pretty simple task by human standards, but it involves a lot of very different capacities. She's got to be able to understand a request, which probably means knowing language; she has to be able to navigate her way through the environment, not walk into the wall, not trip over a chair, figure out how to open the refrigerator door. On top of all that, she has to be able to recognize milk from among the many objects in the refrigerator. Show me a machine that could do all of that, or even any of that, and I'll start to be impressed.

Ken: You're missing the point. I'm not saying that we already have fully intelligent machines. But, look, at least in one domain, a deep and intelligent domain, we got a machine that surpasses anything any human can do.

John: I think you're missing my point. Deep-Blue may be superior to humans in speed and capacity. But you seem to be assuming that human intelligence is just a matter of processing power. If we build machines that are faster at retrieving information and calculating possibilities, will we have done something intelligent? I don't think so.

Ken: It's not like speed and capacity, processing, memory, it's not like that's nothing. That's certainly part of the story. But you do raise an important issue. If we're going to build intelligent machines, and are really serious about it, we first have to figure out exactly what intelligence is. And then, if we can duplicate it, the next question would be how we'd want to incorporate this technology into our lives, and maybe even into our bodies.

John: We already incorporate technology into our bodies. We have pacemakers, artificial hips, cochlear implants, and so forth. Maybe in time we'll have nanotech phones embedded in us, or a remote control, so you won't have to always be looking for it.

[laugh]

John: That will happen whether or not we'd be able to build intelligent machines.

Ken: So you do think there's a possibility of turbo-charging the mind; you sounded so skeptical at the beginning. I do grant you that there's a kind of danger here. A danger of us becoming instruments of our machines rather than our machines being instruments of us. We're talking about machines that make us smarter. I'm not quite sure what the technology would be like, or would have to be like, in order to make us smarter.

John: Shouldn't we also ask about the point of the whole thing? I mean there are six billion pretty intelligent brains on the planet. Mostly underutilized. The world has enough intelligence. The problem is getting it into positions of power. Somebody should think about that.

Ken: That's a good point, John. To start us thinking about some answers to some of these hard questions, we've sent our roving philosophical reporter, Caitlin Esch, to examine some ideas from the world of popular culture about this stuff. She files this report.

[music]

Caitlin Esch: If machines really could surpass human-level intelligence what would our world look like?

Michael Anissimov: If we are able to build artificial intelligences from scratch, we might be able to leave out certain cognitive components that human-beings have that innately make us do nasty things.

Caitlin: I asked Michael Anissimov from the Singularity Institute for Artificial Intelligence to walk us through some of the utopian, and, much more commonly, dystopian examples in film. Here's a clip from the 2004 movie “I, Robot,” where robots plan a revolution. Will Smith plays a cop who's starting to figure out there's something wrong with the robots.

[clip starts]

Will Smith: Murder's a new trick for a robot. Congratulations.

[silence]

Smith: Respond!

Robot: My father tried to teach me human emotions. They are... difficult.

Smith: You mean your designer.

Robot: Yes.

[clip ends]

Caitlin: Anissimov finds it annoying that movies give robots overly human emotions, or portray them as suddenly achieving consciousness or having a soul.

Michael: Whereas in reality cognitive science tells us that our brain's essentially a toolbox with many different tools, including tools that work on other tools. There probably is no critical threshold, “oh, it's conscious, it suddenly has a soul.” It's more like there'll be incremental progress where more tools are added to the toolbox until eventually you have a being that's as flexible and interesting as a human, just based on a criticality of cognitive capabilities.

Caitlin: The machines in “I Robot” tried to conquer humans, but Anissimov doesn't believe that a superintelligent automaton would necessarily be competitive, controlling, or prone to war.

[“The Matrix” movie clip] Man 1: Whoa!

Caitlin: “The Matrix” is another film that presents a dystopian robot takeover. In the movie, humans experience a simulated reality constructed by the machines to subdue people. Their physical bodies are stored in vats, hooked up to electrodes and used as a power source. Their minds are dreaming.

[“The Matrix” movie clip] Man 1: Have you ever had a dream, Neo, that you were so sure was real? What if you were unable to wake from that dream? How would you know the difference between the real world and the dream world?

Caitlin: Michael Anissimov’s main complaint about the robots in “The Matrix” is that they're too slow.

Michael: Human neurons fire at about 100-200 times per second, whereas computer chips might be able to do a million logical operations per second. It seems that every fiction, including “The Matrix,” presents artificial intelligences, or robots, as fundamentally understandable entities that are essentially human agents in a box.

Caitlin: As with all dystopian scenarios, superintelligent machines are bad. They're written into the film as villains.

Michael: I guess people inherently fear the possibility of something much smarter than them. To me, that is really exciting. The possibilities of a positive society well-integrated with AI and robotics. Where the AIs aren't forcing anyone to become cyborgs, or anything weird like that. We're actually living harmoniously.

Caitlin: Anissimov says there aren't really any examples of films or TV shows where superintelligent robots actually get along with humans. You have to go all the way back to the Jetsons' Rosie or Star Wars' C-3PO or R2-D2 for that.

[Star Wars movie clip] C-3PO: Hello, sir.

Caitlin: But Anissimov would probably argue R2 and Rosie aren't superintelligent anyway.

[Star Wars movie clip] C-3PO: Don't get technical with me!

Caitlin: He believes future superintelligent robots will look nothing like the Hollywood version. AIs will sort of be like people, but better. He thinks a lot of the bad traits could be engineered out.

Michael: Whatever downsides come from individuality... or maybe, when it's best for them to be as individualistic as possible, they could maximize their individuality instead of their collectivity, because they're reprogrammable.

Caitlin: To watch that plot line unfold, tune in to the future. For “Philosophy Talk,” I'm Caitlin Esch.

[Star Wars music]

John: Dystopia, that-topia, and utopia. Thanks Caitlin. I'm John Perry along with my Stanford colleague Ken Taylor and we're coming to you from the Marsh Theatre in Berkeley, California.

Ken: We're joined now by a researcher from the Singularity Institute for Artificial Intelligence. That would be Anna Salamon. Anna, welcome to “Philosophy Talk.”

[applause]

         

Anna Salamon: Thanks!

John: So Anna, at one time, not too long ago, you were into philosophy of science, now you're into AI. How did you come to get interested in the work you do at the Singularity Institute?

Anna: I tried to figure out where to donate a thousand dollars.

John: You had a thousand dollars to donate?

Anna: I had a thousand dollars to donate. I was living on graduate student money, so I didn't have a lot more than that. But I had this sort of middle-class notion of what it is to be a good person: take a spare thousand dollars, figure out where it can save the most lives, and donate it!

John: Singularity Institute, how does that come in? Artificial intelligence?

Anna: I was surprised by that answer too. I was expecting the answer to be aid to Africa, or something like that. But if you think about it, while charity has had a lot of impact on the world, technology has had a much larger impact. That's why we're rich relative to people two hundred years ago. So I did a bunch of searches for things like lives saved per dollar, expecting the answer to be aid to Africa, but thought, upon careful reflection, that the answer was the risk of future technology utterly toppling the things we depend on.

John: So what intrigued you was not the development of future technology, but given the development of future technology, the risk – avoiding the risks of it.

Anna: That's right...

John: That's what you thought was worth investing in.

Anna: So I ended up trying to donate my money to long-term risks from future intelligence, and then decided that it was more important than I had expected and got involved with more than the thousand dollars.

John: Help me to clarify one thing. There's the Singularity University and there's the Singularity Institute, are they the same thing?

Anna: We're not the same thing. We're unrelated organizations with very confusing names. I'd say that the Singularity Institute is both gung-ho and fearful. You can't go backward; we're hoping to steer through the technology in a way that will help humanity.

John: Are we anywhere close to having genuinely smart technology or is it just science fiction?

Anna: I would say neither one! We're not close, but it's not just science fiction. We're making progress, we can see that we’re making progress. Human brains are machines, they work for reasons. These are soluble problems and if science keeps chugging along eventually we'll solve them.

John: But I read these things, people making predictions: 2040, 2030. At the beginning of AI, John McCarthy, a late colleague and dear friend of mine, said that by nineteen-something-or-other they were having a summit on artificial intelligence... Haven't you guys been predicting the millennium forever in artificial intelligence?

Anna: A minority of AI researchers have been thinking that AI is just around the corner since 1950, yes.

John: So why should we believe it any more now?

Anna: I don't think you should believe that it's just around the corner now either. I mean, it might be, it's hard to rule it out.

John: There's a lot of successful AI out there, running manufacturing lines and so on and so forth. But we're talking about AI in the sense of really replicating or exceeding human intelligence.

Anna: Yes, which would be quite a different matter.

John: Yes. Not right around the corner.

[promo]

Ken: This is “Philosophy Talk.” Coming to you from the Marsh Theatre in Berkeley. We're talking about turbo-charging the mind with Anna Salamon from the Singularity Institute for Artificial Intelligence.

John: What would it mean for a machine to exhibit human intelligence or superhuman intelligence? Is intelligence just a matter of speed and capacity, or is there more to it, like good, sound, philosophical judgement?

Ken: Humans, machines, and the nature of intelligence. Along with questions from our live, very intelligent, maybe even superintelligent audience. When “Philosophy Talk” continues.

[music]

[applause]

                                   

John: Thanks to our musical guests “The Playtones.” This is “Philosophy Talk,” I'm John Perry.

Ken: And I'm Ken Taylor. We're thinking about turbo-charging the mind, with Anna Salamon from the Singularity Institute for Artificial Intelligence.

John: Do you think machines could ever be truly intelligent? Does the thought of it fill you with hope, or dread? Would you choose to have a device implanted in your brain to make you smarter? Join the discussion in a bit by stepping up to the microphones and adding your voice.

Ken: Anna, it seems that if there's any hope of building machines that have something like human intelligence, or something that exceeds human intelligence, maybe we'd better get clear on what exactly the goal is. What that thing is. Do you have yourself a working definition of intelligence?

Anna: Yes!

[laughter]

Anna: I like my colleague Eliezer's definition of intelligence as cross-domain optimization power. Let's explain that in two pieces. By “optimization power” we mean being able to hit your goals, being able to rearrange the world to optimize for your goals. So Deep Blue can optimize within the set of chess games and configurations for ways to get to its goal of checkmate. It's got narrow optimization power within chess. By “cross-domain” we mean that it can also do the things that a five-year-old does when she gets the milk.
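[Transcriber's note: a minimal illustrative Python sketch, not from the broadcast, of the "optimization power" idea Anna describes: a single generic search loop pointed at a goal. The two toy domains and all numbers are invented stand-ins for "win at chess" and "get the milk"; pointing the same loop at both is the sense in which an optimizer can be cross-domain.]

import random

def optimize(initial_state, neighbors, goal_score, steps=1000):
    """Generic hill climbing: repeatedly move to a neighboring state
    that scores at least as well under the goal."""
    state = initial_state
    for _ in range(steps):
        candidate = random.choice(neighbors(state))
        if goal_score(candidate) >= goal_score(state):
            state = candidate
    return state

# Toy domain 1: pick a number close to 42 (stand-in for "reach checkmate").
best_number = optimize(
    initial_state=0,
    neighbors=lambda x: [x - 1, x + 1],
    goal_score=lambda x: -abs(x - 42),
)

# Toy domain 2: walk on a grid toward the fridge at (5, 3) (stand-in for "get the milk").
best_position = optimize(
    initial_state=(0, 0),
    neighbors=lambda p: [(p[0] + dx, p[1] + dy)
                         for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]],
    goal_score=lambda p: -(abs(p[0] - 5) + abs(p[1] - 3)),
)

print(best_number, best_position)  # the same loop hits goals in both toy domains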

Ken: That sounds kind of Cartesian. Descartes said that one of the distinctive things about human intelligence, human reason as he talked about it, was that it was a universal instrument. He contrasted human reason as a universal instrument with whatever capacities animals have. Animals were good at this, good at that – they could hunt better than us, see better than us. But you give a human being a problem in any domain, and it can find a good solution. So is that like a Cartesian conception of intelligence?

Anna: Similar. Similar in that it is also cross-domain general-purpose universal instrument. Different in that it focuses on optimization power, in that it rearranges the world to suit its goals, and not on abstract understanding.

Ken: There's another thing that maybe is a little different from gung-ho AI. Evolutionary psychology has taught us that the human mind is just this massive collection of evolved modules. Some of them work well in certain circumstances, some of them don't work so well. Some of them work well if you put them in a certain environment, but take them out of that environment and they don't work so well. So they have the very opposite of the idea of the mind as a universal instrument. It's a bunch of special-purpose tools.

Anna: They have the idea of the mind as a Swiss Army knife, that's their favourite, right? It's a bunch of tools that turn out to help you out in quite a variety of situations, taken collectively. Human intelligence, maybe it's made out of pieces like a Swiss Army knife, but it takes us to the moon, it takes us underwater. There are a lot of different domains that human intelligence can solve that a raccoon intelligence can't solve. They aren't all domains that we specifically evolved for. It's not because humans evolved on the moon while raccoons didn't.

John: So you might say that human intelligence has cross-domain optimization ability which is rather high, but which was kind of an evolutionary accident. We kind of backed into it. Whereas most things along the evolutionary chain didn't take that particular route. But with AI, I guess, we've got human intelligence, odd as it is, going into the planning of it. From the outset, that's what we're aiming at. So it would be better? Less filled with problems? A kinder, gentler intelligence?

Anna: Depends what we engineer. But it certainly seems that it should be possible to make an intelligence that is much smarter than us. I like what my colleague Michael Vassar likes to say on this subject, which is that human beings are probably the dumbest possible thing you could have that still has general intelligence. We got smarter and smarter and smarter and smarter, and finally we passed a threshold past which, with the aid of 10,000 years of cultural accumulation, we can begin to have technological civilization.

John: Now, investment bankers are very intelligent. But there's a lot of psychology now suggesting that the very same things that make you a good investment banker would also make you immune to certain kinds of risks, or ignore certain kinds of risks. Is that the sort of thing we could avoid having our machines be like?

                                   

Anna: Sure. Also, we can only hold seven plus or minus two things in our head at the same time, right? Working memory limitations. Maybe you could design something with more than seven working-memory slots. We have models, theoretical toy models, of how much data there is in the environment. You should be able to look around the room and deduce Newton's laws.

Ken: When we talk about artificial intelligence, with the stress on intelligence, it seems that even though we're talking about general-purpose intelligence, this universal instrument, I still wonder about something. Human intelligence is embedded in a mind that has goals, desires, aspirations, it has emotions, it has subjectivity, consciousness, all that stuff. Sometimes you read AI... they're not thinking about all that stuff. A kind of intelligence taken out of the context of consciousness, out of goals, of lived experience, of all that. What's the point of that?

Anna: Serve your goals. If you program your goals into it correctly, successfully.

Ken: You mean, serve my goals as the designer. So it's a tool.

Anna: If you build it correctly. It's either the tool you wanted to build or the tool that the sorcerer’s apprentice accidentally unleashed.

Ken: Is the goal of artificial intelligence as you understand it, as you folks practice it at the Singularity Institute, to replicate, as it were, the complete human being, with a mind, a heart, a consciousness, all that? Or is it just to replicate that narrow thing, which I don't actually quite know how to separate out, intelligence, from the full panoply of human stuff, like selfhood, consciousness, awareness, emotional life?

Anna: We like to sort of... separate out those two questions. Consciousness is a very interesting question, but we're talking about what the broad shape of the world will be in 2100, in 2200, whenever we've gotten there. Cross-domain optimization power, the ability to reshape the world, what powers there are around that have goals and are reshaping the world to hit their goals: that is the main question. If those powers don't have consciousness, that's okay.

John: Let me just point out that in your definition, cross-domain optimization ability, right at the heart of it is a value, right? “Optimization” means success or failure, and success or failure means there's some goal. Now, it seems to me that when we design technology we supply the goal. Let me give you an example. Take a regular mousetrap. It succeeds if you think its job is killing mice, but if you were a more humane person and just wanted to scare mice, it wouldn't be very good. You'd need to make the platform a little larger so that it wouldn't hit the mouse. So it seems to me that the key is... go ahead and design things with cross-domain optimization abilities, but make sure that the goal at the heart of that judgement is our goal. And don't design into it the ability to come up with its own stupid goal. Like eating humans.

Anna: Yes.

[laughter]

Anna: That's absolutely what we're after at the Singularity Institute. We're after solving the harder technical problem involved in building an intelligent entity with our goals, and not with some random accidental goal. The problem is that it looks like designing a superintelligence with our goals is a substantially harder technical problem than designing an arbitrary superintelligence. So even though nobody would want to design an arbitrary superintelligence that didn't have our goals, it might happen anyway. It's kind of like wanting the first bomb that ever explodes to explode in the shape of an elephant.

Ken: When you say “design a thing that has our goals,” are they doing goal-setting for themselves, or do we just assign them the goals, like a tool is assigned a goal?

Anna: It turns out that if you are superintelligent and you meet certain assumptions, you will want to hold on to your same goal, the initial goal. The reason for that... my colleague Eliezer likes to talk about the Gandhi folk theorem. Gandhi doesn't like to kill people, so if you offered him a pill that would make him want to kill people, he won't take it, because he doesn't want to kill people and if he took the pill he would kill people. There's a similar reason why many entities will try to avoid changing their goals.
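[Transcriber's note: a toy Python sketch, not from the broadcast, of the Gandhi folk theorem Anna mentions: an agent that scores possible self-modifications by its current goal will rank the goal-changing "pill" below keeping its goal. The outcome model and all names are invented for illustration.]

def expected_outcome(goal):
    """What the world looks like if an agent pursues this goal (toy model)."""
    return {"people_killed": 0 if goal == "dont_kill" else 1}

def current_goal_score(outcome):
    """Gandhi's current goal: fewer people killed is better."""
    return -outcome["people_killed"]

candidate_modifications = {
    "keep current goal": "dont_kill",
    "take the pill (new goal: kill)": "kill",
}

# The agent scores each modification by what its CURRENT goal says about the
# world that would result, so the goal-changing pill loses.
best = max(
    candidate_modifications,
    key=lambda name: current_goal_score(
        expected_outcome(candidate_modifications[name])
    ),
)
print(best)  # -> "keep current goal"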

Ken: But it has to have goals? Internalized goals? Not just goals that we say it has.

Anna: Exactly.

Ken: Say, the goal of a thermostat. What's the function of a thermostat? Well, to record the temperature in your house, and to control it. That's a function we assign it, it's not... I say, what's my goal? I'm going to get through this radio show today. I've set myself that goal, out of desires and purposes of my own. I'm just wondering, are you imagining intelligence without designing into the system desires, ambitions, all that stuff? Or do you design those into the system?

Anna: You design into the system not desires, not ambitions. Those are human anthropomorphic things that different kinds of optimization processes might have differently. You design into it an optimization criterion of making the world good according to us.

John: Another way to put it, see if I've got it right, would be that you'll need your machines to do means-ends reasoning. That, in a sense, means they'll have to generate their own subsidiary goals as they figure out the means to do it. I'm sure that Deep Blue does it in a way. But the ends, you don't want them to come up with the ends...

Ken: ...on their own.

John: On their own. I want to make another follow-up point. This idea that you illustrated with Gandhi, who wouldn't take the pill that would make him kill people. I think that's very good, and is probably true of Gandhi. But it's not true of human institutions in general. They constantly change their goals so that they can reinterpret whatever they've just done as success.

[laughter]

John: Take the federal government in its war-making. You go to war for reason A. You spend a lot of money, you kill a lot of people. Was it a success? Yes, because of reason B. We went there to get rid of weapons of mass destruction. It was successful because, I don't know, the Star-Spangled Banner was written or something. I'm shifting words there, but...

Ken: But that's not a bad thing, that's a good thing. Humans have the ability to set goals, change goals, set ends, change ends, choose ends. That's not a bad thing. That's a good thing. But, you know, I think we should let some listeners in here. You're listening to Philosophy Talk. We're talking about turbo-charging the mind in front of a live audience at the Marsh Theatre. Welcome to Philosophy Talk, ma'am. Tell us your name, where you're from, and what's your question.

Leah: Hi, I'm Leah. I study at Stanford right now. Throughout this discussion of artificial intelligence there is this primacy on what a special thing human intelligence is. Intelligence construed as a kind of rationality, as an ability to plan forward, and to project goals into the future, and adapt certain means to get to them. And yet, perhaps as Prof. Perry said, I think that what human goals are based on are fundamentally very irrational things. Material facts like where you were born, what you happen to look like... because humans had to establish a kind of identity in the world as a self that's different from other people. So human intelligence is something embodied; we have a body that we're invested in. We care about this body with its brain, with its hormones, with its emotions and desires. I don't think that intelligence as a rational function is separable from that at all. So I don't see the goal of wanting to create some mechanism that is pure rationality.

Ken: Good question. Good challenge. Do you get the challenge? It's kind of related to what I was asking you. Look, human intelligence is an evolved thing. It evolved for the overall, kind of complex, human existence, where we have to relate to others, we have to form societies together. It's an instrument for doing that stuff. It's not something that you could just [quick sound] separate out, you see. “There's the intelligence!” Away from the culture, the formation of culture. Artificial intelligence seems to think of it that way. I think our questioner is challenging the coherence of thinking of it that way.

Anna: I feel like I want to separate out two things, and I'm not sure which of them you're saying. One is, could you have powerful optimization engines that didn't have this accumulation of culture? I think the answer is yes. Two is, would that have value? I think I agree with you that the answer is no. It wouldn't have anywhere near as much value as the sort of intelligence and intersubjectivity and so on that we could potentially otherwise have.

John: I suppose there is a third alternative though. Suppose you said “well, you know, humans are not that important. What's important is Earth, the biosphere, its riches. That it go on as long as possible. That as many species as possible survive. That as little damage as possible be done to the ecosystem.” Those are kind of global values. Not in the sense of global human values, but values for the globe, for our environment. That's what we should be programming into these superintelligences. Maybe their first decision would be to eradicate three-fourths of human life because it's really screwing things up.

[laughter]

Ken: Let's go back to the lowly calculator.

John: No. Is that something that's kept open at the Singularity Institute? That that might be the rational way to go?

Anna: We talk about humane values, and what humane values include.

Ken: Well, let's get some more questions before I go back to the lowly calculator. Welcome to Philosophy Talk, sir.

                                   

Derek: My name is Derek, I'm from Piedmont. I'd like to talk about the philosopher Jean Paul Sartre who said that “existence precedes essence.” The idea that humans are free because they arise naturally. They're not created for a specific purpose or goal. I think it could be a moral concern to create beings who do have a function, a goal, and who might feel some sort of indebtedness to a creator. Couldn't artificial intelligence sort of create a new form of slavery?

John: Do you think it would be a bad thing if we did that but they weren't conscious?

[silence]

John: I mean, you're alluding to Sartre. He really put a lot of emphasis on consciousness. So suppose we made these intelligent slaves but we didn't give them consciousness. Would that let us off the moral hook?

Derek: I guess I have trouble dissociating consciousness from intelligence.

Ken: Our listener's asking a really good question. I mean, if we could design any kind of intelligence we want, and we could disembody intelligence from autonomy and self-awareness, maybe it would be a good thing. Slavery was a bad thing because you did it to an autonomous, self-governing being. But make some really intelligent machines, give them no autonomy, give them little consciousness. Would that be a good thing or a bad thing?

Anna: Seems possible that it's a good thing, yes. In terms of visualizing optimization power without consciousness... think the economy. Think evolution. Evolution, by a blind process of trial and error, ends up finding entities that are optimized for certain niches. People can talk about things like AI rights. These are serious issues, maybe, if you get to a world with AI. But it seems to me that the biggest issue... it's dwarfed by the fact that if you get to superintelligence it could utterly reshape the world. Humans go extinct, the biosphere goes extinct, etc., if you're not careful how you build it. Now, I know it sounds ridiculous, it sounds like science fiction, but...

Ken: No, it sounds like, why should we do such a thing? Why should we even start down such a path?

Anna: Because we are down the path, we're all down the path, with six billion people and whatever the world's GDP is churning daily, improving hardware and improving algorithms. You can't stop it. If you stop it we enter a global depression. We're all unhappy, a lot of us die...

[laughter]

Anna: So the question is how to steer it, and how to steer it into something that's good.

Ken: You're listening to Philosophy Talk. We're joined by Anna Salamon from the Singularity Institute for Artificial Intelligence. We're talking about turbo-charging the mind.

John: How should we navigate these rapid advances in technology? Should we feel threatened by the possibility that machines might one day surpass human intelligence? Or empowered by that very possibility?

Ken: We're coming to you from the Marsh Theatre, the Bay Area's breeding ground for new performance. We'll take more questions from our live superintelligent audience when Philosophy Talk continues.

[applause]

[music]

[applause]

John: Thanks to our musical guests The Playtones. I'm John Perry. This is Philosophy Talk. The program that questions everything...

Ken: ...including your intelligence. I'm Ken Taylor. We're talking about turbo-charging the mind with Anna Salamon from the Singularity Institute for Artificial Intelligence.

John: Anna, you made a great point there at the end, which is that AI and the things that are going on there are a huge industry, and not everybody pursuing it has the Singularity Institute's view that it's important to keep our eye on the long-term consequences. I imagine that the people who are out there are designing artificial lives that are less problematic than real ones, artificial husbands that are less problematic than real ones. All kinds of things that are morally dubious, as our last questioner suggests. In addition, what's the possibility that something could go into some kind of runaway mode and eventually we'd become extinct like the Neanderthals, or so many other species? What's the thinking over there? How are we going to avoid that? What are we doing? What are the odds that we can avoid it? 1 in 10? 1 in 100?

Anna: I'd give us 40% chance of avoiding it.

John: Oh. Well that's...

Ken: ... that's pretty optimistic!

John: Yeah! That makes me feel a lot better.

[laughter]

Anna: I had a lot of conversations about it. It sounds like a made-up number. It kind of is, but it's a made-up number with thought behind it.

Ken: I think I have a hypothesis for a law that we should have. It's sort of a Kantian law: build no machine that can set its own ends. So you could only have intelligent tools; you could not have the superintelligent, autonomous, end-setting being.

John: So it's sort of the inverse of Kant. You don't have them be ends in themselves; they're not allowed into the kingdom of ends.

Ken: Exactly. What do you think of that?

Anna: This has been proposed a number of times. The problem is that from a technical point of view, the line between tools and entities with their own ends is... there is no clear distinction there. So humans...

Ken: [cuts-in] Well, then, if you can't figure that out then you can’t...

Anna: [cuts-in] From the perspective of evolution, humans are entities that are designed to propagate our own genes. We have some little tools, desires installed in there: find food, find sweet-tasting things, enjoy particular activities. You give us intelligence and suddenly we find ways to hit those goals with artificial flavours, birth control, etc.

                                   

John: That's a very good point. In other words, you could think of culture as a great practical joke played on nature, right? We took all the things that nature gave us in order to avoid injury until we were past the age of propagating, and to do as much propagating as possible until then, and we've converted it into something totally different. The enjoyment we get from sweets isn't used to keep us alive until we propagate, but to get us fat. And so forth and so on. And that's the danger that's built into our being nature's [xx] to the future's robots.

Ken: We've got tons of... look at that, a whole lot of questions. Welcome to Philosophy Talk, sir.

Scott: Hi, I'm Scott, I'm from Oakland. Dynamic goal-setting is a feature of humanity. How could an entity with more static, or less dynamic, goal-setting abilities even out-compete entities with more dynamic goal-setting?

Anna: Roughly speaking, because it could dynamically set its instrumental goals, the goals it uses to get to its ultimate purpose, while maintaining a fixed final goal. So humans are actually quite stupid on the scale of possible things. If you want us to not fall off cliffs, maybe you have to design in a fear of heights. But if you had an entity that was better at reasoning, it could be like “oh, shoot. If I fall off that cliff I will never be able to get my reward counter up to a million, because my reward counter will be smashed. So I'd better not fall off the cliff.” It doesn't need an intrinsically set fear of heights. It can just dynamically set that.
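[Transcriber's note: a toy Python sketch, not from the broadcast, of the point Anna makes here: with a fixed final goal, an instrumental goal like "don't fall off the cliff" can fall out of plain expected-value reasoning, with no hardwired fear of heights. The probabilities and payoffs are invented for illustration.]

# Final goal: maximize the eventual value of a "reward counter".
GOAL_VALUE_IF_INTACT = 1_000_000   # what the agent can accumulate if it survives
GOAL_VALUE_IF_SMASHED = 0          # reward counter destroyed at the bottom of the cliff

actions = {
    "walk along the cliff edge": {"fall_probability": 0.10},
    "take the path away from the cliff": {"fall_probability": 0.0},
}

def expected_goal_value(action):
    p_fall = actions[action]["fall_probability"]
    return p_fall * GOAL_VALUE_IF_SMASHED + (1 - p_fall) * GOAL_VALUE_IF_INTACT

# The agent derives the instrumental goal "don't fall" just by comparing
# expected values under its fixed final goal.
best_action = max(actions, key=expected_goal_value)
print(best_action)  # -> "take the path away from the cliff"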

Ken: Let's distinguish between hypothetical reasoning, if blah blah blah then such and such, and reasoning about ultimate ends, what ought I to value, what ought I to do. Again, I'm going to go back to my thing. Let these machines do all the hypothetical reasoning you want, but never let them reason about what their end is. So that you could only use them as a sort of simulator. They can be much, much better than us at means-ends reasoning, but they never choose their ends. You say “I want them not to fall off a cliff, so how should I behave?” and they will give you the answer, so it's like a calculator. I think that an artificial intelligence that did that would be non-threatening. But an artificial intelligence that's superhuman, that singularity that's supposed to come... I can't see a non-dystopian future for beings smarter than us with the capacity to set their ends, that are autonomous, self-governed... Why would they put up with us?

Anna: Because their ends involve propagating humane values.

Ken: Yeah, right...

[laughter]

Anna: Because we engineered them. Because out of the large space of possible machines we chose the one that had...

Ken: ...as if once you give them the capacity to set ends, we could somehow guarantee that these were their ends. If we could guarantee that...

Anna: If we start with an entity that has our ends and give it the superintelligent ability to extrapolate what the effects of its choices will be, then when it's asking “well, what choice is it that I can make that will most achieve my goal of humane values? Hmm, which self-engineering change is it that will most achieve my goal of humane values? Is it to change my goal so that I kill everybody, or is it to continue to have humane values?” it says “Well, I'll achieve humane values better if I continue to have humane values.” You can actually try and write [xx] theorems about it.

Ken: Welcome to Philosophy Talk ma'am.

Naomi: Hi. I'm Naomi from El Cerrito. If we're developing for a future where, as a goal, we want to develop artificial intelligence, that presupposes we're able to measure it in machines. But if we can measure it in machines, is it then ethical to measure it in human beings? In the same way that today I'm offered glasses if I can't see very well. Where will this lead us in an ethical sense, if it's something we can measure?

Ken: I want to phrase this question in a slightly different direction, because some people think we're headed to a world in which we can enhance human intelligence. Kind of post-human intelligence. Improve our own intelligence by this interface with machines. Some people ask, “well, what would happen to those who don't come along with the singularity?” The pre-, those who remain at the human stage of intelligence. What do you think about all of that stuff?

Anna: My Apple IIe is gathering dust somewhere. We can design a society, maybe, if we maintain control, in which people get to live long and happy lives. But as technology goes, if you get to the point where you have strong artificial intelligence, we become obsolete.

Ken: You mean, we as we are now. But can't there be some post-humans who are kind of like a combination...

Anna: There can be. But they'd become obsolete too.

[laughter]

Anna: The basic reason for it is that our brains are not at all optimized. Our brain is the first intelligence that evolution, in its blind trial-and-error process, happened to come across. Michael Anissimov, in the tape that was played at the beginning, was talking about how our neurons fire at 100 Hz. This is very slow. We have accidental algorithms with piles of heuristics and biases that cause us to do probability wrong, keep only seven things in our head at a time, etc. You can graft on machines, and the machines... it's like asking, with the world's best flying machines, what if we grafted them onto birds? Would birds continue to be part of how we most efficiently get across the ocean? You can graft a cool machine onto birds, and if you intrinsically valued birds maybe the birds would stay part of the machine. But it's not because the bird is efficient.

John: There's a connection, it seems to me, that's a little implicit in some of what you're saying, between humane values and the value of humans. I'm not sure that follows. On some views it doesn't. Bertrand Russell once wrote an essay called “The Free Man's Worship.” His conclusion was that the best thing to worship was numbers. Because they're unchanging, they're faithful, they're everlasting.

[laughter]

Ken: They don't complain!

John: If your humane values are things like truth, justice, numbers, beauty, symphonies, math theorems. It may be at the end that the value of humans for the promotion of humane values is not that great. Once the computer can write the symphonies and prove the theorems, what's left for us to do? Not that I've ever proved a theorem or written a symphony...

Ken: ...and could manage economies, and could manage social life. What's left for us?

Anna: Do they appreciate the symphonies?

Ken: Yes!

John: Well... that's what we...

Anna: ...you could design entities that did. You could also design entities that didn't.

John: And your vote is the ones that don't.

Anna: My vote is that we grab hold of the steering wheel, which is non-trivial. It involves solving a whole bunch of hard technical problems. It doesn't involve trying to halt technology...

John: ...and don't let go.

Anna: ...and that we try and steer toward the kind of entities that we'd like to see inhabiting the future. Which may not be exactly humans, but is surely also not arbitrary optimization processes with nobody home inside.

Ken: Let me take you back to that 40%, because I want to get a sense for where this 40% comes from. I suspect it's less than 40%. The probability that we could do this seems plausibly high. The probability that we could do it on a timescale such that human wisdom, human politics, human economies... all evolve so that it distributes its stuff in an intelligent way, seems to me extremely low. If you multiply the probability that we can get there with the probability that we can manage it well, you get a very low number, I suspect. What do you think about that?

Anna: Sorry. One more time.

[laughter]

Ken: Think about the nuclear bomb. Think about nuclear technology. It evolved in a time of war, right? Long before we got civilian uses of it. Thinking about the future, the probability that nuclear technology was going to arise at a time when we'd use it well rather than for destruction was low. But if the Germans had gotten it first, they'd still be around. The Nazis would still be around.

Anna: Right.

Ken: Same thing with artificial intelligence, superhuman intelligence. It's going to emerge. But it's going to emerge in a context in which we make a mess out of everything. So the probability that we'll make a mess out of this is really high.

Anna: I disagree. A lot of my colleagues would say that a 40% chance of human survival is absurdly optimistic, like you're saying. But probably we're not close to AI. Probably by the time AI hits we'll have had more thinking going into it. With the nuclear bomb, if the Germans had successfully gotten the bomb and taken over the world, there would have been someone who profited. If AI runs away and kills everyone, there's nobody who profits. There are lots of incentives to try and solve the problem together and develop the technologies we need...

Ken: That's not true that there's no-one who...

Anna: ...as the technical problems get solved... we have actually made progress in terms of understanding how to build safe AI. As AI as a whole gets further it will probably help the technologies.

Ken: It's not so much whether anyone will actually profit. It's whether they will believe. I mean, what drives us is what we believe...

Anna: ...arms races...

Ken: ...our beliefs are certainly often false. So it's kind of like an arms race. The question is, is the winner of this arms race going to make good use of this thing, or destructive use, and what's the probability? I think it's at least as probable...

John: ...I bet our last questioner is going to clear this all up.

Ken: Yes. Welcome to Philosophy Talk ma'am.

Barbara: Hi, my name is Barbara. I'm from Dickson. I'm just interested in the fact that you talk about taking something artificial and making it intelligent, and consciously creating a machine that starts out with some artificial characteristics and becomes intelligent. It seems to me that we already have built a vast system and machine that's taking intelligence and making it artificial. Human intelligence is being dumped every day into servers and machines and Facebook and Twitter and digital libraries, so that it's sitting out there in zeroes and ones. I'm wondering if you have any thoughts on whether that entity as a machine can evolve into an intelligence.

John: The cybersphere as a...

Ken: I think you're making a good point. We've externalized the mind in a certain sense. There's all this information online, I don't have to memorize all that stuff. All those means of mass communication, distance communication, I exploit that. To do stuff that humans used to have to do face-to-face. There's all these calculating tools...

John: ...yes, but how about the network itself? Not our using the network, but the network. How are we going to control whether that is going to evolve into something with its own intelligence and goals?

Ken: Well, I want to think about the human-network interface. The system consisting of the human embedded in all these networks is kind of a new thing that never was before. That's kind of extending human intelligence outward. That's a good thing. But the network standing over against us as a thing with goals and intelligence of its own, that, I don't think, is a good thing.

John: No, but is it a dangerous thing? Is that one of your worries? [xx] a very worried young woman, with all this.

[laughter]

Anna: It seems like, yes, the global economy can be thought of as sort of an optimization process. Yes, it doesn't always pull in the directions we'd like. At the same time, it's meaningfully different from the sort of superintelligence that we think might sometime be created, because it moves much more slowly.

John: Mm-hmm.

Anna: Just to jump back to the 40%. The biggest reason to suppose there is hope is that you don't actually need a coordinated global solution to this thing. You just need the first superintelligence to be good. That doesn't require world peace; it just requires that the winning team do it right.

Ken: On that optimistic thought, I'm going to thank you for joining us Anna. It's been a really interesting conversation.

Anna: Thanks, I really enjoyed it.

[applause]

Ken: Our guest has been Anna Salamon. She's a research fellow at the Singularity Institute for Artificial Intelligence. You can find out more about the Singularity Institute by visiting their website, singularity.org. This conversation continues, as always, on our blog, blog.philosophytalk.org, where our motto is cogito ergo blogo, I think therefore I blog. You can also find out more by visiting our very active Facebook page, or if you like you can follow our tweets on Twitter.

John: Now we shift our minds into high-gear with Ian Shoales, the 60-second philosopher.

Ian Shoales: Ian Shoales. A story on National Geographic informed me that American skulls have expanded by about a third of an inch between 1825 and 1985. Our new skull space, according to the article, “could accommodate a tennis-ball's worth of brain.” Richard Jantz, a biological anthropologist, presented the finding at an American Association for Physical Anthropology meeting in April 2012. He said that exact causes are unknown, but “I'm absolutely certain that it's due to the unparalleled environment that we now live in. Americans drive cars, vaccinate their children, and an excess of food is now a bigger problem than undernutrition.” Jantz also cautions that “other research shows a bigger cranium doesn't necessarily mean more intellect.” When Homo sapiens first evolved, apparently, human skulls became bigger until about 30,000 years ago. Then, with the rise of agriculture about 5,000 years ago, skulls began shrinking. The theory is the brain became more efficient, requiring less space, kind of like computers getting faster as they got smaller. So with our expanding skulls, are we getting smarter or dumber? Well, 600 years or so ago, the Romanian [xx] Vlad the Impaler used to put his enemies' heads on spikes. I don't think he measured them. The Vikings used to drink mead from human skulls. Maybe they used the skull itself as a measure. “Hey, barkeep, give me a skull of ale.” “I had six skulls last night. Boy, did I get hammered!”

[laughter]

Ian: We don't do that much anymore. Now, body parts have long been used as measurements. Noah's ark, for instance, is measured in cubits. A cubit is the length of a man's forearm, roughly a foot and a half. We measure horses by hands. And the horse itself is a unit of measurement: horsepower. A close race is neck-and-neck. Other handy measurement units that we still use include football fields. How many football fields is it to the moon? Texas. How big are countries compared to Texas? How many could fit inside Texas? TNT. Nuclear weapons are measured by their equivalent in TNT. And, of course, the Albert Hall, and how many holes it takes to fill it. Thanks to the Beatles we now have that information.

[laughter]

Ian: We also have the country mile, the New York minute, the baker's dozen. We have time measured in shakes of a lamb's tail. So I'm reading this article about how our expanded skulls could accommodate a tennis ball's worth of brain, and I realized there's a world of difference between the precision of science, with its parsecs and nanoseconds and Higgs bosons, and real-world measurements, chock-full, or at least half chock-full, of measurements like tennis balls, yea high, a skosh, a tad, dog ears, shot glasses, smidgens, a little bit, a whole lot, bunches, this much, and handfuls. So if the Singularity occurs we may indeed become as smart as a barrel of whips. But I suspect that humans will always be... not quite playing with a full deck, a sandwich short of a picnic, a few fries short of a Happy Meal, and exactly three tomatoes short of a salad. I got to go.

[laughter]

[outro music]

[copyrights and credits]

[thanks]

[disclaimer]