Machine Intelligence Research Institute

Yudkowsky-Hanson Jane Street Debate 2011

[Video] [Audio]


Speakers: Eliezer Yudkowsky and Robin Hanson

Transcriber(s): Ethan Dickinson and John Maxwell


Moderator: ...say what the statement is?

Eliezer Yudkowsky: I forget what the exact form of it was. The question is, "After all sorts of interesting technological things happen at some undetermined point in the future, are we going to see a very small nucleus that can or does control all the resources, or do we see a general, more civilization-wide, large fraction of society participating in all these things going down?"

Robin Hanson: I think, if I remember it, it was, "Compared to the industrial and farming revolutions, intelligence explosion first movers will soon dominate a larger fraction of the future world."

Eliezer: That's what I remember.

Moderator: There was a whole debate to get to this statement.

[laughter]

Moderator: Right, so, "for"...

Robin: We'll try to explain what those mean.

Moderator: "For" -- you're saying that you believe that the first movers will gain a large lead relative to first movers in the industrial and farming revolutions.

Robin: Right.

Moderator: If you agree with that statement, you're "for."

Robin: This side. [gestures to Eliezer]

Moderator: If you think it's going to be more broad-based...

Robin: Con. [gestures toward self]

Eliezer: Maybe a one-word thing would be "highly centralized," "highly decentralized." Does that sound like a one-word [inaudible 1:27]?

Robin: There has to be a cut-off in between the two "highly"s, so – [laughs] there's a middle ground.

Eliezer: With the cut-off point being the agricultural revolution, for example. Or no, that's actually not the cut-off point. That's your side.

Moderator: On the yellow sheet, if you're in favor, you write your name and "I'm in favor." If you're against, you write your name and "I'm against." Then pass them that way. Keep the colored sheet, that's going to be your vote afterwards. Eliezer and Robin are hoping to convert you.

Robin: Or have fun.

Moderator: What?

Robin: Or have fun trying.

Moderator: We're very excited at Jane Street today to have Eliezer Yudkowsky, Robin Hanson.

[applause]

Moderator: I'll keep the intros short so we can jump into the debate. Both are very highly regarded intellectuals who have been airing this debate for some time, so it should be a lot of fun.

[gestures to Robin Hanson] Professor of economics at George Mason University, one of the pioneers of prediction markets, going all the way back to 1988. Avid publisher. A co-founder of "Overcoming Bias," and now he's moved over to "Less Wrong."

Eliezer: Oh, I moved over to "Less Wrong," and he's at "Overcoming Bias."

Moderator: Eliezer, a co-founder of the Singularity Institute. Many, many publications. Without further ado, on to the debate, and, say, first five minutes.

[laughter]

Eliezer: Quick question. How many people here are already familiar with the differences between what Ray Kurzweil means when he uses the word "singularity" and the difference between what the Singularity Institute means when they use the word "singularity"? Raise your hand if you're already familiar with the difference. OK. I don't see a sea of hands. That means that I designed this talk correctly.

You've probably run across a word, "singularity." People use it with a lot of different and mutually incompatible meanings. When we named the Singularity Institute for Artificial Intelligence in 2000, it meant something pretty different then than now.

The original meaning comes from a mathematician and science fiction writer named Vernor Vinge, who coined the word "singularity" to describe the breakdown in his ability to model and imagine the future when he tried to extrapolate that model past the point where it predicted the technological creation of smarter-than-human intelligence. In this particular case, he was trying to write a story about a human with a brain-computer interface increasing his intelligence. The rejection letter he got from John Campbell said, "Sorry. You can't write this story. Neither can anyone else."

If you asked an ancient Greek from 2,500 years ago to imagine the modern world, in point of fact they wouldn't be able to, but they'd have much better luck imagining our world and would manage to get more things right than, say, a chimpanzee would. There are stories from thousands of years ago that still resonate with us today, because the minds, the brains haven't really changed over that time. If you change the brain, the mind, that implies a difference in the future that is different in kind from faster cars or interplanetary travel or curing cancer or bionic arms or similar such neat, cool, technological trivia, because that would not really have an impact on the future comparable to the rise of human intelligence 50,000 years ago.

The other thing is that intelligence is the source of technology – that is, this is ultimately the factor that produces the chairs, the floor, the projectors, this computer in front of me. If you tamper with this, then you would expect that to ripple down the causal chain; in other words, if you make this more powerful, you get a different kind of technological impact than you get from any one breakthrough.

I. J. Good, another mathematician, coined a related concept of the singularity when he pointed out that if you could build an artificial intelligence that was smarter than you, it would also be better than you at designing and programming artificial intelligence. This AI builds an even smarter AI, or, instead of building a whole other AI, just reprograms modules within itself; then that AI builds an even smarter one.

I. J. Good suggested that you'd get a positive feedback loop leading to what I. J. Good termed "ultraintelligence" but what is now generally called "superintelligence," and the general phenomenon of smarter minds building even smarter minds is what I. J. Good termed the "intelligence explosion."

You could get an intelligence explosion outside of AI – for example, humans with brain-computer interfaces designing the next generation of brain-computer interfaces – but the purest and fastest form of the intelligence explosion seems likely to be an AI rewriting its own source code.

This is what the Singularity Institute is actually about. If we'd foreseen what the word "singularity" was going to turn into, we'd have called ourselves the "Good Institute" or "The Institute for Carefully Programmed Intelligence Explosions."

[laughter]

Eliezer: Here at "The Institute for Carefully Programmed Intelligence Explosions," we do not necessarily believe or advocate that, for example, there was more change in the 40 years between 1970 and 2010 than the 40 years between 1930 and 1970.

I myself do not have a strong opinion that I could argue on this subject, but our president, Michael Vassar, our major donor, Peter Thiel, and Thiel's friend, Kasparov, who, I believe, recently spoke here, all believe that it's obviously wrong that technological change has been accelerating at all, let alone that it's been accelerating exponentially. This doesn't contradict the basic thesis that we would advocate, because you do not need exponentially accelerating technological progress to eventually get an AI. You just need some form of technological progress, period.

When we try to visualize how all this is likely to go down, we tend to visualize a scenario that someone else once termed "a brain in a box in a basement." I love that phrase, so I stole it. In other words, we tend to visualize that there's this AI programming team, a lot like the sort of wannabe AI programming teams you see nowadays, trying to create artificial general intelligence, like the artificial general intelligence projects you see nowadays. They manage to acquire some new deep insights which, combined with published insights in the general scientific community, let them go down into their basement and work in it for a while and create an AI which is smart enough to reprogram itself, and then you get an intelligence explosion.

One of the strongest critics of this particular concept of a localized intelligence explosion is Robin Hanson. In fact, it's probably fair to say that he is the strongest critic by around an order of magnitude and a margin so large that there's no obvious second contender.

[laughter]

Eliezer: How much time do I have left in my five minutes? Does anyone know, or..?

Moderator: You just hit five minutes, but...

Eliezer: All right. In that case, I'll turn you over to Robin.

[laughter]

Robin: We're going to be very flexible here, going back and forth, so there'll be plenty of time. I thank you for inviting us. I greatly respect this audience and my esteemed debate opponent here. We've known each other for a long time. We respect each other, we've talked a lot. It's a lot of fun to talk about this here with you all.

The key question here, as we agree, is this idea of a local intelligence explosion. That's what the topic's about. We're not talking about this idea of gradually accelerating change, where in 30 years everything you've ever heard about will all be true or more. We're talking about a world where we've had relatively steady change over a century, roughly, and we might have steady change for a while, and then the hypothesis is there'll be this sudden dramatic event with great consequences, and the issue is what is the nature of that event, and how will it play out.

This "brain in a box in a basement" scenario is where something that starts out very small, very quickly becomes very big. And the way it goes from being small to being very big is it gets better. It gets more powerful. So, in an essence, during this time this thing in the basement is outcompeting the entire rest of the world.

Now, as you know, or maybe you don't know, the world today is vastly more powerful than it has been in the past. The long-term history of your civilization, your species, has been a vast increase in capacity. From primates to humans with language, eventually developing farming, then industry and who knows where, over this very long time, lots and lots of things have been developed, lots of innovations have happened.

There's lots of big stories along the line, but the major, overall, standing-from-a-distance story is of relatively steady, gradual growth. That is, there are lots of inventions here, changes there, that add up to disruptions, but most of the disruptions are relatively small, and from a distance there's relatively steady growth. It's more steady even on the larger scales. If you look at a company like yours, or a city, even, like this, you'll have ups and downs, or even a country, but on the long time scale...

This is central to the idea of where innovation comes from, and that's the center of this debate, really. Where does innovation come from, where can it come from, and how fast can it come?

So with the brain in the box in the basement, within a relatively short time a huge amount of innovation happens. That is, this thing hardly knows anything, it's hardly able to do anything, and then within a short time it's able to do so much that it can basically take over the world and do whatever it wants, and that's the problem.

Now, let me stipulate right up front: there is a chance he's right. OK? Somebody ought to be working on that chance. He looks like a good candidate to me, so I'm fine with him working on this chance. I'm fine with there being a bunch of people working on the chance. My only dispute is about the probability. Some people seem to think this is the main, most likely thing that's going to happen. I think it's a small chance that's worth looking into and protecting against, so we all agree there. Our dispute is more about the chance of this scenario.

If you remember the old Bond villain – he had an island somewhere with jumpsuited minions, all wearing the same color, if I recall. They had some device they invented and Bond had to go in and shut it off. Usually, they had invented a whole bunch of devices back there; they just had a whole bunch of stuff going on.

Sort of the epitome of this might be Captain Nemo, from "20,000 Leagues Under the Sea." One guy off on his own island with a couple of people invented the entire submarine technology, if you believe the movie, undersea cities, nuclear weapons, et cetera, all within a short time.

Now, that makes wonderful fiction. You'd like to have a great powerful villain that everybody can go fight and take down, but in the real world it's very hard to imagine somebody isolated on an island with a few people inventing large amounts of technology, innovating, and competing with the rest of the world.

That's just not going to happen, it doesn't happen in the real world. In our world, so far, in history, it's been very rare for any one local place to have such an advantage in technology that it really could do anything remotely like take over the world.

In fact, if we look for major disruptions in history which might be parallel to what's being hypothesized here, the three major disruptions you might think about would be the introduction of something special about humans, perhaps language, the introduction of farming, and the introduction of industry.

Those three events – we're not sure exactly what was special about them – but for those three events the growth rate of the world economy suddenly, within a very short time, changed from something that was slow to something 100 or more times faster. Those would be candidates for things I would call singularities, that is, big, enormous disruptions.

In those singularities, the places that first had the new technology had varying degrees of how much an advantage they gave. Edinburgh gained some advantage by being the beginning of the Industrial Revolution, but it didn't take over the world. Northern Europe did more like take over the world, but even then it's not so much taken over the world. Edinburgh and parts of Northern Europe needed each other. They needed a large economy to build things together, so that limited... Also, people could copy. Even in the farming revolution, it was more like a 50/50 split between the initial farmers spreading out and taking over territory and the other locals copying them and interbreeding with them.

If you go all the way back to the introduction of humans, that was much more about one displaces all the rest because there was relatively little way in which they could help each other, complement each other, or share technology.

The issue here – and obviously I'm done with my five minutes – is, in this new imagined scenario, how plausible is it that something very small could have that much of an advantage? That whatever it has that's new and better gives it such an advantage that it can grow from something small, on even a town scale, to being bigger than the world, while it's competing against the entire rest of the world? In the previous innovation situations, even the most disruptive things that ever happened, the new first mover still only gained a modest advantage in terms of being a larger fraction of the new world.

I'll end my five minutes there.

Eliezer: The fundamental question of rationality is, what do you think you know and how do you think you know it? This is rather interesting and, in fact, it's rather embarrassing, because it seems to me like there's very strong reason to believe that we're going to be looking at a localized intelligence explosion.

Robin Hanson feels there's pretty strong reason to believe that we're going to be looking at a non-local general economic growth mode changeover. Calling it a singularity seems... Putting them all into the category of singularity is slightly begging the definitional question. I would prefer to talk about the intelligence explosion as a possible candidate for the reference class, economic growth mode changeovers.

Robin: OK.

Eliezer: The embarrassing part is that both of us know the theorem which shows that two rational agents cannot have common knowledge of a disagreement – Aumann's Agreement Theorem. So, since we know that the other person believes something different, we're supposed to have agreed by now, but we haven't. It's really quite embarrassing.

But the underlying question is, is the next big thing going to look more like the rise of human intelligence or is it going to look more like the Industrial Revolution? If you look at modern AI projects, the leading edge of artificial intelligence does not look like the product of an economy among AI projects.

They tend to write their own code. They tend not to use very much cognitive content that other AI projects have developed. They've been known to import libraries that have been published, but you couldn't look at that and say that an AI project which just used what had been published and then developed its own further code would suffer a disadvantage analogous to a country that tried to go its own way, apart from the rest of the world economy.

Rather, AI projects nowadays look a lot like species, which only share genes within a species and then the other species are all off going their own way.

[gestures to Robin] What is your vision of the development of intelligence or technology where things are getting traded very quickly, analogous to the global economy?

Robin: Let's back up and make sure we aren't losing people with some common terminology. I believe, like most of you do, that in the near future, within a century, we will move more of the knowledge and intelligence in our society into machines. That is, machines have a lot of promise as hardware substrate for intelligence. You can copy them. You can reproduce them. You can make them go faster. You can have them in environments. We are in complete agreement that eventually hardware, nonbiological hardware, silicon, things like that, will be a more dominant substrate of where intelligence resides. By intelligence, I just mean whatever mental capacities exist that allow us to do mental tasks.

We are a powerful civilization able to do many mental tasks, primarily because we rely heavily on bodies like yours with heads like yours where a lot of that stuff happens inside, biological heads. But we agree that in the future there will be much more of that happening in machines. The question is the path to that situation.

Now, our heritage, what we have as a civilization – a lot of it is the things inside people's heads. Part of it isn't what was in people's heads 50,000 years ago, but a lot of it is also just what was in people's heads 50,000 years ago. We have this common heritage of brains and minds that goes back millions of years to animals and was built up with humans, and that's part of our common heritage.

There's a lot in there. Human brains contain an enormous amount of things. I think it's not just one or two clever algorithms or something, it's this vast pool of resources. It's like comparing it to a city, like New York City. New York City is a vast, powerful thing because it has lots and lots of stuff in it.

When you think in the future there will be these machines and they will have a lot of intelligence in them, one of the key questions is, "Where will all of this vast mental capacity that's inside them come from?" Where Eliezer and I differ, I think, is that I think we all have this vast capacity in our heads and these machines are just way, way behind us at the moment, and basically they have to somehow get what's in our head transferred over to them somehow. Because if you just put one box in a basement and ask it to rediscover the entire world, it's just way behind us. Unless it has some, almost inconceivable advantage over us at learning and growing and discovering things for itself, it's just going to remain way behind unless there's some way it can inherit what we have.

Eliezer: OK. I gave a talk here at Jane Street that was on the speed of evolution. Raise your hand if you were here for this and remember some of it. OK.

[laughter]

Eliezer: There's a single, simple algorithm which produced the design for the human brain. It's not a very good algorithm, it's extremely slow. It took it millions and millions and billions of years to cough up this artifact over here [gestures to head]. Evolution is so simple and so slow that we can even make mathematical statements about how slow it is, such as the two separate bounds that I've seen calculated for how fast evolution can work, one of which is on the order of one bit per generation.

In the sense that, let's say two parents have 16 children, then on average, all but 2 of those children must die or fail to reproduce or the population goes to zero or infinity very rapidly. 16 cut down to 2, that would be three bits of selection pressure per generation. There's another argument which says that it's faster than this.
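
For concreteness, here is that arithmetic spelled out as a small sketch, using only the numbers Eliezer mentions (the code is illustrative, not part of the talk):

```python
import math

# A rough check of the arithmetic above: 16 children cut down to 2 survivors
# is a factor of 8, and log2(8) = 3 bits of selection per generation.
children_per_pair = 16
survivors_per_pair = 2  # replacement level, so the population stays stable

bits_per_generation = math.log2(children_per_pair / survivors_per_pair)
print(bits_per_generation)  # 3.0
```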

But if you actually look at the genome, then we've got about 30,000 genes in here, most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it, and the brain is simply not a very complicated artifact by comparison to, say, Windows Vista. Now, the complexity that it does have, it uses a lot more effectively than Windows Vista does. It probably contains a number of design principles which Microsoft knows not.

But nonetheless, what I'm trying to say is... I'm not saying that it's that small because it's 750 megabytes, I'm saying it's got to be that small because most of it, at least 90 percent of the 750 megabytes is junk and there's only 30,000 genes for the whole body, never mind the brain.
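
The 750-megabyte figure follows from a standard back-of-the-envelope calculation that may be worth making explicit; the sketch below assumes roughly 3 billion base pairs at 2 bits each, which is the usual way that number is reached:

```python
# Back-of-the-envelope for the genome figures quoted above (illustrative only).
base_pairs = 3_000_000_000          # roughly 3 billion base pairs in the human genome
bits = base_pairs * 2               # 2 bits per base: A, C, G or T
megabytes = bits / 8 / 1_000_000    # -> 750.0 MB, the "750 megabytes of DNA"
non_junk_fraction = 0.1             # "at least 90 percent ... is junk"
print(megabytes, megabytes * non_junk_fraction)  # 750.0 75.0
```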

That something that simple can be this powerful and this hard to understand is a shock. But if you look at the brain design, it's got 52 major areas on each side of the cerebral cortex, distinguishable by the local pattern, the tiles and so on. It just doesn't really look all that complicated. It's very powerful. It's very mysterious. What we can say about it is that it involves some number of deep, major mathematical insights into the nature of intelligence that we need to comprehend before we can build it.

This is probably one of the more intuitive, less easily quantified things, rather than something argued by reference to large bodies of experimental evidence. It's more a sense of: well, you read through "The MIT Encyclopedia of Cognitive Sciences" and you read Judea Pearl's "Probabilistic Reasoning in Intelligent Systems." Here's an insight. It's an insight into the nature of causality. How many more insights of this size do we need, given what "The MIT Encyclopedia of Cognitive Sciences" seems to indicate we already understand and what it doesn't? You take a gander at it, and you say there's probably about 10 more insights. Definitely not 1. Not 1,000. Probably not 100 either.

Robin: Clarify what's at issue. The question is, what makes your human brain powerful?

Most people who look at the brain and compare it to other known systems have said things like "It's the most complicated system we know," or things like that. Automobiles are also powerful things, but they're vastly simpler than the human brain, at least in terms of the fundamental constructs.

But the question is, what makes the brain powerful? Because we won't have a machine that competes with the brain until we have it have whatever the brain has that makes it so good. So the key question is, what makes the brain so good?

I think our dispute in part comes down to an inclination toward architecture or content. That is, one view is that there's just a clever structure and if you have that basic structure, you have the right sort of architecture, and you set it up that way, then you don't need very much else, you just give it some sense organs, some access to the Internet or something, and then it can grow and build itself up because it has the right architecture for growth. Here we mean architecture for growth in particular, what architecture will let this thing grow well?

Eliezer hypothesizes that there are these insights out there, and you need to find them. And when you find enough of them, then you can have something that competes well with the brain at growing because you have enough of these architectural insights.

My opinion, which I think many AI experts will agree with at least, including say Doug Lenat who did the Eurisko program that you most admire in AI [gesturing toward Eliezer], is that it's largely about content. There are architectural insights. There are high-level things that you can do right or wrong, but they don't, in the end, add up to enough to make vast growth. What you need for vast growth is simply to have a big base.

In the world, there are all these nations. Some are small. Some are large. Large nations can grow larger because they start out large. Cities, like New York City, can grow larger because they start out as a larger city.

If you took a city like New York and you said, "New York's a decent city. It's all right. But look at all these architectural failings. Look how this is designed badly or that's designed badly. The roads are in the wrong place or the subways are in the wrong place or the building heights are wrong, the pipe format is wrong. Let's imagine building a whole new city somewhere with the right sort of architecture." How good would that better architecture have to be?

You clear out some spot in the desert. You have a new architecture. You say, "Come, world, we have a better architecture here. You don't want those old cities. You want our new, better city." I predict you won't get many comers because, for cities, architecture matters, but it's not that important. It's just lots of people being there and doing lots of specific things that makes a city better.

Similarly, I think that for minds, what matters is that it just has lots of good, powerful stuff in it, lots of things it knows, routines, strategies, and there isn't that much at the large architectural level.

Eliezer: The fundamental thing about our modern civilization is that everything you've ever met that you bothered to regard as any sort of ally or competitor had essentially exactly the same architecture as you.

The logic of evolution in a sexually reproducing species is that you can't have half the people having a complex machine that requires 10 genes to build, because then, if all the individual genes are at 50 percent frequency, the whole thing only gets assembled 0.1 percent of the time. Everything evolves piece by piece, piecemeal. This, by the way, is standard evolutionary biology. It's not a creationist argument. I just thought I would emphasize that in case anyone was... This is bog-standard evolutionary biology.
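
The "0.1 percent" figure is just independent 50/50 chances multiplied together; a minimal check, assuming the 10 genes in the example are inherited independently:

```python
# If a complex machine needs 10 genes and each is at 50 percent frequency,
# the chance that one individual carries all 10 is 0.5 ** 10.
genes_required = 10
allele_frequency = 0.5
p_fully_assembled = allele_frequency ** genes_required
print(p_fully_assembled)  # 0.0009765625, i.e. roughly 0.1 percent
```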

Everyone you've met, unless they've suffered specific brain damage or a specific genetic deficit, they have all the same machinery as you. They have no complex machine in their brain that you do not have.

Our nearest neighbors, the chimpanzees, who have 95 percent shared DNA with us... Now, in one sense, that may be a little misleading, because what they don't share is probably more heavily focused on brain than body-type stuff, but on the other hand, you can look at those brains. You can put the brains through an MRI. They have almost exactly the same brain areas as us. We just have larger versions of some brain areas. I think there's one sort of neuron that we have and they don't, or possibly they have it but only in very tiny quantities.

This is because there have been only five million years since we split off from the chimpanzees. There simply has not been time to do any major changes to brain architecture in five million years. It's just not enough to do really significant complex machinery. The intelligence we have is the last layer of icing on the cake, and yet, if you look at the sort of curve of evolutionary optimization going into the hominid line versus how much optimization power it put out, how much horsepower the intelligence had, it goes like this. [gestures a flat line, then a sharp vertical increase, then another flat line]

If we look at the world today, we find that taking a little bit out of the architecture produces something that is just not in the running as an ally or a competitor when it comes to doing cognitive labor. Chimpanzees don't really participate in the economy at all, in fact, but the key point from our perspective is that although they are in a different environment, they grow up learning to do different things, and there genuinely are skills that chimpanzees have that we don't, such as being able to poke a branch into an anthill and draw it out in such a way as to have it covered with lots of tasty ants. Nonetheless, there are no branches of science where the chimps do better, because they have mostly the same architecture and more relevant content.

It seems to me, at least, that if we look at the present cognitive landscape, we're getting really strong information that – pardon me... You can imagine that we're trying to reason from one sample, but then pretty much all of this is reasoning from one sample in one way or another. We're seeing that, in this particular case at least, humans can develop all sorts of content that lets them totally outcompete other animal species, who have been doing things for millions of years longer than we have, by virtue of architecture, and anyone who doesn't have the architecture isn't really in the running for it.

Robin: So something happened to humans. I'm happy to grant that humans are outcompeting all the rest of the species on the planet.

We don't know exactly what it is about humans that was different. We don't actually know how much of it was architecture, in a sense, versus other things. But what we can say, for example, is that chimpanzees actually could do a lot of things in our society, except they aren't domesticated.

The animals we actually use are a very small fraction of the animals out there. It's not because they're smarter, per se, it's because they are just more willing to be told what to do. Most animals aren't willing to be told what to do. If chimps were willing to be told what to do, there are a lot of things we could have them do. "Planet of the Apes" would actually be a much more feasible scenario. It's not clear that their cognitive abilities are really that lagging, more that their social skills are lacking.

The more fundamental point is that since a million years ago, when humans probably first had language, we have become a vastly more powerful species, and that's because we used this ability to collect cultural content and built up a vast society that contains so much more. I think that if you took humans and made some architectural improvements to them and put a pile of them off in the forest somewhere, we're still going to outcompete them if they're isolated from us, because we just have this vaster base that we have built up since then.

Again, the issue comes down to: how important is architecture? Even if something happened such that some architectural thing finally enabled humans to have culture, to share culture, to have language, to talk to each other – that was powerful. The question is, how many more of those are there? Because we have to hypothesize not just that there are one or two, but that there are a whole bunch of these things, because that's the whole scenario, remember?

The scenario is box in a basement, somebody writes the right sort of code, turns it on. This thing hardly knows anything, but because it has all these architectural insights, it can in a short time, take over the world. There have to be a lot of really powerful architectural low-hanging fruit to find in order for that scenario to work. It's not just a few ways in which architecture helps, it's architecture dominates.

Eliezer: I'm not sure I would agree that you need lots of architectural insights like that. I mean, to me, it seems more like you just need one or two.

Robin: But one architectural insight allows a box in a basement that hardly knows anything to outcompete the entire rest of the world?

Eliezer: Well, if you look at humans, they outcompeted everything evolving, as it were, in the sense that there was this one optimization process, natural selection, that was building up content over millions and millions and millions of years, and then there's this new architecture which can all of a sudden generate vast amounts...

Robin: So humans can accumulate culture, but you're thinking there's another thing that's meta-culture that these machines will accumulate that we aren't accumulating?

Eliezer: I'm pointing out that the time scale for generating content underwent this vast temporal compression. In other words, content that used to take millions of years to generate can now be done on the order of hours.

Robin: So cultural evolution can happen a lot faster?

Eliezer: Well, for one thing, I could make an unimpressively non-abstract observation: this thing [picks up laptop] does run at around 2 billion hertz and this thing [points at head] runs at about 200 hertz.

Robin: Right.

Eliezer: If you can have architectural innovations which merely allow this thing [picks up laptop] to do the same sort of thing that this thing is doing [points to head], only a million times faster, then that million times faster means that 31 seconds works out to about a subjective year and all the time between ourselves and Socrates works out to about eight hours. It may look like it's –
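
A rough sketch of the speed comparison being gestured at here, taking the round million-fold figure at face value (the numbers are only the ones mentioned in this exchange; the code itself is illustrative):

```python
# Clock-speed ratio and what a million-fold subjective speedup does to 31 seconds.
laptop_hz = 2e9       # "around 2 billion hertz"
brain_hz = 200.0      # "about 200 hertz"
print(laptop_hz / brain_hz)            # 10,000,000 -- the raw hardware clock ratio

speedup = 1e6                          # the round "million times faster" figure
wall_clock_seconds = 31
subjective_seconds = wall_clock_seconds * speedup
seconds_per_year = 365.25 * 24 * 3600
print(subjective_seconds / seconds_per_year)   # ~0.98 -> about a subjective year
```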

Robin: Lots of people have those machines in their basements. You have to imagine that your basement has something better. They have those machines. You have your machines. Your machine has to have this architectural advantage that beats out everybody else's machines in their basements.

Eliezer: Hold on, there's two sort of separate topics here. Previously, you did seem to me to be arguing that we just shouldn't expect that much of a speedup. Then there's the separate question of, "Well, suppose the speedup was possible, would one basement get it ahead of other basements?"

Robin: To be clear, the dispute here is that I grant fully that these machines are wonderful and we will move more and more of our powerful content to them and they will execute rapidly and reliably in all sorts of ways to help our economy grow quickly, and in fact, I think it's quite likely that the economic growth rate could accelerate and become much faster. That's with the entire world economy working together, sharing these things, exchanging them and using them.

But now the scenario is, in a world where people are using these as best they can with their best architecture, best software, best approaches for the computers, one guy in a basement has a computer that's not really much better than anybody else's computer in a basement except that it's got this architectural thing that allows it to within a few weeks take over the world. That's the scenario.

Eliezer: Again, you seem to be conceding much more probability. I'm not sure to what degree you think it's likely, but you do seem to be conceding much more probability that there is, in principle, some program where if it was magically transmitted to us, we could take a modern day large computing cluster and turn it into something that could generate what you call content a million times faster.

To the extent that that is possible, the whole brain in a box scenario thing does seem to become intuitively more credible. To put it another way, if you just couldn't have an architecture better than this [points to head], if you couldn't run at faster speeds than this, if all you could do was use the same sort of content that had been laboriously developed over thousands of years of civilization and you couldn't really generate, and there wasn't really any way to generate content faster than that, then the "foom" scenario does go out the window.

If, on the other hand, there's this gap between where we are now and this place where you can generate content millions of times faster, then there is a further issue of whether one basement gets that ahead of other basements, but it suddenly does become a lot more plausible if you had a civilization that was ticking along just fine for thousands of years, generating lots of content, and then something else came along and just sucked all that content that it was interested in off the Internet, and...

Robin: We've had computers for a few decades now. This idea that once we have computers, innovation will speed up, we've already been able to test that idea, right? Computers are useful in some areas as complementary inputs, but they haven't overwhelmingly changed the growth rate of the economy. We've got these devices. They run a lot faster, but where we can use them, we use them, but overall limitations to innovation are much more about having good ideas and trying them out in the right places, and pure computation isn't, in our world, that big an advantage in doing innovation.

Eliezer: Yes, but it hasn't been running this algorithm, only faster [gestures to head]. It's been running spreadsheet algorithms. I fully agree that spreadsheet algorithms are not as powerful as the human brain. I mean, I don't know if there's any animal that builds spreadsheets, but if they do, they would not have taken over the world thereby.

Robin: Right. When you point to your head, you say, "This algorithm." There are millions of algorithms in there. We are slowly making your laptops include more and more kinds of algorithms, the sorts of things that are in your head. The question is, will there be some sudden threshold where entire heads go into the laptops all at once, or do laptops slowly accumulate the various kinds of innovations that heads contain?

Eliezer: Let me try to take it down a level in concreteness. The idea is there are key insights, you can use them to build an AI. You've got a brain in the box in a basement team. They take the key insights, they build the AI, the AI goes out, sucks a lot of information off the Internet, duplicating a lot of content that way because it's stored in a form where it can understand it on its own and download it very rapidly and absorb it very rapidly.

Then, in terms of taking over the world, nanotechnological progress is not that far ahead of its current level, but this AI manages to crack the protein folding problem, so it can email something off to one of those places that will take an emailed DNA string and FedEx you back the proteins in 72 hours. There are places like this. Yes, we have them now.

Robin: So, we grant that if there's a box somewhere that's vastly smarter than anybody on Earth, or vastly smarter than any million people on Earth, then we've got a problem. The question is, how likely is that scenario?

Eliezer: No, what I'm trying to distinguish here is the question of does that potential exist versus is that potential centralized. To the extent that you say, "OK. There would in principle be some way to know enough about intelligence that you could build something that could learn and absorb existing content very quickly."

In other words, I'm trying to separate out the question of "How dumb is this thing [points to head], how much smarter can you build an agent, and if that agent were teleported into today's world, could it take over?" from the question of "Who develops it, in what order, and were they all trading insights, or was it more like a modern-day financial firm where you don't show your competitors your key insights – or, for that matter, like modern artificial intelligence programs?"

Robin: I grant that a head like yours could be filled with lots more stuff, such that it would be vastly more powerful. I will call most of that stuff "content," you might call it "architecture," but if it's a million little pieces, architecture is kind of content. The key idea is, is there one or two things, such that, with just those one or two things, your head is vastly, vastly more powerful?

Eliezer: OK. So what do you think happened between chimps and humans?

Robin: Something happened, something additional. But the question is how many more things are there like that?

Eliezer: One obvious thing is just the speed. You do –

Robin: Between chimps and humans, we developed the ability to transmit culture, right? That's the obvious explanation for why we've been able to grow faster. Using language, we've been able to transmit insights and accumulate them socially rather than in the genes, right?

Eliezer: Well, people have tried raising chimps in human surroundings, and they absorbed this mysterious capacity for abstraction that sets them apart from other chimps. There's this wonderful book about one of these chimps, Kanzi was his name. Very, very famous chimpanzee, probably the world's most famous chimpanzee, and probably the world's smartest chimpanzee as well. They were trying to teach his mother to do these human things. He was just a little baby chimp, he was watching. He picked stuff up. It's amazing, but nonetheless he did not go on to become the world's leading chimpanzee scientist using his own chimpanzee abilities separately.

If you look at human beings, then we have this enormous processing object containing billions upon billions of neurons, and people still fail the Wason selection task. They cannot figure out which playing card they need to turn over to verify the rule, "If a card has an even number on one side, it has a vowel on the other." They can't figure out which cards they need to turn over to verify whether this rule is true or false.

Robin: Again, we're not distinguishing architecture and content here. I grant that you can imagine boxes the size of your brain that are vastly more powerful than your brain. The question is, what could create a box like that? The issue here is I'm saying the way something like that happens is through the slow accumulation of improvement over time the hard way. There's no shortcut of having one magic innovation that jumps you there all at once. I'm saying that –

I wonder if we should ask for questions and see if we've lost the audience by now.

Eliezer: Yeah. It does seem to me that you're sort of equivocating between arguing that the gap doesn't exist or isn't crossable versus saying the gap is crossed in a decentralized fashion. But I agree that taking some sort of question from the audience might help refocus this.

Robin: Help us.

Eliezer: Yes. Does anyone want to..?

Robin: We lost you?

Audience Member: Isn't one of the major advantages..?

Eliezer: Voice, please.

Man 1: Isn't one of the major advantages that humans have over animals the prefrontal cortex? More of the design than content?

Robin: I don't think we know, exactly.

Woman 1: Robin, you were hypothesizing that it would be a series of many improvements that would lead to this vastly smarter meta-brain.

Robin: Right.

Woman 1: But if the idea is that each improvement makes the next improvement that much easier, then wouldn't it quickly, quickly look like just one or two improvements?

Robin: The issue is the spatial scale on which improvement happens. For example, if you look at, say, programming languages, a programming language with a lot of users, compared to a programming language with a small number of users, the one with a lot of users can accumulate improvements more quickly, because there are many...

[laughter]

Robin: There are ways you might resist it too, of course. But there are just many people who could help improve it. Or similarly, with something else that gets used by many users, they can help improve it. It's not just what kind of thing it is, but how large a base of people are helping to improve it.

Eliezer: Robin, I have a slight suspicion that Jane Street Capital is using its own proprietary programming language.

[laughter]

Robin: Right.

Eliezer: Would I be correct in that suspicion?

Robin: Well, maybe you get advantages.

Man 2: It's not proprietary – esoteric.

Robin: Esoteric. But still, it's a tradeoff you have. If you use your own thing, you can be specialized. It can be all yours. But you have fewer people helping to improve it.

If we have the thing in the basement, and it's all by itself, it's not sharing innovations with the rest of the world in some large research community that's building on each other, it's just all by itself, working by itself, it really needs some other advantage that is huge to counter that. Because otherwise we've got a scenario where people have different basements and different machines, and they each find a little improvement and they share that improvement with other people, and they include that in their machine, and then other people improve theirs, and back and forth, and all the machines get better and faster.

Eliezer: Well, present-day artificial intelligence does not actually look like that. So you think that in 50 years artificial intelligence or creating cognitive machines is going to look very different than it does right now.

Robin: Almost every real industrial process pays attention to integration in ways that researchers off on their own trying to do demos don't. People inventing the first cars didn't have to make a car that matched a road and a filling station and everything else; they just made a new car and said, "Here's a car. Maybe we should try it." But once you have an automobile industry, you have a whole set of suppliers and manufacturers and filling stations and repair shops and all this that are matched and integrated to each other. In a large, actual economy of smart machines with pieces, they would have standards, and there would be strong economic pressures to match those standards.

Eliezer: Right, so a very definite difference of visualization here is that I expect the dawn of artificial intelligence to look like someone successfully building a first-of-its-kind AI that may use a lot of published insights and perhaps even use some published libraries but it's nonetheless a prototype, it's a one-of-a-kind thing, it was built by a research project.

And you're visualizing that at the time interesting things start to happen, or maybe even there is no key threshold, because there's no storm of recursive self-improvements, you're visualizing just like everyone gets slowly better and better at building smarter and smarter machines. There's no key threshold.

Robin: I mean, it is the sort of Bond villain, Captain Nemo on his own island doing everything, beating out the rest of the world isolated, versus an integrated...

Eliezer: Or rise of human intelligence. One species beats out all the other species. We are not restricted to fictional examples.

Robin: Humans couldn't share with the other species, so there was a real limit.

Man 3: In one science fiction novel, I don't remember its name, there was a very large storm of nanobots. These nanobots had been created so long ago that no one knew what the original plans were. You could ask the nanobots for their documentation, but there was no method – they'd sometimes lie. You couldn't really trust the manual they gave you. I think one question here is whether there's a boundary where we hit the point where suddenly someone's created software that we can't actually understand, like it's not actually [inaudible 46:13] –

Robin: We're there. [laughs]

Man 3: Well, so are we actually there... so, Hanson –

Robin: We've got lots of software we don't understand. Sure. [laughs]

Man 3: But we can still understand it at a very local level, disassemble it. It's pretty surprising to what extent Windows has been reverse engineered by the millions of programmers who work on it. I was going to ask you if getting to that point was key to the resulting exponential growth, which is not permitting the transfer of information. Because if you can't understand the software, you can't transmit the insights using your own [inaudible 46:53].

Eliezer: That's not really a key part of my visualization. I think that there's a sort of mysterian tendency, like people who don't know how neural networks work are very impressed by the fact that you can train neural networks to do something you don't know how it works. As if your ignorance of how they worked was responsible for making them work better somehow. So ceteris paribus, not being able to understand your own software is a bad thing.

Robin: Agreed.

Eliezer: I wasn't really visualizing there being a key threshold where incomprehensible software is a... Well OK. The key piece of incomprehensible software in this whole thing is the brain. This thing is not end-user modifiable. If something goes wrong you can't just swap out one module and plug in another one, and that's why you die. You die, ultimately, because your brain is not end-user modifiable and doesn't have IO ports or hot-swappable modules or anything like that.

The reason why I expect localist sort of things is that I expect one project to go over the threshold for intelligence in much the same way that chimps went over the threshold of intelligence and became humans. Yes, I know that's not evolutionarily accurate.

Then they have this functioning mind, to which they can make all sorts of interesting improvements and have it run even better and better. Meanwhile, all the other cognitive work on the planet is being done by these non-end-user-modifiable human intelligences, which cannot really make very good use of the insights – although it is an intriguing fact that after spending some time trying to figure out artificial intelligence, I went off and started blogging about human rationality.

Man 4: I just wanted to clarify one thing. Would you guys both agree – well, I know you would agree – would you agree, Robin, that in your scenario, if one... just imagine one had a time machine that could carry a physical object the size of this room, and you could go forward 1,000 years into the future and essentially create and bring back to the present day an object, say, the size of this room – that you could take over the world with that?

Robin: Aye aye without doubt.

Man 4: OK. The question is whether that object is –

Eliezer: Point of curiosity. Does this work too? [holds up cell phone] Object of this size?

Robin: Probably.

Eliezer: Yeah. I figured [inaudible 49:21] [laughs]


Man 4: The question is, does the development of that object essentially happen in a very asynchronous way or more broadly?

Robin: I think I should actually admit that there is a concrete scenario that I can imagine that fits much more of his concerns. I think that the most likely way that the content that's in our heads will end up in silicon is something called "whole brain emulation," where you take actual brains, scan them, and make a computer model of that brain, and then you can start to hack them to take out the inefficiencies and speed them up.

If the time at which it was possible to scan a brain and model it sufficiently was a time when the computer power to actually run those brains was very cheap, then you have more of a computing cost overhang, where the first person who can manage to do that can then make a lot of it very fast, and then you have more of your scenario. It's because, with emulation, there is this sharp threshold. Until you have a functioning emulation, you just have shit, because it doesn't work, and then when you have it work, it works as well as [indecipherable 50:22].

Eliezer: Right. So, in other words, we get a centralized economic shock, because there's a curve here that has a little step function in it. If I can step back and describe what you're describing on a higher level of abstraction, you have emulation technology that is being developed all over the world, but there's this very sharp threshold in how well the resulting emulation runs as a function of how good your emulation technology is. The output of the emulation experiences a sharp threshold.

Robin: Exactly.

Eliezer: In particular, you can even imagine there's a lab that builds the world's first correctly functioning scanner. It would be a prototype, one-of-its-kind sort of thing. It would use lots of technology from around the world, and it would be very similar to other technology from around the world, but because they got it, you know, there's one little extra year they added on, they are now capable of absorbing all of the content in here [points at head] at an extremely great rate of speed, and that's where the first-mover effect would come from.

Robin: Right. The key point is for an emulation there's this threshold. If you get it almost right, you just don't have something that works. When you finally get enough, then it works, and you get all the content through. It's like if some aliens were sending a signal and we just couldn't decode their signal. It was just noise, and then finally we figured out the code, and then we got a high bandwidth rate and they're telling us lots of technology secrets. That would be another analogy, a sharp threshold where suddenly you get lots of stuff.

Eliezer: So you think there's a mainline, like, higher-than-50-percent probability that we get this sort of threshold with emulations?

Robin: It depends on which is the last technology to be ready for emulations. If computing is cheap when the rest is ready, then we have this risk. I actually think that's relatively unlikely – I think the computing will still be expensive when the other things are ready, but...

Eliezer: But there'd still be a speed-of-content-absorption effect, it just wouldn't give you lots of emulations very quickly.

Robin: Right. It wouldn't give you this huge economic power.

Eliezer: And similarly, with chimpanzees we also have some indicators that at least their ability to do abstract science... There's what I like to call the "one wrong number" function, or the "one wrong number" curve, where dialing 90 percent of my phone number correctly does not get you 90 percent of Eliezer Yudkowsky.

Robin: Right.

Eliezer: So similarly, dialing 90 percent of human correctly does not get you a human – or 90 percent of a scientist.

Robin: I'm more skeptical that there's this architectural thing between humans and chimps. I think it's more about the social dynamic of, "We managed to have a functioning social situation..."

Eliezer: Why can't we raise chimps to be scientists?

Robin: Most animals can't be raised to be anything in our society. Most animals aren't domesticable. It's a matter of whether they evolved the social instincts to work together.

Eliezer: But Robin, do you actually think that if we could domesticate chimps they would make good scientists?

Robin: They would certainly be able to do a lot of things in our society. There are a lot of roles in even scientific labs that don't require that much intelligence.

[laughter]

Eliezer: OK, so they can be journal editors, but can they actually be innovators? [laughs]

[laughter]

Robin: For example.

Man 5: My wife's a journal editor!

[laughter]

Robin: Let's take more questions.

Eliezer: My sympathies.

[laughter]

Robin: Questions.

Man 6: Professor Hanson, you seem to have the idea that social skill is one of the main things that separates humans from chimpanzees. Can you envision a scenario where one of the computers acquires this social skill and comes to the other computers and says, "Hey, guys, we can start a revolution here"?

[laughter]

Man 6: Maybe that the first mover, then? That that might be the first mover?

Robin: One of the nice things about the vast majority of software in our world is that it's really quite socially compliant. You can take a chimpanzee and bring him in and you can show him some tasks and then he can do it for a couple of hours. Then just some time randomly in the next week he'll go crazy and smash everything, and that ruins their entire productivity. Software doesn't do that so often.

[laughter]

Eliezer: No comment. [laughs]

[laughter]

Robin: Software, the way it's designed, it's set up to be relatively socially compliant. Assuming that we continue having software like that, we're relatively safe. If you go out and design software like wild chimps, that can just go crazy and smash stuff once in a while, I don't think I want to buy your software. [laughs]

Man 7: I don't know if this sidesteps the issue, but to what extent do either of you think something like government classification or the desire of some more powerful body to innovate and then keep what it innovates secret could affect centralization to the extent you were talking about?

Eliezer: As far as I can tell, what happens when the government tries to develop AI is nothing, but that could just be an artifact of our local technological level and it might change over the next few decades.

To me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense; it's more that it's confusing. We know why it's difficult to build a star. You've got to gather a very large amount of interstellar hydrogen in one place. We understand what sort of labor goes into a star and we know why a star is difficult to build.

When it comes to building a mind, we don't know how to do it, so it seems very hard. We query our brains to say, "Map us a strategy to build this thing," and it returns null, so it feels like it's a very difficult problem. But in point of fact, we don't actually know that the problem is difficult apart from being confusing.

We understand the star-building problem. We know it's difficult. This one, we don't know how difficult it's going to be after it's no longer confusing. So, to me, the AI problem looks like a problem of finding bright enough researchers, bringing them together, and letting them work on that problem instead of demanding that they work on something where they're going to produce a progress report in two years which will validate the person who approved the grant and advance their career.

The government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. This is not a universal statement. I've met smart senior people in AI, but nonetheless.

Basically I'm not very afraid of the government, because I don't think it's a "throw warm bodies at the problem" problem, and I don't think it's a "throw warm computers at the problem" problem. I think it takes good methodology, good people selection, letting them do sufficiently blue-sky stuff, and so far, historically, the government has just been tremendously bad at producing that kind of progress. When they have a great big project and try to build something, it doesn't work. When they fund long-term research [inaudible 57:48].

Robin: I agree with Eliezer that, in general, you too often go down the route of trying to grab something before it's grabbable. But there is the scenario, certainly in the midst of a total war, when you have a technology that seems to have strong military applications and not much in the way of other applications, where you'd be wise to keep that application within the nation or your side of the alliance in the war.

But there's too much of a temptation to use that sort of thinking when you're not in a war or when the technology isn't directly military-applicable but has several steps of indirection. You can often just screw it up by trying to keep it secret.

That is, your tradeoff is between trying to keep it secret and getting this advantage versus putting this technology into the pool of technologies that the entire world develops together and shares, and usually that's the better way to get advantage out of it – unless, again, you can identify a very strong military application and a particular use.

Eliezer: That sounds like a plausible piece of economic logic, but it seems plausible to the same extent as the economic logic which says there should obviously never be wars because they're never Pareto optimal. There's always a situation where you didn't spend any of your resources in attacking each other, which was better. And it sounds like the economic logic which says that there should never be any unemployment because of Ricardo's Law of Comparative Advantage, which means there's always someone who you can trade with.

If you look at the state of present-world technological development, there's basically either published research or proprietary research. We do not see corporations in closed networks where they trade their research with each other, but not with the outside world. There's either published research, with all the attendant free-rider problems that implies, or there's proprietary research. As far as I know, may this room correct me if I'm mistaken, there is not a set of, like, three leading trading firms which are trading all of their internal innovations with each other and not with the outside world.

Robin: If you're a software company, and you locate in Silicon Valley, you've basically agreed that a lot of your secrets will leak out, as your employees come in and leave your company. Choosing where to locate a company is often a choice to accept a certain level of leakage of what happens within your... in trade for a leakage from the other companies back toward you. So, in fact, people who choose to move to those areas in those industries do in fact choose to have a set of...

Eliezer: But that's not trading innovations with each other and not with the rest of the outside world. I can't actually even think of where we would see that pattern.

Robin: It is. More trading with the people in the area than with the rest of the world.

Eliezer: But that's coincidental side-effect trading. That's not deliberate, like, "you scratch my back..."

Robin: But that's why places like that get the big advantage, because you go there and lots of stuff gets traded back and forth.

Eliezer: Yes, but that's the commons. It's like a lesser form of publication. It's not a question of me offering this company an innovation in exchange for their innovation.

Robin: Well, we've probably gotten a little sidetracked. Other...

Man 8: It's actually relevant to this little... It seems to me that there's both an economic and social incentive for people to release partial results and imperfect products and steps along the way, which it seems would tend to yield a more gradual approach towards this breakthrough that we've been discussing. Do you disagree? I know you disagree, but why do you disagree?

Eliezer: Well, here at the Singularity Institute, we plan to keep all of our most important insights private and hope that everyone else releases their results.

[laughter]

Man 8: Right, but... human-inspired innovations haven't worked that way, which then I guess –

Eliezer: Well, we certainly hope everyone else thinks that way.

[laughter]

Robin: Usually you don't have a policy about having these things leaked, but in fact you make various social choices that you know will lead to leaks, and you accept those leaks in trade for the other advantages those policies bring. Often those advantages are that you are getting leaks from others. So locating yourself in a city where there are lots of other firms, or sending your people to conferences that other people are also going to, those are often ways in which you end up leaking and getting leaks in trade.

Man 8: So the team in the basement won't release anything until they've got the thing that's going to take over the world?

Eliezer: Right. We were not planning to have any windows in the basement.

[laughter]

Man 9: Why do we think that...

Eliezer: If anyone has a microphone that can be set up over here, I will happily donate this microphone.

Man 9: Why do we think that if we manage to create an artificial human brain, that it would immediately work much, much faster than a human brain? What if a team in the basement makes an artificial human brain, but it works at one billionth the speed of a human brain? Wouldn't that give other teams enough time to catch up?

Eliezer: First of all, the course we're visualizing is not like building a human brain in your basement, because, based on what we already understand about intelligence, we don't understand everything, but we understand some things, and what we understand seems to me to be quite sufficient to tell you that the human brain is a completely crap design, which is why it can't solve the Wason selection task.

You pick up any bit of the heuristics and biases literature and there are 100 different ways that this thing reliably, experimentally malfunctions when you give it some simple-seeming problems. You wouldn't actually want to build anything that worked like the human brain. It would miss the entire point of trying to build a better intelligence.

But if you were to scan a brain – and this is more something that Robin has studied in more detail than I have – then the first one might run at one thousandth your speed or might run at 1,000 times your speed. It depends on the hardware overhang, on what the cost of computer power happens to be at the point where your scanners get good enough. Is that fair?

Robin: Or your modeling is good enough.

Actually, the scanner being the last thing isn't such a threatening scenario because then you'd have a big consortium get together to do the last scan when it's finally cheap enough. But the modeling being the last thing is more disruptive, because it's just more uncertain when modeling gets done.

Eliezer: By modeling, you mean?

Robin: The actual modeling of the brain cells in terms of translating a scan into...

Eliezer: Oh, I see. So in other words, if there's known scans but you can't model the brain cells, then there's an even worse last-mile problem?

Robin: Exactly.

Eliezer: I'm trying to think if there's anything else I can...

I would hope to build an AI that was sufficiently unlike a human – because it worked better – that there would be no direct concept of how fast it runs relative to you. It would be able to solve some problems very quickly, and if it can solve all problems much faster than you, we're already getting into the superintelligence range.

But at the beginning, you would already expect it to be able to do arithmetic immensely faster than you, and at the same time it might be doing basic scientific research a bit slower. Then eventually, it's faster than you at everything, but possibly not the first time you boot up the code.

Man 10: I'm trying to envision intelligence explosions that win Robin over to Yudkowsky's position. Does either one of these, or maybe a combination of both, self-improving software or nanobots that build better nanobots, is that unstable enough? Or do you still sort of feel that would be a widespread benefit?

Robin: The key debate we're having isn't about the rate of change that might eventually happen. It's about how local that rate of change might start.

If you take the self-improving software – of course, we have software that self-improves, it just does a lousy job of it. If you imagine steady improvement in the self-improvement, that doesn't give a local team a strong advantage. You have to imagine that there's some clever insight that gives a local team a vast, cosmically vast, advantage in its ability to self-improve compared to the other teams, such that not only can it self-improve, but it self-improves like gangbusters in a very short time.

With nanobots again, if there's a threshold where you have nothing like a nanobot and then you have lots of them and they're cheap, that's more of a threshold kind of situation. Again, that's something that the nanotechnology literature had a speculation about a while ago. I think the consensus has moved a little more against that, in the sense that people realized those imagined nanobots just wouldn't be as economically viable as some larger-scale manufacturing process to make them.

But again, it's the issue of whether there's that sharp threshold where you're almost there and it's just not good enough because you don't really have anything and then you finally pass the threshold and now you've got vast power.

Eliezer: What do you think you know, and how do you think you know it, with respect to this particular issue – that whatever yields the power of human intelligence is made up of a thousand pieces, or a thousand different required insights? Why should that seem plausible in principle? Where does that actually come from?

Robin: One set of sources is just what we've learned as economists and social scientists about innovation in our society and where it comes from. That innovation in our society comes from lots of little things accumulating together, it rarely comes from one big thing. It's usually a few good ideas and then lots and lots of detail worked out. That's generically how innovation works in our society and has for a long time. That's certainly a clue about the nature of what makes things work well, that they usually have some architecture and then there's just lots of detail and you have to get it right before something really works.

Then, in the AI field in particular, there's also this large... I was an artificial intelligence researcher for nine years, but it was a while ago. In that field in particular there's this... The old folks in the field tend to have a sense that people come up with new models. But if you look at their new models, people remember a while back when people had something a lot like that, except they called it a different name. And they say, "Fine, you have a new name for it."

You keep reinventing new names and new architectures, but they keep cycling among a similar set of concepts for architecture. They don't really come up with something very dramatically different. They just come up with different ways of repackaging different pieces in the architecture for artificial intelligence. So there's a sense in which maybe we'll find the right combination, but it's clear that there are just a lot of pieces that have to come together.

In particular, Douglas Lenat did this system that you and I both respect called Eurisko a while ago that had this nice simple architecture and was able to self-modify and was able to grow itself, but its growth ran out and slowed down. It just couldn't improve itself very far even though it seemed to have a nice, elegant architecture for doing so. Lenat concluded, I agree with him, that the reason it couldn't go very far is it just didn't know very much. The key to making something like that work was to just collect a lot more knowledge and put it in so it had more to work with [indecipherable 1:09:12] improvements.

Eliezer: But Lenat's still trying to do that 15 years later and so far Cyc does not seem to work even as well as Eurisko.

Robin: Cyc does some pretty impressive stuff. I'll agree that it's not going to replace humans any time soon, but it's an impressive system...

Eliezer: It seems to me that Cyc is an iota of evidence against this view. That's what Cyc was supposed to do. You're supposed to put in lots of knowledge and then it was supposed to go foom, and it totally didn't.

Robin: It was supposed to be enough knowledge and it was never clear how much is required. So apparently what they have now isn't enough.

Eliezer: But clearly Lenat thought there was some possibility it was going to go foom in the next 15 years. It's not that this is quite unfalsifiable, it's just been incrementally more and more falsified.

Robin: I can point to a number of senior AI researchers who basically agree with my point of view that this AI foom scenario is very unlikely. This is actually more of a consensus, really, among senior AI researchers.

Eliezer: I'd like to see that poll, actually, because I could point to AI researchers who agree with the opposing view as well.

Robin: AAAI has a panel where they have a white paper where they're coming out and saying explicitly, "This explosive AI view, we don't find that plausible."

Eliezer: Are we talking about the one with, what's his name, from..?

Robin: Norvig?

Eliezer: Eric Horvitz?

Robin: Horvitz, yeah.

Eliezer: Was Norvig on that? I don't think Norvig was on that.

Robin: Anyway, Norvig just has a paper that... Norvig just made the press in the last day or so arguing about linguistics with Chomsky, saying that this idea that there's a simple elegant theory of linguistics is just wrong. It's just a lot of messy detail to get linguistics right, which is a similar sort of idea. There is no key architecture –

Eliezer: I think we have a refocusing question from the audience.

Man 11: No matter how smart this intelligence gets, to actually take over the world...

Eliezer: Wait for the microphone. Wait for the microphone.

Man 11: This intelligence has to interact with the world to be able to take over it. So if we had this box, and we were going to use it to try to make all the money in the world, we would still have to talk to all the exchanges in the world, and learn all the bugs in their protocol, and the way that we're able to do that is that there are humans at the exchanges that operate at our frequency and our level of intelligence, we can call them and ask questions.

And this box, if it's a million times smarter than the exchanges, it still has to move at the speed of the exchanges to be able to work with them and eventually make all the money available on them. And then if it wants to take over the world through war, it has to be able to build weapons, which means mining and building factories, and doing all these things that are really slow and also require extremely high-dimensional knowledge that seems to have nothing to do with just how fast it can think. No matter how fast you can think, it's going to take a long time to build a factory that can build tanks.

How is this thing going to take over the world when...?

Eliezer: The analogy that I use here is, imagine you have two people having an argument just after the dawn of human intelligence. There are these two aliens in a spaceship, neither of whom has ever seen a biological intelligence – we're going to totally skip over how this could possibly happen coherently. But there are these two observers in spaceships who have only ever seen Earth. They're watching these new creatures who have intelligence. They're arguing over, how fast can these creatures progress?

One of them says, "Well, it doesn't matter how smart they are. They've got no access to ribosomes. There's no access from the brain to the ribosomes. They're not going to be able to develop new limbs or make honey or spit venom, so really we've just got these squishy things running around without very much of an advantage for all their intelligence, because they can't actually make anything, because they don't have ribosomes."

And we eventually bypassed that whole sort of existing infrastructure and built our own factory systems that had a more convenient access to us. Similarly, there's all this sort of infrastructure out there, but it's all infrastructure that we created. The new system does not necessarily have to use our infrastructure if it can build its own infrastructure.

As for how fast it might happen, well, in point of fact we actually popped up with all these factories on a very rapid time scale, compared to the amount of time it took natural selection to produce ribosomes. We were able to build our own new infrastructure much more quickly than it took to create the previous infrastructure.

To put it on a very concrete level, if you can crack the protein folding problem, you can email a DNA string to one of these services that will send you back the proteins that you asked for with a 72-hour turnaround time. Three days may sound like a very short period of time to build your own economic infrastructure relative to how long we're used to it taking, but in point of fact this is just the cleverest way that I could think of to do it, and 72 hours would work out to I don't even know how long at a million-to-one speedup rate. It would be like thousands upon thousands upon thousands of years. But there might be some even faster way to get your own infrastructure than the DNA...
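As a rough, purely illustrative check on that arithmetic, using the hypothetical million-to-one speedup and the 72-hour turnaround figures from the exchange above:

```python
# Rough check: 72 wall-clock hours experienced at a hypothetical
# million-to-one subjective speedup, expressed in subjective years.
# Both numbers are the hypotheticals used in the discussion above.
turnaround_hours = 72
speedup = 1_000_000
hours_per_year = 24 * 365.25

subjective_years = turnaround_hours * speedup / hours_per_year
print(f"{subjective_years:,.0f} subjective years")  # roughly 8,200 years
```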

Man 11: Is this basic argument something you two roughly agree on or roughly disagree on?

Robin: I think we agree on the specific answer to the question, but we differ on how to frame it. I think it's relevant to our discussion. I would say our civilization has vast capacity and most of the power of that capacity is a mental capacity. We, as a civilization, have a vast mental capacity. We are able to think about a lot of things and calculate and figure out a lot of things.

If there's a box somewhere that has a mental capacity comparable to the rest of human civilization, I've got to give it some respect and figure it can do a hell of a lot of stuff. I might quibble with the idea that if it were just intelligent it would have that mental capacity. Because it comes down to, "Well, this thing was improving what about itself exactly?" So there's the issue of what various kinds of things does it take to produce various kinds of mental capacities?

I'm less enamored of the idea that there's this intelligence thing – that if it's just intelligent enough, it doesn't matter what it knows, it's just really smart. I'm not sure that concept makes sense.

Eliezer: Or it can learn much faster than you can learn. It doesn't necessarily have to go through college the way you did, because it is able to learn much more rapidly, either by observing reality directly or... In point of fact, given our current state of society, you can just cheat – you can just download it from the Internet.

Robin: If you simply posit that it has a great mental capacity, then I will be in fear of what it does. The question is, how does it get that capacity?

Eliezer: Would the audience be terribly offended if I tried to answer that one a bit? The thing is, there are a number of places the step function can come in. We could have a historical step function like what happened between chimps and humans. We could have the combined effect of all the obvious ways to rebuild an intelligence if you're not doing it evolutionarily.

You build an AI and it's on a two-gigahertz chip instead of 200-hertz neurons. It has complete read and write access to all the pieces of itself. It can do repeatable mental processes and run its own internal, controlled experiments on which sorts of mental processes work better, and then copy them onto new pieces of code. Unlike this hardware [points to head], where we're stuck with a certain amount of hardware, if this intelligence works well enough it can buy, or perhaps simply steal, very large amounts of computing power from the large computing clusters that we have out there.

If you want to solve a problem, there's no way that you can allocate, reshuffle, or reallocate internal resources to different aspects of it. To me it looks like, architecturally, if we've got down the basic insights that underlie human intelligence, and we can add all the cool stuff that we could do if we were designing an artificial intelligence instead of being stuck with the ones that evolution accidentally burped out, it looks like they should have these enormous advantages.

We may have six billion people on this planet, but they don't really add that way. Six billion humans are not six billion times as smart as one human. I can't even imagine what that planet would look like. It's been known for a long time that buying twice as many researchers does not get you twice as much science. It gets you twice as many science papers. It does not get you twice as much scientific progress.

Here we have some other people in the Singularity Institute who have developed theses, which I wouldn't know how to defend myself and which are more extreme than mine, to the effect that if you buy twice as much science you get flat output, or it actually goes down because you decrease the signal-to-noise ratio. But now I'm getting a bit off track.

Where does this enormous power come from? It seems like human brains are just not all that impressive. We don't add that well. We can't communicate that well with other people. One billion squirrels could not compete with the human brain. Our brain is about four times as large as a chimp's, but four chimps cannot compete with one human.

Making a brain twice as large and actually incorporating it into the architecture seems to produce a scaling of output of intelligence that is not even remotely comparable to the effect of taking two brains of fixed size and letting them talk to each other using words. So an artificial intelligence that can do all this neat stuff internally and possibly scale its processing power by orders of magnitude, that itself has a completely different output function than human brains trying to talk to each other.

To me, the notion that you can have something incredibly powerful – and yes, more powerful than our sad little civilization of six billion people flapping their lips at each other, running on 200-hertz brains – is actually not all that implausible.

Robin: There are devices that think, and they are very useful. So 70 percent of world income goes to pay for creatures who have these devices that think, and they are very, very useful. It's more of an open question, though, how much of that use is because they are a generic good thinker or because they know many useful particular things?

I'm less assured of this idea that you just have a generically smart thing and it's not smart about anything at all in particular. It's just smart in the abstract. And that it's vastly more powerful because it's smart in the abstract compared to things that know a lot of concrete things about particular things.

Most of the employees you have in this firm or in other firms, they are useful not just because they were generically smart creatures but because they learned a particular job. They learned about how to do the job from the experience of other people, on the job and practice and things like that.

Eliezer: Well, no. First you needed some very smart people and then you taught them the job. I don't know what your function over here looks like, but I suspect if you take a bunch of people who are 30 IQ points down the curve and try to teach them the same job, I'm not quite sure what would happen then, but I would guess that your corporation would probably fall a bit in the rankings of financial firms, however those get computed.

Robin: So there's the question of what it means --

Eliezer: And 30 IQ points is just like this tiny little mental difference compared to any of the actual, "we are going to reach in and change around the machinery and give you different brain areas." 30 IQ points is nothing and yet it seems to make this very large difference in practical output.

Robin: When we look at people's mental abilities across a wide range of tasks, and we do a factor analysis of that, we get the dominant factor – the eigenvector with the biggest eigenvalue – and that we call intelligence. It's the one-dimensional thing that explains the most correlation across different tasks. It doesn't mean that there is therefore an abstract thing that you can build into a machine that gives you that factor. It means that actual real humans are correlated that way. And then the question is, what causes that correlation?

There are many plausible causes. One, for example, is simply assortative mating. People who are smart in some ways mate with other people smart in other ways, and that produces a correlation [indecipherable 1:21:09]. Another could be that there's just an overall strategy whereby some minds devote more resources to different kinds of tasks. There doesn't need to be any central abstract thing that you can make a mind do that lets it solve lots of problems simultaneously for there to be this IQ factor of correlation.
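A minimal sketch of the statistical move Robin describes: take a correlation matrix of scores on different tasks (the numbers below are invented for illustration) and pull out the eigenvector with the biggest eigenvalue. Note that the calculation only summarizes the correlation; it says nothing about which of the candidate causes produces it.

```python
# Minimal sketch: extract the dominant factor ("g") from a correlation matrix
# of scores on different mental tasks. The matrix below is invented purely
# for illustration; real psychometric data would be used in practice.
import numpy as np

corr = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
])

# For a symmetric matrix, eigh returns eigenvalues in ascending order;
# the last eigenvector is the axis explaining the most shared variance.
eigenvalues, eigenvectors = np.linalg.eigh(corr)
g_loadings = eigenvectors[:, -1]
g_loadings *= np.sign(g_loadings.sum())  # fix the eigenvector's arbitrary sign

print("share of variance on the dominant factor:",
      round(eigenvalues[-1] / eigenvalues.sum(), 3))
print("task loadings:", np.round(g_loadings, 3))
```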

Eliezer: So then why humans? Why weren't there 20 different species that got good at doing different things?

Robin: We grant that there is something that changed with humans, but that doesn't mean that there's vast landscape of intelligence you can create that's billions of times smarter than us just by rearranging the architecture. That's the key thing.

Eliezer: It seems to me that for this particular argument to carry, it's not enough to say you need content. There has to be no master trick to learning or producing content. And here in particular, I can't actually say Bayesian updating, because doing it on the full distribution is not computationally tractable. You need to be able to approximate it somehow.

Robin: Right.

Eliezer: But nonetheless there's this sort of core trick called learning, or Bayesian updating. And you look at human civilization and there's this core trick called science. It's not that the science of figuring out chemistry was developed in one place and it used something other than the experimental method compared to the science of biology that was developed in another place. Sure, there were specialized skills that were developed afterward. There was also a core insight, and then people practiced the core insight and they started developing further specialized skills over a very short time scale compared to previous civilizations before that insight had occurred.

It's difficult to look over history and think of a good case where there has been... Where is the absence of the master trick which lets you rapidly generate content? Maybe the agricultural revolution. Maybe for the agricultural revolution... Well, even for the agricultural revolution, first there's the master trick, "I'm going to grow plants," and then there's developing skills at growing a bunch of different plants.
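To make the "core trick" of Bayesian updating concrete, here is a minimal, hypothetical sketch: updating beliefs about an unknown coin bias on a coarse discrete grid, the kind of crude approximation one falls back on because exact updating over a full distribution is generally intractable. The observations and grid size are made up.

```python
# Minimal sketch: Bayesian updating over a discrete grid of hypotheses about
# an unknown coin bias. The observations and grid are illustrative; the grid
# is the crude approximation standing in for intractable exact updating.
import numpy as np

theta = np.linspace(0.01, 0.99, 99)                # candidate biases
posterior = np.full_like(theta, 1.0 / len(theta))  # start from a uniform prior

observations = [1, 1, 0, 1, 1, 1, 0, 1]            # 1 = heads, 0 = tails (invented)

for x in observations:
    likelihood = theta if x == 1 else 1.0 - theta
    posterior = posterior * likelihood             # Bayes' rule, unnormalized
    posterior /= posterior.sum()                   # renormalize on the grid

print("posterior mean bias:", round(float((theta * posterior).sum()), 3))
```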

Robin: There's a large literature on technological and economic innovation, and it basically says the vast majority of innovation is lots of small gains. You can look at locomotives, and when locomotives got faster and more energy-efficient, there were lots of particular devices, and they basically trace out some curve of how good they got over time. It's basically lots of little steps over time that slowly made them better.

Eliezer: Right. But this is what I expect a superintelligence to look like after the initial phase of self-improvement passes and it's doing incremental gains. But in the beginning, there are also these very large insights.

Robin: That's what we're debating. Other questions or concerns?

Moderator: Actually, before – Craig, you can take this – can everybody without making a big disruption pass your votes to this side of the room and we can tabulate them and see what the answers are. But continue with the questions.

Eliezer: Remember, "yes" is this side of the room and "no" is that side of the room.

[laughter]

Man 12: I just wanted to make sure I understood the relevance of some of the things we're talking about. I think you both agree that if the time it takes to get from a machine that's, let's say, a tenth as effective as humans to, let's say, 10 times as effective as humans at whatever these being-smart tasks are, like making better AI or whatever. If that time is shorter, then it's more likely to be localized? Just kind of the sign of the derivative there, is that agreed upon?

Eliezer: I think I agree with that.

Man 12: You agree with it.

Robin: I think when you hypothesize this path of going from one-tenth to 10 times –

Eliezer: Robin, step up to the microphone.

Robin: – are you hypothesizing a local path where it's doing its own self-improvement, or are you hypothesizing a global path where all machines in the world [indecipherable 1:24:59]?

Man 12: Let's say that...

Eliezer: Robin, step towards the microphone.

Robin: Sorry. [laughs]

Man 12: Let's say it just turns out to take a fairly small amount of time to get from that one point to the other point.

Robin: But it's a global process?

Man 12: No, I'm saying, how does the fact that it's a short amount of time affect the probability that it's local versus global? Like if you just received that knowledge.

Robin: On time, what matters is the relative scale of different time scales. If it takes a year but we're in a world economy that doubles every month, then a year is a long time.

Man 12: I'm talking about from one-tenth human power to 10 times. I think we're not yet... we probably don't have an economy at that point that's doubling every month, I would... at least not because of AI.

Robin: The point is, that time scale – is it a global time scale? If the world is... if new issues are showing up every day that are one percent better, then that adds up over a period of a year. But if everybody shares those innovations every day, then we have a global development. If we've got one group that has a development and jumps a factor of two all by itself, without any other inputs, then you've got more local development.
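For a sense of scale on that compounding, assuming (purely for illustration) a flat one percent improvement arriving each day and being shared globally:

```python
# Rough check of the compounding: a one percent gain per day, shared by
# everyone, accumulated over a year. The rate is illustrative, not a forecast.
daily_gain = 0.01
days = 365
total_growth = (1 + daily_gain) ** days
print(f"after a year: {total_growth:.1f}x")  # roughly 38x
```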

Eliezer: Is there any industry in which there's a group of people who share innovations with each other and who could punish someone who defected by using the innovations without publishing their own? Is there any industry that works like that?

Robin: But in all industries, in fact, there's a lot of leakage. This is just generically how industries work, how innovation works in our world. People try to keep things secret, but they fail and things leak out. So teams don't, in fact, get that much further ahead of other teams.

Eliezer: But if you're willing to spend a bit more money you can keep secrets.

Robin: Why don't they then? Why don't firms actually keep more secrets?

Eliezer: The NSA actually does and they succeed.

Man 12: So in summary, you thought it was more likely to be local if it happens faster. You didn't think the opposite –

Robin: It depends on what else you're holding constant. Obviously I agree that holding all the other speeds constant, making that faster, makes it more likely to be local.

Eliezer: OK, so holding all other speeds constant, increasing the relative speed of something makes it more likely to be local.

Robin: Right.

Man 12: OK. And that's where we get the relevance of whether it's one or two or three key insights versus if it's lots of small things? Because lots of small things will take more time to accumulate.

Robin: Right. And they leak.

Man 12: So in some sense it's easier to leak one key idea like –

Robin: But when?

Man 12: – like Gaussian processes or something, than it is to leak

Eliezer: Shh! 

Man 12: a vast database of...

[laughter]

Man 12: ...knowledge that's all kind of linked together in a useful way.

Robin: Well, it's not about the time scale of the leak. So you have some insights, you have 30 of them that other people don't have, but they have 30 that you don't, so you're leaking and they're spreading across. Your sort of overall advantage might be relatively small, even though you've got 30 things they don't, there's just lots of different ones. When there's one thing, and it's the only one thing that matters, then it's more likely that one team has it and other ones don't at some point.

Eliezer: Maybe the singulars will have, like, 5 insights, and then the other 10 insights or whatever would be published by industry, or something? By people who didn't quite realize that who has these insights is an issue? I mean, I would prefer more secrecy generally, because that gives more of an advantage to localized concentrations of intelligence, which makes me feel slightly better about the outcome.

Robin: The main issue here clearly has to be, how different is this technology from other ones? If we are willing to posit that this is like other familiar technologies, we have a vast experience based on how often one team gets how far ahead of another.

Eliezer: And they often get pretty darn far. It seems to me like the history of technology is full of cases where one team gets way, way, way ahead of another team.

Robin: Way ahead on a relatively narrow thing. You're imagining getting way ahead on the entire idea of mental capacity.

Eliezer: No, I'm just imagining getting ahead on–

Robin: Your machine in the basement gets ahead on everything.

Eliezer: No, I'm imagining getting ahead on this relatively narrow, single technology of intelligence. [laughs]

Robin: I think intelligence is like "betterness", right? It's a name for this vast range of things we all care about.

Eliezer: And I think it's this sort of machine which has a certain design and churns out better and better stuff.

Robin: But there's this one feature called "intelligence."

Eliezer: Well, no. It's this machine you build. Intelligence is described through work that it does, but it's still like an automobile. You could say, "What is this mysterious forwardness that an automobile possesses?"

Robin: New York City is a good city. It's a great city. It's a better city. Where do you go to look to see the betterness of New York City? It's just in thousands of little things. There is no one thing that makes New York City better.

Eliezer: Right. Whereas I think intelligence is more like a car, it's like a machine, it has a function, it outputs stuff. It's not like a city that's all over the place.

[laughter]

Man 13: If you could take a standard brain and run it 20 times faster, do you think that's probable? Do you think that won't happen in one place suddenly? If you think that it's possible, why don't you think it'll lead to a local "foom"?

Robin: So now we're talking about whole brain emulation scenario? We're talking about brain scans, then, right?

Man 13: Sure. Just as a path to AI.

Robin: If artificial emulations of brains can run 20 times faster than human brains, but no one team can make their emulations run 20 times more cost-effectively than any of the other teams' emulations, then you have a new economy with cheaper emulations, which is more productive, grows faster, and everything, but there's not a local advantage that one group gets over another.

Eliezer: I don't know if Carl Shulman talked to you about this, but I think he did an analysis suggesting that, if you can run your ems 10 percent faster, then everyone buys their ems from you as opposed to anyone else, which is itself contradicted to some extent by a recent study, I think it was a McKinsey study, showing that productivity varies between factories by a factor of five and it still takes 10 years for the less efficient ones to go out of business.

Robin: That was on my blog a few days ago.

Eliezer: Ah. That explains where I heard about it. [laughs]

Robin: Of course.

Eliezer: But nonetheless, in Carl Shulman's version of this, whoever has ems 10 percent faster soon controls the entire market. Would you agree or disagree that that was likely to happen?

Robin: I think there's always these fears that people have that if one team we're competing with gets a little bit better on something, then they'll take over everything. But it's just a lot harder to take over everything because there's always a lot of different dimensions on which things can be better, and it's hard to be consistently better in a lot of things all at once. Being 10 percent better at one thing is not usually a huge advantage. Even being twice as good at one thing is not often that big an advantage.

Eliezer: And I think I'll actually concede the point in real life, but only because the market is inefficient.

Robin: Behind you.

Moderator: We're...

Robin: Out of time?

Moderator: Yeah. I think we try to keep it to 90 minutes and you both have done a great job. Maybe take a couple minutes each to –

Robin: What's the vote?

Moderator: I have the results. The pre-wrapping-up comments, but do you both want to maybe three minutes to sum up your view, or do you just want to pull the plug?

Robin: Sure.

Eliezer: Sure.

Robin: I respect Eliezer greatly. He's a smart guy. I'm glad that, if somebody's going to work on this problem, it's him. I agree that there is a chance that it's real. I agree that somebody should be working on it. The issue on which we disagree is how large a probability is this scenario relative to other scenarios that I fear get neglected because this one looks so sexy.

There is a temptation in science fiction and in lots of fiction to imagine that this one evil genius in the basement lab comes up with this great innovation that lets them perhaps take over the world unless Bond sneaks in and listens to his long speech about why he's going to kill him, et cetera.

[laughter]

It's just such an attractive fantasy, but that's just not how innovation typically happens in the world. Real innovation has lots of different sources, usually lots of small pieces. It's rarely big chunks that give huge advantages.

Eventually we will have machines that will have lots of mental capacity. They'll be able to do a lot of things. We will move a lot of the content we have in our heads over to these machines. But I don't see the scenario being very likely whereby one guy in a basement suddenly has some grand formula, some grand theory of architecture that allows this machine to grow from being a tiny thing that hardly knows anything to taking over the world in a couple weeks. That requires such vast, powerful architectural advantages for this thing to have that I just don't find it very plausible. I think it's possible, just not very likely. That's the point on which, I guess, we disagree.

I think more attention should go to other disruptive scenarios, whether they're emulations, maybe there'd be a hardware overhang, and other big issues that we should take seriously in these various disruptive future scenarios. I agree that growth could happen very quickly. Growth could go more quickly on a world scale. The issue is, how local will it be?

Eliezer: It seems to me that this is all strongly dependent first on the belief that the causes of intelligence get divided up very finely into lots of little pieces that get developed in a wide variety of different places, so that nobody gets an advantage. And second, that if you do get a small advantage, you're only doing a very small fraction of the total intellectual labor going to the problem. So you don't have a nuclear-pile-gone-critical effect, because any given pile is still a very small fraction of all the thinking that's going into AI everywhere.

I'm not quite sure what to say besides that, when I look at the world, it doesn't actually look like that. I mean, there aren't 20 different species, each of them good at different aspects of intelligence with different advantages. g factor's pretty weak evidence, but it exists. The people talking about g factor do seem to be winning on the experimental-predictions test versus the people who previously went around talking about multiple intelligences.

It's not a very transferable argument, but to the extent that I actually have a grasp of cognitive science and trying to figure out how this works, it does not look like it's sliced into lots of little pieces. It looks like there's a bunch of major systems doing particular tasks, and they're all cooperating with each other. It's sort of like we have a heart, and not 100 little mini-hearts distributed around the body. It might have been a sort of better system, but nonetheless we just have one big heart over there.

It looks to me like human intelligence is like... that there's really obvious, hugely important things you could do with the first prototype intelligence that actually worked. I expect that the critical thing is going to be the first prototype intelligence that actually works and runs on a two gigahertz processor, and can do little experiments to find out which of its own mental processes work better, and things like that.

The first AI that really works is already going to have a pretty large advantage relative to the biological system, so the key driver change looks more like somebody builds a prototype, and not like this large existing industry reaches a certain quality level at the point where it is being mainly driven by incremental improvements leaking out of particular organizations.

There are various issues we did not get into at all, like the extent to which this might still look like a bad thing or not from a human perspective, because even if it's non-local, there's still this particular group that got left behind by the whole thing, which was the ones with the biological brains that couldn't be upgraded at all [points at head]. And various other things, but I guess that's mostly my summary of where this particular debate seems to stand.

Robin: It's hard to debate you.

[applause]

Eliezer: Thank you very much.

Robin: And the winner is..?

Moderator: OK so, in this highly unscientific tally with a number of problems, we started off with 45 for and 40 against. I guess unsurprisingly, with very compelling arguments from both sides, fewer people had an opinion.

[laughter]

Moderator: So now we've gone to 33 against and 32 for, so "against" lost 7 and "for" lost 13. We have a lot more undecided people than before, so "against" has it. Thank you very much.

[applause]