Machine Intelligence Research Institute

Eliezer and Scott Aaronson on bloggingheads.tv

[Video] [Audio]


Speakers: Eliezer Yudkowsky and Scott Aaronson

Transcriber(s): Ethan Dickinson, Daniel Kokotajlo, Rick Schwall


Eliezer Yudkowsky: Hello, I'm Eliezer Yudkowsky. It's been a while, but I'm back to bloggingheads.tv. I'm a research fellow at the Singularity Institute for Artificial Intelligence, and with me today is Scott Aaronson.

Scott Aaronson: Hi, I'm Scott Aaronson. I'm an assistant professor of electrical engineering and computer science at MIT, and I write a blog called Shtetl-Optimized. It's great to talk to you, Eliezer. We have a lot to talk about. We actually tried this once before, had some technical difficulties, and we're giving it another go.

Eliezer: The first topic of conversation we are redoing today, actually, is the Singularity. I thankfully have completely forgotten what we talked about last time, so we can start from scratch.

Scott: OK.

Eliezer: Now Scott, it seems pretty obvious to me that at some point in the not-too-distant future we're going to build an AI smart enough to improve itself, and having improved itself, make additional improvements to itself, and it will go FOOM upward in intelligence, and by the time it exhausts available avenues for improvement it will be a superintelligence relative to us.

Do you feel that this is obvious? And if not, why not?

Scott: The main thing that I consider non-obvious in your statement is the qualifier "not-too-distant" in "not-too-distant future." I think our difference here is really a quantitative one. But quantitative differences, if they're large enough, can look a lot like qualitative differences.

I saw this bloggingheads that you did a couple years ago with John Horgan and he expressed what I think is the reaction of many people when they first hear about this Singularity idea, which is that it sounds like completely crazy science fiction. "The rapture of the nerds" is the word used for it, or it's this religious kind of fantasy that we're all going to have eternal life, we're going to be immortal. This superintelligence is either going to create a utopia or a dystopia, one or the other. I mean, it just sounds so much like what people have been fantasizing about for such a long time, but just a new [indecipherable] –

Eliezer: No, it doesn't actually sound like that. It is recognized by them as a familiar pattern, so their brain completes the pattern and they fill in all this other stuff regardless of whether or not anyone has actually told it to them. They expect it to be there, so they just assume that it is there. I think it's fairly important to distinguish between things that are actually part of the position, versus things that people expect ought to be there because their brain has categorized it as a fantasy and they just make up all the details that they expect to hear.

Scott: Yes, well I did say "sounds like," I didn't say "is like." I was just summarizing the reaction of most people to that because I wanted to contrast it with my reaction. I don't reject it on those terms. I think that, first of all, if it is wrong then it's certainly not obviously wrong. The idea that we could build computers that are smarter than us, and that those computers could build still smarter computers and so on until we reach the physical limits of what kind of intelligence is possible, or that we could build things that are to us as we are to ants: all of this is compatible with the laws of physics as I understand them, and I can't find any in-principle reason why it couldn't eventually come to pass.

As I said, I think the main thing that we really disagree about is really the timescale. You said the not-too-distant future, I assume by that you're talking on a span of decades?

Eliezer: Decades is fair. And if I meant ten decades, or more, I would have said centuries. One to ten decades, sure.

Scott: One to ten decades, OK.

Eliezer: Probably on the lower side of that.

Scott: OK. So my gut-level intuition, and I haven't...

Eliezer: Just spit out your estimate. [laughs]

Scott: I know, you believe in Bayesianism, you believe in spitting out estimates. I think a few thousand years seems more reasonable to me, based on...

Eliezer: So in other words, considering where we started, at the time of Galileo, and where we are now, you think we've got four to six times the distance left to cover that we've covered already? From Galileo's starting point to where we are now.

Scott: That's correct.

Eliezer: So, you think that we’ve got... 500 years is a lot of time in science, and we don't even know how much time a thousand years is in science, because we've never had a thousand years of science.

Scott: That's correct.

Eliezer: Where are you getting this estimate from? Why do you believe what you believe?

Scott: OK, so I could – the obvious response would be "where are you getting your estimate from”?, but that's maybe too glib. Where I'm getting it from is thinking about, for example, how much distance has been traversed in the last 50 years of AI research. In the ’60s for example, there are these famous stories that vision was considered a summer project for an undergraduate, right? And what we realized is that we're actually trying to build intelligent machines, competing against a billion years of evolution in some sense. It's a staggeringly hard problem.

Now, "a few thousand years" is just a way of saying my uncertainty is in the exponent. It could be hundreds of years, it could be tens of thousands of years. At that point the prior just becomes logarithmic. That's the view that I think would have formed just from what I know about AI, for example. Then the question for me is, should I revise my estimate just based on the fact that people like you, other people who've thought about this and whose judgments I respect, for some reason seem to believe it's going to happen in some tens of years. So far I haven't found a compelling enough reason to shift, to think that this is going to happen in tens of years.

Eliezer: I point out that if you literally use a logarithmic prior, that what I believe updating on the logarithmic prior ends up telling you is that you should expect it to take roughly as long as it's taken already. Counting from Dartmouth, which I believe was 1955 or 1958 or something like that, it would be another 50 or 60 years. The thing is, if it does take, say, another 50 years, then two days before it actually happens you'll be estimating 100 years in the future.

Scott: Well I'm not literally talking about a uniform prior over all exponents, that doesn't exist first of all.

Eliezer: Well it's an improper prior.

Scott: Yes. So what I mean is I'm not prepared to say, are we talking about 10-to-the-3 years or 10-to-the-4 years or something like that. Between those two, the uncertainty is in the exponent rather than in the base.
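Scott's "uncertainty is in the exponent" has a concrete consequence: for a prior that is uniform in log-space, the median lands at the geometric mean of the endpoints. A minimal sketch, with purely illustrative bounds (neither speaker committed to these numbers):

```python
import math

def log_uniform_median(low: float, high: float) -> float:
    """Median of a distribution that is uniform in log-space on [low, high].

    The CDF is linear in log(x), so the median sits at the
    geometric mean of the endpoints: sqrt(low * high).
    """
    return math.sqrt(low * high)

# Illustrative bounds: "hundreds of years" to "tens of thousands of years".
median_years = log_uniform_median(1e2, 1e4)
print(median_years)  # 1000.0 -- the "few thousand years" ballpark
```

This is just arithmetic, not an argument for any particular bounds; with 10^2 to 10^4 years the median comes out at 10^3, which is where "a few thousand years" sits.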

Eliezer: For a species that literally only started building serious computers around 65 years ago, I think that the idea that AI progress has been so slow that we're looking at a hundred times as much more work just seems pretty odd. I mean especially... I hate to say "Moore's Law," because by and large I feel it's a question of software rather than hardware...

Scott: Yes, and we also know that Moore's Law is not a law at all, in some respects it's already stopped, in others it will stop very soon.

Eliezer: Moore's empirical regularity.

Scott: Yes.

Eliezer: And, it's not even your friend, because Moore's Law of Mad Science: every 18 months the minimum IQ necessary to destroy the world drops by one point. Making it easier to build AI so that you can be stupider and still build one is not necessarily a good thing, et cetera et cetera et cetera.

But when I hear a time estimate of 5,000 years, where has Moore's Law ground to a halt? Do people have moon-sized, superconducting, low-temperature, reversible quantum computers, and are they still unable to get human level intelligence out of that thing?

Scott: I see that as a plausible possibility. I don't think that you would want to build a moon-sized computer, probably. You would probably be putting your effort into miniaturizing the components. But anyway, I see it as plausible that we could have quantum computers, for example, long before we had human-level AI.

Eliezer: I agree, but that's because quantum computers offer a type of computing power where it doesn't quite translate into doing whatever a human imagines it doing, the way that modern computers do, which is of course a topic that you are very familiar with. [laughs]

If there's any way to use the power of a quantum computer for the kind of artificial intelligence I'm interested in, I don't know what it is, which...

Scott: Right, and I don't either. You could get some speed-ups for some of the basic tasks of AI, like search or like game-tree evaluation, things like that, but these are not exponential improvements, or they're not believed to be.

Eliezer: Right. Square root of the exponent was...?

Scott: Things like that, basically.

Eliezer: And what that means, I think, is something along the lines of, well we've got Deep Blue, and it's searching a billion moves per second, so it would be able to search twice as far into the search tree if it were running on a quantum computer.

Scott: Right, so if Moore's Law was actually a law, then it's halving the amount of time that you'd need.

Eliezer: If you're just throwing brute force at the problem, then Moore's Law actually speeds you up much more slowly than quantum computing. If we had the same computers but they were quantum, then Deep Blue would be searching twice as far, whereas just throwing raw Moore's Law at the problem, without software improvements, is just going to yield another ply or two of searching into the game.
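The arithmetic behind "searching twice as far" is that Grover-style quantum search examines N items with roughly sqrt(N) work, so for a game tree of size b^d the exponent halves, and a fixed budget buys double the depth. A toy sketch; the branching factor and budget are made-up illustrative numbers, not anything from the conversation:

```python
import math

def classical_depth(budget: float, branching: int) -> float:
    """Depth d reachable by brute force, solving budget = branching ** d."""
    return math.log(budget, branching)

def grover_depth(budget: float, branching: int) -> float:
    """Grover search needs only sqrt(branching ** d) evaluations,
    so budget = branching ** (d / 2), i.e. exactly double the depth."""
    return 2 * math.log(budget, branching)

budget = 1e9     # "a billion moves per second", as a stand-in for Deep Blue
branching = 30   # illustrative chess branching factor

print(classical_depth(budget, branching))  # ~6.1 plies
print(grover_depth(budget, branching))     # ~12.2 plies: twice as deep
```

By contrast, one doubling of classical speed multiplies b^d by 2, which adds only log_b(2) of a ply, which is why Eliezer says raw Moore's Law buys depth so much more slowly than the quantum square-root speedup.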

Scott: Switching to quantum is like a one-time thing, you do it and then you're on a different Moore's Law trajectory which is a little bit faster than the other one.

Eliezer: Right, but what it actually works out to I think would be a 9 months doubling time rather than an 18 months doubling time if we had exactly the same progress but it was all quantum.

Scott: Yes.

Eliezer: I think that neither of us believes that Moore's Law is going to continue 5,000 years into the future on the same trajectory?

Scott: That's correct.

Eliezer: But suppose I were to put it to you that when you say 5,000 years, what that actually is is an expression of an emotional reaction: implementing intelligence feels really, really hard to you, and when I elicit from you an amount of time, you choose a time that sounds equally impressive, and that comes out as 5,000 years. So it's not a time estimate so much as an exchange between two different emotional currencies.

Scott: I think that's basically correct. I was basically admitting as much when I tried to explain just how much uncertainty there is. It depends on all sorts of other things about how does civilization progress, what basic, conceptual things have I not understood about this question.

This gets into another point of difference between us. I'm not willing to extrapolate Bayesianism into my rationalist theory-of-everything in the way that you are. I'm willing to be a Bayesian in limited domains where I feel like I understand what it means to have a prior, what state-space we're talking about, and so forth. But then I'm not willing to take that local Bayesianism and extend it into a global Bayesianism.

As a game, since I do like to play games, you asked me for an amount of time before we develop superintelligence, and fine, I'll give you a number then.

Eliezer: Fair enough. Given that it actually hasn't happened yet, trying to get a grasp on when AI will happen is very much a battle of the priors...

Scott: Yes, absolutely.

Eliezer: ...and those are always unpleasant for rationalists who know in principle that they ought to be able to exchange information to come to some sort of agreement, but most of their information is of the sort that's difficult to verbalize.

Scott: "Unpleasant" is a mild word for it. This is like the fundamental problem for Bayesians.

Eliezer: I had a similar sort of dilemma with Robin Hanson. We weren't so much disagreeing over the timescale to development of AI as disagreeing over what would happen once it was developed and whether it would go on that sharp upward FOOM, or just continue exponential growth with a higher exponent. We actually threw very detailed arguments at each other, but then we looked at each other's arguments and said "Well that doesn't make any sense." It came down to the battle of the competing priors again.

Scott: Right. The way I think about it is that there are two different ways that things could develop. You've in blog posts often used this fable of nine people in a basement just doing some math that will suddenly give rise to some superintelligence. If that were the way that it would happen, then I have no intuition that tells me that couldn't happen 20 years from now, conditioned on it happening that way. But I also have a presumption that it wouldn't happen that way.

And, maybe I'm wrong, but that just doesn't accord with my experience of what technological change looks like. It seems to me that most technological change is much more of a global effort, it's not something that's done in secret by only a few people. If it is, then it's in very very exceptional circumstances, like the Manhattan Project in the Second World War.

In other words, I feel like conditioned on the superintelligence happening, I feel like we would have much more of a warning that it was coming than you seem to think. I feel like we would first see ant-level AI and frog-level AI and we would see a whole series of intermediate things on the way.

Eliezer: I think we're up to ant level.

Scott: You do?

Eliezer: I think we're past ant level and heading for frog. We may not be able to duplicate all the capabilities of an ant, but we can do a whole lot of things that ants can't do.

Scott: That's true.


Eliezer: I think it's pretty fair to say we're past the level of ant and heading for frog at this point.

Scott: Gosh, how do you judge such things?

Eliezer: [laughs] I don't know, I don’t mean to be underimpressed by the lowly ant, and I'm not talking about the materials technology involved in the ant, the metabolism and so on, but in terms of what the ant brain can do versus what modern AIs can do, I think that on the whole I'm noticeably more impressed with modern AIs than with an ant, although a frog might still be more impressive.

Scott: I see. Although maybe you can take even the simplest organisms and if you were to look at them at the cellular level then that's still a great deal more impressive probably than even the best AIs.

Eliezer: The cellular level is this whole different level of what has been optimized over how much time versus the brains of these creatures.

Scott: Actually you might be amused to know, I was at this workshop in the Azores a month ago, it was sponsored by this FQXI, which is from the Templeton Foundation actually, but they had us thinking about the consequences of superintelligence, so we tried to do some Friendly AI theory, which I know is your area of expertise.

Eliezer: [laughs] What did you end up doing with your ad-hoc attempt?

Scott: I'll tell you. Just for the people listening in, Eliezer has thought a great deal about the question of, and you can correct me if I misstate it, but supposing that we do create such a superintelligence, it seems awfully important to get it right the first time, and to make sure it shares the same sorts of goals that humans do rather than, say, wanting to maximize the total number of paper clips in the universe, and just converting all visible matter into paper clips for example.

Eliezer: I think you're skipping over a lot of interim logic there about notions like the size of mind-design space, and that the values you perceive are a function of how a mind is constructed. It's not that most of mind-design space is paper clips; rather, most of mind-design space is minds with values that we would perceive not as alien beauty, but as alien and uninteresting.

Scott: Yes. I actually like your story of the paperclip maximizer. I think it does make that point, that we can easily imagine all sorts of AIs that, even if they weren’t malevolent, might have goals that are just completely unrelated to our own goals.

Eliezer: And you are made out of matter that they can use for paper clips. It’s not that they particularly hate you, it’s just that they perceive higher value configurations into which you can be organized, and if it’s something like paper clips we just don’t find those values interesting at all.

Scott: Right, so they don’t hate me, they just need perfectly good raw materials for paper clips.

Eliezer: Exactly.

Scott: Okay, so we came up with two ideas, and I’d love to get your reaction to them as an acknowledged expert in this field.

Eliezer: Go for it.

Scott: So the first was that we should work on ways to guilt-trip a potential AI...

Eliezer: [laughs]

Scott: So, you know, like we should tell it things like: oh yes please, just go ahead, you are just so superior to us, there’s not even a comparison, use the matter of our bodies for any purpose you deem appropriate.

Eliezer: It wasn’t you who came up with this idea, right? It was someone else.

Scott: Right, it was sort of a joint idea, but we decided that some research effort was needed into creating either a Catholic or a Jewish AI.

Eliezer: It strikes me that you are not treating this problem entirely with the gravity that it properly deserves.

Scott: [laughs] Okay, all right, so the second idea was a little more serious. So the idea was this: Maybe we could try to create an AI that has the same relationship to us as we do to our genes. As we know our genes are very very stupid things compared to us, yet as we know from experience, despite their relative stupidity they can exert an enormous amount of influence over us. Even if we understand exactly what’s going on. Even if we sort of regard...

Eliezer: I have to say I regard the case of genes as a paradigm case of loss of control. We have no explicit drive either to maximize our inclusive reproductive fitness or to make copies of the DNA composing us. Transhumanists like myself, which is to say, people who have been properly exposed to their likely options, tend to have no qualms whatsoever, relative to our values, about changing substrate, in such fashion that we leave our DNA and our carbon-based metabolisms completely behind, as long as we remain the same sort of people with the same emotions and the same values. We would say that nothing of much significance had happened in that change of substrate, except insofar as it preserves or improves our quality of life in other ways. Genes are stupid, they have no foresight, they don’t plan for things that are going to happen, gene frequencies merely change in reaction to things that have previously happened, and I would regard the construction of human brains by the genes as a paradigmatic example of how -- not -- to do Friendly AI.

Scott: I see. Okay well, in this particular context the idea was that you could imagine an AI that might be able to reason itself out of the idea that it should bother at all with humans, it might see that as a complete waste of its time, and yet still it feels these urges to help humans. It just feels so good for it to help humans that it just can’t help itself.

Eliezer: I think you are sort of presuming that these minds are coming off a factory assembly line that you don’t control, with all sorts of values that you can’t influence but you can sort of appeal to, try to persuade it as if you were talking to some human from the tribe who lives across the water where you didn’t build them and they come with all sorts of desires built in and you have to persuade them, rather than actually designing them from scratch.

Scott: Well I’m not trying to make any presumption about what their desires actually are; they may have desires that we can’t even conceive of or understand. So I’m not trying to make a presumption about that, I’m saying the question is can we design something that is much more intelligent than us, but no matter what -- other -- desires it has, it feels constantly impelled or steered in the direction of some particular desire. Just like, with humans we find that no matter how intelligent they are, they just seem pushed constantly upstream in the direction of certain things like wanting food and sex. In fact intelligence seems to bear little or no relation to it.

Eliezer: There is a lovely little parable that I once heard at a science fiction convention, from someone who had been around in the old days. He told me about a science fiction TV show he watched, back in the days when science fiction TV shows were a lot less sophisticated. There were these people flying around in a spaceship, and fighting aliens. And when the heroes flew through an asteroid belt they always had to dodge the asteroids, which tumbled through space with a huge grinding noise (and of course space was entirely full of asteroids, packed densely full of them), but the aliens had this mysterious ability to dematerialize and fly right through the asteroids. So one time they discovered a derelict alien ship, boarded it, found the control room, and the captain of the hero ship looks at a lever and says “Aha! This must be the lever that controls the dematerializer!” So he pries up the lever, takes it back to his own ship, and now they can dematerialize and fly through asteroids. I call this the detached lever fallacy. The problem is that people anthropomorphise AI and try to pull levers without thinking about whether the machinery behind those levers is still there. So, if you say something along the lines of “Well, we’ll appeal to the AI’s sense of duty,” then you are assuming that the AI already has a sense of duty and you just need to pull the lever on it somehow.

Scott: Well I’m not presupposing how this would happen, I’m just asking the question of how we could make it happen, since that seems like something we might want to do.

Eliezer: Well, if you can add one value and have it be stable, then why are you assuming that there are these other values coming into it from nowhere, that you can’t control its other values?

Scott: Because if we look at the analogy of humans then we find that that’s the case: humans have all sorts of desires that have nothing to do with what our genes want for us...

Eliezer: Name one.

Scott: Well, the desire to do higher math.

Eliezer: A friend of mine, Marcello Herreshoff, once compared math to ice cream. So, ice cream has more sugar, salt, and fat in it than anything in the ancestral environment, it’s a superstimulus. In the same way we evolved to do abstract thinking and even to enjoy certain kinds of abstract thinking, and for people who are sufficiently good at abstract thinking we invented a kind of abstract thinking ice cream, which is more beautiful and elegant than any sort of abstract thinking you’d find in the ancestral environment. And that was what we call math.

Scott: Yes. I agree with your causal story; people who do this, we basically are just gluttons for ice cream.

Eliezer: And some of us are gluttons for mathematical elegance. But in each case, even though you are doing something that has no analogy in the ancestral environment, your values are growing out of the emotions and other moral circuitry that were built by your genes. They are not happening sui generis (if I’m pronouncing that correctly.)

Scott: Right. So we know the causal story in this case. For creating AI we are going to have to make up our own causal story.

Eliezer: In loco evolutionis. It’s not in loco parentis, because parents don’t get to choose their kids’ source code. They get to pull the levers; they don’t get to build the machinery. What we’re doing here is actually reaching into mind-design space, pinpointing a possibility, and pulling it out. Well, that’s what you’re doing if you’re doing Friendly AI; otherwise you’re just hacking something together because you think it’s going to be cool.

Scott: Okay. So should we talk about many-worlds?

Eliezer: Um. Sure, we can call for that topic change at this point.

So, many-worlds. It is completely obvious that the process that has been called quantum measurement is no different from any other quantum process. We’ll take the case of Schrödinger’s cat. An atom decays or does not decay. That is to say, we’ve got a little blob of configuration space in which the radioactive atom has decayed and a little blob of configuration space where it hasn’t decayed. Then the sensor, which detects the radioactive decay, goes into a superposition of firing and not firing. Which is to say, the sensor becomes entangled with the atom, so that we now have a blob in which the sensor has fired and the radioactive atom has decayed, and another blob in which the sensor has not fired and the radioactive atom has not decayed. The hammer either smashes the vial of poison gas or does not smash it. That is, we have one blob in which each happens. The cat goes into a superposition of being alive and being dead. When the human opens Schrödinger’s box and looks in, the human, just like every other part of this causal process, goes into a superposition of seeing the cat alive and seeing the cat dead. So this is completely obvious. I’m wondering whether you think there’s any reason why it’s not obvious.
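The chain of entanglement Eliezer describes can be mimicked in a two-qubit toy model: put the "atom" in an equal superposition, then let the "sensor" copy its state the way a CNOT gate does. This is only an illustrative sketch (the atom/sensor labels are hypothetical stand-ins for the real physics), but it shows the two blobs appearing from pure unitary evolution, with no collapse anywhere:

```python
import numpy as np

# Basis order |atom, sensor>: |00>, |01>, |10>, |11>,
# where atom bit 1 means "decayed" and sensor bit 1 means "fired".
atom = np.array([1, 1]) / np.sqrt(2)   # superposition: not decayed + decayed
sensor = np.array([1, 0])              # sensor starts out "not fired"
state = np.kron(atom, sensor)          # joint state before any interaction

# The sensor "measures" the atom: a CNOT flips the sensor bit exactly
# when the atom bit is 1. Ordinary unitary evolution, nothing special.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
entangled = cnot @ state

print(np.round(entangled, 3))
# Two blobs of amplitude: |00> (not decayed, not fired) and
# |11> (decayed, fired); the mixed branches |01>, |10> are zero.
```

Extending the same trick with further CNOT-like couplings gives the hammer, the cat, and the human each joining the entangled state, which is the point of the passage above.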

Scott: Okay. I think that it’s not obvious because... I think it’s the closest thing to an explanation that we have, in some sense. I think that many-worlds, first of all, is a definite, clear improvement over the type of talk about quantum mechanics that preceded it, which was known as the Copenhagen Interpretation, and which viewed measurement as a process that you have to treat separately from the rest of quantum mechanics. So, systems evolve, you know, by ordinary physical laws, until someone looks at them. At that point, a new physical law comes into play, namely the collapse of the wave function.

Eliezer: And at that point the probability amplitudes go to zero because we know they are false. So, the old Copenhagen Interpretation didn’t just treat measurement specially. It treated it as something ontologically, fundamentally mental. There were mind things like “knowledge” that were now supposed to be playing a basic role in physics. Some physicists were even saying explicitly, “Consciousness causes the collapse.”

Scott: Yes

Eliezer: So, just talking about this as measurement being special, you are rather understating the level of confusion that was caused by not realizing the obvious fact that the macro world is just like the micro world: things go into entangled superpositions.

Scott: In fact, I would say that, until recently, a large part of the problem that I had with the many-worlds interpretation is just that it seemed so obvious that you could view quantum mechanics this way that I didn’t see where the intellectual work was. I thought, did it really take someone like Everett to point this out? Like wasn’t this obvious to Schrödinger, to von Neumann, to people like that?

Eliezer: [loudly:] NO! No! No it wasn’t. Yes, yes it did! It did take Everett! People really are that silly! To the shame of the human species! [laughing]

Scott: Yes. So actually this is another thing that came out of this workshop. I was mentioning we heard a talk from Peter Byrne, who actually has a book coming out, a biography of Everett, and a documentary about him also. Seeing the actual history helped me appreciate more that there was some intellectual work that happened here. When you read Schrödinger writing about this in the 1930s, for example, he did see this. He did see that if you take the equations of quantum mechanics seriously, if you take Schrödinger’s Equation seriously as a description of reality, this is what it predicts: that you are going to evolve into two copies, one of which sees the cat alive, and one of which sees the cat dead. This is because Schrödinger, unlike Bohr or some of the others, was willing to take the mathematics seriously and see where it leads. But then Schrödinger didn’t really pursue that line of thought much further. I don’t think he really stomped his foot and said, “No, this is really the way things are.” This was just sort of one way you could think about them.

Eliezer: That’s pretty sad. [sighs]

Scott: Yes. So I think it is doing intellectual work in the sense that what you can explain here is why you get, out of quantum mechanics, the appearance of a classical world. That’s sort of the thing that’s being explained here. What’s still not being explained to my satisfaction is what’s the actual connection to what we observe, and in particular to the probability distribution over measurement outcomes that we observe.

Eliezer: -- The observed frequencies -- . Calling them probability distributions is presuming a bit. What we are talking about is a certain proportion of experimental results that are observed when we do certain experiments.

So, I don’t know where the Born probabilities come from either, but how could you possibly believe in any answer that postulates a single global world, which by Bell’s theorem would violate special relativity?

Scott: Well, I’m not convinced that many-worlds makes the situation better with respect to special relativity. Any interpretation, because they all yield the same experimental predictions, at least for any experiments that we can do currently, is not going to violate causality in a way that would lead to backwards-in-time signalling.

Eliezer: There is no backwards-in-time signalling. There is no violation of causality. The many-worlds makes that perfectly clear. The other theories -- postulate --, the single-world theories postulate violations of causality, but, because they have to match the predictions of many-worlds, they are constrained to postulate violations of causality which can never actually be used to send signals.

So, I think that the idea that, “This is not a sin,” because it doesn’t yield any testable observable violations of causality, is an excuse that was historically developed within physics to deal with the fact that violations of causality were being postulated, but then never actually observed. Now, if you just had many-worlds from the beginning and this new single-world theory came along, the thing you would say when you looked at it would be, “This violates special relativity, and -- worse --, it does so in a way that never lets me test it.”

Scott: Well look, there are people who think that quantum mechanics itself violates special relativity. In fact, there was a whole cover article in Scientific American fairly recently about this. I found it kind of execrable, because the idea is that if you look at quantum mechanics in a certain way, then it doesn’t matter if it’s the many-worlds interpretation or which interpretation it is, it seems to sort of require this faster-than-light signalling. And the reality is something... it’s a logical possibility that people didn’t even think of before quantum mechanics... that you can do things like violate the Bell inequality, which is stronger, which is not allowed by a local realistic theory, but which on the other hand is not as strong as superluminal signalling. I think that people didn’t even recognize that there could be this third possibility in between the two, before quantum mechanics came along and actually exhibited it.

Eliezer: I think we should offer a bit of background for our confused viewers at this point. What Bell's theorem says is something along these lines, to describe it in the old classical way: suppose the measurement results were predetermined, that is, each particle carried predetermined answers to the particular measurements. Now suppose we find that if we measure this particle one way and that particle another way, a certain mismatch happens five percent of the time; at a second pair of measurement angles, it happens five percent of the time; and at a third pair of angles, it happens 20 percent of the time. If the results were predetermined, it couldn't happen 20 percent of the time at that third pair of angles; it could only happen five plus five equals 10 percent of the time at most.


That's a very rough description of Bell's theorem, which you can look up online. I've written an essay that should hopefully explain it to any audience that can read algebra. The point being that you can't give a consistent, local, realistic description of reality if you assume that when you perform a measurement only a single thing happens.
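The arithmetic Eliezer sketches can be checked numerically. A minimal sketch, using the standard polarization-entangled-photon setup (not a setup mentioned in the conversation itself): quantum mechanics predicts the two analyzers disagree with probability sin²θ at relative angle θ, while any predetermined-results account forces the mismatch rate at the widest angle to be at most the sum of the two narrower ones.

```python
import math

def mismatch(theta_deg):
    """Quantum prediction: probability that two polarization analyzers
    at relative angle theta disagree about an entangled photon pair."""
    return math.sin(math.radians(theta_deg)) ** 2

p_ab = mismatch(22.5)   # analyzers at 0 and 22.5 degrees
p_bc = mismatch(22.5)   # analyzers at 22.5 and 45 degrees
p_ac = mismatch(45.0)   # analyzers at 0 and 45 degrees

# Bell's inequality: predetermined (local hidden-variable) results
# would force p_ac <= p_ab + p_bc. Quantum mechanics breaks the bound.
print(p_ab, p_bc, p_ac)       # ~0.146, ~0.146, 0.5
print(p_ac <= p_ab + p_bc)    # False
```

The numbers play the same role as Eliezer's "five plus five can't make twenty": roughly 15% + 15% cannot reach the 50% that quantum mechanics predicts and experiment confirms.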


Now, Einstein, Podolsky, and Rosen pointed out the way this violates the spirit of special relativity: it seems that, depending on which measurement you choose to perform at location A, the probabilities at location B have to change, although never in a way that would let someone at B get a signal from you; they only change in a way that makes the two results compatible once you bring them together again. Einstein, Podolsky, and Rosen said that this contradicts special relativity, which is quite right if you assume you only get a single measurement result at each location.

Scott: I don’t agree with that, but go on.

Eliezer: Now, what many-worlds says is: it’s not that you're changing the probabilities at the other location, the experienced probabilities at the other location are exactly the same, the world just divides into two blobs and when you meet the other people, you find that you are in a consistent blob.

Now, this description that I just gave, taken at face value, still involves something that looks a bit non-relativistic, namely, it relies on a global description of some instantaneous slice of space through time…

...this still contradicts the spirit of special relativity to some extent, but there are no longer any actual probabilities changing at different locations, no experimental statistics changing at different locations. All we have is a problem that looks like it could easily be a problem with the way we’re describing things, rather than a problem with physics, and if we just had a slightly different way of describing things, even the appearance of a problem would go away.

But if you do have a single local world, then the choice of measurement at A has to affect the probabilities at B, and that has to violate special relativity; there’s no way around that. And that’s why, even though we still have some lingering puzzles in quantum mechanics around the Born probabilities, unless you want to say that the things I can do can affect the Born probabilities faster than light, it seems to me you are committed to many-worlds.

Scott: The arguments for many-worlds amount to saying: we can resolve all these problems, and all you have to do is give up the assumption that measurements have one outcome. Where the unease comes from, for some people, is that you’re giving up part of what a scientific theory is supposed to do in the first place, which is to explain our observations. So for this one little thing you’re giving up, it seems like you’re giving up the entire game.

Now, I’ll admit that the Copenhagen point of view gives up enormous things. It gives up the idea that we can factor consciousness out of our scientific theory. Maybe one way to think about it is that the Copenhagen point of view just starts with our experience, our observations, our measurements as the central thing, and we’re interested in the physical world only insofar as it relates to our observations. It’s a very subjectivist point of view, and it doesn’t give you a coherent way to talk about what happens if you put yourself in superposition. Then what happens?

Eliezer: It doesn’t allow people to be made of atoms.

Scott: At least, if people are made of atoms, then whatever’s doing the observation has to be some other kind of soul stuff.

Eliezer: That’s not made of atoms!

Scott: Which is not your brain, it’s something else which is somehow looking at your brain.

Eliezer: What the Copenhagen interpretation is giving up is nothing less than naturalism itself.

Scott: Yes. Peter Byrne made an interesting point: part of the issue with many-worlds is that you really can’t accept Everett until you’ve first accepted Dennett. In other words, you can’t fully be on board with Everett until you’re first on board with the idea that the brain really is just physical stuff and nothing more than that.

Eliezer: Just physical stuff? What other sort of stuff is there?

Scott: …that there’s not this free-will stuff. You could imagine some sort of stuff acting not in physical space but in Hilbert space.

Eliezer: I’ll certainly agree that the early physicists showed a resistance, incredible to modern ears, to thinking of themselves as being made out of atoms. I mean, many of them were probably even outright religious and thought they had souls, because you could actually do that back in the early twentieth century. So that may certainly have played a huge role in their missing out on the retrospectively obvious fact of many worlds.

But I don’t see how you can possibly say that by postulating a single world you are gaining anything whatsoever back in the way of explanation. I mean, many-worlds, for those of us who think the Born probabilities are still up for grabs, says, “Here are the experimental statistics.” And the single-world theory says that all but one world are eliminated by a magic, faster-than-light, non-local, time-asymmetric, acausal collapser device, in accordance with those same statistics. So it’s still just handing you the statistics; it now has a completely unphysical, magical mechanism producing them. And that doesn’t gain anything back in the way of explaining observations.

Scott: What I was going to say is that while I think these are -- huge -- gaps in the Copenhagen point of view, there’s also this, to me, serious gap in the many-worlds point of view. One starts with experience as fundamental and has no coherent way to deal with the fact that we’re made of atoms; the other takes the evolution of a physical state as fundamental but then has trouble accounting for the fact that, as Democritus put it 2300 years ago, the only way we even know about this physical world in the first place is that at the end of the day we’re able to make observations of it. It’s not telling a story that makes a lot of sense to many people about what an observation is and where observation comes out of this...

Eliezer: Well, don’t tell me what happens to many people. Do you feel that you are personally gaining anything by postulating a single world instead of many worlds?

Scott: I feel I’m certainly less uncomfortable with many-worlds than with Copenhagen, but I’m not as confident as you are that I understand the right way to think about it. The way I think about it is... imagine, as an analogy, people arguing around 1700 or so about the origin of life. It might have seemed obvious that there were only a few possibilities: either God did it, or maybe we were planted here by beings on some other planet, or maybe it was all a random accident. And maybe all of these seem like really lousy theories, but the theory that it’s just a random accident is not as bad as the other two, and so therefore that one has to be the truth.

Basically I see several different interpretations of quantum mechanics: many-worlds, Copenhagen, the Bohmian point of view which we haven’t talked about much. They all seem lousy to me, and many-worlds seems less lousy than the others. But I’m willing to accept the possibility that, just as people in the 1700s hadn’t even come up with the possibility of natural selection, maybe we just haven’t come up with the right possibility yet.

Eliezer: That’s certainly possible, but whatever the correct theory is, it has to be a many-worlds theory rather than a single-world theory. Otherwise it has a special-relativity-violating, non-local, time-asymmetric, non-linear, and non-measure-preserving collapse process which magically causes entire large blobs of configuration space to vanish, in such a fashion that it is never detectable to us, collapsing only whenever the many-worlds theory predicts that the blobs have stopped interacting with us because they’re too distant in the space. So I don’t see how your remaining uncertainty, legitimate as it may be, permits you to hold out any hope whatsoever of getting the naive single world back, any more than combining general relativity with quantum mechanics lets the earth go back to being flat.

Scott: Well, I was going to say, “non-measure-preserving” seems a bit too strong. I mean, after all, the possibility that you observe then has measure one.

Eliezer: By “measure-preserving,” I mean the processes within quantum configuration space. Large quantities of measure are disappearing in this magical collapse process: things that had measure now have measure zero.

Scott: Yeah, but that’s because that’s what happens in the measurement process.

Eliezer: Yeah, God did it.

Scott: It’s entirely possible to me that if we were to someday discover something that replaces quantum mechanics, it will be even stranger than the thing it replaced, even farther from our classical intuition about the world.

What you forced me to realize, Eliezer, and I thank you for this, is that what I’m uncomfortable with is not the many-worlds interpretation, it’s the air of satisfaction that often comes with it.

Eliezer: Would you agree with me that the Born probabilities are this -- huge gaping hole -- in our understanding of the fundamental physical processes of the world? And would you agree with me that, whatever fills that hole, there’s absolutely no reason, given our present state of knowledge, to think that a single world is going to come out of it, any more than a flat earth is going to come out of it?

Scott: I agree with the first part, not with the second part. I think that a single world will always remain a fairly attractive thing in possibility space.

This might be a little bit of a digression, but these sorts of issues relate a lot to how I got interested in quantum computing. We could build a quantum computer, and all the debates about interpretations of quantum mechanics could go on just as before; everyone could just reshuffle their cards and keep playing the same game. But quantum computing is built out of an effort to test: if there really is this enormous multiplicity of worlds, we know that quantum mechanics itself tells us we can’t visit the other worlds, we can’t communicate with them, but can we at least have some kind of more explicit evidence that they’re there than we’ve had so far? Can we get a huge number of these different worlds, or branches of the wavefunction, whatever you want to call them... if you don’t want to call them worlds, that’s fine... can we get them all to participate in a computation, like factoring an enormous integer, such that we don’t believe the computation could even be done efficiently if there were only one world, or one branch, or only one path?
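The factoring test Scott describes rests on a reduction that can be sketched classically. A minimal sketch, with a toy number not taken from the conversation: Shor's algorithm factors N by finding the period r of aˣ mod N, the step a quantum computer performs with all branches of the wavefunction participating at once via the quantum Fourier transform; for tiny N we can find r by brute force and complete the same reduction.

```python
import math

def find_period(a, n):
    """Brute-force the period r of a^x mod n -- the step that Shor's
    algorithm performs in superposition on a quantum computer."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n, a):
    """Classical sketch of Shor's reduction from period-finding to
    factoring (succeeds when r is even and a^(r/2) != -1 mod n)."""
    r = find_period(a, n)
    assert r % 2 == 0, "need an even period; try another base a"
    y = pow(a, r // 2, n)          # square root of 1 mod n
    return math.gcd(y - 1, n), math.gcd(y + 1, n)

print(shor_reduction(15, 7))       # (3, 5)
```

The classical brute-force loop takes time exponential in the number of digits of N; the point of the experiment is that the quantum version would not, which is why a large factored integer would be evidence that something more than one computational path was at work.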

Eliezer: Well, is there any serious doubt in your heart whatsoever that you’re going to find that all these “other worlds” “exist”? I mean, the Copenhagen-type crowd have always been sneaking peeks at the many-worlds equations to tell when many-worlds says the other blobs are sufficiently separate from us that they should no longer be observable, so that at that exact moment the Copenhagen crowd can say, “Aha! They’ve disappeared!” Now, the standard equations say that when you do quantum computing, those other worlds are not yet distant enough to disappear. So of course the Copenhagen crowd will say that no collapse has occurred, and, having just stolen many-worlds’s experimental predictions, they will claim that there is no testable difference.

Scott: OK, you can argue about who is stealing whose predictions, and the Copenhagen view came first.

Eliezer: “Came first” has no rational meaning.

Scott: But, you’re using words like stealing...

Eliezer: By stealing, I mean they have to peek at the many-worlders’ equations to tell when the collapse occurs, because their own equations make no mention of -- when -- the collapse occurs. So by “peeking,” I’m not talking about who came first; I’m talking about the fact that they have to perform many-worlds-type calculations on the sly in order to tell when their collapse occurs.

Scott: Well, that’s not really true. In all the experiments that people have actually done, there’s no ambiguity in separating the quantum world from the macro world. And quantum mechanics itself explains that: it explains why decoherence is such a rapid process. In practice, in the quantum experiments that people know how to do, you don’t have to worry about where the border is.

Eliezer: What I’m talking about is the sort of thing where you take a half-silvered mirror and split a photon into two worlds, and traditionally the Copenhagen people say, “Aha, you measured it, so it went down one course and not the other.” But now I recombine the two worlds, change the phase of one of them, and get a definite quantum result, and the Copenhagen people all take a quick peek at the many-worlds calculations and their actual experimental results and say, “Aha, we changed our minds, a measurement didn’t occur after all.” That sort of thing.

I think that the real thing I am trying to convince you of is that a single world is ridiculous. We’ve got these wonderful, differentiable, continuous, time-symmetric, perfectly local, linear, measure-preserving equations over the configuration space.

Scott: Well, the thing I’m trying to convince you of is that it’s not a choice between ridiculousness and sanity; it’s a choice between different kinds of ridiculousness. I argue a lot in my day job with computer scientists who think that the ability to factor an integer efficiently is ridiculous. They just take it as obvious, not even an interesting question, that quantum mechanics must clearly be wrong if it predicts you can do that.

Eliezer: Well, factor the integers and tell them to shut up. That’s the wonderful thing about science.

Scott: We’re working on that. I expect there will be some who still won’t shut up even then. Y’know, maybe a demon came and factored the number. This is exactly why some of us got interested in this field. It’s not about building faster computers, although that would be cool, there’s no doubt about that, but it’s about testing quantum mechanics itself more stringently than it has ever been tested.

The truth is, I think there is legitimate doubt about whether quantum mechanics is the final answer. My own guess would be that it is, but that’s because it seems to be the most boring possibility.

Eliezer: You keep on trying to sneak in uncertainty over whether there’s a single copy of Earth or many copies of Earth into your uncertainty about whether or not quantum physics is the final theory. I’m totally on board with the latter kind of uncertainty. It’s the existence of zillions of copies of this Earth that science and experiment and math have established beyond a reasonable doubt at this point. And you keep on trying to smuggle your uncertainty about the final theory into your uncertainty about whether the other Earths exist. And I don’t understand how you can do that.

Scott: It’s that a theory that could replace quantum mechanics could just carve up conceptual space in completely different ways. Think about before quantum mechanics...

Eliezer: But, why do it in that particular way, why do it in just exactly the right...

Scott: I don’t know. Why not?

Eliezer: Why not? Why shouldn’t the theory that succeeds quantum mechanics recarve the conceptual space in just such a way as to make giant pink dragons materialize in the sky, or put Russell’s teapot out beyond Pluto, or show that atoms are secretly made out of chocolate, or... Why this -- one -- miracle?

Scott: All of those seem to be adding a lot more Kolmogorov complexity to your description, than just saying there’s just one world.

Eliezer: But saying there’s one world violates continuity, differentiability, locality, relativity, time-symmetry, causality... I could go on like this for days. You’re talking about a specific miracle that we have absolutely no reason to expect. Why are you focusing on this one miracle instead of saying the final theory may show that teapots violate conservation of momentum? Yes, teapots are these big complicated things, but so are entire worlds. I mean, by carving out just the right segment of your configuration space to preserve a single world and all the things in it and none of the other worlds around it, you actually -- are -- saying something along the lines of: well, maybe in the final theory, angular momentum is going to be violated for teapots and for nothing else.

Scott: OK, I see the switch from classical physics to quantum physics as a different category of thing than talking about whether there are teapots around Pluto. In this case we are dramatically enlarging the configuration space of reality. I see this as something that indeed, my guess would be, we have to do. The point is that when people argue about it, the point of disagreement seems to be not just about the evidence, but about what is the right way to think about the question. What is it that you want a scientific theory to do for you, what counts as a measurement...

Eliezer: I don’t understand how saying, “There’s only one world,” helps at all. Even a smidgen. Even a tiny bit.

Scott: OK. It helps in the sense that you might believe this is the point of a scientific theory in the first place: we are observers, and we’re supposed to explain the results of our observations. If you say the results of your observation could be any number of different things, then you haven’t done the basic thing a scientific theory is supposed to do.

Eliezer: So, in other words, if I put you on a computer and then make copies of you, there can be no scientific theory describing what you see because different versions of you see different things and we can see them all right there on the computer?

Scott: That’s a very interesting analogy, because in fact that happens to be another situation, one of the most interesting other situations where my intuitions about what I would want a scientific theory to do for me do in fact break down, and I don’t know the right way to think.

You’ve probably heard of this Dr. Evil paradox...

Eliezer: We’re actually going into a bit overtime, so let’s finish the Dr. Evil paradox and then conclude.

Scott: Sure. You could imagine a Dr. Evil on a moon base saying, “I am threatening to destroy the Earth!” And as our defense against that, we could say, “We’ve created 200 exact replicas of you. If you try to destroy the Earth, then we’re going to torture all of them. And now, because they really are exact replicas, you should assume that you are probably one of those replicas, and therefore you are much, much more likely to be one of the copies who gets tortured than to be the real Dr. Evil.”

These are sorts of situations where our usual tools of science don’t help us very much in figuring out how we should think about them because...

Eliezer: But you can’t say, “Because I don’t know how to think about this problem, therefore it is forbidden that this problem be handed to me.” I mean, if your brain were on read/writable hardware, I could present you with a Dr. Evil problem. And reality, in the form of quantum mechanics, where worlds are splitting all the time, has -- presented us -- with multiple copies of ourselves experiencing multiple problems. It happened! It’s a fact! You don’t get to say, “Well, the job of a scientific theory is not to handle this sort of thing, therefore it didn’t happen.” That’s medieval philosophy, man!

Scott: I didn’t say that we’re forbidden from thinking about it. I hate that kind of move every bit as much as you do. What I said is that I don’t know how to think about it.

So, maybe we should wrap it up, here?

Eliezer: OK! I have been Eliezer Yudkowsky, for the Singularity Institute for Artificial Intelligence, and I’ll take the last few seconds to put in a plug for the Singularity Summit 2009, in New York on October third and fourth.

Scott: Great! I have been and continue to be Scott Aaronson, of MIT.

Eliezer: Be seeing you.

Scott: See you. Bye.