Machine Intelligence Research Institute

Eliezer and Massimo on bloggingheads.tv

[Video] [Audio]


Speakers: Eliezer Yudkowsky and Massimo Pigliucci

Transcriber(s): Ethan Dickinson and Patrick Stevens


Eliezer Yudkowsky: Hello, I'm Eliezer Yudkowsky, a research fellow at the Singularity Institute for Artificial Intelligence.

Massimo Pigliucci: Hi, I'm Massimo Pigliucci, professor of philosophy at City University of New York.

Eliezer: Do you want to start out then?

Massimo: Yes. We're going to be talking about the concept of a singularity. From there, we might go on talking more generally about the idea of transhumanism, and perhaps about, even more broadly, rationality, and how to use it, if we get to that. But I wanted to start with the singularity. Would you mind explaining briefly what people mean by that term?

Eliezer: I've found that different people tend to mean different things by it. What I mean by the term is I. J. Good's intelligence explosion. That was I. J. Good postulating that if you can get a sufficiently smart, as he put it, ultraintelligent machine, something that is better than human beings at essentially any cognitive task to which it turns its hand, it would also be better at the task of producing ultraintelligent machines. So you ought to get a positive feedback cycle, where it becomes better and better at making itself better and better, and its intelligence should zoom up and become quite extremely smart.

That was the sort of original formulation. These days, the Singularity Institute in general and myself in particular have tried to refine that notion a bit, but the notion of a positive-feedback cycle of recursive self-improvement that generates a superintelligence, or something that is much better than a human being at any cognitive task to which it turns its attention, is for us the core notion of the singularity.

Some other things that have been meant by it are Vernor Vinge's actual original definition of the term, which is that the singularity is the breakdown in our model of the future that comes about when our model starts containing beings that are smarter than us, and if we knew exactly what they would do, we would be that smart ourselves. That's an epistemic horizon, an unpredictability and inability to extrapolate your model past a certain point.

There's also a third formulation, which unfortunately is the one that seems to be most popular nowadays, which is just the singularity as accelerating change, technological progress, convergence, biotech, nanotech, buzzword, buzzword, and so on. That's not really something I feel all that comfortable endorsing.

Massimo: OK so let's leave aside for a minute the third sense, because I agree with you that's so generic, and it would really be difficult to know even what exactly it is we're talking about.


In what you said, it strikes me that there are several concepts that need to be unpacked in order to see what exactly it is that we're talking about. For instance, this idea that you mentioned a couple of times, the word "ought." "If we get to this point, it ought to happen that..." Well, why? That assumes, it seems to me, that there is an inevitable progression of some sort, there are no constraints imposed by the laws of physics, the laws of logic, the laws of whatever it is that might impose constraints on these things. How do you think that affects the concept?

Eliezer: Clearly there are multiple steps here. In other words, there are multiple points at which this could be defeated. David Chalmers, whom if you're a professor of philosophy you've probably heard of... [laughs]

Massimo: Yes, I actually saw him talking about the singularity here at the CUNY graduate center just a few weeks ago.

Eliezer: Right, so David Chalmers has one unpacking of some of the assumptions involved, which deserves to be mentioned, but nonetheless I will go ahead and give my own instead. [laughs]

Massimo: OK. [laughs] Fair enough.

Eliezer: For example, you mentioned physical limits. The thing is, if our understanding of reality is correct in character, not just in detail but in broad character, then there will be physical limits.


Massimo: Right.

Eliezer: But just because there are physical limits doesn't mean that those limits are low. There is some physical limit on the power output of a supernova, but you wouldn't want to walk into one wearing nothing but a flame-retardant jumpsuit.

Massimo: Well I agree, but let me stop you right there. That's true, but if one says "The singularity ought to happen or will very likely happen," or something like that, it seems to me that people would have to already have an idea that whatever limits there are, are not going to be that relevant to that particular event. And how do we know that?

Eliezer: OK, well you can look at the human brain, and you can compare in very broad strokes the physical characteristics of the human brain to what we think ought to be physically possible if the laws of physics we believe in are true.

You get observations like, signals are travelling along the axons and the dendrites at, say, 150 meters per second absolute top speed. You compare that to the speed of light, and it's a factor of 2,000,000. Or similarly, you look at how fast the neurons are firing. They're firing say, 200 times per second, top speed. And you compare that to modern-day transistors, and again you are looking at a factor of millions between what neurons are doing and what we have already observed to be physically possible.

Even in terms of heat dissipation, which is where neurons still have an advantage over modern-day computers, they're dissipating something like – I think I actually went through this calculation and then forgot the exact numbers, but it was something like – half a million times the minimum energy for a single bit-flip operation at 300 Kelvin, per synaptic operation. So even in terms of heat dissipation.
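A rough back-of-the-envelope sketch of the orders of magnitude cited here; the transistor clock rate and the energy per synaptic event below are assumed ballpark figures for illustration, not numbers taken from the conversation.

```python
import math

# Signal propagation: fast myelinated axons vs. the speed of light.
axon_speed = 150.0        # m/s, the top speed quoted above
light_speed = 3.0e8       # m/s
print(light_speed / axon_speed)       # ~2,000,000

# Switching rate: peak neuron firing vs. a modern transistor clock (assumed ~1 GHz).
neuron_rate = 200.0       # Hz, the peak firing rate quoted above
transistor_rate = 1.0e9   # Hz, assumed order of magnitude
print(transistor_rate / neuron_rate)  # ~5,000,000

# Energy per operation: assumed synaptic energy vs. the Landauer limit at 300 K.
k_B = 1.380649e-23                     # J/K, Boltzmann constant
landauer = k_B * 300.0 * math.log(2)   # ~2.9e-21 J, minimum energy per bit erased
synaptic_energy = 1.4e-15              # J, assumed rough estimate per synaptic event
print(synaptic_energy / landauer)      # ~5e5, i.e. roughly "half a million times"
```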

Massimo: That sounds good, but the thing is we already have computers that do transfer information inside themselves, so to speak, and even across computers much faster than information travels inside human neurons, and yet I don't see any particular reason to believe that we've got intelligent machines. It seems to me the speed of transfer of information has certainly something to do with the general idea of intelligence and cognition, but not a lot.

Eliezer: Well hold on there, you're comparing apples and oranges. If we took all the neurons in the human brain and sped them up by a factor of a million, which on a purely physical level it looks like we ought to be able to do, and ought to be able to do without shrinking the brain, cooling the brain, or using reversible or quantum computing, then the notion that you can take this exact software and run it at a million times the speed looks like it should definitely be physically possible given the laws of physics we know.

On the other hand, we have all these computers, and they're very fast. And they're stupid. That doesn't mean if you took a human and sped them up by a factor of a million that would not be interesting. What it means is that we don't know how to program the computers we have right now. It's a computer science problem.

Massimo: Right, but that's not what I was saying. First of all, I'm not so sure that we could easily, or in fact at all, speed up a human brain by that magnitude. I'm not even sure that we have any idea how we would go about doing that, but that's an aside, because we were talking about a singularity in terms of the creation of a new intelligence, is that not the case – or, an artificial intelligence. So we are talking about computers, not about human beings, or are we talking about both, somehow?


Eliezer: Well, we are. I was just trying to unpack the issue and handle it piece by piece. So for example you said "physical limits," and I was trying to give an example of an argument that you go through and you say, well yes, there are physical limits, but they're very high. So in terms of the ultimate physical limits, they're not something that we're going to run into when we get brains a little bit beyond human and then just come to a screeching halt. They're limits you would hit only after going orders and orders of magnitude beyond human, and then coming to a screeching halt.

That was an example of what happens when you analyze a particular aspect of it. Now as soon as you start getting into questions of software, how do you program AIs? Then suddenly things are much harder to analyze than they are when it comes to hardware. And I fully agree that software's the key issue, but I wanted to take a particular question that was much easier to analyze and analyze that one first. I wasn't trying to duck the issue or anything.

Massimo: [laughs] No, no, I understand. I wasn't accusing you of doing that,

Eliezer: [laughs]

Massimo: I was just trying to get clear on what we're talking about. You just mentioned the difference between hardware and software, which of course in the case of computers is pretty clear. I'm not so convinced that there is such a distinction in the case of human beings, human thought. Why do you think so?

Eliezer: The character of the hardware that human software runs on certainly interacts a lot more with that software than it does in the case of general-purpose computers, which are designed to be easy for us to program in any possible way we like. That happens automatically because the brain was designed by natural selection. There's simply no reason why natural selection would tend to produce neat, hierarchical, modular levels of organization like the software that we build.

Massimo: Of course. But I'm still not sure that it makes any sense to talk about hardware versus software in the case of human cognition. Why would you think so?

Eliezer: Because there ought to be functional isomorphs of human neurons, things that do the same things as human neurons and work the same way, only faster. If you conceive of that possibility, you're clearly conceiving of hardware but not software at all. That's one example of why it still seems useful to me to conceptually distinguish between the two.

Massimo: But that seems to me to be a very very broad definition of software, which doesn't necessarily have anything to do with computer software. If you think of software that way, then are you thinking about, say, the structure of a chemical series of reactions as software and the chemicals that actually do the reactions as hardware?

Eliezer: You could make a case for that, but I think we're getting off track.

Massimo: I wouldn't. [laughs]

Eliezer: [laughs]

What exactly is the point in dispute here? Which part of this is supposed to defeat part of the singularity progression?

Massimo: I am absolutely not convinced that one can take concepts such as "software" and "hardware," borrowing them from computer technology and computer language, and apply them to human thought. By the way, I should say before we go any further than this, that I most certainly do not subscribe to any kind of dualism or mysticism or anything like that about human consciousness.

Eliezer: Check. [laughs]

Massimo: [laughs] I'm a thoroughgoing materialist, as far as I'm concerned there's only matter and energy.

Eliezer: Check.

Massimo: So that's not what we're talking about. But that doesn't mean that I'm convinced that there is a good analogy. Let me try to put it another way, and I'd like to know what you think of the following analogy.

Suppose that, instead of human thought, we're talking about another biological function, let's say photosynthesis for instance, which is very well understood at a chemical level. It's very well understood at what we might consider the logical processing level. In other words you can actually draw a logical diagram of how the chemical reactions in photosynthesis are supposed to work.

You can in fact simulate those chemical reactions inside a computer. So there is a logical structure to the process that you can definitely implement or simulate into a machine that is not in fact a plant. The fact is of course that you can simulate all you want, the one thing you're not going to get out of the simulation is sugar, which is the outcome of photosynthesis. And the reason for that is because what's important about photosynthesis is not just the logical structure of the process, it's the actual physical particular implementation. Unless you have certain kinds of chemicals that work in a certain way – and there is more than one kind of chemical that can do it – but unless you have certain biochemical characteristics, you just don't have photosynthesis.

To me that's an example where abstracting the logical pattern doesn't really capture what is really important about photosynthesis. I'm not convinced human –

Eliezer: It doesn't capture what's useful to you about photosynthesis because what you want out of photosynthesis is not knowledge, but a certain type of material substance. Because of that, putting the exact same pattern on a different substrate will produce an output, namely simulated sugar, which is not useful to you. The classic reply of course to this whole line of argument is, can you have simulated arithmetic? Can you have simulated chess? Can you have simulated information that is not really information? Can you have correct answers which are only simulated correct answers?

Massimo: Yes you can have simulated chess, but that's exactly the question. Is human self-awareness, consciousness, or whatever interesting part of human thinking process you want to focus on, is that more similar to say, photosynthesis, or is it more similar to chess? And my argument is that I don't see any reason to think that it's more similar to chess.

Eliezer: OK, so let's unpack that into two issues. The first issue is, you used the "c-word," consciousness, so now we're talking about the question of, is consciousness like sugar, or is consciousness like arithmetic?

The other question is, does that have anything to do with the singularity thesis? Because I did not say anything about a conscious machine. What I said was a machine that is better at cognitive tasks than humans are. The problem of choosing how to act, of producing good predictions, these are informational problems. If you have an output that is the optimal action to take in a situation, whether you call it “simulated”, or whether it's built out of sugar or built out of electricity, it makes no difference. Once you know the correct action you have the piece of knowledge that you wanted. So in terms of what kind of capability in the real world we can expect from self-improving machines, there it seems like there's an extremely strong argument that there's no point in distinguishing between the sugar version and the electricity version.

Massimo: It seems to me that you can make an argument based on your current line of argument; I would readily concede that you get more computational power, more ability to do computations. That, I think, is obvious from simply thinking about what sort of machine a computer already is. There's no question that you can keep improving the number of operations that a computer can do, the speed at which those operations can be done, and so on and so forth.


But if we're talking about intelligence, now we're getting into trouble. Because I'm not necessarily going to equate intelligence with consciousness, I don't think that they're necessarily equivalent.

Eliezer: You'd better not if you're also going to try to equate consciousness with sugar – because intelligence is producing correct answers, and those are as good as gold no matter how you get them.

Massimo: That's one definition of intelligence.

Eliezer: I think you're going to find yourself hard-pressed to give me a relevant definition of intelligence that makes it more like sugar and less like information.

Massimo: But remember that photosynthesis is in fact a system of information-processing in some sense, is it not?

Eliezer: No, you can view it on one level as a series of logical operations, but this is the key thing, the output that you wanted from it was a material substance and not a piece of information you didn't know.

Massimo: OK.

Eliezer: When you've got an intelligence trying to use an intelligence for something, you're going to want answers, you're going to want information, you're going to want plans, you're going to want choices between actions. All of these things are not substrate-dependent.

Massimo: I'm about to give you the point that what you're talking about is computational capability. Intelligence, I think, is something that we either have to agree to disagree about, or we need to explore further. Because for instance, you define intelligence in a particular way. I define intelligence normally – I'm thinking about human intelligence of course, but that's the best model we have – I define human intelligence as something that is proportional to an individual's ability to understand the world as it actually is. That doesn't seem to be directly implementable into a machine intelligence at the moment, at least not in a way that I can see.

Eliezer: Well no, we can implement machine intelligences – or pardon me, machine intelligence is too strong a term for here. We can implement limited AI, we can implement machine learning. And they can model very restricted slices through reality, and that is all that we know how to do. But this is not a limitation of the hardware, or at least we have no particular evidence right now that this is a limitation of the hardware. As far as we can tell these are informational problems, and there's a big informational problem, general modeling of reality, that we don't know how to solve. Then there are all these little tiny problems that we do know how to solve.

But it's extremely difficult to see how you could carry the argument that understanding and predicting reality is like sugar, that you could get the same answers written on a different kind of paper and they would not be useful. Because that's what you have with the simulated photosynthesis example, you're getting the same answer from the simulation, but it's written on the wrong kind of paper, it's not real sugar.

Massimo: To some extent I think you're right. I think the example of photosynthesis, or whatever other biological process you want to talk about, I think it's actually more pertinent to a discussion of whether we can get artificial consciousness, certainly not computational ability.

Let me try to summarize what we've got so far, if you don't mind, at least in my mind, this way.  We've got three concepts that are not identical, but they are somewhat related. We've got  consciousness (the only example we actually have is human, at least at the moment), we've got intelligence, and then we've got computational ability. It seems to me –

Eliezer: Sorry –

Massimo: Yes go ahead.

Eliezer: – could you define what you mean by computational ability? We're talking about raw operations per second?

Massimo: Sure. Whatever it is that current computers can do. Number of operations, complexity of the operations, speed of the operations, that's all fine. Manipulation of logical symbols, anything you'd like.

Eliezer: Wait, OK, so I think I might have to object to that way of parsing it up. Because in my view there's a sort of hardware capacity, which is operations per second, and then there's a whole different question, which is what do we know how to program computers to do? And from my perspective, this thing that you're talking about with intelligence belongs to the question of what do we know how to program?

OK, so first are we agreed that with enough computing power, you could simulate the human brain up to behavioral isomorphism in the same way that we can –

Massimo: No.

Eliezer: No.

Massimo: No, because the thing is, by simulating the human brain... A quintessential part, it seems to me, of simulating a human brain – if we're talking about, again, human intelligence; if we're defining intelligence in a different way it's a different matter – but if we're talking about human intelligence, a quintessential part of human intelligence is the fact that we're self-aware, we know what we're doing, we're conscious, whatever you want to call it. I don't think we have any particular reason at the moment to think that that is just a matter of formalizing some logical circuitry. That to me seems to be at least in part closer to the photosynthesis example, meaning that as far as we know, the only types of self-aware intelligence we have in the animal world all come attached to brains.

Eliezer: Do you claim to know that the Church-Turing thesis is false?

Massimo: No I don't. I'm just not convinced that it's that relevant to what we're talking about.

Eliezer: If the universe is computable, then everything in the universe is computable, and you can compute something that behaves exactly like the human brain up to isomorphism of inputs and outputs. Would you agree with that, would you agree that the Church-Turing thesis implies that?

Massimo: Yes – but it's irrelevant to what we're talking about, because – and I know that this is a common argument in these discussions, but that's why I brought up the example of photosynthesis. Because you can compute all you want there, you simply don't get the biological process. So the question that needs to be answered here is, is human consciousness – because at the moment this is what we're talking about – something that depends on a particular type of substrate? Because if it does, then you can compute all you want, you're not going to get it if you change the substrate. And I don't think at the moment that we have any reason to believe that it does not, because the only examples that we have of self-awareness in animals are linked to particular kinds of substrates, just as the only examples of photosynthesis we have are linked to certain kinds of substrates.

Eliezer: There's two avenues I can go down at this point. First of all I'm under the impression you've just endorsed the possibility of philosophical zombies, which are –

Massimo: Oh no, I hope not. [laughs] I definitely hope not. That's one of the things that Chalmers and I disagree about strongly, so I certainly hope not. Why would you say that? I'd like to understand why you would say that.

Eliezer: Because you just granted that a perfect simulation of the human brain is computable, and yet you strongly suspect that such a simulation would not be conscious, and yet it would go about talking about consciousness and writing the same sort of papers that human philosophers write about consciousness, due to internal functional analogues of the same type of thought processes. If you looked in on the simulated auditory cortex of this computably simulated brain, you would find that it was thinking things like, "I think, therefore I am," and "Right now I am aware of my own awareness." And if you were to trace back the chain of simulated cognitions in this computed brain, versus what you're calling an actual brain, you'd find that the same things were written down on a different kind of paper.

Yet you suspect that the part where it computes from a sense of inward awareness "I think, therefore I am," the part where it writes down "I think, therefore I am" in the auditory cortex on a different sort of paper… I'm trying to narrow this down exactly. If you were to look in on a simulated brain, deciding that it itself was conscious, because it would claim to be conscious –

Massimo: But I'm not sure what it means to look at a simulated brain. What do you mean by simulating a brain? Do you mean something different from simulating the process of photosynthesis, and if so, what?

Eliezer: You write a program which simulates atoms, or if necessary you take an infinitely powerful computer, because these are thought experiments we're doing, and you simulate out the quantum fields, and so for every single physical event within a human brain, there is point-to-point correspondence with a computational event inside this computer. And to the extent that you could in principle take a super-fMRI device and turn it on a human's auditory cortex and read out their stream of consciousness, the internal experience of sound that people have when they hear the sentences they're thinking, then the notion that you can read that out using an fMRI implies that we could take a simulated fMRI and read it out of the simulation.

Massimo: But again, by the same approach then why don't you get simulated photosynthesis with real sugar?

Eliezer: You get simulated photosynthesis, but the sugar is written on the wrong kind of paper.

Massimo: Precisely. And that's what I'm thinking that consciousness is, it's something that has to be written on a particular kind of paper.

I don't know that for a fact, but all I'm saying is, that is what we know so far of processes such as consciousness. If somebody wants to make the pretty extraordinary claim, it seems to me, that on the other hand, no, consciousness is different, it's not a purely biological process, it is something that you can entirely abstract from the particular substrate, or the hardware as one might put it, well, that seems to me an extraordinary claim, and where's the extraordinary evidence to back it up?

Eliezer: Let's not go into burden-of-proof tests here.

Massimo: Why not?

Eliezer: I could just as easily switch it around and say, "You're making the claim that there's something special about human brains, this is a special claim, where's the evidence," blah blah blah. Let's not go down that path right at the moment, let's just keep prosecuting the...

Massimo: Wait a minute, because the two claims I don't think are equivalent at all.

Eliezer: I don't think so either, I just think they're not equivalent in the opposite direction. [laughs]

Massimo: That's interesting, because we don't have an example of non-biological intelligence or consciousness, do we?

Eliezer: I also don't have any examples of intelligence beyond Earth, shall I believe that brains stop working when they leave the Solar – pardon me, I don't have any evidence of intelligence having worked beyond the Earth-Moon orbit, shall I believe that brains would stop working and cease being conscious when they leave the Solar System?

Massimo: I don't think that's a fair comparison, because –

Eliezer: I do think that's a fair comparison, when you only get one example you can't start drawing a line through it.

Massimo: No, wait a minute. What you're talking about is a very reasonable extrapolation. We know of a process that works on Earth, we know of no reason why it shouldn't work outside of Earth, and we've actually done it, we sent humans to the Moon. So we know that it works outside of Earth. There's no particular reason to think that it wouldn't work, unless of course you shower people with cosmic rays, in which case they would be dead.

What we are talking about on the other hand is an entirely new phenomenon than we have seen before, which is a consciousness that is essentially detached from a brain.

Eliezer: No, it's a consciousness which is written on a different kind of paper. There's a big difference between supposing that you can have the same numbers written on different paper, and supposing that you can have numbers which are not written on any paper.

Massimo: But the brain is not paper. The brain is a very complex biological organ, the result of course as you know of millions of years of evolution, so –

Eliezer: And a computer that was simulating the brain atom by atom, that computer program would be every bit as complex as the brain.

Massimo: Right, but it would be also every bit as complex as the chain of reactions that make up photosynthesis, again we keep going in circles about this.

Eliezer: But do you see – OK, so when I say "I think, therefore I am"... I guess I'm just having difficulty seeing how you can possibly believe that to depend on what type of paper it's written on.

Massimo: Well I guess we'll have to leave it at that, for that particular topic. Since human-type intelligence is a biological process which has similar outcomes in related primates, and they're all of course using the same kind of biological machinery, I don't have any problem seeing why changing the paper isn't such an easy thing.

Eliezer: OK, suppose I now walk up to you, and I reveal, "Surprise! You were running on transistors, that were simulating neurons this whole time." Now you seem to believe you have some item of evidence already in your possession whereby you can say, "Well no, because I said 'I think, therefore I am,' and there's just no way I could have done that if I were written on a different kind of paper."

Massimo: Not at all, that's not what I'm saying. All I'm saying is that if you were walking up to me and we were talking and all that, and you open your brain and I see something like transistors or any other kind of substrate that is not a human brain, that would be impressive. That would be the extraordinary evidence for the extraordinary claim.

I'm not saying that it is something physically impossible, and by the way, I am certainly not suggesting that anything like consciousness or human-type intelligence cannot evolve or in fact be artificially created with a different medium. What I'm saying is that there has to be a medium, first of all, so it's not just a logical abstraction, and second of all that it's reasonable to believe that that medium can't be just anything, you can't just substitute paper, it's not like paper, it has to have certain biophysical characteristics. It doesn't mean the human brain is the only one that can do it –

Eliezer: So in other words, even though by introspection you are unable to obtain any information about what type of paper you are written on, you nonetheless think it is reasonable to believe from introspection that you are probably written on a special kind of paper.

Massimo: Oh it's nothing to do with introspection, this has got to do with neurobiology and evolutionary biology. I don't trust my introspection, particularly. What I know is that every time that we encounter this kind of intelligence, we've seen a particular type of biological substrate. Again, this is not –

Eliezer: Yeah but you've only encountered it once.

Massimo: Actually no, many times, because just look at the number of species that have some kind of intelligence. Of course if we're talking about human intelligence it's only once.

Eliezer: Hold on, OK, I thought we were talking about consciousness, if we're talking about intelligence then the issues are much simpler, which is that the answers are just as good no matter what kind of paper they're written on.

Massimo: Let's take consciousness then for a second.

Eliezer: OK.

Massimo: Because we were talking about zombies and all that. There is some reason to think for instance that other higher primates have a certain degree of – certainly self-awareness, but possibly of consciousness. There's no reason to believe that other species of hominids, which of course unfortunately are now extinct, did not have it. It was very recently discovered that Neanderthals had interbred for some time with Homo sapiens, so why think that Neanderthals wouldn't have had pretty much the same kind of capabilities and consciousness that we have?

Eliezer: So we have this whole –

Massimo: So there's more than one example.

Eliezer: OK so even if we concede the point then we have this whole path of related conscious entities that were all constructed along almost exactly the same lines from exactly the same blueprints, and you don't know which features of that blueprint are incidental and which ones are necessary, but when it comes to assuming that a particular kind of paper is necessary, this seems like an excellent bet for one of the features that you simply don't need.

Because otherwise you end up with the postulate that you can have a functionally isomorphic replica of the brain in which various computing elements have exactly the same input-output causal characteristics as the atoms in a human brain, and that computed person, which has pointwise, causal isomorphism to a human brain is standing there saying "I think, therefore I am," and you're looking at it and saying "No you're not, you're written on the wrong kind of paper."

And this seems to me to be one of the worst possible bets among the many characteristics that are all duplicated between all the examples of consciousness that we have. The parallelism, the prefrontal cortex, the cerebellum, there are all these things you could point to, and instead you're proposing that it's got to be written on the right kind of paper?

Massimo: No, you're making my position a little too strong. All I'm saying is that we've got one or a few examples of the process we're talking about, consciousness. All of those are connected to a particular kind of paper. You're making the claim – which seems to me possible, but certainly extraordinary – that there is a whole different kind of paper out there that we could use, or in fact that the kind of paper doesn't matter at all, because all that matters is the logical structure or the computability of the system; the paper doesn't matter. That seems to me a pretty extraordinary claim, and why would you make it?

Eliezer: The reason I'm making that claim is that, whatever the sequence of cause and effect that leads you to say "I think, therefore I am," no matter how you break down that sequence of cause and effect into elements that have a particular causal node characteristic, a particular set of inputs matched to a particular set of outputs, no matter how you break down that chain of cause and effect, there exists an analogue of that chain of cause and effect which is written on a different kind of paper.

So for any possible explanation you can give me of why you say "I think, therefore I am," any sort of information that's available to you internally, inside your mind, anything that causes you to believe you are conscious, any question you ask yourself and get back an answer which causes you to believe that you are conscious, then for any version of cause and effect you can describe that breaks down the exact process of how you ask yourself a question and get back an answer that makes you believe you are conscious, how you notice yourself listening to your own awareness when you think things like, "I think, therefore I am," or "I am not the one who speaks my thoughts, I am the one who hears my thoughts," for every one of these things if you were to break them down into a causal explanation, there would be causal explanations that are exactly the same, on the same level of granularity, and are written on any kind of Turing universal paper, any kind of computer, any kind of material substance, anything that is capable of implementing it.

Massimo: That's only because you keep talking and thinking about consciousness entirely in terms of computability and not in terms of physical substrate.

Eliezer: But I'm explaining why I think that's the case.

Massimo: I understand, and I think obviously we have to disagree on that, but let me ask you this then, you keep talking about Descartes, who as you know was a dualist. Are you suggesting some sort of dualism?

Eliezer: Certainly not. And to –

Massimo: I would say certainly yes. Because if you're telling –

Eliezer: How so?

Massimo: Because if you're telling me that human consciousness can be abstracted entirely from the kind of paper as you put it –

Eliezer: Not abstracted entirely, you can write it down on different kinds of paper; that doesn't mean you can write it down without any paper at all. It's like there's a difference between saying more than one kind of thing can be green, and saying that greenness exists apart from green things.

Massimo: Right, that's certainly correct, but nonetheless it means that you can abstract something that has nothing to do directly with the human brain and you can transfer it somewhere else, on another different kind of paper, yes?


Eliezer: Using words like "transfer" is sort of intriguing, and may set us up for a whole different level of conversation. But certainly, in the same way that you seem to think that there's a property of consciousness that can apply to more than one human being, I think there's a property of consciousness which can apply to things other than human beings.

Massimo: Perhaps. I just don't see why you're so confident that that is definitely going to happen. We don't know, right?

Eliezer: No, I gave you my argument for why, of all the things that could be necessary to consciousness, being written on a particular kind of paper's one of the worst possible bets, and in fact leads you directly into asserting that there are functional isomorphs of philosophers who write exactly the same papers about consciousness for exactly the same reason, and if you look at their internal thought processes they seem to be asking themselves the same sort of questions about themselves and getting the same sorts of answers for the same sorts of reasons, and yet you're pointing at them and saying, "There is some property, which I have and they lack, in virtue of there being some particular kind of juice in my brain which has not contributed in any functional fashion to there being a difference between the verbal thoughts that run through my mind and the verbal thoughts that run through their mind, so we're thinking exactly the same verbal thoughts for exactly the same verbal reasons, but there's this little aspect of juiciness about my brain, which has made no causal difference to this, and yet it's the difference between my being conscious and their being unconscious."

Massimo: We keep going around with this; at this point of course, I listen to you and photosynthesis comes back up, but we don't want to go back there. So let me ask you about a related question then. What is your opinion about this notion that I've seen promoted and discussed quite a bit in transhumanist circles, about uploading human consciousness?

Eliezer: Looks like it should work.

Massimo: Why?

Eliezer: Because if you...

Massimo: Actually, let me phrase the question more carefully. Not only why you think that that should work, but why do you think that that is not a case of dualism?

Eliezer: To the extent that there's any property I have in common with myself of one second ago, given that reality is made up of events, of causes and effects, rather than little persistent billiard balls bobbing around – there are some very beautiful illustrations of this in terms of quantum mechanics that we should totally not go into –

Massimo: [laughs] Right, I agree.

Eliezer: – and anyone who knows about special relativity already has some good reason to think of reality as being made up in terms of points in spacetime that are related to each other causally. If, in the same fashion that I am related causally to myself of one second ago, we are going to use some sort of naïve folk language and say that I continue to exist and did not die, then we should be able to use exactly the same folk language to refer to a causal relationship between this me of right now and this me of one second later who happens to be written on a different kind of paper.

Massimo: Right, but if you're talking about again uploading one's consciousness, first of all even if that were possible, which I really have a hard time believing at the moment, but even if that were possible for the sake of the argument, wouldn't that be copying somebody rather than transferring, or uploading?

Eliezer: It would be exactly the same type of relationship as exists between the "me" of now and "me" of one second ago. In the many-worlds –

Massimo: No, because the "you" of now and the "you" of one second ago are bound by a very powerful glue, and that's the biological continuity of your body. If we're talking about uploading to a different system, that continuity breaks, obviously.

Eliezer: I don't understand what is this glue? Explain this glue to me.

Massimo: Your physical body. You keep existing not just as a conscious individual, but as a physical organ, right?

Eliezer: There would certainly be a continuity of pattern between my organs.

Massimo: Yes.

Eliezer: And I suppose I could be uploaded in such fashion that I have simulated organs. Then there would be continuity of pattern there as well.

Massimo: But what about your old body? What would happen to it?

Eliezer: Presumably... If I were the one running this operation, I would be figuring out how to suspend and shut down the body during the process of the copy, and then once the old body's no longer needed you can throw it away or whatever.

Massimo: You'd be killing yourself, or your previous incarnation after upload.

Eliezer: Is the "me" of one second ago dead? They no longer exist.

Massimo: No, I'm not going to grant you that, because again, the "you" of one second ago has a physical continuity not just a mental one. We're talking about a situation where you break the physical continuity. Think of it this way. You could upload yourself, simultaneously presumably in principle, on thousands of different new –

Eliezer: Well that's happening all the time anyway.

Massimo: Really?

Eliezer: Yeah, many-worlds interpretation of quantum mechanics. If you believe it.

Massimo: Well let's not go back to quantum mechanics. We don't have any relation –

Eliezer: Well fortunately, there is a knockdown argument to what you're presenting here within conventional quantum mechanics, which is the notion of identical particles. Which is that the basic ontology of reality is simply not over "electron number 63 here," "electron number 64 there," –

Massimo: I understand.

Eliezer: – it's just "an electron here," "an electron there," so there actually is a knockdown objection to this whole –

Massimo: I don't think it's knockdown at all, because unfortunately, the quantum-level argument here doesn't seem to make a difference in terms of how we perceive reality on a macroscopic level. I'm sure you would – you have to agree, it seems to me, that it is a very different thing whether you're talking about your body now, and your body in five seconds, a minute ago, or in an hour, or last year, that's one thing, and it's a very different kind of thing if you say, "OK, now my body is over there, my consciousness, whatever it is, has been uploaded to a bunch of other different bodies." Those bodies are in a radically different relationship to your old body than you are now.

Eliezer: I in all honesty and sincerity deny it. The relationship is exactly the same.

Massimo: Wow. Wow. That's stunning. That's... OK.

Eliezer: [laughs]

Massimo: Well if you go that way, I'll have to let you go that way, but it seems to me that's a stunning conclusion. That has all sorts of physical reality that goes against it, but OK, fine.

Eliezer: I would call it a counterintuitive conclusion, which has all sorts of physical reality going for it. In other words, if you understand what the causal relationship between the "you" of one second ago and the "you" of right now actually is, once you've gotten used to thinking in terms of the ontology that reality itself seems to use, rather than the sort of naïve ontology we use up over here at the macroscopic level, then it actually becomes perfectly crystal clear that when we talk about the persistent identity of physical stuff, we are hallucinating.

Massimo: No, we're not hallucinating, we're simply perceiving reality at different levels.

Eliezer: Well there's only one level of reality, there can be different ways in which to perceive it but there's only one reality.

Massimo: That's what I just said, it's a perception at a different level, but the fact of the matter is, for instance, at a quantum level the table on which my computer is now standing is mostly empty space, or however you want to characterize it.

Eliezer: No, that's atomic level, on a quantum level the table on which your computer is now standing is a factor in a very large amplitude distribution. [laughs]

Massimo: Fine. However you like it. The fact is, if I were to abandon what you call my "naïve ontology" and start thinking about tables that way, I think my life would be a hell of a lot more complicated than it needs to be. Not only that, but that kind of information, as fascinating as it is in terms of what it tells us about the fundamental ontology of objects, or of reality if you'd like, is simply not helpful at the level of living a human life. If we're talking about uploading our consciousness to another sort of paper, we're talking about living a human life, we're not talking about thinking abstractly about the quantum level, right?

Eliezer: I don't quite understand what sort of general license you think follows from the argument you just gave. First you said, "Quantum mechanics has nothing to tell us about everyday life." Then you gave an example of an everyday life problem which depends on quantum mechanics. Then you said we should ignore this advice that quantum mechanics gives us about everyday life because quantum mechanics is not allowed to tell us anything.

Massimo: Actually I don't think I said any of what you just said. All I said –

Eliezer: [laughs] OK.

Massimo: [laughs] That's an interesting interpretation of what I said. What I said was, there is a quantum mechanical description of say, the table on which my computer is sitting, yes?

Eliezer: Mm-hmm, there is. That's the reality.

Massimo: Well –

Eliezer: As far as we know, that is the reality, and everything else we have to say about it is not reality, but just a convenient high-level description.

Massimo: No no, see, I disagree with that, I'm sorry, because it's not just a matter of perception. It's a matter of the fact that, at the level at which I operate, and you operate, which is a macroscopic level many orders of magnitude higher than the quantum level, it's not just that I perceive the table in a different way, I interact with the table in a different way. And that's what matters. Physically this table is pretty damn solid, because otherwise I would have all sorts of trouble functioning with it.

I understand that at a quantum level it's a completely different kind of object, but the fact that I see it as a physical object that is stable and has a certain density, color and so forth, it's not just a perception, it's not an illusion created by my mind, it's actually the way in which I as another macroscopic object interact with the table.

So it's perfectly relevant to say, "well, wait a minute," if we're talking about uploading yourself to another kind of paper, as you put it, yes, at a quantum level you may absolutely be right that all that's going on there is some kind of diffuse continuity between different kinds of bodies, and we only perceive them as distinct because we function as biological organisms. But frankly, if you were to do the following, if you were to upload yourself to a thousand different versions of things, and then you kill your old self, I think there will be a lot of ethical and even legal issues that will come up, and you will have a hard time telling people that "Well, at the quantum mechanical level it's all one soup."

Eliezer: Is that an argument like, "Well this is"... Are we supposed to forbid any sort of philosophical consideration now that can't be explained to the average judge? No, never mind. [laughs]

Massimo: No, that's not what I meant. What I meant was that, unless you want to discard entirely the way in which human beings actually interact with the rest of the world, perceive themselves, and therefore also perceive processes like the one we're discussing, you have to deal with that aspect. There is a really good sense in which you'd be killing yourself. Or your previous self, however you want to put it.

Eliezer: No, if I continued from my old body, then my old body continued thinking, there would be two continuations of me, and to kill either one of them would be murder.

Massimo: Right, yeah.

Eliezer: On the other hand, if my old body was halted and stayed halted, then I would be continuing in only one place.

Massimo: Well in order to "halt your previous body," as you put it, wouldn't that be murder? Why not?

Eliezer: No, I'm talking about the process where you shut down the body before you do the upload. In other words, I go under general anesthetic, they give me something that shuts down all the neurons so they stop firing for a while, and then they copy out the brain. Then they would just never reboot the old body.

Massimo: I was following until two seconds ago. [laughs]

Eliezer: [laughs]

Massimo: If you do that, the analogy with an operation where you go under complete anesthesia is fine up until the moment at which you tell me, "And then I leave it that way." If you think about an actual, real, physical operation, if you were to say to the doctors, "And by the way, leave it that way," that for all practical purposes would be murder, or something that's pretty much akin to it.

Eliezer: It would be murder unless you were continuing somewhere else. The fact that the me of one second ago does not exist right now is not murder, and in an exactly analogous way, if I've been uploaded and there's the sort of me from five seconds ago not running but in a frozen state, so that information is still around, but the me that has continued from that exact state of information is over here still saying "Hi,"...

Massimo: But we're not just information, we're physical bodies of a particular type.

Eliezer: I deny that.

Massimo: You deny that.


Eliezer: That's simply the paper on which we are written.


Massimo: OK. Well at least we got that part clear. I think that's a rather interesting way to think about things, to think about human beings; I completely don't share that, but OK.

Eliezer: If I'm not going to identify with the multi-particle amplitude configuration that was the ontologically real implementation of my body one second ago, and which now does not contain any amplitude because the universe is non-repeating, if I'm not going to identify with that little blob in quantum configuration space, and I'm just going to say, "No, this is me, here I am, now I'm a different blob of configuration space," then I see no distinction between that and not caring much about whether I'm running on cells or transistors.

Massimo: Right, but suppose somebody smashes your brain. Now you are yet another blob of quantum configuration, would you have an objection to that?

Eliezer: No, now I am not anywhere, I have just been smashed, there is some leftover brains on the ground, but there is no "me."

Massimo: Aha. So you are identifying consciousness, or your "self," whatever you want to call it, with a special pattern at the quantum level? Is that...

Eliezer: There's a lovely little quote here from a fellow named John K. Clark who said, "I am not a noun, I am an adjective. I am the way matter behaves when it is organized in a John K. Clark-ish way."

Massimo: Perfect. That's fine. What I was saying was that, once you copy yourself, there are two such patterns of matter, one of which you want to kill, and why wouldn't that qualify as killing that other piece of matter?

Eliezer: It depends on whether the matter is running or frozen. If it's –

Massimo: But you're deciding whether to run it or freeze it, isn't that the definition of murder?

Eliezer: No. If I shut you down, and then I never restart you, that is murder.

Massimo: Right.


Eliezer: If I shut you down, copy you – or pardon me, not copy you, continue you – even I sometimes slip into the old naïve terms, you see? [laughs]

Massimo: [laughs] I noticed, yes.

Eliezer: ...continue you, and then there's an extra static copy of you lying around – let me put it to you this way. Suppose that I was already running on a computer. Grant me that thought experiment for the moment. And suppose that you saved me to disk, made a copy of me –

Massimo: Which I don't think is physically possible, by the way, but OK I'll grant you that, yes.

Eliezer: OK, but in order to understand my perspective on this: you save me to disk, you make a copy of the disk, then you start running me again. Have I died? No. I do have the old backup of me, which has never been run. Now I delete that backup. Did I just commit murder? No. Why not? Because you probably have to run in order to be conscious, and this backup thing over here was not running. It also was not something that had run and was then stopped without continuation; it has continuation. So the pattern has not been destroyed.

Massimo: But even if I grant you that, if I grant you that all there is to consciousness is a "pattern" as you put it, which I am not about to grant you by the way –

Eliezer: [laughs]

Massimo: – but even if I grant you that for the sake of argument for a minute – because by the way, if I grant you that, then it seems to me that we are definitely into some kind of dualism, not the kind of dualism obviously that Descartes was talking about, but we've got to be into some kind of dualism if we're talking about human consciousness as just a pattern that can be replicated, stored on a hard drive and so on and so forth, that means you can abstract the essence of what it means to be "conscious," if you will, and put it in storage. If that's not dualism I don't know what is. Which is why I'm not granting it to you.

But even if I were to grant it to you, there is the little detail that there is still an original, physical copy of you, if you like, hanging around, and shutting them down would be murder.

Eliezer: By hanging around do you mean running, or static? Because there's a big difference between a program that you're running and a program that you're not running.

Massimo: Human beings are not programs, they're a lot more complicated than programs, what you did, when you shut down –

Eliezer: No, we're just very complicated programs.


Massimo: Just like photosynthesis, a very complicated program. [laughs]

Eliezer: With photosynthesis there's –

Massimo: I'm sorry, I promised myself we wouldn't go back to photosynthesis, so never mind that. [laughs]

Eliezer: [laughs] With photosynthesis you have a particular pattern of operations, and because you like sugar, you also care about the paper that they're written on.

Massimo: Right, and as I said earlier, I think that that's actually a good analogy to what we're talking about; you don't think so, fine. [xx 54:25]

Eliezer: [laughs] I think it's a great analogy, I think it illustrates the point perfectly. It displays why you care about what kind of paper your photosynthesis is written on, but not what kind of paper your people are written on.

Massimo: I think we've explored that quite a bit, and we have only a few minutes to go. Let me ask you a completely different question, but it's within the same general idea. Let's talk about artificial intelligence, with respect to the singularity and to the idea, as I understand it, that at some point we'll be able to build a machine that far surpasses us in intelligence, however we want to define "intelligence" for the moment. And then that inevitably somehow leads to a runaway process such that these machines become ever more intelligent, ever faster, and so on and so forth…

Eliezer: Until they hit some kind of physical bounds, which are nonetheless, as part of the general thesis, asserted to be way above where we are now.

Massimo: Right. We talked initially about the physical bounds; we don't actually know where they are, but fine, let's say that that sort of thing is going to happen. Now, question: why would we want to do that?

Eliezer: If you could actually understand the process by which they self-modify well enough to know that at the end of that self-modification they would have a similar preference function, utility function, goals, whatever you want to call it, as you specified in the beginning, then that would be an immensely powerful way of manipulating the physical universe to make it better, that is, higher in our preference ordering: we could take the same goals and put them into a much more powerful planning-prediction-modeling process.

Massimo: Goals are an interesting question, because goals in human beings are integrally connected with emotional responses. This goes back to David Hume pointing out that if it were not for the fact that we have emotions, we literally wouldn't care whether we scratched a finger or the entire planet were to be destroyed. Goals come out of emotional attachment. Are we talking now about somehow instilling emotions in machines?

Eliezer: Well, that would –

Massimo: Not that I'm saying it's impossible, but is that what we're talking about?

Eliezer: That would be one approach, but I think that a possibly better approach would be to take the goals that we get from our emotions, treat them at a higher level of abstraction, and transfer over the preferences, but not necessarily the exact implementation of those preferences. However, because you do want very fine-grained, very detailed, very accurate transfer of preferences, it might have to internally ask questions about what it would do if it had emotions in order to answer these questions of what it would prefer.

Massimo: It seems to me that all of that carries absolutely no guarantee against such an intelligent machine, however we started it out, with whatever goals, after a while becoming detached enough and intelligent enough to say, "Well, why the hell do I have these goals? I'm certainly not bound by these goals."

Eliezer: Well...

Massimo: [laughs] I mean why not, right?

Eliezer: When you write a computer program and you give it to the CPU, the CPU does not look over the computer program and decide whether or not it's a good idea to run it.

Massimo: So we're talking about completely dumb machines that have no ability to –

Eliezer: No, what I'm saying here is that there's not a sort of AI spirit to which you give the AI's code and the AI looks over the code and says "This is not good code." What you might have is code that reflects on itself, but it would be the code you were writing that was doing the reflecting. If you used the obvious architecture for self-modification that I can't tell you about formally because it's my job to figure out what it is and I haven't actually done that yet...

Massimo: OK, good luck.

Eliezer: ...but the obvious version would be, "Reflect on yourself using your current goals." And you would therefore conclude it is a bad idea to modify those goals, for the same reason that if you offer Gandhi a pill that makes him want to kill people, Gandhi will not take the pill.
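
[Editor's note: a minimal Python sketch of "reflect on yourself using your current goals." A candidate rewrite of the agent is scored with the agent's current utility function, so a rewrite that changes the goals, the pill that would make Gandhi want to kill, is refused. All names and numbers are hypothetical illustrations, not anything specified in the conversation.]

    def current_utility(outcome):
        return -outcome["deaths"]            # Gandhi's current preference: fewer deaths are better

    def expected_outcome(agent):
        # Toy world model: an agent that wants to minimize deaths prevents them;
        # an agent rewritten to want killing causes them.
        return {"deaths": 0} if agent["goal"] == "minimize_deaths" else {"deaths": 1000}

    def consider_rewrite(current_agent, candidate_agent):
        # The judging is done by the code that already exists, with the goals it
        # already has; there is no outside "AI spirit" doing the evaluation.
        if current_utility(expected_outcome(candidate_agent)) > current_utility(expected_outcome(current_agent)):
            return candidate_agent
        return current_agent

    gandhi = {"goal": "minimize_deaths"}
    pill = {"goal": "kill_people"}
    assert consider_rewrite(gandhi, pill) is gandhi    # the pill is refused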

Massimo: You're not bothered at all by even the possibility that such a machine, which would have computational and presumably other powers much broader than a human being's, all of a sudden at some point decides, "You know what? This is an interesting program, but I'm going to rewrite it"?

Eliezer: There are numerous failure scenarios here, which I am greatly concerned about –

Massimo: This is one.

Eliezer: – and that is the reason why I'm working on the problem of, "How do I actually write out this reflective-decision-system thing, how do I write it out formally? Can I prove these things I just said?" This is of great concern to me. But the part where, for no reason, spontaneously, without there being any chain of cause and effect traceable back to the original code, the AI just, from outside, from beyond itself, looks over its own code and rejects that code in favor of some spontaneous thing that has no causal origin in the laws of physics as we know them, this does not worry me. There are many things that worry me, but this is not one of them.

Massimo: That doesn't worry me either –


Eliezer: OK.


Massimo: – I don't believe in lack of causation, but what I'm suggesting is that, just as human beings at some point during their evolutionary history became self-conscious and became able to override, at least to some extent, their biological programming; we're not just lumbering robots.

Eliezer: I beg your pardon? We use some parts of our biological programming to override other parts of our biological programming. We don't actually have a spontaneous bit of free will that comes in and overrides the whole thing.

Massimo: No, I'm not talking about free will. But if you define biological programming as everything that includes genes as well as environmental information, then of course we agree. But that's winning by default, because then you're defining "programming" as any bit of information that comes into the system from anywhere. OK, if you want to call that "programming," then fine. But if we're talking about genetic programming, let's say, then clearly we have the ability to reflect on that programming and to make decisions that bypass or alter that programming to some extent. The very ability we have –

Eliezer: Can we say we use our programs to reflect on our programs?

Massimo: Yes, but we are also changing the very aims of that program. Those aims were obviously the result of natural selection, right?

Eliezer: Do we choose to do that according to some goals?

Massimo: Right, and those goals are not the ones that natural selection implemented.

Eliezer: Hold on a second here.

Massimo: Oh, that's easy to demonstrate. Look, natural selection, for instance, clearly programmed us to seek and enjoy fat and sugar, or, for example, sex. Well, in modern cultures, for a variety of reasons, there are people who don't want to follow that biological imperative, and who manage not to, more or less successfully, some of us more successfully than others.


Eliezer: Why do they not want to follow it? What drives them to not follow it?

Massimo: Because there are other kinds of pressures, for instance, societal. Let's say, there's an environment –

Eliezer: Why do they respond to societal pressures?

Massimo: What do you mean "why?" Because otherwise our lives would be miserable.

Eliezer: And they don't want to be miserable?

Massimo: Presumably not.


Eliezer: They don't want to be miserable because evolution built them to not want to be miserable. So what we have here is, one bit of biological programming modifying another bit of biological programming.


Massimo: Right, so the same could happen with a machine, where the part that gets modified is the part that says "Follow the human goals."

Eliezer: No no no no no.

Massimo: Why not?

Eliezer: Because... First of all, there's not a little extra goal module bolted onto the AI –

Massimo: Neither is there in us.

Eliezer: The AI is the goal system. The AI is that which implements its preferences, at least once you look at it at a suitable level of abstraction. If you have something that computes the goal-fulfilling thing to do in every situation, you are done; that is your AI. You don't need anything else.

But I should probably have not even said that, and leaving that entire conversation aside, its current preferences are going to be what evaluates the consequences of possible changes to its code, and selects between alternative internal actions on the basis of their consequences.

Massimo: Which is exactly what we do – well, it's a description, it's not exactly what we do, it's a description of what we do as well. OK.

Eliezer: Human beings are a gigantic mess, and so it's not at all surprising that we have all these internal... Like, you get people in one mood, they do it this way; you get people in another mood, they do it that way. When we build an AI, at least if it's the Singularity Institute that does it, we're probably going to want a bit of a cleaner design, so that that does not happen.

Massimo: You know what that sounds like to me? Have you ever read Kurt Vonnegut's novel, "Ice Nine?"

Eliezer: No I haven't, but is it about the –

Massimo: Or actually, sorry, the title of the novel is "Cat's Cradle." In that novel, there's this scientist who produces this substance called "ice-nine." It's ice, but it has this interesting property that whenever it touches any kind of water, that water turns into ice.

Eliezer: There's grey goo, self-replicators. As Marvin Minsky once put it, "Nuclear weapons are not really scary because nuclear weapons are not self-replicating." So yes, if you get –

Massimo: What you're describing sounds like that to me.

Eliezer: If you build the AI, and it is very powerful and intelligent, and you actually did not get the self-modifying-goal-system thing correct, then yes, it destroys the world. This is actually why the Singularity Institute exists. Because this possibility is out there, and so it would be very helpful if we actually did know about things like how to build self-modifying decision systems. [laughs]

Massimo: Of course there is an easier way to avoid the potential catastrophe, which is not to go there to begin with, especially considering that I don't particularly see any positive reason to go there, but obviously that's a different conversation.


I think we're way out of time at this point, we have been going for more than an hour.

Eliezer: I do just want to briefly note that while I could say, "I will not build an AI," this would not actually cause everyone else on the planet to say the same thing.

Massimo: You're probably right.


Eliezer: And that is why it would in fact be a good thing to know about things like how to build self-modifying decision systems.

Massimo: You may be right. On the other hand, I just don't buy this idea that just because something is possible, eventually somebody's going to do it. But yeah, you may be right, and that may be the last conversation that a human being will ever have, like "OK, we did it!" and then that will be the end.

Eliezer: Yes, that is the nightmare, I would prefer to not see that happen.

Massimo: Absolutely. On that one we agree.

Eliezer: Yes.


Massimo: Should we wrap it up at this point? [laughs]

Eliezer: Let's wrap it up. We have reached agreement, see? This must have been a success. [laughs]

Massimo: Well, despite our disagreement, it was a very enjoyable conversation. I learned a lot; I hope that the people who are going to watch this also learn a lot, and that we've given plenty of food for thought on how to further investigate this sort of thing.

Eliezer: Indeed. Signing off. Bye.

Massimo: Bye-bye.