Singularity Summit 2012

How to Create a Mind



For more transcripts, videos and audio of Singularity Summit talks visit intelligence.org/singularitysummit

Speaker: Ray Kurzweil

Transcriber(s): Ethan Dickinson and María Teresa Chávez


Moderator: Ray Kurzweil, as you probably know, has been described as "The Restless Genius," "The Ultimate Thinking Machine," and he's been called the rightful heir to Thomas Edison. He's also been named one of 16 revolutionaries who made America.

His long career as an inventor has produced many firsts. At 17, he built a computer that composed music. Later, he developed the first text recognition software capable of recognizing any font, and also the first commercially marketed large-vocabulary speech recognition device. Ray has won the National Medal of Technology, was inducted into the National Inventors Hall of Fame in 2002, has received 19 honorary doctorates, and has been honored by three U.S. Presidents.

His latest book, to be released next month, will be called "How to Create a Mind: The Secret of Human Thought Revealed." Please join me in welcoming our keynote speaker to close day one of the Singularity Summit, Ray Kurzweil.

[applause]

Ray Kurzweil: It's great to be here at the seventh Singularity Summit. The singularity is still not here...

[laughter]

Ray: ...it's still near. It's actually nearer...

[laughter]

Ray: ...even judging by viscerally identifiable events, and I'll talk a bit about that.

Just to comment on Steven [Pinker]'s thesis, I think it's a brilliant insight. I go around the world and speak to a lot of different audiences, and unlike I think this audience, lots of audiences have the idea that things are getting worse. A subset of that school of thought blames technology, that things were much better before technology spoiled our lives. I often remind these people to read Charles Dickens or Thomas Hobbes as to what life was really like hundreds of years ago.

There are a number of works coming out now showing how life has really improved, but people very quickly lose perspective. They're very focused on the latest political issues and whether education or the economy is up one percent or down two percent over the last year. But if you broaden your horizon, really remarkable things have happened. I'll show you a graph that plots what happened to health and wealth over the last 200 years, and it's quite dramatic, the same thing for education. Professor Pinker has done an outstanding job on collecting the data for violence.

I agree with the various reasons that he cites, but I would posit an overriding thesis that all of this stems from the law of accelerating returns, the acceleration and exponential growth of information technology. In particular two aspects of it. One is the growth of communication technologies. Communication technologies have directly fostered democratization and democratization is really responsible for peace.

I wrote in the 1980s – this was when the Soviet Union was going strong, and in school we were huddling under our desks, putting our hands over our heads in drills to protect us from all-out atomic war. I said the Soviet Union would be swept away by the then-emerging decentralized electronic communication, which at that time consisted of early forms of email, using teletype machines and fax machines. That created the social network of that day.

People thought that was nuts, that this mighty nuclear-armed superpower would be swept away by a few teletype machines. But that is what happened, in the 1991 coup against Gorbachev. The central authorities grabbed the TV and radio stations, expecting to keep everybody in the dark, and it didn't work anymore. There was this clandestine network of hackers that kept everybody in the know, and it swept away totalitarian control. We had a great rise of democratization in the 90s, with the rise of the Web. We're seeing the impact now of social networks. Dr. Pinker's correct that you can always see pluses and minuses, but I think fundamentally it's a very democratizing force.

We've seen a continual acceleration of communication technologies. It took hundreds of thousands of years for the first communication technology, which was spoken language, and we could share stories. Then tens of thousands of years to develop written language, so we could prevent those stories from drifting and we could write down permanent insights to share across generations.

Then 500 years ago the printing press was invented, and it took 400 years to reach a mass audience. The telephone reached a mass audience in 50 years. The cell phone reached a quarter of the population in 7 years. Social networks, wikis, and blogs reached a mass audience in 3 years. It's been a continual acceleration, and it has created a global community, with language translation that's not perfect, but it's pretty good. This is one major factor in creating a global community: harnessing the innate empathy that we have as people for each other by fostering communication. Empathy actually doesn't work if you can't communicate and don't understand your fellow man and woman in other societies.

The other result of the law of accelerating returns derives from its 50 percent deflation rate; I'll talk a little bit about that. You can get the same information technology – you want to compute a million instructions, store a million bits of memory, communicate a million bits on the Internet, sequence a million base pairs of DNA – and it'll cost you half as much today as it did a year ago. It's 50 percent, or 55 percent or 45 percent, depending on exactly what you're measuring, but it's approximately a 50 percent deflation rate. It's not just these gadgets we carry around. [takes phone out of pocket] This is actually a billion times more powerful per constant dollar than the computer I used when I was a student. This 50 percent deflation rate adds up: it means a thousandfold increase in price-performance each decade.
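To make that compounding concrete, here is a minimal sketch of the arithmetic; the 50 percent figure is the talk's, and the code is just an illustration:

```python
# A minimal sketch of the compounding: a ~50 percent yearly deflation
# rate means price-performance doubles each year, which works out to
# roughly a thousandfold gain per decade.
cost_ratio_per_year = 0.5  # the same capability costs half as much each year

gain_per_decade = (1 / cost_ratio_per_year) ** 10
print(gain_per_decade)  # 1024.0 -- about a thousandfold
```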

It's continually influencing everything we care about. Health and medicine is becoming an information technology. The world of physical things... We're right before the storm with three-dimensional printing. You can now print out a lot of useful things. I can email you a book or a movie or a sound recording, whereas I used to have to send you a FedEx package. I can now email you a violin if you have a three-dimensional printer that can print that out. At Singularity University we have a project to print out houses for the developing world, one little module at a time. It's really the quiet before the storm, but the world of physical things is going to become an information technology.

We have a rising abundance. You can see that in statistics. According to the World Bank, deep poverty in Asia has been cut by 90 percent over the last 15 years. Europe, despite all the talk about an economic crisis, is far wealthier than it was a quarter of a century ago, let alone right after World War II. Greater abundance means less fighting over scarcity. The world was awash in scarcity, people did not have what they needed to survive, and that was one of the major factors in the high level of violence some centuries ago. We do have more sophisticated social and political systems, but that's fostered, I believe, by our increasing ability to communicate.

Let me talk to you about some of the trends that we've seen. I want to touch upon this new book – and I just got my first copy of it yesterday, but these will be available soon. It's about a topic I've been thinking about actually for 50 years. I've been thinking about thinking for a long time.


Let me ask you a question, and I think a number of people in this audience will be able to answer this. I sometimes ask audiences around the world, and almost nobody gets it, but this audience I think will get it. If you had a Jeopardy! query of the following, what would the response be? "A long tiresome speech delivered by a frothy pie topping." Raise your hand if you know what the correct response is.

Woman 1: Can you repeat it [xx 9:23]?

Ray: "A long tiresome speech delivered by a frothy pie topping." OK, there are a few hands, who probably paid attention to the televised tournament of Watson against the two best players of Jeopardy! in the world. The response is "What is a meringue harangue?" Watson got that correct. These two human players, who were the best in the world, did not get it correct. Watson got a lot of other impressive answers in search of a question correct. It made some stupid mistakes, so did the humans. It's not perfect.

What's generally not appreciated is that Watson got its knowledge not by being hand-coded by the engineers, they didn't sit down with a language like Lisp and write down, "OK there was a queen in Norway in the 16th century who had blonde braids," and so on. Watson actually got that knowledge by reading Wikipedia, and several other encyclopedias, 200 million pages of natural language documents.

People say, "Well, it's not operating at human levels," that's actually true. But it brings up a key strength of AI. Watson can read one of those pages. If you read a particular page, and let's say you knew nothing about the presidency, you might come to the conclusion that there's a 95 percent chance that Barack Obama is president. Watson would read it and conclude "Aha! There's a 58 percent chance that Barack Obama is president." You probably will read hundreds of pages that deal with that, and you'd be pretty confident as to who is the president. Watson read 200 million pages, tens of thousands of pages about the presidency, combines all of those probabilities using Bayes' and Shannon's theorem and comes up with the conclusion that there's over a 99 percent chance that Barack Obama is president. Or any other, even much more obscure, examples. And it got a higher score than the best two human players put together.

You and I could read Wikipedia too; I've actually estimated it would take about three years. By that time, Wikipedia will have doubled in size, but more importantly, I don't know about you, but I will have forgotten 98% of what I read. Watson actually remembers it all and has perfect recall, so whatever level it's understanding at – and it was understanding, because it was dealing with very convoluted forms of language; the queries themselves include metaphors and puns and riddles and similes, and very obscure forms of human language, so it kind of gets what's being talked about – it can apply that level of understanding, lower than humans today, but that's not a permanent gap, with perfect recall to 200 million pages. It can in fact review all of that in three seconds.

So that's part of the power of AI. Once the systems can read human language at human levels, which I believe will happen in 2029, it'll be an extremely powerful combination, it can read then – right now there's actually about 10 billion meaningful pages of information, less than you might think, on the web, but it's a lot more than we can master – it can read all of that, and really understand it, and recall all of it very quickly.

I think we're actually getting to an era very soon – and this is something I'm working on – of actually bringing natural language understanding to computers. You have a little bit of that in Siri. I think the natural language understanding in Siri needs to be strengthened, and it can be. I think it's pretty amazing that people are talking to their computers. People are constantly citing mistakes it made, but they keep using it. It's a little bit like the woman whose dog plays chess: people say, "Aren't you amazed that your dog can play chess?" and she says, "Yeah, but his endgame is weak."

[laughter]

Ray: It's pretty interesting that people are routinely talking to their computers and they keep doing it and they find it useful. It's only going to get more powerful as we go forward.

We have many other visceral examples of AI that are impressive, like the Google self-driving cars, where there's actually a plan to introduce them to the marketplace within five years. It's interesting that as these things happen – things that seemed incredible, that people would have called science fiction only two or three years ago – we very quickly accept them as part of everyday reality.

I've been thinking about how does the brain work, can we actually learn from that to help us with building more powerful AI? Ultimately with the goal to make ourselves smarter. We are smarter already with our brain extenders. When the SOPA strike occurred and Wikipedia and Google and a few other websites went on strike for a day, I felt like a part of my brain was going on strike.

[laughter]

Ray: Of course there was then a way to get around it, but I didn't know that at first. I actually didn't learn about that till that day, so the idea that I actually would be without these brain extenders was horrifying. It really shows how much these technologies are being used to make ourselves smarter. It also shows the tremendous political power of these websites, because just the threat to make these services unavailable was enough to completely kill this piece of legislation which was sailing to adoption.

The systems we have today are showing very limited intelligence, but that's going to change rapidly. It's only very recently that we have enough information about the brain, that we can actually see inside the brain with enough specificity, to figure out what's going on. One of the interesting discoveries that I made was that the techniques that we've evolved in the field of AI, the ones that have worked the best and have taken over the field – and I'll come back and talk more about that – are the same as, or at least mathematically very, very similar to, the techniques that biological evolution evolved for the brain itself.

It's not because the AI field was copying the brain, because we didn't really have enough insight into the brain, nor could we see inside the brain, up until very recently. In fact some of the best evidence I have for my thesis as to how the neocortex works – that's the part of the brain where we do our thinking – came really in the final months that I was writing the book. I had some evidence, and as I was writing, new research was coming out that was the best research. We can really see now individual interneuronal connections being formed in real time, and firing in real time. We can see our brains create our thoughts. We can see our thoughts create our brain, because you very much are what you think, so be careful who you hang out with.

But it's very much the case that we create our own brain. If one were to create an absolutely perfect simulation of the neocortex, it wouldn't do anything unless you actually educated it. That's actually a key part of the engineering of an AI, and of course we put a lot of effort into that with our biological brains. It's not just formal education: everything a child does, from birth until they're capable of doing things, is the process of creating intelligence. The intelligence of our neocortex is an elaborate hierarchy, as I will talk about, and we create the links of that hierarchy painstakingly, link by link. There are actually many trillions of these links. We have 300 million pattern-recognition modules, and we wire them ourselves based on what we're thinking.

This is all subject also to the law of accelerating returns. The hardware and the software of human intelligence is all growing exponentially. These are some of the points I wanted to make. [slide showing complex hand-drawn outline] Are there any questions on any of this?

[laughter]

Ray: I'm going to cover a few of the points in more detail. Many of you have heard me speak before, so I'll try not to repeat myself.

The law of accelerating returns does bear emphasizing. It's not just a superficial exponential trend. I still get objections: "Oh, exponentials don't go on forever." That's true, in fact even these exponentials don't go on forever, but if we look at the physics of the next paradigm of computing and communications, based on molecular computing, it'll keep this acceleration going well into this century, to a point where these technologies will be trillions of times more powerful than the human brain.

This is actually the first graph I had, in 1981 – well I didn't have it through 2009, I had it through 1980. This is the power of computers, instructions per second per constant dollar. On this graph it's in 2009 dollars. Actually Moore's Law at that point was only a decade old, maybe 12, 13 years old. People go, "Oh, Moore's Law, Moore's Law's going to come to an end," but Moore's Law was actually the fifth, not the first paradigm to bring exponential growth to computing. We're already starting the sixth paradigm, which is three-dimensional computing at the molecular level. I don't want to get bogged down describing how that works, but there are different approaches, already 30 percent of memory chips are three-dimensional.

This is a logarithmic scale, so this represents a trillionfold increase in the amount of computation you can get for the same cost compared to the 1890 American census. Several billionfold just since I was a student. Look at how smooth a trajectory this is. This is actually a curve I laid out in 1981 through 2050, and we're right where we should be in 2012. It's been remarkably predictable.

The cost of a transistor has been coming down very smoothly and exponentially: a dollar bought one transistor in 1968, and buys several billion today. Again, look at how smooth a trajectory that is. You don't see any evidence of anything – recessions, wars. Transistors are actually better, they're faster, so the cost of a transistor cycle comes down by half every year; that's a 50 percent deflation rate.

Economists worry that with a 50 percent deflation rate, you're not going to actually double your consumption. You'll buy more – that's Economics 101, if it's half as expensive – but are you going to double your consumption? There's only so much of it you need. Yet year after year, we actually more than doubled our consumption. There's been 18 percent growth per year in constant currency in every form of information technology, despite the fact that you can get twice as much of it each year for the same cost.
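As a rough check of those figures – both numbers are the talk's; the arithmetic is just spelled out:

```python
# If the unit price halves each year while constant-dollar revenue still
# grows 18 percent, the number of units consumed must grow ~2.4x a year --
# consumption really does more than double.
price_factor = 0.5     # a unit costs half of what it did last year
revenue_growth = 1.18  # constant-currency revenue vs. last year

unit_growth = revenue_growth / price_factor
print(unit_growth)  # 2.36
```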

The reason for that is, as price-performance reaches certain levels, whole new applications explode off the landscape. NIH spent several billion dollars to collect the first genome, and then there was a project that did it for under a billion dollars. Today a genome costs about 10,000 dollars.

Actually it reminds me of a comment on Linda [Avey]'s presentation. She said it was impractical, if you were to specify 20 different characteristics you'd like in your newborn child, to actually find... the combinatorial explosion of that means you're never going to find the exact egg and sperm that would match. But that doesn't mean you can't modify the genes in that egg and sperm. We're actually doing that already.

I'm involved with a project where we do that not just in a baby, but in a baby boomer. We actually take cells out of a person who's missing a gene – if you're missing this gene you're very likely to get pulmonary hypertension, a terminal disease caused by the lack of that gene. So we take out their lung cells, scrape them out of their throat, and add this gene in a petri dish, using a new form of gene therapy. That doesn't help the patient by itself, so then we verify that it got done correctly with a very high-powered microscope – that's another new technology – replicate the cells several millionfold – another new technology – and inject them back into the body. They end up back in the lungs because the body recognizes them as lung cells. Now the patient has lung cells with their own DNA but with the gene they were missing, the one they needed to prevent this disease. It has actually cured this disease, and it's continuing to undergo human trials. There are many other examples like that in the development and testing pipeline already.

Bits of memory shipped more than double each year; it's 18 percent growth in constant currency despite the fact that you can get twice as many each year for the same cost. It wasn't that Mark Zuckerberg wasn't around, or that nobody had the bright idea of a social network 12 years ago – it just wasn't cost-effective. It wasn't cost-effective to do search engines 20 years ago. When these applications become cost-effective they explode and add to our demand for these technologies. NIH is now collecting first a thousand genomes, then a million genomes, to have a massive database relating gene states to disease states. They wouldn't do that if it still cost a billion dollars each. At an estimated average cost of a thousand dollars each, that's only a billion dollars for a million of them.

Time Magazine ran a cover story recently on the law of accelerating returns. They insisted on putting the particular computer they were fond of on the graph. I didn't actually know if it would be on the graph. I knew it wouldn't be above the line, but sometimes people will put out something that's not cost-effective; it won't last in the marketplace, unless it has some other features, like nice furniture or something. It's right on the curve, which is pretty remarkable given that this is a curve that I laid out 30 years ago. It's been very predictable.

We're doubling the number of bits we move around wirelessly. That's the exponential growth of communication that I referred to, which I believe is highly democratizing. This was Morse code or AM radio over a century ago, and it's 4G networks today, but it's remarkable how smooth and predictable this is. You don't see little things like World War I or World War II or the Cold War or the Great Depression. Internet data traffic is doubling every year.

Biology... We could of course spend a whole conference on this, and there are many conferences on this topic. MIT has its first new department in 39 years on this idea, that biology is now an information technology, it's now computer science. Basically, that's recognizing biology for what it is, it is an information process. It starts with our genes, genes are little software programs. They're linear sequences of data. They're not written in C++. I think they're written in COBOL.

[laughter]

Ray: Actually they're written in amino acid sequences, which fold up into three-dimensional proteins. We're learning to simulate those and predict their properties, and that's gearing up at an exponential pace.

This is an actual violin that was on the cover of the Economist, printed on a three-dimensional printer. There are still some limitations. The precision is measured in some number of microns; that sounds pretty good, but we really want to get it into the tens or hundreds of nanometers range. That's coming. They're still a little expensive, but they're getting cheaper. This is going to be a major revolution in manufacturing over the next five years, say.

Let me talk a little bit about the brain and AI. Sometimes the three overlapping revolutions that pertain to the law of accelerating returns go by the letters G, N, R: genetics, nanotechnology, robotics. They're all significant, but the one that really is fueling the singularity, and a very profound transformation in our society, is AI.

Because intelligence is what has enabled us to do all the things that we're able to do. So it would be useful to understand how intelligence works. For one thing it would give us more insight into ourselves. That has been a major goal, perhaps the major goal, of the arts and the sciences ever since we invented those fields. It would also enable us to create AI at human levels, then apply the vast scale of computers to make a very powerful combination, then use that to extend our own mental reach. As I argue, we already do.

I had this insight actually 50 years ago, when I was 14. I wrote a paper, for the music project that was mentioned, that I submitted to the Westinghouse Science Talent Search, and I said the essence of the human brain is pattern recognition – that's what it does really well. It is a pattern-recognition engine, and it's actually not very good at doing logical thinking. We actually apply the ability to recognize patterns to try to do logic, which is a very inefficient method. Computers can do logic much more directly and efficiently, but we have a much more flexible form of intelligence based on patterns. And I continue to believe that's the case.

Where this takes place is a region called the neocortex, which is on the outside of the brain. "Neocortex" means "new rind"; it's literally a covering of the brain. It's very thin, about the thickness of 10 sheets of paper, or a paper napkin. It's about this big [holds hands slightly past shoulder width] if you were to stretch it out. But it has many convolutions, in order to increase its surface area. That's why the brain is convoluted the way it is.

It first emerged in mammals. These young rodents who were the first mammals had a neocortex the size of a postage stamp, the thickness of a postage stamp, it had no convolutions, it was just this flat thing around the old brain. But it allowed these early mammals to think hierarchically. They could think in patterns of patterns of patterns. They could have an idea, combine some other ideas, call that an idea and give it a symbol, then use that symbol with other symbols to create a higher-level idea, and build up this whole hierarchy of symbols.

It was limited initially, so there wasn't a lot they could do with it, but they could learn new skills that had some complexity and some different levels to it, that had a hierarchy of structures within that skill. It allowed them to develop new behaviors very quickly. If one were to stumble accidentally, or invent – stumbling accidentally is a form of invention – on a new idea that was useful, it would spread virally to a whole community of neocortex-bearing animals, by copying each other and teaching each other that new skill. As we developed language, we had even more powerful ways of spreading these ideas.

Animals without a neocortex were able to learn also, but not within a single lifetime. They had to learn through biological evolution, which would evolve some new behavior over thousands of generations. And the behaviors were genetically determined. Generally, that was actually fast enough, because environments change very slowly. It might take 30,000 or 40,000 years for the environment to change, and over that period of time, through biological evolution, they could evolve gradual adaptations in their behavior to accommodate it.

65 million years ago there was something called the Cretaceous Extinction Event. We think it had to do with a meteor. It was a sudden, cataclysmic change in the environment, it was very quick. We can see the evidence of that today, everywhere around the world there's this layer of sediment showing very radical changes in the environment that we can date to 65 million years ago. Animals without a neocortex couldn't adapt quickly enough. Most of those species died out. That was when the mammals took over, with their ability to adapt very quickly.

Biological evolution noticed this and started growing the neocortex. It got bigger and bigger, and it developed these convolutions to increase its surface area. By the time it got to Homo sapiens, it was actually 80 percent of the brain by weight. It is really most of our brain. It still sits over the old brain; the old brain provides motivations in terms of desire and fear, but those get modulated by the neocortex, like a very intelligent bureaucracy that can translate fear or sexual desire into something else – it's called "sublimation." The neocortex is where we do our thinking.

The big innovation in Homo sapiens is we have a larger forehead, so we can squeeze in more neocortex. If you look at a primate's brain, it looks very similar, but they don't have this large frontal cortex. The frontal cortex is just a larger quantity of neocortex, it's not qualitatively different. But out of that greater quantity came a qualitative change in human society. Because with that greater amount of neocortex we could continue going up the conceptual ladder and create higher levels of abstraction. We created such inventions as language, and art, and science, and music, something that no other primate or other animal had done. So with greater quantity came this qualitative change.

I describe in the book that this neocortex has about 300 million modules, each of which can recognize a pattern. For example, I have some pattern-recognizers that look around and go, "Aha, a crossbar and a capital A," [points at the "SINGULARITY SUMMIT" banner] and, "Aha, there's a concave region facing north in that U, there's a concave region facing south in the capital A," and it's detecting these features, and that's all these little pattern-recognizers do.

That feeds up to a higher level, I'll show you a picture of that. So there'll be a higher-level recognizer that goes, "Aha, a capital A." That feeds up to a higher level, and that pattern-recognizer goes, "Aha, the written word 'apple'." In another region of the neocortex, there would be another pattern-recognizer that would go, "Aha, an actual apple," or "Aha, someone said the word 'apple'." Then it feeds up to a higher level, as they combine inputs from different senses, those are called the association areas.

It keeps on getting more and more abstract, go up another 20 or 30 levels, and there's a pattern-recognizer that goes, "Aha, she's pretty," or "That's funny," "That was ironic." You probably think that those are much more complex than the ones that just recognize edges, and shapes, and concavities, and crossbars, and capital A's, they're actually the same, they just exist at different levels in this conceptual hierarchy.
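As a toy illustration of that hierarchy – this is a hypothetical sketch of the idea, not Kurzweil's actual 300-million-module model – each module recognizes one pattern over the outputs of the modules below it:

```python
# A hypothetical toy sketch of the hierarchy described above: every module
# recognizes one pattern over the outputs of the modules below it.
class PatternModule:
    def __init__(self, name, children, threshold=1.0):
        self.name = name
        self.children = children    # lower-level modules this one listens to
        self.threshold = threshold  # fraction of children that must fire

    def fires(self, features):
        if not self.children:       # leaf: a raw feature detector
            return self.name in features
        hits = sum(child.fires(features) for child in self.children)
        return hits / len(self.children) >= self.threshold

# level 1: low-level visual features
crossbar = PatternModule("crossbar", [])
apex = PatternModule("apex", [])
# level 2: a letter is a pattern of features
letter_A = PatternModule("A", [crossbar, apex])
# level 3: a word is a pattern of letters (other letters omitted for brevity)
word_apple = PatternModule("apple", [letter_A])

print(word_apple.fires({"crossbar", "apex"}))  # True: recognition cascades up
```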

Some of the best evidence for this is the tremendous interchangeability of these areas. Neuroscience has been very fond of talking about specialization, "OK this little region here, the fusiform gyrus, that's where faces are recognized, and V1, that's where the optic nerve feeds in, and very basic shapes and shading and edges are detected, and V2 where it recognizes somewhat higher-level features of visual images." That's true in the quote "normal" way in which the information flows, but when that normal flow is disrupted or changed, because of disability or disease or accident, the information flows a completely different way, and one region can take over the function of another.

One really good piece of evidence came out just as the book was going to press, when neuroscientists looked at what happens to these visual areas – because a very high percentage of the neocortex deals with visual images – in someone who's congenitally blind. Do they sit there and do nothing? Well, it turns out that V1, which is supposedly this area devoted to low-level features, edges and shading, is actually reassigned, at a very high level, to dealing with high-level language concepts. So it shows a tremendous interchangeability.

We see this plasticity: if one area is damaged, another can take over. There are mitigating factors, in that the other area is not necessarily just sitting there waiting to take over; it's already doing things, and it's going to keep some of its patterns. It'll give up some of its redundancy, because redundancy is a major factor. But there's tremendous interchangeability of these patterns.

Another very powerful piece of evidence is that the wiring within these modules – each of which is about 100 neurons – is stable throughout life. It's not plastic; there's no change; it's a stable unit of 100 neurons. So-called neural nets are based on the idea that the key learning unit is one neuron, but that's always struck me as much too simplistic, and neural nets have always been limited in their performance. This module is much more sophisticated, because it can recognize a complex pattern, and it can also wire itself up and down the conceptual hierarchy to the other modules it wants to connect itself to.

What patterns does it want to connect itself to? Well, it depends on what you're experiencing. If you're a child learning Chinese, you're going to be learning those patterns, versus a child learning English. We create that hierarchy entirely based on our experience. What the research shows is that the wiring between the modules is completely plastic, and that is what we're creating; that's where we learn everything that we know. So be careful who you hang out with, because you not only are what you eat, you are what you think.

We are simulating the brain. There are simulation projects; I think the purpose of those is not that that's the way to create AI, but that it's really the way to confirm these models. Like the Blue Brain Project, which is simulating larger and larger regions of the neocortex, and trying to actually show where the salient information processing is taking place. Some of the evidence for the modularity of the neocortex comes from this project.

I've had discussions with [Henry] Markram that, even if you perfectly simulate the neocortex, it's not going to do anything, just as a newborn child has a perfect neocortex but doesn't do very much until it learns its lessons. It takes actually a long time for it to get to the point where you could hold a conversation about, let's say, an adult subject.

We learn approximately one conceptual level at a time. That's true as a child, as we're struggling with basic shapes and basic skills. Our physical skills and movements are based on the same type of hierarchical learning in the neocortex.

In a computer we can create a connection from one module to another module just with a link. In the brain it actually has to be a physical wire, a biological wire, an axon connected to a dendrite. It's not always possible to just grow one from here to some place that might be inches away in the brain. So there's actually a grid, and it looks very, very orderly. That was not known up until a few months ago, that we had this very orderly grid of basically connections in waiting.

If you have a module and you want to create a connection, because its pattern is now part of some higher-level pattern, there's probably one of these connections-in-waiting nearby, and maybe a connection going the other way, because it's very much like a two-dimensional Manhattan street map. That waiting connection can then be harnessed to make the final link, and the ones that are never used get pruned back. This is all research that's just come out recently.

If you were to actually look inside the brain, and had scanning technology fine enough to actually see the firing of these modules, you really wouldn't be able to know what's going on. Because, OK, you see this module firing, that pattern has been recognized, thought about. What does that pattern mean? Oh, well, that pattern is a pattern of all these things feeding into it. So just look at those. Well you look at those, what does that mean? Well that's just a pattern of the ones feeding into it. That whole hierarchy might be thirty levels deep, and if you look at all the firings it looks something like this, in fact a lot more complex. You would have to have a copy of the entire neocortex to interpret what was going on.

But we are getting more and more information. There are different forms of brain scanning. All of them are scaling up exponentially in spatial-temporal resolution. The amount of data we're getting on the brain is doubling every year. The scale and precision of the simulations is doubling every year.

Most startling is... I describe how these modules recognize patterns. It's a little bit different from the technique described in [Jeff] Hawkins's book On Intelligence, which is also an elegant book that talks about the hierarchical organization of the neocortex. I think it's unarguable that the neocortex is hierarchical. The world is hierarchical, and we're able to deal with those hierarchies; you have to have a neocortex to deal with that, so clearly the neocortex is organized in a hierarchical fashion.

The technique that's used is actually mathematically very similar to something called hidden Markov models – in fact, hierarchical hidden Markov models. This is a technique that I and others pioneered in the 80s and 90s in speech recognition and early natural language recognition. It's interesting, because we had no idea how the brain worked, but these kinds of self-organizing techniques seemed to work.
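For readers unfamiliar with the technique, here is a minimal sketch of the forward algorithm for a plain (non-hierarchical) hidden Markov model, the probabilistic building block of the hierarchical variant mentioned; the states, probabilities, and observations are illustrative toy values, not anything from an actual speech system:

```python
# Forward algorithm for a toy hidden Markov model: computes the probability
# of an observation sequence by summing over all hidden state paths.
def forward(observations, states, start_p, trans_p, emit_p):
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states)
               * emit_p[s][obs]
            for s in states
        }
    return sum(alpha.values())

states = ["vowel", "consonant"]
start_p = {"vowel": 0.4, "consonant": 0.6}
trans_p = {"vowel": {"vowel": 0.2, "consonant": 0.8},
           "consonant": {"vowel": 0.7, "consonant": 0.3}}
emit_p = {"vowel": {"a": 0.8, "t": 0.2},
          "consonant": {"a": 0.1, "t": 0.9}}

print(forward(["t", "a", "t"], states, start_p, trans_p, emit_p))  # ~0.25
```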

It's also interesting when people dismiss things like Watson and say, "Well, it doesn't really have any real understanding, because it's just doing statistical processing of language," by which people assume it means all it's doing is considering statistics of different word sequences, but that's not actually what it's doing. It has a hierarchy of self-organizing pattern-recognizers that have embedded parameters, all of which are probabilities. The understanding is embedded in that vast three-dimensional network of probabilities.

So it's statistical in that sense, but that's where the understanding lies. If that sort of statistical understanding does not present true understanding of language or knowledge, then human beings have no understanding either, because that is how the human brain works. We resolve ambiguities in exactly the same way that Watson does.

Actually, it's hard to find people who fully understand Watson, because even the people who were in charge of the project were really in charge of the framework, which is called UIMA – a framework for plugging in lots of different modules that do natural language understanding, assessing what they're capable of, and combining dozens of different modules to get one result. But it's actually in those modules where the natural language understanding is being done. The techniques go by different names, but they're all mathematically very similar to what I'm describing.

In the couple of minutes I have left – I want to leave some time for questions – I want to return to what I think was the very insightful talk and book that Dr. Pinker presented, and talk about the long-term improvement in human lives, which is very easy to lose sight of. In fact, people lose sight of the improvements that have taken place in their own lives.

I talk about exponential growth of information technology to lots of different audiences. Some are immediately sympathetic. Some are resistant. Ironically, the audiences that are most receptive are the youngest audiences – ironic because they haven't been around that long to actually see the change. I gave a presentation recently to junior high school kids, 12- and 13-year-old science award winners who came together for a conference. I presented some ideas, and they were coming up afterward saying, "Yeah, things are so different today! When I was 8, I couldn't do such-and-so."

[laughter]

Ray: Let's go back a couple hundred years. This is the world in two dimensions. There are actually dozens of graphs like this, showing different things. We could show violence too, and that would actually be an interesting thing to look at. This one is health and wealth. Wealth is per capita income, expressed in 2009 dollars, not 1800 dollars, because nobody would know what that means. It was hundreds of dollars per person. There were richer countries, there were poorer countries. The big red circle was China – keep an eye on China because it does some interesting things.

The world was pretty poor. There was no social safety net in the United States until 1930; the very first part of it was put in with Social Security. Why didn't that take place before 1930? Was there not enough liberalism before that time? No – there wasn't enough wealth to afford these kinds of programs. We argue vehemently today about where that line should be, but these programs get more and more comprehensive as the world gets wealthier.

On the y-axis is life expectancy, which was in the 20s and 30s. Worldwide average was 37. Schubert and Mozart died in their 30s, and that was typical. There was no understanding of the germ theory of disease, no sanitation, no antibiotics. Life was short, brutish, and pretty violent, although not as violent as say the 16th century.

Let's see what happened. [graph starts ticking forward starting at 1800] This was the early part of the Industrial Revolution, started in the textile industry in England. A few countries are experimenting with it. As we get to the 20th century, you'll see a wind that carries all these countries toward the upper-right-hand part of the graph. The have-have-not divide does not go away. The wealthier countries are better off than the poorer countries, but at the end of this process, the countries that are worst off are far better off than the countries that were best off at the beginning of the process.

And I shouldn't say "end of the process," because it's not ending – it's going into high gear, particularly now that more and more industries and technologies are becoming information technologies. Not every technology is an information technology; all of health and medicine was not, up until just recently.

We see the same thing in education. We are destroying jobs at the bottom of the skill ladder and adding new jobs at the top of the skill ladder. We're adding more education. We spend ten times as much per child on K-12 education as we did a hundred years ago. We had 50,000 college students in 1870; we have over 10 million today. Average years of schooling: you can see the have-have-not divide, but it's tripled in the developing world and doubled in the developed world. There's a constant gap, but they're both moving in the right direction.

This is what we've accomplished in longevity, and in related health technologies, before health and medicine became an information technology. As we master the information processes underlying our biology, as we can deliver biotechnology with nanotechnology, as we increase our intelligence by merging with artificial intelligence, this will go into high gear.

These technologies are a double-edged sword. We have lived, since I was born, in a world where there is an existential meltdown scenario that didn't exist before my time. Technology's been a double-edged sword ever since fire, which kept us warm and cooked our food but also burned down our villages. I'm accused of being an optimist, and I'm optimistic we'll make it through. I'm not as optimistic that we'll avoid all painful scenarios, but I do think intelligence is a good thing to bring greater health, wealth, and other harbingers of happiness to humankind.

Thank you very much.

[applause]

[Q&A period begins]

Man 1: Thanks so much for your talk [xx 51:17] revolutionary. I'm tempted to ask two questions. One, we're both in our 60s, and the possibility that we'll live to the singularity and live forever, blah blah blah...

[laughter]

Man 1: Do you have a fallback position if you don't quite make it? That's A. And what do you think of the Global Consciousness Project at Princeton?

Ray: I'll answer the first question. I've written three health books. My main motivation was, I figured if I wrote them, I would have to follow them, because I'd be shamed into following them.

[laughter]

Ray: They are a wake-up call for our generation, because I think the baby boomer generation is not assured of getting through. Before 2030, we'll be adding more than a year every year to our life expectancy – which is not a guarantee. Life expectancy is a funny concept; it's based entirely on the past. Life insurance statisticians look at everything that's gone on up until now, and the assumption is that nothing new is going to happen, but that's a ridiculous assumption. We're going to get to the point fairly soon, within about 15 years, where scientific progress is fast enough that we're adding more time than is going by.
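A toy illustration of that "more time than is going by" point, with made-up numbers:

```python
# If research adds remaining life expectancy faster than calendar time
# passes, remaining expectancy grows instead of shrinking. The numbers
# here are illustrative, not actuarial.
remaining = 20.0     # years of remaining life expectancy today
gain_per_year = 1.2  # years of expectancy added per calendar year

for year in range(1, 6):
    remaining += gain_per_year - 1  # a year passes; research adds a bit more
    print(f"year {year}: {remaining:.1f} years remaining")
```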

But even if, through some calculation, you get a life expectancy of 50 years remaining, that doesn't mean you couldn't be hit by the proverbial bus tomorrow. On the other hand, we're doing something about that too, with the Google self-driving cars and other technologies that will reduce accidents.

[laughter]

Ray: It's a wake-up call to my generation: you really can overcome most diseases. Heart disease, with rare exceptions, you don't really need to succumb to. We know how to do it, but it's not simple yet. Eventually, I think through biotech we'll have much easier ways of overcoming these diseases. Cancer is a little bit harder to circumvent today, but I'm involved in some cancer projects, and I'd be surprised if we didn't have dramatic breakthroughs over the next decade or somewhat sooner.

You don't want to be the first person in line to not get into the theater.

[laughter]

Ray: That's why it's a wake-up call to the baby boomer generation.

My plan A is to follow these health recommendations – which is not a fixed set. Part of it is to keep an open mind and really keep my ear to the ground and keep reading everything that this group and your colleagues are producing in terms of new scientific insights, new insights into myself. It is an evolving program.

Plan B is the same thing. Plan C is the same thing. Plan D is cryonics, which I'm not enthusiastic about because I have enough trouble keeping on top of my affairs when I'm alive and kicking.

[laughter]

Ray: Their argument that it's better than the alternative is hard to refute.

[laughter]

Ray: So that's perhaps the back-up plan. But I actually think we have a very good chance of making it through. A lot of people in the audience here are in much younger generations. I think you're very well-positioned to get to that point. People say, "Well, how long do you think you can live with these things? Is it going to add 30 years, 40 years to life expectancy?" Suppose that was the case. Do you think nothing is going to happen during those 30 or 40 years to push it further? We're going to get to a point where it's being pushed out so fast that we can look forward to an indefinite future. Then people worry about being bored, but I think if you look at everything else that's going to go on, we won't be bored.

[laughter]

[next question]

Man 2: Thank you for the inspiring talk. I have a question related to the book "On Intelligence" by Jeff Hawkins. He had an interesting and simple point about intelligence: that you can use this framework of cortical algorithms to analyze streams of data, and you can do helpful things with intelligence even with unsentient algorithms and programs. I mean, you can just take a stream of data, apply an algorithm that mimics the work of our brains, and get intelligent results out of that data.

So my question to you is, do you think we really need sentient superhuman intelligence to solve the big problems in our society, like medicine, hunger, and other things? Can we use these neocortex-like algorithms to solve particular specific problems, without having sentient superhuman creatures, but still solving our problems?

Ray: Yes.

[laughter]

Ray: We've always used the most advanced technology, and it doesn't have to be Turing-test-capable AI to be useful. We have useful AI today. Search engines are far from Turing-test capable, but they're incredibly useful. I think in the next step, AI is going to have some level of natural language understanding. Watson has demonstrated to a mass audience that that's feasible, and there are a lot of other demonstrations as well.

That actually is a good example of less-than-human natural language understanding being applied at scale: taking the tentative insights from each page, applying them across 200 million pages, and validly using probabilistic methods like Bayes' theorem and Shannon's theorem to come up with very accurate insights. I think we're going to be bringing natural language understanding to search engines.

My vision is that we're going to be in augmented reality all the time; things like Google Glass will put us in an augmented reality environment. You'll look at someone, and it'll remind you who they are. But I envision these so-called search engines – we may call them something else – as friendly assistants, listening to everything you say, and hear, and read, and write, and constantly popping up with information you find useful. Now, if it's popping up with stuff you don't find useful, you'll turn it off.

Just as people continue to use Siri, people will actually use these, because they will help guide you through the day. They'll say, "You know, you were just talking yesterday about a certain type of medical research on Parkinson's using this particular gene – well, someone just came out with a paper on that." Or you're having a conversation, and it can see you're trying to search for the name of an actress, and you'll just get used to her name and picture popping up without you ever asking for it. So anticipating your questions before you've asked them, or before you've even realized that it's a question, I think is where we're going. You don't actually have to have fully human intelligence to do that.

[next question]

Woman 1: You talk about technology making us smarter. One way to think about it, we now outsource our phone numbers to our iPhones, and our iPhones remember for us 400,000 phone numbers. But where is the line between where it makes us smarter, helping us find information, and maybe somewhat dumber? Where is the line? How do we apply it to raising our children, where do we limit using iPads for 5-year-olds, or do we just give them iPads and let them search for whatever they feel like? Where is that line between what do we need to learn to continue being intelligent, and where do we just let search engines answer all our questions? Is there a line?

Ray: The goal of human life is not just to find answers to questions, but actually to create new knowledge: to create a song, or write a novel, or a good advertisement, or solve problems. Actually, a whole other theme I'm interested in is learning by doing. I think we should bring entrepreneurship into the schools, because the goal of education is not just to force-feed facts to kids, but actually to teach them how to solve problems, overcome suffering, create creations, add new knowledge.

These tools are extensions of our brain. Technology's always been that way. We're the only species that compensates for our weaknesses with our tools. Ever since we fashioned a stick to reach a higher branch, we've been extending our physical and our mental reach. We're already certainly doing that, I've been managing work groups for 45 years, and I can have a group now of 3 people in a matter of weeks do what used to take 100 people years. We can absolutely see that. You don't see that in economic statistics, because we factor it out. We still get one person-day of work from one person in a day. The fact that we actually expect more is factored out of the equation.

These are tools to make us smarter, and actually have us move up Maslow's hierarchy. To go back to Pinker, hundreds of years ago, we were fighting over very basic needs, food and shelter and even air, and so on. We're now struggling for much higher levels of satisfaction in trying to create intellectual creations, and these tools make us smarter, at every age. Just the other day I was sitting next to a 9-month-old on a plane who spent the hour with her iPad and these apps, very happily, and I think creatively, interacting with – they're called baby apps, it's a whole category.

[laughter]

Ray: Obviously as parents you don't want to overindulge any particular type of experience, so I'm not advocating kids spend all their time on their computers. I think basically these are extending our horizons.

[next question]

Man 3: The law of accelerating returns seems to apply to a lot of technology areas, but one area it almost certainly doesn't apply to is of course career trajectory of human decision-makers. It's still essentially based on the kind of slow apprenticeship within whatever political machine exists in a society. And this is of course a problem if you have accelerating technologies, because the decision-maker who finally gets to the top level and gets to make the decisions, they were brought up several technology generations back. So what do you think we can do about this growing discrepancy between our decision-makers and the technology we need to make decisions about?

Ray: I think what you can do about it, and what we actually are doing about it, is not to have top-down decision-making, where one person, who may have their own limited perspectives and idiosyncratic views, is making decisions. Even in the political process it's greatly shaped by widely distributed opinions being formed through social networks. Viewpoints get shaped by millions of people very quickly. Overall I think that leads to better decision-making.

There are ways to improve on that. There's this whole thesis about market-based decisions, that markets are very accurate: market-based approaches to predicting, say, the 2004 and 2008 elections were very accurate, more accurate than the polls. You can actually get very good predictions by letting thousands of people predict something – even something as simple as how many jellybeans are in a big jar. They used this kind of collaborative, market-based decision-making in a group as an experiment, and it turned out to be extremely accurate.
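A quick simulated sketch of that jellybean effect – the numbers are made up, not from the experiment: individual guesses are noisy, but their average lands close to the true count.

```python
import random

random.seed(0)
true_count = 850
# a thousand noisy individual guesses around the true count
guesses = [random.gauss(true_count, 300) for _ in range(1000)]

mean_guess = sum(guesses) / len(guesses)
print(round(mean_guess))  # close to 850 -- far better than a typical lone guess
```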

There are ways to improve even on that: collaborative decision-making, where you have lots of different people with different perspectives solve one problem. There was recently a mathematical problem that had stumped the best mathematicians in the world for a century, where several hundred mathematicians who had never met each other used collaborative problem-solving software and solved it in a matter of weeks. It was very clear, if you traced back what happened, that it never would have happened without that collaborative decision-making.

So there are ways of having groups make decisions that are far superior to individuals. There are counterexamples, I mean a lynch mob is a counterexample. If you look at mass-communication like social networks, you see some lynch mob behavior, because there's lots of bullying on the web. But you also see enlightened, collaborative decision-making and formation of viewpoints as well. I actually feel that is a more powerful factor than the lynch mob behavior, which I think is ultimately discouraged in the social network world, but I think we can fashion the software to improve on that. I think Professor Pinker would agree on that impact.

I think very few politicians today would ignore the power of the social networks, and they're very powerful even in countries that are nominally trying to control them. There are 200 million blogs in China. There are tens of millions of blogs in Iran. And even though there are controls of various kinds, it's actually a very powerful democratizing force.

Thank you very much.

[Applause]