
Hidden In Plain Sight 4: The uncertain universe 

by Andrew Thomas

You have 228 highlighted passages

You have 30 notes

Last annotated on March 3, 2015

PREFACE 

This book considers the exciting — and perhaps ultimately disappointing — story of 2014, showing that not all of the 20th century's lessons about uncertainty have been learnt.  Read more at location 29

1.  LAS VEGAS 

According to Packard: "I was interested in the intellectual interest of conquering randomness, or understanding exactly what were the limits of randomness and predictability." However, there was also another — less worthy — motivation: "It seemed like a great idea to rip off casinos, who get so much pleasure out of ripping off everyone else."  Read more at location 56

As Farmer explained: "Roulette is a physical system, and if you can measure the initial position and velocity of the ball, and you know the forces acting on it, you should be able to predict what is going to happen." Analysis of the movie footage allowed them to accurately measure the ball's velocity and deceleration. The effect of friction and drag on the ball had to be determined. They could also measure scatter and bounce. Scatter occurred when the ball hit the diamonds on the sloping sides of the wheel, and bounce occurred when the ball hopped from cup-to-cup before finally coming to rest. The final program included a set of equations which resembled the equations used by NASA for landing spacecraft on the Moon. But how could complex calculations be performed in the middle of a busy casino?  Read more at location 69
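The kind of calculation Farmer describes can be sketched with a deliberately simplified model: if the ball decelerates at a constant rate, elementary kinematics tells you how much further it travels before stopping. The real program modelled friction, drag, scatter and bounce; this toy version ignores all of that, and every name in it is invented for illustration:

```python
import math

def predict_rest_angle(theta0, omega0, decel):
    """Predict the angle (radians) at which a decelerating ball comes to rest.

    theta0: initial angular position (radians)
    omega0: initial angular velocity (radians/second)
    decel:  assumed-constant angular deceleration (radians/second^2)

    With constant deceleration, standard kinematics says the ball travels
    omega0^2 / (2 * decel) radians before stopping; the result is wrapped
    back into the range [0, 2*pi).
    """
    travel = omega0 ** 2 / (2 * decel)
    return (theta0 + travel) % (2 * math.pi)

def rest_octant(theta0, omega0, decel):
    """Map the predicted rest angle onto one of 8 equal octants (0-7),
    matching the octant-level bets described in the book."""
    angle = predict_rest_angle(theta0, omega0, decel)
    return int(angle // (math.pi / 4))
```

Measure the ball's state mid-spin, and the octant follows deterministically — which is the whole point of the clockwork-universe argument in the next section.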

In binary, the three vibrators could encode a digit from 1 to 8 (001, 010, 011, etc.), thus indicating the octant on which the bet should be placed.  Read more at location 80
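The three-bit scheme is easy to sketch. The book does not say how the number 8 was signalled, so mapping it to 000 is an assumption here, as is the function name:

```python
def octant_to_signals(octant):
    """Encode an octant number 1-8 as three on/off vibrator signals.

    The three signals are simply the three bits of the octant number,
    with 8 transmitted as 000 (an assumed mapping -- the book only lists
    001, 010, 011, etc.).
    """
    if not 1 <= octant <= 8:
        raise ValueError("octant must be 1-8")
    value = octant % 8  # 8 -> 0, i.e. signals 000
    return tuple((value >> shift) & 1 for shift in (2, 1, 0))
```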

"The predictions were right on target. Playing quarters, we recouped our losses and stacked several hundred dollars in chips in front of us. There was a gut rush of excitement in seeing everything fall into place. After all the time spent testing and troubleshooting it, the computer was finally up and running perfectly. I no longer had the slightest doubt. From that session, I knew the game of roulette had been beaten."  Read more at location 98

The clockwork universe 

The method of Packard and Farmer was successful because Newton's three laws of motion and his law of universal gravitation are completely deterministic: there is no mention of chance. Given the state of a system — the position and velocity of all its parts — its movement from that point on is, according to Newton, completely predictable and determined.  Read more at location 108

On this basis, the 18th century French mathematician and philosopher the Marquis de Laplace realised that, if complete knowledge was obtained of the positions and velocities of all the objects in the universe, then the future could be forever predicted.  Read more at location 113

This view became known as the clockwork universe. The image of the clockwork universe became the dominant model in physics for 200 years after Newton, and also influenced other areas of society.  Read more at location 119

"The way that Newton had shown that a few universal laws could explain so much of the physical world inspired other intellectuals to look for universal laws that could explain human behaviour, politics, even history. Newton became a hero to revolutionaries who dreamt of utopian societies founded on reason."  Read more at location 121

As Woodrow Wilson later wrote: "The Constitution of the United States has been made under the dominion of the Newtonian theory."  Read more at location 127

probability arises purely due to ignorance. Yes, if we were a super-being with complete knowledge of the position and velocity of every atom in the roulette wheel and the spinning ball, then the clockwork universe suggests we could predict the outcome with certainty. But we inevitably do not have complete knowledge.  Read more at location 132

"What hath God wrought?" 

The scientific advisor of the transatlantic cable project was the Belfast-born William Thomson, who was known as one of the finest physicists of the age. Thomson had entered the University of Glasgow at the age of ten, and within two years he was publishing scientific papers. He became a professor of natural philosophy by the age of 22, and went on to make important contributions in the field of thermodynamics, playing an important role in establishing physics as a modern scientific discipline.  Read more at location 146

Lord Kelvin issued a famous quote which reflected the confidence and certainty of this period: "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." This view of Lord Kelvin was certainly not isolated.[2] In 1894, the American physicist Albert Michelson said: "The most important and fundamental laws and facts of physical science have all been discovered. Our future discoveries must be looked for in the sixth place of decimals."  Read more at location 154

2.  THE UNCERTAIN QUANTUM 

At lower temperatures, radiation of a lower frequency is emitted (red visible light), whereas at higher temperatures, higher-frequency radiation of a brilliant blue-white colour is emitted. However, when physicists tried to describe the relationship between temperature and frequency, they found a problem. The classical model agreed well at low frequencies, but it predicted ever-increasing emitted energy at high frequencies (ultraviolet and higher).  Read more at location 164

the radiation was predicted to rise to an infinitely large amount. This was known as the ultraviolet catastrophe. At the turn of the 20th century, the German physicist Max Planck cast a new light on the mystery. However, things might have turned out differently. When he was 16 years old, Planck had been told by the physicist Philipp von Jolly that he should not study physics because "in this field, almost everything is already discovered,  Read more at location 169
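The catastrophe can be seen numerically by comparing the classical (Rayleigh-Jeans) formula for spectral radiance with Planck's: at low frequencies they agree closely, but at high frequencies the classical value keeps growing while Planck's is cut off exponentially. A sketch using the standard textbook formulas and SI constants:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck's constant, J*s
C   = 2.99792458e8    # speed of light, m/s

def rayleigh_jeans(f, T):
    """Classical spectral radiance: grows without bound as f increases."""
    return 2 * f**2 * K_B * T / C**2

def planck(f, T):
    """Planck's spectral radiance: the exponential in the denominator
    suppresses emission once h*f greatly exceeds k*T."""
    return (2 * H * f**3 / C**2) / math.expm1(H * f / (K_B * T))
```

At 5000 K the two formulas agree to better than 1% at 10¹² Hz, but by 10¹⁶ Hz (ultraviolet) the classical prediction is astronomically larger than Planck's — the "ultraviolet catastrophe" in miniature.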

(Note: e = hf)  In order to solve the ultraviolet catastrophe, Planck took the radical step of suggesting that the emitted radiation was composed of tiny, indivisible packets of energy, with each packet being called a quantum. The energy of each packet was proportional to the frequency of the radiation: e = hf, where e is the energy of the emitted quantum, f is the frequency of the radiation, and h is Planck's constant. Planck's constant is equal to about 6.626 × 10⁻³⁴ joule-seconds, a very small number which has a huge importance. It appears in many important equations in physics. 

...Quantum theory had been born.  Read more at location 183

This latest result of Planck, however, seemed to suggest that energy could be formed of discrete, discontinuous chunks.  Read more at location 185

the second major mystery involving radiation: the photoelectric effect. In 1899, the German physicist Philipp Lenard shone a bright light onto metal and discovered that the metal ejected electrons. According to the classical viewpoint, it would be expected that the ejected electrons would have more energy if the light was brighter. However, this was not what was discovered.  Read more at location 187

(Note: e=hf)  How could low intensity light be capable of ejecting electrons? Well, in his annus mirabilis ("miracle year") of 1905, Albert Einstein proposed a solution. Einstein proposed that light was composed of packets of energy, with each packet having an energy equal to Planck's constant multiplied by the frequency of the light: e = hf. This is precisely the same formula discovered five years earlier by Max Planck, and it now made sense of the photoelectric effect.  Read more at location 193

even a single packet of light could potentially eject an electron from metal, the energy of the light transferring to the kinetic energy of the electron. This explained how even low intensity light could eject electrons. High intensity light was composed of many more of these packets of energy, but the energy of each individual packet remained only proportional to the colour of the light. Hence, increasing the brightness of the light did not increase the energy of the ejected electrons: you had to alter the colour of the light to do that.  Read more at location 198
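Einstein's explanation reduces to a simple rule: an electron is ejected only if the photon energy hf exceeds the metal's work function, and the surplus becomes the electron's kinetic energy. The 4.5 eV work function below is an illustrative value, not one from the book:

```python
H  = 6.62607015e-34   # Planck's constant, J*s
EV = 1.602176634e-19  # joules per electronvolt

def ejected_electron_energy_ev(frequency_hz, work_function_ev):
    """Kinetic energy (eV) of an ejected electron, or None if the photon
    energy h*f falls short of the metal's work function.

    Note that the light's intensity appears nowhere: brightness changes
    how many photons arrive, not the energy carried by each one.
    """
    photon_energy_ev = H * frequency_hz / EV
    if photon_energy_ev <= work_function_ev:
        return None
    return photon_energy_ev - work_function_ev
```

Green light at 5.5 × 10¹⁴ Hz (about 2.3 eV per photon) ejects nothing from a 4.5 eV metal no matter how bright it is, while dim ultraviolet at 1.5 × 10¹⁵ Hz does.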

light energy really was quantized into discrete packets. Each packet of light energy was called a photon. 

...Einstein received his only Nobel prize in 1921 for his discovery of the light quantum.  Read more at location 210

The third major mystery involving radiation considered the nature of the light emitted from different materials. When a substance is heated, it emits a light with a characteristic colour.  Read more at location 211

Every chemical element has a different associated colour. If the light from a pure element is passed through a prism, the prism splits the light into its various constituent colours. This spread of colours is called a spectrum. For white light, a rainbow effect is produced as white light contains all colours. However, each particular chemical element has a different spectrum, and it was found that the spectrum contains very sharp peaks in intensity at different light wavelengths. This means the spectrum contains very distinct bright coloured lines. It is as if each element has an identifying barcode.  Read more at location 213

The stage was set for the entrance of a young Danish physicist who has been called the father of quantum physics: Niels Bohr.  Read more at location 220

Bohr considered the previous discovery of Einstein — that light energy was quantized — to suggest that the energy of electrons in their orbits was quantized. This would mean that the electrons could only occupy special orbits with clearly defined energy levels. Hence, there could be no continuous drift towards the nucleus. Bohr then realised that this could also provide an explanation for the spectral lines. Electrons which were orbiting further away from the nucleus would have higher energy than electrons orbiting nearer the nucleus. When an electron jumped (a quantum jump) from a higher orbit to a lower orbit, it would release a photon. Because of the law of conservation of energy, the energy of the photon would be equal to the energy lost by the electron.  Read more at location 227
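Bohr's picture makes a concrete prediction: the emitted photon carries exactly the energy difference between the two orbits. Using the standard Bohr energy levels for hydrogen, E(n) = -13.6/n² eV (a standard result, taken here as given since the passage does not derive it), the jump from the third orbit down to the second gives the famous red line of hydrogen near 656 nm:

```python
def hydrogen_level_ev(n):
    """Energy of the n-th Bohr orbit of hydrogen, in electronvolts."""
    return -13.6 / n**2

def photon_wavelength_nm(n_high, n_low):
    """Wavelength of the photon emitted when an electron jumps from
    orbit n_high down to orbit n_low.

    By conservation of energy the photon carries away exactly the
    energy difference; hc ~ 1240 eV*nm converts energy to wavelength.
    """
    delta_ev = hydrogen_level_ev(n_high) - hydrogen_level_ev(n_low)
    return 1240.0 / delta_ev
```

Each allowed pair of orbits gives one sharp wavelength — which is exactly why an element's spectrum looks like a barcode of distinct lines.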

The uncertainty principle 

It was in this distraction-free environment that Heisenberg made his great breakthrough. He plotted all the observed values in the form of a rectangular grid, known as a matrix.  Read more at location 248

But Heisenberg realised that if matrices have to be used to determine quantum mechanical behaviour, then that would introduce a remarkable side-effect. And that is because matrix multiplication is noncommutative. Conventional multiplication is commutative: it does not matter in which order the numbers are multiplied; the answer will be the same.  Read more at location 271

Heisenberg realised that the noncommutative properties of matrix mechanics had startling consequences for quantum behaviour. It suggested that if the position of a particle was measured, and then the momentum of the particle was measured, a different result would be obtained than if the momentum had been measured first. The ordering of the measurements mattered. 
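The noncommutativity Heisenberg stumbled on is easy to see with two small matrices. A minimal sketch (the matrices here are arbitrary examples, not Heisenberg's actual grids of observed values):

```python
def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Ordinary numbers commute: 3 * 5 == 5 * 3.
# Matrices, in general, do not:
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
AB = matmul2(A, B)  # "measure A, then B"
BA = matmul2(B, A)  # "measure B, then A" -- a different result
```

Here AB is [[2, 1], [4, 3]] while BA is [[3, 4], [1, 2]]: the order of the operations changes the answer, which is the mathematical seed of the uncertainty principle.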

...it was not possible to accurately know both a particle's position and momentum!  Read more at location 281

This startling result became known as the Heisenberg uncertainty principle. For the first time, it appeared that there were fundamental limits on human knowledge about the universe.  Read more at location 287
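The limit Heisenberg found can be put into numbers using the modern statement of the principle, Δx·Δp ≥ ħ/2 (the symbolic form is standard physics, not quoted from the book; the constant and function names are mine):

```python
HBAR = 1.054571817e-34  # reduced Planck constant (h / 2*pi), J*s

def min_momentum_uncertainty(position_uncertainty_m):
    """Smallest momentum spread (kg*m/s) permitted by the uncertainty
    principle for a given position uncertainty: delta_p >= hbar / (2 * delta_x)."""
    return HBAR / (2 * position_uncertainty_m)
```

Pin an electron down to the size of an atom (10⁻¹⁰ m) and its momentum is uncertain by at least about 5 × 10⁻²⁵ kg·m/s; squeeze it harder and the momentum spread only grows.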

There was a limit on human knowledge. The old certainties were starting to unravel. From now on, the only certainty was uncertainty.  Read more at location 292

Schrödinger's waves 

The development of Heisenberg's matrix mechanics was the first time that quantum theory represented a complete description of the quantum behaviour of a particle: the quantum equivalent of Newton's laws of mechanics. This complete theory became known as quantum mechanics. However, matrix mechanics was difficult to use — most physicists had never used matrices before — and difficult to visualise.  Read more at location 293

Erwin Schrödinger presented a new method which substituted waves for matrices. The method swiftly proved popular (most physicists were familiar with waves, and the model was much easier to visualise). Schrödinger's method became known as wave mechanics.  Read more at location 297

Schrödinger used a mathematical model of a wave as a tool for determining these allowed orbits of electrons. An electron orbit was modelled as a wave around the nucleus. An orbit was only allowed if a whole number of wavelengths fitted around the orbit: nλ = 2πr, where n is a whole number, λ is the wavelength of the electron's wave, and r is the radius of the orbit. 

...This mathematical wave proposed by Schrödinger was called a wavefunction.  Read more at location 304

The situation was made more complicated by the fact that the values of the wavefunction were complex numbers, meaning they had a "real" part and an "imaginary" part. Imaginary numbers are based on the square root of -1. As no real number squared can result in -1, this explained why these numbers were considered "imaginary". Only when the magnitude of the value was squared (multiplying the value by its complex conjugate) did the imaginary part disappear.  Read more at location 306

Max Born who provided the incredible answer: the square of the magnitude of the wavefunction at a location represented the probability that the electron is found at that location.  Read more at location 311
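Born's rule is easy to demonstrate with Python's built-in complex numbers: squaring a complex amplitude directly leaves an imaginary part behind, but squaring its magnitude (multiplying by the complex conjugate) gives a real number that can serve as a probability, and the probabilities across all positions sum to one. The amplitudes below are arbitrary toy values:

```python
z = 0.6 + 0.8j  # one complex wavefunction value (an "amplitude")

squared = z * z                               # naive square: still complex
magnitude_squared = (z * z.conjugate()).real  # |z|^2: real, non-negative

# A toy two-position wavefunction: Born's rule turns each amplitude
# into the probability of finding the electron at that position.
amplitudes = [0.6 + 0.0j, 0.0 + 0.8j]
probabilities = [abs(a) ** 2 for a in amplitudes]  # [0.36, 0.64]
```

The probabilities 0.36 and 0.64 sum to one, as any well-behaved probability distribution must; before a measurement, those two numbers are all the theory will tell you.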

Before measurement, all that could be known was possibilities. So this is what the great edifice of physics was finally reduced to: at the most fundamental level, it was only possible to talk about the probability of a particular event occurring. It was not possible to achieve greater certainty.  Read more at location 315

****  Max Born's revelation of a fundamentally probabilistic universe meant the end of determinism. In a Newtonian deterministic universe, one event causes another in an entirely predictable manner. Quantum mechanics, however, revealed that the outcome of an event could only be considered in terms of probabilities.  Read more at location 318

(Note: why is this uncertainty more certain than the previously held certainty? why is this deemed fundamental?)  Manjit Kumar says in his book Quantum: "Quantum probability was not the classical probability of ignorance that could in theory be eliminated. It was an inherent feature of atomic reality.  Read more at location 322

3.  COPENHAGEN

The resulting 1927 conference entered the annals of physics legend by attracting 29 of the greatest physicists of all time, including 17 Nobel Prize winners. Among those attending the conference were Niels Bohr, Werner Heisenberg, Erwin Schrödinger, Max Born, Max Planck, and Albert Einstein.  Read more at location 333

According to the physicist John Wheeler: "In all the history of human thought, there is no greater dialogue than that which took place over the years between Niels Bohr and Albert Einstein about the meaning of the quantum."  Read more at location 337

Niels Bohr believed quantum mechanics provided a complete description of Nature at the microscopic level, in which case Nature was fundamentally governed by probabilistic processes. On the other side of the debate, Einstein believed that the current quantum theory was merely a temporary contrivance which would be replaced when a deterministic theory was uncovered.  Read more at location 339

Rutherford wanted to know what underlying process controlled the quantum jumping. Bohr's answer was remarkable: he suggested that the whole process was fundamentally random, and could only be considered by statistical methods: "every change in the state of an atom should be regarded as an individual process, incapable of more detailed description. We are here so far removed from a causal description that an atom may in general even be said to possess a free choice between various possible transitions." If Bohr was correct, the implications were staggering: there was no "causal" description of the process; nothing should be considered as causing the quantum jumps.  Read more at location 344

This would imply that Nature, at its root, was random, and no deeper explanation could ever be produced. Einstein, however, was far from convinced by this explanation. According to Einstein: "I find the idea quite intolerable that an electron exposed to radiation should choose of its own free will, not only its moment to jump off, but also its direction. In that case, I would rather be a cobbler, or even an employee in a gaming-house, than a physicist." Einstein remained convinced that there had to be some underlying deterministic mechanism — hidden from our eyes — which was responsible for determining the timing and direction of the quantum jumps.  Read more at location 349

Heisenberg and Born issued a provocative joint statement: "We consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification." Essentially, Heisenberg and Born were suggesting that quantum mechanics should be considered to be the final theory, and that the fundamental probabilistic nature of reality just had to be accepted. It was almost as if they were goading Einstein into making a response. In one of his most famous quotes, Einstein replied: "God does not play dice with the universe". From now on, Einstein would no longer be silent about quantum mechanics.  Read more at location 357

Einstein's box 

Each morning, Einstein would arrive for breakfast armed with a new thought experiment which he believed revealed a flaw in the uncertainty principle. Einstein and Bohr would continue their discussions as they walked to the institute, and during breaks in the conference. Usually by dinner back in the Metropole, Bohr would reveal a flaw in Einstein's argument, thus saving the uncertainty principle and the prevailing interpretation of quantum mechanics. However, one of Einstein's thought experiments was so ingenious that Bohr was stunned when he heard of it.  Read more at location 364

At a certain point in time, the clock in the box opens the shutter and a photon of light escapes. After this point, if the box is weighed again then it is possible to determine the mass of the photon. Then, from Einstein's own equation, E = mc², it is possible to determine the energy of the emitted photon. The time at which the photon was emitted is also known because it is the time at which the shutter opened. It therefore appears possible to know the energy of the photon and the time at which it escaped.  Read more at location 372

Bohr realised that Einstein — in his rush to overthrow quantum mechanics — had forgotten the implications of Einstein's own theory of general relativity. General relativity predicted that a clock in a stronger gravitational field would run slower than a clock in a weaker gravitational field. Bohr realised that when the photon was emitted from the box, the spring attached to the top of the box would lift the box slightly higher in the Earth's gravitational field. This would result in the clock inside the box no longer being perfectly synchronised with the clock in the laboratory. Hence, it was no longer possible to obtain the time measurement with perfect accuracy. The uncertainty principle was saved.  Read more at location 385

Is the Moon there when no one is looking at it? 

it was no longer possible to draw a clear boundary between the observer and the object being observed: the effect of the observer became all-important. With this in mind, Bohr argued that, before an observation or measurement was made, it was scientifically meaningless to talk about the properties of a particle.  Read more at location 395

****  Bohr said: "It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we can say about Nature." In other words, it is only our observations which are real. It is meaningless to talk about objective reality in the absence of observation.  Read more at location 398

the wavefunction can represent all the possible positions in which an electron might be discovered after a measurement is taken. After the measurement is taken, the wavefunction "collapses" so that only a single value is measured for the location of the electron. [5] This interpretation of quantum mechanics became known as the Copenhagen interpretation (because Niels Bohr's institute of physics was in Copenhagen).  Read more at location 402

quantum mechanics is a statistical theory: it provides you with the probabilities of a certain result if multiple observations are taken.  Read more at location 413

In order to correctly understand the effect of observation at the quantum level, it must be realised that observation can never be responsible for making an object "real", for somehow conjuring it into existence. However, it is the case that the act of observation can have the effect of modifying some properties of an observed object. These properties of a particle — such as spin and location — which are subject to the quantum superposition principle are called dynamic properties. But a particle also possesses properties which are fixed and are not subject to the principles of quantum mechanics, such as a particle's electric charge. These properties are called static properties. The existence of static, unchanging properties whose values are known prior to observation indicates that the particle does, indeed, exist prior to observation.  Read more at location 424

****  it is clear that the act of observation inevitably affects the object under observation. This means that the observed object should never be considered as an isolated object. Instead, the observer and the observed object should be considered to be a single, combined system. An object cannot produce a property value without an observer, and an observer cannot produce a property value without an object to observe. It is only by considering a combined system that a property value can be produced.  Read more at location 444

It is the combination of observer and observed object which generates reality.  Read more at location 451

The EPR paradox 

(after the initials of Einstein, Podolsky, and Rosen). The EPR paradox was another attempt by Einstein to show that quantum mechanics could not be a complete description of the workings of Nature. The basis of the EPR paradox rested on the principle of entanglement.  Read more at location 475

it is possible to produce a pair of electrons using a method which ensures that each electron must have opposite spin to the other electron, i.e., if one electron is "spin down" then the other electron must be "spin up" (this is due to the law of conservation of angular momentum: total angular momentum of the system before the electrons are emitted must equal the total angular momentum of the system after the electrons are emitted). This pair of electrons is then said to be entangled: the property values of one electron are dependent on the property values of the other electron.  Read more at location 481
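The anti-correlation can be mimicked with a toy simulation: however the first electron's measurement comes out, the second is forced to be opposite. This ignores measurement angles and everything that makes real entanglement subtle; it only illustrates the conservation constraint described above:

```python
import random

def entangled_pair(rng):
    """Measured spins of a toy entangled pair: whichever value one
    electron gives, the other always gives the opposite, so total
    angular momentum is conserved."""
    first = rng.choice(["up", "down"])
    second = "down" if first == "up" else "up"
    return first, second
```

Run it as many times as you like: each individual outcome is random, but the pair is always one "up" and one "down".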

Einstein realised that quantum mechanics insisted that this effect of one particle onto the other particle must be instantaneous — even if the particles are separated by quite a distance. And here Einstein believed he had found a flaw in quantum mechanics, because Einstein's own theory of special relativity prohibited any such influence over distance acting faster than the speed of light.  Read more at location 497

This principle — that an object should only be influenced by its immediate surroundings, and cannot be influenced by some instantaneous effect from a distant object — is called locality.  Read more at location 501

Famously, Einstein said: "Physics should represent reality in time and space, free from spooky action at a distance."  Read more at location 514

Bell's theorem 

Applications of quantum mechanics included semiconductors (leading to the computer age), and lasers (which introduced CDs and DVDs).  Read more at location 523

John Bell became fascinated by the EPR paradox and decided to see if it was possible to design an experiment which could distinguish between the viewpoints of Einstein and Bohr. Instinctively, Bell felt that Einstein must be right — that there was an objective reality even in the absence of observation. Bell believed that the properties of particles had to have fixed values — even before those property values were measured. In an effort to prove Einstein correct, John Bell proposed an ingenious experiment.  Read more at location 526

When the results were analysed, the result was startling. It was shown that the level of correlation was, indeed, 85% — the value predicted by quantum mechanics. This was a higher value than the value which should have been possible (75%) if the particles had fixed property values. This meant that Bohr was right all along and Einstein was wrong: particles could not have fixed property values.  Read more at location 550
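The 85% figure can be reproduced: in the standard photon version of a Bell test, quantum mechanics predicts a match rate of cos² of the angle between the two detector settings, and an offset of 22.5 degrees (a common choice in descriptions of these experiments, assumed here since the passage does not give the angle) yields roughly 85%, above the 75% that fixed pre-existing values could produce:

```python
import math

def quantum_match_rate(offset_degrees):
    """Probability that two entangled photons give matching results when
    the detectors are offset by the given angle: quantum mechanics
    predicts cos^2 of the offset."""
    return math.cos(math.radians(offset_degrees)) ** 2
```

At zero offset the match rate is 100%, as it must be; at 22.5 degrees it is about 0.854 — the "impossible" correlation that ruled out fixed property values.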

This was an astonishing result, which Physics World magazine called "the most profound discovery of science." Unfortunately, John Bell died suddenly of a stroke in 1990. In that same year he had been nominated for a Nobel Prize for his discovery.  Read more at location 553

Fundamental uncertainty 

at the lowest quantum level, this option of digging deeper is not available to us. It is called fundamental uncertainty because "fundamental" implies that there is no deeper layer. And it is because this behaviour seems so alien to us that the possibility of this behaviour receives so much resistance, so much hostility.  Read more at location 568

the skill is to only consider experimental data, the results of experiments, and listen to Nature talking.  Read more at location 577

Einstein said: "What I am really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all."  Read more at location 581

Our theories are not yet capable of predicting what precisely happened in the earliest moments of the existence of the universe, in the so-called Planck epoch, from 0 up to about 10⁻⁴³ seconds. However, if we wind the clock back, it is believed that the universe was compressed to a size smaller than an atomic nucleus, an incredibly hot and dense point. The condition of the universe at this moment is called the boundary condition.  Read more at location 586

Quantum indeterminism breaks the strict laws of cause-and-effect. And, according to the uncertainty principle, it is even possible that a small amount of energy can spontaneously appear from empty space in a so-called quantum fluctuation, but only for a very short time. In fact, if you had to design a theory aimed at coping with the craziness of the earliest moments of the universe, you could not do much better than "anything goes" quantum mechanics. So maybe at the lowest levels of reality there must be indeterminism and uncertainty, otherwise there could be no way that the universe could exist. Maybe every conceivable universe has to be fundamentally non-deterministic at its base level? Maybe this makes indeterminism and uncertainty a logical necessity in the laws of Nature.  Read more at location 598
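The "borrowed energy" idea can be made quantitative with the energy-time form of the uncertainty principle, ΔE·Δt ≥ ħ/2 (a standard relation, not quoted from the book; the sketch below takes it at equality, and the names are invented):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def max_fluctuation_lifetime(energy_joules):
    """Longest time a spontaneous energy fluctuation of the given size
    can persist, from delta_E * delta_t = hbar / 2."""
    return HBAR / (2 * energy_joules)
```

The bigger the borrowed energy, the shorter its allowed lifetime: an electron's worth of rest energy (about 8.2 × 10⁻¹⁴ J) could persist for only around 10⁻²¹ seconds.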

4.  VIENNA 

The city was a centre of the modernist movement, buzzing with the new ideas of psychoanalyst Sigmund Freud and philosopher Ludwig Wittgenstein, who both chose to make the city their home. The centres of intellectual debate were the Viennese coffee houses, where arguments raged through the day. The clientele of the coffee houses included Adolf Hitler, Joseph Stalin, Leon Trotsky, and Sigmund Freud, who were all living in the city within a few miles of each other.  Read more at location 606

The certainty of mathematics 

What do we mean when we say a statement is "true" or "false"? Well, the statement might refer to some factual aspect of the material world,  Read more at location 624

A proposition is a statement which can be either true or false. The rules of logic allow us to use reasoning to combine propositions.  Read more at location 627

steps we used to prove that proposition would be called a proof. In mathematics, if a proposition is true and it is particularly useful or important, then it is called a theorem. Not only is a mathematical theorem true, but it will be true under all circumstances, in any conceivable world. This differs from a proposition about the material world.  Read more at location 629

(Note: leading to godel)  In mathematics, however, there is no uncertainty: a true proposition (theorem) would have to be true in any possible world. So a mathematical proof is a valuable thing. It seems to reveal some insight into "the way things have to be".  Read more at location 634

We can feel certain about the universal truth of mathematical theorems because they are derived — in a sequence of logical steps — from simple axioms. Axioms can be considered the foundation stones of mathematics. They are simple propositions which are considered as being "obviously true".  Read more at location 644

the axioms of classical geometry were defined by the Greek mathematician Euclid. These five axioms are:

1. It is possible to draw a straight line from any point to any other point.
2. It is possible to extend a straight line indefinitely.
3. It is possible to draw a circle with any centre and any radius.
4. All right angles are equal to one another.
5. Parallel lines never meet — no matter how long they are.

Though these axioms might appear trivially simple, all of classical geometry (including the Pythagorean theorem) can be derived by logical reasoning (building up) from these five axioms.  Read more at location 646

Euclidean geometry is also called plane geometry because it assumes that the geometry is being constructed on a flat surface. However, in the 19th century it was realised that geometry could also be performed on curved surfaces, e.g., the surface of a sphere. In that case, the parallel postulate does not hold: parallel lines can eventually meet.  Read more at location 656

The resultant form of geometry — which can deal with curved surfaces — is called non-Euclidean geometry. In the early years of the 20th century, the real-world application of non-Euclidean geometry was revealed when Einstein's theory of general relativity showed that space itself had curvature, so only a non-Euclidean geometry could correctly describe space. So the introduction of non-Euclidean geometry showed that our certainty of mathematics is only as secure as our certainty of the axioms on which mathematics is based. Suddenly, mathematics seemed less than secure.  Read more at location 660

****  It undermined absolutist views about human knowledge across a vast spectrum of human thinking. Prior to the coming of non-Euclidean geometry, there was a unity, a confidence, and a certainty to our knowledge of the world.  Read more at location 666

The Barber of Seville 

In the early years of the 20th century, there was a great deal of interest in a number of logical paradoxes which threatened to undermine the foundations of mathematics. As an example, Epimenides — a Greek philosopher of the sixth century BC — once said: "All Cretans are liars". This might appear a rather racist comment; however, it should be mentioned that Epimenides was from Crete himself. So what are we to make of Epimenides' statement? If all Cretans really are liars, then Epimenides is a liar himself. In which case, we should not believe his statement. Therefore, Epimenides might be a truthful Cretan. In which case, all Cretans are liars, etc.  Read more at location 670

The liar's paradox is not written in the language of mathematics; it is written in English, a language not notable for its logical consistency. Hence, this paradox is considered to be nothing more than a linguistic oddity, and not a threat to mathematics. However, other paradoxes were discovered which were phrased in the language of mathematics, and these paradoxes posed more serious problems.  Read more at location 677

One of these paradoxes is known as the paradox of the Barber of Seville: "A man of Seville is shaved by the Barber of Seville if and only if the man does not shave himself. Does the Barber shave himself?"  Read more at location 680

Consider the set of all men of Seville who are shaved by the barber. If we assume the barber does not shave himself, then we should put him in the set. But if we put him in the set, then the barber should shave himself, because the barber shaves all the men in the set. So if the barber shaves himself, then he does not shave himself; and if the barber does not shave himself, then he does shave himself. Hence, this is another paradox. But unlike the previous liar's paradox — which was expressed in the English language — sets are mathematical structures which can be expressed mathematically. So the tale of the Barber of Seville represents a true mathematical paradox.  Read more at location 684

Principia Mathematica 

After 360 pages of writing, their first major result was definitively proved: 1 + 1 = 2. It might seem rather excessive to spend such a long time proving such an apparently trivial result, but it revealed Russell's determination to build mathematics on a solid logical foundation and thereby eliminate all paradoxes and contradictions. After almost ten years of work, the first volume of Russell's magnum opus Principia Mathematica was released in 1910. The book was so large that Russell and Whitehead had to transport it to their publisher's office in a wheelbarrow. Two further volumes followed in 1912 and 1913. Unfortunately, the writing of the Principia required an almost obsessive approach, with no room for creativity or inspiration. Russell remarked that the mental effort had irretrievably drained his mental abilities. Talking of his experience, Russell said: "The effort was so severe that at the end we both turned aside from mathematical logic with a kind of nausea."  Read more at location 697

Gödel's incompleteness theorem 

Gödel showed that there are mathematical statements which are true, but which cannot be proved to be true. This meant that mathematics was not complete: there are some things you cannot prove using mathematics. This effectively sounded the death knell for David Hilbert's program to eliminate uncertainty from mathematics. This result is known as Gödel's incompleteness theorem.  Read more at location 717

****  The method was actually very similar to the liar's paradox which we considered earlier. If you remember, the liar's paradox essentially consisted of the English language sentence "This sentence is false". If the sentence is false, then the sentence must be true. But if the sentence is true, then the sentence must be false. Hence the contradiction.  Read more at location 720

Gödel's brilliant stroke of genius was to show how any statement could be converted into a mathematical statement. He achieved this translation by using a code. The code converted a statement into a single whole number (usually a very large whole number). The whole number is called the Gödel number. The code worked on the basis that any whole number can be broken down into a product of prime numbers in one — and only one — way. For example, 51 = 3 × 17. Because this factorisation is unique, there is a one-to-one relationship between a statement and its Gödel number. If each prime number is then taken as representing a symbol in a statement, then this allows a statement to be converted into a Gödel number. This conversion meant that any statement about mathematics could then be imported into mathematics. Most importantly, this allowed a mathematical statement to refer to itself.  Read more at location 725
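The prime-number coding can be sketched in a few lines of Python. This is only a toy illustration — the alphabet of symbols and their code numbers below are invented, and Gödel's actual scheme encodes the symbols of formal logic — but the mechanism is the same in one standard presentation: each position in the statement is assigned a prime, that prime is raised to the power of the symbol's code number, and unique factorisation guarantees the statement can always be recovered from the single resulting whole number.

```python
# Toy Gödel numbering. The symbol alphabet and its code numbers are
# invented for this sketch; Gödel's real scheme covers formal logic.
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_number(statement):
    """Encode a string of symbols as one whole number: the i-th prime
    raised to the power of the i-th symbol's code number."""
    n = 1
    for prime, symbol in zip(PRIMES, statement):
        n *= prime ** SYMBOLS[symbol]
    return n

def decode(n):
    """Recover the statement by factorising n back into prime powers.
    Unique factorisation guarantees only one answer is possible."""
    inverse = {code: symbol for symbol, code in SYMBOLS.items()}
    symbols = []
    for prime in PRIMES:
        power = 0
        while n % prime == 0:
            n //= prime
            power += 1
        if power == 0:
            break
        symbols.append(inverse[power])
    return ''.join(symbols)

g = godel_number('S0=S0')   # "the successor of 0 equals the successor of 0"
print(g)                    # a single (large) whole number
print(decode(g))            # → S0=S0
```

Because the statement survives the round trip through a single number, statements *about* numbers can themselves be treated as numbers — which is exactly the self-reference Gödel needed.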

Instead of considering the statement "This sentence is false", Gödel considered the statement "This statement cannot be proved to be true". More precisely, Gödel considered the statement "The statement with Gödel number x cannot be proved to be true". What Gödel then did was to encode that entire statement into a Gödel number, and replace the letter x in the statement with that Gödel number. In that way, the statement forms a strange kind of loop, looping back to refer to itself. The end result is that the final statement refers to itself and so does, indeed, end up representing the statement "This statement cannot be proved to be true". Let us now consider that statement. If it is false, then we can prove it to be true — a contradiction. But if it is true, then we have a true statement which we cannot prove! So either way — whether the statement is true or false — mathematics is in trouble. By converting the statement into a mathematical form using his coding approach, Gödel revealed that either a contradiction (a paradox) could exist, or else there existed a true statement which could never be proved to be true.  Read more at location 733

****  Gödel had shown that there could never be complete certainty in mathematics, because mathematics itself was not complete.  Read more at location 745

Douglas Hofstadter wrote an extraordinary Pulitzer Prize-winning book about the incompleteness theorem entitled Gödel, Escher, Bach. In an extract from that book, Hofstadter describes the impact of the incompleteness theorem on the general public: "Modern readers may not be as nonplussed by this as readers of 1931 were, since in the interim our culture has absorbed Gödel's theorem, along with the conceptual revolutions of relativity and quantum mechanics, and their philosophically disorienting messages have reached the public, even if cushioned by several layers of translation (and usually obfuscation). There is a general mood of expectation, these days, of 'limitative' results — but back in 1931, this came as a bolt from the blue."  Read more at location 750

Gödel and physics 

It is often said that mathematics is the "language of Nature", with mathematics proving to be an uncanny match to the laws of physics. Indeed, discoveries in mathematics sometimes anticipate corresponding discoveries in physics (the most recent example of this being the discovery of the Higgs boson — after its existence was predicted by mathematical reasoning back in 1964).  Read more at location 765

The British mathematician Alan Turing showed that there were certain mathematical problems which could not be solved by a computer. The principle behind these so-called uncomputable problems was directly analogous to the incompleteness theorem.  Read more at location 770

In his 2002 lecture entitled Gödel and the End of Physics, Stephen Hawking presented the following insight: "In the standard approach to the philosophy of science, physical theories live rent free in a Platonic heaven of ideal mathematical models. That is, a model can be arbitrarily detailed and can contain an arbitrary amount of information without affecting the universes they describe. But we are not angels, who view the universe from outside. Instead, we and our models are both part of the universe we are describing. Thus a physical theory is self-referencing, like in Gödel's theorem. One might therefore expect it to be either inconsistent or incomplete. The theories we have so far are both inconsistent and incomplete." Hawking's insight is that we are "trapped" inside the universe, and so we are inevitably limited in what truths we can know about Nature.  Read more at location 784

The uncertain century 

(Note: rationalistic interpretations of Christianity gave way to fundamentalism or altogether non-Christian solutions.)  Once the 20th century arrived, discoveries in science and mathematics (quantum mechanics and Gödel's theorem) seemed to undermine this certainty, and it is uncanny how this loss of confidence seemed to mirror a loss of direction in other areas of human endeavour. We might regard David Hilbert's effort to formalise mathematics in the 1920s as a last-gasp attempt to ensure certainty and stability in a world on the brink of chaos.  Read more at location 793

It was those other citizens of Vienna — Hitler and Stalin — who were to have the greatest influence on the 20th century. Once aimed solely at the good of mankind, science was turned to the creation of weapons. During the First World War, the Jewish director of Berlin's Institute for Chemistry, Fritz Haber, turned his attention to the creation of chemical weapons. He developed a mathematical formula — known as Haber's law — which gave the relationship between the concentration of a poisonous gas and how long the gas had to be breathed to achieve death. Haber was awarded the Nobel Prize for Chemistry in 1918. Progress in science and technology was being warped so that it was no longer the servant of humanity. Instead, science became the enemy of millions. Haber became the first director of the corporation which produced the poisonous gas Zyklon B, later responsible for the deaths of millions in the gas chambers during the Second World War. Meanwhile, in America, the greatest physics project of all time — which at its peak was greater than the entire American automobile industry — was devoted not to the benefit of mankind but to the development of the ultimate weapon of mass destruction: the atomic bomb.  Read more at location 800

The record for the largest passenger ship was beaten early in the 20th century by a steamship whose selling-point was certainty: the perfect certainty of safety and security. That ship was the Titanic. Welcome to the uncertain century.  Read more at location 811

5 THE COASTLINE OF BRITAIN 

Lorenz had constructed a simple model based on convection. Convection occurs when a volume of air is heated from below. As far as the weather is concerned, this is the situation when the Sun heats the ground and the air immediately above the ground becomes warm. Warm air is less dense than cold air, so this warm air rises to the higher levels of the atmosphere where it cools. This cooled air then descends again to replace the newly-warmed air at ground level. Hence, over time a circular convection current is formed.  Read more at location 818

While Lorenz's equations predicted that the system would always tend to form a simple circular current, they also revealed that this stability was deceptive. The equations also showed that sometimes — in a completely unpredictable way — the direction of circulation could slow down and then reverse. This was a remarkable discovery.  Read more at location 822

Lorenz's discovery showed that very simple equations can generate behaviour which appears random. In order to illustrate this principle, Professor Ian Stewart in his book Does God Play Dice? presents a very simple mathematical expression: 2x² − 1. We will discover how a simple expression such as this can produce complex behaviour which appears random. Pick a fractional value between 0 and 1 and substitute that value for x in the expression.  Read more at location 827

The values and the plot appear completely random! This is bizarre: a simple, deterministic equation has generated behaviour which appears completely random. Certainly, if you encountered this sort of behaviour in Nature then you would probably assume that there was some complex mechanism underlying it, such as the interactions of a large population of animals. You would not imagine that such randomness could be generated by something as simple as 2x² − 1. But still, at least this is a predictable, deterministic process.  Read more at location 836
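Stewart's experiment is easy to reproduce. Here is a minimal Python sketch, starting from the value 0.9 (the starting value used for the book's own plot): each value is substituted back into 2x² − 1, and the resulting sequence bounces around between −1 and 1 with no visible pattern.

```python
def iterate(x, steps):
    """Repeatedly substitute the result back into 2x^2 - 1."""
    values = [x]
    for _ in range(steps):
        x = 2 * x * x - 1
        values.append(x)
    return values

# Print the first few values of the apparently random sequence.
for step, value in enumerate(iterate(0.9, 10)):
    print(f"step {step:2d}: {value:+.6f}")
```

Plot these values against the step number and you get a jagged, random-looking graph — yet every point is fully determined by the one before it.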

One day when he was comparing two plots of output data which were supposedly generated by the same input data, Lorenz got a shock. The two plots were very similar at first, but diverged after a few values until the two graphs became completely different. Lorenz was baffled.  Read more at location 842

When he examined the second sequence he saw he had set the value at the start of the sequence to be 0.506. He had copied this value from the first sequence. However, when he examined the first sequence he found the values there actually had six decimal places: 0.506127.  Read more at location 847

The butterfly effect 

In order to duplicate Lorenz's discovery, let us return to our simple expression 2x² − 1. Our first graph of values was generated from a starting value of 0.9. This time let us start with a very slightly different value of 0.9001.  Read more at location 854

At the start, you will see that the two graphs are in very close agreement. However, after approximately ten points the two graphs start to diverge. After fifteen points the two graphs look completely different — despite starting with almost identical values. This extreme sensitivity to initial conditions is a property of all deterministic systems which behave in this chaotic manner. This behaviour is called chaos.  Read more at location 862
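The divergence can be watched numerically as well as graphically. The sketch below runs the same iteration from 0.9 and from 0.9001 side by side and prints the gap between the two sequences; the gap grows from one part in ten thousand to a substantial fraction of the whole range within roughly a dozen steps, matching the behaviour of the two graphs described above.

```python
def trajectory(x, steps):
    """Iterate x -> 2x^2 - 1, keeping the whole sequence."""
    out = [x]
    for _ in range(steps):
        x = 2 * x * x - 1
        out.append(x)
    return out

a = trajectory(0.9, 15)     # the original starting value
b = trajectory(0.9001, 15)  # differs by just one part in ten thousand
for step, (p, q) in enumerate(zip(a, b)):
    print(f"step {step:2d}: {p:+.6f}  {q:+.6f}  gap {abs(p - q):.6f}")
```

After about fifteen steps the two columns bear no resemblance to each other — extreme sensitivity to initial conditions in action.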

****  Even the tiniest fluttering of a butterfly's wings would affect the state of the current weather by a small amount, and that effect would become magnified over time. Lorenz called this principle the butterfly effect, the principle that a butterfly flapping its wings in China could potentially cause a hurricane next month in New York.  Read more at location 870

Lorenz was well aware of the implications of this discovery for long-range weather forecasting: "When our results concerning the instability of nonperiodic flow are applied to the atmosphere, which is ostensibly nonperiodic, they indicate that prediction of the sufficiently distant future is impossible by any method, unless the present conditions are known exactly. In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be non-existent."  Read more at location 872

The processes which produce chaos are completely deterministic (unlike the case with quantum mechanics). Every pocket of air in the atmosphere moves in a perfectly deterministic manner according to Newtonian laws. There is no randomness in this situation, there is only the illusion of randomness. Chaos is produced by simple deterministic equations creating behaviour that appears random. From the point of view of this book, we have found another source of uncertainty, but this uncertainty is generated by a very unlikely source: determinism.  Read more at location 878

James Gleick says in his book Chaos: "Twentieth-century science will be remembered for just three things: relativity, quantum mechanics, and chaos. Of the three, the revolution in chaos applies to the universe we see and touch, to objects at human scale."  Read more at location 885

Strange attractors 

Lorenz's three simple equations had just three variables — x, y, and z — which denoted the current state of the system at any particular time, plus two fixed parameters: σ (which represented the viscosity of the fluid) and ρ (which represented the temperature difference between the top and bottom of the volume of air). These fixed parameters would be set to particular values before the simulation was run in order to obtain an interesting output. In order to examine how the convection system developed over time, Lorenz took the x, y, and z values (which represented the state of the system at any particular time) and plotted them as a single point in a three-dimensional space.  Read more at location 888

This method — by which the state of a system is represented by a single point moving in a multi-dimensional space — is a common approach in physics and engineering. The space is called a phase space.  Read more at location 896

Strange attractors are objects which live in phase space. This particular diagram is called the Lorenz attractor. These diagrams are called "attractors" because whatever random state a system might be in initially, the effect of chaos will always be to drag the state of that system towards the attractor.  Read more at location 901

Ironically, chaos operates to create structure, and that structure is the strange attractor. The most important feature of the Lorenz attractor diagram is that the point never writes over the same point twice — the point continually writes a new curve through space, showing that the system is never in the same state twice. Hence, the system never repeats — it continually creates new convection patterns in a completely unpredictable way. Strange attractors are therefore infinitely detailed: no matter how much you zoom in on them, you will always find more detail. This infinite detailing causes the extreme sensitivity to initial conditions in chaotic systems.  Read more at location 906
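Although the equations themselves are not printed in this chapter, they are simple enough to simulate. The sketch below assumes the standard form of the Lorenz system (which includes a third fixed parameter, β, alongside σ and ρ) with the classic parameter values σ = 10, ρ = 28, β = 8/3, and steps it forward with crude Euler integration; each (x, y, z) triple is one point of the trajectory through phase space.

```python
def lorenz_path(x, y, z, steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Step the standard Lorenz equations forward with Euler integration,
    returning the (x, y, z) state at each step."""
    path = []
    for _ in range(steps):
        dx = sigma * (y - x)          # rate of change of x
        dy = x * (rho - z) - y        # rate of change of y
        dz = x * y - beta * z         # rate of change of z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        path.append((x, y, z))
    return path

# 5000 points of the trajectory, started from an arbitrary state.
path = lorenz_path(1.0, 1.0, 1.0, 5000)
```

Feeding the x and z columns of `path` into any plotting tool reveals the famous two-lobed butterfly shape of the Lorenz attractor: the point is drawn onto the attractor from wherever it starts, yet never retraces the same curve twice.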

According to Ian Stewart: "It is now customary to define a strange attractor to be one that is fractal." Fractals are the geometry of chaos, and they are found throughout Nature.  Read more at location 913

Fractals 

Legend has it that Mandelbrot became interested in unconventional forms of geometry when he wanted to obtain a value for the length of the coastline of Britain. On examining several encyclopedias, he found they all gave different values for the length. Mandelbrot realised that there was no single correct value for the length: it depended on how the length was measured. To be precise, it depended on the length of your measuring rod.  Read more at location 920

****  the smaller our ruler, the longer our measurement of the length of the coast. So, remarkably, it emerges that there is no "correct" measurement of the length of the coast of Britain: the value is completely dependent on the length of the measuring ruler.  Read more at location 929

There can be a similarity between a large object and the smaller objects which comprise it. This is called self-similarity, and it is a common property of many natural objects. Benoit Mandelbrot realised that these objects possessed a symmetry.  Read more at location 944

****  Mandelbrot realised that self-similarity resulted in a symmetry when an object was examined at different scales: if you zoomed into an object, it still looked the same.  Read more at location 947

In the following diagram of a tree, you will see that each branch, and each twig, of the tree resembles the tree in its entirety.  Read more at location 949

Trees — and many other plants — possess self-similarity in abundance. In the following diagram, an example of a fern leaf is also shown, the leaf being composed of 30 segments. You will see that each of those 30 segments resembles the entire fern leaf.  Read more at location 951

We could imagine the jagged edge of the Koch snowflake as representing a coastline: you can keep on zooming forever. So we find that by applying very simple construction rules repeatedly, Nature can produce structures of infinite complexity. Mandelbrot named these structures fractals.  Read more at location 961
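This ruler-dependence is easy to quantify for the Koch snowflake. Each time the measuring rod shrinks by a factor of 3, every straight segment resolves into 4 segments each a third as long, so the measured length of the "coastline" grows by a factor of 4/3. A small Python sketch of this arithmetic:

```python
def measured_length(ruler_level, total=1.0):
    """Length of the Koch curve as seen with a ruler of size (1/3)^level:
    each refinement multiplies the measured length by 4/3."""
    return total * (4.0 / 3.0) ** ruler_level

for level in range(5):
    ruler = (1.0 / 3.0) ** level
    print(f"ruler {ruler:.4f} -> measured length {measured_length(level):.4f}")
```

The measured length never settles down: shrink the ruler forever and the length grows without limit, which is exactly why there is no single "correct" length for a fractal coastline.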

****  The true geometry of Nature is fractal geometry, not Euclidean geometry. Just as chaos results from simple rules which produce endlessly complex behaviour, so fractals result from simple rules which produce endlessly complex structures. Just as chaos is the true language for describing the behaviour of Nature, so fractals are the true language for describing the structure of Nature.  Read more at location 967

The Mandelbrot set 

The Mandelbrot set is an incredibly beautiful, infinitely detailed fractal which is produced by a very simple formula.  Read more at location 972

Benoit Mandelbrot discovered the Mandelbrot set by considering complex numbers, which we first encountered in Chapter Two in the discussion of Schrödinger's wavefunction. If you remember, a complex number has two parts: a real part (which is a conventional real number) and an imaginary part. The imaginary part is based on the square root of -1. This is rather strange because no conventional number, when multiplied by itself, can give a negative result. So how can -1 have a square root? Well, we don't worry about that — we just symbolise the square root of -1 by the letter i. The imaginary part of a complex number then consists of a number multiplying i. Hence, an example of a complex number might be: 6 + 3i.

One of the most useful features of complex numbers is that they can be represented geometrically. It is possible to plot complex numbers on a two-dimensional graph with the real part being plotted along the horizontal x axis, and the imaginary part being plotted up the vertical y axis. This is called the complex plane. Hence, in the complex plane, the real axis is horizontal, and the imaginary axis is vertical.

An image of the Mandelbrot set can be produced very simply. Consider a point in the complex plane, and the complex number represented by that point. You then apply a simple iterative process which involves taking the square of the number and adding the original number. You then square the result, and add the original number again. This process is performed several times. If the resultant number stays low (within a certain limit) then the point is in the Mandelbrot set, and the associated pixel is coloured black. However, if after a number of iterations the resultant number shoots off to a very large value, the point is not in the Mandelbrot set. In that case, instead of colouring the pixel black, the pixel is assigned a colour based on how many iterations were performed.  Read more at location 973
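The recipe translates almost line-for-line into Python. The escape threshold (a magnitude of 2) and the iteration cap (50) used below are conventional choices rather than values given in the book, and starting the iteration from zero gives the same result as starting from the number itself, since the first step simply reproduces it. The colouring step is replaced here by a crude character plot:

```python
def iterations_to_escape(c, limit=2.0, max_iter=50):
    """Return None if c appears to be in the Mandelbrot set, otherwise
    the number of iterations taken for the value to exceed the limit."""
    z = 0
    for i in range(max_iter):
        z = z * z + c          # square the number and add the original
        if abs(z) > limit:
            return i           # shot off to a large value: not in the set
    return None                # stayed low: in the set

# A rough character "image" of the complex plane: '#' marks points in
# the set (the pixels the book would colour black), '.' marks escapers.
for im in range(10, -11, -2):
    row = ''
    for re in range(-20, 6):
        c = complex(re / 10.0, im / 10.0)
        row += '#' if iterations_to_escape(c) is None else '.'
    print(row)
```

Even at this coarse resolution the familiar bulbous silhouette of the Mandelbrot set emerges; replace the characters with coloured pixels, and the iteration count with a colour palette, and you have the classic psychedelic images.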

Fractal dimensions 

When we consider a fractal border, such as the Koch snowflake, we find that as we zoom in to the fractal the border gets fractionally longer.  Read more at location 998

Mandelbrot realised that another form of measurement was required: the dimension. This is a surprise as we do not usually think of the number of dimensions of an object as a value of measurement. However, a square is a two-dimensional object, and a cube is a three-dimensional object, so the number of dimensions an object possesses is clearly a property of that object — just like its width and height.  Read more at location 1001

****  the number of dimensions of the ball of string is dependent on your point of view, a number from zero to three. So returning to the theme of this book, once again the old notion of certainty in mathematics has taken a blow. From considering objects as having a clear observer-independent number of dimensions, we now find that the number of dimensions is uncertain — dependent on the observer. Mandelbrot — a mathematician — was even so bold as to suggest a connection with developments about uncertainty in physics: "The notion that a numerical result should depend on the relation of object to observer is in the spirit of physics in this century and is even an exemplary illustration of it."  Read more at location 1007

(Note: d = log(N) / log(Z))  Mandelbrot realised that the number of dimensions of a self-similar object (such as a fractal) could be calculated by a simple formula: d = log(N) / log(Z), where N is the total number of self-similar pieces in the fractal, and Z is the zoom factor. The formula basically reveals how many more self-similar pieces of the fractal we see as we zoom out (as if we are zooming out of a coastline to reveal more detail).  Read more at location 1015

Consider one of the simplest self-similar objects: a square. 

(Note: d = log (4) / log (2) = 2)  ...It can be seen that each side of the resultant larger square is twice as long (because of the zoom factor). However, it can also be seen that the larger square actually contains four self-similar smaller squares. So, from Mandelbrot's formula, we can calculate the dimension of the square as:  Read more at location 1021

Now let us use the formula to calculate the number of dimensions of a cube. 

(Note: d = log (8) / log (2) = 3)   ...the resultant cube contains eight self-similar smaller cubes. So, according to Mandelbrot's formula, the number of dimensions of a cube is:  Read more at location 1027

The formula tells us that a line has one dimension — which is again the value we would expect. So when we consider the classical Euclidean shapes of the line, the square, and the cube, we find they all have an integer (whole number) dimensionality.  Read more at location 1030

Remember how the Koch snowflake fractal was generated by a triangular line with a "kink" in the middle. This type of kinked line which is used to form a fractal is called a generator. 

...(Note: d = log (4) / log (3) = 1.26)  You will see that as we zoom out of the generator, more detail (the "coastline") starts to emerge. However, although we have zoomed-out by a factor of three, you will see that the second zoomed-out image is now composed of four of the initial generator shapes. So, from Mandelbrot's formula, we can calculate the dimension of the Koch snowflake as:  Read more at location 1036

****  So the formula is telling us that the Koch snowflake is an object with 1.26 dimensions! It might seem bizarre to consider a structure as having 1.26 dimensions. Perhaps think of the Koch snowflake as being "rougher" than a line — which has a dimensionality of one — but not filling the entire two-dimensional plane like a square would do (a square having two dimensions). So, on that basis, we might expect the dimensionality of a Koch snowflake to lie somewhere between 1 and 2. When Mandelbrot considered the fractal coastline of Great Britain he found it had a dimensionality of approximately 1.24. In contrast, the coastline of South Africa — which is nearly circular — has a dimensionality of approximately one. It is clear that the rougher the fractal shape, the higher its dimensionality.  Read more at location 1039
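All of these worked examples can be checked against Mandelbrot's formula d = log(N) / log(Z) in a couple of lines of Python:

```python
import math

def dimension(pieces, zoom):
    """Mandelbrot's formula: d = log(N) / log(Z), where N self-similar
    pieces appear for a zoom factor of Z."""
    return math.log(pieces) / math.log(zoom)

print(dimension(2, 2))  # line: 2 pieces at zoom 2, dimension 1
print(dimension(4, 2))  # square: 4 self-similar squares, dimension 2
print(dimension(8, 2))  # cube: 8 self-similar cubes, dimension 3
print(dimension(4, 3))  # Koch generator: 4 pieces at zoom 3, dimension 1.26...
```

The line, square, and cube come out with the whole-number dimensions we expect, while the Koch snowflake lands at log(4)/log(3) ≈ 1.26 — a fractional dimension, which is exactly where the word "fractal" comes from.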

****  As we find fractal structures throughout the natural world here on Earth, we should perhaps not be so surprised to discover that the entire universe itself appears to have a self-similar fractal shape. Stars group together to form galaxies, galaxies group into clusters, and clusters group together into superclusters. There is some symmetry over scale here — just as in a fractal. In fact, the universe has been measured as being fractal on scales up to 350 million light years, with a fractal dimension of 1.2.  Read more at location 1046

At the mercy of chaos 

Prices should reach a steady equilibrium when the supply of a product matches the demand for that product. But if equilibrium is the natural state of affairs, then how come markets are so turbulent? As Mandelbrot says in his book The (Mis)Behaviour of Markets: "To me, all the power and wealth of the New York Stock Exchange or a London currency-dealing room are abstract; they are analogous to physical systems of turbulence in a sunspot or eddies in a river."  Read more at location 1059

Mandelbrot considered the interactions in a simulation of two groups of investors who behave differently: "In computer simulations by economists in Belgium, the two groups start interacting in unexpected ways, and price bubbles and crashes arise spontaneously. The market switches from a well-behaved linear system in which one factor adds predictably to the next, to a chaotic nonlinear system in which factors interact and yield the unexpected." This switch from linearity to nonlinearity is crucial. A linear system can be broken into individual elements — "black boxes" — and the behaviour of the entire system can be analysed by considering the behaviour of each individual element. This is not possible with a nonlinear system, which cannot be broken down in the same way. Instead, it is only possible to analyse the system as a whole — a far more complex task.  Read more at location 1071

****  The financial crisis of 2008 (which Mandelbrot considers in later versions of his book) is often blamed on the misselling of subprime mortgages in the United States. However, this implies that we can break down the market into individual units (one of those units being the selling of mortgages) and place the blame for the crisis on one particular unit — as if the selling of mortgages was the "cause" of the crash. But this is simply not the case in a nonlinear system. A nonlinear system cannot be subdivided. Its individual units cannot be considered in isolation. No single unit can be considered as being the cause of the overall behaviour.  Read more at location 1078

A recent news story was that a 90-year-old famous man had "died from pneumonia". But, of course, he did not die of pneumonia: he died from being 90 years old. When you are 90 years old, even a common cold can kill you. But the true cause of death would not be the cold — it would be your age. To try to isolate one particular cause is to miss the bigger picture, it is to fail to see that the real problem lies with the entire nonlinear system (yes, a human body is another example of a nonlinear system). Once again, individual units cannot be considered in isolation.  Read more at location 1082

It is tempting to identify a single cause and try to isolate that cause to eliminate the possibility of market chaos. But the possibility of chaos is always latent in the system and can never be eliminated — it is built into the design of the system. It is an inherent property of nonlinear systems.  Read more at location 1088

****  we can never eliminate the possibility of chaos in our financial markets. All we can do is try to insulate ourselves from the financial storms when they do eventually come. As Mandelbrot says: "For centuries, shipbuilders have put care into the design of their hulls and sails. They know that, in most cases, the sea is moderate. But they also know that typhoons arise and hurricanes happen. They design not just for the 95 percent of sailing days when the weather is clement, but also for the other 5 percent, when storms blow and their skill is tested. The financiers and investors of the world are, at the moment, like mariners who heed no weather warnings."  Read more at location 1091

the system of financial markets is not the nonlinear system which poses the greatest threat for the stability of humanity. The complex military network around the world poses the greatest threat. When nations arm themselves, they might feel they are acting responsibly as they have created a strong defence against possible aggression. But all they are really doing is adding to the complexity of the existing worldwide military nonlinear system. They are simply making a bad situation worse.  Read more at location 1096

****  "The tragedy of the war was that no matter what carnage unfolded, they believed there was not only a science of ballistics, but a science of war: a way of predicting how many men per mile of front would win the objective. They thought if they understood the science of war they could control its outcome — they could simply overpower the chaos. In the end, they weren't fighting each other. Both sides were fighting the chaos of reality. In the end, the chaos defeated them all, and ten million men died."  Read more at location 1108

6 HOLLYWOOD 

However, we are not only transported to another world when we enter a movie theatre. Modern consumer society is based on creating artificial reality.  Read more at location 1120

In 1981, the French philosopher Jean Baudrillard wrote a short book entitled Simulacra and Simulation which explored the idea of modern life being a simulation. Baudrillard picked the particular example of Disneyland as an extreme example of a completely immersive artificial reality.  Read more at location 1125

Baudrillard is making the point that visitors to Disneyland mistakenly perceive Main Street in Disneyland as being an accurate — though improved — copy of a Main Street in a real American town outside the gates of Disneyland. Whereas the truth is that such an old-fashioned, perfect Main Street simply does not exist in modern America. So Disneyland is not a copy of anything: Disneyland is a completely original reality. It is not accurate to call Disneyland an "artificial reality" because that would imply that it was a copy of a reality that actually existed. No, in Disneyland (and in our modern consumer culture) the simulation has become the reality. It is literally the American dream. Baudrillard referred to this artificial reality, which has no basis or resemblance to actual reality, as hyperreality.  Read more at location 1132

The Matrix 

The Matrix was a science fiction movie released in 1999 which became something of a sensation, going on to become the biggest-selling DVD of all time. About eight minutes into the movie, the character played by Keanu Reeves takes a book down from a shelf. The book is hollow. It is a fake book, a simulation. In its hollow compartment it contains some computer disks. The book is none other than Baudrillard's Simulacra and Simulation.  Read more at location 1144

The idea dates back as far as 1641 when the French philosopher René Descartes suggested the possibility that the entire world was merely an illusion created by an evil demon. In that case, Descartes suggested that all sensations might be inserted directly into the mind, and even the human body might be an illusion. In the modern era, a computer-based version of the evil demon was provided by the American philosopher Hilary Putnam. Putnam proposed what is known as the brain-in-a-vat thought experiment. Putnam suggested that an evil mad scientist might have extracted your brain and was keeping it alive in a vat of liquid. All sensory neurons of your brain were connected to an external computer which was supplying an accurate simulation of an external world. The disembodied brain would have no idea of its predicament, and would probably be blissfully happy.  Read more at location 1153

Simulacron‑3 

In 1964, the second book by science fiction author Daniel F. Galouye was published. The book was called Simulacron‑3. It was a remarkably foresighted work, and has been called a "virtual reality novel from a time before virtual reality". Not only was the novel ahead of its time in terms of technology, its style was also highly influential, with the book described as the first "cyberpunk" novel.  Read more at location 1172

There is a wonderful quote from one of the characters in Simulacron‑3: "You can hardly stuff people into machines without starting to wonder about the basic nature of both machines and people." So here we have an example of a simulated reality which exists merely as states in a computer. People are programmed into existence.  Read more at location 1187

In The Sims, virtual people go about their daily activities oblivious to the fact that they are being observed by the game player who retains complete control over the environment. The virtual people have individual personalities and experience desires and fears. There is no obvious goal to the lives of these "people" except to entertain the game player. Again, as with the case of the brain-in-a-vat thought experiment, this seems to have implications for certainty and uncertainty. As Descartes realised back in the 17th century, if you cannot be certain as to whether or not you are in a simulation, then you cannot be certain of anything.  Read more at location 1192

cat and no mat: they would be states in a computer program. This type of general uncertainty — in which you can no longer be certain of anything — is called Cartesian uncertainty.  Read more at location 1199

The Thirteenth Floor

we could consider an infinity of levels. The point is that — no matter at which level you find yourself — you could never be certain that you were at the highest level. Even at the level of the supposed simulation programmers there would always be uncertainty as to whether there was a level above. It would appear that uncertainty is a fundamental property of all universes.  Read more at location 1216

I believe Descartes' argument — which was first presented in the 17th century — remains valid to this day. The argument reveals that there lies a fundamental, inescapable uncertainty at the heart of reality.  Read more at location 1221

we can rephrase Descartes' argument along more scientific terms as "there will always be an uncertainty about any truth which lies outside our universe". I believe it is very hard to argue with this statement.  Read more at location 1223

****  Uncertainty is inescapable, not just in our universe, but in any conceivable universe. In fact, I believe this could be considered a fundamental principle, 

...Uncertainty is the only certainty.  Read more at location 1229

The multiverse 

these hypotheses suggest that there are a vast number of alternate universes. These parallel universes form what is known as the multiverse. The associated theories are called multiverse theories. (If you read my first book, you will know I am not a fan of multiverse theories,  Read more at location 1235

A popular multiverse theory is the Many Worlds interpretation (MWI) of quantum mechanics.  Read more at location 1239

The MWI says there is no uncertainty — it knows precisely what is going to happen. The MWI says that all possible outcomes occur, but each of those outcomes only occurs in a different parallel universe.  Read more at location 1244

The MWI arises from a dislike of uncertainty and a desire for certainty. Instead of accepting that only one reality is chosen at random, the MWI states that all possible worlds exist.  Read more at location 1248

****  Scientists prefer linearity over nonlinearity, as was made clear in the earlier chapter on chaos. Linearity is neater, more elegant, and — crucially — easier to analyse. But Nature tends to be determinedly nonlinear, as Benoit Mandelbrot made clear. And, as I said earlier in this book, the key to being a good physicist is to listen to Nature talking, and not to impose your own preconceptions on how Nature should behave.  Read more at location 1252

****  The MWI very sneakily attempts to exchange the uncertainty of quantum mechanics for the uncertainty of Descartes' demon.  Read more at location 1257

****  Sabine Hossenfelder features a quote from a famous (unnamed) physicist: "The multiverse, the simulation hypothesis, modal realism, or the Singularity — it's all the same nonsense, really." The MWI emerged from the belief that there can be no such thing as fundamental uncertainty. If we have fundamental uncertainty we don't need many worlds — one universe is more than enough.  Read more at location 1263

Einstein was so repelled by the idea that he constructed endless thought experiments to argue with Niels Bohr. Bertrand Russell tried to eliminate uncertainty from mathematics by writing, with Alfred North Whitehead, the three huge volumes of Principia Mathematica. In the end, though, Bell's theorem proved that Einstein was wrong, and Kurt Gödel's incompleteness theorem proved that Russell was wasting his time. Some of the greatest minds have been made to look rather foolish in their attempts to eliminate fundamental uncertainty. The MWI is the latest in a long line of attempts to eliminate fundamental uncertainty.  Read more at location 1267

Anthropic reasoning

One of the most appealing features of multiverse theories is that they seem to explain the apparent fine-tuning of some of the fundamental constants of Nature.  Read more at location 1273

the great physicist John Wheeler said: "It is not only that man is adapted to the universe, the universe is adapted to man. Imagine a universe in which one or another of the fundamental dimensionless constants of physics is altered by a few percent one way or another. Man could never come into being in such a universe."  Read more at location 1275

One such apparently fine-tuned value is the proposed dark energy density of the universe, which essentially plays the role of Einstein's cosmological constant (considered in detail in my second book). It is believed that the dark energy density controls the rate at which the expansion of the universe is accelerating. The dark energy density appears to be set to the particular value which allows the universe to expand at just the correct rate to be amenable to life.  Read more at location 1278

It appears that the required value for the dark energy density is extraordinarily small: in the order of a millionth of a billionth of a joule per cubic centimetre. This is equivalent to just six protons per cubic metre. However, the predicted value of the quantum vacuum energy is calculated to be far larger (by a factor of 10¹²⁰) than this measured value. If the quantum vacuum energy really is the source of dark energy then there must be some mechanism which reduces its value, such as the energies of different types of symmetrical particles cancelling each other out. However, it seems bizarre that a cancelling mechanism should leave such a tiny value. If the value of dark energy was zero, not this strange tiny non-zero value, then it would be much easier to accept. It appears almost as if the value has been set specifically to allow the formation of life-supporting structures in the universe.  Read more at location 1283
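The "six protons per cubic metre" figure above is a simple consequence of E = mc². A back-of-the-envelope sketch (using rounded, illustrative input values, not precise measurements) converts the quoted dark energy density into proton rest-mass equivalents:

```python
# Convert a dark energy density of ~1e-15 J/cm^3 ("a millionth of a
# billionth of a joule per cubic centimetre") into equivalent protons
# per cubic metre, via the proton's rest energy E = mc^2.

C = 2.998e8            # speed of light, m/s
M_PROTON = 1.6726e-27  # proton mass, kg

dark_energy_density = 1e-15 * 1e6   # J/cm^3 converted to J/m^3

proton_rest_energy = M_PROTON * C**2              # roughly 1.5e-10 J
protons_per_m3 = dark_energy_density / proton_rest_energy

print(f"~{protons_per_m3:.1f} protons per cubic metre")
```

With these rounded inputs the answer comes out at roughly six to seven protons per cubic metre, matching the figure in the text.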

This type of reasoning is called anthropic reasoning, and is one of the most contentious aspects of multiverse theories. It is either one of the most attractive features, or one of the most reviled features — depending on your point of view. On the plus side, it seems to solve apparently intractable problems regarding fine-tuning. On the negative side, it doesn't really solve anything at all: the values of the fundamental constants have not been determined uniquely by the theory.  Read more at location 1291

if we don't give up and instead continue our hard work to better understand the laws of Nature then we might find alternative solutions which do not require fine-tuning, or maybe solutions which unambiguously determine the values of the physical constants.  Read more at location 1298

7.  THE SAGA OF THE SOUTH POLE AND THE MULTIVERSE 

The purpose of the BICEP2 telescope is to detect microwave radiation from the cosmic microwave background (CMB) which is the most distant — and therefore the oldest — object it is possible to observe. This radiation has the potential to reveal the imprint of gravitational waves from the early universe. Virtually every physicist believes gravitational waves exist, but they have never been directly detected.  Read more at location 1320

the Sun is moved — or disappears — then the effect of the disturbance would not be felt instantaneously. Instead, it would take approximately eight minutes for the effect to be felt on Earth, the force reaching the Earth via gravitational waves. It is predicted that gravitational waves will deform any space they pass through, squashing that space into an elliptical shape. Space would wobble like a jelly, first vertically and then horizontally:  Read more at location 1325
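The delay mentioned above follows directly from the fact that gravitational waves travel at the speed of light: it is simply the Earth-Sun distance (one astronomical unit) divided by c. A quick check, with rounded values:

```python
# Travel time for a gravitational disturbance from the Sun to the Earth,
# assuming gravitational waves propagate at the speed of light.

AU = 1.496e11  # mean Earth-Sun distance, metres
C = 2.998e8    # speed of light, m/s

delay_seconds = AU / C
print(f"{delay_seconds:.0f} seconds (~{delay_seconds / 60:.1f} minutes)")
```

The result is about 500 seconds, i.e. a little over eight minutes — the same delay as for the Sun's light.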

because gravity is such a weak force, these waves — and their deformation of space — would be very weak, and that makes them very difficult to detect. Which brings us back to the BICEP2 telescope. It was predicted that gravitational waves from the enormous explosion of the Big Bang would have become frozen into the structure of the CMB. Due to the elliptical squashing of space, the radiation was expected to be polarised in the direction of the major axis of the ellipse.  Read more at location 1328

The particular type of polarised wave predicted to have been produced by the Big Bang was called B‑mode polarisation. The possible detection of B‑modes was considered to be so important because it would also represent a "smoking gun" for the inflation hypothesis.  Read more at location 1333

on the morning of 17th March, I was one of many thousands of excited cosmologists and physicists eagerly sitting by their computers waiting to download the BICEP paper as soon as it was made available. The paper certainly looked impressive.[11] It announced the detection of B‑modes with a 5 sigma confidence level. This meant that if the signal was due to chance, then the experiment would have to be repeated about 3.5 million times before you would expect to find such a result (basically, it was highly unlikely the result was due to chance). The paper included a picture which showed the various twisty polarisation directions in the CMB, the "pinwheel" twists being characteristic of B‑mode polarisation. The picture went around the world and was reprinted in many major newspapers. This looked extremely convincing:  Read more at location 1339
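The "1 in 3.5 million" figure is just the one-sided tail probability of a normal distribution beyond five standard deviations, which can be computed from the complementary error function:

```python
import math

# One-sided tail probability of a normal distribution beyond 5 sigma:
# p = 0.5 * erfc(sigma / sqrt(2))

sigma = 5.0
p = 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"p = {p:.3g}, i.e. about 1 in {1 / p:,.0f}")
```

This gives a probability of roughly 2.9 × 10⁻⁷, or about 1 in 3.5 million, matching the figure quoted in the text.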

Scientists are notorious for downplaying their results, for being humble, and especially for being cautious. This paper was not like that. There was a certain triumphalism about the paper, a sense of hubris. The last sentence captured the tone: "The long search for tensor B‑modes is apparently over, and a new era of B‑mode cosmology has begun".  Read more at location 1346

This was science crossed with reality TV. It seemed aimed more at generating media buzz and column inches than providing a sober and cautious assessment of the result. This was not how science was supposed to be done.  Read more at location 1353

I covered inflation in detail in my second book, but it can be explained briefly how the inflation hypothesis solved two outstanding problems in cosmology. The horizon problem: it was a mystery how the temperature of the CMB could be so evenly distributed when distant regions were not causally connected (i.e., light had not had enough time to travel between the regions, equalising the temperature). The flatness problem: the universe appears to be spatially flat. This is a mystery because, according to general relativity, the slightest initial variations from flatness would have been amplified over time. Inflation appears to solve both these problems. The horizon problem is solved because temperature equalisation could have occurred before inflation, when the universe was still extremely small. The flatness problem is solved because inflation smooths and flattens the universe — like a wrinkly rubber sheet being stretched. No one is sure why inflation started and, perhaps more importantly, no one knows why it ended.  Read more at location 1365

But if our universe was initially a tiny region of space, maybe that suggests that it was merely part of a much larger region of space. And maybe that region was undergoing unrestrained continuous inflation. Our universe would then just be a region in which the inflation process decayed and stopped. As Max Tegmark says in his book Our Mathematical Universe: "In other words, what we've called our Big Bang wasn't the ultimate beginning, but rather the end — of inflation in our part of space." So maybe our universe has a much vaster region of continuously-inflating space outside it. In which case, maybe other universes are popping into existence continuously inside that larger region of eternally-inflating space:  Read more at location 1376

it is another multiverse theory, this theory being called eternal inflation. This is a particularly important multiverse theory because of its close connection with the inflation hypothesis. It has been said that if inflation is true then it is very likely that eternal inflation is true. I think it is fair to say that the eternal inflation hypothesis is probably the most widely-accepted multiverse theory. However, there are plenty of critics of eternal inflation. One of the most notable is Paul Steinhardt, a professor at Princeton University.  Read more at location 1382

****  Steinhardt is notable in this respect as, surprisingly, he is one of the originators of the inflation hypothesis. In an interview for Scientific American in 2014, Steinhardt produced a remarkable criticism of inflation and the multiverse, suggesting that inflation creates as many problems as it solves: "The whole point of inflation was to get rid of fine-tuning — to explain features of the original big bang model that must be fine-tuned to match observations. The fact that we had to introduce one fine-tuning to remove another was worrisome. This problem has never been resolved. But my concerns really grew when I discovered that, due to quantum fluctuation effects, inflation produces a multitude of patches (universes) that span every physically conceivable outcome (flat and curved, smooth and not smooth, isotropic and not isotropic, etc.). So we have not explained any feature of the universe by introducing inflation after all. To me, the accidental universe idea is scientifically meaningless because it explains nothing and predicts nothing. Also, it misses the most salient fact we have learned about large-scale structure of the universe: its extraordinary simplicity when averaged over large scales. Scientific ideas should be simple, explanatory, predictive. The inflationary multiverse as currently understood appears to have none of those properties."   Read more at location 1386

I would suggest you take a look at my second book, but I can briefly explain the hypothesis here. If we consider that the universe must have zero total energy, then that suggests a quite ingenious modification to the theory of general relativity (gravity). Instead of attracting objects to an infinitely small distance (as is predicted by general relativity), the modified gravity hypothesis predicts that objects would be attracted until they were a certain distance apart: an equilibrium distance which reflects the zero-energy condition. The value of this equilibrium distance is the well-known Schwarzschild radius (a distance equal to the event horizon of a black hole). Fortunately, for almost every object this equilibrium distance — its Schwarzschild radius — is an incredibly small distance. For example, the Schwarzschild radius of a human being is far smaller even than an atomic nucleus. Crucially, this meant the hypothesis agreed with all existing measurements of general relativity, and explained why we only ever see gravity as an attractive force.  Read more at location 1398

However, this subtle modification made quite extraordinarily accurate additional predictions. For the universe as a whole, the Schwarzschild radius is very close to the radius of the observable universe, so if the universe is expanding to its Schwarzschild radius then this would explain the observed radius of the universe, and the observed accelerated expansion of the universe (generally attributed to dark energy). It also agreed with the two main predictions of the inflation hypothesis. It solved the horizon problem, and predicted a flat universe (as spatial flatness arises naturally in a universe at its Schwarzschild radius). For the details, see my second book. So, crucially, the modified gravity hypothesis predicted a flat universe without the need for the mysterious fine-tuned dark energy (considered in the last chapter), and so removed the need for a multiverse. And in many ways, the predictions of the hypothesis went beyond the explanatory power of inflation. The hypothesis removed the singularities at the heart of black holes, presented a simple solution to the mystifying black hole information loss paradox, and it also predicted the correct value for black hole entropy.  Read more at location 1405
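The claim that the Schwarzschild radius of the universe is close to the radius of the observable universe can be checked with a rough calculation. The sketch below (using rounded, illustrative values for the Hubble constant and taking the universe's mass as the critical density times a Hubble volume) shows the two radii come out comparable; indeed, for a critical-density universe with radius c/H₀ the ratio is exactly one:

```python
import math

# Compare the Hubble radius c/H0 with the Schwarzschild radius 2GM/c^2
# of a universe at the critical density. Input values are rounded.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s
H0 = 2.2e-18    # Hubble constant, s^-1 (roughly 68 km/s/Mpc)

hubble_radius = C / H0                              # ~1.4e26 m
critical_density = 3 * H0**2 / (8 * math.pi * G)    # kg/m^3
mass = critical_density * (4 / 3) * math.pi * hubble_radius**3
schwarzschild_radius = 2 * G * mass / C**2

print(f"Hubble radius:        {hubble_radius:.2e} m")
print(f"Schwarzschild radius: {schwarzschild_radius:.2e} m")
```

The algebra collapses neatly: with M = ρ_crit × (4/3)πR³ and ρ_crit = 3H₀²/(8πG), the Schwarzschild radius 2GM/c² reduces to exactly R when R = c/H₀.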

However, what the hypothesis did not predict was an initial explosive expansion generating primordial gravitational waves in the CMB. So the BICEP result appeared to be in conflict with the hypothesis. If I am honest, I was fairly disappointed when I heard the BICEP news — it looked like I had screwed up big time.  Read more at location 1413

Flauger went on to co-author a paper with David Spergel which suggested that the BICEP signal could be completely due to polarised dust. The story emerged that David Spergel had first realised the possibility of dust contamination back in March while travelling on a train to New York to give a lecture. In June, the BICEP paper was finally published with some of its claims watered down. The infamous last sentence had been replaced. In September, a paper published by the Planck team announced that the amount of dust in BICEP's "Southern Hole" was considerably greater than had been assumed. There were no regions of the sky where BICEP could have attained a signal completely clear of dust, but they were unfortunate to have chosen a particularly dusty window. Planck showed that the expected signal from dust was a good match for the BICEP signal.  Read more at location 1452

If the 20th century has taught us anything, it is that we should beware of those who speak loudest of absolutes, with absolute certainty. Instead, we should accept that Nature is based on relatives and uncertainty (relativity and quantum mechanics).  Read more at location 1475

NOTES 

Another visionary quote from Lord Kelvin in 1895 was that "Heavier than air flying machines are impossible."  Read more at location 1499

If we cannot trust our senses, then Descartes reasoned that we could only trust the contents of our own minds. Hence his famous dictum cogito, ergo sum ("I think, therefore I am").  Read more at location 1510