I no longer identify as a “skeptic”, but I still enjoy debunking. People claiming things which are, in fact, untrue are fairly common.

I submit that (The Singularity Institute/the internet community LessWrong/affiliated institutions) are a self-proclaimed elite group who treat Bayes' Theorem not as a mathematical theorem regarding evidence, probability, and reality, but as the focal point of an irrational devotion.

Bayes' Theorem is a simple formula that relates the probabilities of two different events that are conditional upon each other. First proposed by the 18th-century Calvinist minister Thomas Bayes, the theorem languished in relative obscurity for nearly 200 years, an obscurity some would say was deserved. In its general form it follows trivially from the definitions of classical probability, which were formalised not long after Bayes' death by Pierre-Simon Laplace and others. From the time of Laplace until the 1960s, Bayes' Theorem barely merited a mention in statistics and probability textbooks. [1]
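For reference, the "simple formula" in question is the standard identity relating the two conditional probabilities:

```latex
P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)},
\qquad
P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A).
```

Read with A as a hypothesis and B as an observation, the left-hand side is the revised belief and everything on the right is assessed before the observation is made.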

The theorem owes its present-day notoriety to Cold-War-era research into statistical models of human behaviour. When probability was interpreted not as a measure of chance, but as a measure of the confidence an agent has in its subjective beliefs, Bayes' Theorem acquired great prescriptive power: it expressed how a perfectly rational agent should revise its beliefs upon obtaining new evidence. In other words, the researchers had discovered in Bayes' Theorem an elegant, one-line formula for how best to learn from experience.
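As a concrete (and entirely invented) illustration of that one-line recipe for learning from experience: an agent starts undecided about whether a coin is biased towards heads, observes three heads in a row, and revises its belief upward after each one.

```python
# A minimal sketch of Bayesian belief revision, with invented numbers.
# Hypothesis H: the coin is biased towards heads, P(heads) = 0.8.
# Alternative ~H: the coin is fair, P(heads) = 0.5.

def update(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' Theorem: return P(H | observation) from P(H) and the likelihoods."""
    p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / p_obs

belief = 0.5                          # start undecided
for _ in range(3):                    # observe three heads in a row
    belief = update(belief, 0.8, 0.5)

print(round(belief, 3))               # → 0.804
```

Each observation multiplies the odds on H by the likelihood ratio 0.8/0.5; three heads take the agent from even odds to about 80% confidence.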

This interpretation of probability differed from that of the frequentists, though with enough data the two approaches tend to converge on similar conclusions. Frequentists kept pointing to Bayesianism's central difficulty: assigning a prior probability. Bayesians retorted that any statistical analysis is conditioned on a prior; the only question is whether you acknowledge it explicitly.

Cognitive scientists seized on Bayes as a whole new way to introduce quantifiable crap into their models of human behaviour. AI researchers used electronic Bayesian brains in a number of "intelligent" systems — classifiers, learning systems, inference systems, rational agents.

More pertinently to my interests, Bayesianism has also found a strong foothold in nerd culture. Bayes' theorem is now the stuff of gurus and conventions, T-shirts and fridge magnets, filking and fanfic. There are people who strive to live by its teachings. How did this formula create such a popular sensation? Why do so many people identify so strongly with it?

Perhaps the answer lies in this: In one simple line, Bayes' Theorem tells you how a perfectly rational being should use its observations to learn and improve itself: it's instructive, aspirational and universal. Other mathematical and scientific laws are merely the truth, but Bayes' Theorem is the definitive law on evidence, probability, and reality.

The theorem has some useful properties. It's a small and simple formula, but it regularly works minor miracles. Its power often surprises; it has a habit of producing counterintuitive but correct results; it seems smarter than you are.
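One standard example of those counterintuitive but correct results (with invented but typical numbers): a rare disease afflicts 1 person in 1000, and a test for it is 99% sensitive with a 5% false-positive rate. Most people guess that a positive result means near-certain illness; Bayes' Theorem says otherwise.

```python
# The classic base-rate surprise, with invented but typical numbers.
prevalence = 0.001            # P(sick): the disease is rare
p_pos_given_sick = 0.99       # test sensitivity
p_pos_given_healthy = 0.05    # false-positive rate

# Total probability of testing positive, then Bayes' Theorem:
p_pos = p_pos_given_sick * prevalence + p_pos_given_healthy * (1 - prevalence)
p_sick_given_pos = p_pos_given_sick * prevalence / p_pos

print(round(p_sick_given_pos, 3))   # → 0.019: under 2%, not 99%
```

The false positives from the healthy 99.9% swamp the true positives from the sick 0.1%, which is exactly the sort of result that makes the theorem "seem smarter than you are".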

Some Bayesians think they have used the theorem to prove that the sentence "absence of evidence is not evidence of absence" is a logical fallacy. [2] (Ed: This is accurate and follows directly from Bayes' Theorem, or common sense if you think about it for a minute.)
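The underlying Bayesian point can at least be checked with toy numbers (all invented for illustration): if observing E would raise your confidence in H, then failing to observe E must lower it, so under this reading absence of evidence is always some evidence of absence.

```python
# Toy check of the Bayesian claim (all numbers invented):
# if seeing evidence E raises P(H), then *not* seeing E must lower it.

p_h = 0.3                # prior probability of hypothesis H
p_e_given_h = 0.9        # P(E | H): E is likely if H is true
p_e_given_not_h = 0.4    # P(E | ~H): E is less likely otherwise

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e                   # ≈ 0.491
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)   # ≈ 0.067

print(p_h_given_e > p_h)       # True: E is evidence for H
print(p_h_given_not_e < p_h)   # True: absence of E counts against H
```

Whether this settles the argument depends entirely on how you interpret the original sentence, a point taken up in note [2].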

Historian Richard Carrier, in his book Proving History, claims that any valid historiographic method should be reducible to Bayes' Theorem. In the general sense, this claim couldn't be less interesting: both Bayes and historians are concerned with getting at "truth" by processing "evidence". In the specific sense, the claim is impractical.

In general practice, choosing a prior is difficult, and so Bayesians may end up choosing values that happen to justify their existing beliefs. The Reverend Bayes originally used his theorem to prove the existence of God, while in his next book, Carrier will apparently use the same theorem to disprove the existence of the historical Jesus.
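The sensitivity to priors is easy to demonstrate numerically: feed the same evidence (here a fixed likelihood ratio of 10 in favour of H, a number chosen purely for illustration) to agents starting from different priors, and they walk away with very different conclusions.

```python
# Same evidence, different priors: a sketch of how the choice of prior
# can end up justifying existing beliefs. The likelihood ratio is invented.

def posterior(prior, likelihood_ratio):
    """Update a prior using the odds form of Bayes' Theorem."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

for prior in (0.01, 0.5, 0.9):
    print(prior, round(posterior(prior, 10), 3))
```

Roughly: the sceptic ends up near 0.09, the agnostic near 0.91, the believer near 0.99. Identical evidence, incompatible verdicts, and the difference lives entirely in the prior.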

The most visible group of Bayesians on the web, and my subject for the rest of this article, is the Singularity Institute of Berkeley, California. This organisation believes that a hostile AI may take over the world and destroy us all (in an event known as the singularity) unless we do something about it — like make regular donations to the Singularity Institute. I do not observe any material progress towards this goal, nor do I think its prediction of unfriendly AI is credible or likely. [3]

The principal website associated with the Singularity Institute is lesswrong.com, which describes itself as "a community blog devoted to refining the art of human rationality". I'm still unsure as to why becoming Bayesian will help us against the singularity threat. The main content of LessWrong is a large number of "sequences", several of which I have read.

The "sequences" are a series of blog entries written by Lesswrong's central figure, Eliezer Yudkowsky. I do not see the appeal in these “sequences”, and find them impenetrable, heavy with nerd references and idiosyncratic jargon, and written in a bombastic, unpleasing way. I find the following excerpts particularly galling:
"correct complexity is only possible when every step is pinned down overwhelmingly"
"They would need to do all the work of obtaining good evidence entangled with reality, and processing that evidence coherently, just to anticorrelate that reliably"

I believe that the main purpose of this kind of writing is to mystify and to overawe. Yudkowsky has the pose of someone trying to communicate his special insight, but the prose of someone trying to conceal his lack of it.

In Western culture, people are conditioned to admire the voice in the wilderness, the one who goes it alone, the self-made man, the pioneer. Popular scientific narratives are filled with tales of the "lone genius" who defies the doubters to make his great advances; these tales are not representative of reality, but people desperately want to believe them. Yudkowsky, a self-proclaimed polymath whom I find highly egotistical and who has no visible achievements to his name, fits this narrative well.

The trouble with autodidacts is that they tend to suffer a severe loss of perspective. Never forced to confront ideas they don't want to learn, never forced to put what they've learned in a wider social context, they tend to construct a self-justifying and narcissistic body of knowledge, based on an idiosyncratic pick-and-mix of the world's philosophies. They become blinded to the incompleteness of their understanding, and prone to delusions of omniscience, writing off whole areas of inquiry as obviously pointless, declaring difficult and hotly-debated problems to have a simple and obvious answer. Yudkowsky and the Institute's executive director, Luke Muehlhauser, exhibit all these symptoms in abundance.

Many critics of the Singularity Institute focus on its cult-like nature. I'm more concerned about the political views it disseminates under the guise of being stridently non-political.

One of Yudkowsky's constant refrains, appropriating language from Frank Herbert's Dune, is "Politics is the Mind-killer". Under this rallying cry, Lesswrong insiders attempt to purge discussions of any political opinions they disagree with. They strive to convince themselves and their followers that they are dealing in questions of pure, refined "rationality" with no political content. However, the version of "rationality" they preach is expressly politicised.

The Bayesian interpretation of statistics is in fact an expression of some heavily loaded political views. Bayesianism projects a neoliberal/libertarian view of reality: a world of competitive, goal-driven individuals all trying to optimise their subjective beliefs. Given the demographics of lesswrong.com, it's no surprise that its members have absorbed such a political outlook, or that they consistently push political views which are elitist, bigoted and reactionary.

Yudkowsky believes that "the world is stratified by genuine competence" and that today's elites have found their deserved place in the hierarchy. This is a comforting message for a cult that draws its membership from a social base of Web entrepreneurs, startup CEOs, STEM PhDs, Ivy leaguers, and assorted computer-savvy rich kids. Yudkowsky so thoroughly identifies himself with this milieu of one-percenters that even when discussing Bayesianism, he slips into the language of a heartless rentier. A belief should "pay the rent", he says, or be made to suffer: "If it turns deadbeat, evict it."

The sufferings caused by today's elites — the billions of people who are forced to endure lives of slavery, misery, poverty, famine, fear, abuse and disease for their benefit — are treated at best as an abstract problem, of slightly lesser importance than nailing down the priors of a Bayesian formula. While the theories of right-wing economists are accepted without argument, the theories of socialists, feminists, anti-racists, environmentalists, conservationists or anyone who might upset the Bayesian worldview are subjected to extended empty "rationalist" bloviating. On the subject of feminism, Muehlhauser adopts the tactics of an MRA concern troll, claiming to be a feminist but demanding a "rational" account of why objectification is a problem. Frankly, the Lesswrong brand of "rationality" is bigotry in disguise.

Lesswrongians are so careful at disguising their bigotry that it may not be obvious to casual readers of the site. For a bunch of straight-talking rationalists, Yudkowsky and friends are remarkably shifty and dishonest when it comes to expressing a forthright political opinion. Political issues surface all the time on their website, but the cult insiders hide their true political colours under a heavy oil slick of obfuscation. It's as if "Politics is the mind-killer" is a policy enforced to prevent casual readers — or prospective cult members — from realising what a bunch of far-out libertarian fanatics they are.

Take as an example Yudkowsky's comments on the James Watson controversy of 2007. Watson, one of the so-called fathers of DNA research, had told reporters he was "gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours — whereas all the testing says not really". Yudkowsky used this racist outburst as the occasion for some characteristically slippery Bayesian propagandising. In his essay, you'll note that he never objects to or even mentions the content of Watson's remarks — for some reason, he approaches the subject by sneering at the commentary of a Nigerian journalist — and neither does he question the purpose or validity of intelligence testing, or raise the possibility of inherent racism in such tests. Instead, he insinuates that anti-racists are appropriating the issue for their own nefarious ends:

"Race adds extra controversy to everything; in that sense, it's obvious what difference skin colour makes politically".

Yudkowsky appears to think that racism is an illusion or at best a distraction. He stresses the Bayesian dogma that only individuals matter:

"Group injustice has no existence apart from injustice to individuals. It's individuals who have brains to experience suffering. It's individuals who deserve, and often don't get, a fair chance at life. [...] Skin colour has nothing to do with it, nothing at all."

Here, he tells the victims of racial discrimination to forget the fact that their people have been systematically oppressed by a ruling elite for centuries, and face up to the radical idea that their suffering is their own individual problem. He then helpfully reassures them that none of it is their fault; they were screwed over at birth by being simply less intelligent than the creamy white guys at the top:

"Never mind the airtight case that intelligence has a hereditary genetic component among individuals; if you think that being born with Down's Syndrome doesn't impact life outcomes, then you are on crack."

Yudkowsky would reject the idea that these disadvantaged individuals could improve their lot by grouping together and engaging in political action: politics is the mind-killer, after all. The only thing that can save them is Yudkowsky's improbable fantasy tech. In the future, "intelligence deficits will be fixable given sufficiently advanced technology, biotech or nanotech." And until that comes about, the stupid oppressed masses should sit and bear their suffering, not rock the boat, and let the genuinely competent white guys get on with saving the world.

Social Darwinism is a background assumption among the lesswrong faithful. Cult members have convinced themselves the world's suffering is a necessary consequence of nature's laws, and absolved themselves from any blame for it. The strong will forever triumph over the weak, and mere humans can't do anything to change that. "Such is the hideously unfair world we live in", writes Yudkowsky, and while he likes to fantasise about eugenic solutions, and has hopes for "rational" philanthropy, the official line is that only singularity-level tech can solve the world's problems.

In common with many doomsday cults, singularitarians both dread and pray for their apocalypse; for while a bad singularity will be the end of humanity, a good singularity is our last best hope. The good singularity will unleash a perfect rationalist utopia: from each according to his whim, to each according to his IQ score. Death will be no more, everyone will have the libido of a 16-year-old horndog, and humankind will colonise the stars. In fact, a good singularity is so overwhelmingly beneficial that it makes all other concerns irrelevant: we should dedicate all our resources to bringing it about as soon as possible. Lesswrong cultists are already preparing for this event in their personal and private lives, by acting like it has already happened.

You might think those cuddle-puddles are cute and fluffy, but it's too convenient to give the members of lesswrong.com a pass because they're into a bit of free love. Lesswrongers might see themselves as the vanguard of a new sexual revolution, but there's nothing new or revolutionary about a few rich kids having an orgy. Even the "sexual revolution" of the late 60s and 70s was only progressive to the extent that it promoted equality in sexual activity. Its lasting achievement was to undermine the old patriarchal concept of sex as an act performed by a powerful male against passive subordinates, and to advance the concept of sex as a pleasure shared among equal willing partners. Judged by this standard, Lesswrong is if anything at the vanguard of a sexual counter-revolution.

Consider, for example, the fact that so many Lesswrong members are drawn to the de facto rape methodology known as Pick-Up Artistry. In this absurd but well-received comment, some guy calling himself "Hugh Ristik" tries to make a case for the compatibility of PUA and feminism, which includes the following remarkable insight:

"Both PUAs and feminists make some errors in assessing female preferences, but feminists are more wrong: I would give PUAs a B+ and feminists an F"

It's evident that "Hugh Ristik" sees himself as a kind of Bayes' Theorem on the pull, and that "female preferences" only factor into the equation to the extent that they affect his confidence in the belief he will get laid.

As another clue to the nature of Lesswrong's utopian sexual mores, consider that Yudkowsky has written a story about an idyllic future utopia in which it is revealed that rape is legal. The Lesswrong guru was bemused by the reaction to this particular story development; that people were making a big deal of it was "not as good as he hoped", because he had another story in mind in which rape was depicted in an even more positive light! Yudkowsky invites the outraged reader to imagine that his stand-in in the story might enjoy the idea of "receiving non-consensual sex", as if that should placate anyone. Once again we have a Bayesian individual generalising from his fantasies, apparently unmoved by the fact that "receiving non-consensual sex" is a horrible daily threat and reality for millions worldwide, or that people might find his casual treatment of the subject grossly disturbing and offensive.

All in all, I haven't seen anything on lesswrong.com to counter my impression that the "rational romantic relationships" its members advocate are mostly about reasserting the sexual rights of powerful males. After all, if you're a powerful male, such as a 21st-century nerd, then rationally, a warm receptacle should be available for your penis at all times, and rationally, such timeworn deflections as "I've got a headache" or "I'm already taken" or "I think you're a creep, stay away from me" simply don't cut it. Rationally, relationships are all about optimising your individual fuck function, if necessary at others' expense — which coincidentally means adopting the politics of "fuck everyone".

The main reason to pay attention to the Lesswrong cult is that it has a lot of rich and powerful backers. The Singularity Institute is primarily bankrolled by Yudkowsky's billionaire friend Peter Thiel, the hedge fund operator and co-founder of PayPal, who has donated over a million dollars to the Institute throughout its existence [4]. Thiel, who was one of the principal backers of Ron Paul's 2012 presidential campaign, is a staunch libertarian and lifelong activist for right-wing causes.

Other figures who are or were associated with the Institute include such high-profile TED-talkers as Ray Kurzweil [5], Aubrey de Grey, Jaan Tallinn, and Professor Nick Bostrom of Oxford University's generously-endowed Future of Humanity Institute, which is essentially a neoliberal think-tank in silly academic garb.

Buoyed by Thiel's money, the Singularity Institute is undertaking a number of outreach ventures. One of these is the Center for Applied Rationality, which, among other things, runs Bayesian boot-camps for the children of industry. Here, deserving kids become indoctrinated with the lesswrong version of "rationality", which according to the centre's website is the sum of logic, probability (i.e. Bayesianism) and some neoliberal horror called "rational choice theory". The great example of "applied rationality" they want these kids to emulate? Intel's 1985 decision to pull out of the DRAM market and lay off a third of its workforce. I guess someone needs to inspire the next generation of corporate downsizers and asset-strippers.

Here we see a real purpose behind lesswrong.com. Ultimately it doesn't matter that people like Thiel or Kurzweil or Yudkowsky are pushing a crackpot idea like the singularity; what matters is that they are pushing the poisonous ideas that underlie Bayesianism. Thiel and others are funding an organisation that advances an ideological basis for their own predatory behaviour. Lesswrong and its sister sites preach a reductive concept of humanity that encourages an indifference to the world's suffering, that sees people as isolated, calculating individuals acting in their self-interest: a concept of humanity that serves and perpetuates the scum at the top.

[1] My source for this claim is Stephen E. Fienberg's paper "When did Bayesian Inference become 'Bayesian'?". Fienberg diligently traces the use of Bayesian-like methods throughout the history of statistics, but it's clear from his account that "Bayesian" statistics did not become a coherent and formally identified movement until the 1950s.

[2] It's a nonsense in the first place to claim that a sentence is "a logical fallacy": only arguments can be fallacious, and "absence of evidence is not evidence of absence" is not an argument. I'll charitably assume that the writer means to claim that the sentence is logically invalid — in which case he's still probably wrong.

Declaring any natural language sentence "logically valid" is obviously problematic, since sentences have various interpretations and can mean absolutely different things in different contexts. Most people use the sentence "absence of evidence is not evidence of absence" as a snappy way to counter the logical fallacy of "argument from ignorance" — in other words, they use it to make the quite sensible observation that "just because you don't know something to be true, that doesn't necessarily mean it's false". This is certainly a consistent observation — it's true sometimes — and I think most of us would agree that it's valid too — true all the time. For it ever to be false, you'd have to allow the possibility of omniscience. Even Bayesians do not in general presume to omniscience, though some of them hope to get close.

Instead, for their own didactic reasons, they choose to interpret the original sentence differently, to mean "failure to observe evidence for something should not increase your confidence that it is false". It then contradicts their interpretation of Bayes' Theorem, and is therefore a heresy. But this heresy actually points to a problem in the hardcore Bayesian outlook. Ultimately a Bayesian agent is always a victim of its own ignorance: it's very easy to bamboozle it by selectively showing and denying it evidence. Left to its own devices, eventually its beliefs will propagate into a bizarrely self-referential worldview that bears no relation to the reality it finds itself in. The analogy to certain Bayesian cults should be obvious.
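That bamboozling is easy to simulate. Below, a sketch (with invented parameters): the coin is actually fair, but an adversary reports only the heads, and an honest Bayesian updater — doing nothing wrong by its own lights — is driven to near-certainty in a falsehood.

```python
# Sketch of the "selective evidence" failure mode described above.
# The coin is really fair, but an adversary silently withholds the tails;
# the agent updates correctly on everything it is shown, and ends up
# almost certain the coin is biased.

import random

random.seed(0)
belief = 0.5  # P(coin is biased towards heads, i.e. P(heads) = 0.8)

for _ in range(100):
    flip_is_heads = random.random() < 0.5   # the coin is actually fair
    if not flip_is_heads:
        continue                            # tails are never reported
    # Bayes update on an observed head:
    numerator = 0.8 * belief
    belief = numerator / (numerator + 0.5 * (1 - belief))

print(belief > 0.9)   # True: near-certainty in a false belief
```

Each reported head multiplies the odds on "biased" by 1.6, and the disconfirming tails never arrive — the self-referential worldview in miniature.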

[3] I am indebted to the rationalwiki page on lesswrong for much of the information in this and subsequent sections.

[4] Incidentally, years before he became a cult leader, Yudkowsky was unsuccessfully trying to popularise "to Paypal" as a generic verb for Internet money transfer.

[5] Kurzweil was also the founder of a separate organisation known as the "Singularity University", but now seems to be associated with neither. It seems that the Singularity Institute and Singularity University have recently had something of a dispute over the ownership of the "singularity" brand, and that the latter won out. By the time you read this, the Singularity Institute might even be known by its new name of "The Machine Intelligence Research Institute".
