Interstitial comments on Dawkins
Like part 1, part 2 received a lot of interesting, and generally quite cogent, comments. A few raised doubts about the whole effort, so I will break my promise and address them briefly before returning to the coalface.
George Weinberg shared his objections to the whole Dawkinsian “meme” metaphor. Of course I agree with these objections. George is right. (George is pretty much always right.) And I think the thing to remember is that the metaphor is only a metaphor. Genes are digital and “memes” are not, which is quite sufficient to shatter any pretense of logical rigor in the abstraction.
Nonetheless, once we accept that traditions exist and have names, we have accepted the problem of taxonomy. Humans have a remarkable bit of mental machinery devoted to classifying the world around us. When we apply this machinery to history, it seems to want to show us patterns of cultural continuity and evolution. Contrary to popular belief, it is possible to think both precisely and intuitively about history. This is where the cladistic metaphor is helpful, because we can borrow its rigorous logic for intuitive purposes only, even though there is no comparable underlying rigor in the “memetic” context.
There was some interesting discussion about the specifics of Universalism. Baduin suggested that we really have two distinct traditions, M.42 or classical Enlightenment “Old Left” liberalism, and M.43 or hippie postmodernist “New Left” Universalism. It’s certainly true that the two differ in some ways, and the latter is distinctly scarier. For example, it includes many mystical and romantic themes.
I think this is cutting the pie too finely. The liberal Western tradition over the last 250 or so years is a huge stew of themes which resists this level of classification. There is no exact “memetic” equivalent of reproductive isolation, but patterns of political conflict come close. To me, Universalism is best defined as the orthodox belief system that emerged in the West after World War II, and while its themes have definitely mutated over time, the whole thing strikes me as having a general aesthetic unity.
Perhaps we can use modifiers to distinguish between various Universalist tropes. Let’s call paleo-Universalism the original beast, à la Atlantic Charter; neo-Universalism, its 1960s mutation, à la Port Huron Statement; and retro-Universalism, the neoconservative resurrection of paleo-Universalism.
Let me also second Michael S’s response to Eliezer Yudkowsky—and try to broaden it slightly.
My original point about Eliezer’s reasoning was that he classifies traditions primarily as “theistic” or “nontheistic,” which is like classifying animals as “flying” or “non-flying.” Or maybe even as bad as classifying mammals as “long-haired” or “short-haired.”
Au contraire, Eliezer responds. He is classifying them as “evidence-based” or “non-evidence-based.” Everything else is just a matter of “literary style.”
The fons et origo of bias in the Yudkowskian school, I think, is the fact that Eliezer Yudkowsky is an AI researcher. He sees an easily defined, trivially correct algorithm for reasoning—Bayes’ Theorem—and latches onto it like a dog on a sausage. As anyone would, if they had a problem to solve and saw an obvious answer to it. And indeed I have no reason at all to believe that Bayesian inference will not be part of the first working AI, which someone—perhaps even Eliezer himself—will manage to build at some point.
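For reference, the trivially correct algorithm in question is the one-line identity

$$
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},
$$

which converts a prior $P(H)$ and a likelihood $P(E \mid H)$ into a posterior $P(H \mid E)$ once an observation $E$ comes in.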
But at some point in this process, Eliezer fell into a very deep trap. Because he decided to define all rational thought as Bayesian inference. Either you are applying Bayesian reasoning, or you are drifting in flights of whimsy. Hence “literary style.”
Set aside for a moment the generally accepted frequentist interpretation of probability, which informs us quite cogently that the concept of quantitative probability is meaningless except in the context of a defined sample space, and that the same therefore goes for Bayes’ Theorem. (I thank this QJAE paper for pointing me toward the frequentist school, whose insights I was groping painfully toward in this anti-Bayesian screed.) There is an even more obvious problem here, which is that neither Eliezer, nor I, nor you, dear reader, is an AI. Rather, we are two-legged apes, and we think with a big lump of fat.
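To make the sample-space objection concrete, here is a minimal sketch in Python. The coin example and every name in it are mine, not Eliezer’s: given a fully enumerated sample space and a stated prior, the Bayesian update really is a trivially correct mechanical computation.

```python
# Minimal illustrative sketch (my toy example, not from the post):
# Bayesian updating over a fully defined sample space -- a coin that is
# either fair or double-headed, with a stated prior over the two.

def bayes_update(prior, likelihoods):
    """Apply Bayes' Theorem: return P(H|E) for each hypothesis H.

    prior: dict of hypothesis -> P(H)
    likelihoods: dict of hypothesis -> P(E|H) for the observation E
    """
    joint = {h: prior[h] * likelihoods[h] for h in prior}  # P(E|H) * P(H)
    evidence = sum(joint.values())                         # P(E)
    return {h: p / evidence for h, p in joint.items()}

# The sample space is fixed in advance: exactly two hypotheses.
posterior = {"fair": 0.5, "double-headed": 0.5}

# Observe heads three times, updating after each flip.
for _ in range(3):
    posterior = bayes_update(posterior, {"fair": 0.5, "double-headed": 1.0})

print(posterior)  # {'fair': 0.111..., 'double-headed': 0.888...}
```

The arithmetic is the theorem’s entire contribution. Nothing in it tells you where the two hypotheses came from, or what to do with a proposition such as “Universalism is a descendant of Christianity,” for which no one can enumerate a sample space.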
The properties of this big lump of fat are well known. It can reason deductively, inductively, or intuitively. It can also go off the rails in quite a few well-known ways.
It is certainly possible to argue that any one of these forms of reason is really a special case of another. For example, you can go here and watch Eliezer argue that deduction is really just a case of induction, because we learn inductively that deduction works. Und so weiter. Frankly, I’m afraid Neoplatonism lost a great mind when Eliezer decided he didn’t believe in the One.
We use terms like deduction, induction and intuition because they describe phenomena in the real world—the strategies of reason that a real human brain uses. They are concepts on which we can agree. If we are to think about thinking, surely it makes sense to think about thinking in the ways that people actually think—as opposed to the ways that AIs would think, that is, if we had AIs.
The irony of it all is that Eliezer is a really good philosopher. You can watch him reasoning deductively and intuitively all day long. His “literary style” is excellent. The problem is that he devotes so much of his deductive and intuitive firepower to the rather fruitless task of explaining that all reason is a special case of Bayesian induction. Perhaps this is true for his AI, but it certainly doesn’t strike me as the most cogent description of Eliezer’s lump of fat.
Worse, this rather Plotinian transformation seems to apply only to deduction. Which is fortunate, because it allows Eliezer to believe that 2 + 2 = 4, and perhaps even to accept the Rev. Bayes’ proof of his famous theorem. I’m afraid intuition is mere “literary style,” however.
The problem is that intuition is the form of reason that the lump of fat uses to understand history. History is not a science. Its purpose is to parse the past, to present it as a set of coherent patterns. If you can’t think intuitively, you may be able to verify specific factual claims, but you certainly can’t think about history.
Classifying traditions by their cladistic ancestry is a fine example. The statement that Universalism exists, that it is a descendant of Christianity, and that it is not a descendant of Confucianism, can only be interpreted intuitively. It is not a logical proposition in any sense. It has no objective truth-value. It is a pattern that strikes me as, given certain facts, self-evident. In order to convince you of this proposition, I repeat these facts and arrange them in the pattern I see in my head. Either you see the same pattern, or another pattern, or no pattern at all.
When you get all Mr. Spock and you refuse to believe in intuition, you are essentially turning off a very substantial lobe of your brain. Worse, there is no actual off switch on this lobe. You will continue to think intuitively whether you like it or not. But you will think intuitively in an unexamined way. As both Yudkowsky and Dawkins do—when they regurgitate the anticlerical themes of Universalism without asking where anticlericalism comes from, how it got into their lumps of fat, or whether it belongs there.
Finally, there is a very practical reason why it’s imprudent to categorize traditions—or even individual themes—as either “evidence-based” or “non-evidence-based.” The trap is that the God delusion is not just non-evidence-based. It is blatantly non-evidence-based. As such, it seems very sensible to single it out for special ridicule.
But it is profoundly imprudent to do so. If your goal is to overcome bias, the God delusion is the least of your concerns. It has actually tagged itself as non-rational. There is no reason to waste any time in attaching further antibodies. If someone believes in God, why on God’s green earth would you think reason would be an effective way to convince him otherwise?
The real danger is the set of received themes which purport to be rational, but in fact are not. And in the next post we’ll look at some of these.
And TGGP: note the frequent use of the word “nonconformist” in that article. With a small n. If you capitalize the N, I think you learn more than the survey tells you.