Two kinds of repeaters
A little while ago I noted that beliefs about the paranormal world can’t directly motivate actions in the physical world. The proximate motivation of any physical action must be some belief about the real world, because (pace Mises) any action implies some attempt to change the world’s state to one the actor considers preferable. So real-world principles which depend on the paranormal, such as “separation of church and state,” are suboptimal by definition. Any real-world problems they could prevent could also be caused by a nonparanormal belief system, against which such a rule is no defense.
For example, if two movements A and B propagate the same beliefs about the real world, but A includes a paranormal plane whereas B does not, “separation of church and state” will protect us against A but not B. This is what in my line of work we call a “security hole.”
The general approach when you find a security hole is to (a) fix it, and (b) figure out what-all has crawled through the hole. This is going to require more than one blog post, but we might as well start on (a).
The only way (that I know of) to repair such a mental lacuna is to rebuild the language we use to think about the problem. As long as we have to change linguistic gears to compare paranormal and nonparanormal belief systems, we will have a vulnerability, because this irrelevant categorization constantly tempts us to craft overspecified tests which a mutating attacker can evade.
(For example, a rule that tells us to “keep Mithra out of the schools” is overspecified, unless you think Mithra in specific is the great danger to impressionable young minds. If we keep Mithra out of the schools but we say nothing about Baal, Baal will outcompete Mithra and our children will grow up as Baalist bots. Of course, if Baal is real and all the bad news we read in the paper is caused by our failure to sacrifice to him, this is ideal.)
So I suggested the terms “kernel” and “repeater,” defining a kernel as a set of factual and ethical assertions about the real world, and a repeater as an institution that propagates such assertions. A religion is a kernel and a church is a repeater. But not all kernels are religions, nor repeaters churches.
Let’s extend the “kernel” concept slightly, to also include metaphysical assertions. A metaphysical assertion is any statement that makes no factual or ethical claim (Hume’s “is” or “ought”) about the real world. This includes beliefs about paranormal entities, such as gods, but it also includes hermetic philosophical concepts such as those in Neoplatonism, Buddhism, Hegelianism, etc., etc.
By definition, metaphysics does not directly affect reality. But since metaphysical assertions are often sources of conflict, and since they can motivate beliefs about the real world, it can be useful to track them—as long as we remember that they are pathologically neutral, and eliminating metaphysics, while it may be desirable, cannot by itself eliminate factual errors or ethical disagreements.
Your kernel is the set of assertions you agree with. In theory, since no one can physically stop you from thinking for yourself, everyone could have a different kernel. But in practice, people are social animals, they get most of their assertions from others, and their kernels cluster.
Therefore, we can speak of “prototype” kernels, implying patterns of agreement across social groups. Methodism, for example, is a “prototype” under this definition. Not all Methodists agree on all assertions, factual, ethical, or metaphysical, but there is clearly a general pattern of consensus.
These patterns correspond to the networks by which assertions are transmitted between individuals. Let’s call an assertion in transmission a “packet.” If you “accept” the packet, it means you agree with the assertion. If you “reject” it, you don’t.
(There’s another word that means “transmitted belief.” I’ve made up my mind about this word: I don’t like it. Mainly because it makes me sound like a dork. The mere auditory tone of the word, its mouth-feel, is awful, and its various declensions (such as “memeplex”) are even worse. But “meme” also implies a sort of scientistic pretense that I find unwholesome, an attempt to intimidate the reader through the bogus authority of jargon. I prefer to borrow words from the computer business specifically because I think of programming as a trade, not a science.)
So a “repeater” is an institution which sends packets. A “church,” in the Christian sense of the word, is a repeater because the point of going to church is that the minister, or other religious official, tells you what he or she is thinking—with the implication that you should share these thoughts. If you are a churchgoer and you find yourself frequently rejecting the church’s packets, you’re likely to switch churches.
We can call the people who generally accept the packets produced by some repeater its “clients.” There is obviously a trust relationship from client to repeater. If you feel the need to evaluate every packet you receive from scratch, you have no need for a repeater.
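Since I insist on borrowing my vocabulary from the trade, here is a toy sketch, in Python, of how these terms fit together. Every name in it is made up, and it is not meant as an implementation of anything; it just restates the definitions above as data structures.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class Kind(Enum):
    FACTUAL = "factual"            # an "is" claim about the real world
    ETHICAL = "ethical"            # an "ought" claim about the real world
    METAPHYSICAL = "metaphysical"  # neither; no direct claim about reality


@dataclass(frozen=True)
class Assertion:
    text: str
    kind: Kind


# A "packet" is simply an assertion in transmission.
Packet = Assertion


@dataclass
class Client:
    # A kernel is the set of assertions this person agrees with.
    kernel: set[Packet] = field(default_factory=set)

    def receive(self, packet: Packet, trust: bool) -> bool:
        """Accept or reject a packet; accepting it adds it to the kernel."""
        if trust:
            self.kernel.add(packet)
        return trust


@dataclass
class Repeater:
    """An institution that sends packets to the clients who trust it."""
    name: str
    clients: list[Client] = field(default_factory=list)

    def broadcast(self, packet: Packet) -> None:
        for client in self.clients:
            client.receive(packet, trust=True)
```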
Another example of a repeater is Wikipedia. I certainly don’t trust Wikipedia absolutely, any more than I think most churchgoers trust their ministers absolutely. However, I do assign more credibility to articles produced by the Wikipedia editing process than to, say, some random blog.
Finally, to finish off this terminology-fest, we need to wade into the deep end of the swamp and come up with some way of defining “good” or “bad” assertions, and hence packets—so that we can actually turn our firewall back on.
Metaphysical assertions, again, are neither bad nor good, as they do not reflect on the real world. This leaves us with only factual and ethical assertions. Let’s say that an assertion is good unless it’s bad. This leaves us with the problem of defining bad facts and bad ethics. The word “bad” is a little coarse for my taste, so let’s say “toxic” instead.
A toxic factual assertion is a misperception of reality. For example, I think Holocaust revisionism is a toxic assertion, because the Holocaust strikes me as pretty well-documented. But I prefer to avoid the word “lie,” because I don’t and can’t know the motives of those who repeat this (or any other) packet.
A toxic ethical assertion is an internal inconsistency. For example, in the American South from the 1830s to the 1860s, the idea developed that enslaving Africans was compatible with Christianity. This assertion would have struck even the grandparents of those who held it as toxic, because human equality has always been a central concern of Christianity. At the time of the Revolution those who accepted slavery generally thought of it as an inescapable evil. (Slavery is mentioned in the Bible, but the system of slavery in the classical world was very different from that practiced in the South, nor were Southerners unaware of this.) If Southerners had rejected Christianity in favor of some more Nietzschean ethical kernel, as at least some National Socialists did, they could have avoided inconsistency. Their ethics would not have been compatible with mine, or probably with yours, but they would not be “toxic” by this definition.
Toxic packets (which carry toxic assertions) are really not that hard to detect. Epistemology and ethics are not rocket science. Given that we live in the 21st century and we generally seem pretty good at getting our rockets into orbit, the persistence of toxic assertions is hard to explain.
But persist they do. The clients of Daily Kos and Free Republic—to name a couple of the Internet’s more egregious repeaters—can’t possibly both be right. Their ethics could differ without toxicity, but my guess is that if you polled readers of each for a statement of general ethical principles, what you would get on both sides would be pretty familiar, and probably more or less compatible with the broad tradition of Christianity. Certainly, either or both of the prototype kernels these sites offer must contain ethical inconsistencies and/or misperceptions of reality.
So toxic packets are flying all around us. Why?
This one is already getting long. But there’s one way to classify repeaters that may provide a clue. We can divide repeaters into “disinterested” and “concerned” classes.
A disinterested repeater has no organizational motivation to repeat anything but what its clients want to hear. It has the same relationship to them as any business to its customers. If its clients want the truth, it will try to give them the truth. If they prefer nonsense and illusion, that’s what they’ll get. The success of a disinterested repeater depends only on the popularity of the prototype kernel it delivers to its clients, not on the actual content.
A concerned repeater has some reason to care what its clients think. It has ulterior motives. The success of a concerned repeater will depend on the nature of the assertions it makes. There is some external force, not related to the preferences of its clients, that rewards the repeater for propagating certain assertions and/or deters it from propagating others.
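In the same toy notation, and with the same caveat that every name here is made up, the difference is simply where the payoff comes from:

```python
def disinterested_payoff(packets_accepted: int) -> float:
    """A disinterested repeater succeeds only insofar as its clients
    accept what it sends; what the packets say is irrelevant to it."""
    return float(packets_accepted)


def concerned_payoff(packets_accepted: int, external_reward: float) -> float:
    """A concerned repeater also collects a reward (or avoids a penalty)
    from some outside force with preferences about what gets repeated."""
    return float(packets_accepted) + external_reward
```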
Which is better? And why? Hm…