Tuesday, October 21, 2014

Three Fallacious Arguments: An Interlude

I'm not an enemy of science. It’s probably necessary to say that up front, given the theme I intend to develop in this month’s post. I consider the scientific method one of the half dozen or so truly great creations of the human mind, important enough that I’ve discussed at length in my other blog how best to get it through the imminent crises of our age into the waiting hands of the future. Even more to the point, I enjoy science as an activity and a subject of study; I’ve spent many pleasant evenings reading Darwin’s great treatise on natural selection and other classics of scientific literature; I’ve contemplated the structure of leaf tissues through my well-used microscope, run controlled experiments on the effect of soil amendments on plant growth, and crunched the numbers to see if the results were statistically significant. My first sight of the rings of Saturn through my own homebuilt telescope remains one of the defining memories of my childhood.

Nothing I’ll be saying here, in other words, is in any way meant to devalue the process of science, the richly human activity of testing hypotheses against quantitative measurements of some portion of the world we experience. That process, though, does not exist in a void. Alongside science as process, there’s also science as product, a set of detailed claims about what is and isn’t true about the world, some of which—though not all—have come into being through the systematic use of science-as-process. By and large, I have no objection at all to what science-as-product claims is true about the world. It’s certain of its claims about what’s not true about the world that I find problematic. 

The difficulty is partly a matter of repeated conflicts between certain claims currently part of science-as-product and my own experience, and partly a wholly personal dislike of a certain kind of dogmatism that’s become deeply entrenched among some of the people who claim to speak for science these days. There are far more pragmatic factors as well, but the way that science of both kinds, process as well as product, has been prostituted for the benefit of ideological stances and economic interests is a subject for my other blog, not this one. The point that’s most relevant here is that magic, the art and science of causing change in consciousness in accordance with will, works with two distinct sets of processes. One of them is acceptable to even the most extreme materialists, but the other depends on something that, according to current versions of science-as-product, does not, cannot, and must not exist.

We could plunge into a discussion of that “something” right now, but an alternative approach will be more useful, for a curious reason. It so happens that these days, any attempt to raise questions about the claims of science-as-product inevitably fields a flurry of counterclaims, and all these latter depend on the same handful of canned arguments. What makes this all the more interesting is that the arguments in question are all based, in turn, on a set of classic logical fallacies. There’s a rich vein of irony here, since nearly all the people who trot out these fallacies like to present themselves as defenders of logic and reason, but we can let that pass for the moment. What I want to do here is look at the fallacies one at a time, see why they don’t prove what they claim to prove, and thus (with any luck!) get past the rehashing of canned arguments to the actual issues at hand when next month’s post begins talking about the relevant dimensions of magic.

The first of the arguments I want to consider here is the insistence, very common on the lips of today’s skeptics, that extraordinary claims require extraordinary proof.  That sounds reasonable, until you take five minutes or so to think about what it actually means. To begin with, what defines a claim as extraordinary? Does a claim become extraordinary because most people disagree with it? Does it become extraordinary because experts disagree with it? Does it become extraordinary because it violates “common sense”—and whose version of common sense are we discussing here?

The phrase “extraordinary claims” is thus highly ambiguous. In practice, to those who use this argument, a claim is extraordinary if they don’t agree with it, and ordinary if they do. The phrase “extraordinary proof” embodies a similar ambiguity:  in practice, to those who use this argument, a proof is extraordinary if they choose to give it this status and merely ordinary if they don’t. This is very convenient for them, since no matter what proof is offered, they can just keep on raising the bar and saying “That’s not extraordinary enough.” 

That is to say, “extraordinary claims require extraordinary proof” is an example of the logical fallacy of petitio principii, also known as “begging the question.”  The essence of petitio principii is that the evidence and arguments for one side of a debate are judged according to a lenient standard, one that presupposes that they are correct, while those for the opposing side are judged according to a harsher standard that presupposes that they are incorrect. This is a great debating trick, but it’s lousy logic: among the most basic rules of fair reasoning is the principle that the evidence for each side of a question must be judged according to the same standards of proof.  Once any claim, however “extraordinary,” is expected to meet standards of proof the other side can change at will, what’s being offered is a rhetorical gimmick, not a reasonable claim.

The second argument I want to discuss here also depends on petitio principii. This one is a little subtler, though, and is best understood by way of an example.

Back in the early 1980s—I believe it was in the pages of Omni, but I can’t swear to that—I read a lively essay by the late Carl Sagan about near-death experiences, which were getting one of their periodic bouts of media exposure at that time. Sagan talked about some of the common features reported by people who underwent such experiences, especially the sense of rising up slowly through a dark tunnel toward light, and being greeted by a being on arriving in the light. He then proposed that there was a wholly material explanation for these experiences: people who underwent such experiences were having a flashback to the memory of being born; the dark tunnel was the birth canal, the light was the hospital lamp, and the waiting figure was the physician who attended the delivery. 

It was (and is) an interesting hypothesis, but the way Sagan used the hypothesis was more interesting still. Those of my readers who know their way around the scientific method know that the first thing a scientist does, on coming up with an interesting hypothesis, is to think up as many ways as possible to disprove it. “How can I test this?”—that’s the question that drives real science, and makes it something other than a way to make the universe mirror our own ideas back at us. Yet this is exactly what Sagan didn’t do. Instead, he behaved as though the mere existence of a hypothesis that explained near-death experiences in terms conformable to his materialist worldview justified the dismissal of every hypothesis that didn’t fit within that worldview.

This is all the more fascinating because Sagan’s hypothesis is quite testable. It would be tolerably easy, for example, to survey a large population of individuals who have had a near-death experience, note which of them had the experience of rising up through a dark tunnel toward light, find out which of them had been born in the usual way and which of them had been born by Caesarian section, and see if there’s any correlation. If Sagan’s hypothesis is correct, people born by C-section should have different imagery in their near-death experiences.  If no such difference appears, and in particular if people born by C-section also have imagery of rising up through a tunnel in their near-death experiences, Sagan’s hypothesis would be disconfirmed by the evidence: that’s how science works.
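To give a concrete sense of what such a test might look like, here is a minimal sketch in Python, using a standard chi-square test of independence on a wholly invented contingency table. The survey counts below are hypothetical, chosen purely for illustration; a real study would of course need far more care with sampling, recall bias, and confounding factors.

```python
# A minimal sketch of the comparison described above, using an invented
# contingency table -- the counts are hypothetical, for illustration only.
from scipy.stats import chi2_contingency

# Rows: birth mode; columns: reported "dark tunnel" imagery (yes, no)
observed = [
    [120, 80],  # born vaginally
    [55, 45],   # born by Caesarian section
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# Roughly similar rates of tunnel imagery in both groups (a high p-value)
# would count against the birth-memory hypothesis; a marked difference
# between the groups would be consistent with it.
```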

What Sagan was proposing, though, was evidently not meant as a scientific hypothesis, as he made no attempt to test it, or even to suggest that it might be worth testing. Rather, it was an example of a common debating trick, the ad hoc hypothesis—a hypothesis that’s supposed to be accepted without proof, because it justifies the evasion of contrary evidence. Where a scientific hypothesis is meant to further inquiry, an ad hoc hypothesis is thus meant to stop inquiry in its tracks. Logically, it’s another form of the petitio principii fallacy, since it presupposes that any hypothesis that supports one side of a debate is automatically more likely to be true than any hypothesis that supports the opposite side of the same debate—which, again, is begging the question.

Thus it proves nothing to say, as so many of our current skeptics like to say, that an experience “must have been caused by” some natural phenomenon or other. In logic, “must” is a very strong word:  it can only be justified by strict logical necessity or overwhelming evidence, and if neither of these is forthcoming, it’s simply another rhetorical gimmick. Nor is it reasonable to insist, as so many of these same skeptics like to insist, that anyone who disagrees with their ad hoc hypotheses has to disprove them, to the skeptics’ satisfaction, while anyone who presents a hypothesis with which the skeptics disagree has to prove it, again to the skeptics’ satisfaction. Here the question isn’t merely being begged, but borrowed and stolen into the bargain.

The third argument I want to discuss here is a little different, as it doesn’t rely on the fallacy of petitio principii; its roots descend into a different part of the realm of bad logic. This is the insistence that a phenomenon can’t happen if current scientific theory doesn’t include a mechanism that’s able to make it happen: “If the cause isn’t known, the effect didn’t occur.” Stated so baldly, it sounds preposterous—and of course it is—but that’s far and away the most common angle of attack taken by critics of the subjects central to this blog. Thus it’s tolerably common to hear claims that magic, for example, can’t possibly work, because currently accepted scientific theory provides no mechanism by which ritual actions can make things happen at a distance.

Now of course it’s not as though there’s been any significant amount of research aimed at finding mechanisms that might account for the effects of magic; quite the contrary, any scientist who seriously proposed such a research program would be kissing his career goodbye. Nor, for that matter, would most practicing mages agree that magical rituals all by themselves make things happen at a distance.  Magic, again, is the art and science of causing change in consciousness in accordance with will, and all the effects of magic are mediated by consciousness. Ritual—symbolic psychodrama performed in quasimeditative mental states—is an important tool of magical practice because it shapes and reorients consciousness in reliable ways, but you won’t find many people in the scientific community who are willing to take the time to read books on magic by practitioners, or talk to people who have practical experience of the subject, and find that out.

That said, there’s also a logical issue here. The question “does X happen?” is logically distinct from the question “why does X happen?” Thousands of years before Newton worked out the theory of gravitation, people knew that objects fall when they’re dropped, and could make accurate predictions on the basis of their knowledge, even though they had no notion of the cause. For that matter, Newton himself famously refused to offer any hypothesis about what gravity was; his sole concern was to construct a precise mathematical model of the way that it appeared to work. Only the fact that heavy objects clearly do fall when dropped, I suspect, prevented the skeptics of Newton’s day from rejecting his ideas out of hand; after all, late 17th century physics hadn’t yet conceived of the curvature of spacetime, and so didn’t have a causal mechanism in place to explain the effects of gravity.

This latter point can be made even more forcefully, because most of the great scientific discoveries of the last three or four centuries would have been “disproved” by the arguments today’s skeptics use with such eager abandon. Let’s take Darwin’s theory of natural selection as an example. To begin with, when it was first formally proposed in 1859, Darwin’s claims were extraordinary by most standards, while the proof he offered to back up those claims was composed of ordinary scientific evidence, some of it the product of his own painstaking research, some published by others in the scientific journals of the day, all of it solid but none of it particularly amazing.  If extraordinary claims require extraordinary proof, as so many skeptics insist today, Darwin’s work should have been rejected out of hand by the scientific community of his time.

Furthermore, there was no shortage of ad hoc hypotheses to explain away the facts Darwin marshalled, without recourse to a theory of evolution. Some scholars in Darwin’s time argued that fossils were the bones of ancient animals that failed to find room aboard Noah’s ark; others insisted that, just as Adam had a navel even though he’d never needed an umbilical cord, the Earth was created miraculously in 4004 BCE with a complete stock of fossils, as though it had existed from measureless time; still others argued that fossils had been put there by Satan in an attempt to lure the unwary into eternal damnation. If it’s legitimate to use ad hoc hypotheses to dismiss possibilities that don’t conform to existing theory, it would have been equally appropriate to insist that the evidence for evolution “must have been caused by” the Flood, or God, or Satan, and dismiss Darwin’s theory on that basis.

Finally, Darwin’s theory required two things for which the science of his time had no causative mechanisms at all. The theory of heredity as understood in the middle of the 19th century argued that the traits of each parent blended completely with the other, and so provided no way for individual characteristics to be passed down unchanged to offspring—that didn’t enter the body of science-as-product until the rediscovery of Gregor Mendel’s work in the early 20th century. What’s more, 19th century physics provided no mechanism for the Sun to keep shining for the immense periods of time needed for evolution to work, and so physicists in Darwin’s lifetime insisted that life on Earth could only be a few million years old. Evolutionary biologists ignored that, because they were confident that a mechanism that would provide billions of years of sunlight would be found, and of course it was. If it’s reasonable for observed phenomena to be rejected if no causal mechanism capable of producing them is known, though, Darwin’s theory should certainly have been tossed in the trash.

Fortunately, that’s not the way science worked in 1859. Darwin’s theory of natural selection was assessed on its own merits, not dismissed out of hand because it contradicted the science-as-product of its day.  The body of ordinary evidence that bolstered Darwin’s extraordinary claim was recognized as quite adequate to the purpose; the various ad hoc hypotheses brandished about by critics were recognized as such, and mocked merrily on that basis in the scientific and popular press; and the absence of crucial causal mechanisms, far from causing The Origin of Species to be tossed in the nearest dustbin, encouraged researchers to go looking for those mechanisms, and find them.

In Darwin’s time, science was still in what last month’s post described, in deliberately mythic terms, as a Phoenix phase:  less gnomically, a phase in which human thought was more or less midway along its trajectory from concrete images to abstract concepts, and the flight into abstraction that characterizes the next step in the process, the Dragon phase, had not yet gone anything like so far as it’s gone since then. As I noted last month, it’s typical of the Dragon phase for a civilization to become so convinced of the truth of its abstract conceptual models of the universe that any gap between those models and the universe of human experience is argued away with rhetoric, rather than being used as an opportunity to correct the model. The arguments surveyed here are good examples of the type.

Such defensive maneuvers are probably unavoidable at this point in the turning of history’s wheel, and they also serve a valuable function in their own way. Limits are as necessary in the ecology of thought as they are in every other ecology, and the hard defensive shell around the basic presuppositions of an intellectual creation such as modern Western science is what allows those presuppositions to be taken as far as they will go. In the case of classical logic, which completed its own journey through this same process around seventeen centuries ago, the same rigid exclusion of inconvenient realities helped drive inquiry into those fields where logical method could accomplish the most—for example, mathematics—and kept it away from those fields, such as natural history, to which the methods of classical logic were very poorly suited.

In much the same way, the exclusion of such phenomena as consciousness from scientific study in the modern world is almost certainly a good thing, because the methods of scientific research simply aren’t that useful when applied to such topics; the energy that might go in those directions is better used elsewhere, on topics better suited to quantitative analysis and reproducible experimental designs. Just as classical logic was taken up in later centuries as a fully developed system and put to uses the ancient Greek logicians couldn’t have imagined, in turn, modern science will doubtless be taken up by civilizations in the future, and put to work doing things that would baffle or horrify today’s scientists. That’s in the nature of cultural recycling: since every human culture evolves its own set of values, what seems like the obvious and natural thing to do to people from one culture generally looks pretty bizarre from the standpoint of any other.

It would be pleasant if today’s would-be defenders of science and reason were to see things that way, and recognize modern science as a culturally bound phenomenon, one of the supreme creations of the civilization that came to birth in western Europe and now maintains a brittle and fraying hegemony over most of the rest of the planet. From that perspective, the fact that some phenomena are not well suited for study via science-as-process, and are thus poorly represented in science-as-product, would not inspire the sort of furious, fist-pounding denunciations you so often see from today’s “angry atheists” and the like. Still, it seems to be essential to the ripening of an intellectual system in a Dragon era that many of its proponents see it as the royal road to absolute objective truth about everything, and I take sufficient delight in the discoveries of science that to my mind, at least, the misdirected tirades that result from this necessary habit are a small price to pay.