I mentioned in last month’s post here that our familiar term “world” is a rounded-off version of the Old English weorold, “man-old,” the time or age of human beings. That bit of etymology conceals more than one important insight. As I noted last month, it reminds us that this thing we call “the world” isn’t something wholly outside ourselves, something we experience in a detached and objective way. It’s something we create moment by moment in our minds, by piecing together the jumble of unconnected glimpses our senses give us—and we do the piecing according to a plan that’s partly given us by our biology, partly given us by our culture, and partly a function of our individual life experience.
That point is astonishingly easy to forget. I’ve long since lost track of the number of times I’ve watched distinguished scientists admit with one breath that the things we experience around us aren’t real—they’re just representations constructed by our sense organs and brains, reacting to an unimaginable reality of probability waves in four-dimensional space-time—and then go on with the very next breath to forget all that, and act as though matter, energy, space, time, and physical objects exactly as we perceive them are real in the most pigheadedly literal sort of objective sense, as though the human mind has nothing to do with any of them except as a detached observer. What’s more, many of those same scientists proceed to make sweeping claims about what human beings can and can’t know and do, in blithe disregard of the fact that these very claims depend on the same notion of the objective reality of the world of experience that they’ve just disproved.
It’s a fascinating example of doublethink, and we’ll be talking about its implications more than once as this discussion proceeds. That said, there’s another insight hidden in that deceptively simple term “world,” which is that the world, the man-old, the thing we’re used to experiencing as an objective reality independent of our consciousness even though it’s nothing of the kind, is defined not by space but by time. It’s not a place but a time of human beings, and it has a history.
Part of that history needs to be traced out over the scale of evolutionary time. Owen Barfield pointed out most of a century ago, for essentially the same reasons I’ve just cited, that all those images of dinosaurs lumbering around in vaguely tropical jungles are works of imaginative fiction—images of what the prehistoric past would have looked like to human beings, had there been human beings around to view it, which of course there weren’t. More than a century of research into the nervous systems and cognitive processes of other living things has shown definitively that they don’t experience the same world as you and I do. A cat, for example, has modes of visual processing hardwired into its eyes and brain that are radically different from the ones that you have in yours. Have you ever watched a cat staring intently at something you can’t see? Something is setting off the cat’s visual processing neurons and not yours, so whatever the cat sees is part of the cat’s world, but not part of yours.
In evolutionary terms, mind you, the cat and you are practically kissin’ cousins. Factor in a hundred million years of evolutionary history and the yawning genetic chasm that separates you from, say, an allosaur out for a pleasant stroll in the greenery of a Jurassic cycad forest, and you might have some sense of just how different the world that the allosaur experienced was from yours. The allosaur saw, heard, felt, and smelled a world vastly different from the one you would have experienced, had you been hiding from it and its hungry kin in that same forest. There were likely things in its world that you wouldn’t have perceived at all, and vice versa, because its sense organs and nervous system were variations on the standard megalosaur model, while yours are variations on the radically different standard primate model. Until there were hominins with eyes and nervous systems sufficiently like yours, even the most basic elements of the world you know didn’t yet exist, because—again—the world is not “out there.” You don’t observe the world, you construct it.
There’s every reason to think that, within a certain fairly modest range of individual variation, one cat constructs much the same world—or, rather, the same “cat-old”—as any other cat. The same was most likely true of allosaurs, though it’s only fair to admit that cognitive testing of dinosaurs is still a little beyond the capacity of today’s scientists. That similarity holds much less well among human beings, because of one of the evolutionary twists that pole-vaulted us out of our australopithecine ancestors’ comfortable niche as savanna-dwelling primates and sent us scampering around the globe.
Cats construct their cat-olds, and allosaurs presumably constructed their allosaur-much-olders, on the basis of genetically transmitted patterns that are hardwired in their nervous systems, and are triggered into activity by parental behavior. Cats teach their kittens how to hunt, for example, and it’s been suggested by paleontologists that allosaurs did much the same thing for whatever you call baby allosaurs, but in both cases parental instruction serves as what ethologists call a releasing mechanism: a way of triggering and fine-tuning patterns that are put in place by genetics. Human beings also have a fair number of hardwired reactions and releasing mechanisms—the way a child learns to understand and use language is exactly akin to the way a kitten learns to hunt—but we’ve also evolved a way of modifying those genetic patterns to a much more dramatic degree than cats do or allosaurs did. In the process of learning language and the other dimensions of its culture, a young human absorbs a distinctive way of constructing the world, and that cultural pattern pushes, pulls, and prods the inherited world-structure shared by all human beings into a culturally distinct form.
The content of cultural transmission thus varies from culture to culture and from person to person, even though the capacity for cultural transmission has been hardwired into the brain by millions of years of hominin evolution. Most people can instantly remember the words and melodies of whatever songs were popular around the time they hit puberty, for example, and that’s not accidental; in tribal cultures around the world, that’s when young people get taught the traditional chants and incantations that guide them through their adult lives, and much the same sort of imprinting that lets toddlers pick up the grammar of their native language effortlessly works here to fix traditional songs—or top-40 hits—in permanent memory once those same toddlers reach their teens. One point that needs to be remembered here is that this imprinting process isn’t conscious, and the imprints left by it can’t be changed by the conscious activities of the mind; those of my readers who have ever tried to get a song out of their heads know this from personal experience!
The imprints each human being absorbs from his or her culture then get overlaid by various kinds of individual experience. Thus, as I’ve noted earlier, the worlds we each construct have three layers: a personal layer derived from life experience, a cultural layer derived from childhood imprinting, and a biological layer derived from the evolutionary background of our species. Each of these, in turn, has a history, and that’s where we start straying into some very controversial territory.
Now of course it’s not too controversial to point out that the personal layer has a history. The biology of the human life cycle works its changes on consciousness, and so do all the ordinary and extraordinary events encountered along the way from womb to tomb; an individual’s biography might usefully be seen as a chronicle of how the genetic and cultural model of the world that they inherited got reworked, for better or worse, over the course of one human life. Equally, it’s not too controversial to point out that the biological layer has its own much slower history, which is part of the evolutionary trajectory of the genus Homo and its direct antecedents.
It’s the cultural layer that stirs up the controversy, because our culture has staked its survival, and more than merely its own survival, on the notion that the peculiar way its inmates construct the world is not the jumble of genetic, collective, and individual patterns that its own sciences prove it to be, but the plain unvarnished truth about the universe, which ought to be obvious to anyone anywhere who pays unbiased attention to the world around them. Thus the only version of history that most people in the industrial world are willing to consider is one that explains how people stopped believing all the obviously muddleheaded things they used to believe about the cosmos, and learned to see the reality that was sitting right out in front of them all along—which, of course, just happens to be the one we construct, moment by moment, as we make our worlds.
There are plenty of problems with that way of thinking about history, but the one that’s most relevant to the project of this blog can be grasped by recalling the last time you saw a cat staring intently at something that your eyes didn’t see. The worlds constructed by different cultures don’t just vary from one another in how they arrange the flurry of disconnected data that comes streaming in through the senses. They also vary in which data they include in their arrangements, which they exclude, what they consider important and what gets dismissed as meaningless. It’s entirely possible for the world of a given culture, at a given era in its history, to exclude utterly a range of common human experiences that the worlds of most other human cultures treat as having very great importance. We know this because the world of modern industrial culture does exactly this—and among the things that are excluded in that world, dismissed as nonexistent and meaningless and imaginary, are the raw materials of magic.
What those raw materials are, how they relate to other aspects of the universe of human experience, and how the operative mage identifies them and puts them to work, will be the subject of quite a few posts a little later on. The point I’d like to make here is that the exclusion we’re discussing is a very recent thing in the industrial world. Until the final triumph of the scientific revolution at the start of the eighteenth century, magic and a great many things connected with it were treated as everyday matters in Western cultures, as obviously real as weather or the misbehavior of kings. Most people practiced magic in one form or another—it’s rare to find a household commonplace book from the Middle Ages, the Renaissance, or the early modern period that doesn’t have an assortment of spells for healing, divination, and the like right in there alongside recipes for heather ale and mustard plasters.
The relationship between magic and religion all through those centuries has been misunderstood and misstated by almost everyone outside a handful of scholarly fields. Valerie Flint’s The Rise of Magic in Early Medieval Europe, for example, shows that one of Christianity’s major selling points in the post-Roman dark ages was that its priests and monks were considered better at magic than their pagan rivals. From the fall of Rome straight through to the late fourteenth century, charms and incantations were forbidden only if they invoked someone other than God, Christ, or the saints. It was only very late in the Middle Ages, with the dominance of the Nominalist movement in Christian philosophy, that people started looking askance at traditional Christian magic, and it took centuries more for that disapproval to evolve into the claim that the thing so heartily disapproved of didn’t exist in the first place.
The dubious sort of history I mentioned earlier, which treats all previous thought as a collection of obvious stupidities that humanity only got around to outgrowing in the eighteenth century, very often seizes on the decline and fall of magic at the dawn of the scientific revolution as a case study: see, everybody believed in this stuff until the Enlightenment finally gave us all a clue! It all seems to make sense, too, unless you know enough about the history of magic to discover that the same rationalist revolt against magic has happened many times in the past.
Take the time to read the ancient Greek philosophers and you’ll get to watch the same revolt in full swing two millennia earlier than ours. Just as Johannes Kepler cast horoscopes to pay the rent, and Isaac Newton devoted as much of his time to alchemy as he did to physics, Pythagoras and Empedocles—among the leading figures in the early days of what we may as well call the Greek Enlightenment—were up to their eyeballs in magical practices. That didn’t last for long; Plato, arguably the pivotal figure of the Greek Enlightenment, inherited Pythagoras’ mathematical magic but chucked out the magic in favor of the first draft of Greek logical method, and wrote scornfully about the way that the mages of his time peddled spells and initiations door to door in Athens.
The philosophers of the centuries right after Plato had even less time for magic than he did. By the beginning of the Common Era, practicing magic was strictly for peasants, the urban poor, and exotic people in faraway places who supposedly didn’t know any better. Lucian of Samosata, the Amazing Randi of the second century CE, wrote a series of hilarious satires on the flim-flam that he claimed was being practiced by the mages and prophets of his time; it’s among the recurrent themes of these satires that most of the people clueless enough to fall for such obvious humbug were illiterate yokels.
Now of course there was still plenty of magic being practiced in the classical world in those years, and not just by yokels. The Greek Enlightenment, like the later European one, was fashionable on the wealthier end of society, and only penetrated down the social pyramid to a limited extent. Furthermore, then as now, there were always members of the educated classes who kept up an interest in magic, and there were certain traditional organizations—the Mysteries in the classical world, Freemasonry in the modern one—that didn’t exactly practice magic, but offered initiations that were rooted in old magical traditions, and passed on teachings, symbolism, and ceremonials rich with magical possibilities.
As the charisma of Greek rationalism faded and its internal contradictions became steadily more problematic, in turn, these survivals became the seeds from which magic promptly revived. All through the first centuries of the Common Era, there had been tentative contacts between philosophers and mages, and a few colorful figures such as Apollonius of Tyana had revived magical traditions in something like their old forms. As the classical world stumbled toward its end, the reasonings of the philosophers and the inner disciplines of the mages finally met and merged in the person of Iamblichus of Chalcis, who fused Neoplatonist philosophy with traditional magic and religion into an enduring hybrid. That fusion sparked the classical world’s last major intellectual movement, provided the new faith of Christianity with its first coherent theology, and created the tradition of philosophical magic that would remain standard in the Western world for more than a millennium thereafter.
This same pattern can be traced in the life cycles of other civilizations—in India, for example, where the local version of the rationalist revolt got going in the sixth century BCE, and in China, where it took off a little later. Today’s rationalists like to point out that Greek rationalists, Indian rationalists, and Chinese rationalists, not to mention their peers in other civilizations, didn’t embrace the same beliefs as the current example of the rationalist species, and of course they’re quite correct in saying so. They run off the rails when they insist that, because people in other civilizations didn’t embrace the peculiar way that modern industrial civilization constructs the world as the plain unvarnished truth, this means these other rationalisms weren’t really rationalist.
That objection is as predictable as it is hopelessly wrong. Each culture constructs its own world, its own man-old, atop the common foundation provided by human neurology and instinct, and so each civilization’s version of rationalism attempts to make rational sense of a different world. Every other civilization’s rationalist movement has been as convinced as ours that its way of thinking about the universe was the plain unvarnished truth, rather than the elaborate biological and cultural construct it actually was. Every other civilization’s rationalist movement, in turn, has broken down over some equivalent of the same issues that crippled classical rationalism, and ended up fusing with a resurgent magical or esoteric religious tradition in the same way that classical rationalism did.
Modern industrial society hasn’t yet found its Iamblichus, or for that matter its Zhang Daoling or its Nagarjuna, but the normal processes that will lay the groundwork for the appearance of some similar figure are well under way. The magical traditions of the industrial world began their return from exile with the publication of Eliphas Levi’s Dogme et Rituel de la Haute Magie (Doctrine and Ritual of High Magic) in 1854, and the work of rediscovering and reinventing those traditions has continued steadily since then. Twice now—during the flowering of psychical research at the end of the nineteenth century, and during the flowering of parapsychology from the 1940s through the 1970s—scientists and mages in our culture have made the same sort of tentative contacts that rationalist intellectuals and occult practitioners have made so many times in the past, before hard historical necessities drove them together. Crucially, too, the breakdown of our civilization’s rationalist worldview is proceeding at something very like the usual pace.
That breakdown, its symptoms and its consequences, will be a central theme of many of the posts to come. The predicament at the heart of it, though, can be summed up easily enough. For reasons we’ll be discussing in a later post, rationalism suffers from an innate and lethal tendency to lose track of the difference between the abstractions that it contemplates and the universe that those abstractions are meant to represent. That confusion between representation and reality tends to increase over time as the rationalist movement defines its view of existence with more and more precision. It’s as simple as it is inevitable: the tighter the rationalist clenches his fist, if you will, the more of the universe of possible human experience slips through his fingers.
Sooner or later, the things that have been excluded from the world by any given rationalist system will include things that can’t be ignored without putting the survival of the civilization at risk, and when those things are ignored anyway, as they normally are, the consequences are all too familiar from the historical record. That’s why rationalist movements in their final years, when it finally becomes impossible to ignore those things any longer, always end up making peace with the realms of magic, myth, and religion they’ve previously spent so many years and so much effort denouncing. To put the same thing another way, that’s why the magic or the esoteric religion of a waning civilization ends up absorbing the heritage of that civilization’s broken-down rationalism, repurposing it to cope with the unmet needs of its time, and placing it in a context of practice that keeps it from blinding itself with its own abstractions quite so readily as when it’s given free rein.