Life and Death, pt 3

There are a lot of human languages that derive their word for “human” from their word for death (mard in the Iranian languages, e.g.) or birth (e.g. tlacatl in Nahuatl). The sense of finitude thus expressed may be taken as establishing a contrast with the nonhuman, i.e. the gods, a contrast that also gets topicalized for instance in the Greco-Roman distinction between mortal humans and immortal divinities. The gods do not die, or else the gods are not born.

So: do gods get to experience the thrill of Darwinian evolution? Presumably not: in Greek myth, for instance, in Hesiod’s Theogony, the gods multiply and expand to fill the universe, then stop. Within that framework, gods compete for power: Cronos overthrows Ouranos, Zeus overthrows Cronos, and various titanomachies or gigantomachies herald broader overturnings. You might also want to talk about Zeus’ differential reproductive success, but that works itself out precisely among mortal humans: because the gods don’t die, there’s no room for Zeus to repopulate heaven in his own image.

In cultures where that set of contrasts operates – human/mortal/born vs divine/immortal – non-Darwinian life hovers at the edge of the human imaginary, as a limit or goal to which certain remarkable humans – Hercules or Augustus, for example – might be able to assimilate themselves. At the same time, however, this form of life finds itself always in the crosshairs of a critical assault that, in the Greek tradition, would run from Xenophanes through Palaiphatus and Euhemerus to early Christian polemicists like Tertullian. Each of these thinkers would, in their own way, attack not the notion of godhood but the image of it which the mythographic tradition presented. They wanted to deny, not that there could be anything undying, but that something undying could count as life. That mythographic form of godhood was an impossibility even for the gods, let alone for human beings.

This inherited tendency to understand the undying or unborn as an uncrossable threshold, incompatible with the human (and therefore the animal) condition, probably prepared the ground for the emergence of a science of life in Europe. We take Darwinian evolution as a biological fact, but we have been able to integrate its earth-shaking claims about human origins just to the extent that we had already naturalized and universalized a born, dying form of life that could form a substrate for natural selection. Pockets of resistance to Darwinism remain, predictably, among religious groups that see their god or messiah or even their own “immortal souls” as involved in a form of undying life (which is not, by the way, the original position of Christianity or Islam).

As modern Christian resistance to Darwinism suggests, there are certainly other ways of dividing up the world and thus other ways of conceptualizing life. One example that lies particularly close to my work at the moment is offered by the Manichaeans, who take an atomic approach to life by stipulating that only certain (“light”) particles within apparently living things are actually alive – the rest of the living creature being a kind of clothing of dead matter made by an evil demiurge in imitation of life. These particles of light, living by definition, survive the apparent death of their host and are either reincorporated into another organism or (if they happen to have been eaten by a Manichaean elect) sent up to the moon, whence they can follow a trajectory through the sun and out of the universe. On this model, life is actually indifferent to birth and death, which are events that happen around it but not to it.

This is an alternative bio-ontology that developed alongside mainstream Greco-Roman thinking on life, in the interstices between it and neighboring cultural commonwealths to the East. It perhaps still bears the trace of a “western” obsession with policing the lines between life and nonlife that is less apparent in the case studies collected in Elizabeth Povinelli’s Geontologies, where, among Aboriginal Australians, persons pass in and out of the world without this necessarily corresponding to a birth or a death. Povinelli suggests that some such account of personhood for nonliving things – not as quasi-humans with rights, but as agents with which we can enter into a relationship of care, or not – will be essential to understanding how the anthropocene has placed limits on human “power” at the same time as it seems to represent the extension of that power over the face of the Earth.

There are people, probably including some of the originators of the anthropocene concept, who understand it in mythographic terms as a human “overthrowing” of nature to match Zeus’ triumph over Cronos, except that, in modern, rationalizing terms, we have to take care lest we actually manage to “kill” the life that nature is. The last two years of pandemic should have left us with no illusions about this: it is rather we who have put ourselves into the power of a range of non-human agents, some living, some not, some – like this virus – uncannily shimmering between living and dead.

Evolution is a story of life outwitting life, a game that some species – or one species – can win by eliminating all rivals. Yet there are forms of life on Earth, articulated by cultures with different visions than our own, that happen to defy this Darwinian conception by refusing to die or be born. These, and not the impossibility of an immortal god, are what we now face on Earth in the anthropocene – an experience that may better prepare us to encounter life on other planets than do any of the stipulative definitions offered by biology.

Life and Death, pt 2

In On Inductive Reasoning, the Epicurean philosopher Philodemus is thinking about England. “We don’t know if there are living things on the island of Britain,” he says (pritania in Philodemus’ Greek – the island had become known to the Romans so recently that no single spelling could yet command agreement), “but we can infer that, if there are any living things there, they will be mortal.” Philodemus’ point is that all living things which we know of also die; this unanimity should make us confident that living things unknown to us die too. I suppose that the same logic holds for other planets as for other countries. Yet drawing such universal inferences about life on the basis of the single example known to us may give us a poor conception of the diversity of living things existing in the universe. In particular, as I argued in a previous post, an uncritical commitment to the universality of Darwinian evolution forces us to write birth and death into our definition of life, which will thus exclude apparently living things that neither get born nor die.

The literature on the origins of life is full of examples that trouble this definition – no big surprise, since the mechanisms for information storage and copying that power Darwinian evolution on Earth are probably too complex to have assembled spontaneously from inorganic matter. At its beginnings on Earth, life was probably not capable of evolution; evolution itself had to be developed by non-evolutionary means, the play of chance within an already-living substrate. One way of conceiving this pre-evolutionary life is as a series of lipid membranes within which self-sustaining chemical reactions process chemical species selectively admitted through the membrane, accumulating reaction products that are themselves transformed by further synthetic reactions. An ecology of these bubbles would show, not the uniformity of discrete species, but an almost limitless variety of different chemical complexes. Bubbles could merge, mix together their chemical contents, then bud off again. Would that be death? Then birth? Or are those concepts themselves more modern “evolutionary” developments that make no sense in the context of this primeval micellar world?
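
If it helps to picture those dynamics, here is a toy sketch of the micellar world just described; it ignores the chemistry entirely and only models bubbles of contents merging and budding, with invented species names and arbitrary rates.

```python
import random

# Toy model of the micellar world sketched above: bubbles carry sets of
# chemical species and either merge (pooling contents) or bud off
# (splitting contents at random). Species names and rates are invented.
random.seed(1)
bubbles = [{random.choice("ABCDEFGH") for _ in range(3)} for _ in range(20)]

for _ in range(1000):
    if len(bubbles) > 1 and random.random() < 0.5:
        # Merge: two bubbles pool their chemical contents.
        i, j = random.sample(range(len(bubbles)), 2)
        merged = bubbles[i] | bubbles[j]
        bubbles = [b for k, b in enumerate(bubbles) if k not in (i, j)]
        bubbles.append(merged)
    else:
        # Bud: one bubble splits, each daughter keeping part of the contents.
        i = random.randrange(len(bubbles))
        contents = list(bubbles[i])
        random.shuffle(contents)
        half = max(1, len(contents) // 2)
        bubbles[i] = set(contents[:half])
        bubbles.append(set(contents[half:]) or set(contents[:1]))

# No discrete "species," just a spread of different chemical mixes.
print(len(bubbles), "bubbles,", len({frozenset(b) for b in bubbles}), "distinct mixes")
```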

One question, I suppose, is whether we’d call such an assemblage “alive” if we encountered it on another world. Would it meet our expectations for living stuff, or would we write it off as a particularly complex and robust non-living system? The individual bubbles making up the system would probably not be self-moving and the system as a whole would probably not give the “appearance” of life. But the chemical reactions it contained and sustained would surely produce artificially high abundances of certain chemical compounds, while others would serve as nexus points between multiple sets of reactions, so that the whole chemical space would end up following a power law distribution – one statistical feature that distinguishes biochemistry from everything else.
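
For what it’s worth, here is a minimal sketch of the kind of statistical check I have in mind, run on synthetic (Pareto-distributed) abundances rather than real data; the quick log-log fit is purely illustrative, not any standard astrobiological test.

```python
import numpy as np

# Hypothetical compound abundances for a sampled chemical network.
# These are synthetic (Pareto-distributed) stand-ins, not real data.
rng = np.random.default_rng(0)
abundances = np.sort(rng.pareto(a=1.5, size=500) + 1)[::-1]

# Rank-abundance check: a rough power law shows up as an approximately
# straight line in log-log space, so fit log(abundance) against log(rank).
ranks = np.arange(1, len(abundances) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(abundances), 1)

# Per the claim above, a markedly negative straight-line slope is the
# power-law-like signature that an abiotic mixture shouldn't show.
print(f"log-log slope: {slope:.2f}")
```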

Which leads me to raise a practical point, too. For the next couple decades at least, astrobiologists are going to be limited to observing bulk atmospheric characteristics of exoplanets, so most current thinking about biosignatures is focused on figuring out the atmospheric transformations likely to be produced by life. An oxygenated atmosphere in chemical disequilibrium because of the presence of reduced species like methane is one strong candidate, and there’s no reason an ocean full of chemically-active bubbles couldn’t produce such an atmosphere given enough time. A micellar world is one that existing observational techniques could quite possibly flag as “alive,” regardless of whether we’d consider the particular assemblages lying underneath its atmosphere “living” on closer inspection. That gap should make us pause, and should probably make us widen our working definition of life.

We can imagine other possibilities, too. Suppose a living system in which LUCA is a single cell that doesn’t reproduce, but grows – so LUCA lives forever and has no offspring. The first problem such an organism would encounter is that, as it grows, the environmental interface through which it can access materials for survival and further growth – i.e. its surface area – may increase less quickly than its volume. One way to circumvent that issue would be to adopt a flat, spreading habit of growth, a tactic employed by some large unicellular organisms and simple animals on Earth and one dictated, at a certain scale of growth, by the force of gravity. Another would be to develop a core of “dead” material within the cell that serves as a support for future growth, an alternative source of nutrients, and an artificial restriction on runaway increase in volume. Neither of these is, per se, an adaptive response depending on complex structures that would have to arise out of Darwinian evolution (as, e.g., the large central vacuole of some macroscopic single-celled algae would be).
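
To make the geometry concrete, here is a toy calculation of how the surface-area-to-volume ratio behaves under the two growth habits; the fixed sheet thickness and the range of volumes are arbitrary choices, for illustration only.

```python
import numpy as np

# Toy comparison of surface-area-to-volume ratio for two growth habits,
# on the assumption that resource intake scales with exposed surface.
volumes = np.logspace(0, 6, 7)  # arbitrary volume units

# Spherical growth: SA/V = 3/r, which falls off as V^(-1/3).
radii = (3 * volumes / (4 * np.pi)) ** (1 / 3)
sa_v_sphere = (4 * np.pi * radii**2) / volumes

# Flat, spreading growth as a square slab of fixed thickness t:
# SA/V approaches 2/t and stays there, however large the volume gets.
t = 1.0
side = np.sqrt(volumes / t)
sa_v_sheet = (2 * side**2 + 4 * side * t) / volumes

for v, s_sph, s_sht in zip(volumes, sa_v_sphere, sa_v_sheet):
    print(f"V = {v:>9.0f}   sphere SA/V = {s_sph:6.3f}   sheet SA/V = {s_sht:6.3f}")
```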

Speaking of adaptive responses, the strongest case for the universality of Darwinian evolution is that it provides a mechanism for population-scale cybernetic responses to a changing environment that are beyond the capacity of all but the most complex individual organisms. If it gets cold, a human can put on a sweater and a ground squirrel can grow out a winter coat; bacteria have to die off, generation after generation, until a cold-tolerant genotype ends up dominant in the population. It has been thought that some capacity for cybernetic response is essential for “not dying” (organisms) or “not going extinct” (populations). Can a single giant cell – perhaps planetary in scale! – still manage that kind of response? Or, if the cell grows so large that it becomes the ecosystem, does it even need to? Would it then not be the environment in which various organelles and chemical species had, themselves, to struggle for survival? We don’t know what to call that, but it’s not exactly the kind of evolution that happens on Earth.

A good general rule in all the observational sciences is that what we can imagine is just a small subset of what’s out there. This is somewhat true of physics, very true of chemistry (would we have been able to predict the intricacies of carbon-based organic chemistry without actual examples of the stuff all around us to study?) and, pari passu, extremely true of biology. If we can imagine a couple of non-Darwinian types of life, how many more are there likely to be out there in the universe? On this point and others, I expect that extraterrestrial life is going to surprise us.

Life and Death, pt 1

Conversations about astrobiology, the study of life on other planets, depend on our sense of what life is. Some in the field take the definition of life for granted, as though this were a product of another field of study – terrestrial biology – that we could or should port unchanged into the search for life elsewhere in the universe. Yet other scientists regard this move as premature, particularly given the so-called n=1 problem: since all life we can study on earth descends from a last universal common ancestor (LUCA), we know just one token of a type that might end up getting expanded in unpredictable ways if, or when, we find life beyond Earth. All terrestrial life shares certain elements of ribosomal structure that it would be absolutely astonishing to see repeated in an alien life form – but we would hardly want to declare something “not alive” just because it didn’t share this terrestrially universal feature of life. Yet we risk making a similar sort of mistake by demanding that alien life forms meet a rigorous definition of life derived purely from our experience of life on Earth.*

A criterion shared by many stipulative definitions of life is that living things must be capable of Darwinian evolution. This criterion seems to me particularly Earth-parochial and thus particularly likely to pose problems for finding life on other worlds. There are plenty of reasons to think this, not least that practically everyone would judge self-moving things with metabolisms to be alive whether they could undergo evolution or not. But we might also find it useful to reflect on the precise way in which the evolutionary criterion is Earth-parochial, an exercise that could help us to imagine some extraordinary forms of extraterrestrial life.

We may think of evolution in the context of genes, sexual selection and productive mutation as primarily a feature of reproduction, which is to say as associated with birth. In multicellular organisms, evolution does indeed depend on reproduction as a substrate for producing difference – yet this association is far from universal. In bacteria, for example, horizontal gene transfer also serves to create new combinations and configurations on which evolutionary selection can act. Differential reproductive success nevertheless remains the evolutionary mechanism by which a species changes, the next generation of organisms looking like the most successful members of the last one.

This notion of generational turnover, however, also highlights the importance of death as a complement to birth in life capable of Darwinian evolution. Without death as a means of generational turnover, evolutionary change becomes diluted; without death as a means of natural selection, evolution may lose its direction entirely. Violent competition and conflict – “nature red in tooth and claw” – played a much larger role in early evolutionary theory than they do in the modern synthesis, which focuses more directly on reproductive fitness. Yet death remains an essential background feature even of the modern synthesis, especially as it explains evolutionary leaps and structural innovations. The theory of punctuated equilibrium that accounts for these asserts that moments of high mortality are also times of rapid evolutionary change.

Does all life need to reproduce? Does all life need to die? We are not now in any position to say, but we should be willing to entertain negative answers to both these questions. Insofar as the Darwinian criterion limits our thinking on this topic, we should excise it from our definitions of life. Next post, I’ll do some imagineering as to what a non-born, non-dying organism might be like. In the post after that, I’ll try to show how some such organisms already exist after all on Earth.

* Carol Cleland’s The Quest for a Universal Theory of Life is the best modern work on this problem, which has a genealogy stretching back to Aristotle but which astrobiology has made newly urgent.

pearls and people, pt 2

Production and reproduction are ways of talking about our relationship to the future. Both terms address a kind of world-building art, one that aims to create a future in which “we” – defined in a way that our notions of production and reproduction may themselves stipulate – can thrive or at least survive. They entail a providence that takes measures in the present, either to create objects for later consumption or to create people who will mature and, when they reach the age of maturity, will consume, produce, and reproduce in their turn.

We treat production and reproduction as distinctive social processes because of the way that capitalism distinguishes between people and commodities: people as sources of value (by purchasing or, more importantly, by freely selling their labor power), commodities as carriers or signifiers of value. In an agricultural slave society like the Roman Empire, though, the lines between production and reproduction get so blurry that the distinction between the two may not be analytically useful. Romans sometimes treat people as commodities, as carriers of value, while animals appear in Roman natural history as “sources” of value in sometimes a very literal sense.*

Can we trace Pliny’s analogy between (servile) human reproduction and the birth of pearls to this “confusion?” My answer is a qualified “yes:” Pliny has no good economic reason to treat slaves and pearls as essentially different in the way that we distinguish between people and commodities. And Pliny’s Natural History is, among other things, a textbook of Roman economics.

A qualified “yes,” however, because this explanation only shows why Pliny might think of pearls and people as similar kinds of things, both commodities carrying value. We still need to understand why it is that they carry the very different values that they do. Why are pearls precious qua unique (unio), while humans become more valuable when they’re identical (unitas of two or more)?

One obvious answer to that question would invoke the relative scarcity of the two goods: identical people who aren’t identical twins are about as hard to find as pearls, while “unique” human beings are a dime a dozen. However, “scarcity” describes a state of affairs without explaining it. What can we say about the economy or world-system or Empire that produces exactly this set of scarcity relations?

I think that, in an odd way, it mirrors a sense of inside/outside that was as important to the self-esteem of Roman elites as it was to the constitution of the Roman economy. Outside the boundaries of the Empire, Roman wealth could “command” the production or extraction of difficult-to-acquire goods whose value lay primarily in their rarity.** Pearls are both an example of this type of good – they come from the Red Sea and points beyond, they are guarded by sharks, the oysters themselves will bite off your hand if you let them – and a paradigm for it – as something unique, each pearl’s rarity is absolute.

On the other hand, Pliny and other authors regard “domestic” production from within the bounds of the Empire according to an ethic of abundance. The whole point of having an Empire, after all, is to make sure you never run short of anything. This is of course a very Rome-centric perspective: tribute and taxation bring goods from all over the Empire to the city of Rome, but provincials or even small farmers elsewhere in Italy would probably not have felt such a sense of abundance.***

Goods from anywhere in the Empire were available to Roman elites. The two identical slave children provided by Toranius to Marc Antony are paradigmatic of this availability in much the same way as pearls are paradigmatic of the scarcity of “foreign” luxuries. Coming from opposite ends of the Empire – in fact, from almost the utmost boundaries of Roman rule during the late Republic – they demonstrate the power of Roman elites to extract commodities from anywhere that Rome holds sway. So these children are paradigmatic – but not exemplary, since the pair of them, on account of their unitas, are not cheap and abundant but rather rare and scarce.

What is it about these children that puts them in the same realm of value as pearls, those paradigmatic and exemplary items of foreign exchange? One approach to answering that question would begin by asking how large a population, or an Empire, has to be before two unrelated but identical people appear in it. The answer would give us some way of evaluating Pliny’s subjective sense of the Empire’s magnitude – in one sense, “very big.” In another, “about the size of an oyster.” Through its bigness, Rome too becomes capable of producing pearls of great price.
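
To give that approach some numerical flesh, here is a back-of-the-envelope, birthday-problem-style sketch; the per-pair probability p that two random, unrelated people would count as “identical” is a number I have simply made up for illustration.

```python
import math

# Birthday-problem-style estimate: if any two unrelated people have a
# (purely illustrative) probability p of counting as look-alike "twins,"
# how large must a population be before at least one such pair is likely?
def population_for_one_pair(p: float, target: float = 0.5) -> int:
    """Smallest N with P(at least one identical pair) >= target,
    approximating P as 1 - exp(-N*(N-1)/2 * p)."""
    pairs_needed = -math.log(1 - target) / p
    # Solve N*(N-1)/2 >= pairs_needed for N.
    n = (1 + math.sqrt(1 + 8 * pairs_needed)) / 2
    return math.ceil(n)

for p in (1e-6, 1e-8, 1e-10):
    print(f"p = {p:.0e}  ->  population of roughly {population_for_one_pair(p):,}")
```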

Now let’s return to the problem of building a future. What’s the future towards which this parable of production and reproduction points? A full response to that question, which would have to take into account the self-conscious pastness of Pliny’s anecdote about Toranius and Marc Antony, as well as the Civil War toward which it points, is beyond the scope of this blog post. Nonetheless, the broad outlines of an answer are already visible. The Empire takes care of the survival of Roman elites; their task of self-production, which the Empire also enables, will be to acquire unique luxury goods and thereby to make themselves “unique” in a sense that transcends the uniqueness which Pliny already takes to be part of the human condition.

* Pliny says of a number of animals whose body parts circulate as luxury goods in the Roman world that they “know for what reason they’re hunted” vel sim., and that they act accordingly – either to surrender the valuable part or to hide it from human beans. Here and elsewhere in the Roman animal imaginary, critters share our economic sense of what’s valuable.

** But see Matthew FitzPatrick’s “Provincializing Rome” for a discussion of how this relationship might have looked from the perspective of Rome’s trading partners in the East.

*** The major exceptions to this division between scarce foreign and abundant domestic production in Rome are gold and silver – precisely the metals that constitute Rome’s money supply. I’ve seen some interesting preliminary work on this problem from a colleague and will share it here when it’s published.

pearls and people, pt 1

There’s a story in Procopius’ Wars about a pearl that ended up in the crown of the Sassanian Shah Piroz. It’s a pretty big one, visible from the surface, but guarded, as pearls usually are (Pliny testifies to this) by vicious sharks. A satrap, the Shah’s agent, tracks down the best pearl diver he can find and says “if you bring back that pearl, you can have whatever you want.” The diver takes one look at the shark and figures out how things are likely to go down. “Take care of my kids,” he says, then goes down for the pearl. Going down, he gets past the shark by a stratagem; on the way back up, though, the diver gets caught by the shark and devoured. With the last of his strength, he pitches the pearl up onto the beach. No word on whether Piroz keeps up his end of the bargain.

That’s one way to trade pearls for people. There are others, too. The genre of trade is one that I’m interested in because it speaks to how the Romans thought about problems of production and reproduction, which–since most goods that circulated in the ancient economy came from animals or plants, which produce as they reproduce–the Romans tended to think about together.

So caught were the Romans within this metonymy that they understood pearls not (in “correct” modern terms) as accidental byproducts of the oyster’s immune response, but as a kind of oyster baby. That conception obviously lies behind Pliny’s story about where pearls come from:

“Origo atque genitura conchae sunt, haut multum ostrearum conchis differentes. has ubi genitalis anni stimularit hora, pandentes se quadam oscitatione impleri roscido conceptu tradunt, gravidas postea eniti, partumque concharum esse margaritas pro qualitate roris accepti. si purus influxerit, candorem conspici; si vero turbidus, et fetum sordescere; eundem pallescere caelo minante.” (NH 9.54.107)

“Their origin and conception is from shellfish, a kind not much different from the oyster. When the time of year for conception arrives and stimulates them, they are said to open up with a kind of yawning and are filled with a dewy impregnation, such that they later become pregnant and bear young. The offspring of the shellfish is a pearl that has the character of the dew that was taken in. If it was pure, the pearl will be white; but, if turbid, the offspring too will be dirty. But the same will wax pale if the sky threatens a storm.”

Nor is this language of birth and conception merely analogical. Toward the end of the passage just quoted, you can see that Pliny uses it to generate an explanation of the varied coloration that differentiates some pearls from others. Pliny actually spends a couple more paragraphs drawing connections between weather patterns and pearl colorations according to a logic that will remind classicists of the Roman belief that children will turn out to look like whatever their mothers were looking at (or imagining) at the moment of conception.

If oysters generate pearls by recapitulating the sexual role of Roman women, then women complete the circuit by wearing pearls. For Pliny, this is a pinnacle of feminine luxuria – a word that covers the meanings of English “luxury” and “lust.” Women establish their status and their identity by wearing pearls. Biggest is best, but each is unique: the Latin word for pearl (margarita being a borrowing from Greek) is unio, which etymologizes as “something of which there is only one.” So, for the Roman woman, in other respects so identity-poor (e.g., while male children received actual names, female children were just called the feminine form of their family name, so that in effect all the daughters of a given family had one name to share), acquiring a pearl was also a way of acquiring a self.

Of course, the “normal” way of acquiring a self was to be born. That’s something else that humans share in common with oysters, according to Pliny: they produce unique offspring. But human children, unlike oyster babies, are things without value. As Pliny and plenty of other Roman authors attest, children could be bought (most of the time) very cheaply. If you were willing to bear the cost and the trouble of raising them, you could buy a household full of slaves for a pittance.

The really valuable thing (and not coincidentally, by Roman standards, a mirabilium) was to find two human children who weren’t unique. Pliny has an interesting anecdote about that, too:

“Toranius mango Antonio iam triumviro eximios forma pueros, alterum in Asia genitum, alterum trans Alpis, ut geminos vendidit: tanta unitas erat. postquam deinde sermone puerorum detecta fraude a furente increpitus Antonio est, inter alia magnitudinem preti conquerente (nam ducentis erat mercatus sestertiis), respondit versutus ingenii mango, id ipsum se tanti vendidisse, quoniam non esset mira similitudo in ullis eodem utero editis; diversarum quidem gentium natales tam concordi figura reperire super omnem esse taxationem” (NH 7.12.56)

“Toranius the slave dealer sold two children to Antony the Triumvir, one born in Asia and the other born across the Alps, as though they were twins: so great was their identity (unitas). Later, when the fraud had been discovered through the differing languages of the children and Toranius was assailed by a furious Antony, who complained among other things of the high price (for the children had been sold for two hundred sestertii), the wily slave dealer replied that he had sold them at so high a price just for that reason, since there was nothing miraculous about the likeness of children born from the same womb, but to discover children so alike belonging to two completely different peoples was a marvel beyond all price.”

I’ll come back to thinking about this passage, and about the broader connection between pearls and people, production and reproduction, in a future post.

Zoroaster: the history of an idea?

I just read a really interesting paper by Abolala Soudavar that offered a revisionist account of the history of Zoroastrianism in which the Achaemenid Empire (post-Darius in particular) appears not as a passive receiver of Zoroastrian ideology but as an important agent in shaping that ideology. That way of approaching the problem fits well with suspicions I’ve had about certain structural features of Zoroastrianism, like the importance of truth-telling, that feature prominently in Darian propaganda and in post-Achaemenid Zoroastrianism but not in the oldest Gathas. When and why does lying become the worst vice for Zoroastrians, when the earliest parts of the tradition depict it as one vice among many, and not even the most serious of them? It would certainly be easier to answer that question if we could decide that Darius’ use of Zoroastrian ideology in the Behistun Inscription was actually an abuse that later came to have the force of tradition behind it, rather than assuming, as most scholars do, that Zoroastrianism already existed in something like a completed form in the 7th century BCE.

The frustrating thing about Soudavar’s argument, though, is that it spuriously connects a totally appropriate skepticism about conventional narratives of the history of a religion to (what strikes me as) an equally inappropriate positivism about that religion’s founder. Soudavar thinks the reason Zoroastrianism hadn’t been “completed” by the reign of Darius was that Zoroaster himself lived in the late 7th century, which is to say that he was practically contemporary with the earliest Achaemenid rulers. From this it would follow that the Avestas in their entirety basically postdate the Achaemenid Empire and were written in the context of, not to say with the goal of supporting, that empire.

There are serious problems with that proposal from a linguistic standpoint: despite Soudavar’s protestations to the contrary, the Gathas, which are the most archaic part of the Avesta, must be very old indeed to show the similarity (in grammar, lexicon, idiom and even formula) that they do to Vedic Sanskrit. Beyond that, scholars have pretty well established that the Avestas show a linguistic stratification comprising at least two and possibly more dialect strata of composition. That would seem to give the lie to Soudavar’s explanation for the (to him apparent) archaism of the Gathas, which is that they’re actually written in an archaizing dialect by a prophet intending to impress. If the archaizing dialect was actually impressive, and actually available to a 7th-century writer, then why doesn’t the whole of the Avesta show a uniform “archaizing” flavor? The reason is rather that not all crafters of Avestan texts were equally competent in the Gathic dialect, because not all of them lived at a time when it was (at least close to) the spoken language; some of them lived much later, and so the composition of the Avesta has to be stretched out over centuries if not millennia. Soudavar’s compressed timeline for the authorship of the Avestas is therefore basically a non-starter.

Then why pursue that thesis in the first place, since it involves so much special pleading and is actually incidental to Soudavar’s substantive claims about the relation between Zoroastrianism and the Achaemenid Dynasty? I think it must be that Soudavar still accepts received ideas about the identification of Zoroastrianism with the founding prophet after whom this religion (in the West at least; compare MP mazdesn, which just means “worshiper of Ahura Mazda”) ended up getting named.

What is the history of that connection, the history of an idea (which may be, to give away the ending, all that Zoroaster is)? As an etic interpretation, it goes pretty far back in the West, although not so far as you would think: as far as I know, the earliest mention of Zoroaster by name in Greco-Roman literature is by Pliny the Elder in the first century CE. He identifies Zoroaster as a wizard, not a prophet, and he ascribes this identification to Hermippus, a Greek scholar working some two centuries earlier. That, you could say, is just the kind of misunderstanding we should expect from Greeks and Romans who were more interested in the idea of Persia as a mysterious oriental other than in knowing what actually went on there. However, Herodotus – writing in the 5th century BCE and extremely interested in details about Persian history – makes no mention of Zoroaster at all. Nor do the fragmentary remains of Ctesias, a physician who served at the Achaemenid court and would thus (if Soudavar’s views on the early spread of Zoroastrian belief are correct) have spent decades in an intensely and evangelically Zoroastrian milieu.

Greco-Roman interest in Zoroaster as prophet peaks among the gnostic-tending Neoplatonists of the 4th c CE and after, a set of sources that, in parallel contexts, scholars have rejected out of hand as transmitting historical realities from a millennium before. The same milieu – indeed, the name of Porphyry stands prominently in both narratives – also provides us with our most detailed accounts of the life of Pythagoras, but nobody now accepts that material as genuine. Of course, it reflects the accumulated results of centuries of mythmaking around a philosopher who probably left no written testimonia of his own, a source of error that would only have been compounded by the barriers of language and culture separating a Porphyry from any notional Zarathustra. More recently, we’ve begun to recognize another set of biases that distort late antique accounts of Pythagoras – the interest of writers in finding a “substitute Jesus” that could enable their schools of philosophy to compete with Christianity on an even footing. Zarathustra could serve that purpose just as well.

The difference between Zarathustra and Pythagoras is that, in the former case, we’re supposed to have sources “in the original language” that go back further in time and preserve a notionally independent tradition. But how true, really, is the conventional wisdom on this point? The majority of extant literature in Middle Persian, which certainly does represent Zoroaster as a prophet, dates no earlier than the 6th century CE for the simple reason that the alphabet in which it is written did not exist before that point. It may have circulated in oral form earlier than that, but ask any anthropologist about the lability of oral narratives in an oral culture.

The earliest datable evidence for Zoroaster’s place in the history of Zoroastrianism comes from early Sassanian-era inscriptions. These, indeed, reflect a consensus view among historians, for which there is ample evidence, that the Sassanians tied their imperial rule to Zoroastrianism as a legitimating institution. What remains unclear is the status of Zoroastrianism, and of Zoroaster particularly, in the age before the Sassanian revolution of the 3rd century CE. The religious dimension of the Parthian inscriptions with which I am familiar is minimal. When we go back further, to the Achaemenid royal inscriptions, we find a forceful assertion of religious identity from which, however, the name of Zoroaster is entirely absent. Darius credits his victories to the aid of Ahura Mazda, whose worship he promotes; he could thus be correctly said to be a mazdesn, but not a Zoroastrian.

The picture is further complicated by a lexical feature of epigraphic Old Persian which Soudavar notes: the word in that language for god, baga, is not the one used in every Zoroastrian context known to us (yazata in Avestan, yazd in Middle Persian). Darius moreover presents Ahura Mazda as one baga among many, whereas Middle Persian writers do not to my knowledge give yazd as an epithet to Ohrmazd; to do so would be somehow to diminish him. Since tradition traces yazata as a lexeme back to Zoroaster’s own usage, we then have to admit that Darius talks about the gods in a language distinct from the one that Zoroaster himself is supposed to have employed. How likely does it seem, in light of this, that the religion of the Achaemenid kings took its inspiration from Zoroaster? Or that Zoroaster even existed as a source of authority in the same religious sphere?

Critically, then, the evidence points not to skepticism about the conventional dating of Zoroaster’s floruit (Soudavar’s position), nor even (though it supports this too) to doubts about whether Zoroaster ever existed at all. The questions raised here are of a different order, because it turns out that the nature of our archive doesn’t even allow us to draw positive conclusions about Zoroaster’s existence one way or the other. What the archive will allow us to do is to trace the history of Zoroaster as an idea. And we may even conjecture that the idea of Zoroaster was invented substantially later than the date of 618 BCE that Soudavar associates with the prophet’s life.

The universal boss

As the Trump Era draws to a close, that expression sounds more and more like wishful thinking. We worry that Trump’s “base” may continue to support him independent of his political office; we watch the emergence, within modern Republicanism, of a second nazi party whose members set the unwritten law of obedience to the fuhrer over any written legislation. But it would be history repeating as farce if any of these predictions came true. At least Hitler had some gravitas; Trump’s petty-grudginess and general clownism make his charisma a continuing mystery.

Sociological explanations help us understand how the Trump bloc formed, historically, out of poor whites who saw no higher calling for politics than the expression of resentment and rich, dumb crackers following through on their long-standing, canine obsession with lowering the tax rate. All this is a consequence of a longstanding national decay with its roots in the Reagan-era promise to make Thatcher’s quip that “there’s no such thing as society” come true. On the basis of that unrealizable dogma, we’ve built perhaps the most hellish society possible in an era of general abundance. In 2016, voting for Trump was a way of voting for that trajectory while trying to vote against its consequences.

Yet Trump’s appeal was clearly then, and is clearly now, deeply personal. Loyalty to Trumpism is loyalty to Trump alone, as the outcome of the Georgia senate races showed and as Hawley, Pompeo et al are going to find out to their disappointment over the next four years. Trump’s supporters find a solution to their political and social anxieties, not in any particular set of policy outcomes (an intervention of which they had already long ago despaired) but in the person of Trump himself. Therein lies the mystery of his personal appeal, a mystery deepened by the objectively observable nastiness and unpleasantness of his personality. Trump’s fans see a version of Trump that looks to the rest of us like a hallucination.

The problem is that we’re looking across a genuine cultural gap, disguised by a shared language and national citizenship but frankly as deep as the one that separated the Aztecs and the Spanish conquistadors. Every attempt to decode that culture of the other and its symbols is speculative at best.

In that spirit, I suggest that Trump’s charisma is best understood as that of the boss within a celestially-projected corporate/office structure that has taken hold, among a particularly decadent set of trailer trash, realtors, and waterbed salesmen, as an orienting vision for American culture. For them, “the office” is a locus of socialization, social security, and (in the case of Trump’s most loyal demographic, the non-college-educated small-town rich) personal dominance. They would like to see this cultural structure become the norm nationwide, an aim that only secondarily involves political means. What every office absolutely requires, however, is a boss, and Trump projects a version of that character deeply familiar, even dear, to small-business tyrants everywhere.

We outside the Trump movement are excluded from participation in this imaginary, universal office by our sense of professional independence, our well-worn cynicism about working for the man, or both. Lots of us – doctors, lawyers, professors – sacrificed the best years of our lives getting educated enough not to have bosses. Minorities especially are aware that the organicist, familial ideology of the office conceals an exclusionary, exploitative machinery that crushes and discards those at the bottom of the corporate ladder. We thus form an accidental coalition against the businessification of America.

This coalition is on the political back foot, despite representing a decisive majority of Americans, for three related reasons. The first is that our government carries out its most basic obligations – for example, health care – through the medium of the workplace, lending credibility to the delusion that offices are places that care about us. The government has thus abrogated its role as a structure for social organization; it can’t even organize consent around the results of elections and the peaceful transfer of power.

The flip side of this abrogation is that other forms of organization gain power in proportion as the state loses it. This imaginary “business of America” has been well-positioned to acquire the government’s lost authority even as it works busily to undermine that authority.

Trumpism’s third source of strength is a kind of evangelical zeal that wishes to convert by incorporating. Trump’s supporters know what they want to do with the rest of us. It may look like they want to kill us, but actually they just want to make us bend the knee; the threat of violence is a means to that end. There’s a kind of sacred delight in imagining doctors, lawyers, professors – the sort of people who boss other people around all the time on the basis of their professional qualifications – forced to start off in the mailroom of America the business, while people who boarded the Trump train early sit comfortably in the penthouse offices.

Trumpism has thus developed an organizational strength that will be difficult to overcome, but which is entirely dependent on the continued presence of the boss at the top of the pyramid. There can be no question of a replacement for Trump. Basic to the experience of having a boss is the arbitrary character of the boss’s authority, consummately ascribed rather than achieved. A boss can’t be elected by employees, because that would make him beholden to them. Trump’s death or imprisonment, whenever it comes, will reveal America the business for the phantasm that it is. What happens then to the energies it has unleashed or to the deeply-held cultural symbols that produced it is anyone’s guess.

Governmint

The premise of Earth 2100, a 2009 made-for-TV movie that I’ve read about on wikipedia but never actually seen, is basically that climate change will end up destroying the world in a more low-budget way than it does in The Day After Tomorrow: flooding, drought, migration, etc. are just too much for civilization at the national level to deal with, so eventually it collapses. What gives the US government that final, toppling push – or what reveals that it has already for some time been defunct – is a pandemic.* When a disease she had earlier helped the CDC to contain starts ravaging the US without any federal agency doing anything at all to slow its spread or mitigate the social chaos that results, the film’s narrator, Lucy, concludes that this must be because there just are no federal agencies anymore.

Well, now it’s happened – many years ahead of schedule – and some people are drawing conclusions in much the same vein that Lucy did. Outside observers call the US a failed state, while internal critics point to this as the moment when our national institutions, weakened by years of Republican underfunding and obstructionism, finally cracked.

Yet one would hesitate to say that the federal government has “disappeared,” even if it’s given up on the task of keeping governed populations alive – the defining feature of modern governments being, as Foucault and others have claimed, to make those population numbers go up or at least keep them from going down. The administrative aspects of government – taxation, elections, and most grotesquely of all law enforcement – have continued to operate without interruption since the beginning of the pandemic. So we still experience government as government in the direct, unmediated ways that we have all our lives. But the flipside of these, the mediated experience of being cared-for and protected by a nearly omnipotent nation state, seems to have gone by the wayside.

What is this new form of government? Something closer to feudalism in its ethic, an apparatus of exploitation that regards its victims as deserving of their fates? Yes, but also something that’s always been latent in American democracy for most Americans. The ones to whom it was patent, ethnic minorities and especially blacks, have long been used to getting nothing back in exchange for the gold, votes and black lives that the government demands of them. So we could call what we’re all now experiencing “universal racism” or, to be more historically specific, “the lived black experience.” Something which, because it has heretofore only afflicted minorities, doesn’t yet have the dignity of a classical-sounding name.

In La France périphérique, Christophe Guilluy claims that the key to understanding 21st-century politics is the psychologizing maxim that “no one wants to become a minority.” Well, that’s probably true, but it hides this other, less well-grounded claim: that to make someone else a minority means to be a majority yourself. Because what people really want is to enjoy the privileges and rents that come with being part of a majority. That’s what Trump or Le Pen voters are afraid to lose, and according to a zero-sum understanding of the terms that probably derives from electoral politics, they think the sufficient move to protect their majoritarian status is just to force everyone else into minority. It never really occurs to these people – or, to be fair, their liberal-democratic opponents – that we could all become minorities.

Let’s call it Governmint, the light, refreshing substitute for governance. We’ll see if it’s only an artifact of this passing (knock on wood) crisis, but I doubt it.

* of something called “Caspian Fever.” The names of diseases are always catchier in fiction; in real life, we’re trapped between a bureaucracy that wants to incorporate by naming (COVID-19 vel sim) and dumb assholes who use names to trivialize or distance a disease they wish would go away (‘Rona, The China Virus, etc.)

Explore

I’ve been working with some other people in an astrobiology initiative on problems related to the ethics of space exploration. That made me think about the word “exploration” itself (dissociating during a zoom meeting, searching out the nearest handhold to avoid ego death, etc.) Soon the word started to look strange. I could figure out its etymology easily enough – ex+plorare, “cry out” – but then I couldn’t for the life of me understand how you got from the etymological meaning to the one it has now (and, actually, had in classical Latin too). Luckily, Festus’ 2nd c etymological dictionary has an entry for it:

“antiquos pro exclamare usos, sed postea prospicere et certum cognoscere coepit significare. itaque speculator ab exploratore hoc distat, quod speculator hostilia silentio perspicit, explorator pacata clamore cognoscit.”

(The ancients used it as a synonym for “shout,” but later it started to mean “look into” and “get to know for sure.” So a speculator is different than an explorer in this way, that a speculator looks into the enemy’s business in silence, an explorer comes to know pacified things while making noise.)

So the notional connection between “crying out” and the modern sense of the word “explore” is that one investigates pacified affairs by shouting out. Or one shows that one comes in peace by shouting. Or one shows one has pacified (i.e. conquered) something by investigating it noisily. Festus’ dictionary survives only in epitomized form, so we’re missing some important diacritics between these alternatives. It also bears mentioning that the distinction to which Festus points here seems to have been ignored by most Latin writers. It’s good to think with nonetheless, especially because English usage does respect something like Festus’ distinction between the words “explore” and “scout.”

In English, like in Latin, exploration is a cognitive procedure for getting new knowledge. What distinguishes it from scouting is that, notionally at least, one goes about it openly and in the service of universal knowing – though any reader of Burton or Thesiger will know how much duplicity and stealth went into the exploration of Africa and the Near East. Another way of putting it is that one scouts in fear over territories where enemies have the power to stop you, while one explores in confidence where the land is pacata – whether that means peaceful or pacified. Scouting and exploration are spatial concepts that apply to territories, and one could draw a map that divided the world up between them. Scouting would correspond to “Europe” – with due allowance being made for the admission of North America at one end and the removal of the Ottoman domains at the other – while exploration would characterize everywhere else, that is to say the nations and peoples seen from an anglophone perspective as ripe for colonization.

It matters what we plan to do in space. Are we quietly scouting a territory we suspect is hostile? Are we noisily exploring something that we regard as already ours? I ask these questions knowing that most of us (especially the people that matter) have already settled on the second alternative, but hoping that the lag between our futurism and its realization may make room for meaningful reflection. The problem is, after all, not just ethical but acutely pragmatic. Exploration expresses a confidence in our knowledge, our abilities – our power, in short – that may be not at all justified. The nascent field of astrobiology might be able to help us resolve those pragmatic uncertainties, given time. One more reason not to rush to the stars.

Annihilation, pt 3: or, what was democracy?

After a primary season and a general election that foreclosed anything really radical happening, you might wonder what’s the point of talking anymore about self-annihilation in a political context. As it turns out, the psychoanalytic observation that nobody really wants to be different from what they are is also a good way of predicting people’s votes: democracy is incompatible with really revolutionary, self-destructive change.

That opposition seems to be written into the DNA of Anglo-American, “representative” two-party democracy, where the parties in a kind of pas-de-deux decide what political conflicts can be salient to the formation of our egos. Because the eruption of new alternatives actually reflecting people’s desires is basically impossible under this duopoly, parties are for the most part insulated from their own policy accomplishments and failures (e.g., G.W. Bush won re-election in 2004 even though anyone could see by then that the Iraq War was an unmitigated disaster). They reap electoral rewards for creating and stabilizing the personal identities of their followers, a process that has grown faster and more intense as social media increasingly replaces Actual Life.

This formation, in which polarization of identity maps onto polarization of political loyalty in a self-reinforcing and self-stabilizing way, produces an actual political stasis – and in a way that shows the semantic overlap between modern and ancient senses of the term. In the language of the Ancient Greek polis, stasis is a standing-apart, the formation of opposed parties, to be dreaded because of the violence it produces but also because, despite the apparent flurry of activity that composes civil strife, the city “stops working.” We, like the Greeks, grind to a halt when we stand apart.

In this sense – that stasis is part of its normal functioning rather than, as in the Greek case, an aberration – American democracy is profoundly conservative. A conservatism of the ego, sometimes even compatible with progressivism in the sphere of policy: a conservatism committed, first and foremost, to making its citizens sick, to engendering a nation of withered souls addicted to the dopamine rush of victory.

To continue that contrast, ancient Athenian democracy could well be interpreted as part of a therapeutic culture, designed to cure citizens of their unhealthy enthusiasms. For Athens, tragedy is law is politics – a series of homologous arenas in which citizens can see their desires represented, can see the beings they want to be come into being and pass away, can even transform themselves. Athenian democracy is a Dionysiac spectacle of self-destruction, which probably explains Plato’s opposition to it: Plato is an early and thoroughgoing advocate of governance, and a democracy of the Athenian sort is basically opposed to the notion of governability.*

American democracy, by contrast, is all about ensuring a continuity of governance. Commitment to the winner-takes-all electoral contest is (or at least has been) part of identity formation on both sides of the aisle, and that contest is only winner-takes-all if we agree to act like the winners are our legitimate government. We shout our desires into the void; in exchange for not having to live those desires, we pledge our lives and loyalty to a government that ignores them.

Yet all this takes place against the background of an increasingly Dionysiac culture, one where the claims of being over becoming seem more and more dated. That’s the deep connection between fluid gender identity and the equally-fluid substance of QAnon conspiracy theories, two forms of belief and praxis that otherwise seem not to overlap. It might seem, then, that the democratic model of governance is nearing the end of its shelf life. But what comes next? I’m afraid that we’ve only put off, not obviated, the necessity of answering that question.

* but, looked at from another perspective – that of the free/slave division – Athens does bear comparison with modern American ego-preserving conservatism. It bears asking whether true democracy really can extend to every plane of social struggle.