Giant Meteor 2020

There’s nothing more boring than people talking about their dreams. I’ve got to talk about this one, though, because it’s apparently not just mine. Last night, I dreamed that the Earth was about to get hit by a meteor big enough to cause human extinction. I had a line on a seat on a spaceship offworld, but I’d missed the bus to get to the launch and I wasn’t sure I could make it in time by cab.

I woke up before the meteor hit, so I never got to find out whether I made it or not. That’s not always the way these dreams go: so far, I’m about 50/50 on catching the spaceship. The other half of the time, I just get blown up along with everyone else.

Is this a collective dream? At least it’s a political one. I always have it when I’ve been paying too much attention to elections. One time, my ride on the spaceship was supposed to be courtesy of a community organizer I knew. Instead, he showed up with a rowboat. That was one of the dreams where I got blown up.

Is it a collective dream? Yes, to judge from the bumper stickers that were everywhere during the last presidential election: “Giant Meteor 2016.” Extinction is better than politics. That’s not exactly a tongue-in-cheek sentiment, especially if you think that politics is maybe now just a longer, more terrifying route to extinction anyway.

I don’t think we’re going to see those stickers get updated for 2020, even if Biden wins the Democratic nomination. Politics is serious now; we grew up somehow in the last four years, and we don’t treat Trump like a joke anymore. The most important thing is beating him.

This is actually a dangerous attitude, though – not the taking politics seriously, which is entirely appropriate, but the extending of that seriousness uncritically to the presidential contenders. The fact that we’re ready to do that makes us smell desperate, which is what got desperate grifters like Seth Moulton to jump in the water. Campaigns that we’d otherwise instantly dismiss as shameful vanity projects now advance, under the cover of seriousness, onto the national stage. Pete Buttigieg ought to be starring in a Little Rascals reboot; instead, he’s polling a bit shy of ten percent.

Buttigieg won’t win, but likely as not the end of the primary process will spit out a candidate as vain and empty as he is – someone like Biden, who won’t do a damn thing to roll back Trump’s most vile policies but who’ll make us feel better about them by not talking like a brain-damaged Nazi. You can bet Biden isn’t going to abolish ICE; he’ll just make us forget that we’re running concentration camps for children.

Is that “seriousness”? No, it’s seriousness as performance, both on Biden’s part and ours. In a Trump/Biden race, we’ll be asked to make a choice between putting a smiley face on fascism or leaving the mask off. If we pretend that’s a real choice, then that’s the choice we’ll keep getting sold for the foreseeable future.

I’d want Giant Meteor to be running in that race. Hell, I want Giant Meteor to be running in the Democratic primary. New debate rule: any candidate who can’t beat mass extinction in all the national and state polls doesn’t make it onto the stage. If Giant Meteor wins, NASA has to go find an appropriately sized rock and smash it into the Earth. If it loses, put it back in the next presidential election cycle. We should always get the chance to choose the fast route to oblivion, just so we don’t take the slow one by accident.

Burn Down the Small Towns

Some truly horrifying shit this week from Phillip Jeffery, writing in National Affairs and calling for a new national cultural agenda. Like a lot of us, Jeffery thinks that the free-market modernism pushed covertly by the CIA and then openly by the NEA was a bad thing, both for the world and for the arts. Where we part ways, though, is why. I think that government sponsorship of the arts tends to make sure that the arts aren’t ever going to be revolutionary, either aesthetically or politically. Jeffery’s problem with the NEA is, instead, that it’s ended up enforcing a kind of “cosmopolitan” (with all the nasty connotations that word now carries on the far right) high culture on a small-town America that, basically, knows better.

The absolutely deranged solution that Jeffery proposes (with an obligatory lib-baiting nod to the WPA) is that the NEA start directing its grants explicitly toward small-town artists, artists whom Jeffery expects to produce an anti-cosmopolitan art that’s in line with Donald Trump’s agenda of American renewal. You (and he) are already imagining the national nightmare that would ensue: bookstore shelves stocked with Mitch Albom fanfiction, art museum walls decked out in nothing but patchwork quilts.

Can we have a reality check or two? First, America already produces a whole bunch of not-exactly-nativist but surely not cosmopolitan artwork. Americans make weird music and write weird books, both pitched to a domestic audience but enjoyed abroad because the USA (and not France, pace one of Jeffery’s weirdest claims) still leads the dance of global culture like it has since the fifties. The vast majority of this art comes from cities, because that’s where the vast majority of Americans live.

Second, there’s not actually any small-town culture to support. I say this as someone who was born in one small town and has lived in a bunch of others. Locally, there’s absolutely nothing but dinner theater and cover bands; the vast majority of the culture that small-town America consumes gets beamed in from above; the monoculture is a tall tree that’s already killed all the rural saplings struggling in its shadow. Jeffery notes that the WPA focused on rural culture because that culture already seemed to be disappearing in the 1930s, but doesn’t draw the obvious consequence that rural culture is, by now, long gone. What’s replaced it is mostly Fox and Sinclair Media and Clear Channel; if you asked small-town America to make art now, that’s about what you’d get.

I suspect that’s also what Jeffery wants, and that this plea for the revivification of small-town culture actually disguises a right-wing project for propaganda from below. Even thus conceived, though, it won’t work: the impulses that Jeffery wants to capture are already getting expressed on Facebook, and posting is a lot easier than making art.

Still, it’s worth asking what a left culture policy to counterbalance this one would look like. A good start would be to adopt an idea mooted by the late Paul S. Martin, a distinguished Cenozoic paleontologist. In his final book, he suggested that the Midwest and Great Plains regions of the USA should be rewilded, first with buffalo and then, when technology catches up to our ambition, with cloned woolly mammoths. I propose extending this policy nationwide and just bulldozing every town with a population of fewer than 30,000 that isn’t also a near exurb of a major city. With that range of habitats, we could probably bring back the full range of Quaternary North American megafauna. Giant armadillos, giant sloths, cute little horses. Doesn’t that beat going on a family vacation to York, Nebraska?

The cultural angle here is twofold. First, of course, paintings of mammoths, but then also the idea that getting rid of small towns would at this point be doing artists a big favor. Small towns are a repository for the most atavistic, backwards ideas in the national discourse; because one of those ideas is that small towns represent “real America,” they also get assigned an importance in the national conversation (why does Iowa get to go first in the presidential nominating process?) that’s totally out of proportion to their share of the national population, which is around 10%. The apparently always looming threat of small-townism has the effect of making really radical socio-economic transformations of the sort that most Americans actually support seem impossible, so we end up stuck with a neo-liberal compromise formation that, even if it doesn’t help most urban residents, at least holds back the fascist tide.

But what if we could see that the fascist tide was really more of a trickle? What if small-town ideas had a place in the cultural conversation that actually matched their place in American demography? People would feel freer to experiment, I bet, in art as well as in politics. We might get more novelists like Thomas Pynchon, writing imaginative grand Americana, and fewer mediocrities like Jonathan Franzen stooping to the prejudices of an imaginary small-town audience.

That’s maybe not too practical a recommendation in the current political climate. On the other hand, the actual climate is changing in ways that make what I’m suggesting (minus the megafauna) basically inevitable. The small towns of the Southeast, Midwest and Great Plains are going to be uninhabitable by the end of the century, and small towns in the Deep South are going to be a lot more unpleasant to live in (where they’re not underwater). At least the end of the world comes with a small silver lining.

Society Must be Defriended

Margaret Thatcher famously said that society didn’t exist. What she meant by that isn’t what she literally said – Thatcher was as much for the reactionary defense of “social values” as any right-wing cracker of that era – but rather that society didn’t have any interests worth defending. That’s true in a sense, but only after you’ve bought into a lot of other mythology about interests. In particular, the unspoken thought behind Thatcher’s quip is that individuals do have interests worth defending: in the Thatcherite idiom, that they “exist.”

That basic assumption is one that it’s hard for Westerners – and not just econ majors – to think past. The problem is that we have an idea in our heads that society is a kind of force constraining individuals not to act on their “interests,” which are given. That image of society can probably be traced back to the Enlightenment, which generalized a view of society as contract that had been advocated by Hobbes and raised, but dismissed, by Hobbes’ main bugbear, Aristotle. On that view, society exists as a compact of mutual self-defense by which its members agree to a curtailment of what can legitimately count as their interests so as not to fall victim to the interests of their neighbors. Adam Smith took over this view in the main, only adding that, actually, the social contract doesn’t need to curtail our interests at all, since the universal pursuit of self-interest creates a self-sustaining system. Mainstream arguments between individualists and collectivists have been carried on in those terms ever since: the interests of society (synthetic) on the one hand, those of individuals (natural, given) on the other.

I’m thinking about this now because that’s basically also the social image that E.R. Dodds imposes on the Greeks in The Greeks and the Irrational. By sheer coincidence, it happens to be close enough to the Greeks’ own social image not to be all that deeply distorting, but it’s worth asking where the differences lie. Greek thought pictures society as a koinonia, a res communis, a community in the somewhat archaic sense of “that which one shares in common.” That means, not that society is what we all have in common, but that the things that we have in common – materially, spatially, linguistically or, at the utmost level of abstraction, teleologically – are what produces us as a collective. Is there such a thing as society? Yes, on this depiction, of course there is. Are you going to tell me there’s no such thing as paving stones? As public slaves? As the Greek language?

At about the same time as Thatcher was doing a set of activities that she characterized as privileging individual interests over the (non-existent?) interests of society, the anthropologist Marilyn Strathern was making a similar kind of claim from a very different perspective. Looking at Melanesian subjects, she came to suspect that perhaps there really was no such thing as society – not for them, at any rate. While we confront society as something external and given, most Melanesian peoples regard it as the object of their production – as actually needing to be produced constantly through practices of exchange, among which the kula made famous by Bronislaw Malinowski would have to be numbered. That gift-exchange system did not, contrary to what Malinowski tried to show from a functionalist standpoint, allow a given society to survive; instead, it produced that society in a way that ensured the survival of its individual members.

That point is in a way more obviously true when it comes to Greek practices of xenia, especially in the archaic period and to the extent that Homer counts as evidence for these. Odysseus justifies his request for gifts from the Phaeacians not by making reference to a pre-existing society that unites them, but by appealing to the Phaeacians’ own interest in receiving good treatment if they, in a similar case, should ever wash up on the shores of Ithaca. Xenia doesn’t preserve a society, it produces one. If xenia could seem a bit atavistic from the point of view of Aristotle, that was only because the classical social image represented society as an accumulation of stationary objects rather than moving ones. It was still the things that produced the relations.

If we could learn to think about our own society in a slightly more material way, this would probably be salutary. We would not, for instance, ever be in danger of falling under the Thatcherite delusion that you can help people out by taking their side against “society.” After all, society is just made of the things that we hold in common. From an anthropologically-informed standpoint, that’s like saying you’re going to help someone out by setting their house on fire (or, in Thatcher’s case, by giving their house to someone else).

Unfortunately, time’s running out for this. The reason is that tech people have started to literalize the most awful and backwards things about the Enlightenment image of society. Twenty years ago, if you’d said “society is oppressing me,” you would have been using a metaphor. Now, you’re just talking about Facebook: there really are all-encompassing, contractual (though the contracts are a bit more coercive than Rousseau might have hoped) fields that organize us as a collective, and these really do have coercive authority to curtail us from pursuing our interests too aggressively. Moreover, “society” in this sense really does turn out to have interests of its own, which include selling our eyeballs for ad views. Rousseau definitely wouldn’t have seen that coming.

In any case, the fact that we’re a long way from the General Will shouldn’t stop us from seeing the genealogical connection. The modern internet in many ways completes the Enlightenment project, stripped of every noble aspiration and repeated as farce. In this topsy-turvy world, Margaret Thatcher’s reactionary claim that society doesn’t exist turns into a salutary reminder, soon perhaps just the expression of a nostalgic wish.

The Greeks and Their Irrational

A big problem with E.R. Dodds’ classic The Greeks and the Irrational is that, so far as anyone can tell, Dodds imposes concepts of rationality and irrationality drawn largely from Freud onto Greeks who had no idea that Freud was coming. That’s not necessarily a bad thing – psychoanalysis can tell us stuff about the past, for sure – but it puts Dodds on a spur track from the main line of a modern “social anthropology” that aims, to the extent this is possible, to understand other societies in their own terms. A book that took into account what the Greeks thought the irrational was would be very different from the one that Dodds ended up writing.

In a forthcoming article that argues somewhat along these lines, Robert Parker sets Evans-Pritchard’s work on the Azande side by side with Dodds as an example of how anthropologists, even in the 1950s, were already beginning to look for other rationalities in other cultures. The comparison is an apt one, especially from Parker’s standpoint as a historian of religion. A major project of Dodds’ work, though, is to bring together “religion” as a mass phenomenon with the critiques of that phenomenon which were leveled by sophists and philosophers who belonged to, and addressed, a small elite. Insofar as those critiques are “rational” in nature, Dodds can draw a distinction between rational and irrational in Greek culture without thereby imposing on it a purely etic standard of judgment.

Where we draw the distinction, though, still matters. That’s because the Greeks, unlike the Azande (but like many other premodern societies) have a word that includes “irrational” among its meanings and that gets deployed in debates over what irrationality is. The word is alogos, and it would be worth tracing the history of its usage so that we can understand what counted, for the Greeks, as irrational.

One category of things that the Greeks took trouble to stigmatize as irrational is animals – such that ta aloga, the irrational ones, was a kind of byword for nonhuman creatures. What did animals do that was irrational? Very little, as it turns out: ancient philosophers who insisted on this distinction were hard-pressed to separate the only “apparently” rational activities of dogs and dolphins and the rest from the “actually” rational activities of human beings which the former seemed to emulate, while skeptics ran roughshod over the distinction. What really justified the Greeks in calling animals “irrational” was a lack. Animals lack logos in the sense that they can’t speak and therefore can’t give an account of their actions. The Greeks inferred from this that such an account was also missing internally, such that animals had a kind of subjectivity (phantasia) without thought.

In effect, animals were thus excluded from a human community united by speech and a set of practices surrounding speech. This exclusion served as the prototype for a number of others: women, barbarians and slaves, to name just a few. The focus on speech which underlay these practices, however, was a central feature of a more-or-less democratic polis community and fared poorly outside of those conditions. I’ll have more to say on what this meant for postclassical animals in a later post.

Reasonable Men?

I never put down a book without finishing it. A super-rare exception to that rule (though preserving the letter of the law, since I read a digital copy) is Justin Smith’s Irrationality: a History of the Dark Side of Reason, which I’ve just given up on. I came to it out of frustration with E.R. Dodds’ The Greeks and the Irrational, a much better book which I’ve been writing about lately. One serious problem (or, more charitably, enduring challenge) posed by Dodds’ classic is that it never, and perhaps can’t, define the irrational which it claims for its subject. I expected that Smith, notionally a professional philosopher, would do better, but actually he does worse.

Smith aspires to write a history of irrationality, which means that he claims to have a notion of what irrationality is. But the seams start to show right away with the anecdote that begins Smith’s introduction, the story of Hippasus’ murder by Pythagoreans enraged at his having publicized the secret of irrational numbers. That irrational numbers like the square root of 2 exist, glosses Smith, is proof that the universe isn’t actually rational as Pythagoras had maintained. Classicists know, however (and this isn’t the only error of fact or interpretation that I, as a specialist, was able to spot in the book), that “rational” and “irrational” here just have to do with ratios. The Pythagoreans (not Pythagoras himself, probably, although Iamblichus, the source of the anecdote, wouldn’t have made this exception) believed that the universe could be explained as a series of ratios between whole numbers; the square root of two, since it can’t be expressed as such a ratio, throws a monkey wrench into that system. Pythagoreans (or, better, their Latin interpreters) used “rationality” and “irrationality” in a sense so far removed from the modern one that it’s hard to see any contiguity between the two usages.

The same kind of claims might be advanced for much of the evidence Smith brings together. “Rational” is a predicate said of so many different things – numbers, logic, arguments, nature, political organization and thought, to name just a few – that it loses all definition.

Dodds has a similar problem – he equivocates between social rationality, a kind of functionalism, and individual rationality, a system for evaluating belief coherence – but at least his various senses of “rational” stand in a dialectical relation to one another. Society develops belief-based means of controlling individual irrationality that appear ends-rational from society’s viewpoint, as ensuring the survival of the collective; from the viewpoint of the individual, however, these beliefs appear irrational because incoherent. An age of criticism thus follows an age of credulity. Since man’s irrational impulses still need to be controlled by society, however, a new age of credulity ensues.

We could argue about whether a lot of the entities invoked by Dodds – society, beliefs, human nature – actually exist, but, assuming they do, reason still operates in his argument despite Dodds’ refusal to define it univocally. Smith aspires to something similar, characterizing the overall argument of his book as a dialectical approach that praises some reason, but not too much (poor Adorno and Horkheimer get dragged in to give this position a pedigree, though I don’t think they would have recognized it).

As it plays out, though, this is not really a dialectical argument but rather a statement of what I would pejoratively call a “moderate” position. Because he dislikes the extreme irrationalist and rationalist positions for consequentialist reasons (e.g., too much reason brings about the French Revolution), he situates himself as near to the midpoint between these extremes as he can get. Sometimes, as with the goofy equivalence Smith draws between neo-Nazis and #MeToo Twitter, this means inventing an extreme: the moderate needs to have enemies on both sides.

There’s a kind of magical thinking involved in taking such a position – the belief that not being wrong makes you right, as though there were only a limited number of ways to be wrong – and it shows throughout the book. Smith seems to believe that his moderate position, perforce that of a reasonable man, gives him the authority to speak ex cathedra on a pretty wide range of issues where he has no actual expertise. Does being a political moderate, for instance, equip you to dismiss Lacan, Freud, Zizek and all of French theory without so much as offering a counterargument? It does if you believe you’ve already established your credentials as a reasonable man and if, as Smith does, you find that body of work personally distasteful.

You don’t necessarily need to take all those texts on board, but a book that claims to be offering a critical history of irrationality should do better than that kind of idle polemic. The more of it you read, the more you get the sense that what Smith says he wants to avoid – treating rationality as a mere “term of approval” – is in fact the presumption that underlies the whole project. Rationality seems to mean so many different things in this book because it actually doesn’t have a substantive definition, being just synonymous with “stuff Justin Smith likes.”

That’s disappointing, all the more so since Smith has a job at a French university. You expect the French to know better, but maybe that expectation is just a holdover from happier times. In France as everywhere else, “reasonable men” like Smith are under attack on all sides as people seek to overthrow the regime of market neutrality of which such people are avatars. When that regime remained unchallenged, it could afford to sponsor internal critics like the French theoreticians Smith despises. Now that it’s under attack, it hires dragoons to write polemics in its defense. If Smith’s book is any indication, this slackening of intellectual standards on the part of the ancien régime won’t save it: nobody, not even its fiercest advocates, can make a convincing case for moderatism.

The Volunteer Spirit

People worry a lot about free will now – not so much because of the theological puzzles it poses as because it’s supposedly incompatible with modern science. There are basically two ways to gloss that incompatibility. One reading is that our experience of having free will is somehow “fake” because science teaches us that everything is either strictly determined or random. This approach takes the experience of free will as naively representative and declares it mistaken on that basis. To characterize any phenomenon of consciousness as naive representation, however, is seriously misguided – the same kind of error as taking thoughts to be identical with scans of brain activity correlating to those thoughts. When it comes to consciousness, the picture is not the thing; and neither is the thing the picture.

A more promising way to construe the incompatibility between free will and the modern scientific consensus would be to concede that the notion of free will as usually understood entails some kind of special (uncaused) causation that experimental evidence tells overwhelmingly against. Assuming we want to maintain a univocal notion of “cause” that has proven at least technologically useful, what we then have to do is reconstruct “free will” in a non-causal way, for instance as a special way of experiencing uncertainty about the future with reference to ourselves as agents. On this reconstruction, free will isn’t so much a way of changing the future as it is of acknowledging our moral responsibility vis-a-vis actions for which we’re not in a deep sense causally responsible.

Dreams provide us a way of understanding what this would be like. When we remember our dreams, we often recall ourselves as choosing while at the same time feeling (in retrospect) as though the whole plot of our dream had been predetermined. From this perspective, we can see free will as a feature of experience that doesn’t have any causal efficacy. That’s why, despite this experience of choice, people pursue “lucid dreaming” – a kind of dream state in which we would “really” have free will, rather than just the experience of it.

The idea is then that our waking minds will have causal efficacy in the world of our dreams. Of course, if free will is non-causal anyhow, then it can’t have any more causal efficacy when we’re asleep than when we’re awake. However, it bears noticing that a distinction between “real” (i.e. causal) free will and merely experiential free will can’t even really be conceptualized outside of some kind of scenario like this one, where we observe and participate in a fictional world (i.e. the dream space) from an “actual” position in a “real” world outside it.

All that by way of introducing a passage from the Oneirocriticon that struck me when I read it a few days ago and has stayed with me since. It comes from the end of book two, where Artemidorus finally gets around to what we’ve all been waiting for, flying dreams:

πάντων δὲ ἄριστον τὸ ἑκόντα πέτεσθαι καὶ ἑκόντα παύεσθαι· πολλὴν γὰρ ῥᾳστώνην καὶ εὐχέρειαν ἐν τοῖς πραττομένοις προαγορεύει. διωκόμενον δὲ ὑπὸ θηρίου ἢ ὑπὸ ἀνθρώπου ἢ ὑπὸ δαίμονος ἵπτασθαι οὐκ ἀγαθόν· φόβους γὰρ μεγάλους καὶ κινδύνους ἐπάγει.

“The best of all is to take off willingly and stop [flying] willingly; this predicts much ease and facility for those who do it. But it is not good to fly while being pursued by a beast or a person or a daimon; this brings great fears and hazards.”

How are we supposed to understand this “willingly” which makes so much difference for our interpretation of the dream? Categorically excluded, I think, is a causal interpretation where “willingly” means “you desire to do it, and this causes you to do it.” That’s because the whole Oneirocriticon treats dream actions as something for which you’re not causally responsible. Indeed, this must have been a general premise for ancient dream interpretation: how else to create an atmosphere in which people are comfortable recounting dreams they’ve had about, e.g., getting a blow job from their own fathers? The dream is something that befalls you, not something that you do.

Hekonta, then, has some kind of experiential meaning. But what kind? The text opens up a couple of alternatives, which I have a hard time choosing between. One way to construe it is as the same kind of special ignorance about future events I was talking about before – the only difference between it and our general uncertainty about the future being that free will has reference to our own, agential actions. Another way to construe it would be to read it in close connection with the second sentence I quoted above, which we’d then interpret as presenting alternative scenarios in which you’re not voluntarily taking off or landing. If the typology given here is exhaustive – that is, if either you fly voluntarily or you fly under compulsion from another agent – then the experience of free will would just be the experience of non-compulsion, in principle susceptible of verification by outside observers too.

Making the Difference

Politics is about to eat the world again. The 2020 election, actually, is starting early, with a Democratic primary field wide enough for pundits to start the horserace well in advance of the general. This breadth of choice is deceptive: in all likelihood, we’re going to have to choose (yet again) between two aging cave-beasts whose different rhetorical stances conceal agreement on all the most important issues.

How much do we care if it’s Trump or Biden* who ends Social Security? That we’re now at the point of asking that question highlights one of the most depressing features of modern politics, which is that you can’t really do anything about it: even though you have a vote in a notionally democratic system, you feel this crushing sense of powerlessness. It used to be that only about half the country would feel this way at any given time, while the other half would feel like its votes had mattered and it was getting its way. The paint started to come off this notion sometime during the Clinton years, when “getting your way” as a Democrat turned out to mean military intervention abroad and welfare reform domestically, but Obama’s second term, with its legislative non-function, really drove home how little anything matters. As I’ve said before, the world-beating idea in this situation turns out to be a guy who promises to screw you over in exchange for providing four years of red-hot “entertainment” a la Archie Bunker.

It’s starting to seem like the point of election coverage is to drive us so crazy that we don’t notice how the election itself is a broken lever on the runaway machine of US politics. I’m as susceptible to this kind of manipulation as anyone. Something I do to keep myself calm is to think concretely about what kind of a difference I can make, and to hold myself accountable for doing only exactly that much. In election coverage as in other areas, the news media commands our emotional involvement by making us feel anger or guilt over things we can’t do anything about. The only way to resist this is by coming to know our own capacities.

There are good reasons for thinking that college professors like me are in a position to make some kind of a difference. The biggest such reason is the demographic sorting that seems to be happening around educational attainment: in 2016, Clinton beat Trump by 21% among college graduates, a gap unprecedented in recent times.** The college experience appears to be doing something to people’s susceptibility to Trump.

The MSNBC interpretation of those numbers is that Trump voters are just dumb. That may be true, but if it is then there’s not much we can do about it. More interesting to me is the question of why, or how, Trump induces such a strong educated/uneducated polarity in the voting population, a polarity that undoubtedly works in his favor since only about 30% of US adults have a college degree.

As with most of his core constituencies, Trump won this one by turning it against what he was able to portray as an opposite or an enemy. Trump first turned college-educated people against him (partly on purpose and partly by speaking like someone with a brain lesion), then looked to the wider crowd and said "these people, your enemies, are my enemies too."

We’re not in a position to do much about what Trump says, of course. Not even people who are trained and paid to be in that position can shut him up. That he was able to make this particular representation stick, though, should be really disturbing to anyone involved in the production of college graduates. If the rest of the country hates them so much, then we may be turning out a defective product.

Why do they hate us? Is it the haircuts and social justice? That’s all just a sideshow: if Ben Shapiro is talking about it, you can be sure it’s totally insignificant. The real problem is a part of college education that both liberal and conservative political elites are fine with. At most colleges, even STEM majors learn a little bit of sociology: they internalize a picture of society where there are winners and losers, where winning and losing are a matter of skill rather than chance, and where winners and losers share no interests in common. The last piece of the puzzle is that, with our tacit encouragement, they come to see having a college education as the main factor separating winners from losers. That means that college graduates are prepared to see pursuing their own narrow interests as socially natural, as part of winning or as a prize for winning already done.***

You can see why it would be easy to hate people who felt that way, especially if they also acted like it, and “people who feel like that acting like that” is actually a pretty good characterization of US politics since the fall of communism. Trump wasn’t the first person to recognize this feature of national life; he wasn’t even the first politician to try to exploit it. He was able to exploit it with unprecedented success because you couldn’t think he was putting on an act or playing both sides: nobody would ever accuse Donald Trump of being smart.

Since what teaches graduates this lesson is a structural feature of college life and national discourse, not a discrete part of the curriculum, there’s no straightforward way to keep them from learning it. What we should be doing instead, I think, is to try to counteract it, first by making explicit how they’re being taught to see themselves as better than their non-educated peers and then by reminding them that, if anything, they’re worse – in terms of hustle, endurance, flexibility, what have you. Most college graduates, for instance, couldn’t last a week in an Amazon fulfillment center.

The point would be, actually, not just to make them feel bad but to sell them on a vision of solidarity. When grads and non-grads fight, the only people who win are folks like Biden, Trump, the Koch brothers, the whole rest of that rotten capitalist-political class. Grads and non-grads alike are, most of them, workers, getting exploited by figures who, as long as we’re fighting each other, can remain safely behind the scenes.

We should emphasize that the alternative to solidarity is, in the near term, a brand of nationalist fascism that will lead to world war and, in the long term, complete ecological collapse. Some people will benefit from both those things, but probably not anyone you’ll have in your classroom (unless you teach at Harvard or Yale). For the most part, even the “winners” will end up losing.

What would be the right kind of class in which to present that message? I say, all of them. Kids need to hear it and we need them to hear it. Otherwise, we’re enabling another Trump win (and it may already be too late).

*It could happen. Obama wanted to pass “social security reform,” which is basically a codeword for the same thing. Like Obama, Biden seems to believe that campaigning toward the center means doing Republicans’ work for them.

** Historically, the gap is within five percentage points, and college graduates don’t always lean in a leftward direction, either. 2012 saw them go for Romney by four points, not that it helped him any.

*** The process starts even earlier at elite Ivy League schools like Yale, where I taught for a while and where I encountered a lot of kids who thought that just having gotten admitted entitled them to a job at Goldman Sachs. They weren’t wrong, either; in this case, it’s the system that’s crazy.

Dead Don’t Dance

Book two of the Oneirocriticon completes its author’s natural history of the dream-life. Following a pattern established by earlier encyclopedists, Artemidorus of Daldis has interpreted the whole world – from humans to minerals to animals, fish and birds – as dream symbols. Fittingly enough (and excluding off-topic sections on dream flight and the numerology of the human lifespan, which are meant to remind us that he’s a “real scientist”), Artemidorus concludes by talking about the dead.

If you’ve been raised on a steady diet of modern horror movies, you’ll expect that seeing dead people is never a good sign. Artemidorus confirms this expectation at a few points – you don’t want dead people to be going after your possessions, for instance, and especially not your clothing, which would mean that you’re going to die. On the other hand, dead people per se are actually a good sign in a lot of situations. As usual in the Oneirocriticon, the dreamer’s status and state of mind matter a lot.

If you’re really worried about something, for instance, seeing a dead person means you’re in the clear. That’s because people in the afterlife are chill: they don’t worry about anything, since they’re dead. As Artemidorus explains a little later with respect to whom you can trust in a dream – and dead people are very trustworthy, as it turns out – most people deceive themselves and others because of hopes and fears, phoboumenoi e elpizontes. Dead people neither fear nor hope for anything. Why would they? Nothing worse can happen to you than death, and once it has happened, it takes away all grounds for hope as well.

For similar reasons, it’s pretty good news for slaves to see a revenant. That’s because the dead have no masters: they’re anupotaktos, uncommanded. To dream of dead people, then, predicts freedom, which is what (on Artemidorus’ account, and it would be really interesting to know how accurate this was for the Roman context) slaves most of all want.

On the whole, you can see that the Oneirocriticon cultivates a much more friendly and less fearful relationship to the dead than we might have expected. Part of this is just the author’s contrarianism: a long polemic section on the interpretation of nightingales suggests that earlier books of dream-interpretation had been more uniform in assigning a negative meaning to symbols and signs associated with death. Artemidorus’ revisionist approach here might be part of a rhetoric of progress and experimentation by which he asserts his own superiority over his predecessors in the art.

In most cases, though, we can verify that his innovations take advantage of pre-existing cultural symbolisms rather than inventing new ones. This holds good for the dead, as well: the characterization of death as a state of lack on which Artemidorus is building goes back at least to Greek tragedy, where Antigone’s characterization of her own future corpse as aphilos, agamos, anumenaios, aklautos and alektos is not at all untypical. Even in Homer, what’s striking about the dead (for Odysseus, anyway) is their lack of body and even of mind. Death means something’s missing.

Most of the time, you don’t want to lose what death takes away. On the other hand, as Artemidorus points out, you sometimes do. If death didn’t come with certain advantages, there wouldn’t be such a thing as the death wish – and, to avoid anachronism, the Stoics wouldn’t have recommended it as a way out from intolerable situations. The readings presented in the Oneirocriticon draw out the implications of that attitude and, incidentally, show how different the ancient conception of death was from our own.