There’s certainly a warning here by our esteemed colleague, James R. Cowles. It was originally published on our sister site, Beguine Again. / Jamie Dedes
The recent report on the findings of climatic research into the causes and probable evolution of climate change – a more accurate term than “global warming” – prompted me to consider a possible answer to Enrico Fermi’s classic question “Where is everybody?” Multiple generations of science fiction writers have projected a future in which the Milky Way Galaxy fairly teems with life, rather like Times Square on New Year’s Eve or the cantina in the first Star Wars movie – so much so that the late Prof. Stephen Hawking publicly counseled SETI investigators to – not literally STFU – but certainly to exercise due caution in broadcasting the existence of intelligent life on earth to every corner of the Galaxy. (Not that we have a choice by now: earth’s electromagnetic emissions already comprise a bubble 200-plus light-years in diameter.) We do not know, said Prof. Hawking, what sharks may inhabit the interstellar waters. (My analogy, not his.) So far, we have been safe. Except for the never-reproduced “Wow!” signal, for which a serious possible explanation has now been proposed, SETI researchers have so far not found any signal, in any kind of electromagnetic energy, that so much as hints at an intelligent origin. The following is pure speculation on my part, albeit – so I would argue – intelligent and informed speculation, as to this eerie silence. Anyway, I submit the following for your consideration …
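The 200-plus light-year figure is easy to sanity-check. Here is a minimal sketch, assuming (my assumption, not the essay’s) that practical radio broadcasting began around 1900; since signals travel outward at light speed, the bubble’s radius in light-years equals the years elapsed since then:

```python
# Sanity check on the "200-plus light-year bubble" claim.
# Assumption: practical radio broadcasting began circa 1900.
# The emission front is (current_year - 1900) light-years away
# in every direction, so the bubble's diameter is twice that.

def emission_bubble_diameter_ly(current_year, first_broadcast_year=1900):
    """Diameter, in light-years, of earth's radio 'bubble'."""
    return 2 * (current_year - first_broadcast_year)

print(emission_bubble_diameter_ly(2017))  # → 234: comfortably "200-plus"
```

The exact start date is debatable (Marconi’s first transatlantic transmission was 1901; high-power broadcasting came later), but any reasonable choice yields a bubble a bit over 200 light-years across.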
The evolution of an intelligent species – actually, any species – usually takes multiple millions, even billions, of years. I say “usually” and not “always” because the speed with which a species evolves can be measured in days, perhaps even hours, if the evolving organism is simple enough: consider a flu virus, whose entire genome runs to only about 13,500 bases. Add the adjective “intelligent” to the noun “species” and then we really are talking hundreds of millions, most likely billions, of years. It took about 4 billion years for the intelligent species Homo sapiens sapiens to make an appearance on Planet Earth.
However … in terms of “boots on the ground” real time, evolution proceeds by fractions of a millimeter, temporally speaking. The proto-hominid is concerned with finding enough wood to keep her / his family warm tonight, and perhaps for a couple of nights in the future. S/He is likewise concerned with finding an area with abundant resources for hunting and gathering for perhaps a week or so in the future. Even when settled agricultural communities evolved, the primary emphasis was on this year’s harvest, and perhaps … maybe … next year’s. What is the point of all this? Only that the fraction-of-a-millimeter-at-a-time nature of evolution militates against anything that could reasonably be considered long-term planning.

From the standpoint of survival and the propagation of one’s genes into the future, this is a good thing. A hunter-gatherer of, say, 100,000 years ago who paused to consider the long-term ecological effects of rampant deforestation, the poisoning of the atmosphere by wood smoke, the depletion of the oceans, etc., would probably have been devoured by animals – or by other hunter-gatherers – before s/he had a chance to reproduce, in which case I would not be around to write this “Skeptic’s” column and you would not be around to read it. At least in terms of earth-like intelligent life, it would appear that individual human beings, and human communities, are not “hard-wired” to reflexively consider The Big Picture. From a “boots on the ground” perspective, evolution has simply not equipped us to think in those terms. We can certainly learn to do so. But it does not come naturally. It is like learning to use your left hand if you are right-handed. Furthermore, this difficulty is reflected in our political institutions and our educational systems. Ditto economics. It is no accident that late capitalism does not encourage long-term planning – defined as planning on time-frames measured at least in generations, if not centuries.
As for millennia, i.e., the time-scale when climate change becomes glaringly, life-and-death critical … well … fugg-id-aboud-it!
Granted, I am referring now to terrestrial life, and to cognates thereof, i.e., to life that evolved on temperate, water-abundant earth-like planets, perhaps on a “super-earth”, orbiting within the habitable zone of a stable, main-sequence star: a yellow dwarf like our sun, or perhaps a red dwarf. If the evolution of intelligent life on such earth-cognates was anything like the evolution of intelligent life on earth, then the environmental challenges we face on earth today would – so I would speculate – have their equivalents on those extraterrestrial worlds. So, from the standpoint of SETI, there is good news, but there also may be – remember, I am speculating here – bad news. The good news is that it is reasonable to conclude that, in the Milky Way Galaxy, there are around 2 billion earth-like planets (“earth cognates” in my terminology), but perhaps as many as 17 billion or even 100 billion. The bad news is that, for the reasons I have outlined above, the challenges posed by the evolution of intelligent life may be as difficult for beings inhabiting those planets as they are for us. (And remember: this is assuming the existence of intelligent life to begin with, i.e., discounting the “rare-earth hypothesis”, which is by no means a crackpot opinion.) Assuming that the laws of chemistry, physics, and celestial mechanics are the same everywhere, our own environmental challenges would have their equivalents on those alien worlds. So the key question in assessing the likelihood of the existence of intelligent life elsewhere in our Galaxy is: are there evolutionary regimes that result in brains whose “hard-wiring” is more congenial to long-term planning, i.e., planning in time-frames commensurate with large-scale changes in the home planet’s environment?
Forms of socio-political organization also enter the mix. Serious question: to what extent, if any, is an emphasis on individuality, individual rights, individual liberty – basically, the presuppositions of an “Enlightenment-centric” socio-political culture – compatible with long-term planning for the survival of the species when challenged by incipient catastrophes like climate change? Maybe dealing with these challenges requires that intelligent species develop, if they have not done so earlier, forms of social organization similar to, e.g., the “Formics” in the Ender’s Game / Speaker for the Dead cycle of novels, or the Borg Collective of Star Trek, or the Caretakers who used – but did not build – the wormhole subway in Carl Sagan’s incomparable science-fiction novel Contact. Or, less benevolently, the Shadows of Babylon 5 or the malignant alien collective that launched the planet-devouring self-replicating Von Neumann machines in Greg Bear’s The Forge of God and Anvil of Stars. Without in any way advocating for such a collectivist polity, a lucidly honest historical assessment would certainly indicate that trying to induce human beings to unite for collective action to confront a common danger is pretty much like herding cats … and feral cats, at that … unless the end-in-view is the apocalyptic and uncompromising destruction of some human enemy. Think “Manhattan Project.” That kind of cooperation we are damned good at! Climate change / global warming? Well … maybe not so much.
Back in the ’60s, the astronomer Frank Drake formulated the by-now-classical Drake equation, which attempts to quantify the number of intelligent species in the Milky Way Galaxy by multiplying together quantitative estimates of the various coefficients that combine to produce intelligence and a communicating civilization. I like to think of the Drake equation as analogous to the design of a digital circuit, with various “gates” — AND gates, OR gates, NAND gates, XOR gates, etc. — that determine whether a given species achieves intelligence and a technological civilization capable of communicating with other intelligent species inhabiting other planets and evolving their own civilizations. Many of the factors in the classical Drake equation are obvious, e.g., the rate of planet formation in a star’s habitable zone (however one might define that), the fraction of planets that actually evolve life, the fraction that evolve intelligent life, etc. The historical trend strongly suggests that we have greatly underestimated the number and type of relevant coefficients. For example, I would suggest that one such overlooked coefficient — one that I have never seen acknowledged in the literature — is the fraction of planets whose axis of rotation is stabilized by the presence of a large moon and the influence of other, probably gas-giant, planets in the same star-system. (An example of where the absence of these factors is critical is Mars. Mars has only two little pebbles for moons, Deimos and Phobos, and so Mars’ axial tilt has wandered chaotically, perhaps by tens of degrees, over millions of years; the resulting climatic variations would virtually preclude the evolution of intelligent life. By contrast, earth has a very large moon and is farther from Jupiter, with the result that earth’s axis of rotation is stable enough to ensure a climate congenial to the evolution of intelligence.)
Bottom line: it is reasonable to conclude that the proportion of intelligent species capable of taking the long view – of planning for the future in terms, not of years or even of generations, but at least of centuries – would have a critical bearing on whether a given “candidate” species survived long enough to develop space travel and a sophisticated communication technology. This, perhaps, is the missing coefficient in the classical Drake equation: the fraction of intelligent species that have evolved the capacity for long-range planning.
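The proposal above can be made concrete. Here is a minimal sketch of the classical Drake equation, extended with one extra factor for the essay’s hypothetical “long view” coefficient. All numeric inputs are illustrative guesses of my own (the true values are wildly uncertain), and the `f_longview` parameter is the essay’s speculation, not part of Drake’s original formulation:

```python
# Sketch of the classical Drake equation, N = R* · fp · ne · fl · fi · fc · L,
# extended with a speculative extra coefficient (f_longview): the fraction
# of intelligent species capable of century-scale planning.
# All numeric values below are illustrative guesses, not measurements.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L, f_longview=1.0):
    """Estimated number of detectable civilizations in the Galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L * f_longview

# A classical estimate with commonly cited (but highly uncertain) inputs:
N_classical = drake(R_star=1.5,  # star-formation rate (stars/year)
                    f_p=1.0,     # fraction of stars with planets
                    n_e=0.2,     # habitable planets per such star
                    f_l=0.5,     # fraction of those that develop life
                    f_i=0.05,    # fraction that develop intelligence
                    f_c=0.5,     # fraction that develop detectable technology
                    L=10_000)    # years a civilization remains detectable

# The same inputs, filtered by a hypothetical 1% "long view" survival rate:
N_filtered = drake(1.5, 1.0, 0.2, 0.5, 0.05, 0.5, 10_000, f_longview=0.01)

print(N_classical, N_filtered)  # roughly 37.5 vs. 0.375
```

The design point is simply that the equation is a chain of multiplied fractions: one additional small coefficient, such as a scarcity of long-range planners, is enough to turn a Galaxy of dozens of civilizations into an effectively silent one.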
But our response — or lack thereof — to climate change strongly suggests that we may, perhaps within a century, maybe less, encounter a break point where our endemic inability to take future centuries, even future millennia, into proper account may render us a footnote in some hypothetical Sagan-esque Galactic survey. We have to overcome the short-sightedness selected into us by the imperatives of evolution. So far, there have been five mass-extinction events in earth’s history. We may well be in the middle of the sixth. Granted, some of these were unavoidable, e.g., the end-Permian catastrophe 250 million years ago. Others, if they occurred today, might be preventable, given long-term planning, e.g., the Chicxulub impact 66 million years ago. But all would require a capacity for long-range planning for which we humans have thus far shown little aptitude or inclination.
So perhaps now we have the answer to Dr. Fermi’s question of “Where is everybody?”. Perhaps the eerie silence we detect with our radio telescopes is mute testimony to the scarcity of intelligent species that evolved an intelligence, and the accompanying social and political organizations, sufficient to deal with multiple-millennia-long threats to those species’ existence. Maybe the Universe is silent because, thanks to the in-built limitations inherent in evolution, intelligent species’ own short-sightedness caught up with them.
“Wow” signal … North American Astrophysical Observatory … Public domain
SETI logo … SETI Institute … Creative Commons Attribution-Share Alike 4.0 International
Radio telescopes … Bure Peak Observatory … Public domain
Drake equation … Mohammad Alrohmany … Creative Commons Attribution-Share Alike 3.0 Unported
Human evolution … Wellcome Images … Creative Commons Attribution 4.0 International
Global warming map … Environmental Protection Agency … Public domain
Desert and tree … Max Pixel … Public domain
Two planets … Pixabay … Public domain
Today we bring you a special feature by James Cowles, our resident skeptic. You may or may not agree, but you will be forced to think. / J.D.
To a few of you, the following sentence will be like saying “Elvis has left the building”, i.e., old news. But to many others, it will be very much in the vein of “Man bites dog,” i.e., novel to the point of being revolutionary. Anyway, here goes … the European Enlightenment is now officially over. “Over” as in “dead as last week’s oatmeal” or “as passé as disco fever and bell-bottom pants” or “as useless as invitations to Hillary Clinton’s inaugural ball”. (Yeah, I know … too soon … sorry … apologies!) Probably many fewer of you are aware of the likely – not strictly certain, but this is the way to bet – replacement ideology: (some form of) postmodernism. Not to put too fine a point on it, but the operative word in the third sentence (beginning “Anyway, here goes … ”) above is officially. In academe, of course – places like Ivy League English and philosophy departments and the Frankfurt School – the European Enlightenment has been over for some time, supplanted by some species of postmodernism. Rather, what makes the end of the Enlightenment “officially official” is that, for the first time, it has actually determined, at the level of retail popular politics, the outcome of elections of senior executives in the very nations that originated and sustained the Enlightenment, and whose political and constitutional systems would be unimaginable without it. You know … nations like the United States. We (meaning “all such nations”) are now not only post-industrial and post-Christian, both of which have been true for some time, but, in addition, increasingly post-modern, even in terms of our “retail politics”. In the following, I will argue that, insofar as it is possible to talk about the “principles of post-modernism,” these principles undergird and underwrite what might accurately be described as a “para-fascist” ideology deeply inimical to the corresponding principles of the European Enlightenment.
In many ways, making sense of post-modernism is like trying to make sense of an M. C. Escher drawing, most of which are “post-perspectival”. So the following will of necessity be only a superficial, hasty thumbnail sketch of three of the more important parameters that distinguish (what I believe to be) the coming post-Enlightenment / post-modern culture; these three were especially crucial to the election of Donald Trump as the Nation’s first post-Enlightenment / post-modern President. These factors also bid fair to be important elements in the burgeoning nationalist movements in Europe led by people like Nigel Farage in the UK, Marine Le Pen in France, and Viktor Orban in Hungary (whose rhetoric on the necessity of “ethnic homogeneity” eerily echoes similar sentiments by Adolf Hitler in Mein Kampf). In future columns, I will describe the historical and ideological roots in more detail. But for now …
o The Enlightenment conception of fact as a datum supported and confirmed, usually by multiple independent observers, by actual empirical evidence vs. the post-modern conception of “fact” (in quotes) as an expression of what a community needs to be true in order to function
As an example of the latter, there is no evidence whatsoever that thousands of Muslims in New Jersey stood and cheered upon receiving news of the World Trade Center collapsing, nor is there any evidence that Ted Cruz’s father was implicated in the Kennedy assassination. Facts – as in “quantifiable data corroborated by empirically derived statistics” – indicate that, contrary to Trump’s assertion, the United States as a whole — local exceptions like Chicago notwithstanding — is experiencing an almost unprecedented period of law-compliance, not lawlessness. Despite corroboration from no fewer than sixteen agencies of the US intelligence community, Trump persists in manufacturing his own “fact” that Russia was not involved in the “cyber-jimmying” of his recent election to the Presidency. Nor is there any indication – based on actual facts, in the “pre-post-modernist” / Enlightenment sense – that immigrants to the US are exceptionally crime-prone; indeed, some evidence indicates the opposite.
What runs as a common thread through all these allegations is that all such assertions involve, basically, articles of faith that Trump supporters, as a community, need to affirm in order to be a community. To be a Trump supporter is to be a member of what is, in all essentials, a fundamentalist religious cult. Given the sheer absence of evidence, affirming that thousands of Muslims cheered the fall of the Twin Towers is in no way essentially different from an observant Roman Catholic affirming that, with a duly ordained priest’s Words of Institution, the bread and wine become the Body and Blood of Jesus. Both are about equally contrary to empirical experience, are therefore matters of pure faith, yet both are required – “required” as in absolutely sine qua non — for membership in the community. Ditto the Virgin Birth. Ditto the Resurrection. Ditto three million fraudulent votes. Ditto 47% unemployment. Religious sects have actually been practicing most of the principles of post-modernism for several centuries, at least 500 years in the case of Christianity. (More about this in the future, too.) Mass politics in established classical democracies is just now belatedly getting the hang of it.
Furthermore, analogous remarks would apply to all authoritarian political and ideological personality cults centered on, e.g., Hitler, Stalin, Mao, and Mussolini, with religious equivalents ranging from Shabbetai Tzvi in the 17th century to Joseph Smith in the 19th to Aum Shinrikyo in the 20th. (The breathtaking devotion of followers of Chairman Mao to Mao’s Thoughts – the little red book everyone carried during the Cultural Revolution – is in no essential way different from the corresponding devotion of fundamentalist Christians to the text of the [usually King James] Bible.) All require a radical sacrifice of the critical faculty and its replacement with the ostensibly a priori true ideology of the group, as defined by its leader. The differences are so trivial as to be beside the point – and all are the diametric opposite of the valorization of the critical intellect characteristic of the Enlightenment. We may reasonably expect the senior leadership of the Trump organization to declare The Art of the Deal literal holy writ.
o The post-modern conception of morality as an “infinitely fungible” and indefinitely negotiable parameter of a community vs. the Enlightenment conception of human beings as embodying a certain ontology – call it “human nature” — respect for the integrity of which is encoded in universally applicable moral principles
I mean fungible in the sense of “one is just as good as another, depending on the end-in-view, hence interchangeable”. For example, I have owned several houses and pieces of real estate in my life, and while I liked all of them for various reasons, all were “fungible” in the sense of being equally subject to sale or exchange, given the exigencies of the moment. My wife and I liked our house in Wichita, KS, but when we decided to move to Boston so I could go to graduate school, we sold it because the house was less important than the end-in-view (going to grad school). The house / real estate was fungible as a token of exchange.
Trump’s sexual and commercial escapades have conclusively proven just how similarly fungible conservative Christian, especially evangelical, moral codes are. No doubt under many circumstances, self-proclaimed arbiters of public morals like Franklin Graham and Jerry Falwell, Jr., would condemn men who grabbed women by their genitals and defrauded middle-aged people out of their savings. But when the end-in-view is renewed access to the Oval Office, their version of Christian morality proved eminently fungible, and they were eager to trade in their morality for political leverage. Evangelical morality turned out to be just a rather more genteel form of harlotry. The only difference turned out to be that evangelical-Christian bordellos displayed a Cross out front.
Again, as with virtually all things post-modern, as it was with facts, so it is with morality: the needs of the community are paramount, even in terms of right and wrong. The ultimate criterion, with any moral principle, is the principle’s utility for defining and sustaining the community. I find this especially troubling. If the needs of the community – what the community perceives that it needs in order to be a community – are the supreme defining parameter of permissible vs. impermissible conduct, then, if a given Muslim community decides that, in order to be a community, it must practice, say, female genital mutilation or allow husbands to beat their wives (neither of which is a teaching of qur’anic Islam as I understand it) … well … pubescent girls will be mutilated and wives will be beaten.
By contrast, and as James Madison argued in characterizing the Constitution as a guarantee of the rights of the minority, the Enlightenment idea was that even the needs of the community must often be held as secondary to certain human rights at the individual level. So the community’s felt need for segregated schools vs. “equal protection” of the law, the community’s revulsion at certain religious beliefs vs. the individual’s right of “free exercise”, the community’s disagreement with certain unpopular opinions vs. an individual’s right to free speech, etc., etc., etc. (Mr. Spock’s Star Trek maxim that “The needs of the many outweigh the needs of the few” is pristinely, quintessentially post-modern. That distant rumbling sound you hear is James Madison turning over in his grave!) The post-modernist “needs of the community” criterion basically amounts to underwriting mob rule. What renders this principle acceptable to conservative Christians is that, with Donald Trump in the White House, evangelical Christians may reasonably hope to be the mob. With that change, the moral calculus changes accordingly from one that is recognizably Christian to one that is explicitly post-modern.
The post-modernist idea of the preeminence of the needs of the community is not at the end of the path to, e.g., Leni Riefenstahl’s Triumph of the Will, but it is headed in that direction. If the post-modernist criterion of the needs of the community is to be the final arbiter of morality, both public and private, then it is not clear — to me, anyway — what stands in the way of a 21st-century version of half-million-strong torchlight parades in Nuremberg, c. 1935.
o The post-modernist conception of science as merely one more “meta-narrative” among many others vs. the Enlightenment conception of science as ascertaining objective truth about the Universe-as-such
This is one we should have — and could have — seen coming, at least those of us who have read, say, the late Jean-Francois Lyotard, who did the most (in, e.g., The Postmodern Condition) to popularize the term, and the late Michel Foucault. In a nutshell, a “meta-narrative” is a “story about stories”, i.e., an overarching story that validates a given culture’s “sub-stories” and that, collectively with them, lends coherence and some kind of unity to a culture. The Christian meta-narrative unified and made rational the political hierarchy of the Middle Ages whereby the liege lord, like God, was at the top of the pyramid. The Christian meta-narrative even rationalized the horror of the Black Death in the middle 1300s: God was punishing the human race for its history of infidelity and immorality. Etc., etc., etc. Under the umbrella of the Christian meta-narrative, history, politics, and morality — and even deviations from those norms — all made sense.
The Christian meta-narrative gave way in the 1500s to the science meta-narrative — the world as a system governed by natural laws discoverable by reason and empirical investigation, and even useful in improving the physical circumstances of life — that has been dominant ever since, at least up until the advent of the post-modernist world-view. (This is how I conceive the contrast between Lyotard’s conception of discourse-as-story vs. discourse-as-science in Condition.) I say we should have seen this coming because we saw early symptoms, even in the popular culture, of the breakdown of the strictly scientific meta-narrative, followed by its replacement among many people by what can only be termed some form of “magical thinking”. (That, in a nutshell, is a good hip-pocket description of New Age culture. Ann Druyan, the late Carl Sagan’s widow, had some trenchant comments about magical thinking when she appeared on Bill Maher’s Real Time a few years ago, and said that a dismaying number of people are convinced that it is possible to effect change in the world just by sitting down, thinking about it, and “sending out good thoughts”.) Perhaps the most recent example is all the kerfuffle about the implications of the Mayan “Long Count” Calendar predicting a dire alignment of planets and the sun with the center of the Milky Way Galaxy that, for all manner of half-baked and misunderstood pseudo-scientific reasons, portended some kind of apocalyptic, perhaps even physical, upheaval on a cosmic scale. Which never happened, of course. But never mind. People still believe Jesus could return a week from next Thursday … and have been saying so for 2000 years.
The difference is that now the post-modernist critique of meta-narratives, hitherto restricted to academic debates in classrooms and proseminar courses – several of which I have facilitated — has escaped from the magic lamp and become a genie that may render impossible meaningful action to mitigate the exhaustively corroborated reality of climate change, to name perhaps the most obvious example. The rational, “pre-post-modern”, Enlightenment-centric response would be that you are quite welcome to your New Age superstitions, as long as they don’t leave Miami underwater. But that’s just me, still benighted by being caught in the “pre-post-modern” Enlightenment Weltanschauung. The much more contemporary attitude would seem to be the belief, on the part of Trump and his devotees, that the gradual increase in the mean ambient global temperature, even supposing it to be real, is due to China indiscriminately dumping greenhouse gases into the atmosphere … which, to fit the data, would have to have been happening since, at the very least, quite early in the 18th century. But there I go again. And that is just one example. If you don’t like that one, pick another. A good alternative might be the imaginary link between vaccinations and autism. But again, the question should be “What does the community need?” Certainly not a belief, however well-grounded, in anthropogenic climate change! As the mandarins of Seattle University’s School of Theology and Ministry often told me back in “The Day”, “There goes Jim again, being too left-brained!”
This is one of those rare occasions when academic philosophy — e.g., Lyotard and Foucault — bids fair to destroy one of the cornerstones of Western civilization: in this case, its characteristic and hard-earned virtuosity with science, and therefore technology. (The last such occasion was Marx / Engels and Marxism.) So, in terms of practical consequences, if a given community — never mind which one — needs to believe that vaccinations cause autism, should that community be allowed to forego vaccinating its kids — who presumably don’t have a choice — thereby penalizing the pro-vaccination community by turning the non-vaccinated kids into tiny biological weapons of mass destruction? Good post-modernist practice, sustained by Lyotard, Foucault, and their arguments of “meta-narrative as instrumentality of oppression,” would presumably argue “Not only ‘Yes’, but ‘Hell yes’.” Thus the slow-motion suicide of Western civilization proceeds apace.
Well … is there nothing we can do? Is there no longer a place for the values, beliefs, and principles of the European Enlightenment? My answer is “Yes but … ” During the early 1940s, there was also a place for the population of London during the German blitzkrieg: the tunnels and caverns of the London Underground. If we propose to remain a technological civilization, there must be a place — and not just in “science proper,” science in the narrowly technical sense — for the principles of the European Enlightenment. But, at least for a while, that place will not be above ground culturally. The Enlightenment must henceforth be practiced sub rosa, in a clandestine discursive space of intellectual Tube tunnels where it will be safe.
Where might that be? Funny you should ask …
It is quite possible to critique the Enlightenment as at least implicitly biased in terms of race, culture, and class. The Enlightenment, like all things human, suffered from its own imperfections. For example, many of the heirs of the Enlightenment among the American Founders were members of the aristocracy (though even the American aristocracy were little more than upper middle class, compared to their British counterparts), were racists and therefore usually slave owners (Washington, Madison, and Jefferson) or former slave owners (Franklin), and most harbored a form of Euro-centric cultural bias. However, subsequent history shows that the architects of the Enlightenment were these things, not because of the Enlightenment, but despite it, and that their descendants addressed these issues, not by repudiating the principles of the European Enlightenment, but by getting better at practicing those principles. To cite just one example, the ongoing civil rights movement in the United States originates, not from a disavowal of the principles of the Enlightenment, as embodied in the US Constitution, but from implementing those principles more radically and consistently, as with the application of the “equal protection” clause of the 14th Amendment. The flaws of the Enlightenment argue for more of the Enlightenment, not less. When practiced with uncompromising consistency, the principles of the Enlightenment are all self-correcting. Rather like science.
Hence the question: what can we do to “ride out” the current disillusionment with the principles of (classical!) liberal small-“r”-republican and small-“d”-democratic politics, and the concomitant belief in principles like free inquiry, a secular / religion-neutral public square, respect for rational and evidence-based reasoning, equality before the law, and freedom of expression? The short answer is that the latter-day London Underground I mentioned earlier is us ourselves. (In fact, before you read any farther in this article, I urgently recommend you read David Brooks’ superlative New York Times column on just this issue.)
Acting to preserve the principles of the European Enlightenment in the shelter of our own intellects and moral consciences is a many-splendored undertaking, involving action on several different fronts.
One of the more obvious areas where the Enlightenment project is being challenged today is in the area of science. The post-modern challenge to the Enlightenment incorporates a certain skepticism about science, the scientific method, the epistemological foundations of science, and consequently the utility of science as a means of ascertaining true knowledge about the external world. Post-modernist critiques of science are often written by people – Lyotard, Foucault, et al., come to mind immediately – whose attainments in other fields are undisputed, but whose acquaintance with science and scientific methodology affords them just enough knowledge to be dangerous. One thinks, in particular, of science skepticism based on the belief that ancient myths and belief systems, and contemporary spirituality, are just as revelatory of the Universe as empirical science. So learning involves:
— Familiarizing oneself with contemporary findings in the sciences, especially biology and physics.
This does not mean becoming a biologist or a physicist, but it does involve cultivating a degree of working-knowledge-level familiarity that enables one to penetrate the superficially attractive but shallow façade of contemporary pseudo-sciences like intelligent design, creationism, and the supposed “proofs” in quantum physics of the existence of God.
— Developing a working knowledge, not of particular sciences, but of the scientific method itself, and the role of data and methodology. For example, one often hears it alleged that science requires “just as much faith” as religion. Like many other skeptical arguments, this is just true enough to be dangerously misleading. There is a sense in which science presupposes a certain type of faith, but any attempt to equate the two dies the death of a thousand qualifications, and it is only an unfortunate accident of language that the same word “faith” is used to connote both. Learn and develop an ability to discuss the differences.
— For Americans, one of the most useful elements of learning would be a close and sustained study of how the principles of the European Enlightenment became instantiated, first, in the Declaration of Independence, and later in the US Constitution, including the Bill of Rights. In particular, pay special attention to what both “religion” clauses of the First Amendment imply about the equality of all religious traditions before the civil law, and how such a principle decisively disposes of arguments to the effect that the United States is a “Christian nation” in any sense but the purely cultural. Such a consideration is especially pertinent in light of the “needs of the community” criterion for truth often prevalent in post-modernist writings.
There are many worthy causes dedicated to upholding various aspects of the Enlightenment consensus. The following are suggestions only, intended to give you some idea of where your monetary contributions could be expected to maximize “bang for the buck”:
— Scientific organizations like the Keck Telescope Foundation
— One’s university and / or various particular departments therein (e.g., my wife and I contribute to my old Oxford University college, Exeter)
— Organizations dedicated to the defense and preservation of the founding principles of various Enlightenment-grounded values and practices like free speech / press, due process, etc., e.g., the American Civil Liberties Union, People for the American Way, Americans United for Separation of Church and State, the Southern Poverty Law Center, the National Constitution Center, and the Center for the First Amendment
— One’s local museums, symphony orchestras, and arts organizations as practitioners of First Amendment liberties
I have found that one of the most effective ways of catching the overall “flavor” of the European Enlightenment, and catching it on an intuitive and affective level in a way that transcends words and “logo-centric” discourse, is through music. The music of Enlightenment composers – Bach, Beethoven, Haydn, Mozart, Handel, Telemann … the pantheon goes on … – is, by turns and often simultaneously, elegant, reasoned, passionate, playful, yet always disciplined in a way that flows out of the music itself rather than being imposed from without. Listen to the gracefully galloping first movement of Mozart’s Violin Concerto No. 3 in G. Listen to Franz Josef Haydn’s matchlessly graceful String Quartet in F-Major. (The third movement alone could well serve as a kind of “theme music” for the entire European Enlightenment.) The hallmark of virtually all the music of the Enlightenment is grace and freedom within the bounds of an intrinsic discipline that does not constrict, but rather liberates … in other words, the diametric opposite of the characteristically post-modern hostility toward all forms of discipline as putative instruments of oppression.
Rather than compile a reading list, which would probably stretch for the length of a dozen ‘Zine articles, I will mention a few books, and recommend that those of you who want to do “deep dives” into the history and ideology of the Enlightenment read these books, and then sample the sources, both primary and secondary, in the footnotes and bibliographies.
— From Dawn to Decadence: 500 Years of Western Cultural Life, 1500 to the Present by Jacques Barzun
Barzun’s book can serve admirably as a kind of Baedeker guide-book to the European Enlightenment, both in the British Isles and on the Continent. Its bibliography is exhaustive, and a comprehensive reading of it would be exhausting.
In terms of the Enlightenment roots of the US Constitution and of American constitutionalism, there are none better than:
— America’s Constitution: A Biography by Akhil Reed Amar, Sterling Professor of Law, Yale
— The Bill of Rights: Creation and Reconstruction also by Amar
The latter is especially useful in terms of assessing how the “equal protection” clause of the 14th Amendment affected the interpretation of the Constitution “proper” and the Bill of Rights
— The Invisible Constitution (Inalienable Rights) by Prof. Laurence Tribe of Harvard Law
A very instructive, but eminently readable, treatment of the un-enumerated rights contemplated by the 9th Amendment
— On Reading the Constitution also by Prof. Tribe
Very useful “how-to” book on how to read – and not read – the Constitution
— Desperately Seeking Certainty: The Misguided Quest for Constitutional Foundations by Daniel A. Farber and Suzanna Sherry
The most sheerly entertaining book on constitutional theory – three words I never thought to find in the same sentence – I have ever read, in which interpretation theory is developed in parallel with a recipe for latkes. Please. Just read it.
— The Ideological Origins of the American Revolution by Bernard Bailyn
For my money, the masterpiece of them all in terms of the Enlightenment, especially English / Scottish Enlightenment, roots of the American Revolution and Constitution
— A Primer on Postmodernism by Stanley J. Grenz, Pioneer McDonald Professor of Baptist Heritage, Theology, and Ethics at Carey Theological College and Professor at Regent College in Vancouver, BC
For sheer clarity of exposition of an intrinsically murky subject, Prof. Grenz’s book cannot be beaten. The last few chapters are written from a conservative evangelical standpoint, from which those not like-minded may demur, but that does not alter the clarity of the preceding text.
— Basically, anything by Prof. Jurgen Habermas of the Frankfurt School
But choose your text carefully. Habermas is widely – and justly – regarded as the greatest European philosopher since Immanuel Kant, and his texts are about as dense and impenetrable as those of his intellectual predecessor. Habermas is a voice in the wilderness in terms of his withering critiques of post-modernism, including his responses to his Frankfurt School predecessors Theodor Adorno and Max Horkheimer. Good luck with this! When I first encountered the Frankfurt School, I had a full head of hair and weighed 50 pounds less.
People who defend the Enlightenment project have to be much more assertive, often aggressively so. This is an unaccustomed stance because, up until approximately the middle of the 20th century, the Enlightenment consensus was essentially unchallenged. The Enlightenment premises of modernism seemed inscribed into reality like the value of pi. But now we have to learn to:
— Defend the value of science and the integrity of the scientific method by learning – to cite a few of the more pertinent examples – what the theory of evolution through natural selection really says (Hint: it does not say “humans came from monkeys” or that “evolution is random”)
— Explain that the United States is a “Christian nation” only in a purely cultural sense, not as a matter of law
— Recognize that Gödel’s Incompleteness Theorem is a double-edged sword: it cuts both ways. Asserting, as globally true, that verbal and written texts are subject to endless interpretation is itself an example of an attempt to “universalize” a text, and therefore – according to Gödel’s Theorem – to render the text contradictory. Like any other universe of discourse, post-modern ideology is valid – at most – only locally, not as a universal principle.
Marx and Engels began The Communist Manifesto with the statement “A spectre is haunting Europe — the spectre of communism”. My equivalent is “A spectre is haunting the West – the spectre of post-modernist nihilism”. Once contained within the biosafety-level-4 laboratories of the English and philosophy departments of the academic world, the virus of post-modernism has escaped into the political ecosystem, with results that are most evident in the election of Donald Trump in the US – the first completely post-modern American President – but that are also afflicting the European nations that nurtured the Enlightenment and the constitutional socio-political order it engendered. (What a stinging historical irony that the nation that produced Adolf Hitler is also the nation whose Chancellor, Angela Merkel, is the modern-day Leonidas defending the Thermopylae of the West against the assault of the post-modern Persians.) If the heritage of the Enlightenment is to be preserved, along with the constitutional, latitudinarian, rights-centric socio-political order it engendered, it will be up to the beneficiaries of that order – us – to do so. No one else will. No one else can.
James R. Cowles
Jean-Francois Lyotard … Bracha L. Ettinger … Creative Commons Attribution-Share Alike 2.5 Generic
Michel Foucault … Photographer unknown … Public domain
Collatz fractal … Originator unknown … Public domain
“Metanarrative” quote … David Bentley Hart … Public domain
Franklin Graham … “Cornstalker” … Creative Commons Attribution-Share Alike 4.0 International
Jerry Falwell, Jr. … Liberty University … Public domain
Escher waterfall … M. C. Escher … Fair use
“The Milkmaid” … Johannes Vermeer … Public domain
“Flat earth” engraving … Camille Flammarion … Public domain
“Vitruvian Man” … Leonardo da Vinci … Public domain
William Herschel’s telescope … Artist unknown … Public domain
Johann Gutenberg reviewing a press proof … Artist unknown … Public domain
Treaty of Westphalia, 1648 … Photographer unknown … Public domain
Jurgen Habermas … Wolfram Hake … CC-BY-SA-3.0
By this point in the 2016 presidential campaign, it has become something of a cliché to compare the candidacies of both Donald Trump and Ted Cruz, and all the turmoil, often violent, surrounding the former’s campaign rallies, to the spawning of the monster in Mary Shelley’s 1818 novel Frankenstein; or, The Modern Prometheus. Progressives and people to the left side of the political spectrum sometimes joke that such comparisons actually insult Frankenstein’s monster. But by concentrating exclusively on Trump and Cruz and the perennial freak show of the lunatic right, the comparison misses the larger point that the real Frankenstein monster – the monster that ultimately gestated Trump, Cruz, the Great Recession, and their attendant pathologies – is contemporary capitalism itself. I emphasize contemporary capitalism deliberately, because the adjective “contemporary” is absolutely critical: the capitalism to which we have all-too-rapidly become accustomed is not capitalism as it existed in the few Administrations immediately following the Second World War. That capitalism – roughly speaking, the capitalism of the Truman, Eisenhower, Kennedy, and Johnson years – was, comparatively speaking, a “kinder, gentler capitalism” than the system denoted by the “c-word” today. To paraphrase an advertising slogan: This is not your parents’ capitalism.
Now, before we go any farther and commit the criminal offense of misdemeanor sociology by over-idealizing what those years were like, I should back up a step or two and acknowledge that, no, the largesse of those supposed halcyon days by no means included everyone. Yes, the middle class was growing … but mostly the white, male, heterosexual middle class. Yes, home ownership was burgeoning … but mostly only for white, heterosexual families (and also in large measure because of the GI Bill to assist veterans, a measure a hard-right GOP Congress might well refuse to fund today, for fear of nurturing a “culture of dependency”). (The term “homosexual family” would have been considered as oxymoronic as “two-sided triangle”.) Yes, Dinah Shore sang her theme song – which I am old enough to remember – “See … the … U … SA in your Chev … ro-let … “ But you had to be able to afford a Chevy, which many people in that ostensible golden age of the American economy could not. This was also the time of the germinating civil rights movement; the schoolchild “duck-and-cover” time when we believed that the Nation could be annihilated in a half-hour – and when, during the Cuban Missile Crisis, it nearly was; when schools were segregated … as Gov. Orval Faubus vowed they always would be in Arkansas; when registering black people to vote could be, in some cases was, worth your life, etc., etc., etc. But, that said, the fact remains that for some Americans – by no means all, but for a number unprecedented in world history – the middle class was, not just growing, but thriving … so much so that, in our optimism, we even coined a phrase for the coming of Camelot and the Kennedy era: “the Soaring Sixties”. Remember that?
So what happened? I like to think of it in terms of an analogy with biological evolution. A Reader’s Digest-condensed version of biological evolution, basically the skeleton of Darwin’s original theory, the first edition of which was published in 1859, says that as changes occur in an organism’s phenotype via random mutations in its genotype, the environment acts on the resulting mutated organism to determine whether the organism lives or dies. (Darwin had only the crudest conception — something called “pangenesis,” long since discredited — of how mutations originate.) It’s like a vast, jaw-droppingly complex, planet-spanning figure-skating competition: organisms “skate” their “program”, mutations included, and the environment acts as the panel of judges, determining which organisms survive and which do not … survival being defined as the ability to live long enough to reproduce and thus pass on the adaptation to their descendants. But as the environment changes over time, the “judging criteria” that determine the fate of each species likewise change: mutations that were once advantageous or neutral may become disadvantageous – the technical term is “maladaptive” – under the new environmental regime. Perhaps the classic example of this process is the meteor strike on the Yucatan Peninsula 65 million years ago, which resulted in basically a “nuclear winter”: the debris thrown up by the impact reflected sunlight back into space and thus cooled the planet. Dinosaurs – long pictured as huge lizards with little ability to regulate their body temperature – had been around for over 150 million years, and suddenly found themselves in the midst of a catastrophe. Because the earth became colder – and there were other changes because of the meteor – the evolutionary niche once occupied by cold-blooded dinosaurs came to be occupied by mammals, which do have the ability to regulate their body temperature independently of the environment.
Result: the dinosaurs died off; mammals – including humans about 64 million-plus years later – survived.
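For readers who think in code, the “figure-skating competition” sketched above can be caricatured in a few lines of Python. This is a deliberately crude toy, not a biological model: the trait values, the environmental “optimum”, and every parameter here are invented purely for illustration of the selection logic (mutate, judge, reproduce, then change the judging criteria).

```python
import random

random.seed(42)  # make the toy run reproducible

def generation(population, optimum, mutation_step=0.1):
    """One round of 'skating': mutate every organism, then let the
    environment 'judge' by keeping the half closest to the optimum."""
    mutated = [trait + random.uniform(-mutation_step, mutation_step)
               for trait in population]
    # Fitness is higher the closer a trait sits to the environmental optimum.
    survivors = sorted(mutated, key=lambda t: abs(t - optimum))[:len(mutated) // 2]
    # Survivors reproduce, restoring the population to its original size.
    return survivors + [random.choice(survivors) for _ in survivors]

population = [random.uniform(0.0, 1.0) for _ in range(100)]

# A warm world favors traits near 0.9 ...
for _ in range(50):
    population = generation(population, optimum=0.9)
warm_mean = sum(population) / len(population)

# ... then the "meteor" strikes, and the judging criteria shift to 0.1.
for _ in range(50):
    population = generation(population, optimum=0.1)
cold_mean = sum(population) / len(population)

print(f"mean trait in the warm world: {warm_mean:.2f}")
print(f"mean trait after the environmental shift: {cold_mean:.2f}")
```

The point of the toy is the one made above: nothing about the organisms changed in kind; only the judges changed, and a trait that had been winning for fifty generations became maladaptive overnight.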
OK … now back to capitalism … Societies – in particular, societies’ economies and the underlying technological infrastructure – evolve, too. And the process is intriguingly similar to biological evolution in response to a changing environment. The “figure skating competition” here, however, is between forms of socio-economic organization – what Marx called “the mode of production” – and the overall technological environment in which production takes place – what Marx called “the means of production”, with the “mode” playing the part of the skater and the “means” playing the part of the judges. (Again, the same caution: this is a Reader’s Digest-condensed synopsis.) Conservatives spill ‘way, ‘way too much ink pooh-pooh-ing Marx’s theory of the materialist dialectic of history – by which, Marx says at one point, he “stood Hegel on his [Hegel’s] head” – and ‘way, ‘way too little ink acknowledging the keen insights that, despite the undisputed oversimplifications of Marxist theory, lie at the heart of Marx’s basic paradigm. An example might clarify matters. In the Middle Ages, the production of goods was carried on according to what we today would call a “cottage industry” paradigm. A wainwright – a carriage-maker – would typically start with raw materials, fabricate the various parts of the carriage, put those parts together into higher- and higher-level assemblies, and finally put those assemblies together into a finished carriage – and, in the process, maintain exclusive control over the entire manufacturing process from start to finish, “touching” the entire carriage at each stage as it was being built. Working with the wainwright would be some young men – always men – who would serve apprenticeships as “wainwrights-in-training”. Furthermore, a master craftsman usually developed a close personal relationship with his apprentices, journeymen, etc., and the group often even lived together. 
As the “junior wainwrights” were trained, the supervising craftsman and the local wainwright guild would observe their progress and together determine what stage each trainee / “intern” was at: apprentice, journeyman, etc., all the way up to master craftsman – at which point the once-apprentice could become an independent craftsman in his own right, authorized to hire his own apprentices and teach them, whereupon the cycle would repeat.
Then came the factory movement from the middle 1700s on, the reasons for which are too complex even to synopsize here. Suffice it to say that the factory movement resulted from advances in technology that enabled the manufacturing process to be broken down into rather naturally occurring, small, easily identifiable, discrete stages, each of which could be physically isolated from the others inside an immense building – called a “manufactory,” later abbreviated to just “factory” – where a given worker, or more likely a cadre of several dozen workers, performed the same discrete sub-task, and passed the results on to other cadres of workers who would perform subsequent sub-tasks. In Marxist language, the “means” of production underwent a tectonic change. Now, instead of working on an entire product, each worker in the factory dealt with only a small, discrete task, and often had no idea how that one discrete task fit into the manufacture of the end-product. Furthermore, the idea of craftsmanship became quaint … then ceased to have any meaning altogether: there is no sense of craftsmanship in the fabrication of a mere “sub-widget”. Over time, and a rather historically brief time, at that, workers became mere fungible ciphers: if worker A and worker B fabricate the same type of widget X, then they are interchangeable; and given the simplicity of the discrete tasks, either can be trained to fabricate widget Y. The workers became strangers to the end-product, and, unlike a century before, strangers to one another. In Marxist language, the “mode” of production underwent a tectonic change. Our hypothetical wainwright building a carriage from start to finish with the help of his apprentices and journeymen became as obsolete as the post-meteor dinosaurs – and for essentially the same reason: the craftsman, like the dinosaur, was adapted to an obsolete environment.
In the brave new world of the factory environment, mere physical dexterity – the ability to rapidly build sub-widgets – will win out over craftsmanship every time.
But the crowning humiliation came when the factory movement, leveraging advancing technology, gradually substituted machines for human workers altogether. In some meaningful sense, human beings became quite literally worthless in many contexts. What supplanted the value of workers was the value of capital, i.e., the money necessary to buy land and equipment, build factories, buy raw materials, and in general “jump start” the entire manufacturing enterprise. The cost associated with the workers themselves was relatively minimal: defined as the minimum wage necessary to enable a worker to subsist and to reproduce, so as to engender other workers to feed into the system. (The factory movement routinely employed children whose age was expressed in single digits.) Because workers could not afford the costs of transportation to and from their jobs, this also meant that workers had to move from the countryside, where most of the “cottage industry” work had been done prior to the factory movement, into great cities where they could be close to their jobs, usually congregating into vast slums whose appalling misery has been so well documented in the novels of Charles Dickens, giving rise to scenes of human degradation that bear comparison only to conurbations of nightmare like today’s Mogadishu. It is this “para-Hegelian” dialectic between “means” and “mode” that drove the evolution of history, argued Marx. No wonder Romantic poets of the late 1700s and early 1800s like William Blake wrote of the “dark Satanic mills” and of the hellish filth-scapes of Whitechapel and the East End. No wonder the Luddite movement, with its hostility to the machinery that was displacing skilled workers, became increasingly popular. No wonder French workers, for fear of being displaced by machines, were said to throw their wooden shoes (sabots, in French) into the cogs and gears of the machines … giving us, by a popular though disputed etymology, the word “sabotage”.
As it is in biology, so also it is in socio-economics: evolution does not forgive.
So in many ways, the London of Charles Dickens is the tangible embodiment and vindication of Karl Marx: the means of production – factories leveraging technology so as to use human workers, if at all, only as flesh-and-blood machines – and the mode of production – wage-slavery intensified to a fever pitch through the massive urbanization of labor. All in the service of Capital. Now multiply the single example of London by all the great cities of Europe – their name is “Legion,” for they are many – and the sense of moral crisis becomes almost tangible. Two questions present themselves: (1) how the hell did matters come to such a pass back then? and (2) why does the present resemble the past to such an unsettling extent? I would suggest that at least the outlines of an answer begin to emerge if we consider two factors we usually do not associate with each other: biological evolution and the European Enlightenment.
It’s important to remember a critical fact about the evolution of our species: it’s about survival. Or, to be strictly precise, evolution is about surviving long enough to reproduce. Furthermore, given the short life-spans (on geologically and cosmically significant time-scales) of our species, homo sapiens sapiens, the type of survival toward which evolution is biased is short-term survival. Evolution — evolution alone and unaided by human intentions — is “concerned” with the long-fanged beast hiding behind that rock over there, not the long-fanged beast hiding behind other rocks elsewhere farther away. Evolution certainly has long-term consequences, but these are worked out in billions upon billions of particular, discrete, short-term instances. In an odd kind of way, evolution is like that verse in II Corinthians 6:2: “Now is the accepted time, now is the day of salvation”. For evolution, now — or perhaps 5 minutes or perhaps an hour from now — is all that counts. An organism that dies right now never reproduces, and thus falls out of the evolutionary stream.
As paradoxical as it may sound, given the time-scales involved, evolution is actually the ultimate in short-term thinking. So we should not be surprised that humans are biased, down to the deepest sub-basement of our neuroanatomy, toward similar short-term thinking. We are evolutionarily predisposed to think in terms of the next 5 minutes or 5 hours. That is the consequence of the way the human brain evolved. Evolution tends to be very parsimonious: it throws almost nothing away. (Most of the DNA in the human genome is so-called “junk” DNA: perhaps functional, even vital, at one time, it has since been superseded and no longer “does anything” — but was never discarded.) So as the brain evolved from reptiles to mammals to primates, the earlier parts of the brain were, not discarded, but built upon, rather like a medieval castle or manor house. “Evolution” and “efficiency” both start with the letter “e”, but the similarity ends there. (The conservative parsimony of biological evolution, by the way, poses a sticky problem for advocates of intelligent design: whatever Designer exists must have a severe hoarding fetish if S/He preserves so much “junk”.) Those archaic parts of the brain — less accurately but more descriptively called the “reptile brain” — are collectively called the “limbic system”, and include structures like the amygdala that do primitive, “fight or flight” processing of the emotions that demand instantaneous, reflexive, very-short-term responses, i.e., responses, like dropping a match when it burns your finger-tips, that do not require conscious thinking. Comparatively primitive structures like the amygdala reflect evolution’s “assessment” that stopping to think can sometimes be fatal — and therefore maladaptive.
What does all this have to do with capitalism, both old and new? Well, if you stop to reflect on the fact that, at least in capitalist economies, the economic system is an arrestingly faithful analog of a biosphere, complete with “nature red in tooth and claw” competition for survival, the answer should be obvious. Because of the emphasis on competition and survival in the marketplace, the evolution that occurs in capitalist economies is no more predisposed to long-term thinking than the evolution that occurs in biospheres. The natural and “naive” tendency of all capitalist economies is to concentrate on today’s profit and tomorrow’s or next quarter’s bottom line, and if that means the growth of slums, the pollution of the natural environment, and social pathologies that can only be restrained and contained by the application of brute force, then … well … the Devil take the hindmost.
But the limbic system was not the only part of the brain to evolve. Human beings also developed a cerebral cortex — the part of the brain that, loosely and qualitatively speaking, deals with abstract thought and therefore, most importantly, with long-term planning. With only an amygdala and its associated structures, human beings would still be capable of pursuing their self-interest. But only with a cerebral cortex are we capable of pursuing our enlightened self-interest. But like any powerful instrument — a car, a computer, a nuclear reactor, etc. — there is the issue of learning how to use it. Much of human history could be written in terms of the two-steps-forward-one-step-back process of humans learning how to use the cerebral cortex. And we are still very much in the process of learning how to use it. One of the most critical, make-or-break steps in Westerners’ learning how to use this awesomely powerful instrument was the European Enlightenment that began in the middle 1600s and that continues today. Much of human history between the fall of Rome and the end of the Thirty Years’ War in 1648 consisted of religious zealotry placed at the service of the amygdala and the limbic system. But because of the rediscovery of the classical world, the efflorescence of science, and in consequence a renewed confidence in the powers of the autonomous human intellect and rationality, Europeans gradually — it was a very near thing — discovered how to agree to disagree and live with their differences instead of slaughtering one another over them.
It would be literally impossible to overestimate the importance of this discovery. The fact that Europe, with all its faults, is not a late-Bronze-Age wasteland today is because, over time, the principles of the Enlightenment — tolerance of differences, the concept of inalienable human rights, the unique value of human beings, the idea that governments and economies should work for human beings instead of the other way around, that it is legitimate to circumscribe the behavior of the few for the good of the many, etc., etc., etc. — came to dominate the culture in terms of its rhetoric … and gradually in terms of its behavior. Anyone who watches the news or even reads a newsmagazine occasionally or peeks at internet blogs now and again will be convinced that there is still an enormous amount of work to do to put these principles into practice. But even a casual acquaintance with history will reveal that we have come a long way. As Dr. King once said, quoting an old slave hymn, “We ain’t what we ought to be, and we ain’t what we gonna be, but thank God we ain’t what we was”.
So what conservatives miss in their critique of government “meddling” in the economy, e.g., their oft-avowed (though never fulfilled) pledge to abolish the EPA and like agencies, etc., is that the whole sweep of human civilization since humans descended from the trees and emigrated from east-central Africa has been to escape from, to transcend, Nature, and to temper and moderate Nature’s brutality, not to slavishly replicate it in our social and economic relations. “Nature red in tooth and claw” is fine if you are the “apex predator” who benefits from such an arrangement, so it is no accident that the farther up the affluence scale you go, the more intense becomes the hostility to government regulation: if the game is already rigged in your favor, you will naturally be reluctant to change the rules of the game. But one of the benefits of the Enlightenment was a renewed confidence in humans’ ability to critique such arrangements and to perform tasks of autonomous moral reasoning, and thus establish a rational basis for altruism, for care for the weak, for the support of the disadvantaged — and thus to hedge about the otherwise-unrestrained cut-throat competition in the capitalist jungle with limits that ensure human life, human survival, and human dignity — values of which pure and unadulterated Nature is ignorant. Hence the abolitionist movement in nineteenth-century England. Hence efforts to alleviate the suffering of the workers in the slums of London. Hence the abolition of poor houses and debtors’ prisons. All were examples of “big government meddling,” and yet all were rooted in the Enlightenment-backed consensus that, while human beings emerged and originated from Nature, we are not bound to take up permanent residence there.
Capitalism can be and has been — and very often still is — a good and healthy and liberating thing. But capitalism is morally defensible only as long as, and to the extent that, human beings are in charge of capital for the good of the entire human community … never vice versa.
Skepticism is the chastity of the intellect, and it is shameful to surrender it too soon or to the first comer: there is nobility in preserving it coolly and proudly through long youth, until at last, in the ripeness of instinct and discretion, it can be safely exchanged for fidelity and happiness. — George Santayana
There are not many tenets of orthodox Christianity in which I still believe. But one of the few such – it may well be the only one – to which, at least in some cognate form, I still subscribe is the doctrine of Original Sin. Yea and verily! I even believe that Original Sin is inherited in being passed down from parent to child, very much in the tradition of St. Augustine. (I do demur from Augustine’s conclusion that the heritability of sin renders sexual intercourse intrinsically immoral.) In fact, so fervent is my agreement that I even meet and embrace Philip Larkin, who wrote, in a poem called “This Be the Verse”, “Man hands on misery to man; / It deepens like an ocean shelf”. Indeed, in some cases the second half of that stanza is also sound advice: “Get out as early as you can, / And don’t have any kids yourself”. I will leave to others of more orthodox beliefs the task of unpacking the moral, metaphysical, and theological consequences. In the more pragmatic spirit of substituting the proverbial ounce of prevention for a pound of cure, I will only suggest some measures that I believe would preveniently mitigate the consequences of sin. My suggestions are quite the reverse of radical. In fact, I appeal to the stodgily traditional practice of using the US tax code to encourage and to incentivize certain forms of behavior.
Well … OK … probably not that easy – a corollary of Murphy’s Law says “Everything is harder than you think” — but comparable! Especially when you consider the alternative.
As matters stand right now, the US tax code supports Original Sin by subsidizing parenthood indiscriminately. In 2013, for example, you could claim a $3,900 exemption for each dependent child. Period. No qualifications. End of discussion. Elvis has left the building. All the child has to do to qualify you for the exemption is meet the criteria in the tax code for being a “dependent”. (What are those criteria? Don’t ask me. I’m not a tax specialist. As a tax accountant, I’m a great short-order cook!) But the point is that, as long as your child meets the dependency qualifications in the US tax code, you may take the exemption, even if your skills as a parent make Gilles de Rais look like the Walton kids’ folks. At least partially because of one’s incompetent parenting, one’s child can turn out to be as warped as Dracula’s assistant, Renfield, and, especially when grown, can wreak as much havoc on the community as Dracula himself. No matter. You can still take the $3,900 tax break, which in such a case will then amount to a subsidy for sociopathy: “Hand[ing] on misery to man” with a vengeance. Or, as St. Augustine might say, subsidizing Original Sin. Then, if, as we fervently hope, one’s child is apprehended by the police instead of pursuing her / his career as a real-life character out of a Criminal Minds episode, society – meaning you, me, and everyone else – will pay the price, which we hope will only be monetary, for whatever contribution was made to the situation by virtue of incompetent parenting. In extreme cases, a court might even declare the parents to be unfit, and the child might then become a token in the pinball machine known as “Foster Homes”.
May I make a suggestion? Let’s do things differently!
In the interest of fairness, let’s acknowledge up front that, yes, of course, quality of parenting, while important, is only one factor among many in the kind of person a kid turns out to be. Yes, of course: children make their own choices, and even good parents can have children who go tragically wrong. Yes, of course: children are human beings, not programmable automatons. Yes, of course: if you want an iron-clad guarantee, go buy a lawn mower from Sears. That said, however, why not structure the tax code to encourage, not parenting as such, not parenting per se — but good parenting? How could we do this? Well, we know many of the factors that make for competent parenting. We don’t know literally everything, of course. We don’t know literally everything that makes a good professional football player, either. But the NFL draft proceeds apace nonetheless. Instead of encouraging parenthood promiscuously via the “shotgun” approach of giving $3,900 to everyone whose reproductive plumbing worked as advertised, as is the current practice, we should be as persnickety about encouraging parenting as NFL teams are in selecting athletic talent. We should subsidize incompetence in neither one.
First of all, please be assured that I do not — I say again: I do not — propose to limit the number of kids as a matter of law. So also be assured that you can have as many kids as you want. However, there will be two salient changes to the tax laws pertaining thereto:
Step 1: On as nuts-and-bolts-practical a level as possible, develop a curriculum on competent parenting in consultation with the best child psychologists, developmental psychologists, experienced parents, and other experts in the relevant fields.
Step 2: Once this curriculum has been developed and appropriately vetted and implemented, change the tax code so that:
Step 2a: You may only claim the dependent-child exemption if you can authenticate having successfully completed — just once, not once per child — the curriculum.
Step 2b: If you do not complete the curriculum, not only may you not claim the dependent-child exemption, but your tax liability will be increased by, say, 1 percent for each dependent child, probably subject to some sliding-scale algorithm that takes into account adjusted gross income.
Admittedly, this is only abstract training, and as Dostoyevsky’s “Underground Man” said, to know what is right is not to do what is right. So it would probably be desirable to incorporate some type of parental counseling component into the curriculum. In any case, this scheme has two purposes: (1) to shift more of the cost of inept parenting from society as a whole to the parents themselves, and (2) to do so preveniently, before the need arises. Particular counter-examples notwithstanding, quality of parenting does, statistically, make a significant difference. The problem is that, at present, the social costs of incompetent parenting are incurred after the fact, and by the community as a whole. I only propose to use the tax code in such a way as to (a) encourage people to be good or better parents by providing some type of training, and (b) shift the future costs of poor parenting to the parents themselves, by making parents who do not elect such training pay in advance of need, instead of making the community as a whole pay after the damage has already been done.
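For concreteness, the exemption-or-surcharge arithmetic of Steps 2a and 2b could be sketched as follows. Only the $3,900 exemption figure comes from the essay; the 25% marginal rate, the AGI brackets, and the scale factors for the sliding-scale algorithm are purely illustrative assumptions of mine, not anything in the actual tax code.

```python
# Hypothetical sketch of the proposed Step 2a / 2b logic.
# All brackets, rates, and scale factors below are illustrative
# assumptions, not actual tax law.

EXEMPTION_PER_CHILD = 3900  # the 2013 dependent exemption cited above


def adjusted_liability(base_liability, num_dependents, agi, completed_curriculum):
    """Return tax liability under the proposed scheme.

    completed_curriculum: True if the parent has finished the (one-time)
    parenting curriculum, in which case the per-child exemption applies.
    Otherwise a surcharge of ~1% per dependent child applies, scaled by
    a hypothetical AGI-based sliding factor.
    """
    if completed_curriculum:
        # The exemption reduces taxable income; approximate its effect on
        # liability with an assumed 25% marginal rate.
        marginal_rate = 0.25
        return base_liability - EXEMPTION_PER_CHILD * num_dependents * marginal_rate

    # Step 2b: sliding-scale surcharge — higher AGI, higher factor.
    if agi < 50_000:
        scale = 0.5
    elif agi < 150_000:
        scale = 1.0
    else:
        scale = 1.5
    return base_liability * (1 + 0.01 * num_dependents * scale)
```

The point of the sketch is only that the scheme is computable with information already on a 1040: number of dependents, adjusted gross income, and a single yes/no attestation of curriculum completion.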
Yes, of course, there are holes in the plan. Yes, of course, there are details to be worked out. This is a brief blog post, not a comprehensive policy white paper for the Brookings Institution or the Institute for Policy Studies or the Guttmacher Institute. I take it as axiomatic that, if parenting by the seat of one’s pants had immediate or short-term and significant pocketbook consequences to people who undertake the responsibility of raising kids, then greater care and thought would go into the decision of whether to have kids, and, if the decision was “Yes”, how to proceed. To paraphrase Dr. Johnson, “The sight of a 1040 form focuses the mind wonderfully”. And to quote George F. Will verbatim, “You always get more of what you subsidize”. Simple justice — never mind theology — demands that we use the tax code, not to subsidize Original Sin and the dialectic described in the Larkin poem, but to mitigate it. Of course, we’ll never do it. But we should.
James R. Cowles
1280px-Schoolgirls_in_Bamozai — Public domain (Capt. John Severns, U.S. Air Force – Own work)
435px-Form_1040,_2005 — Public domain (US government tax form)
Editor’s Note: Coming on the heels of the U.S. Supreme Court decision in Obergefell v. Hodges, stating that the fundamental right to marry is guaranteed to same-sex couples by the Due Process and Equal Protection clauses of the Fourteenth Amendment to the U.S. Constitution, we present James Cowles’ “Popular Sovereignty” and Marriage Equality essay. It is reblogged here from Beguine Again. James Cowles is a newer member of the Core Team. His poetry was shared in past issues of The BeZine. Welcome James!
One of the more enlightened-sounding proposals aimed at resolving the question of marriage equality for sexual-orientation minorities is to allow each State in the Nation to decide the issue, either by a popular vote via initiative or referendum, where the State constitution permits such, or by a vote of the State legislature. This alternative appeals to the “democracy instinct” that is pretty much encoded into the Nation’s political DNA. But this perception is deceptive. We have seen this movie before, and its deeper implications are anything but friendly toward individual rights. The first time we saw the “let-the-States-decide” movie was in 1858 with the Lincoln-Douglas debates. All that is different, 1858 vs. now, is the specific matter at issue: slavery then vs. marriage equality now. But what was really at issue in both instances was much deeper, going to the “ontology” of human personhood.
In 1858, the year after the infamous Dred Scott v. Sandford decision of the Roger Brooke Taney Supreme Court, Stephen Douglas, senior Senator from Illinois, and Abraham Lincoln, former one-term representative from that State, as part of their respective Senate campaigns, undertook an epic series of debates up and down the length and breadth of Illinois, each challenging the other on his solution to the burning slavery question that would finally eventuate in the Civil War. (In those days before the 17th Amendment, Senators were chosen by the State legislatures. Sen. Douglas won. Mr. Lincoln lost. But Mr. Lincoln would go on to be elected President in 1860. South Carolina would secede from the Union the following month.) Sen. Douglas repeated his often-advocated proposal of “popular sovereignty”: let each State decide for itself whether that State will be slave or free. As Mr. Lincoln was quick to point out, Sen. Douglas’s proposal had already been ruled unconstitutional the previous year by the Supreme Court in the Dred Scott opinion. Thus “popular sovereignty” died a-borning. To understand the reasons for this, I refer you to the Dred Scott decision itself. Looming at least equally large at the time was the fact that the Taney Court, on the way to its decision, also declared unconstitutional the Missouri Compromise of 1820, which had the effect of quarantining slavery within the States and territories where it was already legal. With Dred Scott, the Taney Court “breached containment” and set the slavery virus loose in the Union as a whole.
Dred Scott has been vilified now for 158 years as the judicial equivalent of Pearl Harbor: “a date which will live in infamy”. Or maybe the 9/11 attacks. Justly so, in an obvious sense. Two years after Dred Scott, in 1859, John Brown would stage his abortive assault on the Federal arsenal at Harper’s Ferry; the Nation, both North and South, would quail before the prospect of a slave rebellion; Brown’s trial and execution would only succeed in making him a martyr and rendering the Civil War, already almost a certitude, inevitable. (“Things fall apart; the centre cannot hold; / Mere anarchy is loosed upon the world” — William Butler Yeats, “The Second Coming”.) But if we take a step or two back and look at Dred Scott dispassionately, to the extent that is possible, what becomes clear is the question beneath the question.
In that sense and to that extent, the decision of the Taney Court did the Nation a service in clarifying, if only in retrospect, what was really at stake. If Douglas’s proposal of “popular sovereignty” had been adopted and implemented, and if each State had voted on whether to be slave or free, what would the State really have been voting on? The State would have been voting on, not only the legal status of slavery within its borders, in fact, least of all on that, but on whether or not the “ontological” character of human beings – some human beings, anyway – was such that human beings were the kind of thing that could be owned. The real question at issue is whether or not slaves are human beings with human rights. The Court said “No”, of course, asserting that “[African slaves are] beings [note: not “human beings” but just “beings” – JRC] of an inferior order, and altogether unfit to associate with the white race, either in social or political relations, and so far inferior that they had no rights which the white man was bound to respect”. Mr. Lincoln’s critique of “popular sovereignty,” which predates by several years his debates with Sen. Douglas, is predicated on his revulsion for placing slavery and freedom on an equal moral plane as Coke-or-Pepsi alternatives meriting equal consideration. In a speech in Peoria, IL, in 1854, he asserted that “there can be [no] MORAL RIGHT in the enslaving of one man by another.” (all-caps in original) In the last analysis, Sen. Douglas’s proposal to settle the slavery issue by “popular sovereignty” is just as much a negation of the human-ness of the slave as the Dred Scott decision itself. To subject human-ness to majority vote is to deny the existence of the very thing you are voting on. If slaves are human beings, there is nothing to vote on. Conversely, to insist on voting on whether a certain group has human rights is to deny the human-ness of that group. (In Kitchen v. Herbert, the decision that struck down Utah’s gay-marriage ban, the US Court of Appeals for the 10th Circuit said “The protection and exercise of fundamental rights are not matters for opinion polls or the ballot box”.) Human beings have human rights. To affirm one is to affirm the other; to deny one is to deny the other. Period. End of discussion.
Well, as I said earlier, we have seen this movie before. Now we are seeing it again. Now the issue is not the “ontological” character of slaves, but the “ontological” character of sexual-orientation minorities. In particular, the question now is whether such minorities have a right to marry. At least, that is the “surface” question, corresponding to the choice Sen. Douglas proposed putting before the States. Now, before I go any farther, I want to reaffirm the all-important dual character of marriage: marriage as a civil contract, and marriage as a religious ordinance / sacrament. My remarks are confined entirely to the former aspect of marriage, i.e., marriage as a contract in civil law not essentially different from, say, a contract with Verizon for cell-phone service or with Bank of America for a mortgage loan. Within that context, we may ask “Is the right to enter into a (civil) contract a human right?” That question we can resoundingly answer “Not only ‘Yes’, but ‘Hell yes’”. In fact, during the opening years of the 20th century – the Lochner era – the Supreme Court’s “hell-yes” answer was so strong that very little progress could be made until the New Deal in terms of ameliorating employees’ working conditions: employees had entered into a contract with their employer that was so iron-clad that even Federal courts felt bound by constitutional prohibitions forbidding impairment of contracts. We are no longer in the “Lochner era”, of course, but the right to enter into contracts is still strong – be the contract a mortgage or a marriage …
… unless you are a sexual orientation minority …
In that case, some argue that an act of the legislature or the electorate … anyway, some kind of vote … is necessary. And even then, only with regard to the specific type of civil contract known as “marriage”. No one argues that a vote is necessary to “give” sexual-orientation minorities the right to contract with Verizon for cell-phone service. No one argues that a vote is necessary to “give” sexual-orientation minorities the right to get a mortgage. No one argues that a vote is necessary to “give” sexual-orientation minorities the right to contract with a gardening service to mow, mulch, and fertilize their lawns. Those are all civil contracts. But when you mention the civil contract known as “marriage”, suddenly some people are not willing to grant that right without some kind of prior plebiscitary permission. Why? I can think of two possible reasons:
o Marriage is a religious ceremony / sacrament not normally granted to gay / lesbian people
But in that case, the State is clearly overstepping its “establishment”-clause boundaries by presuming to grant gay / lesbian people permission to participate in a religious activity. One may as well envision the State having a voice in whether a Catholic priest can celebrate Mass or whether a Buddhist sensei can chant the Diamond Sutra. But I think a more likely reason is …
o Gay / Lesbian / LGBTQIA people are not … well … not … well … not “like us” … any more than black slaves were “like us” in Sen. Douglas’s mind in 1858, and so require permission to exercise what the rest of us – those who are “like us” – consider a human birthright: the right to contract (civil) marriage
In other words, to be brutally honest, gay / lesbian / LGBTQIA people are not … quite … human and so need their human-ness, and therefore their human rights, legislatively validated. At least, that seems to be the subtext of the 21st-century version of the “popular sovereignty” argument. Which, as in the case of black slaves in the 1850s, means those rights do not exist because their presumptive possessors are not … quite … fully human. Indeed, that is the “question-behind-the-question” in both cases: are slaves and LGBTQIA people fully human? Furthermore, as it was with slaves and “popular sovereignty”, so it is with sexual-orientation minorities: the ostensible necessity of voting in order to validate rights annihilates those rights. The act of voting vitiates that which is voted on.
The Declaration of Independence asserts that human rights are “unalienable”: we cannot give our rights away. Nor can we “give” them to others. They are not ours to give. And if we try to give them to others, we only prove that we do not believe in them.