Operation Wild Mayhem – Games To Collapse Empire

A piece by Julian Langer, whose blog can be found at https://ecorevoltblog.wordpress.com
————–

“Three minutes. This is it – ground zero. Would you like to say a few words to mark the occasion?” – Tyler Durden

 

As Tyler stood overlooking the heart of civilisation, he saw a failing empire on the brink of destruction. So he abandoned the consumer nesting instinct in favour of neo-luddite primal anarchy.

 

Whether or not time and this magical land called the future exist is something for another discussion – I’ll just say that I’m not convinced. But for the purposes of this discussion it seems fair to say that we are three minutes from ground zero. The important thing to remember, though, is that we are not awaiting the arrival of a nuclear bomb, though Trump, Putin and North Korea appear to be directing us towards that situation. No, we are the embodiment of a thermonuclear cataclysm, a world-ruining piece of machinery, three minutes away from ground zero. So I’ll say a few words to mark the occasion.

 

Tyler was wrong when he said that our fathers were our models for God; our fathers were merely meant to teach us how to navigate the body of God – the body of the metropolis, the state, the market, civilisation, the Leviathan. But Tyler was correct when he said that God hates us.

 

After all, what has God, civilisation, the state, the Leviathan, the stranglehold of capitalism brought us? The planet is in ecological ruins. We are plagued by droughts, hurricanes, wildfires intensified by arid conditions and desertification (brought on by agriculture and the deforestation it requires), oil spills, dehabitation, specicide, air unfit to breathe and mass extinction. Television feeds us the daily horror of militarism, bombs and politics, alongside (m)advertising, supposed reality shows and force-fed comedy spooned down our throats, as we sink deeper into the psychosis of this hyper-real Spectacle – the word of God, the great domesticator.

 

But in the words of Tyler Durden, “fuck damnation, man, fuck redemption! If we are God’s unwanted children, then so be it!”

 

If this culture wants us to live lives of death, I propose we rebel, by seeking (near-)Life experiences; that we lose every-Thing to be free to do anything.

 

How do we do this?

 

Well, if this culture is hell-bent on trying to domesticate us all, bringing some wildness to this culture seems the best routeless direction to go down.

 

With all their attempts to make living in this culture better, most activist projects have served only to make the planetary work machine more bearable for those closest to “radicals”. The revolutionary project is now largely a t-shirt or film. Social anarchists fill potholes and keep this culture going – acts of service to God.

 

On the other hand, subversive art-focused and psychology-focused milieus, such as the Situationists, Discordians, guerrilla ontologists and others who can generally be considered applicable to a post-anarchist practice, have succeeded in creating spaces to release the repressed flow of the wild within the body of civilisation. This type of practice is what Hakim Bey (Peter Lamborn Wilson) calls poetic terrorism, and it appears to offer eco-radicals a means of waging primal war against this world-ruining culture.

 

The Situationists focused on challenging the psycho-geography of this culture’s everyday normal life through mediums such as the situationist prank, which involved turning aspects of capitalism’s everyday narratives against itself. Discordians and guerrilla ontologists, inspired by the philosophy of Robert Anton Wilson, have often embraced the campaign of Operation Mindfuck, which has focused on art-based approaches like performance and guerrilla art, as well as vandalism, practical jokes, reality hacking and hoaxes.

 

Drawing from both of these in mapping out a loose route for our poetic terrorism, a campaign seems available to us, as agents against this culture, as a means of wild attack.

 

What does this look like?

 

Here are some games –

Modelling glue in the locks of shops, banks and vehicles of world-ruiners. Going into computer shops with fish-tank cleaner magnets and destroying the hard drives. Standing on streets with a free hugs sign and giving each person who hugs you a home-made pamphlet on the ecological crisis. Gluing folded-up pieces of paper into the coin slots of parking machines and into the card slots of ATMs. Searching for unlocked cars and turning on their lights to drain the battery. Mixing whipped cream, corn flakes, grapes, maple syrup, dishwasher fluid and warm water in large quantities in black bags and pouring the home-made vomit along streets, in high-traffic pedestrian areas. Standing on the edge and peeing into swimming pools. Leaving kitchen knives covered in fake blood in public places. Graffitiing apocalypse poems or surrealist slogans, like “I don’t want to be a wall anymore”, on walls, in chalk, in public spaces, in sight of onlookers. Putting itching powder on the toilet paper in public toilets and rolling it back up. Gluing public toilet seats down and putting glue on them. Wearing sandwich boards emblazoned with “the end was yesterday”. Filling wet paper towels with flour, wrapping them up, tying them with a rubber band and throwing the flour bombs. Reviving the Existential Negation Campaign. Destroying badger traps or committing other acts of ecotage/monkey-wrenching and turning them into works of art.

 

These are some of the directions for this project of wild mayhem. With every work of creative-destruction performed a communiqué should be left, as words to mark the occasion.

 

With this, eco-radical practice escapes the revolutionary model, which is tied to History (the narrative of this culture and its “progress”), without falling into renunciation, and becomes an iconoclastic endeavour, full of wild potential. Eco-radicals can challenge the faith of this culture’s believers in its ability to maintain everyday normality, in ways that are direct and signify a defiant rebellion, without appealing to ideologies and systems, which end up being incorporated into that which we hate.

 

As this operation is intentionally outside of History, there is no start or end date to it. This can be picked up by anyone and dropped as soon as they decide to stop. It has no governing body or even decentralised organisation behind it. It is a wild endeavour, anarchic, free.

 

So this is it – ground zero. Our operation will be one of wild mayhem. What is to come we cannot know for sure, but the present course only leads to ruin. Disrupting that course, disrupting its ruin of the world and encouraging God’s ruin, through wild poetic terrorism as primal war, seems like a pathway worth going down. If you want a future, I will turn again to Mr Durden – “In the world I see, you are stalking elk through the damp canyon forests around the ruins of Rockefeller Center. You’ll wear leather clothes that will last you the rest of your life. You’ll climb the wrist-thick kudzu vines that wrap the Sears Tower. And when you look down, you’ll see tiny figures pounding corn, laying strips of venison on the empty car pool lane of some abandoned superhighway.”

BURROUGHS ON HOW TO ESCAPE THE SOCIETY OF CONTROL

 

In “Electronic Revolution,” whence Gilles Deleuze got his idea of the “control society,” William S. Burroughs writes about how we can scramble the control society grammatically (see Ubuweb for the essay in full):
The aim of this project is to build up a language in which certain falsifications inherent in all existing western languages will be made incapable of formulation. The following falsifications to be deleted from the proposed language. (“ER” 33)
Why? As he puts it elsewhere,
There are certain formulas, word-locks, which will lock up a whole civilisation for a thousand years. (The Job 49)
To unscramble control syntax, the DNA precode of the language virus,
  1. delete the copula (is/are), i.e., disrupt fixed identities – YOU ARE WHAT YOU ARE NOT [Lacan]!
  2. replace definite articles (the) with indefinite articles (a/an), i.e., avoid reification — THERE EXIST MULTIPLICITIES [Badiou]!
  3. replace either/or with and, i.e., ignore the law of contradiction — JUXTAPOSE [Silliman]!
William S. Burroughs and Brion Gysin, “Rub Out the Word,” The Third Mind (Viking, 1978).

The IS OF IDENTITY. You are an animal. You are a body. Now whatever you may be you are not an “animal,” you are not a “body,” because these are verbal labels. The IS of identity always carries the assignment of permanent condition. To stay that way. All name calling presupposes the IS of identity.
This concept is unnecessary in a hieroglyphic language like ancient Egyptian and in fact frequently omitted. No need to say the sun IS in the sky, sun in sky suffices. The verb TO BE can easily be omitted from any languages. . . . (“ER” 33)
The IS of identity . . . was greatly reinforced by the customs and passport control that came in after World War I. Whatever you may be, you are not the verbal labels in your passport any more than you are the word “self.” So you must be prepared to prove at all times that you are what you are not. (ibid.)
THE DEFINITE ARTICLE THE. The contains the implication of one and only: THE God, THE universe, THE way, THE right, THE wrong. If there is another, then THAT universe, THAT way is no longer THE universe, THE way. The definite article THE will be deleted and the indefinite article A will take its place. (33-34)
Definite article THE contains the implications of no other. THE universe locks you in THE, and denies the possibility of any other. If other universes are possible, then the universe is no longer THE[;] it becomes A. (34)
THE WHOLE CONCEPT OF EITHER/OR. Right or wrong, physical or mental, true or false, the whole concept of or will be deleted from the language and replaced by juxtaposition, by AND. This is done to some extent in any pictorial language where two concepts stand literally side by side. (ibid.)
[A] contradictory command gains its force from the Aristotelian concept of either/or. To do everything, to do nothing, to have everything, to have nothing, to do it all, to do not any, to stay up, to stay down, to stay in, to stay out, to stay present, to stay absent. (ibid.)
These falsifications inherent in the English and other western alphabetical languages give the reactive mind commands their overwhelming force in these languages. […] The whole reactive mind can be in fact reduced to three little words — to be “THE.” That is to be what you are not, verbal formulations. (ibid.)
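
Burroughs’ three deletions are mechanical enough to automate. As a playful illustration only – a toy sketch of my own in Python, not anything Burroughs or the essay specifies – a crude word-level pass might look like this:

    import re

    # Toy "scramble" of the three rules: delete the copula, indefinitize
    # THE, and replace either/or with AND. A crude word-level pass, not
    # real grammar (it even ignores the a/an distinction).

    COPULA = re.compile(r"\b(am|is|are|was|were)\b", re.IGNORECASE)

    def scramble(text):
        text = COPULA.sub("", text)                                  # rule 1: delete is/are
        text = re.sub(r"\bthe\b", "a", text, flags=re.IGNORECASE)    # rule 2: THE -> A
        text = re.sub(r"\beither\b", "", text, flags=re.IGNORECASE)  # rule 3: either/or
        text = re.sub(r"\bor\b", "and", text, flags=re.IGNORECASE)   #         -> AND
        return re.sub(r"\s{2,}", " ", text).strip()                  # tidy the gaps

    print(scramble("The sun is in the sky, and it is either right or wrong."))
    # -> "a sun in a sky, and it right and wrong."

Sun in sky suffices.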
Charles Burns, “Burroughs” (1986), Adam Baumgold Gallery, New York, 2008

There are also his more familiar “lines of fracture” (to use Deleuze’s phrase): aleatory procedures like cut-ups and fold-ins — but also the grid and picture language — that fracture the “lines of association” by which “control systems” exert their monopoly (13, 12). These represent a “new way of thinking”:

The new way of thinking has nothing to do with logical thought. It is no oceanic organismal subconscious body thinking. It is precisely delineated by what is not. Not knowing what is and is not[,] knowing we know not. Like a moving film the flow of thought seems to be continuous while actually the thoughts flow stop change and flow again. At the point where one flow stops there is a split second hiatus [a cut]. The new way of thinking grows in this hiatus between thoughts. (The Job 91)

Burroughs’ “lines of association” foreshadow Deleuze’s “lines of sedimentation,” i.e., of “light” (visibility), “enunciation” (speech), “force” (government) and “subjectification” (self-government); the “new way,” those of “fracture” or “breakage” (events in Badiou’s sense or cuts in Burroughs’). (N.B. “Lines of subjectivation,” being “lines of escape” or excess, point beyond sedimentation across the breaks to new dispositifs [“apparatuses”].)

The upshot of such scrambles is threefold:

  1. they are writing itself: “All writing is in fact cut-ups. A collage of words read heard overhead [sic]. Use of scissors [just] renders the process explicit and subject to extension and variation” (“The Cut-Up Method of Brion Gysin”);
  2. they are democratic: “Scrambles is the democratic way” (“ER” 24) — or elsewhere: “Cut-ups are for everyone” (“CMBG”); and, in that they are disruptive,
  3. they are revolutionary:

He who opposes force with counterforce alone forms that which he opposes and is formed by it. History shows that when a system of government is overthrown by force a system in many respects similar will take its place. On the other hand he who does not resist force that enslaves and exterminates will be enslaved and exterminated. For revolution to effect basic changes in existing conditions three tactics are required: 1. Disrupt. 2. Attack. 3. Disappear. Look away. Ignore. Forget. These three tactics to be employed alternatively. (The Job 101)
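
The aleatory procedures are just as easy to simulate. Here is a minimal cut-up sketch of my own, under the obvious caveat that Burroughs used scissors and a page rather than a script, and that the fragment length and the shuffle are my assumptions:

    import random

    def cut_up(text, fragment_len=4, seed=None):
        """Crude cut-up: slice a text into short word-fragments, shuffle, rejoin."""
        words = text.split()
        fragments = [words[i:i + fragment_len]
                     for i in range(0, len(words), fragment_len)]
        random.Random(seed).shuffle(fragments)
        return " ".join(word for fragment in fragments for word in fragment)

    page = ("like a moving film the flow of thought seems to be continuous "
            "while actually the thoughts flow stop change and flow again")
    print(cut_up(page, seed=23))

Each run (or each seed) cuts a different hiatus between the thoughts.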

NIHILISM AS THE DEEPEST PROBLEM; ART AS THE BEST RESPONSE

Iain Thomson

 

How can art and poetry encourage existential trajectories that move beyond the nihilism of late-modernity? American philosopher Iain Thomson turns towards the German philosopher Martin Heidegger, in order to illustrate nihilism as our deepest historical problem and art as our best response, while establishing Heidegger’s insights into postmodernity and technology.


Heidegger, Art, and Postmodernity seeks to show that Heidegger is best understood not simply as another regressive or reactionary “antimodernist” (the way critics typically portray him) but, instead, as a potentially progressive and so still promising “postmodernist”—if I may be forgiven for trying to rehabilitate a term that has become so thoroughly “unfashionable” (or unzeitgemäße, as Nietzsche aptly put it, literally “not cut to the measure of the time”).  Sounding like some hipster conservative, Heidegger contends in Being and Time that a formerly hyper-trendy term like postmodern “can first become free in its positive possibilities only when the idle chatter covering it over has become ineffectual and the ‘common’ interest has died away.”  In other words, once everyone stops talking about “The Next Big Thing,” it becomes possible to understand what was so inspiring about it in the first place, letting us uncover those enduringly inspirational sources that tend to get obscured by the noise that engulfs a major trend during its heyday. [1]

It remains true and important, of course, that Heidegger is highly critical of modernity’s metaphysical foundations, including (1) its axiomatic positing of the Cartesian cogito as the epistemological foundation of intelligibility; (2) the ontological subject/object dualism generated by (1); (3) the fact/value dichotomy that follows from (1) & (2); and (4) the growing nihilism (or meaninglessness) that follows (in part) from (3), that is, from the belief that what matters most to us world-disclosing beings can be understood as “values” projected by human subjects onto an inherently-meaningless realm of objects.  I shall come back to this, and continue to find myself provoked and inspired by Heidegger’s phenomenological ways of undermining modern Cartesian “subjectivism.”  But my own work is even more concerned with Heidegger’s subsequent deconstruction of late-modern “enframing” (Gestell), that is, with his ontological critique of global technologization.  Heidegger’s critique of the nihilism of late-modern enframing develops out of his earlier critique of modern subjectivism but goes well beyond it.  As Heidegger, Art, and Postmodernity shows, enframing is “subjectivism squared”:  As modernity’s vaunted subject applies the technologies developed to control the objective realm back onto human subjects, this objectification of the subject is transforming us into just another intrinsically-meaningless resource to be optimized, ordered, and enhanced with maximal efficiency—whether cosmetically, psychopharmacologically, eugenically, aesthetically, educationally, or otherwise “technologically.”  (I shall come back to this point too.)

Taken together, Heidegger’s ontological critiques of modern subjectivism and late-modern enframing helped establish his work as an uncircumventable critical touchstone of twentieth century “continental” philosophy.  And I say this even while fully acknowledging that Heidegger deliberately and directly involved himself and his thinking with history’s greatest horror (greatest thus far, at least), thereby rendering his work even more controversial than it would have been anyway.  All of us would-be post-Heideggerians have to work through the significance of Heidegger’s deeply troubling Nazism for ourselves, as I have long argued.  Indeed, that critical task is new only to those who are new to Heidegger (or who have somehow managed to avoid it by bunkering down in untenable and so increasingly desperate forms of denial).  The critical task of working through and beyond Heidegger’s politics remains difficult nonetheless, because—as I showed in my first book, Heidegger on Ontotheology:  Technology and the Politics of Education—the most insightful and troubling aspects of Heidegger’s thinking are often closely intertwined.  Disentangling them thus requires both care and understanding, and so a capacity to tolerate ethical as well as philosophical ambiguity (traditional scholarly skills that seem to be growing rare in these days of one-sided outrage and indignation). [2]

Yet, despite Heidegger’s sustained critiques of modernity and late-modernity, he is not simply an anti-modernist (or even an anti-late-modernist).  To try to think against something, he repeatedly teaches, is to remain trapped within its underlying logic.  (The proud atheist often remains caught in the traditional logic of theism, for instance, insofar as both theist and atheist presume to know something outside the realm of possible knowledge.  Like Hölderlin, Heidegger himself ended up as a romantic polytheist, open to the relevant phenomena and so capable of different kinds of religious experience.)[3]  I recognize, of course, that many people find it difficult to muster the hermeneutic charity and patience that one needs in order to even be able to understand Heidegger.  But one of the deepest and most universal axioms of the hermeneutic tradition (and still shared from Gadamer to Davidson) is that the only way to understand another thinker is to presume that they make sense, that they are not just passing off meaningless nonsense as profundity.  (There is a detectably post-Christian wisdom in the hermeneutics tradition here.  “Thinking…loves”:  Love even thy enemy, as it were, because hatred can never understand.)[4]  When Heidegger is read charitably (rather than dismissed polemically), it becomes clear that his overarching goal is not only to undermine but also to transcend modernity.

By working to think modernity from its deepest Cartesian presuppositions to its ultimate late-modern conclusions, I believe Heidegger helps open up some paths that lead beyond those problematically nihilistic modern axioms mentioned above, paths that also allow us to preserve and build upon the most crucial and irreplaceable advances achieved in the modern age.[5]  As that suggests, we need to acknowledge—much less grudgingly than Heidegger himself ever did—that humanity has made undeniable and precious progress in the domains of technology, science, medicine, art, language, and even (I try to show, thus going well beyond Heidegger) in politics.  According to the perhaps heterodox, left-Heideggerian postmodernism I espouse (in the vicinity or aftermath of Dreyfus, Young, Rorty, Vattimo, Derrida, Agamben, and others), Heidegger’s central postmodern insight into the inexhaustible plurality of being serves best to justify and promote a robust liberal tolerance, a tolerance intolerant only of intolerance itself.  That may initially sound relativistic, but this is a tolerance with teeth, because ontological pluralism undermines all fundamentalist claims to have finally arrived at the one correct truth about how to live, let alone to seek to impose those final answers on others (as I have recently tried to show).[6]

“Heidegger’s central postmodern insight into the inexhaustible plurality of being serves best to justify and promote a robust liberal tolerance, a tolerance intolerant only of intolerance itself. ”

Questions concerning how best to understand the implications of Heidegger’s central insights remain complex and controversial, of course.  But I think it is clear—in light of Heidegger’s distinctive attempts to combine philosophy and poetry into a thinking that “twists free” of and so leads beyond modernity—that Heidegger was the original postmodern thinker.  Here I say “original” even while acknowledging that Heidegger’s postmodern vision drew crucial inspiration from many others (including the Romantic tradition, especially Hölderlin, Van Gogh, and Nietzsche, as well as from his creative readings of Presocratic philosophy).  For, as Heidegger, Art, and Postmodernity shows, Heideggerian “originality” (Ursprünglichkeit) is less concerned with being first than with remaining inspiring; that is, it is less about planting flags and more about continuing to provoke important insights in others.

Moreover, this view of Heidegger as the Ur-postmodernist gains a great deal of support from the fact that almost every single significant contemporary continental philosopher was profoundly influenced by Heidegger.  The list is long, because it includes not just more recognizably “modern” philosophers like Arendt, Bultmann, Gadamer, Habermas, Kojève, Marcuse, Merleau-Ponty, Sartre, Taylor, and Tillich, but also such “postmodern” thinkers as Agamben, Badiou, Baudrillard, Blanchot, Butler, Cavell, Derrida, Dreyfus, Foucault, Irigaray, Lacan, Levinas, Rancière, Rorty, Vattimo, and Žižek—all of whom take Heideggerian insights as fundamental philosophical points of departure.  Each of these thinkers seeks to move beyond these Heideggerian or post-Heideggerian starting points (more and less successfully, it must be said, but with lots of significant advances along the way).

Taken as a whole, one thing all of these major thinkers help confirm is that we think best with a hermeneutic phenomenologist like Heidegger only when we learn to read him “reticently”—that is, slowly, critically, carefully, thoughtfully, with reservations and alternatives left open rather than too quickly foreclosed.  If we can adopt a critical yet charitable approach to Heidegger’s views on the matters of deep concern that we continue to share with him, then we can find our own ways into “die Sache selbst,” the matters themselves at stake in the discussion.  Focusing on the issues that matter in this way can also help us avoid getting too bogged down in the interminable terminological disputes that too often turn out to be merely “semantic” misunderstandings or confusions of translation, noisy distortions in which those trained in different traditions and languages continue to unknowingly talk past one another.[7] Our hermeneutic goal should instead be genuine understanding and so the possibility of positive disagreement, that is, disagreements that generate real alternatives and so do not remain merely criticisms (let alone pseudo-criticisms, confused epiphenomena of unrecognized misunderstandings, distortions passed down through generations or sent out across other networks).  The modestly immodest goal of post-Heideggerian thinking, in sum, is to think the most important issues at issue in Heidegger’s thinking further than he himself ever did.  At the very least, such attempts can succeed in developing these enduringly-important issues somewhat differently, in our own directions and inflections, in light of our own contemporary concerns and particular ways of understanding what matters most to our time and generations.

Heidegger’s provocative later suggestion about how best to develop the deepest matters at stake in the thinking of another can be helpful here:  We need to learn “to think the unthought.”  Thinking the unthought of another thinker means creatively disclosing the deepest insights on the basis of which that thinker thought.  When we think their unthought, we uncover some of the ontological “background” which rarely finds its way into the forefront of a thinker’s thinking (as Dreyfus nicely put it, drawing on the Gestalt psychology Heidegger drew on himself).  Thinking the unthought does mean seeing something otherwise unseen or hearing something otherwise unheard, but such hermeneutic “clairvoyance” (as Derrida provocatively dubbed it) should not presume that it has successfully isolated the one true core of another’s thinking (a mistake Heidegger himself too often committed).[8]  But nor should we concede that “death of the author” thesis which presumes that there is no deep background even in the work of our greatest thinkers.  We post-Heideggerian postmodernists should just presume, instead, that any such deep background will be plural rather than singular, and so irreducible to any one over-arching interpretive framework.  In that humbler hermeneutic spirit of ontological pluralism, we can then set out to develop at least some of a thinker’s best insights and deepest philosophical motivations beyond whatever points that thinker was able to take them.[9]

“We need to learn ‘to think the unthought’ ”

In such a spirit, my own work focuses primarily on some of the interconnected issues of enduring concern that I think we continue to share with Heidegger, including (1) his deconstructive critique of Western metaphysics as ontotheology; (2) the ways in which the ontotheology underlying our own late-modern age generates troublingly nihilistic effects in our ongoing technologization of our worlds and ourselves; (3) Heidegger’s alternative vision of learning to transcend such technological nihilism through ontological education, that is, an education centered on the “perfectionist” task of “becoming what we are” in order to come into our own as human beings leading meaningful lives.  My interest in those interconnected issues (of ontotheology, technology, and education) led me to try to explicate (4) the most compelling phenomenological and hermeneutic reasons behind the enduring appeal of Heideggerian and post-Heideggerian visions of postmodernity; and so also (5) the continuing relevance of art and poetry in helping us learn to understand being in some enduringly meaningful, postmodern ways.  The point of this postmodernism, to put it simply, is to help us improve crucial aspects of our understanding of the being of our worlds, ourselves, and each other, as well as of the myriad other entities who populate and shape our interconnected worlds.  (It is, in other words, a continuation of the struggle against nihilism, to which we will turn next.)

Beneath or behind it all, I have also dedicated much of the last decade to working through some of the philosophical issues that arise, directly and indirectly, from the dramatic collision between Heidegger’s life and thinking (as I have been working on a philosophical biography of Heidegger). I have thus taken up, for example, Heidegger’s views on the nature and meaning of love (which prove surprisingly insightful, once again, when approached with critical charity), while also continuing to participate in that ongoing re-examination of the significance of Heidegger’s early commitment to and subsequent break with Nazism, as well as the more recently revealed extent of his ignorant anti-Semitism (fraught and difficult topics).

In what follows I want to focus on the role that art—understood as poiêsis or ontological disclosure—can play in helping us learn to live meaningful lives.  So I shall try briefly to explain some of my thoughts on nihilism as our deepest historical problem and art as our best response.  How can art and poetry encourage existential trajectories that move beyond the nihilism of late-modernity?  Let me take up this question while acknowledging the apparent irony of doing so in this technological medium.  In fact, this need not be ironic at all, given my view that we have to find ways to use technologies against technologization—learning to use technologies without being used by them, as it were—by employing particular technologies in ways that help us uncover and transcend (rather than thoughtlessly reinforce) the nihilistic technologization at work within our late-modern age.  What Heidegger helps us learn to undermine and transcend, in other words, is not technology but rather nihilistic technologization.  By “nihilistic technologization,” I mean the self-fulfilling ontological pre-understanding of being that reduces all things, ourselves included, to the status of intrinsically-meaningless stuff standing by to be optimized as efficiently and flexibly as possible.  (That, of course, will take some explaining.)

“What Heidegger helps us learn to undermine and transcend, in other words, is not technology but rather nihilistic technologization.”

To develop Heidegger’s thinking on technological nihilism beyond the point he himself left it, we need both (1) to learn to recognize the undertow of technologization’s drift toward nihilistic optimization and yet still (2) find ways to use particular technologies (including word processing software, synthesizers, Facebook, on-line philosophy ‘zines, and all the other irreversibly-proliferating technological media of our world) in ways that help move us beyond that nihilistic technologization rather than merely reinforcing it.  Heidegger, Art, and Postmodernity suggests that one of the best ways to do this is by cultivating a receptivity to that which overflows and so partly escapes all the willful projects in which the modern subject understands itself as the source of what matters most in the world (as the foundation of all “values,” all “normativity,” and other such widespread but deeply problematic, modern philosophical ideas).  We think Heidegger’s unthought when we disclose this postmodern understanding of being, learning to understand and so encounter being not as a modern domain of objects for subjects to master and control, nor as a late-modern “standing reserve” of resources to be efficiently optimized, but instead as that which continues to both inform and exceed our every way of making sense of ourselves and our worlds.  By learning to cultivate a phenomenological receptivity to this postmodern understanding of being, we can address the nihilism of our technological understanding of being by responding directly to its ontotheological foundations.

Heidegger, Art, and Postmodernity begins with the words, “What does Heidegger mean by ontotheology—and why should we care?”  Here is a greatly simplified answer:  If, like Parmenides, we think of all intelligible reality as a sphere, then ontotheology is the attempt to grasp this sphere from the inside-out and the outside-in at the same time.  More precisely, ontotheology is Heidegger’s name for the attempt to stabilize the entire intelligible order (or the whole space of meaning) by grasping both the innermost “ontological” core of what-is and its outermost “theological” expression, then linking these innermost and outermost “grounds” together into a single, doubly-foundational, “ontotheological” understanding of the being of what-is.  An ontotheology, when it works (by uncovering and disseminating those grounds beneath or beyond which no one else can reach, for a time), establishes the meaning of being that “doubly grounds” an historical age.  Such ontotheologies shape and transform Western history’s guiding sense of what being “is” (by telling us what “Isness” itself is), and since everything is, they end up shaping and reshaping our understanding of everything else.  (Heidegger’s notorious antipathy to metaphysics thus obscures the pride of place he in fact assigns to ontotheologies in the transformation and stabilization of history itself.)[10]

One of the crucial points to grasp here is that Heidegger’s critique of technology follows directly from his understanding of ontotheology.  Indeed, the two are so intimately connected that his critique of technology cannot really be understood apart from his view of ontotheology, a fact even scholars were slow to recognize (reminding us that Heidegger still remains, in many ways, our contemporary).  As Heidegger on Ontotheology shows, one of Heidegger’s deepest but most often overlooked insights is that our late-modern, Nietzschean ontotheology generates the nihilistic technologization in whose currents we remain caught.  The deepest problem with this “technologization” of reality is the nihilistic understanding of being that underlies and drives it:  Nietzsche’s ontotheological understanding of the being of entities as “eternally recurring will-to-power” dissolves being into nothing but “sovereign becoming,” an endless circulation of forces, and in so doing, it denies that things have any inherent nature, any genuine meaning capable of resisting this slide into nihilism (any qualitative worth, for example, that cannot be quantified and represented in terms of mere “values,” so that nothing is invaluable—in the full polysemy of that crucial phrase).[11]

Heidegger, Art, and Postmodernity explains Heidegger’s radical philosophical challenge to the deepest presuppositions of modernity and his attempt to articulate a genuinely meaningful post-modern alternative by drawing on key insights from art and poetry, especially insights into the polysemic nature of being and the consequent importance of creative world disclosure (as contrasted with the willful, subjective imposition of “value”). Heidegger’s view is that even great late-modern philosophers like Nietzsche, Marx, and Freud remain trapped within unrecognized modern presuppositions, including the nihilistic view that all meaning is projected onto or infused into an inherently-meaningless world of objects through the subject’s conceptual and material labors (both conscious and unconscious).  These unnoticed metaphysical presuppositions undermine their otherwise important attempts to forge paths into a more meaningful future.  Drawing on Kierkegaard, Hölderlin, Van Gogh, and others, Heidegger teaches that more genuinely enduring meaning cannot come from the subject imposing its values on the world but, instead, only from a poetic openness to those meanings that precede and exceed our own subjectivity.  Such meaningful encounters (or “events”) require us to creatively and responsibly disclose their significance, unfolding their meaning throughout the lives they can thus come to transform, guide, and confer meaning on.

“Drawing on Kierkegaard, Hölderlin, Van Gogh, and others, Heidegger teaches that more genuinely enduring meaning cannot come from the subject imposing its values on the world but, instead, only from a poetic openness to those meanings that precede and exceed our own subjectivity.”

One of the central theses of Heidegger, Art, and Postmodernity is that this crucial difference between imposing and disclosing—or between technological imposition and poetic disclosure—is the crucial distinction between the meaninglessness of our technological understanding of being and those meaning-full encounters that a postmodern understanding of ourselves and our worlds helps give rise to, nurture, and encourage.[12]  Genuinely-enduring, meaningful events, the kinds around which we can build fulfilling lives, do not arise from imposing our wills on the world (as in the modern view which, as Kierkegaard already taught, turns us into sovereign rulers over a land of nothing, where all meaning is fragile because it comes from us, from the groundless voluntarism of our own wills, and so can be rescinded as easily as it was projected).  Genuinely enduring meanings emerge, instead, from learning to creatively disclose those often inchoate glimmers of meaning that exist at least partly independently of our preexisting projects and designs, so that disclosing their significance creatively and responsibly helps teach us to partake in and serve something larger than ourselves (with all the risk and reward that inevitably entails).

In short, a truly postmodern understanding requires us to recognize that, when approached with a poetic openness and respect, things push back against us, resisting our wills and so making subtle but undeniable claims on us.  We need to acknowledge and respond creatively and responsibly to these claims if we do not want to deny the source of genuine meaning in the world.  For, only those meanings which are at least partly independent of us and so not entirely within our control—meanings not simply up to us human beings to bestow and rescind at will—can provide us with the kind of touchstones around which we can build enduringly meaningful lives (and loves).  Heidegger sometimes describes our encounter with these more genuinely meaning-full meanings as an “event of enowning” (Ereignis), thus designating those profoundly significant events in which we come into our own as world-disclosers by creatively enabling things to come into their own, just as Michelangelo came into his own as a sculptor by creatively responding to the veins and fissures in a particularly rich piece of marble so as to bring forth his “David,” just as a woodworker comes into her own as a woodworker by learning to respond to the subtle weight and grain of each individual piece of wood, and just as teachers come into their own as teachers by learning to recognize, cultivate, and so help develop the particular talents and capacities of individual students.

This poetic openness to that which pushes back against our preexisting plans and designs is what Heidegger, Art, and Postmodernity calls a sensitivity to the texture of the text, that subtle but dynamic meaning-fullness which is “all around us” phenomenologically, as Heidegger writes.[13]  The current of technologization tends to sweep right past the texture of the texts all around us, and can even threaten to render us oblivious to it (most plausibly, if our resurgent efforts at genetic enhancement inadvertently eliminate our defining capacity for creative world-disclosure).  When we learn to recognize the ontohistorical current feeding technology, however, we can also learn to resist its nihilistic erosion of all inherent meaning, and so begin to develop a “free relation to technology” in which it becomes possible to thoughtfully use technologies against nihilistic technologization, as we do (for example) when we use a camera, microscope, telescope, or even glasses creatively to help bring out something there in the world that we might not otherwise have seen, a synthesizer or computer to make a new kind of music that helps us develop our sense of what genuinely matters to us, or when we use a word processor or even the Internet to help bring out our sense of what is really there in the issues and texts that most concern us.

In my view, the role human beings play in the disclosure and transformation of our basic sense of reality thus occupies a middle ground between the poles of voluntaristic constructivism and quietistic fatalism.  Heidegger is primarily concerned to combat the former, “subjectivistic” error—that is, the error of thinking that human subjects are the sole source of meaning and so can reshape our understanding of being at will—because that is the dangerous error toward which our modern and late-modern ways of understanding being incline us.  But this has led to some widespread misunderstandings of his view.  Perhaps most importantly, Heidegger’s oft-quoted line from his famous Der Spiegel interview, “Only another God can save us,” is probably the most widely misunderstood sentence in his entire work.  By another “God,” Heidegger does not mean some otherworldly creator or transcendent agent but, instead, another understanding of being.  He means, quite specifically, a post-metaphysical, post-epochal understanding of “the being of entities” in terms of “being as such,” to use his philosophical terms of art.  Heidegger himself equates his “last God” with a postmodern understanding of being, for example, when he poses the question “as to whether being will once more be capable of a God, [that is,] as to whether the essence of the truth of being will make a more primordial claim upon the essence of humanity.”[14]  Here Heidegger asks whether our current understanding of being is capable of being led beyond itself, of giving rise to other world-disclosive events that would allow human beings to understand the being of entities neither as modern “objects” to be mastered and controlled, nor as late-modern, inherently-meaningless “resources” standing by for optimization, but instead as things that always mean more than we are capable of expressing conceptually (and so fixing once and for all in an ontotheology).  That the “God” needed to “save us” is a postmodern understanding of being is one of the central theses of Heidegger, Art, and Postmodernity.

“‘Only another God can save us,’ is probably the most widely misunderstood sentence in his entire work.”

Rather than despairing of the possibility of such an inherently pluralistic, postmodern understanding of being ever arriving, moreover, Heidegger thought it was already here, embodied in the “futural” artwork of artists like Hölderlin and Van Gogh, simply needing to be cultivated and disseminated in myriad forms (clearly not limited to the domain of art, pace Badiou) in order to “save” the ontologically abundant “earth” (with its apparently inexhaustible plurality of inchoately meaningful possibilities) from the devastation of technological obliviousness.  When Heidegger stresses that thinking is at best “preparatory” (vorbereitend), what he means is that great thinkers and poets “go ahead and make ready” (im voraus bereiten), that is, that they are ambassadors, emissaries, or envoys of the future, first postmodern arrivals who, like Van Gogh, disseminate and so prepare for this postmodern future with “the unobtrusive sowing of sowers” (as Heidegger nicely put it, drawing a deep and illuminating parallel between his teaching and Van Gogh’s painting which I seek to explain in Heidegger, Art, and Postmodernity).  As this suggests, new historical ages are not simply dispensed by some super-human agent to a passively awaiting humanity.  Rather, actively vigilant artists and particularly receptive thinkers pick up on broader tendencies happening partly independently of their own wills (in the world around us or at the margins of our cultures, for example), then make these insights central through their artworks and philosophies.

For good and for ill, then, Heidegger is a profoundly hopeful philosopher, not some teacher of despair and resignation, as he is often polemically portrayed.  As I began by saying, he is not an anti-modern who exhausts himself critiquing modernity but rather the original postmodern philosopher, a thinker who dedicates himself to disseminating a postmodern understanding of being in which he places his hope for the future.  I continue to find myself inspired by Heidegger’s poetic thinking of a postmodern understanding of being (as well as by many of those Heidegger helped inspire in turn), especially in light of his provocative proclamations that the philosophical lessons of art and poetry’s distinctive ways of disclosing the world were needed to help us find ways through and beyond the growing noontime darkness of technological nihilism.  (Perhaps such concerns partly reflect middle-age and its attendant anxieties, but if so, then I have been partly middle-aged my whole life, and suspect that many of us feel similarly, as if we were all living in a time in the middle or between ages, a historical period of radical change and transition—or at least we, some of us, still hope.)

 

References
[1] That hipster conservativism sounds rather paradoxical does not make it false—just falsely totalizing in this case: What is false is imagining that only latecomers can truly understand something. As anyone who has ever been there at the beginning of something important will probably recognize, first-comers often understand something too, and can do so at least as deeply (if not often as cogently) as those who come later. Rather than define “understand” more cognitively than Heidegger himself did, let us just admit that we need both: Early arrivals help create and draw our attention to potentially important and inspiring phenomena; late-comers remain crucial to preserving what remains inspiring beneath traditions whose day in the sun might otherwise have come and gone. That we need both “creators” and “preservers” is something Heidegger himself recognized by the time he wrote the magnum opus of his middle period, “The Origin of the Work of Art” (1934-35), which goes so far as to posit creators and preservers as the two equally-important sides of the work of art. For a detailed discussion of the creative role of such interpretive “preservers,” see Thomson, Heidegger, Art, and Postmodernity (Cambridge University Press, 2011), ch. 3. (An earlier version is available on-line as Thomson, “Heidegger’s Aesthetics,” Stanford Encyclopedia of Philosophy, <http://plato.stanford.edu/entries/heidegger-aesthetics/>.)
[2] See Thomson, Heidegger on Ontotheology: Technology and the Politics of Education (Cambridge University Press, 2005), esp. chs. 3-4.
[3] I discuss Heidegger’s provocative views on polytheism, atheism, and on the phenomenological relation between humanity and “the divine” in Heidegger, Art, and Postmodernity (esp. chs. 1 and 6); and in Thomson, “The Nothing (das Nichts),” in Mark Wrathall, ed., The Heidegger Lexicon (Cambridge University Press, forthcoming). For more on the perhaps surprising appeal of Heidegger’s romantic polytheism, see also Hubert Dreyfus and Sean Kelly, All Things Shining (New York: Free Press, 2011).
[4] On this point, see Thomson, “Heidegger’s Nazism in the Light of his early Black Notebooks,” in Alfred Denker and Holger Zaborowski, eds., Zur Hermeneutik der ‘Schwarzen Hefte’: Heidegger Jahrbuch 10 (Freiburg: Karl Alber, forthcoming).
[5] This hermeneutics of philosophical “fulfillment” (Vollendung)—or what Heidegger, Art, and Postmodernity also calls the strategy of hypertrophic deconstruction—is premised on the insight that, where the deepest historical trends are concerned, the only way out is through.
[6] See Thomson, “Heideggerian Phenomenology and the Postmetaphysical Politics of Ontological Pluralism,” in S. West Gurley and Geoffrey Pfeifer, eds, Phenomenology and the Political (Rowman & Littlefield, forthcoming October 2016).
[7] See Thomson, “In the Future Philosophy will be neither Continental nor Analytic but Synthetic: Toward a Promiscuous Miscegenation of (All) Philosophical Traditions and Styles,” Southern Journal of Philosophy 50:2 (2012), pp. 191-205.
[8] On this still metaphysical mistake, see Heidegger, Art, and Postmodernity, ch. 3.
[9] Such limits inevitably follow from our universal condition of existential “finitude,” and include personal limitations of time and perspective to which we can remain insensitive, whether out of ignorance or pride. Obviously, Heidegger’s personal limitations have become increasingly glaring in the four decades since his death, with the ongoing publication of his thinking. However much distance we might like to put between Heidegger’s perspective and our own, the fact that all of our perspectives remain limited (in ways more and less visible to us) may help to motivate the open-minded, hermeneutic humility that we still need (and need all the more) in order to approach Heidegger’s work in ways that remain charitable as well as critical, so that we can both learn something and go further ourselves.
[10] For more on the way the great metaphysical ontotheologies temporarily dam the flow of historicity by grasping the innermost core of reality and its outermost expression and linking these dual perspectives together into a single “ontotheological” account, see Heidegger on Ontotheology, ch. 1.
[11] Heidegger on Ontotheology thus seeks to develop and defend the core of Heidegger’s “reductive yet revealing” and so rightly controversial reading of Nietzsche as the unrecognized ontotheologist of our late-modern age of technologization. For a summation of that view, see ch. 1 of Heidegger, Art, and Postmodernity. On the crucial polysemy of the nothing, see Heidegger, Art, and Postmodernity, ch. 3.
[12] On the importance of this difference between imposing and disclosing, see also Thomson, “Rethinking Education after Heidegger: Teaching Learning as Ontological Response-Ability,” Educational Philosophy and Theory, 48:8 (2016), pp. 846-861.
[13] “The texture of the text” is also the seditious way in which Heidegger, Art, and Postmodernity tries to re-Heideggerize Derrida’s famous, anti-Heideggerian aperçu: “There is nothing outside the text.”
[14] See Heidegger, Off the Beaten Track, Julian Young and Kenneth Haynes, eds. and trans. (Cambridge: Cambridge University Press, 2002), p. 85 (Holzwege, Gesamtausgabe vol. 5 [Frankfurt: Klostermann, 1977], p. 112). Here the “truth of being” is shorthand for the way an understanding of “the being of entities” (that is, a metaphysical understanding of “the truth concerning entities as such and as a whole” or, in a word, an ontotheology) works to anchor and shape the unfolding of an historical constellation of intelligibility. Its “essence” is that apparently inexhaustible source of historical intelligibility the later Heidegger calls “being as such,” an actively a-lêtheiac (that is, ontologically “dis-closive”) Ur-phenomenon metaphysics eclipses with its ontotheological fixation on finally determining “the being of entities.” (That “being as such” lends itself to a series of different historical understandings of “the being of entities” rightly suggests that it exceeds every ontotheological understanding of the being of entities.) The “essence of humanity” refers to Dasein’s definitive world-disclosive ability to give being as such a place to “be” (i.e., to happen or take place); it refers, that is, to the poietic and maieutic activities by which human beings creatively disclose the inconspicuous and inchoate hints offered us by “the earth” and so help bring genuine meanings into the light of the world.

Iain D. Thomson is a Professor of Philosophy at the University of New Mexico, where he also serves as Director of Graduate Studies. He is the author of Heidegger, Art, and Postmodernity (CUP, 2011) and Heidegger on Ontotheology: Technology and the Politics of Education (CUP, 2005), and his articles have appeared in numerous scholarly journals, essay collections and reference works.

10 MINUTES WITH RAYMOND GEUSS ON NIHILISM

in conversation with Raymond Geuss

Is nihilism the most important philosophical problem of our present? Philosopher Raymond Geuss talks to four by three about our misconception of nihilism, outlining three ways of questioning it, while asking whether nihilism is a philosophical or a historical problem and whether we are truly nihilists or might simply be confused.

Raymond Geuss is Emeritus Professor in the Faculty of Philosophy at the University of Cambridge and works in the general areas of political philosophy and the history of Continental Philosophy. His most recent publications include, but are not limited to, Politics and the Imagination (2010) and A World Without Why (2014).

12 FRAGMENTS ON NIHILISM


Eugene Thacker


Are you a nihilist, and should you be one? Philosopher Eugene Thacker turns to Friedrich Nietzsche to break down nihilism into fragments of insights, questions, possible contradictions and thought-provoking ruminations, while asking whether nihilism can fulfill itself or always ends up undermining itself.


1. What follows came out of an event held at The New School in the spring of 2015. It was an event on nihilism (strange as that sounds). I admit I had sort of gotten roped into doing it. I blame it on the organizers. An espresso, some good conversation, a few laughs, and there I was. Initially they tell me they’re planning an event about nihilism in relation to politics and the Middle East. I tell them I don’t really have anything to say about the Middle East – or for that matter, about politics – and about nihilism, isn’t the best policy to say nothing? But they say I won’t have to prepare anything, I can just show up, and it’s conveniently after I teach, just a block away, and there’s dinner afterwards…How can I say no?

How can I say no…

 

2. Though Nietzsche’s late notebooks contain many insightful comments on nihilism, one of my favorite quotes of his comes from his early essay “On Truth and Lies in an Extra-Moral Sense.” I know this essay is, in many ways, over-wrought and over-taught. But I never tire of its opening passage, which reads:

In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the haughtiest and most mendacious minute of “world history” – yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die.

One might invent such a fable and still not have illustrated sufficiently how wretched, how shadowy and flighty, how aimless and arbitrary, the human intellect appears in nature. There have been eternities when it did not exist; and when it is done for again, nothing will have happened. For this intellect has no further mission that would lead beyond human life. It is human, rather, and only its owner and producer gives it such importance, as if the world pivoted around it.

The passage evokes a kind of impersonal awe, a cold rationalism, a null-state. In the late 1940s, the Japanese philosopher Keiji Nishitani would summarize Nietzsche’s fable in different terms. “The anthropomorphic view of the world,” he writes, “according to which the intention or will of someone lies behind events in the external world, has been totally refuted by science. Nietzsche wanted to erase the last vestiges of this anthropomorphism by applying the critique to the inner world as well.”

Both Nietzsche and Nishitani point to the horizon of nihilism – the granularity of the human.

 

3. At the core of nihilism for Nietzsche is a two-fold movement: that a culture’s highest values devalue themselves, and that there is nothing to replace them. And so an abyss opens up. God is dead, leaving a structural vacuum, an empty throne, an empty tomb, adrift in empty space.

But we should also remember that, when Zarathustra comes down from the mountain to make his proclamation, no one hears him. They think he’s just the opening band. They’re all waiting for the tight-rope walker’s performance, which is, of course, way more interesting. Is nihilism melodrama or is it slapstick? Or something in-between, a tragic-comedy?

 

4. I’ve been emailing with a colleague about various things, including our shared interest in the concepts of refusal, renunciation, and resignation. I mention I’m finishing a book called Infinite Resignation. He replies that there is surprisingly little on resignation as a philosophical concept. The only thing he finds is a book evocatively titled The Art of Resignation – only to find that it’s a self-help book about how to quit your job.

I laugh, but secretly wonder if I should read it.

 

5. We do not live – we are lived. What would a philosophy have to be in order to begin from this, rather than arriving at it?

 

6. “Are you a nihilist?”

“Not as much as I should be.”

 

7. We do Nietzsche a dis-service if we credit him for the death of God. He just happened to be at the scene of the crime, and found the corpse. Actually, it wasn’t even murder – it was a suicide. But how does God commit suicide?

 

8. By a process I do not understand, scientists estimate that the planet is capable of sustaining a population of around 1.2 billion – though the current population is upwards of 7 billion. Bleakness on this scale is difficult to believe, even for a nihilist.

 

9. I find Nietzsche’s notebooks from the 1880s to be a fascinating space of experimentation concerning the problem of nihilism. The upshot of his many notes is that the way beyond nihilism is through nihilism.

But along the way he leaps and falls, skips and stumbles. He is by turns analytical and careless; he uses argumentation and then bad jokes; he poses questions without answers and problems without solutions; and he creates typologies, an entire bestiary of negation: radical nihilism, perfect nihilism, complete or incomplete nihilism, active or passive nihilism, Romantic nihilism, European nihilism, and so on…

Nietzsche seems so acutely aware of the fecundity of nihilism.

10. It’s difficult to be a nihilist all the way – eventually nihilism must, by definition, undermine itself. Or fulfill itself.

 

11. Around 1885 Nietzsche writes in his notebook: “The opposition is dawning between the world we revere and the world in which we live – which we are. It remains for us to abolish either our reverence or ourselves.”

 

12. If we are indeed living in the “anthropocene,” it would seem sensible to concoct forms of discrimination that are adequate to it. Perhaps we should cast off all forms of racism, sexism, classism, nationalism, and the like, in favor of a new kind of discrimination – that of a species-ism. A disgust of the species, which we ourselves have been clever enough to think. A species-specific loathing that might also be the pinnacle of the species. I spite, therefore I am. But is this still too helpful, put forth with too much good conscience?

The weariness of faith. The incredulity of facts.

 

Eugene Thacker is Professor at The New School in New York City. He is the author of several books, including In the Dust of This Planet (Zero Books, 2011) and Cosmic Pessimism (Univocal, 2015).

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
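The article doesn’t describe Nvidia’s system in any technical detail; the general recipe it gestures at is usually called behavioral cloning, or end-to-end imitation learning. Below is a minimal sketch of that idea – not Nvidia’s implementation – assuming PyTorch is available; all data, shapes, and hyperparameters are invented stand-ins:

```python
# Behavioral cloning, minimally: learn to imitate a human driver.
# All data, shapes, and hyperparameters here are illustrative stand-ins.
import torch
import torch.nn as nn

# Pretend log of a human driving session: camera frames + steering angles.
frames = torch.rand(200, 3, 66, 200)      # 200 RGB camera frames
angles = torch.rand(200, 1) * 2 - 1       # steering commands in [-1, 1]

policy = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(100), nn.ReLU(),        # infers its input size on first use
    nn.Linear(100, 1),                    # predicted steering command
)
policy(frames[:1])                        # dry run to initialize the lazy layer

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(5):
    prediction = policy(frames)
    loss = loss_fn(prediction, angles)    # how far from the human's choices?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Nothing in that loop encodes a single driving rule; the network absorbs whatever regularities connect frames to steering, which is exactly why its eventual decisions are hard to interrogate.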

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

The artist Adam Ferriss created images for this article using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.


The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
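As a concrete illustration of those mechanics – a toy model, with data and sizes invented for the example, nothing like the scale of a real system – here is a two-layer network in plain Python, with a forward pass through the layers and a back-propagation step that nudges the weights toward a desired output:

```python
# A tiny two-layer neural network trained by back-propagation.
# Everything here is an illustrative stand-in; real networks have
# thousands of units per layer and many more layers.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 3))                    # four inputs (e.g. pixel intensities)
y = np.array([[0.], [1.], [1.], [0.]])    # the outputs we want to produce

W1 = rng.standard_normal((3, 5))          # layer 1: 3 inputs -> 5 neurons
W2 = rng.standard_normal((5, 1))          # layer 2: 5 neurons -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer computes on the previous layer's outputs.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass (back-propagation): push the error back through the
    # layers and tweak each weight slightly to reduce it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
```

Even at this toy scale, the “reasoning” lives in the numeric values of W1 and W2, which is why inspecting a trained network tells you so little.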

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
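Both Deep Dream and Yosinski-style probes come down to the same trick: run gradient ascent on the input image rather than on the network’s weights. A minimal sketch, assuming PyTorch and torchvision are installed (API details vary across versions; the layer index and step count are arbitrary choices, not values from either project):

```python
# Gradient ascent on an image: nudge the pixels to excite a chosen layer.
import torch
import torchvision.models as models

# A pretrained image classifier; weights download on first use.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[10]                  # an arbitrary mid-level layer

captured = {}
layer.register_forward_hook(lambda m, i, o: captured.update(value=o))

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    model(image)
    act = captured["value"]
    # Deep-Dream style: amplify the whole layer. For a Yosinski-style
    # single-neuron probe, target one channel instead, e.g. act[0, 42].mean().
    loss = -act.norm()                      # ascend by minimizing the negative
    loss.backward()
    optimizer.step()
```

After enough steps, `image` drifts toward whatever patterns most excite that layer – beaks, fur, pagodas – which is what makes these visualizations a window, however narrow, into the network’s features.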

This early artificial neural network, at the Cornell Aeronautical Laboratory in Buffalo, New York, circa 1960, processed inputs from light sensors. Ferriss was inspired to run an image of Cornell’s artificial neural network through Deep Dream.

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
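Guestrin’s actual systems aren’t described here in enough detail to reproduce, but the underlying move – perturb an input, watch how the black box’s output shifts, and fit a simple local model whose weights single out the influential words – can be sketched in a few lines. Everything below, including the stand-in black_box scorer, is invented for illustration:

```python
# Explanation by local approximation: which words drive the score?
import numpy as np

def black_box(texts):
    """Stand-in for an opaque classifier scoring messages for 'suspicion'.
    In practice this would be a trained model we cannot see inside."""
    cues = {"detonate": 2.0, "transfer": 1.0, "meeting": 0.5}
    return np.array([sum(cues.get(w, 0.0) for w in t.split()) for t in texts])

def explain(text, n_samples=500, seed=0):
    """Randomly drop words, re-query the black box, and fit a linear model:
    each word's weight estimates its contribution to the score."""
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))   # keep/drop flags
    variants = [" ".join(w for w, keep in zip(words, row) if keep)
                for row in masks]
    scores = black_box(variants)
    weights, *_ = np.linalg.lstsq(masks.astype(float), scores, rcond=None)
    return sorted(zip(words, weights), key=lambda p: -abs(p[1]))

print(explain("please transfer funds before the meeting"))
```

The printout ranks the words by influence – the kind of keyword highlighting the article describes – while saying nothing about how the black box itself works, which is precisely the sense in which such explanations are always simplified.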


One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Choosing the Proper Tool for the Task

Assessing Your Encryption Options

So, you’ve decided to encrypt your communications. Great! But which tools are the best? There are several options available, and your comrade’s favorite may not be the best for you. Each option has pros and cons, some of which may be deal breakers—or selling points!—for you or your intended recipient. How, then, do you decide which tools and services will make sure your secrets stay between you and the person you’re sharing them with, at least while they’re in transit?

Keep in mind that you don’t necessarily need the same tool for every situation; you can choose the right one for each circumstance. There are many variables that could affect what constitutes the “correct” tool for each situation, and this guide can’t possibly cover all of them. But knowing a little more about what options are available, and how they work, will help you make better-informed decisions.


Signal

Pros: Signal is free, open source, and easy to use, and it features a desktop app, password protection on Android, and secure group messages. It’s also maintained by a politically conscious nonprofit organization, and offers the original implementation of an encryption protocol used by several other tools,1 ephemeral (disappearing) messages, control over notification content, and sent/read receipts – plus it can encrypt calls, and offers a call-and-response two-word authentication phrase so you can verify your call isn’t being tampered with.

Cons: Signal offers no password protection for iPhone, and being maintained by a small team means fixes are sometimes on a slow timeline. Your Signal user ID is your phone number, you may have to talk your friends into using the app, and it sometimes suffers from spotty message delivery.

Signal certainly has its problems, but using it won’t make you LESS secure. It’s worth noting that sometimes Signal messages never reach their endpoint. This glitch has become increasingly rare, but Signal may still not be the best tool for interpersonal relationship communications when emotions are heightened!2 One of Signal’s primary problems is failure to recognize when a message’s recipient is no longer using Signal. This can result in misunderstandings ranging from hilarious to relationship-ending. Additionally, Signal for Desktop is a Chrome plugin; for some, this is a selling point, for others, a deal breaker. Signal for Mac doesn’t offer encryption at rest,3 which means that unless you’ve turned it on as a default for your computer, your saved data isn’t encrypted. It’s also important to know that while Signal does offer self-destructing messages, the timer is shared, meaning that your contact can shut off the timer entirely and the messages YOU send will cease to disappear.

Wickr

Pros: Wickr offers free, ephemeral messaging that is password protected. Your user ID is not dependent on your phone number or other personally identifying info. Wickr is mostly reliable and easy to use—it just works.

Cons: Wickr is not open source, and the company’s profit model (motive) is unclear. There’s also no way to turn off disappearing messages.

Wickr is sometimes called “Snapchat for adults.” It’s an ephemeral messaging app which claims to encrypt your photos and messages from endpoint to endpoint, and stores everything behind a password. It is regularly audited and probably does exactly what it says it does, but Wickr’s primary selling point is that your user login is independent from your cell phone number. You can log in from any device, including a disposable phone, and still have access to your Wickr contacts, making communication fairly easy. The primary concern with using Wickr is that it’s a free app and we don’t really know what those who maintain it gain from doing so; it should absolutely be used with that in mind. Additionally, it is worth keeping in mind that Wickr is suboptimal for communications you actually need to keep, as there is no option to turn off ephemeral messaging, and the timer only goes up to six days.

Threema

Pros: Threema is PIN-protected, offers decent usability, allows file transfers, and your user ID is not tied to your phone number.

Cons: Threema isn’t free, isn’t open source, doesn’t allow ephemeral messaging, and ONLY allows a 4-digit PIN.

Threema’s primary selling point is that it’s used by some knowledgeable people. Like Wickr, Threema is not open source but is regularly audited, and likely does exactly what it promises to do. Also like Wickr, the fact that your user ID is not tied to your phone number is a massive privacy benefit. If lack of ephemerality isn’t a problem for you (or if Wickr’s ephemerality IS a problem for you), Threema pretty much just works. It’s not free, but at $2.99 for download, it’s not exactly prohibitively expensive for most users. With a little effort, Threema also makes it possible for Android users to pay for their app “anonymously” (using either Bitcoin or Visa gift cards) and directly download it, rather than forcing people to go through the Google Play Store.

WhatsApp

Pros: Everyone uses it, it uses Signal’s encryption protocol, it’s super straightforward to use, it has a desktop app, and it also encrypts calls.

Cons: Owned by Facebook, WhatsApp is not open source, has no password protection and no ephemeral messaging option, is a bit of a forensic nightmare, and its key change notifications are opt-in rather than default.

The primary use case for WhatsApp is to keep the content of your communications with your cousin who doesn’t care about security out of the NSA’s dragnet. The encryption WhatsApp uses is good, but it’s otherwise a pretty unremarkable app with regard to security features. It’s extremely easy to use, is widely used by people who don’t even care about privacy, and it actually provides a little cover due to that fact.

The biggest problem with WhatsApp appears to be that it doesn’t necessarily delete data, but rather deletes only the record of that data, making forensic recovery of your conversations possible if your device is taken from you. That said, as long as you remain in control of your device, WhatsApp can be an excellent way to keep your communications private while not using obvious “security tools.”

Finally, while rumors of a “WhatsApp backdoor” have been greatly exaggerated, if WhatsApp DOES seem like the correct option for you, it is definitely a best practice to enable the feature which notifies you when a contact’s key has changed.

Facebook Secret Messages

Pros: This app is widely used, relies on Signal’s encryption protocol, offers ephemeral messaging, and is mostly easy to use.

Cons: You need to have a Facebook account to use it, it has no desktop availability, it’s kind of hard to figure out how to start a conversation, there’s no password protection, and your username is your “Real Name” as defined by Facebook standards.

Facebook finally rolled out “Secret Messages” for the Facebook Messenger app. While the Secret Messages are actually pretty easy to use once you’ve gotten them started, starting a Secret Message can be a pain in the ass. The process is not terribly intuitive, and people may forget to do it entirely as it’s not Facebook Messenger’s default status. Like WhatsApp, there’s no password protection option, but Facebook Secret Messages does offer the option for ephemerality. Facebook Secret Messages also shares the whole “not really a security tool” thing with WhatsApp, meaning that it’s fairly innocuous and can fly under the radar if you’re living somewhere people are being targeted for using secure communication tools.

There are certainly other tools out there in addition to those discussed above, and use of nearly any encryption is preferable to sending plaintext messages. The most important things you can do are choose a solution (or series of solutions) which works well for you and your contacts, and employ good security practices in addition to using encrypted communications.

There is no one correct way to do security. Even flawed security is better than none at all, so long as you have a working understanding of what those flaws are and how they can hurt you.

— By Elle Armageddon

Burner Phone Best Practices

A User’s Guide

A burner phone is a single-use phone, unattached to your identity, which can theoretically be used to communicate anonymously in situations where communications may be monitored. Whether or not using a burner phone is itself a “best practice” is up for debate, but if you’ve made the choice to use one, there are several things you should keep in mind.

Burner phones are not the same as disposable phones.

A burner phone is, as mentioned above, a single-use phone procured specifically for anonymous communications. It is considered a means of clandestine communication, and its efficacy is predicated on having flawless security practices. A disposable phone is one you purchase and use normally with the understanding that it may be lost or broken.

Burner phones should only ever talk to other burner phones.

Using a burner phone to talk to someone’s everyday phone leaves a trail between you and your contact. For the safety of everyone within your communication circle, burner phones should only be used to contact other burner phones, so your relationships will not compromise your security. There are a number of ways to arrange this, but the best is probably to memorize your own number and share it in person with whoever you’re hoping to communicate with. Agree in advance on an innocuous text they will send you, so that when you power your phone on you can identify them based on the message they’ve sent and nothing else. In situations where you are meeting people in a large crowd, it is probably OK to complete this process with your phone turned on, as well. In either case, it is unnecessary to reply to the initiation message unless you have important information to impart. Remember too that you should keep your contacts and your communications as sparse as possible, in order to minimize potential risks to your security.

Never turn your burner on at home.

Since cell phones both log and transmit location data, you should never turn on a burner phone somewhere you can be linked to. This obviously covers your home, but should also extend to your place of work, your school, your gym, and anywhere else you frequently visit.

Never turn your burner on in proximity to your main phone.

As explained above, phones are basically tracking devices with additional cool functions and features. Because of this, you should never turn on a burner in proximity to your “real” phone. Having a data trail placing your ostensibly anonymous burner in the same place at the same time as your personally-identifying phone is an excellent way to get identified. This also means that unless you’re in a large crowd, you shouldn’t power your burner phone on in proximity to your contacts’ powered-up burners.

Never use identifying names.

Given that the purpose of using a burner phone is to preserve your anonymity and the anonymity of the people around you, identifying yourself or your contacts by name undermines that goal. Don’t use anyone’s legal name when communicating via burner, and don’t use pseudonyms that you have used elsewhere either. If you must use identifiers, they should be unique, established in advance, and not reused.

Consider using an innocuous passphrase to communicate, rather than using names at all. Think “hey, do you want to get brunch Tuesday?” rather than “hey, this is Secret Squirrel.” This also allows for call-and-response as authentication. For example, you’ll know the contact you’re intending to reach is the correct contact if they respond to your brunch invitation with, “sure, let me check my calendar and get back to you.” Additionally, this authentication practice allows for the use of a duress code, “I can’t make it to brunch, I’ve got a yoga class conflict,” which can be used if the person you’re trying to coordinate with has run into trouble.

Beware of IMSI catchers.

One reason you want to keep your authentication and duress phrases as innocuous as possible is that law enforcement agencies around the world are increasingly using IMSI catchers, also known as “Stingrays” or “Cell Site Simulators,” to capture text messages and phone calls within their range. These devices pretend to be cell towers, intercept and log your communications, and then pass them on to real cell towers so your intended contacts also receive them. Because of this, you probably don’t want to use your burner to text things like, “Hey are you at the protest?” or “Yo, did you bring the Molotovs?”

Under normal circumstances, the use of encrypted messengers such as Signal can circumvent the use of Stingrays fairly effectively, but as burner phones do not typically have the capability for encrypted messaging (unless you’re buying burner smartphones), it is necessary to be careful about what you’re saying.

Burner phones are single-use.

Burner phones are meant to be used once, and then considered “burned.” There are a lot of reasons for this, but the primary reason is that you don’t want your clandestine actions linked. If the same “burner” phone starts showing up at the same events, people investigating those events have a broader set of data to build profiles from. What this means is, if what you’re doing really does require a burner phone, then what you’re doing requires a fresh, clean burner every single time. Don’t let sloppy execution of security measures negate all your efforts.

Procure your burner phone carefully.

You want your burner to be untraceable. That means you should pay for it in cash; don’t use your debit card. Ask yourself: are there surveillance cameras in or around the place you are buying it? Don’t bring your personal phone to the location where you buy your burner. Consider walking or biking to the place you’re purchasing your burner; covering easily-identifiable features with clothing or makeup; and not purchasing a burner at a location you frequent regularly enough that the staff recognize you.

Never assume burner phones are “safe” or “secure.”

For burner phones to preserve your privacy, everyone involved in the communication circle has to maintain good security culture. Safe use of burners demands proper precautions and good hygiene from everyone in the network: a failure by one person can compromise everyone. Consequently, it is important both to make sure everyone you’re communicating with is on the same page regarding the safe and proper use of burner phones, and also to assume that someone is likely to be careless. This is another good reason to be careful with your communications even while using burner phones. Always take responsibility for your own safety, and don’t hesitate to erase and ditch your burner when necessary.

— By Elle Armageddon

Why I Choose to Live in Wayne National Forest

TO THE POINT

Our current system is like an abandoned parking lot. Asphalt was laid, killing life and turning everything into a homogenous blackness, a dead sameness. The levers of maintaining this have broken down. No one is coming to touch up the asphalt. In abandoned parking lots, cracks form and life grows from the cracks.

All these riots, environmental catastrophes, food crises, occupations of land by protestors, and various breakdowns in daily life are cracks in the asphalt. What will spring from the cracks depends on what seed is planted within them. Beautiful flowers could grow. Weeds could grow.

Modern rich nations have walled themselves in. Colonized India was a world apart from Britain. The United States exists an ocean away from the places it drone strikes. Citizenship acts as a tool of ethnic cleansing. The world, according to the new nationalists, will be a checkerboard of racially homogenous governments with swords continuously drawn. The rich nations will now literally wall themselves in, ensure their “racial purity”, and steal from the poorer nations until the end of days. At least, this is the future envisioned by the Trump/Bannon regime. This is the future governments everywhere seem to be carrying us toward, a divided people screaming in joy or anger.

The continued and accelerated fracking of Wayne National Forest, Ohio’s only national forest, fits perfectly into this worldview – a governance that manages the cracks. The power of this world and the world our rulers wish to realize is dependent on fracking wells, oil rigs, pipelines, and energy infrastructure in general. To oppose this infrastructure is to oppose this system, to take as our starting point the cracks.

I am living in Wayne National Forest in hopes of, first and foremost, protecting the forest. I hope to crack the asphalt and plant a flower.

Everyone is welcome to join the occupation, beginning on May 12th. Everyone is welcome to visit. Everyone is welcome to participate, in one way or another, in this land defense project.

EXTENDED

Some conclude the election of Trump signals an end of the left. The opinion seems rushed, and forces could push for a revitalization – but if it is true, then good riddance.

Those of the left are preoccupied with flaunting ego. Taking up their various labels – communist, socialist, anarchist, Trotskyist – seems more about themselves than any revolutionary project. The labelling urge is bureaucratic. Leftists have done themselves no favors talking like politicians. Their endless meetings bear all the marks of officialdom and red tape. Distant from daily life, they alienate those who truly seek a new world. At most meetings, not much more is accomplished than an agreement to continue having meetings. This is a hallmark of bureaucracy.

Rally after rally displays the same dead tactics and strategies. Standing on the sidewalk, holding signs, and chanting slogans at buildings will never bring change. These events only pose a threat when a variety of activity occurs, when people stop listening to the activists. This could be anything from smashing up cop cars to a group of musicians playing spur of the moment.

Supervisors hate the unplanned.

If change is sought, then an understanding of the ruling structure is vital. Understanding the current arrangement requires a grasp of history. History reveals how the present came to be, and such recognition provides the basis for comprehending our current world.

The first known civilization sprang up in modern-day Iraq around 6,000 years ago. This did not occur because humans became smarter or more physically fit. Modern humans evolved physically around 100,000 years ago and mentally 40,000 years ago. The five main qualities of civilization are: (1) city life; (2) specialized labor; (3) control by a small group of resources beyond what is needed to survive, leading to (4) class rank and (5) government. This is still the order we face today.

Before civilization’s ascendancy, humans organized life in various ways. One was the hunter-gatherer band. These were groups of 100 or fewer, usually with no formal leadership and no differences in wealth or status. These groups were mobile, never staying in one spot more than temporarily. Again, it was not due to stupidity that these people did not develop more civilized ways of living. One could argue the hunter-gatherer life promotes a general knowledge while modern society encourages a narrow, yet dense, knowledge.

Agriculture and animal domestication led to farming villages and settled life. With this came the “Trap of Sedentism.” After a few generations of village life people forgot the skills needed to live nomadically and became dependent upon the village. In general, people worked harder and longer to survive while close quarters with each other and animals increased illness. With greater access to food, the population increased.

Chiefdoms were another form of pre-civilized living. These ranked societies had various clans placed differently on the pecking order, and everyone was governed by a chief. The chief controlled whatever food was produced above what the village needed to survive – the surplus. These societies came the closest to civilized living patterns.

Agriculture’s surplus allowed more people to be fed than in the hunter-gatherer band. With more people working the fields and tinkering with technology came innovation, and with innovation a larger surplus. This larger surplus allowed for continued population growth. This cycle proceeded to the birth of civilization and became more rapid with it.

Economists have advertised the story of “barter” for a very long time, perhaps because it is so vital to their domain of study. The narrative is as follows: John owns 3 pairs of boots but needs an axe, and Jane has 2 axes but needs a pair of boots. The two trade with each other to get what they want, each trying to get the upper hand in the trade. The massive problem with this story is that it is false.

Adam Smith, an economist from the late 1700s, popularized this tale and made it the basis of economics. He asserted that wherever money did not exist, one would find barter, in all cases, and pointed to aboriginal Americans as an example. But when Europeans came to conquer the continent, they did not find a land of barter where money was nonexistent.

Barter took place between strangers and enemies. Within the village, one found different forms of distribution. One place might have a central hub that people add to and take from. Another might have free gift-giving between neighbors. To redo our John and Jane example: John takes Jane’s axe, and Jane knows that when she needs something of John’s he will let her have free use of it. What we never find happening is barter.

This is important because the barter folktale convinces people our present system is a reasonable development. If humanity’s natural propensity is to barter, then money and profitable exchange seem like evident progression. This is not to say that barter is “unnatural”, as it came from the heads and relationships of people, but that it is not the only game in town. If it is not the only game in town, and there are a multitude of ways humanity could and has organized itself, then the current system can’t be justified as the necessary development of human nature.

So, for most of human history impersonal government power did not exist. Communities were self-sufficient and relationships were equal and local. The rise of civilization and government changed this. Dependency and inequality marked associations and the few held power over the many.

Surplus food put some in a position where they did not need to work for their survival. While most still obtained resources from the earth and survived on their labor, a few extracted supplies from the many. This small group became the wealthy ruling class and controlled the allocation of production excess. The basic relationship here is parasitic.

The smart parasite practices restrained predation, meaning it doesn’t use up all the energy of its host, so as to maintain the host’s life and continue its own survival. The smartest parasite defends its host. Rulers learned to protect the workers for this reason, and in the process increased these laborers’ dependence on them. Increasing population developed into cities, and problems of coordination occurred with more people living in a single space. The ruling parasites organized social life to maintain their control of the surplus and, at the same time, rationalized the city to solve problems of communication and coordination.

State power emerged from large-scale infrastructural projects as well, specifically irrigation. Irrigation is a way of diverting water from the source to fields. Large-scale irrigation endeavors required thousands of people and careful utilization of raw materials. Undertaking such a plan required a small group with the technical know-how to control what labor was done, how and when it was done, how much material was needed, and when and how it was used, and to utilize these same networks of influence for future repairs. Large infrastructure and complex city life increased the dependency of producers on rulers.

The city is the basis of civilization. The city, simply defined, is land where too many people exist for it to be self-sufficient. It requires continuous resource importation to keep the large population alive, one that cannot live off the soil. This impersonal power, whose structures don’t change, is based on mindless expansion outside of the city in search of resources. War, of course, is the most efficient way to grab these resources when one city’s importation search runs head on with another’s, or with people who live in the way of what is sought. Conquering existed before civilization yet became perfected within its system.

Emperors emerged to rule the masses, gaining prestige from war prowess. Forming empires, these leaders ushered in a new form of rule through large territories gained in conquest. Peasants who controlled their land and were not controlled by feudal lords came into contact with government only once a year. Politics was centralized in the palace. Ruling families might change, but this did not affect peasant lives. Without modern surveillance technologies and police institutions it was virtually impossible to continuously govern every piece of land. Peasants organized their villages on their own. The only time they saw their government was when the army was sent to collect taxes. This all changed with the rise of the nation-state and mass politics.

Feudalism was based on loyalty to the King and land distribution by the King to obedient lords. Lords, in turn, granted parts of their land to vassals under similar conditions of obedience. Governing authority was decentralized. The King was the ultimate feudal lord, but could only flex on those lords who held land from him. The entire system depended on the lords’ willingness to obey, or the ability of the King to rally enough troops to crush the disobedient. This system was basically moneyless, relying on rents of food and other goods flowing up the feudal pyramid. This changed with increased commercial activity.

Buying and selling began to replace rents, with power beginning to shift to merchants and urban commercial activity in general. This change brought about the ability for Kings to monetarily tax those within their domain. Centralization was required to do this, and it undermined feudal relations – that of the lord controlling his own land. Any further development of commercial activity would strengthen the monarchy over the nobility.

Changes in warfare required taxes and the creation of a permanent army. Before, Kings would call upon their lords who would rally their vassals to the King’s will. Feudal armies were small, unreliable, and war was local. With Kings increasing their revenue, they were able to hire foreign mercenaries and pay a small permanent army. If other Kings did not want to be conquered, they conformed or died. With a permanent army came a need to increase taxation for maintenance, further undermining feudalism.

Kingly taxation of the populace established a direct link between the highest governing authority and the lowest on the power chain. This completely undermined the rule of lords and centralized power into national monarchies. The primary concern of these nations was that people consented to taxes.

Another way the nation-state emerged was through city-state infighting. Dictators would rise within the city to calm civil war taking place between the rich and poor. These dictators would conquer more land and become princes. When these carved out territories fell apart, cities and other units would try to conquer each other to fill the power vacuum. Eventually, consolidation would happen and usually with the help of mercenaries. Since mercenaries held it all together, whoever controlled the national treasury had power.

When vast empires fell apart, specifically in the Middle East, there arose smaller governing units. These smaller units were concerned with conquering and so had to develop militaries. To do this they taxed the population and could only do so if the people consented, meaning they had to provide services and other incentives. Politics moved out of the palace to everywhere. The nation-state gave birth to mass-politics.

The nation-state is totalitarian by nature. It must care about what its population is doing. Government presence went from one year at tax time to being a constant. Laws upon laws developed, strictly regulating the life of the people in the borders of the nation. The daily life of the people was now bound together with the health and viability of the system. Here, we find the international system of nation-states and world market.

Peasants no longer grew food, ate it, and had a surplus. Now they sold their food on the market – which the nation-state taxed – in exchange for money, and used this money to buy food and pay taxes. Urban centers made goods for a taxable wage, and the goods they made could be taxed. Imported goods from other nations could be taxed as well. Truly, all of daily life was absorbed into the system. People’s continued consent and work within new market parameters called forth the totalitarian nature of the nation-state.

Economic development led to restructuring. Small craftsmen went out of business when factory production was able to make, and therefore sell, goods cheaper and faster. These craftsmen found themselves doing unskilled and semiskilled labor on the factory floor. Before, production was individual. Those that produced a good also owned the shop and tools so it made sense that they should get all the money earned. Factory production saw creation become social, with many helping to make the goods, while payment stayed individual, with factory owners who contributed no labor gaining all the profit for simply owning the tools and the building. This is the same parasitic relationship found throughout all of civilization, just new roles and new ways for the ruling class to live off of the labor of many.

The workers movement developed in response to this, made up of various left ideologies, from Marxian communism to anarchism. Regardless of ideological preference, the idea was the same. The factory was the kernel of the new world. People had been separated from the land and each other through borders, style of work, race, and a number of other things. The factory got all these different types of people together and under the same conditions. The more the factory spread, the more people were united by their similar exploitation. Eventually, they would rise up and usher in a new world based on freedom and equality.

There were problems with this. People were united in their separation. It took the imposition of an ethic by the workers movement, that all these different types of people should identify first and foremost as workers, for collective action to take place. All the workers did not have similar interests. Young white single males have much different concerns than a single black immigrant mother, regardless of being in the same factory. Obviously, government leaders and factory owners utilized these differences to their own advantage by privileging some groups over others. The slogan “An Injury to One is an Injury to All” was based more on faith than fact.

The workers movement saw the factory’s mass employment with hope as well. With massive profits, owners would reinvest this money in machines and other tools. Needing people to work the new equipment, they hired. Selling more products, created more efficiently, led to more profits, and the cycle continued. As the factory system expanded, it was believed capitalism was bringing about its own collapse. More were being united by a common condition, that of the worker, and eventually their false separations would subside. They would see each other as the same, regardless of creed or color, see their true enemy in the factory owners and their government, and revolt.

For this reason, the workers movement advocated the expansion of the factory in a policy called “proletarianization.” When the Bolshevik Communists came to power in Russia, their main concern was to industrialize the nation for this purpose, similar to the rise of Communist governments elsewhere. One could ask the obvious question: Would spreading the factory system and the working-class condition really bring its end? Would spreading the plantation system and the slave condition end slavery, or strengthen it?

If Trump is the end of the left, good riddance.

The conditions that brought about the original workers movement have changed, yet the left seems blind to this or prepares mental gymnastics. For starters, the current economy is deindustrializing in America and post-industrial worldwide. Even in current industrial powerhouses like China and India, the employment rates and growth of an earlier period are not to be found. For the United States, Europe, and the West in general, there is no real industrial manufacturing base. This type of work only happens in the colonized world or prison. It’s only sweatshops of various types in different spaces.

In fact, it may even be fallacious to speak of a “colonized” world. The nation-state seems not to matter anymore. A new, global system has developed. Transnational corporations organize social life, almost everywhere, to operate for the creation of value. Every Facebook post made, every online search informs advertisers and helps businesses adapt their products. The spending habits tracked on your debit card help companies know who you are and what type of products you like. One’s interaction with the current world contributes to value creation. In other words, production has moved from the workplace to all of life, and this has only been possible with modern communication technology and the new post-industrial economy.

The workers now are not the same as in the past, in this country or countries similarly situated. The left, when admitting that things have changed, will then perform backflips to also claim nothing has changed. The service sector has come to dominate, yet the left holds its orientation to be exactly the same as in the factory era. I was discussing this with a Trotskyist friend who worked a service sector job at a burrito joint. Since workers were still paid a wage, he claimed, the form of capitalist exploitation had not changed.

Taking the example of the burrito joint: the harvesting of the lettuce, tomatoes, and other food items used to make those burritos most likely occurred in an underdeveloped country, or was done by migrants or prisoners in this country. Those workers receive wages much lower than those in the service sector (usually), and their labor is more vital to the economic set-up than easily automated service jobs. If they did not harvest the food, my Trotskyist friend would have no lettuce to put on someone’s burrito. Building a burrito is not the same as building a highway, car, or skyscraper, or harvesting fields. No kernel of a new world can be seen within this type of work, other than something akin to a psychiatric ward.

So, how will a better world be brought about? I think anyone who believes they know the answer to this question is arrogant and needs to come back down to earth. I certainly do not know the answer. I will provide some thoughts to help answer this question.

Every single revolution has failed. The French Revolution, American Revolution, Russian Revolution, the list goes on, all have failed to usher in a world that has ended the few dominating the many. To hang on to these past conceptions of revolution is to condemn the next one to loss. This means a rethinking of fundamental questions is needed.

What does revolutionary action succeed at doing?

First and foremost, it succeeds at establishing a set of values within a subversive context. Courageousness is a good thing to find in the hearts of people, yet the soldier who goes off to fight and die is also “courageous.” The last thing revolutionary action aims at is getting people to join the armed forces. An insurrectionary act affirms notions of justice, courage, honor, right and wrong, freedom, kindness, empathy, and so on that completely negate the selfishness, materialism, and overall toxicity of the dominant values.

This is where anarchists who fetishize violence miss the mark. Simply put, burning everything to the ground does not mean people stop being assholes. This is not to say, however, that these values won’t be affirmed in riots and the like. Who could say those in Ferguson, Baltimore, and many other places were not courageous, with deep notions of justice, right and wrong, and freedom? These values can also be affirmed by wise grandparents going on a hike with their grandson, a teacher who treats her students as equals, a victim who stands up to their bully, a group of musicians playing carefree, a rope swing and a group of good people, graffiti, sharing a smoke, stealing from Walmart, fighting mobilized Nazis, and many other acts.

Revolutionary action does not just happen at a march or a political meeting. I’d go so far as to argue it happens in those places least of all.

Secondly, it succeeds in taking space to keep these values and this energy going. It takes space and organizes the shared life within it in a completely new way. It may even be wrong to describe this as “organized.”

When hegemonic powers fall apart, power REALLY does go back to individual people. Depending on how we relate to each other, flowers or weeds could grow. What seed is planted in the cracks?

How is power disrupted?

From here, we can look to the most interesting struggle to occur in the United States in many years: Standing Rock. For all its problems, the Standing Rock resistance highlighted some important things. Power is found in infrastructure. The construction of the pipeline only strengthens the world of pipelines and oil dependency. These constructions, from oil pipelines to highways to electrical systems to fracking equipment, keep this world running. Those of us who went to Standing Rock and stayed with a certain group in Sacred Stone saw the banner: “Against the Pipeline and Its World!”

Standing Rock had one camp that sat in the path of the pipeline to block construction, until it was forcibly removed by the police, and camps across the river. This struggle blocked the construction of a world it did not want to see and built the one it did want, right in the space it captured. It had its own food supply, water supply, and so on. It had its own logistical system, outside of government and business. It relied on the power of people.

During the Occupy movement, it seemed natural for those in Oakland to block the port. The port brought in commodities to be sold, benefitting the rich and propping up the system. It seemed like common sense for revolutionaries in Egypt to take Tahrir Square, the center of activity, to block main roads, stopping people from shopping and working, and to burn police stations. In fact, focusing on Tahrir Square misses all the blocked roads and burnt police stations across Egypt.

The reflex seems to be to block the flows of this world and construct new ones, to block one form of life and build many new forms.

Why do revolutions fail?

There is no good answer to this.

One reason revolt fails to materialize (among many) is that activity gets pacified by liberals. This, again, could be seen at Standing Rock, where those who were part of the “Spirit Camps” put their bodies between the police and the “Warrior Camps,” telling the latter to demobilize, abandon the conflict they had initiated, and pray. It can be seen when liberals unmask covered protesters who are trying to push things further, or even pepper-spray them for nonviolently damaging property.

Following this, one of the most inspiring revolutions of the last 100 years was snuffed out by revolutionaries giving up their power in the belief that it was strategic. During the Spanish Civil War, workers in Barcelona, Aragon, and other urban and rural places took over the land and factories, abolished government and money, and armed themselves. They then subordinated themselves to Republican government authority in the belief that doing so would help them win the fight against the fascists.

What this showed is that the Republican government was no more capable of fighting fascists than autonomous armed workers were. The workers should have trusted no one but themselves; instead they were repressed by both Republican and Communist henchmen until they fell in line. Both of these forces reintroduced market mechanisms and money, government authority, and other ways for the few to rule over the many. Contrary to their claims, these efforts did not make fighting the war any more efficient, and in some ways, especially the reintroduction of market forces into the food supply, they made things much worse. In the end, the fascists still won.

The problem here is viewing the conflict in purely military terms instead of as a social war. By falling in line with Republican government and military command, those in Barcelona and elsewhere allowed those forces to organize social life, which ultimately just laid the groundwork for fascist organization. Their self-organization should never have been sacrificed.

When revolutionaries forget their struggle is more than a military confrontation, they become exactly what they are fighting against. They become their enemy. They also miss inspiring movements by fetishizing combat. We heard the left praise the fight of Kurdish women in Rojava against ISIS, and justifiably so, yet heard nothing about the grassroots councils that have sprung up and continue to survive all across Syria in spite of a horrible civil war. Where the Assad dictatorship’s control collapsed, these councils took on the role of providing electricity, distributing food and water, healing the sick and injured, and whatever else is necessary to life.

I have mainly spoken in generalities, attempting to explain my reasoning adequately without overcomplicating or boring.

Over 700 acres of the Wayne National Forest have been auctioned off to the oil and gas industry with hydrofracturing intentions. The Wayne is not new to gas and energy exploitation, yet this is a new and intensified maneuver in the war on Ohio’s only national forest. The plan from the Bureau of Land Management is to continue resource extraction until it’s all gone and the Wayne is dead. Some people will make a profit, though…

I will be living in Wayne National Forest, in a long-term occupation starting on May 12th, in hopes of turning this tide. While it would be interesting for this to fit into some wider narrative of struggle, and in some ways it naturally does, that is not my main concern. My main concern is stopping the energy industry’s continued attack on the forest.

To anyone who has resonated with what’s been written, who sees this battle as their battle, and who believes they can help, PLEASE GET INVOLVED.

EVERYONE IS WELCOME TO COME.

To read:

– Affirming Gasland by the creators of the documentary Gasland

– 1984 by George Orwell

– The Madman: His Parables and Poems by Kahlil Gibran

– The Great Divorce by C.S. Lewis

– The Worst Mistake in the History of the Human Race by Jared Diamond

– What is Civilization? by John Haywood (found in The Penguin Historical Atlas of Ancient Civilizations)

– Debt by David Graeber

– To Our Friends by The Invisible Committee

To watch:

– Gasland

– Gasland 2