In “Electronic Revolution,” whence Gilles Deleuze got his idea of the “control society,” William S. Burroughs writes about how we can scramble the control society grammatically (see UbuWeb for the essay in full):
The aim of this project is to build up a language in which certain falsifications inherent in all existing western languages will be made incapable of formulation. The following falsifications to be deleted from the proposed language. (“ER” 33)
Why? As he puts it elsewhere,
There are certain formulas, word-locks, which will lock up a whole civilisation for a thousand years. (The Job 49)
To unscramble control syntax, the DNA precode of the language virus,
delete the copula (is/are), i.e., disrupt fixed identities – YOU ARE WHAT YOU ARE NOT [Lacan]!
replace definite articles (the) with indefinite articles (a/an), i.e., avoid reification — THERE EXIST MULTIPLICITIES [Badiou]!
replace either/or with and, i.e., ignore the law of contradiction — JUXTAPOSE [Silliman]!
The IS OF IDENTITY. You are an animal. You are a body. Now whatever you may be you are not an “animal,” you are not a “body,” because these are verbal labels. The IS of identity always carries the assignment of permanent condition. To stay that way. All name calling presupposes the IS of identity.
This concept is unnecessary in a hieroglyphic language like ancient Egyptian and in fact frequently omitted. No need to say the sun IS in the sky, sun in sky suffices. The verb TO BE can easily be omitted from any languages. . . . (“ER” 33)
The IS of identity . . . was greatly reinforced by the customs and passport control that came in after World War I. Whatever you may be, you are not the verbal labels in your passport any more than you are the word “self.” So you must be prepared to prove at all times that you are what you are not. (ibid.)
THE DEFINITE ARTICLE THE. The contains the implication of one and only: THE God, THE universe, THE way, THE right, THE wrong. If there is another, then THAT universe, THAT way is no longer THE universe, THE way. The definite article THE will be deleted and the indefinite article A will take its place. (33-34)
Definite article THE contains the implications of no other. THE universe locks you in THE, and denies the possibility of any other. If other universes are possible, then the universe is no longer THE[;] it becomes A. (34)
THE WHOLE CONCEPT OF EITHER/OR. Right or wrong, physical or mental, true or false, the whole concept of or will be deleted from the language and replaced by juxtaposition, by AND. This is done to some extent in any pictorial language where two concepts stand literally side by side. (ibid.)
[A] contradictory command gains its force from the Aristotelian concept of either/or. To do everything, to do nothing, to have everything, to have nothing, to do it all, to do not any, to stay up, to stay down, to stay in, to stay out, to stay present, to stay absent. (ibid.)
These falsifications inherent in the English and other western alphabetical languages give the reactive mind commands their overwhelming force in these languages. […] The whole reactive mind can be in fact reduced to three little words — to be “THE.” That is to be what you are not, verbal formulations. (ibid.)
There are also his more familiar “lines of fracture” (to use Deleuze’s phrase): aleatory procedures like cut-ups and fold-ins — but also the grid and picture language — that fracture the “lines of association” by which “control systems” exert their monopoly (13, 12). These represent a “new way of thinking”:
The new way of thinking has nothing to do with logical thought. It is no oceanic organismal subconscious body thinking. It is precisely delineated by what is not. Not knowing what is and is not[,] knowing we know not. Like a moving film the flow of thought seems to be continuous while actually the thoughts flow stop change and flow again. At the point where one flow stops there is a split second hiatus [a cut]. The new way of thinking grows in this hiatus between thoughts. (The Job 91)
Burroughs’ “lines of association” foreshadow Deleuze’s “lines of sedimentation,” i.e., of “light” (visibility), “enunciation” (speech), “force” (government) and “subjectification” (self-government); the “new way,” those of “fracture” or “breakage” (events in Badiou’s sense or cuts in Burroughs’). (N.B. “Lines of subjectivation,” being “lines of escape” or excess, point beyond sedimentation across the breaks to new dispositifs [“apparatuses”].)
The upshot of such scrambles is twofold:
they are writing itself: “All writing is in fact cut-ups. A collage of words read heard overhead [sic]. Use of scissors [just] renders the process explicit and subject to extension and variation” (“The Cut-Up Method of Brion Gysin”)
they are democratic: “Scrambles is the democratic way” (“ER” 24) — or elsewhere: “Cut-ups are for everyone” (“CMBG”); and, in that they are disruptive,
they are revolutionary:
He who opposes force with counterforce alone forms that which he opposes and is formed by it. History shows that when a system of government is overthrown by force a system in many respects similar will take its place. On the other hand he who does not resist force that enslaves and exterminates will be enslaved and exterminated. For revolution to effect basic changes in existing conditions three tactics are required: 1. Disrupt. 2. Attack. 3. Disappear. Look away. Ignore. Forget. These three tactics to be employed alternatively. (The Job 101)
How can art and poetry encourage existential trajectories that move beyond the nihilism of late-modernity? The American philosopher Iain Thomson turns to the German philosopher Martin Heidegger in order to present nihilism as our deepest historical problem and art as our best response, while drawing out Heidegger’s insights into postmodernity and technology.
Heidegger, Art, and Postmodernity seeks to show that Heidegger is best understood not simply as another regressive or reactionary “antimodernist” (the way critics typically portray him) but, instead, as a potentially progressive and so still promising “postmodernist”—if I may be forgiven for trying to rehabilitate a term that has become so thoroughly “unfashionable” (or unzeitgemäße, as Nietzsche aptly put it, literally “not cut to the measure of the time”). Sounding like some hipster conservative, Heidegger contends in Being and Time that a formerly hyper-trendy term like postmodern “can first become free in its positive possibilities only when the idle chatter covering it over has become ineffectual and the ‘common’ interest has died away.” In other words, once everyone stops talking about “The Next Big Thing,” it becomes possible to understand what was so inspiring about it in the first place, letting us uncover those enduringly inspirational sources that tend to get obscured by the noise that engulfs a major trend during its heyday. 
It remains true and important, of course, that Heidegger is highly critical of modernity’s metaphysical foundations, including (1) its axiomatic positing of the Cartesian cogito as the epistemological foundation of intelligibility; (2) the ontological subject/object dualism generated by (1); (3) the fact/value dichotomy that follows from (1) & (2); and (4) the growing nihilism (or meaninglessness) that follows (in part) from (3), that is, from the belief that what matters most to us world-disclosing beings can be understood as “values” projected by human subjects onto an inherently-meaningless realm of objects. I shall come back to this, and continue to find myself provoked and inspired by Heidegger’s phenomenological ways of undermining modern Cartesian “subjectivism.” But my own work is even more concerned with Heidegger’s subsequent deconstruction of late-modern “enframing” (Gestell), that is, with his ontological critique of global technologization. Heidegger’s critique of the nihilism of late-modern enframing develops out of his earlier critique of modern subjectivism but goes well beyond it. As Heidegger, Art, and Postmodernity shows, enframing is “subjectivism squared”: As modernity’s vaunted subject applies the technologies developed to control the objective realm back onto human subjects, this objectification of the subject is transforming us into just another intrinsically-meaningless resource to be optimized, ordered, and enhanced with maximal efficiency—whether cosmetically, psychopharmacologically, eugenically, aesthetically, educationally, or otherwise “technologically.” (I shall come back to this point too.)
Taken together, Heidegger’s ontological critiques of modern subjectivism and late-modern enframing helped establish his work as an uncircumventable critical touchstone of twentieth century “continental” philosophy. And I say this even while fully acknowledging that Heidegger deliberately and directly involved himself and his thinking with history’s greatest horror (greatest thus far, at least), thereby rendering his work even more controversial than it would have been anyway. All of us would-be post-Heideggerians have to work through the significance of Heidegger’s deeply troubling Nazism for ourselves, as I have long argued. Indeed, that critical task is new only to those who are new to Heidegger (or who have somehow managed to avoid it by bunkering down in untenable and so increasingly desperate forms of denial). The critical task of working through and beyond Heidegger’s politics remains difficult nonetheless, because—as I showed in my first book, Heidegger on Ontotheology: Technology and the Politics of Education—the most insightful and troubling aspects of Heidegger’s thinking are often closely intertwined. Disentangling them thus requires both care and understanding, and so a capacity to tolerate ethical as well as philosophical ambiguity (traditional scholarly skills that seem to be growing rare in these days of one-sided outrage and indignation). 
Yet, despite Heidegger’s sustained critiques of modernity and late-modernity, he is not simply an anti-modernist (or even an anti-late-modernist). To try to think against something, he repeatedly teaches, is to remain trapped within its underlying logic. (The proud atheist often remains caught in the traditional logic of theism, for instance, insofar as both theist and atheist presume to know something outside the realm of possible knowledge. Like Hölderlin, Heidegger himself ended up as a romantic polytheist, open to the relevant phenomena and so capable of different kinds of religious experience.) I recognize, of course, that many people find it difficult to muster the hermeneutic charity and patience that one needs in order to even be able to understand Heidegger. But one of the deepest and most universal axioms of the hermeneutic tradition (and still shared from Gadamer to Davidson) is that the only way to understand another thinker is to presume that they make sense, that they are not just passing off meaningless nonsense as profundity. (There is a detectably post-Christian wisdom in the hermeneutics tradition here. “Thinking…loves”: Love even thy enemy, as it were, because hatred can never understand.) When Heidegger is read charitably (rather than dismissed polemically), it becomes clear that his overarching goal is not only to undermine but also to transcend modernity.
By working to think modernity from its deepest Cartesian presuppositions to its ultimate late-modern conclusions, I believe Heidegger helps open up some paths that lead beyond those problematically nihilistic modern axioms mentioned above, paths that also allow us to preserve and build upon the most crucial and irreplaceable advances achieved in the modern age. As that suggests, we need to acknowledge—much less grudgingly than Heidegger himself ever did—that humanity has made undeniable and precious progress in the domains of technology, science, medicine, art, language, and even (I try to show, thus going well beyond Heidegger) in politics. According to the perhaps heterodox, left-Heideggerian postmodernism I espouse (in the vicinity or aftermath of Dreyfus, Young, Rorty, Vattimo, Derrida, Agamben, and others), Heidegger’s central postmodern insight into the inexhaustible plurality of being serves best to justify and promote a robust liberal tolerance, a tolerance intolerant only of intolerance itself. That may initially sound relativistic, but this is a tolerance with teeth, because ontological pluralism undermines all fundamentalist claims to have finally arrived at the one correct truth about how to live, let alone to seek to impose those final answers on others (as I have recently tried to show).
Questions concerning how best to understand the implications of Heidegger’s central insights remain complex and controversial, of course. But I think it is clear—in light of Heidegger’s distinctive attempts to combine philosophy and poetry into a thinking that “twists free” of and so leads beyond modernity—that Heidegger was the original postmodern thinker. Here I say “original” even while acknowledging that Heidegger’s postmodern vision drew crucial inspiration from many others (including the Romantic tradition, especially Hölderlin, Van Gogh, and Nietzsche, as well as from his creative readings of Presocratic philosophy). For, as Heidegger, Art, and Postmodernity shows, Heideggerian “originality” (Ursprünglichkeit) is less concerned with being first than with remaining inspiring; that is, it is less about planting flags and more about continuing to provoke important insights in others.
Moreover, this view of Heidegger as the Ur-postmodernist gains a great deal of support from the fact that almost every single significant contemporary continental philosopher was profoundly influenced by Heidegger. The list is long, because it includes not just more recognizably “modern” philosophers like Arendt, Bultmann, Gadamer, Habermas, Kojève, Marcuse, Merleau-Ponty, Sartre, Taylor, and Tillich, but also such “postmodern” thinkers as Agamben, Badiou, Baudrillard, Blanchot, Butler, Cavell, Derrida, Dreyfus, Foucault, Irigaray, Lacan, Levinas, Rancière, Rorty, Vattimo, and Žižek—all of whom take Heideggerian insights as fundamental philosophical points of departure. Each of these thinkers seeks to move beyond these Heideggerian or post-Heideggerian starting points (more and less successfully, it must be said, but with lots of significant advances along the way).
Taken as a whole, one thing all of these major thinkers help confirm is that we think best with a hermeneutic phenomenologist like Heidegger only when we learn to read him “reticently”—that is, slowly, critically, carefully, thoughtfully, with reservations and alternatives left open rather than too quickly foreclosed. If we can adopt a critical yet charitable approach to Heidegger’s views on the matters of deep concern that we continue to share with him, then we can find our own ways into “die Sache selbst,” the matters themselves at stake in the discussion. Focusing on the issues that matter in this way can also help us avoid getting too bogged down in the interminable terminological disputes that too often turn out to be merely “semantic” misunderstandings or confusions of translation, noisy distortions in which those trained in different traditions and languages continue to unknowingly talk past one another. Our hermeneutic goal should instead be genuine understanding and so the possibility of positive disagreement, that is, disagreements that generate real alternatives and so do not remain merely criticisms (let alone pseudo-criticisms, confused epiphenomena of unrecognized misunderstandings, distortions passed down through generations or sent out across other networks). The modestly immodest goal of post-Heideggerian thinking, in sum, is to think the most important issues at issue in Heidegger’s thinking further than he himself ever did. At the very least, such attempts can succeed in developing these enduringly-important issues somewhat differently, in our own directions and inflections, in light of our own contemporary concerns and particular ways of understanding what matters most to our time and generations.
Heidegger’s provocative later suggestion about how best to develop the deepest matters at stake in the thinking of another can be helpful here: We need to learn “to think the unthought.” Thinking the unthought of another thinker means creatively disclosing the deepest insights on the basis of which that thinker thought. When we think their unthought, we uncover some of the ontological “background” which rarely finds its way into the forefront of a thinker’s thinking (as Dreyfus nicely put it, drawing on the Gestalt psychology Heidegger drew on himself). Thinking the unthought does mean seeing something otherwise unseen or hearing something otherwise unheard, but such hermeneutic “clairvoyance” (as Derrida provocatively dubbed it) should not presume that it has successfully isolated the one true core of another’s thinking (a mistake Heidegger himself too often committed). But nor should we concede that “death of the author” thesis which presumes that there is no deep background even in the work of our greatest thinkers. We post-Heideggerian postmodernists should just presume, instead, that any such deep background will be plural rather than singular, and so irreducible to any one over-arching interpretive framework. In that humbler hermeneutic spirit of ontological pluralism, we can then set out to develop at least some of a thinker’s best insights and deepest philosophical motivations beyond whatever points that thinker was able to take them.
In such a spirit, my own work focuses primarily on some of the interconnected issues of enduring concern that I think we continue to share with Heidegger, including (1) his deconstructive critique of Western metaphysics as ontotheology; (2) the ways in which the ontotheology underlying our own late-modern age generates troublingly nihilistic effects in our ongoing technologization of our worlds and ourselves; (3) Heidegger’s alternative vision of learning to transcend such technological nihilism through ontological education, that is, an education centered on the “perfectionist” task of “becoming what we are” in order to come into our own as human beings leading meaningful lives. My interest in those interconnected issues (of ontotheology, technology, and education) led me to try to explicate (4) the most compelling phenomenological and hermeneutic reasons behind the enduring appeal of Heideggerian and post-Heideggerian visions of postmodernity; and so also (5) the continuing relevance of art and poetry in helping us learn to understand being in some enduringly meaningful, postmodern ways. The point of this postmodernism, to put it simply, is to help us improve crucial aspects of our understanding of the being of our worlds, ourselves, and each other, as well as of the myriad other entities who populate and shape our interconnected worlds. (It is, in other words, a continuation of the struggle against nihilism, to which we will turn next.)
Beneath or behind it all, I have also dedicated much of the last decade to working through some of the philosophical issues that arise, directly and indirectly, from the dramatic collision between Heidegger’s life and thinking (as I have been working on a philosophical biography of Heidegger). I have thus taken up, for example, Heidegger’s views on the nature and meaning of love (which prove surprisingly insightful, once again, when approached with critical charity), while also continuing to participate in that ongoing re-examination of the significance of Heidegger’s early commitment to and subsequent break with Nazism, as well as the more recently revealed extent of his ignorant anti-Semitism (fraught and difficult topics).
In what follows I want to focus on the role that art—understood as poiêsis or ontological disclosure—can play in helping us learn to live meaningful lives. So I shall try briefly to explain some of my thoughts on nihilism as our deepest historical problem and art as our best response. How can art and poetry encourage existential trajectories that move beyond the nihilism of late-modernity? Let me take up this question while acknowledging the apparent irony of doing so in this technological medium. In fact, this need not be ironic at all, given my view that we have to find ways to use technologies against technologization—learning to use technologies without being used by them, as it were—by employing particular technologies in ways that help us uncover and transcend (rather than thoughtlessly reinforce) the nihilistic technologization at work within our late-modern age. What Heidegger helps us learn to undermine and transcend, in other words, is not technology but rather nihilistic technologization. By “nihilistic technologization,” I mean the self-fulfilling ontological pre-understanding of being that reduces all things, ourselves included, to the status of intrinsically-meaningless stuff standing by to be optimized as efficiently and flexibly as possible. (That, of course, will take some explaining.)
To develop Heidegger’s thinking on technological nihilism beyond the point he himself left it, we need both (1) to learn to recognize the undertow of technologization’s drift toward nihilistic optimization and yet still (2) find ways to use particular technologies (including word processing software, synthesizers, Facebook, on-line philosophy ‘zines, and all the other irreversibly-proliferating technological media of our world) in ways that help move us beyond that nihilistic technologization rather than merely reinforcing it. Heidegger, Art, and Postmodernity suggests that one of the best ways to do this is by cultivating a receptivity to that which overflows and so partly escapes all the willful projects in which the modern subject understands itself as the source of what matters most in the world (as the foundation of all “values,” all “normativity,” and other such widespread but deeply problematic, modern philosophical ideas). We think Heidegger’s unthought when we disclose this postmodern understanding of being, learning to understand and so encounter being not as a modern domain of objects for subjects to master and control, nor as a late-modern “standing reserve” of resources to be efficiently optimized, but instead as that which continues to both inform and exceed our every way of making sense of ourselves and our worlds. By learning to cultivate a phenomenological receptivity to this postmodern understanding of being, we can address the nihilism of our technological understanding of being by responding directly to its ontotheological foundations.
Heidegger, Art, and Postmodernity begins with the words, “What does Heidegger mean by ontotheology—and why should we care?” Here is a greatly simplified answer: If, like Parmenides, we think of all intelligible reality as a sphere, then ontotheology is the attempt to grasp this sphere from the inside-out and the outside-in at the same time. More precisely, ontotheology is Heidegger’s name for the attempt to stabilize the entire intelligible order (or the whole space of meaning) by grasping both the innermost “ontological” core of what-is and its outermost “theological” expression, then linking these innermost and outermost “grounds” together into a single, doubly-foundational, “ontotheological” understanding of the being of what-is. An ontotheology, when it works (by uncovering and disseminating those grounds beneath or beyond which no one else can reach, for a time), establishes the meaning of being that “doubly grounds” an historical age. Such ontotheologies shape and transform Western history’s guiding sense of what being “is” (by telling us what “Isness” itself is), and since everything is, they end up shaping and reshaping our understanding of everything else. (Heidegger’s notorious antipathy to metaphysics thus obscures the pride of place he in fact assigns to ontotheologies in the transformation and stabilization of history itself.)
One of the crucial points to grasp here is that Heidegger’s critique of technology follows directly from his understanding of ontotheology. Indeed, the two are so intimately connected that his critique of technology cannot really be understood apart from his view of ontotheology, a fact even scholars were slow to recognize (reminding us that Heidegger still remains, in many ways, our contemporary). As Heidegger on Ontotheology shows, one of Heidegger’s deepest but most often overlooked insights is that our late-modern, Nietzschean ontotheology generates the nihilistic technologization in whose currents we remain caught. The deepest problem with this “technologization” of reality is the nihilistic understanding of being that underlies and drives it: Nietzsche’s ontotheological understanding of the being of entities as “eternally recurring will-to-power” dissolves being into nothing but “sovereign becoming,” an endless circulation of forces, and in so doing, it denies that things have any inherent nature, any genuine meaning capable of resisting this slide into nihilism (any qualitative worth, for example, that cannot be quantified and represented in terms of mere “values,” so that nothing is invaluable—in the full polysemy of that crucial phrase).
Heidegger, Art, and Postmodernity explains Heidegger’s radical philosophical challenge to the deepest presuppositions of modernity and his attempt to articulate a genuinely meaningful post-modern alternative by drawing on key insights from art and poetry, especially insights into the polysemic nature of being and the consequent importance of creative world disclosure (as contrasted with the willful, subjective imposure of “value”). Heidegger’s view is that even great late-modern philosophers like Nietzsche, Marx, and Freud remain trapped within unrecognized modern presuppositions, including the nihilistic view that all meaning is projected onto or infused into an inherently-meaningless world of objects through the subject’s conceptual and material labors (both conscious and unconscious). These unnoticed metaphysical presuppositions undermine their otherwise important attempts to forge paths into a more meaningful future. Drawing on Kierkegaard, Hölderlin, Van Gogh, and others, Heidegger teaches that more genuinely enduring meaning cannot come from the subject imposing its values on the world but, instead, only from a poetic openness to those meanings that precede and exceed our own subjectivity. Such meaningful encounters (or “events”) require us to creatively and responsibly disclose their significance, unfolding their meaning throughout the lives they can thus come to transform, guide, and confer meaning on.
One of the central theses of Heidegger, Art, and Postmodernity is that this crucial difference between imposing and disclosing—or between technological imposition and poetic disclosure—is the crucial distinction between the meaninglessness of our technological understanding of being and those meaning-full encounters that a postmodern understanding of ourselves and our worlds helps give rise to, nurture, and encourage. Genuinely-enduring, meaningful events, the kinds around which we can build fulfilling lives, do not arise from imposing our wills on the world (as in the modern view which, as Kierkegaard already taught, turns us into sovereign rulers over a land of nothing, where all meaning is fragile because it comes from us, from the groundless voluntarism of our own wills, and so can be rescinded as easily as it was projected). Genuinely enduring meanings emerge, instead, from learning to creatively disclose those often inchoate glimmers of meaning that exist at least partly independently of our preexisting projects and designs, so that disclosing their significance creatively and responsibly helps teach us to partake in and serve something larger than ourselves (with all the risk and reward that inevitably entails).
In short, a truly postmodern understanding requires us to recognize that, when approached with a poetic openness and respect, things push back against us, resisting our wills and so making subtle but undeniable claims on us. We need to acknowledge and respond creatively and responsibly to these claims if we do not want to deny the source of genuine meaning in the world. For, only those meanings which are at least partly independent of us and so not entirely within our control—meanings not simply up to us human beings to bestow and rescind at will—can provide us with the kind of touchstones around which we can build enduringly meaningful lives (and loves). Heidegger sometimes describes our encounter with these more genuinely meaning-full meanings as an “event of enowning” (Ereignis), thus designating those profoundly significant events in which we come into our own as world-disclosers by creatively enabling things to come into their own, just as Michelangelo came into his own as a sculptor by creatively responding to the veins and fissures in a particularly rich piece of marble so as to bring forth his “David,” just as a woodworker comes into her own as a woodworker by learning to respond to the subtle weight and grain of each individual piece of wood, and just as teachers come into their own as teachers by learning to recognize, cultivate, and so help develop the particular talents and capacities of individual students.
This poetic openness to that which pushes back against our preexisting plans and designs is what Heidegger, Art, and Postmodernity calls a sensitivity to the texture of the text, that subtle but dynamic meaning-fullness which is “all around us” phenomenologically, as Heidegger writes. The current of technologization tends to sweep right past the texture of the texts all around us, and can even threaten to render us oblivious to it (most plausibly, if our resurgent efforts at genetic enhancement inadvertently eliminate our defining capacity for creative world-disclosure). When we learn to recognize the ontohistorical current feeding technology, however, we can also learn to resist its nihilistic erosion of all inherent meaning, and so begin to develop a “free relation to technology” in which it becomes possible to thoughtfully use technologies against nihilistic technologization, as we do (for example) when we use a camera, microscope, telescope, or even glasses creatively to help bring out something there in the world that we might not otherwise have seen, a synthesizer or computer to make a new kind of music that helps us develop our sense of what genuinely matters to us, or when we use a word processor or even the Internet to help bring out our sense of what is really there in the issues and texts that most concern us.
In my view, the role human beings play in the disclosure and transformation of our basic sense of reality thus occupies a middle ground between the poles of voluntaristic constructivism and quietistic fatalism. Heidegger is primarily concerned to combat the former, “subjectivistic” error—that is, the error of thinking that human subjects are the sole source of meaning and so can reshape our understanding of being at will—because that is the dangerous error toward which our modern and late-modern ways of understanding being incline us. But this has led to some widespread misunderstandings of his view. Perhaps most importantly, Heidegger’s oft-quoted line from his famous Der Spiegel interview, “Only another God can save us,” is probably the most widely misunderstood sentence in his entire work. By another “God,” Heidegger does not mean some otherworldly creator or transcendent agent but, instead, another understanding of being. He means, quite specifically, a post-metaphysical, post-epochal understanding of “the being of entities” in terms of “being as such,” to use his philosophical terms of art. Heidegger himself equates his “last God” with a postmodern understanding of being, for example, when he poses the question “as to whether being will once more be capable of a God, [that is,] as to whether the essence of the truth of being will make a more primordial claim upon the essence of humanity.” Here Heidegger asks whether our current understanding of being is capable of being led beyond itself, of giving rise to other world-disclosive events that would allow human beings to understand the being of entities neither as modern “objects” to be mastered and controlled, nor as late-modern, inherently-meaningless “resources” standing by for optimization, but instead as things that always mean more than we are capable of expressing conceptually (and so fixing once and for all in an ontotheology). 
That the “God” needed to “save us” is a postmodern understanding of being is one of the central theses of Heidegger, Art, and Postmodernity.
Rather than despairing of the possibility of such an inherently pluralistic, postmodern understanding of being ever arriving, moreover, Heidegger thought it was already here, embodied in the “futural” artwork of artists like Hölderlin and Van Gogh, simply needing to be cultivated and disseminated in myriad forms (clearly not limited to the domain of art, pace Badiou) in order to “save” the ontologically abundant “earth” (with its apparently inexhaustible plurality of inchoately meaningful possibilities) from the devastation of technological obliviousness. When Heidegger stresses that thinking is at best “preparatory” (vorbereitend), what he means is that great thinkers and poets “go ahead and make ready” (im voraus bereiten), that is, that they are ambassadors, emissaries, or envoys of the future, first postmodern arrivals who, like Van Gogh, disseminate and so prepare for this postmodern future with “the unobtrusive sowing of sowers” (as Heidegger nicely put it, drawing a deep and illuminating parallel between his teaching and Van Gogh’s painting which I seek to explain in Heidegger, Art, and Postmodernity). As this suggests, new historical ages are not simply dispensed by some super-human agent to a passively awaiting humanity. Rather, actively vigilant artists and particularly receptive thinkers pick up on broader tendencies happening partly independently of their own wills (in the world around us or at the margins of our cultures, for example), then make these insights central through their artworks and philosophies.
For good and for ill, then, Heidegger is a profoundly hopeful philosopher, not some teacher of despair and resignation, as he is often polemically portrayed. As I began by saying, he is not an anti-modern who exhausts himself critiquing modernity but rather the original postmodern philosopher, a thinker who dedicates himself to disseminating a postmodern understanding of being in which he places his hope for the future. I continue to find myself inspired by Heidegger’s poetic thinking of a postmodern understanding of being (as well as by many of those Heidegger helped inspire in turn), especially in light of his provocative proclamations that the philosophical lessons of art and poetry’s distinctive ways of disclosing the world were needed to help us find ways through and beyond the growing noontime darkness of technological nihilism. (Perhaps such concerns partly reflect middle-age and its attendant anxieties, but if so, then I have been partly middle-aged my whole life, and suspect that many of us feel similarly, as if we were all living in a time in the middle or between ages, a historical period of radical change and transition—or at least we, some of us, still hope.)
 That hipster conservativism sounds rather paradoxical does not make it false—just falsely totalizing in this case: What is false is imagining that only latecomers can truly understand something. As anyone who has ever been there at the beginning of something important will probably recognize, first-comers often understand something too, and can do so at least as deeply (if not often as cogently) as those who come later. Rather than define “understand” more cognitively than Heidegger himself did, let us just admit that we need both: Early arrivals help create and draw our attention to potentially important and inspiring phenomena; late-comers remain crucial to preserving what remains inspiring beneath traditions whose day in the sun might otherwise have come and gone. That we need both “creators” and “preservers” is something Heidegger himself recognized by the time he wrote the magnum opus of his middle period, “The Origin of the Work of Art” (1934-35), which goes so far as to posit creators and preservers as the two equally-important sides of the work of art. For a detailed discussion of the creative role of such interpretive “preservers,” see Thomson, Heidegger, Art, and Postmodernity (Cambridge University Press, 2011), ch. 3. (An earlier version is available on-line as Thomson, “Heidegger’s Aesthetics,” Stanford Encyclopedia of Philosophy, <http://plato.stanford.edu/entries/heidegger-aesthetics/>.)
 See Thomson, Heidegger on Ontotheology: Technology and the Politics of Education (Cambridge University Press, 2005), esp. chs. 3-4.
I discuss Heidegger’s provocative views on polytheism, atheism, and on the phenomenological relation between humanity and “the divine” in Heidegger, Art, and Postmodernity (esp. chs. 1 and 6); and in Thomson, “The Nothing (das Nichts),” in Mark Wrathall, ed., The Heidegger Lexicon (Cambridge University Press, forthcoming). For more on the perhaps surprising appeal of Heidegger’s romantic polytheism, see also Hubert Dreyfus and Sean Kelly, All Things Shining (New York: Free Press, 2011).
 On this point, see Thomson, “Heidegger’s Nazism in the Light of his early Black Notebooks,” in Alfred Denker and Holger Zaborowski, eds., Zur Hermeneutik der ‘Schwarzen Hefte’: Heidegger Jahrbuch 10 (Freiburg: Karl Alber, forthcoming).
 This hermeneutics of philosophical “fulfillment” (Vollendung)—or what Heidegger, Art, and Postmodernity also calls the strategy of hypertrophic deconstruction—is premised on the insight that, where the deepest historical trends are concerned, the only way out is through.
 See Thomson, “Heideggerian Phenomenology and the Postmetaphysical Politics of Ontological Pluralism,” in S. West Gurley and Geoffrey Pfeifer, eds, Phenomenology and the Political (Rowman & Littlefield, forthcoming October 2016).
 See Thomson, “In the Future Philosophy will be neither Continental nor Analytic but Synthetic: Toward a Promiscuous Miscegenation of (All) Philosophical Traditions and Styles,” Southern Journal of Philosophy 50:2 (2012), pp. 191-205.
 On this still metaphysical mistake, see Heidegger, Art, and Postmodernity, ch. 3.
 Such limits inevitably follow from our universal condition of existential “finitude,” and include personal limitations of time and perspective to which we can remain insensitive, whether out of ignorance or pride. Obviously, Heidegger’s personal limitations have become increasingly glaring in the four decades since his death, with the ongoing publication of his thinking. However much distance we might like to put between Heidegger’s perspective and our own, the fact that all of our perspectives remain limited (in ways more and less visible to us) may help to motivate the open-minded, hermeneutic humility that we still need (and need all the more) in order to approach Heidegger’s work in ways that remain charitable as well as critical, so that we can both learn something and go further ourselves.
 For more on the way the great metaphysical ontotheologies temporarily dam the flow of historicity by grasping the innermost core of reality and its outermost expression and linking these dual perspectives together into a single “ontotheological” account, see Heidegger on Ontotheology, ch. 1.
 Heidegger on Ontotheology thus seeks to develop and defend the core of Heidegger’s “reductive yet revealing” and so rightly controversial reading of Nietzsche as the unrecognized ontotheologist of our late-modern age of technologization. For a summation of that view, see ch. 1 of Heidegger, Art, and Postmodernity. On the crucial polysemy of the nothing, see Heidegger, Art, and Postmodernity, ch. 3.
 On the importance of this difference between imposing and disclosing, see also Thomson, “Rethinking Education after Heidegger: Teaching Learning as Ontological Response-Ability,” Educational Philosophy and Theory, 48:8 (2016), pp. 846-861.
 “The texture of the text” is also the seditious way in which Heidegger, Art, and Postmodernity tries to re-Heideggerize Derrida’s famous, anti-Heideggerian aperçu: “There is nothing outside the text.”
 See Heidegger, Off the Beaten Track, Julian Young and Kenneth Haynes, eds. and trans. (Cambridge: Cambridge University Press, 2002), p. 85 (Holzwege, Gesamtausgabe vol. 5 [Frankfurt: Klostermann, 1977], p. 112). Here the “truth of being” is shorthand for the way an understanding of “the being of entities” (that is, a metaphysical understanding of “the truth concerning entities as such and as a whole” or, in a word, an ontotheology) works to anchor and shape the unfolding of an historical constellation of intelligibility. Its “essence” is that apparently inexhaustible source of historical intelligibility the later Heidegger calls “being as such,” an actively a-lêtheiac (that is, ontologically “dis-closive”) Ur-phenomenon metaphysics eclipses with its ontotheological fixation on finally determining “the being of entities.” (That “being as such” lends itself to a series of different historical understandings of “the being of entities” rightly suggests that it exceeds every ontotheological understanding of the being of entities.) The “essence of humanity” refers to Dasein’s definitive world-disclosive ability to give being as such a place to “be” (i.e., to happen or take place); it refers, that is, to the poietic and maieutic activities by which human beings creatively disclose the inconspicuous and inchoate hints offered us by “the earth” and so help bring genuine meanings into the light of the world.
Is nihilism the most important philosophical problem of our present? Philosopher Raymond Geuss talks to four by three about our misconception of nihilism, outlining three ways of questioning it, while asking whether nihilism is a philosophical or a historical problem and whether we are truly nihilists or might simply be confused.
Raymond Geuss is Emeritus Professor in the Faculty of Philosophy at the University of Cambridge and works in the general areas of political philosophy and the history of Continental Philosophy. His most recent publications include, but are not limited to, Politics and the Imagination (2010) and A World Without Why (2014).
Are you a nihilist and should you be one? Philosopher Eugene Thacker turns to Friedrich Nietzsche to break down nihilism into fragments of insights, questions, possible contradictions and thought-provoking ruminations, while asking whether nihilism can fulfill itself or always ends up undermining itself.
1. What follows came out of an event held at The New School in the spring of 2015. It was an event on nihilism (strange as that sounds). I admit I had sort of gotten roped into doing it. I blame it on the organizers. An espresso, some good conversation, a few laughs, and there I was. Initially they tell me they’re planning an event about nihilism in relation to politics and the Middle East. I tell them I don’t really have anything to say about the Middle East – or for that matter, about politics – and about nihilism, isn’t the best policy to say nothing? But they say I won’t have to prepare anything, I can just show up, and it’s conveniently after I teach, just a block away, and there’s dinner afterwards…How can I say no?
How can I say no…
2. Though Nietzsche’s late notebooks contain many insightful comments on nihilism, one of my favorite quotes of his comes from his early essay “On Truth and Lies in an Extra-Moral Sense.” I know this essay is, in many ways, over-wrought and over-taught. But I never tire of its opening passage, which reads:
In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the haughtiest and most mendacious minute of “world history” – yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die.
One might invent such a fable and still not have illustrated sufficiently how wretched, how shadowy and flighty, how aimless and arbitrary, the human intellect appears in nature. There have been eternities when it did not exist; and when it is done for again, nothing will have happened. For this intellect has no further mission that would lead beyond human life. It is human, rather, and only its owner and producer gives it such importance, as if the world pivoted around it.
The passage evokes a kind of impersonal awe, a cold rationalism, a null-state. In the late 1940s, the Japanese philosopher Keiji Nishitani would summarize Nietzsche’s fable in different terms. “The anthropomorphic view of the world,” he writes, “according to which the intention or will of someone lies behind events in the external world, has been totally refuted by science. Nietzsche wanted to erase the last vestiges of this anthropomorphism by applying the critique to the inner world as well.”
Both Nietzsche and Nishitani point to the horizon of nihilism – the granularity of the human.
3. At the core of nihilism for Nietzsche is a two-fold movement: that a culture’s highest values devalue themselves, and that there is nothing to replace them. And so an abyss opens up. God is dead, leaving a structural vacuum, an empty throne, an empty tomb, adrift in empty space.
But we should also remember that, when Zarathustra comes down from the mountain to make his proclamation, no one hears him. They think he’s just the opening band. They’re all waiting for the tight-rope walker’s performance, which is, of course, way more interesting. Is nihilism melodrama or is it slapstick? Or something in-between, a tragic-comedy?
4. I’ve been emailing with a colleague about various things, including our shared interest in the concepts of refusal, renunciation, and resignation. I mention I’m finishing a book called Infinite Resignation. He replies that there is surprisingly little on resignation as a philosophical concept. The only thing he finds is a book evocatively titled The Art of Resignation – only to find that it’s a self-help book about how to quit your job.
I laugh, but secretly wonder if I should read it.
5. We do not live – we are lived. What would a philosophy have to be in order to begin from this, rather than arriving at it?
6. “Are you a nihilist?”
“Not as much as I should be.”
7. We do Nietzsche a dis-service if we credit him for the death of God. He just happened to be at the scene of the crime, and found the corpse. Actually, it wasn’t even murder – it was a suicide. But how does God commit suicide?
8. By a process I do not understand, scientists estimate that the planet is capable of sustaining a population of around 1.2 billion – though the current population is upwards of 7 billion. Bleakness on this scale is difficult to believe, even for a nihilist.
9. I find Nietzsche’s notebooks from the 1880s to be a fascinating space of experimentation concerning the problem of nihilism. The upshot of his many notes is that the way beyond nihilism is through nihilism.
But along the way he leaps and falls, skips and stumbles. He is by turns analytical and careless; he uses argumentation and then bad jokes; he poses questions without answers and problems without solutions; and he creates typologies, an entire bestiary of negation: radical nihilism, perfect nihilism, complete or incomplete nihilism, active or passive nihilism, Romantic nihilism, European nihilism, and so on…
Nietzsche seems so acutely aware of the fecundity of nihilism.
10. It’s difficult to be a nihilist all the way – eventually nihilism must, by definition, undermine itself. Or fulfill itself.
11. Around 1885 Nietzsche writes in his notebook: “The opposition is dawning between the world we revere and the world which we live – which we are. It remains for us to abolish either our reverence or ourselves.”
12. If we are indeed living in the “anthropocene,” it would seem sensible to concoct forms of discrimination that are adequate to it. Perhaps we should cast off all forms of racism, sexism, classism, nationalism, and the like, in favor of a new kind of discrimination – that of a species-ism. A disgust of the species, which we ourselves have been clever enough to think. A species-specific loathing that might also be the pinnacle of the species. I spite, therefore I am. But is this still too helpful, put forth with too much good conscience?
Paul Kingsnorth, a former green activist, thinks the environmental movement has gone wrong. He argues for ‘uncivilisation’
The future for humanity and many other life forms is grim. The crisis gathers force. Melting ice caps, rising seas, vanishing topsoil, felled rainforests, dwindling animal and plant species, a human population forever growing and gobbling and using everything up. What’s to be done? Paul Kingsnorth thinks nothing very much. We have to suck it up. He writes in a typical sentence: “This is bigger than anything there has ever been for as long as humans have existed, and we have done it, and now we are going to have to live through it, if we can.”
Hope finds very little room in this enjoyable, sometimes annoying and mystical collection of essays. Kingsnorth despises the word’s false promise; it comforts us with a lie, when the truth is that we have created an “all-consuming global industrial system” which is “effectively unstoppable; it will run on until it runs out”. To imagine otherwise – to believe that our actions can make the future less dire, even ever so slightly – means that we probably belong to the group of “highly politicised people, whose values and self-image are predicated on being activists”.
According to Kingsnorth, such people find it hard to be honest with themselves. He was once one of them.
We might tell ourselves that The People are ignorant of The Facts and that if we enlighten them they will Act. We might believe that the right treaty has yet to be signed, or the right technology yet to be found, or that the problem is not too much growth and science and progress but too little of it. Or we might choose to believe that a Movement is needed to expose the lies being told to The People by the Bad Men in Power who are preventing The People from doing the rising up they will all want to do when they learn The Truth.
He says this is where “the greens are today”. Environmentalism has become “a consolation prize for a gaggle of washed-up Trots”.
As a characterisation of the green movement, this outbreak of adolescent satire seems unfair. To suggest that its followers become activists only because their “values and self-image” depend on it implies that there is no terror in their hearts, no love of the natural world, nothing real other than their need for a hobby. My experience of green politics is minuscule and secondhand compared with the author’s; all I can say is that the environmentalists I know often share his doubts and yet manage to stick with the cause, believing that their actions may not be totally ineffectual, that something is better than nothing. Most of us would tip our hat to that idea, but Kingsnorth is a passionate apostate with an almost Calvinist certainty that most of the human race, if not all of it, is heading for the fire.
These pieces trace some of his personal and political history. He had a middle-class childhood in the outer London suburbs, with a father who was a “compulsive long-distance walker” – he took his son on marches across the English and Welsh hills. In 1992, aged 19, Kingsnorth joined the protests on Twyford Down against the hill’s destruction by the M3. Aged 21, he was in the rainforests of Indonesia. Like many others, he became an environmentalist “because of a strong emotional reaction to wild places, and the world beyond the human” – like them, he wanted “to save nature from people”. But he also wanted to be different and famous. When he first took it up, green activism “seemed rebellious and excitingly outsiderish”; later, he writes with disappointment, it became “almost de rigueur among the British bourgeoisie”.
Disenchantment arrived when he was in his 30s. In a piece published in 2011, after he has written two or three books as well as columns “for the smart newspapers and the clever magazines”, he decides that his new role model is “not Hemingway but Salinger”. He has done the “big book stuff” – the tours, the extracts run big across the centre pages of mass-market papers. There will be no more Newsnight interviews, no more sitting on the sofa with Richard and Judy (“Jerry Springer was sitting next to me. It was … strange”). All he wants is an acre or two, a house, some bean rows, a pasture, a view of the river. In lists of this kind, renunciation can be hard to distinguish from bragging, and self-sufficiency comes packaged with literary romance.
At the root of this disillusion and retreat – he lives now in a dry-lavatory bungalow in Galway – lay what he calls the “single-minded obsession with climate change” that began to grip environmentalism early in the century. “The fear of carbon has trumped all other issues,” he writes. “Everything else has been stripped away.” Some would see this as saving the planet. Kingsnorth thinks the opposite, that we are destroying the wildest parts of it in the name of sustainability, “a curious, plastic word” that means “sustaining human civilisation at the comfort level that the world’s rich people – us – feel is their right, without destroying the ‘natural capital’ or the ‘resource base’ that is needed to do so”. In more concrete terms, it means wind farms, solar panels and undersea turbines, the renewables that will allow us to carry on business as usual.
Kingsnorth notes that environmentalism is now respectable enough to be embraced by the presidents both of the US (pre-Trump) and Anglo-Dutch Shell, and that a lot of awkward questions have been pushed aside by the drive to reduce carbon. The number of humans, for example: sustaining a global population of 10 billion suddenly isn’t a problem, and anyone who suggests otherwise is “giving succour to fascism or racism or gender discrimination”. Instead we make the hills, the deserts and the seas suffer – we’re “industrialising [the] wild places in the name of human desire”.
He writes insightfully about England – presciently, too. “Large-scale immigration is not, as some of its more foaming opponents believe, a conspiracy by metropolitan liberals to destroy English identity,” he says in an essay first published by the Guardian in 2015. “It is a simple commercial calculation. It may cause overcrowding and cultural tension … it is undoubtedly good for growth … if you don’t want the population movement, you don’t get the cheap, easy consumer lifestyle it facilitates. Which will you choose?”
This is Kingsnorth at his plainest and most provocative, but another Kingsnorth is never far away, as romantic in his nationalism as any Victorian storybook when he writes in the same essay: “England is the still pool under the willows where nobody will find you all day, and the only sound is the fish jumping in the dappled light.” This Kingsnorth believes that the human race will eventually die of civilisation, and he wants to create what he calls “Uncivilisation” that will show us a new way to look at human history and endeavour. Stories, he says, are the key.
The book ends with a manifesto: The Eight Principles of Uncivilisation, designed to undermine the myths of progress and human centrality. “Principle 7: we will not lose ourselves in the elaboration of theories or ideologies. Our words will be elemental. We write with dirt under our fingernails.” And so, rather than electric cars and oil in the ground, we are left with a smaller idea of salvation: a little literary movement of the kind that might have gathered around a hand press in a Sussex village c1925, facing the real uncivilisation that has still to come.
•Confessions of a Recovering Environmentalist is published by Faber.
My otherwise-quite-suitable life-partner and I seem to address this topic quite often, still, even after more than twenty-three years together: Why the hell do I like such horrible things? (In case you’re wondering, this oft-visited discussion is rarely instigated by me.) I’m not just talking about music, though of course music is one of the many areas where this ‘Mat likes horrible things’ rule is undeniably true. I’m talking art in general, whether it’s sound or visual art; I’m talking movies, whether it’s disturbing cinema or silly monster movies or films causing severe psychological discomfort; but I’m also talking about actively researching/hunting down and reading about the various assorted true depravities committed by the ever-creative-in-this-department mass of humankind. Horribleness. Miscellaneous vileness. Ugliness of the form and spirit. I seem drawn to it, and always have been, ever since I can remember. And, given the extremity of topic/sound/aesthetic surrounding this article, the odds are strong that you too, Heathen Harvester, are just as drawn to the deplorable as I am. The question I want to investigate here is: why?
Because it’s not all of us that dig this shit. There are a great many people (as frequently brought up as some kind of evidence by my aforementioned otherwise-quite-suitable life-partner) who don’t like ugliness/horror/depravity at all, and in fact spend a good deal of time deliberately avoiding such matters, choosing to spend their finite hours on this planet enjoying things that are, well, enjoyable. Instead of, say, looking up uncensored footage of prison stabbings, they’ll read an article on, I don’t know, propagating kale, or look at pictures of animals with amusing expressions, or, I don’t know, something else. I honestly have no idea. Because I’m too busy watching grainy footage of people shivving each other in the weights yard.
And I’ve always been this way. The earliest memory I have of being drawn to the monstrous was as a very young child, watching Doctor Who (I’ve since rummaged through my old stuff and have found a tiny notepad my mum used to keep, which is full of her painstaking re-drawings of my drawings when I was little, and have found a picture of Doctor Who and his companion Sarah Jane which mum dated sixth of August 1978, meaning I was about three and a half). I seem to recall some green slimy eyeball-type creature shambling up the side of a lighthouse, and I remember loving it soooo much. (I clearly also remember mum telling dad that my love of the bizarre and frightening was ‘just a phase’, which is pretty damn funny in hindsight.) But why did I love it so much? Was it just a love of the impossible, the fantastical? Or was it just some very normal thing that I never grew out of (I mean, all kids love monsters, don’t they?)—in which case, why didn’t I grow out of it?
Some of us seem drawn to ugly art, strange music, and real-life depravity, and some of us don’t. I have an inkling that the two are related (being drawn to ugly strangeness in sound/vision, and being interested in ugly strangeness in real life), but of course nothing is ever actually that simple, and I definitely know people who refuse to watch scary/freaky movies but insist on weird/noisy music at all times, so I’m pretty sure whatever conclusions I come up with will be highly variable in their personal mileage, and the whole lumping-this-all-together thing I’m attempting here may very well be a terrible mistake. But, well, I’m going to attempt it anyway.
So, first stop… monstrousness in fantasy/art/sound/imagination.
LEVEL ONE: THE MONSTROUS AS AESTHETIC
My otherwise-quite-suitable life-partner has a simple rule: no screaming in the lounge room (at least, not with her around). This doesn’t refer to my own screaming (I am a very quiet chap in general, softly-spoken with a tendency to mumble incoherently), but rather the screaming of the vocalists in the musical projects that I choose to listen to: Nocturno Culto, Utarm, Katherine Katz, Mories, Nekrasov, Jay Randall, Passenger of Shit, J R Hayes, etc. (not to mention the even more ‘non-vocalist’-type screams of bands like Abruptum and Stalaggh/Gulaggh). She can tolerate the more tuneful-type screams of Devin Townsend, but that’s about her limit: otherwise, any part of our house that isn’t my dimly lit (and expertly soundproofed) underhouse studio is simply a no-scream zone. Which is fair enough. After all, human screams are one of the sounds we’re almost biologically attuned to dislike, either through empathy or revulsion. So why does so much of the music I like contain so damn much of the stuff? The same goes for immense amounts of atonality, or for overwhelming cut-up chaos (without repetition or pattern or structure): These things are, as a rule, disorienting and/or anxiety-inducing, so why the fuck do I chase it so much? Why does something in me light up when it gets sonically flummoxed, when the same thing drives other (normal) people away? And why are you like that too? What happened to us? Are we damaged?
I suspect this is roughly how my otherwise-quite-suitable life-partner sees it: that I turned out “wrong” somehow, that I’m a bit “broken”. But I also suspect that this view is completely inaccurate. Because I just don’t feel very broken. I feel fine, generally speaking. It’s not like I’m drawn to this chaos or darkness or phantasmagorical pain because it’s only then that I feel at home, or because it’s only horror that makes me feel like someone understands my hellish existence, or that it’s the only way I can experience healing catharsis, or anything like that. It’s not like I need horrible screaming people in my lounge room. It honestly feels like it’s purely a taste thing, an aesthetic that I’m drawn to. I just like horrible screaming people, ugly visions, inappropriate textures, and sordidness of spirit. I just do. But, of course, this is exactly the issue I’m attempting to investigate here: the reasons behind this taste, and the reasons why I’m drawn to this particular aesthetic, given that the whole human experience is typically about avoiding the same (shunning the ugly, moving away from the screaming person, not submerging oneself in grossness, etc.).
To help with writing this essay, I’ve just gone and read up on what draws people to horror movies (as an example of the ‘monstrous in art’), and it turns out there’s a million different theories:
1) There’s the theory that watching a scary/freaky movie makes one’s heart rate, blood pressure, and breathing intensify, which kind of experientially heightens the feelings associated with watching it, so, if you’re having a great time watching, it’ll feel like an ever greater time, relatively speaking, because your system is on such high alert. This theory also lends itself to musical experiences: If all that atonality and screaming and super-speedy beatmongering (or super-loud doom-vibes) cause your biological system to become heightened, and you’re having a great time listening, then it’ll become an even greater time, relatively speaking. This theory also ties into the idea of some people totally digging it, and some people totally not digging it, because this theory also says that if you’re having a bad time watching/listening, the bad experience will be made even more unpleasant by the same heightened biological states. But what this theory doesn’t really help with is why the monstrous thing is enjoyable in the first place. It only really deals with how the heightened experience makes the reaction stronger than other forms of media.
Still from Taxidermia
2) There’s a theory that it is the heightened excitation itself that we enjoy, in the same way as a base-jumper enjoys leaping off things or a rollercoaster-fan enjoys screaming in abject terror and barfing their guts up (assuming that’s what people enjoy about rollercoasters). We get off on the feeling of it. And, even better than a rollercoaster, watching a scary movie, or listening to a disorienting album is an intrinsically safe way to go about getting this hit of heightened excitation. There may be some merit to this theory; there is definitely some kind of a buzz that I get from these forms of media, and yet I am just as definitely not the kind of person who goes jumping off cliffs (and last time I was on a fairground ride with my daughter I vowed never to fucking do that shit ever again). But at the same time, I don’t feel like it’s the whole story, because there’s many a time I’ll want to listen to some extreme metal or crazy cut-up nonsense and not feel like I’m ‘chasing a buzz’ at all, but rather, just ‘having a nice time’.
3) There’s a theory that it’s the act of triumphing over fear/repulsion itself that we enjoy—that, in essence, we enjoy these terrible things because they are unenjoyable, and being able to show them who’s boss is what gives us the positive feelings. It’s like we’re giving the Grim Reaper the finger, in some sense, like we’re reducing the hideous/terrifying/ghastly/repulsive to mere entertainment, and that is what feels good. It feels like there might be some merit in this theory too, perhaps: It is kinda cool to be able to say, ‘You can’t handle Whourkr or Utarm? I love those bands.’ But this theory does reduce the entirety of enjoying the Art of the Horrendous to some kind of show-offy bullshit pretence, which it really doesn’t feel like, and makes the experience all about proving yourself to others, which it also doesn’t really feel like. When I listen to full-on strangeness or watch Visitor Q, I tend to do it on my own, without anyone else in mind, and enjoy the experience wholly on my own terms, without anyone else’s validation or respect or values on my mind (and, as mentioned earlier, the enjoyment of such media actually makes my otherwise-quite-suitable life-partner respect me less). So, although there may be an element of this involved, it doesn’t feel like it’s the whole picture. (There is, of course, that thing of proving to yourself that you can handle something scary, but, for most of us who are actually into The Strange, that’s a no-brainer: We already know we can handle it because it’s the kind of thing we regularly seek out and experience, so it’s not so much of an issue. I think there probably is an element of it involved [especially in the escalating scale of the ugliness we may seek out], but it’s definitely not the whole story.)
4) There’s the theory that male-identifying people are drawn to scary movies because they get gender reinforcement by ‘proving themselves’ in the face of fear/repulsion (‘lack of fear in the face of terror’ being a cultural marker for assessing masculinity). One study (a ridiculously small one, with only thirty-six people of very limited diversity) showed that male-identifying people enjoyed a horror movie more when they watched it with a female-identifying person who was scared, and female-identifying people enjoyed the horror movie more when they watched it with a male-identifying person who wasn’t scared. I suspect this is actually rank codswallop, given the male- and female-identifying people I personally know, but it could very well be a factor for a more mainstream population. What would I know? Either way, it doesn’t really answer my particular question, and is much harder to shift across to other art forms—if these effects are remotely true in the first place, does listening to strange/ugly music produce similar effects (i.e., do chicks like listening to hideous noise if there’s a manly man around)? Certainly my otherwise-quite-suitable life-partner has never once found my ever-so-masculine tolerance of unpleasant music/cinema even remotely erogenous. As suggested earlier: rank codswallop.
5) There’s a theory that says we are drawn to horror/strangeness/ugliness because it is outside of our normal realm of experience, and, as such, becomes imbued with the Imaginary Value of the Rare. In the same way that people care more about cheetahs than they do about pigs, or more about diamonds than about bread, the very novelty of the horrendous makes it worth something. Biologically, we are hardwired to look for anomalies in our environment, and curiosity about The Strange is a sensible survival technique. It may very well be that we are drawn to horror movies/weird music/ghastly stories for the very same reasons we rubberneck at a car crash. A normal person’s morbid fascination and my unending hunt for intriguing new sounds are basically rooted in the same biological thing.
Still from Martyrs
Now, when I saw this theory, it made a small ‘ching’ noise in my mental theatre, like a little gold bell struck once with a tiny hammer, because this is actually something I am consciously aware of in my search for interesting music/art/cinema. I love nothing more than hearing some piece of music and thinking, ‘Fuck, I have never heard that before’. When I make music, it’s always with the intention of adding something to the world that doesn’t already exist. When I review an album, I’m always asking myself, ‘Is this just a pile of self-conscious cookie-cutter swill, or is this actually something worthwhile?’ So, seeing this concept of novelty applied to horror movies was actually a bit of an eye-opener. I’d never thought of it that way before. My interest in the dark/ugly/strange side of media is all linked by a conceptual interest in the far borders of human experience—in experiencing the very fringes of the normal/socially permissible. I don’t want to jump off a cliff, but I’m deeply drawn to music/visuals/emotions that do (metaphorically speaking). It’s not actually an attraction to the repulsive, it’s an attraction to the strange, and, by its very nature, the strange includes all those things that don’t fit into the normal. And, since the normal spends so much time appreciating/collecting beauty and pleasantry and comfort, the strange ends up including the ugly and unpleasant and discomforting! I’m not broken after all! I just like weirdness, which happens to include ugliness and horror! It may be that the part of me that lit up when I first heard Alvin and the Chipmunks is the exact same part of me that got a buzz out of Martyrs.
It does all make sense, that all this interest in The Repulsive stems from a blanket interest in The Strange. Most of the other people I know who share this obsession with the macabre/ugly have similar interests in Surrealism, the Occult, dreams, etc. Being raised in a slick corporate world of ego-driven fitness, photoshopped beauty, and community as PR, it’s no surprise that some of us were drawn to the things we weren’t meant to see, and sided with ugliness instead. Like the underarm hair on a fashion model, there are many things that are true and real and natural that our society attempts to erase in the name of capitalist fear-mongering and mind control, and it is no surprise that some of us opted for the forbidden (sometimes for no other reason than it was forbidden in the first place).
Blood Dumpling Envy by Chris Mars
Still with me? Great! This paragraph or so of ‘rubbernecking at car crashes’ seems the perfect segue to take us to the next, quite a bit more disturbing, level of this journey of the horrendous: our interest in true horror (because it’s not just fantasy stuff we’re into). The kind of person I’m talking about here (okay, so basically me at this stage, but I’m hoping there are enough of you out there to justify the effort involved in writing and publishing this essay), this kind of person doesn’t just watch Taxidermia and listen to Gnaw Their Tongues and enjoy the painted works of Chris Mars (and bonus points to any of you who ticked off all three boxes there). It’s not just in the phantasmagorical realm that we’re drawn to ghastliness, but in the real. The kind of person I’m talking about also reads true crime stories (the more aberrant the better) and searches out photos of things made of human skin. This kind of person finds themselves late at night perusing the sickening online transcripts of the instructional cassette tape David Parker Ray (AKA the Toybox Killer) recorded for his bound and gagged kidnapping victims to listen to as they awoke on his torture table. Because (I think) part of this interest in the great horror is not merely titillation or car-crash rubbernecking, but in unlocking something about what it means to be human—where the lines of experience are drawn, and what’s at the very edges of that terrain. So, level two: Hold on tight.
LEVEL TWO: THE MONSTROUS AS REALITY
Now, before we get to this level, let’s make it clear: Horror is still horror to me. It’s not fun. The ugly is still ugly. It’s not like I’m here going, ‘It’s so cool when people get hurt or have bad times’. It’s not like that at all. It’s something like eating a really hot chili: It still hurts, lots, but there is some kind of intensity to the pain itself that can be enjoyed, while the burning is still really not enjoyable at all. You can enjoy the intensity itself while still registering the pain as painful. There’s an excitement to the extremity of the badness while still fully recognising the badness is bad. Like the car crash we drive past, craning for corpses: We know those corpses are real people, like everyone we love, and that those corpses represent a whole world of sadness and pain for other very real people, but at the same time, it’d be kinda cool to cop an eyeful.
So, drawing on all the theories above, do they still apply when the horror is not some kind of aesthetic choice, but a real-life tragedy? Is it okay to get a buzz out of genuine misfortune? Is it okay to be interested in the very darkest parts of the human organism? Hasn’t it crossed some line now into sickness and depravity? I argue it hasn’t, as long as we keep that previous point in mind: that bad shit is actually really fucking bad. My interest in the true horrors of the world is actually miles away from ‘fun’. It has elements of ‘attraction to novelty’ about it, it has elements of ‘triumphing over fear’, but it is never, ever ‘having a cool time’. It’s definitely an interest in the aberrant while being fucking endlessly gratitudinously thankful that it is an aberration and not the norm. It’s a much more serious business than listening to some wacky music or watching a bunch of actors pretend to be scared: This is intrinsically linked to that stuff about experiencing the very borders of human experience and knowing what’s really going on. It’s pretty fucked, but I feel better knowing just how fucked it actually is.
Collected Atrocities 2005-2008 by Gnaw Their Tongues
And sometimes it really does leave me scarred—sometimes permanently so. That late night when I discovered myself reading what David Parker Ray had to say to his victims, I felt physically ill. I was shaking with the horror of it all—that this shit was fucking real, this actually happened to people as flesh-and-blood as I am, as my daughter is. I actually felt like I was having a panic attack. It was not fun. And yet I read it to the end and went hunting for more information, pictures, and testimonies in some kind of horrified fact-hunting fugue.
I had a similar reaction when reading about one researcher’s infiltration of the child pornography community on the Deep Web. What I read there fucking completely freaked me out for a long time (families raising kids specifically for ‘sharing’; the schism between the anti-violence and pro-violence factions; the mind-boggling scale of it all). But that didn’t stop me poking around the dark corners of reality, because, well, just because something is mind-bogglingly horrible doesn’t mean I should put my fingers in my ears and go ‘la la la’ in the hopes that it will go away. It won’t.
When it comes to fictional depravity, I think the simple notions of ‘novelty’ and ‘triumph over horror’ might come into play, but when it comes to this far-scarier, far-more-awful real life horror, I think another element comes to the fore, namely knowing what’s really going on. I like to think it’s the attraction of knowledge, pure unrefined warty-balls-and-all knowledge itself, that draws me in. (But of course, I’m not scouring astrophysicist sites for knowledge; I’m not trawling marine biology sites for knowledge; it’s simply not the case that it’s ‘just knowledge’ that interests me. It’s very definitely ‘knowledge about things that are horrible’ that attracts me. So, what is it about that knowledge regarding specifically horrendous, fucking ghastly shit that interests me? Is it the ‘triumphing over fear’ stuff investigated above? Is it the ‘fringes of experience’ stuff?)
I think, in the end, it’s some kind of a desperate attempt to understand what we’re capable of—what I, as a human being, must be capable of. When I talk about an interest in exploring ‘the fringes of human experience’, I wonder if, deep down, it’s actually about exploring what I could be capable of—what you could be capable of. It’s about what any of us could be capable of. Because we’re all the same species, exactly the same species, as David Parker Ray or Jeffrey Dahmer or Elizabeth Bathory. Anything they could do (I’m not talking about feats of strength or remarkable agility here), I could do, or you could do. And yet, somehow, through some amazing conjunction of circumstances, we don’t do these terrible, fucked up things. And that feels great.
When we know just how horrible things can be, it gives us two things:
1) We are armed with the shining scimitar of actual truth, and
2) We are filled with the glowing light of gratitude that whatever foul fucking piece of disaster we’ve just finished consuming is not, in fact, happening to us right now.
And truth and gratitude, I think, may be worth more than a little horror.
SOME KIND OF GLIB POINT-PROVING SUMMARY
In closing, what have I learned? I think the most important thing here is that an interest in the strange is not necessarily a problem or some kind of symptom of a broken person, or something that we should be concerned about in our young ones, or anything like that. An interest in the strange can definitely bring people into contact with horrible, horrible things and can definitely make the soundtrack of your lounge room less comfortable for your significant others, but it can also bring a lot of truth into your lives. Unpleasant, awful, trauma-inspiring truth, but truth nonetheless. As a vegan-type person, I’ve definitely seen a lot more trauma-inspiring footage than most mainstreamer corpse-eating-type people, but I can’t help but feel that if I have to choose between comfortable illusion and uncomfortable truth, I’ll always end up choosing to know the ugly facts. It’s a bit like that.
In the end, I’m not actually saying, ‘I listen to weird music, which is somehow loosely tied into valuing truth more than people who listen to mainstream music, so I’m a better person than you’. I’m not actually saying, ‘People who only listen to carefully sanitised, executively driven, corporately produced music are somehow trapped in an inauthentic world of capitalist product-driven illusion, and I’m not, so nyer’. I’m not really saying, ‘Weirdness is better, straight people suck massive dogballs’. Or am I?
Maybe, deep down, I am saying that. And maybe this is really just me petulantly getting back at everyone who ever called me a weirdo. How can I possibly tell? Funny how the subconscious works.
No one really knows how the most advanced algorithms do what they do. That could be a problem.
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.
Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”
At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.
The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.
You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Then there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
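The forward pass and back-propagation step described above can be sketched in miniature. The following toy is an illustration of the mechanism, not anything from the systems the article discusses: two inputs feed two sigmoid hidden neurons, which feed one sigmoid output, and repeated gradient steps shrink the error on a single training example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny network: 2 inputs -> 2 hidden sigmoid neurons -> 1 sigmoid output.
# This mirrors the article's description: signals flow layer to layer,
# and back-propagation nudges every weight toward a desired output.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    out = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
    return hidden, out

def train_step(x, target, lr=0.5):
    hidden, out = forward(x)
    # Gradient of squared error, pushed backward through each neuron.
    d_out = (out - target) * out * (1 - out)
    for j in range(2):
        d_hidden = d_out * w_out[j] * hidden[j] * (1 - hidden[j])
        w_out[j] -= lr * d_out * hidden[j]
        for i in range(2):
            w_hidden[j][i] -= lr * d_hidden * x[i]
    return (out - target) ** 2

# Learn a single input/target pair; the loss should shrink steadily.
losses = [train_step([1.0, 0.0], 1.0) for _ in range(200)]
```

Even at this scale, the trained weights are just six numbers with no self-evident meaning; scaled up to millions of weights across hundreds of layers, that opacity is the "black box" the article describes.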
The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.
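The "running the algorithm in reverse" idea can be illustrated with a toy sketch (this is not Google's Deep Dream code, just the core move under simplifying assumptions): take a single detector neuron with a fixed weight template, start from random noise, and repeatedly nudge the input in the direction that increases the neuron's activation. The input that emerges resembles the very feature the neuron responds to.

```python
import random

random.seed(1)

# A single "detector" neuron with a fixed, hypothetical weight template:
# it responds most strongly to inputs matching this alternating pattern.
template = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]

def activation(x):
    return sum(w * xi for w, xi in zip(template, x))

# Start from random noise and repeatedly nudge the input toward higher
# activation -- the recognition algorithm "run in reverse".
x = [random.uniform(-0.1, 0.1) for _ in template]
for _ in range(50):
    # For a linear unit, d(activation)/dx_i is just template[i].
    x = [xi + 0.05 * w for xi, w in zip(x, template)]
    # Keep the input bounded, as image pixel values would be.
    x = [max(-1.0, min(1.0, xi)) for xi in x]
```

After optimization, the input matches the template's sign pattern exactly: the feature the neuron "uses" has been made visible, which is all Deep Dream's hallucinatory birds and pagodas are, at vastly greater scale.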
Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”
After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.
David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.
This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
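The perturbation idea behind this kind of explanation can be sketched in a few lines. The classifier and keywords below are invented for illustration and are not the Washington team's system: a "black box" scores a message, and the explanation ranks each word by how much the score drops when that word is removed.

```python
# A toy "black box": scores a message by suspicious-keyword hits.
# (Purely illustrative -- not the Washington team's classifier.)
SUSPICIOUS = {"attack": 3.0, "transfer": 1.0, "meeting": 0.5}

def black_box_score(words):
    return sum(SUSPICIOUS.get(w, 0.0) for w in words)

def explain(words):
    """Rank each word by how much the score drops when it is removed --
    the perturbation idea behind highlighting key words in a message."""
    base = black_box_score(words)
    influence = {}
    for i, w in enumerate(words):
        perturbed = words[:i] + words[i + 1:]
        influence[w] = base - black_box_score(perturbed)
    return sorted(influence.items(), key=lambda kv: -kv[1])

message = "plan the attack after the transfer at the meeting".split()
ranked = explain(message)
```

The same probe-and-compare move works for images by masking regions instead of deleting words; the explanation is necessarily a simplification of the model's full reasoning, which is exactly the drawback the next paragraph raises.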
One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”
It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.
Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”
If that’s so, then at some stage we may have to simply trust AI’s judgment or do without it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.
To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.
He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
So, you’ve decided to encrypt your communications. Great! But which tools are the best? There are several options available, and your comrade’s favorite may not be the best for you. Each option has pros and cons, some of which may be deal breakers—or selling points!—for you or your intended recipient. How, then, do you decide which tools and services will make sure your secrets stay between you and the person you’re sharing them with, at least while they’re in transit?
Keep in mind that you don’t necessarily need the same tool for every situation; you can choose the right one for each circumstance. There are many variables that could affect what constitutes the “correct” tool for each situation, and this guide can’t possibly cover all of them. But knowing a little more about what options are available, and how they work, will help you make better-informed decisions.
Signal
Pros: Signal is free, open source, and easy to use, with a desktop app, password protection for Android, and secure group messages. It’s also maintained by a politically conscious nonprofit organization and offers the original implementation of an encryption protocol used by several other tools,1 ephemeral (disappearing) messages, control over notification content, and sent/read receipts. It can also encrypt calls and offers a call-and-response two-word authentication phrase so you can verify your call isn’t being tampered with.
Cons: Signal offers no password protection for iPhone, and being maintained by a small team means fixes are sometimes on a slow timeline. Your Signal user ID is your phone number, you may have to talk your friends into using the app, and it sometimes suffers from spotty message delivery.
Signal certainly has its problems, but using it won’t make you LESS secure. It’s worth noting that sometimes Signal messages never reach their endpoint. This glitch has become increasingly rare, but Signal may still not be the best tool for interpersonal relationship communications when emotions are heightened!2 One of Signal’s primary problems is failure to recognize when a message’s recipient is no longer using Signal. This can result in misunderstandings ranging from hilarious to relationship-ending. Additionally, Signal for Desktop is a Chrome plugin; for some, this is a selling point, for others, a deal breaker. Signal for Mac doesn’t offer encryption at rest,3 which means that unless you’ve turned it on as a default for your computer, your saved data isn’t encrypted. It’s also important to know that while Signal does offer self-destructing messages, the timer is shared, meaning that your contact can shut off the timer entirely and the messages YOU send will cease to disappear.
Wickr
Pros: Wickr offers free, ephemeral messaging that is password protected. Your user ID is not dependent on your phone number or other personally identifying info. Wickr is mostly reliable and easy to use—it just works.
Cons: Wickr is not open source, and the company’s profit model (motive) is unclear. There’s also no way to turn off disappearing messages.
Wickr is sometimes called “Snapchat for adults.” It’s an ephemeral messaging app which claims to encrypt your photos and messages from endpoint to endpoint, and it stores everything behind a password. It probably does exactly what it says it does, and it is regularly audited, but Wickr’s primary selling point is that your user login is independent of your cell phone number. You can log in from any device, including a disposable phone, and still have access to your Wickr contacts, making communication fairly easy. The primary concern with using Wickr is that it’s a free app and we don’t really know what those who maintain it gain from providing it; it should absolutely be used with that in mind. It is also worth keeping in mind that Wickr is suboptimal for communications you actually need to keep, as there is no option to turn off ephemeral messaging, and the timer only goes up to six days.
Threema
Pros: Threema is PIN-protected, offers decent usability, allows file transfers, and your user ID is not tied to your phone number.
Cons: Threema isn’t free, isn’t open source, doesn’t allow ephemeral messaging, and ONLY allows a 4-digit PIN.
Threema’s primary selling point is that it’s used by some knowledgeable people. Like Wickr, Threema is not open source but is regularly audited, and likely does exactly what it promises to do. Also like Wickr, the fact that your user ID is not tied to your phone number is a massive privacy benefit. If lack of ephemerality isn’t a problem for you (or if Wickr’s ephemerality IS a problem for you), Threema pretty much just works. It’s not free, but at $2.99 for download, it’s not exactly prohibitively expensive for most users. With a little effort, Threema also makes it possible for Android users to pay for their app “anonymously” (using either Bitcoin or Visa gift cards) and directly download it, rather than forcing people to go through the Google Play Store.
WhatsApp
Pros: Everyone uses it, it uses Signal’s encryption protocol, it’s super straightforward to use, it has a desktop app, and it also encrypts calls.
Cons: Owned by Facebook, WhatsApp is not open source, has no password protection and no ephemeral messaging option, is a bit of a forensic nightmare, and its key change notifications are opt-in rather than default.
The primary use case for WhatsApp is to keep the content of your communications with your cousin who doesn’t care about security out of the NSA’s dragnet. The encryption WhatsApp uses is good, but it’s otherwise a pretty unremarkable app with regard to security features. It’s extremely easy to use, is widely used by people who don’t even care about privacy, and it actually provides a little cover due to that fact.
The biggest problem with WhatsApp appears to be that it doesn’t necessarily delete data, but rather deletes only the record of that data, making forensic recovery of your conversations possible if your device is taken from you. That said, as long as you remain in control of your device, WhatsApp can be an excellent way to keep your communications private while not using obvious “security tools.”
Finally, while rumors of a “WhatsApp backdoor” have been greatly exaggerated, if WhatsApp DOES seem like the correct option for you, it is definitely a best practice to enable the feature which notifies you when a contact’s key has changed.
Facebook Secret Messages
Pros: This app is widely used, relies on Signal’s encryption protocol, offers ephemeral messaging, and is mostly easy to use.
Cons: You need to have a Facebook account to use it, it has no desktop availability, it’s kind of hard to figure out how to start a conversation, there’s no password protection, and your username is your “Real Name” as defined by Facebook standards.
Facebook finally rolled out “Secret Messages” for the Facebook Messenger app. While the Secret Messages are actually pretty easy to use once you’ve gotten them started, starting a Secret Message can be a pain in the ass. The process is not terribly intuitive, and people may forget to do it entirely as it’s not Facebook Messenger’s default status. Like WhatsApp, there’s no password protection option, but Facebook Secret Messages does offer the option for ephemerality. Facebook Secret Messages also shares the whole “not really a security tool” thing with WhatsApp, meaning that it’s fairly innocuous and can fly under the radar if you’re living somewhere people are being targeted for using secure communication tools.
There are certainly other tools out there in addition to those discussed above, and use of nearly any encryption is preferable to sending plaintext messages. The most important things you can do are choose a solution (or series of solutions) which works well for you and your contacts, and employ good security practices in addition to using encrypted communications.
There is no one correct way to do security. Even flawed security is better than none at all, so long as you have a working understanding of what those flaws are and how they can hurt you.
A burner phone is a single-use phone, unattached to your identity, which can theoretically be used to communicate anonymously in situations where communications may be monitored. Whether or not using a burner phone is itself a “best practice” is up for debate, but if you’ve made the choice to use one, there are several things you should keep in mind.
Burner phones are not the same as disposable phones.
A burner phone is, as mentioned above, a single-use phone procured specifically for anonymous communications. It is considered a means of clandestine communication, and its efficacy is predicated on having flawless security practices. A disposable phone is one you purchase and use normally with the understanding that it may be lost or broken.
Burner phones should only ever talk to other burner phones.
Using a burner phone to talk to someone’s everyday phone leaves a trail between you and your contact. For the safety of everyone within your communication circle, burner phones should only be used to contact other burner phones, so your relationships will not compromise your security. There are a number of ways to arrange this, but the best is probably to memorize your own number and share it in person with whoever you’re hoping to communicate with. Agree in advance on an innocuous text they will send you, so that when you power your phone on you can identify them based on the message they’ve sent and nothing else. In situations where you are meeting people in a large crowd, it is probably OK to complete this process with your phone turned on, as well. In either case, it is unnecessary to reply to the initiation message unless you have important information to impart. Remember too that you should keep your contacts and your communications as sparse as possible, in order to minimize potential risks to your security.
Never turn your burner on at home.
Since cell phones both log and transmit location data, you should never turn on a burner phone somewhere you can be linked to. This obviously covers your home, but should also extend to your place of work, your school, your gym, and anywhere else you frequently visit.
Never turn your burner on in proximity to your main phone.
As explained above, phones are basically tracking devices with additional cool functions and features. Because of this, you should never turn on a burner in proximity to your “real” phone. Having a data trail placing your ostensibly anonymous burner in the same place at the same time as your personally-identifying phone is an excellent way to get identified. This also means that unless you’re in a large crowd, you shouldn’t power your burner phone on in proximity to your contacts’ powered-up burners.
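The co-location risk above can be sketched in a few lines: an analyst with access to tower logs only needs to count how often two devices appear at the same tower in the same time window. The log format, tower names, and threshold interpretation here are invented for illustration.

```python
# Sketch of co-location analysis. The (hour, tower_id) log format and all
# data are invented for illustration.

def colocation_score(log_a, log_b):
    """Fraction of device A's sightings that coincide with device B's."""
    a, b = set(log_a), set(log_b)
    return len(a & b) / len(a) if a else 0.0

main_phone = [(9, "tower1"), (10, "tower1"), (18, "tower7"), (22, "tower1")]
burner = [(10, "tower1"), (18, "tower7")]

print(colocation_score(burner, main_phone))  # → 1.0
```

A score of 1.0 means every sighting of the “anonymous” burner coincides with a sighting of the identified phone—precisely the data trail described above.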
Never use real names.
Given that the purpose of using a burner phone is to preserve your anonymity and the anonymity of the people around you, identifying yourself or your contacts by name undermines that goal. Don’t use anyone’s legal name when communicating via burner, and don’t use pseudonyms that you have used elsewhere either. If you must use identifiers, they should be unique, established in advance, and not reused.
Consider using an innocuous passphrase to communicate, rather than using names at all. Think “hey, do you want to get brunch Tuesday?” rather than “hey, this is Secret Squirrel.” This also allows for call-and-response as authentication. For example, you’ll know the contact you’re intending to reach is the correct contact if they respond to your brunch invitation with, “sure, let me check my calendar and get back to you.” Additionally, this authentication practice allows for the use of a duress code, “I can’t make it to brunch, I’ve got a yoga class conflict,” which can be used if the person you’re trying to coordinate with has run into trouble.
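The brunch exchange above amounts to a tiny challenge-response scheme, which might be sketched like this. The phrases are the ones from the text, agreed on in advance; this is an illustration of the idea, not a real protocol.

```python
# Sketch of the call-and-response scheme described above. The phrases are
# pre-agreed examples; this is an illustration, not a real protocol.

RESPONSES = {
    "sure, let me check my calendar and get back to you": "authenticated",
    "i can't make it to brunch, i've got a yoga class conflict": "duress",
}

def check_reply(reply):
    """Classify a reply as authenticated, duress, or unrecognized."""
    return RESPONSES.get(reply.strip().lower(), "unrecognized")

print(check_reply("Sure, let me check my calendar and get back to you"))
# → authenticated
```

Any reply that is not one of the agreed phrases—including a correct-sounding paraphrase—should be treated as unrecognized, which is why the exact wording matters.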
Beware of IMSI catchers.
One reason you want to keep your authentication and duress phrases as innocuous as possible is that law enforcement agencies around the world are increasingly using IMSI catchers, also known as “Stingrays” or “cell site simulators,” to capture text messages and phone calls within their range. These devices pretend to be cell towers, intercept and log your communications, and then pass them on to real cell towers so your intended contacts also receive them. Because of this, you probably don’t want to use your burner to text things like, “Hey are you at the protest?” or “Yo, did you bring the Molotovs?”
Under normal circumstances, the use of encrypted messengers such as Signal can circumvent the use of Stingrays fairly effectively, but as burner phones do not typically have the capability for encrypted messaging (unless you’re buying burner smartphones), it is necessary to be careful about what you’re saying.
Burner phones are single-use.
Burner phones are meant to be used once, and then considered “burned.” There are a lot of reasons for this, but the primary reason is that you don’t want your clandestine actions linked. If the same “burner” phone starts showing up at the same events, people investigating those events have a broader set of data to build profiles from. What this means is, if what you’re doing really does require a burner phone, then what you’re doing requires a fresh, clean burner every single time. Don’t let sloppy execution of security measures negate all your efforts.
Procure your burner phone carefully.
You want your burner to be untraceable. That means you should pay for it in cash; don’t use your debit card. Ask yourself: are there surveillance cameras in or around the place you are buying it? Don’t bring your personal phone to the location where you buy your burner. Consider walking or biking to the place you’re purchasing your burner; covering easily-identifiable features with clothing or makeup; and not purchasing a burner at a location you frequent regularly enough that the staff recognize you.
Never assume burner phones are “safe” or “secure.”
For burner phones to preserve your privacy, everyone involved in the communication circle has to maintain good security culture. Safe use of burners demands proper precautions and good hygiene from everyone in the network: a failure by one person can compromise everyone. Consequently, it is important both to make sure everyone you’re communicating with is on the same page regarding the safe and proper use of burner phones, and also to assume that someone is likely to be careless. This is another good reason to be careful with your communications even while using burner phones. Always take responsibility for your own safety, and don’t hesitate to erase and ditch your burner when necessary.