Becoming Digital
Digital Post-Ontology
Mark Jarzombek
January 30, 2019

Viktor Timofeev, Continuum, 2017, production still. Courtesy of the artist.

Discussing who we are in the digital age is no easy task. Everyone and their grandmother, so to speak, has an opinion. But calling in the data experts would be a fundamental fallacy. The news is out: the age of data optimism is over. Yet we are stuck in processes of producing Self, like flies on sticky paper. To make sense of the current situation, we need to ask how we got to this moment in time. This, of course, needs to be supplemented by the question of how we study the present. But even that presupposes a more formidable question: how do we speak from within the subject position of the present?

The first question, of how we got to this moment in time, is a question of history, if that is the right word, since it is unclear how one would even measure a historian’s qualifications for the task. It most certainly cannot be left in the hands of historians of technology or of media. But what, then, does this history look like, and who has the authority to speak for it? The second question, of how to study the present, requires something like “present-ology,” a term that can perhaps replace the outdated concepts of anthropology and sociology, disciplines generated by Enlightenment-era confidences that divided the world into discrete chunks of epistemological work. But today, who really knows where anthropo- begins and ends, or where socio- begins and ends? Ontology, the study of Being, falls into the same category. Enmeshed as we are in hallucinatory regimes of data consumption, production, and manipulation, who really knows where onto- is, or what its bounds are? The third question, of how we speak from within the present… Well, that is the question indeed.

All of this points to a de-ontological project within the classical idea of ontology. In other words, the issue here is not only the traditional, modern question of “Who am I?” but also “Who am I not?” The digital age is the first time in human history that the “me” and the “not-me” have been placed in philosophical alignment.

This situation defies any disciplinary clarity—much less the predictable positions of expertise that emerged from Enlightenment-era thinking. The nineteenth- and post-nineteenth-century categorizations of knowledge are inadequate to the task of present-ology, if only because the primary focus back then was on building an answer to “who am I?” around Reason. But if we investigate Kant’s take on the matter, we realize that there was an often-overlooked twist in his argument that will allow us to begin on a fresh footing. In his Anthropology from a Pragmatic Point of View (1798), Kant makes the point that in order to better know the human we need more information: thus his “anthropo-logy.”

His book, however, does not study remote village life in the rainforest, but the activities of his neighbors, his friends, and even himself. It is not really an anthropology, as we would understand the word today, but a present-ology, its core argument being that for the self to be the Self—the ostensibly ultimate goal of the Enlightenment—we need supplementary information about how we all live in the here and now. We are thus both Self and students of Self. Deep within this suggestion lies an explosive charge that has now gone off, though without a sound, unlocatable in the subterranean caverns of time and history. What Kant requires is that the Self understand that it is an inadequate Self, even with epistemological supplements added in. The Self has within it a type of void that is filled not just with experiences, but with a circulatory system that gives and takes. Ontology only works if it embraces the de-ontological within—a not-Self filled with infomatics derived from others and yet critically embedded within the Self.

Kant hoped that his book would set the tone for an ever more exhaustive anthropo-logy, but most nineteenth-century philosophers rejected his ambivalence toward the unity of Self, preferring instead to impose regimes of knowing not back onto ourselves (Socratic-style, so to speak), but onto others. By the end of the century and carrying over into the core agenda of modernity, history witnessed massive investments in science, objectivity, professions, and discipline making. The result was departments of anthropology, ethnography, history, medicine, law, geography, sociology, and urban planning, all meant to bring what we learned from the “study of man” to bear on creating nations, communities, cities, politics, universities, and the like—reducing the Self to a strange question mark in the parcellated world.

In 1961 Jay Wright Forrester, a professor at the MIT Sloan School of Management, intuited this grand, historical drift and made a prescient argument that tried to bring the balance back toward the human in the system. His interest in this “human” was not, one could say, driven by some esoteric agenda. He wanted corporate capitalism to work better than it did, and for this to happen it needed, according to him, to move past the simple idea of product and market share to a broader—more universal—understanding of how humans “are in the world.” Forrester, one of the most consequential thinkers of the twentieth century, argued that “physical systems, natural systems, and human systems are fundamentally of the same kind… [T]hey differ primarily in their degree of complexity.” Here we see the nub of a world that seems oddly familiar some six decades later. Today, we know that his statement need only be slightly rewritten. These systems do not differ in terms of their complexity. AI, in fact, promises their equivalence, thus its lure. Forrester’s equation can therefore be rewritten to fit our current conditions by simply splicing in the phrase “of data”:

physical system (of data) = natural system = human system

Even with its syntax thus edited, Forrester’s law needs to be supplemented in order to explain the powerful dynamic forces that exist within each of the equation’s elements. Data, for example, as we understand it today, is no longer the stable entity it was in the old days of empiricism. Data is only useful in the form of surplus, and data surplus only works when it exceeds the capacity of data processing. Data is a compulsive self-propagator. Capacity (as in better and faster computers) must produce data surplus, which requires, in turn, more capacity. Capacity generation is thus a multi-billion-dollar economy in its own right, and has become “natural” in the sense that to survive, like all natural systems, it must propagate. The result is a strange, metaphoric landscape: data mountains, data streams, data leaks, data fog, data pollution, data collapses, and even data deluges. The goods are harvested, mined, sold, stolen, protected, purchased, fabricated, marketed, and even in some cases given away. This is the new “natural” world—the world of data.

The subtext of the circulatory systems of surplus and capacity is that data is only as good as its difference with previous data. Data difference can be a matter of quantity or quality, of the way it was acquired, or even of the way it was integrated into algorithmic formulas. In some cases, data grows in microseconds; in other cases, over a span of years. On the stock exchange, masters of microsecundal data are a special breed of algorithmic geniuses known as quants. The basic law of data is, therefore, quite simple:

data = Δ data

Delta (Δ, meaning “change” in the language of physics) is not just a question, however, of the acquisition and use of data. The genius of the circulatory system between “data” and “nature” is that it is designed so that data flows as if by itself. This is, of course, the promissory horizon of AI, which aims to complete the picture of Self—the Self as replicant. But this promise, as seductive as it might be, does not describe the present. We must, therefore, distinguish—and importantly so—between present-ology and futurology.

The human, as it turns out, predictable and yet predictably fickle, can provide endless amounts of data. The more humans there are, the more data there is—and thus, the sycophantic relationship between surplus and capacity is preserved. The goal, in other words, is not to finish the project of “naturalization,” but just the opposite, to perpetuate its incompletion: to produce a vicious cycle of dependency. Humans move from place to place; they move things and ideas; their desires change continually. They have to buy and sell stuff. They live, breathe, and die. Make and unmake friends. And above all, there are so many humans.

All the great corporate entities who navigate this domain have to do is produce the incentive to define reality as a type of heat-generating activity, whether at the scale of an electron, a package, a human desire, or a cat. The more that things are “connected”—the secret code word for our de-ontological Self—the more data is generated as if it were “natural” to life—and the more one can approximate the human. Today, the endless task of elastic, approximate modeling constitutes humanity in all its new, productive glory:

human = Δ = Δ1 = Δ2 = Δ3 = Δn = data

The human is placed in this great vortex, where the Self is always a shifting signifier of itself. Algorithms chop us into digestible, marketable, governable, and hackable categories. But how do we orient ourselves in this space, find up and down, right and wrong? Here and there, when Amazon recommends some books or a special type of sock, we are on the receiving end, but mostly, these regimes are kept secret.

The irony of all this is that the Subject (the “I”) remains relatively stable in its ability to self-affirm—the lingering by-product of the psychologized modern Self. In fact, we do not experience estrangement as the old-school modernists did, though we do sense the titillation of possibility, suspecting in the delayed encounter with ourselves and our supposed future an odd resistance to novelty. For the data world is not trying to refashion us—it is not trying to make us into “moderns”—but instead to know us better than we know ourselves, disguising in the process arrays of exploitation so massive that they are beyond comprehension. The Self collapses into the illusions produced for it by the global cyclone of the infomatic industry.

Stated more simply, in the post-Enlightenment world, data was the purview of academics, of anthropologists, historians, scientists, sociologists, and the like. It did not belong to the general public or to the masses. But now, we are not just the producers of data, we are our own data managers, or are given the illusion thereof. We are not just the proverbial “objects” in this system, but the producers: we supply the opium of data to the addicted corporations, nation states, and even ourselves. In some instances, the onto-bits that we release into the system are designed to seemingly enhance our sense of Self. They can even be used, ostensibly, to protect us, but they are also used against us to irritate Being, to humiliate it, to observe it, overtly and covertly. One group advertises itself in this way:

We are data people, we believe in the prospect of uncovering “invisible” data to help make sense of all of the consuming, driving, walking, running, watching, eating, and buying that is going on in the “real-world.”1

Data acquisition, which began with Kant under the assumption that it guides us to become a better human, immerses us in the expanding horizon of not-Self.

The break-up of ontological securities that so marks this age forces us into new relationships with body and soul. The old idea of Self as a more-or-less autonomous creature of the Enlightenment disappears in a new hallucinogenic. There is now no human or inhuman, but only one thing: (in)human. Every molecule of us is just as inhuman as it is human. (In)human cannot be pronounced, for as Derrida correctly diagnosed, the pathologies of orality and the desire for logos always seem to seep into our language.

The word we can give for this (in)human condition is “post-ontology.” It does not mean that the good old days of ontology are over, only that quotation marks around “ontology” are the new normal. We can no longer even remotely pretend that we are anything but a social construction. We live in an algorithmic vivarium that can sustain different—and complex—coterminous power relationships. Order and disorder are not antithetical. In fact, those very words are archaic. They come from the age of onto-centrism. The algorithmic world seeks to make order out of disorder, but just as importantly, it makes disorder out of order, often in the name of “innovation.”

The digital knows just how to inhabit the ontological host without killing it. It understands the human desire for a Bluetoothed world (built originally as “biology” and now fantasized as AI), but it will always deny the human its very fulfillment. This frictional reality comes in a thousand small cuts, updates, upgrades, system crashes, and system reboots. These are not accidents or even mistakes but the very core of the (in)human project. Optimists will never recognize this: there is always a patch, always a way around, but algorithm designers know the temporal limitations of their work. In some places, algorithms pile up in vast mountains of dead numerological heaps, never to be heard of again, unless accidentally or purposefully—and in some cases maliciously—re-activated.

The result is a perpetual, low-intensity torture of the social-civilizational body. Post-ontology thrives on security threats, real, imagined, or fabricated. Security Threat = The Social. Everything we need in the world will soon be the byproduct of security—if not already. My “I” is a byproduct of the globally scaled Data Security Industry that produces insecurity in just the right doses for its self-perpetuation. The system is calculated and legalized in the form of upgrades and contract renewals, patches and defaults that continuously remind the (in)humans—often when they least expect it—of their precarious but embedded standing in the new world order. Errors, in other words, are not really errors, but rather prove that the system is working; that it is necessarily fallible and that we as humans, along with the corporations and governments, are its victims.

Onto-torture is the only way we can locate our bodies in the hyper-oxygenated realm of algorithms. It is a special form of torture that reminds us, in the form of psycho-semiotics, how our senses, even at their most perceptive, cannot fathom the scale of our extended presence. It announces itself in words like progress, innovation, regulation, deregulation, scam, scam protection, net neutrality, data protection, glitch, crash, zero-day vulnerability, or packet injection; a great vectoral mash-up designed to simultaneously mitigate and multiply a low-grade torture that is often described as “accidental” or “unprecedented,” but is in truth anything but. When a system fails, when a problem occurs, it does not mean that the system has really failed; it means that it is working. The “natural” condition of data is its failure. Each of us is a living, breathing energizer of this grand system. When our computer crashes, we must remember that it is designed to crash. The same for the world economy.

Some people reach a tipping point. They throw their cell phones into the fountain; tear up their mobile phone contracts; jab at their screens; say nasty things. This is not good for the Industrial Data-Civilianization Complex. We lose faith in the instruments that produce our onto-exhaust. “Just enough” torture is now considered normal. Encryption protocols; crypto management; bitlockers; tripwires; optimization packs.

The result is a new type of paranoia—one that is no longer the psychoanalytical problem of old. It mostly manifests itself on the casual surface of existence to allow Being to see itself in the great wash of micro-geopolitical realities. “My FB app keeps crashing unknownly [sic] for days. What do I do?”2 Because of the naturalization of paranoia, Kant’s idea of the inadequate Self as the launching point for Enlightenment can no longer calm itself down in grand disciplinary projects like sociology, anthropology, and biology. The human is now trapped in its periodic dialectic of negation. Cookies, flash cookies, zombie cookies, trojans, heisenbugs, bots—these make up the invisible soft tissue of life.

What develops over time is something like an ontological crust, a place where our traditional sense of identity toward the outside condenses and contains our sense of Self. This crust can be loose, flexible, even compliant, or it can harden into identity politics, fundamentalism, or paranoid anxiety about governmental interference. This onto-crust is fed just as much from the outside (as in standard encounters with other (in)humans) as it is from the inside. Yet this inside is not desire and passion—the interiorities of old—but the known/unknown organization of energies that infuses its prerogatives into our sense of Being. Our onto-crust hooks itself into the flesh of the digital, draining energy from it for its psychic purposes. Paranoia rests below the onto-crust’s surface. The harder the crust gets, the more likely it will fissure, allowing paranoia to leak out. Flowing to the surface, paranoia spreads out over us, defining us. It is no longer an illness, but the everyday, the everywhere. We are all now the subject of an absent subject. The possible subject-to-be of a possible subject-that-was-and-will-possibly-be—or not.

The Sub-Routines of DIANA: Toward a Post-Ontological Architecture

We live it without thinking, as if it carried within it neither questions nor answers, as if it weren’t the bearer of any information. This is no longer even conditioning, it’s anaesthesia. We sleep through our life in a dreamless sleep. But where is our life? Where is our body? Where is our space?3

Architects today do not “draw” in the traditional sense of putting pencil to paper. Instead, they move through a conceptual landscape filled with software animals (plug-ins) that go by the names of “Grasshopper,” “Ladybug,” “Crow,” “Flamingo,” “Penguin,” “Squirrel,” “Hedgehog,” “Pufferfish,” “Lizard,” etc. All of these creatures belong to Rhinoceros (typically abbreviated Rhino, or Rhino3D), a computer-aided design application developed by Robert McNeel and Associates, an American company founded in 1980. A student license costs about $150. For professionals it is more like $1,000, followed by upgrades that range in the hundreds of dollars. The plug-ins—each with their own costs and upgrades—serve to enhance or supplement the architect’s needs around specific tasks, with Rhinoceros ruling at the top of an ever-expanding animal kingdom. Grasshopper, for example, is a piece of software (released in 2007) that allows the architect to “jump” between data sets, and thus the name. Ladybug (released in 2013) links weather data visualization, solar radiation studies, and sunlight hours analysis to digital models. Following Ladybug’s success, Honeybee was released to connect Grasshopper to validated daylighting and energy simulation engines. Because a Pufferfish can change shape, that name was used for a plug-in “which focuses on Tweens, Blends, Morphs, Averages, Transformations, & Interpolations—essentially Shape Changing.”4 Rooster, with its comb as symbol, “will take an image path as input, convert it to binary (black and white) image, trace the edge, and output them as rhino curve.” And so on…

Architects today live in a park-like landscape of tamed animals, with each animal performing a particular trick. But the Rooster does not go off with the Hedgehog to attack the Lizard. For this reason, architects are not full participants in the much wilder, post-ontological world. They live and work in an anthropo-centric bubble out of touch with the “nature,” so to speak, of nature. Living in such a Disney World will only sustain the delusion that computation works to the benefit of the profession through the authorial majesty of the architect, even though, if anything, the animals prove the core inadequacy of Reason without its natural/un-natural supplement. Clearly this zoological imaginary needs some diversity in order to better perform in the post-ontological world.

First, we need some bugs. Let me introduce Cockroach. Once walls are drawn, the architect has three hours to purchase “sealant” (an algorithmic protocol) from the digital “hardware store.” If not, the sub-routine will split the walls open to, obviously, allow the cockroaches out. The designer must “seal” the wall within five seconds or they will have to redesign the wall altogether. Termites is an attack-ware that lowers the height of a building. Once activated (by not using Grasshopper for 24 hours) the designer has ten minutes to make a PayPal purchase from the “hardware store” for a software protection package called “DDT” to restore the original dimension. Rats carry “fleas” that infest the design software and freeze it until the designer pays the ransom package called “Rat-Removal.” Vulture is even more rapacious. The designer will see a shadow on the screen as if the bird were circling above, and after about a minute, a loose architectural element will be flown away with. The designer has two minutes to purchase a special gravity glue from the hardware store to keep the element on the screen. Sleepy Dog is a sub-routine that comes into being if the designer is moving too fast through the program, slowing the click-response rate until the designer feels so frustrated that they buy “protein bars” to feed the dog, also available at the store. This perks the dog up and it wanders off, perhaps to return in a few hours depending on how many protein bars were purchased. Each program will have its own sound effect. Termites will be announced by crunching sounds, and Sleepy Dog by soft panting.
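The behavioral logic of one such sub-routine can be sketched in code. The following is purely illustrative: the class, the method names, the thresholds, and the "protein bar" economy are all invented here for the hypothetical Sleepy Dog described above, not drawn from any actual plug-in.

```python
class SleepyDog:
    """Illustrative sketch of the hypothetical Sleepy Dog sub-routine.

    The dog wakes when the designer moves too fast through the program,
    throttling the click-response rate until it is fed purchased
    "protein bars," after which it wanders off for a few hours.
    All thresholds and prices are invented for this sketch.
    """

    def __init__(self, speed_threshold=5, nap_hours_per_bar=2):
        self.speed_threshold = speed_threshold      # clicks/second that wakes the dog
        self.nap_hours_per_bar = nap_hours_per_bar  # hours of peace each bar buys
        self.awake = False

    def observe(self, clicks_per_second):
        # The dog wakes if the designer is moving too fast; once awake,
        # it stays awake until fed.
        if clicks_per_second > self.speed_threshold:
            self.awake = True
        return self.awake

    def response_delay_ms(self, base_ms=10):
        # While the dog is awake, every click drags.
        return base_ms * 50 if self.awake else base_ms

    def feed(self, protein_bars):
        # Feeding perks the dog up; it wanders off, returning only after
        # a nap whose length depends on how many bars were purchased.
        self.awake = False
        return self.nap_hours_per_bar * protein_bars
```

The design point the sketch makes is that the sub-routine is not a bug but a negotiation: the designer can pay, slow down, or simply wait the dog out.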

The more intense the design, the more it all costs—obviously—and the more frustrating it becomes to the architect. But there is a way for the architect to mitigate the financial and psychological outlay. For every ten dollars of real money one charges to the store, the designer is awarded a star. At the top right of the screen, next to the hardware store icon, one will see another in the shape of a small temple: the Temple of DIANA, the goddess of the hunt, of wild animals and wilderness. “DIANA” stands for Designing Insidious Alternative Natural Architecture. When enough stars have been deposited in her temple, DIANA herself emerges onto the screen in radiant glory and puts all the sub-routine animals under her control to sleep for a day. After the day is over, however, the struggle resumes between the ever-polite animals ruled over by Rhino and the more devious ones under the control of DIANA. DIANA’s sub-routines force the architect to negotiate with a range of unexpected temporal conditions. More importantly, the various negotiations, purchases, and avoidance mechanisms will become part of the whole “game” of designing.

Once the architect and the world of software are on this more equal footing, the architect can no longer deploy animals as technological pets doing their bidding without consequence. In fact, the architect will have to struggle to keep their design moving along as desired. The architect could adopt a conciliatory position and accept some of the consequences. When Sleepy Dog arrives, the architect can take a coffee break, go for a dance, or read a book. Sleepy Dog eventually falls asleep, and the architect can return to work. The architect might allow Cockroach to split open the walls. The effect might even be charming. The architect might also allow Termites to nibble away at the foundations to see in which direction the building starts to lean. A building with elements taken away by Vulture might in fact be much improved! Some architects will, of course, insist on Rhino. They will make a name for themselves as Rhinopurists and charge a special fee to cover the cost of keeping DIANA’s animals at bay. They will dress in black and have secret meetings where they will share notes on weaknesses in DIANA’s programming. On the other side, there will be the colorfully dressed Dianites who will meet the Rhinopurists in Las Vegas at annual conventions where student projects are shown and huge competitions staged.

×

Becoming Digital is a collaboration between e-flux Architecture and Ellie Abrons, McLain Clutter, and Adam Fure of the Taubman College of Architecture and Urban Planning.

Mark Jarzombek is Professor of the History and Theory of Architecture at MIT. He works on a wide range of topics, historical, theoretical, and philosophical. His most recent book is Digital Stockholm Syndrome in the Post-Ontological Age (University of Minnesota Press, 2016).

Notes

1. ViaInformatics homepage, via Archive.org.
2. Abigail, “Facebook app keeps crashing,” Facebook Help Community (2016).
3. Georges Perec, Species of Spaces and Other Pieces, ed. and trans. John Sturrock (New York: Penguin Books, 1997), 210.
4. ekimroyrp, “PUFFERFISH,” food4Rhino.
