
About Tomorrow

Bruce Wexler

Left: Stone tools showing early evidence of extensive heat treatment of silcrete from Howiesons Poort at Klipdrift Shelter, South Africa; Right: The Leavers lace machine, 1904. Photo: Wikimedia Commons

Artificial Labor
June 2017

For thousands of years, change and its associated discomforts have been a central feature of human societies and trans-generational struggle. Hubel and Wiesel were awarded the Nobel Prize in 1981 for demonstrating the degree to which the structure and function of the mammalian brain are shaped after birth by stimulation from the environment. Decades of research have confirmed and elaborated this work, demonstrating, for example, that when visual input from the eyes is surgically rerouted to the auditory cortex in newborn ferrets, the animals can see with what is usually the listening part of their brains, and that the cells in the auditory receptive areas completely rearrange themselves to look like normal visual cortex rather than maintaining their “natural” tonotopic organization. More recent research on sensory substitution devices in humans shows that even the eyes are unnecessary for vision. Blind people can see with a camera that sends patterns of electrical impulses to the tongue, creating a picture the same way patterns of light intensity create an image on a television screen.

Human beings differ from other animals with regard to this post-natal neuroplasticity in two important ways. First, our brains are more immature at birth and susceptible to environmental shaping to a greater degree and for a longer time. Second, humans are the only animals that shape and reshape the environments that shape their brains. This powerful combination is the basis of cultural evolution, and of most features of human minds, behavior, and communities.

Our hominid ancestors tamed fire, turning dark into light and cold into warmth. They created stone tools that made their primary and essential labor of feeding themselves more efficient. But even once innovation got started, it was slow; the first tool set went unchanged for 1.8 million years. Neanderthals and modern humans (Homo sapiens), the last two hominids to evolve, both developed elaborate symbolic practices evidencing a complex internal mental life, abstraction, and imagination. But despite such capabilities, innovation remained slow compared to today: the beautiful cave paintings that are said to have caused Picasso to exclaim that “they already knew everything (about line and art)” remained essentially unchanged in content, style, and material for 26,000 years. With the invention of farming and animal husbandry 10,000–12,000 years ago, the pace of innovation began to accelerate significantly. Itself an alteration of the landscape, animals, and plants, farming freed time from the labor of food production and led to role and skill specializations that accelerated innovation more generally.

Left: Cave paintings at the rock shelter Bhimbetka, India. Some of the paintings found here are circa 30,000 years old. Photo: Vijay Tiwari; Right: George Stubbs, Firetail with his Trainer by the Rubbing-Down House on Newmarket Heath, 1773. Oil on panel. Photo: Wikimedia Commons

Our brains are shaped by the human-made worlds in which we are children and develop into adults. This shaping of the highly plastic brain and mind of children by the major features of the human-made rearing environment creates a match or harmony between our internal neuropsychological structures and external worlds. Decades of research demonstrate that we feel more comfortable and function better when this harmony is maintained. For these reasons, we do a variety of things to make the external world stay consistent with our internal worlds, like associating with like-minded people and ignoring or forgetting information with which we disagree. Once the genie of innovation was let out of the bottle, however, it added a new dimension to this process. Innovation during each generation alters the rearing environment of the next, meaning that the brains and minds of each new generation differ from those of their parents. When the members of each new generation become adults and assume instrumental roles in society and in the organization and means of work, they strive to remake the world of their parents to match their own internal worlds. Their efforts must overcome the resistance of their parents, for whom such change creates an external world inconsistent with their established neuropsychological structures. And so, Socrates is said to have objected to the new practice of writing because it was sure to compromise human memory, people rioted in the streets after the debut performance of Stravinsky’s Rite of Spring, and many great artists are recognized as such only after their deaths. As an elderly midwife from the Ariaal, nomadic cattle herders from the Ndoto Mountains in Kenya, told a National Geographic reporter, “We send our children to school and they forget everything. It is the worst thing that ever happened to our people.” This is the way of human beings.

There seems little doubt that the speed of innovation has dramatically increased over the last 200 years and continues to accelerate. If we consider the 200,000-year history of our species on the scale of a 24-hour day, in many ways the change in the last 1.5 minutes exceeds all that preceded it. At the very least, then, there are issues of rate and quantity of change to consider. Faster and greater change make people more uncomfortable, but are the existential issues today that have emerged in response to the rise of automation and artificial intelligence still the same as what Socrates faced, with an outcome no more problematic? Or is there something such as “too much too fast”—will the ship fall apart if the speed is too great? If so, where is the danger? At the level of the individual’s ability to adapt? The ability of social structures to function at such high speeds? Or in an alteration in the type of person able to survive and succeed?

Experiences with cross-cultural transfers of technology are one potential source of instruction. The rate of technological development in Europe and North America over the past two hundred years has been rapid and associated with, among other things, unprecedented environmental pollution, the reorganization and redefinition of labor, large population movements, and the disruption of both extended-family structures and community organization. Throughout this period, the collective development of new regulatory structures and processes struggled to reconcile these changes with existing value and belief systems, at times witnessing violent struggle over the control of new means of production and wealth. But in the end, communities by and large established regulatory power, cleaned and protected the environment, dealt with the egregious exploitation of children and other workers, and developed new means to redistribute wealth. Even “rules of war” were established to protect civilian populations from newly powerful weapons of death and destruction.

When Western oil companies moved into the oil-rich Middle East, they built roads overnight that linked towns and communities that had lived in proximate isolation for millennia, created extreme wealth for small numbers of people, and, through the likes of radio, television, and film, brought mass communication, and along with it exposure to Western music, mores, and popular culture. New forms of relations between men and women that Western societies had gradually developed and painfully adjusted to over more than a century of bourgeois society, the lives and work of artists, and women’s political organization, for example, were simply depicted as fact. The oil industry’s transformation of the physical and cultural environment at a rate without historical precedent consequently did not allow for the gradual accommodation of belief systems or co-evolution of regulatory processes. As the chief of police in a seaside town in the United Arab Emirates once explained, “Western influence has eroded family values and weakened parental authority. Our police will step up efforts to maintain social values in keeping with Islamic and Arabic traditions.” The social instability, violent opposition to the West and Western culture, and mass migrations that now characterize the region were probably generated in part by the introduction of change at such a rapid rate. Or conversely, consider some aspects of the “modernization” of China: Chairman Mao attempted to rapidly institute nationwide ideas of collective farming drawn from Russian theory and advisors, causing a famine that killed 30–50 million people. Deng Xiaoping similarly visited factories in Europe and Japan and imported organizational techniques and technologies of production that had taken over a century to evolve in their respective developmental contexts, leading to extensive air, water, soil, and food contamination in China. Without the co-evolution of formal and informal regulatory processes, the health of generations to come will be affected. Here, then, we can see consequences of change that is too rapid to allow accommodation and regulation. But are these examples relevant to understanding threats that may be associated with rapid changes in robotics and information technology?

Some aspects of rapidly developing information technologies circumvent existing formal and informal regulation. There are now unprecedented opportunities for anonymous action, and as computer hackers have stated, “where there is anonymity there is no regulation and accountability.” In the pre-internet age, extremist groups like the Ku Klux Klan in the United States wore hoods over their heads and faces while lynching black Americans, but members of their communities, including law enforcement, could and did know their identities. Similarly, those recognized as public intellectuals were “screened” and “selected” by community social processes, and only subsequently given access to important communication platforms. Today, individuals on the fringes of society have equal access to means of mass communication. Large social media followings are gained by people who might not previously have been able to access such platforms, and while social processes still govern the creation of such communities, many individuals are not subject to those processes and have a larger impact than they could have had in the past. Groups of individuals, some of which arise and disband at a moment’s notice, like the internet activists who go by “Anonymous,” are transnational and unregulated actors who have been able to disrupt business operations for political purposes with little consequence to themselves. But as new and unregulated as these anonymous actors are, and as disruptive to critical infrastructure as their acts could potentially be, they do not yet seem to pose an existential threat to human life and society.

But what about machines that are smarter than people in some ways? What about the monitoring of our online behavior—where we spend more and more of our time? What about machines and algorithms that watch what we do online, that individually shape our environmental stimulation, and therefore our minds and brains? What about groups that send out fake news that spreads faster than checks and rebuttals? These are indeed alarming. But people have always had limited and wrong information about many things. Subgroups have always had their alternate facts, and people have been shown to choose to associate with like-minded others who reinforce, rather than challenge, their view of reality. Does it matter if machines monitor our beliefs and provide the reinforcing information we seek anyway? What is so different, or of consequence? Both informal social-normative regulation, within communities of actors and by others with whom they have affiliative connections, and formal governmental regulation are still primitive, but both are under discussion and will evolve. The global scale of impact of these new machines and algorithms that analyze and filter our information might itself be unsettling to those of us not “raised on the internet,” but local control and uniformity of information within small communities can be just as limiting. In fact, one can see how rogue actors, hackers, and individuals on the fringe could gain a greater voice; competing groups can create their own machines and algorithms.

Left: Lee Krasner, Charred Landscape, 1960. Oil paint on canvas. Copyright: 2015 Pollock- Krasner Foundation / Artists Rights Society (ARS), New York; Right: Technicians at NASA Ames Research Center are reflected in the coated SOFIA telescope main mirror suspended above them. Photo: USRA/Patrick Waddell.

Perhaps, though, a decrease in population variability is a real fear. Darwinian evolution builds on variation within populations. Cultural evolution, building on biological evolution and its effects on gene expression, rapidly increased the variability of brain function, ideas, and beliefs in human populations. Furthermore, culturally induced variability depends upon differences in important aspects of the rearing environments that shape our brains, ranging from ways of attending to the world to religious, moral, and political beliefs, exposure to music, painting, and literature, and attitudes toward change, difference, novelty, and risk-taking. Variability has, to a certain extent, depended upon the geographic separation of communities. For most of human history, mountains, rivers, deserts, and oceans kept communities separated enough to develop 6,000 different languages (not counting dialects), different belief systems, laws, ways of eating and dressing, and ways of thinking. Within each of these geographically separated societies (tribes or countries), cultural variability has come from differences among families, local communities, adolescent interest groups, and so on. Writers, painters, musicians, and scientists have increasingly exposed adults and children to new ways of seeing, listening, understanding, and thinking. Mass media and technology have facilitated access to these human-made contributions to our environment, effectively magnifying the resulting variability in minds and brains. But now the electronic environment has become the primary environment that shapes the brains of our children. It is an environment that crosses geographic barriers and one that is increasingly shaped by machinic algorithms and artificial intelligence. Technology no longer only facilitates access to human-made programming; it programs us as well. What difference will it make if our rearing environments are shaped by machines that lack the variability of the human hand and mind? How much will human variability be reduced as the shaping of the rearing environment becomes more and more centralized and mechanical?

It is possible to imagine, and even be alarmed by the prospect, that human variability and innovation will decrease. Perhaps the 12,000-year epoch of rapid human innovation since the advent of farming will give way to a much longer period of increasing stability and uniformity. This might constitute a qualitative change in the dynamic between the brain and the world. But to what reference posts or standards do we turn to decide whether this would be good or bad? Darwinian theory posits that variability allows populations to survive in the face of significant environmental changes. As human military and industrial technology increasingly alters the environment in ways that threaten the survival and livelihoods of large numbers of people, the mechanical shaping of our rearing environments may at the same time reduce the variability of thought necessary to deal with these threats. Could such processes in tandem threaten human life altogether? Or will we as a society limit the power of centralized and machinic algorithms to shape our minds and the minds of our children, and act to ensure the variability of brains and minds shaped by the human hand?

Artificial Labor is a collaboration between e-flux Architecture and MAK Wien within the context of the VIENNA BIENNALE 2017 and its theme, “Robots. Work. Our Future.”



Bruce E. Wexler is Professor at Yale University. Author of over 130 scientific articles, Wexler is a world leader in harnessing neuroplasticity to improve cognition through brain exercises. Wexler’s book Brain and Culture: Neurobiology, Ideology, and Social Change was published by MIT Press in 2006.
