
Three Thousand Years of Algorithmic Rituals: The Emergence of AI from the Computation of Space

Illustration from Frits Staal, "Greek and Vedic Geometry," Journal of Indian Philosophy 27, no. 1 (1999): 105–127.


With topographical memory, one could speak of generations of vision and even of visual heredity from one generation to the next. The advent of the logistics of perception and its renewed vectors for delocalizing geometrical optics, on the contrary, ushered in a eugenics of sight, a pre-emptive abortion of the diversity of mental images, of the swarm of image-beings doomed to remain unborn, no longer to see the light of day anywhere.

—Paul Virilio, The Vision Machine1

1. Recomposing a Dismembered God

In a fascinating myth of cosmogenesis from the ancient Vedas, it is said that the god Prajapati was shattered into pieces by the act of creating the universe. After the birth of the world, the supreme god is found dismembered, undone. In the corresponding Agnicayana ritual, Hindu devotees symbolically recompose the fragmented body of the god by building a fire altar according to an elaborate geometric plan.2 The fire altar is laid down by aligning thousands of bricks of precise shape and size to create the profile of a falcon. Each brick is numbered and placed while reciting its dedicated mantra, following step-by-step instructions. Each layer of the altar is built on top of the previous one, conforming to the same area and shape. Solving the logical riddle that is the key to the ritual, each layer must keep the same shape and area as the contiguous ones while using a different configuration of bricks. Finally, the falcon altar must face east, a prelude to the symbolic flight of the reconstructed god towards the rising sun—an example of divine reincarnation by geometric means.
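The layer rule of the ritual can be read as a computational constraint: same outline, same area, different arrangement of bricks. A minimal sketch of that check, using invented rectangular layouts rather than the actual falcon bricks:

```python
# Each brick is a rectangle: (x, y, width, height) on a unit grid.
def cells(bricks):
    """Return the set of unit cells covered by a brick layout."""
    covered = set()
    for x, y, w, h in bricks:
        for i in range(x, x + w):
            for j in range(y, y + h):
                covered.add((i, j))
    return covered

def valid_next_layer(layer_a, layer_b):
    """The ritual rule: same outline and area, different brick configuration."""
    return (cells(layer_a) == cells(layer_b)      # same shape and area
            and set(layer_a) != set(layer_b))     # different arrangement

# Two hypothetical layers tiling the same 4x2 region with different bricks:
layer1 = [(0, 0, 2, 1), (2, 0, 2, 1), (0, 1, 4, 1)]
layer2 = [(0, 0, 1, 2), (1, 0, 3, 1), (1, 1, 3, 1)]
print(valid_next_layer(layer1, layer2))  # → True
```

The rule is a combinatorial riddle: an equivalence of area and shape must be preserved across layers while the partition into parts changes, which is why Zellini reads the ritual as a technique of geometric approximation.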

The Agnicayana ritual is described in the Shulba Sutras, composed around 800 BCE in India to record a much older oral tradition. The Shulba Sutras teach the construction of altars of specific geometric forms to secure gifts from the gods: for instance, they suggest that “those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus.”3 The complex falcon shape of the Agnicayana evolved gradually from a schematic composition of only seven squares. In the Vedic tradition, it is said that the Rishi vital spirits created seven square-shaped Purusha (cosmic entities, or persons) that together composed a single body, and it was from this form that Prajapati emerged once again. While art historian Wilhelm Worringer argued in 1907 that primordial art was born in the abstract line found in cave graffiti, one may assume that the artistic gesture also emerged through the composing of segments and fractions, introducing forms and geometric techniques of growing complexity.4 In his studies of Vedic mathematics, Italian mathematician Paolo Zellini has discovered that the Agnicayana ritual was used to transmit techniques of geometric approximation and incremental growth—in other words, algorithmic techniques—comparable to the modern calculus of Leibniz and Newton.5 Agnicayana is among the most ancient documented rituals still practiced today in India, and a primordial example of algorithmic culture.

But how can we define a ritual as ancient as the Agnicayana as algorithmic? To many, it may appear an act of cultural appropriation to read ancient cultures through the paradigm of the latest technologies. Nevertheless, claiming that abstract techniques of knowledge and artificial metalanguages belong uniquely to the modern industrial West is not only historically inaccurate but also an act of implicit epistemic colonialism towards cultures of other places and other times.6 The French mathematician Jean-Luc Chabert has noted that “algorithms have been around since the beginning of time and existed well before a special word had been coined to describe them. Algorithms are simply a set of step by step instructions, to be carried out quite mechanically, so as to achieve some desired result.”7 Today some may see algorithms as a recent technological innovation implementing abstract mathematical principles. On the contrary, algorithms are among the most ancient and material practices, predating many human tools and all modern machines:

Algorithms are not confined to mathematics … The Babylonians used them for deciding points of law, Latin teachers used them to get the grammar right, and they have been used in all cultures for predicting the future, for deciding medical treatment, or for preparing food … We therefore speak of recipes, rules, techniques, processes, procedures, methods, etc., using the same word to apply to different situations. The Chinese, for example, use the word shu (meaning rule, process or stratagem) both for mathematics and in martial arts … In the end, the term algorithm has come to mean any process of systematic calculation, that is a process that could be carried out automatically. Today, principally because of the influence of computing, the idea of finiteness has entered into the meaning of algorithm as an essential element, distinguishing it from vaguer notions such as process, method or technique.8

Before the consolidation of mathematics and geometry, ancient civilizations were already big machines of social segmentation that marked human bodies and territories with abstractions that remained, and continue to remain, operative for millennia. Drawing also on the work of historian Lewis Mumford, Gilles Deleuze and Félix Guattari offered a list of such old techniques of abstraction and social segmentation: “tattooing, excising, incising, carving, scarifying, mutilating, encircling, and initiating.”9 Numbers were already components of the “primitive abstract machines” of social segmentation and territorialization that would make human culture emerge: the first recorded census, for instance, took place around 3800 BCE in Mesopotamia. Logical forms were made out of social ones: numbers materially emerged through labor and rituals, discipline and power, marking and repetition.

In the 1970s, the field of “ethnomathematics” began to foster a break from the Platonic loops of elite mathematics, revealing the historical subjects behind computation.10 The political question at the center of the current debate on computation and the politics of algorithms is ultimately very simple, as Diane Nelson has reminded us: Who counts?11 Who computes? Algorithms and machines do not compute for themselves; they always compute for someone else, for institutions and markets, for industries and armies.

Illustration from Frank Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, (Cornell Aeronautical Laboratory, Buffalo NY, 1961).

2. What Is an Algorithm?

The term “algorithm” comes from the Latinization of the name of the Persian scholar al-Khwarizmi. His tract On the Calculation with Hindu Numerals, written in Baghdad in the ninth century, is responsible for introducing Hindu numerals to the West, along with the corresponding new techniques for calculating them, namely algorithms. In fact, the medieval Latin word “algorismus” referred to the procedures and shortcuts for carrying out the four fundamental mathematical operations—addition, subtraction, multiplication, and division—with Hindu numerals. Later, the term “algorithm” would metaphorically denote any step-by-step logical procedure and become the core of computing logic. In general, we can distinguish three stages in the history of the algorithm: in ancient times, the algorithm can be recognized in procedures and codified rituals to achieve a specific goal and transmit rules; in the Middle Ages, the algorithm was the name of a procedure to help mathematical operations; in modern times, the algorithm qua logical procedure becomes fully mechanized and automated by machines and then digital computers.
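The medieval algorismus was precisely such a step-by-step procedure for the four operations with Hindu numerals. Its spirit can be sketched as column addition with carrying, a minimal illustration rather than a historical reconstruction of al-Khwarizmi's text:

```python
def algorism_add(a, b):
    """Column addition with carrying, digit by digit: the kind of
    mechanical shortcut that the medieval 'algorismus' codified."""
    digits_a = [int(d) for d in str(a)][::-1]   # least significant digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        carry, digit = divmod(da + db + carry, 10)  # carry the tens
        result.append(digit)
    if carry:
        result.append(carry)
    return int(''.join(map(str, result[::-1])))

print(algorism_add(476, 389))  # → 865
```

Each step is finite, mechanical, and requires no understanding of what the numbers mean, which is exactly the property that later allowed such procedures to be automated.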

Looking at ancient practices such as the Agnicayana ritual and the Hindu rules for calculation, we can sketch a basic definition of “algorithm” that is compatible with modern computer science: (1) an algorithm is an abstract diagram that emerges from the repetition of a process, an organization of time, space, labor, and operations: it is not a rule that is invented from above but emerges from below; (2) an algorithm is the division of this process into finite steps in order to perform and control it efficiently; (3) an algorithm is a solution to a problem, an invention that bootstraps beyond the constraints of the situation: any algorithm is a trick; (4) most importantly, an algorithm is an economic process, as it must employ the least amount of resources in terms of space, time, and energy, adapting to the limits of the situation.

Today, amidst the expanding capacities of AI, there is a tendency to perceive algorithms as an application or imposition of abstract mathematical ideas upon concrete data. On the contrary, the genealogy of the algorithm shows that its form has emerged from material practices, from a mundane division of space, time, labor, and social relations. Ritual procedures, social routines, and the organization of space and time are the source of algorithms, and in this sense they existed even before the rise of complex cultural systems such as mythology, religion, and especially language. In terms of anthropogenesis, it could be said that algorithmic processes encoded into social practices and rituals were what made numbers and numerical technologies emerge, and not the other way around. Modern computation, just looking at its industrial genealogy in the workshops studied by both Charles Babbage and Karl Marx, evolved gradually from concrete towards increasingly abstract forms.

Illustration from Frank Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, (Cornell Aeronautical Laboratory, Buffalo NY, 1961).

3. The Rise of Machine Learning as Computational Space

In 1957, at the Cornell Aeronautical Laboratory in Buffalo, New York, the cognitive scientist Frank Rosenblatt invented and constructed the Perceptron, the first operative artificial neural network and grandmother of all the matrices of machine learning; at the time, it was a classified military secret.12 The first prototype of the Perceptron was an analogue computer composed of an input device of 20 × 20 photocells (called the “retina”) connected through wires to a layer of artificial neurons that resolved into one single output (a light bulb turning on or off, to signify 0 or 1). The “retina” of the Perceptron recorded simple shapes such as letters and triangles and passed electric signals to a multitude of neurons that would compute a result according to a threshold logic. The Perceptron was a sort of photo camera that could be taught to recognize a specific shape, i.e., to make a decision with a margin of error (making it an “intelligent” machine). The Perceptron was the first machine-learning algorithm, a basic “binary classifier” that could determine whether a pattern fell within a specific class or not (whether the input image was a triangle or not, a square or not, etc.). To achieve this, the Perceptron progressively adjusted the values of its nodes in order to resolve a large numerical input (a spatial matrix of four hundred numbers) into a simple binary output (0 or 1). The Perceptron gave the result 1 if the input image was recognized within a specific class (a triangle, for instance); otherwise it gave the result 0. Initially, a human operator was necessary to train the Perceptron to learn the correct answers (manually switching the output node to 0 or 1), hoping that the machine, on the basis of these supervised associations, would correctly recognize similar shapes in the future. The Perceptron was designed not to memorize a specific pattern but to learn how to recognize potentially any pattern.
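Rosenblatt's supervised procedure (weighted sum, threshold, error-driven correction) can be sketched in a few lines. The following is an illustrative reconstruction of the perceptron learning rule on a toy 2 × 2 “retina,” not the original 400-photocell analogue machine:

```python
class Perceptron:
    """A minimal binary classifier in the spirit of Rosenblatt's machine:
    a weighted sum of inputs passed through a threshold, with weights
    adjusted whenever the supervised answer is wrong."""
    def __init__(self, n_inputs, lr=0.1):
        self.w = [0.0] * n_inputs
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0   # threshold logic: fire or not

    def train(self, samples, epochs=100):
        for _ in range(epochs):
            for x, target in samples:
                error = target - self.predict(x)   # supervised correction
                self.w = [wi + self.lr * error * xi
                          for wi, xi in zip(self.w, x)]
                self.b += self.lr * error

# Toy 'retina' of 4 pixels; class 1 = top row fully lit.
data = [([1, 1, 0, 0], 1), ([1, 1, 1, 1], 1),
        ([0, 0, 1, 1], 0), ([1, 0, 0, 1], 0), ([0, 0, 0, 0], 0)]
p = Perceptron(4)
p.train(data)
print([p.predict(x) for x, _ in data])
```

After training, the predictions match the supervised labels: the pattern “top row lit” has crystallized into the weights rather than being stored as any particular image.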

The matrix of 20 × 20 photoreceptors in the first Perceptron was the beginning of a silent revolution in computation (which would become a hegemonic paradigm in the early twenty-first century with the advent of “deep learning,” a machine-learning technique). Although inspired by biological neurons, from a strictly logical point of view the Perceptron marked not a biomorphic turn in computation but a topological one; it signified the rise of the paradigm of “computational space” or “self-computing space.” This turn introduced a second spatial dimension into a paradigm of computation that until then had only a linear dimension (see the Turing machine that reads and writes 0 and 1 along a linear memory tape). This topological turn, which is the core of what people perceive today as “AI,” can be described more modestly as the passage from a paradigm of passive information to one of active information. Rather than having a visual matrix processed by a top-down algorithm (like any image edited by a graphics software program today), in the Perceptron the pixels of the visual matrix are computed in a bottom-up fashion according to their spatial disposition. The spatial relations of the visual data shape the operation of the algorithm that computes them.

Because of its spatial logic, the branch of computer science originally dedicated to neural networks was called “computational geometry.” The paradigm of computational space or self-computing space shares common roots with the studies of the principles of self-organization that were at the center of post-WWII cybernetics, such as John von Neumann’s cellular automata (1948) and Konrad Zuse’s Rechnender Raum (1967).13 Von Neumann’s cellular automata are clusters of pixels, perceived as small cells on a grid, that change status and move according to their neighboring cells, composing geometric figures that resemble evolving forms of life. Cellular automata have been used to simulate evolution and to study complexity in biological systems, but they remain finite-state algorithms confined to a rather limited universe. Konrad Zuse (who built the first programmable computer in Berlin in 1938) attempted to extend the logic of cellular automata to physics and to the whole universe. His idea of “rechnender Raum,” or calculating space, is a universe that is composed of discrete units that behave according to the behavior of neighboring units. Alan Turing’s last essay, “The Chemical Basis of Morphogenesis” (published in 1952, two years before his death), also belongs to the tradition of self-computing structures.14 Turing considered molecules in biological systems as self-computing actors capable of explaining complex bottom-up structures, such as tentacle patterns in hydra, whorl arrangement in plants, gastrulation in embryos, dappling in animal skin, and phyllotaxis in flowers.15
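The neighbor-driven logic of self-computing space can be sketched with a Game-of-Life-style rule, a later and much simplified relative of von Neumann's automata, shown here only to illustrate the principle that each cell's next state depends solely on its neighbors:

```python
def step(grid):
    """One update of a Game-of-Life-style cellular automaton on a
    toroidal grid: each cell computes its next state from its eight
    neighbors, with no instruction arriving from outside the system."""
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and neighbors(r, c) in (2, 3))
             or (not grid[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

# A 'blinker': three live cells oscillating between a row and a column.
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
after = step(grid)
print([row[2] for row in after])  # → [0, 1, 1, 1, 0]
```

No cell “knows” the global figure being drawn; the oscillating pattern emerges purely from local spatial relations, which is the intuition that Zuse extended to physics as a whole.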

Von Neumann’s cellular automata and Zuse’s computational space are intuitively easy to understand as spatial models, while Rosenblatt’s neural network displays a more complex topology that requires more attention. Indeed, neural networks employ an extremely complex combinatorial structure, which is probably what makes them the most efficient algorithms for machine learning. Neural networks are said to “solve any problem,” meaning they can approximate the function of any pattern according to the Universal Approximation theorem (given enough layers of neurons and computing resources). All systems of machine learning, including support-vector machines, Markov chains, Hopfield networks, Boltzmann machines, and convolutional neural networks, to name just a few, started as models of computational geometry. In this sense they are part of the ancient tradition of ars combinatoria.16
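The difference that layered combination makes can be shown with the classic XOR counterexample: no single threshold unit can compute it, but a two-layer network of the same units can. The weights below are hand-chosen for illustration, not learned:

```python
def threshold(s):
    """Fire (1) or not (0), the Perceptron's decision logic."""
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    """A two-layer threshold network computing XOR, the function that a
    single perceptron cannot represent (the counterexample made famous
    by Minsky and Papert). Weights are hand-chosen for the example."""
    h1 = threshold(x1 + x2 - 0.5)     # hidden unit 1: OR
    h2 = threshold(-x1 - x2 + 1.5)    # hidden unit 2: NAND
    return threshold(h1 + h2 - 1.5)   # output: AND of the two

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 1, 1, 0]
```

Stacking layers multiplies the combinatorial power of the same simple units, which is one intuitive reading of why depth matters for the approximation capabilities mentioned above.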

Image from Hans Meinhardt, The Algorithmic Beauty of Sea Shells (Springer Science & Business Media, 2009).

4. The Automation of Visual Labor

Even at the end of the twentieth century, no one would have ever thought to call a truck driver a “cognitive worker,” an intellectual. At the beginning of the twenty-first century, the use of machine learning in the development of self-driving vehicles has led to a new understanding of manual skills such as driving, revealing how the most valuable component of work, generally speaking, has never been merely manual, but also social and cognitive (as well as perceptual, an aspect of labor still waiting to be located somewhere between the manual and the cognitive). What kind of work do drivers perform? Which human task will AI come to record with its sensors, imitate with its statistical models, and replace with automation? The best way to answer this question is to look at what technology has successfully automated, as well as what it hasn’t.

The industrial project to automate driving has made clear (more so than a thousand books on political economy) that the labor of driving is a conscious activity following codified rules and spontaneous social conventions. However, if the skill of driving can be translated into an algorithm, it will be because driving has a logical and inferential structure. Driving is a logical activity just as labor is a logical activity more generally. This postulate helps to resolve the trite dispute about the separation between manual labor and intellectual labor.17 It is a political paradox that the corporate development of AI algorithms for automation has made it possible to recognize in labor a cognitive component that had long been neglected by critical theory. What is the relation between labor and logic? This becomes a crucial philosophical question for the age of AI.

A self-driving vehicle automates all the micro-decisions that a driver must make on a busy road. Its artificial neural networks learn, that is, imitate and copy, the human correlations between the visual perception of the road space and the mechanical actions of vehicle control (steering, accelerating, stopping), as well as the ethical decisions taken in a matter of milliseconds when dangers arise (for the safety of persons inside and outside the vehicle). It becomes clear that the job of driving requires high cognitive skills that cannot be left to improvisation and instinct, but also that quick decision-making and problem-solving are possible thanks to habits and training that are not completely conscious. Driving remains essentially also a social activity, which follows both codified rules (with legal constraints) and spontaneous ones, including a tacit “cultural code” that any driver must subscribe to. Driving in Mumbai—it has been said many times—is not the same as driving in Oslo.

Obviously, driving summons an intense labor of perception. Much labor, in fact, appears mostly perceptive in nature, through continuous acts of decision and cognition that take place in the blink of an eye.18 Cognition cannot be completely disentangled from a spatial logic, and often follows a spatial logic in its more abstract constructions. Both observations—that perception is logical and that cognition is spatial—are empirically proven without fanfare by autonomous driving AI algorithms that construct models to statistically infer visual space (encoded as digital video of a 3-D road scenario). Moreover, the driver that AI replaces in self-driving cars and drones is not an individual driver but a collective worker, a social brain that navigates the city and the world.19 Just looking at the corporate project of self-driving vehicles, it is clear that AI is built on collective data that encode a collective production of space, time, labor, and social relations. AI imitates, replaces, and emerges from an organized division of social space (according first to a material algorithm and not the application of mathematical formulas or analysis in the abstract).

Animation from Chris Urmson’s TED talk “How a Driverless Car Sees the Road.” Urmson is the former chief engineer for Google’s Self-Driving Car Project. Animation by ZMScience.

5. The Memory and Intelligence of Space

Paul Virilio, the French philosopher of speed or “dromology,” was also a theorist of space and topology, for he knew that technology accelerates the perception of space as much as it morphs the perception of time. Interestingly, the title of Virilio’s book The Vision Machine was inspired by Rosenblatt’s Perceptron. With the classical erudition of a twentieth-century thinker, Virilio drew a sharp line between ancient techniques of memorization based on spatialization, such as the Method of Loci, and modern computer memory as a spatial matrix:

Cicero and the ancient memory-theorists believed you could consolidate natural memory with the right training. They invented a topographical system, the Method of Loci, an imagery-mnemonics which consisted of selecting a sequence of places, locations, that could easily be ordered in time and space. For example, you might imagine wandering through the house, choosing as loci various tables, a chair seen through a doorway, a windowsill, a mark on a wall. Next, the material to be remembered is coded into discrete images and each of the images is inserted in the appropriate order into the various loci. To memorize a speech, you transform the main points into concrete images and mentally “place” each of the points in order at each successive locus. When it is time to deliver the speech, all you have to do is recall the parts of the house in order.

The transformation of space, of topological coordinates and geometric proportions, into a technique of memory should be considered equal to the more recent transformation of collective space into a source of machine intelligence. At the end of the book, Virilio reflects on the status of the image in the age of “vision machines” such as the Perceptron, sounding a warning about the impending age of artificial intelligence as the “industrialisation of vision”:

“Now objects perceive me,” the painter Paul Klee wrote in his Notebooks. This rather startling assertion has recently become objective fact, the truth. After all, aren’t they talking about producing a “vision machine” in the near future, a machine that would be capable not only of recognizing the contours of shapes, but also of completely interpreting the visual field … ? Aren’t they also talking about the new technology of visionics: the possibility of achieving sightless vision whereby the video camera would be controlled by a computer? … Such technology would be used in industrial production and stock control; in military robotics, too, perhaps.

Now that they are preparing the way for the automation of perception, for the innovation of artificial vision, delegating the analysis of objective reality to a machine, it might be appropriate to have another look at the nature of the virtual image … Today it is impossible to talk about the development of the audiovisual … without pointing to the new industrialization of vision, to the growth of a veritable market in synthetic perception and all the ethical questions this entails … Don’t forget that the whole idea behind the Perceptron would be to encourage the emergence of fifth-generation “expert systems,” in other words an artificial intelligence that could be further enriched only by acquiring organs of perception.20

Ioannis de Sacro Busco, Algorismus Domini, c. 1501. National Central Library of Rome. Photo: Public Domain/Internet Archive. 

6. Conclusion

If we consider the ancient geometry of the Agnicayana ritual, the computational matrix of the first neural network Perceptron, and the complex navigational system of self-driving vehicles, perhaps these different spatial logics together can clarify the algorithm as an emergent form rather than a technological a priori. The Agnicayana ritual is an example of an emergent algorithm as it encodes the organization of a social and ritual space. The symbolic function of the ritual is the reconstruction of the god through mundane means; this practice of reconstruction also symbolizes the expression of the many within the One (or the “computation” of the One through the many). The social function of the ritual is to teach basic geometry skills and to construct solid buildings.21 The Agnicayana ritual is a form of algorithmic thinking that follows the logic of a primordial and straightforward computational geometry.

The Perceptron is also an emergent algorithm that encodes according to a division of space, specifically a spatial matrix of visual data. The Perceptron’s matrix of photoreceptors defines a closed field and processes an algorithm that computes data according to their spatial relation. Here too the algorithm appears as an emergent process—the codification and crystallization of a procedure, a pattern, after its repetition. All machine-learning algorithms are emergent processes, in which the repetition of similar patterns “teaches” the machine and causes the pattern to emerge as a statistical distribution.22

Self-driving vehicles are an example of complex emergent algorithms since they grow from a sophisticated construction of space, namely, the road environment as social institution of traffic codes and spontaneous rules. The algorithms of self-driving vehicles, after registering these spontaneous rules and the traffic codes of a given locale, try to predict unexpected events that may happen on a busy road. In the case of self-driving vehicles, the corporate utopia of automation makes the human driver evaporate, expecting that the visual space of the road scenario alone will dictate how the map will be navigated.

The Agnicayana ritual, the Perceptron, and the AI systems of self-driving vehicles are all, in different ways, forms of self-computing space and emergent algorithms (and probably, all of them, forms of the invisibilization of labor).

The idea of computational space or self-computing space stresses, in particular, that the algorithms of machine learning and AI are emergent systems that are based on a mundane and material division of space, time, labor, and social relations. Machine learning emerges from grids that continue ancient abstractions and rituals concerned with marking territories and bodies, counting people and goods; in this way, machine learning essentially emerges from an extended division of social labor. Despite the way it is often framed and critiqued, artificial intelligence is not really “artificial” or “alien”: in the usual mystification process of ideology, it appears to be a deus ex machina that descends to the world as in ancient theater. But this hides the fact that it actually emerges from the intelligence of this world.

What people call “AI” is actually a long historical process of crystallizing collective behavior, personal data, and individual labor into privatized algorithms that are used for the automation of complex tasks: from driving to translation, from object recognition to music composition. Just as much as the machines of the industrial age grew out of experimentation, know-how, and the labor of skilled workers, engineers, and craftsmen, the statistical models of AI grow out of the data produced by collective intelligence. Which is to say that AI emerges as an enormous imitation engine of collective intelligence. What is the relation between artificial intelligence and human intelligence? It is the social division of labor.


Matteo Pasquinelli (PhD) is Professor in Media Philosophy at the University of Arts and Design, Karlsruhe, where he coordinates the research group KIM (Künstliche Intelligenz und Medienphilosophie / Artificial Intelligence and Media Philosophy). For Verso he is preparing a monograph on the genealogy of artificial intelligence as division of labor, which is titled The Eye of the Master: Capital as Computation and Cognition.


Paul Virilio, La Machine de vision: essai sur les nouvelles techniques de representation (Galilée, 1988). Translated as The Vision Machine, trans. Julie Rose (Indiana University Press, 1994), 12.


The Dutch Indologist and philosopher of language Frits Staal documented the Agnicayana ritual during an expedition in Kerala, India, in 1975. See Frits Staal, AGNI: The Vedic Ritual of the Fire Altar, vol. 1–2 (Asian Humanities Press, 1983).


Kim Plofker, “Mathematics in India,” in The Mathematics of Egypt, Mesopotamia, China, India, and Islam, ed. Victor J. Katz (Princeton University Press, 2007).


See Wilhelm Worringer, Abstraction and Empathy: A Contribution to the Psychology of Style (Ivan R. Dee, 1997). (Abstraktion und Einfühlung, 1907).


For an account of the mathematical implications of the Agnicayana ritual, see Paolo Zellini, La matematica degli dèi e gli algoritmi degli uomini (Adelphi, 2016). Translated as The Mathematics of the Gods and the Algorithms of Men (Penguin, forthcoming 2019).


See Frits Staal, “Artificial Languages Across Sciences and Civilizations,” Journal of Indian Philosophy 34, no. 1–2 (2006).


Jean-Luc Chabert, “Introduction,” in A History of Algorithms: From the Pebble to the Microchip, ed. Jean-Luc Chabert (Springer, 1999), 1.


Jean-Luc Chabert, “Introduction,” 1–2.


Gilles Deleuze and Félix Guattari, Anti-Oedipus: Capitalism and Schizophrenia, trans. Robert Hurley (Viking, 1977), 145.


See Ubiratàn D’Ambrosio, “Ethno Mathematics: Challenging Eurocentrism,” in Mathematics Education, eds. Arthur B. Powell and Marilyn Frankenstein (State University of New York Press, 1997).


Diane M. Nelson, Who Counts?: The Mathematics of Death and Life After Genocide (Duke University Press, 2015).


Frank Rosenblatt, “The Perceptron: A Perceiving and Recognizing Automaton,” Technical Report 85-460-1, Cornell Aeronautical Laboratory, 1957.


John von Neumann and Arthur W. Burks, Theory of Self-Reproducing Automata (University of Illinois Press, 1966). Konrad Zuse, “Rechnender Raum,” Elektronische Datenverarbeitung, vol. 8 (1967). As book: Rechnender Raum (Friedrich Vieweg & Sohn, 1969). Translated as Calculating Space (MIT Technical Translation, 1970).


Alan Turing, “The Chemical Basis of Morphogenesis,” Philosophical Transactions of the Royal Society of London B 237, no. 641 (1952).


It must be noted that Marvin Minsky and Seymour Papert’s 1969 book Perceptrons (which superficially attacked the idea of neural networks and nevertheless caused the so-called first “winter of AI” by stopping all research funding into neural networks) claimed to provide “an introduction to computational geometry.” Marvin Minsky and Seymour Papert, Perceptrons: An Introduction to Computational Geometry (MIT Press, 1969).


See the work of thirteenth-century Catalan monk Ramon Llull and his rotating wheels. In the ars combinatoria, an element of computation follows a logical instruction according to its relation with other elements and not according to instructions from outside the system. See also DIA-LOGOS: Ramon Llull's Method of Thought and Artistic Practice, eds. Amador Vega, Peter Weibel, and Siegfried Zielinski (University of Minnesota Press, 2018).


Specifically, a logical or inferential activity does not necessarily need to be conscious or cognitive to be effective (this is a crucial point in the project of computation as the mechanization of “mental labor”). See the work of Simon Schaffer and Lorraine Daston on this point. More recently, Katherine Hayles has stressed the domain of extended nonconscious cognition in which we are all implicated. Simon Schaffer, “Babbage’s Intelligence: Calculating Engines and the Factory System,” Critical Inquiry 21, no. 1 (1994). Lorraine Daston, “Calculation and the Division of Labor, 1750–1950,” Bulletin of the German Historical Institute, no. 62 (Spring 2018). Katherine Hayles, Unthought: The Power of the Cognitive Nonconscious (University of Chicago Press, 2017).


According to both Gestalt theory and the semiotician Charles Sanders Peirce, vision always entails cognition; even a small act of perception is inferential—i.e., it has the form of an hypothesis.


School bus drivers will never achieve the same academic glamor of airplane or drone pilots with their adventurous “cognition in the wild.” Nonetheless, we should acknowledge that their labor provides crucial insights into the ontology of AI.


Virilio, The Vision Machine, 76.


As Stall and Zellini have noted, among others, these skills also include the so-called Pythagorean theorem, which is helpful in the design and construction of buildings, demonstrating that it was known in ancient India (having been most likely transmitted via Mesopotamian civilizations).

In fact, more than machine “learning,” it is data and their spatial relations “teaching.”

The Identity Paradigm

Tony Gregory, intercultural psychologist

In 1962, Thomas Kuhn published the most important intellectual work of the 20th century, The Structure of Scientific Revolutions. In it he argued against the long-held belief that scientific progress was an uninterrupted and steady continuum. He posited instead that progress came in jerks and starts – long periods of calm that were managed according to widely accepted beliefs and customs, interspersed with brief violent periods of enormous change, like the Renaissance, when all that had been accepted before was challenged and frequently overthrown. He called these violent brief periods 'paradigm shifts,' and since that time the concept has become an accepted part of how we see our world.

It was not long after that that Alvin Toffler wrote Future Shock, in which he argued not only that Kuhn was correct, but that the periods of relative stability between the brief and violent episodes of change were becoming shorter – so short, in fact, that it challenged our ability as humans to adjust to one set of revolutionary changes before another set was already upon us.

He gave as an example the impact of railroads on history. When Julius Caesar marched his legions south from France to Italy to conquer Rome in the first century BCE, it took more or less the same time as it took Napoleon to cover the same distance some eighteen hundred years later. But only forty years after that, the railroad linking France and Italy was completed, cutting the journey from two months to three days. When Lincoln was assassinated in 1865, the news reached San Francisco by noon the next day. I saw the assassination of Robert Kennedy live – at the moment it happened – a century later. There are many examples you can give, but the impact is similar: changes coming at such a fast pace produce stress, and stress is the handmaiden of paradigm change.

One of the most important insights about paradigm shifts is that the animals that did well following the rules of the previous paradigm did not do well in the new one if they continued to follow those same rules, because all the rules had changed (just ask the dinosaurs). People who owned stables during the age of agriculture were no longer at the center of things when the automobile replaced the horse as the accepted means of transportation. The message is clear – if the paradigm changes and you don't, your future looks bleak.

But it is important to point out that not all paradigm changes are the same. The industrial revolution was a definite change in paradigms, and economic power in the world shifted dramatically from an emphasis on ownership of land to an emphasis on access to raw materials and the means of production. Yet the family structure survived the change, as did religion and nationalism.

The change from the ice age to the Holocene epoch, which we presently inhabit, was also a paradigm shift, but one far more powerful than the movement from agriculture to industry. When the glaciers finally retreated and the planet warmed, our species (Homo sapiens, in case you forgot) spread around the globe and our numbers exploded, because it became possible for us to sustain ourselves in far larger groups. That, in turn, allowed us to do things we had never done before, like build permanent dwellings and use the land to provide us with food on a continual basis – what we came to call agriculture.

We actually started recording events then, some ten thousand years ago – we call it history. The concentration of our species in such large numbers created a need to order things, to solve disputes and regulate affairs, and that led to the birth of customs, religion and culture and the domestication of animals. I could go on but I think you get the point – the change was so dramatic that nothing that had been true before remained. It was a transformation.

The other thing to point out is that all of this happened slowly, over a period of many lifetimes. The people who came south after the glaciers retreated were long gone before the first cities were built and the first empires were formed. The Akkadian Empire, the first human empire, was formed in Mesopotamia some 4,300 years ago – thousands of years after the glaciers began to retreat. We had time to adjust, time to consider how to respond to our new reality, time to try different ways of approaching things, and time to fail and try something else and still survive (unlike the Neanderthals).

Now, at the beginning of what we call our twenty-first century since we started writing stuff down, it appears that we are on the verge of a new paradigm shift, and possibly one as dramatic as that last big one when the ice retreated. If that is true, then we should remember that insight from so long ago – nothing that went before remained. That is the mark of a complete transformation.

It's tough for us to think about that, because whether we like it or not we are children of our current paradigm, formed by its assumptions, educated in its customs and brainwashed accordingly. We find it difficult to think of ourselves without these things we are wedded to. Look, when Copernicus stepped forward in 1543 and said "Uh… I just want to point out that the earth is not the center, it's the sun," even very smart people had a hard time wrapping their heads around that. It took literally a hundred years before it was accepted as scientific truth (except in parts of the United States, where science is still not accepted to this day). That is called denial of reality, and back then a lot of people were in that state for an extended period of time.

So when I step up and suggest that everything is about to change, not just the small stuff, I imagine that a lot of people – smart people – will find that hard to accept. Nevertheless, I think our ice age is about to end, and, in the spirit of Alvin Toffler, I think the new paradigm will be upon us so quickly that we will not have a lot of time to react. So, with that proviso, here is my preview of the next paradigm. Please forgive me if not all of the changes are of the same magnitude and if I leave some out. I, too, am a child of our current paradigm, and like everyone else my ability to see ahead is both limited and subjective.

We have become accustomed to identifying ourselves in relation to other people, to our geographical location, to our membership in some political group (a nation), to our occupation, and to what we believe, which the more extreme among us label 'the truth.' So, I say I am a father, a husband, a member of a certain family, a citizen of a community and a nation, and I work as a psychologist – and all of that is about to change.


Let's start with the easy one – work. There is not enough of it to go around. In our current paradigm we regard unemployment as some sort of negative state, a disease that needs to be treated. We talk about work moving around the world and call it outsourcing. We act as if the lack of jobs in North America means those same jobs have somehow magically moved to Asia, and this belief is the cause of a great deal of unrest. None of that is true.

What is true is that human work, as we have come to know it during the last three centuries, is disappearing. What was once done by human labor is now done by machines. In its 2020 report on automation, the World Economic Forum predicted that by the year 2025, 53% of work would be performed by humans and 47% by machines – a fourteen-percentage-point increase for machines over the year the report was issued. If you carry that ratio forward, all work will be done by machines before the year 2060. But forget the numbers game. The impact of automation is that work will cease to be the center of life, as it has been during the last three centuries.
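For the curious, the "carry that ratio forward" arithmetic can be sketched in a few lines. This is a minimal linear extrapolation using the figures quoted above (47% machine share by 2025, fourteen points above the 2020 baseline); the constant-rate assumption is the extrapolation's, not the WEF's:

```python
# Linear extrapolation of the machine share of work, from the figures
# quoted in the text: roughly 33% of work done by machines in 2020,
# a predicted 47% by 2025. Assumes the rate of change stays constant.

machine_share_2020 = 33.0   # percent of work done by machines, 2020
machine_share_2025 = 47.0   # predicted percent by 2025
rate_per_year = (machine_share_2025 - machine_share_2020) / 5  # 2.8 points/yr

# Year in which the machine share would reach 100% if the trend held
years_to_full = (100.0 - machine_share_2025) / rate_per_year
year_all_machine = 2025 + years_to_full
print(round(year_all_machine))  # ≈ 2044, comfortably "before the year 2060"
```

A straight line is of course the crudest possible model; the point is only that even this naive projection lands well inside the century.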

It's not only that people will not physically move to find work, like they moved from the country to the cities at the start of the industrial revolution. It means there will be no place to move to. The family will not have to sacrifice some part of their life so that the wage earner can do his job, there simply will be no wage earner. People's income from work will not have to be supplemented by government spending when it is not enough because there will be no income from work. That is the nature of a complete transformation.

Income will not be apportioned on the basis of achievement (higher salary for work that is valued more highly) but existentially – you will not get money because of what you do but rather because of who you are. Iran was the first country to institute a form of universal basic income, in 2010, and the idea has since been piloted in northern Europe. In an economic sense it is inevitable. If people depend on work for income, when there is no work, people starve, and when people starve, they revolt and topple governments (just ask Louis XVI). Every government on earth will take steps to prevent that.

Once work is no longer a benchmark of identification, the status distributed on the basis of occupation or position will cease to exist. A manager will not be more important than a laborer; a doctor will not have higher status than a janitor, because these jobs will cease to exist. The subtle but unmistakable prejudice of assigning credibility based on occupation (doctors must be smarter than gardeners) will slowly fade away, and people will be judged on who they really are rather than the work they perform.

Organizations will look completely different, and all the silly talk about organizational 'culture' will cease (thank God), because machines have no need of culture. The center of life will no longer be the place of work; there will be no traffic jams, no daily disruption of activities because of the physical need to move from one place to another; and identity will have to emanate from something other than where you work, because there will be no such thing.

Some things will remain. There will probably be teachers to some extent, though most instruction will be provided by machines, and there will be caretakers for more intimate human contact, though again, basic medical functions will be fully automated. Entertainment may remain a human occupation in some form, though much of the most popular entertainment is already animation (Disney alone accounted for roughly a third of box office receipts in 2019, and the most popular films tend to feature cartoon characters rather than human beings).

The clincher in all of this is time. We had eons to adjust from a nomadic lifestyle to living in permanent communities. We will have just decades to adjust from a world with work to a world without work, and it will leave literally billions of people gasping to find something to do. Some people like to compare what will happen to that old story about putting the frog in a pot of lukewarm water and heating it slowly so the frog doesn't notice until he's cooked, but that isn't what will happen. The changes will be so fast that we will feel ourselves cooking, and it won't be pleasant.


Family has been the anchor of our identity for longer than work, probably for the last fifteen to twenty thousand years. It is without doubt the most emotionally charged part of our identity, and most of our great works of literature deal with it, from Oedipus to Anna Karenina. There is a natural inclination for a species to nurture its young; this is not exclusive to mammals. What is exclusive is the tendency of mammals to remain in units defined by a common bloodline for an extended period of time, and among the mammals we humans are the champs. We extend our families for generations and we have made them the center of our lives, once again for good and ill.

Part of the reason for this is survival. In the beginning, if you were sick or injured you would not survive unless there were other people around you who cared enough to tend to you. More recently, the bond of survival has been not exclusively physical but also economic. Especially in the current generation, children in the West in particular are less well-off financially than their parents, and without that support they would not make it. As Robert Frost said, home is the place where, when you have to go there, they have to take you in.

There is an attendant pride that accompanies family identity, particularly when the family is adept at maintaining either a certain status (aristocracy, for example) or occupation (the military, for example). So there are families of hostlers, shoemakers, haberdashers, iron-workers, doctors, and so on, and the connection between familial and occupational identity makes these families stronger over time. They exert pressure on their young to 'follow in their footsteps' and to adopt their ideals and beliefs, believing that this continuity has great value.

The industrial revolution weakened this bond for all but the wealthiest. It displaced millions of people, who found it necessary to move away from their place of origin to another community in order to secure employment. The division of labor into employers and employees weakened the family ties of the latter, and in millions of cases made it impossible for them to maintain the occupation or trade of previous generations. The evolution of humanity from family-based to community-based dates from this time, about three hundred years ago.

But the real dismemberment of the family has been prosperity. As people become wealthier, at the top of their agenda is the desire to distance themselves from others. This has now arrived at a situation in which more than one out of every four households in the United States is a single-person residence, and the situation in many major European cities is even more pronounced. In popular culture the familial bond has been replaced by the comradely bond, i.e., the people you choose are closer to you than the people of your own blood. In turn, this has led to a decrease in marriages and birthrates, and it becomes a self-propagating loop.

The coming identity paradigm holds a future in which the individual will replace the family as the basic social unit. Clearly, this is such a revolution that it is difficult for most people to imagine, but it is on the way, supported by the development of virtual relationships as a replacement for close physical relationships, meaning the sensation of being close to a person without ever being in the same room with him or her.

This is already well underway, egged on by social media, which encourages the individual to remain isolated from others in a physical sense in preference to a virtual connection. It is a common sight now to see a group of people 'together' in a public place, not speaking to each other but rather conducting a dialogue by cell phone with somebody who is not in the room.

Unlike the loss of work, which is a phenomenon not dictated or controlled by personal choice, this movement toward the individual in place of the family unit will take time, tempered by economic factors as well as strong cultural opposition, but it is coming nonetheless and will be the norm for most places on the planet by the end of the century.  There are already sections of big cities like Tokyo that are intended for the exclusive use of young people, as well as adult communities restricted to those over the age of 65.

Multi-generational living arrangements, particularly beyond the nuclear family, are already largely a thing of the past in much of the developed world. The cultural consequences of this change are immense, and frankly frightening for me to contemplate. Practically, it means that we will need to find new ways to transfer property and assign responsibility (a designated guardian will replace the parent). Emotionally, we will go through a hard time when we dismember old axioms like 'blood is thicker than water,' because quite clearly, with all of its attraction, collegial ties will never take on the commitment that blood ties have. In the new identity paradigm, the family will disappear.


Belonging is such a central pillar of our current paradigm that it has been enshrined as a key component of mental health. People who shun contact with others are not just considered anti-social; they are labeled as mentally unwell. Mass movements were a central feature of the last two centuries, both political and social. Whether they were as benign as scouting organizations or as controversial as political protests, being part of some action which involved thousands of other people gathering together was a mainstay of life in every country on the planet. This is now coming to an end.

People will still voice their opinions, but they will do so online. Even dating has become a virtual activity rather than a night out; you check out a person's profile in the privacy of your own home long before you meet them. The same is true of voting and all forms of political activity. Not only can it be done from home, it is being done from home. The key to watch here is sporting events, one of the more acceptable reasons to mix physically with thousands of other people. When people begin to prefer viewing the events on a screen rather than sitting in a stadium, public attendance will be terminated because it will become unprofitable.

Again, there will still be instances where thousands if not millions of people will express their opinions on a common topic, but this will be done in real time, surveys conducted by pressing a button on your phone rather than driving to a common location.

The mental health community will be forced to revise its conclusions about what it means to be alone. Indeed, loneliness itself will need to be redefined. Are you really alone (not lonely) if you are physically removed from everyone else but your cell phone is by your side? There will be a whole new list of mental conditions when the common living situation is one person alone. Clearly, there will be fewer problems resulting from interpersonal conflict (like domestic violence) because there will be fewer people living together. On the other hand, a whole new list of ailments will pop up, because there will not be that other person in the room who can tell you when you are wrong. It will be a new world.


Our present paradigm has been flavored with our conceit that we are masters of the world, that we can bend the natural laws to our will, that we have some sort of irresistible control over everything. I suppose the climate crisis is enough evidence to demonstrate what a mistake that was, but there is something even closer to home that will shake us to our roots in the new paradigm – we are no longer calling the shots.

Artificial intelligence will be the driving force in the new paradigm, and algorithms will make decisions in a distinctly different way than human beings do. The lead elements of this new force are already changing the buying and selling of stocks and bonds and the application of medical procedures in hospitals all over the world. In the space of a few decades, all transportation will be directed by artificial intelligence, and drones and driverless vehicles will be the norm (there will be no more human drivers or pilots, because they are too dangerous). Manufacturing is already well down this road, and automation there will be complete by the middle of the century.

AI will take the lead in education and customer service, and the last pathetic attempts to suggest that the room for human work is just moving to other occupations will fall silent. In the new paradigm we will cease to make decisions about anything other than what we want personally, and even that will be limited. This is the one that scares me the most, but unless I take advantage of the next big change I won't be around, so it won't matter.

Human beings are used to making decisions. For a long time our ability to do this well was intimately tied to our survival. The idea that this will be taken from us because AI will do it better is a conclusion that many of us will find hard to swallow, and we will be reaching for that phantom limb long after it has been removed. Old people who believe they can drive just as well at the age of eighty as they did at twenty are a hint of what it will feel like. When the reality sets in that this is not true, it will likely be accompanied by a depression that will be very difficult to deal with, maybe even tied to the meaning of life. It will be a global emotional crisis that more than likely will trigger new forms of belief.


Yuval Harari has been writing for some time about the conquest of death. At present, eight vital organs can be transplanted: the heart, kidneys, liver, lungs, pancreas, intestine, thymus and uterus. Artificial limbs are now commonplace, as are corneal transplants, artificial bladder implants, inner ear implants, and deep brain stimulation. Replacing the entire body, other than some higher functions of the brain, is now a distinct possibility before the middle of the century.

That means that your body no longer defines who you are, nor are you limited to a specific number of years before you 'die.' 'Life' will have to be redefined when it is not followed by the modifier 'time.' Immortality is a daunting moral and philosophical challenge, but it is no longer a physical one. It is very likely that the possibility of living longer will have a dramatic effect on birthrates, as the idea of passing the torch to a new generation – the logic Richard Dawkins described in The Selfish Gene – will become a remnant of thinking from the previous paradigm, because that thinking is based on the assumption that the existing organism cannot sustain itself beyond a certain date.

No doubt the conquest of mortality will also lead to significant changes in relationships that were previously thought of (at least in theory) as lifetime commitments, like marriage and even parenthood. It will also be marked by the development of a whole new industry dedicated to the total replacement of the body, possibly with gender changes thrown in for a little spice – live eighty years as a man and another eighty years as a woman.

Immortality combined with artificial intelligence will demand an entire rethinking of the role of Homo sapiens on the planet, as well as of how we define spirituality (if all of us are immortal, how does this change the status of deities?). It is a daunting prospect. Things that we regarded as one-time decisions will lose that distinction, and almost everything will become choice-determined. Death itself will become a decision, not an inevitability, and this alone will completely reshape philosophy and morality.


For the past several centuries we have defined ourselves as members of one nationality or another to such an extent that human beings were willing to die to protect or extend that abstract concept, something that commanded our loyalty even more than family or religion.

Most of us tend to forget our previous participation in smaller political units like tribes and regions, and for the most part these remain romantic abstractions, lacking the full force of what it means to be a citizen of a country. Those pictures of Uncle Sam pointing his finger at you and calling you to enlist are not just propaganda; they are the expression of the country's belief that it has the right to demand that its citizens give their lives to protect it. In the country in which I live this is a reality, and the state is by law authorized to exert its domain over the private lives of its citizens.

Because of the maximum commitment it involves, most of us are highly emotional about what we call our national identity. Yet nations, too, may not be a part of the next paradigm, as difficult as that is to believe. There is a contractual need for people to align themselves with a large political entity that manages an infrastructure. We need water, electricity, transportation systems and supply chains, and these are arrangements beyond the power or resources of any individual. But they are definitely contractual, and by no means the exclusive right or ability of nations.

In practice – not theory, practice – power companies in the United States can supply energy to all the homes of North America and maybe South America as well. The practice of ending the power grid at a country's borders is a political decision, not a technological one.

There is also no practical reason why a person living in Caracas cannot contract with a company halfway around the globe, say in India, for the supply of needed services, if that supplier is capable of meeting the demand. When it becomes clear that services formerly reserved to nations – security, welfare, transportation, health, energy, waste disposal, and more – can be supplied to individuals by a more effective alternative, then the grip of nations on individuals will slip.

The people of Catalonia do not want to be part of Spain, and the people of California have their doubts about the United States, yet this dissatisfaction with the larger national unity is still just a little step, the dismantling of larger political bodies into smaller ones.

There is a real possibility that the next paradigm holds a much more dramatic change in store – the alliance of the individual with an organizing structure beyond nations. Instead of a process of unification that produces ever bigger political bodies, think of it in the other direction – thousands of service providers making contact with consumers directly, on a non-geographical basis, without using a government as an agent.

So, for example, the person living in London might receive his mail from a supplier in Delhi, his power from a supplier in Norway, his security from a company in Scotland, and his health from an organization in Switzerland. He may still consider himself English, but this will have more to do with his physical surroundings than with the political structure associated with it.

Quite clearly such a dramatic change has immeasurable implications for property ownership and civil legislation of every kind, and the number of lawyers required to work it out I don't even want to think about, but the point is that on a practical level it is indeed possible. It is only the abstract concept of nations for which so many people laid down their lives in the previous century that keeps it from happening. Nations have traditionally promoted themselves by their opposition to other nations, a practice which was expensive and bloody (we are better than they are; they want to kill us, so let's kill them first). If there is a business model that proves to be much more cost-efficient than the national one (and less bloody), it will come to pass, and within the next one hundred years, though I know how hard that is to believe. Yes, nations may be a thing of the past.

There will be a lot of gnashing of teeth when contemplating the alternatives, and there will remain a true need for the collection of public money in order to finance projects for the good of all (taxes), and there will always be disagreements over decisions made and the need to handle the losers so that they do not act to disrupt the system – all of that is true, but there is no natural law that says this must be the work of nations. The fact is that many nations are artificial in the extreme, the deformed children of colonialism, places like Pakistan and India and many states in Africa. The attempt to supplant such constructions with something else more effective is a positive idea, and it will be pursued.


The final pillar of identity that will be challenged in the new paradigm is belief. For the last millennium, many individuals have defined who they are as members of some religious movement, with Christianity and Islam being the most prominent recent examples. More blood has been spilled over the last millennium trying to sway different parts of the world to one religion or another than over any other cause. This was challenged half a millennium ago, when Christianity came apart into the disparate elements of Protestantism and Catholicism – a fracturing Islam had known since its very first century, with the split between Sunni and Shia. Still, many nations are defined by their religion. There are more than 80 nations today that officially give preference to one religion over another, including the one in which I reside.

Yet that, too, will be challenged by the impact of the new identity paradigm. In 2020, church membership in the United States dropped below 50% for the first time since Gallup began reporting it. The American Mosque Survey reported a similar decline in the number of African Americans attending mosques in the United States. Similar situations are found in Europe. The Muslim population in Asia is still growing, but at a slower rate than half a century ago. Christianity in Latin America is becoming increasingly Pentecostal and less Catholic.

This does not mean that in the new paradigm religion will play no role, but it does seem to indicate that the role will be much more individualized and much less public. In other words, the practice of mass movements of people professing the same belief who attempt to forcibly take over various parts of the world to install that belief seems to be coming to an end. It will take some time to realize that, but certainly most everyone can see that religious leaders today, of whatever ilk, are less influential in their ability to sway global events than they were even a hundred years ago.

Nations like Iran may still claim some sort of religious intent in their dealings with other nations, but this will become much less convincing during the next few decades, and most people will see it for what it really is – a political movement masquerading as a belief. A recent survey conducted in Iran suggested that only about 40% of the country identifies itself as actively Muslim, in contrast to the official state claim of 99%.


Imagine for a moment a human being who is not defined by his nationality, place in a family, age, or membership in a religion, race, occupation, status or gender. How, then, is he to be defined? Purely by his or her actions, emotions and thoughts, and what he or she makes from them. It would be true individuality, an identity that would make grouping impossible and therefore defy prejudice or assumptions. You would need to assess each person you meet in depth to really get to know them, because there would be no basis on which to make assumptions.

Patterns of course would eventually develop, they always do, but the base for these patterns would be different. We will no longer hear things like "all women are…" or "Blacks are always…" or "Jews all are…" because there will be no meaning to these old distinctions. It would be like saying all Huguenots are the same or all Wares are the same, because these groups no longer exist. Some people will think alike, have the same taste, wear similar fashions, believe similar things, but those like-minded people will come from a wide variety of what used to be called mutually exclusive groups in the old paradigm, our paradigm.

I know that these observations may make some people uncomfortable; I know they make me uncomfortable. We are creatures of our times, and many of us have gotten ahead by following closely the rules that our paradigm gave us. So why is it that we need a new paradigm when so many of us are comfortable with the one we have even with all of its flaws?

Well, I don't think anyone did a survey of the woolly mammoths before the end of the ice age. It turned out that the paradigm shift was beyond their control, and their extinction was one of its unfortunate consequences. The truth is that many of the decisions we made over the last few centuries have consequences that we neither intended nor wanted, but they are consequences nonetheless. Who could have predicted that prosperity would lead to a desire to separate and not to join? Yet this is where the evolution of our species has led us – to a complete redefinition of who we are. We are subject to the consequences of our own actions, intentional or not.

I suppose that in the middle of the feudal millennium many smart people would have found it hard to believe that there could one day be a world without masters or peasants, but it came to pass. Similarly, many of us may find it hard to believe today that there could be a world without marriage, without the concept of children as the property of their parents until a certain age, or without the idea that people have a duty to sacrifice their lives for a nation's aspirations, but it is equally likely that these things too will come to pass.

I guess the real question is whether we will end up like the woolly mammoths, buried in the tundra to be excavated years hence by some other species that made the transformation to the new paradigm more successfully than we did, or whether we will somehow transform ourselves to fit the new rules and realities... Time will tell.

But get ready. The first winds of the new paradigm are already whipping up the leaves around us. There will be rain after that and thunder and lightning. It will be a real storm, one like we have never experienced before. It won't work to close all the shutters and wait for the storm to pass, because this is a transformation, not a period of chaos after which everything will return to what it was before. This is the identity paradigm, and it is the invitation to define anew who we are.

Imagine there's no heaven

It's easy if you try

No hell below us

Above us, only sky

Imagine all the people

Livin' for today


Imagine there's no countries

It isn't hard to do

Nothing to kill or die for

And no religion, too

 -John Lennon




VI3: Philosophy of Computing and Information Technology


Philip Brey, Johnny Hartz Søraker, in Philosophy of Technology and Engineering Sciences, 2009

Philosophy has been described as having taken a “computational turn,” referring to the ways in which computers and information technology throw new light on traditional philosophical issues, provide new tools and concepts for philosophical reasoning, and pose theoretical and practical questions that cannot readily be approached within traditional philosophical frameworks. As such, computer technology is arguably the technology that has had the most profound impact on philosophy. Philosophers have discovered computers and information technology (IT) as research topics, and a wealth of research is taking place on philosophical issues in relation to these technologies. The research agenda is broad and diverse: issues studied include the nature of computational systems, the ontological status of virtual worlds, the limitations of artificial intelligence, philosophical aspects of data modeling, the political regulation of cyberspace, the epistemology of Internet information, ethical aspects of information privacy and security, and many more.

5.6 Cyborgs and virtual subjects

Information technology has become so much part of everyday life that it is affecting human identity (understood as character). Two developments have been claimed to have a particularly great impact. The first of these is that information technologies are starting to become part of our bodies and function as prosthetic technologies that take over or augment biological functions, turning humans into cyborgs, and thereby altering human nature. A second development is the emergence of virtual identities, which are identities that people assume online and in virtual worlds. This development has raised questions about the nature of identity and the self, and their realization in the future.

Philosophical studies of cyborgs have considered three principal questions: the conceptual question of what a cyborg is, the interpretive and empirical question of whether humans are or are becoming cyborgs, and the normative question of whether it would be good or desirable for humans to become cyborgs. The term “cyborg” has been used in three increasingly broad senses. The traditional definition of a cyborg is that of a being composed of both organic and artificial systems, between which there is feedback-control, with the artificial systems closely mimicking the behavior of organic systems. On a broader conception, a cyborg is any individual with artificial parts, even if these parts are simple structures like artificial teeth or breast implants. On a still broader conception, a cyborg is any individual who relies extensively on technological devices and artifacts to function. On this conception, everyone is a cyborg, since everyone relies extensively on technology.

Cyborgs have become a major research topic in cultural studies, which has brought forth the area of cyborg theory, the multidisciplinary study of cyborgs and their representation in popular culture [Gray, 1996]. In this field the notion of the cyborg is often used as a metaphor to understand aspects of contemporary — late modern or postmodern — society's relationship to technology, as well as to the human body and the self. The advance of cyborg theory has been credited to Donna Haraway, in particular her essay “Manifesto for Cyborgs” [Haraway, 1985]. Haraway claims that the binary ways of thinking of modernity (organism-technology, man-woman, physical-nonphysical and fact-fiction) trap beings in supposedly fixed identities and oppress those beings (animals, women, blacks, etc.) who are on the wrong, inferior side of binary oppositions. She believes that the hybridization of humans and human societies, through the notion of the cyborg, can free the oppressed by blurring boundaries and constructing hybrid identities that are less vulnerable to the trappings of modernistic thinking (see also [Mazlish, 1993]).

Haraway believes, along with many other authors in cyborg theory (cf. [Gray, 2004; Hayles, 1999]), that this hybridization is already occurring on a large scale. Many of our most basic concepts, such as those of human nature, the body, consciousness and reality, are shifting and taking on new, hybrid, informationalized meanings. Coming from the philosophy of cognitive science, Andy Clark [2003] develops the argument that technologies have always extended and co-constituted human nature (cf. [Brey, 2000]), and specifically human cognition. He concludes that humans are “natural-born cyborgs” (see also the discussion of Clark in Section 3.6).

Philosophers Nick Bostrom and David Pearce have founded a recent school of thought, known as transhumanism, that shares the positive outlook on the technological transformation of human nature held by many cyborg theorists [Bostrom, 2005; Young, 2005]. Transhumanists want to move beyond humanism, which they commend for many of its values but fault for its belief in a fixed human nature. They aim to increase human autonomy and happiness and to eliminate suffering and pain (and possibly death) through human enhancement, thus achieving a trans- or posthuman state in which bodily and cognitive abilities are augmented by modern technology.

Critics of transhumanism and human enhancement, like Francis Fukuyama, Leon Kass, George Annas, Jeremy Rifkin and Jürgen Habermas, oppose tinkering with human nature for the purpose of enhancement. Their position that human nature should not be altered through technology has been called bioconservatism. Human enhancement has been opposed for a variety of reasons, including claims that it is unnatural, undermines human dignity, erodes human equality, and can do bodily and psychological harm [DeGrazia, 2005]. Currently, there is an increasing focus on ethical analyses of specific enhancements and prosthetic technologies that are in development, including ones that involve information technology [Gillett, 2006; Lucivero and Tamburrini, 2008]. James Moor [2004] has cautioned that there are limitations to such ethical studies. Since ethics is determined by one's nature, he argues, a decision to change one's nature cannot be settled by ethics itself.

Questions concerning human nature and identity are also being asked anew because of the coming into existence of virtual identities [Maun and Corruncker, 2008]. Such virtual identities, or online identities, are social identities assumed or presented by persons in computer-mediated communication and virtual communities. They usually include textual descriptions of oneself and avatars, which are graphically realized characters over which users assume control. Salient features of virtual identities are that they can differ from the corresponding real-world identities, that persons can assume multiple virtual identities in different contexts and settings, that virtual identities can be used by persons to emphasize or hide different aspects of their personality and character, and that they usually do not depend on or make reference to the user's embodiment or situatedness in real life. In a by now classical (though also controversial) study of virtual identity, psychologist Sherry Turkle [1995] argues that the dynamics of virtual identities appear to validate poststructuralist and postmodern theories of the subject, which hold that the self is constructed, multiple, situated, and dynamic. The next step is to claim that behind these different virtual identities there is no stable self, but rather that these identities, along with other projected identities in real life, collectively constitute the subject.

The dynamics of virtual identities have been studied extensively in fields like cultural studies and new media studies. Assessments have mostly been positive: people can freely construct their virtual identities, assume multiple identities in different contexts, and explore different social identities to overcome oppositions and stereotypes; virtual identities stimulate playfulness and exploration; and traditional social identities based on categories like gender and race play a lesser role in cyberspace [Turkle, 1995; Bell, 2001]. Critics like Dreyfus [2001] and Borgmann [1999], however, argue that virtual identities promote inauthenticity and the hiding of one's true identity, and lead to a loss of embodied presence, a lack of commitment and a shallow existence. Taking a more neutral stance, Brennan and Pettit [2008] analyze the importance of esteem on the Internet, and argue that people care about their virtual reputations even if they have multiple virtual identities. Matthews [2008], finally, considers the relation between virtual identities and cyborgs, both of which are often supported and denounced for quite similar reasons, namely their subversion of the concept of a fixed human identity.