
The holographic principle: Study

In short, the holographic principle states that it is the area A of a surface, and not the volume, that constrains the amount of information in the region it borders. The holographic principle therefore relates information and geometry, and this suggests that its origin must lie in a theory which unifies matter and spacetime.
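In the usual convention (entropy in units of Boltzmann's constant, with the Planck length setting the scale), the bound described above can be written as:

```latex
S \;\leq\; \frac{A}{4\, l_P^2}\,, \qquad l_P^2 \equiv \frac{G\hbar}{c^3},
```

where A is the area of the bounding surface and l_P is the Planck length: the maximal entropy of a region scales with the area of its boundary rather than with its volume.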


Conclusions

The holographic principle is a remarkable property that seems to be universally valid. It relates the information content of nature to the geometry of spacetime, and therefore it seems to originate from a yet unknown theory which unifies quantum mechanics and gravity. According to the covariant entropy bound, the amount of information that a region of space can possess is vastly less than the predictions of any current theory. What is more, it is possible that a deeper theory is not local, since the CEB states that entropy on a light-sheet is limited by the area of its boundary surface. Another interesting feature following from the holographic principle is the existence of cosmological screens. These hypersurfaces contain all the information of a spacetime, hence making it possible that our universe is a giant hologram.
Although most systems composed of ordinary matter seem to obey a stronger bound than the CEB, S < A^{3/4}, counterexamples have been found by Bousso, Freivogel, and Leichenauer [2], thereby confirming the universality of the CEB. These counterexamples fall into two main categories: truncated light-sheets and anti-trapped surfaces in open FRW universes. In the case of anti-trapped surfaces, the CEB can be approximately saturated.
New counterexamples were sought in the anisotropic Bianchi model and in the inhomogeneous LTB model. For the considered solutions of those models (except for the elliptic solution of the LTB model), counterexamples were found that are very similar to the truncated light-sheets and anti-trapped spheres found by Bousso, Freivogel, and Leichenauer [2]. One of those examples approximately saturates the CEB. A new kind of counterexample requiring anisotropy was found in the Bianchi model, but the validity of the derivation is not completely certain, since quantum gravitational effects may be important in the regime that was considered.


Three Thousand Years of Algorithmic Rituals: The Emergence of AI from the Computation of Space

Illustration from Frits Staal, "Greek and Vedic Geometry," Journal of Indian Philosophy 27, no. 1 (1999): 105–127.

 

With topographical memory, one could speak of generations of vision and even of visual heredity from one generation to the next. The advent of the logistics of perception and its renewed vectors for delocalizing geometrical optics, on the contrary, ushered in a eugenics of sight, a pre-emptive abortion of the diversity of mental images, of the swarm of image-beings doomed to remain unborn, no longer to see the light of day anywhere.

—Paul Virilio, The Vision Machine1

1. Recomposing a Dismembered God

In a fascinating myth of cosmogenesis from the ancient Vedas, it is said that the god Prajapati was shattered into pieces by the act of creating the universe. After the birth of the world, the supreme god is found dismembered, undone. In the corresponding Agnicayana ritual, Hindu devotees symbolically recompose the fragmented body of the god by building a fire altar according to an elaborate geometric plan.2 The fire altar is laid down by aligning thousands of bricks of precise shape and size to create the profile of a falcon. Each brick is numbered and placed while reciting its dedicated mantra, following step-by-step instructions. Each layer of the altar is built on top of the previous one, conforming to the same area and shape. In the logical riddle that is the key to the ritual, each layer must keep the same shape and area as the contiguous ones while using a different configuration of bricks. Finally, the falcon altar must face east, a prelude to the symbolic flight of the reconstructed god towards the rising sun—an example of divine reincarnation by geometric means.

The Agnicayana ritual is described in the Shulba Sutras, composed around 800 BCE in India to record a much older oral tradition. The Shulba Sutras teach the construction of altars of specific geometric forms to secure gifts from the gods: for instance, they suggest that “those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus.”3 The complex falcon shape of the Agnicayana evolved gradually from a schematic composition of only seven squares. In the Vedic tradition, it is said that the Rishi vital spirits created seven square-shaped Purusha (cosmic entities, or persons) that together composed a single body, and it was from this form that Prajapati emerged once again. While art historian Wilhelm Worringer argued in 1907 that primordial art was born in the abstract line found in cave graffiti, one may assume that the artistic gesture also emerged through the composing of segments and fractions, introducing forms and geometric techniques of growing complexity.4 In his studies of Vedic mathematics, Italian mathematician Paolo Zellini has discovered that the Agnicayana ritual was used to transmit techniques of geometric approximation and incremental growth—in other words, algorithmic techniques—comparable to the modern calculus of Leibniz and Newton.5 Agnicayana is among the most ancient documented rituals still practiced today in India, and a primordial example of algorithmic culture.
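Zellini's observation can be made concrete. The Shulba Sutras state a finite, step-by-step rule for the diagonal of a unit square, which amounts to an arithmetical approximation of the square root of two. The sketch below (in Python, purely illustrative) simply evaluates that rule:

```python
import math

# Baudhayana's rule for the diagonal of a unit square: increase the side
# by its third, and that third by its own fourth, less the thirty-fourth
# part of that fourth. The result is the rational number 577/408.
approx = 1 + 1/3 + 1/(3 * 4) - 1/(3 * 4 * 34)

# The rule is accurate to roughly two parts in a million.
error = abs(approx - math.sqrt(2))
print(approx, error)
```

A finite sequence of elementary steps, carried out mechanically, yields a controlled approximation: precisely the algorithmic character Zellini identifies in the ritual tradition.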

But how can we define a ritual as ancient as the Agnicayana as algorithmic? To many, it may appear an act of cultural appropriation to read ancient cultures through the paradigm of the latest technologies. Nevertheless, claiming that abstract techniques of knowledge and artificial metalanguages belong uniquely to the modern industrial West is not only historically inaccurate but also an act of implicit epistemic colonialism towards cultures of other places and other times.6 The French mathematician Jean-Luc Chabert has noted that “algorithms have been around since the beginning of time and existed well before a special word had been coined to describe them. Algorithms are simply a set of step by step instructions, to be carried out quite mechanically, so as to achieve some desired result.”7 Today some may see algorithms as a recent technological innovation implementing abstract mathematical principles. On the contrary, algorithms are among the most ancient and material practices, predating many human tools and all modern machines:

Algorithms are not confined to mathematics … The Babylonians used them for deciding points of law, Latin teachers used them to get the grammar right, and they have been used in all cultures for predicting the future, for deciding medical treatment, or for preparing food … We therefore speak of recipes, rules, techniques, processes, procedures, methods, etc., using the same word to apply to different situations. The Chinese, for example, use the word shu (meaning rule, process or stratagem) both for mathematics and in martial arts … In the end, the term algorithm has come to mean any process of systematic calculation, that is a process that could be carried out automatically. Today, principally because of the influence of computing, the idea of finiteness has entered into the meaning of algorithm as an essential element, distinguishing it from vaguer notions such as process, method or technique.8

Before the consolidation of mathematics and geometry, ancient civilizations were already big machines of social segmentation that marked human bodies and territories with abstractions that remained, and continue to remain, operative for millennia. Drawing also on the work of historian Lewis Mumford, Gilles Deleuze and Félix Guattari offered a list of such old techniques of abstraction and social segmentation: “tattooing, excising, incising, carving, scarifying, mutilating, encircling, and initiating.”9 Numbers were already components of the “primitive abstract machines” of social segmentation and territorialization that would make human culture emerge: the first recorded census, for instance, took place around 3800 BCE in Mesopotamia. Logical forms were made out of social ones: numbers materially emerged through labor and rituals, discipline and power, marking and repetition.

In the 1970s, the field of “ethnomathematics” began to foster a break from the Platonic loops of elite mathematics, revealing the historical subjects behind computation.10 The political question at the center of the current debate on computation and the politics of algorithms is ultimately very simple, as Diane Nelson has reminded us: Who counts?11 Who computes? Algorithms and machines do not compute for themselves; they always compute for someone else, for institutions and markets, for industries and armies.

Illustration from Frank Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (Cornell Aeronautical Laboratory, Buffalo, NY, 1961).

2. What Is an Algorithm?

The term “algorithm” comes from the Latinization of the name of the Persian scholar al-Khwarizmi. His tract On the Calculation with Hindu Numerals, written in Baghdad in the ninth century, is responsible for introducing Hindu numerals to the West, along with the corresponding new techniques for calculating with them, namely algorithms. In fact, the medieval Latin word “algorismus” referred to the procedures and shortcuts for carrying out the four fundamental mathematical operations—addition, subtraction, multiplication, and division—with Hindu numerals. Later, the term “algorithm” would metaphorically denote any step-by-step logical procedure and become the core of computing logic. In general, we can distinguish three stages in the history of the algorithm: in ancient times, the algorithm can be recognized in procedures and codified rituals to achieve a specific goal and transmit rules; in the Middle Ages, the algorithm was the name of a procedure to help mathematical operations; in modern times, the algorithm qua logical procedure becomes fully mechanized and automated by machines and then digital computers.

Looking at ancient practices such as the Agnicayana ritual and the Hindu rules for calculation, we can sketch a basic definition of “algorithm” that is compatible with modern computer science: (1) an algorithm is an abstract diagram that emerges from the repetition of a process, an organization of time, space, labor, and operations: it is not a rule that is invented from above but emerges from below; (2) an algorithm is the division of this process into finite steps in order to perform and control it efficiently; (3) an algorithm is a solution to a problem, an invention that bootstraps beyond the constraints of the situation: any algorithm is a trick; (4) most importantly, an algorithm is an economic process, as it must employ the least amount of resources in terms of space, time, and energy, adapting to the limits of the situation.
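These four properties can be checked against the oldest textbook algorithm, Euclid's procedure for the greatest common divisor, given here as a minimal illustrative sketch in its modern remainder form:

```python
def gcd(a, b):
    """Euclid's algorithm in its modern remainder form."""
    # (1) The rule emerges from a repeated process: replace the pair
    #     (a, b) with (b, a mod b) until the remainder vanishes.
    # (2) Finiteness: the remainder strictly decreases, so the loop ends.
    # (3) The trick: one remainder step compresses many subtractions.
    # (4) Economy: the answer is reached in few steps, with no extra space.
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # the classic worked example: 21
```

The procedure is a diagram of repetition, finite, inventive, and economical at once; nothing in it presupposes a machine.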

Today, amidst the expanding capacities of AI, there is a tendency to perceive algorithms as an application or imposition of abstract mathematical ideas upon concrete data. On the contrary, the genealogy of the algorithm shows that its form has emerged from material practices, from a mundane division of space, time, labor, and social relations. Ritual procedures, social routines, and the organization of space and time are the source of algorithms, and in this sense they existed even before the rise of complex cultural systems such as mythology, religion, and especially language. In terms of anthropogenesis, it could be said that algorithmic processes encoded into social practices and rituals were what made numbers and numerical technologies emerge, and not the other way around. Modern computation, just looking at its industrial genealogy in the workshops studied by both Charles Babbage and Karl Marx, evolved gradually from concrete towards increasingly abstract forms.

Illustration from Frank Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (Cornell Aeronautical Laboratory, Buffalo, NY, 1961).

3. The Rise of Machine Learning as Computational Space

In 1957, at the Cornell Aeronautical Laboratory in Buffalo, New York, the cognitive scientist Frank Rosenblatt invented and constructed the Perceptron, the first operative artificial neural network and the grandmother of all the matrices of machine learning; at the time it was a classified military secret.12 The first prototype of the Perceptron was an analogue computer composed of an input device of 20 × 20 photocells (called the “retina”) connected through wires to a layer of artificial neurons that resolved into one single output (a light bulb turning on or off, to signify 0 or 1). The “retina” of the Perceptron recorded simple shapes such as letters and triangles and passed electric signals to a multitude of neurons that would compute a result according to a threshold logic. The Perceptron was a sort of photo camera that could be taught to recognize a specific shape, i.e., to make a decision with a margin of error (making it an “intelligent” machine). The Perceptron was the first machine-learning algorithm, a basic “binary classifier” that could determine whether a pattern fell within a specific class or not (whether the input image was a triangle or not, a square or not, etc.). To achieve this, the Perceptron progressively adjusted the values of its nodes in order to resolve a large numerical input (a spatial matrix of four hundred numbers) into a simple binary output (0 or 1). The Perceptron gave the result 1 if the input image was recognized within a specific class (a triangle, for instance); otherwise it gave the result 0. Initially, a human operator was necessary to train the Perceptron to learn the correct answers (manually switching the output node to 0 or 1), hoping that the machine, on the basis of these supervised associations, would correctly recognize similar shapes in the future. The Perceptron was designed not to memorize a specific pattern but to learn how to recognize potentially any pattern.
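The learning scheme described above can be sketched in a few lines. The data and dimensions below are hypothetical (two inputs instead of the original four hundred photocells), and the code illustrates only the threshold logic and the supervised correction rule, not Rosenblatt's hardware:

```python
def predict(weights, bias, x):
    # Threshold logic: the unit "fires" (1) if the weighted sum exceeds zero.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Perceptron rule: nudge the weights whenever the output is wrong,
    # in the direction that reduces the error (the "supervised association").
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy "shapes": points above the diagonal belong to class 1, others to class 0.
samples = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), 0), ((2.0, 1.0), 0)]
weights, bias = train(samples)
assert all(predict(weights, bias, x) == t for x, t in samples)
```

The spatial disposition of the inputs is all the machine ever sees: the boundary between the two classes emerges from repeated corrections, not from a rule given in advance.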

The matrix of 20 × 20 photoreceptors in the first Perceptron was the beginning of a silent revolution in computation (which would become a hegemonic paradigm in the early twenty-first century with the advent of “deep learning,” a machine-learning technique). Although inspired by biological neurons, from a strictly logical point of view the Perceptron marked not a biomorphic turn in computation but a topological one; it signified the rise of the paradigm of “computational space” or “self-computing space.” This turn introduced a second spatial dimension into a paradigm of computation that until then had only a linear dimension (see the Turing machine that reads and writes 0 and 1 along a linear memory tape). This topological turn, which is the core of what people perceive today as “AI,” can be described more modestly as the passage from a paradigm of passive information to one of active information. Rather than having a visual matrix processed by a top-down algorithm (like any image edited by a graphics software program today), in the Perceptron the pixels of the visual matrix are computed in a bottom-up fashion according to their spatial disposition. The spatial relations of the visual data shape the operation of the algorithm that computes them.

Because of its spatial logic, the branch of computer science originally dedicated to neural networks was called “computational geometry.” The paradigm of computational space or self-computing space shares common roots with the studies of the principles of self-organization that were at the center of post-WWII cybernetics, such as John von Neumann’s cellular automata (1948) and Konrad Zuse’s Rechnender Raum (1967).13 Von Neumann’s cellular automata are clusters of pixels, perceived as small cells on a grid, that change status and move according to their neighboring cells, composing geometric figures that resemble evolving forms of life. Cellular automata have been used to simulate evolution and to study complexity in biological systems, but they remain finite-state algorithms confined to a rather limited universe. Konrad Zuse (who built the first programmable computer in Berlin in 1938) attempted to extend the logic of cellular automata to physics and to the whole universe. His idea of “rechnender Raum,” or calculating space, is a universe that is composed of discrete units that behave according to the behavior of neighboring units. Alan Turing’s last essay, “The Chemical Basis of Morphogenesis” (published in 1952, two years before his death), also belongs to the tradition of self-computing structures.14 Turing considered molecules in biological systems as self-computing actors capable of explaining complex bottom-up structures, such as tentacle patterns in hydra, whorl arrangement in plants, gastrulation in embryos, dappling in animal skin, and phyllotaxis in flowers.15
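Von Neumann's original automaton used twenty-nine cell states; the simplest modern illustration of the same neighbor-rule principle is Conway's later Game of Life (1970), sketched here:

```python
from collections import Counter

def step(alive):
    """One update of Conway's Game of Life on an unbounded grid.
    `alive` is a set of (x, y) cells; each cell's next state is purely a
    function of its eight neighbors, with no instruction from outside."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in alive
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly three live neighbors,
    # or two live neighbors and is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker": three cells in a row oscillate with period two,
# one of the evolving figures that "resemble forms of life."
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}
assert step(step(blinker)) == blinker
```

The global figure is nowhere encoded: it emerges from each cell computing only its own neighborhood, which is the sense of "self-computing space" at stake here.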

Von Neumann’s cellular automata and Zuse’s computational space are intuitively easy to understand as spatial models, while Rosenblatt’s neural network displays a more complex topology that requires more attention. Indeed, neural networks employ an extremely complex combinatorial structure, which is probably what makes them the most efficient algorithms for machine learning. Neural networks are said to “solve any problem,” meaning they can approximate the function of any pattern according to the Universal Approximation theorem (given enough layers of neurons and computing resources). All systems of machine learning, including support-vector machines, Markov chains, Hopfield networks, Boltzmann machines, and convolutional neural networks, to name just a few, started as models of computational geometry. In this sense they are part of the ancient tradition of ars combinatoria.16
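A hand-built miniature makes the point about combinatorial structure: a single threshold unit cannot compute the XOR pattern (the limitation stressed by Minsky and Papert), while adding one hidden layer solves it. The weights below are chosen by hand purely for illustration:

```python
def step(s):
    # A bare threshold unit, as in the Perceptron.
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR, another computes AND.
    h_or  = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output unit: "OR but not AND," which is exactly XOR.
    return step(h_or - h_and - 0.5)

assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

No single line can separate the XOR cases on the plane; the hidden layer folds the space so that a line suffices, a small instance of the approximation power claimed for deeper networks.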

Image from Hans Meinhardt, The Algorithmic Beauty of Sea Shells (Springer Science & Business Media, 2009).

4. The Automation of Visual Labor

Even at the end of the twentieth century, no one would have ever thought to call a truck driver a “cognitive worker,” an intellectual. At the beginning of the twenty-first century, the use of machine learning in the development of self-driving vehicles has led to a new understanding of manual skills such as driving, revealing how the most valuable component of work, generally speaking, has never been merely manual, but also social and cognitive (as well as perceptual, an aspect of labor still waiting to be located somewhere between the manual and the cognitive). What kind of work do drivers perform? Which human task will AI come to record with its sensors, imitate with its statistical models, and replace with automation? The best way to answer this question is to look at what technology has successfully automated, as well as what it hasn’t.

The industrial project to automate driving has made clear (more so than a thousand books on political economy) that the labor of driving is a conscious activity following codified rules and spontaneous social conventions. However, if the skill of driving can be translated into an algorithm, it will be because driving has a logical and inferential structure. Driving is a logical activity just as labor is a logical activity more generally. This postulate helps to resolve the trite dispute about the separation between manual labor and intellectual labor.17 It is a political paradox that the corporate development of AI algorithms for automation has made it possible to recognize in labor a cognitive component that had long been neglected by critical theory. What is the relation between labor and logic? This becomes a crucial philosophical question for the age of AI.

A self-driving vehicle automates all the micro-decisions that a driver must make on a busy road. Its artificial neural networks learn, that is, imitate and copy, the human correlations between the visual perception of the road space and the mechanical actions of vehicle control (steering, accelerating, stopping), as well as the ethical decisions taken in a matter of milliseconds when dangers arise (for the safety of persons inside and outside the vehicle). It becomes clear that the job of driving requires high cognitive skills that cannot be left to improvisation and instinct, but also that quick decision-making and problem-solving are possible thanks to habits and training that are not completely conscious. Driving remains essentially also a social activity, which follows both codified rules (with legal constraints) and spontaneous ones, including a tacit “cultural code” that any driver must subscribe to. Driving in Mumbai—it has been said many times—is not the same as driving in Oslo.

Obviously, driving summons an intense labor of perception. Much labor, in fact, appears mostly perceptive in nature, through continuous acts of decision and cognition that take place in the blink of an eye.18 Cognition cannot be completely disentangled from a spatial logic, and often follows a spatial logic in its more abstract constructions. Both observations—that perception is logical and that cognition is spatial—are empirically proven without fanfare by autonomous driving AI algorithms that construct models to statistically infer visual space (encoded as digital video of a 3-D road scenario). Moreover, the driver that AI replaces in self-driving cars and drones is not an individual driver but a collective worker, a social brain that navigates the city and the world.19 Just looking at the corporate project of self-driving vehicles, it is clear that AI is built on collective data that encode a collective production of space, time, labor, and social relations. AI imitates, replaces, and emerges from an organized division of social space (according first to a material algorithm and not the application of mathematical formulas or analysis in the abstract).

Animation from Chris Urmson's TED talk "How a Driverless Car Sees the Road." Urmson is the former chief engineer of Google's Self-Driving Car Project. Animation by ZMScience.

5. The Memory and Intelligence of Space

Paul Virilio, the French philosopher of speed or “dromology,” was also a theorist of space and topology, for he knew that technology accelerates the perception of space as much as it morphs the perception of time. Interestingly, the title of Virilio’s book The Vision Machine was inspired by Rosenblatt’s Perceptron. With the classical erudition of a twentieth-century thinker, Virilio drew a sharp line between ancient techniques of memorization based on spatialization, such as the Method of Loci, and modern computer memory as a spatial matrix:

Cicero and the ancient memory-theorists believed you could consolidate natural memory with the right training. They invented a topographical system, the Method of Loci, an imagery-mnemonics which consisted of selecting a sequence of places, locations, that could easily be ordered in time and space. For example, you might imagine wandering through the house, choosing as loci various tables, a chair seen through a doorway, a windowsill, a mark on a wall. Next, the material to be remembered is coded into discrete images and each of the images is inserted in the appropriate order into the various loci. To memorize a speech, you transform the main points into concrete images and mentally “place” each of the points in order at each successive locus. When it is time to deliver the speech, all you have to do is recall the parts of the house in order.

The transformation of space, of topological coordinates and geometric proportions, into a technique of memory should be considered equal to the more recent transformation of collective space into a source of machine intelligence. At the end of the book, Virilio reflects on the status of the image in the age of “vision machines” such as the Perceptron, sounding a warning about the impending age of artificial intelligence as the “industrialisation of vision”:

“Now objects perceive me,” the painter Paul Klee wrote in his Notebooks. This rather startling assertion has recently become objective fact, the truth. After all, aren’t they talking about producing a “vision machine” in the near future, a machine that would be capable not only of recognizing the contours of shapes, but also of completely interpreting the visual field … ? Aren’t they also talking about the new technology of visionics: the possibility of achieving sightless vision whereby the video camera would be controlled by a computer? … Such technology would be used in industrial production and stock control; in military robotics, too, perhaps.

Now that they are preparing the way for the automation of perception, for the innovation of artificial vision, delegating the analysis of objective reality to a machine, it might be appropriate to have another look at the nature of the virtual image … Today it is impossible to talk about the development of the audiovisual … without pointing to the new industrialization of vision, to the growth of a veritable market in synthetic perception and all the ethical questions this entails … Don’t forget that the whole idea behind the Perceptron would be to encourage the emergence of fifth-generation “expert systems,” in other words an artificial intelligence that could be further enriched only by acquiring organs of perception.20

Ioannis de Sacro Busco, Algorismus Domini, c. 1501. National Central Library of Rome. Photo: Public Domain/Internet Archive. 

6. Conclusion

If we consider the ancient geometry of the Agnicayana ritual, the computational matrix of the first neural network Perceptron, and the complex navigational system of self-driving vehicles, perhaps these different spatial logics together can clarify the algorithm as an emergent form rather than a technological a priori. The Agnicayana ritual is an example of an emergent algorithm as it encodes the organization of a social and ritual space. The symbolic function of the ritual is the reconstruction of the god through mundane means; this practice of reconstruction also symbolizes the expression of the many within the One (or the “computation” of the One through the many). The social function of the ritual is to teach basic geometry skills and to construct solid buildings.21 The Agnicayana ritual is a form of algorithmic thinking that follows the logic of a primordial and straightforward computational geometry.

The Perceptron is also an emergent algorithm that encodes according to a division of space, specifically a spatial matrix of visual data. The Perceptron’s matrix of photoreceptors defines a closed field and processes an algorithm that computes data according to their spatial relation. Here too the algorithm appears as an emergent process—the codification and crystallization of a procedure, a pattern, after its repetition. All machine-learning algorithms are emergent processes, in which the repetition of similar patterns “teaches” the machine and causes the pattern to emerge as a statistical distribution.22

Self-driving vehicles are an example of complex emergent algorithms since they grow from a sophisticated construction of space, namely, the road environment as social institution of traffic codes and spontaneous rules. The algorithms of self-driving vehicles, after registering these spontaneous rules and the traffic codes of a given locale, try to predict unexpected events that may happen on a busy road. In the case of self-driving vehicles, the corporate utopia of automation makes the human driver evaporate, expecting that the visual space of the road scenario alone will dictate how the map will be navigated.

The Agnicayana ritual, the Perceptron, and the AI systems of self-driving vehicles are all, in different ways, forms of self-computing space and emergent algorithms (and probably, all of them, forms of the invisibilization of labor).

The idea of computational space or self-computing space stresses, in particular, that the algorithms of machine learning and AI are emergent systems that are based on a mundane and material division of space, time, labor, and social relations. Machine learning emerges from grids that continue ancient abstractions and rituals concerned with marking territories and bodies, counting people and goods; in this way, machine learning essentially emerges from an extended division of social labor. Despite the way it is often framed and critiqued, artificial intelligence is not really “artificial” or “alien”: in the usual mystification process of ideology, it appears to be a deus ex machina that descends to the world as in ancient theater. But this hides the fact that it actually emerges from the intelligence of this world.

What people call “AI” is actually a long historical process of crystallizing collective behavior, personal data, and individual labor into privatized algorithms that are used for the automation of complex tasks: from driving to translation, from object recognition to music composition. Just as much as the machines of the industrial age grew out of experimentation, know-how, and the labor of skilled workers, engineers, and craftsmen, the statistical models of AI grow out of the data produced by collective intelligence. Which is to say that AI emerges as an enormous imitation engine of collective intelligence. What is the relation between artificial intelligence and human intelligence? It is the social division of labor.

 

Matteo Pasquinelli (PhD) is Professor in Media Philosophy at the University of Arts and Design, Karlsruhe, where he coordinates the research group KIM (Künstliche Intelligenz und Medienphilosophie / Artificial Intelligence and Media Philosophy). For Verso he is preparing a monograph on the genealogy of artificial intelligence as division of labor, which is titled The Eye of the Master: Capital as Computation and Cognition.


DataRobot’s vision to democratize machine learning with no-code AI

 

The growing digitization of nearly every aspect of our world and lives has created immense opportunities for the productive application of machine learning and data science. Organizations and institutions across the board are feeling the need to innovate and reinvent themselves by using artificial intelligence and putting their data to good use. And according to several surveys, data science is among the fastest-growing in-demand skills in different sectors.

However, the growing demand for AI is hampered by the very low supply of data scientists and machine learning experts. Among the efforts to address this talent gap is the fast-evolving field of no-code AI, tools that make the creation and deployment of ML models accessible to organizations that don’t have enough highly skilled data scientists and machine learning engineers.

In an interview with TechTalks, Nenshad Bardoliwalla, chief product officer at DataRobot, discussed the challenges of meeting the needs of machine learning and data science in different sectors and how no-code platforms are helping democratize artificial intelligence.

Not enough data scientists

Nenshad Bardoliwalla, Chief Product Officer at DataRobot

“The reason the demand for AI is going up so significantly is because the amount of digital exhaust being generated by businesses and the number of ways they can creatively use that digital exhaust to solve real business problems is going up,” Bardoliwalla said.

At the same time, there are nowhere near enough expert data scientists in the world who have the ability to actually exploit that data.

“We knew ten years ago, when DataRobot started, that there was no way that the number of expert data scientists—people who have Ph.D. in statistics, Ph.D. in machine learning—that the world would have enough of those individuals to be able to satisfy that demand for AI-driven business outcomes,” Bardoliwalla said.

And as the years have passed, Bardoliwalla has seen demand for machine learning and data science grow across different sectors as more and more organizations are realizing the business value of machine learning, whether it’s predicting customer churn, ad clicks, the possibility of an engine breakdown, medical outcomes, or something else.

“We are seeing more and more companies who recognize that their competition is able to exploit AI and ML in interesting ways and they’re looking to keep up,” Bardoliwalla said.

At the same time, the growing demand for data science skills has widened the AI talent gap. And not everyone is served equally.

Underserved industries

The shortage of experts has created fierce competition for data science and machine learning talent. The financial sector is leading the way, aggressively hiring AI talent and putting machine learning models into use.

“If you look at financial services, you’ll clearly see that the number of machine learning models that are being put into production is by far the highest of any of the other segments,” Bardoliwalla said.

In parallel, big tech companies with deep pockets are also hiring top data scientists and machine learning engineers—or outright acquiring AI labs with all their engineers and scientists—to further fortify their data-driven commercial empires. Meanwhile, smaller companies and sectors that are not flush with cash have been largely left out of the opportunities provided by advances in artificial intelligence because they can’t hire enough data scientists and machine learning experts.

Bardoliwalla is especially passionate about what AI could do for the education sector.

“How much effort is being put into optimizing student outcomes by using AI and ML? How much do the education industry and the school systems have in order to invest in that technology? I think the education industry as a whole is likely to be a laggard in the space,” he said.

Other areas that still have a ways to go before they can take advantage of advances in AI are transportation, utilities, and heavy machinery. And part of the solution might be to make ML tools that don’t require a degree in data science.

The no-code AI vision


“For every one of your expert data scientists, you have ten analytically savvy businesspeople who are able to frame the problem correctly and add the specific business-relevant calculations that make sense based on the domain knowledge of those people,” Bardoliwalla said.

As machine learning requires knowledge of programming languages such as Python and R and complicated libraries such as NumPy, Scikit-learn, and TensorFlow, most business people can’t create and test models without the help of expert data scientists. This is the area that no-code AI platforms are addressing.

DataRobot and other providers of no-code AI platforms are creating tools that enable these domain experts and business-savvy people to create and deploy machine learning models without the need to write code.

With DataRobot, users can upload their datasets on the platform, perform the necessary preprocessing steps, choose and extract features, and create and compare a range of different machine learning models, all through an easy-to-use graphical user interface.
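The GUI workflow described above abstracts away what, in code, amounts to a train-and-compare loop. Here is a rough sketch of that loop using scikit-learn; this is not DataRobot's product or API, and the dataset and candidate models are arbitrary stand-ins chosen for illustration.

```python
# Sketch of the steps a no-code AutoML platform automates: load data,
# preprocess, then train and rank several candidate models.
# NOT DataRobot's API -- plain scikit-learn illustrating the workflow.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for an uploaded dataset

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with cross-validation -- the "leaderboard" that a
# no-code platform would render as a sortable GUI table.
leaderboard = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
for name, score in sorted(leaderboard.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

A no-code platform essentially presents this leaderboard in the GUI and lets the user deploy the winning model with a click.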

“The whole notion of democratization is to allow companies and people in those companies who wouldn’t otherwise be able to take advantage of AI and ML to actually be able to do so,” Bardoliwalla said.

No-code AI is not a replacement for the expert data scientist. But it increases ML productivity across organizations, empowering more people to create models. This lifts much of the burden from the overloaded shoulders of data scientists and enables them to put their skills to more efficient use.

“The one person in that equation, the expert data scientist, is able to validate and govern and make sure that the models that are being generated by the analytically savvy businesspeople are quite accurate and make sense from an interpretability perspective—that they’re trustworthy,” Bardoliwalla said.

This evolution of machine learning tools is analogous to how the business intelligence industry has changed. A decade ago, the ability to query data and generate reports at organizations was limited to a few people who had the special coding skill set required to manage databases and data warehouses. But today, the tools have evolved to the point that non-coders and less technical people can perform most of their data querying tasks through easy-to-use graphical tools and without the assistance of expert data analysts. Bardoliwalla believes that the same transformation is happening in the AI industry thanks to no-code AI platforms.

“Whereas the business intelligence industry has historically focused on what has happened—and that is useful—AI and ML is going to give every person in the business the ability to predict what is going to happen,” Bardoliwalla said. “We believe that we can put AI and ML into the hands of millions of people in organizations because we have simplified the process to the point that many analytically savvy business people—and there are millions of such folks—working with the few million data scientists can deliver AI- and ML-specific outcomes.”

The evolution of no-code AI at DataRobot

DataRobot’s AI Cloud is an end-to-end platform that covers the entire machine learning development lifecycle

DataRobot launched the first set of no-code AI tools in 2014. Since then, the platform has expanded at the fast pace of the applied machine learning industry. DataRobot unified its tools into the AI Cloud in 2021, and in mid-March, the company released AI Cloud 8.0, the latest version of its platform.

The AI Cloud has evolved into an end-to-end no-code platform that covers the entire machine learning development lifecycle.

“We recognized in 2019 that we had to expand, and the way you get value from machine learning is by being able to deploy models in production and have them actually provide predictions in business processes,” Bardoliwalla said.

In addition to creating and testing models, DataRobot also supports MLOps, the practices that cover the deployment and maintenance of ML models. The platform includes a graphical No-Code AI App Builder tool that enables you to create full-fledged applications on top of your models. The platform also monitors deployed ML models for decay, data drift, and other factors that can affect performance. More recently, the company added data engineering tools for gathering, segmenting, labeling, updating, and managing the datasets used to train and validate ML models.

“Our vision expanded dramatically, and the first evidence of the end-to-end platform arrived in 2019. What we’ve done since then is tie all of that together—and this is what we announced with the 8.0 release with the Continuous AI,” Bardoliwalla said.

The future of no-code AI

As no-code AI has matured, it has also become valuable to seasoned data scientists and machine learning engineers, who are interested in automating the tedious parts of their job. Throughout the entire machine learning development lifecycle, more advanced users can integrate their own hand-written code with DataRobot’s automated tools. Alternatively, they can extract the Python or R source code for the models DataRobot generates and further customize it for integration into their own applications.

But no-code AI still has a lot to offer. “The future of no-code AI is going to be about increasing the level of automation that platforms can provide. The more you increase the level of automation, the less you have to write code,” Bardoliwalla said.

Among the ideas Bardoliwalla is entertaining is the development of tools that can continuously update and profile the data used in machine learning models. There are also opportunities to further streamline the automated ML process by continually monitoring the accuracy not only of the model in production, but also of challenger models that could replace the main ML model as context and conditions change.
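The champion/challenger idea can be sketched in a few lines. The model choices, synthetic data, and promotion margin below are all assumptions for illustration, not any particular platform's implementation.

```python
# Score both the production ("champion") model and a candidate
# ("challenger") on the most recent labeled data; promote the challenger
# if it wins by a margin.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_old, X_recent, y_old, y_recent = train_test_split(
    X, y, test_size=0.3, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_old, y_old)
challenger = GradientBoostingClassifier(random_state=0).fit(X_old, y_old)

champ_acc = champion.score(X_recent, y_recent)
chall_acc = challenger.score(X_recent, y_recent)

PROMOTION_MARGIN = 0.01  # assumed threshold
if chall_acc > champ_acc + PROMOTION_MARGIN:
    print(f"Promote challenger ({chall_acc:.3f} vs {champ_acc:.3f})")
else:
    print(f"Keep champion ({champ_acc:.3f} vs {chall_acc:.3f})")
```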

“The way that no-code environments are going to succeed is that they allow for more and more functionality that used to require someone to write code, to now be manifested in just a couple of simple clicks inside of a GUI,” Bardoliwalla said.



Neuralink 2022 Update -Human Trials are coming

Let’s get into the latest updates on Elon Musk’s futuristic brain implant company Neuralink. Elon has been talking a lot lately about Neuralink and some of the applications that he expects it will be capable of, or not capable of, in the first decade or so of the product life cycle.

We know that Elon has broadly promised that Neuralink can do everything from helping people with spinal cord injuries, to enabling telepathic communication, curing brain diseases like Parkinson’s and ALS, allowing us to control devices with our thoughts, and even merging human consciousness with artificial intelligence.

But as we get closer to the first clinical human trials for Neuralink, things are starting to become a little more clear on what this Brain Computer Interface technology will actually do, and how it will help people. So, let’s talk about what’s up with Neuralink in 2022.

Neuralink Human Trials 2022

When asked recently if Neuralink was still on track for their first human trial by the end of this year, Elon Musk replied by simply saying, “Yes.” Which I think is a good sign. It does seem like whenever Elon gives an abrupt answer like this, it means that he is confident about what he’s saying.

For comparison, at around the same time last year, when asked about human trials of Neuralink, Elon wrote, “If things go well, we might be able to do initial human trials later this year.” Notice the significant difference in those two replies. Not saying this is a science or anything, but it is notable.

We also saw earlier this year that Neuralink was looking to hire both a Director and a Coordinator for Clinical Trials. In the job posting, Neuralink says that the director will “work closely with some of the most innovative doctors and top engineers, as well as working with Neuralink’s first Clinical Trial participants.”

We know that Neuralink has been conducting its surgical trials so far on a combination of monkeys and pigs. In its 2020 demonstration, Neuralink showed us a group of pigs that had all received Neuralink implants, and in some cases had also undergone the procedure to have the implant removed. Then in 2021, we were shown a monkey who could play video games without the need for a controller, using only his brain, which was connected with two Neuralink implants.

Human trials with Neuralink would obviously be a major step forward in product development. Last year, Elon wrote that, “Neuralink is working super hard to ensure implant safety & is in close communication with the FDA.” Previously, during Neuralink events, he has said that the company is striving to exceed all FDA safety requirements, not just meet them, in the same way that Tesla vehicles exceed all crash safety requirements and score higher than any other car ever manufactured.

What can Neuralink Do?

As we get closer to the prospective timeline for human testing, Elon has also been drilling down a little more into what exactly Neuralink will be able to do in its first-phase implementation. It’s been a little bit hard to keep track when Elon is literally talking about using this technology for every crazy thing that can be imagined - that Neuralink would make language obsolete, that it would allow us to create digital backups of human minds, that we could merge our consciousness with an artificial super intelligence and become ultra enhanced cyborgs.

One of the new things that Elon has been talking about recently is treating morbid obesity with a Neuralink, which he brought up during a live TED Talk interview. It’s not something we expected to hear, but the claim does seem to be backed up by some science. There have already been a couple of studies of brain implants in people with morbid obesity, in which the implant transmitted frequent electric pulses into the hypothalamus region of the brain, which is thought to drive an increase in appetite. It’s still too soon to know if that particular method is really effective, but it would be significantly less invasive than other surgeries that modify a patient's stomach in hopes of suppressing their appetite.

Elon followed up on the comment in a tweet, writing that it is “Certainly physically possible” to treat obesity through the brain. In the same post, Elon expanded on the concept, writing, “We’re working on bridging broken links between brain & body. Neuralinks in motor & sensory cortex bridging past weak/broken links in neck/spine to Neuralinks in spinal cord should theoretically be able to restore full body functionality.”

This is one of the more practical implementations of Neuralink technology that we are expecting to see. Electrical signals can be read in the brain by one Neuralink device and then wirelessly transmitted over Bluetooth to a second Neuralink device implanted in a muscle group, where the signal from the brain is delivered straight into the muscles. This kind of treatment has been done before with brain implants and muscular implants, but it has always required the patient to have a very cumbersome setup, with wires running through their body into their brain, and wires running out of their skull and into a computer. The real innovation of Neuralink is that it makes this all possible with very small implants that connect wirelessly, so just by looking at the patient, you would never know that they have a brain implant.

Elon commented on this in another Tweet, writing, “It is an electronics, slash mechanical, slash software engineering problem for the Neuralink device that is similar in complexity level to smart watches - which are not easy!, plus the surgical robot, which is comparable to state-of-the art CNC machines.”

So the Neuralink has more in common with an Apple Watch than it does with any existing Brain Computer Interface technology. And it is only made possible by the autonomous robotic device that conducts the surgery; the electrodes that connect the Neuralink device to the brain cortex are too small and fine to be sewn by human hands.

Elon touched on this in a response to being asked if Neuralink could cure tinnitus, a permanent ringing in the ears. Elon wrote, “Definitely. Might be less than 5 years away, as current version Neuralinks are semi-generalized neural read/write devices with about 1000 electrodes and tinnitus probably needs much less than 1000.” He then added that, “Future generation Neuralinks will increase electrode count by many orders of magnitude.”

This brings us back to setting more realistic expectations of what a Neuralink can and cannot do. It’s entirely possible that in the future, the device can be expanded to handle some very complex issues, but as it is today, the benefits will be limited. Recently a person tweeted at Elon, asking, “I lost a grandparent to Alzheimers - how will Neuralink address the loss of memory in the human brain?” Elon replied to say, “Current generation Neuralinks can help to some degree, but an advanced case of Alzheimers often involves macro degeneration of the brain. However, Neuralinks should theoretically be able to restore almost any functionality lost due to *localized* brain damage from stroke or injury.”

So, because those 1,000 electrodes can’t go into all areas of the brain all at once, Neuralink will not be effective against a condition that afflicts the brain as a whole. But those electrodes can be targeted on one particular area of damage or injury, and that’s how Neuralink will start to help in the short term, and this will be the focus of early human trials.

During his TED Talk interview, Elon spoke about the people that reached out to him, wanting to participate in Neuralink’s first human trials. Quote, “The emails that we get at Neuralink are heartbreaking. They'll send us just tragic stories where someone was in the prime of life and they had an accident on a motorcycle and now someone who’s 25 years old can’t even feed themselves. This is something we could fix.” End quote.

In a separate interview with Business Insider that was done in March, Elon talked more specifically about the Neuralink timeline, saying, “Neuralink in the short term is just about solving brain injuries, spinal injuries and that kind of thing. So for many years, Neuralink’s products will just be helpful to someone who has lost the use of their arms or legs or has just a traumatic brain injury of some kind.”

This is a much more realistic viewpoint than what we’ve seen from Elon in past interviews. On one episode of the Joe Rogan Podcast, Elon claimed that within five years language would become obsolete because everyone would be using Neuralink to communicate with a kind of digital telepathy. That could have just been the weed talking, but I’m hoping that the more realistic Elon’s messaging becomes, the closer we are getting to a real medical trial of the implant.

And finally, the key to reaching a safe and effective human trial is going to be that robot sewing machine that threads the electrodes into the cortex. Elon referred to it as being comparable to a CNC machine. Because as good as the chip itself might be, if we can’t have a reliable procedure to perform the implant, then nothing can move forward. The idea is that after a round section of the person’s skull is removed, this robot will come in and place the tiny wires into very specific areas in the outer layer of the brain - these don’t go deep into the tissue; only a couple of millimeters is enough to tap into the neural network of electrical signals. In theory this can all be done in a couple of hours, while the patient is still conscious - they would get an anesthetic to numb their head, obviously, but they wouldn’t have to go under full sedation, and therefore could be in and out of the procedure in an afternoon. It’s a very similar deal to laser eye surgery - a fast and automated method to accomplish a very complex medical task.

That’s what this Twitter user was referencing when he recently asked how close the new, version two of the Neuralink robot was to inserting the chip as simply as a LASIK procedure. To which Elon responded, quote, “Getting there.”

We know that the robot system is being tested on monkeys right now, and from what Elon says, it is making progress towards being suitable for human trials.

The last interesting thing that Elon said on Twitter in relation to Neuralink was his comment, “No need for artificial intelligence, neural networks or machine learning quite yet.” He wrote these out as abbreviations, but these are all terms that we are well familiar with from Tesla and their autonomous vehicle program. We know that Elon is an expert in AI and he has people working for him at Tesla in this department that are probably the best in the world. This is a skill set that will eventually be applied at Neuralink, but to what end, we still don’t know.


The case for hybrid artificial intelligence

Cognitive scientist Gary Marcus believes advances in artificial intelligence will rely on hybrid AI, the combination of symbolic AI and neural networks.

Deep learning, the main innovation that has renewed interest in artificial intelligence in the past years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from the peak of hype toward the trough of disillusionment, it is becoming clear that it is missing some fundamental components.

This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged in various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three “godfathers of deep learning,” have all spoken about the limits of neural networks.

The question is, what is the path forward?

At NeurIPS 2019, Bengio discussed system 2 deep learning, a new generation of neural networks that can handle compositionality, out-of-distribution generalization, and causal structures. At the AAAI 2020 Conference, Hinton discussed the shortcomings of convolutional neural networks (CNNs) and the need to move toward capsule networks.

But for cognitive scientist Gary Marcus, the solution lies in developing hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning. In a paper titled “The Next Decade in AI: Four Steps Toward Robust Artificial Intelligence,” Marcus discusses how hybrid artificial intelligence can solve some of the fundamental problems deep learning faces today.

Connectionists, the proponents of pure neural network–based approaches, reject any return to symbolic AI. Hinton has compared hybrid AI to combining electric motors and internal combustion engines. Bengio has also shunned the idea of hybrid artificial intelligence on several occasions.

But Marcus believes the path forward lies in putting aside old rivalries and bringing together the best of both worlds.

What’s missing in deep neural networks?

The limits of deep learning have been comprehensively discussed. But here, I would like to focus on the generalization of knowledge, a topic that has been widely discussed in the past few months. While human-level AI is at least decades away, a nearer goal is robust artificial intelligence.

Here’s how Marcus defines robust AI: “Intelligence that, while not necessarily superhuman or self-improving, can be counted on to apply what it knows to a wide range of problems in a systematic and reliable way, synthesizing knowledge from a variety of sources such that it can reason flexibly and dynamically about the world, transferring what it learns in one context to another, in the way that we would expect of an ordinary adult.”

Those are key features missing from current deep learning systems. Deep neural networks can ingest large amounts of data and exploit huge computing resources to solve very narrow problems, such as detecting specific kinds of objects or playing complicated video games in specific conditions.

However, they’re very bad at generalizing their skills. “We often can’t count on them if the environment differs, sometimes even in small ways, from the environment on which they are trained,” Marcus writes.

Case in point: An AI trained on thousands of chair pictures won’t be able to recognize an upturned chair if such a picture was not included in its training dataset. A super-powerful AI trained on tens of thousands of hours of StarCraft 2 gameplay can play at championship level, but only under limited conditions. As soon as you change the map or the units in the game, its performance will take a nosedive. And it can’t play any game that is similar to StarCraft 2, such as Warcraft or Command & Conquer.

A deep learning algorithm that plays championship-level StarCraft can’t play a similar game. It won’t even be able to maintain its level of gameplay if the settings are changed the slightest bit.

The current approach to solve AI’s generalization problem is to scale the models: Create bigger neural networks, gather larger datasets, use larger server clusters, and train the reinforcement learning algorithms for longer hours.

“While there is value in such approaches, a more fundamental rethink is required,” Marcus writes in his paper.

In fact, the “bigger is better” approach has yielded modest results at best while creating several other problems that remain unsolved. For one thing, the huge cost of developing and training large neural networks is threatening to centralize the field in the hands of a few very wealthy tech companies.

When it comes to dealing with language, the limits of neural networks become even more evident. Language models such as OpenAI’s GPT-2 and Google’s Meena chatbot each have more than a billion parameters (the basic unit of neural networks) and have been trained on gigabytes of text data. But they still make some of the dumbest mistakes, as Marcus has pointed out in an article earlier this year.

“When sheer computational power is applied to open-ended domains—such as conversational language understanding and reasoning about the world—things never turn out quite as planned. Results are invariably too pointillistic and spotty to be reliable,” Marcus writes.

What’s important here is the term “open-ended domain.” Open-ended domains can be general-purpose chatbots and AI assistants, roads, homes, factories, stores, and many other settings where AI agents interact and cooperate directly with humans. As the past years have shown, the rigid nature of neural networks prevents them from tackling problems in open-ended domains. In his paper, Marcus discusses this topic in detail.

Why we need to combine symbolic AI and neural networks

Connectionists believe that approaches based on pure neural network structures will eventually lead to robust or general AI. After all, the human brain is made of physical neurons, not variables, class placeholders, and symbols.

But as Marcus points out in his essay, “Symbol manipulation in some form seems to be essential for human cognition, such as when a child learns an abstract linguistic pattern, or the meaning of a term like sister that can be applied in an infinite number of families, or when an adult extends a familiar linguistic pattern in a novel way that extends beyond a training distribution.”

Marcus’ premise is backed by research from several cognitive scientists over the decades, including his own book The Algebraic Mind and the more recent Rebooting AI. (Another great read in this regard is the second chapter of Steven Pinker’s book How the Mind Works, in which he lays out evidence that symbol manipulation is an essential part of the brain’s functionality.)

We already have proof that symbolic systems work. It’s everywhere around us. Our web browsers, operating systems, applications, games, etc. are based on rule-based programs. “The same tools are also, ironically, used in the specification and execution of virtually all of the world’s neural networks,” Marcus notes.

Decades of computer science and cognitive science have proven that being able to store and manipulate abstract concepts is an essential part of any intelligent system. And that is why symbol-manipulation should be a vital component of any robust AI system.

“It is from there that the basic need for hybrid architectures that combine symbol manipulation with other techniques such as deep learning most fundamentally emerges,” Marcus says.

Examples of hybrid AI systems


The benefit of hybrid AI systems is that they can combine the strengths of neural networks and symbolic AI. Neural nets can find patterns in the messy information we collect from the real world, such as visual and audio data, large corpora of unstructured text, emails, chat logs, etc. And on their part, rule-based AI systems can perform symbol-manipulation operations on the extracted information.

Despite the heavy dismissal of hybrid artificial intelligence by connectionists, there are plenty of examples that show the strengths of these systems at work. As Marcus notes in his paper, “Researchers occasionally build systems containing the apparatus of symbol-manipulation, without acknowledging (or even considering the fact) that they have done so.” Marcus cites several examples where hybrid AI systems are silently solving vital problems.

One example is the Neuro-Symbolic Concept Learner (NSCL), a hybrid AI system developed by researchers at MIT and IBM. The NSCL combines neural networks and symbolic AI to solve visual question answering (VQA) problems, a class of tasks that is especially difficult to tackle with pure neural network–based approaches. The researchers showed that the NSCL was able to solve the CLEVR VQA dataset with impressive accuracy. Moreover, the hybrid AI model achieved this using much less training data while producing explainable results, addressing two fundamental problems plaguing deep learning.

Google’s search engine is a massive hybrid AI that combines state-of-the-art deep learning techniques such as Transformers and symbol-manipulation systems such as knowledge-graph navigation tools.

AlphaGo, one of the landmark AI achievements of the past few years, is another example of combining symbolic AI and deep learning.

“There are plenty of first steps towards building architectures that combine the strengths of the symbolic approaches with insights from machine learning, in order to develop better techniques for extracting and generalizing abstract knowledge from large, often noisy data sets,” Marcus writes.

The paper goes into much more detail about the components of hybrid AI systems, and the integration of vital elements such as variable binding, knowledge representation and causality with statistical approximation.

“My own strong bet is that any robust system will have some sort of mechanism for variable binding, and for performing operations over those variables once bound. But we can’t tell unless we look,” Marcus writes.
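The difference variable binding makes can be shown with a toy example (hypothetical, not from Marcus's paper): a symbolic rule binds a variable and therefore applies to inputs it has never seen, while a pure lookup-table "learner" can only reproduce the pairs it memorized.

```javascript
// A symbolic rule binds the variable x and applies to any input, seen or unseen.
const doubleRule = (x) => x + x;

// A lookup-table "learner" that has only memorized specific training pairs.
const trainingPairs = new Map([[1, 2], [2, 4], [3, 6]]);
const doubleMemorized = (x) => trainingPairs.get(x);

console.log(doubleRule(1000));      // generalizes far outside the "training" range: 2000
console.log(doubleMemorized(1000)); // no generalization beyond memorized pairs: undefined
```

The rule-based version captures the abstract pattern itself, which is exactly the capability Marcus argues must be combined with statistical learning.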

Lessons from history

One thing to commend Marcus on is his persistence in arguing that the field must bring together all achievements of AI to advance. And he has done it almost single-handedly in the past years, against overwhelming odds, while most of the prominent voices in artificial intelligence have been dismissing the idea of revisiting symbol manipulation.

Marcus sticking to his guns is almost reminiscent of how Hinton, Bengio, and LeCun continued to push neural networks forward in the decades when there was little interest in them. Their faith in deep neural networks eventually bore fruit, triggering the deep learning revolution in the early 2010s and earning them a Turing Award in 2019.

It will be interesting to see where Marcus’ quest for creating robust, hybrid AI systems leads.


Source


How to build a decentralized token bridge between Ethereum and Binance Smart Chain?

Conclusion

The advent of blockchain bridges has made blockchain a more mainstream technology. Bridging solutions also aid the design of DeFi applications that advance the prospect of a decentralized financial system. By enabling different blockchains to connect and work together, blockchain bridges help users move toward the next-generation decentralized system, aiming to end the dominance of centralized systems in the business ecosystem. Going forward, the blockchain space will bring about many new paradigms that reinvent existing bridges and promote greater innovation and technological relevance.

 

Blockchain technology keeps evolving, and it has changed significantly since 2008, when Satoshi Nakamoto introduced the first cryptocurrency, Bitcoin, to the world. Bitcoin brought blockchain technology along with it. Since then, multiple blockchain platforms have been launched, each with unique features and functionality to close the gap between blockchain technology and its real-world applications. Notwithstanding the amazing benefits of blockchain, such as its decentralized nature, the immutability of records, the distributed ledger, and smart contract technology, a major hurdle still affects blockchain’s mass adoption: the lack of interoperability.

Although public blockchains maintain transparency in their on-chain data, their siloed nature limits the holistic utilization of blockchain in decentralized finance and many other industries. Blockchains have unique capabilities that users often want to utilize together. However, that isn’t possible by default, since these blockchains work independently in their isolated ecosystems and abide by their own consensus rules. Independent blockchains can’t interact with each other to exchange information or value.

This interoperability issue becomes critical as blockchain networks expand and more DeFi projects go cross-chain. Moreover, the siloed nature of blockchains contradicts the core principle of decentralization, which revolves around making blockchain accessible to everyone. Is there any solution to this lack of interoperability? How can someone on the Ethereum network access the data and resources available on a different blockchain like Binance Smart Chain? That’s where bridging solutions, or blockchain bridges, come in.

Let’s explore bridging solutions and their working mechanisms in this article. In addition, we will learn how to build a decentralized token bridge between Ethereum and Binance Smart Chain, two popular blockchains for DeFi development.

What are blockchain bridges?

A blockchain bridge enables interoperability and connectivity between two unique blockchains that operate under different consensus mechanisms. More plainly put, blockchain bridges allow two different blockchains to interact with each other. Bridged blockchains can share smart contract execution instructions, transfer tokens, and exchange data and resources back and forth, as they are no longer limited to their chain of origin. They can even access off-chain data, such as a live stock market feed. Some of the widely used blockchain bridges are xPollinate, Matic Bridge, and Binance Bridge. Blockchain bridges provide the following benefits to users:

  • Users can leverage the benefits of two separate blockchains to create dApps, instead of relying only on the hosting blockchain. For example, a user can deploy a dApp on Solana and power it with Ethereum’s smart contract technology.
  • Users can transfer tokens from a blockchain that charges high transaction costs to another blockchain where transaction costs are comparatively cheaper.
  • With the ability to transfer tokens instantly, users can quickly shift from a volatile cryptocurrency to a stablecoin without the help of an intermediary.
  • One can also host digital assets on a decentralized application of a different blockchain. For example, one can create NFTs on the Cardano blockchain and host them on the Ethereum marketplace.
  • Bridging allows users to execute dApps across multiple blockchain ecosystems.

What are the Types of Blockchain Bridges?

To understand how blockchain bridges work, we first need to know what types exist. Currently, there are two types of blockchain bridges: federated bridges and trustless bridges. Let’s understand their working mechanisms.

Federated bridge

A federated bridge is also known as a centralized bridge. It is essentially a kind of centralized exchange where users interact with pools operated by a company or middleman. For a token transfer between Ether and BNB, there are two large pools: one containing BNB and another containing Ether. When the sender initiates a transfer with Ether, the Ether is added to the first pool, and an equivalent amount of BNB is sent to them out of the second pool. The centralized authority charges a small fee to regulate this process, small enough that users can pay it conveniently.
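The pooled-exchange mechanic just described can be sketched as a toy model. The pool balances, the fixed exchange rate, and the fee below are illustrative assumptions, not values from any real bridge:

```javascript
// Toy model of a federated (centralized) bridge: two pools held by a custodian.
const pools = { ETH: 100, BNB: 10000 };
const RATE_ETH_TO_BNB = 10; // assumed fixed exchange rate for this sketch
const FEE_BPS = 100;        // 1% custodian fee, in basis points

function swapEthForBnb(amountEth) {
  const bnbOut = (amountEth * RATE_ETH_TO_BNB * (10000 - FEE_BPS)) / 10000;
  if (bnbOut > pools.BNB) throw new Error('insufficient BNB liquidity');
  pools.ETH += amountEth; // the sender's Ether joins the Ether pool
  pools.BNB -= bnbOut;    // an equivalent amount of BNB leaves the BNB pool
  return bnbOut;
}

console.log(swapEthForBnb(10)); // 99: 100 BNB worth of Ether, minus the 1% fee
```

A real custodian would also handle deposits, withdrawals, and rate updates; the sketch only shows why the custodian's two pools must hold enough liquidity on both sides.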

Trustless bridge

These are purely decentralized bridges that eliminate the role of any third party. Trustless blockchain bridges don’t even use an API to administer the burning and minting of tokens; instead, smart contracts play the key role. When a user initiates a token transfer through a trustless bridge, a smart contract freezes their current cryptos, and an equivalent amount of tokens is minted for them on the new network. The contract on the new network mints the tokens because it can verify that the user has already frozen or burnt tokens on the other network.

What are the main features of a bridging solution?

Lock and Mint

Tokens are not really transferred via a blockchain bridge. When a user transfers a token to another blockchain, a two-stage process takes place. At first, the tokens are frozen on the current blockchain. Then, a token of equal value is minted on the receiving blockchain. So, if the user wants to redeem the tokens, the bridge burns the equivalent token to unlock the original value.
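The two-stage flow above can be sketched as a toy ledger. The object names and amounts are illustrative assumptions, not the bridge's actual code:

```javascript
// Toy lock-and-mint accounting across two chains.
const sourceChain = { locked: 0 };  // original tokens frozen on the current blockchain
const destChain = { wrapped: 0 };   // equal-value tokens minted on the receiving chain

function bridgeOut(amount) {
  sourceChain.locked += amount; // stage 1: freeze tokens on the source chain
  destChain.wrapped += amount;  // stage 2: mint equal value on the receiving chain
}

function redeem(amount) {
  if (amount > destChain.wrapped) throw new Error('nothing to redeem');
  destChain.wrapped -= amount;  // burn the wrapped tokens...
  sourceChain.locked -= amount; // ...to unlock the original value
}

bridgeOut(50);
redeem(20);
console.log(sourceChain.locked, destChain.wrapped); // 30 30
```

The invariant to notice is that the locked balance always equals the wrapped supply, which is what keeps the minted tokens fully backed.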

Trust-based Solution

Trust-based decentralized blockchain bridges are popular even though they include a ‘merchant’ or trusted custodian. The custodian controls the funds (tokens) via a wallet and helps ease the token transfer process, which keeps the solution flexible across many blockchain networks.

Assisting Sidechain

While a regular bridge links two different blockchains, a sidechain bridge connects a parent blockchain to its child blockchain. Since the parent and child blockchains exist on separate chains, they need a blockchain bridge to communicate or share data.

Robust Management

Bridge validators act as the network operators. These operators issue corresponding tokens, through a special smart contract, in exchange for the tokens they receive from another network.

Cross-chain Collaterals

Cross-chain collateral helps users move assets of significant value from one blockchain to another with low fees. Earlier, users could borrow assets only on their native chain; now, they can leverage cross-chain borrowing through a blockchain bridge, which requires additional liquidity.

Efficiency

Blockchain bridges enable spontaneous micro-transfers. These transfers happen instantly between different blockchains at feasible, nominal rates.

Why is a bridging solution needed?

Following are the three big reasons a blockchain bridge or bridging solution is crucial:

Multi-blockchain token transfer

The most obvious yet crucial role of a blockchain bridge is enabling cross-blockchain exchange. Users can instantly mint tokens on the desired blockchain without any costly or time-consuming exchange process.

Development

Blockchain bridges help various blockchains develop by leveraging each other’s abilities. For instance, features of Ethereum are not natively available on BSC. Bridging solutions let the chains work and grow together to solve the challenges in the blockchain space.

Transaction fees

The last big reason behind someone’s need for a bridging solution is transaction fees, often high on popular blockchains. In contrast, newer blockchains don’t impose high transaction costs, though they lack security and other major features. So, bridges allow people to access new networks, transfer tokens to that network, and process transactions at a comparatively low cost.

How to build a decentralized token bridge between Ethereum and Binance Smart Chain?

Using this step-by-step procedure, you will learn how to build a completely decentralized bridge between Ethereum and Binance Smart Chain using the Solidity programming language. Although many blockchain bridges use an API to transfer tokens and information, APIs are vulnerable to hacks and can be made to send bogus transactions once compromised. So, we will make the bridge fully decentralized by removing the API from the mechanism.

Instead, a bridge script generates a signed message that the contract receives and verifies before minting the tokens. The contract also makes sure that each message is unique and hasn’t been used before. That way, you give the signed message to the user, and they are in charge of submitting it to the blockchain to mint the tokens and pay for the transaction.

First, set up the smart contract for the bridge base:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import '@openzeppelin/contracts/token/ERC20/IERC20.sol';
import './IToken.sol';

contract BridgeBase {
    address public admin;
    IToken public token;
    // Tracks used nonces per sender to prevent replaying the same transfer
    mapping(address => mapping(uint => bool)) public processedNonces;

    enum Step { Burn, Mint }

    event Transfer(
        address from,
        address to,
        uint amount,
        uint date,
        uint nonce,
        bytes signature,
        Step indexed step
    );

    constructor(address _token) {
        admin = msg.sender;
        token = IToken(_token);
    }

    function burn(address to, uint amount, uint nonce, bytes calldata signature) external {
        require(processedNonces[msg.sender][nonce] == false, 'transfer already processed');
        processedNonces[msg.sender][nonce] = true;
        token.burn(msg.sender, amount);
        emit Transfer(
            msg.sender,
            to,
            amount,
            block.timestamp,
            nonce,
            signature,
            Step.Burn
        );
    }

    function mint(
        address from,
        address to,
        uint amount,
        uint nonce,
        bytes calldata signature
    ) external {
        // Rebuild the message that was signed off-chain and check the signature
        bytes32 message = prefixed(keccak256(abi.encodePacked(
            from,
            to,
            amount,
            nonce
        )));
        require(recoverSigner(message, signature) == from, 'wrong signature');
        require(processedNonces[from][nonce] == false, 'transfer already processed');
        processedNonces[from][nonce] = true;
        token.mint(to, amount);
        emit Transfer(
            from,
            to,
            amount,
            block.timestamp,
            nonce,
            signature,
            Step.Mint
        );
    }

    function prefixed(bytes32 hash) internal pure returns (bytes32) {
        // Mimics eth_sign so on-chain recovery matches web3.eth.accounts.sign
        return keccak256(abi.encodePacked(
            '\x19Ethereum Signed Message:\n32',
            hash
        ));
    }

    function recoverSigner(bytes32 message, bytes memory sig)
        internal
        pure
        returns (address)
    {
        (uint8 v, bytes32 r, bytes32 s) = splitSignature(sig);
        return ecrecover(message, v, r, s);
    }

    function splitSignature(bytes memory sig)
        internal
        pure
        returns (uint8, bytes32, bytes32)
    {
        require(sig.length == 65, 'invalid signature length');
        bytes32 r;
        bytes32 s;
        uint8 v;
        assembly {
            // first 32 bytes, after the length prefix
            r := mload(add(sig, 32))
            // second 32 bytes
            s := mload(add(sig, 64))
            // final byte (first byte of the next 32 bytes)
            v := byte(0, mload(add(sig, 96)))
        }
        return (v, r, s);
    }
}

After constructing the bridge base contract, create the Binance Smart Chain bridge contract that inherits from it:

pragma solidity ^0.8.0;

import './BridgeBase.sol';

contract BridgeBsc is BridgeBase {
    constructor(address token) BridgeBase(token) {}
}

Next, create the other component of the decentralized token bridge, the Ethereum bridge contract, using the following code.

pragma solidity ^0.8.0;

import './BridgeBase.sol';

contract BridgeEth is BridgeBase {
    constructor(address token) BridgeBase(token) {}
}

With the bridge contracts in place, define the IToken interface that exposes the mint and burn functions the bridges call:

pragma solidity ^0.8.0;

interface IToken {
    function mint(address to, uint amount) external;
    function burn(address owner, uint amount) external;
}

Next, add the standard Truffle Migrations contract:

// SPDX-License-Identifier: MIT
pragma solidity >=0.4.22 <0.9.0;

contract Migrations {
    address public owner = msg.sender;
    uint public last_completed_migration;

    modifier restricted() {
        require(
            msg.sender == owner,
            "This function is restricted to the contract's owner"
        );
        _;
    }

    function setCompleted(uint completed) public restricted {
        last_completed_migration = completed;
    }
}

Now, write the smart contract for the token base.

pragma solidity ^0.8.0;

import '@openzeppelin/contracts/token/ERC20/ERC20.sol';

contract TokenBase is ERC20 {
    address public admin;

    constructor(string memory name, string memory symbol) ERC20(name, symbol) {
        admin = msg.sender;
    }

    function updateAdmin(address newAdmin) external {
        require(msg.sender == admin, 'only admin');
        admin = newAdmin;
    }

    function mint(address to, uint amount) external {
        require(msg.sender == admin, 'only admin');
        _mint(to, amount);
    }

    function burn(address owner, uint amount) external {
        require(msg.sender == admin, 'only admin');
        _burn(owner, amount);
    }
}

With the token base written, define the token for Binance Smart Chain using the given code:

pragma solidity ^0.8.0;

import './TokenBase.sol';

contract TokenBsc is TokenBase {
    constructor() TokenBase('BSC Token', 'BTK') {}
}

Next, deploy the token on Ethereum using the given code:

pragma solidity ^0.8.0;

import './TokenBase.sol';

contract TokenEth is TokenBase {
    constructor() TokenBase('ETH Token', 'ETK') {}
}

Once the token contracts are ready for Binance Smart Chain and Ethereum, program the initial migration:

const Migrations = artifacts.require("Migrations");

module.exports = function (deployer) {
  deployer.deploy(Migrations);
};

Now, write the migration that deploys the tokens and bridges on each network.

const TokenEth = artifacts.require('TokenEth.sol');
const TokenBsc = artifacts.require('TokenBsc.sol');
const BridgeEth = artifacts.require('BridgeEth.sol');
const BridgeBsc = artifacts.require('BridgeBsc.sol');

module.exports = async function (deployer, network, addresses) {
  if (network === 'ethTestnet') {
    await deployer.deploy(TokenEth);
    const tokenEth = await TokenEth.deployed();
    await tokenEth.mint(addresses[0], 1000);
    await deployer.deploy(BridgeEth, tokenEth.address);
    const bridgeEth = await BridgeEth.deployed();
    // Hand minting/burning rights over to the bridge
    await tokenEth.updateAdmin(bridgeEth.address);
  }
  if (network === 'bscTestnet') {
    await deployer.deploy(TokenBsc);
    const tokenBsc = await TokenBsc.deployed();
    await deployer.deploy(BridgeBsc, tokenBsc.address);
    const bridgeBsc = await BridgeBsc.deployed();
    await tokenBsc.updateAdmin(bridgeBsc.address);
  }
};

Once the bridges are deployed, use this Truffle script to check the recipient’s token balance on Binance Smart Chain:

const TokenBsc = artifacts.require('./TokenBsc.sol');

module.exports = async done => {
  const [recipient, _] = await web3.eth.getAccounts();
  const tokenBsc = await TokenBsc.deployed();
  const balance = await tokenBsc.balanceOf(recipient);
  console.log(balance.toString());
  done();
}

Next, program the bridge listener script that watches for Transfer events on the Ethereum bridge and mints on Binance Smart Chain:

const Web3 = require('web3');
const BridgeEth = require('../build/contracts/BridgeEth.json');
const BridgeBsc = require('../build/contracts/BridgeBsc.json');

const web3Eth = new Web3('url to eth node (websocket)');
const web3Bsc = new Web3('https://data-seed-prebsc-1-s1.binance.org:8545');
const adminPrivKey = '';
const { address: admin } = web3Bsc.eth.accounts.wallet.add(adminPrivKey);

const bridgeEth = new web3Eth.eth.Contract(
  BridgeEth.abi,
  BridgeEth.networks['4'].address
);
const bridgeBsc = new web3Bsc.eth.Contract(
  BridgeBsc.abi,
  BridgeBsc.networks['97'].address
);

// Listen only for Burn events (Step.Burn = 0) emitted by the Ethereum bridge
bridgeEth.events.Transfer({ filter: { step: 0 }, fromBlock: 0 })
  .on('data', async event => {
    const { from, to, amount, date, nonce, signature } = event.returnValues;
    const tx = bridgeBsc.methods.mint(from, to, amount, nonce, signature);
    const [gasPrice, gasCost] = await Promise.all([
      web3Bsc.eth.getGasPrice(),
      tx.estimateGas({ from: admin }),
    ]);
    const data = tx.encodeABI();
    const txData = {
      from: admin,
      to: bridgeBsc.options.address,
      data,
      gas: gasCost,
      gasPrice,
    };
    const receipt = await web3Bsc.eth.sendTransaction(txData);
    console.log(`Transaction hash: ${receipt.transactionHash}`);
    console.log(`Processed transfer: from ${from} to ${to}, amount ${amount} tokens, date ${date}, nonce ${nonce}`);
  });

Now, run the script that signs the transfer message with the sender’s private key and burns tokens on the Ethereum bridge.

const BridgeEth = artifacts.require('./BridgeEth.sol');

const privKey = 'priv key of sender';

module.exports = async done => {
  const nonce = 1; // Need to increment this for each new transfer
  const accounts = await web3.eth.getAccounts();
  const bridgeEth = await BridgeEth.deployed();
  const amount = 1000;
  // Hash the same fields, in the same order, as the contract's mint() function
  const message = web3.utils.soliditySha3(
    {t: 'address', v: accounts[0]},
    {t: 'address', v: accounts[0]},
    {t: 'uint256', v: amount},
    {t: 'uint256', v: nonce},
  );
  const { signature } = web3.eth.accounts.sign(
    message,
    privKey
  );
  await bridgeEth.burn(accounts[0], amount, nonce, signature);
  done();
}

At last, program the script that checks the sender’s token balance on the Ethereum side:

const TokenEth = artifacts.require('./TokenEth.sol');

module.exports = async done => {
  const [sender, _] = await web3.eth.getAccounts();
  const tokenEth = await TokenEth.deployed();
  const balance = await tokenEth.balanceOf(sender);
  console.log(balance.toString());
  done();
}

To run the demo, follow the given steps:

To deploy the bridge smart contract on Ethereum, run the following command against the Ethereum testnet:

~ETB/code/screencast/317-eth-bsc-decenrealized-bridge $ truffle migrate --reset --network ethTestnet

To deploy the bridge smart contract on Binance Smart Chain, run the following command against the BSC testnet:

~ETB/code/screencast/317-eth-bsc-decenrealized-bridge $ truffle migrate --reset --network bscTestnet
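For these commands to work, the ethTestnet and bscTestnet networks must be defined in the project’s truffle-config.js. A minimal sketch follows; the provider URL placeholder, mnemonic handling, and network ids are assumptions you will need to adapt to your own setup (the ids 4 and 97 match the addresses looked up by the listener script above):

```javascript
// truffle-config.js (sketch) -- adapt URLs, keys, and ids to your environment.
const HDWalletProvider = require('@truffle/hdwallet-provider');
const mnemonic = process.env.MNEMONIC; // never hard-code keys in the config

module.exports = {
  networks: {
    ethTestnet: {
      provider: () => new HDWalletProvider(mnemonic, 'url to eth testnet node'),
      network_id: 4, // matches BridgeEth.networks['4'] in the listener script
    },
    bscTestnet: {
      provider: () => new HDWalletProvider(mnemonic, 'https://data-seed-prebsc-1-s1.binance.org:8545'),
      network_id: 97, // matches BridgeBsc.networks['97']
    },
  },
  compilers: {
    solc: { version: '0.8.0' }, // the contracts above use pragma ^0.8.0
  },
};
```

With this config in place, truffle migrate --network ethTestnet and --network bscTestnet will route each migration branch to the right chain.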