ABOUT THE AUTHOR

George Dyson was born in 1953. Through his father, a mathematical physicist, his mother, a logician, and his sister, a computer industry analyst, he indirectly witnessed the conjunction of theory, technology and high finance which precipitated the information age. A kayak builder and ethnohistorian, his experience in the Canadian and Alaskan wilderness has sharpened his skills as an observer of the convergence between technology and living things.

George Dyson

Darwin Among the Machines

PENGUIN BOOKS

Published by the Penguin Group
Penguin Books Ltd, 80 Strand, London WC2R 0RL, England
Penguin Group (USA) Inc., 375 Hudson Street, New York, New York 10014, USA
Penguin Group (Canada), 90 Eglinton Avenue East, Suite 700, Toronto, Ontario, Canada M4P 2Y3 (a division of Pearson Penguin Canada Inc.)
Penguin Ireland, 25 St Stephen’s Green, Dublin 2, Ireland (a division of Penguin Books Ltd)
Penguin Group (Australia), 250 Camberwell Road, Camberwell, Victoria 3124, Australia (a division of Pearson Australia Group Pty Ltd)
Penguin Books India Pvt Ltd, 11 Community Centre, Panchsheel Park, New Delhi – 110 017, India
Penguin Group (NZ), 67 Apollo Drive, Rosedale, Auckland 0632, New Zealand (a division of Pearson New Zealand Ltd)
Penguin Books (South Africa) (Pty) Ltd Block D, Rosebank Office Park, 181 Jan Smuts Avenue, Parktown North, Gauteng 2193, South Africa

Penguin Books Ltd, Registered Offices: 80 Strand, London WC2R 0RL, England

penguin.com

First published in the USA by Addison-Wesley Publishing Company 1997
First published in Great Britain by Allen Lane The Penguin Press 1998
Published in Penguin Books 1999

Copyright © George Dyson, 1997
All rights reserved

The woodcut on page 228, from The Famous History of Frier Bacon, 1679, appears by permission of the Huntington Library, San Marino, California

The moral right of the author has been asserted

Except in the United States of America, this book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, re-sold, hired out, or otherwise circulated without the publisher’s prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser

ISBN: 978-0-7181-9457-4

Anything can happen once.

PHILIP MORRISON

Contents

Preface: Edge of the World

1. Leviathan

2. Darwin Among the Machines

3. The General Wind

4. On Computable Numbers

5. The Proving Ground

6. Rats in a Cathedral

7. Symbiogenesis

8. On Distributed Communications

9. Theory of Games and Economic Behavior

10. There’s Plenty of Room at the Top

11. Last and First Men

12. Fiddling While Rome Burns

Notes

Acknowledgments

Preface

Edge of the World

This is a book about the nature of machines. It is framed as history but makes no claim to have separated the fables from the facts. Both mythology and science have a voice in explaining how human beings and technology arrived at the juncture that governs our lives today.

I have attempted, in my own life and in this book, to reconcile a love of nature with an affection for machines. In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

In November of 1972, at the age of nineteen, I built a small tree house on the shore of Burrard Inlet in British Columbia, and settled in. In winter I consumed books and firewood; in summer I explored the British Columbian and Alaskan coasts. The tree house, ninety-five feet up in a Douglas fir, was paneled with cedar I found drifting in Georgia Strait, split into boards whose grain spanned as many as seven hundred years.

During those tree house winters I had lots of time to think. It got dark at four in the afternoon, rained for days on end, and, when the ocean fog rolled in, the earth, but not the sky, was obscured. At odd, unpredictable moments I found myself wondering whether trees could think. Not thinking the way we think, but thinking the way trees think; say, two or three hundred years to form the slow trace of an idea.

I spent the summers working on a variety of boats. When running at night I preferred to take the midnight-to-daybreak watch. By three or four in the morning, I was alone with the trace of unseen landforms on the radar screen and the last hour or two of night. I sometimes left the helm and paced the decks. The world receded in a phosphorescent wake, while birds appeared as red or green phantoms in the glow of the running lights, depending on whether they took wing on the port or starboard side. I also found myself slipping down into the engine room for more than the obligatory check.

When you live within a boat its engine leaves an imprint, deeper than mind, on neural circuits first trained to identify the acoustic signature of a human heart. As I had sometimes drifted off to sleep in the forest canopy, boats passing in the distance, and wondered whether trees might think, so I sat in the engine-room companionway in the small hours of the morning, with the dark, forested islands passing by, and wondered whether engines might have souls. This question threads its way through the chapters of this book.

We are brothers and sisters of our machines. Minds and tools have been sharpened against each other ever since a scavenger’s stone fractured cleanly and the first cutting edge was held in a hunter’s hand. The obsidian flake and the silicon chip are struck by the light of the same campfire that has passed from hand to hand since the human mind began.

This book is not about the future. Where we are at present is puzzling enough. I prefer to look into the past, exercising the historian’s privilege of selecting predictions that turned out to be right. The past is where we find answers to our questions: Who are we, and why? The future is where we see questions to which the answers are up to us.

Do we remain one species, or diverge into many?

Do we remain of many minds, or merge into one?

PENGUIN BOOKS

DARWIN AMONG THE MACHINES

‘Brilliant… a wonderfully chewy, nuggety tour of the fields of ideas behind technological history; and it’s a felt piece of work too’ Francis Spufford, Literary Review

‘Presenting ideas from 20th-century scientists, as well as from professional thinkers like Hobbes, Babbage and Leibniz, George Dyson has come up with a nicely condensed history of the people and processes that have led to today’s technology’ J.D. Biersdorfer, The New York Times Book Review

‘A cogent, succinct history of thinkers and thinking that paved the way… to today’s technology’ Katie Hafner, Newsweek

‘George Dyson’s clever, eccentric Darwin among the Machines brings evolutionary thinking to bear on 21st-century subjects such as machine intelligence… His arguments are subtle and careful’ Maggie Gee, Daily Telegraph

‘Lucid and thoughtful’ Sadie Plant, The Times

Acknowledgments

Princeton University’s Firestone Library, the largest open-stack collection in the world, is one of the few libraries that require a university identification card to get in. The job of guarding the turnstile at the entrance to the library must dull one’s attention over the years, and I discovered in 1967 that by melting into the crowd of students flooding into the library at 8:30 in the morning, it was usually possible to sneak in. Firestone’s fifty-five miles of books, most of them shelved underground, offered a warm, anonymous refuge until it was safe to reappear out on the street and meet up with friends who had suffered through a day at school. I was left with a love of libraries, and a fear of librarians, that has lasted ever since.

Western Washington University’s Fairhaven College granted me research associate status, with library privileges, to write this book. The Mabel Zoe Wilson Library is a small, comfortable facility, and to its resources I owe most of the citations appearing here. Special thanks go to Frank Haulgren and colleagues at interlibrary loan, who successfully pursued obscure requests. Bob Christensen, who enjoys confronting librarians as avidly as I shy away from them, helped excavate many things. Robert Keller, Marie Eaton, and others at Fairhaven College managed to bend the university’s rules around my absence of credentials. Without such support this book would not exist.

The engines of evolution are driven by the recombination of genes; human creativity is driven by the recombination of ideas; literature is driven by the recombination of books. This book owes its elements to many others, cited elsewhere, and to two books that deserve special mention here. My father’s Origins of Life1 and my mother’s Gödel’s Theorems2 contributed substantially to whatever limited understanding of the foundations of biology and of the foundations of mathematics is represented in this book. Both critiqued the manuscript as it took form, but any remaining errors or misinterpretations are my own.

In 1982 my sister, Esther Dyson, became editor of the Rosen Electronics Letter, a Wall Street investment newsletter that sensed wider implications as the personal-computer revolution began. Esther observed the new industry, and I observed Esther. All my perspectives on computational ecology can be traced to the Rosen Electronics Letter (which became RELease 1.0 in 1983). This does not imply that Esther agrees with any of my interpretations of her work.

Thanks to Esther, I met literary agent John “No Wasted Motion” Brockman in 1984, who, nine years later, with Katinka Matson, helped precipitate this book. William Patrick at Addison-Wesley accepted an ambiguous proposal, and Jeff Robbins had the patience to await a manuscript, followed by the efficiency as editor to produce a book without additional delay. Others, including Danny Hillis, William S. Laughlin, James Noyes, Patrick Ong, and Ann Yow, offered encouragement at different stages along the way. The builders of my boat designs kept me afloat. I owe the last sentence in this book, and more, to David Brower—archdruid, mountaineer, and editor of landmarks from In Wildness … to On the Loose.

My daughter Lauren had just turned five, in 1994, when we watched a videotape describing Thomas Ray’s digital organisms, self-reproducing numbers that had enraptured their creator by evolving new species and new patterns of behavior overnight. Ray was speaking at the Institute for Advanced Study, in Princeton, New Jersey, where forty years earlier the first experiments at evolving numerical organisms were performed. Ray’s Tierran creatures inhabit a landscape entirely foreign to our own. Their expanding digital universe was first wrested into existence, out of the realm of pure mathematics, by the glow of twenty-six hundred vacuum tubes that flickered briefly at the dawn of digital programming in a low brick building at the foot of Olden Lane. Tom Ray and his portable universe now stood on ancestral ground.

“This is Tom Ray and his imaginary creatures,” I said, explaining what we were watching partway through the tape. “But Dad,” my daughter corrected, “they’re not imaginary!”

She’s right.

1

Leviathan

Canst thou draw out leviathan with an hook?
or his tongue with a cord which thou lettest down?

Canst thou put an hook into his nose?
or bore his jaw through with a thorn?

Will he make many supplications unto thee?
will he speak soft words unto thee?

Will he make a covenant with thee?
wilt thou take him for a servant for ever?

Wilt thou play with him as with a bird?
or wilt thou bind him for thy maidens?

Shall the companions make a banquet of him?
shall they part him among the merchants?

Canst thou fill his skin with barbed irons?
or his head with fish spears?

Lay thine hand upon him, remember the battle, do no more.

JOB 41:1–8

“Nature (the Art whereby God hath made and governes the World) is by the Art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal,” wrote Thomas Hobbes (1588–1679) on the first page of his Leviathan; or, The Matter, Forme, and Power of a Common-wealth Ecclesiasticall and Civill, published to great disturbance in 1651. “For seeing life is but a motion of Limbs, the beginning whereof is in the principall part within; why may we not say that all Automata (Engines that move themselves by springs and wheeles as doth a watch) have an artificiall life?”1 Hobbes believed that the human commonwealth, given substance by the power of its institutions and the ingenuity of its machines, would coalesce to form that Leviathan described in the Old Testament, when the Lord, speaking to Job out of the whirlwind, had warned, “Upon earth there is not his like, who is made without fear.”

Three centuries after Hobbes, automata are multiplying with an agility that no vision formed in the seventeenth century could have foretold. Artificial intelligence flickers on the desktop and artificial life has become a respectable pursuit. But the artificial life and artificial intelligence that so animated Hobbes’s outlook on the world were not the discrete, autonomous mechanical intelligence conceived by the architects of digital processing in the twentieth century. Hobbes’s Leviathan was a diffuse, distributed, artificial organism more characteristic of the technologies and computational architectures approaching with the arrival of the twenty-first.

“What is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer?” asked Hobbes. “Art goes yet further, imitating that rationall and most excellent worke of Nature, Man. For by Art is created that great LEVIATHAN called a COMMON-WEALTH … which is but an Artificiall Man.”2 Despite his reasoned arguments Hobbes was variously condemned by the monarchy, the Parliament, the universities, and the church. Hobbes saw human society as a self-organizing system, possessed of a life and intelligence of its own. Power was vested by mutual consensus, but not by divine right, in the hands of an assembly or a king. Loyalty was useful but need not be absolute. This ambivalence was viewed with suspicion from both sides. “Mr. Hobbs defyeth the whole host of learned men,” and was “dangerous to both Government and Religion,” warned Alexander Ross in Leviathan Drawn out with a Hook,3 the first of a series of attacks that culminated with the citing by the House of Commons of Hobbes’s blasphemies as a probable cause of the great fire and plague of 1666. Although threats against Hobbes were never executed, he destroyed his more incriminating manuscripts, fearing the worst. In his Historical Narration Concerning Heresie, and the Punishment Thereof, written in 1668, Hobbes maintained that his ideas did not fit the existing definition of heresy and accusations against him were unjust; in any event, he argued, there was no legal authority for burning heretics at the stake. Nonetheless, after Hobbes was safely dead, a decree by the University of Oxford in 1683 recommended that Leviathan, among other “Pernicious Books and Damnable Doctrines,” be burned.4

Hobbes’s blasphemy was his vision of a diffuse intelligence that was neither the supreme intelligence of God nor the individual intelligence of the human mind. Leviathan was a collective organism, transcending the individual beings and institutional organs of which it was composed. Human society, taken as a whole, constituted a new form of life, explained Hobbes, “in which, the Soveraignty is an Artificiall Soul, as giving life and motion to the whole body; The Magistrates, and other Officiers of Judicature and Execution, Artificiall Joynts; Reward and Punishment (by which fastned to the seate of the Soveraignty, every joynt and member is moved to performe his duty) are the Nerves, that do the same in the body Naturall; The Wealth and Riches of all the particular members, are the Strength; Salus Populi (the peoples safety) its Businesse; Counsellors, by whom all things needfull for it to know, are suggested unto it, are the Memory; Equity and Lawes, an artificiall Reason and Will; Concord, Health; Sedition, Sicknesse; and Civill war, Death.”5

Hobbes sought not to diminish the intelligence of any existing being, human or divine, but rather to discover evidence of intelligence in the vacuum that supposedly intervened. As he argued against the physical vacuum demonstrated by the air pump of Robert Boyle, so he argued against the metaphysical vacuum that separated God from man. Hobbes hinted at a science of complex systems as comprehensive (and potentially heretical) as the two new sciences by which Galileo, befriended by Hobbes in 1636, had revealed the relative motion of all things. Hobbes’s shortcomings as a mathematician, ridiculed by other natural philosophers, were outweighed by his facility with words. His ambition—when not distracted by civil war, the Restoration, or other social upheavals of the time—was to construct a consistent and purely materialistic natural philosophy of mind. “Motion produceth nothing but motion,” he argued.6 “And consequently every part of the Universe, is Body, and that which is not Body, is no part of the Universe: And because the Universe is All, that which is no part of it, is Nothing.”7 His analysis revealed deep-seated contradictions within the doctrines of the church. “Wee are told, there be in the world certaine Essences separated from Bodies, which they call Abstract Essences, and Substantiall Formes: For the Interpretation of which Jargon, there is need of somewhat more than ordinary attention…. Being once fallen into this Error of Separated Essences, they are thereby necessarily involved in many other absurdities that follow it…. Can any man think that God is served with such absurdities?”8

Hobbes protested strongly against the metaphysics of René Descartes (1596–1650). His objections, along with a terse response, were published in 1641 as an appendix to Descartes’s Meditationes de prima philosophia, translated into English as Six Metaphysical Meditations; Wherein it is Proved that there is a God. And that Mans Mind is really distinct from his Body. “The question may be put infinitely, how do you know that you know, that you know, that you know? &c,” argued Hobbes. “Wherefore… we cannot separate thought from thinking matter, it seems rather to follow, that a thinking thing is material, than that ’tis immaterial.”9 Hobbes countered all the arguments that would reappear much later as arguments against the possibility of mind among machines. “Ratiocination will depend on Words, Words on Imagination, and perhaps Imagination as also Sense on the Motion of Corporeal Parts; and so the Mind shall be nothing but Motions in some Parts of an Organical Body,” he explained, treading dangerously close to heresy, though failing to dissuade Descartes.10

In suggesting, as Alexander Ross put it, “that our natural reason is the word of God” and that “it was a winde, not the holy spirit which in the Creation moved on the waters,”11 Hobbes raised an upheaval that reverberated for three hundred years. The seeds of the Darwinian revolution, with all its ensuing controversies, were sown by Hobbes. The precedent for Bishop Samuel Wilberforce versus Thomas Huxley and Charles Darwin in 1860 was set in 1658 by Bishop John Bramhall versus Thomas Hobbes, launched with a sweeping salvo titled The Catching of the Leviathan, or the Great Whale, Demonstrating out of Mr. Hobbs his own Works, That no man who is thoroughly an Hobbist, can be a good Christian, or a Good Common-wealths man, or reconcile himself to himself, Because his Principles are not only destructive to all Religion but to all Societies; extinguishing the Relation between Prince and Subject, Parent and Child, Master and Servant, Husband and Wife; and abound with palpable contradictions.

Hobbes bore these attacks without flinching and made few concessions to the authorities of his time. He was famous for his irreverences, including an opinion that “the Episcopalians ridiculed the Puritans, and the Puritans the Episcopalians; but… the Wise ridiculed both alike.”12 Charles II, then the sixteen-year-old Prince of Wales, had been tutored by Hobbes while exiled in Paris in 1646; with the restoration of the monarchy in 1660 he invited Hobbes into his court. The king awarded Hobbes a small pension and gave him a measure of protection against his enemies, describing him as “a bear, against whom the Church played their young dogs, in order to exercise them.”13 The insults against Hobbes grew bolder on his death, such as the anonymous Dialogues of the Living and the Dead, which appeared in 1699, satirizing Hobbes as “a parcel of atoms jumbled together by chance.” Hobbes had prepared for a protracted battle, leveling his own broadsides at his opponents, epitomized by his Considerations upon the Reputation, Loyalty, Manners, & Religion, of Thomas Hobbes of Malmsbury, written by himself, by way of a Letter to a Learned Person, in which he asked: “What kind of Attribute I pray you is immaterial, or incorporeal substance? Where do you find it in the Scripture? Whence came it hither, but from Plato and Aristotle, Heathens, who mistook those thin Inhabitants of the Brain they see in sleep, for so many incorporeal men; and yet allow them motion, which is proper only to things corporeal? Do you think it an honour to God to be one of these?”14

Hobbes advocated neither the pantheism of the ancients nor the atheism of which he was accused. He believed life and mind to be natural consequences of matter when suitably arranged; God to be a corporeal being, of perhaps infinitely higher mental order but composed of substance nonetheless; and damnation, to those so afflicted, a temporary state. The eloquence of his arguments wounded his critics deeply, whereas Hobbes suffered only superficially from the charges of heresy and promises of eternal hellfire pressed against him in response. “In writing books just as in real life,” he wrote to Cosimo de’ Medici in 1669, “enemies are more useful than friends.”15 Leviathan circulated widely, reprinted by underground or offshore press. “To my bookseller’s, for ‘Hobbs’s Leviathan,’” noted Samuel Pepys in 1668, “which is now mightily called for; and what was heretofore sold for 8s. I now give 24s. at the second hand, and is sold for 30s., it being a book the Bishops will not let be printed again.”16

Hobbes was a lifelong pacifist, a disposition he attributed to a premature birth precipitated by anxiety over the approach of the Spanish Armada in 1588. He meticulously cultivated new ideas and distilled them into words. “He walked much and contemplated,” wrote his contemporary John Aubrey, “and he had in the head of his Staffe a pen and inke-horn, carried always a Note-book in his pocket, and as soon as a notion darted, he presently enterd it into his Booke, or els he should perhaps have lost it.”17 He played tennis until the age of seventy-five (“this he did believe would make him live two or three yeares the longer”) and served up a lively game of words until silenced by a peaceful death at the age of ninety-one. “Neither the timorousness of his Nature from his Infancy, nor the decay of his Vital Heat in the extremity of old age,” reported Aubrey, “chilled the briske Fervour and Vigour of his mind, which did wonderfully continue to him to his last.”18 His most outspoken critics were among the first to grant his intellect their respect. Steven Shapin and Simon Schaffer concluded their exhaustive study of the argument between Hobbes and Robert Boyle with an unambiguous judgment: “Hobbes was right.”19 Hobbes’s vision was never so much extinguished as transformed.

Two centuries after Hobbes, the French electrodynamicist André-Marie Ampère sought to categorize all branches of human knowledge in his Essay on the Philosophy of Science, or Analytic exposition of a natural classification of human knowledge.20 Reaching the field of political science through territory first explored by Hobbes (who composed Leviathan during his exile in Paris, before the French clerical authorities grew agitated by his ideas), Ampère coined a word with a far-reaching destiny: Cybernétique. Derived from Greek terminology referring to the steering of a ship, Ampère’s Cybernétique encompassed that body of theory, complementary to but distinct from the theory of power, concerned with the underlying processes that direct the course of organizations of all kinds. In the second, posthumous volume of Ampère’s Essay, published by his son in 1843, Ampère explains how he came to recognize a field of knowledge “which I name Cybernétique, from the Greek word κυβερνητική, which was applied first, in a restricted sense, to the steering of a vessel, and later acquired, even among the Greeks, a meaning extending to the art of steering in general.”21

Ampère, an early advocate of the electromagnetic telegraph and mathematical pioneer of both game theory and electrodynamics, thereby anticipated the Cybernetics of Norbert Wiener, who, another century later, reinvented both Ampère’s terminology and Hobbes’s philosophy in their current, electronic form. “Although the term cybernetics does not date further back than the summer of 1947,” wrote Wiener in 1948, “we shall find it convenient to use in referring to earlier epochs of the development of the field.”22 Wiener, who was involved in the development of radar-guided anti-aircraft fire control, which marked the beginning of rudimentary perception by electronic machines, was unaware until after the publication of Cybernetics of the coincidence in choosing a name coined by the same Ampère we now honor in measuring the flow of electrons through a circuit. In 1820, by demonstrating that electric currents are able to convey both power and information, Ampère had laid the foundations for Wiener’s cybernetic principles of feedback, adaptation, and control.

We live in an age of embodied logic whose beginnings go back to Thomas Hobbes as surely as it remains our destiny to see new Leviathans unfold. Hobbes established that logic and digital computation share common foundations, suggesting a basis in common with mind. “Per ratiocinationem autem intelligo computationem,” declared Hobbes in 1655, or, “by ratiocination, I mean computation. Now to compute, is either to collect the sum of many things that are added together, or to know what remains when one thing is taken out of another. Ratiocination, therefore is the same with Addition or Substraction; and if any man adde Multiplication and Division, I will not be against it, seeing Multiplication is nothing but Addition of equals one to another, and Division nothing but a Substraction of equals one from another, as often as is possible. So that all Ratiocination is comprehended in these two operations of the minde. Addition and Substraction.”23

This statement launched an argument far from settled after 340 years: If reasoning can be reduced to arithmetic, which, even in Hobbes’s time, could be performed by mechanism, then is mechanism capable of reasoning? Can machines think? (Or, as Marvin Minsky put it, “Why do people think computers can’t?”)24 Hobbes, the patriarch of artificial intelligence, was succeeded in this line of questioning by the young German lawyer and mathematician Gottfried Wilhelm von Leibniz (1646–1716), who made the first attempt at a system of symbolic logic and the first suggestion of a binary computing machine. The holy grail of capturing intelligence within a formal, mechanical system, however, slipped through Leibniz’s grasp.

Or did it? The binary arithmetic and logical calculus of Leibniz and Hobbes’s vague notions of reason as a mathematical function are now executed millions of times per second by thumbnail-size machines. Our formalization of logic is embedded microscopically in these devices, and by every available means of digital communication, from fiber optics to circulating floppy disks, the kingdom of the microprocessor is building a collective body of results. Philosophers and mathematicians have made limited progress at deconstructing the firmament of mind from the top down, while a grand, bottom-up experiment at building intelligence from elemental bits of addition and subtraction has been advancing by leaps and bounds. The results have more in common with the diffuse intelligence of Hobbes’s Leviathan than with the localized artificial intelligence, or AI, that has now been promised for fifty years.

Is intelligence a formal (or mathematically definable) system? Is life a recursive (or mechanically calculable) function? What happens when you replicate discrete-state microprocessors by the billions and run these questions the other way? (Are formal systems intelligent? Are recursive functions alive?) Life and intelligence have learned to operate on any number of different scales: larger, smaller, slower, and faster than our own. Biology and technology evidence parallel tendencies toward collective, hierarchical processes based on information exchange. As information is distributed, it tends to be represented (encoded) by increasingly economical (meaningful) forms. This evolutionary process, whereby the most economical or meaningful representation wins, leads to a hierarchy of languages, encoding meaning on levels that transcend comprehension by the system’s individual components—whether genes, insects, microprocessors, or human minds.

Binary arithmetic is a language held in common by switches of all kinds. The global population of integrated circuits—monolithic networks of microscopic switches that take only billionths of a second to switch between off and on—is growing by more than 100 million units per day.25 Production of silicon wafer, approximately 2.5 billion square inches for the year 1994, is expected to double by the year 2000—enough raw material, to use an existing benchmark, for 30 billion Pentium microprocessors, of 3.3 million transistors each.26 Intel’s Pentium microprocessors are now manufactured, tested, and packaged at a cost of less than forty dollars each, while 350,000-transistor 486SXL embedded microprocessors cost less than eight dollars to manufacture and sell in quantity for about fifteen dollars each.27 Microcontrollers—specialized microprocessors embedded in all kinds of things—were produced at a rate of more than 8 million units per day in 1996.28 Over 200,000 non-embedded 32-bit microprocessors per day were shipped in 1995, and worldwide sales of personal computers exceeded 70 million units for the year. But the distinction between microprocessors and microcontrollers is increasingly obscure. Embedded devices are being integrated into the computational landscape, while computers are reaching beyond the desktop to become more deeply embedded in the control of all aspects of our world.
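The sense in which binary arithmetic is "a language held in common by switches of all kinds" can be sketched directly: a one-bit adder built entirely from Boolean operations, chained into a ripple-carry adder. This is an illustrative example of standard digital-logic design, not a description of any particular processor:

```python
# One-bit binary addition from pure Boolean logic (XOR, AND, OR),
# chained bit by bit into multi-bit addition -- the elementary form
# of the arithmetic that switches of all kinds hold in common.

def full_adder(a: int, b: int, carry_in: int):
    """Add three bits; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_binary(x: int, y: int, width: int = 8) -> int:
    """Ripple-carry addition: wire 'width' full adders in a row."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_binary(13, 29))  # 42
```

A microprocessor's arithmetic unit is, at bottom, a vast elaboration of this circuit, realized in transistors rather than Python.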

This digital metabolism is held together by telecommunications, spanning distance, and by memory, spanning time. Annual production of dynamic random-access memory (DRAM) now exceeds 25 billion megabits, and the manufacturing cost of 16-megabit memory circuits dropped below $10.00, or $0.62 per megabit, in 1996.29 More than 100 million hard disk drives—averaging 500 megabytes each—were shipped in 1996. The market for electronic connectors now exceeds 20 billion dollars a year. Long-distance transmission of data has exceeded transmission of voice since 1995, with current telecommunications standards allowing the multiplexing of as many as 64,000 voice-equivalent channels over a single fiber optic pair.

Physicist Donald Keck, who wrote “Eureka!” in his Corning laboratory notebook after testing the first 200 meters of low-loss optical fiber in August 1970, estimated the worldwide installed base of optical fiber at more than 100 million kilometers at the end of 1996.30 Eight million kilometers of telecommunications fiber were deployed in 1996 in the United States alone.31 Much of this is “dark fiber” that awaits the growth of high-speed switching elsewhere in the global telecommunications network before it can be used. “The AT&T network is the world’s largest computer,” according to Alex Mandl of AT&T. “It is the largest distributed intelligence in the world—perhaps the universe,” he claimed in 1995 (assuming that extraterrestrial civilizations have broken up their telecommunications industries into pieces smaller than AT&T).32

The emergence of life and intelligence from less-alive and less-intelligent components has happened at least once. Emergent behavior is that which cannot be predicted through analysis at any level simpler than that of the system as a whole. Explanations of emergence, like simplifications of complexity, are inherently illusory and can only be achieved by sleight of hand. This does not mean that emergence is not real. Emergent behavior, by definition, is what’s left after everything else has been explained.

“Emergence offers a way to believe in physical causality while simultaneously maintaining the impossibility of a reductionist explanation of thought,” wrote W. Daniel Hillis, a computer architect who believes that architecture and programming can only go so far, after which intelligence has to be allowed to evolve on its own. “For those who fear mechanistic explanations of the human mind, our ignorance of how local interactions produce emergent behavior offers a reassuring fog in which to hide the soul.”33 Although individual computers and individual computer programs are developing the elements of artificial intelligence, it is in the larger networks (or the network at large) that we are developing a more likely medium for the emergence of the Leviathan of artificial mind.

Sixty years ago, English logician Alan Turing constructed a theory of computable numbers by means of an imaginary discrete-state automaton, reading and writing distinguishable but otherwise intrinsically meaningless symbols on an unbounded length of tape. In Turing’s universe there are only two objects in existence: Turing machine and tape. Turing’s thought experiment was as close to Leibniz’s dream of an elemental and universal language as mind, mechanism, or mathematics has been able to get so far. With the arrival of World War II, statistical analysis and the decoding of computable functions became a matter of life and death. Theory became hardware overnight. Turing and his wartime colleagues working for Allied intelligence at Bletchley Park found themselves coercing obstinate lengths of punched paper tape, at speeds of up to thirty miles per hour, through an optical mask linked by an array of photoelectric cells to the logical circuitry of a primitive computer named Colossus. Some fifteen hundred vacuum tubes, configured for parallel Boolean arithmetic, cycled through five thousand states per second, seeking to recognize a meaningful pattern in scrambled strings of code. The age of electronic digital computers was launched, secretively, as ten Colossi were brought on line by the time the war came to an end.
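Turing's automaton can be sketched in a few lines. The machine below is an illustration of the idea only, not Turing's own example: a finite table of rules, a movable read/write head, and a tape that grows without bound, here writing an alternating string of 0s and 1s.

```python
# A minimal sketch of Turing's discrete-state automaton: a finite
# control reading and writing intrinsically meaningless symbols on an
# unbounded tape. The states, symbols, and rules are illustrative.

def run_turing_machine(rules, state, tape, steps):
    """Execute `rules` -- a map from (state, symbol) to
    (new_symbol, move, new_state) -- for a fixed number of steps."""
    head = 0
    for _ in range(steps):
        symbol = tape.get(head, ' ')   # unwritten cells read as blank
        if (state, symbol) not in rules:
            break                      # halt: no rule applies
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == 'R' else -1
    return tape

# Two states that print 0 and 1 in turn, moving right forever.
alternator = {
    ('a', ' '): ('0', 'R', 'b'),
    ('b', ' '): ('1', 'R', 'a'),
}

tape = run_turing_machine(alternator, 'a', {}, 6)
print(''.join(tape[i] for i in sorted(tape)))  # -> 010101
```

Everything a modern processor does can, in principle, be reduced to a table of rules like this one; only the number of states and the speed of the tape have changed.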

It has been nothing but Turing machines, in one form or another, ever since. Ours is the age of computable numbers, from the pocket calculator to Mozart on compact disc to the $89.95 operating system containing eleven million lines of code. We inhabit a computational labyrinth infested by billions of Turing machines, each shuffling through millions of internal states per second and set loose, without coordinated instructions, to read and write mutually intelligible strings of symbols on a communally unbounded, self-referential, and infinitely convoluted supply of tape.

Although our attention has been focused on the growth of computer networks as a medium for communication among human beings, beneath the surface lies a far more extensive growth in communication among machines. Everything that human beings are doing to make it easier to operate computer networks is at the same time, but for different reasons, making it easier for computer networks to operate human beings. Symbiosis operates by way of positive rewards. The benefits of telecommunication are so attractive that we are eager to share our world with these machines.

We are, after all, social creatures, formed by our nature into social units, as we ourselves are formed from societies of individual cells. Even H. G. Wells, who warned of a dark future as he approached the close of his life, held out hope for humanity through the globalization of human knowledge, described in his 1938 book World Brain: “In a universal organization and clarification of knowledge and ideas… in the evocation, that is, of what I have here called a World Brain… a World Brain which will replace our multitude of uncoordinated ganglia… in that and in that alone, it is maintained, is there any clear hope of a really Competent Receiver for world affairs…. We do not want dictators, we do not want oligarchic parties or class rule, we want a widespread world intelligence conscious of itself.”34 As we develop digital models of all things great and small, our models are faced with the puzzle of modeling themselves. As far as we know, this is how consciousness evolves.

Wells acknowledged memory not as an accessory to intelligence, but as the substance from which intelligence is formed. “The whole human memory can be, and probably in a short time will be, made accessible to every individual…. This new all-human cerebrum… need not be concentrated in any one single place. It need not be vulnerable as a human head or a human heart is vulnerable. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa, or wherever else seems to afford an insurance against danger and interruption. It can have at once, the concentration of a craniate animal and the diffused vitality of an amoeba.”35 Writing from a perspective about midway, technologically, between the diffuse, largely unmechanized nature of Hobbes’s Leviathan and the diffuse, highly mechanized information-processing structures of today, Wells held out the hope that this collective intelligence might improve on some of the collective stupidity exhibited by human beings so far. Let us hope that Wells was right.

Not everyone agrees that our great network of networks represents an emerging intelligence, or that it would be in our best interest if it did. Our intuitive association of intelligence with computational complexity has no precedent by which to grasp the combinatorial scale of the computer networks developing today. “Since the complexity is an exponential function of this kind of combinatorics, there is really a gigantic gap between computers and flatworms or any other simple kind of organism,” warned Philip Morrison, considering the prospects for artificial intelligence in 1974. “Computer experts have a long, long way to go. If they work hard, their machines might approach the intelligence of a human. But the human species is not one person, it is 10¹⁰ of them, and that is entirely a different thing. When they tell you about 10¹⁰ computers, then you can start to worry.”36

Those ten billion computers are not here yet, but the advance guard is settling in. Most are safely minding their own business, performing innocuous routines with no more intelligence than it takes to recalculate a spreadsheet, schedule a meeting, or adjust the ignition timing as you drive. Some are more visible than others, especially personal computers—microprocessors linked more or less intimately to the memories, intuitions, and decision-making abilities of individual human brains. Suddenly, with the convergence of the computer and telecommunications industries (not to mention the banking industry, which led the way) everything is being connected to everything else.

A circuit-switched communications network, in which real wires are switched to connect a flow of information between A and B, would be swamped by the intractable combinatorics of millions of computers demanding random access to their collective address space at once. All the switches in the world could never keep up. But with packet-switched data communications, collective computation scales gracefully as the number of processors (both electronic and biological) grows. Thanks to “hot-potato” routing algorithms, individual messages—the raw material from which intelligence is formed—are broken into smaller pieces, told where they are going but not how to get there, and reassembled after finding their own way to the destination address. Consensual protocols, running on all the processors in the net, maintain the appearance of robust connections between all the elements at once. The resulting free market for information and computational resources determines which connection pathways will be strengthened and which will languish or die out. By the introduction of packet switching on an epidemic scale, the computational landscape is infiltrated by virtual circuitry, cultivating a haphazard, dendritic architecture reminiscent more of nature’s design than of our own. Rules are simple, results complex. Does this signal the emergence of intelligence or merely the intellect of a bamboo forest growing toward the light?
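The fragmentation and reassembly at the heart of packet switching can be sketched in a few lines. This is a toy illustration only; real protocols add addressing, acknowledgment, and retransmission on top of the same basic idea.

```python
# A toy sketch of packet-switched delivery: a message is cut into
# numbered packets, each carrying its place in the sequence but no
# prescribed route, then reassembled in order however the pieces arrive.

import random

def packetize(message, size):
    """Split `message` into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Restore sequence order, regardless of arrival order."""
    return ''.join(chunk for _, chunk in sorted(packets))

packets = packetize("Darwin among the machines", 4)
random.shuffle(packets)           # each packet finds its own way
print(reassemble(packets))        # -> Darwin among the machines
```

Because order is carried by the packets themselves rather than by the wire, any number of senders can share any number of paths at once—the property that lets collective computation scale where circuit switching cannot.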

Network architecture appears entirely random—as does, by coincidence or by design, the initial wiring of our own brains. Randomness has its reasons, however. “An argument in favor of building a machine with initial randomness is that, if it is large enough, it will contain every network that will ever be required,” advised Irving J. Good, one of the pioneers of the Colossus, in a lecture on parallel processing given at IBM in 1958.37 Whether growing a brain or evolving a telecommunications system, this seems to be good advice.

Computers may never embody mind at the level of human beings, despite a resurgence of such predictions every few years. But it is differences that make symbiotic relationships work. Symbiosis implies cooperation between distinguishable organisms, often a competition between host and parasite from which fruitful coexistence evolves. New and less distinguishable coalitions, such as lichens or eukaryotic cells, may be formed. “Life did not take over the globe by combat, but by networking,” observed Lynn Margulis, describing how life evolved from the exchange of information between primitive chemical microprocessors the first time around.38 Life began at least once and has been exploring its alternatives ever since. The cooperation between human beings and microprocessors is unprecedented, not in kind, but in suddenness and scale.

From simple congregations of simple molecules life moved, against all odds, to complex associations of complex molecules, forming a prolific molecular ecology eventually leading to living cells. Simple organisms were then established by associations of simple cells, followed by increasingly complex and differentiated cells forming increasingly complex and differentiated living forms. The social insects evolved elementary but highly successful collective organisms based on the behavior of individually simple parts, as Hobbes’s Leviathan introduced the idea of an enduring collective organism composed of our own exceedingly complicated selves. And now, in the coalescence of electronics and biology, we are forming a complex collective organism composed of individual intelligences—governed not at the speed of Parliament but at the speed of light.

Is this the end of nature? Not by any means! Just as J. D. Bernal observed that “we are still too close to the birth of the universe to be certain about its death,”39 so we are still too close to the beginning of nature (not to mention the beginning of science) to be certain about its end. As Hobbes’s Leviathan sparked debate over the divine right of kings, so this new Leviathan signals an end to the illusion of technology as human beings exercising control over nature, rather than the other way around. The proliferation of microprocessors and the growth of distributed communications networks hold mysteries as deep as the origins of life, the source of our own intelligence, or the convergence of biology and technology toward a common language based on self-replicating strings of code. How can we imagine what comes next? As Loren Eiseley suggested concerning the possibility of life on other planets, in 1953, “It is as though nature had all possible, all unlikely worlds to make.”40

Among the unlikely worlds that nature has yet to finish is the one that we call home. “And in this hope I return to my interrupted Speculation of Bodies Naturall,” wrote Thomas Hobbes in the final paragraph of Leviathan, “wherein, (if God give me health to finish it,) I hope the Novelty will as much please, as in the Doctrine of this Artificiall Body it useth to offend.”41

Nature, in her boundless affection for complexity, has begun to claim our creations as her own.