See the ASF Future Salon Network
Technology, Business, and Humanism
Exploring Evolutionary and Developmental Futures
Cosmic history as known to date (i.e., the Cosmic Calendar) provides strong evidence that accelerating change is an inevitable constant in our universe. More specifically, the "developmentalist" (self-organization) schools of astrobiology and complexity science propose that a significant number of apparently inevitable, predetermined, or "developmental" emergences must collectively appear in an accelerating universe, emergences that seem essential to complex systems. Commonly proposed examples include products of convergent evolution on this planet (eyes, jointed limbs, wings, nervous systems, binocular vision, etc.) as well as "meta-processes" such as intelligence, immunity, interdependence, and ever better internal modeling (simulation) of external reality by the most highly adapted local organisms. Yet at the same time, evolutionary biology and modern science have demonstrated that developmental emergences occur over time through a large measure of chaos, randomness, and unpredictable evolutionary variation (e.g., the uniqueness of individual paths).
So when we consider the "predetermined" developmental creation of a tree, a brain, a city, or the internet, we must at the same time note that the fine structure of such systems is clearly evolutionary, chaotic, and highly unpredictable (i.e., the specific placement of leaves on a tree, the connectivity of neurons in a fetal brain, the urban architecture of a city, the particular connections within the internet). Yet the general form of these systems is developmentally highly future-constrained, regardless of where it arises (one oak growing from an acorn looks highly similar to another, and the developing internet topology in Spain looks effectively the same as in the U.S.). Arguments such as these have led developmentalist thinkers to suggest that, given the presence of humanoid life forms, the appearance of automobiles, roads, telephones, energy plants, and many other technological emergences are functional inevitabilities. Furthermore, four-wheeled automobiles are clearly more successful than other forms, and in this sense the automobile, like any technological development, has a minimal necessary structure, an archetype. Yet which brand of automobile sells best, and many other specific features, appear to be contingent, evolutionary, and very poorly predictable.
In living systems (and, many now suspect, the universe itself), it appears that most aspects of change are evolutionary (i.e., determined by local, largely unpredictable factors), while a special set are developmental (i.e., statistically predetermined, or inevitable barring developmental failure, due to the implicit emerging structure of the system itself). Even more relevant to our own lives, it is also clear that intelligent systems can strongly influence the outcome of evolutionary processes, and can at least partially inhibit or accelerate, as desired, developmental ones. Thus it is in our choice of preferred evolutionary path, and preferred timing of developmental emergences, that our personal freedom and moral choice essentially lie. The great challenge of complexity studies, and within our own lives, is determining which aspects of universal and local change are evolutionary, and thus potentially changeable by our own intelligence, and which are developmental: deep "tidal waves" of universal change that we may slightly delay or accelerate, but can ultimately never prevent from occurring. (Imagine trying to stop the use of electricity, or mathematics, or the computer.)
Learning such discrimination in our own personal and social lives is the basis of the popular Serenity Prayer ("Universe, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference"). We seek this courage, wisdom, and serenity when we seek to better understand and differentiate preferred (evolutionary) and inevitable (developmental) futures, and when we contemplate how to act in those situations where we appear to have the greatest freedom to cause meaningful change. To the extent we learn to distinguish the evolutionary "noise" from the deep developmental "signal" of the universe, we gain a kind of cosmic wisdom, and can make far better choices in the present moment.
Computational Inevitabilities: The Technological Singularity
This brings us to consider what may be the most profound developmental inevitability that each of us will witness in our own lifetime. It is evident to many social observers that the ongoing, accelerating computer revolution will have an enormous and unprecedented impact on the human environment in coming decades. Computers are the central and enabling force behind all other current revolutions in the human sphere, in communications, in miniaturization, in biotechnology and biomedicine, and in physics, both at quantum and cosmological scales.
Many scientists now believe that successive generations of computational systems are becoming human-independent ("autonomous") at an ever faster pace each year, and must eventually develop their own intelligence, independent of human programming and control. Those who write with the greatest depth about this accelerating computational complexity (and its approximate growth metrics, such as information doubling rates and "Moore's Law") suspect it to be an inexorable universal developmental process. Thus the recent rise of technological complexity on our planet appears causally related to the nature of computation in physical systems, and to the deep computational advantages of evolutionary development within technological vs. biological (and biological vs. simply chemical) systems.
Skeptics note that predictions of imminent machine intelligence aren't new: both Herb Simon and Marvin Minsky claimed, in the 1950s and '60s, that we'd see self-aware computers by the end of the century. So what's different this time? At least four useful new observations have been made in the intervening years:
First, the development of neuroscience has recently allowed us to place a reasonable upper bound on the maximum possible complexity of human brains, their connections, and their innate information-processing capabilities. This "boundary model" is already reasonably good (100 billion neurons, each capable of roughly 1,000 average local connections), and it grows more accurate every year. Neuroscience is also giving us the ability to decode human neural structure down to the finest detail. We are coming to suspect that our greatest challenge in creating machine intelligence may not be to understand how the human brain gives rise to higher thought (individual humans may, in fact, find this too challenging a problem), but instead to learn how to reinstantiate simplified models of human neural algorithms in a self-replicating, ever more evolutionary (capable of randomly self-varying), and thus developmentally self-improving machine substrate. It is also very likely that this substrate must have access to an embodied interface to the world, so advances in robotics will also be central to the story. This neural instantiation has already been done, in a primitive sense, in such areas as artificial neural networks and evolutionary computation (in both software and, now, hardware implementations). Several insightful observers now suggest we will have sufficient biological representation in the machine substrate for the emergence of self-improving systems by the mid-21st century.
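As a back-of-envelope illustration (using only the round figures quoted above, which are the essay's estimates rather than measured values), the "boundary model" implies a total connection count that can be computed directly:

```python
# Illustrative arithmetic for the "boundary model" described above.
# Both figures are the essay's round numbers, not measurements.
NEURONS = 100e9          # ~10^11 neurons in a human brain (essay's figure)
CONNECTIONS_EACH = 1e3   # ~1,000 average local connections per neuron

total_connections = NEURONS * CONNECTIONS_EACH
print(f"Implied total connections: {total_connections:.0e}")  # ~1e14
```

On these assumptions the bound works out to roughly 10^14 connections, which gives a concrete (if crude) target scale for any machine substrate attempting to match it.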
Second, computer complexity has been increasing at a double exponential pace for over a century now, without exception (indeed, for billions of years, if we count molecular, cellular, and organismic "computers"). This means the rate of acceleration is itself gently accelerating, even as modern computer design becomes ever more human-independent, allowing us to foresee a future of emergent Autonomous Intelligence (A.I.). For the first time in human history, we have a rough quantitative sense of when our doubly accelerating computer-complexity curve will intersect the finite, bounded complexity of the human brain. Some project that a "self-evolved," human-equivalent machine intelligence will arrive by 2030; others say by 2070. In either case, the transition now seems close enough to warrant serious consideration of the choices of path we take toward it every day.
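The claimed intersection can be sketched as a toy simulation. This is emphatically not a prediction: every parameter below (starting machine capability, an assumed fixed brain-equivalent capacity, an initial doubling time that itself shrinks each year) is an illustrative assumption chosen only to show the shape of the argument.

```python
# Toy model of a "double exponential": machine capability grows with a
# doubling time that itself shrinks every year, and we look for the year
# it crosses an assumed, fixed human-brain capacity.
# ALL parameters are illustrative assumptions, not data.

human_capacity = 1e16   # hypothetical brain-equivalent ops/sec
machine = 1e9           # hypothetical machine ops/sec in the start year
doubling_time = 2.0     # years per doubling at the start (assumed)
shrink = 0.98           # doubling time shrinks ~2% per year (assumed)

year = 2000
while machine < human_capacity:
    machine *= 2 ** (1 / doubling_time)  # one year of growth
    doubling_time *= shrink              # the growth rate itself accelerates
    year += 1

print("Toy crossover year:", year)
```

Under these particular assumptions the crossover lands in the mid-2030s; nudging the assumed parameters shifts it by decades, which is consistent with the essay's own spread of projections (2030 to 2070).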
Third, since the late 1960s, the fields of scientometrics and informetrics have been measuring the growth rates of various informational parameters in technological society. It has become clear that the "doubling rates" of scientific and general information are, like computer complexity, progressively shortening each year (down from 15 years in the 1960s to 5-7 years or less by 2001, for several parameters). Some observers have projected that machine complexity will continue to drive these rates ever faster, until the rate of change appears effectively instantaneous to a biological human observer. Interestingly, this projected date also falls circa 2020 to 2060, providing yet more independent (though still circumstantial) evidence that a transition of deep significance to biological organisms is rapidly headed our way.
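The quoted endpoints imply an average annual shrink factor for the doubling time, which can be computed directly. The dates and values below are the essay's own round numbers (taking the midpoint of "5-7 years" for 2001); the computation merely makes the implied rate explicit.

```python
# Average annual shrinkage of the information doubling time, computed
# from the essay's own endpoints: ~15 years circa 1965, ~6 years by 2001.
d_1965 = 15.0            # doubling time in years, mid-1960s (essay's figure)
d_2001 = 6.0             # doubling time in years, 2001 (midpoint of "5-7")
span = 2001 - 1965       # 36 calendar years between the two estimates

annual_factor = (d_2001 / d_1965) ** (1 / span)
print(f"Doubling time shrinks ~{(1 - annual_factor) * 100:.1f}% per year")
```

On these figures, the doubling time shrinks by roughly 2.5% per year; whether that pace continues, slows, or accelerates is exactly the open question the projections above turn on.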
Fourth, it is becoming apparent that technological systems enjoy a multi-millionfold increase in their speeds of replication, variation, operation (interaction/selection), and evolutionary development by comparison with their biological progenitors. Many of these speedup factors range between 1 and 30 million for higher-order processes, with a proposed "average" of 10 million (compare electrochemical (200 mph) vs. electronic (speed of light) communication speeds). If this is true, and if today's most complex computers are roughly as intelligent as insects, which emerged in mature forms about 400 million years ago, then the evolutionary-developmental computational paradigms of our applications, agents, and robots may, in coming decades, replay the entire metazoan evolutionary learning curve, taking our digital systems from insect-level to human-level intelligence within approximately 40 years. Most importantly, if these increasingly intelligent computational systems remain ever more balanced and integrated with human society, providing increasingly useful solutions to human problems, humankind will allow this evolutionary learning process to continue unabated. As environmental inputs drive evolutionary learning, human guidance will be only minimally required in this process, perhaps primarily to ensure developmental safety.
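The "approximately 40 years" figure is straightforward arithmetic on the essay's two stated assumptions, a 400-million-year biological span replayed at a 10-million-fold speedup:

```python
# Arithmetic behind the "40 years" claim above, using only the essay's
# two assumed inputs (both contested estimates, not established facts).
biological_years = 400e6   # insect-to-human evolutionary span (essay's figure)
speedup = 10e6             # proposed "average" technological speedup factor

replay_years = biological_years / speedup
print("Implied replay time:", replay_years, "years")  # 40.0
```

Note how sensitive the conclusion is to the speedup estimate: at the low end of the quoted 1-30 million range, the same arithmetic yields 400 years rather than 40.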
In the decades to come, observers widely agree that the rate of technological (but not human cultural!) change will become ever more unbelievably fast. Just ask anyone involved in business forecasting: the "Prediction Horizon" grows steadily shorter, and is now down to two years or less for many technological businesses, when ten-year business plans were quite reasonable as recently as a generation ago. This "Great Speedup" in our technological systems has been called, by those who currently investigate and extrapolate its implications, the "Technological Singularity," and it is a concept well worth attempting to understand. In this unique era, where our ever-accelerating technological computational development is projected to soon outpace all human "computational" abilities, futurists such as Ray Kurzweil, Hans Moravec, and Marvin Minsky have proposed that humans must gently begin to become "transhuman" in coming decades: learning ways to incrementally increase our own mental complexity, and to merge with our technologies in ever closer and more useful ways.
If all of this is true, even partially so, how will we create this merger? How will we enter this Symbiosis Age (an apparent successor to the Information Age) in a balanced and sustainable fashion? How will we protect both our humanity and our deep respect for the biological world? How do we ensure the stability, diversity, and spirit of world culture as it undergoes this apparently inevitable accelerating transformation? Will artificial intelligence safety become an issue in the development process? Or will we find, as many now suspect, that technological systems are not in competition with biological ones, but will rapidly occupy entirely different niches? As our technologies begin to wake up, how do we use them today to improve the immune systems of human societies, in a world with access to ever more effective weapons of mass destruction? How can we continue to increase our scientific, personal, and financial foresight in a world of accelerating change? Which sciences, technologies, businesses, and humanist practices will aid and inform us during this accelerating transition?
Come Join Us Each Month
Our fun, friendly, and future-aware reading group meets every month on the UCLA campus to discuss these and other future-relevant topics. We investigate four broad themes: Science (the most important universal constraint on human futures), Technology (the most powerful daily force in human society), Business (the most pervasive vehicle and social driver of applied technology), and Humanism (how to maintain and improve planetary ethics, social justice, and human self-actualization in an environment of accelerating change).
Join us for inspirational revelations, occasional diatribe, and perhaps a latte or two. Come listen and react to interesting book reviews if you don't have time to read (like most of us, these speedy days). Just show up!
We are all time-challenged, so participation is easy and simple. Simply try to bring an interesting web printout, book, video, investment tip, other educational tidbit, or friend to briefly introduce and share with others.
Book Reviews: Sample Topics
We occasionally review books at our monthly salons, and below are examples of some topics we have addressed in the past.
Topic 1 - Science and Systems behind the Great Speedup
Topic 2 - Tools and Techniques for Modern Living
Topic 3 - Financial and
Topic 4 - Profiles of Humanist Futures
The only prereqs: An open mind, willingness to make friends, and interest in personal, organizational, national, or global foresight.