Background Readings on the Developmental Singularity Hypothesis (DSH)
A Speculative Evolutionary Developmental Model for Our Universe's History of Hierarchical Emergence Under Conditions of Continuously Accelerating Change
Developmental Singularity Hypothesis Resources Overview

This page contains resources helpful to the study of the Developmental Singularity Hypothesis, one possible "big picture" of locally accelerating change. For a more recent set of readings relevant to the DSH, please see the following article. The seven topics in accelerating change below seem particularly relevant to interpreting the developmental singularity hypothesis, and may be explored more thoroughly here. Selected insightful books (enjoy the Amazon reviews), articles (in quotes), and web resources are listed in chronological or alphabetical order.
1. Pervasive Trends in Accelerating Change
Should we expect a local technological singularity (self-aware A.I.) circa 2060? Can this apparently inevitable event be best understood as the latest phase in a universal trend of accelerating change through a succession of emergent computational substrates?

The Spike, Damien Broderick, 2001
Sites:
2. Technological-Evolutionary and Human-Evolutionary Paths to AI
Has the field of evolutionary computation already demonstrated the capability to increase adaptive hardware and software complexity independently of human aid? Are all evolutionary developmental goal-control systems (whether in molecular, genetic, neural, memetic, or technological substrates) self-organizing, context dependent, and only partially amenable to conscious rational human analysis? Are human logic and rational A.I. strategies themselves therefore also evolutionary (e.g., emergent substrates for universal evolutionary development)? Is every A.I. approach simply a different type of evolutionary and developmental search in phase space for ever more effective algorithms (a minimal sketch of such a search appears below)? In other words, is our own semi-rational, serendipitous search for better A.I. designs best seen as a set of tools used by the universe, through the human substrate, to semi-randomly explore the human-evolutionary computational phase space? If true, what does this imply about the nature and trajectory of emergent A.I.?

Flesh and Machines: How Robots Will Change Us, Rodney Brooks, 2002
Sites: About/AI, AAAI, ACM,
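As a loose illustration of the "evolutionary search in phase space" framing above, here is a minimal sketch in Python. Everything in it is hypothetical and chosen for brevity (the bit-string "designs", the toy OneMax fitness function, and all parameter values); it is not drawn from any of the resources listed. It simply shows candidate designs improving through blind variation and selection, with no explicit human specification of the solution.

```python
import random

# Minimal evolutionary-search sketch (illustrative only; the problem, names,
# and parameters are hypothetical). Candidate "designs" are bit strings, and
# fitness is simply the count of 1s (the classic OneMax toy problem),
# standing in for "ever more effective algorithms".

GENOME_LEN = 40
POP_SIZE = 30
GENERATIONS = 200
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def tournament(population, k=3):
    # Select the fittest of k randomly chosen individuals.
    return max(random.sample(population, k), key=fitness)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    if fitness(best) == GENOME_LEN:
        print(f"optimum reached at generation {generation}")
        break

print("best fitness:", fitness(max(population, key=fitness)))
```

Even this trivial loop finds the optimum without ever being told what the answer looks like, which is the sense in which such search, scaled to real design spaces, is "only partially amenable to conscious rational human analysis."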
3. Meso / Nano / Femtotech: Accelerating and Asymptotic Trends in Computation and Physics (Including STEM Compression)
Does the universe facilitate both ever faster and more spatially compressed computational substrates? Do computationally denser substrates always figure out clever ways to use less space, time, energy, and matter (STEM compression) to encode their learned environmental information, and thus continually avoid limits to hyperexponential growth? Do several of the special laws of the universe (such as c, the information speed limit) require STEM compression as the only viable pathway to continually accelerating local complexity? Is the apparent tuning of the newly discovered dark energy (cosmological constant) evidence that the universe is now entering a seed recreation/developmental singularity production stage (i.e., a reproductive maturity), to be followed by a universal decomposition stage, involving an accelerating decrease in computational and physical density, while all the remaining computationally complex systems transcend via a developmental singularity (ubiquitous black hole involution) into the multiverse?
What is most interesting in Chaisson's analysis is that our technologies, when expressed in his index, have complexities exceeding those of biological and cultural substrates. Modern engines range from 10^5 to 10^8. Most tellingly, modern computer chips exceed all these measures by orders of magnitude, due to their extreme miniaturization (STEM compression); even the Intel 8080 of the 1970s comes in at 10^10. That makes both of these very local, very special computational domains already much more impressively "complex" (or, in alternative language, more dynamically "self-organizing" per unit time), if not yet more sentient (or more structurally complex, which is only distantly related to dynamic complexity), than the individual and social organisms they are coevolving with. If you are searching for a universal perspective, and a coarse quantitative proof, that silicon systems (more generally, the "electronic systems" substrate) are the current leading contender for the next autonomous substrate, Chaisson's analyses are well worth investigating.

Bush Robots, Hans Moravec, 1999. A concise introduction to the idea of miniaturization as a recursive process.
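The index behind the figures above is Chaisson's free energy rate density: the power flowing through a system divided by its mass, in erg per second per gram. The back-of-envelope sketch below (in Python) shows how that arithmetic works; the power and mass inputs are illustrative order-of-magnitude guesses of ours, not Chaisson's published values, and are meant only to show why extreme miniaturization pushes the index up.

```python
import math

# Back-of-envelope sketch of a free energy rate density calculation
# (erg per second per gram): power through a system divided by its mass.
# Input values are illustrative order-of-magnitude guesses, not Chaisson's
# published figures.

ERG_PER_SEC_PER_WATT = 1e7  # 1 watt = 1e7 erg/s

def free_energy_rate_density(power_watts, mass_grams):
    """Return the free energy rate density in erg s^-1 g^-1."""
    return power_watts * ERG_PER_SEC_PER_WATT / mass_grams

examples = {
    # name: (power in watts, mass in grams) -- hypothetical inputs
    "automobile engine":        (1.0e5, 3.0e5),   # ~100 kW through ~300 kg
    "human brain":              (20.0,  1.4e3),   # ~20 W through ~1.4 kg
    "1970s microprocessor die": (1.0,   1.0e-3),  # ~1 W through ~1 mg of silicon
}

for name, (power, mass) in examples.items():
    phi = free_energy_rate_density(power, mass)
    print(f"{name:26s} ~10^{math.log10(phi):.1f} erg/s/g")
```

Holding power roughly constant while shrinking mass by several orders of magnitude is exactly the STEM-compression effect the passage describes.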
4. Black Holes and Smolin's Cyclic Recursion
Is intelligent life in the process of creating a local black hole, which will "bounce" to create a new universe? Does universal life cycle through the multiverse from (big bang) singularity to (black hole) singularity, in the same manner that a seed creates an organism which in turn creates a new seed? Of the trillions of black holes in our universe (which may each go on to create new universes), is there a continuum of complexity in their offspring, i.e., an ecology of replicating primordial, quasar, galactic, stellar and "intelligent" black holes, each going on to create universes which engender various fixed degrees of developmental complexity, most of which represent the "stable base" of amoeba-like universes, but which also include a smaller population of intelligence-engendering universal systems (of which ours is arguably a case) at the top of the pyramid? Are such models only comfortable infopomorphisms, or are they eventually provable by simulation, and can such "cosmological selection", when generally applied, explain the widely observed evidence for anthropic design in our universe?
5. Simulation and Computational Closure: Are We Headed for Inner or Outer Space?
Have we discovered most of the simplest laws of the universe in our mental recreation of its structure? Is the universe itself a simulation of sorts, if we can model it so effectively with our own simple simulations? From our position within the universe, are we close to a gross understanding of the beginning, end, and recurrence of the universe's developmental cycle, in a manner that cannot exceed inherent universal constraints? Are we close to extending the standard model of physics all the way to the Planck scale, and developing a fundamental "theory of everything", and would this define a natural lower limit in spacetime (universal "computational closure") to the intrinsic complexity of the universe as a self-organized computational substrate? Are we simultaneously close to discovering a universal replicating cycle via black hole transcension that might define a natural upper limit in spacetime to computational cycles within this particular universe? Will our exponentiating simulation capacity allow us to rapidly discover remaining hidden universal structure, and inform us in the production of a more computationally complex universe in a subsequent cycle? Will we gain adequate computational closure on this developmental cycle simply by looking at and simulating outer space, rather than by physically traveling there? Is the direction of change (time, the arrow of complexity) leading us irreversibly to inner space (black holes, new universes) to create our future, and is outer space therefore essentially an informational record of our past, less complex universal history (a computational rather than physical frontier)? Does this closure and journey to inner space suggest that we are now in the end stages (i.e., a type of universal maturity/"ovulation" stage) of locally recreating a new universe seed?

Where is Everybody? Fifty Solutions to Fermi's Paradox, Stephen Webb, 2002
Sites:
6. Emergent AI: Stable, Moral, and Interdependent vs. Unpredictable, Post-Moral, or Isolationist?
Are complex systems naturally convergent, self-stabilizing, and symbiotic as a function of their computational depth? Is the self-organizing emergence of "friendliness" or "robustness to catastrophe" as inevitable as "intelligence," when considered on a universal scale? Are deception and violence useful strategies only for systems (like biological humans) with very computationally limited (e.g., largely non-plastic) learning capacity and social information flow? Are such strategies relentlessly eliminated as computational capacity, flexibility, and interconnectedness (global brain, swarm computation) increase, as some have argued? If so, can we better characterize and reinforce this intrinsic trajectory as we create our pre-emergent A.I.? Are all catastrophes in complex systems, independent of substrate, primarily catalysts for both increased balance and complex immunity in the surviving substrate? Are there any examples of catastrophes, from any timescale or substrate, which have eliminated more than a small fraction (usually less than 5%) of the extant systems of similar complexity in the local environment? (So far I can think of none, after long deliberation on this issue.) Will our emerging technological substrate (the internet and its computational intelligence) become ever more seamlessly integrated and symbiotic with human minds, even long prior to any potential "uploading"? In other words, as our interfaces increase in sophistication and utility, will we "upload by degrees" into the coming electronic systems substrate? What insights can such tools as evolutionary game theory, the evolutionary psychology of metazoan and primate morality, and a universal, substrate-centric perspective provide about the preconditions, friendliness, and implicit safety and security of our currently developing computational technology? (A toy game-theoretic sketch appears at the end of this section.)

Friendly AI, Eliezer Yudkowsky, 2001

It is deceptively easy to assume that because humans are catalysts in the production of technology to increase our local understanding of the universe, we ultimately "control" that technology, and that it develops at a rate and in a manner dependent on our conscious understanding of it. Such may approximate the actual case in the initial stages, but all complex adaptive systems rapidly develop local centers of control, and technology is proving to be millions of times better at such "environmental learning" than the biology it is co-evolving with. It can be demonstrated that all evolutionary developmental substrates take care of these issues on their own, from within. Technological evolutionary development is rapidly engaged in the process of encoding, learning, and self-organizing environmental simulations in its own contingent fashion, and with a degree of STEM compression at least ten million times faster than human memetic evolutionary development. Thus humans are both partially cognizant spectators and willing catalysts in this process. This appears to be the hidden story of emergent A.I.

Ethics for Machines, Josh Hall, 2000
Critique (Peter Voss)
Sites: AAAI/Ethics of AI,
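As a toy illustration of the first tool mentioned above (evolutionary game theory), the sketch below runs simple replicator dynamics over an iterated prisoner's dilemma with three stock strategies. The payoff matrix, strategies, and parameters are standard textbook choices, not taken from any of the listed resources. In this particular toy setting, unconditional defection briefly gains ground by exploiting naive cooperators, but is then driven toward extinction as the reciprocating strategy comes to dominate the mix, which gestures at (but certainly does not prove) the "convergent friendliness" question.

```python
# Toy evolutionary-game-theory sketch: replicator dynamics over an iterated
# prisoner's dilemma. Strategies, payoffs, and parameters are standard
# textbook choices, used purely for illustration. 'C' = cooperate, 'D' = defect.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_moves, their_moves):
    return "D"

def always_cooperate(my_moves, their_moves):
    return "C"

def tit_for_tat(my_moves, their_moves):
    return their_moves[-1] if their_moves else "C"

STRATEGIES = {"always_defect": always_defect,
              "always_cooperate": always_cooperate,
              "tit_for_tat": tit_for_tat}

def average_payoff(strat_a, strat_b, rounds=150):
    """Average per-round payoff to strat_a when playing strat_b."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        total += PAYOFF[(move_a, move_b)][0]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total / rounds

names = list(STRATEGIES)
payoff = {(a, b): average_payoff(STRATEGIES[a], STRATEGIES[b])
          for a in names for b in names}

# Replicator dynamics: a strategy's population share grows in proportion
# to how well it scores against the current population mix.
shares = {n: 1.0 / len(names) for n in names}
for _ in range(80):
    fit = {n: sum(payoff[(n, m)] * shares[m] for m in names) for n in names}
    mean_fit = sum(fit[n] * shares[n] for n in names)
    shares = {n: shares[n] * fit[n] / mean_fit for n in names}

for n in sorted(shares, key=shares.get, reverse=True):
    print(f"{n:17s} share after 80 generations: {shares[n]:.3f}")
```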
7. Responsible Advocacy and Dangers of the Transition
What are our greatest levers for increasing the technological effectiveness/computational complexity of our existing economic, social, and political systems and institutions? What classes of catastrophes can occur in the transition to a technological singularity? How can we use our best models to minimize their frequency and severity? Do catastrophes naturally limit their scope and severity as a function of substrate complexity? Is a moderate and omnipresent level of catastrophe a necessary catalyst for accelerating change? If so, how do we, as purposeful agents for catastrophe reduction (creating self-organizing immune systems on a cultural level), find the balance between inadequate selection pressure and destructive stresses?

State of the World 2004, Worldwatch staff, 2004
Existential Risks: Human Extinction Scenarios, Nick Bostrom, 2001
Sites: (see Acceleration-Relevant Conferences page).
Omissions? Oversights? Please let us know. I hope these resources broaden and sharpen your perspective on the fascinating topic of universal accelerating change. As time allows, we will add more mini-commentaries under selected entries to highlight some of their specific contributions to the issues surrounding the developmental singularity hypothesis. A more extensive bibliography will also be forthcoming.