
Other Notable Computational Acceleration (and Selected Artificial Intelligence) Critiques

Outline

Critiques of Credibility

Critiques of Prediction

Critiques of Applicability (Limits of Rational Predictive Knowledge)

 

Critiques of the Singularity (Continuous Accelerating Technological Change)

Critiques of the 2040-2080 Prediction Interval for the Technological Singularity

 

Critiques of Robust Artificial Intelligence

What Objections Have We Missed?

 

For a discussion of the four most frequently advanced critiques against the technological singularity hypothesis, see No Apparent Limits: Addressing Common Arguments Against Continuous Computational Acceleration. Other common acceleration-related critiques are considered below.

Critiques of Credibility

How do we know that our insights into the nature of accelerating change are not simply opinion? How do we gauge their credibility? To address such questions, there are a number of tests our work must pass. At a minimum, we must engage in widely multidisciplinary, peer-reviewed discourse. We must develop both quantitative and qualitative models. We must explore both the near-term and (where possible) longer-term predictions of these models. We must demonstrate where these predictions have been met by the data, both historically (in "backtesting") and as far as we can project into our extraordinary future. And finally, we must understand where and why our predictions fail, and thereby discover the boundaries and limitations of our foresight.

The philosopher of science Karl Popper discusses the importance of "falsifiability" in guiding rational inquiry. We at ASF are committed to working toward a more systematic and falsifiable methodology for the emerging fields of acceleration studies (general technological inquiries) and singularity studies (mathematical and complex systems inquiries) in coming years.

As futurists, technology forecasters, and science and technology scholars work toward broader credibility in the general and technical communities in coming years, the predictions they make will be central to that process. As our community makes increasingly specific and testable predictions about the nature and growth of various technological capacities and autonomies, and about the nature and timing of developmental emergences (such as the CUI), and as these predictions become increasingly implicit in the models being used, a new level of professionalism will emerge in our field.

One can find a number of technology extrapolation models making specific predictions today, and the models are of varying quality. The Moore's-law-driven International Technology Roadmap for Semiconductors (ITRS) is one of the more commendable examples. In meta-analysis, some of these models are much better than others at exposing the fundamental, continuous accelerations constantly occurring in our local environment, and the new opportunities these accelerations periodically create for human society. But which are better, and why? Only careful critique, and validity testing through prediction, can bring the most credible models, and the most useful assumptions and methodologies, to the surface.

We must also work to differentiate our early and still minimally reviewed work from the large body of pseudoscience. As Bob Park observes in "Seven Warning Signs of Bogus/Voodoo Science," there are a number of hallmarks of inauthentic work, including several not mentioned in that article. Those who would expand the field, rather than expose it to unnecessary social backlash, would do well to keep these in mind. In any intrinsically abstract and complicated field, critiques of the method are far easier to make than critiques of the model.

We must continually ask: does our work pass the "smell" test? In some cases, we aren't going to be able to avoid being branded as speculators. In each case we must ask our critics: where, specifically, do we fall short? And in this process, please let us know how you can help.

Critiques of Prediction

Are We All Just Bozos on the Bus? John Barlow and Robert Lucky note that in a nonlinear, chaotic world, a large number of the short-term emergences we observe will be simply (pseudo)random noise: unpredictable and mystifying.

But does this mean that nothing is predetermined? Hardly. Every interesting feature of our world is associated with probabilities, with what we may call "statistical determinism." James Clerk Maxwell, one of the greatest physicists in human history, said "the true logic of this world is in the calculus of probabilities." The probability theorists B.V. Gnedenko and A.N. Kolmogorov, in Limit Distributions for Sums of Independent Random Variables, 1954, noting the emergent order and constraint implicit in randomness, are more specific: "All epistemologic value of the theory of probability is based on this: that large-scale random phenomena in their collective action create strict, non-random regularity." Self-organization theory in complex systems research adds another level of insight, exploring many examples of the transition between random (read: evolutionary) and deterministic (read: developmental) physical regimes.
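Gnedenko and Kolmogorov's point is easy to demonstrate. Here is a minimal sketch of our own (purely illustrative): each coin flip is individually unpredictable, yet the large-scale aggregate converges on a strict, non-random regularity.

```python
import random

# Illustrative sketch of "statistical determinism": individual flips are
# random, but their collective action converges on a fixed fraction (0.5).
random.seed(0)
for n in (10, 1_000, 100_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>9,}  fraction heads = {heads / n:.4f}")
```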

Finally, I suggest that evolutionary developmentalists can restate these insights most simply, powerfully, and specifically of all. All developmental systems, and our universe itself appears to be one, transcend chaos and permit prediction in at least two fundamental ways:

1) They are Cyclical, so they demonstrate patterns with predictable beginnings, middles, and endings, and

2) They demonstrate Emergent Order at all scales. Chaos exists, but it is always constrained locally to the task of producing predetermined emergent order at a higher level.

Let's look at an example. The developing brains of two identical twins use chaos to wire up the specific connections of their nervous systems. But these connections, though they differ randomly from twin to twin, still specify emergent structures with deep predetermined order, as the two twins exhibit strong psychological similarity once each brain's developmental plan has unfolded to a mature convergence point. This is a very impressive trick: constraining local chaos to produce a global emergent order. This way, the DNA of the organism doesn't have to specify where every dendrite and axon goes. Imagine the complexity! No DNA string could do it at any reasonable length: it's a combinatorial explosion. So instead, the evolutionary developmental system figures out, via many iterations of development (and via many cycles of universes, in Smolin's CNS model), just how to use and then extinguish local randomness to create emergent order at multiple substrate scales, with a minimum of encoded structure. The system constrains chaos, and selects for developmental order, in a special subset of complexity emergences. There's plenty of local nonlinear unpredictability, but it always reliably smooths out to allow an emergent, statistically deterministic pattern: the developmental scaffolding on which evolution proceeds.
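As a toy model of this trick (a hedged sketch of ours, with no claim of biological fidelity), consider two "twin" networks wired by different random choices under the same compactly encoded developmental rule. The specific connections differ between the twins, but the global, emergent statistics converge:

```python
import random

# Two "brains" wire themselves with different random choices (different
# seeds), but under the same simple rule: connect mostly to nearby neurons.
def develop_brain(seed, n_neurons=2000, attempts=20, locality=50):
    """The rule needs only three parameters (our stand-in for DNA);
    the specific wiring is left to local chance."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n_neurons):
        for _ in range(attempts):
            j = (i + rng.randint(-locality, locality)) % n_neurons
            if j != i:
                edges.add((min(i, j), max(i, j)))
    return edges

twin_a, twin_b = develop_brain(1), develop_brain(2)
overlap = len(twin_a & twin_b) / len(twin_a)
print(f"global: {len(twin_a)} vs {len(twin_b)} edges (nearly identical)")
print(f"local:  only {overlap:.0%} of specific edges are shared")
```

The edge counts (a global, emergent property) come out nearly identical run after run, while the particular connections (the local chaos) overlap only partially, just as in the twins example.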

Can we figure out how the universe does this? Why not? The supposition that the system is cyclical, and thus heading for some inevitable convergence, and the intuitive likelihood that all complex adaptive evolutionary developmental systems constrain chaos to produce emergent order in similar ways, are two strong possibilities operating in our favor. The better we understand all complex systems, the closer we will likely be to understanding the framework of universal computation. This is the great (and at least partially fulfilled) promise of the sciences of complexity, self-organization, and nonlinear dynamics. A good deal of the particulars may presently be too complex for us to figure out, but there's no reason to believe our electronic progeny won't see even deeper down this road than we, who have already seen so far.

There are several stable forms of our fundamental universal laws, symmetry breaks, and emergent structures that appear to be convergent: the forces of nature, the periodic table, organic chemistry, fats, proteins, and nucleic acids (all these precursors are found in comet chemistry), cells, neurons, eyes, jointed limbs, wings, binocular vision, language grammars, social structures, mathematics, science, various technological archetypes (tools, engines, automobiles, roads, electrical devices), etc. Each of these constructs contains an apparently developmentally predictable minimum structure, even though it follows a predominantly evolutionary path to its creation, and is born in a chaotic and primarily nonlinear world. Furthermore, our assumptions about which special subset of developmental structures is inevitable, given our increasingly known initial conditions, are increasingly testable through our unfolding sciences of simulation. In this manner it has become clear to many astrobiologists and other scientists in recent decades that the large-scale structure of the universe is mostly predictable: cosmic evolution is a story of evolutionary development. The development of intelligence seems to be a concentration of the informational aspects of this unfolding structure, and so future events such as autonomous A.I., ubiquitous computation, and the technological singularity are just three more entries on a long list of emergences that appear to be as inevitable as those we have already seen. I would add to this list of predictions the developmental singularity itself, a black-hole-analogous destiny that we seem to be racing toward more swiftly every year. I discuss such concepts further in the book summary Exploring the Technological Singularity, 2002, and in my forthcoming book.

Certain aspects of the future seem to stick out like mountain ranges above a fog, like North Stars in a black night. Time will tell, soon enough, whether this perspective is right or wrong.

Critiques of Applicability (Limits of Rational Predictive Knowledge)

Even when predictive models and knowledge are shown to have value in a certain domain, how do we know the limits of our predictive knowledge? Can we predict major dynamical changes at universal levels in complex systems (such as the developmental singularity) from our limited local physical knowledge, using today's weakly digital brains?

As J. Andrew Rogers points out, one can mathematically prove the predictive limits of finite models of any algorithmically finite system (for the technically inclined, such systems even include a subset of non-finite state machines in addition to all finite state machines). This is a valuable theorem in algorithmic information theory. He states:

In any finite subcontext, rationality does not imply correctness, and correctness does not imply rationality. But it is theoretically possible to compute the maximum probability that a rational model is also a correct model. For some arbitrary brain or machine, the actual probability will be of the form: 0 < x < predictive limit < 1 where "x" is the actual probability that some rational model is correct in some context, and the predictive limit is the maximum theoretical probability that a model might be correct in that context.

Rogers notes that not only are many of us unaware that rationality does not imply correctness, but the problem is even worse than this in a theoretical sense, as there are always predictive limits (often substantial ones) on our rational models with respect to correctness. He concludes that there are many things in the universe that can be rationally modeled only to such low predictive limits in the human brain that one would have to be skeptical of any claim as to the correctness of those models.
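Restated in conventional notation (a formalization of ours; Rogers gives only the prose version quoted above), the claim is:

```latex
% x:    the actual probability that a given rational model is correct
%       in some finite context C
% L(C): the theoretically computable maximum probability that any
%       rational model of C is also correct (the "predictive limit")
\[
  0 \;<\; x \;<\; L(C) \;<\; 1 .
\]
% Rationality thus bounds, but never guarantees, correctness: even the
% best rational model of C is correct with probability strictly below 1.
```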

So where does this leave us? Firmly in the realm of the surprising. It is, as Eugene Wigner noted, simply unreasonable that our human-accessible mathematics and our limited mental models, with their poor predictive limits, should already have had such tremendous success in understanding the regularities and subtleties of the natural sciences. Furthermore, when we add qualitative, analogical reasoning to our quantitative, analytical models, and a vast sea of intuitive, unconscious models to our sparse islands of rational, conscious ones, we almost always discover that an entirely new level of "unreasonable effectiveness" emerges. Just ask anyone who can give a brief verbal summary of your mood after a five-second encounter with you in physical space, without being able to say how they know it, and often without consciously realizing just how predictive, and how correct, such a model can be in a variety of situations.

Curiously, this effectiveness does not allow us more than near-term success in predicting the myriad evolutionary possibilities of our world. Instead it preferentially reveals the simpler developmental dynamics, the constrained framework of a universe apparently based on evolutionary developmental physics. The universe, as I and others have said several times before, seems to be a rigged game, with a set of statistically predetermined developmental outcomes and a host of unpredictable evolutionary paths to be taken in the process.

Humans have evolved and maintain a strong set of intuitive, semirational thought processes to gain insight into this developmental rigging. We have apparently done so not because these semirational processes are the only path toward a coming, entirely rational thought architecture (an idea that seems unsound in a Gödelian-incomplete universe), but because these processes remain unreasonably effective at developing probabilistic insight from incomplete information and limited processing time, in simple brains acting in a host of practical situations.

We shall have to leave our inquiry at this for now. But don't yield to those who say that because our weakly rational minds may be only weakly correct when modeling more complex finite systems, our entire mind, with its tremendously more sophisticated, successively approximating intuitive and analogical approach, must be weakly correct in general. We certainly seem to have little insight into the evolutionary aspects of our world, and we overpredict in that domain to often laughable excess. But at the same time, the exponential success of scientific discovery argues that the exact opposite is the case with regard to our understanding of developmental regularities, constraints, and dynamics, both local and universal.

Critiques of the Singularity (Continuous Accelerating Technological Change)

For discussion of the popular miniaturization-, resource-, demand-, and design-limit critiques of the hypothesis of continuous accelerating technological change, see our introductory piece, "What is the Singularity?"

Mathematician and science fiction author Vernor Vinge has stated ("The Coming Technological Singularity," 1993) that he would be surprised to see a technological singularity occur "before 2005 or after 2030." Vinge's 2003 update of this essay (Whole Earth Review, Spring 2003) reiterates this time period as reasonable, though he leaves open the possibility that human inability to help machines discover biologically inspired, "bottom-up" computer designs might lead to a significant delay, or even to no technological singularity at all, due to what he calls the "large project software problem."

Some commentary is in order here on the possibility that software problems could forestall a technological singularity entirely. Vinge emphasizes the software problem, but software as we commonly refer to it (i.e., anything understandable to humans as software) is really an abstraction layer, in the same manner that human culture and language are abstraction layers built on the capabilities of genetically developed organisms. To see the true arc of complexity development, I suggest we need to consider how the hardware-software system changes over time, and the environmental learning problems it solves internally over time, from its own perspective, not as measured from our perspective as external observers.

In other words, in the statement above Vinge appears to be counting apples (software effectiveness in solving human or machine problems, as perceived from our increasingly limited human perspective) when it is oranges (hardware effectiveness in solving evolutionary developmental problems, as measured by technology's ability to explore connectionist phase space in an increasingly autonomous manner) that we should be measuring. The latter topic, hardware effectiveness from the hardware's perspective as a complex adaptive learning system, involves a much more restricted set of issues.

As with any other hyperexponential evolutionary developmental process, almost all the learning occurring in the newly emergent substrate will be "under the hood," hidden from our view. To what extent is the accelerating computational capacity of social insects (ants, bees, termites) understandable to solitary insects as the colonies are forming? To what extent will the full computational capacities of the emerging global internet be measurable by human scientists? In most cases in our accelerating universe, complex systems at every scale inhabit an apparent equilibrium phase of slow, upward, linear growth until convergence, phase change, and punctuation occur. But from a big-picture view, we see that continually accelerating learning must be occurring to keep hierarchical emergence on the "Cosmic Calendar" trajectory that has held for at least the last six billion years.

It is quite possible that our machines will give us only slow linear improvements in their ability to solve difficult human problems, at the same time that they continue to engage in hyperexponential growth in their ability to solve evolutionary developmental learning problems at the hardware level, as seen from technology's perspective. We might see, for example, another twenty years of "stagnation" (slow linear improvement) in our large software project efficiency, while witnessing an apparently unimportant hyperexponentiation in hardware connectivity and reconfigurability during this "stagnation period."
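A toy numerical contrast makes the point. All growth rates below are hypothetical placeholders of ours, chosen only to illustrate the shape of the argument:

```python
# Hypothetical 20-year contrast: human-visible software efficiency
# improving slowly and linearly, versus hardware capacity growing
# hyperexponentially (its doubling time shrinks a few percent per year).
years = 20
software = 1.0 + 0.03 * years          # 3%/yr linear gain (assumed)
hardware, doubling_time = 1.0, 2.0     # 2-yr doublings in year 0 (assumed)
for _ in range(years):
    hardware *= 2 ** (1 / doubling_time)  # one year of growth
    doubling_time *= 0.97                 # the rate itself accelerates
print(f"after {years} years: software x{software:.1f}, hardware x{hardware:,.0f}")
```

Under these assumptions, the human-visible curve improves by a factor of 1.6 while the hardware substrate improves roughly ten-thousandfold, a divergence that would look exactly like "stagnation" from our side of the hood.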

History suggests, however, that the accelerating changes continuing in the hardware space must eventually self-organize to create a whole new level of network intelligence, in the same way that we've seen successively accelerating paradigm shifts in twentieth century computing architectures (e.g., mechanical, electromechanical, vacuum tube, transistor, integrated circuit, heterojunction integrated circuit, etc.). How are transistor-based computer systems more "intelligent" than vacuum tube computer systems? The shortest answer is that there appears to be increased complexity (greater and more flexible learning) encoded in the network arising from the new substrate.

I think the essential question is to what extent our hardware, as it becomes more adaptable, plentiful, interconnected, and parallel, begins to adopt evolutionary developmental, tunable, "embryological" qualities. It is well known that reconfigurability at the hardware level ("Configurable Computing," Scientific American, June 1997) also follows a Moore's law curve. It is also clear that just about every important sensor, effector, and computational process we value is being implemented in silicon circuitry (e.g., Intel's Silicon Photonics group, 2004). I find sufficient evidence, for now, that there is no end in sight to the learning capacities (currently driven mostly by our top-down efforts, but with more and more bottom-up help from the machines each year) that are innate to the silicon substrate.

Another major singularity critique was presented by systems theorist and futurist Ted Modis in a 2002 article in Technological Forecasting and Social Change, with a followup in the May-June 2003 Futurist. Modis, author of Predictions, 1992, and Predictions: 10 Years Later, 2002, has concluded that we reached a local "peak" of technological change circa 1990 ("Forecasting the Growth of Complexity and Change," Theodore Modis, Technological Forecasting & Social Change, 69, No. 4, 2002).

Take a look at Ted's well-chosen data sets for universal emergences, which are nicely logarithmic until only very recently (in his interpretation). Unfortunately, he uses these data to conclude not that we have reached a temporary plateau or equilibrium before the next punctuated surge in computational autonomy, but that we have instead reached an inflection point for the universe as a whole, and that all local technological change will proceed more slowly from this point forward.

Such analysis is well worth mentioning, as it is so very rarely attempted, and it is a welcome addition to the anti-singularity literature. But as well-intentioned and aesthetically symmetric as the Modis model is, I find it fundamentally deficient in its understanding of both the growing autonomy and the intrinsic multi-millionfold speedup of the evolutionary developmental learning occurring within the technological substrate. There are, it would seem, a host of better explanations for the dip at the end of his curves, if indeed it exists. As a developmental systems theorist, I have proposed earlier on this site, for example, that we should expect a twenty-year human-observed (but not technology-observed) plateau after the internet but before the emergence of the CUI network, the next major developmental punctuation, which should precede such subsequent inevitable emergences as user-interface-driven personality capture (first-generation uploading) and true A.I. Nevertheless, whether there is a recent equilibrium in the data or not is an important outstanding question for singularity studies, and for raising it we should be grateful to thoughtful contrarians like Ted Modis.

In "The Limitations of the Singularity," deep-thinking transhumanist Anders Sandberg has posed a range of issues that might easily affect the near term nature of our planet's growth in computational complexity. All of these make sense, but while Sandberg notes that most transhumanist arguments ignore these present challenges, he never claims they might do more than create temporary roadblocks on the path to increasingly self-directing evolutionary developmental A.I. systems.

For another subtle critique, consider reading Lyle Burkhead's "The Singularity." Originally a "debunking" of Vinge's 1993 article, it has since been toned down: Burkhead has come to agree at least with Vinge's intelligence amplification (I.A.) insights, if not his artificial intelligence (A.I.) ones, and now centers his counterpoints more on Vinge's prose than on the logic of his major assumptions. Burkhead's thesis remains that "humans plus A.I. will always stay ahead of A.I."

While this is an intuitively appealing, anthropomorphic concept, and one shared by a number (perhaps a minority) of transhumanists, the idea that biological systems can ultimately maintain control of the coming emergence seems fundamentally flawed. Essentially, it stands in opposition to universal mechanisms of substrate shift, as I argue in my essay "Evolutionary Computation in the Universe," 2001, and in my forthcoming book. Lyle states, "The recursive center from which ultraintelligence is emerging lies within us," and here I think he is entirely correct. What he does not presently appear to acknowledge is that the computational essence, lessons, and structure of our "human substrate" can be expected to be recursively incorporated ("instantiated") within the new technological substrate of emergent A.I., a repeat of the same process we have observed in all previous emergences. If it is to parallel previous known hierarchical emergences, this instantiation must occur in its earliest stages by a very distributed, collectivist, incremental, bottom-up process of technology-aided intelligence amplification (I.A.), much more than via a centralized, individualist, rapidly punctuated, top-down, or rationally guided process of artificial intelligence (A.I.) design applied intensively within a small sector of the existing dominant substrate (in this case, human culture).

For clarity, let us here define as essentially I.A. any technological process that mediates mass-scale human-human, human-machine, and machine-machine interaction in a greatly distributed and deeply human-connected artificial environment, regardless of how much bottom-up or top-down hardware or software design is involved in particular modules of this complex system. Likewise, let us define as essentially A.I. any process in which a comparatively small number of human beings create either top-down or bottom-up designs in both a scale-restricted (e.g., occurring in only a small geographical area, such as one city) and largely human-isolated (e.g., involving only small numbers of human testers/designers) environment. Expressed in this language, A.I.-centric emergence models do not appear to fit at all with the prior emergence history of complex adaptive systems (CAS) as autonomous substrates.

For example, we now suspect that eukaryotic cells emerged incrementally from a vast, distributed network of archaebacteria, which themselves emerged from a vast, distributed network of protometabolic, molecular autocatalytic sets that necessarily developmentally preceded them (and which can no longer be found on Earth outside of cells, except in remnants). The informational essence of those autocatalytic molecular systems today forms the stable metabolic base of the prokaryotic substrate, fully instantiated within the new substrate. In the same manner, the informational essence of the bacterial genome is encapsulated in the eukaryotic cell (and even continues on in such structures as mitochondria). Eukaryotic systems, in symbiosis with a bacterial base, represent a true superset of prokaryotic capacities, while also relying on them as a stable, less computationally specialized platform for eukaryotic activities.

So we should expect the same with emergent A.I., which may nominally, in its final brief catalysis, appear to be produced by a small subset of individuals and institutions, but will actually be the result of a long, deep instantiation of the evolutionary lessons discovered and utilized by all human civilization within the emerging symbiotic intelligence amplification (I.A.) network. See our essay on the first-generation CUI network, and its second-generation personality capture implementations, for more on what we consider to be developmentally inevitable, I.A.-driven instantiation and convergence scenarios. As with previous systems, this process can be expected to be guided by incremental I.A. of the entire human substrate over an extended period of time. Furthermore, it must apparently proceed through a number of developmental steps before full autonomy can emerge. The number and nature of those steps are, of course, a matter of ongoing lively debate. For more on the potential implications of this process, consider visiting the Speculative Topics page of this website, or read Exploring the Technological Singularity, 2002, or my forthcoming book.

At geniebusters.org, Burkhead has also written "Nanotechnology Without Genies," a more comprehensive argument against the plausibility of "strong" (A.I.-guided) nanotechnology. This series of articles proposes that autonomous A.I. will not arrive, and as such it is another notable critique of the paradigm of the technological singularity. His general comments regarding the incremental evolution of the nanotechnology industry are, I think, often a reasonable description of what we will see in the next few decades, before we hit "the wall" of the curve of inexorably accelerating computational change. Some of his other political and historical perspectives as stated on his site are, in my opinion, misinformed, and require no further comment than this disclaimer. In summary, it is nice to see someone take the trouble to elaborate a picture of what nanotechnology might look like in a non-singularity world, and I hope to discover more such detailed alternative scenarios in coming years. They are rarely produced, and that is telling evidence of their difficulty.

Critiques of the 2040-2080 Prediction Interval for the Technological Singularity

Perhaps the most interesting common critique of a near-term singularity is the "quantum-scale computing" argument, as advanced by such distinguished thinkers as Roger Penrose (The Emperor's New Mind, 1989/2002, and Shadows of the Mind, 1996) and Stuart Hameroff. In this intriguing model, it may take far longer for our bottom-up developmental systems to reach human-level complexity if human beings engage in quantum computation in our subcellular structures. Hameroff proposes microtubules as the site, though other structures might easily be counterproposed for the argument to remain intact. In this argument, our accelerating computers might take centuries more to learn the "hidden" human/biological complexity operating at the quantum scale, if indeed such information can ever be discovered by nonbiological systems. Penrose is pessimistic on this latter point, a position that gives humans a feeling of special superiority, if it is to be allowed.

But a careful look at developmental systems theory tells a much different story. How do we know whether human beings compute nonrandomly at the quantum scale? A simple look at systems emergence in universal history suggests that this is highly unlikely. All universal systems are necessarily built of quantum-scale objects, having learned to entrain the intrinsic evolutionary randomness of the quantum world to their larger, systemic developmental ends. Do galaxies do quantum computing? Do stars? Do cells? Do brains? All evidence suggests that they do not. These systems have developed their macroscopic features in spite of quantum indeterminacy, Brownian motion, chaos, and all the other evolutionary processes intrinsic to an evolutionary developmental universe.

Yet fat-fingered 20th-century human beings, working with a preliminary generation of technological systems that operate at the femto-scale, have finally been able to perform nonrandom quantum computing, a local first. In other words, only our new, electronic systems substrate has gained access to the realm of the quantum in a nonrandom way, and it can do so only because electrons are intrinsically quantum-scale objects. This level of STEM compression (computational acceleration) has profound implications. Soon all the slow, macroscopic features of our local environment may be as computationally transparent to intelligent quantum systems as the finite physics of this particular universe is becoming to us, even here at the beginning of the 21st century. More on this at another time.

What about other arguments against the 2040-2080 singularity predictions? Simple applications of some doubling regimens do suggest a less vigorous growth model. One may note, for example, that computers were doubling in complexity roughly every three years in 1900, and every 14-18 months in 2000. Conservatively, this appears to suggest a halving of the doubling time every century: we might then expect nine-month doubling times by 2100, daily doublings around 2900, hourly doublings around 3400, and, way out around 4500 A.D., a world where computational complexity doubles every second. Where exactly do we define "instantaneous"? Milliseconds? Nanoseconds? Planck seconds? In this progression, we would expect millennia to pass before the arrival of at least one type of universal computational singularity.
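A quick sketch of this conservative model (the 36-month starting point and once-per-century halving are simply the text's assumptions, encoded directly):

```python
MONTH_SECONDS = 30.44 * 24 * 3600   # mean month, in seconds

def doubling_time_months(year):
    """Doubling time if it starts at 36 months in 1900 and halves
    once per century (the conservative model described above)."""
    return 36.0 * 0.5 ** ((year - 1900) / 100.0)

for year in (1900, 2000, 2100, 2900, 3400, 4500):
    dt = doubling_time_months(year)
    print(f"{year}: {dt * MONTH_SECONDS:15,.2f} seconds per doubling")
# -> ~9 months in 2100, ~1 day in 2900, ~48 minutes in 3400,
#    and ~1.4 seconds in 4500, matching the figures above.
```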

But there are a few factors not being considered here. First, the data in this simple analysis are wrong. The doubling times of computational complexity appear to have themselves been shrinking at a gently accelerating rate (today, we are closer to 12 months than 18 months with regard to calculations per second per dollar in digital computers). Thus if we defined an "instantaneous" rate as a doubling time of seconds, that situation might arrive around 2900 rather than around 4500 A.D., or even by 2140, depending on the still unknown strength of the second exponent in the double exponential curve. Remember that the second exponent "blows up" in the same way that the first exponent causes the amount of rice on the chessboard to blow up preposterously over a short period of time (only 64 doublings, in the chessboard example). Thus we need much better data and models to meaningfully project this curve. Nevertheless, we do know that the figures above are in the ballpark. Another hundred years or a few more millennia are both infinitesimal in cosmological time; all present evidence suggests we are on the verge of a very interesting, apparently universal transformation.
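To see how sensitive the arrival date is to that second exponent, hold everything else fixed and vary only the halving period of the doubling time. This is an illustrative sweep of ours under the same assumed 1900 starting point, not a forecast:

```python
import math

DT_1900_SECONDS = 36 * 30.44 * 24 * 3600     # 36-month doubling in 1900
halvings = math.log2(DT_1900_SECONDS)        # ~26.5 halvings to reach 1 s

# If the doubling time halves every h years, one-second doubling times
# arrive roughly 26.5 * h years after 1900:
for h in (100, 50, 25):
    print(f"halving period {h:3d} yr -> sub-second doublings near "
          f"{1900 + halvings * h:.0f}")
# Compare the chessboard: a mere 64 doublings of one grain of rice
# yields 2**64, about 1.8e19 grains. Second exponents compound the
# same way, which is why the projected date can move by millennia.
```

Merely halving the assumed century-scale parameter pulls the date in by well over a thousand years, which is the text's point about how little we can trust the simple projection.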

A second, subtler question we may ask is how many more doublings of technological complexity are required to build a technological environment that will allow the evolutionary emergence of greater-than-human intelligence. As humans have an obviously finite complexity, we can expect that this transition, the technological singularity, will arrive much, much earlier than any period of near-instantaneous complexity doubling in the local universe, an entirely separate state of affairs perhaps more appropriately called a developmental singularity. As we've mentioned elsewhere, given a rate differential of perhaps ten million between the speeds of evolutionary development in technological versus genetic systems, we should today expect the rediscovery of first the lowest and then even the highest human-level intelligence (and beyond) over the next 40-80 years.

A third, and perhaps our deepest, question concerns exactly how many more doublings of computational complexity, beyond A.I., might be necessary before local technological (or post-technological) intelligence begins to run into limits to growth (for example, Planck-scale boundaries of miniaturization within this universe's spacetime structure) and limits to the local knowledge that can be gained about the universe with finite computational tools (so-called "computational closure"). Under these interesting but speculative conditions, I would suggest that a developmental singularity had been attained. And if our multiverse theorists are correct, we might then expect a trajectory of universal transcension or new-universe creation, rather than some form of cosmic expansion within this universe, which seems highly unlikely from the data so far (see my book summary, Exploring the Technological Singularity, for more on this).

Critiques of Robust Artificial Intelligence

The few well-known general critiques of A.I. (such as those advanced by Roger Penrose, John Searle, and even Hubert Dreyfus) have all been adequately addressed by Ray Kurzweil and others, and I refer readers to that extensive literature. Here I wish to address some of the lesser-known arguments against the imminent emergence of a strong, general, and human-surpassing autonomous artificial intelligence.

James Martin's After the Internet: Alien Intelligence, 2000, while outlining the importance of evolutionary computation, argues that the kinds of intelligence we will see from emergent A.I. will be "alien" to ours. This is clearly true in the sense that they will quickly grow to incorporate all that we comprehend, while also perceiving much more, in ways we cannot directly comprehend. As I've discussed elsewhere, they won't simply be conscious; they are better described as hyperconscious, having the ability to change their cognitive architecture on the basis of their conscious thought (something we cannot do).

But Martin does not use "alien" in this less conditional sense. He suggests emergent A.I.s will be more advanced in certain narrow ways, but essentially less generally intelligent. This is a recent version of the idea that while computers may grow to become "idiot savants," some unforeseen developmental block will prevent them from quickly becoming mature, generally intelligent, commonsensical emergent A.I.s.

Martin's work, and some others like it, are thus also notable critiques of the technological singularity, as this perspective argues that emergent A.I. will not both fully encompass and entirely exceed existing biological intelligence. Again, I think Martin overlooks what appear to be some of the fundamental qualities of computational complexity, independent of substrate. Local simulation systems (or perhaps more accurately, their perception-action cycles) always increase their space-time dimensions (the spacetime scope of their computational interdependence) as a direct function of their complexity. In the process, they gain all the structural and computational complexity of the substrates from which they emerge, and in addition develop new, "alien" capabilities.

We could give examples from all substrates in universal evolutionary development, but let's pick a recent one to illustrate the point. One hundred thousand years ago, pre-technological humans simulated and reacted primarily to their local environment, including their models of the way other organisms (other humans, other animals) modeled reality. Today, technologically augmented humans are significantly concerned about asteroid impacts, the lifetime of the Sun, and even the birth and death of the universe, while at the same time we attempt an ever deeper modeling of the perception-action cycles of all other computationally complex organisms in our local environment, learning how to predict and tune into their (as well as refine our own) perspective on the universe. This is a strategy of survival, common sense, and an ever more non-local (global, universal) form of balance, paradoxically a direct result of ever more local increases in complexity. If you wish to read more about this process, consider Rod Swenson's papers on the Speculative Topics page. As humans, we simulate every less complex intelligence in our known universe, in restricted computational domains within our own general intelligence, using the tools of science and our intrinsic computational resources. Every complex system appears to do this, subject to the constraints of its own computational resources. An emergent A.I.'s resources, by all accounts, will profoundly exceed ours, and there is no reason to assume its intelligence would not exhibit the same trajectory, and be built in the same recursive manner, as all known extant examples.

What Objections Have We Missed?

All reasonable arguments against apparently universal accelerating change and multi-local computational complexity increase should be carefully considered by scholars of singularity ideas. I am thankful to the community for pointing out the notable critiques above for our collective consideration. If you have a published critique of your own, please make us aware of it, so that it may also be linked, acknowledged, and discussed here.

Thank you in advance for your careful consideration of these ideas.