Singularity Watch

Singularitarians and Singularity Belief — The ASF Position


A Need for Public Dialog, Education, Scientific Inquiry, and Informed Activism in Understanding and Guiding Accelerating Change — Not The Promotion of Singularity Belief or Expectations that We Can Personally Create the Singularity

It is the present position of the Acceleration Studies Foundation (ASF) that not enough good scientific information yet exists for belief or disbelief to be a particularly relevant issue with regard to the singularity. Some who take this topic seriously believe that certain paths, such as particular forms of artificial intelligence research, neuroscience research, or other technical, economic, or public policy activity, may lead us most directly (and under the right circumstances, most safely) to a technological singularity. Others, such as myself, believe that technological singularities may emerge in all intelligent civilizations in our universe via a statistical process of collective social intelligence development, and that this developmental process must be strongly guided by poorly understood immune systems (social morality, resiliency, computational redundancy, etc.) in order for our past record of acceleration to have been as smooth as it appears, in both human history and universal history.

It would be premature to say that one of these sets of beliefs is right and the other is wrong. Indeed, they may very well both be true. While our beliefs help guide our actions, we must recognize that belief is just one of at least three fundamental discourses driving human inquiry. As Jacob Bronowski (The Ascent of Man, 1974; Science and Human Values, 1956/65) reminds us, practical knowledge and philosophy, and verified knowledge and theory (science), are each as important to human behavior and values as belief. All three (belief, practice/philosophy, and science) should be well developed for what we might call informed thought, behavior, and activism.

In this early stage of singularity scholarship, when just about all we have to ground our discourse is a range of poorly substantiated beliefs, we would therefore do well to focus on accumulating as much practical knowledge/philosophy and scientific knowledge/validated theory as we can. Without the other two legs of Bronowski's tripod, our beliefs will have little relevance beyond guiding our individual actions. If we use them to attempt to convince others of the rightness of our perspectives, as some singularity scholars have advocated, without grounding them in much better practice and theory, this will be just an exercise in vanity. The social echo chambers that result, no matter how large, are likely to have little relevance to global complexity development. Therefore, ASF's activism with regard to the singularity meme is primarily focused on expanding public dialog, education, and scientific inquiry regarding the phenomenon of continuously accelerating change, and on networking those lay and academic scholars who have interests in advancing the study and critique of accelerating change from a variety of disciplines.

Singularitarians who believe that they can create and guide the singularity by a particular set of personal conscious actions, as opposed to a long series of mostly global and collective unconscious actions, are of course sincere in their beliefs, but it is our present position that their beliefs are in most (not all) cases misguided, and that the degree of individual effect they imagine they can have on the coming transition is generally (not always) significantly overrated. We share the singularitarians' desire for a safe singularity, but see no evidence for the great majority (not all) of their self-servingly dramatic nightmare or failure scenarios: the universe has a long, proven history of facilitating successful emergences while carefully preserving ancestor substrates in the process. We ignore this history at the risk of our own ignorance and unwitting self-aggrandizement. It is heartening that there are groups thinking about how to chart a careful, considered approach to AI development, as that ethic should help ensure a transition with minimal negative effects on humanity. SIAI, for example, is doing good work in this area.

At the same time, we expect such work to be progressively co-opted by the much larger and more AI-productive traditional AI community, of which the singularitarians are a small part. Weld and Etzioni's presentation "The First Law of Robotics" at AAAI 1994 was perhaps the first modern discussion of safe AI. AAAI 2002 has already held a special symposium dedicated to Safe Learning Agents, all while we remain at least a decade or more away from agents or robots intelligent enough to even begin to be considered "intentionally" harmful. Such work demonstrates that human foresight on these issues is progressing actively, and it will grow steadily as evolvable robotics and agents continue to add new capacities in an accelerating and increasingly noteworthy manner in coming years.

With regard to accelerating the speed of the coming transition, another core singularitarian principle, it is our intuition that this process is already occurring quite rapidly (it now appears to be only decades away), and that this will be a global, collective transition, a story of I.A. (species intelligence amplification, via science and technology) much more than A.I. (artificial/autonomous intelligence). For more on this, please see our outline of what we consider to be the next great computational attractor for our local environment, the Conversational Interface (CI). The CI network appears to be a developmentally necessary transition, an emergence that will likely occur several decades prior to fully autonomous A.I. Intelligence amplification (I.A.) systems like the CI network are highly collectivist in their construction, and will be tested and refined by an entire planet's worth of users. They are also highly symbiotic, and will intrinsically engage in extensive profiling, simulation, and "personality capture" (first-generation uploading) of their users' behaviors, habits, goals, limits, and rational and emotive states in the process of their refinement.

Yet while the CI is a hard and noble problem, its complexity pales by comparison to the creation of biologically inspired hardware systems capable of indeterminate self-guided evolutionary developmental improvement (e.g., full autonomy). Artificial life, one of the fields where such efforts have made early, halting progress to date, has demonstrated itself to be a highly collectivist undertaking, involving large swarms of human beings each incrementally improving the replicating systems and their development environment. That environment is itself very hardware-dependent in its scalability, requiring further legions of highly differentiated human beings to provide incrementally better "digital soil" for new organisms. Furthermore, any success that does occur with these architectures, as it becomes both scalable and widely demonstrated (a condition not yet apparent for A-Life systems in general), will first be applied to such collectively important and commercially rewarding developmental goals as the CI network. Thus precipitation of the singularity, in our present estimation, will not be significantly dependent on any one group's individual AI design efforts. If we imagine we can do anything other than very incrementally accelerate it by our own individual action, we are likely giving in to romantic idealism.

While personal visions are fundamental to our humanity and certainly have their own motivational value, a potentially far more important social issue is the ethical path we take in the last days of the human era: the manner in which we continue to develop our own personal responsibility and accountability in a world of inexorable accelerating technological change. In an evolutionary developmental universe apparently equipped with a deep immunity to informational destruction, what human actors can strongly affect, at personal, cultural, national, and global levels, is the quality of our evolutionary choices and paths, and the general nature, timing, and trajectory of our developmental constraints and destinations. Furthermore, we must recognize that developmental immunity applies on average, in collectives of biological systems in supportive environments, but not necessarily in our unique case, as one of potentially many intelligent civilizations. How strong are our planetary immune systems (physical, chemical, biological, cultural, technological)? We must be willing to see and recognize the immunity that exists if we hope to strengthen it.

There are many paths to the technological singularity that we may choose, and some will be far more human-affirming than others, given our existing imperfect technologies and foresight. As the storm of change races ever faster, we need to be thinking more about core human values in the transition, not less. To do so will require both enlarging the dialog to encompass more than the current privileged few, and bringing significant new analytical attention and understanding to the nature and trajectory of accelerating change, so that we may continue to make more informed technological, sociopolitical, and personal decisions in the present.