Social Backlash to the Technological Singularity Hypothesis
How Culturally Disruptive Might the Singularity Idea Be in Coming Decades?

Acceleration Watcher Randy Marks recently (2003) posed this question, reproduced above with minor edits.
This is an excellent question, and one that seems intrinsically more evolutionary (and hence less predictable) than the developmental acceleration of local computational systems. Here are some preliminary thoughts, however, and I welcome your feedback on them. In a nutshell, I am of the present opinion that most of society will continue to comfortably ignore the hypothesis for the next decade or two, primarily because there just isn't enough data yet to support real theories, only a semi-informed speculative hypothesis.

Nevertheless, as we collectively get better at measuring and interpreting technological development, a growing group of IT professionals, scientists, technology scholars, futurists, and lay thinkers will become adept at predicting specific types of accelerating computational and technological price-performance gains over time, even as we remain mostly unable to predict the specific evolutionary form these developments will take from year to year. For example, which wireless network platform wins out over the next decade is a very evolutionary, poorly predictable event. But the average annual growth rate of wireless node density (e.g., "Poor's law," as coined by Robert Poor of MIT) is an apparently inevitable and much more regular developmental process; a short sketch of this distinction follows at the end of this reply. Another decade or so of such increasing predictive foresight, in a range of computationally related domains, should help many to take notice of the singularity dialog. Eventually, I'm sure we will see the emergence of formal disciplines, such as Acceleration Studies and Evolutionary Development Theory, and our Acceleration Studies Foundation is very interested in promoting these over the coming years. But this will take time.

In regard to discussion of the technological singularity hypothesis, then, I expect all the extremism and "backlash" we see will remain mostly talk for several more years, as so much of this is simply opinion, without good data or clever predictive experiments based on developmental models. I don't expect any major social group to take singularity discussion too seriously for at least another decade, rightly considering it still a primarily philosophical topic at the present time, for all its promise.

However, the news media, ever on the alert for an interesting story, are another matter. In 2002 they began to take public notice of technological singularity discussions, and I expect that increasingly better coverage of these issues by foresighted journalists will continue to move issues of accelerating change in and out of the attention of various future-oriented communities. As we struggle to adapt culturally to the ever-faster rush of technology, we will note new cultural change, such as the way today's children, the most plastic members of human society, have gained new status as teachers of their parents with regard to the necessary tools of modern culture (internet, email, cell phones). This surprises and alarms some of us, as growing portions of human culture seek to hold on to traditions, to slow down and simplify, in response to a nonhuman environment that is continually speeding up and complexifying. The recurring drama of this "future shock" makes for a great story. Hopefully we at the ASF can do our part by calling both for broad, balanced public discourse and for increased scientific attention and funding for the multidisciplinary study of accelerating change in coming years.
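To make the evolutionary/developmental distinction above concrete, here is a minimal sketch in Python. The growth exponent, yearly figures, and platform names are all hypothetical stand-ins, not real measurements; the point is only that a smooth developmental trend can be recovered by a simple fit even while the evolutionary details remain unpredictable.

```python
# Minimal sketch (hypothetical data): a noisy exponential trend in wireless
# node density -- a "Poor's law"-style developmental regularity -- can be
# recovered with a simple log-linear fit, even though the year-by-year
# platform winners (the evolutionary details) are just random draws here.
import numpy as np

rng = np.random.default_rng(42)

years = np.arange(2003, 2013)            # ten hypothetical years
true_exponent = 0.6                      # assumed continuous growth exponent
density = 1e3 * np.exp(true_exponent * (years - years[0]))
density *= rng.lognormal(mean=0.0, sigma=0.15, size=years.size)  # noise

# Developmental regularity: the growth exponent falls out of a
# least-squares fit of log(density) against time.
slope, intercept = np.polyfit(years - years[0], np.log(density), 1)
print(f"estimated growth exponent: {slope:.2f}/yr (true: {true_exponent})")

# Evolutionary unpredictability: which platform "wins" each year is modeled
# as a random draw; no trend fit tells you which label comes up next.
platforms = ["802.11x", "UWB", "mesh", "cellular-data"]
print("yearly platform winners:", list(rng.choice(platforms, size=years.size)))
```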
As an added complication, I don't presently expect that a functional study of singularity hypotheses will emerge first in academia, though many in the universities are presently advancing our understanding of accelerating technological change. Much of this early work may have to be championed and developed independently of universities for some time to come. Academia has an incontrovertible history of being very conservative toward paradigms that threaten the dominant ones. In the Renaissance, for example, almost all the real innovation in science, engineering, and the humanities occurred outside the tradition-bound academies. Even in computer development in the mid-20th century, the important innovations occurred mostly in military and industrial environments for more than two decades before academia formally embraced the new phenomenon, conferring the first Ph.D.s in computer science only in 1961. As a more current example, academia is fighting to keep the ultra-Darwinist paradigm as the sole approach to biological sciences research and education, when there is increasing evidence that evolutionary development is a better paradigm for biological change. So academia often comes very late to the most important issues in understanding and constructing new realities.

To what extent will accelerating computation itself intrinsically trigger social regulation?

Again, to a very minimal extent, from my perspective. Unlike the top-down and necessarily error-prone interventions we are making in the biotechnological world, which are presently being slowed significantly by social and bioethical constraints, and which should continue to be slowed greatly for the foreseeable future, I expect very few attempts to slow down the bottom-up increase in the speed, power, and flexibility of our computational and technological systems. We will continue to hold them to progressively higher safety and environmental standards, to be sure, and to make their interfaces more sophisticated, personalized, and customizable. But these new standards will only stimulate further acceleration of computational complexity.

Here is a word, and a field, that you may not have heard of yet: "captology." It refers to persuasive computing, the use of computers to convince human beings to do things, and to monitor and manage their intellectual and emotional responses through a process of personality capture, the creation of an internal model of the user's mental state. This, of course, is a very first-generation form of uploading, something on a smooth continuum with Hans Moravec's silicon brain upgrade or Ray Kurzweil's neural transistor paradigms.

Do you expect a social backlash against the increasing captology potential of computers?

I don't, at least for the next ten years, as computers won't start to get powerful in this capacity until we begin to approach the Conversational Interface. Certainly there will be a great number of small backlashes all along the way, primarily over privacy and liberty issues, but I wouldn't expect anything extreme. The subtlety and sophistication of our machines will continue to accelerate, and if they follow a largely bottom-up, self-balancing developmental path, as I think they are currently doing, the problems they create will rapidly subside in relation to the tremendous new productivities, and freedoms from our historical constraints, that they provide. We know that there have been technology backlashes for centuries.
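For the curious, here is a toy sketch of the "personality capture" idea at the heart of captology. Everything in it, the message styles, the update rule, the class itself, is invented for illustration and describes no real captology system; it shows only the basic feedback loop in which a machine maintains an internal model of a user's responses and adapts its persuasion accordingly.

```python
# Toy sketch (all names hypothetical) of personality capture: the system
# keeps a running internal model of the user's responses and steers toward
# whatever persuasive style has worked before.
from dataclasses import dataclass, field

@dataclass
class UserModel:
    # running estimate of how receptive the user is to each message style
    receptivity: dict[str, float] = field(
        default_factory=lambda: {"encouraging": 0.5, "factual": 0.5, "urgent": 0.5}
    )

    def record_response(self, style: str, complied: bool, lr: float = 0.2) -> None:
        """Nudge the estimate for `style` toward the observed outcome."""
        target = 1.0 if complied else 0.0
        self.receptivity[style] += lr * (target - self.receptivity[style])

    def best_style(self) -> str:
        """Pick the style the model currently believes is most persuasive."""
        return max(self.receptivity, key=self.receptivity.get)

model = UserModel()
model.record_response("urgent", complied=False)
model.record_response("encouraging", complied=True)
print(model.best_style())   # -> "encouraging"
```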
In 1812, the Luddites made their stand, which was apparently more about better working conditions than about stopping technological development. In 1910, the word "sabotage" came into English usage after neo-Luddite striking French railway workers cut the sabots (metal "shoes") that hold railroad tracks in place. In the last century, leading pundits voiced fears that machines would "soon put humanity out of work" in the 1920s-30s (the assembly line and the Great Depression), the 1950s (Big Iron computers), and the 1980s (personal computers), and I would predict the same again in the 2010s-2020s (due to the early CUI interface). These backlashes will be a common public dialog, but just that: a dialog. Furthermore, it is my strong intuition that the total degree of backlash against technological systems continues to decline, even as the violence possible from any single disgruntled individual steadily increases. I look forward to better measurement and study of this statistical stabilization (social and technological intelligence, immunity, and interdependence) in coming decades. We are becoming very good at insulating and protecting our modern social system from technological disruption, even as our technologies begin to "take off" all around us.

Will we eventually try to slow down the autocatalytic nature of our technologies?

It certainly seems a non-issue in these early days. I expect, for example, no backlash against IBM's present-day Autonomic Computing initiative, which attempts to make network infrastructure significantly more self-provisioning and self-repairing. Perhaps we will apply some social brakes to accelerating computation much later, once our platforms have become far more self-developing at the hardware level. The leading first-world societies might, for example, place a brief moratorium on self-evolving intelligences sometime after 2030, once our robots and CUI-equipped computer systems have become much more interesting than they are today. However, far more technology foresight than we presently have would need to emerge for that to occur realistically, other than in a few economies on an experimental basis.

Would a multilateral moratorium have any ultimate effect on the speed or timing of the technological singularity?

Even in that case, I doubt it would have more than a brief impact.

Would that be something that any of us should be strongly for or against?

I don't think we are even ready to answer that question yet. Certain moratoria, as on the development of destabilizing or dirty technologies, have been very effective, even though they are only temporary in the sense of ultimate prevention. For example, even though we know we cannot ultimately prevent the citizen of 2080 from having access to technologies that would allow him to make a nuclear bomb in his basement, we also know, or should know, that the level of transparency and intelligence built into all of our technologies by that time will continue to ensure that such an event remains, even in that advanced environment, statistically extremely unlikely. In other words, though we likely cannot prevent its individual occurrence, we can make it extremely improbable that it will occur multiple times within our increasingly immune technological network.
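As a concrete illustration of what "self-provisioning and self-repairing" means in practice, here is a minimal Python sketch of the kind of monitor-analyze-repair control loop such initiatives describe. The service names, health scores, threshold, and restart action are all invented for illustration; real autonomic systems are vastly more sophisticated.

```python
# Minimal sketch of a self-monitoring/self-repairing control loop, in the
# spirit of autonomic computing. Services, scores, and the "restart" repair
# are hypothetical stand-ins.
import random

SERVICES = {"web": 1.0, "db": 1.0, "cache": 1.0}   # hypothetical health scores

def monitor() -> dict:
    """Sample each service's health (simulated with random degradation)."""
    return {name: max(0.0, h - random.random() * 0.3)
            for name, h in SERVICES.items()}

def analyze(health: dict, threshold: float = 0.6) -> list:
    """Flag services whose health has dropped below the repair threshold."""
    return [name for name, h in health.items() if h < threshold]

def execute(degraded: list) -> None:
    """'Repair' each degraded service by resetting it (stand-in for a restart)."""
    for name in degraded:
        SERVICES[name] = 1.0
        print(f"self-repair: restarted {name}")

for tick in range(5):          # a few iterations of the autonomic loop
    health = monitor()
    SERVICES.update(health)    # persist the observed degradation
    execute(analyze(health))   # repair anything that fell below threshold
```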
Unconsciously or not, we have taken a pathway that slows down, regulates, and places moratoria on destabilizing technologies (e.g., access to WMDs) while accelerating a range of stabilizing ones (e.g., a worldwide infrastructure of computational transparency and technological immunity). We need to observe what kinds of mistakes our partially self-aware systems make in coming decades, and how rapidly and thoroughly they can correct themselves, before we know what kinds of regulations are appropriate, and what kinds of moratoria, if any. I personally expect that our technological progeny's rate of learning, including learning how to integrate and interface seamlessly with us, will continue to be absolutely astounding by comparison with their biological ancestors. Fortunately, there have already been scores of computer science workshops on the issues involved in building friendly interfaces, unthreatening robotic systems, and safe learning agents. Soon enough, this will become an entire industry populated by its own professionals.

The AI engineer and futurist Hugo de Garis has an article and book, "The Artilect War," proposing that a major social conflict may ensue between the "Cosmists" (those who want to build artificial intellects, or "artilects") and the "Terrans" (those who do not). I think he is wrong on this point, greatly overestimating the coming rift and underestimating the superstability of our new social and technological immune systems. Nevertheless, his is a good account of the extremes to which these ideas have been taken, and it is worth reading if you want to understand the range of thought. While I agree there will be a range of strong and partially oppositional political and philosophical positions on these issues, I think the dominant theme will be pluralism, not polarity. I don't think modern society will ever allow major disruptive social schisms again, no matter the issue: the human technocultural system is now far too immune, interdependent, and intelligent for that.

Again, these are only partly informed speculations at present. I expect we'll learn how to measure, predict, and understand all these issues much better in coming decades, and we at ASF want to accelerate the development of good data and models in these compelling areas of future concern, with your help.