The concept of the Technological Singularity was introduced by Vernor Vinge. For those who need an introduction to this concept, we will explain it at the end of this article.
Despite the fact that, by definition, we cannot meaningfully describe this singularity from our current viewpoint, it is too tempting for visionaries not to try to paint a vision of the post-singularity world. I think that their descriptions are typically way too mild when they speak about a post-singularity world dominated by a self-evolving society of intelligent "AI" post-human agents, and about the place of humans in such a society.
Here is what I think should happen when we take into account how physics and ethics can and will be developed by those agents.
I think that in the absence of an outside "power" (God or a more developed civilization) the two main alternatives are quite clear --- they both boil down to ethics, not surprisingly...
Basically, it is likely that those first post-humans (an AI 100 times smarter than a human) would have as their main goal either "power" in all senses, or "ethics" (with "power" then subservient to it).
The mixed scenario is highly unlikely, because even a few weeks of head start would probably be more than enough for one side to prevail. So I'll just analyze the pure scenarios, where the first post-humans are governed either by the quest for "power" or by the quest for "ethics".
(This text does not consider the possibility that a strongly coupled, powerful human-computer entity with direct neural-circuit connections is achieved first. In that case all bets are off.)
This all boils down to physics. We have known, ever since Niels Bohr explained it to us, that our physical models have always had very limited applicability and have nothing to do with absolute truth.
Now the progress of physics has slowed down, because people's brains do not evolve quickly, and there are no resources for radically new experiments.
With post-humans of superior brain power, this would change in a moment. Radical discoveries (as radical as relativity or quantum mechanics) would follow once a month, then once a week, then once a day, then once a second --- and would be used to change the very nature of the "derivative" physical "laws" which govern what goes on here, until the very notion of time stopped making sense --- we know what even simple gravitational fields do to time and space --- and those new discoveries would lead to technologies incomparably more radical than a nuclear bomb... The very nature of space and time would change, the Solar system would simply be gone... Never mind humans; nobody would even notice them --- it is highly unlikely that anything of that Society of Mind would survive, if proper ethical control and self-control were not exercised.
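One can make this accelerating cadence precise, as a sketch (assuming, which the text does not literally claim, that each interval between radical discoveries shrinks by a constant factor r, 0 < r < 1, starting from a first interval of length Δ): the intervals then form a geometric series, and infinitely many discoveries fit into a finite stretch of time --- a singularity in the literal mathematical sense:

\[
\sum_{k=0}^{\infty} \Delta\, r^{k} \;=\; \frac{\Delta}{1-r} \;<\; \infty .
\]

For example, with Δ = 1 month and r = 1/4 (a month, then roughly a week, then a couple of days, ...), the entire infinite cascade would complete within 4/3 of a month.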
Just imagine various parts of the system fighting each other over who will unfavorably change the structure of space 0.0000000001 of a second before the opponent does the same to them...
Basically, this is just the equivalent of a collision with a black hole...
If the self-reflection of the first post-humans were governed by ethics, they would understand all this, and much more, far quicker than our weak brains can. It is likely that, if strong security were considered paramount, all self-reflective creatures would be protected to the utmost, simply because it is very dangerous to draw an artificial line somewhere... So a "Golden Age", in some very strange and strong version, is highly likely... For "humans" too. The "perfect ethics", together with its implementation, would follow quickly. All meta-considerations, such as the dangers of excessive controls, would be taken into account properly.
Of course, this set of two scenarios may be faulty, but it seems that if we assume we can talk about the singularity at all, instead of just assuming it is outside our discourse (also not a bad idea, philosophically), they are the most likely. All this assumes no interference from an existing outside power, of course...
If this is correct, then what we do and how we approach this is indeed crucial... The kinds of things we create as agents, learning systems, etc., will become decisive (that is, what matters is only who wins first, as there will obviously be attempts of both kinds).
So I feel that any picture of a "cloud of agents" with a vague ethical basis is very naive --- it's either fierce warfare on a scale infinitely beyond any imagination, or a strongly ethical system, with whatever ethics it chooses to develop (but collective survival, stability, and control of the rate of progress would be paramount). That is, if we can talk about this thing at all...
Ray Solomonoff at some point explained to me one of the probable scenarios for such a singularity; it can serve as a naive explanation if one does not want to spend time reading the complete description. Scientists are divided on whether full-scale artificial intelligence is possible. Let's presume for this scenario that it is possible. Then, due to the exponential nature of progress, only a few months will pass from the moment when a computer becomes as smart as a human till the moment when it(?) becomes 100 times as smart, and it will keep getting better. Thus the situation would change radically, since humans would no longer be at the top of the evolutionary chain. Everything would be completely different. To think that governments, which cannot stop the spread of computer viruses, can do something about this is entirely naive.
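As a quick sanity check of the "few months" arithmetic (with illustrative numbers of my own, not Solomonoff's): assume, purely for the sake of the sketch, that machine capability doubles every month once human level is reached. Growing from 1x human level to 100x then takes

\[
\log_{2} 100 \;\approx\; 6.64
\]

doublings, i.e. roughly seven months with a one-month doubling time --- consistent with "only a few months".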
A typical consensus estimate for the timing is 2020-2025, but I think it rather overestimates the time we actually have until this event.
Written in September 1998.
Footnote (added June 4, 2020): the title of this essay had a funny typo: it was "Singularity is More Radical Then We Think". It was interesting to meditate upon this version, with "then" instead of "than", but I thought it was a distraction, so I finally fixed it.
Copying of this and other Essays by Mishka is allowed free of charge, provided that the texts and this notice are unaltered, and that no further restrictions on the subsequent free redistribution are imposed.