Cultural Decline. Some humans fear that vice, crime, and corruption indicate ongoing social decline or impending collapse. Other humans fear that problems of class division, pollution, education, and infrastructure indicate economic decline or impending collapse. These fears are perennial and unfounded. Past examples of the drastic decline or collapse of a culture or civilization have almost always been due to environmental change or to infection or invasion by outside humans. But since the advent of continental steam locomotion in the mid-1800s, no society has remained unexposed to the infections of the others. Similarly, all societies have been joined into a single global human civilization, which is not subject to invasion by outside humans. Environmental change does pose a set of challenges, but they seem to represent constraints on growth rather than seeds of collapse.
Cultural stagnation is another, milder, kind of potential catastrophe. As in Ming China, medieval Europe, or the Soviet Bloc, stagnation can result if a static ideology takes hold and suppresses dissent. Ideologies with such totalitarian potential include fideist religions, communism, and ecological primitivism. Still, such a development seems unlikely, given the intellectual freedom and communication technology of the modern world.
Bioterrorism. Could a pathogen be genetically engineered to be virulent enough to drive humanity extinct? Such a pathogen would have to spread easily from person to person, persist in the environment, resist antibiotics and immune responses, and cause nearly 100% mortality. A long latency period (e.g. months) might be necessary to ensure wide distribution, but no latency may be long enough to infect every last human.
Robot Aggression. Some humans fear that the combination of robotics and artificial intelligence will in effect create a new dominant species that will not tolerate human control or even resource competition. These fears are misplaced. Artificial intelligence will be developed gradually by about 2200, and will not evolve into runaway super-intelligence. Even when AI is integrated with artifactual life by the early 2200s, the time and energy constraints on artifactual persons will render them no more capable of global domination than any particular variety of humans (i.e. natural persons). Similarly, humanity's first Von Neumann probes will be incapable of overwhelming Earth's defenses even if they were to try. To be truly dangerous, such probes would have to belong to a species with both true intelligence and a significant military advantage over humanity, and such a species would be unlikely to engage in alien aggression.
Nanoplague. Self-replicating nanotechnology could in theory become a cancer to the Earth's biosphere, replacing all ribonucleic life with nanotech life. The primary limit on the expansion of such nanotech life would, as for all life, be the availability of usable energy and material. Since any organic material would presumably be usable, the primary limit on how quickly nanocancer could consume organic life would be the availability of usable energy. Fossil fuels are not sufficiently omnipresent, and fusion is not sufficiently portable, so nanocancer would, like ribonucleic microorganisms, have to feed on sunlight or organic tissues. Ribonucleic photosynthesis captures at most about 10% of incident solar energy, while nanocancer should be able to capture at least 50%. The only way to stop nanocancer would be to cut off its access to energy and material, or to interfere with its mechanisms for using them.
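A minimal back-of-envelope sketch of the energy gap these figures imply. The insolation value is an assumed round number for illustration only; the capture fractions are simply those stated above.

```python
# Illustrative comparison of solar energy ceilings, assuming capture
# fraction is the limiting factor on growth.

SOLAR_FLUX_W_PER_M2 = 240        # assumed rough global-average surface insolation
PHOTOSYNTHESIS_CAPTURE = 0.10    # ~10% maximum capture by ribonucleic photosynthesis (from text)
NANOCANCER_CAPTURE = 0.50        # >=50% capture attributed to nanotech life (from text)

bio_power = SOLAR_FLUX_W_PER_M2 * PHOTOSYNTHESIS_CAPTURE
nano_power = SOLAR_FLUX_W_PER_M2 * NANOCANCER_CAPTURE

print(f"Biological ceiling:  {bio_power:.0f} W/m^2")
print(f"Nanocancer ceiling:  {nano_power:.0f} W/m^2")
print(f"Energy advantage:    {nano_power / bio_power:.1f}x")
```

On these assumed figures, nanocancer would command roughly five times the energy per unit area available to photosynthetic life, which is why cutting off energy and material, rather than outcompeting it, is framed as the only remedy.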