February 26, 2010

Who Cares Most? Top Google search result on The Future of Humanity


I am always curious about who has the most recently refreshed content on a given topic. To be at the top of the heap in Google Search, you really have to be the person who cares most about that subject. This week, on WHO CARES MOST? we're Google Stalking...


This link leads to an evolving hypertext published by Brian Holt, author of Human Knowledge: Foundations and Limits. Brian has captured the top two Google Search Slots for "the future of humanity" today.

The first reason I love and want to be one with this text is that Brian has predicted that our universe, and therefore our timeline, ends when we are all eaten up in the destruction of the LAST BLACK HOLE.

I have just spent forty minutes perusing Brian's EXTENSIVE treatise on the nature of human understanding, and it is nearly impossible for me to pick out the best parts. This excerpt is just a taste of what's waiting for you in the hypertext:

Nuclear Catastrophe. Nuclear power could result in three kinds of catastrophe: radioactive pollution, limited nuclear bombing, and general nuclear war. Accidental or deliberate radioactive pollution could kill tens or hundreds of thousands, but is quite unlikely to happen. Regional nuclear conflict in the Middle East or the Indian subcontinent could kill several million. Nuclear terrorism against Washington D.C. or New York City could kill more than a million and set back human progress by up to a decade. General nuclear war would kill hundreds of millions and could trigger a nuclear winter that might starve hundreds of millions more. While such a worst case would set back human progress by one or two centuries, existing nuclear arsenals could neither extinct humanity nor end human civilization.

Cultural Decline. Some humans fear that vice, crime, and corruption indicate ongoing social decline or impending collapse. Other humans fear that problems of class division, pollution, education, and infrastructure indicate economic decline or impending collapse. These fears are perennial and unfounded. Past examples of the drastic decline or collapse of a culture or civilization have almost always been due to environmental change, or infection or invasion by outside humans. But after the advent of continental steam locomotion in the mid-1800s, no society remains unexposed to the infections of the others. Similarly, all societies have been made part of a single global human civilization which is not subject to invasion by outside humans. Environmental change indeed poses a set of challenges, but they seem to represent constraints on growth rather than seeds of collapse.

Cultural stagnation is another possible (but milder) kind of potential catastrophe. As in Ming China, Middle Ages Europe, or the Soviet Bloc, stagnation can result if a static ideology takes hold and suppresses dissent. Such a development seems unlikely, given the intellectual freedom and communication technology of the modern world. Ideologies with totalitarian potential include fideist religions, communism, and ecological primitivism.

Bioterrorism. Could a pathogen be genetically designed to be virulent enough to extinct humanity? A pathogen would have to be designed to spread easily from person to person, persist in the environment, resist antibiotics and immune responses, and cause almost 100% mortality. Designing for long latency (e.g. months) might be necessary to ensure wide distribution, but no length may be enough to infect every last human.

Robot Aggression. Some humans fear that the combination of robotics and artificial intelligence will in effect create a new dominant species that will not tolerate human control or even resource competition. These fears are misplaced. Artificial intelligence will be developed gradually by about 2200, and will not evolve runaway super-intelligence. Even when AI is integrated with artifactual life by the early 2200s, the time and energy constraints on artifactual persons will render them no more capable of global domination than any particular variety of humans (i.e. natural persons). Similarly, humanity's first Von Neumann probes will be incapable of overwhelming Earth's defenses even if they tried. To be truly dangerous, VN probes would have to be of a species with both true intelligence and a significant military advantage over humanity. Such a species would be unlikely to engage in alien aggression.

Nanoplague. Self-replicating nanotechnology could in theory become a cancer to the Earth's biosphere, replacing all ribonucleic life with nanotech life. The primary limit on the expansion of such nanotech life would, as for all life, be the availability of usable energy and material. Since any organic material would presumably be usable, the primary limit on how nanocancer could consume organic life would be the availability of usable energy. Fossil fuels are not sufficiently omnipresent, and fusion is not sufficiently portable, so nanocancer would, like ribonucleic microorganisms, have to feed on sunlight or organic tissues. Ribonucleic photosynthesis captures a maximum of about 10% of incident solar energy, while nanocancer should be able to capture at least 50%. The only way to stop nanocancer would be to cut off its access to energy and material or interfere with its mechanisms for using them.


There is no question in my mind that at this moment, Brian Holt is WHO CARES MOST about THE FUTURE OF HUMANITY. Stay tuned for next week, when we find out who cares most about...

THE LAST BLACK HOLE

