Links on Human Extinction Risks
Rather than try to put together an exhaustive list of links, this section attempts a balanced introduction to the topics, pointing to leading organizations and well-curated resource sites. This website is a primer, so I try to provide appropriate, concise resources for intelligent newcomers to the topic, a sort of screening from a high-level viewpoint that doesn't get too deep into the tree hierarchy or trench work. More in-depth links are available via many of the sites linked below. Also, this page is under construction, and suggestions are welcome.
The focus of this website is what I call "The Big Three Human Extinction Risks": biotechnology and genetics, artificial intelligence, and nanotechnology, each covered in its own section below.
Many of the links below, especially the multidisciplinary ones, also cover additional human extinction risks which I consider quite unlikely or centuries away, thus diffusing attention from the real threats we should be focused on.
This is not a "gee whiz" site to entertain academics. This is a call to action: Save Our Species (SOS).
New links which have not yet been incorporated into the website are added at the bottom of this page.
Multidisciplinary or General Links
Wikipedia has a page on Human Extinction. In my opinion it suffers from information overload: a lot of academic distractions and impractical noise about interesting things which are not really extinction threats (albeit causes of mass suffering and destruction), or which are low-probability events we can do nothing about (such as gamma-ray bursts from faraway stars). Nonetheless, some good links and topics are raised there, and Wikipedia is always growing. On the whole, though, Wikipedia's handling of human extinction is more like "the top 100 risks", anything you can possibly imagine.
The Lifeboat Foundation currently may deserve to be the first-listed multidisciplinary organization, since it addresses the main extinction risks, but its website also carries a lot of superfluous material. I find it to be information overload, too academic and without enough focus on practical solutions. Nonetheless, it is a leading force. Established in 2005, at least they now have a page on "Space Habitats" buried in there, albeit with almost nothing on using space resources, while highlighting academic and impractical long-term schemes like space elevators; the links on that page are few and esoteric, and miss the meat of the internet's material on space colonization. The "Space Settlement Board" lists 80 people but has little to show for it, which is typically academic. Their "Boards" altogether list gazillions of people. Back to the topic of human extinction: that is their strength, and they have some well-written "Special Report" pages buried in there. What they need is a "What's New" page to help users find what's new, maybe in their Blog section.
(As a commentator on this GainExtinction.com website, John Hunt perhaps summarized the Lifeboat Foundation best: "They certainly are one of the larger organizations which address a variety of extinction risks although I am a bit frustrated that they take the position of trying to pursue an immune system against nanotech risks when I think this would be akin to trying to be prepared in advance to neutralize every possible computer virus...good luck! My bet is on the hackers.")
Nick Bostrom is a leader in the fields of existential risk and transhumanism. He started the Future of Humanity Institute at Oxford. See also the Wikipedia page on Nick Bostrom and NickBostrom.com . Keep an eye on this leader, one of my favorite producers in the world. (His PDF papers are also good to download and read while in transit where networks are not available.)
There is a U.S. congressionally funded report on extinction risks from various causes which was pretty good, but I haven't been able to find it again yet via Google ...
Bill Joy's famous Wired magazine article titled "Why The Future Doesn't Need Us", on the threats of genetics, nanotechnology, and robotics, received a lot of mass media attention and is a fine personal outlook which hits the key issues and focuses on the main extinction mechanisms.
This section needs the most work. I don't know why, but I've found far more primers on the threats of AI and nanotechnology than on the more pressing threats of biotechnology and genetics. Please help out by finding good primer links for this section.
Policing Science: Genetics, Nanotechnology and Robotics by William Leiss of the University of Ottawa includes a good analysis of genetics risk, citing a review in the respected journal Nature of a few near-term possibilities, e.g., "Transferring genes for antibiotic resistance (e.g., to anthrax or plague, as Russian scientists have done) or pathogenicity (the toxin in botulinin, which could be transferred to E. coli), or simply mixing various traits of different pathogens, all of which is said to be “child’s play” for molecular genetics today." This paper was apparently published in a German journal.
Ray Kurzweil, one of the greatest inventors and leaders of our time, as well as one of the more accurate predictors of the future in his long lifetime thus far, is probably behind KurzweilAI.net , which hosts the articles of the next few paragraphs:
The Pace and Proliferation of Biological Technologies by Rob Carlson, a researcher at the Molecular Sciences Institute in Berkeley, California, originally published in Biosecurity Journal in 2003 and reprinted at KurzweilAI.net, analyzes how advances in gene sequencing could already allow an individual to do advanced work in a lab purchased for $10,000, and shows how advancing technology is only making it easier. This refutes specific points made in arguments by those who claimed we don't have to worry ...
A much simpler article is Biowar for Dummies by Paul Boutin, who visited Roger Brent, a geneticist who runs a biotech firm, Molecular Sciences Institute, in Berkeley, who in turn showed this relative novice how it can be done ... and unleashed the author into one of his laboratories for quick training. An amazing and eye-opening article.
Ray Kurzweil also writes extensively about future artificial intelligence, as you can see on Amazon.
There is a Congressional report on the internet somewhere which covers many biotechnology and genetics risks with potential extinction consequences, but I can't find it again; I stumbled upon it many months ago. However, it is of a very different style than items like the above, which were written by or coauthored with professional researchers in the field, and it made me wonder how much was NOT written into that government report, lest the press announce it all over the world and thereby make it a self-fulfilling prophecy sooner rather than later ...
Other reports by researchers on the internet are much more specific and less self-censored ...
Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, a journal since 2003.
"Since 2001, the United States government has spent substantial resources on preparing the nation against a bioterrorist attack. ... The FY2010 federal budget for civilian biodefense totals $6.05 billion. Of that total, $4.96 billion is budgeted for programs that serve multiple goals and provide manifold benefits." ref
One of the most concise articles you can find on AI, with wonderful links, is in Wikipedia titled Technological Singularity. While it's mainly about AI, other technologies are connected and interrelated. Don't miss the Discussion tab.
Without the links, perhaps an even more concise treatment is by The Singularity Institute, in their What Is The Singularity? overview page. Click back to their Home page and follow other links as well.
The abovementioned Singularity Institute is a leading international organization with top thinkers guiding their agenda. Their Media page has a lot of interview videos and audios (though rather than watch talking heads, I prefer to collect their MP3 audios for when I'm driving or walking), and many of these have transcripts, which you can of course assimilate more quickly.
A long article analyzing key issues of "Friendly AI" (FAI) is by Eliezer Yudkowsky: Knowability of FAI.
The Lifeboat Foundation includes these two Special Reports:
George Dyson's book Darwin Among the Machines. Excerpt: "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines." As we have seen, Moravec agrees, believing we may well not survive the encounter with the superior robot species.
The Age of Spiritual Machines, a book by Ray Kurzweil
An AI leader: Hans Moravec
I went thru Amazon.com and looked at the variety of books on this subject of AI. Out of many books I considered, the two I ordered (both for my Kindle reader) were:
I haven't had time to incorporate any of their points into the website yet, and in fact I don't want to write too much on AI since my priority focus should really be on www.SpaceSettlement.com , but I do find the above two books to be good recommended reading, though I've gotten thru only about 10% of each at the time of this writing, April 2010. (Keeping up on the literature and maintaining my websites could be a full-time job, but I'd quickly let down family, employees, and dependent customers, all of whom clamor far more than humanistic volunteers, if I didn't run my ordinary business. I'm open to selling my business to a competent and responsible person!)
Dr. K. Eric Drexler started the Foresight Institute in the late 1980s "to help prepare society for anticipated advanced technologies" but with an emphasis on nanotechnology, which is why it's in the Links section on Nanotechnology instead of the multidisciplinary links.
You can find a link to the downloadable book Engines of Creation by Dr. K. Eric Drexler (just $1) on The Trajectory of Nanotechnology . Since Drexler founded the Foresight Institute, see also their PDF Unbounding the Future: the Nanotechnology Revolution
An interesting Wired journalistic piece on Drexler's ostracism by the emerging industry, and on his personal history, is titled The Incredible Shrinking Man. Parts of this article show, from another viewpoint, how politics works to discredit or simply cut out the input of those who say nanotechnology is risky, including one of the fathers of nanotechnology, so that the political operatives can get their big-money government grants.
This is actually related to the next article, from another angle:
A very good synopsis of the threat of nanotechnology and fundamental reasons why it's being overlooked by most researchers was published in Disarmament Diplomacy, located in the text above the author's Inner Space Treaty proposal.
Congressional report on Nanotechnology, with focus on where it's going economically and technologically but relatively little coverage of the risks: Nanotechnology: The Future is Coming Sooner Than You Think
Nick Szabo is a good writer on these kinds of topics: Nanotechnology, Self-Reproduction & Agile Manufacturing
The "Fermi Paradox" asks why at least one extraterrestrial civilization hasn't already sent visible robots to our solar system, such as self-replicating "von Neumann machines". My opinion is that multiple intelligences exist in the universe, and such a collaboration would put off limits any interference in an emerging intelligence by any one entity, similar to how the United Nations may object to the invasion of a neighboring country and culture, or enforce border and passport controls. A more enlightened intelligence could easily nix the von Neumann machines of a less intelligent, egotistical, or self-aggrandizing one. Many other opinions exist on this, including that there is no other intelligence in the galaxy, perhaps because technological civilizations tend to destroy themselves (though it seems quite unlikely to me that 100% of them would do so before creating a von Neumann machine), or that life is just so unlikely that our planet is unique (which is hard to imagine, given the natural selection of extremophiles observed in some of the harshest Earth environments). I also believe that radio communication is a relatively primitive and brief phase, which could explain the Arecibo SETI failure to date. However, there are many opinions here...
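The "100% of civilizations self-destruct" branch of the argument above can be put in rough numbers. The sketch below is a toy back-of-the-envelope illustration with made-up probabilities (not data): if each of N technological civilizations independently has probability p of self-destructing before launching a von Neumann probe, the chance that all N do so is p to the Nth power, which collapses rapidly as N grows.

```python
# Toy back-of-the-envelope sketch: probability that EVERY one of N
# independent technological civilizations self-destructs before
# launching a single self-replicating (von Neumann) probe.
# The per-civilization probability p is an illustrative assumption,
# not an estimate from any source.

def all_self_destruct(p: float, n: int) -> float:
    """Probability that all n independent civilizations self-destruct first."""
    return p ** n

# Even with a pessimistic 99% per-civilization failure rate,
# universal failure becomes very unlikely as n grows.
for n in (10, 100, 1000):
    print(f"n={n}: P(all fail) = {all_self_destruct(0.99, n):.6f}")
```

With p = 0.99 and a thousand civilizations, the probability that none ever launches a probe is under one in ten thousand, which is one way to make the intuition in the paragraph above concrete.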
You can always read the latest debate on the Fermi Paradox.
Nick Bostrom's personal website presents some of the most pertinent questions for you in the short opening synopsis of his outlook, and Nick stays active in this field. Given our own species' pursuit of potentially self-destructive technologies out of desire for their benefits (and money), as well as curiosity, Nick's interest in whether some event tends to make technological species extinct is relevant to the current debate here on Earth.
In any case, the fact that no other intelligence has made itself known makes it clear enough that there is no cop-out, i.e., we are on our own, for whatever reason, and the responsibility is ours.
Will you share responsibility and do something? Or make an excuse and move on to another personal focus?
If you choose to submit feedback, then I wish to thank you in advance. After you click on Submit, the page will jump to the top.
If you have comments:
Besides the quick feedback option above, if you found this page or website useful or interesting, please let me know by contacting the author, even if it's just a quick "thank you" with everything else left blank, without giving any identifying information. The quantity of human (not just 'bot) feedback helps me manage priorities. I read all comments!
I can be OK with use, not abuse, especially when the source is clearly cited, but I must be contacted first about all significant details, and my permission must be granted.