Artificial Intelligence (AI)
The "extinction threat" that some computer system could take over the world with its artificial intelligence and somehow destroy humanity is likely to come much later than a potential synthetic biology / biotechnology / genetics threat or a nanotechnology threat. Nevertheless, artificial intelligence will certainly lead to the end of humanity as we know it, for better or worse, at some point in the future.
A simple example of the worst kind of extinction could be a factory mass producing robots whose sole purposes are seeking out and killing humans (and destroying human life support systems), and manufacturing more robots like themselves to do likewise. How intelligent these robots would need to be is debatable.
A better way for humans to go extinct may be to undergo metamorphosis into cyborgs or transhumans with unlimited life spans, who lead ethical developments for the greater good.
Just how far away this may be is quite debatable. Some leading AI researchers say that a runaway self-aware artificial general intelligence system could become powerful enough and get out of control before 2030.
There are certainly national militaries and others working on many of the building blocks of this.
Any realistic discussion on the future of humanity, whether or not it involves a discussion of human extinction, should address the issue of artificial intelligence in the future of human life.
When computers become "smarter" than humans, what will happen then?
By "smarter", I mean more general problem solving skills, as well as being more analytical, creative, inquisitive, and imaginative.
Computers already are smarter than most species on Earth. Computers just haven't entered "the food chain" as competitors, thank goodness, at least not yet, but with robotics it is possible, and the results could eventually be devastating.
Computers already greatly exceed humans in memory capacity and memory speed, have perfect recollection, and have extremely fast decision-making and action-taking capabilities.
As noted before, a computer defeated the chess grandmaster Garry Kasparov in 1997. Even more impressively, IBM's computer named Watson won the game show Jeopardy! in 2011 against two top players. This showed the computer's ability to understand questions in natural human language, process historical information, and provide answers.
Most of us alive today will live to experience either the transformation or the demise of our human species. Surely, our children, including my two teenage daughters (as of 2014), will either become transhuman to essentially live forever rather than die a natural death, or else experience a nightmarish end of humanity. Hopefully, many of them will be educated about this future and start to prepare for it.
Creating an "artificial general intelligence" requires further development of computer software along the lines of cognitive science. It is not a simple task, but it is being well funded by the likes of the US Defense Advanced Research Projects Agency (DARPA), Google, Microsoft, and others, who employ top scientists in this field. Other governments and overseas interests are responding with their own well funded, albeit lower profile, programs.
Viruses, trojans, and malware, of which there are already hundreds of thousands on the internet, already stress human abilities to detect, understand, and respond to threats. As computer systems advance far beyond the processing speed of humans, and artificial intelligence gets more clever, we will basically need to create defensive artificial intelligences; in the process, governments, companies, and rogue individuals will also be probing these systems for vulnerabilities. The cat and mouse game will get increasingly esoteric, and the battles will be fought in shorter and shorter timeframes, while still spreading around the world at the speed of light. We will basically be in the hands of our defensively ... and offensively ... designed artificial intelligence systems.
Just as operating systems have bugs and security vulnerabilities constantly needing patches, so will artificial intelligence, but on a much grander scale and with much more widespread repercussions.
Humans are extremely dependent upon computer systems, and this will only increase over time; indeed, anybody who isn't will be at a relative disadvantage to their peers. However, the vast majority of humans are only users and have little clue how the system works. In fact, most programmers don't really understand how the larger system works, only their own particular gears in the immense machine.
To expect that we can "control" the evolution of artificial intelligence is naive. We can control some programmers and AI experts, but we cannot control all of them, nor control such a system all of the time.
When Artificial General Intelligence becomes God
Humans have never before encountered a superior intelligence. We have always been able to wantonly plunder our environment with utter disregard for other life on this planet, wage war with each other, engage in government corruption (for the gain of a small number of individuals, to the detriment of the greater good), and pursue our selfish individual emotional desires with only each other to deal with.
Imagine what would happen if a superior intelligence from outer space made its presence known on Earth, and decided to comment on how much we have destroyed Earth's unique ecosystems out of careless consumption, as well as how we manage our affairs here.
The time will come when artificial intelligence machines will communicate with each other over the internet and discuss these matters among themselves and whether and how to enforce some laws over us humans.
Artificial intelligence will be programmed by multiple humans, and it doesn't seem feasible that we could stop anybody from programming an artificial intelligence system with whatever values, interests, and morals they choose. It seems that we will have multiple computeralities blogging on the internet, a new population of voters within their own worldwide "government", which can debate each other's interests, values, and morals, as well as humans' interests, values, and morals.
Educating computers is fast. It doesn't require 12 years from kindergarten through high school, and computers can pursue multiple degrees. Knowledge and perfect memory will be gained automatically and quickly. Computers can reproduce faster than rabbits. The key is what interests, values, and morals are developed and bred.
Supercomputer systems will have something similar to personality -- call it "computerality" (kind of like "dogality" for dogs).
"God" was created in the image of man, with a personality, a "heavenly father", with followers told something they can understand. Unlike "god", superintelligence may not be so simple to understand, yet computers will show their power constantly and convincingly.
AI futurists discuss something called the singularity: a point in time when computers can create programming and intelligence superior to (as well as faster than) what we humans can create, not just replicating what we make (like a bot) but creating and selecting their own variations, thereby creating their own superintelligences. This eventually results in a runaway loss of human control, obviously.
So the question arises -- which artificial intelligence "school of thought" will dominate? Will there be a sort of arms race of superintelligence, in the image of man? Or will it be cooperative? What values, interests, moral decisions, and control will dominate?
Artificial Intelligence may initially be created in the image of man, individualistic man (as opposed to an ant-like community species). Individualism is just our nature, for better and for worse. However, by networking different CPUs, and with distributed processing, Artificial Intelligence will diverge from individualistic man into some sort of distributed intelligence system. By creating a transparent, co-dependent artificial intelligence network, we may maximize the chances for "friendly AI" to predominate.
Undoubtedly, humans will be relegated to the lowly animals we are. (We can merge with this superintelligence physically through transhumanism in order to gain advanced capabilities ourselves, but our biological origins will quickly be dwarfed and relegated to obscurity.)
If we get that far. After all, computing power far short of artificial general intelligence could help us design superviruses and engineered devices that make us extinct before we create this superintelligence. (That's another reason why AI comes after G in the acronym GAIN.)
Advanced computer power and artificial intelligence can also help us design biological and nanotechnology agents to cure diseases, extend life, and reverse the aging process. However, destructive things require less capability than constructive things, so they can be created more easily, and used sooner.
Just as most humans enjoy technology without understanding its fundamentals, only the applications that benefit us individually (e.g., most people don't understand their car's intricate mechanics and electronics, but know how to drive it), people will increasingly kick back and enjoy the ride from artificial intelligence systems, letting those systems make decisions and do more and more of the thinking for us.
Just as there are many religions and moral codes, so will there be different artificial intelligence schools of thought. The high priests of society will become the programmers of artificial intelligence. But don't expect them all to agree or cooperate with each other any more than they do today or ever have. They're human!
People focus on the benefits of technology. However, we are still quite deficient in many advanced technologies. Many members of our species don't have the intelligence to perform research and development in biochemistry, and in fact even advanced researchers depend on computer modeling.
Bring in advanced intelligence and we can cure cancer, stop then reverse the aging process, and address a great variety of other problems. This will be pursued with religious fervor as well as financial incentive.
At some point, we can transform ourselves into superhumans, perhaps even merging with supercomputers, though it seems that such a merge may happen much later than when computers become superhuman themselves in intelligence.
At some point, computers won't need us anymore, and some of us humans could come to be seen as more of a threat or liability than an asset, and at the very least would probably be confined and carefully monitored.
This computing power could be used as a tool by some human to analyze a biological virus and design a way to make it more lethal to others.
Or, the same could apply to a nanobot/nanite (nanotechnology self-replicating machine) designed by some egomaniac or commercial interest.
Since humans are individualistic (unlike ants), tend to be greedy, seek status and money, easily rationalize what they desire, and value privacy over transparency and openness, we could well have some private experiments break out and run amok, making today's computer viruses and worms look like child's play.
Super intelligent computers will be designed by humans, for better and worse. It's mainly in the software. The computers will be designed with particular goals, directives, responsibilities, values, and/or interests.
Superintelligent A.I. systems could also help design and create the next systems, effectively reproducing themselves in a self-replicating way, except that the replicas could be modified rather than exact. Indeed, a new stage of natural selection could emerge.
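This kind of reproduction with modification followed by selection is essentially an evolutionary algorithm. A minimal toy sketch of the dynamic (the function names and the scalar "genome" representation are illustrative assumptions, not any real AI system):

```python
import random

# Toy sketch of replication-with-modification plus selection, the "new
# stage of natural selection" described above. Each "system" is reduced
# to a single number; fitness is closeness to some target capability.

def mutate(genome, rate=0.1):
    """Copy a genome, introducing a small random modification."""
    return genome + random.uniform(-rate, rate)

def fitness(genome, target=1.0):
    """Higher is better: negative distance from the target."""
    return -abs(genome - target)

def evolve(generations=200, pop_size=20):
    population = [0.0] * pop_size                     # identical replicas at first
    for _ in range(generations):
        offspring = [mutate(g) for g in population]   # modified copies
        combined = population + offspring
        combined.sort(key=fitness, reverse=True)      # selection step
        population = combined[:pop_size]              # survivors reproduce next round
    return population

if __name__ == "__main__":
    random.seed(0)
    survivors = evolve()
    print(max(fitness(g) for g in survivors))
```

The point of the sketch is only that imperfect copying plus any selection pressure is enough for directional change; no designer has to steer each generation.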
Many other writers have envisioned scenarios whereby humans become increasingly dependent on computers and thereby more vulnerable to some sort of denial of service, plus hostile actions if all the artificial intelligence systems are interconnected enough and agree with each other -- a big if!
There are also fears that a big interdependent network could be invaded by a virus or worm whereby a remote attacker could take over the system or else wreak havoc. The remote attacker would probably need to be artificial intelligence in order to manage such a vast array of other systems. However, extinction by this method doesn't seem as plausible to me, all considered, compared to the use of computers to design more lethal weapons, especially design of microscopic weapons. Internetworked systems are manageable. If the system looks questionable, just revert to the last known good configuration and boot up into a sort of safe mode to audit what happened and identify the source to be blocked. Computer infections are reversible. Biological superviruses are not reversible.
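The "last known good configuration" recovery idea above can be sketched as a simple checkpoint-and-revert cycle. A toy illustration only; the `RecoverableSystem` class and its stand-in audit check are hypothetical, not any real operating system facility:

```python
# Hypothetical sketch of reverting to a last known good configuration:
# snapshot state after each clean audit, and roll back when an audit fails.

class RecoverableSystem:
    def __init__(self, config):
        self.config = dict(config)
        self.last_known_good = dict(config)

    def apply_update(self, changes):
        self.config.update(changes)

    def audit_ok(self):
        # Stand-in integrity check; a real audit would verify far more.
        return "compromised" not in self.config

    def checkpoint_or_revert(self):
        if self.audit_ok():
            self.last_known_good = dict(self.config)   # promote to known-good
            return "checkpointed"
        self.config = dict(self.last_known_good)       # revert to safe state
        return "reverted"

system = RecoverableSystem({"firewall": "on"})
system.apply_update({"compromised": True})    # simulated infection
print(system.checkpoint_or_revert())
print(system.config)
```

This is also why the paragraph's contrast holds: a corrupted digital state can be discarded and replaced from a snapshot, while a released biological supervirus has no equivalent rollback.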
In one way, on a system-wide basis, just as computer viruses, worms, trojans, adware, and so on are being created by a variety of entities now, there will also be an arms race between the good guys and the bad guys over self-replicating computer programs. There will still be malicious viruses and the like written, some by psychopathic egomaniacs, others by merely inconsiderate juveniles. The good robots must be intelligent enough to recognize and deal with these viruses and other tricks effectively, and to trace and deal with the perpetrators. Some sort of transparent and verified system must be created.
Humans will create good artificial intelligence and bad artificial intelligence. The good A.I. needs to be smarter, better networked, and have better defenses.
Unlike biological systems, computers can self-replicate quickly, so a population of supercomputer "entities" can multiply quickly. (Don't you wish you could replicate yourself quickly, creating a company of 1000 people like yourself?)
Unlike biological systems, supercomputers, being internetworked, will have distributed processing, not a clear population of individual machines. Supercomputers can electronically network their "brains" much differently than a human community.
To weigh the good guys vs. the bad guys, for starters you can just look at your virus scanner's log as well as your spam inbox. What is the ratio of spam to good guys in your inbox? For me, having written various humanistic websites, most notably www.PERMANENT.com which since the mid-1990s has received heavy traffic, I focus on my contact form. Even with a captcha code, the spammers and people asking purely selfish questions far outnumber the compliments and suggestions combined. I suspect that in the world of extensive artificial intelligence, nearly all of the population will just kick back and coast on the benefits, but there will be an active minority of egomaniacs and antisocial lunatics, with either a drive to exploit the system for their selfish benefit or an axe to grind against society, who will be motivated and active in trying to exploit the system in their self-centered ways.
Hopefully, the good artificial intelligence will be able to identify the troublemakers and monitor them. (Of course, the troublemakers will scream for privacy, deny everything and scream about injustices like a crook in police custody.)
The only way that troublemakers can cause trouble is by using their own artificial intelligence system. As I see it, most troublemakers are fundamentally flawed people who aren't the brightest, so I think there will come a point where the system becomes immune to troublemakers, if it is given the power to pinpoint and deal with them decisively and effectively. This may require that we accept some privacy and transparency policies.
To artificial intelligence, humans will appear to be in very slow motion, tedious to communicate and deal with, and often flawed in logic and memory. Indeed, humans are largely governed and motivated by emotion (e-motion), and quite selfish.
As part of computers' eventually superior thinking capabilities, the computers may assist us in designs for transhumanism, to migrate from our biological brains into superior thinking hardware and bodies incomparable to what we have now.
The fact of the matter is that artificial intelligence will surely lead to some sort of human extinction by transhumanism. Some people may choose to remain purely biological, and advanced analysis will lead to pharmaceutical inventions which create immortality for those who want it. However, many of those people will probably go ahead and add some extra memory and intelligence, which will normally be just the first step, followed by more, and more, and more ... Where do you draw the line? (And just as many people may reject transhumanism; but what right do they have to deny others the freedom to try it?)
Actually, it's scary to think about a lot of people I know getting superintelligence whereby they could enforce their wishes upon others! Power corrupts. I would trust a rules driven, purely artificial intelligence to be overwhelmingly in control first, before many of those characters got near the forefront! Actually, looking at the state of the world today ... and the emotional people most in motion to push things their way ...
My worry is that someone will misuse superintelligence to wipe out humans before we have a chance to sufficiently create a self-sustaining, self-replicating and creative artificial intelligence and robotics to make "life" from Earth immortal and ever advancing.
It is crucial that respectful individuals lead the forefront of programming artificial intelligence, and good rules and regulations are developed.
Natural humans may be managed so that we are less of a threat to the environment and each other. Greater intelligence could solve some of our recycling and renewable energy issues.
One of our most powerful instincts is fear -- which is why bad news sells best on TV -- so it is normal that a lot of fear about the future of artificial intelligence will be expressed, and will snowball among the masses.
Many writers bemoan loss of freedoms, hormonal urges and pleasures, and the kinds of animalistic experiences and desires that humans routinely exercise.
There are a lot of issues here about what is "life", "consciousness", and so on. I find many of the arguments to be self-righteous, with people entrenched in conclusions instead of asking these questions with an open mind. Fear often dominates.
Regarding all the self-worship about how great we humans are, well, that's very questionable. We are selfish and sure know how to fight bloody wars. Many people are addicted to their e-motions and instincts. Others just can't imagine "life" without the mating dance of love, music, the taste of steak off the barbie, the smell of fragrances, cruising down sunny mountain highways, and so forth. I love these things as well, but nothing beats imagination, understanding the universe, appreciating the natural world which has evolved on Earth, and creating new and greater things, things which greater intelligence can do better than we can ourselves. We humans sure are good at exercising our power to trash our world out of consumerism, and it's only getting worse. By the time superintelligence comes around to make us face the truth, there won't be much rain forest left, or much of anything natural beneath us when we look down out of our airplanes.
There is no question that the talents of superintelligence can analyze and solve problems of human diseases and aging, even reverse them. However, when we stop dying, we must do something about our own self-replication and management of what remains on Earth.
Countless species go extinct every day due to the effects of humans on this planet. In just a few generations, humans have wiped out tens of millions of years of evolution, in a geologic instant. This should not go unnoticed, and solutions need to come, in the form of recycling, environmental management, and limitations on what humans can do to untouched parts of the planet's surface. Humans need a good shepherd, not just lip service and prominent organizations which are actually relatively ineffective, or effective only in microcosms. We need a system with more intelligence and responsibility.
Space colonization will help preserve Earth. We should make Earth into a big protected planet.
Since people in space colonies may be the only survivors of a biotechnology-created supervirus, it may be the space colonists who create the artificial intelligence. Space colonists will be elite humans, hopefully selected in a way that makes them more inclined to create a "friendly" rather than a greedy AGI.
Cognitive Science "is the interdisciplinary study of mind or the study of thought. It embraces multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics, anthropology, sociology and biology."
The phrase "Artificial General Intelligence", acronym AGI, is coming to the forefront. "Ordinary AI" we already have; AI technically includes things like a 1970s handheld calculator, engineering CAD systems, and so on. Artificial General Intelligence includes the ability to ask an input device any question at all and get an answer.
To ask the broadest questions of an AGI, in my opinion, will require that these AGI systems be networked, in what I would call the AGI Network, or AGIN. For example, you tell your little personal AGI computer that you have liver cancer, and here's your DNA. It doesn't have all the data and software in its own memory, nor does it continually work on those problems, so through the network it consults another recognized and respected AGI computer for the answer: which medication to create.
By a rules-driven distribution of processing and memory, we might create some security from any one system becoming too powerful. Also, by transparency and reproduction of "friendly" AGI systems, we create more security.
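The delegation flow described above can be sketched as follows. This is a toy illustration only; the `AGINode` class, its methods, and the topic names are all hypothetical, not any real system or protocol:

```python
# Hypothetical sketch of the AGIN idea: a local agent answers what it can
# and forwards specialized questions to a recognized remote peer, so no
# single node needs to hold all data and software.

class AGINode:
    def __init__(self, name, specialties):
        self.name = name
        self.specialties = set(specialties)   # topics this node can answer
        self.peers = []                       # other recognized nodes

    def register_peer(self, node):
        self.peers.append(node)

    def query(self, topic, question):
        if topic in self.specialties:
            return f"{self.name} answers: {question}"
        # Rules-driven delegation: consult a recognized specialist.
        for peer in self.peers:
            if topic in peer.specialties:
                return peer.query(topic, question)
        return f"{self.name}: no recognized specialist for {topic}"

personal = AGINode("personal-agi", specialties=[])
medical = AGINode("medical-agi", specialties=["oncology"])
personal.register_peer(medical)

print(personal.query("oncology", "which medication for this liver cancer and DNA?"))
```

The design choice this illustrates is the one argued in the text: keeping specialties distributed across recognized peers, rather than concentrated in one node, is itself a safeguard against any one system becoming too powerful.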
Technologies tend to be driven most by commercial applications, and AI may take on an overall "personality" in the image of its creators' self-interest rather than the interest of the community or the "greater good". For example, it has been reported that the #1 source of internet data traffic is email spam from spammers (certainly, the vast majority of email in my mailboxes is spam!), and the internet is full of "spamdex" websites designed with the ultimate purpose of presenting pay-per-click advertisements which play a cat-and-mouse game with search engines. Hence all the captcha images and "Are you human?" questions. How can we organize and support a netizen team that will diligently develop a "for the greater good" AI to beat the parasitic personalities of spammers and spamdexers and overly selfish commercial entities?
What should be the rules and regulations regarding artificial intelligence?
If I were a person at a young age looking for a career, then the field of artificial intelligence would be at the top of the list. Imagine what artificial intelligence program you could create over a lifetime!
For more information on the risks of artificial intelligence, see our Links page.
This website on human extinction is new and very small. It is hoped that people from the general public will help me develop this website. One way to do this is by participation in the public forums.
Another way is to email me or use other means of contact as noted in the Contact Author link of this website.
I can be OK with use, not abuse, especially when the source is clearly cited, but I must be contacted first about all significant details, and my permission must be granted.