The drumbeat of the singularity advocates is getting louder, with the constant refrain that humanity is doomed at the hands of machine intelligence. Although they argue that machine intelligence is inevitable, many people do not believe it is a likely future. In his recent book about the evolution of humanity, Sapiens: A Brief History of Humankind, Yuval Harari offers a different assessment. He argues our species, Homo sapiens, will become extinct in the next century—but by human choice, not machine intelligence. Let’s take a look at his intriguing argument. See if you agree.
The technological singularity will occur when computing machines acquire intelligence and begin designing their own actions; some speculate this will happen within the next 25 years. (Ray Kurzweil explores this in depth in The Singularity Is Near.) Because there is no way to know what intelligent machines would be concerned about, no one can predict what the world would be like after this happens. Some say humanity is threatened by intolerant, post-singularity machines that will eliminate us because we are weak and unreliable—the Skynet scenario from “Terminator.” Although some in the computing field raised this concern two decades ago, it has come back into the public mind due to recent warnings from Elon Musk and Stephen Hawking, who say artificial intelligence may be the downfall of humanity. Others are not so negative. They are confident we can design machines to live in harmony with humans and have an off switch in case of emergencies—Asimov’s Three Laws of Robotics scenario. Whom shall we believe?
These scenarios assume machine intelligence is a given. We just do not know exactly when it will happen, but happen it will. But what if machine intelligence is not inevitable?
In his best-selling book Sapiens, Harari traces the evolution of the human species since the appearance of the genus Homo in Africa 2.5 million years ago. Our species, Homo sapiens, appeared in East Africa about 200,000 years ago. At the time there were six other human species inhabiting the planet, and according to the fossil record they all co-existed rather peaceably. Then about 70,000 years ago Homo sapiens developed a new cognitive function—the ability to imagine things that do not exist and believe in them. This might have happened because of a chance mutation in brain circuitry. No other animal could organize itself around beliefs in things it only imagined. This adaptation of Homo sapiens proved to be formidable. It gave Homo sapiens the ability to organize and hunt in large numbers. Everywhere Homo sapiens went, the other human species and large animals disappeared—Homo sapiens gradually brought about the extinction of many other species. The last of the other human species, Homo floresiensis, went extinct about 13,000 years ago.
Homo sapiens were drawn into an agricultural revolution about 7,000 years ago by the need to survive in ever larger numbers. They learned how to raise crops and animals to feed large communities. Another revolution, the scientific revolution, started around 500 years ago. It was then that humans began to admit there was much more they did not know than they knew. They started exploring nature, looking for new knowledge, and then harnessing it as new technologies. They acquired unprecedented power from discovering and harnessing forces of nature. The quest for knowledge took a leap forward with the industrial revolution, which began about 200 years ago.
One of the lessons from this long history is that humans have the power to imagine new possibilities, and through their shared belief in these possibilities organize themselves to make those possibilities become real. If something is possible, some human being somewhere will try to convert that possibility into something real, and will mobilize a lot of people around the new belief and move them to action.
Given that these are the kind of beings we are, how will we continue to evolve? Harari notes that around the world there is a pitched battle over intelligent design. Some claim the complexity of the biological world proves there must be a creator who worked out all the biological details in advance. Others claim evolution has occurred through natural selection without the intervention of any higher intelligence. Harari believes the evolutionists may be right about the past, but the designers may be right about the future.
The replacement of natural selection by intelligent design could happen in one of three ways: biological engineering, cyborg engineering, or the engineering of inorganic life.
Biological engineering is intentional intervention that aims to modify an organism’s function and capabilities to fulfill some human need or aspiration. Cross-breeding of plants and animals is an early example. A recent example is genetically modified organisms (GMO). The new technology of CRISPR (clustered regularly interspaced short palindromic repeats) allows the editing of DNA and opens the door for making many new kinds of organisms or engineering better human beings. It is fraught with ethical problems, but the possibilities are so attractive it will continue to grow.
Cyborgs are beings that combine organic and inorganic parts, such as bionic arms and legs. Such devices allow injured people to lead near-normal lives, and some already support thought control of prosthetics. Defense departments aspire to engineer insects containing small transmitters that could literally be the “fly on the wall” spying on others, and cyborg schools of sharks that could perform military operations under water. The most ambitious undertakings concern direct human brain-to-computer interfaces, which would allow computers to make very powerful cognitive augmentations, such as photographic memory, and would allow networks of directly connected humans to perform unprecedentedly precise coordinated actions.
Inorganic intelligent entities, or artificially intelligent machines, are a third possible path. The well-known computer virus is an early prototype of a life-like entity that can move through the network and wreak havoc. New evolutionary programming methods open the possibility that the initial programs specified by programmers could evolve in unpredictable ways to acquire capabilities no one thought possible. Today’s “deep learning” technology, exemplified by Google’s AlphaGo and its mastery of the game of Go, already hints at this possibility.
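To make the idea of evolutionary programming concrete, here is a minimal sketch of a genetic algorithm in Python. It is purely illustrative (the problem, population size, and rates are invented for this example, not drawn from any system Harari discusses): a population of random bit strings is repeatedly selected, recombined, and mutated, and solutions emerge that no one wrote by hand.

```python
import random

def evolve(length=20, pop_size=50, generations=100, mutation_rate=0.02):
    """Toy genetic algorithm: evolve bit strings toward all ones.

    Illustrative only; the fitness function, selection scheme, and
    parameters are arbitrary choices for this sketch.
    """
    random.seed(0)  # deterministic for the example
    fitness = lambda ind: sum(ind)  # fitness = number of 1 bits

    # Start from a fully random population.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break  # perfect individual found
        survivors = pop[: pop_size // 2]  # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]  # single-point crossover
            # Flip each bit with small probability (mutation).
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]
            children.append(child)
        pop = survivors + children

    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of", len(best), "bits set")
```

The programmer specifies only the selection pressure and the variation operators; the solutions themselves arise from the process. That gap between what is specified and what emerges is the point of the passage above.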
These three paths of evolution are already under way and are not independent of each other. Instead, what will emerge will likely be a blend of the three. We are at the threshold of creating a new species better than us. The organisms that emerge will be more powerful than current human beings in every way. In the end, we and our descendants will find the new powers so alluring that we will be unable to resist developing the new technologies. Even if the new beings are totally beneficent toward the existing species of humans, the existing species will eventually die out because its members will want to move into the new future.
With all this in mind, Harari defines a singularity as the time when all the concepts that currently give meaning to our lives will become irrelevant. Anything happening after that time will be meaningless to us. He concludes by saying:
The only thing we can try to do is influence the directions scientists are taking. But since we might soon be able to engineer our desires too, the real question facing us is not “What do we want to become?” but “What do we want to want?” Those who are not spooked by this question probably haven’t given it enough thought.—Sapiens: A Brief History of Humankind, p. 414.
Isn’t that interesting? Homo sapiens goes extinct because it prefers its own creations to the status quo. It will, according to Harari, happen quietly in the next hundred years or so.