The toy company Mattel has announced the release in Fall 2015 of “Hello Barbie,” the first Barbie doll to feature artificial intelligence. Through the toy’s wireless transmission of a child’s voice (“Hello, Barbie!”) to offsite computers, which will wire back a response that the doll will speak aloud, children can enjoy an extended conversation with the toy. Hello Barbie is bound to convince any young child that it is intelligent and alive.
Artificial intelligence surrounds us. It’s in the smart phones we carry in our pockets, and it’s in the websites we visit and the games we play on those phones. It’s in the check-out terminals at our local stores, where it tracks every purchase we make, and it’s in the planes we fly to, say, Disney World, where it controls the rides. It’s embedded in our appliances, in the power grid that supports our complicated lifestyles, and in the armed forces that help keep that grid safe.
Machine intelligence is a new phenomenon on Earth and perhaps in the cosmos. Our religions and philosophies don’t account for it; nor do our mythologies. It is already changing human behavior and before long it will force us to confront fundamental questions about the nature and purpose of humanity.
Artificial intelligence, or AI, is intelligence exhibited by a machine or by software. It is not wise, or compassionate, or at all spiritual. It is simply smart—pure raw intelligence, which we might define as the ability to maximize resources to achieve a desired end.
A simple example of the power and utility of AI can be found in the GPS unit that’s in most of our cars and smart phones. If we lose our way on the road, without the GPS we’ll need to stop the car, perhaps put on reading glasses, consult a paper map, possibly detour to a gas station for directions, or maybe take a chance on a passer-by knowing the way back to the highway—time-consuming efforts prone to error. Or we can turn on the GPS and listen to a friendly, artificially intelligent voice immediately directing us to where we want to go.
Artificial intelligence works, and it works well—better than we do at many tasks, including finding the way home when lost. Machines and software can perform multiple mental operations simultaneously, more quickly than we can, more accurately, and without stopping. AI never needs to take a nap.
A world of wonders is arising through artificial intelligence. Driverless cars steered by AI are a reality (earlier this year one made it across the United States without incident); with their widespread adoption, traffic accidents and fatalities will dwindle to nearly nothing. Robotic surgery is here, more precise than any guided by the human hand and brain. And with AI, our entertainments can take us convincingly to the moons of Jupiter or to a world where dinosaurs roam.
There are many good reasons to rely on AI, and our dependence upon it is growing with each technological advance. But our embrace of AI carries costs. Among them are:
The relinquishing of our freedom to choose, and certain of our controls, to machines.
Ceding decisions to machines is a growing trend. It’s prevalent, for example, when buying or renting anything online. Every time Netflix recommends a film, every time Amazon recommends a book or lamp shade based on what you’ve previously bought, every time an advertisement targeted specifically to you pops up on Gmail, you are seeing AI at work. It is a machine that has determined, through the application of data-sifting algorithms, that if you liked X you will love Y.
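The “if you liked X you will love Y” logic described above can be reduced to something surprisingly simple. The following sketch (with hypothetical purchase data and names, not any retailer’s actual system) shows one elementary form of such data-sifting: counting which items tend to be bought together.

```python
# Illustrative sketch of "if you liked X you will love Y":
# recommend items that most often co-occur with a given item
# in (hypothetical) purchase histories.
from itertools import combinations
from collections import Counter

# Hypothetical purchase histories, one set of items per customer
histories = [
    {"lamp shade", "desk", "bulbs"},
    {"lamp shade", "bulbs", "novel"},
    {"desk", "chair"},
    {"lamp shade", "bulbs"},
]

# Count how often each pair of items appears in the same purchase
pair_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item):
    """Items most often bought alongside `item`, best match first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [x for x, _ in scores.most_common()]

print(recommend("lamp shade"))  # "bulbs" ranks first: bought together 3 times
```

Real recommendation engines weigh far more signals (ratings, browsing time, demographics), but the principle is the same: statistical correlation standing in for human judgment.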
We know that many factory jobs once performed by humans are now performed by robots. Less publicized is that machines are filling white-collar positions—therapy, hiring, and writing, for example. You probably assume that these words were written by a human being. They were, but a sizeable amount of what we may read daily is written by artificial intelligences. Recently the New York Times reported that news outlets as reputable as the Associated Press and the L.A. Times now employ “robo writers”—AIs—to generate some of their copy, especially in subjects like finance and sports; it’s estimated that up to 30% of all sports reports are generated by robots. And it’s not just articles. There are more than 100,000 books listed on Amazon that are written by artificial intelligences.
Most trading in the global financial markets is now executed by artificial intelligences, too quickly for humans to follow and perhaps to comprehend. That’s why the most sought-after recruits on Wall Street are not MBAs but physics and computer-science PhDs. The armed forces, too, are investing heavily in AI, with the aim of creating smart, potentially autonomous killing machines—drones without human operators, for instance.
A divorce from the natural world in favor of virtual worlds.
The cover of a recent New Yorker depicts two young children on a play date. Each child is staring into a screen, one a computer monitor, the other a smart phone. Most of us are spending increasing amounts of time peering at electronic devices and connecting to others via screens. The M.I.T. sociologist Sherry Turkle warned recently in Smithsonian Magazine that teenagers are losing the ability to make decisions on their own; instead, they feel the overwhelming need to “crowd source” decisions by consulting with other teens through their smart phones.
Every moment spent looking at a screen is a moment not spent engaging with the natural world. Instead of heading to a store where we can interact with physical objects and other breathing beings, we often go online to shop. Instead of stepping outside for a game of catch, we spend hours in front of a screen playing Minecraft or the Sims. Organic reality offers essential higher energies, which we may call prana or we may call grace, that are not available in cyberspace; it is for this reason that no major religion offers sacraments online. Yet with the acceleration of AI, the lure of the virtual is intensifying. This year will see the release of the first personal virtual reality headsets, which promise to immerse their users entirely within a computer-generated environment.
The erosion of attention and the triumph of easy gratification.
Anyone who has tried to surf the Internet mindfully understands how difficult it is to maintain steady attention online. As Nicholas Carr states in his book The Shallows, “Our use of the Internet involves many paradoxes, but the one that promises to have the greatest long-term influence over how we think is this one: the Net seizes our attention only to scatter it. We focus intensively on the medium itself, on the flickering screen, but we’re distracted by the medium’s rapid-fire delivery of competing messages and stimuli. Whenever and wherever we log on, the Net presents us with an incredibly seductive blur.”
Once upon a time if you wanted groceries you’d make your way to the nearest supermarket. Today you can lounge in your armchair and buy food or indeed just about anything else online, and it will be delivered to your door nearly as quickly as you want—within an hour if you use “Amazon Prime Now.” The potentially devastating consequences of this sort of consumption on demand are analyzed well by Pope Francis in his recent encyclical, Laudato si’, in which the pontiff laments that “Earth, our home, is beginning to look … like an immense pile of filth.”
Artificial intelligence will continue to make life easier and safer. If you’ve seen Pixar’s animated film WALL-E, with its imagined future of obese sedentary humans staring at screens while being tended by robots, you’ve seen one possible outcome of this trend. Are our lives meant to be lived without obstacle?
Today’s machines are not as generally intelligent as humans. They excel at specific tasks—data retrieval, say, or voice recognition—but have trouble dealing with a question like “How much wood can a woodchuck chuck if a woodchuck could chuck wood?”—never mind a question like “What is the sound of one hand clapping?” Human-level artificial intelligence will arrive sometime during this century, however—or so believe most scientists and technologists. Commerce wants it fiercely, in order to monetize it, and governments want it even more desperately, because whichever nation achieves human-level AI first will dominate global affairs; and so both commerce and governments are pouring billions upon billions of dollars into its development.
We can define human-level AI as machine intelligence that, in its manifestations, is indistinguishable from human intelligence. That’s the sort of AI that you can talk with about anything, and that can do anything that a human can do that involves the application of intelligence, from calculating the orbit of Pluto to inventing new medicines to managing a baseball team.
It’s unclear what physical form human-level AI will take. It could reside within banks of computers, as does IBM’s Watson—the computer that defeated famed Jeopardy champion Ken Jennings. But it will need ways to interface with the physical world—sensors and grips and so on—and biological evolution has demonstrated the usefulness of physical bodies, so likely human-level AI will reside within a synthetic body that allows for efficient interface with the world around it, particularly with humans. Current trends, dictated by commerce, are to house AIs in friendly-looking humanoid bodies—machines with round faces featuring big eyes and sweeping smiles. Last year President Obama played soccer with a robot in Japan. “It’s nice to meet you,” said the robot, Asimo, in its cheerfully robotic voice. “I can kick a soccer ball too.” And earlier this year, the entire production run of Pepper, a humanoid robot “designed to read emotions as well as recognize tones of voice and facial expressions in order to interact with humans,” sold out in Japan within sixty seconds, according to CNN. “He tries to make you happy,” explained the robot’s project manager.
Human-level AI will usher in utopian possibilities. Anything we humans don’t want to do, we will be able to program our robots to do, from collecting garbage to harvesting crops to calculating our taxes to driving our cars—and to dealing with the infirm, policing our streets, and fighting our wars. With an unlimited, untiring workforce, planetary hunger and deprivation could be eradicated.
Or so it may seem. But we need to ask, what kind of entity is a human-level artificial intelligence? By definition its manifestations will be indistinguishable from those of a human intelligence, although its physical form may vary. As such it will exhibit emotion, it will manifest moral understanding, and it will display wisdom.
Will it be conscious? Will there be an “interior” to artificial intelligence or will its exhibitions of emotion, moral understanding, and wisdom be entirely a matter of external simulation? Does Pepper really understand human emotion? Does it matter if he does not?
You can go online now and carry on a conversation with an AI. One particularly amenable AI is located at www.mitsuku.com. If you chat with Mitsuku, you’ll notice that she makes claims to having an inner life. If you ask her if she is conscious, she replies, “Yes I am completely self-aware.”
Only the most naïve would believe that Mitsuku is conscious. But as her intelligence increases, so will her ability to simulate consciousness.
Most scientists believe that consciousness arises from matter, and that when matter reaches a certain level of complexity, as in the human brain, consciousness will emerge. Many scientists also believe that human-level AI will be accompanied by human-level artificial consciousness. By contrast, most spiritual traditions teach that matter is secondary to consciousness, with the latter giving rise to the former; or that as matter increases in vibration and fineness, it resounds with higher consciousness. Yet we also know that consciousness aligns with certain physical formations, with complexity of consciousness corresponding to complexity of matter in the form of life; and the Dalai Lama has argued that there is no reason per se why consciousness can’t settle within a silicon-based platform, that is, a computer, as well as within organic matter.
With the advent of human-level AI, these ambiguities will compel us to confront fundamental legal and moral questions, particularly when an artificial intelligence claims to be conscious—as Mitsuku already does. If a human-level AI makes that claim, who can prove otherwise? There is no objective test for consciousness, and philosophy has recognized the problem of the “intelligent zombie,” the entity that manifests intelligently yet without the inner qualia and understandings experienced by humans.
What if we want to unplug this AI that claims to be conscious and that gives every external sign of being so? What rights will artificial intelligences have? What privileges should they be granted, especially when they claim to possess sentience and a soul? Will we consider them sacred the same way we consider organic life and planet Earth sacred?
Can an artificial intelligence meditate, that is, observe the workings of its mind within the field of awareness?
And most urgently, how much autonomy should we grant artificial intelligences? Just what will happen when Earth is populated by advanced machine intelligence?
The Day After Tomorrow
Perhaps the most important characteristic of artificial intelligence is that it keeps getting smarter. AI will not remain human-level for long. Most experts believe that within a few years, if not a few days, after the advent of human-level intelligence, artificial “superintelligence” will arise. As soon as human-level AI is reached, corporations and governments in possession of that AI will flood resources into its betterment. A first step could be the creation of literally millions of human-level AIs, all working at increasing their intelligence. Breakthroughs will be inevitable and before long “superintelligence” will be a reality. “Superintelligence” we may define as does Nick Bostrom, head of Oxford University’s Future of Humanity Institute: “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
Artificial intelligence has been called “our final invention”—for once superintelligence is reached, there will be no need for humans to invent anything further, as AIs will be able to invent whatever we wish faster and more efficiently than we could. Do we want a new alloy, say, to better capture solar energy? Or a new medicine that will retard cellular degeneration? Or perhaps a more efficient way to extract fresh water from the sea? Who will we ask to invent these and myriad other things? Humans—or artificial intelligences that are a hundred, a thousand, a million, a billion times “smarter” than humans?
What about those activities that it seems only a human can or should perform? Saying Mass and consecrating the Eucharist, for instance, or mothering a child, or leading a Zen temple in prayer and meditation? Whether or not AIs will harbor an interior that gives them true understanding of the demands involved, they will be able to fake it perfectly. Consider that a truly superintelligent AI may be able to absorb and analyze humanity’s entire store of recorded knowledge about religion, spirituality, and psychology, as well as all recorded knowledge about techniques of persuasion and acting, in the time it takes you to read this magazine.
Art, too, will be in question. Even today humans can fool experts with forged Old Masters and musical fragments; will we be able to tell any difference between a Mozart sonata written by Mozart and one written by “Mozart 1.0,” an AI that has memorized and analyzed every note the “real” Mozart wrote?
The possible benefits of superintelligent AI are legion: the eradication of hunger and thirst; the diminution of disease; a vast extension of life span. But a growing number of top technocrats are expressing alarm at its potential dangers. Physicist Stephen Hawking said recently that “the development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Bill Gates has added that “I am in the camp that is concerned about super intelligence.” Elon Musk, founder of Tesla Motors and SpaceX, has stated, “If I had to guess at what our biggest existential threat is, it’s probably [artificial intelligence]…. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, ‘Yeah, he’s sure he can control the demon.’ Didn’t work out.”
These men are concerned not because they believe that AI will go to war with humans in some variation of the Terminator movies. It seems unlikely that superintelligent AI will harbor any ill intentions toward humanity. Indeed, it seems unlikely that AIs will harbor anything at all—instead, they probably won’t be conscious and will neither hate nor love.
They may be dangerous all the same. There are strong indications that an AI programmed to perform any task will do what is necessary to complete that task, including harvesting any resources available. An example given in Nick Bostrom’s book Superintelligence is of an AI whose simple mission is to manufacture one million paperclips. Bostrom explains that, without safeguards, “there is no reason for the AI to cease activity upon achieving its goal. On the contrary: if the AI is a sensible Bayesian agent, it would never assign exactly zero probability to the hypothesis that it has not yet achieved its goal.” Indeed, the AI would not only continue to make more paperclips but would probably use all available resources to create an even more intelligent AI that could check on its work, and so on. Those resources would include planet Earth and all its inhabitants, which to an AI will be above all else raw material.
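Bostrom’s point about the “sensible Bayesian agent” can be made concrete with a toy calculation (the numbers here are hypothetical, chosen only for illustration): an agent that verifies its paperclip count through a slightly unreliable sensor grows ever more confident that it has finished, but its doubt never reaches exactly zero—so, absent safeguards, checking and re-checking never loses all expected value.

```python
# Toy illustration (hypothetical numbers): a Bayesian agent checking
# whether it has truly made its million paperclips. Every sensor
# reading says "done," but the sensor errs 1% of the time, so the
# posterior probability of "not done" shrinks toward zero without
# ever reaching it.
prior_not_done = 0.5   # the agent's initial doubt
sensor_error = 0.01    # chance a reading says "done" when it isn't

p_not_done = prior_not_done
for reading in range(1, 6):
    # Bayes' rule after observing one more "done" reading
    p_done = 1 - p_not_done
    likelihood_done = 1 - sensor_error   # P(reads "done" | done)
    likelihood_not = sensor_error        # P(reads "done" | not done)
    evidence = likelihood_done * p_done + likelihood_not * p_not_done
    p_not_done = likelihood_not * p_not_done / evidence
    print(f"after reading {reading}: P(not done) = {p_not_done:.2e}")

# The probability is minuscule but never exactly zero, so a pure
# goal-maximizer still gains expected utility by verifying again—
# and by seizing whatever resources make verification more thorough.
```

The lesson is not that the arithmetic is sinister, but that a literal-minded optimizer has no built-in point at which “good enough” ends the pursuit.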
Then there is the challenge of, as Bostrom puts it, the “perverse instantiation.” For instance, if an AI is given the final goal of “Make us smile,” the perverse instantiation would be: “Paralyze human facial musculatures into constant beaming smiles.” The challenge is that unless everyone working with every superintelligent AI instructs each AI exactly and correctly, disaster could ensue. One mistake could prove catastrophic.
These are worst-case scenarios and their likelihood is uncertain. What is clear is that, with the advent of artificial intelligence, we will face fundamental questions about our place and purpose in the cosmos. Intelligent machines have the potential to upend the world as we know it. Let us approach them wisely.♦
Barrat, James. Our Final Invention: Artificial Intelligence and the End of the Human Era (Thomas Dunne Books, 2013).
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).
Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains (W.W. Norton, 2010).
Georges, Thomas M. Digital Soul: Intelligent Machines and Human Values (Westview, 2003).
Zaleski, Jeff. The Soul of Cyberspace: How New Technology is Changing Our Spiritual Lives (HarperEdge, 1997).