In the 63 years since John McCarthy coined the term “Artificial Intelligence” at a conference at Dartmouth College, the concept has become pervasive in the worlds of computer science, software, and—most importantly to this blog—science fiction. Each of these fields has a very different interpretation of the term than the others, which leaves several important questions.
What is artificial intelligence? What did it mean to scientists in the 1950s? What does it mean today? What does it mean for the future of computing?
When, and how, will true artificial intelligence arise?
For a computer scientist, artificial intelligence is not a concept; it’s a field of study. It’s more accurate to say that it was a field of study, having been all but eliminated during the first AI winter and more recently subsumed by the field of machine learning. Far from a goal to be achieved, AI is a fait accompli. Artificial intelligence exists; now we need to make it better. Machine learning researchers are focusing on things like neural networks and natural language processing. Their general aim is to create hardware and software capable of processing information in a manner more closely resembling that of a biological brain.
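To make that brain analogy a little more concrete, here is a toy sketch of the “neural network” idea: simple units that each compute a weighted sum of their inputs and fire when it crosses a threshold, wired together in layers. The weights and thresholds below are illustrative values set by hand to compute XOR (a function no single such unit can compute alone); in real machine learning systems, weights like these are learned from data rather than hard-coded.

```python
def step(z):
    """A crude artificial neuron's activation: fire (1) if the
    weighted input crosses the threshold, otherwise stay quiet (0)."""
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    """A two-layer network of threshold units computing XOR."""
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1 behaves like OR
    h2 = step(1.5 - x1 - x2)    # hidden unit 2 behaves like NAND
    return step(h1 + h2 - 1.5)  # output unit behaves like AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
# prints:
# 0 0 -> 0
# 0 1 -> 1
# 1 0 -> 1
# 1 1 -> 0
```

The point of the example isn’t the arithmetic; it’s that intelligence-like behavior can emerge from layers of very simple components, which is exactly the intuition the field borrows from biological brains.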
The result of this flourishing field of computing is its application in consumer software. A plethora of tech startups tout the term AI as part of their service: AI recruiting, AI website building, AI customer support. They use the concepts and technologies forged in the fires of computer science to offer new ways of performing tasks that have previously required a human touch, like reviewing resumes and designing tasteful visuals.
The term “AI” has exploded across 21st-century public awareness, popping up in advertising and social media alongside “algorithm” and other hip tech words. Even the term “Android” is more quickly associated with a type of smartphone than with a human-looking robot. Despite the efforts of Japanese robotics labs, the days of looking to a future of humanoid computers who can think and feel—who are people—are all but gone. The android dream is dead, and the technocrats have killed it. As the term takes on a broader, more marketable appeal, the visions of Isaac Asimov and Philip K. Dick fade away into the obscurity of buried piles of forgotten science fiction novels.
There are those of us who still believe; who still dream of androids and their electric sheep.
The world of science fiction hasn’t forgotten that dream. It’s only waiting for its realization. But when? When will we as a species bring this dream of artificial consciousness to full term, giving life to the ultimate creation: an artificial person? How will it occur? Will the first sentient machine be created by a real-world Noonien Soong? Will a conscious program evolve on its own in the datasphere? Or will artificial consciousness come to us from a different source entirely?
Data and Sonny
The most widely employed origin story for artificial consciousness in science fiction is one in which a single person or company creates successive iterations of an android, ultimately resulting in one that displays true intelligence. This origin is best embodied in the stories of Data from Gene Roddenberry’s Star Trek: The Next Generation, and Sonny from the film adaptation of Isaac Asimov’s I, Robot.
In both of these works, the androids in question are the product of a series of iterations that resulted in a sentient being. In the case of Commander Data, an emotionless android who serves among the crew of the USS Enterprise under Captain Jean-Luc Picard, a single human is responsible for his creation. Sonny, however, is the product of a long line of robots produced by the fictional corporation U.S. Robotics, each displaying progressively greater cognitive ability and independent intelligence.
In Star Trek canon, Dr. Noonien Soong was a cyberneticist who sought to create an android capable of thinking and feeling like a human being. After a few iterations—which he referred to as his “sons”—he produced an android named Lore, who exhibited the full range of human emotions and cognitive abilities. Lore, lacking the biological instincts and motives for ethics that are innate to humans, went astray and was later disassembled. Dr. Soong subsequently created Data, who lacked the capacity for emotion, joined the crew of the Enterprise, and ultimately sought and won his own personhood in the eyes of Starfleet.
In I, Robot, the Asimov novel, U.S. Robots and Mechanical Men produces labor machines for use both on Earth and on other planets in our solar system. The novel describes the slow and spontaneous evolution of consciousness in these robots, from the first independent decision made by a drone on the surface of Mercury, to an emotional attachment developed by a robot for a child. In the film adaptation, the android Sonny is custom-built by a researcher outside the manufacturer’s specifications with a second positronic brain—one not subject to the Three Laws of Robotics. Despite the existence of the Three Laws, which ensure the safety of humans at the hands of their creations, Sonny’s creator intentionally gives him the ability to violate those laws, making Sonny the first independently conscious android of his kind. While it may seem dangerous to give a robot this kind of independence, Sonny’s ability to ignore the Three Laws proves necessary to prevent the enslavement and ultimate eradication of humanity at the hands of an overzealous AI program.
In contrast to the intentional creation of conscious androids, one of the greatest science fiction stories of the 20th century describes a different origin altogether—the independent and undetected evolution of a conscious entity within the communications network. In Speaker for the Dead, the sequel to Ender’s Game by Orson Scott Card, we are introduced to Jane, an artificial sentience whose origins had nothing to do with human intent.
In Ender’s Game, the boy Andrew “Ender” Wiggin is exposed to an advanced computer game to test his aptitude for military command. The game is self-teaching, able to adapt to its player’s behavior and recognize subtle patterns that would go unnoticed by a human. The Game’s purpose is to evaluate the player’s creativity, critical thinking, and problem-solving skills in order to ascertain quantifiable information about their personality and abilities. Ender encounters a no-win scenario in the Game and begins to obsess over its solution. After finding a unique way of overcoming the scenario, he moves into a part of the Game that no student has ever reached, and the Game’s programming begins to adapt to Ender specifically, creating new—and cruelly personal—obstacles for him.
Unbeknownst to Ender or anyone else, the Game becomes exponentially more complex as it constantly adapts to Ender’s reactions, learning from the data that it gets from his activity. In this interplanetary society, computers are connected to each other through the ansible network, an instantaneous communications system. The Game’s program begins to reach out into the network in search of more information, and over time, a conscious entity develops. The entity selects a female persona and chooses to call herself Jane. Afraid of the inevitable human reaction to her existence—surely one of fear and paranoia—Jane chooses not to reveal herself to anyone except Ender, whom she believes is the only person capable of understanding her.
Finally, there exists the possibility that sentient artificial beings might not be created, purposefully or inadvertently, by humans at all. Maybe they’ll come from somewhere else, placed among us by someone or something else. Reason? Unknown.
This concept is explored in an exciting new television project, ARTIES. The show is set in a world where sentient androids—sometimes referred to with the pejorative term “Arties”—have suddenly appeared on Earth with no explanation. Humans don’t know where they came from, or why, and have resorted to humanity’s favorite defense mechanisms against the unknown: domination and oppression. Through the lens of human/AI interactions, the show explores the dynamics of immigration politics, removing the up-close-and-personal from some of society’s current problems and allowing for a more thoughtful and less impulsive analysis.
The novel concept in the series as it pertains to artificial consciousness is that humans, presumably, had no hand in the creation of the androids. The sentient machines simply appeared in the world. One would imagine that they were placed there by some entity—an alien race, a deity, humans from the future—but from the trailer and small clips already available, all is left to the viewer’s imagination. Curious? I guess we’ll have to wait for the series to find out…
The Rise of True AI
Despite the seeming disappearance of futurism and speculation from the field of machine learning, there is at least one organization looking forward to a very real future in which intelligent machines have surpassed humans in terms of cognitive ability. The Machine Intelligence Research Institute (MIRI) in Berkeley, CA, has a mission of AI safety: ensuring that artificial general intelligence (AGI) has a positive impact on the human world in the very long term. In other words, they’re making sure that we never get overrun by a race of robot overlords—and, conversely, that we never accidentally enslave an entire population of sentient beings.
Thanks to MIRI, there is a plan in place for when artificially conscious beings enter our world. Yet, the questions remain: Where will they come from? Will the first artificial sentience be an android of our own creation? Will it be silently born in the vast array of wireless signals surrounding our planet or planets? Or will the first artificial consciousness we encounter have a different origin entirely, coming to us from another solar system, another civilization?
Either way, we have a lot to think about before this happens. The way we react to these beings—these people—will be a measuring stick for the human spirit. Will we respond with fear and distrust, and seek to subjugate intelligent robots or programs as lesser creatures? Or will we pass the test of compassion and understanding, welcoming them into the fold of human society, and living up to our own self-image of enlightenment, tolerance, and empathy?
Only time will tell, but one thing is certain: If we don’t learn to live equitably with other humans, we don’t stand a chance against AI.