By: Tia Ghose, LiveScience Staff Writer
Published: 05/07/2013 03:42 PM EDT on LiveScience
Are you prepared to meet your robot overlords?
The idea of superintelligent machines may sound like the plot of "The Terminator" or "The Matrix," but many experts say the idea isn't far-fetched. Some even think the singularity — the point at which artificial intelligence can match, and then overtake, human smarts — might happen in just 16 years.
But nearly every computer scientist will have a different prediction for when and how the singularity will happen.
Some believe in a utopian future, in which humans can transcend their physical limitations with the aid of machines. But others think humans will eventually relinquish most of their abilities and gradually become absorbed into artificial intelligence (AI)-based organisms, much like the energy-making machinery in our own cells. [5 Reasons to Fear Robots]
In his book "The Singularity Is Near: When Humans Transcend Biology" (Viking, 2005), futurist Ray Kurzweil predicted that computers will be as smart as humans by 2029. By 2045, "computers will be billions of times more powerful than unaided human intelligence," Kurzweil wrote in an email to LiveScience.
"My estimates have not changed, but the consensus view of AI scientists has been changing to be much closer to my view," Kurzweil wrote.
Bill Hibbard, a computer scientist at the University of Wisconsin-Madison, doesn't make quite as bold a prediction, but he's nevertheless confident AI will have human-level intelligence some time in the 21st century.
"Even if my most pessimistic guess is true, it means it's going to happen during the lifetime of people who are already born," Hibbard said.
But other AI researchers are skeptical.
"I don't see any sign that we're close to a singularity," said Ernest Davis, a computer scientist at New York University.
While AI can trounce the best chess or Jeopardy player and do other specialized tasks, it's still light-years behind the average 7-year-old in terms of common sense, vision, language and intuition about how the physical world works, Davis said.
For instance, because of that physical intuition, humans can watch a person overturn a cup of coffee and just know that the end result will be a puddle on the floor. A computer program, on the other hand, would have to do a laborious simulation and know the exact size of the cup, the height of the cup from the surface and various other parameters to understand the outcome, Davis said. [10 Cool Facts About Coffee]
Once the singularity occurs, people won't necessarily die (they can simply upgrade with cybernetic parts), and they could do just about anything they wanted to — provided it were physically possible and didn't require too much energy, Hibbard said.
The past two singularities — the Agricultural and Industrial revolutions — led to a doubling in economic productivity every 1,000 and 15 years, respectively, said Robin Hanson, an economist at George Mason University in Virginia, who is writing a book about the future singularity. But once machines become as smart as humans, the economy will double every week or month.
This rapid pace of productivity would be possible because the main "actors" in the economy, namely people, could simply be replicated for whatever it costs to copy intelligent-machine software onto another computer.
That productivity spike may not be a good thing. For one, robots could probably survive apocalyptic scenarios that would wipe out humans.
"A society or economy made primarily of robots will not fear destroying nature in the same way that we should fear destroying nature," Hanson said.
And others worry that we're barreling toward a future that doesn't take people into account. For instance, self-driving cars could improve safety, but also put millions of truck drivers out of work, Hibbard said. So far, no one is planning for those possibilities.
"There are such strong financial incentives in using technology in ways that aren't necessarily in everyone's interest," Hibbard said. "That's going to be a very difficult problem, possibly an unsolvable problem."
Some scientists think we are already in the midst of the singularity.
Humans have already relinquished many intelligent tasks, such as the ability to write, navigate, memorize facts or do calculations, said Joan Slonczewski, a microbiologist at Kenyon College and the author of the science-fiction book "The Highest Frontier" (Tor Books, 2011). Since Gutenberg invented the printing press, humans have continuously redefined intelligence and transferred those tasks to machines. Now, even tasks considered at the core of humanity, such as caring for the elderly or the sick, are being outsourced to empathetic robots, she said.
"The question is, could we evolve ourselves out of existence, being gradually replaced by the machines?" Slonczewski said. "I think that's an open question."
In fact, the future of humanity may be similar to that of mitochondria, the energy powerhouses of cells. Mitochondria were once independent organisms, but at some point an ancestral cell engulfed those primitive bacteria, and over evolutionary history the host cells gradually took over all the functions the mitochondria once performed, until the mitochondria did little more than produce energy.
"We're becoming like the mitochondria. We provide the energy — we turn on the machines," Slonczewski told LiveScience. "But increasingly, they do everything else."