Technology is risky business. At least, that's what some scientists fear: the proposed Center for the Study of Existential Risk at the University of Cambridge will bring together researchers to brainstorm how we may prepare for technology-related and human-induced dangers in the future.
But what are these possible threats? Well, in part, it's too soon to tell -- that's precisely what the center hopes to study. Yet the center's co-founders have suggested we should pay more attention to the potential downsides of building sophisticated, artificially intelligent machines or of producing designer viruses. What if we build computers that are too smart for our own good, and they write their own code that wreaks havoc on our banking system or electrical grid? Or, what if a powerful genetically engineered virus is mistakenly let loose from a biotech lab and infects millions?
Dr. Martin Rees, cosmologist and astrophysics professor at Cambridge, addressed these "what ifs" in the video above; click the link below for a full transcript. Plus, don't forget to sound off in the comments section at the bottom of the page.
BIANCA BOSKER: Imagine you've turned on the morning news and discovered that an army of robots has self-assembled and is now making demands on national leaders. Or you check Twitter and see that it's exploded with news that a genetically engineered virus was mistakenly let loose from a biotech lab and is now infecting millions of people. Sounds like something from a Hollywood flick or sci-fi novel, right? But actually, researchers at Cambridge University's Center for the Study of Existential Risk are investigating the likelihood of these kinds of doomsday scenarios becoming reality. Their core question: Could our own inventions not just make us obsolete, but eradicate us entirely?
MARTIN REES: Obviously there are risks connected with effects on the environment, runaway disasters there. There are risks in computer networks breaking down, and there are other risks from potential new technology, and of course there's a continuing risk of some kind of nuclear catastrophe. So all these things are possible, and one thing we certainly should do is get a group of scientists from across all fields together and have them brainstorm.
BB: Researchers at the center will focus on four key areas that could pose the greatest risk to humans in the future. They are nuclear, cyber, biological and environmental threats.
MR: It’s valuable in its own right to try and have as complete a list as possible of threats, including even the crazy ones, so that as evidence comes in, you adjust your betting odds against the different ones and decide which are serious and which are not.
BB: Never before in Earth's history has the threat from man-made catastrophes rivaled that from natural ones. And while we know we can survive earthquakes and tsunamis, we don't have any real experience surviving these human-induced doomsday scenarios.
MR: The human impact on the biosphere and on the climate is for the first time substantial. And we are threatened by small groups empowered by powerful technology. So this is the first century when one species, namely ours, will determine the future of the planet.
BB: So are any of these scenarios, like an outbreak of super germs or a takeover by artificial intelligence, overhyped?
MR: We worry far too much about plane crashes and things like that. We expend far too little worry on these other less familiar threats. I think it's important to bear in mind that the fact that something is unfamiliar doesn't mean it's improbable.
BB: So, in other words, don't sweat the small stuff -- especially when a robot apocalypse could be on the horizon. We'll be keeping an eye on the center's work as it draws the line between science fiction and science fact. In the meantime, tell us what you think: Will technology be to humans what the asteroid was to the dinosaurs?
See all Talk Nerdy to Me posts.