The new year is starting with a new, louder wave of warnings about the robots coming for our jobs. They're better than we are, the argument goes: more efficient, better able to process huge quantities of data, and available to work nights, weekends and holidays without demanding overtime pay or dental insurance. As advances in artificial intelligence and big data analysis endow machines with sharper cognitive abilities, blue- and white-collar jobs are on the cusp of becoming silicon-collar jobs: outsourced to increasingly capable machines that are making exponential improvements in their hardware and software.
But what happens when our software improves? The new year is also bringing us closer to a new kind of brain.
These pessimistic predictions about the coming tyranny of bots in the workplace have their merits (and are likely to come to pass), yet they largely presume we humans are staying the same.
We're not: with devices like Google Glass and tools such as Google Now or Siri, we're enhancing our own abilities, and sharing with robots the benefits of advances in artificial intelligence and big data. While that won't necessarily stop the influx of robot overlords into boardrooms, classrooms and operating rooms, we should take some comfort in the fact that we're gaining new abilities even as the machines are. And that means we may have better jobs to look forward to than sterilizing robo-surgeons or nannying robots, two future career paths Kevin Kelly suggests in his recent Wired story, "Better than Human."

There are the devices that will replace us, and then there are the machines that will enhance us, fusing more seamlessly with our bodies and minds to create artificial, externally powered brains that don't forget, can quickly summon facts, outsource tedious tasks to computers and maybe even reason a bit better. (Of course, with that comes no shortage of serious questions, from privacy concerns to anxieties over the future of face-to-face human interaction.) Google Now, for instance, can proactively prompt us with information about our commute, calendar, travel plans or lifestyle. Siri, in its pre-Apple days, could handle basic chores on our behalf.

Consider Google's Project Glass, the focus of a series of IEEE Spectrum stories out this week and one of a handful of wearable computers in the works that observe and capture our surroundings like a second brain. Google Glass's design "lets Glass record its wearer's conversations and surroundings and store those recordings in the cloud; respond to voice commands, finger taps, and swipes on an earpiece that doubles as a touch pad; and automatically take pictures every 10 seconds," explains IEEE Spectrum's Elise Ackerman.
A concept video for the device released by Google showed a man using the glasses to video chat with his girlfriend, respond to messages, get directions and learn about people and places he can't immediately see. Artificial intelligence researcher Rod Furlan speculates that data gathered by Glass could "eventually be able to search my external visual memory to find my misplaced car keys." Facial recognition could one day help you avoid the awkwardness of forgetting names, and object recognition could alert you to the calorie count of the sugary snack you're about to eat. Google Glass promises to be not only a communication device for answering emails or sharing photos, but a kind of personal assistant and second mind.

The goal, according to Google Glass project head Babak Parviz, is to someday "make [accessing information] so fast that you don't feel like you have a question, then have to go seek knowledge and analyze it, but that it's so fast you feel like you know it ... We want to be able to empower people to access information very quickly and feel knowledgeable about certain topics." In a 2004 interview, Google co-founder and CEO Larry Page asked the world to "imagine your brain being augmented by Google." Nine years later, we no longer have to imagine it.

This feeling that Google Glass can enhance the wearer's mind isn't PR spin, but something to which users of the device can attest. Furlan, who created a homemade pair of Google Glass-like specs that could stream emails, Twitter posts and more to a lens over his eye, told IEEE Spectrum that though he initially suffered from information overload, he now feels "impoverished" when he takes off the device. Evernote CEO Phil Libin predicts, based on his own experience with Google's glasses, that in three years' time, gazing upon a world without the additional information offered by a Google Glass device will seem "barbaric."
"People think it looks kind of dorky right now but the experience is so powerful that you feel stupid as soon as you take the glasses off," Libin told The Huffington Post's Michael Rundle.
And, presumably, smarter with the glasses on. The marriage of cutting-edge technology with the human brain -- which has resisted obsolescence even after many millennia -- could open up new fields and professions for which even the machines are unqualified.
Computers can perform more of the tasks we thought only humans were capable of -- recognizing emotion, playing chess, writing articles, translating speech -- but they still haven't replicated our knack for intuition or gut feeling, as The New York Times' Steve Lohr pointed out in his story, "Sure, Big Data Is Great. But So Is Intuition." What happens when we fuse the best of machines with the best of us? Artificial intelligence with human intuition? Big data with gut feeling? Faulty memories with perfect ones? What new jobs will open up to us then?

Affectiva's emotion-recognition technology may know when we feel miffed -- but humans are still better than computers at backtracking with an instant apology. It's hard to imagine diplomats being replaced by bots. Maybe we will be replaced as lawyers, lovers, drivers and therapists, as many predict. Or maybe our cyborg selves will borrow the best from software and souls to forge another path and, crucially, other professions. We're not being completely left behind in the digital revolution. It's making us better machines, too.