The most dangerous moment in a self-driving car involves no immediate or obvious peril. It is not when, say, the computer must avoid a vehicle swerving into its lane or navigate some other recognizable hazard of the road -- a patch of ice, or a clueless pedestrian stepping into traffic. It is when something much more routine takes place: The computer hands over control of the vehicle to a human being.
In that instant, the human must quickly rouse herself from whatever else she might have been doing while the computer handled the car and focus her attention on the road. As scientists now studying this moment have come to realize, the hand-off is laden with risks.
"People worry about the wrong thing when it comes to the safety of autonomous cars," says Clifford Nass, a Stanford University professor and director of the Revs Program, an interdisciplinary research center. "There are going to be times where the driver has to take over. And that turns out to be by far the most dangerous and totally understudied issue."
Thrust back into control while going full-speed on the freeway, the driver might be unable to take stock of all the obstacles on the road, or she might still be expecting her computer to do something it can't. Her reaction speed might be slower than if she'd been driving all along, she might be distracted by the email she was writing, or she might choose not to take over at all, leaving a confused car in command. There's also the worry that people's driving skills will rapidly deteriorate as they come to rely on their robo-chauffeurs.
In the effort to engineer self-driving cars, the best and brightest minds have already mastered many of the technological questions, producing vehicles that can park themselves, navigate highways and handle stop-and-go traffic. But one of the biggest impediments remains the very thing that motivated the quest for self-driving cars in the first place: the limits of human abilities. Psychologists, engineers and cognitive scientists are now probing how humans interact with such cars, cognizant that these realities must shape how the systems operate.
"The greatest challenge to having highly automated vehicles is not technological," observes Richard Wallace, a director at the Center for Automotive Research, a non-profit research organization. "It's handling the transition when humans must take back control of the vehicle."
Inside a dark room at Stanford University's automotive research lab sits a four-week-old, $600,000 driving simulator that will be one of the first used to study how drivers trade duties with their self-driving cars and how the cars should be designed to ensure the trade-off is done safely.
Stanford University's driving simulator.
Nass, the simulator's chief champion, boasts that Stanford's new tool is unique in its ability to shift instantaneously from full to zero automation, and he plans to track drivers' concentration, attention, emotional state and performance when they take over for the self-driving car under different conditions.
His lab's findings will help inform the design of future driverless cars -- from the layout of their dashboards and infotainment systems, to how they deliver alerts and ask drivers to take control. Do people drive more safely if their cars speak to them, flash messages or, say, vibrate the steering wheel? Should cars give an update on road conditions just before the human driver takes over at the wheel, or are such details distracting? And how does a driverless car clearly outline what it can and can't do? Nass has a laundry list of such questions, the answers to which are likely to be monitored closely by automakers: In addition to his position at Stanford, Nass consults for Google on its driverless cars and for major car companies, such as Nissan, Volkswagen, Volvo, Ford and Toyota (Toyota helped fund the Stanford simulator).
These car manufacturers, along with Google, have assured the public that driverless cars will make our commutes safer, more efficient and more productive. They point out that machines don't drink and drive or doze off at the wheel. Since algorithms react more quickly than humans, cars can be grouped into platoons, eliminating stop-and-go traffic and conserving fuel. Drivers will be able to read, text and work while their intelligent vehicles handle four-way stops.
Yet despite these rosy predictions, carmakers won't immediately deliver robo-taxis. The first generation of self-driving cars is more likely to consist of capable co-pilots that pass driving duties back to a human when complex situations arise, much as planes' autopilot systems ask pilots for help in emergencies. As one report authored by researchers at the Massachusetts Institute of Technology recently noted, "driverless is really driver-optional."
Nass' biggest fear is that unless car-human collaboration is better understood, self-driving cars could prove even more dangerous than the existing, imperfect automobile technology.
"One of the great ironies is that autonomous cars are much more dangerous, but not while they're being autonomous," Nass says. "They're dangerous because of the driver taking over from the situation."
Nass has spent more than 25 years studying how people speak to, look at, criticize, make friends with and lie to machines. He's examined how sad drivers respond to peppy virtual voices; how people react to flattery from their computers; and why a group of German men thought their perfectly functioning GPS systems were broken (they didn't trust directions spoken by a female voice, Nass discovered). In the process, Nass has proven over and over again that individuals treat gadgets as if they are other humans, expecting machines to be sensitive to our moods and feelings.
As Nass sees it, driverless cars should eventually be capable of acting as our "wingmen," proactive and aware of our faults so they can assist us in the best possible way.
We're witnessing "the transition of the car from being your slave to being your teammate," he explains. "You can start to think about a radical new way of designing cars that starts from the premise that [the car] and I are a team."
Nass' new simulator will give him the most detailed view yet into our relationships with our cars. What's special about this setup, he explains excitedly, is that it allows him to match up exactly what's happening in the driver's head with what's happening, at that instant, inside the car. His test subjects will be equipped with high-tech gear that tracks their emotional and mental states throughout the courses they drive. They can be outfitted with EEG sensors that measure brain activity, skin conductance sensors that track emotional arousal, and eyetracking glasses that follow their gaze. Nass will use data from these tools, in conjunction with questionnaires and logs of the car's activity, to see how automation affects drivers' reaction speeds, focus and ability to avoid obstacles after taking over from a car that's been driving itself.
A researcher in Nass' lab is outfitted with eyetracking glasses, EEG sensors and skin conductance sensors ahead of his drive in the simulator.
In one of Nass' first studies, he will try to determine how long it takes drivers to "get their act together" after the autonomous car hands back control. Google's self-driving Lexus SUV offers one current template for the hand-off: When the car knows it needs human help -- often when approaching a construction zone or merging onto a freeway -- an icon or message will flash on a custom-made screen mounted on the car's dash, and drivers usually have 30 seconds' notice before they need to take over.
But is that just enough time, too much or too little?
Nass invited me to be one of his first lab rats in the simulator, and he was curious to see how I handled the obstacles that popped up on the road in the moments after I took over for the car.
I buckle my seatbelt in the driver's seat of the full-sized 2012 Toyota sedan. The car is surrounded by curved screens the size of billboards, onto which six projectors shine interchangeable animated driving courses. One minute I'm passing trucks, Land Rovers and Audi sedans in what vaguely looks like a Boston neighborhood. The next I'm cruising down a highway lined with office parks and TGI Fridays restaurants. ("I built the world," boasts one student who works at Nass' lab.) A subwoofer mimics the growl of an engine, and the whole scene is so lifelike that I'm starting to feel carsick.
The Toyota's autonomous mode kicks in, and the car takes over. Seconds later, a white BMW swerves in front of me and slams on its brakes. Normally I'd panic, but the car has this one handled. The Toyota immediately taps the brake, slows down, then picks up speed once the other car has driven far enough ahead.
"We're interested in your attention level. Do you freak out more when you get cut off, or when the computer gets cut off? When is it scary?" Nass explains. Later, he elaborates that knowing my emotional state would help researchers understand whether I trusted my driverless car to handle emergencies for me. "The point is, if the car gets cut off and you remain totally calm, it means you trusted the car would keep you safe. One of the critical issues with autonomous cars is trust. Because if you don't trust the car, it won't work."
A little further down the road, the car tells me it's my turn to drive. I dutifully put my hands back on the wheel and fix my eyes on the road -- just in time to see a construction worker emerge from a pile of orange cones and amble across the street. I swerve to avoid him, sending the Toyota spinning over the median and into oncoming traffic. Car 1, Human 0.
The simulator's sideview mirrors are LCD screens that show the vehicles on the road "behind" the driver's car. The rearview mirror reflects the image of another screen, hung behind the car itself.
Though Nass' research will offer more precise insights into self-driving cars, engineers have already spent decades studying how people work with automated systems in cockpits, trains, nuclear reactors, mines and ships. Of course, each situation has its own nuances. Yet on the whole, research suggests that drivers could have difficulty adjusting to their car's electronic "wingman."
Pilots' collaboration with autopilot systems offers a useful point of comparison for anticipating how drivers will adapt to driverless cars, these experts say. They also warn that any problems with automation in aviation are likely to be magnified when transferred to drivers, who aren't as well-trained as pilots, and to roads, where cars face numerous obstacles and a slim margin of error.
Though autopilot systems have yielded enormous improvements in airline safety, some experts caution that pilots have become so dependent on help from intelligent software that they are forgetting how to fly. The Federal Aviation Administration has become so concerned about the rise of "automation complacency" that it recently ordered airlines to have their pilots reserve time to practice hand-flying planes.
If automation can cause skill degradation among an elite group of professionals who train for years, imagine what it may do to drivers, who are tested only once (when they get their driver's license) and have a much broader range of driving abilities. (Teenagers drive cars. They'd never be allowed in the cockpit of a Boeing 777.) Researchers predict drivers will get rusty, making them ill-equipped to take over for their cars. Exacerbating the problem: Autonomous vehicles are likely to need assistance with the most challenging driving scenarios -- think slippery streets -- that out-of-practice drivers would be poorly prepared to handle.
"It's ironic: We have all these automated planes, but what we need is to go back to flying without automation," observes Raja Parasuraman, a psychology professor at George Mason University and director of the graduate program in human factors and applied cognition. "I could envision a similar situation in driving."
And as exciting as the technology may seem now, operating driverless cars will ultimately be extremely boring. When required to monitor autonomous systems for long periods of time, human babysitters frequently get distracted and tune out, which can slow reaction times, delay the recognition of critical issues and lead to accidents. In 2009, two pilots operating a flight to Minneapolis from San Diego entrusted the autopilot with control of the plane, and eventually turned their attention to their laptops. They became so engrossed in their computer screens that they failed to realize they'd overshot the airport by about 110 miles.
In the recent MIT report on driverless car technology, Missy Cummings and Jason Ryan of the school's Humans and Automation Lab write that drivers in autonomous or highly autonomous cars failed to react as quickly in emergency situations. "[A]t precisely the time when the automation needs assistance, the operator could not provide it and may actually have made the situation worse," they concluded.
In time, technology could even solve that problem, too. Nass, along with engineers at Toyota, Ford and Mercedes-Benz, is already looking ahead to creating cars that monitor both road and driver, and could behave differently depending on the driver's mood or mental state. The latest Mercedes models claim their "Attention Assist" technology can detect if a driver is getting drowsy, though for the time being, its only recourse is to sound an alert.
In short, the self-driving car could one day map its drivers as well as it maps the roads. And when that happens, it won't only drive you around -- it'll also be your best friend.
"In the same way you become attached to friends, you'll become attached to your car, though not in an unhealthy way," Nass says. "From a business standpoint, this is the dream of the century."
This story appears in Issue 68 of our weekly iPad magazine, Huffington, available Friday, Sept. 27 in the iTunes App store.