For nearly a decade, auto executives and tech gurus have promised that self-driving cars would solve every problem of urban mobility.
On the freeway, autonomous vehicles would reduce traffic and emissions. In cities, “robo-taxis” would free up parking spaces for sidewalks and cafés. Thanks to advanced cameras and lightning-fast reaction times, self-driving cars could achieve near-perfect safety records for drivers and pedestrians — all while allowing commuters to start the workday from the comfort of their backseat.
According to the companies in charge of delivering this utopia, the technology behind it has always been just around the corner. In 2012, Google co-founder Sergey Brin said autonomous vehicles would be a reality for “ordinary people” within five years. In 2016, Uber CEO Travis Kalanick promised to get rid of human drivers in four years’ time. In 2017, a BMW board member told a conference crowd that the company’s slogan, “The ultimate driving machine,” would be obsolete by 2020.
All of those predictions have now been extended far into the future. In April, Uber’s chief scientist admitted that self-driving taxis would take “a long time” and declined to make any further prediction. Officials from Nissan and Ford have said that the industry “overestimated” the timeline for developing autonomous technology. A 2018 Nature article by MIT scientists deflated the hype entirely, concluding that “some form of human intervention will always be required.”
“It’s closer than it’s ever been,” said Bryan Reimer, an MIT research scientist and one of the article’s co-authors, “but it’s still not within arm’s reach of a commercially viable product.”
The challenges of self-driving cars aren’t just technical. At the same time as predictions of a fully autonomous future have begun to dissolve, the reality of a kinda-sorta autonomous present has become alarmingly clear. In March 2018, one of Uber’s self-driving cars struck and killed a woman in Tempe, Arizona. Since 2016, four Tesla drivers have been killed while using the company’s “autopilot” feature, a semi-autonomous system whose capabilities have arguably been exaggerated by CEO Elon Musk.
As the vision of self-driving cars moves further into the future, industry leaders, researchers and city residents are asking an even more fundamental question: Are autonomous vehicles really going to make cities better?
“Self-driving cars can solve a lot of problems,” said Sam Anthony, the chief technology officer of Perceptive Automata, a company that makes software to help autonomous vehicles understand human behavior. “But they also have the potential to make cities vastly worse. If we don’t think deeply about it, we’re going to unleash a whole new set of problems.”
Less Safe Before They’re More Safe
In 2001, the computer-animated film “Final Fantasy: The Spirits Within” kicked off a wave of predictions about Hollywood actors being replaced by robots. “It’s going to happen,” Tom Hanks told The New York Times just before the film’s release.
Other media outlets envisioned a future in which film studios would use “synthespians” as leverage to negotiate lower salaries for stars. The technology was moving forward so quickly, it seemed, that it would reach perfection after just a few more films.
It has now been 18 years. While motion-capture and animation technologies have advanced considerably (“Polar Express” < “Avatar” < “Rogue One”), near-perfect and perfect were never as close as industry predictions made them out to be.
Self-driving cars, Anthony said, have exactly the same problem.
“The difference between a good self-driving car and a perfect self-driving car is massive,” he said. “Humans underestimate how complicated driving is. We’re effortlessly good at looking at other humans and understanding their behavior. That’s a really hard thing to replicate.”
The unique challenge of autonomous vehicles is that, although they may be safer than human drivers in the future, they could be far less safe in the present.
According to a 2016 study, autonomous vehicles must drive 275 million miles without a fatality to demonstrate a safety level comparable to human drivers. In 2018, Alphabet’s autonomous-driving car company Waymo announced that its cars had logged 10 million miles on U.S. roads. While the company hasn’t recorded a fatality, self-driving cars are far from having proven their safety record, and most test vehicles still carry a human safety driver.
And yet politicians have rushed to allow car companies to use their jurisdictions for beta tests. Three years before the Uber crash, Arizona Gov. Doug Ducey gave the company free rein to traverse the state without oversight. More recently, lawmakers in California and Florida have said they will allow autonomous vehicles to roam their roads without safety drivers at all.
Missy Cummings, the director of the Humans and Autonomy Laboratory at Duke University, said these permissions are premature. Government officials are exposing their citizens to experimental technology without any community input or guarantees of transparency from companies in case of a crash.
“It’s grossly unethical to put the public inside this experiment without their consent,” Cummings said. “Politicians want to promote innovation. They want to say that they did something amazing for their communities. But their first obligation is to public safety.”
Part of the reason Cummings is so cautious is that the basic hardware of self-driving cars simply isn’t ready to navigate the changing, complicated urban environments human drivers traverse every day. In road tests, autonomous vehicles have run red lights and struggled to swerve around buses. Their radar sensors can’t detect cars stopped along the freeway. Common complications like rain, fog and shadows can cripple even the most advanced autonomy systems.
These technical shortcomings can make self-driving systems behave in baffling and dangerous ways. In 2015, Toyota had to recall 31,000 cars after Priuses equipped with the company’s automatic braking system slammed on the brakes at high speeds. In 2018, a Tesla Model X on autopilot accelerated before ramming into a freeway barrier and killing its driver.
While these problems will likely become less severe as the technology continues to develop, it’s not obvious that they’ll disappear entirely. According to Cummings, autonomous driving systems suffer from the same fundamental, unsolved limitation as all other machine-learning systems: they are confined to inductive reasoning.
Machines build their picture of the world by analyzing millions of data points and finding patterns, generalizing from specific examples to a general rule: inductive reasoning. To teach a machine how to recognize cats, for example, engineers show it a million cat photos. After a while, it can distinguish between “cat” and “not cat.” This approach works well for some things, like recognizing faces or scanning sloppily handwritten letters. Where it falls short, however, is in deductive reasoning: the ability to apply general knowledge to an unfamiliar situation, such as predicting whether the cat on the sidewalk will stay there or dart into traffic.
“The vision systems of autonomous vehicles can’t respond to even slightly different views of the world than what they’ve been shown before,” Cummings said. “A human can easily recognize a stop sign with a tree branch hanging across it, but that might cause a car computer vision system not to see it at all.”
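Cummings’ point about pattern-matching can be sketched with a toy example. The code below is purely illustrative, not anything a real vehicle uses: a minimal nearest-centroid classifier with invented features and labels. It learns a rule from a handful of examples, then confidently labels an input unlike anything it was trained on, rather than flagging that it doesn’t know.

```python
# Toy illustration only: a nearest-centroid "classifier" with two
# made-up features, (redness, octagon-ness), each in [0, 1].
# It generalizes patterns from examples but has no way to reason
# about inputs that resemble none of its training data.

def train(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(centroids, features):
    """Assign the nearest centroid's label, always, even for odd input."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

training_data = [
    ((0.9, 0.95), "stop_sign"), ((0.85, 0.9), "stop_sign"),
    ((0.1, 0.05), "background"), ((0.2, 0.1), "background"),
]
centroids = train(training_data)

# A typical stop sign is classified correctly.
print(predict(centroids, (0.88, 0.92)))

# A sign half-hidden by a branch resembles neither cluster, but the
# model still emits a confident label instead of signaling uncertainty.
print(predict(centroids, (0.5, 0.3)))
```

In this sketch, the partially obscured sign quietly ends up labeled as background, the failure mode Cummings describes: the system doesn’t know what it doesn’t know.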
Deviations like this are nearly infinite in the messy, complex environments of American cities. Even something as common as traffic cones, Cummings said, can confuse self-driving systems when they’re not placed in expected locations or in a perfectly straight line. Plus, vehicle sensors can be “hacked” by manipulations as simple as painting over lane markings or altering road signs.
Municipalities could address some of these weaknesses by updating their road infrastructure. But rebuilding cities to fit cars, rather than cars to fit cities, wasn’t part of the original promise of autonomous vehicles.
“If you need infrastructure changes to make autonomous vehicles work, you’ve failed at the engineering challenge,” Anthony said.
More Cars Won’t Mean Better Cities
In a way, solving the technical challenges of self-driving cars is the easy part. The more difficult question concerning the viability of autonomous vehicles in American cities is whether their residents want them at all.
“Our streets are already choked with traffic,” said Gregory Shill, a law professor at the University of Iowa who studies transportation. “Removing the need for drivers has the potential to turn roads into permanent rivers of motor vehicles.”
The promise that autonomous vehicles will reduce traffic, Shill said, doesn’t stand up to scrutiny. Once driverless cars can be summoned for commutes and errands, consumers may send empty cars back and forth across the city all day. Thousands of office workers streaming onto the street at 6 p.m. will cause backups as taxis creep toward their riders. Even in the short term, as semi-autonomous cars reduce the hassle of driving, city residents will likely respond by driving more.
“Right now, a major cost of driving is the amount of time and focus it requires,” Shill said. “Once you can read a book or take a nap on the road, people’s willingness to endure extra driving will go through the roof.”
The potential for self-driving cars to reduce parking spaces and improve urban environments is similarly unsupported. Aaron Naparstek, the founder of Streetsblog.org and the co-host of “The War on Cars” podcast, pointed out that autonomous vehicles will be released into the political system we have, not the one we want.
It’s unlikely that autonomous vehicles would result in an immediate spike in infrastructure spending or a spurt of new public transportation projects, Naparstek said. That means no large city locked in a debate over density, transportation and traffic (i.e., all of them) will mend its divisions just because a new technology comes onto the market.
“Politicians are using autonomous vehicles to delay making hard decisions,” Naparstek said. Mayors and governors, he pointed out, can already reclaim parking spots for wider sidewalks or dedicated bus lanes. They can address traffic congestion by increasing investment in public transit, bicycle infrastructure or car-sharing. Many of the benefits promised by autonomous driving are available right now; what’s missing isn’t technology but the funding and political courage to carry them out.
For the foreseeable future, autonomous technology will mostly be available to the wealthy. Last Thursday, The New York Times quoted a car company official who suggested that cities could restrain pedestrians with sidewalk gates so autonomous vehicles could whiz by without interruption. If residents and lawmakers aren’t careful, autonomous vehicles could make cities even more unequal and beholden to automobiles than they are now.
So far, the federal government isn’t helping cities address any of these challenges. Autonomous vehicles will require a huge range of new policies, from safety requirements to advertising standards to insurance regulations. As of yet, these questions have been left entirely up to states.
“We don’t even know what our goal is,” Reimer said. “Safety? Mobility? Connecting to public transport? Energy efficiency? Or do we just want a bunch of robo-taxis? This is where government needs to step in.”
Perhaps the most important legal issue is liability in the event of a crash. Sitting behind the wheel of an autonomous vehicle is almost perfectly designed to induce boredom and distraction. Reports indicate that the Uber safety driver in Arizona was watching TV on her phone just before the crash. Nonetheless, in all but the most narrow circumstances, human drivers remain legally responsible for crashes. This makes driving less safe — and shields autonomous-vehicle companies from the kind of accountability that would encourage more responsible behavior.
“The true definition of ‘self-driving’ is who has liability in case of a crash,” said Alex Roy, the founder of the Human Driving Association, a safety advocacy group. “You wouldn’t get into a taxi if you could be charged for an accident caused by the driver.”
The mixture of autonomous systems and human fallibility could already be giving drivers a false sense of security. A 2018 study by AAA found that 80% of drivers erroneously believed their cars’ blind-spot monitoring systems worked at high speeds and were effective at spotting cyclists. Forty percent thought their warning systems would apply the brakes in case of an obstruction rather than simply displaying a warning. These misconceptions could result in riskier or more careless driving.
Whether due to increasing autonomy or other factors, American roads are undeniably becoming more dangerous. Last year, pedestrian fatalities reached their highest level since 1990. Car crashes killed 5,000 more people in 2018 than in 2014. America already has twice the traffic fatality rate of Canada and four times that of the United Kingdom.
According to Shill, local and federal regulators don’t have to wait for technological advances to reduce these rates. They could redesign roads and install enforcement cameras. Car companies could use their navigation systems to limit drivers to the local speed limits, a policy already being implemented in Europe.
All of those fixes have been available for years. The fact that they haven’t been widely implemented indicates that increased safety isn’t the primary goal of autonomous vehicle companies or the politicians cheering them on.
“The commercial vision is for customers to play on their phones throughout ever-lengthening commutes,” Shill said.
And that’s why the key to making cities better may turn out to be fewer cars rather than smarter ones.
“We should be asking what kind of urban environments we want and shaping our priorities around that,” Shill said. “Otherwise we’re setting ourselves up for another century where our cities are planned around the automobile.”
Clarification: A paragraph on driver safety originally compared autonomous vehicle accidents to fatality accidents. It has been updated to make the comparison more clear.