On 7th May 2016, Joshua Brown left a family trip to Disney World, Florida. He was heading back to his home in Ohio when his car struck the trailer section of a truck. He was found dead at the scene. At first, it seemed there was nothing unusual about the accident—just one road death among the 38,000 that occur in the United States every year. Investigators found that the truck had been pulling out of a side turning on Highway 27. They assumed that one of the drivers was at fault. However, several weeks later, when the details emerged, it became clear that Brown’s death was anything but a routine motoring accident.
Quite the opposite. The car involved, an electric Tesla Model S, was in autopilot mode at the time of the crash. The incident is thought to be the first time that self-driving technology has caused a fatality. Brown, a 40-year-old former member of the US Navy Seals, loved his Tesla, especially its self-driving features, which he saw as the future of transport. In this he was no different from the many political and business leaders who have high hopes that this technology will solve myriad social problems, from the shortage of urban land for homes to air pollution. Brown, however, went so far as to give his car a pet name: Tessy.
According to the manufacturer, the Tesla S model’s autopilot feature “allows Model S to steer within a lane, change lanes with the simple tap of a turn signal, and manage speed by using active, traffic-aware cruise control. Digital control of motors, brakes and steering helps avoid collisions from the front and sides, as well as preventing the car from wandering off the road.”
Brown had enthused about how Tessy would take over the driving on highways while he relaxed, looking at the passing scenery as if he were on a train, or even watching DVDs on a portable player. He was such an enthusiast for Tesla that he posted numerous videos of his driving experiences. One shows how he narrowly avoided an accident when a white truck cut across in front of his car, alerting him to the need to take control.
Another of Brown’s videos on YouTube attracted the attention of Elon Musk, the founder and CEO of Tesla, prompting Brown to tweet: “@elonmusk noticed my video…I’m in seventh heaven!” The New York Times reported that Brown told a neighbour at the time: “for something to catch Elon Musk’s eye, I can die and go to heaven now.” Brown’s confidence in the autopilot was such that he tested it to the maximum, explaining in one clip, as the car rounded a curve, that “this section in here is going to be very, very difficult for the car to handle.” Ironically, the situation that his car appears to have failed to handle was relatively routine. On a straight section of Florida highway, a truck with a trailer moved across Brown’s path but the sensors on the Tesla seem not to have detected its presence. The car, remarkably, tried to squeeze under the trailer, smashing its windscreen and going on to hit two fences, crossing a field and coming to rest against a pole 30 metres south of the road. It was Brown’s over-confidence in the system that almost certainly killed him. A laptop and portable DVD player were found at the scene, though neither was running when the Florida Highway Patrol arrived. The truck driver Frank Baressi, who rushed up to the wrecked Tesla, claimed that a Harry Potter movie was playing on Brown’s DVD player.
Tesla’s statement implied that the very newsworthiness of the crash was a testament to the excellent safety record of the Tesla Model S. The company said in a blog post that its cars had driven 130m miles in autopilot mode without a fatality, whereas on average, there was a fatality for every 94m miles driven on US roads. This meant, Tesla suggested, that its autopilot was safer than human control. Tesla did not respond to questions about these figures. But its claims did not appear to take into account the real nature of the autopilot function, which can only be used for simple driving on major highways, where accident rates are far lower than on conventional roads with their frequent junctions, sharp turns and traffic lights.
At a time when driverless technologies are sold as safe, green and liberating, the incident was a major embarrassment. The enthusiasm has been almost that of the space race. In January, President Barack Obama proposed $4bn of subsidies for driverless and vehicle-safety technology over the next decade. George Osborne has been another devotee. The self-styled austerity chancellor wrote a more modest cheque: the first £20m of his £100m Intelligent Mobility Fund was this year allocated to eight projects, ranging from driverless shuttles for disabled people to the development of autonomous-vehicle-testing centres. In March, Nissan announced that it would make its first partially autonomous car at its Sunderland plant. The company’s decision, announced in October, to maintain its investments in Britain means that the first of these vehicles may roll off the production line in 2018. The ardour is preventing a cool assessment of the technology’s problems, not least because the language in this field is laced with spin.
The car that Brown was driving was not, in today’s traffic conditions, “driverless” or “autonomous”. A legion of difficulties still stands in the way of introducing a truly autonomous car, and there is an element of desperation among companies to be the first to come up with the mould-breaking model. Last year VW and Toyota invested $22.6bn in research; no car company can afford to be left out of this race.
And it’s not just the car manufacturers. Uber has recently started offering rides in its “driverless” taxis in Pittsburgh. The hype has been enormous, as if we were about to see all cab drivers sent to the dole queue. But they can rest easy, for now at least. These “driverless” taxis have two people in the front seats: a test driver and an engineer. Moreover, when a reporter from Business Insider went for a ride, the test driver had to intervene at least four times. The car could only drive in a limited area, the route could not be changed once programmed, and there was room for only two passengers. The trip was free, though.
Google is also working on the technology. In a statement at Congressional hearings in March, Chris Urmson, the then head of Google’s self-driving car division, suggested safety was the great impetus. The development of driverless cars, Urmson said, would “pave the way for the deployment of this innovative safety technology, which will help reduce the more than six million traffic accidents that are reported in the US every year.” In support of this, he said that in seven years of test driving, there had been only 17 minor accidents, which, on the mileage travelled, was not very different from the average among human drivers.
The real story, however, is more complicated. While he was correct that most accidents were caused by inattentive human drivers, cars on autopilot can behave erratically. Test cars react to minor events, such as a plastic bag blowing across their paths, as if they were dangerous and stop suddenly, surprising the driver behind, who rightly perceives no risk. This has led to several rear-end shunts. The worst incident, from Google’s perspective, happened this year when a Google Lexus pulled into the path of a bus travelling at around 15mph, causing extensive damage to the car. In a statement on the accident, Google said that the test driver failed to intervene when the car made an error and pulled out in front of a bus: “The Google AV [autonomous vehicle] test driver saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google AV to continue.” This was a bit of a giveaway. It showed that, far from being autonomous, many such vehicles still rely on human intervention to prevent accidents. It’s the same reality that undermines those manufacturer safety statistics.
Another advantage cited by Urmson was that driverless cars would free up some of the 3,000 square miles that are currently being devoted to car parks in the US, an area the size of Connecticut. To arrive at this claim, Google assumes not only that cars become driverless, but that car sharing becomes far more commonplace. It assumes that its boring little pods, which will lack any human-operated controls, would be a communal resource: Uber-type taxis that would carry passengers all day and take themselves off to some far-flung car park at night, presumably on less valuable land. It assumes, too, that since most ordinary cars sit unused 95 per cent of the time, the requisite number of pods, and of car parks to hold them, would be far smaller than with conventional cars.
Another telling assumption was the suggestion that driverless cars could reduce the need for public transport. Urmson cited the case of a “woman in Southern California who lost her ability to drive 15 years ago [who] told us, ‘my life has become very expensive, complicated, and restricted’ since she had to start paying drivers and enduring long waits for buses and trains.” It was an instructive case. To anyone not caught up in the stampede, making buses and trains more frequent might seem a more obvious response than building an army of wizardly pods.
What effects would the widespread introduction of this still half-hypothetical technology actually have? Google and other developers claim it would reduce congestion and so bring environmental benefits. But even on the most optimistic predictions, many of these vehicles will be driving around with no passengers, so that assumption is questionable. Add in the expectation that current public transport users will then shift to cars, and it becomes clear that there is no reason to expect that driverless technology will lead to empty roads or free up acres of city-centre car parks.
The idea that people will readily opt for communal vehicles is also questionable. There has been some success through car-clubs such as Zipcar. London has 186,000 car-club members, with access to just under 3,000 vehicles, according to the annual survey of car-club use; there are 25,000 fewer independently-owned vehicles on London’s streets as a result. Zipcar reckons this figure could be tripled by the end of the decade, but its General Manager in the UK, Jonathan Hampson, cautions against assuming that we will all be driving communal cars one day: “Car clubs, though, are not for everyone and there are many people who still aspire to car ownership, even Millennials. I don’t see a time when all cars will be shared.” We can’t assume, either, that this aspiration will disappear with automation: some will still want the freedom to leave their stuff on the seats overnight, and—perhaps—a flashy vehicle on their drive. The AA describes the presumption of sharing as “vastly hyped.”
Google argues that the technology will be liberating both for existing drivers and for those who can’t drive. Instead of wasting time at the wheel, people could use their mobile devices as bus and train passengers do. Hundreds of millions of people would have more time to spend on the internet which, for the tech firms, is a clear case of “what’s not to like?” As with Tesla, the tone of Google’s submission seems to be that driverless cars are just around the corner. In his evidence, Urmson made clear that test drivers were aboard all Google’s vehicles. But the company wants to do away with drivers altogether, and especially with the regulations that assume their existence. Urmson suggested that human drivers are themselves a safety hazard. Even “having controls that allow a passenger to change its trajectory or operate turn signals or headlamps… may make the operation of the car less safe,” he said.
The autonomy of cars is defined on a scale from zero (conventional cars) through to five (cars with no accessible controls, such as accelerators or steering wheels). Level four is a fully autonomous car that does not require a driver at all. Currently the technology is hovering around level two (at least two functions of the car, such as steering and braking, can work automatically) and level three, where it is possible “to completely shift ‘safety-critical functions’ to the vehicle, under certain traffic or environmental conditions.” Google is determined to create the impression, at least, of aiming to hit level five in the near future. But why?
The notion, promoted by both Google and Tesla, is that the driverless car revolution is already upon us: that soon, autonomous cars will drive us to work and then go back home to pick up the kids and take them to school. Independent analysts, and many within the industry, are wary of firm predictions, and agree that level four is a long way off and level five further still. Edmund King, President of the Automobile Association (AA), found something of a consensus at a recent meeting of key players in Germany. The collective view was that being able to buy and use a driverless car on the highway will not happen until 2030.
“Even then, the ‘driver’ who would not be driving, would still need to be in the car because at that stage 99 per cent of cars would not be driverless, and therefore there will be the problem of the interaction between driverless and non-driverless cars.” King went on to pose a question that has been much ignored. Do people actually want driverless cars? “In a Populus survey of 26,000 of our [AA] members, 63 per cent say they still enjoy driving and 69 per cent say they are not ready to take their hands off the wheel.” Attitudes, of course, could shift. But during the switch to driverless cars, motorists would have to adjust to being, like airline pilots, the monitors of a self-driving system. Just as early computer users were all expected to write their own programs, a task that now falls to a handful of experts, driving will become a lost art. The transitional period in which drivers are expected to intervene only in an emergency may result in increased casualties.
Google and Tesla are pushing for a legal framework for their vehicles to ensure that, once the technology is developed, it will be possible to sell them straight away. But there are enormous implications that need to be understood before any framework can be developed. What, after all, is the aim of this technology? If it really is achievable, it will throw every lorry and taxi driver out of work (assuming the technology allows there to be no driver), possibly worsen congestion and weaken the argument for public transport. It is not being a Luddite to worry.
As Tesla is learning in the aftermath of Joshua Brown’s accident, the legal issues are mind-boggling. Who is going to be allowed to “drive” these vehicles and where? Codes of practice for testing “driverless” vehicles on public roads have existed since 2015 and address a number of issues relevant to the testing of driverless vehicles, from vehicle and test driver requirements to insurance, data protection and cyber-security, but primary legislation will still be required. And then the biggest teaser of all: who is to blame for a crash when no one is driving? There is currently no answer to that question. Volvo has attempted to pre-empt the situation by accepting liability for any collisions involving its autonomous vehicles. This is an easy promise to make when there are no cars on the road, but it might be far more difficult if there were a series of incidents. Hacking could also cause accidents. In a test, hackers got into the system of a Jeep Cherokee and played with the controls.
Another possibility is that autonomous cars may be too safe. John Adams, a geography professor at University College London, points out that fully autonomous cars would have to be programmed to avoid pedestrians and cyclists. So any stroppy pedestrian or cyclist could stop a car simply by stepping in front of it. Manufacturers are also worried that the predictability of autonomous cars might make them targets: conventional drivers might feel happier cutting up a self-driving car, knowing that it will brake. Manufacturers are considering whether self-driving cars should hide their identity. Conversely, programming the vehicles to allow a collision in particular circumstances would create a legal nightmare.
If the introduction of autonomous cars proved to be viable, then the disruption to the automotive industry, which currently produces 91.5m vehicles worldwide each year and has a turnover of some $2 trillion, would be enormous. No industry stands still, and the industrial past cannot veto the future. But when the issue of bailing out Detroit has been a live question for Washington over the last few years, you might have thought it would give policymakers pause. Instead, they have grown giddy on the hype.
One of the early UK-funded experiments, which it is hoped will start next year, is the platooning of trucks on a motorway. The idea is that several lorries can be driven by just one person in a lead vehicle, with the rest, driverless, remaining at a safe distance behind in an autonomous convoy. Even here, with such a limited experiment, there are difficulties. As King points out: “In Holland when they did this, they found that smaller trucks would try to squeeze in between those in the convoy which posed a risk to both the driverless and driven vehicles.”
Some of Osborne’s money is going to the Oxford Mobile Robotics Group. Its head, Professor Paul Newman, is convinced that the technology will be extremely beneficial, but won’t be drawn on the timing: “This autonomy technology is infused in an environment of computing, communications and machine-learning that this species has never seen before. These questions of pinning the date of its introduction down make no sense in that environment. In a year, I will have on my laptop such different ideas of how we are going to do stuff, this conversation will look old.” Progress may be fast for people who make the same journey every day, but “if you wanted it to take you anywhere to anywhere in any weather, it will take between three and 30 years.”
Milton Keynes is at the vanguard of the driverless world. Newman is developing a pod that will take people from the station along set routes on pavements. The project is running late because of a manufacturing problem, but the first runs, on a limited route in a pedestrianised area, took place in October.
This is a place that has seen the future before. It was once the model of a car-based tomorrow. It is an urban area—calling it a city or even town is to give it an impression of coherence that is undeserved—designed around the needs of the car user. Step out of the railway station and you soon find yourself in Midsummer Boulevard, an inappropriately named street built as part of an American-style grid system. The dual carriageway down the middle is supplemented by parallel service streets that double up as parking spaces. There are hotels and offices but few pedestrians because, just as in most American cities, the distances between facilities deter people from walking. The design of Milton Keynes, with its emphasis on the car, was the result of a battle between proponents of different visions for the new town. An alternative concept, with a monorail supported by a comprehensive bus network connecting the neighbourhoods, was rejected.
There is a risk that the impetus for driverless cars could result in yet another attempt to mould cities around individual rather than collective transport systems. Much of the thinking behind the planning of Milton Keynes has since been rejected, and many planners and politicians now realise that urban areas are not suited to unlimited and unfettered access for all vehicles. Yet autonomous cars could be seen as a panacea: a technological fix for problems that cannot be solved that way, because they involve fundamental issues of investment, planning and congestion. We are still righting the wrongs of the 1960s and 1970s, when town centres were destroyed and replaced by multi-lane highways, gyratories and multi-storey car parks, all because ubiquitous individual car ownership was regarded, erroneously, as the desirable future of travel.
It is now widely accepted that the opposite is true: the key to health and prosperity, both of people and commerce, is walking, cycling and the use of public transport, and this is where the investment should be made. Governments risk being diverted from formulating coherent transport policies by the advent of a technology that may never be viable.