The autonomous vehicle industry is all the rage at the moment, with many manufacturers promising self-driving vehicles in their fleets in as little as five years. This is despite the slow pace of legislation and, indeed, of the infrastructure required to enable the technology.
There is a mountain to climb. The technology will keep advancing, of course, and we'll see many new milestones in the coming years. I was impressed when Audi's A7 drove from San Jose to Vegas for CES a couple of years ago. But there is the social side to think about, for one: driving employs a great many people, and the economic impact of autonomy is still being studied.
Several of the driverless pioneers have also admitted that the built environment, that is, the roads and motorways their autonomous products will have to navigate, will need to evolve to help the vehicles pick their way through our complicated urban environments. It's far better to have a traffic signal tell the car what light it is showing than to have the car try to interpret the light it thinks it can see. A journalist who took a ride in Google's driverless Lexus RX450h noted that at one point it got confused by a jogger on the other side of the road, causing it to slam on the brakes. A Google engineer admitted that if a normal car had been following, it would likely have run into the back of them.
Yet this is the thin end of the wedge. Let's say you're approaching a junction with a traffic signal but can't see what colour it is, perhaps because it's obscured. You'd fall back on other clues instead, such as whether traffic is flowing across the intersection in a different direction; if it is, you'd know your light is red. The hazards drivers face on a daily basis are incredibly difficult for a computer program to solve, because so much of a driver's workload is subconscious and subjective, and this kind of hazard perception is particularly hard to hand over to a computer. And that's before we consider that the whole world won't switch overnight to driverless vehicles. The unpredictability of human nature and the variable quality of driving standards demand a pretty large safety net.
A great deal of autonomous industry development has taken place in the US, thanks of course to Silicon Valley, but for me the real test will be in parts of the UK, where towns and cities were established long before the car came along. Our penchant for street furniture, inconsistent signage, variable surface quality and irregular street widths will make an interesting set of inputs for a computer to process. And how would you explain to an American developer what a mini-roundabout is?
Despite the negativity here, I'm actually a real fan of the tech, having taken my first ride in a driverless vehicle this year. But I think we can get a faster return on the technology by not trying to address the whole world first. There are two tangential areas in vehicle autonomy that we can take advantage of more quickly.
The first is something we're already seeing: driver aids. Tesla's Autopilot is at the top of this tree, but so are radar-guided cruise control, lane assist, emergency braking assistance (particularly at low speeds, where so many common accidents happen) and motorway automation such as road trains. This tech isn't full automation, but it will surely guard against the most common driver pitfalls. Even Tesla admits that its Autopilot requires the driver to keep both hands on the wheel, so it's a form of assistance rather than true automation.
The other is to target vehicles with heavily repetitive duty cycles. This would not only make the journey safer, as the monotony is removed for the driver, but could also open the door to alternative fuels, since movements would be predictable. Shuttle buses are a prime example, particularly where a number of them are used to smooth out demand, as at airports, theme parks and large conferences. We'd also learn more about driverless tech and the infrastructure required to support it.