Uber back on roads, but not yet back on track

Uber’s recent chastening experience in autonomous driving is really a metaphor for the whole self-driving industry, which has had to adjust its sights downwards over the past year after a spate of incidents during trials. Uber is now about to get its autonomous vehicles back on the road, albeit with reduced range and speed, after gaining permission from the US state of Pennsylvania for local tests.

The company has effectively admitted that its software was not fit for purpose at the time of the fatal crash nine months ago in March 2018, by stressing that it has since improved the overall system design after an internal review. There were signals that latency was not low enough for the car’s decision making to execute evasive action in time at the speeds the cars were travelling.

The cars had been allowed to reach 55 mph, but when tests resume the maximum will be just 25 mph (40 km/h). That is a big difference when we consider that available reaction time decreases in line with the square of the speed. On that basis, the latency budget under the renewed testing will be around five times as generous.
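As a rough check of that figure, the square-of-speed relationship can be worked through directly. The snippet below is a back-of-the-envelope sketch of the article's own assumption, not Uber's analysis; the variable names and the exact scaling are illustrative only.

```python
# Back-of-the-envelope check of the latency budget claim, assuming
# (as the article does) that available reaction time scales with 1/v^2.

OLD_SPEED_MPH = 55  # previous test ceiling
NEW_SPEED_MPH = 25  # ceiling for the renewed Pittsburgh tests

# Under the square-of-speed assumption, the latency budget grows by
# the square of the speed ratio.
budget_ratio = (OLD_SPEED_MPH / NEW_SPEED_MPH) ** 2

print(f"Latency budget multiplier: ~{budget_ratio:.1f}x")  # ~4.8x, i.e. roughly five times
```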

Furthermore, the tests will be confined to a relatively tranquil one-mile loop linking two of Uber’s offices in Pittsburgh, and will not take place either at night or in poor weather. It is, then, something of a return to the drawing board. Yet the tests will feature the same vehicles running considerably more advanced software than before, so at least Uber can say it has taken a step forward.

These events have, though, underlined that autonomous driving is a harder problem to solve than some had appreciated. Even now some pundits are writing these accidents off as minor setbacks that will not derail the inexorable advance towards full SAE Level 5 autonomous driving within a decade. Such views are based on the idea that, for all its complexities, self-driving is still a “narrow AI” problem that is well defined within a limited domain.

But it is not as narrow as, say, machine vision on its own, since it amounts to creating autonomous robots capable of operating safely at speed in a complex and semi-chaotic environment where some other vehicles may take unpredictable and even dangerous actions, as sometimes happens when drivers are under the influence of alcohol or drugs. Put simply, when the autonomous driving problem is completely solved up to Level 5, a lot of other AI problems that seem hard today will also yield.

There is also the ethical challenge, with no sign of it being resolved anytime soon, amid continuing uncertainty over whether there will be any global consensus, given the significant cultural variations between regions and countries.

Humans faced with a split second to decide between two or more options in an emergency tend to act without being consciously aware that any ethical balance has been struck. But autonomous agents would need some basis embedded in their core for making decisions that could, in extreme cases, involve deliberately sacrificing the safety of one individual for the sake of others. So even if the technical problem of autonomous driving is solved faster than some expect, the full regulatory and ethical framework will take longer to establish.

The Uber crash and its aftermath meanwhile highlight a more immediate debate to be resolved: the role of Level 2 and Level 3 driving on the road towards full autonomy, where human safety drivers are supposed to remain in charge of the vehicle, and therefore attentive to road conditions. In the Uber case, the safety driver was not concentrating and could almost certainly have averted the crash, but tests have shown that humans are incapable of maintaining the required level of attention for potentially hours on end to back up a semi-autonomous driving system.

In practice, even people who do keep their minds on the task as far as possible are not in as full a state of concentration as when they are actually driving the vehicle directly. It takes them around five seconds, or perhaps longer, to re-engage their brain fully in the event of an emergency that requires them to take over the controls from, say, a Level 3 autonomous vehicle, and by then it is usually too late.
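To see why five seconds is usually too late, it helps to translate that delay into distance. The sketch below takes the roughly five-second re-engagement window cited above as given; the speeds chosen are arbitrary examples, not figures from any specific test.

```python
# Illustrative only: how far a vehicle travels during the roughly
# five-second window cited above for a human to fully re-engage.
# The speeds below are arbitrary examples, not from any specific test.

REENGAGE_TIME_S = 5.0   # approximate takeover delay cited in the article
MPH_TO_MPS = 0.44704    # miles per hour to metres per second

for speed_mph in (25, 40, 55):
    distance_m = speed_mph * MPH_TO_MPS * REENGAGE_TIME_S
    print(f"At {speed_mph} mph the car covers ~{distance_m:.0f} m "
          "before the human is fully back in the loop")
```

At 55 mph that works out to roughly 120 metres travelled blind, which makes plain why a distracted safety driver offers little real protection.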

For that reason, Waymo has decided to bypass Levels 2 and 3 and go straight to Level 4, which avoids the human/machine interface challenge. But this has the disadvantage of losing out on the opportunity to gain valuable data and experience during what could be a protracted period of Level 2 and 3 driving, while Level 4 is restricted to controlled zones in most areas.

This has led most automobile makers, as well as Uber, to take a more iterative approach and work up through the gears towards full autonomy. There is sense in that too, given that autonomous driving is a journey that involves adding layers of capability step by step, with machine learning and other techniques under the banner of AI playing an increasing part. Even Waymo has safety drivers behind the wheel of its ride-hailing autonomous taxis just launched in Phoenix, Arizona, positioned rather like instructors poised to take over from a learner via dual controls.

Waymo argues that the taxis are capable of Level 4 driving, but it is of course anxious to minimize the risk of a damaging accident. So we could be splitting hairs in trying to distinguish between Level 4 and Level 3 driving when there is a safety driver in either case. The main point, though, is that all players are now proceeding more cautiously even within the frameworks they are allowed, rather than pushing the envelope during testing. They are, however, pressing ahead furiously with development of the underlying algorithms, based on machine learning and other techniques, for incorporation into their systems.
