Legislation struggles to catch connected cars

The recent spate of incidents involving self-driving cars, mostly in the US, has only added to the confusion for legislators struggling to develop coherent and consistent rules for compliance, safety and liability. Not for the first time, manufacturers and others involved in development have complained that government and regulatory bodies are lagging far behind the technology.

But that is inevitable and legislators do find themselves between a rock and a hard place. The rock is the risk that premature legislation hinders the development of autonomous driving and acts as a disincentive or competitive disadvantage for a given country. The hard place is the lack of clarity over what will be permissible, as well as uncertainty over issues such as liability, which could again hinder the field while bogging down parties in long court cases. There is the danger that the field could be dictated by test cases in the absence of any clear strategy. On top of this there is the issue of what role regulations will play in cyber security.

In practice it is right that legislators take their time, developing first a framework and then fitting laws within it. That is roughly the approach being taken by the UK government, whose Centre for Connected and Autonomous Vehicles (CCAV) has asked the country’s Law Commission to prepare a package of reforms to accommodate autonomous driving – although the initial target of 2021 for implementation looks ambitious, because it aims to plug all gaps in legislation, including allocation of liability between drivers, manufacturers and other vehicles, which may or may not themselves be autonomous.

The European Union, meanwhile, acutely aware these days of the competitive threat from the US and China in particular, has emphasized the need for the regulatory environment to be as accommodating as possible for the industry in its longer-term GEAR 2030 project, aimed at establishing coherent rules across all member states. Aligned with this is the agreement among 29 European states – excluding the UK in the wake of the Brexit vote – to develop digital corridors across the continent, facilitating cross-border driving of autonomous vehicles within this supposedly emerging consistent regulatory framework.

The US is perhaps at the front of the regulatory march, but there the situation is even more confusing because of the split responsibilities between the federal government and individual states, which themselves are progressing at varying rates and in different directions. This is one reason why many trials are being confined to individual states.

In fact, states seem to be leading the way with the federal government almost wringing its hands, certainly in the current climate where putting America first tends to prioritize competitiveness over safety. The default is that self-driving cars are perfectly legal simply because there has never been any rule prohibiting them, and the government is reluctant to pass any legislation at this stage as it made clear in March 2018 when it organized what it called a “Public Listening Summit on Automated Vehicle Policy”. Stakeholders and the public in general were invited to attend a summit with senior leadership from the US Department of Transportation, alongside state and local partners, as well as a scattering of industry, academia and safety advocates. Inevitably this diverse lobby of conflicting interests reached no firm conclusion.

However, urgency has been increased by recent incidents. Even though the latest, involving a Waymo autonomous minivan, occurred while the vehicle was in manual mode, it highlighted the issue of driver attention during the transitional phases of SAE Level 2 and Level 3 self-driving. Meanwhile, further details about the earlier fatal Uber crash, in which the car failed to apply the emergency brakes as a pedestrian unexpectedly crossed the road, have raised regulatory questions.

Uber has admitted that its software responsible for determining how the car reacts to objects it detects was probably at fault, and has kept its autonomous program suspended. This raises the issue of balancing false positives against false negatives, and of leaning too far in favor of avoiding the former. In other words, tuned to avoid braking hard erroneously for harmless objects such as soft rubbish floating across the road – which would itself risk causing an accident – the system was more likely to fail to recognize the pedestrian on a fast road.
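The trade-off can be illustrated with a toy sketch – this is not Uber's actual software, and the detections, confidence values and thresholds below are invented for illustration. A perception stack that brakes only above a confidence threshold makes fewer phantom stops as the threshold rises, but misses more genuine hazards:

```python
# Illustrative sketch of the false-positive / false-negative trade-off.
# All values are hypothetical; no real perception system is this simple.

def should_brake(obstacle_confidence: float, threshold: float) -> bool:
    """Brake only when confidence that the object is a real hazard
    exceeds the chosen threshold."""
    return obstacle_confidence >= threshold

# Hypothetical detections: (confidence, is_real_hazard)
detections = [
    (0.30, False),  # drifting plastic bag
    (0.55, True),   # partially occluded pedestrian
    (0.90, True),   # pedestrian clearly in the vehicle's path
]

def error_counts(threshold: float):
    """Count false positives (braking for non-hazards) and
    false negatives (failing to brake for real hazards)."""
    false_pos = sum(1 for c, real in detections
                    if should_brake(c, threshold) and not real)
    false_neg = sum(1 for c, real in detections
                    if not should_brake(c, threshold) and real)
    return false_pos, false_neg

print(error_counts(0.25))  # low threshold: brakes for the bag -> (1, 0)
print(error_counts(0.70))  # high threshold: misses the occluded pedestrian -> (0, 1)
```

Raising the threshold eliminates the phantom stop but loses the occluded pedestrian – exactly the direction in which, by its own admission, Uber's tuning appears to have leaned.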

This raises questions around safety and ethics that will impinge on regulations, the question being how a car should react in rare but high-risk situations, such as when a pedestrian dangerously crosses a multi-lane highway. In practice, human drivers usually hedge their bets when reacting almost instinctively: rather than lurching right across into a neighboring lane or braking absolutely flat out at high speed, they take some evasive action in the hope that it will be sufficient to avoid running the person over. Some risk is involved, but with the hope of avoiding a collision with other vehicles as well.

In theory, an autonomous system would be able to calculate the situation more accurately, but would still (for the foreseeable future) face uncertainty over how other vehicles would react. There is therefore the ethical dilemma of how much risk the system should accept, both for the vehicle in question and for others nearby on the road, in order to avoid running that pedestrian over.
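In principle, such a calculation can be framed as minimizing expected harm across candidate maneuvers. The sketch below is purely illustrative – the maneuvers, probabilities and harm weights are all assumed, and real motion planners weigh far more factors:

```python
# Hypothetical expected-harm minimization; every number here is invented
# for illustration and not drawn from any real autonomous driving system.

maneuvers = {
    # maneuver: (P(hit pedestrian), P(secondary collision with traffic))
    "full_brake":    (0.05, 0.50),
    "partial_brake": (0.20, 0.10),
    "swerve":        (0.10, 0.45),
}

# Relative harm weights (assumed): striking a pedestrian is weighted
# more heavily than a vehicle-to-vehicle collision.
HARM_PEDESTRIAN = 10.0
HARM_COLLISION = 4.0

def expected_harm(p_pedestrian: float, p_collision: float) -> float:
    """Expected harm of a maneuver under the assumed weights."""
    return p_pedestrian * HARM_PEDESTRIAN + p_collision * HARM_COLLISION

best = min(maneuvers, key=lambda m: expected_harm(*maneuvers[m]))
print(best)  # with these numbers the hedged partial brake wins
```

With these assumed figures the hedged partial brake minimizes expected harm, echoing the instinctive human compromise described above; different weights would pick a different maneuver, and choosing those weights is precisely where the ethical dilemma lies.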

Partly because of such dilemmas, some US states have been much more cautious than others. In California, the technology can only be tested and is not approved for consumer use. At least 14 other states are known to be working on regulations, while another 12 have voted against them so far. Some states have no rules and have said they would invoke existing ones, such as laws prohibiting reckless driving, in order to revoke registrations in the event of incidents or to refuse permits in future. Some have adopted a half-way house approach, with New York state requiring a driver to keep one hand on the wheel at all times while not expressly prohibiting self-driving.

This situation is hardly ideal and is detested by automobile makers themselves, although there is the saving grace that more lenient states are encouraging trials in their areas. No wonder some are collaborating with Chinese partners to take advantage of a laxer regime there, although China now looks to be catching up and has in fact been prompted by the Uber crash to develop national regulations. Until recently, China had a patchwork approach similar to that of the US, allowing major cities such as Beijing and Shanghai to set their own rules.

But just last month, the government announced it would move to harmonize rules across the country and keep pace with global developments on that front. Initial rules include requirements that all cars must first be tested in non-public zones, that road tests can only be conducted on designated streets, and that a qualified driver must always be present in the driver’s seat, ready to take control. These rules were announced by Xin Guobin, vice minister of the Ministry of Industry and Information Technology, who cited the recent crashes in the US involving Tesla and Uber as reasons for China to make self-driving car safety a top priority.

Underlying all these moves is the reality that the safety bar is being set much higher for autonomous driving than for human driving – as it should be, given that a major motivation is the potential to cut road accidents significantly. Given human nature, as reflected in media reporting, every fatality and associated human tragedy resulting from an accident involving a self-driving car will be widely reported, while the far greater number caused by sometimes grotesque human error or drunk driving goes largely ignored.

It is also inevitable that full field testing of autonomous driving must take place in practice, and that it will occasionally result in fatalities that would have been avoided had the software been fully mature and debugged. Regulations will therefore ensure that autonomous driving tests are confined for longer to relatively safe zones, which will impede progress somewhat. Perhaps that is no bad thing.
