California passes divisive 2020 IoT security law

California has approved SB-327, a bill that will require connected-device manufacturers to take appropriate measures to protect their products. The law does not come into force until the beginning of 2020, however, and a number of critics argue that it targets the wrong aspects of device security. It also seems that the law’s impact will depend heavily on the judges presiding over whatever cases are eventually brought under it.

The text itself is pretty concise. It calls for the manufacturer of a connected device to equip it with a “reasonable security feature” or features that are “appropriate” to the nature and function of the device, “appropriate” to the information it collects, and “designed to protect the device and any information contained therein from unauthorized access.”

A “reasonable security feature” is defined as either a preprogrammed password that is unique to each device, or a mechanism that requires the user to generate new means of authentication (a new password, in practice) before the device grants access for the first time.

Essentially, the law requires device makers to take measures to protect their devices, especially where a device handles sensitive data. The emphasis in “unauthorized” access is on the device owner’s authorization – that is, what the owner has actually consented to. Passwords are hardly revolutionary, but the intent here is to kill the practice of shipping hard-coded passwords, or generic passwords reused across multiple devices.
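To make the second option concrete, here is a minimal sketch of a first-boot flow that refuses to do anything useful until the owner has set their own password. It is an illustration only: the credential path and function names are assumptions made up for this example, not anything the bill prescribes.

```python
import hashlib
import json
import os
import secrets

CRED_PATH = "/etc/device/owner_credentials.json"  # hypothetical storage location

def credentials_configured() -> bool:
    """True once the owner has set a password; False on a factory-fresh unit."""
    return os.path.exists(CRED_PATH)

def set_owner_password(new_password: str) -> None:
    """Store a salted hash of the owner's chosen password, never the password itself."""
    if len(new_password) < 8:
        raise ValueError("password too short")
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", new_password.encode(), salt, 200_000)
    with open(CRED_PATH, "w") as f:
        json.dump({"salt": salt.hex(), "hash": digest.hex()}, f)

def handle_request(action: str, supplied_password: str = "") -> str:
    """Gate every request behind the one-time setup step required before first access."""
    if not credentials_configured():
        if action == "set_password" and supplied_password:
            set_owner_password(supplied_password)
            return "password set; device unlocked"
        return "setup required: choose a new password before using this device"
    return "proceed to normal authentication"
```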

Critics argue that the wording is too loose, and that it does not go far enough in specifying technical requirements. Proponents say that the loose wording works in the law’s favor, as it gives judges enough leeway to make an “appropriate” ruling, so that the most egregious violations can be dealt with properly. And because the definitions are not fixed, they can become more demanding over time, tracking improvements in the technology.

The more structural criticism is that the law applies only in California, and that the federal equivalent introduced last year has emphatically stalled. Similarly, there are no such laws yet in Europe or China, meaning that a device maker either has to overhaul an entire production process to accommodate the Californian requirements, or opt not to enter that market.

Now, it seems likely that similar laws will be introduced in other countries, and it also seems probable that companies will want to avoid dropping out of what would be the fifth largest economy in the world, were California its own country. Its population of just under 40mn people is perhaps the better measure of the market at stake, since Hollywood and Silicon Valley distort the GDP figure. For context, walking away would be comparable to taking the strategic decision to no longer sell devices in Austria, Belgium, and the Netherlands combined.

Consequently, it seems unlikely that a solely Californian requirement is going to be enough of an incentive to bring about global changes. If the national law comes to pass, then the US can throw enough weight around to exert influence, and the EU will certainly be considering similar laws.

So then, if such laws will inevitably come to pass, are they particularly onerous? It seems not – at least if we take the unique-password criterion as the threshold. That could be achieved with a label printer and a bit of code to generate and apply a unique password to each unit, although a more fully featured system for post-installation authentication is going to be more burdensome.
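As a rough illustration of how light that burden is, the sketch below generates a unique random password for each unit as it is provisioned and builds the text for a printed label. The serial numbers and the provisioning record format are assumptions made up for the example, not anything specified in SB-327.

```python
import secrets
import string

# Letters and digits only, to keep the printed label easy to type.
ALPHABET = string.ascii_letters + string.digits

def generate_unique_password(length: int = 12) -> str:
    """Produce a random per-device password at the factory provisioning step."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def provision_device(serial_number: str) -> dict:
    """Assign a unique default password to one unit and return the label payload.

    On a real production line this record would also be written into the
    device's storage and sent to the label printer.
    """
    password = generate_unique_password()
    return {
        "serial": serial_number,
        "default_password": password,
        "label_text": f"S/N {serial_number}\nDefault password: {password}",
    }

if __name__ == "__main__":
    # Hypothetical batch of three units rolling off the line.
    for unit in ("CAM-000101", "CAM-000102", "CAM-000103"):
        print(provision_device(unit)["label_text"], end="\n\n")
```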

However, there are critics who think the bill is targeting the wrong areas. Robert Graham, of Errata Security, argues that “the point is not to add ‘security features’ but to remove ‘insecure features.’ For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical ‘magic pill’ or ‘silver bullet’ that we spend much of our time in infosec fighting against.”

There is a notable distinction that this critique misses. In attacks like Mirai, the malicious code exploited vulnerabilities that were left in the devices through developer laziness or incompetence. Those sorts of mistakes will remain for as long as device makers keep buying such systems and doing the bare minimum to them, and as long as those vulnerabilities are present, the problem will persist. The manufacturers are not choosing these insecure features and then making a conscious decision to leave the problems in place; rather, they are most likely using the cheapest off-the-shelf configuration and making do.

So while the Graham line of thinking is sound, in that we would all agree a smart home device probably shouldn’t be scanning for open Telnet ports, it doesn’t address the root cause of those sorts of capabilities. Graham’s conclusion, which has been seized on by many outlets, is that “this law is based upon an obviously superficial understanding of the problem,” and that “it in no way addresses the real threats, but at the same time, introduces vast costs to consumers and innovation.”

Graham does make some sage points, though. He notes that Mirai might prove to be a last gasp, as the drought of IPv4 addresses means that many new IoT devices will sit behind firewalls and network address translation (NAT), out of direct reach of the internet. He also argues that the law is backward-looking, and that it should instead be pushing for “isolation mode” in WiFi, which would prevent devices on the same network from talking to each other and then attacking their neighbors.
