Chinese web titan Baidu has picked Intel to provide the Mobileye Responsibility Sensitive Safety (RSS) model for Baidu’s open source Apollo platform, which Baidu is pushing as a standard approach for autonomous cars. Baidu is also going to use Mobileye for its self-driving system, which it plans to sell to Chinese OEMs, for use in China.
This looks like a good early win for Intel, but Apollo itself still hasn't spread like wildfire. Baidu claims 116 global partners in the Apollo program, and sure, many of the major automakers are among them. But we've yet to see one of those car companies announce that it will use Apollo as the basis of its next model, let alone a family or an entire range.
However, Baidu has just announced that it has begun production of its Apolong bus, a 14-seater SAE Level 4 self-driving vehicle. Baidu will be rolling this out in China, specifically Beijing, Pingtan, Shenzhen, and Wuhan, chiefly in areas like airports and tourist destinations. It will also be working with SB Drive, SoftBank’s self-driving division, to deploy Apolong in Japan, sometime next year.
There's also the current geopolitical climate to consider. The US government has shown outward hostility to Chinese companies, and we can't imagine the US media taking too kindly to fleets of Apollo-based cars – those headlines are too juicy to pass up, even if an open source community has pored over the code and found nothing troubling. Huawei ran afoul of this, and while ZTE breached Iranian sanctions, it also got caught up in allegations of spying, which China of course strongly denies.
With the RSS model – essentially a decision-making process that a computer system can use to make safety-conscious choices in a human-like manner – Intel hopes that Apollo will be able to address one of the main concerns facing AI systems: that we often can't explain how or why they have arrived at a given conclusion, due to the black-box nature of the technology.
Riot covered the RSS back in October. The model aims to create a mathematical version of 'responsibility and caution' and to define a 'safe state' in which a self-driving car cannot be the cause of an accident, regardless of what the vehicles around it are doing. Basically, it is looking to ensure that manufacturers are never the target of a humongous lawsuit – these are the mathematical rules to follow to adhere to Intel's guidelines.
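To give a flavor of what "mathematical rules" means in practice, the published RSS work defines quantities like a minimum safe following distance – the gap at which the rear car cannot cause a rear-end collision even if the car in front brakes as hard as physically possible. The sketch below follows that published formulation; the parameter values are our own illustrative assumptions, not Mobileye's figures.

```python
# Sketch of the RSS safe longitudinal distance, following the published RSS
# formulation. All parameter defaults are illustrative assumptions.

def rss_safe_longitudinal_distance(
    v_rear: float,              # rear (ego) car speed, m/s
    v_front: float,             # front car speed, m/s
    rho: float = 0.5,           # ego response time, s (assumed)
    a_max_accel: float = 2.0,   # max ego acceleration during rho, m/s^2 (assumed)
    a_min_brake: float = 4.0,   # min braking the ego guarantees, m/s^2 (assumed)
    a_max_brake: float = 8.0,   # max braking the front car may apply, m/s^2 (assumed)
) -> float:
    """Minimum gap (metres) so the rear car cannot cause a rear-end collision,
    even if the front car brakes at its maximum rate."""
    # Worst case: the ego accelerates for the whole response time...
    v_rear_after = v_rear + rho * a_max_accel
    # ...then brakes gently, while the front car brakes as hard as it can.
    d = (
        v_rear * rho
        + 0.5 * a_max_accel * rho ** 2
        + v_rear_after ** 2 / (2 * a_min_brake)
        - v_front ** 2 / (2 * a_max_brake)
    )
    return max(d, 0.0)  # a non-positive value means any gap is safe

# e.g. both cars travelling at 20 m/s (~72 km/h):
print(rss_safe_longitudinal_distance(20.0, 20.0))  # → 40.375
```

A car that always maintains at least this gap can never be the at-fault party in a rear-end collision – which is the "safe state" the model is built around.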
Notably, the RSS is technology-agnostic, as it isn't a dedicated piece of silicon or a software stack. In theory, this means that anyone could use it, which makes it well suited for an open source project like Apollo. However, the Intel hardware being used by Baidu is the Mobileye Surround Computer Vision Kit – a combination of 12 cameras, software, and chips that provides a view of the situation around a car, along with all the ADAS data needed to perform self-driving functions. Good luck finding information about the kit on Mobileye's website, though.
As far as Mobileye’s marketing goes, we’ve progressed past its EyeQ3 SoC, first introduced in 2014, and are now into the EyeQ4 era – where a car using the chips can let a driver take their eyes off the road entirely, confident that the car is going to understand the world around it sufficiently to drive safely.
The next phase, the EyeQ5, will enable “Mind Off” driving, and Mobileye has scheduled that for 2020. Notably, that chip will be using 10W of electricity in operation – a very big step up from the 3W planned for the EyeQ4. In terms of performance, Mobileye says the EyeQ3 has 0.256 TOPS, the EyeQ4 has 2.5 TOPS, and that the EyeQ5 will clock in at 24 TOPS. Mobileye outlines the SoCs here.
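Taking those vendor figures at face value, the power jump looks less dramatic once you work out performance per watt – a quick back-of-envelope calculation:

```python
# Performance-per-watt from the figures Mobileye has given:
# EyeQ4 at 2.5 TOPS / ~3W, EyeQ5 at 24 TOPS / ~10W (both vendor claims).
chips = {
    "EyeQ4": {"tops": 2.5, "watts": 3.0},
    "EyeQ5": {"tops": 24.0, "watts": 10.0},
}

for name, spec in chips.items():
    print(f"{name}: {spec['tops'] / spec['watts']:.2f} TOPS/W")
# EyeQ4: 0.83 TOPS/W
# EyeQ5: 2.40 TOPS/W
```

So while the EyeQ5 draws over three times the power, it should deliver roughly three times the efficiency – nearly a tenfold raw performance gain for the extra wattage.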
Apollo is already up to v3.0, having recently added support for self-valeting and code for minibus shuttles and 'microcars.' Baidu is hoping its machine-vision prowess will tempt automakers to adopt Apollo, thanks to its ability to use facial recognition and analysis for in-car security and wakefulness, as well as service personalization. While not as big a step forward in terms of features as v2.5 or v2.0, it shows undeniable steady progress in the project.
The Intel and Baidu collaboration continues in the data center – with Baidu announcing plans to use Intel Altera FPGAs in its cloud computing offering, as well as the upcoming Movidius Vision Processing Unit (VPU) in a camera pitched at retailers looking to embrace machine-vision analytics. Baidu will also be optimizing its PaddlePaddle machine-learning framework to run on Intel’s Xeon Scalable CPUs.
Intel may well need a PaddlePaddle to navigate the creek of encroaching machine-learning competitors. Having enjoyed such a commanding lead in CPU market share, it has not managed to counter the arrival of GPUs as the favored workhorse for AI-based tasks. Dedicated processors like Google's TPUs could also threaten its share of AI workloads, and Intel's Xeon Scalable family isn't quite equivalent.
Intel bought its way into a market-leading share with Mobileye, and did much the same with Altera – both acquisitions north of $15bn. Looking to expand beyond the data center, the new applications are important to ensure that Intel can still fund the massive R&D programs that have kept it ahead of the competition for so long. Brian Krzanich, the man who oversaw these two purchases, was recently booted from the company. Intel is still looking for a replacement.
But Baidu and Intel might not be friends for long in the AI silicon space. Baidu unveiled its new Kunlun AI chip at its Create conference, the Kunlun 818-300, which it says will be used to train AI and ML models. There's another chip, the Kunlun 818-100, intended for inference tasks on smaller devices.
Baidu has been exploring FPGAs since 2011, and this has led it to create a chip capable of 260 TOPS, with a memory bandwidth of 512GBps – about 30x faster than Baidu's first FPGA design. It is not clear whether Baidu plans to boot Intel out of its cloud computing resources in the near future and use its own chip instead.
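For a rough sense of how far Baidu has come, the two figures it has quoted imply the throughput of that first FPGA design – a back-of-envelope estimate, assuming "30x faster" refers to raw TOPS:

```python
# Implied throughput of Baidu's first FPGA design, assuming the quoted
# "about 30x" refers to raw TOPS (our assumption, not Baidu's statement).
kunlun_tops = 260.0
speedup = 30.0
first_gen_tops = kunlun_tops / speedup
print(f"~{first_gen_tops:.1f} TOPS")  # → ~8.7 TOPS
```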