Machine-vision silicon specialist Ambarella has revealed its CV2 processor, at a time when the market is asking exactly what went wrong in the Uber crash. Ambarella used a self-driving Lincoln MKZ to show off the system's capabilities, driving it around Silicon Valley to demonstrate its safety.
The car, nicknamed EVA (Embedded Vehicle Autonomy), does not use LiDAR to navigate. Instead, it relies on a radar system and an array of Ambarella-powered cameras: six roof-mounted long-range cameras and four short-range units covering the front, back, and sides. That complete aversion to LiDAR is rare in the industry, where most automakers consider the technology essential.
For Ambarella, EVA’s camera-based approach is clearly intended to prove the viability of its systems. EVA was on show at CES, running on the CV1, alongside a CV1-powered drone. The push into cars and drones comes as GoPro, Ambarella’s biggest customer, has stalled, scrapping its drone program and contending with a share price that has plummeted over the past six months.
Ambarella knows that its current lead in low-power vision processing chips will eventually be eroded by cheaper alternatives, just as GoPro’s lead in action cameras has been. That is why it began looking for other high-value markets into which it could expand, a search that has led it to cars and drones.
Ambarella’s focus on autonomous vehicles led it to acquire VisLab, an Italian company that had been developing self-driving technologies since 1998, in 2015. Since then, the CV range of machine-vision chips has emerged, but they are entering a market currently dominated by Mobileye, an Israeli firm acquired by Intel for $15.3bn last year.
Speaking to Digital Trends, Ambarella said it does not want to become a Tier One automotive supplier selling complete systems, as Bosch and Continental do. Instead, it wants to supply the CV2 chips and software to the Tier Ones, to software developers, and perhaps directly to the automakers themselves. CV2 production begins later this year, and the company expects no trouble finding customers.
In terms of specs, the CV2 supports stereo 4K video at 60 frames per second, with AVC and HEVC encoding and HDR support for better processing in low-light and high-contrast scenes. Built around a 1.2GHz quad-core ARM Cortex-A53 with NEON DSP extensions and an FPU, the 10nm processor has plenty of I/O options, including gigabit Ethernet, CAN bus, USB 2.0, HDMI 2.0, and dual SD cards. LED flicker mitigation and Lens Distortion Correction (LDC) are perhaps more prominent in the marketing than they might have been a few weeks ago.
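To put the stereo 4K spec in perspective, a quick back-of-the-envelope calculation shows the pixel throughput involved (assuming full 3840x2160 UHD frames, which may not match the actual sensor formats):

```python
# Rough pixel throughput for stereo 4K video at 60 fps.
# Assumes full 3840x2160 UHD frames; real sensor formats may differ.
WIDTH, HEIGHT = 3840, 2160
FPS = 60
STREAMS = 2  # stereo camera pair

pixels_per_second = WIDTH * HEIGHT * FPS * STREAMS
print(f"{pixels_per_second / 1e6:.0f} Mpixels/s")  # prints "995 Mpixels/s"
```

Close to a billion pixels per second, before the CNN workload even starts, which gives some sense of why a dedicated vision processor is needed.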
The other major feature is the CVflow processor, with Convolutional Neural Network (CNN) support for deep-learning applications; this is what powers the image recognition used to help cars navigate safely. Ambarella says the CV2 delivers 20x the DNN performance of the CV1, and that its ability to run multiple algorithms simultaneously improves accuracy. That extra performance also reduces the need for other chips, meaning Ambarella is hoping to squeeze out rivals via consolidation.
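The core operation a CNN accelerator like CVflow performs is convolution. As a toy illustration only (the CVflow engine's actual dataflow is proprietary), here is a minimal pure-Python single-channel 2D convolution with 'valid' padding and stride 1:

```python
def conv2d(image, kernel):
    """Single-channel 2D convolution, 'valid' padding, stride 1.

    A toy sketch of the operation a CNN accelerator performs billions
    of times per frame; real hardware fuses many channels and layers.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 horizontal edge-detection (Sobel) kernel on a 4x4 ramp image.
image = [[c + r for c in range(4)] for r in range(4)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
print(conv2d(image, sobel_x))  # prints [[8, 8], [8, 8]]
```

The uniform output reflects the ramp image's constant horizontal gradient; detection networks stack thousands of such filters, which is the workload the CVflow engine is built to accelerate.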
The CV2 SoC is being shipped with a set of tools to help customers port their neural networks onto the chip. With support for Caffe and TensorFlow, two of the most popular training frameworks in the industry, the goal is to make adoption as simple as possible for developers.
Outside the automotive space, Ambarella is also pitching the CV2 as an ideal choice for security cameras, since it can run advanced motion and image recognition on the camera itself, without incurring the networking costs of sending footage back to the cloud for processing.
Currently, the EVA car uses 20 cameras in total (10 stereoscopic systems), with 16 CV1 chips processing the video feeds. Six of Ambarella’s SuperCam3 4K stereoscopic cameras are arranged hexagonally on the roof, each covering a 75-degree swathe. Ambarella says this gives EVA the ability to spot a pedestrian at 150 meters (492 feet), or at 180 meters with monocular imaging alone, using two CV1 chips per SuperCam3, one for each feed in the stereoscopic pair. Each CV2 operates within a 4W power envelope.
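Those figures imply some useful back-of-the-envelope numbers: the six roof cameras together sweep more than a full circle, and the vision stack's power draw is modest by automotive standards. A quick sanity check, assuming (our assumption, not Ambarella's stated figure) that the 4W envelope applies per chip:

```python
# Rough budget for EVA's roof-mounted vision stack, using the
# figures quoted in the article.
ROOF_CAMERAS = 6
FOV_PER_CAMERA_DEG = 75   # each SuperCam3 covers a 75-degree swathe
CHIPS = 16
WATTS_PER_CHIP = 4        # assumption: 4W envelope applies per chip

total_fov = ROOF_CAMERAS * FOV_PER_CAMERA_DEG
overlap = total_fov - 360  # degrees of overlapping coverage in the ring
total_power = CHIPS * WATTS_PER_CHIP

print(total_fov, overlap, total_power)  # prints "450 90 64"
```

So the hexagonal ring covers 450 degrees of arc, leaving 90 degrees of overlap for redundancy, and under the per-chip assumption the whole 16-chip stack would draw on the order of 64W.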
ExtremeTech was on board for the recent demo ride, and reported that the system appeared to work well. Ambarella’s Alberto Broggi, GM of the VisLab division (Ambarella Italy), was asked whether the lack of LiDAR would be a problem for night-time driving. He argued that car headlights let humans drive safely at night without LiDAR or infrared, so advanced camera systems should suffice as well.
In the official announcement, Broggi said: “High resolution 8-Megapixel stereovision combined with superior perception in challenging lighting conditions allows EVA to ‘see’ its surroundings with much higher reliability than was previously possible. Moving to an implementation based on dedicated Ambarella CVflow processors brings us much closer to making self-driving cars a practical reality.”
But while we wait for the official investigation into Uber’s recent crash, it is worth stressing that a car is an incredibly complex mesh of subsystems. Demo vehicles like EVA have been thoroughly tested in controlled environments, but once automakers start integrating subsystem products from a number of Tier One providers, there is a danger that the pieces won’t fit well together.
The amount of testing and validation needed for production cars is going to be a considerable burden. There is currently no standardization framework designed specifically for this challenge, and given the fractured nature of the ecosystem, one seems unlikely to emerge soon. Companies like Ambarella will want to be sure their cameras and chips work well with components from other suppliers, especially while it remains unclear where liability for crashes will lie.