Mentor has unveiled the DRS360, a sensor fusion system that it claims meets the SAE Level 5 requirements for completely driverless vehicles. Owned by Siemens, Mentor is diving into a market that is heating up, chasing the likes of Nvidia and Delphi, with an approach that looks set to irk sensor suppliers.
The premise of the DRS360 is that it removes the latency inherent in sensor fusion systems that use microcontrollers to pre-process sensor feeds before the data reaches the central processing resources – the camera and LiDAR feeds, for example, typically run some conversion of their data before handing it up the chain.
It’s a notable design shift, should it catch on, as it effectively moves the complexity and computation from the sensors (at the edge) to the core components, meaning that the sensors themselves could be simplified and made cheaper. In some sense, this approach is a move against the makers of cameras or LiDAR systems, whose computational expertise would no longer be needed in the chain – assuming that Mentor can correctly interpret their data streams.
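The shift described above can be sketched in miniature. The toy pipeline below is purely illustrative – the function names, thresholds, and data are assumptions, not Mentor's API – but it shows the structural difference: in the conventional chain each sensor's MCU reduces its raw feed before transmission, while in the DRS360-style chain every raw sample reaches the central processor.

```python
# Hypothetical sketch of the design shift: edge pre-processing vs.
# centralized raw-data fusion. All names and numbers are illustrative.

def edge_preprocess(raw_samples, threshold=0.5):
    """Sensor-side MCU step: keep only 'detections' above a threshold,
    discarding the rest of the feed before it goes up the chain."""
    return [s for s in raw_samples if s > threshold]

def central_fusion(streams):
    """Central step: fuse whatever each sensor delivered."""
    return sorted(s for stream in streams for s in stream)

camera = [0.2, 0.7, 0.4, 0.9]
lidar = [0.1, 0.6, 0.8, 0.3]

# Conventional: edge MCUs filter first, so the center sees a reduced view.
filtered = central_fusion([edge_preprocess(camera), edge_preprocess(lidar)])

# DRS360-style: unfiltered samples reach the center; nothing is lost early.
raw = central_fusion([camera, lidar])

print(len(filtered), len(raw))
```

The point of the contrast is that the second path hands the fusion algorithm the complete data set, at the cost of moving all the computation – and the bandwidth – to the central unit.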
To this end, Mentor would be treading on the toes of the likes of Mobileye, recently bought for $15bn by Intel, or Alphabet’s Waymo, currently in the process of suing the pants off Uber in the wake of its Otto acquisition – both of which house local sensor computation that Mentor seemingly wouldn’t mind taking off their hands, earning itself a larger slice of the pie in the process.
Mentor says that the DRS360 transmits unfiltered sensor information directly to the central processing unit, where ‘raw’ sensor data is fused in real time at all levels. It adds that “the platform employs innovative ‘raw data sensors,’ which are unburdened by the power, cost, and size penalties of MCUs and related processing in the sensor nodes, in partnership with leading sensor suppliers.”
Essentially, Mentor is pitching a system that it claims could be cheaper and less complex, while enjoying improved data resolutions. A new transport layer architecture, about which Mentor is rather vague, has also been introduced to minimize the latency that arises from signals having to cross physical buses and hardware interfaces.
The full package includes signal processing software, a whole heap of algorithms for interpreting the data feeds, and a neural network for machine-learning functions. With a 100W power envelope, Mentor is using a Xilinx Zynq UltraScale+ MPSoC (Multiprocessor SoC), essentially a Xilinx FPGA with ARM Cortex-A53 cores and a Cortex-R5 real-time processor, to handle the raw data fusion.
For the automated driving functions, Mentor has left space for an x86 or ARM SoC, and is using an MCU as the Vehicle Network Gateway. The three chips work in combination, using those specialized algorithms that Mentor has filed some 16 patents for – detailing its ‘3D unspecified data set’ processing. Mentor says that while the board is suited for SAE Level 5 driving, it could be easily scaled down to suit less automated requirements.
Mentor argues that its low-latency approach, or at least a similar one, is needed to get the end-to-end latency for automotive applications down to a point where they are quick enough to safely drive on roads. An example would be the video feed from a forward-facing camera spotting a pedestrian in the road, where Mentor argues that the DRS360’s raw data processing incurs less latency than a camera sensor that has to process the feed locally before sending it up the chain.
That local processing for the video feed doesn’t mean encoding and packaging the video, but rather analyzing it to identify and locate objects – spotting those pedestrians or lines on the road, as well as things like road signs.
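A back-of-the-envelope sum makes the latency argument concrete. Every figure below is an assumed, illustrative number – Mentor has not published stage-by-stage timings – but it shows why removing the on-sensor detection stage can shorten the pedestrian-to-decision path.

```python
# Illustrative latency budget for the pedestrian example.
# All stage timings are assumptions, not measured or Mentor-published.

MS = 1.0  # work in milliseconds

# Conventional chain: the camera's MCU detects objects locally, then the
# result crosses a vehicle bus before central fusion acts on it.
edge_detection = 40 * MS      # assumed on-sensor detection time
bus_transfer   = 10 * MS      # assumed bus/interface latency
central_merge  = 5 * MS       # assumed central fusion of object lists
conventional = edge_detection + bus_transfer + central_merge

# Raw-data chain: the frame streams straight to the central processor,
# which runs detection once over the fused raw data.
raw_transfer      = 5 * MS    # assumed low-latency raw link
central_detection = 30 * MS   # assumed centralized detection on fused data
raw_path = raw_transfer + central_detection

print(f"conventional: {conventional} ms, raw-data: {raw_path} ms")
```

Whether the raw path actually wins depends on the real numbers – in particular on whether the raw link and centralized detection are as fast as assumed here – which is precisely the claim Mentor is staking out.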
Speaking to EE Times, Perry, Mentor’s VP and GM of Embedded Systems, explained that raw sensor data is complex because of variations in data formats, which arrive at different frame rates, sample rates, and intervals – an asynchronous system. He pointed to 2D camera images, 3D point clouds from LiDAR, and ADC outputs from radar systems as examples.
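The asynchrony Perry describes can be sketched as follows: sensors reporting at different rates must be aligned on a common timeline before their readings can be fused. The sketch below uses simple linear interpolation and hypothetical feed rates – the DRS360’s actual alignment algorithms are not public.

```python
# Sketch of fusing asynchronous sensor streams: camera at 30 Hz, LiDAR at
# 10 Hz, radar at 50 Hz. Rates and scalar 'measurements' are hypothetical;
# linear interpolation is one simple alignment choice, not Mentor's method.

def sample_at(t, stream):
    """Linearly interpolate a (timestamp, value) stream at time t."""
    for (t0, v0), (t1, v1) in zip(stream, stream[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside stream range")

# Each feed is a list of (timestamp_s, scalar_measurement) pairs.
camera = [(i / 30, float(i)) for i in range(31)]
lidar  = [(i / 10, float(i)) for i in range(11)]
radar  = [(i / 50, float(i)) for i in range(51)]

# Fuse on the slowest clock: for each LiDAR tick, look up what the other
# sensors read at that instant.
fused = [(t, sample_at(t, camera), v, sample_at(t, radar)) for t, v in lidar]
print(fused[3])  # aligned readings from all three sensors at t = 0.3 s
```

A real fuser would carry richer payloads (images, point clouds, ADC samples) and a more careful time model, but the bookkeeping problem – reconciling different clocks and rates at a central point – is the one described above.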
Perry adds that in previous industry approaches, the sensors discard a lot of the data they collect, without ever passing it up the chain. With Mentor’s centralized approach, he says the system benefits from enhanced accuracy and reliability. “We ourselves were surprised to see the dramatic efficiency – reduced latency and increased throughput – enabled by the algorithm in the raw data fusion.”
Mentor says it is currently engaged with 17 of the top 20 automakers, and bills itself as the leading supplier of automotive networking solutions and the number-one supplier of automotive Linux.
“For more than 25 years, Mentor has worked with the world’s top automotive OEMs and suppliers to establish a leadership position in delivering solutions that drive innovation while meeting the industry’s unyielding requirements relative to safety, efficiency, and quality,” said Mentor’s CEO and chairman, Wally Rhines.