
12 October 2020

Robot makers line up behind Qualcomm’s 5G RB5 platform

Qualcomm is stepping up its attack on the burgeoning 5G-based robotics market on the back of its latest RB5 platform, unveiled in June 2020. The main advance is in vision, improving performance not just for finer manipulation in real time industrial control, but also for drones, autonomous driving and consumer facing applications such as robotic appliances like vacuum cleaners.

Such at least is the vision of LG Electronics, which has been in robotics for some years but plans to launch new systems later in 2021 with more capable vision based on the RB5 platform. Otherwise it is too early, according to Qualcomm, to talk much about end customers or applications, as opposed to adoption of the platform by developers of robotics systems and components. Indeed, Qualcomm supplied quite a long list of partners at the launch, some of which were interested in the embedded machine learning capabilities to tune vision systems to specific use cases or tasks.

The chip itself is interesting in following the trend towards increasing the processing real estate to reduce or avoid the need for dedicated silicon such as ASICs, while being oriented towards a specific class of applications or use cases. We have seen that trend in the AI field from the likes of Nvidia where chips originally designed as GPUs (graphical processing units) evolved into versions more dedicated to machine learning in particular. The RB5 incorporates such machine learning capabilities as well as an advanced machine vision system, security processing and features that dovetail with 5G, especially the latter’s ultra-low latency capabilities.

In fact, the machine vision subsystem is optimized for high speed to ensure that feedback from high resolution image capture can be provided to a robotics motion control engine in real time. It is this aspect that LG reckons will take its robots forward into a new era.

As one example, LG in 2019 launched a robot chef that is now cooking and serving noodles in South Korean restaurants. Called the CLOi Chefbot, it allows customers to hand over bowls, along with chosen ingredients such as vegetables and noodles, to the robot. Chefbot then boils or cooks these as appropriate and puts them back in bowls for the customer to take. Armed with the RB5, the robot could assume waiting duties as well, so that customers could order their desired combination of ingredients from, say, a smartphone menu and have the finished dish assembled and delivered to their table automatically.

The RB5 vision system incorporates the Qualcomm Spectra 480 Image Signal Processor (ISP), already optimized for fast capture of high resolution photos and video at a processing rate of 2 gigapixels per second. This speed is sufficient to support the required high end camera features, including Dolby Vision video capture and 8K video recording, albeit at just 30 frames per second for now. It can also capture 200-megapixel photos and 4K HDR (high dynamic range) video at the higher frame rate of 60fps.
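A quick back-of-the-envelope check shows how those camera modes fit within the quoted 2-gigapixel-per-second budget (an illustrative calculation using the standard UHD frame sizes, not figures supplied by Qualcomm):

```python
# Pixel throughput of the quoted camera modes versus the ISP's stated budget.
pixels_8k = 7680 * 4320   # standard 8K UHD frame, ~33.2 megapixels
pixels_4k = 3840 * 2160   # standard 4K UHD frame, ~8.3 megapixels

throughput_8k_30fps = pixels_8k * 30   # ~1.0 gigapixel per second
throughput_4k_60fps = pixels_4k * 60   # ~0.5 gigapixel per second

isp_budget = 2_000_000_000  # Spectra 480's quoted 2 gigapixels per second

# Both modes sit comfortably inside the processing budget.
assert throughput_8k_30fps < isp_budget
assert throughput_4k_60fps < isp_budget
```

On these figures, 8K at 30fps consumes roughly half the ISP's capacity, which is consistent with 60fps 8K remaining out of reach for now.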

The key point for dovetailing with 5G is the hardware acceleration for advanced applications using the dedicated computer vision hardware block EVA (Engine for Video Analytics). EVA provides the enhancements that reduce latencies for real time image processing decisions to the levels 5G is capable of supporting at relatively close range. Equally crucially, it offloads critical AI tasks from the other key components, namely the DSP (digital signal processor), GPU and CPU, cutting both their power consumption and the latency of tasks executed on them.

Latency is also addressed through integration at the system level, so that, for example, the platform combines the output of depth-sensing cameras from Intel with motion sensors and motor control hardware from Japanese electronics component maker TDK.

It is worth reiterating in this context how 5G itself cuts latency, which it addresses on three fronts.

  • Originally the role of 5G in latency was widely perceived to begin and end in the RAN, and that is indeed an important aspect, addressed through a variety of features such as combining downlink and uplink data in the same transmission slots and traffic pre-emption. These aim to cut down on delays at the radio level, over which there is control, while also giving priority to urgent data, as in the pre-emption.
  • The second component is the backhaul, where the main focus is on using the media with the fastest native transmission speed, that is optical fiber, combined with connecting components that forward or switch as fast as possible.
  • Thirdly there is the core, where latency is addressed through redesign of the architecture. As in the backhaul, the primary source of latency in the core is signal transmission time, circumscribed by the laws of physics. Edge computing is therefore the principal contribution to low latency for the core, with the challenge being to balance cost against the degree of low latency required. The aim is to place computing resources as close to the end user or process as necessary, but no closer, because over-distributing them consumes more system resources than are strictly needed.
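The physics constraint behind the backhaul and core points can be made concrete with a simple sketch: light propagates through optical fiber at roughly two-thirds of its vacuum speed, so distance alone sets a latency floor regardless of how fast the switching equipment is (illustrative figures and distances, not from the article):

```python
# One-way propagation delay over optical fiber, from distance alone.
SPEED_OF_LIGHT = 3.0e8                 # metres per second, in vacuum
FIBER_SPEED = SPEED_OF_LIGHT * 2 / 3   # light is ~1/3 slower in glass

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation delay in milliseconds over the given fiber distance."""
    return distance_km * 1000 / FIBER_SPEED * 1000

# A central data centre 300 km away costs ~3 ms round trip before any
# processing -- already beyond the ~1 ms budgets cited for critical 5G use.
round_trip_core_ms = 2 * one_way_delay_ms(300)   # 3.0 ms
# An edge site 10 km away costs only ~0.1 ms round trip.
round_trip_edge_ms = 2 * one_way_delay_ms(10)    # 0.1 ms
```

This is why, as the third bullet argues, moving compute closer to the process is the only lever left once the radio and switching delays have been minimized.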

It can also be true, though, that locating hardware resources alone at the edge may not be sufficient to bring latency down, if, for example, higher level functions associated with the user plane, such as authentication, are still executed more centrally. This means the whole core network must be re-engineered to ensure that not just processing of the main data or execution of software takes place at the edge, but also all associated functions.

In the case of the Qualcomm RB5, the edge in practice will often be on the premises, or at any rate very close to them, because real time machine vision is especially intolerant of delay. To this end, the platform has incorporated foundational aspects in the system itself, such as the Qualcomm AI Engine with its recently released Hexagon Tensor Accelerator, capable of 15 trillion operations per second to run AI and deep learning workloads at the edge.

Tensors are fundamental to machine learning, being arrays of numbers across multiple dimensions on which arithmetical operations can be performed. A single line of numbers is a one-dimensional tensor, while a grid of numbers, known as a matrix, is a two-dimensional tensor whose rows can be multiplied element by element against the columns of another matrix and the results summed, the familiar matrix multiplication. In machine learning, tensors can have many dimensions, each corresponding to a variable element of the domain being represented, such as color and contrast for a visual image.
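The tensor ranks and the row-by-column multiplication just described can be sketched in plain Python (an illustrative example of the mathematics, nothing to do with Qualcomm's own code):

```python
# A one-dimensional tensor (a vector) is a single line of numbers.
vector = [1.0, 2.0, 3.0]

# Two-dimensional tensors (matrices) as nested lists.
a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]

def matmul(x, y):
    """Matrix multiplication: each row of x against each column of y,
    multiplying corresponding elements and summing the products."""
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))]
            for i in range(len(x))]

product = matmul(a, b)  # [[19, 22], [43, 50]]

# A three-dimensional tensor adds a third index, e.g. a tiny 4x4 RGB
# image indexed by height x width x colour channel.
image = [[[0.0] * 3 for _ in range(4)] for _ in range(4)]
```

A hardware tensor accelerator performs exactly these multiply-and-sum operations, but across many elements and dimensions in parallel rather than one loop iteration at a time.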

Qualcomm’s Hexagon Tensor Accelerator is a dedicated piece of circuitry designed to perform these manipulations rapidly in parallel across multiple elements and dimensions. It is the equivalent of a dedicated tensor ASIC embedded in a broader design, available for general use rather than a specific embedded system.

The other feature of note is security support, given obvious concerns over remote takeover of systems that could potentially cause damage or injury. The platform incorporates the Qualcomm Secure Processing Unit, including secure boot, cryptographic accelerators and its Trusted Execution Environment (TEE), while its camera security is certified to FIPS (Federal Information Processing Standards). The latter is a long established US government standard for protecting cryptographic modules in general, with four levels of security designed to detect and respond to physical tampering.

Among other security features is support for remote attestation, a component of trusted computing allowing systems to give reliable and secure information over a network about the software they are running, as a kind of authentication to block access from rogue systems. There is also support for the full gamut of biometric authentication, including fingerprint, iris, voice, and face, so it is fair to say Qualcomm has thrown the kitchen sink at security. Whether that means it has shut off all conceivable threats is another matter.

Of the prospective use cases it is worth considering automotive, because five months earlier, in January 2020, Qualcomm launched its Snapdragon Ride platform, comprising systems-on-chip (SoCs) and a software stack for autonomous driving, specifically the ADAS (Advanced Driver Assistance Systems) that can be used in vehicles at present. Qualcomm claimed at the time, however, that the platform was capable not only of supporting permissible ADAS functions such as automatic emergency braking and lane keeping assist, but also, in principle, of advanced Level 4 and Level 5 fully autonomous driving.

We are sceptical of that latter claim, especially as Qualcomm then identified autonomous driving as one of the targets for the latest RB5 platform. This implies that the more advanced vision capabilities of RB5 will be needed for autonomous driving, so that the Ride platform on its own cannot have been fully fit for the purpose.

We naturally asked Qualcomm to clarify this and explain how Ride and RB5 would coexist in future automotive systems, but had no reply in time for publication. Still, Qualcomm is not alone in overstating the capabilities of autonomous platforms, or underestimating the complexities of full-blown self-driving without constraints over location or situation. This will not stop the platforms playing a major role in future automotive systems.

Another big target use case is the drone field, where Nokia has been stepping up activities lately, for example through a recently announced collaboration with the Indian Institute of Science (IISc) to establish a ‘center of excellence’ for networked robotics. The main focus is on 5G-connected drones in smart agriculture and emergency services, where many of the processes rely on high resolution vision, if not always on low latency.

Nokia already collaborates on several robotics or industrial control projects, including one also involving GE Digital over industrial internet of things (IIoT) applications. Developments around the RB5 platform will be followed closely within this Indian collaboration.