
13 September 2021

Sigfox boosts AI edge credentials with Google Coral partnership

Sigfox, whose eponymous protocol is one of the big four in low power WAN (LPWAN) markets, has partnered with Google’s hardware component developer Coral to strengthen its position in the fast-growing field of machine learning-based IoT edge processing. This will allow Sigfox to integrate a Coral chip – the Coral Edge TPU (tensor processing unit) ASIC – into IoT devices running its protocol to execute inferences locally. This is touted as reducing energy consumption, but the real benefit lies in overcoming the bandwidth limitations of LPWAN protocols, which have prevented powerful inferencing in computationally constrained IoT devices in the field. Many inferencing applications require a combination of rapid input/output and computational power that, until recently, has only been available in data centers some distance away in the cloud.

Sigfox now aims to incorporate engines performing stripped-down machine learning inferencing in its IoT edge devices, having joined the Coral Partnership Program to do so. This will allow inferences to be run locally, fed by data gathered in the field.

“At Coral, we believe efficient AI starts with enabling capabilities at the edge,” said Ajay Nair, product evangelist for machine learning solutions at Google. “The addition of Sigfox to our ecosystem does this by allowing engineers to leverage the Sigfox 0G network, making gathering and streaming the right data from remote, low power or low connectivity settings easier without having to sacrifice efficiency or privacy along the way.”

0G is the name Sigfox has given to its platform based on lightweight Ultra-Narrow Band (UNB) modulation, operating in 200 kHz of publicly available spectrum to exchange messages at between 100 bps and 600 bps, depending on the region. Range can reach 50 km in favorable rural locations but is closer to 3 km in denser urban areas.

As Nair’s comment implies, Sigfox is later than some in adding this Coral-based AI edge capability. Certainly, its major rival, LoRaWAN, has been involved in demonstrations featuring similar edge capabilities for well over a year. LoRaWAN networks have been used for on-device machine learning with TinyML, a reduced machine learning inferencing package designed to execute automated tasks, such as image analysis, on low-energy systems such as sensors or microcontrollers.

Sigfox can argue, though, that it can now offer a full-stack version of Google TensorFlow Lite on its devices, providing access to the world’s dominant machine learning platform. The overall machine learning market is hard to measure, but by almost all estimates TensorFlow accounts for well over half of it globally, even though deployments are most heavily concentrated in the USA and UK.

TensorFlow is a complete package for machine learning, comprising all the software components required for both training and inferencing. Training is an iterative process involving various forms of regression to tune the model to make correct inferences or predictions based on a given set of data. Inferencing is then the process of applying the trained model to make deductions or decisions upon presentation of target data, such as the image of a face, or diagnostic measurements in healthcare.
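To make the distinction concrete, the sketch below uses the standard TensorFlow Keras API to train a small classifier and then run a single inference on a new sample; the model shape and the synthetic data are purely illustrative, not anything Sigfox or Coral have published.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 200 samples of 8 sensor readings, two classes.
x_train = np.random.rand(200, 8).astype("float32")
y_train = np.random.randint(0, 2, size=(200,))

# Training: iteratively tune the model's weights against labelled data.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# Inferencing: apply the trained model to a new, unseen measurement.
new_reading = np.random.rand(1, 8).astype("float32")
prediction = model.predict(new_reading)
print(prediction)  # class probabilities for the new sample
```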

The learning phase can be supervised, or directed, when the goal is to make predictions within a given set of possibilities. Alternatively, it can be unsupervised, or undirected, when the aim is to identify underlying patterns within a data set that may not be known in advance. The latter has been used, for example, in retail to identify products that tend to be purchased together and turn up in the same shopping baskets.

The learning phase is far more intensive computationally and usually requires powerful centralized resources. The inferencing phase still requires significant power, but considerably less, and can be executed locally in dedicated ASICs such as the Coral Edge TPU. Until recently, however, inference also required more power than was available in typical IoT devices, and so needed a network capable of shifting the inference data, such as an image or short video sequence, sufficiently quickly. Wireless LPWAN protocols lack that capability, leading to the development of these ASICs to execute the inferencing locally and then transmit just the results over the network.
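A rough back-of-the-envelope calculation illustrates the point; the figures below are illustrative assumptions rather than Sigfox specifications.

```python
# Shipping raw inference data over a 100 bps LPWAN uplink versus sending
# only the inference result computed locally on the device.
image_bytes = 320 * 240          # one uncompressed 8-bit QVGA frame
uplink_bps = 100                 # low end of the quoted 0G data rates
result_bytes = 1                 # e.g. a single class index

image_seconds = image_bytes * 8 / uplink_bps
result_seconds = result_bytes * 8 / uplink_bps

print(f"raw frame:   {image_seconds / 3600:.1f} hours")   # ~1.7 hours
print(f"result only: {result_seconds:.2f} seconds")       # 0.08 seconds
```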

Such ASICs had to be small and cheap enough to be accommodated within modest IoT devices without hiking the cost too much. This led to development of stripped-down inferencing software such as Google’s TensorFlow Mobile, which evolved into the current version, TensorFlow Lite, to execute faster and generate smaller amounts of data that can be more readily handled on IoT ASICs.
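As an illustration of that workflow, the sketch below uses TensorFlow’s standard TFLiteConverter to turn a Keras model into a quantized TensorFlow Lite flatbuffer of the kind such hardware expects; the model and calibration data are placeholders rather than a real deployment.

```python
import numpy as np
import tensorflow as tf

# A small stand-in Keras model; in practice this would be an already-trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Placeholder calibration samples; in practice, a few hundred real inputs.
    for _ in range(100):
        yield [np.random.rand(1, 8).astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Full integer quantization, the form required before compiling for the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Coral’s documentation describes a further step for its hardware, in which the quantized .tflite file is passed through the Edge TPU compiler before being deployed to the device.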

TensorFlow Lite runs on the Edge TPU, which, although far less powerful than Google’s centralized Cloud TPU capable of heavyweight training, can perform a higher number of operations per unit of power, which is crucial for an IoT device that may rely on batteries or lightweight solar generation in the field.
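In practice that typically looks like the sketch below, based on the tflite_runtime interpreter and the Edge TPU delegate that Coral documents for Linux hosts; the model file name is a placeholder and the input is a dummy tensor.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load an Edge TPU-compiled model and hand its ops to the accelerator via the
# libedgetpu delegate (Linux shared-library name shown; model name is a placeholder).
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one quantized input tensor and read back the inference result.
sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print(result)
```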

While a single v3 Cloud TPU can perform 420 trillion floating-point operations per second (420 teraflops), the Edge TPU manages around one trillion fixed-point operations per second, but consumes just two watts of power.

The current first-generation Edge TPU has been designed particularly for vision-based ML applications, being capable of executing the deep feedforward neural networks (DFF), such as convolutional neural networks (CNN), that are ideal for those workloads.

A feedforward neural network comprises multiple layers of neurons, each of which computes a value via a simple formula from a weighted sum of its inputs. The neurons are connected in multiple layers such that the output of the previous layer is fed forward to provide the input of the next layer. By having many layers, the neural network can combine input features across them to derive its own, more complex representation of the data in the hidden layers. This yields more analytical and predictive power than straightforward linear statistical regression, which was the gold standard of traditional data mining before the more recent escalation in computational power. The features learnt in this way are hierarchical, being accumulated layer by layer.
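A minimal sketch of such a forward pass, with arbitrary layer sizes and random weights standing in for trained ones, might look like this:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Arbitrary illustrative sizes: 8 inputs, one hidden layer of 16 neurons,
# 2 output classes. The weights would normally come from training.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)

x = rng.normal(size=8)            # input features
h = relu(W1 @ x + b1)             # hidden layer: weighted sums, then non-linearity
scores = W2 @ h + b2              # output layer fed by the hidden layer's output
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the two classes
print(probs)
```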

Inferencing from such feedforward neural networks can now be performed in IoT components such as the Edge TPU. It has already been demonstrated in various trials using the LoRa protocol, with some valuable applications. There have also been some complete systems developed, including one by the University of Turku in Finland, which combined LoRa, edge and fog computing with IoT and deep learning in a system capable of detecting falls among the elderly, using inertial data as input.

The main point was that the university developed an architecture comprising an additional layer of sensor nodes that can be connected locally, even within a single room, over BLE (Bluetooth Low Energy), say, to a gateway. The gateway then consolidates and relays signals and also houses the ML processing engine, being connected via LoRaWAN to the Internet or cloud. This avoids reliance on more intermittent cellular or WiFi connections, while circumventing the bandwidth limitations of LoRa. The gateway can then connect up multiple healthcare sensors, such as cardiovascular, diabetes and blood pressure monitors, as well as integrating contextual data such as temperature, humidity and air quality.
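A hypothetical gateway loop along those lines might look like the sketch below. This is not the Turku implementation: the BLE read and LoRaWAN send helpers are placeholders for whatever radio stack a real gateway would use, and the fall-detection model file is assumed to exist and accept a window of inertial samples.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Placeholder BLE read: a window of inertial samples from a wearable sensor node.
def read_inertial_window_over_ble(samples=128, axes=3):
    return np.zeros((1, samples, axes), dtype=np.float32)

# Placeholder for a real LoRaWAN uplink call.
def send_over_lorawan(payload: bytes):
    print("uplink:", payload.hex())

interpreter = Interpreter(model_path="fall_detector.tflite")  # placeholder model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Consolidate a window of inertial data from the BLE node and run the
# fall-detection model locally on the gateway.
window = read_inertial_window_over_ble()
interpreter.set_tensor(inp["index"], window.astype(inp["dtype"]))
interpreter.invoke()
fall_probability = float(interpreter.get_tensor(out["index"]).ravel()[0])

# Only a byte or two crosses the constrained LoRaWAN link, never the raw data.
if fall_probability > 0.5:
    send_over_lorawan(bytes([0x01, int(fall_probability * 255)]))
```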

The combination of e-health and contextual data has already been shown to improve the accuracy of disease diagnosis and analysis. This aspect of health is set for rapid growth given the trend away from direct face-to-face consultations in the wake of the Covid-19 pandemic.