6 March 2015

RIoT @ MWC: The hardware roundup

•    Imagination’s new GPU aimed straight at wearables and IoT
•    Rambus lensless image sensor gives eyes to tiny IoT devices
•    Libelium announces Microsoft Azure integration, plans more accurate sensors
•    Yoga Tiny ZigBee smart home hub impresses, features super-cool UI
•    ARM targets emerging apps, predicts IoT shipments to soar
•    Philips Hue not bound to hub but loves functionality, turns $2 consumables into “exciting” $60 product

Imagination’s new GPU aimed straight at wearables and IoT

Imagination Technologies, a veteran GPU designer and the new owner of the low-power processor architecture MIPS, launched the G6020 at MWC – a GPU intended to meet the low-power requirements of devices in the wearables and IoT markets.

Part of the PowerVR range, the new GPU won’t be seen in flagship smartphones, although it will likely begin appearing in devices at the lower end of the handset spectrum. Imagination’s most notable customer in the smartphone space is Apple, which licenses Imagination’s designs for use in the A-Series SoCs that appear in iPhones and iPads.

The graphics processing capabilities mean that the GPU is well-suited to any device that requires a screen, from wrist-mounted computers all the way up to interactive household appliances and car dashboards. Essentially, any device that requires OpenGL 3.0 on a 720p display at 60fps is a potential G6020 customer – one that would ideally combine the GPU with Imagination’s range of MIPS-based CPUs and connectivity modules.

This GPU is also being pushed as a noticeable step up from the Series 5 GX5300, which RIoT covered back in July – one that can make the transition from high-intensity smartphone displays down to watches simply by clocking the processor at a lower speed, to save on power consumption and the associated heat output.

Instead of a general purpose GPU for the lower end of the emerging wearables market, Imagination hopes this will be driving the rich color displays of premium wearables – and given its previous deals, we wouldn’t be surprised if Imagination’s technology appears in Apple’s Watch or future devices.

Although no immediate customers have been announced at launch, devices are expected to appear in 12-18 months. The designs will be licensed by Imagination’s customers, and are ready to be implemented today.

Imagination says its G6020 is the smallest member of the PowerVR Series 6 family, with four arithmetic logic unit (ALU) cores and a silicon footprint of 2.2mm2. A highly optimized universal shading cluster (USC) engine, designed specifically for UIs, is credited with this minuscule package size, which is fabricated on a 28nm process and clocked at 400MHz.

Rambus lensless image sensor gives eyes to tiny IoT devices

Patrick Gill, Rambus’ principal research scientist, explained that the work on the new image sensing technology was spurred by the package and cost restrictions that traditional lenses bring to the table. Instead, Rambus set out to create a radically different approach that bypasses the lens entirely – relying on complex mathematics, rather than a focusing lens, to assemble the image.

This was possibly the coolest thing we saw at MWC, and it’s telling that it came from an established company with the R&D budgets to invest heavily in moonshot projects such as this – and not from one of the startups that grace the press as darlings of crowdfunding websites.

Rambus itself was founded on a technique to move data more quickly from the CPU to memory, developed by two electrical engineers in 1990. Since then, Rambus has expanded into serial links and security, acquiring Cryptography Research in 2011 for its on-chip security portfolio. The business is roughly split into three main sections – Memory Interfaces, Security, and Emerging Technology – and the latter has been responsible for this new approach to image sensing.

The new Lensless Smart Sensor can be very quickly described as the image sensor that would normally be found in a lens-based system, with an additional layer placed on top of the sensor in the silicon foundry. That layer, produced using the CMOS techniques that are central to so much chip production, is called an Integrated Diffraction Grating – essentially a binary, on-or-off pattern that directs light onto the sensor.

The key to the approach is the binary pattern, which allows the raw input from the sensor to be reconstructed using algorithms that are not processor-intensive. This means that low-resolution but high signal-to-noise images can be captured without the need for a lens (enabling smaller devices), and processed locally (and therefore quickly), without all the computational resources needed for traditional image recognition and analysis.
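Rambus has not published its exact algorithms, but the general idea – recovering a scene from readings taken through a known binary pattern by solving a small linear system – can be sketched as follows. All dimensions and noise levels here are illustrative toy values, not Rambus specifications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 64-point scene captured as 128 sensor readings
# (hypothetical numbers for illustration only).
n_scene, n_meas = 64, 128

# The known binary (on/off) pattern acts as a measurement matrix A:
# each sensor pixel sees a fixed 0/1 mixture of scene points.
A = rng.integers(0, 2, size=(n_meas, n_scene)).astype(float)

# Simulate a capture of an unknown scene x: y = A @ x, plus noise.
x_true = rng.random(n_scene)
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

# Reconstruction is a single regularized least-squares solve --
# cheap linear algebra rather than heavy image processing.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

# x_hat now closely matches x_true despite the sensor never "seeing"
# a focused image.
```

Because the pattern is fixed at manufacture, the matrix inverse can effectively be precomputed, which is what keeps the per-frame cost low enough for tiny devices.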

The package size of the unit is also impressive, with a traditional lens measuring some 1.5mm, and the new Rambus approach measuring just 0.055mm according to the data sheets. We were shown the wafer on which the grating was cut, and it did indeed look absolutely minuscule.

If Rambus’ claims are true, this is a seriously disruptive bit of technology, given that the company believes that it could be mass-produced at a cost of 1.5 US cents per mm2 – giving a full sensor unit a theoretical price of less than a dollar, for the rich image sensing experience provided by the technology.

In the demos we were shown, the raw input creates a spiral-like pattern, which the package then throws some math at to reconstruct the image – in this case Patrick’s hand, at a latency that seemed to average 1-3 milliseconds. In another demonstration, an LED was used to show the unit’s sensitivity to strong points of light instead of broad objects like hands or people.

The LED demonstration used two of the sensors positioned 1.8mm apart, and calculated the range to the light source based on this difference in positioning. Currently the system uses less than 10 microwatts, and Rambus hopes to get this nearer to 1-2 microwatts shortly. These sorts of levels are well suited to long-term usage on battery-powered devices.
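Rambus didn’t detail its range calculation, but the underlying principle is ordinary parallax: two sensors a known baseline apart each measure a bearing to the light source, and the small difference between those bearings encodes the distance. A minimal sketch, using the 1.8mm baseline from the demo but otherwise assumed numbers:

```python
import math

BASELINE_MM = 1.8  # sensor separation from the Rambus demo

def range_from_bearings(theta_left: float, theta_right: float) -> float:
    """Distance to a point source from the bearing each sensor measures.

    Angles are measured from each sensor's normal, in radians. For a
    distant source the bearings differ only slightly, and that disparity
    gives the range: z = baseline / (tan(tl) - tan(tr)).
    """
    disparity = math.tan(theta_left) - math.tan(theta_right)
    return BASELINE_MM / disparity

# An LED 100 mm away, centred between the two sensors:
z_true = 100.0
tl = math.atan((BASELINE_MM / 2) / z_true)   # bearing seen by left sensor
tr = math.atan((-BASELINE_MM / 2) / z_true)  # bearing seen by right sensor
print(range_from_bearings(tl, tr))  # recovers ~100 mm
```

With a 1.8mm baseline the disparity shrinks quickly with distance, which is why this sort of arrangement suits close-range sensing rather than long-distance measurement.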

There is a significant tradeoff for getting this sort of functionality inside such a small and relatively cheap package. The images are low resolution, which means that they will not be good for applications looking for color reproduction or video capture. Instead, they are much more suited to motion detection, point tracking, range finding, gesture and object recognition.

Developers should be able to see the advantages of opting for the system: its inability to recognize faces or text could be considered an asset in certain privacy-conscious applications, where only a presence alert is required rather than a full identity record.

To encourage these developers to get on board, Rambus has established a Partners in Open Development (POD) program that aims to supply Raspberry Pi, Intel Galileo, and Arduino-based development kits to get the technology out into the wild. Rambus says it is working closely with universities, and that it hasn’t decided on a monetization strategy for POD yet – but that it recognizes the benefits of crowd-sourced knowledge.

The POD program has initially identified four key verticals that it wants to target: cities, transportation, manufacturing and medical. Rambus is partnering with two design firms – frog and IXDS – as part of a program to promote adoption of the technology amongst developers who are quite rightly skeptical of the claims.

Elsewhere and in the same field as the Rambus tech, researchers at Harvard also announced the success of their “achromatic metasurface” technology, which uses nanoscale antennas to guide light to a focal point on the image sensor – so that it too can capture images without the need for a traditional lens. The Harvard approach is still in the labs, but it does capture color.

Libelium announces Microsoft Azure integration, plans more accurate sensors

We met with Libelium CEO Alicia Asin Perez to catch up on the happenings of the IoT hardware vendor, which sells sensors and gateways that are used extensively in Barcelona – a city that keeps appearing near the top of lists of leading smart cities.

Unlike most of the dedicated IoT players on show at MWC, Libelium makes its money almost solely from hardware sales, rather than from the service-based business models that aim to provide an end-to-end experience. To ensure that its customers are well catered for, Libelium approaches businesses in the cloud game to ensure interoperability between its Waspmote sensor platforms and Meshlium gateways.

The latest addition to this partner ecosystem is Microsoft’s Azure platform, which should ensure that Libelium adopters can port the data collected by their hardware directly into the Microsoft cloud, before moving it on into a third-party management system.

The factory-ready hardware can then be deployed in the field, with the Waspmote units acting as sensor relays that link the extensive range of sensors (from sound, water, gaseous, air quality, etc.) to the cloud via the Meshlium gateway and its software stack.

Asin said that Libelium was concentrating on developing sensors that provide higher-quality information that is more useful to the customer. She described how newer, more accurate sensors were being built to address a new wave of smart city deployments. These sensor improvements will be part of a new smart city strategy that will be launched in April, according to Asin.

In broad strokes, previous connected city deployments saw members of the IT department flinging sensors at any application they could afford within their budgets. What Libelium is seeing now is a shift towards the desire to gather very detailed information for strategic decisions.

Asin described how Barcelona currently uses calibrated sonometers to measure noise pollution, to the tune of €30,000 per unit. While Libelium currently supplies noise meters that could be used in the same application, the company is looking for a way to provide data of an equivalent quality at a much lower price point.

What’s more, a system like Libelium’s would allow a city to adopt it for one use case, such as noise pollution monitoring, and then add additional sensors to the already deployed Waspmotes – which can support up to six sensors per unit.
That expandable business model fits well with the budgetary restraints that cities and municipalities often find themselves under. It enables a modular approach that doesn’t require a massive upfront cost, which might be written off if the city has to move platforms.

Libelium’s selling point lies in being able to upgrade at a later date using the existing network infrastructure, and Asin pointed to a deployment in Latin America that added new sensor functionality to a previously purchased smart parking system for only 2% of the cost it initially paid for the parking – simply by plugging new sensors into the already-installed Waspmotes.

This functionality makes replacing expired sensors very straightforward, which in turn lowers the maintenance costs and downtime that systems without the modular design might suffer – especially if they have to be sent back to the manufacturer for refurbishment, or replaced entirely.

Yoga Tiny ZigBee smart home hub impresses, features super-cool UI

There are an increasing number of smart home platforms and products (as well as an alarming number of wearables) cropping up at tradeshows, and MWC was no exception. We were shown around an upcoming white-label Technicolor system, using the established CPE vendor’s gateways as control points in the home, with a pre-approved list of devices chosen by the operator that Technicolor will sell the system to.

But a small startup from Estonia showed us a much more exciting product. Yoga unveiled its Tiny smart home gateway, which lives up to its name, and the Yoga PRO1, which is much larger and aimed at industrial implementations and perhaps multiple dwelling units (MDUs).

The Tiny is about 4 inches long, 2 inches tall, and maybe ¾ of an inch deep. It has a power input, an Ethernet port to connect it to the home router, and most importantly a Texas Instruments ZigBee chip – allowing it to become the heart of a ZigBee-controlled smart home.

Powered by an Intel Quark X1000-series SoC, and also packing WiFi and BLE, the Tiny is controlled by an iOS and Android app, which replicates the UI found on the Yoga website. Through these two interfaces, users can control their homes remotely as well as from inside.

But Yoga had a very impressive touchscreen panel (the size of a medium TV by our estimation) which showed the potential of the system. On that touchscreen, users are able to digitally construct a version of their own home, and then place their connected devices appropriately within the home.

This then allows the touchscreen to become an interactive control panel for the home, in which users can effectively travel around the house with full control over their devices. A particularly cool implementation saw video from a security camera ported onto both the corner of the screen and the digital representation of the TV in the living room.

While the touchscreen’s five-figure price tag doesn’t match the affordable ethos of the Tiny (€99), it is certainly a premium feature that would not be out of place among the custom-installed home automation services.

The features that were once complete luxuries are slowly becoming affordable, and with a platform like Tiny, with its low entry-cost, those with disposable income are able to slowly add devices to their smart home as they can afford them – or at the other end of the model, they can be given a smart home as a service subscription package from a TV operator or ISP.

ARM targets emerging apps, predicts IoT shipments to soar

On the final day of MWC, we spoke to ARM’s VP of Segment Marketing, Ian Ferguson, to get the silicon giant’s perspective on the IoT. Moving from ZigBee controlled air flow management systems in silicon fabs, to ECG-enabled and sweat-analyzing smart clothing, Ferguson said that his measure of success for knowing when the IoT has truly arrived will be the day that people stop labelling things “smart.”

In terms of chip shipments, ARM is working closely with developers who are just beginning their journey into connectivity, and Ferguson pointed to the firm’s involvement in the developer community and universities as proof of this strategy.

When asked how ARM was positioning itself directly for the IoT, we were told that it was vital that ARM understood the use cases and applications that are emerging. Given its current involvement in high-end smartphones all the way down to very basic microcontrollers, Ferguson stressed the importance of developing the right chips for the whole spectrum of applications that will emerge between those two extremes.

We then enquired about the proportion of chip shipments that the IoT currently represents in ARM’s revenues. Ferguson assured us that although the answer sounded like a dodge, it was actually very hard for the company to work out exactly which devices its chip royalties came from – and that it was something ARM was working on breaking out in its results more accurately.

Ferguson did say that the Cortex-M range (the microcontrollers) currently accounts for around 4 billion of the 12 billion ARM chips that are shipped globally each year. The Cortex-R chips, which are found in the controllers in hard drives, LTE modems and automotive applications among many uses, account for about 1 billion. The other 7 billion chips are in the more advanced Cortex-A family.

We pushed to know about the future composition of the shipments after the IoT has become more established, and Ferguson said that in five years, IoT and embedded chips will be out-shipping mobile by far. However, the percentage of royalties that they comprise may not change much, because ARM’s royalty is a percentage of a chip’s average sale price – and those prices differ massively.

The chips found in mobile devices are worth a lot more to ARM than the tiny microcontrollers, as the royalty is based solely on the cost of the chip – which will vary from dollars for Cortex-A products, all the way down to cents for the MCUs – often at a ratio of 20:1.
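ARM’s actual royalty rates and per-chip prices are not public, but the shipment figures Ferguson gave, combined with the roughly 20:1 price ratio, are enough to illustrate why unit share and revenue share diverge so sharply. A back-of-the-envelope sketch with assumed prices and an assumed flat royalty rate:

```python
# Illustrative only: royalty rate and ASPs below are assumptions,
# not ARM's real figures. Shipment counts are from the article.
ROYALTY_RATE = 0.02  # assumed flat royalty as a fraction of chip price

shipments = {"Cortex-A": 7e9, "Cortex-R": 1e9, "Cortex-M": 4e9}
asp_usd = {"Cortex-A": 10.0, "Cortex-R": 2.0, "Cortex-M": 0.50}  # assumed; ~20:1 A:M

revenue = {k: shipments[k] * asp_usd[k] * ROYALTY_RATE for k in shipments}
total_units = sum(shipments.values())
total_rev = sum(revenue.values())

for family in shipments:
    print(f"{family}: {100 * shipments[family] / total_units:.0f}% of units, "
          f"{100 * revenue[family] / total_rev:.0f}% of royalties")
```

Under these assumptions the Cortex-M parts make up a third of all units shipped but only a few percent of royalty revenue – which is the dynamic Ferguson described.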

Philips Hue not bound to hub but loves functionality, turns $2 consumables into “exciting” $60 product

At the end of a very busy week, we met George Yianni, Philips’ head of technology in the connected lighting division, and the man in charge of Philips Hue, the smart bulbs that have taken the smart home space by storm.

On the origin of Hue, Yianni explained that Philips treated the project as an in-house startup – an exercise in intrapreneurship. The project began in 2011, with the team trying to use technology to reposition lighting as a product that consumers are excited about, instead of something that is just taken for granted.

In The Business of IoT, the keynote that Yianni opened after our meeting, he announced that Philips had turned a $2 commodity (a regular bulb) into a $60 product that consumers were excited about.

The reasoning behind this, according to Philips, is that there isn’t a single action in the home that is not impacted by lighting – an argument that is also true for applications in the business world. Philips believes that there are four main purposes of lighting, and that adding technology to it can help increase its value – and encourage customers to pay much higher prices for these products.

The first purpose, Yianni explained, is to create ambience or mood in the home – directly affecting its aesthetic in a way that is far easier to personalize than redecorating, and that can adapt to setting or time of day thanks to the full range of reproducible colors.

The second is the security benefit that an automated lighting system can bring to the home, whether it’s protecting an empty house by discouraging burglars with disguised occupancy, or lighting the home before the occupants arrive on a dark night – added peace of mind that many will pay a premium for.

The third is the medical benefits that can be influenced by lighting, such as improved sleep and a less intrusive morning routine by controlling the levels of blue in the illumination. It initially sounds farfetched, but there is a lot of science to back it up.

And the last purpose is using lighting to convey information to the occupants, such as alerts or even navigation within the home. A tangential link to this is also found in Hue’s use with TVs, to match the lighting in a room to the picture on the screen – a process that Yianni said places no extra strain on LED lighting, as it is switching on and off thousands of times a second in normal operations, so the pulsing is well within operational limits.

We asked about the future of the ZigBee hub that currently controls the lights in the home. Yianni explained that Philips is very fond of the functionality, which avoids latency by keeping the commands within the home network and not relaying them via a cloud, but that it is not tied to the hardware itself – and open to seeing it integrated into other devices in the future.

Similarly, Yianni answered that Hue would continue to focus solely on the consumer lighting market, and had no plans to diversify into other connected home applications. He said this was not the end of the road – Philips had barely scratched the surface in terms of adding value to lighting – and hoped that the company’s open approach would position it well within other smart home platforms that adopt or support the lights.

The Hue range began as a pure consumer project, as Philips was already targeting business customers with its connected lighting division, but the line has since seen traction among small and medium businesses that are looking to create ambience in their premises.

We also learned of another interesting deployment that saw Hue lights used inside a museum to convey locational information about the user’s position within the exhibit to a tour-guide application. Using the phone’s camera, the lights could pulse to encode data that could be picked up by the app and used to trigger the next video or guided discussion.
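The coding scheme used in that museum deployment wasn’t disclosed, but the principle is simple on-off keying: a location ID becomes a train of light pulses too fast to notice, and the camera thresholds the observed brightness in each pulse slot to recover the bits. A minimal, purely illustrative sketch:

```python
# Illustrative on-off keying (OOK) sketch -- not Philips' actual scheme.
# Each bit of a small location ID becomes one high/low light pulse slot.

def encode_ook(location_id: int, n_bits: int = 8) -> list[int]:
    """Turn a location ID into an LED on/off pulse train (MSB first)."""
    return [(location_id >> i) & 1 for i in reversed(range(n_bits))]

def decode_ook(samples: list[float], threshold: float = 0.5) -> int:
    """Recover the ID from per-slot brightness samples, e.g. from a camera."""
    value = 0
    for s in samples:
        value = (value << 1) | (1 if s > threshold else 0)
    return value

pulses = encode_ook(42)  # exhibit/zone ID 42
# The camera sees brightness levels rather than clean bits -- add wobble:
observed = [p * 0.9 + 0.05 for p in pulses]
print(decode_ook(observed))  # → 42
```

Because the decoder only thresholds brightness per slot, it tolerates the kind of exposure variation a phone camera introduces, which is what makes camera-based reception of pulsed LED data practical.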

It’s an approach similar to pureLiFi’s, and a very exciting area of invisible networking technology that could go a long way to relieving the increasingly congested WiFi spectrum.