
9 August 2021

Google’s inhouse smartphone processor is a new blow to Qualcomm

No surprise that Qualcomm focused so heavily on diversifying its business, when discussing its most recent set of financial results (see Wireless Watch 2 August 2021). In its core smartphone chip business, not only does it face rising competition from MediaTek and Chinese providers – especially as 5G moves from premium into midrange handsets – but its relationships with Apple and now Google are at risk.

Apple was forced to turn to Qualcomm for its 5G modems (it makes its own processors) when a co-development with Intel foundered, but it is designing modems for future iPhones inhouse. Samsung, of course, has its own chips, though it does use Qualcomm system-on-chip designs for some of its models – but given the expansion of its semiconductor resources, that business cannot be guaranteed in future.

And now Google has announced that it has designed its own chips for its next generation of Pixel devices, the Pixel 6 and Pixel 6 Pro. Google’s own smartphones sell in small numbers compared to Apple’s and Samsung’s, but the company is highly influential in the mobile world because of its control of Android, and its latest announcement sent Qualcomm’s shares downwards (though they recovered the next day).

Google has said that available SoCs were not capable of the image manipulation it needed for some of the high end Pixel 6 applications, and so it had to go its own way. The handsets are 5G-capable, but there has been no mention of the modem supplier, which may well still be Qualcomm (as Apple discovered, it is extremely difficult to escape from Qualcomm when it comes to high end modems).

Named Tensor, the new system-on-chip houses Google’s namesake AI-based silicon, scaled down from its usual data center form factor, and the Pixel 6 is positioned to be the first flagship device with native support for the AV1 video codec standard, which Google supports.

Google claims Tensor’s performance extends image sharpening techniques like Night Sight from photo stills to recording video, and supports the Duplex conversational AI technology. There is a new Speech on Device API (SODA) for data center-equivalent local processing.

The core focus of the announcement was on how Google has brought AI-based functions to the image signal processor (ISP), named the Pixel Visual Core.

We know that Tensor is an ARM-based design, but it looks like Google has been overhauling the memory architecture to provide the faster data manipulation needed for working on video capture. The required image processing algorithms are also supported in bespoke silicon elements. As it stands, there is almost no detail on the other supporting elements of the SoC, and it is not clear how much of it comprises Google-designed components versus off-the-shelf ARM elements.

New camera hardware will naturally require more horsepower. If you are capturing a larger image, from a larger camera sensor, there is more data to be sifted through – compounded by Google’s move to capture video from two distinct physical sensors, and blend the result for the best possible image quality. Consequently, the file sizes produced will also leap forward, and so more efficient image and video compression is required. This is where AV1 would step in, and solve that storage and network bandwidth headache.
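To give a sense of scale, a back-of-the-envelope sketch shows why compression is unavoidable. The figures below are illustrative assumptions, not specs from Google’s announcement: an uncompressed 4K, 60fps, 10-bit 4:2:0 video stream versus a typical streaming-class AV1 target bitrate.

```python
# Back-of-the-envelope arithmetic with assumed, illustrative figures.
width, height = 3840, 2160   # 4K frame
fps = 60
bits_per_pixel = 15          # 10-bit luma + 10-bit chroma at 4:2:0 subsampling

raw_bps = width * height * fps * bits_per_pixel
print(f"Raw capture rate: {raw_bps / 1e9:.1f} Gbit/s")   # ~7.5 Gbit/s

# A plausible AV1 delivery bitrate for 4K60 is on the order of 15-20 Mbit/s,
# implying a compression ratio in the hundreds.
target_bps = 18e6
print(f"Compression ratio: {raw_bps / target_bps:.0f}:1")
```

Even with generous allowances in the assumed numbers, the gap between sensor output and what is practical to store or transmit is two to three orders of magnitude, which is the headache a modern codec such as AV1 addresses.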

Having a more capable SoC on the device could also unlock some unique video display capabilities. Smartphone display resolutions have not progressed past 1440p, on the whole. Because of the small physical size of the screen, individual pixels are not really visible, and so there has not been the same need to move beyond 1080p and into 4K (2160p). Compared to TVs, however, smartphone panel refresh rates have increased substantially. To this end, instead of using AI-based techniques to upscale 1080p feeds to a 2160p screen, there is significant scope to adapt 30fps or 60fps feeds to the variable refresh rates possible on smartphones, for a smoother experience.
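The frame-rate side of that argument comes down to simple divisibility: a feed plays back most smoothly when the panel refresh rate is a whole-number multiple of the content frame rate, so every frame is held for the same number of refresh cycles. A minimal sketch, using an assumed set of panel refresh rates rather than any specific device’s:

```python
# Illustrative check: which panel refresh rates display a given content
# frame rate without judder (i.e. each frame held for a whole number of
# refresh cycles). Panel rate options are assumptions, not device specs.
panel_rates = [60, 90, 120, 144]   # Hz

for content_fps in (24, 30, 60):
    matches = [hz for hz in panel_rates if hz % content_fps == 0]
    print(f"{content_fps}fps content -> judder-free at {matches} Hz")
```

This is why a variable-refresh panel is attractive: rather than forcing 24fps film content through an awkward cadence on a fixed 60Hz screen, the panel can drop to a rate the content divides evenly.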

In the next five years, more foldable phones are going to arrive on the scene. Essentially, they are two standard displays merged together, but the ‘full-screen’ pixel counts would very quickly approach those encountered in 4K TVs and monitors. To that end, the required display drivers and video processing silicon will quickly need to support 4K resolutions, in devices that many have assumed would never require such technologies.
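The pixel-count claim is easy to verify with rough numbers. Assuming two 1440p-class panels merged in an unfolded device (illustrative sizes, not any shipping foldable’s actual panel dimensions):

```python
# Rough pixel-count comparison with assumed panel sizes.
single_panel = 2560 * 1440          # one "1440p"-class smartphone display
unfolded = 2 * single_panel         # foldable open: roughly two panels merged
uhd_4k = 3840 * 2160                # a 4K (2160p) TV or monitor

print(f"Unfolded foldable: {unfolded / 1e6:.1f} MP")
print(f"4K display:        {uhd_4k / 1e6:.1f} MP")
```

On these assumptions the unfolded screen lands at roughly 90% of a 4K panel’s pixel count, which is why the supporting display and video silicon ends up needing 4K-class throughput.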

Consequently, a significant expansion in the demand for high performance video silicon in smartphones seems very likely, as well as transcoding to support premium video sources on those devices.

Qualcomm and Google insisted they would continue to work together, and Qualcomm said in a statement: “Qualcomm Technologies and Google have been partners for more than 15 years starting with bringing the first Android devices to market. We will continue to work closely with Google on existing and future products based on Snapdragon platforms to deliver the next-generation of user experiences for the 5G era.”

How long term that partnership turns out to be will depend on how far Qualcomm can leverage its undoubted prowess in mobile chipsets to stay ahead of the inhouse developments at Google (or Apple), forcing them, however reluctantly, to stay loyal. That is easier to do at the start of a new technology generation, and Qualcomm indeed achieved a huge head start in early 5G, as it had in 4G – as demonstrated by Apple’s enforced, if temporary, return to the fold. Recently, it has fended off MediaTek’s catch-up efforts to some extent by becoming virtually the only supplier of smartphone SoCs that incorporate millimeter wave support.

But as technologies mature, it becomes harder to stay well ahead of the field in performance, integration and power efficiency, while also becoming increasingly price competitive. And Google, like Apple, will have some advantages, notably the ability to integrate its own operating system tightly, which could mask potential performance inferiorities with slicker software implementation.

Google, of course, has significant chip design capabilities and resources, which have been showcased most publicly in its tensor processing units for AI, but have also gone into co-developing server processors with various partners (including, reportedly, Qualcomm, though that project does not appear to have been commercialized).

Like Apple, the search giant appears to want more control over its whole platform, including the SoC, a reversal of a strategy that has been in place since it cooperated with Qualcomm on the very first Android designs.

Moor Insights & Strategy senior analyst Anshel Sag told FierceWireless: “The reality is that Google wants more control over what its SoC costs and design are and it’s likely that Google wants to get more AI performance out of a less expensive chip.”

The increasing convergence of 5G and AI lies at the heart of this new departure. It seems that every week a new start-up emerges pitching a chip design that combines these two technologies in some way, EdgeQ being the most famous (and co-founded by former Qualcomm chief Paul Jacobs). Google has invested very heavily in AI chip technology, and calling its smartphone SoC Tensor emphasizes the connection, as it brings innovations that were targeted at data centers to the mobile edge and the device.

Qualcomm, too, has been heavily focused on AI in recent years, though its starting point was to support these processes on low power devices, and its breakthroughs in this respect have been one of the most successful aspects of its diversification strategy, winning it contracts in the auto sector for self-driving car systems. However, Qualcomm is known to charge a high premium to customers for its AI-intensive products, and Google may have balked at that.

Another reason for dropping Qualcomm in some models may be the chip giant’s lack of support for AV1, the new generation video codec standard. This has not been officially mentioned, though there has been plenty of talk about Tensor’s ability to process video efficiently on low power devices. Google recently unveiled its new Video (trans)Coding Unit (VCU), which showed the company overhauling its cloud computing infrastructure to support AV1 – and clearly it will want to push the whole Android ecosystem to support the same encoding technology, to maximize opportunities for its cloud platform.

So Google has laid the groundwork for AV1 in its own data centers, and is using the new Pixel as a showcase for the codec, which should hopefully prove to the other smartphone makers that AV1 is ready for mass adoption. While the handset can show off natural language processing and computational camera processing, Google’s greatest success will be the proof that a chip within a smartphone power budget can handle AV1 in real-time applications.

By contrast, Qualcomm has avoided implementing AV1 in its designs. With MediaTek and even Rockchip beating it to the punch, Qualcomm has been quite non-committal about when, or if, it expects to have AV1 support in its Snapdragon flagship SoC family. It is not a member of AOMedia, the Google-dominated industry alliance that supports AV1.

Cristiano Amon, Qualcomm’s CEO, remains upbeat about continuing business with Google and with Apple. On the results call, he insisted he was “very happy with our relationship with Apple,” and pointed to a recent new multiyear agreement with the company and a focus on forthcoming iPhones with millimeter wave support. He added: “We have other phones to go, and we’re very happy with the way things are progressing.”

And he was keen to emphasize the shifts in the handset market and the rise of companies that, unlike Apple, Samsung and Huawei – the big three of two years ago – do not have their own chips. He said: “The biggest opportunity for Qualcomm in mobile is the changes that are happening in the mobile landscape right now. This quarter, Xiaomi is the number two OEM in awards and shipments, and we see this shift in OEM market share create an incredible opportunity for us.”