Mobile operators are intensely interested in the potential of edge compute. If they could integrate their network elements and sites (base stations, central offices) with edge compute nodes, they argue, they could take a premium role in the value chain for distributed cloud services.
Their new mini-data centers would be well connected and numerous, so not only would they support enhanced services for the MNO’s own customers – better quality video delivered close to the user, low latency IoT applications – but they could be offered, for a fee, to partners with less easy access to distributed infrastructure and wireless connections. Cable operators, cities, even webscale providers might be customers.
This vision was inherent in ETSI’s MEC (Multi-access Edge Compute, though the ‘M’ started simply as mobile). The assumptions behind the architecture were that MNO network assets and edge compute nodes for distributed cloud services would align neatly and put the operator in pole position to monetize both the locations and the applications.
But, as with other opportunities for which MNOs seemed to have an inbuilt advantage, has this one slipped out of their hands? Just as they assumed mobile payments would be SIM-enabled, and therefore in their control – only to be disappointed by cloud alternatives from Google and others – so the edge cloud market is looking too complicated and variegated to be easily addressed by one approach to infrastructure. ETSI MEC itself has become more collaborative with other organizations, working on APIs which will enable any edge node to work with radio networks and operator platforms.
That suggests that the operator-driven vision of edge compute has been sidelined because it is too inflexible. It shoehorns the edge into the locations where operators have assets, whereas in fact the edge needs to be entirely fluid, since different industries and use cases will place it in very different locations. Some use cases, like video delivery, may fit well with central offices; others will sit closer to the user and might run from base stations. In many more cases, enterprises will locate their edge nodes on their own premises, and for extremely personalized, context-based services, nodes could be required all over a city, never more than a few meters from a user.
An alternative view of edge compute architecture to that of MEC emanated from the IT business, and specifically from Cisco. The company made its fog computing idea the basis of an industry group called OpenFog Consortium, and that has scored a significant win by being adopted as a standard by the IEEE. The IEEE’s approach to standards, and its areas of activity, are different to those of 3GPP and ETSI, so it is unsurprising it did not choose MEC, but this is not just about politics. The advantage of OpenFog is it does not dictate where the edge should be, but defines standards to ensure that any edge cloud resource – which can be placed anywhere from the user’s home right up to the central cloud data center – is interoperable.
The new standard will be called IEEE 1934, and, given the weight of the IEEE, it should be a significant boost to the OpenFog technology, and to fog/edge computing in general.
As well as Cisco, the main companies behind OpenFog come from data center systems, device chips and vertical industries – the other founding members were ARM, Dell, Intel, Microsoft, and Princeton University. Importantly, the Consortium has good support from key vertical industries engaged in the Industrial IoT, probably the most significant driver of edge compute requirements. GE Digital and Schneider Electric, as well as the IEEE itself, joined the board in 2016, and OpenFog works closely with the GE-led Industrial Internet Consortium (IIC), among other industrial groups.
It does cooperate with ETSI too, and is looking to adopt the MEC APIs, among others, but chairman Helder Antunes acknowledges that there has been limited telco involvement to date – something he hopes to change. That in itself suggests that the operators have been too focused on their own particular view of edge compute, including MEC, and have missed the opportunity to take an early role in setting OpenFog’s directions.
IEEE 1934 has adopted the same terminology as OpenFog itself, proclaiming “a system level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the cloud-to-things continuum. It supports industry verticals and application domains, enables services and applications to be distributed closer to the data-producing sources, and extends from the things, over the network edges, through the cloud and across multiple protocol layers.”
This will be heavily driven by low latency, high availability IoT services, which will justify investment in edge compute more effectively than the more telco-specific use cases like mobile video delivery (increasingly hard to monetize) or personalized marketing and promotions (unwelcome to many users). But in the IIoT, the industrial players themselves drive the requirements and topologies, and turn to the operators just for 5G connectivity, where that is needed (and if they can access a private network, they may even exclude the MNO).
According to OpenFog: “The sheer breadth and scale of IoT, 5G and AI applications require collaboration at a number of levels, including hardware, software across edge and cloud, as well as the protocols and standards that enable all of our ‘things’ to communicate. Existing infrastructures simply can’t keep up with the data volume and velocity created by IoT devices, nor meet the low latency response times required in certain use cases such as emergency services and autonomous vehicles.
“By extending the cloud closer to the edge of the network, fog enables latency-sensitive computing to be performed in proximity to the data-generating sensors, resulting in more efficient network bandwidth and more functional and efficient IoT solutions. Fog computing also offers greater business agility through deeper and faster insights, increased security and lower operating expenses.”
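The placement logic OpenFog describes – running latency-sensitive work close to the data source while leaving tolerant work in the cloud – can be sketched as a simple tier-selection function. The tier names and latency figures below are illustrative assumptions, not values from the spec.

```python
# Illustrative sketch: choosing a compute tier along the
# cloud-to-things continuum based on a workload's latency budget.
# Tier names and one-way latencies (ms) are assumptions for the example.
TIER_LATENCY_MS = {
    "device": 0,          # on the sensor or appliance itself
    "fog_node": 5,        # gateway or on-premises fog node
    "central_office": 20, # operator facility
    "cloud": 80,          # central data center
}

def place_workload(latency_budget_ms: float) -> str:
    """Pick the most centralized tier whose round-trip latency still
    meets the budget, since more central tiers are cheaper to share."""
    best = "device"
    for tier, latency in TIER_LATENCY_MS.items():
        if 2 * latency <= latency_budget_ms and latency >= TIER_LATENCY_MS[best]:
            best = tier
    return best

print(place_workload(10))   # a tight control loop stays on the fog node
print(place_workload(200))  # batch analytics can go all the way to the cloud
```

The point of the sketch is that the decision is per-workload, not per-network: nothing ties the chosen tier to where an operator happens to own assets.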
But while the anointing by the IEEE will help, OpenFog isn't the only initiative in this sector. MEC may be a fading star, but there are other options, such as the Linux Foundation's EdgeX Foundry, an open source project that aims to build a similar framework.
In the IIoT, fog is intended for entirely new workloads, where there conventionally hasn’t been a server room or cabinet to use. And in environments like factories or power plants, where there are already such facilities, the principle is that these resources could become far more powerful, and far more interoperable with other connected processes or equipment. Further, appliances within these facilities may end up with more computational power, and act as fog compute nodes themselves.
In addition to the reference architecture, some concrete use cases are being developed, with predictive analytics on the edge device being one of the most promising. A partnership between ADLink, IBM, and PrismTech is a good example of the thinking here.
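Predictive analytics at the edge typically means keeping a rolling model of sensor behavior locally and uplinking only what deviates from it. The following is a minimal sketch of that pattern, with window size, threshold and the sample stream all chosen for illustration (this is not the ADLink/IBM/PrismTech implementation).

```python
from collections import deque
from statistics import mean, pstdev

class EdgeAnomalyFilter:
    """Keeps a short rolling window of readings on the fog node and
    flags only sharp deviations for uplink, instead of streaming
    every sample to the cloud. Parameters are illustrative."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def ingest(self, value: float) -> bool:
        """Return True if this reading should be uplinked as anomalous."""
        anomalous = False
        if len(self.readings) >= 5:  # need a minimal baseline first
            mu = mean(self.readings)
            sigma = pstdev(self.readings) or 1e-9
            anomalous = abs(value - mu) / sigma > self.z_threshold
        self.readings.append(value)
        return anomalous

f = EdgeAnomalyFilter()
stream = [20.0, 20.1, 19.9, 20.0, 20.2, 20.1, 95.0, 20.0]
uplinked = [v for v in stream if f.ingest(v)]
print(uplinked)  # only the outlier is sent upstream
```

The bandwidth saving is the business case: seven of the eight readings never leave the edge.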
The architecture itself is based on a core set of pillars, which collectively comprise that horizontal system-level architecture. The pillars are: Security, Scalability, Openness, Autonomy, RAS (reliability, availability and serviceability), Agility, Hierarchy, and Programmability.
The document has a pretty sage way of summarizing fog computing’s purpose in turning data into actionable wisdom, around the acronym DIKW – ‘Data gathered becomes Information when stored, and retrievable [information] becomes Knowledge. Knowledge enables Wisdom for autonomous IoT.’
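The DIKW progression can be rendered as a toy pipeline: raw samples (Data) are stored with context (Information), summarized into a pattern (Knowledge), and the pattern drives an autonomous edge-side decision (Wisdom). Everything below – the readings, the trend rule, the action name – is an illustrative assumption, not content from the spec.

```python
data = [21.0, 21.4, 22.1, 23.5, 25.2]            # Data: raw samples

information = [                                   # Information: stored,
    {"t": i, "celsius": v}                        # retrievable, in context
    for i, v in enumerate(data)
]

# Knowledge: a retrievable pattern extracted from the information.
deltas = [information[i + 1]["celsius"] - information[i]["celsius"]
          for i in range(len(information) - 1)]
knowledge = {"trend": "rising" if all(d > 0 for d in deltas) else "flat"}

# Wisdom: an autonomous action taken at the edge, no cloud round trip.
action = "throttle_machine" if knowledge["trend"] == "rising" else "none"
print(action)
```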
For resiliency, the fog nodes can be linked together as a mesh, providing load balancing, fault tolerance and data sharing while minimizing communications back to the cloud. As for hardware, the spec covers CPUs, GPU accelerators and FPGAs, as well as RAM array storage, SSDs and HDDs, and Hardware Platform Management (HPM) devices.
As for next steps, the group will set about the lengthy process of bringing other standards into the ecosystem. Via testbeds and APIs, OpenFog certification would denote those systems and standards that are compatible with the spec.