The IEEE standards organization has picked the OpenFog Consortium’s Reference Architecture (ORA) as the basis of its new IEEE 1934 standard, which defines a universal technical framework for fog computing, designed for the data-intensive requirements of IoT, 5G, and AI applications.
But OpenFog isn’t the only initiative in this sector. The Linux Foundation’s EdgeX Foundry is an open source project that wants to build a similar framework. There is very little overlap between the two groups’ memberships, and we have speculated that this could be yet another example of an IoT standards war. The EdgeX Foundry was launched in April 2017, after the ORA was published, but it has gone pretty quiet since then.
Fog computing, so-called because the computing takes place nearer the ground than clouds, is a way of distributing the compute power needed to run an application. In the IoT, it is a way around the latency problems that arise when data must make a long round-trip – backhauled to the cloud, processed, and then a command sent back to the edge.
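As a hypothetical sketch of that latency trade-off (the function, thresholds, and latency figures below are illustrative, not from the spec), a fog node might act on time-critical readings locally and defer only routine telemetry to the cloud round-trip:

```python
# Illustrative latencies: local fog loop vs. a cloud round-trip.
# These numbers are hypothetical, for demonstration only.
FOG_LATENCY_S = 0.005    # ~5 ms on the local network
CLOUD_LATENCY_S = 0.150  # ~150 ms backhaul, processing, and return

def handle_reading(temperature_c, critical_threshold=90.0):
    """Decide where a sensor reading gets processed.

    Time-critical events (e.g. an overheating machine) are acted on
    at the fog node; routine readings can tolerate the cloud trip.
    """
    if temperature_c >= critical_threshold:
        # Act immediately at the edge -- no backhaul round-trip.
        return ("fog", "shutdown_command", FOG_LATENCY_S)
    # Routine telemetry: ship it to the cloud for analytics.
    return ("cloud", "logged", CLOUD_LATENCY_S)

print(handle_reading(95.0))  # ('fog', 'shutdown_command', 0.005)
print(handle_reading(40.0))  # ('cloud', 'logged', 0.15)
```

The point is not the thresholds but the placement decision: anything that cannot wait 150 ms gets handled where the data originates.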
Among IT crowds, fog computing can carry an air of ‘buzzword of the month,’ as many veterans remember using similar principles to organize databases and servers across multiple remote locations where a network connection could not be relied upon. Given how much effort has gone into migrating to cloud infrastructure, the idea of applying fog computing to these conventional enterprise workloads seems rather backwards.
In the IoT, however, fog is intended for entirely new workloads, in places where there conventionally hasn’t been a server room or cabinet to use. In environments like factories or power plants, which already have such facilities, the principle is that these resources could become much more powerful and far more interoperable with other connected processes and equipment. Further, appliances within these facilities may end up with more computational power of their own, and act as fog compute nodes themselves.
Riot covered the launch of the ORA back in February 2017, about 18 months after the OpenFog Consortium had been established – by ARM, Cisco, Dell, Intel, and Microsoft. There are now many more members, spanning vendors and academia, and a few clear use cases – with predictive analytics on the edge device being one of the most promising. A partnership between ADLink, IBM, and PrismTech is a good example of the thinking here.
“The reference architecture provided a solid, high-level foundation for the development of fog computing standards,” said John Zao, Chair, IEEE Standards Working Group on Fog Computing & Networking Architecture Framework, which was sponsored by the IEEE Communications Society’s Edge, Fog, and Cloud Communications Standards Committee. “The OpenFog technical committee and the IEEE standards committee worked closely during this process and benefited from the collaboration and synergies that developed. We’re very pleased with the results of this standards effort.”
The ORA itself comprises a core set of pillars, which collectively form what the Consortium describes as a horizontal system-level architecture. The pillars are: Security, Scalability, Openness, Autonomy, RAS (reliability, availability, and serviceability), Agility, Hierarchy, and Programmability.
The document has a pretty sage way of summarizing fog computing’s purpose of turning data into actionable wisdom. Shortened to DIKW (Data, Information, Knowledge, Wisdom), the progression states: “Data gathered becomes Information when stored, and retrievable [information] becomes Knowledge. Knowledge enables Wisdom for autonomous IoT.”
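The DIKW progression can be sketched as a toy pipeline (the function and decision rule below are invented for illustration; the spec describes the concept, not an algorithm):

```python
def dikw_pipeline(readings):
    """Toy walk through the DIKW stages for a batch of sensor values."""
    # Data: raw gathered values.
    data = list(readings)
    # Information: data stored in retrievable form (here, keyed by index).
    information = {i: v for i, v in enumerate(data)}
    # Knowledge: retrievable information summarized into something usable.
    knowledge = {"count": len(information),
                 "mean": sum(data) / len(data)}
    # Wisdom: an autonomous decision derived from knowledge.
    return "throttle" if knowledge["mean"] > 50 else "steady"

print(dikw_pipeline([60, 70, 80]))  # throttle
print(dikw_pipeline([10, 20]))      # steady
```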
For resiliency, fog nodes can be linked together as a mesh to provide load balancing, fault tolerance, and data sharing while minimizing cloud communications. On the hardware side, the document covers CPUs, GPU accelerators, and FPGAs, along with RAM array storage, SSDs, HDDs, and Hardware Platform Management (HPM) devices.
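To make the resiliency idea concrete, here is a minimal sketch of failover within a fog mesh (the class, node names, and dispatch policy are hypothetical; the ORA does not prescribe an algorithm):

```python
class FogMesh:
    """Toy mesh of fog nodes: dispatch with failover.

    Tasks go to the first healthy fog node; only when every node in
    the mesh is down does the task fall back to the cloud, which is
    the 'minimize cloud communications' property in miniature.
    """

    def __init__(self, nodes):
        self.nodes = dict(nodes)  # node name -> healthy flag

    def dispatch(self, task):
        for name, healthy in self.nodes.items():
            if healthy:
                return f"{task} -> {name}"
        # Every fog node failed: backhaul to the cloud as a last resort.
        return f"{task} -> cloud"

mesh = FogMesh({"node-a": True, "node-b": True})
mesh.nodes["node-a"] = False           # simulate a node failure
print(mesh.dispatch("analyze_frame"))  # analyze_frame -> node-b
```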
The document outlines the software side of things too, as well as a rather detailed use case examining how the spec would work in an end-to-end airport visual security system (vehicle arrival, baggage, security, transit), as it negotiates and manages the interactions between the disparate systems.
As for next steps, the group will set about the gargantuan task of bringing other standards into the ecosystem. With testbeds and APIs, OpenFog certification would denote those systems and standards that are compatible with the spec.
“We now have an industry-backed and -supported blueprint that will supercharge the development of new applications and business models made possible through fog computing,” said Helder Antunes, chairman of the OpenFog Consortium and senior director of Cisco. “This is a significant milestone for OpenFog and a monumental inflection point for those companies and industries that will benefit from the ensuing innovation and market growth made possible by the standard.”