OpenFog publishes Reference Architecture, with eyes on AI and IoT

The OpenFog Consortium has unveiled its Reference Architecture design, aimed at providing a universal framework to support emerging IoT, 5G and artificial intelligence applications, which will increasingly rely on distributing cloud capabilities throughout the network and right to the edge.

Its document sums up the intention of fog computing: to turn data into actionable wisdom. It leans on the established DIKW acronym (Data, Information, Knowledge, Wisdom), which it explains thus: “Data gathered becomes Information when stored, and retrievable [information] becomes Knowledge. Knowledge enables Wisdom for autonomous IoT.”

OpenFog wants to establish its technology, originated by Cisco, as a de facto standard to allow these multi-faceted applications to exchange data – though success will rely on broad industry support, and probably cooperation with other alliances and standards bodies such as ETSI’s Multi-Access Edge Computing (MEC) project, which has similar objectives.

The OpenFog Consortium was founded in November 2015 by ARM, Cisco, Dell, Intel, Microsoft and Princeton University. It has since grown to 55 members, including AT&T, GE, Hitachi, Sakura Internet, Schneider Electric and Shanghai Tech University (all Contributor members). Its geographic balance has moved significantly towards East Asia – other members include Hon Hai (Foxconn), Fujitsu, Mitsubishi, NEC, NTT and Toshiba.

The OpenFog Reference Architecture itself is built on a core set of pillars, which support a horizontal, system-level architecture. The pillars are Security, Scalability, Openness, Autonomy, RAS (Reliability, Availability and Serviceability), Agility, Hierarchy and Programmability.

For resiliency, the fog nodes can be linked together as a mesh, to provide load balancing, fault tolerance and data sharing, and to minimize communications back to the central cloud. As for hardware, it covers CPUs, GPU accelerators and FPGAs, along with storage spanning RAM arrays, SSDs and HDDs, and Hardware Platform Management (HPM) devices.
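The mesh behavior described above can be sketched in miniature – hypothetically, since the spec describes the goals rather than mandating an algorithm. The node names and health model here are illustrative assumptions, not from the OpenFog document:

```python
# Hypothetical sketch of fog-node mesh load balancing with failover.
# Node names and the health model are illustrative, not OpenFog APIs.
from itertools import cycle

class FogMesh:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.healthy = set(nodes)
        self._rr = cycle(self.nodes)  # round-robin load balancing

    def mark_down(self, node):
        self.healthy.discard(node)    # peers detect a failed node...

    def mark_up(self, node):
        self.healthy.add(node)        # ...and re-admit it on recovery

    def dispatch(self, task):
        # Prefer any healthy fog peer; fall back to the central cloud
        # only when no fog node can take the work.
        for _ in range(len(self.nodes)):
            node = next(self._rr)
            if node in self.healthy:
                return f"{node} handled {task}"
        return f"cloud handled {task}"

mesh = FogMesh(["fog-a", "fog-b", "fog-c"])
mesh.mark_down("fog-b")
print(mesh.dispatch("sensor-batch-1"))  # a surviving peer takes the task
mesh.mark_down("fog-a")
mesh.mark_down("fog-c")
print(mesh.dispatch("sensor-batch-2"))  # no healthy peers: cloud fallback
```

The point of the sketch is the ordering of preferences: work stays within the mesh while any peer is up, so backhaul to the cloud becomes the exception rather than the default.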

The document outlines the software side of things too, as well as a rather detailed use case examining how the spec would work in an end-to-end airport visual security system (vehicle arrival, baggage, security, transit), negotiating and managing the interactions between the disparate systems.

As for next steps, the group will set about the lengthy process of bringing other standards into the ecosystem. With testbeds and APIs in place, OpenFog certification would denote those systems and standards that are compatible with the spec.

“Just as TCP/IP became the standard and universal framework that enabled the internet to take off, members of OpenFog have created a standard and universal framework to enable interoperability for 5G, IoT, and AI applications,” said Helder Antunes, chairman of the consortium, and a senior director at Cisco. “While fog computing is starting to be rolled out in smart cities, connected cars, drones, and more, it needs a common, interoperable platform to turbocharge the tremendous opportunity in digital transformation. The new Reference Architecture is an important giant step in that direction.”

A quick recap of how computing architectures have evolved helps explain the ‘fog’ term. In the good old days, a central mainframe carried out all the processing, serving the results to terminals at the edge of the network – i.e. desks inside an office building.

The emergence of the PC meant that mainframes were no longer required to do as much processing, as those personal computers were able to run their own processes and applications at the edge. Cloud computing servers were born of the need for more powerful computing than what was on offer in those PCs, and the rise of cloud has parallels with the days of the mainframe.

However, with the rise of mobile networks, the edge has expanded from desks within a building to truly remote locations – enabling all kinds of wireless applications, with the trade-off for that extended range usually being less processing power on tap to enable a longer battery life.

As such, those remote edge devices are typically pretty lightweight if they lack a wired power supply, and tend simply to send readings back to cloud applications – where the data they generate can be put to use on vast banks of processors and storage, hidden away somewhere out of sight of the user.

Edge computing is the term used to describe moving some of that computational workload from the cloud and nearer to the source, on the ground – and that’s how the term ‘fog’ arises: like a cloud, but closer to the ground. The main benefits that fog advocates point out are improvements in security, latency, agility, efficiency and cognition – all of them highly relevant for massive IoT use cases, and for the AI and machine learning technologies which will efficiently process all that data.

When it comes to security and privacy, processing data at the network edge can ensure that it stays off the airwaves and out of networking infrastructure that could be compromised. Similarly, keeping it at the edge can reduce the latency of applications that need to take action based on the data – as the data can be analyzed at the gateway, rather than having to travel all the way to a central cloud application that would then send a command back to the gateway.
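The latency argument can be illustrated with a toy gateway-side rule – a hypothetical sketch, not an OpenFog API; the threshold, timings and reading format are made-up assumptions:

```python
# Hypothetical gateway rule: decide locally, skip the cloud round trip.
# All figures below are illustrative assumptions, not measurements.
CLOUD_ROUND_TRIP_MS = 120   # assumed WAN round trip to a cloud app and back
LOCAL_DECISION_MS = 2       # assumed on-gateway processing time

def handle_reading(temp_c, threshold=80.0):
    """Act on a sensor reading at the gateway itself."""
    if temp_c > threshold:
        return ("shut_down_valve", LOCAL_DECISION_MS)
    return ("log_locally", LOCAL_DECISION_MS)

action, latency_ms = handle_reading(93.5)
print(action, latency_ms)   # decision taken at the edge in ~2 ms
print(CLOUD_ROUND_TRIP_MS // LOCAL_DECISION_MS)  # vs ~60x longer via the cloud
```

The design point is simply that the hot path (read, decide, actuate) never leaves the gateway; the cloud only needs to see summaries or exceptions after the fact.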

And the cost of relaying that information to the cloud can be cut, which is especially beneficial to wireless applications that might have to rely on an MNO’s cellular network to collect information. The data bill for these gateways could be slashed, and for the user, the cloud storage and processing costs could be significantly reduced – as long as you trust the network edge boxes to carry out the computation themselves.
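As a back-of-envelope illustration of that data bill (the figures are made-up assumptions, not real tariffs or payload sizes): if a gateway aggregates its raw readings into a single hourly summary before uplinking, the cellular payload shrinks proportionally.

```python
# Made-up numbers to illustrate the uplink saving from edge aggregation.
READING_BYTES = 64          # assumed size of one raw sensor reading
READINGS_PER_HOUR = 3600    # one reading per second
SUMMARY_BYTES = 256         # assumed size of an hourly aggregate

raw_uplink = READING_BYTES * READINGS_PER_HOUR  # stream every reading
fog_uplink = SUMMARY_BYTES                      # aggregate at the edge

print(raw_uplink, fog_uplink, raw_uplink // fog_uplink)
# 230400 bytes/hour raw vs 256 bytes/hour summarized: 900x less uplink
```

The exact ratio depends entirely on how aggressively the edge can summarize, but the direction of the saving is the article’s point: bytes that never cross the cellular link are bytes that never appear on the bill.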