Graphics titan Nvidia has launched the public version of its Omniverse platform, positioning itself to become an integral part of metaverse development. First unveiled in May 2020 and now looking to build on the CES hype, Nvidia hopes that securing a central position in metaverse development circles will provide it with a clear path to monetization.
Since the first announcement, there has been little new detail, but now the wraps are truly off. Anyone with a PC running an Nvidia GeForce RTX graphics card can download the app and connect to the cloud-based toolkit to experiment with metaverse environment creation. Omniverse was in public beta last year, but Nvidia announced business subscriptions of Omniverse Enterprise in April 2021, starting at $9,000 a year, and commercially launched it in November.
That is well out of reach of the User Generated Content crowd. This community of creators, and their associated audiences, will be absolutely essential to driving adoption of metaverse-based entertainment. If Nvidia wanted to be of use here, and hopefully to monetize that ecosystem, Omniverse had to be free of charge.
We are years, if not decades, away from a convincing metaverse experience. We collectively saw how society responded to Google Glass, and years of VR headsets being extremely prominent at tradeshows have done little to convince the technology press that we are on the cusp of a metaverse revolution – let alone the wider public.
If we are to believe that a proper VR headset is an integral part of such experiences, then hundreds of millions to billions of these devices need to be sold. It would be an easier ask for AR glasses, but without some sort of technology that could beam images directly into our eyes, or perhaps straight into our optic nerves, we are entirely dependent on some form of headgear.
Until the day that some sort of ceiling-mounted projector can blast us with such an array of light, we’re confined to the head-mounted alternative. As it stands, in the development environments that suit those sorts of gadgets, Nvidia has built out a massive lead.
GPU rival AMD is chasing Nvidia down in that core market, and even Intel has managed to create a viable-looking dedicated graphics card. Nvidia's AI prowess propelled its share price to the scale at which it could mount an acquisition attempt on British mobile silicon institution Arm. That purchase is still awaiting regulatory approval, and it is easy to see why, as Nvidia is a much better strategic fit than the current owner, Japanese conglomerate SoftBank.
The most pressing concern is that Nvidia rivals are dependent on Arm designs, and that Arm’s strange monopoly position in the industry has been largely allowed by regulators because Arm would sell its designs to anyone. With Nvidia pulling the strings, the fear is that this practice would cease, or that it would take an anticompetitive turn.
But getting into mobile devices, at the dawning of the metaverse age, would be a very solid step for Nvidia. Being involved at both the design and device stage could provide it with an unassailable position, as so many of these current and future devices are going to rely on Arm designs – at least until RISC-V provides a viable commercial alternative.
Returning to the new Omniverse platform, the first step in Nvidia's incursion, the free version only allows two-person collaboration, which should rule out anything but the most basic client-creator interactions. Additional users are going to have to pony up, but with integrations with Adobe, Autodesk, Blender, Epic's MetaHuman, and Maya for creating the environments themselves, and tie-ins with 3D asset libraries from Epic's Sketchfab and Shutterstock's TurboSquid, the promise is that Nvidia's option is both the best and the cheapest.
At its own GTC conference, Nvidia took some time to highlight the Apple-backed Pixar 3D file format Universal Scene Description (USD) as a pivotal technology. Nvidia likened USD to “the HTML of the metaverse,” and in the new presentation, it becomes clear how important such a tool is.
Without such a framework, which is not an official industry standard as of yet, getting all of the component pieces to mesh together into a cohesive experience would be nigh-on impossible. It also seems foolish to believe that USD is already the finished article, given how nascent the metaverse trend still is.
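To give a sense of what USD actually looks like, the format has a human-readable text encoding (.usda) in which a scene is a hierarchy of "prims." The sketch below hand-writes a minimal layer as a string purely for illustration; the `minimal_usda_scene` helper is hypothetical, and real pipelines would author layers through Pixar's pxr (OpenUSD) Python API rather than string templates.

```python
# Illustrative sketch only: builds a tiny USD text layer (.usda) by hand.
# Production tools would use Pixar's pxr.Usd API instead of templating strings.

def minimal_usda_scene(prim_name: str = "Cube") -> str:
    """Return a minimal .usda layer: a World transform containing one cube prim."""
    return f"""#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{{
    def Cube "{prim_name}"
    {{
        double size = 2.0
    }}
}}
"""

scene = minimal_usda_scene()
print(scene.splitlines()[0])  # every .usda layer starts with the "#usda 1.0" header
```

The appeal for a "HTML of the metaverse" role is visible even at this scale: the layer is plain text, hierarchical, and composable, so assets authored in different tools can be layered into one scene.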
On a related note, there are some signs of the earliest use cases, chiefly gaming, as the Omniverse Machinima tool is a way to import assets from video games into the development environment.
Nvidia has a wealth of AI-based expertise to build on, and its Audio2Face tool is one of the best examples. It creates facial animations for a 3D model from an audio recording, which will be useful for guiding people through a VR or AR experience. In time, such tools will need to handle live audio inputs, but for now the industry does not seem concerned about waiting for those capabilities to catch up.
With the public release, some new features have been added. Nucleus Cloud is a collaborative database and engine, to allow large environments to be shared, akin to the way people would collaborate on regular office documents. There are more Omniverse Extensions, small Python-based tools for specific tasks, and more Omniverse Apps, which are more fully featured applications such as driving and physics simulations. The Replicator is available too, for using these simulations to train neural networks.
To date, there are 100,000 Omniverse users, according to Nvidia, although it does not break out what portion of those are enterprise subscribers. Ericsson has confirmed that it is a customer, using the platform to create virtual versions of cities for its network planning and subsequent deployment, to see how radio waves will behave in the virtual environment.