It takes more than a clever speech by Facebook or anyone else to create the metaverse. Before any of us can begin exploring it, someone (or, more likely, a combination of many someones) needs to begin building it. For that to happen globally and at real scale, we will need the right combination of devices, standards, and network technology, none of which is fully here yet.
We will access the metaverse via different types of devices, with different methods of entry and connectivity and different styles, from fully immersive headsets to fashionable glasses that could be worn all day, every day. And while the device makers will compete for market share and attention based on their differing user interfaces, the virtual worlds those devices access will need shared, standardized ways of exploring the metaverse experience.
The truth is that many of those standards and standardized approaches are currently missing and still need to be created. A simple example can be found in mapping and simultaneous localization (together known as SLAM), which will be essential to creating the mix of physical and digital augmented realities that could make up the metaverse. Today, device manufacturers and platforms each hold their own proprietary data for this process, and there is nothing that could pass for an agreed standard.
The virtual and mixed reality worlds of the metaverse will be created using spatial mapping: the process by which devices combine the sensory data they capture to construct a three-dimensional rendering of a space. The computational algorithms required to do this can run on the device, in the network, or, more likely, split between the two.
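To make that concrete, here is a minimal sketch of the core idea behind spatial mapping: fusing per-frame depth data from a device into a persistent 3D occupancy grid. The camera parameters, voxel size, and simulated wall are illustrative assumptions, not any vendor's actual pipeline, which would also involve tracking, meshing, and texturing.

```python
# Minimal spatial-mapping sketch: back-project a depth image into 3D
# points, then fuse them into a voxel occupancy grid. All parameters
# below are illustrative assumptions, not a real device's pipeline.
import numpy as np

VOXEL_SIZE = 0.05  # 5 cm voxels (assumed resolution)

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def integrate(occupied, points, pose):
    """Transform points by the device pose and mark their voxels occupied."""
    world = points @ pose[:3, :3].T + pose[:3, 3]
    voxels = np.floor(world / VOXEL_SIZE).astype(int)
    occupied.update(map(tuple, voxels))

# Simulated frame: a flat wall 2 m ahead, device at the origin.
occupied = set()
depth = np.full((48, 64), 2.0)
pts = depth_to_points(depth, fx=50.0, fy=50.0, cx=32.0, cy=24.0)
integrate(occupied, pts, pose=np.eye(4))
print(f"{len(occupied)} occupied voxels in the spatial map")
```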
In this metaverse experience, latency is critical: the space needs to re-render in real time as you move through it, with real-world surfaces overlaid with virtualized colors, textures, and images. That makes edge computing key to delivering the metaverse experience, because in virtual reality applications many users will experience nausea unless latency falls below roughly 20 milliseconds. Time warping is used to try to combat this, but a lower-latency connection would give a much better quality of experience (QoE). Additionally, in augmented reality settings, high network and processing latency will give a poor QoE when tracking or interacting with real-world objects.
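A back-of-the-envelope motion-to-photon budget shows why the location of the compute matters. The component figures below are illustrative assumptions, not measurements; the point is simply that a distant cloud round trip can consume the entire ~20 ms comfort budget on its own, while an edge round trip leaves room for sensing, rendering, and display.

```python
# Illustrative motion-to-photon latency budget. All component values
# are assumptions chosen for illustration, not measured figures.
COMFORT_BUDGET_MS = 20.0  # the VR comfort threshold cited in the text

def motion_to_photon(components):
    total = sum(components.values())
    verdict = "within budget" if total <= COMFORT_BUDGET_MS else "over budget"
    detail = ", ".join(f"{name}={ms}ms" for name, ms in components.items())
    return f"{total:.1f} ms ({verdict}): {detail}"

cloud = {"sensing": 2.0, "network_rtt": 30.0, "render": 8.0, "display": 5.0}
edge = {"sensing": 2.0, "network_rtt": 4.0, "render": 8.0, "display": 5.0}

print("cloud:", motion_to_photon(cloud))  # 45.0 ms (over budget)
print("edge: ", motion_to_photon(edge))   # 19.0 ms (within budget)
```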
And for the metaverse to be device- and platform-independent, the current fragmented landscape of proprietary mapping solutions will need to converge on accepted standards. The OpenXR community is trying to address this through open APIs, the 3GPP is addressing it through standards for radio and network optimization, and MPEG is looking at a range of compression techniques, including spatial audio, haptics, and higher-efficiency video codecs.
But whether it is uplink and downlink transport optimization for all of the XR data streams (video, audio, haptics, and point clouds), or dedicated network slicing, there is a need to standardize the processes underpinning spatial mapping data to ensure the metaverse is a universally accessible experience, not a proprietary, fragmented one.
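As a thought experiment, a vendor-neutral interchange record for a spatial anchor might look something like the sketch below. This is not an actual OpenXR, 3GPP, or MPEG data structure; every field is hypothetical, and the example exists only to show the kind of shared format such standards would need to define so that an anchor published by one device could be resolved by another.

```python
# Hypothetical vendor-neutral spatial anchor record. NOT a real OpenXR,
# 3GPP, or MPEG format; every field here is an invented illustration of
# what a standardized interchange record might contain.
import json
from dataclasses import dataclass, asdict

@dataclass
class SpatialAnchor:
    anchor_id: str     # globally unique identifier
    position: list     # (x, y, z) in meters, in a shared world frame
    orientation: list  # unit quaternion (x, y, z, w)
    map_hash: str      # fingerprint of the map segment the anchor belongs to

# One device publishes an anchor...
anchor = SpatialAnchor("anchor-42", [1.0, 0.0, -2.5], [0.0, 0.0, 0.0, 1.0],
                       "sha256:placeholder")
wire = json.dumps(asdict(anchor))

# ...and a device from another vendor resolves it from the same bytes.
restored = SpatialAnchor(**json.loads(wire))
print(restored)
```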
The devices, too, have a way to go before they can become ubiquitous, especially if we are talking about wearable, fashionable glasses.
The merged reality version of the metaverse involves creating a digital representation of the real world and overlaying textures upon it, potentially inserting virtual objects or modifying real-world objects to give them a different appearance. You could turn a real city into one that looks medieval, or turn New York into Gotham City: you would still see real buildings and real people, but overlaid with a different look. Another version of the metaverse might be entirely virtual, created using a fully immersive headset. Each version will likely be device- or platform-dependent, but all of them will require standards and advances in network technology to deliver those personal experiences.
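To illustrate the re-theming idea in miniature: once spatial mapping has labeled real-world surfaces, a renderer can swap in materials from a chosen theme. The surface labels, theme tables, and material names below are all invented for illustration.

```python
# Toy sketch of merged-reality re-theming: surfaces detected by spatial
# mapping are re-skinned with materials from a chosen theme. The labels
# and material names are invented purely for illustration.
THEMES = {
    "medieval": {"wall": "rough_stone", "road": "cobblestone", "sign": "wooden_plank"},
    "gotham": {"wall": "dark_brick", "road": "wet_asphalt", "sign": "neon_noir"},
}

def retheme(surfaces, theme):
    """Pair each detected surface with its themed material, if one exists."""
    materials = THEMES[theme]
    return [(s["id"], materials.get(s["label"], "unchanged")) for s in surfaces]

detected = [
    {"id": "srf-01", "label": "wall"},
    {"id": "srf-02", "label": "road"},
    {"id": "srf-03", "label": "tree"},  # no themed material: rendered as-is
]
print(retheme(detected, "gotham"))
# [('srf-01', 'dark_brick'), ('srf-02', 'wet_asphalt'), ('srf-03', 'unchanged')]
```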
Of course, while some of this can be delivered using 5G, given the bandwidth, reliability, and latency levels available over 5G networks, the required ubiquity and scale are still some way off. Facebook could release some AR glasses and call it the metaverse, and Apple could then release its own cool device and term that the metaverse. Each may develop a business model that works for it, but that model will also need to work for the (mostly mobile) network operators that will provide the connectivity.
Additionally, while Facebook's business model is likely to be platform-based and built on advertising, Apple's is almost certainly going to be built around the 'cool' device and its user interface. But the plain fact is that unless the devices talk to and interact with one another, unless all these rendered worlds use the same standards and data-sharing techniques, and unless the networks can deliver the capacity and connectivity at an affordable and sustainable price, the metaverse will stall, or at least fall short of delivering on its full potential as quickly as it could.
No one company owns the internet. No one company owns the commerce on the internet, the access to it, the user interface, the innovation, or the ideas it has unleashed. Yes, some companies are internet giants — but the internet is also home to millions of small and successful companies and individuals. The metaverse must follow that same template to fully succeed for the widest possible global audience of enablers and users.
Chris Phillips is Senior Director of Advanced Research and Development, Media IP at Xperi. His current focus is on eXtended reality, the metaverse, and cloud gaming research topics. Prior to Xperi, he led Ericsson’s eXtended Reality research and held research positions at AT&T Laboratories and the former AT&T Bell Laboratories. He is also an inventor with over 100 worldwide granted patents.