The Metaverse has drawn intense attention over the past few years, followed more recently by a cooling off that some are already calling a “metaverse winter.” In this article – Part II of the series – we’ll explore the origins of the Metaverse as a concept, what it means, and how it differs from (but relates to) VR. For background and added context, you can read Part I of this series here.
Backing up, the Metaverse is a term that refers to a still-theoretical and fully immersive virtual space where users can interact with each other and the environment. It can be accessed through VR, though many believe that restricting it to VR limits the opportunity and that it should also be accessible through 2D screens. Indeed, the closest metaverse-like experiences we have today fall into the latter category, including multiplayer games like Fortnite.
Another commonly agreed-upon attribute of the metaverse is that users are represented by avatars and can interact with the virtual environment and with other users in real time. This makes it time-synchronous but place-shifted. And the concept is not limited to gaming and entertainment: It has the potential to revolutionize the way we work, learn, and socialize.
As you may know, the term Metaverse was first introduced in Neal Stephenson’s 1992 science fiction novel, “Snow Crash.” In the book, Stephenson describes a virtual world where users can interact with each other much as they do in the real world. This virtual world is accessible through a neural interface, allowing users to experience it as if it were real.
Stephenson’s vision was far ahead of its time, and many of the concepts he introduced drive current thinking about what the metaverse should be, and how it’s already developing. For example, he anticipated the rise of virtual currencies, which are now a reality thanks to blockchain technology. And while VR has come a long way since “Snow Crash” was published, it still has some way to go before it can deliver a fully actualized metaverse. Most VR experiences are currently limited to gaming and entertainment, though social interaction and collaboration can be seen in early metaverse efforts like VRChat and Horizon Worlds.
This social aspect is what many proponents project will drive the appeal and value of an eventual metaverse, including IRL-style activities such as virtual conferences, concerts, and enterprise collaboration. The current state of VR offers a level of immersion that makes it meaningful, but it still lacks true “presence,” or what Meta Reality Labs Chief Scientist Michael Abrash calls VR’s Turing Test. The hope is that advancements in neural interfaces and haptic technology will help us get to that point.
One of the primary challenges in creating a Metaverse is the sheer volume of computational power and bandwidth required to support it. The Metaverse would need to be able to handle a massive number of concurrent users, all interacting with each other and with the virtual environment in synchronous ways. This requires not only powerful servers and data centers but also fast and reliable internet connections for all users.
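To make that synchronization requirement concrete, here is a minimal sketch, in Python, of the tick-based loop that multiplayer servers commonly use: collect user inputs, advance the shared state once per tick, and broadcast a snapshot to everyone. The names here (`World`, `apply_input`, `step`) are illustrative, not from any real platform; a metaverse-scale server would run this kind of loop for enormous numbers of users across distributed data centers.

```python
from dataclasses import dataclass, field


@dataclass
class World:
    """Toy shared world: each user's (x, y) position keyed by user id."""
    positions: dict = field(default_factory=dict)
    tick: int = 0

    def apply_input(self, user_id: str, dx: float, dy: float) -> None:
        # Queueing and validating inputs; a real server would also
        # timestamp and reconcile them.
        x, y = self.positions.get(user_id, (0.0, 0.0))
        self.positions[user_id] = (x + dx, y + dy)

    def step(self) -> dict:
        """Advance one tick and return the snapshot every client would receive."""
        self.tick += 1
        return {"tick": self.tick, "positions": dict(self.positions)}


world = World()
world.apply_input("alice", 1.0, 0.0)
world.apply_input("bob", 0.0, 2.0)
snapshot = world.step()  # in practice, broadcast to all connected users
```

Even this trivial version hints at why the bandwidth math gets hard: every tick, every user's state change has to reach every other user who can see them.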
Another important hardware requirement is VR equipment. While it’s possible to access some virtual worlds using traditional input devices like keyboards and 2D screens, as noted, the true potential of the Metaverse lies in its ability to fully immerse users in a virtual environment. This requires VR headsets that can provide high-quality visuals, precise tracking, and comfortable wearability for extended periods of time. These are all the things Michael Abrash references in his “Turing Test” construct.
In addition to hardware, the Metaverse also requires sophisticated software to create and manage the UX of virtual worlds. This includes everything from creation engines for constructing realistic 3D environments, to rendering software, to compression and optimization, to AI algorithms that can generate non-player characters (NPCs) and other virtual entities.
One of the biggest challenges of creating a Metaverse is ensuring that all of these software components work seamlessly together. This requires a high level of coordination and standardization among software providers, as well as the creation of open standards that allow for interoperability between different virtual environments, and other things like avatar compatibility across different corners of the metaverse. That last part is less of a technological challenge than a logistical one. Getting proprietary interests and tech giants to agree on common standards could be difficult when they’re interested in creating and protecting their own walled gardens.
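No such common standard exists yet, but a sketch helps show what avatar portability would require in practice: a shared descriptor format that any world can export and import. The schema below is entirely hypothetical (the field names are made up for illustration), though it points at a common 3D asset format (glTF binary) the way a real standard might.

```python
import json

# Hypothetical minimal avatar descriptor; these field names are
# illustrative only, not any published interoperability standard.
avatar = {
    "id": "avatar-001",
    "display_name": "Ada",
    "mesh_url": "https://example.com/avatars/ada.glb",  # glTF binary asset
    "height_m": 1.7,
}


def export_avatar(a: dict) -> str:
    """Serialize to JSON so any world that understands the schema can import it."""
    return json.dumps(a, sort_keys=True)


def import_avatar(payload: str) -> dict:
    a = json.loads(payload)
    # Each world validates only the fields it needs.
    assert "id" in a and "mesh_url" in a
    return a


roundtrip = import_avatar(export_avatar(avatar))
```

The hard part, as noted above, isn't the serialization; it's getting competing platforms to agree on (and honor) the schema.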
Despite these challenges, there have been several advances in VR in recent years that get us closer to the vision. For example, one relatively small but meaningful step has been the advent of standalone headsets like Meta Quest 2, which make it easier to access virtual worlds without being tethered to a PC. Meanwhile, advances in haptic feedback have made it possible to create more immersive and tactile experiences within virtual environments. Other promising advances in VR include eye-tracking, which can be used to provide more natural and intuitive interaction with virtual objects; and machine learning algorithms that can generate more realistic and dynamic NPC behavior.
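For a sense of what "dynamic NPC behavior" means at its simplest, here is a toy sketch of an NPC driven by a two-state behavior model (the states and probabilities are invented for illustration). Learned models would replace these hand-tuned transition rules with behavior inferred from data, which is where the ML advances mentioned above come in.

```python
import random


class NPC:
    """Toy NPC with two behavior states and random transitions between them."""

    def __init__(self, seed: int = 0):
        self.state = "idle"
        self.rng = random.Random(seed)  # seeded for reproducibility

    def update(self) -> str:
        # Hand-tuned transition probabilities; an ML-driven NPC would
        # choose its next behavior from a learned model instead.
        if self.state == "idle" and self.rng.random() < 0.3:
            self.state = "wander"
        elif self.state == "wander" and self.rng.random() < 0.5:
            self.state = "idle"
        return self.state


npc = NPC(seed=42)
trace = [npc.update() for _ in range(5)]  # one state per simulation tick
```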
While the Metaverse remains a complex and challenging concept, it represents an exciting new frontier for VR (and AR for that matter). By constructing fully immersive and interoperable virtual worlds accessible to anyone with an internet connection – a sort of 3D version of today’s web – the metaverse has the potential to transform everything from entertainment to education to social interaction.
We’ll pause there and return next week in Part III of the series to examine how all of the above could materialize and what the societal impact could be…