XR? Immersive Learning? Virtual reality? Keeping track of the evolving set of terms used to describe immersive technology and the Metaverse can be daunting.
In this blog post, we define key terms to help simplify the immersive technology landscape, and explain how various Web 3.0 technologies coexist, differ, and interact.
Virtual Reality (VR) is the use of computer modeling and simulation to enable a person to interact with an artificial three-dimensional visual or other sensory environment. In a VR experience, the user is fully immersed within a virtual environment, with no view of the ‘real world.’
These environments can be designed to realistically simulate real-world environments and events, which helps our brains process the experiences we have in VR in the same way we process real-life experiences. Pilots have trained for years on flight simulators; these simulators are an example of VR.
Augmented Reality (AR) is the process of supplementing, or “augmenting,” our view of the real world with digital images via video or photographic displays. Forbes states: "In augmented reality, virtual information and objects are overlaid on the real world. This experience enhances the real world with digital details such as images, text, and animation."
Examples of AR include the Pokémon Go game, interior decorating apps that allow you to see what furniture would look like in your home, and Snapchat’s Bitmojis that place avatars and other digital objects on top of photos and videos via filters.
Mixed Reality (MR) is similar to augmented reality in that it overlays digital graphics on the user’s view of the ‘real world’ environment. However, MR takes this a step further, integrating digital objects into the ‘real world’ view in a seamless manner. The result is a new, ‘mixed environment’ where the user can interact with both the real world and digital objects simultaneously.
In an MR experience, the digital objects may also be enabled to respond to changes in the real world environment, or to the user’s actions within it. For example, digital overlays within a user’s home could display measurements of the walls and furniture, or an MR game could change the display of digital graphics based on the user’s voice, eye movement, and hand gestures. Imaging technology and sensors enable these experiences, allowing digital objects to be anchored realistically in the ‘real world.’
The Microsoft HoloLens is one example of a mixed reality device that enables such experiences. During the pandemic, one British hospital used HoloLens to conduct "virtual rounds" – one doctor (instead of the usual four) wore a headset to make rounds and see patients, minimizing exposure risk and conserving scarce PPE. A remote colleague could collaborate through the headset and see the patient, while having relevant medical documents and MRI images projected at the point of care.
Extended Reality (XR) is an umbrella term that accounts for all immersive technologies that extend the reality we experience, either by blending the virtual and “real” worlds or by creating a fully immersive experience. VR, AR, and MR can be thought of as subcategories of XR, or technologies that exist on the spectrum of XR technologies. XR is often used to reference the entire category of VR and AR technologies.
Spatial computing is an overarching term for the process of making computers interact seamlessly with the 3D world using AR, VR, and MR.
Traditional computing takes place on a two-dimensional screen, where a user interacts with flat interfaces. Spatial computing happens in three dimensions. Through headsets and smart glasses, users interact with a 3D virtual world that marries the real world and the digital landscape. Like the term ‘XR,’ the term ‘spatial computing’ is often used to refer to the spectrum of VR, AR, and MR technologies.
The word “immersive” is indicative of the engaging nature of the experiences that technologies in this category enable. Immersive technologies deliver a new level of engagement and realism in comparison to other digital experiences, like consuming video content, viewing 2D images, or participating in a video conference call.
While roleplay and physical simulations in the real world can be considered immersive experiences, modern-day ‘immersive technology’ refers to the use of extended reality, or spatial computing, applications to deliver immersive experiences. ‘Immersive technology,’ ‘extended reality,’ and ‘spatial computing’ are often used interchangeably as umbrella terms to refer to the spectrum of VR, AR, and MR technology.
Immersive technologies make the Metaverse possible. Companies are starting to introduce VR conference calls – where you can collaborate in real-time with a colleague who is on the other side of the world. You can virtually ‘attend’ an open house, even though you are several states away. Or try on clothing in the Metaverse without ever leaving your house. These experiences are happening in the Metaverse, and being made possible by immersive technologies like VR and AR.
Immersive Learning uses VR, AR, or MR technology to create virtual environments in which learners immerse themselves in lifelike simulated experiences to learn, practice, and refine their skills in a risk-free fashion. The benefit of immersive learning is that learners gain access to a realistic experience and the opportunity to practice and apply skills. Examples of immersive learning include flight simulators, an Accenture program that helps former prisoners practice mock job interviews, and a Ken Blanchard program that helps leaders build trust through VR.
The data supports immersive learning's efficacy and benefits: PwC’s VR soft skills training study found that "learners trained with VR were up to 275% more confident to act on what they learned after training — a 40% improvement over classroom and 35% improvement over elearn training."
For more information about immersive learning, hear from experts at companies like Accenture, HTC, Meta, and The Ken Blanchard companies in this blog post: What Experts Are Saying About Learning in the Metaverse, or check out our guide on immersive learning content.
Author Neal Stephenson coined the term “metaverse” in his 1992 science-fiction novel “Snow Crash,” which depicted a virtual reality-based successor to the Internet. Investopedia defines it as "a shared virtual environment that people access via the Internet. Technologies like virtual reality (VR), augmented reality (AR), and 3D graphics make the Metaverse possible - creating virtual worlds that give users a sense of 'virtual presence.'"
In other words, the Metaverse is the idea that the aforementioned virtual and augmented reality experiences will eventually be connected, offering people a way to move from one environment to another within a much larger and interconnected digital world. This is analogous to the way people move from one building to another within the same city in the real world.
In comparison to individual or siloed virtual reality simulations, the Metaverse connects these simulations, and allows people to have co-presence in VR - one user can be having a virtual experience, and another can join the same virtual environment from a completely different location to share the experience. This opens up possibilities for the Metaverse to host virtual events ranging from lessons in a virtual classroom to concerts and art exhibits.
New to the Metaverse? Check out the basics in this blog post: Metaverse 101: Explaining VR, AR, XR, and the Metaverse
A microverse is a destination within the Metaverse - an environment created to offer a specific virtual experience. These virtual environments and the platforms that host them have been created by companies like Meta, Microsoft, Roblox, Epic Games, Decentraland, and more.
They are often referred to as ‘walled gardens,’ indicating that the virtual destination does not allow users to move freely from one environment to another the way a user on the internet moves from one website to another. Epic Games’ popular video game ‘Fortnite’ is a good example of a microverse, or ‘self-contained Metaverse,’ as the company describes it. Users must create an account on each respective Metaverse platform to access its microverse, similar to the way users access various social media platforms separately.
‘Microverses’ are not typically connected or interoperable with virtual environments hosted on other platforms. This may change in the near future as these platforms evolve and companies seek to build Metaverse standards that enable interconnectivity between platforms.
A digital twin is a virtual representation of a real-life object in the Metaverse. Digital twins can be digital models of buildings, environments, tools, products, or anything else that exists in the real world.
Digital twins have immense value because they can be used in digital experiences to realistically simulate how objects are interacted with. For example, technicians can train to assemble or repair equipment using realistic 3D models, and consumers can ‘try on’ digital twins of clothing and shoes.
Digital Twins can also simulate how equipment and processes work in the physical world by pairing them with data from their real-world equivalent. If you created a digital twin of a wind turbine, for example, and paired it with real-world geographic and weather data, the virtual turbine could then simulate how much energy it would collect in certain areas and weather patterns to help guide decision making for energy companies.
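As a toy illustration of this idea, the sketch below pairs a hypothetical turbine model with a series of hourly wind-speed readings and estimates the energy produced. All parameters (rotor size, rated power, cut-in/cut-out speeds) and the weather values are assumptions chosen for the example, not data from any real turbine or product.

```python
# Toy digital-twin sketch: estimate a wind turbine's output from
# real-world wind-speed data using the standard power equation
# P = 0.5 * rho * A * Cp * v^3, capped at the turbine's rated power.
# Every parameter value below is hypothetical.
import math

RHO = 1.225            # air density at sea level, kg/m^3
ROTOR_DIAMETER = 90.0  # meters (hypothetical turbine)
CP = 0.40              # power coefficient (below the ~0.593 Betz limit)
RATED_POWER_KW = 2000.0
CUT_IN, CUT_OUT = 3.0, 25.0  # m/s operating window

def power_kw(wind_speed: float) -> float:
    """Estimated output in kW for one wind-speed reading."""
    if not CUT_IN <= wind_speed <= CUT_OUT:
        return 0.0  # idle below cut-in, shut down above cut-out
    area = math.pi * (ROTOR_DIAMETER / 2) ** 2  # swept rotor area, m^2
    raw_kw = 0.5 * RHO * area * CP * wind_speed ** 3 / 1000
    return min(raw_kw, RATED_POWER_KW)

def simulated_energy_kwh(hourly_wind_speeds: list[float]) -> float:
    """Energy the twin predicts over a series of hourly readings."""
    return sum(power_kw(v) for v in hourly_wind_speeds)

# Feed the twin a (hypothetical) day of weather data:
print(round(simulated_energy_kwh([2.0, 6.0, 9.0, 12.0, 30.0]), 1))  # → 3472.9
```

An energy company could run the same model against historical weather data for several candidate sites to compare expected yields before committing to construction.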
Digital twins provide a seamless experience for those moving between real life and the Metaverse, and they give a range of industries a way to run experiments and trial-and-error testing without real-world cost and risk.
This video shows a VR learning module where the user trains to be an insurance claims employee in a digital twin of a customer’s home.
People also have realistic digital representations in the Metaverse, as digital avatars have become the standard for how people take on a presence and communicate in VR and AR experiences. A digital avatar is a personal representation of a person in the Metaverse. An avatar may look very similar to how an individual looks in real life, or it could look entirely different and depict a fictional character, depending on personal preference.
Many users choose virtual humans as their avatar, mimicking real people both visually and with realistic speech and body language. Virtual humans can be controlled by users as avatars, and they can also be included in immersive experiences as non-playable characters (NPCs) for users to interact with in simulations, similar to the way users interact with characters in a video game. Content creation platforms like no-code authoring tools enable organizations and individuals to pre-determine the speech and physical movements of NPCs in virtual experiences.
Users can create an avatar for themselves and access immersive environments via a VR headset, such as the Meta Quest and HTC Vive, as well as through desktop computers and mobile devices. Alternatively, many Metaverse platforms support the use of NFTs (see below) as avatars within VR and AR experiences. These options for how individuals are portrayed in immersive experiences offer people a new level of personalization and representation within digital platforms. Meta rolled out a range of customizations to avatars back in 2021, letting users shape their eyes, nose, mouth, wrinkles, hairstyles, body types, beards, makeup, clothing and more.
A virtual environment is a digital destination or location that people can ‘enter’ to engage with digital content, including digital twins, or replicas, of real-world environments. Virtual environments offer everything from socialization to entertainment and educational experiences. Metaverse platforms like Roblox, Decentraland, and Minecraft are examples of platforms powering these immersive, virtual destinations, which users access via desktop computers, mobile devices, or VR headsets.
Whether it be a virtual storefront displaying the newest digital fashion for people to check out, a gallery environment showcasing digital artwork, or a virtual office space for colleagues to collaborate in, the virtual environments of the Metaverse are enabling a new range of digital experiences.
Check out the Talespin platform’s virtual environment asset library to explore environments designed for immersive learning simulations.
Co-Presence is another key theme of the Metaverse. Beyond self-guided, siloed virtual reality simulations, the Metaverse also offers multi-user simulations: more than one user can share the same virtual experience at the same time, joining the same virtual environment from completely different physical locations. This opens up possibilities for the Metaverse to host virtual events ranging from group learning sessions to concerts and art exhibits.
For example, Glue empowers employees to meet in the Metaverse to conduct brainstorm sessions, give presentations to colleagues, and have virtual whiteboard sessions. Wave, meanwhile, enables users to participate in "Waves" - live, interactive, and immersive shows and concerts.
Web 2.0 refers to the current state of the Internet in which users access information and content via websites and applications, and are able to create their own digital content for other people to access. Web 2.0 became prevalent in the early 2000s, as new technologies like social media and website building tools ushered in the rise of user generated content. This fueled the growth of dynamic digital communities and economies, and the rise of e-commerce and digital socialization.
Web 2.0 represents an evolution from Web 1.0, which was the version of the internet used in the 1990s. Most Web 1.0 users were passive participants – viewing websites that needed to be created by developers. In Web 2.0, a wider range of internet users have become engaged with the internet as content publishers, commenters, editors, and more. Web 2.0 also brought about richer digital experiences, as new media optimization technologies expanded the types of content that can be shared and consumed online.
Web 3.0, a term coined to represent the next iteration of the internet, is a collection of new digital technologies designed to create a more intelligent, decentralized, and connected internet. Web 3.0 technologies like blockchain, machine learning, semantic web, the Metaverse, and AI are being leveraged to create a new class of web infrastructure and applications.
Web 2.0 - the internet as we know it today - is characterized by websites and social media platforms that collect user data, and central authorities that oversee digital financial transactions. One of the key themes of Web 3.0 is decentralization, or the idea that websites and applications no longer have a middleman overseeing transactions and data collection. In a Web 3.0 paradigm, for example, transactions are recorded on a blockchain-enabled distributed ledger.
Web 3.0 also introduces a new user interface layer for web applications via the Metaverse, in which users can leverage technologies like VR and AR for more immersive and personalized digital interactions.
Blockchain technology is a decentralized digital ledger that exists across a network. Its records, called blocks, are securely linked together using cryptography, enabling transactions, proof of ownership, and data exchanges that are not controlled by any single company or individual - this is one of the cornerstones of Web 3.0.
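The "securely linked" part can be shown in a few lines of code. In the toy sketch below, each block stores the cryptographic hash of the previous block, so altering any earlier record invalidates every link that follows. This is a minimal illustration of the chaining idea only, not a production ledger (it omits consensus, signatures, and networking).

```python
# Minimal sketch of how a blockchain links records with cryptography.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    # Recompute each link: every block must point at its predecessor's hash.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
add_block(ledger, "Alice pays Bob 5 tokens")
add_block(ledger, "Bob pays Carol 2 tokens")
print(is_valid(ledger))  # True

ledger[0]["data"] = "Alice pays Bob 500 tokens"  # tamper with history
print(is_valid(ledger))  # False: downstream hashes no longer match
```

Because every participant can recompute the hashes independently, tampering is detectable by anyone on the network rather than by a single central authority.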
Why is it important? Here's IBM's take: "Business runs on information. The faster it’s received and the more accurate it is, the better. Blockchain is ideal for delivering that information because it provides immediate, shared and completely transparent information stored on an immutable ledger that can be accessed only by permissioned network members. A blockchain network can track orders, payments, accounts, production and much more."
This is also critical for cryptocurrency, which has emerged as one of the most predominant use cases for blockchain technology. Cryptocurrencies use blockchain technology to enable digital transactions that do not require a third-party governing body – this allows anyone, anywhere to make digital purchases and engage in digital financial transactions. Blockchain technology is used to maintain a secure and immutable record of transactions with these cryptocurrencies. Blockchains are also used to prove ownership and enable secure, peer-to-peer transactions for Non-Fungible Tokens (NFTs).
Non-Fungible Tokens (NFTs) are digital assets that use blockchain technology to prove their ownership and authenticity, and to enable their secure exchange in transactions. No two NFTs are exactly alike; every NFT is considered a unique unit of data. Thanks to blockchain technology, NFTs enable authenticity to be verified for digital assets in a way that was previously not possible. This is analogous to the way unique physical assets, like limited edition trading cards or rare works of art, are verified as authentic in the physical world.
Examples of digital assets that can be ‘minted’ as NFTs, or turned into a unit of data on a blockchain, include digital works of art, music, photos, videos, collectibles, tokens for club and association membership, and other forms of digital content. NFT technology is not limited to use cases like artwork; it is also being used to verify ownership of physical assets like real estate, and to divide ownership of assets like companies.
Cryptocurrency is a currency that exists only digitally and has no central issuing or regulating authority. Cryptocurrencies rely on decentralized systems, or blockchains, to record transactions, oversee the issuance of new units, and prove ownership. For these reasons, cryptocurrencies are difficult to counterfeit. Unlike NFTs, cryptocurrencies are fungible, meaning one unit of a cryptocurrency like Bitcoin is interchangeable with any other Bitcoin.
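The fungible/non-fungible distinction is easy to see in data terms. In the hypothetical sketch below, a fungible balance is just a number (any unit is interchangeable with any other), while each non-fungible token is a unique record tracked by its own ID. The names and structures are illustrative only and do not follow any real token standard.

```python
# Toy contrast between fungible and non-fungible assets.

# Fungible: only quantities matter, so a balance is a single number.
crypto_balances = {"alice": 5.0, "bob": 2.0}

def pay(sender: str, receiver: str, amount: float) -> None:
    # Any 1.0 unit is interchangeable with any other 1.0 unit.
    assert crypto_balances[sender] >= amount, "insufficient funds"
    crypto_balances[sender] -= amount
    crypto_balances[receiver] += amount

# Non-fungible: each token is a unique record, so ownership is tracked
# per token ID rather than as a quantity.
nft_owners = {
    "artwork-001": "alice",        # a specific piece of digital art
    "concert-ticket-42": "bob",    # a specific event ticket
}

def transfer_nft(token_id: str, new_owner: str) -> None:
    # The specific token changes hands; there is no "2.5 artworks".
    nft_owners[token_id] = new_owner

pay("alice", "bob", 1.5)
transfer_nft("artwork-001", "bob")
print(crypto_balances)            # {'alice': 3.5, 'bob': 3.5}
print(nft_owners["artwork-001"])  # bob
```

On a real blockchain, both kinds of records would live on the distributed ledger described above, but the core data-model difference is the same: interchangeable quantities versus uniquely identified items.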
Artificial intelligence (AI) is the ability of a digital computer or robot to perform tasks commonly associated with intelligent beings. This ability comes from leveraging computers and machines to mimic the problem-solving and decision-making capabilities of humans. Artificial intelligence encompasses the subfields of machine learning and deep learning. Robots that manufacture automobiles on an assembly line, navigation apps that anticipate traffic and direct you to the best route, and personal voice assistants such as Alexa are all examples of artificial intelligence in action.
We hope this primer on immersive technology and Web 3.0 terms helps as you explore the Metaverse. Is there a term we left out that you’d like to see defined? Contact us, or check out more Metaverse news and intel on the Talespin Blog.