The world of immersive computing and experience design has evolved rapidly: from the early virtual reality prototypes of the 1990s to Google Glass, to augmented reality features merged into mobile operating systems (after Apple acquired Metaio), to mixed reality glasses like the Microsoft HoloLens and Magic Leap. Today all of these immersive technologies are grouped under the umbrella term Extended Reality, or XR for short. But what comes next?
Figure 1: As a society we moved from classic desktop computers to laptops with keyboard and mouse to tablets, AR glasses and VR devices.
XR was a natural evolution of VR and AR concepts for immersive and ubiquitous computing. Depending on who you talked to, it was easy to get different opinions on what exactly “virtual,” “augmented,” or “mixed” reality was supposed to be. To a large extent these became marketing terms to push development budgets, rather than defined (and well-designed) experiences that users could wrap their heads around. I always favor simple definitions for new technologies: virtual reality replaces a user’s reality to the fullest extent possible, while augmented reality modifies a user’s view of their current reality by adding media or information elements. XR, or Extended Reality, is in my view a toolbox of immersive technologies with a focus on user experience design, as opposed to only using the traditional VR or AR toolboxes.
Figure 2: Historically we have been developing virtual reality experiences for entertainment and different use cases.
A dream of technology evangelists has been that XR would take off across all facets of society. But as with many new technologies, most applications are not yet widespread; instead they empower customers in specific use cases. Gaming, for example, has been a consistent driver of VR adoption. In medicine, VR and AR technologies offer enhanced training and surgical planning, as well as access to data during surgery. Many prototypes have also shown virtual office concepts, with users surrounded by virtual screens. However, buying and putting on VR or AR goggles is a barrier to entry, and the experience has to be worth the effort.
Figure 3: Interaction experiences have tried to expand beyond the keyboard-mouse combination to interact with virtual worlds.
At the heart of XR is the ability to enhance the human experience. In the real world of user experience design, however, it’s difficult to make the case for wearing immersive glasses to check Facebook when you can easily do it from your phone. XR is really the story of evolving computer interaction patterns. We started with the keyboard and mouse, then moved on to mobile computing (tapping and pinch-to-zoom). As with many new technologies, the real value isn’t in finding a “killer app” for XR; it’s about solving problems and reducing friction when building interactive experiences. For example, XR has many applications in the medical field, where intuitively visualized 3D models can help train better surgeons and let them operate on patients with better surgical planning.
However, for most consumer applications XR technologies focus on replacing a user’s sound and visual input. There has also been work on including smell and tactile feedback, but then you end up wearing a mask, looking like Bane from The Dark Knight Rises. You can also take things to the extreme with 4D cinema, which is supposed to add tactile sensation to a normal cinema experience but generally just pulls you out of the narrative arc of the movie because it’s too distracting. XR does have the ability to extend the senses we use when interacting with virtual worlds. However, a missing component is emotional intelligence. While XR generally refers to building virtual worlds, the truth is it’s about extending our natural abilities in those worlds. Currently, people are essentially seen as data clouds for algorithm inputs. From our online interactions and data history, artificial intelligence systems characterize humans by likes, time spent watching videos, our playlists, the websites we’ve visited, and so on. This may help Kickstarter target ads to me for the next “best backpack ever” product launch, but it doesn’t understand my emotional needs.
To create compelling value for society, XR needs to move beyond simply being an immersive way to view 3D content. The next step for XR is the integration of intuitive biosensors and neurotechnologies that enable systems to understand users from a physiological perspective. Measuring heart rate, extracting heart rate variability, sensing anxiety, and reading frustration from brain activity: these are the elements that enable emotional intelligence in computing systems. For some, the idea of using sensors to understand our emotional needs is scary. Yet we already use sensors to characterize human movement (fitness bands) to extract how we walk, run, and sleep. This data helps many people understand themselves better and make choices to improve their lives. As we develop more autonomous systems and XR environments, it’s important to design systems that can learn about user needs to provide better experiences. Just as humans learn empathy and develop emotional intelligence by listening to and learning from the people around us, so can XR systems if we give them that ability.
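To make the heart rate variability idea above concrete, here is a minimal sketch of how such a metric could be derived from raw sensor data. It assumes the sensor delivers R-R intervals (milliseconds between consecutive heartbeats) and computes RMSSD, a standard HRV measure; the interval values and function names are illustrative, not from any particular device or product.

```python
import math

def mean_heart_rate(rr_intervals_ms):
    """Average heart rate in beats per minute from R-R intervals."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60_000 / mean_rr  # 60,000 ms per minute

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats,
    a common time-domain heart rate variability (HRV) metric."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Simulated R-R intervals (ms between beats), for illustration only.
rr = [812, 790, 845, 805, 830, 798, 820]
print(f"Mean HR: {mean_heart_rate(rr):.1f} bpm")
print(f"RMSSD:   {rmssd(rr):.1f} ms")
```

An emotionally intelligent system would track metrics like these over time: a falling RMSSD under load, for instance, is one physiological signal often associated with stress, which an XR experience could respond to by adapting its pacing.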
I’m continually inspired to build a world where we give machines the ability to sense our needs and make emotional intelligence a natural way to interact with the connected world around us. Are you interested in building a future where you’re more than just a dataset to train an algorithm with? If so, get in touch and let’s chart a path.
Hello, I’m Mark.
I am an artist-engineer focused on building intuitive technologies for humanity. I’ve always been fascinated by the intersection of virtual and physical worlds, which we see more and more of as mobile, augmented, and virtual reality technologies are adopted across society.
As an engineer at a deep-tech startup I’m excited to build the foundations of how we can more seamlessly interact with the connected world and with other people (the Internet of Humans). The team and I are continually faced with new challenges in data and product design that drive us forward.
We are located in the outskirts of Zurich, near the airport. We always welcome drop-in visits!