In this post our Full-Stack Developer Daniel Burger talks you through the exciting future of BCIs being used in Virtual and Augmented Reality.
If you want to learn more, or want us to cover a specific topic, then reach out today!
Extended reality (XR), the umbrella term for augmented reality (AR) and virtual reality (VR), is becoming increasingly popular in the gaming world. This popularity has now led to the realisation of a form of the metaverse: a collection of virtual 3D worlds designed for social interaction.
Now, while the technology is still a long way from holodecks and The Matrix, it is becoming more intuitive and accessible for people around the world. Released after a Kickstarter campaign in 2012, the Oculus Rift became the first truly successful VR headset and was bought by then Facebook, now Meta, for $2 billion in 2014. It remained more of a niche gaming device for a number of years before the Oculus Quest 2 went on to sell over 1 million units in 2020. Meanwhile, Pokémon Go became a global sensation, with would-be Pokémon hunters risking life and limb to catch them all. It was one of the first truly mainstream AR applications to see such widespread popularity, and for many people it was their first encounter with AR, letting them experience XR with no equipment other than the smartphone in their pocket.
Since then, XR has gone from strength to strength; however, there is still a heavy reliance on existing methods of interaction with XR applications. VR applications tend to use controllers for most in-game movement and interaction, with only a few minor exceptions such as the VR game Nock, released in March 2022. In this game, you move your arms much as a gondolier might steer his boat through the canals of Venice to launch yourself around the arena. This is just one example of how our interaction with XR environments could change, especially as the metaverse becomes more popular.
Interacting with 3D virtual spaces could lead to the next major change in how we interact with technology, on a scale not seen since the adoption of touchscreens that came with the first iPhone in 2007. That technology allowed existing concepts to be adapted easily into digital form. We could suddenly read a book on our phones in a more intuitive and natural way: instead of scrolling through a PDF or text document, we could physically turn the page with the swipe of a finger, or watch videos in an app specifically designed to look like a TV (check out the original icon for the YouTube app). This is known as skeuomorphism, the process of integrating aspects of a real-world concept into its digital version. The same process is already happening in VR games, with elements we recognise from other digital formats, such as menu screens, that require either physical interaction or the use of a controller. But how could this change with the integration of brain-computer interfaces (BCIs)?
BCIs allow people to interact with computers through a direct connection between their brain and a computer, an interaction that could do away with the need for a controller or certain physical movements. In the real world, we use our brains to interact with our surroundings and react to external stimuli; as we build more intuitive and interactive 3D worlds, they may naturally require input from our brains to create a better experience for the user. However, this new mode of interaction would be as big a change as the iPhone was, and it will require new ways of thinking about both hardware and software design. Our Guardian device and the Guardian Neuro-Intelligence Platform are designed to be the first stepping stone in this transition. With a BCI in a form factor that many have already adopted into their everyday lives, the earbud, and an API that allows designers and engineers to develop new and innovative experiences for users, it is possible that we will see the beginnings of this change in the near future. It is something we at IDUN can't wait to make happen.
After working for several years as a software engineer in the industry, developing software for advertising technology and digital reality, I transitioned into neurotechnology. At IDUN, I am in charge of the Neuro-Intelligence Platform's continuous full-stack development, cloud infrastructure operations, and research-driven design, with a focus on interactive brain-machine interface software and a passion for VR applications.
We are located in the outskirts of Zurich, near the airport. We always welcome drop-in visits!