
How the IDUN Guardian fuses brain and body sensors to create better insights

Discover how the IDUN Guardian fuses brain and body sensors to provide deeper insights into human performance, health, and well-being.


Here’s something that happens more often than you’d think in our lab: we’re looking at EEG data from someone’s workday, and there’s this dramatic spike in brain activity around 2:47 PM. The signal looks intense, like the person just had a major cognitive breakthrough. But then we check the IMU data and discover the far more mundane likelihood: that “major cognitive breakthrough” was probably just the person coughing.

[Figure: an EEG spike caused by a cough artifact]

This is the fundamental problem with brain monitoring that we’ve been wrestling with for years. Raw EEG gives us incredible insights into neural activity, but without context, we’re essentially trying to understand a movie by looking at individual frames. That spike could mean deep focus, mild frustration, or simply head movement, and it’s hard to tell the difference from the brain signal alone.

We’ve made some real headway on this challenge lately, thanks largely to work that started as the master’s thesis of our newest employee, Luana Gisler. What began as academic research has become a core part of how our IDUN Guardian platform actually works in the field. And honestly, it’s changed how we think about brain monitoring entirely.


Why Context Has Always Been the Missing Piece

Anyone who’s worked with EEG knows about motion artifacts, those electrical signals that swamp your brain readings whenever the subject moves. The brain operates on microvolts, while a simple head turn can generate signals thousands of times stronger. It’s like trying to record a whispered conversation during a thunderstorm.

For decades, the standard solution was basically “don’t move.” Lab studies meant sitting perfectly still, often with head restraints, in carefully controlled rooms. This worked fine for research, but it severely limited what we could actually learn about how brains work in real life.

The real challenge comes with unsupervised recordings, when people wear our devices throughout their day without a researcher taking notes. We’d get back hours of data with no idea what was happening when different patterns emerged. Was that alpha wave suppression due to focused attention, or because someone was walking to a meeting? Without context, even the richest neural data becomes surprisingly hard to interpret.


Turning Motion from Enemy to Ally

At IDUN, we’ve shifted our approach around a simple question: instead of treating body movement as unwanted noise, what if we could use it to better understand brain activity?

After all, far from being separate systems, our brains and bodies are impossible to disentangle. When we move, our neural activity changes in predictable ways, not just from motor commands but from sensory feedback, attention shifts, and the cognitive demands of different postures. Rather than filtering this out, we should leverage these relationships.

This is where inertial measurement units (IMUs) become incredibly valuable. These are the same motion-sensing chips found in smartphones and gaming controllers. A full nine-axis IMU combines three kinds of sensor:

  • Accelerometers – measure the linear forces acting on the device (including gravity),

  • Gyroscopes – measure how fast the device is turning, and around which axis,

  • Magnetometers – measure the local magnetic field, which reveals the direction the device is facing.

These sensors are built into the Guardian and track head movement with remarkable precision, which is valuable in and of itself.
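As a toy illustration of how much orientation information even one of these channels carries, here is a hypothetical sketch that estimates head pitch from a single accelerometer sample. The function name and the at-rest assumption are ours, not part of the Guardian API:

```python
import numpy as np

def head_pitch_deg(acc):
    """Estimate head pitch from one accelerometer sample.

    acc: (ax, ay, az) in m/s^2, taken while the head is roughly still,
    so the reading is dominated by gravity. Illustrative sketch only.
    """
    ax, ay, az = acc
    # Angle between the device's forward axis and the horizontal plane.
    return np.degrees(np.arctan2(ax, np.hypot(ay, az)))

# Tilting the head forward shifts part of gravity onto the forward (x) axis:
print(head_pitch_deg((4.9, 0.0, 8.5)))  # ~30 degrees nose-down
```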

But we realized we could do much more with that motion data when combined with EEG readings. Instead of just detecting artifacts, we could start understanding the full picture of what someone was doing when specific brain patterns emerged.
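For context, the “just detecting artifacts” baseline is conceptually simple: flag every EEG sample that coincides with strong head motion, then discard it. A rough sketch, with an invented threshold rather than anything we actually ship:

```python
import numpy as np

def flag_motion_artifacts(gyro, fs_imu, n_eeg, fs_eeg, thresh_dps=30.0):
    """Mark EEG samples that coincide with strong head motion.

    gyro: (N, 3) angular-rate samples in deg/s. thresh_dps is an
    arbitrary illustrative value, not a validated one.
    """
    rate = np.linalg.norm(gyro, axis=1)        # rotation speed, any axis
    moving = rate > thresh_dps
    # Map each EEG sample index to the nearest (floored) IMU sample.
    idx = np.minimum((np.arange(n_eeg) * fs_imu / fs_eeg).astype(int),
                     len(gyro) - 1)
    return moving[idx]                         # True = likely artifact
```

Masking like this throws the contaminated samples away. The interesting step is what comes next: keeping the motion data and interpreting it.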


What Automatic Activity Recognition Actually Looks Like

Our latest daytime report showcases what this sensor fusion approach enables in practice. The system takes raw IMU data and transforms it into an intuitive body-centered coordinate frame: x for forward motion, y for leftward, z for upward. This sounds straightforward, but getting the math right for all possible head orientations and movements took considerable work.
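To give a flavor of that math, here is a simplified sketch that aligns the vertical (z) axis using the gravity direction estimated from the accelerometer. Recovering the forward (x) axis additionally requires heading information from the gyroscope or magnetometer, which we omit here; the names and filter settings are illustrative assumptions, not the Guardian’s actual implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def align_z_with_gravity(acc, fs, cutoff_hz=0.3):
    """Rotate accelerometer data (N x 3, sensor frame) so that z points up.

    Gravity is isolated with a low-pass filter, then Rodrigues' formula
    gives the rotation mapping it onto +z. Simplified sketch.
    """
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    g = filtfilt(b, a, acc, axis=0).mean(axis=0)
    g /= np.linalg.norm(g)              # unit gravity direction

    up = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, up)                 # rotation axis (scaled by sin)
    c = np.dot(g, up)                   # cosine of the rotation angle
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # R = I + [v]x + [v]x^2 / (1 + c); degenerate if the sensor is
    # exactly upside down (c = -1), hence the small epsilon.
    R = np.eye(3) + vx + vx @ vx / (1.0 + c + 1e-12)
    return acc @ R.T
```

Even this stripped-down version hints at the edge cases (an inverted sensor, drifting gravity estimates during sustained motion) that made getting the production math right a real effort.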

More importantly, we now provide automatic activity classification that breaks down into several categories:

  • Sitting – relatively stable posture with characteristic micro-movements for balance and small adjustments.

  • Standing – more dynamic balance-related movements and slight sway.

  • Lying down – completely different gravitational context, with minimal head movement.

  • Movement – subdivided further into sitting movement, lying movement, and active movement (walking, exercise, pushups, etc.).

Every EEG segment now gets automatically labeled with this contextual information. When we see changes in brain activity, we immediately know what the person was doing. This has been a game-changer for interpreting data from field studies.
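To make the idea tangible (this is a toy rule, not our classifier), posture can be read off the gravity direction and motion intensity off the signal’s variability:

```python
import numpy as np

def label_window(acc_body, motion_std_thresh=0.8):
    """Toy activity label for one window of body-frame accelerometer
    data (N x 3, in m/s^2). Thresholds are invented for illustration.
    """
    intensity = np.std(np.linalg.norm(acc_body, axis=1))
    gravity = acc_body.mean(axis=0)
    cos_tilt = gravity[2] / np.linalg.norm(gravity)
    tilt = np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0)))

    if intensity > motion_std_thresh:
        return "movement"
    if tilt > 60:                    # gravity mostly off the vertical axis
        return "lying down"
    return "sitting/standing"        # hard to separate from IMU alone
```

Fittingly, sitting versus standing is exactly where a crude feature set like this struggles, which matches what I saw when I put the real system to the test below.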


The Proof in the Pudding

Because we at IDUN believe in showing rather than telling, I ran a little experiment on myself. While wearing my own Guardian, I performed five everyday activities for a minute each, with brief transitions in between:

  • Standing still

  • Sitting still

  • Lying down

  • Walking

  • Doing pushups

I then generated a daytime report to predict my activities based on my IMU signal. Aside from confusing sitting still and standing still, it did not disappoint.

[Figure: the daytime report’s activity predictions from the IMU signal]


Real Applications We’re Already Seeing

This contextual approach has opened up applications we couldn’t tackle before:

  • Workplace cognitive assessment – measure mental workload during real tasks, automatically accounting for posture and movement.

  • Sleep studies – practical at home, detecting position changes while focusing on neural sleep signatures.

  • Mental health – combining neural and behavioral data for a more complete picture.

  • Cognitive training – personalizing methods based on how physical state impacts mental performance.


Upcoming Potential Applications

We’re still early in exploring what EEG-IMU fusion can accomplish. Current applications focus mainly on artifact removal and basic activity recognition, but the potential extends much further:

  • Predictive health monitoring – deviations from normal patterns as early warning signs.

  • Performance optimization – real-time feedback on positions and movement for peak performance.

  • Advanced interfaces – brain-body-computer integration beyond traditional BCIs.

  • Population-scale research – scalable, context-aware neuroscience studies with unprecedented sample sizes.


Mind-Body Sensor Fusion and the Future of BCI

Working on this technology over the past few years has been fascinating because it represents a real paradigm shift in how we approach brain monitoring. Instead of trying to eliminate the complexities of real-world human behavior, we’re embracing them to create more complete and actionable insights.

This approach recognizes that, fundamentally, we’re embodied beings whose mental processes are intimately connected to physical experience. By monitoring both neural activity and physical context, we’re getting closer to understanding not just what the brain is doing, but why it’s doing it.

The path from Luana’s initial research to our current product capabilities shows how academic-industry collaboration can work when there’s genuine focus on practical applications. University research provided the theoretical foundation and innovative thinking, while our development team refined and validated the approach for commercial deployment.

We’re moving toward a future where brain monitoring becomes as natural and informative as checking heart rate or step count. The sensor fusion approach that emerged from that master’s thesis research is just the beginning of what becomes possible when we combine rigorous neuroscience with practical engineering.

The brain-body connection we’ve learned to leverage through this work is opening up entirely new ways to understand and optimize human performance, health, and well-being in real-world settings, and we can’t wait to see what benefits it unlocks!


About the Author

Philip Egger – IDUN’s Data Science Lead
