When we first envisioned creating “the smartwatch for the brain,” we knew the path wouldn’t be straightforward. Traditional full-scalp EEG requires many electrodes spread across the scalp in standardized locations. This is fine when you’re in a lab surrounded by researchers; our ambition is to also make it possible when you’re at home and alone.
Sleep, in particular, is an area of substantial interest to us: if our end users can get accurate sleep analytics across many nights, they can use this valuable information to optimize their daily activities to live healthier lives. Such analytics requires the setup to be simple and unobtrusive enough for a non-expert.
Today, we’re thrilled to celebrate the completion of the master’s thesis of IDUN intern Thibaut Haslé, conducted in collaboration with EPFL’s Neuro-X Institute and Paris-based startup SigmaNova. Under the supervision of IDUN Data Science Lead Philip Egger, SigmaNova researcher Richard Gao, and EPFL Professor Martin Schrimpf, Thibaut has demonstrated something remarkable: brain foundation models can transform minimal EEG data into sophisticated sleep analysis, matching the performance of traditional multi-channel systems.
To simplify things a bit, human sleep typically occurs in three stages: light, deep, and rapid eye movement (REM) sleep, whose relative quantities, together with the total amount of sleep, determine your overall sleep quality. Combined with an extra stage for “wake”, the sleep-staging problem thus becomes: given a segment of time, which of these four labels should be assigned to it?
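To make the task concrete: sleep is conventionally scored in 30-second epochs, so a night of EEG becomes a sequence of epochs, each of which receives one of the four labels. Below is a minimal sketch of that framing in Python; the sampling rate, array names, and the `model` classifier are illustrative placeholders, not the pipeline used in the thesis.

```python
import numpy as np

EPOCH_SEC = 30                              # sleep is conventionally scored in 30-second epochs
STAGES = ["wake", "light", "deep", "REM"]   # the four classes

fs = 250                                    # placeholder sampling rate (Hz)
night = np.random.randn(8 * 3600 * fs)      # stand-in for 8 hours of single-channel EEG

# Split the night into non-overlapping 30-second epochs...
n_epochs = len(night) // (EPOCH_SEC * fs)
epochs = night[: n_epochs * EPOCH_SEC * fs].reshape(n_epochs, EPOCH_SEC * fs)
print(epochs.shape)                         # (960, 7500): 960 epochs of 7500 samples each

# ...and sleep staging assigns one of the four labels to each epoch:
# labels = [STAGES[model.predict(epoch)] for epoch in epochs]  # with some trained classifier
```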
Understanding sleep quality typically requires a visit to a sleep clinic, where patients spend the night connected to dozens of sensors measuring brain activity, eye movements, and muscle tension. This polysomnography (PSG) setup is the gold standard for sleep analysis, but it’s expensive, uncomfortable, and completely impractical for everyday use.
The IDUN Guardian takes a radically different approach, using just two tiny electrodes positioned in the ear canals to capture brain signals. It’s like trying to understand a symphony by listening through a keyhole; you have much less information to work with, but the core patterns are still there if you know how to extract them.
Traditional algorithms struggle with this constraint. Even YASA, one of the most robust classical sleep-staging algorithms, sees its performance degrade significantly when applied to single-channel ear-EEG compared to full PSG setups.
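For readers curious what that classical baseline looks like in practice, here is a minimal sketch using YASA’s public API; the file name and channel name are placeholders, and this is not IDUN’s production pipeline.

```python
import mne
import yasa

# Load a single-channel night recording (file and channel names are placeholders).
raw = mne.io.read_raw_edf("ear_eeg_night.edf", preload=True)

# YASA stages sleep from an MNE Raw object given the EEG channel to use.
# EOG and EMG channels are optional and absent here; that missing context
# is exactly the constraint that hurts classical algorithms on ear-EEG.
sls = yasa.SleepStaging(raw, eeg_name="EarEEG")
stages = sls.predict()   # one label per 30-second epoch, e.g. 'W', 'N2', 'R'
print(stages[:10])
```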
Foundation models in language, like GPT, are truly the Swiss Army knives of the AI world: after being pre-trained (that’s the ‘P’ in GPT) on vast text corpora, they can be fine-tuned for specific tasks using much smaller, domain-specific datasets.
By analogy, brain foundation models can be pre-trained on massive EEG datasets and then adapted to new scenarios with minimal labeled data. Thibaut used CBraMod, an open-source foundation model pre-trained on over 9,000 hours of diverse EEG recordings. Through that pre-training, the model learned fundamental patterns of electrical brain activity, even though it had never seen single-channel ear-EEG data.
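As a rough illustration of this pre-train-then-adapt recipe, the sketch below fine-tunes a pre-trained encoder with a small 4-class head in PyTorch. The encoder class, checkpoint path, and dummy data are hypothetical stand-ins; CBraMod is open source, but its exact loading API may differ from what is shown here.

```python
import torch
import torch.nn as nn

class PretrainedEEGEncoder(nn.Module):
    """Hypothetical stand-in for a pre-trained EEG encoder such as CBraMod."""
    def __init__(self, in_samples: int = 7500, feat_dim: int = 512):
        super().__init__()
        self.feat_dim = feat_dim
        self.backbone = nn.Sequential(
            nn.Flatten(),                   # (batch, 1, samples) -> (batch, samples)
            nn.Linear(in_samples, feat_dim),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)

encoder = PretrainedEEGEncoder()
# encoder.load_state_dict(torch.load("cbramod_pretrained.pt"))  # hypothetical checkpoint

# Attach a small 4-class head (wake / light / deep / REM) and fine-tune end to end.
model = nn.Sequential(encoder, nn.Linear(encoder.feat_dim, 4))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch: 16 single-channel 30-second epochs (7500 samples at 250 Hz)
# with integer stage labels 0..3. A real run would loop over a labeled dataset.
x = torch.randn(16, 1, 7500)
y = torch.randint(0, 4, (16,))

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```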
He fine-tuned the model into a 4-class classifier (the classes being the three sleep stages plus wake) and evaluated its performance using Cohen’s κ score, a version of standard classification accuracy corrected for the fact that a random classifier would sometimes get the answer right by chance: κ = (pₒ − pₑ) / (1 − pₑ), where pₒ is the observed accuracy and pₑ is the accuracy expected by chance alone. A uniform random guess over the four classes would thus achieve an accuracy of about 25% but a κ score of zero, while perfect classification has an accuracy and a κ score of one.
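To make the metric concrete, the quick check below uses scikit-learn’s `cohen_kappa_score` on synthetic labels: a uniform random guesser over four balanced classes lands near 25% accuracy but κ ≈ 0, while perfect agreement gives κ = 1.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(seed=0)
y_true = rng.integers(0, 4, size=10_000)   # ground-truth stages, 4 balanced classes
y_rand = rng.integers(0, 4, size=10_000)   # uniform random guesser

print(accuracy_score(y_true, y_rand))      # ~0.25: right by chance alone
print(cohen_kappa_score(y_true, y_rand))   # ~0.0: chance-corrected score
print(cohen_kappa_score(y_true, y_true))   # 1.0: perfect agreement
```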
The results were striking: fine-tuned on minimal ear-EEG data, the foundation model matched the sleep-staging performance of traditional multi-channel systems.
Think of it like teaching someone to play a new musical instrument. If they already understand music theory, rhythm, and harmony from playing other instruments, they’ll pick up the new one much faster than a complete beginner.
Similarly, the foundation model’s pre-training on diverse EEG data gave it an intuitive understanding of brain rhythms, sleep transitions, and the temporal patterns that distinguish different sleep stages. When shown ear-EEG data for the first time, it could leverage this prior knowledge to quickly adapt to the new signal characteristics.
The technical elegance lies in the approach: rather than engineering a new ear-EEG algorithm from scratch, the model reuses what it already learned from massive, diverse EEG datasets, so only a small amount of labeled ear-EEG data is needed to adapt it to the new signal.
Thibaut has shown that foundation models can bridge the gap between laboratory-grade brain monitoring and practical, everyday devices, and we at IDUN are excited to extend this successful proof of concept to other downstream use cases.
We’re entering an era where AI systems trained on population-scale brain data can be rapidly adapted to individual users. As research into brain foundation models yields efficiency gains similar to those that have emerged in natural language processing, we may see these capabilities become available across consumer devices while maintaining clinical-grade accuracy.
This breakthrough represents the culmination of outstanding collaboration between academic research and industry innovation. Thibaut’s diligence and scientific rigor, combined with the expertise of his supervisors and the unique datasets available through IDUN’s technology, created the perfect environment for this discovery.
We’re particularly proud of how this work bridges fundamental research and practical application, delivering insights that are both academically interesting and directly useful to anyone who wants to improve their life by improving their sleep.
Imagine a world where understanding your sleep patterns is as easy as checking your step count, where cognitive fatigue is monitored as routinely as heart rate, and where brain health becomes part of everyday wellness tracking.
Thibaut’s thesis takes us one small step closer to that future. By showing that foundation models can extract clinical-grade insights from minimal brain signals, he’s helped validate IDUN’s core mission: making professional-quality neurotechnology accessible to everyone.
The future of personalized neurotechnology starts now, and we are thrilled to be part of the journey.
The full thesis, “Leveraging Brain Foundation Models in Consumer Neurotech: Single-Channel Ear-EEG” by Thibaut Haslé, provides detailed methodology and results for researchers interested in reproducing and extending this work.