VowelNet: Enhancing Communication with Wearable EEG-Based Vowel Imagery

Abstract

VowelNet is a lightweight neural network for decoding imagined speech on ultra-low-power wearables. Using only eight scalp EEG channels, the system distinguishes imagined vowels from rest with 91.1% accuracy and discriminates between individual vowels with 61.8% accuracy. Deployed on a GAP9 processor, it runs with 41 ms latency and 17 mW power consumption, enabling continuous operation for more than 24 hours while requiring eight times fewer channels than prior speech-imagery systems.

Key Highlights

  • Classifies imagined vowels from only eight EEG channels, achieving 91.1% vowel-vs.-rest accuracy and 61.8% inter-vowel accuracy.
  • Runs on GAP9 with 41 ms latency and 17 mW power, supporting 24+ hours of continuous operation.
  • Requires eight times fewer channels than previous wearable speech-imagery brain–computer interfaces.
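The 24-hour figure implied by the 17 mW draw can be sanity-checked with simple energy arithmetic. A minimal sketch, assuming a hypothetical 150 mAh / 3.7 V wearable-sized cell (the post does not state the battery used):

```python
# Back-of-the-envelope battery-life check for the reported 17 mW draw.
# The 150 mAh / 3.7 V cell is an assumed wearable-class battery,
# not a value taken from the post.
BATTERY_MAH = 150      # assumed battery capacity (mAh)
BATTERY_V = 3.7        # nominal Li-Po cell voltage (V)
POWER_MW = 17          # reported VowelNet power consumption on GAP9 (mW)

energy_mwh = BATTERY_MAH * BATTERY_V   # stored energy: 555 mWh
runtime_h = energy_mwh / POWER_MW      # ideal runtime at constant draw

print(f"{runtime_h:.1f} h")  # comfortably above the 24 h claim
```

Even before accounting for regulator and radio overhead, such a cell leaves headroom over the 24-hour target, which is consistent with the continuous-operation claim.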
Thorir Mar Ingolfsson
Postdoctoral Researcher

I develop efficient machine learning systems for biomedical wearables that operate under extreme resource constraints. My work bridges foundation models, neural architecture design, and edge deployment to enable real-time biosignal analysis on microwatt-scale devices.
