My research is at the intersection of artificial intelligence, cognitive neuroscience, and hearing healthcare and technology. I aim to gain insight into neural sound processing in normal-hearing and hearing-impaired listeners, and to combine this knowledge with advanced machine hearing technology to develop solutions for assistive hearing devices and hearables.

Machine hearing

Aim: Optimize AI models to mitigate the challenges of performing machine hearing tasks in real-world listening scenes (e.g. multiple simultaneous sound sources, reverberation).

Approach:

  • Create spatialized, reverberant listening scenes that mimic the acoustic properties of real-world environments (see the sketch after this list).

  • Maximize the sound features available to AI algorithms, drawing on principles of human hearing.
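
For illustration only, a minimal sketch of how such a spatialized, reverberant scene might be constructed in code. The sample rate, interaural delays, levels, and reverberation times below are arbitrary placeholders, and noise signals stand in for real recordings; this is not the actual stimulus-generation pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

FS = 16000  # sample rate in Hz (arbitrary choice)

def synth_rir(itd_samples, ild_db, rt60, rng):
    """Crude 2-channel impulse response: a direct path that is delayed and
    attenuated at one ear (ITD/ILD) plus an exponentially decaying noise
    tail standing in for room reverberation (about -60 dB after rt60 s)."""
    n = int(rt60 * FS)
    t = np.arange(n) / FS
    rir = 0.05 * rng.standard_normal((n, 2)) * np.exp(-6.9 * t / rt60)[:, None]
    rir[0, 0] += 1.0                              # direct path at the left ear
    rir[itd_samples, 1] += 10 ** (-ild_db / 20)   # delayed, softer at the right ear
    return rir

def spatialize(source, rir):
    """Convolve a mono source with each ear's impulse response."""
    return np.stack([fftconvolve(source, rir[:, ch]) for ch in range(2)], axis=1)

rng = np.random.default_rng(0)
talker = rng.standard_normal(FS)   # 1 s of noise standing in for a speech source
masker = rng.standard_normal(FS)   # a second, competing source
scene = (spatialize(talker, synth_rir(itd_samples=8, ild_db=3.0, rt60=0.4, rng=rng))
         + spatialize(masker, synth_rir(itd_samples=20, ild_db=6.0, rt60=0.4, rng=rng)))
```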

Human hearing

Aim: To unravel the neurocomputational mechanisms underlying naturalistic spatial hearing and to understand the simultaneous encoding and integration of spatial and non-spatial sound features in complex, real-life listening scenes.

Approach:

  • Neurobiologically inspired deep neural network modelling (see the sketch after this list).

  • Empirical validation against neural data, including unique ultra-high-field functional measurements of responses in auditory brainstem nuclei and invasive intracranial recordings.
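
For illustration only, a minimal sketch of what a neurobiologically inspired network with joint spatial and non-spatial read-outs could look like. The filterbank, layer sizes, and task heads are arbitrary placeholders, not the actual model.

```python
import torch
import torch.nn as nn

class BinauralNet(nn.Module):
    """Toy binaural network: a 'cochlear' filterbank per ear, a compressive
    nonlinearity, a small convolutional stage, and separate read-outs for a
    spatial feature (azimuth) and a non-spatial feature (sound category)."""
    def __init__(self, n_filters=32, n_classes=10):
        super().__init__()
        # One filterbank per ear (could be initialized with gammatone kernels).
        self.cochlea = nn.Conv1d(2, 2 * n_filters, kernel_size=256,
                                 stride=16, groups=2, bias=False)
        self.cortex = nn.Sequential(
            nn.Conv1d(2 * n_filters, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.azimuth_head = nn.Linear(64, 1)           # spatial read-out
        self.category_head = nn.Linear(64, n_classes)  # non-spatial read-out

    def forward(self, x):                      # x: (batch, 2 ears, time samples)
        h = torch.relu(self.cochlea(x)).pow(0.3)   # rectification + compression
        h = self.cortex(h)
        return self.azimuth_head(h), self.category_head(h)

azimuth, category = BinauralNet()(torch.randn(4, 2, 16000))  # four 1 s binaural clips
```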

Neural data:

  • Ultra-high-field functional magnetic resonance imaging (UHF fMRI, 7 Tesla).

  • Invasive intracranial measurements (stereotactic electroencephalography [sEEG], electrocorticography [ECoG]).

  • Electroencephalography (EEG).