Towards a smart hearing aid: Decoding the brain's response to speech
Understanding speech in noise is a major challenge for people with hearing impairment, and it persists for hearing aid users. The problem is compounded by the often imperfect fitting of a hearing aid's settings, which is informed by pure-tone audiometry but not directly by speech-in-noise comprehension. Moreover, although algorithms for enhancing the intelligibility of speech in noise exist, their use in a hearing aid requires knowledge of the user's target sound, such as the particular voice amongst competing speakers that the wearer wants to attend to. Here we present recent progress on decoding both speech comprehension and a listener's attentional focus on one of two competing voices from non-invasive EEG recordings. The decoding is based on cortical as well as subcortical neural activity in relation to different acoustic and linguistic features of speech. The methods we developed may be applied in a smart hearing aid that measures brain activity through electrodes within the ear canal, both to better fit the hearing aid's settings and to inform its noise-reduction algorithm.
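The abstract does not specify the decoding algorithm. For illustration only, the sketch below shows one widely used approach to EEG-based attention decoding: a linear backward model (stimulus reconstruction) that maps time-lagged EEG channels to the speech envelope via ridge regression, after which the attended voice is taken to be the one whose envelope correlates best with the reconstruction. This is a minimal sketch under those assumptions, not necessarily the authors' method; the function names and parameters (`lagged_design`, `max_lag`, `ridge`) are hypothetical, and it covers only the acoustic envelope, not the linguistic features or subcortical responses mentioned above.

```python
import numpy as np

def lagged_design(eeg, max_lag):
    """Build a design matrix whose row t holds the EEG samples at times
    t..t+max_lag: the neural response lags the stimulus, so the envelope
    at time t is reconstructed from the EEG shortly after t.
    eeg: array of shape (n_samples, n_channels)."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[:n_samples - lag, lag * n_channels:(lag + 1) * n_channels] = eeg[lag:]
    return X

def train_decoder(eeg, attended_envelope, max_lag=40, ridge=1e3):
    """Fit the backward model on training data where the attended speaker
    is known; ridge regularisation tames the many correlated lagged
    regressors (max_lag=40 samples is ~400 ms at 100 Hz, an assumption)."""
    X = lagged_design(eeg, max_lag)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_envelope)

def decode_attention(eeg, env_a, env_b, weights, max_lag=40):
    """Reconstruct the envelope from held-out EEG and label as attended the
    speaker whose actual envelope correlates better with the reconstruction."""
    recon = lagged_design(eeg, max_lag) @ weights
    corr_a = np.corrcoef(recon, env_a)[0, 1]
    corr_b = np.corrcoef(recon, env_b)[0, 1]
    return ("A" if corr_a > corr_b else "B"), corr_a, corr_b
```

The same framework extends to other speech representations by swapping the envelope for a different feature time series (for instance, word onsets), which is one way acoustic and linguistic features can be combined in a single decoder.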