The effects of attention and visual input on non-invasive electrophysiological indices of natural speech processing at different hierarchical levels
How the human brain extracts meaning from the dynamic patterns of sound that constitute speech remains poorly understood. This is especially true in natural environments, where the speech signal must be processed against a complex mixture of background sounds. In this talk I will outline efforts over the past few years to derive non-invasive indices of natural speech processing in the brain. I will discuss how these indices are affected by attention and visual input, and how attentional selection and multisensory integration can be "decoded" from EEG data. I will present work showing that EEG and MEG are sensitive not just to the low-level acoustic properties of speech, but also to higher-level linguistic aspects of this most important of signals, including processing at the level of phonetic features. Based on our most recent work, I will also present evidence that EEG is exquisitely sensitive to the semantic processing of natural, running speech in a way that is strongly affected by attention and intelligibility. In showcasing these findings, I will outline a number of paradigms and methodological approaches for eliciting non-invasive indices of speech-specific processing that should prove useful in advancing our understanding of receptive speech processing in particular populations.
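
To make the "decoding" mentioned above concrete, the following is a minimal sketch, in Python with NumPy, of the linear stimulus-reconstruction approach commonly used for auditory attention decoding. It is an illustrative assumption rather than the speaker's actual pipeline, and the function names and simulated data are hypothetical: a ridge-regression backward model maps time-lagged EEG to the speech envelope, and the attended talker is taken to be the one whose envelope correlates best with the reconstruction.

    import numpy as np

    def lag_matrix(eeg, max_lag):
        # Reconstruct the envelope at time t from EEG at times t..t+max_lag,
        # since the neural response lags the stimulus.
        n_samples, n_channels = eeg.shape
        X = np.zeros((n_samples, n_channels * (max_lag + 1)))
        for lag in range(max_lag + 1):
            X[:n_samples - lag, lag * n_channels:(lag + 1) * n_channels] = eeg[lag:]
        return X

    def train_decoder(eeg, envelope, max_lag, ridge=1e3):
        # Ridge regression: w = (X'X + lambda*I)^-1 X'y.
        X = lag_matrix(eeg, max_lag)
        XtX = X.T @ X + ridge * np.eye(X.shape[1])
        return np.linalg.solve(XtX, X.T @ envelope)

    def decode_attention(eeg, env_a, env_b, weights, max_lag):
        # Reconstruct the envelope from EEG, then pick the talker whose
        # envelope correlates best with the reconstruction.
        recon = lag_matrix(eeg, max_lag) @ weights
        r_a = np.corrcoef(recon, env_a)[0, 1]
        r_b = np.corrcoef(recon, env_b)[0, 1]
        return ("A" if r_a > r_b else "B"), (r_a, r_b)

    # Toy demonstration on simulated data (64-channel EEG at 64 Hz, 60 s);
    # the "EEG" is just the attended envelope mixed into all channels plus noise.
    rng = np.random.default_rng(0)
    fs, dur, n_ch, max_lag = 64, 60, 64, 16  # 16 samples ~ 250 ms of lags
    env_a = np.abs(rng.standard_normal(fs * dur))  # attended talker's envelope
    env_b = np.abs(rng.standard_normal(fs * dur))  # unattended talker's envelope
    eeg = env_a[:, None] @ rng.standard_normal((1, n_ch)) \
          + 0.5 * rng.standard_normal((fs * dur, n_ch))

    # Fit and apply the decoder (training and testing on the same data only
    # for brevity; real analyses would cross-validate across trials).
    w = train_decoder(eeg, env_a, max_lag)
    choice, (r_a, r_b) = decode_attention(eeg, env_a, env_b, w, max_lag)
    print(f"decoded talker: {choice} (r_attended={r_a:.2f}, r_unattended={r_b:.2f})")

Publicly available implementations such as the mTRF Toolbox handle regularization tuning, lag conventions, and cross-validation more carefully than this sketch does.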