The basic mechanisms underlying comprehension of spoken language are still largely unknown. Over the past decade, the study team has gained new insights into how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. However, the next set of questions concerns how those auditory elements are sequenced and how they are integrated with other features, such as the amplitude envelope of speech. Further investigation of the cortical representation of speech sounds is likely to shed light on these fundamental questions. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but little is known about how these sounds are linked together into the perceptual experience of words and continuous speech. The overall goal is to determine how the brain extracts linguistic elements from a complex acoustic speech signal, toward better understanding and remediating human language disorders.
Intracranial high-density electrodes make it possible to record neural activity directly from the brain surface with unparalleled spatial and temporal resolution, allowing both local and population-level encoding of speech sounds to be unraveled. This study proposes to assess speech perception in patients who are undergoing surgery for seizure localization or awake intraoperative brain mapping. Electrode placement is based on the clinical needs of each patient. The research team will examine the mechanisms of phonetic encoding to reveal both the organization of auditory speech feature selectivity and the distributed population-level processing that gives rise to the emergent properties of spoken language perception. This study seeks to determine the cortical encoding of phonological sequencing (Aim 1), the representation of amplitude landmark coding in speech (Aim 2), and the shared and distinct mechanisms for speech and music melody encoding (Aim 3). Together, these aims will advance our understanding of speech encoding in the human brain beyond consonants and vowels, addressing questions pertaining to sequencing, amplitude coding, and auditory specialization. The results should substantially inform current theories of speech processing and will therefore have significant implications for understanding and remediating human language disorders.