Speaker: Hyojin Park (School of Psychology, Centre for Human Brain Health (CHBH), University of Birmingham, UK)
Title: Brain rhythms and multi-modal speech perception
Numerous neurophysiological studies have shown that cortical oscillations play an important role in the segmentation, parsing, and coding of continuous speech streams by demonstrating that brain rhythms track speech rhythms, a process known as speech entrainment. Recent evidence suggests that this mechanism facilitates speech intelligibility, and we recently demonstrated that the same holds for visual speech (lip movements). This has led us to ask to what extent auditory and visual information are represented in brain areas, either jointly or individually: which regions convey information shared across multisensory inputs, and which represent the inputs synergistically? In my talk, I will first present our recent work showing how information in entrained auditory and visual speech interacts to facilitate speech comprehension. Here we used a novel Information Theory approach, Partial Information Decomposition (PID), to decompose dynamic information quantities such as synergy, redundancy, and unique information. Second, I will show how the information interaction between audiovisual speech rhythms is represented as a function of time in different brain regions. Third, I will discuss our recent results on linking function to anatomy: diffusion tensor imaging revealed that the degree of white matter integrity differentially predicts, across individuals, the information interaction between audiovisual speech rhythms. Lastly, I will demonstrate how the brain processes high-level semantic gist (topic keywords) using a Natural Language Processing (NLP)-based topic modeling algorithm.
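To illustrate what PID's synergy, redundancy, and unique-information terms mean, here is a minimal sketch for two discrete sources and one target, using the Williams-Beer I_min redundancy measure as one common choice. This is a toy example under stated assumptions, not the speaker's actual pipeline, which operates on continuous, time-resolved MEG and speech signals; the XOR system is the textbook case where all information is purely synergistic.

```python
# Toy Partial Information Decomposition (PID) for two discrete sources
# and one target, using the Williams-Beer I_min redundancy measure.
# Illustrative sketch only; not the analysis pipeline from the talk.
import math
from collections import defaultdict
from itertools import product

def marginal(joint, keep):
    """Marginalise a joint distribution {outcome_tuple: prob} onto indices `keep`."""
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[tuple(outcome[i] for i in keep)] += p
    return dict(m)

def mutual_info(joint, src, tgt):
    """I(S; T) in bits, where src and tgt are tuples of variable indices."""
    ps, pt = marginal(joint, src), marginal(joint, tgt)
    pst = marginal(joint, src + tgt)
    return sum(p * math.log2(p / (ps[k[:len(src)]] * pt[k[len(src):]]))
               for k, p in pst.items() if p > 0)

def specific_info(joint, src, tgt, t):
    """I(S; T=t): information source S carries about the particular outcome t."""
    ps = marginal(joint, src)
    pst = marginal(joint, src + tgt)
    p_t = marginal(joint, tgt)[t]
    si = 0.0
    for s, p_s in ps.items():
        p_joint = pst.get(s + t, 0.0)
        if p_joint > 0:
            # p(s|t) * log2( p(t|s) / p(t) )
            si += (p_joint / p_t) * math.log2((p_joint / p_s) / p_t)
    return si

def pid(joint, s1=(0,), s2=(1,), tgt=(2,)):
    """Decompose I(S1,S2; T) into redundancy (I_min), unique info, and synergy."""
    pt = marginal(joint, tgt)
    redundancy = sum(p * min(specific_info(joint, s, tgt, t) for s in (s1, s2))
                     for t, p in pt.items())
    u1 = mutual_info(joint, s1, tgt) - redundancy
    u2 = mutual_info(joint, s2, tgt) - redundancy
    synergy = mutual_info(joint, s1 + s2, tgt) - u1 - u2 - redundancy
    return {"redundancy": redundancy, "unique1": u1,
            "unique2": u2, "synergy": synergy}

# T = S1 XOR S2 with uniform binary inputs: neither source alone predicts T,
# but together they determine it exactly, so all 1 bit is synergistic.
xor_joint = {(a, b, a ^ b): 0.25 for a, b in product((0, 1), repeat=2)}
print(pid(xor_joint))  # synergy == 1.0 bit, all other terms == 0.0
```

In the audiovisual speech setting, the two "sources" would be entrained responses to the auditory and visual speech signals, and the decomposition separates what each modality contributes alone from what they contribute redundantly or only in combination.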
Date/Time: Wednesday, July 27, 2022, 4:00 PM