
Kumar, G Vinodh and Halder, Tamesh and Jaiswal, Amit K and Mukherjee, Abhishek and Roy, Dipanjan and Banerjee, Arpan (2016) Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study. Frontiers in Psychology, 7. p. 1598.


Abstract

Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (the McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) in multisensory speech perception. However, whether and how networks spanning the whole brain participate in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusion of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception within a temporal window of 300-600 ms following stimulus onset. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower-frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags.
Thus, our study indicates that the temporal integration underlying multisensory speech perception needs to be understood in the framework of large-scale functional brain network mechanisms, in addition to the established cortical loci of multisensory speech perception.
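The abstract describes aggregating pairwise sensor coherence into a single global measure per frequency. As a rough illustration of that idea (not the authors' exact estimator, whose time-frequency windowing and aggregation details are given in the paper), the sketch below averages magnitude-squared coherence over all sensor pairs using SciPy; the sampling rate and window length are arbitrary placeholders.

```python
import numpy as np
from scipy.signal import coherence

def global_coherence(eeg, fs=250.0, nperseg=128):
    """Aggregate pairwise magnitude-squared coherence over all sensor
    pairs into one global spectrum.

    eeg : array, shape (n_sensors, n_samples)
    Returns (frequencies, mean pairwise coherence per frequency).
    Illustrative only; the paper's time-frequency estimator differs
    in detail (e.g., it tracks coherence changes over time).
    """
    n_sensors = eeg.shape[0]
    total, n_pairs, freqs = None, 0, None
    for i in range(n_sensors):
        for j in range(i + 1, n_sensors):
            freqs, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=nperseg)
            total = cxy if total is None else total + cxy
            n_pairs += 1
    return freqs, total / n_pairs

# Toy check: four noisy channels sharing a common 10 Hz component
# should show elevated global coherence near 10 Hz.
rng = np.random.default_rng(0)
t = np.arange(2000) / 250.0
common = np.sin(2 * np.pi * 10 * t)
eeg = common + 0.5 * rng.standard_normal((4, t.size))
f, gc = global_coherence(eeg)
```

A band-limited version of this (e.g., averaging `gc` over gamma-band bins, trial by trial) is the kind of quantity the study compares between cross-modal and unisensory percepts.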

Item Type: Article
Subjects: Neurodegenerative Disorders
Neuro-Oncological Disorders
Neurocognitive Processes
Neuronal Development and Regeneration
Informatics and Imaging
Genetics and Molecular Biology
Depositing User: Dr. D.D. Lal
Date Deposited: 05 May 2017 08:37
Last Modified: 13 Dec 2021 05:56
URI: http://nbrc.sciencecentral.in/id/eprint/81
