LEADER 06375oam 2200661 450
001 9910131532203321
005 20221206181054.0
035 $a(CKB)3710000000504555
035 $a(SSID)ssj0001664972
035 $a(PQKBManifestationID)16454261
035 $a(PQKBTitleCode)TC0001664972
035 $a(PQKBWorkID)14999642
035 $a(PQKB)11704633
035 $a(WaSeSS)IndRDA00055884
035 $a(oapen)https://directory.doabooks.org/handle/20.500.12854/41553
035 $a(EXLCZ)993710000000504555
100 $a20160829d2014 fy 0
101 0 $aeng
135 $aurmn#---|||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aAudiovisual speech recognition$b[electronic resource] $ecorrespondence between brain and behavior /$ftopic editor Nicholas Altieri
210 $cFrontiers Media SA$d2014
210 1$aLausanne, Switzerland :$cFrontiers Media SA,$d2014.
210 4$d©2014
215 $a1 online resource (101 pages) $cillustrations, charts; digital, PDF file(s)
225 0 $aFrontiers Research Topics
300 $aBibliographic Level Mode of Issuance: Monograph
300 $aPublished in Frontiers in Psychology.
311 $a2-88919-251-2
320 $aIncludes bibliographical references.
327 $aAudiovisual Integration: An Introduction to Behavioral and Neuro-Cognitive Methods / Nicholas Altieri -- Speech Through Ears and Eyes: Interfacing the Senses With the Supramodal Brain / Virginie van Wassenhove -- Neural Dynamics of Audiovisual Speech Integration Under Variable Listening Conditions: An Individual Participant Analysis / Nicholas Altieri and Michael J. Wenger -- Gated Audiovisual Speech Identification in Silence vs. Noise: Effects on Time and Accuracy / Shahram Moradi, Björn Lidestam and Jerker Rönnberg -- Susceptibility to a Multisensory Speech Illusion in Older Persons is Driven by Perceptual Processes / Annalisa Setti, Kate E. Burke, Rose Anne Kenny and Fiona N. Newell -- How Can Audiovisual Pathways Enhance the Temporal Resolution of Time-Compressed Speech in Blind Subjects? / Ingo Hertrich, Susanne Dietrich and Hermann Ackermann -- Audio-Visual Onset Differences are used to Determine Syllable Identity for Ambiguous Audio-Visual Stimulus Pairs / Sanne ten Oever, Alexander T. Sack, Katherine L. Wheat, Nina Bien and Nienke van Atteveldt -- Brain Responses and Looking Behavior During Audiovisual Speech Integration in Infants Predict Auditory Speech Comprehension in the Second Year of Life / Elena V. Kushnerenko, Przemyslaw Tomalski, Haiko Ballieux, Anita Potton, Deidre Birtles, Caroline Frostick and Derek G. Moore -- Multisensory Integration, Learning, and the Predictive Coding Hypothesis / Nicholas Altieri -- The Interaction Between Stimulus Factors and Cognitive Factors During Multisensory Integration of Audiovisual Speech / Ryan A. Stevenson, Mark T. Wallace and Nicholas Altieri -- Caregiver Influence on Looking Behavior and Brain Responses in Prelinguistic Development / Heather L. Ramsdell-Hudock.
330 $aPerceptual processes mediating recognition, including the recognition of objects and spoken words, are inherently multisensory. This is true despite the fact that sensory inputs are segregated in the early stages of neuro-sensory encoding. In face-to-face communication, for example, auditory information is processed in the cochlea, encoded in the auditory sensory nerve, and processed in lower cortical areas. Eventually, these "sounds" are processed in higher cortical pathways such as the auditory cortex, where they are perceived as speech. Likewise, visual information obtained from observing a talker's articulators is encoded in lower visual pathways. Subsequently, this information undergoes processing in the visual cortex prior to the extraction of articulatory gestures in higher cortical areas associated with speech and language. As language perception unfolds, information garnered from the visual articulators interacts with language processing in multiple brain regions, via visual projections to auditory, language, and multisensory brain regions. The association of auditory and visual speech signals makes the speech signal a highly "configural" percept. An important direction for the field is thus to provide ways to measure the extent to which visual speech information influences auditory processing, and likewise to assess how the unisensory components of the signal combine to form a configural/integrated percept. Numerous behavioral measures, such as accuracy (e.g., percent correct, susceptibility to the "McGurk Effect") and reaction time (RT), have been employed to assess multisensory integration ability in speech perception. Neural measures such as fMRI, EEG, and MEG, in turn, have been employed to examine the locus and/or time course of integration. The purpose of this Research Topic is to find converging behavioral and neural assessments of audiovisual integration in speech perception. A further aim is to investigate speech recognition ability in normal-hearing, hearing-impaired, and aging populations. As such, the purpose is to obtain neural measures from EEG as well as fMRI that shed light on the neural bases of multisensory processes, while connecting them to model-based measures of reaction time and accuracy in the behavioral domain. In doing so, we endeavor to gain a more thorough description of the neural bases and mechanisms underlying integration in higher-order processes such as speech and language recognition.
606 $aCognitive science
606 $aPsychology
606 $aPsychology$2HILCC
606 $aSocial Sciences$2HILCC
610 $aModels of Integration
610 $aAudiovisual speech and aging
610 $aIntegration Efficiency
610 $aMultisensory language development
610 $aVisual prediction
610 $aAudiovisual integration
610 $aImaging
615 0$aCognitive science.
615 0$aPsychology.
615 7$aPsychology
615 7$aSocial Sciences
676 $a153.6
700 $aNicholas Altieri$4auth$01370391
702 $aAltieri$bNicholas
801 0$bPQKB
801 2$bUkMaJRU
906 $aBOOK
912 $a9910131532203321
996 $aAudiovisual speech recognition$93398710
997 $aUNINA