LEADER 04146nam 2200469z- 450
001 9910227347203321
005 20240424230127.0
035 $a(CKB)4100000000883861
035 $a(oapen)https://directory.doabooks.org/handle/20.500.12854/56191
035 $a(EXLCZ)994100000000883861
100 $a20202102d2017 |y 0
101 0 $aeng
135 $aurmn|---annan
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aPhonology in the bilingual and bidialectal lexicon /$ftopic editors, Isabelle Darcy, Indiana University, Bloomington, USA, Annie Tremblay, University of Kansas, USA, Miquel Simonet, University of Arizona, USA
210 $cFrontiers Media SA$d2017
215 $a1 electronic resource (185 p.)
225 1 $aFrontiers Research Topics
311 $a2-88945-210-7
330 $aA conversation between two people can only take place if the words intended by each speaker are successfully recognized. Spoken word recognition is at the heart of language comprehension. This automatic and smooth process remains a challenge for models of spoken word recognition. Both the process of mapping the speech signal onto stored representations for words and the format of the representations themselves are subject to debate. So far, existing research on the nature of spoken word representations has focused mainly on native speakers. The picture becomes even more complex when looking at spoken word recognition in a second language. Given that most of the world's speakers know and use more than one language, it is crucial to reach a more precise understanding of how bilingual and multilingual individuals encode spoken words in the mental lexicon, and why spoken word recognition is more difficult in a second language than in the native language. Current models of native spoken word recognition operate under two assumptions: (i) that listeners' perception of the incoming speech signal is optimal; and (ii) that listeners' lexical representations are accurate. As a result, lexical representations are easily activated, and intended words are successfully recognized. However, these assumptions are compromised when applied to a later-learned second language. For a variety of reasons (e.g., phonetic/phonological, orthographic), second language users may not perceive the speech signal optimally, and they may still be refining the motor routines needed for articulation. Accordingly, their lexical representations may differ from those of native speakers, which may in turn inhibit their selection of the intended word forms. Second language users also face a larger selection challenge: having words in more than one language to choose from. Thus, for second language users, the links between perception, lexical representations, orthography, and production are anything but clear. Even for simultaneous bilinguals, important questions remain about the specificity and interdependence of their lexical representations and the factors influencing cross-language word activation. This Frontiers Research Topic seeks to further our understanding of the factors that determine how multilinguals recognize and encode spoken words in the mental lexicon, with a focus on the mapping between the input and lexical representations, and on the quality of lexical representations.
606 $aMultilingualism$xPsychological aspects
606 $aBilingualism$xPsychological aspects
606 $aLexical phonology
610 $aPhonological knowledge
610 $aSecond-language speech
610 $abilingual and bidialectal lexicon
610 $aspoken word recognition
610 $alexical access
610 $aorthographic knowledge
615 0$aMultilingualism$xPsychological aspects.
615 0$aBilingualism$xPsychological aspects.
615 0$aLexical phonology.
676 $a414.01/9
702 $aDarcy$b Isabelle$f1974-,
702 $aSimonet$b Miquel
702 $aTremblay$b Annie
906 $aBOOK
912 $a9910227347203321
996 $aPhonology in the Bilingual and Bidialectal Lexicon$93030678
997 $aUNINA