LEADER 03843nam 2200649 a 450
001 9910437865503321
005 20200520144314.0
010 $a1-283-90825-5
010 $a3-642-33693-0
024 7 $a10.1007/978-3-642-33693-5
035 $a(CKB)2670000000279902
035 $a(EBL)1082713
035 $a(OCoLC)820022411
035 $a(SSID)ssj0000798938
035 $a(PQKBManifestationID)11436998
035 $a(PQKBTitleCode)TC0000798938
035 $a(PQKBWorkID)10759703
035 $a(PQKB)11172973
035 $a(DE-He213)978-3-642-33693-5
035 $a(MiAaPQ)EBC1082713
035 $a(PPN)168325187
035 $a(EXLCZ)992670000000279902
100 $a20121126d2013 uy 0
101 0 $aeng
135 $aur|n|---|||||
181 $ctxt
182 $cc
183 $acr
200 14 $aThe Naive Bayes model for unsupervised word sense disambiguation $easpects concerning feature selection /$fFlorentina T. Hristea
205 $a1st ed. 2013.
210 $aHeidelberg $cSpringer$d2013
215 $a1 online resource (78 p.)
225 0 $aSpringerBriefs in statistics,$x2191-544X
300 $aDescription based upon print version of record.
311 $a3-642-33692-2
320 $aIncludes bibliographical references and index.
327 $a1. Preliminaries -- 2. The Naïve Bayes Model in the Context of Word Sense Disambiguation -- 3. Semantic WordNet-based Feature Selection -- 4. Syntactic Dependency-based Feature Selection -- 5. N-Gram Features for Unsupervised WSD with an Underlying Naïve Bayes Model -- References -- Index.
330 $aThis book presents recent advances (from 2008 to 2012) concerning use of the Naïve Bayes model in unsupervised word sense disambiguation (WSD). While WSD in general has a number of important applications in various fields of artificial intelligence (information retrieval, text processing, machine translation, message understanding, man-machine communication, etc.), unsupervised WSD is considered important because it is language-independent and does not require previously annotated corpora. The Naïve Bayes model has been widely used in supervised WSD, but its use in unsupervised WSD has been less frequent and has led to more modest disambiguation results. The potential of this statistical model with respect to unsupervised WSD remains insufficiently explored. The present book contends that the Naïve Bayes model needs to be fed knowledge in order to perform well as a clustering technique for unsupervised WSD, and it examines three entirely different sources of such knowledge for feature selection: WordNet, dependency relations and web N-grams. WSD with an underlying Naïve Bayes model is ultimately positioned on the border between unsupervised and knowledge-based techniques. The benefits of feeding knowledge (of various natures) to a knowledge-lean algorithm for unsupervised WSD that uses the Naïve Bayes model as a clustering technique are clearly highlighted. The discussion shows that the Naïve Bayes model still holds promise for the open problem of unsupervised WSD.
410 0 $aSpringerBriefs in Statistics,$x2191-544X
606 $aSemantics$xData processing
606 $aAmbiguity
606 $aNatural language processing (Computer science)
606 $aComputational linguistics
615 0 $aSemantics$xData processing.
615 0 $aAmbiguity.
615 0 $aNatural language processing (Computer science)
615 0 $aComputational linguistics.
676 $a006.3/5
676 $a401.430285
700 $aHristea$b Florentina T.$01756373
801 0 $bMiAaPQ
801 1 $bMiAaPQ
801 2 $bMiAaPQ
906 $aBOOK
912 $a9910437865503321
996 $aThe Naive Bayes model for unsupervised word sense disambiguation$94193615
997 $aUNINA