LEADER 05904nam 22006615 450
001 996465899103316
005 20200704044931.0
010 $a3-540-44645-1
024 7 $a10.1007/3-540-44645-1
035 $a(CKB)1000000000211420
035 $a(SSID)ssj0000322389
035 $a(PQKBManifestationID)11234013
035 $a(PQKBTitleCode)TC0000322389
035 $a(PQKBWorkID)10287570
035 $a(PQKB)11608127
035 $a(DE-He213)978-3-540-44645-3
035 $a(MiAaPQ)EBC3072409
035 $a(PPN)15519738X
035 $a(EXLCZ)991000000000211420
100 $a20121227d2001 u| 0
101 0 $aeng
135 $aurnn|008mamaa
181 $ctxt
182 $cc
183 $acr
200 10$aCross-Language Information Retrieval and Evaluation$b[electronic resource] $eWorkshop of Cross-Language Evaluation Forum, CLEF 2000, Lisbon, Portugal, September 21-22, 2000, Revised Papers /$fby Carol Peters
205 $a1st ed. 2001.
210 1$aBerlin, Heidelberg :$cSpringer Berlin Heidelberg :$cImprint: Springer,$d2001.
215 $a1 online resource (X, 394 p.)
225 1 $aLecture Notes in Computer Science,$x0302-9743 ;$v2069
300 $aBibliographic Level Mode of Issuance: Monograph
311 $a3-540-42446-6
320 $aIncludes bibliographical references and index.
327 $aEvaluation for CLIR Systems -- CLIR Evaluation at TREC -- NTCIR Workshop: Japanese- and Chinese-English Cross-Lingual Information Retrieval and Multi-grade Relevance Judgments -- Language Resources in Cross-Language Text Retrieval: A CLEF Perspective -- The Domain-Specific Task of CLEF - Specific Evaluation Strategies in Cross-Language Information Retrieval -- Evaluating Interactive Cross-Language Information Retrieval: Document Selection -- New Challenges for Cross-Language Information Retrieval: Multimedia Data and the User Experience -- Research to Improve Cross-Language Retrieval - Position Paper for CLEF -- The CLEF-2000 Experiments -- CLEF 2000 - Overview of Results -- Translation Resources, Merging Strategies, and Relevance Feedback for Cross-Language Information Retrieval -- Cross-Language Retrieval for the CLEF Collections - Comparing Multiple Methods of Retrieval -- A Language-Independent Approach to European Text Retrieval -- Experiments with the Eurospider Retrieval System for CLEF 2000 -- A Poor Man's Approach to CLEF -- Ambiguity Problem in Multilingual Information Retrieval -- The Use of NLP Techniques in CLIR -- CLEF Experiments at Maryland: Statistical Stemming and Backoff Translation -- Multilingual Information Retrieval Based on Parallel Texts from the Web -- Mercure at CLEF-1 -- Bilingual Tests with Swedish, Finnish, and German Queries: Dealing with Morphology, Compound Words, and Query Structure -- A Simple Approach to the Spanish-English Bilingual Retrieval Task -- Cross-Language Information Retrieval Using Dutch Query Translation -- Bilingual Information Retrieval with HyREX and Internet Translation Services -- Sheffield University CLEF 2000 Submission - Bilingual Track: German to English -- West Group at CLEF 2000: Non-English Monolingual Retrieval -- ITC-irst at CLEF 2000: Italian Monolingual Track -- Automatic Language-Specific Stemming in Information Retrieval.
330 $aThe first evaluation campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2000. The campaign culminated in a two-day workshop in Lisbon, Portugal, 21-22 September, immediately following the fourth European Conference on Digital Libraries (ECDL 2000). The first day of the workshop was open to anyone interested in the area of Cross-Language Information Retrieval (CLIR) and addressed the topic of CLIR system evaluation. The goal was to identify the actual contribution of evaluation to system development and to determine what could be done in the future to stimulate progress. The second day was restricted to participants in the CLEF 2000 evaluation campaign and to their experiments. This volume constitutes the proceedings of the workshop and provides a record of the campaign. CLEF is currently an activity of the DELOS Network of Excellence for Digital Libraries, funded by the EC Information Society Technologies programme to further research in digital library technologies. The activity is organized in collaboration with the US National Institute of Standards and Technology (NIST). The support of DELOS and NIST in the running of the evaluation campaign is gratefully acknowledged. I should also like to thank the other members of the Workshop Steering Committee for their assistance in the organization of this event.
410 0$aLecture Notes in Computer Science,$x0302-9743 ;$v2069
606 $aData structures (Computer science)
606 $aInformation storage and retrieval
606 $aArtificial intelligence
606 $aData Structures and Information Theory$3https://scigraph.springernature.com/ontologies/product-market-codes/I15009
606 $aInformation Storage and Retrieval$3https://scigraph.springernature.com/ontologies/product-market-codes/I18032
606 $aArtificial Intelligence$3https://scigraph.springernature.com/ontologies/product-market-codes/I21000
615 0$aData structures (Computer science).
615 0$aInformation storage and retrieval.
615 0$aArtificial intelligence.
615 14$aData Structures and Information Theory.
615 24$aInformation Storage and Retrieval.
615 24$aArtificial Intelligence.
676 $a025.04
700 $aPeters$b Carol$4aut$4http://id.loc.gov/vocabulary/relators/aut
701 $aPeters$b C$g(Carol)$0348398
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a996465899103316
996 $aCross-Language Information Retrieval and Evaluation$92860825
997 $aUNISA