LEADER 06344nam 22008535 450 001 9910483817103321 005 20250723063237.0 010 $a1-280-38935-4 010 $a9786613567277 010 $a3-642-15998-2 024 7 $a10.1007/978-3-642-15998-5 035 $a(CKB)2670000000045131 035 $a(SSID)ssj0000446648 035 $a(PQKBManifestationID)11291502 035 $a(PQKBTitleCode)TC0000446648 035 $a(PQKBWorkID)10496234 035 $a(PQKB)11685042 035 $a(DE-He213)978-3-642-15998-5 035 $a(MiAaPQ)EBC3065836 035 $a(PPN)149024584 035 $a(EXLCZ)992670000000045131 100 $a20100910d2010 u| 0 101 0 $aeng 135 $aurnn|008mamaa 181 $ctxt 182 $cc 183 $acr 200 10$aMultilingual and Multimodal Information Access Evaluation $eInternational Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010, Proceedings /$fedited by Maristella Agosti, Nicola Ferro, Carol Peters, Maarten de Rijke, Alan Smeaton 205 $a1st ed. 2010. 210 1$aBerlin, Heidelberg :$cSpringer Berlin Heidelberg :$cImprint: Springer,$d2010. 215 $a1 online resource (XIII, 145 p. 21 illus.) 225 1 $aInformation Systems and Applications, incl. Internet/Web, and HCI,$x2946-1642 ;$v6360 300 $aInternational conference proceedings. 311 08$a3-642-15997-4 320 $aIncludes bibliographical references and author index. 327 $aKeynote Addresses -- IR between Science and Engineering, and the Role of Experimentation -- Retrieval Evaluation in Practice -- Resources, Tools, and Methods -- A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages -- A New Approach for Cross-Language Plagiarism Analysis -- Creating a Persian-English Comparable Corpus -- Experimental Collections and Datasets (1) -- Validating Query Simulators: An Experiment Using Commercial Searches and Purchases -- Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation -- Experimental Collections and Datasets (2) -- MapReduce for Information Retrieval Evaluation: "Let's Quickly Test This on 12 TB of Data" -- Which Log for Which Information? 
Gathering Multilingual Data from Different Log File Types -- Evaluation Methodologies and Metrics (1) -- Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements -- On the Evaluation of Entity Profiles -- Evaluation Methodologies and Metrics (2) -- Evaluating Information Extraction -- Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation -- Automated Component-Level Evaluation: Present and Future -- Panels -- The Four Ladies of Experimental Evaluation -- A PROMISE for Experimental Evaluation. 330 $aIn its first ten years of activities (2000-2009), the Cross-Language Evaluation Forum (CLEF) played a leading role in stimulating investigation and research in a wide range of key areas in the information retrieval domain, such as cross-language question answering, image and geographic information retrieval, interactive retrieval, and many more. It also promoted the study and implementation of appropriate evaluation methodologies for these diverse types of tasks and media. As a result, CLEF has been extremely successful in building a wide, strong, and multidisciplinary research community, which covers and spans the different areas of expertise needed to deal with the spread of CLEF tracks and tasks. This constantly growing and almost completely voluntary community has dedicated an incredible amount of effort to making CLEF happen and is at the core of the CLEF achievements. CLEF 2010 represented a radical innovation of the "classic CLEF" format and an experiment aimed at understanding how "next generation" evaluation campaigns might be structured. We had to face the problem of how to innovate CLEF while still preserving its traditional core business, namely the benchmarking activities carried out in the various tracks and tasks. 
The consensus, after lively and community-wide discussions, was to make CLEF an independent four-day event, no longer organized in conjunction with the European Conference on Research and Advanced Technology for Digital Libraries (ECDL), where CLEF had been running as a two-and-a-half-day workshop. CLEF 2010 thus consisted of two main parts: a peer-reviewed conference (the first two days) and a series of laboratories and workshops (the second two days). 410 0$aInformation Systems and Applications, incl. Internet/Web, and HCI,$x2946-1642 ;$v6360 606 $aNatural language processing (Computer science) 606 $aUser interfaces (Computer systems) 606 $aHuman-computer interaction 606 $aInformation storage and retrieval systems 606 $aData mining 606 $aApplication software 606 $aComputational linguistics 606 $aNatural Language Processing (NLP) 606 $aUser Interfaces and Human Computer Interaction 606 $aInformation Storage and Retrieval 606 $aData Mining and Knowledge Discovery 606 $aComputer and Information Systems Applications 606 $aComputational Linguistics 615 0$aNatural language processing (Computer science) 615 0$aUser interfaces (Computer systems) 615 0$aHuman-computer interaction. 615 0$aInformation storage and retrieval systems. 615 0$aData mining. 615 0$aApplication software. 615 0$aComputational linguistics. 615 14$aNatural Language Processing (NLP). 615 24$aUser Interfaces and Human Computer Interaction. 615 24$aInformation Storage and Retrieval. 615 24$aData Mining and Knowledge Discovery. 615 24$aComputer and Information Systems Applications. 615 24$aComputational Linguistics. 676 $a025.04 701 $aAgosti$b Maristella$0311943 712 12$aInternational Conference of the Cross-Language Evaluation Forum 801 0$bMiAaPQ 801 1$bMiAaPQ 801 2$bMiAaPQ 906 $aBOOK 912 $a9910483817103321 996 $aMultilingual and multimodal information access evaluation$94200510 997 $aUNINA