
Record No.

UNISA996465899203316

Title

Multilingual and Multimodal Information Access Evaluation [electronic resource] : Second International Conference of the Cross-Language Evaluation Forum, CLEF 2011, Amsterdam, The Netherlands, September 19-22, 2011, Proceedings / edited by Pamela Forner, Julio Gonzalo, Jaana Kekäläinen, Mounia Lalmas, Maarten de Rijke

Publication/distribution/printing

Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint : Springer, 2011

ISBN

3-642-23708-8

Edition

[1st ed. 2011.]

Physical description

1 online resource (X, 143 p.)

Series

Information Systems and Applications, incl. Internet/Web, and HCI ; 6941

Discipline

025.04

Subjects

Information storage and retrieval

Natural language processing (Computer science)

User interfaces (Computer systems)

Data mining

Application software

Computational linguistics

Information Storage and Retrieval

Natural Language Processing (NLP)

User Interfaces and Human Computer Interaction

Data Mining and Knowledge Discovery

Information Systems Applications (incl. Internet)

Computational Linguistics

Publication language

English

Format

Printed material

Bibliographic level

Monograph

General notes

Bibliographic Level Mode of Issuance: Monograph

Bibliography note

Includes bibliographical references and index.

Summary/abstract

This book constitutes the refereed proceedings of the Second International Conference on Multilingual and Multimodal Information Access Evaluation, CLEF 2011, held in Amsterdam, The Netherlands, in September 2011, continuing the popular CLEF campaigns and workshops that ran over the preceding decade. The 14 revised full papers presented together with 2 keynote talks were carefully reviewed and selected from numerous submissions. The papers accepted for the conference included research on evaluation methods and settings, natural language processing within different domains and languages, multimedia, and reflections on CLEF. Two keynote speakers highlighted important developments in the field of evaluation: the role of users in evaluation, and a framework for the use of crowdsourcing experiments in the setting of retrieval evaluation.