Title: Information Access Evaluation -- Multilinguality, Multimodality, and Interaction [electronic resource] : 5th International Conference of the CLEF Initiative, CLEF 2014, Sheffield, UK, September 15-18, 2014, Proceedings / edited by Evangelos Kanoulas, Mihai Lupu, Paul Clough, Mark Sanderson, Mark Hall, Allan Hanbury, Elaine Toms
Edition: 1st ed. 2014
Published: Cham : Springer International Publishing : Imprint: Springer, 2014
Description: 1 online resource (XVIII, 324 p. 64 illus.)
Series: Information Systems and Applications, incl. Internet/Web, and HCI ; 8685
Language: English
Bibliographic Level Mode of Issuance: Monograph
ISBN: 3-319-11382-8 (electronic); 3-319-11381-X (print)
DOI: 10.1007/978-3-319-11382-1
Other identifiers: (CKB)3710000000227404; (SSID)ssj0001338737; (PQKBManifestationID)11994248; (PQKBTitleCode)TC0001338737; (PQKBWorkID)11356217; (PQKB)11105085; (DE-He213)978-3-319-11382-1; (MiAaPQ)EBC6298435; (MiAaPQ)EBC5596147; (Au-PeEL)EBL5596147; (OCoLC)889715640; (PPN)180626574; (EXLCZ)993710000000227404

Contents:
Intro -- Preface -- Organization -- Keynote Presentations -- Table of Contents
Evaluation
Making Test Corpora for Question Answering More Representative -- 1 Introduction -- 2 Corpora Sources -- 3 Analysis and Comparison of Question Corpora -- 4 Constructing Evaluation Corpora -- 4.1 Extending QALD to Improve Representativeness -- 4.2 Building a New Evaluation Corpus -- 5 Conclusions and Future Work
Towards Automatic Evaluation of Health-Related CQA Data -- 1 Introduction -- 2 Related Work -- 3 Resources and Data -- 3.1 Disease and Medicine Dictionaries -- 3.2 Otvety@Mail.Ru -- 4 Experiment -- 4.1 Data Preparation -- 4.2 Manual Evaluation -- 4.3 Automatic Matching -- 5 Discussion -- 5.1 Quality of Manual Assessment -- 5.2 Inconsistency of Automatic vs. Manual Labels -- 5.3 Analysis of User Opinions -- 6 Conclusion
Rethinking How to Extend Average Precision to Graded Relevance -- 1 Introduction -- 2 Mapping Binary Measures into Multi-graded Ones -- 3 Graded Average Precision -- 4 Extended Graded Average Precision (xGAP) -- 5 Expected Graded Average Precision (eGAP) -- 6 Evaluation -- 7 Conclusions and Future Work
CLEF 15th Birthday: What Can We Learn From Ad Hoc Retrieval? -- 1 Motivations and Approach -- 2 Research Questions -- 3 Experimental Analysis -- 4 Future Works
An Information Retrieval Ontology for Information Retrieval Nanopublications -- 1 Motivation -- 2 Ontology Description -- 3 Nanopublications in IR -- 4 Future Work
Supporting More-Like-This Information Needs: Finding Similar Web Content in Different Scenarios -- 1 Introduction -- 2 Related Work -- 3 Approach -- 3.1 Link Crawling -- 3.2 Search Engine Related-Operator -- 3.3 Keyqueries -- 4 Evaluation -- 4.1 Individual Classification Results -- 4.2 Comparison and Hypotheses' Validity -- 4.3 Further Observations: Overlap and Efficiency -- 5 Conclusion and Outlook
Domain-Specific Approaches
SCAN: A Swedish Clinical Abbreviation Normalizer -- 1 Introduction -- 1.1 Related Work -- 1.2 Aim and Objective -- 2 Method -- 2.1 Data and Content Analysis -- 2.3 SCAN: Iterative Development -- 2.4 Evaluation: System Results, Expansion Coverage and Lexicon Creation -- 3 Results -- 3.1 Content Analysis and Characterization -- 3.2 Error Analysis of SCAN 1.0 and Development of SCAN 2.0 -- 3.3 Abbreviation Identification -- 3.4 Abbreviation Expansion Coverage Analysis and Lexicons -- 4 Analysis and Discussion -- 4.1 Limitations and Future Work -- 4.2 Significance of Study -- 5 Conclusions
A Study of Personalised Medical Literature Search -- 1 Introduction -- 2 Related Work -- 3 Personalisation Approaches -- 3.1 P-Click -- 3.2 G-Click -- 3.3 Medical Interest Profiling -- 4 Experimental Setup -- 5 Results -- 5.1 Evaluating Using Click-Logs -- 5.2 Evaluating via User-Study -- 6 Conclusions
A Hybrid Approach for Multi-faceted IR in Multimodal Domain -- 1 Introduction -- 2 Related Work -- 3 Model Representation -- 3.1 Traversal Method - Spreading Activation -- 3.2 Hybrid Search -- 4 Experiment Design -- 4.1 Data Collection -- 4.2 Standard Text and Image Search -- 4.3 Graph Search -- 5 Results and Discussion -- 6 Conclusion
Discovering Similar Passages Within Large Text Documents -- 1 Introduction -- 2 Basic Alignment Algorithm -- 3 Discovering Multiple Passages -- 4 Handling Long Documents -- 5 Experiments and Results -- 5.1 Test Corpus -- 5.2 Performance Measures -- 5.3 Experimental Results -- 6 Conclusions
Improving Transcript-Based Video Retrieval Using Unsupervised Language Model Adaptation -- 1 Introduction -- 2 Method -- 3 Results and Discussion -- 4 Conclusions
Alternative Search Tasks
Self-supervised Relation Extraction Using UMLS -- 1 Introduction -- 2 Related Work -- 3 Unified Medical Language System -- 4 Generation of Annotated Corpus -- 5 Relation Classifier -- 6 Data Analysis -- 7 Evaluation Methods -- 7.1 Held-Out -- 7.2 Manual Evaluation -- 8 Conclusion
Authorship Identification Using Dynamic Selection of Features from Probabilistic Feature Set -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Probabilistic Feature Set -- 3.2 Distance Measure -- 3.3 Authorship Verification Using a KNN-Based Approach -- 3.4 Dynamic Feature Selection -- 4 Experiments -- 4.1 Dataset -- 4.2 Experimental Setup -- 4.3 Results and Discussion -- 5 Conclusions and Future Work -- References
A Real-World Framework for Translator as Expert Retrieval -- 1 Introduction -- 2 Translator Recommendation -- 2.1 Use-Case -- 2.2 The Platform -- 3 Methods and Related Work -- 3.1 Aggregation Functions -- 3.2 Learning to Rank -- 3.3 Evaluation -- 4 Experimental Results -- 4.1 Aggregation Functions -- 4.2 Learning to Rank -- 5 Conclusion and Future Work
Comparing Algorithms for Microblog Summarisation -- 1 Introduction -- 2 Summarisation Algorithms -- 3 Experimental Setup -- 4 Results -- 5 Conclusions
The Effect of Dimensionality Reduction on Large Scale Hierarchical Classification -- 1 Introduction -- 2 Cascade Classification with PCA -- 3 Experimental Results -- 3.1 Experimental Set-Up -- 3.2 Feature Selection Results -- 3.3 Results on the Dry-Run Dataset -- 3.4 Results on the Large Dataset -- 4 Conclusion -- References
CLEF Lab Overviews
Overview of the ShARe/CLEF eHealth Evaluation Lab 2014 -- 1 Introduction -- 2 Materials and Methods -- 2.1 Text Documents -- 2.2 Human Annotations, Queries, and Relevance Assessments -- 2.3 Evaluation Methods -- 3 Results -- 4 Conclusions
ImageCLEF 2014: Overview and Analysis of the Results -- 1 Introduction -- 2 ImageCLEF 2014: The Tasks, The Data and Participation -- 2.1 Domain Adaptation Task -- 2.2 Scalable Concept Image Annotation Task -- 2.3 Liver CT Image Annotation Task -- 2.4 Robot Vision Task -- 3 Conclusions
Overview of INEX 2014 -- 1 Introduction -- 2 Interactive Social Book Search Track -- 2.1 Aims and Tasks -- 2.2 Experimental Setup -- 2.3 Results -- 2.4 Outlook -- 3 Social Book Search Track -- 3.1 Aims and Tasks -- 3.2 Test Collections -- 3.3 Results -- 3.4 Outlook -- 4 Tweet Contextualization Track -- 4.1 Aims and Tasks -- 4.2 Test Collection -- 4.3 Evaluation -- 4.4 Results -- 4.5 Outlook -- 5 Envoi
LifeCLEF 2014: Multimedia Life Species Identification Challenges -- 1 LifeCLEF Lab Overview -- 1.1 Motivations -- 1.2 Evaluated Tasks -- 1.3 Main Contributions -- 2 Task1: PlantCLEF -- 2.1 Context -- 2.2 Dataset -- 2.3 Task Description -- 2.4 Participants and Results -- 3 Task2: BirdCLEF -- 3.1 Context -- 3.2 Dataset -- 3.3 Task Description -- 3.4 Participants and Results -- 4 Task3: FishCLEF -- 4.1 Context -- 4.2 Dataset -- 4.3 Task Description -- 4.4 Participants and Results -- 5 Conclusions and Perspectives
Benchmarking News Recommendations in a Living Lab -- 1 Introduction -- 2 Evaluation of Information Access Systems -- 3 Real-Time News Recommendation -- 4 Living Lab Scenario -- 4.1 Online News Recommendation: The Plista Use Case -- 4.2 Publishers and Users -- 4.3 Infrastructure -- 5 Evaluation Scenarios -- 5.1 The News Recommendation Challenge 2013 -- 5.2 CLEF NEWSREEL 2014 -- 5.3 Discussion -- 6 Conclusion
Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Identification, and Author Profiling -- 1 Introduction -- 1.1 Contrasting Shared Tasks by Submission Type -- 1.2 Related Work -- 1.3 Contributions -- 2 TIRA: A Web Service for Shared Tasks -- 2.1 Software Submissions: Who is Responsible for their Successful Execution? -- 2.2 Life of a Participant -- 2.3 Life of an Organizer -- 3 Plagiarism Detection -- 3.1 Related Work -- 3.2 Source Retrieval -- 3.3 Text Alignment -- 4 Author Identification -- 4.1 Related Work -- 4.2 Evaluation Setup -- 4.3 Evaluation Corpus -- 4.4 Performance Measures -- 4.5 Evaluation Results -- 5 Author Profiling -- 5.1 Related Work -- 5.2 Evaluation Corpora -- 5.3 Evaluation Results -- 6 Conclusion and Outlook
Overview of CLEF Question Answering Track 2014 -- 1 Introduction -- 2 Tasks -- 2.1 QALD: Question Answering over Linked Data -- 2.2 Task QALD-4.1: Multilingual Question Answering -- 2.3 Task QALD-4.2: Biomedical Question Answering over Interlinked Data -- 2.4 Task QALD-4.3: Hybrid Question Answering -- 2.5 BioASQ: Biomedical Semantic Indexing and Question Answering -- 2.6 Task BioASQ 1: Large-Scale Semantic Indexing -- 2.7 Task BioASQ 2: Biomedical Semantic Question Answering -- 2.8 Entrance Exams Task -- 3 Participation -- 4 Main Conclusions -- References
Overview of RepLab 2014: Author Profiling and Reputation Dimensions for Online Reputation Management -- 1 Introduction -- 2 Tasks Definition -- 2.1 Reputation Dimensions Classification -- 2.2 Author Profiling -- 3 Data Sets -- 3.1 Reputation Dimensions Classification Data Set -- 3.2 Author Profiling Data Set -- 3.3 Shared PAN-RepLab Author Profiling Data Set -- 4 Evaluation Methodology -- 4.1 Baselines -- 4.2 Evaluation Measures -- 5 Participation -- 6 Evaluation Results -- 6.1 Reputation Dimensions Classification -- 6.2 Author Categorisation -- 6.3 Author Ranking -- 7 Conclusions
Author Index

Summary: This book constitutes the refereed proceedings of the 5th International Conference of the CLEF Initiative, CLEF 2014, held in Sheffield, UK, in September 2014. The 11 full papers and 5 short papers presented were carefully reviewed and selected from 30 submissions. They cover a broad range of issues in the fields of multilingual and multimodal information access evaluation. Also included is a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems.

Subjects: Natural language processing (Computer science); Artificial intelligence; Information storage and retrieval; Application software; User interfaces (Computer systems); Computational linguistics
Springer subject classification: Natural Language Processing (NLP) -- https://scigraph.springernature.com/ontologies/product-market-codes/I21040; Artificial Intelligence -- https://scigraph.springernature.com/ontologies/product-market-codes/I21000; Information Storage and Retrieval -- https://scigraph.springernature.com/ontologies/product-market-codes/I18032; Information Systems Applications (incl. Internet) -- https://scigraph.springernature.com/ontologies/product-market-codes/I18040; User Interfaces and Human Computer Interaction -- https://scigraph.springernature.com/ontologies/product-market-codes/I18067; Computational Linguistics -- https://scigraph.springernature.com/ontologies/product-market-codes/N22000
Dewey classification: 006.35
Editors (relator: edt, http://id.loc.gov/vocabulary/relators/edt): Kanoulas, Evangelos; Lupu, Mihai; Clough, Paul; Sanderson, Mark; Hall, Mark; Hanbury, Allan; Toms, Elaine
Cataloging source: MiAaPQ
Record type: BOOK
Record IDs: 996202529003316; 2587632 (UNISA)