Call for Labs Participation
CLEF 2015: Conference and Labs of the Evaluation Forum
Experimental IR meets Multilinguality, Multimodality and Interaction
8-11 September 2015, Toulouse, France
The CLEF Initiative (Conference and Labs of the Evaluation Forum, formerly known as Cross-Language Evaluation Forum) is a self-organized body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual and multimodal information with various levels of structure.
CLEF 2015 is the sixth CLEF conference, continuing the popular CLEF campaigns which have run since 2000, contributing to the systematic evaluation of information access systems, primarily through experimentation on shared tasks. CLEF 2015 consists of an independent conference and a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems.
Each lab focuses on a particular sub-problem or variant of the retrieval task as described below. Researchers and practitioners from all segments of the information access and related communities are invited to participate, choosing to take part in any or all evaluation labs. Eight labs are offered at CLEF 2015:
Lab details
• CLEFeHealth
CLEFeHealth explores scenarios that aim to ease patients' and nurses' understanding of and access to eHealth information.
The goals of the lab are to develop processing methods and resources in a multilingual setting to enrich difficult-to-understand eHealth texts and to provide valuable documentation.
The lab contains two tasks:
- Task 1 - Information Extraction from Clinical Data
- (a) Clinical speech recognition
- (b) Named entity recognition from clinical narratives in European languages.
NEW for 2015: non-English languages, clinical spoken language
- Task 2 - User-centered Health Information Retrieval
- (a) Monolingual IR (English)
- (b) Multilingual IR (Chinese, Czech, French, German, Portuguese, Romanian)
NEW for 2015: queries, evaluation criteria, CLIR languages
Lab coordination:
- Lorraine Goeuriot (Université Joseph Fourier, FR - lorraine.goeuriot [at] imag.fr),
- Liadh Kelly (Trinity College Dublin, IRL - liadh.kelly [at] scss.tcd.ie)
Lab website: https://sites.google.com/site/clefehealth2015/
• ImageCLEF
In 2015, ImageCLEF will organize four main tasks with a global objective of benchmarking automatic annotation and indexing of images. The tasks tackle different aspects of the annotation problem and are aimed at supporting and promoting cutting-edge research addressing the key challenges in the field:
- Task 1 - Image Annotation: A task aimed at the development of systems for automatic multi-concept image annotation, localization and subsequent sentence description generation.
- Task 2 - Medical Classification: Addresses the problem of labeling and separation of compound figures from biomedical literature.
- Task 3 - Medical Clustering: Addresses the problem of clustering x-ray images of body parts.
- Task 4 - Liver CT Annotation: A step towards automated structured reporting, this task concerns computer-aided annotation of liver CT volumes by filling in a pre-prepared form.
Lab coordination:
- Mauricio Villegas (Universitat Politècnica de València, SP - mauvilsa [at] upv.es)
- Henning Müller (University of Applied Sciences Western Switzerland in Sierre, CH - henning.mueller [at] hevs.ch)
Lab website: http://www.imageclef.org/2015
• LifeCLEF
The LifeCLEF lab continues the image-based plant identification task that has run within ImageCLEF since 2011. However, the LifeCLEF tasks radically enlarge the evaluated challenge towards multimodal data by (i) considering birds and fish in addition to plants, (ii) considering audio and video content in addition to images, and (iii) scaling up the evaluation data to hundreds of thousands of life media records and thousands of living species. LifeCLEF tasks at CLEF 2015:
- Task 1 - BirdCLEF: an audio-based bird identification task based on the Xeno-Canto social network: 500 bird species from Brazil, hundreds of recordists, around 15k recordings.
- Task 2 - PlantCLEF: an image-based plant identification task based on the Tela Botanica social network: 500 plant species from France, hundreds of photographers, around 50k images.
- Task 3 - FishCLEF: a fish video surveillance task based on the Fish4Knowledge network: 30 fish species from Taiwan's coral reefs captured by underwater cameras, 2,000 videos and 2 million images.
Lab coordination:
- Alexis Joly (INRIA Sophia-Antipolis - ZENITH team, Montpellier, FR - alexis.joly [at] inria.fr)
- Henning Müller (University of Applied Sciences Western Switzerland in Sierre, CH - henning.mueller [at] hevs.ch)
Lab website: http://www.imageclef.org/lifeclef/2015
• Living Labs for IR (LL4IR)
The main goal of LL4IR is to provide a benchmarking platform for researchers to evaluate their ranking systems in a live setting with real users in their natural task environments. The lab acts as a proxy between commercial organizations (live environments) and lab participants (experimental systems), facilitates data exchange, and enables comparison between the participating systems.
CLEF 2015 sees the first edition of the lab, which features one task:
- Task 1 – Product search and web search
Lab coordination:
- Krisztian Balog (University of Stavanger, N - krisztian.balog [at] uis.no)
- Liadh Kelly (Trinity College Dublin, IRL - liadh.kelly [at] scss.tcd.ie)
- Anne Schuth (University of Amsterdam, NL - anne.schuth [at] uva.nl).
Lab website: http://living-labs.net/clef-lab/
• News Recommendation Evaluation Lab (NEWSREEL)
CLEF 2015 is the second iteration of this lab. NEWSREEL provides two tasks designed to address the challenge of real-time news recommendation. Participants can: a) develop news recommendation algorithms and b) have them tested by millions of users over a period of a few weeks in a living lab. The following tasks are offered:
- Task 1 – Benchmark News Recommendations in a Living Lab: participants will be given the opportunity to develop news recommendation algorithms and have them tested by potentially millions of users in a living lab environment over a period of one year.
- Task 2 – Benchmarking News Recommendations in a Simulated Environment: simulates a real-time recommendation task using a novel recommender systems reference framework. Participants in the task have to predict users’ clicks on recommended news articles in simulated real time.
Lab coordination:
- Frank Hopfgartner (University of Glasgow, UK - frank.hopfgartner [at] gmail.com)
- Torben Brodt (plista GmbH, Berlin, DE - tb [at] plista.com)
Lab website: http://www.clef-newsreel.org/
• Uncovering Plagiarism, Authorship and Social Software Misuse (PAN)
This is the 12th edition of the PAN lab on uncovering plagiarism, authorship, and social software misuse. PAN offers three tasks at CLEF 2015 with new evaluation resources consisting of large-scale corpora, performance measures, and web services that allow for meaningful evaluations. The main goal is to provide sustainable and reproducible evaluations and to give a clear view of the capabilities of state-of-the-art algorithms. The tasks are:
- Task 1 - Plagiarism Detection: Given a document, is it an original?
- Task 2 - Author Identification: Given a document, who wrote it?
- Task 3 - Author Profiling: Given a document, what are its author's traits (age, gender, personality)?
Lab coordination: pan [at] webis.de
- Martin Potthast (Bauhaus-Universität Weimar, DE),
- Benno Stein (Bauhaus-Universität Weimar, DE),
- Paolo Rosso (Universitat Politècnica de València, SP),
- Efstathios Stamatatos (University of the Aegean, GR).
Lab website: http://pan.webis.de
• Question answering (QA)
In the current general scenario for the CLEF QA Track, the starting point is always a natural language question. However, answering some questions may require querying Linked Data (especially if aggregations or logical inferences are required), whereas others may require textual inference and querying free text; answering some questions may need both. The tasks are:
- Task 1 – QALD: Question Answering over Linked Data;
- Task 2 – Entrance Exams: Questions from reading tests;
- Task 3 – BioASQ: Large-Scale Biomedical Semantic Indexing;
- Task 4 – BioASQ: Biomedical Question answering.
Lab coordination:
- Anselmo Peñas (Universidad Nacional de Educación a Distancia, SP - anselmo [at] lsi.uned.es)
- Georgios Paliouras (NCSR Demokritos, GR - paliourg [at] iit.demokritos.gr)
- Christina Unger (CITEC, Universität Bielefeld, DE - cunger [at] cit-ec.uni-bielefeld.de)
Lab website: http://nlp.uned.es/clef-qa/
• Social Book Search (SBS)
The Social Book Search Lab was previously part of the INEX evaluation benchmark (since 2007). Real-world information needs are generally complex, yet almost all research focuses on either relatively simple search based on queries or recommendation based on profiles. The goal of the Social Book Search Lab is to investigate techniques to support users in complex book search tasks that involve more than just a query and a results list. The SBS tasks for CLEF 2015 are:
- Task 1 - Suggestion Track: a system-oriented task to suggest books based on rich search requests combining several topical and contextual relevance signals, as well as user profiles and real-world relevance judgments.
- Task 2 - Interactive Track: a user-oriented interactive task investigating systems that support users in the multiple stages of complex search tasks.
Lab coordination:
- Jaap Kamps, Marijn Koolen, Hugo Huurdeman (University of Amsterdam, NL - kamps [at] uva.nl, marijn.koolen [at] uva.nl, h.c.huurdeman [at] uva.nl)
- Toine Bogers, Mette Skov (Aalborg University, Copenhagen, DK - toine [at] hum.aau.dk, skov [at] hum.aau.dk)
- Mark Hall (Edge Hill University, Ormskirk, UK - hallmark [at] edgehill.ac.uk)
Lab website: http://social-book-search.humanities.uva.nl/
Lab registration
Participants must register for tasks via the following website:
http://clef2015-labs-registration.dei.unipd.it/
Data
Training and test data are provided by the lab organizers, allowing participating systems to be evaluated and compared in a systematic way.
Workshops
The Lab Workshop sessions will take place within the CLEF 2015 conference at the conference site in Toulouse. Lab coordinators will present summaries of their labs in overview presentations during the plenary scientific paper sessions of the CLEF 2015 conference, allowing non-participants to gain an overview of the motivation, objectives, outcomes and future challenges of each lab. The separate Lab Workshop sessions provide a forum for participants to present their results (including failure analyses and system comparisons), descriptions of the retrieval techniques used, and other issues of interest to researchers in the field. Participating groups will be invited to present their results in a joint poster session.
Publication
All groups participating in each evaluation lab are asked to submit a paper for the CLEF 2015 Working Notes. These will be published in the online CEUR-WS proceedings and on the conference website.
Two separate types of overviews will be produced by the lab organizers, one for the online Working Notes and one for the conference proceedings (published by Springer in their Lecture Notes in Computer Science - LNCS series).
Timeline
The timeline for 2015 Labs is as follows:
- November 3, 2014: Labs Registration opens
- April 30, 2015: Labs Registration closes
- November 3, 2014 – May 15, 2015: Evaluation Campaign
- May 15, 2015: End of the Evaluation Cycle
- June 7, 2015: Submission of Participant Papers
- May 31, 2015 – June 30, 2015: Review process of Participant Papers
- June 30, 2015: Notification of Acceptance of Participant Papers
- July 15, 2015: Camera Ready Copy of Participant Papers
- September 8-11, 2015: CLEF 2015 Conference
Organization
General Chairs:
- Josiane Mothe - IRIT, Université de Toulouse, France
- Jacques Savoy - University of Neuchâtel, Switzerland
Program Chairs:
- Jaap Kamps - University of Amsterdam, The Netherlands
- Karen Pinel-Sauvagnat - Université de Toulouse, France
Lab chairs:
- Gareth Jones - Dublin City University, Ireland
- Eric SanJuan - Université d'Avignon, France
Lab committee:
- Nicola Ferro - University of Padova, Italy
- Donna Harman - National Institute of Standards and Technology, USA
- Maarten de Rijke - University of Amsterdam, The Netherlands
- Carol Peters - ISTI, National Research Council (CNR), Italy
- Jacques Savoy - University of Neuchâtel, Switzerland
- William Webber - William Webber Consulting, Australia
Local organization committee:
- The IRIT Information Retrieval and Mining (SIG) team