Overview of ImageCLEF 2018: Challenges, Datasets and Evaluation

Title: Overview of ImageCLEF 2018: Challenges, Datasets and Evaluation
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Ionescu, B, Müller, H, Villegas, M, García Seco de Herrera, A, Eickhoff, C, Andrearczyk, V, Dicente Cid, Y, Liauchuk, V, Kovalev, V, Hasan, SA, Ling, Y, Farri, O, Liu, J, Lungren, M, Dang-Nguyen, D-T, Piras, L, Riegler, M, Zhou, L, Lux, M, Gurrin, C
Conference Name: Experimental IR Meets Multilinguality, Multimodality, and Interaction - 9th International Conference of the CLEF Association, CLEF 2018, Avignon, France, September 10-14, 2018, Proceedings

This paper presents an overview of the ImageCLEF 2018 evaluation campaign, an event organized as part of the CLEF (Conference and Labs of the Evaluation Forum) Labs 2018. ImageCLEF is an ongoing initiative (started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval, with the aim of providing information access to collections of images in various usage scenarios and domains. In 2018, the 16th edition of ImageCLEF ran three main tasks and a pilot task: (1) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based only on the figure image; (2) a tuberculosis task that aims at detecting the tuberculosis type, severity and drug resistance from CT (Computed Tomography) volumes of the lung; (3) a lifelog task on understanding daily activities and retrieving specific moments from multimodal data (videos, images and other sources); and (4) a pilot task on visual question answering, where systems are tasked with answering medical questions. The strong participation, with over 100 research groups registering and 31 submitting results for the tasks, shows an increasing interest in this benchmarking campaign.

Citation Key: DBLP:conf/clef/IonescuMVHEACLK18