ImageCLEF 2026
MultimodalReasoning

Building on the success of previous iterations, the 2026 edition shifts its focus toward an updated, high-quality dataset built from 2025 academic questions and fully transparent open-source development.

Evaluate the next generation of frontier multimodal models on challenging vision-based exam problems.

High-Quality Datasets

Updated reasoning-intensive data from 2025 academic questions, focusing on diagrams, charts, and technical schematics.

Open-Source Solutions

Promoting transparency with fully open-source architectures and reproducible systems.

Latest News

Jan 20, 2026

Competition Information Release

Official announcement and website launch for the 2026 edition.

Important Dates

26 Jan 2026

Registration opens for all ImageCLEF tasks

07 Mar 2026

(OpenQA) Train & Development dataset release

17 Apr 2026

(OpenQA+MCQ) Test dataset release

23 Apr 2026

Registration closes for all ImageCLEF tasks

07 May 2026

Deadline for submitting participant runs

All deadlines are at 23:59 Anywhere on Earth (AoE).

Competition Tasks

The competition tasks are designed to evaluate the performance of multimodal models on challenging vision-based exam problems.

1

Multiple-Choice Question Answering (MCQ)

Given an image of a question with three to five possible answer options, the solution must select the single correct answer.

Classification
2

Open Question Answering (OpenQA)

Given an image of a question without predefined answer options, the solution must generate a free-form textual answer.

Generative

Data & Resources

Competition Guidelines

VM Environment

GPU
A40 (40GB)
Command
bash inference.sh

Submissions must be reproducible and run within these constraints for the private test set evaluation.
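To satisfy the single-command constraint, a submission's entry point typically iterates over the test images and writes predictions in the required JSON format. The sketch below is a minimal, hypothetical skeleton of such an entry point (the directory layout, file names, and the `predict` stub are assumptions; a real submission would load its own open-weights VLM in place of the placeholder):

```python
import json
from pathlib import Path


def predict(image_path: Path) -> str:
    # Placeholder: a real submission would run its open-weights VLM here
    # to answer the question shown in the image.
    return "A"


def main(test_dir: str = "test_images",
         out_file: str = "predictions.json") -> list[dict]:
    """Run inference over all test images and write predictions.json."""
    predictions = []
    for img in sorted(Path(test_dir).glob("*.png")):
        predictions.append({
            "question_id": img.stem,
            "predicted_answer": predict(img),
        })
    Path(out_file).write_text(json.dumps(predictions, indent=2))
    return predictions
```

A one-line `inference.sh` could then simply invoke this script, keeping the whole pipeline reproducible inside the VM constraints.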

Categories

To encourage efficient solutions, the competition has two categories:

  • Tiny: models with ≤ 7B parameters.
  • Normal: models with ≥ 8B parameters.

Rules & Constraints

  • Open-weights/open-source models only.
  • Pre-trained VLMs are allowed.
  • External data is allowed (must be disclosed).
  • Proprietary models (e.g., GPT-4 API) are NOT allowed.

Evaluation & Submission

Leaderboard Phase

Submit JSON predictions only.

[{ "question_id": "...", "predicted_answer": "..." }]

Max 20 submissions/day, 200 total.
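Before uploading, it is worth checking that a predictions file matches the expected shape shown above: a JSON array of objects, each with exactly a `question_id` and a `predicted_answer` string. A small, unofficial validator sketch:

```python
import json

# Leaderboard limits from the guidelines (for reference only).
MAX_DAILY, MAX_TOTAL = 20, 200


def validate_submission(raw: str) -> list[dict]:
    """Check that a leaderboard submission matches the expected JSON shape:
    a list of {"question_id": str, "predicted_answer": str} objects."""
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("submission must be a JSON array")
    for i, entry in enumerate(data):
        if not isinstance(entry, dict):
            raise ValueError(f"entry {i} is not an object")
        if set(entry) != {"question_id", "predicted_answer"}:
            raise ValueError(f"entry {i} has wrong keys: {sorted(entry)}")
        if not all(isinstance(v, str) for v in entry.values()):
            raise ValueError(f"entry {i} values must be strings")
    return data
```

Running this locally catches malformed files before they count against the submission quota.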

Final Evaluation

For the final evaluation, participants must provide:

  • Public GitHub repository
  • Model weights
  • Training dataset (optional)
  • Execution command

Organizers

Dimitar Dimitrov*

Sofia University "St. Kliment Ohridski", Bulgaria

Momina Ahsan*

MBZUAI, UAE

Ming Shan Hee*

MBZUAI, UAE

Zhuohan Xie

MBZUAI, UAE

Sarfraz Ahmad

MBZUAI, UAE

Dimitrina Zlatkova

Sofia University "St. Kliment Ohridski", Bulgaria

Georgi Pachov

Sofia University "St. Kliment Ohridski", Bulgaria

Ivan Koychev

Sofia University "St. Kliment Ohridski", Bulgaria

Preslav Nakov

MBZUAI, UAE

* Competition Co-Leads