About the Lab
FinMMEval Lab integrates financial reasoning, multilingual understanding, and decision-making into a unified evaluation suite designed to promote robust, transparent, and globally competent financial AI. The 2026 edition introduces three interconnected tasks spanning seven languages.
Multi-modal inputs: news, filings, macro indicators, tests.
Multiple languages with low-resource representations: English, Chinese, Arabic, Hindi, Greek, Japanese, Spanish.
Tasks spanning question answering and decision-making.
Metrics centered on accuracy, ROUGE-1, BLEURT, and quantitative performance metrics (e.g., CR, SR, MD).
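The quantitative abbreviations above are not expanded on this page; one common reading in financial LLM evaluation is CR = cumulative return, SR = Sharpe ratio, MD = maximum drawdown. Under that assumption, a minimal sketch of how such metrics are typically computed from a series of per-period returns:

```python
# Hedged sketch, assuming CR = cumulative return, SR = Sharpe ratio,
# MD = maximum drawdown. The lab may define these differently; this
# is illustrative only, not the official scoring code.

def cumulative_return(returns):
    """Compound per-period simple returns into a total return."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total - 1.0

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by return standard deviation (per period)."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((x - mean) ** 2 for x in excess) / n
    return mean / var ** 0.5 if var > 0 else 0.0

def max_drawdown(returns):
    """Largest peak-to-trough decline of the compounded equity curve."""
    equity, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        mdd = max(mdd, (peak - equity) / peak)
    return mdd

rets = [0.02, -0.01, 0.03, -0.02, 0.01]
print(cumulative_return(rets), sharpe_ratio(rets), max_drawdown(rets))
```

Participants in the decision-making task should confirm the exact definitions (e.g., annualization and risk-free rate conventions for SR) in the official task guidelines.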
"How can I tailor my setup to make an LLM exceptionally good at finance?"
Tasks
Choose one or more tasks. Each submission must provide calibrated confidence scores and an evidence trace.
Task 1 - Financial Exam Q&A
Given a stand-alone multiple-choice question Q with four candidate options { A1, A2, A3, A4 }, the system must select the correct answer A∗. Questions cover valuation, accounting, ethics, corporate finance, and regulatory knowledge.
Motivation
Professional financial qualification exams (e.g., CFA, EFPA) require the integration of theoretical and regulatory knowledge with applied reasoning. Existing LLMs often rely on factual recall without demonstrating the analytical rigor expected from human candidates.
Data
- EFPA (Spanish): 50 exam-style financial questions on investment and regulation.
- GRFinQA (Greek): 225 multiple-choice finance questions from university-level exams.
- CFA (English): 600 exam-style multiple-choice questions covering nine core domains.
- CPA (Chinese): 300 exam-style financial questions focusing on major modules.
- BBF (Hindi): 500-1000 exam-style financial multiple-choice questions covering over 30 domains.
Evaluation
Models are required to output the correct answer label. Performance is measured by accuracy, defined as the proportion of questions in the test set for which the correct option is selected.
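The accuracy metric above can be sketched in a few lines. Field names (`gold`, `pred`) and label format are illustrative, not the official submission schema:

```python
# Minimal sketch of the Task 1 metric: accuracy over predicted option labels.
# Label strings here are illustrative; check the official answer format.

def accuracy(gold_labels, pred_labels):
    """Proportion of questions whose predicted option matches the gold answer."""
    assert len(gold_labels) == len(pred_labels)
    correct = sum(g == p for g, p in zip(gold_labels, pred_labels))
    return correct / len(gold_labels)

gold = ["A2", "A1", "A4", "A3"]
pred = ["A2", "A3", "A4", "A3"]
print(accuracy(gold, pred))  # → 0.75
```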
Important Dates
Specific dates will be announced once they are fixed.
Latest News
Unique registered participants grow every day
We are seeing consistent growth in unique registered participants, demonstrating interest even before the start of our marketing campaign.
How to Participate
Engage with the challenges in a way that suits you - from a quick, one-time experiment to a detailed research project. While we invite you to share your findings in our workshop notes, you are also free to develop promising results into a full paper for an archival journal.
The workshop itself is a perfect opportunity to refine your ideas through discussion with peers.
Ready to join?
Sign up via the CLEF registration form (FinMMEval section)
Packaging Checklist
- ✓ Results JSONL (per task)
- ✓ System Card (architecture, data usage, risks)
- ✓ Reproducibility (seed, versions, hardware)
- ✓ License compliance acknowledgements (if applicable)
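A hedged sketch of writing a per-task results JSONL file that carries the calibrated confidence scores and evidence trace the submission rules require. The field names (`question_id`, `answer`, `confidence`, `evidence`) and the file name are assumptions for illustration; verify the official schema before packaging:

```python
# Hypothetical results JSONL writer. All field names and the output
# file name are assumptions -- consult the official submission schema.
import json

records = [
    {
        "question_id": "cfa-0001",        # hypothetical identifier format
        "answer": "A3",                    # predicted option label
        "confidence": 0.87,                # calibrated probability in [0, 1]
        "evidence": [                      # free-text evidence trace
            "Retrieved passage on bond duration",
            "Step-by-step valuation reasoning",
        ],
    },
]

with open("task1_results.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

JSONL stores one JSON object per line, which keeps large result files streamable and easy to validate record by record.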
Organizers
Organizing committee and partner institutions.
Frequently Asked Questions
Who can participate?
Researchers and practitioners from academia and industry. Student teams are particularly welcome.
How is data licensed?
Research-only license; redistribution of raw sources may be restricted.
Can we submit to multiple tasks?
Yes. Submit independent result bundles per task.
Are ensembles allowed?
Yes, but disclose all components in the system card.