Large Multimodal Models Evaluation: A Survey
Zicheng Zhang (Shanghai Artificial Intelligence Laboratory)
Junying Wang (Shanghai Artificial Intelligence Laboratory, Fudan University)
Farong Wen (Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University)
Yijin Guo (Shanghai Jiao Tong University, Shanghai Artificial Intelligence Laboratory)
Xiangyu Zhao (Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University)
Xinyu Fang (Shanghai Artificial Intelligence Laboratory, Zhejiang University - Hangzhou)
Shengyuan Ding (Fudan University, Shanghai Artificial Intelligence Laboratory)
Xuemei Zhou (TU Delft - Multimedia Computing)
Guangtao Zhai (Shanghai Artificial Intelligence Laboratory)
et al.
Abstract
As large multimodal models (LMMs) advance rapidly across diverse multimodal understanding and generation tasks, the need for systematic and reliable evaluation frameworks becomes increasingly critical. To address this need, this survey provides a structured overview of LMM evaluation, centered on two main axes: multimodal evaluation for understanding and for generation. (1) For understanding, a dual-perspective framework is introduced that distinguishes between benchmarks for general capabilities, which emphasize common tasks, and benchmarks for specialized capabilities, which reflect expert-level competence in domain-specific fields. (2) For generation, evaluation is organized by output modality, covering image, video, audio, and 3D content. (3) From a community perspective, the survey further highlights authoritative leaderboards and foundational tools that have been instrumental in establishing a comprehensive evaluation ecosystem for LMMs. By unifying general and specialized understanding evaluation with modality-specific generation evaluation, this survey clarifies the current landscape and offers guidance for future research on LMM evaluation.