arxiv:2604.05015

Video-MME-v2: Towards the Next Stage in Benchmarks for Comprehensive Video Understanding

Published on Apr 6 · Submitted by Chaoyou Fu on Apr 8
#1 Paper of the day
Abstract

Video-MME-v2 presents a comprehensive benchmark for evaluating video understanding models through a progressive hierarchy and group-based evaluation to assess robustness and faithfulness.

AI-generated summary

With the rapid advancement of video understanding, existing benchmarks are becoming increasingly saturated, exposing a critical discrepancy between inflated leaderboard scores and real-world model capabilities. To address this widening gap, we introduce Video-MME-v2, a comprehensive benchmark designed to rigorously evaluate the robustness and faithfulness of video understanding. To systematically evaluate model capabilities, we design a progressive tri-level hierarchy that incrementally increases the complexity of video comprehension, ranging from multi-point visual information aggregation, to temporal dynamics modeling, and ultimately to complex multimodal reasoning. In addition, in contrast to conventional per-question accuracy, we propose a group-based non-linear evaluation strategy that enforces both consistency across related queries and coherence in multi-step reasoning. It penalizes fragmented or guess-based correctness and assigns credit only to answers supported by valid reasoning. To guarantee data quality, Video-MME-v2 is constructed through a rigorously controlled human annotation pipeline involving 12 annotators and 50 independent reviewers. Backed by 3,300 human-hours and up to 5 rounds of quality assurance, Video-MME-v2 aims to serve as one of the most authoritative video benchmarks. Extensive experiments reveal a substantial gap between the current best model, Gemini-3-Pro, and human experts, and uncover a clear hierarchical bottleneck in which errors in visual information aggregation and temporal modeling propagate to limit high-level reasoning. We further find that thinking-based reasoning is highly dependent on textual cues, improving performance when subtitles are available but sometimes degrading it in purely visual settings. By exposing these limitations, Video-MME-v2 establishes a demanding new testbed for the development of next-generation video MLLMs.

Community


The group-based non-linear evaluation catches my eye because it ties accuracy to consistency across related queries and to coherent multi-step reasoning, not just per-question hits. I'd love to know how the authors operationalize "valid reasoning" across a group, especially how they distinguish a model that stamps a correct answer via spurious cues from one that actually traces a justification. They report that thinking-based reasoning leans on textual cues, which lifts scores when subtitles exist but hurts in purely visual settings; an ablation removing subtitles across all tasks would help isolate this bias. The arxivlens breakdown helped me parse the method details without diving into the whole appendix, a nice touch for a quick digest. Still, it's worth checking how this scales beyond the current dataset, and whether the non-linear scoring could be gamed by models that produce plausible but unsupported reasoning.
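To make the discussion concrete, here is a minimal sketch of what a group-based, non-linear scoring rule could look like. This is an assumption-laden reading of the abstract, not the paper's actual metric: it assumes a "group" is a set of related questions, each graded for answer correctness and for whether the stated reasoning was judged valid, with all-or-nothing credit per group so that fragmented or guess-based hits score zero.

```python
# Hypothetical sketch of group-based non-linear scoring (NOT the
# paper's actual formula). Assumption: a group earns credit only when
# every related query is answered correctly AND its reasoning was
# judged valid, so isolated or lucky correctness contributes nothing.

from dataclasses import dataclass

@dataclass
class Judgment:
    correct: bool          # final answer matches the key
    reasoning_valid: bool  # justification judged sound by a grader

def group_score(group: list[Judgment]) -> float:
    """All-or-nothing credit for one group of related questions."""
    return 1.0 if all(j.correct and j.reasoning_valid for j in group) else 0.0

def benchmark_score(groups: list[list[Judgment]]) -> float:
    """Mean group credit across the benchmark."""
    return sum(group_score(g) for g in groups) / len(groups)

# A group answered fully and coherently earns credit; a group with one
# correct-but-unjustified answer earns none.
g1 = [Judgment(True, True), Judgment(True, True)]   # consistent -> 1.0
g2 = [Judgment(True, False), Judgment(True, True)]  # lucky guess -> 0.0
print(benchmark_score([g1, g2]))  # 0.5
```

Under a rule like this, per-question accuracy can look high while the group score collapses, which is exactly the gap between leaderboard numbers and real capability that the comment above is probing.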
