
Seeing the Scene Matters: a Scene-Aware Long-Video Benchmark

πŸ€— Benchmark | πŸ“ Paper

👀 Overview

Long-video understanding remains challenging for multimodal large language models because real videos are not merely long sequences of frames, but are organized into semantically coherent scenes. Existing video benchmarks often emphasize short-clip perception or sparse frame matching, which makes it difficult to evaluate whether a model can understand scene-level events, connect multimodal cues across multi-minute videos, and reason over temporally distributed evidence.

We introduce SceneBench, a scene-aware long-video benchmark designed to evaluate long-video understanding at the scene level. SceneBench focuses on multi-minute videos and diverse task formats, including Title Prediction, Comment Prediction, ClipQA, SceneQA, SceneQA-Audio, and I-VQA. After ambiguity filtering and quality control, the benchmark contains 8,507 final QA pairs.

🌟 Highlights

  • Scene-aware long-video benchmark: SceneBench targets long, multi-minute videos and emphasizes scene-level understanding rather than isolated frame perception.

  • Manually annotated and cleaned data: All tasks are manually annotated, and ambiguous samples are removed during quality control, resulting in 8,507 final QA pairs used in reported experiments.

  • Multimodal scene-level reasoning: SceneBench evaluates whether models can integrate visual, textual, and audio-related cues across coherent scene units.

  • Practical long-video evaluation: The benchmark is designed for realistic long-video settings where simply increasing the number of input frames may introduce irrelevant visual noise or exceed model memory limits.

License

This dataset is released under the CC-BY-NC-SA-4.0 license.

⚠️ By accessing or using this dataset, users are expected to comply with the license terms. The dataset is provided strictly for non-commercial research use. Any use beyond this scope, including redistribution or application for commercial purposes, is not permitted, and users are responsible for any consequences resulting from improper use.

We do not claim ownership of the original video materials included in this dataset. The videos are provided only to support academic research, and all rights remain with their respective copyright holders. To reduce potential impact on the original works, the collected video clips have been processed through operations such as resolution reduction, temporal trimming, and format or size adjustment.

If any copyright holder believes that their content has been improperly included and wishes to request removal, please contact us at sia1910023@gmail.com or submit an issue in this repository.

📜 Citation

If you find our work useful for your research, please consider citing our paper:

@inproceedings{anonymous2026seeing,
  title     = {Seeing the Scene Matters: a Scene-Aware Long-Video Benchmark},
  author    = {Sengnam Chen and Hao Chen and Chenglam Ho and Xinyu Mao and Jinping Wang and Yu Zhang and Chao Li},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2026},
  url       = {https://arxiv.org/abs/2603.27259}
}