
PresentBench: A Fine-Grained Rubric-Based Benchmark for Slide Generation

[🌐 Homepage] [πŸ“– Paper] [πŸ’» Code]

This repository hosts the PresentBench benchmark dataset.

πŸ“„ Abstract

Slides serve as a critical medium for conveying information in presentation-oriented scenarios such as academia, education, and business. Despite their importance, creating high-quality slide decks remains time-consuming and cognitively demanding. Recent advances in generative models, such as Nano Banana Pro, have made automated slide generation increasingly feasible. However, existing evaluations of slide generation are often coarse-grained and rely on holistic judgments, making it difficult to accurately assess model capabilities or track meaningful advances in the field. In practice, the lack of fine-grained, verifiable evaluation criteria poses a critical bottleneck for both research and real-world deployment.

In this paper, we propose PresentBench, a fine-grained, rubric-based benchmark for evaluating automated real-world slide generation. It contains 238 evaluation instances, each supplemented with background materials required for slide creation. Moreover, we manually design an average of 54.1 checklist items per instance, each formulated as a binary question, to enable fine-grained, instance-specific evaluation of the generated slide decks.
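As a rough illustration of the binary-checklist idea (not the official scoring code; the item names below are hypothetical), a deck's score can be computed as the fraction of checklist items judged "yes":

```python
# Hypothetical checklist results for one instance: each item is a binary
# (yes/no) judgment, mirroring PresentBench's rubric design. The item
# names are invented for illustration only.
checklist = {
    "title_slide_present": True,
    "covers_section_background": True,
    "figure_cited_correctly": False,
    "no_factual_errors": True,
}

def score(results: dict) -> float:
    """Score a deck as the fraction of checklist items it passes."""
    return sum(results.values()) / len(results)

print(round(score(checklist) * 100, 1))  # prints 75.0
```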

Extensive experiments show that PresentBench provides more reliable evaluation results than existing methods, and exhibits significantly stronger alignment with human preferences. Furthermore, our benchmark reveals that NotebookLM significantly outperforms other slide generation methods, highlighting substantial recent progress in this domain.

πŸ† Leaderboard

Comparative results across five domains. In the paper's table, the highest scores are highlighted in red and the second-highest in blue.

| Method | Total | Academia | Advertising | Education | Economics | Talk |
|---|---|---|---|---|---|---|
| NotebookLM | 62.5 | 68.6 | 54.9 | 55.0 | 58.2 | 69.2 |
| Manus 1.6 | 57.8 | 64.0 | 52.4 | 50.7 | 52.8 | 63.0 |
| Tiangong | 54.7 | 59.2 | 44.5 | 53.7 | 46.5 | 59.8 |
| Zhipu | 53.6 | 57.5 | 41.0 | 52.5 | 47.6 | 59.0 |
| PPTAgent v2 | 50.2 | 53.3 | 46.7 | 46.1 | 46.1 | 56.6 |
| Gamma | 49.2 | 54.4 | 46.7 | 47.8 | 35.1 | 56.3 |
| Doubao | 48.0 | 50.3 | 42.9 | 45.4 | 44.0 | 54.7 |
| Qwen | 35.9 | 39.4 | 31.9 | 36.6 | 26.5 | 38.6 |

πŸ—‚οΈ Dataset Structure

Domains under <dataset_root>/ include (non‑exhaustive):

  • academia/
  • advertising/
  • economics/
  • education/
  • talk/

Each leaf case typically looks like:

  • material.pdf|material.md|material_N.md|material_N.pdf – source documents (PDFs, text, etc.).
  • generation_task/ – prompts and evaluation configuration:
    • generation_prompt.md
    • judge_prompt.json
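Given the layout above, iterating over all cases can be sketched as follows (a minimal sketch assuming only the directory structure listed; any files beyond those named are not guaranteed to exist):

```python
from pathlib import Path

def iter_cases(dataset_root: str):
    """Yield (domain, case_dir, prompt_path, judge_path) for every leaf
    case directory that contains a generation_task/ subdirectory."""
    root = Path(dataset_root)
    for domain_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for task_dir in sorted(domain_dir.rglob("generation_task")):
            case_dir = task_dir.parent
            yield (
                domain_dir.name,
                case_dir,
                task_dir / "generation_prompt.md",
                task_dir / "judge_prompt.json",
            )
```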

βš™οΈ Usage

To evaluate slide generation systems with this dataset, please follow the evaluation pipeline and scripts provided in the code repository (e.g., environment setup, data preparation, inference, and evaluation).

πŸ“œ Licensing Information

The PresentBench benchmark aggregates background materials collected from multiple public sources. Each source remains governed by its own original license and terms of use.

  • Data Source Licenses: Users must strictly comply with the licensing terms and conditions of each original background-material source included in this benchmark. We recommend carefully reviewing the original license for each source before use.

  • Prompts and Evaluation Rubrics: The task instructions and evaluation checklists are created by us. To the extent that we hold any related intellectual property rights, these contributions are made available under the Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC-4.0) license.

  • Copyright Concerns: This benchmark is compiled for academic research purposes. If you believe any content in PresentBench infringes upon your copyright, please contact us immediately at chen.xs.gm[at]gmail.com. We will promptly review and address the matter, including removing the content in question upon verification.

πŸ“š Citation

BibTeX:

@article{chen2026presentbench,
  title={PresentBench: A Fine-Grained Rubric-Based Benchmark for Slide Generation},
  author={Chen, Xin-Sheng and Zhu, Jiayu and Li, Pei-lin and Wang, Hanzheng and Yang, Shuojin and Guo, Meng-Hao},
  journal={arXiv preprint arXiv:2603.07244},
  year={2026}
}