license: cc-by-nc-4.0
language:
- en
- ar
- hi
tags:
- Document_Understanding
- Document_Packet_Splitting
- Document_Comprehension
- Document_Classification
- Document_Recognition
- Document_Segmentation
pretty_name: DocSplit Benchmark
size_categories:
- 1M<n<10M
In addition to the dataset, we release this repository containing the complete toolkit for generating the benchmark datasets, along with Jupyter notebooks for data analysis.
DocSplit: Document Packet Splitting Benchmark Generator
A toolkit for creating benchmark datasets to test document packet splitting systems. Document packet splitting is the task of separating concatenated multi-page documents into individual documents with correct page ordering.
Overview
This toolkit generates five benchmark datasets of varying complexity to test how well models can:
- Detect document boundaries within concatenated packets
- Classify document types accurately
- Reconstruct correct page ordering within each document
Document Source
We use the documents from RVL-CDIP-N-MP:
https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp
Quick Start
Clone from Hugging Face
This repository is hosted on Hugging Face at: https://huggingface.co/datasets/amazon/doc_split
Choose one of the following methods to download the repository:
Option 1: Using Git with Git LFS (Recommended)
Git LFS (Large File Storage) is required for Hugging Face datasets as they often contain large files.
Install Git LFS:
# Linux (Ubuntu/Debian):
sudo apt-get install git-lfs
git lfs install
# macOS (Homebrew):
brew install git-lfs
git lfs install
# Windows: Download from https://git-lfs.github.com, then run:
# git lfs install
Clone the repository:
git clone https://huggingface.co/datasets/amazon/doc_split
cd doc_split
pip install -r requirements.txt
Option 2: Using Hugging Face CLI
# 1. Install the Hugging Face Hub CLI
pip install -U "huggingface_hub[cli]"
# 2. (Optional) Login if authentication is required
huggingface-cli login
# 3. Download the dataset
huggingface-cli download amazon/doc_split --repo-type dataset --local-dir doc_split
# 4. Navigate and install dependencies
cd doc_split
pip install -r requirements.txt
Option 3: Using Python SDK (huggingface_hub)
from huggingface_hub import snapshot_download
# Download the entire dataset repository
local_dir = snapshot_download(
repo_id="amazon/doc_split",
repo_type="dataset",
local_dir="doc_split"
)
print(f"Dataset downloaded to: {local_dir}")
Then install dependencies:
cd doc_split
pip install -r requirements.txt
Tips
- Check Disk Space: Hugging Face datasets can be large. Check the "Files and versions" tab on the Hugging Face page to see the total size before downloading.
- Partial Clone: If you only need specific files (e.g., code without large data files), use:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/amazon/doc_split
cd doc_split
# Then selectively pull specific files:
git lfs pull --include="*.py"
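The same partial download can be done from Python; a minimal sketch using snapshot_download's allow_patterns argument (the file patterns below are illustrative, adjust them to what you need):
from huggingface_hub import snapshot_download

# Download only code and notebooks, skipping large data files.
local_dir = snapshot_download(
    repo_id="amazon/doc_split",
    repo_type="dataset",
    local_dir="doc_split",
    allow_patterns=["*.py", "*.ipynb", "requirements.txt", "README.md"],
)
print(f"Partial download at: {local_dir}")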
Usage
Step 1: Create Assets
Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).
Option A: AWS Textract OCR (Default)
Best for English documents. Processes all document categories with Textract.
python src/assets/run.py \
--raw-data-path data/raw_data \
--output-path data/assets \
--s3-bucket your-bucket-name \
--s3-prefix textract-temp \
--workers 10 \
--save-mapping
Requirements:
- AWS credentials configured (aws configure)
- S3 bucket for temporary file uploads
- No GPU required
Option B: Hybrid OCR (Textract + DeepSeek)
Uses Textract for most categories, DeepSeek OCR only for the "language" category (multilingual documents).
Note: For this project, DeepSeek OCR was used only for the "language" category and was run on AWS SageMaker AI GPU instances (e.g., ml.g6.xlarge).
1. Install flash-attention (Required for DeepSeek):
# For CUDA 12.x with Python 3.12:
cd /mnt/sagemaker-nvme # Use larger disk for downloads
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
pip install flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
# For other CUDA/Python versions: https://github.com/Dao-AILab/flash-attention/releases
2. Set cache directory (Important for SageMaker):
# SageMaker: Use larger NVMe disk instead of small home directory
export HF_HOME=/mnt/sagemaker-nvme/cache
export TRANSFORMERS_CACHE=/mnt/sagemaker-nvme/cache
3. Run asset creation:
python src/assets/run.py \
--raw-data-path data/raw_data \
--output-path data/assets \
--s3-bucket your-bucket-name \
--use-deepseek-for-language \
--workers 10 \
--save-mapping
Requirements:
- NVIDIA GPU with CUDA support (tested on ml.g6.xlarge)
- ~10GB+ disk space for model downloads
- flash-attention library installed
- AWS credentials (for Textract on non-language categories)
- S3 bucket (for Textract on non-language categories)
How it works:
- Documents in raw_data/language/ → DeepSeek OCR (GPU)
- All other categories → AWS Textract (cloud)
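Before launching a DeepSeek OCR run, it can help to sanity-check the GPU environment; a minimal check, assuming torch and flash-attention were installed as described above:
import torch

# Confirm CUDA is visible to PyTorch (required for DeepSeek OCR).
assert torch.cuda.is_available(), "No CUDA device detected"
print("GPU:", torch.cuda.get_device_name(0))

# Confirm the flash-attention wheel installed correctly.
import flash_attn  # noqa: F401
print("flash-attn import OK")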
Parameters
- --raw-data-path: Directory containing source PDFs organized by document type
- --output-path: Where to save extracted assets (images + OCR text)
- --s3-bucket: S3 bucket name (required for Textract)
- --s3-prefix: S3 prefix for temporary files (default: textract-temp)
- --workers: Number of parallel processes (default: 10)
- --save-mapping: Save CSV mapping document IDs to file paths
- --use-deepseek-for-language: Use DeepSeek OCR for "language" category only
- --limit: Process only N documents (useful for testing)
What Happens
- Scans the raw_data/ directory for PDFs organized by document type
- Extracts each page as a 300 DPI PNG image
- Runs OCR (Textract or DeepSeek) to extract text
- Saves structured assets in output-path/{doc_type}/{doc_name}/
- Optionally creates document_mapping.csv listing all processed documents
- These assets become the input for Step 2 (benchmark generation)
Output Structure
data/assets/
└── {doc_type}/{filename}/
├── original/{filename}.pdf
└── pages/{page_num}/
├── page-{num}.png # 300 DPI image
└── page-{num}-textract.md # OCR text
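Given this layout, the assets can be consumed directly; a minimal sketch (paths and file names follow the structure above, this is not a helper shipped with the toolkit) that loads each page image and its OCR text:
from pathlib import Path
from PIL import Image

assets_root = Path("data/assets")

# Walk data/assets/{doc_type}/{doc_name}/pages/{page_num}/ as shown above.
for doc_type_dir in sorted(p for p in assets_root.iterdir() if p.is_dir()):
    for doc_dir in sorted(p for p in doc_type_dir.iterdir() if p.is_dir()):
        pages_dir = doc_dir / "pages"
        if not pages_dir.is_dir():
            continue
        # Page directories are numbered, so sort them numerically.
        for page_dir in sorted(pages_dir.iterdir(), key=lambda p: int(p.name)):
            png = next(page_dir.glob("page-*.png"))
            ocr_md = next(page_dir.glob("page-*-textract.md"))
            image = Image.open(png)                    # 300 DPI page image
            text = ocr_md.read_text(encoding="utf-8")  # OCR text (Markdown)
            print(doc_type_dir.name, doc_dir.name, page_dir.name, image.size, len(text))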
Interactive Notebooks
Explore the toolkit with Jupyter notebooks:
- notebooks/01_create_assets.ipynb - Create assets from PDFs
- notebooks/02_create_benchmarks.ipynb - Generate benchmarks with different strategies
- notebooks/03_analyze_benchmarks.ipynb - Analyze and visualize benchmark statistics
Benchmark Output Format
Each benchmark JSON contains:
{
"benchmark_name": "poly_seq",
"strategy": "PolySeq",
"split": "train",
"created_at": "2026-01-30T12:00:00",
"documents": [
{
"spliced_doc_id": "splice_0001",
"source_documents": [
{"doc_type": "invoice", "doc_name": "doc1", "pages": [1,2,3]},
{"doc_type": "letter", "doc_name": "doc2", "pages": [1,2]}
],
"ground_truth": [
{"page_num": 1, "doc_type": "invoice", "source_doc": "doc1", "source_page": 1},
{"page_num": 2, "doc_type": "invoice", "source_doc": "doc1", "source_page": 2},
...
],
"total_pages": 5
}
],
"statistics": {
"total_spliced_documents": 1000,
"total_pages": 7500,
"unique_doc_types": 16
}
}
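A benchmark file with this schema can be consumed with the standard library alone; a small sketch (field names taken from the example above, the path is illustrative) that loads a benchmark, checks page counts, and derives ground-truth boundary positions for each spliced document:
import json

with open("data/benchmarks/poly_seq.json", encoding="utf-8") as f:  # illustrative path
    benchmark = json.load(f)

for doc in benchmark["documents"]:
    pages = doc["ground_truth"]
    assert doc["total_pages"] == len(pages), "page count mismatch"

    # A boundary occurs wherever consecutive pages come from different source documents.
    boundaries = [
        page["page_num"]
        for prev, page in zip(pages, pages[1:])
        if page["source_doc"] != prev["source_doc"]
    ]
    print(doc["spliced_doc_id"], "boundaries at pages:", boundaries)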
Requirements
- Python 3.8+
- AWS credentials (for Textract OCR)
- Dependencies: boto3, loguru, pymupdf, pillow
Generate Benchmark Datasets
# 1. Download and extract RVL-CDIP-N-MP source data from HuggingFace (1.25 GB)
# This dataset contains multi-page PDFs organized by document type
# (invoices, letters, forms, reports, etc.)
mkdir -p data/raw_data
cd data/raw_data
wget https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp/resolve/main/data.tar.gz
tar -xzf data.tar.gz
rm data.tar.gz
cd ../..
# 2. Create assets from raw PDFs
# Extracts each page as PNG image and runs OCR to get text
# These assets are then used in step 4 to create benchmark datasets
# Output: Structured assets in data/assets/ with images and text per page
python src/assets/run.py --raw-data-path data/raw_data --output-path data/assets
# 3. Generate benchmark datasets
# This concatenates documents using different strategies and creates
# train/test/validation splits with ground truth labels
# Output: Benchmark JSON files in data/benchmarks/ ready for model evaluation
python src/benchmarks/run.py \
--strategy poly_seq \
--assets-path data/assets \
--output-path data/benchmarks
Pipeline Overview
Raw PDFs → [Create Assets] → Page Images + OCR Text → [Generate Benchmarks] → DocSplit Benchmarks
Five Benchmark Datasets
The toolkit generates five benchmarks of increasing complexity, based on the DocSplit paper:
1. DocSplit-Mono-Seq (mono_seq)
Sequential Concatenation of Single-Category Documents
- Concatenates documents from the same category
- Preserves original page order
- Challenge: Boundary detection without category transitions as discriminative signals
- Use Case: Legal document processing where multiple contracts of the same type are bundled
2. DocSplit-Mono-Rand (mono_rand)
Randomized Page Order Within Single-Category Documents
- Same as Mono-Seq but shuffles pages within documents
- Challenge: Boundary detection + page sequence reconstruction
- Use Case: Manual document assembly with page-level disruptions
3. DocSplit-Poly-Seq (poly_seq)
Sequential Concatenation of Multi-Category Documents
- Concatenates documents from different categories
- Preserves page ordering
- Challenge: Inter-document boundary detection with category diversity
- Use Case: Medical claims processing with heterogeneous documents
4. DocSplit-Poly-Int (poly_int)
Interleaved Pages from Multi-Category Documents
- Interleaves pages from different categories in round-robin fashion (see the sketch after the strategy descriptions)
- Challenge: Identifying which non-contiguous pages belong together
- Use Case: Mortgage processing where deeds, tax records, and notices are interspersed
5. DocSplit-Poly-Rand (poly_rand)
Randomized Pages from Multi-Category Documents
- Complete randomization across all pages (maximum entropy)
- Challenge: Worst-case scenario with no structural assumptions
- Use Case: Document management system failures or emergency recovery
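To make the interleaving idea behind Poly-Int concrete, here is a toy sketch of round-robin interleaving over page lists; it is illustrative only and not the strategy implementation shipped in src/benchmarks/services/strategies/poly_int.py:
from itertools import zip_longest

def round_robin_interleave(docs):
    """Interleave pages from several documents in round-robin order.

    docs: list of page lists, e.g. [["inv-p1", "inv-p2", "inv-p3"], ["let-p1", "let-p2"]]
    """
    interleaved = []
    for group in zip_longest(*docs):  # take one page from each document per round
        interleaved.extend(p for p in group if p is not None)
    return interleaved

# Example: a 3-page invoice interleaved with a 2-page letter.
print(round_robin_interleave([["inv-p1", "inv-p2", "inv-p3"], ["let-p1", "let-p2"]]))
# -> ['inv-p1', 'let-p1', 'inv-p2', 'let-p2', 'inv-p3']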
Project Structure
doc-split-benchmark/
├── README.md
├── requirements.txt # All dependencies
├── src/
│ ├── assets/ # Asset creation from PDFs
│ │ ├── run.py # Main script
│ │ ├── models.py # Document models
│ │ └── services/
│ │ ├── pdf_loader.py
│ │ ├── textract_ocr.py
│ │ └── asset_writer.py
│ │
│ └── benchmarks/ # Benchmark generation
│ ├── run.py # Main script
│ ├── models.py # Benchmark models
│ └── services/
│ ├── asset_loader.py
│ ├── split_manager.py
│ ├── benchmark_generator.py
│ ├── benchmark_writer.py
│ └── strategies/
│ ├── mono_seq.py # DocSplit-Mono-Seq
│ ├── mono_rand.py # DocSplit-Mono-Rand
│ ├── poly_seq.py # DocSplit-Poly-Seq
│ ├── poly_int.py # DocSplit-Poly-Int
│ └── poly_rand.py # DocSplit-Poly-Rand
│
├── notebooks/ # Interactive examples
│ ├── 01_create_assets.ipynb
│ ├── 02_create_benchmarks.ipynb
│ └── 03_analyze_benchmarks.ipynb
│
└── data/ # Generated data (not in repo)
├── raw_data/ # Downloaded PDFs
├── assets/ # Extracted images + OCR
└── benchmarks/ # Generated benchmarks
Generate Benchmarks [Detailed]
Create DocSplit benchmarks with train/test/validation splits.
python src/benchmarks/run.py \
--strategy poly_seq \
--assets-path data/assets \
--output-path data/benchmarks \
--num-docs-train 800 \
--num-docs-test 200 \
--num-docs-val 500 \
--size small \
--random-seed 42
Parameters:
- --strategy: Benchmark strategy - mono_seq, mono_rand, poly_seq, poly_int, poly_rand, or all (default: all)
- --assets-path: Directory containing assets from Step 1 (default: data/assets)
- --output-path: Where to save benchmarks (default: data/benchmarks)
- --num-docs-train: Number of spliced documents for training (default: 8)
- --num-docs-test: Number of spliced documents for testing (default: 5)
- --num-docs-val: Number of spliced documents for validation (default: 2)
- --size: Benchmark size - small (5-20 pages) or large (20-500 pages) (default: small)
- --split-mapping: Path to split mapping JSON (default: data/metadata/split_mapping.json)
- --random-seed: Seed for reproducibility (default: 42)
What Happens:
- Loads all document assets from Step 1
- Creates or loads a stratified train/test/val split (60/25/15 ratio; a minimal splitting sketch follows this list)
- Generates spliced documents by concatenating/shuffling pages per strategy
- Saves benchmark CSV files with ground truth labels
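A seeded split like the one described above can be reproduced in a few lines; a minimal sketch (not the split_manager implementation) of a stratified 60/25/15 split per document type:
import random
from collections import defaultdict

def stratified_split(doc_ids_by_type, seed=42, ratios=(0.60, 0.25, 0.15)):
    """Assign each document to train/test/validation, stratified by doc_type."""
    rng = random.Random(seed)
    mapping = defaultdict(list)
    for doc_type, doc_ids in doc_ids_by_type.items():
        ids = sorted(doc_ids)
        rng.shuffle(ids)
        n_train = int(len(ids) * ratios[0])
        n_test = int(len(ids) * ratios[1])
        mapping["train"] += [(doc_type, d) for d in ids[:n_train]]
        mapping["test"] += [(doc_type, d) for d in ids[n_train:n_train + n_test]]
        mapping["validation"] += [(doc_type, d) for d in ids[n_train + n_test:]]
    return mapping

splits = stratified_split({"invoice": ["doc1", "doc2", "doc3", "doc4"], "letter": ["doc5", "doc6"]})
print({k: len(v) for k, v in splits.items()})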
Output Structure:
data/
├── metadata/
│ └── split_mapping.json # Document split assignments (shared across strategies)
└── benchmarks/
└── {strategy}/ # e.g., poly_seq, mono_rand
└── {size}/ # small or large
├── train.csv
├── test.csv
└── validation.csv
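The split CSVs can then be loaded for evaluation with the standard library; a minimal sketch, assuming the CSV rows mirror the ground-truth fields shown in the JSON example earlier (spliced_doc_id, page_num, doc_type, source_doc, source_page); check the actual headers in your generated files:
import csv
from collections import Counter

# Column names are an assumption based on the ground-truth schema shown earlier;
# inspect the generated files to confirm the actual headers.
with open("data/benchmarks/poly_seq/small/train.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

pages_per_doc = Counter(row["spliced_doc_id"] for row in rows)
print("spliced documents:", len(pages_per_doc))
print("total pages:", len(rows))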
How to cite this dataset
@misc{docsplit,
author = {Islam, Md Mofijul and Salekin, Md Sirajus and Balakrishnan, Nivedha and Bishop, Vincil C. and Jain, Niharika and Romo, Spencer and Strahan, Bob and Xie, Boyi and Socolinsky, Diego A. },
title = {DocSplit: A Comprehensive Benchmark Dataset and Evaluation Approach
for Document Packet Recognition and Splitting},
howpublished = {\url{https://huggingface.co/datasets/amazon/doc_split/}},
url = {https://huggingface.co/datasets/amazon/doc_split/},
type = {dataset},
year = {2026},
month = {February},
timestamp = {2026-02-04},
note = {Accessed: 2026-02-04}
}
License
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: CC-BY-NC-4.0