# Cross-Scenario Physics-Code Transfer Benchmark
Anonymous artifact for the NeurIPS 2026 Evaluations & Datasets Track submission "A Benchmark for Cross-Scenario Physics-Code Transfer: Compositionality Metrics on Frozen Video Features."
This Hugging Face repository accompanies the paper PDF and supplementary code zip submitted to OpenReview. It hosts the data tensors and labels reviewers need to inspect data quality, validate the protocol, and re-run all the analyses described in the paper.
## Repository contents
```
.
├── README.md                        this file
├── LICENSE                          Apache-2.0 (+ upstream attributions)
├── croissant.json                   Croissant v1.0 metadata + RAI fields
├── features/
│   └── vjepa2_collision_pooled.pt   V-JEPA 2 features for 600 collision scenes;
│                                    shape: [600, 4, 1024], float32,
│                                    mean-pooled over 4 evenly-spaced frames
├── labels/
│   ├── labels_collision.npz         mass scalars/bins + restitution scalars/bins
│   │                                for the 600 collision scenes
│   ├── labels_ramp.npz              restitution + friction labels (ramp, 300 scenes)
│   ├── labels_flat_drop.npz         restitution + friction labels (flat-drop, 300)
│   ├── labels_elasticity.npz        restitution + drop-height labels (600 scenes)
│   └── labels_ramp_3prop.npz        3-property labels for ramp (multi-prop training)
└── code/                            reproduction scripts (mirror of supplementary)
    └── ... (17 .py files; see paper appendix)
```
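The label archives listed above are plain NumPy `.npz` files. The sketch below illustrates the documented `labels_collision.npz` layout using synthetic values; the key names (`mass`, `mass_bin`, `restitution`, `restitution_bin`) are assumptions for illustration, so check the real archive with `np.load(path).files`:

```python
import numpy as np

# Build a synthetic stand-in for labels_collision.npz.
# Key names are assumptions, NOT confirmed by the release;
# inspect the real archive with np.load("labels_collision.npz").files.
n_scenes = 600
rng = np.random.default_rng(0)
demo = {
    "mass": rng.uniform(0.5, 5.0, n_scenes).astype(np.float32),         # scalar targets
    "mass_bin": rng.integers(0, 4, n_scenes),                           # binned targets
    "restitution": rng.uniform(0.0, 1.0, n_scenes).astype(np.float32),
    "restitution_bin": rng.integers(0, 4, n_scenes),
}
np.savez("labels_collision_demo.npz", **demo)

# Reload and verify one entry per scene for every key.
arch = np.load("labels_collision_demo.npz")
assert all(arch[k].shape == (n_scenes,) for k in arch.files)
print(sorted(arch.files))
```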
## What this release covers vs. the full benchmark
The full benchmark described in the paper includes:
- All four Kubric scenarios (collision, ramp, flat-drop, elasticity): 1,800 scenes total
- 75-scene matched-visual low-gravity collision variant
- Pre-extracted features for 8 frozen backbones (V-JEPA 2, V-JEPA 2.1, DINOv2-S/L, CLIP ViT-L/14, MAE, SigLIP, VideoMAE)
- Phys101 V-JEPA 2 features (spring/ramp/fall, 2,673 clips)
- Ground-truth per-object position + velocity tracks
- Rendered scene videos (256×256, 48 frames at 24 fps)
This Hugging Face repository hosts the load-bearing subset for reviewer inspection: V-JEPA 2 features for the collision source scenario, all 4 Kubric label files, and full reproduction code. Together these are sufficient to verify the within-scenario protocol, the label binning logic, the message-extraction pipeline, the permutation tests on the 24-config sweep, the within-architecture analyses, and the headline sufficiency claim.
The remaining ramp/flat-drop/elasticity feature tensors, the 8-backbone feature suite, the Phys101 features, the GT track tensors, and the rendered scene videos (~70 GB total) are prepared for immediate public release with full author attribution upon acceptance. Anonymous reviewers who need the full bundle for verification can request it through the OpenReview Author-Reviewer messaging channel; it will be made available in this repository under the same anonymous account.
## Reproducing the headline result
To verify the headline result (the top-5 vs. bot-5 PosDis sufficiency observation; permutation $p = 0.84$):
```bash
# 1. Download this repository
huggingface-cli download physics-code-transfer-bench/cross-scenario-physics-code-transfer \
    --repo-type dataset --local-dir physics-bench

# 2. Set up the expected directory structure for the code
mkdir -p results/kinematics_vs_mechanics
cp physics-bench/features/* results/
cp physics-bench/labels/* results/kinematics_vs_mechanics/

# 3. Re-run the permutation test from the headline (reads paper-reported numbers; no GPU needed)
cd physics-bench/code
python _compute_perm_test.py
python _compute_within_arch_perm.py
```
Expected output: top-5 vs bot-5 PosDis two-sided $p = 0.84$ (matches paper); within-architecture permutation results match Section 4.5 of the paper.
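The scripts above read the paper-reported per-configuration numbers; the statistical machinery itself is a standard exact permutation test on a difference of group means. As a self-contained sketch (the scores below are placeholders, not the paper's 24-config values), an exact two-sided test on two groups of five scores can be written as:

```python
from itertools import combinations

import numpy as np

def perm_test_two_sided(a, b):
    """Exact two-sided permutation test for a difference in group means.

    Enumerates every way to split the pooled scores into groups of the
    original sizes and counts splits whose absolute mean difference is at
    least as large as the observed one.
    """
    pooled = np.concatenate([a, b])
    n, k = len(pooled), len(a)
    observed = abs(a.mean() - b.mean())
    count = total = 0
    for idx in combinations(range(n), k):  # C(10, 5) = 252 splits for 5 vs 5
        mask = np.zeros(n, dtype=bool)
        mask[list(idx)] = True
        if abs(pooled[mask].mean() - pooled[~mask].mean()) >= observed - 1e-12:
            count += 1
        total += 1
    return count / total

# Placeholder scores for the top-5 and bottom-5 PosDis configurations.
top5 = np.array([0.61, 0.58, 0.64, 0.59, 0.62])
bot5 = np.array([0.60, 0.63, 0.57, 0.61, 0.58])
print(perm_test_two_sided(top5, bot5))
```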
To re-run the within-collision sender training (requires the V-JEPA 2 collision features, which are in this release):
```bash
PYTHONUNBUFFERED=1 PYTORCH_ENABLE_MPS_FALLBACK=1 \
    python _rev_q_addendum2_high_posdis.py
```
## Croissant metadata
`croissant.json` is a Croissant v1.0 metadata file describing the benchmark, file formats, splits, and Responsible-AI annotations (data collection protocol, annotation protocol, preprocessing, use cases, limitations, social impact, biases, personal/sensitive information, release/maintenance plan). It has been validated locally with the Croissant validator.
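As a rough illustration of the structural checks such a validator performs, the stdlib-only sketch below verifies a few top-level Croissant fields on an inline toy record. The field values are illustrative, not this repository's actual metadata; use the official `mlcroissant` validator for real conformance checking.

```python
import json

# Coarse top-level requirements only; the real spec checks far more.
REQUIRED_TOP_LEVEL = ["@context", "@type", "name", "description", "license", "distribution"]

def missing_croissant_fields(meta: dict) -> list[str]:
    """Return the required top-level keys absent from a Croissant record."""
    return [field for field in REQUIRED_TOP_LEVEL if field not in meta]

# Toy record, illustrative only -- not this repository's croissant.json.
example = json.loads("""
{
  "@context": {"@vocab": "https://schema.org/"},
  "@type": "sc:Dataset",
  "name": "cross-scenario-physics-code-transfer",
  "description": "Frozen-feature benchmark for physics-code transfer.",
  "license": "Apache-2.0",
  "distribution": [{"@type": "cr:FileObject", "name": "features/vjepa2_collision_pooled.pt"}]
}
""")

missing = missing_croissant_fields(example)
print(missing)  # []
```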
## Citation
```bibtex
@inproceedings{anonymous2026benchmark,
  title     = {A Benchmark for Cross-Scenario Physics-Code Transfer:
               Compositionality Metrics on Frozen Video Features},
  author    = {Anonymous Authors},
  booktitle = {NeurIPS 2026 Evaluations \& Datasets Track (under review)},
  year      = {2026}
}
```
## License
- This benchmark and accompanying code are released under the Apache License 2.0 (consistent with the upstream Kubric license).
- Phys101: we redistribute only V-JEPA 2 features extracted from Phys101, not the source video; Phys101 itself remains under its original CC-BY 4.0 license.
- V-JEPA 2 / V-JEPA 2.1: features are derived from the publicly released encoders (Meta CC-BY-NC 4.0 research license); we redistribute only feature tensors for non-commercial research use.
- Full upstream attribution is in LICENSE.
## Anonymity statement
This repository is hosted under an anonymous account for double-blind NeurIPS 2026 review. No personally identifiable information (author names, institutions, contact emails) appears in the README, the LICENSE, the Croissant metadata, code, or any commit history. Public release with author attribution is contingent on acceptance.