Video-Text-to-Text
Transformers
Safetensors
English
qwen2_5_vl
image-text-to-text
text-generation-inference
Instructions for using Video-R1/Video-R1-7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use Video-R1/Video-R1-7B with Transformers (a video-inference sketch follows the notebook list below):
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Video-R1/Video-R1-7B")
model = AutoModelForImageTextToText.from_pretrained("Video-R1/Video-R1-7B")
```
- Notebooks
  - Google Colab
  - Kaggle
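After loading the processor and model as above, a question about a video can be answered through the chat template. The snippet below is a minimal sketch rather than the authors' official recipe: it assumes a recent transformers release whose Qwen2.5-VL-style processor accepts video entries in `apply_chat_template`, and the video path, prompt, dtype, and generation settings are illustrative placeholders.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Video-R1/Video-R1-7B"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative choice; pick a dtype your hardware supports
    device_map="auto",
)

# A chat message pairing a local video file with a question.
# NOTE: "example_video.mp4" and the prompt are placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "path": "example_video.mp4"},
            {"type": "text", "text": "Describe what happens in this video."},
        ],
    }
]

# In recent transformers releases, apply_chat_template can read the video,
# sample frames, and tokenize the prompt in one call.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens before decoding the answer.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)[0]
print(answer)
```

For the authors' reference inference and evaluation code, see the linked notebooks and the Video-R1 GitHub repository referenced in the model card below.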
Add/improve model card: add pipeline tag, library name, and link to paper/code
#1
opened by nielsr (HF Staff)
README.md CHANGED

```diff
@@ -1,3 +1,9 @@
----
-license: apache-2.0
----
+---
+license: apache-2.0
+pipeline_tag: video-text-to-text
+library_name: transformers
+---
+
+This repository contains the Video-R1-7B model as presented in [Video-R1: Reinforcing Video Reasoning in MLLMs](https://arxiv.org/pdf/2503.21776).
+
+Code: https://github.com/tulerfeng/Video-R1
```