BeamPERL — DeepSeek-R1-Distill-Qwen-1.5B

BeamPERL is a parameter-efficient, reinforcement-learning fine-tuned language model specialized in beam mechanics problem-solving. It is built on top of DeepSeek-R1-Distill-Qwen-1.5B using LoRA adapters trained with Group Relative Policy Optimization (GRPO) and verifiable reward signals.

Model Details

| Property | Value |
|---|---|
| Base model | tphage/DeepSeek-R1-Distill-Qwen-1.5B |
| Fine-tuning method | GRPO (RL) + LoRA (PEFT) |
| LoRA rank / alpha | 32 / 128 |
| LoRA dropout | 0.05 |
| LoRA target modules | q, k, v, o, gate, up, down projections |
| Training precision | bfloat16 |
| Max sequence length | 2048 tokens (256 prompt + 1792 completion) |
| Training dataset | tphage/BeamRL-TrainData (synthetic beam mechanics QA) |
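
For orientation, the adapter configuration above corresponds roughly to the following PEFT LoraConfig (a minimal sketch, assuming the Hugging Face peft library; the *_proj module names follow the Qwen2 architecture of the base model and are not taken from the released training code):

from peft import LoraConfig

# Sketch of a LoRA configuration matching the hyperparameters listed above.
lora_config = LoraConfig(
    r=32,              # LoRA rank
    lora_alpha=128,    # LoRA scaling alpha
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
    task_type="CAUSAL_LM",
)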

Reward Functions

| Reward | Weight | Description |
|---|---|---|
| Accuracy | 0.667 | Correctness of the predicted reaction forces / coefficients |
| Format | 0.333 | Requires reasoning inside <think> tags and the final answer in \boxed{} |
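
The exact reward implementations live in the training framework; the snippet below is only an illustrative sketch of how such a weighted reward could be combined (function names and regular expressions are assumptions, not the released code):

import re

def format_reward(completion: str) -> float:
    # 1.0 if the completion reasons inside <think>...</think> and answers in \boxed{}, else 0.0.
    has_think = re.search(r"<think>.*?</think>", completion, flags=re.DOTALL) is not None
    has_boxed = re.search(r"\\boxed\{.+?\}", completion) is not None
    return 1.0 if (has_think and has_boxed) else 0.0

def total_reward(accuracy: float, completion: str) -> float:
    # Weighted combination matching the table: 2/3 accuracy + 1/3 format.
    return 0.667 * accuracy + 0.333 * format_reward(completion)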

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading a LoRA adapter repository through AutoModelForCausalLM requires the `peft` package.
model = AutoModelForCausalLM.from_pretrained("tphage/BeamPERL", torch_dtype="bfloat16", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tphage/BeamPERL")

prompt = "Determine the reaction forces at the pin support (x=0.0*L) and the roller support (x=9.0*L) for a statically loaded beam with a length of 9*L, a point load of -13*P at x=3.0*L, and supports at x=0.0*L (pin) and x=9.0*L (roller)."

messages = [{"role": "user", "content": prompt}]
# add_generation_prompt=True appends the assistant turn marker so generation starts cleanly.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# temperature only takes effect when sampling is enabled.
outputs = model.generate(inputs, max_new_tokens=1792, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
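
For the example prompt above, the expected reactions follow directly from static equilibrium (vertical force balance plus a moment balance about the pin), which gives a quick way to sanity-check the model's boxed answer. The short sympy sketch below treats the -13*P load as a downward force of magnitude 13*P and is illustrative only:

import sympy as sp

# Hand check: pin at x=0, roller at x=9L, downward point load of magnitude 13P at x=3L.
P, L, R1, R2 = sp.symbols("P L R1 R2", positive=True)
force_balance = sp.Eq(R1 + R2 - 13 * P, 0)                # sum of vertical forces = 0
moment_about_pin = sp.Eq(R2 * 9 * L - 13 * P * 3 * L, 0)  # sum of moments about x=0 = 0
print(sp.solve([force_balance, moment_about_pin], [R1, R2]))
# {R1: 26*P/3, R2: 13*P/3}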

The model reasons step-by-step inside <think>...</think> tags and gives its final answer in \boxed{...} format.
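
Continuing the snippet above, a small helper like the following (an illustrative sketch, not part of the released code) can pull the final boxed expression out of a generated completion:

import re

def extract_boxed_answer(text: str):
    # Returns the contents of the last \boxed{...}; does not handle nested braces.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed_answer(tokenizer.decode(outputs[0], skip_special_tokens=True)))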

Training

LoRA adapters were trained using GRPO via the BeamPERL framework on a synthetic dataset of beam mechanics questions generated with the SymBeam library. The base model weights were kept frozen throughout training.
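
The actual training script is not reproduced here. As a rough orientation only, a GRPO run with frozen base weights and LoRA adapters can be set up with the trl, peft, and datasets libraries along the following lines; this is a sketch under assumed library versions, the reward functions are placeholders for the accuracy and format rewards described above, and it assumes the dataset exposes a prompt column:

from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("tphage/BeamRL-TrainData", split="train")

def accuracy_reward(completions, **kwargs):
    # Placeholder: score each completion's boxed answer against the reference solution.
    return [0.0 for _ in completions]

def format_reward(completions, **kwargs):
    # Placeholder: reward <think>...</think> reasoning and a \boxed{} answer.
    return [0.0 for _ in completions]

# Same LoRA configuration as sketched under "Model Details"; only these adapters are updated.
peft_config = LoraConfig(
    r=32, lora_alpha=128, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

training_args = GRPOConfig(
    output_dir="beamperl-grpo",
    bf16=True,
    max_prompt_length=256,
    max_completion_length=1792,
    reward_weights=[0.667, 0.333],   # accuracy / format weighting from the table above
)

trainer = GRPOTrainer(
    model="tphage/DeepSeek-R1-Distill-Qwen-1.5B",  # base model; its weights stay frozen, the adapters train
    reward_funcs=[accuracy_reward, format_reward],
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()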

Citation

@misc{hage2025beamperl,
  title={BeamPERL: Parameter-Efficient Reinforcement Learning for Verifiable Beam Mechanics Problem-Solving},
  author={Tarjei P. Hage and Markus J. Buehler},
  year={2025},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

Acknowledgements

Built upon Tina and Open R1. Dataset generation uses a custom version of SymBeam.
