How to use thetmon/c1 with PEFT:

from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "thetmon/c1")

Note that this quickstart loads the Unsloth 4-bit quantized base model; the fuller example below loads the official base model in fp16.
This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).
It contains the LoRA adapter weights only; the base model must be loaded separately.
The adapter was trained to improve structured-output accuracy (JSON / YAML / XML / TOML / CSV).
During training, loss was applied only to the final assistant output, while the intermediate reasoning (Chain-of-Thought) was masked out.
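
As a rough sketch of how this kind of loss masking is typically implemented (not this card's actual training code), the example below sets the labels of every token before the final answer to -100, the value PyTorch's cross-entropy loss ignores; the <answer> marker used as a delimiter here is hypothetical.

from transformers import AutoTokenizer

def build_masked_labels(text, tokenizer, answer_marker="<answer>"):
    """Tokenize `text` and mask everything before the final answer with -100,
    so only the final assistant output contributes to the training loss."""
    prefix, _, answer = text.rpartition(answer_marker)
    # Tokenize the prefix (prompt + CoT + marker) and the final answer separately.
    prefix_ids = tokenizer(prefix + answer_marker, add_special_tokens=False)["input_ids"]
    answer_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]
    input_ids = prefix_ids + answer_ids
    # -100 is ignored by PyTorch's cross-entropy loss, so the model is never
    # penalized on the prompt or the chain-of-thought tokens.
    labels = [-100] * len(prefix_ids) + answer_ids
    return {"input_ids": input_ids, "labels": labels}

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
example = 'Q: Convert to JSON... <think>reasoning here</think> <answer>{"ok": true}'
print(build_masked_labels(example, tokenizer))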
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "thetmon/c1"

tokenizer = AutoTokenizer.from_pretrained(base)

# Load the base model in fp16, then attach the LoRA adapter weights on top.
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
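
Continuing from the loading code above, a minimal inference call might look like the following; the prompt and generation settings are illustrative, not prescribed by this card.

# Illustrative inference: ask the adapted model for structured JSON output.
messages = [
    {"role": "user", "content": "Return the user's name and age as JSON: Alice, 30."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated answer.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

For deployment without PEFT at inference time, the adapter can also be folded into the base weights with model.merge_and_unload().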
Training data: u-10bei/structured_data_with_cot_dataset
Dataset license: MIT. Users must comply with the MIT license as well as the base model's original terms of use.
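
For reference, the training data can be inspected with the Hugging Face datasets library (the "train" split name is an assumption):

from datasets import load_dataset

# Load the structured-data CoT training set from the Hub.
ds = load_dataset("u-10bei/structured_data_with_cot_dataset", split="train")
print(ds[0])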
Base model
Qwen/Qwen3-4B-Instruct-2507