How to Run
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "akahana/en-id-finetuned-4bit"

# Load the tokenizer and the fine-tuned translation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translate an English sentence to Indonesian
text = "How are you?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
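Since the checkpoint name indicates 4-bit quantization, you can also load the model explicitly in 4 bits via bitsandbytes. The snippet below is a minimal sketch assuming a CUDA GPU and the `bitsandbytes` package are installed; the quantization settings shown are illustrative and not taken from this model's training setup.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig

# Illustrative 4-bit loading config (assumes bitsandbytes + CUDA are available)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForSeq2SeqLM.from_pretrained(
    "akahana/en-id-finetuned-4bit",
    quantization_config=bnb_config,
    device_map="auto",
)
```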
Results & Benchmarks
| Metric | Base Model | Fine-tuned Model |
|--------|------------|------------------|
| BLEU | 34.32 | 36.59 |
| CHRF | 60.81 | 62.00 |
| METEOR | 0.60 | 0.62 |
| TER ↓ | 53.42 | 50.85 |
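For reference, metrics like these can be computed with the Hugging Face `evaluate` library. The snippet below is a minimal sketch assuming you already have lists of model translations and reference translations from a held-out English-Indonesian test set; the example sentences are placeholders, and the actual evaluation data and split are not specified here.

```python
import evaluate

# Placeholder data; replace with the full test-set outputs and references
predictions = ["Apa kabar?"]
refs_single = ["Apa kabar?"]      # one reference string per prediction (METEOR)
refs_multi = [["Apa kabar?"]]     # list of references per prediction (BLEU/CHRF/TER)

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")
meteor = evaluate.load("meteor")
ter = evaluate.load("ter")

print("BLEU:", bleu.compute(predictions=predictions, references=refs_multi)["score"])
print("CHRF:", chrf.compute(predictions=predictions, references=refs_multi)["score"])
print("METEOR:", meteor.compute(predictions=predictions, references=refs_single)["meteor"])
print("TER:", ter.compute(predictions=predictions, references=refs_multi)["score"])
```

BLEU, CHRF, and TER are reported on a 0-100 scale by sacrebleu, while METEOR is reported on a 0-1 scale, which matches the values in the table above.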