# CodeT5 VB.NET → C# Translator
This model is a fine-tuned version of Salesforce/codet5-base for translating VB.NET code to C#.
## Evaluation Metrics
**BLEU Score:** 0.4506
- 1-gram precision: 0.6698
- 2-gram precision: 0.5402
- 3-gram precision: 0.4656
- 4-gram precision: 0.4132
- Brevity penalty: 0.8773
- Length ratio: 0.8843

**ROUGE Scores:**
- ROUGE-1: 0.5836
- ROUGE-2: 0.4586
- ROUGE-L: 0.5378
- ROUGE-Lsum: 0.5781
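As a sanity check, the reported BLEU score can be reconstructed from its components: BLEU is the brevity penalty times the geometric mean of the n-gram precisions, where the brevity penalty is `exp(1 - 1/length_ratio)` when the output is shorter than the reference. A minimal sketch using the numbers above:

```python
import math

# n-gram precisions and length ratio as reported above
precisions = [0.6698, 0.5402, 0.4656, 0.4132]
length_ratio = 0.8843

# Brevity penalty: exp(1 - 1/ratio) when outputs are shorter than references
bp = math.exp(1 - 1 / length_ratio)

# BLEU = brevity penalty * geometric mean of the n-gram precisions
geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
bleu = bp * geo_mean

# Both agree with the reported 0.8773 and 0.4506 to within input rounding
print(f"bp={bp:.4f} bleu={bleu:.4f}")
```

The small residual differences come from the reported values themselves being rounded to four decimal places.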
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "peterjandre/finetuned-codet5-vbnet-csharp"
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Prefix the input with the task instruction used during fine-tuning
vb_code = "Dim x As Integer = 5"
inputs = tokenizer(f"translate VB.NET to C#: {vb_code}", return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)  # cap generation length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Dataset Format
Training data was in JSONL format with two fields per line:
- `"vb_code"`: VB.NET input
- `"csharp_code"`: corresponding C# output
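A record in that schema can be sketched as follows; the VB.NET/C# pair shown here is an illustrative example, not an actual line from the dataset:

```python
import json

# Hypothetical training record following the JSONL schema described above
record = {
    "vb_code": "Dim x As Integer = 5",
    "csharp_code": "int x = 5;",
}

# Each line of the JSONL file is one JSON object
line = json.dumps(record)
print(line)

# Round-trip: parsing the line recovers both fields
parsed = json.loads(line)
print(parsed["csharp_code"])
```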
## License
MIT