# CodeBERT: A Pre-Trained Model for Programming and Natural Languages

Paper: [arXiv:2002.08155](https://arxiv.org/abs/2002.08155)
```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("claudios/codebert-base")
model = AutoModel.from_pretrained("claudios/codebert-base")
```

This is an unofficial reupload of microsoft/codebert-base in the SafeTensors format, created with transformers 4.40.1. The goal of this reupload is to keep older models that are still relevant baselines from becoming stale as HuggingFace evolves. I may also include minor corrections, such as the model max length configuration.
Original model card below:
Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).

The model is trained on bi-modal data (documents & code) from CodeSearchNet.

This model is initialized with RoBERTa-base and trained with the MLM+RTD objective (cf. the paper).
Please see the official repository for scripts that support "code search" and "code-to-document generation".
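As a rough illustration of the bi-modal setup (a minimal sketch, not the official scripts; the query and code strings below are made-up examples), you can encode a natural-language query and a code snippet as a single sequence pair and read off the contextual token embeddings:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("claudios/codebert-base")
model = AutoModel.from_pretrained("claudios/codebert-base")

# Illustrative NL query and code snippet
nl = "return the maximum of two values"
code = "def max(a, b): return a if a > b else b"

# RoBERTa-style pair encoding: <s> NL </s></s> code </s>
inputs = tokenizer(nl, code, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dim vector per token of the joint sequence
embeddings = outputs.last_hidden_state
print(embeddings.shape)  # torch.Size([1, seq_len, 768])
```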
```bibtex
@misc{feng2020codebert,
  title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
  author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou},
  year={2020},
  eprint={2002.08155},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="claudios/codebert-base")
```
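The feature-extraction pipeline returns one embedding per input token. A minimal usage sketch (the input snippet is just an example):

```python
# Extract token-level features for a code snippet (illustrative input)
features = pipe("def add(a, b): return a + b")

# Nested list shaped [batch][tokens][hidden]; hidden size is 768 for this base model
print(len(features[0]), len(features[0][0]))
```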