MGTBench: Benchmarking Machine-Generated Text Detection
Paper: arXiv:2303.14822
How to use huyen89/MGTDetectionModel with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="huyen89/MGTDetectionModel")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("huyen89/MGTDetectionModel")
model = AutoModelForSequenceClassification.from_pretrained("huyen89/MGTDetectionModel")
```
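When loading the model directly, the classifier head returns raw logits that must be converted to probabilities. A minimal, dependency-free sketch of that post-processing step, using dummy logits in place of a real model output; the label names and their order are assumptions, not taken from the model's config:

```python
import math

def softmax(logits):
    # Convert raw classifier scores into a probability distribution.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy logits standing in for model(**tokenizer(text, return_tensors="pt")).logits;
# the two-class label order below is an assumption.
logits = [-1.2, 2.3]
labels = ["human-written", "machine-generated"]

probs = softmax(logits)
pred = labels[probs.index(max(probs))]
```

With a real checkpoint, the same mapping is what `pipe(text)` performs internally before returning a label and score.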
This model detects machine-generated text. It was trained on a corpus of human-written and StableLM-generated answers to questions from the SQuAD1 dataset; the dataset can be found here.

The model was created by fine-tuning distilbert-base-uncased, following the training procedure of He et al.
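The training corpus described above pairs human and StableLM answers under a binary labeling scheme. A sketch of how such (text, label) examples might be assembled; the placeholder answers and the label convention (0 = human, 1 = machine) are assumptions, not taken from the released dataset:

```python
# Placeholder answers standing in for the SQuAD-derived corpus.
human_answers = ["Paris is the capital of France."]          # human-written
machine_answers = ["The capital city of France is Paris."]   # StableLM-generated

# Assumed label convention: 0 = human-written, 1 = machine-generated.
examples = (
    [{"text": t, "label": 0} for t in human_answers]
    + [{"text": t, "label": 1} for t in machine_answers]
)
```

A dataset in this shape can be tokenized with the tokenizer above and fed to a standard sequence-classification fine-tuning loop.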