How to use stas/tiny-m2m_100 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("stas/tiny-m2m_100")
model = AutoModelForSeq2SeqLM.from_pretrained("stas/tiny-m2m_100")
```
Tiny M2M100 model
This is a tiny model that is used in the transformers test suite. It doesn't do anything useful beyond functional testing.
Do not try to use it for anything that requires quality.
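A minimal smoke test along those lines might look like the sketch below. It loads the tiny checkpoint and runs a translation-style `generate` call using the standard M2M100 language-code API (`src_lang`, `get_lang_id`); the assumption is that the tiny tokenizer preserves those language codes, and the decoded output is meaningless by design.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("stas/tiny-m2m_100")
model = AutoModelForSeq2SeqLM.from_pretrained("stas/tiny-m2m_100")

# Exercise the full tokenize -> generate -> decode path.
# The result is gibberish; only the plumbing is being tested.
tokenizer.src_lang = "en"
inputs = tokenizer("Hello world", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),
    max_new_tokens=10,
)
text = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print(text)
```

Because the checkpoint is only a few megabytes, a test like this runs in seconds, which is the whole point of the model.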
The model is only about 4MB in size.
You can see how it was created here.
If you're looking for the real model, please go to https://huggingface.co/facebook/m2m100_418M.