Instructions to use microsoft/phi-2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use microsoft/phi-2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="microsoft/phi-2")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
```

- Inference Providers
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use microsoft/phi-2 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "microsoft/phi-2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/phi-2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/microsoft/phi-2
```
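Because the vLLM server exposes an OpenAI-compatible API, the curl call above can also be made from Python. A minimal sketch using only the standard library (it assumes the `vllm serve` command above is already running on `localhost:8000`; the helper names are illustrative, not part of any API):

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/completions"  # assumes the vLLM server above is running

def build_payload(prompt: str, max_tokens: int = 512, temperature: float = 0.5) -> dict:
    # Mirrors the JSON body of the curl example: an OpenAI-style completions request.
    return {
        "model": "microsoft/phi-2",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt: str) -> str:
    # POST the request and return the first completion's text.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

# With the server running:
# print(complete("Once upon a time,"))
```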
  - SGLang
How to use microsoft/phi-2 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "microsoft/phi-2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/phi-2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "microsoft/phi-2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/phi-2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

  - Docker Model Runner
How to use microsoft/phi-2 with Docker Model Runner:
```shell
docker model run hf.co/microsoft/phi-2
```
I encountered an error when attempting to train phi
I encountered an error when attempting to train phi. The error occurred at line 64 with the message:
```
File "/home/phelixzhen/anaconda3/envs/LM/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
TypeError: PhiModel.forward() got an unexpected keyword argument 'labels'
```
It appears that the model's `forward()` does not accept a `labels` argument, yet one is being passed to it. I have been trying to resolve this for a long time without success, and I would really appreciate some help. Below is my code:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

model_name_or_path = "./"
model = AutoModel.from_pretrained(model_name_or_path)
config = AutoConfig.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, model_max_length=2048)

import os
from datasets import load_dataset

directory_path = "/mnt/n/test"
ds = load_dataset(directory_path, split="train")
ds = ds.remove_columns(['meta'])
tokenizer.pad_token = tokenizer.eos_token

import torch

def encode_example(example):
    inputs = tokenizer.encode_plus(
        example['text'],
        truncation=True,
        padding="max_length",
        max_length=2048,
        return_tensors="pt",
        return_attention_mask=False,
    )
    return inputs

dsm = ds.map(encode_example)

from transformers import TrainingArguments, Trainer
from transformers import DefaultDataCollator
from transformers import DataCollatorForLanguageModeling

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False
)

print(dsm[1])

training_args = TrainingArguments(
    output_dir="/mnt/n/save",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=2e-4,
    save_steps=300,
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dsm,
    data_collator=data_collator,
)

trainer.train()
```