Instructions for using microsoft/phi-2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use microsoft/phi-2 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="microsoft/phi-2")
print(pipe("Once upon a time,", max_new_tokens=50)[0]["generated_text"])

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use microsoft/phi-2 with vLLM:
Install via pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "microsoft/phi-2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/phi-2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
Use Docker
docker model run hf.co/microsoft/phi-2
- SGLang
How to use microsoft/phi-2 with SGLang:
Install via pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "microsoft/phi-2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/phi-2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "microsoft/phi-2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/phi-2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
- Docker Model Runner
How to use microsoft/phi-2 with Docker Model Runner:
docker model run hf.co/microsoft/phi-2
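Both the vLLM and SGLang servers above expose an OpenAI-compatible API, so they can also be called from Python instead of curl. A minimal sketch using the openai client package (an assumption; any HTTP client works), pointed at the vLLM server on port 8000; for SGLang, swap in port 30000:

# Minimal sketch: query the OpenAI-compatible server started above.
# Assumes `pip install openai` and a vLLM server on localhost:8000
# (use port 30000 for the SGLang server instead).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # local servers typically accept any placeholder key
)

response = client.completions.create(
    model="microsoft/phi-2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)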
What is the best way to run inference with a LoRA fine-tuned model in the PEFT approach?
Here is the SFTTrainer setup I used for fine-tuning Mistral:
trainer = SFTTrainer(
    model=peft_model,
    train_dataset=data,
    peft_config=peft_config,
    dataset_text_field=" column name",
    max_seq_length=3000,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=packing,
)
trainer.train()
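The peft_config passed above isn't shown in the post; for context, it would typically be a LoraConfig along these lines (the values here are illustrative assumptions, not the poster's actual settings):

# Hypothetical sketch of the peft_config used above (not shown in the
# original post); hyperparameter values are illustrative only.
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,            # LoRA rank
    lora_alpha=32,   # scaling factor
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)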
I found several different mechanisms for running inference with the fine-tuned model after PEFT-based LoRA fine-tuning.
Method - 1
Save the adapter after training completes, then merge it with the base model and use the merged model for inference:
trainer.model.save_pretrained("new_adapter_path")

import torch
from peft import PeftModel

finetuned_model = PeftModel.from_pretrained(
    base_model,  # the original (non-fine-tuned) base model
    "new_adapter_path",
    torch_dtype=torch.float16,
    is_trainable=False,
    device_map="auto",
)
finetuned_model = finetuned_model.merge_and_unload()
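Once merged, the model behaves like a plain Transformers model, so inference is standard generate-and-decode. A minimal sketch, reusing the tokenizer from training; the prompt is illustrative:

# Minimal inference sketch after merging (prompt is illustrative).
inputs = tokenizer("Once upon a time,", return_tensors="pt").to(finetuned_model.device)
outputs = finetuned_model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))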
Method - 2
Save checkpoints during training, then load the checkpoint with the lowest loss:
from peft import PeftModel

finetuned_model = PeftModel.from_pretrained(
    base_model,
    "least loss checkpoint path",  # path to the lowest-loss checkpoint
    torch_dtype=torch.float16,
    is_trainable=False,
    device_map="auto",
)
finetuned_model = finetuned_model.merge_and_unload()
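To pick the lowest-loss checkpoint programmatically rather than by eye, one option is to read each checkpoint's trainer_state.json, which the Trainer writes alongside the weights. A hedged sketch, assuming the standard output_dir/checkpoint-* layout:

# Hedged sketch: pick the checkpoint with the lowest recorded training loss.
# Assumes standard Trainer output: output_dir/checkpoint-*/trainer_state.json.
import json, glob

def lowest_loss_checkpoint(output_dir):
    best_path, best_loss = None, float("inf")
    for ckpt in glob.glob(f"{output_dir}/checkpoint-*"):
        with open(f"{ckpt}/trainer_state.json") as f:
            state = json.load(f)
        # log_history holds dicts like {"loss": ..., "step": ...};
        # the last entry reflects the loss when this checkpoint was saved.
        losses = [e["loss"] for e in state["log_history"] if "loss" in e]
        if losses and losses[-1] < best_loss:
            best_loss, best_path = losses[-1], ckpt
    return best_path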
Method - 3
The same approach, but with the AutoPeftModelForCausalLM class (which loads the base model automatically from the adapter config):
from peft import AutoPeftModelForCausalLM

finetuned_model = AutoPeftModelForCausalLM.from_pretrained(
    "output directory checkpoint path",
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="cuda",
)
finetuned_model = finetuned_model.merge_and_unload()
Method - 4
Point AutoPeftModelForCausalLM at the output folder, without specifying a particular checkpoint:
instruction_tuned_model = AutoPeftModelForCausalLM.from_pretrained(
    training_args.output_dir,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
finetuned_model = instruction_tuned_model.merge_and_unload()
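One caveat: loading from training_args.output_dir like this assumes the final adapter was actually saved to that folder, for example with the trainer's save method:

# Assumption: the final adapter must have been written to output_dir first,
# e.g. by saving explicitly after training.
trainer.save_model(training_args.output_dir)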
Method - 5
Any of the above methods, but without the merge step:
#finetuned_model = finetuned_model.merge_and_unload()
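If you skip the merge, the PeftModel wraps the base model and applies the LoRA weights on the fly, which keeps the adapter swappable. A hedged sketch, reusing inputs from the Method 1 example above:

# Hedged sketch: an unmerged PeftModel generates exactly like a merged one,
# but the adapter can still be disabled to compare against the base model.
outputs = finetuned_model.generate(**inputs, max_new_tokens=100)

with finetuned_model.disable_adapter():  # temporarily bypass the LoRA weights
    base_outputs = finetuned_model.generate(**inputs, max_new_tokens=100)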
Which method should I actually follow for inference, and when should I prefer one method over another?