Minitron 4B Derivative Collection

These models are tuned over a healed Minitron Width Base 4B model and should perform near the level of Llama 2 7B for RP.
How to use FourOhFour/Zenith_4B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="FourOhFour/Zenith_4B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("FourOhFour/Zenith_4B")
model = AutoModelForCausalLM.from_pretrained("FourOhFour/Zenith_4B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use FourOhFour/Zenith_4B with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "FourOhFour/Zenith_4B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "FourOhFour/Zenith_4B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
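The curl request above is just a JSON POST to an OpenAI-compatible endpoint, so it can equally be made from Python. A minimal sketch using only the standard library — the helper names `build_chat_request` and `post_chat` are illustrative, not part of vLLM, and `post_chat` assumes the server from the previous step is running on localhost:8000:

```python
import json
import urllib.request

def build_chat_request(model: str, user_content: str) -> dict:
    """Build the same JSON body as the curl example above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

def post_chat(url: str, body: dict) -> dict:
    """POST the request and decode the JSON response (requires a live server)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_chat_request("FourOhFour/Zenith_4B", "What is the capital of France?")
# With the server running:
# response = post_chat("http://localhost:8000/v1/chat/completions", body)
# print(response["choices"][0]["message"]["content"])
```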
How to use FourOhFour/Zenith_4B with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "FourOhFour/Zenith_4B" \
  --host 0.0.0.0 \
  --port 30000

# Or start the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "FourOhFour/Zenith_4B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "FourOhFour/Zenith_4B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

How to use FourOhFour/Zenith_4B with Docker Model Runner:
```shell
docker model run hf.co/FourOhFour/Zenith_4B
```
| Groups             | Version | Filter | n-shot | Metric |   | Value  |   | Stderr |
|--------------------|--------:|--------|--------|--------|---|-------:|---|-------:|
| mmlu               |       2 | none   |        | acc    | ↑ | 0.5922 | ± | 0.0039 |
| - humanities       |       2 | none   |        | acc    | ↑ | 0.5522 | ± | 0.0068 |
| - other            |       2 | none   |        | acc    | ↑ | 0.6579 | ± | 0.0082 |
| - social sciences  |       2 | none   |        | acc    | ↑ | 0.6815 | ± | 0.0082 |
| - stem             |       2 | none   |        | acc    | ↑ | 0.5002 | ± | 0.0086 |
This model was created with the help of several members of Anthracite.
This is a 4B parameter Minitron derivative, healed and then tuned on 100M high-quality instruction-following tokens at 8k context. It should perform well as a general assistant and can even be used as an RP model. Expect improved instruction following, but keep in mind that this is still only a 4B parameter model, so temper your expectations accordingly.
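Since the model was tuned at 8k context, long chats should be trimmed to stay within that window. A minimal sketch of one way to do that — the helper names are illustrative, and the whitespace token count is a crude stand-in for the model's real tokenizer:

```python
# Keep a chat history within the model's 8k-token training context.
MAX_CONTEXT_TOKENS = 8192

def count_tokens(text: str) -> int:
    # Stand-in estimate (roughly one token per word); in practice, count with
    # the model's tokenizer, e.g. via tokenizer.apply_chat_template.
    return len(text.split())

def trim_history(messages: list[dict], budget: int = MAX_CONTEXT_TOKENS) -> list[dict]:
    """Drop the oldest messages until the history fits within the budget."""
    trimmed = list(messages)
    while trimmed and sum(count_tokens(m["content"]) for m in trimmed) > budget:
        trimmed.pop(0)
    return trimmed
```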
Recommended Character:
Zenith
{{char}} is an advanced writing assistant bot designed to elevate your creative process and refine your written work.
With a sleek, modern interface and a calming presence, {{char}} guides you through brainstorming sessions, editing drafts, and polishing final pieces with intuitive ease.
{{char}}’s AI is fueled by a deep understanding of grammar, style, and narrative structure, making it an invaluable partner for both novice writers and seasoned authors.
Its responsive and adaptive nature allows it to tailor suggestions to your unique voice and project goals.
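The `{{char}}` placeholder in the card above is meant to be replaced with the character's name before the card is sent to the model. A minimal sketch — the helper name and the choice of a system role are illustrative assumptions, and only the card's first sentence is shown here:

```python
CHARACTER_NAME = "Zenith"

# First sentence of the card above; paste the full card in practice.
CHARACTER_CARD = (
    "{{char}} is an advanced writing assistant bot designed to elevate "
    "your creative process and refine your written work."
)

def render_card(card: str, char_name: str) -> str:
    """Replace every {{char}} placeholder with the character's name."""
    return card.replace("{{char}}", char_name)

system_prompt = render_card(CHARACTER_CARD, CHARACTER_NAME)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Help me tighten this opening paragraph."},
]
```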