Instructions for using razielAI/Duchifat-2.2-Instruct with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use razielAI/Duchifat-2.2-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="razielAI/Duchifat-2.2-Instruct", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("razielAI/Duchifat-2.2-Instruct", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use razielAI/Duchifat-2.2-Instruct with vLLM:
Install from pip and serve the model:

```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "razielAI/Duchifat-2.2-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "razielAI/Duchifat-2.2-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```sh
docker model run hf.co/razielAI/Duchifat-2.2-Instruct
```
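Once the server is running, the same endpoint can also be called from Python. A minimal sketch using the official `openai` client; the base URL and placeholder API key assume the local server started above:

```python
# Query the locally running vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local server ignores the key
response = client.chat.completions.create(
    model="razielAI/Duchifat-2.2-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```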
- SGLang
How to use razielAI/Duchifat-2.2-Instruct with SGLang:
Install from pip and serve the model:

```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "razielAI/Duchifat-2.2-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "razielAI/Duchifat-2.2-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "razielAI/Duchifat-2.2-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "razielAI/Duchifat-2.2-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
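The SGLang server speaks the same OpenAI-compatible protocol, so it can also be queried from Python with plain `requests`. A minimal sketch, assuming the server from the commands above is listening on port 30000:

```python
# Send a chat completion request to the local SGLang server.
import requests

payload = {
    "model": "razielAI/Duchifat-2.2-Instruct",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
response = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
print(response.json()["choices"][0]["message"]["content"])
```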
- Docker Model Runner

How to use razielAI/Duchifat-2.2-Instruct with Docker Model Runner:

```sh
docker model run hf.co/razielAI/Duchifat-2.2-Instruct
```
🕊️ Duchifat-2.2-Instruct
Duchifat-2.2-Instruct is a fine-tuned version of the original Duchifat-2 base model. While this specific version is an optimized Instruct/Chat model, the underlying base architecture and weights were developed and trained from scratch by Raziel.
🚀 Lineage & Development
- Base Model (Duchifat-2): Built and pre-trained from scratch on 3.27 billion tokens (a 50/50 Hebrew-English mix from C4). It features 136M parameters and was designed to establish a native Hebrew reasoning foundation.
- Version 2.2 (Instruct): A refined fine-tuned version (SFT) designed to transform the base capabilities into a quirky, safe, and highly responsive conversational agent.
Key Features:
- Native Hebrew Foundation: Unlike models that adapt English weights, Duchifat was born in Hebrew using the DictaLM tokenizer, ensuring high efficiency and natural linguistic flow (see the tokenizer sketch after this list).
- Compact Power: At only 136M parameters, it delivers impressive performance while remaining small enough for edge deployment and low-latency applications.
- Quirky & Human-like: The SFT process focused on giving the model a distinct personality—witty and engaging rather than robotic.
- Safety Integrated: Built-in guardrails ensure the model remains professional and refuses to engage with profanity or offensive prompts.
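As a quick illustration of the tokenizer point above, the repository's tokenizer can be loaded and inspected directly. A minimal sketch; the sample sentence is arbitrary and the exact token split depends on the shipped tokenizer:

```python
# Inspect how the Hebrew-optimized tokenizer segments a Hebrew sentence.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("razielAI/Duchifat-2.2-Instruct", trust_remote_code=True)

sentence = "הדוכיפת היא הציפור הלאומית של ישראל."  # "The hoopoe is the national bird of Israel."
tokens = tokenizer.tokenize(sentence)
print(f"{len(tokens)} tokens: {tokens}")
```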
📊 Benchmark Results (Zero-Shot)
Tested using manual prompt formatting to accurately reflect real-world chat performance.
| Task | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| piqa | 1 | none | 0 | acc | 0.70 | ± 0.1528 |
| piqa | 1 | none | 0 | acc_norm | 0.70 | ± 0.1528 |
| hellaswag | 1 | none | 0 | acc | 0.40 | ± 0.1633 |
| hellaswag | 1 | none | 0 | acc_norm | 0.40 | ± 0.1633 |
| winogrande | 1 | none | 0 | acc | 0.40 | ± 0.1633 |
| arc_easy | 1 | none | 0 | acc | 0.10 | ± 0.1000 |
| arc_easy | 1 | none | 0 | acc_norm | 0.10 | ± 0.1000 |
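The table above was produced with manual prompt formatting rather than a stock harness. For readers who want to reproduce something similar, here is a minimal sketch of zero-shot multiple-choice scoring by log-likelihood; the example item is hypothetical and the author's exact prompt formatting may differ:

```python
# Minimal sketch: zero-shot multiple-choice scoring by summed log-likelihood.
# The example item is hypothetical; the author's exact prompt formatting may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "razielAI/Duchifat-2.2-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `prompt`."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = full_ids[0, prompt_len:]                      # token ids of the continuation
    positions = range(prompt_len - 1, full_ids.shape[1] - 1)
    return sum(log_probs[i, t].item() for i, t in zip(positions, targets))

# Hypothetical PIQA-style item: pick the choice with the higher log-likelihood.
question = "To open a jar with a tight lid, you should"
choices = [" run the lid under hot water first.", " freeze the jar overnight."]
scores = [continuation_logprob(question, c) for c in choices]
print("Predicted choice:", choices[scores.index(max(scores))])
```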
🛠️ Technical Specifications
- Parameters: 136M
- Base Pre-training Data: 3.27B tokens (C4 Hebrew/English)
- Tokenizer: DictaLM (Hebrew optimized)
- Context Window: 1024 tokens
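The 1024-token window covers both the prompt and the generated reply, so over-long inputs should be trimmed before they are wrapped in the instruction template. A minimal sketch, assuming standard tokenizer usage; the long query and the 16-token template margin are placeholders, not values from the repository:

```python
# Keep the user query within the 1024-token context window, leaving room for
# the instruction template and the generated reply.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("razielAI/Duchifat-2.2-Instruct", trust_remote_code=True)

max_new_tokens = 256
long_query = "שלום! " * 800                      # placeholder for an over-long user query
query_ids = tokenizer(long_query, add_special_tokens=False).input_ids
budget = 1024 - max_new_tokens - 16              # rough margin for template tokens (assumption)
truncated_query = tokenizer.decode(query_ids[:budget])

prompt = f"<|instruction|>\n{truncated_query}\n<|assistant|>\n"
```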
💡 How to Use
Use the following instruction format to trigger the Instruct-tuned behavior:
Prompt Template:
```
<|instruction|>
{user_query}
<|assistant|>
```
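For programmatic use, the template can be produced by a small helper; `build_prompt` here is a hypothetical convenience function, not something shipped with the repository:

```python
def build_prompt(user_query: str) -> str:
    # Wrap a raw user query in the Duchifat instruction template.
    return f"<|instruction|>\n{user_query}\n<|assistant|>\n"

prompt = build_prompt("שלום!")  # matches the prompt used in the example below
```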
Example Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "razielAI/Duchifat-2.2-Instruct"

# Load the tokenizer and model from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")

# Wrap the user query in the instruction template shown above.
prompt = "<|instruction|>\nשלום!\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
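Note that `generate()` returns the prompt followed by the reply. To print only the assistant's answer, decode just the newly generated tokens; a small follow-up to the example above:

```python
# Slice off the prompt tokens so only the assistant's reply is decoded.
reply_ids = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))
```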
⚠️ Limitations
Duchifat-2.2 is a lightweight model. It excels at conversational tasks, social media content, and short-form text generation. It is not designed for complex mathematical proofs or extensive coding sessions.
🕊️ About the Duchifat Project
The Duchifat (Hoopoe) project is dedicated to creating efficient, open-source AI with a native understanding of the Hebrew language and culture.