Instructions for using PygmalionAI/Eleusis-12B with libraries, inference servers, and local apps.
Transformers
How to use PygmalionAI/Eleusis-12B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="PygmalionAI/Eleusis-12B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PygmalionAI/Eleusis-12B")
model = AutoModelForCausalLM.from_pretrained("PygmalionAI/Eleusis-12B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
vLLM
How to use PygmalionAI/Eleusis-12B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PygmalionAI/Eleusis-12B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PygmalionAI/Eleusis-12B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
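The same curl request can be issued from Python's standard library. This is a minimal sketch, not part of the model card: it assumes the vLLM server above is running locally on port 8000, and falls back to printing the payload if no server is reachable.

```python
# Minimal OpenAI-compatible client call to the vLLM server above.
# Assumes a server at localhost:8000 (see the serve command); hypothetical setup.
import json
from urllib.request import Request, urlopen
from urllib.error import URLError

def build_chat_request(model: str, user_message: str) -> dict:
    # Mirrors the JSON body of the curl example above.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("PygmalionAI/Eleusis-12B", "What is the capital of France?")

try:
    req = Request(
        "http://localhost:8000/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req, timeout=10) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except URLError:
    # No server running; the payload above is still valid to inspect.
    print(json.dumps(payload, indent=2))
```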
SGLang
How to use PygmalionAI/Eleusis-12B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PygmalionAI/Eleusis-12B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PygmalionAI/Eleusis-12B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "PygmalionAI/Eleusis-12B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PygmalionAI/Eleusis-12B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Docker Model Runner
How to use PygmalionAI/Eleusis-12B with Docker Model Runner:
```shell
docker model run hf.co/PygmalionAI/Eleusis-12B
```
Eleusis-12B
An interesting experimental model...
Model Details
Alongside the release of Pygmalion-3, we present Eleusis, an additional roleplay model based on Mistral's Nemo Base that has a distinct voice among its peers. Though it was meant to be a test run for further experiments, the model was received so warmly that we felt it was right to release it publicly.
We release the weights of Eleusis under the Apache 2.0 license, ensuring a free and open ecosystem for it to flourish under.
Prompting
Like its component models, Eleusis utilizes the standard ChatML format.
```
<|im_start|>system
Your responses must be detailed, creative, immersive, and drive the scenario forward.<|im_end|>
<|im_start|>user
{{user}}: Good evening!<|im_end|>
<|im_start|>assistant
{{char}}:
```
Note that this system prompt is only an example; experimentation is encouraged for your use case. {{user}} and {{char}} are placeholders and should be replaced with the user's name and the name of the character the model is to roleplay as.
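As a minimal sketch of the substitution described above (the function and names are illustrative, not part of the model card), the placeholders can be filled and the ChatML turns assembled like so:

```python
# Illustrative sketch: fill the {{user}} and {{char}} placeholders and
# assemble the ChatML prompt shown above. Names are hypothetical examples.
SYSTEM_PROMPT = (
    "Your responses must be detailed, creative, immersive, "
    "and drive the scenario forward."
)

def build_chatml_prompt(user_name: str, char_name: str, user_message: str) -> str:
    # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers;
    # the final assistant turn is left open for the model to complete.
    return (
        f"<|im_start|>system\n{SYSTEM_PROMPT}<|im_end|>\n"
        f"<|im_start|>user\n{user_name}: {user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n{char_name}:"
    )

prompt = build_chatml_prompt("Alice", "Eleusis", "Good evening!")
print(prompt)
```

In practice, `tokenizer.apply_chat_template` (as in the Transformers example above) handles this formatting for you; the sketch only makes the resulting string explicit.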
Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was not fine-tuned to be safe and harmless: the base model and this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
Acknowledgements
A warm thank you goes to the creators of the models we used to construct Eleusis, and a huge shout-out once more to Pyg's wonderful community, which has been with us every step of the way.
Model tree for PygmalionAI/Eleusis-12B
Base model: mistralai/Mistral-Nemo-Base-2407