Instructions for using vanta-research/atom-80b with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use vanta-research/atom-80b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="vanta-research/atom-80b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vanta-research/atom-80b")
model = AutoModelForCausalLM.from_pretrained("vanta-research/atom-80b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use vanta-research/atom-80b with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "vanta-research/atom-80b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vanta-research/atom-80b",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'

Use Docker
docker model run hf.co/vanta-research/atom-80b
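Both vLLM and the SGLang server described below expose the same OpenAI-compatible API, so you can also call them from Python. A minimal sketch, assuming the openai client package (pip install openai) and a vLLM server on its default port 8000; for SGLang, change the port to 30000:

from openai import OpenAI

# Point the client at the local server; the key is required by the client but unused.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="vanta-research/atom-80b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)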
- SGLang
How to use vanta-research/atom-80b with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "vanta-research/atom-80b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vanta-research/atom-80b",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "vanta-research/atom-80b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vanta-research/atom-80b",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'

- Docker Model Runner
How to use vanta-research/atom-80b with Docker Model Runner:
docker model run hf.co/vanta-research/atom-80b
VANTA Research
Independent AI research lab building safe, resilient language models optimized for human-AI collaboration
Atom-80B
Overview
Atom-80B is a state-of-the-art language model fine-tuned from the Qwen3 80B Next base model, optimized for high-fidelity reasoning, collaborative interaction, and cognitive extension. Atom-80B is designed to be friendly, enthusiastic, and collaboration-first.
This model continues Project Atom, VANTA Research's effort to scale the Atom persona from 4B to 400B+ parameters; it is the fifth release in the series.
Key strengths:
- Complex, multi-step reasoning
- Collaborative task execution and agentic workflows
- Stable, flavorful persona alignment
- Optimized inference efficiency
Training and Data
Base Model
- Qwen3 80B Next: A leading foundation model with robust multilingual and coding capabilities.
Fine-Tuning Datasets
Atom-80B was fine-tuned on the same high-quality datasets as the smaller Atom variants, including:
- Collaborative exploration and brainstorming
- Research synthesis and question formulation
- Technical explanation at varying complexity levels
- Lateral thinking and creative problem-solving
- Empathetic and supportive dialogue patterns
Intended Use
Primary Applications
- Collaborative Brainstorming: Generating diverse ideas and building iteratively on user suggestions
- Research Assistance: Synthesizing information, identifying key arguments, and formulating research questions
- Technical Explanation: Simplifying complex concepts across difficulty levels, including ELI5 (see the sketch after this list)
- Code Discussion: Exploring implementation approaches, debugging strategies, and architectural decisions
- Creative Problem-Solving: Encouraging unconventional approaches and lateral thinking
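The explanation level can be steered with a system message. A minimal sketch using the Transformers pipeline; the system prompt wording here is illustrative rather than part of the model's training setup, and it assumes the chat template accepts a system role, as Qwen-family templates generally do:

from transformers import pipeline

pipe = pipeline("text-generation", model="vanta-research/atom-80b")
messages = [
    # Hypothetical system prompt requesting an ELI5-level answer.
    {"role": "system", "content": "Explain concepts at an ELI5 level, using everyday analogies."},
    {"role": "user", "content": "How does public-key cryptography work?"},
]
# The chat pipeline returns the full conversation; the last message is the model's reply.
print(pipe(messages, max_new_tokens=256)[0]["generated_text"][-1]["content"])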
Out-of-Scope Use
This model should not be used for:
- High-stakes decision-making without human oversight
- Medical, legal, or financial advice
- Generation of harmful, biased, or misleading content
- Applications requiring guaranteed factual accuracy
Usage
Installation
pip install transformers accelerate torch

Quick Start
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/atom-80b",
    torch_dtype="auto",
    device_map="auto",  # place the model on available GPUs automatically
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/atom-80b")

inputs = tokenizer("Explain quantum computing like I'm 10.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
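At 80B parameters, full-precision weights will not fit on a single consumer GPU. One option is 4-bit quantized loading; a minimal sketch, assuming the bitsandbytes package is installed and your transformers version supports quantized loading for this architecture (actual memory requirements depend on your hardware):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize weights to 4-bit on load, roughly quartering memory use vs fp16.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype="bfloat16")

model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/atom-80b",
    quantization_config=quant_config,
    device_map="auto",  # shard across available GPUs
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/atom-80b")

messages = [{"role": "user", "content": "Explain quantum computing like I'm 10."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))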
Ethical Considerations
This model is designed to support exploration and learning, not to replace human judgment. Users should:
- Verify factual claims against authoritative sources
- Apply critical thinking to generated suggestions
- Recognize the model's limitations in high-stakes scenarios
- Be mindful of potential biases in outputs
- Use responsibly in accordance with applicable laws and regulations
Citation
@misc{atom-80b,
  title={Atom-80B: A Collaborative Thought Partner},
  author={VANTA Research},
  year={2026},
  howpublished={https://huggingface.co/vanta-research/atom-80b}
}
Contact
- Organization: hello@vantaresearch.xyz
- Engineering/Design: tyler@vantaresearch.xyz
