Tags: Question Answering · Transformers · Safetensors · English · qwen3 · text-generation · Pathology · Agent · text-generation-inference
Instructions to use WenchuanZhang/Agentic-Router with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use WenchuanZhang/Agentic-Router with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="WenchuanZhang/Agentic-Router")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("WenchuanZhang/Agentic-Router")
model = AutoModelForCausalLM.from_pretrained("WenchuanZhang/Agentic-Router")
```
- Notebooks
- Google Colab
- Kaggle
File size: 214 Bytes · commit e489a76

```json
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "temperature": 0.6,
  "top_k": 20,
  "top_p": 0.95,
  "transformers_version": "4.53.2"
}
```
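The config above enables sampling (`do_sample: true`) with temperature 0.6, top-k 20, and top-p 0.95. As a rough illustration of what these three settings do during decoding, here is a minimal pure-Python sketch of the filtering steps (an assumption-laden toy, not the actual Transformers implementation; Transformers applies the warpers on GPU tensors with the same temperature → top-k → top-p order):

```python
import math
import random

def filter_and_sample(logits, temperature=0.6, top_k=20, top_p=0.95, rng=None):
    """Toy temperature/top-k/top-p (nucleus) sampling over a list of logits.

    `logits` is a plain list of floats, one entry per token id. Returns one
    sampled token id. Mirrors the order Transformers applies its logits
    warpers: temperature first, then top-k, then top-p.
    """
    rng = rng or random.Random(0)
    # 1. Temperature: divide logits by T, then softmax (lower T = sharper).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # 2. Top-k: keep only the k most probable tokens.
    probs.sort(key=lambda ip: ip[1], reverse=True)
    probs = probs[:top_k]
    # 3. Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalise over the surviving tokens and sample one id.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With one dominant logit the nucleus collapses to a single token, so the pick is deterministic; with flat logits any of the candidates may be drawn.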