Instructions to use LindaChiu/for_web_UI with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use LindaChiu/for_web_UI with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LindaChiu/for_web_UI",
    filename="Meta-Llama-3-8B-Instruct.Q5_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use LindaChiu/for_web_UI with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LindaChiu/for_web_UI:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf LindaChiu/for_web_UI:Q5_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LindaChiu/for_web_UI:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf LindaChiu/for_web_UI:Q5_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf LindaChiu/for_web_UI:Q5_K_M

# Run inference directly in the terminal:
./llama-cli -hf LindaChiu/for_web_UI:Q5_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf LindaChiu/for_web_UI:Q5_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf LindaChiu/for_web_UI:Q5_K_M
```
Use Docker
```shell
docker model run hf.co/LindaChiu/for_web_UI:Q5_K_M
```
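Whichever install route you choose, `llama-server` exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal Python sketch of calling it, assuming the default host/port and the standard `/v1/chat/completions` route; `SERVER_URL` and the helper names are illustrative, not part of llama.cpp:

```python
import json
import urllib.request

# Assumes llama-server is already running locally on its default port (8080).
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the assistant's reply."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# ask("What is the capital of France?")  # requires the server to be running
request_body = build_chat_request("What is the capital of France?")
```

Because the API is OpenAI-compatible, any OpenAI-style client pointed at `http://localhost:8080/v1` should work the same way.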
- LM Studio
- Jan
- vLLM
How to use LindaChiu/for_web_UI with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LindaChiu/for_web_UI"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LindaChiu/for_web_UI",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/LindaChiu/for_web_UI:Q5_K_M
```
- Ollama
How to use LindaChiu/for_web_UI with Ollama:
```shell
ollama run hf.co/LindaChiu/for_web_UI:Q5_K_M
```
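Once pulled, the model can also be called programmatically: the Ollama daemon serves a local HTTP API, by default on port 11434. A hedged sketch against Ollama's native `/api/chat` endpoint, assuming the default port; the helper names here are illustrative:

```python
import json
import urllib.request

# Assumes the Ollama daemon is running locally on its default port (11434).
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "hf.co/LindaChiu/for_web_UI:Q5_K_M"

def build_chat_body(prompt: str) -> dict:
    """Build the request body for Ollama's native /api/chat endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return a single JSON object instead of a stream
    }

def chat(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_chat_body(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# chat("What is the capital of France?")  # requires Ollama to be running
request_body = build_chat_body("What is the capital of France?")
```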
- Unsloth Studio
How to use LindaChiu/for_web_UI with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for LindaChiu/for_web_UI to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for LindaChiu/for_web_UI to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for LindaChiu/for_web_UI to start chatting
```
- Docker Model Runner
How to use LindaChiu/for_web_UI with Docker Model Runner:
```shell
docker model run hf.co/LindaChiu/for_web_UI:Q5_K_M
```
- Lemonade
How to use LindaChiu/for_web_UI with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull LindaChiu/for_web_UI:Q5_K_M
```
Run and chat with the model
```shell
lemonade run user.for_web_UI-Q5_K_M
```
List all available models
```shell
lemonade list
```
Meta-Llama-3-8B-Instruct-GGUF
- This is a GGUF quantized version of meta-llama/Meta-Llama-3-8B-Instruct, created using llama.cpp
- Re-uploaded with a new end token
Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
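To illustrate why GQA helps inference: each key/value head is shared by a group of query heads, so the KV cache shrinks by the same factor. A small sketch of the arithmetic, using the commonly reported Llama 3 8B attention layout (32 query heads, 8 KV heads) as an assumed example; these numbers are not stated in this card:

```python
def heads_per_kv_group(n_query_heads: int, n_kv_heads: int) -> int:
    """Number of query heads that share each key/value head under GQA."""
    assert n_query_heads % n_kv_heads == 0
    return n_query_heads // n_kv_heads

# Assumed layout (commonly reported for Llama 3 8B):
n_query_heads = 32
n_kv_heads = 8

group_size = heads_per_kv_group(n_query_heads, n_kv_heads)
# KV cache size relative to standard multi-head attention (1.0 = no saving):
kv_cache_fraction = n_kv_heads / n_query_heads
```

Under this assumed layout, four query heads share each KV head, so the KV cache holds a quarter of the key/value tensors that full multi-head attention would need at the same context length.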
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: https://llama.meta.com/llama3/license
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Model tree for LindaChiu/for_web_UI
Base model
meta-llama/Meta-Llama-3-8B-Instruct