These models can run on an NVIDIA GeForce MX150 with relatively fast performance; load them in FP16 if they aren't already in BF16 or FP16.
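A minimal sketch of the half-precision tip above, using a NumPy array as a stand-in for model weights to show the memory saving (loading an actual checkpoint in FP16, e.g. with the `torch_dtype=torch.float16` argument to `from_pretrained` in Transformers, follows the same idea):

```python
import numpy as np

# A small weight matrix standing in for model parameters (illustrative only).
weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)

# Casting to FP16 halves the memory footprint, which matters on a
# low-memory GPU such as the GeForce MX150.
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes, weights_fp16.nbytes)  # FP16 uses half the bytes
```

FP16 trades precision for memory and bandwidth; for inference-only workloads on small GPUs the accuracy loss is usually acceptable.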
Caio Silva De Oliveira (CaioXapelaum)
AI & ML interests: None yet
Organizations: None yet
Spaces (8):
- HF Inference Models (Sleeping, 1): Missing /v1/models endpoint for serverless inference API
- Inference Code (Sleeping, 2)
- Curl Converter (Sleeping, 5)
- SDXL Lightning 4Step (Sleeping, 4)
- GGUF Playground (Running, 8): Display a relocation message for GGUF Playground
- Tokenizers4All (Sleeping)
Models (12):
- CaioXapelaum/Qwen-2.5-0.5B-Instruct-4bit: Text Generation · 0.5B · Updated · 1 · 2
- CaioXapelaum/Qwen2.5-1.5B-Instruct-Q4-mlx: Text Generation · 0.2B · Updated · 29
- CaioXapelaum/tiny_starcoder_py-Q8_0-GGUF: Text Generation · 0.2B · Updated · 15
- CaioXapelaum/Qwen2.5-3B-F32-GGUF: Updated
- CaioXapelaum/sdxl: Text-to-Image · Updated · 68 · 1
- CaioXapelaum/Qwen2.5-Coder-0.5B-Instruct-Q4-mlx: Text Generation · 77.3M · Updated · 4
- CaioXapelaum/Qwen2.5-Coder-1.5B-Q4_K_M-GGUF: Text Generation · 2B · Updated · 20
- CaioXapelaum/entity-classifier: Image Classification · 85.8M · Updated · 7
- CaioXapelaum/Llama-3.1-Storm-8B-Q5_K_M-GGUF: Text Generation · 8B · Updated · 16
- CaioXapelaum/Orca-2-7b-Patent-Instruct-Llama-2-Q5_K_M-GGUF: 7B · Updated · 5