How to use from llama.cpp
Install from Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MidnightRunner/Misc:
# Run inference directly in the terminal:
llama-cli -hf MidnightRunner/Misc:
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MidnightRunner/Misc:
# Run inference directly in the terminal:
llama-cli -hf MidnightRunner/Misc:
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf MidnightRunner/Misc:
# Run inference directly in the terminal:
./llama-cli -hf MidnightRunner/Misc:
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf MidnightRunner/Misc:
# Run inference directly in the terminal:
./build/bin/llama-cli -hf MidnightRunner/Misc:
Use Docker
docker model run hf.co/MidnightRunner/Misc:
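
Whichever install route you pick, the llama-server command shown above exposes an OpenAI-compatible HTTP API (default port 8080) alongside the web UI, and the trailing colon in the -hf flag is where a quantization tag would go (for example Q4_K_M, used here purely as an illustration). Below is a minimal sketch for querying a running server from Python with requests; the port, model name, and prompt are assumptions, so adjust them to your setup.

# Minimal sketch: query a locally running llama-server (OpenAI-compatible API).
# Assumes the server is already running on the default http://127.0.0.1:8080;
# change the URL if you passed --host or --port.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "local",  # llama-server answers for whatever model it was launched with
        "messages": [
            {"role": "user", "content": "Say hello in one short sentence."}
        ],
        "max_tokens": 64,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])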
Quick Links

🗂 MidnightRunner/Misc

Overview

This repo is my miscellaneous toolbox: a collection of models, upscalers, denoisers, configs, and other bits I keep around for quick pulls.

It's not meant to be polished, just fast standbys that I can drop into workflows when needed.


Contents

  • Upscalers

    • ESRGAN, RealESRGAN, AnimeSharp, UltraSharp
    • SwinIR, NMKD, Foolhardy
  • Denoisers & Sharpeners

    • ITF SkinDiff Detail Lite
    • Lexica Sharp series
    • DeNoise realplksr
  • Experimental Checkpoints

    • Astraali configs
    • OmniSR (x2, x3, x4)
    • SAM (Segment Anything) weights
    • Motion tests (bounceV, danceMax, etc.)
  • Workflow Utilities

    • FixFP16Errors
    • Oddball safetensors
    • “Just in case” helper models

Quick Pulls

Fetch files or the entire repo with Hugging Face tools:

# clone the whole repo
git lfs install
git clone https://huggingface.co/MidnightRunner/Misc

# download a single file
huggingface-cli download MidnightRunner/Misc 4x-UltraSharp.pth

# pull from Python
from huggingface_hub import hf_hub_download

file = hf_hub_download(
    repo_id="MidnightRunner/Misc",
    filename="4x-UltraSharp.pth"
)
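
Once a file is on disk it can be worth a quick peek before dropping it into a workflow. Here is a minimal sketch that continues the snippet above, assuming the checkpoint is a plain PyTorch state dict (as ESRGAN-style upscalers such as 4x-UltraSharp usually are); if you'd rather pull the whole repo from Python instead of git, huggingface_hub also provides snapshot_download.

# peek inside the downloaded checkpoint (continues the snippet above;
# assumes a tensor-only state dict)
import torch

state = torch.load(file, map_location="cpu")
# some releases wrap the weights under "params" or "params_ema"
if isinstance(state, dict) and "params_ema" in state:
    state = state["params_ema"]
print(f"{len(state)} tensors")
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))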

Notes

  • Disorganized on purpose: this is a stash, not a showcase.
  • Everything here is tested, works, and has bailed me out more than once.
  • Licenses follow their original sources.
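
Because the layout is intentionally loose, it can help to list what is actually in the repo before pulling anything. A minimal sketch with huggingface_hub (nothing is downloaded, only file paths are returned):

# list every file in the repo without downloading anything
from huggingface_hub import list_repo_files

for path in sorted(list_repo_files("MidnightRunner/Misc")):
    print(path)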
Model details

GGUF, 8B params, qwen2vl architecture; 3-bit, 4-bit, and 8-bit quantizations.