What if each weight in a neural network could only be −1, 0, or +1? That is the premise of 1-bit quantization, and it is more powerful than it sounds. This post breaks down how it works, why it matters, and where it falls short.
What is Quantization?
A standard neural network stores weights as 32-bit or 16-bit floating point numbers. Those floats carry a lot of information, but also a lot of memory cost. Quantization is the process of reducing the precision of those numbers to save space and speed up computation.
Most production models today use 8-bit (INT8) or 4-bit (INT4) quantization. These methods compress weights into integers while still preserving enough numeric range to keep quality high. 1-bit takes this to the extreme: every single weight is represented by just one bit.
For a 7B-parameter model, the weights alone shrink to roughly:
INT8 → ~7 GB
INT4 → ~3.5 GB
1-bit → ~0.9 GB
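To make the mechanics concrete, here is a minimal NumPy sketch of symmetric per-tensor INT8 quantization (an illustration of the general technique, not any particular library's implementation; the function names are made up):

```python
import numpy as np

def quantize_int8(W: np.ndarray):
    # Symmetric per-tensor scheme: one float scale maps the weight
    # range onto the integer levels in [-127, 127].
    scale = max(np.abs(W).max(), 1e-8) / 127.0
    W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return W_q, scale

def dequantize_int8(W_q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate FP32 weights; the rounding error here is
    # the quality cost paid for the memory savings.
    return W_q.astype(np.float32) * scale
```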
How Does 1-Bit Actually Work?
Pure binary quantization maps every weight to either +1 or −1. The model learns which sign each weight should carry, not its magnitude. During inference, all multiplications become cheap additions and subtractions, no floating point needed.
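A minimal sketch of the idea (NumPy; the single per-tensor scale used to recover magnitude is an XNOR-Net-style choice, an assumption here rather than something every binary scheme shares):

```python
import numpy as np

def binarize(W: np.ndarray):
    # Keep only each weight's sign; one shared float scale
    # (the mean absolute value) restores overall magnitude.
    alpha = np.abs(W).mean()
    W_bin = np.where(W >= 0, 1, -1).astype(np.int8)
    return W_bin, alpha

def binary_matvec(W_bin: np.ndarray, alpha: float, x: np.ndarray):
    # Multiplying by +1/-1 just adds or subtracts entries of x;
    # real kernels replace the * below with bitwise tricks.
    return alpha * (W_bin * x).sum(axis=1)
```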
The most important recent work in this space is BitNet (Microsoft Research, 2023) and its successor BitNet b1.58 (2024). BitNet b1.58 uses a ternary scheme: weights are constrained to {−1, 0, +1}. The extra zero value turns many operations into a complete no-op, making inference even faster.
One caveat: activations are still quantized to INT8, so the pipeline is not binary end-to-end. Even so, for people stuck on weak hardware, it's like a dream.
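Here is a minimal sketch of the ternary scheme, following the absmean quantizer described in the BitNet b1.58 paper (the matvec helper is a hypothetical illustration of why the zeros make inference cheaper):

```python
import numpy as np

def absmean_ternary(W: np.ndarray, eps: float = 1e-5):
    # Scale by the mean absolute weight, then round and clip,
    # leaving every weight in {-1, 0, +1}.
    gamma = np.abs(W).mean() + eps
    W_t = np.clip(np.round(W / gamma), -1, 1).astype(np.int8)
    return W_t, gamma

def ternary_matvec(W_t: np.ndarray, gamma: float, x: np.ndarray):
    # No multiplications: +1 adds an entry, -1 subtracts it,
    # and 0 skips it entirely (the "no-op" speedup).
    out = np.empty(W_t.shape[0], dtype=np.float32)
    for i, row in enumerate(W_t):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return gamma * out
```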
Training vs Post-Training Quantization
There are two fundamentally different approaches here, and the distinction matters a lot.
- Post-Training Quantization (PTQ): take a pre-trained FP16 model and quantize it after the fact. Fast and convenient, but quality degrades — especially below 4 bits.
- Quantization-Aware Training (QAT): train the model from scratch with quantized weights. The model adapts to its constraints during training. This is how BitNet works — and it is what makes 1-bit viable at all.
Trying to PTQ a standard model down to 1-bit produces catastrophic quality loss. 1-bit only works if the model is trained to be 1-bit from day one.
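To see why QAT is even possible, here is a toy PyTorch sketch of the straight-through estimator (STE), the standard trick for training through a non-differentiable quantizer. The module is illustrative, not the actual BitNet code:

```python
import torch
import torch.nn as nn

class TernaryLinear(nn.Module):
    """Toy QAT layer: ternary weights in the forward pass, with the
    straight-through estimator routing gradients to latent FP weights."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # The optimizer updates these full-precision "shadow" weights.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        gamma = w.abs().mean() + 1e-5
        w_q = (w / gamma).round().clamp(-1, 1) * gamma  # ternary forward
        # STE: the forward pass sees w_q, but gradients flow to w
        # as if the quantizer were the identity function.
        w_ste = w + (w_q - w).detach()
        return x @ w_ste.t()
```

During training the optimizer keeps nudging the latent full-precision weights; at deployment you throw them away and ship only the ternary values plus the scale.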
The Numbers: How Much Do You Lose?
The honest answer: it depends heavily on model size. Small models suffer more than large ones. A 125M parameter BitNet model loses noticeably more quality than a 7B BitNet model when compared to their FP16 equivalents.
| Format | Bits/weight | Memory (7B) | Speed | Quality loss |
|---|---|---|---|---|
| FP16 | 16 | ~14 GB | baseline | none |
| INT8 | 8 | ~7 GB | 1.5–2× | minimal |
| INT4 | 4 | ~3.5 GB | 2–4× | low |
| 1.58-bit | ~1.58 | ~1.4 GB | up to 8× | moderate* |
* At large scale (7B+), quality becomes competitive with INT4.
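The memory column is simple arithmetic: parameters × bits per weight ÷ 8 bits per byte, ignoring activations, KV cache, and per-group scale overhead. A quick sanity check:

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    # Weights only: ignores activations, KV cache, and scale overhead.
    return n_params * bits_per_weight / 8 / 1e9

for fmt, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4), ("b1.58", 1.58)]:
    print(f"{fmt:>6}: ~{weight_memory_gb(7e9, bits):.1f} GB")
```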
Why This Matters for Edge and Tiny Models
For us at SupraLabs, 1-bit quantization is an interesting reference point. At sub-1M parameters (the scale of Supra Mini), the quality penalty of 1-bit QAT is severe. The model simply does not have enough capacity to absorb the constraint. At our scale, every bit of precision counts.
Where 1-bit shines is on large models deployed at the edge: think 7B+ models running on phones, laptops, or embedded devices without a GPU. The memory savings are dramatic, and the inference speedup from replacing multiplications with additions is real and measurable.
The Catch
1-bit is not a free lunch. The main trade-offs are:
- Requires purpose-built training: there is no PTQ shortcut.
- It's a gamble for small models: it could help a model like ours or cripple it.
- Small models suffer most: below ~1B parameters, the quality loss is hard to justify.
- Activations still need INT8: it's not fully binary end-to-end yet.
How This Helps Us (and You!)
We at SupraLabs are going to try every kind of experiment: quantization, pruning, distillation, all of it, to create the best models for you!
Final Thought
1-bit quantization is tricky territory for small models, but we are going to try everything to make it work!
SupraLabs_