GGUF hybrid layer quantization of Ministral-3-3B-Instruct-2512 by mistralai
Original model: https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512
The hybrid quant employs different quantization levels on a per layer basis to increase flexibility in trading off performance vs file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. The quants are all K-quants to keep processing efficient on older GPUs or CPUs.
The hybrid quants use the following modified layer quant types:

Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0 attn_o = q6_k ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0 ffn_d = q8_0
Q6_K_L : attn_v = q8_0 attn_o = q8_0 ffn_d = q8_0

The Q6_K_H layer quant is as follows:
LAYER_TYPES='[
[0 ,"Q6_K_L"],[1 ,"Q6_K_L"],[2 ,"Q6_K_M"],[3 ,"Q6_K_M"],[4 ,"Q6_K_S"],[5 ,"Q6_K_S"],[6 ,"Q6_K_S"],
[7 ,"Q5_K_L"],[8 ,"Q5_K_M"],[9 ,"Q5_K_M"],[10,"Q5_K_L"],[11,"Q5_K_M"],[12,"Q5_K_L"],[13,"Q5_K_L"],
[14,"Q5_K_L"],[15,"Q5_K_L"],[16,"Q5_K_L"],[17,"Q6_K_S"],[18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_S"],
[21,"Q6_K_M"],[22,"Q6_K_M"],[23,"Q6_K_M"],[24,"Q6_K_L"],[25,"Q6_K_L"]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
The Q4_K_H layer quant is as follows:
LAYER_TYPES='[
[0 ,"Q6_K_S"],[1 ,"Q5_K_L"],[2 ,"Q5_K_M"],[3 ,"Q5_K_S"],[4 ,"Q4_K_L"],[5 ,"Q5_K_S"],[6 ,"Q4_K_L"],[7 ,"Q4_K_M"],
[8 ,"Q4_K_S"],[9, "Q4_K_S"],[10,"Q4_K_S"],[11,"Q4_K_S"],[12,"Q4_K_M"],[13,"Q4_K_M"],[14,"Q4_K_M"],[15,"Q4_K_M"],
[16,"Q4_K_M"],[17,"Q4_K_L"],[18,"Q4_K_M"],[19,"Q4_K_L"],[20,"Q5_K_S"],[21,"Q5_K_M"],[22,"Q5_K_L"],[23,"Q6_K_S"],
[24,"Q6_K_M"],[25,"Q6_K_L"]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
The quants were optimized for good reasoning performance across a curated set of test prompts.
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 2e9 | 8.0 | - |
| Q4_K_H | 2.4e9 | 7.9 | Hybrid quant with Q6_K embed Q6_K output |
| Q6_K | 2.8e9 | 7.8 | - |
| Q6_K_H | 2.8e9 | 7.8 | Hybrid quant with Q6_K embed Q6_K output |
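The PPL column can be reproduced (up to the choice of evaluation text) with the stock llama-perplexity tool; the text file below is an assumption, since the test set used for this card is not specified:

```
# Sketch: evaluation text file is a placeholder.
./llama-perplexity -m Ministral-3-3B-Instruct-2512.Q4_K_H.gguf -f wiki.test.raw -c 512
```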
Usage:
This is a vision-capable model. It can be used together with its multimedia projector layers to process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md .
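As a minimal vision-mode sketch using the llama-mtmd-cli tool from that directory (image filename and prompt are placeholders):

```
# Sketch: load the quantized model together with the F16 multimedia projector
# and describe a local test image (filename and prompt are placeholders).
./llama-mtmd-cli -m Ministral-3-3B-Instruct-2512.Q6_K_H.gguf \
    --mmproj Ministral-3-3B-Instruct-2512.mmproj.gguf \
    --image test.jpg -p "Describe this image."
```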
Performance (RTX 4070, CUDA backend):
| Quant | KV cache | Context | Gen rate (tps) |
|---|---|---|---|
| Q6_K_H | F16 | 78k | 125 |
| Q6_K_H | Q8_0 | 142k | 125 |
| Q4_K_H | F16 | 82k | 140 |
| Q4_K_H | Q8_0 | 149k | 140 |
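The Q8_0 KV cache rows correspond to a quantized KV cache, which roughly doubles the context that fits in the same VRAM. A hedged llama-server sketch using mainline flags (host/port are placeholders; the context size mirrors the Q4_K_H / Q8_0 row above, and flash attention is typically required for a quantized V cache):

```
# Sketch: serve the Q4_K_H quant with a Q8_0-quantized KV cache at large context.
# Flash attention flag spelling (-fa) may differ across llama.cpp versions.
./llama-server -m Ministral-3-3B-Instruct-2512.Q4_K_H.gguf \
    -c 149000 --cache-type-k q8_0 --cache-type-v q8_0 -fa \
    -ngl 99 --host 127.0.0.1 --port 8080
```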
The model is trained at 16k context, which can be extended to 256k using YARN:

--rope-scaling yarn --yarn-orig-ctx 16384 --rope-scale 16

For a context other than 256k, set --rope-scale to the configured context size divided by 16384.
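For example, a 64k context corresponds to a rope scale of 65536 / 16384 = 4; a sketch with mainline llama-server flag spellings:

```
# Sketch: extend context to 64k with YARN (rope scale = 65536 / 16384 = 4).
./llama-server -m Ministral-3-3B-Instruct-2512.Q6_K_H.gguf \
    -c 65536 --rope-scaling yarn --yarn-orig-ctx 16384 --rope-scale 4
```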
NOTE: Unless a large context is needed, limiting the context size to the native 16k is recommended for best inference performance, particularly when using the less accurate Q4_K_H quant.
Benchmarks:
A full set of vision benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
Download the files from below:

| Link | Type | Size | Notes |
|---|---|---|---|
| Ministral-3-3B-Instruct-2512.Q4_K_H.gguf | Q4_K_H | 2.4e9 B | 0.4e9 B smaller than Q6_K_H |
| Ministral-3-3B-Instruct-2512.Q6_K_H.gguf | Q6_K_H | 2.8e9 B | ~Q6_K size with better performance |
| Ministral-3-3B-Instruct-2512.mmproj.gguf | F16 | 0.84e9 B | multimedia projector |
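A hedged download sketch using the huggingface-cli tool (repo id taken from this model page):

```
# Sketch: fetch the Q4_K_H quant and the multimedia projector into the current directory.
huggingface-cli download steampunque/Ministral-3-3B-Instruct-2512-Hybrid-GGUF \
    Ministral-3-3B-Instruct-2512.Q4_K_H.gguf \
    Ministral-3-3B-Instruct-2512.mmproj.gguf \
    --local-dir .
```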
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository: