Kimi-K2-Thinking Q6_K GGUF

Kimi-K2-Thinking is currently one of the strongest thinking (reasoning) models, competitive with OpenAI's GPT-5.1, xAI's Grok 4, and others.

Quantized Model Comparison

| Type | Bits | Quality | Description |
|------|------|---------|-------------|
| IQ1 | 1-bit | 🟥 Very Low | Minimal footprint; worse than Q2/IQ2 |
| Q2/IQ2 | 2-bit | 🟥 Low | Minimal footprint; only for tests |
| Q3/IQ3 | 3-bit | 🟧 Low–Med | "Medium" variant |
| Q4/IQ4 | 4-bit | 🟩 Med–High | "Medium" variant at 4-bit |
| **Q5** | 5-bit | 🟩🟩 High | Excellent general-purpose quant |
| **Q6_K** | 6-bit | 🟩🟩🟩 Very High | Almost FP16 quality, larger size |
| **Q8** | 8-bit | 🟩🟩🟩🟩 Near-lossless | Baseline, reference quality |
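To make the Bits column concrete, here is a back-of-the-envelope size estimate for a 1T-parameter model at each width. The bits-per-weight figures are nominal (Q6_K, for instance, works out to roughly 6.56 effective bits per weight in llama.cpp), so real GGUF files will differ somewhat from these numbers.

```python
# Rough on-disk footprint of a 1T-parameter model per quantization width.
# Nominal bits-per-weight values; real GGUF files mix tensor types, so
# actual sizes vary (ballpark estimates, not measurements).
N_PARAMS = 1e12  # Kimi-K2-Thinking parameter count (from this card)

for name, bpw in [("Q2", 2.0), ("Q4", 4.0), ("Q6_K", 6.56), ("Q8", 8.0), ("FP16", 16.0)]:
    size_gib = N_PARAMS * bpw / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{name:<5} ~{size_gib:,.0f} GiB")

# Prints roughly: Q2 ~233, Q4 ~466, Q6_K ~764, Q8 ~931, FP16 ~1,863 GiB
```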

The Q6_K quant delivers almost FP16 quality at a substantially reduced size, making it an excellent high-quality choice for this model.
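As an illustration, here is a minimal llama-cpp-python sketch for loading and prompting this quant, assuming the GGUF shards have already been downloaded locally. The file name and shard count below are hypothetical, and n_ctx / n_gpu_layers should be tuned to your hardware.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path is illustrative; a 1T-parameter Q6_K model ships as split GGUF shards,
# and llama.cpp loads the remaining shards automatically from the first one.
llm = Llama(
    model_path="./Kimi-K2-Thinking-Q6_K-00001-of-00017.gguf",  # hypothetical name
    n_ctx=8192,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU (use 0 for CPU-only)
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain GGUF quantization."}]
)
print(out["choices"][0]["message"]["content"])
```

Note that at roughly 764 GiB on disk, this quant needs a machine with very large memory; llama.cpp memory-maps the file by default, but throughput will suffer if the weights cannot stay resident.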

Model Details

Format: GGUF
Quantization: Q6_K (6-bit)
Model size: 1T params
Architecture: deepseek2
Repository: John1604/Kimi-K2-Thinking-q6K-gguf
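To fetch the shards from the repository above, one option is huggingface_hub's snapshot_download, which pulls every matching file in a single call; the *.gguf pattern is an assumption about how the files in this repo are named.

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download all GGUF shards from the repo; the *.gguf pattern assumes the
# quantized shards are the only .gguf files in the repository.
local_dir = snapshot_download(
    repo_id="John1604/Kimi-K2-Thinking-q6K-gguf",
    allow_patterns=["*.gguf"],
)
print("Files downloaded to:", local_dir)
```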
