Fine-tuning LLMs to 1.58bit: extreme quantization made easy
By medmekk and 5 others • Sep 18, 2024