Alexone (krustik)
AI & ML interests
Testing, Evaluating
Recent Activity
New activity 24 days ago · allenai/Llama-3.1-Tulu-3-405B: Test result
New activity 24 days ago · Qwen/Qwen2.5-Omni-7B: Where is the MLX version of Qwen 2.5 Omni?
New activity 24 days ago · bartowski/DeepSeek-R1-GGUF: Tested Q6, uses 567 GB RAM
Organizations
None yet
krustik's activity
Test result · 4 · #2 opened 2 months ago by krustik
Where is the MLX version of Qwen 2.5 Omni? · 3 · 1 · #22 opened 24 days ago by Ekolawole
Tested Q6, uses 567 GB RAM · 1 · 10 · #2 opened 3 months ago by krustik
Amazing test result with 567 GB RAM on 10-year-old hardware · 1 · #42 opened 24 days ago by krustik
How many bits of quantization are enough for code generation tasks? · 1 · #5 opened 25 days ago by luweigen
Please, hurry up and release the quantized version! · 1 · #23 opened 28 days ago by zy19898
Why so many small files? · 1 · 21 · #1 opened about 2 months ago by rdtfddgrffdgfdghfghdfujgdhgsf
Where is the gguf? · 13 · #2 opened about 2 months ago by Colegero
Thank you very much. Will there be a continuation? · 10 · #5 opened about 2 months ago by Makar7
New research paper: R1-type reasoning models can be drastically improved in quality · 2 · #19 opened 3 months ago by krustik
A step-by-step deployment guide with ollama · 3 · 4 · #16 opened 3 months ago by snowkylin
First review: Q5_K_M requires 502 GB RAM, better than Meta's 405B · 1 · 6 · #11 opened 3 months ago by krustik
Resource Requirements for Running DeepSeek v3 Locally · 5 · #56 opened 3 months ago by wilfoderek
Smaller version for home-user GPUs · 26 · 10 · #2 opened 4 months ago by apcameron
What is the VRAM requirement to run this? · 5 · #1 opened 3 months ago by RageshAntony
Minimum VRAM? · 1 · 11 · #9 opened 4 months ago by CHNtentes
Will you release a small version for consumer hardware, like the v2 generation? · 9 · #35 opened 4 months ago by anon-linux-mint
Amazing, the only local model that repaired code in my test. · 2 · #5 opened 4 months ago by krustik
Copyright for generated content · 3 · #9 opened 5 months ago by kolia99