Big Deeper
BigDeeper
AI & ML interests
Differentiable hashing, orthonormal polynomial language modeling, image compression into language representations.
Recent Activity
liked a model 26 days ago: deepseek-ai/DeepSeek-V3
new activity about 2 months ago in ByteDance/LatentSync: Very large RAM footprint.
Organizations
None yet
BigDeeper's activity
Very large RAM footprint.
4
#1 opened 3 months ago by BigDeeper
The q8_0 version appears to keep generating indefinitely.
6
#1 opened 3 months ago by BigDeeper
Having a problem: unable to find a suitable output format for 'video_out.mp4'.
#1 opened 3 months ago by BigDeeper
Any ideas on how to mitigate this problem?
#3 opened 3 months ago by BigDeeper
Longer video?
6
#25 opened 5 months ago by BigDeeper
What is the minimum VRAM it requires?
12
#18 opened 5 months ago by DrNicefellow

VS Code + Cline + Ollama + Qwen2.5-Coder-32B-Instruct.Q8_0
3
#20 opened 5 months ago by BigDeeper
ComfyUI does not recognize model files in sft format
4
5
#18 opened 9 months ago by peidong
Are there advantages or disadvantages to changing the format for translation?
3
#10 opened 9 months ago by BigDeeper
What does 120B really mean?
3
#1 opened 12 months ago by BigDeeper
Does anyone know which specific Python library contains the tokenizer that was used to train Llama-3-70b?
1
2
#11 opened 12 months ago by BigDeeper
15 TeraTokens = 190 Million books
2
#4 opened about 1 year ago by Languido
Has anyone tried this GGUF with an agentic framework?
3
#6 opened 12 months ago by BigDeeper
gguf
30
#24 opened about 1 year ago by LaferriereJC
I have now tried two quantizations, 8_0 and 6_K; both fail as shown below.
3
#2 opened 12 months ago by BigDeeper