
ModelCloud/internlm-2.5-7b-chat-gptq-4bit

Tags: Feature Extraction · Transformers · Safetensors · internlm2 · internlm 2.5 · gptq · 4bit · gptqmodel · custom_code · 4-bit precision
Files and versions
  • 2 contributors
  • History: 5 commits
  • Latest commit: Qubitium · Update README.md · 2e2dda7 (verified) · 11 months ago
  • .gitattributes · 1.52 kB · initial commit · 11 months ago
  • README.md · 514 Bytes · Update README.md · 11 months ago
  • added_tokens.json · 189 Bytes · Upload folder using huggingface_hub (#1) · 11 months ago
  • config.json · 1.37 kB · Upload folder using huggingface_hub (#1) · 11 months ago
  • configuration_internlm2.py · 8.84 kB · Upload folder using huggingface_hub (#1) · 11 months ago
  • model.safetensors · 5.15 GB · Upload folder using huggingface_hub (#1) · 11 months ago
  • modeling_internlm2.py · 80.7 kB · Upload folder using huggingface_hub (#1) · 11 months ago
  • quantize_config.json · 340 Bytes · Upload folder using huggingface_hub (#1) · 11 months ago
  • special_tokens_map.json · 713 Bytes · Upload folder using huggingface_hub (#1) · 11 months ago
  • tokenization_internlm2.py · 8.81 kB · Upload folder using huggingface_hub (#1) · 11 months ago
  • tokenization_internlm2_fast.py · 7.81 kB · Upload folder using huggingface_hub (#1) · 11 months ago
  • tokenizer.json · 5.79 MB · Upload folder using huggingface_hub (#1) · 11 months ago
  • tokenizer.model · 1.48 MB · Upload folder using huggingface_hub (#1) · 11 months ago
  • tokenizer_config.json · 37.3 kB · Upload folder using huggingface_hub (#1) · 11 months ago
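
The tags (gptq, 4bit, gptqmodel, custom_code) and the shipped configuration_internlm2.py / modeling_internlm2.py / tokenization_internlm2*.py files indicate a 4-bit GPTQ quantization of InternLM 2.5 7B Chat that relies on remote code. The sketch below shows one plausible way to load it with the standard transformers API; it is not taken from this repo's README, and it assumes a CUDA GPU plus an installed GPTQ backend (e.g. the gptqmodel or auto-gptq package). The prompt string is purely illustrative.

```python
# Minimal loading sketch (not from this repo's README).
# Assumes: CUDA GPU available, and a GPTQ backend (gptqmodel / auto-gptq) installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/internlm-2.5-7b-chat-gptq-4bit"

# trust_remote_code=True is required because the repo ships its own
# configuration/modeling/tokenization Python files (custom_code tag).
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place the ~5 GB quantized weights on the GPU
    trust_remote_code=True,
)

# Simple single-turn generation; for chat-style prompting you would normally
# apply the chat template defined in tokenizer_config.json (if present).
prompt = "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```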