---
license: other
datasets:
  - allenai/c4
language:
  - en
metrics:
  - perplexity
  - accuracy
base_model:
  - jeffwan/llama-7b-hf
pipeline_tag: text-generation
library_name: transformers
---

[ πŸ€– GitHub | πŸ“„ Paper | 🌐 Website ]

# ACIP applied to jeffwan/llama-7b-hf

This model repository is part of the ACIP Project and provides a compressible version of jeffwan/llama-7b-hf. For more details, please visit our code repo.

## Quick Start

Just load the ACIP model via `from_pretrained`:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("MerantixMomentum/acip_llama1_7b", trust_remote_code=True)
```

This will download and create a fully parameterized ACIP model that can be pruned to any compression ratio you wish. For example,

```python
model.prune_model_by_score(compression_ratio=0.4)
```

will prune the model to 40% of its original size, measured in number of parameters, i.e., a 60% compression rate. A unique feature of ACIP is that this operation is reversible: you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run

```python
model.compress()
```

which will discard all pruned mask values of the compressible linear layers. Now the model is actually compressed and you should observe a significant decrease in memory usage (this step is not reversible without reloading the ACIP model). If you like, you can also run

```python
model.quantize()
```

to save even more memory (we have only tested 4-bit quantization with `bitsandbytes`, but you can also customize this).

πŸš€ That's it! You can now use your compressed model for inference or fine-tuning like any other causal language model from πŸ€— transformers.
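As a sketch of downstream use, assuming the repository resolves a tokenizer via `AutoTokenizer` and that the compressed model exposes the usual `generate` API of a causal language model (the prompt below is purely illustrative):

```python
from transformers import AutoModel, AutoTokenizer

# Load the ACIP model and a matching tokenizer (assumed to resolve from the same repo).
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama1_7b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MerantixMomentum/acip_llama1_7b")

# Prune to 40% of the original parameters and commit to that size.
model.prune_model_by_score(compression_ratio=0.4)
model.compress()

# Generate text as with any causal LM.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```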

**Note:** The parameter `compression_ratio` ranges from 1.0 to 0.0 and indicates the fraction of the model size retained after compression. For example, 0.4 means that the model keeps only 40% of the original number of parameters, while 1.0 means no compression at all.
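Putting the steps above together, here is a minimal sketch of a compression sweep; the commented-out `evaluate` call is a hypothetical stand-in for your own benchmark (e.g., perplexity on a held-out C4 split):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("MerantixMomentum/acip_llama1_7b", trust_remote_code=True)

# Score-based pruning is reversible, so the same model object can be
# probed at several sizes before committing to one.
for ratio in (0.8, 0.6, 0.4):
    model.prune_model_by_score(compression_ratio=ratio)
    # evaluate(model)  # hypothetical: your own perplexity/accuracy measurement

# Commit to the chosen ratio; only compress() actually frees memory.
model.prune_model_by_score(compression_ratio=0.4)
model.compress()
```

Note that memory usage only drops after `model.compress()`; the pruning calls before it merely mask weights so that the operation stays reversible.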

## Dependencies

To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and, optionally, `bitsandbytes` if you want to quantize your model. See requirements.txt for pip-installable dependencies with exact version pins (newer versions should work as well).
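For instance, using the package names listed above (exact version pins live in requirements.txt):

```shell
pip install torch transformers peft
pip install bitsandbytes  # optional, only needed for model.quantize()
```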

## License

The license is inherited from the base model jeffwan/llama-7b-hf.

## Citation

When using or referring to this model, please cite our paper:

```bibtex
@article{mxm2025acip,
  title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
  author={M. Genzel, P. Putzky, P. Zhao, S. Schulze, M. Mollenhauer, R. Seidel, S. Dietzel, T. Wollmann},
  year={2025},
  journal={Preprint arXiv:2502.01717}
}
```