Example workflow - based on the ComfyUI example workflow

This is a direct GGUF conversion of Wan-AI/Wan2.1-VACE-14B.

All quants were created from the FP32 base file. I have only uploaded Q8_0 and below; if you want the F16 or BF16 version, I can upload it on request.

The model files can be used with the ComfyUI-GGUF custom node.

Place the model files in ComfyUI/models/unet - see the ComfyUI-GGUF GitHub README for further install instructions.
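For convenience, a quant can also be fetched straight into that folder with the huggingface_hub client. A minimal sketch - the filename below is assumed from the usual naming pattern, so verify it against the repository's file list before running:

```python
# Minimal download sketch (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantStack/Wan2.1-VACE-14B-GGUF",
    filename="Wan2.1-VACE-14B-Q8_0.gguf",  # assumed filename; check the repo file list
    local_dir="ComfyUI/models/unet",       # the folder mentioned above
)
print(f"Saved to {path}")
```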

The VAE can be downloaded from this repository by Kijai.

Please refer to this chart for a basic overview of quantization types.
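For intuition about what such a chart describes, take Q8_0 as an example: weights are grouped into blocks of 32, and each block stores one f16 scale plus 32 int8 values, roughly 8.5 bits per weight. Below is a minimal illustrative sketch of that scheme, not the actual ggml kernel:

```python
import numpy as np

def quantize_q8_0(block: np.ndarray):
    # One Q8_0 block: 32 float32 weights -> one f16 scale + 32 int8 values.
    scale = np.abs(block).max() / 127.0
    if scale == 0.0:
        return np.float16(0.0), np.zeros_like(block, dtype=np.int8)
    q = np.clip(np.round(block / scale), -127, 127).astype(np.int8)
    return np.float16(scale), q

def dequantize_q8_0(scale, q):
    # Reconstruct approximate float32 weights from one block.
    return q.astype(np.float32) * np.float32(scale)

block = np.random.randn(32).astype(np.float32)
scale, q = quantize_q8_0(block)
err = np.abs(block - dequantize_q8_0(scale, q)).max()
print(f"max round-trip error: {err:.5f}")
```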

For conversion, I used the conversion scripts from city96.
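At its core, such a conversion reads the safetensors state dict and rewrites each tensor into a GGUF file; the sub-16-bit quants are then produced from that in a later pass with a llama.cpp-based quantize tool. The sketch below only illustrates this idea with placeholder filenames - it is not city96's actual script, which additionally handles key remapping and model-specific metadata:

```python
# Illustrative safetensors -> GGUF rewrite (pip install gguf safetensors numpy).
import numpy as np
from safetensors.numpy import load_file
from gguf import GGUFWriter

state_dict = load_file("wan2.1-vace-14b-fp32.safetensors")  # placeholder path

writer = GGUFWriter("wan2.1-vace-14b-F16.gguf", arch="wan")
for name, tensor in state_dict.items():
    # Store at F16 here; Q8_0 and smaller quants come from the later quantize pass.
    writer.add_tensor(name, tensor.astype(np.float16))

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```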

Model size: 17.3B params (GGUF, wan architecture)

Available quant types: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
