Wan2.1
Wan | GitHub | Hugging Face | ModelScope | Technical Report | Blog | WeChat Group | Discord
Wan: Open and Advanced Large-Scale Video Generative Models
In this repository, we present Wan2.1, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. Wan2.1 offers these key features:
- SOTA Performance: Wan2.1 consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
- Supports Consumer-grade GPUs: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques such as quantization). Its performance is even comparable to some closed-source models.
- Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
- Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
- Powerful Video VAE: Wan-VAE delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
Video Demos
Latest News!!
- Apr 17, 2025: We introduce Wan2.1 FLF2V with its inference code and weights!
- Mar 21, 2025: We are excited to announce the release of the Wan2.1 technical report. We welcome discussions and feedback!
- Mar 3, 2025: Wan2.1's T2V and I2V have been integrated into Diffusers (T2V | I2V). Feel free to give it a try!
- Feb 27, 2025: Wan2.1 has been integrated into ComfyUI. Enjoy!
- Feb 25, 2025: We've released the inference code and weights of Wan2.1.
Community Works
If your work has improved Wan2.1 and you would like more people to see it, please inform us.
- CFG-Zero enhances Wan2.1 (covering both T2V and I2V models) from the perspective of CFG.
- TeaCache now supports Wan2.1 acceleration, capable of increasing speed by approximately 2x. Feel free to give it a try!
- DiffSynth-Studio provides more support for Wan2.1, including video-to-video, FP8 quantization, VRAM optimization, LoRA training, and more. Please refer to their examples.
Todo List
- Wan2.1 Text-to-Video
- Multi-GPU Inference code of the 14B and 1.3B models
- Checkpoints of the 14B and 1.3B models
- Gradio demo
- ComfyUI integration
- Diffusers integration
- Diffusers + Multi-GPU Inference
- Wan2.1 Image-to-Video
- Multi-GPU Inference code of the 14B model
- Checkpoints of the 14B model
- Gradio demo
- ComfyUI integration
- Diffusers integration
- Diffusers + Multi-GPU Inference
- Wan2.1 First-Last-Frame-to-Video
- Multi-GPU Inference code of the 14B model
- Checkpoints of the 14B model
- Gradio demo
- ComfyUI integration
- Diffusers integration
- Diffusers + Multi-GPU Inference
Quickstart
This repository contains the Diffusers-format weights. You can find the original release weights here: Wan2.1-FLF2V-14B-720P.
Using with diffusers
Make sure you upgrade to the latest version of diffusers:

```shell
pip install git+https://github.com/huggingface/diffusers.git
```
And then you can run:
```python
import numpy as np
import torch
import torchvision.transforms.functional as TF
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel

model_id = "Wan-AI/Wan2.1-FLF2V-14B-720P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

first_frame = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png")
last_frame = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_last_frame.png")


def aspect_ratio_resize(image, pipe, max_area=720 * 1280):
    # Fit the image inside max_area while preserving its aspect ratio,
    # snapping height and width to the model's spatial patch grid.
    aspect_ratio = image.height / image.width
    mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
    height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
    width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
    image = image.resize((width, height))
    return image, height, width


def center_crop_resize(image, height, width):
    # Calculate resize ratio to match first frame dimensions
    resize_ratio = max(width / image.width, height / image.height)

    # Resize the image
    width = round(image.width * resize_ratio)
    height = round(image.height * resize_ratio)
    size = [width, height]
    image = TF.center_crop(image, size)

    return image, height, width


first_frame, height, width = aspect_ratio_resize(first_frame, pipe)
if last_frame.size != first_frame.size:
    last_frame, _, _ = center_crop_resize(last_frame, height, width)

prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."

output = pipe(
    image=first_frame, last_image=last_frame, prompt=prompt, height=height, width=width, guidance_scale=5.5
).frames[0]
export_to_video(output, "wan-ff2v.mp4", fps=16)
```
Note: This example does not integrate prompt extension or distributed inference. We will soon update it with prompt extension and multi-GPU support integrated into Diffusers.
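If the full pipeline above does not fit in your GPU's memory, diffusers' generic model CPU offloading can be used in place of `pipe.to("cuda")`. The following is a minimal sketch of that option, using only standard diffusers APIs; the memory/latency trade-off depends on your hardware and is not a figure reported in this card:

```python
import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from transformers import CLIPVisionModel

model_id = "Wan-AI/Wan2.1-FLF2V-14B-720P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)

# Instead of pipe.to("cuda"): keep sub-models on CPU and move each one to the
# GPU only while it is running, then offload it again.
pipe.enable_model_cpu_offload()
```

The rest of the example (frame preparation and the `pipe(...)` call) is unchanged.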
Manual Evaluation
(1) Text-to-Video Evaluation
Manual evaluation shows that results generated with prompt extension are superior to those from both closed-source and open-source models.
(2) Image-to-Video Evaluation
We also conducted extensive manual evaluations to assess the performance of the Image-to-Video model, and the results are presented in the table below. The results clearly indicate that Wan2.1 outperforms both closed-source and open-source models.
Computational Efficiency on Different GPUs
We test the computational efficiency of different Wan2.1 models on different GPUs in the following table. The results are presented in the format: Total time (s) / peak GPU memory (GB).
The parameter settings for the tests presented in this table are as follows: (1) for the 1.3B model on 8 GPUs, set `--ring_size 8` and `--ulysses_size 1`; (2) for the 14B model on 1 GPU, use `--offload_model True`; (3) for the 1.3B model on a single 4090 GPU, set `--offload_model True --t5_cpu`; (4) for all tests, no prompt extension was applied, i.e., `--use_prompt_extend` was not enabled.
Note: T2V-14B is slower than I2V-14B because the former samples 50 steps while the latter uses 40 steps.
Introduction to Wan2.1
Wan2.1 is designed on the mainstream diffusion transformer paradigm, achieving significant advancements in generative capabilities through a series of innovations. These include our novel spatio-temporal variational autoencoder (VAE), scalable training strategies, large-scale data construction, and automated evaluation metrics. Collectively, these contributions enhance the model's performance and versatility.
(1) 3D Variational Autoencoders
We propose a novel 3D causal VAE architecture, termed Wan-VAE, specifically designed for video generation. By combining multiple strategies, we improve spatio-temporal compression, reduce memory usage, and ensure temporal causality. Wan-VAE demonstrates significant advantages in performance and efficiency compared to other open-source VAEs. Furthermore, our Wan-VAE can encode and decode unlimited-length 1080P videos without losing historical temporal information, making it particularly well-suited for video generation tasks.
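For readers who want to exercise the VAE in isolation, here is an illustrative round-trip sketch using the diffusers `AutoencoderKLWan` class loaded in the Quickstart above; the tensor layout and the toy clip size are our assumptions for the sketch, not values prescribed by this card:

```python
import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-FLF2V-14B-720P-Diffusers", subfolder="vae", torch_dtype=torch.float32
)

# Toy 9-frame clip in [-1, 1]; video tensors are (batch, channels, frames, height, width).
video = torch.rand(1, 3, 9, 256, 256) * 2.0 - 1.0

with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()  # compressed spatio-temporal latents
    reconstruction = vae.decode(latents).sample       # back to pixel space

print(latents.shape, reconstruction.shape)
```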
(2) Video Diffusion DiT
Wan2.1 is designed using the Flow Matching framework within the paradigm of mainstream Diffusion Transformers. Our model's architecture uses the T5 Encoder to encode multilingual text input, with cross-attention in each transformer block embedding the text into the model structure. Additionally, we employ an MLP with a Linear layer and a SiLU layer to process the input time embeddings and predict six modulation parameters individually. This MLP is shared across all transformer blocks, with each block learning a distinct set of biases. Our experimental findings reveal a significant performance improvement with this approach at the same parameter scale.
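To make the shared-modulation design concrete, here is a minimal PyTorch sketch of the idea as we read it from the description above; it is our illustration, not the released implementation, and all class names are hypothetical:

```python
import torch
import torch.nn as nn


class SharedTimeModulation(nn.Module):
    """One SiLU + Linear MLP maps the time embedding to six modulation vectors
    (shift/scale/gate for attention and for the feed-forward sub-layer);
    a single instance is shared by every transformer block."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, t_emb: torch.Tensor) -> torch.Tensor:
        return self.mlp(t_emb).unflatten(-1, (6, -1))  # (batch, 6, dim)


class BlockModulationBias(nn.Module):
    """Each block keeps only a distinct learnable bias added to the shared MLP
    output, so per-block modulation differs without duplicating the MLP weights."""

    def __init__(self, dim: int):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(6, dim))

    def forward(self, shared_mod: torch.Tensor) -> torch.Tensor:
        return shared_mod + self.bias


# Toy usage: one shared MLP, thirty per-block biases (matching the 1.3B config below).
dim = 1536
shared = SharedTimeModulation(dim)
blocks = nn.ModuleList(BlockModulationBias(dim) for _ in range(30))
t_emb = torch.randn(2, dim)
per_block_mod = [block(shared(t_emb)) for block in blocks]  # each (2, 6, 1536)
```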
| Model | Dimension | Input Dimension | Output Dimension | Feedforward Dimension | Frequency Dimension | Number of Heads | Number of Layers |
|-------|-----------|-----------------|------------------|-----------------------|---------------------|-----------------|------------------|
| 1.3B  | 1536      | 16              | 16               | 8960                  | 256                 | 12              | 30               |
| 14B   | 5120      | 16              | 16               | 13824                 | 256                 | 40              | 40               |
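If you want to cross-check these hyperparameters against the shipped weights, the transformer's configuration can be fetched on its own. This is a small sketch assuming diffusers' standard `load_config` helper and the `transformer` subfolder used by the pipeline above (the config key names will not match the table's column headers one-to-one):

```python
from diffusers import WanTransformer3DModel

# Fetch only the transformer's config.json from the Hub and print it.
config = WanTransformer3DModel.load_config(
    "Wan-AI/Wan2.1-FLF2V-14B-720P-Diffusers", subfolder="transformer"
)
print(config)
```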
Data
We curated and deduplicated a candidate dataset comprising a vast amount of image and video data. During data curation, we designed a four-step data cleaning process focusing on fundamental dimensions, visual quality, and motion quality. Through this robust data processing pipeline, we can easily obtain high-quality, diverse, and large-scale training sets of images and videos.
Comparisons to SOTA
We compared Wan2.1 with leading open-source and closed-source models to evaluate its performance. Using our carefully designed set of 1,035 internal prompts, we tested across 14 major dimensions and 26 sub-dimensions. We then computed the total score by performing a weighted calculation on the scores of each dimension, using weights derived from human preferences in the matching process. The detailed results are shown in the table below. These results demonstrate our model's superior performance compared to both open-source and closed-source models.
Citation
If you find our work helpful, please cite us.
```bibtex
@article{wan2025,
    title={Wan: Open and Advanced Large-Scale Video Generative Models},
    author={WanTeam and Ang Wang and Baole Ai and Bin Wen and Chaojie Mao and Chen-Wei Xie and Di Chen and Feiwu Yu and Haiming Zhao and Jianxiao Yang and Jianyuan Zeng and Jiayu Wang and Jingfeng Zhang and Jingren Zhou and Jinkai Wang and Jixuan Chen and Kai Zhu and Kang Zhao and Keyu Yan and Lianghua Huang and Mengyang Feng and Ningyi Zhang and Pandeng Li and Pingyu Wu and Ruihang Chu and Ruili Feng and Shiwei Zhang and Siyang Sun and Tao Fang and Tianxing Wang and Tianyi Gui and Tingyu Weng and Tong Shen and Wei Lin and Wei Wang and Wei Wang and Wenmeng Zhou and Wente Wang and Wenting Shen and Wenyuan Yu and Xianzhong Shi and Xiaoming Huang and Xin Xu and Yan Kou and Yangyu Lv and Yifei Li and Yijing Liu and Yiming Wang and Yingya Zhang and Yitong Huang and Yong Li and You Wu and Yu Liu and Yulin Pan and Yun Zheng and Yuntao Hong and Yupeng Shi and Yutong Feng and Zeyinzi Jiang and Zhen Han and Zhi-Fan Wu and Ziyu Liu},
    journal={arXiv preprint arXiv:2503.20314},
    year={2025}
}
```
License Agreement
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated content, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the license.
Acknowledgements
We would like to thank the contributors to the SD3, Qwen, umt5-xxl, diffusers, and HuggingFace repositories for their open research.
Contact Us
If you would like to leave a message to our research or product teams, feel free to join our Discord or WeChat groups!