About Wan 2.7

Wan 2.7 is the latest evolution of Alibaba's open-source AI video generation model, designed to make cinematic video creation accessible to everyone.

What Is Wan 2.7?

Wan 2.7 is a state-of-the-art AI video generation model developed within Alibaba's Qwen ecosystem. Built on a 27-billion-parameter Mixture-of-Experts (MoE) architecture, Wan 2.7 generates cinematic 1080P HD videos from text descriptions and images. It represents a major leap forward in open-source AI video generation, combining visual fidelity, audio synchronization, and motion consistency in a single unified model.
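The practical point of a Mixture-of-Experts design is that only a subset of the 27B total parameters (14B, per the later section) runs for any given token. The toy sketch below, which is illustrative only and not Wan's actual routing code, shows the idea: a router scores a set of expert networks and only the top-k experts execute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts routing sketch (illustrative, not Wan's code).
# Each input is routed to its top-k experts, so only a fraction of the
# model's total expert parameters is used per forward pass.
d_model, n_experts, top_k = 16, 8, 2
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    logits = x @ router                        # router score for each expert
    top = np.argsort(logits)[-top_k:]          # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over selected experts
    # Only the selected experts run; the rest are skipped entirely.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)                               # (16,)
print(f"active experts per token: {top_k}/{n_experts}")
```

In a real MoE model the experts are full feed-forward sub-networks inside each transformer layer, but the routing principle is the same: total parameter count and per-token compute are decoupled.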

Unlike closed-source alternatives, Wan 2.7 is fully open source, giving developers, researchers, and creators complete access to the model weights, architecture, and training methodology. This transparency has made it one of the most popular AI video models on GitHub, with over 15,000 stars and an active community of contributors.

The Evolution of Wan Video Models

The Wan model family has rapidly evolved through multiple generations, each bringing significant improvements in video quality, capabilities, and efficiency:

Wan 2.1 — The Foundation

The original open-source release established the core architecture with text-to-video and image-to-video capabilities at 480P and 720P resolutions using a 14B parameter model.

Wan 2.2 — Mixture-of-Experts

Introduced the MoE architecture (27B total parameters, 14B active), trained on 65% more images and 83% more videos than Wan 2.1. Added LoRA support, VACE features, and a new VAE with a 16x16x4 compression ratio.
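The 16x16x4 compression ratio determines how small the latent video is that the model actually generates. Assuming 16x compression along each spatial axis and 4x along the temporal axis (the exact axis mapping is an assumption here), the latent shape works out as:

```python
# Back-of-envelope latent size under an assumed 16x spatial (each of
# height and width) and 4x temporal compression -- the "16x16x4" ratio.
def latent_shape(frames, height, width, t=4, s=16):
    return frames // t, height // s, width // s

# e.g. a 720P clip of 64 frames:
print(latent_shape(64, 720, 1280))  # -> (16, 45, 80)
```

So a 64-frame 720P clip is modeled as a 16x45x80 latent grid, which is why VAE compression is central to making HD video generation tractable.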

Wan 2.5 — Speed & Scale

Optimized for fast content creation with improved generation speed and quality refinements, becoming a practical tool for daily content workflows.

Wan 2.6 — Multi-Shot Storytelling

Introduced connected multi-shot video generation, real-person image inputs, up to five video references, and 1080P output. Released December 2025.

Wan 2.7 — The Current Generation

Major upgrades across visual quality, audio, motion dynamics, stylization, and consistency. New features include 9-grid image-to-video, subject + voice reference, first-and-last-frame control, and instruction-based video editing. Launching March 2026.

Key Capabilities

Wan 2.7 delivers one of the most comprehensive feature sets of any open-source AI video generator: cinematic 1080P text-to-video and image-to-video generation, synchronized audio, 9-grid image-to-video, subject and voice reference, first-and-last-frame control, instruction-based video editing, and multi-shot consistency.

Open Source & Community

Wan 2.7 is built on the principle that powerful AI video generation should be accessible to all. The model weights are available on Hugging Face, the code is on GitHub under the Wan-Video organization, and an active community contributes extensions, fine-tuning workflows, and integration tools for platforms like ComfyUI.