Video to Video

We’ve introduced Video-to-Video in WAN 2.1, allowing you to transform your source videos with AI. Simply upload your input video, enter a text prompt describing your desired output, and let the model generate a new version while preserving motion and composition.

  1. Upload a source video

  2. Select a Control type (Pose, Depth, Face, or Canny); the sketch after this list shows how a Canny control signal is derived from the source video

  3. Enter the prompt describing your desired output
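
To make the Control type step more concrete, here is a minimal Python sketch of how a Canny control signal could be derived from a source video with OpenCV. The file name and thresholds are placeholder assumptions, and Dreamerland performs this step for you when you pick Canny; the code only illustrates the underlying idea, and the Pose, Depth, and Face options work the same way with different per-frame extractors.

```python
# A minimal sketch, assuming the "Canny" control type works by reducing
# every frame of the source video to an edge map that guides the layout
# of the generated video. Paths and thresholds below are illustrative,
# not Dreamerland's actual settings.
import cv2

def extract_canny_frames(video_path, low=100, high=200):
    """Return a list of single-channel Canny edge maps, one per frame."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(cv2.Canny(gray, low, high))
    capture.release()
    return frames

control_frames = extract_canny_frames("source.mp4")
print(f"Extracted {len(control_frames)} control frames")
```

Because only the edge maps (or pose skeletons, depth maps, face landmarks) are passed to the model, the generated video keeps the motion and composition of the source while the prompt decides its appearance.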

Here is an example result generated from this sample video and prompt: