Stable Video Diffusion Image-to-Video Model
Stable Video Diffusion (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips conditioned on a single input image. The model generates 25 frames at a resolution of 576x1024 given a context frame of the same size, and was finetuned from SVD Image-to-Video [14 frames]. The widely used f8-decoder is also finetuned for temporal consistency; for convenience, a variant with the standard frame-wise decoder is provided as well.
https://stability.ai/news/stable-video-diffusion-open-ai-video-model
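For anyone who wants to try it locally, here is a minimal sketch using the diffusers StableVideoDiffusionPipeline. It assumes the stabilityai/stable-video-diffusion-img2vid-xt checkpoint (the 25-frame model described above) and a CUDA GPU; exact parameters may differ from the official release.

```python
# Minimal sketch: generate a short clip from a single conditioning image with
# SVD Image-to-Video via the diffusers library.
# Assumptions: stabilityai/stable-video-diffusion-img2vid-xt checkpoint, CUDA GPU,
# an input image at input.jpg.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Conditioning frame, resized to the 576x1024 resolution the model was trained on.
image = load_image("input.jpg").resize((1024, 576))

generator = torch.manual_seed(42)
# decode_chunk_size trades VRAM for decoding speed when running the temporal decoder.
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]

export_to_video(frames, "generated.mp4", fps=7)
```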
Oh wow, I know the results are probably cherry-picked, but this still seems like such a step up.