Video Processor
The VideoProcessor provides a unified API for video pipelines to prepare inputs for VAE encoding and to post-process outputs once they're decoded. The class inherits [VaeImageProcessor], so it includes transformations such as resizing, normalization, and conversion between PIL images, PyTorch tensors, and NumPy arrays.
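
The snippet below is a minimal sketch of that round trip, assuming the import path diffusers.video_processor.VideoProcessor and the preprocess_video/postprocess_video methods; the exact signatures, accepted input types, and output shape may differ from this illustration.

```python
from PIL import Image

# Assumed import path for the video processor module.
from diffusers.video_processor import VideoProcessor

# A dummy 8-frame video represented as a list of PIL images.
frames = [Image.new("RGB", (512, 512)) for _ in range(8)]

video_processor = VideoProcessor()

# preprocess_video resizes/normalizes the frames and returns a 5D tensor,
# expected to have shape (batch, channels, num_frames, height, width).
video = video_processor.preprocess_video(frames, height=256, width=256)
print(video.shape)  # e.g. torch.Size([1, 3, 8, 256, 256])

# postprocess_video converts a decoded tensor back into frames;
# output_type is assumed to accept "pil", "np", or "pt".
pil_frames = video_processor.postprocess_video(video, output_type="pil")
```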