One-to-All Animation: Alignment-Free Character Animation and Image Pose Transfer

This repository contains the sample training data and benchmarks associated with the paper One-to-All Animation: Alignment-Free Character Animation and Image Pose Transfer.

The paper presents a unified framework for high-fidelity character animation and image pose transfer from reference images with arbitrary layouts, explicitly addressing spatial misalignment and partially visible references.

🌟 Highlights

We provide a complete and reproducible training and evaluation pipeline:

  • ✅ Full Training Code: Three-stage progressive training from scratch
  • ✅ Complete Benchmarks: Reproduction code and pre-trained checkpoints
  • ✅ Flexible Training Codebase: Multi-resolution, multi-aspect-ratio, and multi-frame training
  • ✅ Datasets: Pre-processed open-source datasets + self-collected cartoon data

☕️ Quick Inference (Sample Usage)

To perform quick inference with the models, follow these steps from the GitHub repository:

🔧 Dependencies and Installation

  1. Clone Repo

    git clone https://github.com/ssj9596/One-to-All-Animation.git
    cd One-to-All-Animation
    
  2. Create Conda Environment and Install Dependencies

    # create new conda env
    conda create -n one-to-all python=3.12
    conda activate one-to-all
    
    # install pytorch
    pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
    # or
    pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 -i https://mirrors.aliyun.com/pypi/simple/
    
    # install python dependencies
    pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
    
    
    # (Recommended) install flash attention 3 (or 2) from source:
    # https://github.com/Dao-AILab/flash-attention
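
  3. (Optional) Verify the Environment

    This quick check is not part of the official setup; it is a minimal sketch to confirm that PyTorch sees your GPU before you move on to training or inference:

    # illustrative check only; not a script shipped with the repository
    import torch

    print("torch version:", torch.__version__)            # expect 2.5.1
    print("CUDA available:", torch.cuda.is_available())   # should be True on a GPU machine
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))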
    

🎬 Training from scratch

💡 Data Collection Required: We find that current open-source datasets are not sufficient for training from scratch. We strongly recommend collecting at least 3,000 additional high-quality video samples for better results.

We divide the training process into several steps to help you reproduce our results from scratch (using the 1.3B model as an example).

  1. Download Pretrained Models

    Download the base model from HuggingFace: Wan-AI/Wan2.1-T2V-1.3B-Diffusers
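
    For example, a minimal download sketch using the huggingface_hub client (the local_dir below is an arbitrary choice, not a path required by the training scripts):

    # illustrative download sketch; the target directory is an assumption
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
        local_dir="./pretrained/Wan2.1-T2V-1.3B-Diffusers",
    )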

  2. Download Training Datasets and Pose Pool

    cd datasets
    bash setup_datasets.sh
    

    This will download and prepare:

    • Training datasets (open-source + cartoon): datasets/opensource_dataset/
    • Pose pool for face enhancement: datasets/opensource_pose_pool/
    (Manual download links are also available.)
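
    To confirm that the download completed, here is a small illustrative check; it makes no assumption about the file layout inside the folders and only counts entries under the two paths listed above:

    # illustrative check; not a script shipped with the repository
    from pathlib import Path

    for folder in ["datasets/opensource_dataset", "datasets/opensource_pose_pool"]:
        entries = list(Path(folder).rglob("*"))
        print(folder, "->", len(entries), "entries")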
  3. Training

    We provide three-stage training scripts:

    • Stage 1: Reference Extractor
    cd video-generation
    bash training_scripts/train1.3b_only_refextractor_2d.sh
    # Convert checkpoint to FP32
    cd outputs_wanx1.3b/train1.3b_only_refextractor_2d/checkpoint-xxx
    mkdir fp32_model_xxx
    python zero_to_fp32.py . fp32_model_xxx --safe_serialization
    # Run inference (update model path in inference_refextractor.py first)
    cd ../../../
    # Edit inference_refextractor.py and change ckpt_path to:
    # ./outputs_wanx1.3b/train1.3b_only_refextractor_2d/checkpoint-xxx/fp32_model_xxx
    python inference_refextractor.py
    
    • Stage 2: Pose Control
    bash training_scripts/train1.3b_posecontrol_prefix_2d.sh
    
    • Stage 3: Token Replace for Long Video Generation
    bash training_scripts/train1.3b_posecontrol_prefix_2d_tokenreplace.sh
    

    💡 Training Notes:

    • Each stage uses different training resolutions; check the scripts for the specific resolution settings.
    • Fine-tuning from our checkpoints: to continue training from our pre-trained models, use the Stage 3 script directly and modify the checkpoint path.

📊 Reproduce Paper Results

We provide scripts to reproduce the quantitative results reported in our paper.

  1. Download Benchmark

    cd benchmark
    bash setup_datasets.sh
    
  2. Prepare Model Input

    cd ../video-generation
    python reproduce/infer_preprocess.py 
    
  3. Run Inference

    We provide inference scripts for different model sizes and datasets:

    # TikTok dataset
    python reproduce/inference_tiktok1.3b.py   # 1.3B model
    python reproduce/inference_tiktok14b.py    # 14B model
    
    # Cartoon dataset
    python reproduce/inference_cartoon1.3b.py  # 1.3B model
    python reproduce/inference_cartoon14b.py   # 14B model
    
  4. Prepare GT/Pred Pairs for the Judge

    cd ../benchmark
    # TikTok dataset
    python prepare_eval_frames_tiktok.py
    # Cartoon dataset
    python prepare_eval_frames_cartoon.py
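
    These scripts handle the pairing. Purely as an illustration of what a gt/pred frame pair looks like, the sketch below dumps frames from one ground-truth and one predicted video with OpenCV; all paths are hypothetical:

    # illustration only -- the real preparation is done by prepare_eval_frames_*.py
    import os
    import cv2

    def dump_frames(video_path, out_dir):
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, f"{idx:05d}.png"), frame)
            idx += 1
        cap.release()

    dump_frames("gt.mp4", "eval_frames/gt")        # hypothetical input/output paths
    dump_frames("pred.mp4", "eval_frames/pred")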
    
  5. Run the Judge

    # prepare the DisCo environment and the LPIPS / FVD checkpoints for the judge
    cd DisCo
    # TikTok dataset
    bash eval_tiktok.sh
    python summary.py
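
    The judge reports metrics such as LPIPS and FVD. As a rough illustration of the LPIPS part only (not the DisCo evaluation code), the snippet below scores a single gt/pred frame pair with the lpips package; the file names are hypothetical:

    # illustrative LPIPS score for one frame pair; not the official evaluation
    import numpy as np
    import torch
    import lpips
    from PIL import Image

    def load_as_tensor(path):
        # HWC uint8 -> 1x3xHxW float in [-1, 1], the range lpips expects by default
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 127.5 - 1.0
        return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

    loss_fn = lpips.LPIPS(net="alex")
    score = loss_fn(load_as_tensor("eval_frames/gt/00000.png"),
                    load_as_tensor("eval_frames/pred/00000.png"))
    print("LPIPS:", score.item())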
    