EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation
Paper: arXiv:2503.18552
EvHumanMotion is a real-world dataset captured using the DAVIS346 event camera, focusing on human motion under diverse and challenging scenarios. It is designed to support event-driven human animation research, especially under motion blur, low-light, and overexposure conditions. This dataset was introduced in the EvAnimate paper.
The dataset is organized into two main parts: raw event streams in `.aedat4` format and frame-level data (event frames plus RGB frames). Each part is categorized into five environments:

- indoor_day/
- indoor_night_high_noise/
- indoor_night_low_noise/
- outdoor_day/
- outdoor_night/

Each environment contains four scenarios:

- low_light/
- motion_blur/
- normal/
- over_exposure/

Example path:
```
EvHumanMotion_frame/indoor_day/low_light/dvSave-2025_03_04_13_02_53/
├── event_frames/
│   ├── events_0000013494.png
│   └── ...
└── frames/
    ├── frames_1741064573617350.png
    └── ...
```
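Assuming a local checkout of the frame-level data with the layout above, the directory tree can be traversed with a small helper. `list_sequences` and `frame_pairs` are illustrative names for this sketch, not part of any official dataset tooling:

```python
from pathlib import Path

# Environments and scenarios as described above.
ENVIRONMENTS = [
    "indoor_day", "indoor_night_high_noise", "indoor_night_low_noise",
    "outdoor_day", "outdoor_night",
]
SCENARIOS = ["low_light", "motion_blur", "normal", "over_exposure"]

def list_sequences(root):
    """Yield (environment, scenario, sequence_dir) for every recording
    found under a local EvHumanMotion_frame/ checkout."""
    root = Path(root)
    for env in ENVIRONMENTS:
        for scen in SCENARIOS:
            scen_dir = root / env / scen
            if not scen_dir.is_dir():
                continue  # tolerate partial downloads
            for seq in sorted(scen_dir.iterdir()):
                if seq.is_dir():
                    yield env, scen, seq

def frame_pairs(seq_dir):
    """Return sorted event-frame and RGB-frame file lists for one sequence."""
    events = sorted((Path(seq_dir) / "event_frames").glob("events_*.png"))
    rgb = sorted((Path(seq_dir) / "frames").glob("frames_*.png"))
    return events, rgb
```

Sorting by filename keeps frames in capture order, since the event-frame indices and RGB timestamps are zero-padded.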
Both formats (`.aedat4` and frame-level) are provided. This dataset supports event-driven human animation research under challenging capture conditions. To load it with the `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("potentialming/EvHumanMotion")
```
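Once on disk, individual event and RGB frames are ordinary PNG files. A minimal sketch of reading one into a normalized array (Pillow and NumPy are assumptions of this sketch, not dataset requirements):

```python
import numpy as np
from PIL import Image

def load_frame(path):
    """Load a PNG frame (event frame or RGB frame) as a
    float32 array of shape (H, W, 3) with values in [0, 1].
    Illustrative helper, not part of the dataset tooling."""
    img = Image.open(path).convert("RGB")
    return np.asarray(img, dtype=np.float32) / 255.0
```

Event frames are stored as images here, so the same loader works for both subdirectories of a sequence.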
Apache 2.0 License
If you use this dataset, please cite:
```bibtex
@article{qu2025evanimate,
  title={EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation},
  author={Qu, Qiang and Li, Ming and Chen, Xiaoming and Liu, Tongliang},
  journal={arXiv preprint arXiv:2503.18552},
  year={2025}
}
```
Dataset maintained by Ming Li.