---
task_categories:
- image-text-to-text
- video-text-to-text
- object-detection
- image-segmentation
language:
- en
---
This repository contains the evaluation data presented in **OneThinker: All-in-one Reasoning Model for Image and Video**.

Project Page: https://github.com/tulerfeng/OneThinker

Code: https://github.com/tulerfeng/OneThinker
## About OneThinker
We introduce OneThinker, an all-in-one multimodal reasoning generalist that is capable of thinking across a wide range of fundamental visual tasks within a single model.
We construct the large-scale OneThinker-600k multi-task training corpus and build OneThinker-SFT-340k with high-quality CoT annotations for cold-start SFT. Moreover, we propose EMA-GRPO, a new RL method that balances heterogeneous reward signals across diverse visual tasks by simply tracking task-wise moving averages of the reward standard deviation (a sketch of this idea appears below).
OneThinker demonstrates strong performance on 31 benchmarks across 10 fundamental vision tasks, while showing cross-task knowledge transfer and promising zero-shot generalization, taking a step toward a unified multimodal reasoning generalist.
All code, models, and data are fully released.
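
As a rough illustration of the EMA-GRPO idea mentioned above, the following Python sketch tracks a task-wise exponential moving average (EMA) of the reward standard deviation and uses it to scale group-relative advantages. This is a minimal sketch, not the released implementation: the class name, decay value, and exact normalization are illustrative assumptions; see the paper and code for the actual method.

```python
# Minimal sketch of EMA-GRPO-style reward balancing (illustrative only).
# Idea: tasks with inherently noisier rewards should not dominate the
# policy update, so normalize advantages by a per-task EMA of reward std
# rather than the per-group std.
from collections import defaultdict

import numpy as np


class TaskRewardNormalizer:
    """Tracks a task-wise EMA of reward std for advantage normalization."""

    def __init__(self, decay: float = 0.99, eps: float = 1e-6):
        self.decay = decay
        self.eps = eps
        self.ema_std = defaultdict(lambda: None)  # task name -> EMA of reward std

    def normalize(self, task: str, rewards: np.ndarray) -> np.ndarray:
        """Return group-relative advantages scaled by the task's EMA std."""
        batch_std = rewards.std()
        prev = self.ema_std[task]
        # Update the moving average of the reward std for this task.
        self.ema_std[task] = batch_std if prev is None else (
            self.decay * prev + (1.0 - self.decay) * batch_std
        )
        # Group-relative baseline as in GRPO, but divided by the task-wise
        # EMA std instead of the per-group std.
        return (rewards - rewards.mean()) / (self.ema_std[task] + self.eps)


# Hypothetical usage with one group of rollouts for a segmentation prompt:
normalizer = TaskRewardNormalizer()
advantages = normalizer.normalize("segmentation", np.array([0.2, 0.8, 0.5, 0.9]))
```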
## Dataset
Our dataset covers both image and video modalities and spans a series of fundamental visual reasoning tasks, including rule-based QA, open-ended QA, captioning, spatial grounding, temporal grounding, spatio-temporal grounding, tracking, and segmentation.
To enable effective SFT initialization for reasoning, we leverage a strong proprietary model, Seed1.5-VL, to produce CoT annotations.
The `onethinker_rl_train.json` file is for RL training, while `onethinker_sft_image.json` and `onethinker_sft_video.json` are for the SFT cold start. The JSON files ending with `_unsampled` are the unsampled full sets.
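
For a quick look at one of the files, a minimal Python sketch (this assumes each file is a top-level JSON array of records; the field names are not documented here, so it only reports the record count and the keys of the first entry):

```python
import json

# Inspect the RL training file; assumes a top-level JSON array of records.
with open("onethinker_rl_train.json") as f:
    data = json.load(f)

print(f"{len(data)} RL training samples")
print("Fields of first sample:", list(data[0].keys()))
```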
## Sample Usage
For inference on a single example, you may refer to:

```bash
python ./Evaluation/inference_single/inference.py
```