arxiv:2512.04563

COOPER: A Unified Model for Cooperative Perception and Reasoning in Spatial Intelligence

Published on Dec 4 · Submitted by zhangzef on Dec 8

Abstract

AI-generated summary

A unified multimodal large language model (MLLM) that integrates depth and segmentation modalities enhances spatial reasoning and perception through adaptive interleaved reasoning, improving spatial intelligence and general performance.

Visual Spatial Reasoning is crucial for enabling Multimodal Large Language Models (MLLMs) to understand object properties and spatial relationships, yet current models still struggle with 3D-aware reasoning. Existing approaches typically enhance either perception, by augmenting RGB inputs with auxiliary modalities such as depth and segmentation, or reasoning, by training on spatial VQA datasets and applying reinforcement learning, and thus treat these two aspects in isolation. In this work, we investigate whether a unified MLLM can develop an intrinsic ability to enhance spatial perception and, through adaptive interleaved reasoning, achieve stronger spatial intelligence. We propose COOPER, a unified MLLM that leverages depth and segmentation as auxiliary modalities and is trained in two stages to acquire auxiliary modality generation and adaptive, interleaved reasoning capabilities. COOPER achieves an average 6.91% improvement in spatial reasoning while maintaining general performance. Moreover, even a variant trained only for auxiliary modality generation attains a 7.92% gain on distance and size estimation, suggesting that learning to generate auxiliary modalities helps internalize spatial knowledge and strengthen spatial understanding.

Community

Paper submitter

Key Features of the GitHub Repository

  • 🧠 GRPO Training for BAGEL via TRL:
    Fine-tune BAGEL-style multimodal models with RL-style objectives.
    Optimize perception–reasoning behavior directly from feedback signals.
    Seamlessly extend from supervised multimodal CoT training to RL-based refinement (a minimal TRL sketch follows after this list).

  • 📊 VLMEvalKit Integration for BAGEL:
    One-line evaluation on a wide range of multimodal benchmarks.
    Unified interfaces for dataset loading, inference, and result aggregation.
    Direct comparison with other VLMs under consistent evaluation protocols (see the evaluation example after this list).

  • 🧩 SIBench (Single-Image Part) + GPT/DeepSeek Answer Extraction:
    Fully integrated into VLMEvalKit as a first-class evaluation task.
    Equipped with GPT/DeepSeek-based answer extractors that robustly parse free-form model outputs, reduce evaluation noise from formatting and phrasing, and provide more accurate and reliable spatial reasoning scores (see the extractor sketch after this list).
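
For concreteness, here is a minimal sketch of GRPO training with TRL's GRPOTrainer. It uses a small text-only model and a toy exact-match reward as stand-ins; the repository's actual BAGEL integration, datasets, and reward functions are assumed to plug into the same trainer interface but are not reproduced here.

```python
# Minimal GRPO sketch with TRL (illustrative; not the repo's training script).
# The model, prompts, and reward below are placeholders, not BAGEL itself.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Hypothetical spatial-VQA-style prompts with reference answers.
train_dataset = Dataset.from_list([
    {"prompt": "How far is the chair from the table, in meters? Answer with a number.",
     "answer": "1.5"},
    {"prompt": "Which object is larger, the sofa or the lamp? Answer with one word.",
     "answer": "sofa"},
])

def spatial_reward(completions, answer, **kwargs):
    # Reward 1.0 when the reference answer appears in the completion, else 0.0.
    return [1.0 if ref.lower() in out.lower() else 0.0
            for out, ref in zip(completions, answer)]

config = GRPOConfig(
    output_dir="grpo-spatial-sketch",
    per_device_train_batch_size=4,   # effective batch size must be divisible by num_generations
    num_generations=4,               # completions sampled per prompt
    logging_steps=1,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small text-only stand-in; swap in the BAGEL checkpoint per the repo
    reward_funcs=spatial_reward,
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```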
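
The snippet below illustrates the kind of "one-line" evaluation the VLMEvalKit integration targets. The model key "BAGEL" and the image path are assumptions about how the repository registers the model; MMBench_DEV_EN is a standard VLMEvalKit benchmark used purely as an example.

```python
# Illustrative only. "BAGEL" is assumed to be the key the repository adds to
# VLMEvalKit's model config; the real key or checkpoint path may differ.
#
# Typical one-line benchmark run via VLMEvalKit's standard entry point:
#   python run.py --data MMBench_DEV_EN --model BAGEL --verbose
#
# Quick smoke test through VLMEvalKit's Python API:
from vlmeval.config import supported_VLM

model = supported_VLM["BAGEL"]()  # assumed registry key added by the repo
response = model.generate(["demo.jpg", "How far is the chair from the camera?"])
print(response)
```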
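
Finally, a sketch of what an LLM-based answer extractor of this kind usually looks like: a judge model (GPT or DeepSeek, reached through an OpenAI-compatible client) maps a free-form prediction to a canonical choice letter. The prompt wording, judge model name, and function name are illustrative, not the repository's actual implementation.

```python
# Illustrative answer-extraction sketch (not the repo's exact code): an
# OpenAI-compatible judge model parses a free-form prediction into a choice letter.
from openai import OpenAI

client = OpenAI()  # point base_url/api_key at GPT or a DeepSeek-compatible endpoint

EXTRACT_PROMPT = (
    "You are grading a spatial-reasoning benchmark. Given the question, the "
    "options, and a model's free-form answer, reply with only the letter of the "
    "option the answer corresponds to (A, B, C, or D), or 'Z' if none match.\n\n"
    "Question: {question}\nOptions:\n{options}\nModel answer: {prediction}"
)

def extract_choice(question: str, options: str, prediction: str,
                   judge_model: str = "gpt-4o-mini") -> str:
    """Map a free-form prediction to a single option letter via the judge model."""
    resp = client.chat.completions.create(
        model=judge_model,
        temperature=0,
        messages=[{"role": "user", "content": EXTRACT_PROMPT.format(
            question=question, options=options, prediction=prediction)}],
    )
    return resp.choices[0].message.content.strip()[:1]

# Example: a verbose prediction is reduced to a clean, scoreable label.
print(extract_choice(
    "Which object is closer to the camera?",
    "A. the chair\nB. the table",
    "Judging by the depth cues, the chair appears closer, so I would pick the chair.",
))
```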


Models citing this paper: 2

Datasets citing this paper: 1

Collections including this paper: 1