Collections
Discover the best community collections!
Collections including paper arxiv:2506.07491

- WebWatcher: Breaking New Frontier of Vision-Language Deep Research Agent
  Paper • 2508.05748 • Published • 141
- Agent Lightning: Train ANY AI Agents with Reinforcement Learning
  Paper • 2508.03680 • Published • 121
- SpatialLM: Training Large Language Models for Structured Indoor Modeling
  Paper • 2506.07491 • Published • 50
- LongSplat: Robust Unposed 3D Gaussian Splatting for Casual Long Videos
  Paper • 2508.14041 • Published • 59

- SpatialLM: Training Large Language Models for Structured Indoor Modeling
  Paper • 2506.07491 • Published • 50
- RynnEC: Bringing MLLMs into Embodied World
  Paper • 2508.14160 • Published • 19
- InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models
  Paper • 2504.10479 • Published • 303

- A Dataset for Crucial Object Recognition in Blind and Low-Vision Individuals' Navigation
  Paper • 2407.16777 • Published
- SpatialLM: Training Large Language Models for Structured Indoor Modeling
  Paper • 2506.07491 • Published • 50
- Cyclic-Bootstrap Labeling for Weakly Supervised Object Detection
  Paper • 2308.05991 • Published
- jxu124/objects365
  Viewer • Updated • 1.82M • 371 • 2

- Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning
  Paper • 2506.07044 • Published • 114
- ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning
  Paper • 2506.09513 • Published • 100
- AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time
  Paper • 2505.24863 • Published • 97
- Seedance 1.0: Exploring the Boundaries of Video Generation Models
  Paper • 2506.09113 • Published • 104

- SpatialLM: Training Large Language Models for Structured Indoor Modeling
  Paper • 2506.07491 • Published • 50
- Story2Board: A Training-Free Approach for Expressive Storyboard Generation
  Paper • 2508.09983 • Published • 68
- Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens
  Paper • 2503.01710 • Published • 6
- HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels
  Paper • 2507.21809 • Published • 135

- SpatialLM: Training Large Language Models for Structured Indoor Modeling
  Paper • 2506.07491 • Published • 50
- Hunyuan3D 2.5: Towards High-Fidelity 3D Assets Generation with Ultimate Details
  Paper • 2506.16504 • Published • 29
- 3D-R1: Enhancing Reasoning in 3D VLMs for Unified Scene Understanding
  Paper • 2507.23478 • Published • 15

- BLIP3-o: A Family of Fully Open Unified Multimodal Models-Architecture, Training and Dataset
  Paper • 2505.09568 • Published • 97
- Qwen3 Technical Report
  Paper • 2505.09388 • Published • 317
- GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning
  Paper • 2505.11049 • Published • 60
- Emerging Properties in Unified Multimodal Pretraining
  Paper • 2505.14683 • Published • 134