Xue0823 and nielsr (HF Staff) committed
Commit 74db362 · verified · 1 Parent(s): 0204ddf

Improve dataset card: Add task categories, paper link, relevant tags, and generation usage (#2)


- Improve dataset card: Add task categories, paper link, relevant tags, and generation usage (ba274943ea0cb23d9a0a9692e0d94044f77aec5e)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +48 -7
README.md CHANGED
@@ -1,11 +1,25 @@
  ---
  license: mit
  ---
- ### Introduction
- Official dataset repo for paper: Reasoning Path and Latent State Analysis for Multi-view Visual Spatial Reasoning: A Cognitive Science Perspective
- Github link: https://github.com/pittisl/ReMindView-Bench

- ### Reconstructing the dataset

  The dataset archive is split into 45GB parts to comply with the per-file limit. To rebuild the original tar after downloading all parts:
@@ -13,7 +27,34 @@ The dataset archive is split into 45GB parts to comply with the per-file limit.
  cat ReMindView-Bench.tar.part-* > ReMindView-Bench.tar
  ```

- ### Dataset content

  VQA samples are stored in CSV files with the following columns: `folder_path` (scene/view folder), `query_type` (query relationship type), `query`, `ground_truth`, `choices`, `cross_frame` (whether cross-frame reasoning is necessary), `perspective_changing` (whether perspective changing is required), and `object_num` (number of objects across all frames).
@@ -27,7 +68,7 @@ Example row:
  - `perspective_changing`: `False`
  - `object_num`: `18`

- ### Sample scenes

  Below are several example renders from ReMindView-Bench showing indoor layouts and object detail captured in the benchmark.
  Example 1:
@@ -84,4 +125,4 @@ Example 8:

  - Query: If you are positioned where the white small kitchen cabinet is, facing the same direction of the white small kitchen cabinet and then turn left, which object would be in the front of the dining table from this view direction?
  - Choice: A. wineglass, B. pot, C. chair
- - Answer: C. chair
 
  ---
  license: mit
+ task_categories:
+ - image-text-to-text
+ language:
+ - en
+ tags:
+ - vlm
+ - spatial-reasoning
+ - multi-view
+ - vqa
+ - cognition
  ---

+ # ReMindView-Bench Dataset
+
+ [Paper](https://huggingface.co/papers/2512.02340) | [Code](https://github.com/pittisl/ReMindView-Bench)
+
+ ## Introduction
+ ReMindView-Bench is a cognitively grounded benchmark for evaluating how Vision-Language Models (VLMs) construct, align, and maintain spatial mental models across complementary viewpoints. Current VLMs struggle to maintain geometric coherence and cross-view consistency when reasoning about space in multi-view settings; the benchmark addresses this by providing a fine-grained evaluation that isolates multi-view reasoning.
+
+ ## Reconstructing the dataset

  The dataset archive is split into 45GB parts to comply with the per-file limit. To rebuild the original tar after downloading all parts:

  ```bash
  cat ReMindView-Bench.tar.part-* > ReMindView-Bench.tar
  ```
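Before extracting, the rebuilt archive can be sanity-checked with the Python standard library; a minimal sketch (which member names the tar actually contains is not stated on the card):

```python
import tarfile

# Stream the tar index without extracting everything;
# a clean listing confirms the parts were concatenated correctly.
with tarfile.open("ReMindView-Bench.tar") as tf:
    for i, member in enumerate(tf):
        print(member.name)
        if i == 4:  # peek at the first five members only
            break
```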

+ ## Dataset Generation
+
+ To generate scenes, renders, and QA CSVs for the benchmark, follow these steps from the GitHub repository (a driver sketch follows the list):
+
+ 1. **Install Blender and Python Dependencies**:
+ You need to install Blender (headless is fine) and the Python dependencies used by Infinigen, plus common packages. Run the scripts with Blender’s bundled Python, or via `blender --background --python <script> -- --flags`, so that `bpy` is available.
+
+ 2. **Generate Scenes and Renders**:
+ From the repo root, generate scenes and renders:
+ ```bash
+ bash scene_generation.sh
+ ```
+ This script sweeps seeds 0–9 across five room types and writes scenes to `outputs/indoors/<ROOM>_<SEED>`, object-centric frames to `object_centric_view_frame_outputs/<ROOM>/<ROOM>_<SEED>`, and view-centric frames to `view_centric_view_frame_outputs/<ROOM>/<ROOM>_<SEED>`.
+
+ 3. **Clean Empty/Invalid Views**:
+ ```bash
+ python clean_visual_data.py --dir_path object_centric_view_frame_outputs
+ ```
+ Run the same command for `view_centric_view_frame_outputs`.
+
+ 4. **Produce QA CSVs**:
+ Choose one of `view_view`, `view_object`, `object_object`. For example:
+ ```bash
+ python ground_truth_generation.py --image_folder object_centric_view_frame_outputs --qa_type object_object
+ ```
+ The output CSV is saved beside the image folder (e.g., `object_centric_view_frame_outputs/object_object_qa.csv`).
+
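Steps 2–4 chain into a short pipeline; a minimal Python driver sketch, assuming the repo-root scripts named above and that both frame folders and all three QA types should be processed:

```python
import subprocess

# Step 2: generate scenes plus object- and view-centric frames (seeds 0-9, five room types)
subprocess.run(["bash", "scene_generation.sh"], check=True)

# Step 3: drop empty/invalid views from both frame folders
for frame_dir in ["object_centric_view_frame_outputs", "view_centric_view_frame_outputs"]:
    subprocess.run(["python", "clean_visual_data.py", "--dir_path", frame_dir], check=True)

# Step 4: one QA CSV per relationship type, written beside the image folder
for qa_type in ["view_view", "view_object", "object_object"]:
    subprocess.run(
        ["python", "ground_truth_generation.py",
         "--image_folder", "object_centric_view_frame_outputs",
         "--qa_type", qa_type],
        check=True,
    )
```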
+ ## Dataset content

  VQA samples are stored in CSV files with the following columns: `folder_path` (scene/view folder), `query_type` (query relationship type), `query`, `ground_truth`, `choices`, `cross_frame` (whether cross-frame reasoning is necessary), `perspective_changing` (whether perspective changing is required), and `object_num` (number of objects across all frames).

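The QA CSVs load directly with pandas; a minimal sketch, assuming the `object_object_qa.csv` produced by the generation steps above:

```python
import pandas as pd

# Load one QA split; the path assumes the generation steps shown earlier.
df = pd.read_csv("object_centric_view_frame_outputs/object_object_qa.csv")

# `cross_frame` holds True/False values, which pandas reads as booleans;
# keep only the samples that require cross-frame reasoning.
needs_cross_frame = df[df["cross_frame"]]

# Inspect one sample: the question, its candidate answers, and the label.
row = needs_cross_frame.iloc[0]
print(row["query"], row["choices"], row["ground_truth"], sep="\n")
```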
 
  …
  - `perspective_changing`: `False`
  - `object_num`: `18`

+ ## Sample scenes

  Below are several example renders from ReMindView-Bench showing indoor layouts and object detail captured in the benchmark.
  Example 1:
 
  …

  - Query: If you are positioned where the white small kitchen cabinet is, facing the same direction of the white small kitchen cabinet and then turn left, which object would be in the front of the dining table from this view direction?
  - Choice: A. wineglass, B. pot, C. chair
+ - Answer: C. chair
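Each sample pairs a query with lettered choices and a ground-truth answer, as in Example 8 above; a minimal sketch of turning a row into a multiple-choice prompt and scoring a reply (`format_prompt` and `is_correct` are hypothetical helpers, not part of the repo):

```python
# Hypothetical helpers for consuming ReMindView-Bench samples; not from the repo.

def format_prompt(query: str, choices: str) -> str:
    """Compose the question shown to a VLM alongside the rendered views."""
    return f"{query}\nChoices: {choices}\nAnswer with a single letter."

def is_correct(model_reply: str, ground_truth: str) -> bool:
    """Compare the reply's leading letter to the ground-truth letter."""
    return model_reply.strip()[:1].upper() == ground_truth.strip()[:1].upper()

# Example 8 from the card as a worked case:
prompt = format_prompt(
    "If you are positioned where the white small kitchen cabinet is, facing the "
    "same direction of the white small kitchen cabinet and then turn left, which "
    "object would be in the front of the dining table from this view direction?",
    "A. wineglass, B. pot, C. chair",
)
print(is_correct("C. chair", "C. chair"))  # -> True
```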