JessicaE committed 39c1d3a (verified; parent: 2dbc2c6)

Update README.md

Files changed (1): README.md (+198 −1)

language:
- en
size_categories:
- 100K<n<1M
---

# OpenSeeSimE-Structural: Engineering Simulation Visual Question Answering Benchmark

## Dataset Summary

OpenSeeSimE-Structural is a large-scale benchmark dataset for evaluating vision-language models on structural analysis simulation interpretation tasks. It contains over 100,000 question-answer pairs across parametrically varied structural simulations, covering stress analysis and deformation patterns.

## Purpose

While vision-language models (VLMs) have shown promise in general visual reasoning, their effectiveness for specialized engineering simulation interpretation remains largely unexplored. This benchmark enables:

- Statistically robust evaluation of VLM performance on engineering visualizations
- Assessment across multiple reasoning capabilities (captioning, reasoning, grounding, relationship understanding)
- Evaluation using different question types (binary classification, multiple-choice, spatial grounding)

## Dataset Composition

### Statistics
- **Total instances**: 102,678 question-answer pairs
- **Simulation types**: 5 structural models (Dog Bone, Hip Implant, Pressure Vessel, Thermal Beam, Wall Bracket)
- **Parametric variations**: 1,024 unique instances per base model (4^5 parameter combinations)
- **Question categories**: Captioning, Reasoning, Grounding, Relationship Understanding
- **Question types**: Binary, Multiple-choice, Spatial grounding
- **Media formats**: Both static images (1920×1440 PNG) and videos (originally extracted at 200 frames, 29 fps, 7 seconds)

### Simulation Parameters

Each base model varies across 5 parameters with 4 values each (a sketch of the resulting sweep follows the list):

- **Dog Bone**: Length, Thickness, Diameter, Axial Load, Bending Load
- **Hip Implant**: Beam Length, Beam Diameter, Ball Diameter, Axial Load, Bending Load
- **Pressure Vessel**: Length, Thickness, Diameter, Material, Pressure
- **Thermal Beam**: Thickness, Bending Load, Young's Modulus, Tensile Yield Strength, Cross Section Shape
- **Wall Bracket**: Length, Width, Height, Thickness, Bending Force
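
The Cartesian product of 5 parameters × 4 values is how each model reaches 4^5 = 1,024 instances. A minimal sketch for the Dog Bone model, using the parameter names above with hypothetical value ranges (the card specifies four linearly spaced values per parameter but not the bounds):

```python
from itertools import product

import numpy as np

# Hypothetical ranges; only the parameter names and the
# 4-linearly-spaced-values-per-parameter grid come from the card.
dog_bone_params = {
    "length_mm": np.linspace(80, 120, 4),
    "thickness_mm": np.linspace(2, 8, 4),
    "diameter_mm": np.linspace(10, 16, 4),
    "axial_load_N": np.linspace(500, 2000, 4),
    "bending_load_N": np.linspace(100, 400, 4),
}

# Full factorial sweep: 4**5 = 1,024 simulation instances per model.
combinations = list(product(*dog_bone_params.values()))
assert len(combinations) == 1024
```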

### Question Distribution

- **Binary Classification**: 40% (yes/no questions about symmetry, stress types, uniformity, etc.)
- **Multiple-Choice**: 30% (4-option questions about deformation direction, stress dominance, magnitude ranges, etc.)
- **Spatial Grounding**: 30% (location-based questions with labeled regions A/B/C/D)

## Data Collection Process

### Simulation Generation
1. Base models sourced from Ansys Mechanical tutorial files
2. Parametric automation via PyMechanical and PyGeometry interfaces
3. Systematic variation across 5 parameters with 4 linearly spaced values
4. All simulations solved using finite element analysis with validated convergence settings

### Ground Truth Extraction
Automated extraction eliminates human annotation costs and ensures consistency:

- **Statistical Analysis**: Direct queries on result arrays (max, min, mean, std)
- **Distribution Analysis**: Threshold-based classification using coefficient of variation
- **Physics-Based Classification**: Stress tensor analysis and mechanics principles
- **Spatial Localization**: Color-based region generation with computer vision algorithms

All ground truth is derived from numerical simulation results rather than visual interpretation.
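
As an illustration of the distribution-analysis step, here is a minimal sketch of a coefficient-of-variation check against the CV ≤ 0.2 uniformity threshold listed under Labels below; the stress array is a stand-in for a real nodal result:

```python
import numpy as np

def classify_uniformity(values: np.ndarray, cv_threshold: float = 0.2) -> str:
    """Answer 'Yes' (uniform) when the coefficient of variation
    (std / mean) is at or below the threshold."""
    cv = values.std() / values.mean()
    return "Yes" if cv <= cv_threshold else "No"

# Stand-in for a nodal von Mises stress array queried from the solver.
stress = np.random.default_rng(0).normal(loc=100.0, scale=5.0, size=10_000)
print(classify_uniformity(stress))  # CV ≈ 0.05 -> "Yes"
```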

## Preprocessing and Data Format

### Image Processing
- Resolution: 1920×1440 pixels
- Format: PNG with lossless compression
- Standardized viewing orientations: front, back, left, right, top, bottom, isometric
- Consistent color mapping: rainbow gradients (red=maximum, blue=minimum)
- Automatic deformation scaling (1.5× relative to maximum dimension)

### Video Processing
- 200 frames at 29 fps (7 seconds duration)
- Maximum deformation at frame 100 (temporal midpoint)
- H.264 compression at 1920×1440 resolution
- Uniform frame sampling for model input (32 frames; see the sketch below)
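
A sketch of the 32-frame uniform sampling, assuming evenly spaced indices across the 200-frame clip (the card says "uniform" but does not pin down the exact index scheme):

```python
import numpy as np

def sample_frame_indices(total_frames: int = 200, num_samples: int = 32) -> np.ndarray:
    """Evenly spaced frame indices spanning the whole clip."""
    return np.linspace(0, total_frames - 1, num_samples).round().astype(int)

indices = sample_frame_indices()
print(indices[:5], indices[-1])  # [ 0  6 13 19 26] ... 199
```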

### Data Fields
```python
{
    'file_name': str,             # Unique identifier
    'source_file': str,           # Base simulation model
    'question': str,              # Question text
    'question_type': str,         # 'Binary', 'Multiple Choice', or 'Spatial'
    'question_id': int,           # Question identifier (1-20)
    'answer': str,                # Ground truth answer
    'answer_choices': List[str],  # Available options
    'correct_choice_idx': int,    # Index of correct answer
    'image': Image,               # PIL Image object (1920×1440)
    'video': Video,               # Video frames
    'media_type': str             # 'image' or 'video'
}
```
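
A minimal loading sketch using the `datasets` library; the repository id below is a placeholder for the dataset's actual Hub identifier, and the split name follows the test-only split described under Dataset Splits:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the dataset's real Hub path.
ds = load_dataset("JessicaE/OpenSeeSimE-Structural", split="test")

sample = ds[0]
print(sample["question"])
print(sample["answer_choices"])
print(sample["answer"])
```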

## Labels

All labels are automatically generated from simulation numerical results:

- **Binary questions**: "Yes" or "No"
- **Multiple-choice**: Single letter (A/B/C/D) or descriptive option
- **Spatial grounding**: Region label (A/B/C/D) corresponding to labeled visualization locations

Label generation employs domain-specific thresholds (the symmetry rule is sketched below):
- Uniformity: CV ≤ 0.2 (20%)
- Symmetry: 60% of node pairs within 10% tolerance (structural)
- Spatial matching: 50-pixel separation for region placement
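
For instance, the symmetry threshold can be read as the following check, assuming mirrored node pairs have already been matched geometrically (pair matching itself is not detailed in the card):

```python
import numpy as np

def is_symmetric(left: np.ndarray, right: np.ndarray,
                 rel_tol: float = 0.10, required_frac: float = 0.60) -> str:
    """Answer 'Yes' when at least 60% of mirrored node pairs
    agree within 10% relative tolerance."""
    denom = np.maximum(np.abs(left), np.abs(right))
    denom[denom == 0] = 1.0  # identical zero-valued pairs count as matching
    rel_diff = np.abs(left - right) / denom
    return "Yes" if (rel_diff <= rel_tol).mean() >= required_frac else "No"
```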

## Dataset Splits

- **Test split only**: 102,678 instances
- No train/validation splits provided (evaluation benchmark, not for model training)
- Representative sampling across all simulation types and question categories

## Intended Use

### Primary Use Cases
1. **Benchmark evaluation** of vision-language models on engineering simulation interpretation (a scoring sketch follows this list)
2. **Capability assessment** across visual reasoning dimensions (captioning, spatial grounding, relationship understanding)
3. **Transfer learning analysis** from general-domain to specialized technical visual reasoning
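
Since every answer is a short string ("Yes"/"No", a letter, or an option), exact-match accuracy per question type is a natural metric. A hedged sketch, where `predict` is a hypothetical stand-in for the VLM inference call being benchmarked:

```python
from collections import defaultdict

def score(dataset, predict) -> dict:
    """Exact-match accuracy broken down by question_type."""
    correct, total = defaultdict(int), defaultdict(int)
    for sample in dataset:
        qtype = sample["question_type"]  # 'Binary', 'Multiple Choice', or 'Spatial'
        prediction = predict(sample)     # hypothetical model call
        correct[qtype] += int(prediction.strip() == sample["answer"].strip())
        total[qtype] += 1
    return {qtype: correct[qtype] / total[qtype] for qtype in total}
```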

### Out-of-Scope Use
- Real-time engineering decision-making without expert validation
- Safety-critical applications without human oversight
- Generalization to simulation types beyond structural mechanics

## Limitations

### Technical Limitations
- **Objective tasks only**: Excludes subjective engineering judgments requiring domain expertise
- **Single physics domain**: Structural mechanics only (see OpenSeeSimE-Fluid for fluid dynamics)
- **Ansys-specific**: Visualizations generated using Ansys Mechanical rendering conventions
- **Static parameters**: Fixed material properties and boundary conditions per instance
- **2D visualizations**: All inputs are 2D projections of 3D simulations

### Known Biases
- **Color scheme dependency**: Questions exploit default rainbow gradient conventions
- **Geometry bias**: Selected simulation types may not represent the full diversity of structural analysis applications
- **View orientation bias**: Standardized camera positions may not capture all critical simulation features

## Ethical Considerations

### Responsible Use
- Models evaluated on this benchmark should NOT be deployed for safety-critical engineering decisions without expert validation
- Automated interpretation should augment, not replace, human engineering expertise
- Users should verify that benchmark performance translates to their specific simulation contexts

### Data Privacy
- The simulations contain no proprietary or confidential engineering data
- No personal information was collected
- Publicly available tutorial files were used as base models

### Environmental Impact
- Dataset generation required significant CPU compute resources
- Consider the environmental cost of large-scale model evaluation on this benchmark

## License

MIT License - Free for academic and commercial use with attribution

## Citation

If you use this dataset, please cite:

```bibtex
@article{ezemba2025openseesime,
  title={OpenSeeSimE: A Large-Scale Benchmark to Assess Vision-Language Model Question Answering Capabilities in Engineering Simulations},
  author={Ezemba, Jessica and Pohl, Jason and Tucker, Conrad and McComb, Christopher},
  year={2025}
}
```

## AI Usage Disclosure

### Dataset Generation
- **Simulation automation**: Python scripts with the Ansys PyMechanical interface
- **Ground truth extraction**: Automated computational protocols (no AI involvement)
- **Quality validation**: Expert oversight of automated extraction procedures
- **No generative AI** used in dataset creation, labeling, or curation

### Visualization Generation
- Ansys Mechanical rendering engine (deterministic, physics-based)
- Standardized color mapping and camera controls
- No AI-based image generation or enhancement

## Contact

**Authors**: Jessica Ezemba ([email protected]), Jason Pohl, Conrad Tucker, Christopher McComb
**Institution**: Department of Mechanical Engineering, Carnegie Mellon University

## Acknowledgments

- Ansys for providing simulation software and tutorial files
- Carnegie Mellon University for computational resources
- Reviewers and domain experts who validated the automated extraction protocols

---

**Version**: 1.0
**Last Updated**: December 2025
**Status**: Complete and stable