
VisPhyBench

To evaluate how well models reconstruct appearance and reproduce physically plausible motion, we introduce VisPhyBench, a unified evaluation protocol comprising 209 scenes derived from 108 physical templates. It assesses physical understanding through the lens of code-driven resimulation in both 2D and 3D scenes, integrating metrics from multiple aspects. Each scene is also annotated with a coarse difficulty label (easy/medium/hard).

Dataset Details

  • Created by: Jiarong Liang
  • Language(s) (NLP): English
  • License: MIT
  • Repository: https://github.com/TIGER-AI-Lab/VisPhyWorld

Uses

The dataset is used to evaluate how well models reconstruct appearance and reproduce physically plausible motion.

Dataset Structure

The difficulty distribution of the two VisPhyBench splits is as follows: sub (209 scenes) contains 114 easy, 67 medium, and 28 hard scenes (54.5%/32.1%/13.4%); test (49 scenes) contains 29 easy, 17 medium, and 3 hard scenes (59.2%/34.7%/6.1%).
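The split percentages above follow directly from the per-difficulty counts; a minimal check in Python:

```python
counts = {
    "sub":  {"easy": 114, "medium": 67, "hard": 28},  # 209 scenes
    "test": {"easy": 29,  "medium": 17, "hard": 3},   # 49 scenes
}

def pct(split):
    """Return per-difficulty percentages (rounded to one decimal) for a split."""
    total = sum(counts[split].values())
    return {k: round(100 * v / total, 1) for k, v in counts[split].items()}

print(pct("sub"))   # {'easy': 54.5, 'medium': 32.1, 'hard': 13.4}
print(pct("test"))  # {'easy': 59.2, 'medium': 34.7, 'hard': 6.1}
```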

What each sample contains

VisPhyBench is provided as two splits:

  • sub: a larger split intended for evaluation and analysis.
  • test: a smaller split subsampled from sub for quick sanity checks.

For each sample, we provide:

  1. A short video of a synthetic physical scene.
  2. A detection JSON (per sample) that describes the scene in the first frame.
  3. A difficulty label (easy/medium/hard) derived from the mean of eight annotators’ 1–5 ratings.
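The difficulty label comes from the mean of eight annotators' 1–5 ratings. The card does not state the actual cutoffs, so the thresholds below (`easy_max`, `hard_min`) are illustrative assumptions, not the dataset's real values:

```python
def difficulty_label(ratings, easy_max=2.0, hard_min=4.0):
    """Map the mean of per-annotator 1-5 ratings to a coarse label.

    NOTE: easy_max and hard_min are illustrative, assumed thresholds;
    the dataset card does not specify the cutoffs it actually used.
    """
    mean = sum(ratings) / len(ratings)
    if mean <= easy_max:
        return "easy"
    if mean >= hard_min:
        return "hard"
    return "medium"

print(difficulty_label([1, 2, 1, 2, 1, 2, 1, 2]))  # mean 1.5 -> easy
```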

Detection JSON format

Each detection JSON includes:

  • image_size: the image width/height.
  • coordinate_system: conventions for coordinates (e.g., origin and axis directions).
  • objects: a list of detected objects. Each object includes:
    • id: unique identifier.
    • category: coarse geometry category.
    • color_rgb: RGB color triplet.
    • position: object position (e.g., center coordinates).
    • bbox: bounding box coordinates and size.
    • size: coarse size fields (e.g., radius/length/thickness); which fields appear depends on the object category.
    • orientation (when present): the object's angle in degrees and how that angle is defined.

These fields specify object locations and attributes precisely, which helps an LLM initialize objects correctly when generating executable simulation code.
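A detection JSON with the fields above can be consumed directly, for example to seed object initialization in generated simulation code. A minimal sketch follows; the concrete values (object id, category, coordinate-system strings, coordinates) are made up for illustration and are not taken from the dataset:

```python
import json

# A toy detection JSON matching the documented fields; values are illustrative.
sample = json.loads("""
{
  "image_size": {"width": 640, "height": 480},
  "coordinate_system": {"origin": "top_left", "x_axis": "right", "y_axis": "down"},
  "objects": [
    {
      "id": "obj_0",
      "category": "circle",
      "color_rgb": [255, 0, 0],
      "position": {"center_x": 120.0, "center_y": 96.0},
      "bbox": {"x_min": 100, "y_min": 76, "x_max": 140, "y_max": 116,
               "width": 40, "height": 40},
      "size": {"area_pixels": 1257, "radius_pixels": 20.0}
    }
  ]
}
""")

def object_summary(det):
    """Extract (id, category, center, radius) per object from a detection JSON."""
    out = []
    for obj in det["objects"]:
        pos = obj["position"]
        out.append((obj["id"], obj["category"],
                    (pos["center_x"], pos["center_y"]),
                    obj["size"].get("radius_pixels")))
    return out

print(object_summary(sample))
```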

BibTeX

@misc{visphybench2026,
  title        = {VisPhyBench},
  author       = {Liang, Jiarong and Ku, Max and Hui, Ka-Hei and Nie, Ping and Chen, Wenhu},
  howpublished = {GitHub repository},
  year         = {2026},
  url          = {https://github.com/TIGER-AI-Lab/VisPhyWorld}
}