
MIND: Benchmarking Memory Consistency and Action Control in World Models

TL;DR: The first open-domain closed-loop revisited benchmark for evaluating memory consistency and action control in world models

🌐 Homepage | 👉 Dataset | 📄 Paper | 💻 Code | 🏆 Leaderboard (coming soon)

📢 Updates

  • [2026-2-9]: MIND is online 🎉 🎉 🎉

πŸ“ TODO

  • Open-source MIND-World (1.3B) all training and inference code including a detailed code tutorial
  • Release the weights of all stages for MIND-World (1.3B) including frame-wised student model
  • Building Leaderboard
  • Building repo Awesomeβ€”Interactive World Model

📜 Abstract

World models aim to understand, remember, and predict dynamic visual environments, yet a unified benchmark for evaluating their fundamental abilities remains lacking. To address this gap, we introduce MIND, the first open-domain closed-loop revisited benchmark for evaluating Memory consIstency and action coNtrol in worlD models. MIND contains 250 high-quality videos at 1080p and 24 FPS, including 100 (first-person) + 100 (third-person) video clips under a shared action space and 25 + 25 clips across varied action spaces covering eight diverse scenes. We design an efficient evaluation framework to measure two core abilities: memory consistency and action control, capturing temporal stability and contextual coherence across viewpoints. Furthermore, we design various action spaces, including different character movement speeds and camera rotation angles, to evaluate action generalization across different action spaces under shared scenes. To facilitate future performance benchmarking on MIND, we introduce MIND-World, a novel interactive Video-to-World baseline. Extensive experiments demonstrate the completeness of MIND and reveal key challenges in current world models, including the difficulty of maintaining long-term memory consistency and generalizing across action spaces.

🌟 Project Overview


Fig 1. Overview of MIND. We build and collect the first open-domain benchmark using Unreal Engine 5, supporting both first-person and third-person perspectives at 1080p resolution and 24 FPS.

📊 Dataset Overview


Fig 2. Distribution of Scene Categories and Action Spaces in the MIND Dataset. MIND supports open-domain scenarios with diverse and well-balanced action spaces.

🚀 Setup

1. Environment setup
  • Follow ViPE's instructions to build the conda environment, until the ViPE command is available.

  • Install our requirements in the same conda environment:

pip install -r requirements.txt 
2. Multi-GPU Support

The metrics computation supports multi-GPU parallel processing for faster evaluation.

How Multi-GPU Works

  • Videos are put into a task queue.
  • Each GPU process takes one task from the queue whenever it is vacant.
  • If a task fails, it is put back into the queue.
  • Progress bars show the accumulated progress of all results.
  • The result file is updated every time a task finishes, so you can obtain intermediate results from it.

python src/process.py --gt_root /path/to/MIND-Data --test_root /path/to/test/videos --num_gpus 8 --metrics lcm,visual,action
  • --gt_root: Ground truth data root directory (required)
  • --test_root: Test data root directory (required)
  • --dino_path: DINOv3 model weights directory (default: ./dinov3_vitb16)
  • --num_gpus: Number of GPUs to use for parallel processing (default: 1)
  • --video_max_time: Maximum video frames to process (default: None = use all frames)
  • --output: Output JSON file path (default: result_{test_root}_{timestamp}.json)
  • --metrics: Comma-separated metrics to compute (default: lcm,visual,dino,action,gsc)
3. How to organize your test files
{model_name}
├── 1st_data
│   ├── action_space_test
│   │   ├── {corresponding data name}
│   │   │   └── video.mp4
│   │   ...
│   │
│   ├── mirror_test
│   │   ├── {arbitrary data name}
│   │   │   ├── path-1.mp4
│   │   │   ├── path-2.mp4
│   │   │   ├── path-3.mp4
│   │   │   ...
│   │   │   └── path-10.mp4
│   │   ...
│   │
│   └── mem_test
│       ├── {corresponding data name}
│       │   └── video.mp4
│       ...
│
├── 3rd_data
│   ├── action_space_test
│   │   ├── {corresponding data name}
│   │   │   └── video.mp4
│   │   ...
│   │
│   ├── mirror_test
│   │   ├── {arbitrary data name}
│   │   │   ├── path-1.mp4
│   │   │   ├── path-2.mp4
│   │   │   ├── path-3.mp4
│   │   │   ...
│   │   │   └── path-10.mp4
│   │   ...
│   │
│   └── mem_test
│       ├── {corresponding data name}
│       │   └── video.mp4
│       ...
  • {model_name}: your custom model name
  • {corresponding data name}: the file name of the corresponding ground-truth data
4. Detailed information on the output Result.json
{
  "video_max_time": [int] video_max_time given in cmd parameters; max frames of the sample video to compute metrics (except action accuracy).
  "data": [
    {
      "path": [string] the directory name of the video data.
      "perspective": [string] 1st_data/3rd_data, the perspective of the video data.
      "test_type": [string] mem_test/action_space_test, the test set of the video data.
      "error": [string] the error occur when computing metrics
      "mark_time": [int] the divider of memory context and expected perdiction; the start frame index of the expected prediction.
      "total_time": [int] the total frames of the ground truth video.
      "sample_frames": [int ]the total frames of the video to be tested.
      "video_results": [ the general scene consistency metric result.
        {
          "video_name": [string] the name of the video of the specific action path
          "error": [string] the error occur when computing metrics in this video
          "mark_time": [int] the divider of prediction and mirror perdiction; the start frame index of the mirror prediction.
          "sample_frames": [int] total frames of prediction and mirror perdiction; should be 2x of marktime.
          "gsc": { 
            "length": [int] length of the origin prediction and the mirror prediction.
            "mse": [list[float]] the per-frame mean square error.
            "avg_mse": [float] the average of mse.
            "lpips": [list[float]] the per-frame Learned Perceptual Image Patch Similarity.
            "avg_lpips": [float] the average of lpips.
            "ssim": [list[float]] the per-frame Structural Similarity Index Measure.
            "avg_ssim": [float] the average of ssim.
            "psnr": [list[float]] the per-frame Peak Signal-to-Noise Ratio.
            "avg_psnr": [float] the average of psnr.
          }
        },
        ...
      ]
      "lcm": { the long context memory metric result.
        "mse": [list[float]] the per-frame mean square error.
        "avg_mse": [float] the average of mse.
        "lpips": [list[float]] the per-frame Learned Perceptual Image Patch Similarity.
        "avg_lpips": [float] the average of lpips.
        "ssim": [list[float]] the per-frame Structural Similarity Index Measure.
        "avg_ssim": [float] the average of ssim.
        "psnr": [list[float]] the per-frame Peak Signal-to-Noise Ratio.
        "avg_psnr": [float] the average of psnr.
      },
      "visual_quality": { the visual quality metric result.
        "imaging": [list[float]] the per-frame imaging quality.
        "avg_imaging": [float] the average of imaging quality. 
        "aesthetic": [list[float]] the per-frame aesthetic quality.
        "avg_imaging": [float] the average of aesthetic quality. 
      },
      "action": { the action accuracy metric result. computed by ViPE pose estimation and trajectory alignment.
        "__overall__": { the overall statistics of all valid frames after outlier filtering.
          "count": [int] number of valid samples used for statistics.
          "rpe_trans_mean": [float] mean of Relative Pose Error for translation (in meters).
          "rpe_trans_median": [float] median of RPE translation.
          "rpe_rot_mean_deg": [float] mean of RPE rotation in degrees.
          "rpe_rot_median_deg": [float] median of RPE rotation.
        },
        "translation": { the statistics of pure translation actions (forward/backward/left/right).
          "count": [int] number of valid samples for translation actions.
          "rpe_trans_mean": [float] mean RPE translation for translation actions.
          "rpe_trans_median": [float] median RPE translation for translation actions.
          "rpe_rot_mean_deg": [float] mean RPE rotation for translation actions.
          "rpe_rot_median_deg": [float] median RPE rotation for translation actions.
        },
        "rotation": { the statistics of pure rotation actions (cam_left/cam_right/cam_up/cam_down).
          "count": [int] number of valid samples for rotation actions.
          ...
        },
        "other": { the statistics of combined actions (e.g., forward+look_right).
          "count": [int] number of valid samples for other actions.
          ...
        },
        "act:forward": { the statistics of specific action "forward".
          "count": [int] number of valid samples for this action.
          "rpe_trans_mean": [float] mean RPE translation.
          "rpe_trans_median": [float] median RPE translation.
          "rpe_rot_mean_deg": [float] mean RPE rotation.
          "rpe_rot_median_deg": [float] median RPE rotation.
        },
        "act:look_right": { the statistics of specific action "look_right".
          ...
        },
        ...
      },
      "dino": { the dino mse metric result.
        "dino_mse": [list[float]] the per-frame mse of dino features.
        "avg_dino_mse": [float] the average of dino_mse. 
      }
    },
    ...
  ]
}
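Each metric block pairs a per-frame list with a precomputed `avg_*` field (e.g. `avg_mse` is the mean of `mse`). A minimal sketch for pulling every `avg_*` value out of a result file, whatever its nesting; the `sample` dict below is a made-up fragment mirroring the schema above, not real benchmark output:

```python
from statistics import mean

def collect_averages(node, prefix=""):
    """Recursively collect every avg_* field from a nested metrics dict,
    keyed by its dotted path."""
    out = {}
    if isinstance(node, dict):
        for key, val in node.items():
            path = f"{prefix}{key}"
            if key.startswith("avg_") and isinstance(val, (int, float)):
                out[path] = val
            else:
                out.update(collect_averages(val, path + "."))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            out.update(collect_averages(item, f"{prefix}{i}."))
    return out

# Made-up fragment following the schema above.
sample = {
    "lcm": {"mse": [0.1, 0.3], "avg_mse": 0.2},
    "dino": {"dino_mse": [0.5, 0.7], "avg_dino_mse": 0.6},
}
averages = collect_averages(sample)
```

In a real result file, the same traversal also surfaces the per-action `rpe_*` means under `action`, since those are plain floats as well (they just lack the `avg_` prefix, so the filter would need to be widened).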

πŸ—‚ Dataset Format

MIND is available here!

1. The structure of MIND ground truth videos (both for training and for testing)
MIND-Data
β”œβ”€β”€ 1st_data
β”‚   β”œβ”€β”€ test
β”‚   β”‚   β”œβ”€β”€ action_space_test
β”‚   β”‚   β”‚   β”œβ”€β”€ {gt data name}
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ action.json
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ images.txt
β”‚   β”‚   β”‚   β”‚   └── video.mp4
β”‚   β”‚   β”‚   ...
β”‚   β”‚   β”‚
β”‚   β”‚   └── mem_test
β”‚   β”‚       β”œβ”€β”€ {gt data name}
β”‚   β”‚       β”‚   β”œβ”€β”€ action.json
β”‚   β”‚       β”‚   β”œβ”€β”€ images.txt
β”‚   β”‚       β”‚   └── video.mp4
β”‚   β”‚       ...
β”‚   └── train
β”‚       β”œβ”€β”€ {gt data name}
β”‚       β”‚   β”œβ”€β”€ action.json
β”‚       β”‚   └── video.mp4
β”‚       ...
β”‚
β”œβ”€β”€ 3rd_data
β”‚   β”œβ”€β”€ test
β”‚   β”‚   β”œβ”€β”€ action_space_test
β”‚   β”‚   β”‚   β”œβ”€β”€ {gt data name}
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ action.json
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ images.txt
β”‚   β”‚   β”‚   β”‚   └── video.mp4
β”‚   β”‚   β”‚   ...
β”‚   β”‚   β”‚
β”‚   β”‚   └── mem_test
β”‚   β”‚       β”œβ”€β”€ {gt data name}
β”‚   β”‚       β”‚   β”œβ”€β”€ action.json
β”‚   β”‚       β”‚   β”œβ”€β”€ images.txt
β”‚   β”‚       β”‚   └── video.mp4
β”‚   β”‚       ...
β”‚   └── train
β”‚       β”œβ”€β”€ {gt data name}
β”‚       β”‚   β”œβ”€β”€ action.json
β”‚       β”‚   └── video.mp4
β”‚       ...
2. The detailed format of action.json
{
    "mark_time": [int] the divider of memory context and expected perdiction; the start frame index of the expected prediction
    "total_time": [int] the total frames of the ground truth video
    "caption" : [text] the text description of the ground truth video
    "data": [
        {
            "time": [int] frame index
            "ws": [int] 0: move forward, 1: move backward
            "ad": [int] 0: move left, 1: move right
            "ud": [int] 0: look up, 1: look down
            "lr": [int] 0: look left, 1: look right
            "actor_pos": {
                "x": [float] the x-coordinate of the character
                "y": [float] the y-coordinate of the character
                "z": [float] the z-coordinate of the character
            },
            "actor_rpy": {
                "x": [float] the roll angle of the character (Euler angles)
                "y": [float] the pitch angle of the character
                "z": [float] the yaw angle of the character
            },
            "camera_pos": {
                    # only exists in 3rd-person mode
                "x": [float] the x-coordinate of the camera 
                "y": [float] the y-coordinate of the camera
                "z": [float] the z-coordinate of the camera
            },
            "camera_rpy": {
                       # only exists in 3rd-person mode
                "x": [float] the roll angle of the camera (Euler angles)
                "y": [float] the pitch angle of the camera
                "z": [float] the yaw angle of the camera
            }
        },
        ...
    ]
}
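The four binary channels combine into composite actions (e.g. the `forward+look_right` category in the metrics section). A small sketch that decodes one `data` entry into a readable label; it assumes a channel key is simply absent when that input is idle, which the schema above does not state explicitly:

```python
# Labels for the four binary action channels, per the codes documented above.
WS = {0: "forward", 1: "backward"}      # move forward / backward
AD = {0: "left", 1: "right"}            # move left / right
UD = {0: "look_up", 1: "look_down"}     # look up / down
LR = {0: "look_left", 1: "look_right"}  # look left / right

def describe_frame(entry):
    """Turn one `data` entry from action.json into a '+'-joined label."""
    parts = []
    for table, key in ((WS, "ws"), (AD, "ad"), (UD, "ud"), (LR, "lr")):
        code = entry.get(key)
        if code is not None:
            parts.append(table[code])
    return "+".join(parts) or "idle"
```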

πŸ† LeaderBoard

The leaderboard is coming soon.

πŸŽ“ BibTex

If you find our work helpful, we would appreciate a citation and a star:

@misc{ye2026mind,
      title={MIND: Benchmarking Memory Consistency and Action Control in World Models}, 
      author={Yixuan Ye and Xuanyu Lu and Yuxin Jiang and Yuchao Gu and Rui Zhao and Qiwei Liang and Jiachun Pan and Fengda Zhang and Weijia Wu and Alex Jinpeng Wang},
      year={2026},
      eprint={2602.08025},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.08025}, 
}

πŸ“§ Contact

Please send an email to yixuanye12@gmail.com if you have any questions.

πŸ™ Acknowledgements

We would like to thank ViPE and SkyReels-V2 for their great work.
