
SpaceSense-Bench: Multi-Modal Spacecraft Perception and Pose Estimation Dataset

Project Page | Paper | Toolkit & Code

SpaceSense-Bench is a high-fidelity, simulation-based multi-modal (RGB, depth, LiDAR point cloud) dataset for spacecraft component-level semantic understanding. It covers 136 satellite models with synchronized data across all modalities.
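
As a quick start, individual satellite archives can be fetched directly from the Hub. The sketch below is a minimal example, assuming the raw/<satellite>.tar.gz layout described under Data Organization; ACE is just one satellite used for illustration.

```python
import tarfile
from huggingface_hub import hf_hub_download

# Fetch one satellite archive (raw/ACE.tar.gz is one example; each satellite
# has its own .tar.gz under raw/).
path = hf_hub_download(
    repo_id="Alvin16/SpaceSense-Bench",
    repo_type="dataset",
    filename="raw/ACE.tar.gz",
)

# Unpack into a local folder for use with the toolkit scripts below.
with tarfile.open(path, "r:gz") as tar:
    tar.extractall("data_example")
```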

Dataset Overview

| Item | Detail |
|---|---|
| Satellite Models | 136 (sourced from NASA/ESA 3D models) |
| Data Modalities | RGB, Depth, Semantic Segmentation, LiDAR Point Cloud, 6-DoF Pose |
| Image Resolution | 1024 x 1024 |
| Camera FOV | 50 degrees |
| Semantic Classes | 7 (main_body, solar_panel, dish_antenna, omni_antenna, payload, thruster, adapter_ring) |
| Simulation Platform | Unreal Engine 5.2.0 + AirSim 1.8.1 |

Sample Usage

The SpaceSense-Toolkit provides tools for converting raw data to standard formats and visualizing the results.

Installation

```bash
pip install -r requirements.txt
```

Conversion and Visualization

```bash
# Visualize the raw data
python SpaceSense-Toolkit/visualize/raw_data_web_visualizer.py --raw-data data_example

# Convert to Semantic-KITTI (3D segmentation)
python SpaceSense-Toolkit/convert/airsim_to_semantickitti.py --raw-data data_example --output output/semantickitti --satellite-json SpaceSense-Toolkit/configs/satellite_descriptions.json

# Convert to MMSegmentation (2D segmentation)
python SpaceSense-Toolkit/convert/airsim_to_mmseg.py --raw-data data_example --output output/mmseg

# Convert to YOLO (object detection)
python SpaceSense-Toolkit/convert/airsim_to_yolo.py --raw-data data_example --output output/yolo
```

Data Modalities

| Modality | Format | Unit / Range | Description |
|---|---|---|---|
| RGB | PNG (1024x1024) | 8-bit color | Scene rendering |
| Depth | PNG (1024x1024) | int32, millimeters (0 ~ 10,000,000 mm; background = 10,000 m) | Per-pixel depth map |
| Semantic Segmentation | PNG (1024x1024) | uint8, class ID per pixel (0 = background) | Component-level segmentation mask |
| LiDAR Point Cloud | ASC (x y z per line) | meters, 3 decimal places | Sparse 3D point cloud |
| 6-DoF Pose | CSV | meters + Hamilton quaternion (w, x, y, z) | Camera-to-target relative pose |
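
Because the depth maps use an int32 millimeter encoding with a 10,000 m deep-space sentinel, decoding deserves a small helper. This is a minimal sketch, assuming the PNG decodes to an integer array of millimeter values (the exact bit packing is not specified here); `load_depth_m` is a hypothetical name, not a toolkit function.

```python
import numpy as np
from PIL import Image

BACKGROUND_MM = 10_000_000  # deep-space sentinel: 10,000 m expressed in mm

def load_depth_m(path):
    """Return depth in meters with the background sentinel masked to NaN."""
    depth_mm = np.asarray(Image.open(path)).astype(np.int64)
    depth_m = depth_mm.astype(np.float64) / 1000.0  # mm -> m
    depth_m[depth_mm >= BACKGROUND_MM] = np.nan     # mask deep space
    return depth_m
```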

Coordinate System & Units

| Item | Convention |
|---|---|
| Camera Frame | X-forward, Y-right, Z-down (right-handed) |
| World Frame | AirSim NED, target spacecraft fixed at origin |
| Quaternion | Hamilton convention: w + xi + yj + zk |
| Euler Angles | ZYX intrinsic (yaw-pitch-roll) |
| Position | meters (m), 6 decimal places |
| Depth Map | millimeters (mm), int32; deep-space background = 10,000 m |
| LiDAR | meters (m), .asc format (x y z), 3 decimal places |
| Timestamp | YYYYMMDDHHMMSSmmm |
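
When consuming poses.csv, the Hamilton (w, x, y, z) ordering matters for building rotation matrices. Below is a minimal, library-free sketch; `quat_wxyz_to_rotmat` is a hypothetical helper, not part of the toolkit.

```python
import numpy as np

def quat_wxyz_to_rotmat(w, x, y, z):
    """Rotation matrix from a Hamilton-convention quaternion (w, x, y, z)."""
    n = np.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w / n, x / n, y / n, z / n  # normalize against rounding
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```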

Sensor Configuration

Camera (cam0)

- Resolution: 1024 x 1024
- FOV: 50 degrees
- Image types captured: RGB (type 0), Depth (type 2), Segmentation (type 5)
- TargetGamma: 1.0

LiDAR

- Range: 300 m
- Channels: 256
- Vertical FOV: -20 to +20 degrees
- Horizontal FOV: -20 to +20 degrees
- Data frame: SensorLocalFrame
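
Since the .asc files are plain "x y z" text in meters, loading them needs nothing beyond NumPy. A minimal sketch (`load_asc_pointcloud` is a hypothetical helper):

```python
import numpy as np

def load_asc_pointcloud(path):
    """Load an .asc point cloud: one 'x y z' triple per line, in meters."""
    pts = np.loadtxt(path, dtype=np.float64)
    return pts.reshape(-1, 3)  # (N, 3), robust to single-point files
```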

Data Split (Zero-shot / OOD)

The train, test, and validation splits contain completely non-overlapping satellite models, so held-out performance reflects zero-shot generalization to unseen spacecraft.

| Split | Satellites | Rule |
|---|---|---|
| Train | 117 | All remaining satellites (excluding the test and validation models) |
| Test | 14 | Every 10th model by index: seq 00, 10, 20, ..., 130 |
| Validation | 5 | Seq 131-135, reserved for future testing |
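
The rule column can be re-derived in a few lines; this sketch only reconstructs the index sets and confirms the 117 / 14 / 5 counts.

```python
indices = range(136)                        # satellite indices 000-135
test = [i for i in indices if i % 10 == 0]  # 00, 10, ..., 130 -> 14 models
val = list(range(131, 136))                 # 131-135 -> 5 models
train = [i for i in indices if i not in test and i not in val]  # -> 117
assert (len(train), len(test), len(val)) == (117, 14, 5)
```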

Data Organization

Each .tar.gz file in the raw/ folder contains data for one satellite:

```
<timestamp>_<satellite_name>/
β”œβ”€β”€ approach_front/
β”‚   β”œβ”€β”€ rgb/              # RGB images (.png)
β”‚   β”œβ”€β”€ depth/            # Depth maps (.png, int32, mm)
β”‚   β”œβ”€β”€ segmentation/     # Semantic masks (.png, uint8)
β”‚   β”œβ”€β”€ lidar/            # Point clouds (.asc)
β”‚   └── poses.csv         # 6-DoF poses
β”œβ”€β”€ approach_back/
β”œβ”€β”€ orbit_xy/
└── ...
```
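
A minimal sketch for walking one trajectory folder, assuming each modality file is named by its YYYYMMDDHHMMSSmmm capture timestamp (so RGB, depth, segmentation, and LiDAR frames share a filename stem); `iter_frames` is a hypothetical helper.

```python
from datetime import datetime
from pathlib import Path

def iter_frames(traj_dir):
    """Yield (timestamp, rgb, depth, seg, lidar) paths for one trajectory."""
    traj = Path(traj_dir)
    for rgb in sorted((traj / "rgb").glob("*.png")):
        stem = rgb.stem  # e.g. '20260120165126242'
        # %f right-pads the 3-digit millisecond field to microseconds.
        ts = datetime.strptime(stem, "%Y%m%d%H%M%S%f")
        yield (ts, rgb,
               traj / "depth" / f"{stem}.png",
               traj / "segmentation" / f"{stem}.png",
               traj / "lidar" / f"{stem}.asc")
```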

Semantic Class Definitions

| Class ID | Name | Description |
|---|---|---|
| 0 | background | Deep space background |
| 1 | main_body | Spacecraft main body / bus |
| 2 | solar_panel | Solar panels / solar arrays |
| 3 | dish_antenna | Dish / parabolic antennas |
| 4 | omni_antenna | Omnidirectional antennas / booms |
| 5 | payload | Scientific instruments / payloads |
| 6 | thruster | Thrusters / propulsion systems |
| 7 | adapter_ring | Launch adapter rings |
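
For inspecting the uint8 masks, a lookup-table colorizer is enough. The class IDs follow the table above; the palette colors are arbitrary display choices, not part of the dataset.

```python
import numpy as np

CLASS_NAMES = ["background", "main_body", "solar_panel", "dish_antenna",
               "omni_antenna", "payload", "thruster", "adapter_ring"]
PALETTE = np.array([                      # one RGB color per class ID
    [0, 0, 0], [128, 128, 128], [0, 92, 230], [230, 159, 0],
    [86, 180, 233], [0, 158, 115], [213, 94, 0], [204, 121, 167],
], dtype=np.uint8)

def colorize_mask(mask):
    """Map an (H, W) uint8 class-ID mask to an (H, W, 3) RGB image."""
    return PALETTE[mask]
```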

License

This dataset is released under the CC-BY-NC-4.0 license. Non-commercial use only.

Citation

```bibtex
@article{SpaceSense-Bench,
    title={SpaceSense-Bench: A Large-Scale Multi-Modal Benchmark for Spacecraft Perception and Pose Estimation},
    author={Aodi Wu and Jianhong Zuo and Zeyuan Zhao and Xubo Luo and Ruisuo Wang and Xue Wan},
    year={2026},
    url={https://arxiv.org/abs/2603.09320}
}
```