
🔊 Speech-RATE Dataset


📢 News & Announcements

  • [2026-04] We are currently finalizing the dataset; stay tuned for the full release!

📋 Overview

Speech-RATE is a curated dataset derived from CT-RATE, designed to support research at the intersection of speech and medical imaging. It provides high-quality, synthetically generated audio recordings of radiology reports from CT-RATE. Combined with the original CT-RATE dataset, Speech-RATE enables the development and evaluation of multimodal models that integrate speech and CT imaging data.

(Figure: Speech-RATE overview)

📊 Dataset Statistics

| Property                 | Value        |
|--------------------------|--------------|
| Spoken findings sections | 50,188       |
| Total duration           | 1,197 h      |
| Avg. length              | 86 s         |
| Language                 | English      |
| Voices                   | 8 (4F / 4M)  |
| TTS engine               | Kokoro       |
| Sampling rate            | 24 kHz       |

📂 Dataset Structure

The Speech-RATE dataset is organized as follows:

Speech-RATE/
├── dataset/
│   ├── train/
│   │   ├── path_to_sample_001/file_name.wav
│   │   ├── path_to_sample_002/file_name.wav
│   │   └── ...
│   └── valid/
│       ├── path_to_sample_001/file_name.wav
│       ├── path_to_sample_002/file_name.wav
│       └── ...
├── speech-classification/
│   ├── train/
│   │   ├── path_to_sample_001/file_name.wav
│   │   ├── path_to_sample_002/file_name.wav
│   │   └── ...
│   └── valid/
│       ├── path_to_sample_001/file_name.wav
│       ├── path_to_sample_002/file_name.wav
│       └── ...
├── metadata/
│   ├── train.csv
│   └── valid.csv
└── README.md

πŸ“ Directory Descriptions

  • dataset/: .wav audio files, organized to mirror the CT-RATE directory structure
  • speech-classification/: Audio used for speech-only classification tasks, following the CT-RATE report-classifier data setup
  • metadata/: Per-sample information, including speaker gender and speaking speed
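As a starting point, the audio files can be traversed with the Python standard library alone. The sketch below walks a local copy of the dataset and sums the duration of all .wav files; `total_duration_seconds` is a hypothetical helper name, and the `root` path is a placeholder you should point at your own extracted copy.

```python
# Sketch: sum the duration of all .wav files under a Speech-RATE split.
# `root` is a placeholder path; adjust it to your local copy of the dataset.
import wave
from pathlib import Path

def total_duration_seconds(root: str) -> float:
    """Recursively sum the duration (in seconds) of all .wav files under `root`."""
    total = 0.0
    for wav_path in Path(root).rglob("*.wav"):
        with wave.open(str(wav_path), "rb") as f:
            total += f.getnframes() / f.getframerate()
    return total

# Example (assuming the dataset is extracted next to this script):
# print(total_duration_seconds("Speech-RATE/dataset/train") / 3600, "hours")
```

Summed over both splits, this should come out near the 1,197 h reported in the statistics table above.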

📚 Citations

If you use the Speech-RATE dataset in your research, please cite the following papers:

@article{buess2025speechct,
  title={SpeechCT-CLIP: Distilling Text-Image Knowledge to Speech for Voice-Native Multimodal CT Analysis},
  author={Buess, Lukas and Geier, Jan and Bani-Harouni, David and Pellegrini, Chantal and Keicher, Matthias and Perez-Toro, Paula Andrea and Navab, Nassir and Maier, Andreas and Arias-Vergara, Tomas},
  journal={arXiv preprint arXiv:2510.02322},
  year={2025}
}
@article{hamamci2026generalist,
  title={Generalist foundation models from a multimodal dataset for 3D computed tomography},
  author={Hamamci, Ibrahim Ethem and Er, Sezgin and Wang, Chenyu and Almas, Furkan and Simsek, Ayse Gulnihan and Esirgun, Sevval Nil and Dogan, Irem and Durugol, Omer Faruk and Hou, Benjamin and Shit, Suprosanna and others},
  journal={Nature Biomedical Engineering},
  pages={1--19},
  year={2026},
  publisher={Nature Publishing Group UK London}
}

📄 License

As a derived dataset, Speech-RATE follows the license of CT-RATE. We are committed to fostering innovation and collaboration in the research community. To this end, all elements of the CT-RATE dataset are released under a Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license. This licensing framework ensures that our contributions can be freely used for non-commercial research purposes, while also encouraging contributions and modifications, provided that the original work is properly cited and any derivative works are shared under the same terms.

πŸ™ Acknowledgements

We gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). The hardware is funded by the German Research Foundation (DFG). This work was partially funded by the EVUK programme ("Next-generation AI for Integrated Diagnostics") of the Free State of Bavaria and by the Deutsche Forschungsgemeinschaft (DFG).

Built with ❤️ for the speech and medical research community
