Below, we present the license requirements incorporated by reference, followed by a README statement for the MULTISpoof Dataset that describes the dataset and its directory structure.
The two sections are formatted differently to aid navigation.
=======================================================================
MULTISpoof Dataset
=======================================================================
The MULTISpoof dataset has been derived using real speech signals from the following datasets.
The French, German, and Italian speech data:
Reference Paper: Vineel Pratap et al., "MLS: A Large-Scale Multilingual Dataset for Speech Research", Proceedings of Interspeech, December 2020. doi: 10.21437/interspeech.2020-2826.
Dataset URL: https://www.openslr.org/94/
License: This dataset is released under a Creative Commons Attribution 4.0 International License (CC BY 4.0), and may be used according to the terms specified by the license.
License URL: https://creativecommons.org/licenses/by/4.0/
We incorporate by reference the license requirements of the dataset described above.
MULTISpoof was derived from transcriptions of the real multilingual speech. The synthetic speech samples were generated using five open-source and one commercial text-to-speech (TTS) method. These methods and their licenses or terms of use are referenced below.
ElevenLabs (Commercial Software):
Reference: ElevenLabs, Speech Synthesis, 2025.
URL: https://elevenlabs.io/
Terms of use: ElevenLabs' terms of use permit the use of their services for commercial purposes when the services are accessed through a paid subscription plan, which we have purchased.
Terms of use URL: https://elevenlabs.io/terms-of-use
F5-Spanish:
Reference: https://huggingface.co/jpgallegoar/F5-Spanish
Source Code URL: https://github.com/jpgallegoar/Spanish-F5/
License: This method is released under the Creative Commons Zero (CC0 1.0) license and may be freely used, modified, and shared without restriction.
License URL: https://creativecommons.org/publicdomain/zero/1.0/
Fish-Speech:
Reference: S. Liao, Y. Wang, T. Li, Y. Cheng, R. Zhang, R. Zhou, and Y. Xing, "Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis", 2024. doi: 10.48550/arXiv.2411.01156.
Source Code URL: https://github.com/fishaudio/fish-speech
License: This source code is released under the Apache License, Version 2.0, and may be used according to the terms specified by the license.
License URL: https://www.apache.org/licenses/LICENSE-2.0
XTTSv1:
Reference: https://huggingface.co/coqui/XTTS-v1
Source Code URL: https://github.com/coqui-ai/TTS
License: This source code is released under the Mozilla Public License, Version 2.0, and may be used according to the terms specified by the license.
License URL: https://github.com/coqui-ai/TTS/blob/dev/LICENSE.txt
XTTSv2:
Reference: https://huggingface.co/spaces/coqui/xtts
Source Code URL: https://github.com/coqui-ai/TTS
License: This source code is released under the Mozilla Public License, Version 2.0, and may be used according to the terms specified by the license.
License URL: https://github.com/coqui-ai/TTS/blob/dev/LICENSE.txt
YourTTS:
Reference Paper: E. Casanova, J. Weber, C. Shulby, A. Junior, E. Gölge, and M. A. Ponti, "YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone", Proceedings of the International Conference on Machine Learning, pp. 2709–2720, July 2022.
Source Code URL: https://github.com/coqui-ai/TTS
License: This source code is released under the Mozilla Public License, Version 2.0, and may be used according to the terms specified by the license.
License URL: https://github.com/coqui-ai/TTS/blob/dev/LICENSE.txt
We incorporate by reference the terms of use and license requirements of all six TTS methods described above.
=======================================================================
README for MULTISpoof Dataset
=======================================================================
MULTISpoof Dataset (c) 2026 by Maria Risques, Kratika Bhagtani, Amit K. S. Yadav, and Edward J. Delp
MULTISpoof is licensed under CC BY 4.0 (Creative Commons Attribution 4.0 International). Note: this Creative Commons license does not supersede any of the license requirements described above.
1. General information
======================
Zero-shot Voice Cloning (VC) and Text-to-Speech (TTS) methods have advanced rapidly, making it easy to generate highly realistic synthetic speech. This raises serious concerns about the misuse of these methods. Numerous synthetic speech detection methods have been proposed, but most are trained and evaluated on English or Mandarin data, neglecting other major world languages. To address this gap, we introduce MULTISpoof, a dataset of real and synthetic multilingual speech covering three languages.
The dataset includes real speech drawn from public corpora in diverse languages, recorded under various acoustic conditions, as well as synthetic speech generated with multiple voice-cloning synthesizers, ensuring linguistic and acoustic variability.
The real speech samples cover the following languages:
- French
- German
- Italian
Zero-shot voice cloning (VC) generates synthetic speech from minimal reference audio, without requiring prior model training on the target speaker's voice, enabling scalable speaker synthesis. After researching and testing many systems in Italian, German, and French, we selected six zero-shot VC methods for this dataset:
- ElevenLabs
- F5-Spanish
- Fish Speech
- XTTSv1.1
- XTTSv2
- YourTTS
All methods are open-source except ElevenLabs, which is a commercial speech generator. For the development of this dataset, we purchased a paid plan that allows commercial use of the generated content.
The dataset is designed to evaluate a model’s ability to differentiate real speech from synthetic speech. For each speaker, synthetic speech is generated using the exact transcripts of their real speech across all synthesizers.
2. Directory Structure
======================
./train (29283 speech signals)
./val (1774 speech signals)
./test (43269 speech signals)
./protocols
|- train_metadata.csv
|- val_metadata.csv
|- test_metadata.csv
./README.md
./LICENSE.txt
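The metadata for each split lives in a protocol CSV under ./protocols. As a minimal sketch, the files can be read with Python's standard csv module; note that the column names in the actual *_metadata.csv files are not specified here, so this sketch only assumes each file is a standard CSV with a header row.

```python
import csv

def load_protocol(csv_path):
    """Read a protocol CSV (e.g. protocols/train_metadata.csv) into a
    list of row dictionaries keyed by the file's own header row."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# Hypothetical usage, assuming the directory layout above:
# train_rows = load_protocol("protocols/train_metadata.csv")
# print(len(train_rows), list(train_rows[0].keys()))
```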
3. Authors
======================
M. Risques, K. Bhagtani, A. K. S. Yadav, and E. J. Delp
4. Acknowledgements
======================
This material is partially based on research sponsored by DARPA and Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA and Air Force Research Laboratory (AFRL) or the U.S. Government. Address all correspondence to Edward J. Delp, ace@purdue.edu.