# NV-Bench: Benchmarking Nonverbal Vocalization Synthesis in Expressive Text-to-Speech Models

## Abstract
While recent text-to-speech (TTS) systems increasingly integrate nonverbal vocalizations (NVVs), their evaluation lacks standardized metrics and reliable ground truth references. To bridge this gap, we propose NV-Bench, the first benchmark grounded in a functional taxonomy that treats NVVs as communicative acts rather than acoustic artifacts. NV-Bench comprises 1,651 multilingual, in-the-wild utterances with paired human reference audio, balanced across 14 categories. We introduce a dual-dimensional evaluation protocol:
- Instruction Alignment — utilizes our proposed Paralinguistic Character Error Rate (PCER) to assess controllability.
- Acoustic Fidelity — quantifies the distributional gap between synthesized and real speech.
Experimental results demonstrate a strong correlation between our objective metrics and human perception, establishing NV-Bench as a standardized evaluation framework.
## Dataset Overview

### Subsets
| Subset | Language | Description | Label Type |
|---|---|---|---|
| zh_single | Chinese | Single nonverbal event per utterance | Single-label |
| zh_multi | Chinese | Multiple nonverbal events per utterance | Multi-label |
| en_single | English | Single nonverbal event per utterance | Single-label |
| en_multi | English | Multiple nonverbal events per utterance | Multi-label |
- Single-label Subset: Strictly balanced — exactly one NVV event per utterance (50 samples per category). Isolates fundamental generation capabilities.
- Multi-label Subset: Challenging utterances with 2+ NVV events — tests robustness under dense paralinguistic conditions with relative balance.
### Data Fields
| Field | Type | Description |
|---|---|---|
| text | string | Target text with inline NVV tags (e.g. [Laughter], [Cough]) |
| prompt_text | string | Prompt text for zero-shot speaker cloning |
| category | string | NVV category label (one of 14 categories) |
| type | string | Subset identifier (zh_single, zh_multi, en_single, en_multi) |
| wav | Audio | Ground-truth reference audio (MP3) |
| prompt_wav | Audio | Speaker prompt audio for zero-shot cloning (MP3) |
## Functional Taxonomy
NVVs are organized into three functional levels based on communicative intent:
| Name | Description | Categories |
|---|---|---|
| Vegetative Sounds | Biological reflexes grounding speech in physical realism | [Cough], [Sigh], [Breathing] |
| Affect Bursts | Valenced vocalizations conveying emotion or instant reactions | [Laughter], [Surprise-ah], [Surprise-oh], [Dissatisfaction-hnn] |
| Conversational Grunts | Interaction-management cues — filled pauses and prosodic particles | [Uhm], [Confirmation-en], [Question-ei], [Question-ah], [Question-en], [Question-oh], [Question-huh] |
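The taxonomy above can be encoded as a simple lookup table for analysis scripts. The dict and the helper `functional_level` below are illustrative (transcribed from the table, not part of the dataset's API):

```python
# Functional level for each of the 14 NVV categories, transcribed from
# the taxonomy table above. Illustrative only, not a dataset API.
TAXONOMY = {
    "Vegetative Sounds": ["[Cough]", "[Sigh]", "[Breathing]"],
    "Affect Bursts": ["[Laughter]", "[Surprise-ah]", "[Surprise-oh]",
                      "[Dissatisfaction-hnn]"],
    "Conversational Grunts": ["[Uhm]", "[Confirmation-en]", "[Question-ei]",
                              "[Question-ah]", "[Question-en]", "[Question-oh]",
                              "[Question-huh]"],
}

# Invert to a tag -> level lookup covering all 14 categories.
TAG_TO_LEVEL = {tag: level for level, tags in TAXONOMY.items() for tag in tags}

def functional_level(tag: str) -> str:
    """Return the functional level of an NVV tag, e.g. '[Sigh]' -> 'Vegetative Sounds'."""
    return TAG_TO_LEVEL[tag]
```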
## Usage
```python
from datasets import load_dataset

# Load a specific subset
dataset = load_dataset("AnonyData/NV-Bench", "zh_single", split="test")

# Access a sample
sample = dataset[0]
print(sample["text"])         # Target text with NVV tags
print(sample["category"])     # NVV category
print(sample["wav"])          # Ground-truth audio
print(sample["prompt_wav"])   # Speaker prompt audio
print(sample["prompt_text"])  # Prompt text
```
## Evaluation Protocol

### Instruction Alignment
Measures whether the model can generate the specified NVV events at the correct positions.
| Metric | Description |
|---|---|
| CER | Character Error Rate |
| PCER | Paralinguistic Character Error Rate |
| OCER | Overall Character Error Rate |
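The card does not spell out how PCER is computed; a plausible reading is a character-error-rate restricted to the NVV tag sequence, i.e. the edit distance between the reference and hypothesis tag sequences normalized by the number of reference tags. A minimal sketch under that assumption (the regex and both function names are ours):

```python
import re

# Matches inline NVV tags such as [Laughter] or [Question-huh].
TAG_RE = re.compile(r"\[[^\]]+\]")

def edit_distance(ref, hyp):
    """Standard Levenshtein distance between two token sequences (rolling row DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # d[j]: deletion, d[j-1]: insertion, prev: match/substitution
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

def pcer(ref_text, hyp_text):
    """Our reading of PCER: tag-level edit distance normalized by the
    number of reference tags (verbal characters are ignored)."""
    ref_tags = TAG_RE.findall(ref_text)
    hyp_tags = TAG_RE.findall(hyp_text)
    return edit_distance(ref_tags, hyp_tags) / max(len(ref_tags), 1)
```

Under this reading, CER would be the same computation over the verbal characters only, and OCER over the full tagged transcript.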
### Acoustic Fidelity
Measures how realistic synthesized speech sounds compared to real recordings.
| Metric | Description |
|---|---|
| FAD / FD / KL | Distribution distance metrics |
| SIM | Speaker similarity |
| DNSMOS | Perceptual quality score |
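FAD fits Gaussians to embeddings of real and synthesized audio and compares them with the Fréchet distance, FAD = ||μr − μs||² + Tr(Σr + Σs − 2(ΣrΣs)^½). A pure-Python sketch of the diagonal-covariance special case, where the trace term reduces to a per-dimension sum (embedding extraction, e.g. with VGGish, is assumed and omitted):

```python
import math

def frechet_audio_distance_diag(mu_r, var_r, mu_s, var_s):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu_r - mu_s||^2 + sum(var_r + var_s - 2*sqrt(var_r * var_s)).
    mu/var are per-dimension mean and variance of audio embeddings
    computed over the real (r) and synthesized (s) sets."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_r, mu_s))
    cov_term = sum(vr + vs - 2 * math.sqrt(vr * vs)
                   for vr, vs in zip(var_r, var_s))
    return mean_term + cov_term
```

With full covariance matrices the trace term needs a matrix square root (e.g. scipy.linalg.sqrtm); the diagonal case above is only a sketch of the formula, not the benchmark's exact implementation.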
## Pipeline
- Data Processing — 565K clips (~1,560 hrs) filtered via Emilia-Pipeline & MiMo-Audio for single-speaker verification.
- Multilingual NVASR — SenseVoice-Small fine-tuned on 6 datasets with a unified label taxonomy.
- Human Verification — 1,651 prompt-GT pairs (7.9 hrs).
## Citation
Coming soon
## License

This dataset is released under the CC BY-NC-SA 4.0 license.