YouTube Transcripts Dataset for LLM Training
This dataset contains high-quality, structured transcripts from YouTube videos, specifically formatted for Large Language Model (LLM) training and fine-tuning.
Dataset Structure
The dataset is optimized for LLM training with the following structure:
Core Training Fields
- `text`: Cleaned and normalized transcript text
- `instruction`: Instruction format for fine-tuning (e.g., "Provide a transcript of the video titled '...'")
- `response`: The transcript content (same as `text`, but in instruction-response format)
Content Analysis
- `word_count`: Number of words in the transcript
- `character_count`: Number of characters
- `estimated_tokens`: Estimated token count for training
- `quality_score`: Quality score (0-1) based on length, structure, and metadata
- `content_type`: Classified content type (educational, conversational, instructional, narrative, general)
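The `estimated_tokens` values track `word_count` closely and are consistent with a flat words-to-tokens multiplier of 1.3; note that this multiplier is inferred from the rows, not documented. A minimal sketch:

```python
def estimate_tokens(word_count: int, tokens_per_word: float = 1.3) -> float:
    """Estimate token count from a word count.

    The 1.3 multiplier matches the estimated_tokens values shipped in this
    dataset's rows, but it is an inferred heuristic, not a documented one.
    """
    return round(word_count * tokens_per_word, 1)

# A row with 11,830 words carries estimated_tokens = 15,379.0
print(estimate_tokens(11830))  # → 15379.0
```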
Metadata
- `video_id`: YouTube video ID
- `source`: Always "youtube"
- `transcription_method`: "whisper" (OpenAI Whisper)
- `language`: "en" (English)
- `timestamp`: Processing timestamp
- `video_metadata`: Structured video information:
  - `title`: Video title
  - `channel`: Channel name
  - `duration_seconds`: Video duration in seconds
  - `duration_formatted`: Human-readable duration (MM:SS or HH:MM:SS)
  - `upload_date`: Video upload date
  - `view_count`: Number of views
  - `category`: Auto-classified category (education, business, health, technology, etc.)
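`duration_formatted` can be reproduced from `duration_seconds`; the helper below is a sketch of the conversion (the formatting code actually used to build the dataset is not published):

```python
def format_duration(seconds: int) -> str:
    """Render seconds as MM:SS, or HH:MM:SS once the hour mark is passed."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    if hours:
        return f"{hours}:{minutes:02d}:{secs:02d}"
    return f"{minutes}:{secs:02d}"

print(format_duration(3211))  # a 3211-second video → "53:31"
```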
Loading the Dataset
```python
from datasets import load_dataset

# Load the complete dataset
dataset = load_dataset("morka17/rtu-tgn", data_files="data_shard_*.jsonl")

# For instruction fine-tuning
train_data = dataset['train']
for example in train_data:
    instruction = example['instruction']
    response = example['response']
    # Use for instruction-following fine-tuning

# For general language modeling
for example in train_data:
    text = example['text']
    # Use for general language model training
```
Filtering and Quality Control
```python
# Filter by quality score (some rows may have null scores, so guard against None)
high_quality = dataset.filter(lambda x: x['quality_score'] is not None and x['quality_score'] > 0.7)

# Filter by content type
educational_content = dataset.filter(lambda x: x['content_type'] == 'educational')

# Filter by length (optimal for training)
optimal_length = dataset.filter(lambda x: x['word_count'] is not None and 1000 <= x['word_count'] <= 5000)

# Filter by category (video_metadata may be null on some rows)
business_content = dataset.filter(lambda x: x['video_metadata'] is not None and x['video_metadata']['category'] == 'business')
```
Use Cases
1. Instruction Fine-tuning
Use the instruction and response fields for training models to follow instructions.
2. Conversational AI Training
Filter for content_type == 'conversational' for dialogue training.
3. Domain-specific Training
Filter by video_metadata.category for domain-specific fine-tuning.
4. Quality-based Training
Use quality_score to select high-quality training examples.
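For the instruction fine-tuning use case above, each row's `instruction`/`response` pair maps directly onto a prompt/completion template. The `### Instruction`/`### Response` layout below is one common convention, not something the dataset prescribes:

```python
def to_training_example(row: dict) -> dict:
    """Join an instruction/response row into a single prompt/completion pair.

    The "### Instruction / ### Response" template is a widely used
    convention for instruction tuning, chosen here for illustration.
    """
    prompt = f"### Instruction:\n{row['instruction']}\n\n### Response:\n"
    return {"prompt": prompt, "completion": row["response"]}

example = to_training_example({
    "instruction": "Provide a transcript of the video titled 'Example'.",
    "response": "Transcript text here.",
})
```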
Data Quality
- Text Cleaning: Transcripts are cleaned to remove artifacts, normalize punctuation, and improve readability
- Quality Scoring: Each entry has a quality score based on length, structure, punctuation, and metadata
- Content Classification: Automatic classification into content types for targeted training
- Metadata Enrichment: Rich metadata for filtering and analysis
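The scoring function itself is not published; the heuristic below is a hypothetical sketch of how length, punctuation, and metadata signals of the kind described might be combined into a 0-1 score:

```python
def quality_score(text: str, has_metadata: bool) -> float:
    """Hypothetical quality heuristic combining length, punctuation, metadata.

    Illustrative only -- the dataset's actual scoring function is not
    published, so the thresholds and weights here are assumptions.
    """
    words = text.split()
    length_ok = 1.0 if 1000 <= len(words) <= 15000 else 0.5
    # Sentence-ending punctuation per word, as a rough structure signal
    punct = sum(text.count(c) for c in ".!?") / max(len(words), 1)
    punct_ok = 1.0 if 0.01 <= punct <= 0.2 else 0.5
    meta_ok = 1.0 if has_metadata else 0.5
    # Average the three signals into a 0-1 score
    return round((length_ok + punct_ok + meta_ok) / 3, 2)
```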
Sharding
The dataset is automatically sharded into files of at most 10 MB each (`data_shard_XXXX.jsonl`) for efficient loading and processing.
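The sharding scheme can be sketched as a size-capped JSONL writer (illustrative only; the actual script that produced the shards is not published):

```python
import json
import os

def write_shards(records, out_dir, max_bytes=10 * 1024 * 1024):
    """Write records to data_shard_XXXX.jsonl files of at most max_bytes each."""
    os.makedirs(out_dir, exist_ok=True)
    shard, size, paths = 0, 0, []
    handle = None
    for record in records:
        line = json.dumps(record, ensure_ascii=False) + "\n"
        encoded = line.encode("utf-8")
        # Start a new shard when the current one would exceed the limit
        if handle is None or size + len(encoded) > max_bytes:
            if handle:
                handle.close()
            path = os.path.join(out_dir, f"data_shard_{shard:04d}.jsonl")
            handle = open(path, "w", encoding="utf-8")
            paths.append(path)
            shard += 1
            size = 0
        handle.write(line)
        size += len(encoded)
    if handle:
        handle.close()
    return paths
```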
Last Updated
2025-10-26T14:52:26.835885
License and Usage
Please ensure compliance with YouTube's Terms of Service when using this dataset. This dataset is intended for research and educational purposes in natural language processing and machine learning.
Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{youtube_transcripts_llm,
  title={YouTube Transcripts Dataset for LLM Training},
  author={Generated via OpenAI Whisper},
  year={2025},
  url={https://huggingface.co/datasets/morka17/rtu-tgn}
}
```