Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 27 new columns ({'draft', 'number', 'labels_url', 'pull_request', 'closed_at', 'created_at', 'performed_via_github_app', 'user', 'locked', 'labels', 'repository_url', 'events_url', 'assignee', 'url', 'is_pull_request', 'milestone', 'reactions', 'active_lock_reason', 'author_association', 'updated_at', 'id', 'timeline_url', 'duration', 'state', 'assignees', 'comments_url', 'node_id'}) and 3 missing columns ({'embeddings', 'comment_length', 'text'}).

This happened while the json dataset builder was generating data using

hf://datasets/SebastianS/github-issues/issues-datasets-with-comments.jsonl (at revision e73b4ff89f31e0fbbb720aae0d8079bdcdf93d7d)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
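The second suggestion, separating the files into different configurations, can be done in the dataset card's YAML front matter so that the raw GitHub-issues file and the embeddings file are no longer forced into one schema. A minimal sketch, assuming the Hub's `configs` field; only `issues-datasets-with-comments.jsonl` is named above, so the other file name here is a placeholder:

```yaml
# README.md front matter (sketch): one config per incompatible schema
configs:
- config_name: embedded_comments   # file with text / comment_length / embeddings
  data_files: "issues-datasets-with-comments.jsonl"
- config_name: raw_issues          # hypothetical name for the raw GitHub-issues dump
  data_files: "raw-issues.jsonl"
```

With separate configs, each file is loaded under its own schema and the cast error does not arise.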
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              url: string
              repository_url: string
              labels_url: string
              comments_url: string
              events_url: string
              html_url: string
              id: int64
              node_id: string
              number: int64
              title: string
              user: struct<avatar_url: string, events_url: string, followers_url: string, following_url: string, gists_url: string, gravatar_id: string, html_url: string, id: int64, login: string, node_id: string, organizations_url: string, received_events_url: string, repos_url: string, site_admin: bool, starred_url: string, subscriptions_url: string, type: string, url: string>
                child 0, avatar_url: string
                child 1, events_url: string
                child 2, followers_url: string
                child 3, following_url: string
                child 4, gists_url: string
                child 5, gravatar_id: string
                child 6, html_url: string
                child 7, id: int64
                child 8, login: string
                child 9, node_id: string
                child 10, organizations_url: string
                child 11, received_events_url: string
                child 12, repos_url: string
                child 13, site_admin: bool
                child 14, starred_url: string
                child 15, subscriptions_url: string
                child 16, type: string
                child 17, url: string
              labels: list<item: struct<color: string, default: bool, description: string, id: int64, name: string, node_id: string, url: string>>
                child 0, item: struct<color: string, default: bool, description: string, id: int64, name: string, node_id: string, url: string>
                    child 0, color: string
                    child 1, default: bool
                    child 2, description: string
                    child 3, id: int64
                    child 4, name: string
                    child 5, 
              ...
              : string
                    child 11, received_events_url: string
                    child 12, repos_url: string
                    child 13, site_admin: bool
                    child 14, starred_url: string
                    child 15, subscriptions_url: string
                    child 16, type: string
                    child 17, url: string
                child 4, description: string
                child 5, due_on: int64
                child 6, html_url: string
                child 7, id: int64
                child 8, labels_url: string
                child 9, node_id: string
                child 10, number: int64
                child 11, open_issues: int64
                child 12, state: string
                child 13, title: string
                child 14, updated_at: int64
                child 15, url: string
              comments: list<item: string>
                child 0, item: string
              created_at: int64
              updated_at: int64
              closed_at: int64
              author_association: string
              active_lock_reason: null
              draft: bool
              pull_request: struct<diff_url: string, html_url: string, merged_at: int64, patch_url: string, url: string>
                child 0, diff_url: string
                child 1, html_url: string
                child 2, merged_at: int64
                child 3, patch_url: string
                child 4, url: string
              body: string
              reactions: struct<+1: int64, -1: int64, confused: int64, eyes: int64, heart: int64, hooray: int64, laugh: int64, rocket: int64, total_count: int64, url: string>
                child 0, +1: int64
                child 1, -1: int64
                child 2, confused: int64
                child 3, eyes: int64
                child 4, heart: int64
                child 5, hooray: int64
                child 6, laugh: int64
                child 7, rocket: int64
                child 8, total_count: int64
                child 9, url: string
              timeline_url: string
              performed_via_github_app: null
              is_pull_request: bool
              duration: double
              to
              {'html_url': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'comments': Value(dtype='string', id=None), 'body': Value(dtype='string', id=None), 'comment_length': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'embeddings': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
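The "27 new columns / 3 missing columns" diagnosis is just a set difference between the keys of the two files' records. This is not the library's internal code, only a stdlib sketch of the same check, with heavily abridged toy rows standing in for the first record of each JSONL file:

```python
def column_diff(reference_keys, other_keys):
    """Return (new, missing): columns only in `other`, resp. only in `reference`."""
    new = sorted(set(other_keys) - set(reference_keys))
    missing = sorted(set(reference_keys) - set(other_keys))
    return new, missing

# Toy rows; field sets loosely mirror the schemas named in the error.
embedded_row = {"html_url": "...", "title": "...", "comments": "...", "body": "...",
                "comment_length": 3, "text": "...", "embeddings": [0.1]}
raw_issue_row = {"html_url": "...", "title": "...", "comments": "...", "body": "...",
                 "id": 1, "number": 2945, "state": "open"}

new, missing = column_diff(embedded_row, raw_issue_row)
# new: columns only in the raw-issues file; missing: columns only in the embedded file
```

Running the same comparison on the real files would reproduce the column lists quoted in the error.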


Columns: html_url (string) | title (string) | comments (string) | body (string) | comment_length (int64) | text (string) | embeddings (sequence)

Row 1
  html_url: https://github.com/huggingface/datasets/issues/2945
  title: Protect master branch
  comments: Cool, I think we can do both :)
  body: After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propo...
  comment_length: 8
  text: Protect master branch After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f58...
  embeddings: [ -0.2390099317, -0.0966292247, -0.0759872422, -0.0984264985, -0.0993319154, -0.114640519, 0.0078532966, 0.239554435, -0.029255297, -0.0760675296, 0.3045841157, -0.0697917044, -0.1556909084, 0.2472542524, -0.0918809772, 0.2528043091, 0.1680651754, 0.000099914, -0.099873215, 0.086...

Row 2
  html_url: https://github.com/huggingface/datasets/issues/2945
  title: Protect master branch
  comments: @lhoestq now the 2 are implemented. Please note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to...
  body: After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propo...
  comment_length: 64
  text: Protect master branch After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f58...
  embeddings: [ -0.1553194672, -0.1002306268, -0.070321016, -0.0798171908, -0.1042534709, -0.1879874468, 0.0104035577, 0.2728635967, -0.0098489737, -0.0841399208, 0.2926148772, -0.0778751224, -0.1324572563, 0.2158903182, -0.0562292449, 0.1705456823, 0.2003232092, -0.0407981202, -0.0923330337, ...

Row 3
  html_url: https://github.com/huggingface/datasets/issues/2943
  title: Backwards compatibility broken for cached datasets that use `.filter()`
  comments: Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`. To avoid other users from having this issue we could make the caching differentiate the two, what do you think ?
  body: ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
  comment_length: 50
  text: Backwards compatibility broken for cached datasets that use `.filter()` ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None...
  embeddings: [ -0.3049958348, 0.1243880838, -0.0465583764, 0.2368971109, 0.1517290324, -0.0666598156, -0.0069413707, 0.3285393715, 0.1737383604, 0.0293814819, -0.2313861847, 0.3581927419, -0.1503304988, 0.2527370155, -0.2977862358, -0.1378120482, 0.0444693454, 0.0321247876, -0.0614953265, 0.1...

Row 4
  html_url: https://github.com/huggingface/datasets/issues/2943
  title: Backwards compatibility broken for cached datasets that use `.filter()`
  comments: If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.
  body: ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
  comment_length: 28
  text: Backwards compatibility broken for cached datasets that use `.filter()` ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None...
  embeddings: [ -0.3049958348, 0.1243880838, -0.0465583764, 0.2368971109, 0.1517290324, -0.0666598156, -0.0069413707, 0.3285393715, 0.1737383604, 0.0293814819, -0.2313861847, 0.3581927419, -0.1503304988, 0.2527370155, -0.2977862358, -0.1378120482, 0.0444693454, 0.0321247876, -0.0614953265, 0.1...

Row 5
  html_url: https://github.com/huggingface/datasets/issues/2943
  title: Backwards compatibility broken for cached datasets that use `.filter()`
  comments: Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR
  body: ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
  comment_length: 22
  text: Backwards compatibility broken for cached datasets that use `.filter()` ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None...
  embeddings: [ -0.3049958348, 0.1243880838, -0.0465583764, 0.2368971109, 0.1517290324, -0.0666598156, -0.0069413707, 0.3285393715, 0.1737383604, 0.0293814819, -0.2313861847, 0.3581927419, -0.1503304988, 0.2527370155, -0.2977862358, -0.1378120482, 0.0444693454, 0.0321247876, -0.0614953265, 0.1...

Row 6
  html_url: https://github.com/huggingface/datasets/issues/2943
  title: Backwards compatibility broken for cached datasets that use `.filter()`
  comments: I just merged a fix, let me know if you're still having this kind of issues :) We'll do a release soon to make this fix available
  body: ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
  comment_length: 27
  text: Backwards compatibility broken for cached datasets that use `.filter()` ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None...
  embeddings: [ -0.3049958348, 0.1243880838, -0.0465583764, 0.2368971109, 0.1517290324, -0.0666598156, -0.0069413707, 0.3285393715, 0.1737383604, 0.0293814819, -0.2313861847, 0.3581927419, -0.1503304988, 0.2527370155, -0.2977862358, -0.1378120482, 0.0444693454, 0.0321247876, -0.0614953265, 0.1...

Row 7
  html_url: https://github.com/huggingface/datasets/issues/2943
  title: Backwards compatibility broken for cached datasets that use `.filter()`
  comments: Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !
  body: ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
  comment_length: 14
  text: Backwards compatibility broken for cached datasets that use `.filter()` ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None...
  embeddings: [ -0.3049958348, 0.1243880838, -0.0465583764, 0.2368971109, 0.1517290324, -0.0666598156, -0.0069413707, 0.3285393715, 0.1737383604, 0.0293814819, -0.2313861847, 0.3581927419, -0.1503304988, 0.2527370155, -0.2977862358, -0.1378120482, 0.0444693454, 0.0321247876, -0.0614953265, 0.1...

Row 8
  html_url: https://github.com/huggingface/datasets/issues/2943
  title: Backwards compatibility broken for cached datasets that use `.filter()`
  comments: Fixed by #2947.
  body: ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
  comment_length: 3
  text: Backwards compatibility broken for cached datasets that use `.filter()` ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None...
  embeddings: [ -0.3049958348, 0.1243880838, -0.0465583764, 0.2368971109, 0.1517290324, -0.0666598156, -0.0069413707, 0.3285393715, 0.1737383604, 0.0293814819, -0.2313861847, 0.3581927419, -0.1503304988, 0.2527370155, -0.2977862358, -0.1378120482, 0.0444693454, 0.0321247876, -0.0614953265, 0.1...

Row 9
  html_url: https://github.com/huggingface/datasets/issues/2941
  title: OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
  comments: I tried `unshuffled_original_da` and it is also not working
  body: ## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num...
  comment_length: 9
  text: OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError ## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'exp...
  embeddings: [ -0.2700701952, -0.3562059104, 0.0761256963, 0.3269021809, 0.2630259991, -0.0228649452, -0.0282609034, 0.4201454818, -0.0057781218, 0.2675718367, -0.2937448621, 0.3795824349, -0.175184533, 0.0020705916, -0.1521454006, -0.3282049894, 0.0055401283, 0.0813263729, 0.0597292781, 0.03...

Row 10
  html_url: https://github.com/huggingface/datasets/issues/2937
  title: load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
  comments: Hi @daqieq, thanks for reporting. Unfortunately, I was not able to reproduce this bug: ```ipython In [1]: from datasets import load_dataset ...: ds = load_dataset('wiki_bio') Downloading: 7.58kB [00:00, 26.3kB/s] Downloading: 2.71kB [00:00, ?B/s] Using custom data configuration default Downloading and prep...
  body: ## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any er...
  comment_length: 109
  text: load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied ## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset...
  embeddings: [ -0.2289305478, 0.3834535182, 0.040345341, 0.2550398409, -0.0121113108, 0.2622572184, 0.5036683083, 0.1329532564, 0.3442817032, 0.1357175708, -0.1089743897, 0.0772839189, 0.073150374, -0.0056987684, -0.1256225109, 0.0789283216, 0.0364452153, -0.0053882273, 0.214949578, 0.1039548...

Row 11
  html_url: https://github.com/huggingface/datasets/issues/2937
  title: load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
  comments: Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine. Running on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename an...
  body: ## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any er...
  comment_length: 194
  text: load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied ## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset...
  embeddings: [ -0.2289305478, 0.3834535182, 0.040345341, 0.2550398409, -0.0121113108, 0.2622572184, 0.5036683083, 0.1329532564, 0.3442817032, 0.1357175708, -0.1089743897, 0.0772839189, 0.073150374, -0.0056987684, -0.1256225109, 0.0789283216, 0.0364452153, -0.0053882273, 0.214949578, 0.1039548...
End of preview.
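The embeddings column in the preview is typically used for semantic search over the issue comments. As a minimal, library-free sketch of the idea (toy 3-dimensional vectors; the real column is far higher-dimensional, and the actual pipeline would use a vector index such as FAISS):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, rows, k=2):
    # Rank preview-style rows (dicts with an "embeddings" list) by similarity.
    return sorted(rows,
                  key=lambda r: cosine_similarity(query_vec, r["embeddings"]),
                  reverse=True)[:k]

# Toy rows with made-up vectors, titles borrowed from the preview.
rows = [
    {"title": "Protect master branch", "embeddings": [1.0, 0.0, 0.0]},
    {"title": "Backwards compatibility broken", "embeddings": [0.0, 1.0, 0.0]},
    {"title": "OSCAR NonMatchingSplitsSizesError", "embeddings": [0.9, 0.1, 0.0]},
]
best = top_k([1.0, 0.05, 0.0], rows, k=2)
```

A query vector close to the first row's embedding ranks that row first and the nearby third row second.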

YAML Metadata Error: "language[0]" must only contain lowercase characters.

YAML Metadata Error: "language[0]" with value "en-US" is not valid. It must be an ISO 639-1, 639-2 or 639-3 code (two/three letters), or a special value like "code" or "multilingual". If you want to use BCP-47 identifiers, you can specify them in language_bcp47.
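Following the error message, the fix is to keep only the lowercase ISO code under `language` and move the BCP-47 identifier to `language_bcp47` in the card's front matter:

```yaml
language:
- en            # ISO 639-1 code, lowercase
language_bcp47:
- en-US         # BCP-47 identifiers belong here instead
```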

Dataset Card for GitHub Issues

Dataset Description

This is an example dataset created as part of the Hugging Face course.
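The derived columns in the preview can be reproduced from the raw issue/comment fields. A hedged sketch of the course-style recipe; `prepare_row` is a hypothetical helper, the exact `text` concatenation is inferred from the preview (title followed by body), and the real pipeline additionally computes `embeddings` with a sentence-transformers model, which is omitted here:

```python
def prepare_row(issue):
    """Hypothetical helper: derive preview columns for one issue/comment pair.

    `comment_length` is a word count, consistent with the preview (the 8-word
    comment "Cool, I think we can do both :)" has comment_length 8).
    """
    return {
        "html_url": issue["html_url"],
        "title": issue["title"],
        "comments": issue["comments"],
        "body": issue["body"],
        "comment_length": len(issue["comments"].split()),
        "text": issue["title"] + " " + issue["body"],
    }

row = prepare_row({
    "html_url": "https://github.com/huggingface/datasets/issues/2945",
    "title": "Protect master branch",
    "comments": "Cool, I think we can do both :)",
    "body": "After accidental merge commit ...",
})
```

Applied over all issue/comment pairs (e.g. via `Dataset.map`), this yields rows with the preview's schema, minus the embeddings.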

Downloads last month: 115