The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 26 new columns ({'is_pull_request', 'milestone', 'timeline_url', 'pull_request', 'user', 'repository_url', 'number', 'url', 'labels_url', 'created_at', 'performed_via_github_app', 'assignee', 'id', 'comments_url', 'assignees', 'author_association', 'closed_at', 'labels', 'locked', 'events_url', 'reactions', 'node_id', 'draft', 'updated_at', 'state', 'active_lock_reason'}) and 3 missing columns ({'comment_length', 'text', 'embeddings'}).

This happened while the json dataset builder was generating data using

hf://datasets/eleldar/github-issues/issues-datasets-with-comments.jsonl (at revision faaa5ae72041e00317ea618556294838aae8d07f)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
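The second suggestion can be applied by declaring separate configurations in the dataset's README.md YAML front matter, so the raw GitHub-issues file and the processed embeddings file are never cast to a single schema. This is only a sketch: the config names are arbitrary, and the second file name is hypothetical — only `issues-datasets-with-comments.jsonl` appears in the error above.

```yaml
configs:
  - config_name: raw-issues
    data_files: "issues-datasets-with-comments.jsonl"
  - config_name: with-embeddings
    data_files: "comments-with-embeddings.jsonl"  # hypothetical file name
```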
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              url: string
              repository_url: string
              labels_url: string
              comments_url: string
              events_url: string
              html_url: string
              id: int64
              node_id: string
              number: int64
              title: string
              user: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>
                child 0, login: string
                child 1, id: int64
                child 2, node_id: string
                child 3, avatar_url: string
                child 4, gravatar_id: string
                child 5, url: string
                child 6, html_url: string
                child 7, followers_url: string
                child 8, following_url: string
                child 9, gists_url: string
                child 10, starred_url: string
                child 11, subscriptions_url: string
                child 12, organizations_url: string
                child 13, repos_url: string
                child 14, events_url: string
                child 15, received_events_url: string
                child 16, type: string
                child 17, site_admin: bool
              labels: list<item: struct<id: int64, node_id: string, url: string, name: string, color: string, default: bool, description: string>>
                child 0, item: struct<id: int64, node_id: string, url: string, name: string, color: string, default: bool, description: string>
                    child 0, id: int64
                    child 1, node_id: string
                    child 2, url: string
                    child 3, name: string
                    child 4, color: string
                    child 5, defaul
              ...
              ld 7, followers_url: string
                    child 8, following_url: string
                    child 9, gists_url: string
                    child 10, starred_url: string
                    child 11, subscriptions_url: string
                    child 12, organizations_url: string
                    child 13, repos_url: string
                    child 14, events_url: string
                    child 15, received_events_url: string
                    child 16, type: string
                    child 17, site_admin: bool
                child 9, open_issues: int64
                child 10, closed_issues: int64
                child 11, state: string
                child 12, created_at: int64
                child 13, updated_at: int64
                child 14, due_on: int64
                child 15, closed_at: int64
              comments: list<item: string>
                child 0, item: string
              created_at: int64
              updated_at: int64
              closed_at: int64
              author_association: string
              active_lock_reason: null
              body: string
              reactions: struct<url: string, total_count: int64, +1: int64, -1: int64, laugh: int64, hooray: int64, confused: int64, heart: int64, rocket: int64, eyes: int64>
                child 0, url: string
                child 1, total_count: int64
                child 2, +1: int64
                child 3, -1: int64
                child 4, laugh: int64
                child 5, hooray: int64
                child 6, confused: int64
                child 7, heart: int64
                child 8, rocket: int64
                child 9, eyes: int64
              timeline_url: string
              performed_via_github_app: null
              draft: bool
              pull_request: struct<url: string, html_url: string, diff_url: string, patch_url: string, merged_at: int64>
                child 0, url: string
                child 1, html_url: string
                child 2, diff_url: string
                child 3, patch_url: string
                child 4, merged_at: int64
              is_pull_request: bool
              to
              {'html_url': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'comments': Value(dtype='string', id=None), 'body': Value(dtype='string', id=None), 'comment_length': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'embeddings': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 26 new columns ({'is_pull_request', 'milestone', 'timeline_url', 'pull_request', 'user', 'repository_url', 'number', 'url', 'labels_url', 'created_at', 'performed_via_github_app', 'assignee', 'id', 'comments_url', 'assignees', 'author_association', 'closed_at', 'labels', 'locked', 'events_url', 'reactions', 'node_id', 'draft', 'updated_at', 'state', 'active_lock_reason'}) and 3 missing columns ({'comment_length', 'text', 'embeddings'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/eleldar/github-issues/issues-datasets-with-comments.jsonl (at revision faaa5ae72041e00317ea618556294838aae8d07f)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
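Before pushing, the column mismatch reported above can be caught locally by diffing the keys used in two JSONL files. A minimal stdlib sketch (the function names and file paths are placeholders, not part of the `datasets` library):

```python
import json


def jsonl_columns(path):
    """Collect the union of column names used across records in a JSONL file."""
    columns = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                columns |= set(json.loads(line).keys())
    return columns


def schema_mismatch(reference_path, other_path):
    """Return (new_columns, missing_columns) of other_path relative to reference_path."""
    reference = jsonl_columns(reference_path)
    other = jsonl_columns(other_path)
    return other - reference, reference - other
```

Running this on the processed file (as reference) against the raw issues file would report the same 26 new / 3 missing columns that the viewer complains about.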


Preview schema (column: type):
html_url: string
title: string
comments: string
body: string
comment_length: int64
text: string
embeddings: sequence

html_url: https://github.com/huggingface/datasets/issues/3973
title: ConnectionError and SSLError
comments: Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`. Then you can load the dataset by passing the local path to `oscar.py` to `load_dataset`: ```python load_dataset("path/to/oscar.py", "unshuffled_deduplicated_it") ```
body: code ``` from datasets import load_dataset dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') ``` bug report ``` --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_2978...
comment_length: 32
text: ConnectionError and SSLError code ``` from datasets import load_dataset dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') ``` bug report ``` --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) ~\A...
embeddings: [ -0.5248147249, 0.0405230634, -0.1230327189, 0.0725033507, 0.3041602373, -0.0861735046, 0.3794277012, 0.3197903037, 0.1657542586, 0.1261250079, -0.0368023068, 0.1544640511, 0.0295699202, 0.2415485531, -0.0804175362, 0.0118007232, 0.0215492398, 0.2121128887, -0.2936364412, 0.1434...

html_url: https://github.com/huggingface/datasets/issues/3969
title: Cannot preview cnn_dailymail dataset
comments: I guess the cache got corrupted due to a previous issue with Google Drive service. The cache should be regenerated, e.g. by passing `download_mode="force_redownload"`. CC: @severo
body: ## Dataset viewer issue for '*cnn_dailymail*' **Link:** https://huggingface.co/datasets/cnn_dailymail *short description of the issue* Am I the one who added this dataset ? Yes-No
comment_length: 26
text: Cannot preview cnn_dailymail dataset ## Dataset viewer issue for '*cnn_dailymail*' **Link:** https://huggingface.co/datasets/cnn_dailymail *short description of the issue* Am I the one who added this dataset ? Yes-No I guess the cache got corrupted due to a previous issue with Google Drive service. Th...
embeddings: [ -0.3001443148, 0.1003711745, -0.0295108072, 0.2527271807, -0.0365714543, 0.4788869023, 0.3518633544, 0.1342158616, -0.1076103002, 0.0816059932, 0.0185798742, -0.0681077242, -0.1297189742, 0.1330219954, -0.054589849, -0.0531774163, 0.0362982601, 0.0126727307, 0.0645384118, 0.152...

html_url: https://github.com/huggingface/datasets/issues/3968
title: Cannot preview 'indonesian-nlp/eli5_id' dataset
comments: Hi @cahya-wirawan, thanks for reporting. Your dataset is working OK in streaming mode: ```python In [1]: from datasets import load_dataset ...: ds = load_dataset("indonesian-nlp/eli5_id", split="train", streaming=True) ...: item = next(iter(ds)) ...: item Using custom data configuration indonesian-nlp...
body: ## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exis...
comment_length: 271
text: Cannot preview 'indonesian-nlp/eli5_id' dataset ## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the ca...
embeddings: [ -0.3884463906, -0.1418884844, -0.0735412017, 0.1538683325, -0.0247687586, 0.2311385423, 0.0929410681, 0.5587155223, 0.0132091083, -0.0031359883, -0.2292771786, 0.1413205415, -0.0089979507, 0.0560641401, 0.2455597967, -0.385282129, 0.1090712696, 0.2116556913, 0.0661699772, 0.206...

html_url: https://github.com/huggingface/datasets/issues/3968
title: Cannot preview 'indonesian-nlp/eli5_id' dataset
comments: Thanks @albertvillanova for checking it. Btw, I have another dataset indonesian-nlp/lfqa_id which has the same issue. However, this dataset is still private, is it the reason why the preview doesn't work?
body: ## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exis...
comment_length: 31
text: Cannot preview 'indonesian-nlp/eli5_id' dataset ## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the ca...
embeddings: [ -0.335232228, -0.1239321679, -0.027136052, 0.2779892683, -0.1499147117, 0.2525175214, 0.1689610779, 0.4171839058, 0.0262222327, 0.1278894395, -0.209331423, 0.0273422934, 0.0728633255, -0.0633361638, 0.2337566465, -0.2524942458, 0.137003839, 0.2076781541, 0.041900117, 0.13975013...

html_url: https://github.com/huggingface/datasets/issues/3965
title: TypeError: Couldn't cast array of type for JSONLines dataset
comments: Hi @lewtun, thanks for reporting. It seems that our library fails at inferring the dtype of the columns: - `milestone` - `performed_via_github_app` (and assigns them `null` dtype).
body: ## Describe the bug One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below). This reminds me a bit of #2799 where one can load the dataset in `pan...
comment_length: 27
text: TypeError: Couldn't cast array of type for JSONLines dataset ## Describe the bug One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below). This r...
embeddings: [ 0.0036059762, -0.0479997993, -0.0535864346, 0.434281677, 0.4592538476, 0.1503392756, 0.4873134792, 0.3241958618, 0.4770776629, -0.057500869, -0.061184179, 0.1311786771, -0.1100012437, 0.2816880941, -0.0008711191, -0.3941471875, -0.0146889417, -0.0445327871, 0.1062430516, 0.1749...

html_url: https://github.com/huggingface/datasets/issues/3960
title: Load local dataset error
comments: Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset: ```python >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']} >>> ds = load_dataset('imagefolder', data...
body: When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this: ``` >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']} >>> ds = load_dataset('nateraw/image-folder...
comment_length: 40
text: Load local dataset error When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this: ``` >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']} >>> ds = load_da...
embeddings: [ -0.3380225599, -0.114140451, -0.0211975258, 0.2390871197, 0.4253334999, -0.0225406867, 0.1771003604, 0.3576570451, 0.0669671223, 0.1895555854, -0.1314774305, 0.1565939039, -0.0968019739, 0.1201880723, -0.0979674831, -0.296218574, -0.0579133108, 0.1971371919, -0.1178741753, -0.1...

html_url: https://github.com/huggingface/datasets/issues/3960
title: Load local dataset error
comments: > Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset: > > ```python > >>> from datasets import load_dataset > >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']} > >>> ds = load_dataset('imag...
body: When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this: ``` >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']} >>> ds = load_dataset('nateraw/image-folder...
comment_length: 378
text: Load local dataset error When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this: ``` >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']} >>> ds = load_da...
embeddings: [ -0.3380225599, -0.114140451, -0.0211975258, 0.2390871197, 0.4253334999, -0.0225406867, 0.1771003604, 0.3576570451, 0.0669671223, 0.1895555854, -0.1314774305, 0.1565939039, -0.0968019739, 0.1201880723, -0.0979674831, -0.296218574, -0.0579133108, 0.1971371919, -0.1178741753, -0.1...

html_url: https://github.com/huggingface/datasets/issues/3956
title: TypeError: __init__() missing 1 required positional argument: 'scheme'
comments: Hi @amirj, thanks for reporting. At first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server. Feel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility ...
body: ## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El...
comment_length: 66
text: TypeError: __init__() missing 1 required positional argument: 'scheme' ## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible th...
embeddings: [ -0.2016850263, -0.3632028699, -0.0670639575, -0.1154993474, -0.0156479795, 0.2479981333, 0.2341077775, 0.2877420485, 0.0638611168, 0.0936895311, -0.0358448662, 0.3917693198, -0.0215827525, -0.152334705, 0.1281492412, -0.2309139818, 0.121085614, 0.087432608, 0.1187869459, -0.051...

html_url: https://github.com/huggingface/datasets/issues/3956
title: TypeError: __init__() missing 1 required positional argument: 'scheme'
comments: @albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working: ``` from elasticsearch import Elasticsearch es_client = Elasticsearch("http://localhost:9200") dataset.add_elasticsearch_index(column="e1", es_client=es_client, es_index_name="e1_index") ...
body: ## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El...
comment_length: 30
text: TypeError: __init__() missing 1 required positional argument: 'scheme' ## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible th...
embeddings: [ -0.2016850263, -0.3632028699, -0.0670639575, -0.1154993474, -0.0156479795, 0.2479981333, 0.2341077775, 0.2877420485, 0.0638611168, 0.0936895311, -0.0358448662, 0.3917693198, -0.0215827525, -0.152334705, 0.1281492412, -0.2309139818, 0.121085614, 0.087432608, 0.1187869459, -0.051...

html_url: https://github.com/huggingface/datasets/issues/3956
title: TypeError: __init__() missing 1 required positional argument: 'scheme'
comments: Hi @amirj, I really think it is a version incompatibility issue between your Elasticsearch (...TRUNCATED)
body: ## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elast(...TRUNCATED)
comment_length: 125
text: TypeError: __init__() missing 1 required positional argument: 'scheme' ## Describe the bug Ba(...TRUNCATED)
embeddings: [ -0.2016850263, -0.3632028699, -0.0670639575, -0.1154993474, -0.0156479795, 0.2479981333, 0.2341077775, 0.2(...TRUNCATED)

End of preview.