| url stringlengths 61 61 | repository_url stringclasses 1 value | labels_url stringlengths 75 75 | comments_url stringlengths 70 70 | events_url stringlengths 68 68 | html_url stringlengths 49 51 | id int64 1.68B 1.88B | node_id stringlengths 18 19 | number int64 5.79k 6.2k | title stringlengths 1 280 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone null | comments int64 0 44 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 3 17.6k ⌀ | reactions dict | timeline_url stringlengths 70 70 | performed_via_github_app null | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6203/comments | https://api.github.com/repos/huggingface/datasets/issues/6203/events | https://github.com/huggingface/datasets/issues/6203 | 1,877,491,602 | I_kwDODunzps5v6D-S | 6,203 | Support loading from a DVC remote repository | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-09-01T14:04:52 | 2023-09-01T14:04:52 | null | NONE | null | ### Feature request
Adding support for loading a file from a DVC repository, tracked remotely on an SCM.
### Motivation
DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible thr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6203/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6202/comments | https://api.github.com/repos/huggingface/datasets/issues/6202/events | https://github.com/huggingface/datasets/issues/6202 | 1,876,630,351 | I_kwDODunzps5v2xtP | 6,202 | avoid downgrading jax version | {
"login": "chrisflesher",
"id": 1332458,
"node_id": "MDQ6VXNlcjEzMzI0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1332458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisflesher",
"html_url": "https://github.com/chrisflesher",
"followers_url": "https://api.github.com... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-09-01T02:57:57 | 2023-09-01T02:58:53 | null | NONE | null | ### Feature request
Whenever I `pip install datasets[jax]` it downgrades jax to version 0.3.25. I seem to be able to install this library first then upgrade jax back to version 0.4.13.
### Motivation
It would be nice to not overwrite currently installed version of jax if possible.
### Your contribution
I... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6202/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6201/comments | https://api.github.com/repos/huggingface/datasets/issues/6201/events | https://github.com/huggingface/datasets/pull/6201 | 1,875,256,775 | PR_kwDODunzps5ZOVbV | 6,201 | Fix to_json ValueError and remove pandas pin | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | open | false | null | [] | null | 3 | 2023-08-31T10:38:08 | 2023-08-31T14:08:51 | null | MEMBER | null | This PR fixes the root cause of the issue:
- #6197
This PR also removes the temporary pin of `pandas` introduced by:
- #6200
Note that for orient in ['records', 'values'], index value is ignored but
- in `pandas` < 2.1.0, a ValueError is raised if not index and orient not in ['split', 'table']
- for orien... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6201/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6201",
"html_url": "https://github.com/huggingface/datasets/pull/6201",
"diff_url": "https://github.com/huggingface/datasets/pull/6201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6201.patch",
"merged_at": null
} | true |
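The fix this PR entry describes can be sketched in a few lines; the helper name and the exact `index` handling are assumptions drawn from the body above, not the actual patch:

```python
def to_json_kwargs(orient: str) -> dict:
    # Only forward an explicit `index` for orients that accept one;
    # for 'records' and 'values' the index is ignored, and newer
    # pandas versions raise a ValueError if it is passed at all.
    if orient in ("split", "table"):
        return {"orient": orient, "index": False}
    return {"orient": orient}

print(to_json_kwargs("records"))  # no "index" key for 'records'
```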
https://api.github.com/repos/huggingface/datasets/issues/6200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6200/comments | https://api.github.com/repos/huggingface/datasets/issues/6200/events | https://github.com/huggingface/datasets/pull/6200 | 1,875,169,551 | PR_kwDODunzps5ZOCee | 6,200 | Temporarily pin pandas < 2.1.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 3 | 2023-08-31T09:45:17 | 2023-08-31T10:33:24 | 2023-08-31T10:24:38 | MEMBER | null | Temporarily pin `pandas` < 2.1.0 until permanent solution is found.
Hot fix #6197. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6200/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6200",
"html_url": "https://github.com/huggingface/datasets/pull/6200",
"diff_url": "https://github.com/huggingface/datasets/pull/6200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6200.patch",
"merged_at": "2023-08-31T10:24... | true |
https://api.github.com/repos/huggingface/datasets/issues/6199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6199/comments | https://api.github.com/repos/huggingface/datasets/issues/6199/events | https://github.com/huggingface/datasets/issues/6199 | 1,875,165,185 | I_kwDODunzps5vxMAB | 6,199 | Use load_dataset for local json files, but it not works | {
"login": "Garen-in-bush",
"id": 50519434,
"node_id": "MDQ6VXNlcjUwNTE5NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/50519434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Garen-in-bush",
"html_url": "https://github.com/Garen-in-bush",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | 2 | 2023-08-31T09:42:34 | 2023-08-31T19:05:07 | null | NONE | null | ### Describe the bug
when I use `load_dataset` to load my local datasets, it always goes to Hugging Face to download the data instead of loading the local dataset.
### Steps to reproduce the bug
`raw_datasets = load_dataset(
    'json',
    data_files=data_files)`
### Expected behavior
` fails with a `ValueError` since the latest `pandas` [release](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html) (`2.1.0`)
In their latest release we have:
> Improved error handling when using [DataFrame.to_json()](https://pandas.pydata.org/docs/dev/refere... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6197/timeline | null | completed | null | null | false |
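For the local-JSON report above, this is a hedged sketch of the `data_files` layout that `load_dataset('json', ...)` expects; the file is created here only for illustration:

```python
import json
import os
import tempfile

tmp_dir = tempfile.mkdtemp()
train_path = os.path.join(tmp_dir, "train.json")
with open(train_path, "w") as f:
    json.dump([{"text": "hello"}], f)

# Absolute local paths keep the loader from treating the value as a
# Hub dataset name; load_dataset('json', data_files=data_files) would
# then read from disk only.
data_files = {"train": os.path.abspath(train_path)}
assert os.path.isfile(data_files["train"])
```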
https://api.github.com/repos/huggingface/datasets/issues/6196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6196/comments | https://api.github.com/repos/huggingface/datasets/issues/6196/events | https://github.com/huggingface/datasets/issues/6196 | 1,875,070,972 | I_kwDODunzps5vw0_8 | 6,196 | Split order is not preserved | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 2023-08-31T08:47:16 | 2023-08-31T13:48:43 | 2023-08-31T13:48:43 | MEMBER | null | I have noticed that in some cases the split order is not preserved.
For example, consider a no-script dataset with configs:
```yaml
configs:
  - config_name: default
    data_files:
      - split: train
        path: train.csv
      - split: test
        path: test.csv
```
- Note the defined split order is [train, test]
On... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6196/timeline | null | completed | null | null | false |
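A minimal illustration of the reordering this issue asks for, restoring the YAML-declared split order after loading (split names mirror the example config; the data values are placeholders):

```python
declared_order = ["train", "test"]                 # order from the YAML config
loaded_splits = {"test": [3, 4], "train": [1, 2]}  # order as actually returned

# Rebuild the mapping in the declared order; dicts preserve
# insertion order in Python 3.7+.
ordered = {name: loaded_splits[name]
           for name in declared_order if name in loaded_splits}
```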
https://api.github.com/repos/huggingface/datasets/issues/6195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6195/comments | https://api.github.com/repos/huggingface/datasets/issues/6195/events | https://github.com/huggingface/datasets/issues/6195 | 1,874,195,585 | I_kwDODunzps5vtfSB | 6,195 | Force to reuse cache at given path | {
"login": "Luosuu",
"id": 43507393,
"node_id": "MDQ6VXNlcjQzNTA3Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/43507393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luosuu",
"html_url": "https://github.com/Luosuu",
"followers_url": "https://api.github.com/users/Luosuu/fo... | [] | closed | false | null | [] | null | 1 | 2023-08-30T18:44:54 | 2023-08-30T19:00:45 | 2023-08-30T19:00:45 | NONE | null | ### Describe the bug
I have run the official example of MLM like:
```bash
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name togethercomputer/RedPajama-Data-1T \
--dataset_config_name arxiv \
--per_device_train_batch_size 10 \
--preprocessing_num_workers 20 ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6195/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6194/comments | https://api.github.com/repos/huggingface/datasets/issues/6194/events | https://github.com/huggingface/datasets/issues/6194 | 1,872,598,223 | I_kwDODunzps5vnZTP | 6,194 | Support custom fingerprinting with `Dataset.from_generator` | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-08-29T22:43:13 | 2023-08-30T17:33:21 | null | NONE | null | ### Feature request
When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`.
### Motivation
Using the `.from_generator` constructor with ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6194/timeline | null | null | null | null | false |
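The feature requested above can be approximated by computing a fingerprint by hand; this sketch derives one from explicit, user-chosen parameters instead of hashing the generator itself (the helper is hypothetical, not a datasets API):

```python
import hashlib
import json

def make_fingerprint(name, params):
    # Deterministic digest over explicit inputs; unlike pickling the
    # generator, this never fails on unpicklable closures.
    payload = json.dumps({"name": name, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]

fp = make_fingerprint("my_generator", {"seed": 42, "shards": 8})
# a future Dataset.from_generator(gen, ...) could accept fingerprint=fp
```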
https://api.github.com/repos/huggingface/datasets/issues/6193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6193/comments | https://api.github.com/repos/huggingface/datasets/issues/6193/events | https://github.com/huggingface/datasets/issues/6193 | 1,872,285,153 | I_kwDODunzps5vmM3h | 6,193 | Dataset loading script method does not work with .pyc file | {
"login": "riteshkumarumassedu",
"id": 43389071,
"node_id": "MDQ6VXNlcjQzMzg5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riteshkumarumassedu",
"html_url": "https://github.com/riteshkumarumassedu",
"followers_url": ... | [] | open | false | null | [] | null | 3 | 2023-08-29T19:35:06 | 2023-08-31T19:47:29 | null | NONE | null | ### Describe the bug
The huggingface dataset library specifically looks for a `.py` file when loading a dataset via the loading-script approach, and it does not work with a `.pyc` file.
When deploying in production, this becomes an issue when we are restricted to using only `.pyc` files. Is there any workaround for this?
#... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6193/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6192/comments | https://api.github.com/repos/huggingface/datasets/issues/6192/events | https://github.com/huggingface/datasets/pull/6192 | 1,871,911,640 | PR_kwDODunzps5ZDGnI | 6,192 | Set minimal fsspec version requirement to 2023.1.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2023-08-29T15:23:41 | 2023-08-30T14:01:56 | 2023-08-30T13:51:32 | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/6141
Colab installs 2023.6.0, so we should be good 🙂
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6192/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6192",
"html_url": "https://github.com/huggingface/datasets/pull/6192",
"diff_url": "https://github.com/huggingface/datasets/pull/6192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6192.patch",
"merged_at": "2023-08-30T13:51... | true |
https://api.github.com/repos/huggingface/datasets/issues/6191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6191/comments | https://api.github.com/repos/huggingface/datasets/issues/6191/events | https://github.com/huggingface/datasets/pull/6191 | 1,871,634,840 | PR_kwDODunzps5ZCKmv | 6,191 | Add missing `revision` argument | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2023-08-29T13:05:04 | 2023-08-31T14:19:54 | 2023-08-31T13:50:00 | CONTRIBUTOR | null | I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6191/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6191",
"html_url": "https://github.com/huggingface/datasets/pull/6191",
"diff_url": "https://github.com/huggingface/datasets/pull/6191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6191.patch",
"merged_at": "2023-08-31T13:50... | true |
https://api.github.com/repos/huggingface/datasets/issues/6190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6190/comments | https://api.github.com/repos/huggingface/datasets/issues/6190/events | https://github.com/huggingface/datasets/issues/6190 | 1,871,582,175 | I_kwDODunzps5vjhPf | 6,190 | `Invalid user token` even when correct user token is passed! | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2023-08-29T12:37:03 | 2023-08-29T13:01:10 | 2023-08-29T13:01:09 | MEMBER | null | ### Describe the bug
I'm working on a dataset which comprises other datasets on the hub.
URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
Note: Some of the sub-datasets in this metadataset require explicit access.
All the other datasets work fine, except, `common_voice`.
### Steps t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6190/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6189/comments | https://api.github.com/repos/huggingface/datasets/issues/6189/events | https://github.com/huggingface/datasets/pull/6189 | 1,871,569,855 | PR_kwDODunzps5ZB8Z9 | 6,189 | Don't alter input in Features.from_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-08-29T12:29:47 | 2023-08-29T13:04:59 | 2023-08-29T12:52:48 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6189/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6189",
"html_url": "https://github.com/huggingface/datasets/pull/6189",
"diff_url": "https://github.com/huggingface/datasets/pull/6189.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6189.patch",
"merged_at": "2023-08-29T12:52... | true |
https://api.github.com/repos/huggingface/datasets/issues/6188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6188/comments | https://api.github.com/repos/huggingface/datasets/issues/6188/events | https://github.com/huggingface/datasets/issues/6188 | 1,870,987,640 | I_kwDODunzps5vhQF4 | 6,188 | [Feature Request] Check the length of batch before writing so that empty batch is allowed | {
"login": "namespace-Pt",
"id": 61188463,
"node_id": "MDQ6VXNlcjYxMTg4NDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namespace-Pt",
"html_url": "https://github.com/namespace-Pt",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 1 | 2023-08-29T06:37:34 | 2023-08-30T13:37:14 | null | NONE | null | ### Use Case
I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error will be thrown:
```
ValueError: Schema and number of arrays unequal
`... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6188/timeline | null | null | null | null | false |
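The empty-batch case the request describes looks like this in plain Python: a batched function that filters must return the same keys with (possibly empty) lists, and the writer would need to accept the all-empty shape:

```python
def augment_and_filter(batch):
    # Keep only examples longer than 3 characters; keys stay the
    # same, each mapped to a list that may end up empty.
    kept = [t for t in batch["text"] if len(t) > 3]
    return {"text": kept}

print(augment_and_filter({"text": ["hello", "hi"]}))
# every example dropped -> an "empty batch", which currently trips
# the "Schema and number of arrays unequal" error on write
print(augment_and_filter({"text": ["hi", "no"]}))
```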
https://api.github.com/repos/huggingface/datasets/issues/6187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6187/comments | https://api.github.com/repos/huggingface/datasets/issues/6187/events | https://github.com/huggingface/datasets/issues/6187 | 1,870,936,143 | I_kwDODunzps5vhDhP | 6,187 | Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 1 | 2023-08-29T05:49:56 | 2023-08-29T16:21:45 | null | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6187/timeline | null | null | null | null | false |
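There is no `tsv` builder name to point a script at; tab-separated files are ordinary CSV with a different delimiter. Below is a stdlib stand-in for what the csv loader does; the `load_dataset` call in the comment is the assumed equivalent, not a verified signature:

```python
import csv
import io

raw = "label\ttext\n1\tgood movie\n"
rows = list(csv.DictReader(io.StringIO(raw), delimiter="\t"))
# with datasets, the analogous call would be roughly:
# load_dataset("csv", data_files={"train": "train.tsv"}, sep="\t")
```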
https://api.github.com/repos/huggingface/datasets/issues/6186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6186/comments | https://api.github.com/repos/huggingface/datasets/issues/6186/events | https://github.com/huggingface/datasets/issues/6186 | 1,869,431,457 | I_kwDODunzps5vbUKh | 6,186 | Feature request: add code example of multi-GPU processing | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 19358928... | open | false | null | [] | null | 2 | 2023-08-28T10:00:59 | 2023-08-30T13:34:14 | null | CONTRIBUTOR | null | ### Feature request
Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work f... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6186/timeline | null | null | null | null | false |
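A sketch of the pattern such a doc example would cover: with `map(..., with_rank=True, num_proc=N)` each worker function receives a rank, from which a device can be derived. The device strings here are placeholders and no GPU is touched:

```python
def embed_batch(batch, rank, num_gpus=4):
    # Round-robin the worker rank over the available GPUs; a real
    # function would move the model to `device` and run inference.
    device = f"cuda:{rank % num_gpus}"
    return {"device_used": [device] * len(batch["text"])}

out = embed_batch({"text": ["a", "b"]}, rank=5)
```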
https://api.github.com/repos/huggingface/datasets/issues/6185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6185/comments | https://api.github.com/repos/huggingface/datasets/issues/6185/events | https://github.com/huggingface/datasets/issues/6185 | 1,868,077,748 | I_kwDODunzps5vWJq0 | 6,185 | Error in saving the PIL image into *.arrow files using datasets.arrow_writer | {
"login": "HaozheZhao",
"id": 14247682,
"node_id": "MDQ6VXNlcjE0MjQ3Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14247682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaozheZhao",
"html_url": "https://github.com/HaozheZhao",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 1 | 2023-08-26T12:15:57 | 2023-08-29T14:49:58 | null | NONE | null | ### Describe the bug
I am using the ArrowWriter from datasets.arrow_writer to save a json-style file as arrow files. Within the dictionary, it contains a feature called "image" which is a list of PIL.Image objects.
I am saving the json using the following script:
```
def save_to_arrow(path, temp):
    with ArrowWri...
"url": "https://api.github.com/repos/huggingface/datasets/issues/6185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6185/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6184/comments | https://api.github.com/repos/huggingface/datasets/issues/6184/events | https://github.com/huggingface/datasets/issues/6184 | 1,867,766,143 | I_kwDODunzps5vU9l_ | 6,184 | Map cache does not detect function changes in another module | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/u... | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | 2 | 2023-08-25T22:59:14 | 2023-08-29T20:57:07 | 2023-08-29T20:56:49 | NONE | null | ```python
# dataset.py
import os
import datasets
if not os.path.exists('/tmp/test.json'):
    with open('/tmp/test.json', 'w') as file:
        file.write('[{"text": "hello"}]')

def transform(example):
    text = example['text']
    # text += ' world'
    return {'text': text}

data = datasets.load_dataset('json', ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6184/timeline | null | completed | null | null | false |
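Until the cache tracks cross-module edits, a workaround consistent with this report is to derive the fingerprint from the function's compiled code yourself and pass it explicitly (`new_fingerprint` is an existing `map()` argument; the hashing helper is an illustration):

```python
import hashlib

def transform(example):
    return {"text": example["text"] + " world"}

def code_fingerprint(fn):
    # Digest the bytecode plus constants, so editing the body of
    # `transform` (even when it lives in another module) changes
    # the value and invalidates the cache entry.
    code = fn.__code__
    payload = code.co_code + repr(code.co_consts).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

fp = code_fingerprint(transform)
# e.g. data.map(transform, new_fingerprint=fp)
```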
https://api.github.com/repos/huggingface/datasets/issues/6183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6183/comments | https://api.github.com/repos/huggingface/datasets/issues/6183/events | https://github.com/huggingface/datasets/issues/6183 | 1,867,743,276 | I_kwDODunzps5vU4As | 6,183 | Load dataset with non-existent file | {
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https... | [] | closed | false | null | [] | null | 2 | 2023-08-25T22:21:22 | 2023-08-29T13:26:22 | 2023-08-29T13:26:22 | NONE | null | ### Describe the bug
When loading a dataset from datasets and passing a wrong path to the json data, the error message does not contain anything about a "wrong path" or "file does not exist" -
```SchemaInferenceError: Please pass `features` or at least one example when writing data```
### Steps to reproduce the bug
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6183/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6182/comments | https://api.github.com/repos/huggingface/datasets/issues/6182/events | https://github.com/huggingface/datasets/issues/6182 | 1,867,203,131 | I_kwDODunzps5vS0I7 | 6,182 | Loading Meteor metric in HF evaluate module crashes due to datasets import issue | {
"login": "dsashulya",
"id": 42322648,
"node_id": "MDQ6VXNlcjQyMzIyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/42322648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsashulya",
"html_url": "https://github.com/dsashulya",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2023-08-25T14:54:06 | 2023-09-01T18:51:12 | 2023-08-31T14:38:23 | NONE | null | ### Describe the bug
When using python3.9 and ```evaluate``` module loading Meteor metric crashes at a non-existent import from ```datasets.config``` in ```datasets v2.14```
### Steps to reproduce the bug
```
from evaluate import load
meteor = load("meteor")
```
produces the following error:
```
from d... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6182/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6181/comments | https://api.github.com/repos/huggingface/datasets/issues/6181/events | https://github.com/huggingface/datasets/pull/6181 | 1,867,035,522 | PR_kwDODunzps5Yy2VO | 6,181 | Fix import in `image_load` doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-08-25T13:12:19 | 2023-08-25T16:12:46 | 2023-08-25T16:02:24 | CONTRIBUTOR | null | Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6181/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6181",
"html_url": "https://github.com/huggingface/datasets/pull/6181",
"diff_url": "https://github.com/huggingface/datasets/pull/6181.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6181.patch",
"merged_at": "2023-08-25T16:02... | true |
https://api.github.com/repos/huggingface/datasets/issues/6180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6180/comments | https://api.github.com/repos/huggingface/datasets/issues/6180/events | https://github.com/huggingface/datasets/pull/6180 | 1,867,032,578 | PR_kwDODunzps5Yy1r- | 6,180 | Use `hf-internal-testing` repos for hosting test dataset repos | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-08-25T13:10:26 | 2023-08-25T16:58:02 | 2023-08-25T16:46:22 | CONTRIBUTOR | null | Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6180/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6180",
"html_url": "https://github.com/huggingface/datasets/pull/6180",
"diff_url": "https://github.com/huggingface/datasets/pull/6180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6180.patch",
"merged_at": "2023-08-25T16:46... | true |
https://api.github.com/repos/huggingface/datasets/issues/6179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6179/comments | https://api.github.com/repos/huggingface/datasets/issues/6179/events | https://github.com/huggingface/datasets/issues/6179 | 1,867,009,016 | I_kwDODunzps5vSEv4 | 6,179 | Map cache with tokenizer | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/u... | [] | open | false | null | [] | null | 4 | 2023-08-25T12:55:18 | 2023-08-31T15:17:24 | null | NONE | null | Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session.
Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with...
setup
```... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6179/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6178/comments | https://api.github.com/repos/huggingface/datasets/issues/6178/events | https://github.com/huggingface/datasets/issues/6178 | 1,866,610,102 | I_kwDODunzps5vQjW2 | 6,178 | 'import datasets' throws "invalid syntax error" | {
"login": "elia-ashraf",
"id": 128580829,
"node_id": "U_kgDOB6n83Q",
"avatar_url": "https://avatars.githubusercontent.com/u/128580829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elia-ashraf",
"html_url": "https://github.com/elia-ashraf",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 1 | 2023-08-25T08:35:14 | 2023-08-29T14:57:17 | null | NONE | null | ### Describe the bug
Hi,
I have been trying to import the datasets library but I keep getting this error.
`Traceback (most recent call last):
File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6178/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6177/comments | https://api.github.com/repos/huggingface/datasets/issues/6177/events | https://github.com/huggingface/datasets/pull/6177 | 1,865,490,962 | PR_kwDODunzps5Ytky- | 6,177 | Use object detection images from `huggingface/documentation-images` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-08-24T16:16:09 | 2023-08-25T16:30:00 | 2023-08-25T16:21:17 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6177/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6177",
"html_url": "https://github.com/huggingface/datasets/pull/6177",
"diff_url": "https://github.com/huggingface/datasets/pull/6177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6177.patch",
"merged_at": "2023-08-25T16:21... | true |
https://api.github.com/repos/huggingface/datasets/issues/6176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6176/comments | https://api.github.com/repos/huggingface/datasets/issues/6176/events | https://github.com/huggingface/datasets/issues/6176 | 1,864,436,408 | I_kwDODunzps5vIQq4 | 6,176 | how to limit the size of memory mapped file? | {
"login": "williamium3000",
"id": 47763855,
"node_id": "MDQ6VXNlcjQ3NzYzODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/47763855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamium3000",
"html_url": "https://github.com/williamium3000",
"followers_url": "https://api.gi... | [] | open | false | null | [] | null | 4 | 2023-08-24T05:33:45 | 2023-08-26T05:09:56 | null | NONE | null | ### Describe the bug
Huggingface datasets use memory-mapped files to map large datasets in memory for fast access.
However, it seems like huggingface will occupy all the memory for memory-mapped files, which makes a troublesome situation since our cluster will distribute a small portion of memory to me (once it's over ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6176/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6175/comments | https://api.github.com/repos/huggingface/datasets/issues/6175/events | https://github.com/huggingface/datasets/pull/6175 | 1,863,592,678 | PR_kwDODunzps5YnKlx | 6,175 | PyArrow 13 CI fixes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-08-23T15:45:53 | 2023-08-25T13:15:59 | 2023-08-25T13:06:52 | CONTRIBUTOR | null | Fixes:
* bumps the PyArrow version check in the `cast_array_to_feature` to avoid the offset bug (still not fixed)
* aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6175/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6175",
"html_url": "https://github.com/huggingface/datasets/pull/6175",
"diff_url": "https://github.com/huggingface/datasets/pull/6175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6175.patch",
"merged_at": "2023-08-25T13:06... | true |
https://api.github.com/repos/huggingface/datasets/issues/6173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6173/comments | https://api.github.com/repos/huggingface/datasets/issues/6173/events | https://github.com/huggingface/datasets/issues/6173 | 1,863,422,065 | I_kwDODunzps5vEZBx | 6,173 | Fix CI for pyarrow 13.0.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2023-08-23T14:11:20 | 2023-08-25T13:06:53 | 2023-08-25T13:06:53 | MEMBER | null | pyarrow 13.0.0 just came out
```
FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different
Attribute "dtype" are different
[left]: datetime64[us, UTC]
[right]: datetime64[ns, UTC]
```
```
FAILED tests/test_table.py::test_cast_sliced_fi... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6173/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6173/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6172/comments | https://api.github.com/repos/huggingface/datasets/issues/6172/events | https://github.com/huggingface/datasets/issues/6172 | 1,863,318,027 | I_kwDODunzps5vD_oL | 6,172 | Make Dataset streaming queries retryable | {
"login": "rojagtap",
"id": 42299342,
"node_id": "MDQ6VXNlcjQyMjk5MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/42299342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rojagtap",
"html_url": "https://github.com/rojagtap",
"followers_url": "https://api.github.com/users/roj... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-08-23T13:15:38 | 2023-08-24T14:29:27 | null | NONE | null | ### Feature request
Streaming datasets, as intended, do not load the entire dataset into memory or onto disk. However, while querying the next data chunk from the remote, it is sometimes possible that the service is down or there might be other issues that cause the query to fail. In such a scenario, it would be nice to ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6172/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6171/comments | https://api.github.com/repos/huggingface/datasets/issues/6171/events | https://github.com/huggingface/datasets/pull/6171 | 1,862,922,767 | PR_kwDODunzps5Yk4AS | 6,171 | Fix typo in about_mapstyle_vs_iterable.mdx | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-08-23T09:21:11 | 2023-08-23T09:32:59 | 2023-08-23T09:21:19 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6171/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6171",
"html_url": "https://github.com/huggingface/datasets/pull/6171",
"diff_url": "https://github.com/huggingface/datasets/pull/6171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6171.patch",
"merged_at": "2023-08-23T09:21... | true |
https://api.github.com/repos/huggingface/datasets/issues/6170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6170/comments | https://api.github.com/repos/huggingface/datasets/issues/6170/events | https://github.com/huggingface/datasets/pull/6170 | 1,862,705,731 | PR_kwDODunzps5YkJOV | 6,170 | feat: Return the name of the currently loaded file | {
"login": "Amitesh-Patel",
"id": 124021133,
"node_id": "U_kgDOB2RpjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/124021133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amitesh-Patel",
"html_url": "https://github.com/Amitesh-Patel",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 1 | 2023-08-23T07:08:17 | 2023-08-29T12:41:05 | null | NONE | null | Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/js... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6170/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6170",
"html_url": "https://github.com/huggingface/datasets/pull/6170",
"diff_url": "https://github.com/huggingface/datasets/pull/6170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6170.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6169/comments | https://api.github.com/repos/huggingface/datasets/issues/6169/events | https://github.com/huggingface/datasets/issues/6169 | 1,862,360,199 | I_kwDODunzps5vAVyH | 6,169 | Configurations in yaml not working | {
"login": "tsor13",
"id": 45085098,
"node_id": "MDQ6VXNlcjQ1MDg1MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/45085098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsor13",
"html_url": "https://github.com/tsor13",
"followers_url": "https://api.github.com/users/tsor13/fo... | [] | open | false | null | [] | null | 4 | 2023-08-23T00:13:22 | 2023-08-23T15:35:31 | null | NONE | null | ### Dataset configurations cannot be created in YAML/README
Hello! I'm trying to follow the docs here in order to create structure in my dataset, as added in #5331: https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118
I have t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6169/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6168/comments | https://api.github.com/repos/huggingface/datasets/issues/6168/events | https://github.com/huggingface/datasets/pull/6168 | 1,861,867,274 | PR_kwDODunzps5YhT7Y | 6,168 | Fix ArrayXD YAML conversion | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 3 | 2023-08-22T17:02:54 | 2023-08-29T12:42:32 | null | CONTRIBUTOR | null | Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion.
Fix #6112 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6168/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6168",
"html_url": "https://github.com/huggingface/datasets/pull/6168",
"diff_url": "https://github.com/huggingface/datasets/pull/6168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6168.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6167/comments | https://api.github.com/repos/huggingface/datasets/issues/6167/events | https://github.com/huggingface/datasets/pull/6167 | 1,861,474,327 | PR_kwDODunzps5Yf9-t | 6,167 | Allow hyphen in split name | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2023-08-22T13:30:59 | 2023-08-22T15:39:24 | 2023-08-22T15:38:53 | CONTRIBUTOR | null | To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6167/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6167",
"html_url": "https://github.com/huggingface/datasets/pull/6167",
"diff_url": "https://github.com/huggingface/datasets/pull/6167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6167.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6166/comments | https://api.github.com/repos/huggingface/datasets/issues/6166/events | https://github.com/huggingface/datasets/pull/6166 | 1,861,259,055 | PR_kwDODunzps5YfOt0 | 6,166 | Document BUILDER_CONFIG_CLASS | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-08-22T11:27:41 | 2023-08-23T14:01:25 | 2023-08-23T13:52:36 | MEMBER | null | Related to https://github.com/huggingface/datasets/issues/6130 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6166",
"html_url": "https://github.com/huggingface/datasets/pull/6166",
"diff_url": "https://github.com/huggingface/datasets/pull/6166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6166.patch",
"merged_at": "2023-08-23T13:52... | true |
https://api.github.com/repos/huggingface/datasets/issues/6165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6165/comments | https://api.github.com/repos/huggingface/datasets/issues/6165/events | https://github.com/huggingface/datasets/pull/6165 | 1,861,124,284 | PR_kwDODunzps5YexBL | 6,165 | Fix multiprocessing with spawn in iterable datasets | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://... | [] | closed | false | null | [] | null | 5 | 2023-08-22T10:07:23 | 2023-08-29T13:27:14 | 2023-08-29T13:18:11 | CONTRIBUTOR | null | The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems.
This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers.
I fixed the issue by replacing lambda and local methods which are not pickle-abl... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6165/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6165",
"html_url": "https://github.com/huggingface/datasets/pull/6165",
"diff_url": "https://github.com/huggingface/datasets/pull/6165.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6165.patch",
"merged_at": "2023-08-29T13:18... | true |
https://api.github.com/repos/huggingface/datasets/issues/6164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6164/comments | https://api.github.com/repos/huggingface/datasets/issues/6164/events | https://github.com/huggingface/datasets/pull/6164 | 1,859,560,007 | PR_kwDODunzps5YZZAJ | 6,164 | Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2023-08-21T14:57:54 | 2023-08-21T16:27:05 | 2023-08-21T16:18:26 | CONTRIBUTOR | null | When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with
```
File "app.py", line 123, in add_new_eval
eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT)
File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arro... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6164/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6164",
"html_url": "https://github.com/huggingface/datasets/pull/6164",
"diff_url": "https://github.com/huggingface/datasets/pull/6164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6164.patch",
"merged_at": "2023-08-21T16:18... | true |
https://api.github.com/repos/huggingface/datasets/issues/6163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6163/comments | https://api.github.com/repos/huggingface/datasets/issues/6163/events | https://github.com/huggingface/datasets/issues/6163 | 1,857,682,241 | I_kwDODunzps5uuftB | 6,163 | Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32 | {
"login": "shishirCTC",
"id": 90616801,
"node_id": "MDQ6VXNlcjkwNjE2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/90616801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shishirCTC",
"html_url": "https://github.com/shishirCTC",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 1 | 2023-08-19T11:34:40 | 2023-08-21T13:28:16 | null | NONE | null | ### Describe the bug
I am getting the following error while I am trying to upload the CSV sheet to train a model. My CSV sheet content is exactly the same as shown in the example CSV file on the Auto Train page. Attaching a screenshot of the error for reference. I have also tried converting the index of the answer that are inte... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6163/timeline | null | null | null | null | false |
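The `ArrowInvalid` in the record above usually means a CSV cell carries a list-like string (e.g. `[254,254]`) in a column where the loader expects a single int32 scalar. A hypothetical, stdlib-only pre-processing helper (not part of `datasets` or AutoTrain) that normalizes such cells before upload could look like:

```python
import ast

def normalize_answer_start(cell):
    """Coerce a CSV cell to a single int.

    Accepts a plain integer (or integer string such as "254") or a
    list-like string such as "[254,254]", in which case the first
    element is kept. Raises ValueError for anything else.
    """
    value = ast.literal_eval(str(cell))
    if isinstance(value, int):
        return value
    if isinstance(value, list) and value and all(isinstance(v, int) for v in value):
        return value[0]  # keep the first start index
    raise ValueError(f"cannot coerce {cell!r} to a single int")
```

Applied row by row before writing the CSV, this keeps the column a valid scalar that Arrow can parse as int32.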
https://api.github.com/repos/huggingface/datasets/issues/6162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6162/comments | https://api.github.com/repos/huggingface/datasets/issues/6162/events | https://github.com/huggingface/datasets/issues/6162 | 1,856,198,342 | I_kwDODunzps5uo1bG | 6,162 | load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contains different data fields | {
"login": "rbrugaro",
"id": 82971690,
"node_id": "MDQ6VXNlcjgyOTcxNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/82971690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rbrugaro",
"html_url": "https://github.com/rbrugaro",
"followers_url": "https://api.github.com/users/rbr... | [] | open | false | null | [] | null | 4 | 2023-08-18T07:19:39 | 2023-08-18T17:00:35 | null | NONE | null | ### Describe the bug
Loading some jsonl files from the redpajama-data-1T GitHub source [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) fails due to one row of the file containing an extra field called **symlink_target: string>**.
When deleting that line the loading... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6162/timeline | null | null | null | null | false |
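One way to work around the schema mismatch described in the record above, before involving `load_dataset` at all, is to pad every JSON line to the union of all keys so that Arrow infers a single consistent schema. A stdlib-only sketch (the helper name and field names are illustrative, not a `datasets` API):

```python
import json

def pad_jsonl_records(lines):
    """Give every JSON-lines record the same set of keys.

    First pass collects the union of keys across all records; second
    pass fills missing keys with None, so a table loader sees one
    uniform schema instead of failing on a row with an extra field.
    """
    records = [json.loads(line) for line in lines if line.strip()]
    all_keys = sorted({key for rec in records for key in rec})
    return [{key: rec.get(key) for key in all_keys} for rec in records]

# Example: the second record has an extra "symlink_target" field.
rows = pad_jsonl_records([
    '{"text": "a", "meta": {}}',
    '{"text": "b", "meta": {}, "symlink_target": "x"}',
])
```

The padded rows can then be written back to jsonl, or passed to a loader that expects a fixed schema.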
https://api.github.com/repos/huggingface/datasets/issues/6161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6161/comments | https://api.github.com/repos/huggingface/datasets/issues/6161/events | https://github.com/huggingface/datasets/pull/6161 | 1,855,794,354 | PR_kwDODunzps5YM0g7 | 6,161 | Fix protocol prefix for Beam | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 4 | 2023-08-17T22:40:37 | 2023-08-18T13:47:59 | null | CONTRIBUTOR | null | Fix #6147 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6161/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6161",
"html_url": "https://github.com/huggingface/datasets/pull/6161",
"diff_url": "https://github.com/huggingface/datasets/pull/6161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6161.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6160/comments | https://api.github.com/repos/huggingface/datasets/issues/6160/events | https://github.com/huggingface/datasets/pull/6160 | 1,855,760,543 | PR_kwDODunzps5YMtLQ | 6,160 | Fix Parquet loading with `columns` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-08-17T21:58:24 | 2023-08-17T22:44:59 | 2023-08-17T22:36:04 | CONTRIBUTOR | null | Fix #6149 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6160/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6160",
"html_url": "https://github.com/huggingface/datasets/pull/6160",
"diff_url": "https://github.com/huggingface/datasets/pull/6160.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6160.patch",
"merged_at": "2023-08-17T22:36... | true |
https://api.github.com/repos/huggingface/datasets/issues/6159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6159/comments | https://api.github.com/repos/huggingface/datasets/issues/6159/events | https://github.com/huggingface/datasets/issues/6159 | 1,855,691,512 | I_kwDODunzps5um5r4 | 6,159 | Add `BoundingBox` feature | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-08-17T20:49:51 | 2023-08-17T20:49:51 | null | CONTRIBUTOR | null | ... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explain... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6159/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6159/timeline | null | null | null | null | false |
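The "4 different formats" the record above alludes to are usually `xyxy` (corner coordinates), `xywh` (corner plus size), center-based `cxcywh`, and their normalized variants. A hedged sketch of the kind of conversion such a `BoundingBox` feature might standardize (these helpers are illustrative, not an actual `datasets` API):

```python
def xyxy_to_xywh(box):
    """[x_min, y_min, x_max, y_max] -> [x_min, y_min, width, height]."""
    x1, y1, x2, y2 = box
    return [x1, y1, x2 - x1, y2 - y1]

def xywh_to_xyxy(box):
    """[x_min, y_min, width, height] -> [x_min, y_min, x_max, y_max]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]
```

Either representation still fits the `Sequence(int_or_float, length=4)` storage mentioned in the issue; the feature would only add an agreed-upon interpretation on top.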
https://api.github.com/repos/huggingface/datasets/issues/6158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6158/comments | https://api.github.com/repos/huggingface/datasets/issues/6158/events | https://github.com/huggingface/datasets/pull/6158 | 1,855,374,220 | PR_kwDODunzps5YLZBf | 6,158 | [docs] Complete `to_iterable_dataset` | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/ste... | [] | closed | false | null | [] | null | 2 | 2023-08-17T17:02:11 | 2023-08-17T19:24:20 | 2023-08-17T19:13:15 | MEMBER | null | Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6158/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6158",
"html_url": "https://github.com/huggingface/datasets/pull/6158",
"diff_url": "https://github.com/huggingface/datasets/pull/6158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6158.patch",
"merged_at": "2023-08-17T19:13... | true |
https://api.github.com/repos/huggingface/datasets/issues/6157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6157/comments | https://api.github.com/repos/huggingface/datasets/issues/6157/events | https://github.com/huggingface/datasets/issues/6157 | 1,855,265,663 | I_kwDODunzps5ulRt_ | 6,157 | DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding' | {
"login": "AisingioroHao0",
"id": 51043929,
"node_id": "MDQ6VXNlcjUxMDQzOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AisingioroHao0",
"html_url": "https://github.com/AisingioroHao0",
"followers_url": "https://api.gi... | [] | open | false | null | [] | null | 11 | 2023-08-17T15:48:11 | 2023-09-01T17:38:26 | null | NONE | null | ### Describe the bug
When I called `load_dataset`, it raised "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". The second time I ran it, there was no error and the dataset object worked.
```python
---------------------------------------------------------------------------
TypeErr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6157/timeline | null | null | null | null | false |
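The traceback in the record above is the classic symptom of feeding a serialized dict back into a dataclass constructor whose field set has since changed. A generic, hypothetical mitigation (not the actual fix in `datasets`) is to filter the dict down to the fields the dataclass declares:

```python
from dataclasses import dataclass, fields

@dataclass
class Info:
    # stand-in for DatasetInfo; the real class has many more fields
    description: str = ""
    version: str = "0.0.0"

def from_dict_tolerant(cls, data):
    """Drop keys the dataclass does not declare, e.g. a stale
    '_column_requires_decoding' entry left over in a cached info file."""
    allowed = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in allowed})
```

This makes reloading old cached metadata tolerant of private keys added by a different library version.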
https://api.github.com/repos/huggingface/datasets/issues/6156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6156/comments | https://api.github.com/repos/huggingface/datasets/issues/6156/events | https://github.com/huggingface/datasets/issues/6156 | 1,854,768,618 | I_kwDODunzps5ujYXq | 6,156 | Why not use self._epoch as seed to shuffle in distributed training with IterableDataset | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 3 | 2023-08-17T10:58:20 | 2023-08-17T14:33:15 | 2023-08-17T14:33:14 | CONTRIBUTOR | null | ### Describe the bug
Currently, distributed training with `IterableDataset` needs a fixed seed passed to `shuffle` so that each node uses the same seed and avoids overlapping samples.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6156/timeline | null | completed | null | null | false |
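The behavior discussed in the record above can be sketched without `datasets` at all: combining a fixed base seed with the epoch number gives every node the identical permutation within an epoch, while still reshuffling between epochs. A minimal stdlib illustration of that idea:

```python
import random

def epoch_shuffle(items, base_seed, epoch):
    """Deterministic shuffle: the same (base_seed, epoch) pair yields
    the same order on every node, so shards never overlap."""
    shuffled = list(items)
    random.Random(base_seed + epoch).shuffle(shuffled)
    return shuffled
```

Each node can then take its own slice of the agreed-upon order (e.g. every `world_size`-th element starting at `rank`).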
https://api.github.com/repos/huggingface/datasets/issues/6155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6155/comments | https://api.github.com/repos/huggingface/datasets/issues/6155/events | https://github.com/huggingface/datasets/pull/6155 | 1,854,661,682 | PR_kwDODunzps5YI8Pc | 6,155 | Raise FileNotFoundError when passing data_files that don't exist | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 5 | 2023-08-17T09:49:48 | 2023-08-18T13:45:58 | 2023-08-18T13:35:13 | MEMBER | null | e.g. when running `load_dataset("parquet", data_files="doesnt_exist.parquet")` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6155/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6155",
"html_url": "https://github.com/huggingface/datasets/pull/6155",
"diff_url": "https://github.com/huggingface/datasets/pull/6155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6155.patch",
"merged_at": "2023-08-18T13:35... | true |
https://api.github.com/repos/huggingface/datasets/issues/6154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6154/comments | https://api.github.com/repos/huggingface/datasets/issues/6154/events | https://github.com/huggingface/datasets/pull/6154 | 1,854,595,943 | PR_kwDODunzps5YItlH | 6,154 | Use yaml instead of get data patterns when possible | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 6 | 2023-08-17T09:17:05 | 2023-08-17T20:46:25 | 2023-08-17T20:37:19 | MEMBER | null | This would make the data files resolution faster: no need to list all the data files to infer the dataset builder to use.
fix https://github.com/huggingface/datasets/issues/6140 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6154/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6154",
"html_url": "https://github.com/huggingface/datasets/pull/6154",
"diff_url": "https://github.com/huggingface/datasets/pull/6154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6154.patch",
"merged_at": "2023-08-17T20:37... | true |
https://api.github.com/repos/huggingface/datasets/issues/6152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6152/comments | https://api.github.com/repos/huggingface/datasets/issues/6152/events | https://github.com/huggingface/datasets/issues/6152 | 1,852,494,646 | I_kwDODunzps5uatM2 | 6,152 | FolderBase Dataset automatically resolves under current directory when data_dir is not specified | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | null | [] | null | 4 | 2023-08-16T04:38:09 | 2023-08-17T13:45:18 | null | CONTRIBUTOR | null | ### Describe the bug
FolderBase Dataset automatically resolves under the current directory when `data_dir` is not specified.
For example:
```
load_dataset("audiofolder")
```
takes a long time to resolve and collect data_files from the current directory. But I think it should instead hit this line for error handling https:... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6152/timeline | null | null | null | null | false |
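The guard the issue above asks for can be sketched generically: refuse to fall back to scanning the current working directory when neither `data_dir` nor `data_files` is given. The function below is a hypothetical stand-in, not the `datasets` internals:

```python
def resolve_folder_source(data_dir=None, data_files=None):
    """Return the location a folder-based builder should scan, or fail fast.

    Without this check, a builder with no explicit source would glob
    the entire current directory, which is slow and surprising.
    """
    if data_dir is None and data_files is None:
        raise ValueError(
            "At least one of data_files or data_dir must be specified; "
            "refusing to scan the current working directory implicitly."
        )
    return data_files if data_files is not None else data_dir
```

A call like `resolve_folder_source()` then fails immediately instead of silently crawling the working directory.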
https://api.github.com/repos/huggingface/datasets/issues/6151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6151/comments | https://api.github.com/repos/huggingface/datasets/issues/6151/events | https://github.com/huggingface/datasets/issues/6151 | 1,851,497,818 | I_kwDODunzps5uW51a | 6,151 | Faster sorting for single key items | {
"login": "jackapbutler",
"id": 47942453,
"node_id": "MDQ6VXNlcjQ3OTQyNDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47942453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackapbutler",
"html_url": "https://github.com/jackapbutler",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-08-15T14:02:31 | 2023-08-21T14:38:26 | 2023-08-21T14:38:25 | NONE | null | ### Feature request
A faster way to sort a dataset which contains a large number of rows.
### Motivation
The current sorting implementations took significantly longer than expected when I was running on a dataset trying to sort by timestamps.
**Code snippet:**
```python
ds = datasets.load_dataset( "json"... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6151/timeline | null | completed | null | null | false |
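For a single sort key, as in the timestamp case above, computing a permutation over just that column and applying it once is usually much cheaper than comparing whole rows (in Arrow-backed code, `pyarrow.compute.sort_indices` plays this role). A stdlib-only sketch of the idea:

```python
def sort_by_column(rows, key):
    """Sort a list of dict rows on one key via an index permutation.

    Only the key column is touched during comparison; the rows
    themselves are rearranged once at the end.
    """
    order = sorted(range(len(rows)), key=lambda i: rows[i][key])
    return [rows[i] for i in order]
```

The permutation list can also be reused to reorder several columns consistently without re-sorting.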
https://api.github.com/repos/huggingface/datasets/issues/6150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6150/comments | https://api.github.com/repos/huggingface/datasets/issues/6150/events | https://github.com/huggingface/datasets/issues/6150 | 1,850,740,456 | I_kwDODunzps5uUA7o | 6,150 | Allow dataset implement .take | {
"login": "brando90",
"id": 1855278,
"node_id": "MDQ6VXNlcjE4NTUyNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brando90",
"html_url": "https://github.com/brando90",
"followers_url": "https://api.github.com/users/brand... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2023-08-15T00:17:51 | 2023-08-17T13:49:37 | null | NONE | null | ### Feature request
I want to do:
```
dataset.take(512)
```
but it only works with streaming = True
### Motivation
A uniform interface to datasets. It is really surprising that the above only works with streaming = True.
### Your contribution
Should be trivial to copy paste the IterableDataset .take to use the local pa... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6150/timeline | null | null | null | null | false |
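For a non-streaming (map-style) dataset, the usual equivalent of `take(n)` is `select(range(n))`; the toy class below is only a sketch of that equivalence, not the real `datasets.Dataset`:

```python
class ToyDataset:
    """Minimal stand-in for a map-style dataset, just to show the
    take(n) == select(range(n)) equivalence the issue asks for."""

    def __init__(self, rows):
        self.rows = list(rows)

    def select(self, indices):
        return ToyDataset(self.rows[i] for i in indices)

    def take(self, n):
        # the requested sugar: first n rows, like IterableDataset.take
        return self.select(range(min(n, len(self.rows))))

    def __len__(self):
        return len(self.rows)
```

Usage: `ToyDataset(range(1000)).take(512)` returns a new dataset of 512 rows, mirroring `dataset.take(512)` in streaming mode.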