Dataset Summary
This dataset contains GROBID-parsed outputs for research papers from ICLR 2025, ICML 2025, and NeurIPS 2025.
The repository is distributed as zip archives (no PDFs) to make it easy to download and mirror.
What you will find (per paper, when available):
- TEI XML produced by GROBID (`*.xml`)
- BibTeX produced by GROBID (`*.bib`)
- In some folders: additional GROBID artifacts such as `grobid_metadata` JSON and `grobid_bib` exports

What you will not find in the current release:
- PDFs (explicitly excluded)
- OpenReview reviews / scores
- A single tabular `datasets`-style split (this repo is file-based)
Dataset Structure
The dataset is organized by conference year folders and typically shipped as zips:

```
paper_data/
├── ICLR_2025_no_pdf.zip
├── ICML_2025_no_pdf.zip
└── NeurIPS_2025_no_pdf.zip
```

Inside each zip (example; exact subfolders can differ by venue):

```
<VENUE>_2025/
├── grobid_tei/       # TEI XML files (*.xml)
├── grobid_bib/       # BibTeX exports (*.bib) (venue-dependent)
├── grobid_metadata/  # JSON metadata (venue-dependent)
└── ...               # other non-PDF artifacts
```
| Conference | Papers | Reviews |
|---|---|---|
| ICLR 2025 | 11,475 | 46,748 |
| ICML 2025 | 3,385 | 35,546 |
| NeurIPS 2025 | 5,532 | 22,373 |
How to Use
Download from Hugging Face Hub
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="thanhkt/paper", repo_type="dataset")
print(local_dir)
```
Then unzip the archives you need (example):
```shell
unzip -q ICLR_2025_no_pdf.zip -d ./extracted/
```
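If you prefer to stay in Python, a minimal sketch using the standard `zipfile` module can inspect or extract an archive (the helper names `list_tei_files` and `extract_all` are illustrative, not part of this dataset):

```python
import zipfile

def list_tei_files(archive_path):
    """Return the names of TEI XML members inside a venue zip, without extracting."""
    with zipfile.ZipFile(archive_path) as zf:
        return [name for name in zf.namelist() if name.endswith(".xml")]

def extract_all(archive_path, dest="./extracted/"):
    """Extract every member of the archive into dest."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
```

For example, `list_tei_files("ICLR_2025_no_pdf.zip")` lets you check coverage before committing to a full extraction.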
Parsing
- TEI XML: use any TEI/XML parser to extract title/abstract/sections/citations.
- BibTeX: parse with `bibtexparser` or similar libraries.
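To illustrate the TEI point above, here is a minimal sketch using Python's standard `xml.etree.ElementTree`. The inline TEI string is a toy stand-in for a real GROBID output file, and the exact element paths may vary slightly across GROBID versions:

```python
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def parse_tei(xml_text):
    """Extract the paper title and abstract text from a GROBID TEI XML string."""
    root = ET.fromstring(xml_text)
    title_el = root.find(".//tei:titleStmt/tei:title", TEI_NS)
    abstract_el = root.find(".//tei:abstract", TEI_NS)
    title = title_el.text.strip() if title_el is not None and title_el.text else ""
    abstract = " ".join(abstract_el.itertext()).strip() if abstract_el is not None else ""
    return title, abstract

# Toy TEI document standing in for a file from grobid_tei/.
sample = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc><titleStmt><title>Example Paper</title></titleStmt></fileDesc>
    <profileDesc><abstract><p>We study something.</p></abstract></profileDesc>
  </teiHeader>
</TEI>"""

title, abstract = parse_tei(sample)
```

For real files, read the XML with `open(path, encoding="utf-8").read()` and pass it to `parse_tei`; sections and citations live under `tei:body` and `tei:listBibl` and can be pulled out the same way.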
Limitations
- GROBID outputs may contain parsing errors and incomplete fields depending on paper formatting.
- File coverage varies by venue and crawling/processing completeness.
- Because this repo is primarily zip/binary files, the Hugging Face dataset viewer may not display a table preview.