🧬 SchemaBio Bundle
📋 Overview
This dataset is a comprehensive, meticulously curated, one-stop annotation library designed for bioinformatics and genomic variant analysis (e.g., WES/WGS pipelines). It aggregates essential variant effect predictors, clinical databases, and genomic reference files into a unified repository, significantly reducing the time spent on data collection, format alignment, and preprocessing.
🗂️ Included Data & Sources
This repository includes data from the following categories:
- Variant Effect Predictors & Scores: AlphaMissense, EVOScore2, Pangolin.
- Clinical & Population Frequencies: gnomAD, CIViC, mskcc_hotspot.
- Genomic References & Regions: UCSC annotations (cytoband, ncbiRefSeq.gtf), Ensembl references (GRCh38/37 FASTA, vep_cache), ManeSelectBed.
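To work with these components programmatically, files in this repository can be addressed with `hf://` URIs (the scheme used by `huggingface_hub` and fsspec-aware tools) or fetched with `huggingface_hub.hf_hub_download` / `snapshot_download`. Below is a minimal sketch of building such URIs; the repository id is taken from this card, but the component paths in the example are hypothetical placeholders, so check the repository file tree for the actual layout:

```python
# Sketch: addressing files inside the SchemaBio Bundle by hf:// URI.
# The repo id is real; the component paths below are hypothetical examples.
REPO_ID = "pzweuj/SchemaBio_Bundle"


def hf_url(path: str, revision: str = "main") -> str:
    """Build an hf:// URI for a file in the bundle, pinned to a revision."""
    return f"hf://datasets/{REPO_ID}@{revision}/{path}"


# Hypothetical relative paths -- illustrative only, not the actual layout.
components = {
    "alphamissense": "AlphaMissense/AlphaMissense_hg38.tsv.gz",
    "mane_select": "ManeSelectBed/mane_select.bed",
}

for name, path in components.items():
    print(name, hf_url(path))
```

Pinning `revision` to a commit hash rather than `main` keeps annotation versions reproducible across pipeline runs.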
⚖️ Licensing & Commercial Use
Great news: All underlying datasets included in this repository currently permit commercial use, provided that users strictly adhere to their respective attribution and share-alike requirements.
Because this repository acts as an aggregator, it does not fall under a single license. You must comply with the individual licenses of the respective underlying data sources. The table below outlines the specific license for each component:
📜 Database Licenses Summary
| Database / Source | Version | Official License | Commercial Use | Redistribution | Key Requirements & Notes |
|---|---|---|---|---|---|
| AlphaMissense | v3 | CC BY 4.0 | ✅ Allowed | ✅ Allowed | Must attribute to DeepMind Technologies Limited. |
| EVOScore2 | 20260128 | MIT | ✅ Allowed | ✅ Allowed | Must include the original copyright notice. |
| Pangolin | v1 | CC BY 4.0 | ✅ Allowed | ✅ Allowed | Must provide proper attribution. |
| ManeSelectBed | v49 | MIT | ✅ Allowed | ✅ Allowed | Must include the original copyright notice. |
| mskcc_hotspot | v2 | ODbL | ✅ Allowed | ✅ Allowed | Share-alike: if you modify or adapt this database, the derivative must also be released under ODbL. |
| CIViC | 20260301 | CC0 1.0 | ✅ Allowed | ✅ Allowed | Public domain; no restrictions. |
| gnomAD | v4.1 | Unrestricted | ✅ Allowed | ✅ Allowed | The Broad Institute places no restrictions on use. |
| UCSC Data | / | Public Domain | ✅ Allowed | ✅ Allowed | Publicly funded data, free for all uses. |
| Ensembl Data | release 115 | Unrestricted | ✅ Allowed | ✅ Allowed | EMBL-EBI places no restrictions on use or redistribution. |
Disclaimer: The creator of this combined dataset claims no ownership over the original source data. Users are solely responsible for ensuring their downstream use complies with the original data providers' terms.