[Dataset hub] more than storage: streaming, editing, connectors #2052
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```html
<div class="flex justify-center">
    <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage.png"/>
    <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-dark.png"/>
</div>
```
maybe we should have a toggle or something to show the Streaming code in addition to the Download code? wdyt @cfahlgren1?
julien-c left a comment:
this is great 🔥 important stuff
## Integrated libraries

```diff
- If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/Samsung/samsum?library=datasets) shows how to do so with 🤗 Datasets below.
+ If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below.
```
👍
## Integrated libraries

If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below.
```diff
- If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below.
+ If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the "Use this dataset" button on the dataset page to see how to do so. For example, [`knkarthick/samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below.
```
Streaming is especially useful for reading big files on Hugging Face progressively, or for reading only a small portion of them. For example, `tarfile` can iterate over the files of TAR archives, `zipfile` can read files from ZIP archives, and `pyarrow` can access row groups of Parquet files.
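To illustrate the point about standard libraries reading archives piece by piece, here is a self-contained sketch using Python's built-in `zipfile` on an in-memory archive (the file names and contents are made up for the example):

```python
import io
import zipfile

# Build a small ZIP archive in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.writestr("b.txt", "world")

# Read a single member without extracting the whole archive
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    first = zf.read("a.txt").decode()

print(names)  # ['a.txt', 'b.txt']
print(first)  # hello
```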
```diff
- >![TIP]
+ > [!TIP]
```
```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

repo_id = "allenai/c4"
path_in_repo = "en/c4-train.00000-of-01024.json.gz"

# Stream the file
with fs.open(f"datasets/{repo_id}/{path_in_repo}", "r", compression="gzip") as f:
    print(f.readline())  # read only the first line
# {"text":"Beginners BBQ Class Taking Place in Missoula!...}
```
(nit, DX) any way to make the snippet even simpler/more compact? For instance, do we need to instantiate an HfFileSystem, or could we have syntactic sugar?
Parquet is a great format for AI datasets. It offers good compression, a columnar structure for efficient processing and projections, and multi-level metadata for fast filtering, and is suitable for datasets of all sizes.
Parquet files are divided into row groups that are often around 100MB each. This lets data loaders and data processing frameworks stream data progressively, iterating on row groups.
we do need a doc page about CDC Parquet btw!! cc @kszucs too
### Efficient random access
Row groups are further divided into columns, and columns into pages. Pages are often around 1MB and are the smallest unit of data in Parquet, since this is where compression is applied. Accessing pages enables loading specific rows without having to load a full row group, and is possible if the Parquet file has a page index. However, not every Parquet framework supports reading at the page level. PyArrow doesn't, for example, but the `parquet` crate in Rust does:
add a good visual of the Parquet file format, and btw we should have a /docs/hub/parquet page probably...
### Optimized Parquet files
Parquet files on Hugging Face are optimized to improve storage efficiency, accelerate downloads and uploads, and enable efficient dataset streaming and editing:
add a visual of how those files or datasets are marked on the Hub 😁
hint hint @lhoestq =)
## Using the `huggingface_hub` client library
The rich feature set in the `huggingface_hub` library allows you to manage repositories, including editing dataset files on the Hub. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more.
should embed at least an example or two (same in the "downloading" doc btw)
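An example along those lines might look like this sketch using `HfApi.upload_file` (repo and file names are hypothetical; the call is wrapped in a function and left uninvoked since it needs a token and network access):

```python
from huggingface_hub import HfApi

def update_dataset_file(repo_id: str, path_in_repo: str, content: bytes):
    """Create or overwrite a single file in a dataset repo on the Hub."""
    api = HfApi()  # picks up your saved token (e.g. from `huggingface-cli login`)
    api.upload_file(
        path_or_fileobj=content,
        path_in_repo=path_in_repo,
        repo_id=repo_id,
        repo_type="dataset",
    )

# Usage (hypothetical repo; requires write access):
# update_dataset_file("username/my-dataset", "data/train.csv", b"text,label\nhi,0\n")
```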
Therefore, the amount of data to reupload depends on the edits and the file structure.
The Parquet format is columnar and compressed at the page level (pages are around ~1MB).
We optimized Parquet for Xet with [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc), which ensures unchanged data generally results in unchanged pages.
add a visual of how those files and/or datasets are marked on the Hub 😁
I also touched the libraries page a bit cc @davanstrien