[C++][Python][Parquet] Support Content-Defined Chunking of Parquet files #45750

kszucs commented Mar 11, 2025

Describe the enhancement requested

Rationale

Unlike the traditional approach, where a page is closed once its size reaches a fixed limit (typically 1 MiB), this implementation splits pages at the boundaries identified by a content-defined chunker. The resulting pages have variable sizes but are much more resilient to data modifications such as updates, inserts, and deletes: because boundaries depend on the data itself rather than on absolute position, identical data segments are chunked the same way regardless of where they appear in the dataset. This makes Parquet files far more efficient to store and transmit through deduplicating systems.
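To make the mechanism concrete, below is a minimal sketch of boundary detection with a gear rolling hash (in the style of FastCDC). All names and parameters here are illustrative assumptions, not the actual chunker proposed for the Parquet writer, which would operate on column values before encoding rather than on raw bytes:

```python
import hashlib

# 256 pseudo-random 64-bit constants, one per byte value (hypothetical setup).
GEAR = [int.from_bytes(hashlib.sha256(bytes([i])).digest()[:8], "big")
        for i in range(256)]

MASK = (1 << 16) - 1            # one boundary per ~64 KiB of input on average
MIN_SIZE, MAX_SIZE = 16 * 1024, 256 * 1024

def cdc_chunks(data: bytes):
    """Yield chunks whose boundaries depend only on local byte content."""
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF  # rolling gear hash
        size = i - start + 1
        # Cut when the low hash bits hit the mask, within size bounds.
        if (size >= MIN_SIZE and (h & MASK) == 0) or size >= MAX_SIZE:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]
```

Since a boundary decision looks only at a rolling window of recent input, inserting or deleting data only perturbs the boundaries near the edit, and the chunking resynchronizes shortly after it.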

Parquet Deduplication

The space savings can be significant; the following results were measured on test data containing a series of snapshots of a database:

┏━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Title      ┃ Total Size ┃ Chunk Size ┃ Compressed Chunk Size ┃ Dedup Ratio ┃ Compressed Dedup Ratio ┃ Transmitted XTool Bytes ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ JSONLines  │   93.0 GiB │   64.9 GiB │              12.4 GiB │         70% │                    13% │                13.5 GiB │
│ Parquet    │   16.2 GiB │   15.0 GiB │              13.4 GiB │         93% │                    83% │                13.5 GiB │
│ CDC ZSTD   │    8.8 GiB │    5.6 GiB │               5.6 GiB │         64% │                    64% │                 6.1 GiB │
│ CDC Snappy │   16.2 GiB │    8.6 GiB │               8.1 GiB │         53% │                    50% │                 9.4 GiB │
└────────────┴────────────┴────────────┴───────────────────────┴─────────────┴────────────────────────┴─────────────────────────┘

The results are calculated by simulating a content-addressable storage system such as Hugging Face Hub or restic.
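For illustration, the simulation can be approximated by chunking each snapshot, hashing the chunks, and counting the bytes a content-addressable store would actually keep; note that the table's Dedup Ratio corresponds to unique bytes over total bytes, so lower is better. A minimal sketch, reusing the hypothetical cdc_chunks helper above (the real numbers come from the evaluation tool linked below):

```python
import hashlib

def dedup_ratio(snapshots: list[bytes]) -> float:
    """Unique chunk bytes / total chunk bytes across all snapshots."""
    seen: set[bytes] = set()
    total = unique = 0
    for data in snapshots:
        for chunk in cdc_chunks(data):          # hypothetical helper from above
            total += len(chunk)
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:              # only unseen chunks cost storage
                seen.add(digest)
                unique += len(chunk)
    return unique / total if total else 1.0
```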


Example of inserting records into a Parquet file

The following heatmaps show the common byte blocks of a Parquet file before and after inserting some records. The green parts are identical in both files, whereas the red parts differ and must therefore be stored twice. With content-defined chunking, CAS systems can achieve much higher deduplication ratios.

[Image: heatmaps of common byte blocks before and after record insertion]

There is an evaluation tool available at https://github.com/kszucs/de with many more examples and heatmaps comparing various scenarios: updates, deletes, insertions, and appends.
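As a sketch of how the feature might eventually be exposed from Python, the snippet below writes two snapshots of a table, the second with a row inserted in the middle. The use_content_defined_chunking option name and its parameters are assumptions for illustration; designing the actual C++/Python API surface is part of this issue:

```python
import pyarrow as pa
import pyarrow.parquet as pq

n = 1_000_000
base = pa.table({"id": list(range(n)),
                 "value": [f"row-{i}" for i in range(n)]})

# Hypothetical writer option; not a settled API.
cdc = {"min_chunk_size": 256 * 1024, "max_chunk_size": 1024 * 1024}
pq.write_table(base, "before.parquet", compression="zstd",
               use_content_defined_chunking=cdc)

# Insert one row in the middle and write the file again.
inserted = pa.concat_tables([
    base.slice(0, n // 2),
    pa.table({"id": [n], "value": ["new-row"]}),
    base.slice(n // 2),
])
pq.write_table(inserted, "after.parquet", compression="zstd",
               use_content_defined_chunking=cdc)

# With fixed-size paging, every page after the insertion point shifts and
# re-encodes differently; with content-defined chunking the page boundaries
# resynchronize soon after the insertion, so a chunk-level diff between the
# two files stays small.
```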

Component(s)

C++, Parquet
