[C++][Python][Parquet] Support Content-Defined Chunking of Parquet files #45750

Closed
@kszucs

Description

Describe the enhancement requested

Rationale

Unlike the traditional approach, where a page is closed once its size reaches a fixed limit (typically 1MB), this implementation splits pages at boundaries identified by a content-defined chunker. The resulting pages have variable sizes but are much more resilient to data modifications such as updates, inserts, and deletes. Because identical runs of data are chunked in the same way regardless of their position within the file, content-addressable storage systems can deduplicate them, improving the efficiency of storing and transferring Parquet data.
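For illustration, here is a minimal sketch of the kind of rolling-hash chunker involved, in the style of a Gear hash. The table, mask, and size limits are invented for the example and are not the parameters of the proposed Parquet implementation:

```python
# Minimal Gear-style content-defined chunker (illustrative only).
import random

rng = random.Random(0)
GEAR = [rng.getrandbits(64) for _ in range(256)]  # one random word per byte value

def cdc_chunks(data: bytes, min_size=2048, max_size=65536, mask=(1 << 12) - 1):
    """Cut a chunk whenever the rolling hash of the recent bytes hits the mask."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + GEAR[byte]) & 0xFFFFFFFFFFFFFFFF
        size = i - start + 1
        # Boundaries depend on the bytes themselves, so identical runs of
        # data produce identical chunks wherever they appear in the file.
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

With a 12-bit mask a boundary appears on average every ~4 KiB past the minimum size, so chunk sizes vary but stay bounded between `min_size` and `max_size`.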

Parquet Deduplication

The space savings can be significant. The following results were generated on test data containing a series of snapshots of a database:

┏━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
┃            ┃            ┃            ┃     Compressed Chunk ┃             ┃     Compressed Dedup ┃    Transmitted XTool ┃
┃ Title      ┃ Total Size ┃ Chunk Size ┃                 Size ┃ Dedup Ratio ┃                Ratio ┃                Bytes ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
│ JSONLines  │   93.0 GiB │   64.9 GiB │             12.4 GiB │         70% │                  13% │             13.5 GiB │
│ Parquet    │   16.2 GiB │   15.0 GiB │             13.4 GiB │         93% │                  83% │             13.5 GiB │
│ CDC ZSTD   │    8.8 GiB │    5.6 GiB │              5.6 GiB │         64% │                  64% │              6.1 GiB │
│ CDC Snappy │   16.2 GiB │    8.6 GiB │              8.1 GiB │         53% │                  50% │              9.4 GiB │
└────────────┴────────────┴────────────┴──────────────────────┴─────────────┴──────────────────────┴──────────────────────┘

The results were calculated by simulating a content-addressable storage (CAS) system such as the Hugging Face Hub or restic.
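Conceptually, such a system stores each unique chunk once under its content hash, so a new snapshot only costs the bytes of chunks not seen before. A minimal sketch (the `store` helper and the 1000-byte chunks are made up for the example):

```python
# Toy content-addressable store: each unique chunk is kept once, keyed by hash.
import hashlib

def store(cas: dict, chunks):
    """Insert chunks into the store; return the bytes actually added."""
    added = 0
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()
        if key not in cas:
            cas[key] = chunk
            added += len(chunk)
    return added

# Two snapshots sharing most chunks: only the changed chunk costs storage.
cas = {}
snapshot1 = [b"a" * 1000, b"b" * 1000, b"c" * 1000]
snapshot2 = [b"a" * 1000, b"X" * 1000, b"c" * 1000]  # one chunk modified
first = store(cas, snapshot1)   # all 3000 bytes are new
second = store(cas, snapshot2)  # only the 1000 changed bytes are new
```

The better the chunker aligns identical data into identical chunks, the smaller the second number is relative to the snapshot size, which is what the dedup ratios above measure.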

Example of inserting records into a Parquet file

The following heatmaps show the common byte blocks of a Parquet file before and after inserting some records. The green parts are identical between the two files, whereas the red parts differ and hence must be stored twice. With content-defined chunking, CAS systems can achieve much higher deduplication ratios.

There is an evaluation tool available at https://github.com/kszucs/de with many more examples and heatmaps comparing various scenarios: updates, deletes, inserts, and appends.
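The effect of an insert can be reproduced with a toy model: fixed-offset boundaries all shift after the insert point, while content-defined boundaries realign. The zero-byte anchor rule below is a deliberately simplified stand-in for a real rolling-hash chunker:

```python
# Compare deduplication after an insert: fixed-size vs. content-defined chunking.
import hashlib
import random

rng = random.Random(7)

def fixed_chunks(data, size=1024):
    # Boundaries at fixed offsets: an insert shifts every later boundary.
    return [data[i:i + size] for i in range(0, len(data), size)]

def content_chunks(data, anchor=0):
    # Toy CDC: cut after each zero byte, so boundaries follow the content
    # and realign after a local edit (real chunkers use a rolling hash).
    chunks, start = [], 0
    for i, byte in enumerate(data):
        if byte == anchor:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def reused_bytes(old_chunks, new_chunks):
    # Bytes of the new file whose chunk hash the CAS has already stored.
    seen = {hashlib.sha256(c).digest() for c in old_chunks}
    return sum(len(c) for c in new_chunks if hashlib.sha256(c).digest() in seen)

original = bytes(rng.getrandbits(8) for _ in range(32_000))
modified = original[:500] + b"\x01new records\x01" + original[500:]

fixed = reused_bytes(fixed_chunks(original), fixed_chunks(modified))
cdc = reused_bytes(content_chunks(original), content_chunks(modified))
# With fixed chunks almost nothing matches past the insert point; with
# content-defined chunks only the chunk containing the insert is new.
```

This is the mechanism behind the green regions in the heatmaps: everything outside the edited chunk hashes to an already-stored block.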

Component(s)

C++, Parquet
