Describe the enhancement requested
Rationale
Unlike the traditional approach, where a new page is started once the current page's size reaches the default limit (typically 1 MB), the proposed implementation splits pages at the chunk boundaries identified by a content-defined chunker. The resulting pages have variable sizes but are much more resilient to data modifications such as updates, inserts, and deletes. This makes data storage and retrieval in Apache Parquet more robust and efficient by ensuring that identical data segments are chunked the same way regardless of their position within the dataset.
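As a rough illustration of the idea (not the actual writer code), the following sketch uses a gear rolling hash, a common content-defined chunking technique, to place boundaries wherever the hash of the most recent bytes matches a bit mask. All names and parameters here are illustrative:

```python
import random

# Gear table: one pseudo-random 64-bit value per byte value.
# A fixed seed keeps boundaries deterministic across runs.
random.seed(42)
GEAR = [random.getrandbits(64) for _ in range(256)]

def cdc_chunks(data: bytes,
               mask: int = (1 << 16) - 1,   # boundary expected ~every 64 KiB past min_size
               min_size: int = 16 * 1024,
               max_size: int = 256 * 1024) -> list[bytes]:
    """Split data where the rolling gear hash matches the mask.

    Boundaries depend only on the bytes immediately preceding them,
    so an insert or delete only disturbs the chunks it touches and
    the chunker re-synchronizes on the unmodified suffix.
    """
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + GEAR[byte]) & 0xFFFFFFFFFFFFFFFF
        size = i - start + 1
        if size >= max_size or (size >= min_size and (h & mask) == 0):
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```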
Parquet Deduplication
The space savings can be significant; test results were generated on data containing a series of snapshots of a database. The results are calculated by simulating a content-addressable storage system such as Hugging Face Hub or restic.
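Such a simulation can be sketched in a few lines: store each chunk under its content hash and count how many bytes the store actually keeps. This is only a sketch of the measurement idea, not the evaluation tool itself:

```python
import hashlib

def unique_bytes(snapshots: list[bytes], chunker) -> int:
    """Simulate a content-addressable store: a chunk is kept once,
    keyed by its SHA-256, no matter how many snapshots contain it."""
    store = {}
    for snapshot in snapshots:
        for chunk in chunker(snapshot):
            store[hashlib.sha256(chunk).digest()] = len(chunk)
    return sum(store.values())

def dedup_ratio(snapshots: list[bytes], chunker) -> float:
    """Logical bytes written vs. bytes the store actually keeps."""
    total = sum(len(s) for s in snapshots)
    return total / unique_bytes(snapshots, chunker)
```

When consecutive snapshots share most of their chunks, the ratio approaches the number of snapshots.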
Example of inserting records into a Parquet file
The following heatmaps show the common byte blocks of a Parquet file before and after inserting some records. The green parts are common, while the red parts differ and therefore must be stored twice. With content-defined chunking, CAS systems can achieve much higher deduplication ratios.
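The effect can be reproduced without Parquet at all by chunking a byte buffer before and after an insertion and counting the shared chunks. Here `cdc_chunks` is the illustrative sketch from above, and a fixed-size splitter stands in for size-based page splits:

```python
import hashlib
import os

def fixed_chunks(data: bytes, size: int = 64 * 1024) -> list[bytes]:
    # Size-based splitting: every boundary after the edit shifts.
    return [data[i:i + size] for i in range(0, len(data), size)]

def shared_fraction(old: bytes, new: bytes, chunker) -> float:
    old_hashes = {hashlib.sha256(c).digest() for c in chunker(old)}
    new_hashes = [hashlib.sha256(c).digest() for c in chunker(new)]
    return sum(h in old_hashes for h in new_hashes) / len(new_hashes)

base = os.urandom(4 * 1024 * 1024)                      # 4 MiB "file"
edited = base[:1000] + b"some inserted records" + base[1000:]

print(shared_fraction(base, edited, fixed_chunks))      # ~0.0: every chunk shifts
print(shared_fraction(base, edited, cdc_chunks))        # close to 1.0: boundaries resync
```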
An evaluation tool is available at https://github.com/kszucs/de, with many more examples and heatmaps comparing various scenarios: updates, deletes, inserts, and appends.
Component(s)
C++, Parquet