Support bulk deletion in batch file cleanup task #1179


Open · wants to merge 3 commits into base: main

Conversation

danielhumanmod
Contributor

Summary

As a follow-up improvement to #515.

After introducing batch file cleanup, we want to make use of the bulk deletion APIs provided by Iceberg, which leverage the native bulk-deletion support of many object storage services (e.g., S3, GCS, Azure Blob Storage) to achieve better performance and lower cost.

Recommended Review Order

  1. FileCleanupTaskHandler.java
  2. BatchFileCleanupTaskHandler.java
  3. TableCleanupTaskHandler.java
  4. Unit Tests

Iceberg API Reference

// https://github.com/apache/iceberg/blob/main/core/src/main/java/org/apache/iceberg/CatalogUtil.java#L194
    public static void deleteFiles(FileIO io, Iterable<String> files, String type, boolean concurrent) {
        if (io instanceof SupportsBulkOperations) {
            try {
                SupportsBulkOperations bulkIO = (SupportsBulkOperations) io;
                bulkIO.deleteFiles(files);
            } catch (RuntimeException e) {
                LOG.warn("Failed to bulk delete {} files", type, e);
            }
        } else if (concurrent) {
            deleteFiles(io, files, type);
        } else {
            files.forEach((file) -> deleteFile(io, file, type));
        }
    }
@@ -103,4 +120,53 @@ public CompletableFuture<Void> tryDelete(
CompletableFuture.delayedExecutor(
FILE_DELETION_RETRY_MILLIS, TimeUnit.MILLISECONDS, executorService));
}
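The retry scheduling above (re-running a failed attempt on `CompletableFuture.delayedExecutor`) can be sketched roughly as follows; `tryWithRetry` and the constant are illustrative names, not the PR's actual handler code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class RetrySketch {

  static final long RETRY_MILLIS = 10; // illustrative; the handler uses FILE_DELETION_RETRY_MILLIS

  // Run an action; on failure, schedule the next attempt after a delay,
  // mirroring how tryDelete chains retries via delayedExecutor.
  static CompletableFuture<Void> tryWithRetry(
      Supplier<Boolean> action, int attemptsLeft, ScheduledExecutorService pool) {
    return CompletableFuture
        .runAsync(() -> {
          if (!action.get()) {
            throw new RuntimeException("attempt failed");
          }
        }, pool)
        .handle((ok, err) -> {
          if (err == null) {
            return CompletableFuture.<Void>completedFuture(null);
          }
          if (attemptsLeft <= 1) {
            return CompletableFuture.<Void>failedFuture(err);
          }
          // Re-enter after RETRY_MILLIS on a delayed executor.
          return CompletableFuture
              .supplyAsync(
                  () -> tryWithRetry(action, attemptsLeft - 1, pool),
                  CompletableFuture.delayedExecutor(RETRY_MILLIS, TimeUnit.MILLISECONDS, pool))
              .thenCompose(f -> f);
        })
        .thenCompose(f -> f);
  }

  public static void main(String[] args) {
    ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
    AtomicInteger calls = new AtomicInteger();
    // Fails twice, succeeds on the third attempt.
    tryWithRetry(() -> calls.incrementAndGet() >= 3, 3, pool).join();
    System.out.println("attempts: " + calls.get());
    pool.shutdown();
  }
}
```

Note this retry state lives only in memory, which is exactly the limitation discussed in the review below: if the service dies mid-retry, nothing re-drives the task.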

/**
* Attempts to delete multiple files in a batch operation with retry logic. If an error occurs, it
Contributor

This LGTM, but it's important to note that we may not retry in the event that the service dies. Eventually, we should have Polaris try to drain the task queue for any tasks that failed the first time they were run.

Contributor Author

That's a good catch. Do you mind if I follow up with a new PR to do this? It might involve refactoring the existing delete task.

Collaborator

@eric-maynard - I was looking at the currently existent tryDelete function regarding this comment. Is this comment something that also applies to that function?

Contributor Author

danielhumanmod, May 15, 2025


> @eric-maynard - I was looking at the currently existent tryDelete function regarding this comment. Is this comment something that also applies to that function?

No @adnanhemani, the current PR mainly focuses on providing more efficient bulk deletion; it cannot guarantee that a task eventually gets executed.

As for Eric's comment, we have a plan to fix that; please refer to #774.

Collaborator

@danielhumanmod - my question was whether the current tryDelete function also suffers from the same lack of guarantees, i.e., that the task may not eventually be retried. (This was a question for my own learning, to see if anything in this new function is fundamentally different from the existing function or tasks in general; not accusatory or asking for any change.) After reading #774, it seems the same comment applies to both tasks. Thanks!

Collaborator

adnanhemani left a comment


One nit and a clarifying question :)

validFiles.stream()
.map(file -> super.tryDelete(tableId, authorizedFileIO, null, file, null, 1))
.toList();
CompletableFuture<Void> deleteFutures =
Collaborator

nit: this is now a single future, so maybe we should name this deleteFuture instead
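For context, the combining step under discussion (many per-file futures reduced to one with `CompletableFuture.allOf`) looks roughly like this; names are illustrative:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AllOfSketch {

  // Reduce a list of per-file delete futures to one future that completes
  // only when every delete has finished. The combined result is singular,
  // hence the suggestion to name the variable deleteFuture rather than
  // deleteFutures.
  static CompletableFuture<Void> combine(List<CompletableFuture<Void>> futures) {
    return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
  }

  public static void main(String[] args) {
    List<CompletableFuture<Void>> perFile =
        List.of(
            CompletableFuture.<Void>completedFuture(null),
            CompletableFuture.<Void>completedFuture(null));
    CompletableFuture<Void> deleteFuture = combine(perFile);
    deleteFuture.join();
    System.out.println("all deletes done: " + deleteFuture.isDone());
  }
}
```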

@@ -97,5 +92,25 @@ public boolean handleTask(TaskEntity task, CallContext callContext) {
}
}

public record BatchFileCleanupTask(TableIdentifier tableId, List<String> batchFiles) {}
public enum BatchFileType {
TABLE_METADATA("table_metadata");
Collaborator

Not completely sure here, so I'm asking for context: why did we introduce this enum type here instead of keeping it as a record? Is this going to be extensible for something in the immediate future?

Contributor Author

The record in line 114 is still there — we added this enum to specify what kind of file the BatchFileCleanupTask is cleaning up.
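A minimal sketch of the shape being described (the field names are illustrative, and `String tableId` stands in for the PR's `TableIdentifier`):

```java
import java.util.List;

public class CleanupTaskSketch {

  // Tags what kind of files a batch cleanup task handles; new constants
  // could be added later for other file categories (illustrative shape).
  enum BatchFileType {
    TABLE_METADATA("table_metadata");

    final String typeName;

    BatchFileType(String typeName) {
      this.typeName = typeName;
    }
  }

  // The task payload record remains; the enum merely classifies it.
  // (String stands in for the PR's TableIdentifier.)
  record BatchFileCleanupTask(String tableId, List<String> batchFiles) {}

  public static void main(String[] args) {
    BatchFileCleanupTask task =
        new BatchFileCleanupTask("ns.table", List.of("v1.metadata.json", "v2.metadata.json"));
    System.out.println(
        BatchFileType.TABLE_METADATA.typeName + ": " + task.batchFiles().size() + " files");
  }
}
```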

Collaborator

That makes a lot more sense :) Thanks!
