
Commit 58fd152

Add more retries when restoring a basebackup
Commit 4869d84 added logic to make the number of retries dependent on the backup size. Instead of allowing for one error every 64 GiB, allow for one every 10 GiB. It's better to retry more than to start from scratch when things go wrong.
1 parent def54ec · commit 58fd152

1 file changed (+2, −2 lines)
pghoard/restore.py

Lines changed: 2 additions & 2 deletions
@@ -607,8 +607,8 @@ def _get_basebackup(
         os.chmod(dirname, 0o700)
 
         # Based on limited samples, there could be one stalled download per 122GiB of transfer
-        # So we tolerate one stall for every 64GiB of transfer (or STALL_MIN_RETRIES for smaller backup)
-        stall_max_retries = max(STALL_MIN_RETRIES, int(int(metadata.get("total-size-enc", 0)) / (64 * 2 ** 30)))
+        # So we tolerate one stall for every 10GiB of transfer (or STALL_MIN_RETRIES for smaller backup)
+        stall_max_retries = max(STALL_MIN_RETRIES, int(int(metadata.get("total-size-enc", 0)) / (10 * 2 ** 30)))
 
         fetcher = BasebackupFetcher(
             app_config=self.config,
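
For illustration, a minimal sketch of the retry-budget arithmetic in the changed lines. STALL_MIN_RETRIES = 6 is an assumed placeholder value for demonstration only; the real constant is defined in pghoard/restore.py, and stall_retry_budget is a hypothetical helper, not pghoard API.

# Minimal sketch of the stall-retry budget, not the actual pghoard code.
# STALL_MIN_RETRIES = 6 is an assumed placeholder value.
STALL_MIN_RETRIES = 6

def stall_retry_budget(total_size_enc: int, bytes_per_retry: int) -> int:
    # One retry is tolerated per bytes_per_retry of encrypted backup size,
    # but never fewer than STALL_MIN_RETRIES.
    return max(STALL_MIN_RETRIES, int(total_size_enc / bytes_per_retry))

size = 500 * 2 ** 30  # a 500 GiB basebackup
print(stall_retry_budget(size, 64 * 2 ** 30))  # old divisor -> 7 retries
print(stall_retry_budget(size, 10 * 2 ** 30))  # new divisor -> 50 retries

For backups large enough that the size-based term dominates STALL_MIN_RETRIES, the 10 GiB divisor allows 6.4 times as many stalled downloads before the restore fails and has to start from scratch.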
