
Enable multi-threaded gzip #8

@Cristy94


I am trying to migrate my Coolify instance, but gzipping 100 GB takes forever (I've already been waiting 20+ minutes), and I noticed only a single thread is being used.

ChatGPT suggested this solution:

Yes. Use pigz for compression and decompression. Minimal edits:

1) Source: create the archive with pigz if available

# choose compressor
if command -v pigz >/dev/null 2>&1; then
  echo "✅ Using pigz for parallel gzip"
  # SC2086 intentional for $volumePaths
  tar -Pcf - -C / "$backupSourceDir" "$HOME/.ssh/authorized_keys" $volumePaths | pigz > "$backupFileName"
else
  echo "ℹ️ pigz not found, using gzip"
  # SC2086 intentional for $volumePaths
  tar -Pczf "$backupFileName" -C / "$backupSourceDir" "$HOME/.ssh/authorized_keys" $volumePaths
fi
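
Since gzip and pigz both emit standard gzip streams, the fallback is safe in either direction: an archive created with pigz can be unpacked with plain gzip and vice versa. A minimal sketch verifying that round-trip, using the same compressor-selection check as above:

```shell
#!/bin/sh
# Pick the compressor the same way the snippet above does (pigz if present,
# plain gzip otherwise), then round-trip some data through it. Decompressing
# with plain gzip works either way because pigz writes standard gzip streams.
if command -v pigz >/dev/null 2>&1; then
  compress="pigz"
else
  compress="gzip"
fi
printf 'round-trip test' | "$compress" -c | gzip -dc   # prints: round-trip test
```

By default pigz uses one thread per online CPU; `pigz -p N` caps it at N threads if the backup host is shared.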

2) Remote: install pigz and extract via pigz -dc | tar -Px

In your remoteCommands block, add a pigz check/install next to curl, then replace the extraction line.

  echo '🚸 Checking if pigz is installed...';
  if ! command -v pigz >/dev/null 2>&1; then
    echo 'ℹ️ Installing pigz...';
    if [ -f /etc/debian_version ] || { [ -f /etc/os-release ] && grep -iq "raspbian" /etc/os-release; }; then
      apt-get update && apt-get install -y pigz || { echo '❌ pigz install failed'; exit 1; }
    elif [ -f /etc/redhat-release ]; then
      yum install -y pigz || { echo '❌ pigz install failed'; exit 1; }
    elif { [ -f /etc/SuSE-release ] || { [ -f /etc/os-release ] && grep -iq "suse" /etc/os-release; }; }; then
      zypper install -y pigz || { echo '❌ pigz install failed'; exit 1; }
    elif [ -f /etc/arch-release ]; then
      pacman -Sy --noconfirm pigz || { echo '❌ pigz install failed'; exit 1; }
    elif [ -f /etc/alpine-release ]; then
      apk add --no-cache pigz || { echo '❌ pigz install failed'; exit 1; }
    else
      echo '❌ Unsupported OS for pigz install'; exit 1;
    fi
  fi
  echo '✅ pigz ready';

  echo '🚸 Extracting backup file...';
  # pigz -dc reads the gzip stream from stdin (the archive arrives over the
  # existing streaming pipeline) and hands the raw tar stream to tar -Px.
  if ! pigz -dc | tar -Px -C / ; then
    echo '❌ Backup file extraction failed'; exit 1;
  fi
  echo '✅ Backup file extracted';
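
The install block above dispatches on distro marker files in a fixed order. A small sketch of that same detection logic, factored into a function so it can be exercised against a fake filesystem tree (the `root` parameter is hypothetical, added purely so the test doesn't touch the real `/etc`):

```shell
#!/bin/sh
# Same marker-file order as the install block above, reporting which package
# manager would be used. The optional $1 prefix is a stand-in root so the
# logic can be tested against a fabricated tree instead of the real /etc.
detect_pkg_mgr() {
  root="${1:-}"
  if [ -f "$root/etc/debian_version" ]; then
    echo "apt-get"
  elif [ -f "$root/etc/redhat-release" ]; then
    echo "yum"
  elif [ -f "$root/etc/SuSE-release" ]; then
    echo "zypper"
  elif [ -f "$root/etc/arch-release" ]; then
    echo "pacman"
  elif [ -f "$root/etc/alpine-release" ]; then
    echo "apk"
  else
    echo "unsupported"
  fi
}

# Exercise against a fake tree:
fake=$(mktemp -d)
mkdir -p "$fake/etc"
touch "$fake/etc/alpine-release"
detect_pkg_mgr "$fake"   # prints: apk
rm -rf "$fake"
```

Note the real block also greps /etc/os-release for "raspbian" and "suse"; the sketch keeps only the marker-file checks for brevity.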

3) Optional: simpler tar -I pigz variant

If you prefer a single tar command and GNU tar is guaranteed:

# create
tar -I pigz -Pcf "$backupFileName" -C / "$backupSourceDir" "$HOME/.ssh/authorized_keys" $volumePaths
# remote extract (pigz -dc still helps even though gzip decompression itself
# can't be parallelized: pigz offloads reading, writing, and checksumming
# onto separate threads)
pigz -dc | tar -Px -C /
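
A self-contained sketch of the `-I` form, using plain gzip so it runs even where pigz is absent; swap `gzip` for `pigz` on the real host. `-I` is GNU tar's short form of `--use-compress-program`, hence the "GNU tar is guaranteed" caveat above.

```shell
#!/bin/sh
# Create and list an archive via tar's external-compressor option.
# Assumes GNU tar: -I is its shorthand for --use-compress-program.
work=$(mktemp -d)
echo "payload" > "$work/file.txt"
tar -I gzip -Pcf "$work/backup.tar.gz" -C "$work" file.txt
tar -I gzip -tf "$work/backup.tar.gz"   # prints: file.txt
rm -rf "$work"
```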

4) Note on volume paths

Docker volume data lives under .../volumes/<name>/_data. Make sure you append /_data when building $volumePaths, or you'll archive the volume metadata directories rather than their contents.
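
A sketch of building $volumePaths with that /_data suffix. The volumes root here is fabricated with mktemp purely for illustration; on a real host it would be Docker's volumes directory:

```shell
#!/bin/sh
# Build a space-separated $volumePaths list, one <root>/<name>/_data entry
# per volume. volumes_root is a stand-in for Docker's real volumes directory;
# we fabricate one with two fake volumes so the loop has something to walk.
volumes_root=$(mktemp -d)
mkdir -p "$volumes_root/myvol/_data" "$volumes_root/othervol/_data"

volumePaths=""
for dir in "$volumes_root"/*/; do
  name=$(basename "$dir")
  # Append the _data directory, not the volume's metadata directory itself.
  volumePaths="$volumePaths $volumes_root/$name/_data"
done
echo "$volumePaths"
rm -rf "$volumes_root"
```

The list is deliberately left unquoted when passed to tar (the SC2086 comments in the snippets above) so each path becomes a separate argument.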

This change keeps your streaming workflow intact and accelerates both compression and decompression.
