Conversation
Add boto3 and its dependencies (botocore, s3transfer, jmespath) to the alibuild and bitsorg recipes. Add python-dateutil as an explicit dependency for botocore's runtime needs. This enables the --remote-store flag for S3-backed package caching.
GitHub Actions workflow that builds FairShip using aliBuild with S3 remote store caching. Triggers on pushes to main, PRs, weekly schedule, and manual dispatch. PRs use read-only store; main/tags use read-write.
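The trigger block this describes might be sketched as follows (the cron time is an assumption; only the event types are stated above):

```yaml
on:
  push:
    branches: [main]
  pull_request:
  schedule:
    - cron: '0 3 * * 0'   # weekly run; exact time is an assumption
  workflow_dispatch:
```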
Minimal AlmaLinux 9 container with CVMFS client and environment modules. Expects /cvmfs to be bind-mounted at runtime. Entrypoint loads the FairShip environment from CVMFS automatically. Container files are in container/ to avoid alidistlint treating scripts as recipes.
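As a rough sketch of such an entrypoint (the real file is container/entrypoint.sh; the `CVMFS_ROOT` variable and the `FAIRSHIP_ENV_LOADED` marker are added here purely for illustration):

```shell
#!/bin/sh
# Load the FairShip environment from CVMFS when the mount is present.
CVMFS_ROOT="${CVMFS_ROOT:-/cvmfs/ship.cern.ch}"
if [ -d "$CVMFS_ROOT" ]; then
    # alienv printenv emits `export VAR=...` lines for the requested package
    eval "$("$CVMFS_ROOT/bin/alienv" printenv FairShip/latest)"
    FAIRSHIP_ENV_LOADED=1
else
    echo "CVMFS not mounted at $CVMFS_ROOT; continuing without FairShip env" >&2
    FAIRSHIP_ENV_LOADED=0
fi
# A real entrypoint would now `exec "$@"` to hand off to the user's command.
```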
The HTTPS URL format doesn't use the S3 API for uploads. Use the s3:// scheme with S3_ENDPOINT_URL instead.
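A sketch of what the corrected invocation might look like (the bucket name is an assumption; the `::rw` suffix and `S3_ENDPOINT_URL` usage follow the workflow discussed below):

```shell
#!/bin/sh
# The s3:// scheme makes aliBuild use the S3 API for uploads; for a non-AWS
# service the endpoint is supplied via S3_ENDPOINT_URL.
export S3_ENDPOINT_URL="https://s3.cern.ch"
REMOTE_STORE="s3://fairship-build-cache::rw"   # bucket name is illustrative
# Echoed rather than executed, since aliBuild is not assumed to be installed here.
echo "aliBuild build FairShip --remote-store $REMOTE_STORE"
```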
aliBuild uses s3cmd (not boto3) to download from S3 stores. s3cmd needs a .s3cfg file with host config and credentials. Generate it from the GitHub Actions secrets at build time.
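A sketch of generating the `.s3cfg` at build time from environment variables (the placeholder credentials and the `S3CFG` override are illustrative; in CI the real values come from GitHub Actions secrets):

```shell
#!/bin/sh
# Fall back to obviously fake placeholders so the sketch runs outside CI.
: "${AWS_ACCESS_KEY_ID:=AKIAEXAMPLE}"
: "${AWS_SECRET_ACCESS_KEY:=example-secret}"
S3CFG="${S3CFG:-$HOME/.s3cfg}"

umask 077   # new files are created owner read/write only (mode 600)
printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
  "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > "$S3CFG"
```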
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings. Use the following commands to manage reviews:
📝 Walkthrough: Adds a GitHub Actions build workflow for package builds and conditional container pushes, adds container runtime files (Dockerfile, CVMFS config, entrypoint), updates build scripts to include S3/AWS Python deps, and excludes the container/ scripts from recipe linting.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor GitHubEvent as GitHub Event
    participant GHA as GitHub Actions
    participant Runner as Self-Hosted Runner
    participant S3 as S3 Remote Store
    participant Podman as Podman
    participant Registry as Container Registry
    GitHubEvent->>GHA: trigger (push / pull_request / schedule / dispatch)
    activate GHA
    GHA->>Runner: start build job (containerized)
    activate Runner
    Runner->>S3: access remote-store (seed/fetch/upload)
    S3-->>Runner: respond
    Runner->>Runner: run aliBuild build (--remote-store ...)
    Runner-->>GHA: report build result
    deactivate Runner
    alt event is push or tag
        GHA->>Podman: start build-container job
        activate Podman
        Podman->>Podman: build image (container/Dockerfile)
        Podman->>Registry: push image:<tag>
        Registry-->>Podman: ack
        alt tag build
            Podman->>Registry: tag & push :latest
            Registry-->>Podman: ack
        end
        Podman-->>GHA: report container job complete
        deactivate Podman
    end
    deactivate GHA
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (3)
container/entrypoint.sh (1)
8-10: Consider logging when FairShip environment loading fails.

The `|| true` silently swallows any errors from `alienv printenv`. If CVMFS is mounted but `FairShip/latest` doesn't exist or alienv fails, users get no indication that the environment wasn't loaded. This could make debugging difficult.

♻️ Optional: Add minimal feedback on failure
```diff
 # Load FairShip environment from CVMFS if available
 if [ -d /cvmfs/ship.cern.ch ]; then
-  eval "$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>/dev/null)" || true
+  if ! eval "$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>/dev/null)"; then
+    echo "Warning: Failed to load FairShip environment from CVMFS" >&2
+  fi
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@container/entrypoint.sh` around lines 8 - 10, The script currently silences failures from the alienv call with "|| true", so add a check that captures the command's exit status and prints a clear error message when loading FairShip fails: run the existing eval "$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>/dev/null)" but after that test its return code ($?) and, if non‑zero, emit a descriptive message (e.g., via echo to stderr or logger) indicating FairShip environment loading failed and include the command output or error details; reference the existing alienv printenv invocation and remove the unconditional "|| true" to surface failures.

.github/workflows/build.yml (2)
49-52: Mask the S3 credentials step to prevent accidental exposure.

The `printf` command includes credentials directly. If this step fails or is run with debug logging enabled, the credentials could be visible in logs. While GitHub masks known secrets, using `::add-mask::` provides defense in depth.

🔒 Proposed fix to add explicit masking
```diff
       - name: Configure S3 credentials
         run: |
+          echo "::add-mask::$AWS_ACCESS_KEY_ID"
+          echo "::add-mask::$AWS_SECRET_ACCESS_KEY"
           printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
             "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 49 - 52, The Configure S3 credentials step writes secrets directly and may expose them in logs; update the step that currently uses printf to first mask both secrets using GitHub Actions' workflow command (emit "::add-mask::${AWS_ACCESS_KEY_ID}" and "::add-mask::${AWS_SECRET_ACCESS_KEY}") before writing ~/.s3cfg and ensure the write itself does not print secrets to stdout (keep the existing printf/redirect but run it after masking or use a quiet write), referencing the step content that constructs ~/.s3cfg and the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
58-63: Simplify redundant STORE_MODE assignment.

`STORE_MODE` is initialized to `""` on line 58, then conditionally set to `""` again on line 60. This can be simplified.

♻️ Proposed simplification
```diff
-          STORE_MODE=""
-          if [ "${{ github.event_name }}" = "pull_request" ]; then
-            STORE_MODE=""
-          else
+          if [ "${{ github.event_name }}" != "pull_request" ]; then
             STORE_MODE="::rw"
+          else
+            STORE_MODE=""
           fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 58 - 63, Remove the redundant initial assignment of STORE_MODE and set it only once based on the condition: replace the current pattern that sets STORE_MODE="" before the if and again in the pull_request branch with a single conditional assignment that sets STORE_MODE="::rw" for non-pull_request events and otherwise leaves it empty; target the shell variable STORE_MODE in the workflow step containing the if/else block to implement this simplification.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@container/Dockerfile`:
- Around line 1-14: The Dockerfile currently runs as root which is flagged by
Trivy; create a non-root user and switch to it at runtime by adding a user
(e.g., RUN useradd -m -u 1000 appuser or equivalent) and set USER appuser before
ENTRYPOINT/CMD, ensuring files the runtime needs (entrypoint.sh and
/etc/cvmfs/default.d/ship.conf) are readable by that user (adjust ownership or
permissions during build using chown/chmod while still root). Keep the install
and other privileged steps as-is during build, but add the USER change so
/entrypoint.sh is executed as the non-root user.
---
Nitpick comments:
In @.github/workflows/build.yml:
- Around line 49-52: The Configure S3 credentials step writes secrets directly
and may expose them in logs; update the step that currently uses printf to first
mask both secrets using GitHub Actions' workflow command (emit
"::add-mask::${AWS_ACCESS_KEY_ID}" and "::add-mask::${AWS_SECRET_ACCESS_KEY}")
before writing ~/.s3cfg and ensure the write itself does not print secrets to
stdout (keep the existing printf/redirect but run it after masking or use a
quiet write), referencing the step content that constructs ~/.s3cfg and the
environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Around line 58-63: Remove the redundant initial assignment of STORE_MODE and
set it only once based on the condition: replace the current pattern that sets
STORE_MODE="" before the if and again in the pull_request branch with a single
conditional assignment that sets STORE_MODE="::rw" for non-pull_request events
and otherwise leaves it empty; target the shell variable STORE_MODE in the
workflow step containing the if/else block to implement this simplification.
In `@container/entrypoint.sh`:
- Around line 8-10: The script currently silences failures from the alienv call
with "|| true", so add a check that captures the command's exit status and
prints a clear error message when loading FairShip fails: run the existing eval
"$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>/dev/null)" but
after that test its return code ($?) and, if non‑zero, emit a descriptive
message (e.g., via echo to stderr or logger) indicating FairShip environment
loading failed and include the command output or error details; reference the
existing alienv printenv invocation and remove the unconditional "|| true" to
surface failures.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0360fc34-6631-4680-92b4-412db235d96e
📒 Files selected for processing (8)
- .github/workflows/build.yml
- .pre-commit-config.yaml
- README.md
- alibuild.sh
- bitsorg.sh
- container/Dockerfile
- container/cvmfs-ship.conf
- container/entrypoint.sh
- Run container as non-root user (shipuser, uid 1000)
- Mask S3 secrets in workflow logs before writing .s3cfg
- Simplify STORE_MODE conditional assignment
- Surface alienv failures in entrypoint instead of silencing them
The runner image doesn't include aliBuild, so install uv and use uvx to run it. Also add -c . to point aliBuild at the local recipes.
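The step described above might look like this in the workflow (the step name and the build command's package argument are assumptions):

```yaml
      - name: Install uv and run aliBuild
        run: |
          curl -LsSf https://astral.sh/uv/install.sh | sh
          echo "$HOME/.local/bin" >> "$GITHUB_PATH"
          # uvx fetches alibuild on demand; -c . points aliBuild at the
          # recipes in this repository instead of a separately cloned alidist
          "$HOME/.local/bin/uvx" --from alibuild aliBuild build FairShip -c .
```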
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build.yml:
- Around line 53-54: The workflow writes AWS credentials to ~/.s3cfg using the
printf redirection and leaves file permissions to the runner's umask;
immediately after the printf that creates ~/.s3cfg, explicitly restrict the file
mode (e.g., to owner read/write) so credentials aren't world-readable—update the
step that contains the printf '... > ~/.s3cfg' to add an explicit chmod (or
equivalent) on ~/.s3cfg before any further use.
- Around line 8-11: The workflow's paths filter uses the root-only glob '*.sh'
which misses nested scripts; update the two occurrences of the paths entry that
currently list '*.sh' to use a recursive glob like '**/*.sh' so changes to shell
scripts in subdirectories (e.g., container/entrypoint.sh) will trigger the
workflow; modify the pull_request.paths and any other matching paths entries in
the same file to '**/*.sh'.
- Line 36: The workflow uses custom self-hosted runner labels 'alma9' and
'x86_64' in runs-on, so add an actionlint configuration file named
actionlint.yaml that defines a self-hosted-runner.labels list including alma9
and x86_64 so actionlint recognizes the custom labels; ensure the file contains
a top-level self-hosted-runner key with a labels array listing those two
strings.
- Around line 5-7: The workflow's push trigger is limited to branches
(push.branches: [main]) so tag pushes never run the dependent build job (build)
even though build-container checks startsWith(github.ref, 'refs/tags/'); update
the push trigger to also include tag events (e.g., add push.tags: ['*'] or a tag
pattern like ['v*']) or replace the branches restriction with a more inclusive
push configuration so tag refs trigger the workflow and allow the build job to
run for tags as well.
In `@container/entrypoint.sh`:
- Around line 9-13: The script currently uses command substitution with 2>&1 and
eval under set -e which can cause immediate exit and feed stderr into eval;
change to use a conditional assignment to preserve exit status and keep stderr
separate: run the command as if alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv
printenv FairShip/latest) (no 2>&1) and on success run eval "$alienv_output"; on
failure capture stderr (e.g. err=$(/cvmfs/ship.cern.ch/bin/alienv printenv
FairShip/latest 2>&1 >/dev/null) or redirect stderr to a temp var/file) and use
that err when printing the warning; reference the existing alienv_output
variable and the eval invocation to locate and replace the block.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 56ed8f19-285a-4c76-9c4c-46895e99fa83
📒 Files selected for processing (3)
- .github/workflows/build.yml
- container/Dockerfile
- container/entrypoint.sh
✅ Files skipped from review due to trivial changes (1)
- container/Dockerfile
```yaml
  push:
    branches: [main]
    paths:
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's find and read the build.yml file
fd -t f "build.yml" .github/workflows/
```

Repository: ShipSoft/shipdist
Length of output: 88
🏁 Script executed:
```shell
# Read the build.yml file with line numbers
cat -n .github/workflows/build.yml
```

Repository: ShipSoft/shipdist
Length of output: 3930
Tag pushes won't trigger workflow due to branch restriction on push event.
The push trigger is restricted to branches: [main] (line 6), preventing tag pushes from initiating the workflow. While the build-container job includes tag handling at lines 88-94 with the condition startsWith(github.ref, 'refs/tags/') (line 80), it depends on the build job (line 79), which never executes on tag events.
Suggested fix
```diff
   push:
     branches: [main]
+    tags:
+      - '*'
     paths:
       - '**/*.sh'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
  push:
    branches: [main]
    tags:
      - '*'
    paths:
      - '**/*.sh'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml around lines 5 - 7, The workflow's push trigger
is limited to branches (push.branches: [main]) so tag pushes never run the
dependent build job (build) even though build-container checks
startsWith(github.ref, 'refs/tags/'); update the push trigger to also include
tag events (e.g., add push.tags: ['*'] or a tag pattern like ['v*']) or replace
the branches restriction with a more inclusive push configuration so tag refs
trigger the workflow and allow the build job to run for tags as well.
```yaml
      - '*.sh'
  pull_request:
    paths:
      - '*.sh'
```
🧩 Analysis chain
🌐 Web query:
GitHub Actions paths filter glob pattern matching *.sh nested files
💡 Result:
Use the workflow-level path filter with a recursive globstar: '**/*.sh'. Example (runs on push/pull_request when any .sh file anywhere in the repo changes):

```yaml
on:
  push:
    paths:
      - '**/*.sh'
```

Notes that explain common "doesn't match nested files" issues: 1) Don't use '*.sh' alone if you need nested matching. '*.sh' matches only in the workflow path root (top level), while '**/*.sh' matches recursively. 2) Quote the pattern strings to avoid YAML/parse issues (GitHub docs recommend escaping/quoting special characters in glob patterns). 3) If you need to exclude certain .sh files, combine include patterns and exclude patterns. GitHub requires at least one positive pattern if you use '!' patterns, and you can alternatively use paths-ignore. Example with excludes:

```yaml
on:
  push:
    paths:
      - '**/*.sh'
      - '!**/scripts/**'
```

Authoritative reference: GitHub's Workflow syntax docs for path filters show that paths uses glob patterns with '*' and '**' and that the workflow runs if at least one path matches.
Citations:
- 1: https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions
- 2: https://docs.github.com/en/enterprise-server@3.0/articles/workflow-syntax-for-github-actions
- 3: https://stackoverflow.com/questions/70478320/github-actions-ignore-certain-files-inside-a-directory
🏁 Script executed:
```shell
# Search for all shell scripts in the repository
find . -name "*.sh" -type f 2>/dev/null | head -20
```

Repository: ShipSoft/shipdist
Length of output: 350
paths: '*.sh' matches only root-level files and excludes nested scripts.
At lines 8 and 11, this pattern misses shell scripts in subdirectories (such as container/entrypoint.sh). PRs changing only nested shell scripts will not trigger this workflow.
Use the recursive globstar pattern instead:
Suggested fix
```diff
 push:
   branches: [main]
   paths:
-    - '*.sh'
+    - '**/*.sh'
 pull_request:
   paths:
-    - '*.sh'
+    - '**/*.sh'
```

📝 Committable suggestion
```yaml
      - '**/*.sh'
  pull_request:
    paths:
      - '**/*.sh'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml around lines 8 - 11, The workflow's paths filter
uses the root-only glob '*.sh' which misses nested scripts; update the two
occurrences of the paths entry that currently list '*.sh' to use a recursive
glob like '**/*.sh' so changes to shell scripts in subdirectories (e.g.,
container/entrypoint.sh) will trigger the workflow; modify the
pull_request.paths and any other matching paths entries in the same file to
'**/*.sh'.
```yaml
jobs:
  build:
    name: Build ${{ inputs.package || 'FairShip' }}
    runs-on: [self-hosted, alma9, x86_64]
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify whether actionlint config exists and whether custom labels are declared.
fd -i 'actionlint.yaml|actionlint.yml' .github
rg -n -C2 'self-hosted-runner|labels|alma9|x86_64' .github --iglob '*actionlint*.y*ml'
```

Repository: ShipSoft/shipdist
Length of output: 1784
🏁 Script executed:
```shell
#!/bin/bash
# List .github directory structure
find .github -type f -name "*.yml" -o -name "*.yaml" | head -20
# Search for actionlint config
find .github -type f \( -name ".actionlintrc*" -o -name "actionlint*" \) 2>/dev/null
# Check the workflow file around lines 36 and 74
grep -n "runs-on\|alma9\|x86_64" .github/workflows/build.yml 2>/dev/null | head -20
```

Repository: ShipSoft/shipdist
Length of output: 214
🏁 Script executed:
```shell
#!/bin/bash
# Search for actionlint config at repository root and subdirectories
find . -maxdepth 2 -type f \( -name ".actionlintrc*" -o -name "actionlint.yaml" -o -name "actionlint.yml" \) 2>/dev/null
# Check if actionlint config exists anywhere
ls -la .github/ | grep -i actionlint
# Get more context around line 36 and 81 in build.yml
sed -n '30,40p' .github/workflows/build.yml
sed -n '75,85p' .github/workflows/build.yml
```

Repository: ShipSoft/shipdist
Length of output: 615
🌐 Web query:
actionlint custom self-hosted runner labels configuration alma9 x86_64
💡 Result:
To make actionlint accept custom self-hosted runner labels (including ones like AlmaLinux 9 x86_64), configure actionlint.yaml under .github/ with self-hosted-runner.labels.

1) Create .github/actionlint.yaml (or .github/actionlint.yml). Example:

```yaml
self-hosted-runner:
  labels:
    - alma9
    - x86_64
    # If your actual runner labels include dynamic suffixes, use glob patterns,
    # e.g. alma9-* or *-x86_64 (adjust to your real label strings).
```

2) Where these config fields apply:
- actionlint reads the configuration file from actionlint.yaml / actionlint.yml placed in the .github directory.
- self-hosted-runner.labels is "Labels of self-hosted runner in array of strings". actionlint treats each entry as a pattern (glob syntax supported by Go's path.Match), and matches these patterns against labels used in runs-on.

3) Why this fixes the "label … is unknown" error: actionlint's runner-label rule validates that labels used in runs-on are known; for custom self-hosted labels you must add them to self-hosted-runner.labels in actionlint.yaml.

If you tell me the exact label name(s) you use for your AlmaLinux 9 x86_64 runners (as they appear in the GitHub Runner "Labels" setting and in runs-on), I can suggest the tightest glob patterns to cover them.
Citations:
- 1: https://github.com/rhysd/actionlint/blob/main/docs/config.md
- 2: https://github.com/rhysd/actionlint/blob/main/rule_runner_label.go
- 3: https://github.com/rhysd/actionlint/blob/main/README.md
- 4: Add glob support for self hosted runner known labels configuration rhysd/actionlint#378
Create actionlint configuration for custom self-hosted runner labels.
Lines 36 and 81 use custom self-hosted runner labels alma9 and x86_64. actionlint requires explicit configuration to recognize these labels. Create .github/actionlint.yaml with:
```yaml
self-hosted-runner:
  labels:
    - alma9
    - x86_64
```

Without this configuration, actionlint will fail CI validation.
🧰 Tools
🪛 actionlint (1.7.12)
[error] 36-36: label "alma9" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2025-vs2026", "windows-2022", "windows-11-arm", "ubuntu-slim", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "macos-latest", "macos-latest-xlarge", "macos-latest-large", "macos-26-intel", "macos-26-xlarge", "macos-26-large", "macos-26", "macos-15-intel", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xlarge", "macos-14-large", "macos-14", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
[error] 36-36: label "x86_64" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2025-vs2026", "windows-2022", "windows-11-arm", "ubuntu-slim", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "macos-latest", "macos-latest-xlarge", "macos-latest-large", "macos-26-intel", "macos-26-xlarge", "macos-26-large", "macos-26", "macos-15-intel", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xlarge", "macos-14-large", "macos-14", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml at line 36, The workflow uses custom self-hosted
runner labels 'alma9' and 'x86_64' in runs-on, so add an actionlint
configuration file named actionlint.yaml that defines a
self-hosted-runner.labels list including alma9 and x86_64 so actionlint
recognizes the custom labels; ensure the file contains a top-level
self-hosted-runner key with a labels array listing those two strings.
```shell
          printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
            "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
```
Harden .s3cfg permissions after writing secrets.
At Line 54, ~/.s3cfg may be too permissive depending on umask. Restrict it explicitly to avoid credential exposure on shared runners.
Suggested fix
```diff
           printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
             "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
+          chmod 600 ~/.s3cfg
```

📝 Committable suggestion
```shell
          printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
            "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
          chmod 600 ~/.s3cfg
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml around lines 53 - 54, The workflow writes AWS
credentials to ~/.s3cfg using the printf redirection and leaves file permissions
to the runner's umask; immediately after the printf that creates ~/.s3cfg,
explicitly restrict the file mode (e.g., to owner read/write) so credentials
aren't world-readable—update the step that contains the printf '... > ~/.s3cfg'
to add an explicit chmod (or equivalent) on ~/.s3cfg before any further use.
```shell
  alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>&1)
  if ! eval "$alienv_output" 2>/dev/null; then
    echo "Warning: failed to load FairShip environment from CVMFS" >&2
    echo "$alienv_output" >&2
  fi
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Find and read the entrypoint.sh file
find . -name "entrypoint.sh" -type f
```

Repository: ShipSoft/shipdist
Length of output: 86
🏁 Script executed:
```shell
# Check if we can locate the file in container directory
fd "entrypoint.sh" . --type f
```

Repository: ShipSoft/shipdist
Length of output: 86
🏁 Script executed:
```shell
# List directory structure to understand the repo layout
git ls-files | grep -E "(container|entrypoint)" | head -20
```

Repository: ShipSoft/shipdist
Length of output: 131
🏁 Script executed:
```shell
cat -n ./container/entrypoint.sh
```

Repository: ShipSoft/shipdist
Length of output: 610
🏁 Script executed:
```shell
# Verify bash set -e behavior with command substitution in assignment
bash <<'EOF'
set -e
echo "Testing set -e with failed command substitution in assignment..."
var=$(exit 1)
echo "After assignment, var='$var'"
echo "Script continued - set -e did NOT exit"
EOF
echo "Exit code: $?"
```

Repository: ShipSoft/shipdist
Length of output: 138
🏁 Script executed:
```shell
# Test what happens when eval receives mixed stderr
bash <<'EOF'
set -e
echo "Testing eval with mixed stderr..."
# Simulate alienv failing and outputting to stderr + stdout
output=$({ echo "export VAR=value"; echo "error message" >&2; exit 0; } 2>&1)
echo "Output variable: '$output'"
echo "Now evaluating..."
eval "$output" 2>/dev/null || echo "Eval failed"
echo "VAR is: $VAR"
EOF
```

Repository: ShipSoft/shipdist
Length of output: 188
🏁 Script executed:
```shell
# Test the exact pattern from entrypoint.sh more carefully
bash <<'EOF'
set -e
if [ -d /tmp ]; then
  # Simulate alienv failure - outputs to stderr AND exits with non-zero
  alienv_output=$(/bin/sh -c 'echo "error: something failed" >&2; exit 1' 2>&1)
  echo "After command sub, alienv_output='$alienv_output'"
  echo "Command substitution completed - set -e did NOT exit"
  if ! eval "$alienv_output" 2>/dev/null; then
    echo "Warning: failed to load FairShip environment from CVMFS" >&2
    echo "$alienv_output" >&2
  fi
fi
echo "Script completed successfully"
EOF
echo "Final exit code: $?"
```

Repository: ShipSoft/shipdist
🏁 Script executed:
```shell
# Verify: does the suggested fix actually work better?
bash <<'EOF'
set -e
echo "Testing suggested fix pattern..."
if alienv_output=$(/bin/sh -c 'echo "error: something failed" >&2; exit 1' 2>&1); then
  echo "Command succeeded, output: $alienv_output"
  if ! eval "$alienv_output"; then
    echo "Warning: failed to evaluate FairShip environment from CVMFS" >&2
  fi
else
  echo "Warning: failed to load FairShip environment from CVMFS" >&2
  echo "Output was: $alienv_output"
fi
echo "Script completed"
EOF
echo "Exit code: $?"
```

Repository: ShipSoft/shipdist
Length of output: 232
Prevent premature exit before your warning path executes.
At Line 9, when alienv exits with a non-zero status, set -e terminates the script immediately, so the warning block at Lines 11–13 never executes. Additionally, 2>&1 at Line 9 mixes stderr messages into the output, which are then passed to eval at Line 10, potentially causing eval to fail on invalid shell syntax within error messages.
Use if alienv_output=$(...); then to capture the exit status without terminating, and separate stderr handling from the eval input:
Suggested fix
```diff
 if [ -d /cvmfs/ship.cern.ch ]; then
-  alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>&1)
-  if ! eval "$alienv_output" 2>/dev/null; then
-    echo "Warning: failed to load FairShip environment from CVMFS" >&2
-    echo "$alienv_output" >&2
-  fi
+  if alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest); then
+    if ! eval "$alienv_output"; then
+      echo "Warning: failed to evaluate FairShip environment from CVMFS" >&2
+    fi
+  else
+    echo "Warning: failed to load FairShip environment from CVMFS" >&2
+  fi
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@container/entrypoint.sh` around lines 9 - 13, The script currently uses
command substitution with 2>&1 and eval under set -e which can cause immediate
exit and feed stderr into eval; change to use a conditional assignment to
preserve exit status and keep stderr separate: run the command as if
alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest) (no
2>&1) and on success run eval "$alienv_output"; on failure capture stderr (e.g.
err=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>&1 >/dev/null)
or redirect stderr to a temp var/file) and use that err when printing the
warning; reference the existing alienv_output variable and the eval invocation
to locate and replace the block.
Actionable comments posted: 1
🧹 Nitpick comments (2)
.github/workflows/build.yml (2)
38-38: Pin container image to a specific digest or tag.

Using the `:latest` tag can lead to non-reproducible builds if the image is updated. Consider pinning to a specific version or digest for build reproducibility.

Suggested improvement

```diff
     container:
-      image: registry.cern.ch/ship/gha-runner:latest
+      image: registry.cern.ch/ship/gha-runner:v1.0.0  # or use digest: @sha256:...
       volumes:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml at line 38, Replace the use of the floating image tag by pinning the container image referenced as "registry.cern.ch/ship/gha-runner:latest" to a specific, immutable identifier (either a stable version tag or an image digest) so builds are reproducible; update the image field in the workflow to use "registry.cern.ch/ship/gha-runner:<version>" or "registry.cern.ch/ship/gha-runner@sha256:<digest>" accordingly.
49-53: Harden uv installation with checksum verification.

Piping `curl` to `sh` without checksum verification poses a supply-chain risk. Consider verifying the installer's integrity or using an alternative installation method.

Suggested improvement

```diff
       - name: Install uv and aliBuild
         run: |
-          curl -LsSf https://astral.sh/uv/install.sh | sh
+          # Download installer and verify checksum
+          curl -LsSf https://astral.sh/uv/install.sh -o /tmp/uv-install.sh
+          echo "<expected-checksum>  /tmp/uv-install.sh" | sha256sum -c
+          sh /tmp/uv-install.sh
           echo "$HOME/.local/bin" >> "$GITHUB_PATH"
           "$HOME/.local/bin/uvx" --from alibuild aliBuild version
```

Alternatively, if uv provides signed releases or publishes checksums, reference those in the workflow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 49 - 53, The workflow step "Install uv and aliBuild" pipes the installer directly from curl to sh; change it to download a pinned installer (or specific release URL) to a tempfile, fetch the corresponding published checksum or signature, verify it (e.g., sha256sum -c or gpg/openssl signature verification) and only run the installer if verification succeeds, and add safe shell flags (set -euo pipefail) to the step; update the "Install uv and aliBuild" step to perform download → checksum/signature fetch → verification → execute installer sequence and fail the job if verification fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 36817b1b-c869-4b76-aa29-06964fb415b5
📒 Files selected for processing (1)
.github/workflows/build.yml
```yaml
concurrency:
  group: build-${{ github.ref }}
  cancel-in-progress: true
```
Concurrency group may cancel unrelated workflow_dispatch runs.
Multiple workflow_dispatch invocations from the same branch with different package inputs will share the same concurrency group (e.g., build-refs/heads/main) and cancel each other. Consider including inputs to distinguish them:
Suggested improvement
```diff
 concurrency:
-  group: build-${{ github.ref }}
+  group: build-${{ github.ref }}-${{ inputs.package || 'FairShip' }}-${{ inputs.defaults || 'release' }}
   cancel-in-progress: true
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
concurrency:
  group: build-${{ github.ref }}-${{ inputs.package || 'FairShip' }}-${{ inputs.defaults || 'release' }}
  cancel-in-progress: true
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml around lines 24 - 26, The concurrency group
"build-${{ github.ref }}" is too broad and causes different workflow_dispatch
runs with different inputs to cancel each other; update the concurrency.group
expression to include a distinguishing input (e.g., include
github.event.inputs.package or another input like github.event.client_id/sha) so
runs with different package inputs get unique groups (reference the
concurrency.group setting in your workflow YAML and adjust the expression to
incorporate github.event.inputs.package or github.sha).
Actionable comments posted: 1
♻️ Duplicate comments (4)
.github/workflows/build.yml (4)
55-61: ⚠️ Potential issue | 🟠 Major
Harden `.s3cfg` permissions immediately after writing credentials.
Please lock down the credentials file mode to owner-only access.
Suggested fix
```diff
 printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
   "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
+chmod 600 ~/.s3cfg
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 55 - 61, The credentials file ~/.s3cfg created in the "Configure S3 credentials" step is world-readable; immediately restrict its permissions after writing by invoking a chmod to owner-only (e.g., mode 600) on ~/.s3cfg so only the workflow user can read/write it; add this chmod call right after the printf that writes ~/.s3cfg in that job step.
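Beyond the suggested `chmod`, setting `umask 077` before the write closes even the brief window where the file exists world-readable. A sketch under assumptions: the demo path under `/tmp` and the `demo-*` fallback values are invented (the workflow writes `~/.s3cfg` from secrets), and `stat -c` assumes GNU coreutils:

```shell
# Write a credentials file that is never world-readable, even briefly.
cfg="${TMPDIR:-/tmp}/s3cfg-demo"        # stand-in path; the workflow uses ~/.s3cfg
old_umask=$(umask)
umask 077                               # files created from now on get at most mode 600
printf 'host_base = s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
  "${AWS_ACCESS_KEY_ID:-demo-key}" "${AWS_SECRET_ACCESS_KEY:-demo-secret}" > "$cfg"
chmod 600 "$cfg"                        # belt and braces; also fixes a pre-existing file
umask "$old_umask"
stat -c '%a' "$cfg"                     # prints the octal mode
```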
36-36: ⚠️ Potential issue | 🟠 Major
Add actionlint config for custom self-hosted labels.
`alma9` and `x86_64` are valid runtime labels for your runners, but actionlint will fail until they're declared in `.github/actionlint.yaml`:

```yaml
self-hosted-runner:
  labels:
    - alma9
    - x86_64
```

Also applies to: 82-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml at line 36, actionlint will fail because custom self-hosted labels used in the workflow's runs-on field (runs-on: [self-hosted, alma9, x86_64]) are not declared; create or update the actionlint config to add a self-hosted-runner entry and declare the labels alma9 and x86_64 under its labels key (ensure the same declaration covers any other workflow occurrences such as the other runs-on usage referenced).
24-26: ⚠️ Potential issue | 🟡 Minor
Concurrency key is too broad for manual runs with different inputs.
Different `workflow_dispatch` builds on the same ref can cancel each other even when `package`/`defaults` differ.
Suggested fix

```diff
 concurrency:
-  group: build-${{ github.ref }}
+  group: build-${{ github.ref }}-${{ github.event.inputs.package || 'FairShip' }}-${{ github.event.inputs.defaults || 'release' }}
   cancel-in-progress: true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 24 - 26, The concurrency group key build-${{ github.ref }} is too broad and causes manual workflow_dispatch runs with different inputs to cancel each other; update the concurrency.group value to include a unique identifier from the run or inputs (for example append github.run_id or a serialized input like github.event.inputs['package/defaults']) so each manual invocation with different inputs gets its own concurrency group while keeping cancel-in-progress: true; modify the concurrency block that contains group and cancel-in-progress accordingly.
5-11: ⚠️ Potential issue | 🟠 Major
Expand push/PR filters so tag and nested-script changes actually trigger builds.
Current filters block tag-triggered runs and miss scripts outside the repo root. That prevents expected execution paths for container/tag publishing.
Suggested fix

```diff
 push:
   branches: [main]
+  tags:
+    - '*'
   paths:
-    - '*.sh'
+    - '**/*.sh'
 pull_request:
   paths:
-    - '*.sh'
+    - '**/*.sh'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 5 - 11, Update the workflow triggers so tag pushes and scripts in subdirectories trigger runs: change the paths globs under push and pull_request from '*.sh' to '**/*.sh' to include nested scripts, and add a tags entry under push (e.g., tags: ['*'] or tags: ['v*'] depending on your tagging scheme) so tag pushes are not blocked; modify the push/pull_request blocks (symbols: push, pull_request, paths, tags) accordingly.
```yaml
- name: Install uv and aliBuild
  run: |
    curl -LsSf https://astral.sh/uv/install.sh | sh
    echo "$HOME/.local/bin" >> "$GITHUB_PATH"
    "$HOME/.local/bin/uvx" --from alibuild aliBuild version
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n .github/workflows/build.yml | sed -n '45,60p'
```

Repository: ShipSoft/shipdist
Length of output: 835
🏁 Script executed:

```shell
# Check for lock files, requirements, and version specifications
find . -type f \( -name "*.lock" -o -name "requirements*.txt" -o -name "pyproject.toml" -o -name ".python-version" \) | head -20
# Check for other workflow files to see if they pin versions
ls -la .github/workflows/ 2>/dev/null
# Search for any UV_VERSION or ALIBUILD_VERSION env vars
rg -i "UV_VERSION|ALIBUILD_VERSION" .
```

Repository: ShipSoft/shipdist
Length of output: 436
🏁 Script executed:

```shell
cat -n ./uv.sh
# Also check if build.yml calls this script or specifies versions elsewhere
grep -n "uv\|version\|ALIBUILD" .github/workflows/build.yml
```

Repository: ShipSoft/shipdist
Length of output: 1438
Pin uv and alibuild versions in the workflow to ensure deterministic CI builds.
Lines 51 and 53 install floating latest versions of uv and alibuild, which reduces reproducibility and increases supply-chain drift risk. Although uv.sh pins version 0.7.12, the GitHub Actions workflow bypasses this and downloads unpinned packages. Specify explicit versions:
- Add a `--version` flag to the `uvx --from alibuild aliBuild` invocation (line 53)
- Consider downloading `uv` from a specific GitHub release tag instead of the installer script (line 51), or add checksum verification if using the installer
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml around lines 49 - 53, Pin the installer and
aliBuild invocation in the workflow: replace the unpinned curl of the uv
installer with a fetch of a specific uv release (or add checksum verification
for the installer script) and update the uvx invocation (the command containing
"uvx --from alibuild aliBuild version") to include an explicit --version
<aliBuild-version> flag so aliBuild is installed deterministically; ensure you
pick and document concrete version strings for uv and aliBuild when making these
edits.
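The download → verify → execute sequence the prompt asks for has this shape. A sketch under stated assumptions: the installer is faked with a local file, and the checksum is computed on the spot only for the demo; in CI the expected value would be the project's published sha256 for a pinned release, and the `curl` URL in the comment is illustrative:

```shell
set -eu
tmp=$(mktemp -d)
# Stand-in for the real download; in CI this would be something like:
#   curl -LsSf "https://astral.sh/uv/<version>/install.sh" -o "$tmp/install.sh"
printf '#!/bin/sh\necho installer ran\n' > "$tmp/install.sh"

# In CI, paste the published checksum here instead of computing it locally,
# otherwise the check verifies nothing.
expected=$(sha256sum "$tmp/install.sh" | cut -d' ' -f1)

# Verify before executing; `sha256sum -c` exits non-zero on mismatch,
# which aborts the step under `set -e`. Note the two-space separator.
printf '%s  %s\n' "$expected" "$tmp/install.sh" | sha256sum -c -
sh "$tmp/install.sh"
```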
This reverts commit 8389597.
♻️ Duplicate comments (6)
.github/workflows/build.yml (6)
49-53: ⚠️ Potential issue | 🟠 Major
Pin tool versions for deterministic and safer CI builds.
Line 51 downloads `uv` via a floating installer, and Line 53/70 resolves `alibuild` without an explicit version. This introduces supply-chain drift between runs.
Suggested fix

```diff
 - name: Install uv and aliBuild
   run: |
-    curl -LsSf https://astral.sh/uv/install.sh | sh
+    UV_VERSION="0.7.12"
+    curl -LsSf "https://astral.sh/uv/${UV_VERSION}/install.sh" | sh
     echo "$HOME/.local/bin" >> "$GITHUB_PATH"
-    "$HOME/.local/bin/uvx" --from alibuild aliBuild version
+    ALIBUILD_VERSION="<pin-me>"
+    "$HOME/.local/bin/uvx" --from "alibuild==${ALIBUILD_VERSION}" aliBuild version
 ...
-    uvx --from alibuild aliBuild build "$PACKAGE" \
+    uvx --from "alibuild==${ALIBUILD_VERSION}" aliBuild build "$PACKAGE" \
```

Also applies to: 70-70
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 49 - 53, Pin the external tools to fixed versions to avoid supply-chain drift: replace the floating installer invocation for uv (the curl to https://astral.sh/uv/install.sh) with a versioned installer URL or a specific release tag, and invoke uvx with an explicit aliBuild version by changing the "$HOME/.local/bin/uvx --from alibuild aliBuild version" call to reference a pinned package (e.g. --from alibuild@<VERSION> aliBuild version) so both the uv installer and the aliBuild source are deterministically versioned.
59-60: ⚠️ Potential issue | 🟠 Major
Restrict `.s3cfg` permissions immediately after writing credentials.
At Line 60, credentials are written to `~/.s3cfg` without explicit mode hardening; on shared runners this can expose secrets depending on umask.
Suggested fix

```diff
 printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
   "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
+chmod 600 ~/.s3cfg
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 59 - 60, The script writes AWS creds to ~/.s3cfg using the printf command but doesn't harden file permissions; after the printf that creates ~/.s3cfg add an immediate permission hardening step (e.g., set umask 077 before writing or run chmod 600 ~/.s3cfg right after the printf) so the file is only readable/writable by the user and secrets aren't exposed on shared runners.
8-11: ⚠️ Potential issue | 🟠 Major
Path filter is too narrow for this workflow.
At Line 8 and Line 11, `*.sh` only matches repository-root shell scripts. This misses nested scripts and also ignores key build inputs like `container/**` and workflow edits.
Suggested fix

```diff
 push:
   branches: [main]
   paths:
-    - '*.sh'
+    - '**/*.sh'
+    - 'container/**'
+    - '.github/workflows/build.yml'
 pull_request:
   paths:
-    - '*.sh'
+    - '**/*.sh'
+    - 'container/**'
+    - '.github/workflows/build.yml'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 8 - 11, The workflow's path filter uses the overly narrow glob '*.sh' (appearing under the push and pull_request paths entries) which only matches scripts at the repo root; update the path filters to include nested scripts and other relevant inputs by replacing or augmenting '*.sh' with recursive globs like '**/*.sh' and adding other important paths (e.g., 'container/**', '.github/workflows/**') in the pull_request and push path lists so changes to nested scripts, container files, and workflows trigger the workflow.
5-7: ⚠️ Potential issue | 🟠 Major
Tag-specific logic is unreachable with current `push` trigger.
`build-container` checks tag refs at Line 80, but `on.push` only runs for `main` branch pushes (Line 6), so tag pushes won't start this workflow.
Suggested fix

```diff
 on:
   push:
     branches: [main]
+    tags:
+      - '*'
```

Also applies to: 80-80
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 5 - 7, The workflow's on.push only targets the main branch so tag pushes never trigger the build-container job's tag-ref logic; update the push trigger (the on.push block referenced) to also include tag refs (e.g., add a tags: entry like tags: ['v*'] or tags: ['*'] under push) so tag pushes will start the workflow and allow the build-container tag checks at Line 80 to run.
24-26: ⚠️ Potential issue | 🟡 Minor
Concurrency key can cancel unrelated manual runs.
At Line 25, grouping only by `github.ref` means two `workflow_dispatch` runs on the same ref (different `package`/`defaults`) will cancel each other.
Suggested fix

```diff
 concurrency:
-  group: build-${{ github.ref }}
+  group: build-${{ github.ref }}-${{ inputs.package || 'FairShip' }}-${{ inputs.defaults || 'release' }}
   cancel-in-progress: true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml around lines 24 - 26, The concurrency group currently uses only build-${{ github.ref }} which causes separate manual workflow_dispatch runs on the same ref to cancel each other; update the concurrency group to include a workflow-unique identifier (e.g., github.workflow) and/or any relevant run input so manual runs remain distinct — for example change group to something like build-${{ github.ref }}-${{ github.workflow }} or build-${{ github.ref }}-${{ github.event.inputs.package || github.workflow }} and keep cancel-in-progress: true to preserve behavior.
36-36: ⚠️ Potential issue | 🟠 Major
Custom self-hosted labels need actionlint config to avoid CI lint failures.
`runs-on` uses `alma9` and `x86_64` (Line 36, Line 81). actionlint will flag these as unknown unless declared in `.github/actionlint.yaml`.
Also applies to: 81-81
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build.yml at line 36, The workflow uses custom runner labels in the runs-on setting (runs-on: [self-hosted, alma9, x86_64]) which actionlint will flag as unknown; fix this by adding those custom labels (alma9 and x86_64) to the actionlint configuration so actionlint recognizes them—update/create the actionlint YAML to declare these runner/label names used by runs-on and commit the change.
Find the most recently modified log in the BUILD directory and print it in a collapsible group when the build step fails.
The find -printf approach produced no output on the runner. Use ls -t on the BUILD/*/log glob to find the most recently modified log instead.
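The `ls -t` approach can be sketched on a synthetic BUILD layout (directory names invented). Parsing `ls` output is generally fragile, but it is safe here because aliBuild's build paths contain no whitespace or newlines:

```shell
set -eu
build=$(mktemp -d)                 # stand-in for aliBuild's BUILD directory
mkdir -p "$build/pkgA" "$build/pkgB"
echo "old log" > "$build/pkgA/log"
sleep 1                            # guarantee distinct modification times
echo "new log" > "$build/pkgB/log"

# `ls -t` sorts matches by mtime, newest first; `head -n 1` picks the latest.
latest=$(ls -t "$build"/*/log | head -n 1)
cat "$latest"
```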
Redirect --debug output to a file to avoid filling runner storage. On failure, print the last 3000 lines of aliBuild debug output and the failed package's build log in collapsible groups.
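GitHub Actions folds everything between `::group::` and `::endgroup::` markers into a collapsible section of the job log. A sketch of the capped-tail pattern described above (the helper name and the synthetic log are invented):

```shell
# Print the last N lines of a (possibly huge) file inside a collapsible group,
# so a multi-gigabyte debug log cannot flood the runner's job output.
print_log_group() {
  title=$1; file=$2; max_lines=$3
  echo "::group::$title"
  tail -n "$max_lines" "$file"
  echo "::endgroup::"
}

log=$(mktemp)
seq 1 5000 > "$log"                # stand-in for aliBuild debug output
print_log_group "aliBuild debug (last 3000 lines)" "$log" 3000
```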
The absolute symlink caused 'Too many levels of symbolic links' during the relocate-me.sh step when unpacking from the remote store, because the absolute path embedded in the symlink no longer matched after relocation.
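The failure above manifested as a symlink loop; the sketch below shows the simpler dangling-link case of an absolute target after relocation, but the fix, a target relative to the link's own directory, is the same. All paths are invented:

```shell
set -eu
store=$(mktemp -d)                 # stand-in install prefix
mkdir -p "$store/sw/FairShip/v1"

# Absolute target: the embedded prefix stops matching once the tree moves.
ln -s "$store/sw/FairShip/v1" "$store/sw/FairShip/latest-abs"
# Relative target: resolved against the link's directory, so it survives.
ln -s "v1" "$store/sw/FairShip/latest-rel"

moved=$(mktemp -d)
cp -a "$store/sw" "$moved/"        # relocate the whole tree (symlinks kept as-is)
rm -rf "$store"

[ -e "$moved/sw/FairShip/latest-rel" ] && echo "relative: ok"
[ -e "$moved/sw/FairShip/latest-abs" ] || echo "absolute: broken"
```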
aliBuild --debug enables bash set -x tracing which produces gigabytes of output even when redirected to a file. Rely on the per-package build log instead, which contains the actual compilation output.
List BUILD directory contents to diagnose why logs aren't found. Use *-latest/log glob to follow aliBuild's symlink convention.
Read cached tarballs from /cvmfs/ship-nightlies.cern.ch/main/sw (local, fast, no download needed). New packages built by CI are uploaded to S3 via --write-store. PRs don't write to either store.
Build images (using S3 as store). In PRs we read but don't write; on main we write.
Summary by CodeRabbit
New Features
Documentation
Chores