120 changes: 120 additions & 0 deletions .github/workflows/build.yml
@@ -0,0 +1,120 @@
---
name: Build packages

'on':
push:
branches: [main]
paths:
Comment on lines +5 to +7
⚠️ Potential issue | 🟠 Major

Tag pushes won't trigger the workflow due to the branch restriction on the push event.

The push trigger is restricted to branches: [main] (line 6), preventing tag pushes from initiating the workflow. While the build-container job includes tag handling at lines 88-94 with the condition startsWith(github.ref, 'refs/tags/') (line 80), it depends on the build job (line 79), which never executes on tag events.

Suggested fix
   push:
     branches: [main]
+    tags:
+      - '*'
     paths:
       - '**/*.sh'
- '*.sh'
pull_request:
paths:
- '*.sh'
Comment on lines +8 to +11
⚠️ Potential issue | 🟠 Major

paths: '*.sh' matches only root-level files and excludes nested scripts.

At lines 8 and 11, this pattern misses shell scripts in subdirectories (such as container/entrypoint.sh). PRs changing only nested shell scripts will not trigger this workflow.

Use the recursive globstar pattern instead:

Suggested fix
   push:
     branches: [main]
     paths:
-      - '*.sh'
+      - '**/*.sh'
   pull_request:
     paths:
-      - '*.sh'
+      - '**/*.sh'
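The two patterns can be compared locally with bash's globstar, which follows similar semantics to the Actions path filter (`*` does not cross directory separators); the directory layout below is a throwaway sketch, not this repository:

```shell
# Sketch: '*.sh' vs '**/*.sh' against a nested layout, in a scratch directory.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/container"
touch "$tmp/build.sh" "$tmp/container/entrypoint.sh"
cd "$tmp"
shopt -s globstar nullglob
flat=(*.sh)        # root-level only: misses container/entrypoint.sh
deep=(**/*.sh)     # recursive: matches the nested script as well
echo "flat=${#flat[@]} deep=${#deep[@]}"
```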
schedule:
# Weekly rebuild on Monday at 04:00 UTC
- cron: '0 4 * * 1'
workflow_dispatch:
inputs:
package:
description: Package to build
default: FairShip
defaults:
description: Defaults to use
default: release

concurrency:
group: build-${{ github.ref }}
cancel-in-progress: true
Comment on lines +24 to +26
⚠️ Potential issue | 🟡 Minor

Concurrency group may cancel unrelated workflow_dispatch runs.

Multiple workflow_dispatch invocations from the same branch with different package inputs will share the same concurrency group (e.g., build-refs/heads/main) and cancel each other. Consider including inputs to distinguish them:

Suggested improvement
 concurrency:
-  group: build-${{ github.ref }}
+  group: build-${{ github.ref }}-${{ inputs.package || 'FairShip' }}-${{ inputs.defaults || 'release' }}
   cancel-in-progress: true
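To see why the broader key collides, the group strings for two hypothetical dispatches can be compared directly (the second package name, GenFit, is illustrative):

```shell
# Two workflow_dispatch runs on the same ref with different `package` inputs.
ref="refs/heads/main"
# Current expression: input-independent, so both runs share one group.
old_a="build-${ref}"
old_b="build-${ref}"
# Proposed expression: the inputs are part of the key, so the groups differ.
new_a="build-${ref}-FairShip-release"
new_b="build-${ref}-GenFit-release"
[ "$old_a" = "$old_b" ] && echo "old keys collide"
[ "$new_a" != "$new_b" ] && echo "new keys distinct"
```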

env:
REMOTE_STORE: /cvmfs/ship-nightlies.cern.ch/main/sw
WRITE_STORE: s3://ship-packages
WORK_DIR: /opt/ship
DEFAULTS: release

jobs:
build:
name: Build ${{ inputs.package || 'FairShip' }}
runs-on: [self-hosted, alma9, x86_64]
⚠️ Potential issue | 🟠 Major

Create actionlint configuration for custom self-hosted runner labels.

Lines 36 and 81 use custom self-hosted runner labels alma9 and x86_64. actionlint requires explicit configuration to recognize these labels. Create .github/actionlint.yaml with:

self-hosted-runner:
  labels:
    - alma9
    - x86_64

Without this configuration, actionlint will fail CI validation.

🧰 Tools
🪛 actionlint (1.7.12)

[error] 36-36: label "alma9" is unknown. If it is a custom label for a self-hosted runner, set the list of labels in the actionlint.yaml config file. (runner-label)

[error] 36-36: label "x86_64" is unknown. If it is a custom label for a self-hosted runner, set the list of labels in the actionlint.yaml config file. (runner-label)

container:
image: registry.cern.ch/ship/gha-runner:latest
volumes:
- /cvmfs:/cvmfs:ro
- shipbuild:/opt/ship
env:
AWS_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
S3_ENDPOINT_URL: https://s3.cern.ch
steps:
- uses: actions/checkout@v4

- name: Install uv and aliBuild
run: |
curl -LsSf https://astral.sh/uv/install.sh | sh
echo "$HOME/.local/bin" >> "$GITHUB_PATH"
"$HOME/.local/bin/uvx" --from alibuild aliBuild version
Comment on lines +50 to +54
⚠️ Potential issue | 🔴 Critical

Pin uv and alibuild versions in the workflow to ensure deterministic CI builds.

Lines 51 and 53 install floating latest versions of uv and alibuild, which reduces reproducibility and increases supply-chain drift risk. Although uv.sh pins version 0.7.12, the GitHub Actions workflow bypasses this and downloads unpinned packages. Specify explicit versions:

  • Add a --version flag to the uvx --from alibuild aliBuild invocation (line 53)
  • Consider downloading uv from a specific GitHub release tag instead of the installer script (line 51), or add checksum verification if using the installer
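One hedged sketch of installer verification, using a local file in place of the downloaded script and a computed digest in place of the published one (in CI the expected value would be a pinned literal taken from the release):

```shell
set -e
installer=$(mktemp)
printf '#!/bin/sh\necho uv installed\n' > "$installer"
# In CI this would be a pinned constant, not computed from the download itself.
expected=$(sha256sum "$installer" | awk '{print $1}')
# Verify before executing the script.
actual=$(sha256sum "$installer" | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "checksum ok"
# Any tampering changes the digest and fails the comparison.
echo tampered >> "$installer"
actual=$(sha256sum "$installer" | awk '{print $1}')
[ "$actual" != "$expected" ] && echo "tampering detected"
```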

- name: Configure S3 credentials
run: |
echo "::add-mask::${AWS_ACCESS_KEY_ID}"
echo "::add-mask::${AWS_SECRET_ACCESS_KEY}"
printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
"$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
Comment on lines +60 to +61
⚠️ Potential issue | 🟠 Major

Harden .s3cfg permissions after writing secrets.

At Line 54, ~/.s3cfg may be too permissive depending on umask. Restrict it explicitly to avoid credential exposure on shared runners.

Suggested fix
           printf 'host_base = s3.cern.ch\nhost_bucket = %%(bucket)s.s3.cern.ch\naccess_key = %s\nsecret_key = %s\n' \
             "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3cfg
+          chmod 600 ~/.s3cfg
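The before/after modes can be checked with a scratch file standing in for `~/.s3cfg`:

```shell
set -e
dir=$(mktemp -d)
(
  umask 022                            # a common default on shared runners
  printf 'secret\n' > "$dir/s3cfg"     # stand-in for ~/.s3cfg
)
before=$(stat -c '%a' "$dir/s3cfg")    # 644: group- and world-readable
chmod 600 "$dir/s3cfg"
after=$(stat -c '%a' "$dir/s3cfg")     # 600: owner-only
echo "mode $before -> $after"
```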

- name: Build
run: |
PACKAGE="${{ inputs.package || 'FairShip' }}"
DEFAULTS="${{ inputs.defaults || env.DEFAULTS }}"
WRITE_STORE_FLAG=""
if [ "${{ github.event_name }}" != "pull_request" ]; then
WRITE_STORE_FLAG="--write-store ${WRITE_STORE}"
fi
uvx --from alibuild aliBuild build "$PACKAGE" \
--defaults "$DEFAULTS" \
--remote-store "${REMOTE_STORE}" \
$WRITE_STORE_FLAG \
--work-dir "$WORK_DIR" \
-c . \
-j "$(nproc)"

- name: Print build log on failure
if: failure()
run: |
echo "Searching for build logs in $WORK_DIR/BUILD/"
ls -la "$WORK_DIR/BUILD/" 2>&1 || true
for log in "$WORK_DIR"/BUILD/*-latest/log; do
if [ -f "$log" ]; then
echo "::group::Build log: $log"
cat "$log"
echo "::endgroup::"
fi
done

build-container:
name: Build container image
needs: build
if: github.event_name == 'push' || startsWith(github.ref, 'refs/tags/')
runs-on: [self-hosted, alma9, x86_64]
steps:
- uses: actions/checkout@v4

- name: Determine image tag
id: tag
run: |
if [[ "$GITHUB_REF" == refs/tags/* ]]; then
echo "tag=${GITHUB_REF#refs/tags/}" >> "$GITHUB_OUTPUT"
echo "latest=true" >> "$GITHUB_OUTPUT"
else
echo "tag=main-${GITHUB_SHA::8}" >> "$GITHUB_OUTPUT"
echo "latest=false" >> "$GITHUB_OUTPUT"
fi

- name: Build and push
run: |
IMAGE=registry.cern.ch/ship/ship-sim
TAG="${{ steps.tag.outputs.tag }}"
podman build -t "$IMAGE:$TAG" container/
podman push "$IMAGE:$TAG"
if [ "${{ steps.tag.outputs.latest }}" = "true" ]; then
podman tag "$IMAGE:$TAG" "$IMAGE:latest"
podman push "$IMAGE:latest"
fi
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -18,6 +18,7 @@ repos:
hooks:
- id: alidistlint
verbose: true
exclude: ^container/
ci:
autofix_prs: true
autoupdate_commit_msg: "chore(deps): update pre-commit hooks"
21 changes: 21 additions & 0 deletions README.md
@@ -23,6 +23,27 @@ Defaults in use:

#### `aliBuild init` and local development packages

## CI

### What's in place

- **Recipe linting** (`.github/workflows/recipe-checks.yml`): runs on every push/PR
- **Build workflow** (`.github/workflows/build.yml`): builds FairShip on self-hosted runners with S3 remote store caching
- **Container image** (`container/`): minimal AlmaLinux 9 image with CVMFS client, loads FairShip from `/cvmfs/ship.cern.ch`

### Setup required before first use

1. **S3 credentials**: add `S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY` as GitHub Actions secrets (EC2 credentials for `s3.cern.ch`)
2. **Seed the remote store**: run a full build with `S3_ENDPOINT_URL=https://s3.cern.ch aliBuild build FairShip --remote-store s3://ship-packages::rw` to populate the cache
3. **Container registry**: authenticate the runner to push to `registry.cern.ch/ship/ship-sim`

### Remaining work

4. **CVMFS publishing**: set up `ship-cvmfs-builder-slc9` as a CVMFS Stratum-0 publisher for `ship.cern.ch`, then add a publish job to the workflow (following the LCG bits pattern: fetch tarballs from S3, unpack to CVMFS, `cvmfs_server transaction`/`publish`)
5. **unpacked.cern.ch**: submit `registry.cern.ch/ship/ship-sim` to the DUCC wishlist for CVMFS distribution of the container image
6. **Branch protection**: require the `build` check to pass before merging
7. **Failure notifications**: add alerting for scheduled build failures

## Platform specific information

Information for different platforms available below.
2 changes: 1 addition & 1 deletion acts.sh
@@ -41,7 +41,7 @@ cmake "$SOURCEDIR" -DCMAKE_INSTALL_PREFIX="$INSTALLROOT" \

cmake --build . -- ${JOBS:+-j$JOBS} install

[[ -d "$INSTALLROOT/lib64" ]] && [[ ! -d "$INSTALLROOT/lib" ]] && ln -sf "${INSTALLROOT}/lib64" "$INSTALLROOT/lib"
[[ -d "$INSTALLROOT/lib64" ]] && [[ ! -d "$INSTALLROOT/lib" ]] && ln -sf lib64 "$INSTALLROOT/lib"

# Modulefile
MODULEDIR="$INSTALLROOT/etc/modulefiles"
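The switch from an absolute to a relative symlink target matters when the install tree is relocated, as with tarballs unpacked under a different prefix; a sketch with throwaway paths:

```shell
set -e
root=$(mktemp -d)
mkdir -p "$root/lib64"
ln -sf lib64 "$root/lib"          # relative target, as in the changed line
mv "$root" "${root}.moved"        # relocate the whole tree
target=$(readlink "${root}.moved/lib")
echo "target=$target"
test -d "${root}.moved/lib"       # still resolves after the move
```

An absolute target (`ln -sf "$root/lib64" ...`) would dangle after the same move, since it keeps pointing at the old prefix.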
5 changes: 3 additions & 2 deletions alibuild.sh
@@ -9,7 +9,7 @@ requires:
- python-requests
- python-distro
- python-jinja2
# boto3 is not included; remote store support is unavailable.
- python-dateutil
build_requires:
- uv
- alibuild-recipe-tools
@@ -26,7 +26,8 @@ pyver=$(python3 -c 'import sysconfig; print(sysconfig.get_python_version())')
TARGET="$INSTALLROOT/lib/python$pyver/site-packages"
mkdir -p "$TARGET"

uv pip install --no-deps --no-cache-dir --target="$TARGET" --python="$(command -v python3)" "alibuild==$PKGVERSION"
uv pip install --no-deps --no-cache-dir --target="$TARGET" --python="$(command -v python3)" \
"alibuild==$PKGVERSION" boto3 botocore s3transfer jmespath

ln -snf "python$pyver" "$INSTALLROOT/lib/python"

5 changes: 3 additions & 2 deletions bitsorg.sh
@@ -9,7 +9,7 @@ requires:
- python-requests
- python-distro
- python-jinja2
# boto3 is not included; remote store support is unavailable.
- python-dateutil
build_requires:
- uv
- alibuild-recipe-tools
@@ -24,7 +24,8 @@ pyver=$(python3 -c 'import sysconfig; print(sysconfig.get_python_version())')
TARGET="$INSTALLROOT/lib/python$pyver/site-packages"
mkdir -p "$TARGET"

uv pip install --no-deps --no-cache-dir --target="$TARGET" --python="$(command -v python3)" "bitsorg==$PKGVERSION"
uv pip install --no-deps --no-cache-dir --target="$TARGET" --python="$(command -v python3)" \
"bitsorg==$PKGVERSION" boto3 botocore s3transfer jmespath

ln -snf "python$pyver" "$INSTALLROOT/lib/python"

17 changes: 17 additions & 0 deletions container/Dockerfile
@@ -0,0 +1,17 @@
FROM almalinux:9-minimal

RUN microdnf install -y \
https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm && \
microdnf install -y cvmfs fuse environment-modules && \
microdnf clean all

COPY cvmfs-ship.conf /etc/cvmfs/default.d/ship.conf

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

RUN useradd -m -u 1000 shipuser
USER shipuser

ENTRYPOINT ["/entrypoint.sh"]
CMD ["/bin/bash"]
coderabbitai[bot] marked this conversation as resolved.
2 changes: 2 additions & 0 deletions container/cvmfs-ship.conf
@@ -0,0 +1,2 @@
CVMFS_REPOSITORIES=ship.cern.ch
CVMFS_HTTP_PROXY=DIRECT
16 changes: 16 additions & 0 deletions container/entrypoint.sh
@@ -0,0 +1,16 @@
#!/bin/bash

Check failure on line 1 in container/entrypoint.sh (GitHub Actions / alidistlint): metadata not found or empty (is the '\n---\n' separator present?) [ali:empty]
set -e

Check notice on line 2 in container/entrypoint.sh (GitHub Actions / alidistlint): Missing script shebang. Use exactly "#!/bin/bash -e" to match aliBuild environment. You may see spurious errors until you fix the shebang. [ali:bad-shebang]

Check notice on line 2 in container/entrypoint.sh (GitHub Actions / alidistlint): main recipe should create a Modulefile; use alibuild-generate-module or add a "#%Module1.0" comment to your manually-created Modulefile [ali:missing-modulefile]

# Source environment modules
source /etc/profile.d/modules.sh 2>/dev/null || true

# Load FairShip environment from CVMFS if available
if [ -d /cvmfs/ship.cern.ch ]; then
alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>&1)
if ! eval "$alienv_output" 2>/dev/null; then
echo "Warning: failed to load FairShip environment from CVMFS" >&2
echo "$alienv_output" >&2
fi
Comment on lines +9 to +13
⚠️ Potential issue | 🟠 Major

Prevent premature exit before your warning path executes.

At Line 9, when alienv exits with a non-zero status, set -e terminates the script immediately, so the warning block at Lines 11–13 never executes. Additionally, 2>&1 at Line 9 mixes stderr messages into the output, which are then passed to eval at Line 10, potentially causing eval to fail on invalid shell syntax within error messages.

Use if alienv_output=$(...); then to capture the exit status without terminating, and separate stderr handling from the eval input:

Suggested fix
 if [ -d /cvmfs/ship.cern.ch ]; then
-  alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest 2>&1)
-  if ! eval "$alienv_output" 2>/dev/null; then
-    echo "Warning: failed to load FairShip environment from CVMFS" >&2
-    echo "$alienv_output" >&2
-  fi
+  if alienv_output=$(/cvmfs/ship.cern.ch/bin/alienv printenv FairShip/latest); then
+    if ! eval "$alienv_output"; then
+      echo "Warning: failed to evaluate FairShip environment from CVMFS" >&2
+    fi
+  else
+    echo "Warning: failed to load FairShip environment from CVMFS" >&2
+  fi
 fi
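The difference can be reproduced with a stand-in for `alienv` (the function name is illustrative):

```shell
set -e
# Stand-in for a failing `alienv printenv`: noise on stderr, non-zero exit.
fake_alienv() { echo "error: no such package" >&2; return 1; }

# Original pattern: a plain assignment from a failing command substitution
# trips `set -e`, so a child shell dies before its warning line.
bash -c 'set -e; out=$(exit 1); echo "never printed"' \
  || echo "child aborted before any warning"

# Suggested pattern: the assignment inside `if` consumes the exit status,
# so the warning path runs and the script continues.
if out=$(fake_alienv); then
  status=loaded
else
  status=warned
fi
echo "status=$status; script still running"
```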
fi

exec "$@"