
Podman Fails in OpenShift GitHub ARC Ephemeral Runner Pod Due to User Namespace Mapping (newuidmap error) #4234

@SwatiPandey13

Description

Controller Version

10.0.1

Deployment Method

Helm

Checks

  • This isn't a question or user support case (For Q&A and community support, go to Discussions).
  • I've read the Changelog before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes

To Reproduce

1. Build a custom OpenShift ephemeral runner image for GitHub ARC using the attached Dockerfile.
2. Deploy the runner pod in OpenShift.
3. Attempt to run Podman (rootless) inside the ephemeral runner pod (e.g., `podman info` or any podman command).

Describe the bug

Podman consistently fails to start inside the ephemeral runner pod with a user-namespace mapping error from `newuidmap`. Example error:

ERRO[0000] running `/usr/bin/newuidmap 60 0 1001 1 1 100000 2000000`: newuidmap: Target process 60 is owned by a different user: uid:1001 pw_uid:1001 st_uid:1001, gid:123 pw_gid:12345 st_gid:123 
Error: cannot set up namespace using "/usr/bin/newuidmap": exit status 1
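For reference, the failing command in the error can be decoded mechanically: `newuidmap` takes a target PID followed by repeated (inside-uid, outside-uid, count) triples. A small sketch that parses the exact invocation from the log:

```shell
# Decode the newuidmap invocation from the error message above.
# newuidmap's argv is: <target-pid>, then triples of
# <uid-inside-userns> <uid-outside-userns> <count>.
set -- 60 0 1001 1 1 100000 2000000
pid=$1; shift
echo "target pid: $pid"
while [ $# -ge 3 ]; do
  echo "map inside uid $1 -> outside uid $2 (count $3)"
  shift 3
done
# Output:
#   target pid: 60
#   map inside uid 0 -> outside uid 1001 (count 1)
#   map inside uid 1 -> outside uid 100000 (count 2000000)
```

Note that the second triple requests a count of 2000000, which does not match the 65536-wide ranges written to /etc/subuid in the Dockerfile below; this suggests Podman is picking up a different subordinate-ID configuration at runtime.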

I have attempted dozens of fixes, including setting up /etc/subuid and /etc/subgid, adjusting file ownership and permissions, granting capabilities to newuidmap/newgidmap, and changing Podman configuration options. The error persists across all of them. Note the group mismatch visible in the error output: the process runs with gid 123, while /etc/passwd in the image records pw_gid 12345 for uid 1001.
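The ownership check that newuidmap performs can be illustrated with the numbers from the error itself (a sketch using the gid values reported in the log; newuidmap refuses to act on a target process whose uid/gid do not match the passwd entry for its owner):

```shell
# Reproduce newuidmap's ownership check with the values from the error:
# the pod's process runs with gid 123 (assigned at pod admission), while
# /etc/passwd in the image records gid 12345 (the docker group) for uid 1001.
proc_uid=1001; pw_uid=1001   # these agree
proc_gid=123;  pw_gid=12345  # these do not
if [ "$proc_uid" -eq "$pw_uid" ] && [ "$proc_gid" -eq "$pw_gid" ]; then
  echo "ownership check passes"
else
  echo "ownership check fails: gid $proc_gid != pw_gid $pw_gid"
fi
# Output: ownership check fails: gid 123 != pw_gid 12345
```

This failure mode is consistent with OpenShift assigning the pod's runtime GID at admission rather than using the image's docker group (12345), so the passwd entry baked into the image no longer matches the running process.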

Dockerfile.txt

Describe the expected behavior

Podman should run successfully inside the ephemeral runner pod, allowing rootless containers to be executed by the runner user as part of CI/CD workflows.

  • Is Podman rootless supported inside OpenShift ephemeral runner pods?
  • Is there a recommended configuration or workaround for this environment?
  • Are there known restrictions causing this failure?
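To help confirm the runtime identity mismatch, it may be worth capturing the following diagnostics from inside a running pod (standard commands on UBI9; shown as a sketch):

```shell
# Identity and mapping diagnostics to capture inside the runner pod:
id                                        # uid/gid the pod actually received
cat /proc/self/uid_map 2>/dev/null        # current user-namespace uid mapping
cat /proc/self/gid_map 2>/dev/null        # current user-namespace gid mapping
cat /etc/subuid /etc/subgid 2>/dev/null   # subordinate ranges Podman will use
```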

Additional Context

- **Base Image:** UBI9, OpenShift, GitHub ARC ephemeral runner
- **Runner User:** UID 1001, GID 12345
- **Packages installed:** podman, buildah, crun, fuse-overlayfs, slirp4netns, container-selinux, containernetworking-plugins, iptables, etc.
- **Configuration:** `/etc/subuid`, `/etc/subgid`, setcap for `newuidmap`/`newgidmap`, cgroups v1 compatibility, vfs storage driver, correct ownership and permissions.
- **Attempts:** Dozens of fixes tried, drawn from the relevant documentation, community posts, and GitHub issues.
- **Dockerfile:** See below for full Dockerfile used for reproduction.

FROM artifactory.voya.net/docker-virtual/openshift/ubi9/openjdk-17:latest

# Remove problematic mail directory but recreate it with proper permissions
RUN rm -rf /var/spool/mail && \
    mkdir -p /var/spool/mail && \
    chmod 777 /var/spool/mail

ARG TARGETPLATFORM
ARG RUNNER_VERSION
ARG RUNNER_CONTAINER_HOOKS_VERSION
ARG CHANNEL=stable
ARG DUMB_INIT_VERSION=1.2.5
ARG RUNNER_USER_UID=1001
ARG JFROG_CLI_VERSION=2.46.0
ARG SONAR_SCANNER_VERSION=5.0.1.3006

# 1. Install corporate certificates
COPY voya_ecc_root.crx voyarootca.cer voya_rsa_root.crx zscalerrootca.cer /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust extract

# ====== BASE PACKAGE INSTALLATION ======
RUN microdnf update -y && \
    microdnf install -y dnf && \
    dnf update -y && \
    dnf install -y --allowerasing \
        curl \
        ca-certificates \
        git \
        jq \
        unzip \
        zip \
        libicu \
        krb5-libs \
        zlib \
        openssl \
        shadow-utils \
        findutils \
        libyaml \
        procps-ng \
        iputils \
        hostname \
        nc \
        tar \
        gzip \
        which \
        yum \
        wget \
        less \
        vim-minimal \
        lsof \
&& dnf clean all \
&& rm -rf /var/cache/dnf /var/tmp/*
# =======================================

# Install Git LFS
RUN curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | bash && \
    dnf install -y git-lfs && \
    git lfs install && \
    dnf clean all && \
    rm -rf /var/cache/dnf /var/tmp/*

# ====== INSTALL PODMAN AND CONTAINER TOOLS ======
# Install Podman and related tools directly from UBI9 repositories
RUN dnf install -y \
        podman \
        buildah \
        crun \
        fuse-overlayfs \
        slirp4netns \
        container-selinux \
        containernetworking-plugins \
        iptables \
        libseccomp \
        libnet \
        conmon \
        runc \
        skopeo && \
    dnf clean all && \
    rm -rf /var/cache/dnf /var/tmp/*

# ====== CONFIGURE PODMAN FOR CGROUPS V1 COMPATIBILITY ======
RUN mkdir -p /etc/containers && \
    echo -e '[containers]\nuserns="host"\nnetns="host"\n[engine]\ncgroup_manager="cgroupfs"\n' > /etc/containers/containers.conf && \
    echo -e '[storage]\ndriver = "vfs"\n' > /etc/containers/storage.conf && \
    mkdir -p /tmp/user/containers && \
    chmod -R 777 /tmp/user

ENV PODMAN_IGNORE_CGROUPSV1_WARNING=1 \
    PODMAN_DISABLE_HOST_LOOKUP=1 \
    STORAGE_DRIVER=vfs \
    CONTAINERS_NO_UMASK=1 \
    CONTAINERS_CONF=/etc/containers/containers.conf \
    CONTAINERS_STORAGE_CONF=/etc/containers/storage.conf \
    XDG_RUNTIME_DIR=/tmp/user \
    TMPDIR=/tmp/user \
    PODMAN_SKIP_LOOP_SETUP=1
# ===========================================================

# Create runner user with fixed UID
RUN groupadd -r -g 12345 docker && \
    useradd -r -g docker -u $RUNNER_USER_UID --create-home runner && \
    usermod -aG wheel runner && \
    chmod g+w /etc/passwd && \
    echo "runner:100000:65536" >> /etc/subuid && \
    echo "runner:100000:65536" >> /etc/subgid && \
    echo "default:100000:65536" >> /etc/subuid && \
    echo "default:100000:65536" >> /etc/subgid && \
    chmod 777 /etc/subuid /etc/subgid

# Environment variables
ENV HOME=/home/runner
ENV RUNNER_HOME=/home/runner
ENV _CONTAINERS_USERNS_CONFIGURED=1
ENV CONTAINERS_IGNORE_CHOWN_ERRORS=1
ENV STORAGE_DRIVER=vfs
ENV PODMAN_IGNORE_CGROUPSV1_WARNING=1

# Install JFrog CLI
RUN ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) && \
    if [ "$ARCH" = "amd64" ] || [ "$ARCH" = "x86_64" ]; then \
        JF_ARCH=amd64; \
    elif [ "$ARCH" = "arm64" ] || [ "$ARCH" = "aarch64" ]; then \
        JF_ARCH=arm64; \
    elif [ "$ARCH" = "386" ] || [ "$ARCH" = "i386" ]; then \
        JF_ARCH=386; \
    else \
        echo "Unsupported architecture: $ARCH"; exit 1; \
    fi && \
    curl -fL https://releases.jfrog.io/artifactory/jfrog-cli/v2-jf/${JFROG_CLI_VERSION}/jfrog-cli-linux-${JF_ARCH}/jf -o /usr/local/bin/jf && \
    chmod +x /usr/local/bin/jf

# Install Node.js 20 and Snyk
RUN curl -sL https://rpm.nodesource.com/setup_20.x | bash - && \
    dnf install -y nodejs && \
    npm install -g snyk && \
    dnf clean all && \
    rm -rf /var/cache/dnf /var/tmp/*

# Install dumb-init
RUN ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) && \
    if [ "$ARCH" = "arm64" ]; then ARCH=aarch64; fi && \
    if [ "$ARCH" = "amd64" ] || [ "$ARCH" = "i386" ]; then ARCH=x86_64; fi && \
    curl -fLo /usr/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v${DUMB_INIT_VERSION}/dumb-init_${DUMB_INIT_VERSION}_${ARCH} && \
    chmod +x /usr/bin/dumb-init

# Install GitHub runner
ENV RUNNER_ASSETS_DIR=/runnertmp
RUN ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) && \
    if [ "$ARCH" = "amd64" ] || [ "$ARCH" = "x86_64" ] || [ "$ARCH" = "i386" ]; then ARCH=x64; fi && \
    mkdir -p "$RUNNER_ASSETS_DIR" && \
    cd "$RUNNER_ASSETS_DIR" && \
    curl -fLo runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${ARCH}-${RUNNER_VERSION}.tar.gz && \
    tar xzf ./runner.tar.gz && \
    rm runner.tar.gz && \
    ./bin/installdependencies.sh

# Copy runner to home directory
RUN cp -r $RUNNER_ASSETS_DIR/* $HOME/ && \
    chown -R runner:docker $HOME && \
    chmod -R 770 $HOME

# Set up tool cache
ENV RUNNER_TOOL_CACHE=/opt/hostedtoolcache
RUN mkdir -p /opt/hostedtoolcache && \
    chmod 777 /opt/hostedtoolcache

# Install container hooks
RUN cd "$RUNNER_ASSETS_DIR" && \
    curl -fLo runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER_CONTAINER_HOOKS_VERSION}/actions-runner-hooks-k8s-${RUNNER_CONTAINER_HOOKS_VERSION}.zip && \
    unzip ./runner-container-hooks.zip -d ./k8s && \
    rm -f runner-container-hooks.zip && \
    mkdir -p /etc/arc/hooks && \
    cp -r ./k8s/* /etc/arc/hooks/ && \
    chmod -R +x /etc/arc/hooks

# ====== JAVA CONFIGURATION ======
RUN real_java_path=$(readlink -f $(which java)) && \
    JAVA_HOME=$(dirname $(dirname "$real_java_path")) && \
    echo "JAVA_HOME=$JAVA_HOME" >> /etc/environment

ENV SONAR_SCANNER_JAVA_HOME=/usr/lib/jvm/jre-17
# ================================

# Install SonarScanner CLI
RUN cd /tmp && \
    curl -fLO https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-${SONAR_SCANNER_VERSION}.zip && \
    unzip sonar-scanner-cli-${SONAR_SCANNER_VERSION}.zip -d /opt/ && \
    rm -f sonar-scanner-cli-${SONAR_SCANNER_VERSION}.zip && \
    mv /opt/sonar-scanner-${SONAR_SCANNER_VERSION} /opt/sonar-scanner && \
    sed -i "s|^#sonar.scanner.jvmOptions=|sonar.scanner.jvmOptions=-Djavax.net.ssl.trustStore=/etc/pki/java/cacerts|" /opt/sonar-scanner/conf/sonar-scanner.properties && \
    ln -s /opt/sonar-scanner/bin/sonar-scanner /usr/bin/sonar-scanner

# Verify installations
RUN jf --version && \
    sonar-scanner --version

# Copy scripts
COPY entrypoint.sh startup.sh logger.sh graceful-stop.sh update-status /usr/bin/
RUN chmod 755 /usr/bin/entrypoint.sh /usr/bin/startup.sh /usr/bin/logger.sh /usr/bin/graceful-stop.sh /usr/bin/update-status

# Create podman configuration for runner user
RUN mkdir -p /home/runner/.config/containers && \
    echo -e '[containers]\nuserns="host"\nnetns="host"\n[engine]\ncgroup_manager="cgroupfs"\n' > /home/runner/.config/containers/containers.conf

# Environment setup
ENV PATH="${PATH}:${HOME}/.local/bin"
ENV ImageOS=rhel9
RUN echo "PATH=${PATH}" >> /etc/environment && \
    echo "ImageOS=${ImageOS}" >> /etc/environment && \
    echo "CONTAINERS_IGNORE_CHOWN_ERRORS=1" >> /etc/environment && \
    echo "XDG_RUNTIME_DIR=/tmp/user" >> /etc/environment && \
    echo "CONTAINERS_CONF=/etc/containers/containers.conf" >> /etc/environment && \
    echo "CONTAINERS_STORAGE_CONF=/etc/containers/storage.conf" >> /etc/environment && \
    echo "PODMAN_IGNORE_CGROUPSV1_WARNING=1" >> /etc/environment

# Final permissions - removed reference to /var/spool/mail
RUN chown -R runner:docker /home/runner /opt /etc/arc && \
    chmod -R 770 /home/runner /opt /etc/arc

# Set up UID/GID mappings for runner user
#RUN echo "runner:100000:65536" | newuidmap && \
#    echo "runner:100000:65536" | newgidmap

#=================================

RUN chmod -R 777 /opt; chown -R runner:docker /opt
RUN chown -R runner:docker /home/default/.config; \
    chown -R runner:docker /home/default/.local; \
    chown -R runner:docker /tmp/storage-run-1000; \
    chown -R runner:docker /home/default
COPY newuidmap /usr/bin/newuidmap
COPY newgidmap /usr/bin/newgidmap
COPY login.defs /etc/login.defs
COPY .config /home/runner/.config/
RUN chmod -R 777 /usr/; \
    chmod 0644 /etc/login.defs; \
    chown -R runner:docker /home/runner/.config/

# Note: chown clears the setuid bit and file capabilities, so set ownership
# first, then apply chmod u+s and setcap
RUN chown runner:docker /usr/bin/newuidmap /usr/bin/newgidmap && \
    chmod u+s /usr/bin/newuidmap && \
    setcap -q cap_setuid+ep /usr/bin/newuidmap && \
    setcap -q cap_setgid+ep /usr/bin/newgidmap

#=================================
# Ensure mail directory exists with proper permissions
RUN mkdir -p /var/spool/mail && chmod 777 /var/spool/mail && chown -R runner:docker /var/spool/mail

WORKDIR $HOME
USER $RUNNER_USER_UID

ENV _CONTAINERS_USERNS_CONFIGURED=""
ENV CONTAINERS_IGNORE_CHOWN_ERRORS=1
ENV BUILDAH_ISOLATION=chroot

ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/usr/bin/entrypoint.sh"]
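The /etc/subuid and /etc/subgid entries written by the Dockerfile above can be sanity-checked for format and range size; a minimal sketch (the function name is illustrative, and 65536 is the conventional minimum range size for rootless Podman):

```shell
# Validate a subordinate-ID entry of the form <name>:<start>:<count>,
# as written to /etc/subuid and /etc/subgid in the Dockerfile above.
check_subid_line() {
  name=${1%%:*}
  rest=${1#*:}
  start=${rest%%:*}
  count=${rest##*:}
  if [ "$count" -ge 65536 ]; then
    echo "$name: ok, maps $start..$((start + count - 1))"
  else
    echo "$name: count $count is below the conventional 65536 minimum"
  fi
}

check_subid_line "runner:100000:65536"   # -> runner: ok, maps 100000..165535
check_subid_line "default:100000:65536"  # -> default: ok, maps 100000..165535
```

Two observations worth flagging: the Dockerfile assigns the identical 100000-165535 range to both runner and default (overlapping ranges between users are generally discouraged), and the failing newuidmap call in the logs requests a count of 2000000, which does not match either 65536-wide entry.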

Controller Logs

ERRO[0000] running `/usr/bin/newuidmap 60 0 1001 1 1 100000 2000000`: newuidmap: Target process 60 is owned by a different user: uid:1001 pw_uid:1001 st_uid:1001, gid:123 pw_gid:12345 st_gid:123 
Error: cannot set up namespace using "/usr/bin/newuidmap": exit status 1

Runner Pod Logs

ERRO[0000] running `/usr/bin/newuidmap 60 0 1001 1 1 100000 2000000`: newuidmap: Target process 60 is owned by a different user: uid:1001 pw_uid:1001 st_uid:1001, gid:123 pw_gid:12345 st_gid:123 
Error: cannot set up namespace using "/usr/bin/newuidmap": exit status 1

Metadata

Labels: bug (Something isn't working) · gha-runner-scale-set (Related to the gha-runner-scale-set mode) · needs triage (Requires review from the maintainers)
