Podman Prevents Updating a Single Service Due to Strict Dependency Enforcement #25812


Closed
ArthoPacini opened this issue Apr 6, 2025 · 1 comment
Labels: kind/bug


@ArthoPacini

Issue Description

Podman enforces strict dependencies between containers, preventing a single service from being rebuilt and recreated while other services depend on it, even if the service is stopped. This makes it impossible to recreate a container with an updated image without tearing down the entire stack, leading to unnecessary downtime. In contrast, Docker Compose allows individual services to be updated without touching their dependents, offering greater flexibility. The limitation is most visible when using Podman Compose, but it appears to stem from Podman's underlying dependency handling.

Steps to reproduce the issue

  1. Set up the directory structure:
.
|-docker-compose.yaml
|-modules
| |-app
| | |-index.html
| | |-Dockerfile
| |-proxy
| | |-Dockerfile
| | |-index.html
  2. Create identical Dockerfiles for both services:
FROM docker.io/nginx:alpine
COPY ./index.html /usr/share/nginx/html/index.html
  3. Create modules/app/index.html:
App Version 1
  4. Create modules/proxy/index.html:
Proxy Version 1
  5. Create docker-compose.yaml:
version: '3.8'
services:
  app:
    container_name: "app"
    build:
      context: ./modules/app
      dockerfile: Dockerfile
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:80"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    networks:
      - app-net
  proxy:
    container_name: "proxy"
    build:
      context: ./modules/proxy
      dockerfile: Dockerfile
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:80"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    networks:
      - app-net
    depends_on:
      app:
        condition: service_healthy
networks:
  app-net:
    driver: bridge
  6. Build and start the stack using Podman Compose:
podman-compose build
podman-compose up -d
  7. Verify the app service content:
podman exec -it app sh -c "curl http://localhost"

Output: App Version 1

  8. Modify modules/app/index.html:
App Version 2
  9. Attempt to rebuild and update the app service:
podman-compose build app && podman-compose down app && podman-compose up app -d
  10. Check the app service content again (a verification sketch follows these steps):
podman exec -it app sh -c "curl http://localhost"

Output: Still App Version 1
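
To confirm that the container was merely restarted rather than recreated (and that the rebuild itself succeeded), compare the freshly built image with the image the running container was created from. A minimal verification sketch; the image reference filter is illustrative and depends on how your podman-compose version tags builds:

# List the freshly built image (tag naming varies by podman-compose version)
podman images --filter reference='*app*'

# Image the running container was created from, and its creation timestamp --
# the timestamp stays unchanged if the container was only restarted
podman inspect app --format '{{.ImageName}}'
podman inspect app --format '{{.Created}}'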

Describe the results you received

When attempting to update the app service:

  • The new image builds successfully.
  • Stopping the app container works, but Podman refuses to remove it because the proxy container depends on it, producing an error like:
Error: container <app_container_id> has dependent containers which must be removed before it: <proxy_container_id>
  • Running podman-compose up -d app restarts the existing container instead of creating a new one with the updated image, resulting in the content remaining "App Version 1" instead of "App Version 2".
  • The strict dependency enforcement effectively requires tearing down the entire stack to update a single service (the same restriction reproduces with podman directly, as sketched below).
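
For completeness, here is the behaviour reproduced with plain podman commands, using the container names from the compose file above:

podman stop app
podman rm app    # fails with the "has dependent containers" error quoted above

# podman rm --depend removes the dependent containers as well (here: proxy),
# which is exactly the full-stack teardown this report is about
podman rm --force --depend app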

Describe the results you expected

I expected Podman to allow updating the app service independently, similar to Docker Compose’s behavior with --force-recreate --no-deps. Specifically:

  • The app container should be recreated with the updated image ("App Version 2").
  • The dependent proxy service should either continue running or gracefully handle the brief unavailability of app without requiring a full stack shutdown.
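
For comparison, the Docker Compose flow this expectation is based on looks roughly like this (same compose file; --no-deps and --force-recreate are documented Docker Compose flags):

docker compose build app
# Recreate only the app container: --no-deps leaves proxy untouched, and
# --force-recreate swaps in the newly built image even if the config is unchanged
docker compose up -d --no-deps --force-recreate app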

podman info output

host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 85.03
    systemPercent: 2.04
    userPercent: 12.93
  cpus: 12
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  hostname: blocky
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1023
      size: 1
    - container_id: 1
      host_id: 624288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1023
      size: 1
    - container_id: 1
      host_id: 624288
      size: 65536
  kernel: 6.1.0-32-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 956821504
  memTotal: 33569669120
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8.1-1+deb12u1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/user/1023/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1023/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 119361761280
  swapTotal: 120015810560
  uptime: 139h 36m 6.00s (Approximately 5.79 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  localhost:7008:
    Blocked: false
    Insecure: true
    Location: localhost:7008
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:7008
    PullFromMirror: ""
store:
  configFile: /home/x/.config/containers/storage.conf
  containerStore:
    number: 32
    paused: 0
    running: 31
    stopped: 1
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/x/.local/share/containers/storage
  graphRootAllocated: 999697944576
  graphRootUsed: 60282040320
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 225
  runRoot: /run/user/1023/containers
  volumePath: /home/x/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Wed Dec 31 21:00:00 1969
  GitCommit: ""
  GoVersion: go1.19.8
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

  • Environment: Linux / Debian 12
  • Podman Version: 4.3.1
  • Podman Compose Version: 1.1.0

Additional information

Questions:

  • Is this strict dependency enforcement a design choice in libpod?

  • Are there workarounds to update a single service’s image without destroying the entire stack?

  • Are there plans to introduce functionality akin to Docker’s --no-deps option?

Why is this a problem for larger stacks:

Let's say we have the following service dependency graph:

                +---------+
                | proxy   |
                +---------+
                   /   \
                  /     \
      +---------------+  +---------------+
      | service A     |  | service B     |
      +---------------+  +---------------+
                            /   |   \
                           /    |    \
             +---------------+ +---------------+ +---------------+
              | service B1    | | service B2    | | service B3    |
             +---------------+ +---------------+ +---------------+

If we need to update service A, we have to tear down the whole stack: the proxy, service B, and everything under it (B1, B2, B3, and so on). This is just bananas. How are we supposed to update the service A container without disrupting all the other containers?

One "solution" is to... remove dependencies...

@ninja-quokka (Collaborator)

Hi @ArthoPacini

I'm closing this as a duplicate of containers/podman-compose#1177

Could you please add your "why is this a problem" part to your bug report there as well? I think that's a very good point.
