Issue Description

Podman enforces strict dependencies between containers in a pod, preventing the update of a single service (e.g., rebuilding and restarting) when it has dependent services, even if the service is stopped. This behavior makes it impossible to recreate a container with an updated image without affecting the entire stack, leading to unnecessary downtime. In contrast, Docker allows updating individual services without impacting dependencies, offering greater flexibility. This limitation is particularly evident when using Podman Compose but appears to stem from Podman’s underlying architecture.
Steps to reproduce the issue

1. Create modules/app/index.html (serving "App Version 1"), modules/proxy/index.html, and a docker-compose.yaml in which the proxy service depends on the app service, then start the stack with podman-compose up -d.
2. Check the app service content:
podman exec -it app sh -c "curl http://localhost"
Output: App Version 1
3. Update modules/app/index.html to "App Version 2", then rebuild and recreate only the app service:
podman-compose build app && podman-compose down app && podman-compose up app -d
4. Check the app service content again:
podman exec -it app sh -c "curl http://localhost"
Output: Still App Version 1
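The exact file contents are omitted above; a minimal hypothetical reconstruction of the setup is sketched below (the base image, Dockerfile, and container_name choices are illustrative assumptions, not the original files, and modules/proxy/index.html is left out because its role is not shown):

mkdir -p modules/app

cat > modules/app/index.html << 'EOF'
App Version 1
EOF

# Illustrative Dockerfile: nginx serving index.html, with curl installed so the
# "curl http://localhost" check works inside the container
cat > modules/app/Dockerfile << 'EOF'
FROM docker.io/library/nginx:alpine
RUN apk add --no-cache curl
COPY index.html /usr/share/nginx/html/index.html
EOF

# Illustrative compose file: proxy depends on app; container_name is set so
# "podman exec -it app" resolves
cat > docker-compose.yaml << 'EOF'
services:
  app:
    build: ./modules/app
    container_name: app
  proxy:
    image: docker.io/library/nginx:alpine
    container_name: proxy
    depends_on:
      - app
EOF

podman-compose up -d   # start the stack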
Describe the results you received
When attempting to update the app service:
The new image builds successfully.
Stopping the app container works, but Podman refuses to remove it because the proxy container depends on it, producing an error like:
Error: container <app_container_id> has dependent containers which must be removed before it: <proxy_container_id>
Running podman-compose up -d app restarts the existing container instead of creating a new one with the updated image, resulting in the content remaining "App Version 1" instead of "App Version 2".
The strict dependency enforcement requires stopping and removing the entire stack to update a single service.
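The same refusal can be reproduced with plain podman commands, independent of podman-compose (container names app and proxy assumed), which is part of why this looks like behavior of Podman/libpod itself rather than of the compose tool:

podman stop app   # stopping the container is allowed
podman rm app     # removal is refused because the proxy container depends on app
# Error: container <app_container_id> has dependent containers which must be removed before it: <proxy_container_id>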
Describe the results you expected
I expected Podman to allow updating the app service independently, similar to Docker Compose’s behavior with --force-recreate --no-deps. Specifically:
The app container should be recreated with the updated image ("App Version 2").
The dependent proxy service should either continue running or gracefully handle the brief unavailability of app without requiring a full stack shutdown.
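For comparison, this is the Docker Compose invocation implied by those flags; it recreates only the one service and leaves its dependents running:

docker compose build app
docker compose up -d --no-deps --force-recreate app   # proxy keeps running, app is recreated from the new image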
Questions:
Is this strict dependency enforcement a design choice in libpod?
Are there workarounds to update a single service’s image without destroying the entire stack?
Are there plans to introduce functionality akin to Docker’s --no-deps option?
Why is this a problem for larger stacks:
Let's say we have this service dependency graph:
            +---------+
            |  proxy  |
            +---------+
             /       \
            /         \
+---------------+ +---------------+
|   service A   | |   service B   |
+---------------+ +---------------+
                    /     |     \
                   /      |      \
+---------------+ +---------------+ +---------------+
|  service B1   | |  service B2   | |  service B3   |
+---------------+ +---------------+ +---------------+
If we need to update service A, we have to destroy the whole stack: the proxy as well as service B and all the other services like B1, B2, B3, and so on... This is just... bananas. How are we supposed to update the service A container without disrupting all the other containers?
One "solution" is to... remove dependencies...
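A partial workaround, sketched below (container names are assumed to match the service names, and podman-compose behavior may vary by version), is to remove only the dependents of the service being updated, in reverse dependency order, instead of taking down the whole stack; in the graph above only the proxy has to come down briefly, while service B and B1/B2/B3 keep running:

podman-compose build serviceA   # build the updated image
podman stop proxy serviceA      # stop the dependent first, then the target
podman rm proxy                 # remove the dependent first
podman rm serviceA              # with the dependent gone, the target can be removed
podman-compose up -d serviceA   # recreate the target from the new image
podman-compose up -d proxy      # then recreate its dependent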
Additional environment details

Podman in a container: No
Privileged Or Rootless: Rootless
Upstream Latest Release: Yes