Podman Compose Prevents Updating a Single Service Due to Dependency Constraints #1177
So the issue is that when taking down the `app` service, podman-compose wants to `rm` it, and this fails due to the dependency from the proxy container. This failure is somewhat ignored (the command still returns success), and when bringing the service back up, the old container is reused. Something I checked while looking for a workaround: checking the code, we actually do have an option to ignore dependencies, but this is currently only for internal use and not exposed. I can't see any other way to recreate the container. So IMO the question is whether we should expose that option. @Luap99 WDYT mate?
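To illustrate (container names assumed, not taken from the report):

```
# Stopping app succeeds, but removing it is refused while proxy still
# records it as a dependency, so the old container gets reused on "up".
podman stop myproj_app_1
podman rm myproj_app_1    # fails with a dependency error (message varies by version)
```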
Thanks for the feedback! I’ve tried every parameter I could find to resolve this, but nothing has worked. I get why dependencies should be respected, but this limitation becomes a real issue in larger stacks. Let me explain why this matters and propose a solution.

Why This Is a Problem for Larger Stacks

Consider this service dependency graph:
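Roughly (the service names here are placeholders, not my real stack):

```
service_a --+
service_b --+--> api --> proxy
service_c --+
```

The proxy sits at the top and, transitively, depends on every service below it.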
If I need to update one of the services at the bottom of this graph, Podman's strict enforcement means I have to take down everything that depends on it, transitively, just to swap in a new image.

In contrast, Docker Compose respects `depends_on` for startup ordering without blocking updates of individual services. Since Podman already has an internal option to ignore dependencies, could podman-compose use it to support this workflow?

Right now, to circumvent the problem, I've just disabled all of my `depends_on` entries.
It seems a similar problem was cited in containers/podman#18575 and containers/podman#23898. In the latter, the user Luap99 questioned this kind of request.
I guess in this case you are simply recreating the container with a new, up-to-date image. It is a valid problem, given that it really isn't a great workflow to have to destroy the entire stack. But to accomplish this, the service needs not only to be stopped but to be erased/deleted, so a new container with the new image can take its place. Podman won't let the container be deleted because of the dependency.
And I mean... If
I think in the case of podman-compose it makes more sense to allow a dependent container to be recreated than in the context of a sysadmin running `podman rm --force` on a container they don't know has dependencies. But as podman-compose simply consumes podman, I don't think we can expose one without allowing the other use case. I think your "Why This Is a Problem for Larger Stacks" diagram is interesting for the discussion though, so thank you for adding it here!
My opinion hasn't changed: podman's dependency model is rather strict and wired deeply into our stack. I don't see that changing. I acknowledge the use case, but maybe then declaring a dependency is the wrong choice.
AFAIK `depends_on` is a compose thing and not a regular docker or podman option, so I don't see why that option alone would trigger a dependency internally in podman. As such, this is something podman-compose is passing explicitly to us via `--requires`? Note that podman-compose is a community project and not maintained by us podman maintainers.
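To illustrate at the plain-podman level (container and image names made up):

```
# podman-compose translates depends_on into podman's --requires flag.
# Once recorded, podman refuses to remove "app" while "proxy" exists.
podman run -d --name app registry.example.com/app:1
podman run -d --name proxy --requires app registry.example.com/proxy:1
podman rm app    # refused even if app is stopped, because proxy requires it
```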
That is indeed problematic, but naturally we cannot prevent a container process from exiting. The only option we would have in such a case is to stop all other containers in the dependency chain, and we never started doing that. The reason deletion is so important is that otherwise it bricks your other containers: the deps are resolved to the unique container IDs, so even if we were to recreate a container, it would have a new ID, and all dependencies pointing to the old ID in our DB would be broken, as that ID no longer exists.
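The ID resolution can be seen with inspect (names from the sketch above; the `Dependencies` field as exposed in current podman's inspect output):

```
# Dependencies are stored as container IDs, not names, so recreating "app"
# (which yields a new ID) would orphan proxy's stored reference.
podman container inspect proxy --format '{{ .Dependencies }}'
# prints the ID of the app container that existed when proxy was created
```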
Hi @Luap99 and @ninja-quokka, thanks for your detailed responses! I’d like to walk you through my real-world use case to show why this limitation creates significant friction, especially in larger stacks, and propose a way forward that could balance both perspectives.

My Setup

Here’s the stack I’m working with, similar to the diagram I shared earlier: a cryptocurrency node, an API in front of it, and a proxy exposing everything to the internet. The dependency chain, defined via `depends_on`, is: the proxy depends on the API, which depends on the node.
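In compose terms the chain looks roughly like this (the image names and published port are placeholders, not my real configuration):

```yaml
services:
  node:
    image: example/crypto-node      # slow to sync its ledger
  api:
    image: example/api
    depends_on:
      - node
  proxy:
    image: example/proxy
    depends_on:
      - api
    ports:
      - "443:443"                   # exposes the stack to the internet
```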
This setup works great for the initial deployment. The cryptocurrency node, which takes time to sync its ledger with the network, starts early, followed by the API, and finally the proxy, ensuring everything is accessible via the internet. So far, so good.

The Update Problem

The issue arises when I need to update just one service, say `docs`:

```
podman-compose build docs && podman-compose down docs && podman-compose up docs -d
```
To update `docs`, I can't just rebuild and recreate that one container: podman refuses to remove the old container while anything that depends on it still exists, so the dependents have to come down too.

This is a major pain point: every update of a single service turns into downtime for everything that depends on it, which in a larger stack can mean nearly the whole deployment.

Podman’s strictness forces me to choose between disabling `depends_on` entirely and tearing down the whole stack for every update.

Current Workarounds

The only workarounds I've found are dropping `depends_on` (losing startup ordering) or tearing down and rebuilding the entire stack, as sketched below.
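For illustration, the full-teardown path (using the `docs` service from above):

```
# Update a single service today: rebuild it, then bounce the entire stack,
# accepting downtime for every container, dependent or not.
podman-compose down
podman-compose build docs
podman-compose up -d
```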
A Proposed Middle Ground

Since Podman already has an internal `--ignore-deps` mechanism, podman-compose could use it when recreating a single service.

This would allow a single service to be rebuilt and recreated in place, keep `depends_on` for startup ordering, and bring the workflow in line with what Docker Compose users expect.
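To sketch the UX (these flags are hypothetical; they do not exist in podman-compose today):

```
# Hypothetical flags, shown only to illustrate the proposed workflow:
podman-compose build docs
podman-compose down --ignore-deps docs    # remove just this container
podman-compose up -d --ignore-deps docs   # recreate it without touching proxy
```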
@Luap99, you mentioned that podman's dependency model is strict and deeply wired into the stack, and I understand that changing it is off the table. @ninja-quokka, you suggested that allowing recreation in podman-compose makes sense, even if it’s trickier with raw podman commands. I think this supports the idea of a podman-compose-specific solution that leverages `--ignore-deps` internally without exposing it broadly.

My Questions

Would you be open to a podman-compose-level option along these lines, and if not, what is the recommended way to update a single image without destroying the stack?

Why I Care

I switched from Docker to Podman for its daemonless design and rootless capabilities, and I love it! I've even contributed back.
Alright, I've found a "solution" for now: just comment out the line at line 1035, in the function that adds the dependency arguments. It seems to keep the startup order but doesn't actually pass `--requires` to podman.

I am using podman-compose version 1.1.0 with podman version 4.3.1. I tried running version 1.3.0, but hit another error (the command never actually finishes), for which I made another issue: #1178
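To illustrate the idea without quoting the actual source, here is a simplified, hypothetical sketch of the kind of logic involved; the names and structure are made up, so check your installed podman-compose before editing anything:

```python
# Hypothetical sketch: how a compose tool might turn depends_on into
# podman's --requires flag. Skipping the final return is the equivalent
# of the workaround above: compose-level startup ordering is preserved,
# but podman never records a hard (removal-blocking) dependency.
def dependency_args(depends_on: list[str]) -> list[str]:
    """Build podman CLI arguments encoding hard service dependencies."""
    if not depends_on:
        return []
    return [f"--requires={','.join(depends_on)}"]


print(dependency_args(["node", "api"]))  # ['--requires=node,api']
```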
Describe the bug

When using Podman Compose, it is not possible to update a single service (e.g., `app`) without affecting dependent services (e.g., `proxy`) due to strict dependency enforcement. This issue occurs when attempting to rebuild and restart a service with dependent containers, even if the service is stopped. In larger stacks with multiple interdependent services, this forces a complete stack shutdown to update a single service. This behavior contrasts with Docker Compose, where individual services can be updated without impacting dependencies.
To Reproduce

1. Create the project files. The `Dockerfile`s are identical, and `modules/app/index.html`, `modules/proxy/index.html`, and `docker-compose.yaml` define an `app` service and a `proxy` service that depends on it (a sketch of the compose file follows this list).
2. Start the stack and check the `app` content. Output should be `App Version 1`.
3. Edit `modules/app/index.html` (you may use `sed -i 's/App Version 1/App Version 2/' ./modules/app/index.html`).
4. Rebuild and restart `app`:

```
podman-compose build app && podman-compose down app && podman-compose up app -d
```

5. Check the `app` content again. Output: still `App Version 1`.

Observed along the way:

- The `app` container cannot be removed or recreated because `proxy` depends on it, even when `app` is stopped.
- `podman-compose up -d app` restarts the old container instead of creating a new one with the updated image.
- Updating `app` therefore requires stopping and removing the entire stack, which is impractical for larger stacks.
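As referenced in step 1, a minimal sketch of what the `docker-compose.yaml` implies (the build contexts and published port are assumptions):

```yaml
services:
  app:
    build: ./modules/app
  proxy:
    build: ./modules/proxy
    depends_on:
      - app              # podman-compose turns this into podman's --requires
    ports:
      - "8080:80"        # assumed; the original port was not preserved
```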
Expected behavior
In Docker Compose, a single service can be rebuilt and restarted without affecting its dependencies.
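Presumably via Docker Compose's `--no-deps` flag, for example:

```
docker compose up -d --no-deps --build app
```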
Podman Compose should offer similar functionality, allowing individual service updates without requiring the entire stack to be taken down.
Actual behavior
Podman Compose enforces dependencies strictly, preventing the removal or recreation of a service if it has dependent containers. This makes it impossible to update a single service without stopping and removing all dependent services, leading to unnecessary downtime.
Environment:

```
$ podman version
Client:       Podman Engine
Version:      4.3.1
API Version:  4.3.1
Go Version:   go1.19.8
Built:        Wed Dec 31 21:00:00 1969
OS/Arch:      linux/amd64
```
Additional context
In stacks where a service like proxy depends on multiple services (e.g., 10+ containers), updating a single service requires shutting down the entire stack. This is inefficient and causes significant operational disruption, especially for users migrating from Docker Compose.
If it is a problem with `podman` and not actually with `podman-compose`, then how are you guys actually updating images without destroying the entire stack? I will remove dependencies for now as a "solution"...

Is this a problem with roots in `libpod`? Any workarounds?