The FoundationDB integration relies on the FoundationDB Python client, which (like all FoundationDB clients) requires a cluster file so the client can identify and connect to the cluster's coordination servers. The client also requires that the cluster file be writable. When running in Kubernetes, it can be very tricky to get an appropriate cluster file to Datadog Agent instances.
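For context, here is a minimal sketch of how a check might open the database with the Python client; the path is illustrative and the API version is just an example:

```python
import fdb

# The client requires an API version to be selected before opening a database.
fdb.api_version(710)

# The cluster file tells the client where the coordination servers are. The
# client expects to be able to rewrite this file when the coordinators change,
# which is why a read-only mount causes trouble.
db = fdb.open(cluster_file="/etc/foundationdb/fdb.cluster")
```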
When running the Datadog Agent in Kubernetes, there are really three ways to add a file to an agent instance:
Operators can build a custom agent image with the file "baked in." This works and can get a writable cluster file into the image, but it is not resilient to cluster topology changes. Any time the set of coordinators changes, an operator would have to rebuild the agent image so newly-launched agents could find the coordination servers.
Operators can mount a `ConfigMap` as a file. For FoundationDB users using the FoundationDB Kubernetes Operator, this is probably the most natural approach. The problem here is that `ConfigMap` entries mounted as files are read-only; they'll work to a certain extent, but the FoundationDB client will be cranky (and the cluster as a whole will complain that clients are connected with read-only/outdated cluster files if its topology changes).
Operators could copy the file at agent startup using an init container. This is certainly theoretically possible, but I suspect that most Kubernetes users are using the Datadog Helm chart, and prior issues suggest that the Helm team is not interested in supporting init containers.
I'm wondering if we could consider alternative mechanisms for getting cluster files to agents; I'm happy to contribute something upstream if we can agree on an approach! I'd propose one or both of:
The FoundationDB integration could add a mechanism for reading cluster file contents from environment variables. When a new FoundationDB check gets instantiated, it could write the value from the named environment variable to a temporary file, then use that temporary file as the cluster file for the FoundationDB client. This would allow operators to pass in a `ConfigMap` value with the cluster file contents as an environment variable, then use that as the actual cluster file.
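A minimal sketch of that first option, assuming a hypothetical environment variable name (the real option/variable names would be whatever the integration settles on):

```python
import os
import tempfile


def cluster_file_from_env(env_var="FDB_CLUSTER_FILE_CONTENTS"):
    """Write cluster file contents from an environment variable to a writable
    temporary file and return its path. The env var name is hypothetical."""
    contents = os.environ.get(env_var)
    if not contents:
        raise ValueError(f"environment variable {env_var} is not set")

    # mkstemp creates the file with owner-only permissions; keep it around for
    # the lifetime of the check so the client can rewrite it if needed.
    fd, path = tempfile.mkstemp(prefix="fdb-", suffix=".cluster")
    with os.fdopen(fd, "w") as f:
        f.write(contents)
    return path
```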
The FoundationDB integration could (optionally? always?) copy cluster files to writable temporary files before passing them to the FoundationDB client; that would allow `ConfigMap` values mounted as files to work as writable cluster files.
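And a similar sketch of the second option, again with illustrative paths:

```python
import os
import shutil
import tempfile


def writable_cluster_file_copy(mounted_path):
    """Copy a (possibly read-only) cluster file, e.g. a mounted ConfigMap
    entry, to a writable temporary file and return the new path."""
    fd, path = tempfile.mkstemp(prefix="fdb-", suffix=".cluster")
    os.close(fd)
    shutil.copyfile(mounted_path, path)
    return path


# The check would then hand the copy to the client instead of the read-only
# mount, e.g.:
# db = fdb.open(cluster_file=writable_cluster_file_copy("/etc/datadog-agent/fdb.cluster"))
```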
Please let me know what you think; as I said, I'm happy to contribute an implementation if we can agree on an approach!