Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
If you are interested in working on this issue or have submitted a pull request, please leave a comment
Tell us about your request
When an ECS service is deployed with Service Connect, the Envoy proxy starts up as a sidecar in each task, discovers the task IPs and ports, and then creates a Cloud Map service where each task address is registered as an instance. When it creates the Cloud Map service, the sidecar adds the label "AmazonECSManaged=true".
I am requesting the capability to supply custom labels on the Cloud Map service from the Service Connect configuration in ECS.
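To make the ask concrete, here is a rough sketch against the boto3 `create_service` call. Everything mirrors today's serviceConnectConfiguration shape except the customAttributes field, which is hypothetical and is exactly the capability being requested:

```python
import boto3

ecs = boto3.client("ecs")

# Sketch only: "customAttributes" does not exist today; it is the field this
# issue is asking for, propagated onto the Cloud Map service that Service
# Connect creates (alongside AmazonECSManaged=true). All names are made up.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=2,
    launchType="FARGATE",
    serviceConnectConfiguration={
        "enabled": True,
        "namespace": "my-namespace",
        "services": [
            {
                "portName": "metrics",
                "discoveryName": "my-service-metrics",
                "clientAliases": [{"port": 9090}],
                # Hypothetical field (the feature request):
                "customAttributes": {
                    "metrics_path": "/internal/metrics",
                    "team": "platform",
                },
            }
        ],
    },
)
```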
Which service(s) is this request for?
This is for ECS Fargate.
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
The ECS Fargate tasks expose a Prometheus endpoint to be scraped by a Prometheus instance. I am writing a Prometheus HTTP service discovery service that uses Cloud Map to find the IPs and ports of the Fargate tasks. Prometheus expects labels and metadata for each target group, and I want to expose those labels and metadata on the Cloud Map service, sourced from the Service Connect configuration in the ECS service.
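The discovery side itself is easy to sketch with boto3 (a minimal sketch: namespace and service names are made up, error handling is omitted, and it assumes the Service Connect-registered instances carry the standard AWS_INSTANCE_IPV4/AWS_INSTANCE_PORT attributes):

```python
import json

import boto3

sd = boto3.client("servicediscovery")


def discover_targets(namespace: str, service: str) -> list:
    """Emit Prometheus HTTP SD target groups from Cloud Map instances."""
    resp = sd.discover_instances(NamespaceName=namespace, ServiceName=service)
    targets = []
    for inst in resp["Instances"]:
        attrs = inst["Attributes"]
        ip = attrs.get("AWS_INSTANCE_IPV4")
        port = attrs.get("AWS_INSTANCE_PORT")
        if ip and port:
            targets.append(f"{ip}:{port}")
    # Today there is nothing useful to put in "labels": the only attribute
    # Service Connect reliably writes is AmazonECSManaged=true.
    return [{"targets": targets, "labels": {}}]


if __name__ == "__main__":
    # Prometheus http_sd_config expects a JSON array of target groups.
    print(json.dumps(discover_targets("my-namespace", "my-service-metrics")))
```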
Since I have no way to directly propagate this information to the Cloud Map service, I would need to store this metadata in a separate datastore (files in S3, DynamoDB, or RDS) and then try to link it back together in Cloud Map.
Mapping a Cloud Map service and its instances back to an ECS service is non-trivial as well, because there is no useful information in either the instance or the service to indicate which ECS service and container it came from. Even if there were, it is really important to avoid querying ECS for this information, because the ECS rate limits are unforgiving and will cause significant problems at scale.
Among the many problems this causes is something simple: each application may serve its metrics on a different HTTP path. Prometheus expects this to be communicated via the __metrics_path__ label. If I could propagate this information from the ECS service to the Cloud Map service as a label, it would be trivial to extract. Instead, I will need to make educated guesses and probe different paths.
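If the requested labels existed, the fix inside the discovery sketch above would be a one-liner (assuming the label lands on the instance attributes; "metrics_path" is whatever hypothetical attribute name the feature ends up using):

```python
# Hypothetical: read the propagated attribute and emit it per target group.
# Prometheus honors __metrics_path__ as the scrape path override.
labels = {"__metrics_path__": attrs.get("metrics_path", "/metrics")}
return [{"targets": targets, "labels": labels}]
```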
Are you currently working around this issue?
I am coming up with heuristics and conventions that are based on highly brittle assumptions and could easily break. For example, I just assume that each Cloud Map service maps one-to-one to an ECS service (even though multiple container ports could be registered) and that all Prometheus endpoints use the same path.