# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement

# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
component: auto-instrumentation

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: "Bump NodeJS autoinstrumentations dependency to a version that supports enabling selected instrumentations via environment variable."

# One or more tracking issues related to the change
issues: [2622]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext: |
  See [the documentation](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/metapackages/auto-instrumentations-node#usage-auto-instrumentation) for details.
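To illustrate what this enables, here is a minimal sketch of an `Instrumentation` resource that opts into a subset of instrumentations via the `OTEL_NODE_ENABLED_INSTRUMENTATIONS` variable described in the linked documentation; the resource name and the chosen instrumentation list are hypothetical examples, not part of this change:

```yaml
# Hypothetical example: the resource name and instrumentation list are illustrative;
# the environment variable comes from the auto-instrumentations-node docs linked above.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  nodejs:
    env:
      # Enable only the listed instrumentations instead of the full default set.
      - name: OTEL_NODE_ENABLED_INSTRUMENTATIONS
        value: "http,express,pg"
```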
cmd/otel-allocator/README.md (+45 −32)
@@ -3,6 +3,8 @@
Target Allocator is an optional component of the OpenTelemetry Collector [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR). The release version matches the
operator's most recent release as well.

+> 🚨 **Note:** the TargetAllocator currently supports `deployment`, `statefulset`, and `daemonset` deployment modes of the `OpenTelemetryCollector` CR.

In a nutshell, the TA is a mechanism for decoupling the service discovery and metric collection functions of Prometheus such that they can be scaled independently. The Collector manages Prometheus metrics without needing to install Prometheus. The TA manages the configuration of the Collector's [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).

The TA serves two functions:
@@ -52,7 +54,9 @@ flowchart RL

Even though Prometheus is not required to be installed in your Kubernetes cluster to use the Target Allocator for Prometheus CR discovery, the TA does require that the ServiceMonitor and PodMonitor be installed. These CRs are bundled with Prometheus Operator; however, they can be installed standalone as well.

-The easiest way to do this is by going to the [Prometheus Operator’s Releases page](https://github.com/prometheus-operator/prometheus-operator/releases), grabbing a copy of the latest `bundle.yaml` file (for example, [this one](https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.66.0/bundle.yaml)), and stripping out all of the YAML except the ServiceMonitor and PodMonitor YAML definitions.
+The easiest way to do this is to grab a copy of the individual [`PodMonitor`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/charts/crds/crds/crd-podmonitors.yaml) YAML and [`ServiceMonitor`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/charts/crds/crds/crd-servicemonitors.yaml) YAML custom resource definitions (CRDs) from the [Kube Prometheus Operator’s Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/charts).

+> ✨ For more information on configuring the `PodMonitor` and `ServiceMonitor`, check out the [PodMonitor API](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.PodMonitor) and the [ServiceMonitor API](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor).

# Usage
The `spec.targetAllocator:` controls the TargetAllocator general properties. Full API spec can be found here: [api.md#opentelemetrycollectorspectargetallocator](../../docs/api.md#opentelemetrycollectorspectargetallocator)
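As a quick illustration of `spec.targetAllocator`, here is a minimal sketch of an `OpenTelemetryCollector` CR with the Target Allocator enabled; the `my-collector` name and `statefulset` mode are example values, not part of this diff:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: my-collector            # example name; the TA resources are derived from it
spec:
  mode: statefulset             # one of the deployment modes the TA supports
  targetAllocator:
    enabled: true
  # spec.config with a prometheus receiver would normally follow here
```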
-The TargetAllocator service is named based on the OpenTelemetryCollector CR name. `collector_id` should be unique per
+The TargetAllocator service is named based on the `OpenTelemetryCollector` CR name. For example, if your Collector CR name is `my-collector`, then the TargetAllocator `service` and `deployment` will each be named `my-collector-targetallocator`, and the `pod` will be named `my-collector-targetallocator-<pod_id>`. `collector_id` should be unique per
collector instance, such as the pod name. The `POD_NAME` environment variable is convenient since this is supplied
to collector instance pods by default.
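To make the `collector_id` point concrete, here is a minimal sketch of how a Collector's Prometheus receiver is commonly pointed at the allocator; the endpoint assumes the example `my-collector` name above, and the exact `target_allocator` settings are documented in the Prometheus receiver README rather than in this diff:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs: []
    target_allocator:
      endpoint: http://my-collector-targetallocator
      interval: 30s
      # POD_NAME is supplied to collector pods by default, giving a unique id per instance
      collector_id: "${POD_NAME}"
```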

### RBAC
-The ServiceAccount that the TargetAllocator runs as, has to have access to the CRs and the namespaces to watch for the pod and service monitors. A role like this will provide that
-access.

+Before the TargetAllocator can start scraping, you need to set up Kubernetes RBAC (role-based access control) resources. This means that you need to have a `ServiceAccount` and corresponding cluster roles so that the TargetAllocator has access to all of the necessary resources to pull metrics from.

+You can create your own `ServiceAccount` and reference it in `spec.targetAllocator.serviceAccount` in your `OpenTelemetryCollector` CR. You’ll then need to configure the `ClusterRole` and `ClusterRoleBinding` for this `ServiceAccount`, as per below.

```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: opentelemetry-targetallocator-cr-role
-rules:
-- apiGroups:
-  - monitoring.coreos.com
-  resources:
-  - servicemonitors
-  - podmonitors
-  verbs:
-  - '*'
-- apiGroups: [""]
-  resources:
-  - namespaces
-  verbs: ["get", "list", "watch"]
+targetAllocator:
+  enabled: true
+  serviceAccount: opentelemetry-targetallocator-sa
+  prometheusCR:
+    enabled: true
```
-In addition, the TargetAllocator needs the same permissions as a Prometheus instance would to find the matching targets
-from the CR instances.

+> 🚨 **Note**: The Collector part of this same CR *also* has a serviceAccount key which only affects the collector and *not*
+the TargetAllocator.

+If you omit the `ServiceAccount` name, the TargetAllocator creates a `ServiceAccount` for you. The `ServiceAccount`’s default name is a concatenation of the Collector name and the `-targetallocator` suffix. By default, this `ServiceAccount` has no defined policy, so you’ll need to create your own `ClusterRole` and `ClusterRoleBinding` for it, as per below.

+The role below will provide the minimum access required for the Target Allocator to query all the targets it needs based on any Prometheus configurations:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -177,19 +179,30 @@ rules:
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
```
-These roles can be combined.

-A ServiceAccount bound with the above permissions in the namespaces that are to be monitored can then be referenced in
-the `targetAllocator:` part of the OpenTelemetryCollector CR.
+If you enable the `prometheusCR` (set `spec.targetAllocator.prometheusCR.enabled` to `true`) in the `OpenTelemetryCollector` CR, you will also need to define the following roles. These give the TargetAllocator access to the `PodMonitor` and `ServiceMonitor` CRs, as well as to the namespaces it needs to watch for them.

```yaml
-targetAllocator:
-  enabled: true
-  serviceAccount: opentelemetry-targetallocator-sa
-  prometheusCR:
-    enabled: true
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: opentelemetry-targetallocator-cr-role
+rules:
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - servicemonitors
+  - podmonitors
+  verbs:
+  - '*'
+- apiGroups: [""]
+  resources:
+  - namespaces
+  verbs: ["get", "list", "watch"]
```
-**Note**: The Collector part of this same CR *also* has a serviceAccount key which only affects the collector and *not*
-the TargetAllocator.

+> ✨ The above roles can be combined into a single role.
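For completeness, here is a minimal sketch of the `ServiceAccount` and a `ClusterRoleBinding` that the README implies but the diff does not show; the binding name and namespace are hypothetical placeholders, while `opentelemetry-targetallocator-sa` and `opentelemetry-targetallocator-cr-role` come from the diff above. An analogous binding would be needed for the minimum-access role, whose name falls outside the shown diff context.

```yaml
# Hypothetical example: the binding name and namespace are placeholders;
# the ServiceAccount and ClusterRole names match those used in the README above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opentelemetry-targetallocator-sa
  namespace: observability                      # example namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-targetallocator-crb       # example binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opentelemetry-targetallocator-cr-role   # the Prometheus CR role shown above
subjects:
- kind: ServiceAccount
  name: opentelemetry-targetallocator-sa
  namespace: observability                      # must match the ServiceAccount namespace
```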