
Commit 3cbc1ad

Merge branch 'main' into bug/2655
2 parents 15d9b87 + be172d9

File tree

17 files changed, +256 -109 lines changed

@@ -0,0 +1,18 @@
+# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
+change_type: enhancement
+
+# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
+component: auto-instrumentation
+
+# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
+note: "Bump NodeJS autoinstrumentations dependency to a version that supports enabling selected instrumentations via environment variable."
+
+# One or more tracking issues related to the change
+issues: [2622]
+
+# (Optional) One or more lines of additional information to render under the primary note.
+# These lines will be padded with 2 spaces and then inserted directly into the document.
+# Use pipe (|) for multiline entries.
+subtext: |
+  See [the documentation](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/metapackages/auto-instrumentations-node#usage-auto-instrumentation) for details.
+  Usage example: `export OTEL_NODE_ENABLED_INSTRUMENTATIONS="http,nestjs-core"`.
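The changelog entry above refers to the upstream `OTEL_NODE_ENABLED_INSTRUMENTATIONS` switch. As a minimal sketch of how this could be wired up through the operator, assuming the `Instrumentation` CR's `spec.nodejs.env` field is used to pass environment variables to auto-instrumented pods (the resource name below is hypothetical):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: nodejs-instrumentation   # hypothetical example name
spec:
  nodejs:
    env:
      # Restrict the bundled NodeJS auto-instrumentations to HTTP and NestJS only
      - name: OTEL_NODE_ENABLED_INSTRUMENTATIONS
        value: "http,nestjs-core"
```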
@@ -0,0 +1,16 @@
+# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
+change_type: bug_fix
+
+# The name of the component, or a single word describing the area of concern, (e.g. operator, target allocator, github action)
+component: operator
+
+# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
+note: Added missing label for Service/Pod Monitors
+
+# One or more tracking issues related to the change
+issues: [2251]
+
+# (Optional) One or more lines of additional information to render under the primary note.
+# These lines will be padded with 2 spaces and then inserted directly into the document.
+# Use pipe (|) for multiline entries.
+subtext:

.github/workflows/e2e.yaml

-2
@@ -34,8 +34,6 @@ jobs:
         - e2e-multi-instrumentation
         - e2e-metadata-filters
       include:
-        - group: e2e-prometheuscr
-          setup: "prepare-e2e-with-featuregates FEATUREGATES=+operator.observability.prometheus"
        - group: e2e-multi-instrumentation
          setup: "add-operator-arg OPERATOR_ARG=--enable-multi-instrumentation prepare-e2e"
        - group: e2e-metadata-filters
autoinstrumentation/nodejs/package.json

+1-1
@@ -15,7 +15,7 @@
   },
   "dependencies": {
     "@opentelemetry/api": "1.7.0",
-    "@opentelemetry/auto-instrumentations-node": "0.40.2",
+    "@opentelemetry/auto-instrumentations-node": "0.43.0",
     "@opentelemetry/exporter-metrics-otlp-grpc": "0.46.0",
     "@opentelemetry/exporter-prometheus": "0.46.0",
     "@opentelemetry/exporter-trace-otlp-grpc": "0.46.0",

cmd/otel-allocator/README.md

+45-32
@@ -3,6 +3,8 @@
 Target Allocator is an optional component of the OpenTelemetry Collector [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR). The release version matches the
 operator's most recent release as well.
 
+> 🚨 **Note:** the TargetAllocator currently supports `deployment`, `statefulset`, and `daemonset` deployment modes of the `OpenTelemetryCollector` CR.
+
 In a nutshell, the TA is a mechanism for decoupling the service discovery and metric collection functions of Prometheus such that they can be scaled independently. The Collector manages Prometheus metrics without needing to install Prometheus. The TA manages the configuration of the Collector's [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).
 
 The TA serves two functions:
@@ -52,7 +54,9 @@ flowchart RL
 
 Even though Prometheus is not required to be installed in your Kubernetes cluster to use the Target Allocator for Prometheus CR discovery, the TA does require that the ServiceMonitor and PodMonitor be installed. These CRs are bundled with Prometheus Operator; however, they can be installed standalone as well.
 
-The easiest way to do this is by going to the [Prometheus Operator’s Releases page](https://github.com/prometheus-operator/prometheus-operator/releases), grabbing a copy of the latest `bundle.yaml` file (for example, [this one](https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.66.0/bundle.yaml)), and stripping out all of the YAML except the ServiceMonitor and PodMonitor YAML definitions.
+The easiest way to do this is to grab a copy of the individual [`PodMonitor`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/charts/crds/crds/crd-podmonitors.yaml) YAML and [`ServiceMonitor`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/charts/crds/crds/crd-servicemonitors.yaml) YAML custom resource definitions (CRDs) from the [Kube Prometheus Operator’s Helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/charts).
+
+> ✨ For more information on configuring the `PodMonitor` and `ServiceMonitor`, check out the [PodMonitor API](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.PodMonitor) and the [ServiceMonitor API](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor).
 
 # Usage
 The `spec.targetAllocator:` controls the TargetAllocator general properties. Full API spec can be found here: [api.md#opentelemetrycollectorspectargetallocator](../../docs/api.md#opentelemetrycollectorspectargetallocator)
@@ -118,34 +122,32 @@ OpenTelemetry Collector Operator-specific config.
 
 Upstream documentation here: [PrometheusReceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver#opentelemetry-operator)
 
-The TargetAllocator service is named based on the OpenTelemetryCollector CR name. `collector_id` should be unique per
+The TargetAllocator service is named based on the `OpenTelemetryCollector` CR name. For example, if your Collector CR name is `my-collector`, then the TargetAllocator `service` and `deployment` will each be named `my-collector-targetallocator`, and the `pod` will be named `my-collector-targetallocator-<pod_id>`. `collector_id` should be unique per
 collector instance, such as the pod name. The `POD_NAME` environment variable is convenient since this is supplied
 to collector instance pods by default.
 
 
 ### RBAC
-The ServiceAccount that the TargetAllocator runs as, has to have access to the CRs and the namespaces to watch for the pod and service monitors. A role like this will provide that
-access.
+
+Before the TargetAllocator can start scraping, you need to set up Kubernetes RBAC (role-based access controls) resources. This means that you need to have a `ServiceAccount` and corresponding cluster roles so that the TargetAllocator has access to all of the necessary resources to pull metrics from.
+
+You can create your own `ServiceAccount`, and reference it in `spec.targetAllocator.serviceAccount` in your `OpenTelemetryCollector` CR. You’ll then need to configure the `ClusterRole` and `ClusterRoleBinding` for this `ServiceAccount`, as per below.
+
 ```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: opentelemetry-targetallocator-cr-role
-rules:
-- apiGroups:
-  - monitoring.coreos.com
-  resources:
-  - servicemonitors
-  - podmonitors
-  verbs:
-  - '*'
-- apiGroups: [""]
-  resources:
-  - namespaces
-  verbs: ["get", "list", "watch"]
+  targetAllocator:
+    enabled: true
+    serviceAccount: opentelemetry-targetallocator-sa
+    prometheusCR:
+      enabled: true
 ```
-In addition, the TargetAllocator needs the same permissions as a Prometheus instance would to find the matching targets
-from the CR instances.
+
+> 🚨 **Note**: The Collector part of this same CR *also* has a serviceAccount key which only affects the collector and *not*
+the TargetAllocator.
+
+If you omit the `ServiceAccount` name, the TargetAllocator creates a `ServiceAccount` for you. The `ServiceAccount`’s default name is a concatenation of the Collector name and the `-targetallocator` suffix. By default, this `ServiceAccount` has no defined policy, so you’ll need to create your own `ClusterRole` and `ClusterRoleBinding` for it, as per below.
+
+The role below will provide the minimum access required for the Target Allocator to query all the targets it needs based on any Prometheus configurations:
+
 ```yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
@@ -177,19 +179,30 @@ rules:
 - nonResourceURLs: ["/metrics"]
   verbs: ["get"]
 ```
-These roles can be combined.
 
-A ServiceAccount bound with the above permissions in the namespaces that are to be monitored can then be referenced in
-the `targetAllocator:` part of the OpenTelemetryCollector CR.
+If you enable the `prometheusCR` (set `spec.targetAllocator.prometheusCR.enabled` to `true`) in the `OpenTelemetryCollector` CR, you will also need to define the following roles. These give the TargetAllocator access to the `PodMonitor` and `ServiceMonitor` CRs. It also gives namespace access to the `PodMonitor` and `ServiceMonitor`.
+
 ```yaml
-targetAllocator:
-  enabled: true
-  serviceAccount: opentelemetry-targetallocator-sa
-  prometheusCR:
-    enabled: true
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: opentelemetry-targetallocator-cr-role
+rules:
+- apiGroups:
+  - monitoring.coreos.com
+  resources:
+  - servicemonitors
+  - podmonitors
+  verbs:
+  - '*'
+- apiGroups: [""]
+  resources:
+  - namespaces
+  verbs: ["get", "list", "watch"]
 ```
-**Note**: The Collector part of this same CR *also* has a serviceAccount key which only affects the collector and *not*
-the TargetAllocator.
+
+> ✨ The above roles can be combined into a single role.
+
 
 ### Service / Pod monitor endpoint credentials
 
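The RBAC discussion above mentions a `ClusterRoleBinding` for the TargetAllocator's `ServiceAccount`, but the diff only shows the `ClusterRole`s. For completeness, a minimal sketch of such a binding, reusing the role and `ServiceAccount` names from the README (the binding name and namespace are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-targetallocator-crb   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opentelemetry-targetallocator-cr-role
subjects:
  - kind: ServiceAccount
    name: opentelemetry-targetallocator-sa
    namespace: default   # replace with the namespace the ServiceAccount lives in
```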
controllers/builder_test.go

+37-36
@@ -52,10 +52,6 @@ var (
 	pathTypePrefix = networkingv1.PathTypePrefix
 )
 
-var (
-	prometheusFeatureGate = featuregate.PrometheusOperatorIsAvailable.ID()
-)
-
 var (
 	opampbridgeSelectorLabels = map[string]string{
 		"app.kubernetes.io/managed-by": "opentelemetry-operator",
@@ -318,12 +314,13 @@ service:
 			Name:      "test-collector-monitoring",
 			Namespace: "test",
 			Labels: map[string]string{
-				"app.kubernetes.io/component":  "opentelemetry-collector",
-				"app.kubernetes.io/instance":   "test.test",
-				"app.kubernetes.io/managed-by": "opentelemetry-operator",
-				"app.kubernetes.io/name":       "test-collector-monitoring",
-				"app.kubernetes.io/part-of":    "opentelemetry",
-				"app.kubernetes.io/version":    "latest",
+				"app.kubernetes.io/component":                            "opentelemetry-collector",
+				"app.kubernetes.io/instance":                             "test.test",
+				"app.kubernetes.io/managed-by":                           "opentelemetry-operator",
+				"app.kubernetes.io/name":                                 "test-collector-monitoring",
+				"app.kubernetes.io/part-of":                              "opentelemetry",
+				"app.kubernetes.io/version":                              "latest",
+				"operator.opentelemetry.io/collector-monitoring-service": "Exists",
 			},
 			Annotations: nil,
 		},
@@ -564,12 +561,13 @@ service:
 			Name:      "test-collector-monitoring",
 			Namespace: "test",
 			Labels: map[string]string{
-				"app.kubernetes.io/component":  "opentelemetry-collector",
-				"app.kubernetes.io/instance":   "test.test",
-				"app.kubernetes.io/managed-by": "opentelemetry-operator",
-				"app.kubernetes.io/name":       "test-collector-monitoring",
-				"app.kubernetes.io/part-of":    "opentelemetry",
-				"app.kubernetes.io/version":    "latest",
+				"app.kubernetes.io/component":                            "opentelemetry-collector",
+				"app.kubernetes.io/instance":                             "test.test",
+				"app.kubernetes.io/managed-by":                           "opentelemetry-operator",
+				"app.kubernetes.io/name":                                 "test-collector-monitoring",
+				"app.kubernetes.io/part-of":                              "opentelemetry",
+				"app.kubernetes.io/version":                              "latest",
+				"operator.opentelemetry.io/collector-monitoring-service": "Exists",
 			},
 			Annotations: nil,
 		},
@@ -831,12 +829,13 @@ service:
 			Name:      "test-collector-monitoring",
 			Namespace: "test",
 			Labels: map[string]string{
-				"app.kubernetes.io/component":  "opentelemetry-collector",
-				"app.kubernetes.io/instance":   "test.test",
-				"app.kubernetes.io/managed-by": "opentelemetry-operator",
-				"app.kubernetes.io/name":       "test-collector-monitoring",
-				"app.kubernetes.io/part-of":    "opentelemetry",
-				"app.kubernetes.io/version":    "latest",
+				"app.kubernetes.io/component":                            "opentelemetry-collector",
+				"app.kubernetes.io/instance":                             "test.test",
+				"app.kubernetes.io/managed-by":                           "opentelemetry-operator",
+				"app.kubernetes.io/name":                                 "test-collector-monitoring",
+				"app.kubernetes.io/part-of":                              "opentelemetry",
+				"app.kubernetes.io/version":                              "latest",
+				"operator.opentelemetry.io/collector-monitoring-service": "Exists",
 			},
 			Annotations: nil,
 		},
@@ -1313,12 +1312,13 @@ service:
 			Name:      "test-collector-monitoring",
 			Namespace: "test",
 			Labels: map[string]string{
-				"app.kubernetes.io/component":  "opentelemetry-collector",
-				"app.kubernetes.io/instance":   "test.test",
-				"app.kubernetes.io/managed-by": "opentelemetry-operator",
-				"app.kubernetes.io/name":       "test-collector-monitoring",
-				"app.kubernetes.io/part-of":    "opentelemetry",
-				"app.kubernetes.io/version":    "latest",
+				"app.kubernetes.io/component":                            "opentelemetry-collector",
+				"app.kubernetes.io/instance":                             "test.test",
+				"app.kubernetes.io/managed-by":                           "opentelemetry-operator",
+				"app.kubernetes.io/name":                                 "test-collector-monitoring",
+				"app.kubernetes.io/part-of":                              "opentelemetry",
+				"app.kubernetes.io/version":                              "latest",
+				"operator.opentelemetry.io/collector-monitoring-service": "Exists",
 			},
 			Annotations: nil,
 		},
@@ -1711,12 +1711,13 @@ prometheus_cr:
 			Name:      "test-collector-monitoring",
 			Namespace: "test",
 			Labels: map[string]string{
-				"app.kubernetes.io/component":  "opentelemetry-collector",
-				"app.kubernetes.io/instance":   "test.test",
-				"app.kubernetes.io/managed-by": "opentelemetry-operator",
-				"app.kubernetes.io/name":       "test-collector-monitoring",
-				"app.kubernetes.io/part-of":    "opentelemetry",
-				"app.kubernetes.io/version":    "latest",
+				"app.kubernetes.io/component":                            "opentelemetry-collector",
+				"app.kubernetes.io/instance":                             "test.test",
+				"app.kubernetes.io/managed-by":                           "opentelemetry-operator",
+				"app.kubernetes.io/name":                                 "test-collector-monitoring",
+				"app.kubernetes.io/part-of":                              "opentelemetry",
+				"app.kubernetes.io/version":                              "latest",
+				"operator.opentelemetry.io/collector-monitoring-service": "Exists",
 			},
 			Annotations: nil,
 		},
@@ -1946,7 +1947,7 @@ prometheus_cr:
 			},
 			Spec: monitoringv1.ServiceMonitorSpec{
 				Endpoints: []monitoringv1.Endpoint{
-					monitoringv1.Endpoint{Port: "targetallocation"},
+					{Port: "targetallocation"},
 				},
 				Selector: v1.LabelSelector{
 					MatchLabels: map[string]string{
@@ -1964,7 +1965,7 @@
 				},
 			},
 			wantErr: false,
-			featuregates: []string{prometheusFeatureGate},
+			featuregates: []string{},
 		},
 	}
 	for _, tt := range tests {
internal/manifests/collector/podmonitor.go

+8-13
@@ -15,7 +15,6 @@
 package collector
 
 import (
-	"fmt"
 	"strings"
 
 	"github.com/go-logr/logr"
@@ -25,10 +24,11 @@ import (
 	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
 	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector/adapters"
+	"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
 	"github.com/open-telemetry/opentelemetry-operator/internal/naming"
 )
 
-// ServiceMonitor returns the service monitor for the given instance.
+// PodMonitor returns the pod monitor for the given instance.
 func PodMonitor(params manifests.Params) (*monitoringv1.PodMonitor, error) {
 	if !params.OtelCol.Spec.Observability.Metrics.EnableMetrics {
 		params.Log.V(2).Info("Metrics disabled for this OTEL Collector",
@@ -42,16 +42,14 @@ func PodMonitor(params manifests.Params) (*monitoringv1.PodMonitor, error) {
 	if params.OtelCol.Spec.Mode != v1beta1.ModeSidecar {
 		return nil, nil
 	}
-
+	name := naming.PodMonitor(params.OtelCol.Name)
+	labels := manifestutils.Labels(params.OtelCol.ObjectMeta, name, params.OtelCol.Spec.Image, ComponentOpenTelemetryCollector, nil)
+	selectorLabels := manifestutils.SelectorLabels(params.OtelCol.ObjectMeta, ComponentOpenTelemetryCollector)
 	pm = monitoringv1.PodMonitor{
 		ObjectMeta: metav1.ObjectMeta{
 			Namespace: params.OtelCol.Namespace,
-			Name:      naming.PodMonitor(params.OtelCol.Name),
-			Labels: map[string]string{
-				"app.kubernetes.io/name":       naming.PodMonitor(params.OtelCol.Name),
-				"app.kubernetes.io/instance":   fmt.Sprintf("%s.%s", params.OtelCol.Namespace, params.OtelCol.Name),
-				"app.kubernetes.io/managed-by": "opentelemetry-operator",
-			},
+			Name:      name,
+			Labels:    labels,
 		},
 		Spec: monitoringv1.PodMonitorSpec{
 			JobLabel: "app.kubernetes.io/instance",
@@ -60,10 +58,7 @@
 				MatchNames: []string{params.OtelCol.Namespace},
 			},
 			Selector: metav1.LabelSelector{
-				MatchLabels: map[string]string{
-					"app.kubernetes.io/managed-by": "opentelemetry-operator",
-					"app.kubernetes.io/instance":   fmt.Sprintf("%s.%s", params.OtelCol.Namespace, params.OtelCol.Name),
-				},
+				MatchLabels: selectorLabels,
 			},
 			PodMetricsEndpoints: append(
 				[]monitoringv1.PodMetricsEndpoint{