diff --git a/.chloggen/3090-enable-multiinstrumentation-by-default.yaml b/.chloggen/3090-enable-multiinstrumentation-by-default.yaml
deleted file mode 100755
index 29cbebcef3..0000000000
--- a/.chloggen/3090-enable-multiinstrumentation-by-default.yaml
+++ /dev/null
@@ -1,30 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: 'breaking'
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: auto-instrumentation
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: Enable multi instrumentation by default.
-
-# One or more tracking issues related to the change
-issues: [3090]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document.
-# Use pipe (|) for multiline entries.
-subtext: |
- Starting with this release, the OpenTelemetry Operator now enables multi-instrumentation by default.
- This enhancement allows instrumentation of multiple containers in a pod with language-specific configurations.|
- Key Changes:
- - Single Instrumentation (Default Behavior): If no container names are specified using the
- `instrumentation.opentelemetry.io/container-names` annotation, instrumentation will be applied to the first container in
- the pod spec by default. This only applies when single instrumentation injection is configured.
- - Multi-Container Pods: In scenarios where different containers in a pod use distinct technologies, users must specify the
- container(s) for instrumentation using language-specific annotations. Without this specification, the default behavior may
- not work as expected for multi-container environments.
- Compatibility:
- - Users already utilizing the `instrumentation.opentelemetry.io/container-names` annotation do not need to take any action.
- Their existing setup will continue to function as before.
- - Important: Users who attempt to configure both `instrumentation.opentelemetry.io/container-names` and language-specific annotations
- (for multi-instrumentation) simultaneously will encounter an error, as this configuration is not supported.
diff --git a/.chloggen/3149-add-must-gather.yaml b/.chloggen/3149-add-must-gather.yaml
deleted file mode 100755
index d42c553265..0000000000
--- a/.chloggen/3149-add-must-gather.yaml
+++ /dev/null
@@ -1,25 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: enhancement
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: auto-instrumentation, collector
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: "Add a must gather utility to help troubleshoot"
-
-# One or more tracking issues related to the change
-issues: [3149]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document.
-# Use pipe (|) for multiline entries.
-subtext: |
- The new utility is available as part of a new container image.
-
- To use the image in a running OpenShift cluster, you need to run the following command:
-
- ```sh
- oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace opentelemetry-operator-system
- ```
-
- See the [README](https://github.com/open-telemetry/opentelemetry-operator/blob/main/cmd/gather/README.md) for more details.
diff --git a/.chloggen/add_all_receiver_defaults.yaml b/.chloggen/add_all_receiver_defaults.yaml
deleted file mode 100755
index e4bb2b6c2b..0000000000
--- a/.chloggen/add_all_receiver_defaults.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: enhancement
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: collector
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: set default address for all parsed receivers
-
-# One or more tracking issues related to the change
-issues: [3126]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document.
-# Use pipe (|) for multiline entries.
-subtext: |
- This feature is enabled by default. It can be disabled by specifying
- `--feature-gates=-operator.collector.default.config`.
diff --git a/.chloggen/fips.yaml b/.chloggen/fips.yaml
deleted file mode 100755
index ec572de643..0000000000
--- a/.chloggen/fips.yaml
+++ /dev/null
@@ -1,19 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: enhancement
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: collector
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: Add flag to disable components when operator runs on FIPS enabled cluster.
-
-# One or more tracking issues related to the change
-issues: [3315]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document.
-# Use pipe (|) for multiline entries.
-subtext: |
- Flag `--fips-disabled-components=receiver.otlp,exporter.otlp,processor.batch,extension.oidc` can be used to disable
- components when operator runs on FIPS enabled cluster. The operator uses `/proc/sys/crypto/fips_enabled` to check
- if FIPS is enabled.
diff --git a/.chloggen/improve-probe-parsing.yaml b/.chloggen/fix-prometheus-rule-file.yaml
similarity index 75%
rename from .chloggen/improve-probe-parsing.yaml
rename to .chloggen/fix-prometheus-rule-file.yaml
index ec9b3fe8c2..28ce057468 100755
--- a/.chloggen/improve-probe-parsing.yaml
+++ b/.chloggen/fix-prometheus-rule-file.yaml
@@ -1,14 +1,14 @@
# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: enhancement
+change_type: 'bug_fix'
# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: collector
+component: 'github action'
# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: Improves healthcheck parsing capabilities, allowing for future extensions to configure a healthcheck other than the v1 healthcheck extension.
+note: Add a newline character at the end of the PrometheusRule file.
# One or more tracking issues related to the change
-issues: [3184]
+issues: [3503]
# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
diff --git a/.chloggen/remove_localhost_fg.yaml b/.chloggen/remove_localhost_fg.yaml
deleted file mode 100755
index 276c3d74b3..0000000000
--- a/.chloggen/remove_localhost_fg.yaml
+++ /dev/null
@@ -1,36 +0,0 @@
-# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
-change_type: breaking
-
-# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: collector
-
-# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: Remove ComponentUseLocalHostAsDefaultHost collector feature gate.
-
-# One or more tracking issues related to the change
-issues: [3306]
-
-# (Optional) One or more lines of additional information to render under the primary note.
-# These lines will be padded with 2 spaces and then inserted directly into the document.
-# Use pipe (|) for multiline entries.
-subtext: |
- This change may break setups where receiver endpoints are not explicitly configured to listen on e.g. 0.0.0.0.
- Change \#3333 attempts to address this issue for a known set of components.
- The operator performs the adjustment for the following receivers:
- - otlp
- - skywalking
- - jaeger
- - loki
- - opencensus
- - zipkin
- - tcplog
- - udplog
- - fluentforward
- - statsd
- - awsxray/UDP
- - carbon
- - collectd
- - sapm
- - signalfx
- - splunk_hec
- - wavefront
diff --git a/.chloggen/resource-attribute-from-annotations.yaml b/.chloggen/resource-attribute-from-annotations.yaml
deleted file mode 100755
index 1ddf782c5d..0000000000
--- a/.chloggen/resource-attribute-from-annotations.yaml
+++ /dev/null
@@ -1,24 +0,0 @@
-change_type: enhancement
-
-component: auto-instrumentation
-
-note: Add support for k8s labels such as app.kubernetes.io/name for resource attributes
-
-issues: [3112]
-
-subtext: |
- You can opt-in as follows:
- ```yaml
- apiVersion: opentelemetry.io/v1alpha1
- kind: Instrumentation
- metadata:
- name: my-instrumentation
- spec:
- defaults:
- useLabelsForResourceAttributes: true
- ```
- The following labels are supported:
- - `app.kubernetes.io/name` becomes `service.name`
- - `app.kubernetes.io/version` becomes `service.version`
- - `app.kubernetes.io/part-of` becomes `service.namespace`
- - `app.kubernetes.io/instance` becomes `service.instance.id`
diff --git a/.chloggen/container-names.yaml b/.chloggen/revert-3379-otel-configmap.yaml
similarity index 68%
rename from .chloggen/container-names.yaml
rename to .chloggen/revert-3379-otel-configmap.yaml
index 034d411f8d..bd7b66223c 100755
--- a/.chloggen/container-names.yaml
+++ b/.chloggen/revert-3379-otel-configmap.yaml
@@ -5,12 +5,15 @@ change_type: bug_fix
component: auto-instrumentation
# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: Fix ApacheHttpd, Nginx and SDK injectors to honour their container-names annotations.
+note: Reverts PR 3379 which inadvertently broke users setting JAVA_TOOL_OPTIONS
# One or more tracking issues related to the change
-issues: [3313]
+issues: [3463]
# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
-subtext: This is a breaking change if anyone is accidentally using the enablement flag with container names for these 3 injectors.
+subtext: |
+  Reverts a previous PR which was causing JAVA_TOOL_OPTIONS not to be overridden when
+  set by users. This was resulting in application CrashLoopBackOffs for users relying
+  on Java auto-instrumentation.
diff --git a/.chloggen/add_receiver_defaults.yaml b/.chloggen/service-extension.yaml
similarity index 85%
rename from .chloggen/add_receiver_defaults.yaml
rename to .chloggen/service-extension.yaml
index 7ffaefb2d8..d182754f46 100755
--- a/.chloggen/add_receiver_defaults.yaml
+++ b/.chloggen/service-extension.yaml
@@ -2,13 +2,13 @@
change_type: enhancement
# The name of the component, or a single word describing the area of concern, (e.g. collector, target allocator, auto-instrumentation, opamp, github action)
-component: operator
+component: collector
# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
-note: Use 0.0.0.0 as otlp receiver default address
+note: Support creating a service for extensions when ports are specified.
# One or more tracking issues related to the change
-issues: [3126]
+issues: [3460]
# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
index 209e0fe34b..68f4834a72 100644
--- a/.github/CODEOWNERS
+++ b/.github/CODEOWNERS
@@ -17,6 +17,3 @@
# AutoInstrumentation owners
# TBD
-
-# Target Allocator owners
-cmd/otel-allocator @open-telemetry/operator-ta-maintainers
diff --git a/.github/workflows/changelog.yaml b/.github/workflows/changelog.yaml
index c13feb754f..0cc293b2e6 100644
--- a/.github/workflows/changelog.yaml
+++ b/.github/workflows/changelog.yaml
@@ -65,16 +65,3 @@ jobs:
run: |
make chlog-validate \
|| { echo "New ./.chloggen/*.yaml file failed validation."; exit 1; }
-
- # In order to validate any links in the yaml file, render the config to markdown
- - name: Render .chloggen changelog entries
- run: make chlog-preview > changelog_preview.md
- - name: Install markdown-link-check
- run: npm install -g markdown-link-check
- - name: Run markdown-link-check
- run: |
- markdown-link-check \
- --verbose \
- --config .github/workflows/check_links_config.json \
- changelog_preview.md \
- || { echo "Check that anchor links are lowercase"; exit 1; }
diff --git a/.github/workflows/continuous-integration.yaml b/.github/workflows/continuous-integration.yaml
index 829789c19a..dd0fc335f6 100644
--- a/.github/workflows/continuous-integration.yaml
+++ b/.github/workflows/continuous-integration.yaml
@@ -62,8 +62,6 @@ jobs:
with:
path: |
/home/runner/.cache/golangci-lint
- /home/runner/go/pkg/mod
- ./bin
key: golangcilint-${{ hashFiles('**/go.sum') }}
restore-keys: |
golangcilint-
diff --git a/.github/workflows/e2e.yaml b/.github/workflows/e2e.yaml
index 5bc7aaeeec..64d8839087 100644
--- a/.github/workflows/e2e.yaml
+++ b/.github/workflows/e2e.yaml
@@ -31,9 +31,11 @@ jobs:
- e2e-pdb
- e2e-prometheuscr
- e2e-targetallocator
+ - e2e-targetallocator-cr
- e2e-upgrade
- e2e-multi-instrumentation
- e2e-metadata-filters
+ - e2e-ta-collector-mtls
include:
- group: e2e-instrumentation
setup: "add-instrumentation-params prepare-e2e"
@@ -41,8 +43,17 @@ jobs:
setup: "add-instrumentation-params prepare-e2e"
- group: e2e-metadata-filters
setup: "add-operator-arg OPERATOR_ARG='--annotations-filter=.*filter.out --annotations-filter=config.*.gke.io.* --labels-filter=.*filter.out' prepare-e2e"
+ - group: e2e-ta-collector-mtls
+ setup: "add-operator-arg OPERATOR_ARG='--feature-gates=operator.targetallocator.mtls' add-certmanager-permissions prepare-e2e"
- group: e2e-automatic-rbac
setup: "add-rbac-permissions-to-operator prepare-e2e"
+ - group: e2e-native-sidecar
+ setup: "add-operator-arg OPERATOR_ARG='--feature-gates=operator.sidecarcontainers.native' prepare-e2e"
+ kube-version: "1.29"
+ - group: e2e-targetallocator
+ setup: "enable-targetallocator-cr prepare-e2e"
+ - group: e2e-targetallocator-cr
+ setup: "enable-targetallocator-cr prepare-e2e"
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v4
@@ -56,8 +67,6 @@ jobs:
with:
path: bin
key: ${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('Makefile') }}-${{ steps.setup-go.outputs.go-version }}
- - name: Install chainsaw
- uses: kyverno/action-install-chainsaw@v0.2.11
- name: Install tools
run: make install-tools
- name: Prepare e2e tests
diff --git a/.github/workflows/publish-autoinstrumentation-nodejs.yaml b/.github/workflows/publish-autoinstrumentation-nodejs.yaml
index 45b368fbf6..7115105b2f 100644
--- a/.github/workflows/publish-autoinstrumentation-nodejs.yaml
+++ b/.github/workflows/publish-autoinstrumentation-nodejs.yaml
@@ -26,7 +26,7 @@ jobs:
- uses: actions/checkout@v4
- name: Read version
- run: echo VERSION=$(cat autoinstrumentation/nodejs/package.json | jq -r '.dependencies."@opentelemetry/sdk-node"') >> $GITHUB_ENV
+ run: echo VERSION=$(cat autoinstrumentation/nodejs/package.json | jq -r '.dependencies."@opentelemetry/auto-instrumentations-node"') >> $GITHUB_ENV
- name: Docker meta
id: meta
@@ -71,7 +71,7 @@ jobs:
uses: docker/build-push-action@v6
with:
context: autoinstrumentation/nodejs
- platforms: linux/amd64,linux/arm64
+ platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: ${{ github.event_name == 'push' }}
build-args: version=${{ env.VERSION }}
tags: ${{ steps.meta.outputs.tags }}
diff --git a/.github/workflows/reusable-operator-hub-release.yaml b/.github/workflows/reusable-operator-hub-release.yaml
index d453b92a93..e9de4190e2 100644
--- a/.github/workflows/reusable-operator-hub-release.yaml
+++ b/.github/workflows/reusable-operator-hub-release.yaml
@@ -56,7 +56,7 @@ jobs:
env:
VERSION: ${{ env.version }}
run: |
- mkdir operators/opentelemetry-operator/${VERSION}
+ mkdir operators/opentelemetry-operator/${VERSION}
cp -R ./tmp/bundle/${{ inputs.folder }}/* operators/opentelemetry-operator/${VERSION}
rm -rf ./tmp
@@ -73,7 +73,7 @@ jobs:
message="Update the opentelemetry to $VERSION"
body="Release opentelemetry-operator \`$VERSION\`.
- cc @pavolloffay @frzifus @yuriolisa @jaronoff97 @TylerHelmuth @swiatekm
+ cc @pavolloffay @frzifus @yuriolisa @jaronoff97 @TylerHelmuth @swiatekm @iblancasa
"
branch="update-opentelemetry-operator-to-${VERSION}"
diff --git a/.gitignore b/.gitignore
index 1438657894..52b40a6635 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,3 @@
-
# Binaries for programs and plugins
*.exe
*.exe~
@@ -39,8 +38,9 @@ config/manager/kustomization.yaml
kubeconfig
tests/_build/
config/rbac/extra-permissions-operator/
+config/rbac/certmanager-permissions/
# autoinstrumentation artifacts
build
node_modules
-package-lock.json
\ No newline at end of file
+package-lock.json
diff --git a/.linkspector.yml b/.linkspector.yml
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/CHANGELOG.md b/CHANGELOG.md
index c9d919240a..80998b7690 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,289 @@
+## 0.114.0
+
+### 💡 Enhancements 💡
+
+- `collector`: Create RBAC rules for the k8s_cluster receiver automatically. (#3427)
+- `collector`: Create RBAC rules for the k8sobjects receiver automatically. (#3429)
+- `collector`: Add a warning message when one created collector needs extra RBAC permissions and the service account doesn't have them. (#3432)
+- `target allocator`: Added the `allocation_fallback_strategy` option as a fallback for the per-node allocation strategy; it can be enabled with the feature flag `operator.targetallocator.fallbackstrategy` (#3477)
+
+  When using the per-node allocation strategy, targets that are not attached to a node will not
+  be allocated. Because the per-node strategy is required when running as a daemonset, it is
+  not possible to assign such targets under a daemonset deployment.
+  The feature flag `operator.targetallocator.fallbackstrategy` causes `consistent-hashing` to be
+  used as the fallback allocation strategy; at this time it applies to `per-node` only.
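+
+  A minimal sketch of enabling the gate (hypothetical operator Deployment excerpt; the flag name is taken from the entry above):
+
+  ```yaml
+  # excerpt from the operator Deployment manifest
+  containers:
+    - name: manager
+      args:
+        - --feature-gates=operator.targetallocator.fallbackstrategy
+  ```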
+
+- `auto-instrumentation`: updated node auto-instrumentation dependencies to the latest version (#3476)
+
+ - auto-instrumentations-node to 0.53.0
+ - exporter-metrics-otlp-grpc to 0.55.0
+ - exporter-prometheus to 0.55.0
+
+- `operator`: Replace references to gcr.io/kubebuilder/kube-rbac-proxy with quay.io/brancz/kube-rbac-proxy (#3485)
+
+### 🧰 Bug fixes 🧰
+
+- `operator`: Operator pod crashed if the Service Monitor for the operator metrics had already been created by another operator pod. (#3446)
+
+  The operator failed when the pod was restarted and the Service Monitor for operator metrics had already been created by another operator pod.
+  To fix this, the operator now sets the owner reference on the Service Monitor to itself and checks whether the Service Monitor already exists.
+
+- `auto-instrumentation`: Bump base memory requirements for Python and Go (#3479)
+
+### Components
+
+* [OpenTelemetry Collector - v0.114.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.114.0)
+* [OpenTelemetry Contrib - v0.114.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.114.0)
+* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5)
+* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0)
+* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0)
+* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0)
+* [Go - v0.17.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.17.0-alpha)
+* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+
+## 0.113.0
+
+### 💡 Enhancements 💡
+
+- `operator`: Programmatically create the `ServiceMonitor` for the operator metrics endpoint, ensuring correct namespace handling and dynamic configuration. (#3370)
+ Previously, the `ServiceMonitor` was created statically from a manifest file, causing failures when the
+ operator was deployed in a non-default namespace. This enhancement ensures automatic adjustment of the
+ `serverName` and seamless metrics scraping.
+- `collector`: Create RBAC rules for the k8s_events receiver automatically. (#3420)
+- `collector`: Inject the K8S_NODE_NAME environment variable for the Kubelet Stats Receiver. (#2779)
+- `auto-instrumentation`: Add config for installing musl-based auto-instrumentation for Python (#2264)
+- `auto-instrumentation`: Support `http/json` and `http/protobuf` via the OTEL_EXPORTER_OTLP_PROTOCOL environment variable, in addition to the default `grpc`, for exporting traces (#3412)
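+
+  A minimal sketch, assuming the variable is set through the Instrumentation `spec.env` field (value per OTel SDK conventions):
+  ```yaml
+  apiVersion: opentelemetry.io/v1alpha1
+  kind: Instrumentation
+  metadata:
+    name: my-instrumentation
+  spec:
+    env:
+      - name: OTEL_EXPORTER_OTLP_PROTOCOL
+        value: http/protobuf
+  ```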
+- `target allocator`: enables support for pulling scrape config and probe CRDs in the target allocator (#1842)
+
+### 🧰 Bug fixes 🧰
+
+- `collector`: Fix mutation of deployments, statefulsets, and daemonsets so that fields can be removed on update (#2947)
+
+### Components
+
+* [OpenTelemetry Collector - v0.113.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.113.0)
+* [OpenTelemetry Contrib - v0.113.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.113.0)
+* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5)
+* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0)
+* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0)
+* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0)
+* [Go - v0.17.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.17.0-alpha)
+* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+
+## 0.112.0
+
+### 💡 Enhancements 💡
+
+- `auto-instrumentation`: Support configuring Java auto-instrumentation when runtime configuration is provided from configmap or secret. (#1814)
+  This change allows users to configure JAVA_TOOL_OPTIONS in a config map or secret when the name of the variable is defined in the pod spec.
+  In this case the operator sets another JAVA_TOOL_OPTIONS that references the original value,
+  e.g. `JAVA_TOOL_OPTIONS=$(JAVA_TOOL_OPTIONS) -javaagent:/otel-auto-instrumentation-java/javaagent.jar`.
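+
+  A minimal sketch of the prerequisite pod spec (ConfigMap name and key are hypothetical):
+  ```yaml
+  containers:
+    - name: app
+      env:
+        - name: JAVA_TOOL_OPTIONS
+          valueFrom:
+            configMapKeyRef:
+              name: app-java-opts   # hypothetical ConfigMap
+              key: JAVA_TOOL_OPTIONS
+  ```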
+
+- `auto-instrumentation`: Adds VolumeClaimTemplate field to Instrumentation spec to enable user-definable ephemeral volumes for auto-instrumentation. (#3267)
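+
+  A minimal sketch, assuming the field follows the core `PersistentVolumeClaimTemplate` shape added in `instrumentation_types.go` below:
+  ```yaml
+  spec:
+    java:
+      volumeClaimTemplate:
+        spec:
+          accessModes: ["ReadWriteOnce"]
+          resources:
+            requests:
+              storage: 200Mi
+  ```
+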
+- `collector`: Add support for persistentVolumeClaimRetentionPolicy field (#3305)
+- `auto-instrumentation`: Build musl-based auto-instrumentation in the Python docker image (#2264)
+- `auto-instrumentation`: Add an empty line before the injected `Include ...opentemetry_agent.conf` directive, as a protection measure against `httpd.conf` files without a blank last line (#3401)
+- `collector`: Add automatic RBAC creation for the `kubeletstats` receiver. (#3155)
+- `auto-instrumentation`: Add Nodejs auto-instrumentation image builds for linux/s390x,linux/ppc64le. (#3322)
+
+### 🧰 Bug fixes 🧰
+
+- `target allocator`: Fixed the permission check for the service account of the target allocator (#3380)
+- `target allocator`: Change docker image to run as non-root (#3378)
+
+### Components
+
+* [OpenTelemetry Collector - v0.112.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.112.0)
+* [OpenTelemetry Contrib - v0.112.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.112.0)
+* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5)
+* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0)
+* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0)
+* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0)
+* [Go - v0.15.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.15.0-alpha)
+* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+
+## 0.111.0
+
+### 💡 Enhancements 💡
+
+- `auto-instrumentation`: Set the OTEL_LOGS_EXPORTER env var to `otlp` in Python instrumentation (#3330)
+
+- `collector`: Expose the Collector telemetry endpoint by default. (#3361)
+
+ The collector v0.111.0 changes the default binding of the telemetry metrics endpoint from `0.0.0.0` to `localhost`.
+  To avoid any disruption we fall back to `0.0.0.0:{PORT}` as the default address.
+ Details can be found here: [opentelemetry-collector#11251](https://github.com/open-telemetry/opentelemetry-collector/pull/11251)
+
+
+- `auto-instrumentation`: Add support for specifying exporter TLS certificates in auto-instrumentation. (#3338)
+
+ Now Instrumentation CR supports specifying TLS certificates for exporter:
+ ```yaml
+ spec:
+ exporter:
+ endpoint: https://otel-collector:4317
+ tls:
+ secretName: otel-tls-certs
+ configMapName: otel-ca-bundle
+ # otel-ca-bundle
+ ca_file: ca.crt
+ # present in otel-tls-certs
+ cert_file: tls.crt
+ # present in otel-tls-certs
+ key_file: tls.key
+ ```
+
+ * Propagating secrets across namespaces can be done with https://github.com/EmberStack/kubernetes-reflector or https://github.com/zakkg3/ClusterSecret
+ * Restarting workloads on certificate renewal can be done with https://github.com/stakater/Reloader or https://github.com/wave-k8s/wave
+
+- `collector`: Add native sidecar injection behind a feature gate which is disabled by default. (#2376)
+
+  Native sidecars are supported since Kubernetes version `1.28` and are available by default since `1.29`.
+  To use native sidecars on Kubernetes v1.28, make sure the "SidecarContainers" feature gate on Kubernetes is enabled.
+  If native sidecars are available, the operator can be advised to use them by adding
+  `--feature-gates=operator.sidecarcontainers.native` to the Operator args.
+  In the future this may become available as a deployment mode on the Collector CR. See [#3356](https://github.com/open-telemetry/opentelemetry-operator/issues/3356).
+
+- `target allocator, collector`: Enable mTLS between the TA and collector for passing secrets in the scrape_config securely (#1669)
+
+ This change enables mTLS between the collector and the target allocator (requires cert-manager).
+  This is necessary for passing secrets securely from the TA to the collector for scraping endpoints that have authentication. Use the `operator.targetallocator.mtls` feature gate to enable this feature. See the target allocator [documentation](https://github.com/open-telemetry/opentelemetry-operator/tree/main/cmd/otel-allocator#service--pod-monitor-endpoint-credentials) for more details.
+
+### 🧰 Bug fixes 🧰
+
+- `collector-webhook`: Fixed validation of `stabilizationWindowSeconds` in autoscaler behaviour (#3345)
+
+ The validation of `stabilizationWindowSeconds` in the `autoscaler.behaviour.scale[Up|Down]` incorrectly rejected 0 as an invalid value.
+  This has been fixed to ensure that the value is validated correctly (should be >=0 and <=3600) and the error message has been updated to reflect this.
+
+### Components
+
+* [OpenTelemetry Collector - v0.111.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.111.0)
+* [OpenTelemetry Contrib - v0.111.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.111.0)
+* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5)
+* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0)
+* [Node.JS - v0.53.0](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.53.0)
+* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0)
+* [Go - v0.15.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.15.0-alpha)
+* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+
+
+## 0.110.0
+
+### 🛑 Breaking changes 🛑
+
+- `auto-instrumentation`: Enable multi instrumentation by default. (#3090)
+
+ Starting with this release, the OpenTelemetry Operator now enables multi-instrumentation by default.
+ This enhancement allows instrumentation of multiple containers in a pod with language-specific configurations.
+
+ Key Changes:
+ - Single Instrumentation (Default Behavior): If no container names are specified using the
+ `instrumentation.opentelemetry.io/container-names` annotation, instrumentation will be applied to the first container in
+ the pod spec by default. This only applies when single instrumentation injection is configured.
+ - Multi-Container Pods: In scenarios where different containers in a pod use distinct technologies, users must specify the
+ container(s) for instrumentation using language-specific annotations. Without this specification, the default behavior may
+ not work as expected for multi-container environments.
+
+ Compatibility:
+ - Users already utilizing the `instrumentation.opentelemetry.io/container-names` annotation do not need to take any action.
+ Their existing setup will continue to function as before.
+ - Important: Users who attempt to configure both `instrumentation.opentelemetry.io/container-names` and language-specific annotations
+ (for multi-instrumentation) simultaneously will encounter an error, as this configuration is not supported.
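+
+  A minimal sketch of the language-specific annotation form used for multi-container pods (container names are hypothetical):
+  ```yaml
+  metadata:
+    annotations:
+      instrumentation.opentelemetry.io/inject-java: "true"
+      instrumentation.opentelemetry.io/java-container-names: "app,worker"
+  ```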
+
+- `collector`: Remove ComponentUseLocalHostAsDefaultHost collector feature gate. (#3306)
+
+ This change may break setups where receiver endpoints are not explicitly configured to listen on e.g. 0.0.0.0.
+ Change \#3333 attempts to address this issue for a known set of components.
+ The operator performs the adjustment for the following receivers:
+ - otlp
+ - skywalking
+ - jaeger
+ - loki
+ - opencensus
+ - zipkin
+ - tcplog
+ - udplog
+ - fluentforward
+ - statsd
+ - awsxray/UDP
+ - carbon
+ - collectd
+ - sapm
+ - signalfx
+ - splunk_hec
+ - wavefront
+
+
+### 💡 Enhancements 💡
+
+- `auto-instrumentation, collector`: Add a must gather utility to help troubleshoot (#3149)
+
+ The new utility is available as part of a new container image.
+
+ To use the image in a running OpenShift cluster, you need to run the following command:
+
+ ```sh
+ oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace opentelemetry-operator-system
+ ```
+
+ See the [README](https://github.com/open-telemetry/opentelemetry-operator/blob/main/cmd/gather/README.md) for more details.
+
+- `collector`: Set the default address for all parsed receivers (#3126)
+
+ This feature is enabled by default. It can be disabled by specifying
+ `--feature-gates=-operator.collector.default.config`.
+- `operator`: Use 0.0.0.0 as the OTLP receiver default address (#3126)
+- `collector`: Add a flag to disable components when the operator runs on a FIPS-enabled cluster. (#3315)
+  The flag `--fips-disabled-components=receiver.otlp,exporter.otlp,processor.batch,extension.oidc` can be used to disable
+  components when the operator runs on a FIPS-enabled cluster. The operator uses `/proc/sys/crypto/fips_enabled` to check
+  whether FIPS is enabled.
+
+- `collector`: Improves healthcheck parsing capabilities, allowing for future extensions to configure a healthcheck other than the v1 healthcheck extension. (#3184)
+- `auto-instrumentation`: Add support for k8s labels such as app.kubernetes.io/name for resource attributes (#3112)
+
+  You can opt in as follows:
+ ```yaml
+ apiVersion: opentelemetry.io/v1alpha1
+ kind: Instrumentation
+ metadata:
+ name: my-instrumentation
+ spec:
+ defaults:
+ useLabelsForResourceAttributes: true
+ ```
+ The following labels are supported:
+ - `app.kubernetes.io/name` becomes `service.name`
+ - `app.kubernetes.io/version` becomes `service.version`
+ - `app.kubernetes.io/part-of` becomes `service.namespace`
+ - `app.kubernetes.io/instance` becomes `service.instance.id`
+
+
+### 🧰 Bug fixes 🧰
+
+- `auto-instrumentation`: Fix ApacheHttpd, Nginx and SDK injectors to honour their container-names annotations. (#3313)
+
+ This is a breaking change if anyone is accidentally using the enablement flag with container names for these 3 injectors.
+
+### Components
+
+* [OpenTelemetry Collector - v0.110.0](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.110.0)
+* [OpenTelemetry Contrib - v0.110.0](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.110.0)
+* [Java auto-instrumentation - v1.33.5](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/tag/v1.33.5)
+* [.NET auto-instrumentation - v1.2.0](https://github.com/open-telemetry/opentelemetry-dotnet-instrumentation/releases/tag/v1.2.0)
+* [Node.JS - v0.52.1](https://github.com/open-telemetry/opentelemetry-js/releases/tag/experimental%2Fv0.52.1)
+* [Python - v0.48b0](https://github.com/open-telemetry/opentelemetry-python-contrib/releases/tag/v0.48b0)
+* [Go - v0.14.0-alpha](https://github.com/open-telemetry/opentelemetry-go-instrumentation/releases/tag/v0.14.0-alpha)
+* [ApacheHTTPD - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+* [Nginx - 1.0.4](https://github.com/open-telemetry/opentelemetry-cpp-contrib/releases/tag/webserver%2Fv1.0.4)
+
## 0.109.0
### 🚩 Deprecations 🚩
diff --git a/Makefile b/Makefile
index 939af881d5..8212a91dd7 100644
--- a/Makefile
+++ b/Makefile
@@ -204,11 +204,28 @@ add-image-opampbridge:
add-rbac-permissions-to-operator: manifests kustomize
# Kustomize only allows patches in the folder where the kustomization is located
# This folder is ignored by .gitignore
- cp -r tests/e2e-automatic-rbac/extra-permissions-operator/ config/rbac/extra-permissions-operator
+ mkdir -p config/rbac/extra-permissions-operator
+ cp -r tests/e2e-automatic-rbac/extra-permissions-operator/* config/rbac/extra-permissions-operator
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/clusterresourcequotas.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/cronjobs.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/daemonsets.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/events.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/extensions.yaml
cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/namespaces.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/namespaces-status.yaml
cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/nodes.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/nodes-proxy.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/nodes-spec.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/pod-status.yaml
cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/rbac.yaml
cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/replicaset.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/replicationcontrollers.yaml
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path extra-permissions-operator/resourcequotas.yaml
+
+.PHONY: enable-targetallocator-cr
+enable-targetallocator-cr:
+ @$(MAKE) add-operator-arg OPERATOR_ARG='--feature-gates=operator.collector.targetallocatorcr'
+ cd config/crd && $(KUSTOMIZE) edit add resource bases/opentelemetry.io_targetallocators.yaml
# Deploy controller in the current Kubernetes context, configured in ~/.kube/config
.PHONY: deploy
@@ -267,6 +284,13 @@ generate: controller-gen
e2e: chainsaw
$(CHAINSAW) test --test-dir ./tests/e2e
+# e2e-native-sidecar
+# NOTE: make sure the k8s featuregate "SidecarContainers" is set to true.
+# NOTE: make sure the operator featuregate "operator.sidecarcontainers.native" is enabled.
+.PHONY: e2e-native-sidecar
+e2e-native-sidecar: chainsaw
+ $(CHAINSAW) test --test-dir ./tests/e2e-native-sidecar
+
# end-to-end-test for testing automatic RBAC creation
.PHONY: e2e-automatic-rbac
e2e-automatic-rbac: chainsaw
@@ -312,6 +336,23 @@ e2e-prometheuscr: chainsaw
e2e-targetallocator: chainsaw
$(CHAINSAW) test --test-dir ./tests/e2e-targetallocator
+# Target allocator CR end-to-tests
+.PHONY: e2e-targetallocator-cr
+e2e-targetallocator-cr: chainsaw
+ $(CHAINSAW) test --test-dir ./tests/e2e-targetallocator-cr
+
+.PHONY: add-certmanager-permissions
+add-certmanager-permissions:
+ # Kustomize only allows patches in the folder where the kustomization is located
+ # This folder is ignored by .gitignore
+ cp -r tests/e2e-ta-collector-mtls/certmanager-permissions config/rbac/certmanager-permissions
+ cd config/rbac && $(KUSTOMIZE) edit add patch --kind ClusterRole --name manager-role --path certmanager-permissions/certmanager.yaml
+
+# Target allocator collector mTLS end-to-tests
+.PHONY: e2e-ta-collector-mtls
+e2e-ta-collector-mtls: chainsaw
+ $(CHAINSAW) test --test-dir ./tests/e2e-ta-collector-mtls
+
# end-to-end-test for Annotations/Labels Filters
.PHONY: e2e-metadata-filters
e2e-metadata-filters: chainsaw
@@ -454,7 +495,7 @@ KUSTOMIZE_VERSION ?= v5.0.3
CONTROLLER_TOOLS_VERSION ?= v0.16.1
GOLANGCI_LINT_VERSION ?= v1.57.2
KIND_VERSION ?= v0.20.0
-CHAINSAW_VERSION ?= v0.2.5
+CHAINSAW_VERSION ?= v0.2.8
.PHONY: install-tools
install-tools: kustomize golangci-lint kind controller-gen envtest crdoc kind operator-sdk chainsaw
@@ -474,12 +515,12 @@ kind: ## Download kind locally if necessary.
.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
- @test -s $(LOCALBIN)/controller-gen && $(LOCALBIN)/controller-gen --version | grep -q $(CONTROLLER_TOOLS_VERSION) || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)
+ $(call go-get-tool,$(CONTROLLER_GEN), sigs.k8s.io/controller-tools/cmd/controller-gen,$(CONTROLLER_TOOLS_VERSION))
.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
- @test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
+ $(call go-get-tool,$(ENVTEST), sigs.k8s.io/controller-runtime/tools/setup-envtest,latest)
CRDOC = $(shell pwd)/bin/crdoc
.PHONY: crdoc
diff --git a/README.md b/README.md
index f3485d7fac..6244ab90cf 100644
--- a/README.md
+++ b/README.md
@@ -11,6 +11,7 @@ The operator manages:
## Documentation
+- [Compatibility & Support docs](./docs/compatibility.md)
- [API docs](./docs/api.md)
- [Official Telemetry Operator page](https://opentelemetry.io/docs/kubernetes/operator/)
@@ -291,9 +292,12 @@ instrumentation.opentelemetry.io/inject-nodejs: "true"
```
Python:
+Python auto-instrumentation also honors an annotation that permits it to run on images with a C library other than glibc.
```bash
instrumentation.opentelemetry.io/inject-python: "true"
+instrumentation.opentelemetry.io/otel-python-platform: "glibc" # for Linux glibc based images, this is the default value and can be omitted
+instrumentation.opentelemetry.io/otel-python-platform: "musl" # for Linux musl based images
```
.NET:
@@ -608,7 +612,7 @@ spec:
mode: statefulset
targetAllocator:
enabled: true
- config:
+ config:
receivers:
prometheus:
config:
@@ -740,7 +744,7 @@ spec:
### Configure resource attributes with labels
-You can also use common labels to set resource attributes.
+You can also use common labels to set resource attributes.
The following labels are supported:
- `app.kubernetes.io/name` becomes `service.name`
@@ -782,62 +786,14 @@ The priority for setting resource attributes is as follows (first found wins):
1. Resource attributes set via `OTEL_RESOURCE_ATTRIBUTES` and `OTEL_SERVICE_NAME` environment variables
2. Resource attributes set via annotations (with the `resource.opentelemetry.io/` prefix)
-3. Resource attributes set via labels (e.g. `app.kubernetes.io/name`)
+3. Resource attributes set via labels (e.g. `app.kubernetes.io/name`)
if the `Instrumentation` CR has defaults.useLabelsForResourceAttributes=true (see above)
4. Resource attributes calculated from the pod's metadata (e.g. `k8s.pod.name`)
5. Resource attributes set via the `Instrumentation` CR (in the `spec.resource.resourceAttributes` section)
-This priority is applied for each resource attribute separately, so it is possible to set some attributes via
+This priority is applied for each resource attribute separately, so it is possible to set some attributes via
annotations and others via labels.
-## Compatibility matrix
-
-### OpenTelemetry Operator vs. OpenTelemetry Collector
-
-The OpenTelemetry Operator follows the same versioning as the operand (OpenTelemetry Collector) up to the minor part of the version. For example, the OpenTelemetry Operator v0.18.1 tracks OpenTelemetry Collector 0.18.0. The patch part of the version indicates the patch level of the operator itself, not that of OpenTelemetry Collector. Whenever a new patch version is released for OpenTelemetry Collector, we'll release a new patch version of the operator.
-
-By default, the OpenTelemetry Operator ensures consistent versioning between itself and the managed `OpenTelemetryCollector` resources. That is, if the OpenTelemetry Operator is based on version `0.40.0`, it will create resources with an underlying OpenTelemetry Collector at version `0.40.0`.
-
-When a custom `Spec.Image` is used with an `OpenTelemetryCollector` resource, the OpenTelemetry Operator will not manage this versioning and upgrading. In this scenario, it is best practice that the OpenTelemetry Operator version should match the underlying core version. Given a `OpenTelemetryCollector` resource with a `Spec.Image` configured to a custom image based on underlying OpenTelemetry Collector at version `0.40.0`, it is recommended that the OpenTelemetry Operator is kept at version `0.40.0`.
-
-### OpenTelemetry Operator vs. Kubernetes vs. Cert Manager vs Prometheus Operator
-
-We strive to be compatible with the widest range of Kubernetes versions as possible, but some changes to Kubernetes itself require us to break compatibility with older Kubernetes versions, be it because of code incompatibilities, or in the name of maintainability. Every released operator will support a specific range of Kubernetes versions, to be determined at the latest during the release.
-
-We use `cert-manager` for some features of this operator and the third column shows the versions of the `cert-manager` that are known to work with this operator's versions.
-
-The Target Allocator supports prometheus-operator CRDs like ServiceMonitor, and it does so by using packages imported from prometheus-operator itself. The table shows which version is shipped with a given operator version.
-Generally speaking, these are backwards compatible, but specific features require the appropriate package versions.
-
-The OpenTelemetry Operator _might_ work on versions outside of the given range, but when opening new issues, please make sure to test your scenario on a supported version.
-
-| OpenTelemetry Operator | Kubernetes | Cert-Manager | Prometheus-Operator |
-|------------------------|----------------| ------------ |---------------------|
-| v0.109.0 | v1.23 to v1.31 | v1 | v0.76.0 |
-| v0.108.0 | v1.23 to v1.31 | v1 | v0.76.0 |
-| v0.107.0 | v1.23 to v1.30 | v1 | v0.75.0 |
-| v0.106.0 | v1.23 to v1.30 | v1 | v0.75.0 |
-| v0.105.0 | v1.23 to v1.30 | v1 | v0.74.0 |
-| v0.104.0 | v1.23 to v1.30 | v1 | v0.74.0 |
-| v0.103.0 | v1.23 to v1.30 | v1 | v0.74.0 |
-| v0.102.0 | v1.23 to v1.30 | v1 | v0.71.2 |
-| v0.101.0 | v1.23 to v1.30 | v1 | v0.71.2 |
-| v0.100.0 | v1.23 to v1.29 | v1 | v0.71.2 |
-| v0.99.0 | v1.23 to v1.29 | v1 | v0.71.2 |
-| v0.98.0 | v1.23 to v1.29 | v1 | v0.71.2 |
-| v0.97.0 | v1.23 to v1.29 | v1 | v0.71.2 |
-| v0.96.0 | v1.23 to v1.29 | v1 | v0.71.2 |
-| v0.95.0 | v1.23 to v1.29 | v1 | v0.71.2 |
-| v0.94.0 | v1.23 to v1.29 | v1 | v0.71.0 |
-| v0.93.0 | v1.23 to v1.29 | v1 | v0.71.0 |
-| v0.92.0 | v1.23 to v1.29 | v1 | v0.71.0 |
-| v0.91.0 | v1.23 to v1.29 | v1 | v0.70.0 |
-| v0.90.0 | v1.23 to v1.28 | v1 | v0.69.1 |
-| v0.89.0 | v1.23 to v1.28 | v1 | v0.69.1 |
-| v0.88.0 | v1.23 to v1.28 | v1 | v0.68.0 |
-| v0.87.0 | v1.23 to v1.28 | v1 | v0.68.0 |
-| v0.86.0 | v1.23 to v1.28 | v1 | v0.68.0 |
-
## Contributing and Developing
Please see [CONTRIBUTING.md](CONTRIBUTING.md).
@@ -849,6 +805,7 @@ Approvers ([@open-telemetry/operator-approvers](https://github.com/orgs/open-tel
- [Benedikt Bongartz](https://github.com/frzifus), Red Hat
- [Tyler Helmuth](https://github.com/TylerHelmuth), Honeycomb
- [Yuri Oliveira Sa](https://github.com/yuriolisa), Red Hat
+- [Israel Blancas](https://github.com/iblancasa), Red Hat
Emeritus Approvers:
@@ -859,15 +816,6 @@ Emeritus Approvers:
- [Owais Lone](https://github.com/owais), Splunk
- [Pablo Baeyens](https://github.com/mx-psi), DataDog
-Target Allocator Maintainers ([@open-telemetry/operator-ta-maintainers](https://github.com/orgs/open-telemetry/teams/operator-ta-maintainers)):
-
-- [Kristina Pathak](https://github.com/kristinapathak), Lightstep
-- [Sebastian Poxhofer](https://github.com/secustor)
-
-Emeritus Target Allocator Maintainers
-
-- [Anthony Mirabella](https://github.com/Aneurysm9), AWS
-
Maintainers ([@open-telemetry/operator-maintainers](https://github.com/orgs/open-telemetry/teams/operator-maintainers)):
- [Jacob Aronoff](https://github.com/jaronoff97), Lightstep
diff --git a/RELEASE.md b/RELEASE.md
index 1fecc1e997..c0f6e29e0e 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -12,7 +12,7 @@ Steps to release a new version of the OpenTelemetry Operator:
> DO NOT BUMP JAVA PAST `1.X.X` AND DO NOT BUMP .NET PAST `1.2.0`. Upgrades past these versions will introduce breaking HTTP semantic convention changes.
1. Check if the compatible OpenShift versions are updated in the `Makefile`.
1. Update the bundle by running `make bundle VERSION=$VERSION`.
- 1. Change the compatibility matrix in the [readme](./README.md) file, using the OpenTelemetry Operator version to be released and the current latest Kubernetes version as the latest supported version. Remove the oldest entry.
+ 1. Change the compatibility matrix in the [compatibility doc](./docs/compatibility.md) file, using the OpenTelemetry Operator version to be released and the current latest Kubernetes version as the latest supported version. Remove the oldest entry.
1. Update release schedule table, by moving the current release manager to the end of the table with updated release version.
1. Add the changes to the changelog by running `make chlog-update VERSION=$VERSION`.
1. Check the OpenTelemetry Collector's changelog and ensure migration steps are present in `pkg/collector/upgrade`
@@ -44,9 +44,10 @@ The operator should be released within a week after the [OpenTelemetry collector
| Version | Release manager |
|----------|-----------------|
-| v0.110.0 | @swiatekm |
-| v0.111.0 | @frzifus |
-| v0.112.0 | @yuriolisa |
-| v0.113.0 | @pavolloffay |
-| v0.114.0 | @TylerHelmuth |
-| v0.115.0 | @jaronoff97 |
\ No newline at end of file
+| v0.115.0 | @TylerHelmuth |
+| v0.116.0 | @jaronoff97 |
+| v0.117.0 | @iblancasa |
+| v0.118.0 | @frzifus |
+| v0.119.0 | @yuriolisa |
+| v0.120.0 | @pavolloffay |
+| v0.121.0 | @swiatekm |
diff --git a/apis/v1alpha1/instrumentation_types.go b/apis/v1alpha1/instrumentation_types.go
index 2cccef7d6b..e290f4033b 100644
--- a/apis/v1alpha1/instrumentation_types.go
+++ b/apis/v1alpha1/instrumentation_types.go
@@ -97,8 +97,37 @@ type Resource struct {
// Exporter defines OTLP exporter configuration.
type Exporter struct {
// Endpoint is address of the collector with OTLP endpoint.
+	// If the endpoint defines the https:// scheme, TLS has to be specified.
// +optional
Endpoint string `json:"endpoint,omitempty"`
+
+ // TLS defines certificates for TLS.
+	// TLS needs to be enabled by specifying the https:// scheme in the Endpoint.
+ TLS *TLS `json:"tls,omitempty"`
+}
+
+// TLS defines TLS configuration for exporter.
+type TLS struct {
+	// SecretName defines the secret name that will be used to configure TLS on the exporter.
+	// It is the user's responsibility to create the secret in the namespace of the workload.
+ // The secret must contain client certificate (Cert) and private key (Key).
+ // The CA certificate might be defined in the secret or in the config map.
+ SecretName string `json:"secretName,omitempty"`
+
+	// ConfigMapName defines the config map name with the CA certificate. If it is not defined, the CA certificate
+	// will be used from the secret defined in SecretName.
+ ConfigMapName string `json:"configMapName,omitempty"`
+
+	// CA defines the key of the certificate (e.g. ca.crt) in the config map or secret, or an absolute path to a certificate.
+	// The absolute path can be used when the certificate is already present on the workload filesystem, e.g.
+	// /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
+ CA string `json:"ca_file,omitempty"`
+	// Cert defines the key (e.g. tls.crt) of the client certificate in the secret, or an absolute path to a certificate.
+	// The absolute path can be used when the certificate is already present on the workload filesystem.
+	Cert string `json:"cert_file,omitempty"`
+	// Key defines the key (e.g. tls.key) of the private key in the secret, or an absolute path to a key.
+	// The absolute path can be used when the key is already present on the workload filesystem.
+ Key string `json:"key_file,omitempty"`
}
// Sampler defines sampling configuration.
@@ -133,6 +162,10 @@ type Java struct {
// +optional
Image string `json:"image,omitempty"`
+	// VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+	// If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+ VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
+
// VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
// The default size is 200Mi.
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"`
@@ -167,6 +200,10 @@ type NodeJS struct {
// +optional
Image string `json:"image,omitempty"`
+	// VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+	// If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+ VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
+
// VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
// The default size is 200Mi.
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"`
@@ -188,6 +225,10 @@ type Python struct {
// +optional
Image string `json:"image,omitempty"`
+	// VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+	// If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+ VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
+
// VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
// The default size is 200Mi.
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"`
@@ -209,6 +250,10 @@ type DotNet struct {
// +optional
Image string `json:"image,omitempty"`
+	// VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+	// If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+ VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
+
// VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
// The default size is 200Mi.
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"`
@@ -228,6 +273,10 @@ type Go struct {
// +optional
Image string `json:"image,omitempty"`
+	// VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+	// If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+ VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
+
// VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
// The default size is 200Mi.
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"`
@@ -249,6 +298,10 @@ type ApacheHttpd struct {
// +optional
Image string `json:"image,omitempty"`
+	// VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+	// If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+ VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
+
// VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
// The default size is 200Mi.
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"`
@@ -285,6 +338,10 @@ type Nginx struct {
// +optional
Image string `json:"image,omitempty"`
+	// VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation.
+	// If omitted, an emptyDir is used with the size limit VolumeSizeLimit.
+ VolumeClaimTemplate corev1.PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
+
// VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
// The default size is 200Mi.
VolumeSizeLimit *resource.Quantity `json:"volumeLimitSize,omitempty"`
diff --git a/apis/v1alpha1/instrumentation_webhook.go b/apis/v1alpha1/instrumentation_webhook.go
index 004992f795..b4aae51c56 100644
--- a/apis/v1alpha1/instrumentation_webhook.go
+++ b/apis/v1alpha1/instrumentation_webhook.go
@@ -17,6 +17,7 @@ package v1alpha1
import (
"context"
"fmt"
+ "reflect"
"strconv"
"strings"
@@ -127,13 +128,13 @@ func (w InstrumentationWebhook) defaulter(r *Instrumentation) error {
if r.Spec.Python.Resources.Limits == nil {
r.Spec.Python.Resources.Limits = corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("500m"),
- corev1.ResourceMemory: resource.MustParse("32Mi"),
+ corev1.ResourceMemory: resource.MustParse("64Mi"),
}
}
if r.Spec.Python.Resources.Requests == nil {
r.Spec.Python.Resources.Requests = corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("50m"),
- corev1.ResourceMemory: resource.MustParse("32Mi"),
+ corev1.ResourceMemory: resource.MustParse("64Mi"),
}
}
if r.Spec.DotNet.Image == "" {
@@ -157,13 +158,13 @@ func (w InstrumentationWebhook) defaulter(r *Instrumentation) error {
if r.Spec.Go.Resources.Limits == nil {
r.Spec.Go.Resources.Limits = corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("500m"),
- corev1.ResourceMemory: resource.MustParse("32Mi"),
+ corev1.ResourceMemory: resource.MustParse("64Mi"),
}
}
if r.Spec.Go.Resources.Requests == nil {
r.Spec.Go.Resources.Requests = corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("50m"),
- corev1.ResourceMemory: resource.MustParse("32Mi"),
+ corev1.ResourceMemory: resource.MustParse("64Mi"),
}
}
if r.Spec.ApacheHttpd.Image == "" {
@@ -236,9 +237,61 @@ func (w InstrumentationWebhook) validate(r *Instrumentation) (admission.Warnings
default:
return warnings, fmt.Errorf("spec.sampler.type is not valid: %s", r.Spec.Sampler.Type)
}
+
+ var err error
+ err = validateInstrVolume(r.Spec.ApacheHttpd.VolumeClaimTemplate, r.Spec.ApacheHttpd.VolumeSizeLimit)
+ if err != nil {
+ return warnings, fmt.Errorf("spec.apachehttpd.volumeClaimTemplate and spec.apachehttpd.volumeSizeLimit cannot both be defined: %w", err)
+ }
+ err = validateInstrVolume(r.Spec.DotNet.VolumeClaimTemplate, r.Spec.DotNet.VolumeSizeLimit)
+ if err != nil {
+ return warnings, fmt.Errorf("spec.dotnet.volumeClaimTemplate and spec.dotnet.volumeSizeLimit cannot both be defined: %w", err)
+ }
+ err = validateInstrVolume(r.Spec.Go.VolumeClaimTemplate, r.Spec.Go.VolumeSizeLimit)
+ if err != nil {
+ return warnings, fmt.Errorf("spec.go.volumeClaimTemplate and spec.go.volumeSizeLimit cannot both be defined: %w", err)
+ }
+ err = validateInstrVolume(r.Spec.Java.VolumeClaimTemplate, r.Spec.Java.VolumeSizeLimit)
+ if err != nil {
+ return warnings, fmt.Errorf("spec.java.volumeClaimTemplate and spec.java.volumeSizeLimit cannot both be defined: %w", err)
+ }
+ err = validateInstrVolume(r.Spec.Nginx.VolumeClaimTemplate, r.Spec.Nginx.VolumeSizeLimit)
+ if err != nil {
+ return warnings, fmt.Errorf("spec.nginx.volumeClaimTemplate and spec.nginx.volumeSizeLimit cannot both be defined: %w", err)
+ }
+ err = validateInstrVolume(r.Spec.NodeJS.VolumeClaimTemplate, r.Spec.NodeJS.VolumeSizeLimit)
+ if err != nil {
+ return warnings, fmt.Errorf("spec.nodejs.volumeClaimTemplate and spec.nodejs.volumeSizeLimit cannot both be defined: %w", err)
+ }
+ err = validateInstrVolume(r.Spec.Python.VolumeClaimTemplate, r.Spec.Python.VolumeSizeLimit)
+ if err != nil {
+ return warnings, fmt.Errorf("spec.python.volumeClaimTemplate and spec.python.volumeSizeLimit cannot both be defined: %w", err)
+ }
+
+ warnings = append(warnings, validateExporter(r.Spec.Exporter)...)
+
return warnings, nil
}
+func validateExporter(exporter Exporter) []string {
+ var warnings []string
+ if exporter.TLS != nil {
+ tls := exporter.TLS
+ if tls.Key != "" && tls.Cert == "" || tls.Cert != "" && tls.Key == "" {
+ warnings = append(warnings, "both exporter.tls.key and exporter.tls.cert mut be set")
+ }
+
+ if !strings.HasPrefix(exporter.Endpoint, "https://") {
+ warnings = append(warnings, "exporter.tls is configured but exporter.endpoint is not enabling TLS with https://")
+ }
+ }
+ if strings.HasPrefix(exporter.Endpoint, "https://") && exporter.TLS == nil {
+ warnings = append(warnings, "exporter is using https:// but exporter.tls is unset")
+ }
+
+ return warnings
+}
+
func validateJaegerRemoteSamplerArgument(argument string) error {
parts := strings.Split(argument, ",")
@@ -270,6 +323,13 @@ func validateJaegerRemoteSamplerArgument(argument string) error {
return nil
}
+func validateInstrVolume(volumeClaimTemplate corev1.PersistentVolumeClaimTemplate, volumeSizeLimit *resource.Quantity) error {
+ if !reflect.ValueOf(volumeClaimTemplate).IsZero() && volumeSizeLimit != nil {
+ return fmt.Errorf("unable to resolve volume size")
+ }
+ return nil
+}
+
func NewInstrumentationWebhook(logger logr.Logger, scheme *runtime.Scheme, cfg config.Config) *InstrumentationWebhook {
return &InstrumentationWebhook{
logger: logger,
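
A self-contained sketch of the mutual-exclusion check introduced above: `reflect.ValueOf(...).IsZero()` treats an unset `PersistentVolumeClaimTemplate` as "not configured", so an error is produced only when a claim template and a size limit are both present.

```go
package main

import (
	"fmt"
	"reflect"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Same shape as the webhook's validateInstrVolume: reject the combination,
// because the effective volume size would be ambiguous.
func validateInstrVolume(tmpl corev1.PersistentVolumeClaimTemplate, sizeLimit *resource.Quantity) error {
	if !reflect.ValueOf(tmpl).IsZero() && sizeLimit != nil {
		return fmt.Errorf("unable to resolve volume size")
	}
	return nil
}

func main() {
	limit := resource.MustParse("200Mi")
	tmpl := corev1.PersistentVolumeClaimTemplate{
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		},
	}
	fmt.Println(validateInstrVolume(tmpl, nil))    // <nil>: template alone is fine
	fmt.Println(validateInstrVolume(tmpl, &limit)) // error: both are set
}
```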
diff --git a/apis/v1alpha1/instrumentation_webhook_test.go b/apis/v1alpha1/instrumentation_webhook_test.go
index 81049cbc0c..f1089215aa 100644
--- a/apis/v1alpha1/instrumentation_webhook_test.go
+++ b/apis/v1alpha1/instrumentation_webhook_test.go
@@ -19,11 +19,15 @@ import (
"testing"
"github.com/stretchr/testify/assert"
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/resource"
"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
"github.com/open-telemetry/opentelemetry-operator/internal/config"
)
+var defaultVolumeSize = resource.MustParse("200Mi")
+
func TestInstrumentationDefaultingWebhook(t *testing.T) {
inst := &Instrumentation{}
err := InstrumentationWebhook{
@@ -113,6 +117,111 @@ func TestInstrumentationValidatingWebhook(t *testing.T) {
},
},
},
+ {
+ name: "with volume and volumeSizeLimit",
+ err: "spec.nodejs.volumeClaimTemplate and spec.nodejs.volumeSizeLimit cannot both be defined",
+ inst: Instrumentation{
+ Spec: InstrumentationSpec{
+ NodeJS: NodeJS{
+ VolumeClaimTemplate: corev1.PersistentVolumeClaimTemplate{
+ Spec: corev1.PersistentVolumeClaimSpec{
+ AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
+ },
+ },
+ VolumeSizeLimit: &defaultVolumeSize,
+ },
+ },
+ },
+ warnings: []string{"sampler type not set"},
+ },
+ {
+ name: "exporter: tls cert set but missing key",
+ inst: Instrumentation{
+ Spec: InstrumentationSpec{
+ Sampler: Sampler{
+ Type: ParentBasedTraceIDRatio,
+ Argument: "0.99",
+ },
+ Exporter: Exporter{
+ Endpoint: "https://collector:4317",
+ TLS: &TLS{
+ Cert: "cert",
+ },
+ },
+ },
+ },
+ warnings: []string{"both exporter.tls.key and exporter.tls.cert mut be set"},
+ },
+ {
+ name: "exporter: tls key set but missing cert",
+ inst: Instrumentation{
+ Spec: InstrumentationSpec{
+ Sampler: Sampler{
+ Type: ParentBasedTraceIDRatio,
+ Argument: "0.99",
+ },
+ Exporter: Exporter{
+ Endpoint: "https://collector:4317",
+ TLS: &TLS{
+ Key: "key",
+ },
+ },
+ },
+ },
+ warnings: []string{"both exporter.tls.key and exporter.tls.cert mut be set"},
+ },
+ {
+ name: "exporter: tls set but using http://",
+ inst: Instrumentation{
+ Spec: InstrumentationSpec{
+ Sampler: Sampler{
+ Type: ParentBasedTraceIDRatio,
+ Argument: "0.99",
+ },
+ Exporter: Exporter{
+ Endpoint: "http://collector:4317",
+ TLS: &TLS{
+ Key: "key",
+ Cert: "cert",
+ },
+ },
+ },
+ },
+ warnings: []string{"exporter.tls is configured but exporter.endpoint is not enabling TLS with https://"},
+ },
+ {
+ name: "exporter: exporter using http://, but the tls is nil",
+ inst: Instrumentation{
+ Spec: InstrumentationSpec{
+ Sampler: Sampler{
+ Type: ParentBasedTraceIDRatio,
+ Argument: "0.99",
+ },
+ Exporter: Exporter{
+ Endpoint: "https://collector:4317",
+ },
+ },
+ },
+ warnings: []string{"exporter is using https:// but exporter.tls is unset"},
+ },
+ {
+ name: "exporter no warning set",
+ inst: Instrumentation{
+ Spec: InstrumentationSpec{
+ Sampler: Sampler{
+ Type: ParentBasedTraceIDRatio,
+ Argument: "0.99",
+ },
+ Exporter: Exporter{
+ Endpoint: "https://collector:4317",
+ TLS: &TLS{
+ Key: "key",
+ Cert: "cert",
+ },
+ },
+ },
+ },
+ },
}
for _, test := range tests {
diff --git a/apis/v1alpha1/targetallocator_webhook.go b/apis/v1alpha1/targetallocator_webhook.go
index bed76f29a4..1a3687dd65 100644
--- a/apis/v1alpha1/targetallocator_webhook.go
+++ b/apis/v1alpha1/targetallocator_webhook.go
@@ -26,6 +26,7 @@ import (
"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
"github.com/open-telemetry/opentelemetry-operator/internal/config"
+ "github.com/open-telemetry/opentelemetry-operator/internal/naming"
"github.com/open-telemetry/opentelemetry-operator/internal/rbac"
)
@@ -119,7 +120,11 @@ func (w TargetAllocatorWebhook) validate(ctx context.Context, ta *TargetAllocato
// if the prometheusCR is enabled, it needs a suite of permissions to function
if ta.Spec.PrometheusCR.Enabled {
- warnings, err := v1beta1.CheckTargetAllocatorPrometheusCRPolicyRules(ctx, w.reviewer, ta.Spec.ServiceAccount, ta.GetNamespace())
+ saname := ta.Spec.ServiceAccount
+ if len(ta.Spec.ServiceAccount) == 0 {
+ saname = naming.TargetAllocatorServiceAccount(ta.Name)
+ }
+ warnings, err := v1beta1.CheckTargetAllocatorPrometheusCRPolicyRules(ctx, w.reviewer, ta.GetNamespace(), saname)
if err != nil || len(warnings) > 0 {
return warnings, err
}
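
The fallback above matters because an empty `spec.serviceAccount` previously caused the RBAC pre-check to query the wrong subject. A hedged sketch of the resolution logic; the `-targetallocator` suffix matches the `system:serviceaccount:test-ns:test-ta-targetallocator` subjects asserted in the updated tests below, while the real helper lives in `internal/naming`.

```go
package main

import "fmt"

// Illustrative stand-in for naming.TargetAllocatorServiceAccount.
func targetAllocatorServiceAccount(crName string) string {
	return fmt.Sprintf("%s-targetallocator", crName)
}

// Mirrors the webhook: fall back to the default service account name
// when the CR does not specify one.
func resolveServiceAccount(specServiceAccount, crName string) string {
	if len(specServiceAccount) == 0 {
		return targetAllocatorServiceAccount(crName)
	}
	return specServiceAccount
}

func main() {
	fmt.Println(resolveServiceAccount("", "test-ta"))       // test-ta-targetallocator
	fmt.Println(resolveServiceAccount("custom", "test-ta")) // custom
}
```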
diff --git a/apis/v1alpha1/targetallocator_webhook_test.go b/apis/v1alpha1/targetallocator_webhook_test.go
index aedbb62c82..5e665368a2 100644
--- a/apis/v1alpha1/targetallocator_webhook_test.go
+++ b/apis/v1alpha1/targetallocator_webhook_test.go
@@ -224,6 +224,10 @@ func TestTargetAllocatorValidatingWebhook(t *testing.T) {
name: "prom CR admissions warning",
shouldFailSar: true, // force failure
targetallocator: TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ta",
+ Namespace: "test-ns",
+ },
Spec: TargetAllocatorSpec{
PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
Enabled: true,
@@ -231,18 +235,18 @@ func TestTargetAllocatorValidatingWebhook(t *testing.T) {
},
},
expectedWarnings: []string{
- "missing the following rules for monitoring.coreos.com/servicemonitors: [*]",
- "missing the following rules for monitoring.coreos.com/podmonitors: [*]",
- "missing the following rules for nodes/metrics: [get,list,watch]",
- "missing the following rules for services: [get,list,watch]",
- "missing the following rules for endpoints: [get,list,watch]",
- "missing the following rules for namespaces: [get,list,watch]",
- "missing the following rules for networking.k8s.io/ingresses: [get,list,watch]",
- "missing the following rules for nodes: [get,list,watch]",
- "missing the following rules for pods: [get,list,watch]",
- "missing the following rules for configmaps: [get]",
- "missing the following rules for discovery.k8s.io/endpointslices: [get,list,watch]",
- "missing the following rules for nonResourceURL: /metrics: [get]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - monitoring.coreos.com/servicemonitors: [*]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - monitoring.coreos.com/podmonitors: [*]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - nodes/metrics: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - services: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - endpoints: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - namespaces: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - networking.k8s.io/ingresses: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - nodes: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - pods: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - configmaps: [get]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - discovery.k8s.io/endpointslices: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:test-ta-targetallocator - nonResourceURL: /metrics: [get]",
},
},
{
diff --git a/apis/v1alpha1/zz_generated.deepcopy.go b/apis/v1alpha1/zz_generated.deepcopy.go
index 270c617e17..35c04992cb 100644
--- a/apis/v1alpha1/zz_generated.deepcopy.go
+++ b/apis/v1alpha1/zz_generated.deepcopy.go
@@ -31,6 +31,7 @@ import (
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApacheHttpd) DeepCopyInto(out *ApacheHttpd) {
*out = *in
+ in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate)
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
@@ -143,6 +144,7 @@ func (in *Defaults) DeepCopy() *Defaults {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DotNet) DeepCopyInto(out *DotNet) {
*out = *in
+ in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate)
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
@@ -171,6 +173,11 @@ func (in *DotNet) DeepCopy() *DotNet {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Exporter) DeepCopyInto(out *Exporter) {
*out = *in
+ if in.TLS != nil {
+ in, out := &in.TLS, &out.TLS
+ *out = new(TLS)
+ **out = **in
+ }
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Exporter.
@@ -201,6 +208,7 @@ func (in *Extensions) DeepCopy() *Extensions {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Go) DeepCopyInto(out *Go) {
*out = *in
+ in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate)
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
@@ -323,7 +331,7 @@ func (in *InstrumentationList) DeepCopyObject() runtime.Object {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *InstrumentationSpec) DeepCopyInto(out *InstrumentationSpec) {
*out = *in
- out.Exporter = in.Exporter
+ in.Exporter.DeepCopyInto(&out.Exporter)
in.Resource.DeepCopyInto(&out.Resource)
if in.Propagators != nil {
in, out := &in.Propagators, &out.Propagators
@@ -376,6 +384,7 @@ func (in *InstrumentationStatus) DeepCopy() *InstrumentationStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Java) DeepCopyInto(out *Java) {
*out = *in
+ in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate)
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
@@ -444,6 +453,7 @@ func (in *MetricsConfigSpec) DeepCopy() *MetricsConfigSpec {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Nginx) DeepCopyInto(out *Nginx) {
*out = *in
+ in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate)
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
@@ -479,6 +489,7 @@ func (in *Nginx) DeepCopy() *Nginx {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeJS) DeepCopyInto(out *NodeJS) {
*out = *in
+ in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate)
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
@@ -1195,6 +1206,7 @@ func (in *Probe) DeepCopy() *Probe {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Python) DeepCopyInto(out *Python) {
*out = *in
+ in.VolumeClaimTemplate.DeepCopyInto(&out.VolumeClaimTemplate)
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
@@ -1272,6 +1284,21 @@ func (in *ScaleSubresourceStatus) DeepCopy() *ScaleSubresourceStatus {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *TLS) DeepCopyInto(out *TLS) {
+ *out = *in
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TLS.
+func (in *TLS) DeepCopy() *TLS {
+ if in == nil {
+ return nil
+ }
+ out := new(TLS)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TargetAllocator) DeepCopyInto(out *TargetAllocator) {
*out = *in
diff --git a/apis/v1beta1/collector_webhook.go b/apis/v1beta1/collector_webhook.go
index e79754b4bd..d6ad88dcff 100644
--- a/apis/v1beta1/collector_webhook.go
+++ b/apis/v1beta1/collector_webhook.go
@@ -29,6 +29,7 @@ import (
"github.com/open-telemetry/opentelemetry-operator/internal/config"
"github.com/open-telemetry/opentelemetry-operator/internal/fips"
ta "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator/adapters"
+ "github.com/open-telemetry/opentelemetry-operator/internal/naming"
"github.com/open-telemetry/opentelemetry-operator/internal/rbac"
"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
)
@@ -121,7 +122,7 @@ func (c CollectorWebhook) ValidateCreate(ctx context.Context, obj runtime.Object
c.metrics.create(ctx, otelcol)
}
if c.bv != nil {
- newWarnings := c.bv(*otelcol)
+ newWarnings := c.bv(ctx, *otelcol)
warnings = append(warnings, newWarnings...)
}
return warnings, nil
@@ -151,7 +152,7 @@ func (c CollectorWebhook) ValidateUpdate(ctx context.Context, oldObj, newObj run
}
if c.bv != nil {
- newWarnings := c.bv(*otelcol)
+ newWarnings := c.bv(ctx, *otelcol)
warnings = append(warnings, newWarnings...)
}
return warnings, nil
@@ -188,6 +189,11 @@ func (c CollectorWebhook) Validate(ctx context.Context, r *OpenTelemetryCollecto
return warnings, fmt.Errorf("the OpenTelemetry Collector mode is set to %s, which does not support the attribute 'volumeClaimTemplates'", r.Spec.Mode)
}
+ // validate persistentVolumeClaimRetentionPolicy
+ if r.Spec.Mode != ModeStatefulSet && r.Spec.PersistentVolumeClaimRetentionPolicy != nil {
+ return warnings, fmt.Errorf("the OpenTelemetry Collector mode is set to %s, which does not support the attribute 'persistentVolumeClaimRetentionPolicy'", r.Spec.Mode)
+ }
+
// validate tolerations
if r.Spec.Mode == ModeSidecar && len(r.Spec.Tolerations) > 0 {
return warnings, fmt.Errorf("the OpenTelemetry Collector mode is set to %s, which does not support the attribute 'tolerations'", r.Spec.Mode)
@@ -336,8 +342,12 @@ func (c CollectorWebhook) validateTargetAllocatorConfig(ctx context.Context, r *
}
// if the prometheusCR is enabled, it needs a suite of permissions to function
if r.Spec.TargetAllocator.PrometheusCR.Enabled {
+ saname := r.Spec.TargetAllocator.ServiceAccount
+ if len(r.Spec.TargetAllocator.ServiceAccount) == 0 {
+ saname = naming.TargetAllocatorServiceAccount(r.Name)
+ }
warnings, err := CheckTargetAllocatorPrometheusCRPolicyRules(
- ctx, c.reviewer, r.Spec.TargetAllocator.ServiceAccount, r.GetNamespace())
+ ctx, c.reviewer, r.GetNamespace(), saname)
if err != nil || len(warnings) > 0 {
return warnings, err
}
@@ -385,13 +395,13 @@ func ValidatePorts(ports []PortsSpec) error {
func checkAutoscalerSpec(autoscaler *AutoscalerSpec) error {
if autoscaler.Behavior != nil {
if autoscaler.Behavior.ScaleDown != nil && autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds != nil &&
- *autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds < int32(1) {
- return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleDown should be one or more")
+ (*autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds < int32(0) || *autoscaler.Behavior.ScaleDown.StabilizationWindowSeconds > 3600) {
+ return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleDown.stabilizationWindowSeconds should be >=0 and <=3600")
}
if autoscaler.Behavior.ScaleUp != nil && autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds != nil &&
- *autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds < int32(1) {
- return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleUp should be one or more")
+ (*autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds < int32(0) || *autoscaler.Behavior.ScaleUp.StabilizationWindowSeconds > 3600) {
+ return fmt.Errorf("the OpenTelemetry Spec autoscale configuration is incorrect, scaleUp.stabilizationWindowSeconds should be >=0 and <=3600")
}
}
if autoscaler.TargetCPUUtilization != nil && *autoscaler.TargetCPUUtilization < int32(1) {
@@ -425,7 +435,7 @@ func checkAutoscalerSpec(autoscaler *AutoscalerSpec) error {
// BuildValidator enables running the manifest generators for the collector reconciler
// +kubebuilder:object:generate=false
-type BuildValidator func(c OpenTelemetryCollector) admission.Warnings
+type BuildValidator func(ctx context.Context, c OpenTelemetryCollector) admission.Warnings
func NewCollectorWebhook(
logger logr.Logger,
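
Since `BuildValidator` now receives a `context.Context`, manifest generation during admission can honor cancellation and deadlines. A minimal sketch of a conforming validator; the closure body and warning text are illustrative, not the operator's actual build-time checks.

```go
package main

import (
	"context"
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"

	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
)

// newBuildValidator returns a closure matching the new signature:
// func(ctx context.Context, c OpenTelemetryCollector) admission.Warnings.
func newBuildValidator() v1beta1.BuildValidator {
	return func(ctx context.Context, c v1beta1.OpenTelemetryCollector) admission.Warnings {
		// Bail out early if the admission request was cancelled.
		if err := ctx.Err(); err != nil {
			return admission.Warnings{fmt.Sprintf("validation skipped: %v", err)}
		}
		return nil
	}
}

func main() {
	bv := newBuildValidator()
	fmt.Println(bv(context.Background(), v1beta1.OpenTelemetryCollector{}))
}
```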
diff --git a/apis/v1beta1/collector_webhook_test.go b/apis/v1beta1/collector_webhook_test.go
index 0b6b915486..8604b91b3e 100644
--- a/apis/v1beta1/collector_webhook_test.go
+++ b/apis/v1beta1/collector_webhook_test.go
@@ -17,6 +17,7 @@ package v1beta1_test
import (
"context"
"fmt"
+ "math"
"os"
"testing"
@@ -82,7 +83,7 @@ func TestValidate(t *testing.T) {
},
}
- bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings {
+ bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings {
var warnings admission.Warnings
cfg := config.New(
config.WithCollectorImage("default-collector"),
@@ -168,7 +169,7 @@ func TestCollectorDefaultingWebhook(t *testing.T) {
Mode: v1beta1.ModeDeployment,
UpgradeStrategy: v1beta1.UpgradeStrategyAutomatic,
Config: func() v1beta1.Config {
- const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317"},"http":{"endpoint":"0.0.0.0:4318"}}}},"exporters":{"debug":null},"service":{"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}`
+ const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317"},"http":{"endpoint":"0.0.0.0:4318"}}}},"exporters":{"debug":null},"service":{"telemetry":{"metrics":{"address":"0.0.0.0:8888"}},"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}`
var cfg v1beta1.Config
require.NoError(t, yaml.Unmarshal([]byte(input), &cfg))
return cfg
@@ -181,7 +182,7 @@ func TestCollectorDefaultingWebhook(t *testing.T) {
otelcol: v1beta1.OpenTelemetryCollector{
Spec: v1beta1.OpenTelemetryCollectorSpec{
Config: func() v1beta1.Config {
- const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}`
+ const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"telemetry":{"metrics":{"address":"1.2.3.4:7654"}},"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}`
var cfg v1beta1.Config
require.NoError(t, yaml.Unmarshal([]byte(input), &cfg))
return cfg
@@ -200,7 +201,7 @@ func TestCollectorDefaultingWebhook(t *testing.T) {
Mode: v1beta1.ModeDeployment,
UpgradeStrategy: v1beta1.UpgradeStrategyAutomatic,
Config: func() v1beta1.Config {
- const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317","headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}`
+ const input = `{"receivers":{"otlp":{"protocols":{"grpc":{"endpoint":"0.0.0.0:4317","headers":{"example":"another"}},"http":{"endpoint":"0.0.0.0:4000"}}}},"exporters":{"debug":null},"service":{"telemetry":{"metrics":{"address":"1.2.3.4:7654"}},"pipelines":{"traces":{"receivers":["otlp"],"exporters":["debug"]}}}}`
var cfg v1beta1.Config
require.NoError(t, yaml.Unmarshal([]byte(input), &cfg))
return cfg
@@ -517,7 +518,7 @@ func TestCollectorDefaultingWebhook(t *testing.T) {
},
}
- bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings {
+ bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings {
var warnings admission.Warnings
cfg := config.New(
config.WithCollectorImage("default-collector"),
@@ -553,6 +554,9 @@ func TestCollectorDefaultingWebhook(t *testing.T) {
)
ctx := context.Background()
err := cvw.Default(ctx, &test.otelcol)
+ if test.expected.Spec.Config.Service.Telemetry == nil {
+ assert.NoError(t, test.expected.Spec.Config.Service.ApplyDefaults(), "could not apply defaults")
+ }
assert.NoError(t, err)
assert.Equal(t, test.expected, test.otelcol)
})
@@ -582,6 +586,7 @@ func TestOTELColValidatingWebhook(t *testing.T) {
one := int32(1)
three := int32(3)
five := int32(5)
+ maxInt := int32(math.MaxInt32)
cfg := v1beta1.Config{}
err := yaml.Unmarshal([]byte(cfgYaml), &cfg)
@@ -646,6 +651,10 @@ func TestOTELColValidatingWebhook(t *testing.T) {
name: "prom CR admissions warning",
shouldFailSar: true, // force failure
otelcol: v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "adm-warning",
+ Namespace: "test-ns",
+ },
Spec: v1beta1.OpenTelemetryCollectorSpec{
Mode: v1beta1.ModeStatefulSet,
OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{
@@ -688,18 +697,18 @@ func TestOTELColValidatingWebhook(t *testing.T) {
},
},
expectedWarnings: []string{
- "missing the following rules for monitoring.coreos.com/servicemonitors: [*]",
- "missing the following rules for monitoring.coreos.com/podmonitors: [*]",
- "missing the following rules for nodes/metrics: [get,list,watch]",
- "missing the following rules for services: [get,list,watch]",
- "missing the following rules for endpoints: [get,list,watch]",
- "missing the following rules for namespaces: [get,list,watch]",
- "missing the following rules for networking.k8s.io/ingresses: [get,list,watch]",
- "missing the following rules for nodes: [get,list,watch]",
- "missing the following rules for pods: [get,list,watch]",
- "missing the following rules for configmaps: [get]",
- "missing the following rules for discovery.k8s.io/endpointslices: [get,list,watch]",
- "missing the following rules for nonResourceURL: /metrics: [get]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - monitoring.coreos.com/servicemonitors: [*]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - monitoring.coreos.com/podmonitors: [*]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - nodes/metrics: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - services: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - endpoints: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - namespaces: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - networking.k8s.io/ingresses: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - nodes: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - pods: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - configmaps: [get]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - discovery.k8s.io/endpointslices: [get,list,watch]",
+ "missing the following rules for system:serviceaccount:test-ns:adm-warning-targetallocator - nonResourceURL: /metrics: [get]",
},
},
{
@@ -758,6 +767,21 @@ func TestOTELColValidatingWebhook(t *testing.T) {
},
expectedErr: "does not support the attribute 'volumeClaimTemplates'",
},
+ {
+ name: "invalid mode with persistentVolumeClaimRetentionPolicy",
+ otelcol: v1beta1.OpenTelemetryCollector{
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ Mode: v1beta1.ModeSidecar,
+ StatefulSetCommonFields: v1beta1.StatefulSetCommonFields{
+ PersistentVolumeClaimRetentionPolicy: &appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy{
+ WhenDeleted: appsv1.RetainPersistentVolumeClaimRetentionPolicyType,
+ WhenScaled: appsv1.DeletePersistentVolumeClaimRetentionPolicyType,
+ },
+ },
+ },
+ },
+ expectedErr: "does not support the attribute 'persistentVolumeClaimRetentionPolicy'",
+ },
{
name: "invalid mode with tolerations",
otelcol: v1beta1.OpenTelemetryCollector{
@@ -913,36 +937,68 @@ func TestOTELColValidatingWebhook(t *testing.T) {
expectedErr: "minReplicas should be one or more",
},
{
- name: "invalid autoscaler scale down",
+ name: "invalid autoscaler scale down stablization window - <0",
otelcol: v1beta1.OpenTelemetryCollector{
Spec: v1beta1.OpenTelemetryCollectorSpec{
Autoscaler: &v1beta1.AutoscalerSpec{
MaxReplicas: &three,
Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
ScaleDown: &autoscalingv2.HPAScalingRules{
- StabilizationWindowSeconds: &zero,
+ StabilizationWindowSeconds: &minusOne,
+ },
+ },
+ },
+ },
+ },
+ expectedErr: "scaleDown.stabilizationWindowSeconds should be >=0 and <=3600",
+ },
+ {
+ name: "invalid autoscaler scale down stablization window - >3600",
+ otelcol: v1beta1.OpenTelemetryCollector{
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ Autoscaler: &v1beta1.AutoscalerSpec{
+ MaxReplicas: &three,
+ Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
+ ScaleDown: &autoscalingv2.HPAScalingRules{
+ StabilizationWindowSeconds: &maxInt,
+ },
+ },
+ },
+ },
+ },
+ expectedErr: "scaleDown.stabilizationWindowSeconds should be >=0 and <=3600",
+ },
+ {
+ name: "invalid autoscaler scale up stablization window - <0",
+ otelcol: v1beta1.OpenTelemetryCollector{
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ Autoscaler: &v1beta1.AutoscalerSpec{
+ MaxReplicas: &three,
+ Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
+ ScaleUp: &autoscalingv2.HPAScalingRules{
+ StabilizationWindowSeconds: &minusOne,
},
},
},
},
},
- expectedErr: "scaleDown should be one or more",
+ expectedErr: "scaleUp.stabilizationWindowSeconds should be >=0 and <=3600",
},
{
- name: "invalid autoscaler scale up",
+ name: "invalid autoscaler scale up stablization window - >3600",
otelcol: v1beta1.OpenTelemetryCollector{
Spec: v1beta1.OpenTelemetryCollectorSpec{
Autoscaler: &v1beta1.AutoscalerSpec{
MaxReplicas: &three,
Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
ScaleUp: &autoscalingv2.HPAScalingRules{
- StabilizationWindowSeconds: &zero,
+ StabilizationWindowSeconds: &maxInt,
},
},
},
},
},
- expectedErr: "scaleUp should be one or more",
+ expectedErr: "scaleUp.stabilizationWindowSeconds should be >=0 and <=3600",
},
{
name: "invalid autoscaler target cpu utilization",
@@ -1309,7 +1365,7 @@ func TestOTELColValidatingWebhook(t *testing.T) {
},
}
- bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings {
+ bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings {
var warnings admission.Warnings
cfg := config.New(
config.WithCollectorImage("default-collector"),
@@ -1377,7 +1433,7 @@ func TestOTELColValidateUpdateWebhook(t *testing.T) {
},
}
- bv := func(collector v1beta1.OpenTelemetryCollector) admission.Warnings {
+ bv := func(_ context.Context, collector v1beta1.OpenTelemetryCollector) admission.Warnings {
var warnings admission.Warnings
cfg := config.New(
config.WithCollectorImage("default-collector"),
diff --git a/apis/v1beta1/common.go b/apis/v1beta1/common.go
index cf31de5118..77044771a5 100644
--- a/apis/v1beta1/common.go
+++ b/apis/v1beta1/common.go
@@ -15,6 +15,7 @@
package v1beta1
import (
+ appsv1 "k8s.io/api/apps/v1"
autoscalingv2 "k8s.io/api/autoscaling/v2"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/intstr"
@@ -243,4 +244,9 @@ type StatefulSetCommonFields struct {
// +optional
// +listType=atomic
VolumeClaimTemplates []v1.PersistentVolumeClaim `json:"volumeClaimTemplates,omitempty"`
+ // PersistentVolumeClaimRetentionPolicy describes the lifecycle of persistent volume claims
+ // created from volumeClaimTemplates.
+ // This only works with the following OpenTelemetryCollector modes: statefulset.
+ // +optional
+ PersistentVolumeClaimRetentionPolicy *appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy `json:"persistentVolumeClaimRetentionPolicy,omitempty"`
}
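
Only `statefulset` mode accepts the new field (the webhook change above rejects it elsewhere). A sketch of a valid configuration, using the `appsv1` retention constants exercised in the tests below; the specific Retain/Delete combination is illustrative.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"

	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
)

func main() {
	col := v1beta1.OpenTelemetryCollector{
		Spec: v1beta1.OpenTelemetryCollectorSpec{
			// Any mode other than statefulset is rejected by the webhook.
			Mode: v1beta1.ModeStatefulSet,
			StatefulSetCommonFields: v1beta1.StatefulSetCommonFields{
				PersistentVolumeClaimRetentionPolicy: &appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy{
					WhenDeleted: appsv1.DeletePersistentVolumeClaimRetentionPolicyType,
					WhenScaled:  appsv1.RetainPersistentVolumeClaimRetentionPolicyType,
				},
			},
		},
	}
	fmt.Println(col.Spec.PersistentVolumeClaimRetentionPolicy.WhenDeleted)
}
```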
diff --git a/apis/v1beta1/config.go b/apis/v1beta1/config.go
index 2d88c7617e..5cb9150513 100644
--- a/apis/v1beta1/config.go
+++ b/apis/v1beta1/config.go
@@ -206,7 +206,12 @@ func (c *Config) getPortsForComponentKinds(logger logr.Logger, componentKinds ..
case KindProcessor:
continue
case KindExtension:
- continue
+ retriever = extensions.ParserFor
+ if c.Extensions == nil {
+ cfg = AnyConfig{}
+ } else {
+ cfg = *c.Extensions
+ }
}
for componentName := range enabledComponents[componentKind] {
// TODO: Clean up the naming here and make it simpler to use a retriever.
@@ -226,8 +231,47 @@ func (c *Config) getPortsForComponentKinds(logger logr.Logger, componentKinds ..
return ports, nil
}
+// getEnvironmentVariablesForComponentKinds gets the environment variables for the given ComponentKind(s).
+func (c *Config) getEnvironmentVariablesForComponentKinds(logger logr.Logger, componentKinds ...ComponentKind) ([]corev1.EnvVar, error) {
+ envVars := []corev1.EnvVar{}
+ enabledComponents := c.GetEnabledComponents()
+ for _, componentKind := range componentKinds {
+ var retriever components.ParserRetriever
+ var cfg AnyConfig
+
+ switch componentKind {
+ case KindReceiver:
+ retriever = receivers.ReceiverFor
+ cfg = c.Receivers
+ case KindExporter:
+ continue
+ case KindProcessor:
+ continue
+ case KindExtension:
+ continue
+ }
+ for componentName := range enabledComponents[componentKind] {
+ parser := retriever(componentName)
+ if parsedEnvVars, err := parser.GetEnvironmentVariables(logger, cfg.Object[componentName]); err != nil {
+ return nil, err
+ } else {
+ envVars = append(envVars, parsedEnvVars...)
+ }
+ }
+ }
+
+ sort.Slice(envVars, func(i, j int) bool {
+ return envVars[i].Name < envVars[j].Name
+ })
+
+ return envVars, nil
+}
+
// applyDefaultForComponentKinds applies defaults to the endpoints for the given ComponentKind(s).
func (c *Config) applyDefaultForComponentKinds(logger logr.Logger, componentKinds ...ComponentKind) error {
+ if err := c.Service.ApplyDefaults(); err != nil {
+ return err
+ }
enabledComponents := c.GetEnabledComponents()
for _, componentKind := range componentKinds {
var retriever components.ParserRetriever
@@ -279,10 +323,22 @@ func (c *Config) GetExporterPorts(logger logr.Logger) ([]corev1.ServicePort, err
return c.getPortsForComponentKinds(logger, KindExporter)
}
-func (c *Config) GetAllPorts(logger logr.Logger) ([]corev1.ServicePort, error) {
+func (c *Config) GetExtensionPorts(logger logr.Logger) ([]corev1.ServicePort, error) {
+ return c.getPortsForComponentKinds(logger, KindExtension)
+}
+
+func (c *Config) GetReceiverAndExporterPorts(logger logr.Logger) ([]corev1.ServicePort, error) {
return c.getPortsForComponentKinds(logger, KindReceiver, KindExporter)
}
+func (c *Config) GetAllPorts(logger logr.Logger) ([]corev1.ServicePort, error) {
+ return c.getPortsForComponentKinds(logger, KindReceiver, KindExporter, KindExtension)
+}
+
+func (c *Config) GetEnvironmentVariables(logger logr.Logger) ([]corev1.EnvVar, error) {
+ return c.getEnvironmentVariablesForComponentKinds(logger, KindReceiver)
+}
+
func (c *Config) GetAllRbacRules(logger logr.Logger) ([]rbacv1.PolicyRule, error) {
return c.getRbacRulesForComponentKinds(logger, KindReceiver, KindExporter, KindProcessor)
}
@@ -371,24 +427,55 @@ type Service struct {
Pipelines map[string]*Pipeline `json:"pipelines" yaml:"pipelines"`
}
-// MetricsPort gets the port number for the metrics endpoint from the collector config if it has been set.
-func (s *Service) MetricsPort() (int32, error) {
+// MetricsEndpoint gets the host address and port number for the metrics endpoint from the collector config if it has been set.
+func (s *Service) MetricsEndpoint() (string, int32, error) {
+ defaultAddr := "0.0.0.0"
if s.GetTelemetry() == nil {
// telemetry isn't set, use the default
- return 8888, nil
+ return defaultAddr, 8888, nil
}
- _, port, netErr := net.SplitHostPort(s.GetTelemetry().Metrics.Address)
+ host, port, netErr := net.SplitHostPort(s.GetTelemetry().Metrics.Address)
if netErr != nil && strings.Contains(netErr.Error(), "missing port in address") {
- return 8888, nil
+ return defaultAddr, 8888, nil
} else if netErr != nil {
- return 0, netErr
+ return "", 0, netErr
}
i64, err := strconv.ParseInt(port, 10, 32)
if err != nil {
- return 0, err
+ return "", 0, err
}
- return int32(i64), nil
+ if host == "" {
+ host = defaultAddr
+ }
+
+ return host, int32(i64), nil
+}
+
+// ApplyDefaults inserts configuration defaults if it has not been set.
+func (s *Service) ApplyDefaults() error {
+ telemetryAddr, telemetryPort, err := s.MetricsEndpoint()
+ if err != nil {
+ return err
+ }
+ tm := &AnyConfig{
+ Object: map[string]interface{}{
+ "metrics": map[string]interface{}{
+ "address": fmt.Sprintf("%s:%d", telemetryAddr, telemetryPort),
+ },
+ },
+ }
+
+ if s.Telemetry == nil {
+ s.Telemetry = tm
+ return nil
+ }
+ // NOTE: Merge without overwrite. If a telemetry endpoint is already specified,
+ // the defaulting respects the configured value and leaves it unchanged.
+ if err := mergo.Merge(s.Telemetry, tm); err != nil {
+ return fmt.Errorf("telemetry config merge failed: %w", err)
+ }
+ return nil
}
// MetricsConfig comes from the collector.
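
A small sketch of how the new accessors behave, matching the table-driven tests below: an unset telemetry block resolves to the default `0.0.0.0:8888`, `ApplyDefaults` materializes that address into the config so serialized output stays stable, and an explicit address is parsed and preserved. Return order is (host, port, error).

```go
package main

import (
	"fmt"

	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
)

func main() {
	// Unset telemetry: the collector defaults apply.
	var svc v1beta1.Service
	host, port, _ := svc.MetricsEndpoint()
	fmt.Println(host, port) // 0.0.0.0 8888

	// Defaulting writes the resolved address back into the config.
	if err := svc.ApplyDefaults(); err != nil {
		panic(err)
	}

	// Explicit telemetry address: parsed and left untouched.
	svc.Telemetry = &v1beta1.AnyConfig{Object: map[string]interface{}{
		"metrics": map[string]interface{}{"address": "1.2.3.4:4567"},
	}}
	host, port, _ = svc.MetricsEndpoint()
	fmt.Println(host, port) // 1.2.3.4 4567
}
```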
diff --git a/apis/v1beta1/config_test.go b/apis/v1beta1/config_test.go
index 31895b3252..b9c288f692 100644
--- a/apis/v1beta1/config_test.go
+++ b/apis/v1beta1/config_test.go
@@ -220,11 +220,13 @@ func TestConfigToMetricsPort(t *testing.T) {
for _, tt := range []struct {
desc string
+ expectedAddr string
expectedPort int32
config Service
}{
{
"custom port",
+ "0.0.0.0",
9090,
Service{
Telemetry: &AnyConfig{
@@ -238,6 +240,7 @@ func TestConfigToMetricsPort(t *testing.T) {
},
{
"bad address",
+ "0.0.0.0",
8888,
Service{
Telemetry: &AnyConfig{
@@ -251,6 +254,7 @@ func TestConfigToMetricsPort(t *testing.T) {
},
{
"missing address",
+ "0.0.0.0",
8888,
Service{
Telemetry: &AnyConfig{
@@ -264,6 +268,7 @@ func TestConfigToMetricsPort(t *testing.T) {
},
{
"missing metrics",
+ "0.0.0.0",
8888,
Service{
Telemetry: &AnyConfig{},
@@ -271,14 +276,30 @@ func TestConfigToMetricsPort(t *testing.T) {
},
{
"missing telemetry",
+ "0.0.0.0",
8888,
Service{},
},
+ {
+ "configured telemetry",
+ "1.2.3.4",
+ 4567,
+ Service{
+ Telemetry: &AnyConfig{
+ Object: map[string]interface{}{
+ "metrics": map[string]interface{}{
+ "address": "1.2.3.4:4567",
+ },
+ },
+ },
+ },
+ },
} {
t.Run(tt.desc, func(t *testing.T) {
// these are acceptable failures, we return to the collector's default metric port
- port, err := tt.config.MetricsPort()
+ addr, port, err := tt.config.MetricsEndpoint()
assert.NoError(t, err)
+ assert.Equal(t, tt.expectedAddr, addr)
assert.Equal(t, tt.expectedPort, port)
})
}
@@ -402,6 +423,66 @@ func TestConfig_GetEnabledComponents(t *testing.T) {
}
}
+func TestConfig_getEnvironmentVariablesForComponentKinds(t *testing.T) {
+ tests := []struct {
+ name string
+ config *Config
+ componentKinds []ComponentKind
+ envVarsLen int
+ }{
+ {
+ name: "no env vars",
+ config: &Config{
+ Receivers: AnyConfig{
+ Object: map[string]interface{}{
+ "myreceiver": map[string]interface{}{
+ "env": "test",
+ },
+ },
+ },
+ Service: Service{
+ Pipelines: map[string]*Pipeline{
+ "test": {
+ Receivers: []string{"myreceiver"},
+ },
+ },
+ },
+ },
+ componentKinds: []ComponentKind{KindReceiver},
+ envVarsLen: 0,
+ },
+ {
+ name: "kubeletstats env vars",
+ config: &Config{
+ Receivers: AnyConfig{
+ Object: map[string]interface{}{
+ "kubeletstats": map[string]interface{}{},
+ },
+ },
+ Service: Service{
+ Pipelines: map[string]*Pipeline{
+ "test": {
+ Receivers: []string{"kubeletstats"},
+ },
+ },
+ },
+ },
+ componentKinds: []ComponentKind{KindReceiver},
+ envVarsLen: 1,
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ logger := logr.Discard()
+ envVars, err := tt.config.GetEnvironmentVariables(logger)
+
+ assert.NoError(t, err)
+ assert.Len(t, envVars, tt.envVarsLen)
+ })
+ }
+}
+
func TestConfig_GetReceiverPorts(t *testing.T) {
tests := []struct {
name string
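
For completeness, a hedged sketch of calling the new accessor directly. Per the test above, a `kubeletstats` receiver contributes one environment variable; the specific variable name is determined by the receiver parser and is not shown in this diff, so it is not asserted here.

```go
package main

import (
	"fmt"

	"github.com/go-logr/logr"

	"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
)

func main() {
	cfg := v1beta1.Config{
		Receivers: v1beta1.AnyConfig{Object: map[string]interface{}{
			"kubeletstats": map[string]interface{}{},
		}},
		Service: v1beta1.Service{
			Pipelines: map[string]*v1beta1.Pipeline{
				"metrics": {Receivers: []string{"kubeletstats"}},
			},
		},
	}
	// Only enabled (piped) receivers are consulted for env vars.
	envVars, err := cfg.GetEnvironmentVariables(logr.Discard())
	if err != nil {
		panic(err)
	}
	fmt.Println(len(envVars)) // 1, per the kubeletstats test case above
}
```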
diff --git a/apis/v1beta1/targetallocator_rbac.go b/apis/v1beta1/targetallocator_rbac.go
index 4fb48832e6..2ef66b4541 100644
--- a/apis/v1beta1/targetallocator_rbac.go
+++ b/apis/v1beta1/targetallocator_rbac.go
@@ -61,8 +61,8 @@ func CheckTargetAllocatorPrometheusCRPolicyRules(
serviceAccountName string) (warnings []string, err error) {
subjectAccessReviews, err := reviewer.CheckPolicyRules(
ctx,
- namespace,
serviceAccountName,
+ namespace,
targetAllocatorCRPolicyRules...,
)
if err != nil {
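
The swap above aligns the call with the reviewer's parameter order (service account before namespace); transposed positional strings type-check fine, which is why the bug was silent. A hypothetical illustration of a defensive pattern, not code from this change: distinct defined types make such swaps a compile-time error. The subject format matches the `system:serviceaccount:<ns>:<sa>` strings in the tests above.

```go
package main

import "fmt"

// Hypothetical: distinct string types for the two arguments that were
// transposed in the bug fixed above.
type Namespace string
type ServiceAccount string

func checkPolicyRules(sa ServiceAccount, ns Namespace) string {
	return fmt.Sprintf("system:serviceaccount:%s:%s", ns, sa)
}

func main() {
	fmt.Println(checkPolicyRules(ServiceAccount("test-ta-targetallocator"), Namespace("test-ns")))
	// checkPolicyRules(Namespace("test-ns"), ServiceAccount("x")) would not compile.
}
```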
diff --git a/apis/v1beta1/zz_generated.deepcopy.go b/apis/v1beta1/zz_generated.deepcopy.go
index eaf24ed0ba..b508f0be76 100644
--- a/apis/v1beta1/zz_generated.deepcopy.go
+++ b/apis/v1beta1/zz_generated.deepcopy.go
@@ -19,6 +19,7 @@
package v1beta1
import (
+ appsv1 "k8s.io/api/apps/v1"
"k8s.io/api/autoscaling/v2"
"k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
@@ -680,6 +681,11 @@ func (in *StatefulSetCommonFields) DeepCopyInto(out *StatefulSetCommonFields) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
+ if in.PersistentVolumeClaimRetentionPolicy != nil {
+ in, out := &in.PersistentVolumeClaimRetentionPolicy, &out.PersistentVolumeClaimRetentionPolicy
+ *out = new(appsv1.StatefulSetPersistentVolumeClaimRetentionPolicy)
+ **out = **in
+ }
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StatefulSetCommonFields.
diff --git a/autoinstrumentation/dotnet/version.txt b/autoinstrumentation/dotnet/version.txt
index 27f9cd322b..f8e233b273 100644
--- a/autoinstrumentation/dotnet/version.txt
+++ b/autoinstrumentation/dotnet/version.txt
@@ -1 +1 @@
-1.8.0
+1.9.0
diff --git a/autoinstrumentation/java/version.txt b/autoinstrumentation/java/version.txt
index 834f262953..10c2c0c3d6 100644
--- a/autoinstrumentation/java/version.txt
+++ b/autoinstrumentation/java/version.txt
@@ -1 +1 @@
-2.8.0
+2.10.0
diff --git a/autoinstrumentation/nodejs/package.json b/autoinstrumentation/nodejs/package.json
index 7e5886a89c..11fc4006ce 100644
--- a/autoinstrumentation/nodejs/package.json
+++ b/autoinstrumentation/nodejs/package.json
@@ -10,21 +10,12 @@
},
"devDependencies": {
"copyfiles": "^2.4.1",
- "rimraf": "^5.0.8",
- "typescript": "^5.5.3"
+ "rimraf": "^6.0.1",
+ "typescript": "^5.6.3"
},
"dependencies": {
- "@opentelemetry/api": "1.9.0",
- "@opentelemetry/auto-instrumentations-node": "0.48.0",
- "@opentelemetry/exporter-metrics-otlp-grpc": "0.52.1",
- "@opentelemetry/exporter-prometheus": "0.52.1",
- "@opentelemetry/exporter-trace-otlp-grpc": "0.52.1",
- "@opentelemetry/resource-detector-alibaba-cloud": "0.28.10",
- "@opentelemetry/resource-detector-aws": "1.5.2",
- "@opentelemetry/resource-detector-container": "0.3.11",
- "@opentelemetry/resource-detector-gcp": "0.29.10",
- "@opentelemetry/resources": "1.25.1",
- "@opentelemetry/sdk-metrics": "1.25.1",
- "@opentelemetry/sdk-node": "0.52.1"
+ "@opentelemetry/exporter-metrics-otlp-grpc": "0.55.0",
+ "@opentelemetry/auto-instrumentations-node": "0.53.0",
+ "@opentelemetry/exporter-prometheus": "0.55.0"
}
}
diff --git a/autoinstrumentation/nodejs/src/autoinstrumentation.ts b/autoinstrumentation/nodejs/src/autoinstrumentation.ts
index 928e6d5578..2a4aabc4a7 100644
--- a/autoinstrumentation/nodejs/src/autoinstrumentation.ts
+++ b/autoinstrumentation/nodejs/src/autoinstrumentation.ts
@@ -1,5 +1,7 @@
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
-import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';
+import { OTLPTraceExporter as OTLPProtoTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
+import { OTLPTraceExporter as OTLPHttpTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
+import { OTLPTraceExporter as OTLPGrpcTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-grpc';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';
@@ -12,6 +14,22 @@ import { diag } from '@opentelemetry/api';
import { NodeSDK } from '@opentelemetry/sdk-node';
+function getTraceExporter() {
+ let protocol = process.env.OTEL_EXPORTER_OTLP_PROTOCOL;
+ switch (protocol) {
+ case undefined:
+ case '':
+ case 'grpc':
+ return new OTLPGrpcTraceExporter();
+ case 'http/json':
+ return new OTLPHttpTraceExporter();
+ case 'http/protobuf':
+ return new OTLPProtoTraceExporter();
+ default:
+ throw Error(`Creating traces exporter based on "${protocol}" protocol (configured via environment variable OTEL_EXPORTER_OTLP_PROTOCOL) is not implemented!`);
+ }
+}
+
function getMetricReader() {
switch (process.env.OTEL_METRICS_EXPORTER) {
case undefined:
@@ -35,7 +53,7 @@ function getMetricReader() {
const sdk = new NodeSDK({
autoDetectResources: true,
instrumentations: [getNodeAutoInstrumentations()],
- traceExporter: new OTLPTraceExporter(),
+ traceExporter: getTraceExporter(),
metricReader: getMetricReader(),
resourceDetectors:
[
diff --git a/autoinstrumentation/python/Dockerfile b/autoinstrumentation/python/Dockerfile
index 9a6dfa7403..2546cf61ac 100644
--- a/autoinstrumentation/python/Dockerfile
+++ b/autoinstrumentation/python/Dockerfile
@@ -1,12 +1,12 @@
# To build one auto-instrumentation image for Python, please:
-# - Ensure the packages are installed in the `/autoinstrumentation` directory. This is required as when instrumenting the pod,
-# one init container will be created to copy all the content in `/autoinstrumentation` directory to your app's container. Then
+# - Ensure the packages are installed in the `/autoinstrumentation{,-musl}` directory. This is required as when instrumenting the pod,
+# one init container will be created to copy all the content in `/autoinstrumentation{,-musl}` directory to your app's container. Then
# update the `PYTHONPATH` environment variable accordingly. To achieve this, you can mimic the one in `autoinstrumentation/python/Dockerfile`
# by using multi-stage builds. In the first stage, install all the required packages in one custom directory with `pip install --target`.
-# Then in the second stage, copy the directory to `/autoinstrumentation`.
+# Then in the second stage, copy the directory to `/autoinstrumentation{,-musl}`.
# - Ensure you have `opentelemetry-distro` and `opentelemetry-instrumentation` or your customized alternatives installed.
# Those two packages are essential to Python auto-instrumentation.
-# - Grant the necessary access to `/autoinstrumentation` directory. `chmod -R go+r /autoinstrumentation`
+# - Grant the necessary access to the `/autoinstrumentation{,-musl}` directory. `chmod -R go+r /autoinstrumentation{,-musl}`
# - For auto-instrumentation by container injection, the Linux command cp is
# used and must be available in the image.
FROM python:3.11 AS build
@@ -17,8 +17,19 @@ ADD requirements.txt .
RUN mkdir workspace && pip install --target workspace -r requirements.txt
+FROM python:3.11-alpine AS build-musl
+
+WORKDIR /operator-build
+
+ADD requirements.txt .
+
+RUN apk add gcc python3-dev musl-dev linux-headers
+RUN mkdir workspace && pip install --target workspace -r requirements.txt
+
FROM busybox
COPY --from=build /operator-build/workspace /autoinstrumentation
+COPY --from=build-musl /operator-build/workspace /autoinstrumentation-musl
RUN chmod -R go+r /autoinstrumentation
+RUN chmod -R go+r /autoinstrumentation-musl
diff --git a/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml b/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml
index 3b1454f8d6..b7a24d1dde 100644
--- a/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml
+++ b/bundle/community/manifests/opentelemetry-operator.clusterserviceversion.yaml
@@ -99,13 +99,13 @@ metadata:
categories: Logging & Tracing,Monitoring
certified: "false"
containerImage: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
- createdAt: "2024-09-19T17:15:52Z"
+ createdAt: "2024-11-27T11:54:33Z"
description: Provides the OpenTelemetry components, including the Collector
operators.operatorframework.io/builder: operator-sdk-v1.29.0
operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
repository: github.com/open-telemetry/opentelemetry-operator
support: OpenTelemetry Community
- name: opentelemetry-operator.v0.109.0
+ name: opentelemetry-operator.v0.114.0
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -284,7 +284,9 @@ spec:
- ""
resources:
- namespaces
+ - secrets
verbs:
+ - get
- list
- watch
- apiGroups:
@@ -387,6 +389,7 @@ spec:
- opentelemetry.io
resources:
- opampbridges
+ - targetallocators
verbs:
- create
- delete
@@ -407,6 +410,7 @@ spec:
- opampbridges/status
- opentelemetrycollectors/finalizers
- opentelemetrycollectors/status
+ - targetallocators/status
verbs:
- get
- patch
@@ -479,7 +483,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- image: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.109.0
+ image: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.114.0
livenessProbe:
httpGet:
path: /healthz
@@ -510,7 +514,7 @@ spec:
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=0
- image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.13.1
name: kube-rbac-proxy
ports:
- containerPort: 8443
@@ -587,7 +591,7 @@ spec:
minKubeVersion: 1.23.0
provider:
name: OpenTelemetry Community
- version: 0.109.0
+ version: 0.114.0
webhookdefinitions:
- admissionReviewVersions:
- v1alpha1
diff --git a/bundle/community/manifests/opentelemetry.io_instrumentations.yaml b/bundle/community/manifests/opentelemetry.io_instrumentations.yaml
index 76f050bf0d..d8077d3867 100644
--- a/bundle/community/manifests/opentelemetry.io_instrumentations.yaml
+++ b/bundle/community/manifests/opentelemetry.io_instrumentations.yaml
@@ -217,6 +217,118 @@ spec:
type: object
version:
type: string
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -332,6 +444,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -409,6 +633,19 @@ spec:
properties:
endpoint:
type: string
+ tls:
+ properties:
+ ca_file:
+ type: string
+ cert_file:
+ type: string
+ configMapName:
+ type: string
+ key_file:
+ type: string
+ secretName:
+ type: string
+ type: object
type: object
go:
properties:
@@ -513,6 +750,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -635,6 +984,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -813,6 +1274,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -923,6 +1496,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -1046,6 +1731,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
diff --git a/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml b/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml
index 594e0f4aea..6ccb1c9e5f 100644
--- a/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml
+++ b/bundle/community/manifests/opentelemetry.io_opentelemetrycollectors.yaml
@@ -6963,6 +6963,13 @@ spec:
type: boolean
type: object
type: object
+ persistentVolumeClaimRetentionPolicy:
+ properties:
+ whenDeleted:
+ type: string
+ whenScaled:
+ type: string
+ type: object
podAnnotations:
additionalProperties:
type: string
diff --git a/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml b/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml
index 66b0879b4d..a57cc212d5 100644
--- a/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml
+++ b/bundle/openshift/manifests/opentelemetry-operator-controller-manager-metrics-service_v1_service.yaml
@@ -1,6 +1,8 @@
apiVersion: v1
kind: Service
metadata:
+ annotations:
+ service.beta.openshift.io/serving-cert-secret-name: opentelemetry-operator-metrics
creationTimestamp: null
labels:
app.kubernetes.io/name: opentelemetry-operator
diff --git a/bundle/openshift/manifests/opentelemetry-operator-prometheus-rules_monitoring.coreos.com_v1_prometheusrule.yaml b/bundle/openshift/manifests/opentelemetry-operator-prometheus-rules_monitoring.coreos.com_v1_prometheusrule.yaml
new file mode 100644
index 0000000000..e6b5531887
--- /dev/null
+++ b/bundle/openshift/manifests/opentelemetry-operator-prometheus-rules_monitoring.coreos.com_v1_prometheusrule.yaml
@@ -0,0 +1,24 @@
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ labels:
+ app.kubernetes.io/managed-by: operator-lifecycle-manager
+ app.kubernetes.io/name: opentelemetry-operator
+ app.kubernetes.io/part-of: opentelemetry-operator
+ name: opentelemetry-operator-prometheus-rules
+spec:
+ groups:
+ - name: opentelemetry-operator-monitoring.rules
+ rules:
+ - expr: sum by (type) (opentelemetry_collector_receivers)
+ record: type:opentelemetry_collector_receivers:sum
+ - expr: sum by (type) (opentelemetry_collector_exporters)
+ record: type:opentelemetry_collector_exporters:sum
+ - expr: sum by (type) (opentelemetry_collector_processors)
+ record: type:opentelemetry_collector_processors:sum
+ - expr: sum by (type) (opentelemetry_collector_extensions)
+ record: type:opentelemetry_collector_extensions:sum
+ - expr: sum by (type) (opentelemetry_collector_connectors)
+ record: type:opentelemetry_collector_connectors:sum
+ - expr: sum by (type) (opentelemetry_collector_info)
+ record: type:opentelemetry_collector_info:sum
diff --git a/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
new file mode 100644
index 0000000000..9895de1183
--- /dev/null
+++ b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
@@ -0,0 +1,15 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: opentelemetry-operator-prometheus
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - services
+ - endpoints
+ - pods
+ verbs:
+ - get
+ - list
+ - watch
diff --git a/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
new file mode 100644
index 0000000000..db617726d5
--- /dev/null
+++ b/bundle/openshift/manifests/opentelemetry-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
@@ -0,0 +1,12 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: opentelemetry-operator-prometheus
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: opentelemetry-operator-prometheus
+subjects:
+- kind: ServiceAccount
+ name: prometheus-k8s
+ namespace: openshift-monitoring
diff --git a/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml b/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml
index 70db688513..751ef48728 100644
--- a/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml
+++ b/bundle/openshift/manifests/opentelemetry-operator.clusterserviceversion.yaml
@@ -99,13 +99,13 @@ metadata:
categories: Logging & Tracing,Monitoring
certified: "false"
containerImage: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
- createdAt: "2024-09-19T17:16:12Z"
+ createdAt: "2024-11-27T11:54:33Z"
description: Provides the OpenTelemetry components, including the Collector
operators.operatorframework.io/builder: operator-sdk-v1.29.0
operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
repository: github.com/open-telemetry/opentelemetry-operator
support: OpenTelemetry Community
- name: opentelemetry-operator.v0.109.0
+ name: opentelemetry-operator.v0.114.0
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -284,7 +284,9 @@ spec:
- ""
resources:
- namespaces
+ - secrets
verbs:
+ - get
- list
- watch
- apiGroups:
@@ -387,6 +389,7 @@ spec:
- opentelemetry.io
resources:
- opampbridges
+ - targetallocators
verbs:
- create
- delete
@@ -407,6 +410,7 @@ spec:
- opampbridges/status
- opentelemetrycollectors/finalizers
- opentelemetrycollectors/status
+ - targetallocators/status
verbs:
- get
- patch
@@ -475,15 +479,15 @@ spec:
- --zap-time-encoding=rfc3339nano
- --enable-nginx-instrumentation=true
- --enable-go-instrumentation=true
- - --enable-multi-instrumentation=true
- --openshift-create-dashboard=true
- --feature-gates=+operator.observability.prometheus
+ - --enable-cr-metrics=true
env:
- name: SERVICE_ACCOUNT_NAME
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- image: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.109.0
+ image: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.114.0
livenessProbe:
httpGet:
path: /healthz
@@ -514,7 +518,11 @@ spec:
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=0
- image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
+ - --tls-cert-file=/var/run/tls/server/tls.crt
+ - --tls-private-key-file=/var/run/tls/server/tls.key
+ - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA256
+ - --tls-min-version=VersionTLS12
+ image: quay.io/brancz/kube-rbac-proxy:v0.13.1
name: kube-rbac-proxy
ports:
- containerPort: 8443
@@ -527,9 +535,16 @@ spec:
requests:
cpu: 5m
memory: 64Mi
+ volumeMounts:
+ - mountPath: /var/run/tls/server
+ name: opentelemetry-operator-metrics-cert
serviceAccountName: opentelemetry-operator-controller-manager
terminationGracePeriodSeconds: 10
volumes:
+ - name: opentelemetry-operator-metrics-cert
+ secret:
+ defaultMode: 420
+ secretName: opentelemetry-operator-metrics
- name: cert
secret:
defaultMode: 420
@@ -591,7 +606,7 @@ spec:
minKubeVersion: 1.23.0
provider:
name: OpenTelemetry Community
- version: 0.109.0
+ version: 0.114.0
webhookdefinitions:
- admissionReviewVersions:
- v1alpha1
diff --git a/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml b/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml
index 76f050bf0d..d8077d3867 100644
--- a/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml
+++ b/bundle/openshift/manifests/opentelemetry.io_instrumentations.yaml
@@ -217,6 +217,118 @@ spec:
type: object
version:
type: string
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -332,6 +444,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -409,6 +633,19 @@ spec:
properties:
endpoint:
type: string
+ tls:
+ properties:
+ ca_file:
+ type: string
+ cert_file:
+ type: string
+ configMapName:
+ type: string
+ key_file:
+ type: string
+ secretName:
+ type: string
+ type: object
type: object
go:
properties:
@@ -513,6 +750,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -635,6 +984,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -813,6 +1274,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -923,6 +1496,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -1046,6 +1731,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
diff --git a/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml b/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml
index 594e0f4aea..6ccb1c9e5f 100644
--- a/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml
+++ b/bundle/openshift/manifests/opentelemetry.io_opentelemetrycollectors.yaml
@@ -6963,6 +6963,13 @@ spec:
type: boolean
type: object
type: object
+ persistentVolumeClaimRetentionPolicy:
+ properties:
+ whenDeleted:
+ type: string
+ whenScaled:
+ type: string
+ type: object
podAnnotations:
additionalProperties:
type: string
diff --git a/cmd/otel-allocator/Dockerfile b/cmd/otel-allocator/Dockerfile
index 2e57628925..26ed93dbe0 100644
--- a/cmd/otel-allocator/Dockerfile
+++ b/cmd/otel-allocator/Dockerfile
@@ -1,5 +1,5 @@
# Get CA certificates from the Alpine package repo
-FROM alpine:3.20 as certificates
+FROM alpine:3.20 AS certificates
RUN apk --no-cache add ca-certificates
@@ -8,7 +8,7 @@ FROM scratch
ARG TARGETARCH
-WORKDIR /root/
+WORKDIR /
# Copy the certs
COPY --from=certificates /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
@@ -16,4 +16,6 @@ COPY --from=certificates /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-ce
# Copy binary built on the host
COPY bin/targetallocator_${TARGETARCH} ./main
+USER 65532:65532
+
ENTRYPOINT ["./main"]
diff --git a/cmd/otel-allocator/README.md b/cmd/otel-allocator/README.md
index 0b10a85614..7b4741d42b 100644
--- a/cmd/otel-allocator/README.md
+++ b/cmd/otel-allocator/README.md
@@ -211,9 +211,42 @@ rules:
### Service / Pod monitor endpoint credentials
-If your service or pod monitor endpoints require credentials or other supported form of authentication (bearer token, basic auth, OAuth2 etc.), you need to ensure that the collector has access to this information. Due to some limitations in how the endpoints configuration is handled, target allocator currently does **not** support credentials provided via secrets. It is only possible to provide credentials in a file (for more details see issue https://github.com/open-telemetry/opentelemetry-operator/issues/1669).
+If your service or pod monitor endpoints require authentication (such as bearer tokens, basic auth, or OAuth2), you must ensure that the collector has access to these credentials.
+
+The connection between the target allocator and the collector is secured with mTLS so that these secrets can be retrieved safely. cert-manager is used to manage the CA, server, and client certificates involved.
+
+Prerequisites:
+- Ensure cert-manager is installed in your Kubernetes cluster.
+- Grant the required RBAC permissions:
+
+ - The target allocator needs the appropriate RBAC permissions to get the secrets referenced in the Service / Pod monitor.
+
+ - The operator needs the appropriate RBAC permissions to manage cert-manager resources. The following ClusterRole can be used to grant the necessary permissions:
+
+ ```yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: opentelemetry-operator-controller-manager-cert-manager-role
+ rules:
+ - apiGroups:
+ - cert-manager.io
+ resources:
+ - issuers
+ - certificaterequests
+ - certificates
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - patch
+ - delete
+ ```
+
+- Enable the `operator.targetallocator.mtls` feature gate in the operator's deployment, as shown in the sketch below.
-In order to ensure your endpoints can be scraped, your collector instance needs to have the particular secret mounted as a file at the correct path.
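+
+As a minimal sketch, enabling the feature gate might look like the following in the operator Deployment's manager container (the surrounding manifest fields are illustrative assumptions, not prescribed):
+
+```yaml
+# excerpt from the operator Deployment (hypothetical surrounding fields)
+spec:
+  template:
+    spec:
+      containers:
+        - name: manager
+          args:
+            - --feature-gates=+operator.targetallocator.mtls
+```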
# Design
diff --git a/cmd/otel-allocator/allocation/allocator.go b/cmd/otel-allocator/allocation/allocator.go
index cbe5d1d31d..b0a9125ba9 100644
--- a/cmd/otel-allocator/allocation/allocator.go
+++ b/cmd/otel-allocator/allocation/allocator.go
@@ -76,6 +76,11 @@ func (a *allocator) SetFilter(filter Filter) {
a.filter = filter
}
+// SetFallbackStrategy sets the fallback strategy to use.
+func (a *allocator) SetFallbackStrategy(strategy Strategy) {
+ a.strategy.SetFallbackStrategy(strategy)
+}
+
// SetTargets accepts a list of targets that will be used to make
// load balancing decisions. This method should be called when there are
// new targets discovered or existing targets are shutdown.
diff --git a/cmd/otel-allocator/allocation/allocator_test.go b/cmd/otel-allocator/allocation/allocator_test.go
index 55f2bb6dc6..e6c2b9693a 100644
--- a/cmd/otel-allocator/allocation/allocator_test.go
+++ b/cmd/otel-allocator/allocation/allocator_test.go
@@ -17,7 +17,7 @@ package allocation
import (
"testing"
- "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/assert"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
@@ -176,11 +176,11 @@ func TestAllocationCollision(t *testing.T) {
cols := MakeNCollectors(3, 0)
allocator.SetCollectors(cols)
- firstLabels := model.LabelSet{
- "test": "test1",
+ firstLabels := labels.Labels{
+ {Name: "test", Value: "test1"},
}
- secondLabels := model.LabelSet{
- "test": "test2",
+ secondLabels := labels.Labels{
+ {Name: "test", Value: "test2"},
}
firstTarget := target.NewItem("sample-name", "0.0.0.0:8000", firstLabels, "")
secondTarget := target.NewItem("sample-name", "0.0.0.0:8000", secondLabels, "")
diff --git a/cmd/otel-allocator/allocation/consistent_hashing.go b/cmd/otel-allocator/allocation/consistent_hashing.go
index 8ec07ba857..c8a16903bc 100644
--- a/cmd/otel-allocator/allocation/consistent_hashing.go
+++ b/cmd/otel-allocator/allocation/consistent_hashing.go
@@ -16,7 +16,6 @@ package allocation
import (
"fmt"
- "strings"
"github.com/buraksezer/consistent"
"github.com/cespare/xxhash/v2"
@@ -59,7 +58,7 @@ func (s *consistentHashingStrategy) GetName() string {
}
func (s *consistentHashingStrategy) GetCollectorForTarget(collectors map[string]*Collector, item *target.Item) (*Collector, error) {
- hashKey := strings.Join(item.TargetURL, "")
+ hashKey := item.TargetURL
member := s.consistentHasher.LocateKey([]byte(hashKey))
collectorName := member.String()
collector, ok := collectors[collectorName]
@@ -84,3 +83,5 @@ func (s *consistentHashingStrategy) SetCollectors(collectors map[string]*Collect
s.consistentHasher = consistent.New(members, s.config)
}
+
+func (s *consistentHashingStrategy) SetFallbackStrategy(fallbackStrategy Strategy) {}
diff --git a/cmd/otel-allocator/allocation/least_weighted.go b/cmd/otel-allocator/allocation/least_weighted.go
index caa2febbd9..49d935715d 100644
--- a/cmd/otel-allocator/allocation/least_weighted.go
+++ b/cmd/otel-allocator/allocation/least_weighted.go
@@ -54,3 +54,5 @@ func (s *leastWeightedStrategy) GetCollectorForTarget(collectors map[string]*Col
}
func (s *leastWeightedStrategy) SetCollectors(_ map[string]*Collector) {}
+
+func (s *leastWeightedStrategy) SetFallbackStrategy(fallbackStrategy Strategy) {}
diff --git a/cmd/otel-allocator/allocation/per_node.go b/cmd/otel-allocator/allocation/per_node.go
index a5e2bfa3f8..3d9c76d90d 100644
--- a/cmd/otel-allocator/allocation/per_node.go
+++ b/cmd/otel-allocator/allocation/per_node.go
@@ -25,21 +25,31 @@ const perNodeStrategyName = "per-node"
var _ Strategy = &perNodeStrategy{}
type perNodeStrategy struct {
- collectorByNode map[string]*Collector
+ collectorByNode map[string]*Collector
+ fallbackStrategy Strategy
}
func newPerNodeStrategy() Strategy {
return &perNodeStrategy{
- collectorByNode: make(map[string]*Collector),
+ collectorByNode: make(map[string]*Collector),
+ fallbackStrategy: nil,
}
}
+func (s *perNodeStrategy) SetFallbackStrategy(fallbackStrategy Strategy) {
+ s.fallbackStrategy = fallbackStrategy
+}
+
func (s *perNodeStrategy) GetName() string {
return perNodeStrategyName
}
func (s *perNodeStrategy) GetCollectorForTarget(collectors map[string]*Collector, item *target.Item) (*Collector, error) {
targetNodeName := item.GetNodeName()
+ if targetNodeName == "" && s.fallbackStrategy != nil {
+ return s.fallbackStrategy.GetCollectorForTarget(collectors, item)
+ }
+
collector, ok := s.collectorByNode[targetNodeName]
if !ok {
return nil, fmt.Errorf("could not find collector for node %s", targetNodeName)
@@ -54,4 +64,8 @@ func (s *perNodeStrategy) SetCollectors(collectors map[string]*Collector) {
s.collectorByNode[collector.NodeName] = collector
}
}
+
+ if s.fallbackStrategy != nil {
+ s.fallbackStrategy.SetCollectors(collectors)
+ }
}
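
With this change, the per-node strategy delegates targets that lack a node label to a configured fallback strategy instead of leaving them unassigned. A minimal sketch of a target allocator configuration using the fallback (keys follow the `Config` struct in `cmd/otel-allocator/config/config.go`; the strategy names are assumptions based on the registered strategy constants):

```yaml
allocation_strategy: per-node
# fall back to consistent hashing for targets without node labels
allocation_fallback_strategy: consistent-hashing
```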
diff --git a/cmd/otel-allocator/allocation/per_node_test.go b/cmd/otel-allocator/allocation/per_node_test.go
index d853574a11..4d17e6bbb3 100644
--- a/cmd/otel-allocator/allocation/per_node_test.go
+++ b/cmd/otel-allocator/allocation/per_node_test.go
@@ -17,7 +17,7 @@ package allocation
import (
"testing"
- "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/assert"
logf "sigs.k8s.io/controller-runtime/pkg/log"
@@ -26,30 +26,40 @@ import (
var loggerPerNode = logf.Log.WithName("unit-tests")
-// Tests that two targets with the same target url and job name but different label set are both added.
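+// GetTargetsWithNodeName filters a target list down to the items that carry a node label.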
+func GetTargetsWithNodeName(targets []*target.Item) (targetsWithNodeName []*target.Item) {
+ for _, item := range targets {
+ if item.GetNodeName() != "" {
+ targetsWithNodeName = append(targetsWithNodeName, item)
+ }
+ }
+ return targetsWithNodeName
+}
+
+// Tests that of four targets, the three carrying node labels are assigned, while the
+// target lacking node labels is skipped.
func TestAllocationPerNode(t *testing.T) {
// prepare allocator with initial targets and collectors
s, _ := New("per-node", loggerPerNode)
cols := MakeNCollectors(4, 0)
s.SetCollectors(cols)
- firstLabels := model.LabelSet{
- "test": "test1",
- "__meta_kubernetes_pod_node_name": "node-0",
+ firstLabels := labels.Labels{
+ {Name: "test", Value: "test1"},
+ {Name: "__meta_kubernetes_pod_node_name", Value: "node-0"},
}
- secondLabels := model.LabelSet{
- "test": "test2",
- "__meta_kubernetes_node_name": "node-1",
+ secondLabels := labels.Labels{
+ {Name: "test", Value: "test2"},
+ {Name: "__meta_kubernetes_node_name", Value: "node-1"},
}
// no label, should be skipped
- thirdLabels := model.LabelSet{
- "test": "test3",
+ thirdLabels := labels.Labels{
+ {Name: "test", Value: "test3"},
}
// endpointslice target kind and name
- fourthLabels := model.LabelSet{
- "test": "test4",
- "__meta_kubernetes_endpointslice_address_target_kind": "Node",
- "__meta_kubernetes_endpointslice_address_target_name": "node-3",
+ fourthLabels := labels.Labels{
+ {Name: "test", Value: "test4"},
+ {Name: "__meta_kubernetes_endpointslice_address_target_kind", Value: "Node"},
+ {Name: "__meta_kubernetes_endpointslice_address_target_name", Value: "node-3"},
}
firstTarget := target.NewItem("sample-name", "0.0.0.0:8000", firstLabels, "")
@@ -93,6 +103,77 @@ func TestAllocationPerNode(t *testing.T) {
}
}
+// Tests that four targets, with one of them missing node labels, are all assigned.
+func TestAllocationPerNodeUsingFallback(t *testing.T) {
+ // prepare allocator with initial targets and collectors
+ s, _ := New("per-node", loggerPerNode, WithFallbackStrategy(consistentHashingStrategyName))
+
+ cols := MakeNCollectors(4, 0)
+ s.SetCollectors(cols)
+ firstLabels := labels.Labels{
+ {Name: "test", Value: "test1"},
+ {Name: "__meta_kubernetes_pod_node_name", Value: "node-0"},
+ }
+ secondLabels := labels.Labels{
+ {Name: "test", Value: "test2"},
+ {Name: "__meta_kubernetes_node_name", Value: "node-1"},
+ }
+ // no label, should be allocated by the fallback strategy
+ thirdLabels := labels.Labels{
+ {Name: "test", Value: "test3"},
+ }
+ // endpointslice target kind and name
+ fourthLabels := labels.Labels{
+ {Name: "test", Value: "test4"},
+ {Name: "__meta_kubernetes_endpointslice_address_target_kind", Value: "Node"},
+ {Name: "__meta_kubernetes_endpointslice_address_target_name", Value: "node-3"},
+ }
+
+ firstTarget := target.NewItem("sample-name", "0.0.0.0:8000", firstLabels, "")
+ secondTarget := target.NewItem("sample-name", "0.0.0.0:8000", secondLabels, "")
+ thirdTarget := target.NewItem("sample-name", "0.0.0.0:8000", thirdLabels, "")
+ fourthTarget := target.NewItem("sample-name", "0.0.0.0:8000", fourthLabels, "")
+
+ targetList := map[string]*target.Item{
+ firstTarget.Hash(): firstTarget,
+ secondTarget.Hash(): secondTarget,
+ thirdTarget.Hash(): thirdTarget,
+ fourthTarget.Hash(): fourthTarget,
+ }
+
+ // test that targets and collectors are added properly
+ s.SetTargets(targetList)
+
+ // verify length
+ actualItems := s.TargetItems()
+
+ // all targets should be allocated
+ expectedTargetLen := len(targetList)
+ assert.Len(t, actualItems, expectedTargetLen)
+
+ // verify allocation to nodes
+ for targetHash, item := range targetList {
+ actualItem, found := actualItems[targetHash]
+
+ assert.True(t, found, "target with hash %s not found", item.Hash())
+
+ itemsForCollector := s.GetTargetsForCollectorAndJob(actualItem.CollectorName, actualItem.JobName)
+
+ // The first, second, and fourth targets should each be assigned to the collector for their node.
+ // The third target, which has no node label, is assigned by the fallback strategy and may land on
+ // the otherwise empty collector or on one of the others, depending on collector iteration order.
+ if targetHash == thirdTarget.Hash() {
+ assert.Empty(t, item.GetNodeName())
+ assert.NotZero(t, len(itemsForCollector))
+ continue
+ }
+
+ // Only check targets that have been assigned using the per-node (not fallback) strategy here
+ assert.Len(t, GetTargetsWithNodeName(itemsForCollector), 1)
+ assert.Equal(t, actualItem, GetTargetsWithNodeName(itemsForCollector)[0])
+ }
+}
+
func TestTargetsWithNoCollectorsPerNode(t *testing.T) {
// prepare allocator with initial targets and collectors
c, _ := New("per-node", loggerPerNode)
diff --git a/cmd/otel-allocator/allocation/strategy.go b/cmd/otel-allocator/allocation/strategy.go
index 29ae7fd99a..47fafd5662 100644
--- a/cmd/otel-allocator/allocation/strategy.go
+++ b/cmd/otel-allocator/allocation/strategy.go
@@ -29,6 +29,8 @@ import (
type AllocatorProvider func(log logr.Logger, opts ...AllocationOption) Allocator
var (
+ strategies = map[string]Strategy{}
+
registry = map[string]AllocatorProvider{}
// TargetsPerCollector records how many targets have been assigned to each collector.
@@ -67,6 +69,16 @@ func WithFilter(filter Filter) AllocationOption {
}
}
+func WithFallbackStrategy(fallbackStrategy string) AllocationOption {
+ strategy, ok := strategies[fallbackStrategy]
+ if fallbackStrategy != "" && !ok {
+ panic(fmt.Errorf("unregistered strategy used as fallback: %s", fallbackStrategy))
+ }
+ return func(allocator Allocator) {
+ allocator.SetFallbackStrategy(strategy)
+ }
+}
+
func RecordTargetsKept(targets map[string]*target.Item) {
targetsRemaining.Add(float64(len(targets)))
}
@@ -101,6 +113,7 @@ type Allocator interface {
Collectors() map[string]*Collector
GetTargetsForCollectorAndJob(collector string, job string) []*target.Item
SetFilter(filter Filter)
+ SetFallbackStrategy(strategy Strategy)
}
type Strategy interface {
@@ -110,6 +123,8 @@ type Strategy interface {
// SetCollectors call. Strategies which don't need this information can just ignore it.
SetCollectors(map[string]*Collector)
GetName() string
+ // SetFallbackStrategy sets a fallback strategy for strategies whose main allocation method can leave targets unassigned.
+ SetFallbackStrategy(Strategy)
}
var _ consistent.Member = Collector{}
@@ -136,22 +151,18 @@ func NewCollector(name, node string) *Collector {
}
func init() {
- err := Register(leastWeightedStrategyName, func(log logr.Logger, opts ...AllocationOption) Allocator {
- return newAllocator(log, newleastWeightedStrategy(), opts...)
- })
- if err != nil {
- panic(err)
- }
- err = Register(consistentHashingStrategyName, func(log logr.Logger, opts ...AllocationOption) Allocator {
- return newAllocator(log, newConsistentHashingStrategy(), opts...)
- })
- if err != nil {
- panic(err)
+ strategies = map[string]Strategy{
+ leastWeightedStrategyName: newleastWeightedStrategy(),
+ consistentHashingStrategyName: newConsistentHashingStrategy(),
+ perNodeStrategyName: newPerNodeStrategy(),
}
- err = Register(perNodeStrategyName, func(log logr.Logger, opts ...AllocationOption) Allocator {
- return newAllocator(log, newPerNodeStrategy(), opts...)
- })
- if err != nil {
- panic(err)
+
+ for strategyName, strategy := range strategies {
+ err := Register(strategyName, func(log logr.Logger, opts ...AllocationOption) Allocator {
+ return newAllocator(log, strategy, opts...)
+ })
+ if err != nil {
+ panic(err)
+ }
}
}
diff --git a/cmd/otel-allocator/allocation/testutils.go b/cmd/otel-allocator/allocation/testutils.go
index 054e9e0205..3189b576c1 100644
--- a/cmd/otel-allocator/allocation/testutils.go
+++ b/cmd/otel-allocator/allocation/testutils.go
@@ -21,7 +21,7 @@ import (
"strconv"
"testing"
- "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/require"
logf "sigs.k8s.io/controller-runtime/pkg/log"
@@ -39,9 +39,9 @@ func MakeNNewTargets(n int, numCollectors int, startingIndex int) map[string]*ta
toReturn := map[string]*target.Item{}
for i := startingIndex; i < n+startingIndex; i++ {
collector := fmt.Sprintf("collector-%d", colIndex(i, numCollectors))
- label := model.LabelSet{
- "i": model.LabelValue(strconv.Itoa(i)),
- "total": model.LabelValue(strconv.Itoa(n + startingIndex)),
+ label := labels.Labels{
+ {Name: "i", Value: strconv.Itoa(i)},
+ {Name: "total", Value: strconv.Itoa(n + startingIndex)},
}
newTarget := target.NewItem(fmt.Sprintf("test-job-%d", i), fmt.Sprintf("test-url-%d", i), label, collector)
toReturn[newTarget.Hash()] = newTarget
@@ -65,10 +65,10 @@ func MakeNCollectors(n int, startingIndex int) map[string]*Collector {
func MakeNNewTargetsWithEmptyCollectors(n int, startingIndex int) map[string]*target.Item {
toReturn := map[string]*target.Item{}
for i := startingIndex; i < n+startingIndex; i++ {
- label := model.LabelSet{
- "i": model.LabelValue(strconv.Itoa(i)),
- "total": model.LabelValue(strconv.Itoa(n + startingIndex)),
- "__meta_kubernetes_pod_node_name": model.LabelValue("node-0"),
+ label := labels.Labels{
+ {Name: "i", Value: strconv.Itoa(i)},
+ {Name: "total", Value: strconv.Itoa(n + startingIndex)},
+ {Name: "__meta_kubernetes_pod_node_name", Value: "node-0"},
}
newTarget := target.NewItem(fmt.Sprintf("test-job-%d", i), fmt.Sprintf("test-url-%d", i), label, "")
toReturn[newTarget.Hash()] = newTarget
diff --git a/cmd/otel-allocator/benchmark_test.go b/cmd/otel-allocator/benchmark_test.go
new file mode 100644
index 0000000000..7b6c644347
--- /dev/null
+++ b/cmd/otel-allocator/benchmark_test.go
@@ -0,0 +1,192 @@
+// Copyright The OpenTelemetry Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "strconv"
+ "strings"
+ "testing"
+
+ gokitlog "github.com/go-kit/log"
+ "github.com/go-logr/logr"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/discovery"
+ "github.com/prometheus/prometheus/discovery/targetgroup"
+ "github.com/prometheus/prometheus/model/labels"
+ "github.com/prometheus/prometheus/model/relabel"
+ "github.com/stretchr/testify/require"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/log"
+
+ "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
+ "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/prehook"
+ "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/server"
+ "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
+)
+
+// BenchmarkProcessTargets benchmarks the whole target allocation pipeline. It starts with data the Prometheus
+// discovery manager would normally output, and pushes it all the way into the allocator. It notably does *not* check
+// the HTTP server afterward. Test data is chosen to be reasonably representative of what the Prometheus service
+// discovery outputs in the real world.
+func BenchmarkProcessTargets(b *testing.B) {
+ numTargets := 10000
+ targetsPerGroup := 5
+ groupsPerJob := 20
+ tsets := prepareBenchmarkData(numTargets, targetsPerGroup, groupsPerJob)
+ labelsBuilder := labels.NewBuilder(labels.EmptyLabels())
+
+ b.ResetTimer()
+ for _, strategy := range allocation.GetRegisteredAllocatorNames() {
+ b.Run(strategy, func(b *testing.B) {
+ targetDiscoverer, allocator := createTestDiscoverer(strategy, map[string][]*relabel.Config{})
+ for i := 0; i < b.N; i++ {
+ targetDiscoverer.ProcessTargets(labelsBuilder, tsets, allocator.SetTargets)
+ }
+ })
+ }
+}
+
+// BenchmarkProcessTargetsWithRelabelConfig is BenchmarkProcessTargets with a relabel config set. The relabel config
+// does not actually modify any records, but does force the prehook to perform any necessary conversions along the way.
+func BenchmarkProcessTargetsWithRelabelConfig(b *testing.B) {
+ numTargets := 10000
+ targetsPerGroup := 5
+ groupsPerJob := 20
+ tsets := prepareBenchmarkData(numTargets, targetsPerGroup, groupsPerJob)
+ labelsBuilder := labels.NewBuilder(labels.EmptyLabels())
+ prehookConfig := make(map[string][]*relabel.Config, len(tsets))
+ for jobName := range tsets {
+ // keep all targets in half the jobs, drop the rest
+ jobNrStr := strings.Split(jobName, "-")[1]
+ jobNr, err := strconv.Atoi(jobNrStr)
+ require.NoError(b, err)
+ var action relabel.Action
+ if jobNr%2 == 0 {
+ action = "keep"
+ } else {
+ action = "drop"
+ }
+ prehookConfig[jobName] = []*relabel.Config{
+ {
+ Action: action,
+ Regex: relabel.MustNewRegexp(".*"),
+ SourceLabels: model.LabelNames{"__address__"},
+ },
+ }
+ }
+
+ b.ResetTimer()
+ for _, strategy := range allocation.GetRegisteredAllocatorNames() {
+ b.Run(strategy, func(b *testing.B) {
+ targetDiscoverer, allocator := createTestDiscoverer(strategy, prehookConfig)
+ for i := 0; i < b.N; i++ {
+ targetDiscoverer.ProcessTargets(labelsBuilder, tsets, allocator.SetTargets)
+ }
+ })
+ }
+}
+
+func prepareBenchmarkData(numTargets, targetsPerGroup, groupsPerJob int) map[string][]*targetgroup.Group {
+ numGroups := numTargets / targetsPerGroup
+ numJobs := numGroups / groupsPerJob
+ jobNamePrefix := "test-"
+ groupLabels := model.LabelSet{
+ "__meta_kubernetes_pod_controller_name": "example",
+ "__meta_kubernetes_pod_ip": "10.244.0.251",
+ "__meta_kubernetes_pod_uid": "676ebee7-14f8-481e-a937-d2affaec4105",
+ "__meta_kubernetes_endpointslice_port_protocol": "TCP",
+ "__meta_kubernetes_endpointslice_endpoint_conditions_ready": "true",
+ "__meta_kubernetes_service_annotation_kubectl_kubernetes_io_last_applied_configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"example\"},\"name\":\"example-svc\",\"namespace\":\"example\"},\"spec\":{\"clusterIP\":\"None\",\"ports\":[{\"name\":\"http-example\",\"port\":9006,\"targetPort\":9006}],\"selector\":{\"app\":\"example\"},\"type\":\"ClusterIP\"}}\n",
+ "__meta_kubernetes_endpointslice_labelpresent_app": "true",
+ "__meta_kubernetes_endpointslice_name": "example-svc-qgwxf",
+ "__address__": "10.244.0.251:9006",
+ "__meta_kubernetes_endpointslice_endpoint_conditions_terminating": "false",
+ "__meta_kubernetes_pod_labelpresent_pod_template_hash": "true",
+ "__meta_kubernetes_endpointslice_label_kubernetes_io_service_name": "example-svc",
+ "__meta_kubernetes_endpointslice_labelpresent_service_kubernetes_io_headless": "true",
+ "__meta_kubernetes_pod_label_pod_template_hash": "6b549885f8",
+ "__meta_kubernetes_endpointslice_address_target_name": "example-6b549885f8-7tbcw",
+ "__meta_kubernetes_pod_labelpresent_app": "true",
+ "somelabel": "somevalue",
+ }
+ exampleTarget := model.LabelSet{
+ "__meta_kubernetes_endpointslice_port": "9006",
+ "__meta_kubernetes_service_label_app": "example",
+ "__meta_kubernetes_endpointslice_port_name": "http-example",
+ "__meta_kubernetes_pod_ready": "true",
+ "__meta_kubernetes_endpointslice_address_type": "IPv4",
+ "__meta_kubernetes_endpointslice_label_endpointslice_kubernetes_io_managed_by": "endpointslice-controller.k8s.io",
+ "__meta_kubernetes_endpointslice_labelpresent_endpointslice_kubernetes_io_managed_by": "true",
+ "__meta_kubernetes_endpointslice_label_app": "example",
+ "__meta_kubernetes_endpointslice_endpoint_conditions_serving": "true",
+ "__meta_kubernetes_pod_phase": "Running",
+ "__meta_kubernetes_pod_controller_kind": "ReplicaSet",
+ "__meta_kubernetes_service_annotationpresent_kubectl_kubernetes_io_last_applied_configuration": "true",
+ "__meta_kubernetes_service_labelpresent_app": "true",
+ "__meta_kubernetes_endpointslice_labelpresent_kubernetes_io_service_name": "true",
+ "__meta_kubernetes_endpointslice_annotation_endpoints_kubernetes_io_last_change_trigger_time": "2023-09-27T16:01:29Z",
+ "__meta_kubernetes_pod_name": "example-6b549885f8-7tbcw",
+ "__meta_kubernetes_service_name": "example-svc",
+ "__meta_kubernetes_namespace": "example",
+ "__meta_kubernetes_endpointslice_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time": "true",
+ "__meta_kubernetes_pod_node_name": "kind-control-plane",
+ "__meta_kubernetes_endpointslice_address_target_kind": "Pod",
+ "__meta_kubernetes_pod_host_ip": "172.18.0.2",
+ "__meta_kubernetes_endpointslice_label_service_kubernetes_io_headless": "",
+ "__meta_kubernetes_pod_label_app": "example",
+ }
+ targets := []model.LabelSet{}
+ for i := 0; i < numTargets; i++ {
+ targets = append(targets, exampleTarget.Clone())
+ }
+ groups := make([]*targetgroup.Group, numGroups)
+ for i := 0; i < numGroups; i++ {
+ groupTargets := targets[(i * targetsPerGroup):(i*targetsPerGroup + targetsPerGroup)]
+ groups[i] = &targetgroup.Group{
+ Labels: groupLabels,
+ Targets: groupTargets,
+ }
+ }
+ tsets := make(map[string][]*targetgroup.Group, numJobs)
+ for i := 0; i < numJobs; i++ {
+ jobGroups := groups[(i * groupsPerJob):(i*groupsPerJob + groupsPerJob)]
+ jobName := fmt.Sprintf("%s%d", jobNamePrefix, i)
+ tsets[jobName] = jobGroups
+ }
+ return tsets
+}
+
+func createTestDiscoverer(allocationStrategy string, prehookConfig map[string][]*relabel.Config) (*target.Discoverer, allocation.Allocator) {
+ ctx := context.Background()
+ logger := ctrl.Log.WithName(fmt.Sprintf("bench-%s", allocationStrategy))
+ ctrl.SetLogger(logr.New(log.NullLogSink{}))
+ allocatorPrehook := prehook.New("relabel-config", logger)
+ allocatorPrehook.SetConfig(prehookConfig)
+ allocator, err := allocation.New(allocationStrategy, logger, allocation.WithFilter(allocatorPrehook))
+ if err != nil {
+ setupLog.Error(err, "Unable to initialize allocation strategy")
+ os.Exit(1)
+ }
+ srv := server.NewServer(logger, allocator, "localhost:0")
+ registry := prometheus.NewRegistry()
+ sdMetrics, _ := discovery.CreateAndRegisterSDMetrics(registry)
+ discoveryManager := discovery.NewManager(ctx, gokitlog.NewNopLogger(), registry, sdMetrics)
+ targetDiscoverer := target.NewDiscoverer(logger, discoveryManager, allocatorPrehook, srv)
+ return targetDiscoverer, allocator
+}
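
For orientation, the nested structure these helpers generate is exactly what the Prometheus discovery manager emits: job name to target groups to label sets. A minimal hand-built equivalent (values borrowed from the label sets above) is sketched below.

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/discovery/targetgroup"
)

func main() {
	// jobName -> []*targetgroup.Group -> []model.LabelSet
	tsets := map[string][]*targetgroup.Group{
		"test-job-0": {
			{
				Labels:  model.LabelSet{"namespace": "example"},
				Targets: []model.LabelSet{{"__address__": "10.244.0.251:9006"}},
			},
		},
	}
	for job, groups := range tsets {
		fmt.Printf("%s: %d group(s)\n", job, len(groups))
	}
}
```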
diff --git a/cmd/otel-allocator/config/config.go b/cmd/otel-allocator/config/config.go
index 3e3fd389c7..ee55fe0a32 100644
--- a/cmd/otel-allocator/config/config.go
+++ b/cmd/otel-allocator/config/config.go
@@ -46,24 +46,29 @@ const (
)
type Config struct {
- ListenAddr string `yaml:"listen_addr,omitempty"`
- KubeConfigFilePath string `yaml:"kube_config_file_path,omitempty"`
- ClusterConfig *rest.Config `yaml:"-"`
- RootLogger logr.Logger `yaml:"-"`
- CollectorSelector *metav1.LabelSelector `yaml:"collector_selector,omitempty"`
- PromConfig *promconfig.Config `yaml:"config"`
- AllocationStrategy string `yaml:"allocation_strategy,omitempty"`
- FilterStrategy string `yaml:"filter_strategy,omitempty"`
- PrometheusCR PrometheusCRConfig `yaml:"prometheus_cr,omitempty"`
- HTTPS HTTPSServerConfig `yaml:"https,omitempty"`
+ ListenAddr string `yaml:"listen_addr,omitempty"`
+ KubeConfigFilePath string `yaml:"kube_config_file_path,omitempty"`
+ ClusterConfig *rest.Config `yaml:"-"`
+ RootLogger logr.Logger `yaml:"-"`
+ CollectorSelector *metav1.LabelSelector `yaml:"collector_selector,omitempty"`
+ PromConfig *promconfig.Config `yaml:"config"`
+ AllocationStrategy string `yaml:"allocation_strategy,omitempty"`
+ AllocationFallbackStrategy string `yaml:"allocation_fallback_strategy,omitempty"`
+ FilterStrategy string `yaml:"filter_strategy,omitempty"`
+ PrometheusCR PrometheusCRConfig `yaml:"prometheus_cr,omitempty"`
+ HTTPS HTTPSServerConfig `yaml:"https,omitempty"`
}
type PrometheusCRConfig struct {
Enabled bool `yaml:"enabled,omitempty"`
PodMonitorSelector *metav1.LabelSelector `yaml:"pod_monitor_selector,omitempty"`
+ PodMonitorNamespaceSelector *metav1.LabelSelector `yaml:"pod_monitor_namespace_selector,omitempty"`
ServiceMonitorSelector *metav1.LabelSelector `yaml:"service_monitor_selector,omitempty"`
ServiceMonitorNamespaceSelector *metav1.LabelSelector `yaml:"service_monitor_namespace_selector,omitempty"`
- PodMonitorNamespaceSelector *metav1.LabelSelector `yaml:"pod_monitor_namespace_selector,omitempty"`
+ ScrapeConfigSelector *metav1.LabelSelector `yaml:"scrape_config_selector,omitempty"`
+ ScrapeConfigNamespaceSelector *metav1.LabelSelector `yaml:"scrape_config_namespace_selector,omitempty"`
+ ProbeSelector *metav1.LabelSelector `yaml:"probe_selector,omitempty"`
+ ProbeNamespaceSelector *metav1.LabelSelector `yaml:"probe_namespace_selector,omitempty"`
ScrapeInterval model.Duration `yaml:"scrape_interval,omitempty"`
}
@@ -115,29 +120,34 @@ func LoadFromCLI(target *Config, flagSet *pflag.FlagSet) error {
target.PrometheusCR.Enabled = prometheusCREnabled
}
- target.HTTPS.Enabled, err = getHttpsEnabled(flagSet)
- if err != nil {
+ if httpsEnabled, changed, err := getHttpsEnabled(flagSet); err != nil {
return err
+ } else if changed {
+ target.HTTPS.Enabled = httpsEnabled
}
- target.HTTPS.ListenAddr, err = getHttpsListenAddr(flagSet)
- if err != nil {
+ if listenAddrHttps, changed, err := getHttpsListenAddr(flagSet); err != nil {
return err
+ } else if changed {
+ target.HTTPS.ListenAddr = listenAddrHttps
}
- target.HTTPS.CAFilePath, err = getHttpsCAFilePath(flagSet)
- if err != nil {
+ if caFilePath, changed, err := getHttpsCAFilePath(flagSet); err != nil {
return err
+ } else if changed {
+ target.HTTPS.CAFilePath = caFilePath
}
- target.HTTPS.TLSCertFilePath, err = getHttpsTLSCertFilePath(flagSet)
- if err != nil {
+ if tlsCertFilePath, changed, err := getHttpsTLSCertFilePath(flagSet); err != nil {
return err
+ } else if changed {
+ target.HTTPS.TLSCertFilePath = tlsCertFilePath
}
- target.HTTPS.TLSKeyFilePath, err = getHttpsTLSKeyFilePath(flagSet)
- if err != nil {
+ if tlsKeyFilePath, changed, err := getHttpsTLSKeyFilePath(flagSet); err != nil {
return err
+ } else if changed {
+ target.HTTPS.TLSKeyFilePath = tlsKeyFilePath
}
return nil
@@ -156,8 +166,9 @@ func unmarshal(cfg *Config, configFile string) error {
func CreateDefaultConfig() Config {
return Config{
- AllocationStrategy: DefaultAllocationStrategy,
- FilterStrategy: DefaultFilterStrategy,
+ AllocationStrategy: DefaultAllocationStrategy,
+ AllocationFallbackStrategy: "",
+ FilterStrategy: DefaultFilterStrategy,
PrometheusCR: PrometheusCRConfig{
ScrapeInterval: DefaultCRScrapeInterval,
},
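
A minimal allocator configuration exercising the new field might look like the snippet below; the strategy names are illustrative and assume the allocator's existing per-node and consistent-hashing strategies.

```yaml
allocation_strategy: per-node
allocation_fallback_strategy: consistent-hashing
filter_strategy: relabel-config
```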
diff --git a/cmd/otel-allocator/config/config_test.go b/cmd/otel-allocator/config/config_test.go
index 53ddc52a49..c1b721b773 100644
--- a/cmd/otel-allocator/config/config_test.go
+++ b/cmd/otel-allocator/config/config_test.go
@@ -64,6 +64,7 @@ func TestLoad(t *testing.T) {
},
HTTPS: HTTPSServerConfig{
Enabled: true,
+ ListenAddr: ":8443",
CAFilePath: "/path/to/ca.pem",
TLSCertFilePath: "/path/to/cert.pem",
TLSKeyFilePath: "/path/to/key.pem",
diff --git a/cmd/otel-allocator/config/flags.go b/cmd/otel-allocator/config/flags.go
index 5b3a3705db..0a47c27636 100644
--- a/cmd/otel-allocator/config/flags.go
+++ b/cmd/otel-allocator/config/flags.go
@@ -78,22 +78,47 @@ func getPrometheusCREnabled(flagSet *pflag.FlagSet) (value bool, changed bool, e
return
}
-func getHttpsListenAddr(flagSet *pflag.FlagSet) (string, error) {
- return flagSet.GetString(listenAddrHttpsFlagName)
+func getHttpsListenAddr(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+ if changed = flagSet.Changed(listenAddrHttpsFlagName); !changed {
+ value, err = ":8443", nil
+ return
+ }
+ value, err = flagSet.GetString(listenAddrHttpsFlagName)
+ return
}
-func getHttpsEnabled(flagSet *pflag.FlagSet) (bool, error) {
- return flagSet.GetBool(httpsEnabledFlagName)
+func getHttpsEnabled(flagSet *pflag.FlagSet) (value bool, changed bool, err error) {
+ if changed = flagSet.Changed(httpsEnabledFlagName); !changed {
+ value, err = false, nil
+ return
+ }
+ value, err = flagSet.GetBool(httpsEnabledFlagName)
+ return
}
-func getHttpsCAFilePath(flagSet *pflag.FlagSet) (string, error) {
- return flagSet.GetString(httpsCAFilePathFlagName)
+func getHttpsCAFilePath(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+ if changed = flagSet.Changed(httpsCAFilePathFlagName); !changed {
+ value, err = "", nil
+ return
+ }
+ value, err = flagSet.GetString(httpsCAFilePathFlagName)
+ return
}
-func getHttpsTLSCertFilePath(flagSet *pflag.FlagSet) (string, error) {
- return flagSet.GetString(httpsTLSCertFilePathFlagName)
+func getHttpsTLSCertFilePath(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+ if changed = flagSet.Changed(httpsTLSCertFilePathFlagName); !changed {
+ value, err = "", nil
+ return
+ }
+ value, err = flagSet.GetString(httpsTLSCertFilePathFlagName)
+ return
}
-func getHttpsTLSKeyFilePath(flagSet *pflag.FlagSet) (string, error) {
- return flagSet.GetString(httpsTLSKeyFilePathFlagName)
+func getHttpsTLSKeyFilePath(flagSet *pflag.FlagSet) (value string, changed bool, err error) {
+ if changed = flagSet.Changed(httpsTLSKeyFilePathFlagName); !changed {
+ value, err = "", nil
+ return
+ }
+ value, err = flagSet.GetString(httpsTLSKeyFilePathFlagName)
+ return
}
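
All five getters now share the same shape. A generic sketch of the idea (a hypothetical helper, not code from this patch; assumes `github.com/spf13/pflag`):

```go
// getStringFlag reports the flag's value plus whether the user actually set
// it, so callers only override file-loaded config on an explicit flag.
func getStringFlag(fs *pflag.FlagSet, name, def string) (value string, changed bool, err error) {
	if changed = fs.Changed(name); !changed {
		return def, false, nil
	}
	value, err = fs.GetString(name)
	return value, changed, err
}
```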
diff --git a/cmd/otel-allocator/config/flags_test.go b/cmd/otel-allocator/config/flags_test.go
index 2c33d65017..b2725c170e 100644
--- a/cmd/otel-allocator/config/flags_test.go
+++ b/cmd/otel-allocator/config/flags_test.go
@@ -77,13 +77,19 @@ func TestFlagGetters(t *testing.T) {
name: "HttpsServer",
flagArgs: []string{"--" + httpsEnabledFlagName, "true"},
expectedValue: true,
- getterFunc: func(fs *pflag.FlagSet) (interface{}, error) { return getHttpsEnabled(fs) },
+ getterFunc: func(fs *pflag.FlagSet) (interface{}, error) {
+ value, _, err := getHttpsEnabled(fs)
+ return value, err
+ },
},
{
name: "HttpsServerKey",
flagArgs: []string{"--" + httpsTLSKeyFilePathFlagName, "/path/to/tls.key"},
expectedValue: "/path/to/tls.key",
- getterFunc: func(fs *pflag.FlagSet) (interface{}, error) { return getHttpsTLSKeyFilePath(fs) },
+ getterFunc: func(fs *pflag.FlagSet) (interface{}, error) {
+ value, _, err := getHttpsTLSKeyFilePath(fs)
+ return value, err
+ },
},
}
diff --git a/cmd/otel-allocator/config/testdata/config_test.yaml b/cmd/otel-allocator/config/testdata/config_test.yaml
index bcb220adf8..47a3226517 100644
--- a/cmd/otel-allocator/config/testdata/config_test.yaml
+++ b/cmd/otel-allocator/config/testdata/config_test.yaml
@@ -7,6 +7,7 @@ prometheus_cr:
scrape_interval: 60s
https:
enabled: true
+ listen_addr: :8443
ca_file_path: /path/to/ca.pem
tls_cert_file_path: /path/to/cert.pem
tls_key_file_path: /path/to/key.pem
diff --git a/cmd/otel-allocator/main.go b/cmd/otel-allocator/main.go
index f9531d6740..be2418902e 100644
--- a/cmd/otel-allocator/main.go
+++ b/cmd/otel-allocator/main.go
@@ -81,7 +81,13 @@ func main() {
log := ctrl.Log.WithName("allocator")
allocatorPrehook = prehook.New(cfg.FilterStrategy, log)
- allocator, err = allocation.New(cfg.AllocationStrategy, log, allocation.WithFilter(allocatorPrehook))
+
+ var allocationOptions []allocation.AllocationOption
+ allocationOptions = append(allocationOptions, allocation.WithFilter(allocatorPrehook))
+ if cfg.AllocationFallbackStrategy != "" {
+ allocationOptions = append(allocationOptions, allocation.WithFallbackStrategy(cfg.AllocationFallbackStrategy))
+ }
+ allocator, err = allocation.New(cfg.AllocationStrategy, log, allocationOptions...)
if err != nil {
setupLog.Error(err, "Unable to initialize allocation strategy")
os.Exit(1)
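
WithFallbackStrategy follows the same functional-options pattern as WithFilter. Only the option and SetFallbackStrategy names appear in this patch; the body below is an assumption sketching how such an option could be wired, with newStrategy standing in for whatever constructor the allocation package uses.

```go
type AllocationOption func(Allocator)

// WithFallbackStrategy resolves a named strategy and hands it to the
// allocator as the fallback (the strategy lookup here is hypothetical).
func WithFallbackStrategy(name string) AllocationOption {
	return func(a Allocator) {
		if s, err := newStrategy(name); err == nil {
			a.SetFallbackStrategy(s)
		}
	}
}
```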
diff --git a/cmd/otel-allocator/prehook/relabel.go b/cmd/otel-allocator/prehook/relabel.go
index 3595cb888e..6c96affa39 100644
--- a/cmd/otel-allocator/prehook/relabel.go
+++ b/cmd/otel-allocator/prehook/relabel.go
@@ -16,8 +16,6 @@ package prehook
import (
"github.com/go-logr/logr"
- "github.com/prometheus/common/model"
- "github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/model/relabel"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
@@ -35,18 +33,6 @@ func NewRelabelConfigTargetFilter(log logr.Logger) Hook {
}
}
-// helper function converts from model.LabelSet to []labels.Label.
-func convertLabelToPromLabelSet(lbls model.LabelSet) []labels.Label {
- newLabels := make([]labels.Label, len(lbls))
- index := 0
- for k, v := range lbls {
- newLabels[index].Name = string(k)
- newLabels[index].Value = string(v)
- index++
- }
- return newLabels
-}
-
func (tf *RelabelConfigTargetFilter) Apply(targets map[string]*target.Item) map[string]*target.Item {
numTargets := len(targets)
@@ -57,20 +43,15 @@ func (tf *RelabelConfigTargetFilter) Apply(targets map[string]*target.Item) map[
// Note: jobNameKey != tItem.JobName (jobNameKey is hashed)
for jobNameKey, tItem := range targets {
- keepTarget := true
- lset := convertLabelToPromLabelSet(tItem.Labels)
+ var keepTarget bool
+ lset := tItem.Labels
for _, cfg := range tf.relabelCfg[tItem.JobName] {
- if newLset, keep := relabel.Process(lset, cfg); !keep {
- keepTarget = false
+ lset, keepTarget = relabel.Process(lset, cfg)
+ if !keepTarget {
+ delete(targets, jobNameKey)
break // inner loop
- } else {
- lset = newLset
}
}
-
- if !keepTarget {
- delete(targets, jobNameKey)
- }
}
tf.log.V(2).Info("Filtering complete", "seen", numTargets, "kept", len(targets))
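
The rewrite leans on relabel.Process operating on labels.Labels directly: it returns the rewritten label set plus a keep flag, and a false flag now deletes the target on the spot. A self-contained sketch of that semantics:

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/model/relabel"
)

func main() {
	// Drop any target whose "collector" label matches collector-0.
	cfg := &relabel.Config{
		Action:       relabel.Drop,
		SourceLabels: model.LabelNames{"collector"},
		Separator:    ";",
		Regex:        relabel.MustNewRegexp("collector-0"),
	}
	lset := labels.FromStrings("collector", "collector-0")
	if _, keep := relabel.Process(lset, cfg); !keep {
		fmt.Println("target dropped") // mirrors delete(targets, jobNameKey) above
	}
}
```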
diff --git a/cmd/otel-allocator/prehook/relabel_test.go b/cmd/otel-allocator/prehook/relabel_test.go
index d30f645eba..9aa27764ca 100644
--- a/cmd/otel-allocator/prehook/relabel_test.go
+++ b/cmd/otel-allocator/prehook/relabel_test.go
@@ -22,6 +22,7 @@ import (
"testing"
"github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/model/relabel"
"github.com/stretchr/testify/assert"
logf "sigs.k8s.io/controller-runtime/pkg/log"
@@ -184,10 +185,10 @@ func makeNNewTargets(rCfgs []relabelConfigObj, n int, numCollectors int, startin
relabelConfig := make(map[string][]*relabel.Config)
for i := startingIndex; i < n+startingIndex; i++ {
collector := fmt.Sprintf("collector-%d", colIndex(i, numCollectors))
- label := model.LabelSet{
- "collector": model.LabelValue(collector),
- "i": model.LabelValue(strconv.Itoa(i)),
- "total": model.LabelValue(strconv.Itoa(n + startingIndex)),
+ label := labels.Labels{
+ {Name: "collector", Value: collector},
+ {Name: "i", Value: strconv.Itoa(i)},
+ {Name: "total", Value: strconv.Itoa(n + startingIndex)},
}
jobName := fmt.Sprintf("test-job-%d", i)
newTarget := target.NewItem(jobName, "test-url", label, collector)
diff --git a/cmd/otel-allocator/server/bench_test.go b/cmd/otel-allocator/server/bench_test.go
index 8fcea90b0e..d441fd8e2c 100644
--- a/cmd/otel-allocator/server/bench_test.go
+++ b/cmd/otel-allocator/server/bench_test.go
@@ -24,6 +24,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/prometheus/common/model"
promconfig "github.com/prometheus/prometheus/config"
+ "github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/assert"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
@@ -198,7 +199,7 @@ func BenchmarkTargetItemsJSONHandler(b *testing.B) {
},
}
for _, tc := range tests {
- data := makeNTargetItems(*random, tc.numTargets, tc.numLabels)
+ data := makeNTargetJSON(*random, tc.numTargets, tc.numLabels)
b.Run(fmt.Sprintf("%d_targets_%d_labels", tc.numTargets, tc.numLabels), func(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
@@ -242,29 +243,39 @@ func makeNCollectorJSON(random rand.Rand, numCollectors, numItems int) map[strin
for i := 0; i < numCollectors; i++ {
items[randSeq(random, 20)] = collectorJSON{
Link: randSeq(random, 120),
- Jobs: makeNTargetItems(random, numItems, 50),
+ Jobs: makeNTargetJSON(random, numItems, 50),
}
}
return items
}
func makeNTargetItems(random rand.Rand, numItems, numLabels int) []*target.Item {
+ builder := labels.NewBuilder(labels.EmptyLabels())
items := make([]*target.Item, 0, numItems)
for i := 0; i < numItems; i++ {
items = append(items, target.NewItem(
randSeq(random, 80),
randSeq(random, 150),
- makeNNewLabels(random, numLabels),
+ makeNNewLabels(builder, random, numLabels),
randSeq(random, 30),
))
}
return items
}
-func makeNNewLabels(random rand.Rand, n int) model.LabelSet {
- labels := make(map[model.LabelName]model.LabelValue, n)
+func makeNTargetJSON(random rand.Rand, numItems, numLabels int) []*targetJSON {
+ items := makeNTargetItems(random, numItems, numLabels)
+ targets := make([]*targetJSON, numItems)
+ for i := 0; i < numItems; i++ {
+ targets[i] = targetJsonFromTargetItem(items[i])
+ }
+ return targets
+}
+
+func makeNNewLabels(builder *labels.Builder, random rand.Rand, n int) labels.Labels {
+ builder.Reset(labels.EmptyLabels())
for i := 0; i < n; i++ {
- labels[model.LabelName(randSeq(random, 20))] = model.LabelValue(randSeq(random, 20))
+ builder.Set(randSeq(random, 20), randSeq(random, 20))
}
- return labels
+ return builder.Labels()
}
diff --git a/cmd/otel-allocator/server/mocks_test.go b/cmd/otel-allocator/server/mocks_test.go
index e44b178fa8..8620d70367 100644
--- a/cmd/otel-allocator/server/mocks_test.go
+++ b/cmd/otel-allocator/server/mocks_test.go
@@ -32,6 +32,7 @@ func (m *mockAllocator) SetTargets(_ map[string]*target.Item)
func (m *mockAllocator) Collectors() map[string]*allocation.Collector { return nil }
func (m *mockAllocator) GetTargetsForCollectorAndJob(_ string, _ string) []*target.Item { return nil }
func (m *mockAllocator) SetFilter(_ allocation.Filter) {}
+func (m *mockAllocator) SetFallbackStrategy(_ allocation.Strategy) {}
func (m *mockAllocator) TargetItems() map[string]*target.Item {
return m.targetItems
diff --git a/cmd/otel-allocator/server/server.go b/cmd/otel-allocator/server/server.go
index 33e845103f..2e9df9a8b0 100644
--- a/cmd/otel-allocator/server/server.go
+++ b/cmd/otel-allocator/server/server.go
@@ -35,6 +35,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promhttp"
promcommconfig "github.com/prometheus/common/config"
promconfig "github.com/prometheus/prometheus/config"
+ "github.com/prometheus/prometheus/model/labels"
"gopkg.in/yaml.v2"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
@@ -57,8 +58,17 @@ var (
)
type collectorJSON struct {
- Link string `json:"_link"`
- Jobs []*target.Item `json:"targets"`
+ Link string `json:"_link"`
+ Jobs []*targetJSON `json:"targets"`
+}
+
+type linkJSON struct {
+ Link string `json:"_link"`
+}
+
+type targetJSON struct {
+ TargetURL []string `json:"targets"`
+ Labels labels.Labels `json:"labels"`
}
type Server struct {
@@ -263,9 +273,9 @@ func (s *Server) ReadinessProbeHandler(c *gin.Context) {
}
func (s *Server) JobHandler(c *gin.Context) {
- displayData := make(map[string]target.LinkJSON)
+ displayData := make(map[string]linkJSON)
for _, v := range s.allocator.TargetItems() {
- displayData[v.JobName] = target.LinkJSON{Link: v.Link.Link}
+ displayData[v.JobName] = linkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(v.JobName))}
}
s.jsonHandler(c.Writer, displayData)
}
@@ -294,16 +304,16 @@ func (s *Server) TargetsHandler(c *gin.Context) {
if len(q) == 0 {
displayData := GetAllTargetsByJob(s.allocator, jobId)
s.jsonHandler(c.Writer, displayData)
-
} else {
- tgs := s.allocator.GetTargetsForCollectorAndJob(q[0], jobId)
+ targets := GetAllTargetsByCollectorAndJob(s.allocator, q[0], jobId)
// Displays empty list if nothing matches
- if len(tgs) == 0 {
+ if len(targets) == 0 {
s.jsonHandler(c.Writer, []interface{}{})
return
}
- s.jsonHandler(c.Writer, tgs)
+ s.jsonHandler(c.Writer, targets)
}
+
}
func (s *Server) errorHandler(w http.ResponseWriter, err error) {
@@ -323,12 +333,25 @@ func (s *Server) jsonHandler(w http.ResponseWriter, data interface{}) {
func GetAllTargetsByJob(allocator allocation.Allocator, job string) map[string]collectorJSON {
displayData := make(map[string]collectorJSON)
for _, col := range allocator.Collectors() {
- items := allocator.GetTargetsForCollectorAndJob(col.Name, job)
- displayData[col.Name] = collectorJSON{Link: fmt.Sprintf("/jobs/%s/targets?collector_id=%s", url.QueryEscape(job), col.Name), Jobs: items}
+ targets := GetAllTargetsByCollectorAndJob(allocator, col.Name, job)
+ displayData[col.Name] = collectorJSON{
+ Link: fmt.Sprintf("/jobs/%s/targets?collector_id=%s", url.QueryEscape(job), col.Name),
+ Jobs: targets,
+ }
}
return displayData
}
+// GetAllTargetsByCollectorAndJob returns all the targets for a given collector and job.
+func GetAllTargetsByCollectorAndJob(allocator allocation.Allocator, collectorName string, jobName string) []*targetJSON {
+ items := allocator.GetTargetsForCollectorAndJob(collectorName, jobName)
+ targets := make([]*targetJSON, len(items))
+ for i, item := range items {
+ targets[i] = targetJsonFromTargetItem(item)
+ }
+ return targets
+}
+
// registerPprof registers the pprof handlers and either serves the requested
// specific profile or falls back to index handler.
func registerPprof(g *gin.RouterGroup) {
@@ -348,3 +371,10 @@ func registerPprof(g *gin.RouterGroup) {
}
})
}
+
+func targetJsonFromTargetItem(item *target.Item) *targetJSON {
+ return &targetJSON{
+ TargetURL: []string{item.TargetURL},
+ Labels: item.Labels,
+ }
+}
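
The net effect of targetJSON is that each serialized target now matches the Prometheus HTTP SD response shape. A small sketch (output shown is approximate):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

type targetJSON struct {
	TargetURL []string      `json:"targets"`
	Labels    labels.Labels `json:"labels"`
}

func main() {
	b, _ := json.Marshal(targetJSON{
		TargetURL: []string{"10.244.0.251:9006"},
		Labels:    labels.FromStrings("test_label", "test-value"),
	})
	fmt.Println(string(b))
	// {"targets":["10.244.0.251:9006"],"labels":{"test_label":"test-value"}}
}
```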
diff --git a/cmd/otel-allocator/server/server_test.go b/cmd/otel-allocator/server/server_test.go
index 88b8ad9368..4bc403251c 100644
--- a/cmd/otel-allocator/server/server_test.go
+++ b/cmd/otel-allocator/server/server_test.go
@@ -28,6 +28,7 @@ import (
"github.com/prometheus/common/config"
"github.com/prometheus/common/model"
promconfig "github.com/prometheus/prometheus/config"
+ "github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/model/relabel"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -41,11 +42,11 @@ import (
var (
logger = logf.Log.WithName("server-unit-tests")
- baseLabelSet = model.LabelSet{
- "test_label": "test-value",
+ baseLabelSet = labels.Labels{
+ {Name: "test_label", Value: "test-value"},
}
- testJobLabelSetTwo = model.LabelSet{
- "test_label": "test-value2",
+ testJobLabelSetTwo = labels.Labels{
+ {Name: "test_label", Value: "test-value2"},
}
baseTargetItem = target.NewItem("test-job", "test-url", baseLabelSet, "test-collector")
secondTargetItem = target.NewItem("test-job", "test-url", baseLabelSet, "test-collector")
@@ -74,7 +75,7 @@ func TestServer_TargetsHandler(t *testing.T) {
allocator allocation.Allocator
}
type want struct {
- items []*target.Item
+ items []*targetJSON
errString string
}
tests := []struct {
@@ -91,7 +92,7 @@ func TestServer_TargetsHandler(t *testing.T) {
allocator: leastWeighted,
},
want: want{
- items: []*target.Item{},
+ items: []*targetJSON{},
},
},
{
@@ -105,11 +106,11 @@ func TestServer_TargetsHandler(t *testing.T) {
allocator: leastWeighted,
},
want: want{
- items: []*target.Item{
+ items: []*targetJSON{
{
TargetURL: []string{"test-url"},
- Labels: map[model.LabelName]model.LabelValue{
- "test_label": "test-value",
+ Labels: labels.Labels{
+ {Name: "test_label", Value: "test-value"},
},
},
},
@@ -127,11 +128,11 @@ func TestServer_TargetsHandler(t *testing.T) {
allocator: leastWeighted,
},
want: want{
- items: []*target.Item{
+ items: []*targetJSON{
{
TargetURL: []string{"test-url"},
- Labels: map[model.LabelName]model.LabelValue{
- "test_label": "test-value",
+ Labels: labels.Labels{
+ {Name: "test_label", Value: "test-value"},
},
},
},
@@ -149,17 +150,17 @@ func TestServer_TargetsHandler(t *testing.T) {
allocator: leastWeighted,
},
want: want{
- items: []*target.Item{
+ items: []*targetJSON{
{
TargetURL: []string{"test-url"},
- Labels: map[model.LabelName]model.LabelValue{
- "test_label": "test-value",
+ Labels: labels.Labels{
+ {Name: "test_label", Value: "test-value"},
},
},
{
TargetURL: []string{"test-url2"},
- Labels: map[model.LabelName]model.LabelValue{
- "test_label": "test-value2",
+ Labels: labels.Labels{
+ {Name: "test_label", Value: "test-value2"},
},
},
},
@@ -186,7 +187,7 @@ func TestServer_TargetsHandler(t *testing.T) {
assert.EqualError(t, err, tt.want.errString)
return
}
- var itemResponse []*target.Item
+ var itemResponse []*targetJSON
err = json.Unmarshal(bodyBytes, &itemResponse)
assert.NoError(t, err)
assert.ElementsMatch(t, tt.want.items, itemResponse)
@@ -555,40 +556,40 @@ func TestServer_JobHandler(t *testing.T) {
description string
targetItems map[string]*target.Item
expectedCode int
- expectedJobs map[string]target.LinkJSON
+ expectedJobs map[string]linkJSON
}{
{
description: "nil jobs",
targetItems: nil,
expectedCode: http.StatusOK,
- expectedJobs: make(map[string]target.LinkJSON),
+ expectedJobs: make(map[string]linkJSON),
},
{
description: "empty jobs",
targetItems: map[string]*target.Item{},
expectedCode: http.StatusOK,
- expectedJobs: make(map[string]target.LinkJSON),
+ expectedJobs: make(map[string]linkJSON),
},
{
description: "one job",
targetItems: map[string]*target.Item{
- "targetitem": target.NewItem("job1", "", model.LabelSet{}, ""),
+ "targetitem": target.NewItem("job1", "", labels.Labels{}, ""),
},
expectedCode: http.StatusOK,
- expectedJobs: map[string]target.LinkJSON{
+ expectedJobs: map[string]linkJSON{
"job1": newLink("job1"),
},
},
{
description: "multiple jobs",
targetItems: map[string]*target.Item{
- "a": target.NewItem("job1", "", model.LabelSet{}, ""),
- "b": target.NewItem("job2", "", model.LabelSet{}, ""),
- "c": target.NewItem("job3", "", model.LabelSet{}, ""),
- "d": target.NewItem("job3", "", model.LabelSet{}, ""),
- "e": target.NewItem("job3", "", model.LabelSet{}, "")},
+ "a": target.NewItem("job1", "", labels.Labels{}, ""),
+ "b": target.NewItem("job2", "", labels.Labels{}, ""),
+ "c": target.NewItem("job3", "", labels.Labels{}, ""),
+ "d": target.NewItem("job3", "", labels.Labels{}, ""),
+ "e": target.NewItem("job3", "", labels.Labels{}, "")},
expectedCode: http.StatusOK,
- expectedJobs: map[string]target.LinkJSON{
+ expectedJobs: map[string]linkJSON{
"job1": newLink("job1"),
"job2": newLink("job2"),
"job3": newLink("job3"),
@@ -609,7 +610,7 @@ func TestServer_JobHandler(t *testing.T) {
assert.Equal(t, tc.expectedCode, result.StatusCode)
bodyBytes, err := io.ReadAll(result.Body)
require.NoError(t, err)
- jobs := map[string]target.LinkJSON{}
+ jobs := map[string]linkJSON{}
err = json.Unmarshal(bodyBytes, &jobs)
require.NoError(t, err)
assert.Equal(t, tc.expectedJobs, jobs)
@@ -737,6 +738,6 @@ func TestServer_ScrapeConfigRespose(t *testing.T) {
}
}
-func newLink(jobName string) target.LinkJSON {
- return target.LinkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(jobName))}
+func newLink(jobName string) linkJSON {
+ return linkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(jobName))}
}
diff --git a/cmd/otel-allocator/target/discovery.go b/cmd/otel-allocator/target/discovery.go
index d7dcb4e127..eb7498e5ad 100644
--- a/cmd/otel-allocator/target/discovery.go
+++ b/cmd/otel-allocator/target/discovery.go
@@ -24,6 +24,8 @@ import (
"github.com/prometheus/common/model"
promconfig "github.com/prometheus/prometheus/config"
"github.com/prometheus/prometheus/discovery"
+ "github.com/prometheus/prometheus/discovery/targetgroup"
+ "github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/model/relabel"
"gopkg.in/yaml.v3"
@@ -104,28 +106,42 @@ func (m *Discoverer) ApplyConfig(source allocatorWatcher.EventSource, scrapeConf
}
func (m *Discoverer) Watch(fn func(targets map[string]*Item)) error {
+ labelsBuilder := labels.NewBuilder(labels.EmptyLabels())
for {
select {
case <-m.close:
m.log.Info("Service Discovery watch event stopped: discovery manager closed")
return nil
case tsets := <-m.manager.SyncCh():
- targets := map[string]*Item{}
-
- for jobName, tgs := range tsets {
- var count float64 = 0
- for _, tg := range tgs {
- for _, t := range tg.Targets {
- count++
- item := NewItem(jobName, string(t[model.AddressLabel]), t.Merge(tg.Labels), "")
- targets[item.Hash()] = item
- }
+ m.ProcessTargets(labelsBuilder, tsets, fn)
+ }
+ }
+}
+
+func (m *Discoverer) ProcessTargets(builder *labels.Builder, tsets map[string][]*targetgroup.Group, fn func(targets map[string]*Item)) {
+ targets := map[string]*Item{}
+
+ for jobName, tgs := range tsets {
+ var count float64 = 0
+ for _, tg := range tgs {
+ builder.Reset(labels.EmptyLabels())
+ for ln, lv := range tg.Labels {
+ builder.Set(string(ln), string(lv))
+ }
+ groupLabels := builder.Labels()
+ for _, t := range tg.Targets {
+ count++
+ builder.Reset(groupLabels)
+ for ln, lv := range t {
+ builder.Set(string(ln), string(lv))
}
- targetsDiscovered.WithLabelValues(jobName).Set(count)
+ item := NewItem(jobName, string(t[model.AddressLabel]), builder.Labels(), "")
+ targets[item.Hash()] = item
}
- fn(targets)
}
+ targetsDiscovered.WithLabelValues(jobName).Set(count)
}
+ fn(targets)
}
func (m *Discoverer) Close() {
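
ProcessTargets reuses one labels.Builder for the whole sync: group labels are built once, and each target resets the builder back to them before layering its own labels, avoiding a map merge per target. In isolation the pattern looks like:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	builder := labels.NewBuilder(labels.EmptyLabels())

	// Build the group-level labels once per target group...
	builder.Set("namespace", "example")
	groupLabels := builder.Labels()

	// ...then rewind to them for every target in the group.
	builder.Reset(groupLabels)
	builder.Set("__address__", "10.244.0.251:9006")
	fmt.Println(builder.Labels()) // {__address__="10.244.0.251:9006", namespace="example"}
}
```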
diff --git a/cmd/otel-allocator/target/discovery_test.go b/cmd/otel-allocator/target/discovery_test.go
index f773b295c0..7eb2883ee9 100644
--- a/cmd/otel-allocator/target/discovery_test.go
+++ b/cmd/otel-allocator/target/discovery_test.go
@@ -87,7 +87,7 @@ func TestDiscovery(t *testing.T) {
err := manager.Watch(func(targets map[string]*Item) {
var result []string
for _, t := range targets {
- result = append(result, t.TargetURL[0])
+ result = append(result, t.TargetURL)
}
results <- result
})
diff --git a/cmd/otel-allocator/target/target.go b/cmd/otel-allocator/target/target.go
index 3341560329..5a157bc11d 100644
--- a/cmd/otel-allocator/target/target.go
+++ b/cmd/otel-allocator/target/target.go
@@ -15,36 +15,30 @@
package target
import (
- "fmt"
- "net/url"
+ "strconv"
- "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
)
// nodeLabels are labels that are used to identify the node on which the given
// target is residing. To learn more about these labels, please refer to:
// https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config
var (
- nodeLabels = []model.LabelName{
+ nodeLabels = []string{
"__meta_kubernetes_pod_node_name",
"__meta_kubernetes_node_name",
"__meta_kubernetes_endpoint_node_name",
}
- endpointSliceTargetKindLabel model.LabelName = "__meta_kubernetes_endpointslice_address_target_kind"
- endpointSliceTargetNameLabel model.LabelName = "__meta_kubernetes_endpointslice_address_target_name"
+ endpointSliceTargetKindLabel = "__meta_kubernetes_endpointslice_address_target_kind"
+ endpointSliceTargetNameLabel = "__meta_kubernetes_endpointslice_address_target_name"
+ relevantLabelNames = append(nodeLabels, endpointSliceTargetKindLabel, endpointSliceTargetNameLabel)
)
-// LinkJSON This package contains common structs and methods that relate to scrape targets.
-type LinkJSON struct {
- Link string `json:"_link"`
-}
-
type Item struct {
- JobName string `json:"-"`
- Link LinkJSON `json:"-"`
- TargetURL []string `json:"targets"`
- Labels model.LabelSet `json:"labels"`
- CollectorName string `json:"-"`
+ JobName string
+ TargetURL string
+ Labels labels.Labels
+ CollectorName string
hash string
}
@@ -53,30 +47,30 @@ func (t *Item) Hash() string {
}
func (t *Item) GetNodeName() string {
+ relevantLabels := t.Labels.MatchLabels(true, relevantLabelNames...)
for _, label := range nodeLabels {
- if val, ok := t.Labels[label]; ok {
- return string(val)
+ if val := relevantLabels.Get(label); val != "" {
+ return val
}
}
- if val := t.Labels[endpointSliceTargetKindLabel]; val != "Node" {
+ if val := relevantLabels.Get(endpointSliceTargetKindLabel); val != "Node" {
return ""
}
- return string(t.Labels[endpointSliceTargetNameLabel])
+ return relevantLabels.Get(endpointSliceTargetNameLabel)
}
// NewItem Creates a new target item.
// INVARIANTS:
// * Item fields must not be modified after creation.
// * Item should only be made via its constructor, never directly.
-func NewItem(jobName string, targetURL string, label model.LabelSet, collectorName string) *Item {
+func NewItem(jobName string, targetURL string, labels labels.Labels, collectorName string) *Item {
return &Item{
JobName: jobName,
- Link: LinkJSON{Link: fmt.Sprintf("/jobs/%s/targets", url.QueryEscape(jobName))},
- hash: jobName + targetURL + label.Fingerprint().String(),
- TargetURL: []string{targetURL},
- Labels: label,
+ hash: jobName + targetURL + strconv.FormatUint(labels.Hash(), 10),
+ TargetURL: targetURL,
+ Labels: labels,
CollectorName: collectorName,
}
}
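
Item now stores a single TargetURL string and hashes labels with labels.Hash() instead of a LabelSet fingerprint. A usage sketch (imports assumed: fmt, the Prometheus labels package, and this target package):

```go
item := target.NewItem(
	"test-job",
	"10.244.0.251:9006",
	labels.FromStrings("__meta_kubernetes_pod_node_name", "kind-control-plane"),
	"",
)
fmt.Println(item.TargetURL)     // a plain string now, no one-element slice
fmt.Println(item.GetNodeName()) // "kind-control-plane"
fmt.Println(item.Hash())        // jobName + URL + decimal labels.Hash()
```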
diff --git a/cmd/otel-allocator/watcher/promOperator.go b/cmd/otel-allocator/watcher/promOperator.go
index ae2ddcb68e..517f065ff3 100644
--- a/cmd/otel-allocator/watcher/promOperator.go
+++ b/cmd/otel-allocator/watcher/promOperator.go
@@ -22,7 +22,7 @@ import (
"time"
"github.com/blang/semver/v4"
- "github.com/go-kit/log"
+ gokitlog "github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/go-logr/logr"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
@@ -53,6 +53,9 @@ const (
)
func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocatorconfig.Config) (*PrometheusCRWatcher, error) {
+ // TODO: Remove this after go 1.23 upgrade
+ promLogger := level.NewFilter(gokitlog.NewLogfmtLogger(os.Stderr), level.AllowWarn())
+ slogger := slog.New(logr.ToSlogHandler(logger))
var resourceSelector *prometheus.ResourceSelector
mClient, err := monitoringclient.NewForConfig(cfg.ClusterConfig)
if err != nil {
@@ -79,18 +82,20 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat
Spec: monitoringv1.PrometheusSpec{
CommonPrometheusFields: monitoringv1.CommonPrometheusFields{
ScrapeInterval: monitoringv1.Duration(cfg.PrometheusCR.ScrapeInterval.String()),
- ServiceMonitorSelector: cfg.PrometheusCR.ServiceMonitorSelector,
PodMonitorSelector: cfg.PrometheusCR.PodMonitorSelector,
- ServiceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector,
PodMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector,
+ ServiceMonitorSelector: cfg.PrometheusCR.ServiceMonitorSelector,
+ ServiceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector,
+ ScrapeConfigSelector: cfg.PrometheusCR.ScrapeConfigSelector,
+ ScrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector,
+ ProbeSelector: cfg.PrometheusCR.ProbeSelector,
+ ProbeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector,
ServiceDiscoveryRole: &serviceDiscoveryRole,
},
},
}
- promOperatorLogger := level.NewFilter(log.NewLogfmtLogger(os.Stderr), level.AllowWarn())
- promOperatorSlogLogger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelWarn}))
- generator, err := prometheus.NewConfigGenerator(promOperatorLogger, prom, true)
+ generator, err := prometheus.NewConfigGenerator(promLogger, prom, true)
if err != nil {
return nil, err
@@ -108,7 +113,7 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat
logger.Error(err, "Retrying namespace informer creation in promOperator CRD watcher")
return true
}, func() error {
- nsMonInf, err = getNamespaceInformer(ctx, map[string]struct{}{v1.NamespaceAll: {}}, promOperatorLogger, clientset, operatorMetrics)
+ nsMonInf, err = getNamespaceInformer(ctx, map[string]struct{}{v1.NamespaceAll: {}}, promLogger, clientset, operatorMetrics)
return err
})
if getNamespaceInformerErr != nil {
@@ -116,13 +121,13 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat
return nil, getNamespaceInformerErr
}
- resourceSelector, err = prometheus.NewResourceSelector(promOperatorSlogLogger, prom, store, nsMonInf, operatorMetrics, eventRecorder)
+ resourceSelector, err = prometheus.NewResourceSelector(slogger, prom, store, nsMonInf, operatorMetrics, eventRecorder)
if err != nil {
logger.Error(err, "Failed to create resource selector in promOperator CRD watcher")
}
return &PrometheusCRWatcher{
- logger: logger,
+ logger: slogger,
kubeMonitoringClient: mClient,
k8sClient: clientset,
informers: monitoringInformers,
@@ -133,13 +138,15 @@ func NewPrometheusCRWatcher(ctx context.Context, logger logr.Logger, cfg allocat
kubeConfigPath: cfg.KubeConfigFilePath,
podMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector,
serviceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector,
+ scrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector,
+ probeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector,
resourceSelector: resourceSelector,
store: store,
}, nil
}
type PrometheusCRWatcher struct {
- logger logr.Logger
+ logger *slog.Logger
kubeMonitoringClient monitoringclient.Interface
k8sClient kubernetes.Interface
informers map[string]*informers.ForResource
@@ -150,12 +157,13 @@ type PrometheusCRWatcher struct {
kubeConfigPath string
podMonitorNamespaceSelector *metav1.LabelSelector
serviceMonitorNamespaceSelector *metav1.LabelSelector
+ scrapeConfigNamespaceSelector *metav1.LabelSelector
+ probeNamespaceSelector *metav1.LabelSelector
resourceSelector *prometheus.ResourceSelector
store *assets.StoreBuilder
}
-func getNamespaceInformer(ctx context.Context, allowList map[string]struct{}, promOperatorLogger log.Logger, clientset kubernetes.Interface, operatorMetrics *operator.Metrics) (cache.SharedIndexInformer, error) {
-
+func getNamespaceInformer(ctx context.Context, allowList map[string]struct{}, promOperatorLogger gokitlog.Logger, clientset kubernetes.Interface, operatorMetrics *operator.Metrics) (cache.SharedIndexInformer, error) {
kubernetesVersion, err := clientset.Discovery().ServerVersion()
if err != nil {
return nil, err
@@ -196,9 +204,21 @@ func getInformers(factory informers.FactoriesForNamespaces) (map[string]*informe
return nil, err
}
+ probeInformers, err := informers.NewInformersForResource(factory, monitoringv1.SchemeGroupVersion.WithResource(monitoringv1.ProbeName))
+ if err != nil {
+ return nil, err
+ }
+
+ scrapeConfigInformers, err := informers.NewInformersForResource(factory, promv1alpha1.SchemeGroupVersion.WithResource(promv1alpha1.ScrapeConfigName))
+ if err != nil {
+ return nil, err
+ }
+
return map[string]*informers.ForResource{
monitoringv1.ServiceMonitorName: serviceMonitorInformers,
monitoringv1.PodMonitorName: podMonitorInformers,
+ monitoringv1.ProbeName: probeInformers,
+ promv1alpha1.ScrapeConfigName: scrapeConfigInformers,
}, nil
}
@@ -210,7 +230,7 @@ func (w *PrometheusCRWatcher) Watch(upstreamEvents chan Event, upstreamErrors ch
if w.nsInformer != nil {
go w.nsInformer.Run(w.stopChannel)
- if ok := cache.WaitForNamedCacheSync("namespace", w.stopChannel, w.nsInformer.HasSynced); !ok {
+ if ok := w.WaitForNamedCacheSync("namespace", w.nsInformer.HasSynced); !ok {
success = false
}
@@ -228,10 +248,12 @@ func (w *PrometheusCRWatcher) Watch(upstreamEvents chan Event, upstreamErrors ch
for name, selector := range map[string]*metav1.LabelSelector{
"PodMonitorNamespaceSelector": w.podMonitorNamespaceSelector,
"ServiceMonitorNamespaceSelector": w.serviceMonitorNamespaceSelector,
+ "ProbeNamespaceSelector": w.probeNamespaceSelector,
+ "ScrapeConfigNamespaceSelector": w.scrapeConfigNamespaceSelector,
} {
sync, err := k8sutil.LabelSelectionHasChanged(old.Labels, cur.Labels, selector)
if err != nil {
- w.logger.Error(err, "Failed to check label selection between namespaces while handling namespace updates", "selector", name)
+ w.logger.Error("Failed to check label selection between namespaces while handling namespace updates", "selector", name, "error", err)
return
}
@@ -252,8 +274,9 @@ func (w *PrometheusCRWatcher) Watch(upstreamEvents chan Event, upstreamErrors ch
for name, resource := range w.informers {
resource.Start(w.stopChannel)
- if ok := cache.WaitForNamedCacheSync(name, w.stopChannel, resource.HasSynced); !ok {
- success = false
+ if ok := w.WaitForNamedCacheSync(name, resource.HasSynced); !ok {
+ w.logger.Info("skipping informer", "informer", name)
+ continue
}
// only send an event notification if there isn't one already
@@ -342,6 +365,16 @@ func (w *PrometheusCRWatcher) LoadConfig(ctx context.Context) (*promconfig.Confi
return nil, err
}
+ probeInstances, err := w.resourceSelector.SelectProbes(ctx, w.informers[monitoringv1.ProbeName].ListAllByNamespace)
+ if err != nil {
+ return nil, err
+ }
+
+ scrapeConfigInstances, err := w.resourceSelector.SelectScrapeConfigs(ctx, w.informers[promv1alpha1.ScrapeConfigName].ListAllByNamespace)
+ if err != nil {
+ return nil, err
+ }
+
generatedConfig, err := w.configGenerator.GenerateServerConfiguration(
"30s",
"",
@@ -352,8 +385,8 @@ func (w *PrometheusCRWatcher) LoadConfig(ctx context.Context) (*promconfig.Confi
nil,
serviceMonitorInstances,
podMonitorInstances,
- map[string]*monitoringv1.Probe{},
- map[string]*promv1alpha1.ScrapeConfig{},
+ probeInstances,
+ scrapeConfigInstances,
w.store,
nil,
nil,
@@ -384,3 +417,41 @@ func (w *PrometheusCRWatcher) LoadConfig(ctx context.Context) (*promconfig.Confi
return promCfg, nil
}
}
+
+// WaitForNamedCacheSync adds a timeout to the informer's wait for its cache to be ready.
+// If the informer fails to sync within 15 seconds, the wait is cancelled and the method
+// returns false; a successful sync returns true. The wait is also cancelled if the target
+// allocator's stopChannel is closed before it returns.
+//
+// This method is inspired by the upstream prometheus-operator implementation, with a shorter
+// timeout and support for the PrometheusCRWatcher's stopChannel.
+// https://github.com/prometheus-operator/prometheus-operator/blob/293c16c854ce69d1da9fdc8f0705de2d67bfdbfa/pkg/operator/operator.go#L433
+func (w *PrometheusCRWatcher) WaitForNamedCacheSync(controllerName string, inf cache.InformerSynced) bool {
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*15)
+ t := time.NewTicker(time.Second * 5)
+ defer t.Stop()
+
+ go func() {
+ for {
+ select {
+ case <-t.C:
+ w.logger.Debug("cache sync not yet completed")
+ case <-ctx.Done():
+ return
+ case <-w.stopChannel:
+ w.logger.Warn("stop received, shutting down cache syncing")
+ cancel()
+ return
+ }
+ }
+ }()
+
+ ok := cache.WaitForNamedCacheSync(controllerName, ctx.Done(), inf)
+ if !ok {
+ w.logger.Error("failed to sync cache")
+ } else {
+ w.logger.Debug("successfully synced cache")
+ }
+
+ return ok
+}
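
With the new fields in place, a watcher that also discovers Probe and ScrapeConfig CRs can be configured directly on PrometheusCRConfig; a sketch with illustrative label values (assumes the allocatorconfig and metav1 imports used elsewhere in this patch):

```go
cfg := allocatorconfig.Config{
	PrometheusCR: allocatorconfig.PrometheusCRConfig{
		Enabled: true,
		ProbeSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "example"},
		},
		ScrapeConfigSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "example"},
		},
	},
}
```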
diff --git a/cmd/otel-allocator/watcher/promOperator_test.go b/cmd/otel-allocator/watcher/promOperator_test.go
index 7bd3f0f443..3cc959046e 100644
--- a/cmd/otel-allocator/watcher/promOperator_test.go
+++ b/cmd/otel-allocator/watcher/promOperator_test.go
@@ -24,6 +24,7 @@ import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
+ promv1alpha1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1alpha1"
"github.com/prometheus-operator/prometheus-operator/pkg/assets"
fakemonitoringclient "github.com/prometheus-operator/prometheus-operator/pkg/client/versioned/fake"
"github.com/prometheus-operator/prometheus-operator/pkg/informers"
@@ -35,6 +36,7 @@ import (
promconfig "github.com/prometheus/prometheus/config"
"github.com/prometheus/prometheus/discovery"
kubeDiscovery "github.com/prometheus/prometheus/discovery/kubernetes"
+ "github.com/prometheus/prometheus/discovery/targetgroup"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
@@ -59,6 +61,8 @@ func TestLoadConfig(t *testing.T) {
name string
serviceMonitors []*monitoringv1.ServiceMonitor
podMonitors []*monitoringv1.PodMonitor
+ scrapeConfigs []*promv1alpha1.ScrapeConfig
+ probes []*monitoringv1.Probe
want *promconfig.Config
wantErr bool
cfg allocatorconfig.Config
@@ -662,6 +666,136 @@ func TestLoadConfig(t *testing.T) {
},
},
},
+ {
+ name: "scrape configs selector test",
+ scrapeConfigs: []*promv1alpha1.ScrapeConfig{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "scrapeconfig-test-1",
+ Namespace: "test",
+ Labels: map[string]string{
+ "testpod": "testpod",
+ },
+ },
+ Spec: promv1alpha1.ScrapeConfigSpec{
+ JobName: func() *string {
+ j := "scrapeConfig/test/scrapeconfig-test-1"
+ return &j
+ }(),
+ StaticConfigs: []promv1alpha1.StaticConfig{
+ {
+ Targets: []promv1alpha1.Target{"127.0.0.1:8888"},
+ Labels: nil,
+ },
+ },
+ },
+ },
+ },
+ cfg: allocatorconfig.Config{
+ PrometheusCR: allocatorconfig.PrometheusCRConfig{
+ ScrapeConfigSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "testpod": "testpod",
+ },
+ },
+ },
+ },
+ want: &promconfig.Config{
+ ScrapeConfigs: []*promconfig.ScrapeConfig{
+ {
+ JobName: "scrapeConfig/test/scrapeconfig-test-1",
+ ScrapeInterval: model.Duration(30 * time.Second),
+ ScrapeProtocols: defaultScrapeProtocols,
+ ScrapeTimeout: model.Duration(10 * time.Second),
+ HonorTimestamps: true,
+ HonorLabels: false,
+ Scheme: "http",
+ MetricsPath: "/metrics",
+ ServiceDiscoveryConfigs: []discovery.Config{
+ discovery.StaticConfig{
+ &targetgroup.Group{
+ Targets: []model.LabelSet{
+ map[model.LabelName]model.LabelValue{
+ "__address__": "127.0.0.1:8888",
+ },
+ },
+ Labels: map[model.LabelName]model.LabelValue{},
+ Source: "0",
+ },
+ },
+ },
+ HTTPClientConfig: config.DefaultHTTPClientConfig,
+ EnableCompression: true,
+ },
+ },
+ },
+ },
+ {
+ name: "probe selector test",
+ probes: []*monitoringv1.Probe{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "probe-test-1",
+ Namespace: "test",
+ Labels: map[string]string{
+ "testpod": "testpod",
+ },
+ },
+ Spec: monitoringv1.ProbeSpec{
+ JobName: "probe/test/probe-1/0",
+ ProberSpec: monitoringv1.ProberSpec{
+ URL: "localhost:50671",
+ Path: "/metrics",
+ },
+ Targets: monitoringv1.ProbeTargets{
+ StaticConfig: &monitoringv1.ProbeTargetStaticConfig{
+ Targets: []string{"prometheus.io"},
+ },
+ },
+ },
+ },
+ },
+ cfg: allocatorconfig.Config{
+ PrometheusCR: allocatorconfig.PrometheusCRConfig{
+ ProbeSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "testpod": "testpod",
+ },
+ },
+ },
+ },
+ want: &promconfig.Config{
+ ScrapeConfigs: []*promconfig.ScrapeConfig{
+ {
+ JobName: "probe/test/probe-test-1",
+ ScrapeInterval: model.Duration(30 * time.Second),
+ ScrapeProtocols: defaultScrapeProtocols,
+ ScrapeTimeout: model.Duration(10 * time.Second),
+ HonorTimestamps: true,
+ HonorLabels: false,
+ Scheme: "http",
+ MetricsPath: "/metrics",
+ ServiceDiscoveryConfigs: []discovery.Config{
+ discovery.StaticConfig{
+ &targetgroup.Group{
+ Targets: []model.LabelSet{
+ map[model.LabelName]model.LabelValue{
+ "__address__": "prometheus.io",
+ },
+ },
+ Labels: map[model.LabelName]model.LabelValue{
+ "namespace": "test",
+ },
+ Source: "0",
+ },
+ },
+ },
+ HTTPClientConfig: config.DefaultHTTPClientConfig,
+ EnableCompression: true,
+ },
+ },
+ },
+ },
{
name: "service monitor namespace selector test",
serviceMonitors: []*monitoringv1.ServiceMonitor{
@@ -805,7 +939,7 @@ func TestLoadConfig(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
- w, _ := getTestPrometheusCRWatcher(t, tt.serviceMonitors, tt.podMonitors, tt.cfg)
+ w, _ := getTestPrometheusCRWatcher(t, tt.serviceMonitors, tt.podMonitors, tt.probes, tt.scrapeConfigs, tt.cfg)
// Start namespace informers in order to populate cache.
go w.nsInformer.Run(w.stopChannel)
@@ -910,7 +1044,7 @@ func TestNamespaceLabelUpdate(t *testing.T) {
ScrapeConfigs: []*promconfig.ScrapeConfig{},
}
- w, source := getTestPrometheusCRWatcher(t, nil, podMonitors, cfg)
+ w, source := getTestPrometheusCRWatcher(t, nil, podMonitors, nil, nil, cfg)
events := make(chan Event, 1)
eventInterval := 5 * time.Millisecond
@@ -946,7 +1080,7 @@ func TestNamespaceLabelUpdate(t *testing.T) {
select {
case <-events:
- case <-time.After(time.Second):
+ case <-time.After(5 * time.Second):
}
got, err = w.LoadConfig(context.Background())
@@ -973,10 +1107,10 @@ func TestRateLimit(t *testing.T) {
},
}
events := make(chan Event, 1)
- eventInterval := 5 * time.Millisecond
+ eventInterval := 500 * time.Millisecond
cfg := allocatorconfig.Config{}
- w, _ := getTestPrometheusCRWatcher(t, nil, nil, cfg)
+ w, _ := getTestPrometheusCRWatcher(t, nil, nil, nil, nil, cfg)
defer w.Close()
w.eventInterval = eventInterval
@@ -1006,10 +1140,10 @@ func TestRateLimit(t *testing.T) {
default:
return false
}
- }, eventInterval*2, time.Millisecond)
+ }, time.Second*5, eventInterval/10)
// it's difficult to measure the rate precisely
- // what we do, is send two updates, and then assert that the elapsed time is between eventInterval and 3*eventInterval
+ // what we do is send two updates, and then assert that the elapsed time is at least eventInterval
startTime := time.Now()
_, err = w.kubeMonitoringClient.MonitoringV1().ServiceMonitors("test").Update(context.Background(), serviceMonitor, metav1.UpdateOptions{})
require.NoError(t, err)
@@ -1020,7 +1154,7 @@ func TestRateLimit(t *testing.T) {
default:
return false
}
- }, eventInterval*2, time.Millisecond)
+ }, time.Second*5, eventInterval/10)
_, err = w.kubeMonitoringClient.MonitoringV1().ServiceMonitors("test").Update(context.Background(), serviceMonitor, metav1.UpdateOptions{})
require.NoError(t, err)
require.Eventually(t, func() bool {
@@ -1030,16 +1164,14 @@ func TestRateLimit(t *testing.T) {
default:
return false
}
- }, eventInterval*2, time.Millisecond)
+ }, time.Second*5, eventInterval/10)
elapsedTime := time.Since(startTime)
assert.Less(t, eventInterval, elapsedTime)
- assert.GreaterOrEqual(t, eventInterval*3, elapsedTime)
-
}
// getTestPrometheusCRWatcher creates a test instance of PrometheusCRWatcher with fake clients
// and test secrets.
-func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.ServiceMonitor, podMonitors []*monitoringv1.PodMonitor, cfg allocatorconfig.Config) (*PrometheusCRWatcher, *fcache.FakeControllerSource) {
+func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.ServiceMonitor, podMonitors []*monitoringv1.PodMonitor, probes []*monitoringv1.Probe, scrapeConfigs []*promv1alpha1.ScrapeConfig, cfg allocatorconfig.Config) (*PrometheusCRWatcher, *fcache.FakeControllerSource) {
mClient := fakemonitoringclient.NewSimpleClientset()
for _, sm := range svcMonitors {
if sm != nil {
@@ -1057,6 +1189,23 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic
}
}
}
+ for _, prb := range probes {
+ if prb != nil {
+ _, err := mClient.MonitoringV1().Probes(prb.Namespace).Create(context.Background(), prb, metav1.CreateOptions{})
+ if err != nil {
+ t.Fatal(err)
+ }
+ }
+ }
+
+ for _, scc := range scrapeConfigs {
+ if scc != nil {
+ _, err := mClient.MonitoringV1alpha1().ScrapeConfigs(scc.Namespace).Create(context.Background(), scc, metav1.CreateOptions{})
+ if err != nil {
+ t.Fatal(err)
+ }
+ }
+ }
k8sClient := fake.NewSimpleClientset()
_, err := k8sClient.CoreV1().Secrets("test").Create(context.Background(), &v1.Secret{
@@ -1096,6 +1245,10 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic
PodMonitorSelector: cfg.PrometheusCR.PodMonitorSelector,
ServiceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector,
PodMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector,
+ ProbeSelector: cfg.PrometheusCR.ProbeSelector,
+ ProbeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector,
+ ScrapeConfigSelector: cfg.PrometheusCR.ScrapeConfigSelector,
+ ScrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector,
ServiceDiscoveryRole: &serviceDiscoveryRole,
},
},
@@ -1130,6 +1283,7 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic
require.NoError(t, err)
return &PrometheusCRWatcher{
+ logger: slog.Default(),
kubeMonitoringClient: mClient,
k8sClient: k8sClient,
informers: informers,
@@ -1138,6 +1292,8 @@ func getTestPrometheusCRWatcher(t *testing.T, svcMonitors []*monitoringv1.Servic
configGenerator: generator,
podMonitorNamespaceSelector: cfg.PrometheusCR.PodMonitorNamespaceSelector,
serviceMonitorNamespaceSelector: cfg.PrometheusCR.ServiceMonitorNamespaceSelector,
+ probeNamespaceSelector: cfg.PrometheusCR.ProbeNamespaceSelector,
+ scrapeConfigNamespaceSelector: cfg.PrometheusCR.ScrapeConfigNamespaceSelector,
resourceSelector: resourceSelector,
store: store,
}, source
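
The CRD changes below add a volumeClaimTemplate to the instrumentation volume of each language spec, so the auto-instrumentation volume can be backed by a PVC instead of an emptyDir. An illustrative CR (field placement assumed from the schema below):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: example
spec:
  java:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 200Mi
```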
diff --git a/config/crd/bases/opentelemetry.io_instrumentations.yaml b/config/crd/bases/opentelemetry.io_instrumentations.yaml
index 19582f62c6..4032a33613 100644
--- a/config/crd/bases/opentelemetry.io_instrumentations.yaml
+++ b/config/crd/bases/opentelemetry.io_instrumentations.yaml
@@ -215,6 +215,118 @@ spec:
type: object
version:
type: string
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -330,6 +442,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -407,6 +631,19 @@ spec:
properties:
endpoint:
type: string
+ tls:
+ properties:
+ ca_file:
+ type: string
+ cert_file:
+ type: string
+ configMapName:
+ type: string
+ key_file:
+ type: string
+ secretName:
+ type: string
+ type: object
type: object
go:
properties:
@@ -511,6 +748,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -633,6 +982,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -811,6 +1272,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -921,6 +1494,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
@@ -1044,6 +1729,118 @@ spec:
x-kubernetes-int-or-string: true
type: object
type: object
+ volumeClaimTemplate:
+ properties:
+ metadata:
+ properties:
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ finalizers:
+ items:
+ type: string
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ name:
+ type: string
+ namespace:
+ type: string
+ type: object
+ spec:
+ properties:
+ accessModes:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ dataSource:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ x-kubernetes-map-type: atomic
+ dataSourceRef:
+ properties:
+ apiGroup:
+ type: string
+ kind:
+ type: string
+ name:
+ type: string
+ namespace:
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ type: object
+ selector:
+ properties:
+ matchExpressions:
+ items:
+ properties:
+ key:
+ type: string
+ operator:
+ type: string
+ values:
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ x-kubernetes-list-type: atomic
+ matchLabels:
+ additionalProperties:
+ type: string
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ storageClassName:
+ type: string
+ volumeAttributesClassName:
+ type: string
+ volumeMode:
+ type: string
+ volumeName:
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
volumeLimitSize:
anyOf:
- type: integer
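Taken together, the schema blocks added above give each auto-instrumentation language on the `Instrumentation` CRD an optional `volumeClaimTemplate` (a trimmed-down PVC template with `metadata` and a required `spec`), and add a `tls` block under `exporter`. A minimal sketch of a manifest exercising both; the object name, endpoint, and the Secret/ConfigMap names are illustrative placeholders, not values taken from this change:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: example-instrumentation          # hypothetical name
spec:
  exporter:
    endpoint: https://otel-collector:4317  # placeholder endpoint
    tls:
      secretName: otel-client-certs      # assumed to hold the client cert/key
      configMapName: otel-ca-bundle      # assumed to hold the CA bundle
      ca_file: ca.crt
      cert_file: tls.crt
      key_file: tls.key
  java:
    volumeClaimTemplate:                 # PVC template for the instrumentation
      spec:                              # volume, instead of the default emptyDir
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 200Mi
```

Note that `spec` is the only required key of `volumeClaimTemplate`, matching the `required: - spec` constraint in the schema above.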
diff --git a/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml b/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml
index 05baaaa5df..fc36f4deb5 100644
--- a/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml
+++ b/config/crd/bases/opentelemetry.io_opentelemetrycollectors.yaml
@@ -6949,6 +6949,13 @@ spec:
type: boolean
type: object
type: object
+ persistentVolumeClaimRetentionPolicy:
+ properties:
+ whenDeleted:
+ type: string
+ whenScaled:
+ type: string
+ type: object
podAnnotations:
additionalProperties:
type: string
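The new `persistentVolumeClaimRetentionPolicy` field mirrors the `apps/v1` StatefulSet field of the same name, so it is only meaningful for collectors running in `statefulset` mode. The schema above only types `whenDeleted` and `whenScaled` as strings; upstream Kubernetes accepts `Retain` and `Delete` for both. A sketch, with a placeholder collector name:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: example                  # hypothetical name
spec:
  mode: statefulset              # the policy is passed through to the StatefulSet
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete          # drop PVCs when the StatefulSet is deleted
    whenScaled: Retain           # keep PVCs for scaled-down replicas
```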
diff --git a/config/default/kustomization.yaml b/config/default/kustomization.yaml
index b5d04b59ae..2475c8ee5b 100644
--- a/config/default/kustomization.yaml
+++ b/config/default/kustomization.yaml
@@ -18,8 +18,6 @@ bases:
- ../manager
- ../webhook
- ../certmanager
-# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
-#- ../prometheus
patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
diff --git a/config/default/manager_auth_proxy_patch.yaml b/config/default/manager_auth_proxy_patch.yaml
index 9969c5c16e..4ac6ff2247 100644
--- a/config/default/manager_auth_proxy_patch.yaml
+++ b/config/default/manager_auth_proxy_patch.yaml
@@ -10,7 +10,7 @@ spec:
spec:
containers:
- name: kube-rbac-proxy
- image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
+ image: quay.io/brancz/kube-rbac-proxy:v0.13.1
args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
diff --git a/config/manager/kustomization.yaml b/config/manager/kustomization.yaml
index 5c5f0b84cb..372a75ae43 100644
--- a/config/manager/kustomization.yaml
+++ b/config/manager/kustomization.yaml
@@ -1,2 +1,3 @@
resources:
- manager.yaml
+
diff --git a/config/overlays/openshift/kustomization.yaml b/config/overlays/openshift/kustomization.yaml
index ddd0d3b29b..dd5b4300d0 100644
--- a/config/overlays/openshift/kustomization.yaml
+++ b/config/overlays/openshift/kustomization.yaml
@@ -8,3 +8,7 @@ patches:
kind: Deployment
name: controller-manager
path: manager-patch.yaml
+
+patchesStrategicMerge:
+- metrics_service_tls_patch.yaml
+- manager_auth_proxy_tls_patch.yaml
\ No newline at end of file
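With this change the overlay mixes the existing `patches:` entry with the legacy `patchesStrategicMerge:` field, which kustomize v5 deprecates. Assuming kustomize >= 5.0, the same three patches could be folded into the single `patches:` list instead; a sketch, not part of this change:

```yaml
patches:
- target:
    kind: Deployment
    name: controller-manager
  path: manager-patch.yaml
- path: metrics_service_tls_patch.yaml       # target inferred from the patch file
- path: manager_auth_proxy_tls_patch.yaml
```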
diff --git a/config/overlays/openshift/manager-patch.yaml b/config/overlays/openshift/manager-patch.yaml
index 2fb76bd889..57b097ca29 100644
--- a/config/overlays/openshift/manager-patch.yaml
+++ b/config/overlays/openshift/manager-patch.yaml
@@ -7,6 +7,6 @@
- --zap-time-encoding=rfc3339nano
- --enable-nginx-instrumentation=true
- '--enable-go-instrumentation=true'
- - '--enable-multi-instrumentation=true'
- '--openshift-create-dashboard=true'
- '--feature-gates=+operator.observability.prometheus'
+ - '--enable-cr-metrics=true'
\ No newline at end of file
diff --git a/config/overlays/openshift/manager_auth_proxy_tls_patch.yaml b/config/overlays/openshift/manager_auth_proxy_tls_patch.yaml
new file mode 100644
index 0000000000..077fa74ea6
--- /dev/null
+++ b/config/overlays/openshift/manager_auth_proxy_tls_patch.yaml
@@ -0,0 +1,29 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: controller-manager
+ namespace: system
+spec:
+ template:
+ spec:
+ containers:
+ - name: manager # without this line, kustomize reorders the containers, making kube-rbac-proxy the default container
+ - name: kube-rbac-proxy
+ args:
+ - "--secure-listen-address=0.0.0.0:8443"
+ - "--upstream=http://127.0.0.1:8080/"
+ - "--logtostderr=true"
+ - "--v=0"
+ - "--tls-cert-file=/var/run/tls/server/tls.crt"
+ - "--tls-private-key-file=/var/run/tls/server/tls.key"
+ - "--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA256"
+ - "--tls-min-version=VersionTLS12"
+ volumeMounts:
+ - mountPath: /var/run/tls/server
+ name: opentelemetry-operator-metrics-cert
+ volumes:
+ - name: opentelemetry-operator-metrics-cert
+ secret:
+ defaultMode: 420
+ # secret generated by the 'service.beta.openshift.io/serving-cert-secret-name' annotation on the metrics-service
+ secretName: opentelemetry-operator-metrics
diff --git a/config/overlays/openshift/metrics_service_tls_patch.yaml b/config/overlays/openshift/metrics_service_tls_patch.yaml
new file mode 100644
index 0000000000..7505c7894a
--- /dev/null
+++ b/config/overlays/openshift/metrics_service_tls_patch.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ service.beta.openshift.io/serving-cert-secret-name: opentelemetry-operator-metrics
+ name: controller-manager-metrics-service
+ namespace: system
diff --git a/config/prometheus/kustomization.yaml b/config/prometheus/kustomization.yaml
deleted file mode 100644
index ed137168a1..0000000000
--- a/config/prometheus/kustomization.yaml
+++ /dev/null
@@ -1,2 +0,0 @@
-resources:
-- monitor.yaml
diff --git a/config/prometheus/monitor.yaml b/config/prometheus/monitor.yaml
deleted file mode 100644
index 6e5f438a21..0000000000
--- a/config/prometheus/monitor.yaml
+++ /dev/null
@@ -1,26 +0,0 @@
-
-# Prometheus Monitor Service (Metrics)
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
- labels:
- app.kubernetes.io/name: opentelemetry-operator
- control-plane: controller-manager
- name: controller-manager-metrics-monitor
- namespace: system
-spec:
- endpoints:
- - path: /metrics
- port: https
- scheme: https
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
- tlsConfig:
- insecureSkipVerify: false
- ca:
- secret:
- key: ca.crt
- name: opentelemetry-operator-controller-manager-service-cert
- selector:
- matchLabels:
- app.kubernetes.io/name: opentelemetry-operator
- control-plane: controller-manager
diff --git a/config/rbac/role.yaml b/config/rbac/role.yaml
index 73632f89c8..a03aeb18e8 100644
--- a/config/rbac/role.yaml
+++ b/config/rbac/role.yaml
@@ -30,7 +30,9 @@ rules:
- ""
resources:
- namespaces
+ - secrets
verbs:
+ - get
- list
- watch
- apiGroups:
@@ -133,6 +135,7 @@ rules:
- opentelemetry.io
resources:
- opampbridges
+ - targetallocators
verbs:
- create
- delete
@@ -153,6 +156,7 @@ rules:
- opampbridges/status
- opentelemetrycollectors/finalizers
- opentelemetrycollectors/status
+ - targetallocators/status
verbs:
- get
- patch
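These RBAC additions let the operator manage the new first-class `TargetAllocator` resource and update its status, matching the `v1alpha1.TargetAllocator` objects built by the tests below. A minimal manifest of the kind now covered by these rules; field names follow the Go spec used in the tests, and the object name is a placeholder:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: TargetAllocator
metadata:
  name: example                          # hypothetical name
spec:
  allocationStrategy: consistent-hashing
  filterStrategy: relabel-config
  prometheusCR:
    enabled: true                        # watch ServiceMonitor/PodMonitor CRs
```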
diff --git a/controllers/builder_test.go b/controllers/builder_test.go
index e3b495e00a..793bc217e2 100644
--- a/controllers/builder_test.go
+++ b/controllers/builder_test.go
@@ -15,9 +15,10 @@
package controllers
import (
- "strings"
"testing"
+ cmv1 "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1"
+ cmmetav1 "github.com/cert-manager/cert-manager/pkg/apis/meta/v1"
"github.com/go-logr/logr"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
"github.com/stretchr/testify/require"
@@ -35,10 +36,12 @@ import (
"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
+ "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
"github.com/open-telemetry/opentelemetry-operator/internal/config"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
+ "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator"
"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
)
@@ -1199,7 +1202,7 @@ endpoint: ws://opamp-server:4320/v1/opamp
}
}
-func TestBuildTargetAllocator(t *testing.T) {
+func TestBuildCollectorTargetAllocatorResources(t *testing.T) {
var goodConfigYaml = `
receivers:
prometheus:
@@ -1241,8 +1244,9 @@ service:
name string
args args
want []client.Object
- featuregates []string
+ featuregates []*colfeaturegate.Gate
wantErr bool
+ opts []config.Option
}{
{
name: "base case",
@@ -2183,33 +2187,2637 @@ prometheus_cr:
},
},
},
- wantErr: false,
- featuregates: []string{},
+ wantErr: false,
},
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cfg := config.New(
- config.WithCollectorImage("default-collector"),
- config.WithTargetAllocatorImage("default-ta-allocator"),
- )
- params := manifests.Params{
- Log: logr.Discard(),
- Config: cfg,
- OtelCol: tt.args.instance,
- }
- targetAllocator, err := collector.TargetAllocator(params)
- require.NoError(t, err)
- params.TargetAllocator = targetAllocator
- if len(tt.featuregates) > 0 {
- fg := strings.Join(tt.featuregates, ",")
- flagset := featuregate.Flags(colfeaturegate.GlobalRegistry())
- if err = flagset.Set(featuregate.FeatureGatesFlag, fg); err != nil {
- t.Errorf("featuregate setting error = %v", err)
- return
- }
- }
- got, err := BuildCollector(params)
+ {
+ name: "target allocator mtls enabled",
+ args: args{
+ instance: v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ },
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{
+ Image: "test",
+ Replicas: &one,
+ },
+ Mode: "statefulset",
+ Config: goodConfig,
+ TargetAllocator: v1beta1.TargetAllocatorEmbedded{
+ Enabled: true,
+ FilterStrategy: "relabel-config",
+ AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing,
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ },
+ },
+ },
+ },
+ want: []client.Object{
+ &appsv1.StatefulSet{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: appsv1.StatefulSetSpec{
+ ServiceName: "test-collector",
+ Replicas: &one,
+ Selector: &metav1.LabelSelector{
+ MatchLabels: selectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-operator-config/sha256": "42773025f65feaf30df59a306a9e38f1aaabe94c8310983beaddb7f648d699b0",
+ "prometheus.io/path": "/metrics",
+ "prometheus.io/port": "8888",
+ "prometheus.io/scrape": "true",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "otc-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-collector-" + goodConfigHash,
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "collector.yaml",
+ Path: "collector.yaml",
+ },
+ },
+ },
+ },
+ },
+ {
+ Name: "test-ta-client-cert",
+ VolumeSource: corev1.VolumeSource{
+ Secret: &corev1.SecretVolumeSource{
+ SecretName: "test-ta-client-cert",
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "otc-container",
+ Image: "test",
+ Args: []string{
+ "--config=/conf/collector.yaml",
+ },
+ Env: []corev1.EnvVar{
+ {
+ Name: "POD_NAME",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.name",
+ },
+ },
+ },
+ {
+ Name: "SHARD",
+ Value: "0",
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "metrics",
+ HostPort: 0,
+ ContainerPort: 8888,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "otc-internal",
+ MountPath: "/conf",
+ },
+ {
+ Name: "test-ta-client-cert",
+ MountPath: "/tls",
+ },
+ },
+ },
+ },
+ ShareProcessNamespace: ptr.To(false),
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ServiceAccountName: "test-collector",
+ },
+ },
+ PodManagementPolicy: "Parallel",
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector-" + goodConfigHash,
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Data: map[string]string{
+ "collector.yaml": "exporters:\n debug: null\nreceivers:\n prometheus:\n config: {}\n target_allocator:\n collector_id: ${POD_NAME}\n endpoint: https://test-targetallocator:443\n interval: 30s\n tls:\n ca_file: /tls/ca.crt\n cert_file: /tls/tls.crt\n key_file: /tls/tls.key\nservice:\n pipelines:\n metrics:\n exporters:\n - debug\n receivers:\n - prometheus\n",
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector-monitoring",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector-monitoring",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ "operator.opentelemetry.io/collector-service-type": "monitoring",
+ "operator.opentelemetry.io/collector-monitoring-service": "Exists",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "monitoring",
+ Port: 8888,
+ },
+ },
+ Selector: selectorLabels,
+ },
+ },
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Data: map[string]string{
+ "targetallocator.yaml": `allocation_strategy: consistent-hashing
+collector_selector:
+ matchlabels:
+ app.kubernetes.io/component: opentelemetry-collector
+ app.kubernetes.io/instance: test.test
+ app.kubernetes.io/managed-by: opentelemetry-operator
+ app.kubernetes.io/part-of: opentelemetry
+ matchexpressions: []
+config:
+ scrape_configs:
+ - job_name: example
+ metric_relabel_configs:
+ - replacement: $1_$2
+ source_labels:
+ - job
+ target_label: job
+ relabel_configs:
+ - replacement: my_service_$1
+ source_labels:
+ - __meta_service_id
+ target_label: job
+ - replacement: $1
+ source_labels:
+ - __meta_service_name
+ target_label: instance
+filter_strategy: relabel-config
+https:
+ ca_file_path: /tls/ca.crt
+ enabled: true
+ listen_addr: :8443
+ tls_cert_file_path: /tls/tls.crt
+ tls_key_file_path: /tls/tls.key
+prometheus_cr:
+ enabled: true
+ pod_monitor_selector: null
+ service_monitor_selector: null
+`,
+ },
+ },
+ &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: appsv1.DeploymentSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: taSelectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "ta-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-targetallocator",
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "targetallocator.yaml",
+ Path: "targetallocator.yaml",
+ },
+ },
+ },
+ },
+ },
+ {
+ Name: "test-ta-server-cert",
+ VolumeSource: corev1.VolumeSource{
+ Secret: &corev1.SecretVolumeSource{
+ SecretName: "test-ta-server-cert",
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "ta-container",
+ Image: "default-ta-allocator",
+ Env: []corev1.EnvVar{
+ {
+ Name: "OTELCOL_NAMESPACE",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.namespace",
+ },
+ },
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "http",
+ HostPort: 0,
+ ContainerPort: 8080,
+ Protocol: "TCP",
+ },
+ {
+ Name: "https",
+ HostPort: 0,
+ ContainerPort: 8443,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "ta-internal",
+ MountPath: "/conf",
+ },
+ {
+ Name: "test-ta-server-cert",
+ MountPath: "/tls",
+ },
+ },
+ LivenessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/livez",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ ReadinessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/readyz",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ },
+ },
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ShareProcessNamespace: ptr.To(false),
+ ServiceAccountName: "test-targetallocator",
+ },
+ },
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "targetallocation",
+ Port: 80,
+ TargetPort: intstr.IntOrString{
+ Type: 1,
+ StrVal: "http",
+ },
+ },
+ {
+ Name: "targetallocation-https",
+ Port: 443,
+ TargetPort: intstr.IntOrString{
+ Type: 1,
+ StrVal: "https",
+ },
+ },
+ },
+ Selector: taSelectorLabels,
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae",
+ },
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
+ &cmv1.Issuer{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-self-signed-issuer",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-self-signed-issuer",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.IssuerSpec{
+ IssuerConfig: cmv1.IssuerConfig{
+ SelfSigned: &cmv1.SelfSignedIssuer{
+ CRLDistributionPoints: nil,
+ },
+ },
+ },
+ },
+ &cmv1.Certificate{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ca-cert",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ca-cert",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.CertificateSpec{
+ Subject: &cmv1.X509Subject{
+ OrganizationalUnits: []string{"opentelemetry-operator"},
+ },
+ CommonName: "test-ca-cert",
+ IsCA: true,
+ SecretName: "test-ca-cert",
+ IssuerRef: cmmetav1.ObjectReference{
+ Name: "test-self-signed-issuer",
+ Kind: "Issuer",
+ },
+ },
+ },
+ &cmv1.Issuer{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ca-issuer",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ca-issuer",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.IssuerSpec{
+ IssuerConfig: cmv1.IssuerConfig{
+ CA: &cmv1.CAIssuer{
+ SecretName: "test-ca-cert",
+ },
+ },
+ },
+ },
+ &cmv1.Certificate{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ta-server-cert",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ta-server-cert",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.CertificateSpec{
+ Subject: &cmv1.X509Subject{
+ OrganizationalUnits: []string{"opentelemetry-operator"},
+ },
+ DNSNames: []string{
+ "test-targetallocator",
+ "test-targetallocator.test.svc",
+ "test-targetallocator.test.svc.cluster.local",
+ },
+ SecretName: "test-ta-server-cert",
+ IssuerRef: cmmetav1.ObjectReference{
+ Name: "test-ca-issuer",
+ Kind: "Issuer",
+ },
+ Usages: []cmv1.KeyUsage{
+ "client auth",
+ "server auth",
+ },
+ },
+ },
+ &cmv1.Certificate{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ta-client-cert",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ta-client-cert",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.CertificateSpec{
+ Subject: &cmv1.X509Subject{
+ OrganizationalUnits: []string{"opentelemetry-operator"},
+ },
+ DNSNames: []string{
+ "test-targetallocator",
+ "test-targetallocator.test.svc",
+ "test-targetallocator.test.svc.cluster.local",
+ },
+ SecretName: "test-ta-client-cert",
+ IssuerRef: cmmetav1.ObjectReference{
+ Name: "test-ca-issuer",
+ Kind: "Issuer",
+ },
+ Usages: []cmv1.KeyUsage{
+ "client auth",
+ "server auth",
+ },
+ },
+ },
+ },
+ wantErr: false,
+ opts: []config.Option{
+ config.WithCertManagerAvailability(certmanager.Available),
+ },
+ featuregates: []*colfeaturegate.Gate{featuregate.EnableTargetAllocatorMTLS},
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ opts := []config.Option{
+ config.WithCollectorImage("default-collector"),
+ config.WithTargetAllocatorImage("default-ta-allocator"),
+ }
+ opts = append(opts, tt.opts...)
+ cfg := config.New(
+ opts...,
+ )
+ params := manifests.Params{
+ Log: logr.Discard(),
+ Config: cfg,
+ OtelCol: tt.args.instance,
+ }
+ targetAllocator, err := collector.TargetAllocator(params)
+ require.NoError(t, err)
+ params.TargetAllocator = targetAllocator
+ registry := colfeaturegate.GlobalRegistry()
+ for _, gate := range tt.featuregates {
+ current := gate.IsEnabled()
+ require.False(t, current, "only enable gates which are disabled by default")
+				require.NoError(t, registry.Set(gate.ID(), true))
+ t.Cleanup(func() {
+ setErr := registry.Set(gate.ID(), current)
+ require.NoError(t, setErr)
+ })
+ }
+ got, err := BuildCollector(params)
+ if (err != nil) != tt.wantErr {
+				t.Errorf("BuildCollector() error = %v, wantErr %v", err, tt.wantErr)
+ return
+ }
+ require.Equal(t, tt.want, got)
+
+ })
+ }
+}
+
+func TestBuildCollectorTargetAllocatorCR(t *testing.T) {
+ var goodConfigYaml = `
+receivers:
+ prometheus:
+ config:
+ scrape_configs:
+ - job_name: 'example'
+ relabel_configs:
+ - source_labels: ['__meta_service_id']
+ target_label: 'job'
+ replacement: 'my_service_$$1'
+ - source_labels: ['__meta_service_name']
+ target_label: 'instance'
+ replacement: '$1'
+ metric_relabel_configs:
+ - source_labels: ['job']
+ target_label: 'job'
+ replacement: '$$1_$2'
+exporters:
+ debug:
+service:
+ pipelines:
+ metrics:
+ receivers: [prometheus]
+ exporters: [debug]
+`
+
+ goodConfig := v1beta1.Config{}
+ err := go_yaml.Unmarshal([]byte(goodConfigYaml), &goodConfig)
+ require.NoError(t, err)
+
+ goodConfigHash, _ := manifestutils.GetConfigMapSHA(goodConfig)
+ goodConfigHash = goodConfigHash[:8]
+
+ one := int32(1)
+ type args struct {
+ instance v1beta1.OpenTelemetryCollector
+ }
+ tests := []struct {
+ name string
+ args args
+ want []client.Object
+ featuregates []*colfeaturegate.Gate
+ wantErr bool
+ opts []config.Option
+ }{
+ {
+ name: "base case",
+ args: args{
+ instance: v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ },
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{
+ Image: "test",
+ Replicas: &one,
+ },
+ Mode: "statefulset",
+ Config: goodConfig,
+ TargetAllocator: v1beta1.TargetAllocatorEmbedded{
+ Enabled: true,
+ FilterStrategy: "relabel-config",
+ AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing,
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ },
+ },
+ },
+ },
+ want: []client.Object{
+ &appsv1.StatefulSet{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: appsv1.StatefulSetSpec{
+ ServiceName: "test-collector",
+ Replicas: &one,
+ Selector: &metav1.LabelSelector{
+ MatchLabels: selectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-operator-config/sha256": "42773025f65feaf30df59a306a9e38f1aaabe94c8310983beaddb7f648d699b0",
+ "prometheus.io/path": "/metrics",
+ "prometheus.io/port": "8888",
+ "prometheus.io/scrape": "true",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "otc-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-collector-" + goodConfigHash,
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "collector.yaml",
+ Path: "collector.yaml",
+ },
+ },
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "otc-container",
+ Image: "test",
+ Args: []string{
+ "--config=/conf/collector.yaml",
+ },
+ Env: []corev1.EnvVar{
+ {
+ Name: "POD_NAME",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.name",
+ },
+ },
+ },
+ {
+ Name: "SHARD",
+ Value: "0",
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "metrics",
+ HostPort: 0,
+ ContainerPort: 8888,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "otc-internal",
+ MountPath: "/conf",
+ },
+ },
+ },
+ },
+ ShareProcessNamespace: ptr.To(false),
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ServiceAccountName: "test-collector",
+ },
+ },
+ PodManagementPolicy: "Parallel",
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector-" + goodConfigHash,
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Data: map[string]string{
+ "collector.yaml": "exporters:\n debug: null\nreceivers:\n prometheus:\n config: {}\n target_allocator:\n collector_id: ${POD_NAME}\n endpoint: http://test-targetallocator:80\n interval: 30s\nservice:\n pipelines:\n metrics:\n exporters:\n - debug\n receivers:\n - prometheus\n",
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector-monitoring",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector-monitoring",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ "operator.opentelemetry.io/collector-service-type": "monitoring",
+ "operator.opentelemetry.io/collector-monitoring-service": "Exists",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "monitoring",
+ Port: 8888,
+ },
+ },
+ Selector: selectorLabels,
+ },
+ },
+ &v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ Labels: nil,
+ },
+ Spec: v1alpha1.TargetAllocatorSpec{
+ FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig,
+ AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing,
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ },
+ },
+ },
+ wantErr: false,
+ },
+ {
+ name: "enable metrics case",
+ args: args{
+ instance: v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ },
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{
+ Image: "test",
+ Replicas: &one,
+ },
+ Mode: "statefulset",
+ Config: goodConfig,
+ TargetAllocator: v1beta1.TargetAllocatorEmbedded{
+ Enabled: true,
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ FilterStrategy: "relabel-config",
+ Observability: v1beta1.ObservabilitySpec{
+ Metrics: v1beta1.MetricsConfigSpec{
+ EnableMetrics: true,
+ },
+ },
+ },
+ },
+ },
+ },
+ want: []client.Object{
+ &appsv1.StatefulSet{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: appsv1.StatefulSetSpec{
+ ServiceName: "test-collector",
+ Replicas: &one,
+ Selector: &metav1.LabelSelector{
+ MatchLabels: selectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-operator-config/sha256": "42773025f65feaf30df59a306a9e38f1aaabe94c8310983beaddb7f648d699b0",
+ "prometheus.io/path": "/metrics",
+ "prometheus.io/port": "8888",
+ "prometheus.io/scrape": "true",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "otc-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-collector-" + goodConfigHash,
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "collector.yaml",
+ Path: "collector.yaml",
+ },
+ },
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "otc-container",
+ Image: "test",
+ Args: []string{
+ "--config=/conf/collector.yaml",
+ },
+ Env: []corev1.EnvVar{
+ {
+ Name: "POD_NAME",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.name",
+ },
+ },
+ },
+ {
+ Name: "SHARD",
+ Value: "0",
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "metrics",
+ HostPort: 0,
+ ContainerPort: 8888,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "otc-internal",
+ MountPath: "/conf",
+ },
+ },
+ },
+ },
+ ShareProcessNamespace: ptr.To(false),
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ServiceAccountName: "test-collector",
+ },
+ },
+ PodManagementPolicy: "Parallel",
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector-" + goodConfigHash,
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ Data: map[string]string{
+ "collector.yaml": "exporters:\n debug: null\nreceivers:\n prometheus:\n config: {}\n target_allocator:\n collector_id: ${POD_NAME}\n endpoint: http://test-targetallocator:80\n interval: 30s\nservice:\n pipelines:\n metrics:\n exporters:\n - debug\n receivers:\n - prometheus\n",
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{},
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-collector-monitoring",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-collector",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-collector-monitoring",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ "operator.opentelemetry.io/collector-service-type": "monitoring",
+ "operator.opentelemetry.io/collector-monitoring-service": "Exists",
+ },
+ Annotations: map[string]string{},
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "monitoring",
+ Port: 8888,
+ },
+ },
+ Selector: selectorLabels,
+ },
+ },
+ &v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ Labels: nil,
+ },
+ Spec: v1alpha1.TargetAllocatorSpec{
+ FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig,
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ Observability: v1beta1.ObservabilitySpec{
+ Metrics: v1beta1.MetricsConfigSpec{
+ EnableMetrics: true,
+ },
+ },
+ },
+ },
+ },
+ wantErr: false,
+ featuregates: []*colfeaturegate.Gate{},
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ opts := []config.Option{
+ config.WithCollectorImage("default-collector"),
+ config.WithTargetAllocatorImage("default-ta-allocator"),
+ }
+ opts = append(opts, tt.opts...)
+ cfg := config.New(
+ opts...,
+ )
+ params := manifests.Params{
+ Log: logr.Discard(),
+ Config: cfg,
+ OtelCol: tt.args.instance,
+ }
+ targetAllocator, err := collector.TargetAllocator(params)
+ require.NoError(t, err)
+ params.TargetAllocator = targetAllocator
+ featuregates := []*colfeaturegate.Gate{featuregate.CollectorUsesTargetAllocatorCR}
+ featuregates = append(featuregates, tt.featuregates...)
+ registry := colfeaturegate.GlobalRegistry()
+ for _, gate := range featuregates {
+ current := gate.IsEnabled()
+ require.False(t, current, "only enable gates which are disabled by default")
+				require.NoError(t, registry.Set(gate.ID(), true))
+ t.Cleanup(func() {
+ setErr := registry.Set(gate.ID(), current)
+ require.NoError(t, setErr)
+ })
+ }
+ got, err := BuildCollector(params)
+ if (err != nil) != tt.wantErr {
+				t.Errorf("BuildCollector() error = %v, wantErr %v", err, tt.wantErr)
+ return
+ }
+ require.Equal(t, tt.want, got)
+
+ })
+ }
+}
+
+func TestBuildTargetAllocator(t *testing.T) {
+ type args struct {
+ instance v1alpha1.TargetAllocator
+ collector *v1beta1.OpenTelemetryCollector
+ }
+ tests := []struct {
+ name string
+ args args
+ want []client.Object
+ featuregates []*colfeaturegate.Gate
+ wantErr bool
+ opts []config.Option
+ }{
+ {
+ name: "base case",
+ args: args{
+ instance: v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ Labels: nil,
+ },
+ Spec: v1alpha1.TargetAllocatorSpec{
+ FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig,
+ ScrapeConfigs: []v1beta1.AnyConfig{
+ {Object: map[string]any{
+ "job_name": "example",
+ "metric_relabel_configs": []any{
+ map[string]any{
+ "replacement": "$1_$2",
+ "source_labels": []any{"job"},
+ "target_label": "job",
+ },
+ },
+ "relabel_configs": []any{
+ map[string]any{
+ "replacement": "my_service_$1",
+ "source_labels": []any{"__meta_service_id"},
+ "target_label": "job",
+ },
+ map[string]any{
+ "replacement": "$1",
+ "source_labels": []any{"__meta_service_name"},
+ "target_label": "instance",
+ },
+ },
+ }},
+ },
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ },
+ },
+ },
+ want: []client.Object{
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Data: map[string]string{
+ "targetallocator.yaml": `allocation_strategy: consistent-hashing
+collector_selector: null
+config:
+ scrape_configs:
+ - job_name: example
+ metric_relabel_configs:
+ - replacement: $1_$2
+ source_labels:
+ - job
+ target_label: job
+ relabel_configs:
+ - replacement: my_service_$1
+ source_labels:
+ - __meta_service_id
+ target_label: job
+ - replacement: $1
+ source_labels:
+ - __meta_service_name
+ target_label: instance
+filter_strategy: relabel-config
+prometheus_cr:
+ enabled: true
+ pod_monitor_selector: null
+ service_monitor_selector: null
+`,
+ },
+ },
+ &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: appsv1.DeploymentSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: taSelectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "ta-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-targetallocator",
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "targetallocator.yaml",
+ Path: "targetallocator.yaml",
+ },
+ },
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "ta-container",
+ Image: "default-ta-allocator",
+ Env: []corev1.EnvVar{
+ {
+ Name: "OTELCOL_NAMESPACE",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.namespace",
+ },
+ },
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "http",
+ HostPort: 0,
+ ContainerPort: 8080,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "ta-internal",
+ MountPath: "/conf",
+ },
+ },
+ LivenessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/livez",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ ReadinessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/readyz",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ },
+ },
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ShareProcessNamespace: ptr.To(false),
+ ServiceAccountName: "test-targetallocator",
+ },
+ },
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "targetallocation",
+ Port: 80,
+ TargetPort: intstr.IntOrString{
+ Type: intstr.String,
+ StrVal: "http",
+ },
+ },
+ },
+ Selector: taSelectorLabels,
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc",
+ },
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
+ },
+ wantErr: false,
+ },
+ {
+ name: "enable metrics case",
+ args: args{
+ instance: v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ Labels: nil,
+ },
+ Spec: v1alpha1.TargetAllocatorSpec{
+ FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig,
+ ScrapeConfigs: []v1beta1.AnyConfig{
+ {Object: map[string]any{
+ "job_name": "example",
+ "metric_relabel_configs": []any{
+ map[string]any{
+ "replacement": "$1_$2",
+ "source_labels": []any{"job"},
+ "target_label": "job",
+ },
+ },
+ "relabel_configs": []any{
+ map[string]any{
+ "replacement": "my_service_$1",
+ "source_labels": []any{"__meta_service_id"},
+ "target_label": "job",
+ },
+ map[string]any{
+ "replacement": "$1",
+ "source_labels": []any{"__meta_service_name"},
+ "target_label": "instance",
+ },
+ },
+ }},
+ },
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ AllocationStrategy: v1beta1.TargetAllocatorAllocationStrategyConsistentHashing,
+ Observability: v1beta1.ObservabilitySpec{
+ Metrics: v1beta1.MetricsConfigSpec{
+ EnableMetrics: true,
+ },
+ },
+ },
+ },
+ },
+ want: []client.Object{
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Data: map[string]string{
+ "targetallocator.yaml": `allocation_strategy: consistent-hashing
+collector_selector: null
+config:
+ scrape_configs:
+ - job_name: example
+ metric_relabel_configs:
+ - replacement: $1_$2
+ source_labels:
+ - job
+ target_label: job
+ relabel_configs:
+ - replacement: my_service_$1
+ source_labels:
+ - __meta_service_id
+ target_label: job
+ - replacement: $1
+ source_labels:
+ - __meta_service_name
+ target_label: instance
+filter_strategy: relabel-config
+prometheus_cr:
+ enabled: true
+ pod_monitor_selector: null
+ service_monitor_selector: null
+`,
+ },
+ },
+ &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: appsv1.DeploymentSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: taSelectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "ta-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-targetallocator",
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "targetallocator.yaml",
+ Path: "targetallocator.yaml",
+ },
+ },
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "ta-container",
+ Image: "default-ta-allocator",
+ Env: []corev1.EnvVar{
+ {
+ Name: "OTELCOL_NAMESPACE",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.namespace",
+ },
+ },
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "http",
+ HostPort: 0,
+ ContainerPort: 8080,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "ta-internal",
+ MountPath: "/conf",
+ },
+ },
+ LivenessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/livez",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ ReadinessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/readyz",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ },
+ },
+ ShareProcessNamespace: ptr.To(false),
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ServiceAccountName: "test-targetallocator",
+ },
+ },
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "targetallocation",
+ Port: 80,
+ TargetPort: intstr.IntOrString{
+ Type: intstr.String,
+ StrVal: "http",
+ },
+ },
+ },
+ Selector: taSelectorLabels,
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "88ab06aab167d58ae2316ddecc9cf0600b9094d27054781dd6aa6e44dcf902fc",
+ },
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
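+ // enabling metrics observability additionally yields a ServiceMonitor for the target allocator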
+ &monitoringv1.ServiceMonitor{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: monitoringv1.ServiceMonitorSpec{
+ Endpoints: []monitoringv1.Endpoint{
+ {Port: "targetallocation"},
+ },
+ Selector: v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ },
+ },
+ NamespaceSelector: monitoringv1.NamespaceSelector{
+ MatchNames: []string{"test"},
+ },
+ },
+ },
+ },
+ wantErr: false,
+ },
+ {
+ name: "collector present",
+ args: args{
+ instance: v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ Labels: nil,
+ },
+ Spec: v1alpha1.TargetAllocatorSpec{
+ FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig,
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ },
+ },
+ collector: &v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ },
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ Config: v1beta1.Config{
+ Receivers: v1beta1.AnyConfig{
+ Object: map[string]any{
+ "prometheus": map[string]any{
+ "config": map[string]any{
+ "scrape_configs": []any{
+ map[string]any{
+ "job_name": "example",
+ "metric_relabel_configs": []any{
+ map[string]any{
+ "replacement": "$1_$2",
+ "source_labels": []any{"job"},
+ "target_label": "job",
+ },
+ },
+ "relabel_configs": []any{
+ map[string]any{
+ "replacement": "my_service_$1",
+ "source_labels": []any{"__meta_service_id"},
+ "target_label": "job",
+ },
+ map[string]any{
+ "replacement": "$1",
+ "source_labels": []any{"__meta_service_name"},
+ "target_label": "instance",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
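+ // with a collector present, the generated config also contains a collector_selector
+ // and the scrape configs from the collector's prometheus receiver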
+ want: []client.Object{
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Data: map[string]string{
+ "targetallocator.yaml": `allocation_strategy: consistent-hashing
+collector_selector:
+ matchlabels:
+ app.kubernetes.io/component: opentelemetry-collector
+ app.kubernetes.io/instance: test.test
+ app.kubernetes.io/managed-by: opentelemetry-operator
+ app.kubernetes.io/part-of: opentelemetry
+ matchexpressions: []
+config:
+ scrape_configs:
+ - job_name: example
+ metric_relabel_configs:
+ - replacement: $1_$2
+ source_labels:
+ - job
+ target_label: job
+ relabel_configs:
+ - replacement: my_service_$1
+ source_labels:
+ - __meta_service_id
+ target_label: job
+ - replacement: $1
+ source_labels:
+ - __meta_service_name
+ target_label: instance
+filter_strategy: relabel-config
+prometheus_cr:
+ enabled: true
+ pod_monitor_selector: null
+ service_monitor_selector: null
+`,
+ },
+ },
+ &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: appsv1.DeploymentSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: taSelectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "9d78d2ecfad18bad24dec7e9a825b4ce45657ecbb2e6b32845b585b7c15ea407",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "ta-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-targetallocator",
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "targetallocator.yaml",
+ Path: "targetallocator.yaml",
+ },
+ },
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "ta-container",
+ Image: "default-ta-allocator",
+ Env: []corev1.EnvVar{
+ {
+ Name: "OTELCOL_NAMESPACE",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.namespace",
+ },
+ },
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "http",
+ HostPort: 0,
+ ContainerPort: 8080,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "ta-internal",
+ MountPath: "/conf",
+ },
+ },
+ LivenessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/livez",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ ReadinessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/readyz",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ },
+ },
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ShareProcessNamespace: ptr.To(false),
+ ServiceAccountName: "test-targetallocator",
+ },
+ },
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "targetallocation",
+ Port: 80,
+ TargetPort: intstr.IntOrString{
+ Type: intstr.String,
+ StrVal: "http",
+ },
+ },
+ },
+ Selector: taSelectorLabels,
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "9d78d2ecfad18bad24dec7e9a825b4ce45657ecbb2e6b32845b585b7c15ea407",
+ },
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
+ },
+ wantErr: false,
+ },
+ {
+ name: "mtls",
+ args: args{
+ instance: v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ Labels: nil,
+ },
+ Spec: v1alpha1.TargetAllocatorSpec{
+ FilterStrategy: v1beta1.TargetAllocatorFilterStrategyRelabelConfig,
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ Enabled: true,
+ },
+ },
+ },
+ collector: &v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "test",
+ },
+ Spec: v1beta1.OpenTelemetryCollectorSpec{
+ Config: v1beta1.Config{
+ Receivers: v1beta1.AnyConfig{
+ Object: map[string]any{
+ "prometheus": map[string]any{
+ "config": map[string]any{
+ "scrape_configs": []any{
+ map[string]any{
+ "job_name": "example",
+ "metric_relabel_configs": []any{
+ map[string]any{
+ "replacement": "$1_$2",
+ "source_labels": []any{"job"},
+ "target_label": "job",
+ },
+ },
+ "relabel_configs": []any{
+ map[string]any{
+ "replacement": "my_service_$1",
+ "source_labels": []any{"__meta_service_id"},
+ "target_label": "job",
+ },
+ map[string]any{
+ "replacement": "$1",
+ "source_labels": []any{"__meta_service_name"},
+ "target_label": "instance",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ want: []client.Object{
+ &corev1.ConfigMap{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Data: map[string]string{
+ "targetallocator.yaml": `allocation_strategy: consistent-hashing
+collector_selector:
+ matchlabels:
+ app.kubernetes.io/component: opentelemetry-collector
+ app.kubernetes.io/instance: test.test
+ app.kubernetes.io/managed-by: opentelemetry-operator
+ app.kubernetes.io/part-of: opentelemetry
+ matchexpressions: []
+config:
+ scrape_configs:
+ - job_name: example
+ metric_relabel_configs:
+ - replacement: $1_$2
+ source_labels:
+ - job
+ target_label: job
+ relabel_configs:
+ - replacement: my_service_$1
+ source_labels:
+ - __meta_service_id
+ target_label: job
+ - replacement: $1
+ source_labels:
+ - __meta_service_name
+ target_label: instance
+filter_strategy: relabel-config
+https:
+ ca_file_path: /tls/ca.crt
+ enabled: true
+ listen_addr: :8443
+ tls_cert_file_path: /tls/tls.crt
+ tls_key_file_path: /tls/tls.key
+prometheus_cr:
+ enabled: true
+ pod_monitor_selector: null
+ service_monitor_selector: null
+`,
+ },
+ },
+ &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: appsv1.DeploymentSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: taSelectorLabels,
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae",
+ },
+ },
+ Spec: corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: "ta-internal",
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: "test-targetallocator",
+ },
+ Items: []corev1.KeyToPath{
+ {
+ Key: "targetallocator.yaml",
+ Path: "targetallocator.yaml",
+ },
+ },
+ },
+ },
+ },
+ {
+ Name: "test-ta-server-cert",
+ VolumeSource: corev1.VolumeSource{
+ Secret: &corev1.SecretVolumeSource{
+ SecretName: "test-ta-server-cert",
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Name: "ta-container",
+ Image: "default-ta-allocator",
+ Env: []corev1.EnvVar{
+ {
+ Name: "OTELCOL_NAMESPACE",
+ ValueFrom: &corev1.EnvVarSource{
+ FieldRef: &corev1.ObjectFieldSelector{
+ FieldPath: "metadata.namespace",
+ },
+ },
+ },
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: "http",
+ HostPort: 0,
+ ContainerPort: 8080,
+ Protocol: "TCP",
+ },
+ {
+ Name: "https",
+ HostPort: 0,
+ ContainerPort: 8443,
+ Protocol: "TCP",
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: "ta-internal",
+ MountPath: "/conf",
+ },
+ {
+ Name: "test-ta-server-cert",
+ MountPath: "/tls",
+ },
+ },
+ LivenessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/livez",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ ReadinessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/readyz",
+ Port: intstr.FromInt(8080),
+ },
+ },
+ },
+ },
+ },
+ DNSPolicy: "ClusterFirst",
+ DNSConfig: &corev1.PodDNSConfig{},
+ ShareProcessNamespace: ptr.To(false),
+ ServiceAccountName: "test-targetallocator",
+ },
+ },
+ },
+ },
+ &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ },
+ &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: nil,
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: "targetallocation",
+ Port: 80,
+ TargetPort: intstr.IntOrString{
+ Type: intstr.String,
+ StrVal: "http",
+ },
+ },
+ {
+ Name: "targetallocation-https",
+ Port: 443,
+ TargetPort: intstr.IntOrString{
+ Type: intstr.String,
+ StrVal: "https",
+ },
+ },
+ },
+ Selector: taSelectorLabels,
+ },
+ },
+ &policyV1.PodDisruptionBudget{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-targetallocator",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ Annotations: map[string]string{
+ "opentelemetry-targetallocator-config/hash": "f1ce0fdbf69924576576d1d6eb2a3cc91a3f72675b3facbb36702d57027bc6ae",
+ },
+ },
+ Spec: policyV1.PodDisruptionBudgetSpec{
+ Selector: &v1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-targetallocator",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ },
+ },
+ MaxUnavailable: &intstr.IntOrString{
+ Type: intstr.Int,
+ IntVal: 1,
+ },
+ },
+ },
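+ // with mTLS enabled and cert-manager available, the builder also emits a self-signed
+ // issuer, a CA certificate and issuer, and server/client certificates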
+ &cmv1.Issuer{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-self-signed-issuer",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-self-signed-issuer",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.IssuerSpec{
+ IssuerConfig: cmv1.IssuerConfig{
+ SelfSigned: &cmv1.SelfSignedIssuer{
+ CRLDistributionPoints: nil,
+ },
+ },
+ },
+ },
+ &cmv1.Certificate{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ca-cert",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ca-cert",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.CertificateSpec{
+ Subject: &cmv1.X509Subject{
+ OrganizationalUnits: []string{"opentelemetry-operator"},
+ },
+ CommonName: "test-ca-cert",
+ IsCA: true,
+ SecretName: "test-ca-cert",
+ IssuerRef: cmmetav1.ObjectReference{
+ Name: "test-self-signed-issuer",
+ Kind: "Issuer",
+ },
+ },
+ },
+ &cmv1.Issuer{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ca-issuer",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ca-issuer",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.IssuerSpec{
+ IssuerConfig: cmv1.IssuerConfig{
+ CA: &cmv1.CAIssuer{
+ SecretName: "test-ca-cert",
+ },
+ },
+ },
+ },
+ &cmv1.Certificate{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ta-server-cert",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ta-server-cert",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.CertificateSpec{
+ Subject: &cmv1.X509Subject{
+ OrganizationalUnits: []string{"opentelemetry-operator"},
+ },
+ DNSNames: []string{
+ "test-targetallocator",
+ "test-targetallocator.test.svc",
+ "test-targetallocator.test.svc.cluster.local",
+ },
+ SecretName: "test-ta-server-cert",
+ IssuerRef: cmmetav1.ObjectReference{
+ Name: "test-ca-issuer",
+ Kind: "Issuer",
+ },
+ Usages: []cmv1.KeyUsage{
+ "client auth",
+ "server auth",
+ },
+ },
+ },
+ &cmv1.Certificate{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test-ta-client-cert",
+ Namespace: "test",
+ Labels: map[string]string{
+ "app.kubernetes.io/component": "opentelemetry-targetallocator",
+ "app.kubernetes.io/instance": "test.test",
+ "app.kubernetes.io/managed-by": "opentelemetry-operator",
+ "app.kubernetes.io/name": "test-ta-client-cert",
+ "app.kubernetes.io/part-of": "opentelemetry",
+ "app.kubernetes.io/version": "latest",
+ },
+ },
+ Spec: cmv1.CertificateSpec{
+ Subject: &cmv1.X509Subject{
+ OrganizationalUnits: []string{"opentelemetry-operator"},
+ },
+ DNSNames: []string{
+ "test-targetallocator",
+ "test-targetallocator.test.svc",
+ "test-targetallocator.test.svc.cluster.local",
+ },
+ SecretName: "test-ta-client-cert",
+ IssuerRef: cmmetav1.ObjectReference{
+ Name: "test-ca-issuer",
+ Kind: "Issuer",
+ },
+ Usages: []cmv1.KeyUsage{
+ "client auth",
+ "server auth",
+ },
+ },
+ },
+ },
+ wantErr: false,
+ opts: []config.Option{
+ config.WithCertManagerAvailability(certmanager.Available),
+ },
+ featuregates: []*colfeaturegate.Gate{featuregate.EnableTargetAllocatorMTLS},
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ opts := []config.Option{
+ config.WithCollectorImage("default-collector"),
+ config.WithTargetAllocatorImage("default-ta-allocator"),
+ }
+ opts = append(opts, tt.opts...)
+ cfg := config.New(
+ opts...,
+ )
+ params := targetallocator.Params{
+ Log: logr.Discard(),
+ Config: cfg,
+ TargetAllocator: tt.args.instance,
+ Collector: tt.args.collector,
+ }
+ registry := colfeaturegate.GlobalRegistry()
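+ // enable any feature gates this test case needs, restoring their previous values
+ // when the test ends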
+ for _, gate := range tt.featuregates {
+ current := gate.IsEnabled()
+ require.False(t, current, "only enable gates which are disabled by default")
+ err := registry.Set(gate.ID(), true)
+ require.NoError(t, err)
+ t.Cleanup(func() {
+ err := registry.Set(gate.ID(), current)
+ require.NoError(t, err)
+ })
+ }
+ got, err := BuildTargetAllocator(params)
if (err != nil) != tt.wantErr {
t.Errorf("BuildAll() error = %v, wantErr %v", err, tt.wantErr)
return
diff --git a/controllers/common.go b/controllers/common.go
index 3003907913..25bdc0c432 100644
--- a/controllers/common.go
+++ b/controllers/common.go
@@ -35,6 +35,7 @@ import (
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/opampbridge"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator"
+ "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
)
func isNamespaceScoped(obj client.Object) bool {
@@ -59,22 +60,26 @@ func BuildCollector(params manifests.Params) ([]client.Object, error) {
}
resources = append(resources, objs...)
}
- // TODO: Remove this after TargetAllocator CRD is reconciled
- if params.TargetAllocator != nil {
- taParams := targetallocator.Params{
- Client: params.Client,
- Scheme: params.Scheme,
- Recorder: params.Recorder,
- Log: params.Log,
- Config: params.Config,
- Collector: ¶ms.OtelCol,
- TargetAllocator: *params.TargetAllocator,
- }
- taResources, err := BuildTargetAllocator(taParams)
- if err != nil {
- return nil, err
+ // If we're not creating a TargetAllocator CR, we need to invoke its manifest builder
+ // directly and emit its resources alongside the collector's. This is how things worked
+ // before the TargetAllocator CRD was introduced.
+ if !featuregate.CollectorUsesTargetAllocatorCR.IsEnabled() {
+ if params.TargetAllocator != nil {
+ taParams := targetallocator.Params{
+ Client: params.Client,
+ Scheme: params.Scheme,
+ Recorder: params.Recorder,
+ Log: params.Log,
+ Config: params.Config,
+ Collector: ¶ms.OtelCol,
+ TargetAllocator: *params.TargetAllocator,
+ }
+ taResources, err := BuildTargetAllocator(taParams)
+ if err != nil {
+ return nil, err
+ }
+ resources = append(resources, taResources...)
}
- resources = append(resources, taResources...)
}
return resources, nil
}
@@ -155,7 +160,7 @@ func reconcileDesiredObjects(ctx context.Context, kubeClient client.Client, logg
op = result
return createOrUpdateErr
})
- if crudErr != nil && errors.Is(crudErr, manifests.ImmutableChangeErr) {
+ if crudErr != nil && errors.As(crudErr, &manifests.ImmutableChangeErr) {
l.Error(crudErr, "detected immutable field change, trying to delete, new object will be created on next reconcile", "existing", existing.GetName())
delErr := kubeClient.Delete(ctx, existing)
if delErr != nil {
diff --git a/controllers/opentelemetrycollector_controller.go b/controllers/opentelemetrycollector_controller.go
index 8c616700a6..1f0211f932 100644
--- a/controllers/opentelemetrycollector_controller.go
+++ b/controllers/opentelemetrycollector_controller.go
@@ -38,6 +38,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
+ "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift"
"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus"
@@ -46,7 +47,9 @@ import (
"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
+ internalRbac "github.com/open-telemetry/opentelemetry-operator/internal/rbac"
collectorStatus "github.com/open-telemetry/opentelemetry-operator/internal/status/collector"
+ "github.com/open-telemetry/opentelemetry-operator/pkg/constants"
"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
)
@@ -64,6 +67,7 @@ type OpenTelemetryCollectorReconciler struct {
scheme *runtime.Scheme
log logr.Logger
config config.Config
+ reviewer *internalRbac.Reviewer
}
// Params is the set of options to build a new OpenTelemetryCollectorReconciler.
@@ -73,6 +77,7 @@ type Params struct {
Scheme *runtime.Scheme
Log logr.Logger
Config config.Config
+ Reviewer *internalRbac.Reviewer
}
func (r *OpenTelemetryCollectorReconciler) findOtelOwnedObjects(ctx context.Context, params manifests.Params) (map[types.UID]client.Object, error) {
@@ -168,7 +173,7 @@ func (r *OpenTelemetryCollectorReconciler) getConfigMapsToRemove(configVersionsT
return ownedConfigMaps
}
-func (r *OpenTelemetryCollectorReconciler) GetParams(instance v1beta1.OpenTelemetryCollector) (manifests.Params, error) {
+func (r *OpenTelemetryCollectorReconciler) GetParams(ctx context.Context, instance v1beta1.OpenTelemetryCollector) (manifests.Params, error) {
p := manifests.Params{
Config: r.config,
Client: r.Client,
@@ -176,10 +181,11 @@ func (r *OpenTelemetryCollectorReconciler) GetParams(instance v1beta1.OpenTeleme
Log: r.log,
Scheme: r.scheme,
Recorder: r.recorder,
+ Reviewer: r.reviewer,
}
// generate the target allocator CR from the collector CR
- targetAllocator, err := collector.TargetAllocator(p)
+ targetAllocator, err := r.getTargetAllocator(ctx, p)
if err != nil {
return p, err
}
@@ -187,6 +193,19 @@ func (r *OpenTelemetryCollectorReconciler) GetParams(instance v1beta1.OpenTeleme
return p, nil
}
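+// getTargetAllocator returns the TargetAllocator for the given collector. If the collector
+// carries the target allocator label, the referenced TargetAllocator CR is fetched from the
+// cluster; otherwise, one is generated from the collector CR itself.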
+func (r *OpenTelemetryCollectorReconciler) getTargetAllocator(ctx context.Context, params manifests.Params) (*v1alpha1.TargetAllocator, error) {
+ if taName, ok := params.OtelCol.GetLabels()[constants.LabelTargetAllocator]; ok {
+ targetAllocator := &v1alpha1.TargetAllocator{}
+ taKey := client.ObjectKey{Name: taName, Namespace: params.OtelCol.GetNamespace()}
+ err := r.Client.Get(ctx, taKey, targetAllocator)
+ if err != nil {
+ return nil, err
+ }
+ return targetAllocator, nil
+ }
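+ // no label set: generate the target allocator CR from the collector CR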
+ return collector.TargetAllocator(params)
+}
+
// NewReconciler creates a new reconciler for OpenTelemetryCollector objects.
func NewReconciler(p Params) *OpenTelemetryCollectorReconciler {
r := &OpenTelemetryCollectorReconciler{
@@ -195,6 +214,7 @@ func NewReconciler(p Params) *OpenTelemetryCollectorReconciler {
scheme: p.Scheme,
config: p.Config,
recorder: p.Recorder,
+ reviewer: p.Reviewer,
}
return r
}
@@ -212,6 +232,7 @@ func NewReconciler(p Params) *OpenTelemetryCollectorReconciler {
// +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors/finalizers,verbs=get;update;patch
+// +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators,verbs=get;list;watch;create;update;patch;delete
// Reconcile the current state of an OpenTelemetry collector resource with the desired state.
func (r *OpenTelemetryCollectorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
@@ -229,7 +250,7 @@ func (r *OpenTelemetryCollectorReconciler) Reconcile(ctx context.Context, req ct
return ctrl.Result{}, client.IgnoreNotFound(err)
}
- params, err := r.GetParams(instance)
+ params, err := r.GetParams(ctx, instance)
if err != nil {
log.Error(err, "Failed to create manifest.Params")
return ctrl.Result{}, err
diff --git a/controllers/reconcile_test.go b/controllers/reconcile_test.go
index 46b0d38837..a0d6fc3bed 100644
--- a/controllers/reconcile_test.go
+++ b/controllers/reconcile_test.go
@@ -22,7 +22,7 @@ import (
routev1 "github.com/openshift/api/route/v1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
- "gopkg.in/yaml.v2"
+ colfeaturegate "go.opentelemetry.io/collector/featuregate"
appsv1 "k8s.io/api/apps/v1"
autoscalingv2 "k8s.io/api/autoscaling/v2"
v1 "k8s.io/api/core/v1"
@@ -41,14 +41,15 @@ import (
k8sreconcile "sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
+ "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
"github.com/open-telemetry/opentelemetry-operator/controllers"
"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift"
"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus"
autoRBAC "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac"
"github.com/open-telemetry/opentelemetry-operator/internal/config"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests"
- ta "github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator/adapters"
"github.com/open-telemetry/opentelemetry-operator/internal/naming"
+ "github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
)
const (
@@ -75,6 +76,18 @@ var (
type check[T any] func(t *testing.T, params T)
func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) {
+ // enable the CollectorUsesTargetAllocatorCR feature gate, as these tests assume it is on
+ // TODO: drop this after the gate is enabled by default
+ registry := colfeaturegate.GlobalRegistry()
+ current := featuregate.CollectorUsesTargetAllocatorCR.IsEnabled()
+ require.False(t, current, "don't set gates which are enabled by default")
+ err := registry.Set(featuregate.CollectorUsesTargetAllocatorCR.ID(), true)
+ require.NoError(t, err)
+ t.Cleanup(func() {
+ err := registry.Set(featuregate.CollectorUsesTargetAllocatorCR.ID(), current)
+ require.NoError(t, err)
+ })
+
addedMetadataDeployment := testCollectorWithMode("test-deployment", v1alpha1.ModeDeployment)
addedMetadataDeployment.Labels = map[string]string{
labelName: labelVal,
@@ -496,10 +509,7 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) {
assert.NoError(t, err)
assert.True(t, exists)
// Check the TA doesn't exist
- exists, err = populateObjectIfExists(t, &v1.ConfigMap{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
- assert.NoError(t, err)
- assert.False(t, exists)
- exists, err = populateObjectIfExists(t, &appsv1.Deployment{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
+ exists, err = populateObjectIfExists(t, &v1alpha1.TargetAllocator{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
assert.NoError(t, err)
assert.False(t, exists)
},
@@ -516,34 +526,35 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) {
exists, err := populateObjectIfExists(t, &v1.ConfigMap{}, namespacedObjectName(naming.ConfigMap(params.Name, configHash), params.Namespace))
assert.NoError(t, err)
assert.True(t, exists)
- actual := v1.ConfigMap{}
- exists, err = populateObjectIfExists(t, &appsv1.Deployment{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
- assert.NoError(t, err)
- assert.True(t, exists)
- exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
- assert.NoError(t, err)
- assert.True(t, exists)
- exists, err = populateObjectIfExists(t, &v1.ServiceAccount{}, namespacedObjectName(naming.TargetAllocatorServiceAccount(params.Name), params.Namespace))
- assert.NoError(t, err)
- assert.True(t, exists)
- promConfig, err := ta.ConfigToPromConfig(testCollectorAssertNoErr(t, "test-stateful-ta", baseTaImage, promFile).Spec.Config)
- assert.NoError(t, err)
-
- taConfig := make(map[interface{}]interface{})
- taConfig["collector_selector"] = metav1.LabelSelector{
- MatchLabels: map[string]string{
- "app.kubernetes.io/instance": "default.test-stateful-ta",
- "app.kubernetes.io/managed-by": "opentelemetry-operator",
- "app.kubernetes.io/component": "opentelemetry-collector",
- "app.kubernetes.io/part-of": "opentelemetry",
+ actual := v1alpha1.TargetAllocator{}
+ exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(params.Name, params.Namespace))
+ require.NoError(t, err)
+ require.True(t, exists)
+ expected := v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: params.Name,
+ Namespace: params.Namespace,
+ Labels: nil,
+ },
+ Spec: v1alpha1.TargetAllocatorSpec{
+ OpenTelemetryCommonFields: v1beta1.OpenTelemetryCommonFields{},
+ AllocationStrategy: "consistent-hashing",
+ FilterStrategy: "relabel-config",
+ PrometheusCR: v1beta1.TargetAllocatorPrometheusCR{
+ ScrapeInterval: &metav1.Duration{Duration: time.Second * 30},
+ ServiceMonitorSelector: &metav1.LabelSelector{},
+ PodMonitorSelector: &metav1.LabelSelector{},
+ },
},
}
- taConfig["config"] = promConfig["config"]
- taConfig["allocation_strategy"] = "consistent-hashing"
- taConfig["filter_strategy"] = "relabel-config"
- taConfigYAML, _ := yaml.Marshal(taConfig)
- assert.Equal(t, string(taConfigYAML), actual.Data["targetallocator.yaml"])
- assert.NotContains(t, actual.Data["targetallocator.yaml"], "0.0.0.0:10100")
+ assert.Equal(t, expected.Name, actual.Name)
+ assert.Equal(t, expected.Namespace, actual.Namespace)
+ assert.Equal(t, expected.Labels, actual.Labels)
+ assert.Equal(t, baseTaImage, actual.Spec.Image)
+ assert.Equal(t, expected.Spec.AllocationStrategy, actual.Spec.AllocationStrategy)
+ assert.Equal(t, expected.Spec.FilterStrategy, actual.Spec.FilterStrategy)
+ assert.Equal(t, expected.Spec.ScrapeConfigs, actual.Spec.ScrapeConfigs)
},
},
wantErr: assert.NoError,
@@ -558,14 +569,11 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) {
exists, err := populateObjectIfExists(t, &v1.ConfigMap{}, namespacedObjectName(naming.ConfigMap(params.Name, configHash), params.Namespace))
assert.NoError(t, err)
assert.True(t, exists)
- actual := v1.ConfigMap{}
- exists, err = populateObjectIfExists(t, &appsv1.Deployment{}, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
- assert.NoError(t, err)
- assert.True(t, exists)
- exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
- assert.NoError(t, err)
- assert.True(t, exists)
- assert.Contains(t, actual.Data["targetallocator.yaml"], "0.0.0.0:10100")
+ actual := v1alpha1.TargetAllocator{}
+ exists, err = populateObjectIfExists(t, &actual, namespacedObjectName(params.Name, params.Namespace))
+ require.NoError(t, err)
+ require.True(t, exists)
+ assert.Nil(t, actual.Spec.ScrapeConfigs)
},
},
wantErr: assert.NoError,
@@ -575,11 +583,11 @@ func TestOpenTelemetryCollectorReconciler_Reconcile(t *testing.T) {
result: controllerruntime.Result{},
checks: []check[v1alpha1.OpenTelemetryCollector]{
func(t *testing.T, params v1alpha1.OpenTelemetryCollector) {
- actual := appsv1.Deployment{}
- exists, err := populateObjectIfExists(t, &actual, namespacedObjectName(naming.TargetAllocator(params.Name), params.Namespace))
- assert.NoError(t, err)
- assert.True(t, exists)
- assert.Equal(t, actual.Spec.Template.Spec.Containers[0].Image, updatedTaImage)
+ actual := v1alpha1.TargetAllocator{}
+ exists, err := populateObjectIfExists(t, &actual, namespacedObjectName(params.Name, params.Namespace))
+ require.NoError(t, err)
+ require.True(t, exists)
+ assert.Equal(t, updatedTaImage, actual.Spec.Image)
},
},
wantErr: assert.NoError,
diff --git a/controllers/suite_test.go b/controllers/suite_test.go
index 4e56fb16de..1dc118d9dd 100644
--- a/controllers/suite_test.go
+++ b/controllers/suite_test.go
@@ -55,6 +55,7 @@ import (
"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
"github.com/open-telemetry/opentelemetry-operator/internal/autodetect"
+ "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/certmanager"
"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/openshift"
"github.com/open-telemetry/opentelemetry-operator/internal/autodetect/prometheus"
autoRBAC "github.com/open-telemetry/opentelemetry-operator/internal/autodetect/rbac"
@@ -63,7 +64,6 @@ import (
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/collector/testdata"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/manifestutils"
"github.com/open-telemetry/opentelemetry-operator/internal/rbac"
- // +kubebuilder:scaffold:imports
)
var (
@@ -100,6 +100,7 @@ type mockAutoDetect struct {
OpenShiftRoutesAvailabilityFunc func() (openshift.RoutesAvailability, error)
PrometheusCRsAvailabilityFunc func() (prometheus.Availability, error)
RBACPermissionsFunc func(ctx context.Context) (autoRBAC.Availability, error)
+ CertManagerAvailabilityFunc func(ctx context.Context) (certmanager.Availability, error)
}
func (m *mockAutoDetect) FIPSEnabled(ctx context.Context) bool {
@@ -127,6 +128,13 @@ func (m *mockAutoDetect) RBACPermissions(ctx context.Context) (autoRBAC.Availabi
return autoRBAC.NotAvailable, nil
}
+func (m *mockAutoDetect) CertManagerAvailability(ctx context.Context) (certmanager.Availability, error) {
+ if m.CertManagerAvailabilityFunc != nil {
+ return m.CertManagerAvailabilityFunc(ctx)
+ }
+ return certmanager.NotAvailable, nil
+}
+
func TestMain(m *testing.M) {
ctx, cancel = context.WithCancel(context.TODO())
defer cancel()
@@ -191,6 +199,11 @@ func TestMain(m *testing.M) {
os.Exit(1)
}
+ if err = v1alpha1.SetupTargetAllocatorWebhook(mgr, config.New(), reviewer); err != nil {
+ fmt.Printf("failed to SetupWebhookWithManager: %v", err)
+ os.Exit(1)
+ }
+
if err = v1alpha1.SetupOpAMPBridgeWebhook(mgr, config.New()); err != nil {
fmt.Printf("failed to SetupWebhookWithManager: %v", err)
os.Exit(1)
diff --git a/controllers/targetallocator_controller.go b/controllers/targetallocator_controller.go
index 6b748e4535..5ec135ac68 100644
--- a/controllers/targetallocator_controller.go
+++ b/controllers/targetallocator_controller.go
@@ -17,6 +17,8 @@ package controllers
import (
"context"
+ "fmt"
+ "slices"
"github.com/go-logr/logr"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
@@ -24,16 +26,23 @@ import (
corev1 "k8s.io/api/core/v1"
policyV1 "k8s.io/api/policy/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
"github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
"github.com/open-telemetry/opentelemetry-operator/internal/config"
"github.com/open-telemetry/opentelemetry-operator/internal/manifests/targetallocator"
taStatus "github.com/open-telemetry/opentelemetry-operator/internal/status/targetallocator"
+ "github.com/open-telemetry/opentelemetry-operator/pkg/constants"
"github.com/open-telemetry/opentelemetry-operator/pkg/featuregate"
)
@@ -55,7 +64,11 @@ type TargetAllocatorReconcilerParams struct {
Config config.Config
}
-func (r *TargetAllocatorReconciler) getParams(instance v1alpha1.TargetAllocator) targetallocator.Params {
+func (r *TargetAllocatorReconciler) getParams(ctx context.Context, instance v1alpha1.TargetAllocator) (targetallocator.Params, error) {
+ collector, err := r.getCollector(ctx, instance)
+ if err != nil {
+ return targetallocator.Params{}, err
+ }
p := targetallocator.Params{
Config: r.config,
Client: r.Client,
@@ -63,9 +76,47 @@ func (r *TargetAllocatorReconciler) getParams(instance v1alpha1.TargetAllocator)
Scheme: r.scheme,
Recorder: r.recorder,
TargetAllocator: instance,
+ Collector: collector,
}
- return p
+ return p, nil
+}
+
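+// getCollector finds the OpenTelemetryCollector associated with the given TargetAllocator.
+// It first follows the TargetAllocator's owner reference; failing that, it lists collectors
+// in the same namespace labeled with the TargetAllocator's name. More than one matching
+// collector is an error.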
+func (r *TargetAllocatorReconciler) getCollector(ctx context.Context, instance v1alpha1.TargetAllocator) (*v1beta1.OpenTelemetryCollector, error) {
+ var collector v1beta1.OpenTelemetryCollector
+ ownerReferences := instance.GetOwnerReferences()
+ collectorIndex := slices.IndexFunc(ownerReferences, func(reference metav1.OwnerReference) bool {
+ return reference.Kind == "OpenTelemetryCollector"
+ })
+ if collectorIndex != -1 {
+ collectorRef := ownerReferences[collectorIndex]
+ collectorKey := client.ObjectKey{Name: collectorRef.Name, Namespace: instance.GetNamespace()}
+ if err := r.Get(ctx, collectorKey, &collector); err != nil {
+ return nil, fmt.Errorf(
+ "error getting owner for TargetAllocator %s/%s: %w",
+ instance.GetNamespace(), instance.GetName(), err)
+ }
+ return &collector, nil
+ }
+
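+ // no owner reference: fall back to matching collectors by the target allocator label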
+ var collectors v1beta1.OpenTelemetryCollectorList
+ listOpts := []client.ListOption{
+ client.InNamespace(instance.GetNamespace()),
+ client.MatchingLabels{
+ constants.LabelTargetAllocator: instance.GetName(),
+ },
+ }
+ err := r.List(ctx, &collectors, listOpts...)
+ if err != nil {
+ return nil, err
+ }
+ if len(collectors.Items) == 0 {
+ return nil, nil
+ } else if len(collectors.Items) > 1 {
+ return nil, fmt.Errorf("found multiple OpenTelemetry collectors annotated with the same Target Allocator: %s/%s", instance.GetNamespace(), instance.GetName())
+ }
+
+ return &collectors.Items[0], nil
}
// NewTargetAllocatorReconciler creates a new reconciler for TargetAllocator objects.
@@ -85,15 +136,14 @@ func NewTargetAllocatorReconciler(
}
}
-// TODO: Uncomment the lines below after enabling the TA controller in main.go
-// // +kubebuilder:rbac:groups="",resources=pods;configmaps;services;serviceaccounts;persistentvolumeclaims;persistentvolumes,verbs=get;list;watch;create;update;patch;delete
-// // +kubebuilder:rbac:groups="",resources=events,verbs=create;patch
-// // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
-// // +kubebuilder:rbac:groups=policy,resources=poddisruptionbudgets,verbs=get;list;watch;create;update;patch;delete
-// // +kubebuilder:rbac:groups=monitoring.coreos.com,resources=servicemonitors;podmonitors,verbs=get;list;watch;create;update;patch;delete
-// // +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors,verbs=get;list;watch;update;patch
-// // +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators,verbs=get;list;watch;update;patch
-// // +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators/status,verbs=get;update;patch
+// +kubebuilder:rbac:groups="",resources=pods;configmaps;services;serviceaccounts,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch
+// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=policy,resources=poddisruptionbudgets,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=monitoring.coreos.com,resources=servicemonitors;podmonitors,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=opentelemetry.io,resources=opentelemetrycollectors,verbs=get;list;watch;update;patch
+// +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators,verbs=get;list;watch;update;patch
+// +kubebuilder:rbac:groups=opentelemetry.io,resources=targetallocators/status,verbs=get;update;patch
// Reconcile the current state of a TargetAllocator resource with the desired state.
func (r *TargetAllocatorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
@@ -121,32 +171,91 @@ func (r *TargetAllocatorReconciler) Reconcile(ctx context.Context, req ctrl.Requ
return ctrl.Result{}, nil
}
- params := r.getParams(instance)
+ params, err := r.getParams(ctx, instance)
+ if err != nil {
+ return ctrl.Result{}, err
+ }
desiredObjects, buildErr := BuildTargetAllocator(params)
if buildErr != nil {
return ctrl.Result{}, buildErr
}
- err := reconcileDesiredObjects(ctx, r.Client, log, ¶ms.TargetAllocator, params.Scheme, desiredObjects, nil)
+ err = reconcileDesiredObjects(ctx, r.Client, log, ¶ms.TargetAllocator, params.Scheme, desiredObjects, nil)
return taStatus.HandleReconcileStatus(ctx, log, params, err)
}
// SetupWithManager tells the manager what our controller is interested in.
func (r *TargetAllocatorReconciler) SetupWithManager(mgr ctrl.Manager) error {
- builder := ctrl.NewControllerManagedBy(mgr).
+ ctrlBuilder := ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.TargetAllocator{}).
Owns(&corev1.ConfigMap{}).
Owns(&corev1.ServiceAccount{}).
Owns(&corev1.Service{}).
Owns(&appsv1.Deployment{}).
- Owns(&corev1.PersistentVolume{}).
- Owns(&corev1.PersistentVolumeClaim{}).
Owns(&policyV1.PodDisruptionBudget{})
if featuregate.PrometheusOperatorIsAvailable.IsEnabled() {
- builder.Owns(&monitoringv1.ServiceMonitor{})
- builder.Owns(&monitoringv1.PodMonitor{})
+ ctrlBuilder.Owns(&monitoringv1.ServiceMonitor{})
+ ctrlBuilder.Owns(&monitoringv1.PodMonitor{})
}
- return builder.Complete(r)
+ // watch collectors which have the embedded Target Allocator enabled;
+ // we need to do this separately from collector reconciliation, as changes to the collector's
+ // Config do not lead to changes in the generated TargetAllocator CR
+ ctrlBuilder.Watches(
+ &v1beta1.OpenTelemetryCollector{},
+ handler.EnqueueRequestsFromMapFunc(getTargetAllocatorForCollector),
+ builder.WithPredicates(
+ predicate.NewPredicateFuncs(func(object client.Object) bool {
+ collector := object.(*v1beta1.OpenTelemetryCollector)
+ return collector.Spec.TargetAllocator.Enabled
+ }),
+ ),
+ )
+
+ // watch collectors which have the target allocator label
+ collectorSelector := metav1.LabelSelector{
+ MatchExpressions: []metav1.LabelSelectorRequirement{
+ {
+ Key: constants.LabelTargetAllocator,
+ Operator: metav1.LabelSelectorOpExists,
+ },
+ },
+ }
+ selectorPredicate, err := predicate.LabelSelectorPredicate(collectorSelector)
+ if err != nil {
+ return err
+ }
+ ctrlBuilder.Watches(
+ &v1beta1.OpenTelemetryCollector{},
+ handler.EnqueueRequestsFromMapFunc(getTargetAllocatorRequestsFromLabel),
+ builder.WithPredicates(selectorPredicate),
+ )
+
+ return ctrlBuilder.Complete(r)
+}
+
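+// getTargetAllocatorForCollector enqueues a request for the TargetAllocator with the same
+// name and namespace as the collector, matching how the CR generated for an embedded
+// target allocator is named.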
+func getTargetAllocatorForCollector(_ context.Context, collector client.Object) []reconcile.Request {
+ return []reconcile.Request{
+ {
+ NamespacedName: types.NamespacedName{
+ Name: collector.GetName(),
+ Namespace: collector.GetNamespace(),
+ },
+ },
+ }
+}
+
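+// getTargetAllocatorRequestsFromLabel enqueues a request for the TargetAllocator named by
+// the collector's target allocator label, if one is set.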
+func getTargetAllocatorRequestsFromLabel(_ context.Context, collector client.Object) []reconcile.Request {
+ if taName, ok := collector.GetLabels()[constants.LabelTargetAllocator]; ok {
+ return []reconcile.Request{
+ {
+ NamespacedName: types.NamespacedName{
+ Name: taName,
+ Namespace: collector.GetNamespace(),
+ },
+ },
+ }
+ }
+ return []reconcile.Request{}
}
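
For readers unfamiliar with the watch pattern used in `SetupWithManager` above, here is a minimal, self-contained controller-runtime sketch of the same technique: a label-selector predicate gates a secondary watch, and a map function translates each labelled object into reconcile requests for the object its label names. The package name, `labelKey`, and the ConfigMap secondary type are illustrative stand-ins, not part of this PR:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

const labelKey = "example.io/target" // hypothetical label key

// mapByLabel enqueues a reconcile request for the object named by the label
// value, in the watched object's namespace (mirrors getTargetAllocatorRequestsFromLabel).
func mapByLabel(_ context.Context, obj client.Object) []reconcile.Request {
	name, ok := obj.GetLabels()[labelKey]
	if !ok {
		return nil
	}
	return []reconcile.Request{{
		NamespacedName: types.NamespacedName{Name: name, Namespace: obj.GetNamespace()},
	}}
}

// addLabelWatch wires a secondary watch gated by a label-exists predicate,
// so only labelled objects ever reach mapByLabel.
func addLabelWatch(b *builder.Builder) (*builder.Builder, error) {
	sel := metav1.LabelSelector{
		MatchExpressions: []metav1.LabelSelectorRequirement{
			{Key: labelKey, Operator: metav1.LabelSelectorOpExists},
		},
	}
	pred, err := predicate.LabelSelectorPredicate(sel)
	if err != nil {
		return nil, err
	}
	return b.Watches(
		&corev1.ConfigMap{}, // any secondary type works; ConfigMap keeps the sketch self-contained
		handler.EnqueueRequestsFromMapFunc(mapByLabel),
		builder.WithPredicates(pred),
	), nil
}
```

The predicate filters events before the map function ever runs, so unlabelled objects never enqueue work; that is the same reason the reconciler above gates its collector watch on `constants.LabelTargetAllocator`.
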
diff --git a/controllers/targetallocator_reconciler_test.go b/controllers/targetallocator_reconciler_test.go
new file mode 100644
index 0000000000..cd8a889765
--- /dev/null
+++ b/controllers/targetallocator_reconciler_test.go
@@ -0,0 +1,179 @@
+// Copyright The OpenTelemetry Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controllers
+
+import (
+ "context"
+ "testing"
+
+ routev1 "github.com/openshift/api/route/v1"
+ monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+ networkingv1 "k8s.io/api/networking/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/types"
+ utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/client-go/kubernetes/scheme"
+ "k8s.io/client-go/tools/record"
+ "sigs.k8s.io/controller-runtime/pkg/client/fake"
+ logf "sigs.k8s.io/controller-runtime/pkg/log"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ "github.com/open-telemetry/opentelemetry-operator/apis/v1alpha1"
+ "github.com/open-telemetry/opentelemetry-operator/apis/v1beta1"
+ "github.com/open-telemetry/opentelemetry-operator/internal/config"
+ "github.com/open-telemetry/opentelemetry-operator/pkg/constants"
+)
+
+var testLogger = logf.Log.WithName("targetallocator-controller-unit-tests")
+
+var (
+ testScheme *runtime.Scheme = scheme.Scheme
+)
+
+func init() {
+ utilruntime.Must(monitoringv1.AddToScheme(testScheme))
+ utilruntime.Must(networkingv1.AddToScheme(testScheme))
+ utilruntime.Must(routev1.AddToScheme(testScheme))
+ utilruntime.Must(v1alpha1.AddToScheme(testScheme))
+ utilruntime.Must(v1beta1.AddToScheme(testScheme))
+}
+
+func TestTargetAllocatorReconciler_GetCollector(t *testing.T) {
+ testCollector := &v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Labels: map[string]string{
+ constants.LabelTargetAllocator: "label-ta",
+ },
+ },
+ }
+ fakeClient := fake.NewFakeClient(testCollector)
+ reconciler := NewTargetAllocatorReconciler(
+ fakeClient,
+ testScheme,
+ record.NewFakeRecorder(10),
+ config.New(),
+ testLogger,
+ )
+
+ t.Run("not owned by a collector", func(t *testing.T) {
+ ta := v1alpha1.TargetAllocator{}
+ collector, err := reconciler.getCollector(context.Background(), ta)
+ require.NoError(t, err)
+ assert.Nil(t, collector)
+ })
+ t.Run("owned by a collector", func(t *testing.T) {
+ ta := v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ OwnerReferences: []metav1.OwnerReference{
+ {
+ Kind: "OpenTelemetryCollector",
+ Name: testCollector.Name,
+ },
+ },
+ },
+ }
+ collector, err := reconciler.getCollector(context.Background(), ta)
+ require.NoError(t, err)
+ assert.Equal(t, testCollector, collector)
+ })
+ t.Run("owning collector doesn't exist", func(t *testing.T) {
+ ta := v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "default",
+ OwnerReferences: []metav1.OwnerReference{
+ {
+ Kind: "OpenTelemetryCollector",
+ Name: "non_existent",
+ },
+ },
+ },
+ }
+ collector, err := reconciler.getCollector(context.Background(), ta)
+ assert.Nil(t, collector)
+ assert.Errorf(t, err, "error getting owner for TargetAllocator default/test: opentelemetrycollectors.opentelemetry.io \"non_existent\" not found")
+ })
+ t.Run("collector attached by label", func(t *testing.T) {
+ ta := v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "label-ta",
+ },
+ }
+ collector, err := reconciler.getCollector(context.Background(), ta)
+ require.NoError(t, err)
+ assert.Equal(t, testCollector, collector)
+ })
+ t.Run("multiple collectors attached by label", func(t *testing.T) {
+ testCollector2 := testCollector.DeepCopy()
+ testCollector2.SetName("test2")
+ fakeClient := fake.NewFakeClient(testCollector, testCollector2)
+ reconciler := NewTargetAllocatorReconciler(
+ fakeClient,
+ testScheme,
+ record.NewFakeRecorder(10),
+ config.New(),
+ testLogger,
+ )
+ ta := v1alpha1.TargetAllocator{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "label-ta",
+ },
+ }
+ collector, err := reconciler.getCollector(context.Background(), ta)
+ assert.Nil(t, collector)
+ assert.Errorf(t, err, "found multiple OpenTelemetry collectors annotated with the same Target Allocator: %s/%s", ta.Namespace, ta.Name)
+ })
+}
+
+func TestGetTargetAllocatorForCollector(t *testing.T) {
+ testCollector := &v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "default",
+ },
+ }
+ requests := getTargetAllocatorForCollector(context.Background(), testCollector)
+ expected := []reconcile.Request{{
+ NamespacedName: types.NamespacedName{
+ Name: "test",
+ Namespace: "default",
+ },
+ }}
+ assert.Equal(t, expected, requests)
+}
+
+func TestGetTargetAllocatorRequestsFromLabel(t *testing.T) {
+ testCollector := &v1beta1.OpenTelemetryCollector{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "test",
+ Namespace: "default",
+ Labels: map[string]string{
+ constants.LabelTargetAllocator: "label-ta",
+ },
+ },
+ }
+ requests := getTargetAllocatorRequestsFromLabel(context.Background(), testCollector)
+ expected := []reconcile.Request{{
+ NamespacedName: types.NamespacedName{
+ Name: "label-ta",
+ Namespace: "default",
+ },
+ }}
+ assert.Equal(t, expected, requests)
+}
diff --git a/docs/api.md b/docs/api.md
index 24d16da3f4..9601cca2fd 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -253,6 +253,14 @@ If the former var had been defined, then the other vars would be ignored.
Apache HTTPD server version. One of 2.4 or 2.2. Default is 2.4
Name | -Type | -Description | -Required | -
---|---|---|---|
useLabelsForResourceAttributes | -boolean | -
- UseLabelsForResourceAttributes defines whether to use common labels for resource attributes:
- - `app.kubernetes.io/name` becomes `service.name`
- - `app.kubernetes.io/version` becomes `service.version`
- - `app.kubernetes.io/part-of` becomes `service.namespace`
- - `app.kubernetes.io/instance` becomes `service.instance.id` - |
- false | -
env | -[]object | -
- Env defines DotNet specific env vars. There are four layers for env vars' definitions and
-the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`.
-If the former var had been defined, then the other vars would be ignored. - |
- false | -||
image | -string | -
- Image is a container image with DotNet SDK and auto-instrumentation. - |
- false | -||
resourceRequirements | +spec | object |
- Resources describes the compute resource requirements. + The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. |
- false | +true |
volumeLimitSize | -int or string | +metadata | +object |
- VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
-The default size is 200Mi. + May contain labels and annotations that will be copied into the PVC +when creating it. No other fields are allowed and will be rejected during +validation. |
false |
name | -string | -
- Name of the environment variable. Must be a C_IDENTIFIER. - |
- true | -||
value | -string | +accessModes | +[]string |
- Variable references $(VAR_NAME) are expanded
-using the previously defined environment variables in the container and
-any service environment variables. If a variable cannot be resolved,
-the reference in the input string will be unchanged. Double $$ are reduced
-to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.
-"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
-Escaped references will never be expanded, regardless of whether the variable
-exists or not.
-Defaults to "". + accessModes contains the desired access modes the volume should have. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 |
false |
valueFrom | +dataSource | object |
- Source for the environment variable's value. Cannot be used if value is not empty. - |
- false | -
Name | -Type | -Description | -Required | -|
---|---|---|---|---|
configMapKeyRef | -object | -
- Selects a key of a ConfigMap. + dataSource field can be used to specify either: +* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) +* An existing PVC (PersistentVolumeClaim) +If the provisioner or an external controller can support the specified data source, +it will create a new volume based on the contents of the specified data source. +When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, +and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. +If the namespace is specified, then dataSourceRef will not be copied to dataSource. |
false | |
fieldRef | +dataSourceRef | object |
- Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[' + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty +volume is desired. This may be any object from a non-empty API group (non +core object) or a PersistentVolumeClaim object. +When this field is specified, volume binding will only succeed if the type of +the specified object matches some installed volume populator or dynamic +provisioner. +This field will replace the functionality of the dataSource field and as such +if both fields are non-empty, they must have the same value. For backwards +compatibility, when namespace isn't specified in dataSourceRef, +both fields (dataSource and dataSourceRef) will be set to the same +value automatically if one of them is empty and the other is non-empty. +When namespace is specified in dataSourceRef, +dataSource isn't set to the same value and must be empty. +There are three important differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. |
false |
resourceFieldRef | +resources | object |
- Selects a resource of the container: only resources limits and requests
-(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + resources represents the minimum resources the volume should have. +If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements +that are lower than previous value but must still be higher than capacity recorded in the +status field of the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources |
false |
secretKeyRef | +selector | object |
- Selects a key of a secret in the pod's namespace + selector is a label query over volumes to consider for binding. |
false | -
Name | -Type | -Description | -Required | -||
---|---|---|---|---|---|
key | -string | -
- The key to select. - |
- true | ||
name | +storageClassName | string |
- Name of the referent.
-This field is effectively required, but due to backwards compatibility is
-allowed to be empty. Instances of this type with an empty value here are
-almost certainly wrong.
-More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names - - Default: + storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 |
false | |
optional | -boolean | +volumeAttributesClassName | +string |
- Specify whether the ConfigMap or its key must be defined + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. +If specified, the CSI driver will create or update the volume with the attributes defined +in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, +it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass +will be applied to the claim but it's not allowed to reset this field to empty string once it is set. +If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass +will be set by the persistentvolume controller if it exists. +If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be +set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource +exists. +More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ +(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). |
false | -
Name | -Type | -Description | -Required | -|
---|---|---|---|---|
fieldPath | +||||
volumeMode | string |
- Path of the field to select in the specified API version. + volumeMode defines what type of volume is required by the claim. +Value of Filesystem is implied when not included in claim spec. |
- true | +false |
apiVersion | +volumeName | string |
- Version of the schema the FieldPath is written in terms of, defaults to "v1". + volumeName is the binding reference to the PersistentVolume backing this claim. |
false |
resource | +kind | string |
- Required: resource to select + Kind is the type of resource being referenced |
true | |
containerName | +name | string |
- Container name: required for volumes, optional for env vars + Name is the name of resource being referenced |
- false | +true |
divisor | -int or string | +apiGroup | +string |
- Specifies the output format of the exposed resources, defaults to "1" + APIGroup is the group for the resource being referenced. +If APIGroup is not specified, the specified Kind must be in the core API group. +For any other third-party types, APIGroup is required. |
false |
key | +kind | string |
- The key of the secret to select from. Must be a valid secret key. + Kind is the type of resource being referenced |
true | |
name | string |
- Name of the referent.
-This field is effectively required, but due to backwards compatibility is
-allowed to be empty. Instances of this type with an empty value here are
-almost certainly wrong.
-More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names - - Default: + Name is the name of resource being referenced + |
+ true | +||
apiGroup | +string | +
+ APIGroup is the group for the resource being referenced.
+If APIGroup is not specified, the specified Kind must be in the core API group.
+For any other third-party types, APIGroup is required. |
false | ||
optional | -boolean | +namespace | +string |
- Specify whether the Secret or its key must be defined + Namespace is the namespace of resource being referenced +Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. +(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. |
false |
claims | -[]object | -
- Claims lists the names of resources, defined in spec.resourceClaims,
-that are used by this container.
-
-This is an alpha field and requires enabling the
-DynamicResourceAllocation feature gate.
-
-This field is immutable. It can only be set for containers. - |
- false | -
limits | map[string]int or string | @@ -1297,12 +1227,12 @@ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-co |
name | +matchExpressions | +[]object | +
+ matchExpressions is a list of label selector requirements. The requirements are ANDed. + |
+ false | +
matchLabels | +map[string]string | +
+ matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed. + |
+ false | +
Name | +Type | +Description | +Required | +|
---|---|---|---|---|
key | string |
- Name must match the name of one entry in pod.spec.resourceClaims of
-the Pod where this field is used. It makes that resource available
-inside a container. + key is the label key that the selector applies to. |
true | |
request | +operator | string |
- Request is the name chosen for a request in the referenced claim.
-If empty, everything from the claim is made available, otherwise
-only the result of this request. + operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. + |
+ true | +
values | +[]string | +
+ values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch. |
false |
Name | +Type | +Description | +Required | +
---|---|---|---|
annotations | +map[string]string | +
+ + |
+ false | +
finalizers | +[]string | +
+ + |
+ false | +
labels | +map[string]string | +
+ + |
+ false | +
name | +string | +
+ + |
+ false | +
namespace | +string | +
+ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
useLabelsForResourceAttributes | +boolean | +
+ UseLabelsForResourceAttributes defines whether to use common labels for resource attributes:
+ - `app.kubernetes.io/name` becomes `service.name`
+ - `app.kubernetes.io/version` becomes `service.version`
+ - `app.kubernetes.io/part-of` becomes `service.namespace`
+ - `app.kubernetes.io/instance` becomes `service.instance.id` + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
env | +[]object | +
+ Env defines DotNet specific env vars. There are four layers for env vars' definitions and
+the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`.
+If the former var had been defined, then the other vars would be ignored. + |
+ false | +
image | +string | +
+ Image is a container image with DotNet SDK and auto-instrumentation. + |
+ false | +
resourceRequirements | +object | +
+ Resources describes the compute resource requirements. + |
+ false | +
volumeClaimTemplate | +object | +
+ VolumeClaimTemplate defines a ephemeral volume used for auto-instrumentation.
+If omitted, an emptyDir is used with size limit VolumeSizeLimit + |
+ false | +
volumeLimitSize | +int or string | +
+ VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
+The default size is 200Mi. + |
+ false | +
false | |||
valueFrom | +valueFrom | object |
Source for the environment variable's value. Cannot be used if value is not empty. @@ -1384,8 +1505,8 @@ Defaults to "". |
endpoint | +claims | +[]object | +
+ Claims lists the names of resources, defined in spec.resourceClaims,
+that are used by this container.
+
+This is an alpha field and requires enabling the
+DynamicResourceAllocation feature gate.
+
+This field is immutable. It can only be set for containers. + |
+ false | +
limits | +map[string]int or string | +
+ Limits describes the maximum amount of compute resources allowed.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +|
requests | +map[string]int or string | +
+ Requests describes the minimum amount of compute resources required.
+If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
+otherwise to an implementation-defined value. Requests cannot exceed Limits.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
name | +string | +
+ Name must match the name of one entry in pod.spec.resourceClaims of
+the Pod where this field is used. It makes that resource available
+inside a container. + |
+ true | +
request | string |
- Endpoint is address of the collector with OTLP endpoint. + Request is the name chosen for a request in the referenced claim. +If empty, everything from the claim is made available, otherwise +only the result of this request. |
false |
env | -[]object | +spec | +object |
- Env defines Go specific env vars. There are four layers for env vars' definitions and
-the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`.
-If the former var had been defined, then the other vars would be ignored. + The specification for the PersistentVolumeClaim. The entire content is +copied unchanged into the PVC that gets created from this +template. The same fields as in a PersistentVolumeClaim +are also valid here. + |
+ true | +
metadata | +object | +
+ May contain labels and annotations that will be copied into the PVC
+when creating it. No other fields are allowed and will be rejected during
+validation. + |
+ false | +
Name | +Type | +Description | +Required | +|
---|---|---|---|---|
accessModes | +[]string | +
+ accessModes contains the desired access modes the volume should have.
+More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 |
false | |
image | +dataSource | +object | +
+ dataSource field can be used to specify either:
+* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
+* An existing PVC (PersistentVolumeClaim)
+If the provisioner or an external controller can support the specified data source,
+it will create a new volume based on the contents of the specified data source.
+When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef,
+and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified.
+If the namespace is specified, then dataSourceRef will not be copied to dataSource. + |
+ false | +
dataSourceRef | +object | +
+ dataSourceRef specifies the object from which to populate the volume with data, if a non-empty
+volume is desired. This may be any object from a non-empty API group (non
+core object) or a PersistentVolumeClaim object.
+When this field is specified, volume binding will only succeed if the type of
+the specified object matches some installed volume populator or dynamic
+provisioner.
+This field will replace the functionality of the dataSource field and as such
+if both fields are non-empty, they must have the same value. For backwards
+compatibility, when namespace isn't specified in dataSourceRef,
+both fields (dataSource and dataSourceRef) will be set to the same
+value automatically if one of them is empty and the other is non-empty.
+When namespace is specified in dataSourceRef,
+dataSource isn't set to the same value and must be empty.
+There are three important differences between dataSource and dataSourceRef:
+* While dataSource only allows two specific types of objects, dataSourceRef
+ allows any non-core object, as well as PersistentVolumeClaim objects. + |
+ false | +|
resources | +object | +
+ resources represents the minimum resources the volume should have.
+If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements
+that are lower than previous value but must still be higher than capacity recorded in the
+status field of the claim.
+More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + |
+ false | +|
selector | +object | +
+ selector is a label query over volumes to consider for binding. + |
+ false | +|
storageClassName | string |
- Image is a container image with Go SDK and auto-instrumentation. + storageClassName is the name of the StorageClass required by the claim. +More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + |
+ false | +|
volumeAttributesClassName | +string | +
+ volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.
+If specified, the CSI driver will create or update the volume with the attributes defined
+in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,
+it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass
+will be applied to the claim but it's not allowed to reset this field to empty string once it is set.
+If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass
+will be set by the persistentvolume controller if it exists.
+If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be
+set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource
+exists.
+More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/
+(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). + |
+ false | +|
volumeMode | +string | +
+ volumeMode defines what type of volume is required by the claim.
+Value of Filesystem is implied when not included in claim spec. + |
+ false | +|
volumeName | +string | +
+ volumeName is the binding reference to the PersistentVolume backing this claim. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
kind | +string | +
+ Kind is the type of resource being referenced + |
+ true | +
name | +string | +
+ Name is the name of resource being referenced + |
+ true | +
apiGroup | +string | +
+ APIGroup is the group for the resource being referenced.
+If APIGroup is not specified, the specified Kind must be in the core API group.
+For any other third-party types, APIGroup is required. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
kind | +string | +
+ Kind is the type of resource being referenced + |
+ true | +
name | +string | +
+ Name is the name of resource being referenced + |
+ true | +
apiGroup | +string | +
+ APIGroup is the group for the resource being referenced.
+If APIGroup is not specified, the specified Kind must be in the core API group.
+For any other third-party types, APIGroup is required. + |
+ false | +
namespace | +string | +
+ Namespace is the namespace of resource being referenced
+Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details.
+(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
limits | +map[string]int or string | +
+ Limits describes the maximum amount of compute resources allowed.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
requests | +map[string]int or string | +
+ Requests describes the minimum amount of compute resources required.
+If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
+otherwise to an implementation-defined value. Requests cannot exceed Limits.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
matchExpressions | +[]object | +
+ matchExpressions is a list of label selector requirements. The requirements are ANDed. + |
+ false | +
matchLabels | +map[string]string | +
+ matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ key is the label key that the selector applies to. + |
+ true | +
operator | +string | +
+ operator represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists and DoesNotExist. + |
+ true | +
values | +[]string | +
+ values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
annotations | +map[string]string | +
+ + |
+ false | +
finalizers | +[]string | +
+ + |
+ false | +
labels | +map[string]string | +
+ + |
+ false | +
name | +string | +
+ + |
+ false | +
namespace | +string | +
+ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
name | +string | +
+ Name of the environment variable. Must be a C_IDENTIFIER. + |
+ true | +
value | +string | +
+ Variable references $(VAR_NAME) are expanded
+using the previously defined environment variables in the container and
+any service environment variables. If a variable cannot be resolved,
+the reference in the input string will be unchanged. Double $$ are reduced
+to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.
+"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
+Escaped references will never be expanded, regardless of whether the variable
+exists or not.
+Defaults to "". + |
+ false | +
valueFrom | +object | +
+ Source for the environment variable's value. Cannot be used if value is not empty. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
configMapKeyRef | +object | +
+ Selects a key of a ConfigMap. + |
+ false | +
fieldRef | +object | +
+ Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[' + |
+ false | +
resourceFieldRef | +object | +
+ Selects a resource of the container: only resources limits and requests
+(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + |
+ false | +
secretKeyRef | +object | +
+ Selects a key of a secret in the pod's namespace + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ The key to select. + |
+ true | +
name | +string | +
+ Name of the referent.
+This field is effectively required, but due to backwards compatibility is
+allowed to be empty. Instances of this type with an empty value here are
+almost certainly wrong.
+More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + + Default: + |
+ false | +
optional | +boolean | +
+ Specify whether the ConfigMap or its key must be defined + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
fieldPath | +string | +
+ Path of the field to select in the specified API version. + |
+ true | +
apiVersion | +string | +
+ Version of the schema the FieldPath is written in terms of, defaults to "v1". + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
resource | +string | +
+ Required: resource to select + |
+ true | +
containerName | +string | +
+ Container name: required for volumes, optional for env vars + |
+ false | +
divisor | +int or string | +
+ Specifies the output format of the exposed resources, defaults to "1" + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ The key of the secret to select from. Must be a valid secret key. + |
+ true | +
name | +string | +
+ Name of the referent.
+This field is effectively required, but due to backwards compatibility is
+allowed to be empty. Instances of this type with an empty value here are
+almost certainly wrong.
+More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + + Default: + |
+ false | +
optional | +boolean | +
+ Specify whether the Secret or its key must be defined + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
endpoint | +string | +
+ Endpoint is address of the collector with OTLP endpoint.
+If the endpoint defines https:// scheme TLS has to be specified. + |
+ false | +
tls | +object | +
+ TLS defines certificates for TLS.
+TLS needs to be enabled by specifying https:// scheme in the Endpoint. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
ca_file | +string | +
+ CA defines the key of certificate (e.g. ca.crt) in the configmap map, secret or absolute path to a certificate.
+The absolute path can be used when certificate is already present on the workload filesystem e.g.
+/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt + |
+ false | +
cert_file | +string | +
+ Cert defines the key (e.g. tls.crt) of the client certificate in the secret or absolute path to a certificate.
+The absolute path can be used when certificate is already present on the workload filesystem. + |
+ false | +
configMapName | +string | +
+ ConfigMapName defines configmap name with CA certificate. If it is not defined CA certificate will be
+used from the secret defined in SecretName. + |
+ false | +
key_file | +string | +
+ Key defines a key (e.g. tls.key) of the private key in the secret or absolute path to a certificate.
+The absolute path can be used when certificate is already present on the workload filesystem. + |
+ false | +
secretName | +string | +
+ SecretName defines secret name that will be used to configure TLS on the exporter.
+It is user responsibility to create the secret in the namespace of the workload.
+The secret must contain client certificate (Cert) and private key (Key).
+The CA certificate might be defined in the secret or in the config map. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
env | +[]object | +
+ Env defines Go specific env vars. There are four layers for env vars' definitions and
+the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`.
+If the former var had been defined, then the other vars would be ignored. + |
+ false | +
image | +string | +
+ Image is a container image with Go SDK and auto-instrumentation. + |
+ false | +
resourceRequirements | +object | +
+ Resources describes the compute resource requirements. + |
+ false | +
volumeClaimTemplate | +object | +
+ VolumeClaimTemplate defines a ephemeral volume used for auto-instrumentation.
+If omitted, an emptyDir is used with size limit VolumeSizeLimit + |
+ false | +
volumeLimitSize | +int or string | +
+ VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
+The default size is 200Mi. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
name | +string | +
+ Name of the environment variable. Must be a C_IDENTIFIER. + |
+ true | +
value | +string | +
+ Variable references $(VAR_NAME) are expanded
+using the previously defined environment variables in the container and
+any service environment variables. If a variable cannot be resolved,
+the reference in the input string will be unchanged. Double $$ are reduced
+to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.
+"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
+Escaped references will never be expanded, regardless of whether the variable
+exists or not.
+Defaults to "". + |
+ false | +
valueFrom | +object | +
+ Source for the environment variable's value. Cannot be used if value is not empty. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
configMapKeyRef | +object | +
+ Selects a key of a ConfigMap. + |
+ false | +
fieldRef | +object | +
+ Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[' + |
+ false | +
resourceFieldRef | +object | +
+ Selects a resource of the container: only resources limits and requests
+(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + |
+ false | +
secretKeyRef | +object | +
+ Selects a key of a secret in the pod's namespace + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ The key to select. + |
+ true | +
name | +string | +
+ Name of the referent.
+This field is effectively required, but due to backwards compatibility is
+allowed to be empty. Instances of this type with an empty value here are
+almost certainly wrong.
+More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + + Default: + |
+ false | +
optional | +boolean | +
+ Specify whether the ConfigMap or its key must be defined + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
fieldPath | +string | +
+ Path of the field to select in the specified API version. + |
+ true | +
apiVersion | +string | +
+ Version of the schema the FieldPath is written in terms of, defaults to "v1". + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
resource | +string | +
+ Required: resource to select + |
+ true | +
containerName | +string | +
+ Container name: required for volumes, optional for env vars + |
+ false | +
divisor | +int or string | +
+ Specifies the output format of the exposed resources, defaults to "1" + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ The key of the secret to select from. Must be a valid secret key. + |
+ true | +
name | +string | +
+ Name of the referent.
+This field is effectively required, but due to backwards compatibility is
+allowed to be empty. Instances of this type with an empty value here are
+almost certainly wrong.
+More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + + Default: + |
+ false | +
optional | +boolean | +
+ Specify whether the Secret or its key must be defined + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
claims | +[]object | +
+ Claims lists the names of resources, defined in spec.resourceClaims,
+that are used by this container.
+
+This is an alpha field and requires enabling the
+DynamicResourceAllocation feature gate.
+
+This field is immutable. It can only be set for containers. + |
+ false | +
limits | +map[string]int or string | +
+ Limits describes the maximum amount of compute resources allowed.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
requests | +map[string]int or string | +
+ Requests describes the minimum amount of compute resources required.
+If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
+otherwise to an implementation-defined value. Requests cannot exceed Limits.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
name | +string | +
+ Name must match the name of one entry in pod.spec.resourceClaims of
+the Pod where this field is used. It makes that resource available
+inside a container. + |
+ true | +
request | +string | +
+ Request is the name chosen for a request in the referenced claim.
+If empty, everything from the claim is made available, otherwise
+only the result of this request. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
spec | +object | +
+ The specification for the PersistentVolumeClaim. The entire content is
+copied unchanged into the PVC that gets created from this
+template. The same fields as in a PersistentVolumeClaim
+are also valid here. + |
+ true | +
metadata | +object | +
+ May contain labels and annotations that will be copied into the PVC
+when creating it. No other fields are allowed and will be rejected during
+validation. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
accessModes | +[]string | +
+ accessModes contains the desired access modes the volume should have.
+More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + |
+ false | +
dataSource | +object | +
+ dataSource field can be used to specify either:
+* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
+* An existing PVC (PersistentVolumeClaim)
+If the provisioner or an external controller can support the specified data source,
+it will create a new volume based on the contents of the specified data source.
+When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef,
+and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified.
+If the namespace is specified, then dataSourceRef will not be copied to dataSource. + |
+ false | +
dataSourceRef | +object | +
+ dataSourceRef specifies the object from which to populate the volume with data, if a non-empty
+volume is desired. This may be any object from a non-empty API group (non
+core object) or a PersistentVolumeClaim object.
+When this field is specified, volume binding will only succeed if the type of
+the specified object matches some installed volume populator or dynamic
+provisioner.
+This field will replace the functionality of the dataSource field and as such
+if both fields are non-empty, they must have the same value. For backwards
+compatibility, when namespace isn't specified in dataSourceRef,
+both fields (dataSource and dataSourceRef) will be set to the same
+value automatically if one of them is empty and the other is non-empty.
+When namespace is specified in dataSourceRef,
+dataSource isn't set to the same value and must be empty.
+There are three important differences between dataSource and dataSourceRef:
+* While dataSource only allows two specific types of objects, dataSourceRef
+ allows any non-core object, as well as PersistentVolumeClaim objects. + |
+ false | +
resources | +object | +
+ resources represents the minimum resources the volume should have.
+If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements
+that are lower than previous value but must still be higher than capacity recorded in the
+status field of the claim.
+More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + |
+ false | +
selector | +object | +
+ selector is a label query over volumes to consider for binding. + |
+ false | +
storageClassName | +string | +
+ storageClassName is the name of the StorageClass required by the claim.
+More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + |
+ false | +
volumeAttributesClassName | +string | +
+ volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.
+If specified, the CSI driver will create or update the volume with the attributes defined
+in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,
+it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass
+will be applied to the claim but it's not allowed to reset this field to empty string once it is set.
+If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass
+will be set by the persistentvolume controller if it exists.
+If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be
+set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource
+exists.
+More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/
+(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). + |
+ false | +
volumeMode | +string | +
+ volumeMode defines what type of volume is required by the claim.
+Value of Filesystem is implied when not included in claim spec. + |
+ false | +
volumeName | +string | +
+ volumeName is the binding reference to the PersistentVolume backing this claim. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
kind | +string | +
+ Kind is the type of resource being referenced + |
+ true | +
name | +string | +
+ Name is the name of resource being referenced + |
+ true | +
apiGroup | +string | +
+ APIGroup is the group for the resource being referenced.
+If APIGroup is not specified, the specified Kind must be in the core API group.
+For any other third-party types, APIGroup is required. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
kind | +string | +
+ Kind is the type of resource being referenced + |
+ true | +
name | +string | +
+ Name is the name of resource being referenced + |
+ true | +
apiGroup | +string | +
+ APIGroup is the group for the resource being referenced.
+If APIGroup is not specified, the specified Kind must be in the core API group.
+For any other third-party types, APIGroup is required. + |
+ false | +
namespace | +string | +
+ Namespace is the namespace of resource being referenced
+Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details.
+(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
limits | +map[string]int or string | +
+ Limits describes the maximum amount of compute resources allowed.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
requests | +map[string]int or string | +
+ Requests describes the minimum amount of compute resources required.
+If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
+otherwise to an implementation-defined value. Requests cannot exceed Limits.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
matchExpressions | +[]object | +
+ matchExpressions is a list of label selector requirements. The requirements are ANDed. + |
+ false | +
matchLabels | +map[string]string | +
+ matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ key is the label key that the selector applies to. + |
+ true | +
operator | +string | +
+ operator represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists and DoesNotExist. + |
+ true | +
values | +[]string | +
+ values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
annotations | +map[string]string | +
+ + |
+ false | +
finalizers | +[]string | +
+ + |
+ false | +
labels | +map[string]string | +
+ + |
+ false | +
name | +string | +
+ + |
+ false | +
namespace | +string | +
+ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
env | +[]object | +
+ Env defines java specific env vars. There are four layers for env vars' definitions and
+the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`.
+If the former var had been defined, then the other vars would be ignored. + |
+ false | +
extensions | +[]object | +
+ Extensions defines java specific extensions.
+All extensions are copied to a single directory; if a JAR with the same name exists, it will be overwritten. + |
+ false | +
image | +string | +
+ Image is a container image with javaagent auto-instrumentation JAR. + |
+ false | +
resources | +object | +
+ Resources describes the compute resource requirements. + |
+ false | +
volumeClaimTemplate | +object | +
+ VolumeClaimTemplate defines a ephemeral volume used for auto-instrumentation.
+If omitted, an emptyDir is used with size limit VolumeSizeLimit + |
+ false | +
volumeLimitSize | +int or string | +
+ VolumeSizeLimit defines size limit for volume used for auto-instrumentation.
+The default size is 200Mi. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
name | +string | +
+ Name of the environment variable. Must be a C_IDENTIFIER. + |
+ true | +
value | +string | +
+ Variable references $(VAR_NAME) are expanded
+using the previously defined environment variables in the container and
+any service environment variables. If a variable cannot be resolved,
+the reference in the input string will be unchanged. Double $$ are reduced
+to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.
+"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
+Escaped references will never be expanded, regardless of whether the variable
+exists or not.
+Defaults to "". + |
+ false | +
valueFrom | +object | +
+ Source for the environment variable's value. Cannot be used if value is not empty. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
configMapKeyRef | +object | +
+ Selects a key of a ConfigMap. + |
+ false | +
fieldRef | +object | +
+ Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[' + |
+ false | +
resourceFieldRef | +object | +
+ Selects a resource of the container: only resources limits and requests
+(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + |
+ false | +
secretKeyRef | +object | +
+ Selects a key of a secret in the pod's namespace + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ The key to select. + |
+ true | +
name | +string | +
+ Name of the referent.
+This field is effectively required, but due to backwards compatibility is
+allowed to be empty. Instances of this type with an empty value here are
+almost certainly wrong.
+More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + + Default: + |
+ false | +
optional | +boolean | +
+ Specify whether the ConfigMap or its key must be defined + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
fieldPath | +string | +
+ Path of the field to select in the specified API version. + |
+ true | +
apiVersion | +string | +
+ Version of the schema the FieldPath is written in terms of, defaults to "v1". + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
resource | +string | +
+ Required: resource to select + |
+ true | +
containerName | +string | +
+ Container name: required for volumes, optional for env vars + |
+ false | +
divisor | +int or string | +
+ Specifies the output format of the exposed resources, defaults to "1" + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
key | +string | +
+ The key of the secret to select from. Must be a valid secret key. + |
+ true | +
name | +string | +
+ Name of the referent.
+This field is effectively required, but due to backwards compatibility is
+allowed to be empty. Instances of this type with an empty value here are
+almost certainly wrong.
+More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + + Default: + |
+ false | +
optional | +boolean | +
+ Specify whether the Secret or its key must be defined + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
dir | +string | +
+ Dir is a directory with extensions auto-instrumentation JAR. + |
+ true | +
image | +string | +
+ Image is a container image with extensions auto-instrumentation JAR. + |
+ true | +
Name | +Type | +Description | +Required | +
---|---|---|---|
claims | +[]object | +
+ Claims lists the names of resources, defined in spec.resourceClaims,
+that are used by this container.
+
+This is an alpha field and requires enabling the
+DynamicResourceAllocation feature gate.
+
+This field is immutable. It can only be set for containers. + |
+ false | +
limits | +map[string]int or string | +
+ Limits describes the maximum amount of compute resources allowed.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
requests | +map[string]int or string | +
+ Requests describes the minimum amount of compute resources required.
+If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
+otherwise to an implementation-defined value. Requests cannot exceed Limits.
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
name | +string | +
+ Name must match the name of one entry in pod.spec.resourceClaims of
+the Pod where this field is used. It makes that resource available
+inside a container. + |
+ true | +
request | +string | +
+ Request is the name chosen for a request in the referenced claim.
+If empty, everything from the claim is made available, otherwise
+only the result of this request. + |
+ false | +
Name | +Type | +Description | +Required | +
---|---|---|---|
spec | +object | +
+ The specification for the PersistentVolumeClaim. The entire content is
+copied unchanged into the PVC that gets created from this
+template. The same fields as in a PersistentVolumeClaim
+are also valid here. + |
+ true | +
metadata | +object | +
+ May contain labels and annotations that will be copied into the PVC
+when creating it. No other fields are allowed and will be rejected during
+validation. + |
+ false | +
Name | Type | Description | Required |
---|---|---|---|
accessModes | []string | accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 | false |
dataSource | object | dataSource can be used to specify either an existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) or an existing PVC (PersistentVolumeClaim). If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. | false |
dataSourceRef | object | dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (a non-core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such, if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: dataSource only allows two specific types of objects, while dataSourceRef allows any non-core object as well as PersistentVolumeClaim objects; dataSource ignores disallowed values (dropping them), while dataSourceRef preserves all values and generates an error if a disallowed value is specified; and dataSource only allows local objects, while dataSourceRef allows objects in any namespace. | false |
resources | object | resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources | false |
selector | object | selector is a label query over volumes to consider for binding. | false |
storageClassName | string | storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 | false |
volumeAttributesClassName | string | volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it's not allowed to reset this field to the empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). | false |
volumeMode | string | volumeMode defines what type of volume is required by the claim. The value Filesystem is implied when not included in the claim spec. | false |
volumeName | string | volumeName is the binding reference to the PersistentVolume backing this claim. | false |
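Read together with the volumeClaimTemplate table above, a claim template for auto-instrumentation could be sketched as follows; the storage class name and the requested size are illustrative assumptions, not operator defaults:

```yaml
volumeClaimTemplate:
  metadata:
    labels:
      app.kubernetes.io/managed-by: opentelemetry-operator  # optional labels copied into the PVC
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard     # assumption: the cluster provides this StorageClass
    resources:
      requests:
        storage: 200Mi             # matches the documented default volume size limit
```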
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
namespace | string | Namespace is the namespace of the resource being referenced. Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. | false |
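For illustration only, a cross-namespace dataSourceRef might be written as below; the snapshot name and namespace are hypothetical, and the namespace field additionally requires the alpha CrossNamespaceVolumeDataSource feature gate plus a ReferenceGrant in the referent namespace:

```yaml
dataSourceRef:
  apiGroup: snapshot.storage.k8s.io
  kind: VolumeSnapshot
  name: instrumentation-snapshot   # hypothetical VolumeSnapshot name
  namespace: snapshots             # alpha: needs a ReferenceGrant in this namespace
```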
Name | Type | Description | Required |
---|---|---|---|
limits | map[string]int or string | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
requests | map[string]int or string | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
Name | Type | Description | Required |
---|---|---|---|
matchExpressions | []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. | false |
matchLabels | map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | false |
Name | Type | Description | Required |
---|---|---|---|
key | string | key is the label key that the selector applies to. | true |
operator | string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | true |
values | []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | false |
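As a sketch of the two selector forms working together (label keys and values are invented for the example):

```yaml
selector:
  matchLabels:
    app: otel                # equivalent to: key "app", operator In, values ["otel"]
  matchExpressions:
    - key: tier
      operator: In
      values: ["backend"]
```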
Name | Type | Description | Required |
---|---|---|---|
annotations | map[string]string | | false |
finalizers | []string | | false |
labels | map[string]string | | false |
name | string | | false |
namespace | string | | false |
Name | Type | Description | Required |
---|---|---|---|
attrs | []object | Attrs defines Nginx agent specific attributes. The precedence order is: `agent default attributes` > `instrument spec attributes`. Attributes are documented at https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/otel-webserver-module | false |
configFile | string | Location of the Nginx configuration file. Needed only if different from the default "/etc/nginx/nginx.conf". | false |
env | []object | Env defines Nginx specific env vars. There are four layers of env var definitions, and the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. If the same variable is defined at a higher-precedence layer, the lower-precedence definitions are ignored. | false |
image | string | Image is a container image with Nginx SDK and auto-instrumentation. | false |
resourceRequirements | object | Resources describes the compute resource requirements. | false |
volumeClaimTemplate | object | VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. If omitted, an emptyDir is used with the size limit VolumeSizeLimit. | false |
volumeLimitSize | int or string | VolumeSizeLimit defines the size limit for the volume used for auto-instrumentation. The default size is 200Mi. | false |
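A minimal Instrumentation resource exercising these Nginx fields might look like the following sketch; the image tag, attribute name, and collector endpoint are assumptions to adapt, not documented defaults:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: nginx-instrumentation
spec:
  nginx:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:latest  # illustrative tag; pin a real release
    configFile: /etc/nginx/nginx.conf   # only needed if different from the default
    attrs:
      - name: NginxModuleServiceName    # assumed otel-webserver-module attribute
        value: my-nginx
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://otel-collector:4317
```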
Name | Type | Description | Required |
---|---|---|---|
name | string | Name of the environment variable. Must be a C_IDENTIFIER. | true |
value | string | Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string is left unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". | false |
valueFrom | object | Source for the environment variable's value. Cannot be used if value is not empty. | false |
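The expansion and escaping rules above can be illustrated with a short sketch:

```yaml
env:
  - name: SERVICE_NAME
    value: checkout
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: service.name=$(SERVICE_NAME)   # expands to service.name=checkout
  - name: LITERAL_REFERENCE
    value: $$(SERVICE_NAME)               # escaped: produces the literal string $(SERVICE_NAME)
```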
Name | Type | Description | Required |
---|---|---|---|
configMapKeyRef | object | Selects a key of a ConfigMap. | false |
fieldRef | object | Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. | false |
resourceFieldRef | object | Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. | false |
secretKeyRef | object | Selects a key of a secret in the pod's namespace. | false |
Name | Type | Description | Required |
---|---|---|---|
key | string | The key to select. | true |
name | string | Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names. Default: "" | false |
optional | boolean | Specify whether the ConfigMap or its key must be defined. | false |
Name | Type | Description | Required |
---|---|---|---|
fieldPath | string | Path of the field to select in the specified API version. | true |
apiVersion | string | Version of the schema the FieldPath is written in terms of; defaults to "v1". | false |
Name | Type | Description | Required |
---|---|---|---|
resource | string | Required: resource to select. | true |
containerName | string | Container name: required for volumes, optional for env vars. | false |
divisor | int or string | Specifies the output format of the exposed resources; defaults to "1". | false |
Name | Type | Description | Required |
---|---|---|---|
key | string | The key of the secret to select from. Must be a valid secret key. | true |
name | string | Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names. Default: "" | false |
optional | boolean | Specify whether the Secret or its key must be defined. | false |
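A combined sketch of the four valueFrom sources documented above; the ConfigMap, Secret, and key names are invented for the example:

```yaml
env:
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        name: app-config          # hypothetical ConfigMap
        key: log-level
        optional: true
  - name: API_TOKEN
    valueFrom:
      secretKeyRef:
        name: app-secrets         # hypothetical Secret
        key: token
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu
        divisor: "1"
```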
Name | Type | Description | Required |
---|---|---|---|
spec | object | The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. | true |
metadata | object | May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. | false |
Name | Type | Description | Required |
---|---|---|---|
accessModes | []string | accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 | false |
dataSource | object | dataSource can be used to specify either an existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) or an existing PVC (PersistentVolumeClaim). If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. | false |
dataSourceRef | object | dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (a non-core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such, if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: dataSource only allows two specific types of objects, while dataSourceRef allows any non-core object as well as PersistentVolumeClaim objects; dataSource ignores disallowed values (dropping them), while dataSourceRef preserves all values and generates an error if a disallowed value is specified; and dataSource only allows local objects, while dataSourceRef allows objects in any namespace. | false |
resources | object | resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources | false |
selector | object | selector is a label query over volumes to consider for binding. | false |
storageClassName | string | storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 | false |
volumeAttributesClassName | string | volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it's not allowed to reset this field to the empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). | false |
volumeMode | string | volumeMode defines what type of volume is required by the claim. The value Filesystem is implied when not included in the claim spec. | false |
volumeName | string | volumeName is the binding reference to the PersistentVolume backing this claim. | false |
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
namespace | string | Namespace is the namespace of the resource being referenced. Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. | false |
Name | Type | Description | Required |
---|---|---|---|
limits | map[string]int or string | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
requests | map[string]int or string | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
Name | Type | Description | Required |
---|---|---|---|
matchExpressions | []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. | false |
matchLabels | map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | false |
Name | Type | Description | Required |
---|---|---|---|
key | string | key is the label key that the selector applies to. | true |
operator | string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | true |
values | []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | false |
Name | Type | Description | Required |
---|---|---|---|
annotations | map[string]string | | false |
finalizers | []string | | false |
labels | map[string]string | | false |
name | string | | false |
namespace | string | | false |
Name | Type | Description | Required |
---|---|---|---|
env | []object | Env defines nodejs specific env vars. There are four layers of env var definitions, and the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. If the same variable is defined at a higher-precedence layer, the lower-precedence definitions are ignored. | false |
image | string | Image is a container image with NodeJS SDK and auto-instrumentation. | false |
resourceRequirements | object | Resources describes the compute resource requirements. | false |
volumeClaimTemplate | object | VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. If omitted, an emptyDir is used with the size limit VolumeSizeLimit. | false |
volumeLimitSize | int or string | VolumeSizeLimit defines the size limit for the volume used for auto-instrumentation. The default size is 200Mi. | false |
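The nodejs block mirrors the Nginx block shown earlier; a minimal sketch, with the image tag assumed rather than a documented default:

```yaml
spec:
  nodejs:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:latest  # illustrative tag; pin a real release
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://otel-collector:4317
    volumeLimitSize: 200Mi   # matches the documented default
```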
Name | Type | Description | Required |
---|---|---|---|
claims | []object | Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. | false |
limits | map[string]int or string | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
requests | map[string]int or string | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
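For instance, resource requirements that also reference a dynamic resource claim might be sketched as follows; the claim name is hypothetical and must match an entry in pod.spec.resourceClaims:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
  claims:
    - name: gpu-claim   # hypothetical; must match an entry in pod.spec.resourceClaims
```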
Name | Type | Description | Required |
---|---|---|---|
name | string | Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. | true |
request | string | Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. | false |
Name | Type | Description | Required |
---|---|---|---|
spec | object | The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. | true |
metadata | object | May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. | false |
Name | Type | Description | Required |
---|---|---|---|
accessModes | []string | accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 | false |
dataSource | object | dataSource can be used to specify either an existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) or an existing PVC (PersistentVolumeClaim). If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. | false |
dataSourceRef | object | dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (a non-core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such, if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: dataSource only allows two specific types of objects, while dataSourceRef allows any non-core object as well as PersistentVolumeClaim objects; dataSource ignores disallowed values (dropping them), while dataSourceRef preserves all values and generates an error if a disallowed value is specified; and dataSource only allows local objects, while dataSourceRef allows objects in any namespace. | false |
resources | object | resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources | false |
selector | object | selector is a label query over volumes to consider for binding. | false |
storageClassName | string | storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 | false |
volumeAttributesClassName | string | volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it's not allowed to reset this field to the empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). | false |
volumeMode | string | volumeMode defines what type of volume is required by the claim. The value Filesystem is implied when not included in the claim spec. | false |
volumeName | string | volumeName is the binding reference to the PersistentVolume backing this claim. | false |
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
namespace | string | Namespace is the namespace of the resource being referenced. Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. | false |
Name | Type | Description | Required |
---|---|---|---|
limits | map[string]int or string | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
requests | map[string]int or string | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
Name | Type | Description | Required |
---|---|---|---|
matchExpressions | []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. | false |
matchLabels | map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | false |
Name | Type | Description | Required |
---|---|---|---|
key | string | key is the label key that the selector applies to. | true |
operator | string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | true |
values | []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | false |
Name | Type | Description | Required |
---|---|---|---|
annotations | map[string]string | | false |
finalizers | []string | | false |
labels | map[string]string | | false |
name | string | | false |
namespace | string | | false |
Name | Type | Description | Required |
---|---|---|---|
env | []object | Env defines python specific env vars. There are four layers of env var definitions, and the precedence order is: `original container env vars` > `language specific env vars` > `common env vars` > `instrument spec configs' vars`. If the same variable is defined at a higher-precedence layer, the lower-precedence definitions are ignored. | false |
image | string | Image is a container image with Python SDK and auto-instrumentation. | false |
resourceRequirements | object | Resources describes the compute resource requirements. | false |
volumeClaimTemplate | object | VolumeClaimTemplate defines an ephemeral volume used for auto-instrumentation. If omitted, an emptyDir is used with the size limit VolumeSizeLimit. | false |
volumeLimitSize | int or string | VolumeSizeLimit defines the size limit for the volume used for auto-instrumentation. The default size is 200Mi. | false |

Name | Type | Description | Required |
---|---|---|---|
spec | object | The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. | true |
metadata | object | May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. | false |
Name | Type | Description | Required |
---|---|---|---|
accessModes | []string | accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 | false |
dataSource | object | dataSource can be used to specify either an existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) or an existing PVC (PersistentVolumeClaim). If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. | false |
dataSourceRef | object | dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (a non-core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such, if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: dataSource only allows two specific types of objects, while dataSourceRef allows any non-core object as well as PersistentVolumeClaim objects; dataSource ignores disallowed values (dropping them), while dataSourceRef preserves all values and generates an error if a disallowed value is specified; and dataSource only allows local objects, while dataSourceRef allows objects in any namespace. | false |
resources | object | resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources | false |
selector | object | selector is a label query over volumes to consider for binding. | false |
storageClassName | string | storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 | false |
volumeAttributesClassName | string | volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it's not allowed to reset this field to the empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). | false |
volumeMode | string | volumeMode defines what type of volume is required by the claim. The value Filesystem is implied when not included in the claim spec. | false |
volumeName | string | volumeName is the binding reference to the PersistentVolume backing this claim. | false |
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
Name | Type | Description | Required |
---|---|---|---|
kind | string | Kind is the type of resource being referenced. | true |
name | string | Name is the name of the resource being referenced. | true |
apiGroup | string | APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. | false |
namespace | string | Namespace is the namespace of the resource being referenced. Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. | false |
Name | Type | Description | Required |
---|---|---|---|
limits | map[string]int or string | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
requests | map[string]int or string | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | false |
Name | Type | Description | Required |
---|---|---|---|
matchExpressions | []object | matchExpressions is a list of label selector requirements. The requirements are ANDed. | false |
matchLabels | map[string]string | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | false |
Name | Type | Description | Required |
---|---|---|---|
key | string | key is the label key that the selector applies to. | true |
operator | string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | true |
values | []string | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | false |
Name | Type | Description | Required |
---|---|---|---|
annotations | map[string]string | | false |
finalizers | []string | | false |
labels | map[string]string | | false |
name | string | | false |
namespace | string | | false |
persistentVolumeClaimRetentionPolicy | object | PersistentVolumeClaimRetentionPolicy describes the lifecycle of persistent volume claims created from volumeClaimTemplates. This only works with the following OpenTelemetryCollector modes: statefulset. | false |
podAnnotations | map[string]string | | false |
Name | Type | Description | Required |
---|---|---|---|
whenDeleted | string | WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted. The default policy of `Retain` causes PVCs to not be affected by StatefulSet deletion. The `Delete` policy causes those PVCs to be deleted. | false |
whenScaled | string | WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. The default policy of `Retain` causes PVCs to not be affected by a scaledown. The `Delete` policy causes the associated PVCs for any excess pods above the replica count to be deleted. | false |
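Applied to a collector, a sketch assuming the opentelemetry.io/v1beta1 API:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: stateful-collector
spec:
  mode: statefulset                      # the retention policy only applies in this mode
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete                  # remove PVCs when the StatefulSet is deleted
    whenScaled: Retain                   # keep PVCs for scaled-down replicas (the default)
```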