chore: migrate from kuttl to chainsaw #2607

Closed · wants to merge 19 commits · Changes from 14 commits
12 changes: 12 additions & 0 deletions .chainsaw.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/configuration-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Configuration
metadata:
name: configuration
spec:
parallel: 4
timeouts:
assert: 5m0s
cleanup: 5m0s
delete: 5m0s
error: 5m0s
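This new repository-level configuration gives every suite the same defaults: up to 4 tests run in parallel, and assert/cleanup/delete/error operations time out after 5 minutes. As a hedged sketch — chainsaw Tests accept a spec.timeouts override, although none of the tests in this PR use one, and the test name below is a placeholder:

# illustrative only — a single slow test overriding the global 5m assert timeout
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: example-slow-test        # placeholder, not part of this PR
spec:
  timeouts:
    assert: 10m0s                # applies to this test only; others keep the global default
  steps:
  - try:
    - apply:
        file: 00-install.yaml
    - assert:
        file: 00-assert.yaml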
28 changes: 16 additions & 12 deletions .github/workflows/e2e.yaml
Original file line number Diff line number Diff line change
@@ -12,7 +12,6 @@ concurrency:

jobs:
e2e-tests:
name: End-to-end tests ${{ matrix.group }} on K8s ${{ matrix.kube-version }}
runs-on: ubuntu-22.04
strategy:
fail-fast: false
@@ -25,13 +24,13 @@ jobs:
- "1.29"
group:
- e2e
- e2e-instrumentation
- e2e-upgrade
- e2e-autoscale
- e2e-pdb
- e2e-instrumentation
- e2e-opampbridge
- e2e-targetallocator
- e2e-pdb
- e2e-prometheuscr
- e2e-targetallocator
- e2e-upgrade
- e2e-multi-instrumentation
include:
- group: e2e-prometheuscr
@@ -42,33 +41,38 @@ jobs:
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v4

- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: "~1.21.3"

- name: Cache tools
uses: actions/cache@v4
with:
path: bin
key: ${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('Makefile') }}

- name: Install tools
run: make install-tools

- name: "run tests"
- name: Install chainsaw
uses: kyverno/action-install-chainsaw@07b6c986572f2abaf6647c85d37cbecfddc4a6ab # v0.1.3
- name: Prepare e2e tests
env:
KUBE_VERSION: ${{ matrix.kube-version }}
run: |
set -e
make ${{ matrix.setup != '' && matrix.setup || 'prepare-e2e' }} KUBE_VERSION=$KUBE_VERSION VERSION=e2e
- name: Run e2e tests
env:
KUBE_VERSION: ${{ matrix.kube-version }}
run: |
set -e
make ${{ matrix.group }}

- name: "log operator if failed"
if: ${{ failure() }}
env:
KUBE_VERSION: ${{ matrix.kube-version }}
run: make e2e-log-operator KUBE_VERSION=$KUBE_VERSION
run: |
set -e
make e2e-log-operator KUBE_VERSION=$KUBE_VERSION

e2e-tests-check:
runs-on: ubuntu-22.04
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -118,11 +118,11 @@ KUBEBUILDER_ASSETS=$(./bin/setup-envtest use -p path 1.23) go test ./pkg...

### End to end tests

To run the end-to-end tests, you'll need [`kind`](https://kind.sigs.k8s.io) and [`kuttl`](https://kuttl.dev). Refer to their documentation for installation instructions.
To run the end-to-end tests, you'll need [`kind`](https://kind.sigs.k8s.io) and [`chainsaw`](https://kyverno.github.io/chainsaw). Refer to their documentation for installation instructions.

Once they are installed, the tests can be executed with `make prepare-e2e`, which will build an image to use with the tests, followed by `make e2e`. Each call to the `e2e` target will set up a fresh `kind` cluster, making it safe to execute multiple times with a single `prepare-e2e` step.

The tests are located under `tests/e2e` and are written to be used with `kuttl`. Refer to their documentation to understand how tests are written.
The tests are located under `tests/e2e` and are written to be used with `chainsaw`. Refer to their documentation to understand how tests are written.
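For orientation, each migrated suite keeps the numbered manifest files from the kuttl layout and adds a `chainsaw-test.yaml` that sequences them. A minimal sketch of that shape (file names here are placeholders, not a test from this PR):

# minimal chainsaw test: one step that applies a manifest and waits for an assertion
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: example
spec:
  steps:
  - name: step-00
    try:
    - apply:
        file: 00-install.yaml    # create the resources under test
    - assert:
        file: 00-assert.yaml     # block until the cluster state matches this manifest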

To revert the changes made by `make prepare-e2e`, run `make reset`.

@@ -133,7 +133,7 @@ To install the OpenTelemetry operator, please follow the instructions in [Opera

Once the operator is installed, the tests can be executed using `make e2e-openshift`, which calls the `e2e-openshift` target. Note that `kind` is disabled for this test suite, as these test cases require an OpenShift cluster.

The tests are located under `tests/e2e-openshift` and are written to be used with `kuttl`.
The tests are located under `tests/e2e-openshift` and are written to be used with `chainsaw`.
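Several of these OpenShift suites verify results with shell scripts; in the migrated layout those scripts are attached to a step's `catch` section, so they run when the step fails. A sketch of the pattern, using paths from the monitoring suite later in this diff:

# diagnostic script wired into a chainsaw step (pattern used by the monitoring suite)
- name: step-03
  try:
  - apply:
      file: 03-create-monitoring-roles.yaml
  - assert:
      file: 03-assert.yaml
  catch:
  - script:
      content: ./tests/e2e-openshift/monitoring/check_metrics.sh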

### Undeploying the operator from the local cluster

68 changes: 29 additions & 39 deletions Makefile
Original file line number Diff line number Diff line change
@@ -198,38 +198,17 @@ generate: controller-gen
# end-to-end tests
.PHONY: e2e
e2e:
$(KUTTL) test


# instrumentation end-to-end tests
.PHONY: e2e-instrumentation
e2e-instrumentation:
$(KUTTL) test --config kuttl-test-instrumentation.yaml

# end-to-end-test for PrometheusCR E2E tests
.PHONY: e2e-prometheuscr
e2e-prometheuscr:
$(KUTTL) test --config kuttl-test-prometheuscr.yaml

# end-to-end-test for testing upgrading
.PHONY: e2e-upgrade
e2e-upgrade: undeploy
$(KUTTL) test --config kuttl-test-upgrade.yaml
chainsaw test --test-dir ./tests/e2e

# end-to-end-test for testing autoscale
.PHONY: e2e-autoscale
e2e-autoscale:
$(KUTTL) test --config kuttl-test-autoscale.yaml

# end-to-end-test for testing pdb support
.PHONY: e2e-pdb
e2e-pdb:
$(KUTTL) test --config kuttl-test-pdb.yaml
chainsaw test --test-dir ./tests/e2e-autoscale

# end-to-end-test for testing OpenShift cases
.PHONY: e2e-openshift
e2e-openshift:
$(KUTTL) test --config kuttl-test-openshift.yaml
# instrumentation end-to-end tests
.PHONY: e2e-instrumentation
e2e-instrumentation:
chainsaw test --test-dir ./tests/e2e-instrumentation

.PHONY: e2e-log-operator
e2e-log-operator:
@@ -239,23 +218,40 @@ e2e-log-operator:
# end-to-end tests for multi-instrumentation
.PHONY: e2e-multi-instrumentation
e2e-multi-instrumentation:
$(KUTTL) test --config kuttl-test-multi-instr.yaml
chainsaw test --test-dir ./tests/e2e-multi-instrumentation

# OpAMPBridge CR end-to-end tests
.PHONY: e2e-opampbridge
e2e-opampbridge:
$(KUTTL) test --config kuttl-test-opampbridge.yaml
chainsaw test --test-dir ./tests/e2e-opampbridge

# end-to-end-test for testing pdb support
.PHONY: e2e-pdb
e2e-pdb:
chainsaw test --test-dir ./tests/e2e-pdb

# end-to-end-test for PrometheusCR E2E tests
.PHONY: e2e-prometheuscr
e2e-prometheuscr:
chainsaw test --test-dir ./tests/e2e-prometheuscr

# Target allocator end-to-end tests
.PHONY: e2e-targetallocator
e2e-targetallocator:
$(KUTTL) test --config kuttl-test-targetallocator.yaml
chainsaw test --test-dir ./tests/e2e-targetallocator

# end-to-end-test for testing upgrading
.PHONY: e2e-upgrade
e2e-upgrade: undeploy
kubectl apply -f ./tests/e2e-upgrade/upgrade-test/opentelemetry-operator-v0.86.0.yaml
go run hack/check-operator-ready.go
chainsaw test --test-dir ./tests/e2e-upgrade

.PHONY: prepare-e2e
prepare-e2e: kuttl set-image-controller add-image-targetallocator add-image-opampbridge container container-target-allocator container-operator-opamp-bridge start-kind cert-manager install-metrics-server install-targetallocator-prometheus-crds load-image-all deploy
prepare-e2e: set-image-controller add-image-targetallocator add-image-opampbridge container container-target-allocator container-operator-opamp-bridge start-kind cert-manager install-metrics-server install-targetallocator-prometheus-crds load-image-all deploy

.PHONY: prepare-e2e-with-featuregates
prepare-e2e-with-featuregates: kuttl enable-operator-featuregates prepare-e2e
prepare-e2e-with-featuregates: enable-operator-featuregates prepare-e2e

.PHONY: scorecard-tests
scorecard-tests: operator-sdk
@@ -361,7 +357,6 @@ cmctl:

KUSTOMIZE ?= $(LOCALBIN)/kustomize
KIND ?= $(LOCALBIN)/kind
KUTTL ?= $(LOCALBIN)/kubectl-kuttl
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
ENVTEST ?= $(LOCALBIN)/setup-envtest
CHLOGGEN ?= $(LOCALBIN)/chloggen
@@ -371,10 +366,9 @@ KUSTOMIZE_VERSION ?= v5.0.3
CONTROLLER_TOOLS_VERSION ?= v0.12.0
GOLANGCI_LINT_VERSION ?= v1.54.0
KIND_VERSION ?= v0.20.0
KUTTL_VERSION ?= 0.15.0

.PHONY: install-tools
install-tools: kustomize golangci-lint kind controller-gen envtest crdoc kuttl kind operator-sdk
install-tools: kustomize golangci-lint kind controller-gen envtest crdoc kind operator-sdk

.PHONY: kustomize
kustomize: ## Download kustomize locally if necessary.
@@ -418,10 +412,6 @@ rm -rf $$TMP_DIR ;\
}
endef

.PHONY: kuttl
kuttl: $(LOCALBIN)
@KUTTL=$(KUTTL) KUTTL_VERSION=$(KUTTL_VERSION) ./hack/install-kuttl.sh

OPERATOR_SDK = $(shell pwd)/bin/operator-sdk
.PHONY: operator-sdk
operator-sdk: $(LOCALBIN)
12 changes: 0 additions & 12 deletions hack/install-kuttl.sh

This file was deleted.

6 changes: 0 additions & 6 deletions kuttl-test-autoscale.yaml

This file was deleted.

5 changes: 0 additions & 5 deletions kuttl-test-instrumentation.yaml

This file was deleted.

6 changes: 0 additions & 6 deletions kuttl-test-multi-instr.yaml

This file was deleted.

6 changes: 0 additions & 6 deletions kuttl-test-opampbridge.yaml

This file was deleted.

6 changes: 0 additions & 6 deletions kuttl-test-openshift.yaml

This file was deleted.

6 changes: 0 additions & 6 deletions kuttl-test-pdb.yaml

This file was deleted.

16 changes: 0 additions & 16 deletions kuttl-test-prometheuscr.yaml

This file was deleted.

6 changes: 0 additions & 6 deletions kuttl-test-targetallocator.yaml

This file was deleted.

19 changes: 0 additions & 19 deletions kuttl-test-upgrade.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions kuttl-test.yaml

This file was deleted.

8 changes: 0 additions & 8 deletions tests/e2e-autoscale/autoscale/03-delete.yaml

This file was deleted.

35 changes: 35 additions & 0 deletions tests/e2e-autoscale/autoscale/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,35 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: autoscale
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install.yaml
- assert:
file: 00-assert.yaml
- name: step-01
try:
- apply:
file: 01-install.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install.yaml
- assert:
file: 02-assert.yaml
- name: step-03
try:
- delete:
ref:
apiVersion: batch/v1
kind: Job
name: telemetrygen-set-utilization
- assert:
file: 03-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -19,13 +18,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [debug]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=3000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-apache-httpd
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=3000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -1,22 +1,22 @@
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
#apiVersion: opentelemetry.io/v1alpha1
#kind: OpenTelemetryCollector
#metadata:
# name: sidecar
#spec:
# mode: sidecar
# config: |
# receivers:
# otlp:
# protocols:
# grpc:
# http:
# processors:
# exporters:
# debug:
# service:
# pipelines:
# traces:
# receivers: [otlp]
# processors: []
# exporters: [debug]
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
protocols:
grpc:
http:
processors:
exporters:
debug:
service:
pipelines:
traces:
receivers: [otlp]
processors: []
exporters: [debug]
Original file line number Diff line number Diff line change
@@ -1,18 +1,18 @@
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
#apiVersion: opentelemetry.io/v1alpha1
#kind: Instrumentation
#metadata:
# name: apache
#spec:
# exporter:
# endpoint: http://localhost:4317
# propagators:
# - jaeger
# - b3
# sampler:
# type: parentbased_traceidratio
# argument: "0.25"
# apacheHttpd:
# attrs:
# - name: ApacheModuleOtelMaxQueueSize
# value: "4096"
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
name: apache
spec:
exporter:
endpoint: http://localhost:4317
propagators:
- jaeger
- b3
sampler:
type: parentbased_traceidratio
argument: "0.25"
apacheHttpd:
attrs:
- name: ApacheModuleOtelMaxQueueSize
value: "4096"
Original file line number Diff line number Diff line change
@@ -1,71 +1,71 @@
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
#apiVersion: v1
#kind: Pod
#metadata:
# annotations:
# sidecar.opentelemetry.io/inject: "true"
# instrumentation.opentelemetry.io/inject-apache-httpd: "true"
# instrumentation.opentelemetry.io/container-names: "myapp,myrabbit"
# labels:
# app: my-apache
#spec:
# containers:
# - env:
# - name: OTEL_SERVICE_NAME
# value: my-apache
# - name: OTEL_EXPORTER_OTLP_ENDPOINT
# value: http://localhost:4317
# - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: metadata.name
# - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: spec.nodeName
# - name: OTEL_PROPAGATORS
# value: jaeger,b3
# - name: OTEL_TRACES_SAMPLER
# value: parentbased_traceidratio
# - name: OTEL_TRACES_SAMPLER_ARG
# value: "0.25"
# - name: OTEL_RESOURCE_ATTRIBUTES
# name: myapp
# volumeMounts:
# - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
# - mountPath: /opt/opentelemetry-webserver/agent
# name: otel-apache-agent
# - mountPath: /usr/local/apache2/conf
# name: otel-apache-conf-dir
# - env:
# - name: OTEL_SERVICE_NAME
# value: my-apache
# - name: OTEL_EXPORTER_OTLP_ENDPOINT
# value: http://localhost:4317
# - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: metadata.name
# - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: spec.nodeName
# - name: OTEL_PROPAGATORS
# value: jaeger,b3
# - name: OTEL_TRACES_SAMPLER
# value: parentbased_traceidratio
# - name: OTEL_TRACES_SAMPLER_ARG
# value: "0.25"
# - name: OTEL_RESOURCE_ATTRIBUTES
# name: myrabbit
# volumeMounts:
# - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
# - args:
# - --config=env:OTEL_CONFIG
# name: otc-container
#status:
# phase: Running
apiVersion: v1
kind: Pod
metadata:
annotations:
sidecar.opentelemetry.io/inject: "true"
instrumentation.opentelemetry.io/inject-apache-httpd: "true"
instrumentation.opentelemetry.io/container-names: "myapp,myrabbit"
labels:
app: my-apache
spec:
containers:
- env:
- name: OTEL_SERVICE_NAME
value: my-apache
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
- name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: OTEL_PROPAGATORS
value: jaeger,b3
- name: OTEL_TRACES_SAMPLER
value: parentbased_traceidratio
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.25"
- name: OTEL_RESOURCE_ATTRIBUTES
name: myapp
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- mountPath: /opt/opentelemetry-webserver/agent
name: otel-apache-agent
- mountPath: /usr/local/apache2/conf
name: otel-apache-conf-dir
- env:
- name: OTEL_SERVICE_NAME
value: my-apache
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
- name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: OTEL_PROPAGATORS
value: jaeger,b3
- name: OTEL_TRACES_SAMPLER
value: parentbased_traceidratio
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.25"
- name: OTEL_RESOURCE_ATTRIBUTES
name: myrabbit
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- args:
- --config=env:OTEL_CONFIG
name: otc-container
status:
phase: Running
Original file line number Diff line number Diff line change
@@ -1,36 +1,36 @@
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
#apiVersion: apps/v1
#kind: Deployment
#metadata:
# name: my-apache
#spec:
# selector:
# matchLabels:
# app: my-apache
# replicas: 1
# template:
# metadata:
# labels:
# app: my-apache
# annotations:
# sidecar.opentelemetry.io/inject: "true"
# instrumentation.opentelemetry.io/inject-apache-httpd: "true"
# instrumentation.opentelemetry.io/container-names: "myapp,myrabbit"
# spec:
# containers:
# - name: myapp
# image: docker.io/chrlic/apache-test@sha256:fad58c6ce7a4f477b455bece2a1980741fa6f81cef1e1093a3b72f9b2ffa7b8e
# # image source at https://github.com/cisco-open/appdynamics-k8s-webhook-instrumentor/tree/main/testWorkloads/apache-httpd
# # licensed under Apache 2.0
# imagePullPolicy: Always
# ports:
# - containerPort: 8080
# resources:
# limits:
# cpu: "1"
# memory: 500Mi
# requests:
# cpu: 250m
# memory: 100Mi
# - name: myrabbit
# image: rabbitmq
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-apache
spec:
selector:
matchLabels:
app: my-apache
replicas: 1
template:
metadata:
labels:
app: my-apache
annotations:
sidecar.opentelemetry.io/inject: "true"
instrumentation.opentelemetry.io/inject-apache-httpd: "true"
instrumentation.opentelemetry.io/container-names: "myapp,myrabbit"
spec:
containers:
- name: myapp
image: docker.io/chrlic/apache-test@sha256:fad58c6ce7a4f477b455bece2a1980741fa6f81cef1e1093a3b72f9b2ffa7b8e
# image source at https://github.com/cisco-open/appdynamics-k8s-webhook-instrumentor/tree/main/testWorkloads/apache-httpd
# licensed under Apache 2.0
imagePullPolicy: Always
ports:
- containerPort: 8080
resources:
limits:
cpu: "1"
memory: 500Mi
requests:
cpu: 250m
memory: 100Mi
- name: myrabbit
image: rabbitmq
Original file line number Diff line number Diff line change
@@ -1,68 +1,68 @@
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
#apiVersion: v1
#kind: Pod
#metadata:
# annotations:
# instrumentation.opentelemetry.io/inject-apache-httpd: "true"
# sidecar.opentelemetry.io/inject: "true"
# labels:
# app: my-apache
#spec:
# containers:
# - env:
# - name: OTEL_SERVICE_NAME
# value: my-apache
# - name: OTEL_EXPORTER_OTLP_ENDPOINT
# value: http://localhost:4317
# - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: metadata.name
# - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: spec.nodeName
# - name: OTEL_PROPAGATORS
# value: jaeger,b3
# - name: OTEL_TRACES_SAMPLER
# value: parentbased_traceidratio
# - name: OTEL_TRACES_SAMPLER_ARG
# value: "0.25"
# - name: OTEL_RESOURCE_ATTRIBUTES
# name: myapp
# volumeMounts:
# - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
# - mountPath: /opt/opentelemetry-webserver/agent
# name: otel-apache-agent
# - mountPath: /usr/local/apache2/conf
# name: otel-apache-conf-dir
# - env:
# - name: OTEL_SERVICE_NAME
# value: my-apache
# - name: OTEL_EXPORTER_OTLP_ENDPOINT
# value: http://localhost:4317
# - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: metadata.name
# - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: spec.nodeName
# - name: OTEL_PROPAGATORS
# value: jaeger,b3
# - name: OTEL_TRACES_SAMPLER
# value: parentbased_traceidratio
# - name: OTEL_TRACES_SAMPLER_ARG
# value: "0.25"
# - name: OTEL_RESOURCE_ATTRIBUTES
# name: myrabbit
# - args:
# - --config=env:OTEL_CONFIG
# name: otc-container
#status:
# phase: Running
apiVersion: v1
kind: Pod
metadata:
annotations:
instrumentation.opentelemetry.io/inject-apache-httpd: "true"
sidecar.opentelemetry.io/inject: "true"
labels:
app: my-apache
spec:
containers:
- env:
- name: OTEL_SERVICE_NAME
value: my-apache
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
- name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: OTEL_PROPAGATORS
value: jaeger,b3
- name: OTEL_TRACES_SAMPLER
value: parentbased_traceidratio
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.25"
- name: OTEL_RESOURCE_ATTRIBUTES
name: myapp
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
- mountPath: /opt/opentelemetry-webserver/agent
name: otel-apache-agent
- mountPath: /usr/local/apache2/conf
name: otel-apache-conf-dir
- env:
- name: OTEL_SERVICE_NAME
value: my-apache
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
- name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: OTEL_PROPAGATORS
value: jaeger,b3
- name: OTEL_TRACES_SAMPLER
value: parentbased_traceidratio
- name: OTEL_TRACES_SAMPLER_ARG
value: "0.25"
- name: OTEL_RESOURCE_ATTRIBUTES
name: myrabbit
- args:
- --config=env:OTEL_CONFIG
name: otc-container
status:
phase: Running
Original file line number Diff line number Diff line change
@@ -1,36 +1,36 @@
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
#apiVersion: apps/v1
#kind: Deployment
#metadata:
# name: my-apache
#spec:
# selector:
# matchLabels:
# app: my-apache
# replicas: 1
# template:
# metadata:
# labels:
# app: my-apache
# annotations:
# sidecar.opentelemetry.io/inject: "true"
# instrumentation.opentelemetry.io/inject-apache-httpd: "true"
# instrumentation.opentelemetry.io/container-names: "myrabbit"
# spec:
# containers:
# - name: myapp
# image: docker.io/chrlic/apache-test@sha256:fad58c6ce7a4f477b455bece2a1980741fa6f81cef1e1093a3b72f9b2ffa7b8e
# # image source at https://github.com/cisco-open/appdynamics-k8s-webhook-instrumentor/tree/main/testWorkloads/apache-httpd
# # licensed under Apache 2.0
# imagePullPolicy: Always
# ports:
# - containerPort: 8080
# resources:
# limits:
# cpu: "1"
# memory: 500Mi
# requests:
# cpu: 250m
# memory: 100Mi
# - name: myrabbit
# image: rabbitmq
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-apache
spec:
selector:
matchLabels:
app: my-apache
replicas: 1
template:
metadata:
labels:
app: my-apache
annotations:
sidecar.opentelemetry.io/inject: "true"
instrumentation.opentelemetry.io/inject-apache-httpd: "true"
instrumentation.opentelemetry.io/container-names: "myrabbit"
spec:
containers:
- name: myapp
image: docker.io/chrlic/apache-test@sha256:fad58c6ce7a4f477b455bece2a1980741fa6f81cef1e1093a3b72f9b2ffa7b8e
# image source at https://github.com/cisco-open/appdynamics-k8s-webhook-instrumentor/tree/main/testWorkloads/apache-httpd
# licensed under Apache 2.0
imagePullPolicy: Always
ports:
- containerPort: 8080
resources:
limits:
cpu: "1"
memory: 500Mi
requests:
cpu: 250m
memory: 100Mi
- name: myrabbit
image: rabbitmq
Original file line number Diff line number Diff line change
@@ -0,0 +1,28 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-apache-multicontainer
spec:
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
skip: true
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install-app.yaml
- assert:
file: 02-assert.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-dotnet-multicontainer
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install-app.yaml
- assert:
file: 02-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -21,12 +20,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [logging]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-dotnet-musl
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=2000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -21,12 +20,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [debug]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-dotnet
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=2000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
12 changes: 0 additions & 12 deletions tests/e2e-instrumentation/instrumentation-go/01-add-scc.yaml
Original file line number Diff line number Diff line change
@@ -1,16 +1,4 @@
# Create a SA to apply the SCC policy
apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-instrumentation-go
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- script: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=0/0 --overwrite
- script: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
# Add the SCC
- script: ./add-scc.sh
32 changes: 32 additions & 0 deletions tests/e2e-instrumentation/instrumentation-go/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,32 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-go
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- script:
content: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=0/0
--overwrite
- script:
content: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000
--overwrite
- script:
content: ./add-scc.sh
- apply:
file: 01-add-scc.yaml
- name: step-02
try:
- apply:
file: 02-install-app.yaml
- assert:
file: 02-assert.yaml
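This suite keeps the OpenShift SCC annotations but issues them through chainsaw's `script` operation, while most of the other migrated suites use the structured `command` form. As I understand chainsaw's semantics, `command` execs the binary directly with the given args, whereas `script` runs its content through a shell; a small sketch of the two equivalent styles, with the annotation value taken from the test above:

# the same kubectl call expressed both ways
- command:                        # no shell: entrypoint plus argument list
    entrypoint: kubectl
    args:
    - annotate
    - namespace
    - ${NAMESPACE}
    - openshift.io/sa.scc.uid-range=0/0
    - --overwrite
- script:                         # runs in a shell, so quoting and expansion behave like a terminal
    content: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=0/0 --overwrite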
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-java-multicontainer
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install-app.yaml
- assert:
file: 02-assert.yaml

This file was deleted.

Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -21,12 +20,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [debug]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
mode: sidecar

This file was deleted.

Original file line number Diff line number Diff line change
@@ -0,0 +1,50 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-java-other-ns
spec:
steps:
- name: step-01
try:
- delete:
ref:
apiVersion: v1
kind: Namespace
name: my-other-ns
- name: step-02
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=2000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 02-install-collector.yaml
- apply:
file: 02-install-instrumentation.yaml
- name: step-03
try:
- apply:
file: 03-install-app.yaml
- assert:
file: 03-assert.yaml
- name: step-04
try:
- delete:
ref:
apiVersion: v1
kind: Namespace
name: my-other-ns
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -21,12 +20,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [debug]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-java
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=2000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -19,13 +18,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [logging]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=3000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-nginx-contnr-secctx
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=3000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -1,32 +1,21 @@
# skipping test, see https://github.com/open-telemetry/opentelemetry-operator/issues/1936
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: sidecar
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
protocols:
grpc:
http:
processors:
exporters:
logging:
service:
pipelines:
traces:
receivers: [otlp]
processors: []
exporters: [logging]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=3000/1000 --overwrite
config: |
receivers:
otlp:
protocols:
grpc:
http:
processors:
exporters:
logging:
service:
pipelines:
traces:
receivers: [otlp]
processors: []
exporters: [logging]
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,42 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-nginx-multicontainer
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=3000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install-app.yaml
- assert:
file: 02-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -19,13 +18,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [logging]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=3000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-nginx
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=3000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-nodejs-multicontainer
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install-app.yaml
- assert:
file: 02-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -21,12 +20,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [debug]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-nodejs
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=2000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-python-multicontainer
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install-app.yaml
- assert:
file: 02-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -21,12 +20,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [debug]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-python
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=2000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -3,7 +3,6 @@ kind: OpenTelemetryCollector
metadata:
name: sidecar
spec:
mode: sidecar
config: |
receivers:
otlp:
@@ -21,12 +20,4 @@ spec:
receivers: [otlp]
processors: []
exporters: [debug]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
# Annotate the namespace to allow the application to run using an specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=2000/1000 --overwrite
mode: sidecar
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-sdk
spec:
steps:
- name: step-00
try:
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=2000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-multi-multicontainer
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-multi-no-containers
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: instrumentation-single-instr-first-container
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-collector.yaml
- apply:
file: 00-install-instrumentation.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml
43 changes: 19 additions & 24 deletions tests/e2e-opampbridge/opampbridge/00-assert.yaml
Original file line number Diff line number Diff line change
@@ -6,30 +6,27 @@ spec:
template:
spec:
containers:
- name: opamp-bridge-container
env:
- name: OTELCOL_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /conf
name: opamp-bridge-internal
volumes:
- configMap:
items:
- key: remoteconfiguration.yaml
path: remoteconfiguration.yaml
name: test-opamp-bridge
- env:
- name: OTELCOL_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
name: opamp-bridge-container
volumeMounts:
- mountPath: /conf
name: opamp-bridge-internal
volumes:
- configMap:
items:
- key: remoteconfiguration.yaml
path: remoteconfiguration.yaml
name: test-opamp-bridge
name: opamp-bridge-internal
status:
replicas: 1
readyReplicas: 1
replicas: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
name: test-opamp-bridge
data:
remoteconfiguration.yaml: |
capabilities:
@@ -52,6 +49,9 @@ data:
receivers:
- otlp
endpoint: ws://opamp-server:4320/v1/opamp
kind: ConfigMap
metadata:
name: test-opamp-bridge
---
apiVersion: v1
kind: Service
@@ -63,8 +63,3 @@ spec:
port: 80
protocol: TCP
targetPort: 8080
---
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
opampbridges:
- selector: app.kubernetes.io/component=opentelemetry-opamp-bridge
23 changes: 9 additions & 14 deletions tests/e2e-opampbridge/opampbridge/00-install.yaml
Original file line number Diff line number Diff line change
@@ -1,8 +1,8 @@
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
name: opamp-bridge
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
@@ -16,24 +16,18 @@ rules:
verbs:
- '*'
- apiGroups:
- ''
- ""
resources:
- pods
verbs:
- 'list'
- 'get'
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- command: kubectl -n $NAMESPACE create rolebinding default-opamp-bridge-$NAMESPACE --role=opamp-bridge --serviceaccount=$NAMESPACE:opamp-bridge
- list
- get
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpAMPBridge
metadata:
name: test
spec:
endpoint: ws://opamp-server:4320/v1/opamp
capabilities:
AcceptsOpAMPConnectionSettings: true
AcceptsOtherConnectionSettings: true
@@ -47,9 +41,10 @@ spec:
ReportsRemoteConfig: true
ReportsStatus: true
componentsAllowed:
receivers:
- otlp
exporters:
- logging
processors:
- memory_limiter
exporters:
- logging
receivers:
- otlp
endpoint: ws://opamp-server:4320/v1/opamp
27 changes: 27 additions & 0 deletions tests/e2e-opampbridge/opampbridge/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,27 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: opampbridge
spec:
steps:
- catch:
- podLogs:
selector: app.kubernetes.io/component=opentelemetry-opamp-bridge
name: step-00
try:
- command:
args:
- -n
- $NAMESPACE
- create
- rolebinding
- default-opamp-bridge-$NAMESPACE
- --role=opamp-bridge
- --serviceaccount=$NAMESPACE:opamp-bridge
entrypoint: kubectl
- apply:
file: 00-install.yaml
- assert:
file: 00-assert.yaml
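The `catch` block above runs only if an operation in `try` fails, dumping the OpAMP bridge pod logs so CI failures keep their context. A hedged sketch of the general pattern — the `events` collector is an assumption about chainsaw's catch operations and is not used anywhere in this PR:

# on failure, collect pod logs and (assumed) namespace events before the step is reported as failed
- name: step-00
  catch:
  - podLogs:
      selector: app.kubernetes.io/component=opentelemetry-opamp-bridge
  - events: {}                    # assumed available in chainsaw catch; not used in this PR
  try:
  - apply:
      file: 00-install.yaml
  - assert:
      file: 00-assert.yaml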
6 changes: 0 additions & 6 deletions tests/e2e-openshift/kafka/05-assert.yaml

This file was deleted.

33 changes: 33 additions & 0 deletions tests/e2e-openshift/kafka/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,33 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: kafka
spec:
steps:
- name: step-00
try:
- apply:
file: 00-create-kafka-instance.yaml
- assert:
file: 00-assert.yaml
- apply:
file: 01-create-kafka-topics.yaml
- assert:
file: 01-assert.yaml
- apply:
file: 02-otel-kakfa-receiver.yaml
- assert:
file: 02-assert.yaml
- apply:
file: 03-otel-kakfa-exporter.yaml
- assert:
file: 03-assert.yaml
- apply:
file: 04-generate-traces.yaml
- assert:
file: 04-assert.yaml
catch:
- script:
content: ./tests/e2e-openshift/kafka/check_traces.sh
4 changes: 0 additions & 4 deletions tests/e2e-openshift/monitoring/00-assert.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions tests/e2e-openshift/monitoring/03-assert.yaml
Original file line number Diff line number Diff line change
@@ -11,7 +11,6 @@ rules:
- get
- list
- watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -25,9 +24,3 @@ subjects:
- kind: ServiceAccount
name: prometheus-user-workload
namespace: openshift-user-workload-monitoring

---
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
commands:
- script: ./tests/e2e-openshift/monitoring/check_metrics.sh
36 changes: 36 additions & 0 deletions tests/e2e-openshift/monitoring/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: monitoring
spec:
steps:
- catch:
- script:
content: ./tests/e2e-openshift/monitoring/check_user_workload_monitoring.sh
name: step-00
try:
- apply:
file: 00-workload-monitoring.yaml
- name: step-01
try:
- apply:
file: 01-otel-collector.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-generate-traces.yaml
- assert:
file: 02-assert.yaml
- catch:
- script:
content: ./tests/e2e-openshift/monitoring/check_metrics.sh
name: step-03
try:
- apply:
file: 03-create-monitoring-roles.yaml
- assert:
file: 03-assert.yaml
30 changes: 12 additions & 18 deletions tests/e2e-openshift/multi-cluster/02-otlp-receiver.yaml
Original file line number Diff line number Diff line change
@@ -1,27 +1,9 @@
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- script: ./generate_certs.sh

---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otlp-receiver
namespace: kuttl-multi-cluster-receive
spec:
mode: "deployment"
ingress:
type: route
route:
termination: "passthrough"
volumes:
- name: kuttl-certs
configMap:
name: kuttl-certs
volumeMounts:
- name: kuttl-certs
mountPath: /certs
config: |
receivers:
otlp:
@@ -47,3 +29,15 @@ spec:
receivers: [otlp]
processors: []
exporters: [otlp]
ingress:
route:
termination: passthrough
type: route
mode: deployment
volumeMounts:
- mountPath: /certs
name: kuttl-certs
volumes:
- configMap:
name: kuttl-certs
name: kuttl-certs
63 changes: 37 additions & 26 deletions tests/e2e-openshift/multi-cluster/03-otlp-sender.yaml
Original file line number Diff line number Diff line change
@@ -1,46 +1,57 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kuttl-multi-cluster
namespace: kuttl-multi-cluster-send

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kuttl-multi-cluster
rules:
- apiGroups: ["config.openshift.io"]
resources: ["infrastructures", "infrastructures/status"]
verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "watch", "list"]

- apiGroups:
- config.openshift.io
resources:
- infrastructures
- infrastructures/status
verbs:
- get
- watch
- list
- apiGroups:
- apps
resources:
- replicasets
verbs:
- get
- watch
- list
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- watch
- list
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kuttl-multi-cluster
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kuttl-multi-cluster
subjects:
- kind: ServiceAccount
name: kuttl-multi-cluster
namespace: kuttl-multi-cluster-send
roleRef:
kind: ClusterRole
name: kuttl-multi-cluster
apiGroup: rbac.authorization.k8s.io

---
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- script: ./create_otlp_sender.sh
6 changes: 0 additions & 6 deletions tests/e2e-openshift/multi-cluster/05-assert.yaml

This file was deleted.

37 changes: 37 additions & 0 deletions tests/e2e-openshift/multi-cluster/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,37 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: multi-cluster
spec:
steps:
- name: step-00
try:
- apply:
file: 00-create-namespaces.yaml
- assert:
file: 00-assert.yaml
- apply:
file: 01-create-jaeger.yaml
- assert:
file: 01-assert.yaml
- script:
content: ./generate_certs.sh
- apply:
file: 02-otlp-receiver.yaml
- assert:
file: 02-assert.yaml
- script:
content: ./create_otlp_sender.sh
- apply:
file: 03-otlp-sender.yaml
- assert:
file: 03-assert.yaml
- apply:
file: 04-generate-traces.yaml
- assert:
file: 04-assert.yaml
catch:
- script:
content: ./tests/e2e-openshift/multi-cluster/check_traces.sh
4 changes: 0 additions & 4 deletions tests/e2e-openshift/otlp-metrics-traces/01-assert.yaml

This file was deleted.

6 changes: 0 additions & 6 deletions tests/e2e-openshift/otlp-metrics-traces/04-assert.yaml

This file was deleted.

5 changes: 0 additions & 5 deletions tests/e2e-openshift/otlp-metrics-traces/05-assert.yaml

This file was deleted.

33 changes: 33 additions & 0 deletions tests/e2e-openshift/otlp-metrics-traces/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,33 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: otlp-metrics-traces
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install-jaeger.yaml
- assert:
file: 00-assert.yaml
- name: step-01
try:
- apply:
file: 01-workload-monitoring.yaml
- apply:
file: 02-otel-metrics-collector.yaml
- assert:
file: 02-assert.yaml
- apply:
file: 03-metrics-traces-gen.yaml
- assert:
file: 03-assert.yaml
catch:
- script:
content: ./tests/e2e-openshift/otlp-metrics-traces/check_user_workload_monitoring.sh
- script:
content: ./tests/e2e-openshift/otlp-metrics-traces/check_traces.sh
- script:
content: ./tests/e2e-openshift/otlp-metrics-traces/check_metrics.sh

This file was deleted.

23 changes: 23 additions & 0 deletions tests/e2e-openshift/route/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,23 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: route
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install.yaml
- assert:
file: 00-assert.yaml
- name: step-01
try:
- script:
content: |
#!/bin/bash
set -ex
# Export an empty payload and check that the collector accepted it with a 2xx status code
otlp_http_host=$(kubectl get route otlp-http-simplest-route -n $NAMESPACE -o jsonpath='{.spec.host}')
for i in {1..40}; do curl --fail -ivX POST http://${otlp_http_host}:80/v1/traces -H "Content-Type: application/json" -d '{}' && break || sleep 1; done
14 changes: 14 additions & 0 deletions tests/e2e-pdb/pdb/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: pdb
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install.yaml
- assert:
file: 00-assert.yaml
14 changes: 14 additions & 0 deletions tests/e2e-pdb/target-allocator/chainsaw-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: target-allocator
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install.yaml
- assert:
file: 00-assert.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,18 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: create-pm-prometheus-exporters
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install.yaml
- name: step-01
try:
- apply:
file: 01-install-app.yaml
- assert:
file: 01-assert.yaml

This file was deleted.

This file was deleted.

Original file line number Diff line number Diff line change
@@ -0,0 +1,6 @@
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
name: ta
namespace: create-sm-prometheus
Original file line number Diff line number Diff line change
@@ -0,0 +1,58 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ta
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- services
- endpoints
- configmaps
- secrets
- namespaces
verbs:
- get
- watch
- list
- apiGroups:
- apps
resources:
- statefulsets
- services
- endpoints
verbs:
- get
- watch
- list
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- watch
- list
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- watch
- list
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
- podmonitors
verbs:
- get
- watch
- list
- nonResourceURLs:
- /metrics
verbs:
- get
Original file line number Diff line number Diff line change
@@ -0,0 +1,32 @@
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: simplest
namespace: create-sm-prometheus
spec:
config: |
receivers:
prometheus:
config:
scrape_configs: []
processors:
exporters:
prometheus:
endpoint: 0.0.0.0:9090
service:
pipelines:
metrics:
receivers: [prometheus]
processors: []
exporters: [prometheus]
mode: statefulset
targetAllocator:
enabled: true
observability:
metrics:
enableMetrics: true
prometheusCR:
enabled: true
serviceAccount: ta
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
apiVersion: batch/v1
kind: Job
metadata:
name: check-ta-metrics
namespace: create-sm-prometheus
spec:
template:
spec:
containers:
- args:
- /bin/sh
- -c
- curl -s http://simplest-targetallocator/jobs | grep "targetallocator"
image: curlimages/curl
name: check-metrics
restartPolicy: OnFailure
Original file line number Diff line number Diff line change
@@ -0,0 +1,80 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: create-sm-prometheus-exporters
spec:
steps:
- name: step-00
try:
- apply:
file: 00-install.yaml
- name: step-01
try:
- apply:
file: 01-install.yaml
- assert:
file: 01-assert.yaml
- name: step-02
try:
- apply:
file: 02-install.yaml
- assert:
file: 02-assert.yaml
- name: step-03
try:
- apply:
file: 03-install.yaml
- assert:
file: 03-assert.yaml
- name: step-04
try:
- apply:
file: 04-error.yaml
- apply:
file: 04-install.yaml
- name: step-05
try:
- apply:
file: 05-error.yaml
- apply:
file: 05-install.yaml
- assert:
file: 05-assert.yaml
- name: step-06
try:
- apply:
file: 06-install.yaml
- assert:
file: 06-assert.yaml
- name: step-07
try:
- delete:
ref:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
name: simplest
namespace: create-sm-prometheus
- apply:
file: 07-error.yaml
- name: step-08
try:
- command:
args:
- create
- clusterrolebinding
- simplest-targetallocator-create-sm-prometheus
- --clusterrole=ta
- --serviceaccount=create-sm-prometheus:ta
entrypoint: kubectl
- apply:
file: chainsaw-step-08-apply-1-1.yaml
- apply:
file: chainsaw-step-08-apply-1-2.yaml
- apply:
file: chainsaw-step-08-apply-1-4.yaml
- apply:
file: chainsaw-step-08-apply-1-5.yaml
- assert:
file: 08-assert.yaml
146 changes: 70 additions & 76 deletions tests/e2e-targetallocator/targetallocator-features/00-assert.yaml
Original file line number Diff line number Diff line change
@@ -6,24 +6,24 @@ spec:
podManagementPolicy: Parallel
template:
spec:
containers:
- args:
- --config=/conf/collector.yaml
name: otc-container
volumeMounts:
- mountPath: /conf
name: otc-internal
- mountPath: /usr/share/testvolume
name: testvolume
volumes:
- configMap:
items:
- key: collector.yaml
path: collector.yaml
name: stateful-collector
name: otc-internal
- emptyDir: {}
name: testvolume
containers:
- args:
- --config=/conf/collector.yaml
name: otc-container
volumeMounts:
- mountPath: /conf
name: otc-internal
- mountPath: /usr/share/testvolume
name: testvolume
volumes:
- configMap:
items:
- key: collector.yaml
path: collector.yaml
name: stateful-collector
name: otc-internal
- emptyDir: {}
name: testvolume
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
@@ -37,8 +37,8 @@ spec:
storage: 1Gi
volumeMode: Filesystem
status:
replicas: 1
readyReplicas: 1
replicas: 1
---
apiVersion: apps/v1
kind: Deployment
@@ -51,70 +51,64 @@ spec:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: ingress-ready
operator: In
values:
- "true"
- matchExpressions:
- key: ingress-ready
operator: In
values:
- "true"
containers:
- args:
- --enable-prometheus-cr-watcher
env:
- name: TEST_ENV
value: test
- name: OTELCOL_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ta-container
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
privileged: false
runAsGroup: 1000
runAsUser: 1000
volumeMounts:
- mountPath: /conf
name: ta-internal
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 3000
containers:
- name: ta-container
args:
- --enable-prometheus-cr-watcher
env:
- name: TEST_ENV
value: "test"
- name: OTELCOL_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /conf
name: ta-internal
readinessProbe:
httpGet:
path: /readyz
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 1
periodSeconds: 10
securityContext:
runAsUser: 1000
runAsGroup: 1000
privileged: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
livenessProbe:
httpGet:
path: /livez
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 1
periodSeconds: 10
runAsGroup: 3000
runAsUser: 1000
volumes:
- configMap:
items:
- key: targetallocator.yaml
path: targetallocator.yaml
name: stateful-targetallocator
name: ta-internal
- configMap:
items:
- key: targetallocator.yaml
path: targetallocator.yaml
name: stateful-targetallocator
name: ta-internal
status:
replicas: 1
readyReplicas: 1
replicas: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
name: stateful-targetallocator
---
# Print TA pod logs if test fails
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
collectors:
- selector: app.kubernetes.io/component=opentelemetry-targetallocator
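The kuttl TestAssert with a collectors list removed above existed only to print the target allocator pod logs when the assert failed. In chainsaw that responsibility moves into the catch section of the test step, as the migrated chainsaw-test.yaml later in this diff shows; a minimal sketch of the pattern (the test and file names are illustrative) is:

apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: example
spec:
  steps:
  - name: step-00
    catch:
    # Runs only if an action in try fails: dump the TA pod logs
    - podLogs:
        selector: app.kubernetes.io/component=opentelemetry-targetallocator
    try:
    - assert:
        file: 00-assert.yaml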
149 changes: 63 additions & 86 deletions tests/e2e-targetallocator/targetallocator-features/00-install.yaml
Original file line number Diff line number Diff line change
@@ -1,112 +1,89 @@
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
name: ta
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: targetallocator-features
rules:
- apiGroups: [""]
resources: [ "pods", "namespaces" ]
verbs: [ "get", "list", "watch"]
- apiGroups: ["monitoring.coreos.com"]
resources: ["servicemonitors"]
verbs: ["get", "list", "watch"]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- command: kubectl -n $NAMESPACE create clusterrolebinding default-view-$NAMESPACE --clusterrole=targetallocator-features --serviceaccount=$NAMESPACE:ta
# Annotate the namespace to allow the application to run using a specific group and user in OpenShift
# https://docs.openshift.com/dedicated/authentication/managing-security-context-constraints.html
# This annotation has no effect in Kubernetes
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.uid-range=1000/1000 --overwrite
- command: kubectl annotate namespace ${NAMESPACE} openshift.io/sa.scc.supplemental-groups=3000/1000 --overwrite
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- get
- list
- watch
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: stateful
spec:
config: "receivers:\n jaeger:\n protocols:\n grpc:\n\n # Collect own
metrics\n prometheus:\n config:\n scrape_configs:\n - job_name:
'otel-collector'\n scrape_interval: 10s\n static_configs:\n -
targets: [ '0.0.0.0:8888' ]\n relabel_configs:\n - regex: __meta_kubernetes_node_label_(.+)\n
\ action: labelmap\n replacement: $$1\n - regex: test_.*\n
\ action: labeldrop \n - regex: 'metrica_*|metricb.*'\n action:
labelkeep\n replacement: $$1\n\nprocessors:\n\nexporters:\n debug:\nservice:\n
\ pipelines:\n traces:\n receivers: [jaeger]\n processors: []\n exporters:
[debug]\n"
mode: statefulset
volumes:
- name: testvolume
volumeMounts:
- name: testvolume
mountPath: /usr/share/testvolume
volumeClaimTemplates:
- metadata:
name: testvolume
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
targetAllocator:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: ingress-ready
operator: In
values:
- "true"
enabled: true
serviceAccount: ta
securityContext:
env:
- name: TEST_ENV
value: test
podSecurityContext:
fsGroup: 3000
runAsGroup: 3000
runAsUser: 1000
runAsGroup: 1000
privileged: false
prometheusCR:
enabled: true
filterStrategy: ""
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
podSecurityContext:
privileged: false
runAsGroup: 1000
runAsUser: 1000
runAsGroup: 3000
fsGroup: 3000
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: ingress-ready
operator: In
values:
- "true"
prometheusCR:
enabled: true
filterStrategy: ""
env:
- name: TEST_ENV
value: "test"
config: |
receivers:
jaeger:
protocols:
grpc:

# Collect own metrics
prometheus:
config:
scrape_configs:
- job_name: 'otel-collector'
scrape_interval: 10s
static_configs:
- targets: [ '0.0.0.0:8888' ]
relabel_configs:
- regex: __meta_kubernetes_node_label_(.+)
action: labelmap
replacement: $$1
- regex: test_.*
action: labeldrop
- regex: 'metrica_*|metricb.*'
action: labelkeep
replacement: $$1

processors:

exporters:
debug:
service:
pipelines:
traces:
receivers: [jaeger]
processors: []
exporters: [debug]
serviceAccount: ta
volumeClaimTemplates:
- metadata:
name: testvolume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMounts:
- mountPath: /usr/share/testvolume
name: testvolume
volumes:
- name: testvolume
Original file line number Diff line number Diff line change
@@ -1,14 +1,9 @@
# waiting for liveness probe to work
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
timeout: 5
---
apiVersion: v1
kind: Pod
metadata:
labels:
app.kubernetes.io/component: opentelemetry-targetallocator
status:
containerStatuses:
- name: ta-container
restartCount: 0
- name: ta-container
restartCount: 0

This file was deleted.

Original file line number Diff line number Diff line change
@@ -0,0 +1,49 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/kyverno/chainsaw/main/.schemas/json/test-chainsaw-v1alpha1.json
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
creationTimestamp: null
name: targetallocator-features
spec:
steps:
- catch:
- podLogs:
selector: app.kubernetes.io/component=opentelemetry-targetallocator
name: step-00
try:
- command:
args:
- -n
- $NAMESPACE
- create
- clusterrolebinding
- default-view-$NAMESPACE
- --clusterrole=targetallocator-features
- --serviceaccount=$NAMESPACE:ta
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.uid-range=1000/1000
- --overwrite
entrypoint: kubectl
- command:
args:
- annotate
- namespace
- ${NAMESPACE}
- openshift.io/sa.scc.supplemental-groups=3000/1000
- --overwrite
entrypoint: kubectl
- apply:
file: 00-install.yaml
- assert:
file: 00-assert.yaml
- name: step-01
try:
- sleep:
duration: 35s
- assert:
file: 01-assert.yaml
Original file line number Diff line number Diff line change
@@ -1,28 +1,18 @@
# This KUTTL assert uses the check-daemonset.sh script to ensure the number of ready pods in a daemonset matches the desired count, retrying until successful or a timeout occurs. The script is needed as the number of Kubernetes cluster nodes can vary and we cannot statically set desiredNumberScheduled and numberReady in the assert for daemonset status.

apiVersion: kuttl.dev/v1beta1
kind: TestAssert
commands:
- script: ./tests/e2e/smoke-daemonset/check-daemonset.sh
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus-kubernetessd-targetallocator
status:
replicas: 1
readyReplicas: 1
observedGeneration: 1
readyReplicas: 1
replicas: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-kubernetessd-targetallocator
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-kubernetessd-collector
data:
collector.yaml: |
exporters:
@@ -43,9 +33,6 @@ data:
processors: []
receivers:
- prometheus
---
# Print TA pod logs if test fails
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
collectors:
- selector: app.kubernetes.io/managed-by=opentelemetry-operator
kind: ConfigMap
metadata:
name: prometheus-kubernetessd-collector
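The kuttl TestAssert removed from the assert file above shelled out to check-daemonset.sh so the check could retry until the daemonset's ready pod count matched the desired count. The script itself is not part of this diff, so the step below is only an illustration of how such a wait can be wired as a chainsaw script action; the inline retry loop, the daemonset name prometheus-kubernetessd-collector, and the 60-attempt limit are assumptions, not content from this PR:

- name: step-00
  try:
  - apply:
      file: 00-install.yaml
  # Illustrative stand-in for check-daemonset.sh: poll until numberReady equals desiredNumberScheduled
  - script:
      content: |
        #!/bin/bash
        set -e
        for i in $(seq 1 60); do
          desired=$(kubectl get daemonset prometheus-kubernetessd-collector -n "$NAMESPACE" -o jsonpath='{.status.desiredNumberScheduled}')
          ready=$(kubectl get daemonset prometheus-kubernetessd-collector -n "$NAMESPACE" -o jsonpath='{.status.numberReady}')
          [ -n "$desired" ] && [ "$desired" = "$ready" ] && exit 0
          sleep 2
        done
        exit 1
  - assert:
      file: 00-assert.yaml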
Original file line number Diff line number Diff line change
@@ -1,105 +1,102 @@
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
name: ta
automountServiceAccountToken: true
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
name: collector
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: targetallocator-kubernetessd
rules:
- apiGroups: [""]
- apiGroups:
- ""
resources:
- pods
- nodes
- services
- endpoints
- configmaps
- secrets
- namespaces
- pods
- nodes
- services
- endpoints
- configmaps
- secrets
- namespaces
verbs:
- get
- watch
- list
- apiGroups: ["apps"]
- get
- watch
- list
- apiGroups:
- apps
resources:
- statefulsets
- daemonsets
- services
- endpoints
- statefulsets
- daemonsets
- services
- endpoints
verbs:
- get
- watch
- list
- apiGroups: ["discovery.k8s.io"]
- get
- watch
- list
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
- endpointslices
verbs:
- get
- watch
- list
- apiGroups: ["networking.k8s.io"]
- get
- watch
- list
- apiGroups:
- networking.k8s.io
resources:
- ingresses
- ingresses
verbs:
- get
- watch
- list
- nonResourceURLs:
- /metrics
verbs:
- get
- watch
- list
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: collector-kubernetessd
rules:
- apiGroups: [""]
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/metrics
- services
- endpoints
- pods
- nodes
- nodes/metrics
- services
- endpoints
verbs:
- get
- watch
- list
- apiGroups: ["networking.k8s.io"]
- get
- watch
- list
- apiGroups:
- networking.k8s.io
resources:
- ingresses
- ingresses
verbs:
- get
- watch
- list
- nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
verbs: ["get"]
---
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
- command: kubectl create clusterrolebinding ta-$NAMESPACE --clusterrole=targetallocator-kubernetessd --serviceaccount=$NAMESPACE:ta
- command: kubectl create clusterrolebinding collector-$NAMESPACE --clusterrole=collector-kubernetessd --serviceaccount=$NAMESPACE:collector
- get
- watch
- list
- nonResourceURLs:
- /metrics
- /metrics/cadvisor
verbs:
- get
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: prometheus-kubernetessd
spec:
mode: daemonset
serviceAccount: collector
targetAllocator:
enabled: true
allocationStrategy: "per-node"
serviceAccount: ta
prometheusCR:
enabled: false
config: |
receivers:
prometheus:
@@ -131,3 +128,11 @@ spec:
receivers: [prometheus]
processors: []
exporters: [prometheus]
mode: daemonset
serviceAccount: collector
targetAllocator:
allocationStrategy: per-node
enabled: true
prometheusCR:
enabled: false
serviceAccount: ta
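The kuttl TestStep that created the targetallocator-kubernetessd and collector-kubernetessd cluster role bindings is dropped from this install file, and the chainsaw-test.yaml for this test is not part of the excerpt shown here. Following the command pattern used by the other migrated tests in this PR, those kubectl calls would plausibly become a step fragment like the one below (a sketch, not the PR's actual wiring):

- command:
    entrypoint: kubectl
    args:
    - create
    - clusterrolebinding
    - ta-$NAMESPACE
    - --clusterrole=targetallocator-kubernetessd
    - --serviceaccount=$NAMESPACE:ta
- command:
    entrypoint: kubectl
    args:
    - create
    - clusterrolebinding
    - collector-$NAMESPACE
    - --clusterrole=collector-kubernetessd
    - --serviceaccount=$NAMESPACE:collector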