Conversation

@lmiccini
Contributor

@lmiccini lmiccini commented Dec 19, 2025

Add new messagingBus and notificationsBus interfaces to hold cluster, user and vhost names for optional use.
The controller adds these values to the TransportURL create request when present.

Additionally, we migrate the RabbitMQ cluster name into the RabbitMq config struct, using DefaultRabbitMqConfig from infra-operator to automatically populate the new Cluster field from the legacy RabbitMqClusterName.

Example usage:

  spec:
    messagingBus:
      cluster: rpc-rabbitmq
      user: rpc-user
      vhost: rpc-vhost
    notificationsBus:
      cluster: notifications-rabbitmq
      user: notifications-user
      vhost: notifications-vhost

Jira: https://issues.redhat.com/browse/OSPRH-23882

Depends-on: openstack-k8s-operators/openstack-operator#1779
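The controller behavior described above, copying the optional bus settings into the TransportURL create request, can be sketched in Go. BusConfig, TransportURLSpec, and applyBusConfig below are illustrative stand-ins, not the actual API types:

```go
package main

import "fmt"

// BusConfig mirrors the new messagingBus/notificationsBus spec sections.
// Field names are illustrative.
type BusConfig struct {
	Cluster string
	User    string
	Vhost   string
}

// TransportURLSpec stands in for infra-operator's TransportURL spec;
// only fields relevant to this change are shown.
type TransportURLSpec struct {
	RabbitmqClusterName string
	Username            string
	Vhost               string
}

// applyBusConfig copies the optional bus settings into the TransportURL
// create request when they are present; a nil bus leaves the spec untouched.
func applyBusConfig(spec *TransportURLSpec, bus *BusConfig) {
	if bus == nil {
		return
	}
	if bus.Cluster != "" {
		spec.RabbitmqClusterName = bus.Cluster
	}
	if bus.User != "" {
		spec.Username = bus.User
	}
	if bus.Vhost != "" {
		spec.Vhost = bus.Vhost
	}
}

func main() {
	spec := TransportURLSpec{RabbitmqClusterName: "rabbitmq"} // legacy default
	applyBusConfig(&spec, &BusConfig{Cluster: "rpc-rabbitmq", User: "rpc-user", Vhost: "rpc-vhost"})
	fmt.Println(spec.RabbitmqClusterName, spec.Username, spec.Vhost)
}
```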

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/43a7be44149e4a80a82fc6b816e9f29b

✔️ openstack-meta-content-provider-master SUCCESS in 2h 24m 19s
watcher-operator-validation-master FAILURE in 36m 03s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 3h 35m 19s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 55m 29s
watcher-operator-validation-epoxy-ocp4-16 FAILURE in 2h 17m 30s
✔️ noop SUCCESS in 0s
✔️ watcher-operator-kuttl SUCCESS in 47m 25s

@lmiccini
Contributor Author

recheck

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/fcb50a37da904b558dc9d03f6c6b6eb0

✔️ openstack-meta-content-provider-master SUCCESS in 2h 08m 29s
watcher-operator-validation-master FAILURE in 29m 06s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 02m 35s
watcher-operator-validation-epoxy FAILURE in 1h 10m 16s
watcher-operator-validation-epoxy-ocp4-16 FAILURE in 19m 16s
✔️ noop SUCCESS in 0s
✔️ watcher-operator-kuttl SUCCESS in 48m 51s

@lmiccini
Contributor Author

recheck

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/ee8b8635fe8241a7aae1b57c091777e0

✔️ openstack-meta-content-provider-master SUCCESS in 1h 31m 19s
watcher-operator-validation-master FAILURE in 32m 32s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 24m 58s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 55m 32s
watcher-operator-validation-epoxy-ocp4-16 FAILURE in 21m 02s
✔️ noop SUCCESS in 0s
✔️ watcher-operator-kuttl SUCCESS in 47m 15s

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/5deaf237b4c84f9394a94fed49bfe09c

✔️ openstack-meta-content-provider-master SUCCESS in 2h 53m 25s
watcher-operator-validation-master FAILURE in 2h 17m 48s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 17m 27s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 50m 53s
watcher-operator-validation-epoxy-ocp4-16 FAILURE in 1h 51m 03s
✔️ noop SUCCESS in 0s
✔️ watcher-operator-kuttl SUCCESS in 44m 27s

Contributor

@amoralej amoralej left a comment
I'd like to also modify the kuttl tests to cover the new parameters. You can use the existing kuttl tests, e.g. watcher-notification, watcher-topology, etc., to test non-default values for the new spec params.

I also left some inline comments.

description: Name of the cluster
minLength: 1
type: string
user:
Collaborator
I'm not sure this makes sense either.

The reason I say that is that when I originally filed https://issues.redhat.com/browse/OSPRH-92 it also covered rotation of the RabbitMQ password.

It was scoped down to only the DB for GA, but the intent was to introduce a messaging bus account CR following the MariaDB CR pattern, to allow password rotation while having 2 active password/user pairs.

To do that we would need to generate a new user/password pair so that the old password can remain active after the control plane has rotated, to allow time for the EDPM deployment.

In other words, we cannot have the user be part of this struct unless RabbitMQ supports having 2 active passwords for the same user.

Contributor Author
Thanks for the feedback. Credentials rotation can be achieved by simply switching the user in the CR, so you could start from "user-old" and switch to "user-new". Both credentials remain valid until a human admin manually removes the unused one. We did not implement auto cleanup as it would break EDPM nodes, plus we want to allow rolling back to the old credentials if required.
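The rotation described here amounts to editing the CR in two steps; a sketch with illustrative names:

```yaml
# Before rotation: "user-old" credentials are in use.
spec:
  messagingBus:
    cluster: rpc-rabbitmq
    user: user-old
---
# After rotation: switch to "user-new"; "user-old" remains valid
# in RabbitMQ until an admin removes it, so a rollback stays possible.
spec:
  messagingBus:
    cluster: rpc-rabbitmq
    user: user-new
```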

Collaborator
That seems messy compared to the dedicated MariaDB account, where the DB account is removed when the account CR is removed, and we use a finalizer to prevent its deletion until all usage of it is removed from all the CRs that reference it.

Are there plans to model the RabbitMQ accounts as CRDs in a similar way?
Watcher does not have any EDPM components, so it's less relevant for this PR, but it is a factor for Nova.

Ideally we should not have human operators directly interacting with RabbitMQ.

So my suggestion is to replace user with an account name that references a RabbitMQ account object, like the database account CRD.

The rotation would basically happen the way you suggest: you create a new account object, which is reconciled by the infra-operator, and update the value in the service template, which propagates to the service operator to reconcile.

During that reconciliation they will remove their finalizer from the old account CR and add it to the new one.

Once the human has completed the EDPM deployment, they delete the old account CR.
The infra-operator would then remove that user and password from RabbitMQ.

If they want to revert at any point before that delete, they just need to revert the referenced RabbitMQ account object.

Contributor Author
That's how it has been implemented already. Human operators only have to specify the username and a RabbitMqUser CR is created for them, similarly to the DB account. Finalizers are also moved to the new account and removed from the old one for rotation purposes. See openstack-k8s-operators/infra-operator@f4eab21.

Collaborator
The DB accounts have to be created separately, and then the name of the account is passed in.

So this is not working the same way, unless this is the name of the RabbitMqUser CR rather than the username.

If we have a RabbitMqUser CRD, then we should be taking the name of the RabbitMqUser CR here, not the username, to have this work the same way as the DB password.

Can you make that change?

Collaborator
For what it's worth, I see the current proposal to auto-create the user (and password) based only on the username as a security problem that would prevent us from merging this.

Basically, it provides a very easy way to typo the name, have a user created, and then never notice it again, since it won't show up in the control plane CR.

You would have to explicitly go looking for the RabbitmqUser CRs to keep track of them and manually correlate them.

Contributor Author

@lmiccini lmiccini Jan 15, 2026
I have implemented automatic cleanup of orphaned users, whether created by typos that were later corrected or as a result of credentials rotation, so hopefully this takes care of the security aspect (the transporturl controller in infra-operator handles the cleanup).
As for what happens if no username is specified: the service will use the default rabbitmq user as it does today, so from an end-user perspective this is an opt-in to new dedicated credentials.

Collaborator
OK. I would still prefer to have user refer to a RabbitmqUser CR name rather than the username in RabbitMQ itself, but if you have implemented the cleanup then I can consider the security element (leaking user accounts) addressed and the functional blocker resolved.

Collaborator
I'll resolve this and remove the requested-changes state once I review where/how you did that. Got a pointer?

@openshift-ci

openshift-ci bot commented Jan 9, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from seanmooney. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/18dac81b57c3435e8ee3c89c8255337d

✔️ openstack-meta-content-provider-master SUCCESS in 4h 47m 04s
watcher-operator-validation-master FAILURE in 2h 15m 03s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 3h 26m 41s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 54m 15s
✔️ watcher-operator-validation-epoxy-ocp4-16 SUCCESS in 2h 00m 16s
✔️ noop SUCCESS in 0s
watcher-operator-kuttl FAILURE in 56m 42s

@lmiccini lmiccini force-pushed the rabbitmq_vhosts branch 4 times, most recently from db92d54 to ff28815 Compare January 10, 2026 15:41
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/ae2a6a20346043d7b23e2e2dc5675058

✔️ openstack-meta-content-provider-master SUCCESS in 2h 52m 59s
watcher-operator-validation-master FAILURE in 2h 15m 58s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 30m 09s
✔️ watcher-operator-validation-epoxy SUCCESS in 2h 05m 59s
✔️ watcher-operator-validation-epoxy-ocp4-16 SUCCESS in 2h 00m 14s
✔️ noop SUCCESS in 0s
watcher-operator-kuttl FAILURE in 58m 40s

@lmiccini lmiccini force-pushed the rabbitmq_vhosts branch 2 times, most recently from f749ebf to e2c7b37 Compare January 12, 2026 08:42
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/dab5bd6c4e9c47219f0d96d09cbf2caf

✔️ openstack-meta-content-provider-master SUCCESS in 3h 36m 47s
watcher-operator-validation-master FAILURE in 2h 14m 30s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 13m 48s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 52m 31s
watcher-operator-validation-epoxy-ocp4-16 FAILURE in 19m 50s
✔️ noop SUCCESS in 0s
watcher-operator-kuttl FAILURE in 57m 23s


@amoralej
Contributor

Master job is failing with:

2026-01-27T13:27:59Z	ERROR	Metadate update failed	{"controller": "openstackcontrolplane", "controllerGroup": "core.openstack.org", "controllerKind": "OpenStackControlPlane", "OpenStackControlPlane": {"name":"controlplane","namespace":"openstack"}, "namespace": "openstack", "name": "controlplane", "reconcileID": "90df7b83-eac9-4ff9-a806-a1251c3b1925", "error": "admission webhook \"vopenstackcontrolplane-v1beta1.kb.io\" denied the request: OpenStackControlPlane.core.openstack.org \"controlplane\" is invalid: spec.keystone.template.rabbitMqClusterName: Invalid value: \"rabbitmq\": cannot set both deprecated field \"spec.keystone.template.rabbitMqClusterName\" and new field \"spec.keystone.template.notificationsBus.cluster\" with different values (deprecated: \"rabbitmq\", new: \"rabbitmq-notifications\"). Use \"spec.keystone.template.notificationsBus.cluster\" only"}

https://logserver.rdoproject.org/e73/rdoproject.org/e73efdd0de6440b2b1aac1ec3c485810/controller/ci-framework-data/logs/openstack-must-gather/quay-io-openstack-k8s-operators-openstack-must-gather-sha256-5a235be74bde76726d2fe232519162c11cacc7ebe133103b4cd177993898d231/namespaces/openstack-operators/pods/openstack-operator-controller-manager-96bd7847-d5vm4/logs/manager.log

Note that the openstackcontrolplane created in that job includes the old-style notifications declaration using notificationsBusInstance: https://logserver.rdoproject.org/e73/rdoproject.org/e73efdd0de6440b2b1aac1ec3c485810/controller/ci-framework-data/logs/openstack-must-gather/quay-io-openstack-k8s-operators-openstack-must-gather-sha256-5a235be74bde76726d2fe232519162c11cacc7ebe133103b4cd177993898d231/namespaces/openstack/crs/openstackcontrolplanes.core.openstack.org/controlplane.yaml

Configuration for that job is in:

- patch: |-
    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: controlplane
    spec:
      notificationsBusInstance: rabbitmq-notifications
  target:
    kind: OpenStackControlPlane
- patch: |-
    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: controlplane
    spec:
      rabbitmq:
        templates:
          rabbitmq-notifications:
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.87
                spec:
                  type: LoadBalancer

@softwarefactory-project-zuul

This change depends on a change that failed to merge.

Change openstack-k8s-operators/openstack-operator#1779 is needed.

@softwarefactory-project-zuul

This change depends on a change that failed to merge.

Change openstack-k8s-operators/openstack-operator#1779 is needed.

@lmiccini
Contributor Author

check-rdo

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/a3a7c90e1fdc44a9ac0d4ef27203dc35

✔️ openstack-meta-content-provider-master SUCCESS in 2h 35m 51s
watcher-operator-validation-master FAILURE in 1h 35m 26s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 43m 54s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 55m 56s
watcher-operator-validation-epoxy-ocp4-16 FAILURE in 1h 58m 26s
✔️ noop SUCCESS in 0s
✔️ watcher-operator-kuttl SUCCESS in 49m 09s

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/470494c2f6ae44f8a4c235e85c208a03

openstack-meta-content-provider-master FAILURE in 34m 51s
⚠️ watcher-operator-validation-master SKIPPED Skipped due to failed job openstack-meta-content-provider-master
openstack-meta-content-provider-epoxy FAILURE in 16m 22s
⚠️ watcher-operator-validation-epoxy SKIPPED Skipped due to failed job openstack-meta-content-provider-epoxy
⚠️ watcher-operator-validation-epoxy-ocp4-16 SKIPPED Skipped due to failed job openstack-meta-content-provider-epoxy
✔️ noop SUCCESS in 0s
⚠️ watcher-operator-kuttl SKIPPED Skipped due to failed job openstack-meta-content-provider-master

@lmiccini
Contributor Author

check-rdo

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/c24687696303477da1e8550bfb025939

✔️ openstack-meta-content-provider-master SUCCESS in 2h 50m 25s
✔️ watcher-operator-validation-master SUCCESS in 2h 03m 29s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 23m 04s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 54m 31s
watcher-operator-validation-epoxy-ocp4-16 FAILURE in 25m 10s
✔️ noop SUCCESS in 0s
watcher-operator-kuttl RETRY_LIMIT in 17m 39s

@lmiccini
Contributor Author

check-rdo

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/e90ffd4b0d2444b79af831f53b6d4e51

✔️ openstack-meta-content-provider-master SUCCESS in 2h 42m 41s
✔️ watcher-operator-validation-master SUCCESS in 2h 00m 23s
✔️ openstack-meta-content-provider-epoxy SUCCESS in 2h 16m 21s
✔️ watcher-operator-validation-epoxy SUCCESS in 1h 50m 08s
watcher-operator-validation-epoxy-ocp4-16 RETRY_LIMIT in 5m 37s
✔️ noop SUCCESS in 0s
✔️ watcher-operator-kuttl SUCCESS in 48m 54s

@softwarefactory-project-zuul

This change depends on a change that failed to merge.

Change openstack-k8s-operators/openstack-operator#1779 is needed.

Contributor

@amoralej amoralej left a comment
Just some minor questions, other than that this patch LGTM.

I've also tested this manually with watcher CRs in isolation and it worked as expected. The openstack-operator patch in depends-on is still in progress at this point.

}

// validateDeprecatedFieldsCreate validates deprecated fields during CREATE operations
func (spec *WatcherSpecCore) validateDeprecatedFieldsCreate(basePath *field.Path) ([]string, field.ErrorList) {
Contributor
validateDeprecatedFieldsCreate never returns errors, only warnings. The interface could be simplified to return only []string at this point. That said, we can leave it for potential future cases.

Comment on lines 71 to 79
- patch: |-
    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: controlplane
    spec:
      keystone:
        template:
          rabbitMqClusterName: ""
Contributor
I guess this is unrelated to this patch? (i guess related to openstack-operator change)

notificationsRabbitMqConfig := *instance.Spec.NotificationsBus

// Append cluster name to the TransportURL name to make it unique when using different clusters
notificationTransportURLName := fmt.Sprintf("%s-watcher-notification-%s", instance.Name, notificationsRabbitMqConfig.Cluster)
Contributor
This is effectively changing the name of the transporturl during operator update. I guess openstack-operator will automatically remove the old one?

Contributor Author
Thanks, I added the transporturl delete logic modeled after nova's. Tested switching notifications on/off/changing cluster and it seems to be working as expected.

Contributor
Tested too, thanks!
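The naming scheme from the diff above and the cleanup discussed in this thread can be sketched as follows. The []string argument stands in for listing the previously created TransportURL CRs through the API client; the helper names are illustrative, not the actual controller code:

```go
package main

import "fmt"

// notificationTransportURLName appends the cluster name so the TransportURL
// is unique per cluster, matching the naming scheme in the diff above.
func notificationTransportURLName(instanceName, cluster string) string {
	return fmt.Sprintf("%s-watcher-notification-%s", instanceName, cluster)
}

// staleTransportURLs returns the names that should be deleted after a
// cluster change: every previously created TransportURL whose name no
// longer matches the current one.
func staleTransportURLs(existing []string, current string) []string {
	var stale []string
	for _, name := range existing {
		if name != current {
			stale = append(stale, name)
		}
	}
	return stale
}

func main() {
	current := notificationTransportURLName("watcher", "notifications-rabbitmq")
	old := notificationTransportURLName("watcher", "rabbitmq")
	// After switching clusters, the old TransportURL is flagged for deletion.
	fmt.Println(staleTransportURLs([]string{old, current}, current))
}
```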

@softwarefactory-project-zuul

This change depends on a change that failed to merge.

Change openstack-k8s-operators/openstack-operator#1779 is needed.

@softwarefactory-project-zuul

This change depends on a change that failed to merge.

Change openstack-k8s-operators/openstack-operator#1779 is needed.

@lmiccini
Contributor Author

recheck

@lmiccini
Contributor Author

check-rdo

Comment on lines 75 to 78
// Default NotificationsBus.Cluster if NotificationsBus is present but Cluster is empty
if spec.NotificationsBus != nil && spec.NotificationsBus.Cluster == "" {
spec.NotificationsBus.Cluster = "rabbitmq"
}
Contributor
This means the user set user and/or vhost but not cluster, IIUC. For me, that should fail validation. Pointing messagingBus and notificationsBus at the same rabbitmq should only be possible by setting it explicitly, not through defaulting rules, IMO.

Also, I checked that adding any wrong content to NotificationsBus, e.g. {}, ends up in this setting rabbitmq for notifications.
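The validation suggested here, rejecting a bus section that omits an explicit cluster instead of defaulting it, could look roughly like this sketch (names are illustrative; this is the reviewer's proposal, not the code under review):

```go
package main

import (
	"errors"
	"fmt"
)

// BusConfig stands in for the notificationsBus spec struct; field names
// are illustrative.
type BusConfig struct {
	Cluster string
	User    string
	Vhost   string
}

// requireExplicitCluster rejects a bus section that is present (even as {})
// without an explicit cluster, rather than silently defaulting to "rabbitmq".
func requireExplicitCluster(bus *BusConfig, path string) error {
	if bus != nil && bus.Cluster == "" {
		return errors.New(path + ".cluster must be set when " + path + " is present")
	}
	return nil
}

func main() {
	// A bus with only a user set would now fail validation instead of
	// quietly sharing the default rabbitmq cluster.
	fmt.Println(requireExplicitCluster(&BusConfig{User: "notifications-user"}, "notificationsBus"))
}
```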

spec.MessagingBus.Cluster = "rabbitmq"
}

// Default NotificationsBus.Cluster if NotificationsBus is present but Cluster is empty
Contributor
I liked the previous way of overriding spec.NotificationsBus.Cluster from NotificationsBusInstance when the latter is set and NotificationsBus.Cluster is empty. I think it would avoid racy situations between openstack-operator and watcher-operator in update conditions. That said, I guess it will run fine in most cases and will end up reconciling properly (unless there is some other problem in the update related to the openstackcontrolplane content).

Contributor
As reminded by Luca, the webhook only runs in the openstack-operator for the openstackcontrolplane, not in the watcher-operator, so adding that logic to the webhook will not actually help fix the race I mentioned before.

Contributor Author

@lmiccini lmiccini Jan 30, 2026
I think this is a valid concern if each service operator runs in isolation (like in CI). Since we embed the webhooks in the openstack-operator I feel like we are somehow guaranteed that the defaulting/migration logic will run in the expected order, and that the watcher-operator will receive a valid configuration.

@softwarefactory-project-zuul

This change depends on a change that failed to merge.

Change openstack-k8s-operators/openstack-operator#1779 is needed.

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/707dc3a9310d47399eeca91b1f3871fd

openstack-meta-content-provider-master FAILURE in 46m 03s
⚠️ watcher-operator-validation-master SKIPPED Skipped due to failed job openstack-meta-content-provider-master
openstack-meta-content-provider-epoxy FAILURE in 20m 22s
⚠️ watcher-operator-validation-epoxy SKIPPED Skipped due to failed job openstack-meta-content-provider-epoxy
⚠️ watcher-operator-validation-epoxy-ocp4-16 SKIPPED Skipped due to failed job openstack-meta-content-provider-epoxy
✔️ noop SUCCESS in 0s
⚠️ watcher-operator-kuttl SKIPPED Skipped due to failed job openstack-meta-content-provider-master

@softwarefactory-project-zuul

This change depends on a change that failed to merge.

Change openstack-k8s-operators/openstack-operator#1779 is needed.

@lmiccini
Contributor Author

check-rdo
