Consider if KCP should log periodically status #11717
Comments
We could also have some additional, more frequent logs at a higher log level, but that's not useful retroactively after running with a low log level.
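For illustration only, a minimal sketch of what such higher-verbosity logging could look like, assuming controller-runtime's logger is available from the reconcile context (logVerboseStatus and its fields are hypothetical placeholders, not existing CAPI code or the actual output of the #11693 helper):

```go
package kcp

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// logVerboseStatus is a hypothetical helper: it emits a detailed control
// plane status on every reconcile, but only at verbosity 4, so it stays out
// of logs collected at the default log level.
func logVerboseStatus(ctx context.Context, replicas, readyReplicas int32) {
	log := ctrl.LoggerFrom(ctx)
	log.V(4).Info("KubeadmControlPlane status",
		"replicas", replicas,
		"readyReplicas", readyReplicas,
	)
}
```

As noted above, this only helps if the controller is already running at that verbosity when the problem occurs.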
/help We could use some help to figure out a good approach before jumping into implementation.
@chrischdi: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Hi @fabriziopandini, I would like to pick up the issue but I need guidance as I am new to the community. Let me know where to start (docs, codebase, etc.) and I will go through it.
Hey @NMSVishal, thanks for asking. I'm sorry, but I think this is not a good first issue to start with because we first have to figure out a good approach.
Ok, can you please suggest an issue to get started with code contribution?
Hi, I'd like to work on this issue. Could you please assign it to me? 😊
Done; @hoodapreksha let's discuss options on this issue before starting implementation; there are trade-offs to be figured out between being informative and avoiding spamming the log on every reconcile. Also, please PTAL at #11693, which introduced some utility we should use for this work as well (and it probably already covers "logging overall status when it changes").
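To make that trade-off concrete, a rough sketch of the "log the overall status only when it changes" idea; changeOnlyStatusLogger and LogIfChanged are hypothetical names, and comparing a serialized form of the key/value pairs is just one possible approach:

```go
package kcp

import (
	"fmt"
	"sync"

	"github.com/go-logr/logr"
)

// changeOnlyStatusLogger is a hypothetical sketch: the status key/value pairs
// are serialized and compared against the previously logged ones, so repeated
// reconciles with an unchanged status do not spam the log.
type changeOnlyStatusLogger struct {
	mu   sync.Mutex
	last string
}

// LogIfChanged logs the given status key/value pairs (e.g. the output of the
// helper introduced in #11693) only if they differ from the last logged set.
func (c *changeOnlyStatusLogger) LogIfChanged(log logr.Logger, statusKV ...interface{}) {
	current := fmt.Sprint(statusKV...)
	c.mu.Lock()
	defer c.mu.Unlock()
	if current == c.last {
		return
	}
	c.last = current
	log.Info("KubeadmControlPlane overall status changed", statusKV...)
}
```

The downside of a pure change-based approach is that a long quiet period produces no log lines at all, which is why a periodic component is also being considered.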
What would you like to be added (User Story)?
As an operator I would like to easily triage what happened to a cluster's control plane.
Detailed Description
#11693 introduced a func that can generate key/value pairs describing the overall status of the control plane.
Those k/v pairs are then added to "Machine Create" and "Deleting Machine" log lines, thus providing a sort of history of how KCP evolved over time (and why). This is good.
However, unless users have a monitoring system on top of CAPI, as of today it is complex to figure out from the logs what happened in between "Machine Create" and "Deleting Machine" operations, e.g. did etcd have issues in the last two hours?
This issue is about discussing options to fill this gap, e.g. having KCP periodically log its overall status (see the sketch below).
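For example, a minimal sketch of what periodic status logging could look like; periodicStatusLogger, MaybeLog, and the interval handling are hypothetical, shown only to frame the discussion, not a proposed implementation:

```go
package kcp

import (
	"sync"
	"time"

	"github.com/go-logr/logr"
)

// periodicStatusLogger is a hypothetical sketch of one option: emit the
// overall control plane status at most once per interval, regardless of how
// often reconcile runs, so the log contains a coarse status history even
// between "Machine Create" and "Deleting Machine" events.
type periodicStatusLogger struct {
	mu         sync.Mutex
	lastLogged time.Time
	interval   time.Duration
}

// MaybeLog logs the given status key/value pairs (e.g. the output of the
// helper from #11693) if the configured interval has elapsed since the last
// emitted line; otherwise it does nothing.
func (p *periodicStatusLogger) MaybeLog(log logr.Logger, statusKV ...interface{}) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if time.Since(p.lastLogged) < p.interval {
		return
	}
	p.lastLogged = time.Now()
	log.Info("KubeadmControlPlane overall status", statusKV...)
}
```

Whether the interval should be fixed, configurable, or combined with change detection is part of what needs to be decided here.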
Anything else you would like to add?
No response
Label(s) to be applied
/kind feature
/area provider/control-plane-kubeadm