
Conversation

@shaikenov
Contributor

@shaikenov shaikenov commented Oct 5, 2025

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR adds a new metric to the exported prometheus metrics list: MaxNodeSkipEvalTime

We want to track nodes that were left unprocessed during ScaleDown. If a node is skipped multiple times consecutively, we store only the earliest time it happened. The difference between the current time and the earliest time among all unprocessed nodes gives the maximum time a node has been skipped during ScaleDown at this moment. This can indicate possible throttling and helps us better monitor what happens during ScaleDown.

Which issue(s) this PR fixes:

None

Does this PR introduce a user-facing change?

Introduced a new metric, tracking time to process all nodes in scale down simulations. `--max-node-skip-eval-time-tracker-enabled` flag enables the new metric.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Oct 5, 2025
@linux-foundation-easycla

linux-foundation-easycla bot commented Oct 5, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: shaikenov / name: Olzhas Shaikenov (e86caa9)

@k8s-ci-robot
Contributor

Welcome @shaikenov!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Oct 5, 2025
@k8s-ci-robot
Contributor

Hi @shaikenov. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Oct 5, 2025
@kada2004 kada2004 left a comment

very good

@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from b9e7969 to 86a98d1 Compare October 6, 2025 11:40
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 7, 2025
@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from 86a98d1 to 0457f73 Compare October 8, 2025 08:56
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 8, 2025
@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from 0457f73 to c2756ca Compare October 8, 2025 15:22
@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from c2756ca to a279f58 Compare October 10, 2025 12:00
@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from 65b09dd to 716b1c3 Compare October 17, 2025 07:21
@shaikenov shaikenov marked this pull request as ready for review October 20, 2025 08:20
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 20, 2025
@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from 716b1c3 to 1239ced Compare October 20, 2025 13:53
@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from 1239ced to 44aac42 Compare October 23, 2025 14:45
var longestTime time.Duration
// if nodeNames is nil it means that all nodes were processed
if nodeNames == nil {
// if l.minimumTime is 0, then in previous iteration we also processed all the nodes, so the longest time is 0
Member

Why do we require all nodes to be processed twice before resetting the metric?

Contributor Author

The first time, we might have some leftovers from previous simulations (as in the next comment), so we want to calculate the time for those nodes and reset the node map. If we get here a second time, we report 0. I refactored this part to fix the issue I had; I hope it is now more correct and clear.

Member

This sounds like an implementation detail leading to surprising results. Can this be implemented in a way that doesn't require this?

Contributor Author
@shaikenov shaikenov Oct 31, 2025

I added this condition for the case when there are no unprocessed nodes at all. In that case, without this if, with something like this:

func (l *LongestNodeScaleDownEvalTime) Update(nodeNames []string, currentTime time.Time) time.Duration {
	minimumTime := l.getMin()
	newNodes := make(map[string]time.Time)
	for _, nodeName := range nodeNames {
		newNodes[nodeName] = l.get(nodeName)
	}
	l.NodeNamesWithTimeStamps = newNodes
	longestTime := currentTime.Sub(minimumTime)
	l.lastEvalTime = currentTime
	metrics.ObserveLongestUnneededNodeScaleDownEvalDurationSeconds(longestTime)
	return longestTime
}

we will report time between simulations (currentTime - lastEvalTime).

In this case, IIUC, we want to report 0 because there are no unprocessed nodes, and by the metric definition (the longest time during which a node was not processed during ScaleDown) this metric is about skipped nodes. The snippet above is fine by me, but I think it diverges a bit from the metric definition when there are no unprocessed nodes. WDYT, @x13n ?

Member

Maybe I'm misunderstanding the semantics of the metric, but if you want to report the longest time a node was not processed, I expected something like this:

func (l *LongestNodeScaleDownEvalTime) Update(nodeNames []string, currentTime time.Time) time.Duration {
	newNodes := make(map[string]time.Time)
	for _, nodeName := range nodeNames {
		newNodes[nodeName] = l.get(nodeName)
	}
	l.NodeNamesWithTimeStamps = newNodes
	minimumTime := l.getMin()
	longestDuration := currentTime.Sub(minimumTime)
	l.lastEvalTime = currentTime
	metrics.ObserveLongestUnneededNodeScaleDownEvalDurationSeconds(longestDuration)
	return longestDuration
}

I suppose the distinction is that when this is called with currentTime equal to some times t0, t1 and t2, only the nodes unprocessed at a given timestamp are considered by the metric at that instant. So, say at t0 everything was processed, at t1 node A was skipped, and at t2 node B was skipped. The metric is then set to 0 at t0, t1-t0 at t1, and t2-t1 at t2. If at t3 everything is processed again, the metric drops back to 0. Does that make sense? Or did I misunderstand the metric definition?

Contributor Author

Discussed offline with @x13n and @MartynaGrotek; we agreed on the behavior described by @x13n in the last comment.
Small note: we need to set lastEvalTime to currentTime first and then call getMin(). This is because, if there are no skipped nodes and we set lastEvalTime after computing longestDuration, longestDuration will equal currentTime - lastEvalTime != 0.
Otherwise, if some nodes are skipped, the minimum value in newNodes will be <= the time of the last evaluation anyway, so that case is not affected.

@x13n
Member

x13n commented Oct 29, 2025

/ok-to-test
/release-note-edit

Introduced a new metric, tracking time to process all nodes in scale down simulations. `--longest-node-scaledown-eval-timetracker-enabled` flag enables the new metric.

@k8s-ci-robot
Contributor

@x13n: /release-note-edit must be used with a single release note block.

In response to this:

/ok-to-test
/release-note-edit

Introduced a new metric, tracking time to process all nodes in scale down simulations. `--longest-node-scaledown-eval-timetracker-enabled` flag enables the new metric.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Oct 29, 2025
@x13n
Member

x13n commented Oct 29, 2025

Ah, looks like you removed the release note block entirely, please bring it back in the first comment.

@jackfrancis
Contributor

/release-note-edit

Introduced a new metric, tracking time to process all nodes in scale down simulations. `--longest-node-scaledown-eval-timetracker-enabled` flag enables the new metric.

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Oct 29, 2025
@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch 2 times, most recently from 3a5135c to 4e48044 Compare October 30, 2025 14:03

// handleUnprocessedNodes is used to track the longest time it takes for a node to be evaluated as removable or not
func (p *Planner) handleUnprocessedNodes(unprocessedNodeNames []string) {
// if p.longestNodeScaleDownEvalTime is not set (flag is disabled) or endedPrematurely is already true (nodes were already reported in this iteration) do not do anything
Member

Please update the comment as well.

Contributor Author

Updated the metric name and all related comments.
The new metric name is maxNodeSkipEvalDurationSeconds; all related structs and methods were renamed accordingly.

@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from 4e48044 to 94e2264 Compare November 3, 2025 16:25
Member
@x13n x13n left a comment

Thanks! Mostly LGTM, just one minor comment.

{
name: "Only one node is skipped in one iteration",
unprocessedNodes: [][]string{{}, {"n1"}, {}, {}},
wantMaxSkipEvalTime: []time.Duration{time.Duration(0), time.Duration(1 * time.Second), time.Duration(0), time.Duration(0)},
Member

nit: You don't need to wrap everything into time.Duration here, wantMaxSkipEvalTime: []time.Duration{0 * time.Second, 1 * time.Second, 0 * time.Second, 0 * time.Second}, should work just fine. Alternatively, since your step resolution is 1s everywhere anyway, you could just make wantMaxSkipEvalTime a slice of integers.

Contributor Author

I prefer the second suggestion here: changed wantMaxSkipEvalTime to []int.

@shaikenov shaikenov force-pushed the shaikenov-scaledown-unprocessed-node-tracking branch from 94e2264 to e86caa9 Compare November 5, 2025 08:03
@x13n
Member

x13n commented Nov 5, 2025

Looks good, thanks!

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 5, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kada2004, shaikenov, x13n

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 5, 2025
@k8s-ci-robot k8s-ci-robot merged commit 7e8cf0b into kubernetes:master Nov 5, 2025
8 checks passed

Labels

approved: Indicates a PR has been approved by an approver from all required OWNERS files.
area/cluster-autoscaler
cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
kind/feature: Categorizes issue or PR as related to a new feature.
lgtm: "Looks good to me", indicates that a PR is ready to be merged.
ok-to-test: Indicates a non-member PR verified by an org member that is safe to test.
release-note: Denotes a PR that will be considered when it comes time to generate release notes.
size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.

6 participants