Implement PrepareDataPlugin for prefix cache match plugin #1942
Conversation
PrefixCacheMatchPrecentKey = "PrefixCacheMatchPercentKey"
)

type PrefixCacheMatchPercent struct {
- I suggest making this something more generic like `PrefixCacheInfo` so we can easily extend it. One of the things we will likely need to add is cache tier info.
- Instead of (or in addition to) providing the percentage, let's provide the match length. The idea is to provide more "raw" data for consumers. The percentage is just something the scorer cares about; the latency predictor may work better with the length.
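A minimal sketch of what the suggested, more generic struct could look like: it carries the raw match length so any consumer (scorer, latency predictor) can derive what it needs. The field and method names here are illustrative assumptions, not the actual PR code.

```go
package main

import "fmt"

// PrefixCacheInfo is a hypothetical generic shape: raw match data plus room
// for future extension (e.g. cache tier), rather than only a derived percent.
type PrefixCacheInfo struct {
	MatchedPrefixLength int // prompt tokens found in the prefix cache
	TotalPromptLength   int // total prompt length, for deriving a percentage
	// CacheTier string    // possible future extension mentioned in the review
}

// MatchPercent derives the percentage the scorer cares about from the raw data.
func (i PrefixCacheInfo) MatchPercent() float64 {
	if i.TotalPromptLength == 0 {
		return 0
	}
	return float64(i.MatchedPrefixLength) / float64(i.TotalPromptLength) * 100
}

func main() {
	info := PrefixCacheInfo{MatchedPrefixLength: 50, TotalPromptLength: 200}
	fmt.Println(info.MatchPercent()) // 25
}
```

With this shape the scorer keeps its percentage while the predictor can read the raw length directly.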
I renamed it to PrefixCacheInfo to make it easy to extend. The predictor is presently only trained on the prefix cache match percent, so I think switching this to length may not be suitable in the short term. @BenjaminBraunDev keep me honest here. Thanks!
I think @liu-cong 's opinion carries the most weight here.
The recommendations here seem reasonable; aggregating data for easier ergonomics of an unproven, experimental plugin, at the expense of overall usability, does not seem a worthwhile tradeoff.
I'm not sure why this comment was resolved
Updated the info to contain prefix cache related info. Thanks
/assign @kfswain
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: rahulgurnani. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
// If a cycle is detected, it returns an error.
func (c *Config) PrepareDataPluginGraph() error {
func (c *Config) PrepareDataPluginGraph(enablePrepareDataPlugins bool) error {
	if !enablePrepareDataPlugins {
Related to the other comment, let's remove this check from here.
Done, thanks!
return keys
}

func TestPrefixCacheScorer_Score(t *testing.T) {
Can we have a unit test that also creates the legacy prefix scorer and verifies that the two scorers produce identical output? We should be able to validate that the behavior is the same in these unit tests.
I removed the scorer from this PR since I am not using it yet. I plan to add the scorer back in a separate PR, scoping this PR to just adding the prepare-data step so that the predictor can use it. Thanks!
Looks good. Just a minor nit, and a comment to open the issue. I feel strongly about the issue, however, as that is a key part of ensuring that the P/C data model is reliable. Once you tag the created issue here (and bind it to the milestone), I'm happy to stamp the PR. Thanks!
Looks great!
5e8aff8 to bf00276
}

func (p *Plugin) Produces() map[string]any {
	return map[string]any{approximateprefix.PrefixCacheMatchInfoKey: approximateprefix.PrefixCacheMatchInfo{}}
Otherwise looks good, just one question for my understanding: what's Produces() used for?
Here it does not populate any data in PrefixCacheMatchInfo{}; it's only used for constructing the DAG. Are we planning to populate PrefixCacheMatchInfo in Produces? And when/how should we populate it?
We do it in PrepareRequestData. Produces is used for DAG validation on EPP startup.
Refer: https://docs.google.com/document/d/1EQwXL2pCuUyM1B917FUgP_8pFS3VF8F_bUfjy8IE7gM/edit?tab=t.vmaefhinvkl5#heading=h.s9mr6kynb3ls for more context. Thanks!
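A small sketch of that split, with hypothetical names (the real plugin interfaces may differ): Produces() advertises the key with a zero value, used only for startup validation, while the runtime hook writes the populated value into per-request data.

```go
package main

import "fmt"

// Hypothetical key and payload; illustrative only, not the PR's actual code.
const prefixCacheMatchInfoKey = "prefix-cache-match-info"

type PrefixCacheMatchInfo struct {
	MatchPercent float64
}

type Plugin struct{}

// Produces advertises key -> zero value; consumed only for DAG/type checks
// at EPP startup, never read for actual data.
func (p *Plugin) Produces() map[string]any {
	return map[string]any{prefixCacheMatchInfoKey: PrefixCacheMatchInfo{}}
}

// PrepareRequestData populates the advertised key for a single request.
func (p *Plugin) PrepareRequestData(requestData map[string]any) {
	// In the real plugin this value would come from the prefix cache lookup.
	requestData[prefixCacheMatchInfoKey] = PrefixCacheMatchInfo{MatchPercent: 42.0}
}

func main() {
	data := map[string]any{}
	(&Plugin{}).PrepareRequestData(data)
	info := data[prefixCacheMatchInfoKey].(PrefixCacheMatchInfo)
	fmt.Println(info.MatchPercent) // 42
}
```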
Yeah, Produces outputs a map[string]any. I get the idea of DAG validation using the keys of the output map; I'm wondering what the usage of the value is here. Currently you put a placeholder approximateprefix.PrefixCacheMatchInfo{}, and I'm wondering if it will be populated in the future.
If the Produces output is meant to be a set, should we change it to a map[string]struct{} or map[string]bool, which is more Go idiomatic? @rahulgurnani @kfswain
@zetxqx as far as I understand, the validation is not only for key existence, but also that the producer's output type matches the type the consumer expects.
The way it's done here is by setting an empty struct; the validation code can then use reflection (or similar) to verify that the types match.
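A minimal sketch of that reflection-based startup check, assuming hypothetical Produces/Consumes interface shapes (the real EPP interfaces may differ): each key carries a zero value of the concrete type, and the checker compares producer and consumer types once, at startup.

```go
package main

import (
	"fmt"
	"reflect"
)

// dataPlugin is an assumed interface shape for this sketch.
type dataPlugin interface {
	Produces() map[string]any
	Consumes() map[string]any
}

// validateTypes errors if a consumer expects a different concrete type than
// the producer emits for the same key.
func validateTypes(producers, consumers []dataPlugin) error {
	produced := map[string]reflect.Type{}
	for _, p := range producers {
		for key, zero := range p.Produces() {
			produced[key] = reflect.TypeOf(zero)
		}
	}
	for _, c := range consumers {
		for key, zero := range c.Consumes() {
			want := reflect.TypeOf(zero)
			if got, ok := produced[key]; ok && got != want {
				return fmt.Errorf("key %q: produced %v, consumed as %v", key, got, want)
			}
		}
	}
	return nil
}

// Toy plugins: the producer emits a float64 under key "k".
type producer struct{}

func (producer) Produces() map[string]any { return map[string]any{"k": float64(0)} }
func (producer) Consumes() map[string]any { return nil }

type goodConsumer struct{}

func (goodConsumer) Produces() map[string]any { return nil }
func (goodConsumer) Consumes() map[string]any { return map[string]any{"k": float64(0)} }

type badConsumer struct{}

func (badConsumer) Produces() map[string]any { return nil }
func (badConsumer) Consumes() map[string]any { return map[string]any{"k": ""} } // wrong type

func main() {
	fmt.Println(validateTypes([]dataPlugin{producer{}}, []dataPlugin{goodConsumer{}}))
	fmt.Println(validateTypes([]dataPlugin{producer{}}, []dataPlugin{badConsumer{}}) != nil) // true
}
```

Because the zero values are only inspected at startup, no reflection happens on the per-request path.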
> @zetxqx as far as I understand the validation is not only for key existence, but also that the producer output type correlates to the type the consumer wants.

This is correct: the value is used in reflection on startup to ensure that the value for the key is of the expected type, allowing for confidence in type usage even in an environment where only out-of-tree plugins are used. So using a set would not allow for a fully reliable DAG (a key name may be used with an unexpected type, or force reflection to happen on every plugin call).
I see, thank you all. Non-blocking for this PR, just two points from my understanding of the current code path:
- Currently the DAG construction is only checking the key here (gateway-api-inference-extension/pkg/epp/requestcontrol/dag.go, lines 41 to 43 in 31d388d):

  for producedKey := range plugins[i].Produces() {
      // If plugin j consumes the produced key, then j depends on i. We can break after the first match.
      if _, ok := plugins[j].Consumes()[producedKey]; ok {

- If we only want to do pure type checking, not information checking in the type, should we consider having Produces output map[string]reflect.Type or map[reflect.Type]struct{}?
- That should be fixed; the DAG needs more work. I have other comments in this PR suggesting as such.
- You could, but it wouldn't really make a material difference other than making implementers call reflect.TypeOf(pluginProducingType). We will do that all under the hood in the DAG checker anyway. It just depends on what the preferred UX is.
I added the validation in this PR itself. Please take another look at the last commit. I think data check is simpler than using reflection. We could document this behavior. Thanks!
Addressed it in the current PR. Let me know what you think. Thanks!
// PrepareDataPluginGraph creates data dependency graph and sorts the plugins in topological order.
// If a cycle is detected, it returns an error.
func (c *Config) PrepareDataPluginGraph() error {
The DAG should be built, completely agnostic of if any PrepareData plugins are in use. Will add suggestions to help clarify what I mean
if len(c.prepareDataPlugins) == 0 {
	return nil
}
dag := buildDAG(c.prepareDataPlugins)
plugins, err := sortPlugins(dag, c.prepareDataPlugins)
if err != nil {
-	if len(c.prepareDataPlugins) == 0 {
-		return nil
-	}
-	dag := buildDAG(c.prepareDataPlugins)
-	plugins, err := sortPlugins(dag, c.prepareDataPlugins)
-	if err != nil {
+	dag := buildDAG(c.allPlugins)
+	plugins, err := sortPlugins(dag, c.allPlugins)
+	if err != nil {
This is semi-pseudocode, but I hope it illustrates what I mean: we should always create the DAG, entirely agnostic of which plugins are in use.
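A self-contained sketch, with hypothetical plugin shapes, of what buildDAG plus a topological sort could look like: an edge goes from each producer to every plugin consuming one of its keys, and Kahn's algorithm orders the plugins while detecting cycles. This is illustrative, not the PR's actual implementation.

```go
package main

import "fmt"

// plugin is an assumed shape for this sketch.
type plugin struct {
	name     string
	produces []string
	consumes []string
}

// buildDAG returns adjacency lists: dag[i] holds indices of plugins that
// consume a key produced by plugin i (i.e. depend on it).
func buildDAG(plugins []plugin) map[int][]int {
	dag := map[int][]int{}
	for i, p := range plugins {
		for _, key := range p.produces {
			for j, q := range plugins {
				if i == j {
					continue
				}
				for _, want := range q.consumes {
					if key == want {
						dag[i] = append(dag[i], j)
					}
				}
			}
		}
	}
	return dag
}

// sortPlugins runs Kahn's algorithm; leftover nodes indicate a cycle.
func sortPlugins(dag map[int][]int, plugins []plugin) ([]plugin, error) {
	indegree := make([]int, len(plugins))
	for _, deps := range dag {
		for _, j := range deps {
			indegree[j]++
		}
	}
	var queue, order []int
	for i := range plugins {
		if indegree[i] == 0 {
			queue = append(queue, i)
		}
	}
	for len(queue) > 0 {
		i := queue[0]
		queue = queue[1:]
		order = append(order, i)
		for _, j := range dag[i] {
			if indegree[j]--; indegree[j] == 0 {
				queue = append(queue, j)
			}
		}
	}
	if len(order) != len(plugins) {
		return nil, fmt.Errorf("cycle detected among data plugins")
	}
	sorted := make([]plugin, 0, len(plugins))
	for _, i := range order {
		sorted = append(sorted, plugins[i])
	}
	return sorted, nil
}

func main() {
	plugins := []plugin{
		{name: "scorer", consumes: []string{"prefix-info"}},
		{name: "prefix-match", produces: []string{"prefix-info"}},
	}
	sorted, err := sortPlugins(buildDAG(plugins), plugins)
	fmt.Println(sorted[0].name, err) // prefix-match <nil>
}
```

Because the sort runs over every registered plugin, it naturally stays agnostic of whether any PrepareData plugins are enabled: with none present it simply returns an empty order.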
Ah, I understand. Added a TODO for it; I'll address it as a fast follow in the next PR. Let me know if that's reasonable. Thanks!
/retest
672ca5e to 1a0c28c
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR implements the PrepareDataPlugin hook for the prefix cache match plugin. The goal is to split the prefix cache match plugin into a match plugin and a scorer.
This is an intermediate step towards migration of the plugin. In a following PR, the scorer will be added and the existing prefix cache match plugin will be deprecated.
Did some initial scale testing using vllm:
On this version
On head (latest release)
@kaushikmitr / @BenjaminBraunDev FYI
Which issue(s) this PR fixes:
Addresses #1924
Does this PR introduce a user-facing change?: