Conversation

@rahulgurnani (Contributor) commented Dec 4, 2025

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR implements the PrepareDataPlugin hook for the prefix cache match plugin. The goal is to split the prefix cache match plugin into a match plugin and a scorer.
This is an intermediate step in migrating the plugin. In a follow-up PR, the scorer will be added and the existing prefix cache match plugin will be deprecated.
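
For readers unfamiliar with the plugin framework, below is a minimal sketch of the shape such a prepare-data hook could take. The method names (Produces, Consumes, PrepareRequestData) mirror those discussed later in this thread, but the signatures, fields, and the cache-lookup helper are assumptions for illustration, not the code in this PR.

```go
package approximateprefix

// PrefixCacheMatchInfoKey is the key under which the plugin publishes its data.
const PrefixCacheMatchInfoKey = "PrefixCacheMatchInfo"

// PrefixCacheMatchInfo carries prefix cache match data for one request.
// The single field here is illustrative; the real struct may differ.
type PrefixCacheMatchInfo struct {
	MatchPercent float64 // fraction of the prompt found in the prefix cache
}

// Plugin computes prefix cache match data during request preparation.
type Plugin struct{ /* prefix index, block size, etc. */ }

// Produces declares the key and value type this plugin writes; it is used for
// DAG validation at EPP startup, not for carrying real data.
func (p *Plugin) Produces() map[string]any {
	return map[string]any{PrefixCacheMatchInfoKey: PrefixCacheMatchInfo{}}
}

// Consumes declares upstream data this plugin reads; none in this sketch.
func (p *Plugin) Consumes() map[string]any { return nil }

// PrepareRequestData computes the match info and stores it in a per-request
// data bag so a scorer or latency predictor can read it later.
func (p *Plugin) PrepareRequestData(promptTokens []int, data map[string]any) {
	matched := p.longestCachedPrefix(promptTokens) // assumed helper
	pct := 0.0
	if len(promptTokens) > 0 {
		pct = float64(matched) / float64(len(promptTokens))
	}
	data[PrefixCacheMatchInfoKey] = PrefixCacheMatchInfo{MatchPercent: pct}
}

// longestCachedPrefix is a stand-in for the real cache lookup.
func (p *Plugin) longestCachedPrefix(promptTokens []int) int { return 0 }
```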

I did some initial scale testing using vLLM:

On this PR's branch:

============ Serving Benchmark Result ============
Successful requests:                     1000      
Failed requests:                         0         
Benchmark duration (s):                  83.04     
Total input tokens:                      127000    
Total generated tokens:                  2048000   
Request throughput (req/s):              12.04     
Output token throughput (tok/s):         24663.03  
Peak output token throughput (tok/s):    52286.00  
Peak concurrent requests:                1000.00   
Total Token throughput (tok/s):          26192.43  
---------------Time to First Token----------------
Mean TTFT (ms):                          1875.19   
Median TTFT (ms):                        1877.09   
P99 TTFT (ms):                           2368.76   
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          31.62     
Median TPOT (ms):                        29.59     
P99 TPOT (ms):                           39.16     
---------------Inter-token Latency----------------
Mean ITL (ms):                           31.62     
Median ITL (ms):                         27.45     
P99 ITL (ms):                            92.53     
==================================================

On HEAD (latest release):

============ Serving Benchmark Result ============
Successful requests:                     1000      
Failed requests:                         0         
Benchmark duration (s):                  101.31    
Total input tokens:                      127000    
Total generated tokens:                  2048000   
Request throughput (req/s):              9.87      
Output token throughput (tok/s):         20215.75  
Peak output token throughput (tok/s):    52972.00  
Peak concurrent requests:                1000.00   
Total Token throughput (tok/s):          21469.36  
---------------Time to First Token----------------
Mean TTFT (ms):                          1885.78   
Median TTFT (ms):                        1851.98   
P99 TTFT (ms):                           2482.30   
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          33.57     
Median TPOT (ms):                        32.10     
P99 TPOT (ms):                           48.02     
---------------Inter-token Latency----------------
Mean ITL (ms):                           33.57     
Median ITL (ms):                         27.89     
P99 ITL (ms):                            93.49
==================================================

@kaushikmitr / @BenjaminBraunDev FYI

Which issue(s) this PR fixes:

Addresses #1924

Does this PR introduce a user-facing change?:

Yes. This PR adds a new feature flag, "prepareDataPlugins".

@k8s-ci-robot (Contributor) commented:

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. labels Dec 4, 2025
netlify bot commented Dec 4, 2025

Deploy Preview for gateway-api-inference-extension ready!

🔨 Latest commit: ab82246
🔍 Latest deploy log: https://app.netlify.com/projects/gateway-api-inference-extension/deploys/693a372fb2aa7c0008df86b0
😎 Deploy Preview: https://deploy-preview-1942--gateway-api-inference-extension.netlify.app

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Dec 4, 2025
@rahulgurnani rahulgurnani changed the title Implement PreapreDataPlugin for prefix cache match plugin Implement PrepareDataPlugin for prefix cache match plugin Dec 4, 2025
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 4, 2025
PrefixCacheMatchPercentKey = "PrefixCacheMatchPercentKey"
)

type PrefixCacheMatchPercent struct {
Contributor commented:

  1. I suggest making this something more generic, like PrefixCacheInfo, so we can easily extend it. One of the things we will likely need to add is cache tier info.
  2. Instead of (or in addition to) providing the percentage, let's provide the match length. The idea is to provide more "raw" data for consumers: the percentage is just something the scorer cares about, while the latency predictor may work better with the length (see the sketch below).
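
A rough sketch of what such a more generic struct could look like; every field beyond the match percentage is an illustrative assumption, not something from this PR:

```go
package approximateprefix

// PrefixCacheInfo is a sketch of a more generic, extensible payload.
type PrefixCacheInfo struct {
	// MatchedPrefixLen is the raw number of prompt tokens (or blocks) found in
	// the prefix cache; a latency predictor could consume this directly.
	MatchedPrefixLen int
	// MatchPercent is MatchedPrefixLen divided by the prompt length, which is
	// what a prefix-affinity scorer typically cares about.
	MatchPercent float64
	// CacheTier could later indicate where the match lives (e.g. GPU vs. host
	// memory), per the cache tier note above.
	CacheTier string
}
```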

Author replied:

I renamed it to PrefixCacheInfo to make it easy to extend. The predictor is presently trained only on the prefix cache match percentage, so I think switching to length may not be suitable in the short term. @BenjaminBraunDev keep me honest here. Thanks!

Collaborator commented:

I think @liu-cong 's opinion carries the most weight here.

The recommendations here seem reasonable; aggregating data for easier ergonomics of an unproven, experimental plugin, at the expense of overall usability, does not seem a worthwhile tradeoff.

I'm not sure why this comment was resolved

Author replied:

Updated it to contain the prefix-cache-related info. Thanks!

@rahulgurnani rahulgurnani requested a review from liu-cong December 6, 2025 05:16
@rahulgurnani rahulgurnani marked this pull request as ready for review December 6, 2025 05:16
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 6, 2025
@rahulgurnani (author) commented:

/assign @kfswain

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: rahulgurnani
Once this PR has been reviewed and has the lgtm label, please ask for approval from kfswain. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

  // If a cycle is detected, it returns an error.
- func (c *Config) PrepareDataPluginGraph() error {
+ func (c *Config) PrepareDataPluginGraph(enablePrepareDataPlugins bool) error {
+ 	if !enablePrepareDataPlugins {
Collaborator commented:

Related to the other comment, let's remove this check from here.

Author replied:

Done, thanks!

return keys
}

func TestPrefixCacheScorer_Score(t *testing.T) {
Collaborator commented:

Can we have a unit test that also creates the legacy prefix scorer and checks that the computed scores are identical? We should be able to validate that the behavior is the same in these unit tests.
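
A non-authoritative sketch of the parity-test pattern being asked for; the two score functions below are stand-ins (both computing a matched-prefix fraction), since a real test would call the legacy prefix scorer and the new scorer instead:

```go
package prefixparity_test

import (
	"math"
	"testing"
)

// legacyScore and newScore stand in for the legacy prefix scorer and the new
// prepare-data-based scorer; here both score by the fraction of the prompt
// matched against a cached string.
func legacyScore(cached, prompt string) float64 { return matchedFraction(cached, prompt) }
func newScore(cached, prompt string) float64    { return matchedFraction(cached, prompt) }

func matchedFraction(cached, prompt string) float64 {
	if len(prompt) == 0 {
		return 0
	}
	n := 0
	for n < len(cached) && n < len(prompt) && cached[n] == prompt[n] {
		n++
	}
	return float64(n) / float64(len(prompt))
}

// TestPrefixScorerParity checks that both implementations produce identical
// scores for the same inputs.
func TestPrefixScorerParity(t *testing.T) {
	cached := "the quick brown fox"
	for _, prompt := range []string{"the quick brown fox jumps", "unrelated prompt", ""} {
		want, got := legacyScore(cached, prompt), newScore(cached, prompt)
		if math.Abs(want-got) > 1e-9 {
			t.Errorf("prompt %q: legacy=%v new=%v", prompt, want, got)
		}
	}
}
```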

@rahulgurnani (author) replied on Dec 9, 2025:

I removed the scorer from this PR since it is not used yet; I plan to add it back in a separate PR. This PR is scoped to adding the prepare-data step so that the predictor can use it. Thanks!

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 9, 2025
@kfswain (Collaborator) commented Dec 9, 2025:

Looks good. Just a minor nit, and a comment to open the issue.

I feel strongly about the issue, however, as that is a key part of ensuring that the P/C data model is reliable. Once you tag the created issue here (and bind it to the milestone), I'm happy to stamp the PR.

Thanks! Looks great!

@rahulgurnani rahulgurnani force-pushed the pc-m branch 2 times, most recently from 5e8aff8 to bf00276 Compare December 10, 2025 01:04
}

func (p *Plugin) Produces() map[string]any {
return map[string]any{approximateprefix.PrefixCacheMatchInfoKey: approximateprefix.PrefixCacheMatchInfo{}}
Contributor commented:

Everything else looks good, just one question for my understanding: what is Produces() used for?

Here it does not populate any data in PrefixCacheMatchInfo{}; it's only used for constructing the DAG. Are we planning to populate PrefixCacheMatchInfo in Produces()? And when/how should we populate it?

Author replied:

We do it in PrepareRequestData. Produces is used for DAG validation on EPP startup.

Refer to https://docs.google.com/document/d/1EQwXL2pCuUyM1B917FUgP_8pFS3VF8F_bUfjy8IE7gM/edit?tab=t.vmaefhinvkl5#heading=h.s9mr6kynb3ls for more context. Thanks!

Contributor replied:

Yeah, Produces outputs a map[string]any. I get the idea of using the keys of the output map for DAG validation; I'm wondering what the value is used for. Currently you put a placeholder approximateprefix.PrefixCacheMatchInfo{} there; I'm wondering if it will be populated in the future.

Contributor commented:

If the Produces output is meant to be a set, should we change it to a map[string]struct{} or map[string]bool, which is more Go-idiomatic? @rahulgurnani @kfswain

Another contributor replied:

@zetxqx as far as I understand, the validation is not only for key existence but also that the producer's output type matches the type the consumer expects.

The way it's done here is by setting an empty struct; the validation code can then use reflection (or similar) to verify that the types match.

@kfswain (Collaborator) replied on Dec 10, 2025:

@zetxqx as far as I understand the validation is not only for key existence, but also that the producer output type correlates to the type the consumer wants.

This is correct: the value is used via reflection on startup to ensure that the value type for the key is as expected, allowing confidence in type usage even in an environment where only out-of-tree plugins are used. So using a set would not allow for a fully reliable DAG (a key name may be used with an unexpected type, or we would be forced to run reflection on every plugin call).
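
A minimal sketch of what such startup validation could look like; the DataPlugin interface and the validate helper are assumptions for illustration, not the code added in this PR:

```go
package dagcheck

import (
	"fmt"
	"reflect"
)

// DataPlugin is an illustrative stand-in for the prepare-data plugin interface.
type DataPlugin interface {
	Name() string
	Produces() map[string]any // key -> zero value of the produced type
	Consumes() map[string]any // key -> zero value of the expected type
}

// validate checks that every consumed key has a producer and that the
// producer's declared value type matches the consumer's expectation, using
// reflection on the placeholder values once at startup.
func validate(plugins []DataPlugin) error {
	produced := map[string]reflect.Type{}
	for _, p := range plugins {
		for key, zero := range p.Produces() {
			produced[key] = reflect.TypeOf(zero)
		}
	}
	for _, p := range plugins {
		for key, zero := range p.Consumes() {
			prodType, ok := produced[key]
			if !ok {
				return fmt.Errorf("plugin %q consumes %q, but no plugin produces it", p.Name(), key)
			}
			if want := reflect.TypeOf(zero); prodType != want {
				return fmt.Errorf("key %q: produced as %v, consumed as %v", key, prodType, want)
			}
		}
	}
	return nil
}
```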

Contributor replied:

I see, thank you all. Non-blocking for this PR, just two points from my understanding of the current code path:

  1. Currently the DAG construction only checks the key here:

         for producedKey := range plugins[i].Produces() {
             // If plugin j consumes the produced key, then j depends on i. We can break after the first match.
             if _, ok := plugins[j].Consumes()[producedKey]; ok {

  2. If we only want to do pure type checking, not inspect the information inside the type, should we consider having Produces output map[string]reflect.Type or map[reflect.Type]struct{}?

Collaborator replied:

  1. That should be fixed; the DAG needs more work, and I have other comments in this PR suggesting as much.
  2. You could, but it wouldn't make a material difference other than making implementers call reflect.TypeOf(pluginProducingType). We will do all of that under the hood in the DAG checker anyway; it just depends on what the preferred UX is.

Author replied:

I added the validation in this PR itself; please take another look at the last commit. I think the data check is simpler than using reflection. We could document this behavior. Thanks!

@rahulgurnani (author) commented, quoting @kfswain's review above:

Looks good. Just a minor nit, and a comment to open the issue.

I feel strongly about the issue, however, as that is a key part of ensuring that the P/C data model is reliable. Once you tag the created issue here (and bind it to the milestone), I'm happy to stamp the PR.

Thanks! Looks great!

Addressed it in the current PR. Let me know what you think. Thanks!


// PrepareDataPluginGraph creates data dependency graph and sorts the plugins in topological order.
// If a cycle is detected, it returns an error.
func (c *Config) PrepareDataPluginGraph() error {
Collaborator commented:

The DAG should be built completely agnostic of whether any PrepareData plugins are in use. Will add suggestions to help clarify what I mean.

Comment on lines 111 to 120
	if len(c.prepareDataPlugins) == 0 {
		return nil
	}
	dag := buildDAG(c.prepareDataPlugins)
	plugins, err := sortPlugins(dag, c.prepareDataPlugins)
	if err != nil {
Collaborator commented:

Suggested change
-	if len(c.prepareDataPlugins) == 0 {
-		return nil
-	}
-	dag := buildDAG(c.prepareDataPlugins)
-	plugins, err := sortPlugins(dag, c.prepareDataPlugins)
-	if err != nil {
+	dag := buildDAG(c.allPlugins)
+	plugins, err := sortPlugins(dag, c.allPlugins)
+	if err != nil {

This is semi-pseudocode, but I hope it illustrates what I mean: we should always create the DAG, entirely agnostic of which plugins are in use.

@rahulgurnani (author) replied on Dec 10, 2025:

Ohh, I understand. I added a TODO for it and will address it as a fast follow in the next PR. Let me know if that's reasonable. Thanks!
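
For context, here is a rough sketch of building the DAG over all plugins and sorting it topologically (Kahn's algorithm); buildDAG and sortPlugins mirror the names discussed above, but the bodies are illustrative only, not the code in this PR:

```go
package dagsort

import "fmt"

// DataPlugin is an illustrative stand-in for the prepare-data plugin interface.
type DataPlugin interface {
	Name() string
	Produces() map[string]any
	Consumes() map[string]any
}

// buildDAG returns adjacency lists: an edge i -> j means plugin j consumes
// something plugin i produces, so i must run before j.
func buildDAG(plugins []DataPlugin) map[int][]int {
	edges := map[int][]int{}
	for i := range plugins {
		for j := range plugins {
			if i == j {
				continue
			}
			for producedKey := range plugins[i].Produces() {
				if _, ok := plugins[j].Consumes()[producedKey]; ok {
					edges[i] = append(edges[i], j)
					break
				}
			}
		}
	}
	return edges
}

// sortPlugins runs Kahn's algorithm over the edges; it returns an error if a
// cycle is detected (i.e. not every plugin could be ordered).
func sortPlugins(edges map[int][]int, plugins []DataPlugin) ([]DataPlugin, error) {
	indegree := make([]int, len(plugins))
	for _, targets := range edges {
		for _, j := range targets {
			indegree[j]++
		}
	}
	var queue []int
	for i, d := range indegree {
		if d == 0 {
			queue = append(queue, i)
		}
	}
	var ordered []DataPlugin
	for len(queue) > 0 {
		i := queue[0]
		queue = queue[1:]
		ordered = append(ordered, plugins[i])
		for _, j := range edges[i] {
			indegree[j]--
			if indegree[j] == 0 {
				queue = append(queue, j)
			}
		}
	}
	if len(ordered) != len(plugins) {
		return nil, fmt.Errorf("cycle detected among prepare-data plugins")
	}
	return ordered, nil
}
```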

@rahulgurnani (author) commented:

/retest

@rahulgurnani rahulgurnani force-pushed the pc-m branch 3 times, most recently from 672ca5e to 1a0c28c Compare December 11, 2025 03:10