Commit 0070f21

.*: change llm-instance-gateway -> gateway-api-inference-extension (kubernetes-sigs#161)
Signed-off-by: Madhav Jivrajani <[email protected]>
1 parent c1fd57e commit 0070f21

47 files changed: +88 -88 lines. (This is a large commit, so some file diffs are hidden by default and not every changed file is rendered below.)
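For downstream Go consumers, the practical effect of this commit is a one-line change to the import path; the API group, version, and kinds (InferencePool, InferenceModel) are untouched. Below is a minimal, hypothetical sketch of that change, assuming the v1alpha1 package keeps its usual package name and exported types; it is not code from this repository.

// Hypothetical consumer sketch: only the module path in the import changes.
package main

import (
	"fmt"

	// Before this commit: "inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
)

func main() {
	// The v1alpha1 kinds are the same ones declared in the PROJECT file below.
	pool := v1alpha1.InferencePool{}
	model := v1alpha1.InferenceModel{}
	fmt.Printf("%T %T\n", pool, model)
}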

PROJECT (+4 -4)

@@ -5,23 +5,23 @@
 domain: x-k8s.io
 layout:
 - go.kubebuilder.io/v4
-projectName: llm-instance-gateway
-repo: sigs.k8s.io/llm-instance-gateway
+projectName: gateway-api-inference-extension
+repo: sigs.k8s.io/gateway-api-inference-extension
 resources:
 - api:
     crdVersion: v1
     namespaced: true
   domain: x-k8s.io
   group: inference
   kind: InferencePool
-  path: sigs.k8s.io/llm-instance-gateway/api/v1alpha1
+  path: sigs.k8s.io/gateway-api-inference-extension/api/v1alpha1
   version: v1alpha1
 - api:
     crdVersion: v1
     namespaced: true
   domain: x-k8s.io
   group: inference
   kind: InferenceModel
-  path: sigs.k8s.io/llm-instance-gateway/api/v1alpha1
+  path: sigs.k8s.io/gateway-api-inference-extension/api/v1alpha1
   version: v1alpha1
 version: "3"

README.md (+1 -1)

@@ -26,7 +26,7 @@ make uninstall
 ```
 
 **Deploying the ext-proc image**
-Refer to this [README](https://github.com/kubernetes-sigs/llm-instance-gateway/blob/main/pkg/README.md) on how to deploy the Ext-Proc image.
+Refer to this [README](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/pkg/README.md) on how to deploy the Ext-Proc image.
 
 ## Contributing

The following generated client-go files were updated as well (GitHub does not render generated-file diffs by default):

client-go/applyconfiguration/api/v1alpha1/inferencemodelspec.go (+1 -1)
client-go/applyconfiguration/api/v1alpha1/inferencepoolspec.go (+1 -1)
client-go/applyconfiguration/utils.go (+3 -3)
client-go/clientset/versioned/clientset.go (+1 -1)
client-go/clientset/versioned/fake/clientset_generated.go (+4 -4)
client-go/clientset/versioned/fake/register.go (+1 -1)
client-go/clientset/versioned/scheme/register.go (+1 -1)
client-go/clientset/versioned/typed/api/v1alpha1/api_client.go (+2 -2)
client-go/clientset/versioned/typed/api/v1alpha1/fake/fake_api_client.go (+1 -1)
client-go/clientset/versioned/typed/api/v1alpha1/fake/fake_inferencemodel.go (+2 -2)
client-go/clientset/versioned/typed/api/v1alpha1/fake/fake_inferencepool.go (+2 -2)
client-go/clientset/versioned/typed/api/v1alpha1/inferencemodel.go (+3 -3)
client-go/clientset/versioned/typed/api/v1alpha1/inferencepool.go (+3 -3)
client-go/informers/externalversions/api/interface.go (+2 -2)
client-go/informers/externalversions/api/v1alpha1/inferencemodel.go (+4 -4)
client-go/informers/externalversions/api/v1alpha1/inferencepool.go (+4 -4)
client-go/informers/externalversions/api/v1alpha1/interface.go (+1 -1)
client-go/informers/externalversions/factory.go (+3 -3)
client-go/informers/externalversions/generic.go (+1 -1)
client-go/informers/externalversions/internalinterfaces/factory_interfaces.go (+1 -1)
client-go/listers/api/v1alpha1/inferencemodel.go (+1 -1)
client-go/listers/api/v1alpha1/inferencepool.go (+1 -1)

go.mod (+1 -1)

@@ -1,4 +1,4 @@
-module inference.networking.x-k8s.io/llm-instance-gateway
+module inference.networking.x-k8s.io/gateway-api-inference-extension
 
 go 1.22.7

hack/update-codegen.sh (+1 -1)

@@ -23,7 +23,7 @@ echo "$SCRIPT_ROOT script"
 CODEGEN_PKG=${2:-bin}
 echo $CODEGEN_PKG
 source "${CODEGEN_PKG}/kube_codegen.sh"
-THIS_PKG="inference.networking.x-k8s.io/llm-instance-gateway"
+THIS_PKG="inference.networking.x-k8s.io/gateway-api-inference-extension"
 
 
 kube::codegen::gen_helpers \

pkg/ext-proc/backend/datastore.go (+1 -1)

@@ -5,7 +5,7 @@ import (
 	"math/rand"
 	"sync"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 	corev1 "k8s.io/api/core/v1"
 	"k8s.io/klog/v2"
 )

pkg/ext-proc/backend/datastore_test.go (+1 -1)

@@ -3,7 +3,7 @@ package backend
 import (
 	"testing"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 )
 
 func TestRandomWeightedDraw(t *testing.T) {

pkg/ext-proc/backend/endpointslice_reconciler.go (+1 -1)

@@ -4,7 +4,7 @@ import (
 	"context"
 	"strconv"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 	discoveryv1 "k8s.io/api/discovery/v1"
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/client-go/tools/record"

pkg/ext-proc/backend/endpointslice_reconcilier_test.go (+1 -1)

@@ -4,7 +4,7 @@ import (
 	"sync"
 	"testing"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 	v1 "k8s.io/api/core/v1"
 	discoveryv1 "k8s.io/api/discovery/v1"
 )

pkg/ext-proc/backend/fake.go (+1 -1)

@@ -3,7 +3,7 @@ package backend
 import (
 	"context"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 	klog "k8s.io/klog/v2"
 )

pkg/ext-proc/backend/inferencemodel_reconciler.go (+1 -1)

@@ -3,7 +3,7 @@ package backend
 import (
 	"context"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/client-go/tools/record"
 	"k8s.io/klog/v2"

pkg/ext-proc/backend/inferencemodel_reconciler_test.go (+1 -1)

@@ -4,7 +4,7 @@ import (
 	"sync"
 	"testing"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )

pkg/ext-proc/backend/inferencepool_reconciler.go (+1 -1)

@@ -3,7 +3,7 @@ package backend
 import (
 	"context"
 
-	"inference.networking.x-k8s.io/llm-instance-gateway/api/v1alpha1"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/api/v1alpha1"
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/client-go/tools/record"
 	"k8s.io/klog/v2"

pkg/ext-proc/backend/vllm/metrics.go (+2 -2)

@@ -12,7 +12,7 @@ import (
 	dto "github.com/prometheus/client_model/go"
 	"github.com/prometheus/common/expfmt"
 	"go.uber.org/multierr"
-	"inference.networking.x-k8s.io/llm-instance-gateway/pkg/ext-proc/backend"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/pkg/ext-proc/backend"
 	klog "k8s.io/klog/v2"
 )
 
@@ -41,7 +41,7 @@ func (p *PodMetricsClientImpl) FetchMetrics(
 	existing *backend.PodMetrics,
 ) (*backend.PodMetrics, error) {
 	// Currently the metrics endpoint is hard-coded, which works with vLLM.
-	// TODO(https://github.com/kubernetes-sigs/llm-instance-gateway/issues/16): Consume this from InferencePool config.
+	// TODO(https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/16): Consume this from InferencePool config.
 	url := fmt.Sprintf("http://%s/metrics", pod.Address)
 	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
 	if err != nil {
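The second hunk above sits inside the vLLM metrics scrape path: FetchMetrics issues a context-bound GET against the pod's hard-coded /metrics path and, given the expfmt import, parses the Prometheus text exposition. Below is a rough, hedged sketch of that pattern only; fetchMetricFamilies and podAddress are illustrative names, not the repository's API, and error handling is reduced to the essentials.

// Illustrative sketch of the fetch-and-parse pattern shown in the hunk above.
// Not the repository's implementation; names and error handling are simplified.
package vllmsketch

import (
	"context"
	"fmt"
	"net/http"

	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
)

// fetchMetricFamilies scrapes the pod's /metrics endpoint (hard-coded today;
// the TODO in the diff tracks consuming the path from InferencePool config)
// and returns the parsed Prometheus metric families keyed by metric name.
func fetchMetricFamilies(ctx context.Context, podAddress string) (map[string]*dto.MetricFamily, error) {
	url := fmt.Sprintf("http://%s/metrics", podAddress)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var parser expfmt.TextParser
	return parser.TextToMetricFamilies(resp.Body)
}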

pkg/ext-proc/backend/vllm/metrics_test.go (+1 -1)

@@ -7,7 +7,7 @@ import (
 	dto "github.com/prometheus/client_model/go"
 	"github.com/stretchr/testify/assert"
 	"google.golang.org/protobuf/proto"
-	"inference.networking.x-k8s.io/llm-instance-gateway/pkg/ext-proc/backend"
+	"inference.networking.x-k8s.io/gateway-api-inference-extension/pkg/ext-proc/backend"
 )
 
 func TestPromToPodMetrics(t *testing.T) {
