diff --git a/_banners/ngf-2.0-release.md b/_banners/ngf-2.0-release.md
new file mode 100644
index 000000000..135681553
--- /dev/null
+++ b/_banners/ngf-2.0-release.md
@@ -0,0 +1,7 @@
+{{< banner "notice" "NGINX Gateway Fabric 2.0 is now available" >}}
+
+NGINX Gateway Fabric 2.0 has been released! Follow [these instructions]({{< ref "/ngf/install/upgrade-version.md#upgrade-from-v1x-to-v2x" >}}) to upgrade from 1.x to 2.0.
+
+For 1.x, check out [an older version]({{< ref "/ngf/install/upgrade-version.md#access-nginx-gateway-fabric-1x-documentation" >}}) of the documentation.
+
+{{< /banner >}}
\ No newline at end of file
diff --git a/content/includes/ngf/installation/helm/pulling-the-chart.md b/content/includes/ngf/installation/helm/pulling-the-chart.md
index 0d0a5071a..b82b2f809 100644
--- a/content/includes/ngf/installation/helm/pulling-the-chart.md
+++ b/content/includes/ngf/installation/helm/pulling-the-chart.md
@@ -2,11 +2,9 @@
docs: "DOCS-1439"
---
-Pull the latest stable release of the NGINX Gateway Fabric chart:
+```shell
+helm pull oci://ghcr.io/nginx/charts/nginx-gateway-fabric --untar
+cd nginx-gateway-fabric
+```
- ```shell
- helm pull oci://ghcr.io/nginx/charts/nginx-gateway-fabric --untar
- cd nginx-gateway-fabric
- ```
-
- If you want the latest version from the **main** branch, add `--version 0.0.0-edge` to your pull command.
+For the latest version from the **main** branch, add _--version 0.0.0-edge_ to your pull command.
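+
+For example, pulling the edge version looks like this:
+
+```shell
+helm pull oci://ghcr.io/nginx/charts/nginx-gateway-fabric --version 0.0.0-edge --untar
+cd nginx-gateway-fabric
+```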
diff --git a/content/includes/ngf/installation/install-gateway-api-experimental-features.md b/content/includes/ngf/installation/install-gateway-api-experimental-features.md
index db70cc083..aa95d1eb9 100644
--- a/content/includes/ngf/installation/install-gateway-api-experimental-features.md
+++ b/content/includes/ngf/installation/install-gateway-api-experimental-features.md
@@ -17,7 +17,7 @@ kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gate
To enable experimental features on NGINX Gateway Fabric:
Using Helm: Set `nginxGateway.gwAPIExperimentalFeatures.enable` to true. An example can be found
-in the [Installation with Helm]({{< ref "/ngf/installation/installing-ngf/helm.md#custom-installation-options" >}}) guide.
+in the [Installation with Helm]({{< ref "/ngf/install/helm.md#custom-installation-options" >}}) guide.
Using Kubernetes manifests: Add the `--gateway-api-experimental-features` command-line flag to the deployment manifest args.
-An example can be found in the [Installation with Kubernetes manifests]({{< ref "/ngf/installation/installing-ngf/manifests.md#3-deploy-nginx-gateway-fabric" >}}) guide.
+An example can be found in the [Installation with Kubernetes manifests]({{< ref "/ngf/install/manifests.md#3-deploy-nginx-gateway-fabric" >}}) guide.
diff --git a/content/includes/ngf/installation/nginx-plus/docker-registry-secret.md b/content/includes/ngf/installation/nginx-plus/docker-registry-secret.md
index 6c5902604..c6d666b6d 100644
--- a/content/includes/ngf/installation/nginx-plus/docker-registry-secret.md
+++ b/content/includes/ngf/installation/nginx-plus/docker-registry-secret.md
@@ -2,7 +2,7 @@
docs: "DOCS-000"
---
-{{< note >}} If you would rather pull the NGINX Plus image and push to a private registry, you can skip this specific step and instead follow [this step]({{< ref "/ngf/installation/nginx-plus-jwt.md#pulling-an-image-for-local-use" >}}). {{< /note >}}
+{{< note >}} If you would rather pull the NGINX Plus image and push to a private registry, you can skip this specific step and instead follow [this step]({{< ref "/ngf/install/nginx-plus.md#pulling-an-image-for-local-use" >}}). {{< /note >}}
If the `nginx-gateway` namespace does not yet exist, create it:
diff --git a/content/ngf/_index.md b/content/ngf/_index.md
index 6e7642933..f4685bcf1 100644
--- a/content/ngf/_index.md
+++ b/content/ngf/_index.md
@@ -1,4 +1,10 @@
---
title: "NGINX Gateway Fabric"
url: /nginx-gateway-fabric/
+cascade:
+ banner:
+ enabled: true
+ type: deprecation
+ start-date: 2025-05-30
+ md: /_banners/ngf-2.0-release.md
---
\ No newline at end of file
diff --git a/content/ngf/releases.md b/content/ngf/changelog.md
similarity index 52%
rename from content/ngf/releases.md
rename to content/ngf/changelog.md
index f371d7a96..5453d1b8f 100644
--- a/content/ngf/releases.md
+++ b/content/ngf/changelog.md
@@ -1,11 +1,10 @@
---
-title: Releases
-description: "NGINX Gateway Fabric releases."
-weight: 700
+title: Changelog
toc: true
-type: reference
-product: NGF
-docs: "DOCS-1359"
+weight: 900
+nd-content-type: reference
+nd-product: NGF
+nd-docs: "DOCS-1359"
---
See the NGINX Gateway Fabric changelog page:
diff --git a/content/ngf/get-started.md b/content/ngf/get-started.md
index 334e99175..c6a03bd0c 100644
--- a/content/ngf/get-started.md
+++ b/content/ngf/get-started.md
@@ -2,15 +2,15 @@
title: Get started
weight: 200
toc: true
-type: how-to
-product: NGF
-docs: DOCS-000
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
---
{{< important >}}
This document is for trying out NGINX Gateway Fabric, and not intended for a production environment.
-For standard deployments, you should read the [Install NGINX Gateway Fabric]({{< ref "/ngf/installation/installing-ngf" >}}) section.
+For standard deployments, you should read the [Install NGINX Gateway Fabric]({{< ref "/ngf/install/" >}}) section.
{{< /important >}}
This is a guide for getting started with NGINX Gateway Fabric. It explains how to:
@@ -21,8 +21,6 @@ This is a guide for getting started with NGINX Gateway Fabric. It explains how t
By following the steps in order, you will finish with a functional NGINX Gateway Fabric cluster.
----
-
## Before you begin
To complete this guide, you need the following prerequisites installed:
@@ -47,13 +45,10 @@ nodes:
- containerPort: 31437
hostPort: 8080
protocol: TCP
- - containerPort: 31438
- hostPort: 8443
- protocol: TCP
```
{{< note >}}
-The two _containerPort_ values are used to later configure a _NodePort_.
+The _containerPort_ value is used later to configure a _NodePort_.
{{< /note >}}
Run the following command:
@@ -87,8 +82,6 @@ make create-kind-cluster
{{< /note >}}
----
-
## Install NGINX Gateway Fabric
### Add Gateway API resources
@@ -107,82 +100,28 @@ customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
```
----
-
### Install the Helm chart
-Use `helm` to install NGINX Gateway Fabric with the following command:
+Use `helm` to install NGINX Gateway Fabric, specifying the NodePort configuration that will be set on the
+NGINX Service when it is provisioned:
```shell
-helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway --set service.create=false
+helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway --set nginx.service.type=NodePort --set-json 'nginx.service.nodePorts=[{"port":31437,"listenerPort":80}]'
```
+{{< note >}}
+The _port_ value should equal the _containerPort_ value from _cluster-config.yaml_ [when you created the kind cluster](#set-up-a-kind-cluster). The _listenerPort_ value matches the port that the Gateway listener exposes later in this guide.
+{{< /note >}}
+
```text
-Pulled: ghcr.io/nginx/charts/nginx-gateway-fabric:{{< version-ngf >}}
-Digest: sha256:9bbd1a2fcbfd5407ad6be39f796f582e6263512f1f3a8969b427d39063cc6fee
NAME: ngf
-LAST DEPLOYED: Mon Oct 21 14:45:14 2024
+LAST DEPLOYED: Tue Apr 29 14:45:14 2025
NAMESPACE: nginx-gateway
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
----
-
-### Set up a NodePort
-
-Create the file _nodeport-config.yaml_ with the following contents:
-
-```yaml {linenos=true, hl_lines=[20, 25]}
-apiVersion: v1
-kind: Service
-metadata:
- name: nginx-gateway
- namespace: nginx-gateway
- labels:
- app.kubernetes.io/name: nginx-gateway-fabric
- app.kubernetes.io/instance: ngf
- app.kubernetes.io/version: "{{< version-ngf >}}"
-spec:
- type: NodePort
- selector:
- app.kubernetes.io/name: nginx-gateway-fabric
- app.kubernetes.io/instance: ngf
- ports:
- - name: http
- port: 80
- protocol: TCP
- targetPort: 80
- nodePort: 31437
- - name: https
- port: 443
- protocol: TCP
- targetPort: 443
- nodePort: 31438
-```
-
-{{< note >}}
-The highlighted _nodePort_ values should equal the _containerPort_ values from _cluster-config.yaml_ [when you created the kind cluster](#set-up-a-kind-cluster).
-{{< /note >}}
-
-Apply it using `kubectl`:
-
-```shell
-kubectl apply -f nodeport-config.yaml
-```
-```text
-service/nginx-gateway created
-```
-
-{{< warning >}}
-The NodePort resource must be deployed in the same namespace as NGINX Gateway Fabric.
-
-If you are making customizations, ensure your `labels:` and `selectors:` also match the labels of the NGINX Gateway Fabric deployment.
-{{< /warning >}}
-
----
-
## Create an example application
In the previous section, you deployed NGINX Gateway Fabric to a local cluster. This section shows you how to deploy a simple web application to test that NGINX Gateway Fabric works.
@@ -191,8 +130,6 @@ In the previous section, you deployed NGINX Gateway Fabric to a local cluster. T
The YAML code in the following sections can be found in the [cafe-example folder](https://github.com/nginx/nginx-gateway-fabric/tree/main/examples/cafe-example) of the GitHub repository.
{{< /note >}}
----
-
### Create the application resources
Create the file _cafe.yaml_ with the following contents:
@@ -220,12 +157,10 @@ kubectl -n default get pods
```text
NAME READY STATUS RESTARTS AGE
-coffee-6db967495b-wk2mm 1/1 Running 0 10s
-tea-7b7d6c947d-d4qcf 1/1 Running 0 10s
+coffee-676c9f8944-k2bmd 1/1 Running 0 9s
+tea-6fbfdcb95d-9lhbj 1/1 Running 0 9s
```
----
-
### Create Gateway and HTTPRoute resources
Create the file _gateway.yaml_ with the following contents:
@@ -242,6 +177,19 @@ kubectl apply -f gateway.yaml
gateway.gateway.networking.k8s.io/gateway created
```
+Verify that the NGINX deployment has been provisioned:
+
+```shell
+kubectl -n default get pods
+```
+
+```text
+NAME READY STATUS RESTARTS AGE
+coffee-676c9f8944-k2bmd 1/1 Running 0 31s
+gateway-nginx-66b5d78f8f-4fmtb 1/1 Running 0 13s
+tea-6fbfdcb95d-9lhbj 1/1 Running 0 31s
+```
+
Create the file _cafe-routes.yaml_ with the following contents:
{{< ghcode `https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/refs/heads/main/examples/cafe-example/cafe-routes.yaml`>}}
@@ -257,29 +205,26 @@ httproute.gateway.networking.k8s.io/coffee created
httproute.gateway.networking.k8s.io/tea created
```
----
-
### Verify the configuration
You can check that all of the expected services are available using `kubectl get`:
```shell
-kubectl get service --all-namespaces
+kubectl -n default get services
```
```text
-NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-default coffee ClusterIP 10.96.18.163 80/TCP 2m51s
-default kubernetes ClusterIP 10.96.0.1 443/TCP 4m41s
-default tea ClusterIP 10.96.169.132 80/TCP 2m51s
-kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 4m40s
-nginx-gateway nginx-gateway NodePort 10.96.186.45 80:31437/TCP,443:31438/TCP 3m6s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+coffee ClusterIP 10.96.206.93 80/TCP 2m2s
+gateway-nginx NodePort 10.96.157.168 80:31437/TCP 104s
+kubernetes ClusterIP 10.96.0.1 443/TCP 142m
+tea ClusterIP 10.96.43.183 80/TCP 2m2s
```
You can also use `kubectl describe` on the new resources to check their status:
```shell
-kubectl describe httproutes
+kubectl -n default describe httproutes
```
```text
@@ -290,10 +235,10 @@ Annotations:
API Version: gateway.networking.k8s.io/v1
Kind: HTTPRoute
Metadata:
- Creation Timestamp: 2024-10-21T13:46:51Z
+ Creation Timestamp: 2025-04-29T19:06:31Z
Generation: 1
- Resource Version: 821
- UID: cc591089-d3aa-44d3-a851-e2bbfa285029
+ Resource Version: 12285
+ UID: c8055a74-b4c6-442f-b3fb-350fb88b2a7c
Spec:
Hostnames:
cafe.example.com
@@ -316,13 +261,13 @@ Spec:
Status:
Parents:
Conditions:
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: The route is accepted
Observed Generation: 1
Reason: Accepted
Status: True
Type: Accepted
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: All references are resolved
Observed Generation: 1
Reason: ResolvedRefs
@@ -345,10 +290,10 @@ Annotations:
API Version: gateway.networking.k8s.io/v1
Kind: HTTPRoute
Metadata:
- Creation Timestamp: 2024-10-21T13:46:51Z
+ Creation Timestamp: 2025-04-29T19:06:31Z
Generation: 1
- Resource Version: 823
- UID: d72d2a19-1c4d-48c4-9808-5678cff6c331
+ Resource Version: 12284
+ UID: 55aa0ab5-9b1c-4028-9bb5-4903f05bb998
Spec:
Hostnames:
cafe.example.com
@@ -371,13 +316,13 @@ Spec:
Status:
Parents:
Conditions:
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: The route is accepted
Observed Generation: 1
Reason: Accepted
Status: True
Type: Accepted
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: All references are resolved
Observed Generation: 1
Reason: ResolvedRefs
@@ -394,7 +339,7 @@ Events:
```
```shell
-kubectl describe gateways
+kubectl -n default describe gateways
```
```text
@@ -405,10 +350,10 @@ Annotations:
API Version: gateway.networking.k8s.io/v1
Kind: Gateway
Metadata:
- Creation Timestamp: 2024-10-21T13:46:36Z
+ Creation Timestamp: 2025-04-29T19:05:01Z
Generation: 1
- Resource Version: 824
- UID: 2ae8ec42-70eb-41a4-b249-3e47177aea48
+ Resource Version: 12286
+ UID: 0baa6e15-55e0-405a-9e7c-de22472fc3ad
Spec:
Gateway Class Name: nginx
Listeners:
@@ -422,15 +367,15 @@ Spec:
Status:
Addresses:
Type: IPAddress
- Value: 10.244.0.5
+ Value: 10.96.157.168
Conditions:
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: Gateway is accepted
Observed Generation: 1
Reason: Accepted
Status: True
Type: Accepted
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: Gateway is programmed
Observed Generation: 1
Reason: Programmed
@@ -439,25 +384,25 @@ Status:
Listeners:
Attached Routes: 2
Conditions:
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: Listener is accepted
Observed Generation: 1
Reason: Accepted
Status: True
Type: Accepted
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: Listener is programmed
Observed Generation: 1
Reason: Programmed
Status: True
Type: Programmed
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: All references are resolved
Observed Generation: 1
Reason: ResolvedRefs
Status: True
Type: ResolvedRefs
- Last Transition Time: 2024-10-21T13:46:51Z
+ Last Transition Time: 2025-04-29T19:06:31Z
Message: No conflicts
Observed Generation: 1
Reason: NoConflicts
@@ -472,11 +417,9 @@ Status:
Events:
```
----
-
## Test NGINX Gateway Fabric
-By configuring the cluster with the ports `31437` and `31438`, there is implicit port forwarding from your local machine to NodePort, allowing for direct communication to the NGINX Gateway Fabric service.
+Because the cluster was configured with port `31437`, there is implicit port forwarding from your local machine to the NodePort, allowing direct communication with the NGINX Gateway Fabric service.
You can use `curl` to test the new services by targeting the hostname (_cafe.example.com_) with the _/coffee_ and _/tea_ paths:
@@ -485,11 +428,11 @@ curl --resolve cafe.example.com:8080:127.0.0.1 http://cafe.example.com:8080/coff
```
```text
-Server address: 10.244.0.6:8080
-Server name: coffee-6db967495b-wk2mm
-Date: 21/Oct/2024:13:52:13 +0000
+Server address: 10.244.0.16:8080
+Server name: coffee-676c9f8944-k2bmd
+Date: 29/Apr/2025:19:08:21 +0000
URI: /coffee
-Request ID: fb226a54fd94f927b484dd31fb30e747
+Request ID: f34e138922171977a79b1b0d0395b97e
```
```shell
@@ -497,17 +440,15 @@ curl --resolve cafe.example.com:8080:127.0.0.1 http://cafe.example.com:8080/tea
```
```text
-Server address: 10.244.0.7:8080
-Server name: tea-7b7d6c947d-d4qcf
-Date: 21/Oct/2024:13:52:17 +0000
+Server address: 10.244.0.17:8080
+Server name: tea-6fbfdcb95d-9lhbj
+Date: 29/Apr/2025:19:08:31 +0000
URI: /tea
-Request ID: 43882f2f5794a1ee05d2ea017a035ce3
+Request ID: 1b5c8f3a4532ea7d7510cf14ffeb27af
```
----
-
-## See also
+## Next steps
-- [Install NGINX Gateway Fabric]({{< ref "/ngf/installation/installing-ngf/" >}}), for additional ways to install NGINX Gateway Fabric
-- [How-to guides]({{< ref "/ngf/how-to/" >}}), for configuring your cluster
-- [Traffic management]({{< ref "/ngf/how-to/traffic-management/" >}}), for more in-depth traffic management configuration
+- [Install NGINX Gateway Fabric]({{< ref "/ngf/install/" >}}), for additional ways to install NGINX Gateway Fabric
+- [Traffic management]({{< ref "/ngf/traffic-management/" >}}), for more in-depth traffic management configuration
+- [How-to guides]({{< ref "/ngf/how-to/" >}}), for configuring your cluster
\ No newline at end of file
diff --git a/content/ngf/how-to/_index.md b/content/ngf/how-to/_index.md
index b48bf630e..c6717f894 100644
--- a/content/ngf/how-to/_index.md
+++ b/content/ngf/how-to/_index.md
@@ -1,5 +1,5 @@
---
title: "How-to guides"
url: /nginx-gateway-fabric/how-to/
-weight: 400
+weight: 550
---
diff --git a/content/ngf/how-to/control-plane-configuration.md b/content/ngf/how-to/control-plane-configuration.md
index 15db1c714..c7ca856b4 100644
--- a/content/ngf/how-to/control-plane-configuration.md
+++ b/content/ngf/how-to/control-plane-configuration.md
@@ -13,7 +13,7 @@ Learn how to dynamically update the NGINX Gateway Fabric control plane configura
NGINX Gateway Fabric can dynamically update the control plane configuration without restarting. The control plane configuration is stored in the NginxGateway custom resource, created during the installation of NGINX Gateway Fabric.
-NginxGateway is deployed in the same namespace as the controller (Default: `nginx-gateway`). The resource's default name is based on your [installation method]({{< ref "/ngf/installation/installing-ngf" >}}):
+NginxGateway is deployed in the same namespace as the controller (Default: `nginx-gateway`). The resource's default name is based on your [installation method]({{< ref "/ngf/install/" >}}):
- Helm: `<release-name>-config`
- Manifests: `nginx-gateway-config`
diff --git a/content/ngf/how-to/data-plane-configuration.md b/content/ngf/how-to/data-plane-configuration.md
index 1ffb7a17a..994a86faa 100644
--- a/content/ngf/how-to/data-plane-configuration.md
+++ b/content/ngf/how-to/data-plane-configuration.md
@@ -11,98 +11,223 @@ Learn how to dynamically update the NGINX Gateway Fabric global data plane confi
## Overview
-NGINX Gateway Fabric can dynamically update the global data plane configuration without restarting. The data plane configuration is a global configuration for NGINX that has options that are not available using the standard Gateway API resources. This includes such things as setting an OpenTelemetry collector config, disabling http2, changing the IP family, or setting the NGINX error log level.
+NGINX Gateway Fabric can dynamically update the global data plane configuration without restarting. The data plane configuration contains NGINX settings that are not available through the standard Gateway API resources. This includes options such as configuring an OpenTelemetry collector, disabling HTTP/2, changing the IP family, modifying infrastructure-related fields, and setting the NGINX error log level.
-The data plane configuration is stored in the NginxProxy custom resource, which is a cluster-scoped resource that is attached to the `nginx` GatewayClass.
+The data plane configuration is stored in the `NginxProxy` custom resource, which is a namespace-scoped resource that can be attached to a GatewayClass or Gateway. When attached to a GatewayClass, the fields in the NginxProxy affect all Gateways that belong to the GatewayClass.
+When attached to a Gateway, the fields in the NginxProxy only affect the Gateway. If a GatewayClass and its Gateway both specify an NginxProxy, the GatewayClass NginxProxy provides defaults that can be overridden by the Gateway NginxProxy. See the [Merging Semantics](#merging-semantics) section for more detail.
-By default, the NginxProxy resource is not created when installing NGINX Gateway Fabric. However, you can set configuration options in the `nginx.config` Helm values, and the resource will be created and attached when NGINX Gateway Fabric is installed using Helm. You can also [manually create and attach](#manually-create-the-configuration) the resource after NGINX Gateway Fabric is already installed.
+---
-When installed using the Helm chart, the NginxProxy resource is named `-proxy-config`.
+## Merging Semantics
-**For a full list of configuration options that can be set, see the `NginxProxy spec` in the [API reference]({{< ref "/ngf/reference/api.md" >}}).**
+NginxProxy resources are merged when a GatewayClass and a Gateway reference different NginxProxy resources.
-{{< note >}} Some global configuration also requires an [associated policy]({{< ref "/ngf/overview/custom-policies.md" >}}) to fully enable a feature (such as [tracing]({{< ref "/ngf/how-to/monitoring/tracing.md" >}}), for example). {{< /note >}}
+For fields that are bools, integers, and strings:
+- If a field on the Gateway's NginxProxy is unspecified (`nil`), the Gateway __inherits__ the value of the field in the GatewayClass's NginxProxy.
+- If a field on the Gateway's NginxProxy is specified, its value __overrides__ the value of the field in the GatewayClass's NginxProxy.
----
+For array fields:
+- If the array on the Gateway's NginxProxy is unspecified (`nil`), the Gateway __inherits__ the entire array in the GatewayClass's NginxProxy.
+- If the array on the Gateway's NginxProxy is empty, it __overrides__ the entire array in the GatewayClass's NginxProxy, effectively unsetting the field (see the sketch after this list).
+- If the array on the Gateway's NginxProxy is specified and not empty, it __overrides__ the entire array in the GatewayClass's NginxProxy.
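+
+As a brief sketch of the empty-array case, a Gateway-level NginxProxy like the following (the resource name is illustrative) would clear any `spanAttributes` inherited from the GatewayClass:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+  name: gateway-clear-span-attributes
+  namespace: default
+spec:
+  telemetry:
+    spanAttributes: []
+```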
-## Viewing and Updating the Configuration
-If the `NginxProxy` resource already exists, you can view and edit it.
+### Merging Examples
-{{< note >}} For the following examples, the name `ngf-proxy-config` should be updated to the name of the resource created for your installation. {{< /note >}}
+This section contains examples of how NginxProxy resources are merged when they are attached to both a Gateway and its GatewayClass.
-To view the current configuration:
+#### Disable HTTP/2 for a Gateway
-```shell
-kubectl describe nginxproxies ngf-proxy-config
+A GatewayClass references the following NginxProxy, which explicitly allows HTTP/2 traffic and sets the IP family to `ipv4`:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+ name: gateway-class-enable-http2
+ namespace: default
+spec:
+ ipFamily: "ipv4"
+  disableHTTP2: false
```
-To update the configuration:
+To disable HTTP/2 traffic for a particular Gateway, reference the following NginxProxy in the Gateway's spec:
-```shell
-kubectl edit nginxproxies ngf-proxy-config
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+ name: gateway-disable-http
+ namespace: default
+spec:
+  disableHTTP2: true
```
-This will open the configuration in your default editor. You can then update and save the configuration, which is applied automatically to the data plane.
+These NginxProxy resources are merged and the following settings are applied to the Gateway:
-To view the status of the configuration, check the GatewayClass that it is attached to:
+```yaml
+ipFamily: "ipv4"
+disableHTTP2: true
+```
-```shell
-kubectl describe gatewayclasses nginx
+#### Change Telemetry configuration for a Gateway
+
+A GatewayClass references the following NginxProxy, which configures telemetry:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+ name: gateway-class-telemetry
+ namespace: default
+spec:
+ telemetry:
+ exporter:
+ endpoint: "my.telemetry.collector:9000"
+ interval: "60s"
+ batchSize: 20
+ serviceName: "my-company"
+ spanAttributes:
+ - key: "company-key"
+ value: "company-value"
```
-```text
-...
-Status:
- Conditions:
- ...
- Message: parametersRef resource is resolved
- Observed Generation: 1
- Reason: ResolvedRefs
- Status: True
- Type: ResolvedRefs
+To change the telemetry configuration for a particular Gateway, reference the following NginxProxy in the Gateway's spec:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+ name: gateway-telemetry-service-name
+ namespace: default
+spec:
+ telemetry:
+ exporter:
+ batchSize: 50
+ batchCount: 5
+ serviceName: "my-app"
+ spanAttributes:
+ - key: "app-key"
+ value: "app-value"
```
-If everything is valid, the `ResolvedRefs` condition should be `True`. Otherwise, you will see an `InvalidParameters` condition in the status.
+These NginxProxy resources are merged and the following settings are applied to the Gateway:
+
+```yaml
+ telemetry:
+ exporter:
+ endpoint: "my.telemetry.collector:9000"
+ interval: "60s"
+ batchSize: 50
+ batchCount: 5
+ serviceName: "my-app"
+ spanAttributes:
+ - key: "app-key"
+ value: "app-value"
+```
+
+#### Disable Tracing for a Gateway
+
+A GatewayClass references the following NginxProxy, which configures telemetry:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+ name: gateway-class-telemetry
+ namespace: default
+spec:
+ telemetry:
+ exporter:
+ endpoint: "my.telemetry.collector:9000"
+ interval: "60s"
+ serviceName: "my-company"
+```
+
+To disable tracing for a particular Gateway, reference the following NginxProxy in the Gateway's spec:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+ name: gateway-disable-tracing
+ namespace: default
+spec:
+ telemetry:
+ disabledFeatures:
+ - DisableTracing
+```
+
+These NginxProxy resources are merged and the following settings are applied to the Gateway:
+
+```yaml
+telemetry:
+ exporter:
+ endpoint: "my.telemetry.collector:9000"
+ interval: "60s"
+ serviceName: "my-app"
+ disabledFeatures:
+ - DisableTracing
+```
---
-## Manually create the configuration
+## Configuring the GatewayClass NginxProxy on install
+
+By default, an `NginxProxy` resource is created in the same namespace where NGINX Gateway Fabric is installed, attached to the GatewayClass. You can set configuration options in the `nginx` Helm values section, and the resource will be created and attached using the set values. You can also [manually create and attach](#manually-creating-nginxproxies) specific `NginxProxy` resources to target different Gateways.
+
+When installed using the Helm chart, the NginxProxy resource is named `<release-name>-proxy-config` and is created in the release namespace.
+
+**For a full list of configuration options that can be set, see the `NginxProxy spec` in the [API reference]({{< ref "/ngf/reference/api.md" >}}).**
+
+{{< note >}} Some global configuration also requires an [associated policy]({{< ref "/ngf/overview/custom-policies.md" >}}) to fully enable a feature (such as [tracing]({{< ref "/ngf/monitoring/tracing.md" >}}), for example). {{< /note >}}
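+
+As a minimal sketch, values like the following in the `nginx` section of your Helm `values.yaml` (using settings shown elsewhere in this documentation) are applied to the generated NginxProxy:
+
+```yaml
+nginx:
+  replicas: 2
+  service:
+    type: NodePort
+```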
-If the `NginxProxy` resource doesn't exist, you can create it and attach it to the GatewayClass.
+---
+
+## Manually Creating NginxProxies
-The following command creates a basic `NginxProxy` configuration that sets the IP family to `ipv4` instead of the default value of `dual`:
+The following command creates a basic `NginxProxy` configuration in the `default` namespace that sets the IP family to `ipv4` instead of the default value of `dual`:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha2
kind: NginxProxy
metadata:
  name: ngf-proxy-config
  namespace: default
spec:
  ipFamily: ipv4
EOF
```

For all available fields, see the `NginxProxy spec` in the [API reference]({{< ref "/ngf/reference/api.md" >}}).
+
+---
+
+### Attaching NginxProxy to Gateway
+
+To attach the `ngf-proxy-config` NginxProxy to a Gateway:
```shell
-kubectl edit gatewayclass nginx
+kubectl edit gateway
```
This will open your default editor, allowing you to add the following to the `spec`:
```yaml
-parametersRef:
- group: gateway.nginx.org
- kind: NginxProxy
- name: ngf-proxy-config
+infrastructure:
+ parametersRef:
+ group: gateway.nginx.org
+ kind: NginxProxy
+ name: ngf-proxy-config
```
-After updating, you can check the status of the GatewayClass to see if the configuration is valid:
+{{< note >}} The `NginxProxy` resource must reside in the same namespace as the Gateway it is attached to. {{< /note >}}
+
+After updating, you can check the status of the Gateway to see if the configuration is valid:
```shell
-kubectl describe gatewayclasses nginx
+kubectl describe gateway
```
```text
@@ -123,13 +248,13 @@ If everything is valid, the `ResolvedRefs` condition should be `True`. Otherwise
## Configure the data plane log level
-You can use the `NginxProxy` resource to dynamically configure the Data Plane Log Level.
+You can use the `NginxProxy` resource to dynamically configure the log level.
The following command creates a basic `NginxProxy` configuration that sets the log level to `warn` instead of the default value of `info`:
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha2
kind: NginxProxy
metadata:
  name: ngf-proxy-config
  namespace: default
spec:
  logging:
    level: warn
EOF
```

For a list of supported log levels, see the `NginxProxy spec` in the [API reference]({{< ref "/ngf/reference/api.md" >}}).

{{< note >}}For `debug` logging to work, NGINX needs to be built with `--with-debug` or "in debug mode". NGINX Gateway Fabric can easily
@@ -151,23 +274,33 @@ of a few arguments. {{ note >}}
### Run NGINX Gateway Fabric with NGINX in debug mode
-To run NGINX Gateway Fabric with NGINX in debug mode, follow the [installation document]({{< ref "/ngf/installation/installing-ngf" >}}) with these additional steps:
+To run NGINX Gateway Fabric with NGINX in debug mode, follow these additional steps during [installation]({{< ref "/ngf/install/" >}}):
-Using Helm: Set `nginx.debug` to true.
+- **Helm**: Set _nginx.debug_ to _true_.
+- **Manifests**: Set the _spec.kubernetes.deployment.container.debug_ field in the _NginxProxy_ resource to _true_.
-Using Kubernetes Manifests: Under the `nginx` container of the deployment manifest, add `-c` and `rm -rf /var/run/nginx/*.sock && nginx-debug -g 'daemon off;'`
-as arguments and add `/bin/sh` as the command. The deployment manifest should look something like this:
+To change NGINX mode **after** deploying NGINX Gateway Fabric, use the _NginxProxy_ _spec.kubernetes.deployment.container.debug_ field.
-```text
-...
-- args:
- - -c
- - rm -rf /var/run/nginx/*.sock && nginx-debug -g 'daemon off;'
- command:
- - /bin/sh
-...
+The following command creates a basic _NginxProxy_ configuration that sets both the NGINX mode and the log level to _debug_:
+
+```yaml
+kubectl apply -f - <<EOF
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+  name: ngf-proxy-config
+  namespace: default
+spec:
+  logging:
+    level: debug
+  kubernetes:
+    deployment:
+      container:
+        debug: true
+EOF
+```
+
+{{< note >}} When modifying any _deployment_ field in the _NginxProxy_ resource, any corresponding NGINX instances will be restarted. {{< /note >}}
+
---
## Configure PROXY protocol and RewriteClientIP settings
@@ -189,7 +322,7 @@ The following command creates an `NginxProxy` resource with `RewriteClientIP` se
```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.nginx.org/v1alpha2
kind: NginxProxy
metadata:
  name: ngf-proxy-config
  namespace: default
spec:
  rewriteClientIP:
    mode: ProxyProtocol
    setIPRecursively: true
    trustedAddresses:
    - type: CIDR
      # example range; use the address range of your load balancer or proxy
      value: "10.0.0.0/8"
EOF
```

For all available fields, see the `NginxProxy spec` in the [API reference]({{< ref "/ngf/reference/api.md" >}}).

-
{{< note >}} When sending curl requests to a server expecting proxy information, use the flag `--haproxy-protocol` to avoid broken header errors. {{< /note >}}
+
+---
+
+## Configure infrastructure-related settings
+
+You can configure deployment and service settings for all data plane instances by editing the `NginxProxy` resource at the Gateway or GatewayClass level. These settings can also be specified under the `nginx` section in the Helm values file. You can configure settings such as replicas, pod scheduling options, container resource limits, extra volume mounts, service types, and load balancer settings.
+
+The following command creates an `NginxProxy` resource with 2 replicas, sets `container.resources.requests` to 100m CPU and 128Mi memory, configures a 90-second `pod.terminationGracePeriodSeconds`, and sets the service type to `LoadBalancer` with the IP `192.87.9.1` and an AWS NLB annotation:
+
+```yaml
+kubectl apply -f - <<EOF
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+  name: ngf-proxy-config
+  namespace: default
+spec:
+  kubernetes:
+    deployment:
+      replicas: 2
+      pod:
+        terminationGracePeriodSeconds: 90
+      container:
+        resources:
+          requests:
+            cpu: 100m
+            memory: 128Mi
+    service:
+      type: LoadBalancer
+      loadBalancerIP: "192.87.9.1"
+      annotations:
+        # example AWS NLB annotation
+        service.beta.kubernetes.io/aws-load-balancer-type: nlb
+EOF
+```
+
+For all available fields, see the `NginxProxy spec` in the [API reference]({{< ref "/ngf/reference/api.md" >}}).
+
+---
+
diff --git a/content/ngf/how-to/monitoring/_index.md b/content/ngf/how-to/monitoring/_index.md
deleted file mode 100644
index 81673a4df..000000000
--- a/content/ngf/how-to/monitoring/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Monitoring and troubleshooting"
-url: /nginx-gateway-fabric/how-to/monitoring/
-weight: 300
----
diff --git a/content/ngf/how-to/monitoring/dashboard.md b/content/ngf/how-to/monitoring/dashboard.md
deleted file mode 100644
index 94e411879..000000000
--- a/content/ngf/how-to/monitoring/dashboard.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: NGINX Plus dashboard
-weight: 300
-toc: true
-type: how-to
-product: NGF
-docs: DOCS-1417
----
-
-Learn how to view the NGINX Plus dashboard to see real-time metrics.
-
----
-
-## Overview
-
-The NGINX Plus dashboard offers a real-time live activity monitoring interface that shows key load and performance metrics of your server infrastructure. The dashboard is enabled by default for NGINX Gateway Fabric deployments that use NGINX Plus as the data plane. The dashboard is available on port 8765.
-
-To access the dashboard:
-
-1. Use port-forwarding to forward connections to port 8765 on your local machine to port 8765 on the NGINX Gateway Fabric pod (replace `` with the actual name of the pod).
-
- ```shell
- kubectl port-forward 8765:8765 -n nginx-gateway
- ```
-
-1. Open your browser to [http://127.0.0.1:8765/dashboard.html](http://127.0.0.1:8765/dashboard.html) to access the dashboard.
-
-The dashboard will look like this:
-
-{{< img src="/ngf/img/nginx-plus-dashboard.png" alt="">}}
-
-{{< note >}} The [API](https://nginx.org/en/docs/http/ngx_http_api_module.html) used by the dashboard for metrics is also accessible using the `/api` path. {{< /note >}}
diff --git a/content/ngf/how-to/scaling.md b/content/ngf/how-to/scaling.md
new file mode 100644
index 000000000..22d181cdf
--- /dev/null
+++ b/content/ngf/how-to/scaling.md
@@ -0,0 +1,96 @@
+---
+title: Scaling the control plane and data plane
+weight: 700
+toc: true
+type: how-to
+product: NGF
+docs: DOCS-0000
+---
+
+This document describes how you can separately scale the NGINX Gateway Fabric control plane and data plane.
+
+It provides guidance on how to scale each plane effectively, and when you should do so based on your traffic patterns.
+
+
+### Scaling the data plane
+
+The data plane is the NGINX deployment that handles user traffic to backend applications. Every Gateway object created provisions its own NGINX deployment and configuration.
+
+You have two options for scaling the data plane:
+
+- Increasing the number of replicas for an existing deployment
+- Creating a new Gateway for a new data plane
+
+#### When to increase replicas or create a new Gateway
+
+Understanding when to increase replicas or create a new Gateway is key to managing traffic effectively.
+
+Increasing data plane replicas is ideal when you need to handle more traffic without changing the configuration.
+
+For example, if you're routing traffic to `api.example.com` and notice an increase in load, you can scale the replicas from 1 to 5 to better distribute the traffic and reduce latency.
+
+All replicas will share the same configuration from the Gateway used to set up the data plane, simplifying configuration management.
+
+There are two ways to modify the number of replicas for an NGINX deployment:
+
+First, at the time of installation you can modify the field `nginx.replicas` in the `values.yaml` or add the `--set nginx.replicas=` flag to the `helm install` command:
+
+```shell
+helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway --set nginx.replicas=5
+```
+
+Second, you can update the `NginxProxy` resource while NGINX is running to modify the `kubernetes.deployment.replicas` field and scale the data plane deployment dynamically:
+
+```shell
+kubectl edit nginxproxies.gateway.nginx.org ngf-proxy-config -n nginx-gateway
+```
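+
+For example, a sketch of the relevant field, assuming the default `ngf-proxy-config` resource created by the Helm chart:
+
+```yaml
+spec:
+  kubernetes:
+    deployment:
+      replicas: 5
+```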
+
+The alternate way to scale the data plane is by creating a new Gateway. This is beneficial when you need distinct configurations, isolation, or separate policies.
+
+For example, if you're routing traffic to a new domain `admin.example.com` and require a different TLS certificate, stricter rate limits, or separate authentication policies, creating a new Gateway could be a good approach.
+
+It allows for safe experimentation with isolated configurations and makes it easier to enforce security boundaries and specific routing rules.
+
+### Scaling the control plane
+
+The control plane builds configuration based on defined Gateway API resources and sends that configuration to the NGINX data planes. With leader election enabled by default, the control plane can be scaled horizontally by running multiple replicas, although only the pod holding the leader lease can write status updates to Gateway API resources.
+
+Scaling the control plane can be beneficial in the following scenarios:
+
+1. _Higher availability_ - When a control plane pod crashes, runs out of memory, or goes down during an upgrade, it can interrupt configuration delivery. By scaling to multiple replicas, another pod can quickly step in and take over, keeping things running smoothly with minimal downtime.
+1. _Faster configuration distribution_ - As the number of connected NGINX instances grows, a single control plane pod may become a bottleneck in handling connections or streaming configuration updates. Scaling the control plane improves concurrency and responsiveness when delivering configuration over gRPC.
+
+To scale the control plane, use the `kubectl scale` command on the control plane deployment to increase or decrease the number of replicas. For example, the following command scales the control plane deployment to 3 replicas:
+
+```shell
+kubectl scale deployment -n nginx-gateway ngf-nginx-gateway-fabric --replicas 3
+```
+
+#### Known risks when scaling the control plane
+
+When scaling the control plane, it's important to understand how status updates are handled for Gateway API resources.
+
+All control plane pods can send NGINX configuration to the data planes. However, only the leader control plane pod is allowed to write status updates to Gateway API resources.
+
+If an NGINX instance connects to a non-leader pod, and an error occurs when applying the config, that error status will not be written to the Gateway object status.
+
+To mitigate the potential for this issue, ensure that the number of NGINX data plane pods equals or exceeds the number of control plane pods.
+
+This increases the likelihood that at least one of the data planes is connected to the leader control plane pod. If an applied configuration has an error, the leader pod will be aware of it and status can still be written.
+
+There is still a chance (however unlikely) that one of the data planes connected to a non-leader has an issue applying its configuration, while the rest of the data planes are successful, which could lead to that error status not being written.
+
+To identify which control plane pod currently holds the leader election lease, retrieve the leases in the same namespace as the control plane pods. For example:
+
+```shell
+kubectl get leases -n nginx-gateway
+```
+
+The current leader lease is held by the pod `ngf-nginx-gateway-fabric-b45ffc8d6-d9z2g`:
+
+```text
+NAME HOLDER AGE
+ngf-nginx-gateway-fabric-leader-election ngf-nginx-gateway-fabric-b45ffc8d6-d9z2g_2ef81ced-f19d-41a0-9fcd-a68d89380d10 16d
+```
+
+---
\ No newline at end of file
diff --git a/content/ngf/how-to/traffic-management/_index.md b/content/ngf/how-to/traffic-management/_index.md
deleted file mode 100644
index f94f827ce..000000000
--- a/content/ngf/how-to/traffic-management/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Traffic management"
-url: /nginx-gateway-fabric/how-to/traffic-management/
-weight: 100
----
diff --git a/content/ngf/how-to/traffic-security/_index.md b/content/ngf/how-to/traffic-security/_index.md
deleted file mode 100644
index eaea647d8..000000000
--- a/content/ngf/how-to/traffic-security/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Traffic security"
-url: /nginx-gateway-fabric/how-to/traffic-security
-weight: 200
----
\ No newline at end of file
diff --git a/content/ngf/how-to/upgrade-apps-without-downtime.md b/content/ngf/how-to/upgrade-apps-without-downtime.md
index e66edd9ba..71e570f0b 100644
--- a/content/ngf/how-to/upgrade-apps-without-downtime.md
+++ b/content/ngf/how-to/upgrade-apps-without-downtime.md
@@ -1,6 +1,6 @@
---
title: Upgrade applications without downtime
-weight: 500
+weight: 600
toc: true
type: how-to
product: NGF
diff --git a/content/ngf/install/_index.md b/content/ngf/install/_index.md
new file mode 100644
index 000000000..64db7a6ee
--- /dev/null
+++ b/content/ngf/install/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Install"
+url: /nginx-gateway-fabric/install/
+weight: 300
+---
diff --git a/content/ngf/installation/building-the-images.md b/content/ngf/install/build-image.md
similarity index 83%
rename from content/ngf/installation/building-the-images.md
rename to content/ngf/install/build-image.md
index 5c81a28a1..7e229f4c1 100644
--- a/content/ngf/installation/building-the-images.md
+++ b/content/ngf/install/build-image.md
@@ -1,17 +1,15 @@
---
title: Build NGINX Gateway Fabric
-weight: 500
+weight: 400
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1431
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1431
---
## Overview
-While most users will install NGINX Gateway Fabric [with Helm]({{< ref "/ngf/installation/installing-ngf/helm.md" >}}) or [Kubernetes manifests]({{< ref "/ngf/installation/installing-ngf/manifests.md" >}}), manually building the [NGINX Gateway Fabric and NGINX images]({{< ref "/ngf/overview/gateway-architecture.md#the-nginx-gateway-fabric-pod" >}}) can be helpful for testing and development purposes. Follow the steps in this document to build the NGINX Gateway Fabric and NGINX images.
-
----
+While most users will install NGINX Gateway Fabric [with Helm]({{< ref "/ngf/install/helm.md" >}}) or [Kubernetes manifests]({{< ref "/ngf/install/manifests.md" >}}), manually building the [NGINX Gateway Fabric and NGINX images]({{< ref "/ngf/overview/gateway-architecture.md#the-nginx-gateway-fabric-pod" >}}) can be helpful for testing and development purposes. Follow the steps in this document to build the NGINX Gateway Fabric and NGINX images.
## Before you begin
@@ -25,7 +23,6 @@ installed on your machine:
If building the NGINX Plus image, you will also need a valid NGINX Plus license certificate (`nginx-repo.crt`) and key (`nginx-repo.key`) in the root of the repo.
----
## Steps
diff --git a/content/ngf/install/deploy-data-plane.md b/content/ngf/install/deploy-data-plane.md
new file mode 100644
index 000000000..114613026
--- /dev/null
+++ b/content/ngf/install/deploy-data-plane.md
@@ -0,0 +1,242 @@
+---
+title: Deploy a Gateway for data plane instances
+weight: 600
+toc: true
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
+---
+
+## Overview
+
+This document describes how to use a Gateway to deploy the NGINX data plane, and how to modify it using an NGINX custom resource.
+
+[A Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) is used to manage all inbound requests, and is a key Gateway API resource.
+
+When a Gateway is attached to a GatewayClass associated with NGINX Gateway Fabric, NGINX Gateway Fabric creates a Service and an NGINX deployment for it. Together, these form the NGINX data plane, which handles requests.
+
+A single GatewayClass can have multiple Gateways: each Gateway will create a separate Service and NGINX deployment.
+
+## Before you begin
+
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
+
+## Create a Gateway
+
+To deploy a Gateway, run the following command:
+
+```yaml
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1
+kind: Gateway
+metadata:
+  name: cafe
+  namespace: default
+spec:
+  gatewayClassName: nginx
+  listeners:
+  - name: http
+    port: 80
+    protocol: HTTP
+EOF
+```
+
+Creating the Gateway provisions an NGINX deployment and Service. Verify that the Service was created:
+
+```shell
+kubectl get service cafe-nginx
+```
+
+```text
+NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
+cafe-nginx   LoadBalancer   10.96.62.121   <pending>     80:30180/TCP   5m2s
+```
+
+The Service type can be changed, as explained in the next section.
+
+## Modify provisioned NGINX instances
+
+The NginxProxy custom resource can modify the provisioning of the Service object and NGINX deployment when a Gateway is created.
+
+{{< note >}} Updating most Kubernetes-related fields in the NginxProxy will trigger a restart of the related resources. {{< /note >}}
+
+An NginxProxy resource is created by default after deploying NGINX Gateway Fabric. This NginxProxy resource is attached to the GatewayClass (created on NGINX Gateway Fabric installation), and
+its settings are applied globally to all Gateways.
+
+Use `kubectl get` and `kubectl describe` to get some more information on the resource:
+
+```shell
+kubectl get nginxproxies -A
+```
+```text
+NAMESPACE NAME AGE
+nginx-gateway ngf-proxy-config 19h
+```
+
+```shell
+kubectl describe nginxproxy -n nginx-gateway ngf-proxy-config
+```
+```text
+Name: ngf-proxy-config
+Namespace: nginx-gateway
+Labels: app.kubernetes.io/instance=ngf
+ app.kubernetes.io/managed-by=Helm
+ app.kubernetes.io/name=nginx-gateway-fabric
+ app.kubernetes.io/version=edge
+ helm.sh/chart=nginx-gateway-fabric-1.6.2
+Annotations: meta.helm.sh/release-name: ngf
+ meta.helm.sh/release-namespace: nginx-gateway
+API Version: gateway.nginx.org/v1alpha2
+Kind: NginxProxy
+Metadata:
+ Creation Timestamp: 2025-05-05T23:01:28Z
+ Generation: 1
+ Resource Version: 2245
+ UID: b545aa9e-74f8-45c0-b472-f14d3cab936f
+Spec:
+ Ip Family: dual
+ Kubernetes:
+ Deployment:
+ Container:
+ Image:
+ Pull Policy: IfNotPresent
+ Repository: nginx-gateway-fabric/nginx
+ Tag: edge
+ Replicas: 1
+ Service:
+ External Traffic Policy: Local
+ Type: LoadBalancer
+Events:
+```
+
+From the information obtained with `kubectl describe` you can see the default settings for the provisioned NGINX Deployment and Service.
+Under `Spec.Kubernetes` you can see a few things:
+- The NGINX container image settings
+- How many NGINX Deployment replicas are specified
+- The type of Service and external traffic policy
+
+{{< note >}} Depending on installation configuration, the default NginxProxy settings may be slightly different from what is shown in the example.
+For more information on NginxProxy and its configurable fields, see the [API reference]({{< ref "/ngf/reference/api.md" >}}). {{< /note >}}
+
+Modify the NginxProxy resource to change the type of Service.
+
+Use `kubectl edit` to modify the default NginxProxy and insert the following under `spec.kubernetes.service`:
+
+```yaml
+type: NodePort
+```
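+
+For example, assuming the default resource name and namespace shown above:
+
+```shell
+kubectl edit nginxproxy ngf-proxy-config -n nginx-gateway
+```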
+
+After saving the changes, use `kubectl get` on the service, and you should see that the service type has changed to `NodePort`:
+
+```shell
+kubectl get service cafe-nginx
+```
+```text
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+cafe-nginx NodePort 10.96.172.204 80:32615/TCP 3h5m
+```
+
+### Set annotations and labels on provisioned resources
+
+While most configuration happens on the NginxProxy resource, there is one exception: to set annotations or labels on the NGINX Deployment or Service, you need to set them on the Gateway that provisioned those resources.
+
+You can use `kubectl edit` on the Gateway and add the following to the `spec`:
+
+```yaml
+infrastructure:
+ annotations:
+ annotationKey: annotationValue
+ labels:
+ labelKey: labelValue
+```
+
+After saving the changes, check the Service and NGINX deployment with `kubectl describe`.
+
+```shell
+kubectl describe deployment cafe-nginx
+```
+```text
+Name: cafe-nginx
+Namespace: default
+CreationTimestamp: Mon, 05 May 2025 16:49:33 -0700
+...
+Pod Template:
+ Labels: app.kubernetes.io/instance=ngf
+ app.kubernetes.io/managed-by=ngf-nginx
+ app.kubernetes.io/name=cafe-nginx
+ gateway.networking.k8s.io/gateway-name=cafe
+ labelKey=labelValue
+ Annotations: annotationKey: annotationValue
+ prometheus.io/port: 9113
+ prometheus.io/scrape: true
+...
+```
+
+```shell
+kubectl describe service cafe-nginx
+```
+```text
+Name: cafe-nginx
+Namespace: default
+Labels: app.kubernetes.io/instance=ngf
+ app.kubernetes.io/managed-by=ngf-nginx
+ app.kubernetes.io/name=cafe-nginx
+ gateway.networking.k8s.io/gateway-name=cafe
+ labelKey=labelValue
+Annotations: annotationKey: annotationValue
+```
+
+## See also
+
+For more guides on routing traffic to applications and more information on data plane configuration, see the following resources:
+
+- [Routing traffic to applications]({{< ref "/ngf/traffic-management/basic-routing.md" >}})
+- [Application routes using HTTP matching conditions]({{< ref "/ngf/traffic-management/advanced-routing.md" >}})
+- [Data plane configuration]({{< ref "/ngf/how-to/data-plane-configuration.md" >}})
+- [API reference]({{< ref "/ngf/reference/api.md" >}})
\ No newline at end of file
diff --git a/content/ngf/installation/installing-ngf/helm.md b/content/ngf/install/helm.md
similarity index 52%
rename from content/ngf/installation/installing-ngf/helm.md
rename to content/ngf/install/helm.md
index dd859a459..7c997fb3e 100644
--- a/content/ngf/installation/installing-ngf/helm.md
+++ b/content/ngf/install/helm.md
@@ -1,24 +1,24 @@
---
-title: Installation with Helm
-weight: 100
+title: Install NGINX Gateway Fabric with Helm
+weight: 200
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1430
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1430
---
## Overview
Learn how to install, upgrade, and uninstall NGINX Gateway Fabric in a Kubernetes cluster using Helm.
----
## Before you begin
-To complete this guide, you'll need to install:
+To complete this guide, you will need:
- [kubectl](https://kubernetes.io/docs/tasks/tools/), a command-line tool for managing Kubernetes clusters.
- [Helm 3.0 or later](https://helm.sh/docs/intro/install/), for deploying and managing applications on Kubernetes.
+- [Add certificates for secure authentication]({{< ref "/ngf/install/secure-certificates.md" >}}) in a production environment.
{{< important >}} If you’d like to use NGINX Plus, some additional setup is also required: {{< /important >}}
@@ -39,19 +39,16 @@ To complete this guide, you'll need to install:
{{< include "/ngf/installation/nginx-plus/nginx-plus-secret.md" >}}
-{{< note >}} For more information on why this is needed and additional configuration options, including how to report to NGINX Instance Manager instead, see the [NGINX Plus Image and JWT Requirement]({{< ref "/ngf/installation/nginx-plus-jwt.md" >}}) document. {{< /note >}}
+{{< note >}} For more information on why this is needed and additional configuration options, including how to report to NGINX Instance Manager instead, see the [NGINX Plus Image and JWT Requirement]({{< ref "/ngf/install/nginx-plus.md" >}}) document. {{< /note >}}
----
-
## Deploy NGINX Gateway Fabric
### Installing the Gateway API resources
{{< include "/ngf/installation/install-gateway-api-resources.md" >}}
----
### Install from the OCI registry
@@ -76,7 +73,7 @@ helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namesp
To install the latest stable release of NGINX Gateway Fabric in the **nginx-gateway** namespace, run the following command:
```shell
-helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --set nginx.image.repository=private-registry.nginx.com/nginx-gateway-fabric/nginx-plus --set nginx.plus=true --set serviceAccount.imagePullSecret=nginx-plus-registry-secret -n nginx-gateway
+helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --set nginx.image.repository=private-registry.nginx.com/nginx-gateway-fabric/nginx-plus --set nginx.plus=true --set nginx.imagePullSecret=nginx-plus-registry-secret -n nginx-gateway
```
{{% /tab %}}
@@ -93,8 +90,6 @@ To wait for the Deployment to be ready, you can either add the `--wait` flag to
kubectl wait --timeout=5m -n nginx-gateway deployment/ngf-nginx-gateway-fabric --for=condition=Available
```
----
-
### Install from sources {#install-from-sources}
If you prefer to install directly from sources, instead of through the OCI helm registry, use the following steps.
@@ -120,7 +115,7 @@ helm install ngf . --create-namespace -n nginx-gateway
To install the chart into the **nginx-gateway** namespace, run the following command:
```shell
-helm install ngf . --set nginx.image.repository=private-registry.nginx.com/nginx-gateway-fabric/nginx-plus --set nginx.plus=true --set serviceAccount.imagePullSecret=nginx-plus-registry-secret -n nginx-gateway
+helm install ngf . --set nginx.image.repository=private-registry.nginx.com/nginx-gateway-fabric/nginx-plus --set nginx.plus=true --set nginx.imagePullSecret=nginx-plus-registry-secret -n nginx-gateway
```
{{% /tab %}}
@@ -135,8 +130,6 @@ To wait for the Deployment to be ready, you can either add the `--wait` flag to
kubectl wait --timeout=5m -n nginx-gateway deployment/ngf-nginx-gateway-fabric --for=condition=Available
```
----
-
### Custom installation options
#### Service type
@@ -146,17 +139,9 @@ By default, the NGINX Gateway Fabric helm chart deploys a LoadBalancer Service.
To use a NodePort Service instead:
```shell
-helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway --set service.type=NodePort
-```
-
-To disable the creation of a Service:
-
-```shell
-helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway --set service.create=false
+helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway --set nginx.service.type=NodePort
```
----
-
#### Experimental features
We support a subset of the additional features provided by the Gateway API experimental channel. To enable the
@@ -168,159 +153,15 @@ helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namesp
{{< note >}} Requires the Gateway APIs installed from the experimental channel. {{< /note >}}
----
#### Examples
You can find several examples of configuration options of the `values.yaml` file in the [helm examples](https://github.com/nginx/nginx-gateway-fabric/tree/v{{< version-ngf >}}/examples/helm) directory.
----
-
### Access NGINX Gateway Fabric
{{< include "/ngf/installation/expose-nginx-gateway-fabric.md" >}}
----
-
-## Upgrade NGINX Gateway Fabric
-
-{{< important >}} NGINX Plus users that are upgrading from version 1.4.0 to 1.5.x need to install an NGINX Plus JWT
-Secret before upgrading. Follow the steps in the [Before you begin](#before-you-begin) section to create the Secret. If you use a different name than the default `nplus-license` name, specify the Secret name by setting `--set nginx.usage.secretName=` when running `helm upgrade`. {{< /important >}}
-
-{{< tip >}} For guidance on zero downtime upgrades, see the [Delay Pod Termination](#configure-delayed-pod-termination-for-zero-downtime-upgrades) section below. {{< /tip >}}
-
-To upgrade NGINX Gateway Fabric and get the latest features and improvements, take the following steps:
-
----
-
-### Upgrade Gateway resources
-
-{{< include "/ngf/installation/upgrade-api-resources.md" >}}
-
----
-
-### Upgrade NGINX Gateway Fabric CRDs
-
-Helm's upgrade process does not automatically upgrade the NGINX Gateway Fabric CRDs (Custom Resource Definitions).
-
-To upgrade the CRDs, take the following steps:
-
-1. {{< include "/ngf/installation/helm/pulling-the-chart.md" >}}
-
-2. Upgrade the CRDs:
-
- ```shell
- kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/crds.yaml
- ```
-
- {{}}Ignore the following warning, as it is expected.{{}}
-
- ```text
- Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply.
- ```
-
----
-
-### Upgrade NGINX Gateway Fabric release
-
-{{< important >}} NGINX Plus users that are upgrading from version 1.4.0 to 1.5.x need to install an NGINX Plus JWT
-Secret before upgrading. Follow the steps in the [Before you begin](#before-you-begin) section to create the Secret. If you use a different name than the default `nplus-license` name, specify the Secret name by setting `--set nginx.usage.secretName=` when running `helm upgrade`. {{< /important >}}
-
-There are two possible ways to upgrade NGINX Gateway Fabric. You can either upgrade from the OCI registry, or download the chart and upgrade from the source.
-
----
-
-#### Upgrade from the OCI registry
-
-To upgrade to the latest stable release of NGINX Gateway Fabric, run:
-
-```shell
-helm upgrade ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric -n nginx-gateway
-```
-
-If needed, replace `ngf` with your chosen release name.
-
----
-
-#### Upgrade from sources
-
-{{< include "/ngf/installation/helm/pulling-the-chart.md" >}}
-
-To upgrade, run: the following command:
-
-```shell
-helm upgrade ngf . -n nginx-gateway
-```
-
-If needed, replace `ngf` with your chosen release name.
-
----
-
-## How to upgrade from NGINX OSS to NGINX Plus
-
-To upgrade from NGINX OSS to NGINX Plus, update the Helm command to include the necessary values for Plus:
-
-{{< note >}} If applicable, replace the F5 Container registry `private-registry.nginx.com` with your internal registry for your NGINX Plus image, and replace `nginx-plus-registry-secret` with your Secret name containing the registry credentials.{{< /note >}}
-
-{{< important >}} Ensure that you [Create the required JWT Secrets]({{< ref "/ngf/installation/nginx-plus-jwt.md" >}}) before installing.{{< /important >}}
-
-```shell
-helm upgrade ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --set nginx.image.repository=private-registry.nginx.com/nginx-gateway-fabric/nginx-plus --set nginx.plus=true --set serviceAccount.imagePullSecret=nginx-plus-registry-secret -n nginx-gateway
-```
-
-If needed, replace `ngf` with your chosen release name.
-
----
-
-## Delay pod termination for zero downtime upgrades {#configure-delayed-pod-termination-for-zero-downtime-upgrades}
-
-{{< include "/ngf/installation/delay-pod-termination/delay-pod-termination-overview.md" >}}
-
-Follow these steps to configure delayed pod termination:
-
-1. Open the `values.yaml` for editing.
-
-1. **Add delayed shutdown hooks**:
-
- - In the `values.yaml` file, add `lifecycle: preStop` hooks to both the `nginx` and `nginx-gateway` container definitions. These hooks instruct the containers to delay their shutdown process, allowing time for connections to close gracefully. Update the `sleep` value to what works for your environment.
-
- ```yaml
- nginxGateway:
- <...>
- lifecycle:
- preStop:
- exec:
- command:
- - /usr/bin/gateway
- - sleep
- - --duration=40s # This flag is optional, the default is 30s
-
- nginx:
- <...>
- lifecycle:
- preStop:
- exec:
- command:
- - /bin/sleep
- - "40"
- ```
-
-1. **Set the termination grace period**:
-
- - {{< include "/ngf/installation/delay-pod-termination/termination-grace-period.md">}}
-
-1. Save the changes.
-
-{{}}
-For additional information on configuring and understanding the behavior of containers and pods during their lifecycle, refer to the following Kubernetes documentation:
-
-- [Container Lifecycle Hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks)
-- [Pod Lifecycle](https://kubernetes.io/docs/concepts/workloads/Pods/Pod-lifecycle/#Pod-termination)
-
-{{}}
-
----
-
## Uninstall NGINX Gateway Fabric
Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your Kubernetes cluster:
@@ -348,8 +189,9 @@ Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your K
- {{< include "/ngf/installation/uninstall-gateway-api-resources.md" >}}
----
+## Next steps
-## Additional configuration
+- [Deploy a Gateway for data plane instances]({{< ref "/ngf/install/deploy-data-plane.md" >}})
+- [Routing traffic to applications]({{< ref "/ngf/traffic-management/basic-routing.md" >}})
For a full list of the Helm Chart configuration parameters, read [the NGINX Gateway Fabric Helm Chart](https://github.com/nginx/nginx-gateway-fabric/blob/v{{< version-ngf >}}/charts/nginx-gateway-fabric/README.md#configuration).
diff --git a/content/ngf/installation/installing-ngf/manifests.md b/content/ngf/install/manifests.md
similarity index 64%
rename from content/ngf/installation/installing-ngf/manifests.md
rename to content/ngf/install/manifests.md
index 50384c205..59ca928b5 100644
--- a/content/ngf/installation/installing-ngf/manifests.md
+++ b/content/ngf/install/manifests.md
@@ -1,23 +1,22 @@
---
-title: Installation with Manifests
+title: Install NGINX Gateway Fabric with Manifests
weight: 200
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1429
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1429
---
## Overview
Learn how to install, upgrade, and uninstall NGINX Gateway Fabric using Kubernetes manifests.
----
-
## Before you begin
To complete this guide, you'll need to install:
- [kubectl](https://kubernetes.io/docs/tasks/tools/), a command-line interface for managing Kubernetes clusters.
+- For production environments, [add certificates for secure authentication]({{< ref "/ngf/install/secure-certificates.md" >}}).
{{< important >}} If you’d like to use NGINX Plus, some additional setup is also required: {{< /important >}}
@@ -38,24 +37,18 @@ To complete this guide, you'll need to install:
{{< include "/ngf/installation/nginx-plus/nginx-plus-secret.md" >}}
-{{< note >}} For more information on why this is needed and additional configuration options, including how to report to NGINX Instance Manager instead, see the [NGINX Plus Image and JWT Requirement]({{< ref "/ngf/installation/nginx-plus-jwt.md" >}}) document. {{< /note >}}
+{{< note >}} For more information on why this is needed and additional configuration options, including how to report to NGINX Instance Manager instead, see the [NGINX Plus Image and JWT Requirement]({{< ref "/ngf/install/nginx-plus.md" >}}) document. {{< /note >}}
----
-
## Deploy NGINX Gateway Fabric
Deploying NGINX Gateway Fabric with Kubernetes manifests takes only a few steps. With manifests, you can configure your deployment exactly how you want. Manifests also make it easy to replicate deployments across environments or clusters, ensuring consistency.
----
-
### Install the Gateway API resources
{{< include "/ngf/installation/install-gateway-api-resources.md" >}}
----
-
### Deploy the NGINX Gateway Fabric CRDs
#### Stable release
@@ -70,8 +63,6 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{
kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/main/deploy/crds.yaml
```
----
-
### Deploy NGINX Gateway Fabric
{{< note >}} By default, NGINX Gateway Fabric is installed in the **nginx-gateway** namespace. You can deploy in another namespace by modifying the manifest files. {{< /note >}}
@@ -90,10 +81,20 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{
{{%tab name="AWS NLB"%}}
-Deploys NGINX Gateway Fabric with NGINX OSS and an AWS Network Load Balancer service.
+Deploys NGINX Gateway Fabric with NGINX OSS.
```shell
-kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/aws-nlb/deploy.yaml
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/default/deploy.yaml
+```
+
+To set up an AWS Network Load Balancer service, add these annotations to your Gateway infrastructure field:
+
+```yaml
+spec:
+ infrastructure:
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-type: "external"
+ service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
```
{{% /tab %}}
@@ -168,8 +169,6 @@ kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{
{{< /tabs >}}
----
-
### Verify the Deployment
To confirm that NGINX Gateway Fabric is running, check the pods in the `nginx-gateway` namespace:
@@ -182,95 +181,13 @@ The output should look similar to this (note that the pod name will include a un
```text
NAME READY STATUS RESTARTS AGE
-nginx-gateway-5d4f4c7db7-xk2kq 2/2 Running 0 112s
+nginx-gateway-5d4f4c7db7-xk2kq 1/1 Running 0 112s
```
----
-
### Access NGINX Gateway Fabric
{{< include "/ngf/installation/expose-nginx-gateway-fabric.md" >}}
----
-
-## Upgrade NGINX Gateway Fabric
-
-{{< important >}} NGINX Plus users that are upgrading from version 1.4.0 to 1.5.x need to install an NGINX Plus JWT
-Secret before upgrading. Follow the steps in the [Before you begin](#before-you-begin) section to create the Secret, which is referenced in the updated deployment manifest for the newest version. {{< /important >}}
-
-{{< tip >}} For guidance on zero downtime upgrades, see the [Delay Pod Termination](#configure-delayed-pod-termination-for-zero-downtime-upgrades) section. {{ tip >}}
-
-To upgrade NGINX Gateway Fabric and get the latest features and improvements, take the following steps:
-
-### Upgrade Gateway API resources
-
-{{< include "/ngf/installation/upgrade-api-resources.md" >}}
-
-### Upgrade NGINX Gateway Fabric CRDs
-
-To upgrade the Custom Resource Definitions (CRDs), run:
-
-```shell
-kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/crds.yaml
-```
-
-### Upgrade NGINX Gateway Fabric deployment
-
-Select the deployment manifest that matches your current deployment from the table above in the [Deploy NGINX Gateway Fabric](#deploy-nginx-gateway-fabric) section and apply it.
-
----
-
-## Delay pod termination for zero downtime upgrades {#configure-delayed-pod-termination-for-zero-downtime-upgrades}
-
-{{< include "/ngf/installation/delay-pod-termination/delay-pod-termination-overview.md" >}}
-
-Follow these steps to configure delayed pod termination:
-
-1. Open the `deploy.yaml` for editing.
-
-1. **Add delayed shutdown hooks**:
-
- - In the `deploy.yaml` file, add `lifecycle: preStop` hooks to both the `nginx` and `nginx-gateway` container definitions. These hooks instruct the containers to delay their shutdown process, allowing time for connections to close gracefully. Update the `sleep` value to what works for your environment.
-
- ```yaml
- <...>
- name: nginx-gateway
- <...>
- lifecycle:
- preStop:
- exec:
- command:
- - /usr/bin/gateway
- - sleep
- - --duration=40s # This flag is optional, the default is 30s
- <...>
- name: nginx
- <...>
- lifecycle:
- preStop:
- exec:
- command:
- - /bin/sleep
- - "40"
- <...>
- ```
-
-1. **Set the termination grace period**:
-
- - {{< include "/ngf/installation/delay-pod-termination/termination-grace-period.md" >}}
-
-1. Save the changes.
-
-{{< see-also >}}
-For additional information on configuring and understanding the behavior of containers and pods during their lifecycle, refer to the following Kubernetes documentation:
-
-- [Container Lifecycle Hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks)
-- [Pod Lifecycle](https://kubernetes.io/docs/concepts/workloads/Pods/Pod-lifecycle/#Pod-termination)
-
-{{< /see-also >}}
-
----
-
## Uninstall NGINX Gateway Fabric
Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your Kubernetes cluster:
@@ -292,3 +209,8 @@ Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your K
1. **Remove the Gateway API resources:**
- {{< include "/ngf/installation/uninstall-gateway-api-resources.md" >}}
+
+## Next steps
+
+- [Deploy a Gateway for data plane instances]({{< ref "/ngf/install/deploy-data-plane.md" >}})
+- [Routing traffic to applications]({{< ref "/ngf/traffic-management/basic-routing.md" >}})
\ No newline at end of file
diff --git a/content/ngf/installation/nginx-plus-jwt.md b/content/ngf/install/nginx-plus.md
similarity index 76%
rename from content/ngf/installation/nginx-plus-jwt.md
rename to content/ngf/install/nginx-plus.md
index fc3241052..4c7eb0e17 100644
--- a/content/ngf/installation/nginx-plus-jwt.md
+++ b/content/ngf/install/nginx-plus.md
@@ -1,10 +1,10 @@
---
-title: NGINX Plus and JWT
+title: Install NGINX Gateway Fabric with NGINX Plus
weight: 300
toc: true
-type: how-to
-product: NGF
-docs: DOCS-000
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
---
## Overview
@@ -13,9 +13,9 @@ NGINX Gateway Fabric with NGINX Plus requires a valid JSON Web Token (JWT) to do
This requirement is part of F5’s broader licensing program and aligns with industry best practices. The JWT will streamline subscription renewals and usage reporting, helping you manage your NGINX Plus subscription more efficiently. The [telemetry](#telemetry) data we collect helps us improve our products and services to better meet your needs.
-The JWT is required for validating your subscription and reporting telemetry data. For environments connected to the internet, telemetry is automatically sent to F5’s licensing endpoint. In offline environments, telemetry is routed through [NGINX Instance Manager](https://docs.nginx.com/nginx-instance-manager/). Usage is reported every hour and on startup whenever NGINX is reloaded.
+The JWT is required for validating your subscription and reporting telemetry data. For environments connected to the internet, telemetry is automatically sent to F5’s licensing endpoint. In offline environments, telemetry is routed through [NGINX Instance Manager]({{< ref "/nim/" >}}). Usage is reported every hour and on startup whenever NGINX is reloaded.
----
+{{< note >}} The following Secrets should be created in the same namespace as the NGINX Gateway Fabric control plane (default: nginx-gateway). The control plane will copy these Secrets into any namespaces where NGINX gets deployed. If you need to update the Secrets, update the originals that you created in the control plane namespace, and the control plane will propagate those updates to all duplicated Secrets. {{< /note >}}
## Set up the JWT
@@ -23,14 +23,10 @@ The JWT needs to be configured before deploying NGINX Gateway Fabric. The JWT wi
{{< include "/ngf/installation/jwt-password-note.md" >}}
----
-
### Download the JWT from MyF5
{{< include "/ngf/installation/nginx-plus/download-jwt.md" >}}
----
-
### Docker Registry Secret
{{< include "/ngf/installation/nginx-plus/docker-registry-secret.md" >}}
@@ -41,20 +37,18 @@ Provide the name of this Secret when installing NGINX Gateway Fabric:
{{%tab name="Helm"%}}
-Specify the Secret name using the `serviceAccount.imagePullSecret` or `serviceAccount.imagePullSecrets` helm value.
+Specify the Secret name using the `nginx.imagePullSecret` or `nginx.imagePullSecrets` helm value.
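+
+For example, a minimal sketch assuming the Secret is named `nginx-plus-registry-secret` and your release is `ngf` (adjust both to match your environment):
+
+```shell
+helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway \
+  --set nginx.plus=true \
+  --set nginx.image.repository=private-registry.nginx.com/nginx-gateway-fabric/nginx-plus \
+  --set nginx.imagePullSecret=nginx-plus-registry-secret
+```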
{{% /tab %}}
{{%tab name="Manifests"%}}
-Specify the Secret name in the `imagePullSecrets` field of the `nginx-gateway` ServiceAccount.
+Specify the Secret name in the `nginx-docker-secret` command-line argument of the `nginx-gateway` container.
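+
+A minimal sketch of how this might look in the `nginx-gateway` container args of the deployment manifest, assuming a Secret named `nginx-plus-registry-secret` (verify the flag against the manifest shipped with your version):
+
+```yaml
+containers:
+- name: nginx-gateway
+  args:
+  # Secret containing the F5 registry credentials (example name)
+  - --nginx-docker-secret=nginx-plus-registry-secret
+```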
{{% /tab %}}
{{< /tabs >}}
----
-
### NGINX Plus Secret
{{< include "/ngf/installation/nginx-plus/nginx-plus-secret.md" >}}
@@ -73,29 +67,11 @@ Specify the Secret name using the `nginx.usage.secretName` helm value.
Specify the Secret name in the `--usage-report-secret` command-line flag on the `nginx-gateway` container.
-You also need to define the proper volume mount to mount the Secret to the nginx container. If it doesn't already exist, add the following volume to the Deployment:
-
-```yaml
-- name: nginx-plus-license
- secret:
- secretName: nplus-license
-```
-
-and the following volume mount to the `nginx` container:
-
-```yaml
-- mountPath: /etc/nginx/license.jwt
- name: nginx-plus-license
- subPath: license.jwt
-```
-
{{% /tab %}}
{{< /tabs >}}
-{{< note >}} If you are reporting to the default licensing endpoint, then you can now proceed with [installing NGINX Gateway Fabric]({{< ref "/ngf/installation/installing-ngf" >}}). Otherwise, follow the steps below to configure reporting to NGINX Instance Manager. {{< /note >}}
-
----
+{{< note >}} If you are reporting to the default licensing endpoint, then you can now proceed with [installing NGINX Gateway Fabric]({{< ref "/ngf/install/" >}}). Otherwise, follow the steps below to configure reporting to NGINX Instance Manager. {{< /note >}}
### Reporting to NGINX Instance Manager {#nim}
@@ -113,18 +89,12 @@ Specify the endpoint using the `nginx.usage.endpoint` helm value.
{{%tab name="Manifests"%}}
-Specify the endpoint in the `--usage-report-endpoint` command-line flag on the `nginx-gateway` container. You also need to add the following line to the `mgmt` block of the `nginx-includes-bootstrap` ConfigMap:
-
-```text
-usage_report endpoint=;
-```
+Specify the endpoint in the `--usage-report-endpoint` command-line flag on the `nginx-gateway` container.
{{% /tab %}}
{{< /tabs >}}
----
-
#### CA and Client certificate/key {#nim-cert}
To configure a CA cert and/or client certificate and key, a few extra steps are needed.
@@ -153,42 +123,13 @@ Specify the CA Secret name using the `nginx.usage.caSecretName` helm value. Spec
Specify the CA Secret name in the `--usage-report-ca-secret` command-line flag on the `nginx-gateway` container. Specify the client Secret name in the `--usage-report-client-ssl-secret` command-line flag on the `nginx-gateway` container.
-You also need to define the proper volume mount to mount the Secrets to the nginx container. Add the following volume to the Deployment:
-
-```yaml
-- name: nginx-plus-usage-certs
- projected:
- sources:
- - secret:
- name: nim-ca
- - secret:
- name: nim-client
-```
-
-and the following volume mounts to the `nginx` container:
-
-```yaml
-- mountPath: /etc/nginx/certs-bootstrap/
- name: nginx-plus-usage-certs
-```
-
-Finally, in the `nginx-includes-bootstrap` ConfigMap, add the following lines to the `mgmt` block:
-
-```text
-ssl_trusted_certificate /etc/nginx/certs-bootstrap/ca.crt;
-ssl_certificate /etc/nginx/certs-bootstrap/tls.crt;
-ssl_certificate_key /etc/nginx/certs-bootstrap/tls.key;
-```
-
{{% /tab %}}
{{< /tabs >}}
-{{< note >}} Once these Secrets are created and configuration options are set, you can now [install NGINX Gateway Fabric]({{< ref "/ngf/installation/installing-ngf" >}}). {{< /note >}}
-
----
+{{< note >}} Once these Secrets are created and configuration options are set, you can now [install NGINX Gateway Fabric]({{< ref "/ngf/install/" >}}). {{< /note >}}
## Installation flags to configure usage reporting {#flags}
@@ -205,14 +146,12 @@ If using Helm, the `nginx.usage` values should be set as necessary:
If using manifests, the following command-line options should be set as necessary on the `nginx-gateway` container:
-- `--usage-report-secret` should be the name of the JWT Secret you created. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway). By default this field is set to `nplus-license`. A [volume mount](#nginx-plus-secret) for this Secret is required for installation.
-- `--usage-report-endpoint` is the endpoint to send the telemetry data to. This is optional, and by default is `product.connect.nginx.com`. Requires [extra configuration](#nim) if specified.
+- `--usage-report-secret` should be the name of the JWT Secret you created. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway). By default this field is set to `nplus-license`.
+- `--usage-report-endpoint` is the endpoint to send the telemetry data to. This is optional, and by default is `product.connect.nginx.com`.
- `--usage-report-resolver` is the nameserver used to resolve the NGINX Plus usage reporting endpoint. This is optional and used with NGINX Instance Manager.
- `--usage-report-skip-verify` disables client verification of the NGINX Plus usage reporting server certificate.
-- `--usage-report-ca-secret` is the name of the Secret containing the NGINX Instance Manager CA certificate. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway). Requires [extra configuration](#nim-cert) if specified.
-- `--usage-report-client-ssl-secret` is the name of the Secret containing the client certificate and key for authenticating with NGINX Instance Manager. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway). Requires [extra configuration](#nim-cert) if specified.
-
----
+- `--usage-report-ca-secret` is the name of the Secret containing the NGINX Instance Manager CA certificate. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway).
+- `--usage-report-client-ssl-secret` is the name of the Secret containing the client certificate and key for authenticating with NGINX Instance Manager. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway).
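+
+As an illustration, a partial sketch of how some of these options might appear in the `nginx-gateway` container args (the Secret names and the endpoint are example values, not defaults you must use):
+
+```yaml
+containers:
+- name: nginx-gateway
+  args:
+  - --usage-report-secret=nplus-license
+  # The following flags are only needed when reporting to NGINX Instance Manager
+  - --usage-report-endpoint=nim.example.com
+  - --usage-report-ca-secret=nim-ca
+  - --usage-report-client-ssl-secret=nim-client
+```
+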
## What’s reported and how it’s protected {#telemetry}
@@ -229,8 +168,6 @@ NGINX Plus reports the following data every hour by default:
- **Usage report timestamps**: Start and end times for each usage report.
- **Kubernetes node details**: Information about Kubernetes nodes.
----
-
### Security and privacy of reported data
All communication between your NGINX Plus instances, NGINX Instance Manager, and F5’s licensing endpoint (`product.connect.nginx.com`) is protected using **SSL/TLS** encryption.
@@ -255,10 +192,13 @@ docker pull private-registry.nginx.com/nginx-gateway-fabric/nginx-plus:{{< versi
Once you have successfully pulled the image, you can tag it as needed, then push it to a different container registry.
----
-
## Alternative installation options
There are alternative ways to get an NGINX Plus image for NGINX Gateway Fabric:
-- [Build the Gateway Fabric image]({{< ref "/ngf/installation/building-the-images.md">}}) describes how to use the source code with an NGINX Plus subscription certificate and key to build an image.
+- [Build the Gateway Fabric image]({{< ref "/ngf/install/build-image.md">}}) describes how to use the source code with an NGINX Plus subscription certificate and key to build an image.
+
+## Next steps
+
+- [Deploy a Gateway for data plane instances]({{< ref "/ngf/install/deploy-data-plane.md" >}})
+- [Routing traffic to applications]({{< ref "/ngf/traffic-management/basic-routing.md" >}})
\ No newline at end of file
diff --git a/content/ngf/install/secure-certificates.md b/content/ngf/install/secure-certificates.md
new file mode 100644
index 000000000..01283eabf
--- /dev/null
+++ b/content/ngf/install/secure-certificates.md
@@ -0,0 +1,208 @@
+---
+title: Add certificates for secure authentication
+weight: 100
+toc: true
+nd-content-type: how-to
+nd-product: NGF
+---
+
+By default, NGINX Gateway Fabric installs self-signed certificates to secure the connection between the NGINX Gateway Fabric control plane and the NGINX data plane pods. These certificates are created by a `cert-generator` job when NGINX Gateway Fabric is first installed.
+
+However, because these certificates are self-signed and will expire after 3 years, we recommend a solution such as [cert-manager](https://cert-manager.io) to create and manage these certificates in a production environment.
+
+This guide explains how to install and use `cert-manager` to secure this connection.
+
+{{< caution >}}
+
+These steps should be completed before you install NGINX Gateway Fabric.
+
+{{< /caution >}}
+
+---
+
+## Before you begin
+
+To complete this guide, you will need the following prerequisites:
+
+- Administrator access to a Kubernetes cluster.
+- [Helm](https://helm.sh) and [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) must be installed locally.
+
+## Install cert-manager
+
+Add the Helm repository:
+
+```shell
+helm repo add jetstack https://charts.jetstack.io
+helm repo update
+```
+
+Install cert-manager:
+
+```shell
+helm install \
+ cert-manager jetstack/cert-manager \
+ --namespace cert-manager \
+ --create-namespace \
+ --set config.apiVersion="controller.config.cert-manager.io/v1alpha1" \
+ --set config.kind="ControllerConfiguration" \
+ --set config.enableGatewayAPI=true \
+ --set crds.enabled=true
+```
+
+This also enables Gateway API features for cert-manager, which can be useful for [securing your workload traffic]({{< ref "/ngf/traffic-security/integrate-cert-manager.md" >}}).
+
+## Create the CA issuer
+
+The first step is to create the CA (certificate authority) issuer.
+
+{{< note >}} This example uses a self-signed Issuer, which should not be used in production environments. For production environments, you should use a real [CA issuer](https://cert-manager.io/docs/configuration/ca/). {{< /note >}}
+
+Create the namespace:
+
+```shell
+kubectl create namespace nginx-gateway
+```
+
+```yaml
+kubectl apply -f - <}}
+
+{{%tab name="Helm"%}}
+
+The full service name is of the format: `<release-name>-nginx-gateway-fabric.<namespace>.svc`.
+
+The default Helm release name used in our installation docs is `ngf`, and the default namespace is `nginx-gateway`, so the `dnsName` should be `ngf-nginx-gateway-fabric.nginx-gateway.svc`.
+
+{{% /tab %}}
+
+{{%tab name="Manifests"%}}
+
+The full service name is of the format: `<service-name>.<namespace>.svc`.
+
+By default, the base service name is `nginx-gateway`, and the namespace is `nginx-gateway`, so the `dnsName` should be `nginx-gateway.nginx-gateway.svc`.
+
+{{% /tab %}}
+
+{{< /tabs >}}
+
+```yaml
+kubectl apply -f - <}}
+
+{{%tab name="Helm"%}}
+
+Specify the Secret name using the `certGenerator.agentTLSSecretName` helm value.
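+
+For example, a sketch assuming the Certificate above writes its Secret to `agent-tls` and you install with the release name `ngf`:
+
+```shell
+helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway \
+  --set certGenerator.agentTLSSecretName=agent-tls
+```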
+
+{{% /tab %}}
+
+{{%tab name="Manifests"%}}
+
+Specify the Secret name using the `agent-tls-secret` command-line argument.
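+
+A minimal sketch of the equivalent flag on the `nginx-gateway` container, assuming the Secret is named `agent-tls` (verify the flag against the manifest shipped with your version):
+
+```yaml
+containers:
+- name: nginx-gateway
+  args:
+  # TLS Secret used for control plane to NGINX Agent communication (example name)
+  - --agent-tls-secret=agent-tls
+```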
+
+{{% /tab %}}
+
+{{< /tabs >}}
+
+## Confirm the Secrets have been created
+
+You should see the Secrets created in the `nginx-gateway` namespace:
+
+```shell
+kubectl -n nginx-gateway get secrets
+```
+
+```text
+NAME               TYPE                DATA   AGE
+agent-tls          kubernetes.io/tls   3      3s
+nginx-gateway-ca   kubernetes.io/tls   3      15s
+server-tls         kubernetes.io/tls   3      8s
+```
+
+You can now [install NGINX Gateway Fabric]({{< ref "/ngf/install/" >}}).
+
+## Next steps
+
+- [Install NGINX Gateway Fabric with Helm]({{< ref "/ngf/install/helm.md" >}})
+- [Install NGINX Gateway Fabric with Manifests]({{< ref "/ngf/install/manifests.md" >}})
+- [Install NGINX Gateway Fabric with NGINX Plus]({{< ref "/ngf/install/nginx-plus.md" >}})
diff --git a/content/ngf/install/upgrade-version.md b/content/ngf/install/upgrade-version.md
new file mode 100644
index 000000000..e8e0f16ef
--- /dev/null
+++ b/content/ngf/install/upgrade-version.md
@@ -0,0 +1,308 @@
+---
+title: Upgrade NGINX Gateway Fabric
+weight: 700
+toc: true
+type: how-to
+product: NGF
+docs: DOCS-0000
+---
+
+This document describes how to upgrade NGINX Gateway Fabric when a new version releases.
+
+It covers the necessary steps for minor versions as well as major versions (such as 1.x to 2.x).
+
+Many of the nuances in upgrade paths relate to how custom resource definitions (CRDs) are managed.
+
+{{< tip >}}
+
+To avoid interruptions, review the [Delay pod termination for zero downtime upgrades](#configure-delayed-pod-termination-for-zero-downtime-upgrades) section.
+
+{{< /tip >}}
+
+
+## Minor NGINX Gateway Fabric upgrades
+
+{{< important >}} NGINX Plus users need a JWT secret before upgrading from version 1.4.0 to 1.5.x.
+
+Follow the steps in [Set up the JWT]({{< ref "/ngf/install/nginx-plus.md#set-up-the-jwt" >}}) to create the Secret.
+
+{{< /important >}}
+
+
+### Upgrade Gateway resources
+
+To upgrade your Gateway API resources, take the following steps:
+
+- Use [Technical specifications]({{< ref "/ngf/reference/technical-specifications.md" >}}) to verify your Gateway API resources are compatible with your NGINX Gateway Fabric version.
+- Review the [release notes](https://github.com/kubernetes-sigs/gateway-api/releases) for any important upgrade-specific information.
+
+To upgrade the Gateway API resources, run the following command:
+
+```shell
+kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v{{< version-ngf >}}" | kubectl apply -f -
+```
+
+If you installed the Gateway API resources from the experimental channel, use this command instead:
+
+```shell
+kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/experimental?ref=v{{< version-ngf >}}" | kubectl apply -f -
+```
+
+### Upgrade NGINX Gateway Fabric CRDs
+
+Run the following command to upgrade the CRDs:
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/crds.yaml
+```
+
+{{< note >}}
+
+Ignore the following warning, as it is expected.
+
+```text
+Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply.
+```
+
+{{< /note >}}
+
+### Upgrade NGINX Gateway Fabric release
+
+{{< tabs name="upgrade-release" >}}
+
+{{% tab name="Helm" %}}
+
+{{< important >}} If you are using NGINX Plus and have a different Secret name than the default `nplus-license` name, specify the Secret name by setting `--set nginx.usage.secretName=` when running `helm upgrade`. {{< /important >}}
+
+To upgrade the release with Helm, you can use the OCI registry, or download the chart and upgrade from the source.
+
+If needed, replace `ngf` with your chosen release name.
+
+**Upgrade from the OCI registry**
+
+```shell
+helm upgrade ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric -n nginx-gateway
+```
+
+**Upgrade from sources**
+
+{{< include "/ngf/installation/helm/pulling-the-chart.md" >}}
+
+To upgrade, run the following command:
+
+```shell
+helm upgrade ngf . -n nginx-gateway
+```
+
+{{% /tab %}}
+
+{{% tab name="Manifests" %}}
+
+Select the deployment manifest that matches your current deployment from the options available in the [Deploy NGINX Gateway Fabric]({{< ref "/ngf/install/manifests.md#deploy-nginx-gateway-fabric-1">}}) section and apply it, as shown in the example below.
+
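+For example, if you originally deployed with the default manifest:
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/default/deploy.yaml
+```
+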
+{{% /tab %}}
+
+{{< /tabs>}}
+
+## Upgrade from v1.x to v2.x
+
+This section provides step-by-step instructions for upgrading NGINX Gateway Fabric from version 1.x to 2.x, highlighting key architectural changes, expected downtime, and important considerations for CRDs.
+
+To upgrade NGINX Gateway Fabric from version 1.x to the new architecture in version 2.x, you must uninstall the existing NGINX Gateway Fabric CRDs and deployment, and perform a fresh installation. This will cause brief downtime during the upgrade process.
+
+{{< note >}} You do not need to uninstall the Gateway API CRDs during the upgrade. These resources are compatible with the new NGINX Gateway Fabric version. {{< /note >}}
+
+### Uninstall NGINX Gateway Fabric v1.x
+
+To remove the previous version 1.x of NGINX Gateway Fabric, follow these steps:
+
+First, uninstall NGINX Gateway Fabric from the `nginx-gateway` namespace with the following command, replacing `ngf` with your release name if it is different:
+
+```shell
+helm uninstall ngf -n nginx-gateway
+```
+
+Afterwards, remove CRDs associated with NGINX Gateway Fabric version 1.x with the following command:
+
+```shell
+kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v1.6.2/deploy/crds.yaml
+```
+
+### Install NGINX Gateway Fabric 2.x
+
+{{< important >}}
+
+Before installing 2.x, we recommend following [Add certificates for secure authentication]({{< ref "/ngf/install/secure-certificates.md" >}}).
+
+By default, NGINX Gateway Fabric installs self-signed certificates, which may be unsuitable for a production environment.
+
+{{< /important >}}
+
+{{< tabs name="install-ngf-2x" >}}
+
+{{%tab name="Helm"%}}
+
+Use the following `helm install` command to install the latest stable NGINX Gateway Fabric release in the `nginx-gateway` namespace. It will also install the CRDs required for the deployment:
+
+```shell
+helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway
+```
+
+For customization options during the Helm installation process, view the [Install NGINX Gateway Fabric with Helm]({{< ref "/ngf/install/helm.md" >}}) topic.
+
+{{% /tab %}}
+
+{{%tab name="Manifests"%}}
+
+Apply the new CRDs with the following command:
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/crds.yaml
+```
+
+Next, install the latest stable release of NGINX Gateway Fabric in the `nginx-gateway` namespace with the following command:
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/deploy/default/deploy.yaml
+```
+
+For customization options during the Manifest installation process, view the [Install NGINX Gateway Fabric with Manifests]({{< ref "/ngf/install/manifests.md" >}}) topic.
+
+{{% /tab %}}
+
+{{< /tabs >}}
+
+### Architecture changes
+
+With this release, NGINX Gateway Fabric adopts a new architecture that separates the control plane and data plane into independent deployments. This separation improves scalability, security, and operational clarity.
+
+The control plane is a Kubernetes controller that watches Gateway API and Kubernetes resources (e.g., Services, Endpoints, Secrets) and dynamically provisions NGINX data plane deployments for each Gateway.
+
+NGINX configurations are generated by the control plane and securely delivered to the data planes via gRPC, using the NGINX Agent. TLS is enabled by default, with optional integration with `cert-manager`.
+
+Each data plane pod runs NGINX alongside the Agent, which applies config updates and handles reloads without shared volumes or signals. This design ensures dynamic, per-Gateway traffic management and operational isolation.
+
+New fields have been added to the `NginxProxy` resource to configure infrastructure-related settings for data plane deployments. The `NginxProxy` resource is now a namespace-scoped resource, instead of a cluster-scoped resource, and can be modified at either the Gateway or GatewayClass level. These new fields provide the flexibility to customize deployment and service configurations.
+
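+As a rough sketch only (the field names shown are illustrative; consult the linked guide and the [API reference]({{< ref "/ngf/reference/api.md" >}}) for the exact schema), an `NginxProxy` resource might set the number of data plane replicas like this:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha2
+kind: NginxProxy
+metadata:
+  name: ngf-proxy-config
+  namespace: nginx-gateway
+spec:
+  kubernetes:
+    deployment:
+      replicas: 2 # example value
+```
+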
+For detailed instructions on how to modify these settings, refer to the [Configure infrastructure-related settings]({{< ref "/ngf/how-to/data-plane-configuration.md#configure-infrastructure-related-settings" >}}) guide.
+
+### Key links for the version 2.x update
+
+- Read more about [modifying data plane configuration]({{< ref "/ngf/how-to/data-plane-configuration.md" >}}).
+- Learn how to [deploy a Gateway for data plane instances]({{< ref "/ngf/install/deploy-data-plane.md" >}}).
+- Add secure [authentication to control plane and data planes]({{< ref "/ngf/install/secure-certificates.md" >}}).
+- Read more about the [architecture changes]({{< ref "/ngf/overview/gateway-architecture.md" >}}).
+- Review the detailed [API reference]({{< ref "/ngf/reference/api.md" >}}).
+
+## Access NGINX Gateway Fabric 1.x documentation
+
+The documentation website is intended for the latest version of NGINX Gateway Fabric.
+
+To review documentation prior to 2.x, check out the desired release branch (such as _release-1.6_):
+
+```shell
+git clone git@github.com:nginx/nginx-gateway-fabric.git
+git checkout release-1.6
+```
+
+To view the documentation with a local web server, run _make watch_ in the _site_ folder:
+
+```shell
+cd site
+make watch
+```
+```text
+Hugo is available and has a version greater than 133. Proceeding with build.
+hugo --bind 0.0.0.0 -p 1313 server --disableFastRender
+Watching for changes in /home//nginx-gateway-fabric/site/{content,layouts,static}
+Watching for config changes in /home//nginx-gateway-fabric/site/config/_default, /home//nginx-gateway-fabric/site/config/development, /home//nginx-gateway-fabric/site/go.mod
+Start building sites …
+hugo v0.135.0-f30603c47f5205e30ef83c70419f57d7eb7175ab linux/amd64 BuildDate=2024-09-27T13:17:08Z VendorInfo=gohugoio
+
+
+ | EN
+-------------------+------
+ Pages | 72
+ Paginator pages | 0
+ Non-page files | 0
+ Static files | 176
+ Processed images | 0
+ Aliases | 9
+ Cleaned | 0
+
+Built in 213 ms
+Environment: "development"
+Serving pages from disk
+Web Server is available
+```
+
+You can then follow [this localhost link](http://localhost:1313/nginx-gateway-fabric/) for 1.x NGINX Gateway Fabric documentation.
+
+## Upgrade from NGINX Open Source to NGINX Plus
+
+{{< important >}}
+
+Ensure that you [Set up the JWT]({{< ref "/ngf/install/nginx-plus.md#set-up-the-jwt" >}}) before upgrading. These instructions only apply to Helm.
+
+{{< /important >}}
+
+To upgrade from NGINX Open Source to NGINX Plus, update the Helm command to include the necessary values for Plus:
+
+{{< note >}} If applicable:
+
+- Replace the F5 Container registry `private-registry.nginx.com` with your internal registry for your NGINX Plus image
+- Replace `nginx-plus-registry-secret` with your Secret name containing the registry credentials
+- Replace `ngf` with your chosen release name.
+{{< /note >}}
+
+
+```shell
+helm upgrade ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --set nginx.image.repository=private-registry.nginx.com/nginx-gateway-fabric/nginx-plus --set nginx.plus=true --set nginx.imagePullSecret=nginx-plus-registry-secret -n nginx-gateway
+```
+
+## Delay pod termination for zero downtime upgrades {#configure-delayed-pod-termination-for-zero-downtime-upgrades}
+
+{{< include "/ngf/installation/delay-pod-termination/delay-pod-termination-overview.md" >}}
+
+Follow these steps to configure delayed pod termination:
+
+1. Open the `values.yaml` for editing.
+
+1. **Add delayed shutdown hooks**:
+
+ - In the `values.yaml` file, add `lifecycle: preStop` hooks to both the `nginx` and `nginx-gateway` container definitions. These hooks instruct the containers to delay their shutdown process, allowing time for connections to close gracefully. Update the `sleep` value to what works for your environment.
+
+ ```yaml
+ nginxGateway:
+ <...>
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - /usr/bin/gateway
+ - sleep
+ - --duration=40s # This flag is optional, the default is 30s
+
+ nginx:
+ <...>
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - /bin/sleep
+ - "40"
+ ```
+
+1. **Set the termination grace period**:
+
+ - {{< include "/ngf/installation/delay-pod-termination/termination-grace-period.md">}}
+
+1. Save the changes.
+
+{{< see-also >}}
+For additional information on configuring and understanding the behavior of containers and pods during their lifecycle, refer to the following Kubernetes documentation:
+
+- [Container Lifecycle Hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks)
+- [Pod Lifecycle](https://kubernetes.io/docs/concepts/workloads/Pods/Pod-lifecycle/#Pod-termination)
+
+{{< /see-also >}}
\ No newline at end of file
diff --git a/content/ngf/installation/_index.md b/content/ngf/installation/_index.md
deleted file mode 100644
index b49ab026b..000000000
--- a/content/ngf/installation/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Installation"
-url: /nginx-gateway-fabric/installation/
-weight: 300
----
diff --git a/content/ngf/installation/installing-ngf/_index.md b/content/ngf/installation/installing-ngf/_index.md
deleted file mode 100644
index 814e9ae02..000000000
--- a/content/ngf/installation/installing-ngf/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Install NGINX Gateway Fabric"
-url: /nginx-gateway-fabric/installation/installing-ngf/
-weight: 100
----
diff --git a/content/ngf/monitoring/_index.md b/content/ngf/monitoring/_index.md
new file mode 100644
index 000000000..8f7bfadcc
--- /dev/null
+++ b/content/ngf/monitoring/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Monitoring"
+url: /nginx-gateway-fabric/monitoring/
+weight: 600
+---
diff --git a/content/ngf/monitoring/dashboard.md b/content/ngf/monitoring/dashboard.md
new file mode 100644
index 000000000..56cfc31d2
--- /dev/null
+++ b/content/ngf/monitoring/dashboard.md
@@ -0,0 +1,55 @@
+---
+title: Access the NGINX Plus dashboard
+weight: 300
+toc: true
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1417
+---
+
+This topic describes how to view the NGINX Plus dashboard to see real-time metrics.
+
+The NGINX Plus dashboard offers a real-time live activity monitoring interface that shows key load and performance metrics of your server infrastructure.
+
+The dashboard is enabled by default for NGINX Gateway Fabric deployments that use NGINX Plus as the data plane, and is available on port 8765.
+
+## Connect to the dashboard
+
+To access the dashboard, you will first need to forward connections to port 8765 on your local machine to port 8765 on the NGINX Plus pod (replace `<nginx-plus-pod>` with the actual name of the pod).
+
+```shell
+kubectl port-forward <nginx-plus-pod> 8765:8765 -n <namespace>
+```
+
+Afterwards, use a browser to access [http://127.0.0.1:8765/dashboard.html](http://127.0.0.1:8765/dashboard.html) to view the dashboard.
+
+The dashboard will look like this:
+
+{{< img src="/ngf/img/nginx-plus-dashboard.png" alt="">}}
+
+{{< note >}} The [API](https://nginx.org/en/docs/http/ngx_http_api_module.html) used by the dashboard for metrics is also accessible using the `/api` path. {{< /note >}}
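+
+For example, with the port-forward above in place, you can query the API root to list the available API versions, a quick way to confirm the endpoint is reachable (output varies by NGINX Plus version):
+
+```shell
+curl http://127.0.0.1:8765/api/
+```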
+
+### Configure dashboard access through NginxProxy
+
+To access the NGINX Plus dashboard from sources other than the default `127.0.0.1`, use the NginxProxy resource to allow access from other IP addresses or CIDR blocks.
+
+The following example configuration allows access to the NGINX Plus dashboard from the IP Addresses `192.0.2.8` and
+`192.0.2.0` and the CIDR block `198.51.100.0/24`:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha1
+kind: NginxProxy
+metadata:
+ name: ngf-proxy-config
+spec:
+ nginxPlus:
+ allowedAddresses:
+ - type: IPAddress
+ value: 192.0.2.8
+ - type: IPAddress
+ value: 192.0.2.0
+ - type: CIDR
+ value: 198.51.100.0/24
+```
+
+For more information on configuring the NginxProxy resource, visit the [data plane configuration]({{< ref "/ngf/how-to/data-plane-configuration.md" >}}) document.
\ No newline at end of file
diff --git a/content/ngf/how-to/monitoring/prometheus.md b/content/ngf/monitoring/prometheus.md
similarity index 57%
rename from content/ngf/how-to/monitoring/prometheus.md
rename to content/ngf/monitoring/prometheus.md
index 37253b3d5..369789dcd 100644
--- a/content/ngf/how-to/monitoring/prometheus.md
+++ b/content/ngf/monitoring/prometheus.md
@@ -2,24 +2,22 @@
title: Monitoring with Prometheus and Grafana
weight: 200
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1418
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1418
---
This document describes how to monitor NGINX Gateway Fabric using Prometheus and Grafana. It explains installation and configuration, as well as what metrics are available.
----
-
## Overview
NGINX Gateway Fabric metrics are displayed in [Prometheus](https://prometheus.io/) format. These metrics are served through a metrics server orchestrated by the controller-runtime package on HTTP port `9113`. When installed, Prometheus automatically scrapes this port and collects metrics. [Grafana](https://grafana.com/) can be used for rich visualization of these metrics.
{{< call-out "important" "Security note for metrics" >}}
+
Metrics are served over HTTP by default. Enabling HTTPS will secure the metrics endpoint with a self-signed certificate. When using HTTPS, adjust the Prometheus Pod scrape settings by adding the `insecure_skip_verify` flag to handle the self-signed certificate. For further details, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config).
-{{< /call-out >}}
----
+{{< /call-out >}}
## Installing Prometheus and Grafana
@@ -41,8 +39,6 @@ kubectl port-forward -n monitoring svc/prometheus-server 9090:80 &
Visit [http://127.0.0.1:9090](http://127.0.0.1:9090) to view the dashboard.
----
-
### Grafana
```shell
@@ -65,8 +61,6 @@ The username for login is `admin`. The password can be acquired by running:
kubectl get secret -n monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
----
-
#### Configuring Grafana
In the Grafana UI menu, go to `Connections` then `Data sources`. Add your Prometheus service (`http://prometheus-server.monitoring.svc`) as a data source.
@@ -75,33 +69,62 @@ Download the following sample dashboard and Import as a new Dashboard in the Gra
- {{< download "ngf/grafana-dashboard.json" "ngf-grafana-dashboard.json" >}}
----
-
## Available metrics in NGINX Gateway Fabric
NGINX Gateway Fabric provides a variety of metrics for monitoring and analyzing performance. These metrics are categorized as follows:
### NGINX/NGINX Plus metrics
-NGINX metrics cover specific NGINX operations such as the total number of accepted client connections. For a complete list of available NGINX/NGINX Plus metrics, refer to the [NGINX Prometheus Exporter developer docs](https://github.com/nginx/nginx-prometheus-exporter#exported-metrics).
-
-These metrics use the `nginx_gateway_fabric` namespace and include the `class` label, indicating the NGINX Gateway class. For example, `nginx_gateway_fabric_connections_accepted{class="nginx"}`.
-
----
+NGINX metrics include NGINX-specific data such as the total number of accepted client connections. These metrics are
+collected through NGINX Agent and are reported by each NGINX Pod.
+
+NGINX Gateway Fabric currently supports a subset of all metrics available through NGINX OSS and Plus. Listed below are
+the supported metrics along with a small accompanying description.
+
+Metrics provided by NGINX Open Source include:
+- `nginx_http_connection_count_connections`: The current number of connections.
+- `nginx_http_connections_total`: The total number of connections, since NGINX was last started or reloaded.
+- `nginx_http_request_count_requests`: The total number of client requests received, since the last collection interval.
+- `nginx_http_requests_total`: The total number of client requests received, since NGINX was last started or reloaded.
+
+In addition to the previous metrics provided by NGINX Open Source, NGINX Plus includes:
+- `nginx_config_reloads_total`: The total number of NGINX config reloads.
+- `nginx_http_response_count_responses`: The total number of HTTP responses sent to clients since the last collection interval, grouped by status code range.
+- `nginx_http_response_status_responses_total`: The total number of responses since NGINX was last started or reloaded, grouped by status code range.
+- `nginx_http_request_discarded_requests_total`: The total number of requests completed without sending a response.
+- `nginx_http_request_processing_count_requests`: The number of client requests that are currently being processed.
+- `nginx_http_request_byte_io_bytes_total`: The total number of HTTP byte IO.
+- `nginx_http_upstream_keepalive_count_connections`: The current number of idle keepalive connections per HTTP upstream.
+- `nginx_http_upstream_peer_connection_count_connections`: The average number of active connections per HTTP upstream peer.
+- `nginx_http_upstream_peer_byte_io_bytes_total`: The total number of byte IO per HTTP upstream peer.
+- `nginx_http_upstream_peer_count_peers`: The current count of peers on the HTTP upstream grouped by state.
+- `nginx_http_upstream_peer_fails_attempts_total`: The total number of unsuccessful attempts to communicate with the HTTP upstream peer.
+- `nginx_http_upstream_peer_header_time_milliseconds`: The average time to get the response header from the HTTP upstream peer.
+- `nginx_http_upstream_peer_health_checks_requests_total`: The total number of health check requests made to a HTTP upstream peer.
+- `nginx_http_upstream_peer_requests_total`: The total number of client requests forwarded to the HTTP upstream peer.
+- `nginx_http_upstream_peer_response_time_milliseconds`: The average time to get the full response from the HTTP upstream peer.
+- `nginx_http_upstream_peer_responses_total`: The total number of responses obtained from the HTTP upstream peer grouped by status range.
+- `nginx_http_upstream_peer_state_is_deployed`: Current state of an upstream peer in deployment.
+- `nginx_http_upstream_peer_unavailables_requests_total`: Number of times the server became unavailable for client requests (“unavail”).
+- `nginx_http_upstream_queue_limit_requests`: The maximum number of requests that can be in the queue at the same time.
+- `nginx_http_upstream_queue_overflows_responses_total`: The total number of requests rejected due to the queue overflow.
+- `nginx_http_upstream_queue_usage_requests`: The current number of requests in the queue.
+- `nginx_http_upstream_zombie_count_is_deployed`: The current number of upstream peers removed from the group but still processing active client requests.
+- `nginx_slab_page_free_pages`: The current number of free memory pages.
+- `nginx_slab_page_usage_pages`: The current number of used memory pages.
+- `nginx_slab_slot_allocations_total`: The number of attempts to allocate memory of specified size.
+- `nginx_slab_slot_free_slots`: The current number of free memory slots.
+- `nginx_slab_slot_usage_slots`: The current number of used memory slots.
+- `nginx_ssl_certificate_verify_failures_certificates_total`: The total number of SSL certificate verification failures.
+- `nginx_ssl_handshakes_total`: The total number of SSL handshakes.
### NGINX Gateway Fabric metrics
Metrics specific to NGINX Gateway Fabric include:
-- `nginx_reloads_total`: Counts successful NGINX reloads.
-- `nginx_reload_errors_total`: Counts NGINX reload failures.
-- `nginx_stale_config`: Indicates if NGINX Gateway Fabric couldn't update NGINX with the latest configuration, resulting in a stale version.
-- `nginx_reloads_milliseconds`: Time in milliseconds for NGINX reloads.
- `event_batch_processing_milliseconds`: Time in milliseconds to process batches of Kubernetes events.
-All these metrics are under the `nginx_gateway_fabric` namespace and include a `class` label set to the Gateway class of NGINX Gateway Fabric. For example, `nginx_gateway_fabric_nginx_reloads_total{class="nginx"}`.
-
----
+All these metrics are under the `nginx_gateway_fabric` namespace and include a `class` label set to the GatewayClass of NGINX Gateway Fabric. For example, `nginx_gateway_fabric_event_batch_processing_milliseconds_sum{class="nginx"}`.
### Controller-runtime metrics
@@ -111,8 +134,6 @@ Provided by the [controller-runtime](https://github.com/kubernetes-sigs/controll
- Go runtime metrics such as the number of Go routines, garbage collection duration, and Go version.
- Controller-specific metrics, including reconciliation errors per controller, length of the reconcile queue, and reconciliation latency.
----
-
## Change the default metrics configuration
You can configure monitoring metrics for NGINX Gateway Fabric using Helm or Manifests.
@@ -121,14 +142,10 @@ You can configure monitoring metrics for NGINX Gateway Fabric using Helm or Mani
If you're setting up NGINX Gateway Fabric with Helm, you can adjust the `metrics.*` parameters to fit your needs. For detailed options and instructions, see the [Helm README](https://github.com/nginx/nginx-gateway-fabric/blob/v{{< version-ngf >}}/charts/nginx-gateway-fabric/README.md).
----
-
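+
+For example, a hedged sketch that changes the metrics port during an upgrade (assuming the chart exposes a `metrics.port` parameter, as described in the Helm README):
+
+```shell
+helm upgrade ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric -n nginx-gateway --set metrics.port=9114
+```
+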
### Using Kubernetes manifests
For setups using Kubernetes manifests, change the metrics configuration by editing the NGINX Gateway Fabric manifest that you want to deploy. You can find some examples in the [deploy](https://github.com/nginx/nginx-gateway-fabric/tree/v{{< version-ngf >}}/deploy) directory.
----
-
#### Disabling metrics
If you need to disable metrics:
@@ -170,8 +187,6 @@ To change the default port for metrics:
<...>
```
----
-
#### Enabling HTTPS for metrics
For enhanced security with HTTPS:
diff --git a/content/ngf/how-to/monitoring/tracing.md b/content/ngf/monitoring/tracing.md
similarity index 85%
rename from content/ngf/how-to/monitoring/tracing.md
rename to content/ngf/monitoring/tracing.md
index 34a0dfc60..fe241cad5 100644
--- a/content/ngf/how-to/monitoring/tracing.md
+++ b/content/ngf/monitoring/tracing.md
@@ -2,15 +2,13 @@
title: Configure tracing
weight: 100
toc: true
-type: how-to
-product: NGF
-docs: DOCS-000
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
---
This guide explains how to enable tracing on HTTPRoutes in NGINX Gateway Fabric using the OpenTelemetry Collector. Jaeger is used to process and collect the traces.
----
-
## Overview
NGINX Gateway Fabric supports tracing using [OpenTelemetry](https://opentelemetry.io/).
@@ -19,8 +17,6 @@ The official [NGINX OpenTelemetry Module](https://github.com/nginxinc/nginx-otel
This collector can then export data to one or more upstream collectors like [Jaeger](https://www.jaegertracing.io/), [DataDog](https://docs.datadoghq.com/tracing/), and many others. This is called the [Agent model](https://opentelemetry.io/docs/collector/deployment/agent/).
----
-
## Install the collectors
The first step is to install the collectors. NGINX Gateway Fabric will be configured to export to the OpenTelemetry Collector, which is configured to export to Jaeger. This model allows the visualization collector (Jaeger) to be swapped with something else, or to add more collectors without needing to reconfigure NGINX Gateway Fabric. It is also possible to configure NGINX Gateway Fabric to export directly to Jaeger.
@@ -64,25 +60,25 @@ kubectl port-forward -n tracing svc/jaeger 16686:16686 &
Visit [http://127.0.0.1:16686](http://127.0.0.1:16686) to view the dashboard.
----
-
## Enable tracing
To enable tracing, you must configure two resources:
-- `NginxProxy`: This resource contains global settings relating to the NGINX data plane. It is created and managed by the [cluster operator](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/), and is referenced in the `parametersRef` field of the GatewayClass. This resource can be created and linked when we install NGINX Gateway Fabric using its helm chart, or it can be added later. This guide installs the resource using the helm chart, but the resource can also be created for an existing deployment.
+- `NginxProxy`: This resource contains global settings relating to the NGINX data plane. It is created and managed by the [cluster operator](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/), and is referenced in the `parametersRef` field of the GatewayClass. By default, an `NginxProxy` resource is created in the same namespace where NGINX Gateway Fabric is installed, attached to the GatewayClass. You can set configuration options in the `nginx` Helm value section, and the resource will be created and attached using the set values.
+
+When installed using the Helm chart, the NginxProxy resource is named `<release-name>-proxy-config` and is created in the release namespace.
The `NginxProxy` resource contains configuration for the collector, and applies to all Gateways and routes under the GatewayClass. It does not enable tracing, but is a prerequisite to the next piece of configuration.
+{{< note >}} You can also override the tracing configuration for a particular Gateway by manually creating and attaching specific `NginxProxy` resources to target the different Gateways. This guide covers the global tracing configuration only. {{< /note >}}
+
- `ObservabilityPolicy`: This resource is a [Direct PolicyAttachment](https://gateway-api.sigs.k8s.io/reference/policy-attachment/) that targets HTTPRoutes or GRPCRoutes. It is created by the [application developer](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/) and enables tracing for a specific route or routes. It requires the `NginxProxy` resource to exist in order to complete the tracing configuration.
For all the possible configuration options for these resources, see the [API reference]({{< ref "/ngf/reference/api.md" >}}).
----
-
### Install NGINX Gateway Fabric with global tracing configuration
-{{< note >}}Ensure that you [install the Gateway API resources]({{< ref "/ngf/installation/installing-ngf/helm.md#installing-the-gateway-api-resources" >}}).{{< /note >}}
+{{< note >}} Ensure that you [install the Gateway API resources]({{< ref "/ngf/install/helm.md#installing-the-gateway-api-resources" >}}).{{< /note >}}
Referencing the previously deployed collector, create the following `values.yaml` file for installing NGINX Gateway Fabric:
@@ -110,11 +106,11 @@ helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namesp
You should see the following configuration:
```shell
-kubectl get nginxproxies.gateway.nginx.org ngf-proxy-config -o yaml
+kubectl get nginxproxies.gateway.nginx.org ngf-proxy-config -n nginx-gateway -o yaml
```
```yaml
-apiVersion: gateway.nginx.org/v1alpha1
+apiVersion: gateway.nginx.org/v1alpha2
kind: NginxProxy
metadata:
name: ngf-proxy-config
@@ -164,23 +160,14 @@ status:
type: ResolvedRefs
```
-If you already have NGINX Gateway Fabric installed, then you can create the `NginxProxy` resource and link it to the GatewayClass `parametersRef`:
+If you already have NGINX Gateway Fabric installed, then you can modify the `NginxProxy` resource to include the tracing configuration:
```shell
-kubectl edit gatewayclasses.gateway.networking.k8s.io nginx
-```
-
-Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
-```text
-GW_IP=XXX.YYY.ZZZ.III
-GW_PORT=
+kubectl edit nginxproxies.gateway.nginx.org ngf-proxy-config -n nginx-gateway
```
You can now create the application, route, and tracing policy.
----
-
### Create the application and route
Create the basic **coffee** application:
@@ -257,6 +244,15 @@ spec:
EOF
```
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+Save the public IP address and port of the NGINX Service into shell variables:
+
+```text
+GW_IP=XXX.YYY.ZZZ.III
+GW_PORT=
+```
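+
+If you are unsure of these values, you can look them up from the Service that NGINX Gateway Fabric provisioned for the Gateway; the Service name and namespace depend on your Gateway, so this sketch assumes the `default` namespace:
+
+```shell
+kubectl get svc -n default
+```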
+
Check that traffic can flow to the application.
{{< note >}} If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve. {{< /note >}}
@@ -275,8 +271,6 @@ URI: /coffee
You shouldn't see any information from the [Jaeger dashboard](http://127.0.0.1:16686) yet: you need to create the `ObservabilityPolicy`.
----
-
### Create the ObservabilityPolicy
To enable tracing for the coffee HTTPRoute, create the following policy:
@@ -345,9 +339,8 @@ Select a trace to view the attributes.
The trace includes the attribute from the global NginxProxy resource as well as the attribute from the ObservabilityPolicy.
----
-
## See also
+- [Data plane configuration]({{< ref "/ngf/how-to/data-plane-configuration.md" >}}): learn how to dynamically update the NGINX Gateway Fabric global data plane configuration, including how to override the telemetry configuration for a particular Gateway.
- [Custom policies]({{< ref "/ngf/overview/custom-policies.md" >}}): learn about how NGINX Gateway Fabric custom policies work.
- [API reference]({{< ref "/ngf/reference/api.md" >}}): all configuration fields for the policies mentioned in this guide
diff --git a/content/ngf/overview/custom-policies.md b/content/ngf/overview/custom-policies.md
index 5aeb99fce..02c04d899 100644
--- a/content/ngf/overview/custom-policies.md
+++ b/content/ngf/overview/custom-policies.md
@@ -2,9 +2,9 @@
title: Custom policies
weight: 600
toc: true
-type: reference
-product: NGF
-docs: "DOCS-000"
+nd-content-type: reference
+nd-product: NGF
+nd-docs: DOCS-000
---
## Overview
@@ -19,9 +19,9 @@ The following table summarizes NGINX Gateway Fabric custom policies:
| Policy | Description | Attachment Type | Supported Target Object(s) | Supports Multiple Target Refs | Mergeable | API Version |
|---------------------------------------------------------------------------------------------|---------------------------------------------------------|-----------------|-------------------------------|-------------------------------|-----------|-------------|
-| [ClientSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/client-settings.md" >}}) | Configure connection behavior between client and NGINX | Inherited | Gateway, HTTPRoute, GRPCRoute | No | Yes | v1alpha1 |
-| [ObservabilityPolicy]({{< ref "/ngf/how-to/monitoring/tracing.md" >}}) | Define settings related to tracing, metrics, or logging | Direct | HTTPRoute, GRPCRoute | Yes | No | v1alpha2 |
-| [UpstreamSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/upstream-settings.md" >}}) | Configure connection behavior between NGINX and backend | Direct | Service | Yes | Yes | v1alpha1 |
+| [ClientSettingsPolicy]({{< ref "/ngf/traffic-management/client-settings.md" >}}) | Configure connection behavior between client and NGINX | Inherited | Gateway, HTTPRoute, GRPCRoute | No | Yes | v1alpha1 |
+| [ObservabilityPolicy]({{< ref "/ngf/monitoring/tracing.md" >}}) | Define settings related to tracing, metrics, or logging | Direct | HTTPRoute, GRPCRoute | Yes | No | v1alpha2 |
+| [UpstreamSettingsPolicy]({{< ref "/ngf/traffic-management/upstream-settings.md" >}}) | Configure connection behavior between NGINX and backend | Direct | Service | Yes | Yes | v1alpha1 |
{{< /bootstrap-table >}}
@@ -29,8 +29,6 @@ The following table summarizes NGINX Gateway Fabric custom policies:
If attaching a Policy to a Route, that Route must not share a hostname:port/path combination with any other Route that is not referenced by the same Policy. If it does, the Policy will be rejected. This is because the Policy would end up affecting other Routes that it is not attached to.
{{< /important >}}
----
-
## Terminology
- _Attachment Type_. How the policy attaches to an object. Attachment type can be "direct" or "inherited".
@@ -38,8 +36,6 @@ If attaching a Policy to a Route, that Route must not share a hostname:port/path
- _Supports Multiple Target Refs_. Whether a single policy can target multiple objects.
- _Mergeable_. Whether policies that target the same object can be merged.
----
-
## Direct Policy Attachment
A Direct Policy Attachment is a policy that references a single object, such as a Gateway or HTTPRoute. It is tightly bound to one instance of a particular Kind within a single Namespace or an instance of a single Kind at the cluster-scope. It affects _only_ the object specified in its TargetRef.
@@ -50,8 +46,6 @@ This diagram uses a fictional retry policy to show how Direct Policy Attachment
The policy targets the HTTPRoute `baz` and sets `retries` to `3` and `timeout` to `60s`. Since this policy is a Direct Policy Attachment, its settings are only applied to the `baz` HTTPRoute.
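
As a sketch, such a policy might be written like this. The `RetryPolicy` kind, its API group, and its fields are fictional, used only to illustrate the attachment pattern; they are not NGINX Gateway Fabric CRDs.

```yaml
# Fictional policy for illustration only -- not an actual NGINX Gateway Fabric CRD.
apiVersion: policies.example.com/v1alpha1
kind: RetryPolicy
metadata:
  name: baz-retry-policy
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: baz
  retries: 3
  timeout: 60s
```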
----
-
## Inherited Policy Attachment
Inherited Policy Attachment allows settings to cascade down a hierarchy. The hierarchy for Gateway API resources looks like this:
@@ -80,8 +74,6 @@ The `baz-policy` and `foo-policy` are attached to the `baz` and `foo` HTTPRoutes
Since the `foo-policy` only defines the `retries` setting, it still inherits the `timeout` setting from `dev-policy`.
The `bar` HTTPRoute has no policy attached to it and inherits all the settings from `dev-policy`.
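
Sketched with the same fictional policy, the hierarchy described above could look roughly like this. The Gateway name and API group are placeholders for illustration only.

```yaml
# Fictional policies for illustration only.
apiVersion: policies.example.com/v1alpha1
kind: RetryPolicy
metadata:
  name: dev-policy
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: dev   # placeholder Gateway name
  retries: 3
  timeout: 60s
---
apiVersion: policies.example.com/v1alpha1
kind: RetryPolicy
metadata:
  name: foo-policy
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: foo
  retries: 5   # overrides the inherited retries; timeout is still inherited from dev-policy
```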
----
-
## Merging Policies
With some NGINX Gateway Fabric Policies, it is possible to create multiple policies that target the same resource as long as the fields in those policies do not conflict.
@@ -129,8 +121,6 @@ However, if both policies had the `retries` field set, then the policies cannot
If a policy conflicts with a configured policy, NGINX Gateway Fabric will set the policy `Accepted` status to false with a reason of `Conflicted`. See [Policy Status](#policy-status) for more details.
----
-
## Policy Status
NGINX Gateway Fabric sets the [PolicyStatus](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.PolicyStatus) on all policies.
diff --git a/content/ngf/overview/gateway-api-compatibility.md b/content/ngf/overview/gateway-api-compatibility.md
index d65258c07..fc04731cc 100644
--- a/content/ngf/overview/gateway-api-compatibility.md
+++ b/content/ngf/overview/gateway-api-compatibility.md
@@ -2,9 +2,9 @@
title: Gateway API Compatibility
weight: 200
toc: true
-type: reference
-product: NGF
-docs: DOCS-1412
+nd-content-type: reference
+nd-product: NGF
+nd-docs: DOCS-1412
---
Learn which Gateway API resources NGINX Gateway Fabric supports and to which level.
@@ -28,8 +28,6 @@ Learn which Gateway API resources NGINX Gateway Fabric supports and to which lev
{{< /bootstrap-table >}}
----
-
## Terminology
Gateway API features has three [support levels](https://gateway-api.sigs.k8s.io/concepts/conformance/#2-support-levels): Core, Extended and Implementation-specific. We use the following terms to describe the support status for each level and resource field:
@@ -42,7 +40,6 @@ Gateway API features has three [support levels](https://gateway-api.sigs.k8s.io/
{{< note >}} It's possible that NGINX Gateway Fabric will never support some resources or fields of the Gateway API. They will be documented on a case by case basis. {{< /note >}}
----
## Resources
@@ -78,8 +75,6 @@ NGINX Gateway Fabric supports a single GatewayClass resource configured with the
- `SupportedVersion/True/SupportedVersion`
- `SupportedVersion/False/UnsupportedVersion`
----
-
### Gateway
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -90,7 +85,7 @@ NGINX Gateway Fabric supports a single GatewayClass resource configured with the
{{< /bootstrap-table >}}
-NGINX Gateway Fabric supports a single Gateway resource. The Gateway resource must reference NGINX Gateway Fabric's corresponding GatewayClass.
+NGINX Gateway Fabric supports multiple Gateway resources. The Gateway resources must reference NGINX Gateway Fabric's corresponding GatewayClass.
See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) command for more information.
@@ -98,6 +93,10 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `spec`
- `gatewayClassName`: Supported.
+ - `infrastructure`: Supported.
+ - `parametersRef`: NginxProxy resource supported.
+ - `labels`: Supported.
+ - `annotations`: Supported.
- `listeners`
- `name`: Supported.
- `hostname`: Supported.
@@ -109,21 +108,18 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `options`: Not supported.
- `allowedRoutes`: Supported.
- `addresses`: Not supported.
- - `infrastructure`: Not supported.
- `backendTLS`: Not supported.
+ - `allowedListeners`: Not supported.
- `status`
- - `addresses`: Partially supported (LoadBalancer and Pod IP).
+ - `addresses`: Partially supported (LoadBalancer and ClusterIP).
- `conditions`: Supported (Condition/Status/Reason):
- `Accepted/True/Accepted`
- `Accepted/True/ListenersNotValid`
- `Accepted/False/ListenersNotValid`
- `Accepted/False/Invalid`
- `Accepted/False/UnsupportedValue`: Custom reason for when a value of a field in a Gateway is invalid or not supported.
- - `Accepted/False/GatewayConflict`: Custom reason for when the Gateway is ignored due to a conflicting Gateway.
- NGINX Gateway Fabric only supports a single Gateway.
- `Programmed/True/Programmed`
- `Programmed/False/Invalid`
- - `Programmed/False/GatewayConflict`: Custom reason for when the Gateway is ignored due to a conflicting Gateway. NGINX Gateway Fabric only supports a single Gateway.
- `listeners`
- `name`: Supported.
- `supportedKinds`: Supported.
@@ -135,7 +131,6 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `Accepted/False/ProtocolConflict`
- `Accpeted/False/HostnameConflict`
- `Accepted/False/UnsupportedValue`: Custom reason for when a value of a field in a Listener is invalid or not supported.
- - `Accepted/False/GatewayConflict`: Custom reason for when the Gateway is ignored due to a conflicting Gateway. NGINX Gateway Fabric only supports a single Gateway.
- `Programmed/True/Programmed`
- `Programmed/False/Invalid`
- `ResolvedRefs/True/ResolvedRefs`
@@ -145,8 +140,6 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `Conflicted/True/HostnameConflict`
- `Conflicted/False/NoConflicts`
----
-
### HTTPRoute
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -165,8 +158,8 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `rules`
- `matches`
- `path`: Partially supported. Only `PathPrefix` and `Exact` types.
- - `headers`: Partially supported. Only `Exact` type.
- - `queryParams`: Partially supported. Only `Exact` type.
+ - `headers`: Supported.
+ - `queryParams`: Supported.
- `method`: Supported.
- `filters`
- `type`: Supported.
@@ -174,7 +167,8 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `requestHeaderModifier`: Supported. If multiple filters are configured, NGINX Gateway Fabric will choose the first and ignore the rest.
- `urlRewrite`: Supported. If multiple filters are configured, NGINX Gateway Fabric will choose the first and ignore the rest. Incompatible with `requestRedirect`.
- `responseHeaderModifier`: Supported. If multiple filters are configured, NGINX Gateway Fabric will choose the first and ignore the rest.
- - `requestMirror`, `extensionRef`: Not supported.
+ - `requestMirror`: Supported. Multiple mirrors can be specified.
+ - `extensionRef`: Supported for SnippetsFilters.
- `backendRefs`: Partially supported. Backend ref `filters` are not supported.
- `status`
- `parents`
@@ -197,8 +191,6 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `ResolvedRefs/False/InvalidIPFamily`: Custom reason for when one of the HTTPRoute rules has a backendRef that has an invalid IPFamily.
- `PartiallyInvalid/True/UnsupportedValue`
----
-
### GRPCRoute
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -217,12 +209,13 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `rules`
- `matches`
- `method`: Partially supported. Only `Exact` type with both `method.service` and `method.method` specified.
- - `headers`: Partially supported. Only `Exact` type.
+ - `headers`: Supported.
- `filters`
- `type`: Supported.
- `requestHeaderModifier`: Supported. If multiple filters are configured, NGINX Gateway Fabric will choose the first and ignore the rest.
- `responseHeaderModifier`: Supported. If multiple filters are configured, NGINX Gateway Fabric will choose the first and ignore the rest.
- - `requestMirror`, `extensionRef`: Not supported.
+ - `requestMirror`: Supported. Multiple mirrors can be specified.
+ - `extensionRef`: Supported for SnippetsFilters.
- `backendRefs`: Partially supported. Backend ref `filters` are not supported.
- `status`
- `parents`
@@ -243,8 +236,6 @@ See the [static-mode]({{< ref "/ngf/reference/cli-help.md#static-mode">}}) comma
- `ResolvedRefs/False/UnsupportedValue`: Custom reason for when one of the GRPCRoute rules has a backendRef with an unsupported value.
- `PartiallyInvalid/True/UnsupportedValue`
----
-
### ReferenceGrant
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -267,8 +258,6 @@ Fields:
- `kind` - supports `Gateway` and `HTTPRoute`.
- `namespace`- supported.
----
-
### TLSRoute
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -319,8 +308,6 @@ Fields:
{{< /bootstrap-table >}}
----
-
### UDPRoute
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -331,8 +318,6 @@ Fields:
{{< /bootstrap-table >}}
----
-
### BackendTLSPolicy
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -351,10 +336,10 @@ Fields:
- `kind`: Supports `Service`.
- `name`: Supported.
- `validation`
- - `caCertificateRefs`: Supports single reference to a `ConfigMap`, with the CA certificate in a key named `ca.crt`.
+ - `caCertificateRefs`: Supports single reference to a `ConfigMap` or `Secret`, with the CA certificate in a key named `ca.crt`.
- `name`: Supported.
- `group`: Supported.
- - `kind`: Supports `ConfigMap`.
+ - `kind`: Supports `ConfigMap` and `Secret`.
- `hostname`: Supported.
- `wellKnownCertificates`: Supports `System`. This will set the CA certificate to the Alpine system root CA path `/etc/ssl/cert.pem`. NB: This option will only work if the NGINX image used is Alpine based. The NGF NGINX images are Alpine based by default.
- `subjectAltNames`: Not supported.
@@ -382,4 +367,4 @@ Fields:
Custom policies are NGINX Gateway Fabric-specific CRDs (Custom Resource Definitions) that support features such as tracing, and client connection settings. These important data-plane features are not part of the Gateway API specifications.
While these CRDs are not part of the Gateway API, the mechanism to attach them to Gateway API resources is part of the Gateway API. See the [Policy Attachment documentation](https://gateway-api.sigs.k8s.io/references/policy-attachment/).
-See the [custom policies]({{< ref "/ngf/overview/custom-policies.md" >}}) document for more information.
+See the [custom policies]({{< ref "/ngf/overview/custom-policies.md" >}}) document for more information.
\ No newline at end of file
diff --git a/content/ngf/overview/gateway-architecture.md b/content/ngf/overview/gateway-architecture.md
index 7426c67c1..fa45758e7 100644
--- a/content/ngf/overview/gateway-architecture.md
+++ b/content/ngf/overview/gateway-architecture.md
@@ -2,9 +2,9 @@
title: Gateway architecture
weight: 100
toc: true
-type: reference
-product: NGF
-docs: DOCS-1413
+nd-content-type: reference
+nd-product: NGF
+nd-docs: DOCS-1413
---
Learn about the architecture and design principles of NGINX Gateway Fabric.
@@ -24,87 +24,345 @@ NGINX Gateway Fabric is an open source project that provides an implementation o
For a list of supported Gateway API resources and features, see the [Gateway API Compatibility]({{< ref "/ngf/overview/gateway-api-compatibility.md" >}}) documentation.
-We have more information regarding our [design principles](https://github.com/nginx/nginx-gateway-fabric/blob/v{{< version-ngf >}}/docs/developer/design-principles.md) in the project's GitHub repository.
+NGINX Gateway Fabric separates the control plane and data plane into distinct deployments. This architectural separation enhances scalability, security, and operational isolation between the two components.
+
+The control plane interacts with the Kubernetes API, watching for Gateway API resources. When a new Gateway resource is provisioned, it dynamically creates and manages a corresponding NGINX data plane Deployment and Service. This ensures that the system can adapt to changes in the cluster state seamlessly.
+
+Each NGINX data plane pod consists of an NGINX container integrated with the [NGINX agent](https://github.com/nginx/agent). The agent securely communicates with the control plane using gRPC. The control plane translates Gateway API resources into NGINX configurations and sends these configurations to the agent to ensure consistent traffic management.
+
+This design enables centralized management of multiple Gateways while ensuring that each NGINX instance stays aligned with the cluster's current configuration. Labels, annotations, and infrastructure settings such as service type or replica count can be specified globally via the Helm chart or customized per Gateway using the enhanced NginxProxy CRD and the Gateway's `infrastructure` section.
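+
+For example, a Gateway could request per-Gateway labels on the data plane resources generated for it through its `infrastructure` field. This is a minimal sketch; the label is a placeholder:
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1
+kind: Gateway
+metadata:
+  name: gateway
+spec:
+  gatewayClassName: nginx
+  infrastructure:
+    labels:
+      app.kubernetes.io/part-of: my-team   # placeholder label propagated to the generated NGINX resources
+  listeners:
+  - name: http
+    port: 80
+    protocol: HTTP
+```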
+
+We have more information regarding our [design principles](https://github.com/nginx/nginx-gateway-fabric/blob/v{{< version-ngf >}}/docs/developer/design-principles.md) in the project's GitHub repository.
---
-## NGINX Gateway Fabric at a high level
+## NGINX Gateway Fabric Deployment Model and Architectural Overview
-This figure depicts an example of NGINX Gateway Fabric exposing two web applications within a Kubernetes cluster to clients on the internet:
+The NGINX Gateway Fabric architecture separates the control plane and data plane into distinct and independent Deployments, ensuring enhanced security, flexibility, and resilience.
-{{< img src="/ngf/img/ngf-high-level.png" alt="" >}}
+### Control Plane: Centralized Management
-{{< note >}} The figure does not show many of the necessary Kubernetes resources the Cluster Operators and Application Developers need to create, like deployment and services. {{< /note >}}
+The control plane operates as a Deployment, serving as a [Kubernetes controller](https://kubernetes.io/docs/concepts/architecture/controller/) built with the [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) library. It manages all aspects of resource provisioning and configuration for the NGINX data planes by watching Gateway API resources and other Kubernetes objects such as Services, Endpoints, and Secrets.
-The figure shows:
+Key functionalities include:
-- A _Kubernetes cluster_.
-- Users _Cluster Operator_, _Application Developer A_ and _Application Developer B_. These users interact with the cluster through the Kubernetes API by creating Kubernetes objects.
-- _Clients A_ and _Clients B_ connect to _Applications A_ and _B_, respectively, which they have deployed.
-- The _NGF Pod_, [deployed by _Cluster Operator_]({{< ref "/ngf/installation">}}) in the namespace _nginx-gateway_. For scalability and availability, you can have multiple replicas. This pod consists of two containers: `NGINX` and `NGF`. The _NGF_ container interacts with the Kubernetes API to retrieve the most up-to-date Gateway API resources created within the cluster. It then dynamically configures the _NGINX_ container based on these resources, ensuring proper alignment between the cluster state and the NGINX configuration.
-- _Gateway AB_, created by _Cluster Operator_, requests a point where traffic can be translated to Services within the cluster. This Gateway includes a listener with a hostname `*.example.com`. Application Developers have the ability to attach their application's routes to this Gateway if their application's hostname matches `*.example.com`.
-- _Application A_ with two pods deployed in the _applications_ namespace by _Application Developer A_. To expose the application to its clients (_Clients A_) via the host `a.example.com`, _Application Developer A_ creates _HTTPRoute A_ and attaches it to `Gateway AB`.
-- _Application B_ with one pod deployed in the _applications_ namespace by _Application Developer B_. To expose the application to its clients (_Clients B_) via the host `b.example.com`, _Application Developer B_ creates _HTTPRoute B_ and attaches it to `Gateway AB`.
-- _Public Endpoint_, which fronts the _NGF_ pod. This is typically a TCP load balancer (cloud, software, or hardware) or a combination of such load balancer with a NodePort service. _Clients A_ and _B_ connect to their applications via the _Public Endpoint_.
+- Dynamic provisioning: When a new Gateway resource is created, the control plane automatically provisions a dedicated NGINX Deployment and exposes it using a Service.
+- Configuration management: Kubernetes and Gateway API resources are translated into NGINX configurations, which are securely delivered to the data plane pods via a gRPC connection to the NGINX Agent.
+- Secure communication: By default, the gRPC connection uses self-signed certificates generated during installation. Integration with [cert-manager](https://cert-manager.io/) is also supported for optional certificate management.
-The yellow and purple arrows represent connections related to the client traffic, and the black arrows represent access to the Kubernetes API. The resources within the cluster are color-coded based on the user responsible for their creation.
+### Data Plane: Autonomous Traffic Management
-For example, the Cluster Operator is denoted by the color green, indicating they create and manage all the green resources.
+Each NGINX data plane pod is provisioned as an independent Deployment containing an `nginx` container. This container runs both the `nginx` process and the [NGINX agent](https://github.com/nginx/agent), which is responsible for:
----
+- Applying configurations: The agent receives updates from the control plane and applies them to the NGINX instance.
+- Handling reloads: NGINX Agent handles configuration reconciliation and reloading NGINX, eliminating the need for shared volumes or Unix signals between the control plane and data plane pods.
+
+With this design, multiple NGINX data planes can be managed by a single control plane, enabling fine-grained, Gateway-specific control and isolation.
+
+### Gateway Resource Management
-## The NGINX Gateway Fabric pod
+The architecture supports flexible operation and isolation across multiple Gateways:
-NGINX Gateway Fabric consists of two containers:
+- Concurrent Gateways: Multiple Gateway objects can run simultaneously within a single installation.
+- 1:1 resource mapping: Each Gateway resource corresponds uniquely to a dedicated data plane deployment, ensuring clear delineation of ownership and operational segregation.
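+
+For example, after creating a Gateway you would typically see a dedicated NGINX Deployment and Service in that Gateway's namespace. This is a sketch; the exact resource names depend on the Gateway name:
+
+```shell
+# Lists the data plane resources generated for Gateways in the default namespace.
+kubectl get deployments,services -n default
+```
+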
-1. `nginx`: the data plane. Consists of an NGINX master process and NGINX worker processes. The master process controls the worker processes. The worker processes handle the client traffic and load balance traffic to the backend applications.
-1. `nginx-gateway`: the control plane. Watches Kubernetes objects and configures NGINX.
+### Resilience and Fault Isolation
-These containers are deployed in a single pod as a Kubernetes Deployment.
+One of the primary advantages of this architecture is enhanced operational resilience and fault isolation:
-The `nginx-gateway`, or the control plane, is a [Kubernetes controller](https://kubernetes.io/docs/concepts/architecture/controller/), written with the [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) library. It watches Kubernetes objects (services, endpoints, secrets, and Gateway API CRDs), translates them to NGINX configuration, and configures NGINX.
+#### Control Plane Resilience
-This configuration happens in two stages:
+In the event of a control plane failure or downtime:
+- Existing data plane pods continue serving traffic using their last-valid cached configurations.
+- Updates to routes or Gateways are temporarily paused, but stable traffic delivery continues without degradation.
+- Recovery restores functionality, resynchronizing configuration updates seamlessly.
-1. NGINX configuration files are written to the NGINX configuration volume shared by the `nginx-gateway` and `nginx` containers.
-1. The control plane reloads the NGINX process.
+#### Data Plane Resilience
-This is possible because the two containers [share a process namespace](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/), allowing the NGINX Gateway Fabric process to send signals to the NGINX main process.
+If a data plane pod encounters an outage or restarts:
+- Only routes tied to the specific linked Gateway object experience brief disruptions.
+- Configurations automatically resynchronize with the data plane upon pod restart, minimizing the scope of impact.
+- Other data plane pods remain unaffected and continue serving traffic normally.
-The following diagram represents the connections, relationships and interactions between process with the `nginx` and `nginx-gateway` containers, as well as external processes/entities.
+This split architecture ensures operational boundaries between the control plane and data plane, delivering improved scalability, security, and robustness while minimizing risks associated with failures in either component.
-{{< img src="/ngf/img/ngf-pod.png" alt="" >}}
+---
+
+## High-Level Example of NGINX Gateway Fabric in Action
+
+This figure depicts an example of NGINX Gateway Fabric exposing three web applications within a Kubernetes cluster to clients on the internet:
+
+```mermaid
+graph LR
+ %% Nodes and Relationships
+ subgraph KubernetesCluster[Kubernetes Cluster]
+
+ subgraph applications2[Namespace: applications2]
+
+ subgraph DataplaneComponentsC[Dataplane Components]
+ GatewayC[Gateway C<br>Listener: *.other-example.com]
+
+ subgraph NGINXPodC[NGINX Pod]
+ subgraph NGINXContainerC[NGINX Container]
+ NGINXProcessC(NGINX)
+ NGINXAgentC(NGINX Agent)
+ end
+ end
+ end
+
+ subgraph HTTPRouteCAndApplicationC[HTTPRoute C and Application C]
+ HTTPRouteC[HTTPRoute C<br>Host: c.other-example.com]
+ ApplicationC[Application C<br>Pods: 1]
+ end
+
+ end
+
+ subgraph nginx-gateway[Namespace: nginx-gateway]
+ NGFPod[NGF Pod]
+ end
+
+ subgraph applications1[Namespace: applications]
+
+ subgraph DataplaneComponentsAB[Dataplane Components]
+ GatewayAB[Gateway AB<br>Listener: *.example.com]
+
+ subgraph NGINXPodAB[NGINX Pod]
+ subgraph NGINXContainerAB[NGINX Container]
+ NGINXProcessAB(NGINX)
+ NGINXAgentAB(NGINX Agent)
+ end
+ end
+ end
+
+ subgraph HTTPRouteBAndApplicationB[HTTPRoute B and Application B]
+ HTTPRouteB[HTTPRoute B<br>Host: b.example.com]
+ ApplicationB[Application B<br>Pods: 1]
+ end
+
+ subgraph HTTPRouteAAndApplicationA[HTTPRoute A and Application A]
+ HTTPRouteA[HTTPRoute A<br>Host: a.example.com]
+ ApplicationA[Application A<br>Pods: 2]
+ end
+ end
+
+ KubernetesAPI[Kubernetes API]
+ end
+
+ subgraph Users[Users]
+ ClusterOperator[Cluster Operator]
+ AppDevA[Application Developer A]
+ AppDevB[Application Developer B]
+ AppDevC[Application Developer C]
+ end
+
+ subgraph Clients[Clients]
+ ClientsA[Clients A]
+ ClientsB[Clients B]
+ ClientsC[Clients C]
+ end
+
+ subgraph "Public Endpoints"
+ PublicEndpointAB[Public Endpoint AB<br>TCP Load Balancer/NodePort]
+ PublicEndpointC[Public Endpoint C<br>TCP Load Balancer/NodePort]
+ end
+
+ %% Updated Traffic Flow
+ ClientsA == a.example.com ==> PublicEndpointAB
+ ClientsB == b.example.com ==> PublicEndpointAB
+ ClientsC == c.other-example.com ==> PublicEndpointC
+
+ PublicEndpointAB ==> NGINXProcessAB
+ PublicEndpointC ==> NGINXProcessC
+ NGINXProcessAB ==> ApplicationA
+ NGINXProcessAB ==> ApplicationB
+ NGINXProcessC ==> ApplicationC
+
+ %% Kubernetes Configuration Flow
+ HTTPRouteA --> GatewayAB
+ HTTPRouteB --> GatewayAB
+ HTTPRouteC --> GatewayC
+
+ NGFPod --> KubernetesAPI
+ NGFPod --gRPC--> NGINXAgentAB
+ NGINXAgentAB --> NGINXProcessAB
+ NGFPod --gRPC--> NGINXAgentC
+ NGINXAgentC --> NGINXProcessC
+
+ ClusterOperator --> KubernetesAPI
+ AppDevA --> KubernetesAPI
+ AppDevB --> KubernetesAPI
+ AppDevC --> KubernetesAPI
+
+ %% Styling
+ style ClusterOperator fill:#66CDAA,stroke:#333,stroke-width:2px
+ style GatewayAB fill:#66CDAA,stroke:#333,stroke-width:2px
+ style GatewayC fill:#66CDAA,stroke:#333,stroke-width:2px
+ style NGFPod fill:#66CDAA,stroke:#333,stroke-width:2px
+
+ style NGINXProcessAB fill:#66CDAA,stroke:#333,stroke-width:2px
+ style NGINXProcessC fill:#66CDAA,stroke:#333,stroke-width:2px
+
+ style KubernetesAPI fill:#9370DB,stroke:#333,stroke-width:2px
+
+ style HTTPRouteAAndApplicationA fill:#E0FFFF,stroke:#333,stroke-width:2px
+ style HTTPRouteBAndApplicationB fill:#E0FFFF,stroke:#333,stroke-width:2px
+
+ style AppDevA fill:#FFA07A,stroke:#333,stroke-width:2px
+ style HTTPRouteA fill:#FFA07A,stroke:#333,stroke-width:2px
+ style ApplicationA fill:#FFA07A,stroke:#333,stroke-width:2px
+ style ClientsA fill:#FFA07A,stroke:#333,stroke-width:2px
+
+ style AppDevB fill:#87CEEB,stroke:#333,stroke-width:2px
+ style HTTPRouteB fill:#87CEEB,stroke:#333,stroke-width:2px
+ style ApplicationB fill:#87CEEB,stroke:#333,stroke-width:2px
+ style ClientsB fill:#87CEEB,stroke:#333,stroke-width:2px
+
+ style AppDevC fill:#FFC0CB,stroke:#333,stroke-width:2px
+ style HTTPRouteC fill:#FFC0CB,stroke:#333,stroke-width:2px
+ style ApplicationC fill:#FFC0CB,stroke:#333,stroke-width:2px
+ style ClientsC fill:#FFC0CB,stroke:#333,stroke-width:2px
+
+ style PublicEndpointAB fill:#FFD700,stroke:#333,stroke-width:2px
+ style PublicEndpointC fill:#FFD700,stroke:#333,stroke-width:2px
+
+ %% Styling
+ classDef dashedSubgraph stroke-dasharray: 5, 5;
+
+ %% Assign Custom Style Classes
+ class DataplaneComponentsAB dashedSubgraph;
+ class DataplaneComponentsC dashedSubgraph;
+```
+
+{{< note >}} The figure does not show many of the necessary Kubernetes resources the Cluster Operators and Application Developers need to create, like deployment and services. {{< /note >}}
+
+The figure shows:
+
+- A _Kubernetes cluster_.
+- Users _Cluster Operator_, _Application Developer A_, _B_ and _C_. These users interact with the cluster through the Kubernetes API by creating Kubernetes objects.
+- _Clients A_, _B_, and _C_ connect to _Applications A_, _B_, and _C_ respectively, which the developers have deployed.
+- The _NGF Pod_, [deployed by _Cluster Operator_]({{< ref "/ngf/install/">}}) in the namespace _nginx-gateway_. For scalability and availability, you can have multiple replicas. The _NGF_ container interacts with the Kubernetes API to retrieve the most up-to-date Gateway API resources created within the cluster. When a new Gateway resource is provisioned, the control plane dynamically creates and manages a corresponding NGINX data plane Deployment and Service. It watches the Kubernetes API and dynamically configures these _NGINX_ deployments based on the Gateway API resources, ensuring proper alignment between the cluster state and the NGINX configuration.
+- The _NGINX Pod_ consists of an NGINX container and the integrated NGINX agent, which securely communicates with the control plane over gRPC. The control plane translates Gateway API resources into NGINX configuration, and sends the configuration to the agent.
+- Gateways _Gateway AB_ and _Gateway C_, created by _Cluster Operator_, request points where traffic can be translated to Services within the cluster. _Gateway AB_ includes a listener with the hostname `*.example.com`, and _Gateway C_ includes a listener with the hostname `*.other-example.com`. Application Developers can attach their application's routes to _Gateway AB_ if their application's hostname matches `*.example.com`, or to _Gateway C_ if it matches `*.other-example.com`.
+- _Application A_ with two pods deployed in the _applications_ namespace by _Application Developer A_. To expose the application to its clients (_Clients A_) via the host `a.example.com`, _Application Developer A_ creates _HTTPRoute A_ and attaches it to `Gateway AB`.
+- _Application B_ with one pod deployed in the _applications_ namespace by _Application Developer B_. To expose the application to its clients (_Clients B_) via the host `b.example.com`, _Application Developer B_ creates _HTTPRoute B_ and attaches it to `Gateway AB`.
+- _Application C_ with one pod deployed in the _applications2_ namespace by _Application Developer C_. To expose the application to its clients (_Clients C_) via the host `c.other-example.com`, _Application Developer C_ creates _HTTPRoute C_ and attaches it to `Gateway C`.
+- _Public Endpoint AB_ and _Public Endpoint C_, which front the _NGINX AB_ and _NGINX C_ Pods respectively. A public endpoint is typically a TCP load balancer (cloud, software, or hardware) or a combination of such a load balancer with a NodePort Service. _Clients A_ and _B_ connect to their applications via _Public Endpoint AB_, and _Clients C_ connect to their applications via _Public Endpoint C_.
+- The bold arrows represent connections related to the client traffic. Note that the traffic from _Clients C_ to _Application C_ is completely isolated from the traffic between _Clients A_ and _B_ and their respective applications.
+
+The resources within the cluster are color-coded based on the user responsible for their creation.
+For example, the Cluster Operator is denoted by the color green, indicating they create and manage all the green resources.
+
+---
+
+## NGINX Gateway Fabric: Component Communication Workflow
+
+```mermaid
+graph LR
+ %% Main Components
+ KubernetesAPI[Kubernetes API]
+ PrometheusMonitor[Prometheus]
+ F5Telemetry[F5 Telemetry Service]
+ NGFPod[NGF Pod]
+ NGINXPod[NGINX Pod]
+ Client[Client]
+ Backend[Backend]
+
+ %% NGINX Pod Grouping
+ subgraph NGINXPod[NGINX Pod]
+ NGINXAgent[NGINX Agent]
+ NGINXMaster[NGINX Master]
+ NGINXWorker[NGINX Worker]
+ ConfigFiles[Config Files]
+ ContainerRuntimeNGINX[stdout/stderr]
+ end
+
+ subgraph NGFPod[NGF Pod]
+ NGFProcess[NGF Process]
+ ContainerRuntimeNGF[stdout/stderr]
+ end
+
+ %% External Components Grouping
+ subgraph ExternalComponents[.]
+ KubernetesAPI[Kubernetes API]
+ PrometheusMonitor[Prometheus]
+ F5Telemetry[F5 Telemetry Service]
+ end
+
+ %% HTTPS: Communication with Kubernetes API
+ NGFProcess -- "(1) Reads Updates" --> KubernetesAPI
+ NGFProcess -- "(1) Writes Statuses" --> KubernetesAPI
+
+ %% Prometheus: Metrics Collection
+ PrometheusMonitor -- "(2) Fetches controller-runtime metrics" --> NGFPod
+ PrometheusMonitor -- "(5) Fetches NGINX metrics" --> NGINXWorker
+
+ %% Telemetry: Product telemetry data
+ NGFProcess -- "(3) Sends telemetry data" --> F5Telemetry
+
+ %% File I/O: Logging
+ NGFProcess -- "(4) Write logs" --> ContainerRuntimeNGF
+ NGINXMaster -- "(11) Write logs" --> ContainerRuntimeNGINX
+ NGINXWorker -- "(12) Write logs" --> ContainerRuntimeNGINX
+
+ %% gRPC: Configuration Updates
+ NGFProcess -- "(6) Sends Config to Agent" --> NGINXAgent
+ NGINXAgent -- "(7) Validates & Writes Config & TLS Certs" --> ConfigFiles
+ NGINXAgent -- "(8) Reloads NGINX" --> NGINXMaster
+ NGINXAgent -- "(9) Sends DataPlaneResponse" --> NGFProcess
+
+ %% File I/O: Configuration and Secrets
+ NGINXMaster -- "(10) Reads TLS Secrets" --> ConfigFiles
+ NGINXMaster -- "(11) Reads nginx.conf & NJS Modules" --> ConfigFiles
+
+ %% Signals: Worker Lifecycle Management
+ NGINXMaster -- "(14) Manages Workers (Update/Shutdown)" --> NGINXWorker
+
+ %% Traffic Flow
+ Client -- "(15) Sends Traffic" --> NGINXWorker
+ NGINXWorker -- "(16) Routes Traffic" --> Backend
+
+ %% Styling
+ classDef important fill:#66CDAA,stroke:#333,stroke-width:2px;
+ classDef metrics fill:#FFC0CB,stroke:#333,stroke-width:2px;
+ classDef io fill:#FFD700,stroke:#333,stroke-width:2px;
+ classDef signal fill:#87CEEB,stroke:#333,stroke-width:2px;
+ style ExternalComponents fill:transparent,stroke-width:0px
+
+ %% Class Assignments for Node Colors
+ class NGFPod,KubernetesAPI important;
+ class PrometheusMonitor,F5Telemetry metrics;
+ class ConfigFiles,NGINXMaster,NGINXWorker,NGINXAgent io;
+ class Client,Backend signal;
+```
The following list describes the connections, preceded by their types in parentheses. For brevity, the suffix "process" has been omitted from the process descriptions.
1. (HTTPS)
- Read: _NGF_ reads the _Kubernetes API_ to get the latest versions of the resources in the cluster.
- - Write: _NGF_ writes to the _Kubernetes API_ to update the handled resources' statuses and emit events. If there's more than one replica of _NGF_ and [leader election](https://github.com/nginx/nginx-gateway-fabric/tree/v{{< version-ngf >}}/charts/nginx-gateway-fabric#configuration) is enabled, only the _NGF_ pod that is leading will write statuses to the _Kubernetes API_.
-1. (HTTP, HTTPS) _Prometheus_ fetches the `controller-runtime` and NGINX metrics via an HTTP endpoint that _NGF_ exposes (`:9113/metrics` by default). Prometheus is **not** required by NGINX Gateway Fabric, and its endpoint can be turned off.
-1. (File I/O)
- - Write: _NGF_ generates NGINX _configuration_ based on the cluster resources and writes them as `.conf` files to the mounted `nginx-conf` volume, located at `/etc/nginx/conf.d`. It also writes _TLS certificates_ and _keys_ from [TLS secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) referenced in the accepted Gateway resource to the `nginx-secrets` volume at the path `/etc/nginx/secrets`.
- - Read: _NGF_ reads the PID file `nginx.pid` from the `nginx-run` volume, located at `/var/run/nginx`. _NGF_ extracts the PID of the nginx process from this file in order to send reload signals to _NGINX master_.
+ - Write: _NGF_ writes to the _Kubernetes API_ to update the handled resources' statuses and emit events. If there's more than one replica of _NGF_ and [leader election](https://github.com/nginx/nginx-gateway-fabric/tree/v{{< version-ngf >}}/charts/nginx-gateway-fabric#configuration) is enabled, only the _NGF_ pod that is leading will write statuses to the _Kubernetes API_.
+1. (HTTP, HTTPS) _Prometheus_ fetches the `controller-runtime` metrics via an HTTP endpoint that _NGF_ exposes (`:9113/metrics` by default).
+Prometheus is **not** required by NGINX Gateway Fabric, and its endpoint can be turned off.
+1. (HTTPS) NGF sends [product telemetry data]({{< ref "/ngf/overview/product-telemetry.md" >}}) to the F5 telemetry service.
1. (File I/O) _NGF_ writes logs to its _stdout_ and _stderr_, which are collected by the container runtime.
-1. (HTTP) _NGF_ fetches the NGINX metrics via the unix:/var/run/nginx/nginx-status.sock UNIX socket and converts it to _Prometheus_ format used in #2.
-1. (Signal) To reload NGINX, _NGF_ sends the [reload signal](https://nginx.org/en/docs/control.html) to the **NGINX master**.
+1. (HTTP, HTTPS) _Prometheus_ fetches the NGINX metrics via an HTTP endpoint that _NGINX_ exposes (`:9113/metrics` by default). Prometheus is **not** required by NGINX, and its endpoint can be turned off.
+1. (gRPC) _NGF_ generates NGINX _configuration_ based on the cluster resources and sends it to _NGINX Agent_ over a secure gRPC connection.
+ - _NGF_ sends a message containing file metadata to all pods (subscriptions) for the deployment.
+ - _NGINX Agent_ receives a ConfigApplyRequest with the list of file metadata.
+ - _NGINX Agent_ then calls GetFile for each file in the list, and _NGF_ sends back the file contents.
1. (File I/O)
- - Write: The _NGINX master_ writes its PID to the `nginx.pid` file stored in the `nginx-run` volume.
- - Read: The _NGINX master_ reads _configuration files_ and the _TLS cert and keys_ referenced in the configuration when it starts or during a reload. These files, certificates, and keys are stored in the `nginx-conf` and `nginx-secrets` volumes that are mounted to both the `nginx-gateway` and `nginx` containers.
+ - Write: _NGINX Agent_ validates the received configuration, and then writes and applies the config if valid. It also writes _TLS certificates_ and _keys_ from [TLS secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) referenced in the accepted Gateway resource.
+1. (Signal) To reload NGINX, _NGINX Agent_ sends the reload signal to the _NGINX master_.
+1. (gRPC) _NGINX Agent_ responds to _NGF_ with a DataPlaneResponse.
1. (File I/O)
- - Write: The _NGINX master_ writes to the auxiliary Unix sockets folder, which is located in the `/var/run/nginx`
- directory.
- - Read: The _NGINX master_ reads the `nginx.conf` file from the `/etc/nginx` directory. This [file](https://github.com/nginx/nginx-gateway-fabric/blob/v{{< version-ngf >}}/internal/mode/static/nginx/conf/nginx.conf) contains the global and http configuration settings for NGINX. In addition, _NGINX master_ reads the NJS modules referenced in the configuration when it starts or during a reload. NJS modules are stored in the `/usr/lib/nginx/modules` directory.
+ - Read: The _NGINX master_ reads _configuration files_ and the _TLS cert and keys_ referenced in the configuration when it starts or during a reload.
+1. (File I/O)
+ - Read: The _NGINX master_ reads the `nginx.conf` file from the `/etc/nginx` directory. This [file](https://github.com/nginx/nginx-gateway-fabric/blob/v{{< version-ngf >}}/internal/mode/static/nginx/conf/nginx.conf) contains the global and http configuration settings for NGINX. In addition, _NGINX master_ reads the NJS modules referenced in the configuration when it starts or during a reload. NJS modules are stored in the `/usr/lib/nginx/modules` directory.
1. (File I/O) The _NGINX master_ sends logs to its _stdout_ and _stderr_, which are collected by the container runtime.
1. (File I/O) An _NGINX worker_ writes logs to its _stdout_ and _stderr_, which are collected by the container runtime.
1. (Signal) The _NGINX master_ controls the [lifecycle of _NGINX workers_](https://nginx.org/en/docs/control.html#reconfiguration): it creates workers with the new configuration and shuts down workers with the old configuration.
-1. (HTTP) To consider a configuration reload a success, _NGF_ ensures that at least one NGINX worker has the new configuration. To do that, _NGF_ checks a particular endpoint via the unix:/var/run/nginx/nginx-config-version.sock UNIX socket.
1. (HTTP, HTTPS) A _client_ sends traffic to and receives traffic from any of the _NGINX workers_ on ports 80 and 443.
1. (HTTP, HTTPS) An _NGINX worker_ sends traffic to and receives traffic from the _backends_.
-Below are additional connections not depcited on the diagram:
-
-- (HTTPS) NGF sends [product telemetry data]({{< ref "/ngf/overview/product-telemetry.md" >}}) to the F5 telemetry service.
-
---
### Differences with NGINX Plus
@@ -123,6 +381,4 @@ The normal process to update any changes to NGINX is to write the configuration
## Pod readiness
-The `nginx-gateway` container includes a readiness endpoint available through the path `/readyz`. A [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes) periodically checks the endpoint on startup, returning a `200 OK` response when the pod can accept traffic for the data plane. Once the control plane successfully starts, the pod becomes ready.
-
-If there are relevant Gateway API resources in the cluster, the control plane will generate the first NGINX configuration and successfully reload NGINX before the pod is considered ready.
+The `nginx-gateway` container exposes a readiness endpoint at `/readyz`. During startup, a [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes) periodically checks this endpoint. The probe returns a `200 OK` response once the control plane initializes successfully and is ready to begin configuring NGINX. At that point, the pod is marked as ready.
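+
+A readiness probe for this endpoint could look similar to the following sketch. The port and timing values are assumptions (your chart's defaults may differ); verify them against the `nginx-gateway` container in your deployment:
+
+```yaml
+readinessProbe:
+  httpGet:
+    path: /readyz
+    port: 8081        # assumed default health port; confirm against your deployment
+  initialDelaySeconds: 3
+  periodSeconds: 3
+```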
diff --git a/content/ngf/overview/nginx-plus.md b/content/ngf/overview/nginx-plus.md
index ae4de3643..30543eca3 100644
--- a/content/ngf/overview/nginx-plus.md
+++ b/content/ngf/overview/nginx-plus.md
@@ -10,7 +10,7 @@ NGINX Gateway Fabric can use NGINX Open Source or NGINX Plus as its data plane.
## Benefits of NGINX Plus
-- **Robust metrics**: A plethora of [additional Prometheus metrics](https://github.com/nginx/nginx-prometheus-exporter#metrics-for-nginx-plus) are available.
-- **Live activity monitoring**: The [NGINX Plus dashboard]({{< ref "/ngf/how-to/monitoring/dashboard.md" >}}) shows real-time metrics and information about your server infrastructure.
+- **Robust metrics**: A plethora of [additional Prometheus metrics]({{< ref "/ngf/monitoring/prometheus.md" >}}) are available.
+- **Live activity monitoring**: The [NGINX Plus dashboard]({{< ref "/ngf/monitoring/dashboard.md" >}}) shows real-time metrics and information about your server infrastructure.
- **Dynamic upstream configuration**: NGINX Plus can dynamically reconfigure upstream servers when applications in Kubernetes scale up and down, preventing the need for an NGINX reload.
- **Support**: With an NGINX Plus license, you can take advantage of full [support](https://my.f5.com/manage/s/article/K000140156/) from NGINX, Inc.
diff --git a/content/ngf/overview/product-telemetry.md b/content/ngf/overview/product-telemetry.md
index 3c73a4cb5..1213eff4b 100644
--- a/content/ngf/overview/product-telemetry.md
+++ b/content/ngf/overview/product-telemetry.md
@@ -1,9 +1,9 @@
---
title: Product telemetry
weight: 500
-type: reference
-product: NGF
-docs: DOCS-000
+nd-content-type: reference
+nd-product: NGF
+nd-docs: DOCS-000
---
Learn why, what and how NGINX Gateway Fabric collects telemetry.
@@ -16,8 +16,6 @@ Telemetry data is collected once every 24 hours and sent to a service managed by
**If you would prefer to not have data collected, you can [opt-out](#opt-out) when installing NGINX Gateway Fabric.**
----
-
## Collected data
- **Kubernetes:**
@@ -28,12 +26,12 @@ Telemetry data is collected once every 24 hours and sent to a service managed by
- **Cluster Node Count:** the number of Nodes in the cluster.
- **Version:** the version of the NGINX Gateway Fabric Deployment.
- **Deployment UID:** the UID of the NGINX Gateway Fabric Deployment.
-- **Deployment Replica Count:** the count of NGINX Gateway Fabric Pods.
- **Image Build Source:** whether the image was built by GitHub or locally (values are `gha`, `local`, or `unknown`). The source repository of the images is **not** collected.
- **Deployment Flags:** a list of NGINX Gateway Fabric Deployment flags that are specified by a user. The actual values of non-boolean flags are **not** collected; we only record that they are either `true` or `false` for boolean flags and `default` or `user-defined` for the rest.
- **Count of Resources:** the total count of resources related to NGINX Gateway Fabric. This includes `GatewayClasses`, `Gateways`, `HTTPRoutes`,`GRPCRoutes`, `TLSRoutes`, `Secrets`, `Services`, `BackendTLSPolicies`, `ClientSettingsPolicies`, `NginxProxies`, `ObservabilityPolicies`, `UpstreamSettingsPolicies`, `SnippetsFilters`, and `Endpoints`. The data within these resources is **not** collected.
-- **SnippetsFilters Info:** a list of directive-context strings from applied SnippetFilters and a total count per strings. The actual value of any NGINX directive is **not** collected.
-
+- **SnippetsFilters Info:** a list of directive-context strings from applied SnippetsFilters and a total count per string. The actual value of any NGINX directive is **not** collected.
+- **Control Plane Pod Count:** the count of NGINX Gateway Fabric control plane Pods.
+- **Data Plane Pod Count:** the count of NGINX data plane Pods.
+
This data is used to identify the following information:
- The flavors of Kubernetes environments that are most popular among our users.
diff --git a/content/ngf/overview/resource-validation.md b/content/ngf/overview/resource-validation.md
index f7c7fbb85..3c11f9b18 100644
--- a/content/ngf/overview/resource-validation.md
+++ b/content/ngf/overview/resource-validation.md
@@ -2,17 +2,15 @@
title: Resource validation
weight: 400
toc: true
-type: reference
-product: NGF
-docs: DOCS-1414
+nd-content-type: reference
+nd-product: NGF
+nd-docs: DOCS-1414
---
## Overview
This document describes how NGINX Gateway Fabric validates Gateway API and NGINX Gateway Fabric Kubernetes resources.
----
-
## Gateway API resource validation
NGINX Gateway Fabric validates Gateway API resources for several reasons:
@@ -23,8 +21,6 @@ NGINX Gateway Fabric validates Gateway API resources for several reasons:
The process involves four different steps, explained in detail in this document, with the goal of making sure that NGINX continues to handle traffic even if invalid Gateway API resources were created.
----
-
### Step 1 - OpenAPI Scheme validation by Kubernetes API Server
The Kubernetes API server validates Gateway API resources against the OpenAPI schema embedded in the Gateway API CRDs. For example, if you create an HTTPRoute with an invalid hostname "cafe.!@#$%example.com", the API server will reject it with the following error:
@@ -39,8 +35,6 @@ The HTTPRoute "coffee" is invalid: spec.hostnames[0]: Invalid value: "cafe.!@#$%
{{< note >}}While unlikely, bypassing this validation step is possible if the Gateway API CRDs are modified to remove the validation. If this happens, Step 4 will reject any invalid values (from NGINX perspective).{{< /note >}}
----
-
### Step 2 - CEL validation by Kubernetes API Server
The Kubernetes API server validates Gateway API resources using CEL validation embedded in the Gateway API CRDs. It validates Gateway API resources using advanced rules unavailable in the OpenAPI schema validation. For example, if you create a Gateway resource with a TCP listener that configures a hostname, the CEL validation will reject it with the following error:
@@ -55,8 +49,6 @@ The Gateway "some-gateway" is invalid: spec.listeners: Invalid value: "array": h
More information on CEL in Kubernetes can be found [here](https://kubernetes.io/docs/reference/using-api/cel/).
----
-
### Step 3 - Validation by NGINX Gateway Fabric
This step catches the following cases of invalid values:
@@ -93,8 +85,6 @@ Status:
{{< note >}} This validation step always runs and cannot be bypassed. {{< /note >}}
----
-
### Confirm validation
To confirm that a resource is valid and accepted by NGINX Gateway Fabric, check that the **Accepted** condition in the resource status has the Status field set to **True**. For example, in a status of a valid HTTPRoute, if NGINX Gateway Fabric accepts a parentRef, the status of that parentRef will look like this:
@@ -120,8 +110,6 @@ Status:
{{< note >}} Make sure the reported observed generation is the same as the resource generation. {{< /note >}}
----
-
## NGINX Gateway Fabric Resource validation
### Step 1 - OpenAPI Scheme validation by Kubernetes API Server
@@ -138,8 +126,6 @@ The NginxGateway "nginx-gateway-config" is invalid: spec.logging.level: Unsuppor
{{< note >}}While unlikely, bypassing this validation step is possible if the NGINX Gateway Fabric CRDs are modified to remove the validation. If this happens, Step 2 will report an error in the resource's status.{{< /note >}}
----
-
### Step 2 - Validation by NGINX Gateway Fabric
This step validates the settings in the NGINX Gateway Fabric CRDs and rejects invalid resources. The validation error is reported via the status and as an Event. For example:
@@ -168,8 +154,6 @@ Event:
Warning UpdateFailed 1s (x2 over 1s) nginx-gateway-fabric-nginx Failed to update control plane configuration: logging.level: Unsupported value: "some-level": supported values: "info", "debug", "error"
```
----
-
### Confirm validation
To confirm that a resource is valid and accepted by NGINX Gateway Fabric, check that the **Valid** condition in the resource status has the Status field set to **True**. For example, the status of a valid NginxGateway will look like this:
diff --git a/content/ngf/reference/_index.md b/content/ngf/reference/_index.md
index 7fad2be4f..0a5299d6b 100644
--- a/content/ngf/reference/_index.md
+++ b/content/ngf/reference/_index.md
@@ -1,5 +1,5 @@
---
title: "Reference"
-weight: 500
+weight: 700
url: /nginx-gateway-fabric/reference/
---
diff --git a/content/ngf/reference/api.md b/content/ngf/reference/api.md
index 77eafa4ac..39928f8dc 100644
--- a/content/ngf/reference/api.md
+++ b/content/ngf/reference/api.md
@@ -1,7 +1,9 @@
---
title: "API reference"
weight: 100
-toc: false
+type: reference
+product: NGF
+docs: DOCS-000
---
## Overview
NGINX Gateway API Reference
@@ -25,8 +27,6 @@ Resource Types:
NginxGateway
-NginxProxy
-
ObservabilityPolicy
SnippetsFilter
@@ -245,141 +245,6 @@ NginxGatewayStatus
-NginxProxy
-
-
-
-
NginxProxy is a configuration object that is attached to a GatewayClass parametersRef. It provides a way
-to configure global settings for all Gateways defined from the GatewayClass.
-
-
-
-
-Field |
-Description |
-
-
-
-
-
-apiVersion
-string |
-
-
-gateway.nginx.org/v1alpha1
-
- |
-
-
-
-kind
-string
- |
-NginxProxy |
-
-
-
-metadata
-
-
-Kubernetes meta/v1.ObjectMeta
-
-
- |
-
-Refer to the Kubernetes API documentation for the fields of the
-metadata field.
- |
-
-
-
-spec
-
-
-NginxProxySpec
-
-
- |
-
- Spec defines the desired state of the NginxProxy.
-
-
-
-
-
-ipFamily
-
-
-IPFamilyType
-
-
- |
-
-(Optional)
- IPFamily specifies the IP family to be used by the NGINX.
-Default is “dual”, meaning the server will use both IPv4 and IPv6.
- |
-
-
-
-telemetry
-
-
-Telemetry
-
-
- |
-
-(Optional)
- Telemetry specifies the OpenTelemetry configuration.
- |
-
-
-
-rewriteClientIP
-
-
-RewriteClientIP
-
-
- |
-
-(Optional)
- RewriteClientIP defines configuration for rewriting the client IP to the original client’s IP.
- |
-
-
-
-logging
-
-
-NginxLogging
-
-
- |
-
-(Optional)
- Logging defines logging related settings for NGINX.
- |
-
-
-
-disableHTTP2
-
-bool
-
- |
-
-(Optional)
- DisableHTTP2 defines if http2 should be disabled for all servers.
-Default is false, meaning http2 will be enabled for all servers.
- |
-
-
- |
-
-
-
ObservabilityPolicy
@@ -707,78 +572,6 @@ sigs.k8s.io/gateway-api/apis/v1alpha2.PolicyStatus
-Address
-
-
-
-(Appears on:
-RewriteClientIP)
-
-
-
Address is a struct that specifies address type and value.
-
-
-
-
-Field |
-Description |
-
-
-
-
-
-type
-
-
-AddressType
-
-
- |
-
- Type specifies the type of address.
- |
-
-
-
-value
-
-string
-
- |
-
- Value specifies the address value.
- |
-
-
-
-AddressType
-(string
alias)
-
-
-(Appears on:
-Address)
-
-
-
AddressType specifies the type of address.
-
-
-
-
-Value |
-Description |
-
-
-"CIDR" |
-CIDRAddressType specifies that the address is a CIDR block.
- |
-
"Hostname" |
-HostnameAddressType specifies that the address is a Hostname.
- |
-
"IPAddress" |
-IPAddressType specifies that the address is an IP address.
- |
-
-
ClientBody
@@ -1105,8 +898,8 @@ longer necessary.
ClientBody,
ClientKeepAlive,
ClientKeepAliveTimeout,
-TelemetryExporter,
-UpstreamKeepAlive)
+UpstreamKeepAlive,
+TelemetryExporter)
Duration is a string value representing a duration in time.
@@ -1114,34 +907,6 @@ Duration can be specified in milliseconds (ms), seconds (s), minutes (m), hours
A value without a suffix is seconds.
Examples: 120s, 50ms, 5m, 1h.
-IPFamilyType
-(string
alias)
-
-
-(Appears on:
-NginxProxySpec)
-
-
-
IPFamilyType specifies the IP family to be used by NGINX.
-
-
-
-
-Value |
-Description |
-
-
-"dual" |
-Dual specifies that NGINX will use both IPv4 and IPv6.
- |
-
"ipv4" |
-IPv4 specifies that NGINX will use only IPv4.
- |
-
"ipv6" |
-IPv6 specifies that NGINX will use only IPv6.
- |
-
-
Logging
@@ -1210,49 +975,6 @@ ControllerLogLevel
-NginxErrorLogLevel
-(string
alias)
-
-
-(Appears on:
-NginxLogging)
-
-
-
NginxErrorLogLevel type defines the log level of error logs for NGINX.
-
-
-
-
-Value |
-Description |
-
-
-"alert" |
-NginxLogLevelAlert is the alert level for NGINX error logs.
- |
-
"crit" |
-NginxLogLevelCrit is the crit level for NGINX error logs.
- |
-
"debug" |
-NginxLogLevelDebug is the debug level for NGINX error logs.
- |
-
"emerg" |
-NginxLogLevelEmerg is the emerg level for NGINX error logs.
- |
-
"error" |
-NginxLogLevelError is the error level for NGINX error logs.
- |
-
"info" |
-NginxLogLevelInfo is the info level for NGINX error logs.
- |
-
"notice" |
-NginxLogLevelNotice is the notice level for NGINX error logs.
- |
-
"warn" |
-NginxLogLevelWarn is the warn level for NGINX error logs.
- |
-
-
NginxGatewayConditionReason
(string
alias)
@@ -1362,142 +1084,15 @@ Logging
-NginxLogging
-
+ObservabilityPolicySpec
+
(Appears on:
-NginxProxySpec)
+ObservabilityPolicy)
-
NginxLogging defines logging related settings for NGINX.
-
-
-
-
-Field |
-Description |
-
-
-
-
-
-errorLevel
-
-
-NginxErrorLogLevel
-
-
- |
-
-(Optional)
- ErrorLevel defines the error log level. Possible log levels listed in order of increasing severity are
-debug, info, notice, warn, error, crit, alert, and emerg. Setting a certain log level will cause all messages
-of the specified and more severe log levels to be logged. For example, the log level ‘error’ will cause error,
-crit, alert, and emerg messages to be logged. https://nginx.org/en/docs/ngx_core_module.html#error_log
- |
-
-
-
-NginxProxySpec
-
-
-
-(Appears on:
-NginxProxy)
-
-
-
NginxProxySpec defines the desired state of the NginxProxy.
-
-
-
-
-Field |
-Description |
-
-
-
-
-
-ipFamily
-
-
-IPFamilyType
-
-
- |
-
-(Optional)
- IPFamily specifies the IP family to be used by the NGINX.
-Default is “dual”, meaning the server will use both IPv4 and IPv6.
- |
-
-
-
-telemetry
-
-
-Telemetry
-
-
- |
-
-(Optional)
- Telemetry specifies the OpenTelemetry configuration.
- |
-
-
-
-rewriteClientIP
-
-
-RewriteClientIP
-
-
- |
-
-(Optional)
- RewriteClientIP defines configuration for rewriting the client IP to the original client’s IP.
- |
-
-
-
-logging
-
-
-NginxLogging
-
-
- |
-
-(Optional)
- Logging defines logging related settings for NGINX.
- |
-
-
-
-disableHTTP2
-
-bool
-
- |
-
-(Optional)
- DisableHTTP2 defines if http2 should be disabled for all servers.
-Default is false, meaning http2 will be enabled for all servers.
- |
-
-
-
-ObservabilityPolicySpec
-
-
-
-(Appears on:
-ObservabilityPolicy)
-
-
-
ObservabilityPolicySpec defines the desired state of the ObservabilityPolicy.
@@ -1538,116 +1133,6 @@ Support: HTTPRoute, GRPCRoute.
-RewriteClientIP
-
-
-
-(Appears on:
-NginxProxySpec)
-
-
-
RewriteClientIP specifies the configuration for rewriting the client’s IP address.
-
-
-
-
-Field |
-Description |
-
-
-
-
-
-mode
-
-
-RewriteClientIPModeType
-
-
- |
-
-(Optional)
- Mode defines how NGINX will rewrite the client’s IP address.
-There are two possible modes:
-- ProxyProtocol: NGINX will rewrite the client’s IP using the PROXY protocol header.
-- XForwardedFor: NGINX will rewrite the client’s IP using the X-Forwarded-For header.
-Sets NGINX directive real_ip_header: https://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
- |
-
-
-
-setIPRecursively
-
-bool
-
- |
-
-(Optional)
- SetIPRecursively configures whether recursive search is used when selecting the client’s address from
-the X-Forwarded-For header. It is used in conjunction with TrustedAddresses.
-If enabled, NGINX will recurse on the values in X-Forwarded-Header from the end of array
-to start of array and select the first untrusted IP.
-For example, if X-Forwarded-For is [11.11.11.11, 22.22.22.22, 55.55.55.1],
-and TrustedAddresses is set to 55.55.55.1⁄32, NGINX will rewrite the client IP to 22.22.22.22.
-If disabled, NGINX will select the IP at the end of the array.
-In the previous example, 55.55.55.1 would be selected.
-Sets NGINX directive real_ip_recursive: https://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_recursive
- |
-
-
-
-trustedAddresses
-
-
-[]Address
-
-
- |
-
-(Optional)
- TrustedAddresses specifies the addresses that are trusted to send correct client IP information.
-If a request comes from a trusted address, NGINX will rewrite the client IP information,
-and forward it to the backend in the X-Forwarded-For* and X-Real-IP headers.
-If the request does not come from a trusted address, NGINX will not rewrite the client IP information.
-TrustedAddresses only supports CIDR blocks: 192.33.21.1⁄24, fe80::1⁄64.
-To trust all addresses (not recommended for production), set to 0.0.0.0/0.
-If no addresses are provided, NGINX will not rewrite the client IP information.
-Sets NGINX directive set_real_ip_from: https://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
-This field is required if mode is set.
- |
-
-
-
-RewriteClientIPModeType
-(string
alias)
-
-
-(Appears on:
-RewriteClientIP)
-
-
-
RewriteClientIPModeType defines how NGINX Gateway Fabric will determine the client’s original IP address.
-
-
Size
(string
alias)
@@ -1828,8 +1313,8 @@ and the status of the SnippetsFilter with respect to each controller.
(Appears on:
-Telemetry,
Tracing,
+Telemetry,
Tracing)
@@ -1869,170 +1354,35 @@ Format: must have all ‘“’ escaped and must not contain any &ls
-
Telemetry
-
+TraceContext
+(string
alias)
(Appears on:
-NginxProxySpec)
+Tracing)
-
Telemetry specifies the OpenTelemetry configuration.
+TraceContext specifies how to propagate traceparent/tracestate headers.
-Field |
+Value |
Description |
-
-
-
-exporter
-
-
-TelemetryExporter
-
-
+ |
"extract" |
+TraceContextExtract uses an existing trace context from the request, so that the identifiers
+of a trace and the parent span are inherited from the incoming request.
|
-
-(Optional)
- Exporter specifies OpenTelemetry export parameters.
+ |
"ignore" |
+TraceContextIgnore skips context headers processing.
|
-
-
-
-serviceName
-
-string
-
+ |
"inject" |
+TraceContextInject adds a new context to the request, overwriting existing headers, if any.
|
-
-(Optional)
- ServiceName is the “service.name” attribute of the OpenTelemetry resource.
-Default is ‘ngf::’. If a value is provided by the user,
-then the default becomes a prefix to that value.
- |
-
-
-
-spanAttributes
-
-
-[]SpanAttribute
-
-
- |
-
-(Optional)
- SpanAttributes are custom key/value attributes that are added to each span.
- |
-
-
-
-TelemetryExporter
-
-
-
-(Appears on:
-Telemetry)
-
-
-
TelemetryExporter specifies OpenTelemetry export parameters.
-
-
-TraceContext
-(string
alias)
-
-
-(Appears on:
-Tracing)
-
-
-
TraceContext specifies how to propagate traceparent/tracestate headers.
-
-
-
-
-Value |
-Description |
-
-
-"extract" |
-TraceContextExtract uses an existing trace context from the request, so that the identifiers
-of a trace and the parent span are inherited from the incoming request.
- |
-
"ignore" |
-TraceContextIgnore skips context headers processing.
- |
-
"inject" |
-TraceContextInject adds a new context to the request, overwriting existing headers, if any.
- |
-
"propagate" |
-TraceContextPropagate updates the existing context (combines extract and inject).
+ |
"propagate" |
+TraceContextPropagate updates the existing context (combines extract and inject).
|
@@ -2308,15 +1658,20 @@ gateway.nginx.org API group.
Resource Types:
-ObservabilityPolicy
-
+NginxProxy
+
-
ObservabilityPolicy is a Direct Attached Policy. It provides a way to configure observability settings for
-the NGINX Gateway Fabric data plane. Used in conjunction with the NginxProxy CRD that is attached to the
-GatewayClass parametersRef.
+NginxProxy is a configuration object that can be referenced from a GatewayClass parametersRef
+or a Gateway infrastructure.parametersRef. It provides a way to configure data plane settings.
+If referenced from a GatewayClass, the settings apply to all Gateways attached to the GatewayClass.
+If referenced from a Gateway, the settings apply to that Gateway alone. If both a Gateway and its GatewayClass
+reference an NginxProxy, the settings are merged. Settings specified on the Gateway NginxProxy override those
+set on the GatewayClass NginxProxy.
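+
+For illustration only (the resource names below are placeholders), a Gateway might reference an NginxProxy through its infrastructure settings like this:
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1
+kind: Gateway
+metadata:
+  name: cafe
+spec:
+  gatewayClassName: nginx
+  infrastructure:
+    parametersRef:
+      group: gateway.nginx.org
+      kind: NginxProxy
+      name: proxy-config
+  listeners:
+  - name: http
+    port: 80
+    protocol: HTTP
+```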
@@ -2341,7 +1696,7 @@ gateway.nginx.org/v1alpha2
kind
string
-ObservabilityPolicy |
+NginxProxy |
@@ -2361,74 +1716,141 @@ Refer to the Kubernetes API documentation for the fields of the
|
spec
-
-ObservabilityPolicySpec
+
+NginxProxySpec
|
- Spec defines the desired state of the ObservabilityPolicy.
+Spec defines the desired state of the NginxProxy.
-tracing
+ipFamily
-
-Tracing
+
+IPFamilyType
|
(Optional)
- Tracing allows for enabling and configuring tracing.
+IPFamily specifies the IP family to be used by NGINX.
+Default is “dual”, meaning the server will use both IPv4 and IPv6.
|
-targetRefs
+telemetry
-
-[]sigs.k8s.io/gateway-api/apis/v1alpha2.LocalPolicyTargetReference
+
+Telemetry
|
- TargetRefs identifies the API object(s) to apply the policy to.
-Objects must be in the same namespace as the policy.
-Support: HTTPRoute, GRPCRoute.
-TargetRefs must be distinct. This means that the multi-part key defined by kind and name must
-be unique across all targetRef entries in the ObservabilityPolicy.
+(Optional)
+Telemetry specifies the OpenTelemetry configuration.
|
-
+ |
+
+metrics
+
+
+Metrics
+
+
+ |
+
+(Optional)
+ Metrics defines the configuration for Prometheus scraping metrics. Changing this value results in a
+re-roll of the NGINX deployment.
|
-status
+rewriteClientIP
-
-sigs.k8s.io/gateway-api/apis/v1alpha2.PolicyStatus
+
+RewriteClientIP
|
- Status defines the state of the ObservabilityPolicy.
+(Optional)
+RewriteClientIP defines configuration for rewriting the client IP to the original client’s IP.
+ |
+
+
+
+logging
+
+
+NginxLogging
+
+
+ |
+
+(Optional)
+ Logging defines logging related settings for NGINX.
+ |
+
+
+
+nginxPlus
+
+
+NginxPlus
+
+
+ |
+
+(Optional)
+ NginxPlus specifies NGINX Plus additional settings.
+ |
+
+
+
+disableHTTP2
+
+bool
+
+ |
+
+(Optional)
+ DisableHTTP2 defines if http2 should be disabled for all servers.
+If not specified, or set to false, http2 will be enabled for all servers.
+ |
+
+
+
+kubernetes
+
+
+KubernetesSpec
+
+
+ |
+
+(Optional)
+ Kubernetes contains the configuration for the NGINX Deployment and Service Kubernetes objects.
+ |
+
+
-ObservabilityPolicySpec
-
+ObservabilityPolicy
+
-(Appears on:
-ObservabilityPolicy)
-
-
-
ObservabilityPolicySpec defines the desired state of the ObservabilityPolicy.
+ObservabilityPolicy is a Direct Attached Policy. It provides a way to configure observability settings for
+the NGINX Gateway Fabric data plane. Used in conjunction with the NginxProxy CRD that is attached to the
+GatewayClass parametersRef.
@@ -2440,35 +1862,1612 @@ sigs.k8s.io/gateway-api/apis/v1alpha2.PolicyStatus
-tracing
-
-
-Tracing
-
-
+apiVersion
+string |
+
+
+gateway.nginx.org/v1alpha2
+
|
+
+
-(Optional)
- Tracing allows for enabling and configuring tracing.
+kind
+string
|
+ObservabilityPolicy |
-targetRefs
+metadata
-
-[]sigs.k8s.io/gateway-api/apis/v1alpha2.LocalPolicyTargetReference
+
+Kubernetes meta/v1.ObjectMeta
|
- TargetRefs identifies the API object(s) to apply the policy to.
-Objects must be in the same namespace as the policy.
-Support: HTTPRoute, GRPCRoute.
-TargetRefs must be distinct. This means that the multi-part key defined by kind and name must
+Refer to the Kubernetes API documentation for the fields of the
+metadata field.
+ |
+
+
+
+spec
+
+
+ObservabilityPolicySpec
+
+
+ |
+
+ Spec defines the desired state of the ObservabilityPolicy.
+
+
+
+
+
+tracing
+
+
+Tracing
+
+
+ |
+
+(Optional)
+ Tracing allows for enabling and configuring tracing.
+ |
+
+
+
+targetRefs
+
+
+[]sigs.k8s.io/gateway-api/apis/v1alpha2.LocalPolicyTargetReference
+
+
+ |
+
+ TargetRefs identifies the API object(s) to apply the policy to.
+Objects must be in the same namespace as the policy.
+Support: HTTPRoute, GRPCRoute.
+TargetRefs must be distinct. This means that the multi-part key defined by kind and name must
be unique across all targetRef entries in the ObservabilityPolicy.
|
+
+ |
+
+
+
+status
+
+
+sigs.k8s.io/gateway-api/apis/v1alpha2.PolicyStatus
+
+
+ |
+
+ Status defines the state of the ObservabilityPolicy.
+ |
+
+
+
+AgentLogLevel
+(string
alias)
+
+
+(Appears on:
+NginxLogging)
+
+
+
AgentLevel defines the log level of the NGINX agent process.
+
+
+
+
+Value |
+Description |
+
+
+"debug" |
+AgentLogLevelDebug is the debug level NGINX agent logs.
+ |
+
"error" |
+AgentLogLevelError is the error level NGINX agent logs.
+ |
+
"fatal" |
+AgentLogLevelFatal is the fatal level NGINX agent logs.
+ |
+
"info" |
+AgentLogLevelInfo is the info level NGINX agent logs.
+ |
+
"panic" |
+AgentLogLevelPanic is the panic level NGINX agent logs.
+ |
+
+
+ContainerSpec
+
+
+
+(Appears on:
+DaemonSetSpec,
+DeploymentSpec)
+
+
+
ContainerSpec defines container fields for the NGINX container.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+debug
+
+bool
+
+ |
+
+(Optional)
+ Debug enables debugging for NGINX by using the nginx-debug binary.
+ |
+
+
+
+image
+
+
+Image
+
+
+ |
+
+(Optional)
+ Image is the NGINX image to use.
+ |
+
+
+
+resources
+
+
+Kubernetes core/v1.ResourceRequirements
+
+
+ |
+
+(Optional)
+ Resources describes the compute resource requirements.
+ |
+
+
+
+lifecycle
+
+
+Kubernetes core/v1.Lifecycle
+
+
+ |
+
+(Optional)
+ Lifecycle describes actions that the management system should take in response to container lifecycle
+events. For the PostStart and PreStop lifecycle handlers, management of the container blocks
+until the action is complete, unless the container process fails, in which case the handler is aborted.
+ |
+
+
+
+volumeMounts
+
+
+[]Kubernetes core/v1.VolumeMount
+
+
+ |
+
+(Optional)
+ VolumeMounts describe the mounting of Volumes within a container.
+ |
+
+
+
+DaemonSetSpec
+
+
+
+(Appears on:
+KubernetesSpec)
+
+
+
DaemonSet is the configuration for the NGINX DaemonSet.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+pod
+
+
+PodSpec
+
+
+ |
+
+(Optional)
+ Pod defines Pod-specific fields.
+ |
+
+
+
+container
+
+
+ContainerSpec
+
+
+ |
+
+(Optional)
+ Container defines container fields for the NGINX container.
+ |
+
+
+
+DeploymentSpec
+
+
+
+(Appears on:
+KubernetesSpec)
+
+
+
Deployment is the configuration for the NGINX Deployment.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+replicas
+
+int32
+
+ |
+
+(Optional)
+ Number of desired Pods.
+ |
+
+
+
+pod
+
+
+PodSpec
+
+
+ |
+
+(Optional)
+ Pod defines Pod-specific fields.
+ |
+
+
+
+container
+
+
+ContainerSpec
+
+
+ |
+
+(Optional)
+ Container defines container fields for the NGINX container.
+ |
+
+
+
+DisableTelemetryFeature
+(string
alias)
+
+
+(Appears on:
+Telemetry)
+
+
+
DisableTelemetryFeature is a telemetry feature that can be disabled.
+
+
+
+
+Value |
+Description |
+
+
+"DisableTracing" |
+DisableTracing disables the OpenTelemetry tracing feature.
+ |
+
+
+ExternalTrafficPolicy
+(string
alias)
+
+
+(Appears on:
+ServiceSpec)
+
+
+
ExternalTrafficPolicy describes how nodes distribute service traffic they
+receive on one of the Service’s “externally-facing” addresses (NodePorts, ExternalIPs,
+and LoadBalancer IPs). Ignored for ClusterIP services.
+
+
+
+
+Value |
+Description |
+
+
+"Cluster" |
+ExternalTrafficPolicyCluster routes traffic to all endpoints.
+ |
+
"Local" |
+ExternalTrafficPolicyLocal preserves the source IP of the traffic by
+routing only to endpoints on the same node as the traffic was received on
+(dropping the traffic if there are no local endpoints).
+ |
+
+
+IPFamilyType
+(string
alias)
+
+
+(Appears on:
+NginxProxySpec)
+
+
+
IPFamilyType specifies the IP family to be used by NGINX.
+
+
+
+
+Value |
+Description |
+
+
+"dual" |
+Dual specifies that NGINX will use both IPv4 and IPv6.
+ |
+
"ipv4" |
+IPv4 specifies that NGINX will use only IPv4.
+ |
+
"ipv6" |
+IPv6 specifies that NGINX will use only IPv6.
+ |
+
+
+Image
+
+
+
+(Appears on:
+ContainerSpec)
+
+
+
Image is the NGINX image to use.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+repository
+
+string
+
+ |
+
+(Optional)
+ Repository is the image path.
+Default is ghcr.io/nginx/nginx-gateway-fabric/nginx.
+ |
+
+
+
+tag
+
+string
+
+ |
+
+(Optional)
+ Tag is the image tag to use. Default matches the tag of the control plane.
+ |
+
+
+
+pullPolicy
+
+
+PullPolicy
+
+
+ |
+
+(Optional)
+ PullPolicy describes a policy for if/when to pull a container image.
+ |
+
+
+
+KubernetesSpec
+
+
+
+(Appears on:
+NginxProxySpec)
+
+
+
KubernetesSpec contains the configuration for the NGINX Deployment and Service Kubernetes objects.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+deployment
+
+
+DeploymentSpec
+
+
+ |
+
+(Optional)
+ Deployment is the configuration for the NGINX Deployment.
+This is the default deployment option.
+ |
+
+
+
+daemonSet
+
+
+DaemonSetSpec
+
+
+ |
+
+(Optional)
+ DaemonSet is the configuration for the NGINX DaemonSet.
+ |
+
+
+
+service
+
+
+ServiceSpec
+
+
+ |
+
+(Optional)
+ Service is the configuration for the NGINX Service.
+ |
+
+
+
+Metrics
+
+
+
+(Appears on:
+NginxProxySpec)
+
+
+
Metrics defines the configuration for Prometheus scraping metrics.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+port
+
+int32
+
+ |
+
+(Optional)
+ Port where the Prometheus metrics are exposed.
+ |
+
+
+
+disable
+
+bool
+
+ |
+
+(Optional)
+ Disable serving Prometheus metrics on the listen port.
+ |
+
+
+
+NginxErrorLogLevel
+(string
alias)
+
+
+(Appears on:
+NginxLogging)
+
+
+
NginxErrorLogLevel type defines the log level of error logs for NGINX.
+
+
+
+
+Value |
+Description |
+
+
+"alert" |
+NginxLogLevelAlert is the alert level for NGINX error logs.
+ |
+
"crit" |
+NginxLogLevelCrit is the crit level for NGINX error logs.
+ |
+
"debug" |
+NginxLogLevelDebug is the debug level for NGINX error logs.
+ |
+
"emerg" |
+NginxLogLevelEmerg is the emerg level for NGINX error logs.
+ |
+
"error" |
+NginxLogLevelError is the error level for NGINX error logs.
+ |
+
"info" |
+NginxLogLevelInfo is the info level for NGINX error logs.
+ |
+
"notice" |
+NginxLogLevelNotice is the notice level for NGINX error logs.
+ |
+
"warn" |
+NginxLogLevelWarn is the warn level for NGINX error logs.
+ |
+
+
+NginxLogging
+
+
+
+(Appears on:
+NginxProxySpec)
+
+
+
NginxLogging defines logging related settings for NGINX.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+errorLevel
+
+
+NginxErrorLogLevel
+
+
+ |
+
+(Optional)
+ ErrorLevel defines the error log level. Possible log levels listed in order of increasing severity are
+debug, info, notice, warn, error, crit, alert, and emerg. Setting a certain log level will cause all messages
+of the specified and more severe log levels to be logged. For example, the log level ‘error’ will cause error,
+crit, alert, and emerg messages to be logged. https://nginx.org/en/docs/ngx_core_module.html#error_log
+ |
+
+
+
+agentLevel
+
+
+AgentLogLevel
+
+
+ |
+
+(Optional)
+ AgentLevel defines the log level of the NGINX agent process. Changing this value results in a
+re-roll of the NGINX deployment.
+ |
+
+
+
+NginxPlus
+
+
+
+(Appears on:
+NginxProxySpec)
+
+
+
NginxPlus specifies NGINX Plus additional settings. These will only be applied if NGINX Plus is being used.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+allowedAddresses
+
+
+[]NginxPlusAllowAddress
+
+
+ |
+
+(Optional)
+ AllowedAddresses specifies IPAddresses or CIDR blocks to the allow list for accessing the NGINX Plus API.
+ |
+
+
+
+NginxPlusAllowAddress
+
+
+
+(Appears on:
+NginxPlus)
+
+
+
NginxPlusAllowAddress specifies the address type and value for an NginxPlus allow address.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+type
+
+
+NginxPlusAllowAddressType
+
+
+ |
+
+ Type specifies the type of address.
+ |
+
+
+
+value
+
+string
+
+ |
+
+ Value specifies the address value.
+ |
+
+
+
+NginxPlusAllowAddressType
+(string
alias)
+
+
+(Appears on:
+NginxPlusAllowAddress)
+
+
+
NginxPlusAllowAddressType specifies the type of address.
+
+
+
+
+Value |
+Description |
+
+
+"CIDR" |
+NginxPlusAllowCIDRAddressType specifies that the address is a CIDR block.
+ |
+
"IPAddress" |
+NginxPlusAllowIPAddressType specifies that the address is an IP address.
+ |
+
+
+NginxProxySpec
+
+
+
+(Appears on:
+NginxProxy)
+
+
+
NginxProxySpec defines the desired state of the NginxProxy.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+ipFamily
+
+
+IPFamilyType
+
+
+ |
+
+(Optional)
+ IPFamily specifies the IP family to be used by NGINX.
+Default is “dual”, meaning the server will use both IPv4 and IPv6.
+ |
+
+
+
+telemetry
+
+
+Telemetry
+
+
+ |
+
+(Optional)
+ Telemetry specifies the OpenTelemetry configuration.
+ |
+
+
+
+metrics
+
+
+Metrics
+
+
+ |
+
+(Optional)
+ Metrics defines the configuration for Prometheus scraping metrics. Changing this value results in a
+re-roll of the NGINX deployment.
+ |
+
+
+
+rewriteClientIP
+
+
+RewriteClientIP
+
+
+ |
+
+(Optional)
+ RewriteClientIP defines configuration for rewriting the client IP to the original client’s IP.
+ |
+
+
+
+logging
+
+
+NginxLogging
+
+
+ |
+
+(Optional)
+ Logging defines logging related settings for NGINX.
+ |
+
+
+
+nginxPlus
+
+
+NginxPlus
+
+
+ |
+
+(Optional)
+ NginxPlus specifies NGINX Plus additional settings.
+ |
+
+
+
+disableHTTP2
+
+bool
+
+ |
+
+(Optional)
+ DisableHTTP2 defines if http2 should be disabled for all servers.
+If not specified, or set to false, http2 will be enabled for all servers.
+ |
+
+
+
+kubernetes
+
+
+KubernetesSpec
+
+
+ |
+
+(Optional)
+ Kubernetes contains the configuration for the NGINX Deployment and Service Kubernetes objects.
+ |
+
+
+
+NodePort
+
+
+
+(Appears on:
+ServiceSpec)
+
+
+
NodePort creates a port on each node on which the NGINX data plane service is exposed. The NodePort MUST
+map to a Gateway listener port, otherwise it will be ignored. If not specified, Kubernetes allocates a NodePort
+automatically if required. The default NodePort range enforced by Kubernetes is 30000-32767.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+port
+
+int32
+
+ |
+
+ Port is the NodePort to expose.
+Must be between 1 and 65535.
+ |
+
+
+
+listenerPort
+
+int32
+
+ |
+
+ ListenerPort is the Gateway listener port that this NodePort maps to.
+Must be between 1 and 65535.
+ |
+
+
+
+ObservabilityPolicySpec
+
+
+
+(Appears on:
+ObservabilityPolicy)
+
+
+
ObservabilityPolicySpec defines the desired state of the ObservabilityPolicy.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+tracing
+
+
+Tracing
+
+
+ |
+
+(Optional)
+ Tracing allows for enabling and configuring tracing.
+ |
+
+
+
+targetRefs
+
+
+[]sigs.k8s.io/gateway-api/apis/v1alpha2.LocalPolicyTargetReference
+
+
+ |
+
+ TargetRefs identifies the API object(s) to apply the policy to.
+Objects must be in the same namespace as the policy.
+Support: HTTPRoute, GRPCRoute.
+TargetRefs must be distinct. This means that the multi-part key defined by kind and name must
+be unique across all targetRef entries in the ObservabilityPolicy.
+ |
+
+
+
+PodSpec
+
+
+
+(Appears on:
+DaemonSetSpec,
+DeploymentSpec)
+
+
+
PodSpec defines Pod-specific fields.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+terminationGracePeriodSeconds
+
+int64
+
+ |
+
+(Optional)
+ TerminationGracePeriodSeconds is the optional duration in seconds the pod needs to terminate gracefully.
+Value must be non-negative integer. The value zero indicates stop immediately via
+the kill signal (no opportunity to shut down).
+If this value is nil, the default grace period will be used instead.
+The grace period is the duration in seconds after the processes running in the pod are sent
+a termination signal and the time when the processes are forcibly halted with a kill signal.
+Set this value longer than the expected cleanup time for your process.
+Defaults to 30 seconds.
+ |
+
+
+
+affinity
+
+
+Kubernetes core/v1.Affinity
+
+
+ |
+
+(Optional)
+ Affinity is the pod’s scheduling constraints.
+ |
+
+
+
+nodeSelector
+
+map[string]string
+
+ |
+
+(Optional)
+ NodeSelector is a selector which must be true for the pod to fit on a node.
+Selector which must match a node’s labels for the pod to be scheduled on that node.
+ |
+
+
+
+tolerations
+
+
+[]Kubernetes core/v1.Toleration
+
+
+ |
+
+(Optional)
+ Tolerations allow the scheduler to schedule Pods with matching taints.
+ |
+
+
+
+volumes
+
+
+[]Kubernetes core/v1.Volume
+
+
+ |
+
+(Optional)
+ Volumes represents named volumes in a pod that may be accessed by any container in the pod.
+ |
+
+
+
+topologySpreadConstraints
+
+
+[]Kubernetes core/v1.TopologySpreadConstraint
+
+
+ |
+
+(Optional)
+ TopologySpreadConstraints describes how a group of Pods ought to spread across topology
+domains. Scheduler will schedule Pods in a way which abides by the constraints.
+All topologySpreadConstraints are ANDed.
+ |
+
+
+
+PullPolicy
+(string
alias)
+
+
+(Appears on:
+Image)
+
+
+
PullPolicy describes a policy for if/when to pull a container image.
+
+
+
+
+Value |
+Description |
+
+
+"Always" |
+PullAlways means that kubelet always attempts to pull the latest image. Container will fail if the pull fails.
+ |
+
"IfNotPresent" |
+PullIfNotPresent means that kubelet pulls if the image isn’t present on disk. Container will fail if the image
+isn’t present and the pull fails.
+ |
+
"Never" |
+PullNever means that kubelet never pulls an image, but only uses a local image. Container will fail if the
+image isn’t present.
+ |
+
+
+RewriteClientIP
+
+
+
+(Appears on:
+NginxProxySpec)
+
+
+
RewriteClientIP specifies the configuration for rewriting the client’s IP address.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+mode
+
+
+RewriteClientIPModeType
+
+
+ |
+
+(Optional)
+ Mode defines how NGINX will rewrite the client’s IP address.
+There are two possible modes:
+- ProxyProtocol: NGINX will rewrite the client’s IP using the PROXY protocol header.
+- XForwardedFor: NGINX will rewrite the client’s IP using the X-Forwarded-For header.
+Sets NGINX directive real_ip_header: https://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
+ |
+
+
+
+setIPRecursively
+
+bool
+
+ |
+
+(Optional)
+ SetIPRecursively configures whether recursive search is used when selecting the client’s address from
+the X-Forwarded-For header. It is used in conjunction with TrustedAddresses.
+If enabled, NGINX will recurse on the values in the X-Forwarded-For header from the end of the array
+to the start of the array and select the first untrusted IP.
+For example, if X-Forwarded-For is [11.11.11.11, 22.22.22.22, 55.55.55.1],
+and TrustedAddresses is set to 55.55.55.1/32, NGINX will rewrite the client IP to 22.22.22.22.
+If disabled, NGINX will select the IP at the end of the array.
+In the previous example, 55.55.55.1 would be selected.
+Sets NGINX directive real_ip_recursive: https://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_recursive
+ |
+
+
+
+trustedAddresses
+
+
+[]RewriteClientIPAddress
+
+
+ |
+
+(Optional)
+ TrustedAddresses specifies the addresses that are trusted to send correct client IP information.
+If a request comes from a trusted address, NGINX will rewrite the client IP information,
+and forward it to the backend in the X-Forwarded-For* and X-Real-IP headers.
+If the request does not come from a trusted address, NGINX will not rewrite the client IP information.
+To trust all addresses (not recommended for production), set to 0.0.0.0/0.
+If no addresses are provided, NGINX will not rewrite the client IP information.
+Sets NGINX directive set_real_ip_from: https://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
+This field is required if mode is set.
+ |
+
+
+
+RewriteClientIPAddress
+
+
+
+(Appears on:
+RewriteClientIP)
+
+
+
RewriteClientIPAddress specifies the address type and value for a RewriteClientIP address.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+type
+
+
+RewriteClientIPAddressType
+
+
+ |
+
+ Type specifies the type of address.
+ |
+
+
+
+value
+
+string
+
+ |
+
+ Value specifies the address value.
+ |
+
+
+
+RewriteClientIPAddressType
+(string
alias)
+
+
+(Appears on:
+RewriteClientIPAddress)
+
+
+
RewriteClientIPAddressType specifies the type of address.
+
+
+
+
+Value |
+Description |
+
+
+"CIDR" |
+RewriteClientIPCIDRAddressType specifies that the address is a CIDR block.
+ |
+
"Hostname" |
+RewriteClientIPHostnameAddressType specifies that the address is a Hostname.
+ |
+
"IPAddress" |
+RewriteClientIPIPAddressType specifies that the address is an IP address.
+ |
+
+
+RewriteClientIPModeType
+(string
alias)
+
+
+(Appears on:
+RewriteClientIP)
+
+
+
RewriteClientIPModeType defines how NGINX Gateway Fabric will determine the client’s original IP address.
+
+
+ServiceSpec
+
+
+
+(Appears on:
+KubernetesSpec)
+
+
+
ServiceSpec is the configuration for the NGINX Service.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+type
+
+
+ServiceType
+
+
+ |
+
+(Optional)
+ ServiceType describes ingress method for the Service.
+ |
+
+
+
+externalTrafficPolicy
+
+
+ExternalTrafficPolicy
+
+
+ |
+
+(Optional)
+ ExternalTrafficPolicy describes how nodes distribute service traffic they
+receive on one of the Service’s “externally-facing” addresses (NodePorts, ExternalIPs,
+and LoadBalancer IPs).
+ |
+
+
+
+loadBalancerIP
+
+string
+
+ |
+
+(Optional)
+ LoadBalancerIP is a static IP address for the load balancer. Requires service type to be LoadBalancer.
+ |
+
+
+
+loadBalancerClass
+
+string
+
+ |
+
+(Optional)
+ LoadBalancerClass is the class of the load balancer implementation this Service belongs to.
+Requires service type to be LoadBalancer.
+ |
+
+
+
+loadBalancerSourceRanges
+
+[]string
+
+ |
+
+(Optional)
+ LoadBalancerSourceRanges are the IP ranges (CIDR) that are allowed to access the load balancer.
+Requires service type to be LoadBalancer.
+ |
+
+
+
+nodePorts
+
+
+[]NodePort
+
+
+ |
+
+(Optional)
+ NodePorts are the list of NodePorts to expose on the NGINX data plane service.
+Each NodePort MUST map to a Gateway listener port, otherwise it will be ignored.
+The default NodePort range enforced by Kubernetes is 30000-32767.
+ |
+
+
+
+ServiceType
+(string
alias)
+
+
+(Appears on:
+ServiceSpec)
+
+
+
ServiceType describes ingress method for the Service.
+
+
+
+
+Value |
+Description |
+
+
+"ClusterIP" |
+ServiceTypeClusterIP means a Service will only be accessible inside the
+cluster, via the cluster IP.
+ |
+
"LoadBalancer" |
+ServiceTypeLoadBalancer means a Service will be exposed via an
+external load balancer (if the cloud provider supports it), in addition
+to ‘NodePort’ type.
+ |
+
"NodePort" |
+ServiceTypeNodePort means a Service will be exposed on one port of
+every node, in addition to ‘ClusterIP’ type.
+ |
+
+
+Telemetry
+
+
+
+(Appears on:
+NginxProxySpec)
+
+
+
Telemetry specifies the OpenTelemetry configuration.
+
+
+
+
+Field |
+Description |
+
+
+
+
+
+disabledFeatures
+
+
+[]DisableTelemetryFeature
+
+
+ |
+
+(Optional)
+ DisabledFeatures specifies OpenTelemetry features to be disabled.
+ |
+
+
+
+exporter
+
+
+TelemetryExporter
+
+
+ |
+
+(Optional)
+ Exporter specifies OpenTelemetry export parameters.
+ |
+
+
+
+serviceName
+
+string
+
+ |
+
+(Optional)
+ ServiceName is the “service.name” attribute of the OpenTelemetry resource.
+Default is ‘ngf:<gateway-namespace>:<gateway-name>’. If a value is provided by the user,
+then the default becomes a prefix to that value.
+ |
+
+
+
+spanAttributes
+
+
+[]SpanAttribute
+
+
+ |
+
+(Optional)
+ SpanAttributes are custom key/value attributes that are added to each span.
+ |
+
+
+
+TelemetryExporter
+
+
+
+(Appears on:
+Telemetry)
+
+
+
TelemetryExporter specifies OpenTelemetry export parameters.
+
+
TraceContext
diff --git a/content/ngf/reference/cli-help.md b/content/ngf/reference/cli-help.md
index be11a41b5..a526ce149 100644
--- a/content/ngf/reference/cli-help.md
+++ b/content/ngf/reference/cli-help.md
@@ -13,14 +13,14 @@ Learn about the commands available for the executable file of the NGINX Gateway
---
-## Static mode
+## Controller
-This command configures NGINX for a single NGINX Gateway Fabric resource.
+This command runs the NGINX Gateway Fabric control plane.
*Usage*:
```shell
- gateway static-mode [flags]
+ gateway controller [flags]
```
---
@@ -33,7 +33,6 @@ This command configures NGINX for a single NGINX Gateway Fabric resource.
|-------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| _gateway-ctlr-name_ | _string_ | The name of the Gateway controller. The controller name must be in the form: `DOMAIN/PATH`. The controller's domain is `gateway.nginx.org`. |
| _gatewayclass_ | _string_ | The name of the GatewayClass resource. Every NGINX Gateway Fabric must have a unique corresponding GatewayClass resource. |
-| _gateway_ | _string_ | The namespaced name of the Gateway resource to use. Must be of the form: `NAMESPACE/NAME`. If not specified, the control plane will process all Gateways for the configured GatewayClass. Among them, it will choose the oldest resource by creation timestamp. If the timestamps are equal, it will choose the resource that appears first in alphabetical order by {namespace}/{name}. |
| _nginx-plus_ | _bool_ | Enable support for NGINX Plus. |
| _gateway-api-experimental-features_ | _bool_ | Enable the experimental features of Gateway API which are supported by NGINX Gateway Fabric. Requires the Gateway APIs installed from the experimental channel. |
| _config_ | _string_ | The name of the NginxGateway resource to be used for this controller's dynamic configuration. Lives in the same namespace as the controller. |
@@ -41,12 +40,12 @@ This command configures NGINX for a single NGINX Gateway Fabric resource.
| _metrics-disable_ | _bool_ | Disable exposing metrics in the Prometheus format (Default: `false`). |
| _metrics-listen-port_ | _int_ | Sets the port where the Prometheus metrics are exposed. An integer between 1024 - 65535 (Default: `9113`) |
| _metrics-secure-serving_ | _bool_ | Configures if the metrics endpoint should be secured using https. Note that this endpoint will be secured with a self-signed certificate (Default `false`). |
-| _update-gatewayclass-status_ | _bool_ | Update the status of the GatewayClass resource (Default: `true`). |
| _health-disable_ | _bool_ | Disable running the health probe server (Default: `false`). |
| _health-port_ | _int_ | Set the port where the health probe server is exposed. An integer between 1024 - 65535 (Default: `8081`). |
| _leader-election-disable_ | _bool_ | Disable leader election, which is used to avoid multiple replicas of the NGINX Gateway Fabric reporting the status of the Gateway API resources. If disabled, all replicas of NGINX Gateway Fabric will update the statuses of the Gateway API resources (Default: `false`). |
| _leader-election-lock-name_ | _string_ | The name of the leader election lock. A lease object with this name will be created in the same namespace as the controller (Default: `"nginx-gateway-leader-election-lock"`). |
| _product-telemetry-disable_ | _bool_ | Disable the collection of product telemetry (Default: `false`). |
+| _nginx-docker-secret_ | _list_ | The name of the NGINX docker registry Secret(s). Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway). |
| _usage-report-secret_ | _string_ | The name of the Secret containing the JWT for NGINX Plus usage reporting. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway) |
| _usage-report-endpoint_ | _string_ | The endpoint of the NGINX Plus usage reporting server. |
| _usage-report-resolver_ | _string_ | The nameserver used to resolve the NGINX Plus usage reporting endpoint. Used with NGINX Instance Manager. |
@@ -54,6 +53,7 @@ This command configures NGINX for a single NGINX Gateway Fabric resource.
| _usage-report-ca-secret_ | _string_ | The name of the Secret containing the NGINX Instance Manager CA certificate. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway) |
| _usage-report-client-ssl-secret_ | _string_ | The name of the Secret containing the client certificate and key for authenticating with NGINX Instance Manager. Must exist in the same namespace that the NGINX Gateway Fabric control plane is running in (default namespace: nginx-gateway) |
| _snippets-filters_ | _bool_ | Enable SnippetsFilters feature. SnippetsFilters allow inserting NGINX configuration into the generated NGINX config for HTTPRoute and GRPCRoute resources. |
+| _nginx-scc_ | _string_ | The name of the SecurityContextConstraints to be used with the NGINX data plane Pods. Only applicable in OpenShift. |
{{% /bootstrap-table %}}
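+
+As an illustrative sketch only (the flag values below are examples, not required defaults), a controller invocation might look like:
+
+```shell
+gateway controller \
+  --gateway-ctlr-name=gateway.nginx.org/nginx-gateway-controller \
+  --gatewayclass=nginx \
+  --config=nginx-gateway-config
+```
+
+In practice these flags are usually set as container arguments in the NGINX Gateway Fabric Deployment manifest rather than run by hand.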
diff --git a/content/ngf/support.md b/content/ngf/support.md
index 7ec9f3f35..29f671a44 100644
--- a/content/ngf/support.md
+++ b/content/ngf/support.md
@@ -1,18 +1,16 @@
---
title: Support
-weight: 600
+weight: 800
toc: true
-type: reference
-product: NGF
-docs: DOCS-1411
+nd-content-type: reference
+nd-product: NGF
+nd-docs: DOCS-1411
---
F5 NGINX Gateway Fabric adheres to the support policy detailed in the following knowledge base article: [K000140156](https://my.f5.com/manage/s/article/K000140156).
After opening a support ticket, F5 staff will request additional information to better understand the problem.
----
-
## Kubernetes support plugin
The [nginx-supportpkg-for-k8s](https://github.com/nginx/nginx-supportpkg-for-k8s) plugin collects the information needed by F5 Technical Support to assist with troubleshooting your issue.
@@ -32,8 +30,6 @@ This plugin **does not** collect secrets or coredumps.
Visit the [project’s GitHub repository](https://github.com/nginx/nginx-supportpkg-for-k8s) for further details.
----
-
## Support channels
- If you experience issues with NGINX Gateway Fabric, please [open an issue](https://github.com/nginx/nginx-gateway-fabric/issues/new?assignees=&labels=&projects=&template=bug_report.md&title=) in GitHub.
diff --git a/content/ngf/traffic-management/_index.md b/content/ngf/traffic-management/_index.md
new file mode 100644
index 000000000..d87b77402
--- /dev/null
+++ b/content/ngf/traffic-management/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Traffic management"
+url: /nginx-gateway-fabric/traffic-management/
+weight: 400
+---
diff --git a/content/ngf/how-to/traffic-management/advanced-routing.md b/content/ngf/traffic-management/advanced-routing.md
similarity index 78%
rename from content/ngf/how-to/traffic-management/advanced-routing.md
rename to content/ngf/traffic-management/advanced-routing.md
index 062d1bf9e..bfbc54bef 100644
--- a/content/ngf/how-to/traffic-management/advanced-routing.md
+++ b/content/ngf/traffic-management/advanced-routing.md
@@ -2,18 +2,16 @@
title: Application routes using HTTP matching conditions
weight: 200
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1422
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1422
---
Learn how to deploy multiple applications and HTTPRoutes with request conditions such as paths, methods, headers, and query parameters
----
-
## Overview
-In this guide we will configure advanced routing rules for multiple applications. These rules will showcase request matching by path, headers, query parameters, and method. For an introduction to exposing your application, we recommend that you follow the [basic guide]({{< ref "/ngf/how-to/traffic-management/routing-traffic-to-your-app.md" >}}) first.
+In this guide we will configure advanced routing rules for multiple applications. These rules will showcase request matching by path, headers, query parameters, and method. For an introduction to exposing your application, we recommend that you follow the [basic guide]({{< ref "/ngf/traffic-management/basic-routing.md" >}}) first.
The following image shows the traffic flow that we will be creating with these rules.
@@ -21,34 +19,20 @@ The following image shows the traffic flow that we will be creating with these r
The goal is to create a set of rules that will result in client requests being sent to specific backends based on the request attributes. In this diagram, we have two versions of the `coffee` service. Traffic for v1 needs to be directed to the old application, while traffic for v2 needs to be directed towards the new application. We also have two `tea` services, one that handles GET operations and one that handles POST operations. Both the `tea` and `coffee` applications share the same Gateway.
----
-
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
-
-{{< note >}} In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for. {{< /note >}}
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
## Coffee applications
### Deploy the Coffee applications
-Begin by deploying the `coffee-v1` and `coffee-v2` applications:
+Begin by deploying the `coffee-v1`, `coffee-v2`, and `coffee-v3` applications:
```shell
kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/advanced-routing/coffee.yaml
```
----
-
### Deploy the Gateway API Resources for the Coffee applications
The [gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) resource is typically deployed by the [cluster operator](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/#roles-and-personas_1). To deploy the gateway:
@@ -69,6 +53,20 @@ EOF
```
This gateway defines a single listener on port 80. Since no hostname is specified, this listener matches on all hostnames.
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+Save the public IP address and port of the NGINX Service into shell variables:
+
+```text
+GW_IP=XXX.YYY.ZZZ.III
+GW_PORT=<port number>
+```
+
+{{< note >}}
+
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
+
+{{< /note >}}
The [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) is typically deployed by the [application developer](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/#roles-and-personas_1). To deploy the `coffee` HTTPRoute:
@@ -108,6 +106,24 @@ spec:
backendRefs:
- name: coffee-v2-svc
port: 80
+ - matches:
+ - path:
+ type: PathPrefix
+ value: /coffee
+ headers:
+ - name: headerRegex
+ type: RegularExpression
+ value: "header-[a-z]{1}"
+ - path:
+ type: PathPrefix
+ value: /coffee
+ queryParams:
+ - name: queryRegex
+ type: RegularExpression
+ value: "query-[a-z]{1}"
+ backendRefs:
+ - name: coffee-v3-svc
+ port: 80
EOF
```
@@ -116,18 +132,25 @@ This HTTPRoute has a few important properties:
- The `parentRefs` references the gateway resource that we created, and specifically defines the `http` listener to attach to, via the `sectionName` field.
- `cafe.example.com` is the hostname that is matched for all requests to the backends defined in this HTTPRoute.
- The first rule defines that all requests with the path prefix `/coffee` and no other matching conditions are sent to the `coffee-v1` Service.
-- The second rule defines two matching conditions. If _either_ of these conditions match, requests are forwarded to the `coffee-v2` Service:
+- The second rule defines two matching conditions. If *either* of these conditions matches, requests are forwarded to the `coffee-v2` Service:
- - Request with the path prefix `/coffee` and header `version=v2`
- - Request with the path prefix `/coffee` and the query parameter `TEST=v2`
+ - Request with the path prefix `/coffee` and header `version=v2`.
+ - Request with the path prefix `/coffee` and the query parameter `TEST=v2`.
- If you want both conditions to be required, you can define headers and queryParams in the same match object.
+  {{< note >}} By default, the match type is `Exact` for both headers and query parameters. {{< /note >}}
----
+- The third rule defines two matching conditions. If *either* of these conditions matches, requests are forwarded to the `coffee-v3` Service:
+
+  - Request with the path prefix `/coffee` and the header `headerRegex=header-[a-z]{1}`.
+  - Request with the path prefix `/coffee` and the query parameter `queryRegex=query-[a-z]{1}`.
+
+  {{< note >}} The match type used here is `RegularExpression`. A request matches the rule if the header or query parameter value matches the specified regular expression. {{< /note >}}
+
+ If you want both conditions to be required, you can define headers and queryParams in the same match object.
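+
+  For example, a single match object that requires both the `version=v2` header and the `TEST=v2` query parameter from the second rule could be sketched like this (not applied in this guide):
+
+  ```yaml
+  - matches:
+    - path:
+        type: PathPrefix
+        value: /coffee
+      headers:
+      - name: version
+        value: v2
+      queryParams:
+      - name: TEST
+        value: v2
+    backendRefs:
+    - name: coffee-v2-svc
+      port: 80
+  ```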
### Send traffic to Coffee
-Using the external IP address and port for NGINX Gateway Fabric, we can send traffic to our coffee applications.
+Using the external IP address and port for the NGINX Service, we can send traffic to our coffee applications.
{{< note >}} If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve. {{< /note >}}
@@ -161,22 +184,35 @@ Server address: 10.244.0.9:8080
Server name: coffee-v2-68bd55f798-s9z5q
```
----
+If we want our request to be routed to `coffee-v3`, then we need to meet the defined conditions. We can include a header matching the regular expression:
+
+```shell
+curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/coffee -H "headerRegex:header-a"
+```
+
+or include a query parameter matching the regular expression:
+
+```shell
+curl --resolve cafe.example.com:$GW_PORT:$GW_IP "http://cafe.example.com:$GW_PORT/coffee?queryRegex=query-a"
+```
+
+Either request should result in a response from the `coffee-v3` Pod.
+
+```text
+Server address: 10.244.0.104:8080
+Server name: coffee-v3-66d58645f4-6zsl2
+```
## Tea applications
Let's deploy a different set of applications now called `tea` and `tea-post`. These applications will have their own set of rules, but will still attach to the same gateway listener as the `coffee` apps.
----
-
### Deploy the Tea applications
```shell
kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/advanced-routing/tea.yaml
```
----
-
### Deploy the HTTPRoute for the Tea services
We are reusing the previous gateway for these applications, so all we need to create is the HTTPRoute.
@@ -219,11 +255,9 @@ The properties of this HTTPRoute include:
- The first rule defines that a POST request to the `/tea` path is routed to the `tea-post` Service.
- The second rule defines that a GET request to the `/tea` path is routed to the `tea` Service.
----
-
### Send traffic to Tea
-Using the external IP address and port for NGINX Gateway Fabric, we can send traffic to our tea applications.
+Using the external IP address and port for the NGINX Service, we can send traffic to our tea applications.
{{< note >}} If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve. {{< /note >}}
@@ -251,13 +285,11 @@ Server name: tea-post-b59b8596b-g586r
This request should receive a response from the `tea-post` pod. Any other type of method, such as PATCH, will result in a `404 Not Found` response.
----
-
## Troubleshooting
If you have any issues while sending traffic, try the following to debug your configuration and setup:
-- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Gateway Fabric service. Refer to the [Installation]({{< ref "/ngf/installation/" >}}) guides for more information.
+- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Service. Refer to the [Installation]({{< ref "/ngf/install/" >}}) guides for more information.
- Check the status of the Gateway:
@@ -355,8 +387,6 @@ If you have any issues while sending traffic, try the following to debug your co
Check for any error messages in the conditions.
----
-
## See also
To learn more about the Gateway API and the resources we created in this guide, check out the following Kubernetes documentation resources:
diff --git a/content/ngf/how-to/traffic-management/routing-traffic-to-your-app.md b/content/ngf/traffic-management/basic-routing.md
similarity index 82%
rename from content/ngf/how-to/traffic-management/routing-traffic-to-your-app.md
rename to content/ngf/traffic-management/basic-routing.md
index 8d405485d..41a81cc81 100644
--- a/content/ngf/how-to/traffic-management/routing-traffic-to-your-app.md
+++ b/content/ngf/traffic-management/basic-routing.md
@@ -2,32 +2,20 @@
title: Routing traffic to applications
weight: 100
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1426
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1426
---
Learn how to route external traffic to your Kubernetes applications using NGINX Gateway Fabric.
----
-
## Overview
You can route traffic to your Kubernetes applications using the Gateway API and NGINX Gateway Fabric. Whether you're managing a web application or a REST backend API, you can use NGINX Gateway Fabric to expose your application outside the cluster.
----
-
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
## Example application
@@ -106,19 +94,19 @@ service/coffee ClusterIP 198.51.100.1 80/TCP 77s
## Application architecture with NGINX Gateway Fabric
-To route traffic to the **coffee** application, we will create a gateway and HTTPRoute. The following diagram shows the configuration we are creating in the next step:
+To route traffic to the **coffee** application, we will create a Gateway and HTTPRoute. The following diagram shows the configuration we are creating in the next step:
{{
}}
-We need a gateway to create an entry point for HTTP traffic coming into the cluster. The **cafe** gateway we are going to create will open an entry point to the cluster on port 80 for HTTP traffic.
+We need a Gateway to create an entry point for HTTP traffic coming into the cluster. The **cafe** Gateway we are going to create will open an entry point to the cluster on port 80 for HTTP traffic.
-To route HTTP traffic from the gateway to the **coffee** service, we need to create an HTTPRoute named **coffee** and attach it to the gateway. This HTTPRoute will have a single routing rule that routes all traffic to the hostname "cafe.example.com" from the gateway to the **coffee** service.
+To route HTTP traffic from the Gateway to the **coffee** service, we need to create an HTTPRoute named **coffee** and attach it to the Gateway. This HTTPRoute will have a single routing rule that routes all traffic to the hostname "cafe.example.com" from the Gateway to the **coffee** service.
-Once NGINX Gateway Fabric processes the **cafe** gateway and **coffee** HTTPRoute, it will configure its data plane (NGINX) to route all HTTP requests sent to "cafe.example.com" to the pods that the **coffee** service targets:
+Once NGINX Gateway Fabric processes the **cafe** Gateway and **coffee** HTTPRoute, it will configure a data plane (NGINX) to route all HTTP requests sent to "cafe.example.com" to the pods that the **coffee** service targets:
{{
}}
-The **coffee** service is omitted from the diagram above because the NGINX Gateway Fabric routes directly to the pods that the **coffee** service targets.
+The **coffee** service is omitted from the diagram above because the NGINX Pod routes directly to the pods that the **coffee** service targets.
{{< note >}}In the diagrams above, all resources that are the responsibility of the cluster operator are shown in blue. The orange resources are the responsibility of the application developers.
@@ -145,11 +133,26 @@ spec:
EOF
```
-This gateway is associated with the NGINX Gateway Fabric through the **gatewayClassName** field. The default installation of NGINX Gateway Fabric creates a GatewayClass with the name **nginx**. NGINX Gateway Fabric will only configure gateways with a **gatewayClassName** of **nginx** unless you change the name via the `--gatewayclass` [command-line flag]({{< ref "/ngf/reference/cli-help.md#static-mode" >}}).
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+Save the public IP address and port of the NGINX Service into shell variables:
+
+ ```text
+ GW_IP=XXX.YYY.ZZZ.III
+ GW_PORT=<port number>
+ ```
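+
+For example, if the Gateway is named **cafe** and the provisioned Service is of type LoadBalancer, you could look the values up with something like the following. The Service name `cafe-nginx` is an assumption; run `kubectl get services` to confirm the actual name in your cluster:
+
+```shell
+# List Services to find the one provisioned for the Gateway
+kubectl get services
+# Read the external IP from the LoadBalancer status; port 80 matches the Gateway listener
+GW_IP=$(kubectl get service cafe-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+GW_PORT=80
+```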
+
+{{< note >}}
+
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
+
+{{< /note >}}
+
+This Gateway is associated with NGINX Gateway Fabric through the **gatewayClassName** field. The default installation of NGINX Gateway Fabric creates a GatewayClass with the name **nginx**. NGINX Gateway Fabric will only configure Gateways with a **gatewayClassName** of **nginx** unless you change the name via the `--gatewayclass` [command-line flag]({{< ref "/ngf/reference/cli-help.md#controller" >}}).
-We specify a [listener](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Listener) on the gateway to open an entry point on the cluster. In this case, since the coffee application accepts HTTP requests, we create an HTTP listener, named **http**, that listens on port 80.
+We specify a [listener](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Listener) on the Gateway to open an entry point on the cluster. In this case, since the coffee application accepts HTTP requests, we create an HTTP listener, named **http**, that listens on port 80.
-By default, gateways only allow routes (such as HTTPRoutes) to attach if they are in the same namespace as the gateway. If you want to change this behavior, you can set
+By default, Gateways only allow routes (such as HTTPRoutes) to attach if they are in the same namespace as the Gateway. If you want to change this behavior, you can set
the [**allowedRoutes**](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.AllowedRoutes) field.
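+
+For example, a listener that admits routes from any namespace might be sketched as follows (not required for this guide):
+
+```yaml
+listeners:
+- name: http
+  port: 80
+  protocol: HTTP
+  allowedRoutes:
+    namespaces:
+      from: All
+```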
Next you will create the HTTPRoute by copying and pasting the following into your terminal:
@@ -176,7 +179,7 @@ spec:
EOF
```
-To attach the **coffee** HTTPRoute to the **cafe** gateway, we specify the gateway name in the [**parentRefs**](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.CommonRouteSpec) field. The attachment will succeed if the hostnames and protocol in the HTTPRoute are allowed by at least one of the gateway's listeners.
+To attach the **coffee** HTTPRoute to the **cafe** Gateway, we specify the Gateway name in the [**parentRefs**](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.CommonRouteSpec) field. The attachment will succeed if the hostnames and protocol in the HTTPRoute are allowed by at least one of the Gateway's listeners.
The [**hostnames**](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.HTTPRouteSpec) field allows you to list the hostnames that the HTTPRoute matches. In this case, incoming requests handled by the **http** listener with the HTTP host header "cafe.example.com" will match this HTTPRoute and will be routed according to the rules in the spec.
@@ -186,9 +189,9 @@ The [**rules**](https://gateway-api.sigs.k8s.io/references/spec/#gateway.network
## Test the configuration
-To test the configuration, we will send a request to the public IP and port of NGINX Gateway Fabric that you saved in the [Before you begin](#before-you-begin) section and verify that the response comes from one of the **coffee** pods.
+To test the configuration, we will send a request to the public IP and port of the NGINX Service that you saved after creating the Gateway resource, and verify that the response comes from one of the **coffee** pods.
-{{< note >}}Your clients should be able to resolve the domain name "cafe.example.com" to the public IP of the NGINX Gateway Fabric. In this guide we will simulate that using curl's `--resolve` option. {{< /note >}}
+{{< note >}}Your clients should be able to resolve the domain name "cafe.example.com" to the public IP of the NGINX Service. In this guide we will simulate that using curl's `--resolve` option. {{< /note >}}
First, let's send a request to the path "/":
@@ -248,7 +251,7 @@ You should receive a 404 Not Found error:
If you have any issues while testing the configuration, try the following to debug your configuration and setup:
-- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Gateway Fabric Service. Refer to the [Installation]({{< ref "/ngf/installation/" >}}) guides for more information.
+- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Service. Refer to the [Installation]({{< ref "/ngf/install/" >}}) guides for more information.
- Check the status of the gateway:
@@ -345,7 +348,7 @@ If you have any issues while testing the configuration, try the following to deb
- Check the generated nginx config:
```shell
- kubectl exec -it -n nginx-gateway -c nginx -- nginx -T
+   kubectl exec -it <nginx-pod> -n <namespace> -- nginx -T
```
The config should contain a server block with the server name "cafe.example.com" that listens on port 80. This server block should have a single location `/` that proxy passes to the coffee upstream:
diff --git a/content/ngf/how-to/traffic-management/client-settings.md b/content/ngf/traffic-management/client-settings.md
similarity index 81%
rename from content/ngf/how-to/traffic-management/client-settings.md
rename to content/ngf/traffic-management/client-settings.md
index 5ccc454cd..61de5de79 100644
--- a/content/ngf/how-to/traffic-management/client-settings.md
+++ b/content/ngf/traffic-management/client-settings.md
@@ -1,16 +1,14 @@
---
title: Client Settings Policy API
-weight: 800
-type: how-to
-product: NGF
toc: true
-docs: DOCS-000
+weight: 800
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
---
Learn how to use the `ClientSettingsPolicy` API.
----
-
## Overview
The `ClientSettingsPolicy` API allows Cluster Operators and Application Developers to configure the connection behavior between the client and NGINX.
@@ -34,69 +32,72 @@ This guide will show you how to use the `ClientSettingsPolicy` API to configure
For all the possible configuration options for `ClientSettingsPolicy`, see the [API reference]({{< ref "/ngf/reference/api.md" >}}).
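+
+For a quick sense of the resource shape, a `ClientSettingsPolicy` targets a Gateway or HTTPRoute and sets client-facing values such as the maximum request body size. The names and values in this sketch are placeholders; complete examples follow later in this guide:
+
+```yaml
+apiVersion: gateway.nginx.org/v1alpha1
+kind: ClientSettingsPolicy
+metadata:
+  name: example-client-settings
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: Gateway
+    name: gateway
+  body:
+    maxSize: 10m
+```
+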
----
-
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
+Create the coffee and tea example applications:
- {{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}}
+```yaml
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/client-settings-policy/app.yaml
+```
-- Create the coffee and tea example applications:
+Create a Gateway:
- ```yaml
- kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/client-settings-policy/app.yaml
- ```
+```yaml
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/client-settings-policy/gateway.yaml
+```
+
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
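+You can confirm these resources exist before continuing. This is only a sketch: the `default` namespace and the `gateway-nginx` naming assume a Gateway named `gateway` deployed in `default`; the actual names follow your Gateway:
+
+```shell
+# The NGINX Pod and Service are created in the Gateway's namespace,
+# typically named <gateway-name>-nginx
+kubectl get pods,services -n default | grep nginx
+```
+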
+Create HTTPRoutes for the coffee and tea applications:
+
+```yaml
+kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/client-settings-policy/httproutes.yaml
+```
-- Create a Gateway:
+Save the public IP address and port of the NGINX Service into shell variables:
- ```yaml
- kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/client-settings-policy/gateway.yaml
- ```
+```text
+GW_IP=XXX.YYY.ZZZ.III
+GW_PORT=<port number>
+```
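+
+If the NGINX Service is of type `LoadBalancer`, one way to populate these variables is with `kubectl`. The Service name `gateway-nginx` and namespace `default` below are assumptions based on the example Gateway; adjust them for your setup:
+
+```shell
+GW_IP=$(kubectl get service gateway-nginx -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+GW_PORT=$(kubectl get service gateway-nginx -n default -o jsonpath='{.spec.ports[0].port}')
+```
+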
-- Create HTTPRoutes for the coffee and tea applications:
+{{< note >}}
- ```yaml
- kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/client-settings-policy/httproutes.yaml
- ```
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
-- Test the configuration:
+{{< /note >}}
- You can send traffic to the coffee and tea applications using the external IP address and port for NGINX Gateway Fabric.
+Test the configuration:
- Send a request to coffee:
+You can send traffic to the coffee and tea applications using the external IP address and port for the NGINX Service.
- ```shell
- curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/coffee
- ```
+Send a request to coffee:
- This request should receive a response from the coffee Pod:
+```shell
+curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/coffee
+```
- ```text
- Server address: 10.244.0.9:8080
- Server name: coffee-76c7c85bbd-cf8nz
- ```
+This request should receive a response from the coffee Pod:
- Send a request to tea:
+```text
+Server address: 10.244.0.9:8080
+Server name: coffee-76c7c85bbd-cf8nz
+```
- ```shell
- curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/tea
- ```
+Send a request to tea:
- This request should receive a response from the tea Pod:
+```shell
+curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/tea
+```
- ```text
- Server address: 10.244.0.9:8080
- Server name: tea-76c7c85bbd-cf8nz
- ```
+This request should receive a response from the tea Pod:
----
+```text
+Server address: 10.244.0.9:8080
+Server name: tea-76c7c85bbd-cf8nz
+```
## Configure client max body size
@@ -183,8 +184,6 @@ Server name: coffee-56b44d4c55-7ldjc
You can repeat this test with the tea application to confirm that the policy affects both HTTPRoutes.
----
-
### Set a different client max body size for a route
To set a different client max body size for a particular route, you can create another `ClientSettingsPolicy` that targets the route:
@@ -284,8 +283,6 @@ spec:
EOF
```
----
-
## See also
- [Custom policies]({{< ref "/ngf/overview/custom-policies.md" >}}): learn about how NGINX Gateway Fabric custom policies work.
diff --git a/content/ngf/how-to/traffic-management/https-termination.md b/content/ngf/traffic-management/https-termination.md
similarity index 92%
rename from content/ngf/how-to/traffic-management/https-termination.md
rename to content/ngf/traffic-management/https-termination.md
index bc4253098..9c2736f5b 100644
--- a/content/ngf/how-to/traffic-management/https-termination.md
+++ b/content/ngf/traffic-management/https-termination.md
@@ -2,15 +2,13 @@
title: HTTPS termination
weight: 500
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1421
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1421
---
Learn how to terminate HTTPS traffic using NGINX Gateway Fabric.
----
-
## Overview
In this guide, we will show how to configure HTTPS termination for your application, using an [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) redirect filter, secret, and [ReferenceGrant](https://gateway-api.sigs.k8s.io/api-types/referencegrant/).
@@ -19,24 +17,7 @@ In this guide, we will show how to configure HTTPS termination for your applicat
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
-
- Save the ports of NGINX Gateway Fabric:
-
- ```text
- GW_HTTP_PORT=
- GW_HTTPS_PORT=
- ```
-
-{{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}}
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
## Set up
@@ -96,8 +77,6 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/coffee ClusterIP 10.96.189.37 80/TCP 40s
```
----
-
## Configure HTTPS termination and routing
For the HTTPS, we need a certificate and key that are stored in a secret. This secret will live in a separate namespace, so we will need a ReferenceGrant in order to access it.
@@ -175,6 +154,22 @@ This gateway configures:
- `http` listener for HTTP traffic
- `https` listener for HTTPS traffic. It terminates TLS connections using the `cafe-secret` we created.
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
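+To confirm the Gateway is ready before saving these values, you can check its status. The Gateway name and namespace here are placeholders:
+
+```shell
+kubectl get gateway <gateway-name> -n <gateway-namespace>
+# The ADDRESS column shows the public address of the provisioned NGINX Service,
+# and PROGRAMMED should be True once the Gateway is ready.
+```
+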
+Save the public IP address and ports of the NGINX Service into shell variables:
+
+ ```text
+ GW_IP=XXX.YYY.ZZZ.III
+ GW_HTTP_PORT=<http port number>
+ GW_HTTPS_PORT=<https port number>
+ ```
+
+{{< note >}}
+
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
+
+{{< /note >}}
+
To create the httproute resources, copy and paste the following into your terminal:
```yaml
@@ -219,11 +214,9 @@ EOF
The first route issues a `requestRedirect` from the `http` listener on port 80 to `https` on port 443. The second route binds the `coffee` route to the `https` listener.
----
-
## Send traffic
-Using the external IP address and port for NGINX Gateway Fabric, we can send traffic to our coffee application.
+Using the external IP address and ports for the NGINX Service, we can send traffic to our coffee application.
{{< note >}}If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve.{{< /note >}}
@@ -253,8 +246,6 @@ Server address: 10.244.0.6:80
Server name: coffee-6b8b6d6486-7fc78
```
----
-
## See also
To learn more about redirects using the Gateway API, see the following resource:
diff --git a/content/ngf/traffic-management/mirror.md b/content/ngf/traffic-management/mirror.md
new file mode 100644
index 000000000..03ab98793
--- /dev/null
+++ b/content/ngf/traffic-management/mirror.md
@@ -0,0 +1,215 @@
+---
+title: Configure Request Mirroring
+toc: true
+weight: 700
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
+---
+
+Learn how to mirror your HTTP or gRPC traffic using NGINX Gateway Fabric.
+
+## Overview
+
+[HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) and [GRPCRoute](https://gateway-api.sigs.k8s.io/api-types/grpcroute/) filters can be used to configure request mirroring. Mirroring copies a request to another backend.
+
+In this guide, we will set up two applications, **coffee** and **tea**, and mirror requests between them. All requests
+sent to the **coffee** application will also be sent to the **tea** application automatically.
+
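+As a rough sketch, mirroring is expressed with a `RequestMirror` filter on a route rule. The backend name and port below are placeholders; the full HTTPRoute used in this guide is created in a later step:
+
+```yaml
+filters:
+- type: RequestMirror
+  requestMirror:
+    backendRef:
+      name: tea
+      port: 80
+```
+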
+## Before you begin
+
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
+
+## Set up
+
+Create the **coffee** and **tea** applications in Kubernetes by copying and pasting the following block into your terminal:
+
+```yaml
+kubectl apply -f - < 80/TCP 3s
+service/tea ClusterIP 10.96.185.235 80/TCP 3s
+```
+
+---
+
+## Configure request mirroring
+
+First, create the **cafe** Gateway resource:
+
+```yaml
+kubectl apply -f - <
+```
+
+{{< note >}}
+
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
+
+{{< /note >}}
+
+Now create an HTTPRoute that defines a RequestMirror filter that copies all requests sent to `/coffee` to be sent to the **coffee** backend and mirrored to the **tea** backend. Use the following command:
+
+```yaml
+kubectl apply -f - <}}) first.
+In this guide, we will set up the coffee application to demonstrate path URL rewriting, and the tea and soda applications to showcase path-based request redirection. For an introduction to exposing your application, we recommend that you follow the [basic guide]({{< ref "/ngf/traffic-management/basic-routing.md" >}}) first.
-To see an example of a redirect using scheme and port, see the [HTTPS Termination]({{< ref "/ngf/how-to/traffic-management/https-termination.md" >}}) guide.
-
----
+To see an example of a redirect using scheme and port, see the [HTTPS Termination]({{< ref "/ngf/traffic-management/https-termination.md" >}}) guide.
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
-
-{{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}}
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
## HTTP rewrites and redirects examples
We will configure a common gateway for the `URLRewrite` and `RequestRedirect` filter examples mentioned below.
----
-
### Deploy the Gateway resource for the applications
The [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) resource is typically deployed by the [Cluster Operator](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/#roles-and-personas_1). This Gateway defines a single listener on port 80. Since no hostname is specified, this listener matches on all hostnames. To deploy the Gateway:
@@ -60,14 +44,25 @@ spec:
EOF
```
----
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+Save the public IP address and port of the NGINX Service into shell variables:
+
+```text
+GW_IP=XXX.YYY.ZZZ.III
+GW_PORT=<port number>
+```
+
+{{< note >}}
+
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
+
+{{< /note >}}
## URLRewrite example
This example demonstrates how to rewrite the traffic URL for a simple coffee application. An HTTPRoute resource is used to define two `URLRewrite` filters that will rewrite requests. You can verify the server responds with the rewritten URL.
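+
+As a brief sketch, a `URLRewrite` filter on a route rule looks roughly like the following; the path value is a placeholder, and the full HTTPRoute is shown later in this example:
+
+```yaml
+filters:
+- type: URLRewrite
+  urlRewrite:
+    path:
+      type: ReplacePrefixMatch
+      replacePrefixMatch: /beans
+```
+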
----
-
### Deploy the coffee application
Create the **coffee** application in Kubernetes by copying and pasting the following block into your terminal:
@@ -126,8 +121,6 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/coffee ClusterIP 10.96.189.37 80/TCP 40s
```
----
-
### Configure a path rewrite
The following HTTPRoute defines two filters that will rewrite requests such as the following:
@@ -180,11 +173,9 @@ spec:
EOF
```
----
-
### Send traffic
-Using the external IP address and port for NGINX Gateway Fabric, we can send traffic to our coffee application.
+Using the external IP address and port for the NGINX Service, we can send traffic to our coffee application.
{{< note >}}If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve.{{< /note >}}
@@ -258,14 +249,10 @@ Server name: coffee-6db967495b-twn6x
URI: /prices?test=v1&test=v2
```
----
-
## RequestRedirect example
This example demonstrates how to redirect the traffic to a new location for the tea and soda applications, using `RequestRedirect` filters.
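+
+Similarly, a `RequestRedirect` filter looks roughly like the following sketch; the values are placeholders, and the full HTTPRoutes are shown later in this example:
+
+```yaml
+filters:
+- type: RequestRedirect
+  requestRedirect:
+    hostname: cafe.example.com
+    path:
+      type: ReplacePrefixMatch
+      replacePrefixMatch: /organic
+    statusCode: 302
+```
+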
----
-
### Setup
Create the **tea** and **soda** application in Kubernetes by copying and pasting the following block into your terminal:
@@ -359,8 +346,6 @@ service/soda ClusterIP 10.96.230.208 80/TCP 89m
service/tea ClusterIP 10.96.151.194 80/TCP 120m
```
----
-
### Configure a path redirect
In this section, we'll define two HTTPRoutes for the tea and soda applications to demonstrate different types of request redirection using the `RequestRedirect` filter.
@@ -427,11 +412,9 @@ spec:
EOF
```
----
-
### Send traffic
-Using the external IP address and port for NGINX Gateway Fabric, we can send traffic to our tea and soda applications to verify the redirect is successful. We will use curl's `--include` option to print the response headers (we are interested in the `Location` header).
+Using the external IP address and port for the NGINX Service, we can send traffic to our tea and soda applications to verify the redirect is successful. We will use curl's `--include` option to print the response headers (we are interested in the `Location` header).
{{< note >}}If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve.{{< /note >}}
@@ -511,8 +494,6 @@ HTTP/1.1 302 Moved Temporarily
Location: http://cafe.example.com:8080/flavors?test=v1
```
----
-
## See also
To learn more about redirects and rewrites using the Gateway API, see the following resource:
diff --git a/content/ngf/how-to/traffic-management/request-response-headers.md b/content/ngf/traffic-management/request-response-headers.md
similarity index 94%
rename from content/ngf/how-to/traffic-management/request-response-headers.md
rename to content/ngf/traffic-management/request-response-headers.md
index c4820a46c..24740cbb1 100644
--- a/content/ngf/how-to/traffic-management/request-response-headers.md
+++ b/content/ngf/traffic-management/request-response-headers.md
@@ -2,9 +2,9 @@
title: Modify HTTP request and response headers
weight: 600
toc: true
-type: how-to
-product: NGF
-docs: DOCS-000
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
---
Learn how to modify the request and response headers of your application using NGINX Gateway Fabric.
@@ -13,30 +13,16 @@ Learn how to modify the request and response headers of your application using N
[HTTP Header Modifiers](https://gateway-api.sigs.k8s.io/guides/http-header-modifier/?h=request#http-header-modifiers) can be used to add, modify, or remove headers during the request-response lifecycle. The [RequestHeaderModifier](https://gateway-api.sigs.k8s.io/guides/http-header-modifier/#http-request-header-modifier) is used to alter headers in a request sent by the client, and the [ResponseHeaderModifier](https://gateway-api.sigs.k8s.io/guides/http-header-modifier/#http-response-header-modifier) is used to alter headers in a response to the client.
-This guide describes how to configure the headers application to modify the headers in the request. Another version of the headers application is then used to modify response headers when client requests are made. For an introduction to exposing your application, we recommend that you follow the [basic guide]({{< ref "/ngf/how-to/traffic-management/routing-traffic-to-your-app.md" >}}) first.
-
----
+This guide describes how to configure the headers application to modify the headers in the request. Another version of the headers application is then used to modify response headers when client requests are made. For an introduction to exposing your application, we recommend that you follow the [basic guide]({{< ref "/ngf/traffic-management/basic-routing.md" >}}) first.
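+
+As a brief sketch, a `RequestHeaderModifier` filter on a route rule can set, add, and remove headers. The header names below match the ones used later in this guide, but the complete HTTPRoutes are created in the following sections:
+
+```yaml
+filters:
+- type: RequestHeaderModifier
+  requestHeaderModifier:
+    set:
+    - name: My-Overwrite-Header
+      value: this-is-the-only-value
+    add:
+    - name: Accept-Encoding
+      value: compress
+    remove:
+    - User-Agent
+```
+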
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
-
-{{< note >}} In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for .{{< /note >}}
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
## HTTP Header Modifiers examples
We will configure a common gateway for the `RequestHeaderModifier` and `ResponseHeaderModifier` examples mentioned below.
----
-
### Deploy the Gateway API resources for the Header application
The [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) resource is typically deployed by the [Cluster Operator](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/#roles-and-personas_1). This Gateway defines a single listener on port 80. Since no hostname is specified, this listener matches on all hostnames. To deploy the Gateway:
@@ -56,7 +42,20 @@ spec:
EOF
```
----
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+Save the public IP address and port of the NGINX Service into shell variables:
+
+```text
+GW_IP=XXX.YYY.ZZZ.III
+GW_PORT=<port number>
+```
+
+{{< note >}}
+
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.
+
+{{< /note >}}
## RequestHeaderModifier example
@@ -83,8 +82,6 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/headers ClusterIP 10.96.26.161 80/TCP 23s
```
----
-
### Configure the HTTPRoute with RequestHeaderModifier filter
Create a HTTPRoute that exposes the header application outside the cluster using the listener created in the previous section. Use the following command:
@@ -136,8 +133,6 @@ This HTTPRoute has a few important properties:
1. Appends the value `compress` to the `Accept-Encoding` header and `this-is-an-appended-value` to the `My-Cool-header`.
1. Removes `User-Agent` header.
----
-
### Send traffic to the Headers application
To access the application, use `curl` to send requests to the `headers` Service, which includes headers within the request.
@@ -168,8 +163,6 @@ In the output above, you can see that the headers application modifies the follo
- The header `My-Overwrite-Header` gets overwritten from `dont-see-this` to `this-is-the-only-value`.
- The header `Accept-encoding` remains unchanged as we did not modify it in the curl request sent.
----
-
### Delete the resources
Delete the headers application and HTTPRoute: another instance will be used for the next examples.
@@ -182,14 +175,10 @@ kubectl delete httproutes.gateway.networking.k8s.io headers
kubectl delete -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/http-request-header-filter/headers.yaml
```
----
-
## ResponseHeaderModifier example
Begin by configuring an application with custom headers and a simple HTTPRoute. The server response can be observed to see its headers. The next step is to use HTTPRoute filters to modify some of the response headers. Finally, verify the server responds with the modified headers.
----
-
### Deploy the Headers application
Begin by deploying the example application `headers`. It is a simple application that adds response headers that will be modified later.
@@ -212,8 +201,6 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/headers ClusterIP 10.96.15.12 80/TCP 95s
```
----
-
### Configure the basic HTTPRoute
Create a HTTPRoute that exposes the header application outside the cluster using the listener created in the previous section. You can do this with the following command:
@@ -247,8 +234,6 @@ This HTTPRoute has a few important properties:
- `cafe.example.com` is the hostname that is matched for all requests to the backends defined in this HTTPRoute.
- The `match` rule defines that all requests with the path prefix `/headers` are sent to the `headers` Service.
----
-
### Send traffic to the Headers application
Use `curl` with the `-i` flag to access the application and include the response headers in the output:
@@ -281,8 +266,6 @@ In the output above, you can see that the headers application adds the following
The next section will modify these headers by adding a ResponseHeaderModifier filter to the headers HTTPRoute.
----
-
### Update the HTTPRoute to modify the Response headers
Update the HTTPRoute by adding a `ResponseHeaderModifier` filter:
@@ -327,8 +310,6 @@ Notice that this HTTPRoute has a `ResponseHeaderModifier` filter defined for the
- Adds the value `this-is-the-appended-value` to the header `X-Header-Add`.
- Removes `X-Header-Remove` header.
----
-
### Send traffic to the modified Headers application
Send a curl request to the modified `headers` application to verify the response headers are modified.
@@ -354,8 +335,6 @@ ok
In the output above, you can see the modified response headers: `X-Header-Unmodified` remains unchanged because we did not include it in the filter, and the `X-Header-Remove` header is absent. The header `X-Header-Add` has the new value appended, and `X-Header-Set` is overwritten to `overwritten-value`, as defined in the _HttpRoute_.
----
-
## See also
To learn more about the Gateway API and the resources we created in this guide, check out the following Kubernetes documentation resources:
diff --git a/content/ngf/how-to/traffic-management/snippets.md b/content/ngf/traffic-management/snippets.md
similarity index 97%
rename from content/ngf/how-to/traffic-management/snippets.md
rename to content/ngf/traffic-management/snippets.md
index 543c2b917..0112d01a5 100644
--- a/content/ngf/how-to/traffic-management/snippets.md
+++ b/content/ngf/traffic-management/snippets.md
@@ -2,15 +2,13 @@
title: "Use the SnippetsFilter API"
weight: 800
toc: true
-type: how-to
-product: NGF
-docs: "DOCS-000"
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: "DOCS-000"
---
This topic introduces Snippets, how to implement them using the `SnippetsFilter` API, and provides an example of how to use `SnippetsFilter` for rate limiting.
----
-
## Overview
Snippets allow users to insert NGINX configuration into different contexts of the
@@ -22,8 +20,6 @@ and only in cases where Gateway API resources or NGINX extension policies don't
Users can configure Snippets through the `SnippetsFilter` API. `SnippetsFilter` can be an [HTTPRouteFilter](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.HTTPRouteFilter) or [GRPCRouteFilter](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.GRPCRouteFilter),
that can be defined in an HTTPRoute/GRPCRoute rule and is intended to modify NGINX configuration specifically for that Route rule. `SnippetsFilter` is an `extensionRef` type filter.
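+
+For example, a route rule references a `SnippetsFilter` through an `extensionRef` filter roughly like the following; the filter name is a placeholder here, and a complete rate-limiting example is built later in this guide:
+
+```yaml
+filters:
+- type: ExtensionRef
+  extensionRef:
+    group: gateway.nginx.org
+    kind: SnippetsFilter
+    name: rate-limiting-sf
+```
+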
----
-
## Disadvantages of Snippets
{{< warning >}} We recommend managing NGINX configuration through Gateway API resources, [first-class policies]({{< ref "/ngf/overview/custom-policies.md" >}}), and other existing [NGINX extensions]({{< ref "/ngf/how-to/data-plane-configuration.md" >}})
@@ -41,8 +37,6 @@ Snippets have the following disadvantages:
{{< note >}} If the NGINX configuration includes an invalid Snippet, NGINX will continue to operate with the last valid configuration. No new configuration will be applied until the invalid Snippet is fixed. {{< /note >}}
----
-
## Best practices for SnippetsFilters
There are endless ways to use `SnippetsFilters` to modify NGINX configuration, and equal ways to generate invalid or undesired NGINX configuration.
@@ -54,25 +48,14 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter
1. In a `SnippetsFilter`, only one Snippet per NGINX context is allowed, however multiple `SnippetsFilters` can be referenced in the same routing rule. As such, `SnippetsFilters` should not conflict with each other. If `SnippetsFilters` do conflict, they should not be referenced on the same routing rule.
1. `SnippetsFilters` that define Snippets targeting NGINX contexts `main`, `http`, or `http.server`, can potentially affect more than the routing rule they are referenced by. Proceed with caution and verify the behavior of the NGINX configuration before creating those `SnippetsFilters` in a production scenario.
----
-
## Setup
-- To enable Snippets, [install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric with these modifications:
+- To enable Snippets, [install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric with these modifications:
- Using Helm: set the `nginxGateway.snippetsFilters.enable=true` Helm value.
- Using Kubernetes manifests: set the `--snippets-filters` flag in the nginx-gateway container argument, add `snippetsfilters` to the RBAC
rules with verbs `list` and `watch`, and add `snippetsfilters/status` to the RBAC rules with verb `update`. See this [example manifest](https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/main/deploy/snippets-filters/deploy.yaml) for clarification.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=
- GW_PORT=
- ```
-
- {{< note >}} In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for. {{< /note >}}
-
- Create the coffee and tea example applications:
```yaml
@@ -85,6 +68,18 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter
kubectl apply -f https://raw.githubusercontent.com/nginx/nginx-gateway-fabric/v{{< version-ngf >}}/examples/snippets-filter/gateway.yaml
```
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+- Save the public IP address and port of the NGINX Service into shell variables:
+
+ ```text
+  GW_IP=<ip address>
+  GW_PORT=<port number>
+ ```
+
+ {{< note >}} In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for. {{< /note >}}
+
+
- Create HTTPRoutes for the coffee and tea applications:
```yaml
@@ -93,7 +88,7 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter
- Test the configuration:
- You can send traffic to the coffee and tea applications using the external IP address and port for NGINX Gateway Fabric.
+ You can send traffic to the coffee and tea applications using the external IP address and port for the NGINX Service.
Send a request to coffee:
@@ -129,8 +124,6 @@ We have outlined a few best practices to keep in mind when using `SnippetsFilter
You should see all successful responses in quick succession as we have not configured any rate limiting rules yet.
----
-
## Create Rate Limiting SnippetsFilters
Configure a rate limiting `SnippetsFilter` named `rate-limiting-sf` by adding the following `SnippetsFilter`:
@@ -219,8 +212,6 @@ Status:
Events:
```
----
-
## Configure coffee to reference rate-limiting-sf SnippetsFilter
To use the `rate-limiting-sf` `SnippetsFilter`, update the coffee HTTPRoute to reference it:
@@ -303,8 +294,6 @@ for i in `seq 1 10`; do curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://c
You should see all successful responses from the coffee Pod, but they should be spaced apart roughly one second each as
expected through the rate limiting configuration.
----
-
## Configure tea to reference no-delay-rate-limiting-sf SnippetsFilter
Update the tea HTTPRoute to reference the `no-delay-rate-limting-sf` `SnippetsFilter`:
@@ -400,8 +389,6 @@ Request ID: 890c17df930ef1ef573feed3c6e81290
This is the default error response given by NGINX when the rate limit burst is exceeded, meaning our `SnippetsFilter`
correctly applied our rate limiting NGINX configuration changes.
----
-
## Conclusion
You've successfully used Snippets with the `SnippetsFilter` resource to configure two distinct rate limiting rules to different backend applications.
@@ -414,8 +401,6 @@ This follows our recommended Role and Persona separation described in the [Best
For an alternative method of modifying the NGINX configuration NGINX Gateway Fabric generates through Gateway API resources, check out
our supported [first-class policies]({{< ref "/ngf/overview/custom-policies.md" >}}) which don't carry many of the aforementioned disadvantages of Snippets.
----
-
## Troubleshooting
If a `SnippetsFilter` is defined in a Route and contains a Snippet which includes an invalid NGINX configuration, NGINX will continue to operate
@@ -427,7 +412,7 @@ An example of an error from the NGINX Gateway Fabric `nginx-gateway` container l
{"level":"error","ts":"2024-10-29T22:19:41Z","logger":"eventLoop.eventHandler","msg":"Failed to update NGINX configuration","batchID":156,"error":"failed to reload NGINX: reload unsuccessful: no new NGINX worker processes started for config version 141. Please check the NGINX container logs for possible configuration issues: context deadline exceeded","stacktrace":"github.com/nginx/nginx-gateway-fabric/internal/mode/static.(*eventHandlerImpl).HandleEventBatch\n\tgithub.com/nginx/nginx-gateway-fabric/internal/mode/static/handler.go:219\ngithub.com/nginx/nginx-gateway-fabric/internal/framework/events.(*EventLoop).Start.func1.1\n\tgithub.com/nginx/nginx-gateway-fabric/internal/framework/events/loop.go:74"}
```
-An example of an error from the NGINX Gateway Fabric `nginx` container logs:
+An example of an error from the NGINX Pod's `nginx` container logs:
```text
2024/10/29 22:18:41 [emerg] 40#40: invalid number of arguments in "limit_req_zone" directive in /etc/nginx/includes/SnippetsFilter_http_default_rate-limiting-sf.conf:1
@@ -473,8 +458,6 @@ Conditions:
{{< note >}} If you run into situations where an NGINX directive fails to be applied and the troubleshooting information here isn't sufficient, please create an issue in the
[NGINX Gateway Fabric Github repository](https://github.com/nginx/nginx-gateway-fabric). {{< /note >}}
----
-
## See also
- [API reference]({{< ref "/ngf/reference/api.md" >}}): all configuration fields for the `SnippetsFilter` API.
diff --git a/content/ngf/how-to/traffic-management/tls-passthrough.md b/content/ngf/traffic-management/tls-passthrough.md
similarity index 92%
rename from content/ngf/how-to/traffic-management/tls-passthrough.md
rename to content/ngf/traffic-management/tls-passthrough.md
index 92f1c5385..82637e353 100644
--- a/content/ngf/how-to/traffic-management/tls-passthrough.md
+++ b/content/ngf/traffic-management/tls-passthrough.md
@@ -2,42 +2,26 @@
title: Configure TLS passthrough
weight: 800
toc: true
-type: how-to
-product: NGF
-docs: DOCS-000
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-000
---
-Learn how to use TLSRoutes to configure TLS Passthrough load-balancing with NGINX Gateway Fabric.
-
----
+Learn how to use TLSRoutes to configure TLS passthrough load-balancing with NGINX Gateway Fabric.
## Overview
In this guide, we will show how to configure TLS passthrough for your application, using a [TLSRoute](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.TLSRoute).
----
-
## Note on Gateway API Experimental Features
{{< important >}} TLSRoute is a Gateway API resource from the experimental release channel. {{< /important >}}
{{< include "/ngf/installation/install-gateway-api-experimental-features.md" >}}
----
-
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric with experimental features enabled.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_TLS_PORT=
- ```
-
-{{< note >}} In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the Gateway will forward for. {{< /note >}}
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric with experimental features enabled.
## Set up
@@ -171,6 +155,21 @@ This Gateway will configure NGINX Gateway Fabric to accept TLS connections on po
{{< note >}}It is possible to add an HTTPS listener on the same port that terminates TLS connections so long as the hostname does not overlap with the TLS listener hostname.{{< /note >}}
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+Save the public IP address and port of the NGINX Service into shell variables:
+
+```text
+GW_IP=XXX.YYY.ZZZ.III
+GW_TLS_PORT=<port number>
+```
+
+{{< note >}}
+
+In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the Gateway will forward for.
+
+{{< /note >}}
+
Create a TLSRoute that attaches to the Gateway and routes requests to `app.example.com` to the `secure-app` Service:
```yaml
@@ -195,11 +194,9 @@ EOF
{{< note >}}To route to a Service in a Namespace different from the TLSRoute Namespace, create a [ReferenceGrant](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1beta1.ReferenceGrant) to permit the cross-namespace reference. {{< /note >}}
----
-
## Send traffic
-Using the external IP address and port for NGINX Gateway Fabric, send traffic to the `secure-app` application.
+Using the external IP address and port for the NGINX Service, send traffic to the `secure-app` application.
{{< note >}}If you have a DNS record allocated for `app.example.com`, you can send the request directly to that hostname, without needing to resolve.{{< /note >}}
diff --git a/content/ngf/how-to/traffic-management/upstream-settings.md b/content/ngf/traffic-management/upstream-settings.md
similarity index 91%
rename from content/ngf/how-to/traffic-management/upstream-settings.md
rename to content/ngf/traffic-management/upstream-settings.md
index 6fd92b620..f57d4d13f 100644
--- a/content/ngf/how-to/traffic-management/upstream-settings.md
+++ b/content/ngf/traffic-management/upstream-settings.md
@@ -34,23 +34,7 @@ For all the possible configuration options for `UpstreamSettingsPolicy`, see the
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
-
-- Lookup the name of the NGINX Gateway Fabric pod and save into shell variable:
-
- ```text
- NGF_POD_NAME=
- ```
-
- {{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}}
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric.
## Setup
@@ -160,6 +144,23 @@ spec:
EOF
```
+After creating the Gateway resource, NGINX Gateway Fabric will provision an NGINX Pod and Service fronting it to route traffic.
+
+Save the public IP address and port of the NGINX Service into shell variables:
+
+```text
+GW_IP=XXX.YYY.ZZZ.III
+GW_PORT=<port number>
+```
+
+Look up the name of the NGINX Pod and save it into a shell variable:
+
+```text
+NGINX_POD_NAME=<NGINX Pod name>
+```
+
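+One way to find the Pod name is to list the Pods in the Gateway's namespace; the provisioned data plane Pod is typically named after the Gateway with an `-nginx` suffix. The namespace and example name below are assumptions:
+
+```shell
+kubectl get pods -n default
+# Example output name: gateway-nginx-85f7f6d7d-fx7q2
+NGINX_POD_NAME=gateway-nginx-85f7f6d7d-fx7q2
+```
+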
+{{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}}
+
Create HTTPRoutes for the `coffee` and `tea` applications:
```yaml
@@ -206,7 +207,7 @@ EOF
Test the configuration:
-You can send traffic to the `coffee` and `tea` applications using the external IP address and port for NGINX Gateway Fabric.
+You can send traffic to the `coffee` and `tea` applications using the external IP address and port for the NGINX Service.
Send a request to `coffee`:
@@ -290,7 +291,7 @@ Events:
Next, verify that the policy has been applied to the `coffee` and `tea` upstreams by inspecting the NGINX configuration:
```shell
-kubectl exec -it -n nginx-gateway $NGF_POD_NAME -c nginx -- nginx -T
+kubectl exec -it -n <namespace> $NGINX_POD_NAME -- nginx -T
```
You should see the `zone` directive in the `coffee` and `tea` upstreams both specify the size `1m`:
@@ -363,7 +364,7 @@ Events:
Next, verify that the policy has been applied to the `coffee` upstreams, by inspecting the NGINX configuration:
```shell
-kubectl exec -it -n nginx-gateway $NGF_POD_NAME -c nginx -- nginx -T
+kubectl exec -it -n <namespace> $NGINX_POD_NAME -- nginx -T
```
You should see that the `coffee` upstream has the `keepalive` directive set to 32:
diff --git a/content/ngf/traffic-security/_index.md b/content/ngf/traffic-security/_index.md
new file mode 100644
index 000000000..70eb63d81
--- /dev/null
+++ b/content/ngf/traffic-security/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Traffic security"
+url: /nginx-gateway-fabric/traffic-security
+weight: 500
+---
\ No newline at end of file
diff --git a/content/ngf/how-to/traffic-security/integrating-cert-manager.md b/content/ngf/traffic-security/integrate-cert-manager.md
similarity index 98%
rename from content/ngf/how-to/traffic-security/integrating-cert-manager.md
rename to content/ngf/traffic-security/integrate-cert-manager.md
index 751e06329..575ced75a 100644
--- a/content/ngf/how-to/traffic-security/integrating-cert-manager.md
+++ b/content/ngf/traffic-security/integrate-cert-manager.md
@@ -2,9 +2,9 @@
title: Secure traffic using Let's Encrypt and cert-manager
weight: 100
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1425
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1425
---
Learn how to issue and manage certificates using Let's Encrypt and cert-manager.
@@ -23,9 +23,11 @@ Follow the steps in this guide to:
## Before you begin
+You need:
+
- Administrator access to a Kubernetes cluster.
- [Helm](https://helm.sh) and [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) must be installed locally.
-- [NGINX Gateway Fabric deployed]({{< ref "/ngf/installation/" >}}) in the Kubernetes cluster.
+- [NGINX Gateway Fabric deployed]({{< ref "/ngf/install/" >}}) in the Kubernetes cluster.
- A DNS-resolvable domain name is required. It must resolve to the public endpoint of the NGINX Gateway Fabric deployment, and this public endpoint must be an external IP address or alias accessible over the internet. The process here will depend on your DNS provider. This DNS name will need to be resolvable from the Let’s Encrypt servers, which may require that you wait for the record to propagate before it will work.
---
@@ -288,7 +290,7 @@ Request ID: e64c54a2ac253375ac085d48980f000a
- The temporary HTTPRoute created by cert-manager routes the traffic between cert-manager and the Let's Encrypt server through NGINX Gateway Fabric. If the challenge is not successful, it may be useful to inspect the NGINX logs to see the ACME challenge requests. You should see something like the following:
```shell
- kubectl logs -n nginx-gateway -c nginx
+   kubectl logs -n <namespace> <nginx-pod-name>
<...>
52.208.162.19 - - [15/Aug/2023:13:18:12 +0000] "GET /.well-known/acme-challenge/bXQn27Lenax2AJKmOOS523T-MWOKeFhL0bvrouNkUc4 HTTP/1.1" 200 87 "-" "cert-manager-challenges/v1.12.0 (linux/amd64) cert-manager/bd192c4f76dd883f9ee908035b894ffb49002384"
52.208.162.19 - - [15/Aug/2023:13:18:14 +0000] "GET /.well-known/acme-challenge/bXQn27Lenax2AJKmOOS523T-MWOKeFhL0bvrouNkUc4 HTTP/1.1" 200 87 "-" "cert-manager-challenges/v1.12.0 (linux/amd64) cert-manager/bd192c4f76dd883f9ee908035b894ffb49002384"
diff --git a/content/ngf/how-to/traffic-security/securing-backend-traffic.md b/content/ngf/traffic-security/secure-backend.md
similarity index 88%
rename from content/ngf/how-to/traffic-security/securing-backend-traffic.md
rename to content/ngf/traffic-security/secure-backend.md
index 6ba611f17..d532bdbc3 100644
--- a/content/ngf/how-to/traffic-security/securing-backend-traffic.md
+++ b/content/ngf/traffic-security/secure-backend.md
@@ -2,20 +2,18 @@
title: Securing backend traffic
weight: 200
toc: true
-type: how-to
-product: NGF
-docs: DOCS-1423
+nd-content-type: how-to
+nd-product: NGF
+nd-docs: DOCS-1423
---
Learn how to encrypt HTTP traffic between NGINX Gateway Fabric and your backend pods.
----
-
## Overview
-In this guide, we will show how to specify the TLS configuration of the connection from the Gateway to a backend pod/s via the Service API object using a [BackendTLSPolicy](https://gateway-api.sigs.k8s.io/api-types/backendtlspolicy/). This covers the use-case where the service or backend owner is doing their own TLS and NGINX Gateway Fabric needs to know how to connect to this backend pod that has its own certificate over HTTPS.
+In this guide, we will show how to specify the TLS configuration of the connection from the Gateway to a backend pod with the Service API object using a [BackendTLSPolicy](https://gateway-api.sigs.k8s.io/api-types/backendtlspolicy/).
----
+The intended use-case is when a service or backend owner is managing their own TLS and NGINX Gateway Fabric needs to know how to connect to this backend pod that has its own certificate over HTTPS.
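+
+As a rough sketch only (the exact fields depend on the Gateway API experimental version installed in your cluster, and the complete policy used in this guide is created in a later step), a `BackendTLSPolicy` ties a backend Service to the CA bundle used to verify it:
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1alpha3
+kind: BackendTLSPolicy
+metadata:
+  name: backend-tls
+spec:
+  targetRefs:
+  - group: ""
+    kind: Service
+    name: secure-app
+  validation:
+    caCertificateRefs:
+    - group: ""
+      kind: ConfigMap
+      name: backend-cert # placeholder name
+    hostname: secure-app.example.com
+```
+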
## Note on Gateway API Experimental Features
@@ -23,21 +21,9 @@ In this guide, we will show how to specify the TLS configuration of the connecti
{{< include "/ngf/installation/install-gateway-api-experimental-features.md" >}}
----
-
## Before you begin
-- [Install]({{< ref "/ngf/installation/" >}}) NGINX Gateway Fabric with experimental features enabled.
-- Save the public IP address and port of NGINX Gateway Fabric into shell variables:
-
- ```text
- GW_IP=XXX.YYY.ZZZ.III
- GW_PORT=
- ```
-
-{{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}}
-
----
+- [Install]({{< ref "/ngf/install/" >}}) NGINX Gateway Fabric with experimental features enabled.
## Set up
@@ -141,11 +127,9 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/secure-app ClusterIP 10.96.213.57 8443/TCP 9s
```
----
-
## Configure routing rules
-First, we will create the Gateway resource with an HTTP listener:
+First, create the Gateway resource with an HTTP listener:
```yaml
kubectl apply -f - <
+```
+
+{{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}}
+
---
## Send Traffic without backend TLS configuration
-Using the external IP address and port for NGINX Gateway Fabric, we can send traffic to our secure-app application. To show what happens if we send plain HTTP traffic from NGF to our `secure-app`, let's try sending a request before we create the backend TLS configuration.
+Using the external IP address and port for the NGINX Service, we can send traffic to our secure-app application. To show what happens if we send plain HTTP traffic from NGINX to our `secure-app`, let's try sending a request before we create the backend TLS configuration.
{{< note >}}If you have a DNS record allocated for `secure-app.example.com`, you can send the request directly to that hostname, without needing to resolve.{{< /note >}}
@@ -216,7 +211,9 @@ We can see we a status 400 Bad Request message from NGINX.
## Create the backend TLS configuration
-To configure the backend TLS terminationm, first we will create the ConfigMap that holds the `ca.crt` entry for verifying our self-signed certificates:
+{{< note >}} This example uses a `ConfigMap` to store the CA certificate, but you can also use a `Secret`. This could be a better option if integrating with [cert-manager](https://cert-manager.io/). The `Secret` should have a `ca.crt` key that holds the contents of the CA certificate. {{< /note >}}
+
+To configure the backend TLS termination, first we will create the ConfigMap that holds the `ca.crt` entry for verifying our self-signed certificates:
```yaml
kubectl apply -f - < -c nginx -- /bin/sh
+kubectl exec -it -n <namespace> <nginx-pod-name> -- /bin/sh
```
----
-
#### Logs
Logs from the NGINX Gateway Fabric control plane and data plane can contain information that isn't available to status or events. These can include errors in processing or passing traffic.
@@ -104,7 +94,7 @@ You can see logs for a crashed or killed container by adding the `-p` flag to th
To see logs for the data plane container:
```shell
- kubectl -n nginx-gateway logs -c nginx
+   kubectl -n <namespace> logs <nginx-pod-name> -c nginx
```
1. Error Logs
@@ -118,13 +108,13 @@ You can see logs for a crashed or killed container by adding the `-p` flag to th
For the _nginx_ container you can `grep` for various [error](https://nginx.org/en/docs/ngx_core_module.html#error_log) logs. For example, to search for all logs logged at the `emerg` level:
```shell
- kubectl -n nginx-gateway logs -c nginx | grep emerg
+   kubectl -n <namespace> logs <nginx-pod-name> -c nginx | grep emerg
```
For example, if a variable is too long, NGINX may display such an error message:
```text
- kubectl logs -n nginx-gateway ngf-nginx-gateway-fabric-bb8598998-jwk2m -c nginx | grep emerg
+ kubectl logs -n dev-env gateway-nginx-bb8598998-jwk2m -c nginx | grep emerg
2024/06/13 20:04:17 [emerg] 27#27: too long parameter, probably missing terminating """ character in /etc/nginx/conf.d/http.conf:78
```
@@ -138,8 +128,6 @@ You can see logs for a crashed or killed container by adding the `-p` flag to th
To modify log levels for the control plane in NGINX Gateway Fabric, edit the `NginxGateway` configuration. This can be done either before or after deploying NGINX Gateway Fabric. Refer to this [guide](https://docs.nginx.com/nginx-gateway-fabric/how-to/control-plane-configuration/) to do so.
To view error logs, set the log level to `error`. Similarly, change the log level to `debug` and `grep` for the word `debug` to view debug logs.
----
-
#### Understanding the generated NGINX configuration
Understanding the NGINX configuration is key for fixing issues because it shows how NGINX handles requests. This helps tweak settings to make sure NGINX behaves the way you want it to for your application. To see your current configuration, you can open a shell in the _nginx_ container by following these [steps](#get-shell-access-to-nginx-container) and run `nginx -T`. To understand the usage of NGINX directives in the configuration file, consult this list of [NGINX directives](https://nginx.org/en/docs/dirindex.html).
@@ -158,7 +146,7 @@ server {
}
```
-Once a HTTPRoute with path matches and rules are defined, nginx.conf is updated accordingly to determine which location block will manage incoming requests. To demonstrate how `nginx.conf` is changed, create some resources:
+Once an HTTPRoute with path matches and rules is defined, nginx.conf is updated accordingly to determine which location block will manage incoming requests. To demonstrate how `nginx.conf` is changed, create some resources:
1. A Gateway with single listener with the hostname `*.example.com` on port 80.
2. A simple `coffee` application.
@@ -284,25 +272,15 @@ Handling connection for 8080
The configuration may change in future releases. This configuration is valid for version 1.3.
{{< /warning >}}
----
-
#### Metrics for troubleshooting
-Metrics can be useful to identify performance bottlenecks and pinpoint areas of high resource consumption within NGINX Gateway Fabric. To set up metrics collection, refer to the [Prometheus Metrics guide]({{< ref "prometheus.md" >}}). The metrics dashboard will help you understand problems with the way NGINX Gateway Fabric is set up or potential issues that could show up with time.
-
-For example, metrics `nginx_reloads_total` and `nginx_reload_errors_total` offer valuable insights into the system's stability and reliability. A high `nginx_reloads_total` value indicates frequent updates or configuration changes, while a high `nginx_reload_errors_total` value suggests issues with the configuration or other problems preventing successful reloads. Monitoring these metrics helps identify and resolve configuration errors, ensuring consistent service reliability.
-
-In such situations, it's advisable to review the logs of both NGINX and NGINX Gateway containers for any potential error messages. Additionally, verify the configured resources to ensure they are in a valid state.
-
----
+Metrics can be useful to identify performance bottlenecks and pinpoint areas of high resource consumption within NGINX Gateway Fabric. To set up metrics collection, refer to the [Prometheus Metrics guide]({{< ref "/ngf/monitoring/prometheus.md" >}}). The metrics dashboard will help you understand problems with the way NGINX Gateway Fabric is set up or potential issues that could show up with time.
#### Access the NGINX Plus Dashboard
If you have NGINX Gateway Fabric installed with NGINX Plus, you can access the NGINX Plus dashboard at `http://localhost:8080/dashboard.html`.
Verify that the port number (for example, `8080`) matches the port number you have port-forwarded to your NGINX Gateway Fabric Pod. For further details, see the [dashboard guide]({{< ref "dashboard.md" >}})
----
-
### Common errors
{{< bootstrap-table "table table-striped table-bordered" >}}
@@ -312,13 +290,11 @@ Verify that the port number (for example, `8080`) matches the port number you ha
| Startup | NGINX Gateway Fabric fails to start. | Check logs for _nginx_ and _nginx-gateway_ containers. | Readiness probe failed. |
| Resources not configured | Status missing on resources. | Check referenced resources. | Referenced resources do not belong to NGINX Gateway Fabric. |
| NGINX errors | Reload failures on NGINX | Fix permissions for control plane. | Security context not configured. |
-| NGINX Plus errors | Failure to start; traffic interruptions | Set up the [NGINX Plus JWT]({{< ref "/ngf/installation/nginx-plus-jwt.md" >}}) | License is not configured or has expired. |
-| Client Settings | Request entity too large error | Adjust client settings. Refer to [Client Settings Policy]({{< relref "../traffic-management/client-settings.md" >}}) | Payload is greater than the [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) value.|
+| NGINX Plus errors | Failure to start; traffic interruptions | Set up the [NGINX Plus JWT]({{< ref "/ngf/install/nginx-plus.md" >}}) | License is not configured or has expired. |
+| Client Settings | Request entity too large error | Adjust client settings. Refer to [Client Settings Policy]({{< ref "/ngf/traffic-management/client-settings.md" >}}) | Payload is greater than the [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size) value.|
{{< /bootstrap-table >}}
----
-
##### NGINX fails to reload
NGINX reload errors can occur for various reasons, including syntax errors in configuration files, permission issues, and more. To determine if NGINX has failed to reload, check logs for your _nginx-gateway_ and _nginx_ containers.
@@ -326,8 +302,6 @@ You will see the following error in the _nginx-gateway_ logs: `failed to reload
To debug why your reload has failed, start with verifying the syntax of your configuration files by opening a shell in the NGINX container following these [steps](#get-shell-access-to-nginx-container) and running `nginx -T`. If there are errors in your configuration file, the reload will fail and specify the reason for it.
----
-
##### NGINX Gateway Fabric Pod is not running or ready
To understand why the NGINX Gateway Fabric Pod has not started running or is not ready, check the state of the Pod to get detailed information about the current status and events happening in the Pod. To do this, use `kubectl describe`:
@@ -336,62 +310,99 @@ To understand why the NGINX Gateway Fabric Pod has not started running or is not
kubectl describe pod <ngf-pod-name> -n nginx-gateway
```
-The Pod description includes details about the image name, tags, current status, and environment variables. Verify that these details match your setup and cross-check with the events to ensure everything is functioning as expected. For example, the Pod below has two containers that are running and the events reflect the same.
+The Pod description includes details about the image name, tags, current status, and environment variables. Verify that these details match your setup and cross-check with the events to ensure everything is functioning as expected. For example, the Pod below has a running nginx-gateway container, and the events reflect the same.
```text
Containers:
nginx-gateway:
- Container ID: containerd://06c97a9de938b35049b7c63e251418395aef65dd1ff996119362212708b79cab
- Image: nginx-gateway-fabric
- Image ID: docker.io/library/import-2024-06-13@sha256:1460d63bd8a352a6e455884d7ebf51ce9c92c512cb43b13e44a1c3e3e6a08918
- Ports: 9113/TCP, 8081/TCP
- Host Ports: 0/TCP, 0/TCP
+ Container ID: containerd://492f380d5919ae2cdca0e009e7a7d5bf4092f8e1910f52d8951d58b73f125646
+ Image: nginx-gateway-fabric:latest
+ Image ID: sha256:c034f1e5bde0490b1f2441e0e9b0bcfce5f2e259bb6210c55d4d67f808a74ecb
+ Ports: 8443/TCP, 9113/TCP, 8081/TCP
+ Host Ports: 0/TCP, 0/TCP, 0/TCP
+ SeccompProfile: RuntimeDefault
+ Args:
+ controller
+ --gateway-ctlr-name=gateway.nginx.org/nginx-gateway-controller
+ --gatewayclass=nginx
+ --config=my-release-config
+ --service=my-release-nginx-gateway-fabric
+ --agent-tls-secret=agent-tls
+ --metrics-port=9113
+ --health-port=8081
+ --leader-election-lock-name=my-release-nginx-gateway-fabric-leader-election
State: Running
- Started: Thu, 13 Jun 2024 11:47:46 -0600
+ Started: Thu, 24 Apr 2025 10:57:16 -0700
Ready: True
Restart Count: 0
Readiness: http-get http://:health/readyz delay=3s timeout=1s period=1s #success=1 #failure=3
Environment:
- POD_IP: (v1:status.podIP)
POD_NAMESPACE: nginx-gateway (v1:metadata.namespace)
- POD_NAME: ngf-nginx-gateway-fabric-66dd665756-zh7d7 (v1:metadata.name)
- nginx:
- Container ID: containerd://c2f3684fd8922e4fac7d5707ab4eb5f49b1f76a48893852c9a812cd6dbaa2f55
- Image: nginx-gateway-fabric/nginx
- Image ID: docker.io/library/import-2024-06-13@sha256:c9a02cb5665c6218373f8f65fc2c730f018d0ca652ae827cc913a7c6e9db6f45
- Ports: 80/TCP, 443/TCP
- Host Ports: 0/TCP, 0/TCP
- State: Running
- Started: Thu, 13 Jun 2024 11:47:46 -0600
- Ready: True
- Restart Count: 0
- Environment:
+ POD_NAME: my-release-nginx-gateway-fabric-b99bd5cdd-qzp5q (v1:metadata.name)
+ POD_UID: (v1:metadata.uid)
+ INSTANCE_NAME: (v1:metadata.labels['app.kubernetes.io/instance'])
+ IMAGE_NAME: nginx-gateway-fabric:latest
+ Mounts:
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5dg45 (ro)
+ /var/run/secrets/ngf from nginx-agent-tls (rw)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
- Normal Scheduled 40s default-scheduler Successfully assigned nginx-gateway/ngf-nginx-gateway-fabric-66dd665756-zh7d7 to kind-control-plane
- Normal Pulled 40s kubelet Container image "nginx-gateway-fabric" already present on machine
- Normal Created 40s kubelet Created container nginx-gateway
- Normal Started 39s kubelet Started container nginx-gateway
- Normal Pulled 39s kubelet Container image "nginx-gateway-fabric/nginx" already present on machine
- Normal Created 39s kubelet Created container nginx
- Normal Started 39s kubelet Started container nginx
+ Normal Scheduled 22s default-scheduler Successfully assigned nginx-gateway/my-release-nginx-gateway-fabric-b99bd5cdd-qzp5q to kind-control-plane
+ Normal Pulled 20s kubelet Container image "nginx-gateway-fabric:latest" already present on machine
+ Normal Created 20s kubelet Created container: nginx-gateway
+ Normal Started 20s kubelet Started container nginx-gateway
```
----
-
-##### Insufficient Privileges errors
-
-Depending on your environment's configuration, the control plane may not have the proper permissions to reload NGINX. The NGINX configuration will not be applied and you will see the following error in the _nginx-gateway_ logs:
+##### NGINX Pod is not running or ready
-`failed to reload NGINX: failed to send the HUP signal to NGINX main: operation not permitted`
+To understand why the NGINX Pod has not started running or is not ready, use `kubectl describe` to get detailed information about its current status and recent events:
-To **resolve** this issue you will need to set `allowPrivilegeEscalation` to `true`.
+```shell
+kubectl describe pod <nginx-pod-name> -n <namespace>
+```
-- If using Helm, you can set the `nginxGateway.securityContext.allowPrivilegeEscalation` value.
-- If using the manifests directly, you can update this field under the `nginx-gateway` container's `securityContext`.
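+To find the `<nginx-pod-name>` and `<namespace>` values, list the Pods in the namespace of your Gateway; the data plane Pod name is prefixed with the Gateway name. A hedged example, assuming a Gateway named `gateway` in the `default` namespace, matching the events shown below:
+```shell
+# List Pods in the Gateway's namespace and locate the one prefixed with the Gateway name
+kubectl get pods -n default
+# Describe the NGINX data plane Pod (the Pod name here is illustrative)
+kubectl describe pod gateway-nginx-85f7f6d7d-fx7q2 -n default
+```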
+The Pod description includes details about the image name, tags, current status, and environment variables. Verify that these details match your setup and cross-check with the events to ensure everything is functioning as expected. For example, the Pod below has the nginx container running, and the events reflect the same.
----
+```text
+Containers:
+ nginx:
+ Container ID: containerd://0dd33fd358ba3b369de315be15b197e369342aba7aa8d3ea12e4455823fa90ce
+ Image: nginx-gateway-fabric/nginx:latest
+ Image ID: sha256:e5cb19bab49cbde6222df607a0946e1e00c1af767263b79ae36e4c69f8547f20
+ Ports: 80/TCP, 9113/TCP
+ Host Ports: 0/TCP, 0/TCP
+ SeccompProfile: RuntimeDefault
+ State: Running
+ Started: Thu, 24 Apr 2025 10:57:36 -0700
+ Ready: True
+ Restart Count: 0
+ Environment:
+ Mounts:
+ /etc/nginx-agent from nginx-agent (rw)
+ /etc/nginx/conf.d from nginx-conf (rw)
+ /etc/nginx/includes from nginx-includes (rw)
+ /etc/nginx/main-includes from nginx-main-includes (rw)
+ /etc/nginx/secrets from nginx-secrets (rw)
+ /etc/nginx/stream-conf.d from nginx-stream-conf (rw)
+ /var/cache/nginx from nginx-cache (rw)
+ /var/lib/nginx-agent from nginx-agent-lib (rw)
+ /var/log/nginx-agent from nginx-agent-log (rw)
+ /var/run/nginx from nginx-run (rw)
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9kph (ro)
+ /var/run/secrets/ngf from nginx-agent-tls (rw)
+ /var/run/secrets/ngf/serviceaccount from token (rw)
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Scheduled 2m57s default-scheduler Successfully assigned default/gateway-nginx-85f7f6d7d-fx7q2 to kind-control-plane
+ Normal Pulled 2m54s kubelet Container image "nginx-gateway-fabric:latest" already present on machine
+ Normal Created 2m54s kubelet Created container: init
+ Normal Started 2m54s kubelet Started container init
+ Normal Pulled 2m53s kubelet Container image "nginx-gateway-fabric/nginx:latest" already present on machine
+ Normal Created 2m53s kubelet Created container: nginx
+ Normal Started 2m53s kubelet Started container nginx
+```
##### NGINX Plus failure to start or traffic interruptions
@@ -409,9 +420,7 @@ nginx: [emerg] License file is required. Download JWT license from MyF5 and conf
nginx: [emerg] license expired
```
-These errors could prevent NGINX Plus from starting or prevent traffic from flowing. To fix these issues, see the [NGINX Plus JWT]({{< ref "/ngf/installation/nginx-plus-jwt.md" >}}) guide.
-
----
+These errors could prevent NGINX Plus from starting or prevent traffic from flowing. To fix these issues, see the [NGINX Plus JWT]({{< ref "/ngf/install/nginx-plus.md" >}}) guide.
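+These license messages appear in the logs of the _nginx_ container in the NGINX data plane Pod. A hedged way to check them (the Pod name and namespace are illustrative):
+```shell
+# View the nginx container logs and look for [emerg] license errors
+kubectl logs gateway-nginx-85f7f6d7d-fx7q2 -c nginx -n default
+```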
##### 413 Request Entity Too Large
@@ -434,9 +443,7 @@ Or view the following error message in the NGINX logs:
```
The request body exceeds the [client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
-To **resolve** this, you can configure the `client_max_body_size` using the `ClientSettingsPolicy` API. Read the [Client Settings Policy]({{< ref "/ngf/how-to/traffic-management/client-settings.md" >}}) documentation for more information.
-
----
+To **resolve** this, you can configure the `client_max_body_size` using the `ClientSettingsPolicy` API. Read the [Client Settings Policy]({{< ref "/ngf/traffic-management/client-settings.md" >}}) documentation for more information.
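+A minimal sketch of such a policy, assuming an HTTPRoute named `my-route` and a 50 MB limit (both values are illustrative):
+```shell
+kubectl apply -f - <<EOF
+apiVersion: gateway.nginx.org/v1alpha1
+kind: ClientSettingsPolicy
+metadata:
+  name: client-max-body-size
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: my-route        # hypothetical Route name
+  body:
+    maxSize: 50m          # raises client_max_body_size for requests on this Route
+EOF
+```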
##### IP Family Mismatch Errors
@@ -472,8 +479,6 @@ To **resolve** this, you can do one of the following:
- Adjust the IPFamily of your Service to match that of the NginxProxy configuration.
----
-
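+For the option of adjusting your Service's IP family, a hedged sketch of a backend Service pinned to IPv4 (the Service name, selector, and ports are illustrative):
+```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-backend
+spec:
+  ipFamilyPolicy: SingleStack
+  ipFamilies:
+  - IPv4                  # match the IPFamily set in your NginxProxy configuration
+  selector:
+    app: my-backend
+  ports:
+  - port: 80
+    targetPort: 8080
+EOF
+```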
##### Policy cannot be applied to target
If you `describe` your Policy and see the following error:
@@ -493,8 +498,6 @@ This means you are attempting to attach a Policy to a Route that has an overlapp
- Combine the Route rules for the overlapping path into a single Route.
- If the Policy allows it, specify both Routes in the `targetRefs` list.
----
-
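+A hedged sketch of specifying both Routes in `targetRefs`, using ObservabilityPolicy as an example of a policy that accepts multiple target references; the Route names and tracing settings are illustrative:
+```shell
+kubectl apply -f - <<EOF
+apiVersion: gateway.nginx.org/v1alpha2
+kind: ObservabilityPolicy
+metadata:
+  name: shared-observability
+spec:
+  targetRefs:             # one policy attached to both overlapping Routes
+  - group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: coffee
+  - group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: tea
+  tracing:
+    strategy: ratio
+    ratio: 75
+EOF
+```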
##### Broken Header error
If you check your _nginx_ container logs and see the following error:
@@ -509,8 +512,6 @@ It indicates that `proxy_protocol` is enabled for the gateway listeners, but the
- Send valid proxy information with requests being handled by your application.
----
-
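+For the option of sending valid proxy information, a hedged example using curl; the `--haproxy-protocol` flag (available since curl 7.60.0) sends a PROXY protocol v1 header ahead of the request, and the hostname and port are illustrative:
+```shell
+# Send a PROXY protocol v1 header before the HTTP request
+curl --haproxy-protocol http://cafe.example.com:8080/coffee
+```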
### See also
You can view the [Kubernetes Troubleshooting Guide](https://kubernetes.io/docs/tasks/debug/debug-application/) for more debugging guidance.
diff --git a/go.sum b/go.sum
index e54ae2fb6..d4f5dc928 100644
--- a/go.sum
+++ b/go.sum
@@ -1,8 +1,2 @@
-github.com/nginxinc/nginx-hugo-theme v0.42.1 h1:SYj7R7fKPYwtbQobTcJWy/ZWQxa5tlHCSJfU2dxYXxY=
-github.com/nginxinc/nginx-hugo-theme v0.42.1/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M=
-github.com/nginxinc/nginx-hugo-theme v0.42.27 h1:D80Sf/o9lR4P0NDFfP/hCQllohz6C5qlJ4nGNfdfnqM=
-github.com/nginxinc/nginx-hugo-theme v0.42.27/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M=
-github.com/nginxinc/nginx-hugo-theme v0.42.28 h1:1SGzBADcXnSqP4rOKEhlfEUloopH6UvMg+XTyVVQyjU=
-github.com/nginxinc/nginx-hugo-theme v0.42.28/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M=
github.com/nginxinc/nginx-hugo-theme v0.42.36 h1:vFBavxB+tw2fs0rLTpA3kYPMdBK15LtZkfkX21kzrDo=
github.com/nginxinc/nginx-hugo-theme v0.42.36/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M=
diff --git a/layouts/shortcodes/version-ngf.html b/layouts/shortcodes/version-ngf.html
index 308b6faa7..359a5b952 100644
--- a/layouts/shortcodes/version-ngf.html
+++ b/layouts/shortcodes/version-ngf.html
@@ -1 +1 @@
-1.6.2
\ No newline at end of file
+2.0.0
\ No newline at end of file
diff --git a/static/ngf/grafana-dashboard.json b/static/ngf/grafana-dashboard.json
index 0c3c40392..fbd5f2963 100644
--- a/static/ngf/grafana-dashboard.json
+++ b/static/ngf/grafana-dashboard.json
@@ -21,7 +21,6 @@
"graphTooltip": 0,
"id": 1,
"links": [],
- "liveNow": false,
"panels": [
{
"collapsed": false,
@@ -31,7 +30,7 @@
"x": 0,
"y": 0
},
- "id": 5,
+ "id": 13,
"panels": [],
"title": "Status",
"type": "row"
@@ -44,69 +43,79 @@
"fieldConfig": {
"defaults": {
"color": {
- "mode": "thresholds"
+ "mode": "palette-classic"
},
- "mappings": [
- {
- "options": {
- "0": {
- "index": 0,
- "text": "Down"
- },
- "1": {
- "index": 1,
- "text": "Up"
- }
- },
- "type": "value"
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "barWidthFactor": 0.6,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
}
- ],
+ },
+ "mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
- "color": "semi-dark-red",
+ "color": "green",
"value": null
},
{
- "color": "#EAB839",
- "value": 1
- },
- {
- "color": "semi-dark-green",
- "value": 1
+ "color": "red",
+ "value": 80
}
]
- },
- "unit": "none",
- "unitScale": true
+ }
},
"overrides": []
},
"gridPos": {
- "h": 4,
- "w": 6,
+ "h": 8,
+ "w": 12,
"x": 0,
"y": 1
},
- "id": 3,
+ "id": 12,
"options": {
- "colorMode": "background",
- "graphMode": "none",
- "justifyMode": "auto",
- "orientation": "horizontal",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": false
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
},
- "showPercentChange": false,
- "textMode": "auto",
- "wideLayout": true
+ "tooltip": {
+ "hideZeros": false,
+ "mode": "single",
+ "sort": "none"
+ }
},
- "pluginVersion": "10.3.3",
+ "pluginVersion": "11.5.2",
"targets": [
{
"datasource": {
@@ -115,38 +124,23 @@
},
"disableTextWrap": false,
"editorMode": "builder",
- "expr": "nginx_gateway_fabric_up{instance=~\"$instance\"}",
+ "expr": "up{instance=~\"$ngf_instance\"}",
"fullMetaSearch": false,
"includeNullMetadata": true,
- "instant": false,
"legendFormat": "",
"range": true,
"refId": "A",
"useBackend": false
}
],
- "title": "NGINX Status for $instance",
- "type": "stat"
- },
- {
- "collapsed": false,
- "gridPos": {
- "h": 1,
- "w": 24,
- "x": 0,
- "y": 5
- },
- "id": 6,
- "panels": [],
- "title": "Metrics",
- "type": "row"
+ "title": "NGF Status",
+ "type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
- "description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -156,11 +150,12 @@
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
- "axisLabel": "Connections (rate)",
+ "axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
+ "barWidthFactor": 0.6,
"drawStyle": "line",
- "fillOpacity": 10,
+ "fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
@@ -170,7 +165,7 @@
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
- "pointSize": 1,
+ "pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
@@ -191,21 +186,23 @@
{
"color": "green",
"value": null
+ },
+ {
+ "color": "red",
+ "value": 80
}
]
- },
- "unit": "reqps",
- "unitScale": true
+ }
},
"overrides": []
},
"gridPos": {
- "h": 10,
+ "h": 8,
"w": 12,
- "x": 0,
- "y": 6
+ "x": 12,
+ "y": 1
},
- "id": 1,
+ "id": 14,
"options": {
"legend": {
"calcs": [],
@@ -214,10 +211,12 @@
"showLegend": true
},
"tooltip": {
+ "hideZeros": false,
"mode": "single",
"sort": "none"
}
},
+ "pluginVersion": "11.5.2",
"targets": [
{
"datasource": {
@@ -225,34 +224,33 @@
"uid": "${DS_PROMETHEUS}"
},
"disableTextWrap": false,
- "editorMode": "code",
- "expr": "irate(nginx_gateway_fabric_connections_accepted{instance=~\"$instance\"}[1m])",
+ "editorMode": "builder",
+ "expr": "up{instance=~\"$nginx_instance\"}",
+ "format": "time_series",
"fullMetaSearch": false,
- "includeNullMetadata": false,
- "instant": false,
- "interval": "",
- "legendFormat": "{{instance}} accepted",
+ "includeNullMetadata": true,
+ "legendFormat": "",
"range": true,
"refId": "A",
"useBackend": false
- },
- {
- "datasource": {
- "type": "prometheus",
- "uid": "${DS_PROMETHEUS}"
- },
- "editorMode": "code",
- "expr": "irate(nginx_gateway_fabric_connections_handled{instance=~\"$instance\"}[1m])",
- "hide": false,
- "instant": false,
- "legendFormat": "{{instance}} handled",
- "range": true,
- "refId": "B"
}
],
- "title": "Processed Connections",
+ "title": "NGINX Status for All",
"type": "timeseries"
},
+ {
+ "collapsed": false,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 9
+ },
+ "id": 6,
+ "panels": [],
+ "title": "Metrics",
+ "type": "row"
+ },
{
"datasource": {
"type": "prometheus",
@@ -271,6 +269,7 @@
"axisLabel": "Connections",
"axisPlacement": "auto",
"barAlignment": 0,
+ "barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
@@ -306,16 +305,15 @@
}
]
},
- "unit": "short",
- "unitScale": true
+ "unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 10,
"w": 12,
- "x": 12,
- "y": 6
+ "x": 0,
+ "y": 10
},
"id": 4,
"options": {
@@ -326,64 +324,83 @@
"showLegend": true
},
"tooltip": {
+ "hideZeros": false,
"mode": "single",
"sort": "none"
}
},
+ "pluginVersion": "11.5.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
- "editorMode": "code",
- "expr": "nginx_gateway_fabric_connections_active{instance=~\"$instance\"}",
+ "disableTextWrap": false,
+ "editorMode": "builder",
+ "exemplar": false,
+ "expr": "irate(nginx_http_connection_count_connections{instance=~\"$nginx_instance\", nginx_connections_outcome=\"ACTIVE\"}[1m])",
+ "fullMetaSearch": false,
+ "includeNullMetadata": true,
"instant": false,
"legendFormat": "{{instance}} active",
"range": true,
- "refId": "A"
+ "refId": "A",
+ "useBackend": false
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
- "editorMode": "code",
- "expr": "nginx_gateway_fabric_connections_reading{instance=~\"$instance\"}",
+ "disableTextWrap": false,
+ "editorMode": "builder",
+ "expr": "irate(nginx_http_connection_count_connections{instance=~\"$nginx_instance\", nginx_connections_outcome=\"READING\"}[1m])",
+ "fullMetaSearch": false,
"hide": false,
+ "includeNullMetadata": true,
"instant": false,
"legendFormat": "{{instance}} reading",
"range": true,
- "refId": "B"
+ "refId": "B",
+ "useBackend": false
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
- "editorMode": "code",
- "expr": "nginx_gateway_fabric_connections_waiting{instance=~\"$instance\"}",
+ "disableTextWrap": false,
+ "editorMode": "builder",
+ "expr": "irate(nginx_http_connection_count_connections{instance=~\"$nginx_instance\", nginx_connections_outcome=\"WAITING\"}[1m])",
+ "fullMetaSearch": false,
"hide": false,
+ "includeNullMetadata": true,
"instant": false,
"legendFormat": "{{instance}} waiting",
"range": true,
- "refId": "C"
+ "refId": "C",
+ "useBackend": false
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
- "editorMode": "code",
- "expr": "nginx_gateway_fabric_connections_writing{instance=~\"$instance\"}",
+ "disableTextWrap": false,
+ "editorMode": "builder",
+ "expr": "irate(nginx_http_connection_count_connections{instance=~\"$nginx_instance\", nginx_connections_outcome=\"WRITING\"}[1m])",
+ "fullMetaSearch": false,
"hide": false,
+ "includeNullMetadata": true,
"instant": false,
"legendFormat": "{{instance}} writing",
"range": true,
- "refId": "D"
+ "refId": "D",
+ "useBackend": false
}
],
- "title": "Active Connections",
+ "title": "NGINX Active Connections",
"type": "timeseries"
},
{
@@ -403,8 +420,9 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
+ "barWidthFactor": 0.6,
"drawStyle": "line",
- "fillOpacity": 10,
+ "fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
@@ -414,7 +432,7 @@
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
- "pointSize": 1,
+ "pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
@@ -435,21 +453,23 @@
{
"color": "green",
"value": null
+ },
+ {
+ "color": "red",
+ "value": 80
}
]
- },
- "unit": "reqps",
- "unitScale": true
+ }
},
"overrides": []
},
"gridPos": {
- "h": 8,
- "w": 24,
- "x": 0,
- "y": 16
+ "h": 10,
+ "w": 12,
+ "x": 12,
+ "y": 10
},
- "id": 2,
+ "id": 11,
"options": {
"legend": {
"calcs": [],
@@ -458,10 +478,12 @@
"showLegend": true
},
"tooltip": {
+ "hideZeros": false,
"mode": "single",
"sort": "none"
}
},
+ "pluginVersion": "11.5.2",
"targets": [
{
"datasource": {
@@ -470,17 +492,29 @@
},
"disableTextWrap": false,
"editorMode": "code",
- "expr": "irate(nginx_gateway_fabric_http_requests_total{instance=~\"$instance\"}[1m])",
+ "expr": "irate(nginx_http_connections_total{instance=~\"$nginx_instance\", nginx_connections_outcome=\"ACCEPTED\"}[1m])",
"fullMetaSearch": false,
- "includeNullMetadata": false,
- "instant": false,
- "legendFormat": "{{instance}} total requests",
+ "includeNullMetadata": true,
+ "legendFormat": "{{instance}} accepted",
"range": true,
"refId": "A",
"useBackend": false
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "expr": "irate(nginx_http_connections_total{instance=~\"$nginx_instance\", nginx_connections_outcome=\"HANDLED\"}[1m])",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "{{instance}} handled",
+ "range": true,
+ "refId": "B"
}
],
- "title": "Total Requests",
+ "title": "NGINX Processed Connections",
"type": "timeseries"
},
{
@@ -500,6 +534,7 @@
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
+ "barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
@@ -535,17 +570,17 @@
}
]
},
- "unitScale": true
+ "unit": "reqps"
},
"overrides": []
},
"gridPos": {
"h": 8,
- "w": 12,
+ "w": 24,
"x": 0,
- "y": 24
+ "y": 20
},
- "id": 8,
+ "id": 2,
"options": {
"legend": {
"calcs": [],
@@ -554,11 +589,12 @@
"showLegend": true
},
"tooltip": {
+ "hideZeros": false,
"mode": "single",
"sort": "none"
}
},
- "pluginVersion": "10.3.3",
+ "pluginVersion": "11.5.2",
"targets": [
{
"datasource": {
@@ -566,185 +602,24 @@
"uid": "${DS_PROMETHEUS}"
},
"disableTextWrap": false,
- "editorMode": "code",
- "expr": "irate(nginx_gateway_fabric_nginx_reloads_total{instance=~\"$instance\"}[1m])",
+ "editorMode": "builder",
+ "expr": "irate(nginx_http_requests_total{instance=~\"$nginx_instance\"}[1m])",
"fullMetaSearch": false,
"includeNullMetadata": false,
"instant": false,
- "legendFormat": "{{instance}}",
+ "legendFormat": "{{instance}} total requests",
"range": true,
"refId": "A",
"useBackend": false
}
],
- "title": "Total NGINX Reloads Rate",
+ "title": "Total Requests",
"type": "timeseries"
- },
- {
- "datasource": {
- "type": "prometheus",
- "uid": "${DS_PROMETHEUS}"
- },
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 1
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 8,
- "w": 6,
- "x": 12,
- "y": 24
- },
- "id": 9,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": false
- },
- "showPercentChange": false,
- "textMode": "auto",
- "wideLayout": true
- },
- "pluginVersion": "10.3.3",
- "targets": [
- {
- "datasource": {
- "type": "prometheus",
- "uid": "${DS_PROMETHEUS}"
- },
- "disableTextWrap": false,
- "editorMode": "builder",
- "expr": "nginx_gateway_fabric_nginx_reload_errors_total{instance=~\"$instance\"}",
- "fullMetaSearch": false,
- "includeNullMetadata": true,
- "instant": false,
- "legendFormat": "{{instance}}",
- "range": true,
- "refId": "A",
- "useBackend": false
- }
- ],
- "title": "Total NGINX Reload Errors",
- "type": "stat"
- },
- {
- "datasource": {
- "type": "prometheus",
- "uid": "${DS_PROMETHEUS}"
- },
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [
- {
- "options": {
- "0": {
- "color": "semi-dark-green",
- "index": 0,
- "text": "Up to date"
- },
- "1": {
- "color": "semi-dark-red",
- "index": 1,
- "text": "Stale"
- }
- },
- "type": "value"
- }
- ],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "semi-dark-red",
- "value": 1
- }
- ]
- },
- "unitScale": true
- },
- "overrides": []
- },
- "gridPos": {
- "h": 8,
- "w": 6,
- "x": 18,
- "y": 24
- },
- "id": 10,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": false
- },
- "showPercentChange": false,
- "textMode": "auto",
- "wideLayout": true
- },
- "pluginVersion": "10.3.3",
- "targets": [
- {
- "datasource": {
- "type": "prometheus",
- "uid": "${DS_PROMETHEUS}"
- },
- "disableTextWrap": false,
- "editorMode": "builder",
- "expr": "nginx_gateway_fabric_nginx_stale_config{instance=~\"$instance\"}",
- "fullMetaSearch": false,
- "includeNullMetadata": true,
- "instant": false,
- "legendFormat": "__auto",
- "range": true,
- "refId": "A",
- "useBackend": false
- }
- ],
- "title": "NGINX Config State",
- "type": "stat"
}
],
+ "preload": false,
"refresh": "5s",
- "schemaVersion": 39,
+ "schemaVersion": 40,
"tags": [
"nginx-gateway-fabric"
],
@@ -752,26 +627,45 @@
"list": [
{
"current": {
- "selected": false,
- "text": "default",
- "value": "default"
+ "text": "prometheus",
+ "value": "aeeumt3huyhogd"
},
- "hide": 0,
"includeAll": false,
"label": "datasource",
- "multi": false,
"name": "DS_PROMETHEUS",
"options": [],
"query": "prometheus",
- "queryValue": "",
"refresh": 1,
"regex": "",
- "skipUrlSync": false,
"type": "datasource"
},
{
"current": {
- "selected": true,
+ "text": "All",
+ "value": [
+ "$__all"
+ ]
+ },
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "definition": "label_values(nginx_http_connections_total,instance)",
+ "includeAll": true,
+ "multi": true,
+ "name": "nginx_instance",
+ "options": [],
+ "query": {
+ "qryType": 1,
+ "query": "label_values(nginx_http_connections_total,instance)",
+ "refId": "PrometheusVariableQueryEditor-VariableQuery"
+ },
+ "refresh": 1,
+ "regex": "",
+ "type": "query"
+ },
+ {
+ "current": {
"text": [
"All"
],
@@ -783,33 +677,31 @@
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
- "definition": "label_values(nginx_gateway_fabric_up,instance)",
- "hide": 0,
+ "definition": "label_values(nginx_gateway_fabric_event_batch_processing_milliseconds_sum,instance)",
"includeAll": true,
+ "label": "ngf_instance",
"multi": true,
- "name": "instance",
+ "name": "ngf_instance",
"options": [],
"query": {
"qryType": 1,
- "query": "label_values(nginx_gateway_fabric_up,instance)",
+ "query": "label_values(nginx_gateway_fabric_event_batch_processing_milliseconds_sum,instance)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
- "skipUrlSync": false,
- "sort": 0,
"type": "query"
}
]
},
"time": {
- "from": "now-15m",
+ "from": "now-5m",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "NGINX Gateway Fabric",
"uid": "cdb1c6f6-7c77-4cee-a177-593f41364dbe",
- "version": 1,
+ "version": 5,
"weekStart": ""
-}
+}
\ No newline at end of file
diff --git a/static/ngf/img/ngf-high-level.png b/static/ngf/img/ngf-high-level.png
deleted file mode 100644
index 6f2bef6f0..000000000
Binary files a/static/ngf/img/ngf-high-level.png and /dev/null differ
diff --git a/static/ngf/img/ngf-pod.png b/static/ngf/img/ngf-pod.png
deleted file mode 100644
index d7b67e5b0..000000000
Binary files a/static/ngf/img/ngf-pod.png and /dev/null differ