2 changes: 1 addition & 1 deletion latest/ug/nodes/hybrid-nodes-add-ons.adoc
@@ -77,7 +77,7 @@ The sections that follow describe differences between running compatible {aws} a
[#hybrid-nodes-add-ons-core]
== kube-proxy and CoreDNS

EKS installs kube-proxy and CoreDNS as self-managed add-ons by default when you create an EKS cluster with the {aws} API and {aws} SDKs, including from the {aws} CLI. You can overwrite these add-ons with Amazon EKS add-ons after cluster creation. Reference the EKS documentation for details on <<managing-kube-proxy>> and <<managing-coredns>>. If you are running a mixed mode cluster with both hybrid nodes and nodes in {aws} Cloud, it is recommended to have at least one CoreDNS replica on hybrid nodes and at least one CoreDNS replica on your nodes in {aws} Cloud. See <<hybrid-nodes-mixed-coredns>> for configuration steps.
EKS installs kube-proxy and CoreDNS as self-managed add-ons by default when you create an EKS cluster with the {aws} API and {aws} SDKs, including from the {aws} CLI. You can overwrite these add-ons with Amazon EKS add-ons after cluster creation. Reference the EKS documentation for details on <<managing-kube-proxy>> and <<managing-coredns>>. If you are running a mixed mode cluster with both hybrid nodes and nodes in {aws} Cloud, we recommend that you have at least one CoreDNS replica on hybrid nodes and at least one CoreDNS replica on your nodes in {aws} Cloud. See <<hybrid-nodes-mixed-coredns>> for configuration steps.
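
For example, one way to verify where the CoreDNS replicas are scheduled, assuming the standard `k8s-app=kube-dns` label that EKS applies to the CoreDNS pods:

[source,bash,subs="verbatim,attributes"]
----
# List the CoreDNS pods along with the node each replica is scheduled on.
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
----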

[#hybrid-nodes-add-ons-cw]
== CloudWatch Observability agent
2 changes: 1 addition & 1 deletion latest/ug/nodes/hybrid-nodes-concepts-kubernetes.adoc
@@ -166,7 +166,7 @@ metadata:
namespace: kube-system
----

[#hybrid-nodes-concepts-k8s-pod-cidrs]]
[#hybrid-nodes-concepts-k8s-pod-cidrs]
== Routable remote Pod CIDRs

The <<hybrid-nodes-concepts-networking>> page details the requirements to run webhooks on hybrid nodes or to have pods running on cloud nodes communicate with pods running on hybrid nodes. The key requirement is that the on-premises router needs to know which node is responsible for a particular pod IP. There are several ways to achieve this, including Border Gateway Protocol (BGP), static routes, and Address Resolution Protocol (ARP) proxying. These are covered in the following sections.
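
As a minimal illustration of the static-route approach, assuming a Linux-based on-premises router, a hybrid node at `10.200.0.10`, and that node's pod CIDR of `10.201.0.0/24` (all placeholder values):

[source,bash,subs="verbatim,attributes"]
----
# Sketch only: tell the on-premises router that the hybrid node at
# 10.200.0.10 is responsible for the pod CIDR 10.201.0.0/24.
ip route add 10.201.0.0/24 via 10.200.0.10
----
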
4 changes: 2 additions & 2 deletions latest/ug/nodes/hybrid-nodes-creds.adoc
@@ -14,7 +14,7 @@ Prepare credentials to authenticate hybrid nodes with Amazon EKS clusters
Prepare credentials to authenticate hybrid nodes with Amazon EKS clusters
--

Amazon EKS Hybrid Nodes use temporary IAM credentials provisioned by {aws} SSM hybrid activations or {aws} IAM Roles Anywhere to authenticate with the Amazon EKS cluster. You must use either {aws} SSM hybrid activations or {aws} IAM Roles Anywhere with the Amazon EKS Hybrid Nodes CLI (`nodeadm`). You should not use both {aws} SSM hybrid activations and {aws} IAM Roles Anywhere. It is recommended to use {aws} SSM hybrid activations if you do not have existing Public Key Infrastructure (PKI) with a Certificate Authority (CA) and certificates for your on-premises environments. If you do have existing PKI and certificates on-premises, use {aws} IAM Roles Anywhere.
Amazon EKS Hybrid Nodes use temporary IAM credentials provisioned by {aws} SSM hybrid activations or {aws} IAM Roles Anywhere to authenticate with the Amazon EKS cluster. You must use either {aws} SSM hybrid activations or {aws} IAM Roles Anywhere with the Amazon EKS Hybrid Nodes CLI (`nodeadm`). You should not use both {aws} SSM hybrid activations and {aws} IAM Roles Anywhere. We recommend that you use {aws} SSM hybrid activations if you do not have existing Public Key Infrastructure (PKI) with a Certificate Authority (CA) and certificates for your on-premises environments. If you do have existing PKI and certificates on-premises, use {aws} IAM Roles Anywhere.

[#hybrid-nodes-role]
== Hybrid Nodes IAM Role
@@ -45,7 +45,7 @@ By default, {aws} SSM hybrid activations are active for 24 hours. You can altern

See the example below for how to create an {aws} SSM hybrid activation with your Hybrid Nodes IAM role. When you use {aws} SSM hybrid activations for your hybrid nodes credentials, the names of your hybrid nodes will have the format `mi-012345678abcdefgh` and the temporary credentials provisioned by {aws} SSM are valid for 1 hour. You cannot alter the node name or credential duration when using {aws} SSM as your credential provider. The temporary credentials are automatically rotated by {aws} SSM and the rotation does not impact the status of your nodes or applications.

It is recommended to use one {aws} SSM hybrid activation per EKS cluster to scope the {aws} SSM `ssm:DeregisterManagedInstance` permission of the Hybrid Nodes IAM role to only be able to deregister instances that are associated with your {aws} SSM hybrid activation. In the example on this page, a tag with the EKS cluster ARN is used, which can be used to map your {aws} SSM hybrid activation to the EKS cluster. You can alternatively use your preferred tag and method of scoping the {aws} SSM permissions based on your permission boundaries and requirements. The `REGISTRATION_LIMIT` option in the command below is an integer used to limit the number of machines that can use the {aws} SSM hybrid activation (for example `10`)
We recommend that you use one {aws} SSM hybrid activation per EKS cluster to scope the {aws} SSM `ssm:DeregisterManagedInstance` permission of the Hybrid Nodes IAM role so that it can deregister only the instances associated with your {aws} SSM hybrid activation. The example on this page uses a tag with the EKS cluster ARN, which maps your {aws} SSM hybrid activation to the EKS cluster. You can alternatively use your preferred tag and method of scoping the {aws} SSM permissions based on your permission boundaries and requirements. The `REGISTRATION_LIMIT` option in the command below is an integer that limits the number of machines that can use the {aws} SSM hybrid activation (for example, `10`).

[source,bash,subs="verbatim,attributes"]
----
# Sketch of the create-activation call; replace the uppercase placeholders
# with your {aws} Region, Hybrid Nodes IAM role name, EKS cluster ARN, and
# registration limit.
aws ssm create-activation \
    --region AWS_REGION \
    --default-instance-name eks-hybrid-nodes \
    --description "Activation for EKS hybrid nodes" \
    --iam-role HYBRID_NODES_ROLE_NAME \
    --tags Key=EKSClusterARN,Value=EKS_CLUSTER_ARN \
    --registration-limit REGISTRATION_LIMIT
----

44 changes: 34 additions & 10 deletions latest/ug/nodes/hybrid-nodes-networking.adoc
@@ -18,26 +18,48 @@ image::images/hybrid-prereq-diagram.png[Hybrid node network connectivity.,scaled
[#hybrid-nodes-networking-on-prem]
== On-premises networking configuration

*Minimum network requirements*
[#hybrid-nodes-networking-min-reqs]
=== Minimum network requirements

For an optimal experience, {aws} recommends reliable network connectivity of at least 100 Mbps and a maximum of 200ms round trip latency for the hybrid nodes connection to the {aws} Region. The bandwidth and latency requirements can vary depending on the number of hybrid nodes and your workload characteristics such as application image size, application elasticity, monitoring and logging configurations, and application dependencies on accessing data stored in other {aws} services.
For an optimal experience, we recommend reliable network connectivity of at least 100 Mbps and at most 200 ms of round-trip latency between your hybrid nodes and the {aws} Region. This is general guidance that accommodates most use cases, but it is not a strict requirement. The bandwidth and latency requirements can vary depending on the number of hybrid nodes and your workload characteristics, such as application image size, application elasticity, monitoring and logging configurations, and application dependencies on accessing data stored in other {aws} services. We recommend that you test with your own applications and environments before deploying to production to validate that your networking setup meets the requirements for your workloads.

*On-premises node and pod CIDRs*
[#hybrid-nodes-networking-on-prem-cidrs]
=== On-premises node and pod CIDRs

Identify the node and pod CIDRs you will use for your hybrid nodes and the workloads running on them. The node CIDR is allocated from your on-premises network and the pod CIDR is allocated from your Container Network Interface (CNI) if you are using an overlay network for your CNI. You pass your on-premises node CIDRs and optionally pod CIDRs as inputs when you create your EKS cluster with the `RemoteNodeNetwork` and `RemotePodNetwork` fields.
Identify the node and pod CIDRs you will use for your hybrid nodes and the workloads running on them. The node CIDR is allocated from your on-premises network and the pod CIDR is allocated from your Container Network Interface (CNI) if you are using an overlay network for your CNI. You pass your on-premises node CIDRs and pod CIDRs as inputs when you create your EKS cluster with the `RemoteNodeNetwork` and `RemotePodNetwork` fields. Your on-premises node CIDRs must be routable on your on-premises network. See the following section for information on on-premises pod CIDR routability.
Contributor comment: Can we include "Your on-premises node CIDRs must be routable on your on-premises network." as a requirement in the list below, rather than in the paragraph? It's easier to miss in the paragraph.


The on-premises node and pod CIDR blocks must meet the following requirements:

1. Be within one of the following `IPv4` RFC-1918 ranges: `10.0.0.0/8`, `172.16.0.0/12`, or `192.168.0.0/16`.
2. Not overlap with each other, the VPC CIDR for your EKS cluster, or your Kubernetes service `IPv4` CIDR.
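
As an illustration, a minimal sketch of passing these CIDRs at cluster creation with the {aws} CLI. The cluster name, role ARN, subnet IDs, and CIDR values are placeholders; the `remoteNodeNetworks` and `remotePodNetworks` fields belong to the EKS `remoteNetworkConfig` API structure.

[source,bash,subs="verbatim,attributes"]
----
# Sketch only: replace the placeholder name, role ARN, subnets, and CIDRs.
aws eks create-cluster \
    --name my-hybrid-cluster \
    --role-arn arn:aws:iam::111122223333:role/myClusterRole \
    --resources-vpc-config subnetIds=subnet-0abc123,subnet-0def456 \
    --remote-network-config 'remoteNodeNetworks=[{cidrs=[10.200.0.0/16]}],remotePodNetworks=[{cidrs=[10.201.0.0/16]}]'
----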

If your CNI performs Network Address Translation (NAT) for pod traffic as it leaves your on-premises hosts, you do not need to make your pod CIDR routable on your on-premises network or configure your EKS cluster with your _remote pod network_ for hybrid nodes to become ready to workloads. If your CNI does not use NAT for pod traffic as it leaves your on-premises hosts, your pod CIDR must be routable on your on-premises network and you must configure your EKS cluster with your remote pod network for hybrid nodes to become ready to workloads.
[#hybrid-nodes-networking-on-prem-pod-routing]
=== On-premises pod network routing

There are several techniques you can use to make your pod CIDR routable on your on-premises network including Border Gateway Protocol (BGP), static routes, or other custom routing solutions. BGP is the recommended solution as it is more scalable and easier to manage than alternative solutions that require custom or manual route configuration. {aws} supports the BGP capabilities of Cilium and Calico for advertising hybrid nodes pod CIDRs, see <<hybrid-nodes-cni, Configure CNI for hybrid nodes>> for more information.
When using EKS Hybrid Nodes, we generally recommend that you make your on-premises pod CIDRs routable on your on-premises network to enable full cluster communication and functionality between cloud and on-premises environments.

If you are running webhooks on hybrid nodes, your pod CIDR must be routable on your on-premises network and you must configure your EKS cluster with your remote pod network so the EKS control plane can directly communicate with the webhooks running on hybrid nodes. If you cannot make your pod CIDR routable on your on-premises network but need to run webhooks, it is recommended to run webhooks on cloud nodes in the same EKS cluster. For more information on running webhooks on cloud nodes, see <<hybrid-nodes-webhooks, Configure webhooks for hybrid nodes>>.
*Routable pod networks*

*Access required during hybrid node installation and upgrade*
If you are able to make your pod network routable on your on-premises network, follow the guidance below.

1. Configure your EKS cluster's `RemotePodNetwork` field, your VPC route tables, and your EKS cluster security group with your on-premises pod CIDR (see the sketch following this list).
2. There are several techniques you can use to make your on-premises pod CIDR routable on your on-premises network, including Border Gateway Protocol (BGP), static routes, or other custom routing solutions. BGP is the recommended solution because it is more scalable and easier to manage than alternative solutions that require custom or manual route configuration. {aws} supports the BGP capabilities of Cilium and Calico for advertising pod CIDRs. See <<hybrid-nodes-cni>> and <<hybrid-nodes-concepts-k8s-pod-cidrs>> for more information.
3. Webhooks can run on hybrid nodes as the EKS control plane is able to communicate with the Pod IP addresses assigned to the webhooks.
4. Workloads running on cloud nodes are able to communicate directly with workloads running on hybrid nodes in the same EKS cluster.
5. Other {aws} services, such as {aws} Application Load Balancers and Amazon Managed Service for Prometheus, are able to communicate with workloads running on hybrid nodes to balance network traffic and scrape pod metrics.
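
As a sketch of the VPC side of item 1, assuming placeholder route table, virtual private gateway, and security group IDs and a placeholder on-premises pod CIDR:

[source,bash,subs="verbatim,attributes"]
----
# Sketch only: the rtb-, vgw-, and sg- IDs and the CIDR are placeholders.
# Route traffic destined for the on-premises pod CIDR toward on premises.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.201.0.0/16 \
    --gateway-id vgw-0123456789abcdef0

# Allow inbound traffic from the on-premises pod CIDR to the cluster.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 \
    --cidr 10.201.0.0/16
----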

*Unroutable pod networks*

If you are _not_ able to make your pod networks routable on your on-premises network, follow the guidance below.

1. Webhooks cannot run on hybrid nodes because webhooks require connectivity from the EKS control plane to the Pod IP addresses assigned to the webhooks. In this case, we recommend that you run webhooks on cloud nodes in the same EKS cluster as your hybrid nodes. See <<hybrid-nodes-webhooks>> for more information.
2. Workloads running on cloud nodes are not able to communicate directly with workloads running on hybrid nodes when using the VPC CNI for cloud nodes and Cilium or Calico for hybrid nodes.
3. Use Service Traffic Distribution to keep traffic local to the zone it originates from. For more information on Service Traffic Distribution, see <<hybrid-nodes-mixed-service-traffic-distribution>>.
4. Configure your CNI to use egress masquerade or network address translation (NAT) for pod traffic as it leaves your on-premises hosts. This is enabled by default in Cilium. Calico requires `natOutgoing` to be set to `true`, as shown in the sketch after this list.
5. Other {aws} services, such as {aws} Application Load Balancers and Amazon Managed Service for Prometheus, are not able to communicate with workloads running on hybrid nodes.
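
As an illustration of item 4, a minimal sketch of a Calico `IPPool` with `natOutgoing` enabled; the pool name and CIDR are placeholders, and applying `projectcalico.org/v3` resources with `kubectl` assumes the Calico API server is installed (otherwise use `calicoctl`):

[source,bash,subs="verbatim,attributes"]
----
# Sketch only: the pool name and CIDR are placeholder values.
kubectl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: hybrid-nodes-pool
spec:
  cidr: 10.201.0.0/16
  natOutgoing: true
EOF
----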

[#hybrid-nodes-networking-access-reqs]
=== Access required during hybrid node installation and upgrade

You must have access to the following domains during the installation process, when you install the hybrid nodes dependencies on your hosts. You can do this once, when you build your operating system images, or on each host at runtime. This includes initial installation and when you upgrade the Kubernetes version of your hybrid nodes.

@@ -96,7 +118,8 @@ You must have access to the following domains during the installation process w
^2^ Access to the {aws} IAM endpoints are only required if you are using {aws} IAM Roles Anywhere for your on-premises IAM credential provider.
====

*Access required for ongoing cluster operations*
[#hybrid-nodes-networking-access-reqs-ongoing]
=== Access required for ongoing cluster operations

You must allow the following network access through your on-premises firewall for ongoing cluster operations.

@@ -201,7 +224,8 @@ Depending on your choice of CNI, you need to configure additional network access
^1^ The IPs of the EKS cluster. See the following section on Amazon EKS elastic network interfaces.
====

*Amazon EKS network interfaces*
[#hybrid-nodes-networking-eks-network-interfaces]
=== Amazon EKS network interfaces

Amazon EKS attaches network interfaces to the subnets in the VPC you pass during cluster creation to enable the communication between the EKS control plane and your VPC. The network interfaces that Amazon EKS creates can be found after cluster creation in the Amazon EC2 console or with the {aws} CLI. The original network interfaces are deleted and new network interfaces are created when changes are applied on your EKS cluster, such as Kubernetes version upgrades. You can restrict the IP range for the Amazon EKS network interfaces by using constrained subnet sizes for the subnets you pass during cluster creation, which makes it easier to configure your on-premises firewall to allow inbound/outbound connectivity to this known, constrained set of IPs. To control which subnets network interfaces are created in, you can limit the number of subnets you specify when you create a cluster or you can update the subnets after creating the cluster.
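
For example, a sketch of listing these network interfaces with the {aws} CLI, assuming they carry the standard `Amazon EKS <cluster-name>` description and that your cluster is named `my-cluster`:

[source,bash,subs="verbatim,attributes"]
----
# Sketch only: replace my-cluster with the name of your EKS cluster.
aws ec2 describe-network-interfaces \
    --filters Name=description,Values="Amazon EKS my-cluster" \
    --query 'NetworkInterfaces[*].[NetworkInterfaceId,PrivateIpAddress,SubnetId]' \
    --output table
----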

2 changes: 1 addition & 1 deletion latest/ug/nodes/hybrid-nodes-os.adoc
@@ -10,7 +10,7 @@ include::../attributes.txt[]
Prepare operating system for use with Hybrid Nodes
--

Bottlerocket, Ubuntu, Red Hat Enterprise Linux (RHEL), and Amazon Linux 2023 (AL2023) are validated on an ongoing basis for use as the node operating system for hybrid nodes. {aws} supports the hybrid nodes integration with these operating systems but, with the exception of Bottlerocket, does not provide support for the operating systems itself. AL2023 is not covered by {aws} Support Plans when run outside of Amazon EC2. AL2023 can only be used in on-premises virtualized environments, reference the link:linux/al2023/ug/outside-ec2.html[Amazon Linux 2023 User Guide,type="documentation"] for more information.
Bottlerocket, Amazon Linux 2023 (AL2023), Ubuntu, and RHEL are validated on an ongoing basis for use as the node operating system for hybrid nodes. Bottlerocket is supported by {aws} in VMware vSphere environments only. AL2023 is not covered by {aws} Support Plans when run outside of Amazon EC2, and can only be used in on-premises virtualized environments; see the link:linux/al2023/ug/outside-ec2.html[Amazon Linux 2023 User Guide,type="documentation"] for more information. {aws} supports the hybrid nodes integration with Ubuntu and RHEL but does not provide support for the operating systems themselves.

You are responsible for operating system provisioning and management. When you are testing hybrid nodes for the first time, it is easiest to run the Amazon EKS Hybrid Nodes CLI (`nodeadm`) on an already provisioned host. For production deployments, we recommend that you include `nodeadm` in your operating system images with it configured to run as a systemd service to automatically join hosts to Amazon EKS clusters at host startup. If you are using Bottlerocket as your node operating system on vSphere, you do not need to use `nodeadm` as Bottlerocket already contains the dependencies required for hybrid nodes and will automatically connect to the cluster you configure upon host startup.
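
A minimal sketch of running `nodeadm` as a systemd service at host startup; the unit name, binary path, and config file path are placeholder assumptions to adapt to your image build:

[source,bash,subs="verbatim,attributes"]
----
# Sketch only: paths, unit name, and config location are placeholders.
cat <<'EOF' >/etc/systemd/system/nodeadm-init.service
[Unit]
Description=Join this host to an Amazon EKS cluster with nodeadm
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nodeadm init -c file:///etc/nodeadm/nodeConfig.yaml
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl enable nodeadm-init.service
----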
