fix: Invalid index for google_container_cluster.primary.private_cluster_config[0]
#2354
base: main
Conversation
…enable_private_nodes for private clusters
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
In order to account for the creation of firewall rules which make use of the control plane's IP range, it accounts for the following scenarios:

- Create a GKE cluster with no private nodes
- Create a GKE cluster with private nodes and `master_ipv4_cidr_block` set
- Create a GKE cluster with private nodes and `private_endpoint_subnetwork` set
- Create a GKE cluster with private nodes and no options

This is based on the documentation, which states the order of precedence used to define the control plane network: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#create_a_cluster_and_select_the_control_plane_ip_address_range. Please let me know if this is overkill or if perhaps a better solution exists.
```diff
@@ -147,7 +147,13 @@ locals {
 {% if private_cluster %}
   cluster_endpoint           = (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0) ? (var.enable_private_endpoint || var.deploy_using_private_endpoint ? google_container_cluster.primary.private_cluster_config[0].private_endpoint : google_container_cluster.primary.private_cluster_config[0].public_endpoint) : google_container_cluster.primary.endpoint
   cluster_peering_name       = (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0) ? google_container_cluster.primary.private_cluster_config[0].peering_name : null
-  cluster_endpoint_for_nodes = google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block
+  cluster_endpoint_for_nodes = (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0) ? (
```
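The fix applies the standard Terraform guard for optional nested blocks: when a block like `private_cluster_config` is absent, the exported attribute is an empty list, so indexing `[0]` unconditionally fails with `Invalid index`. A minimal generic illustration of the pattern (identifiers other than `private_cluster_config` are illustrative, not the module's code):

```hcl
# Illustrative only: safely read an attribute from a block that may be absent.
# When the block is not set, the exported attribute is an empty list ([]),
# so cfg[0] would fail; the length() check short-circuits to null instead.
locals {
  cfg          = google_container_cluster.primary.private_cluster_config
  peering_name = length(local.cfg) > 0 ? local.cfg[0].peering_name : null
}
```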
`cluster_endpoint_for_nodes` is only used in `firewall.tf.tmpl` resources, which are already gated by `var.add_cluster_firewall_rules`, etc. So it might be simpler to remove this local (which is always evaluated, hence the issue), and instead evaluate directly in those (gated) resources?
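For reference, the suggestion above would look roughly like the following: move the evaluation into a resource that is already gated, so the index is only reached when the firewall rule is actually created (Terraform does not evaluate per-instance expressions when `count` is 0). This is a hypothetical sketch; the resource name, ports, and attributes are illustrative, not the module's actual code:

```hcl
# Hypothetical sketch of evaluating directly inside a gated resource.
resource "google_compute_firewall" "master_to_nodes" {
  count   = var.add_cluster_firewall_rules ? 1 : 0
  name    = "example-master-to-nodes"
  network = var.network

  # With count = 0 this expression is never evaluated, so the
  # always-evaluated local (and its "Invalid index" error) is avoided.
  source_ranges = [
    google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block
  ]

  allow {
    protocol = "tcp"
    ports    = ["443", "10250"]
  }
}
```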
Thanks for the feedback :)

Are cluster firewall rules only relevant when enabling private nodes? Because the module allows their creation even without the flag, which would also necessitate the same logic.

Before 35.0.0, I noticed that `master_ipv4_cidr_block` had a default string value, which I guess prevented this since private clusters always had this option specified. Now it seems that we may need this since there are many possibilities.
Firewall rules can still be created with `var.add_cluster_firewall_rules`, `var.add_master_webhook_firewall_rules`, and `var.add_shadow_firewall_rules`, regardless of the different scenarios which I described in my first comment:

- Create a GKE cluster with no private nodes
- Create a GKE cluster with private nodes and `master_ipv4_cidr_block` set
- Create a GKE cluster with private nodes and `private_endpoint_subnetwork` set
- Create a GKE cluster with private nodes and no options

This is based on the documentation, which states the order of precedence used to define the control plane network: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#create_a_cluster_and_select_the_control_plane_ip_address_range.

Removing `local.cluster_endpoint_for_nodes` and evaluating directly inside the firewall resources wouldn't simplify the code, as it still requires the same logic to determine the correct value for the master IP CIDR range when firewall rules are created.
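An alternative way to express the same guard without repeating the `length()` check is Terraform's built-in `try()`, which returns the first argument that evaluates without an error. A hedged sketch, not taken from the PR:

```hcl
# try() swallows the "Invalid index" error raised when
# private_cluster_config is an empty list, returning null instead.
locals {
  cluster_endpoint_for_nodes = var.enable_private_nodes ? try(
    google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block,
    null
  ) : null
}
```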
Thanks @yuval2313!
An initial thought which might make this change simpler.
@apeabody
/gcbrun
Thanks @yuval2313 - Right now the CI is failing on an unrelated issue; once that is resolved I'll get this tested.
/gcbrun

/gcbrun
From the CI test:
Fixes #2353 :)