fix: Invalid index for google_container_cluster.primary.private_cluster_config[0] #2354


Open
wants to merge 5 commits into base: main

Conversation

yuval2313

Fixes #2353 :)

yuval2313 requested review from apeabody, ericyz and a team as code owners May 20, 2025 11:58

google-cla bot commented May 20, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

yuval2313 closed this May 20, 2025
yuval2313 reopened this May 20, 2025
@yuval2313 (Author) commented May 20, 2025

To account for the creation of firewall rules that make use of local.cluster_endpoint_for_nodes, I have modified the logic for determining the value of this local according to the following pseudocode:

IF var.enable_private_nodes == true:
  IF var.private_endpoint_subnetwork != null: RETURN data.google_compute_subnetwork.private_endpoint_subnetwork[0].ip_cidr_range
  ELSE IF var.master_ipv4_cidr_block != null: RETURN google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block
  ELSE: RETURN local.cluster_subnet_cidr
ELSE: RETURN local.cluster_subnet_cidr
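
For reference, here is a rough HCL rendering of that pseudocode; this is a sketch only, reusing the names from the pseudocode and the length guard shown in the diff below, and the actual expression in the PR may differ:

locals {
  # Prefer the private endpoint subnetwork CIDR, then the explicit master CIDR,
  # and otherwise fall back to the cluster subnet CIDR (which is also the value
  # used when private nodes are disabled).
  cluster_endpoint_for_nodes = (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0) ? (
    var.private_endpoint_subnetwork != null
    ? data.google_compute_subnetwork.private_endpoint_subnetwork[0].ip_cidr_range
    : (
      var.master_ipv4_cidr_block != null
      ? google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block
      : local.cluster_subnet_cidr
    )
  ) : local.cluster_subnet_cidr
}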

It accounts for the following scenarios:

  1. Create a GKE cluster with no private nodes
  2. Create a GKE cluster with private nodes and master_ipv4_cidr_block set
  3. Create a GKE cluster with private nodes and private_endpoint_subnetwork set
  4. Create a GKE cluster with private nodes and no options

This is based on the documentation, which states the order of precedence for defining the control plane network: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#create_a_cluster_and_select_the_control_plane_ip_address_range.

Please let me know if this is overkill or if a better solution exists.

@@ -147,7 +147,13 @@ locals {
 {% if private_cluster %}
 cluster_endpoint = (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0) ? (var.enable_private_endpoint || var.deploy_using_private_endpoint ? google_container_cluster.primary.private_cluster_config[0].private_endpoint : google_container_cluster.primary.private_cluster_config[0].public_endpoint) : google_container_cluster.primary.endpoint
 cluster_peering_name = (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0) ? google_container_cluster.primary.private_cluster_config[0].peering_name : null
-cluster_endpoint_for_nodes = google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block
+cluster_endpoint_for_nodes = (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0) ? (
Collaborator:
cluster_endpoint_for_nodes is only used in firewall.tf.tmpl resources, which are already gated by var.add_cluster_firewall_rules, etc. So it might be simpler to remove this local (which is always evaluated, hence the issue) and instead evaluate directly in those (gated) resources?
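
For illustration, the inlining described above might look roughly like the sketch below; the resource name, ports, and the project/network references are assumptions for the example, not copied from firewall.tf.tmpl:

resource "google_compute_firewall" "master_webhooks" {
  # Hypothetical resource for illustration only; the real firewall.tf.tmpl resources differ.
  count   = var.add_cluster_firewall_rules ? 1 : 0
  name    = "gke-${var.name}-webhooks"
  project = var.project_id
  network = var.network

  direction = "INGRESS"

  # Evaluate the control-plane CIDR only inside this gated resource,
  # instead of in an always-evaluated local.
  source_ranges = [
    (var.enable_private_nodes && length(google_container_cluster.primary.private_cluster_config) > 0)
    ? google_container_cluster.primary.private_cluster_config[0].master_ipv4_cidr_block
    : local.cluster_subnet_cidr
  ]

  allow {
    protocol = "tcp"
    ports    = ["8443", "9443", "15017"]
  }
}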

Author:

Thanks for the feedback :)

Are cluster firewall rules only relevant when private nodes are enabled? The module allows their creation even without that flag, which would also necessitate the same logic.

Before 35.0.0, I noticed that master_ipv4_cidr_block had a default string value, which I guess prevented this issue since private clusters always had this option specified.

Now it seems we may need this logic, since there are several possible configurations.

Author:

Firewall rules can still be created with var.add_cluster_firewall_rules, var.add_master_webhook_firewall_rules, and var.add_shadow_firewall_rules, regardless of the different scenarios I described in my first comment:

  1. Create a GKE cluster with no private nodes
  2. Create a GKE cluster with private nodes and master_ipv4_cidr_block set
  3. Create a GKE cluster with private nodes and private_endpoint_subnetwork set
  4. Create a GKE cluster with private nodes and no options

This is based on the documentation, which states the order of precedence for defining the control plane network: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#create_a_cluster_and_select_the_control_plane_ip_address_range.

Removing local.cluster_endpoint_for_nodes and evaluating directly inside the firewall resources wouldn't simplify the code, as it still requires the same logic to determine the correct value for the master IP CIDR range when firewall rules are created.

@apeabody (Collaborator) left a comment

Thanks @yuval2313!

An initial thought that might make this change simpler.

yuval2313 requested a review from apeabody May 21, 2025 07:05
@yuval2313 (Author) commented

@apeabody
Thanks for the code review! I believe the current conditional logic is still necessary, even though local.cluster_endpoint_for_nodes is only used in the gated firewall.tf.tmpl resources. I left a few replies to your review.

@apeabody (Collaborator) commented

/gcbrun

@apeabody (Collaborator) commented

> @apeabody Thanks for the code review! I believe the current conditional logic is still necessary, even though local.cluster_endpoint_for_nodes is only used in the gated firewall.tf.tmpl resources. I left a few replies to your review.

Thanks @yuval2313 - right now the CI is failing on an unrelated issue; once that is resolved I'll get this tested.

@apeabody (Collaborator) commented

/gcbrun

@apeabody (Collaborator) commented

/gcbrun

@apeabody (Collaborator) commented

From the CI test:

        	Error:      	Received unexpected error:
        	            	FatalError{Underlying: error while running command: exit status 1; 
        	            	Error: projects/ci-gke-4f450dec-327e/regions/us-central1/subnetworks/safer-cluster-subnet not found
        	            	
        	            	  with module.example.module.gke.module.gke.data.google_compute_subnetwork.gke_subnetwork,
        	            	  on ../../../modules/beta-private-cluster/networks.tf line 19, in data "google_compute_subnetwork" "gke_subnetwork":
        	            	  19: data "google_compute_subnetwork" "gke_subnetwork" {
        	            	}
        	Test:       	TestSaferClusterIapBastion

Successfully merging this pull request may close these issues.

private-cluster module: Invalid index for google_container_cluster.primary.private_cluster_config[0]