
Set new params #79

Open
wants to merge 51 commits into base: master

Commits (51)
1fe43f6
Add Microsoft Defender option
knutia Oct 11, 2023
25c97c8
Add documentation pileline
knutia Oct 11, 2023
f77fb2a
Fixing default for msd_enable variable
schildwaechter Oct 11, 2023
2005276
terraform-docs: automated action
github-actions[bot] Oct 11, 2023
d4fc370
Merge pull request #1 from TietoEVRY-DataPlatforms/fix-msd-enable
schildwaechter Oct 11, 2023
fee7080
using authorized_ip_ranges in sted of api_server_authorized_ip_ranges
knutia Oct 20, 2023
2f6dc87
Add dependabot
knutia Oct 20, 2023
8bace8d
docker_bridge_cidr is depricated
knutia Oct 20, 2023
0d944d1
Merge pull request #2 from TietoEVRY-DataPlatforms/fix-api_server_aut…
knutia Oct 20, 2023
8b520f4
make api_server_access_profile dynamic based on if api_server_authori…
knutia Nov 9, 2023
55b3105
Merge pull request #3 from TietoEVRY-DataPlatforms/fix_api_server_acc…
knutia Nov 9, 2023
46eb0e1
Added output for node_pool_rg ID
Nov 23, 2023
5b5d865
terraform-docs: automated action
github-actions[bot] Nov 23, 2023
69c4037
Merge pull request #4 from TietoEVRY-DataPlatforms/feat/nodepool_grou…
knutia Nov 23, 2023
2bd6b45
Adding rbac as explciit setting
schildwaechter Jan 19, 2024
772f0fd
Merge pull request #5 from TietoEVRY-DataPlatforms/feat/rbac-enabled-…
schildwaechter Jan 19, 2024
09f23a8
Add Windows maintenance and security updates
marcelgrygar Mar 5, 2024
0dbeb39
terraform-docs: automated action
github-actions[bot] Mar 5, 2024
5427e9c
Update TF module and rewrite dynamic block
marcelgrygar Mar 6, 2024
514fbe9
terraform-docs: automated action
github-actions[bot] Mar 6, 2024
cf88875
Rewriting TF module
marcelgrygar Mar 6, 2024
2cac22d
terraform-docs: automated action
github-actions[bot] Mar 6, 2024
1e84310
Update Module
marcelgrygar Mar 6, 2024
7422a7e
terraform-docs: automated action
github-actions[bot] Mar 6, 2024
a063f21
Add default null
marcelgrygar Mar 6, 2024
9d5e2f8
terraform-docs: automated action
github-actions[bot] Mar 6, 2024
c4a7496
Fix bugs
marcelgrygar Mar 6, 2024
0545c4f
terraform-docs: automated action
github-actions[bot] Mar 6, 2024
97aba31
Merge pull request #7 from TietoEVRY-DataPlatforms/update_automatic_u…
MGr-Sektetor Mar 7, 2024
7155fa0
Add Tuesday as Default
marcelgrygar Mar 7, 2024
afda379
terraform-docs: automated action
github-actions[bot] Mar 7, 2024
749c954
restructure maintenance_window_auto_upgrade and maintenance_window_no…
knutia Mar 8, 2024
961fcf7
terraform-docs: automated action
github-actions[bot] Mar 8, 2024
d7cdf8a
Add missing description
knutia Mar 8, 2024
89ce2fd
terraform-docs: automated action
github-actions[bot] Mar 8, 2024
47a4a83
Merge pull request #8 from TietoEVRY-DataPlatforms/add_day
schildwaechter Mar 8, 2024
6db71b0
Chnaging formats
schildwaechter Mar 8, 2024
30747d7
Merge branch 'master' into kia-autoupdate
schildwaechter Mar 8, 2024
34891d0
terraform-docs: automated action
github-actions[bot] Mar 8, 2024
1b0e3bb
Merge pull request #10 from TietoEVRY-DataPlatforms/kia-autoupdate
schildwaechter Mar 8, 2024
6f17336
Set the automatic_channel_upgrade to disabeld by default
knutia Mar 13, 2024
d93af57
terraform-docs: automated action
github-actions[bot] Mar 13, 2024
d60ec4d
Merge pull request #11 from TietoEVRY-DataPlatforms/automatic_channel…
knutia Mar 13, 2024
032e734
Default max_surge to 33%
knutia Mar 13, 2024
e6df7ba
terraform-docs: automated action
github-actions[bot] Mar 13, 2024
1d33597
Merge pull request #12 from TietoEVRY-DataPlatforms/max_surge_default
knutia Mar 13, 2024
fcec4cb
Fix the upgrade_settings for cluster_node_pools and remove retention_…
knutia Jun 5, 2024
2864a49
Merge pull request #13 from TietoEVRY-DataPlatforms/fix-upgrade_settings
knutia Jun 5, 2024
70ad9fd
Add test for spot on upgrade_settings for nodepools
knutia Jun 5, 2024
5ef6eaf
Merge pull request #14 from TietoEVRY-DataPlatforms/update-setings-spot
knutia Jun 5, 2024
5bc3d32
Add new param
marcelgrygar Jul 4, 2024
13 changes: 13 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,13 @@
version: 2
updates:
  - package-ecosystem: "terraform"
    directory: "/"
    schedule:
      interval: "daily"
      time: "23:00"
      timezone: "Europe/Oslo"
    open-pull-requests-limit: 3
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
19 changes: 19 additions & 0 deletions .github/workflows/documentation.yml
@@ -0,0 +1,19 @@
name: Generate terraform docs
on:
  - pull_request

jobs:
  docs:
    runs-on: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.ref }}

      - name: Render terraform docs and push changes back to PR
        uses: terraform-docs/gh-actions@main
        with:
          working-dir: .
          output-file: README.md
          output-method: inject
          git-push: "true"
22 changes: 22 additions & 0 deletions .terraform.lock.hcl

Some generated files are not rendered by default.

127 changes: 127 additions & 0 deletions README.md

Large diffs are not rendered by default.

140 changes: 115 additions & 25 deletions aks.tf
@@ -29,7 +29,7 @@ locals {
      os_type         = lookup(p, "os_type", local.default_pool_settings.os_type)
      os_disk_size_gb = lookup(p, "os_disk_size_gb", local.default_pool_settings.os_disk_size_gb)
      os_disk_type    = lookup(p, "os_disk_type", local.default_pool_settings.os_disk_type)
-     vnet_subnet_id  = var.create_vnet ? element(concat(azurerm_subnet.k8s_agent_subnet.*.id, [""]), 0) : var.aks_vnet_subnet_id
+     vnet_subnet_id  = var.create_vnet ? element(concat(azurerm_subnet.k8s_agent_subnet[*].id, [""]), 0) : var.aks_vnet_subnet_id
      zones           = lookup(p, "zones", local.default_pool_settings.zones)

      mode = lookup(p, "mode", "User")
@@ -114,15 +114,68 @@ resource "azurerm_subnet" "k8s_agent_subnet" {
}

resource "azurerm_kubernetes_cluster" "k8s_cluster" {
-  name                            = var.cluster_name
-  location                        = var.resource_group_location
-  resource_group_name             = var.resource_group_name
-  dns_prefix                      = var.dns_prefix
-  private_cluster_enabled         = var.private_cluster_enabled
-  private_dns_zone_id             = var.private_dns_zone_id
-  kubernetes_version              = var.k8s_version
-  api_server_authorized_ip_ranges = var.api_server_authorized_ip_ranges
-  automatic_channel_upgrade       = var.automatic_channel_upgrade
+  name                    = var.cluster_name
+  location                = var.resource_group_location
+  resource_group_name     = var.resource_group_name
+  dns_prefix              = var.dns_prefix
+  private_cluster_enabled = var.private_cluster_enabled
+  private_dns_zone_id     = var.private_dns_zone_id
+  kubernetes_version      = var.k8s_version
+  dynamic "api_server_access_profile" {
+    for_each = length(var.api_server_authorized_ip_ranges) != 0 ? [1] : []
+    content {
+      authorized_ip_ranges = var.api_server_authorized_ip_ranges
+    }
+  }
+  automatic_channel_upgrade = var.automatic_channel_upgrade

  dynamic "maintenance_window_auto_upgrade" {
    for_each = var.maintenance_window_auto_upgrade == null ? [] : [var.maintenance_window_auto_upgrade]
    content {
      duration     = maintenance_window_auto_upgrade.value.duration
      frequency    = maintenance_window_auto_upgrade.value.frequency
      interval     = maintenance_window_auto_upgrade.value.interval
      day_of_month = maintenance_window_auto_upgrade.value.day_of_month
      day_of_week  = maintenance_window_auto_upgrade.value.day_of_week
      start_date   = maintenance_window_auto_upgrade.value.start_date
      start_time   = maintenance_window_auto_upgrade.value.start_time
      utc_offset   = maintenance_window_auto_upgrade.value.utc_offset
      week_index   = maintenance_window_auto_upgrade.value.week_index

      dynamic "not_allowed" {
        for_each = maintenance_window_auto_upgrade.value.not_allowed == null ? {} : maintenance_window_auto_upgrade.value.not_allowed
        content {
          end   = not_allowed.value.end
          start = not_allowed.value.start
        }
      }
    }
  }

  node_os_channel_upgrade = var.node_os_channel_upgrade

  dynamic "maintenance_window_node_os" {
    for_each = var.maintenance_window_node_os == null ? [] : [var.maintenance_window_node_os]
    content {
      duration     = maintenance_window_node_os.value.duration
      frequency    = maintenance_window_node_os.value.frequency
      interval     = maintenance_window_node_os.value.interval
      day_of_month = maintenance_window_node_os.value.day_of_month
      day_of_week  = maintenance_window_node_os.value.day_of_week
      start_date   = maintenance_window_node_os.value.start_date
      start_time   = maintenance_window_node_os.value.start_time
      utc_offset   = maintenance_window_node_os.value.utc_offset
      week_index   = maintenance_window_node_os.value.week_index

      dynamic "not_allowed" {
        for_each = maintenance_window_node_os.value.not_allowed == null ? {} : maintenance_window_node_os.value.not_allowed
        content {
          end   = not_allowed.value.end
          start = not_allowed.value.start
        }
      }
    }
  }

linux_profile {
admin_username = var.admin_username
@@ -135,6 +188,8 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {
oidc_issuer_enabled = var.oidc_issuer_enabled
workload_identity_enabled = var.workload_identity_enabled

role_based_access_control_enabled = true

node_resource_group = var.node_resource_group

  # If no aks_vnet_subnet_id is passed, use the newly created subnet ID; otherwise use the passed subnet ID
@@ -144,7 +199,7 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {
vm_size = lookup(var.default_pool, "vm_size", local.default_pool_settings.vm_size)
os_disk_size_gb = lookup(var.default_pool, "os_disk_size_gb", local.default_pool_settings.os_disk_size_gb)
os_disk_type = lookup(var.default_pool, "os_disk_type", local.default_pool_settings.os_disk_type)
-   vnet_subnet_id      = var.create_vnet ? element(concat(azurerm_subnet.k8s_agent_subnet.*.id, [""]), 0) : var.aks_vnet_subnet_id
+   vnet_subnet_id      = var.create_vnet ? element(concat(azurerm_subnet.k8s_agent_subnet[*].id, [""]), 0) : var.aks_vnet_subnet_id
zones = lookup(var.default_pool, "zones", local.default_pool_settings.zones)
type = lookup(var.default_pool, "type", local.default_pool_settings.default_pool_type)
enable_auto_scaling = lookup(var.default_pool, "enable_auto_scaling", true)
@@ -153,6 +208,14 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {
tags = lookup(var.default_pool, "tags", var.tags)
max_pods = lookup(var.default_pool, "max_pods", local.default_pool_settings.max_pods)
orchestrator_version = lookup(var.default_pool, "k8s_version", local.default_pool_settings.k8s_version)

    dynamic "upgrade_settings" {
      for_each = var.max_surge == null ? [] : ["upgrade_settings"]

      content {
        max_surge = var.max_surge
      }
    }
}

dynamic "service_principal" {
@@ -200,10 +263,9 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {
network_plugin = var.aks_network_plugin
network_policy = var.aks_network_policy

-   pod_cidr           = var.aks_pod_cidr
-   service_cidr       = var.aks_service_cidr
-   dns_service_ip     = var.aks_dns_service_ip
-   docker_bridge_cidr = var.aks_docker_bridge_cidr
+   pod_cidr       = var.aks_pod_cidr
+   service_cidr   = var.aks_service_cidr
+   dns_service_ip = var.aks_dns_service_ip

dynamic "load_balancer_profile" {
for_each = var.outbound_type == "loadBalancer" ? [1] : []
@@ -246,6 +308,14 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {
}
}

  dynamic "microsoft_defender" {
    for_each = var.msd_enable ? [1] : []

    content {
      log_analytics_workspace_id = var.msd_workspace_id
    }
  }

dynamic "oms_agent" {
for_each = var.oms_agent_enable ? [1] : []

@@ -256,6 +326,21 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {


tags = var.tags

  # dynamic "lifecycle" {
  #   for_each = lookup(var.default_pool, "enable_auto_scaling", true) ? [1] : []
  #
  #   content {
  #     ignore_changes = [tags]
  #   }
  # }
  # lifecycle {
  #   ignore_changes = [
  #     # Ignore changes to the default node pool's node_count, e.g. because it is managed by enable_auto_scaling
  #     default_node_pool[0].node_count,
  #   ]
  # }

}

resource "azurerm_kubernetes_cluster_node_pool" "aks-node" {
@@ -284,6 +369,21 @@ resource "azurerm_kubernetes_cluster_node_pool" "aks-node" {
priority = each.value.priority
eviction_policy = each.value.eviction_policy
spot_max_price = each.value.spot_max_price

  dynamic "upgrade_settings" {
    for_each = var.max_surge == null || each.value.priority == "Spot" ? [] : ["upgrade_settings"]

    content {
      max_surge = var.max_surge
    }
  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to node_count, e.g. because it is managed by enable_auto_scaling
      node_count,
    ]
  }
}

resource "azurerm_monitor_diagnostic_setting" "aks-diagnostics" {
@@ -298,11 +398,6 @@ resource "azurerm_monitor_diagnostic_setting" "aks-diagnostics" {
    content {
      category = log.key
      enabled  = log.value.enabled
-
-     retention_policy {
-       enabled = log.value.retention.enabled
-       days    = log.value.retention.days
-     }
    }
  }
dynamic "metric" {
@@ -312,11 +407,6 @@
    content {
      category = metric.key
      enabled  = metric.value.enabled
-
-     retention_policy {
-       enabled = metric.value.retention.enabled
-       days    = metric.value.retention.days
-     }
    }
  }
}
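For context, a root-module call exercising the parameters this PR introduces might look like the sketch below. The module source path, the resource names, and the exact field shapes of `maintenance_window_auto_upgrade` are illustrative assumptions, not taken from the module's documented interface.

```hcl
# Hypothetical usage sketch; module source and variable shapes are assumptions.
module "aks" {
  source = "../terraform-azurerm-aks" # assumed local path

  cluster_name            = "demo-aks"
  resource_group_name     = "rg-demo"
  resource_group_location = "norwayeast"
  dns_prefix              = "demo"

  # New in this PR: feeds the dynamic api_server_access_profile block
  api_server_authorized_ip_ranges = ["203.0.113.0/24"]

  # New in this PR: upgrade channels and a weekly maintenance window
  automatic_channel_upgrade = "patch"
  node_os_channel_upgrade   = "NodeImage"
  maintenance_window_auto_upgrade = {
    frequency   = "Weekly"
    interval    = 1
    duration    = 4
    day_of_week = "Tuesday"
    start_time  = "23:00"
    utc_offset  = "+01:00"
    # remaining fields (day_of_month, week_index, not_allowed, ...) assumed nullable
  }

  # New in this PR: surge and Microsoft Defender settings
  max_surge        = "33%"
  msd_enable       = true
  msd_workspace_id = azurerm_log_analytics_workspace.example.id # assumed resource
}
```

Setting `api_server_authorized_ip_ranges` to an empty list would omit the `api_server_access_profile` block entirely, which is the point of making it dynamic.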
10 changes: 10 additions & 0 deletions outputs.tf
@@ -86,3 +86,13 @@ output "private_fqdn" {
output "oidc_issuer_url" {
  value = azurerm_kubernetes_cluster.k8s_cluster.oidc_issuer_url
}

output "node_resource_group" {
  description = "Auto-generated resource group which contains the resources for this managed Kubernetes cluster"
  value       = azurerm_kubernetes_cluster.k8s_cluster.node_resource_group
}

output "node_resource_group_id" {
  description = "ID of the auto-generated resource group which contains the resources for this managed Kubernetes cluster"
  value       = azurerm_kubernetes_cluster.k8s_cluster.node_resource_group_id
}