
Conversation

@sivakami-projects (Contributor) commented Nov 24, 2025

Pipeline to run repeated tests on long-running SwiftV2 AKS clusters.

Test pipeline: tests are scheduled to run every 3 hours on Central US EUAP. Link to Pipeline
Recent test run

Testing Approach
Test Lifecycle (per stage):
- Create 8 pod scenarios with PodNetwork, PodNetworkInstance, and Pods
- Run 9 connectivity tests (HTTP-based)
- Run private endpoint tests (storage access)
- Delete all resources (Phase 1: Pods; Phase 2: PNI/PN/Namespaces)

Node Selection:
- Tests filter nodes by the workload-type=$WORKLOAD_TYPE and nic-capacity labels (see the selector sketch below)
- Ensures isolation between stages for different workload types
- Currently: WORKLOAD_TYPE=swiftv2-linux
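
As a rough illustration of this label-based node selection (a sketch only; the nic-capacity value and the environment-variable defaults are assumptions, not the pipeline's actual settings):

```bash
# Select only nodes carrying both labels used by the current stage.
WORKLOAD_TYPE="${WORKLOAD_TYPE:-swiftv2-linux}"
NIC_CAPACITY="${NIC_CAPACITY:-1}"   # illustrative value

kubectl get nodes \
  -l "workload-type=${WORKLOAD_TYPE},nic-capacity=${NIC_CAPACITY}" \
  -o name
```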

Files Changed
Pipeline Configuration
- pipeline.yaml: Main pipeline with schedule trigger
- long-running-pipeline-template.yaml: Stage definitions with VM SKU constants

Setup Scripts
- create_aks.sh: AKS cluster creation with node labeling
- create_vnets.sh: Customer VNet creation
- create_peerings.sh: VNet peering mesh
- create_storage.sh: Storage accounts with public access disabled (SA1 only)
- create_nsg.sh: NSG rule application with retry logic
- create_pe.sh: Private endpoint and DNS zone setup

Test Code
- datapath.go: Enhanced with node label filtering, private endpoint testing
- datapath_create_test.go: Resource creation scenarios
- datapath_connectivity_test.go: HTTP connectivity validation
- datapath_private_endpoint_test.go: Private endpoint access/isolation tests
- datapath_delete_test.go: Resource cleanup

Documentation
- README.md


sivakami added 2 commits November 24, 2025 08:38
- Implemented scheduled pipeline running every 1 hour with persistent infrastructure
- Split test execution into 2 jobs: Create (with 20min wait) and Delete
- Added 8 test scenarios across 2 AKS clusters, 4 VNets, different subnets
- Implemented two-phase deletion strategy to prevent PNI ReservationInUse errors (see the sketch after this list)
- Added context timeouts on kubectl commands with force delete fallbacks
- Resource naming uses RG name as BUILD_ID for uniqueness across parallel setups
- Added SkipAutoDeleteTill tags to prevent automatic resource cleanup
- Conditional setup stages controlled by runSetupStages parameter
- Auto-generate RG name from location or allow custom names for parallel setups
- Added comprehensive README with setup instructions and troubleshooting
- Node selection by agentpool labels with usage tracking to prevent conflicts
- Kubernetes naming compliance (RFC 1123) for all resources
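
For illustration, a minimal sketch of the two-phase deletion described above (the CRD kind names and the TEST_NAMESPACE variable are assumptions; the actual logic lives in the delete tests):

```bash
NS="${TEST_NAMESPACE:?set the test namespace}"   # hypothetical variable name

# Phase 1: delete Pods first so their delegated-IP reservations are released.
kubectl delete pods --all -n "$NS" --timeout=120s || \
  kubectl delete pods --all -n "$NS" --grace-period=0 --force

# Bounded wait until no pods remain, instead of a fixed sleep.
for _ in $(seq 1 20); do
  [ -z "$(kubectl get pods -n "$NS" -o name)" ] && break
  sleep 5
done

# Phase 2: only now remove PNI, PodNetwork, and the namespace; deleting
# them while pods still hold reservations is what triggers ReservationInUse.
kubectl delete podnetworkinstance --all -n "$NS" --ignore-not-found
kubectl delete podnetwork --all --ignore-not-found
kubectl delete namespace "$NS" --ignore-not-found
```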

fix ginkgo flag.

Add datapath tests.

Delete old test file.

Add test cases for private endpoint.

Ginkgo run specs only on specified files.

update pipeline params.

Add ginkgo tags

Add datapath tests.

Add ginkgo build tags.

remove wait time.

set namespace.

update pod image.

Add more nsg rules to block subnets s1 and s2

test change.

Change delegated subnet address range. Use delegated interface for network connectivity tests.

Datapath test between clusters.

test.

test private endpoints.

fix private endpoint tests.

Set storage account names in output var.

set storage account name.

fix pn names.

update pe

update pe test.

update sas token generation.

Add node labels for sw2 scenario, cleanup pods on any test failure.

enable nsg tests.

update storage.

Add rules to nsg.

disable private endpoint negative test.

disable public network access on storage account with private endpoint.

wait for default nsg to be created.

disable negative test on private endpoint.

private endpoint depends on aks cluster vnets, change pipeline job dependencies.

Add node labels for each workload type and nic capacity.

make sku constant.

Update readme, set schedule for long running cluster on test branch.
Copilot AI left a comment

Pull request overview

This PR adds a comprehensive long-running test pipeline for SwiftV2 pod networking on Azure AKS. The pipeline creates persistent infrastructure (2 AKS clusters, 4 VNets, storage accounts with private endpoints, NSGs) and runs scheduled tests every 3 hours to validate pod-to-pod connectivity, network security group isolation, and private endpoint access across multi-tenant scenarios.

Key Changes:

  • Adds scheduled pipeline with conditional infrastructure setup (runSetupStages parameter)
  • Implements 8 pod test scenarios across 2 clusters and 4 VNets with different NIC capacities
  • Includes 9 connectivity tests and 5 private endpoint tests with tenant isolation validation

Reviewed changes

Copilot reviewed 19 out of 20 changed files in this pull request and generated 8 comments.

| File | Description |
| --- | --- |
| .pipelines/swiftv2-long-running/pipeline.yaml | Main pipeline with 3-hour scheduled trigger and runSetupStages parameter |
| .pipelines/swiftv2-long-running/template/long-running-pipeline-template.yaml | Two-stage template: setup (conditional) and datapath tests with 4 jobs |
| .pipelines/swiftv2-long-running/scripts/*.sh | Infrastructure setup scripts for AKS, VNets, storage, NSGs, and private endpoints |
| test/integration/swiftv2/longRunningCluster/datapath*.go | Test implementation split into create, connectivity, private endpoint, and delete tests |
| test/integration/swiftv2/helpers/az_helpers.go | Azure CLI and kubectl helper functions for resource management |
| test/integration/manifests/swiftv2/long-running-cluster/*.yaml | Kubernetes resource templates for PodNetwork, PNI, and Pods |
| go.mod, go.sum | Updates to support Ginkgo v2 testing framework |
| hack/aks/Makefile | Updates for SwiftV2 cluster creation with multi-tenancy tags |
| .pipelines/swiftv2-long-running/README.md | Comprehensive documentation of pipeline architecture and test scenarios |


Comment on lines 93 to 94
cmd_delegator_curl="'curl -X PUT http://localhost:8080/DelegatedSubnet/$modified_custsubnet'"
cmd_containerapp_exec="az containerapp exec -n subnetdelegator-westus-u3h4j -g subnetdelegator-westus --subscription 9b8218f9-902a-4d20-a65c-e98acec5362f --command $cmd_delegator_curl"
Copilot AI commented Nov 24, 2025

Hardcoded credentials and subscription IDs in the script. The script contains a hardcoded subscription ID 9b8218f9-902a-4d20-a65c-e98acec5362f and references a specific container app subnetdelegator-westus-u3h4j in resource group subnetdelegator-westus. These hardcoded values make the script non-portable and could expose sensitive information. Consider parameterizing these values or using environment variables.

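One way to parameterize these values, as suggested (a sketch; the variable names below are illustrative, not the script's actual parameters):

```bash
# Read the subnet delegator details from the environment instead of hardcoding them.
: "${DELEGATOR_SUBSCRIPTION_ID:?must be set}"
: "${DELEGATOR_APP_NAME:?must be set}"
: "${DELEGATOR_RG:?must be set}"

cmd_delegator_curl="'curl -X PUT http://localhost:8080/DelegatedSubnet/$modified_custsubnet'"
cmd_containerapp_exec="az containerapp exec -n $DELEGATOR_APP_NAME -g $DELEGATOR_RG \
  --subscription $DELEGATOR_SUBSCRIPTION_ID --command $cmd_delegator_curl"
```
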
responseFile="response.txt"
modified_vnet="${vnet_id//\//%2F}"
cmd_stamp_curl="'curl -v -X PUT http://localhost:8080/VirtualNetwork/$modified_vnet/stampcreatorservicename'"
cmd_containerapp_exec="az containerapp exec -n subnetdelegator-westus-u3h4j -g subnetdelegator-westus --subscription 9b8218f9-902a-4d20-a65c-e98acec5362f --command $cmd_stamp_curl"
Copilot AI commented Nov 24, 2025

Same hardcoded credentials issue. The script contains hardcoded subscription ID 9b8218f9-902a-4d20-a65c-e98acec5362f and references to subnetdelegator-westus-u3h4j container app. Consider parameterizing these values.

@sivakami-projects (Contributor Author)

/azp run Azure Container Networking PR

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@sivakami-projects (Contributor Author)

/azp run Azure Container Networking PR

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@sivakami-projects (Contributor Author)

/azp run Azure Container Networking PR

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@sivakami-projects sivakami-projects changed the title Add SwiftV2 long-running pipeline with scheduled tests Datapath tests for Long running clusters. Dec 12, 2025
@sivakami-projects sivakami-projects self-assigned this Dec 12, 2025
@sivakami-projects (Contributor Author)

/azp run Azure Container Networking PR

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@jpayne3506 (Contributor) left a comment

We need to do a pass on other supporting files not included into this PR.

Comment on lines 233 to 235
echo "Waiting 2 minutes for pods to fully start and HTTP servers to be ready..."
sleep 120
echo "Wait period complete, proceeding with connectivity tests"
Contributor

Historically when we have relied on sleep it has resulted in CI/CD failures.

Contributor Author

Removing this.
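
For reference, a bounded readiness wait can replace the fixed two-minute sleep (a sketch; the label selector and namespace are illustrative, and it assumes the pod spec defines a readiness probe for the HTTP server):

```bash
# Wait (bounded) for the test pods to become Ready instead of sleeping 120s.
if ! kubectl wait --for=condition=Ready pod \
    -l app=swiftv2-test -n "$TEST_NAMESPACE" --timeout=300s; then
  echo "Pods did not become Ready within 5 minutes" >&2
  exit 1
fi
```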

# Job 3: Networking & Storage
# ------------------------------------------------------------
- job: NetworkingAndStorage
timeoutInMinutes: 0
Contributor

So, you never want this job to timeout? Max is 6 hours no matter what you set.

Run locally against existing infrastructure:

```bash
export RG="sv2-long-run-centraluseuap" # Match your resource group
```
Contributor

This RG naming is way beyond the max cap I typically see. Is there not a managed cluster that gets paired with this?

Contributor Author

I don't think this is more than the max limit.

Contributor

It's because the managed cluster that is created goes beyond the 80-character limit. IIRC the breaking point is close to ~20 characters due to how the name is duplicated when creating the managed RG. If it works, awesome!

echo "Provisioning finished with state: $state"
break
fi
sleep 6
Contributor

Can we look at leveraging another option besides sleep?
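
For reference, the poll can be bounded with an explicit attempt limit (a sketch; the resource type and variable names are illustrative):

```bash
# Poll the provisioning state a bounded number of times rather than looping on sleep.
for attempt in $(seq 1 30); do
  state=$(az network private-endpoint show -g "$RG" -n "$PE_NAME" \
            --query "provisioningState" -o tsv)
  if [ "$state" = "Succeeded" ] || [ "$state" = "Failed" ]; then
    echo "Provisioning finished with state: $state"
    break
  fi
  echo "Attempt $attempt: state=$state, retrying in 10s..."
  sleep 10
done
```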

- Remove service connection from pipeline parameters
- Update tests to use ginkgo v1
- Replace ginkgo CLI with go test
- Remove fixed sleep timers
- Add MTPNC and pod status verification in test code
- Remove skip delete tags
- Clean up long-running pipeline template
@sivakami-projects (Contributor Author)

/azp run Azure Container Networking PR

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

sivakami added 2 commits December 15, 2025 20:45
Set pipeline vars for delegator app.
Replace fixed and infinite sleeps with bounded retry loops

Optimize kubeconfig management by fetching once and reusing across jobs

add retry for Private endpoint ip to be available.

Remove unnecessary validation.

cleanup.

change kubeconfig paths.

Set kubeconfig.
@sivakami-projects (Contributor Author)

/azp run Azure Container Networking PR

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

- **Example**: PR validation run with Build ID 12345 → `sv2-long-run-12345`

**Important Notes**:
- Always follow the naming pattern for scheduled runs on master: `sv2-long-run-<region>`
Contributor

Is there any instance where you can see multiple runs leveraging this test? If so we also need to add a unique identifier, i.e. build ID to the RG + Cluster naming

Contributor Author

The purpose of this pipeline is to schedule periodic test runs on a single cluster. But yes, people could manually trigger multiple runs, in which case the build ID will be used in the RG name.

Every 3 hours, the pipeline:
1. Skips setup stages (infrastructure already exists)
2. **Job 1 - Create Resources**: Creates 8 test scenarios (PodNetwork, PNI, Pods with HTTP servers on port 8080)
3. **Job 2 - Connectivity Tests**: Tests HTTP connectivity between pods (9 test cases), then waits 20 minutes
Contributor

Curious. What is the purpose of waiting for 20 minutes?

Contributor Author

Updated the readme to reflect the latest tests.

Comment on lines 42 to 66
elif [ "$ACTION" == "delete" ]; then
echo "Removing Storage Blob Data Contributor role from service principal"

for SA in $STORAGE_ACCOUNTS; do
echo "Processing storage account: $SA"
SA_SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG}/providers/Microsoft.Storage/storageAccounts/${SA}"

ASSIGNMENT_ID=$(az role assignment list \
--assignee "$SP_OBJECT_ID" \
--role "Storage Blob Data Contributor" \
--scope "$SA_SCOPE" \
--query "[0].id" -o tsv 2>/dev/null || echo "")

if [ -z "$ASSIGNMENT_ID" ]; then
echo "[OK] No role assignment found for $SA (already deleted or never existed)"
continue
fi

az role assignment delete --ids "$ASSIGNMENT_ID" --output none \
&& echo "[OK] Role removed from service principal for $SA" \
|| echo "[WARNING] Failed to remove role for $SA (may not exist)"
done
fi

echo "RBAC management completed successfully."
Contributor

Can you do a sanity check at the end of the delete to confirm everything was deleted properly? Ideally, anything that would help us go through the CI/CD to confirm that RBAC is not leaking would be beneficial.
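
A sanity check along these lines could be appended after the delete loop (a sketch that reuses the variables from the snippet above):

```bash
# Verify no "Storage Blob Data Contributor" assignments remain for the SP.
LEAKED=0
for SA in $STORAGE_ACCOUNTS; do
  SA_SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG}/providers/Microsoft.Storage/storageAccounts/${SA}"
  REMAINING=$(az role assignment list \
    --assignee "$SP_OBJECT_ID" \
    --role "Storage Blob Data Contributor" \
    --scope "$SA_SCOPE" \
    --query "length(@)" -o tsv)
  if [ "$REMAINING" != "0" ]; then
    echo "[ERROR] $REMAINING role assignment(s) still present on $SA"
    LEAKED=1
  fi
done
[ "$LEAKED" -eq 0 ] && echo "[OK] RBAC sanity check passed: no leaked assignments" || exit 1
```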

Comment on lines 641 to 672
func RunConnectivityTest(test ConnectivityTest) error {
    // Get kubeconfig for the source cluster
    sourceKubeconfig := getKubeconfigPath(test.Cluster)

    // Get kubeconfig for the destination cluster (default to source cluster if not specified)
    destKubeconfig := sourceKubeconfig
    if test.DestCluster != "" {
        destKubeconfig = getKubeconfigPath(test.DestCluster)
    }

    // Get destination pod's eth1 IP (delegated subnet IP for cross-VNet connectivity)
    // This is the IP that is subject to NSG rules, not the overlay eth0 IP
    destIP, err := helpers.GetPodDelegatedIP(destKubeconfig, test.DestNamespace, test.DestinationPod)
    if err != nil {
        return fmt.Errorf("failed to get destination pod delegated IP: %w", err)
    }

    fmt.Printf("Testing connectivity from %s/%s (cluster: %s) to %s/%s (cluster: %s, eth1: %s) on port 8080\n",
        test.SourceNamespace, test.SourcePod, test.Cluster,
        test.DestNamespace, test.DestinationPod, test.DestCluster, destIP)

    // Run curl command from source pod to destination pod using eth1 IP
    // Using -m 3 for 3 second timeout (short because netcat closes connection immediately)
    // Using --interface eth1 to force traffic through delegated subnet interface
    // Using --http0.9 to allow HTTP/0.9 responses from netcat (which sends raw text without proper HTTP headers)
    // Exit code 28 (timeout) is OK if we received data, since netcat doesn't properly close the connection
    curlCmd := fmt.Sprintf("curl --http0.9 --interface eth1 -m 3 http://%s:8080/", destIP)

    output, err := helpers.ExecInPod(sourceKubeconfig, test.SourceNamespace, test.SourcePod, curlCmd)
    // Check if we received data even if curl timed out (exit code 28)
    // Netcat closes the connection without proper HTTP close, causing curl to timeout
    // But if we got the expected response, the connectivity test is successful
Contributor

I'd recommend looking at https://github.com/kubernetes/kubernetes/blob/c180d6762d7ac5059d9b50457cafb0d7f4cf74a9/test/e2e/framework/network/utils.go#L329-L373. Exec-ing + curl is not going to work 100% of the time. I would put up some reasonable guardrails for retries on these flaky operations.

Contributor Author

Yes, HTTP curl is unreliable. Switched to TCP netcat tests. Also, the pods have a TCP server running on port 8080.
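
For reference, a bounded retry around the in-pod TCP probe (a sketch in shell form; the pod name, namespace, destination IP, and the presence of nc in the pod image are assumptions):

```bash
# Retry the TCP probe a few times; a single kubectl exec can fail transiently
# even when connectivity is fine.
DEST_IP="10.1.2.4"   # illustrative delegated (eth1) IP of the destination pod
for attempt in $(seq 1 5); do
  if kubectl exec -n "$SRC_NAMESPACE" "$SRC_POD" -- nc -z -w 3 "$DEST_IP" 8080; then
    echo "TCP connectivity confirmed on attempt $attempt"
    exit 0
  fi
  echo "Attempt $attempt failed, retrying in 5s..."
  sleep 5
done
echo "Connectivity check failed after 5 attempts" >&2
exit 1
```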

go.sum Outdated
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=
Contributor

git checkout master -- go.sum to revert and not have to deal with your IDE.

@sivakami-projects (Contributor Author)

/azp run Azure Container Networking PR

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).
