Applications using modern HTTP clients (such as `uv`, the Rust-based Python package manager) were failing with DNS resolution errors:

```
dns error: failed to lookup address information: Name has no usable address
```
Root Cause:
- The MicroK8s cluster had IPv6 enabled, with a Calico IPv6 IP pool (`fd70:8823:7283::/48`)
- Nodes had global IPv6 addresses but no IPv6 routing to the internet
- Applications attempting IPv6 connections failed with "Network unreachable"
- Some applications (especially Rust-based tools like `uv`) don't gracefully fall back to IPv4
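The missing routing is easy to confirm from a node shell. A quick check, using Google's public DNS address purely as an arbitrary global IPv6 destination:

```shell
# Check whether the node has a usable IPv6 route to the internet.
# 2001:4860:4860::8888 is Google Public DNS, used here only as an arbitrary
# global IPv6 target; any global address works.
ip -6 route get 2001:4860:4860::8888 2>/dev/null || echo "No IPv6 route to the internet"

# Even with a route, actual reachability may be absent; a quick probe:
ping -6 -c 1 -W 2 2001:4860:4860::8888 2>/dev/null || echo "IPv6 connectivity unavailable"
```

A node that holds global IPv6 addresses but prints both fallback messages is exactly the situation that produced the "Network unreachable" errors above.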
```
kubectl delete ippool default-ipv6-ippool
```

This removed IPv6 addressing from pods, leaving only the IPv4 pool (`10.1.0.0/16`).
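Assuming Calico's CRDs are installed (as they are in MicroK8s), the remaining pools can be listed to confirm the deletion took effect:

```shell
# List the remaining Calico IP pools; only the IPv4 pool (10.1.0.0/16) should be left.
kubectl get ippools.crd.projectcalico.org
```

Note that pods which already hold an IPv6 address keep it until they are recreated, so a rollout restart of affected workloads may be needed after deleting the pool.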
CoreDNS is configured to block IPv6 AAAA DNS queries, returning empty responses:

```
# Block IPv6 AAAA queries
template IN AAAA . {
    rcode NOERROR
}
forward . 8.8.8.8 8.8.4.4 {
    prefer_udp
}
```

This ensures DNS only returns IPv4 addresses. However, it doesn't stop applications that have IPv6 enabled from attempting IPv6 connections.
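To confirm the template works, query the cluster DNS for both record types from inside a throwaway pod. This sketch assumes a busybox image recent enough for `nslookup -type`; otherwise use an image that ships `dig`:

```shell
# AAAA should come back empty (NOERROR, no records); A should resolve normally.
kubectl run dns-type-test --rm -i --restart=Never --image=busybox -- \
  sh -c "nslookup -type=AAAA github.com; nslookup -type=A github.com"
```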
Note: host-level sysctl changes affect the host but NOT pod network namespaces!
Created a DaemonSet that disables IPv6 at the operating system level on every node in the cluster:
File: /tmp/disable-ipv6-daemonset.yaml
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disable-ipv6
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: disable-ipv6
  template:
    metadata:
      labels:
        name: disable-ipv6
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
        - name: disable-ipv6
          image: busybox
          command:
            - sh
            - -c
            - |
              echo "🔧 Disabling IPv6 on host..."
              # Disable IPv6 on all interfaces (keep it enabled on loopback)
              nsenter -t 1 -m -u -n -i -- sysctl -w net.ipv6.conf.all.disable_ipv6=1
              nsenter -t 1 -m -u -n -i -- sysctl -w net.ipv6.conf.default.disable_ipv6=1
              nsenter -t 1 -m -u -n -i -- sysctl -w net.ipv6.conf.lo.disable_ipv6=0
              echo "✅ IPv6 disabled on host"
              echo "Current IPv6 status:"
              nsenter -t 1 -m -u -n -i -- sysctl net.ipv6.conf.all.disable_ipv6
              # Make persistent by adding to sysctl.conf
              if ! nsenter -t 1 -m -u -n -i -- grep -q "disable_ipv6" /etc/sysctl.conf 2>/dev/null; then
                echo "Adding to /etc/sysctl.conf for persistence..."
                nsenter -t 1 -m -u -n -i -- sh -c 'echo "net.ipv6.conf.all.disable_ipv6=1" >> /etc/sysctl.conf'
                nsenter -t 1 -m -u -n -i -- sh -c 'echo "net.ipv6.conf.default.disable_ipv6=1" >> /etc/sysctl.conf'
              fi
              echo "Done"
          securityContext:
            privileged: true
      containers:
        - name: pause
          image: gcr.io/google_containers/pause:3.1
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
```

Apply the DaemonSet:
```
kubectl apply -f /tmp/disable-ipv6-daemonset.yaml
```

How it works:

- Runs on ALL nodes - one pod per node (anubis, babel, ra)
- Privileged access - uses `nsenter` to enter the host's namespaces
- Kernel-level disable - sets `net.ipv6.conf.all.disable_ipv6=1` on the host
- Persistent across reboots - writes the settings to `/etc/sysctl.conf`
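After applying, the rollout can be watched until the init container has run on every node:

```shell
# Block until the DaemonSet reports ready on all nodes.
kubectl rollout status daemonset/disable-ipv6 -n kube-system
```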
This is a cluster-wide change:
- Applies to every node in the cluster
- Affects all host-network connections on those nodes (pod network namespaces are isolated and, as shown below, are not covered)
- Persists after node reboots
```
kubectl get daemonset -n kube-system disable-ipv6
```

Expected output:

```
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE
disable-ipv6   3         3         3       3            3
```

```
kubectl get pods -n kube-system -l name=disable-ipv6
```

Check logs from any pod:

```
kubectl logs -n kube-system disable-ipv6-XXXXX -c disable-ipv6
```

Should show:

```
✅ IPv6 disabled on host
Current IPv6 status:
net.ipv6.conf.all.disable_ipv6 = 1
```
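To check every node in one pass rather than pod by pod:

```shell
# Print the tail of the init-container log for each disable-ipv6 pod.
for pod in $(kubectl get pods -n kube-system -l name=disable-ipv6 -o name); do
  echo "--- $pod"
  kubectl logs -n kube-system "$pod" -c disable-ipv6 | tail -n 3
done
```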
Create a test pod to verify that applications can now resolve DNS:

```
kubectl run dns-test --image=busybox --restart=Never -- sh -c "nslookup github.com && nslookup pypi.org"
kubectl logs dns-test
kubectl delete pod dns-test
```
Previously attempted mitigations:

- ✅ CoreDNS template to block AAAA queries - helped, but didn't solve the root issue; some applications bypass CoreDNS or use cached DNS
- ✅ `gai.conf` modification for IPv4 preference - helped for some tools; modified `/etc/gai.conf` with `precedence ::ffff:0:0/96 100`, but this is not respected by all applications (especially Rust-based tools)
- ❌ Calico `FELIX_IPV6SUPPORT` setting - was already set correctly; `FELIX_IPV6SUPPORT=true` is for IPv6 readiness, not routing, and didn't prevent the IPv6 address assignment issues
What Works:
- ✅ DNS queries return IPv4 addresses only (no AAAA records)
- ✅ Busybox/curl/wget can resolve DNS
- ✅ Pods get IPv4-only addresses (no IPv6 from the Calico pool)

What Still Fails:
- ❌ Applications with IPv6 enabled (like the Rust-based `uv`) still attempt IPv6 connections
- ❌ IPv6 cannot be disabled inside pod network namespaces - Kubernetes forbids the required sysctl
- ❌ The Chronicle `no-spacy` image still downloads packages at runtime despite the name
The fundamental issue is NOT just IPv6 - it's that:
- Chronicle images download dependencies at runtime instead of having them baked in
- The Rust-based `uv` package manager doesn't gracefully fall back from IPv6 to IPv4
- Kubernetes security policy prevents using sysctls to disable IPv6 in pods
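The sysctl restriction is easy to demonstrate. `net.ipv6.conf.*` is a namespaced sysctl but not on the kubelet's safe-sysctl list, so a pod that requests it is rejected unless the kubelet is started with `--allowed-unsafe-sysctls=net.ipv6.*`. A sketch (the pod name is arbitrary):

```shell
# This pod requests the IPv6-disable sysctl; expect the kubelet to reject it
# (SysctlForbidden) unless unsafe sysctls have been explicitly allowed.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ipv6-sysctl-test
spec:
  securityContext:
    sysctls:
      - name: net.ipv6.conf.all.disable_ipv6
        value: "1"
  containers:
    - name: main
      image: busybox
      command: ["sleep", "60"]
EOF
kubectl get pod ipv6-sysctl-test
kubectl delete pod ipv6-sysctl-test
```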
Why Host-Level Disable Doesn't Help:
- Each pod has its own isolated network namespace
- Host sysctls don't propagate into pods
- Testing confirms it: `cat /proc/sys/net/ipv6/conf/all/disable_ipv6` returns `0` (IPv6 enabled) inside pods
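The claim is straightforward to verify from inside a throwaway pod:

```shell
# Prints 0 (IPv6 enabled) even though the host reports 1, because the pod's
# network namespace carries its own copy of the sysctl.
kubectl run ipv6-ns-check --rm -i --restart=Never --image=busybox -- \
  cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```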
Find or build a Chronicle image that has ALL dependencies pre-installed:

```
# Image should include spacy and all Python packages
# No runtime downloads needed
```

Set up a local PyPI mirror that's reachable via IPv4:

```
# devpi, bandersnatch, or pypi-mirror
# Configure UV_INDEX_URL to point to the local mirror
```

Try environment variables to force IPv4 (limited success):

```yaml
env:
  - name: FORCE_IPV4
    value: "1"
# May not work with all Rust networking stacks
```
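For the local-mirror option, pointing `uv` at an IPv4-only index is a matter of one environment variable. `pypi-mirror.internal` is a hypothetical hostname; substitute the mirror's actual address:

```shell
# uv reads UV_INDEX_URL the way pip reads --index-url.
export UV_INDEX_URL="http://pypi-mirror.internal/simple"
uv pip install spacy
```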
Partial Success:
- Standard tools (curl, wget, nslookup) work fine
- DNS resolution returns IPv4 only
- Host networking improved

Still Broken:
- Rust-based tools like `uv` that don't fall back to IPv4
- Any application that attempts IPv6 connections even though DNS returns only IPv4 addresses
- The DaemonSet is permanent - it runs continuously to ensure IPv6 stays disabled
- Automatic on new nodes - any node added to the cluster automatically gets IPv6 disabled
- Survives node reboots - settings are written to `/etc/sysctl.conf`
To re-enable IPv6:

```
# Delete the DaemonSet
kubectl delete daemonset disable-ipv6 -n kube-system

# On each node, manually re-enable IPv6 (ssh to the node and run):
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0

# Remove the entries from sysctl.conf
sudo sed -i '/disable_ipv6/d' /etc/sysctl.conf

# Recreate the IPv6 IP pool (if desired)
# You would need the original IPv6 pool configuration
```

Timeline:
- 2026-01-14 - IPv6 IP pool deleted
- 2026-01-14 - DaemonSet deployed and verified on all 3 nodes (anubis, babel, ra)
- anubis (192.168.1.42)
- babel (192.168.1.43)
- ra (192.168.1.44)
All nodes now have IPv6 disabled at the kernel level.