Ushadow supports deploying services to Kubernetes clusters in addition to Docker. This document describes the architecture, components, and deployment flow.
```
┌─────────────────────────────────────────────────────────────┐
│ Frontend (React)                                             │
│ - KubernetesClustersPage: Cluster management UI              │
│ - DeployToK8sModal: Service deployment UI                    │
└─────────────────┬───────────────────────────────────────────┘
                  │ HTTPS API
┌─────────────────▼───────────────────────────────────────────┐
│ Backend (FastAPI)                                            │
│ - routers/kubernetes.py: K8s API endpoints                   │
│ - services/kubernetes_manager.py: K8s operations             │
│ - services/compose_registry.py: Service definitions          │
└─────────────────┬───────────────────────────────────────────┘
                  │ Kubernetes Python Client
┌─────────────────▼───────────────────────────────────────────┐
│ Kubernetes Cluster                                           │
│ - Namespace: ushadow (default)                               │
│ - ConfigMaps: Non-sensitive env vars                         │
│ - Secrets: Sensitive env vars                                │
│ - Deployments: Service pods                                  │
│ - Services: Network endpoints                                │
└─────────────────────────────────────────────────────────────┘
```
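The backend reaches each cluster through the official Kubernetes Python client. A minimal sketch of that connection (the helper name `load_cluster` and the kubeconfig path are illustrative; in practice the backend decrypts the stored kubeconfig to a temporary file first):

```python
# Illustrative sketch: build API clients for one registered cluster
# using the official kubernetes Python client.
from kubernetes import client, config

def load_cluster(kubeconfig_path: str, context: str):
    """Load a kubeconfig context and return the core/apps API clients."""
    config.load_kube_config(config_file=kubeconfig_path, context=context)
    return client.CoreV1Api(), client.AppsV1Api()

core_v1, apps_v1 = load_cluster("/tmp/kubeconfig-decrypted.yaml", "dev")
print(core_v1.list_namespace().items[0].metadata.name)
```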
We chose calling the Kubernetes API directly over deploying unode-manager into the cluster:
Benefits:
- Simpler architecture (no additional pods to manage)
- Native K8s features (StatefulSets, Operators, CRDs)
- Better debugging (direct API errors)
- Can add unode-manager-in-k8s later if needed
Trade-offs:
- Different code paths for Docker vs K8s deployments
- K8s-specific manifest generation
```
Service Definition (Compose YAML)
        ↓
ComposeRegistry (parse & register)
        ↓
kubernetes_manager.compile_service_to_k8s()
        ↓
Generated Manifests:
  - ConfigMap (non-sensitive env vars)
  - Secret (sensitive env vars: keys, passwords, tokens)
  - Deployment (pods with envFrom references)
  - Service (ClusterIP/NodePort/LoadBalancer)
  - Ingress (optional)
        ↓
Apply to Kubernetes via Python client
        ↓
Running Pods
```
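The final apply step can be sketched with the official client. The shape of the `manifests` dict is an assumption for illustration; the real `kubernetes_manager` may apply resources differently:

```python
# Hedged sketch of the apply step: compile_service_to_k8s() produces
# manifest dicts, which the official client accepts directly.
from kubernetes import client

def apply_manifests(core_v1: client.CoreV1Api,
                    apps_v1: client.AppsV1Api,
                    namespace: str,
                    manifests: dict) -> None:
    """Create the generated resources in the target namespace."""
    core_v1.create_namespaced_config_map(namespace, manifests["configmap"])
    core_v1.create_namespaced_secret(namespace, manifests["secret"])
    apps_v1.create_namespaced_deployment(namespace, manifests["deployment"])
    core_v1.create_namespaced_service(namespace, manifests["service"])
```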
Separation Strategy (a classification sketch follows the list):
- ConfigMap: Non-sensitive configuration
  - Database URLs (without credentials)
  - Service endpoints
  - Feature flags
  - Public configuration
- Secret: Sensitive data (base64 encoded)
  - API keys (`*_API_KEY`, `*_KEY`)
  - Passwords (`*_PASSWORD`, `*_PASS`)
  - Tokens (`*_TOKEN`)
  - Credentials (`*_CREDENTIALS`, `*_SECRET`)
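A sketch of suffix-based classification, assuming the patterns above are the complete rule set (the real implementation may differ):

```python
# Suffixes that route an env var into a Secret rather than a ConfigMap.
SECRET_SUFFIXES = ("_API_KEY", "_KEY", "_PASSWORD", "_PASS",
                   "_TOKEN", "_CREDENTIALS", "_SECRET")

def split_env(env: dict[str, str]) -> tuple[dict, dict]:
    """Return (configmap_data, secret_data) for a service's env vars."""
    config_data, secret_data = {}, {}
    for name, value in env.items():
        target = secret_data if name.upper().endswith(SECRET_SUFFIXES) else config_data
        target[name] = value
    return config_data, secret_data

# Example: OPENAI_API_KEY lands in the Secret, MONGO_URL in the ConfigMap.
```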
Resolution Order (first match wins; a sketch follows the list):
1. Manual value (from deployment UI)
2. settingsStore suggestion (from user settings)
3. Infrastructure discovery (from cluster scan)
4. Default value (from compose file)
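An illustrative resolution chain; the function and argument names are assumptions, not the actual ushadow API:

```python
def resolve_value(var: str, manual: dict, settings: dict,
                  discovered: dict, defaults: dict) -> str | None:
    """Walk the sources in priority order and return the first match."""
    for source in (manual, settings, discovered, defaults):
        if var in source:
            return source[var]
    return None
```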
Variable Substitution:
Docker Compose variables like `${VAR:-default}` are resolved at deployment time (see the sketch below):
1. Check service env_config
2. Check OS environment
3. Use the default value
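A sketch of Compose-style substitution, assuming only the `${VAR:-default}` form needs handling; `service_env` stands in for the service's env_config:

```python
import os
import re

# Matches ${VAR} and ${VAR:-default}.
PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def substitute(text: str, service_env: dict[str, str]) -> str:
    def repl(m: re.Match) -> str:
        var, default = m.group(1), m.group(2) or ""
        # Resolution order: service env_config, OS environment, default.
        return service_env.get(var) or os.environ.get(var) or default
    return PATTERN.sub(repl, text)

print(substitute("${MEM0_PORT:-3000}", {}))  # "3000" unless MEM0_PORT is set
```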
Multiple Ports Support:

```yaml
# Service has multiple ports
ports: ['3002:3000', '8080:8080']

# Generated container ports with unique names
spec:
  containers:
    - ports:
        - name: http
          containerPort: 3000
        - name: http-2
          containerPort: 8080
```

Port Name Requirements (a naming sketch follows the list):
- Must be unique within a container
- Must match the regex `[a-z0-9]([-a-z0-9]*[a-z0-9])?`
- Max 15 characters
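A sketch of how unique, spec-compliant port names could be derived; the `"http"`, `"http-2"` scheme matches the example above, and the function name is illustrative:

```python
def port_names(container_ports: list[int]) -> list[dict]:
    """Assign a unique, regex-safe name to each container port."""
    ports = []
    for i, port in enumerate(container_ports):
        name = "http" if i == 0 else f"http-{i + 1}"
        ports.append({"name": name[:15], "containerPort": port})  # max 15 chars
    return ports

print(port_names([3000, 8080]))
# [{'name': 'http', 'containerPort': 3000}, {'name': 'http-2', 'containerPort': 8080}]
```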
Scan Process:
1. User adds K8s cluster (with kubeconfig)
2. User clicks "Scan Infrastructure"
3. Backend scans namespace for services:
- mongo/mongodb
- redis
- postgres/postgresql
- qdrant
- neo4j
4. Results cached in cluster document
5. Auto-mapped to service env vars on deployment
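The scan in step 3 can be sketched with name-based matching against the services in the namespace; `KNOWN_INFRA` mirrors the list above, and the matching rule is an assumption:

```python
from kubernetes import client

KNOWN_INFRA = {"mongo": ["mongo", "mongodb"], "redis": ["redis"],
               "postgres": ["postgres", "postgresql"],
               "qdrant": ["qdrant"], "neo4j": ["neo4j"]}

def scan_namespace(core_v1: client.CoreV1Api, namespace: str) -> dict:
    """Look for known infrastructure services and record their endpoints."""
    results = {key: {"found": False, "endpoints": []} for key in KNOWN_INFRA}
    for svc in core_v1.list_namespaced_service(namespace).items:
        for key, aliases in KNOWN_INFRA.items():
            if svc.metadata.name in aliases:
                results[key]["found"] = True
                for port in svc.spec.ports:
                    results[key]["endpoints"].append(
                        f"{svc.metadata.name}.{namespace}.svc.cluster.local:{port.port}")
    return results
```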
Connection String Formats:
- ClusterIP: `{service}.{namespace}.svc.cluster.local:{port}`
- NodePort: `<node-ip>:{nodePort}`
- LoadBalancer: `{lb-ip}:{port}`
Cluster Document:

```
{
  "cluster_id": str,          # Unique ID
  "name": str,                # Display name
  "context": str,             # Kubeconfig context
  "server": str,              # API server URL
  "status": "connected",      # connected | unreachable | unauthorized
  "version": str,             # K8s version
  "node_count": int,          # Number of nodes
  "namespace": str,           # Default namespace
  "infra_scans": {            # Cached scan results
    "ushadow": {
      "mongo": {
        "found": true,
        "endpoints": ["mongo.ushadow.svc.cluster.local:27017"]
      },
      ...
    }
  }
}
```

Deployment Spec:

```
{
  "replicas": int,            # Pod replicas (default: 1)
  "namespace": str,           # Target namespace
  "resources": {              # Resource limits
    "requests": {"cpu": "100m", "memory": "128Mi"},
    "limits": {"cpu": "500m", "memory": "512Mi"}
  },
  "service_type": str,        # ClusterIP | NodePort | LoadBalancer
  "health_check_path": str,   # Health probe path (None = disabled)
  "ingress": {                # Optional ingress config
    "enabled": bool,
    "host": str,
    "path": str,
    "tls": bool
  },
  "annotations": dict,        # Custom annotations
  "labels": dict              # Custom labels
}
```

API Endpoints:
- `POST /api/kubernetes/clusters` - Add cluster
- `GET /api/kubernetes/clusters` - List clusters
- `GET /api/kubernetes/clusters/{id}` - Get cluster
- `DELETE /api/kubernetes/clusters/{id}` - Remove cluster
- `POST /api/kubernetes/{id}/scan-infra` - Scan for infrastructure
- `GET /api/kubernetes/services/available` - List deployable services
- `GET /api/kubernetes/services/infra` - List infrastructure services
- `POST /api/kubernetes/{id}/envmap` - Create ConfigMap/Secret
- `POST /api/kubernetes/{id}/deploy` - Deploy service
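An illustrative call to the deploy endpoint; the base URL, cluster ID, and exact payload shape are assumptions (only the spec fields come from the schema above):

```python
import requests

# Deployment spec fields mirror the Deployment Spec schema above.
spec = {
    "replicas": 1,
    "namespace": "ushadow",
    "service_type": "ClusterIP",
    "health_check_path": None,
}

# Hypothetical cluster ID and payload wrapper for illustration.
resp = requests.post(
    "http://localhost:8000/api/kubernetes/my-cluster/deploy",
    json={"service_name": "mem0-ui", "spec": spec},
)
resp.raise_for_status()
print(resp.json())
```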
Kubeconfig Handling:
- Encrypted at rest using Fernet (derived from the app secret key)
- Stored as `.enc` files in `/config/kubeconfigs/`
- Never sent to the frontend
- Temporary files deleted after use
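A minimal sketch of Fernet encryption at rest, assuming the key is derived from the app secret with SHA-256 (the actual derivation may differ):

```python
import base64
import hashlib
from cryptography.fernet import Fernet

def fernet_for(app_secret: str) -> Fernet:
    """Derive a valid 32-byte urlsafe-base64 Fernet key from the app secret."""
    digest = hashlib.sha256(app_secret.encode()).digest()  # 32 bytes
    return Fernet(base64.urlsafe_b64encode(digest))

f = fernet_for("app-secret-key")
token = f.encrypt(b"apiVersion: v1\nkind: Config\n")
assert f.decrypt(token).startswith(b"apiVersion")
```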
Minimum required permissions for the service account:

```yaml
rules:
  - apiGroups: [""]
    resources: ["namespaces", "configmaps", "secrets", "services"]
    verbs: ["get", "list", "create", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "create", "update", "patch"]
```

Secret Handling:
- Sensitive env vars automatically separated
- Base64 encoded in Kubernetes Secrets
- Never logged in plain text
- Accessed via envFrom in pods
All generated manifests are saved to `/tmp/k8s-manifests/{cluster_id}/{namespace}/`:

```bash
docker exec ushadow-backend ls /tmp/k8s-manifests/
docker exec ushadow-backend cat /tmp/k8s-manifests/{cluster-id}/{namespace}/mem0-ui-deployment.yaml
```

```bash
# Backend logs with full stack traces
docker logs ushadow-backend | grep -A 20 "deployment of"

# K8s deployment status
kubectl get deployments,pods,services -n ushadow
kubectl describe deployment mem0-ui -n ushadow
kubectl logs -f deployment/mem0-ui -n ushadow
```

Image pull errors:

```bash
kubectl describe pod {pod-name} -n ushadow | grep -A 5 "Events:"
```

ConfigMap/Secret issues:

```bash
kubectl get configmaps,secrets -n ushadow
kubectl describe configmap mem0-ui-config -n ushadow
```

Port conflicts:

```bash
# Check generated manifest
docker exec ushadow-backend cat /tmp/k8s-manifests/{cluster-id}/{namespace}/mem0-ui-deployment.yaml | grep -A 10 "ports:"
```

Planned Enhancements:
- Current: All services use Deployments. Future: Database services use StatefulSets with PVCs.
- Current: Direct manifest application. Future: Optional Helm chart generation for complex services.
- Current: Direct deployment. Future: ArgoCD/Flux integration with git-based workflows.
- Current: Single cluster per deployment. Future: Deploy to multiple clusters simultaneously.
- Current: Fixed replica count. Future: HPA (Horizontal Pod Autoscaler) based on metrics.
Symptom: `spec.template.spec.containers[0].ports[1].name: Duplicate value: "http"`
Cause: Multiple ports with the same name in the container spec
Fix: Check the generated manifest; each port must have a unique name
Symptom: `ErrImagePull` or `ImagePullBackOff`
Causes:
- Image doesn't exist
- Registry authentication required
- Network connectivity issues
Fix:

```bash
# Check image
docker pull {image-name}

# Add image pull secret
kubectl create secret docker-registry regcred \
  --docker-server={registry} \
  --docker-username={user} \
  --docker-password={password}
```

Symptom: Pod keeps restarting
Debug:

```bash
kubectl logs {pod-name} -n ushadow --previous
kubectl describe pod {pod-name} -n ushadow
```

Symptom: Pod killed by liveness probe
Fix: Set `health_check_path: null` in the deployment spec to disable health checks for services without health endpoints.
- Kubernetes Python Client
- Kubernetes API Reference
- KUBERNETES_INTEGRATION.md - Implementation details