kranix-operator

Kubernetes operator — GitOps-native cluster reconciliation via custom resources.

kranix-operator runs inside your Kubernetes cluster as a controller watching Kranix's Custom Resource Definitions (CRDs). When you apply a KranixApp manifest to the cluster, the operator picks it up and drives the desired state through kranix-core. It is what makes Kranix GitOps-native: commit YAML, the operator reconciles. No manual kubectl apply chains, no out-of-band state.


What it does

  • Installs and watches KranixApp, KranixNamespace, and KranixPolicy CRDs
  • Runs a Kubernetes controller loop using controller-runtime
  • Translates CRD spec changes into kranix-core operations
  • Reports reconciliation status back to the CRD's .status field
  • Supports progressive rollouts, rollback triggers, and health gates
  • Respects RBAC — operates only in namespaces it has been granted access to

Architecture position

Git repo  ──►  kubectl apply  ──►  Kubernetes API server
                                          │
                                    kranix-operator
                                          │
                                    kranix-core
                                          │
                                    kranix-runtime

The operator sits between the Kubernetes API server (where CRD state lives) and kranix-core (where orchestration logic lives). It is the GitOps bridge.


Custom resource definitions

KranixApp

The primary resource. Describes a workload you want Kranix to manage.

apiVersion: kranix.io/v1alpha1
kind: KranixApp
metadata:
  name: api-server
  namespace: production
spec:
  image: myorg/api-server:v1.4.2
  replicas: 3
  namespace: production
  env:
    DATABASE_URL: "postgres://..."
  resources:
    cpu: "500m"
    memory: "512Mi"
  ports:
    - containerPort: 8080
      protocol: TCP
  rollout:
    strategy: RollingUpdate
    maxUnavailable: 1
    healthCheckPath: /healthz
    healthCheckTimeout: 30s
  autoHeal: true
status:
  phase: Running            # Pending | Deploying | Running | Degraded | Failed
  readyReplicas: 3
  lastReconciled: "2025-04-01T12:00:00Z"
  conditions:
    - type: Ready
      status: "True"

KranixNamespace

Declares a namespace that Kranix should manage:

apiVersion: kranix.io/v1alpha1
kind: KranixNamespace
metadata:
  name: staging
spec:
  labels:
    env: staging
  resourceQuota:
    cpu: "4"
    memory: "8Gi"

KranixPolicy

Defines infra policies applied to workloads in a namespace:

apiVersion: kranix.io/v1alpha1
kind: KranixPolicy
metadata:
  name: staging-policy
  namespace: staging
spec:
  enforceResourceLimits: true
  defaultCpuLimit: "500m"
  defaultMemoryLimit: "512Mi"
  allowPrivileged: false
  networkPolicy:
    ingressFrom: ["production"]

Reconciliation loop

For each KranixApp the operator:

  1. Reads the .spec from the Kubernetes API
  2. Calls kranix-core with the desired workload spec
  3. Core computes the diff and drives kranix-runtime
  4. Operator watches for status events from core
  5. Writes observed state back to .status on the CRD
  6. Re-queues after the configured resync period

If a workload enters Degraded or Failed state and autoHeal: true is set, the operator triggers automatic remediation via core.


Project structure

kranix-operator/
├── cmd/
│   └── operator/                 # Entry point (controller-runtime manager)
├── internal/
│   ├── controllers/
│   │   ├── kranixapp_controller.go
│   │   ├── kranixnamespace_controller.go
│   │   └── kranixpolicy_controller.go
│   ├── reconciler/               # Reconciler logic (calls kranix-core)
│   ├── predicates/               # Event filter predicates
│   └── webhooks/                 # Admission webhooks (validation + mutation)
├── api/
│   └── v1alpha1/                 # CRD Go types + generated deepcopy
├── config/
│   ├── crd/                      # CRD YAML manifests (generated by controller-gen)
│   ├── rbac/                     # ClusterRole, ClusterRoleBinding
│   └── manager/                  # Deployment, ServiceAccount manifests
└── tests/
    ├── unit/
    └── e2e/                      # Uses envtest or a real cluster

Getting started

Prerequisites

  • Go 1.22+
  • controller-gen (go install sigs.k8s.io/controller-tools/cmd/controller-gen@latest)
  • A Kubernetes cluster (kind, minikube, or real)
  • kranix-core reachable from inside the cluster

Generate CRD manifests

controller-gen crd rbac:roleName=kranix-operator-role \
  paths="./..." output:crd:artifacts:config=config/crd/bases

(The crd:trivialVersions option was removed in controller-gen v0.6; with the @latest install from the prerequisites, plain crd is correct.)

Install CRDs on your cluster

kubectl apply -f config/crd/bases/

Run locally (out-of-cluster)

git clone https://github.com/kranix-io/kranix-operator
cd kranix-operator
go mod download

KRANIX_CORE_ADDRESS=localhost:50051 \
go run ./cmd/operator --kubeconfig ~/.kube/config

Run tests

# Unit tests
go test ./internal/...

# E2E with envtest
go test ./tests/e2e/... -tags e2e

Deployment

The operator is deployed to the cluster via kranix-charts. Manual install:

kubectl apply -f config/rbac/
kubectl apply -f config/manager/

RBAC requirements

The operator's ServiceAccount needs the following cluster permissions:

rules:
  - apiGroups: ["kranix.io"]
    resources: ["kranixapps", "kranixnamespaces", "kranixpolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["kranix.io"]
    resources: ["kranixapps/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: [""]
    resources: ["namespaces", "events"]
    verbs: ["get", "list", "watch", "create", "patch"]

Configuration

operator:
  resync_period: 30s
  max_concurrent_reconciles: 5
  leader_election: true
  leader_election_namespace: kranix-system

core:
  address: "kranix-core.kranix-system.svc.cluster.local:50051"

metrics:
  port: 8383

health:
  port: 8081

Connectivity

Repo              Relationship
kranix-core       Operator calls core for all reconciliation logic
kranix-charts     Operator is packaged and deployed via Helm charts
kranix-packages   Imports CRD types and shared utilities
Kubernetes API    Operator watches and updates CRDs via controller-runtime

Contributing

See CONTRIBUTING.md. CRD type changes require regenerating deepcopy and CRD YAML via controller-gen. All controllers must have E2E tests using envtest.

License

Apache 2.0 — see LICENSE.
