This repository contains manifests and Helm values.yaml files to demonstrate the capabilities of Falco, a security observability tool for hosts, containers, and Kubernetes.
- Access to a Kubernetes cluster (e.g. NETWAYS Managed Kubernetes®)
- `kubectl` and `helm` installed on your machine
Clone this repository:
git clone https://github.com/netways-web-services/falco-k8s-demo
cd falco-k8s-demo

Deploy Falco using its Helm chart:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install -n falco falco falcosecurity/falco -f falco-values.yaml

Falco will be installed as a DaemonSet in your cluster. Below is an explanation of the settings configured in falco-values.yaml:
# Housekeeping for faster rollouts during the demo
controller:
  daemonset:
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 3
        maxUnavailable: 0

tty: true # instantly flush captured events to stdout of the Falco daemons

driver:
  enabled: true
  kind: modern_ebpf # Use Falco's most modern, eBPF-based driver
  modernEbpf:
    leastPrivileged: true # Only require the capabilities needed for injecting the eBPF program
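Before continuing, you can optionally check that the DaemonSet has finished rolling out. This step is not part of the repository's walkthrough and assumes the DaemonSet is named falco after the Helm release:

kubectl rollout status -n falco daemonset/falco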
Once Falco is installed, spin up an NGINX pod and get the name of the node it gets scheduled on:

kubectl run nginx --image nginx:latest
kubectl get pod nginx -o yaml | grep nodeName | sed 's/ nodeName: //'

List all Falco daemons and find the one on the same node as the NGINX pod. Start streaming the logs of the Falco daemon in one terminal:
kubectl get pods -n falco -o wide
kubectl logs -f -n falco <falco-pod>
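As a shortcut (not part of the repository's steps), you can also let the API server do the filtering by passing the node name you looked up above as a field selector:

kubectl get pods -n falco -o wide --field-selector spec.nodeName=<node-name>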
In a second terminal, start an interactive shell in the NGINX pod, then issue some additional commands, e.g. running curl against the Kubernetes API or printing the contents of sensitive files with cat:

# Run the first command
kubectl exec -it nginx -- bash
# Then run the other two in the started shell
curl https://kubernetes
cat /etc/shadow

Switch back to the terminal where you are tailing Falco's logs. You should see several events being logged, at least one corresponding to each of the commands executed in the NGINX pod:
08:36:42.509621124: Notice A shell was spawned in a container with an attached terminal | evt_type=execve user=root user_uid=0 user_loginuid=-1 process=bash proc_exepath=/usr/bin/bash parent=systemd command=bash terminal=34816 exe_flags=EXE_WRITABLE|EXE_LOWER_LAYER container_id=b9c0384aa272 container_name=nginx container_image_repository=docker.io/library/nginx container_image_tag=latest k8s_pod_name=nginx k8s_ns_name=default
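The first of these events is produced by Falco's upstream Terminal shell in container rule. For orientation, here is a simplified sketch of what a Falco rule of this kind looks like; it is not the exact upstream definition, just an illustration of the rule syntax (condition macros, output format string, priority):

- rule: Terminal shell in container
  desc: A shell was spawned by a process in a container with an attached terminal
  condition: spawned_process and container and shell_procs and proc.tty != 0
  output: >
    A shell was spawned in a container with an attached terminal
    (user=%user.name container_id=%container.id image=%container.image.repository)
  priority: NOTICE
  tags: [container, shell]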
Falco Sidekick is a connector enabling Falco to send events to more than 50 integrations, e.g. storage backends, notification channels, and more.
Adjust falco-values.yaml to look like this:
# Housekeeping for faster rollouts during the demo
controller:
  daemonset:
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 3
        maxUnavailable: 0

tty: true

driver:
  enabled: true
  kind: modern_ebpf
  modernEbpf:
    leastPrivileged: true

falcosidekick:
  enabled: true
  grafana:
    dashboards:
      enabled: true
  config:
    loki:
      hostport: http://loki
      grafanaDashboard:
        enabled: true
        configMap:
          name: falcosidekick-loki-dashboard-grafana
Then, deploy a simple Loki setup for storing the captured events and Grafana for visualizing them, and update the Falco deployment:

kubectl apply -f loki.yaml
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install -n falco grafana grafana/grafana -f grafana-values.yaml
helm upgrade -n falco falco falcosecurity/falco -f falco-values.yaml
Next, fetch the admin password for Grafana and port-forward its web UI so you can view the captured events:
kubectl get secret -n falco grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
kubectl -n falco port-forward svc/grafana 3000:service

You can now open Grafana at http://localhost:3000 in your browser and log in with username admin and the password you fetched above.
Under Dashboards, you should find a dashboard called Falco logs containing event statistics captured by Falco.
Enter your NGINX pod again and run the same commands. After a few moments, the captured events will show up in the Grafana dashboard.
Note
You might have to adjust the timeframe in the Grafana dashboard to the last 5 minutes to see captured events in all panels right away.
Falco Talon is an early-stage reaction engine that can be used either with Falco directly or integrated with Falco Sidekick. It allows you to run predefined actions whenever Falco captures specified events, like
- shutting down Pods
- creating NetworkPolicies to sandbox workloads
- triggering cloud functions in AWS or GCP
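For illustration, a Talon rules file pairs named actions (each backed by an actionner) with rules that match Falco events. The sketch below is hypothetical: the actionner name kubernetes:networkpolicy and the matched Falco rule name are assumptions, so check the Falco Talon documentation for the exact names and parameters:

- action: Sandbox Pod
  actionner: kubernetes:networkpolicy   # assumed actionner name; see the Talon docs
- rule: Isolate pods with unexpected outbound traffic
  description: Attach a restrictive NetworkPolicy to the offending pod
  match:
    rules:
      - Unexpected outbound connection   # placeholder Falco rule name
  actions:
    - action: Sandbox Pod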
First, adjust your falco-values.yaml to look like this:
# Housekeeping for faster rollouts during the demo
controller:
  daemonset:
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 3
        maxUnavailable: 0

tty: true

driver:
  enabled: true
  kind: modern_ebpf
  modernEbpf:
    leastPrivileged: true

falcosidekick:
  enabled: true
  grafana:
    dashboards:
      enabled: true
  config:
    talon:
      address: http://falco-talon:2803
    loki:
      hostport: http://loki
      grafanaDashboard:
        enabled: true
        configMap:
          name: falcosidekick-loki-dashboard-grafana

responseActions:
  enabled: true

falco-talon:
  config:
    rulesOverride: |
      - rule: Delete containers if shell starts
        description: Force-deletes containers if a shell gets started
        match:
          rules:
            - Terminal shell in container
        actions:
          - action: Terminate Pod
Then, redeploy the Falco Helm chart to install Falco Talon and configure its integration with Falco Sidekick:

helm upgrade -n falco falco falcosecurity/falco -f falco-values.yaml

This configuration sets up Falco Talon so that it terminates every Pod in which a terminal shell is started.
You can test this by starting a new terminal session in your NGINX pod:
kubectl exec -it nginx -- bash

The pod will be terminated. You can verify this by listing all pods in the default namespace - it won't be there anymore:
kubectl get pods -n default

Tetragon is another eBPF-based security observability and enforcement solution. It comes with a steeper learning curve, but offers more fine-grained policies and reactions to captured events.
First, uninstall the components you deployed so far, as they might interfere with what we're trying to observe:
helm uninstall -n falco falco
helm uninstall -n falco grafana
kubectl delete -f loki.yaml

Then, install the Tetragon Helm chart. It will deploy the Tetragon operator, which in turn will spawn, configure, and manage a DaemonSet of Tetragon agents on each node:
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install -n kube-system tetragon cilium/tetragon

Create a new NGINX pod if there is none present in your cluster, and get the node it has been scheduled on:
kubectl run nginx --image nginx:latest
kubectl get pod nginx -o yaml | grep nodeName | sed 's/ nodeName: //'
Start listening for captured events from the NGINX pod in the Tetragon agent on the same node:
kubectl get pods -n kube-system -l app.kubernetes.io/name=tetragon -o wide
kubectl exec -it -n kube-system -c tetragon <tetragon-pod> -- tetra getevents -o compact --pods nginx

In a second terminal, start a new shell in the NGINX Pod and issue a few commands. Check back on the terminal running Tetragon and observe its output:
kubectl exec -it nginx -- bash
# Issue some commands
curl https://kubernetes
cat /etc/shadow

Finally, apply tetragon-policy.yaml to your cluster. It contains a CustomResource that the Tetragon operator reads and turns into configuration for the Tetragon daemons.
This specific policy targets reads of files under /tmp, and kills processes that attempt to do so.
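For reference, a TracingPolicy of this kind typically hooks a kernel function such as security_file_permission and sends SIGKILL when the accessed path matches. The sketch below is only an illustration modelled on Tetragon's upstream file-access examples; the actual tetragon-policy.yaml in this repository may differ:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: kill-tmp-readers
spec:
  kprobes:
    - call: "security_file_permission"
      syscall: false
      args:
        - index: 0
          type: "file"   # the file being accessed
        - index: 1
          type: "int"    # the access mask (MAY_READ = 4)
      selectors:
        - matchArgs:
            - index: 0
              operator: "Prefix"
              values:
                - "/tmp"
            - index: 1
              operator: "Equal"
              values:
                - "4"    # only match read access
          matchActions:
            - action: Sigkill

Now apply the policy from the repository: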
kubectl apply -f tetragon-policy.yaml

If you stopped tailing captured events from the Tetragon agent, start doing so again.
Then, in your terminal session inside the NGINX pod, try writing to a file in /tmp:
echo "Hello Tetragon" > /tmp/hello-world.txtThis should work. Now try printing the contents of the created file:
cat /tmp/hello-world.txt

The process will be killed by Tetragon. You can spot the corresponding event in the output of the Tetragon agent.
In fact, even listing the files under /tmp inside the NGINX pod will not work:
ls /tmp

Tetragon allows you to (re-)act on the process level and define TracingPolicies for virtually any kernel function or syscall, something Falco does not offer.
On the downside, configuring Tetragon correctly requires much deeper knowledge of the Linux kernel.
You can clean up the resources deployed to your cluster by running the commands below.
helm uninstall -n falco falco
helm uninstall -n falco grafana
kubectl delete -f loki.yaml
helm uninstall -n kube-system tetragon
kubectl delete -f tetragon-policy.yaml
kubectl delete pod nginx

Find information about this project's license in LICENSE.