This repository is part of the Elastisys Welkin® application platform. The platform consists of the following repositories:
- compliantkubernetes-kubespray - Code for managing Kubernetes clusters and the infrastructure around them.
- compliantkubernetes-apps - Code, configuration and tools for running various services and applications on top of Kubernetes clusters.
The Elastisys Welkin® application platform runs two Kubernetes clusters. One called "service" and one called "workload".
The service cluster provides observability, log aggregation, private container registry with vulnerability scanning and authentication using the following services:
- Prometheus and Grafana
- OpenSearch and OpenSearch Dashboards
- Harbor
- Dex
The workload cluster manages the user applications while providing intrusion detection, security policies, log forwarding and monitoring using the following services:
- Falco
- Open Policy Agent
- Fluentd
- Prometheus
This repository installs all the applications of ck8s on top of already created clusters. To set up the clusters, see compliantkubernetes-kubespray. A service cluster (sc) or workload cluster (wc) can be created separately, but not all of the applications will work correctly unless both are running.
We follow the "configuration as code" principle which means that all configuration necessary to configure and operate the platform resides in the CK8S_CONFIG_PATH directory.
There will be four config files: common-config.yaml, wc-config.yaml, sc-config.yaml and secrets.yaml.
We strongly suggest making your config directory part of a git repository so that it is stored safely and so that you can roll back previously applied changes. We additionally suggest that you make Apps a submodule of your config repository in order to properly track which version of Apps you have applied and to make your config repository the single source of truth for your environment.
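As a rough sketch of what that could look like (the submodule path and remote URL are only illustrative, adapt them to your own setup):

```bash
# Turn the configuration directory into a git repository (sketch only).
cd "${CK8S_CONFIG_PATH}"
git init
git add .
git commit -m "Initial Welkin apps configuration"

# Track the exact Apps version you apply by adding it as a submodule
# (path and URL are illustrative):
git submodule add https://github.com/elastisys/compliantkubernetes-apps.git compliantkubernetes-apps
git commit -m "Add compliantkubernetes-apps as a submodule"
```

Since secrets.yaml is encrypted with SOPS (see further down), it can be committed as well.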
All operations are done through the ./bin/ck8s command line tool. Run ./bin/ck8s help for a complete set of possible commands.
For more information please read our public documentation:
See the Quickstart for instructions on how to initialize the repo.
Currently we support the following cloud providers:
- AWS
- Azure
- Citycloud/Cleura
- Elastx
- Exoscale
- OpenStack
- Safespring
- UpCloud
In addition to the above, we support running Welkin on bare metal (beta).
The apps are installed using a combination of helm charts and manifests with the help of helmfile and some bash scripts.
To operate compliantkubernetes-apps some tools need to be installed. They are declared in the file REQUIREMENTS as PURLs.
Install the requirements to use compliantkubernetes-apps:
./bin/ck8s install-requirements

Note that you will need a service and workload cluster. See DEVELOPMENT.md.
Configuration secrets in ck8s are encrypted using SOPS. We currently only support using PGP when encrypting secrets. Because of this, before you can start using ck8s, you need to generate your own PGP key:
gpg --full-generate-key

Note that it's generally preferable to generate and store your primary key and revocation certificate offline. That way you can make sure you're able to revoke keys in case they get lost, or worse yet, accessed by someone that's not you.
Instead, create subkeys for specific devices, such as the laptop you use for encryption and/or signing.
If this is all new to you, here's a link worth reading!
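As a small sketch of how to find and export the fingerprint afterwards (the awk filter just picks the first secret key; adjust it if you have several):

```bash
# List secret keys together with their long key IDs and fingerprints
gpg --list-secret-keys --keyid-format=long

# Export the fingerprint of the first secret key for later use with ck8s
export CK8S_PGP_FP="$(gpg --list-secret-keys --with-colons | awk -F: '/^fpr:/ { print $10; exit }')"
```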
Note
You probably want to check the compliantkubernetes-kubespray repository first, since compliantkubernetes-apps depends on having two clusters already set up.
Note
Depending on your configuration of the clusters and OIDC, you might not have access to the workload cluster before installing Dex in the service cluster. You would then have to install Apps in each cluster separately, starting with the service cluster.
- Decide on a name for this environment, the cloud provider to use, as well as the flavor, and set them as environment variables. Note that these will later be kept as global values in the common defaults config to prevent them from being inadvertently changed, as they affect the default options of the configuration when generated or updated. To change them, remove the common defaults config, set the new environment variables, and then generate a new configuration.

export CK8S_ENVIRONMENT_NAME=my-ck8s-cluster
export CK8S_FLAVOR=[dev|prod|air-gapped] # defaults to dev

# If 'none', no infra provider tailored configuration will be performed!
export CK8S_CLOUD_PROVIDER=[exoscale|safespring|citycloud|elastx|upcloud|azure|aws|baremetal|openstack|none]
export CK8S_K8S_INSTALLER=[kubespray|capi] # set this to whichever installer was used for the kubernetes layer
Note
The air-gapped flavor has a lot of the same settings as the prod flavor but with some additional variables that you need to configure yourself (these are set to `set-me`).
- Then set the path to where the ck8s configuration should be stored and the PGP fingerprint of the key(s) to use for encryption:

export CK8S_CONFIG_PATH=${HOME}/.ck8s/my-ck8s-cluster
export CK8S_PGP_FP=<PGP-fingerprint1,PGP-fingerprint2,...>
- Initialize your environment and configuration:

./bin/ck8s init both

Note that the configuration is split between read-only default configs found in the `defaults/` directory, and the override configs `common-config.yaml`, `sc-config.yaml` and `wc-config.yaml`, which are editable and will override any default value. The `common-config.yaml` will be applied to both the service and workload cluster, although it will be overridden by any value set in `sc-config.yaml` or `wc-config.yaml` respectively. When new configs are created, this will generate new random passwords for all services. When configs are updated, this will not overwrite existing values in the override configs. It will create a backup of the old override configs placed in `backups/`, generate new default configs in `defaults/`, merge common values into `common-config.yaml`, and clear out redundant values set in the override configs that match the default values. See elastisys.io/welkin if you are uncertain about what order you should do things in.
Note
It is possible to initialize wc and sc clusters separately by replacing `both` when running the init command:

./bin/ck8s init wc
./bin/ck8s init sc

- Edit the configuration files that have been initialized in the configuration path.
Make sure that the `objectStorage` values are set in `common-config.yaml` or `sc-config.yaml` and `wc-config.yaml`, as well as the required credentials in `secrets.yaml`, according to your `objectStorage.type`. The type may already be set in the default configuration found in the `defaults/` directory depending on your selected cloud provider. Set `objectStorage.s3.*` if you are using S3 or `objectStorage.gcs.*` if you are using GCS. Enable ExternalDNS (`externalDns.enabled`) and set the required variables if you want ExternalDNS to manage your records from inside your cluster. It requires credentials to Route 53, `txtOwnerId`, and `endpoints` if `externalDns.sources.crd` is enabled.
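As a minimal sketch, assuming S3 and the yq v4 already used elsewhere in this README (all values below are placeholders, adjust them to your provider):

```bash
# Set non-secret object storage values in the override config (placeholder values):
yq -i '.objectStorage.type = "s3"' "${CK8S_CONFIG_PATH}/common-config.yaml"
yq -i '.objectStorage.s3.region = "eu-north-1"' "${CK8S_CONFIG_PATH}/common-config.yaml"
yq -i '.objectStorage.s3.regionEndpoint = "https://s3.eu-north-1.amazonaws.com"' "${CK8S_CONFIG_PATH}/common-config.yaml"

# Credentials (objectStorage.s3.accessKey/secretKey) go in the SOPS-encrypted secrets file:
sops "${CK8S_CONFIG_PATH}/secrets.yaml"
```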
Note
One important configuration is whether or not you need to use proxy protocol for the ingress controller, which depends on what infrastructure you use. If you enable it, you also need to set an annotation matching your infrastructure. Example for OpenStack:
ingressNginx.controller.config.useProxyProtocol: true
ingressNginx.controller.service.annotations: { loadbalancer.openstack.org/proxy-protocol: "true" }
- Create S3 buckets - optional

If you have set `objectStorage.type: s3`, then you need to create the buckets specified under `objectStorage.buckets` in your configuration files. You can run the script `scripts/S3/entry.sh create` to create the buckets required. The script uses `s3cmd` in the background and it uses the `${HOME}/.s3cfg` file for configuration and authentication for your S3 provider. There's also a helper script `scripts/S3/generate-s3cfg.sh` that will allow you to generate an appropriate `s3cfg` config file for a few providers.

# Use your s3cmd config file.
scripts/S3/entry.sh create

# Use custom config file for s3cmd.
scripts/S3/generate-s3cfg.sh aws ${AWS_ACCESS_KEY} ${AWS_ACCESS_SECRET_KEY} s3.eu-north-1.amazonaws.com eu-north-1 > s3cfg-aws
scripts/S3/entry.sh --s3cfg s3cfg-aws create
- Test S3 configuration - optional

If you enable object storage, you also need to make sure that the buckets specified in `objectStorage.buckets` exist. You can run the following snippet to ensure that you've configured S3 correctly:

(
access_key=$(sops exec-file ${CK8S_CONFIG_PATH}/secrets.yaml 'yq ".objectStorage.s3.accessKey" {}')
secret_key=$(sops exec-file ${CK8S_CONFIG_PATH}/secrets.yaml 'yq ".objectStorage.s3.secretKey" {}')
sc_config=$(yq eval-all '. as $item ireduce ({}; . * $item )' ${CK8S_CONFIG_PATH}/defaults/common-config.yaml ${CK8S_CONFIG_PATH}/defaults/sc-config.yaml ${CK8S_CONFIG_PATH}/common-config.yaml ${CK8S_CONFIG_PATH}/sc-config.yaml)

region=$(echo "${sc_config}" | yq '.objectStorage.s3.region')
host=$(echo "${sc_config}" | yq '.objectStorage.s3.regionEndpoint')

for bucket in $(echo "${sc_config}" | yq '.objectStorage.buckets.*'); do
    s3cmd --access_key=${access_key} --secret_key=${secret_key} \
        --region=${region} --host=${host} \
        ls s3://${bucket} > /dev/null
    [ ${?} = 0 ] && echo "Bucket ${bucket} exists!"
done
)
- Update Network Policies

./bin/ck8s update-ips both dry-run
./bin/ck8s update-ips both apply
- Validate config and fill in missing values

This should indicate any missing configuration that still needs to be set.

./bin/ck8s validate sc
./bin/ck8s validate wc
- If you decide not to use ExternalDNS for DNS records, you will need to manually set up the following DNS entries (replace `example.com` with your domain).

  - Manually point these domains to the workload cluster ingress controller:

    - `*.example.com`

  - Manually point these domains to the service cluster ingress controller:

    - `*.ops.example.com`
    - `dex.example.com`
    - `grafana.example.com`
    - `harbor.example.com`
    - `opensearch.example.com`
Depending on your infrastructure, you might utilize a Service of type LoadBalancer for the ingress controller. This means you will not have an IP for the domains before installing the ingress controller. After configuring and validating the config, you can install just the ingress controller before the rest of the apps with the following commands:

./bin/ck8s ops helmfile sc apply -lapp=ingress-nginx --include-transitive-needs
./bin/ck8s ops helmfile wc apply -lapp=ingress-nginx --include-transitive-needs
The IP is then available on the ingress controller Service:

./bin/ck8s ops kubectl sc -n ingress-nginx get svc ingress-nginx-controller
./bin/ck8s ops kubectl wc -n ingress-nginx get svc ingress-nginx-controller
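If you only want the address itself, one option is a jsonpath query (a sketch; depending on your infrastructure the load balancer may expose a hostname instead of an IP):

```bash
# External IP of the service cluster ingress controller; swap sc for wc for the workload cluster
./bin/ck8s ops kubectl sc -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```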
After configuring the DNS, update the Network Policies again:

./bin/ck8s update-ips both dry-run
./bin/ck8s update-ips both apply
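Optionally, a quick way to sanity check the records once they are in place (`example.com` is a placeholder as above):

```bash
# Should resolve to the service cluster ingress controller
dig +short grafana.example.com

# Any name under the wildcard should resolve to the workload cluster ingress controller
dig +short whatever.example.com
```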
- Deploy the apps. Note that for this step each cluster needs to be up and running already:

./bin/ck8s apply sc
./bin/ck8s apply wc
- Test that the clusters are running correctly with:

./bin/ck8s test sc
./bin/ck8s test wc
- You should now have a fully working environment. Check the next section for some additional steps to finalize it and set up user access.
If you followed the steps in the quickstart above, you should now have deployed the applications and have a fully functioning environment. However, there are a few steps remaining to make all applications ready for the user.
After the cluster setup has completed, RBAC resources and namespaces will have been created for the user.
You can configure which namespaces should be created and which users should get access using the following configuration options in wc-config.yaml:
user:
  namespaces:
    - demo1
    - demo2
  adminUsers:
    - [email protected]
    - [email protected]

A kubeconfig file for the user (`${CK8S_CONFIG_PATH}/user/kubeconfig.yaml`) can be created by running the script `bin/ck8s kubeconfig user`.
The user kubeconfig will be configured to use the first namespace by default.
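As a sketch of how the generated user kubeconfig might be used, assuming the demo1 namespace from the example above (depending on your OIDC setup the user may be prompted to log in through Dex first):

```bash
# Point kubectl at the user kubeconfig and verify access
export KUBECONFIG="${CK8S_CONFIG_PATH}/user/kubeconfig.yaml"
kubectl get pods --namespace demo1
```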
OpenSearch Dashboards access for the user can be provided either by setting up OIDC or using the internal user database in OpenSearch:
- OIDC:

  - Set `opensearch.sso.enabled=true` in `sc-config.yaml`.
  - Configure extra role mappings under `opensearch.extraRoleMappings` to give the users the necessary roles:

      extraRoleMappings:
        - mapping_name: kibana_user
          definition:
            users:
              - "configurer"
              - "User Name"
        - mapping_name: kubernetes_log_reader
          definition:
            users:
              - "User Name"
- Internal user database:
  - Log in to OpenSearch Dashboards using the admin account.
  - Create an account for the user.
  - Give the `kibana_user` and `kubernetes_log_reader` roles to the user.
Users will be able to log in to Grafana using Dex, but they will have read-only access by default. To give them more privileges, you need to first ask them to log in (so that they show up in the users list) and then change their roles.
Harbor works in a multi-tenant way so that each logged-in user will be able to create their own projects and manage them as admins (including adding more users as members). However, users will not be able to see each other's (private) projects (unless explicitly invited) and won't have global admin access in Harbor. This also naturally means that container images uploaded to these private registries cannot automatically be pulled into the Kubernetes cluster. The user first needs to add pull secrets that give some ServiceAccount access to them before they can be used, as sketched below.
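A minimal sketch of what that could look like when run by the user (the registry URL, namespace, credentials and ServiceAccount are placeholders):

```bash
# Create a pull secret for a private Harbor project (placeholder values)
kubectl create secret docker-registry harbor-pull-secret \
  --namespace demo1 \
  --docker-server=harbor.example.com \
  --docker-username=<harbor-username> \
  --docker-password=<harbor-cli-secret>

# Reference it from the ServiceAccount that runs the workload, e.g. the default one
kubectl patch serviceaccount default --namespace demo1 \
  --patch '{"imagePullSecrets": [{"name": "harbor-pull-secret"}]}'
```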
For more details and a list of available services see the user guide.
It is possible to run Harbor in HA mode. This section describes the configuration needed to set up Harbor in HA mode. More information about Harbor HA can be found here.
Both Postgres and Redis need to be external, as Harbor does not handle HA deployment of Postgres and Redis. It is up to the operator to set these up in HA mode.
The following are requirements on the external Postgres:
- Password encryption: none or md5
- Initial empty databases must be created before Harbor starts:
  - registry
Config:
Harbor backup is not designed to work with an external database. You will have to provide your own backup solution.
In `$CK8S_CONFIG_PATH/sc-config.yaml` set the following config:
harbor:
  ...
  backup:
    enabled: false
  database:
    type: external
    external:
      host: "set-me"
      port: "5432"
      username: "set-me"
      # "disable" - No SSL
      # "require" - Always SSL (skip verification)
      # "verify-ca" - Always SSL (verify that the certificate presented by the
      # server was signed by a trusted CA)
      # "verify-full" - Always SSL (verify that the certification presented by the
      # server was signed by a trusted CA and the server host name matches the one
      # in the certificate)
      sslmode: "disable"

In `$CK8S_CONFIG_PATH/secrets.yaml` add the postgres user password:
harbor:
  external:
    databasePassword: set-me

Also configure network policies to access the database:
networkPolicies:
  database:
    internal:
      ingress:
        peers: []
    externalEgress:
      peers:
        - namespaceSelectorLabels:
            kubernetes.io/metadata.name: postgres-system
          podSelectorLabels:
            cluster-name: harbor-cluster
      ports:
        - 5432

Config:
In `$CK8S_CONFIG_PATH/sc-config.yaml` set the following config:
harbor:
  redis:
    type: external
    external:
      addr: "rfs-redis-harbor.redis-system:26379"
      sentinelMasterSet: "mymaster"

Also configure network policies to access Redis:
networkPolicies:
  redis:
    internalIngress:
      peers:
        - namespaceSelectorLabels:
            kubernetes.io/metadata.name: redis-system
          podSelectorLabels:
            app.kubernetes.io/name: redis-harbor
      ports:
        - 26379
        - 6379

For capacity management, compliantkubernetes-apps comes with some Prometheus alerts and a Grafana dashboard, which facilitate monitoring on a per-Node as well as Node Group basis. The Node Group is meant to represent a logical grouping of Nodes, e.g. worker and control-plane. As such, in order to make use of these you first have to label your Nodes with `elastisys.io/node-group=<node-group>`, for example:
kubectl label node <node-name> elastisys.io/node-group=<node-group>

The bin/ck8s script provides an entry point to the clusters.
It should be used instead of using for example `kubectl` or `helmfile` directly as an operator.
To use the script, set CK8S_CONFIG_PATH to the environment you want to access:

export CK8S_CONFIG_PATH=${HOME}/.ck8s/my-ck8s-cluster

Run the script to see what options are available.
- Deploy apps to the workload cluster:

  ./bin/ck8s apply wc

- Run tests on the service cluster:

  ./bin/ck8s test sc

- Port-forward to a Service in the workload cluster:

  ./bin/ck8s ops kubectl wc port-forward svc/<service> --namespace <namespace> <port>

- Run `helmfile diff` on a helm release:

  ./bin/ck8s ops helmfile sc -l <label=selector> diff
Add this to ~/.bashrc:

CK8S_APPS_PATH= # fill this in
source <($CK8S_APPS_PATH/bin/ck8s completion bash)

The bin/ck8s script also provides commands to upgrade an environment in two steps: prepare and apply.
The former runs scripted configuration steps that do not change the state of the environment, while the latter runs scripted upgrade steps that modify the state of the environment.
On unexpected failures the command will try to perform a rollback when possible to ensure that the environment continues to function.
./bin/ck8s upgrade both vX.Y prepare
./bin/ck8s upgrade both vX.Y apply

Note
It is possible to upgrade wc and sc clusters separately by replacing `both` when running the upgrade command, e.g. the following will only upgrade the workload cluster:
./bin/ck8s upgrade wc vX.Y prepare
./bin/ck8s upgrade wc vX.Y apply

It is possible to upgrade from one minor version to the next regardless of patch versions (vX.Y -> vX.Y+1), and from one patch version to any later patch version (vX.Y.Z -> vX.Y.Z+N).
Version validation will require that you are on a release tag matching the version specified in the command, and that your environment is at most one minor version behind.
When on a specific commit, add the commit hash under `global.ck8sVersion` to pass validation, and for development set `any` to circumvent version validation completely.
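As a sketch, again assuming the yq v4 used elsewhere in this README (the commit hash is a placeholder):

```bash
# Pin the configuration to a specific commit to pass version validation
yq -i '.global.ck8sVersion = "<commit-hash>"' "${CK8S_CONFIG_PATH}/common-config.yaml"

# For development only: skip version validation entirely
yq -i '.global.ck8sVersion = "any"' "${CK8S_CONFIG_PATH}/common-config.yaml"
```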
There are two simple scripts that can be used to clean up your clusters.

To clean up the service cluster run:

./scripts/clean-sc.sh

To clean up the workload cluster run:

./scripts/clean-wc.sh

See https://elastisys.io/welkin/operator-manual/.
- Go to the Google console and create a project.

- Go to the OAuth consent screen, name the application with the same name as your Google Cloud project, and add the top level domain, e.g. `elastisys.se`, to Authorized domains.

- Go to Credentials, press `Create credentials` and select `OAuth client ID`. Select `web application`, give it a name, and add the URL to Dex in the `Authorized Javascript origins` field, e.g. `dex.demo.elastisys.se`. Add `<dex url>/callback` to the Authorized redirect URIs field, e.g. `dex.demo.elastisys.se/callback`.

- Configure the following options in `CK8S_CONFIG_PATH/secrets.yaml`:

  dex:
    googleClientID:
    googleClientSecret:
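Since secrets.yaml is encrypted with SOPS, edit it through sops so the values are re-encrypted on save, for example:

```bash
# Opens the encrypted file in $EDITOR and re-encrypts it when you save
sops "${CK8S_CONFIG_PATH}/secrets.yaml"
```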
- OpenSearch Dashboards Single Sign On (SSO) via OpenID/Dex requires LetsEncrypt Production.
For more, please check the public GitHub issues: https://github.com/elastisys/compliantkubernetes-apps/issues.
All source files in this repository are licensed under the Apache License, Version 2.0 unless otherwise stated. See the LICENSE file for full details.