
Kubernetes Playground


This project is a playground where you can work with Kubernetes in a safe sandbox. It provides:

  1. A fully automated installation of Kubernetes over a cluster of virtual machines (VMs).
  2. Cluster VMs that are managed with Vagrant.
  3. Compatibility with the major operating systems (Windows, Linux, and macOS) and with the major hypervisors, such as VirtualBox and libvirt/KVM.
  4. A choice among a number of CNI network plugins, with the ability to quickly re-provision the cluster on the fly.

Components

  1. Kubernetes control plane nodes. Defaults to 1.
  2. Kubernetes worker nodes. Defaults to 3.

Dependencies

Runtime

  1. Vagrant. For the exact version, see the Vagrant.require_version constraint in the Vagrantfile.

Vagrant providers

This project currently supports the following Vagrant providers:

  1. VirtualBox. Dependencies:
    1. VirtualBox >= 6.1.4
  2. libvirt. Dependencies:
    1. libvirt >= 4.0.0
    2. QEMU >= 2.22.1
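
If you have more than one provider installed, you can also pick one explicitly per invocation with Vagrant's standard --provider flag (shown below with libvirt as an example); the project additionally lets you set a default provider in env.yaml, as described later in this document.

    # Bring the environment up with a specific provider (example: libvirt).
    vagrant up --provider=libvirt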

Vagrant plugins

When you first bring this environment up, the provisioning process will also install the needed Vagrant plugins:

  1. vagrant-libvirt (= 0.0.45)
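
If the automatic installation does not run, or you prefer to prepare the tooling yourself, you can install the plugin manually with Vagrant's standard plugin command; the pinned version below mirrors the constraint listed above.

    # Install the libvirt provider plugin at the pinned version.
    vagrant plugin install vagrant-libvirt --plugin-version 0.0.45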

How to Run

To provision and configure the environment as described, run the following commands from the root of the repository:

  1. Prepare a Vagrant box (base-box-builder) that will be used as a base for other VMs:

    1. Provision and configure base-box-builder:

      vagrant up base-box-builder.k8s-play.local
    2. Halt the base-box-builder VM:

      vagrant halt base-box-builder.k8s-play.local
    3. Export a Vagrant box based on base-box-builder:

      VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="defaults"
      VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="$VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS,-ssh-userdir"
      VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="$VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS,-ssh-hostkeys"
      VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="$VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS,-lvm-uuids"
      export VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS
      vagrant package base-box-builder.k8s-play.local \
          --output kubernetes-playground-base.box
    4. Destroy base-box-builder to free up resources:

      vagrant destroy --force base-box-builder.k8s-play.local
    5. (optional) If you're updating a libvirt box, ensure you delete all the libvirt volumes based on previous versions of the box.

    6. Register the base Vagrant box to make it available to Vagrant:

      vagrant box add --clean --force kubernetes-playground-base.box \
          --name ferrarimarco/kubernetes-playground-node
  2. Provision and configure the rest of the environment:

    vagrant up
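
To verify that the environment came up as expected, you can check the state of the VMs and, assuming kubectl is already configured for the vagrant user on the master (an assumption, not something this README guarantees), list the cluster nodes:

    # Show the state of all VMs defined in the Vagrantfile.
    vagrant status

    # List the Kubernetes nodes from the first control plane node.
    vagrant ssh kubernetes-master-1.kubernetes-playground.local -c "kubectl get nodes"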

Automatic Ansible Inventory Creation

When you run any vagrant command, an Ansible inventory (and related group_vars) will be generated in the ansible directory. Note that the contents of those files will be overwritten on each run.

Running in Windows Subsystem for Linux (WSL)

If you want to run this project in WSL, follow the instructions in the official Vagrant docs.
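
As a minimal sketch of what the Vagrant docs describe, assuming Vagrant is installed inside the WSL distribution and the project lives on the Windows file system, you typically need to export the WSL-specific environment variable that lets Vagrant manage VMs on the Windows host:

    # Allow Vagrant running inside WSL to manage VMs on the Windows host.
    export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"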

Additionally, you need to enable the metadata mount option as one of the default mount options. You can specify it in /etc/wsl.conf as follows:

[automount]
enabled = true
options = metadata,uid=1000,gid=1000,umask=0022

This is needed because otherwise the SSH private key file that Vagrant generates has permissions that are too broad, and ssh refuses to use it.

Environment-specific configuration

You can find the default configuration in defaults.yaml. If you want to override any default setting, create a file named env.yaml and save it in the same directory as the defaults.yaml. The Vagrantfile will instruct Vagrant to load it.

You can configure aspects of the runtime environment, such as:

  • Default Vagrant provider.
  • Default Kubernetes networking plugin.
  • Enable or disable verbose output during provisioning and configuration.
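
As a sketch, an env.yaml that overrides a couple of settings could look like the following. Only additional_ansible_arguments is taken verbatim from this README; the other key names are hypothetical, so check defaults.yaml for the actual keys and values.

    conf:
        # Verbatim from the debugging section below: extra Ansible verbosity.
        additional_ansible_arguments: "-vv"
        # Hypothetical key names: look up the real ones in defaults.yaml.
        default_provider: "libvirt"
        network_plugin: "calico"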

Cleaning up and re-provisioning

If you want to re-test the initialization of the Kubernetes cluster, you can run two Vagrant provisioners (cleanup and mount-shared) that do not run during the normal provisioning phase, and then execute the normal provisioning again:

  1. vagrant provision --provision-with cleanup
  2. vagrant provision --provision-with mount-shared
  3. vagrant provision

The cleanup provisioner also reboots the VMs; the mount-shared provisioner is then needed to restore the shared folders between the host and the VMs.
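
The three steps can also be chained in a single shell invocation; this is only a convenience, each command is identical to the corresponding step above:

    vagrant provision --provision-with cleanup \
        && vagrant provision --provision-with mount-shared \
        && vagrant provision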

Quick CNI provisioning

If you want to test a different CNI plugin, run:

  1. vagrant provision --provision-with cleanup
  2. vagrant provision --provision-with mount-shared
  3. edit env.yaml to change the network plugin (see the sketch after this list).
  4. vagrant provision --provision-with quick-setup
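
Put together, a quick plugin switch looks like the following sketch; $EDITOR stands for whatever editor you use, and the exact key to change in env.yaml depends on defaults.yaml:

    vagrant provision --provision-with cleanup
    vagrant provision --provision-with mount-shared
    $EDITOR env.yaml   # set the CNI network plugin key (see defaults.yaml for its name)
    vagrant provision --provision-with quick-setup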

Add-ons

You can install the following optional workloads and services in the cluster.

Kites experiments

Kites allows you to test the traffic exchanged between Nodes and Pods.

Net-Test DaemonSet

To deploy the Net-Test DaemonSet, open a new SSH connection to the master and run the configuration script:

  1. vagrant ssh kubernetes-master-1.kubernetes-playground.local
  2. sudo /vagrant/scripts/linux/bootstrap-net-test-ds.sh

If you want to open a shell in the newly created container, follow the instructions in the official Kubernetes docs.
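
As a sketch of what the Kubernetes docs describe, you can open a shell in one of the DaemonSet's pods with kubectl exec; the pod name and namespace below are placeholders that you need to look up first:

    # Find the Net-Test pod name and its namespace.
    kubectl get pods --all-namespaces

    # Open an interactive shell in the pod; <pod-name> and <namespace> are placeholders.
    kubectl exec -it <pod-name> --namespace <namespace> -- /bin/sh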

Development and testing

To bootstrap a development environment, you need to install the runtime dependencies listed above, plus the development environment dependencies.

Development dependencies

These are the dependencies that you need to install in your development environment:

  1. Docker, 19.03+
  2. Ruby 2.6.0+
  3. Bundler 1.13.0+
  4. GNU Coreutils

Setting up the development environment

After installing the dependencies, run the following scripts to install the necessary packages:

  1. Install Vagrant: scripts/linux/ci/install-vagrant.sh
  2. (only for headless environments) Manually install Vagrant plugins: scripts/linux/ci/install-vagrant.sh

Debugging and logs

Logs are saved to /vagrant/logs/.
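
Since /vagrant is the synced project folder inside the guests, you can inspect the logs from the host (in the logs directory at the repository root, assuming the default synced-folder layout) or from inside a VM, for example:

    # List the most recent log files from inside a guest (placeholder guest name).
    vagrant ssh <guest-name> -c "ls -lt /vagrant/logs/"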

Debugging ansible operations

For debugging and development purposes, you can add verbosity flags in your env.yaml as follows:

conf:
    additional_ansible_arguments: "-vv"

Running the tests

This section explains how to run linters and the compliance test suites. The same linters and test suites run automatically on each commit.

Linters and formatters

The codebase is statically checked for common issues and consistent formatting using super-linter.

Compliance test suite

The test suite checks the whole environment for compliance using a verifier (InSpec in this case).

How to run the compliance test suite

You can run the test suite against any guest, after provisioning and configuring it.

  1. Provision and configure the desired guest: vagrant up <guest-name>, or vagrant provision <guest-name> if the guest is already up.
  2. Run the tests: scripts/linux/ci/run-inspec-against-host.sh <guest-name>
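
For example, using the control plane node named earlier in this README as the guest:

    # Provision (or re-provision) the guest, then run the InSpec suite against it.
    vagrant up kubernetes-master-1.kubernetes-playground.local
    scripts/linux/ci/run-inspec-against-host.sh kubernetes-master-1.kubernetes-playground.local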

Debugging and troubleshooting utilities

  1. A script that gathers information about the host: scripts/linux/ci/diagnostics.sh. You can run this script against a host by running it directly, or against a Vagrant VM, by executing the diagnostics provisioner:

    vagrant provision <guest-name> --provision-with diagnostics

    The script has a --help option that explains how to run it. Additionally, the diagnostics script can query the hypervisor directly, without going through Vagrant. This is useful when you have issues connecting with vagrant ssh.

Development notes

In this section, we list notes and findings that proved useful during development.

Network configuration

  • Debian Bullseye (server and cloud images) uses ifup and ifdown to manage network interfaces.
  • ifup and ifdown rely on the /etc/network/interfaces configuration file (and related files).
  • Systemd invokes ifup and ifdown via the networking.service unit.
  • ifup and ifdown don't interact with interfaces that are missing the auto <INTERFACE_NAME> directive in /etc/network/interfaces (see the example below).
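
For reference, a minimal /etc/network/interfaces stanza that ifup -a (as run by networking.service) will act on looks like this; the interface name and addressing method are illustrative only:

    # Without the "auto" line, ifup -a ignores this interface entirely.
    auto eth1
    iface eth1 inet dhcp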

Contributing

Contributions to this project are welcome! See the instructions in CONTRIBUTING.md.