This project is a playground where you can work with Kubernetes in a safe sandbox. It provides:
- A fully automated installation of Kubernetes over a cluster of virtual machines (VMs).
- The VMs of the cluster are managed with Vagrant.
- Kubernetes Playground is compatible with the major operating systems (Windows, Linux, and macOS) and major hypervisors, such as VirtualBox and libvirt/KVM.
- You can choose among a number of CNI network plugins and quickly re-provision the cluster on the fly.
- Kubernetes control plane nodes (defaults to 1).
- Kubernetes worker nodes (defaults to 3).
- Vagrant. For the exact version, see the `Vagrant.require_version` constraint in the `Vagrantfile`.
This project currently supports the following Vagrant providers:
- VirtualBox. Dependencies:
  - VirtualBox >= 6.1.4
- libvirt. Dependencies:
  - libvirt >= 4.0.0
  - QEMU >= 2.22.1
When you first bring this environment up, the provisioning process will also install the needed Vagrant plugins:
- vagrant-libvirt >= 0.0.45
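If the automatic installation fails, or if you prefer to manage the plugin yourself, Vagrant's standard plugin commands cover the same ground. This is a minimal sketch using stock Vagrant CLI commands, not a project-specific script:

# Check which Vagrant plugins are already installed
vagrant plugin list
# Install (or update) the libvirt provider plugin
vagrant plugin install vagrant-libvirt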
To provision and configure the environment as described, run the following commands from the root of the repository:
- Prepare a Vagrant box (`base-box-builder`) that will be used as a base for the other VMs:
  - Provision and configure `base-box-builder`:

    vagrant up base-box-builder.k8s-play.local

  - Halt `base-box-builder`:

    vagrant halt base-box-builder.k8s-play.local

  - Export a Vagrant box based on `base-box-builder`:

    VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="defaults"
    VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="$VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS,-ssh-userdir"
    VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="$VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS,-ssh-hostkeys"
    VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="$VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS,-lvm-uuids"
    export VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS
    vagrant package base-box-builder.k8s-play.local \
      --output kubernetes-playground-base.box

  - Destroy `base-box-builder` to spare resources:

    vagrant destroy --force base-box-builder.k8s-play.local

  - (optional) If you're updating a libvirt box, ensure you delete all the libvirt volumes based on previous versions of the box.
  - Register the base Vagrant box to make it available to Vagrant:

    vagrant box add --clean --force kubernetes-playground-base.box \
      --name ferrarimarco/kubernetes-playground-node

- Provision and configure the rest of the environment:

  vagrant up
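Standard Vagrant commands also work against individual guests in this environment. As a small sketch (the guest name is the control plane node referenced later in this document), you can list the defined guests and bring one up on its own:

# List the guests defined in the Vagrantfile and their current state
vagrant status
# Bring up (and provision) a single guest instead of the whole cluster
vagrant up kubernetes-master-1.kubernetes-playground.local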
When you run any Vagrant command, an Ansible inventory (and the related `group_vars`) will be generated in the `ansible` directory. Note that the contents of those files will be overwritten on each run.
If you want to run this project in WSL, follow the instructions in the official Vagrant docs.
Additionally, you need to enable `metadata` as one of the default mount options. You might want to specify it in `/etc/wsl.conf` as follows:
[automount]
enabled = true
options = metadata,uid=1000,gid=1000,umask=0022
This is needed because otherwise the SSH private key file that Vagrant generates has permissions that are too broad, and `ssh` refuses to use it.
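If Vagrant already generated a private key on a mount without the metadata option, enabling it afterwards does not fix the existing file, so tightening the permissions by hand is a possible workaround. The path below follows Vagrant's usual .vagrant/machines/<machine>/<provider>/private_key layout and is only illustrative:

# Illustrative path: adjust the machine and provider directory names to your setup
chmod 600 .vagrant/machines/kubernetes-master-1.kubernetes-playground.local/libvirt/private_key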
You can find the default configuration in `defaults.yaml`. If you want to override any default setting, create a file named `env.yaml` and save it in the same directory as `defaults.yaml`. The `Vagrantfile` will instruct Vagrant to load it.
You can configure aspects of the runtime environment, such as:
- Default Vagrant provider.
- Default Kubernetes networking plugin.
- Enable or disable verbose output during provisioning and configuration.
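As an illustration, you could create env.yaml from the shell as sketched below. The only key taken from this document is conf.additional_ansible_arguments (it appears again in the debugging notes); check defaults.yaml for the real names of the provider and network plugin settings before adding them, since they are not spelled out here:

# Sketch: create env.yaml next to defaults.yaml to override default settings.
# Look up additional key names in defaults.yaml before extending this file.
cat > env.yaml <<'EOF'
conf:
  additional_ansible_arguments: "-v"
EOF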
If you want to re-test the initialization of the Kubernetes cluster, you can run two Vagrant provisioners (`cleanup` and `mount-shared`) that do not run during the normal provisioning phase, and then execute the normal provisioning again:
vagrant provision --provision-with cleanup
vagrant provision --provision-with mount-shared
vagrant provision
The `cleanup` provisioner also reboots the VMs, so the `mount-shared` provisioner is needed afterwards to restore the shared folders between the host and the VMs.
If you want to test a different CNI plugin:

- Run `vagrant provision --provision-with cleanup`
- Run `vagrant provision --provision-with mount-shared`
- Edit `env.yaml` to change the network plugin.
- Run `vagrant provision --provision-with quick-setup`
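After the quick-setup provisioner completes, one way to confirm that the newly selected plugin's pods came up is to query the cluster from the control plane node. This is only a sketch: the pod names depend on the plugin you chose, and kubectl may need sudo depending on how the kubeconfig is set up on that node:

# Sketch: list the kube-system pods and check that the CNI plugin's pods are Running
vagrant ssh kubernetes-master-1.kubernetes-playground.local \
  -c "kubectl get pods --namespace kube-system --output wide"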
You can install the following optional workloads and services in the cluster.
Kites allows you to test the traffic exchanged between Nodes and Pods.
To deploy the Net-Test DaemonSet, open a new SSH connection to the master and run the configuration script:
vagrant ssh kubernetes-master-1.kubernetes-playground.local
sudo /vagrant/scripts/linux/bootstrap-net-test-ds.sh
If you want to open a shell in the newly created container, follow the instructions in the official Kubernetes docs.
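As a sketch of what those instructions boil down to, you can exec into one of the DaemonSet's pods with kubectl. The pod name below is a placeholder to look up first, and the namespace and available shell depend on how the DaemonSet is deployed:

# Find the pods created by the DaemonSet, then open a shell in one of them
kubectl get pods --all-namespaces -o wide
# Placeholder pod name; assumes the container image ships /bin/sh
kubectl exec -it <net-test-pod-name> -- /bin/sh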
To bootstrap a development environment, you need to install the runtime dependencies listed above, plus the following development dependencies:
- Docker 19.03+
- Ruby 2.6.0+
- Bundler 1.13.0+
- GNU Coreutils
After installing the dependencies, run the following scripts to install the necessary packages:
- Install Vagrant: `scripts/linux/ci/install-vagrant.sh`
- (only for headless environments) Manually install Vagrant plugins: `scripts/linux/ci/install-vagrant.sh`
Logs are saved to `/vagrant/logs/`.
For debugging and development purposes, you can add verbosity flags in your `env.yaml` as follows:
conf:
  additional_ansible_arguments: "-vv"
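With the flag in place, re-running provisioning produces the more verbose Ansible output, and you can then inspect the logs saved under /vagrant/logs/ on the guests; for example:

# Re-run provisioning with the verbosity configured in env.yaml
vagrant provision
# Inspect the logs saved on a guest under /vagrant/logs/
vagrant ssh kubernetes-master-1.kubernetes-playground.local -c "ls -l /vagrant/logs/"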
This section explains how to run linters and the compliance test suites. The same linters and test suites run automatically on each commit.
The codebase is statically checked with linters for common issues and consistent formatting, using super-linter.
The test suite checks the whole environment for compliance using a verifier (InSpec in this case).
You can run the test suite against any guest, after provisioning and configuring it.
- Provision and configure the desired guest: `vagrant up <guest-name>`, or `vagrant provision <guest-name>` if the guest is already up.
- Run the tests:

  scripts/linux/ci/run-inspec-against-host.sh <guest-name>
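For example, against the control plane node used elsewhere in this document:

# Provision (or re-provision) the guest, then run the InSpec suite against it
vagrant up kubernetes-master-1.kubernetes-playground.local
scripts/linux/ci/run-inspec-against-host.sh kubernetes-master-1.kubernetes-playground.local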
- A script that gathers information about the host: `scripts/linux/ci/diagnostics.sh`. You can run this script against a host by running it directly, or against a Vagrant VM by executing the `diagnostics` provisioner:

  vagrant provision <guest-name> --provision-with diagnostics

  The script has a `--help` option that explains how to run it. Additionally, the diagnostics script can query the hypervisor directly, without going through Vagrant. This is useful when you have issues connecting with `vagrant ssh`.
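For example, to see the script's options and then run the provisioner against the control plane node used elsewhere in this document:

# Show the script's own usage information
scripts/linux/ci/diagnostics.sh --help
# Run the diagnostics provisioner against a specific guest
vagrant provision kubernetes-master-1.kubernetes-playground.local --provision-with diagnostics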
In this section, we list notes and findings that proved useful during development.
- Debian Bullseye (server and cloud images) uses `ifup` and `ifdown` to manage network interfaces.
- `ifup` and `ifdown` rely on the `/etc/network/interfaces` configuration file (and related files).
- Systemd invokes `ifup` and `ifdown` via the `networking.service` unit.
- `ifup` and `ifdown` don't interact with interfaces that are missing the `auto <INTERFACE_NAME>` directive in `/etc/network/interfaces`.
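For reference, the auto directive mentioned above looks like the following in /etc/network/interfaces. The interface name and addressing method are placeholders, and appending to the file this way is just a quick sketch rather than a recommended way to manage the configuration:

# Sketch: add a stanza with the "auto" directive so ifup/networking.service manages it
# (hypothetical interface name, DHCP addressing)
cat <<'EOF' | sudo tee -a /etc/network/interfaces
auto eth1
iface eth1 inet dhcp
EOF
sudo ifup eth1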
Contributions to this project are welcome! See the instructions in CONTRIBUTING.md.