SVE2 Project

About the Project

This repository contains the final project for SVE2. It consists of multiple microservices running inside a Kubernetes cluster and a front-end application that uses these services. A graphical representation of the final architecture and Kubernetes hierarchy can be found further below. The focus of this project was the orchestration of a Kubernetes cluster and getting familiar with certain SOA concepts; the business logic of the individual services is therefore rather superficial.

The project implements a webshop microservice infrastructure, which is reflected in the use cases of the individual services. For development we use minikube to run a Kubernetes cluster locally. The frontend is a simple console application that connects to the cluster and consumes the different services.

Kubernetes Basics

Basic features and the bootstrapping of a local Kubernetes cluster using minikube are covered in a separate markdown document.

Architecture

(Architecture diagram: jabata-webshop)

Ingress

Ingress is a Kubernetes API object that manages external access to services inside a cluster. It also provides features such as load balancing, TLS termination, and name-based virtual hosting.

URL-Rewriting

We use an nginx-ingress-controller to expose all microservices to the outside world under a single endpoint. Through URL rewriting, requests that reach the ingress endpoint under a certain path (for example /product) are routed to the root endpoint of the corresponding microservice inside the cluster (product-service).

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: jabata-shop-api-gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host:
    http:
      paths:
      - path: /product(/|$)(.*)
        backend:
          serviceName: product-service
          servicePort: 80
      - path: /user(/|$)(.*)
        backend:
          serviceName: user-service
          servicePort: 80

By using regular expressions in the rewrite rules, every exposed service gets its own prefix on the outside endpoint. Everything after this prefix is forwarded to the root of the service endpoint inside the cluster.

Canary Release

We tried to do a canary release using the features of the ingress controller. The nginx-ingress-controller supports this: traffic can be routed to a secondary service according to certain conditions, such as a specific header, a specific cookie, or simply a percentage of requests.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jabata-shop-api-gateway-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host:
      http:
        paths:
          - path: /product(/|$)(.*)
            backend:
              serviceName: product-service-canary
              servicePort: 80

Unfortunately, the nginx-ingress-controller ignores almost all other annotations once an ingress is marked as canary. We were therefore only able to reach the base endpoint of a service, because the URL rewriting does not work on a canary ingress.

The corresponding statement from the NGINX documentation is quoted in the product service readme.

Therefore we tried to use Ambassador as the ingress controller, but the minikube version we were using did not yet support it (a version above 1.10.1 is required). Although minikube 1.11 was released on May 30th, 2020, we decided against an immediate upgrade so as not to jeopardize the rest of the project.

Services

user-service

A Spring Boot based service responsible for authenticating users. It serves HTTP endpoints for listing all users and for a very simple authentication. The service uses Spring Web to expose a REST API; additionally, Lombok is used to avoid boilerplate code.

  • Spring Boot: create stand-alone Spring applications
  • Spring Web Starter: basic web stack support in Spring Boot
  • Lombok: library to avoid a lot of boilerplate code

basket-service

Basket service readme.

ordering-service

Ordering service readme.

product-service

Product service readme.

shop-cli

Shop cli readme.

RabbitMQ

For the deployment of RabbitMQ we use Helm, the Kubernetes package manager, which streamlines the installation process and deploys resources throughout the cluster very quickly.

Install Helm

choco install kubernetes-helm

Install RabbitMQ

We use the HA RabbitMQ chart: https://hub.helm.sh/charts/stable/rabbitmq-ha. To adjust the configuration we add a rabbitmq-values.yaml file, in which we define that all queues are highly available.
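
A minimal sketch of what such a values file might contain; the definitions.policies key is our reading of the stable/rabbitmq-ha chart and should be checked against its values.yaml, and the replica count is an assumption:

# rabbitmq-values.yaml (sketch)
replicaCount: 3
definitions:
  # RabbitMQ policy that mirrors every queue across all cluster nodes
  policies: |-
    {
      "name": "ha-all",
      "pattern": ".*",
      "vhost": "/",
      "definition": {
        "ha-mode": "all",
        "ha-sync-mode": "automatic"
      }
    }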

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install rabbitmq stable/rabbitmq-ha -f .\rabbitmq-values.yaml
kubectl get deployments,pods,services

Uninstall

helm uninstall rabbitmq

Configure RabbitMQ Server

To access the management console do:

$pw = kubectl get secret --namespace default rabbitmq-rabbitmq-ha -o jsonpath="{.data.rabbitmq-password}"
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($pw))
kubectl port-forward svc/rabbitmq-rabbitmq-ha 15672:15672
http://localhost:15672/

MySQL-Database

The product service uses a MySQL database for fetching and persisting products. We integrated it as a single-instance stateful application, which means we will not be able to scale this database in Kubernetes. Managing relational databases in Kubernetes and making them scalable is a complex problem: it would require data exchange between the instances, and we would probably lose consistency (CAP theorem).

Our microservice architecture is based on the idea that the database runs outside of the Kubernetes cluster; a cloud-based offering like the AWS RDS service, for example, would be a reasonable choice. As we are only running our app locally, we decided to integrate the database as a Kubernetes service.

The MySQL pod requires persistent storage, because pods are ephemeral but the stored data should persist. A PersistentVolume (PV) and an associated PersistentVolumeClaim (PVC) are created; the claim connects the PV to the Kubernetes deployment. For testing reasons we created a LoadBalancer service and made the database reachable from outside of the cluster.
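
A sketch of what mysql-pv.yaml might look like; the hostPath location, capacity and names are assumptions rather than the exact manifest from the repository:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  # hostPath is sufficient for a single-node minikube cluster
  hostPath:
    path: /mnt/data/mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi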

Within the cluster it is very easy to access other services, because service discovery is a core Kubernetes concept. Pods within a cluster can talk to each other through the names of the services exposing them; Kubernetes has an internal DNS system that keeps track of domain names and IP addresses. Similarly to how Docker provides DNS resolution for containers, Kubernetes provides DNS resolution for services. As we named the service mysql, our connection string is jdbc:mysql://mysql:3306/<db>.
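
Assuming the product service is a Spring Boot application like the user service, its datasource configuration can simply point at the service name. A sketch of the relevant application.yml entries (the database name matches the product_db created below; the credentials mirror the test client command further below and would normally come from a Secret):

spring:
  datasource:
    # "mysql" resolves via the cluster-internal DNS to the MySQL service
    url: jdbc:mysql://mysql:3306/product_db
    username: root
    password: password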

kubectl apply -f .\mysql-pv.yaml
kubectl apply -f .\mysql-service.yaml

The automatic creation of the database caused some difficulties, so after some unsuccessful debugging we created the database manually.

kubectl run -it --rm --image=mysql:latest --restart=Never mysql-client -- mysql -h mysql -ppassword
mysql >  create database product_db;

MongoDB-Database

Storage class

Through StorageClass objects, an admin can define different classes of volumes that are offered in a cluster. These classes are used during the dynamic provisioning of volumes. A storage class can define the replication factor, the I/O profile, and the priority (e.g. SSD or HDD).

For MongoDB, the storage class we deploy has a replication factor of 3, the I/O profile set to "db", and the priority set to "high". This means that the storage is optimized for low-latency database workloads like MongoDB and is automatically placed on the highest-performance storage available in the cluster.
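
A sketch of what such a StorageClass could look like; the parameter names (repl, io_profile, priority_io) belong to the Portworx provisioner, which is an assumption about the environment this configuration is aimed at:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"            # replication factor
  io_profile: "db"     # optimize for database workloads
  priority_io: "high"  # place data on the fastest available storage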

This does not have much influence on a Minikube cluster, but we wanted to try the setting. Unfortunately, it did not work with Minikube, so we removed the storage class setting again.

PVC

We created a PersistentVolumeClaim (PVC) without the storage class. Thanks to dynamic provisioning, the claim is satisfied without explicitly provisioning a PersistentVolume (PV).
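
A sketch of such a claim (the name and size are assumptions); without a storageClassName, minikube's default "standard" storage class provisions the volume dynamically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: basket-mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi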

MongoDB

We created the MongoDB instance as a Kubernetes Deployment object. For simplicity's sake, we deployed it as a single Mongo pod.
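
A sketch of what this single-replica deployment might look like, mounting the claim from above (the image tag and the claim name are assumptions; the deployment name matches the pod name used below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: basket-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: basket-mongodb
  template:
    metadata:
      labels:
        app: basket-mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:4.2
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db   # MongoDB's default data directory
      volumes:
        - name: mongo-data
          persistentVolumeClaim:
            claimName: basket-mongodb-pvc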

Init

Again, we did not create the database automatically but created it manually after deploying the container.

kubectl exec -ti basket-mongodb-6484cf48f4-srcsb  /bin/bash
mongo mongodb://localhost/admin
 use basket  
 db.init.insert({"init":"true"})   

Deployment

All services are deployed using a Deployment resource. Here we define that a service is not exposed outside of the cluster; it can only be accessed through the Ingress gateway. Additionally, each deployment specifies 2 replicas, so the resulting ReplicaSet keeps the service highly available and load balanced.

Example Deployment Resource:

apiVersion: v1
# ... Service YAML definition
kind: Service
metadata:
  name: user-service
spec:
  ports:
    - port: 80
      targetPort: 8090
  selector:
    app: user-service
---
# ... Deployment YAML definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: user-service:1.0
          ports:
            - containerPort: 8090
          imagePullPolicy: Never

Resilience

Liveness and Readiness

The Liveness state of an application tells whether the internal state is valid. If Liveness is broken, this means that the application itself is in a failed state and cannot recover from it. In this case, the best course of action is to restart the application instance.

The Readiness state tells whether the application is ready to accept client requests. If the Readiness state is unready, Kubernetes should not route traffic to this instance. If an application is too busy processing a task queue, then it could declare itself as busy until its load is manageable again.

Health check

We added health checks to two services: the ordering service and the user service. For this we added the Spring Boot Actuator dependency to the projects; the Actuator uses its health support to expose liveness and readiness HTTP probes.

The ordering service uses the Kubernetes livenessProbe feature to monitor the health of the application in the pod and to restart the pod if the application is no longer healthy. For presentation purposes, the user service deployment does not monitor the health of its underlying service instances.

  livenessProbe:
    httpGet:
      path: /actuator/health/liveness
      port: 8080
    timeoutSeconds: 2
    periodSeconds: 8
    failureThreshold: 3
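
A matching readiness probe could point at the Actuator readiness endpoint. This is a sketch; the timing values simply mirror the liveness probe above and are not taken from the deployment files:

  readinessProbe:
    httpGet:
      path: /actuator/health/readiness
      port: 8080
    timeoutSeconds: 2
    periodSeconds: 8
    failureThreshold: 3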

To check the health of the application you can use curl.

ip='172.17.186.204'
curl --location --request GET "http://$ip/user/actuator/health" | json_pp
curl --location --request GET "http://$ip/order/actuator/health" | json_pp

The presentation details can be found here: Demo Health readme.

Testing

To ensure a stable and resilient application, we created a lightweight version of the chaos monkey. This script deletes pods randomly but spares the last remaining replica of each service; for example, it deletes 2 of the 3 basket service instances. To check whether our application keeps running smoothly, we added a script that creates orders and then checks that all of them are available. This service check uses the login, basket, RabbitMQ, and ordering services.
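
The core idea of the chaos script can be sketched like this (an illustration, not the repository's script; the app label follows the labeling scheme used in the deployments above):

#!/bin/bash
# Delete a random pod of a service, but spare the last remaining replica.
app="basket-service"
pods=($(kubectl get pods -l app=$app -o jsonpath='{.items[*].metadata.name}'))
if [ "${#pods[@]}" -gt 1 ]; then
  victim=${pods[$RANDOM % ${#pods[@]}]}
  echo "chaos monkey deletes $victim"
  kubectl delete pod "$victim"
fi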

After a lot of testing and debugging we found a problem in our HA architecture: killing a certain RabbitMQ pod can lead to errors in some requests, possibly because a clustering setting was not configured correctly. With client-side retries this problem could be fixed, but as we want to keep the service check script simple, we do not kill RabbitMQ servers.

Monitoring
