create a cluster with logging and monitoring disabled
gcloud beta container clusters create "logging-monitoring-3" \
--zone "us-central1-a" \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--cluster-version "1.8.4-gke.0"
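If kubectl is not yet pointed at the new cluster, fetch its credentials first (same cluster name and zone as above):
gcloud container clusters get-credentials "logging-monitoring-3" --zone "us-central1-a"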
verify that there are no fluentd pods running as a DaemonSet on your cluster
kubectl get pods --all-namespaces | grep fluentd
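The same check can also be done against the DaemonSets themselves rather than their pods:
kubectl get daemonsets --all-namespaces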
create a dedicated *-system namespace to hold these logging and monitoring components
kubectl create namespace logmon-system
The fluentd DaemonSet runs on each node and ships container logs, for instance /var/log/containers/*.log, to a centralized backend, in this case Elasticsearch.
cd fluentd/
kubectl apply -f rbac.yml
kubectl apply -f configmap.yml
kubectl apply -f fluent-es-ds.yml
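A rough sketch of what a DaemonSet like fluent-es-ds.yml looks like; the names, image tag and the Elasticsearch host below are assumptions, the manifests in this repo may differ:

apiVersion: extensions/v1beta1   # DaemonSet API group available on 1.8.x clusters
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: logmon-system
spec:
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      serviceAccountName: fluentd-es              # bound by rbac.yml
      containers:
      - name: fluentd-es
        image: fluent/fluentd-kubernetes-daemonset:v1.3-debian-elasticsearch  # assumed image/tag
        env:
        - name: FLUENT_ELASTICSEARCH_HOST         # assumed Elasticsearch Service name
          value: elasticsearch
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log                     # where /var/log/containers/*.log lives
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers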
An Elasticsearch StatefulSet to handle ingestion of the logs forwarded by fluentd
cd elasticsearch/
kubectl apply -f rbac.yml
kubectl apply -f elasticsearch.yml
kubectl apply -f service.yml
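A minimal sketch of the kind of StatefulSet elasticsearch.yml describes, paired with a headless Service from service.yml; names, image, replica count and storage size are assumptions and the real manifests may differ:

apiVersion: apps/v1beta2                 # StatefulSet API group on 1.8.x clusters
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: logmon-system
spec:
  serviceName: elasticsearch             # headless Service defined in service.yml
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1   # assumed image
        ports:
        - containerPort: 9200            # HTTP, where fluentd ships logs
        - containerPort: 9300            # inter-node transport
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi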
Kibana, a dashboard on top of Elasticsearch
cd kibana/
kubectl apply -f kibana.yml
kubectl apply -f service.yml
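Kibana is not exposed publicly here; assuming the Service created by service.yml is named kibana and listens on port 5601, it can be reached locally with a port-forward (a reasonably recent kubectl is needed for svc/ targets):
kubectl port-forward -n logmon-system svc/kibana 5601:5601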
Phase one: resource usage metrics, such as container CPU and memory usage, are available in Kubernetes through the Metrics API. They are generated by the kubelet, which provides per-node, per-pod and per-container usage information. There is also Metrics Server, essentially a slimmed-down heapster: it keeps only the latest values locally, has no sinks, and exposes the master Metrics API. The best explanation I've seen is the design doc (PRD) for the Kubernetes monitoring architecture.
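For example, once metrics-server (or heapster) is running, the same numbers can be pulled through kubectl, or straight from the aggregated Metrics API if it is enabled:
kubectl top nodes
kubectl top pods --all-namespaces
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods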
Phase two: monitoring agents on each node report metrics to a cluster-level monitoring agent. Heapster groups the information by pod along with the relevant labels, and the data is then pushed to a configurable backend for storage and visualization.
Prometheus takes advantage of the /metrics endpoints described above, scraping them directly.
cd prometheus/
kubectl apply -f rbac.yml
kubectl apply -f prometheus.yml
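A minimal sketch of the scrape configuration (typically shipped in a ConfigMap alongside the Prometheus deployment) that makes Prometheus discover pods exposing /metrics via Kubernetes service discovery; the job name and the prometheus.io/scrape annotation convention are assumptions about this repo:

global:
  scrape_interval: 15s

scrape_configs:
  # Discover pods through the Kubernetes API and keep only those
  # annotated with prometheus.io/scrape: "true".
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod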