So, you have a Kubernetes cluster. Let’s make sure you know how to see what’s going on inside it. In this blog post, you’ll learn how to review logs from Kubernetes and what tooling you may want running in your cluster.
Start by Looking at the System
The first thing you may want to look at is how the Kubernetes master and nodes are behaving. The Kubernetes master controls the entire cluster—what’s running where and what needs to be provisioned—in coordination with the worker nodes.
There are three log files you can look at on the master node:
- /var/log/kube-apiserver.log – API Server, responsible for serving the API
- /var/log/kube-scheduler.log – Scheduler, responsible for making scheduling decisions
- /var/log/kube-controller-manager.log – Controller Manager, responsible for running the core control loops, such as replication controllers
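If you have shell access to the master node, a quick first pass might look like the sketch below. It assumes the file paths listed above; on systemd-based installs these components may log to the journal instead, so the script simply skips any file that isn’t present.

```shell
# Print the last few lines of each master component log, if it exists.
for f in /var/log/kube-apiserver.log \
         /var/log/kube-scheduler.log \
         /var/log/kube-controller-manager.log; do
  if [ -r "$f" ]; then
    echo "== $f =="
    tail -n 20 "$f"
  fi
done
```

You may need to prefix the `tail` with `sudo` depending on how file permissions are set up on your distribution.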
These are the two log files you can look at on worker nodes:
- /var/log/kubelet.log – Kubelet, responsible for running containers on the node
- /var/log/kube-proxy.log – Kube Proxy, responsible for service load balancing
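On the worker side, where the logs live depends on how the kubelet was installed. A minimal sketch, assuming either the file path above or a systemd-managed kubelet whose output lands in the journal:

```shell
# Check the file location first, then fall back to the systemd journal.
if [ -r /var/log/kubelet.log ]; then
  tail -n 20 /var/log/kubelet.log
elif command -v journalctl >/dev/null 2>&1; then
  journalctl -u kubelet --no-pager -n 20
fi
```

The same pattern applies to kube-proxy (`/var/log/kube-proxy.log` or `journalctl -u kube-proxy`).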
These logs tell you what’s going on inside the cluster and are helpful when troubleshooting problems with the master and worker nodes. Sadly, they can’t be viewed through the kubectl command; you have to read them directly from the machine. Depending on where you’re hosting the cluster, that may mean SSHing into the node. Alternatively, you can ship them to one of the many logging consoles or aggregation services out there.
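Once you’re on the node, filtering for errors is a good starting point. Kubernetes components log in the klog format, where each line starts with a severity letter (I, W, E, or F), so grepping for `^E` surfaces error entries. The sample lines below are made up to stand in for a real scheduler log:

```shell
# Write two sample klog-style lines to a scratch file (stand-in for the real log).
cat > /tmp/kube-scheduler.log <<'EOF'
I0301 10:00:01.000000       1 scheduler.go:90] Successfully bound pod web-0
E0301 10:00:02.000000       1 scheduler.go:90] Failed to schedule pod db-0: insufficient cpu
EOF

# Surface only the error-level entries.
grep '^E' /tmp/kube-scheduler.log
```

On a real node you would point `grep` at `/var/log/kube-scheduler.log` (or pipe `journalctl -u kube-scheduler` into it) instead of the scratch file.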
Since a lot of Kubernetes clusters are now hosted with a cloud provider, those master logs can be trickier to get to; on managed offerings you often can’t access the master node at all. Most cloud providers expose these logs through their own logging interface instead. This means you don’t need to worry about the raw log files themselves and can concentrate on the workers and what’s being deployed.