Our experience running a single node K3s cluster in a production environment.
Andes Web Design has always represented itself as a technology leader, using the latest tools and techniques to place small and medium businesses at the leading edge of the technology landscape. Partly this comes from our interest in tech, but the real driver is efficiency: it simply makes business sense to do things in the most modern and efficient way possible, especially for a small organization that operates very lean. This is why we went with Kubernetes for our production deployments: we wanted the full feature set that the leading container orchestration system has to offer. However, because we also write efficient code and are still a small and growing company, we didn't want to pay thousands of dollars each year for a managed multi-node cluster. Our solution has been to run a single-node K3s cluster. In this post we describe our deployment and some of the techniques we have developed over the past couple of years operating this cluster in a production setting.
Before going into the details of how we run our node, we would first like to say that the entire experience has been great. We would highly recommend both K3s and this deployment style for their simplicity and robustness. Our experience has confirmed Kubernetes's reputation for being exceptionally good at process management, keeping all manner of services available like clockwork. We also found that K3s uses about 1 GB of RAM and averages perhaps a quarter of a CPU core, making it an amazingly light Kubernetes distribution. Its website markets it more towards the edge/ARM/embedded world, but our experience demonstrates it is perfect for orchestrating the tasks on a typical business server.
To begin, we note that a single-node deployment allows a few simplifications. The most helpful is using a hostPath mount for all volumes. Since you are on a single node, you simply do not need to worry about provisioning proper PersistentVolumes, or about installing a block storage solution like Longhorn. Another is not worrying about node roles or taints: K3s ships as a single binary with the master and worker components bundled together, and is designed to run in a single-node configuration out of the box, so we merely need to define our cluster and everything will run together.
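As a sketch of what this looks like in practice, here is a minimal Deployment using a hostPath volume; the names, image, and paths are hypothetical placeholders, not our actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: example/app:latest   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
      volumes:
        - name: data
          hostPath:
            path: /srv/app-data       # a plain directory on the single node
            type: DirectoryOrCreate   # create it if it doesn't exist
```

Because the pod can only ever be scheduled on the one node, the hostPath directory is always present, which is exactly why this shortcut is safe here and unsafe on a multi-node cluster.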
One nice aspect of Kubernetes is the interchangeability of its components, which allows us to choose the most reliable pieces for our cluster. For this reason we use PostgreSQL as the cluster datastore, an option unique to K3s. We went with it because of our experience running Postgres in production and as our standard development database, where it has always lived up to its reputation for extreme reliability. Another customization we made was running an Nginx ingress. This too was a nod to our experience running Nginx as our main web server for static files and knowing it is incredibly reliable.
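For readers curious what this customization involves, K3s accepts a `datastore-endpoint` setting pointing at a Postgres connection string, and its bundled Traefik ingress can be disabled to make room for an alternative ingress controller. A sketch of the server config file, with hypothetical credentials and database name:

```yaml
# /etc/rancher/k3s/config.yaml  (read by the k3s server on startup)
# Use an external PostgreSQL database as the cluster datastore
# (user, password, and database name below are placeholders)
datastore-endpoint: "postgres://k3s:changeme@localhost:5432/k3s"

# Disable the bundled Traefik ingress so a different
# ingress controller (e.g. Nginx) can be installed instead
disable:
  - traefik
```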
Operationally, the task of running a single-node cluster is no more troublesome than running a standard business server. We use Kubernetes's native liveness probe functionality to monitor and restart services, so our server monitoring task mainly reduces to assessing whether the node itself is healthy. We don't run any monitoring or logging pods, preferring to keep our deployment light, and over two years this has been a solid strategy. In our experience, at this scale you simply don't need them.
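A liveness probe of the kind described above is a small fragment of a container spec; when the probe fails enough times in a row, the kubelet restarts the container automatically. The endpoint and port here are hypothetical:

```yaml
# Fragment of a container spec: restart the container
# if its health endpoint stops responding
livenessProbe:
  httpGet:
    path: /healthz        # hypothetical health-check endpoint
    port: 8080
  initialDelaySeconds: 10 # grace period after container start
  periodSeconds: 15       # probe every 15 seconds
  failureThreshold: 3     # restart after 3 consecutive failures
```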
In conclusion, we would happily recommend Kubernetes, and K3s in particular, as a rock-solid process manager for your business server deployments. This option wasn't really available until recently, but now that it is here, it lives up to its reputation. Additionally, once you need to scale up to a multi-node cluster, you will only need to make minor changes to your deployment, so it is about as close to future-proof as one can get.