Scaling

Charmed Kubernetes has been designed to be flexible enough to efficiently run your workloads. Various components of Charmed Kubernetes can be horizontally scaled to meet demand or to increase reliability, as detailed below.

Note:

The information here is for scaling the installed Kubernetes® itself. For information about pod autoscaling, please see the Kubernetes autoscaling documentation.

kubernetes-control-plane

The kubernetes-control-plane nodes act as the control plane for the cluster. Charmed Kubernetes was designed with separate control-plane nodes so that these nodes can be scaled independently of the worker units, to give better efficiency and flexibility.

Additional units can be added like so:

juju add-unit kubernetes-control-plane

To add multiple units, you can also specify a numerical value:

juju add-unit kubernetes-control-plane -n 3
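
You can follow the progress of the new units as they come up with:

juju status kubernetes-control-plane
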
Note:

Prior to the 1.24 release of the charms, this application and its charm were named `kubernetes-master`. See inclusive-naming for more information.

kubernetes-worker

As the Kubernetes worker nodes handle the actual workloads, they usually run on machines with more resources. The resource profile of units is maintained by Juju using constraints.

You can check the current constraints with the command:

juju get-constraints kubernetes-worker

This will return the current settings, for example:

cores=4 mem=4096M root-disk=16384M

To create an additional kubernetes-worker unit with this resource profile, you can simply run:

juju add-unit kubernetes-worker

To add multiple units, you can specify a number (for example 2):

juju add-unit kubernetes-worker -n 2

To add units with specific new resource constraints, these may also be specified as part of the command. For example:

juju add-unit kubernetes-worker -n 2 --constraints "mem=6G cores=2"

... will cause two new kubernetes-worker units to be added, each with at least 2 cores, 6G of memory and 16G of root disk (the existing application constraints are inherited where they are not overridden).

To change the constraints for all future units of kubernetes-worker, use the set-constraints command:

juju set-constraints kubernetes-worker cores=2 mem=6G root-disk=16G

Note that in this case, any constraints you supply will replace all the existing constraints, so in this example we also include the existing root-disk requirement.
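
You can confirm the new values by repeating the earlier command:

juju get-constraints kubernetes-worker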

Note:

Constraints are designed to supply the minimum of what is requested. This can result in the actual instances far exceeding these values, depending on the backing cloud.

Scaling down kubernetes-worker

Should workloads reduce, it is also possible to scale down the number of worker nodes. To do this safely, the node to be removed should first be paused:

juju run kubernetes-worker/3 pause
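
If you are using a Juju 2.9 client, the equivalent syntax for running an action is:

juju run-action kubernetes-worker/3 pause --wait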

Pausing the worker cordons the node and indicates to Kubernetes that it is out of service. Any workloads will be evicted and rescheduled onto other worker units. You can verify this with the command:

kubectl get pod -o wide
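
The paused node should also be shown as unschedulable (typically reported as SchedulingDisabled) in the output of:

kubectl get nodes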

The individual unit (in this example, number 3) can then be safely removed:

juju remove-unit kubernetes-worker/3
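
You can confirm the remaining workers with:

juju status kubernetes-worker

The corresponding node should also eventually disappear from the output of `kubectl get nodes`.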

Note that due to the numbering system used by Juju, if you subsequently add additional units of this application, the numbers of any previously deleted units will not be re-used.

etcd

Charmed Kubernetes installs a three-machine cluster for etcd, which provides tolerance for a single failure. Should you wish to extend the fault tolerance, you can add additional units of etcd:

juju add-unit etcd -n 2

The recommended cluster size for etcd is three, five or seven machines: an odd number of members gives the best fault tolerance for the number of machines used. Adding large numbers of additional units has a negative effect on performance due to the overhead of synchronising data across all members.
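
Depending on the charm revision deployed, the etcd charm may also expose a health action, which can be used to check cluster health once the new units have settled:

juju run etcd/0 health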

Juju high availability

On a default deployment of Charmed Kubernetes there is only one Juju controller instance, which isn't desirable for critical applications. It is possible to scale out the controller itself to remove this single point of failure.

Juju supports a high availability mode, which runs multiple controllers with automatic failover.

A single command will automatically create and maintain high availability for Juju:

juju enable-ha
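
By default this results in three controllers. If you need a different level of redundancy, you can request another (odd) number of controllers with the -n flag:

juju enable-ha -n 5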

You can verify that the additional machines have been added by listing the machines in the controller model:

juju machines -m controller

For a more detailed guide, please refer to the Juju high availability documentation.
