Managing a Kubernetes cluster

Scale up & down a Kubernetes cluster

Once a Kubernetes cluster has been configured, Leaseweb allows you to resize the cluster to the desired number of nodes.

Please note: The additional resources used will be reflected on your invoice. Please consult the pricing at https://www.leaseweb.com/cloud/kubernetes to evaluate the cost impact of this change.

The following steps allow you to scale a cluster as needed:

  1. On the Kubernetes Section of the customer portal, a list of clusters is visible.


  2. Using the Scale action on a cluster will redirect you to the Node pool Configurator page.


  3. The desired number of nodes can be chosen on the Node pool Configurator page.
    Once satisfied with the new number, confirm the action with the Confirm button.

  4. If the cluster is scaled down to fewer nodes, a warning might be displayed asking you to confirm the action again.


  5. Upon re-confirmation, Automation will start scaling the cluster accordingly.
    Use kubectl get nodes to monitor the scaling progress, or refresh the cluster page to see the new number of nodes.
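
As an illustration, the progress can be followed from a terminal with kubectl (assuming kubectl is already configured with the cluster's kubeconfig):

    # Watch nodes join or leave the cluster while the scaling operation runs
    kubectl get nodes --watch

    # Or simply poll the current node count
    kubectl get nodes --no-headers | wc -l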

Specifics to consider when scaling a Kubernetes cluster up or down

  • During the BETA release, we currently allow a minimum of 2 nodes and a maximum of 10 nodes per cluster. This limit can possibly be increased by opening a support ticket.
  • We currently do not allow updating the Node offering once a cluster is provisioned.
    For example, if S.4x8 was selected as the base Node offering during the cluster's initial configuration, changing to another Node offering (S.2x4, S.6x16 or S.16x64) will not be possible.
    Please contact us if this is a problem.
  • We do not support adding Labels & Taints to the newly created nodes, so Pods will be scheduled on them as soon as the new nodes are created (see the example below).
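
For reference, the absence of taints on the new nodes can be verified with kubectl (<node-name> is a placeholder):

    # Show the taints of a single node
    kubectl describe node <node-name> | grep -i taints

    # Or list the taints of all nodes at once
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints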

What kind of consequences can arise from scaling down a Kubernetes cluster?

Scaling down a Kubernetes cluster reduces its capacity in terms of memory and CPU, and might stop pods in the cluster, ultimately causing problems in the applications it serves.
The scaling process does not take the cluster's capacity or current load into account, so extra attention is needed.

To scale down a cluster, Kubernetes will choose a node to remove, drain the pods from it, and delete the node.
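
The draining step is roughly what kubectl drain does, sketched below (the node name is a placeholder; the platform performs this automatically during a scale-down):

    # Mark the node unschedulable and evict its pods,
    # respecting any PodDisruptionBudget during the evictions
    kubectl drain <node-to-remove> --ignore-daemonsets --delete-emptydir-data

    # Once drained, the node is removed from the cluster
    kubectl delete node <node-to-remove>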

If an application lives only on the node being drained, and this application makes use of:

  • Local volume (using hostPath or equivalent)
  • nodeSelector
  • node affinity
  • PodDisruptionBudget

then Kubernetes might have difficulty draining the node while keeping the pods running, and the draining process can hang.
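
As a minimal illustration, a pod such as the following (names and paths are made up) is tied to one specific node through nodeSelector and a hostPath volume, so it cannot simply be rescheduled elsewhere while that node is being drained:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pinned-app                           # hypothetical name
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-node-1    # pins the pod to one specific node
      containers:
        - name: app
          image: nginx:1.25
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          hostPath:
            path: /mnt/app-data                  # data only exists on this node's local disk
            type: DirectoryOrCreate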

To prevent issues with pods while scaling down a cluster, the following strategies can be used:

  • Make sure a Pod has multiple replicas spread over multiple nodes, using topology spread constraints or anti-affinity.
  • Use an explicit PodDisruptionBudget on the deployments/pods (see the example after this list).
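
A minimal sketch of these two strategies combined, assuming a simple stateless Deployment (all names are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          # Spread the replicas over different nodes so that draining
          # one node never takes down all copies at once
          topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: kubernetes.io/hostname
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchLabels:
                  app: web
          containers:
            - name: web
              image: nginx:1.25
    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb
    spec:
      maxUnavailable: 1        # allow evicting one replica at a time during a drain
      selector:
        matchLabels:
          app: web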


A stalled or blocked draining process should be visible with the kubectl get nodes command, or by looking at the cluster's events.
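
For instance, the following commands can help spot a node that is stuck draining (<node-name> is a placeholder):

    # A node being drained is cordoned and shows SchedulingDisabled in its status
    kubectl get nodes

    # Recent cluster events, most recent last
    kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp

    # Pods still running on the node being removed
    kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name>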

To fix this, the remaining pods/resources can be cleaned up on the node before retrying the scale cluster action.
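
As a sketch, a blocking pod can be removed manually before retrying (pod and namespace names are placeholders; only force-delete pods that can safely be recreated):

    # Delete a blocking pod; --force skips the normal graceful eviction
    kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force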

When using PodDisruptionBudget, be careful of the limitations it introduces.

For example, consider a Deployment with only 1 replica combined with a PodDisruptionBudget that has a minAvailable: 1 policy.
When the node hosting this pod is drained, Kubernetes cannot evict the pod without violating the budget, which blocks the draining of the node and therefore prevents the cluster from scaling.
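
A minimal sketch of this problematic combination (names are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: single-replica-app
    spec:
      replicas: 1                     # only one copy of the pod exists
      selector:
        matchLabels:
          app: single-replica-app
      template:
        metadata:
          labels:
            app: single-replica-app
        spec:
          containers:
            - name: app
              image: nginx:1.25
    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: single-replica-pdb
    spec:
      minAvailable: 1                 # evicting the only pod would drop availability to 0, so eviction is refused
      selector:
        matchLabels:
          app: single-replica-app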

Try to avoid such issues by configuring PodDisruptionBudgets in a way that leaves the scheduler some flexibility.
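
How many disruptions each budget currently allows can be checked as follows; an ALLOWED DISRUPTIONS value of 0 means that draining a node hosting such a pod will block:

    kubectl get pdb --all-namespaces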

A StatefulSet with a single replica can also cause similar issues.
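
Where the workload allows it, increasing the replica count before scaling the cluster down avoids this (the StatefulSet name is a placeholder):

    kubectl scale statefulset <name> --replicas=2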

Further documentation & Related API objects

The following documentation might help:

  • Disruptions / PodDisruptionBudget: explains voluntary disruptions, the PodDisruptionBudget object and its use.
    https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
    https://kubernetes.io/docs/tasks/run-application/configure-pdb/
  • Node affinity: allows assigning a pod to certain nodes.
    https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
  • NodeSelector: allows selecting on which nodes a pod can be scheduled.
    https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
  • Topology spread constraints: control how pods are spread across nodes or other topology domains.
    https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/


