Scale Kubernetes Clusters

Upon configuring a new cluster, two options are available. You can configure your cluster with manual scaling or you can select the cluster autoscaling option. Once a Kubernetes cluster has been configured, Leaseweb allows you to resize the cluster by adjusting the number of nodes and switching between scaling modes (manual or auto) as needed.

The additional resources used will be reflected on your invoice. Please review the pricing at https://www.leaseweb.com/cloud/kubernetes to evaluate the cost impact before scaling.

Manual scaling

Enabling Manual Scaling for a Kubernetes Cluster from the Customer Portal

Manual scaling allows you to manually adjust the number of nodes in your cluster based on your specific needs. You have full control over scaling decisions, ensuring the cluster size matches your workload requirements at any given time.

The following steps allow you to scale a cluster as needed:

  1. On the Kubernetes section of the Customer Portal, a list of clusters is visible. Please note that the “Ready Workers” value indicates the actual number of worker nodes the cluster is using.
  2. Using the Update action on a cluster will redirect you to the Update Cluster page, where you can find the Node Offering Scaling Type option. There, select the Manual Scaling option.
  3. The desired number of nodes can be chosen in the Number of Workers section.
    • Please note that if the selected number of nodes is lower than the number of currently ready worker nodes, this may lead to instability in your cluster.
      Once satisfied with the new number, confirm the action with the Confirm button.
  4. If the cluster is scaled down to fewer nodes, a warning might be emitted upon confirmation.
  5. Upon re-confirmation, automation will start scaling the cluster accordingly.
    Cluster admins can monitor the scaling progress within a cluster using the kubectl get nodes command, or refresh the cluster page to visualize the new number of nodes. The scaling process (both scaling up and scaling down) is also reflected in the cluster’s information.
  6. You can view the selected Scaling Type and the desired Worker Nodes if you expand the cluster’s information in the Customer Portal.
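The monitoring mentioned in step 5 can also be done entirely from the command line. A minimal sketch, assuming your kubeconfig already points at the cluster:

```shell
# List worker nodes and their status while scaling is in progress
kubectl get nodes

# Watch node additions/removals in real time (Ctrl+C to stop)
kubectl get nodes --watch
```

Nodes being added appear with status NotReady until they have joined the cluster; nodes being removed disappear from the list once draining completes.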

The maximum number of worker nodes must be greater than or equal to the minimum number of worker nodes. If you require more than 10 nodes, please contact us. 

Autoscaling

Cluster autoscaling automatically adjusts the number of nodes in your cluster based on resource demand. When workloads increase, the system adds nodes to accommodate the load. When demand decreases, it removes unnecessary nodes, ensuring optimal resource usage and cost efficiency.

For more information about cluster autoscaling, refer to the official Kubernetes documentation.
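As an illustration of how demand drives autoscaling, the following sketch creates pods whose combined resource requests exceed the current node capacity, leaving some pods Pending and prompting the autoscaler to add nodes. The deployment name and request sizes are hypothetical:

```shell
# Hypothetical workload that requests more resources than current nodes provide
kubectl create deployment load-test --image=nginx
kubectl set resources deployment/load-test --requests=cpu=2,memory=4Gi
kubectl scale deployment load-test --replicas=10

# Pending pods signal the autoscaler to scale up; watch nodes being added
kubectl get pods -l app=load-test
kubectl get nodes --watch

# Clean up: removing the load lets the autoscaler scale back down later
kubectl delete deployment load-test
```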

Enabling Cluster Autoscaling for a Kubernetes Cluster from the Customer Portal

You can enable autoscaling by setting a minimum and maximum cluster size.

The following steps allow you to configure autoscaling in a cluster as needed:

  1. On the Kubernetes section of the Customer Portal, a list of clusters is visible.
    • Please note that the “Ready Workers” value indicates the actual number of worker nodes the cluster is using.
  2. Using the Update action on a cluster will redirect you to the Update Cluster page, where you can find the Node Offering Scaling Type option. There, select the Autoscaling option.
  3. Minimum amount of workers: Defines the smallest size the cluster can scale down to. It must be at least 2 and cannot exceed the maximum amount of workers.
    Maximum amount of workers: Specifies the maximum size the cluster can scale up to. Please note that the maximum amount of workers must be equal to or greater than the number of currently ready worker nodes. A relevant message will be displayed to inform you of this potential issue.
    Once satisfied with the new values, confirm the action with the Confirm button.
  4. If the cluster is scaled down to fewer nodes, a warning might be emitted.
  5. Upon re-confirmation, automation will start scaling the cluster accordingly.
    Cluster admins can monitor the scaling progress within a cluster using the kubectl get nodes command, or refresh the cluster page to visualize the new number of nodes. The scaling process (both scaling up and scaling down) is also reflected in the cluster’s information.
  6. You can view the selected Scaling Type if you expand the cluster’s information in the Customer Portal.

The maximum number of worker nodes must be greater than or equal to the minimum number of worker nodes. If you require more than 10 nodes, please contact us. 

View the Cluster Autoscaling status

To monitor the status of the Autoscaling, you can check the relevant details from the configmap.

Run the following command to retrieve the status of the Autoscaling in the kube-system namespace:

kubectl get configmap cluster-autoscaler-status -o yaml -n kube-system

This command will return an output similar to the example below:

apiVersion: v1
data:
  status: |
    time: 2025-01-09 12:02:54.818691507 +0000 UTC
    autoscalerStatus: Running
    clusterWide:
      health:
        status: Healthy
        nodeCounts:
          registered:
            total: 3
            ready: 3
            notStarted: 0
          longUnregistered: 0
          unregistered: 0
        lastProbeTime: "2025-01-09T12:02:54.818691507Z"
        lastTransitionTime: "2025-01-07T09:29:00.390169767Z"
      scaleUp:
        status: NoActivity
        lastProbeTime: "2025-01-09T12:02:54.818691507Z"
        lastTransitionTime: "2025-01-07T09:29:00.390169767Z"
      scaleDown:
        status: NoCandidates
        lastProbeTime: "2025-01-09T12:02:54.818691507Z"
        lastTransitionTime: "2025-01-07T09:29:00.390169767Z"
. . .
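To extract just the status document without the surrounding ConfigMap metadata, a jsonpath query can be used:

```shell
# Print only the data.status field of the ConfigMap
kubectl get configmap cluster-autoscaler-status -n kube-system \
  -o jsonpath='{.data.status}'

# Or filter for the overall autoscaler state
kubectl get configmap cluster-autoscaler-status -n kube-system \
  -o jsonpath='{.data.status}' | grep autoscalerStatus
```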

Switching Between Scaling Modes

You can switch between manual and autoscaling modes for your cluster. To do so:

  1. On the Kubernetes section of the Customer Portal, a list of clusters is visible.
  2. Using the Update action on a cluster will redirect you to the Update Cluster page, where you can find the Node Offering Scaling Type option. There, you can switch between the manual and automatic scaling options.
  3. If the cluster is scaled down to fewer nodes, a warning might be emitted.

Please note: 

  1. If the minimum value set exceeds the number of ready worker nodes, autoscaling will initiate a scale-up to meet the defined minimum.
  2. The maximum number of worker nodes must be defined as equal to or greater than the current number of ready worker nodes.
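To check the current number of ready worker nodes before choosing these values, a quick sketch from the command line:

```shell
# Count nodes whose status is exactly "Ready"
# (depending on your setup, control-plane nodes may need to be excluded)
kubectl get nodes --no-headers | awk '$2 == "Ready"' | wc -l
```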

Specifics to consider during the scale-down/scale-up of a Kubernetes cluster

  • During the BETA release, we currently allow for a minimum of 2 nodes per cluster and a maximum of 10 nodes per cluster. This limit may be increased by opening a support ticket.
  • We currently do not allow updating the Node offering once a cluster is provisioned.
    For example, if S.4×8 was selected as the base Node offering during the cluster’s initial configuration, updating to another Node offering (S.6×16 or S.16×64) will not be possible.
    Please contact us if this is a problem.

An unstable or blocked draining process should be visible with the kubectl get nodes command, or by looking at the cluster’s events.

To fix this, the remaining pods/resources can be cleaned up on the node before retrying the scale cluster action.
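A sketch of that cleanup, assuming a stuck node named worker-node-1 (the node name is hypothetical):

```shell
# Identify the stuck node and inspect why draining is blocked
kubectl get nodes
kubectl describe node worker-node-1
kubectl get events --field-selector involvedObject.name=worker-node-1

# List the pods still running on the node
kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-node-1

# Evict the remaining pods, then retry the scale action in the Customer Portal
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data
```

Pods blocked by a restrictive PodDisruptionBudget will keep the drain waiting; relaxing the budget or deleting the blocking pods lets the drain complete.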

For more information regarding best practices for scaling your Kubernetes clusters, please refer to the official Kubernetes documentation.

Further documentation on Scaling & Related API objects

  • Disruptions / PodDisruptionBudget — Explains disruptions and the PodDisruptionBudget and its use.
    https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
    https://kubernetes.io/docs/tasks/run-application/configure-pdb/
  • Node affinity — Allows assigning a pod to certain nodes.
    https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
  • NodeSelector — Allows selecting on which node a pod can be scheduled.
    https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
  • Topology spread constraints — Control how pods are spread across failure domains such as zones and nodes.
    https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
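Of the objects above, the PodDisruptionBudget is the most relevant to scale-down: it keeps a minimum number of replicas available while nodes are drained. A minimal sketch, where the application label and budget values are hypothetical:

```shell
# Ensure at least one replica of pods labeled app=my-app stays available
# during voluntary disruptions such as node drains
kubectl create poddisruptionbudget my-app-pdb \
  --selector=app=my-app \
  --min-available=1

# Verify the budget and its current allowed disruptions
kubectl get pdb my-app-pdb
```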