Managing the load balancer of Public Cloud

Leaseweb Public Cloud offers load balancing as a service, based on the HAProxy software.

"As a service" in this context means the load balancer software and its configuration are managed outside of the load balancer instance. You can manage the configuration or recreate the load balancer through the customer portal and the API.

Launching a load balancer

The load balancer can be deployed on any instance type. Therefore, all instance types, vintages, and sizes are available for running a load balancer.

To launch a load balancer, please follow these steps:

  1. Go to the load balancer launch page
  2. Choose your instance type. As a rule of thumb, allow 2 vCPUs and 2 GB of RAM per roughly 5,000 simultaneous connections.
  3. Give your load balancer a name.
  4. Specify the protocol and port on which the load balancer receives traffic, and the port on which the instances receive traffic from the load balancer.
    1. HTTP: default HTTP traffic on port 80.
    2. HTTPS: HTTPS traffic with SSL termination on the load balancer. This requires an SSL certificate to be uploaded.
    3. TCP: any type of TCP traffic is proxied directly through to the backend instances. Also use this option if you want to forward TLS traffic without terminating it on the load balancer.
  5. Choose the instances that should act as backend instances and receive traffic from the load balancer. Only instances with private networking enabled are listed here, so make sure to enable private networking on your instances first.
  6. Choose the contract and payment terms for the load balancer instance and click the Launch button.
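Since the service is based on HAProxy, the setup produced by the steps above can be pictured as a minimal HAProxy configuration. This is an illustrative sketch only, not the actual generated configuration; the frontend/backend names and private IP addresses are hypothetical:

```
# Sketch of what the managed service sets up for an HTTP listener
frontend fe_http
    bind *:80                     # protocol and port the load balancer listens on
    mode http
    default_backend be_web

backend be_web
    mode http
    balance roundrobin
    # Backend instances, reached over the private network
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```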

Load balancer configuration

When the load balancer is running, the instance appears under your Public Cloud account in the customer portal. The following settings can be managed:

  • Manage listener rule
  • Load balancer algorithm method
  • Client and server idle timeout
  • Disable X-Forwarded-For
  • Health checking

Manage listener rule

Manage the protocol and port on which the load balancer receives traffic, and the port on which the instances receive traffic from the load balancer.

  • HTTP: default HTTP traffic on port 80.
  • HTTPS: HTTPS traffic with SSL termination on the load balancer. This requires an SSL certificate to be uploaded.
  • TCP: any type of TCP traffic is proxied directly through to the backend instances. Also use this option if you want to forward TLS traffic without terminating it on the load balancer.
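In HAProxy terms, the three listener types correspond to different frontend definitions. The following sketch is illustrative (names and the certificate path are hypothetical, and the managed service generates its configuration for you):

```
# HTTP listener: plain HTTP on port 80
frontend fe_http
    bind *:80
    mode http
    default_backend be_web

# HTTPS listener: TLS is terminated at the load balancer using the uploaded certificate
frontend fe_https
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    default_backend be_web

# TCP listener: traffic is proxied through unmodified (use this for TLS passthrough)
frontend fe_tcp
    bind *:443
    mode tcp
    default_backend be_tcp
```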

Load balancer algorithm method

  • Round robin: each instance in the group is used in turn.
  • Least connections: the instance with the fewest active connections receives the request or connection.
  • Source IP: the client's source IP address is hashed and divided by the total number of target instances to determine which instance receives the request. The hashing result changes when the list of target instances changes or when one of them goes offline, which directs many clients to a different server. It is best to complement this algorithm with the sticky session setting if you use the HTTP or HTTPS listener protocol.
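These three methods map directly onto HAProxy's balance directives. A sketch, with a hypothetical backend name:

```
backend be_web
    mode http
    # Pick exactly one of the following:
    balance roundrobin    # Round robin: each server in turn
    # balance leastconn   # Least connections: fewest active connections wins
    # balance source      # Source IP: hash of the client IP selects the server
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```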

Client and server idle timeout

The maximum allowed inactivity time for both client requests and target instance replies, after which the connection is dropped.
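In HAProxy these limits correspond to the `timeout client` and `timeout server` directives. An illustrative sketch (the 50-second value is an example, not the service default):

```
defaults
    mode http
    timeout client 50s    # drop the connection after 50s of client inactivity
    timeout server 50s    # drop the connection after 50s of server inactivity
```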

X-Forwarded-For

The load balancer adds the X-Forwarded-For header to the HTTP requests it sends to the target instances. This header contains the original IP address of the client that made the request.
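On a backend instance, the original client address can be recovered by reading the left-most entry of this header. A minimal Python sketch, assuming request headers are available as a dictionary (the helper name is hypothetical):

```python
def client_ip(headers: dict, fallback: str = "") -> str:
    """Return the original client IP from the X-Forwarded-For header.

    The header may contain a comma-separated chain of proxies;
    the left-most entry is the original client's address.
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    # No header present (e.g. X-Forwarded-For disabled on the load balancer):
    # fall back to the peer address the caller supplies.
    return fallback
```

If X-Forwarded-For is disabled, the backend only sees the load balancer's private address as the connection peer.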

Health checking

The load balancer periodically performs HTTP requests to each instance to verify that it is online and healthy. If an instance stops responding to these requests, or the HTTP status code it returns is not "200 OK", it stops receiving client requests.

  • Method: the HTTP method to use in the health check requests.
  • URI: an arbitrary URI to use in the health check requests.
  • Host: use HTTP/1.1 instead of the default HTTP/1.0 and include the given Host: header in the health check requests.
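These settings correspond to HAProxy's HTTP health check directives. A sketch with hypothetical values for the method, URI, and host:

```
backend be_web
    mode http
    # Method and URI of the health check request
    option httpchk GET /health
    # Adding a Host header (switches the check to HTTP/1.1)
    http-check send hdr Host www.example.com
    # Only "200 OK" counts as healthy
    http-check expect status 200
    server web1 10.0.0.11:80 check
```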
