Load balancing, in its simplest terms, refers to dynamically distributing incoming network traffic across a group of backend nodes. It helps maintain the high availability, scalability, and fault tolerance of your application and gives users a smooth experience, because modern applications serve hundreds of thousands, or even millions, of concurrent requests and must return the correct response to each one. To scale cost-effectively to these volumes, modern computing best practice generally requires adding more servers.
A load balancer acts as a single point of contact for the application. It helps manage your application servers and routes user/client requests across all servers capable of fulfilling them, in a manner that maximizes speed and capacity utilization and ensures that no single server is overworked, which could degrade performance. If a server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts sending requests to it.
The load distribution decision is based on the configured policy and the traffic arriving at the application. The load balancer checks the type of connection requests received from clients, using the protocol and port that you configure for front-end (client to load balancer) connections, and forwards the requests to one or more registered backend nodes using the protocol and port that you set for back-end (load balancer to backend nodes) connections.
The following are the essential characteristics of a load balancer:
You have the flexibility to add and remove backend nodes from your load balancer as your traffic requirements change, without interrupting the flow of user requests to your application. Registering a node adds it to your load balancer; the load balancer starts routing requests to a node as soon as it is registered. Deregistering a node removes it from your load balancer; the load balancer stops routing requests to a node as soon as it is deregistered. A deregistered node remains running but no longer receives traffic from the load balancer, and you can register it with the load balancer again when you need it.
When you create a load balancer, you must choose whether to make it an internal load balancer or an external (Internet-facing) load balancer. External load balancers have public IP addresses, so they can route requests from clients over the Internet to the backend nodes. Internal load balancers have only private IP addresses, so they can route requests to the backend nodes only from clients within the private subnets.
E2E load balancers support different load balancing algorithms, each offering different benefits; the choice depends on your needs.
Real-time monitoring of load balancer health is a free service that provides insight into resource usage across your infrastructure. Several display metrics help you track the operational health of your infrastructure. The information is represented graphically on the MyAccount portal. Learn more
Alerts You have the flexibility to easily configure alert policies and set email notifications, enabling you to respond quickly to critical situations when load balancer health alerts are triggered. Learn more
You can attach a Reserved IP address either as an add-on IP, which is associated with the load balancer's primary network interface and works as an additional IP address, or as a primary public IP, which serves as the primary IP of the load balancer's network interface. Learn More
How to Launch a Load Balancer Appliance?¶
Initiate Load Balancer Creation¶
Log in to the MyAccount portal using the credentials you set up when creating and activating your E2E Networks MyAccount.
After you log in to the E2E Networks My Account, you can click on any of the following options.
On the left side of the MyAccount dashboard, click on the Load Balancers sub-menu available under the Products section.
You will be routed to the Manage Load Balancers page. Now, click on the 'Add New LB' button to create a load balancer; this takes you to the Create Load Balancer page.
Select your Load Balancer Plan¶
All the load balancer plans are listed based on their memory, vCPU, and storage configurations and price.
Please select a plan you wish to use to create the new load balancer.
After selecting the plan, you need to choose the load balancer type. Two options are available:
1. Classic Load Balancer :- A classic load balancer, in its simplest terms, distributes the incoming traffic across multiple compute nodes based on the balancing policy.
2. Advanced Load Balancer :- An advanced load balancer lets you define advanced access rules based on various conditions or ACL rules, such as path-based, host-based, and query-params-match.
To create a Classic Load Balancer, click on the Classic Load Balancer option.
To create an Advanced Load Balancer, click on the Advanced Load Balancer option.
You will have to enter various configuration details and preferences (such as name, mode, port, list type, and SSL certificates) for front-end (client to load balancer) connections and back-end (load balancer to nodes) connections. To configure an Advanced Load Balancer, you can click here to refer to the Advanced Load Balancer deployment help.
For Classic Load Balancer
For Advanced Load Balancer
Choose a Load Balancer Name - A default name is provided based on the plan you selected, but you can modify it and enter any string of characters as the name of your Load Balancer Appliance.
Load Balancing Policy¶
Choose Balancing Policy - Different load balancing algorithms provide different benefits; the choice of method depends on your needs. Please select either “Source IP Hash” or “Round Robin” based on your use case.
Round Robin Method: This method selects the backend servers in turns, distributing connection requests evenly among them.
Source IP Hash: This method selects which backend server to use based on a hash of the source IP (i.e. the user’s IP address), ensuring that a given user always connects to the same backend server.
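To illustrate the difference, the two policies can be sketched in a few lines of Python; the backend addresses below are hypothetical, and a real load balancer also accounts for node health and weights:

```python
import hashlib
from itertools import cycle

# Hypothetical pool of registered backend nodes.
backends = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]

# Round Robin: hand out backends in turn, evenly.
_rr = cycle(backends)

def round_robin():
    return next(_rr)

# Source IP Hash: hash the client IP so the same client
# always lands on the same backend (sticky behaviour).
def source_ip_hash(client_ip):
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]
```

Note the trade-off this sketch makes visible: round robin spreads load most evenly, while source IP hash preserves per-client session affinity.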
Load Balancer Protocol¶
Choose Mode - Mode is the process that checks for connection requests using a specified protocol and port for front-end (client to load balancer) connections. The E2E Networks Load Balancer supports the following protocols.
Both (HTTP and HTTPS): Supports the HTTP and HTTPS protocols independently.
HTTPS (secure HTTP) using SSL/TLS: Supports the X-Forwarded headers and requires an SSL certificate deployed on the load balancer.
HTTP: Supports the X-Forwarded headers.
For the back-end (load balancer to nodes) connections of a load balancer, the HTTP protocol is used by default.
Specify SSL certificate - If you use either “HTTPS (SSL or TLS)” or “Both (HTTP and HTTPS)” for your front-end protocol, you must specify an SSL/TLS certificate on your load balancer. The load balancer uses the certificate to terminate the connection and then decrypt requests from clients before sending them to the nodes.
The SSL and TLS protocols use an X.509 certificate (SSL/TLS server certificate) to authenticate both the client and the back-end application. An X.509 certificate is a digital form of identification issued by a certificate authority (CA). It contains identification information, a validity period, a public key, a serial number, and the digital signature of the issuer.
To import an SSL certificate, click on Buy or Import SSL Certificate, and then click on Import SSL Certificate to import your purchased SSL certificate keys.
In Import SSL Certificate, please enter the following information to import your purchased SSL certificate keys:
Name: Enter the SSL bundle name, a string of characters under which your SSL certificates will be saved.
SSL Certificate: An SSL certificate is a type of digital certificate that provides authentication for a website and enables an encrypted connection. SSL certificates are what allow websites to move from HTTP to HTTPS, which is more secure. You can paste the content of your SSL certificate file or load it from a file by clicking the “Load from file” link.
SSL Private Key: The private key is a separate file that is used for the encryption/decryption of data sent between your server and the connecting clients. You can paste the content of the private key, or load it from a file by clicking the “Load from file” link.
SSL Certificate Chain: A certificate chain is a list of certificates, usually starting with an end-entity certificate, followed by one or more CA certificates, often with the last one being a self-signed root certificate. The SSL certificate file itself does not need to be included in this chain. You can paste the content of your intermediate/CA certificate chain or load it from a file by clicking the “Load from file” link.
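Before pasting, it can help to sanity-check that each blob really is PEM-formatted. A minimal sketch (the function name is illustrative, not part of the import form):

```python
def looks_like_pem(text, kind="CERTIFICATE"):
    """Rough sanity check for pasted PEM content: the blob should contain
    at least one BEGIN marker of the given kind, with matching END markers.
    For the private key field, pass kind="PRIVATE KEY" (or "RSA PRIVATE KEY")."""
    begin = f"-----BEGIN {kind}-----"
    end = f"-----END {kind}-----"
    return text.count(begin) >= 1 and text.count(begin) == text.count(end)
```

A certificate chain would simply show more than one BEGIN/END CERTIFICATE pair in the same blob.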
If a trusted CA didn’t issue the certificate, the connecting device (e.g. a web browser) checks whether a trusted CA issued the certificate of the issuing CA. It continues up the chain until either a trusted CA is found (at which point a trusted, secure connection is established) or no trusted CA can be located (at which point the device will usually display an error).
Redirect HTTP to HTTPS¶
Select the Redirect HTTP to HTTPS checkbox if you use “HTTPS (SSL or TLS)” for your load balancer’s front-end protocol. For web user safety, accessibility, or PCI compliance, it is often essential to redirect all HTTP traffic arriving at the load balancer to HTTPS.
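Conceptually, with this checkbox enabled the load balancer answers every plain-HTTP request with a 301 redirect pointing at the HTTPS URL. A minimal sketch of that response (the exact status code the appliance uses is an assumption):

```python
def redirect_to_https(host, path):
    """Build the 301 response a load balancer could return for an
    HTTP request, sending the client to the same host and path over HTTPS."""
    location = f"https://{host}{path}"
    return (
        "HTTP/1.1 301 Moved Permanently\r\n"
        f"Location: {location}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
```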
Load Balancer Connection Port¶
Specify the Port - The port number is displayed automatically according to the protocol selected for front-end connections. Load balancers can listen on ports 1-65535.
The load balancer will accept HTTP requests on port 80 if you have set the HTTP protocol, or HTTPS requests on port 443 if you have set the HTTPS protocol, for front-end (client to load balancer) connections.
In the List Type field, you can select either Node (Static IPs) or Dynamic Scale Group (for auto-scaling group nodes only) to configure the backend connections of your load balancer as per your requirement.
Registering an E2E node adds it to your load balancer. The load balancer continuously monitors the health of registered nodes and routes requests to the nodes that are healthy. You can register or deregister nodes with the load balancer to handle the demand.
Select Node (Static IPs) in the List Type field. The Node Details section will be displayed.
In the Node Details section, specify the details of the virtual nodes you wish to register behind the load balancer: enter each node’s name, the node’s IP (preferably the private IP if the backend node was created on E2E Cloud), and the port to which you want to send/receive traffic via the load balancer.
You can use the ‘+ Add node’ button to add more node details to the load balancer.
Dynamic Scale Group¶
Registering your auto scaling group with a load balancer helps you set up a load-balanced application, because EAS enables you to dynamically scale compute nodes based on varying workloads and a defined policy. Using this feature, you can meet seasonal or varying infrastructure demands while optimising cost and distributing incoming traffic across your healthy E2E nodes, increasing the scalability and availability of your application.
Select Dynamic Scale Group in the List Type field. The Scale Group Details section will be displayed.
In the Scale Group Details section, select from the dropdown list the application scaling group you wish to register behind the load balancer.
After selecting the scale group, define the port to which you want to send/receive traffic via Load Balancer in Target Port field.
TCP Backend Details¶
A TCP (Transmission Control Protocol) backend in the context of load balancing refers to a configuration where a load balancer distributes incoming TCP connections across a group of backend servers. Unlike HTTP load balancing, which operates at the application layer, TCP load balancing works at the transport layer of the OSI model, making it suitable for various protocols beyond HTTP/HTTPS.
Here’s how TCP backend load balancing generally works:
Listening Port: The load balancer is configured to listen on a specific port for incoming TCP connections. This could be the same port that the client application connects to.
Backend Server Pool: The load balancer is associated with a set of backend servers. These servers could be physical machines, virtual machines, or containers that host the application or service you want to balance.
Connection Distribution: When a client initiates a TCP connection to the load balancer, the load balancer decides which backend server should handle the connection. Various algorithms like round-robin, least connections, and IP hash can be used for this decision-making.
Connection Persistence: In some cases, it’s important to maintain a consistent connection for the duration of a session. TCP load balancers can support session persistence, ensuring that a client’s requests are directed to the same backend server to maintain session states.
Traffic Forwarding: Once the load balancer selects a backend server, it forwards the incoming TCP connection to that server’s IP address and port.
Backend Server Handling: The selected backend server then processes the incoming data from the client, performs the necessary computations, and generates a response.
Response Forwarding: The response from the backend server is sent back through the load balancer to the client.
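The steps above can be sketched as a minimal TCP proxy in Python. This is an illustration of the general technique, not the appliance's implementation; the backend addresses are hypothetical, and round robin stands in for whichever distribution algorithm is configured:

```python
import socket
import threading
from itertools import cycle

# Hypothetical backend server pool (step 2).
BACKENDS = [("10.0.0.11", 3306), ("10.0.0.12", 3306)]
_pool = cycle(BACKENDS)

def pick_backend():
    """Step 3: connection distribution -- plain round robin here."""
    return next(_pool)

def _pump(src, dst):
    """Steps 5-7: copy raw bytes one way until the peer closes."""
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.shutdown(socket.SHUT_WR)

def handle_client(client):
    """Step 4: forward the accepted connection to the chosen backend."""
    backend = socket.create_connection(pick_backend())
    threading.Thread(target=_pump, args=(client, backend), daemon=True).start()
    _pump(backend, client)

def serve(listen_port):
    """Step 1: listen on one front-end port for incoming TCP connections."""
    with socket.socket() as srv:
        srv.bind(("", listen_port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```

Because the proxy never inspects the bytes it copies, the same loop serves any TCP protocol, which is exactly why this mode suits databases, game servers, and other non-HTTP traffic.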
TCP backend load balancing is useful for applications that don’t rely on HTTP and need to maintain persistent connections, handle custom protocols, or operate at a lower network level. Examples include online gaming servers, VoIP services, and databases.
While TCP load balancing offers benefits like distribution of connections, improved reliability, and scalability, it might not provide the content-aware routing and advanced features that are available with application layer load balancing. The choice between TCP and application layer load balancing depends on the nature of your application and the protocols it uses.
For Classic Load Balancer
Click on the Add TCP Backend button.
For Advanced Load Balancer
Click on the Add TCP Backend button.
Your load balancer checks the health of the web application using the configuration that you specify. The health check configuration contains information such as the domain and health URL. If the backend node responds with a 2xx or 3xx HTTP status code for the defined URL path, it is marked UP; otherwise, DOWN. By default, the load balancer checks connectivity to the backend node’s target port defined during creation to mark a backend node up or down. If any node is down or unresponsive, traffic will not be sent to that node. The removal of unresponsive nodes, and their re-addition on successful health checks, is handled automatically by the load balancer appliance.
Select the Add HTTP Health Checks checkbox to define HTTP-based monitoring for the health of your backend nodes.
You need to define a URL path to which HTTP HEAD requests will be sent to fetch the response code. If the backend node responds with a 2xx or 3xx HTTP status code for the defined URL path, it is marked UP; otherwise, DOWN.
The default URL path is /, which means the index page of your site/application hosted on the backend nodes; it can be changed to any other URI path as well.
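The HEAD-based probe described above can be sketched in a few lines of Python; the function name and timeout value are illustrative, not the appliance's actual defaults:

```python
import http.client

def check_health(host, port, path="/", timeout=5):
    """Send an HTTP HEAD request to `path` on the backend node and mark it
    UP (True) on a 2xx or 3xx status code, DOWN (False) otherwise or on
    connection errors/timeouts."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("HEAD", path)
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 400
    except OSError:
        return False
```

A real health checker would run this periodically per node and require several consecutive failures before removing a node from rotation, to avoid flapping.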
Use Reserved IP - Enable this option to use a reserved IP as the default public IP for your load balancer. For example, you can dynamically update the backend resources of your applications and websites by re-assigning the reserved IP address without downtime.
Use VPC IP - Enable this option to use a VPC IP as the default VPC IP for your load balancer, so you can connect to your load balancer internally.
BitNinja is an easy-to-use server security tool combining powerful defense mechanisms. Every BitNinja-protected load balancer learns from every attack, and the system automatically applies this information to all BitNinja-enabled servers and load balancers. This way the shield becomes more and more powerful with every single attack. Learn more
Enable BitNinja - Enable this option to use the BitNinja security tool for your load balancer. BitNinja has different modules for different aspects of cyberattacks. It is easy to install, requires virtually no maintenance, and is able to protect any server by providing immediate protection against a wide range of cyberattacks.
Access logs contain detailed information about requests sent to your load balancer, such as the date/time the request was received, the client’s IP address, the request protocol, request paths, and server responses. These access logs are useful for analysing your application’s incoming network traffic patterns and for troubleshooting issues if any arise.
Access logging is an optional feature of the load balancer and is disabled by default. After you enable access logs for your load balancer, it captures the logs and stores them as compressed files (GNU zip) in the EOS bucket that you specify. You can disable access logging at any time.
There is no additional charge for access logs. You are charged storage costs for EOS. For more information about storage costs, see E2E Object Storage pricing.
Enable Access Logs - Enable this option to start capturing access logs in the E2E storage bucket that you specify.
EOS Bucket: Please enter the name of the bucket you created to store access logs.
Access Key: Please enter the access key you assigned to the storage bucket; it will be used for accessing this storage bucket.
Secret Key: Please enter the secret key of the corresponding access key you assigned to the storage bucket.
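Once captured, the gzip-compressed log files can be downloaded from the bucket and inspected with standard tools. A small Python sketch for reading one such file is below; note that the space-separated field layout is an assumption for illustration, not E2E's documented log format:

```python
import gzip
import io

def parse_access_log(gz_bytes):
    """Decompress one gzip'd access-log file and split each line into
    fields. The field order (time, client IP, method, path, status)
    is assumed here; check an actual log file for the real layout."""
    records = []
    with gzip.open(io.BytesIO(gz_bytes), "rt") as fh:
        for line in fh:
            ts, client_ip, method, path, status = line.split()[:5]
            records.append({"time": ts, "client": client_ip,
                            "method": method, "path": path,
                            "status": int(status)})
    return records
```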
You have the option to enable or disable the access logs service after the load balancer is created.
Deploy Load Balancer¶
After filling in all the details, click on the Deploy button. It will take a few minutes to set up the load balancer, and you will be taken to the ‘Manage Load Balancers’ page.
Load Balancer Detail¶
You can check all the basic, security, backend configuration, and network details of your load balancer on the Load Balancer Details tab.
Monitoring is an important part of maintaining the reliability, availability, and performance of your load balancer. You can check the monitoring information for your load balancer on the Monitoring tab. This information is collected from your load balancer, and the raw data is processed into readable graphs. Each graph is based on one of the different metrics. Learn more.
Server health alerts are created by default for your newly created load balancer using recommended parameters for the alert policy. You can also set up new alerts by defining trigger parameters as per your use case. The alerting system works by sending automatic notifications to your defined email list. Learn more
Manage Load Balancer¶
Once you have created your load balancer, you can access and manage it from the MyAccount portal. Click on the installed load balancer instance; below it you can see the various options to manage that instance.
You will be redirected to the Edit Load Balancer page, where you can add or change the backend and frontend configuration of your load balancer.
The following types of action can be performed on a load balancer.
Stopping your Load Balancer¶
If you want to stop the load balancer, click on the Stop button. A confirmation popup will then open; click on the Power Off button to confirm.
Upgrade your Load Balancer¶
The LB upgrade feature enables customers to easily upgrade their LB plan based on their specific usage requirements. To upgrade your load balancer, click on the Upgrade button under the Action button.
After that, select a plan and click on the Apply button. The upgrade process will then start.
Please ensure that your load balancer is stopped when performing the upgrade action.
To delete your load balancer, click on the Delete button.