Nginx Load Balancing

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations.


A sample nginx.conf for a basic load-balanced setup:

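The sketch below is a minimal example, assuming two backends named weblb1.sysadminslab.com and weblb2.sysadminslab.com serving plain HTTP, with nginx listening on port 80:

events { }

http {
    upstream weblb {
        server weblb1.sysadminslab.com;
        server weblb2.sysadminslab.com;
    }

    server {
        listen 80;

        location / {
            # forward every request to the upstream group defined above
            proxy_pass http://weblb;
        }
    }
}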

Choosing a Load Balancing Method

round-robin:

This method is used by default (there is no directive for enabling it):

upstream weblb {
    server weblb1.sysadminslab.com;
    server weblb2.sysadminslab.com;
}

 

least_conn:

A request is sent to the server with the least number of active connections:

upstream weblb {
    least_conn;
    server weblb1.sysadminslab.com;
    server weblb2.sysadminslab.com;
}

 

ip_hash:

The server to which a request is sent is determined from the client IP address. In this case, either the first three octets of the IPv4 address or the whole IPv6 address is used to calculate the hash value. The method guarantees that requests from the same address reach the same server unless that server is unavailable.

upstream weblb {
    ip_hash;
    server weblb1.sysadminslab.com;
    server weblb2.sysadminslab.com;
    server weblb3.sysadminslab.com down;   # marked as temporarily unavailable, preserving the current hash distribution
}

Weight

A load-balanced setup that includes server weights could look like this:

upstream weblb {
    server weblb1.sysadminslab.com weight=1;
    server weblb2.sysadminslab.com weight=2;
    server weblb3.sysadminslab.com weight=4;
}

The default weight is 1. With a weight of 2, weblb2.sysadminslab.com will be sent twice as much traffic as weblb1, and weblb3, with a weight of 4, will handle twice as much traffic as weblb2 and four times as much as weblb1.

Max Fails

According to the default round-robin settings, nginx will continue to send requests to the backend servers even if they are not responding. max_fails can automatically prevent this by marking unresponsive servers as unavailable for a set amount of time.

Two parameters control this behavior: max_fails and fail_timeout. max_fails sets the maximum number of failed attempts to connect to a server that may occur before it is considered inactive. fail_timeout specifies how long the server is considered inoperative; once that time expires, new attempts to reach the server will start again. The default timeout value is 10 seconds.

A sample configuration might look like this:

upstream weblb {
    server weblb1.sysadminslab.com max_fails=3 fail_timeout=15s;
    server weblb2.sysadminslab.com weight=2;
    server weblb3.sysadminslab.com weight=4;
}

Note: Session persistence

If there is a need to tie a client to a particular application server (in other words, to make the client’s session “sticky” or “persistent” by always trying to select the same server), the ip_hash load balancing mechanism can be used.

Example:

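A sketch of such a setup, assuming the same weblb backends as above and a front-end server listening on port 80 (both blocks belong inside the http context):

upstream weblb {
    ip_hash;
    server weblb1.sysadminslab.com;
    server weblb2.sysadminslab.com;
}

server {
    listen 80;

    location / {
        # ip_hash keeps each client IP pinned to the same backend,
        # so session data stored on that backend stays reachable
        proxy_pass http://weblb;
    }
}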

Setting or Resetting Headers

To adjust or set headers for proxy connections, we can use the proxy_set_header directive. For instance, to change the “Host” header as we have discussed, and add some additional headers commonly used with proxied requests, we could use something like this:

# server context

location /match/here {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_pass http://example.com/new/prefix;
}

. . .

The above configuration sets the “Host” header to the $host variable, which should contain information about the original host being requested. The X-Forwarded-Proto header gives the proxied server information about the scheme of the original client request (whether it was an http or an https request).

The X-Real-IP header is set to the IP address of the client so that the proxied server can correctly make decisions or log based on this information. The X-Forwarded-For header is a list containing the IP addresses of every server the client has been proxied through up to this point. In the example above, we set this to the $proxy_add_x_forwarded_for variable. This variable takes the value of the original X-Forwarded-For header retrieved from the client and adds the Nginx server’s IP address to the end.

Of course, we could move the proxy_set_header directives out to the server or http context, allowing them to be referenced in more than one location:
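A sketch of that variant, reusing the location from the example above; the second location path and its backend URL are placeholders added purely for illustration:

# server context

proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

location /match/here {
    proxy_pass http://example.com/new/prefix;
}

location /another/path {
    # inherits the proxy_set_header directives above,
    # since this location defines none of its own
    proxy_pass http://example.com/other/backend;
}

Keep in mind that these headers are inherited from the enclosing context only by blocks that declare no proxy_set_header directives of their own; as soon as a location sets one, it must set all of the headers it needs.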