Load Balancing
1. Abstract
- Revolves around the idea of distributing requests over a pool of resources so that no single point of service becomes over-"load"ed.
- Is placed between the client and a group of servers (server farm)
- see also Reverse Proxy and how load balancing compares to it in the context of client-server architecture
- A core component that defines a load balancer's behaviour is the way it distributes traffic, i.e. the underlying scheduling algorithm
- it can either be static (uses a predefined strategy, e.g. round robin) or dynamic (routes traffic based on the current state of the backend servers, e.g. least connections); see the sketch below
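A minimal sketch contrasting the two families, assuming a hypothetical pool of three backends and an in-memory connection counter (round robin as the static example, least connections as the dynamic one):

```python
import itertools

# Hypothetical backend pool; addresses and counters are illustrative.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
active_connections = {s: 0 for s in servers}

# Static strategy: round robin cycles through the pool in a fixed order,
# ignoring how busy each server currently is.
_round_robin = itertools.cycle(servers)

def pick_round_robin() -> str:
    return next(_round_robin)

# Dynamic strategy: least connections picks the server with the fewest
# in-flight requests, so routing reacts to the backends' current state.
def pick_least_connections() -> str:
    return min(servers, key=lambda s: active_connections[s])

def handle_request(dynamic: bool = True) -> str:
    server = pick_least_connections() if dynamic else pick_round_robin()
    active_connections[server] += 1   # request starts
    # ... proxy the request, then decrement the counter on completion ...
    return server
```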
2. Benefits
2.1. Availability
- Regular health checks (see the sketch after this list)
- automatic disaster recovery
- phased upgrade strategies without downtime
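A minimal sketch of the health-check loop mentioned above, assuming each backend exposes a hypothetical `/health` HTTP endpoint that returns 200 when the server is usable:

```python
import time
import urllib.request

# Hypothetical backend health endpoints; URLs are placeholders.
backends = ["http://10.0.0.1/health", "http://10.0.0.2/health"]
healthy = set(backends)

def run_health_checks(timeout: float = 2.0) -> None:
    """Probe each backend; only healthy ones stay eligible for traffic."""
    for url in backends:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = resp.status == 200
        except OSError:
            ok = False
        if ok:
            healthy.add(url)       # recovered backends rejoin the pool
        else:
            healthy.discard(url)   # failed backends are drained automatically

# A real balancer would run this on a schedule, e.g. every few seconds:
# while True:
#     run_health_checks()
#     time.sleep(5)
```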
2.2. Scalability
- avoids over"load"ing any single server
- traffic prediction capabilities for auto-scaling
- redundancy allows scaling confidently
2.3. Security
- monitor traffic and block malicious content: see Web Application Firewalls for an elaboration
- redirect attack traffic to multiple backend servers to minimize impact
2.4. Performance
- evenly distributed loads help improve response times
- can redirect requests to geographically closer servers
- ensure the reliability of physical and virtual computing resources
3. Overarching Types
3.1. Application Load Balancing
- application-level (Layer 7) communication protocols and their content (e.g. HTTP headers and URL paths) influence routing decisions; a sketch follows
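A minimal sketch of Layer 7 routing, assuming a hypothetical routing table that maps URL path prefixes to backend pools:

```python
# Hypothetical Layer 7 routing table: the URL path of each HTTP request
# decides which backend pool serves it.
ROUTES = {
    "/api/":    ["10.0.1.1:8080", "10.0.1.2:8080"],  # API servers
    "/static/": ["10.0.2.1:8080"],                   # static asset servers
}
DEFAULT_POOL = ["10.0.3.1:8080"]

def route_by_path(path: str) -> list[str]:
    """Return the backend pool whose path prefix matches the request."""
    for prefix, pool in ROUTES.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

# e.g. route_by_path("/api/users") -> ["10.0.1.1:8080", "10.0.1.2:8080"]
```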
3.2. Network Load Balancing
- Layer 4 (TCP/UDP) traffic data, i.e. IP addresses and ports, is examined to influence routing decisions; a sketch follows
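A minimal sketch of one common Layer 4 technique: hashing the connection 5-tuple so that every packet of a given connection lands on the same backend (addresses here are placeholders):

```python
import hashlib

# Hypothetical pool; a Layer 4 balancer only sees addresses and ports,
# never HTTP paths or headers.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def pick_by_connection_tuple(src_ip: str, src_port: int,
                             dst_ip: str, dst_port: int,
                             protocol: str = "TCP") -> str:
    """Hash the 5-tuple so one connection always maps to one backend."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return servers[digest % len(servers)]

# e.g. pick_by_connection_tuple("203.0.113.7", 51324, "198.51.100.10", 443)
```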
3.3. Global Server Load Balancing
- Done across geographically distributed servers
- traffic redirected to a geographically closer server
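A minimal sketch of that geographic decision, assuming a hypothetical region catalogue with coordinates and using great-circle distance as the proximity metric (real GSLB setups often use measured latency instead):

```python
import math

# Hypothetical region catalogue; coordinates and endpoints are illustrative.
REGIONS = {
    "us-east":  {"lat": 39.0, "lon": -77.5, "endpoint": "us-east.example.com"},
    "eu-west":  {"lat": 53.3, "lon": -6.3,  "endpoint": "eu-west.example.com"},
    "ap-south": {"lat": 19.1, "lon": 72.9,  "endpoint": "ap-south.example.com"},
}

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_region(client_lat: float, client_lon: float) -> str:
    """Send the client to the geographically closest region's endpoint."""
    best = min(REGIONS.values(),
               key=lambda r: haversine_km(client_lat, client_lon, r["lat"], r["lon"]))
    return best["endpoint"]

# e.g. nearest_region(48.8, 2.3) -> "eu-west.example.com"
```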
3.4. DNS Load Balancing
- traffic can be distributed across multiple servers (multiple IP addresses published for one domain name, to be specific) for higher availability and performance
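A minimal sketch from the client's perspective: when a name has several published A records, the resolver returns all of them and the client can pick one (hostnames here are placeholders):

```python
import random
import socket

def resolve_all(hostname: str, port: int = 443) -> list[str]:
    """Return every IPv4 address the DNS answer contains for this name."""
    infos = socket.getaddrinfo(hostname, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def pick_address(hostname: str) -> str:
    """Client-side round robin: pick one of the advertised addresses."""
    return random.choice(resolve_all(hostname))

# e.g. resolve_all("example.com") may return several A records when the
# operator has published multiple addresses for the same name.
```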
Tags::network:web: