Fault Tolerance
Also see Chaos Engineering
- The ability of a system to continue operating correctly even when some of its components fail.
Fault tolerance is essential for critical systems where downtime or data loss can have severe consequences, such as financial systems, healthcare applications, and online services. It helps ensure that these systems remain operational and accessible even in the face of unexpected failures.
1. Purpose
- To prevent failures from causing complete system downtime or data loss.
2. Types of Failures
3. Failure Detection
- Identifying when a component (e.g., server, network link) in a distributed system stops functioning correctly.
- Essential for triggering recovery mechanisms like failover, replication, or reconfiguration.
3.1. Challenges
- Network delays and partitions can make it difficult to distinguish between slow responses and actual failures.
- False positives (mistakenly declaring a node as failed) and false negatives (failing to detect a real failure) can have serious consequences.
3.2. Types of Failures
- Crash Failures: A node stops responding completely.
- Omission Failures: A node fails to send or receive messages.
- Byzantine Failures: A node behaves arbitrarily or maliciously.
3.3. Failure Detection Mechanisms
- Heartbeating: Nodes periodically send "I'm alive" messages to a central monitor or each other. Absence of heartbeats indicates a potential failure.
- Ping/Echo: A node sends a ping message, and the other node replies with an echo. Lack of response suggests a failure.
- Gossip Protocols: Nodes exchange information about the health of other nodes, spreading failure information quickly.
- Timeout-Based Detection: If a node doesn't respond within a certain time, it's assumed to have failed.
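Heartbeating and timeout-based detection can be combined in a few lines. The sketch below is a minimal illustration (node names and the 3-second timeout are arbitrary assumptions, not from any particular system):

```python
import time

class HeartbeatMonitor:
    """Timeout-based failure detector: a node is suspected to have
    failed if no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_seen = {}  # node name -> time of last heartbeat

    def record_heartbeat(self, node, now=None):
        # Called whenever an "I'm alive" message arrives from `node`.
        self.last_seen[node] = time.monotonic() if now is None else now

    def suspected(self, node, now=None):
        # A node we have never heard from, or one that has been silent
        # longer than the timeout, is suspected to have failed.
        now = time.monotonic() if now is None else now
        last = self.last_seen.get(node)
        return last is None or (now - last) > self.timeout

monitor = HeartbeatMonitor(timeout=3.0)
monitor.record_heartbeat("node-a", now=100.0)
print(monitor.suspected("node-a", now=102.0))  # False: heartbeat is recent
print(monitor.suspected("node-a", now=104.0))  # True: heartbeat overdue
```

Note that "suspected" is the right word: a slow network can delay heartbeats just as easily as a crash can stop them, which is exactly the false-positive risk described above.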
3.4. Failure Detector Properties
- Completeness: Every actual failure is eventually detected.
- Accuracy: No non-faulty node is incorrectly suspected of failing.
- Speed: Failures are detected quickly.
3.5. Practical Considerations
- Trade-offs must be made between accuracy and speed: aggressive timeouts detect failures faster but produce more false positives.
- Timeout values and other parameters need to be tuned to network conditions and application requirements.
- Failure detection is often probabilistic: it provides a likelihood of failure rather than absolute certainty.
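One way to make the probabilistic nature explicit is an accrual-style detector, which outputs a suspicion level instead of a yes/no verdict. The sketch below is a heavily simplified illustration of that idea (it assumes exponentially distributed heartbeat intervals, which the real phi-accrual algorithm does not):

```python
import math

class AccrualDetector:
    """Accrual-style detector sketch: returns a suspicion level `phi`
    that grows the longer a node stays silent, instead of a boolean.
    Simplifying assumption: heartbeat inter-arrival times follow an
    exponential distribution with the observed mean."""

    def __init__(self):
        self.intervals = []      # observed gaps between heartbeats
        self.last_arrival = None

    def heartbeat(self, now):
        if self.last_arrival is not None:
            self.intervals.append(now - self.last_arrival)
        self.last_arrival = now

    def phi(self, now):
        if self.last_arrival is None or not self.intervals:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        elapsed = now - self.last_arrival
        # P(a heartbeat is still coming) under the exponential model;
        # phi = -log10 of that probability, so higher phi = more suspicious.
        p_later = math.exp(-elapsed / mean)
        return -math.log10(p_later)

d = AccrualDetector()
for t in (0.0, 1.0, 2.0, 3.0):   # regular heartbeats, 1s apart
    d.heartbeat(t)
print(round(d.phi(3.5), 2))  # low suspicion shortly after a heartbeat
print(round(d.phi(8.0), 2))  # suspicion grows with the silence
```

The application then picks a phi threshold, which is exactly the accuracy-vs-speed trade-off: a low threshold reacts fast but mislabels slow nodes, a high one is cautious but slow.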
4. Strategies
- Redundancy: Having multiple copies of critical components (e.g., servers, disks).
- Replication: Maintaining copies of data on multiple nodes so it survives the loss of any one of them.
- Failover: Automatically switching to a backup component when a primary one fails.
- Error Detection and Correction: Identifying and fixing errors in data or software.
- Load Balancing: Distributing workload to prevent overload and improve performance.
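Redundancy and failover work together: the backups exist so that a client can switch to one when the primary fails. A minimal sketch (the server names and the `call` stub are hypothetical stand-ins for real network calls):

```python
DOWN = {"primary"}  # simulate a failed primary
SERVERS = ["primary", "replica-1", "replica-2"]  # redundant copies

def call(server, request):
    # Hypothetical stand-in for a real RPC/HTTP call.
    if server in DOWN:
        raise ConnectionError(f"{server} is unreachable")
    return f"{server} handled {request}"

def with_failover(request):
    # Failover: try each redundant server in order, switching to a
    # backup as soon as the current one is detected as failed.
    last_error = None
    for server in SERVERS:
        try:
            return call(server, request)
        except ConnectionError as err:
            last_error = err  # treat the error as a failure signal
    raise RuntimeError("all replicas failed") from last_error

print(with_failover("GET /health"))  # → "replica-1 handled GET /health"
```

Real systems usually add retries, backoff, and health checks on top of this loop, but the core idea is the same.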
5. Benefits
- Increased Reliability: Reduces the risk of system failures and downtime.
- High Availability: Ensures critical applications and services remain accessible even with failures.
- Data Protection: Safeguards against data loss due to hardware or software malfunctions.
- Improved Performance: Can enhance performance by distributing workload and preventing bottlenecks.
6. Instances
6.1. RAID (Redundant Array of Independent Disks)
Protects against disk failures by storing data redundantly across multiple disks.
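The redundancy in parity-based RAID levels (e.g., RAID 4/5) comes from a simple property of XOR: the parity block is the XOR of the data blocks, so any single lost block equals the XOR of everything that survived. A small sketch with made-up block contents:

```python
def xor_blocks(a, b):
    # XOR two equal-length blocks byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

# Three data "disks" (contents are arbitrary example bytes).
disk1 = b"\x01\x02\x03\x04"
disk2 = b"\x10\x20\x30\x40"
disk3 = b"\xaa\xbb\xcc\xdd"

# Parity disk = XOR of all data disks.
parity = xor_blocks(xor_blocks(disk1, disk2), disk3)

# Simulate losing disk2: XOR of the survivors and the parity
# reconstructs it, because XORing a value in twice cancels it out.
rebuilt = xor_blocks(xor_blocks(disk1, disk3), parity)
print(rebuilt == disk2)  # True
```

This is why single-parity RAID tolerates exactly one disk failure: with two blocks missing, the XOR equation has two unknowns and cannot be solved.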
6.2. Clustering
Groups servers to provide high availability and failover capabilities.
6.3. Distributed Databases
Replicate data across multiple nodes so it remains available when individual nodes fail, using coordination protocols to keep the copies consistent.
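A common way replicated databases balance consistency and availability is quorum replication: with N replicas, writes must reach W of them and reads must consult R of them, where R + W > N guarantees every read quorum overlaps the latest write quorum. A toy in-memory sketch (the replica layout and key names are illustrative assumptions):

```python
# N replicas; R + W > N ensures read and write quorums overlap.
N, W, R = 3, 2, 2
replicas = [{} for _ in range(N)]  # each replica: key -> (version, value)

def write(key, value, version):
    # A write succeeds once W replicas acknowledge it. For simplicity
    # we assume the first W replicas are reachable.
    acks = 0
    for rep in replicas[:W]:
        rep[key] = (version, value)
        acks += 1
    return acks >= W

def read(key):
    # Consult R replicas and return the value with the highest version,
    # which must include the latest committed write.
    versions = [rep[key] for rep in replicas[:R] if key in rep]
    return max(versions)[1] if versions else None

write("balance", 100, version=1)
write("balance", 250, version=2)
print(read("balance"))  # → 250
```

Lowering W or R (so that R + W <= N) trades consistency for availability and latency: operations need fewer nodes, but a read may miss the latest write.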