Commit 3ba4713: Uncapitalize headers
bschimke95 committed Jan 10, 2025 (1 parent: 9d8054f)

1 changed file with 10 additions and 10 deletions:
docs/src/snap/explanation/high-availability.md
```diff
@@ -1,4 +1,4 @@
-# High Availability
+# High availability
 
 High availability (HA) is a core feature of {{ product }}, ensuring that
 a Kubernetes cluster remains operational and resilient, even when nodes or
@@ -11,25 +11,25 @@
 more nodes independent of the deployment method. By distributing key components
 across multiple nodes, HA reduces the risk of downtime and service
 interruptions, offering built-in redundancy and fault tolerance.
 
-## Key Components of a Highly Available Cluster
+## Key components of a highly available cluster
 
 A highly available Kubernetes cluster exhibits the following characteristics:
 
-### 1. **Multiple Nodes for Redundancy**
+### 1. **Multiple nodes for redundancy**
 
 Having multiple nodes in the cluster ensures workload distribution and
-redundancy. If one node fails, workloads can be rescheduled on other available
-nodes without disrupting services. This node-level redundancy minimizes the
-impact of hardware or network failures.
+redundancy. If one node fails, workloads will be rescheduled automatically on
+other available nodes without disrupting services. This node-level redundancy
+minimizes the impact of hardware or network failures.
 
-### 2. **Control Plane Redundancy**
+### 2. **Control plane redundancy**
 
 The control plane manages the cluster’s state and operations. For high
 availability, the control plane components—such as the API server, scheduler,
 and controller-manager—are distributed across multiple nodes. This prevents a
 single point of failure from rendering the cluster inoperable.
 
-### 3. **Highly Available Datastore**
+### 3. **Highly available datastore**
 
 By default, {{ product }} uses **dqlite** to manage the Kubernetes
 cluster state. Dqlite leverages the Raft consensus algorithm for leader
@@ -38,7 +38,7 @@
 capabilities. When a leader node fails, a new leader is elected seamlessly
 without administrative intervention. This mechanism allows the cluster to
 remain operational even in the event of node failures. More details on
 replication and leader elections can be found in
-the [dqlite replication documentation][dqlite-replication].
+the [dqlite replication documentation][Dqlite-replication].
 
 <!-- LINKS -->
-[dqlite-replication]: https://dqlite.io/docs/explanation/replication
+[Dqlite-replication]: https://dqlite.io/docs/explanation/replication
```
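
The Raft-style failover that the changed paragraph describes (a leader fails, a surviving node wins an election without operator intervention) can be sketched as a toy simulation. This is illustrative only: the `Node` class, the 150-300 ms timeout range, and the unconditional vote-granting are hypothetical simplifications, not dqlite's actual implementation.

```python
import random


class Node:
    """A hypothetical cluster member for the simulation."""

    def __init__(self, node_id):
        self.id = node_id
        self.state = "follower"  # follower -> candidate -> leader
        self.term = 0


def elect_leader(nodes, rng):
    """Elect a leader among the nodes that are still up.

    In Raft, each follower waits a randomized election timeout; the
    first one to time out becomes a candidate and requests votes.
    Here we model that by picking the node with the shortest draw.
    """
    alive = [n for n in nodes if n.state != "down"]
    candidate = min(alive, key=lambda n: rng.uniform(150, 300))
    candidate.state = "candidate"
    candidate.term += 1

    votes = 1  # the candidate votes for itself
    for peer in alive:
        if peer is not candidate:
            peer.term = candidate.term
            votes += 1  # simplification: real Raft checks term and log freshness

    # A majority of the full cluster is required, so a partitioned
    # minority can never elect a competing leader.
    if votes > len(nodes) // 2:
        candidate.state = "leader"
        for peer in alive:
            if peer is not candidate:
                peer.state = "follower"
    return candidate


nodes = [Node(i) for i in range(3)]
rng = random.Random(42)

leader = elect_leader(nodes, rng)

# Simulate the leader failing: the survivors elect a new leader
# in a higher term, with no administrative intervention.
leader.state = "down"
new_leader = elect_leader(nodes, rng)
assert new_leader is not leader
```

The point of the sketch is the last four lines: as long as a majority of nodes survives, an election succeeds and the cluster keeps a single leader, which is why a three-node cluster tolerates one node failure.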
