docs: CAPI docs review (#957)
HomayoonAlimohammadi authored Jan 16, 2025
1 parent 90300ca commit 031b2b5
Showing 12 changed files with 352 additions and 180 deletions.
29 changes: 15 additions & 14 deletions docs/src/capi/explanation/capi-ck8s.md
@@ -11,7 +11,7 @@ other low-level tasks, allowing users to define their desired cluster
configuration using simple YAML manifests. This makes it easier to create and
manage clusters in a repeatable and consistent manner, regardless of the
underlying infrastructure. In this way a wide range of infrastructure providers
has been made available, including but not limited to Amazon Web Services
has been made available, including but not limited to MAAS, Amazon Web Services
(AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack.

CAPI also abstracts the provisioning and management of Kubernetes clusters
@@ -29,8 +29,8 @@ With {{product}} CAPI you can:
- rolling upgrades for HA clusters and worker nodes
- in-place upgrades for non-HA control planes and worker nodes

Please refer to the “Tutorial” section for concrete examples on CAPI deployments:

Please refer to the [tutorial section] for concrete examples on CAPI
deployments.

## CAPI architecture

@@ -57,21 +57,17 @@ resources necessary for creating and managing additional Kubernetes clusters.
It is important to note that the management cluster is not intended to support
any other workload, as the workloads are expected to run on the provisioned
clusters. As a result, the provisioned clusters are referred to as workload
clusters.

Typically, the management cluster runs in a separate environment from the
clusters it manages, such as a public cloud or an on-premises data centre. It
serves as a centralised location for managing the configuration, policies, and
security of multiple managed clusters. By leveraging the management cluster,
users can easily create and manage a fleet of Kubernetes clusters in a
consistent and repeatable manner.
clusters. While CAPI providers mostly live on the management cluster, it's
also possible to maintain them in the workload cluster.
Read more about this in the [upstream docs around pivoting].
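
Pivoting is typically driven by `clusterctl move`; a minimal sketch, assuming
the providers are already installed on the workload cluster and its kubeconfig
has been saved as `target-kubeconfig` (both names are placeholders):

```
# Move the CAPI resources from the current management cluster to the workload cluster.
clusterctl move --to-kubeconfig=./target-kubeconfig
```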

The {{product}} team maintains the two providers required for integrating with CAPI:
The {{product}} team maintains the two providers required for integrating
with CAPI:

- The Cluster API Bootstrap Provider {{product}} (**CABPCK**) responsible for
provisioning the nodes in the cluster and preparing them to be joined to the
Kubernetes control plane. When you use the CABPCK you define a Kubernetes
Cluster object that describes the desired state of the new cluster and
`Cluster` object that describes the desired state of the new cluster and
includes the number and type of nodes in the cluster, as well as any
additional configuration settings. The Bootstrap Provider then creates the
necessary resources in the Kubernetes API server to bring the cluster up to
@@ -84,11 +80,16 @@ The {{product}} team maintains the two providers required for integrating with CAPI:
underlying Kubernetes distribution. Its main tasks are to update the machine
state and to generate the kubeconfig file used for accessing the cluster. The
kubeconfig file is stored as a secret which the user can then retrieve using
the `clusterctl` command.
the `clusterctl` command. This component also handles the upgrade process for
the control plane nodes.
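
As an illustration of that last point, the kubeconfig can typically be
retrieved as follows once the workload cluster is provisioned; the cluster
name below is a placeholder:

```
# Fetch the kubeconfig secret generated for the workload cluster and use it with kubectl.
clusterctl get kubeconfig my-cluster > my-cluster-kubeconfig.yaml
kubectl --kubeconfig my-cluster-kubeconfig.yaml get nodes
```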

```{figure} ../../assets/capi-ck8s.svg
:width: 100%
:alt: Deployment of components

Deployment of components
```

<!-- LINKS -->
[tutorial section]: ./tutorial
[upstream docs around pivoting]: https://cluster-api.sigs.k8s.io/clusterctl/commands/move#pivot
4 changes: 2 additions & 2 deletions docs/src/capi/explanation/in-place-upgrades.md
@@ -1,4 +1,4 @@
# In-Place Upgrades
# In-place upgrades

Regularly upgrading the Kubernetes version of the machines in a cluster
is important. While rolling upgrades are a popular strategy, certain
@@ -49,7 +49,7 @@ For a complete list of annotations and their values please
refer to the [annotations reference page][4]. This explanation proceeds
to use abbreviations of the mentioned labels.

### Single Machine In-Place Upgrade Controller
### Single machine in-place upgrade controller

The Machine objects can be marked with the `upgrade-to` annotation to
trigger an in-place upgrade for that machine. While watching for changes
4 changes: 2 additions & 2 deletions docs/src/capi/explanation/index.md
@@ -1,8 +1,8 @@
# Explanation

For a better understanding of how {{product}} works and related
For a better understanding of how {{product}} CAPI works and related
topics such as security, these pages will help expand your knowledge and
help you get the most out of Kubernetes.
help you get the most out of Kubernetes and Cluster API.

```{toctree}
:hidden:
64 changes: 37 additions & 27 deletions docs/src/capi/howto/custom-ck8s.md
@@ -1,26 +1,52 @@
# Install custom {{product}} on machines

By default, the `version` field in the machine specifications will determine which {{product}} is downloaded from the `stable` risk level. While you can install different versions of the `stable` risk level by changing the `version` field, extra steps should be taken if you're willing to install a specific risk level.
This guide walks you through the process of installing custom {{product}} on workload cluster machines.
By default, the `version` field in the machine specifications will determine
which {{product}} **version** is downloaded from the `stable` risk level.
It then shows how to install {{product}}
with a specific **risk level**, **revision**, or from a **local path**.

## Prerequisites

To follow this guide, you will need:

- A Kubernetes management cluster with Cluster API and providers installed and configured.
- A Kubernetes management cluster with Cluster API and providers installed
and configured.
- A generated cluster spec manifest

Please refer to the [getting-started guide][getting-started] for further
details on the required setup.

In this guide we call the generated cluster spec manifest `cluster.yaml`.
This guide will call the generated cluster spec manifest `cluster.yaml`.
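
If you still need to produce this manifest, it is typically generated with
`clusterctl`; a minimal sketch, assuming an initialised management cluster and
infrastructure provider (the cluster name and Kubernetes version are
placeholders):

```
clusterctl generate cluster c1 --kubernetes-version v1.32.0 > cluster.yaml
```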

## Overwrite the existing `install.sh` script
## Using the configuration specification

{{product}} can be installed on machines using a specific `channel`,
`revision` or `localPath` by specifying the respective field in the spec
of the machine.

The installation of the {{product}} snap is done via running the `install.sh` script in the cloud-init.
While this file is automatically placed in every workload cluster machine which hard-coded content by {{product}} providers, you can overwrite this file to make sure your desired content is available in the script.
```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: CK8sControlPlane
...
spec:
  ...
  spec:
    channel: 1.xx-classic/candidate
    # Or
    revision: 1234
    # Or
    localPath: /path/to/snap/on/machine
```

Note that for the `localPath` to work, the snap must be available on the
machine at the specified path on boot.
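
The same fields can also be set for worker machines through their bootstrap
configuration. A sketch, assuming the `CK8sConfigTemplate` referenced by your
`MachineDeployment` exposes the same spec fields (the resource name is a
placeholder):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: CK8sConfigTemplate
metadata:
  name: worker-bootstrap-template
spec:
  template:
    spec:
      channel: 1.xx-classic/candidate
```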

## Overwrite the existing `install.sh` script

As an example, let's overwrite the `install.sh` for our control plane nodes. Inside the `cluster.yaml`, add the new file content:
The `install.sh` script is one of the steps that `cloud-init` runs on
machines, and it can be overwritten to install a custom {{product}}
snap. This is done by adding a `files` field with the appropriate `path` to
the `spec` of the machine.

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
@@ -41,26 +67,10 @@ spec:
Now the new control plane nodes that are created using this manifest will have
the `1.31-classic/candidate` {{product}} snap installed on them!
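
A rough sketch of the shape such a `files` override takes; the exact `path`
expected by the providers and the script contents here are assumptions, so
defer to the full example in this how-to:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: CK8sControlPlane
...
spec:
  ...
  spec:
    files:
      - path: /capi/scripts/install.sh   # hypothetical path of the default install script
        content: |
          #!/bin/bash
          # Install the snap from a custom channel instead of the default stable track.
          snap install k8s --classic --channel=1.31-classic/candidate
```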

## Use `preRunCommands`

As mentioned above, the `install.sh` script is responsible for installing {{product}} snap on machines. `preRunCommands` are executed before `install.sh`. You can also add an install command to the `preRunCommands` in order to install your desired {{product}} version.

```{note}
Installing the {{product}} snap via the `preRunCommands`, does not prevent the `install.sh` script from running. Instead, the installation process in the `install.sh` will fail with a message indicating that `k8s` is already installed.
This is not considered a standard way and overwriting the `install.sh` script is recommended.
```

Edit the `cluster.yaml` to add the installation command:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: CK8sControlPlane
...
spec:
  ...
  spec:
    preRunCommands:
      - snap install k8s --classic --channel=1.31-classic/candidate
[Use the configuration specification](#using-config-spec),
if you're only interested in installing a specific channel, revision, or
from the local path.
```

<!-- LINKS -->
2 changes: 1 addition & 1 deletion docs/src/capi/howto/external-etcd.md
@@ -83,7 +83,7 @@ Update the control plane resource `CK8sControlPlane` so that it is configured to
store the Kubernetes state in etcd. Add the following additional configuration
to the cluster template `cluster-template.yaml`:

```
```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: CK8sControlPlane
metadata:
17 changes: 10 additions & 7 deletions docs/src/capi/howto/migrate-management.md
@@ -1,20 +1,23 @@
# Migrate the management cluster

Management cluster migration is a really powerful operation in the cluster’s lifecycle as it allows admins
to move the management cluster in a more reliable substrate or perform maintenance tasks without disruptions.
In this guide we will walk through the migration of a management cluster.
Management cluster migration allows admins to move the management cluster
to a different substrate or perform maintenance tasks without disruptions.
This guide walks you through the migration of a management cluster.

## Prerequisites

In the [Cluster provisioning with CAPI and {{product}} tutorial] we showed how to provision a workloads cluster. Here, we start from the point where the workloads cluster is available and we will migrate the management cluster to the one cluster we just provisioned.
- A {{product}} CAPI management cluster with Cluster API and providers
installed and configured.

## Install the same set of providers to the provisioned cluster
## Configure the target cluster

Before migrating a cluster, we must make sure that both the target and source management clusters run the same version of providers (infrastructure, bootstrap, control plane). To do so, `clusterctl init` should be called against the target cluster:
Before migrating a cluster, ensure that both the target and source management
clusters run the same version of providers (infrastructure, bootstrap,
control plane). Run `clusterctl init` against the target cluster:

```
clusterctl get kubeconfig <provisioned-cluster> > targetconfig
clusterctl init --kubeconfig=$PWD/targetconfig --bootstrap ck8s --control-plane ck8s --infrastructure <infra-provider-of-choice>
clusterctl init --kubeconfig=$PWD/targetconfig --bootstrap canonical-kubernetes --control-plane canonical-kubernetes --infrastructure <infra-provider-of-choice>
```

## Move the cluster
6 changes: 3 additions & 3 deletions docs/src/capi/howto/refresh-certs.md
@@ -1,4 +1,4 @@
# Refreshing Workload Cluster Certificates
# Refreshing workload cluster certificates

This how-to will walk you through the steps to refresh the certificates for
both control plane and worker nodes in your {{product}} Cluster API cluster.
@@ -20,7 +20,7 @@ checking the `CK8sConfigTemplate` resource for the cluster to see if a
`BootstrapConfig` value was provided with the necessary certificates.
```

### Refresh Control Plane Node Certificates
### Refresh control plane node certificates

To refresh the certificates on control plane nodes, follow these steps for each
control plane node in your workload cluster:
@@ -65,7 +65,7 @@ the machine resource:
"machine.cluster.x-k8s.io/certificates-expiry": "2034-10-25T14:25:23-05:00"
```
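
To confirm the new expiry date, one option is to read that annotation back
from the machine resource on the management cluster; a small sketch, with the
machine name as a placeholder:

```
kubectl describe machine <machine-name> | grep certificates-expiry
```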

### Refresh Worker Node Certificates
### Refresh worker node certificates

To refresh the certificates on worker nodes, follow these steps for each worker
node in your workload cluster:
8 changes: 7 additions & 1 deletion docs/src/capi/howto/rollout-upgrades.md
@@ -20,6 +20,11 @@ details on the required setup.
This guide refers to the workload cluster as `c1` and its
kubeconfig as `c1-kubeconfig.yaml`.

```{note}
Rollout upgrades are recommended for HA clusters. For non-HA clusters, please
refer to the [in-place upgrade guide].
```

## Check the current cluster status

Prior to the upgrade, ensure that the management cluster is in a healthy
@@ -37,7 +42,6 @@ kubectl --kubeconfig c1-kubeconfig.yaml get nodes -o wide

```{note} For rollout upgrades, only the minor version should be updated.
```
<!-- TODO(ben): add reference to in-place upgrades once we have those docs. -->

## Update the control plane

@@ -122,3 +126,5 @@ kubectl get machines -A

<!-- LINKS -->
[getting-started]: ../tutorial/getting-started.md
[in-place upgrade guide]: ./in-place-upgrades.md
```
36 changes: 15 additions & 21 deletions docs/src/capi/howto/upgrade-providers.md
@@ -1,37 +1,31 @@
# Upgrading the providers of a management cluster

In this guide we will go through the process of upgrading providers of a management cluster.
This guide will walk you through the process of upgrading the
providers of a management cluster.

## Prerequisites

We assume we already have a management cluster and the infrastructure provider configured as described in the [Cluster provisioning with CAPI and {{product}} tutorial]. The selected infrastructure provider is AWS. We have not yet called `clusterctl init` to initialise the cluster.

## Initialise the cluster

To demonstrate the steps of upgrading the management cluster, we will begin by initialising a desired version of the {{product}} CAPI providers.

To set the version of the providers to be installed we use the following notation:

```
clusterctl init --bootstrap ck8s:v0.1.2 --control-plane ck8s:v0.1.2 --infrastructure <infra-provider-of-choice>
```
- A {{product}} CAPI management cluster with providers installed and
configured.

## Check for updates

With `clusterctl` we can check if there are any new versions of the running providers:
Check whether there are any new versions of your running
providers:

```
clusterctl upgrade plan
```

The output shows the existing version of each provider as well as the version that we can upgrade into:
The output shows the existing version of each provider as well
as the next available version:

```text
NAME                 NAMESPACE       TYPE                     CURRENT VERSION   NEXT VERSION
bootstrap-ck8s       cabpck-system   BootstrapProvider        v0.1.2            v0.2.0
control-plane-ck8s   cacpck-system   ControlPlaneProvider     v0.1.2            v0.2.0
cluster-api          capi-system     CoreProvider             v1.8.1            Already up to date
infrastructure-aws   capa-system     InfrastructureProvider   v2.6.1            Already up to date
NAME                   NAMESPACE       TYPE                     CURRENT VERSION   NEXT VERSION
canonical-kubernetes   cabpck-system   BootstrapProvider        v0.1.2            v0.2.0
canonical-kubernetes   cacpck-system   ControlPlaneProvider     v0.1.2            v0.2.0
cluster-api            capi-system     CoreProvider             v1.8.1            Already up to date
infrastructure-aws     capa-system     InfrastructureProvider   v2.6.1            Already up to date
```

## Trigger providers upgrade
@@ -45,8 +39,8 @@ clusterctl upgrade apply --contract v1beta1
To upgrade each provider one by one, issue:

```
clusterctl upgrade apply --bootstrap cabpck-system/ck8s:v0.2.0
clusterctl upgrade apply --control-plane cacpck-system/ck8s:v0.2.0
clusterctl upgrade apply --bootstrap cabpck-system/canonical-kubernetes:v0.2.0
clusterctl upgrade apply --control-plane cacpck-system/canonical-kubernetes:v0.2.0
```

<!-- LINKS -->

