diff --git a/docs/src/capi/explanation/capi-ck8s.md b/docs/src/capi/explanation/capi-ck8s.md index 08dd47fd7..67f3f77f2 100644 --- a/docs/src/capi/explanation/capi-ck8s.md +++ b/docs/src/capi/explanation/capi-ck8s.md @@ -11,7 +11,7 @@ other low-level tasks, allowing users to define their desired cluster configuration using simple YAML manifests. This makes it easier to create and manage clusters in a repeatable and consistent manner, regardless of the underlying infrastructure. In this way a wide range of infrastructure providers -has been made available, including but not limited to Amazon Web Services +has been made available, including but not limited to MAAS, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack. CAPI also abstracts the provisioning and management of Kubernetes clusters @@ -29,8 +29,8 @@ With {{product}} CAPI you can: - rolling upgrades for HA clusters and worker nodes - in-place upgrades for non-HA control planes and worker nodes -Please refer to the “Tutorial” section for concrete examples on CAPI deployments: - +Please refer to the [tutorial section] for concrete examples on CAPI +deployments. ## CAPI architecture @@ -57,21 +57,17 @@ resources necessary for creating and managing additional Kubernetes clusters. It is important to note that the management cluster is not intended to support any other workload, as the workloads are expected to run on the provisioned clusters. As a result, the provisioned clusters are referred to as workload -clusters. - -Typically, the management cluster runs in a separate environment from the -clusters it manages, such as a public cloud or an on-premises data centre. It -serves as a centralised location for managing the configuration, policies, and -security of multiple managed clusters. By leveraging the management cluster, -users can easily create and manage a fleet of Kubernetes clusters in a -consistent and repeatable manner. +clusters. 
While CAPI providers mostly live on the management cluster, it's +also possible to maintain them in the workload cluster. +Read more about this in the [upstream docs around pivoting]. -The {{product}} team maintains the two providers required for integrating with CAPI: +The {{product}} team maintains the two providers required for integrating +with CAPI: - The Cluster API Bootstrap Provider {{product}} (**CABPCK**) responsible for provisioning the nodes in the cluster and preparing them to be joined to the Kubernetes control plane. When you use the CABPCK you define a Kubernetes - Cluster object that describes the desired state of the new cluster and + `Cluster` object that describes the desired state of the new cluster and includes the number and type of nodes in the cluster, as well as any additional configuration settings. The Bootstrap Provider then creates the necessary resources in the Kubernetes API server to bring the cluster up to @@ -84,7 +80,8 @@ The {{product}} team maintains the two providers required for integrating with C underlying Kubernetes distribution. Its main tasks are to update the machine state and to generate the kubeconfig file used for accessing the cluster. The kubeconfig file is stored as a secret which the user can then retrieve using - the `clusterctl` command. + the `clusterctl` command. This component also handles the upgrade process for + the control plane nodes. 
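To make the relationship between these objects concrete, here is a minimal sketch of how a CAPI `Cluster` ties the two {{product}} providers' resources to an infrastructure object. This is an illustrative fragment, not a complete manifest: the object names are made up, and `AWSCluster` stands in for whichever infrastructure provider you use.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example-cluster        # hypothetical name
spec:
  controlPlaneRef:
    # Managed by the CACPCK control plane provider
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: CK8sControlPlane
    name: example-control-plane
  infrastructureRef:
    # Managed by the chosen infrastructure provider (AWS shown as an example)
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: example-cluster
```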
```{figure} ../../assets/capi-ck8s.svg :width: 100% @@ -92,3 +89,7 @@ The {{product}} team maintains the two providers required for integrating with C Deployment of components ``` + + +[tutorial section]: ./tutorial +[upstream docs around pivoting]: https://cluster-api.sigs.k8s.io/clusterctl/commands/move#pivot diff --git a/docs/src/capi/explanation/in-place-upgrades.md b/docs/src/capi/explanation/in-place-upgrades.md index 5196fd7d1..e41c83c78 100644 --- a/docs/src/capi/explanation/in-place-upgrades.md +++ b/docs/src/capi/explanation/in-place-upgrades.md @@ -1,4 +1,4 @@ -# In-Place Upgrades +# In-place upgrades Regularly upgrading the Kubernetes version of the machines in a cluster is important. While rolling upgrades are a popular strategy, certain @@ -49,7 +49,7 @@ For a complete list of annotations and their values please refer to the [annotations reference page][4]. This explanation proceeds to use abbreviations of the mentioned labels. -### Single Machine In-Place Upgrade Controller +### Single machine in-place upgrade controller The Machine objects can be marked with the `upgrade-to` annotation to trigger an in-place upgrade for that machine. While watching for changes diff --git a/docs/src/capi/explanation/index.md b/docs/src/capi/explanation/index.md index b76885878..155fe19bd 100644 --- a/docs/src/capi/explanation/index.md +++ b/docs/src/capi/explanation/index.md @@ -1,8 +1,8 @@ # Explanation -For a better understanding of how {{product}} works and related +For a better understanding of how {{product}} CAPI works and related topics such as security, these pages will help expand your knowledge and -help you get the most out of Kubernetes. +help you get the most out of Kubernetes and Cluster API. 
```{toctree} :hidden: diff --git a/docs/src/capi/howto/custom-ck8s.md b/docs/src/capi/howto/custom-ck8s.md index 6a68fc1ba..99bd26f56 100644 --- a/docs/src/capi/howto/custom-ck8s.md +++ b/docs/src/capi/howto/custom-ck8s.md @@ -1,26 +1,52 @@ # Install custom {{product}} on machines -By default, the `version` field in the machine specifications will determine which {{product}} is downloaded from the `stable` risk level. While you can install different versions of the `stable` risk level by changing the `version` field, extra steps should be taken if you're willing to install a specific risk level. -This guide walks you through the process of installing custom {{product}} on workload cluster machines. +By default, the `version` field in the machine specifications will determine +which {{product}} **version** is downloaded from the `stable` risk level. +This guide walks you through the process of installing {{product}} +with a specific **risk level**, **revision**, or from a **local path**. ## Prerequisites To follow this guide, you will need: -- A Kubernetes management cluster with Cluster API and providers installed and configured. +- A Kubernetes management cluster with Cluster API and providers installed +and configured. - A generated cluster spec manifest Please refer to the [getting-started guide][getting-started] for further details on the required setup. -In this guide we call the generated cluster spec manifest `cluster.yaml`. +This guide will call the generated cluster spec manifest `cluster.yaml`. -## Overwrite the existing `install.sh` script +## Using the configuration specification + +{{product}} can be installed on machines using a specific `channel`, +`revision` or `localPath` by specifying the respective field in the spec +of the machine. -The installation of the {{product}} snap is done via running the `install.sh` script in the cloud-init. 
-While this file is automatically placed in every workload cluster machine which hard-coded content by {{product}} providers, you can overwrite this file to make sure your desired content is available in the script. +```yaml +apiVersion: controlplane.cluster.x-k8s.io/v1beta2 +kind: CK8sControlPlane +... +spec: + ... + spec: + channel: 1.xx-classic/candidate + # Or + revision: 1234 + # Or + localPath: /path/to/snap/on/machine +``` + +Note that for the `localPath` to work the snap must be available on the +machine at the specified path on boot. + +## Overwrite the existing `install.sh` script -As an example, let's overwrite the `install.sh` for our control plane nodes. Inside the `cluster.yaml`, add the new file content: +Running the `install.sh` script is one of the steps that `cloud-init` performs +on machines and can be overwritten to install a custom {{product}} +snap. This can be done by adding a `files` field to the +`spec` of the machine with a specific `path`. ```yaml apiVersion: controlplane.cluster.x-k8s.io/v1beta2 @@ -41,26 +67,10 @@ spec: Now the new control plane nodes that are created using this manifest will have the `1.31-classic/candidate` {{product}} snap installed on them! -## Use `preRunCommands` - -As mentioned above, the `install.sh` script is responsible for installing {{product}} snap on machines. `preRunCommands` are executed before `install.sh`. You can also add an install command to the `preRunCommands` in order to install your desired {{product}} version. - ```{note} -Installing the {{product}} snap via the `preRunCommands`, does not prevent the `install.sh` script from running. Instead, the installation process in the `install.sh` will fail with a message indicating that `k8s` is already installed. -This is not considered a standard way and overwriting the `install.sh` script is recommended. 
-``` - -Edit the `cluster.yaml` to add the installation command: - -```yaml -apiVersion: controlplane.cluster.x-k8s.io/v1beta2 -kind: CK8sControlPlane -... -spec: - ... - spec: - preRunCommands: - - snap install k8s --classic --channel=1.31-classic/candidate +[Use the configuration specification](#using-the-configuration-specification), +if you're only interested in installing a specific channel, revision, or +from a local path. ``` diff --git a/docs/src/capi/howto/external-etcd.md b/docs/src/capi/howto/external-etcd.md index a77600c68..8e25f0364 100644 --- a/docs/src/capi/howto/external-etcd.md +++ b/docs/src/capi/howto/external-etcd.md @@ -83,7 +83,7 @@ Update the control plane resource `CK8sControlPlane` so that it is configured to store the Kubernetes state in etcd. Add the following additional configuration to the cluster template `cluster-template.yaml`: -``` +```yaml apiVersion: controlplane.cluster.x-k8s.io/v1beta2 kind: CK8sControlPlane metadata: diff --git a/docs/src/capi/howto/migrate-management.md b/docs/src/capi/howto/migrate-management.md index f902a0731..e7c4113c1 100644 --- a/docs/src/capi/howto/migrate-management.md +++ b/docs/src/capi/howto/migrate-management.md @@ -1,20 +1,23 @@ # Migrate the management cluster -Management cluster migration is a really powerful operation in the cluster’s lifecycle as it allows admins -to move the management cluster in a more reliable substrate or perform maintenance tasks without disruptions. -In this guide we will walk through the migration of a management cluster. +Management cluster migration allows admins to move the management cluster +to a different substrate or perform maintenance tasks without disruptions. +This guide walks you through the migration of a management cluster. ## Prerequisites -In the [Cluster provisioning with CAPI and {{product}} tutorial] we showed how to provision a workloads cluster. 
Here, we start from the point where the workloads cluster is available and we will migrate the management cluster to the one cluster we just provisioned. +- A {{product}} CAPI management cluster with Cluster API and providers +installed and configured. -## Install the same set of providers to the provisioned cluster +## Configure the target cluster -Before migrating a cluster, we must make sure that both the target and source management clusters run the same version of providers (infrastructure, bootstrap, control plane). To do so, `clusterctl init` should be called against the target cluster: +Before migrating a cluster, ensure that both the target and source management +clusters run the same version of providers (infrastructure, bootstrap, +control plane). Run `clusterctl init` against the target cluster: ``` clusterctl get kubeconfig > targetconfig -clusterctl init --kubeconfig=$PWD/targetconfig --bootstrap ck8s --control-plane ck8s --infrastructure +clusterctl init --kubeconfig=$PWD/targetconfig --bootstrap canonical-kubernetes --control-plane canonical-kubernetes --infrastructure ``` ## Move the cluster diff --git a/docs/src/capi/howto/refresh-certs.md b/docs/src/capi/howto/refresh-certs.md index 9f8d3347d..f51b244fc 100644 --- a/docs/src/capi/howto/refresh-certs.md +++ b/docs/src/capi/howto/refresh-certs.md @@ -1,4 +1,4 @@ -# Refreshing Workload Cluster Certificates +# Refreshing workload cluster certificates This how-to will walk you through the steps to refresh the certificates for both control plane and worker nodes in your {{product}} Cluster API cluster. @@ -20,7 +20,7 @@ checking the `CK8sConfigTemplate` resource for the cluster to see if a `BootstrapConfig` value was provided with the necessary certificates. 
``` -### Refresh Control Plane Node Certificates +### Refresh control plane node certificates To refresh the certificates on control plane nodes, follow these steps for each control plane node in your workload cluster: @@ -65,7 +65,7 @@ the machine resource: "machine.cluster.x-k8s.io/certificates-expiry": "2034-10-25T14:25:23-05:00" ``` -### Refresh Worker Node Certificates +### Refresh worker node certificates To refresh the certificates on worker nodes, follow these steps for each worker node in your workload cluster: diff --git a/docs/src/capi/howto/rollout-upgrades.md b/docs/src/capi/howto/rollout-upgrades.md index 2dfec4304..8fdc0b679 100644 --- a/docs/src/capi/howto/rollout-upgrades.md +++ b/docs/src/capi/howto/rollout-upgrades.md @@ -20,6 +20,11 @@ details on the required setup. This guide refers to the workload cluster as `c1` and its kubeconfig as `c1-kubeconfig.yaml`. +```{note} +Rollout upgrades are recommended for HA clusters. For non-HA clusters, please +refer to the [in-place upgrade guide]. +``` + ## Check the current cluster status Prior to the upgrade, ensure that the management cluster is in a healthy @@ -37,7 +42,6 @@ kubectl --kubeconfig c1-kubeconfig.yaml get nodes -o wide ```{note} For rollout upgrades, only the minor version should be updated. ``` - ## Update the control plane @@ -122,3 +126,4 @@ kubectl get machines -A [getting-started]: ../tutorial/getting-started.md +[in-place upgrade guide]: ./in-place-upgrades.md diff --git a/docs/src/capi/howto/upgrade-providers.md b/docs/src/capi/howto/upgrade-providers.md index 5188c2413..03ed51afd 100644 --- a/docs/src/capi/howto/upgrade-providers.md +++ b/docs/src/capi/howto/upgrade-providers.md @@ -1,37 +1,31 @@ # Upgrading the providers of a management cluster -In this guide we will go through the process of upgrading providers of a management cluster. +This guide will walk you through the process of upgrading the +providers of a management cluster. 
## Prerequisites -We assume we already have a management cluster and the infrastructure provider configured as described in the [Cluster provisioning with CAPI and {{product}} tutorial]. The selected infrastructure provider is AWS. We have not yet called `clusterctl init` to initialise the cluster. - -## Initialise the cluster - -To demonstrate the steps of upgrading the management cluster, we will begin by initialising a desired version of the {{product}} CAPI providers. - -To set the version of the providers to be installed we use the following notation: - -``` -clusterctl init --bootstrap ck8s:v0.1.2 --control-plane ck8s:v0.1.2 --infrastructure -``` +- A {{product}} CAPI management cluster with installed and +configured providers. ## Check for updates -With `clusterctl` we can check if there are any new versions of the running providers: +Check whether there are any new versions of your running +providers: ``` clusterctl upgrade plan ``` -The output shows the existing version of each provider as well as the version that we can upgrade into: +The output shows the existing version of each provider as well +as the next available version: ```text -NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION -bootstrap-ck8s cabpck-system BootstrapProvider v0.1.2 v0.2.0 -control-plane-ck8s cacpck-system ControlPlaneProvider v0.1.2 v0.2.0 -cluster-api capi-system CoreProvider v1.8.1 Already up to date -infrastructure-aws capa-system InfrastructureProvider v2.6.1 Already up to date +NAME NAMESPACE TYPE CURRENT VERSION NEXT VERSION +canonical-kubernetes cabpck-system BootstrapProvider v0.1.2 v0.2.0 +canonical-kubernetes cacpck-system ControlPlaneProvider v0.1.2 v0.2.0 +cluster-api capi-system CoreProvider v1.8.1 Already up to date +infrastructure-aws capa-system InfrastructureProvider v2.6.1 Already up to date ``` ## Trigger providers upgrade @@ -45,8 +39,8 @@ clusterctl upgrade apply --contract v1beta1 To upgrade each provider one by one, issue: ``` -clusterctl upgrade apply 
--bootstrap cabpck-system/ck8s:v0.2.0 -clusterctl upgrade apply --control-plane cacpck-system/ck8s:v0.2.0 +clusterctl upgrade apply --bootstrap cabpck-system/canonical-kubernetes:v0.2.0 +clusterctl upgrade apply --control-plane cacpck-system/canonical-kubernetes:v0.2.0 ``` diff --git a/docs/src/capi/reference/annotations.md b/docs/src/capi/reference/annotations.md index 3d540b446..818d4e0d1 100644 --- a/docs/src/capi/reference/annotations.md +++ b/docs/src/capi/reference/annotations.md @@ -7,17 +7,33 @@ pairs that can be used to reflect additional metadata for CAPI resources. The following annotations can be set on CAPI `Machine` resources. -### In-place Upgrade +### In-place upgrade -| Name | Description | Values | Set by user | -|-----------------------------------------------|------------------------------------------------------|------------------------------|-------------| -| `v1beta2.k8sd.io/in-place-upgrade-to` | Trigger a Kubernetes version upgrade on that machine | snap version e.g.:
- `localPath=/full/path/to/k8s.snap`
- `revision=123`
- `channel=latest/edge` | yes | -| `v1beta2.k8sd.io/in-place-upgrade-status` | The status of the version upgrade | in-progress\|done\|failed | no | -| `v1beta2.k8sd.io/in-place-upgrade-release` | The current version on the machine | snap version e.g.:
- `localPath=/full/path/to/k8s.snap`
- `revision=123`
- `channel=latest/edge` | no | -| `v1beta2.k8sd.io/in-place-upgrade-change-id` | The ID of the currently running upgrade | ID string | no | +| Name | Description | Values | Set by user | +|-----------------------------------------------------------|------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|-------------| +| `v1beta2.k8sd.io/in-place-upgrade-to` | Trigger a Kubernetes version upgrade on that machine | snap version e.g.:
- `localPath=/full/path/to/k8s.snap`
- `revision=123`
- `channel=latest/edge` | yes | +| `v1beta2.k8sd.io/in-place-upgrade-status` | The status of the version upgrade | in-progress\|done\|failed | no | +| `v1beta2.k8sd.io/in-place-upgrade-release` | The current version on the machine | snap version e.g.:
- `localPath=/full/path/to/k8s.snap`
- `revision=123`
- `channel=latest/edge` | no | +| `v1beta2.k8sd.io/in-place-upgrade-change-id` | The ID of the currently running upgrade | ID string | no | +| `v1beta2.k8sd.io/in-place-upgrade-last-failed-attempt-at` | The time of the last failed upgrade attempt | RFC1123Z timestamp | no | -### Refresh Certificates +### Refresh certificates -| Name | Description | Values | Set by user | -|-----------------------------------------------|------------------------------------------------------|------------------------------|-------------| -| `v1beta2.k8sd.io/refresh-certificates` | The requested duration (TTL) that the refreshed certificates should expire in. | Duration (TTL) string. A number followed by a unit e.g.: `1mo`, `1y`, `90d`
Allowed units: Any unit supported by `time.ParseDuration` as well as `y` (year), `mo` (month) and `d` (day). | yes | +| Name | Description | Values | Set by user | +|-----------------------------------------------|--------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------| +| `v1beta2.k8sd.io/refresh-certificates` | The requested duration (TTL) that the refreshed certificates should expire in. | Duration (TTL) string. A number followed by a unit e.g.: `1mo`, `1y`, `90d`
Allowed units: Any unit supported by `time.ParseDuration` as well as `y` (year), `mo` (month) and `d` (day). | yes | +| `v1beta2.k8sd.io/refresh-certificates-status` | The status of the certificate refresh request. | in-progress\|done\|failed | no | + +### Certificates expiry + +| Name | Description | Values | Set by user | +|------------------------------------------------|------------------------------------------------|-------------------|-------------| +| `machine.cluster.x-k8s.io/certificates-expiry` | Indicates the expiry date of the certificates. | RFC3339 timestamp | no | + +### Remediation + +| Name | Description | Values | Set by user | +|-----------------------------------------------------------|---------------------------------------------------------------|-------------|-------------| +| `controlplane.cluster.x-k8s.io/ck8s-server-configuration` | Stores the json-marshalled string of KCP ClusterConfiguration | JSON string | no | +| `controlplane.cluster.x-k8s.io/remediation-in-progress` | Keeps track that a KCP remediation is in progress | JSON string | no | +| `controlplane.cluster.x-k8s.io/remediation-for` | Links a new machine to the unhealthy machine it is replacing | JSON string | no | diff --git a/docs/src/capi/reference/configs.md b/docs/src/capi/reference/configs.md index a46000d2c..6687aac2e 100644 --- a/docs/src/capi/reference/configs.md +++ b/docs/src/capi/reference/configs.md @@ -1,11 +1,11 @@ -# Providers Configurations +# Providers configurations {{product}} bootstrap and control plane providers (CABPCK and CACPCK) can be configured to aid the cluster admin in reaching the desired state for the workload cluster. In this section we will go through different configurations that each one of these providers expose. -## Common Configurations +## Common configurations The following configurations are available for both bootstrap and control plane providers. 
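Before the per-field reference below, it may help to see where these common options actually live. The following is a hedged, illustrative fragment of a bootstrap config template — the resource name is made up, the `apiVersion` follows the `v1beta2` group used elsewhere in these docs, and the nesting under `spec.template.spec` is an assumption based on the standard CAPI template shape:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: CK8sConfigTemplate
metadata:
  name: example-worker-config    # hypothetical name
spec:
  template:
    spec:
      # Common configuration fields documented in this section
      channel: 1.32-classic/stable
      bootCommands:
        - echo "running early in boot"
      postRunCommands:
        - echo "k8s-snap setup finished"
```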
@@ -27,11 +27,11 @@ To install a specific track or risk level, see [Install custom {{product}} on machines] guide. ``` -**Example Usage:** +**Example usage:** ```yaml spec: - version: 1.30 +  version: "1.30" ``` ### `files` @@ -54,35 +54,35 @@ existing files. | `encoding` | `string` | Encoding of the file to create. One of `base64`, `gzip` and `gzip+base64` | `""` | | `owner` | `string` | Owner of the file to create, e.g. "root:root" | `""` | -**Example Usage:** +**Example usage:** - Using `content`: ```yaml spec: - files: - path: "/path/to/my-file" - content: | - #!/bin/bash -xe - echo "hello from my-file - permissions: "0500" - owner: root:root + files: + path: "/path/to/my-file" + content: | + #!/bin/bash -xe + echo "hello from my-file" + permissions: "0500" + owner: root:root ``` - Using `contentFrom`: ```yaml spec: - files: - path: "/path/to/my-file" - contentFrom: - secret: - # Name of the secret in the CK8sBootstrapConfig's namespace to use. - name: my-secret - # Key is the key in the secret's data map for this value. - key: my-key - permissions: "0500" - owner: root:root + files: + path: "/path/to/my-file" + contentFrom: + secret: + # Name of the secret in the CK8sBootstrapConfig's namespace to use. + name: my-secret + # Key is the key in the secret's data map for this value. + key: my-key + permissions: "0500" + owner: root:root ``` ### `bootstrapConfig` @@ -102,37 +102,37 @@ nodes. The structure of the `bootstrapConfig` is defined in the | `content` | `string` | Content of the file. 
If this is set, `contentFrom` is ignored | `""` | | `contentFrom` | `struct` | A reference to a secret containing the content of the file | `nil` | -**Example Usage:** +**Example usage:** - Using `content`: ```yaml spec: - bootstrapConfig: - content: | - cluster-config: - network: - enabled: true - dns: - enabled: true - cluster-domain: cluster.local - ingress: - enabled: true - load-balancer: - enabled: true + bootstrapConfig: + content: | + cluster-config: + network: + enabled: true + dns: + enabled: true + cluster-domain: cluster.local + ingress: + enabled: true + load-balancer: + enabled: true ``` - Using `contentFrom`: ```yaml spec: - bootstrapConfig: - contentFrom: - secret: - # Name of the secret in the CK8sBootstrapConfig's namespace to use. - name: my-secret - # Key is the key in the secret's data map for this value. - key: my-key + bootstrapConfig: + contentFrom: + secret: + # Name of the secret in the CK8sBootstrapConfig's namespace to use. + name: my-secret + # Key is the key in the secret's data map for this value. + key: my-key ``` ### `bootCommands` @@ -144,13 +144,13 @@ spec: `bootCommands` specifies extra commands to run in cloud-init early in the boot process. -**Example Usage:** +**Example usage:** ```yaml spec: - bootCommands: - - echo "first-command" - - echo "second-command" + bootCommands: + - echo "first-command" + - echo "second-command" ``` ### `preRunCommands` @@ -167,13 +167,13 @@ k8s-snap setup runs. on machines. See [Install custom {{product}} on machines] guide for more info. ``` -**Example Usage:** +**Example usage:** ```yaml spec: - preRunCommands: - - echo "first-command" - - echo "second-command" + preRunCommands: + - echo "first-command" + - echo "second-command" ``` ### `postRunCommands` @@ -185,13 +185,13 @@ spec: `postRunCommands` specifies extra commands to run in cloud-init after k8s-snap setup runs. 
-**Example Usage:** +**Example usage:** ```yaml spec: - postRunCommands: - - echo "first-command" - - echo "second-command" + postRunCommands: + - echo "first-command" + - echo "second-command" ``` ### `airGapped` @@ -206,11 +206,11 @@ k8s-snap on the machine. The user is expected to install k8s-snap manually with [`preRunCommands`](#preRunCommands), or provide an image with k8s-snap pre-installed. -**Example Usage:** +**Example usage:** ```yaml spec: - airGapped: true + airGapped: true ``` ### `initConfig` @@ -232,17 +232,154 @@ spec: | `enableDefaultNetwork` | `bool` | Specifies whether to enable the default CNI. | `true` | -**Example Usage:** +**Example usage:** ```yaml spec: - initConfig: - annotations: - annotationKey: "annotationValue" - enableDefaultDNS: false - enableDefaultLocalStorage: true - enableDefaultMetricsServer: false - enableDefaultNetwork: true + initConfig: + annotations: + annotationKey: "annotationValue" + enableDefaultDNS: false + enableDefaultLocalStorage: true + enableDefaultMetricsServer: false + enableDefaultNetwork: true +``` + + +### `snapstoreProxyScheme` + +**Type:** `string` + +**Required:** no + +The snap store proxy domain's scheme, e.g. "http" or "https" without "://". +Defaults to `http`. + +**Example usage:** + +```yaml +spec: + snapstoreProxyScheme: "https" +``` + +### `snapstoreProxyDomain` + +**Type:** `string` + +**Required:** no + +The snap store proxy domain. + +**Example usage:** + +```yaml +spec: + snapstoreProxyDomain: "my.proxy.domain" +``` + +### `snapstoreProxyID` + +**Type:** `string` + +**Required:** no + +The snap store proxy ID. + +**Example usage:** + +```yaml +spec: + snapstoreProxyID: "my-proxy-id" +``` + +### `httpsProxy` + +**Type:** `string` + +**Required:** no + +The `HTTPS_PROXY` configuration. + +**Example usage:** + +```yaml +spec: + httpsProxy: "https://my.proxy.domain:8080" +``` + +### `httpProxy` + +**Type:** `string` + +**Required:** no + +The `HTTP_PROXY` configuration. 
+ +**Example usage:** + +```yaml +spec: + httpProxy: "http://my.proxy.domain:8080" +``` + +### `noProxy` + +**Type:** `string` + +**Required:** no + +The `NO_PROXY` configuration. + +**Example usage:** + +```yaml +spec: + noProxy: "localhost,127.0.0.1" +``` + +### `channel` + +**Type:** `string` + +**Required:** no + +The channel to use for the snap install. + +**Example usage:** + +```yaml +spec: + channel: "1.32-classic/candidate" +``` + +### `revision` + +**Type:** `string` + +**Required:** no + +The revision to use for the snap install. + +**Example usage:** + +```yaml +spec: + revision: "1234" +``` + +### `localPath` + +**Type:** `string` + +**Required:** no + +The local path to use for the snap install. + +**Example usage:** + +```yaml +spec: + localPath: "/path/to/custom/k8s.snap" ``` ### `nodeName` @@ -256,11 +393,11 @@ for clouds where the cloud-provider has specific pre-requisites about the node names. It is typically set in Jinja template form, e.g. `"{{ ds.meta_data.local_hostname }}"`. -**Example Usage:** +**Example usage:** ```yaml spec: - nodeName: "{{ ds.meta_data.local_hostname }}" + nodeName: "{{ ds.meta_data.local_hostname }}" ``` ## Control plane provider (CACPCK) @@ -277,11 +414,11 @@ provider. `replicas` is the number of desired machines. Defaults to 1. When stacked etcd is used only odd numbers are permitted, as per [etcd best practice]. -**Example Usage:** +**Example usage:** ```yaml spec: - replicas: 2 + replicas: 2 ``` ### `controlPlane` @@ -306,25 +443,25 @@ spec: | `microclusterPort` | `int` | The port to use for MicroCluster. If unset, ":2380" (etcd peer) will be used. | `":2380"` | | `extraKubeAPIServerArgs` | `map[string]string` | Extra arguments to add to kube-apiserver. 
| `map[]` | -**Example Usage:** +**Example usage:** ```yaml spec: - controlPlane: - extraSANs: - - extra.san - cloudProvider: external - nodeTaints: - - myTaint - datastoreType: k8s-dqlite - datastoreServersSecretRef: - name: sfName - key: sfKey - k8sDqlitePort: 2379 - microclusterAddress: my.address - microclusterPort: ":2380" - extraKubeAPIServerArgs: - argKey: argVal + controlPlane: + extraSANs: + - extra.san + cloudProvider: external + nodeTaints: + - myTaint + datastoreType: k8s-dqlite + datastoreServersSecretRef: + name: sfName + key: sfKey + k8sDqlitePort: 2379 + microclusterAddress: my.address + microclusterPort: ":2380" + extraKubeAPIServerArgs: + argKey: argVal ``` diff --git a/docs/src/capi/tutorial/getting-started.md b/docs/src/capi/tutorial/getting-started.md index 62e3ab619..1de140eb3 100644 --- a/docs/src/capi/tutorial/getting-started.md +++ b/docs/src/capi/tutorial/getting-started.md @@ -12,10 +12,12 @@ placing it in your PATH. For example, at the time this guide was written, for `amd64` you would run: ``` -curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.9.0/clusterctl-linux-amd64 -o clusterctl +curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.9.3/clusterctl-linux-amd64 -o clusterctl sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl ``` +For more `clusterctl` versions refer to the [upstream release page][clusterctl-release-page]. + ## Set up a management cluster The management cluster hosts the CAPI providers. You can use {{product}} as a @@ -56,7 +58,8 @@ sudo mv clusterawsadm /usr/local/bin ``` `clusterawsadm` helps you bootstrapping the AWS environment that CAPI will use. -It will also create the necessary IAM roles for you. +It will also create the necessary IAM roles for you. For more `clusterawsadm` +versions refer to the [upstream release page][clusterawsadm-release-page]. 
Start by setting up environment variables defining the AWS account to use, if these are not already defined: @@ -140,9 +143,9 @@ You are now all set to deploy the MAAS CAPI infrastructure provider. ```` ````` -## Initialise the management cluster +## Initialize the management cluster -To initialise the management cluster with the latest released version of the +To initialize the management cluster with the latest released version of the providers and the infrastructure of your choice: ``` @@ -157,7 +160,7 @@ provision. You can generate a cluster manifest for a selected set of commonly used infrastructures via templates provided by the {{product}} team. -Ensure you have initialised the desired infrastructure provider and fetch +Ensure you have initialized the desired infrastructure provider and fetch the {{product}} provider repository: ``` @@ -215,13 +218,13 @@ After the first control plane node is provisioned, you can get the kubeconfig of the workload cluster: ``` -clusterctl get kubeconfig ${CLUSTER_NAME} ${CLUSTER_NAME}-kubeconfig +clusterctl get kubeconfig ${CLUSTER_NAME} > ./${CLUSTER_NAME}-kubeconfig ``` You can then see the workload nodes using: ``` -KUBECONFIG=./kubeconfig sudo k8s kubectl get node +KUBECONFIG=./${CLUSTER_NAME}-kubeconfig sudo k8s kubectl get node ``` ## Delete the cluster @@ -236,3 +239,5 @@ sudo k8s kubectl delete cluster ${CLUSTER_NAME} [upstream instructions]: https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl [CloudFormation]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html [IAM]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html +[clusterctl-release-page]: https://github.com/kubernetes-sigs/cluster-api/releases +[clusterawsadm-release-page]: https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases
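As a final sanity check for the kubeconfig handling above: the tutorial writes the workload kubeconfig to `./${CLUSTER_NAME}-kubeconfig` and then points `KUBECONFIG` at that same path. A pure-shell sketch of the naming convention (no cluster required; `example-cluster` is a made-up name):

```shell
# Derive the kubeconfig path used by the tutorial commands.
# CLUSTER_NAME is whatever you exported before generating the manifest.
CLUSTER_NAME=example-cluster
KUBECONFIG_PATH="./${CLUSTER_NAME}-kubeconfig"
echo "${KUBECONFIG_PATH}"   # prints ./example-cluster-kubeconfig
```

Keeping the retrieval and usage steps on the same derived path avoids the earlier mismatch where the file was saved under one name but read back as `./kubeconfig`.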