diff --git a/docs/src/capi/explanation/capi-ck8s.svg b/docs/src/assets/capi-ck8s.svg similarity index 100% rename from docs/src/capi/explanation/capi-ck8s.svg rename to docs/src/assets/capi-ck8s.svg diff --git a/docs/src/capi/explanation/capi-ck8s.md b/docs/src/capi/explanation/capi-ck8s.md index d75db76ac..10b0e0674 100644 --- a/docs/src/capi/explanation/capi-ck8s.md +++ b/docs/src/capi/explanation/capi-ck8s.md @@ -1,12 +1,26 @@ # Cluster API - {{product}} -ClusterAPI (CAPI) is an open-source Kubernetes project that provides a declarative API for cluster creation, configuration, and management. It is designed to automate the creation and management of Kubernetes clusters in various environments, including on-premises data centers, public clouds, and edge devices. - -CAPI abstracts away the details of infrastructure provisioning, networking, and other low-level tasks, allowing users to define their desired cluster configuration using simple YAML manifests. This makes it easier to create and manage clusters in a repeatable and consistent manner, regardless of the underlying infrastructure. In this way a wide range of infrastructure providers has been made available, including but not limited to Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack. - -CAPI also abstracts the provisioning and management of Kubernetes clusters allowing for a variety of Kubernetes distributions to be delivered in all of the supported infrastructure providers. {{product}} is one such Kubernetes distribution that seamlessly integrates with Cluster API. +ClusterAPI (CAPI) is an open-source Kubernetes project that provides a +declarative API for cluster creation, configuration, and management. It is +designed to automate the creation and management of Kubernetes clusters in +various environments, including on-premises data centers, public clouds, and +edge devices. + +CAPI abstracts away the details of infrastructure provisioning, networking, and +other low-level tasks, allowing users to define their desired cluster +configuration using simple YAML manifests. This makes it easier to create and +manage clusters in a repeatable and consistent manner, regardless of the +underlying infrastructure. In this way a wide range of infrastructure providers +has been made available, including but not limited to Amazon Web Services +(AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack. + +CAPI also abstracts the provisioning and management of Kubernetes clusters +allowing for a variety of Kubernetes distributions to be delivered in all of +the supported infrastructure providers. {{product}} is one such Kubernetes +distribution that seamlessly integrates with Cluster API. With {{product}} CAPI you can: + - provision a cluster with: - Kubernetes version 1.31 onwards - risk level of the track you want to follow (stable, candidate, beta, edge) @@ -20,21 +34,59 @@ Please refer to the “Tutorial” section for concrete examples on CAPI deploym ## CAPI architecture -Being a cloud-native framework, CAPI implements all its components as controllers that run within a Kubernetes cluster. There is a separate controller, called a ‘provider’, for each supported infrastructure substrate. The infrastructure providers are responsible for provisioning physical or virtual nodes and setting up networking elements such as load balancers and virtual networks. 
In a similar way, each Kubernetes distribution that integrates with ClusterAPI is managed by two providers: the control plane provider and the bootstrap provider. The bootstrap provider is responsible for delivering and managing Kubernetes on the nodes, while the control plane provider handles the control plane’s specific lifecycle. - -The CAPI providers operate within a Kubernetes cluster known as the management cluster. The administrator is responsible for selecting the desired combination of infrastructure and Kubernetes distribution by instantiating the respective infrastructure, bootstrap, and control plane providers on the management cluster. - -The management cluster functions as the control plane for the ClusterAPI operator, which is responsible for provisioning and managing the infrastructure resources necessary for creating and managing additional Kubernetes clusters. It is important to note that the management cluster is not intended to support any other workload, as the workloads are expected to run on the provisioned clusters. As a result, the provisioned clusters are referred to as workload clusters. - -Typically, the management cluster runs in a separate environment from the clusters it manages, such as a public cloud or an on-premises data center. It serves as a centralized location for managing the configuration, policies, and security of multiple managed clusters. By leveraging the management cluster, users can easily create and manage a fleet of Kubernetes clusters in a consistent and repeatable manner. +Being a cloud-native framework, CAPI implements all its components as +controllers that run within a Kubernetes cluster. There is a separate +controller, called a ‘provider’, for each supported infrastructure substrate. +The infrastructure providers are responsible for provisioning physical or +virtual nodes and setting up networking elements such as load balancers and +virtual networks. In a similar way, each Kubernetes distribution that +integrates with ClusterAPI is managed by two providers: the control plane +provider and the bootstrap provider. The bootstrap provider is responsible for +delivering and managing Kubernetes on the nodes, while the control plane +provider handles the control plane’s specific lifecycle. + +The CAPI providers operate within a Kubernetes cluster known as the management +cluster. The administrator is responsible for selecting the desired combination +of infrastructure and Kubernetes distribution by instantiating the respective +infrastructure, bootstrap, and control plane providers on the management +cluster. + +The management cluster functions as the control plane for the ClusterAPI +operator, which is responsible for provisioning and managing the infrastructure +resources necessary for creating and managing additional Kubernetes clusters. +It is important to note that the management cluster is not intended to support +any other workload, as the workloads are expected to run on the provisioned +clusters. As a result, the provisioned clusters are referred to as workload +clusters. + +Typically, the management cluster runs in a separate environment from the +clusters it manages, such as a public cloud or an on-premises data center. It +serves as a centralized location for managing the configuration, policies, and +security of multiple managed clusters. By leveraging the management cluster, +users can easily create and manage a fleet of Kubernetes clusters in a +consistent and repeatable manner. 
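+
+As a rough illustration of this workflow, the sketch below initializes a
+management cluster with the CK8s bootstrap and control plane providers plus an
+infrastructure provider, then provisions a workload cluster with `clusterctl`.
+The provider names, cluster name, Kubernetes version and the choice of AWS are
+placeholders; adjust them to match your environment:
+
+```
+# Install the bootstrap, control plane and infrastructure providers
+# into the management cluster (provider names shown are illustrative)
+clusterctl init --bootstrap canonical-kubernetes --control-plane canonical-kubernetes --infrastructure aws
+
+# Render a workload cluster manifest and apply it to the management cluster
+# (additional provider-specific variables are usually required)
+clusterctl generate cluster my-cluster --kubernetes-version v1.31.0 > my-cluster.yaml
+kubectl apply -f my-cluster.yaml
+
+# Retrieve the kubeconfig of the provisioned workload cluster
+clusterctl get kubeconfig my-cluster > my-cluster.kubeconfig
+```
+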
The {{product}} team maintains the two providers required for integrating with CAPI: -- The Cluster API Bootstrap Provider {{product}} (**CABPCK**) responsible for provisioning the nodes in the cluster and preparing them to be joined to the Kubernetes control plane. When you use the CABPCK you define a Kubernetes Cluster object that describes the desired state of the new cluster and includes the number and type of nodes in the cluster, as well as any additional configuration settings. The Bootstrap Provider then creates the necessary resources in the Kubernetes API server to bring the cluster up to the desired state. Under the hood, the Bootstrap Provider uses cloud-init to configure the nodes in the cluster. This includes setting up SSH keys, configuring the network, and installing necessary software packages. - -- The Cluster API Control Plane Provider {{product}} (**CACPCK**) enables the creation and management of Kubernetes control planes using {{product}} as the underlying Kubernetes distribution. Its main tasks are to update the machine state and to generate the kubeconfig file used for accessing the cluster. The kubeconfig file is stored as a secret which the user can then retrieve using the `clusterctl` command. - -```{figure} ./capi-ck8s.svg +- The Cluster API Bootstrap Provider {{product}} (**CABPCK**) responsible for + provisioning the nodes in the cluster and preparing them to be joined to the + Kubernetes control plane. When you use the CABPCK you define a Kubernetes + Cluster object that describes the desired state of the new cluster and + includes the number and type of nodes in the cluster, as well as any + additional configuration settings. The Bootstrap Provider then creates the + necessary resources in the Kubernetes API server to bring the cluster up to + the desired state. Under the hood, the Bootstrap Provider uses cloud-init to + configure the nodes in the cluster. This includes setting up SSH keys, + configuring the network, and installing necessary software packages. + +- The Cluster API Control Plane Provider {{product}} (**CACPCK**) enables the + creation and management of Kubernetes control planes using {{product}} as the + underlying Kubernetes distribution. Its main tasks are to update the machine + state and to generate the kubeconfig file used for accessing the cluster. The + kubeconfig file is stored as a secret which the user can then retrieve using + the `clusterctl` command. + +```{figure} ../../assets/capi-ck8s.svg :width: 100% :alt: Deployment of components diff --git a/docs/src/capi/explanation/index.md b/docs/src/capi/explanation/index.md index 775dd26a3..f10ada1ac 100644 --- a/docs/src/capi/explanation/index.md +++ b/docs/src/capi/explanation/index.md @@ -11,7 +11,7 @@ Overview ```{toctree} :titlesonly: -:globs: +:glob: about security diff --git a/docs/src/capi/reference/configs.md b/docs/src/capi/reference/configs.md index 60ce9bebe..7297d4f24 100644 --- a/docs/src/capi/reference/configs.md +++ b/docs/src/capi/reference/configs.md @@ -68,6 +68,7 @@ spec: - echo "second-command" ``` +(preruncommands)= ### `preRunCommands` **Type:** `[]string` @@ -107,7 +108,7 @@ spec: **Required:** no -`airGapped` is used to signal that we are deploying to an airgap environment. In this case, the provider will not attempt to install k8s-snap on the machine. The user is expected to install k8s-snap manually with [`preRunCommands`](#preRunCommands), or provide an image with k8s-snap pre-installed. 
+`airGapped` is used to signal that we are deploying to an airgap environment. In this case, the provider will not attempt to install k8s-snap on the machine. The user is expected to install k8s-snap manually with [`preRunCommands`](#preruncommands), or provide an image with k8s-snap pre-installed. **Example Usage:** ```yaml @@ -217,7 +218,7 @@ spec: ``` -[Install custom {{product}} on machines]: ../howto/custom-ck8s.md +[Install custom {{product}} on machines]: /capi/howto/custom-ck8s.md [etcd best practices]: https://etcd.io/docs/v3.5/faq/#why-an-odd-number-of-cluster-members diff --git a/docs/src/charm/howto/install-lxd.md b/docs/src/charm/howto/install-lxd.md index 321ce4e2c..9d3b96605 100644 --- a/docs/src/charm/howto/install-lxd.md +++ b/docs/src/charm/howto/install-lxd.md @@ -73,7 +73,7 @@ lxc profile show juju-myk8s ``` ```{note} For an explanation of the settings in this file, - [see below](explain-rules) + [see below](explain-rules-charm) ``` ## Deploying to a container @@ -81,7 +81,7 @@ lxc profile show juju-myk8s We can now deploy {{product}} into the LXD-based model as described in the [charm][] guide. -(explain-rules)= +(explain-rules-charm)= ## Explanation of custom LXD rules diff --git a/docs/src/snap/howto/networking/default-loadbalancer.md b/docs/src/snap/howto/networking/default-loadbalancer.md index 88a2a20fb..0fd14ced9 100644 --- a/docs/src/snap/howto/networking/default-loadbalancer.md +++ b/docs/src/snap/howto/networking/default-loadbalancer.md @@ -28,13 +28,15 @@ To check the current configuration of the load-balancer, run the following: ``` sudo k8s get load-balancer ``` + This should output a list of values like this: - `cidrs` - a list containing [cidr] or IP address range definitions of the pool of IP addresses to use - `l2-mode` - whether L2 mode (failover) is turned on -- `l2-interfaces` - optional list of interfaces to announce services over (defaults to all) +- `l2-interfaces` - optional list of interfaces to announce services over + (defaults to all) - `bgp-mode` - whether BGP mode is active. - `bgp-local-asn` - the local Autonomous System Number (ASN) - `bgp-peer-address` - the peer address @@ -47,7 +49,8 @@ These values are configured using the `k8s set`command, e.g.: sudo k8s set load-balancer.l2-mode=true ``` -Note that for the BGP mode, it is necessary to set ***all*** the values simultaneously. E.g. +Note that for the BGP mode, it is necessary to set ***all*** the values +simultaneously. E.g. ``` sudo k8s set load-balancer.bgp-mode=true load-balancer.bgp-local-asn=64512 load-balancer.bgp-peer-address=10.0.10.55/32 load-balancer.bgp-peer-asn=64512 load-balancer.bgp-peer-port=7012 diff --git a/docs/src/snap/howto/networking/dualstack.md b/docs/src/snap/howto/networking/dualstack.md index 343a02f37..406245ff5 100644 --- a/docs/src/snap/howto/networking/dualstack.md +++ b/docs/src/snap/howto/networking/dualstack.md @@ -6,13 +6,13 @@ both IPv4 and IPv6 addresses, allowing them to communicate over either protocol. This document will guide you through enabling dual-stack, including necessary configurations, known limitations, and common issues. -### Prerequisites +## Prerequisites Before enabling dual-stack, ensure that your environment supports IPv6, and that your network configuration (including any underlying infrastructure) is compatible with dual-stack operation. -### Enabling Dual-Stack +## Enabling Dual-Stack Dual-stack can be enabled by specifying both IPv4 and IPv6 CIDRs during the cluster bootstrap process. 
The key configuration parameters are: @@ -133,7 +133,7 @@ cluster bootstrap process. The key configuration parameters are: working. -### CIDR Size Limitations +## CIDR Size Limitations When setting up dual-stack networking, it is important to consider the limitations regarding CIDR size: diff --git a/docs/src/snap/howto/storage/ceph.md b/docs/src/snap/howto/storage/ceph.md index b448cab9f..ff55d10dd 100644 --- a/docs/src/snap/howto/storage/ceph.md +++ b/docs/src/snap/howto/storage/ceph.md @@ -331,7 +331,7 @@ Ceph documentation: [Intro to Ceph]. [Ceph]: https://ceph.com/ -[getting-started-guide]: ../tutorial/getting-started.md +[getting-started-guide]: /snap/tutorial/getting-started.md [block-devices-and-kubernetes]: https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/ [placement groups]: https://docs.ceph.com/en/mimic/rados/operations/placement-groups/ [Intro to Ceph]: https://docs.ceph.com/en/latest/start/intro/ diff --git a/docs/src/snap/howto/storage/storage.md b/docs/src/snap/howto/storage/storage.md index dbba33631..71e9c3dd1 100644 --- a/docs/src/snap/howto/storage/storage.md +++ b/docs/src/snap/howto/storage/storage.md @@ -62,4 +62,4 @@ Disabling storage only removes the CSI driver. The persistent volume claims will still be available and your data will remain on disk. -[getting-started-guide]: ../tutorial/getting-started.md +[getting-started-guide]: /snap/tutorial/getting-started.md diff --git a/docs/src/snap/reference/community.md b/docs/src/snap/reference/community.md index 182ee8688..080b89348 100644 --- a/docs/src/snap/reference/community.md +++ b/docs/src/snap/reference/community.md @@ -68,6 +68,6 @@ the guidelines for participation. [matrix]: https://matrix.to/#/#k8s:ubuntu.com [Discourse]: https://discourse.ubuntu.com/c/kubernetes/180 [bugs]: https://github.com/canonical/k8s-snap/issues -[Contributing guide]: ../howto/contribute -[Developer guide]: ../howto/contribute +[Contributing guide]: /snap/howto/contribute +[Developer guide]: /snap/howto/contribute [support]: https://ubuntu.com/support diff --git a/docs/src/snap/tutorial/add-remove-nodes.md b/docs/src/snap/tutorial/add-remove-nodes.md index 736474b46..b36fd9a55 100644 --- a/docs/src/snap/tutorial/add-remove-nodes.md +++ b/docs/src/snap/tutorial/add-remove-nodes.md @@ -156,5 +156,5 @@ multipass purge [Ingress]: /snap/howto/networking/default-ingress [Kubectl]: kubectl [Command Reference]: /snap/reference/commands -[Storage]: /snap/howto/storage +[Storage]: /snap/howto/storage/index [Networking]: /snap/howto/networking/index.md diff --git a/docs/src/snap/tutorial/getting-started.md b/docs/src/snap/tutorial/getting-started.md index d613e202b..a79c8031d 100644 --- a/docs/src/snap/tutorial/getting-started.md +++ b/docs/src/snap/tutorial/getting-started.md @@ -94,9 +94,10 @@ Let's deploy a demo NGINX server: sudo k8s kubectl create deployment nginx --image=nginx ``` -This command launches a [pod](https://kubernetes.io/docs/concepts/workloads/pods/), -the smallest deployable unit in Kubernetes, -running the NGINX application within a container. +This command launches a +[pod](https://kubernetes.io/docs/concepts/workloads/pods/), the smallest +deployable unit in Kubernetes, running the NGINX application within a +container. You can check the status of your pods by running: @@ -214,6 +215,6 @@ This option ensures complete removal of the snap and its associated data. 
[How to use kubectl]: kubectl [Command Reference Guide]: /snap/reference/commands [Setting up a K8s cluster]: add-remove-nodes -[Storage]: /snap/howto/storage +[Storage]: /snap/howto/storage/index [Networking]: /snap/howto/networking/index.md [Ingress]: /snap/howto/networking/default-ingress.md \ No newline at end of file