Ku 1824 doc fixes #716

Merged · 5 commits · Oct 7, 2024
86 changes: 69 additions & 17 deletions docs/src/capi/explanation/capi-ck8s.md
@@ -1,12 +1,26 @@
# Cluster API - {{product}}

- ClusterAPI (CAPI) is an open-source Kubernetes project that provides a declarative API for cluster creation, configuration, and management. It is designed to automate the creation and management of Kubernetes clusters in various environments, including on-premises data centers, public clouds, and edge devices.

- CAPI abstracts away the details of infrastructure provisioning, networking, and other low-level tasks, allowing users to define their desired cluster configuration using simple YAML manifests. This makes it easier to create and manage clusters in a repeatable and consistent manner, regardless of the underlying infrastructure. In this way a wide range of infrastructure providers has been made available, including but not limited to Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack.

- CAPI also abstracts the provisioning and management of Kubernetes clusters allowing for a variety of Kubernetes distributions to be delivered in all of the supported infrastructure providers. {{product}} is one such Kubernetes distribution that seamlessly integrates with Cluster API.
+ ClusterAPI (CAPI) is an open-source Kubernetes project that provides a
+ declarative API for cluster creation, configuration, and management. It is
+ designed to automate the creation and management of Kubernetes clusters in
+ various environments, including on-premises data centers, public clouds, and
+ edge devices.

+ CAPI abstracts away the details of infrastructure provisioning, networking, and
+ other low-level tasks, allowing users to define their desired cluster
+ configuration using simple YAML manifests. This makes it easier to create and
+ manage clusters in a repeatable and consistent manner, regardless of the
+ underlying infrastructure. In this way a wide range of infrastructure providers
+ has been made available, including but not limited to Amazon Web Services
+ (AWS), Microsoft Azure, Google Cloud Platform (GCP), and OpenStack.

+ CAPI also abstracts the provisioning and management of Kubernetes clusters
+ allowing for a variety of Kubernetes distributions to be delivered in all of
+ the supported infrastructure providers. {{product}} is one such Kubernetes
+ distribution that seamlessly integrates with Cluster API.

With {{product}} CAPI you can:

- provision a cluster with:
- Kubernetes version 1.31 onwards
- risk level of the track you want to follow (stable, candidate, beta, edge)
@@ -20,21 +34,59 @@ Please refer to the “Tutorial” section for concrete examples on CAPI deployments

## CAPI architecture

- Being a cloud-native framework, CAPI implements all its components as controllers that run within a Kubernetes cluster. There is a separate controller, called a ‘provider’, for each supported infrastructure substrate. The infrastructure providers are responsible for provisioning physical or virtual nodes and setting up networking elements such as load balancers and virtual networks. In a similar way, each Kubernetes distribution that integrates with ClusterAPI is managed by two providers: the control plane provider and the bootstrap provider. The bootstrap provider is responsible for delivering and managing Kubernetes on the nodes, while the control plane provider handles the control plane’s specific lifecycle.

- The CAPI providers operate within a Kubernetes cluster known as the management cluster. The administrator is responsible for selecting the desired combination of infrastructure and Kubernetes distribution by instantiating the respective infrastructure, bootstrap, and control plane providers on the management cluster.

- The management cluster functions as the control plane for the ClusterAPI operator, which is responsible for provisioning and managing the infrastructure resources necessary for creating and managing additional Kubernetes clusters. It is important to note that the management cluster is not intended to support any other workload, as the workloads are expected to run on the provisioned clusters. As a result, the provisioned clusters are referred to as workload clusters.

- Typically, the management cluster runs in a separate environment from the clusters it manages, such as a public cloud or an on-premises data center. It serves as a centralized location for managing the configuration, policies, and security of multiple managed clusters. By leveraging the management cluster, users can easily create and manage a fleet of Kubernetes clusters in a consistent and repeatable manner.
+ Being a cloud-native framework, CAPI implements all its components as
+ controllers that run within a Kubernetes cluster. There is a separate
+ controller, called a ‘provider’, for each supported infrastructure substrate.
+ The infrastructure providers are responsible for provisioning physical or
+ virtual nodes and setting up networking elements such as load balancers and
+ virtual networks. In a similar way, each Kubernetes distribution that
+ integrates with ClusterAPI is managed by two providers: the control plane
+ provider and the bootstrap provider. The bootstrap provider is responsible for
+ delivering and managing Kubernetes on the nodes, while the control plane
+ provider handles the control plane’s specific lifecycle.

+ The CAPI providers operate within a Kubernetes cluster known as the management
+ cluster. The administrator is responsible for selecting the desired combination
+ of infrastructure and Kubernetes distribution by instantiating the respective
+ infrastructure, bootstrap, and control plane providers on the management
+ cluster.

+ The management cluster functions as the control plane for the ClusterAPI
+ operator, which is responsible for provisioning and managing the infrastructure
+ resources necessary for creating and managing additional Kubernetes clusters.
+ It is important to note that the management cluster is not intended to support
+ any other workload, as the workloads are expected to run on the provisioned
+ clusters. As a result, the provisioned clusters are referred to as workload
+ clusters.

+ Typically, the management cluster runs in a separate environment from the
+ clusters it manages, such as a public cloud or an on-premises data center. It
+ serves as a centralized location for managing the configuration, policies, and
+ security of multiple managed clusters. By leveraging the management cluster,
+ users can easily create and manage a fleet of Kubernetes clusters in a
+ consistent and repeatable manner.

The {{product}} team maintains the two providers required for integrating with CAPI:

- - The Cluster API Bootstrap Provider {{product}} (**CABPCK**) responsible for provisioning the nodes in the cluster and preparing them to be joined to the Kubernetes control plane. When you use the CABPCK you define a Kubernetes Cluster object that describes the desired state of the new cluster and includes the number and type of nodes in the cluster, as well as any additional configuration settings. The Bootstrap Provider then creates the necessary resources in the Kubernetes API server to bring the cluster up to the desired state. Under the hood, the Bootstrap Provider uses cloud-init to configure the nodes in the cluster. This includes setting up SSH keys, configuring the network, and installing necessary software packages.

- - The Cluster API Control Plane Provider {{product}} (**CACPCK**) enables the creation and management of Kubernetes control planes using {{product}} as the underlying Kubernetes distribution. Its main tasks are to update the machine state and to generate the kubeconfig file used for accessing the cluster. The kubeconfig file is stored as a secret which the user can then retrieve using the `clusterctl` command.

- ```{figure} ./capi-ck8s.svg
+ - The Cluster API Bootstrap Provider {{product}} (**CABPCK**) is responsible
+   for provisioning the nodes in the cluster and preparing them to be joined to
+   the Kubernetes control plane. When you use the CABPCK you define a Kubernetes
+   Cluster object that describes the desired state of the new cluster and
+   includes the number and type of nodes in the cluster, as well as any
+   additional configuration settings. The Bootstrap Provider then creates the
+   necessary resources in the Kubernetes API server to bring the cluster up to
+   the desired state. Under the hood, the Bootstrap Provider uses cloud-init to
+   configure the nodes in the cluster. This includes setting up SSH keys,
+   configuring the network, and installing necessary software packages.

+ - The Cluster API Control Plane Provider {{product}} (**CACPCK**) enables the
+   creation and management of Kubernetes control planes using {{product}} as
+   the underlying Kubernetes distribution. Its main tasks are to update the
+   machine state and to generate the kubeconfig file used for accessing the
+   cluster. The kubeconfig file is stored as a secret which the user can then
+   retrieve using the `clusterctl` command.

+ ```{figure} ../../assets/capi-ck8s.svg
:width: 100%
:alt: Deployment of components

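To make the provider split above concrete, a CAPI `Cluster` object ties the
{{product}} control plane provider and an infrastructure provider together
through object references. A minimal sketch, not taken from this PR: the kinds,
API versions, and names below are assumptions based on upstream CAPI
conventions.

```yaml
# Hypothetical Cluster manifest wiring CACPCK to an infrastructure provider.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  controlPlaneRef:
    # Reconciled by the control plane provider (CACPCK); kind/version assumed.
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: CK8sControlPlane
    name: demo-cluster-control-plane
  infrastructureRef:
    # Reconciled by the chosen infrastructure provider (AWS used as an example).
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: demo-cluster
```

Once the workload cluster is up, the kubeconfig secret mentioned above can be
retrieved with, for example, `clusterctl get kubeconfig demo-cluster >
demo-cluster.kubeconfig`.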
2 changes: 1 addition & 1 deletion docs/src/capi/explanation/index.md
@@ -11,7 +11,7 @@ Overview <self>

```{toctree}
:titlesonly:
- :globs:
+ :glob:

about
security
5 changes: 3 additions & 2 deletions docs/src/capi/reference/configs.md
@@ -68,6 +68,7 @@ spec:
- echo "second-command"
```

+ (preruncommands)=
### `preRunCommands`
**Type:** `[]string`

@@ -107,7 +108,7 @@ spec:

**Required:** no

- `airGapped` is used to signal that we are deploying to an airgap environment. In this case, the provider will not attempt to install k8s-snap on the machine. The user is expected to install k8s-snap manually with [`preRunCommands`](#preRunCommands), or provide an image with k8s-snap pre-installed.
+ `airGapped` is used to signal that we are deploying to an airgap environment. In this case, the provider will not attempt to install k8s-snap on the machine. The user is expected to install k8s-snap manually with [`preRunCommands`](#preruncommands), or provide an image with k8s-snap pre-installed.

**Example Usage:**
```yaml
@@ -217,7 +218,7 @@ spec:
```

<!-- LINKS -->
- [Install custom {{product}} on machines]: ../howto/custom-ck8s.md
+ [Install custom {{product}} on machines]: /capi/howto/custom-ck8s.md
[etcd best practices]: https://etcd.io/docs/v3.5/faq/#why-an-odd-number-of-cluster-members


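Reading the two settings together: in an air-gapped deployment, `airGapped:
true` skips the snap installation and [`preRunCommands`](#preruncommands) can
deliver k8s-snap instead. A sketch, assuming the snap file has already been
staged on the machine or baked into the image (the path is illustrative):

```yaml
spec:
  airGapped: true
  preRunCommands:
    # Hypothetical pre-staged path; install the snap manually.
    - snap install /opt/k8s.snap --classic --dangerous
```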
4 changes: 2 additions & 2 deletions docs/src/charm/howto/install-lxd.md
@@ -73,15 +73,15 @@ lxc profile show juju-myk8s
```

```{note} For an explanation of the settings in this file,
- [see below](explain-rules)
+ [see below](explain-rules-charm)
```

## Deploying to a container

We can now deploy {{product}} into the LXD-based model as described in
the [charm][] guide.

- (explain-rules)=
+ (explain-rules-charm)=

## Explanation of custom LXD rules

7 changes: 5 additions & 2 deletions docs/src/snap/howto/networking/default-loadbalancer.md
@@ -28,13 +28,15 @@ To check the current configuration of the load-balancer, run the following:
```
sudo k8s get load-balancer
```

This should output a list of values like this:


- `cidrs` - a list containing [cidr] or IP address range definitions of the
pool of IP addresses to use
- `l2-mode` - whether L2 mode (failover) is turned on
- - `l2-interfaces` - optional list of interfaces to announce services over (defaults to all)
+ - `l2-interfaces` - optional list of interfaces to announce services over
+   (defaults to all)
- `bgp-mode` - whether BGP mode is active.
- `bgp-local-asn` - the local Autonomous System Number (ASN)
- `bgp-peer-address` - the peer address
@@ -47,7 +49,8 @@ These values are configured using the `k8s set` command, e.g.:
sudo k8s set load-balancer.l2-mode=true
```

- Note that for the BGP mode, it is necessary to set ***all*** the values simultaneously. E.g.
+ Note that for the BGP mode, it is necessary to set ***all*** the values
+ simultaneously. E.g.

```
sudo k8s set load-balancer.bgp-mode=true load-balancer.bgp-local-asn=64512 load-balancer.bgp-peer-address=10.0.10.55/32 load-balancer.bgp-peer-asn=64512 load-balancer.bgp-peer-port=7012
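By contrast, a minimal L2 (failover) setup only needs an address pool and the
mode flag. A sketch using the keys listed above; the pool range is
illustrative, and the `enable` step assumes the load-balancer feature is not
already active:

```
sudo k8s set load-balancer.cidrs=10.0.10.0/28 load-balancer.l2-mode=true
sudo k8s enable load-balancer
```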
6 changes: 3 additions & 3 deletions docs/src/snap/howto/networking/dualstack.md
@@ -6,13 +6,13 @@ both IPv4 and IPv6 addresses, allowing them to communicate over either protocol.
This document will guide you through enabling dual-stack, including necessary
configurations, known limitations, and common issues.

- ### Prerequisites
+ ## Prerequisites

Before enabling dual-stack, ensure that your environment supports IPv6, and
that your network configuration (including any underlying infrastructure) is
compatible with dual-stack operation.

- ### Enabling Dual-Stack
+ ## Enabling Dual-Stack

Dual-stack can be enabled by specifying both IPv4 and IPv6 CIDRs during the
cluster bootstrap process. The key configuration parameters are:
@@ -133,7 +133,7 @@ cluster bootstrap process. The key configuration parameters are:
working.


- ### CIDR Size Limitations
+ ## CIDR Size Limitations

When setting up dual-stack networking, it is important to consider the
limitations regarding CIDR size:
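The section boils down to supplying both address families at bootstrap. A
minimal sketch, assuming a bootstrap configuration file with `pod-cidr` and
`service-cidr` keys (the key names and values are assumptions, not taken from
this page):

```yaml
# bootstrap-config.yaml -- illustrative dual-stack CIDRs
pod-cidr: 10.1.0.0/16,fd01::/108
service-cidr: 10.152.183.0/24,fd98::/108
```

The file would then be passed to the bootstrap command, for example `sudo k8s
bootstrap --file bootstrap-config.yaml`.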
2 changes: 1 addition & 1 deletion docs/src/snap/howto/storage/ceph.md
@@ -331,7 +331,7 @@ Ceph documentation: [Intro to Ceph].
<!-- LINKS -->

[Ceph]: https://ceph.com/
- [getting-started-guide]: ../tutorial/getting-started.md
+ [getting-started-guide]: /snap/tutorial/getting-started.md
[block-devices-and-kubernetes]: https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/
[placement groups]: https://docs.ceph.com/en/mimic/rados/operations/placement-groups/
[Intro to Ceph]: https://docs.ceph.com/en/latest/start/intro/
2 changes: 1 addition & 1 deletion docs/src/snap/howto/storage/storage.md
@@ -62,4 +62,4 @@ Disabling storage only removes the CSI driver. The persistent volume claims
will still be available and your data will remain on disk.

<!-- LINKS -->
- [getting-started-guide]: ../tutorial/getting-started.md
+ [getting-started-guide]: /snap/tutorial/getting-started.md
4 changes: 2 additions & 2 deletions docs/src/snap/reference/community.md
@@ -68,6 +68,6 @@ the guidelines for participation.
[matrix]: https://matrix.to/#/#k8s:ubuntu.com
[Discourse]: https://discourse.ubuntu.com/c/kubernetes/180
[bugs]: https://github.com/canonical/k8s-snap/issues
- [Contributing guide]: ../howto/contribute
- [Developer guide]: ../howto/contribute
+ [Contributing guide]: /snap/howto/contribute
+ [Developer guide]: /snap/howto/contribute
[support]: https://ubuntu.com/support
2 changes: 1 addition & 1 deletion docs/src/snap/tutorial/add-remove-nodes.md
@@ -156,5 +156,5 @@ multipass purge
[Ingress]: /snap/howto/networking/default-ingress
[Kubectl]: kubectl
[Command Reference]: /snap/reference/commands
- [Storage]: /snap/howto/storage
+ [Storage]: /snap/howto/storage/index
[Networking]: /snap/howto/networking/index.md
9 changes: 5 additions & 4 deletions docs/src/snap/tutorial/getting-started.md
@@ -94,9 +94,10 @@ Let's deploy a demo NGINX server:
sudo k8s kubectl create deployment nginx --image=nginx
```

- This command launches a [pod](https://kubernetes.io/docs/concepts/workloads/pods/),
- the smallest deployable unit in Kubernetes,
- running the NGINX application within a container.
+ This command launches a
+ [pod](https://kubernetes.io/docs/concepts/workloads/pods/), the smallest
+ deployable unit in Kubernetes, running the NGINX application within a
+ container.

You can check the status of your pods by running:

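A sketch of that check, using the snap's bundled kubectl (the pod name and
output shape are illustrative):

```
sudo k8s kubectl get pods
# NAME                     READY   STATUS    RESTARTS   AGE
# nginx-xxxxxxxxx-xxxxx    1/1     Running   0          30s
```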
@@ -214,6 +215,6 @@ This option ensures complete removal of the snap and its associated data.
[How to use kubectl]: kubectl
[Command Reference Guide]: /snap/reference/commands
[Setting up a K8s cluster]: add-remove-nodes
- [Storage]: /snap/howto/storage
+ [Storage]: /snap/howto/storage/index
[Networking]: /snap/howto/networking/index.md
[Ingress]: /snap/howto/networking/default-ingress.md