diff --git a/website/docs/using-qovery.md b/website/docs/using-qovery.md index 3dca2e718f..7e1b86cee9 100644 --- a/website/docs/using-qovery.md +++ b/website/docs/using-qovery.md @@ -1,5 +1,5 @@ --- -last_modified_on: "2023-05-29" +last_modified_on: "2023-12-22" title: Using Qovery description: "Everything you need to know to configure and use your applications on Qovery" sidebar_label: hidden diff --git a/website/docs/using-qovery/configuration/provider/kubernetes.md b/website/docs/using-qovery/configuration/provider/kubernetes.md index 8dcd707779..3d54c877b4 100644 --- a/website/docs/using-qovery/configuration/provider/kubernetes.md +++ b/website/docs/using-qovery/configuration/provider/kubernetes.md @@ -1,9 +1,12 @@ --- -last_modified_on: "2023-12-19" +last_modified_on: "2023-12-22" title: "Kubernetes" description: "Learn how to install and configure Qovery on your own Kubernetes cluster (BYOK) / Self-managed Kubernetes cluster" --- +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + import Steps from '@site/src/components/Steps'; import Alert from '@site/src/components/Alert'; import Assumptions from '@site/src/components/Assumptions'; @@ -20,7 +23,7 @@ This section is for Kubernetes power-users. If you are not familiar with Kuberne -Qovery Self-Managed or BYOK (Bring Your Own Kubernetes) is a self-hosted version of Qovery. It allows you to install Qovery on your own Kubernetes cluster. +Qovery Self-Managed (or BYOK: Bring Your Own Kubernetes) is a self-hosted version of Qovery. It allows you to install Qovery on your own Kubernetes cluster. Read [this article](https://www.qovery.com/blog/kubernetes-managed-by-qovery-vs-self-managed-byok) to better understand the difference with the Managed Kubernetes by Qovery. In a nutshell, Qovery Managed/BYOK is for Kubernetes experts who want to manage their own Kubernetes cluster. In this version, Qovery does not manage the Kubernetes cluster for you. 
@@ -44,9 +47,9 @@ They are two types of components:

Qovery components:
- Qovery Control Plane: the Qovery Control Plane is the brain of Qovery. It is responsible for managing your applications and providing the API to interact with Qovery.
-- Qovery Engine: the Qovery Engine is responsible for managing your applications on your Kubernetes cluster. It is installed on your Kubernetes cluster.
-- Qovery Cluster Agent (optional): the Qovery Cluster Agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to Qovery control plane.
-- Qovery Shell Agent (optional): the Qovery Shell Agent is responsible for giving you a secure remote shell access to your Kubernetes pods if you need it. E.g. when using `qovery shell` command.
+- Qovery Cluster Agent (mandatory): the Qovery Cluster Agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to the Qovery control plane.
+- Qovery Shell Agent (mandatory): the Qovery Shell Agent is responsible for giving you secure remote shell access to your Kubernetes pods if you need it, e.g. when using the `qovery shell` command.
+- Qovery Engine (optional): the Qovery Engine is responsible for managing your application deployments on your Kubernetes cluster. It can either run on the Qovery side or be installed on your Kubernetes cluster.

Third-party components:
- NGINX Ingress Controller (optional)
@@ -56,7 +59,7 @@ Third-party components:
- Cert Manager (optional)
- ...

-You can chose what you want to install and manage, and you will have a description of what services are usedi, and responsible for. You can disable them if you don't want to use them. And you can even install other components if you want to.
+You can choose what you want to install and manage, and you will get a description of what each service is used for and responsible for. You can disable the ones you don't want to use, and you can even install other components if you want to.

## What's the requirements?
@@ -68,38 +71,114 @@ Qovery requires a Kubernetes cluster with the following requirements:
- 4 GB RAM
- 20 GB disk space
- Being able to access to the Internet
+- A private registry

-## Run local demo infrastructure (optional)
-
+Here are some examples of Kubernetes distributions that can be used with Qovery. **This is a non-exhaustive list**.
+
+

-This local demo infrastructure is only for testing purpose, to quickly test Qovery. It is not supported by Qovery for production workloads. If you already have a managed Kubernetes cluster like EKS, you can skip this part.
+These examples are not recommendations! They are simply examples of what can be installed the fastest way.

-First you will need some binaries to run the demo infrastructure locally:
-* [docker](https://www.docker.com/): Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.
-* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl): Kubernetes command-line tool.
-* [k3d](https://k3d.io/): k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker.
-* [Helm](https://helm.sh): Helm is a package manager for Kubernetes.
+

-Qovery requires a container registry to store its images.
+

-
+To create a Kubernetes cluster on AWS, the simplest way is to use the [eksctl binary](https://eksctl.io/installation/). For example:

-We will use [ECR](https://aws.amazon.com/ecr/) to have a private repository for this demo, but you can chose any kind of registry (docker.io, [quay.io](https://quay.io/), GCR...).
+```bash
+eksctl create cluster --region=us-east-2 --zones=us-east-2a,us-east-2b,us-east-2d
+```
+

-
+

-We have to use a binary for ECR authentication and token rotation. So we create the prerequired folders and file for the binary:
+To create a Kubernetes cluster on GCP, the simplest way is to use the [gcloud binary](https://cloud.google.com/sdk/docs/install).
For example:
+
+```bash
+gcloud beta container --project "qovery-gcp-tests" \
+    clusters create-auto "qovery-test" \
+    --region "us-east5" \
+    --release-channel "stable" \
+    --network "projects/qovery-gcp-tests/global/networks/default" \
+    --subnetwork "projects/qovery-gcp-tests/regions/us-east5/subnetworks/default" \
+    --cluster-ipv4-cidr "/16" \
+    --services-ipv4-cidr "10.0.0.0/16"
```
-mkdir -p registry/bin
-touch registry/bin/ecr-credential-provider
-chmod 755 registry/bin/ecr-credential-provider
```
+
+
+
+
+To create a Kubernetes cluster on Scaleway, the simplest way is to use the [scw binary](https://github.com/scaleway/scaleway-cli). For example:
+
+```bash
+scw k8s cluster create name=qovery-test
+scw k8s pool create cluster-id= name=pool node-type=GP1_XL size=3
```
-Note: the ecr-credential-provider binary should be present for k3s to start. We will build it later.

-And create an IAM user with the following policy:
+You can find the [complete documentation here](https://www.scaleway.com/en/docs/containers/kubernetes/api-cli/creating-managing-kubernetes-lifecycle-cliv2/).
+
+
+
+
+Here is an example with K3d to deploy a local Kubernetes cluster (you can use k3s or any other Kubernetes distribution):
+
+```bash
+k3d cluster create --image rancher/k3s:v1.26.11-k3s2 --k3s-arg "--disable=traefik,metrics-server@server:0" \
+-v $(pwd)/registry_bin:/var/lib/rancher/credentialprovider/bin@server:0 \
+-v $(pwd)/config.yaml:/var/lib/rancher/credentialprovider/config.yaml@server:0
+```
+
+Note: please take a look at the registry information below to understand why we need to mount the registry folder.
+
+
+
+
+## Private registry
+
+Qovery requires a private registry to store built images and to mirror containers, in order to reduce the risk of images you still need being deleted by a third party ([more info here][docs.using-qovery.deployment.image-mirroring]).
+

+ Kubelet Credential Providers +

+
+To do so, Qovery advises to use the [Kubelet Credential Provider](https://kubernetes.io/blog/2022/12/22/kubelet-credential-providers/) as it's transparent for developers.
+
+
+
+
+If you're running the Qovery Self-Managed version and you are going to use the registry from the cloud provider itself, you don't have anything to do. The cloud providers already manage this part for you.
+
+
+
+
+If you want to use ECR on a non-EKS cluster, you will need to install the ECR Credential Provider on your Kubernetes cluster.
+
+You have to create an IAM user with the following policy, and generate an access key:
```json
{
    "Statement": [
@@ -115,7 +194,7 @@ And create an IAM user with the following policy:
}
```
-Then we create a `registry/config.yaml` file to configure the ECR credential provider, where you should set the AWS credentials:
+Then we create a `config.yaml` file to configure the ECR credential provider, where you should set the AWS credentials previously generated:
```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
@@ -138,14 +217,36 @@ providers:
          value: xxx
```
-Now we can run a local Kubernetes cluster:
+
+
+Depending on your Kubernetes installation (cloud provider, on-premises...), please refer to the official documentation to deploy the credential provider.
+
+
+
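To make the mechanics concrete, here is a minimal sketch of the exec protocol the kubelet uses with any credential provider: the kubelet runs the configured binary, writes a `CredentialProviderRequest` JSON document on its stdin, and expects a `CredentialProviderResponse` on its stdout. The script below is a hypothetical stand-in for illustration only; the real `ecr-credential-provider` exchanges the request for a short-lived ECR token instead of returning static values.

```shell
# Stand-in credential provider illustrating the kubelet exec protocol.
# Do NOT use this in a real cluster; it returns fake static credentials.
cat > /tmp/demo-credential-provider <<'EOF'
#!/bin/sh
# 1. Read the CredentialProviderRequest sent by the kubelet on stdin
cat > /dev/null   # a real provider parses the .image field from this JSON
# 2. Answer with a CredentialProviderResponse on stdout
cat <<'JSON'
{
  "kind": "CredentialProviderResponse",
  "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
  "cacheKeyType": "Registry",
  "cacheDuration": "12h",
  "auth": {
    "*.dkr.ecr.*.amazonaws.com": { "username": "AWS", "password": "demo-token" }
  }
}
JSON
EOF
chmod +x /tmp/demo-credential-provider

# Simulate the kubelet calling the provider during an image pull
echo '{"kind":"CredentialProviderRequest","apiVersion":"credentialprovider.kubelet.k8s.io/v1","image":"123456789012.dkr.ecr.us-east-2.amazonaws.com/app:v1"}' \
  | /tmp/demo-credential-provider
```

The `matchImages` globs in `config.yaml` decide which image pulls trigger this exchange, and `cacheDuration` controls how long the kubelet reuses the returned token.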
Example with K3d

+
+Here is an example with K3d to deploy a local Kubernetes cluster with the ECR credential provider.
+
+We first create the required folders and file for the binary:
+```
+mkdir -p registry_bin
+touch registry_bin/ecr-credential-provider
+chmod 755 registry_bin/ecr-credential-provider
+```
+Note: the ecr-credential-provider binary should be present for k3s to start. We will build it later.
+
+Now we can run a local Kubernetes cluster (update the path to the `config.yaml` file, and the Kubernetes [image tag version](https://hub.docker.com/r/rancher/k3s/tags)):
```bash
-k3d cluster create --k3s-arg "--disable=traefik,metrics-server@server:0" \
--v $(pwd)/registry/bin:/var/lib/rancher/credentialprovider/bin@server:0 \
--v $(pwd)/registry/config.yaml:/var/lib/rancher/credentialprovider/config.yaml@server:0
+k3d cluster create --image rancher/k3s:v1.26.11-k3s2 --k3s-arg "--disable=traefik,metrics-server@server:0" \
+-v $(pwd)/registry_bin:/var/lib/rancher/credentialprovider/bin@server:0 \
+-v $(pwd)/config.yaml:/var/lib/rancher/credentialprovider/config.yaml@server:0
```
-After a few seconds/minutes (depending on your network bandwidth), you should have a local Kubernetes cluster running. Deploy this job to build and deploy the ECR credential provider binary on k3d (`job.yaml`):
+

+
+Once the credential provider configuration has been deployed, we'll build the binary and deploy it on the cluster (note: it has to be present on all worker nodes).
+Simply deploy this job, which will do the work:
+
```yaml
apiVersion: batch/v1
kind: Job
@@ -180,20 +281,10 @@ spec:
            name: host
```

-```
-kubectl apply -f job.yaml
-```
-
-You should have see those pods running:
-```bash
-$ kubectl get po -A
-NAMESPACE     NAME                                             READY   STATUS      RESTARTS   AGE
-kube-system   local-path-provisioner-957fdf8bc-nwz5q           1/1     Running     0          112m
-kube-system   coredns-77ccd57875-jhcnk                         1/1     Running     0          112m
-default       cloud-provider-repository-binary-builder-4cvsv   0/1     Completed   0          112m
-```
+You can now move on to the Qovery Helm deployment.

-You're now ready to move on.
+
+
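Before moving on, it can help to sanity-check that the builder job finished and that the binary landed where the kubelet expects it. These commands are illustrative against a live cluster; the node container name assumes k3d's default `k3s-default` cluster name, so adjust it to yours:

```bash
# The builder job should show as Completed
kubectl get jobs,pods

# On a k3d node, the binary is dropped into the mounted credential provider path
docker exec k3d-k3s-default-server-0 ls -l /var/lib/rancher/credentialprovider/bin
```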
## Install Qovery

@@ -247,8 +338,6 @@ services:
    enabled: true
  qovery-shell-agent:
    enabled: true
-  qovery-engine:
-    enabled: true
ingress:
  ingress-nginx:
    enabled: true
@@ -320,6 +409,12 @@ helm upgrade --install -n qovery -f values-demo.yaml qovery

This is the configuration of Qovery itself. It is used by all Qovery components.

+
+
+**Do not share the jwtToken! Keep it in a safe place.** It is used to authenticate the cluster.
+
+
+
| Key                        | Required | Description                                                    | Default                   |
|----------------------------|----------|----------------------------------------------------------------|---------------------------|
| `qovery.clusterId`         | Yes      | The cluster ID. It is used to identify your cluster.           | `set-by-customer`         |
@@ -334,58 +429,61 @@ This is the configuration of Qovery itself. It is used by all Qovery components.
| `qovery.externalDnsPrefix` | No       | ExernalDNS TXT record prefix (required if ExternalDNS is set)  | `set-by-customer`         |
| `qovery.architectures`     | No       | Set cluster architectures (comma separated)                    | `AMD64`                   |

-
-
-**Do not share the jwtToken! Keep it in a safe place.** It is used to authenticate the cluster.
-
-
-
#### Qovery Cluster Agent

-
-
-Optional. If you don't want to use the cluster agent, you can disable it. You will not be able to see your logs and metrics in the Qovery dashboard.
+| | |
+|-----------------|----------|
+| **Required**    | Yes |
+| **If deployed** | The cluster agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to the Qovery control plane |
+| **If missing**  | The cluster will not report Kubernetes information to the Qovery control plane, so the Qovery console will report unknown status values |

-
-The cluster agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to Qovery control plane
-
-| Key                                                                     | Required | Description                          | Default           |
-|------------------------------------------------------------------------|----------|--------------------------------------|-------------------|
-| `services.qovery-cluster-agent.environmentVariables.GRPC_SERVER`        | Yes      | The gRPC server URL.                 | `set-by-customer` |
-| `services.qovery-cluster-agent.environmentVariables.CLUSTER_JWT_TOKEN`  | Yes      | The JWT token.                       | `set-by-customer` |
-| `services.qovery-cluster-agent.environmentVariables.CLUSTER_ID`         | Yes      | The cluster ID.                      | `set-by-customer` |
-| `services.qovery-cluster-agent.environmentVariables.ORGANIZATION_ID`    | Yes      | The organization ID.                 | `set-by-customer` |
+```yaml
+qovery-cluster-agent:
+  fullnameOverride: qovery-cluster-agent
+```

#### Qovery Shell Agent

-
-
-Optional. If you don't want to use the shell agent, you can disable it. You will not be able to open a secure remote shell to your application.
-
-
+| | |
+|-----------------|-----|
+| **Required**    | Yes |
+| **If deployed** | Used to give remote shell access to your Kubernetes pods (if the user is allowed by Qovery RBAC) with the Qovery CLI |
+| **If missing**  | No remote connection will be possible, and Qovery support will not be able to help you diagnose issues |

-The shell agent is responsible for giving you a secure remote shell access to your Kubernetes pods if you need it. E.g. when using `qovery shell` command.
-
-| Key                                                                    | Required | Description                         | Default           |
-|-----------------------------------------------------------------------|----------|-------------------------------------|-------------------|
-| `services.qovery-shell-agent.environmentVariables.GRPC_SERVER`         | Yes      | The gRPC server URL.                | `set-by-customer` |
-| `services.qovery-shell-agent.environmentVariables.CLUSTER_JWT_TOKEN`   | Yes      | The JWT token.                      | `set-by-customer` |
-| `services.qovery-shell-agent.environmentVariables.CLUSTER_ID`          | Yes      | The cluster ID.                     | `set-by-customer` |
-| `services.qovery-shell-agent.environmentVariables.ORGANIZATION_ID`     | Yes      | The organization ID.                | `set-by-customer` |
+```yaml
+qovery-shell-agent:
+  fullnameOverride: qovery-shell-agent
+```

### Ingress

-
-
-Optional. To be able to expose web services privately or publicly, an Ingress is required. If you don't need it, you can disable the service.
-
-
+| | |
+|-----------------|-------------------------------|
+| **Required**    | No (but strongly recommended) |
+| **If deployed** | Web services can be privately or publicly exposed |
+| **If missing**  | No web services will be exposed |

-Qovery uses [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/) by default to route traffic to your applications.
+Qovery uses [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/) by default to route traffic to your applications.

#### Nginx Ingress Controller

+
+
+
+
Here is the minimum override configuration to be used:

```yaml
@@ -404,7 +502,102 @@ ingress-nginx:
    publishService:
      enabled: true
```
+
+
+
+
+Here is an example with Nginx Ingress Controller on AWS with an NLB:
+
+```yaml
+ingress-nginx:
+  controller:
+    useComponentLabel: true
+    admissionWebhooks:
+      enabled: set-by-customer
+    metrics:
+      enabled: set-by-customer
+      serviceMonitor:
+        enabled: set-by-customer
+    config:
+      proxy-body-size: 100m
+      server-tokens: "false"
+    ingressClass: nginx-qovery
+    extraArgs:
+      default-ssl-certificate: "cert-manager/letsencrypt-acme-qovery-cert"
+    updateStrategy:
+      rollingUpdate:
+        maxUnavailable: 1
+
+    autoscaling:
+      enabled: true
+      minReplicas: set-by-customer
+      maxReplicas: set-by-customer
+      targetCPUUtilizationPercentage: set-by-customer
+
+    publishService:
+      enabled: true
+
+    service:
+      enabled: true
+      annotations:
+        service.beta.kubernetes.io/aws-load-balancer-type: nlb
+        external-dns.alpha.kubernetes.io/hostname: "set-by-customer"
+      externalTrafficPolicy: "Local"
+      sessionAffinity: ""
+      healthCheckNodePort: 0
+```
+
+
+
+
+Here is an example with Nginx Ingress Controller on Scaleway:
+
+```yaml
+ingress-nginx:
+  controller:
+    useComponentLabel: true
+    admissionWebhooks:
+      enabled: set-by-customer
+    metrics:
+      enabled: set-by-customer
+      serviceMonitor:
+        enabled: set-by-customer
+    config:
+      proxy-body-size: 100m
+      server-tokens: "false"
+      use-proxy-protocol: "true"
+    ingressClass: nginx-qovery
+    extraArgs:
+      default-ssl-certificate: "cert-manager/letsencrypt-acme-qovery-cert"
+    updateStrategy:
+      rollingUpdate:
+        maxUnavailable: 1
+    autoscaling:
+      enabled: true
+      minReplicas: set-by-customer
+      maxReplicas: set-by-customer
+      targetCPUUtilizationPercentage: set-by-customer
+    publishService:
+      enabled: true
+    service:
+      enabled: true
+      # https://github.com/scaleway/scaleway-cloud-controller-manager/blob/master/docs/loadbalancer-annotations.md
+      annotations:
+        service.beta.kubernetes.io/scw-loadbalancer-forward-port-algorithm: "leastconn"
+        service.beta.kubernetes.io/scw-loadbalancer-protocol-http: "false"
+        service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v1: "false"
+        service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
+        service.beta.kubernetes.io/scw-loadbalancer-health-check-type: tcp
+        service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"
+        service.beta.kubernetes.io/scw-loadbalancer-type: "set-by-customer"
+        external-dns.alpha.kubernetes.io/hostname: "set-by-customer"
+      externalTrafficPolicy: "Local"
+```
+
+
+
+
#### Other Ingress Controllers

@@ -412,20 +605,34 @@ Qovery supports other Ingress Controllers. Please contact us if you want to use

### DNS

-
-
-Optional but strongly recommended. Used to easily reach your applications with DNS records, even on private network
-
-
+| | |
+|-----------------|-------------------------------|
+| **Required**    | No (but strongly recommended) |
+| **If deployed** | Used to easily reach your applications with DNS records, even on a private network |
+| **If missing**  | You won't have easy access to your services with DNS names; you'll have to use IPs |

Qovery uses [External DNS](https://github.com/kubernetes-sigs/external-dns) to automatically configure DNS records for your applications.

-If you don't want or can't add your own DNS provider, you can use the Qovery DNS provider. It is a managed DNS provider by Qovery with a sub-domain given by Qovery for free.
+If you don't want to or can't add your own DNS provider, Qovery provides its own managed sub-domain DNS provider for free. You can then later add a custom DNS record (no matter the provider) pointing to your Qovery DNS sub-domain.

#### External DNS

+
+
+
+
-Here is one example with Qoery DNS provider:
+Here is one example with the Qovery DNS provider:
```yaml
external-dns:
  fullnameOverride: external-dns
@@ -444,17 +651,65 @@ external-dns:
    apiPort: 443
```

-### Logging
+

-
+

-Optional but strongly recommended. Promtail and Loki are not mandatory to use Qovery. However, it's required if you want to have log history and reduce Kubernetes API load.
+Here is one example with Cloudflare:
+```yaml
+external-dns:
+  fullnameOverride: external-dns
+  provider: cloudflare
+  domainFilters: [""]
+  # an owner ID is set to avoid conflicts in case of multiple Qovery clusters
+  txtOwnerId: *shortClusterId
+  # a prefix to help Qovery to debug in case of issues
+  txtPrefix: *externalDnsPrefix
+  # set the Cloudflare DNS provider configuration
+  cloudflare:
+    ## @param cloudflare.apiToken When using the Cloudflare provider, `CF_API_TOKEN` to set (optional)
+    apiToken: ""
+    ## @param cloudflare.apiKey When using the Cloudflare provider, `CF_API_KEY` to set (optional)
+    apiKey: ""
+    ## @param cloudflare.secretName When using the Cloudflare provider, it's the name of the secret containing cloudflare_api_token or cloudflare_api_key.
+    ## This ignores cloudflare.apiToken, and cloudflare.apiKey
+    secretName: ""
+    ## @param cloudflare.email When using the Cloudflare provider, `CF_API_EMAIL` to set (optional). Needed when using CF_API_KEY
+    email: ""
+    ## @param cloudflare.proxied When using the Cloudflare provider, enable the proxy feature (DDOS protection, CDN...) (optional)
+    proxied: true
+```

+
+
+
+
+### Logging
+
+| | |
+|-----------------|-------------------------------|
+| **Required**    | No (but strongly recommended) |
+| **If deployed** | Retrieve and store application log history |
+| **If missing**  | You'll have live logs, but you will miss log history for debugging purposes |

-Qovery uses [Loki](https://grafana.com/oss/loki/) to store your logs and [Promtail](https://grafana.com/docs/loki/latest/clients/promtail/) to collect your logs.
+Qovery uses [Loki](https://grafana.com/oss/loki/) to store your logs in an S3-compatible bucket and [Promtail](https://grafana.com/docs/loki/latest/clients/promtail/) to collect your logs.
#### Loki

+
+
+
+
+Here is a configuration **in memory (no persistence)** for Loki:

```yaml
loki:
@@ -517,8 +772,106 @@ loki:
    mountPath: /var/loki
```

+
+
+
+
+Here is a configuration example with AWS S3 as the storage backend:
+
+```yaml
+loki:
+  fullnameOverride: loki
+  kubectlImage:
+    registry: set-by-customer
+    repository: set-by-customer
+
+  loki:
+    image:
+      registry: set-by-customer
+      repository: set-by-customer
+    auth_enabled: false
+    commonConfig:
+      replication_factor: 1 # single binary version
+    ingester:
+      chunk_idle_period: 3m
+      chunk_block_size: 262144
+      chunk_retain_period: 1m
+      max_transfer_retries: 0
+      lifecycler:
+        ring:
+          kvstore:
+            store: memberlist
+          replication_factor: 1
+    memberlist:
+      abort_if_cluster_join_fails: false
+      bind_port: 7946
+      join_members:
+        - loki-headless.logging.svc:7946
+      max_join_backoff: 1m
+      max_join_retries: 10
+      min_join_backoff: 1s
+    limits_config:
+      ingestion_rate_mb: 20
+      ingestion_burst_size_mb: 30
+      enforce_metric_name: false
+      reject_old_samples: true
+      reject_old_samples_max_age: 168h
+      max_concurrent_tail_requests: 100 # default: 10
+      split_queries_by_interval: 15m # default: 15m
+      max_query_lookback: 12w # default: 0
+    compactor:
+      working_directory: /data/retention
+      shared_store: aws
+      compaction_interval: 10m
+      retention_enabled: set-by-customer
+      retention_delete_delay: 2h
+      retention_delete_worker_count: 150
+    table_manager:
+      retention_deletes_enabled: set-by-customer
+      retention_period: set-by-customer
+    schema_config:
+      configs:
+        - from: 2020-05-15
+          store: boltdb-shipper
+          object_store: s3
+          schema: v11
+          index:
+            prefix: index_
+            period: 24h
+        - from: 2023-06-01
+          store: boltdb-shipper
+          object_store: s3
+          schema: v12
+          index:
+            prefix: index_
+            period: 24h
+    storage:
+      bucketNames:
+        chunks:
+        ruler:
+        admin:
+      type: s3
+      s3:
+        s3:
+        region:
+        s3ForcePathStyle:
+        insecure:
+    storage_config:
+      boltdb_shipper:
+        active_index_directory: /data/loki/index
+        shared_store: s3
+        resync_interval: 5s
+        cache_location: /data/loki/boltdb-cache
+```
+
+
+
+
#### Promtail

+A configuration example compatible with all providers:
+
```yaml
promtail:
  fullnameOverride: promtail
@@ -538,16 +891,18 @@ promtail:

### Certificates

-
-
-Optional but strongly recommended. Cert-manager helps you to get TLS certificates through Let's Encrypt. Without it, you will not be able to automatically get TLS certificates.
-
-
+| | |
+|-----------------|-------------------------------|
+| **Required**    | No (but strongly recommended) |
+| **If deployed** | Cert-manager helps you to get TLS certificates through Let's Encrypt |
+| **If missing**  | Without it, you will not be able to automatically get TLS certificates |

Qovery uses [Cert Manager](https://cert-manager.io/) to automatically get TLS certificates for your applications.

#### Cert Manager

+Here is the minimal setup for all cloud providers:
+
```yaml
cert-manager:
  fullnameOverride: cert-manager
@@ -567,11 +922,27 @@ cert-manager:

#### Qovery Cert Manager Webhook

-
+| | |
+|-----------------|----------------------------------------------|
+| **Required**    | No (unless you're using the Qovery DNS Provider) |
+| **If deployed** | Required to get Let's Encrypt TLS if the Qovery DNS Provider is used |
+| **If missing**  | Without it, you will not be able to automatically get TLS certificates with the Qovery DNS Provider |

-Optional and only required if you're using Qovery DNS provider. Set this to get automatic TLS certificates by Qovery.

-
+
+
+A configuration example compatible with all providers:

```yaml
qovery-cert-manager-webhook:
@@ -584,8 +955,32 @@
    apiKey: *jwtToken
```

+
+
+
#### Cert Manager Configs

+| | |
+|-----------------|----------------------------------------------|
+| **Required**    | No |
+| **If deployed** | This is a helper to deploy the cert-manager config, but you can also set it manually |
+| **If missing**  | Installing Cert-manager is not enough; you have to configure it to get TLS working |
+
+
+
+
This is the configuration of Cert Manager itself. It is used by all Cert Manager components.

```yaml
@@ -607,17 +1002,117 @@ cert-manager-configs:
    apiKey: *jwtToken
```

+
+
+
+This is the configuration of Cert Manager itself. It is used by all Cert Manager components.
+
+```yaml
+cert-manager-configs:
+  fullnameOverride: cert-manager-configs
+  # set pdns to use Qovery DNS provider
+  externalDnsProvider: pdns
+  managedDns: [*domain]
+  acme:
+    letsEncrypt:
+      emailReport: *acmeEmailAddr
+      acmeUrl: https://acme-v02.api.letsencrypt.org/directory
+  provider:
+    # set the provider of your choice or use the Qovery DNS provider
+    pdns:
+      apiPort: 443
+      apiUrl: *qoveryDnsUrl
+      apiKey: *jwtToken
+```
+
+
+
+
+This is the configuration of Cert Manager itself. It is used by all Cert Manager components.
+
+```yaml
+cert-manager-configs:
+  fullnameOverride: cert-manager-configs
+  # set pdns to use Qovery DNS provider
+  externalDnsProvider: pdns
+  managedDns: [*domain]
+  acme:
+    letsEncrypt:
+      emailReport: *acmeEmailAddr
+      acmeUrl: https://acme-v02.api.letsencrypt.org/directory
+  provider:
+    cloudflare:
+      apiToken: "set your Cloudflare API token here"
+      email: "set your Cloudflare email here"
+```
+
+
+
+
Qovery uses [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) to collect metrics from your Kubernetes cluster and scale your applications automatically based on custom metrics.

## Observability

### Metrics Server

-
+| | |
+|-----------------|-------------------------------|
+| **Required**    | No (but strongly recommended) |
+| **If deployed** | Mandatory if you want to retrieve pod metrics for the Qovery agent and if you want to be able to use horizontal pod autoscaling |
+| **If missing**  | No HPA and no application metrics in the Qovery console |
+
+
+
+
-Optional but strongly recommended. 
Without metrics server, you will not be able to scale your applications automatically and will not have metrics information in the Qovery dashboard. +```yaml +metrics-server: + fullnameOverride: metrics-server + apiService: + create: true + + updateStrategy: + type: set-by-customer + + resources: + limits: + cpu: set-by-customer + memory: set-by-customer + requests: + cpu: set-by-customer + memory: set-by-customer +``` - + + + + +Nothing needs to be deployed, as GCP already provides a managed metrics server. + + + + + +Nothing needs to be deployed, as Scaleway already provides a managed metrics server. + + + + ```yaml metrics-server: @@ -632,6 +1127,9 @@ metrics-server: create: false ``` + + + ## FAQ ### I have a non-covered use case. What should I do? @@ -646,6 +1144,7 @@ At the momement, you can't. But please [contact us][urls.qovery_contact_us] to d [docs.using-qovery.configuration.cloud-service-provider.amazon-web-services]: /docs/using-qovery/configuration/cloud-service-provider/amazon-web-services/ [docs.using-qovery.configuration.cloud-service-provider.google-cloud-platform]: /docs/using-qovery/configuration/cloud-service-provider/google-cloud-platform/ [docs.using-qovery.configuration.cloud-service-provider.microsoft-azure]: /docs/using-qovery/configuration/cloud-service-provider/microsoft-azure/ +[docs.using-qovery.deployment.image-mirroring]: /docs/using-qovery/deployment/image-mirroring/ [guides.provider.guide-kubernetes]: /guides/provider/guide-kubernetes/ [urls.helm]: https://helm.sh [urls.qovery_contact_us]: https://www.qovery.com/contact diff --git a/website/docs/using-qovery/configuration/provider/kubernetes.md.erb b/website/docs/using-qovery/configuration/provider/kubernetes.md.erb index 9eb58eb79e..51668484a2 100644 --- a/website/docs/using-qovery/configuration/provider/kubernetes.md.erb +++ b/website/docs/using-qovery/configuration/provider/kubernetes.md.erb @@ -20,7 +20,7 @@ This section is for Kubernetes power-users. 
If you are not familiar with Kuberne

-Qovery Self-Managed or BYOK (Bring Your Own Kubernetes) is a self-hosted version of Qovery. It allows you to install Qovery on your own Kubernetes cluster.
+Qovery Self-Managed (or BYOK: Bring Your Own Kubernetes) is a self-hosted version of Qovery. It allows you to install Qovery on your own Kubernetes cluster.

Read [this article](https://www.qovery.com/blog/kubernetes-managed-by-qovery-vs-self-managed-byok) to better understand the difference with the Managed Kubernetes by Qovery. In a nutshell, Qovery Managed/BYOK is for Kubernetes experts who want to manage their own Kubernetes cluster. In this version, Qovery does not manage the Kubernetes cluster for you.

@@ -36,9 +36,9 @@ They are two types of components:

Qovery components:
- Qovery Control Plane: the Qovery Control Plane is the brain of Qovery. It is responsible for managing your applications and providing the API to interact with Qovery.
-- Qovery Engine: the Qovery Engine is responsible for managing your applications on your Kubernetes cluster. It is installed on your Kubernetes cluster.
-- Qovery Cluster Agent (optional): the Qovery Cluster Agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to Qovery control plane.
-- Qovery Shell Agent (optional): the Qovery Shell Agent is responsible for giving you a secure remote shell access to your Kubernetes pods if you need it. E.g. when using `qovery shell` command.
+- Qovery Cluster Agent (mandatory): the Qovery Cluster Agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to the Qovery control plane.
+- Qovery Shell Agent (mandatory): the Qovery Shell Agent is responsible for giving you secure remote shell access to your Kubernetes pods if you need it, e.g. when using the `qovery shell` command.
+- Qovery Engine (optional): the Qovery Engine is responsible for managing your application deployments on your Kubernetes cluster. It can either run on the Qovery side or be installed on your Kubernetes cluster.

Third-party components:
- NGINX Ingress Controller (optional)
@@ -48,7 +48,7 @@ Third-party components:
- Cert Manager (optional)
- ...

-You can chose what you want to install and manage, and you will have a description of what services are usedi, and responsible for. You can disable them if you don't want to use them. And you can even install other components if you want to.
+You can choose what you want to install and manage, and you will get a description of what each service is used for and responsible for. You can disable the ones you don't want to use, and you can even install other components if you want to.

## What's the requirements?

@@ -60,38 +60,114 @@ Qovery requires a Kubernetes cluster with the following requirements:
- 4 GB RAM
- 20 GB disk space
- Being able to access to the Internet
+- A private registry

-## Run local demo infrastructure (optional)
-
+Here are some examples of Kubernetes distributions that can be used with Qovery. **This is a non-exhaustive list**.
+
+

-This local demo infrastructure is only for testing purpose, to quickly test Qovery. It is not supported by Qovery for production workloads. If you already have a managed Kubernetes cluster like EKS, you can skip this part.
+These examples are not recommendations! They are simply examples of what can be installed the fastest way.

-First you will need some binaries to run the demo infrastructure locally:
-* [docker](https://www.docker.com/): Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.
-* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl): Kubernetes command-line tool.
-* [k3d](https://k3d.io/): k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker.
-* [Helm](https://helm.sh): Helm is a package manager for Kubernetes.
+
-Qovery requires a container registry to store its images.
+
-
+To create a Kubernetes cluster on AWS, the simplest way is to use the [eksctl binary](https://eksctl.io/installation/). For example:
-We will use [ECR](https://aws.amazon.com/ecr/) to have a private repository for this demo, but you can chose any kind of registry (docker.io, [quay.io](https://quay.io/), GCR...).
+```bash
+eksctl create cluster --region=us-east-2 --zones=us-east-2a,us-east-2b,us-east-2d
+```
+
-
+
+
+To create a Kubernetes cluster on GCP, the simplest way is to use the [gcloud binary](https://cloud.google.com/sdk/docs/install). For example:
-We have to use a binary for ECR authentication and token rotation. So we create the prerequired folders and file for the binary:
+```bash
+gcloud beta container --project "qovery-gcp-tests" \
+    clusters create-auto "qovery-test" \
+    --region "us-east5" \
+    --release-channel "stable" \
+    --network "projects/qovery-gcp-tests/global/networks/default" \
+    --subnetwork "projects/qovery-gcp-tests/regions/us-east5/subnetworks/default" \
+    --cluster-ipv4-cidr "/16" \
+    --services-ipv4-cidr "10.0.0.0/16"
```
-mkdir -p registry/bin
-touch registry/bin/ecr-credential-provider
-chmod 755 registry/bin/ecr-credential-provider
+
+
+
+
+To create a Kubernetes cluster on Scaleway, the simplest way is to use the [scw binary](https://github.com/scaleway/scaleway-cli). For example:
+
+```bash
+scw k8s cluster create name=qovery-test
+scw k8s pool create cluster-id= name=pool node-type=GP1_XL size=3
```
-Note: the ecr-credential-provider binary should be present for k3s to start. We will build it later.
-And create an IAM user with the following policy:
+You can find the [complete documentation here](https://www.scaleway.com/en/docs/containers/kubernetes/api-cli/creating-managing-kubernetes-lifecycle-cliv2/).
+
+
+
+
+
+Here is an example with K3d to deploy a local Kubernetes cluster (you can use k3s or any other Kubernetes distribution):
+
+```bash
+k3d cluster create --image rancher/k3s:v1.26.11-k3s2 --k3s-arg "--disable=traefik,metrics-server@server:0" \
+-v $(pwd)/registry_bin:/var/lib/rancher/credentialprovider/bin@server:0 \
+-v $(pwd)/config.yaml:/var/lib/rancher/credentialprovider/config.yaml@server:0
+```
+
+Note: please take a look at the registry information below to understand why we need to mount the registry folder.
+
+
+
+## Private registry
+
+Qovery requires a private registry to store built images and to mirror containers, protecting you against third parties deleting images while you still need them ([more info here][docs.using-qovery.deployment.image-mirroring]).
+
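For illustration, on AWS such a private mirror repository could be created with the AWS CLI. The repository name `qovery-mirror` and the region are placeholders, not values mandated by Qovery:

```bash
# Create a private ECR repository that can hold mirrored images
aws ecr create-repository \
  --repository-name qovery-mirror \
  --region us-east-2

# Verify it exists
aws ecr describe-repositories --repository-names qovery-mirror --region us-east-2
```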

+ Kubelet Credential Providers +

+
+To do so, Qovery advises using a [Kubelet Credential Provider](https://kubernetes.io/blog/2022/12/22/kubelet-credential-providers/), as it's transparent for developers.
+
+
+
+
+
+If you're running the Qovery Self-Managed version and you plan to use your cloud provider's own registry, there is nothing to do: the cloud provider already manages this part for you.
+
+
+
+
+
+If you want to use ECR on a non-EKS cluster, you will need to install the ECR Credential Provider on your Kubernetes cluster.
+
+You have to create an IAM user with the following policy, and generate an access key:
```json
{
  "Statement": [
@@ -107,7 +183,7 @@ And create an IAM user with the following policy:
}
```
-Then we create a `registry/config.yaml` file to configure the ECR credential provider, where you should set the AWS credentials:
+Then we create a `config.yaml` file to configure the ECR credential provider, where you should set the AWS credentials previously generated:
```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
@@ -130,14 +206,36 @@ providers:
        value: xxx
```
-Now we can run a local Kubernetes cluster:
+
+
+Depending on your Kubernetes installation (cloud provider, on-premises...), please refer to the official documentation to deploy the credential provider.
+
+
+
Example with K3d
+
+Here is an example with K3d to deploy a local Kubernetes cluster with the ECR credential provider.
+
+We first create the required folders and file for the binary:
+```
+mkdir -p registry_bin
+touch registry_bin/ecr-credential-provider
+chmod 755 registry_bin/ecr-credential-provider
+```
+Note: the ecr-credential-provider binary must be present for k3s to start. We will build it later.
+
+Now we can run a local Kubernetes cluster (update the path to the `config.yaml` file, and the Kubernetes [image tag version](https://hub.docker.com/r/rancher/k3s/tags)):
```bash
-k3d cluster create --k3s-arg "--disable=traefik,metrics-server@server:0" \
--v $(pwd)/registry/bin:/var/lib/rancher/credentialprovider/bin@server:0 \
--v $(pwd)/registry/config.yaml:/var/lib/rancher/credentialprovider/config.yaml@server:0
+k3d cluster create --image rancher/k3s:v1.26.11-k3s2 --k3s-arg "--disable=traefik,metrics-server@server:0" \
+-v $(pwd)/registry_bin:/var/lib/rancher/credentialprovider/bin@server:0 \
+-v $(pwd)/config.yaml:/var/lib/rancher/credentialprovider/config.yaml@server:0
```
-After a few seconds/minutes (depending on your network bandwidth), you should have a local Kubernetes cluster running. Deploy this job to build and deploy the ECR credential provider binary on k3d (`job.yaml`):

+
+Once the Credential Provider configuration has been deployed, we'll build the binary and deploy it on the cluster (note: it has to be present on all worker nodes).
+Simply deploy this job, which will do the work:
+
```yaml
apiVersion: batch/v1
kind: Job
@@ -172,20 +270,10 @@ spec:
          name: host
```

-```
-kubectl apply -f job.yaml
-```
-
-You should have see those pods running:
-```bash
-$ kubectl get po -A
-NAMESPACE     NAME                                             READY   STATUS      RESTARTS   AGE
-kube-system   local-path-provisioner-957fdf8bc-nwz5q           1/1     Running     0          112m
-kube-system   coredns-77ccd57875-jhcnk                         1/1     Running     0          112m
-default       cloud-provider-repository-binary-builder-4cvsv   0/1     Completed   0          112m
-```
+You can now move on to the Qovery Helm deployment.

-You're now ready to move on.
+
+
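Assuming the job manifest above was saved as `job.yaml`, it can be applied and monitored like this (the job name below is an assumption — adjust it to the manifest's `metadata.name`):

```bash
kubectl apply -f job.yaml

# Wait for the binary to be built and copied onto the node
kubectl wait --for=condition=complete --timeout=10m \
  job/cloud-provider-repository-binary-builder

kubectl get pods
```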
## Install Qovery

@@ -239,8 +327,6 @@ services:
    enabled: true
  qovery-shell-agent:
    enabled: true
-  qovery-engine:
-    enabled: true
  ingress:
    ingress-nginx:
      enabled: true
@@ -312,6 +398,12 @@ helm upgrade --install -n qovery -f values-demo.yaml qovery

This is the configuration of Qovery itself. It is used by all Qovery components.

+
+
+**Do not share the jwtToken! Keep it in a safe place.** It is used to authenticate the cluster.
+
+
+
| Key | Required | Description | Default |
|----------------------------|----------|----------------------------------------------------------------|---------------------------|
| `qovery.clusterId` | Yes | The cluster ID. It is used to identify your cluster. | `set-by-customer` |
@@ -326,58 +418,61 @@ This is the configuration of Qovery itself. It is used by all Qovery components.
| `qovery.externalDnsPrefix` | No | ExternalDNS TXT record prefix (required if ExternalDNS is set) | `set-by-customer` |
| `qovery.architectures` | No | Set cluster architectures (comma separated) | `AMD64` |

-
-
-**Do not share the jwtToken! Keep it in a safe place.** It is used to authenticate the cluster.
-
-
-
#### Qovery Cluster Agent

-
-
-Optional. If you don't want to use the cluster agent, you can disable it. You will not be able to see your logs and metrics in the Qovery dashboard.
-
-
+| | |
+|-----------------|----------|
+| **Required** | Yes |
+| **If deployed** | The cluster agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to the Qovery control plane |
+| **If missing** | The cluster will not report Kubernetes information to the Qovery control plane, so the Qovery console will show unknown status values |

-The cluster agent is responsible for securely forwarding logs and metrics from your Kubernetes cluster to Qovery control plane.
-| Key | Required | Description | Default |
-|------------------------------------------------------------------------|----------|--------------------------------------|-------------------|
-| `services.qovery-cluster-agent.environmentVariables.GRPC_SERVER` | Yes | The gRPC server URL. | `set-by-customer` |
-| `services.qovery-cluster-agent.environmentVariables.CLUSTER_JWT_TOKEN` | Yes | The JWT token. | `set-by-customer` |
-| `services.qovery-cluster-agent.environmentVariables.CLUSTER_ID` | Yes | The cluster ID. | `set-by-customer` |
-| `services.qovery-cluster-agent.environmentVariables.ORGANIZATION_ID` | Yes | The organization ID. | `set-by-customer` |

+```yaml
+qovery-cluster-agent:
+  fullnameOverride: qovery-cluster-agent
+```

#### Qovery Shell Agent

-
+| | |
+|-----------------|-----|
+| **Required** | Yes |
+| **If deployed** | Used to give remote shell access to your Kubernetes pods (if the user is allowed by Qovery RBAC) with the Qovery CLI |
+| **If missing** | No remote connection will be possible, and Qovery support will not be able to help you diagnose issues |

-Optional. If you don't want to use the shell agent, you can disable it. You will not be able to open a secure remote shell to your application.
-
-
-
-The shell agent is responsible for giving you a secure remote shell access to your Kubernetes pods if you need it. E.g. when using `qovery shell` command.
-
-| Key | Required | Description | Default |
-|-----------------------------------------------------------------------|----------|-------------------------------------|-------------------|
-| `services.qovery-shell-agent.environmentVariables.GRPC_SERVER` | Yes | The gRPC server URL. | `set-by-customer` |
-| `services.qovery-shell-agent.environmentVariables.CLUSTER_JWT_TOKEN` | Yes | The JWT token. | `set-by-customer` |
-| `services.qovery-shell-agent.environmentVariables.CLUSTER_ID` | Yes | The cluster ID.
| `set-by-customer` |
-| `services.qovery-shell-agent.environmentVariables.ORGANIZATION_ID` | Yes | The organization ID. | `set-by-customer` |

+```yaml
+qovery-shell-agent:
+  fullnameOverride: qovery-shell-agent
+```

### Ingress

-
-
-Optional. To be able to expose web services privately or publicly, an Ingress is required. If you don't need it, you can disable the service.
-
-
+| | |
+|-----------------|-------------------------------|
+| **Required** | No (but strongly recommended) |
+| **If deployed** | Web services can be privately or publicly exposed |
+| **If missing** | No web services will be exposed |

-Qovery uses [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/) by default to route traffic to your applications.
+Qovery uses [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/) by default to route traffic to your applications.

#### Nginx Ingress Controller

+
+
+
+
Here is the minimum override configuration to be used:

```yaml
@@ -396,7 +491,102 @@ ingress-nginx:
    publishService:
      enabled: true
```
+
+
+
+
+Here is an example with Nginx Ingress Controller on AWS with NLB:
+
+```yaml
+ingress-nginx:
+  controller:
+    useComponentLabel: true
+    admissionWebhooks:
+      enabled: set-by-customer
+    metrics:
+      enabled: set-by-customer
+      serviceMonitor:
+        enabled: set-by-customer
+    config:
+      proxy-body-size: 100m
+      server-tokens: "false"
+    ingressClass: nginx-qovery
+    extraArgs:
+      default-ssl-certificate: "cert-manager/letsencrypt-acme-qovery-cert"
+    updateStrategy:
+      rollingUpdate:
+        maxUnavailable: 1
+
+    autoscaling:
+      enabled: true
+      minReplicas: set-by-customer
+      maxReplicas: set-by-customer
+      targetCPUUtilizationPercentage: set-by-customer
+
+    publishService:
+      enabled: true
+
+    service:
+      enabled: true
+      annotations:
+        service.beta.kubernetes.io/aws-load-balancer-type: nlb
+        external-dns.alpha.kubernetes.io/hostname: "set-by-customer"
+      externalTrafficPolicy: "Local"
+      sessionAffinity: ""
      healthCheckNodePort: 0
+```
+
+
+
+
+
+Here is an example with Nginx Ingress Controller on Scaleway:
+
+```yaml
+ingress-nginx:
+  controller:
+    useComponentLabel: true
+    admissionWebhooks:
+      enabled: set-by-customer
+    metrics:
+      enabled: set-by-customer
+      serviceMonitor:
+        enabled: set-by-customer
+    config:
+      proxy-body-size: 100m
+      server-tokens: "false"
+      use-proxy-protocol: "true"
+    ingressClass: nginx-qovery
+    extraArgs:
+      default-ssl-certificate: "cert-manager/letsencrypt-acme-qovery-cert"
+    updateStrategy:
+      rollingUpdate:
+        maxUnavailable: 1
+    autoscaling:
+      enabled: true
+      minReplicas: set-by-customer
+      maxReplicas: set-by-customer
+      targetCPUUtilizationPercentage: set-by-customer
+    publishService:
+      enabled: true
+    service:
+      enabled: true
+      # https://github.com/scaleway/scaleway-cloud-controller-manager/blob/master/docs/loadbalancer-annotations.md
+      annotations:
+        service.beta.kubernetes.io/scw-loadbalancer-forward-port-algorithm: "leastconn"
+        service.beta.kubernetes.io/scw-loadbalancer-protocol-http: "false"
+        service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v1: "false"
+        service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
+        service.beta.kubernetes.io/scw-loadbalancer-health-check-type: tcp
+        service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"
+        service.beta.kubernetes.io/scw-loadbalancer-type: "set-by-customer"
+        external-dns.alpha.kubernetes.io/hostname: "set-by-customer"
+      externalTrafficPolicy: "Local"
+```
+
+
+

#### Other Ingress Controllers

@@ -404,20 +594,34 @@ Qovery supports other Ingress Controllers. Please contact us if you want to use

### DNS

-
-
-Optional but strongly recommended. Used to easily reach your applications with DNS records, even on a private network.
-
-
+| | |
+|-----------------|-------------------------------|
+| **Required** | No (but strongly recommended) |
+| **If deployed** | Used to easily reach your applications with DNS records, even on a private network |
+| **If missing** | You will not have easy DNS-name access to your services; you'll have to use IPs |

Qovery uses [External DNS](https://github.com/kubernetes-sigs/external-dns) to automatically configure DNS records for your applications.

-If you don't want or can't add your own DNS provider, you can use the Qovery DNS provider. It is a managed DNS provider by Qovery with a sub-domain given by Qovery for free.
+If you don't want to or can't add your own DNS provider, Qovery offers its own managed sub-domain DNS provider for free. You'll then be able to add your own custom DNS record later (with any provider) pointing to your Qovery DNS sub-domain.

#### External DNS

+
+
+
+
+Here is one example with the Qovery DNS provider:

```yaml
external-dns:
  fullnameOverride: external-dns
@@ -436,17 +640,65 @@ external-dns:
    apiPort: 443
```

-### Logging
-
+

-
-Optional but strongly recommended. Promtail and Loki are not mandatory to use Qovery. However, it's required if you want to have log history and reduce Kubernetes API load.
+Here is one example with Cloudflare:
+```yaml
+external-dns:
+  fullnameOverride: external-dns
+  provider: cloudflare
+  domainFilters: [""]
+  # an owner ID is set to avoid conflicts in case of multiple Qovery clusters
+  txtOwnerId: *shortClusterId
+  # a prefix to help Qovery to debug in case of issues
+  txtPrefix: *externalDnsPrefix
+  # set the Cloudflare DNS provider configuration
+  cloudflare:
+    ## @param cloudflare.apiToken When using the Cloudflare provider, `CF_API_TOKEN` to set (optional)
+    apiToken: ""
+    ## @param cloudflare.apiKey When using the Cloudflare provider, `CF_API_KEY` to set (optional)
+    apiKey: ""
+    ## @param cloudflare.secretName When using the Cloudflare provider, it's the name of the secret containing cloudflare_api_token or cloudflare_api_key.
+    ## This ignores cloudflare.apiToken, and cloudflare.apiKey
+    secretName: ""
+    ## @param cloudflare.email When using the Cloudflare provider, `CF_API_EMAIL` to set (optional). Needed when using CF_API_KEY
+    email: ""
+    ## @param cloudflare.proxied When using the Cloudflare provider, enable the proxy feature (DDOS protection, CDN...) (optional)
+    proxied: true
+```

-
+
+
+
+### Logging
+
+| | |
+|-----------------|-------------------------------|
+| **Required** | No (but strongly recommended) |
+| **If deployed** | Retrieves and stores application log history |
+| **If missing** | You'll have live logs, but you will miss log history for debugging purposes |

-Qovery uses [Loki](https://grafana.com/oss/loki/) to store your logs and [Promtail](https://grafana.com/docs/loki/latest/clients/promtail/) to collect your logs.
+Qovery uses [Loki](https://grafana.com/oss/loki/) to store your logs in an S3-compatible bucket and [Promtail](https://grafana.com/docs/loki/latest/clients/promtail/) to collect your logs.
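Once Loki is deployed (configuration below), a quick way to verify it is up is to port-forward its HTTP port and query the `/ready` endpoint. The service name and namespace here are assumptions — adjust them to wherever Loki is installed in your cluster:

```bash
# Forward Loki's HTTP port locally, then query its readiness endpoint
kubectl -n qovery port-forward svc/loki 3100:3100 &
sleep 2
curl -fsS http://localhost:3100/ready
kill %1
```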
#### Loki

+
+
+
+
+Here is an **in-memory (no persistence)** configuration for Loki:

```yaml
loki:
@@ -509,8 +761,106 @@ loki:
        mountPath: /var/loki
```

+
+
+
+
+Here is a configuration example with AWS S3 as storage backend:
+
+```yaml
+loki:
+  fullnameOverride: loki
+  kubectlImage:
+    registry: set-by-customer
+    repository: set-by-customer
+
+  loki:
+    image:
+      registry: set-by-customer
+      repository: set-by-customer
+    auth_enabled: false
+    commonConfig:
+      replication_factor: 1 # single binary version
+    ingester:
+      chunk_idle_period: 3m
+      chunk_block_size: 262144
+      chunk_retain_period: 1m
+      max_transfer_retries: 0
+      lifecycler:
+        ring:
+          kvstore:
+            store: memberlist
+          replication_factor: 1
+    memberlist:
+      abort_if_cluster_join_fails: false
+      bind_port: 7946
+      join_members:
+        - loki-headless.logging.svc:7946
+      max_join_backoff: 1m
+      max_join_retries: 10
+      min_join_backoff: 1s
+    limits_config:
+      ingestion_rate_mb: 20
+      ingestion_burst_size_mb: 30
+      enforce_metric_name: false
+      reject_old_samples: true
+      reject_old_samples_max_age: 168h
+      max_concurrent_tail_requests: 100 # default: 10
+      split_queries_by_interval: 15m # default: 15m
+      max_query_lookback: 12w # default: 0
+    compactor:
+      working_directory: /data/retention
+      shared_store: aws
+      compaction_interval: 10m
+      retention_enabled: set-by-customer
+      retention_delete_delay: 2h
+      retention_delete_worker_count: 150
+    table_manager:
+      retention_deletes_enabled: set-by-customer
+      retention_period: set-by-customer
+    schema_config:
+      configs:
+        - from: 2020-05-15
+          store: boltdb-shipper
+          object_store: s3
+          schema: v11
+          index:
+            prefix: index_
+            period: 24h
+        - from: 2023-06-01
+          store: boltdb-shipper
+          object_store: s3
+          schema: v12
+          index:
+            prefix: index_
+            period: 24h
+    storage:
+      bucketNames:
+        chunks:
+        ruler:
+        admin:
+      type: s3
+      s3:
+        s3:
+        region:
+        s3ForcePathStyle:
+        insecure:
+    storage_config:
+      boltdb_shipper:
+        active_index_directory: /data/loki/index
+        shared_store: s3
+        resync_interval: 5s
+        cache_location:
/data/loki/boltdb-cache
+```
+
+
+
+
+
#### Promtail

+A configuration example compatible with all providers:
+
```yaml
promtail:
  fullnameOverride: promtail
@@ -530,16 +880,18 @@ promtail:

### Certificates

-
-
-Optional but strongly recommended. Cert-manager helps you to get TLS certificates through Let's Encrypt. Without it, you will not be able to automatically get TLS certificates.
-
-
+| | |
+|-----------------|-------------------------------|
+| **Required** | No (but strongly recommended) |
+| **If deployed** | Cert-manager helps you to get TLS certificates through Let's Encrypt |
+| **If missing** | Without it, you will not be able to automatically get TLS certificates |

Qovery uses [Cert Manager](https://cert-manager.io/) to automatically get TLS certificates for your applications.

#### Cert Manager

+Here is the minimal setup for all cloud providers:
+
```yaml
cert-manager:
  fullnameOverride: cert-manager
@@ -559,11 +911,27 @@ cert-manager:

#### Qovery Cert Manager Webhook

-
+| | |
+|-----------------|--------------------------------------------------|
+| **Required** | No (unless you're using the Qovery DNS Provider) |
+| **If deployed** | Required to get Let's Encrypt TLS certificates if the Qovery DNS Provider is used |
+| **If missing** | Without it, you will not be able to automatically get TLS certificates with the Qovery DNS Provider |

-Optional and only required if you're using Qovery DNS provider. Set this to get automatic TLS certificates by Qovery.
+

-
+
+
+A configuration example compatible with all providers:

```yaml
qovery-cert-manager-webhook:
@@ -576,8 +944,32 @@ qovery-cert-manager-webhook:
    apiKey: *jwtToken
```

+
+
+
#### Cert Manager Configs

+| | |
+|-----------------|----------------------------------------------|
+| **Required** | No |
+| **If deployed** | This is a helper to deploy the cert-manager config.
But you can set it manually |
+| **If missing** | Installing Cert-manager is not enough; you have to configure it to get TLS working |
+
+
+
+
This is the configuration of Cert Manager itself. It is used by all Cert Manager components.

```yaml
@@ -599,17 +991,117 @@ cert-manager-configs:
    apiKey: *jwtToken
```

+
+
+
+
+This is the configuration of Cert Manager itself. It is used by all Cert Manager components.
+
+```yaml
+cert-manager-configs:
+  fullnameOverride: cert-manager-configs
+  # set pdns to use Qovery DNS provider
+  externalDnsProvider: pdns
+  managedDns: [*domain]
+  acme:
+    letsEncrypt:
+      emailReport: *acmeEmailAddr
+      acmeUrl: https://acme-v02.api.letsencrypt.org/directory
+  provider:
+    # set the provider of your choice or use the Qovery DNS provider
+    pdns:
+      apiPort: 443
+      apiUrl: *qoveryDnsUrl
+      apiKey: *jwtToken
+```
+
+
+
+
+
+This is the configuration of Cert Manager itself. It is used by all Cert Manager components.
+
+```yaml
+cert-manager-configs:
+  fullnameOverride: cert-manager-configs
+  # use the Cloudflare DNS provider
+  externalDnsProvider: cloudflare
+  managedDns: [*domain]
+  acme:
+    letsEncrypt:
+      emailReport: *acmeEmailAddr
+      acmeUrl: https://acme-v02.api.letsencrypt.org/directory
+  provider:
+    cloudflare:
+      apiToken: "set your Cloudflare API token here"
+      email: "set your Cloudflare email here"
+```
+
+
+
+
+
## Observability

### Metrics Server

Qovery uses [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) to collect metrics from your Kubernetes cluster and scale your applications automatically based on custom metrics.

-
+| | |
+|-----------------|-------------------------------|
+| **Required** | No (but strongly recommended) |
+| **If deployed** | Needed to retrieve pod metrics for the Qovery agent and to use horizontal pod autoscaling |
+| **If missing** | No HPA and no application metrics in the Qovery console |
+
+
+
+
-Optional but strongly recommended.
Without metrics server, you will not be able to scale your applications automatically and will not have metrics information in the Qovery dashboard. +```yaml +metrics-server: + fullnameOverride: metrics-server + apiService: + create: true + + updateStrategy: + type: set-by-customer + + resources: + limits: + cpu: set-by-customer + memory: set-by-customer + requests: + cpu: set-by-customer + memory: set-by-customer +``` - + + + + +Nothing needs to be deployed, as GCP already provides a managed metrics server. + + + + + +Nothing needs to be deployed, as Scaleway already provides a managed metrics server. + + + + ```yaml metrics-server: @@ -624,6 +1116,9 @@ metrics-server: create: false ``` + + + ## FAQ ### I have a non-covered use case. What should I do? diff --git a/website/docs/using-qovery/integration.md b/website/docs/using-qovery/integration.md index 0fd2ecde14..194d2bab7b 100644 --- a/website/docs/using-qovery/integration.md +++ b/website/docs/using-qovery/integration.md @@ -1,5 +1,5 @@ --- -last_modified_on: "2023-11-30" +last_modified_on: "2023-12-20" title: Integrations description: "Integrate Qovery with your existing tools and workflow" sidebar_label: hidden diff --git a/website/docs/using-qovery/troubleshoot.md b/website/docs/using-qovery/troubleshoot.md index f8c588bd7e..204aec20f9 100644 --- a/website/docs/using-qovery/troubleshoot.md +++ b/website/docs/using-qovery/troubleshoot.md @@ -1,5 +1,5 @@ --- -last_modified_on: "2023-11-02" +last_modified_on: "2023-12-22" title: Troubleshoot description: "Everything you need to troubleshoot your application with Qovery" sidebar_label: hidden diff --git a/website/guides/advanced/helm-chart.md b/website/guides/advanced/helm-chart.md index b176080b08..c3150695bd 100644 --- a/website/guides/advanced/helm-chart.md +++ b/website/guides/advanced/helm-chart.md @@ -1,5 +1,5 @@ --- -last_modified_on: "2023-06-07" +last_modified_on: "2023-12-20" $schema: "/.meta/.schemas/guides.json" title: Helm Charts description: Learn 
how to deploy Helm charts with Qovery diff --git a/website/guides/advanced/microservices.md b/website/guides/advanced/microservices.md index 7a262946ff..81c1f2b4d4 100644 --- a/website/guides/advanced/microservices.md +++ b/website/guides/advanced/microservices.md @@ -1,5 +1,5 @@ --- -last_modified_on: "2023-06-05" +last_modified_on: "2023-12-20" $schema: "/.meta/.schemas/guides.json" title: Microservices description: How to deploy microservices with Qovery diff --git a/website/static/img/configuration/provider/kubelet-credential-providers-plugin.png b/website/static/img/configuration/provider/kubelet-credential-providers-plugin.png new file mode 100644 index 0000000000..2aeedb738f Binary files /dev/null and b/website/static/img/configuration/provider/kubelet-credential-providers-plugin.png differ