
# Helm Chart Deployment Steps for Trusted Workload Placement - Cloud Service Provider Use Case

A collection of helm charts for the Trusted Workload Placement - Cloud Service Provider use case.

## Deployment diagram

*(Deployment diagram image: K8s Deployment-fsws)*

## Getting Started

The steps below guide you through installing the isecl-helm charts on a Kubernetes cluster.

### Pre-requisites

* A non-managed Kubernetes cluster, up and running
* Helm 3 installed:

  ```shell
  curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
  chmod 700 get_helm.sh
  ./get_helm.sh
  ```

* For building container images, refer here for instructions
* Set up NFS; refer instructions for setting up and configuring the NFS server
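Before proceeding, a quick sanity check of these prerequisites (a minimal sketch, assuming `kubectl` is already configured to talk to the target cluster):

```shell
helm version --short       # expect a v3.x client
kubectl get nodes -o wide  # cluster up, nodes Ready, CRI-O runtime
kubectl version            # server version should report v1.23
```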

## Support Details

| Kubernetes Details |                                             |
|--------------------|---------------------------------------------|
| Cluster OS         | RedHat Enterprise Linux 8.x<br>Ubuntu 20.04 |
| Distributions      | Any non-managed K8s cluster                 |
| Versions           | v1.23                                       |
| Storage            | NFS                                         |
| Container Runtime  | CRI-O                                       |

## Use Case Helm Charts

| Use case | Helm Charts |
|----------|-------------|
| Trusted-Workload-Placement Cloud-Service-Provider | ta<br>ihub<br>isecl-controller<br>isecl-scheduler<br>admission-controller |

## Setting up for Helm deployment

Create a namespace, or use an existing one, for the helm deployment:

```shell
kubectl create ns isecl
```
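If the namespace may already exist (for example, on a re-deployment), an idempotent variant of this step is:

```shell
# Create the "isecl" namespace only if it does not already exist
kubectl get ns isecl >/dev/null 2>&1 || kubectl create ns isecl
```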

### Create Secrets for ISecL Scheduler TLS Key-pair

The ISecL Scheduler runs as an HTTPS service, so it needs a TLS key pair, and the TLS certificate must be signed by the K8s CA in order to secure the communication between the K8s base scheduler and the ISecL K8s Scheduler. Creating the TLS key pair is a manual step that has to be done before deploying the helm charts for the Trusted Workload Placement use case. The following steps create a TLS certificate signed by the K8s CA.

```shell
mkdir -p /tmp/k8s-certs/tls-certs && cd /tmp/k8s-certs/tls-certs
openssl req -new -days 365 -newkey rsa:4096 -addext "subjectAltName = DNS:<Controlplane hostname>" -nodes -text -out server.csr -keyout server.key -sha384 -subj "/CN=ISecl Scheduler TLS Certificate"

cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: isecl-scheduler.isecl
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

kubectl certificate approve isecl-scheduler.isecl
kubectl get csr isecl-scheduler.isecl -o jsonpath='{.status.certificate}' \
    | base64 --decode > server.crt
kubectl create secret tls isecl-scheduler-certs --cert=/tmp/k8s-certs/tls-certs/server.crt --key=/tmp/k8s-certs/tls-certs/server.key -n isecl
```

Note: The CSR needs to be deleted if you want to regenerate the isecl-scheduler-certs secret: `kubectl delete csr isecl-scheduler.isecl`
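An optional sanity check that the secret and signed certificate were created as expected (assuming the paths and the `isecl` namespace used above):

```shell
# Confirm the TLS secret exists in the isecl namespace
kubectl get secret isecl-scheduler-certs -n isecl
# Inspect the subject and validity dates of the certificate signed by the K8s CA
openssl x509 -in /tmp/k8s-certs/tls-certs/server.crt -noout -subject -dates
```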

### Create Secrets for Admission Controller TLS Key-pair

Create the admission-controller-certs secret for the admission controller deployment:

```shell
mkdir -p /tmp/adm-certs/tls-certs && cd /tmp/adm-certs/tls-certs
openssl req -new -days 365 -newkey rsa:4096 -addext "subjectAltName = DNS:admission-controller.isecl.svc" -nodes -text -out server.csr -keyout server.key -sha384 -subj "/CN=system:node:<nodename>;/O=system:nodes"

cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: admission-controller.isecl
spec:
  groups:
  - system:authenticated
  request: $(cat server.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kubelet-serving
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF

kubectl certificate approve admission-controller.isecl
kubectl get csr admission-controller.isecl -o jsonpath='{.status.certificate}' \
    | base64 --decode > server.crt
kubectl create secret tls admission-controller-certs --cert=/tmp/adm-certs/tls-certs/server.crt --key=/tmp/adm-certs/tls-certs/server.key -n isecl
```

### Generate CA Bundle

```shell
kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}'
```

Add the base64-encoded string from the output as the value of the `caBundle` sub-field of `admission-controller` in `usecase/trusted-workload-placement/values.yml` when using the use case deployment chart.

Note: The CSR needs to be deleted if you want to regenerate the admission-controller-certs secret: `kubectl delete csr admission-controller.isecl`
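For convenience, the CA bundle can be captured into a shell variable and printed for pasting into values.yml (`CA_BUNDLE` is just an illustrative variable name):

```shell
# Capture the cluster CA bundle from the current kubeconfig context
CA_BUNDLE=$(kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Print it for pasting into the caBundle sub-field of admission-controller
echo "caBundle: ${CA_BUNDLE}"
```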

## Installing isecl-helm charts

* Add the chart repository:

  ```shell
  helm repo add isecl-helm https://intel-secl.github.io/helm-charts
  helm repo update
  ```

* To find the list of available charts:

  ```shell
  helm search repo --versions
  ```
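For example, to list only the versions of the umbrella chart used in this guide:

```shell
# Narrow the search to the Trusted Workload Placement CSP umbrella chart
helm search repo isecl-helm/Trusted-Workload-Placement-Cloud-Service-Provider --versions
```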

### Usecase based chart deployment (using umbrella charts)

#### Update values.yaml for Use Case chart deployments

Some assumptions before updating the values.yaml are as follows:

* The images are built on the build machine and pushed to a registry, with each image tagged with the release version (e.g. v5.0.0)
* The NFS server setup is done, either using the sample script instructions or by the user
* The non-managed K8s cluster is up and running
* Helm 3 is installed

The helm charts expose services via NodePorts and also support the ingress model; to use ingress, set the ingress `enabled` value to `true` in the values.yaml file.

Update the hvsUrl, cmsUrl and aasUrl under the global section according to the configured model, e.g.:

* For ingress: `hvsUrl: https://hvs.isecl.com/hvs/v2`
* For NodePort: `hvsUrl: https://<controlplane-hostname/IP>:30443/hvs/v2`
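For illustration, a minimal override file for the NodePort model might look like the sketch below. The hvsUrl/cmsUrl/aasUrl keys live under the global section as noted above; the NodePort values for CMS and AAS, and their URL path suffixes, are assumptions to be checked against your deployment.

```shell
# Sketch of a values override for the NodePort model; adjust to your cluster.
cat > twp-overrides.yaml <<EOF
global:
  hvsUrl: https://<controlplane-hostname-or-IP>:30443/hvs/v2
  cmsUrl: https://<controlplane-hostname-or-IP>:<cms-nodeport>/cms/v1   # path suffix assumed
  aasUrl: https://<controlplane-hostname-or-IP>:<aas-nodeport>/aas/v1   # path suffix assumed
EOF
```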

#### Use Case charts Deployment

```shell
export VERSION=v5.0.0
helm pull isecl-helm/Trusted-Workload-Placement-Cloud-Service-Provider --version $VERSION && tar -xzf Trusted-Workload-Placement-Cloud-Service-Provider-$VERSION.tgz Trusted-Workload-Placement-Cloud-Service-Provider/values.yaml
helm install <helm release name> isecl-helm/Trusted-Workload-Placement-Cloud-Service-Provider --version $VERSION -f Trusted-Workload-Placement-Cloud-Service-Provider/values.yaml --create-namespace -n <namespace>
```

Note: If using a separate kubeconfig file, make sure to provide its path using `--kubeconfig <.kubeconfig path>`
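A concrete example, assuming a hypothetical release name `twp-csp` and the `isecl` namespace created earlier:

```shell
helm install twp-csp isecl-helm/Trusted-Workload-Placement-Cloud-Service-Provider \
  --version $VERSION \
  -f Trusted-Workload-Placement-Cloud-Service-Provider/values.yaml \
  --create-namespace -n isecl
# Watch the pods come up
kubectl get pods -n isecl -w
```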

### Configure kube-scheduler to establish communication with isecl-scheduler after successful deployment

Refer instructions for configuring kube-scheduler to establish communication with isecl-scheduler.

### Setup task workflow

* Refer instructions for running service-specific setup tasks

### To uninstall a chart

```shell
helm uninstall <release-name> -n <namespace>
```

### To list all the helm chart deployments

```shell
helm list -A
```

### Cleanup steps needed for a fresh deployment

* Uninstall all the chart deployments
* Clean up the data at the NFS mount and the trustagent data mount on each node (/etc/trustagent, /var/log/trustagent)
* Clean up the isecl-scheduler-certs and admission-controller-certs secrets: `kubectl delete secret -n <namespace> isecl-scheduler-certs admission-controller-certs`
* Remove all objects (secrets, RBAC, cluster roles, service accounts) related to the deployment along with the namespace: `kubectl delete ns <namespace>`
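The steps above, gathered into one consolidated sketch (placeholders as before; the node-local cleanup must be run on every node):

```shell
# Uninstall the chart deployment(s)
helm uninstall <release-name> -n <namespace>
# Remove the TLS secrets so they can be regenerated
kubectl delete secret -n <namespace> isecl-scheduler-certs admission-controller-certs
# Remove the namespace and all namespaced objects in it
kubectl delete ns <namespace>
# On each node, clear the trustagent data mounts:
#   rm -rf /etc/trustagent /var/log/trustagent
# Also clear the exported deployment data on the NFS server.
```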

Note:

Before redeploying any of the charts, please check that the PVs and PVCs of the corresponding deployments have been removed. For example, if you want to redeploy aas, make sure that aas-logs-pv, aas-logs-pvc, aas-config-pv, aas-config-pvc, aas-db-pv, aas-db-pvc and aas-base-pvc are removed successfully.

Commands: `kubectl get pvc -n <namespace>` and `kubectl get pv`
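For example, to check for leftover aas volumes before redeploying that chart:

```shell
# List any remaining aas PVCs and PVs; both commands should return nothing
kubectl get pvc -n <namespace> | grep aas
kubectl get pv | grep aas
```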