
Fixes typos: Removes double "the" (#518)
rieck-srlabs authored Jul 2, 2024
1 parent ddb6e0e commit fbba161
Showing 6 changed files with 6 additions and 6 deletions.
content/cost_optimization/cost_opt_storage.md (1 addition & 1 deletion)
@@ -22,7 +22,7 @@ Ephemeral volumes are for applications that require transient local volumes but

### Using Amazon EC2 Instance Stores

- [Amazon EC2 instance stores](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) provide temporary block-level storage for your EC2 instances. The storage provided by EC2 instance stores is accessible through disks that are physically attached to the hosts. Unlike Amazon EBS, you can only attach instance store volumes when the instance is launched, and these volumes only exist during the lifetime of the instance. They cannot be detached and re-attached to other instances. You can learn more about Amazon EC2 instance stores [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html). *There are no additional fees associated with an instance store volume.* This makes them (instance store volumes) _more cost efficient_ than the the general EC2 instances with large EBS volumes.
+ [Amazon EC2 instance stores](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) provide temporary block-level storage for your EC2 instances. The storage provided by EC2 instance stores is accessible through disks that are physically attached to the hosts. Unlike Amazon EBS, you can only attach instance store volumes when the instance is launched, and these volumes only exist during the lifetime of the instance. They cannot be detached and re-attached to other instances. You can learn more about Amazon EC2 instance stores [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html). *There are no additional fees associated with an instance store volume.* This makes them (instance store volumes) _more cost efficient_ than the general EC2 instances with large EBS volumes.

To use local store volumes in Kubernetes, you should partition, configure, and format the disks [using the Amazon EC2 user-data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html) so that volumes can be mounted as a [HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) in the pod spec. Alternatively, you can leverage the [Local Persistent Volume Static Provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) to simplify local storage management. The Local Persistent Volume static provisioner allows you to access local instance store volumes through the standard Kubernetes PersistentVolumeClaim (PVC) interface. Furthermore, it will provision PersistentVolumes (PVs) that contain node affinity information to schedule Pods to the correct nodes. Although it uses Kubernetes PersistentVolumes, EC2 instance store volumes are ephemeral in nature. Data written to ephemeral disks is only available during the instance’s lifetime. When the instance is terminated, so is the data. Please refer to this [blog](https://aws.amazon.com/blogs/containers/eks-persistent-volumes-for-instance-store/) for more details.

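As a point of reference for the HostPath approach described in this hunk, here is a minimal sketch, assuming the instance store disk has already been formatted and mounted at /mnt/instance-store by EC2 user data (the mount path, pod name, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-consumer              # illustrative name
spec:
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data            # scratch space backed by the instance store
  volumes:
    - name: scratch
      hostPath:
        path: /mnt/instance-store     # mount point created by EC2 user data
        type: Directory
```

The Local Persistent Volume Static Provisioner route linked above replaces the hostPath block with a regular PVC, typically bound to the provisioner's local storage class.
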
content/scalability/docs/kcp_monitoring.md (1 addition & 1 deletion)
@@ -216,7 +216,7 @@ The key takeaway here is when looking into scalability issues, to look at every
## ETCD
etcd uses a memory mapped file to store key value pairs efficiently. There is a protection mechanism to set the size of this memory space available, commonly at the 2, 4, and 8GB limits. Fewer objects in the database means less clean up etcd needs to do when objects are updated and older versions need to be cleaned out. This process of cleaning old versions of an object out is referred to as compaction. After a number of compaction operations, there is a subsequent process that recovers usable space, called defragging, that happens above a certain threshold or on a fixed schedule.
- There are a couple user related items we can do to limit the number of objects in Kubernetes and thus reduce the impact of both the compaction and de-fragmentation process. For example, Helm keeps a high `revisionHistoryLimit`. This keeps older objects such as ReplicaSets on the system to be able to do rollbacks. By setting the history limits down to 2 we can reduce the the number of objects (like ReplicaSets) from ten to two which in turn would put less load on the system.
+ There are a couple user related items we can do to limit the number of objects in Kubernetes and thus reduce the impact of both the compaction and de-fragmentation process. For example, Helm keeps a high `revisionHistoryLimit`. This keeps older objects such as ReplicaSets on the system to be able to do rollbacks. By setting the history limits down to 2 we can reduce the number of objects (like ReplicaSets) from ten to two which in turn would put less load on the system.
```yaml
apiVersion: apps/v1
```
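
The manifest in the hunk above is cut off by the diff view. As a minimal sketch of the setting being discussed, a Deployment with a reduced history limit might look like the following (names and image are illustrative; revisionHistoryLimit is the standard apps/v1 Deployment field):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # illustrative name
spec:
  revisionHistoryLimit: 2        # keep only two old ReplicaSets for rollback
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: public.ecr.aws/docker/library/nginx:latest
```

Kubernetes keeps ten old ReplicaSets per Deployment by default, which matches the "from ten to two" figure in the paragraph above.
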
content/scalability/docs/node_efficiency.md (1 addition & 1 deletion)
@@ -16,7 +16,7 @@ Using node sizes that are slightly larger (4-12xlarge) increases the available s

Large node sizes allow us to have a higher percentage of usable space per node. However, this model can be taken to the extreme by packing the node with so many pods that it causes errors or saturates the node. Monitoring node saturation is key to successfully using larger node sizes.

- Node selection is rarely a one-size-fits-all proposition. Often it is best to split workloads with dramatically different churn rates into different node groups. Small batch workloads with a high churn rate would be best served by the the 4xlarge family of instances, while a large scale application such as Kafka which takes 8 vCPU and has a low churn rate would be better served by the 12xlarge family.
+ Node selection is rarely a one-size-fits-all proposition. Often it is best to split workloads with dramatically different churn rates into different node groups. Small batch workloads with a high churn rate would be best served by the 4xlarge family of instances, while a large scale application such as Kafka which takes 8 vCPU and has a low churn rate would be better served by the 12xlarge family.

![Churn rate](../images/churn-rate.png)

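As a rough sketch of the split described in this hunk, a high-churn batch pod can be steered onto its own node group with a nodeSelector (the node group name is illustrative; EKS managed node groups expose the eks.amazonaws.com/nodegroup label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                               # illustrative high-churn batch pod
spec:
  nodeSelector:
    eks.amazonaws.com/nodegroup: batch-4xlarge     # dedicated node group for high-churn work
  containers:
    - name: worker
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sh", "-c", "sleep 300"]
```

A low-churn, resource-heavy workload such as Kafka would point at a separate, larger node group in the same way.
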
content/scalability/docs/scaling_theory.md (1 addition & 1 deletion)
@@ -79,7 +79,7 @@ When fewer errors are occurring, it is easier to spot issues in the system. By peri

#### Expanding Our View

- In large scale clusters with 1,000’s of nodes we don’t want to look for bottlenecks individually. In PromQL we can find the highest values in a data set using a function called topk; K being a variable we place the number of items we want. Here we use three nodes to get an idea whether all of the the Kubelets in the cluster are saturated. We have been looking at latency up to this point, now let’s see if the Kubelet is discarding events.
+ In large scale clusters with 1,000’s of nodes we don’t want to look for bottlenecks individually. In PromQL we can find the highest values in a data set using a function called topk; K being a variable we place the number of items we want. Here we use three nodes to get an idea whether all of the Kubelets in the cluster are saturated. We have been looking at latency up to this point, now let’s see if the Kubelet is discarding events.

```
topk(3, increase(kubelet_pleg_discard_events{}[$__rate_interval]))
```
content/security/docs/data.md (1 addition & 1 deletion)
@@ -115,7 +115,7 @@ There are several viable alternatives to using Kubernetes secrets, including [AW
As the use of external secrets stores has grown, so has the need for integrating them with Kubernetes. The [Secret Store CSI Driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver) is a community project that uses the CSI driver model to fetch secrets from external secret stores. Currently, the Driver has support for [AWS Secrets Manager](https://github.com/aws/secrets-store-csi-driver-provider-aws), Azure, Vault, and GCP. The AWS provider supports both AWS Secrets Manager **and** AWS Parameter Store. It can also be configured to rotate secrets when they expire and can synchronize AWS Secrets Manager secrets to Kubernetes Secrets. Synchronization of secrets can be useful when you need to reference a secret as an environment variable instead of reading it from a volume.

!!! note
- When the the secret store CSI driver has to fetch a secret, it assumes the IRSA role assigned to the pod that references a secret. The code for this operation can be found [here](https://github.com/aws/secrets-store-csi-driver-provider-aws/blob/main/auth/auth.go).
+ When the secret store CSI driver has to fetch a secret, it assumes the IRSA role assigned to the pod that references a secret. The code for this operation can be found [here](https://github.com/aws/secrets-store-csi-driver-provider-aws/blob/main/auth/auth.go).

For additional information about the AWS Secrets & Configuration Provider (ASCP) refer to the following resources:

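As a minimal sketch of the pattern described in this hunk, a SecretProviderClass for the AWS provider plus a pod that mounts it might look like the following (the names, secret path, and service account are illustrative; older driver releases use the v1alpha1 apiVersion):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-aws-secrets                          # illustrative name
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/app/db-credentials"    # illustrative Secrets Manager secret
        objectType: "secretsmanager"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: app-irsa-sa                # service account bound to the IRSA role mentioned above
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-aws-secrets
```

If synchronization to a native Kubernetes Secret is also needed, the SecretProviderClass additionally takes a secretObjects section.
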
content/security/docs/incidents.md (1 addition & 1 deletion)
@@ -10,7 +10,7 @@ Your first course of action should be to isolate the damage. Start by identifyi

### Identify the offending Pods and worker nodes using workload name

- If you know the name and namespace of the offending pod, you can identify the the worker node running the pod as follows:
+ If you know the name and namespace of the offending pod, you can identify the worker node running the pod as follows:

```bash
kubectl get pods <name> --namespace <namespace> -o=jsonpath='{.spec.nodeName}{"\n"}'
```
