
docs: fix small typos
gonmmarques authored Nov 3, 2023
1 parent 7dce19b commit 61ad6ab
Showing 6 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion content/cost_optimization/cost_opt_compute.md
@@ -195,7 +195,7 @@ Spot compute should use a wide variety of instance types to reduce the likelihoo

It is possible to balance spot and on-demand instances in a single cluster. With Karpenter you can create [weighted provisioners](https://karpenter.sh/docs/concepts/scheduling/#on-demandspot-ratio-split) to achieve a balance of different capacity types. With Cluster Autoscaler you can create [mixed node groups with spot and on-demand or reserved instances](https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-provisioning-and-managing-ec2-spot-instances-in-managed-node-groups/).

- Here is an example of using Karpenter to prioritize Spot **** instances ahead of On-Demand instances. When creating a provisioner, you can specify either Spot, On-Demand, or both (as shown below). When you specify both, and if the pod does not explicitly specify whether it needs to use Spot or On-Demand, then Karpenter prioritizes Spot when provisioning a node with [price-capacity-optimization allocation strategy](https://aws.amazon.com/blogs/compute/introducing-price-capacity-optimized-allocation-strategy-for-ec2-spot-instances/) .
+ Here is an example of using Karpenter to prioritize Spot instances ahead of On-Demand instances. When creating a provisioner, you can specify either Spot, On-Demand, or both (as shown below). When you specify both, and if the pod does not explicitly specify whether it needs to use Spot or On-Demand, then Karpenter prioritizes Spot when provisioning a node with the [price-capacity-optimized allocation strategy](https://aws.amazon.com/blogs/compute/introducing-price-capacity-optimized-allocation-strategy-for-ec2-spot-instances/).

```yaml hl_lines="9"
apiVersion: karpenter.sh/v1alpha5
# … (remainder of the manifest collapsed in the diff view)
```
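For readers without the full file expanded, here is a minimal sketch of such a Provisioner. The resource name is illustrative and not taken from this diff; the key idea is the `karpenter.sh/capacity-type` requirement listing both capacity types:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # Allow both capacity types. When a pod does not pin one, Karpenter
    # favors Spot, allocated with the price-capacity-optimized strategy.
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
```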
2 changes: 1 addition & 1 deletion content/cost_optimization/cost_opt_networking.md
@@ -259,7 +259,7 @@ The diagram below depicts network paths for traffic flowing from the load balanc

Data transfer into the Amazon ECR private registry is free. _In-region data transfer incurs no cost_, but data transfer out to the internet and across regions will be charged at Internet Data Transfer rates on both sides of the transfer.

- You should utilize ECRs built-in[image replication](https://docs.aws.amazon.com/AmazonECR/latest/userguide/replication.html)[feature](https://docs.aws.amazon.com/AmazonECR/latest/userguide/replication.html) to replicate the relevant container images into the same region as your workloads. This way the replication would be charged once, and all the same region (intra-region) image pulls would be free.
+ You should utilize ECR's built-in [image replication feature](https://docs.aws.amazon.com/AmazonECR/latest/userguide/replication.html) to replicate the relevant container images into the same region as your workloads. This way the replication is charged once, and all same-region (intra-region) image pulls are free.
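As a sketch of how that can be set up (the destination region and account ID below are placeholders), replication is configured once at the registry level, for example via the AWS CLI:

```
# Replicate images pushed to this private registry into eu-west-1 so that
# workloads in eu-west-1 pull intra-region (free) instead of cross-region.
aws ecr put-replication-configuration \
  --replication-configuration '{
    "rules": [
      {"destinations": [{"region": "eu-west-1", "registryId": "111122223333"}]}
    ]
  }'
```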

You can further reduce data transfer costs associated with pulling images from ECR (data transfer out) by _using [Interface VPC Endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html) to connect to the in-region ECR repositories_. The alternative approach of connecting to ECR’s public AWS endpoint (via a NAT Gateway and an Internet Gateway) will incur higher data processing and transfer costs. The next section will cover reducing data transfer costs between your workloads and AWS Services in greater detail.
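A sketch of creating those endpoints with the AWS CLI (all IDs and the region are placeholders; ECR needs both the `ecr.api` and `ecr.dkr` interface endpoints, plus an S3 gateway endpoint, since image layers are served from Amazon S3):

```
# Interface endpoint for the Docker registry API (ecr.dkr); repeat with
# service name com.amazonaws.us-east-1.ecr.api for the ECR API endpoint.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.dkr \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```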

4 changes: 2 additions & 2 deletions content/cost_optimization/cost_opt_observability.md
@@ -76,7 +76,7 @@ However, this will not have an instant affect on your cost savings. For addition

#### Export to Amazon S3 from CloudWatch

- For storing Amazon CloudWatch logs long term, we recommend exporting your Amazon EKS CloudWatch logs to Amazon Simple Storage Service (Amazon S3). You can forward the logs to Amazon S3 bucket by creating an export task via the [Console](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html) or the API. After you have done so, Amazon S3 presents many options to further reduce cost. You can define your own [Amazon S3 Lifecycle rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) to move your logs to a storage class that a fits your needs, or leverage the [Amazon S3 Intelligent-Tiering](https://aws.amazon.com/s3/storage-classes/intelligent-tiering/) storage class to have AWS automatically move data to long-term storage based on your usage pattern. Please refer to this [blog](https://aws.amazon.com/blogs/containers/understanding-and-cost-optimizing-amazon-eks-control-plane-logs/) for more details. For example, for your production environment logs reside in CloudWatch formore than 30 days then exported to Amazon S3 bucket. You can then use Amazon Athena to query the data in Amazon S3 bucket if you need to refer back to the logs at a later time.
+ For storing Amazon CloudWatch logs long term, we recommend exporting your Amazon EKS CloudWatch logs to Amazon Simple Storage Service (Amazon S3). You can forward the logs to an Amazon S3 bucket by creating an export task via the [Console](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html) or the API. After you have done so, Amazon S3 presents many options to further reduce cost. You can define your own [Amazon S3 Lifecycle rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) to move your logs to a storage class that fits your needs, or leverage the [Amazon S3 Intelligent-Tiering](https://aws.amazon.com/s3/storage-classes/intelligent-tiering/) storage class to have AWS automatically move data to long-term storage based on your usage pattern. Please refer to this [blog](https://aws.amazon.com/blogs/containers/understanding-and-cost-optimizing-amazon-eks-control-plane-logs/) for more details. For example, your production environment logs could reside in CloudWatch for more than 30 days and then be exported to an Amazon S3 bucket. You can then use Amazon Athena to query the data in the Amazon S3 bucket if you need to refer back to the logs at a later time.
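A sketch of creating such an export task with the CLI (the log group name, bucket, and epoch-millisecond timestamps are placeholders; the bucket policy must allow CloudWatch Logs to write to it):

```
# Export roughly a 30-day window of logs; --from/--to are epoch milliseconds.
aws logs create-export-task \
  --log-group-name /aws/eks/my-cluster/cluster \
  --from 1698192000000 \
  --to 1700784000000 \
  --destination my-log-archive-bucket \
  --destination-prefix eks-logs
```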

### Reduce Log Levels

@@ -348,6 +348,6 @@ For example, if you have traces that are 90 days old, [Amazon S3 Intelligent-Tie
## Additional Resources:
* [Observability Best Practices Guide](https://aws-observability.github.io/observability-best-practices/guides/)
- * [Best Practices Metrics Collection](https://aws-observability.github.io/observability-best-practices/guides/containers/oss/eks/)best-practices-metrics-collection/
+ * [Best Practices Metrics Collection](https://aws-observability.github.io/observability-best-practices/guides/containers/oss/eks/best-practices-metrics-collection/)
* [AWS re:Invent 2022 - Observability best practices at Amazon (COP343)](https://www.youtube.com/watch?v=zZPzXEBW4P8)
* [AWS re:Invent 2022 - Observability: Best practices for modern applications (COP344)](https://www.youtube.com/watch?v=YiegAlC_yyc)
4 changes: 2 additions & 2 deletions content/scalability/docs/data-plane.md
@@ -6,7 +6,7 @@ Selecting EC2 instance types is possibly one of the hardest decisions customers

## Automatic node autoscaling

- We recommend you use node autoscaling that reduces toil and integrates deeply with Kubernetes. [Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) and [Karpenter](https://karpenter.sh/) are recomended for large scale clusters.
+ We recommend you use node autoscaling that reduces toil and integrates deeply with Kubernetes. [Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) and [Karpenter](https://karpenter.sh/) are recommended for large-scale clusters.

Managed node groups will give you the flexibility of Amazon EC2 Auto Scaling groups with added benefits for managed upgrades and configuration. They can be scaled with the [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) and are a common option for clusters that have a variety of compute needs.
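As a sketch (the cluster name, instance types, and sizes are illustrative), such a managed node group could be created with eksctl, leaving Cluster Autoscaler to adjust the underlying Auto Scaling group within the min/max bounds:

```
eksctl create nodegroup \
  --cluster my-cluster \
  --name general-purpose \
  --managed \
  --instance-types m5.large,m5a.large \
  --nodes-min 2 \
  --nodes-max 20
```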

@@ -44,7 +44,7 @@ A cluster with three u-24tb1.metal instances (24 TB memory and 448 cores) has 3

Workloads should define the resources they need and the availability required via taints, tolerations, and [PodTopologySpread](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/). They should prefer the largest nodes that can be fully utilized and meet availability goals to reduce control plane load, lower operational overhead, and reduce cost.

- The Kubernetes Scheduler will automatically try to spread workloads across availablility zones and hosts if resources are available. If no capacity is available the Kubernetes Cluster Autoscaler will attempt to add nodes in each Availability Zone evenly. Karpenter will attempt to add nodes as quickly and cheaply as possible unless the workload specifies other requirements.
+ The Kubernetes Scheduler will automatically try to spread workloads across availability zones and hosts if resources are available. If no capacity is available, the Kubernetes Cluster Autoscaler will attempt to add nodes in each Availability Zone evenly. Karpenter will attempt to add nodes as quickly and cheaply as possible unless the workload specifies other requirements.

To force workloads to spread with the scheduler, and new nodes to be created across availability zones, you should use topologySpreadConstraints:

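Since the diff view collapses the guide's own example here, a minimal sketch of such a constraint (the deployment name and `app` label are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        # Keep the replica count difference between zones at most 1, and
        # hold pods Pending rather than violate the spread.
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: app
          image: public.ecr.aws/nginx/nginx:latest
```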
2 changes: 1 addition & 1 deletion content/scalability/docs/node_efficiency.md
@@ -137,7 +137,7 @@ Using this metric, we can see in the above chart every thread on the box was sta
### HPA V2
It is recommended to use the autoscaling/v2 version of the HPA API. The older versions of the HPA API could get stuck scaling in certain edge cases. They were also limited to pods only doubling during each scaling step, which created issues for small deployments that needed to scale rapidly.

- Autoscaling/v2 allows us more flexibility to include mutliple criteria to scale on and allows us a great deal of flexiblity when using custom and external metrics (non K8s metrics).
+ Autoscaling/v2 allows us to include multiple criteria to scale on and gives us a great deal of flexibility when using custom and external metrics (non-K8s metrics).

As an example, we can scale on the highest of three values (see below). We scale if the average utilization of all the pods is over 50%, if the custom metric for packets per second on the ingress exceeds an average of 1,000, or if the Ingress object exceeds 10K requests per second.
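A sketch of what that HPA could look like (object and metric names are illustrative; the diff collapses the guide's own manifest below):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 50
  metrics:
    # Scale when average CPU utilization across pods exceeds 50%...
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
    # ...or when the custom packets-per-second metric averages over 1,000...
    - type: Pods
      pods:
        metric:
          name: packets-per-second
        target:
          type: AverageValue
          averageValue: "1k"
    # ...or when the Ingress serves more than 10K requests per second.
    - type: Object
      object:
        metric:
          name: requests-per-second
        describedObject:
          apiVersion: networking.k8s.io/v1
          kind: Ingress
          name: my-app-ingress
        target:
          type: Value
          value: "10k"
```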

2 changes: 1 addition & 1 deletion content/upgrades/index.md
@@ -171,7 +171,7 @@ aws iam get-role --role-name ${ROLE_ARN##*/} \

Amazon EKS automatically installs add-ons such as the Amazon VPC CNI plugin for Kubernetes, `kube-proxy`, and CoreDNS for every cluster. Add-ons may be self-managed, or installed as Amazon EKS Add-ons. Amazon EKS Add-ons is an alternate way to manage add-ons using the EKS API.

- You can use Amazon EKS Add-ons to update vesions with a single command. For Example:
+ You can use Amazon EKS Add-ons to update versions with a single command. For example:

```
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version version-number \
```
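To discover a valid `version-number` for your cluster first, one option (a sketch; the Kubernetes version is a placeholder) is:

```
# List the vpc-cni add-on versions available for a given cluster version.
aws eks describe-addon-versions \
  --addon-name vpc-cni \
  --kubernetes-version 1.28 \
  --query 'addons[].addonVersions[].addonVersion'
```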
