Revise scalar-loki-stack-custom-values.yaml to scrape the logs of scalar-admin-for-kubernetes #249
Conversation
@@ -53,3 +53,27 @@ promtail:
            - __meta_kubernetes_pod_uid
            - __meta_kubernetes_pod_container_name
          target_label: __path__
      # -- the `scalar-admin-for-kubernetes` job scrapes all the logs of Scalar Admin for Kubernetes Pods
Sorry, I'm not familiar with the Loki configurations, and this might not be directly related to this PR, but let me ask a few questions.
- Do we need to scrape all the logs in every job execution?
- Are the scraped results stored anywhere persistent? Or, if the Loki logs are removed or rotated, are they gone?
In any case, we should keep the paused durations persistent, independent of the Loki logs' lifecycle, and I'm wondering if we can achieve that.
I had a discussion with Yokoyama-san and confirmed the feasibility.
In short, Scalar Manager will periodically (maybe once per minute) get the paused durations from Loki and preserve them in a ConfigMap. So it doesn't matter how users configure Loki's retention or storage; Scalar Manager provides the paused durations by itself.
@supl So, the answers to the questions are both No?
If the answer to the first question is yes, it might be an issue. For example, if the job scrapes all the logs in every execution and users configure a long retention period, the job execution might take quite long and become expensive.
Sorry for my late response.
Regarding "Do we need to scrape all the logs in every job execution?":
In the comment `the scalar-admin-for-kubernetes job scrapes all the logs of Scalar Admin for Kubernetes Pods`, the "all" refers to the Pods. Maybe it's better to change the sentence to `the scalar-admin-for-kubernetes job scrapes the logs of all Scalar Admin for Kubernetes Pods`.
The Promtail job is a progressive process (a tail) that keeps shipping newly appended logs to Loki over time. So it's no problem even if it has been running and shipping many logs for a very long time.
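To illustrate, a Promtail scrape job of this kind could be sketched roughly as below. Note that this is only an illustration, not the PR's actual diff: the `extraScrapeConfigs` snippet key and the `app.kubernetes.io/name` label selector are assumptions about the chart values layout.

```yaml
promtail:
  config:
    snippets:
      # Hypothetical sketch; the actual values in this PR may differ.
      extraScrapeConfigs: |
        # -- the `scalar-admin-for-kubernetes` job scrapes the logs of all Scalar Admin for Kubernetes Pods
        - job_name: scalar-admin-for-kubernetes
          pipeline_stages:
            - docker: {}  # parses Docker json-file log lines
            - cri: {}     # parses CRI (containerd/CRI-O) log lines
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            # Assumed selector: keep only Scalar Admin for Kubernetes Pods
            - action: keep
              source_labels:
                - __meta_kubernetes_pod_label_app_kubernetes_io_name
              regex: scalar-admin-for-kubernetes
```

Because Promtail tails files and remembers positions, each execution only ships newly appended lines rather than re-reading everything.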
Regarding "Are the scraped results stored anywhere persistent? Or, if Loki logs are removed or rotated, are they gone?":
- Loki has no retention by default unless users explicitly configure it. If users specify a retention period, Loki will remove logs older than that period from its storage over time.
- Loki's default storage is file-based, similar to `/var/logs`. If we don't use a persistent volume, or change the storage to a cloud service (for example, DynamoDB), we will lose the logs when the Loki server is restarted.
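For reference, retention and persistence could be enabled along these lines. This is a hedged sketch, not part of this PR: the `persistence` key assumes the Grafana loki-stack Helm chart's values layout, and the retention keys follow Loki's compactor-based retention settings.

```yaml
loki:
  # Assumes the loki-stack Helm chart values layout
  persistence:
    enabled: true              # keep chunks on a PersistentVolume across restarts
  config:
    compactor:
      retention_enabled: true  # retention is disabled by default
    limits_config:
      retention_period: 744h   # e.g. delete logs older than 31 days
```

With retention left unconfigured, logs accumulate until the underlying storage fills or the Pod loses its (non-persistent) volume.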
      - job_name: scalar-admin-for-kubernetes
        pipeline_stages:
          - docker: {}
          - cri: {}
This is necessary in case a managed Kubernetes cluster uses CRI instead of Docker.
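For context (not part of this PR), the two pipeline stages parse different container log formats, which is why having both makes the job runtime-agnostic. The sample lines below are invented examples showing the shape each stage expects:

```
# Docker json-file log line, handled by the `docker` stage:
{"log":"example message\n","stream":"stdout","time":"2024-01-01T00:00:00.000000000Z"}

# CRI (containerd / CRI-O) log line, handled by the `cri` stage:
2024-01-01T00:00:00.000000000Z stdout F example message
```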
This is not related to this PR directly, but do we need to add this `cri: {}` configuration to the ScalarDB and ScalarDL configurations as well?
Sorry for my late review... 🙇♀️
I was able to see the log of Scalar Admin for Kubernetes on the Grafana dashboard by using this configuration.
So, overall looks good to me, but I have one question.
Please take a look when you have time!
      - job_name: scalar-admin-for-kubernetes
        pipeline_stages:
          - docker: {}
          - cri: {}
This is not related to this PR directly, but do we need to add this `cri: {}` configuration to the ScalarDB and ScalarDL configurations as well?
Sorry for my late response, and thank you for the review. Regarding your question,
I think we can do it as well, because I guess we only need the `cri: {}` configuration. What do you think?
@supl I understood the behavior of the `cri: {}` configuration. As we discussed in the meeting, let's keep this PR as is and update the configurations for ScalarDB and ScalarDL as another task after we test!
LGTM! Thank you!
Thank you. FYI.
LGTM!
Description
This PR adds a configuration to scalar-loki-stack-custom-values.yaml to make Loki scrape the logs of the Pods that run scalar-admin-for-kubernetes. This is an important configuration because the results of scalar-admin-for-kubernetes are displayed as logs.
Related issues and/or PRs
N/A
Changes made
scalar-loki-stack-custom-values.yaml
Checklist