How to ensure that linkerd-proxy metrics are scraped from ephemeral pods? #5435
-
I really like having the linkerd-proxy metrics. However, there seem to be many cases where pods are short-lived (for example, Jobs) and terminate before Prometheus has a chance to scrape them. As I understand it, ephemeral pods are somewhat antithetical to Prometheus' scrape/pull model. The push gateway is a possible solution, but it has a number of quirks. So is there a recommended best practice to ensure that linkerd-proxy containers on ephemeral pods are scraped? Alternatively, is there a way to get the source pod and namespace as labels on the destination's inbound metrics?
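For context, a minimal sketch of the kind of pull-based scrape setup involved is below; the interval value and job name are illustrative assumptions rather than the actual linkerd-viz defaults. A pod whose whole lifetime fits inside a single interval may never be scraped at all.

```yaml
# Illustrative Prometheus pull configuration (not the actual linkerd-viz config).
# A pod that starts and finishes within one scrape_interval can be missed entirely.
global:
  scrape_interval: 10s          # assumed interval; check your own setup
scrape_configs:
  - job_name: linkerd-proxy     # hypothetical job name
    kubernetes_sd_configs:
      - role: pod               # discovers pods, then scrapes their metrics endpoints
```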
Replies: 1 comment 1 reply
-
I don't know of any "best practices" for this particular scenario. Is it possible to wrap your job application in a script that sleeps for a period (at least one Prometheus scrape interval) after the main process completes? That way the pod should remain alive long enough to get scraped. Just an idea to try.
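To make that concrete, here is a minimal sketch of the sleep-wrapper idea as a Kubernetes Job; the image, command path, and 30-second sleep are assumptions you would adapt to your workload and your Prometheus scrape interval.

```yaml
# Sketch of the sleep-wrapper workaround. The image, run-job path, and sleep
# duration are hypothetical; the sleep should be at least one scrape interval.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job                        # hypothetical name
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled         # mesh the Job's pod so linkerd-proxy runs alongside it
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: example.com/my-job:latest # hypothetical image
          command: ["/bin/sh", "-c"]
          # Run the real workload, remember its exit status, then keep the pod
          # alive long enough for at least one more scrape before exiting.
          args: ["/usr/local/bin/run-job; status=$?; sleep 30; exit $status"]
```

Preserving the exit status in the wrapper keeps the Job's success/failure reporting intact while the pod lingers long enough to be scraped.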
There is no way currently to get source labels for inbound metrics. See #4101