Host_metrics collection over http_server and data loss in pipelines #22149
Replies: 2 comments
-
And here are the same metrics, just not tunneled through http_server. In this case everything works as it should. Full vector.yaml: https://pastebin.com/Rpnerxg6
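For reference, the working non-tunneled path can be sketched roughly like this (component names and the scrape interval are illustrative, not taken from the linked config):

```yaml
# Hypothetical sketch of the direct, non-tunneled pipeline:
# host_metrics is scraped locally and written straight to a sink.
sources:
  local_host_metrics:
    type: host_metrics
    scrape_interval_secs: 15

sinks:
  debug_out:
    type: console          # or file / prometheus_exporter / blackhole
    inputs:
      - local_host_metrics
    encoding:
      codec: json
```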
-
Forgot to mention: Vector is at version 43.0 and runs inside Docker v26 on a Debian 12 host.
-
I collect host_metrics from the local machine, parse them, and send them successfully to the file/prometheus/blackhole sinks. `vector top` displays this correctly. This works.
I have other instances where I want to collect host_metrics and ship them to a central Vector instance, where they are aggregated and stored. For this I use an http_server source as the endpoint (I didn't find anything better for it; it acts as a kind of Vector-to-Vector tunnel).
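As an aside, Vector also ships a native `vector` source/sink pair intended for exactly this agent-to-aggregator forwarding, which may sidestep the http_server tunneling entirely. A rough sketch (addresses, ports, and component names are illustrative assumptions, not from the actual config):

```yaml
# Agent side (hypothetical): forward events to the aggregator.
sinks:
  to_aggregator:
    type: vector
    inputs:
      - local_host_metrics
    address: "aggregator.example.com:6000"

# Aggregator side (hypothetical): receive events from agents.
sources:
  from_agents:
    type: vector
    address: "0.0.0.0:6000"
```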
The problem: I receive the data, I see it at the input and at the first transform, but after that it never reaches the metric-to-log conversion, the database sink, and so on. The only place it does end up, mysteriously, is the blackhole sink.
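The metric-to-log step mentioned above would typically be Vector's standard `metric_to_log` transform wired between the receiving source and the downstream sinks; a minimal sketch, assuming hypothetical component names:

```yaml
# Hypothetical sketch: convert incoming metric events to log events
# so they can be written to a log-oriented sink such as a database.
transforms:
  metrics_as_logs:
    type: metric_to_log
    inputs:
      - incoming_metrics   # hypothetical upstream source name
```

If events reach the transform's input but never leave it, that is the point in the topology worth inspecting with `vector tap`.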
On top of that, in this case `vector top` and `vector tap` disagree. Sometimes `vector top` reports no input for a component, while `vector tap --inputs-of` clearly shows that input is arriving.
I've tried to graphically show what's going on:
full config vector.yaml https://pastebin.com/mRZSSVF9
(the port on the tunnel was the same in the configuration, it's just different in this screenshot)