We have deployed an OpenTelemetry (OTel) Collector image on Azure Container Apps, which load-balances incoming requests across containers automatically. We use JMeter to generate load, with three clients each issuing 50 concurrent requests per second. However, we've observed that the server's response time increases significantly once the number of requests per container exceeds 50.
We suspect that this latency spike may be related to limits in the OpenTelemetry Collector's internal thread pool. It's perplexing, because a limit of 50 concurrent requests per container seems quite low, especially since CPU and memory utilization in the pod stay comfortably within even the minimum VM settings.
Our goal is to find a way to increase the number of concurrent requests that a single OpenTelemetry collector can handle without introducing latency. This is particularly important because there's still ample available capacity in the underlying virtual machine.
We've attempted various approaches, including disabling all OpenTelemetry processors and replacing the remote exporter with a simple logging exporter (a minimal configuration along these lines is sketched below). However, even with these optimizations, increasing the request rate beyond 50 requests per second results in a substantial increase in the collector's response time.
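For context, the stripped-down setup described above would look roughly like the following sketch. It assumes the OTLP receiver and the `logging` exporter (renamed `debug` in newer collector releases), so names and ports may need adjusting for the collector version in use:

```yaml
# Minimal sketch of the stripped-down test configuration described above.
# Assumes the OTLP receiver and the logging exporter ("debug" in newer
# collector releases); adjust names/ports to match your collector version.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  logging:            # use "debug" on recent collector versions
    verbosity: basic

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []   # all processors disabled for the test
      exporters: [logging]
```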
We're currently seeking guidance on how to adjust the OpenTelemetry collector's configuration to efficiently process a higher number of concurrent HTTP requests without experiencing latency issues. The exact origin of this "magic" threshold of 50 requests per second remains unclear to us.
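One direction we are considering (a sketch under assumptions, not a confirmed fix) is raising the OTLP receiver's server limits and re-introducing the batch processor so that export work does not back-pressure the receive path. The option names below come from the collector's HTTP/gRPC server settings and the batch processor, and should be verified against the collector version actually deployed:

```yaml
# Hedged sketch of tuning options under consideration; option names follow
# the collector's HTTP/gRPC server settings and the batch processor, and
# should be checked against the deployed collector version.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
        max_request_body_size: 10485760   # 10 MiB, in case large payloads tie up workers
      grpc:
        endpoint: 0.0.0.0:4317
        max_concurrent_streams: 200       # per-connection HTTP/2 stream cap (gRPC only)

processors:
  batch:                                   # decouple the receive path from export work
    timeout: 200ms
    send_batch_size: 512

exporters:
  logging: {}                              # "debug" on recent collector versions

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```

The batch processor is mainly there so that export work happens off the request path; in the stripped-down test the logging exporter is cheap, so this may matter more once a remote exporter is re-enabled.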