high latency after adding linkerd to grpc application #4954
What is the issue?

Background: we have two gRPC Go applications running on Kubernetes and are load-testing them. The load test uses a simple get call that returns a text message as the response; we have added a 50ms sleep in the call here.

Another observation is that throughput also decreased after adding Linkerd: without Linkerd the application was able to process 3.8k req/sec, and after adding Linkerd it falls to 2.8k req/sec.

Results before adding Linkerd:

Latency distribution:
Status code distribution:
Error distribution:

After adding Linkerd:

Latency distribution:
Status code distribution:
Error distribution:

I have also noticed that the CPU utilization of the Linkerd proxy on the client side increases heavily. Is there something I have not configured properly that would cause this kind of behavior?

Linkerd proxy CPU on the client side:

Output of `linkerd check`:
Deployment annotations:
Environment:

I can see there are other issues that talk about this:
@sumit-joshi-mt Can you try with the latest edge release? The team did some work in the proxy to support multi-threaded runtime recently. Just curious to see what the latency will look like in your setup.
I am also curious about how the values in the CPU usage chart are calculated. Can you share the PromQL query behind it? Thanks.
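For reference, per-container CPU charts in Kubernetes are commonly built from cAdvisor's `container_cpu_usage_seconds_total` counter. A hedged sketch of such a query (the label names and namespace are assumptions that depend on the cluster's metrics setup, and may differ from the query used for the chart in this issue):

```promql
# CPU cores consumed by the linkerd-proxy sidecar, per pod,
# averaged over a 5-minute window (assumes cAdvisor metrics)
sum by (pod) (
  rate(container_cpu_usage_seconds_total{container="linkerd-proxy", namespace="default"}[5m])
)
```

Comparing this against the same query for the application container would show how much of the added CPU load is attributable to the proxy itself.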