Our use case for Kafka involves high latency (on the order of a one-second round-trip time) and sudden bursts of messages on the order of 300k. We are seeing slowness in that situation, and I was looking into different Kafka settings that might help mitigate this.
I saw that `max.poll.records` is set to 500 by default. I thought this might solve our problem (or would at least be worth testing), but I saw that it is not supported in rdkafka and found these two issues:

It seems like those use cases were slightly different, in that they wanted to limit the batch size due to memory concerns. In our case, processing 300k messages at a batch size of 500 would spend about ten minutes waiting on network round trips.
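For context, here is a rough sketch of the kind of client-side batching we are effectively after. This assumes the Python confluent-kafka binding on top of librdkafka (our actual client code differs), and the broker address, topic, group id, and `process()` handler are placeholders. The arithmetic in the comments is just the back-of-the-envelope estimate behind the "ten minutes" figure above.

```python
from confluent_kafka import Consumer

# Back-of-the-envelope for the burst described above:
# 300,000 messages / 500 per application batch = 600 batches.
# At ~1 s of round-trip latency per batch, that is ~600 s, or about
# ten minutes spent waiting on the network alone.

def process(batch):
    # Placeholder for our actual message handler.
    print(f"processing {len(batch)} messages")

consumer = Consumer({
    "bootstrap.servers": "broker:9092",  # placeholder
    "group.id": "burst-consumer",        # placeholder
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])           # placeholder topic

try:
    while True:
        # consume() returns up to num_messages messages per call from the
        # locally buffered queue, so the application batch size is not
        # pinned to 500 the way max.poll.records would pin it in the
        # Java client.
        msgs = consumer.consume(num_messages=10000, timeout=1.0)
        batch = [m for m in msgs if m.error() is None]
        if batch:
            process(batch)
finally:
    consumer.close()
```

This is only a sketch under those assumptions, not a claim about how rdkafka should expose the setting.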
I also found this documentation that is related, but not quite for this case:
Specifically, I'm wondering: