Following up on #5390: with the fix to the XDR library used for unmarshalling the downloaded files now in place, I re-ran the BufferedStorageBackend benchmarks on ledger metadata files across different batch sizes. Findings:
- With 1 ledger per file, scaling beyond 10 download workers and a buffer size of 100 showed no significant improvement.
- With multiple ledgers per file (configurations of 64 to 512 ledgers per file), there was no significant improvement beyond 2 workers and a buffer size of 10.
These tests were conducted on my local machine.
Once reviewed, I can update producer.go based on the findings.
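For reference, here is a minimal sketch of how these findings could map onto defaults in producer.go. It assumes the BufferedStorageBackendConfig type from the stellar/go ledgerbackend package with BufferSize and NumWorkers fields; the function name, the thresholds, and the omitted retry settings are illustrative, not the final recommendation.

```go
// Sketch only: defaults derived from the benchmark findings above.
// Assumes ledgerbackend.BufferedStorageBackendConfig exposes BufferSize and
// NumWorkers; verify field names and add retry settings against the current API.
package cdp

import "github.com/stellar/go/ingest/ledgerbackend"

// defaultBufferedStorageBackendConfig (hypothetical name) picks worker/buffer
// defaults based on how many ledgers are packed into each file in the datastore.
func defaultBufferedStorageBackendConfig(ledgersPerFile uint32) ledgerbackend.BufferedStorageBackendConfig {
	if ledgersPerFile <= 1 {
		// 1 ledger per file: no measurable gain beyond 10 workers / buffer of 100.
		return ledgerbackend.BufferedStorageBackendConfig{
			BufferSize: 100,
			NumWorkers: 10,
		}
	}
	// 64-512 ledgers per file: 2 workers and a buffer of 10 were sufficient.
	return ledgerbackend.BufferedStorageBackendConfig{
		BufferSize: 10,
		NumWorkers: 2,
	}
}
```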
What problem does your feature solve?
In #5390 , we ran performance benchmark tests on BufferedStorageBackend for ledger metadata files using different batch sizes to determine the optimal configuration. Since then, a fix has been made to the XDR library used for unmarshalling the downloaded files. We should re-run these benchmark tests with the fix and update configuration recommendations for BufferedStorageBackend.
What would you like to see?
Re-run the benchmark tests and adjust the default configurations based on the results.
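A rough sketch of how the re-run could be structured as a Go benchmark grid over worker/buffer combinations is below; the stub body and parameter values are hypothetical placeholders for the real BufferedStorageBackend setup and ledger-reading loop used in the original tests.

```go
// Hypothetical benchmark sketch: iterates over worker/buffer combinations and
// reports time per run for each. readLedgerRangeStub stands in for building a
// BufferedStorageBackend with those settings and draining a fixed ledger range.
package bench

import (
	"fmt"
	"testing"
	"time"
)

// readLedgerRangeStub simulates the work of reading a ledger range; replace it
// with the real backend setup and GetLedger loop when re-running the tests.
func readLedgerRangeStub(workers, bufferSize uint32) {
	_ = workers
	_ = bufferSize
	time.Sleep(time.Millisecond) // placeholder work
}

func BenchmarkBufferedStorageBackend(b *testing.B) {
	cases := []struct{ workers, bufferSize uint32 }{
		{2, 10}, {5, 50}, {10, 100}, {20, 200}, // example grid, not the #5390 matrix
	}
	for _, c := range cases {
		b.Run(fmt.Sprintf("workers=%d_buffer=%d", c.workers, c.bufferSize), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				readLedgerRangeStub(c.workers, c.bufferSize)
			}
		})
	}
}
```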
What alternatives are there?
Let SDK users figure out the best configurations through their own experimentation as needed.