
use a buffer for state_history's boost iostreams de/compression pipeline improving performance #113

Merged
merged 1 commit into main from ship_ios_pipeline_compression_buffer on May 6, 2024

Conversation

spoonincode
Member

Without a buffer size passed to these components in the de/compression pipeline, boost iostreams seems to send data through byte-by-byte. Adding a buffer nearly doubles performance when populating the initial state. In leap 5.0: ~12m30s; after #33: ~6m20s; this PR: ~3m15s.

Certainly open to being schooled on how to better utilize boost iostreams if I missed a better way; this very pessimistic default buffering behavior was surprising.
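For illustration, a minimal sketch (not the PR's actual diff) of passing an explicit buffer size to each component pushed onto a boost::iostreams pipeline; the 64 KiB value and the file name are assumptions for the example:

```cpp
// Minimal sketch, assuming a zlib-compressed input file; not the PR's actual code.
// Each push() gets an explicit buffer size so data moves through the pipeline in
// large chunks instead of byte-by-byte.
#include <boost/iostreams/filtering_streambuf.hpp>
#include <boost/iostreams/filter/zlib.hpp>
#include <boost/iostreams/device/file.hpp>
#include <boost/iostreams/copy.hpp>
#include <iostream>

namespace bio = boost::iostreams;

int main() {
   constexpr std::streamsize buf_size = 64*1024; // illustrative size

   bio::filtering_istreambuf in;
   // The second argument to push() is the buffer size used for that component.
   in.push(bio::zlib_decompressor(), buf_size);
   in.push(bio::file_source("state_history.bin.z"), buf_size); // hypothetical file name

   bio::copy(in, std::cout); // stream the decompressed bytes to stdout
}
```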

@heifner
Member

The zlib de/compressor of transaction.cpp could likely benefit from a buffer as well.

@spoonincode
Member Author

fwiw I looked at this a little more, and the reason buffering is being disabled is that the counter declares itself as optimally buffered, which means this gets called:

std::streamsize optimal_buffer_size() const { return 0; }

which effectively disables buffering (I guess just going into the counter?). Otherwise filters get a default buffer of 128 bytes and devices 4K. It's quite possible that for ship's large data structures the bigger 64KB buffer is healthier than 128/4K anyway, though.
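For reference, the relevant part of boost's counter filter (boost/iostreams/filter/counter.hpp) looks roughly like this (abridged; see the boost source for the exact definition):

```cpp
// Abridged from boost/iostreams/filter/counter.hpp.
template<typename Ch>
class basic_counter {
public:
    typedef Ch char_type;
    struct category
        : dual_use,
          filter_tag,
          multichar_tag,
          optimally_buffered_tag   // advertises an "optimal" buffer size...
        { };

    // ...which is zero, so the chain allocates no buffer for this filter:
    std::streamsize optimal_buffer_size() const { return 0; }

    // line/character counting members omitted
};
```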

@spoonincode
Member Author

Since I've already tested these changes, going to go ahead and merge as-is even though it seems like the lack of buffering by default is due to improper counter implementation.

> The zlib de/compressor of transaction.cpp could likely benefit from a buffer as well.

It would be interesting to benchmark that now that it appears that usage gets a (small?) buffer by default
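As a rough sketch (assumed code, not the actual transaction.cpp usage), the zlib filter's constructor also accepts an explicit buffer size, and push() can be given one for the chain:

```cpp
// Hedged sketch with illustrative names: compress a byte vector with boost's zlib
// filter, passing explicit buffer sizes rather than relying on the small defaults.
#include <boost/iostreams/filtering_streambuf.hpp>
#include <boost/iostreams/filter/zlib.hpp>
#include <boost/iostreams/device/back_inserter.hpp>
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/copy.hpp>
#include <ios>
#include <vector>

namespace bio = boost::iostreams;

std::vector<char> zlib_compress(const std::vector<char>& in) {
   constexpr std::streamsize buf_size = 64*1024; // illustrative size

   std::vector<char> out;
   bio::filtering_ostreambuf comp;
   // zlib_compressor's second constructor argument is its internal buffer size;
   // push()'s second argument sizes the chain's buffer for this component.
   comp.push(bio::zlib_compressor(bio::zlib::default_compression, buf_size), buf_size);
   comp.push(bio::back_inserter(out), buf_size);

   // copy() closes the chain when done, which flushes the compressor.
   bio::copy(bio::array_source(in.data(), in.size()), comp);
   return out;
}
```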

@spoonincode spoonincode merged commit 3cefaf1 into main May 6, 2024
36 checks passed
@spoonincode spoonincode deleted the ship_ios_pipeline_compression_buffer branch May 6, 2024 20:45
@ericpassmore
Contributor

Note:start
group: STABILITY
category: PERFORMANCE
summary: Improve state history performance by using a buffer for the de/compression pipeline.
Note:end

5 participants