
Low Throughput #46

Open
libliflin opened this issue Jul 9, 2017 · 4 comments

Comments

@libliflin

Hi,

I'm not sure if this is the right spot for this question. I was hoping for some sort of Slack channel or Gitter or something.

I created a benchmark because I thought this driver might give a speed-up, but to my surprise I couldn't get much of a performance boost. To be fair, I'm completely new to observables, Undertow, and Postgres 😄.

Is there any supporting evidence that this is actually faster than JDBC, or is it a per-use-case kind of thing?

My code is at async-postgres-benchmark and specifically QueriesAsyncPostgresqlGetHandler.

Thanks!

@nilskp commented Oct 22, 2017

There's a common misconception that async (non-blocking really) is "faster", when in reality, you'll often see slightly increased latency.
Non-blocking behavior increases the amount of load an application can handle, which means it will perform better under load than the equivalent blocking program.
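
A rough sketch to make that concrete (this is not code from this project; NonBlockingDb below is a made-up interface standing in for any non-blocking driver): with blocking I/O every in-flight query occupies a thread, while a non-blocking driver can keep the same number of queries in flight from a single thread, at the cost of a little per-call bookkeeping.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class BlockingVsNonBlocking {

    // Made-up non-blocking driver interface, purely for illustration.
    interface NonBlockingDb {
        CompletableFuture<Integer> query(String sql);
    }

    // Blocking model: every in-flight query parks a thread until the database
    // answers, so 1000 concurrent queries need roughly 1000 threads.
    static void blockingModel(DataSource ds) {
        ExecutorService pool = Executors.newFixedThreadPool(1000);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                try (Connection c = ds.getConnection();
                     Statement st = c.createStatement();
                     ResultSet rs = st.executeQuery("select 1")) {
                    rs.next(); // the thread sits here waiting on the socket
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }

    // Non-blocking model: the caller only registers a callback and moves on,
    // so 1000 in-flight queries can share one event-loop thread. Each call
    // pays a little extra bookkeeping, which is the latency cost mentioned above.
    static void nonBlockingModel(NonBlockingDb db) {
        for (int i = 0; i < 1000; i++) {
            db.query("select 1")
              .thenAccept(n -> { /* handle the result on the driver's thread */ });
        }
    }
}
```

So a benchmark that issues one request at a time mostly measures that bookkeeping; the benefit only shows up once many queries are in flight at the same time.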

@gmokki (Collaborator) commented Oct 23, 2017

I benchmarked this project before and after RxJava 1 was introduced; before that it used plain callbacks. The RxJava 1 overhead slowed a simple "select 1" operation down by 30%. Hopefully the update to RxJava 2 will give some of that lost performance back.

Also, the official Postgres JDBC driver has a lot of optimizations that have not been ported to this driver, most notably support for binary encoding of values.
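
To illustrate where that wrapping overhead comes from, here is a rough sketch (not the driver's actual code; CallbackDb is a made-up callback-style API, and RxJava 2's Single is used only as an example of a reactive wrapper): every query pays for the extra Single, emitter and subscription allocations on top of the callback it wraps, which is exactly what a "select 1" micro-benchmark makes visible.

```java
import io.reactivex.Single;
import java.util.function.Consumer;

public class RxWrappingOverhead {

    // Made-up callback-style driver API, purely for illustration.
    interface CallbackDb {
        void query(String sql, Consumer<Integer> onResult, Consumer<Throwable> onError);
    }

    // Plain callback: the result is delivered directly to the lambda.
    static void plainCallback(CallbackDb db) {
        db.query("select 1",
                 result -> System.out.println("got " + result),
                 Throwable::printStackTrace);
    }

    // Reactive wrapper: the same callback, but each call also allocates a
    // Single, an emitter and a subscription before the query even starts.
    static Single<Integer> rxWrapped(CallbackDb db, String sql) {
        return Single.create(emitter ->
                db.query(sql, emitter::onSuccess, emitter::onError));
    }

    static void rxCallSite(CallbackDb db) {
        rxWrapped(db, "select 1")
                .subscribe(result -> System.out.println("got " + result),
                           Throwable::printStackTrace);
    }
}
```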

@nilskp commented Oct 23, 2017

IMO, it would be preferable to get rid of the RxJava dependency altogether.

EDIT: which seems to be the plan: #47 (comment)

@libliflin (Author)

Hi @nilskp and @gmokki

I'm not sure how my benchmark is measuring latency. I thought I understood that with async you trade off some latency for increased peak throughput. Please correct me if this is false.

My question/comment was about why every measurement I tried is worse than the serial option. It's hard to find what exactly is gained with this project.
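
For reference, this is roughly the shape of the non-serial measurement (a sketch only; the endpoint URL and concurrency level are placeholders, not the real values from my benchmark repo):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ConcurrentLoad {

    public static void main(String[] args) {
        // Placeholder endpoint; substitute the handler's actual URL.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/queries-async")).build();
        HttpClient client = HttpClient.newHttpClient();

        int inFlight = 256; // many requests in flight at once, not one at a time
        long start = System.nanoTime();

        CompletableFuture<?>[] pending = new CompletableFuture<?>[inFlight];
        for (int i = 0; i < inFlight; i++) {
            pending[i] = client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
        }
        CompletableFuture.allOf(pending).join();

        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d requests in %.3fs (%.0f req/s)%n",
                          inFlight, seconds, inFlight / seconds);
    }
}
```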
