Improve performance of Java agent #134
base: develop
Conversation
scavenger-model/src/main/java/com/navercorp/scavenger/util/HashGenerator.java (review thread: outdated, resolved)
First, thank you for contributing such a great PR.
An agent is characterized by the fact that it uses the client's resources.
If the scavenger agent is using more of the user's resources (CPU, memory, etc.) than before, I think that's a bigger issue than the performance improvement.
Is it possible to measure this as well?
(It doesn't seem to be an issue from the looks of it).
cc. @sohyun-ku @kojandy
I'll see what I can do :)
Scavenger Test Results: 167 files, 167 suites, 1m 39s ⏱️. Results for commit 135a84e. ♻️ This comment has been updated with latest results.
Hi @taeyeon-Kim, I've added memory profiling results to the PR description. However, despite my best efforts, I was unable to obtain consistent CPU profiling results from which to make a meaningful comparison between before and after my changes. I suspect sampling-based profiling is not conducive to measuring methods whose invocation time is on the order of nanoseconds. From the rough testing I have done, the overall impact on a running Java application is essentially negligible both before and after these changes.
Hi @sohyun-ku, @taeyeon-Kim and others 👋🏾

I've been working on some performance optimisations for the Java agent. I'd be keen to get your thoughts on these, and whether you think they are worthwhile.

The optimisations in this PR consist of the following:

- `MethodRegistry#extractSignature` algorithm optimisation
- Store invocations in a `ConcurrentHashMap` (rather than a `LinkedBlockingQueue`) and remove the worker thread (see the sketch after this list)

I've also made the following additional changes:

- Changes to `InvocationTracker` and `Scheduler`
- `SchedulerTest` unit tests updated to only test `Scheduler` (unblocked by 6.)

These changes can mostly be reviewed commit by commit, but I can also split them into multiple PRs if that would make the review easier.
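To give reviewers a concrete picture of the buffering change, here is a minimal sketch of the idea. The class and method names are illustrative rather than Scavenger's actual code, and the real PR uses a slightly richer structure (see the memory analysis further down); the sketch only shows the direction of the change, namely recording invocations directly into a `ConcurrentHashMap` on the instrumented thread instead of handing them to a `LinkedBlockingQueue` drained by a dedicated worker thread.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only, not the actual Scavenger implementation.
// Before: invocations were offered to a LinkedBlockingQueue and a worker
// thread drained them into the registry. After: the instrumented thread
// writes straight into a ConcurrentHashMap, so no queue hand-off and no
// worker thread are needed.
final class InvocationBufferSketch {

    // Keyed by method signature hash; the key set doubles as the buffer.
    private final ConcurrentHashMap<String, Boolean> invocations = new ConcurrentHashMap<>();

    // Called from application threads via the instrumentation advice.
    void register(String hash) {
        // putIfAbsent avoids rewriting entries that are already buffered,
        // keeping the common "already in buffer" path cheap.
        invocations.putIfAbsent(hash, Boolean.TRUE);
    }

    // Called by the publisher when invocation data is sent to the collector.
    Set<String> drainSnapshot() {
        Set<String> snapshot = Set.copyOf(invocations.keySet());
        invocations.clear();
        return snapshot;
    }
}
```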
Whilst developing and finalising the implementation of these optimisations, I ran some JMH benchmarks on my local machine to verify and measure the performance improvements.

Whilst I have limited experience writing and running such benchmarks, I took some care to avoid the most common pitfalls. I collected these results using Java 21 and the compiler blackhole configuration. That said, there are some discrepancies that I wasn't able to completely explain, so whilst I'm not fully confident in the absolute numbers, I have a reasonable level of confidence that the ordering of the results is correct.
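The benchmark harness itself isn't shown in this description, so purely as an assumed illustration of the setup described above (Java 21, compiler blackholes, nanosecond-scale methods), a JMH benchmark for one of these hot paths would look roughly like this. The class name and the benchmarked expression are placeholders, not the PR's actual benchmark code.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

// Assumed benchmark shape, not the PR's actual benchmark code.
// Run with -Djmh.blackhole.mode=COMPILER on a recent JDK to use compiler
// blackholes, which matter when the measured work is only a few nanoseconds.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SignatureExtractionBenchmark {

    private String signature;

    @Setup
    public void setUp() {
        // A single representative signature; a real benchmark would use a
        // corpus captured from an instrumented application.
        signature = "public java.lang.String com.example.FooService.bar(int, java.util.List)";
    }

    @Benchmark
    public void extractSignature(Blackhole bh) {
        // Placeholder workload standing in for MethodRegistry#extractSignature.
        bh.consume(signature.substring(signature.indexOf(' ') + 1));
    }
}
```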
Method hashing - `MethodRegistry#getHash` (`ConcurrentHashMap` cache disabled)

Invocation registering - `InvocationRegistry#register`
In the following benchmarks, 'not in buffer' refers to the state of the invocations buffer immediately upon agent startup, whereas 'reset buffer' refers to the state immediately after invocation data is published and the buffer is cleared.
*This was my initial alternative implementation (44c2418); I ended up settling on a slightly different approach which offers better 'reset buffer' performance and is arguably simpler.
Overall - `InvocationTracker#onInvocation`

In a real-world scenario, the vast majority of invocations fall into the 'hash cached, in buffer' scenario, where the latency is minimal irrespective of the optimisations. However, the latency spikes on application start-up and momentarily after every publication. Also notably, the worker thread is eliminated, leading to further indirect performance gains. In complex web applications with many tracked methods, there can be 10,000+ tracked invocations per served request. This can add up to a delay on the order of milliseconds, so these optimisations should measurably reduce the agent's worst-case overhead.
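To make the 'hash cached, in buffer' fast path concrete, here is a rough sketch of the shape of the hot path. The class and method names echo those mentioned above, but the bodies are simplified assumptions rather than the PR's code.

```java
import java.util.concurrent.ConcurrentHashMap;

// Simplified assumption of the hot path; the real classes carry more state.
final class InvocationTrackerSketch {

    // Signature -> hash cache: on the fast path this is a single map lookup.
    private final ConcurrentHashMap<String, String> hashCache = new ConcurrentHashMap<>();

    // Hash -> "seen since last publication": on the fast path the entry
    // already exists, so nothing structural is written.
    private final ConcurrentHashMap<String, Boolean> invocations = new ConcurrentHashMap<>();

    void onInvocation(String signature) {
        // 1. Hashing cost is paid only the first time a signature is seen
        //    ("hash cached" thereafter).
        String hash = hashCache.computeIfAbsent(signature, InvocationTrackerSketch::hash);

        // 2. Registering is a read-mostly operation once the hash is
        //    "in buffer", which covers the vast majority of invocations.
        invocations.putIfAbsent(hash, Boolean.TRUE);
    }

    private static String hash(String signature) {
        // Placeholder for the real hashing (MD5 here; the MurmurHash change
        // was split out into PR #142).
        return Integer.toHexString(signature.hashCode());
    }
}
```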
Following a discussion in the comments, the change of the hashing algorithm from MD5 to MurmurHash was extracted into a separate PR, #142.
Memory profiling
The Scavenger agent was attached to a Java application and then a sequence of E2E tests were executed against the application. This was done in an automated and reproducible manner. A memory snapshot was then taken, prior to the first invocations publication. Despite efforts to control the environment, there is no guarantee that the volume and sequence of corresponding Java method invocations will be identical between independent runs, and so the following profiling results should be taken with a grain of salt.
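The description doesn't say which profiler produced the snapshots, so purely as a point of reference (the file name is arbitrary), here is one standard way to capture a heap snapshot of a running JVM programmatically, which can then be inspected in tools such as Eclipse MAT or VisualVM.

```java
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

// Reference example only; the PR does not state how its snapshots were taken.
public final class HeapSnapshot {

    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // true = dump only live (reachable) objects into an .hprof file.
        diagnostics.dumpHeap("scavenger-agent-heap.hprof", true);
    }
}
```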
Before PR
After PR
Comparison
`MethodRegistry`

`InvocationRegistry`
Analysis
The memory usage of the `MethodRegistry` has seemingly decreased. This is likely due to optimisation (2), where we no longer cache synthetic method signatures and instead exclude them on advice installation.

The memory usage of the `InvocationRegistry` has seemingly increased. This is expected, as we are now using a more complex data structure comprising a `ConcurrentHashMap` and `BooleanHolder`s rather than just a `HashSet`. This increased memory usage allows us to benefit from improved invocation registering performance on startup and immediately after a publication (refer to the 'Invocation registering' benchmarks above).

The total memory usage of these two objects is more difficult to ascertain, as the method signature hash Strings are shared between them, so the total should be less than the sum of the individual figures. Overall, I am not too concerned about the changes to memory usage here: the growth is still bounded by the number of methods tracked, and the absolute memory usage remains relatively low.
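To illustrate the trade-off being described, here is an assumed sketch based on the class names above (not the exact code in the PR): keeping one small flag object per tracked method costs more memory than a bare `HashSet` of hashes, but it lets a publication reset flip flags in place rather than rebuild the set, which is what improves the 'reset buffer' registering performance.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Assumed shape only, based on the class names mentioned in this PR.
final class InvocationRegistrySketch {

    // One holder per tracked method; entries live for the lifetime of the
    // agent, which is why memory grows relative to a plain HashSet<String>.
    static final class BooleanHolder {
        volatile boolean invoked;
    }

    private final ConcurrentHashMap<String, BooleanHolder> invocations = new ConcurrentHashMap<>();

    void register(String hash) {
        BooleanHolder holder = invocations.computeIfAbsent(hash, h -> new BooleanHolder());
        // Cheap volatile write once the entry exists; no structural change.
        if (!holder.invoked) {
            holder.invoked = true;
        }
    }

    // Publication: read out the invoked hashes and reset the flags in place,
    // instead of clearing and repopulating a set.
    void publish(Consumer<String> sink) {
        for (Map.Entry<String, BooleanHolder> entry : invocations.entrySet()) {
            BooleanHolder holder = entry.getValue();
            if (holder.invoked) {
                sink.accept(entry.getKey());
                holder.invoked = false;
            }
        }
    }
}
```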