
Is it possible to freeze templates when training? #82

Open
lkyhfx opened this issue May 29, 2023 · 2 comments
Labels
question Further information is requested

Comments


lkyhfx commented May 29, 2023

Currently, I can't find a way to freeze a template once its cluster reaches some criterion, such as when the cluster size is large enough.
By adding the ability to freeze templates, we could adopt template extraction incrementally: keep the model training, choose some templates, and assign meaningful names to the template variables for downstream analysis. Since the templates are frozen after reaching the criterion, the code we use to extract the variables doesn't need to change. If some day we become interested in other newly added templates, we just write new code to provide that data for downstream use.


Superskyyy commented Jun 28, 2023

Hi, the template-freezing functionality can be partially achieved by training drain3 offline and then using it without calling add_log_message. If you want to implement some threshold to stop changing old templates on the fly, we should figure out exactly what criteria should be used and what parameters to expose for configuration. @lkyhfx

Some aspects to think about: e.g., what counts as a "large enough" cluster size? That seems highly unpredictable to me.

A better way might be: if cluster X hasn't changed for, say, some period of time or count of logs ingested, we freeze that particular cluster. Eventually, all of them end up frozen.
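The inactivity-based criterion above could be sketched roughly like this. Note this is a minimal illustration, not Drain3's actual API: the names `FreezableCluster` and `FreezePolicy` are hypothetical, and a real implementation would hook into the miner's cluster-update path.

```python
# Hypothetical sketch of the inactivity-based freeze policy discussed above.
# FreezableCluster / FreezePolicy are illustrative names, not Drain3 API.
from dataclasses import dataclass


@dataclass
class FreezableCluster:
    template: str
    size: int = 0
    frozen: bool = False
    last_changed_at: int = 0  # log-count "timestamp" of the last template change


class FreezePolicy:
    """Freeze a cluster once its template has not changed for
    `inactivity_window` ingested log messages."""

    def __init__(self, inactivity_window: int):
        self.inactivity_window = inactivity_window
        self.logs_ingested = 0

    def observe(self, cluster: FreezableCluster, template_changed: bool) -> None:
        """Record one log message routed to `cluster`."""
        self.logs_ingested += 1
        cluster.size += 1
        if cluster.frozen:
            return  # a frozen cluster's template never changes again
        if template_changed:
            cluster.last_changed_at = self.logs_ingested
        elif self.logs_ingested - cluster.last_changed_at >= self.inactivity_window:
            cluster.frozen = True
```

For example, with `inactivity_window=3`, a cluster whose template changes on the first log and then stays stable for three more logs gets frozen; the window size would be the configuration parameter to expose.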

@Superskyyy Superskyyy added the question Further information is requested label Jun 28, 2023

lkyhfx commented Jul 17, 2023

@Superskyyy Thank you for your response. My request is really just to add a feature that allows freezing a cluster in training mode; the specific strategy and implementation details are not crucial. Both approaches you mentioned are decent ways to achieve this goal.
