
Transfer Learning and Anomaly Attribution #87

Open
jspiliot opened this issue Dec 17, 2023 · 1 comment
jspiliot commented Dec 17, 2023

Hey,

Causica looks super promising, thanks for making it open!
I've been playing around for a few days now, trying to get a good grasp of it, and there are a couple of points that are still unclear to me:

  1. How can I do transfer learning / fine-tuning? I've trained a model on the entirety of the data, but I'd like to fine-tune it for a single customer who potentially doesn't have enough data to train a model on alone.
  2. How can I do anomaly attribution, or attribution of distributional changes? I.e., if something changed, could the model tell me its root cause? So far the only way I could find was to take the causica-generated graph into DoWhy to perform this, but that means, I think, that I'd lose the forward methods, as DoWhy does not work with them.

Looking forward to hearing any suggestions!

Cheers,
Jason

pawni commented Feb 13, 2024

Thanks for raising this @jspiliot!

  1. Do you have a similar dataset that you could use to train on? It's not quite clear what type of fine-tuning you're referring to. In the simplest case, where the variables are the same for the larger dataset and for the single customer's smaller dataset, you could train on the larger dataset and assume that the causal connections generalise across customers (and then potentially fine-tune on the specific customer). If the variables are not the same, you might still be able to make use of the composability of causal mechanisms. Do you have more insight into your problem setup?
  2. Similarly, what is your setup here? Generally, once you have trained a DECI model, you could compute different counterfactuals for the outlying sample and see which (minimal) change makes it behave "more normal". We don't have an explicit root-cause-analysis module built in, but the general functionality should be present.

Cheers,
Nick
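The pretrain-then-fine-tune route from point 1 can be sketched with a stand-in model. This is a minimal sketch of the pattern only, assuming the simple shared-variables case: the linear model, the `sgd` helper, and the learning rates are illustrative stand-ins chosen for this example, not causica API code (a trained DECI model would take the place of `w`).

```python
import numpy as np

def sgd(w, X, y, lr, steps):
    """Plain gradient descent on mean-squared error (stand-in for a real trainer)."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)

# Pooled data across all customers: plenty of samples.
X_all = rng.normal(size=(1000, 3))
y_all = X_all @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)

# One customer: slightly different mechanism, very little data.
X_cust = rng.normal(size=(30, 3))
y_cust = X_cust @ np.array([1.2, -2.0, 0.5]) + 0.1 * rng.normal(size=30)

# Pretrain on the pooled data, then fine-tune the same parameters
# on the customer's data with a lower learning rate and fewer steps.
w = sgd(np.zeros(3), X_all, y_all, lr=0.1, steps=200)
w_ft = sgd(w.copy(), X_cust, y_cust, lr=0.01, steps=50)

err_pooled = np.mean((X_cust @ w - y_cust) ** 2)
err_ft = np.mean((X_cust @ w_ft - y_cust) ** 2)
print(err_ft <= err_pooled)  # fine-tuning should not hurt the customer fit
```

The same shape applies to a DECI model: train once on the pooled dataset, then continue training from those weights on the customer's data with a reduced learning rate, rather than training from scratch on the small dataset.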
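The counterfactual attribution idea from point 2 can be sketched on a toy structural causal model. Assumption: the mechanisms and noise values are written out by hand here, whereas with a trained DECI model you would use its fitted mechanisms and abducted noises; the variable names (`x1`, `x2`, `y`) and the reference values are hypothetical.

```python
# Toy SCM: X1 -> X2 -> Y, with X2 = 2*X1 + n2 and Y = X2 + n3.
def mechanisms(x1, n2, n3):
    x2 = 2.0 * x1 + n2
    y = x2 + n3
    return x2, y

# "Normal" reference values for each variable (e.g. training-set medians).
ref = {"x1": 0.0, "x2": 0.0}

# An anomalous observation whose true root cause is a shift in X1.
x1_obs = 5.0
n2_obs, n3_obs = 0.1, -0.2            # abducted exogenous noises of this sample
x2_obs, y_obs = mechanisms(x1_obs, n2_obs, n3_obs)

def outlier_score(y):
    return abs(y)                      # distance from the normal mean (0 here)

baseline = outlier_score(y_obs)

# For each candidate variable, counterfactually set it to its reference value
# (keeping the sample's noises fixed) and see how much Y normalises.
scores = {}
x2_cf, y_cf = mechanisms(ref["x1"], n2_obs, n3_obs)   # intervene on X1
scores["x1"] = baseline - outlier_score(y_cf)
y_cf2 = ref["x2"] + n3_obs                            # intervene on X2
scores["x2"] = baseline - outlier_score(y_cf2)

root_cause = max(scores, key=scores.get)
print(root_cause)  # → x1
```

The variable whose counterfactual normalisation best removes the anomaly is attributed as the root cause; with DECI, the abduction and propagation steps would use the learned noise posteriors and mechanisms instead of the hand-written ones above.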
