Hi Community!
I’ve noticed a potential issue with using the LIME package for ML model explainability, and I’d appreciate some clarification. Because LIME is a local method, it generates perturbed samples around the instance being explained by drawing each feature independently, so any multicollinearity present in the data is ignored. When features are strongly correlated, this inflates the variance of the surrogate’s linear regression coefficients, so the attributions can swing noticeably between runs, compromising the reliability of the explanations.
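To illustrate what I mean, here is a minimal sketch on synthetic data (the feature names x1/x2, the model, and the seed loop are just placeholders, not a claim about any particular dataset). It builds two nearly collinear features, fits a fixed model, and runs LIME repeatedly; the per-feature local weights can vary quite a bit from seed to seed even though nothing about the model changes:

```python
# Minimal sketch: LIME coefficient instability under multicollinearity.
# Synthetic data and names (x1, x2) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)  # x2 is almost collinear with x1
X = np.column_stack([x1, x2])
y = 3.0 * x1 + 2.0 * x2 + rng.normal(scale=0.1, size=n)

model = LinearRegression().fit(X, y)

# Re-run LIME with different seeds on the same instance and model.
for seed in range(5):
    explainer = LimeTabularExplainer(
        X, feature_names=["x1", "x2"], mode="regression", random_state=seed
    )
    exp = explainer.explain_instance(X[0], model.predict, num_features=2)
    # With near-collinear features, these weights fluctuate across seeds.
    print(seed, exp.as_list())
```

In my understanding, the instability comes from LimeTabularExplainer sampling each feature independently, which breaks the correlation structure that the underlying model was trained on.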
Has anyone else encountered this issue? If so, how have you addressed it?
Thanks!