I am currently trying to implement LRP on my own graph neural network, aiming to obtain relevances at the node-feature level. In my implementation, node features are roughly updated as:
H_next = (A.dot(H)).dot(W)
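For concreteness, here is a minimal PyTorch sketch of the layer I mean; all names and shapes are just illustrative:

```python
import torch

class GCNLayer(torch.nn.Module):
    """Minimal sketch of the update H_next = (A @ H) @ W.

    A: (N, N) normalized adjacency, H: (N, F_in) node features,
    W: (F_in, F_out) learned weights. Names are illustrative only.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.W = torch.nn.Parameter(0.1 * torch.randn(in_features, out_features))

    def forward(self, A, H):
        # Aggregate neighbour features via A, then apply the linear map W.
        return (A @ H) @ self.W
```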
I see, however, that the torchgraph implementation, and thus the LRP step in your research, only multiplies the node features with the weights. I would therefore like to better understand how the node adjacencies are taken into account in these computations (see my sketch below).
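To make the question concrete, this is how I currently (perhaps wrongly) propagate relevance through such a layer with a simple epsilon rule: first back through W, then back through A. This reflects only my own understanding, not your code; all names and shapes are my assumptions:

```python
import torch

def lrp_epsilon_gcn(A, H, W, R_out, eps=1e-6):
    """Epsilon-rule sketch for a layer H_next = (A @ H) @ W.

    Relevance R_out on the layer output is first redistributed through W
    (treating M = A @ H as the layer input) and then through A (treating H
    as the input), so the adjacencies participate in the relevance flow.
    My own sketch only, not the repository's implementation.
    """
    M = A @ H                                  # aggregated features, (N, F_in)
    Z = M @ W                                  # layer output,        (N, F_out)

    # Step 1: output relevance back onto the aggregated features M.
    S = R_out / (Z + eps * torch.sign(Z))
    R_M = M * (S @ W.t())                      # (N, F_in)

    # Step 2: relevance on M back onto H, redistributed along A.
    S2 = R_M / (M + eps * torch.sign(M))
    R_H = H * (A.t() @ S2)                     # (N, F_in)
    return R_H
```

Is this two-step decomposition (W first, then A) consistent with what your LRP step does when it only multiplies the node features with the weights, i.e. is the adjacency step handled implicitly somewhere else?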
Thank you, and thank you for the great research and examples!