[todo] structure module #93

Open · lucidrains opened this issue Jul 26, 2021 · 3 comments

@lucidrains (Owner)

in the structure module, the FAPE loss needs to be applied on every iteration, while the rotation needs to have a stop-gradient for all but the last iteration
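
For reference, a minimal PyTorch sketch of that pattern; `update_fn` here is a hypothetical stand-in for the IPA update that emits per-iteration rotation/translation deltas, not this repo's actual API:

```python
import torch

def iterate_frames(single_repr, update_fn, num_iters=8):
    # sketch only: run the structure-module loop, detaching rotations between
    # iterations so gradients do not backpropagate through the chain of
    # composed rotations (they still reach each iteration's own FAPE loss)
    n = single_repr.shape[0]
    rotations = torch.eye(3, device=single_repr.device).expand(n, 3, 3)
    translations = torch.zeros(n, 3, device=single_repr.device)
    frames = []
    for i in range(num_iters):
        rot_delta, trans_delta = update_fn(single_repr, rotations, translations)
        rotations = rotations @ rot_delta
        translations = translations + trans_delta
        frames.append((rotations, translations))
        if i < num_iters - 1:
            # stop-gradient on the rotations, per the AlphaFold2 supplement;
            # translations keep their gradients across iterations
            rotations = rotations.detach()
    return frames
```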

@hypnopump (Collaborator)

May I ask what "the rotation needs to have a stop-gradient for all but the last iteration" means?

The FAPE function (https://github.com/EleutherAI/mp_nerf/blob/master/mp_nerf/ml_utils.py#L102) satisfies the main constraints outlined in the methods section. Do you find anything missing? I can add it.
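
(For context, a minimal sketch of what FAPE computes; this is illustrative, not the linked mp_nerf function. Points are expressed in every local frame via the inverse transform, the aligned predicted and true distances are compared, then clamped and rescaled by 10 Å.)

```python
import torch

def fape(pred_frames, true_frames, pred_points, true_points,
         clamp=10.0, eps=1e-8):
    # frames are (R, t) pairs: R (n, 3, 3) rotations, t (n, 3) translations;
    # points are (m, 3) atom coordinates
    def to_local(frames, points):
        R, t = frames
        # inverse-frame transform R^T (x - t), shape (n, m, 3)
        return torch.einsum('nij,nmj->nmi',
                            R.transpose(-1, -2), points[None] - t[:, None])
    d = ((to_local(pred_frames, pred_points)
          - to_local(true_frames, true_points)) ** 2).sum(-1).add(eps).sqrt()
    # clamp at 10 Angstrom and normalize by the same scale, as in the paper
    return d.clamp(max=clamp).mean() / clamp
```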

@hypnopump added the enhancement (New feature or request) label on Jul 26, 2021
@lucidrains (Owner, Author)

> May I ask what "the rotation needs to have a stop-gradient for all but the last iteration" means?

5214156 took care of it here :)

just need to apply the FAPE loss every iteration and sum up the auxiliary losses now!
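
A hedged sketch of that accumulation, assuming the loop stores per-iteration frames in `frames` and per-iteration coordinates in a hypothetical `coords_per_iter`; note the paper averages the auxiliary FAPE over iterations rather than summing:

```python
import torch

aux = [fape(f, true_frames, c, true_points)
       for f, c in zip(frames, coords_per_iter)]
loss = torch.stack(aux).mean()
```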

@lucidrains (Owner, Author)

> The FAPE function satisfies the main constraints outlined in the methods section: https://github.com/EleutherAI/mp_nerf/blob/master/mp_nerf/ml_utils.py#L102

they have this MultiRigidSidechain class (https://github.com/deepmind/alphafold/blob/0bab1bf84d9d887aba5cfb6d09af1e8c3ecbc408/alphafold/model/folding.py#L931), and it seems like there's a parameterized transformation before the loss itself is applied
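
A loose sketch of that kind of head, a hypothetical `TorsionHead` in the spirit of (but not identical to) MultiRigidSidechain: a small residual MLP predicts (sin, cos) torsion pairs that parameterize the side-chain frames before the loss is computed:

```python
import torch
import torch.nn as nn

class TorsionHead(nn.Module):
    # illustrative module, not DeepMind's exact MultiRigidSidechain
    def __init__(self, dim, num_torsions=7, hidden=128):
        super().__init__()
        self.proj = nn.Linear(dim, hidden)
        self.resblock = nn.Sequential(
            nn.ReLU(), nn.Linear(hidden, hidden),
            nn.ReLU(), nn.Linear(hidden, hidden),
        )
        self.to_angles = nn.Linear(hidden, num_torsions * 2)

    def forward(self, single_repr):
        h = self.proj(single_repr)
        h = h + self.resblock(h)  # residual refinement of the representation
        angles = self.to_angles(h).reshape(*single_repr.shape[:-1], -1, 2)
        # normalize each (sin, cos) pair onto the unit circle
        return angles / angles.norm(dim=-1, keepdim=True).clamp(min=1e-8)
```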
