
Memory usage during model load #203

Answered by Turakar
Turakar asked this question in Ideas

Workaround:
Step 1: Load the state dict on a machine with enough memory and delete the keys that are not needed for inference. In my case, the checkpoint files were stored in ~/.cache/torch/hub/checkpoints/.

import torch

# Load the full checkpoint, then drop the optimizer state, which is
# only needed to resume training, not for inference.
state_dict = torch.load("<path>.pt")
if "optimizer_history" in state_dict:
    del state_dict["optimizer_history"]
if "last_optimizer_state" in state_dict:
    del state_dict["last_optimizer_state"]
torch.save(state_dict, "<path>-inference.pt")
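
If the checkpoint was saved from GPU tensors, torch.load will by default try to restore them onto those devices. A variant of the load above that keeps everything on CPU, using torch.load's standard map_location argument:

state_dict = torch.load("<path>.pt", map_location="cpu")  # materialize all tensors on CPU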

Step 2: Copy the regression weights file from <path>-contact-regression.pt to <path>-inference-contact-regression.pt.
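
Equivalently, in Python (this is just a plain file copy; the destination name must mirror the new checkpoint name, since the loader appears to derive the regression-weights path from the checkpoint filename):

import shutil

shutil.copyfile("<path>-contact-regression.pt",
                "<path>-inference-contact-regression.pt")
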
Step 3: Move both files to the low-memory machine.
Step 4: Load the model there with:

import esm
model, alphabet = esm.pretrained.load_model_and_alphabet_local("<path>-inference.pt")
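
For completeness, a minimal end-to-end usage sketch on the low-memory machine, assuming the fair-esm batch-converter API. The example sequence is arbitrary, and repr_layers=[33] assumes an ESM-1b-style 33-layer model; adjust both for your checkpoint:

import torch
import esm

model, alphabet = esm.pretrained.load_model_and_alphabet_local("<path>-inference.pt")
model.eval()  # inference only; disables dropout

batch_converter = alphabet.get_batch_converter()
data = [("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG")]
labels, strs, tokens = batch_converter(data)

with torch.no_grad():  # skip autograd bookkeeping to save memory
    results = model(tokens, repr_layers=[33], return_contacts=True)
token_representations = results["representations"][33]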
