Thank you for publishing this model! I noticed that loading the ESM-1b checkpoint requires a very large amount of RAM, which makes it hard to use on machines with limited memory.
Hmm I see, the ESM-1b model checkpoint is 7.3GB, which will have to fit in RAM.
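Much of that size is likely optimizer state left over from training rather than the model weights themselves, which is what the workaround below removes. As a minimal sketch to check this yourself (assuming access to at least one machine with enough RAM to hold the full file), you can load the state dict and print the rough size of each top-level entry:

import torch

# Load the full checkpoint onto the CPU.
ckpt = torch.load("<path>.pt", map_location="cpu")

def nbytes(obj):
    # Rough size estimate: sum tensor storage, recursing into containers.
    if torch.is_tensor(obj):
        return obj.numel() * obj.element_size()
    if isinstance(obj, dict):
        return sum(nbytes(v) for v in obj.values())
    if isinstance(obj, (list, tuple)):
        return sum(nbytes(v) for v in obj)
    return 0

for key, value in ckpt.items():
    print(f"{key}: ~{nbytes(value) / 1e9:.2f} GB")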
Workaround:

Step 1: Load the state dict on a machine with enough memory and delete the irrelevant keys. In my case, the files were stored in ~/.cache/torch/hub/checkpoints/.

import torch

# Load the full checkpoint, then drop the training-only entries.
state_dict = torch.load("<path>.pt")
if "optimizer_history" in state_dict:
    del state_dict["optimizer_history"]
if "last_optimizer_state" in state_dict:
    del state_dict["last_optimizer_state"]

# Save a slimmed-down copy for inference.
torch.save(state_dict, "<path>-inference.pt")

Step 2: Copy the regression weights file from <path>-contact-regression.pt to <path>-inference-contact-regression.pt.

Step 3: Move the files to the machine with low memory.

Step 4: Load it there using:

import esm

model, alphabet = esm.pretrained.load_model_and_alphabet_local("<path>-inference.pt")

@tomsercu what do you think about adding this to the ESM library and providing the stripped-down versions via Facebook's servers? I can contribute the code as a PR, or just copy it from here.
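If this were added to the library, the whole workaround could be a single helper. Here is a minimal sketch under the assumptions above (the two key names and the <name>-contact-regression.pt naming convention); strip_checkpoint is a hypothetical name, not an existing ESM function:

import shutil
import torch

def strip_checkpoint(path: str) -> str:
    # Hypothetical helper sketching the workaround above: write an
    # inference-only copy of an ESM checkpoint next to the original.
    state_dict = torch.load(path, map_location="cpu")
    for key in ("optimizer_history", "last_optimizer_state"):
        state_dict.pop(key, None)  # drop training-only state if present

    stripped = path[:-3] + "-inference.pt"
    torch.save(state_dict, stripped)

    # Keep the contact-regression weights next to the stripped checkpoint,
    # following the naming scheme used in Step 2.
    shutil.copyfile(path[:-3] + "-contact-regression.pt",
                    stripped[:-3] + "-contact-regression.pt")
    return stripped

Run this once on the large-memory machine, then move both output files to the low-memory machine and load them with esm.pretrained.load_model_and_alphabet_local as in Step 4.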