
Error in downloading model from huggingface #94

Open

Wesady opened this issue Dec 2, 2024 · 2 comments

Comments

@Wesady

Wesady commented Dec 2, 2024

Hi,
When I download the model from Hugging Face:

from transformers import AutoConfig, AutoModelForCausalLM

model_name = 'togethercomputer/evo-1-8k-base'
model_config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, revision="1.1_fix")
model_config.use_cache = True

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=model_config,
    trust_remote_code=True,
    revision="1.1_fix"
)

I got this error:

Could not locate the togethercomputer/evo-1-131k-base--configuration_hyena.py inside togethercomputer/evo-1-8k-base.

My transformers version is 4.27.0. Can you give me some help? Thank you very much!
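One possible workaround (an assumption on my part, not confirmed by the maintainers): the error suggests that the auto_map field in the repo's config.json references custom code under togethercomputer/evo-1-131k-base. If the 1.1_fix revision also ships configuration_hyena.py and modeling_hyena.py itself, downloading the repo locally and stripping the cross-repo prefix from auto_map should let transformers resolve the code in place. A minimal sketch:

import json
from pathlib import Path

from huggingface_hub import snapshot_download
from transformers import AutoConfig, AutoModelForCausalLM

# Download the repo into a local directory instead of the shared cache
local_dir = snapshot_download(
    'togethercomputer/evo-1-8k-base',
    revision='1.1_fix',
    local_dir='./evo-1-8k-base',
)

# auto_map entries of the form "togethercomputer/evo-1-131k-base--configuration_hyena.<class>"
# make transformers fetch the custom code from the other repo; dropping the
# "repo--" prefix resolves it from the files in this local directory instead.
config_path = Path(local_dir) / 'config.json'
config = json.loads(config_path.read_text())
if 'auto_map' in config:
    config['auto_map'] = {k: v.split('--')[-1] for k, v in config['auto_map'].items()}
config_path.write_text(json.dumps(config, indent=2))

model_config = AutoConfig.from_pretrained(local_dir, trust_remote_code=True)
model_config.use_cache = True

model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    config=model_config,
    trust_remote_code=True,
)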

@GoldenJin24

GoldenJin24 commented Dec 30, 2024

Sadly, I ran into the same issue. I downloaded the model files from Hugging Face, then loaded the evo-1.5-8k-base model offline with the code below:

import os
os.environ['TRANSFORMERS_OFFLINE'] = "1"

from transformers import AutoConfig, AutoModelForCausalLM

# local path to the downloaded model files
model_name = './evo-1.5-8k-base'

# load the config from local files, with the KV cache disabled
model_config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, use_cache=False)
model_config.use_cache = False

# load the model strictly from local files
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=model_config,
    trust_remote_code=True,  # required for the custom Hyena code
    use_cache=False,
    local_files_only=True
)

Sadly, I got the same error:

Could not locate the configuration_hyena.py inside togethercomputer/evo-1-131k-base.

The key confusion: I chose evo-1.5-8k-base, so why does the error mention evo-1-131k-base? It is a different model. 😭
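A plausible cause (my assumption, not verified against the repo): the auto_map field in the model's config.json declares its custom classes with a cross-repo prefix such as "togethercomputer/evo-1-131k-base--configuration_hyena.<class>", so transformers tries to fetch the code from the evo-1-131k-base repo even though a different model was requested. A quick way to check, assuming the files were downloaded to ./evo-1.5-8k-base:

import json

# Print the auto_map entries; a "togethercomputer/evo-1-131k-base--"
# prefix would explain why the error mentions the other repo.
with open('./evo-1.5-8k-base/config.json') as f:
    print(json.load(f).get('auto_map'))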

@GoldenJin24

GoldenJin24 commented Dec 30, 2024

Lucky! Although I did not solve the underlying problem, since I am in China I routed the downloads through the hf-mirror endpoint:

# bash: point Hugging Face downloads at the mirror
HF_ENDPOINT=https://hf-mirror.com    python    your_script.py
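The same endpoint can also be selected from inside Python; as far as I know, HF_ENDPOINT is read when huggingface_hub is imported, so it has to be set before any Hugging Face import (a small sketch, not from the original comment):

import os

# Must run before importing transformers / huggingface_hub,
# which read HF_ENDPOINT at import time.
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

from transformers import AutoModelForCausalLM  # noqa: E402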

The script itself follows the README:

from transformers import AutoModelForCausalLM, AutoTokenizer

from evo import Evo
import torch

device = 'cuda:0'

evo_model = Evo('evo-1.5-8k-base')
model, tokenizer = evo_model.model, evo_model.tokenizer
model.to(device)
model.eval()

print('================ start inference')
sequence = 'ATCG'
input_ids = torch.tensor(
    tokenizer.tokenize(sequence),
    dtype=torch.int,
).to(device).unsqueeze(0)

with torch.no_grad():
    logits, _ = model(input_ids) # (batch, length, vocab)

print('Logits: ', logits)
print('Shape (batch, length, vocab): ', logits.shape)

Success!
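As a quick sanity check (my own addition, not from the README), the last-position logits from the script above can be turned into next-token probabilities, reusing the logits tensor and the torch import:

# Turn the final-position logits (batch, length, vocab) into a
# probability distribution over the next token.
probs = torch.softmax(logits[0, -1].float(), dim=-1)
print('Most likely next token id:', int(probs.argmax()))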

In addition, my environment: RTX 3090; torch 2.0; flash_attn-2.5.6+cu118torch2.0cxx11abiFALSE-cp38-cp38-linux_x86_64; Python 3.8.
For users in China: I can provide a ready-to-use image on AutoDL (similar to a Docker image, for easy setup) if anyone needs it; just reach out.
