LitGPT fine-tuning doesn't use the GPU #1911

Open
strikene opened this issue Jan 20, 2025 · 0 comments
Labels: question (Further information is requested)

Comments

@strikene

This is the first time I have used LitGPT for fine-tuning, and as an experiment I chose Qwen2.5-0.5B.
I am running this on Windows Server 2022 Datacenter.

(.env) f:\litgpt>litgpt finetune Qwen/Qwen2.5-0.5B-Instruct --data JSON --data.json_path F:\litgpt\alpaca_zh_demo.json --data.val_split_fraction 0.1 --out_dir f:\out
I found that the whole process ran on the CPU, not on my GPU.

[Image attached]

I can confirm that CUDA 12.2 is installed correctly, and I can run GGUF-format models in LM Studio.
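
LM Studio ships its own runtime, so it does not tell us whether the PyTorch build inside the .env can see the GPU. Below is a minimal diagnostic sketch, assuming LitGPT relies on the pip-installed torch in this virtual environment, to check whether that build was compiled with CUDA support:

import torch

# torch.version.cuda is None for CPU-only wheels, which is the usual reason
# a fine-tuning run silently falls back to the CPU.
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device count:", torch.cuda.device_count())
    print("device 0:", torch.cuda.get_device_name(0))

If torch.cuda.is_available() prints False here, no LitGPT setting will move the run onto the GPU.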

Did I miss any settings?
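
(My guess, stated only as an assumption: on Windows, pip may have installed the CPU-only torch wheel by default. If the check above reports no CUDA support, reinstalling PyTorch from the CUDA wheel index, for example pip install torch --index-url https://download.pytorch.org/whl/cu121, should be enough, but I would like to confirm whether any LitGPT setting is also required.)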

strikene added the question label on Jan 20, 2025