This is the first time I have used LitGPT for fine-tuning; as an experiment, I chose Qwen2.5-0.5B.
I am running this on Windows Server 2022 Datacenter:
(.env) f:\litgpt>litgpt finetune Qwen/Qwen2.5-0.5B-Instruct --data JSON --data.json_path F:\litgpt\alpaca_zh_demo.json --data.val_split_fraction 0.1 --out_dir f:\out
I found that the whole process ran on the CPU, not my GPU.
I have confirmed that CUDA 12.2 is installed correctly, and I can run GGUF-format models in LM Studio.
Did I miss any settings?
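One common cause on Windows (an assumption, not confirmed by this issue) is that the PyTorch wheel inside the `.env` virtual environment is a CPU-only build, so LitGPT never sees the GPU even though the CUDA toolkit is installed system-wide. A minimal diagnostic sketch, run from inside the same environment (the `cuda_status` helper is hypothetical):

```python
def cuda_status():
    """Report whether the torch install in this environment can see a CUDA GPU."""
    try:
        import torch  # the same interpreter/venv that runs `litgpt finetune`
    except ImportError:
        return {"torch_installed": False, "cuda_available": False}
    return {
        "torch_installed": True,
        "torch_version": torch.__version__,  # CPU-only wheels often end in "+cpu"
        "cuda_available": torch.cuda.is_available(),
    }

print(cuda_status())
```

If `cuda_available` is `False`, reinstalling torch from the CUDA wheel index (per the PyTorch installation selector) inside the `.env` environment is the usual fix.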