From 7350c964cc41653d4cc2d0a493fc43d28f5fab0c Mon Sep 17 00:00:00 2001
From: Mamta Singh <168400541+quic-mamta@users.noreply.github.com>
Date: Thu, 2 Jan 2025 17:26:39 +0530
Subject: [PATCH] [QEff. Finetune]: Update finetune documentation (#208)

Update finetune documentation

Signed-off-by: Mamta Singh
---
 docs/source/finetune.md | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/docs/source/finetune.md b/docs/source/finetune.md
index c5ea96b4..c42a53f3 100644
--- a/docs/source/finetune.md
+++ b/docs/source/finetune.md
@@ -1,11 +1,15 @@
 # Finetune Infra
 
 This repository provides the infrastructure for finetuning models using different hardware accelerators such as QAIC.
-Same CLI can be used to run Finetuning on GPU by setting the device flag.
+The same CLI can be used to run finetuning on GPU by setting the device flag. (For finetuning on GPU, install the CUDA-specific torch build.)
 
 ## Installation
-Same as QEfficient along with QAIC Eager mode
+Same as QEfficient, along with QAIC PyTorch eager mode.
+For torch_qaic, assuming QEfficient is already installed:
+```bash
+pip install /opt/qti-aic/integrations/torch_qaic/py310/torch_qaic-0.1.0-cp310-cp310-linux_x86_64.whl
+```
 
 ## Finetuning
 
@@ -16,7 +20,6 @@ export HF_DATASETS_TRUST_REMOTE_CODE=True
 
 Export the ENV variables to get the device and HW traces and debugging logs
 ```bash
-export QAIC_DELAY_SEM_WAIT_AT_COPY=1 # For HW profile traces
 export QAIC_DEVICE_LOG_LEVEL=0 # For Device level logs
 export QAIC_DEBUG=1 # To understand the CPU fallback ops
 ```
@@ -33,12 +36,6 @@ To download the grammar dataset, visit this [link](https://github.com/meta-llama
 
 ## Usage
 
-Inside eager release docker,
-```bash
-export "LD_LIBRARY_PATH=/opt/qti-aic/dev/lib/x86_64/"
-pip install -e . --extra-index-url https://download.pytorch.org/whl/cpu
-```
-
 ### Single SOC finetuning on QAIC
 
 ```python
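For reference, the environment variables the patched documentation keeps (`HF_DATASETS_TRUST_REMOTE_CODE`, `QAIC_DEVICE_LOG_LEVEL`, `QAIC_DEBUG`) can also be set from Python before launching a finetuning run. This is a minimal sketch, not part of the patch itself; the variable names and values come from the doc, while setting them via `os.environ` instead of shell `export` is an illustrative choice:

```python
import os

# Variable names and values are taken from the patched finetune.md;
# setting them here instead of via shell `export` is illustrative only.
os.environ["HF_DATASETS_TRUST_REMOTE_CODE"] = "True"  # allow HF datasets that ship custom loading code
os.environ["QAIC_DEVICE_LOG_LEVEL"] = "0"             # enable device-level logs
os.environ["QAIC_DEBUG"] = "1"                        # surface CPU fallback ops for debugging
```

These must be set before the process that loads the dataset or talks to the device starts, since child processes inherit the environment at spawn time.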