Fine-tuning DistilBERT on the SQuAD dataset for question answering
This code is based on the Hugging Face tutorial on fine-tuning DistilBERT for question answering: https://huggingface.co/course/chapter7/7?fw=pt
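
The core of the pipeline follows that tutorial: tokenize question/context pairs with overflow handling, map each answer to token-level start/end positions, and fine-tune an `AutoModelForQuestionAnswering` head. Below is a condensed sketch, assuming the `distilbert-base-uncased` checkpoint and the standard `datasets`/`transformers` APIs; values such as `max_length=384` and `stride=128` are the tutorial's defaults, not necessarily the exact settings used here.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

checkpoint = "distilbert-base-uncased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
squad = load_dataset("squad")

def preprocess(examples):
    # Tokenize question/context pairs; long contexts are split into
    # overlapping windows ("features") of at most 384 tokens.
    inputs = tokenizer(
        examples["question"],
        examples["context"],
        max_length=384,
        truncation="only_second",
        stride=128,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    offsets_list = inputs.pop("offset_mapping")
    sample_map = inputs.pop("overflow_to_sample_mapping")
    start_positions, end_positions = [], []
    for i, offsets in enumerate(offsets_list):
        answer = examples["answers"][sample_map[i]]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        # Locate the context tokens (sequence id 1) in this feature.
        sequence_ids = inputs.sequence_ids(i)
        ctx_start = sequence_ids.index(1)
        ctx_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            # Answer is not fully inside this window: label it (0, 0).
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Otherwise find the first/last tokens covering the answer span.
            idx = ctx_start
            while idx <= ctx_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = ctx_end
            while idx >= ctx_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)
    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs

train_ds = squad["train"].map(
    preprocess, batched=True, remove_columns=squad["train"].column_names
)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)
```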
- Mixed-precision training (fp16) is used to speed up training; see the sketch after this list.
- The energy consumption and carbon emissions of model training are tracked with the eco2AI library (https://github.com/sb-ai-lab/Eco2AI); a usage sketch follows below.
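
Mixed precision is enabled through the `fp16` flag of `TrainingArguments`, which makes the `Trainer` run forward and backward passes in float16 with automatic loss scaling. A minimal sketch, continuing from the code above; the output directory and hyperparameters are illustrative:

```python
from transformers import Trainer, TrainingArguments, default_data_collator

args = TrainingArguments(
    output_dir="distilbert-finetuned-squad",  # illustrative output directory
    learning_rate=2e-5,                       # illustrative hyperparameters
    per_device_train_batch_size=16,
    num_train_epochs=3,
    fp16=True,  # mixed-precision training; requires a CUDA GPU
)
trainer = Trainer(
    model=model,            # model and train_ds from the sketch above
    args=args,
    train_dataset=train_ds,
    data_collator=default_data_collator,
)
```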
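
Tracking with eco2AI amounts to wrapping the training call in a `Tracker`'s `start()`/`stop()` pair, following the library's README; the project name and output file below are placeholders:

```python
import eco2ai

tracker = eco2ai.Tracker(
    project_name="distilbert-squad",                       # placeholder name
    experiment_description="fine-tuning DistilBERT on SQuAD",
    file_name="emission.csv",                              # output CSV path
)
tracker.start()
trainer.train()  # the Trainer built above
tracker.stop()   # appends energy (kWh) and CO2-equivalent estimates to the CSV
```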