A simple face mask detector using deep learning.
For this project we have used GitHub user X-zhangyang's "Real-World-Masked-Face-Dataset". The original source can be found here.
For the sake of convenience (in particular for demonstration purposes), we have already pre-processed the dataset and are mirroring it here. For model training, the original dataset mentioned above was used.
The model architecture is very loosely based on LeNet-5, with some added layers and increased complexity. The model definition can be found in architecture.py.
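For orientation, the sketch below shows roughly what a LeNet-5-style binary classifier of this kind could look like in PyTorch. It is illustrative only: the layer sizes, names, and input resolution are assumptions, and the actual model definition lives in `architecture.py`.

```python
# Illustrative sketch of a LeNet-5-style binary classifier (not the real architecture.py).
import torch
import torch.nn as nn

class MaskDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, 1),  # single logit; "unmasked" is the positive class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```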
More details regarding the architecture, and how it compares to other state-of-the-art solutions, can be found in the project report.
A link to a pre-trained model can be found here.
- Configure your local environment with the necessary dependencies. We recommend using conda for setting up your environment. This project uses Python 3.10.

  ```sh
  conda create -n <env_name> python=3.10
  conda activate <env_name>
  pip install -r requirements.txt
  ```
- Copy `example_config.ini` to `config.ini`. Make sure to adjust the config values based on your setup. Alternatively, you may also specify the values by exporting the following environment variables (see the file `.envrc` for an example):

  | Value        | Environment Var |
  | ------------ | --------------- |
  | Dataset path | `$DATASET_PATH` |
  | Testset path | `$TESTSET_PATH` |
  | Model path   | `$MODEL_PATH`   |
- (Optional) Download the pre-trained model and dataset. These can be downloaded by running `make model` and `make dataset` respectively. The output path(s) can be overridden by specifying `$OUT`.
We have included a Jupyter notebook in this project as a means of demonstrating the model's performance. The demo can be run as-is (provided you have completed the project setup). Utility functions to download the necessary testsets and pre-trained model weights are provided in the notebook.

The demo notebook can be started by running `jupyter-lab --port 8080` and then opening the file `demo.ipynb` from within the web GUI.
We have also included a live-evaluation script which uses the model to classify whether an individual in the webcam feed is wearing a mask (or not). The script is a little unreliable, so to get the best picture of the model's performance we recommend running the notebook first (see also: the evaluation of the dataset in the report). To run the webcam evaluator, ensure you have followed the setup guide and then execute the following commands:
```sh
# If you haven't already, download the model
OUT=$(pwd)/model.pt make model

MODEL_PATH=$(pwd)/model.pt python3 ./eval_model_webcam.py
```
The webcam feed is rendered in grayscale when no mask is detected and in color otherwise.
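For reference, a webcam loop with this behaviour might look roughly like the sketch below. This is not the contents of `eval_model_webcam.py`; the model-loading code, the input size (64x64), and the decision threshold are assumptions.

```python
# Hedged sketch of a webcam evaluation loop; the real eval_model_webcam.py may differ.
import cv2
import torch

model = torch.load("model.pt", map_location="cpu")  # assumes a fully serialized model
model.eval()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess to match the (assumed) training input
    inp = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    inp = torch.from_numpy(inp).permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW
    with torch.no_grad():
        prob_unmasked = torch.sigmoid(model(inp)).item()  # label 1 = "unmasked"
    if prob_unmasked > 0.5:
        # No mask detected: render the frame in grayscale
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.imshow("mask detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```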
To run a training loop, ensure you have taken the following steps:
- Complete the project setup.

- If you haven't already, download the dataset:

  ```sh
  OUT=$(pwd)/dataset make dataset
  ```
The training script can then be run with the following command:
```sh
WANDB_MODE=disabled DATASET_PATH=$(pwd)/dataset/train python3 train_model.py
```
Note: if you decide to test / train on a different dataset, ensure that the "unmasked" class is the positive class (label 1).
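To make the label convention concrete, a minimal training loop along these lines could look like the sketch below. It assumes a `torchvision` `ImageFolder` layout with `masked`/`unmasked` subdirectories and a single-logit model; `train_model.py` itself may be organised differently.

```python
import os
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumes $DATASET_PATH points at a folder with "masked"/"unmasked" subdirectories.
dataset = datasets.ImageFolder(
    os.environ["DATASET_PATH"],
    transform=transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()]),
)
# The scripts expect "unmasked" to be the positive class (label 1).
assert dataset.class_to_idx.get("unmasked") == 1

loader = DataLoader(dataset, batch_size=64, shuffle=True)
model = MaskDetector()  # hypothetical stand-in; use the real class from architecture.py
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels.float())
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```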
To run an evaluation loop on a batch of images, ensure you have taken the following steps:
- Complete the project setup.

- If you haven't already, download the dataset and model:

  ```sh
  OUT=$(pwd)/dataset make dataset
  OUT=$(pwd)/model.pt make model
  ```
The evaluation script can then be run with the following command:
```sh
MODEL_PATH=$(pwd)/model.pt TESTSET_PATH=$(pwd)/dataset/test python3 eval_model.py
```
Note: if you decide to evaluate on a different dataset, ensure that the "unmasked" class is the positive class (label 1).
It is possible to classify individual images with our model. Make sure you have completed all the steps in the setup section, including the specification of the path to the trained model.
Run the following command to classify an image:

```sh
MODEL_PATH=$(pwd)/model.pt python3 run_model.py --image=path/to/image
```
We have summarized our findings in a project report. You can view the rendered document here.
To build the report yourself, run `make report` (requires TeX to be configured on your system).