
dl_facemask_detector

A simple face mask detector using deep learning.

Table of Contents

  1. Dataset
  2. Model
  3. Setup
  4. Demo
  5. Training
  6. Evaluation
  7. Running on Images
  8. Report

Dataset

For this project we have used GitHub user X-zhangyang's "Real-World-Masked-Face-Dataset". The original source can be found here.

For the sake of convenience (in particular for demonstration purposes), we have already pre-processed the dataset and are mirroring it here. For model training, the original dataset mentioned above was used.

Model

The model architecture is very loosely based on LeNet-5, with some added layers and increased complexity. The model definition can be found in architecture.py.
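For a rough sense of this style of architecture, here is a minimal PyTorch sketch of a LeNet-5-like classifier with some extra capacity. The layer sizes and counts here are illustrative assumptions; the authoritative definition is the one in architecture.py.

import torch
import torch.nn as nn

class MaskClassifier(nn.Module):
    """Illustrative LeNet-5-style CNN: conv/pool feature extractor + MLP head."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128),  # infers the flattened size on the first forward pass
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))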

More details regarding the architecture and how it compares to other state of the art solutions can be found in the project report.

A link to a pre-trained model can be found here.

Setup

  1. Configure your local environment with the necessary dependencies. We recommend using conda for setting up your environment. This project uses Python 3.10.

    conda create -n <env_name> python=3.10
    conda activate <env_name>
    pip install -r requirements.txt

⚠️ If you just want to run the demo notebook, you can skip the next steps entirely.

  2. Copy example_config.ini to config.ini and adjust the config values to match your setup. Alternatively, you may specify the values by exporting the following environment variables (see the file .envrc for an example, and the sketch after this list):

    Value           Environment variable
    Dataset path    $DATASET_PATH
    Testset path    $TESTSET_PATH
    Model path      $MODEL_PATH
  3. (Optional) Download the pre-trained model and the dataset by running make model and make dataset, respectively. The output path(s) can be overridden by setting $OUT.
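As a sketch of how the values from step 2 can be resolved in code (the helper below and its precedence order are illustrative assumptions, not the project's actual loader):

import configparser
import os

def resolve_setting(key: str, env_var: str, config_file: str = "config.ini") -> str:
    """Prefer the environment variable; fall back to config.ini."""
    value = os.environ.get(env_var)
    if value:
        return value
    parser = configparser.ConfigParser()
    parser.read(config_file)
    return parser.get("paths", key)  # section name "paths" is an assumption

dataset_path = resolve_setting("dataset_path", "DATASET_PATH")
testset_path = resolve_setting("testset_path", "TESTSET_PATH")
model_path = resolve_setting("model_path", "MODEL_PATH")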

Demo

We have included a Jupyter notebook in this project to demonstrate the model's performance. The demo can be run as-is (provided you have completed the project setup). Utility functions to download the necessary testsets and pre-trained model weights are provided in the notebook.

The demo notebook can be started by running jupyter-lab --port 8080 and then opening the file demo.ipynb from within the web-GUI.

We have also included a live-evaluation script which uses the model to classify whether or not a masked individual is present in a webcam feed. The script is a little unreliable, so to get the "best" picture of the model's performance we recommend running the notebook first (see also: the evaluation of the dataset in the report). To run the webcam evaluator, ensure you have followed the setup guide and then execute the following commands:

# If you haven't already, download the model
OUT=$(pwd)/model.pt make model

MODEL_PATH=$(pwd)/model.pt python3 ./eval_model_webcam.py

The webcam feed is rendered in grayscale when no mask is detected and in color otherwise.
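Conceptually, the webcam evaluator boils down to the loop below. This is a simplified sketch assuming a PyTorch model and OpenCV; the input size, preprocessing, and the way model.pt is loaded are assumptions and may differ from eval_model_webcam.py.

import cv2
import torch

model = torch.load("model.pt", map_location="cpu")  # loading details may differ
model.eval()

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Assumed preprocessing: resize, convert HWC uint8 -> CHW float in [0, 1].
    resized = cv2.resize(frame, (64, 64))
    batch = torch.from_numpy(resized).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        label = model(batch).argmax(dim=1).item()
    # "Unmasked" is the positive class (label 1): grayscale the feed when no mask is seen.
    if label == 1:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("facemask detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()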

Training

To run a training loop, ensure you have taken the following steps:

  1. Complete the project setup.

  2. If you haven't already, download the dataset:

    OUT=$(pwd)/dataset make dataset

The training script can then be run with the following command:

WANDB_MODE=disabled DATASET_PATH=$(pwd)/dataset/train python3 train_model.py

Note: if you decide to test / train on a different dataset, ensure that the "unmasked" class is the positive class (label 1).
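For orientation, the essence of such a training loop looks roughly like this. It is a sketch, not the actual train_model.py: it assumes a torchvision ImageFolder layout in which the "unmasked" class folder maps to label 1, reuses the MaskClassifier sketch from the Model section, and uses placeholder hyperparameters.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
# Assumed layout: dataset/train/<class>/*.jpg, with "unmasked" mapping to label 1.
train_set = datasets.ImageFolder("dataset/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = MaskClassifier()  # see the sketch in the Model section
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()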

Evaluation

To run an evaluation loop on a batch of images, ensure you have taken the following steps:

  1. Complete the project setup.

  2. If you haven't already, download the dataset and model:

    OUT=$(pwd)/dataset make dataset
    OUT=$(pwd)/model.pt make model

The evaluation script can then be run with the following command:

MODEL_PATH=$(pwd)/model.pt TESTSET_PATH=$(pwd)/dataset/test python3 eval_model.py

Note: if you decide to evaluate on a different dataset, ensure that the "unmasked" class is the positive class (label 1).
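In the same spirit, a batch evaluation pass amounts to roughly the following (a sketch; the ImageFolder layout, preprocessing, and model loading are assumptions):

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
test_set = datasets.ImageFolder("dataset/test", transform=transform)
loader = DataLoader(test_set, batch_size=32)

model = torch.load("model.pt", map_location="cpu")  # loading details may differ
model.eval()

correct = total = true_pos = pred_pos = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
        # "Unmasked" is the positive class (label 1), per the note above.
        true_pos += ((preds == 1) & (labels == 1)).sum().item()
        pred_pos += (preds == 1).sum().item()

print(f"accuracy:  {correct / total:.3f}")
print(f"precision: {true_pos / max(pred_pos, 1):.3f}")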

Running on Images

It is possible to classify individual images with our model. Make sure you have completed all the steps in the setup section, including the specification of the path to the trained model.

Run the following command to classify an image:

MODEL_PATH=$(pwd)/model.pt python3 run_model.py --image=path/to/image
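Under the hood, single-image classification reduces to a few lines (a sketch; image size, normalization, and model loading are assumptions):

import sys
import torch
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])

model = torch.load("model.pt", map_location="cpu")  # loading details may differ
model.eval()

image = Image.open(sys.argv[1]).convert("RGB")
batch = transform(image).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    label = model(batch).argmax(dim=1).item()
print("unmasked" if label == 1 else "masked")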

Report

We have summarized our findings in a project report. You can view the rendered document here.

To build the report yourself, run make report (requires a TeX distribution to be configured on your system).
