Neural networks are, in general, vulnerable to perturbations, which can be naturally occurring or adversarially generated. In the context of autonomous driving, such perturbations can significantly degrade the performance of an object detector. We assess this impact and aim to make object detectors robust to these perturbations.
We use the Berkeley Deep Drive dataset and compare our work on two single-shot object detectors: YOLOv3 and RetinaNet. We then improve and measure robustness with two techniques: adversarial training and invariant representation learning (IRL).
```
src
|__ code
    |_ YOLOv3
    |_ retinanet
    |_ DSSD
    |_ shared
    |_ attacks
    |_ evaluate
    |_ representation_learning
```
This repository contains PyTorch implementations of the following three object detectors:
- YOLOv3
- RetinaNet
- DSSD
It uses the Berkeley Deep Drive dataset.
The following three perturbations have been implemented in the attacks module (a sketch of the gradient-based attacks follows the list):
- Random noise: `generate_noisy_image(images, noise)`
- FGSM: `generate_fgsm_image(model, images, targets, epsilon, model_type)`
- PGD: `generate_pgd_image(model, images, targets, alpha, epsilon, iterations, model_type)`
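To make the attack interfaces above concrete, here is a minimal PyTorch sketch of how FGSM and PGD perturbations of this kind are typically computed. The `compute_loss` callable is a hypothetical stand-in for the detector-specific loss selected via `model_type`, and pixel values are assumed to be normalized to [0, 1]:

```python
import torch

def fgsm_perturb(model, images, targets, epsilon, compute_loss):
    # One-step FGSM: move each pixel by epsilon in the direction that
    # increases the detection loss (sign of the input gradient).
    images = images.clone().detach().requires_grad_(True)
    model.zero_grad()
    loss = compute_loss(model, images, targets)  # hypothetical loss wrapper
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # assumes inputs normalized to [0, 1]

def pgd_perturb(model, images, targets, alpha, epsilon, iterations, compute_loss):
    # PGD: iterated FGSM with step size alpha, re-projected into the
    # L-infinity ball of radius epsilon around the original image.
    orig = images.clone().detach()
    adv = orig.clone()
    for _ in range(iterations):
        adv.requires_grad_(True)
        model.zero_grad()
        loss = compute_loss(model, adv, targets)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            adv = orig + (adv - orig).clamp(-epsilon, epsilon)  # project
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```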
Figures: detection on a clean image, on an FGSM-attacked image, and on a PGD-attacked image.
The following two robustness-improving techniques have been implemented:
- Adversarial training
- Invariant representation learning (IRL)
Results of adversarial training:

To run adversarial training:
```
cd src
# For RetinaNet:
python code/retinanet/train.py --attack_type FGSM --eps 4
# For YOLOv3:
python code/YOLOv3/train.py --epochs 50 --training_type Adversarial --attack_type FGSM --eps 2
```
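Conceptually, adversarial training (Madry et al., 2017) crafts attacked examples from each batch on the fly and takes the gradient step on those instead of, or in addition to, the clean batch. Below is a minimal sketch of one such training step, reusing the hypothetical `fgsm_perturb` and `compute_loss` helpers from the attack sketch above; the actual train.py scripts may differ in details:

```python
def adversarial_training_step(model, images, targets, optimizer, epsilon, compute_loss):
    # Craft adversarial examples against the current model parameters...
    model.eval()  # fix BN/dropout statistics while crafting the attack
    adv_images = fgsm_perturb(model, images, targets, epsilon, compute_loss)
    # ...then update the model on the attacked batch.
    model.train()
    optimizer.zero_grad()
    loss = compute_loss(model, adv_images, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```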
To run IRL training:
```
cd src
# For RetinaNet:
python code/retinanet/train.py --epochs 30 --irl 1 --irl_noise_type random_noise --irl_alpha 0.8 --irl_beta 0.2 --irl_gamma 0.1 --irl_loss_type 8 --eps 16
# For YOLOv3:
python code/YOLOv3/train.py --epochs 50 --training_type IRL --attack_type FGSM --eps 2 --irl_loss_type 6 --irl_alpha 0.5 --irl_beta 0.5 --irl_gamma 1.0
```
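The `--irl_alpha`, `--irl_beta`, and `--irl_gamma` flags suggest a weighted multi-term objective in the spirit of Salazar et al. (2018): task losses on clean and perturbed inputs plus a penalty that pulls their internal representations together. How these weights and `--irl_loss_type` map onto terms in this repository is an assumption here; the sketch below shows one common formulation, with `get_features` as a hypothetical hook returning an intermediate feature map of the detector:

```python
import torch.nn.functional as F

def irl_loss(model, images, noisy_images, targets,
             alpha, beta, gamma, compute_loss, get_features):
    # Task loss on clean and on perturbed inputs (e.g. produced by
    # generate_noisy_image or generate_fgsm_image).
    clean_loss = compute_loss(model, images, targets)
    noisy_loss = compute_loss(model, noisy_images, targets)
    # Invariance penalty: distance between clean and perturbed
    # intermediate representations (cf. Liang et al., 2018).
    invariance = F.mse_loss(get_features(model, images),
                            get_features(model, noisy_images))
    return alpha * clean_loss + beta * noisy_loss + gamma * invariance
```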
References:

Adversarial training
- A. Madry et al., "Towards Deep Learning Models Resistant to Adversarial Attacks", arXiv:1706.06083, 2017.
- A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial Machine Learning at Scale", arXiv:1611.01236, 2016.

IRL
- J. Salazar et al., "Invariant Representation Learning for Robust Deep Networks", NeurIPS Workshop, 2018.
- D. Liang et al., "Learning Noise-Invariant Representations for Robust Speech Recognition", IEEE SLT, 2018.