Possible Errors in scripts under notebook/ and 'issues' forum. #29

**Open** · wants to merge 6 commits into `base: pytorch`
**README.md** (4 additions, 2 deletions)

```diff
@@ -2,6 +2,8 @@
 This codebase implements the system described in the paper ["AdaptIS: Adaptive Instance Selection Network"](https://arxiv.org/abs/1909.07829) by Konstantin Sofiiuk, Olga Barinova, Anton Konushin. Accepted at ICCV 2019.
 The code performs **instance segmentation** and can also be used for **panoptic segmentation**.
 
+**[UPDATE]** We have released a PyTorch implementation of our algorithm (it currently supports only the ToyV1 and ToyV2 datasets on a single GPU). See the [pytorch](https://github.com/saic-vul/adaptis/tree/pytorch) branch.
+
 <p align="center">
 <img src="./images/adaptis_model_scheme.png" alt="drawing" width="600"/>
 </p>
@@ -12,7 +14,7 @@
 We generated an even more complex synthetic dataset to show the main advantage of our algorithm over other detection-based instance segmentation algorithms. The new dataset contains 25000 images for training and 1000 images each for validation and testing. Each image has a resolution of 128x128 and can contain from 12 to 52 highly overlapping objects.
 
-You can download the ToyV2 dataset from [here](https://drive.google.com/open?id=1iUMuWZUA4wzBC3ka01jkUM5hNqU3rV_U). You can test and visualize the model trained on this dataset using [this](notebooks/test_toy_v2_model.ipynb) notebook.
+You can download the ToyV2 dataset from [here](https://drive.google.com/open?id=1iUMuWZUA4wzBC3ka01jkUM5hNqU3rV_U). You can test and visualize the model trained on this dataset using [this](notebooks/test_toy_v2_model.ipynb) notebook. You can download a pretrained model from [here](https://drive.google.com/open?id=1RxepfpJF5gRpRNYu1urdV748suF3TL5k).
 
 ![alt text](./images/toy_v2_comparison.jpg)
@@ -23,7 +25,7 @@
 * **original** contains generated samples without augmentations;
 * **augmented** contains generated samples with fixed augmentations (random noise and blur).
 
-We trained our model on the original/train part and tested it on the augmented/test part. You can download the toy dataset from [here](https://drive.google.com/open?id=161UZrYSE_B3W3hIvs1FaXFvoFaZae4FT). The repository provides an example of testing and metric evaluation for the toy dataset. You can test and visualize the trained model on the toy dataset using the [provided](notebooks/test_toy_model.ipynb) Jupyter Notebook.
+We trained our model on the original/train part and tested it on the augmented/test part. You can download the toy dataset from [here](https://drive.google.com/open?id=161UZrYSE_B3W3hIvs1FaXFvoFaZae4FT). The repository provides an example of testing and metric evaluation for the toy dataset. You can test and visualize the trained model on the toy dataset using the [provided](notebooks/test_toy_model.ipynb) Jupyter Notebook. You can download a pretrained model from [here](https://drive.google.com/open?id=1IuJUh0JvbKYILBxCeO2h6U4LG-9DoTHi).
 
 ### Setting up a development environment
```
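The ToyV2 paragraph above pins down concrete dataset facts: a 25000/1000/1000 split and 128x128 images. A quick way to sanity-check a download against those numbers, assuming the archive extracts to per-split image folders (the folder layout and file extension below are guesses, not documented in the README):

```python
# Hypothetical sanity check for a downloaded ToyV2 archive: verify the
# advertised 128x128 resolution and the 25000/1000/1000 split sizes.
# The directory layout and *.png extension are assumptions.
from pathlib import Path
from PIL import Image

root = Path('toy_v2')  # assumed extraction folder
for split, expected in [('train', 25000), ('val', 1000), ('test', 1000)]:
    images = sorted((root / split).glob('*.png'))
    print(f'{split}: {len(images)} images (expected {expected})')
    if images:
        width, height = Image.open(images[0]).size
        assert (width, height) == (128, 128), f'unexpected size {width}x{height}'
```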
**adaptis/utils/args.py** (0 additions, 3 deletions)

```diff
@@ -11,9 +11,6 @@ def get_common_arguments():
     parser.add_argument('--thread-pool', action='store_true', default=False,
                         help='use ThreadPool for dataloader workers')
 
-    parser.add_argument('--no-cuda', action='store_true', default=False,
-                        help='disables CUDA training')
-
     parser.add_argument('--ngpus', type=int,
                         default=len(mx.test_utils.list_gpus()),
                         help='number of GPUs')
```
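With `--no-cuda` gone, the common argument set keeps only the GPU-oriented flags, and the GPU count still defaults to every device MXNet can see. A minimal self-contained sketch of the surviving parser (the `--gpus` flag is an assumption inferred from its use in `exp.py` below; its definition is not part of this diff):

```python
# Sketch of the GPU-related arguments that remain after --no-cuda is removed.
import argparse
import mxnet as mx

parser = argparse.ArgumentParser()
parser.add_argument('--thread-pool', action='store_true', default=False,
                    help='use ThreadPool for dataloader workers')
parser.add_argument('--ngpus', type=int,
                    default=len(mx.test_utils.list_gpus()),  # all visible GPUs
                    help='number of GPUs')
parser.add_argument('--gpus', type=str, default='',
                    help='comma-separated GPU ids, e.g. "0,1" (assumed flag)')

args = parser.parse_args(['--gpus', '0,1'])
print(args.gpus)  # 0,1
```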
**adaptis/utils/exp.py** (8 additions, 13 deletions)

```diff
@@ -46,20 +46,15 @@ def init_experiment(experiment_name, add_exp_args, script_path=None):
     fh.setFormatter(formatter)
     logger.addHandler(fh)
 
-    if args.no_cuda:
-        logger.info('Using CPU')
-        args.kvstore = 'local'
-        args.ctx = mx.cpu(0)
+    if args.gpus:
+        args.ctx = [mx.gpu(int(i)) for i in args.gpus.split(',')]
+        args.ngpus = len(args.ctx)
     else:
-        if args.gpus:
-            args.ctx = [mx.gpu(int(i)) for i in args.gpus.split(',')]
-            args.ngpus = len(args.ctx)
-        else:
-            args.ctx = [mx.gpu(i) for i in range(args.ngpus)]
-        logger.info(f'Number of GPUs: {args.ngpus}')
-
-        if args.ngpus < 2:
-            args.syncbn = False
+        args.ctx = [mx.gpu(i) for i in range(args.ngpus)]
+    logger.info(f'Number of GPUs: {args.ngpus}')
+
+    if args.ngpus < 2:
+        args.syncbn = False
 
     logger.info(args)
```
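The rewrite above deletes the CPU branch and flattens the nesting: an explicit `--gpus` list wins, otherwise the first `--ngpus` devices are used, and SyncBN is switched off below two GPUs. A standalone sketch of that selection logic (the helper name is hypothetical; the body mirrors the added lines):

```python
import mxnet as mx

def select_contexts(gpus, ngpus):
    # Mirrors the simplified logic in exp.py: an explicit --gpus list
    # takes precedence; otherwise use the first --ngpus devices.
    if gpus:
        ctx = [mx.gpu(int(i)) for i in gpus.split(',')]
        ngpus = len(ctx)
    else:
        ctx = [mx.gpu(i) for i in range(ngpus)]
    syncbn = ngpus >= 2  # synchronized BatchNorm only helps across 2+ GPUs
    return ctx, ngpus, syncbn

print(select_contexts('0,2', ngpus=4))  # ([gpu(0), gpu(2)], 2, True)
print(select_contexts('', ngpus=1))     # ([gpu(0)], 1, False)
```

Note that after this change no code path yields `mx.cpu`, so CPU-only runs are no longer reachable through these flags; that appears to be a deliberate consequence of dropping `--no-cuda`.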