
Commit d103655

Merge branch 'main' into dependabot/github_actions/styfle/cancel-workflow-action-0.12.0

kevinyamauchi authored Jan 23, 2024 · 2 parents 663ee32 + 560beb3
Showing 20 changed files with 993 additions and 85 deletions.
15 changes: 15 additions & 0 deletions .github/ISSUE_TEMPLATE.md
@@ -0,0 +1,15 @@
* membrain-seg version:
* Python version:
* Operating System:

### Description

Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.

### What I Did

```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
12 changes: 12 additions & 0 deletions .github/TEST_FAIL_TEMPLATE.md
@@ -0,0 +1,12 @@
---
title: "{{ env.TITLE }}"
labels: [bug]
---
The {{ workflow }} workflow failed on {{ date | date("YYYY-MM-DD HH:mm") }} UTC

The most recent failing test was on {{ env.PLATFORM }} py{{ env.PYTHON }}
with commit: {{ sha }}

Full run: https://github.com/{{ repo }}/actions/runs/{{ env.RUN_ID }}

(This post will be updated if another test fails, as long as this issue remains open.)
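For context on the placeholders: the `{{ env.* }}` values are supplied by the `env:` block of the workflow step that renders this template, while `{{ workflow }}`, `{{ sha }}`, `{{ repo }}`, and the `date` filter appear to come from the issue action's built-in templating context. A minimal sketch of that wiring (the full step is in the ci.yml hunk below):

```
- uses: JasonEtco/create-an-issue@v2
  env:
    TITLE: "[test-bot] pip install --pre is failing"  # becomes {{ env.TITLE }}
  with:
    filename: .github/TEST_FAIL_TEMPLATE.md
```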
2 changes: 1 addition & 1 deletion .github/workflows/build-and-deploy-docs.yml
@@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4

- name: Set up Python
uses: actions/setup-python@v4
60 changes: 36 additions & 24 deletions .github/workflows/ci.yml
@@ -9,13 +9,21 @@ on:
pull_request:
workflow_dispatch:
schedule:
- cron: "0 0 * * 0" # every week (for --pre release tests)
# run every week (for --pre release tests)
- cron: "0 0 * * 0"

# cancel in-progress runs that use the same workflow and branch
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

jobs:
check-manifest:
# check-manifest is a tool that checks that all files in version control are
# included in the sdist (unless explicitly excluded)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- run: pipx run check-manifest

test:
@@ -24,16 +32,16 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ['3.8', '3.9', '3.10']
platform: [ubuntu-latest, macos-latest, windows-latest]
python-version: ["3.9", "3.10", "3.11"]
platform: [ubuntu-latest] #, macos-latest, windows-latest]

steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.12.0
with:
access_token: ${{ github.token }}

- uses: actions/checkout@v3
- uses: actions/checkout@v4

- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
@@ -42,25 +50,25 @@
cache-dependency-path: "pyproject.toml"
cache: "pip"

# if running a cron job, we add the --pre flag to test against pre-releases
- name: Install dependencies
- name: Install Dependencies
run: |
python -m pip install -U pip
python -m pip install -e .[test] ${{ github.event_name == 'schedule' && '--pre' || '' }}
# if running a cron job, we add the --pre flag to test against pre-releases
python -m pip install .[test] ${{ github.event_name == 'schedule' && '--pre' || '' }}
- name: Test
- name: 🧪 Run Tests
run: pytest --color=yes --cov --cov-report=xml --cov-report=term-missing

# If something goes wrong, we can open an issue in the repo
- name: Report --pre Failures
# If something goes wrong with --pre tests, we can open an issue in the repo
- name: 📝 Report --pre Failures
if: failure() && github.event_name == 'schedule'
uses: JasonEtco/create-an-issue@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PLATFORM: ${{ matrix.platform }}
PYTHON: ${{ matrix.python-version }}
RUN_ID: ${{ github.run_id }}
TITLE: '[test-bot] pip install --pre is failing'
TITLE: "[test-bot] pip install --pre is failing"
with:
filename: .github/TEST_FAIL_TEMPLATE.md
update_existing: true
@@ -74,28 +82,32 @@
if: success() && startsWith(github.ref, 'refs/tags/') && github.event_name != 'schedule'
runs-on: ubuntu-latest

permissions:
# IMPORTANT: this permission is mandatory for trusted publishing on PyPI
# see https://docs.pypi.org/trusted-publishers/
id-token: write
# This permission allows writing releases
contents: write

steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4

- name: Set up Python
- name: 🐍 Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.x"

- name: install
- name: 👷 Build
run: |
git tag
pip install -U pip build twine
python -m pip install build
python -m build
twine check dist/*
ls -lh dist
- name: Build and publish
run: twine upload dist/*
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.TWINE_API_KEY }}
- name: 🚢 Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.TWINE_API_KEY }}

- uses: softprops/action-gh-release@v1
with:
generate_release_notes: true
files: './dist/*'
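A side note on the `id-token: write` comment above: `pypa/gh-action-pypi-publish` can also authenticate through PyPI trusted publishing, in which case the `password` input is dropped entirely. A sketch, assuming a trusted publisher is configured for this project on PyPI:

```
- name: 🚢 Publish to PyPI
  uses: pypa/gh-action-pypi-publish@release/v1
```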
16 changes: 16 additions & 0 deletions docs/installation.md
@@ -52,6 +52,9 @@ This should display the different options you can choose from MemBrain, like "se

## Step 5: Download pre-trained segmentation model (optional)
We recommend using denoised (ideally Cryo-CARE<sup>1</sup>) tomograms for segmentation. However, our current best model is available for download [here](https://drive.google.com/file/d/1tSQIz_UCsQZNfyHg0RxD-4meFgolszo8/view?usp=sharing) and should also work on non-denoised data. Please let us know how it works for you.

NOTE: Previous model files are not compatible with MONAI v1.3.0 or higher. So if you're using v1.3.0 or higher, consider downgrading to MONAI v1.2.0 or downloading this [adapted version](https://drive.google.com/file/d/1Tfg2Ju-cgSj_71_b1gVMnjqNYea7L1Hm/view?usp=sharing) of our most recent model file.
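If you choose to downgrade, a minimal sketch of pinning MONAI with pip (assuming membrain-seg is installed in the same environment):

```
pip install "monai==1.2.0"
```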

If the given model does not work properly, you may want to try one of our previous versions:

Other (older) model versions:
@@ -65,3 +68,16 @@ Once downloaded, you can use it in MemBrain-seg's [Segmentation](./Usage/Segment
```
[1] T. -O. Buchholz, M. Jordan, G. Pigino and F. Jug, "Cryo-CARE: Content-Aware Image Restoration for Cryo-Transmission Electron Microscopy Data," 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 2019, pp. 502-506, doi: 10.1109/ISBI.2019.8759519.
```


# Troubleshooting
Here is a collection of common issues and how to fix them:

- `RuntimeError: The NVIDIA driver on your system is too old (found version 11070). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.`

The latest PyTorch versions require newer CUDA versions that may not be installed on your system yet. You can either install the newer CUDA version or (often easier) downgrade PyTorch to a compatible version:

`pip uninstall torch`

`pip install torch==2.0.1`
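The `found version 11070` in the message above decodes to CUDA 11.7, and the default `torch==2.0.1` wheels are built against CUDA 11.7, which is why this downgrade works. If you prefer to pin the CUDA build explicitly, a sketch assuming pip and the official PyTorch wheel index:

`pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu117`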
6 changes: 6 additions & 0 deletions src/membrain_seg/annotations/extract_patch_cli.py
@@ -21,6 +21,11 @@ def extract_patches(
help="Path to the folder where extracted patches should be stored. \
(subdirectories will be created)",
),
ds_token: str = Option( # noqa: B008
"other",
help="Dataset token. Important for distinguishing between different \
datasets. Should NOT contain underscores!",
),
coords_file: str = Option( # noqa: B008
None,
help="Path to a file containing coordinates for patch extraction. The file \
@@ -93,6 +98,7 @@ def extract_patches(
coords=coords,
out_dir=out_folder,
idx_add=idx_add,
ds_token=ds_token,
token=token,
pad_value=pad_value,
)
31 changes: 21 additions & 10 deletions src/membrain_seg/annotations/extract_patches.py
@@ -51,7 +51,7 @@ def pad_labels(patch, padding, pad_value=2.0):


def get_out_files_and_patch_number(
token, out_folder_raw, out_folder_lab, patch_nr, idx_add
ds_token, token, out_folder_raw, out_folder_lab, patch_nr, idx_add
):
"""
Create filenames and corrected patch numbers.
@@ -62,8 +62,10 @@
Parameters
----------
ds_token : str
The dataset identifier used as a part of the filename.
token : str
The unique identifier used as a part of the filename.
The tomogram identifier used as a part of the filename.
out_folder_raw : str
The directory path where raw data patches are stored.
out_folder_lab : str
@@ -96,27 +98,34 @@
"""
patch_nr += idx_add
out_file_patch = os.path.join(
out_folder_raw, token + "_patch" + str(patch_nr) + "_raw.nii.gz"
out_folder_raw, ds_token + "_" + token + "_patch" + str(patch_nr) + ".nii.gz"
)
out_file_patch_label = os.path.join(
out_folder_lab, token + "_patch" + str(patch_nr) + "_labels.nii.gz"
out_folder_lab, ds_token + "_" + token + "_patch" + str(patch_nr) + ".nii.gz"
)
exist_add = 0
while os.path.isfile(out_file_patch):
exist_add += 1
out_file_patch = os.path.join(
out_folder_raw,
token + "_patch" + str(patch_nr + exist_add) + "_raw.nii.gz",
ds_token + "_" + token + "_patch" + str(patch_nr + exist_add) + ".nii.gz",
)
out_file_patch_label = os.path.join(
out_folder_lab,
token + "_patch" + str(patch_nr + exist_add) + "_labels.nii.gz",
ds_token + "_" + token + "_patch" + str(patch_nr + exist_add) + ".nii.gz",
)
return patch_nr + exist_add, out_file_patch, out_file_patch_label


def extract_patches(
tomo_path, seg_path, coords, out_dir, idx_add=0, token=None, pad_value=2.0
tomo_path,
seg_path,
coords,
out_dir,
ds_token="other",
token=None,
idx_add=0,
pad_value=2.0,
):
"""
Extracts 3D patches from a given tomogram and corresponding segmentation.
@@ -133,11 +142,13 @@
List of tuples where each tuple represents the 3D coordinates of a patch center.
out_dir : str
The output directory where the extracted patches will be saved.
idx_add : int, optional
The index addition for patch numbering, default is 0.
ds_token : str, optional
Dataset token to uniquely identify the dataset, default is 'other'.
token : str, optional
Token to uniquely identify the tomogram, default is None. If None,
the base name of the tomogram file path is used.
idx_add : int, optional
The index addition for patch numbering, default is 0.
pad_value: float, optional
Borders of extracted patch are padded with this value ("ignore" label)
@@ -170,7 +181,7 @@

for patch_nr, cur_coords in enumerate(coords):
patch_nr, out_file_patch, out_file_patch_label = get_out_files_and_patch_number(
token, out_folder_raw, out_folder_lab, patch_nr, idx_add
ds_token, token, out_folder_raw, out_folder_lab, patch_nr, idx_add
)
print("Extracting patch nr", patch_nr, "from tomo", token)
try:
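For reference, a minimal sketch of calling the updated extractor directly, matching the new signature above (paths, coordinates, and tokens are placeholders):

```
from membrain_seg.annotations.extract_patches import extract_patches

# One patch center (3D coordinate); with the new naming scheme the output
# files will start with "ds1_tomo17_patch..." under ./patches
extract_patches(
    tomo_path="tomo17.mrc",
    seg_path="tomo17_seg.mrc",
    coords=[(120, 240, 300)],
    out_dir="./patches",
    ds_token="ds1",
    token="tomo17",
    pad_value=2.0,
)
```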
8 changes: 5 additions & 3 deletions src/membrain_seg/annotations/merge_corrections.py
@@ -46,13 +46,15 @@ def get_corrections_from_folder(folder_name, orig_pred_file):
or filename.startswith("Ignore")
or filename.startswith("ignore")
):
print("ATTENTION! Not processing", filename)
print("Is this intended?")
print(
"File does not fit into Add/Remove/Ignore naming! " "Not processing",
filename,
)
continue
readdata = sitk.GetArrayFromImage(
sitk.ReadImage(os.path.join(folder_name, filename))
)
print("Adding file", filename, "<--")
print("Adding file", filename)

if filename.startswith("Add") or filename.startswith("add"):
add_patch += readdata
4 changes: 2 additions & 2 deletions src/membrain_seg/segmentation/cli/segment_cli.py
@@ -78,7 +78,7 @@ def segment(
@cli.command(name="components", no_args_is_help=True)
def components(
segmentation_path: str = Option( # noqa: B008
help="Path to the membrane segmentation to be processed.", **PKWARGS
..., help="Path to the membrane segmentation to be processed.", **PKWARGS
),
out_folder: str = Option( # noqa: B008
"./predictions", help="Path to the folder where segmentations should be stored."
@@ -114,7 +114,7 @@ def components(
@cli.command(name="thresholds", no_args_is_help=True)
def thresholds(
scoremap_path: str = Option( # noqa: B008
help="Path to the membrane scoremap to be processed.", **PKWARGS
..., help="Path to the membrane scoremap to be processed.", **PKWARGS
),
thresholds: List[float] = Option( # noqa: B008
...,
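The `...` added as the first argument is Typer's way of marking an option as required (an `Ellipsis` default means no default value). A minimal standalone sketch of the pattern, with hypothetical names:

```
from typer import Option, Typer

app = Typer()

@app.command()
def components(
    segmentation_path: str = Option(..., help="Required: path to the segmentation."),
):
    # Typer exits with a usage error if --segmentation-path is omitted
    print(segmentation_path)

if __name__ == "__main__":
    app()
```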
30 changes: 29 additions & 1 deletion src/membrain_seg/segmentation/cli/train_cli.py
@@ -1,4 +1,7 @@
from typing import List, Optional

from typer import Option
from typing_extensions import Annotated

from ..train import train as _train
from .cli import OPTION_PROMPT_KWARGS as PKWARGS
@@ -70,7 +73,7 @@ def train_advanced(
help="Batch size for training.",
),
num_workers: int = Option( # noqa: B008
1,
8,
help="Number of worker threads for loading data",
),
max_epochs: int = Option( # noqa: B008
@@ -84,6 +87,22 @@
but also severely increases training time.\
Pass "True" or "False".',
),
use_surface_dice: bool = Option( # noqa: B008
False, help='Whether to use Surface-Dice as a loss. Pass "True" or "False".'
),
surface_dice_weight: float = Option( # noqa: B008
1.0, help="Scaling factor for the Surface-Dice loss. "
),
surface_dice_tokens: Annotated[
Optional[List[str]],
Option(
help='List of tokens to \
use for the Surface-Dice loss. \
Pass tokens separately:\
For example, train_advanced --surface_dice_tokens "ds1" \
--surface_dice_tokens "ds2"'
),
] = None,
use_deep_supervision: bool = Option( # noqa: B008
True, help='Whether to use deep supervision. Pass "True" or "False".'
),
@@ -119,6 +138,12 @@ def train_advanced(
If set to False, data augmentation still happens, but not as frequently.
More data augmentation can lead to a better performance, but also increases the
training time substantially.
use_surface_dice : bool
Determines whether to use Surface-Dice loss, by default False.
surface_dice_weight : float
Scaling factor for the Surface-Dice loss, by default 1.0.
surface_dice_tokens : list
List of tokens to use for the Surface-Dice loss, by default None (i.e., all tokens are used).
use_deep_supervision : bool
Determines whether to use deep supervision, by default True.
project_name : str
Expand All @@ -140,6 +165,9 @@ def train_advanced(
max_epochs=max_epochs,
aug_prob_to_one=aug_prob_to_one,
use_deep_supervision=use_deep_supervision,
use_surf_dice=use_surface_dice,
surf_dice_weight=surface_dice_weight,
surf_dice_tokens=surface_dice_tokens,
project_name=project_name,
sub_name=sub_name,
)
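Putting the new training options together, a hypothetical invocation (assuming the package's console entry point is `membrain`; the flag spellings follow the help text above, and any other required options would still be prompted for):

```
membrain train_advanced \
  --use_surface_dice True \
  --surface_dice_weight 1.0 \
  --surface_dice_tokens "ds1" \
  --surface_dice_tokens "ds2"
```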