Inference #9
Thank you for your interest in our work! Unfortunately, we don't plan to release the trained model: we have refactored the code quite a lot, and the trained model cannot be loaded directly in the current repo due to inconsistent names/structures. Note that our method is dataset-specific, so a model trained on one dataset cannot be used to denoise other datasets.
Thanks for your answer, it was very helpful! I have a few more questions. The paper notes that DDM2 is currently evaluated on certain datasets (those four brain datasets?). Can I use it on my own cardiac image dataset? My coding skills are limited, so would adapting the code be difficult? Sorry for all the questions, and thanks again for your reply!
Hi, yes, it is absolutely fine to use DDM2 on different datasets. However, you have to make sure that the dataset you are using is still a 4D volume [H x W x D x T], where T is the number of different observations of the same 3D volume. Then I believe you can train DDM2 on a new dataset seamlessly.
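As an illustration (not code from this repo), a quick way to check that a new dataset meets the [H x W x D x T] requirement before training, assuming the data is stored as a NIfTI file with a hypothetical name:

```python
import nibabel as nib  # common library for reading NIfTI volumes

# Hypothetical file name; substitute your own dataset here.
vol = nib.load('my_cardiac_data.nii.gz').get_fdata()

# DDM2 expects a 4D array [H, W, D, T], where T is the number of
# independent noisy observations of the same 3D volume.
assert vol.ndim == 4, f'expected 4D [H, W, D, T], got shape {vol.shape}'
H, W, D, T = vol.shape
print(f'{T} observations of a {H} x {W} x {D} volume')
```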
Could you please comment on how one can reproduce the experiments with n = 1?
Hi, are you referring to Figure 11, the results on synthetic noise with n=1? If so, this experiment reports the results of using only 1 prior slice as input (in the main paper we usually used 3 prior slices). This does not necessarily require T to be 1 as well. In fact, I don't think any unsupervised algorithm right now can handle T = 1.
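For intuition only, here is one way the "n prior slices" idea could be expressed, under my reading of the reply above that the prior slices are drawn from the other observations along T; the function name and the indexing are illustrative assumptions, not code from the repo:

```python
import numpy as np

def build_training_pair(volume, z, t, n_prior=3):
    """Gather n_prior other observations of slice z as network input and
    use observation t of the same slice as the target.
    `volume` is assumed to have shape [H, W, D, T]."""
    T = volume.shape[-1]
    prior_ids = [i for i in range(T) if i != t][:n_prior]
    inputs = np.stack([volume[:, :, z, i] for i in prior_ids], axis=0)  # [n_prior, H, W]
    target = volume[:, :, z, t]                                         # [H, W]
    return inputs, target
```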
Thanks for the quick reply, and sorry for being a bit vague at first. Yes, I was talking about the result in Figure 11, and it seems I was mistaking n = 1 for a single-acquisition (T = 1) setting. My general understanding is that, for example, Noise2Self is designed to work with a single noisy image, without needing multiple observations.
I was wondering if your method/code would allow one to do the same. |
Oh, now I get what you mean! Yes, we do require multiple 2D observations of the same underlying 2D slice for unsupervised learning. The difference between Noise2Self and DDM2 is the definition and scope of a data point: in Noise2Self, a data point is usually a single pixel, while in DDM2 a data point is a 2D slice. In this way, Noise2Self can denoise the 2D noisy image itself (since it contains many pixels), and of course masking is required to make this strategy effective. DDM2, on the other hand, requires multiple 2D slices as inputs, and no masking is needed. Hope this clarifies :)
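A rough way to picture the difference (illustrative NumPy on a toy array, not code from either repo): Noise2Self hides individual pixels of one noisy image and predicts them from the remaining pixels, whereas DDM2 treats each 2D observation of a slice as one sample and uses the other observations of that slice as input.

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(size=(128, 128, 60, 30))   # toy [H, W, D, T] volume

# Noise2Self-style data point: a pixel. Hide a subset of pixels of ONE noisy
# 2D image and learn to predict them from the remaining (unmasked) pixels.
img = volume[:, :, 0, 0]
mask = rng.random(img.shape) < 0.05            # pixels to be predicted
masked_input = np.where(mask, 0.0, img)        # masking makes the strategy self-supervised

# DDM2-style data point: a whole 2D slice. No masking; the other observations
# of the same slice along T serve as the input.
other_observations = volume[:, :, 0, 1:]       # [H, W, T-1]
one_observation = volume[:, :, 0, 0]           # [H, W]
```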
@tiangexiang Hello! I trained on the Stanford HARDI dataset following the steps. The denoised images generated during Stage III training looked good, but the results I got after running denoising.py are very strange. I don't know why. Did I do something wrong?
Hi! Sorry for the confusion. After Stage II finishes, the generated '.txt' file should be specified in the 'stage2_file' variable in the config file, which is the last variable in the file. It should not be specified in 'initial_stage_file' under 'train' and 'val' in the 'datasets' section; that instruction is outdated and we will update it accordingly. Note that 'stage2_file' is needed for both Stage III training and denoising. Also, please make sure the trained model is loaded properly when denoising!
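To make the placement concrete, here is a schematic of the relevant entries written as a Python dict. The key names are taken from the comment above; the surrounding structure and paths are placeholders and may not match the current config files exactly.

```python
# Schematic only: shows WHERE the Stage II '.txt' file belongs.
config_sketch = {
    "datasets": {
        "train": {"initial_stage_file": "..."},   # NOT where the Stage II '.txt' goes
        "val":   {"initial_stage_file": "..."},
    },
    # ... other settings ...
    "stage2_file": "path/to/stage2_output.txt",   # Stage II '.txt' goes here (last variable)
}
```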
A kind note: please update the line "After Stage II finished, the state file (recorded in the previous step) needs to be specified at 'initial_stage_file'" accordingly.
Hi! Did you solve this problem? My inference on hardi150 looks weird, like this:
@gzliyu I ran into the same problem; my denoising results are also very strange. Did you manage to solve it?
@gzliyu @VGANGV Sorry, I just saw these messages! I think one potential cause is model loading (either in Stage III training or inference). Did you specify the correct Stage III model checkpoint before running inference? Could you please share some validation results from the training process (for both Stage I and Stage III)?
@tiangexiang Thank you, Tiange! I realized that I forgot to update the config file before denoising. After I changed the "resume_state" of "noise_model" in the config file to point to the Stage III model, I got normal denoising results.
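For anyone hitting the same issue, the fix described above amounts to pointing the noise model's resume path at the Stage III checkpoint before denoising. Schematically (key names from the comments above, path hypothetical):

```python
# Schematic only: "resume_state" under "noise_model" must point at the
# Stage III checkpoint before denoising, otherwise the output looks broken.
noise_model_sketch = {
    "noise_model": {
        "resume_state": "experiments/my_run/checkpoint/stage3_latest",
    }
}
```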
So do you mean T is the number of acquisitions of the same volume/phantom? That is, we can acquire the image/slice more than once, separately.
@BAOSONG1997 yes! T here indicates the number of acquisitions of the same underlying 3D volume. I think it is also possible for T to be the number of array coils for the same volume. As long as the noise in each observation of the 3D volume is i.i.d., I think DDM2 can handle it :)
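If the repeated acquisitions come as separate files, assembling them into the expected [H x W x D x T] layout could look like this (illustrative only; the file names are hypothetical and the acquisitions are assumed to have identical shape):

```python
import numpy as np
import nibabel as nib

# Hypothetical file names: one NIfTI file per repeated acquisition of the same volume.
paths = ['acq_01.nii.gz', 'acq_02.nii.gz', 'acq_03.nii.gz']

# Stack the repeats along a new last axis to obtain [H, W, D, T] with T = len(paths).
volume_4d = np.stack([nib.load(p).get_fdata() for p in paths], axis=-1)
print(volume_4d.shape)  # e.g. (H, W, D, 3)
```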
Great job, thanks! Will you upload the trained model in the future? That would let us run inference directly, without training. I hope it's okay to ask; thank you again!