ERROR ~ Error executing process > 'pipeline:reference_assembly:map_reads (1)' #121
Hi @physnano, just the process name should be included in the process selector, like so:
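The code block from this comment was lost in extraction. A selector of this shape matches what the advice describes, i.e. using only the bare process name rather than the full `pipeline:reference_assembly:map_reads` path (a sketch; the memory value is an assumption, not the maintainer's original snippet):

```groovy
// Hypothetical Nextflow config: select the process by its bare name.
// The full workflow path ('pipeline:reference_assembly:map_reads') would
// not match; only the final process name is used in withName.
process {
    withName: 'map_reads' {
        memory = '32 GB'   // assumed value; raise to suit your data
    }
}
```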
Thanks @nrhorner, that along with an increased memory allocation worked. It seems that many of the steps of this workflow do not have sufficient default memory allocated to their (sub)processes...
Hi @physnano, OK, thanks for the update. We will review the memory allocations for this workflow. Would you be able to share a bit of information about your data? How many samples and what total number of reads are you using? Also, which version of the workflow and the command you used? Thanks, Neil
Hi @nrhorner, in my case 3 replicates for 2 samples (6 total) were split across two PromethION flow cells, so ~40-50M raw reads per individual barcode. The makeReport step spikes to ~200 GB according to my monitoring. I am using the latest version, v1.4.0. Command used:
Hi @physnano, it's not good that the report generation step is using so much memory. I will investigate this.
Would you be able to try out version 1.6.0 and see if memory consumption has reduced, please?
Hi @nrhorner, I am rerunning on v1.6.0 today and will let you know how it goes when it completes!
Hi @nrhorner, I have run v1.6.0 and it completes. However, since I am running the workflow via the singularity profile on a cluster, I needed to specify job runtimes via the config profile (clusterOptions = '--qos=long'), mainly for the map_reads step.

I am also noticing that my "results_dge.tsv" file has raw read counts (I reran the script with the same result) instead of the "gene", "logFC", "logCPM", "F", "PValue", "FDR" columns expected of the DGE analysis. The odd thing is that this doesn't happen when I process a different dataset (PacBio reads) with a nearly identical script, so I am confused as to why this might occur. Any ideas why this would be the case? (I can share the log file if needed.)
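For reference, a cluster profile of roughly this shape can pass scheduler options through to jobs (a sketch only; the profile name, SLURM executor, and QOS value are assumptions based on the '--qos=long' flag mentioned above, not the poster's actual config):

```groovy
// Hypothetical Nextflow profile for a SLURM cluster with Singularity.
// All names and values here are assumptions for illustration.
profiles {
    cluster {
        process {
            executor       = 'slurm'
            // Extra flags appended to each sbatch submission, e.g. a
            // long-runtime QOS so the mapping step is not killed early.
            clusterOptions = '--qos=long'
        }
        singularity {
            enabled    = true
            autoMounts = true
        }
    }
}
```

Selected with something like `nextflow run ... -profile cluster` (the profile name being an assumption).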
Closing, as this final issue is identified in #139.
My workflow keeps failing at the reference_assembly:map_reads step:
Error code 140 suggests a memory/CPU constraint; however, adding the following to the config file has not resolved the issue: