Issues with running on Slurm #16
Hi @haraldgrove, Thank you for your comment. There are two configuration files that you need to modify: the cluster configuration (which sets the resources requested from Slurm) and the tool/job configuration (which sets how many threads each rule actually uses). As you mentioned, you have already changed the CPU value in the cluster configuration; the corresponding line in the tool configuration needs to be updated as well. Feel free to reach out if you have any further questions. Best regards,
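A minimal sketch of the two places a CPU value typically lives in a Snakemake setup like this; the file layout and key names below (the cluster file path, the __default__ section, and the aligner_threads key) are illustrative assumptions rather than Princess's exact configuration, and only nCPUs and config.yaml are named in this thread:

```yaml
# Sketch only -- illustrative layout and key names, not Princess's exact files.

# 1) Cluster configuration (e.g. cluster/cluster_config.yaml):
#    defaults substituted into the sbatch submission command.
__default__:
  nCPUs: 12          # named in this thread; a Slurm-side default
  mem: 20G
  partition: smallmem
  time: "72:00:00"

# 2) Tool/job configuration (config.yaml):
#    the thread count each rule actually uses -- this must be raised too.
aligner_threads: 12   # hypothetical key name; ends up as "-t <n>" on the tool's command line
```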
Thank you for your response; I'd missed the tool configuration file.
Edit: I see I was wrong in my interpretation of the nCPUs parameter. The requested CPUs seem to follow the parameters in the 'config.yaml' file. In that case, what does the nCPUs parameter regulate?
Best regards
You didn't misunderstand. Princess uses Snakemake under the hood to spawn jobs and control cluster submission, and in Snakemake the job (rule) configuration takes precedence over the cluster configuration. Therefore, it chooses 3 threads from the job configuration instead of 12 from the cluster configuration. Best,
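To make the precedence concrete, the same sketch annotated against the commands posted below; the key names are again hypothetical, but the observed values (-t 3, -n 3, --mem=20G, --partition=smallmem, --time=72:00:00) are taken from this issue:

```yaml
# Job configuration (config.yaml) -- hypothetical key name:
aligner_threads: 3      # -> "minimap2 ... -t 3" and "sbatch ... -n 3" (job config wins)

# Cluster configuration -- hypothetical layout; supplies the other sbatch defaults:
__default__:
  nCPUs: 12             # not used for the core count here, since the rule asked for 3
  mem: 20G              # -> "--mem=20G"
  partition: smallmem   # -> "--partition=smallmem"
  time: "72:00:00"      # -> "--time=72:00:00"
```

So to actually run minimap2 with 12 threads, the thread value in the job configuration is the one to raise; the cluster configuration only needs to allow at least that many CPUs.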
Hello
I'm trying to run Princess on a cluster managed by Slurm. I've followed the four steps indicated for changing the configuration files, and the minimap2 job has been submitted and is running.
However, despite the configuration file specifying 12 CPUs, the job only requested 3 CPUs on the cluster, and the minimap2 command line only specifies 3 threads. Are there any more settings I need to change to increase the number of threads requested and used?
The job running on the cluster:
minimap2 -Y -R @RG\tSM:SAMPLE\tID:SAMPLE -ax map-ont /mnt/ScratchProjects/Causative/reference/bovine_ARS-UCD1.2/GCF_002263795.2_ARS-UCD1.3_genomic.fna.gz /mnt/ScratchProjects/Causative/bovine_11978/princess/filtlong_11978.fq.gz --MD -t 3 -y
The command to submit to the cluster:
sbatch --parsable --job-name=snakejob.minimap2 -n 3 --mem=20G --partition=smallmem --time=72:00:00 /net/fs-2/scale/OrionStore/ScratchProjects/Causative/bovine_11978/princess/.snakemake/tmp.tqfwec9x/snakejob.minimap2.3.sh