Support disk directive for local executor #5652
base: master
Conversation
Signed-off-by: Ben Sherman <[email protected]>
The documentation looks good. I'll leave the code review for someone else.
Signed-off-by: Ben Sherman <[email protected]>
I'm not sure about this PR. First, the available disk storage is a dynamic value, while CPUs and memory are static.
Moreover, the main reason for tracking CPUs and memory is to throttle task submission and avoid over-allocating the available resources. That cannot be done with disk storage, so it would ultimately just throw an error when the task runs out of space.
```groovy
log.debug "Local executor is using a remote work directory -- task disk requirements will be ignored"
return 0
}
(session.getExecConfigProp(name, 'disk', session.workDir.toFile().getUsableSpace()) as MemoryUnit).toBytes()
```
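For context, here is a minimal standalone Groovy sketch of the same fallback idea. The work directory path and the `configuredDisk` value are invented for illustration; the PR itself obtains the value from the executor config as in the excerpt above.

```groovy
import nextflow.util.MemoryUnit

// Illustration only (not the PR's code): use the configured 'disk' limit
// if one is set, otherwise fall back to the usable space of the local
// work directory, sampled once at the start of the run.
def workDir = new File('/tmp/nxf-work')   // hypothetical local work directory
def configuredDisk = '100 GB'             // hypothetical config value; null when unset
long totalDiskBytes = configuredDisk
        ? new MemoryUnit(configuredDisk).toBytes()
        : workDir.getUsableSpace()
println "Local executor would budget ${new MemoryUnit(totalDiskBytes)} of disk"
```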
But the available disk changes over time. How do we account for free space increasing or decreasing while the workflow is running?
I will let @schorlton-bugseq make his case since he submitted the original issue.

As for my thoughts: disk works exactly the same way as memory. There is a total amount and a currently available amount. The local executor doesn't prevent any task from using more than its allocated memory (unless Docker is enabled), it just uses the task resources as "hints" to limit the parallelism accordingly. The same is true for disk: it's just a hint that the user can set to limit the parallelism based on how much disk space they estimate each task will need.

The only practical difference is that the steady-state disk usage is likely higher than the steady-state memory usage, so it's more accurate to use the currently available disk space at the start of the run as the "total", rather than the true total.

Overall, it's a simple change that's opt-in and provides the same guarantees as the memory tracking, so I'm fine with it.
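To make the "hint" semantics concrete, here is a hedged usage sketch. The process name, script, and sizes are invented: on a machine whose work directory has roughly 500 GB free, a 100 GB per-task request would let the local executor schedule at most about five of these tasks at once, without enforcing any quota on what each task actually writes.

```groovy
// Hypothetical pipeline snippet -- 'assemble', run_assembly.sh and the
// 100 GB estimate are illustrative, not part of the PR.
process assemble {
    executor 'local'
    cpus 2
    disk '100 GB'   // scheduling hint: estimated peak disk usage per task

    input:
    path reads

    output:
    path 'contigs.fa'

    script:
    """
    run_assembly.sh ${reads} > contigs.fa
    """
}

workflow {
    assemble( Channel.fromPath(params.reads) )
}
```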
Close #5636

This PR adds support for the `disk` directive to the local executor. It uses File::getUsableSpace() to estimate the total available disk space at the beginning of the run. Disk requirements are ignored when using the local executor with a remote filesystem via Fusion.
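For the Fusion case mentioned above, a configuration along these lines (the bucket name is a placeholder) would make the local executor ignore per-task disk requests, since the remote work directory's free space cannot be sampled locally:

```groovy
// nextflow.config sketch -- the bucket is a placeholder
fusion.enabled = true
wave.enabled   = true
workDir        = 's3://my-bucket/work'
```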