Using Docker under Slurm
Sometimes it is convenient to ask Slurm to run your job in a Docker container. This is fine, but you will need to fully test your job in a Docker container beforehand (on mustard or emerald, for example) to see how much RAM and CPU it requires, so that you can accurately describe in your Slurm job submission file how many resources it needs.
Testing
You can run your container on mustard and then look at 'top' to see how much RAM and CPU it needs.
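If you want per-container figures while your test is running, docker stats (run in a second terminal) shows live CPU and memory usage for each running container, which can be a convenient alternative to top:
$ docker stats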
You will also need to pull your Docker image from a registry, like DockerHub or Quay, and you should run your Docker container with the '--rm' flag so the container cleans itself up after running. So your workflow would look something like this:
1: Pull image from DockerHub
2: docker run --rm docker/welcome-to-docker
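For example, the two steps above might look like this on the command line (using the same example image):
$ docker pull docker/welcome-to-docker
$ docker run --rm docker/welcome-to-docker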
Optionally you can clean up your image as well, but only if you don't have other jobs using that image on the same node. For example, if I wanted to remove the image labelled "weiler/mytools":
$ docker image ls
REPOSITORY       TAG      IMAGE ID       CREATED        SIZE
weiler/mytools   latest   be6777ad00cf   19 hours ago   396MB
somedude/tools   latest   9b1d1f6fbf6f   3 weeks ago    607MB
$ docker image rm be6777ad00cf
Resource Limits
When running Docker containers on Slurm, Slurm cannot limit the resources that Docker uses. Therefore, when you launch a container, you will need to know how many resources (RAM, CPU) it uses beforehand, as determined by your testing. Then launch your job with the --cpus and --memory parameters below so Docker itself will limit what it uses:
docker run --rm --cpus=16 --memory=1024m docker/welcome-to-docker
The 'm' suffix on the --memory value means megabytes, so the example above sets a memory limit of 1024 MB (1 GB).
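To keep the Slurm request and the Docker limits in agreement, your job submission script might look like the following. This is a minimal sketch: the image is just the example used above, and the #SBATCH values should match whatever your testing showed.
#!/bin/bash
# Ask Slurm for the same resources Docker will be limited to (--mem is in MB by default)
#SBATCH --cpus-per-task=16
#SBATCH --mem=1024
docker run --rm --cpus=16 --memory=1024m docker/welcome-to-docker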
Docker and GPUs
If you are using GPUs with Docker, you need to make sure that your Docker container requests access to the correct GPUs: the ones which Slurm assigned to your job. These will be passed in the SLURM_STEP_GPUS (for GPUs for a single step) or SLURM_JOB_GPUS (for GPUs for a whole job) environment variables. They need to be passed to Docker like this:
docker run --gpus="\"device=${SLURM_STEP_GPUS:-$SLURM_JOB_GPUS}\"" nvidia/cuda nvidia-smi
Note the escaped quotes; the Docker command needs to have double-quotes inside the argument value. The ${:-} syntax will use SLURM_STEP_GPUS if it is set and SLURM_JOB_GPUS if it isn't; if you know which will be set for your job, you can use just that one.
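Putting this together, a GPU job submission script might look like the following minimal sketch (the GPU count is a placeholder, and nvidia/cuda nvidia-smi is the same example command as above):
#!/bin/bash
# Ask Slurm for one GPU; Slurm sets SLURM_JOB_GPUS (and SLURM_STEP_GPUS inside a step) to the assigned index
#SBATCH --gres=gpu:1
docker run --rm --gpus="\"device=${SLURM_STEP_GPUS:-$SLURM_JOB_GPUS}\"" nvidia/cuda nvidia-smi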
If you are using Nextflow, you will need to set docker.runOptions to include this flag.
docker.runOptions="--gpus \\\"device=$SLURM_JOB_GPUS\\\""
If you are using Toil to run CWL or WDL, the correct GPUs will be passed to containers automatically.
Cleaning Scripts
We also have auto-cleaning scripts running that will delete any containers and images that were created/pulled more than 7 days ago. These run on the cluster nodes and also on the phoenix head node itself. If you need a place where your images/containers remain longer than that, please put them on mustard, emerald, crimson or razzmatazz.
There are also cleaning scripts in place that will destroy any container that has been running for over 7 days; we assume such a container was not launched with --rm and needs to be cleaned up.
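If you want to clean up after yourself before the scripts do, you can list your containers and remove long-running ones by hand (a minimal sketch; the container ID is a placeholder):
$ docker ps                  # the CREATED/STATUS columns show how long each container has been running
$ docker stop <container-id>
$ docker rm <container-id>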