GPU Resources
When submitting jobs, you can ask for GPUs in one of two ways. One is:
<pre>
#SBATCH --gres=gpu:1
</pre>
That asks for one GPU of any type, on any node with a free GPU. This request is more specific:
<pre>
#SBATCH --gres=gpu:A5500:3
</pre>
That requests three A5500 GPUs, and will only match nodes that have them.
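Putting it together, a minimal batch script that requests one GPU might look like this (the job name, CPU, and memory values are arbitrary placeholders):

<pre>
#!/bin/bash
#SBATCH --job-name=gpu-test   # placeholder name
#SBATCH --gres=gpu:1          # one GPU of any type
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G

# Print the GPU(s) Slurm assigned to this job.
nvidia-smi
</pre>

Submit it with '''sbatch''' as usual, and the job will land on a node with a free GPU.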
We have several GPU types on the cluster which may fit your specific needs:
* nVidia RTX A5500 : 24GB RAM
* nVidia A100 : 80GB RAM
* nVidia GeForce RTX 2080 Ti : 11GB RAM
* nVidia GeForce GTX 1080 Ti : 11GB RAM
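To check which nodes carry which GPU types, you can ask Slurm itself; the '''%G''' field in the output format prints each node's generic resources (the field widths here are just a sketch):

<pre>
# List nodes with their CPU count, memory, and gres (GPU types and counts).
sinfo -o "%20N %10c %10m %25G"
</pre>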
==Running GPU Workloads==
To actually use an nVidia GPU, you need to run a program that uses the CUDA API. There are a few ways to obtain such a program.
===Prebuilt CUDA Applications===
The Slurm cluster nodes have the nVidia drivers installed, as well as basic CUDA tools like nvidia-smi.
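A quick way to confirm that a job can see its GPU is to run nvidia-smi through srun:

<pre>
# Ask for one GPU and print its status from the compute node.
srun --gres=gpu:1 nvidia-smi
</pre>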
Some projects, such as TensorFlow, may ship pre-built binaries that can use CUDA. If you download them, you should be able to run these binaries directly.
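For example, TensorFlow can be installed into your user environment with pip; whether the wheel finds the CUDA libraries on its own depends on the TensorFlow version, so treat this as a sketch:

<pre>
# Install into ~/.local, no root needed.
pip install --user tensorflow

# Verify GPU visibility from inside a job (the head node has no GPU).
srun --gres=gpu:1 python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
</pre>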
===Building CUDA Applications===
The cluster nodes do not have the full CUDA Toolkit. In particular, they do not have nvcc, the CUDA-enabled compiler. If you want to compile applications that use CUDA, you will need to install the development environment yourself, under your own user.
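One way to get nvcc without root access is through conda; a minimal sketch, assuming you already have conda in your home directory and that the nvidia channel still packages the toolkit under this name:

<pre>
# Create an environment containing the CUDA toolkit (includes nvcc).
conda create -n cuda -c nvidia cuda-toolkit
conda activate cuda
nvcc --version
</pre>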
Once you have nvcc available to your user, building CUDA applications should work. To run them, you will have to submit them as jobs, because the head node does not have a GPU.
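As an end-to-end sketch, where the kernel is a throwaway hello-world:

<pre>
# Write a trivial CUDA program.
cat > hello.cu <<'EOF'
#include <cstdio>

__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // launch 1 block of 4 threads
    cudaDeviceSynchronize();  // wait for the kernel to finish
    return 0;
}
EOF

# Compile on the head node (no GPU is needed to compile)...
nvcc hello.cu -o hello

# ...but run it as a job, since only the compute nodes have GPUs.
srun --gres=gpu:1 ./hello
</pre>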
===Containerized GPU Workloads===
Instead of directly installing binaries, or installing and using the CUDA Toolkit, it is often easiest to use containers to download a prebuilt GPU workload along with all of its libraries and dependencies. There are a few options for running containerized GPU workloads on the cluster.
====Running Containers in Singularity====
You can run containers on the cluster using Singularity, and give them access to GPUs using the '''--nv''' option. For example:
<pre>
singularity pull docker://tensorflow/tensorflow:latest-gpu
srun -c 8 --mem 10G --gres=gpu:1 --exclude phoenix-01 \
    singularity run --nv docker://tensorflow/tensorflow:latest-gpu \
    python -c 'from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())'
</pre>
This will produce output showing that the Tensorflow container is indeed able to talk to one GPU:
<pre>
INFO:    Using cached SIF image
2023-05-15 11:36:33.110850: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-15 11:36:38.799035: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /device:GPU:0 with 22244 MB memory:  -> device: 0, name: NVIDIA RTX A5500, pci bus id: 0000:03:00.0, compute capability: 8.6
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 8527638019084870106
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 23324655616
locality {
  bus_id: 1
  links {
  }
}
incarnation: 1860154623440434360
physical_device_desc: "device: 0, name: NVIDIA RTX A5500, pci bus id: 0000:03:00.0, compute capability: 8.6"
xla_global_id: 416903419
]
</pre>
====Running Containers in Slurm====
Slurm itself also supports a '''--container''' option for jobs, which allows a whole job to be run inside a container. If you are able to convert your container to OCI bundle format, you can pass it directly to Slurm instead of using Singularity from inside the job. However, Slurm only accepts paths to OCI bundles on disk, not Docker-compatible image specifiers, and the tools for downloading a Docker image from Docker Hub in OCI bundle format (skopeo and umoci) are not installed on the cluster.
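If you have those tools on another machine, a sketch of the conversion looks like this (the alpine image and the bundle path are just placeholders):

<pre>
# On a machine where skopeo and umoci are installed:
skopeo copy docker://docker.io/library/alpine:latest oci:alpine:latest
umoci unpack --image alpine:latest alpine-bundle   # add --rootless if not running as root

# Copy alpine-bundle to the cluster, then hand the bundle path to Slurm:
srun --container=$PWD/alpine-bundle cat /etc/os-release
</pre>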
Slurm containers should have access to their assigned GPUs, but it is not clear if tools like nvidia-smi are injected into the container, as they would be with Singularity or the nVidia Container Runtime.
====Running Containers in Docker====
You might be used to running containers with Docker, or containerized GPU workloads with the nVidia Container Runtime or Toolkit. Docker is installed on all the nodes and the daemon is running; if the '''docker''' command does not work for you, ask cluster-admin to add you to the right groups.
We are working on getting the nVidia runtime set up to make the '''--gpus''' option to '''docker run''' work.
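Once that is in place, the standard invocation should look something like this (the CUDA image tag is only an example):

<pre>
# Expose all GPUs to the container and run nvidia-smi inside it.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
</pre>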