Slurm Tips for vg

From UCSC Genomics Institute Computing Infrastructure Information

This page explains how to set up a development environment for vg on the Phoenix cluster.

Setting Up

1. After connecting to the VPN, connect to the cluster head node:

ssh phoenix.prism

This node is relatively small, so you shouldn't run real work on it, but it is the place you need to be to submit Slurm jobs.
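
From the head node you can also get a quick picture of the cluster before submitting anything; these are standard Slurm commands, not specific to Phoenix:

sinfo -s          # summarize partitions and how many nodes are allocated/idle
squeue -u $USER   # list your own pending and running jobs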

2. Make yourself a user directory under /private/groups, which is where large data must be stored. For example, if you are in the Paten lab:

mkdir /private/groups/patenlab/$USER
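
If you are going to copy large datasets in, it can be worth checking how much space is free on the group filesystem first (plain df, nothing cluster-specific):

df -h /private/groups/patenlab   # show size, used, and available space on the group storage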

3. (Optional) Link it into your home directory, so it is easy to keep your repos on that storage. The /private/groups storage may be faster than the home directory storage.

mkdir -p /private/groups/patenlab/$USER/workspace
ln -s /private/groups/patenlab/$USER/workspace ~/workspace

4. Make sure you have SSH keys created and add them to Github.

cat ~/.ssh/id_ed25519.pub || (ssh-keygen -t ed25519 && cat ~/.ssh/id_ed25519.pub)
# Paste into https://github.com/settings/ssh/new
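
Once the key is added, you can confirm that GitHub accepts it; a successful test prints a greeting like "Hi <username>! You've successfully authenticated" and then closes the connection:

ssh -T git@github.com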

5. Make a place to put your clone, and clone vg:

mkdir -p ~/workspace
cd ~/workspace
git clone --recursive git@github.com:vgteam/vg.git
cd vg
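
The clone is recursive, so the submodules come down with it. Later, when you pull new vg changes, remember that the submodules may need updating too; this is standard git, nothing vg-specific:

git pull
git submodule update --init --recursive   # bring submodules in line with the checked-out commit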

6. vg's dependencies should already be installed on the cluster nodes. If any of them seem to be missing, tell cluster-admin@soe.ucsc.edu to install them.
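
The authoritative dependency list is in vg's README. As a rough sanity check, you can confirm that a few of the common build tools are on your PATH (the tools named here are illustrative, not a complete list):

for tool in gcc g++ make cmake pkg-config; do
    command -v "$tool" >/dev/null || echo "$tool appears to be missing"
done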

7. Build vg as a Slurm job. This will send the build out to the cluster as a 64-core, 80G-memory job and stream the build output back to your terminal.

srun -c 64 --mem=80G --time=00:30:00 make -j64

This will leave your vg binary at ~/workspace/vg/bin/vg.
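
If you would rather not keep the terminal attached while the build runs, you can submit the same build with sbatch instead; the output then goes to a slurm-<jobid>.out file in the directory you submit from (this is just the detached variant of the srun command above):

cd ~/workspace/vg
sbatch -c 64 --mem=80G --time=00:30:00 --wrap "make -j64"
squeue -u $USER                # check whether the build job is running
tail -f slurm-<jobid>.out      # follow the build log, replacing <jobid> with the ID sbatch printed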

Misc Tips

  • If you want an interactive session with appreciable resources, you can schedule one with srun. For example, to get 16 cores and 120G of memory all to yourself, run:
srun -c 16 --mem=120G --time=08:00:00 --partition=medium --pty bash -i
  • To submit a job without writing a script file for it, use sbatch --wrap "your command here".
  • Any option you would put on an #SBATCH line in a batch script can also be passed directly on the sbatch command line; see the example after this list.
  • You can use Slurm Commander to watch the state of the cluster with the scom command.
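
As a concrete illustration of the two sbatch points above, the following submissions are equivalent: the options can live on #SBATCH lines inside a script, or be given directly on the sbatch command line with --wrap. The script name, resource numbers, and vg command below are placeholders, not a recommended configuration:

# Option 1: a batch script with #SBATCH lines
cat > stats-job.sh <<'EOF'
#!/bin/bash
#SBATCH -c 16
#SBATCH --mem=64G
#SBATCH --time=02:00:00
vg stats -z graph.vg
EOF
sbatch stats-job.sh

# Option 2: the same options on the command line, no script file
sbatch -c 16 --mem=64G --time=02:00:00 --wrap "vg stats -z graph.vg"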