Firewalled Environment Storage Overview

Server Types and Management

After confirming your VPN software is working, you can ssh into one of the compute servers behind the VPN:

crimson.prism: 256GB RAM, 32 cores, 5.5TB local scratch space, CentOS 7.9
razzmatazz.prism: 256GB RAM, 32 cores, 5.5TB local scratch space, Ubuntu 22.04
mustard.prism: 1.5TB RAM, 160 cores, 9TB local scratch space, Ubuntu 22.04
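
For example, to connect to crimson ("username" is a placeholder for your own account name):

$ ssh username@crimson.prism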

These servers are managed by the Genomics Institute Cluster Admin group. If you need software installed on any of these servers, please make your request by emailing cluster-admin@soe.ucsc.edu.

Storage

These servers mount two types of storage: home directories and group storage directories.

Filesystem Specifications

Filesystem           /private/home                      /private/groups
Default Soft Quota   30 GB                              15 TB
Default Hard Quota   31 GB                              16 TB
Total Capacity       19 TB                              500 TB
Access Speed         Slow to moderate (spinning disk)   Very fast (NVMe flash media)
Intended Use         Login scripts, small bits of       Large computational/shared data,
                     code or software repos, etc.       large software installations,
                     No large data should be            and the like.
                     stored here.

Home Directories (/private/home/username)

Your home directory is located at "/private/home/username" and has a 30GB soft quota and a 31GB hard quota. It is meant for small scripts, login data, or a git repo. Please do not store large data there or run large compute jobs against data in your home directory.
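
To see how much space your home directory is currently using (a standard command, nothing specific to these servers):

$ du -sh /private/home/$USER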

Group Directories (/private/groups/groupname)

The group storage directories are created per PI, and each group directory has a default 15TB soft quota and 16TB hard quota. For example, if David Haussler is the PI you report to directly, then the directory would exist as /private/groups/hausslerlab. Request access to that group directory and you will then be able to write to it. Each group directory is shared by the lab it belongs to, so be mindful of everyone's data usage and share the 15TB available per group accordingly.

On the compute servers you can check your group's current quota usage with the '/usr/bin/viewquota' command. You can only check the quota of a group you are part of (you would be a member of the UNIX group of the same name). To check the quota usage of /private/groups/hausslerlab, for example, you would run:

$ viewquota hausslerlab

Project quota on /export (/dev/mapper/export)
Project ID   Used   Soft   Hard Warn/Grace   
---------- --------------------------------- 
hausslerlab   1.8T    15T    16T  00 [------]


Soft Versus Hard Quotas

We use soft and hard quotas for disk space.

Once you exceed a directory's soft quota, a one-week countdown timer starts. When that timer runs out, you will no longer be able to create new files or write more data in that directory. You can reset the countdown timer by dropping back under the soft quota limit.

You will not be permitted to exceed a directory's hard quota at all. Any attempt to do so will produce an error; the precise error depends on how your software responds to running out of disk space.
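
For example, a write that would push the directory past its hard quota typically fails with a "Disk quota exceeded" error (this dd invocation and its output are only illustrative):

$ dd if=/dev/zero of=testfile bs=1M count=2000
dd: error writing 'testfile': Disk quota exceeded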

When quotas are first applied to a directory, or are reduced, it is possible to end up with more data or files in the directory than the quota allows for. This outcome does not trigger deletion of any existing data, but will prevent creation of new data or files.

/scratch Space on the Servers

Each server will generally have a local /scratch filesystem that you can use to store temporary files. BE ADVISED that /scratch is not backed up, and the data there could disappear in the event of a disk failure or other problem. Do not store important data there; if it is important, move it somewhere else very soon after creation.
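
One common pattern (the directory name is just a convention, not a requirement) is to give yourself a directory under /scratch and point temporary files at it:

$ mkdir -p /scratch/$USER
$ export TMPDIR=/scratch/$USER   # many tools honor TMPDIR for temporary files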

Actually Doing Work and Computing

When doing research and running jobs, please be mindful of your resource consumption on the server you are on. Don't run so many threads or processes at once that you exhaust the available RAM or disk I/O. If you are not sure of your potential RAM, CPU, or disk impact, start small with one or two processes and work your way up from there.

Before launching your jobs, check what else is already happening on the server by using the 'top' command to see who and what is running and what resources are already being consumed. If, after starting a process, you notice the server slowing down considerably or becoming unusable, kill your processes and re-evaluate what you need to make things work. These servers are shared resources - be a good neighbor!
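
A few standard Linux commands are useful for checking the current state of a server before you start:

$ top       # interactive view of running processes and their CPU/memory usage
$ free -h   # total, used, and available RAM in human-readable units
$ nproc     # number of CPU cores on the machine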

The Firewall

All servers in this environment are behind a firewall, so you must connect to the VPN in order to access them; they will not be reachable from the greater Internet without it. You will, however, be able to make outbound connections from them to other servers on the internet to copy data in, sync git repos, and the like. Only inbound connections are blocked. All machines behind the firewall have the private domain name suffix "*.prism".
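
For example, outbound transfers still work from behind the firewall (the repository URL here is just a placeholder):

$ git clone https://github.com/example/example-repo.git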

The Phoenix Cluster

This is a cluster of ~20 Ubuntu 22.04 nodes, some of which have GPUs in them. Each node generally has about 2TB RAM and 128 cores, although the cluster is heterogeneous and has multiple node types.

The cluster head node, from which all jobs are submitted via the Slurm job scheduling framework, is phoenix.prism. To learn more about how to use Slurm, refer to:

https://giwiki.gi.ucsc.edu/index.php/Genomics_Institute_Computing_Information#Slurm_at_the_Genomics_Institute

For scratch space on the cluster, TMPDIR will be set to /data/tmp. That area is cleaned often, so don't store any data there that isn't actively being used by your jobs.
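
A minimal sketch of a batch script you could submit from phoenix.prism (the job name, resource requests, and program below are placeholders, not cluster defaults):

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00

# TMPDIR points at /data/tmp on the cluster nodes; stage temporary files there.
cd "$TMPDIR"
my_program --input /private/groups/groupname/data.txt   # my_program is hypothetical

Submit it with:

$ sbatch example.sh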