Computational Genomics Kubernetes Installation

The Computational Genomics Group has a Kubernetes cluster running on several large instances in AWS. The cluster currently has three worker nodes, each with the following specs:

* 96 CPU cores (3.1 GHz)
* 384 GB RAM
* 3.3 TB Local NVMe Flash Storage
* 25 Gb/s Network Interface 

Getting Authorized to Connect

If you require access to this Kubernetes cluster, contact Benedict Paten to ask for permission to use it, then forward that permission via email to:

cluster-admin@soe.ucsc.edu

Let us know which group you are with and we can authorize you to use the cluster in the correct namespace.

Authenticating to Kubernetes

We will authorize (authz) you to use the cluster on the server side, but you will also need to authenticate (authn) using your '@ucsc.edu' email address and a unique JSON Web Token (JWT). These credentials are installed in ~/.kube/config on whichever machine you use to connect to the cluster.

To authenticate and get your base Kubernetes configuration, go to the URL below, which will ask you to authenticate with Google. Use your '@ucsc.edu' email address as the login. It will then ask you to authenticate via CruzID Gold if your web browser doesn't already have an authentication token cached:

https://cg-kube-auth.gi.ucsc.edu

Once you authenticate (via username/password and 2-factor auth for CruzID Gold), you will be sent back to the 'https://cg-kube-auth.gi.ucsc.edu' website, which should confirm authentication at the top with the message "Successfully Authenticated". If you see any errors in red but are sure you typed your password and 2-factor code correctly, click the link above (https://cg-kube-auth.gi.ucsc.edu) and authenticate a second time, which should work. There is a quirk where the web token doesn't always pass back to us correctly on the first try.

Upon success, you will be able to click the blue "Download Config File" button to download your initial Kubernetes config file. Copy this file to your home directory as ~/.kube/config, then follow the directions on the web page to insert your "namespace:" line. We will let you know which namespace to use.
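
For reference, the "namespace:" line goes under the "context:" section of that file. Here is a rough sketch of what that section ends up looking like; "my-lab" is just a placeholder, and the cluster, user, and context names will come from the downloaded file:

contexts:
- context:
    cluster: <cluster name from the downloaded file>
    user: <user entry from the downloaded file>
    namespace: my-lab   # the line you add; use the namespace we assign you
  name: <context name from the downloaded file>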

Testing Connectivity

Once your ~/.kube/config file is set up correctly, you should be able to connect to the cluster. All of our shared servers here at the Genomics Institute have the 'kubectl' command installed on them, but if you are coming from somewhere else, make sure the 'kubectl' utility is installed on that machine.
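
If 'kubectl' is not already there, one common way to install it on a Linux x86_64 machine (a sketch of the standard upstream instructions; check the Kubernetes docs for other platforms) is:

$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ kubectl version --client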

A quick test should go as follows:

$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k1.kube       Ready    <none>   13h   v1.15.3
k2.kube       Ready    <none>   13h   v1.15.3
k3.kube       Ready    <none>   13h   v1.15.3
master.kube   Ready    master   13h   v1.15.3
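
You can also double-check that the namespace from your config file is being picked up, and that you can list pods in it. A quick sketch, where "my-lab" is a placeholder for your assigned namespace:

$ kubectl config view --minify | grep namespace:
    namespace: my-lab
$ kubectl get pods

The second command lists the pods in your namespace (there may be none yet, which is fine).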

Running Pods and Jobs with Requests and Limits

When running jobs and pods on Kubernetes, you should always specify "requests" and "limits" on resources; otherwise your pods will be stuck with the default limits, which are tiny (to protect against runaway pods). Have an idea of how many resources your jobs will consume and don't request much more than that, so you don't hog the cluster. Setting limits also prevents a job from "running away" unexpectedly and chewing up more resources than you intended.

Here is a good example of a job file that specifies limits:

job.yml

apiVersion: batch/v1
kind: Job
metadata:
  name: $USER-$TS
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 30
  template:
    spec:
      containers:
      - name: magic
        image: robcurrie/ubuntu
        imagePullPolicy: Always
        resources:
          requests:
            cpu: "1"
            memory: "2G"
            ephemeral-storage: "2G"
          limits:
            cpu: "2"
            memory: "3G"
            ephemeral-storage: "3G"
        command: ["/bin/bash", "-c"]
        args: ['for i in {1..100}; do echo "$i: $(date)"; sleep 1; done']
      restartPolicy: Never
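
Note that the $USER and $TS placeholders in the job name are not expanded by kubectl itself; they need to be filled in before the file is submitted. One way to do that, assuming the 'envsubst' utility (from GNU gettext) is installed, is:

$ TS=$(date +%s) envsubst '$USER $TS' < job.yml | kubectl apply -f -

Restricting envsubst to '$USER $TS' keeps it from touching the shell variables inside the container's args. Job names must be lowercase alphanumeric (plus '-' and '.'), so this assumes your username fits that pattern.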

NOTE: Jobs and pods that completed more than 48 hours ago but have not been cleaned up will be automatically removed by the garbage collector. Most jobs will have the "ttlSecondsAfterFinished" configuration item in them, so they will be automatically cleaned up after that time expires. Leaving old pods and jobs around pins the disk space they were using for as long as they remain, so it's good to get rid of them as soon as they are done, unless you are debugging a failure or something like that.

Jobs that are still running after 48 hours will not be deleted; only jobs that exited more than 48 hours ago are removed.
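
To clean up a finished job yourself, delete the Job object and its pods will be removed along with it. The job name below is just a placeholder for whatever your job was actually named:

$ kubectl get jobs
$ kubectl delete job someuser-1568227200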

A lot of other good information can be viewed on Rob Currie's github page, which includes examples and some "How To" documentation:

https://github.com/rcurrie/kubernetes

View the Cluster's Current Activity

One quick way to check the cluster's utilization is to do:

$ kubectl top nodes

NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k1.kube       1815m        1%     1191Mi          0%        
k2.kube       51837m       53%    46507Mi         12%       
k3.kube       1458m        1%     61270Mi         15%       
master.kube   111m         5%     1024Mi          46%

This means the worker nodes k1, k2, and k3 are using relatively little memory, and k2 is using about 53% of its CPU, so there is still plenty of room for new jobs. Ignore the master node, as it only handles cluster management and doesn't run user jobs or pods.
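
To see what your own workloads are consuming, rather than whole nodes, you can also run:

$ kubectl top pods

That reports CPU and memory usage for each pod in your namespace, which is handy for checking that your requests and limits are sized sensibly.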

You can also check current resource consumption with our Ganglia cluster monitoring tool:

https://ganglia.gi.ucsc.edu/

That website requires a username and password:

username: genecats
password: KiloKluster

That's mostly to keep the script kiddies and bots from banging on it.

Once you get in, you should see a drop-down menu near the top left of the screen, near "Genomics Institute Grid". From the drop-down menu, select "CG Kubernetes Cluster". This takes you to a page detailing the current resource usage and activity on the nodes, which can be useful for seeing whether anyone else is using the whole cluster, or just for getting an idea of how many resources are available for the batch of jobs you plan to submit.