Phoenix WDL Tutorial
Tutorial: Getting Started with WDL Workflows on Phoenix
Instead of giant shell scripts that only work on one grad student's laptop, modern, reusable bioinformatics experiments should be written as workflows, in a language like the Workflow Description Language (WDL). Workflows succinctly describe their own execution requirements and which pieces depend on which other pieces, making your analyses reproducible by people other than you.
Workflows are also easily scaled up and down: you can develop and test your workflow on a small test data set on one machine, and then run it on real data on the cluster without having to worry about whether the right tasks will run in the right order.
This tutorial will help you get started writing and running workflows. The Phoenix Cluster Setup section is specifically for the UC Santa Cruz Genomics Institute's Phoenix Slurm cluster. The other sections are broadly applicable to other environments. By the end, you will be able to run workflows on Slurm with Toil, write your own workflows in WDL, and debug workflows when something goes wrong.
Phoenix Cluster Setup
Before we begin, you will need a computer to work on, the ability to install software on it, and the ability to connect to other machines over SSH.
Getting VPN access
We are going to work on the Phoenix cluster, but this cluster is kept behind the Prism firewall, where all of our controlled-access data lives. So, to get access to the cluster, you need to get access to the VPN (Virtual Private Network) system that we use to allow people through the firewall.
To get VPN access, follow the instructions at https://giwiki.gi.ucsc.edu/index.php/Requirement_for_users_to_get_GI_VPN_access. Note that this process involves making a one-on-one appointment with one of our admins to help you set up your VPN client, so make sure to do it in advance of when you need to use the cluster.
Connecting to Phoenix
Once you have VPN access, you can connect to any of the machines with access to the Phoenix cluster. These interactive nodes are fairly large machines that can do some work locally, but you will still want to run larger workflows on the actual cluster. For this tutorial, we will use `emerald.prism` as our login node.
To connect to the cluster:
1. Connect to the VPN.
2. SSH to `emerald.prism`. At the command line, run:
ssh emerald.prism
If your username on the cluster (say, `flastname`) is different than your username on your computer (which might be `firstname`), you might instead have to run:
ssh flastname@emerald.prism
The first time you connect, you will see a message like:
The authenticity of host 'emerald.prism (10.50.1.67)' can't be established.
ED25519 key fingerprint is SHA256:8hJQShO6jhrym9UVyMldKsKOnOFtWRChgjK5cZNhkAI.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
This is your computer asking you to help it decide whether it is talking to the genuine `emerald.prism`, and not an imposter. You will want to make sure that the "key fingerprint" is indeed `SHA256:8hJQShO6jhrym9UVyMldKsKOnOFtWRChgjK5cZNhkAI`. If it is not, someone (probably the GI sysadmins, but possibly a cabal of hackers) has replaced the head node, and you should verify that this was supposed to happen. If the fingerprints do match, type `yes` to accept and remember that the server is who it says it is.
Installing Toil with WDL support
Once you are on the head node, you can install Toil, a program for running workflows. When installing, you need to specify that you want WDL support. To do this, you can run:
pip install --upgrade --user 'toil[wdl]'
If you also want to use AWS S3 (`s3://`) and/or Google (`gs://`) URLs for data, you will need to also install Toil with the `aws` and `google` extras, respectively:
pip install --upgrade --user 'toil[wdl,aws,google]'
This will install Toil in the `.local` directory inside your home directory, which we write as `~/.local`. The program to run WDL workflows, `toil-wdl-runner`, will be at `~/.local/bin/toil-wdl-runner`.
By default, the command interpreter *will not* look there, so if you type `toil-wdl-runner`, it will complain that the command is not found. To fix this, you need to configure the command interpreter (bash) to look where Toil is installed. To do this, run:
echo 'export PATH="${HOME}/.local/bin:${PATH}"' >>~/.bashrc
After that, **log out and log back in**, to restart bash and pick up the change.
To make sure it worked, you can run:
toil-wdl-runner --help
If everything worked correctly, it will print a long list of the various option flags that the `toil-wdl-runner` command supports.
If you ever want to upgrade Toil to a new release, you can repeat the `pip` command above.
Configuring your Phoenix Environment
Do not try to store data in your home directory on Phoenix! The home directories are meant for code and programs. Any data worth running a workflow on should be in a directory under `/private/groups`. You will probably need to email the admins to get added to a group so you can create a directory to work in somewhere under `/private/groups`. Usually you would end up with `/private/groups/YOURGROUPNAME/YOURUSERNAME`.
Remember this path; we will need it later.
Configuring Toil for Phoenix
Toil is set up to work in a large number of different environments, and doesn't necessarily rely on the existence of things like a shared cluster filesystem. On the Phoenix cluster we do have a shared filesystem, so we should configure Toil to use it for caching the Docker container images used for running workflow steps. However, since these files can be large, and the home directory quota is only 30 GB, you might not be able to keep them in your home directory.
We would like to be able to store these on the cluster's large storage array, under `/private/groups`. However, Toil needs to use file locks in these directories to prevent simultaneous Singularity calls from producing internal Singularity errors, and Ceph currently has a bug where these file locking operations can freeze the Ceph servers.
If you have a small number of container images that will fit in your home directory, you can keep them there. Since Toil 6.1.0, this is the default behavior and you don't need to do anything. (Unless you previously set `SINGULARITY_CACHEDIR` or `MINIWDL__SINGULARITY__IMAGE_CACHE`, in which case you need to unset them.)
If you don't have room in your home directory for container images, currently the recommended approach is to use node-local storage under `/data/tmp`. This results in each node pulling each container image, but images will be saved across workflows.
You can set that up for all your workflows with:
echo 'export SINGULARITY_CACHEDIR="/data/tmp/$(whoami)/cache/singularity"' >>~/.bashrc
echo 'export MINIWDL__SINGULARITY__IMAGE_CACHE="/data/tmp/$(whoami)/cache/miniwdl"' >>~/.bashrc
Then log out and log back in again, to apply the changes.
Running an existing workflow
First, let's use `toil-wdl-runner` to run an existing demonstration workflow. We're going to use the MiniWDL self-test workflow, from the MiniWDL project.
First, go to your user directory under `/private/groups`, and make a directory to work in:

cd /private/groups/YOURGROUPNAME/YOURUSERNAME
mkdir workflow-test
cd workflow-test
Next, download the workflow. While Toil can run workflows directly from a URL, your commands will be shorter if the workflow is available locally.
wget https://raw.githubusercontent.com/DataBiosphere/toil/d686daca091849e681d2f3f3a349001ca83d2e3e/src/toil/test/wdl/miniwdl_self_test/self_test.wdl
Preparing an input file
Near the top of the WDL file, there's a section like this:
workflow hello_caller {
    input {
        File who
    }
This means that there is a workflow named `hello_caller` in this file, and it takes as input a file variable named `who`. For this particular workflow, the file is supposed to have a list of names, one per line, and the workflow is going to greet each one.
So first, we have to make that list of names. Let's make it in `names.txt`:

echo "Mridula Resurrección" >names.txt
echo "Gershom Šarlota" >>names.txt
echo "Ritchie Ravi" >>names.txt
Then, we need to create an inputs file, which is a JSON (JavaScript Object Notation) file describing what value to use for each input when running the workflow. (You can also reach down into the workflow and override individual task settings, but for now we'll just set the inputs.) So, make another file next to `names.txt` that references it by relative path, like this:
echo '{"hello_caller.who": "./names.txt"}' >inputs.json
Note that, for a key, we're using the workflow name, a dot, and then the input name. For a value, we're using a quoted string of the filename, relative to the location of the inputs file. Absolute paths and URLs will also work for files; more information on the input file syntax is in the JSON Input Format section of the WDL specification.
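If hand-writing JSON feels error-prone, you can also generate the inputs file with a short script. This is just a sketch using Python's standard `json` module; it is not part of Toil:

```python
import json

# Map each "workflow.input" key to its value; file inputs are
# paths relative to the location of the inputs file.
inputs = {"hello_caller.who": "./names.txt"}

with open("inputs.json", "w") as f:
    json.dump(inputs, f)
```

Using `json.dump` guarantees the quoting and escaping are valid JSON.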
Testing at small scale on a single machine
We are now ready to run the workflow!
You don't want to run workflows on the head node. So, use Slurm to get an interactive session on one of the cluster's worker nodes, by running:
srun -c 2 --mem 8G --time=02:00:00 --partition=medium --pty bash -i
This will start a new shell that can run for 2 hours; to leave it and go back to the head node, you can use `exit`.
In your new shell, run this Toil command:
toil-wdl-runner self_test.wdl inputs.json -o local_run
This will, by default, use the `single_machine` Toil "batch system" to run all of the workflow's tasks locally. Output will be sent to a new directory named `local_run`.
This will print a lot of logging to standard error, and to standard output it will print:
{"hello_caller.message_files": ["local_run/Mridula Resurrecci\u00f3n.txt", "local_run/Gershom \u0160arlota.txt", "local_run/Ritchie Ravi.txt"], "hello_caller.messages": ["Hello, Mridula Resurrecci\u00f3n!", "Hello, Gershom \u0160arlota!", "Hello, Ritchie Ravi!"]}
The `local_run` directory will contain the described text files (with Unicode escape sequences like `\u00f3` replaced by their corresponding characters), each containing a greeting for the corresponding person.
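Those `\u00f3` sequences are standard JSON escapes for non-ASCII characters, and any JSON parser will turn them back into the original text. For example, in Python:

```python
import json

# Toil prints non-ASCII characters as JSON \uXXXX escapes.
# Parsing the JSON turns them back into real characters.
name = json.loads(r'"Mridula Resurrecci\u00f3n"')
print(name)  # Mridula Resurrección
```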
To leave your interactive Slurm session and return to the head node, use `exit`.
Running at larger scale
Back on the head node, let's prepare a larger run. Greeting 3 people isn't cool; let's greet one hundred people!
Go get this handy list of people and cut it to length:
wget https://gist.githubusercontent.com/smsohan/ae142977b5099dba03f6e0d909108e97/raw/f6e319b1a0f6a0f87f93f73b3acd24795361aeba/1000_names.txt
head -n100 1000_names.txt >100_names.txt
And make a new inputs file:
echo '{"hello_caller.who": "./100_names.txt"}' >inputs_big.json
Now, we will run the same workflow, but with the new inputs, and against the Slurm cluster.
To run against the Slurm cluster, we need to use the `--jobStore` option to point Toil to a shared directory it can create, where it can store information that the cluster nodes can read. Until Toil gets support for data file caching on Slurm, we will also need the `--caching false` option. We will add the `--batchLogsDir` option to tell Toil to store the logs from the individual Slurm jobs in a folder on the shared filesystem. We'll also use the `-m` option to save the output JSON to a file instead of printing it.

Additionally, since Toil can't manage Slurm partitions itself, we will use the `TOIL_SLURM_ARGS` environment variable to tell Toil how long jobs should be allowed to run (2 hours) and what partition they should go in.
mkdir -p logs
export TOIL_SLURM_ARGS="--time=02:00:00 --partition=medium"
toil-wdl-runner --jobStore ./big_store --batchSystem slurm --caching false --batchLogsDir ./logs self_test.wdl inputs_big.json -o slurm_run -m slurm_run.json
This will tick for a while, but eventually you should end up with 100 greeting files in the `slurm_run` directory.
Writing your own workflow
In addition to running existing workflows, you probably want to be able to write your own. This part of the tutorial will walk you through writing a workflow. We're going to write a workflow for Fizz Buzz.
Writing the file
Version
All WDL files need to start with a `version` statement (unless they are very old `draft-2` files). Toil supports `draft-2`, WDL 1.0, and WDL 1.1, while Cromwell (another popular WDL runner, used on Terra) supports only `draft-2` and 1.0.
So let's start a new WDL 1.0 workflow. Open up a file named `fizzbuzz.wdl` and start with a version statement:
version 1.0
Workflow Block
Then, add an empty workflow named `FizzBuzz`.
version 1.0

workflow FizzBuzz {
}
Input Block
Workflows usually need some kind of user input, so let's give our workflow an `input` section.
version 1.0

workflow FizzBuzz {
    input {
        # How many FizzBuzz numbers do we want to make?
        Int item_count
        # Every multiple of this number, we produce "Fizz"
        Int to_fizz = 3
        # Every multiple of this number, we produce "Buzz"
        Int to_buzz = 5
        # Optional replacement for the string to print when a multiple of both
        String? fizzbuzz_override
    }
}
Notice that each input has a type, a name, and an optional default value. If the type ends in `?`, the value is optional, and it may be `null`. If an input is not optional, and there is no default value, then the user's inputs file must specify a value for it in order for the workflow to run.
Body
Now we'll start on the body of the workflow, to be inserted just after the inputs section.
The first thing we need to do is create an array of all the numbers up to `item_count`. We can do this by calling the WDL `range()` function and assigning the result to an `Array[Int]` variable.
Array[Int] numbers = range(item_count)
WDL 1.0 has a wide variety of functions in its standard library, and WDL 1.1 has even more.
Scattering
Once we create an array of all the numbers, we can use a `scatter` to operate on each. WDL does not have loops; instead it has scatters, which work a bit like a `map()` in Python. The body of the scatter runs for each value in the input array, all in parallel. We're going to increment all the numbers, since FizzBuzz starts at 1 but WDL `range()` starts at 0.
Array[Int] numbers = range(item_count)

scatter (i in numbers) {
    Int one_based = i + 1
}
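As an analogy only (WDL runs scatter iterations in parallel, and this is not how a WDL runner is implemented), the scatter above behaves like a Python list comprehension, with a made-up `item_count`:

```python
# Sketch of WDL's range() and scatter in Python.
item_count = 5
numbers = range(item_count)           # 0 through 4, like WDL range()
one_based = [i + 1 for i in numbers]  # the scatter body, applied to each i
print(one_based)  # [1, 2, 3, 4, 5]
```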
Conditionals
Inside the body of the scatter, we are going to put some conditionals to determine whether we should produce `"Fizz"`, `"Buzz"`, or `"FizzBuzz"`. To support our `fizzbuzz_override`, we make an array of it and a default value, and use the WDL `select_first()` function to find the first non-null value in that array.

Each execution of a scatter is allowed to declare variables, and outside the scatter those variables are combined into arrays of all the results. But each variable can be declared only once in the scatter, even with conditionals. So we're going to use `select_first()` at the end, taking advantage of the fact that variables from un-executed conditionals are `null`.
Note that WDL supports conditional expressions with a `then` and an `else`, but conditional statements only have a body, not an `else` branch. If you need an else, you will have to check the negated condition.
So first, let's handle the special cases.
Array[Int] numbers = range(item_count)

scatter (i in numbers) {
    Int one_based = i + 1
    if (one_based % to_fizz == 0) {
        String fizz = "Fizz"
        if (one_based % to_buzz == 0) {
            String fizzbuzz = select_first([fizzbuzz_override, "FizzBuzz"])
        }
    }
    if (one_based % to_buzz == 0) {
        String buzz = "Buzz"
    }
    if (one_based % to_fizz != 0 && one_based % to_buzz != 0) {
        # Just a normal number.
    }
}
Calling Tasks
Now, for the normal numbers, we need to convert our number into a string. In WDL 1.1, and in WDL 1.0 on Cromwell, you can use the `${}` substitution syntax in quoted strings anywhere, not just in command line commands. Toil technically will support this too, but it's not in the spec, and the tutorial needs an excuse for you to call a task. So we're going to insert a call to a `stringify_number` task, to be written later.

To call a task (or another workflow), we use a `call` statement and give it some inputs. Then we can fish the output values out of the task with `.` access. (We only make the call when we don't produce a noise instead.)
Array[Int] numbers = range(item_count)

scatter (i in numbers) {
    Int one_based = i + 1
    if (one_based % to_fizz == 0) {
        String fizz = "Fizz"
        if (one_based % to_buzz == 0) {
            String fizzbuzz = select_first([fizzbuzz_override, "FizzBuzz"])
        }
    }
    if (one_based % to_buzz == 0) {
        String buzz = "Buzz"
    }
    if (one_based % to_fizz != 0 && one_based % to_buzz != 0) {
        # Just a normal number.
        call stringify_number {
            input:
                the_number = one_based
        }
    }
    String result = select_first([fizzbuzz, fizz, buzz, stringify_number.the_string])
}
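The null-variables-plus-`select_first()` trick can be hard to picture, so here is the same logic sketched in Python, with `None` standing in for WDL's `null`. This is an analogy only, not how Toil evaluates WDL, and the `classify` helper is made up for this sketch:

```python
def classify(one_based, to_fizz=3, to_buzz=5, fizzbuzz_override=None):
    # Variables from "un-executed conditionals" stay None.
    fizz = "Fizz" if one_based % to_fizz == 0 else None
    buzz = "Buzz" if one_based % to_buzz == 0 else None
    fizzbuzz = None
    if fizz is not None and buzz is not None:
        # Like select_first([fizzbuzz_override, "FizzBuzz"])
        fizzbuzz = fizzbuzz_override if fizzbuzz_override is not None else "FizzBuzz"
    # Like select_first([fizzbuzz, fizz, buzz, stringify_number.the_string]):
    # take the first non-None value.
    return next(v for v in [fizzbuzz, fizz, buzz, str(one_based)] if v is not None)

print(classify(15))  # FizzBuzz
print(classify(7))   # 7
```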
We can put the code into the workflow now, and set about writing the task.
version 1.0

workflow FizzBuzz {
    input {
        # How many FizzBuzz numbers do we want to make?
        Int item_count
        # Every multiple of this number, we produce "Fizz"
        Int to_fizz = 3
        # Every multiple of this number, we produce "Buzz"
        Int to_buzz = 5
        # Optional replacement for the string to print when a multiple of both
        String? fizzbuzz_override
    }

    Array[Int] numbers = range(item_count)

    scatter (i in numbers) {
        Int one_based = i + 1
        if (one_based % to_fizz == 0) {
            String fizz = "Fizz"
            if (one_based % to_buzz == 0) {
                String fizzbuzz = select_first([fizzbuzz_override, "FizzBuzz"])
            }
        }
        if (one_based % to_buzz == 0) {
            String buzz = "Buzz"
        }
        if (one_based % to_fizz != 0 && one_based % to_buzz != 0) {
            # Just a normal number.
            call stringify_number {
                input:
                    the_number = one_based
            }
        }
        String result = select_first([fizzbuzz, fizz, buzz, stringify_number.the_string])
    }
}
Writing Tasks
Our task should go after the workflow in the file. It looks a lot like a workflow, except it uses `task`.
task stringify_number {
}
We're going to want it to take in an integer `the_number`, and we're going to want it to output a string `the_string`. So let's fill that in, in `input` and `output` sections.
task stringify_number {
    input {
        Int the_number
    }

    # ???

    output {
        String the_string # = ???
    }
}
Now, unlike workflows, tasks can have a `command` section, which gives a command to run. This section is now usually set off with triple angle brackets, and inside it you can use `~{}` (Bash-like substitution, but with a tilde) to place WDL variables into your command script. So let's add a command that will echo back the number, so we can see it as a string.
task stringify_number {
    input {
        Int the_number
    }

    command <<<
        # This is a Bash script.
        # So we should do good Bash script things like stop on errors
        set -e
        # Now print our number as a string
        echo ~{the_number}
    >>>

    output {
        String the_string # = ???
    }
}
Now we need to capture the result of the command script. The WDL `stdout()` function returns a WDL `File` containing the standard output printed by the task's command. We want to read that back into a string, which we can do with the WDL `read_string()` function (which also removes trailing newlines).
task stringify_number {
    input {
        Int the_number
    }

    command <<<
        # This is a Bash script.
        # So we should do good Bash script things like stop on errors
        set -e
        # Now print our number as a string
        echo ~{the_number}
    >>>

    output {
        String the_string = read_string(stdout())
    }
}
We're also going to want to add a `runtime` section to our task, to specify resource requirements. We're also going to tell it to run in a Docker container, to make sure that absolutely nothing can go wrong with our delicate `echo` command. In a real workflow, you probably want to set up optional inputs for all the tasks to let you control the resource requirements, but here we will just hardcode them.
task stringify_number {
    input {
        Int the_number
    }

    command <<<
        # This is a Bash script.
        # So we should do good Bash script things like stop on errors
        set -e
        # Now print our number as a string
        echo ~{the_number}
    >>>

    output {
        String the_string = read_string(stdout())
    }

    runtime {
        cpu: 1
        memory: "0.5 GB"
        disks: "local-disk 1 SSD"
        docker: "ubuntu:22.04"
    }
}
The `disks` section is a little weird; it isn't in the WDL spec, but Toil supports Cromwell-style strings that ask for a `local-disk` of a certain number of gigabytes, which may suggest that it be `SSD` storage.
Then we can put our task into our WDL file:
version 1.0

workflow FizzBuzz {
    input {
        # How many FizzBuzz numbers do we want to make?
        Int item_count
        # Every multiple of this number, we produce "Fizz"
        Int to_fizz = 3
        # Every multiple of this number, we produce "Buzz"
        Int to_buzz = 5
        # Optional replacement for the string to print when a multiple of both
        String? fizzbuzz_override
    }

    Array[Int] numbers = range(item_count)

    scatter (i in numbers) {
        Int one_based = i + 1
        if (one_based % to_fizz == 0) {
            String fizz = "Fizz"
            if (one_based % to_buzz == 0) {
                String fizzbuzz = select_first([fizzbuzz_override, "FizzBuzz"])
            }
        }
        if (one_based % to_buzz == 0) {
            String buzz = "Buzz"
        }
        if (one_based % to_fizz != 0 && one_based % to_buzz != 0) {
            # Just a normal number.
            call stringify_number {
                input:
                    the_number = one_based
            }
        }
        String result = select_first([fizzbuzz, fizz, buzz, stringify_number.the_string])
    }
}

task stringify_number {
    input {
        Int the_number
    }

    command <<<
        # This is a Bash script.
        # So we should do good Bash script things like stop on errors
        set -e
        # Now print our number as a string
        echo ~{the_number}
    >>>

    output {
        String the_string = read_string(stdout())
    }

    runtime {
        cpu: 1
        memory: "0.5 GB"
        disks: "local-disk 1 SSD"
        docker: "ubuntu:22.04"
    }
}
Output Block
Now the only thing missing is a workflow-level `output` section. Technically, in WDL 1.0 you aren't supposed to need one, but you do need it in 1.1, and Toil doesn't actually send your outputs anywhere yet if you don't have one, so we're going to make one. We need to collect all the strings that came out of the different tasks in our scatter into an `Array[String]`. We'll add the `output` section at the end of the `workflow` section, above the task.
version 1.0

workflow FizzBuzz {
    input {
        # How many FizzBuzz numbers do we want to make?
        Int item_count
        # Every multiple of this number, we produce "Fizz"
        Int to_fizz = 3
        # Every multiple of this number, we produce "Buzz"
        Int to_buzz = 5
        # Optional replacement for the string to print when a multiple of both
        String? fizzbuzz_override
    }

    Array[Int] numbers = range(item_count)

    scatter (i in numbers) {
        Int one_based = i + 1
        if (one_based % to_fizz == 0) {
            String fizz = "Fizz"
            if (one_based % to_buzz == 0) {
                String fizzbuzz = select_first([fizzbuzz_override, "FizzBuzz"])
            }
        }
        if (one_based % to_buzz == 0) {
            String buzz = "Buzz"
        }
        if (one_based % to_fizz != 0 && one_based % to_buzz != 0) {
            # Just a normal number.
            call stringify_number {
                input:
                    the_number = one_based
            }
        }
        String result = select_first([fizzbuzz, fizz, buzz, stringify_number.the_string])
    }

    output {
        Array[String] fizzbuzz_results = result
    }
}

task stringify_number {
    input {
        Int the_number
    }

    command <<<
        # This is a Bash script.
        # So we should do good Bash script things like stop on errors
        set -e
        # Now print our number as a string
        echo ~{the_number}
    >>>

    output {
        String the_string = read_string(stdout())
    }

    runtime {
        cpu: 1
        memory: "0.5 GB"
        disks: "local-disk 1 SSD"
        docker: "ubuntu:22.04"
    }
}
Because the `result` variable is defined inside a `scatter`, when we reference it outside the scatter we see it as an array.
Running the Workflow
Now all that remains is to run the workflow! As before, make an inputs file to specify the workflow inputs:
echo '{"FizzBuzz.item_count": 20}' >fizzbuzz.json
Then run it on the cluster with Toil:
mkdir -p logs
export TOIL_SLURM_ARGS="--time=02:00:00 --partition=medium"
toil-wdl-runner --jobStore ./fizzbuzz_store --batchSystem slurm --caching false --batchLogsDir ./logs fizzbuzz.wdl fizzbuzz.json -o fizzbuzz_out -m fizzbuzz_out.json
Or locally:
toil-wdl-runner fizzbuzz.wdl fizzbuzz.json -o fizzbuzz_out -m fizzbuzz_out.json
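To sanity-check the workflow, you can compute what `FizzBuzz.fizzbuzz_results` should contain with a small reference implementation. This Python sketch is for comparison only; it is not part of Toil or the workflow:

```python
import json

def fizzbuzz(item_count, to_fizz=3, to_buzz=5):
    # Mirror the workflow: 1-based numbers, special cases first.
    results = []
    for i in range(1, item_count + 1):
        if i % to_fizz == 0 and i % to_buzz == 0:
            results.append("FizzBuzz")
        elif i % to_fizz == 0:
            results.append("Fizz")
        elif i % to_buzz == 0:
            results.append("Buzz")
        else:
            results.append(str(i))
    return results

expected = fizzbuzz(20)
print(json.dumps(expected))
```

Compare the printed array against `FizzBuzz.fizzbuzz_results` in `fizzbuzz_out.json`.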
Debugging Workflows
Sometimes, your workflow won't work. Try these ideas for figuring out what is going wrong.
Restarting the Workflow
If you think your workflow failed from a transient problem (such as a Docker image not being available) that you have since fixed, and you ran the workflow with `--jobStore` set manually to a directory that persists between attempts, you can add `--restart` to your workflow command to make Toil try again. It will pick up from where it left off, rerun any failed tasks, and then run the rest of the workflow.
This will not pick up any changes to your WDL source code files; those are read once at the beginning and not re-read on restart.
If restarting the workflow doesn't help, you may need to move on to more advanced debugging techniques.
Debugging Options
When debugging a workflow, make sure to run it with `--logDebug`, to set the log level to `DEBUG`, and with `--jobStore /some/path/to/a/shared/directory/it/can/create`, so that the files shipped between jobs are stored somewhere you can access them.
When debug logging is on, the log from every Toil job is inserted in the main Toil log between these markers:
=========> Toil job log is here <=========
Normally, only the logs of failing jobs and the output of commands run from WDL are reproduced like this.
Reading the Log
When a WDL workflow fails, you are likely to see a message like this:
WDL.runtime.error.CommandFailed: task command failed with exit status 1
[2023-07-16T16:23:54-0700] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host phoenix-15.prism
This means that the command specified by one of your WDL tasks exited with a failing (i.e. nonzero) exit code. That happens either when the command is written wrong, or when the error detection code in the tool you are trying to run detects and reports a problem.
Go up higher in the log until you find lines that look like:
[2024-01-16T20:12:19-0500] [Thread-3 (statsAndLoggingAggregator)] [I] [toil.statsAndLogging] hello_caller.0.hello.stderr follows:
And
[2024-01-16T20:12:19-0500] [Thread-3 (statsAndLoggingAggregator)] [I] [toil.statsAndLogging] hello_caller.0.hello.stdout follows:
These will be followed by the standard error and standard output log data from the task's command. There may be useful information (such as an error message from the underlying tool) in there.
If you would like individual task logs to be saved separately for later reference, you can use the `--writeLogs` option to specify a directory to store them in. For more information, see the Toil documentation on workflow task logs.
Reproducing Problems
When trying to fix a failing step, it is useful to be able to run a command outside of Toil or WDL that might reproduce the problem. In addition to getting the standard output and standard error logs as described above, you may also need input files for your tool in order to do this.
Automatically Fetching Input Files
The `toil debug-job` command has a `--retrieveTaskDirectory` option that lets you dump out a directory with all the files that a failing WDL task would use. You can use it like:
toil debug-job ./jobstore WDLTaskJob --retrieveTaskDirectory dumpdir
If there are multiple failing tasks, you might need to replace `WDLTaskJob` with the name of one of the failing jobs. See the Toil documentation on retrieving files for more on how to use this command.
Manually Finding Input Files
If you can't use `toil debug-job`, you might need to manually dig through the job store for files. In the log of your failing Toil task, look for lines like this:
[2023-07-16T16:23:54-0700] [MainThread] [W] [toil.fileStores.abstractFileStore] Failed job accessed files:
[2023-07-16T16:23:54-0700] [MainThread] [W] [toil.fileStores.abstractFileStore] Downloaded file 'files/no-job/file-4f886176ab8344baaf17dc72fc445445/toplog.sh' to path '/data/tmp/c3d51c0611b9511da167528976fef714/9b0e/467f/tmprwhi6h3q/toplog.sh'
[2023-07-16T16:23:54-0700] [MainThread] [W] [toil.fileStores.abstractFileStore] Downloaded file 'files/for-job/kind-WDLTaskJob/instance-b4c5x6hq/file-1bb5d92ae8f3413eb82fe8ef88686bf6/Sample.bam' to path '/data/tmp/c3d51c0611b9511da167528976fef714/9b0e/467f/tmpjyksfoko/Sample.bam'
...
The `files/for-job/kind-WDLTaskJob/instance-b4c5x6hq/file-1bb5d92ae8f3413eb82fe8ef88686bf6/Sample.bam` part is a Toil file ID, and it is a relative path from your `--jobStore` value to where the file is stored on disk. So if you ran the workflow with `--jobStore /private/groups/patenlab/anovak/jobstore`, you would look for this file at:
/private/groups/patenlab/anovak/jobstore/files/for-job/kind-WDLTaskJob/instance-b4c5x6hq/file-1bb5d92ae8f3413eb82fe8ef88686bf6/Sample.bam
More Ways of Finding Files
Sometimes a step might not fail, but you still might want to see the files it is using as input. If you have the job store path, you can use the `find` command to look for the files by name. For example, if you want to look at `Sample.bam`, you can search for it like this:
find /path/to/the/jobstore -name "Sample.bam"
If you want to find files that were uploaded from a job, look for lines like this in the job's log:
[2023-07-16T15:58:39-0700] [MainThread] [D] [toil.wdl.wdltoil] Virtualized /data/tmp/2846b6012e3e5535add03b363950dd78/cb23/197c/work/bamPerChrs/Sample.chr14.bam as WDL file toilfile:2703483274%3A0%3Afiles%2Ffor-job%2Fkind-WDLTaskJob%2Finstance-b4c5x6hq%2Ffile-c4e4f1b16ddf4c2ab92c2868421f3351%2FSample.chr14.bam/Sample.chr14.bam
You can take the `toilfile:2703483274%3A0%3Afiles%2Ffor-job%2Fkind-WDLTaskJob%2Finstance-b4c5x6hq%2Ffile-c4e4f1b16ddf4c2ab92c2868421f3351%2FSample.chr14.bam/Sample.chr14.bam` URI and URL-decode it with any URL decoder, getting this:
toilfile:2703483274:0:files/for-job/kind-WDLTaskJob/instance-b4c5x6hq/file-c4e4f1b16ddf4c2ab92c2868421f3351/Sample.chr14.bam/Sample.chr14.bam
Then you can take the part after the last colon, `files/for-job/kind-WDLTaskJob/instance-b4c5x6hq/file-c4e4f1b16ddf4c2ab92c2868421f3351/Sample.chr14.bam/Sample.chr14.bam`, and that is the path relative to the job store where this file can be found.
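For example, this Python sketch decodes the example URI above and pulls out the job-store-relative path:

```python
from urllib.parse import unquote

# The toilfile: URI as it appears (URL-encoded) in the log.
uri = ("toilfile:2703483274%3A0%3Afiles%2Ffor-job%2Fkind-WDLTaskJob"
       "%2Finstance-b4c5x6hq%2Ffile-c4e4f1b16ddf4c2ab92c2868421f3351"
       "%2FSample.chr14.bam/Sample.chr14.bam")

decoded = unquote(uri)
# The part after the last colon is the path relative to the job store.
relative_path = decoded.rsplit(":", 1)[1]
print(relative_path)
```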
Using Development Versions of Toil
Sometimes, bugs will be fixed in the development version of Toil, but not released yet. To try the current development version of Toil, you can install it like this:
pip install --upgrade --user 'git+https://github.com/DataBiosphere/toil.git#egg=toil[wdl,aws,google]'
If you want to use a particular branch or commit, like `aaa451b320fc115b3563ced25cb501301cf86f90`, you can do:
pip install --upgrade --user 'git+https://github.com/DataBiosphere/toil.git@aaa451b320fc115b3563ced25cb501301cf86f90#egg=toil[wdl,aws,google]'
Frequently Asked Questions
I am getting warnings about XDG_RUNTIME_DIR
You may be seeing warnings like `XDG_RUNTIME_DIR is set to nonexistent directory /run/user/$UID; your environment may be out of spec!`
You should upgrade Toil. Since Toil 6.1.0, Toil no longer issues this warning, and just puts up with bad `XDG_RUNTIME_DIR` settings.
Toil said it was `Redirecting logging` somewhere, but I can't find that file!
The Toil worker process for each job will say that it is `Redirecting logging to /data/tmp/somewhere/worker_log.txt`; when running in single machine mode, these messages go to the main Toil log.
The Toil worker logs are automatically cleaned up when the worker finishes. If you want to see the individual worker logs inside the main Toil log, use the `--logDebug` option to Toil.
If you are looking for the log for a worker process that did not finish (i.e. that crashed), make sure to look on the machine that the worker actually ran on, not on the head node.
Additional WDL resources
For more information on writing and running WDL workflows, see: