Fyrd’s Documentation

Python job submission on torque and slurm clusters with dependency tracking.

Author

Michael D Dacre <mike.dacre@gmail.com>

License

MIT License, property of Stanford, use as you wish

Version

0.6.2b1


Allows simple job submission with dependency tracking and queue waiting on either torque, slurm, or locally with the multiprocessing module. It uses simple techniques to avoid overwhelming the queue and to catch bugs on the fly.

It is routinely tested on Mac OS and Linux, with slurm and torque clusters or with no cluster at all, on Python versions 2.7.10, 2.7.11, 2.7.12, 3.3.0, 3.4.0, 3.5.2, 3.6.2, and 3.7-dev. The full test suite is available in the tests folder.

Fyrd is pronounced ‘feared’ (sort of); it is an Anglo-Saxon term for an army, particularly an army of freemen (in this case an army of compute nodes). The logo is based on a Saxon shield commonly used by these groups. This software was formerly known as ‘Python Cluster’.

The code is hosted at github: https://github.com/MikeDacre/fyrd

To install, use PyPI:

pip install fyrd
fyrd conf init

Contents:

Getting Started

Simple Job Submission

At its simplest, this module can be used by just executing ``submit(<command>)``, where command is a function or system command/shell script. The module will autodetect the cluster, generate an intuitive name, run the job, and write all outputs to files in the current directory. These can be cleaned with ``clean_dir()``.

To run with dependency tracking, run:

import fyrd
job1 = fyrd.submit(<command1>)
job2 = fyrd.submit(<command2>, depends=job1)
out1, out2 = fyrd.get([job1, job2])  # Will block until both jobs complete

The ``submit()`` function is actually just a wrapper for the Job class. The same behavior as above can be obtained by initializing a ``Job`` object directly:

import fyrd
job1 = fyrd.Job(<command1>)
job1.submit()
job2 = fyrd.Job(<command2>, depends=job1).submit()
out  = job2.get()  # Will block until job completes

Note that as shown above, the submit method returns the ``Job`` object, so it can be called on job initialization. Also note that the object returned by calling the ``submit()`` function (as in the first example) is also a ``Job`` object, so these two examples can be used fully interchangeably.

Similar wrappers allow you to submit and monitor existing job files, such as those made by other pipelines:

import os
import fyrd
jobs = []
job_dir = os.path.abspath('./jobs/')
for job in [os.path.join(job_dir, i) for i in os.listdir(job_dir) if i.endswith('sh')]:
    jobs.append(fyrd.submit_file(job))
fyrd.wait(jobs)  # Will block until every job is completed

This type of thing can also be accomplished using the console script:

fyrd run --wait ./jobs/*.sh

Functions

The submit function works well with python functions as well as with shell scripts and shell commands; in fact, this is the most powerful feature of this package. For example:

import fyrd
def raise_me(something, power=2):
    return something**power
outs = []
if __name__ == '__main__':
    for i in range(80):
        outs.append(fyrd.submit(raise_me, (i,), {'power': 4},
                                mem='10MB', time='00:00:30'))
    final_sum = 0
    for i in outs:
        final_sum += i.get()
    print(final_sum)

By default this will submit every instance as a job on the cluster, then get the results and clean up all intermediate files, and the code will work identically on a Mac with no cluster access, a slurm cluster, or a torque cluster, with no need to change syntax.

This is very powerful when combined with simple methods that split files or large python classes. To make this kind of work easier, a number of simple functions are provided in the helpers module; to learn more about them, review the Advanced Usage section of this documentation.

Function submission works equally well for submitting methods; however, the original class object will not be updated. The method's return value will be accurate, but any changes the method makes to ``self`` will not be returned from the cluster and will be lost.
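
As a quick illustration of this caveat, here is a minimal sketch using a made-up Counter class (the class and the numbers are hypothetical; the calling convention matches the submit() examples above):

import fyrd

class Counter(object):
    """A made-up class used only to illustrate method submission."""
    def __init__(self):
        self.count = 0

    def increment(self, by=1):
        self.count += by      # this change happens on the compute node copy
        return self.count

if __name__ == '__main__':
    counter = Counter()
    job = fyrd.submit(counter.increment, (5,))
    print(job.get())      # 5  -- the return value is transferred back
    print(counter.count)  # 0  -- changes to self are lost locally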

Possible Infinite Recursion Error

Warning: in order for function submission to work, fyrd ends up importing your original script file on the nodes. This means that all code in your file will be executed, so anything that isn’t a function or class must be protected with an ``if __name__ == '__main__':`` guard.

If you do not do this you can end up with multiple submissions and infinite recursion, which could mess up your jobs or simply crash them; either way, it won’t be good.

This isn’t true when submitting from an interactive session such as ipython or jupyter.

Using the Jobify Decorator

Function submission can be made much easier by using the ``jobify`` decorator.

Using the example above with a decorator, we can do this:

import fyrd
@fyrd.jobify(mem='10MB', time='00:00:30')
def raise_me(something, power=2):
    return something**power
outs = []
if __name__ == '__main__':
    for i in range(80):
        outs.append(raise_me(i, power=4))
    final_sum = 0
    for i in outs:
        final_sum += i.get()
    print(final_sum)

Here is a full, if silly, example with outputs:

>>> import fyrd
>>> @fyrd.jobify(name='test_job', mem='1GB')
... def test(string, iterations=4):
...     """This does basically nothing!"""
...     outstring = ""
...     for i in range(iterations):
...         outstring += "Version {0}: {1}".format(i, string)
...     return outstring
...
>>> test?
Signature: test(*args, **kwargs)
Docstring:
This is a fyrd.job.Job decorated function.

When you call it it will return a Job object from which you can get
the results with the ``.get()`` method.

Original Docstring:

This does basically nothing!
File:      ~/code/fyrd/fyrd/helpers.py
Type:      function
>>> j = test('hi')
>>> j.get()
'Version 0: hiVersion 1: hiVersion 2: hiVersion 3: hi'

You can see that the decorator also maintains the original docstring if it is implemented.

By default, the returned job will be submitted already, but you can override that behavior:

import fyrd
@fyrd.jobify(mem='10MB', time='00:00:30', submit=False)
def raise_me(something, power=2):
    return something**power

File Submission

If you want to just submit a job file that has already been created, either by this software or any other method, that can be done like this:

from fyrd import submit_file
submit_file('/path/to/script', dependencies=[7, 9])

This will return the job number and will enter the job into the queue as dependent on jobs 7 and 9. The dependencies can be omitted.

Keywords

The ``Job`` class, and therefore every submission script, accepts a large number of keyword arguments and synonyms to make job submission easy. Some good examples:

  • cores

  • mem (or memory)

  • time (or walltime)

  • partition (or queue)

The synonyms are provided to make submission easy for anyone familiar with the arguments used by either torque or slurm. For example:

job = Job('zcat huge_file | parse_file', cores=1, mem='30GB', time='24:00:00')
job = Job(my_parallel_function, cores=28, mem=12000, queue='high_mem')
for i in huge_list:
    out.append(submit(parser_function, i, cores=1, mem='1GB', partition='small'))
job = Job('ls /etc')

As you can see, optional keywords make submission very easy and flexible. The whole point of this software is to make working with a remote cluster in python as easy as possible.

For a full list of keyword arguments see the Keyword Arguments section of the documentation.

All options are defined in the fyrd.options module. If you want extra options, just submit an issue or add them yourself and send me a pull request.

Profiles

One of the issues with using keyword options is the nuisance of having to type them every time. More importantly, when writing code to work on any cluster one has to deal with heterogeneity between the clusters, such as the number of cores available on each node, or the name of the submission queue.

Because of this, fyrd makes use of profiles that bundle keyword arguments and give them a name, so that cluster submission can look like this:

job = Job('zcat huge_file | parse_file', profile='large')
job = Job(my_parallel_function, cores=28, profile='high_mem')

These profiles are defined in ``~/.fyrd/profiles.txt`` by default and have the following syntax:

[large]
partition = normal
cores = 16
nodes = 1
time = 24:00:00
mem = 32000

This means that you can now do this:

Job(my_function, profile='large')

You can create as many of these as you like.

While you can edit the profile file directly to add and edit profiles, it is easier and more stable to use the console script:

fyrd profile list
fyrd profile edit large time:02-00:00:00 mem:64GB
fyrd profile edit DEFAULT partition:normal
fyrd profile remove-option DEFAULT cores
fyrd profile add silly cores:92 mem:1MB
fyrd profile delete silly

The advantage of using the console script is that argument parsing is done on editing the profiles, so any errors are caught at that time. If you edit the file manually, then any mistakes will cause an Exception to be raised when you try to submit a job.

If no arguments are given the default profile (called ‘DEFAULT’ in the config file) is used.

Note: any arguments in the DEFAULT profile are available in all profiles if they are not manually overridden there. The DEFAULT profile cannot be deleted. It is a good place to put the name of the default queue.

Configuration

Many program parameters can be set in the config file, found by default at ~/.fyrd/config.txt.

This file has three sections with the following defaults:

[queue]:

max_jobs (int):     sets the maximum number of running jobs before
                    submission will pause and wait for the queue to empty
sleep_len (int):    sets the amount of time the program will wait between
                    submission attempts
queue_update (int): sets the amount of time between refreshes of the queue.
res_time (int):     Time in seconds to wait if a job is in an uncertain
                    state, usually preempted or suspended. These jobs often
                    resolve into running or completed again after some time
                    so it makes sense to wait a bit, but not forever. The
                    default is 45 minutes: 2700 seconds.
queue_type (str):   the type of queue to use, one of 'torque', 'slurm',
                    'local', 'auto'. Default is auto to auto-detect the
                    queue.

[jobs]:

clean_files (bool):    means that by default files will be deleted when job
                       completes
clean_outputs (bool):  is the same but for output files (they are saved
                       first)
file_block_time (int): Max amount of time to block after job completes in
                       the queue while waiting for output files to appear.
                       Some queues can take a long time to copy files under
                       load, so it is worth setting this high, it won't
                       block unless the files do not appear.
filepath (str):        Path to write all temp and output files by default,
                       must be globally cluster accessible. Note: this is
                       *not* the runtime path, just where files are written
                       to.
suffix (str):          The suffix to use when writing scripts and output
                       files
auto_submit (bool):    If wait() or get() are called prior to submission,
                       auto-submit the job. Otherwise throws an error and
                       returns None
generic_python (bool): Use /usr/bin/env python instead of the current
                       executable, not advised, but sometimes necessary.
profile_file (str):    the config file where profiles are defined.

[jobqueue]:

Sets options for the local queue system; these will be removed in the future in
favor of a database.

jobno (int):  The current job number for the local queue, auto-increments
              with every submission.

Example file:

[queue]
res_time = 2700
queue_type = auto
sleep_len = 1
queue_update = 2
max_jobs = 1000
bool = True

[jobs]
suffix = cluster
file_block_time = 12
filepath = None
clean_outputs = False
auto_submit = True
profile_file = /Users/dacre/.fyrd/profiles.txt
clean_files = True
generic_python = False

[jobqueue]
jobno = 9

The config is managed by fyrd/conf.py and enforces a minimum set of entries. If the config file does not exist or any entries are missing, they are created on the fly using the built-in defaults.
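
The file is a standard ini-style file, so if you just want to peek at the current values from python you can read it with the standard library; this is only an inspection sketch, fyrd itself manages the file through fyrd/conf.py and the fyrd conf console command:

import os
try:
    import configparser                       # Python 3
except ImportError:
    import ConfigParser as configparser       # Python 2

conf = configparser.ConfigParser()
conf.read(os.path.expanduser('~/.fyrd/config.txt'))
print(conf.get('queue', 'max_jobs'))
print(conf.get('jobs', 'profile_file'))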

Keyword Arguments

To make submission easier, this module defines a number of keyword arguments in the options.py file that can be used for all submission and Job() functions. These include things like ‘cores’ and ‘nodes’ and ‘mem’.

The following is a complete list of arguments that can be used in this version.

Common: Used in every mode

=============  =================================================================================  ====  =======
Option         Description                                                                        Type  Default
=============  =================================================================================  ====  =======
depends        A job or list of jobs to depend on                                                 list  None
clean_files    Auto clean script files when fetching outputs                                      bool  None
clean_outputs  Auto clean output files when fetching outputs                                      bool  None
cores          Number of cores to use for the job                                                 int   1
modules        Modules to load with the module load command                                       list  None
syspaths       Paths to add to _sys.path for submitted functions                                  list  None
scriptpath     Folder to write cluster script files to, must be accessible to the compute nodes   str   .
outpath        Folder to write cluster output files to, must be accessible to the compute nodes   str   .
runpath        The working directory for the job                                                  str   .
suffix         A suffix to append to job files (e.g. job.suffix.qsub)                             str   cluster
outfile        File to write STDOUT to                                                            str   None
errfile        File to write STDERR to                                                            str   None
=============  =================================================================================  ====  =======

Func: Used for function calls

=======  ====================================================  ====  =======
Option   Description                                           Type  Default
=======  ====================================================  ====  =======
imports  Imports to be used in function calls (e.g. sys, os)   list  None
=======  ====================================================  ====  =======

Cluster: Options that work in both slurm and torque

=========  ==========================================================  ==============  ========
Option     Description                                                 Type            Default
=========  ==========================================================  ==============  ========
nodes      Number of nodes to request                                  int             1
features   A comma-separated list of node features to require          list            None
qos        A quality of service to require                             str             None
time       Walltime in HH:MM:SS                                        str             12:00:00
mem        Memory to use in MB (e.g. 4000)                             ['int', 'str']  4000
partition  The partition/queue to run in (e.g. local/batch)            str             None
account    Account to be charged                                       str             None
export     Comma separated list of environmental variables to export   str             None
=========  ==========================================================  ==============  ========

Slurm: Used for slurm only

======  ==========================  ====  =======
Option  Description                 Type  Default
======  ==========================  ====  =======
begin   Start after this much time  str   None
======  ==========================  ====  =======

Synonyms

==============  =============
Synonym         Option
==============  =============
depend          depends
dependency      depends
dependencies    depends
stdout          outfile
stderr          errfile
queue           partition
memory          mem
cpus            cores
threads         cores
walltime        time
delete_files    clean_files
delete_outputs  clean_outputs
filedir         scriptpath
filepath        scriptpath
dir             runpath
path            runpath
paths           syspaths
syspath         syspaths
scriptdir       scriptpath
cleanfiles      clean_files
delfiles        clean_files
cleanouts       clean_outputs
delouts         clean_outputs
deloutputs      clean_outputs
cleanoutputs    clean_outputs
==============  =============

Note: Type is enforced; any provided argument must match that python type (automatic conversion is attempted). The default is just a recommendation and is not currently used. These arguments are passed like regular arguments to the submission and Job() functions, e.g.:

Job(nodes=1, cores=4, mem='20MB')

This will be interpreted correctly on any system. If torque or slurm are not available, any cluster arguments will be ignored. The module will attempt to honor the cores request, but if it exceeds the maximum number of cores on the local machine, the request will be trimmed accordingly (i.e. a 50 core request will become 8 cores on an 8 core machine).

Adding your own keywords

There are many more options available for torque and slurm. To add your own, edit the options.py file and look for CLUSTER_OPTS (or TORQUE/SLURM if your keyword option is only available on one system). Add your option using the same format as is present in that file. The format is:

('name', {'slurm': '--option-str={}', 'torque': '--torque-option={}',
          'help': 'This is an option!', 'type': str, 'default': None})

You can also add list options, but they must include ‘sjoin’ and ‘tjoin’ keys to define how to merge the list for slurm and torque, or you must write custom option handling code in fyrd.options.options_to_string(). For an excellent example of both approaches included in a single option, see the ‘features’ keyword above.
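
For example, a list option entry might look like the sketch below; the option name, flag strings, and join characters here are made up and only show where the ‘sjoin’ and ‘tjoin’ keys go (see the real ‘features’ definition in options.py for actual values):

('my_list_opt',
 {'slurm': '--my-flag={}', 'torque': '-l my_flag={}',
  'help': 'An example list option', 'type': list, 'default': None,
  'sjoin': ',',    # how list items are joined for the slurm flag
  'tjoin': ':'})   # how list items are joined for the torque flag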

Console Scripts

This software is primarily intended to be a library; however, some management tasks are just easier from the console. For that reason, fyrd has a frontend console script that makes tasks such as managing the local config and profiles trivial. It also has modes to inspect the queue easily, to wait for jobs from the console, and to clean the working directory.

fyrd

This software uses a subcommand system to separate modes; it has the following modes:

  • run - run an arbitrary shell script on the cluster

  • run-job - run existing cluster script(s)

  • wait - wait for a list of jobs

  • queue - show running jobs, makes filtering jobs very easy

  • config - show and edit the contents of the config file

  • profile - inspect and manage cluster profiles

  • keywords - print a list of current keyword arguments with descriptions for each

  • clean - clean all script and output files in the given directory

Several of the commands have aliases (conf and prof being the two main ones).

Emailing

The run, run-job, and wait commands can all email you when they are done. To use this you need to configure the sending in the ~/.fyrd/config.txt file:

[notify]
mode = linux  # Can be linux or smtp, linux uses the mail command
notify_address = your.address@gmail.com
# The following are only needed for smtp mode
smtp_host = smtp.gmail.com
smtp_port = 587
smtp_tls = True
smtp_from = your.server@gmail.com
smtp_user = None  # Defaults to smtp_from
# This is insecure, so use an application specific password. This should
# be a read-only file with the SMTP password. After making it run:
# chmod 400 ~/.fyrd/smtp_pass
smtp_passfile = ~/.fyrd/smtp_pass

To enable emailing, pass -n (notify) to wait, or -w -n to the other two commands. You can also manually specify the address with -e your.address@gmail.com.

Examples

fyrd run 'samtools display big_file.bam | python $HOME/bin/my_parser.py > outfile'
fyrd run --profile long --args walltime=24:00:00,mem=20G --wait -n \
         'samtools display big_file.bam | python $HOME/bin/my_parser.py > outfile'
fyrd submit --wait -n ./jobs/*.sh
fyrd prof list
fyrd prof add large cores:92 mem:200GB partition:high_mem time:00:06:00
fyrd queue  # Shows all of your current jobs
fyrd queue -a # Shows all users jobs
fyrd queue -p long -u bob dylan # Show all jobs owned by bob and dylan in the long queue
fyrd wait 19872 19876
fyrd wait -u john

# Will block until all of bob's jobs in the long queue finish
fyrd queue -p long -u bob -l | xargs fyrd wait
fyrd clean

All Options

fyrd:

usage: fyrd [-h] [-v] {run,submit,wait,queue,conf,prof,keywords,clean} ...

Manage fyrd config, profiles, and queue.

============   ======================================
Author         Michael D Dacre <mike.dacre@gmail.com>
Organization   Stanford University
License        MIT License, use as you wish
Version        0.6.2a1
============   ======================================

positional arguments:
  {run,submit,wait,queue,conf,prof,keywords,clean}
    run (r)             Run simple shell scripts
    submit (sub, s)     Submit existing job files
    wait (w)            Wait for jobs
    queue (q)           Search the queue
    conf (config)       View and manage the config
    prof (profile)      Manage profiles
    keywords (keys, options)
                        Print available keyword arguments.
    clean               Clean up a job directory

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Show debug outputs

fyrd run:

usage: fyrd run [-h] [-p PROFILE] [-c CORES] [-m MEM] [-t TIME] [-a ARGS] [-w]
                [-k] [-l] [-n] [-e EMAIL] [-s] [-x EXTRA_VARS] [-d]
                [shell_script] [file_parsing [file_parsing ...]]

Run a shell script on the cluster and optionally wait for completion.

Allows the running of a single simple shell script, or the same shell script on
many files, or more complex file interpretation.

positional arguments:
  shell_script          The script to run
  file_parsing          The script to run

optional arguments:
  -h, --help            show this help message and exit
  -s, --simple          The amount of walltime to request
  -x EXTRA_VARS, --extra-vars EXTRA_VARS
                        Regex in form "new_var:orig_var:regex:sub,..."
  -d, --dry-run         Print commands instead of running them

Run Options:
  -p PROFILE, --profile PROFILE
                        The profile to use to run
  -c CORES, --cores CORES
                        The number of cores to request
  -m MEM, --mem MEM     The amount of memory to request
  -t TIME, --time TIME  The amount of walltime to request
  -a ARGS, --args ARGS  Submission args, e.g.:
                        'time=00:20:00,mem=20G,cores=10'
  -w, --wait            Wait for the job to complete
  -k, --keep            Keep submission scripts
  -l, --clean           Delete STDOUT and STDERR files when done

Notification Options:
  -n, --notify          Send notification email when done
  -e EMAIL, --email EMAIL
                        Email address to send notification to, default set in
                        ~/.fyrd/config.txt

fyrd submit:

usage: fyrd submit [-h] [-p PROFILE] [-c CORES] [-m MEM] [-t TIME] [-a ARGS]
                   [-w] [-k] [-l] [-n] [-e EMAIL]
                   job_files [job_files ...]

Run a shell script on the cluster and optionally wait for completion.

Allows the running of a single simple shell script, or the same shell script on
many files, or more complex file interpretation.

positional arguments:
  job_files             The script to run

optional arguments:
  -h, --help            show this help message and exit

Run Options:
  -p PROFILE, --profile PROFILE
                        The profile to use to run
  -c CORES, --cores CORES
                        The number of cores to request
  -m MEM, --mem MEM     The amount of memory to request
  -t TIME, --time TIME  The amount of walltime to request
  -a ARGS, --args ARGS  Submission args, e.g.:
                        'time=00:20:00,mem=20G,cores=10'
  -w, --wait            Wait for the job to complete
  -k, --keep            Keep submission scripts
  -l, --clean           Delete STDOUT and STDERR files when done

Notification Options:
  -n, --notify          Send notification email when done
  -e EMAIL, --email EMAIL
                        Email address to send notification to, default set in
                        ~/.fyrd/config.txt

fyrd wait:

usage: fyrd wait [-h] [-n] [-e EMAIL] [-u USERS] [jobs [jobs ...]]

Wait on a list of jobs, block until they complete.

positional arguments:
  jobs                  Job list to wait for

optional arguments:
  -h, --help            show this help message and exit
  -u USERS, --users USERS
                        A comma-separated list of users to wait for

Notification Options:
  -n, --notify          Send notification email when done
  -e EMAIL, --email EMAIL
                        Email address to send notification to, default set in
                        ~/.fyrd/config.txt

fyrd queue:

usage: fyrd queue [-h] [-u  [...] | -a] [-p  [...]] [-r | -q | -d | -b]
                  [-l | -c]

Check the local queue, similar to squeue or qstat but simpler, good for
quickly checking the queue.

By default it searches only your own jobs, pass '--all-users' or
'--users <user> [<user2>...]' to change that behavior.

To just list jobs with some basic info, run with no arguments.

optional arguments:
  -h, --help            show this help message and exit

queue filtering:
  -u  [ ...], --users  [ ...]
                        Limit to these users
  -a, --all-users       Display jobs for all users
  -p  [ ...], --partitions  [ ...]
                        Limit to these partitions (queues)

queue state filtering:
  -r, --running         Show only running jobs
  -q, --queued          Show only queued jobs
  -d, --done            Show only completed jobs
  -b, --bad             Show only failed jobs

display options:
  -l, --list            Print job numbers only, works well with xargs
  -c, --count           Print job count only

fyrd conf:

usage: fyrd conf [-h] {show,list,help,update,alter,init} ...

This script allows display and management of the fyrd config file found
here: /home/dacre/.fyrd/config.txt.

positional arguments:
  {show,list,help,update,alter,init}
    show (list)         Show current config
    help                Show info on every config option
    update (alter)      Update the config
    init                Interactively initialize the config

optional arguments:
  -h, --help            show this help message and exit

Show usage::
    fyrd conf show [-s <section>]

Update usage::
    fyrd conf update <section> <option> <value>

*Values can only be altered one at a time*

To create a new config from scratch interactively::
    fyrd conf init [--defaults]

fyrd prof:

usage: fyrd prof [-h]
                 {show,list,add,new,update,alter,edit,remove-option,del-option,delete,del}
                 ...

Fyrd jobs use keyword arguments to run (for a complete list run this script
with the keywords command). These keywords can be bundled into profiles, which
are kept in /home/dacre/.fyrd/profiles.txt. This file can be edited directly or manipulated here.

positional arguments:
  {show,list,add,new,update,alter,edit,remove-option,del-option,delete,del}
    show (list)         Print current profiles
    add (new)           Add a new profile
    update (alter, edit)
                        Update an existing profile
    remove-option (del-option)
                        Remove a profile option
    delete (del)        Delete an existing profile

optional arguments:
  -h, --help            show this help message and exit

Show::
    fyrd prof show

Delete::
    fyrd prof delete <name>

Update::
    fyrd prof update <name> <options>

Add::
    fyrd prof add <name> <options>

<options>:
    The options arguments must be in the following format::
        opt:val opt2:val2 opt3:val3

Note: the DEFAULT profile is special and cannot be deleted; deleting it will
cause it to be instantly recreated with the default values. Values from this
profile will be available in EVERY other profile if they are not overridden
there, i.e. if DEFAULT contains ``partition=normal`` and 'long' does not have
a 'partition' option, it will default to 'normal'.

To reset the profile to defaults, just delete the file and run this script
again.

fyrd keywords:

usage: fyrd keywords [-h] [-t | -s | -l]

optional arguments:
  -h, --help          show this help message and exit
  -t, --table         Print keywords as a table
  -s, --split-tables  Print keywords as multiple tables
  -l, --list          Print a list of keywords only

fyrd clean:

usage: fyrd clean [-h] [-o] [-s SUFFIX] [-q {torque,slurm,local}] [-n] [dir]

Clean all intermediate files created by the cluster module.

If no directory is passed, the default is to clean files in the scriptpath and
outpath directories if either is set in the config; if they are not set, the
default is the current directory.

By default, outputs are not cleaned, to clean them too, pass '-o'

Caution:
    The clean() function will delete **EVERY** file with
    extensions matching these::

        .<suffix>.err
        .<suffix>.out
        .<suffix>.sbatch & .fyrd.script for slurm mode
        .<suffix>.qsub for torque mode
        .<suffix> for local mode
        _func.<suffix>.py
        _func.<suffix>.py.pickle.in
        _func.<suffix>.py.pickle.out

positional arguments:
  dir                   Directory to clean (optional)

optional arguments:
  -h, --help            show this help message and exit
  -o, --outputs         Clean output files too
  -s SUFFIX, --suffix SUFFIX
                        Suffix to use for cleaning
  -q {torque,slurm,local}, --qtype {torque,slurm,local}
                        Limit deletions to this qtype
  -n, --no-confirm      Do not confirm before deleting (for scripts)

Aliases

Several shell scripts are provided in bin/ to provide shortcuts to the fyrd subcommands:

  • frun: fyrd run

  • fsub: fyrd submit

  • my-queue (or myq): fyrd queue

  • clean-job-files: fyrd clean

  • monitor-jobs: fyrd wait

  • cluster-keywords: fyrd keywords

Advanced Usage

Most of the important functionality is covered in the Getting Started section, and full details on the library are available in the API Reference section. This section just provides some extra information on Job and Queue management, and importantly introduces some of the higher-level options available through the helpers.

The Job Class

The core of this submission system is the Job class, which builds a job using keyword arguments and profile parsing. The bulk of this is done at class initialization (covered in the Getting Started section of this documentation) and on job submission with the submit() method. There are several other features of this class to be aware of though.

Script File Handling

Torque and slurm both require submission scripts to work. In the future these will be stored by fyrd in a database and submitted from memory, but for now they are written to disk.

The creation and writing of these scripts is handled by the Script and Function classes in the fyrd.submission_scripts module. These classes pass keywords to the options_to_string() function of the options module, which converts them into a submission string compatible with the active cluster. These are then written to a script for submission to the cluster.

The Function class has some additional functionality to allow easy submission of functions to the cluster. It tries to build a list of all possible modules that the function could need and adds import statements for all of them to the function submission script. It then pickles the submitted function and arguments to a pickle file on the disk, and writes a python script to the same directory.

This python script unpickles the function and arguments and runs them, pickling either the result or an exception (if one is raised) to disk on completion. The submission script calls this python script on the cluster nodes.

The script and output files are written to the path defined by the .filepath attribute of the Job class, which is set using the ‘filepath’ keyword argument. If not set, this directory defaults to the directory set in the filepath section of the config file or the current working directory. Note that this path is independent of the .runpath attribute, which is where the code will actually run, and also defaults to the current working directory.

Job Output Handling and Retrieval

The correct way to get outputs from within a python session is to call the .get() method of the Job class. This first calls the .wait() method, which blocks until job completion, and then the .fetch_outputs() method which calls get_output, get_stdout, and get_stderr, which save all function outputs, STDOUT, and STDERR to the class. This means that outputs can be accessed using the following Job class attributes:

  • .output — the function output for functions or STDOUT for scripts

  • .stdout — the STDOUT for the script submission (always present)

  • .stderr — the STDERR for the script submission (always present)

This makes job output retrieval very easy, but it is sometimes not what you want, particularly if outputs are very large (they get loaded into memory).

The wait() method will not save any outputs. In addition, get() can be called with the save=False argument, which means it will fetch the output (or STDOUT) only and will not write it to the class itself.

Note: By default, get() also deletes all script and output files. This is generally a good thing as it keeps the working directory clean, but it isn’t always what you want. To prevent outputs from being deleted, pass delete_outfiles=False to get(), or alternatively set the .clean_outputs attribute to False prior to running get(). To prevent the cleaning of any files, including the script files, pass cleanup=False or set .clean_files to False.

clean_files and clean_outputs can also be set globally in the config file.
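
A short sketch of the different retrieval options (the shell commands are arbitrary examples; the get() arguments and attributes are those described above):

import fyrd

job = fyrd.submit('echo hello; echo oops >&2')
out = job.get(cleanup=False, delete_outfiles=False)  # keep scripts and outputs
print(job.output)   # function output, or STDOUT for command jobs
print(job.stdout)   # STDOUT of the script submission
print(job.stderr)   # STDERR of the script submission

# Fetch without saving the outputs on the Job object itself
job2 = fyrd.submit('echo hi')
job2.clean_files = False       # also keep the script files
result = job2.get(save=False)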

Job Files

All jobs write out a job file before submission, even though this is not necessary (or useful) with multiprocessing. This will change in a future version.

To ensure files are obviously produced by this package and that file names are unique, the file format is name.number.random_string.suffix.extension. The parts are:

  • name: Defined by the name= argument or guessed from the function/script

  • number: A count of the total jobs with the same name already queued

  • random_string: An 8-character random string

  • suffix: A string defined in the config file, default ‘cluster’

  • extension: An obvious extension such as ‘.sbatch’ or ‘.qsub’

To change the directory these files are written to, set the filedir item in the config file or use the ‘filedir’ keyword argument to Job or submit.

NOTE: This directory must be accessible to the compute nodes!!!

It is sometimes useful to set the filedir setting in the config to a single directory accessible cluster-wide. This avoids cluttering the current directory, particularly as outputs can be retrieved so easily from within python. If you are going to do this, set the ‘clean_files’ and ‘clean_outputs’ options in the config file to avoid cluttering that directory.

All Job objects have a ``clean()`` method that will delete any left over files. In addition there is a clean_job_files script that will delete all files made by this package in any given directory. Be very careful with the script though; it can clobber a lot of work all at once if used incorrectly.

Helpers

The fyrd.helpers module defines several simple functions that allow more complex job handling.

The helpers are all high-level functions that are not required by the library but that make difficult jobs easy, in keeping with the goal of trivially easy cluster submission.

Decorator

The fyrd.helpers.jobify decorator (also importable as fyrd.jobify) allows you to decorate any function so that calling it submits it to the cluster.

For example:

import fyrd
@fyrd.jobify(name='test_job', mem='1GB')
def test(string, iterations=4):
    """This does basically nothing!"""
    outstring = ""
    for i in range(iterations):
        outstring += "Version {0}: {1}".format(i, string)
    return outstring

The decorator also maintains the original docstring if it is implemented.

By default, the returned job will be submitted already, but you can override that behavior:

import fyrd
@fyrd.jobify(mem='10MB', time='00:00:30', submit=False)
def raise_me(something, power=2):
    return something**power

For more information, see the basic usage info.

Pandas

The most important function in fyrd.helpers is parapply(), which allows the user to submit a pandas.DataFrame.apply method to the cluster in parallel by splitting the DataFrame, submitting jobs, and then recombining the DataFrame at the end, all without leaving any temp files behind. e.g.:

df = pandas.read_csv('my_huge_file.txt')
df = fyrd.helpers.parapply(100, df, long_running_function, profile='fast')

That command will split the dataframe into 100 pieces, submit each to the cluster as a different job with the profile ‘fast’, and then recombine them into a single DataFrame again at the end.

parapply_summary behaves similarly but assumes that the function summarizes the data rather than returning a DataFrame of the same size. It thus runs the function on the resulting DataFrame also, allowing all dfs to be merged. e.g.:

df = fyrd.helpers.parapply_summary(df, numpy.mean)

This will return just the mean of all the numeric columns; parapply would return a DataFrame with duplicates for every submitted job.

Running on a split file

The splitrun function behaves similarly to the parapply() function, with the exception that it works on a filesystem file instead, which it splits into pieces. It then runs your job on all of the pieces and attempts to recombine them, deleting the intermediate files as it goes.

If you specify an output file, the outputs are merged and placed into that file. Otherwise, if the outputs are strings (always true for scripts), the function returns a merged string. If the outputs are not strings, the function just returns a list of outputs that you will have to combine yourself.

The key to this function is that if the job is a script, it must at a minimum contain ‘{file}’ where the file argument goes, and if the job is a function it must have an argument or keyword argument that matches ‘<file>’.

If you expect the job to have an output, you must provide the outfile= argument too, and be sure that ‘{outfile}’ is present in the script (if a script) or that ‘<outfile>’ is in either args or kwargs (if a function).

In addition, you should pass inheader=True if the input file has a header line, and outheader=True if the same is true for the outfile. It is very important to pass these arguments, because they both will strip the top line from a file if True. Importantly, if inheader is True on a file without a header, the top line will appear at the top of every broken up file.

Examples:

script = """my_long_script --in {file} --out {outfile}"""
outfile = fyrd.helpers.splitrun(
    100, 'huge_file.txt', script, name='my_job', profile='long',
    outfile='output.txt', inheader=True, outheader=True
)
output = fyrd.helpers.splitrun(
    100, 'huge_file.txt', function, args=('<file>',), name='my_job',
    profile='long', outfile='output.txt', inheader=True, outheader=True
)

Queue Management

Queue handling is done by the Queue class in the fyrd.queue module. This class calls the fyrd.queue.queue_parser iterator, which in turn calls either fyrd.queue.torque_queue_parser or fyrd.queue.slurm_queue_parser depending on the detected cluster environment (set by fyrd.queue.QUEUE_MODE and overridable with the ‘queue_type’ config option, although this is not usually necessary as the queue type is auto-detected).

These iterators return the following information from the queue:

job_id, name, userid, partition, state, node-list, node-count, cpu-per-node, exit-code

These pieces of information are used to create QueueJob objects for every job, which are stored in the Queue.jobs attribute (a dictionary). The Queue class provides several properties, attributes, and methods to allow easy filtering of these jobs.

Most important is the QueueJob.state attribute, which holds information on the current state of that job. To get a list of all states in the queue, call the Queue.job_states property, which will return a list of states in the queue. All of these states are also attributes of the Queue class, for example:

fyrd.Queue.completed

returns all completed jobs in the queue as a dictionary (a filtered copy of the .jobs attribute).

Note: torque states are auto-converted to slurm states, as slurm states are easier to read. e.g. ‘C’ becomes ‘completed’.

The most useful method of Queue is wait(); it will take a list of job numbers or Job objects and wait until all of them are complete. This method is called by the Job.wait() method, and can be called directly to wait for an arbitrary number of jobs.

Note, if you add the following to ~/.fyrd/config.txt wait will automatically send an email when done:

[notify]
mode = linux  # Can be linux or smtp, linux uses the mail command
notify_address = your.address@gmail.com
# The following are only needed for smtp mode
smtp_host = smtp.gmail.com
smtp_port = 587
smtp_tls = True
smtp_from = your.server@gmail.com
smtp_user = None  # Defaults to smtp_from
# This is insecure, so use an application specific password. This should
# be a read-only file with the SMTP password. After making it run:
# chmod 400 ~/.fyrd/smtp_pass
smtp_passfile = ~/.fyrd/smtp_pass

To wait for all jobs from a given user, you can do this:

q = fyrd.Queue()
q.wait(q.get_user_jobs(['bob', 'fred']))

This task can also be accomplished with the console application:

fyrd wait <job> [<job>...]
fyrd wait -u bob fred

The method can also be accessed as a simple function, without needing a Queue object:

fyrd.wait([1,2,3])

To generate a Queue object, do the following:

import fyrd
q = fyrd.Queue(user='self')

This will give you a simple queue object containing a list of jobs that belong to you. If you do not provide user, all jobs are included for all users. You can provide qtype to explicitly force the queue object to contain jobs from one queueing system (e.g. local or torque).

To get a dictionary of all jobs, running jobs, queued jobs, and complete jobs, use:

q.jobs
q.running
q.complete
q.queued

Every job is a QueueJob class and has a number of attributes, including owner, nodes, cores, memory.

Config

Many of the important options used by this software are set in a config file and can be managed on the console with fyrd conf ....

For full information see the Configuration section of this documentation.

Logging

I use a custom logging script called logme to log errors. To get verbose output, set fyrd.logme.MIN_LEVEL to ‘debug’ or ‘verbose’. To reduce output, set logme.MIN_LEVEL to ‘warn’.
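
For example:

import fyrd

fyrd.logme.MIN_LEVEL = 'debug'   # or 'verbose' for even more output
# ... run and debug some jobs ...
fyrd.logme.MIN_LEVEL = 'warn'    # quieten things down again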

Adding Batch Systems

Fyrd is intended to be fully modular, meaning anyone should be able to implement support for any batch system, even other remote submission systems like DistributedPython if they are able to define the following functions and options.

To add a new batch system, you will need to:

  1. Edit __init__.py to:

    1. Update DEFINED_SYSTEMS to include your batch system

    2. Edit get_cluster_environment() to detect your batch system. This function is ordered, meaning that it checks for slurm before torque (as slurm implements torque aliases); you should add a sensible way of detecting your batch system here.

  2. Create a file in this directory with the name of your batch system (must match the name in DEFINED_SYSTEMS). This file must contain all constants and functions described below in the Batch Script section.

  3. Edit options.py as described below in the Options section.

  4. Run the pyenv test suite on your cluster and make sure all tests pass on all versions of python supported by fyrd.

  5. Optionally add a buildkite script on your cluster to allow CI testing. Note, this will technically give anyone with push privileges (i.e. me) the ability to execute code on your server. I promise to do no evil, but I can understand a degree of uncertainty regarding that. However, using buildkite will allow us to make sure that future updates don’t break support for your batch system.

  6. Become a fyrd maintainer! I always need help, if you want to contribute more, please do :-)

Options

Fyrd works primarily by converting batch system arguments (e.g. --queue for torque and --partition for slurm) into python keyword arguments. This is done by creating dictionaries in the fyrd/batch_systems/options.py file.

Option parsing is done on job creation by calling the options.options_to_string() function on the user provided keyword arguments. The primary point of this function is to convert all keyword arguments to string forms that can go at the top of your batch file prior to cluster submission. Therefore you must edit the dictionaries in options.py to include your batch system definitions. The most important section to edit is CLUSTER_CORE; this dictionary has sections for each batch system, e.g. for walltime:

('time',
 {'help': 'Walltime in HH:MM:SS',
  'default': '12:00:00', 'type': str,
  'slurm': '--time={}', 'torque': '-l walltime={}'}),

This auto-converts the time argument provided by the user into --time for slurm and -l walltime= for torque.

As all systems are a little different, options.options_to_string() first calls the parse_strange_options() function in the batch system definition script to allow you to manually parse all options that cannot be handled so simply. Hopefully this function will do nothing but return the input unchanged; in some cases, however, it makes sense for this function to handle every argument, an obvious example being when running with something like multiprocessing instead of a true batch system.

Batch Script

The defined batch script must have the name of your system and must define the following constants and functions in exactly the way described below. Your functions can do anything you want, and you can have extra functions in your file (maybe make them private with a leading _ in the name), but the primary functions must take exactly the same arguments as those described below, and provide exactly the same return values.

Constants

  • PREFIX: The string that will go before options at the top of a script file; this could be blank for simple shell scripts, for slurm it is ‘#SBATCH’

Functions

queue_test(warn=True)

Input:

  • warn: bool, warn on failure, optional

Output:

  • functional: bool, True if this system can be used

Description:

Use this function to write code to test that your system can function. If you are using a specific command line tool in your code, consider adding it to the config file to allow users to specify an absolute path or alternate name.

Use a combination of _run.which() (which returns a full path to an executable if the executable is in the user’s PATH and is executable) and _run.is_exe() (which tests if a file is executable) to check your command line tools.

Use the warn parameter with _logme.log() to set a log level, e.g.:

log_level = 'error' if warn else 'debug'
_logme.log('Cannot use me :-(', log_level)

Try not to raise any Exceptions; instead, just log the problem and return False.

This code is run very frequently to test that the queue is usable, so make your code as simple and efficient as possible.
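
A minimal sketch, assuming a hypothetical batch system whose submission binary is called mysub (the helper names _run.which, _run.is_exe, and _logme.log are the ones described above):

def queue_test(warn=True):
    """Return True if the hypothetical 'mysub' system can be used."""
    log_level = 'error' if warn else 'debug'
    mysub = _run.which('mysub')
    if not mysub or not _run.is_exe(mysub):
        _logme.log('Cannot find a usable mysub executable', log_level)
        return False
    return True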

normalize_job_id(job_id)

Input:

  • job_id: string, return value from job submission

Output:

  • job_id: string, a normalized job id

  • array_id: string or None, a normalized array job id

Description:

Take a string returned by your job submission script (e.g. qsub) and turn it into a normalized (hopefully string version of an int) job ID or process ID and an array_id, if that is implemented by your system. The array_id can be None if not implemented and should be None if not present (i.e. the job is not an array job).
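
A hedged sketch, assuming a torque-like submission return value such as '12345.server' (or '12345[1].server' for an array child); adapt the parsing to whatever your submission command actually prints:

def normalize_job_id(job_id):
    """Return (job_id, array_id); array_id is None if this is not an array job."""
    job_id = job_id.split('.')[0].strip()
    array_id = None
    if '[' in job_id:
        job_id, array_part = job_id.split('[', 1)
        array_part = array_part.rstrip(']')
        array_id = array_part if array_part else None
    return job_id, array_id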

normalize_state(state)

Input:

  • state: string, a state description from the queue, e.g. ‘running’, or ‘R’

Output:

  • state: string, a state normalized into one of: ‘completed’, ‘completing’, ‘held’, ‘pending’, ‘running’, or ‘suspended’
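
An illustrative sketch mapping a few torque-style single-letter codes onto the slurm-style names above (the mapping is an assumption; extend it for your own system):

_STATE_MAP = {
    'c': 'completed',
    'e': 'completing',
    'h': 'held',
    'q': 'pending',
    'r': 'running',
    's': 'suspended',
}

def normalize_state(state):
    """Return a slurm-style state name for a raw queue state string."""
    state = state.strip().lower()
    return _STATE_MAP.get(state, state)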

gen_scripts(job_object, command, args, precmd, modstr)

Input:

  • job_object: Job, a fyrd.job.Job object for the current job

  • command: string, a string of the command to be run

  • args: any additional arguments that are to be submitted, generally not used

  • precmd: string, the batch system directives created by options_to_string, you can edit this or overwrite it if necessary

  • modstr: string, a string of module imports (e.g. module load samtools) set by the user

Output:

  • submission_script: fyrd.submission_scripts.Script object with the script to run

  • exec_script: fyrd.submission_scripts.Script object with an additional script called by submission script if necessary, can be None

Description:

This is one of the more complex functions, but essentially you are just going to format the fyrd.script_runners.CMND_RUNNER_TRACK script using the objects in the inputs. This just makes an executable submission script, so you can build it any way you want; you don’t have to use the CMND_RUNNER_TRACK script. However, if you make your own script, the STDOUT must include timestamps like this:

date +'%y-%m-%d-%H:%M:%S'
echo "Running {name}"
{command}
exitcode=$?
echo Done
date +'%y-%m-%d-%H:%M:%S'
if [[ $exitcode != 0 ]]; then
    echo Exited with code: $exitcode >&2
fi
exit $exitcode

This is because we parse the first two and last two to three lines of the file to get the job runtimes and exit codes.

Here is an example function:

def gen_scripts(job_object, command, args, precmd, modstr):
    """Create script object for job, does not create a sep. exec script."""
    scrpt = _os.path.join(job_object.scriptpath,
                          '{}.cluster.qsub'.format(job_object.name))

    sub_script = _scrpts.CMND_RUNNER_TRACK.format(
        precmd=precmd, usedir=job_object.runpath, name=job_object.name,
        command=command
    )
    return _Script(script=sub_script, file_name=scrpt), None

submit(file_name, dependencies=None, job=None, args=None, kwds=None)

Input:

  • file_name: string, The path to the file to execute [required]

  • dependencies: list, A list of dependencies (job objects or job numbers) [optional]

  • job: fyrd.job.Job, A job object of the calling job (not always passed) [optional]

  • args: list, A list of additional arguments (currently unused) [optional]

  • kwds: dict or str, A dictionary or string of ‘arg:val,arg,arg:val,…’ (currently unused) [optional]

Output:

  • job_id: string, A job number

Description:

This function must actually submit the job file, however you want it to. If possible, include dependency tracking, if that isn’t possible, raise a NotImplemented Exception. You can make use of fyrd.run.cmd, which allows you to execute code directly on the terminal and can catch errors and retry submission however many times you choose (5 is a good number). It also returns the exit_code, STDOUT, and STDERR for the execution.

The job object is passed whenever a job is submitted using the normal submission process, and will contain all keyword arguments. If your batch system requires command line arguments, you can parse the keyword arguments with the parse_strange_options function and store them in the submit_args attribute of the Job object. You can then access that attribute in this submission function and pass them to fyrd.run.cmd (or any other method you choose) as command line arguments.

Note, this submit function can also be called on existing scripts without a job object, so your function should not require the job object. The args and kwds arguments exist to allow additional parsing, although they are currently unused; right now args gets the contents of Job.submit_args and kwds gets the contents of the additional_keywords argument to Job.submit(). This argument is currently ignored by all batch scripts.

Please add as much error catching code as possible in the submit function, the torque.py example is a good one.
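
A hedged sketch for the hypothetical mysub system; the '--depends-on' flag is made up, and the exact signature and return value of fyrd.run.cmd (assumed here to return (exit_code, stdout, stderr) and to accept a retry count) should be checked before reuse:

def submit(file_name, dependencies=None, job=None, args=None, kwds=None):
    """Submit a script to the made-up 'mysub' system and return its job id."""
    command = ['mysub']
    if dependencies:
        # '--depends-on' is a made-up flag for this sketch
        command += ['--depends-on', ','.join(str(d) for d in dependencies)]
    if job is not None and getattr(job, 'submit_args', None):
        # extra command line arguments built by parse_strange_options()
        command += list(job.submit_args)
    command.append(file_name)
    exit_code, stdout, stderr = _run.cmd(command, tries=5)
    if exit_code != 0:
        _logme.log('Submission failed: {0}'.format(stderr), 'error')
        # the real code should use fyrd's own exception class here
        raise RuntimeError('Could not submit {0}'.format(file_name))
    job_id, _ = normalize_job_id(stdout.strip())
    return job_id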

kill(job_ids)

Input:

  • job_ids: list, A list of job numbers

Output:

  • bool: True on success, False on failure

Immediately terminate the running jobs

queue_parser(user=None, partition=None)

Input:

  • user: string, optional username to limit to

  • partition: string, optional partition/queue to limit to

(Fine to ignore these arguments if they are not implemented on your system)

Yields (must be an iterator):

  • job_id: string

  • array_id: string, optional array job number

  • name: string, a name for the job

  • userid: string, user of the job (can be None)

  • partition: string, partition running in (can be None)

  • state: string a slurm-style string representation of the state

  • nodelist: list, the nodes the job is running on

  • numnodes: int, a count of the number of nodes

  • threads_per_node: int, a count of the number of cores being used on each node

  • exit_code: int, an exit code (can be None if not exited yet); must be an int if state == ‘completed’, and must be 0 if the job completed successfully.

Description:

This is the iterator that is the core of the batch system definition. You must somehow be able to parse all of the currently running jobs and return the above information about every job. If your batch system implements array jobs, this generator must yield one entry per array child, not parent job.
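
A hedged sketch, again for the hypothetical mysub system, assuming a made-up 'mystat --parseable' command that prints one pipe-separated line per job; the yielded tuple follows the field order listed above:

def queue_parser(user=None, partition=None):
    """Yield job information tuples for every job in the queue."""
    _, stdout, _ = _run.cmd(['mystat', '--parseable'])
    for line in stdout.strip().split('\n'):
        if not line:
            continue
        (job_id, name, owner, queue, raw_state,
         nodestr, cores, raw_exit) = line.split('|')
        if user and owner != user:
            continue
        if partition and queue != partition:
            continue
        nodelist = nodestr.split(',') if nodestr else []
        state = normalize_state(raw_state)
        exit_code = int(raw_exit) if state == 'completed' else None
        yield (job_id, None, name, owner, queue, state,
               nodelist, len(nodelist), int(cores), exit_code)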

parse_strange_options(option_dict)

Inputs:

  • option_dict: dictionary, a dictionary of keywords from the options.py file prior to interpretation with option_to_string, allowing parsing of all unusual keywords.

Outputs:

  • outlist: list, A list of strings that will be added to the top of the submit file

  • option_dict: dictionary, A parsed version of option_dict with all options not defined in the appropriate dictionaries in `options.py` removed.

  • other_args: a list of parsed arguments to be passed at submit time, this will be added to the submit_args attribute of the Job or passed as the args argument to submit.
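
In the simplest case this function handles nothing and just returns its inputs in the expected shape; a minimal sketch:

def parse_strange_options(option_dict):
    """Handle options that cannot be converted to a simple prefix string."""
    outlist = []      # extra lines for the top of the submission script
    other_args = []   # extra command line arguments for submit()
    # pop and handle any keyword here that your system cannot express as a
    # simple '<PREFIX> <flag>' line, then return everything else untouched
    return outlist, option_dict, other_args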

Summary

The modularity of this system is intended to make it easy to support any batch system, however it is possible that some systems won’t fit into the mold defined here. If that is the case, feel free to alter other parts of the code to make it work, but be sure that all tests run successfully on every defined cluster on every supported version of python. Feel free to reach out to me to request testing if you do not have access to any system.

API Reference

fyrd.queue

The core class in this file is the Queue() class which does most of the queue management. In addition, get_cluster_environment() attempts to autodetect the cluster type (torque, slurm, normal) and sets the global cluster type for the whole file. Finally, the wait() function accepts a list of jobs and will block until those jobs are complete.

The Queue class relies on a few simple queue parsers defined by the torque_queue_parser and slurm_queue_parser functions. These call qstat -x or squeue and sacct to get job information, and yield a simple tuple of that data with the following members:

job_id, name, userid, partition, state, node-list, node-count, cpu-per-node, exit-code

The Queue class then converts this information into a Queue.QueueJob object and adds it to the internal jobs dictionary within the Queue class. This dictionary is the basis for all of the other functionality encoded by the Queue class. It can be accessed directly, or sliced by accessing the completed, queued, and running attributes of the Queue class; these are used to simply divide up the jobs dictionary to make finding information easy.

fyrd.queue.Queue

class fyrd.queue.Queue(user=None, partition=None, qtype=None)[source]

Bases: object

A wrapper for all defined batch systems.

jobs

A dictionary of all jobs in this queue in the form: {jobid: Queue.QueueJob}

Type

dict

finished

A dictionary of all completed jobs, same format as jobs

Type

dict

bad

A dictionary of all jobs with failed or unknown states, same format as jobs

Type

dict

active_job_count

Total jobs in the queue (including array job children)

Type

int

max_jobs

The maximum number of jobs allowed in the queue

Type

int

can_submit

True if active_job_count < max_jobs, False otherwise

Type

bool

job_states

A list of the different states of jobs in this queue

Type

list

active_job_count

A count of all jobs that are either pending or running in the current queue

Type

int

can_submit

True if total active jobs is less than max_jobs

Type

bool

users

A set of all users with active jobs

Type

set

job_states

A set of all current job states

Type

set

wait(jobs, return_disp=False)[source]

Block until all jobs in jobs are complete.

get(jobs)[source]

Get all results from a bunch of Job objects.

wait_to_submit(max_jobs=None)[source]

Block until fewer running/pending jobs in queue than max_jobs.

update()[source]

Refresh the list of jobs from the server.

get_jobs(key)[source]

Return a dict of jobs where state matches key.

get_user_jobs(users)[source]

Return a dict of all jobs owned by each user in users.

Can filter by user, queue type or partition on initialization.

Parameters
  • user (str) – Optional username to filter the queue with. If user=’self’ or ‘current’, the current user will be used.

  • partition (str) – Optional partition to filter the queue with.

  • qtype (str) – one of the defined batch queues (e.g. ‘slurm’)

Methods
Queue.wait(jobs, return_disp=False, notify=True)[source]

Block until all jobs in jobs are complete.

Update time is dependent upon the queue_update parameter in your ~/.fyrd/config.txt file.

Parameters
  • jobs (list) – List of fyrd.job.Job objects, fyrd.queue.QueueJob objects, or job IDs

  • return_disp (bool, optional) – If a job disappears from the queue, return ‘disappeared’ instead of True

  • notify (str, True, or False, optional) – If True (the default), both the notification address and wait_time must be set in the [notify] section of the config; a notification email will be sent if the wait exceeds wait_time. If a string is passed, notification is forced and the string must be the ‘to’ address. False means no notification.

Returns

True on success, False or None on failure; if return_disp is True and the job disappears, returns ‘disappeared’ instead

Return type

bool or str

Queue.get(jobs)[source]

Get all results from a bunch of Job objects.

Parameters

jobs (list) – List of fyrd.Job objects

Returns

job_results{job_id: Job}

Return type

dict

Raises

fyrd.ClusterError – If any job fails or goes missing.

Queue.wait_to_submit(max_jobs=None)[source]

Block until fewer running/queued jobs in queue than max_jobs.

Parameters

max_jobs (int) – Override self.max_jobs for wait

Queue.test_job_in_queue(job_id, array_id=None)[source]

Check to make sure job is in self.

Tries 12 times with 1 second between each. If found returns True, else False.

Parameters
  • job_id (str) –

  • array_id (str, optional) –

Returns

exists

Return type

bool

Queue.get_jobs(key)[source]

Return a dict of jobs where state matches key.

Queue.get_user_jobs(users)[source]

Filter jobs by user.

Parameters

users (list) – A list of users/owners

Returns

A filtered job dictionary of {job_id: QueueJob} for all jobs owned by the queried users.

Return type

dict

Queue.update()[source]

Refresh the list of jobs from the server, limit queries.

Queue.check_dependencies(dependencies)[source]

Check if dependencies are running.

Parameters

dependencies (list) – List of job IDs

Returns

‘active’ if dependencies are running or queued, ‘good’ if completed, ‘bad’ if failed, cancelled, or suspended, ‘absent’ otherwise.

Return type

str
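
As an illustrative sketch (the job IDs here are placeholders):

q = fyrd.queue.Queue()
state = q.check_dependencies(['14230', '14231'])
if state == 'bad':
    raise fyrd.queue.QueueError('a dependency failed, was cancelled, or was suspended')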

fyrd.queue Jobs

Hold information about individual jobs: QueueJob describes primary jobs, QueueChild describes individual array-job children (which are stored in the children attribute of QueueJob objects).

class fyrd.queue.QueueJob[source]

A very simple class to store info about jobs in the queue.

Only used for torque and slurm queues.

id

Job ID

Type

int

name

Job name

Type

str

owner

User who owns the job

Type

str

threads

Number of cores used by the job

Type

int

queue

The queue/partition the job is running in

Type

str

state

Current state of the job, normalized to slurm states

Type

str

nodes

List of nodes job is running on

Type

list

exitcode

Exit code of completed job

Type

int

disappeared

Job cannot be found in the queue anymore

Type

bool

array_job

This job is an array job and has children

Type

bool

children

If an array job, a dictionary of child jobs

Type

dict

Initialize.

class fyrd.queue.QueueChild(parent)[source]

A very simple class to store info about child jobs in the queue.

Only used for torque and slurm queues.

id

Job ID

Type

int

name

Job name

Type

str

owner

User who owns the job

Type

str

threads

Number of cores used by the job

Type

int

queue

The queue/partition the job is running in

Type

str

state

Current state of the job, normalized to slurm states

Type

str

nodes

List of nodes job is running on

Type

list

exitcode

Exit code of completed job

Type

int

disappeared

Job cannot be found in the queue anymore

Type

bool

parent

Backref to parent job

Type

QueueJob

Initialize with a parent.

fyrd.queue.QueueError

exception fyrd.queue.QueueError[source]

Simple Exception wrapper.

fyrd.job

Job management is handled by the Job() class. This is a very large class that defines all the methods required to build and submit a job to the cluster.

It accepts keyword arguments defined in fyrd.options on initialization, which are then fleshed out using profile information from the config files defined by fyrd.conf.

The primary argument on initialization is the function or script to submit.

Examples:

Job('ls -lah | grep myfile')
Job(print, ('hi',))
Job('echo hostname', profile='tiny')
Job(huge_function, args=(1, 2), kwargs={'hi': 'there'},
    profile='long', cores=28, mem='200GB')

fyrd.job.Job

class fyrd.Job(command, args=None, kwargs=None, name=None, qtype=None, profile=None, queue=None, **kwds)[source]

Bases: object

Information about a single job on the cluster.

Holds information about submit time, number of cores, the job script, and more.

Below are the core attributes and methods required to use this class, note that this is an incomplete list.

id

The ID number for the job, only set once the job has been submitted

Type

str

name

The name of the job

Type

str

command

The function or shell script that will be submitted

Type

str or callable

args

A list of arguments to the shell script or function in command

Type

list

kwargs

A dictionary of keyword arguments to the function (not shell script) in command

Type

dict

state
A slurm-style one word description of the state of the job, one of:
  • Not_Submitted

  • queued

  • running

  • completed

  • failed

Type

str

submitted

True once the job has been submitted to the batch system

Type

bool

written

True once the job’s script files have been written to disk

Type

bool

done

True once the job has finished running

Type

bool

running

True while the job is running on the cluster

Type

bool

dependencies

A list of dependencies associated with this job

Type

list

out

The output of the function or a copy of stdout for a script

Type

str

stdout

Any output to STDOUT

Type

str

stderr

Any output to STDERR

Type

str

exitcode

The exitcode of the running processes (the script runner if the Job is a function).

Type

int

submit_time

A datetime object for the time of submission

Type

datetime

start

A datetime object for time execution started on the remote node.

Type

datetime

end

A datetime object for time execution ended on the remote node.

Type

datetime

runtime

A timedelta object containing runtime.

Type

timedelta

files

A list of script files associated with this job

Type

list

nodes

A list of nodes associated with this job

Type

list

modules

A list of modules associated with this job

Type

list

clean_files

If True, auto-delete script and function files on job completion

Type

bool

clean_outputs

If True, auto-delete script outputs and error files on job completion

Type

bool

kwds

Keyword arguments to the batch system (e.g. mem, cores, walltime); this is initialized by taking every additional keyword argument to the Job. e.g. Job('echo hi', profile='large', walltime='00:20:00', mem='2GB') will result in kwds containing {walltime: '00:20:00', mem: '2GB'}. There is no need to alter this manually.

Type

dict

submit_args

List of parsed submit arguments that will be passed at runtime to the submit function. Generated within the Job object, no need to set manually, use the kwds attribute instead.

Type

list

initialize()[source]

Use attributes to prep job for running

gen_scripts()[source]

Create script files (but do not write them)

write(overwrite=True)[source]

Write scripts to files

submit(wait_on_max_queue=True)[source]

Submit the job if it is ready and the queue is sufficiently open.

resubmit(wait_on_max_queue=True)[source]

Clean all internal states with scrub() and then resubmit

kill(confirm=True)[source]

Immediately kill the currently running job

clean(delete_outputs=True, get_outputs=True)[source]

Delete any files created by this object

scrub(confirm=True)[source]

Clean everything and reset to an unrun state.

update(fetch_info=True)[source]

Update our status from the queue

wait()[source]

Block until the job is done

get()[source]

Block until the job is done and then return the output (stdout if the job is a script). By default this saves all outputs to self (i.e. .out, .stdout, .stderr) and deletes all intermediate files before returning. If the save argument is False, the output files are not deleted by default.

Notes

Printing or reproducing the class will display detailed job information.

Both wait() and get() will update the queue every few seconds (defined by the queue_update item in the config) and add queue information to the job as they go.

If the job disappears from the queue with no information, it will be listed as ‘completed’.

All jobs have a .submission attribute, which is a Script object containing the submission script for the job and the file name, plus a ‘written’ bool that checks if the file exists.

In addition, some batch systems (e.g. SLURM) have an .exec_script attribute, which is a Script object containing the shell command to run. This difference is due to the fact that some SLURM systems execute multiple lines of the submission file at the same time.

Finally, if the job command is a function, this object will also contain a .function attribute, which contains the script to run the function.

Initialization function arguments.

Parameters
  • command (function/str) – The command or function to execute.

  • args (tuple/dict, optional) – Optional arguments to add to command, particularly useful for functions.

  • kwargs (dict, optional) – Optional keyword arguments to pass to the command, only used for functions.

  • name (str, optional) – Optional name of the job. If not defined, guessed. If a job of the same name is already queued, an integer job number (not the queue number) will be added, ie. <name>.1

  • qtype (str, optional) – Override the default queue type

  • profile (str, optional) – The name of a profile saved in the conf

  • queue (fyrd.queue.Queue, optional) – An already initiated Queue class to use.

  • kwds – All other keywords are parsed into cluster keywords by the options system. For available keywords see fyrd.option_help()
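
A minimal sketch of the typical lifecycle (the command, file name, and the ‘small’ profile are placeholders; the profile is assumed to exist in ~/.fyrd/profiles.txt):

import fyrd

job = fyrd.Job('zcat huge_file.gz | wc -l', profile='small',
               cores=1, mem='400MB', time='00:10:00')
job.submit()              # returns self, so calls can be chained
job.wait()                # block until the queue reports completion
line_count = job.get()    # stdout of the shell command as a string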

Methods
Job.initialize()[source]

Make self runnable using set attributes.

Job.gen_scripts()[source]

Create the script objects from the set parameters.

Job.write(overwrite=True)[source]

Write all scripts.

Parameters

overwrite (bool, optional) – Overwrite existing files, defaults to True.

Returns

self

Return type

Job

Job.clean(delete_outputs=None, get_outputs=True)[source]

Delete all scripts created by this module, if they were written.

Parameters
  • delete_outputs (bool, optional) – also delete all output and err files, but get their contents first.

  • get_outputs (bool, optional) – if delete_outputs, save outputs before deleting.

Returns

self

Return type

Job

Job.scrub(confirm=True)[source]

Clean everything and reset to an unrun state.

Parameters

confirm (bool, optional) – Get user input before proceeding

Returns

self

Return type

Job

Job.submit(wait_on_max_queue=True, additional_keywords=None, max_jobs=None)[source]

Submit this job.

To disable max_queue_len, set it to 0. None will allow override by the default settings in the config file, and any positive integer will be interpreted to be the maximum queue length.

Parameters
  • wait_on_max_queue (bool, optional) – Block until queue limit is below the maximum before submitting.

  • additional_keywords (dict, optional) – Pass this dictionary to the batch system submission function, not necessary.

  • max_jobs (int, optional) – Override the maximum number of jobs to wait for

Returns

self

Return type

Job

Job.resubmit(wait_on_max_queue=True, cancel_running=None)[source]

Attempt to auto resubmit, deletes prior files.

Parameters
  • wait_on_max_queue (bool, optional) – Block until queue limit is below the maximum before submitting.

  • cancel_running (bool or None, optional) – If the job is currently running, cancel it before resubmitting. If None (default), will ask the user.

  • To disable max_queue_len, set it to 0. None will allow override by the default settings in the config file, and any positive integer will be interpreted to be the maximum queue length.

Returns

self

Return type

Job

Job.get_keywords()[source]

Return a list of the keyword arguments used to make the job.

Job.set_keywords(kwds, replace=False)[source]

Set the job keywords, just updates self.kwds.

Parameters
  • kwds (dict) – Set of valid arguments.

  • replace (bool, optional) – Overwrite the keyword arguments instead of updating.

Job.wait()[source]

Block until job completes.

Returns

success – True if exitcode == 0, False if not, ‘disappeared’ if job lost from queue.

Return type

bool or str

Job.get(save=True, cleanup=None, delete_outfiles=None, del_no_save=None, raise_on_error=True)[source]

Block until job completed and return output of script/function.

By default saves all outputs to this class and deletes all intermediate files.

Parameters
  • save (bool, optional) – Save all outputs to the class also (advised)

  • cleanup (bool, optional) – Clean all intermediate files after job completes.

  • delete_outfiles (bool, optional) – Clean output files after job completes.

  • del_no_save (bool, optional) – Delete output files even if save is False

  • raise_on_error (bool, optional) – If the returned output is an Exception, raise it.

Returns

Function output if Function, else STDOUT

Return type

str

Job.get_output(save=True, delete_file=None, update=True, raise_on_error=True)[source]

Get output of function or script.

This is the same as stdout for a script, or the function output for a function.

By default, output file is kept unless delete_file is True or self.clean_files is True.

Parameters
  • save (bool, optional) – Save the output to self.out, default True. Would be a good idea to set to False if the output is huge.

  • delete_file (bool, optional) – Delete the output file when getting

  • update (bool, optional) – Update job info from queue first.

  • raise_on_error (bool, optional) – If the returned output is an Exception, raise it.

Returns

output – The output of the script or function. Always a string if script.

Return type

anything

Job.get_stdout(save=True, delete_file=None, update=True)[source]

Get stdout of function or script, same for both.

By default, output file is kept unless delete_file is True or self.clean_files is True.

Also sets self.start and self.end from the contents of STDOUT if possible.

Parameters

  • save (bool, optional) – Save the output to self.stdout, default True. Would be a good idea to set to False if the output is huge.

  • delete_file (bool, optional) – Delete the stdout file when getting

  • update (bool, optional) – Update job info from queue first.

Returns

The contents of STDOUT, with runtime info and trailing newline removed.

Return type

str

Job.get_stderr(save=True, delete_file=None, update=True)[source]

Get stderr of function or script, same for both.

By default, output file is kept unless delete_file is True or self.clean_files is True.

Parameters
  • save (bool, optional) – Save the output to self.stderr, default True. Would be a good idea to set to False if the output is huge.

  • delete_file (bool, optional) – Delete the stderr file when getting

  • update (bool, optional) – Update job info from queue first.

Returns

The contents of STDERR, with trailing newline removed.

Return type

str

Job.get_times(update=True, stdout=None)[source]

Get the start and end times of the job from STDOUT.

Sets self.start and self.end from the contents of STDOUT if possible.

Parameters
  • update (bool, optional) – Update job info from queue first.

  • stdout (str, optional) – Pass existing stdout for use

Returns

  • start (datetime.datetime)

  • end (datetime.datetime)

Job.get_exitcode(update=True, stdout=None)[source]

Try to get the exitcode.

Parameters
  • update (bool, optional) – Update job info from queue first.

  • stdout (str, optional) – Pass existing stdout for use

Returns

exitcode

Return type

int

Job.update(fetch_info=True)[source]

Update status from the queue.

Parameters

fetch_info (bool, optional) – Fetch basic job info if complete.

Returns

self

Return type

Job

Job.update_queue_info()[source]

Set (and return) queue_info from the queue even if done.

Job.fetch_outputs(save=True, delete_files=None, get_stats=True)[source]

Save all outputs in their current state. No return value.

This method does not wait for job completion, but merely gets the outputs. To wait for job completion, use get() instead.

Parameters
  • save (bool, optional) – Save all outputs to the class also (advised)

  • delete_files (bool, optional) – Delete the output files when getting, only used if save is True

  • get_stats (bool, optional) – Try to get exitcode.

fyrd.submission_scripts

This module defines two classes that are used to build the actual jobs for submission, including writing the files. Function is actually a child class of Script.

class fyrd.submission_scripts.Script(file_name, script)[source]

Bases: object

A script string plus a file name.

Initialize the script and file name.

clean(delete_output=None)[source]

Delete any files made by us.

property exists

True if file is on disk, False if not.

write(overwrite=True)[source]

Write the script file.
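
A minimal sketch of using Script directly (the file name and contents are placeholders; normally the Job class builds these objects for you):

from fyrd.submission_scripts import Script

script = Script('hello.sh', '#!/bin/bash\necho hello')
script.write()           # write hello.sh to disk
print(script.exists)     # True once the file is on disk
script.clean()           # remove the file again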

class fyrd.submission_scripts.Function(file_name, function, args=None, kwargs=None, imports=None, syspaths=None, pickle_file=None, outfile=None)[source]

Bases: fyrd.submission_scripts.Script

A special Script used to run a function.

Create a function wrapper.

NOTE: Function submission will fail if the parent file’s code is not wrapped in an if __name__ == '__main__' block.

Parameters
  • file_name (str) – A root name to the outfiles

  • function (callable) – Function handle.

  • args (tuple, optional) – Arguments to the function as a tuple.

  • kwargs (dict, optional) – Named keyword arguments to pass in the function call

  • imports (list, optional) – A list of imports, if not provided, defaults to all current imports, which may not work if you use complex imports. The list can include the import call, or just be a name, e.g [‘from os import path’, ‘sys’]

  • syspaths (list, optional) – Paths to be included in submitted function

  • pickle_file (str, optional) – The file to hold the function.

  • outfile (str, optional) – The file to hold the output.

clean(delete_output=False)[source]

Delete the input pickle file and any scripts.

Parameters

delete_output (bool, optional) – Delete the output pickle file too.

write(overwrite=True)[source]

Write the pickle file and call the parent Script write function.

fyrd.batch_systems

All batch systems are defined here.

fyrd.batch_systems functions

fyrd.batch_systems.get_cluster_environment(overwrite=False)[source]

Detect the local cluster environment and set MODE globally.

Detect the current batch system by looking for command line utilities. Order is important here, so we hard code the batch system lookups.

Paths to files can also be set in the config file.

Parameters

overwrite (bool, optional) – If True, run checks anyway, otherwise just accept MODE if it is already set.

Returns

MODE

Return type

str

fyrd.batch_systems.check_queue(qtype=None)[source]

Check if both MODE and qtype are valid.

First checks the MODE global and autodetects its value; if that fails, no other tests are done and the qtype argument is ignored.

After MODE is found to be a reasonable value, the queried queue is tested for functionality. If qtype is defined, this queue is tested, else the queue in MODE is tested.

Tests are defined per batch system.

Parameters

qtype (str) –

Returns

batch_system_functional

Return type

bool

Raises

ClusterError – If MODE or qtype is not in DEFINED_SYSTEMS

See also

get_cluster_environment()

Auto detect the batch environment

get_batch_system()

Return the batch system module

fyrd.batch_systems.get_batch_system(qtype=None)[source]

Return a batch_system module.
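
A brief sketch of how these three functions fit together (assumes the fyrd.batch_systems subpackage is importable as shown and that a batch system is installed):

from fyrd import batch_systems

mode = batch_systems.get_cluster_environment()    # e.g. 'slurm' or 'torque'
if batch_systems.check_queue(mode):
    batch = batch_systems.get_batch_system(mode)  # module defining submission and queue parsing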

fyrd.batch_systems.options

All keyword arguments are defined in dictionaries in the options.py file, alongside functions to manage those dictionaries. Of particular importance is option_help(), which can display all of the keyword arguments as a string or a table. check_arguments() checks a dictionary to make sure that the arguments are allowed (i.e. defined); it is called on all keyword arguments in the package.

To see keywords, run fyrd keywords from the console or fyrd.option_help() from a python session.

The way that option handling works in general is that all hard-coded keyword arguments must contain a dictionary entry for ‘torque’ and ‘slurm’, as well as a type declaration. If the type is NoneType, the option is assumed to be a boolean option. If it has a type, check_arguments() attempts to cast the value to that type, and specific idiosyncrasies are handled in this step, e.g. memory is converted into an integer number of MB. Once the arguments are sanitized, format() is called on the string held in either the ‘torque’ or the ‘slurm’ values, and the formatted string is then used as an option. If the type is a list/tuple, the ‘sjoin’ and ‘tjoin’ dictionary keys must exist and are used to handle joining.

The following two functions are used to manage this formatting step.

option_to_string() will take an option/value pair and return an appropriate string that can be used in the current queue mode. If the option is not implemented in the current mode, a debug message is printed to the console and an empty string is returned.

options_to_string() is a wrapper around option_to_string() and can handle a whole dictionary of arguments; it explicitly handles arguments that cannot be managed using a simple string format.
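
An illustrative sketch (the exact strings produced depend on the batch system definitions, so the output indicated in the comments is only an example):

from fyrd.batch_systems import options

# A single keyword becomes a single directive, e.g. something like '#SBATCH --mem=2000'
print(options.option_to_string('mem', '2GB', qtype='slurm'))

# A whole dictionary at once: returns the script header lines plus any
# options that must be passed to the submit command at runtime
parsed, runtime = options.options_to_string({'cores': 4, 'mem': '2GB'}, qtype='slurm')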

fyrd.batch_systems.options.option_help(mode='string', qtype=None, tablefmt='simple')[source]

Print a string to stdout displaying information on all options.

The possible run modes for this extension are:

string

Return a formatted string

print

Print the string to stdout

list

Return a simple list of keywords

table

Return a table of lists

merged_table

Combine all keywords into a single table

Parameters
  • mode ({'string', 'print', 'list', 'table', 'merged_table'}, optional) –

  • qtype (str, optional) – If provided only return info on that queue type.

  • tablefmt (str, optional) –

    A tabulate-style table format, one of:

    'plain', 'simple', 'grid', 'pipe', 'orgtbl',
    'rst', 'mediawiki', 'latex', 'latex_booktabs'
    

Returns

A formatted string

Return type

str

fyrd.batch_systems.options.sanitize_arguments(kwds)[source]

Run check_arguments, but return unmatched keywords as is.

fyrd.batch_systems.options.split_keywords(kwargs)[source]

Split a dictionary of keyword arguments into two dictionaries.

The first dictionary will contain valid arguments for fyrd, the second will contain all others.

Returns

valid_args, other_args

Return type

dict
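
For example (a sketch; ‘my_own_arg’ stands in for any keyword fyrd does not recognize):

from fyrd.batch_systems import options

kwds = {'mem': '2GB', 'cores': 4, 'my_own_arg': True}
valid, other = options.split_keywords(kwds)
# valid -> {'mem': '2GB', 'cores': 4}
# other -> {'my_own_arg': True}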

fyrd.batch_systems.options.check_arguments(kwargs)[source]

Make sure all keywords are allowed.

Raises OptionsError on error, returns sanitized dictionary on success.

Note: If an argument is not recognized, SYNONYMS is checked; raises OptionsError if it is not found there either.

fyrd.batch_systems.options.options_to_string(option_dict, qtype=None)[source]

Return a multi-line string for job submission.

This function pre-parses options and then passes them to the parse_strange_options function of each batch system, before using the option_to_string function to parse the remaining options.

Parameters
  • option_dict (dict) – Dict in format {option: value} where value can be None. If value is None, default used.

  • qtype (str) – The defined batch system

Returns

  • parsed_options (str) – A multi-line string of parsed options

  • runtime_options (list) – A list of parsed options to be used at submit time

fyrd.batch_systems.options.option_to_string(option, value=None, qtype=None)[source]

Return a string with an appropriate flag for slurm or torque.

Parameters
  • option (str) – An allowed option definied in options.all_options

  • value (str, optional) – A value for that option if required (if None, default used)

  • qtype (str, optional) – One of the defined batch systems

Returns

A string with the appropriate flags for the active queue.

Return type

str

fyrd.conf

fyrd.conf handles the config (~/.fyrd/config.txt) file and the profiles (~/.fyrd/profiles.txt) file.

Profiles are combinations of keyword arguments that can be called in any of the submission functions. Both the config and profiles are just ConfigParser objects, conf.py merely adds an abstraction layer on top of this to maintain the integrity of the files.

config

The config has three sections (and no defaults):

  • queue — sets options for handling the queue

  • jobs — sets options for submitting jobs

  • jobqueue — local option handling, will be removed in the future

For a complete reference, see the config documentation : Configuration

Options can be managed with the get_option() and set_option() functions, but it is actually easier to use the console script:

fyrd conf list
fyrd conf edit max_jobs 3000
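
The same operations from Python look roughly like this (a sketch, assuming max_jobs lives in the queue section as suggested by the examples above):

import fyrd

queue_opts = fyrd.conf.get_option('queue')              # the whole 'queue' section
max_jobs   = fyrd.conf.get_option('queue', 'max_jobs')  # a single key
fyrd.conf.set_option('queue', 'max_jobs', 3000)         # write back to ~/.fyrd/config.txt
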
fyrd.conf.get_option(section=None, key=None, default=None)[source]

Get a single key or section.

All args are optional, if they are missing, the parent section or entire config will be returned.

Parameters
  • section (str) – The config section to use (e.g. queue), if None, all sections returned.

  • key (str) – The config key to get (e.g. ‘max_jobs’), if None, whole section returned.

  • default – If the key does not exist, create it with this default value.

Returns

Option value if key exists, None if no key exists.

Return type

option_value

See also

set_option()

Set an option

get_config()

Get the entire config

fyrd.conf.set_option(section, key, value)[source]

Write a config key to the config file.

Parameters
  • section (str) – Section of the config file to use.

  • key (str) – Key to add.

  • value – Value to add for key.

Returns

Return type

ConfigParser

fyrd.conf.delete(section, key)[source]

Delete a config item.

Parameters
  • section (str) – Section of config file.

  • key (str) – Key to delete

Returns

Return type

ConfigParser

fyrd.conf.load_config()[source]

Load config from the config file.

If any section or key from DEFAULTS is not present in the config, it is added back, enforcing a minimal configuration.

Returns

Return type

ConfigParser

fyrd.conf.write_config()[source]

Write the current config to CONFIG_FILE.

fyrd.conf.create_config(cnf=None, def_queue=None)[source]

Create an initial config file.

Gets all information from the file-wide DEFAULTS constant and overwrites specific keys using the values in cnf.

This means that any records in the cnf dict that are not present in DEFAULTS will be ignored, and any records that are absent will be populated from DEFAULTS.

Parameters
  • cnf (dict) – A dictionary of config defaults.

  • def_queue (str) – A name for a queue to add to the default profile.

fyrd.conf.create_config_interactive(prompt=True)[source]

Interact with the user to create a new config.

Uses readline autocompletion to make setup easier.

Parameters

prompt (bool) – Ask for confirmation before beginning the wizard.

profiles

Profiles are wrapped in a Profile() class to make attribute access easy, but they are fundamentally just dictionaries of keyword arguments. They can be created with fyrd.conf.Profile(name, {keywds}) and then written to a file with the write() method.

The easiest way to interact with profiles is not with the class but with the get_profile(), set_profile(), and del_profile() functions. These make it very easy to go from a dictionary of keywords to a profile.

Profiles can then be called with the profile= keyword in any submission function or Job class.

As with the config, profile management is the easiest and most stable when using the console script:

fyrd profile list
fyrd profile add very_long walltime:120:00:00
fyrd profile edit default partition:normal cores:4 mem:10GB
fyrd profile delete small
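
The equivalent calls from Python are roughly as follows (a sketch; the profile name, keywords, and script path are placeholders):

import fyrd

fyrd.conf.set_profile('very_long', {'walltime': '120:00:00', 'mem': '20GB'})
prof = fyrd.conf.get_profile('very_long')            # returns a fyrd.conf.Profile
job  = fyrd.submit('./long_script.sh', profile='very_long')
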
fyrd.conf.Profile
class fyrd.conf.Profile(name, kwds)[source]

Bases: object

A job submission profile. Just a thin wrapper around a dict.

name
Type

str

kwds
Type

dict

write : Write self to config file

Set up bare minimum attributes.

Parameters
  • name (str) – Name of the profile

  • kwds (dict) – Dictionary of keyword arguments (will be validated).

write()[source]

Write self to config file.

fyrd.conf.set_profile(name, kwds, update=True)[source]

Write profile to config file.

Parameters
  • name (str) – The name of the profile to add/edit.

  • kwds (dict) – Keyword arguments to add to the profile.

  • update (bool) – Update the profile rather than overwriting it.

fyrd.conf.get_profile(profile=None, allow_none=True)[source]

Return a profile if it exists, if None, return all profiles.

Will return None if profile is supplied but does not exist.

Parameters
  • profile (str) – The name of a profile to search for.

  • allow_none (bool) – If True, return None if no profile matches, otherwise raise a ValueError.

Returns

The requested profile.

Return type

fyrd.conf.Profile

fyrd.helpers

The helpers are high-level functions that are not required by the core library but make difficult jobs easy, in keeping with the goal of trivially easy cluster submission.

The functions in fyrd.basic below are different in that they provide simple job submission and management, while the functions in fyrd.helpers allow the submission of many jobs.

fyrd.helpers.jobify(name=None, profile=None, qtype=None, submit=True, **kwds)[source]

Decorator to make any function a job.

Will make any function return a Job object that will execute the function on the cluster.

If submit is True, the job will be submitted when it is returned.

Usage:

@fyrd.jobify(name='my_job', profile='small', mem='8GB',
             time='00:10:00', imports=['from time import sleep'])
def do_something(file_path, iteration_count=24):
    for i in range(iteration_count):
        print(file_path + i)
        sleep(1)
    return file_path

job = do_something('my_file.txt')
out = job.get()
Parameters
  • name (str, optional) – Optional name of the job. If not defined, guessed. If a job of the same name is already queued, an integer job number (not the queue number) will be added, ie. <name>.1

  • qtype (str, optional) – Override the default queue type

  • profile (str, optional) – The name of a profile saved in the conf

  • submit (bool, optional) – Submit the Job before returning it

  • kwdsAll other keywords are parsed into cluster keywords by the options system. For available keywords see fyrd.option_help()

Returns

A Job class initialized with the decorated function.

Return type

fyrd.job.Job

Examples

>>> import fyrd
>>> @fyrd.jobify(name='test_job', mem='1GB')
... def test(string, iterations=4):
...     """This does basically nothing!"""
...     outstring = ""
...     for i in range(iterations):
...         outstring += "Version {0}: {1}".format(i, string)
...     return outstring
>>> j = test('hi')
>>> j.get()
'Version 0: hiVersion 1: hiVersion 2: hiVersion 3: hi'
fyrd.helpers.parapply(jobs, df, func, args=(), profile=None, applymap=False, merge_axis=0, merge_apply=False, name='parapply', imports=None, direct=True, **kwds)[source]

Split a dataframe, run apply in parallel, return result.

This function will split a dataframe into however many pieces are requested with the jobs argument, run apply in parallel by submitting the jobs to the cluster, and then recombine the outputs.

If the ‘clean_files’ and ‘clean_outputs’ arguments are not passed, we delete all intermediate files and output files by default.

This function will take any keyword arguments accepted by Job, which can be found by running fyrd.option_help(). It also accepts any of the keywords accepted by pandas.DataFrame.apply().

Parameters
  • jobs (int) – Number of pieces to split the dataframe into

  • df (DataFrame) – Any pandas DataFrame

  • args (tuple) – Positional arguments to pass to the function, keyword arguments can just be passed directly.

  • profile (str) – A fyrd cluster profile to use

  • applymap (bool) – Run applymap() instead of apply()

  • merge_axis (int) – Which axis to merge on, 0 or 1, default is 1 as apply transposes columns

  • merge_apply (bool) – Apply the function on the merged dataframe also

  • name (str) – A prefix name for all of the jobs

  • imports (list) – A list of imports in any format, e.g. [‘import numpy’, ‘scipy’, ‘from numpy import mean’]

  • direct (bool) – Whether to run the function directly or to return a Job. Default True.

  • Any other keyword arguments recognized by fyrd will be used for job submission.

  • Additional keyword arguments will be passed to DataFrame.apply()

Returns

A recombined DataFrame: concatenated version of original split DataFrame

Return type

DataFrame

Example

>>> import numpy
>>> import pandas
>>> import fyrd
>>> df = pandas.DataFrame([[0, 1], [2, 6], [9, 24], [13, 76], [4, 12]])
>>> df['sum'] = fyrd.helpers.parapply(2, df, lambda x: x[0]+x[1], axis=1)
>>> df
    0   1  sum
    0   0   1    1
    1   2   6    8
    2   9  24   33
    3  13  76   89
    4   4  12   16

See also

parapply_summary()

Merge results of parapply using applied function

splitrun()

Run a command in parallel on a split file

fyrd.helpers.parapply_summary(jobs, df, func, args=(), profile=None, applymap=False, name='parapply', imports=None, direct=True, **kwds)[source]

Run parapply for a function with summary stats.

Instead of returning the concatenated result, merge the result using the same function as was used during apply.

This works best for summary functions like .mean(), which do a linear operation on a whole dataframe or series.

Parameters
  • jobs (int) – Number of pieces to split the dataframe into

  • df (DataFrame) – Any pandas DataFrame

  • args (tuple) – Positional arguments to pass to the function, keyword arguments can just be passed directly.

  • profile (str) – A fyrd cluster profile to use

  • applymap (bool) – Run applymap() instead of apply()

  • merge_axis (int) – Which axis to merge on, 0 or 1, default is 1 as apply transposes columns

  • merge_apply (bool) – Apply the function on the merged dataframe also

  • name (str) – A prefix name for all of the jobs

  • imports (list) – A list of imports in any format, e.g. [‘import numpy’, ‘scipy’, ‘from numpy import mean’]

  • direct (bool) – Whether to run the function directly or to return a Job. Default True.

  • Any other keyword arguments recognized by fyrd will be used for job submission.

  • Additional keyword arguments will be passed to DataFrame.apply()

Returns

A recombined DataFrame

Return type

DataFrame

Example

>>> import numpy
>>> import pandas
>>> import fyrd
>>> df = pandas.DataFrame([[0, 1], [2, 6], [9, 24], [13, 76], [4, 12]])
>>> df = fyrd.helpers.parapply_summary(2, df, numpy.mean)
>>> df
0     6.083333
1    27.166667
dtype: float64

See also

parapply()

Run a command in parallel on a DataFrame without merging the result

fyrd.helpers.splitrun(jobs, infile, inheader, command, args=None, kwargs=None, name=None, qtype=None, profile=None, outfile=None, outheader=False, merge_func=None, direct=True, **kwds)[source]

Split a file, run command in parallel, return result.

This function will split a file into however many pieces are requested with the jobs argument, and run command on each.

Accepts exactly the same arguments as the Job class, with the exception of the first three and last four arguments, which are:

  • the number of jobs

  • the file to work on

  • whether the input file has a header

  • an optional output file

  • whether the output file has a header

  • an optional function to use to merge the resulting list, only used if there is no outfile.

  • whether to run directly or to return a Job. If direct is True, this function will just run and thus block until complete, if direct is False, the function will submit as a Job and return that Job.

Note: If command is a string, .format(file={file}) will be called on it, where file is each split file. If command is a function, then there must be an argument in either args or kwargs that contains {file}. It will be replaced with the path to the file, again by the format command.

If outfile is specified, there must also be an ‘{outfile}’ line in any script or an ‘{outfile}’ argument in either args or kwargs. When this function completes, the file at outfile will contain the concatenated output files of all of the jobs.

If the ‘clean_files’ and ‘clean_outputs’ arguments are not passed, we delete all intermediate files and output files by default.

The intermediate files will be stored in the ‘scriptpath’ directory.

Any header line is kept at the top of the file.

Primary return value varies and is decided in this order:

If outfile:

the absolute path to that file

If merge_func:

the result of merge_func(list), where list is the list of outputs.

Else:

a list of results

If direct is False, this function returns a fyrd.job.Job object which will return the results described above on get().

Parameters
  • jobs (int) – Number of pieces to split the dataframe into

  • infile (str) – The path to the file to be split.

  • inheader (bool) – Does the input file have a header?

  • command (function/str) – The command or function to execute.

  • args (tuple/dict) – Optional arguments to add to command, particularly useful for functions.

  • kwargs (dict) – Optional keyword arguments to pass to the command, only used for functions.

  • name (str) – Optional name of the job. If not defined, guessed. If a job of the same name is already queued, an integer job number (not the queue number) will be added, ie. <name>.1

  • qtype (str) – Override the default queue type

  • profile (str) – The name of a profile saved in the conf

  • outfile (str) – The path to the expected output file.

  • outheader (bool) – Does the output file have a header?

  • merge_func (function) – An optional function used to merge the output list if there is no outfile.

  • direct (bool) – Whether to run the function directly or to return a Job. Default True.

  • All other keywords are parsed into cluster keywords by the options system. For available keywords see fyrd.option_help()

Returns

See description above

Return type

Varies
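
A minimal sketch of splitrun with a shell command (file names and the ‘small’ profile are placeholders; note the required {file} and {outfile} markers described above):

import fyrd

counts_file = fyrd.helpers.splitrun(
    4, 'data.txt', False,                  # 4 pieces, input file, no header
    'wc -l {file} > {outfile}',            # command run on each piece
    outfile='counts.txt', profile='small'  # concatenated output, optional profile
)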

fyrd.basic

This module holds high level functions to make job submission easy, allowing the user to skip multiple steps and to avoid using the Job class directly.

submit(), make_job(), and make_job_file() all create Job objects in the background and allow users to submit jobs. All of these functions accept the exact same arguments as the Job class does, and all of them return a Job object.

submit_file() is different, it simply submits a pre-formed job file, either one that has been written by this software or by any other method. The function makes no attempt to fix arguments to allow submission on multiple clusters, it just submits the file.

clean() takes a list of job objects and runs the clean() method on all of them, clean_dir() uses known directory and suffix information to clean out all job files from any directory.

fyrd.basic.submit()[source]

Submit a script to the cluster.

Parameters
  • command (function/str) – The command or function to execute.

  • args (tuple/dict, optional) – Optional arguments to add to command, particularly useful for functions.

  • kwargs (dict, optional) – Optional keyword arguments to pass to the command, only used for functions.

  • name (str, optional) – Optional name of the job. If not defined, guessed. If a job of the same name is already queued, an integer job number (not the queue number) will be added, ie. <name>.1

  • qtype (str, optional) – Override the default queue type

  • profile (str, optional) – The name of a profile saved in the conf

  • queue (fyrd.queue.Queue, optional) – An already initiated Queue class to use.

  • kwds – All other keywords are parsed into cluster keywords by the options system. For available keywords see fyrd.option_help()

Returns

Return type

Job object

fyrd.basic.make_job()[source]

Make a job compatible with the chosen cluster but do not submit.

Parameters
  • command (function/str) – The command or function to execute.

  • args (tuple/dict, optional) – Optional arguments to add to command, particularly useful for functions.

  • kwargs (dict, optional) – Optional keyword arguments to pass to the command, only used for functions.

  • name (str, optional) – Optional name of the job. If not defined, guessed. If a job of the same name is already queued, an integer job number (not the queue number) will be added, ie. <name>.1

  • qtype (str, optional) – Override the default queue type

  • profile (str, optional) – The name of a profile saved in the conf

  • queue (fyrd.queue.Queue, optional) – An already initiated Queue class to use.

  • kwds – All other keywords are parsed into cluster keywords by the options system. For available keywords see fyrd.option_help()

Returns

Return type

Job object

fyrd.basic.make_job_file()[source]

Make a job file compatible with the chosen cluster.

Parameters
  • command (function/str) – The command or function to execute.

  • args (tuple/dict, optional) – Optional arguments to add to command, particularly useful for functions.

  • kwargs (dict, optional) – Optional keyword arguments to pass to the command, only used for functions.

  • name (str, optional) – Optional name of the job. If not defined, guessed. If a job of the same name is already queued, an integer job number (not the queue number) will be added, ie. <name>.1

  • qtype (str, optional) – Override the default queue type

  • profile (str, optional) – The name of a profile saved in the conf

  • queue (fyrd.queue.Queue, optional) – An already initiated Queue class to use.

  • kwds – All other keywords are parsed into cluster keywords by the options system. For available keywords see fyrd.option_help()

Returns

Path to job file

Return type

str

fyrd.basic.submit_file()[source]

Submit an existing job file to the cluster.

This function is independent of the Job object and just submits a file using a cluster appropriate method.

Parameters
  • script_file (str) – The path to the file to submit

  • dependencies (str or list of strings, optional) – A job number or list of job numbers to depend on

  • qtype (str, optional) – The name of the queue system to use, auto-detected if not given.

  • submit_args (dict) – A dictionary of keyword arguments for the submission script.

Returns

job_number

Return type

str

fyrd.basic.clean()[source]

Delete all files in jobs list or single Job object.

Parameters
  • jobs (fyrd.job.Job or list of fyrd.job.Job) – Job objects to clean

  • clean_outputs (bool) – Also clean outputs.

fyrd.basic.clean_dir()[source]

Delete all files made by this module in directory.

CAUTION: This function will delete EVERY file with extensions matching these:

.<suffix>.err
.<suffix>.out
.<suffix>.out.func.pickle
.<suffix>.sbatch & .<suffix>.script for slurm mode
.<suffix>.qsub for torque mode
.<suffix>.job for local mode
_func.<suffix>.py
_func.<suffix>.py.pickle.in
_func.<suffix>.py.pickle.out

Note

This function will change in the future to use batch system defined paths.

Parameters
  • directory (str) – The directory to run in, defaults to the current directory.

  • suffix (str) – Override the default suffix.

  • qtype (str) – Only run on files of this qtype

  • confirm (bool) – Ask the user before deleting the files

  • delete_outputs (bool) – Delete all output files too.

Returns

A set of deleted files

Return type

list

fyrd.run

A library of useful functions used throughout the fyrd package.

These include functions to handle data, format outputs, handle file opening, run commands, check file extensions, get user input, and search and format imports.

These functions are not intended to be accessed directly and so documentation is limited.

exception fyrd.run.CommandError[source]

Bases: Exception

A custom exception.

class fyrd.run.CustomFormatter(prog, indent_increment=2, max_help_position=24, width=None)[source]

Bases: argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter

Custom argparse formatting.

fyrd.run.block_read(files, size=65536)[source]

Iterate through a file by blocks.

fyrd.run.check_pid(pid)[source]

Check For the existence of a unix pid.

fyrd.run.cmd(command, args=None, stdout=None, stderr=None, tries=1)[source]

Run command and return status, output, stderr.

Parameters
  • command (str) – Path to executable.

  • args (tuple, optional) – Tuple of arguments.

  • stdout (str, optional) – File or open file like object to write STDOUT to.

  • stderr (str, optional) – File or open file like object to write STDERR to.

  • tries (int, optional) – Number of times to try to execute. 1+

Returns

  • exit_code (int)

  • STDOUT (str)

  • STDERR (str)
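
For example (a sketch; assumes the fyrd.run module is importable as shown):

from fyrd import run

code, stdout, stderr = run.cmd('ls', ('-l', '/tmp'), tries=3)
if code != 0:
    print('command failed:', stderr)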

fyrd.run.cmd_or_file(string)[source]

If string is a file, return the contents, else return the string.

Parameters

string (str) – Path to a file or any other string

Returns

script – Either the contents of the file if string is a file or just the contents of string.

Return type

str

fyrd.run.count_lines(infile, force_blocks=False)[source]

Return the line count of a file as quickly as possible.

Uses wc if available, otherwise does a rapid read.

fyrd.run.exp_file(infile)[source]

Return an expanded path to a file.

fyrd.run.export_globals(function)[source]

Add a function’s globals to the current globals.

fyrd.run.export_imports(function, kwds)[source]

Get imports from a function and from kwds.

Also sets globals and adds path to module to sys path.

Parameters
  • function (callable) – A function handle

  • kwds (dict) – A dictionary of keyword arguments

Returns

imports + sys.path.append for module path

Return type

list

fyrd.run.export_run(function, args, kwargs)[source]

Execute a function after first exporting all imports.

fyrd.run.file_getter(file_strings, variables, extra_vars=None, max_count=None)[source]

Get a list of files and variable values using the search string.

The file strings can contain standard unix glob (like *) and variable containing strings in the form {name}.

For example, a file_string of {dir}/*.txt will match every file that ends in .txt in every directory relative to the current path.

The result for a directory name test with two files named 1.txt and 2.txt is a list of:

[(('test/1.txt'), {'dir': 'test'}),
 (('test/2.txt'), {'dir': 'test'})]

This is repeated for every file_string in file_strings, and the following tests are done:

  1. All file_strings must result in identical numbers of files

  2. All variables must have only a single value in every file string

If there are multiple file_strings, they are added to the result in order, but the dictionary remains the same, as variables must be shared. If multiple file_strings are provided, the results are combined in alphabetical order.

Parameters
  • file_strings (list of str) – List of search strings, e.g. */*, */*.txt, {dir}/*.txt or {dir}/{file}.txt

  • variables (list of str) – List of variables to look for

  • extra_vars (list of str, optional) –

    A list of additional variables specified in a very precise format:

    new_var:orig_var:regex:sub_str
    
    or
    
    new_var:value
    

    The orig_var must correspond to a variable in variables. new_var will be generated by running re.sub(regex, sub_str, string), where string is the result of orig_var for the given file set.

  • max_count (int, optional) – Max number of file_strings to parse, default is all.

Returns

A list of files. Each list item will be a two-item tuple of (files, variables). Files will be a tuple with the same length as max_count, or file_strings if max_count is None. Variables will be a dictionary of all variables and extra_vars for this file set. e.g.:

[((file1, dir1, file2), {var1: val, var2: val})]

Return type

list

Raises

ValueError – Raised if any of the above tests are not met.

fyrd.run.file_type(infile)[source]

Return file type after stripping gz or bz2.

fyrd.run.get_all_imports(function, kwds, prot=False)[source]

Get all imports from a function and from kwds.

Parameters
  • function (callable) – A function handle

  • kwds (dict) – A dictionary of keyword arguments

  • prot (bool) – Wrap all import in try statement

Returns

Imports

Return type

list

fyrd.run.get_function_path(function)[source]

Return path to module defining a function if it exists.

fyrd.run.get_imports(function, mode='string')[source]

Build a list of potentially useful imports from a function handle.

Gets:

  • All modules from globals()

  • All modules from the function’s globals()

  • All functions from the function’s globals()

Modes:

string:

Return a list of strings formatted as unprotected import calls

prot:

Similar to string, but with try..except blocks

list:

Return two lists: (import name, module name) for modules and (import name, function name, module name) for functions

Parameters
  • function (callable) – A function handle

  • mode (str) – A string corresponding to one of the above modes

Returns

Return type

str or list

fyrd.run.get_input(message, valid_answers=None, default=None)[source]

Get input from the command line and check answers.

Allows input to work with python 2/3

Parameters
  • message (str) – A message to print, an additional space will be added.

  • valid_answers (list) – A list of answers to accept, if None, ignored. Case insensitive. There is one special option here: ‘yesno’, this allows all case insensitive variations of y/n/yes/no.

  • default (str) – The default answer.

Returns

response

Return type

str

fyrd.run.get_pbar(iterable, name=None, unit=None, **kwargs)[source]

Return a tqdm progress bar iterable.

If progressbar is set to False in the config, will not be shown.

fyrd.run.get_yesno(message, default=None)[source]

Get yes/no answer from user.

Parameters
  • message (str) – A message to print, an additional space will be added.

  • default ({'y', 'n'}, optional) – One of {‘y’, ‘n’}, the default if the user gives no answer. If None, answer forced.

Returns

True on yes, False on no

Return type

bool

fyrd.run.import_function(function, mode='string')[source]

Return an import string for the function.

Attempts to resolve the parent module also; if the parent module is a file (i.e. it isn’t __main__), the import string will include a call to sys.path.append to ensure the module is importable.

If this function isn’t defined by a module, returns an empty string.

Parameters

mode ({'string', 'list'}, optional) – string/list, return as a unified string or a list.

fyrd.run.indent(string, prefix=' ')[source]

Replicate python3’s textwrap.indent for python2.

Parameters
  • string (str) – Any string.

  • prefix (str) – What to indent with.

Returns

Indented string

Return type

str

fyrd.run.is_exc(x)[source]

Check if x is the output of sys.exc_info().

Returns

True if matched the output of sys.exc_info().

Return type

bool

fyrd.run.is_exe(fpath)[source]

Return True if fpath is executable.

fyrd.run.is_file_type(infile, types)[source]

Return True if infile is one of types.

Parameters
  • infile (str) – Any file name

  • types (list) – String or list/tuple of strings (e.g [‘bed’, ‘gtf’])

Returns

is_file_type

Return type

bool

fyrd.run.listify(iterable)[source]

Try to force any iterable into a list sensibly.

fyrd.run.merge_lists(lists)[source]

Turn a list of lists into a single list.

fyrd.run.normalize_imports(imports, prot=True)[source]

Take a heterogeneous list of imports and normalize it.

Parameters
  • imports (list) – A list of strings, formatted differently.

  • prot (bool) – Protect imports with try..except blocks

Returns

A list of strings that can be used for imports

Return type

list

fyrd.run.open_zipped(infile, mode='r')[source]

Open a regular, gzipped, or bz2 file.

If infile is a file handle or text device, it is returned without changes.

Returns

Return type

text mode file handle.

fyrd.run.opt_split(opt, split_on)[source]

Split options by chars in split_on, merge all into single list.

Parameters
  • opt (list) – A list of strings, can be a single string.

  • split_on (list) – A list of characters to use to split the options.

Returns

A single merged list of split options, uniqueness guaranteed, order not.

Return type

list

fyrd.run.parse_glob(string, get_vars=None)[source]

Return a list of files that match a simple regex glob.

Parameters
  • string (str) –

  • get_vars (list) – A list of variable names to search for. The string must contain these variables in the form {variable}. These variables will be temporarily replaced with a * and then run through glob.glob to generate a list of files. This list is then parsed to create the output.

Returns

Keys are all files that match the string, values are None if get_vars is not passed. If get_vars is passed, the values are dictionaries of {‘variable’: ‘result’}. e.g. for {name}.txt and hi.txt:

{hi.txt: {name: 'hi'}}

Return type

dict

Raises

ValueError – If blank or numeric variable names are used or if get_vars returns multiple different names for a file.

fyrd.run.replace_argument(args, find_string, replace_string, error=True)[source]

Replace find_string with replace string in a tuple or dict.

If dict, the values are replaced, not the keys.

Note: args can also be a list, in which case the first item is assumed to be a tuple, and the second a dictionary

Parameters
  • args (list/tuple/dict) – Tuple or dict of args

  • find_string (str) – A string to search for

  • replace_string (str) – A string to replace with

  • error (bool) – Raise ValueError if replacement fails

Returns

Return type

The same object as was passed, with alterations made.

fyrd.run.split_file(infile, parts, outpath='', keep_header=False)[source]

Split a file in parts and return a list of paths.

Note

Linux specific (uses wc).

If has_header is True, the top line is stripped off the infile prior to splitting and assumed to be the header.

Parameters
  • outpath (str, optional) – The directory to save the split files.

  • keep_header (bool, optional) – Add the header line to the top of every file.

Returns

Paths to split files.

Return type

list

fyrd.run.string_getter(string)[source]

Parse a string for {}, {#}, and {string}.

Parameters

string (str) –

Returns

  • ints (set) – A set of ints containing all {#} values

  • vrs (set) – A set of {string} values

Raises

ValueError – If both {} and {#} are passed

fyrd.run.syspath_fmt(syspaths)[source]

Take a list of paths and return a list of sys.path.append strings.

fyrd.run.update_syspaths(function, kwds=None)[source]

Add function path to ‘syspaths’ in kwds.

fyrd.run.which(program)[source]

Replicate the UNIX which command.

Taken verbatim from:

stackoverflow.com/questions/377017/test-if-executable-exists-in-python

Parameters

program (str) – Name of executable to test.

Returns

Path to the program or None on failure.

Return type

str or None

fyrd.run.write_iterable(iterable, outfile)[source]

Write all elements of iterable to outfile.

fyrd.logme

This is a package I wrote myself and keep using because I like it. It provides syslog style leveled logging (e.g. ‘debug’->’info’->’warn’->’error’->’critical’) and it implements colors and timestamped messages.

The minimum print level can be set module wide at runtime by changing fyrd.logme.MIN_LEVEL.

fyrd.logme.log(message, level='info', logfile=None, also_write=None, min_level=None, kind=None)[source]

Print a string to logfile.

Levels display as:

verbose:  <timestamp> VERBOSE -->
debug:    <timestamp> DEBUG -->
info:     <timestamp> INFO -->
warn:     <timestamp> WARNING -->
error:    <timestamp> ERROR -->
critical: <timestamp> CRITICAL -->
Parameters
  • message (str, optional) – The message to print.

  • logfile (file or logging object, optional) – Optional file to log to, defaults to STDERR. Can provide a logging object

  • level ({'debug', 'info', 'warn', 'error', 'normal'}, optional) – Will only print if level > MIN_LEVEL

  • also_write ({'stdout', 'stderr'}, optional) – Print to STDOUT or STDERR also. These only have an effect if the output is not already set to the same device.

  • min_level (str, deprecated) – Retained for backwards compatibility, min_level should be set using the logme.MIN_LEVEL constant.

  • kind (str, deprecated) – synonym for level, kept to retain backwards compatibility

Logging with timestamps and optional log files.

Print a timestamped message to a logfile, STDERR, or STDOUT.

If STDERR or STDOUT are used, colored flags are added. Colored flags are INFO, WARNING, ERROR, or CRITICAL.

It is possible to write to both logfile and STDOUT/STDERR using the also_write argument.

If level is ‘error’ or ‘critical’, the message is written to STDERR unless also_write == -1.

MIN_LEVEL can also be provided; logs will only print if level > MIN_LEVEL. Level order: critical > error > warn > info > debug > verbose.

Usage:

import logme as lm
lm.log("Screw up!", <outfile>,
    level='debug'|'info'|'warn'|'error'|'normal',
    also_write='stderr'|'stdout')

Example:

lm.log('Hi')
Prints: 20160223 11:46:24.969 | INFO --> Hi
lm.log('Hi', level='debug')
Prints nothing
lm.MIN_LEVEL = 'debug'
lm.log('Hi', level='debug')
Prints: 20160223 11:46:24.969 | DEBUG --> Hi

Note: Uses terminal colors and STDERR; not compatible with non-Unix systems.

Change Log

Version 0.6.2b1

This version adds bugfixes and stability over version 0.6.2a1.

Version 0.6.2a1

This version brings a major overhaul to the structure of the code, while leaving the API mostly intact.

Major Changes

  • Batch system definitions are now fully modular and are contained in the fyrd.batch_systems package. options.py has also been moved into this package, which allows any programmer to add a new batch system definition to fyrd by editing the contents of that small subpackage.

  • Updated the console script to allow running arbitrary shell scripts on the console with fyrd run, or submitting any number of existing job files with fyrd sub. Added the new alias scripts frun and fsub for these modes. Both accept the --wait argument, meaning they will block until the jobs complete.

  • Documentation overhauled to update the API and add instructions on creating a new batch system; these instructions are duplicated in the README within the batch_systems package folder.

  • Local support temporarily removed. It didn’t work very well and it broke the new batch system structure; I hope to add it back again shortly.

  • Full support for array job parsing for both torque and slurm. We now create one job entry for each array job child, instead of one for each array job. To manage this, the fyrd.queue.Queue.QueueJob class was moved to fyrd.queue.QueueJob and split to add a child class, fyrd.queue.QueueChild. All array jobs now have one fyrd.queue.QueueJob job, plus one fyrd.queue.QueueChild job for each of their children, which are stored in the children dictionary of the fyrd.queue.QueueJob class.

  • Added a get method to the fyrd.queue.Queue class to allow a user to get outputs from a list of jobs; it loops continuously through the jobs so that jobs are not lost.

  • Added tqdm as a requirement and enabled progress bars in multi-job wait and get.

Minor Changes

  • Updated the documentation to include this changelog, which will only contain change information for version 0.6.2a1 onwards.

  • Added additional tests to cover the new changes as well as generally increase test suite coverage.

  • Several small bug fixes.
