Welcome to Loom’s documentation!¶
Release Notes¶
0.7.3¶
- Support Google Cloud Registry authentication
- Handle conflicting filenames with automatic index
- Bug fixes:
  - Preserve order of steps in a run to match the template
  - Fix run, template, and data object delete functions
  - Add docker-py dependency needed for local deployment
  - Fix file_relative_path error related to run import
  - Quote env values to support ansible>=2.8.0
0.7.2¶
- Bug fixes:
  - The force_rerun setting was being ignored
  - Duplicate task attempts were being created on failure
  - Fixed import of nested templates with preexisting children
  - Python package conflicts resolved; migrated Docker image to Ubuntu
  - Fixes to the Jenkins pipeline
- Added “size” and “index” special functions to workflow language
- Refactored views in portal for better performance
0.7.1¶
- Bug fixes:
  - Critical bug affecting all runs with multiple outputs
  - Bug that prevented setting disk size for server or workers in Google Cloud
  - Bug that prevented old logs from being automatically deleted
0.7.0¶
- Import/export runs and templates with dependencies
- Re-use existing results if an identical Task has already been run
0.6.0¶
- Installable from PyPI with "pip install loomengine"
- User authentication and authorization
0.5.3¶
- Retry tasks on failure or lost heartbeat
- Paginated index views in browser
- Chunked index view queries by commandline client
- Migrated kibana logs to same port as Loom under /logs
- Critical bugfix for corrupted data trees
- Simplified output filenames and directory structure
- Periodic check for tasks that failed to clean up
- Client warns, not errors, for duplicate file or template imports
- Add setting LOOM_FORCE_DB_MIGRATIONS_ON_START
0.5.2¶
- “loom server start” now starts a local server using default settings
- Required settings to launch on gcloud were reduced to a minimal set
- Server settings validation
- Removed deprecated commands “loom show”, “loom import”, and “loom export”
- Removed deprecated fixed_inputs in templates
- Deprecation warning for "loom server start --admin-files-dir"; flag renamed to "--resource-dir"
- Server settings can be changed without destroying server, using “loom server start/stop”
- Docker image versions for Loom server components can be upgraded without destroying server, using “loom server start/stop”
0.5.1¶
- Enhanced validation of templates and data objects
0.5.0¶
- Tags and labels for files, templates, and runs
- Changed client commands to follow ‘loom {noun} {verb}’ pattern
0.4.1¶
- Notification for completed runs by email or posting JSON to URL
- Documentation for Loom templates
- Templates can be referenced by hash
- Added retries to file import/export, docker pull, other external services
- Added --original-copy option to "loom file import"
- Added LOOM_DEFAULT_DOCKER_REGISTRY setting
0.4.0¶
- Parallel workflows
- Deprecated fixed inputs, replaced with optional and overridable “data” field on standard inputs
- User-defined run names, using the optional --name flag with "loom run"
- Updated examples, including two parallel examples "word_scoring" and "word_combinations"
- Saving of templates is no longer asynchronous, so any errors are raised immediately with “loom import template”
- Outputs can now use “glob” source in addition to “filename” and “stream”
0.3.8¶
- Run overview shows nested runs, tasks, and task-attempts
0.3.7¶
- Retries for upload/download from Google Storage
0.3.6¶
- Runs have “waiting” status until they start
- Runs are no longer changed to “killed” if they already completed
- Input/output detail routes on runs
0.3.5¶
- Critical bugfix for 0.3.4
0.3.4¶
- Pre-configure Kibana
- Disable X-Pack features in Kibana and Elasticsearch
- Handle several sporadic failures from gcloud services
- Handle gcloud bucket-to-bucket file copies longer than 30s
- Prune docker data volumes
0.3.3¶
- Critical bugfix for 0.3.2 that prevented use on Google Cloud
0.3.2¶
- Fluentd for logging, with kibana+elasticsearch for log viewing
- Nested templates by reference
- API documentation with swagger
- Reduced lag time in running tasks
0.3.1¶
- Allow substitution in template output filenames
- Added LOOM_PRESERVE_ON_FAILURE and LOOM_PRESERVE_ALL flags for debugging
- Several bugfixes
0.3.0¶
- User-configurable playbooks
- Non-reverse-compatible simplifications to API
- Reduced server response times
- Dockerized deployment locally and on Google Cloud
- Optional dockerized MySQL server
- Retry tasks if process stops responding
0.2.1¶
- Use release-specific DOCKER_TAG in default settings
0.2.0¶
- Loom can create a server locally or on Google Cloud Platform
- Accepts workflow templates in JSON or YAML format
- Web portal provides a browser interface for viewing templates, files, and runs
- Loom client for managing runs from the terminal
About¶
What is Loom?¶
Loom is a platform-independent tool to create, execute, track, and share workflows.
Why use Loom?¶
Ease of use¶
Loom runs out-of-the-box locally or in the cloud.
Repeatable analysis¶
Loom makes sure you can repeat your analysis months and years down the road after you’ve lost your notebook, your data analyst has found a new job, and your server has had a major OS version upgrade.
Loom uses Docker to reproduce your runtime environment, records file hashes to verify analysis inputs, and keeps fully reproducible records of your work.
Traceable results¶
Loom remembers anything you ever run and can tell you exactly how each result was produced.
Portability between platforms¶
Exactly the same workflow can be run on your laptop or on a cloud service.
Open architecture¶
Not only is Loom open source and free to use, it uses an inside-out architecture that minimizes lock-in and lets you easily share your work with other people.
- Write your results to a traditional filesystem or object store and browse them outside of Loom
- Publish your tools as Docker images
- Publish your workflows as simple, human-readable documents
- Collaborate by sharing your workflows and results between Loom servers
- Connect Loom to multiple file stores without creating redundant copies
- Efficient re-use of results for redundant analysis steps
Browser interface¶
While you may want to automate your analysis from the command line, a browser interface is invaluable for exploring your workflow templates and keeping an eye on current analysis runs.
Who needs Loom?¶
Loom is built for the kind of workflows that bioinformaticians run – multi-step analyses with large data files passed between steps. But nothing about Loom is specific to bioinformatics.
Loom is scalable and supports individual analysts or large institutions.
What is the current status?¶
Loom is under active development. To get involved, contact nhammond@stanford.edu
Contributors¶
- Nathan Hammond
- Isaac Liao
Getting Started¶
This guide walks you through installing the Loom client, using it to launch a Loom server either on your local machine or on Google Cloud Platform, and running a workflow.
Installing the Loom client¶
Prerequisites¶
Required¶
- python >= 2.7, < 3.x
- pip
- Docker (required in default ‘local’ mode)
- Google Cloud SDK (required in ‘gcloud’ mode)
Installing¶
To install the client, use:
pip install loomengine
You do not need to install the loomengine_worker or loomengine_server packages directly. The client can provision a server, and the server can provision workers, either in local docker containers or on newly provisioned VMs.
Starting a server¶
Local server¶
To start a local server with default settings:
loom server start
Skip to “Running a workflow” to run an analysis on the local server.
Google Cloud server¶
SECURITY WARNING: Running on Google Cloud is not currently secure with default firewall settings. By default, the Loom server accepts requests on port 443 from any source. Unless you restrict access, anyone in the world can access data or cause jobs to be run with your service account key. At present, Loom should only be run in Google Cloud if it is in a secured private network.
Unless you have read and understand the warning above, do not proceed with the instructions below.
First, create a directory that will store files needed to administer the Loom server:
mkdir ~/loom-admin-files
Second, create a service account credential. Refer to the instructions in Google Cloud documentation.
Save the JSON credential to “~/loom-admin-files/key.json”.
Third, make sure your Google Cloud SDK is initialized and authenticated:
gcloud init
gcloud auth application-default login
Fourth, copy the settings file template from “loom/loomengine/client/settings/gcloud.conf” to “~/loom-gcloud.conf” and fill in values specific to your project in the copied version. Make sure these settings are defined:
LOOM_GCE_EMAIL: # service account email whose key you provided
LOOM_GCE_PEM_FILE: key.json
LOOM_GCE_PROJECT:
LOOM_GOOGLE_STORAGE_BUCKET:
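For illustration, a completed settings file might look like the following; the email, project, and bucket values below are placeholders for your own:
LOOM_GCE_EMAIL: loom-service@my-project.iam.gserviceaccount.com
LOOM_GCE_PEM_FILE: key.json
LOOM_GCE_PROJECT: my-project
LOOM_GOOGLE_STORAGE_BUCKET: my-loom-bucket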
Finally, create and start the server:
loom server start --settings-file ~/loom-gcloud.conf --admin-files-dir ~/loom-admin-files
Verify that the server is running¶
loom server status
Running a workflow¶
Export an example workflow¶
Loom ships with example workflows that demonstrate the features of Loom workflow definitions and give instructions on how to run a workflow.
To see a list of available examples, use:
loom example list
To work with an example, export it by name:
loom example export hello_world
This will create a local directory with the example, any input files, and a README.rst file explaining the example and how to execute it.
Import the template and input files¶
loom file import hello_world/hello.txt
loom file import hello_world/world.txt
loom template import hello_world/hello_world.yaml
Start a run¶
loom run start hello_world hello=hello.txt world=world.txt
Listing objects in Loom’s database¶
loom file list
loom template list
loom run list
Using unique identifiers and hash values¶
Note that a unique identifier (a UUID) has been appended to the file, template, and run names, preceded by the "@" symbol. If you have multiple objects with the same name, it is good practice to use all or part of the UUID along with the human-readable name, e.g.
loom run start hello_world@37fa721e hello=hello.txt@17c73d43 world=world.txt@f2fc4af5
(UUIDs are generated randomly at the time of import, so yours will not match those shown in the command above.)
You can also use the hash of the file contents to uniquely identify imported data files or templates. Hashes are preceded by the "$" symbol.
loom run start hello_world\$11405cbf2599f017c67179c271a064ec hello=hello.txt\$b1946ac92492d2347c6235b4d2611184 world=world.txt\$591785b794601e212b260e25925636fd
Human-readable names are optional when another identifier is used, but including them will improve readability.
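For instance, human-readable names can be dropped and the run started with identifiers alone; the UUID fragments below come from the earlier example, so yours will differ:
loom run start @37fa721e hello=@17c73d43 world=@f2fc4af5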
Viewing run progress in a web browser¶
loom browser
Deleting the Loom server¶
Warning! This may result in permanent loss of data.
loom server delete
You will be prompted to confirm the server name in order to delete (default “loom-server”)
Loom Templates¶
To run an analysis on Loom, you must first have a template that defines the analysis steps and their relative arrangement (input/output dependencies, scatter-gather patterns). An analysis run may then be initiated by assigning input data to an existing template.
A Loom template is defined in a yaml or json file and then imported to the Loom server.
Examples¶
To run these examples, you will need access to a running Loom server. See Getting Started for help launching a Loom server either locally or in the cloud.
join_two_words¶
simplest example
This example illustrates the minimal set of features in a Loom template: name, command, environment (defined by a docker image), and input/output definitions.
We use the optional “data” field on the inputs to assign default values.
join_two_words.yaml:
name: join_two_words
command: echo {{word1}} {{word2}}
environment:
docker_image: ubuntu:latest
inputs:
- channel: word1
type: string
data:
contents: hello
- channel: word2
type: string
data:
contents: world
outputs:
- channel: output_text
type: string
source:
stream: stdout
The command “echo {{word1}} {{word2}}” makes use of Jinja2 notation to substitute input values. “{{word1}}” in the command will be substituted with the value provided on the “word1” input channel. For inputs of type “string”, “integer”, “boolean”, and “float”, the value substituted is a string representation of the data. For inputs of type “file”, the filename is substituted. The full set of Jinja2 features may be used, including filters, conditional statements, and loops.
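For example (a hypothetical variant of the command above, not part of the example file), a Jinja2 filter could be applied to upper-case one of the inputs:
command: echo {{ word1|upper }} {{ word2 }}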
Run the join_two_words example
loom template import join_two_words.yaml
# Run with default input data
loom run start join_two_words
# Run with custom input data
loom run start join_two_words word1=foo word2=bar
capitalize_words¶
array data, iterating over an array input
This template illustrates the concept of non-scalar data (in this case a 1-dimensional array). The default mode for inputs is “no_gather”, which means that rather than gather all the objects into an array to be processed together in a single task, Loom will iterate over the array and execute the command once for each data object, in separate tasks.
Here we capitalize each word in the array. The output from each task executed is a string, but since many tasks are executed, the output is an array of strings.
Note the use of "as_channel" on the input definition. Since our input channel is an array, we named the channel with the plural "words", but because this run executes a separate task for each element in the array, it may be confusing to refer to "{{words}}" inside the command. It improves readability to use "as_channel: word".
capitalize_words.yaml:
name: capitalize_words
command: echo -n {{word}} | awk '{print toupper($0)}'
environment:
docker_image: ubuntu:latest
inputs:
- channel: words
as_channel: word
type: string
data:
contents: [aardvark,aback,abacus,abaft]
outputs:
- channel: wordoutput
type: string
source:
stream: stdout
Run the capitalize_words example
loom template import capitalize_words.yaml
# Run with default input data
loom run start capitalize_words
# Run with custom input data
loom run start capitalize_words words=[uno,dos,tres]
join_array_of_words¶
array data, gather mode on an input
Earlier we saw how to join two words, each defined on a separate input. But what if we want to join an arbitrary number of words?
This example has a single input, whose default value is an array of words. By setting the mode of this input as “gather”, instead of iterating as in the last example we will execute a single task that receives the full list of words as an input.
In this example we merge the strings and output the result as a string.
join_array_of_words.yaml:
name: join_array_of_words
command: echo -n {{wordarray}}
environment:
docker_image: ubuntu:latest
inputs:
- channel: wordarray
type: string
mode: gather
data:
contents: [aardvark,aback,abacus,abaft]
outputs:
- channel: wordoutput
type: string
source:
stream: stdout
Run the join_array_of_words example
loom template import join_array_of_words.yaml
# Run with default input data
loom run start join_array_of_words
# Run with custom input data
loom run start join_array_of_words wordarray=[uno,dos,tres]
split_words_into_array¶
array data, scatter mode on an output, output parsers
This example is the reverse of the previous example. We begin with a scalar string of space-separated words, and split them into an array.
To generate an array output from a single task, we set the output mode to “scatter”.
We also need to instruct Loom how to split the text in stdout to an array. For this we use a parser that uses the space character as the delimiter and trims any extra whitespace characters from the words.
split_words_into_array.yaml:
name: split_words_into_array
command: echo -n {{text}}
environment:
docker_image: ubuntu:latest
inputs:
- channel: text
type: string
data:
contents: >
Lorem ipsum dolor sit amet, consectetur adipiscing
elit, sed do eiusmod tempor incididunt ut labore et dolore
magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat.
outputs:
- channel: wordlist
type: string
mode: scatter
source:
stream: stdout
parser:
type: delimited
options:
delimiter: " "
trim: True
Run the split_words_into_array example
loom template import split_words_into_array.yaml
# Run with default input data
loom run start split_words_into_array
# Run with custom input data
loom run start split_words_into_array text="one two three"
add_then_multiply¶
multistep templates, connecting inputs and outputs, custom interpreter
All the previous examples have involved just one step. Here we show how to define more than one step in a template.
Also, since we are doing math in this example, it is easier to use python than bash, so we introduce the concept of custom interpreters.
Notice how the flow of data is defined using shared channel names between inputs and outputs. On the top-level template "add_then_multiply" we define input channels "a", "b", and "c". These are used by the steps "add" ("a" and "b") and "multiply" ("c"). There is also an output from "add" called "ab_sum" that serves as an input for "multiply". Finally, the output from "multiply", called "result", is passed up to "add_then_multiply" as a top-level output.
add_then_multiply.yaml:
name: add_then_multiply
inputs:
- type: integer
channel: a
data:
contents: 3
- type: integer
channel: b
data:
contents: 5
- type: integer
channel: c
data:
contents: 7
outputs:
- type: integer
channel: result
steps:
- name: add
command: print({{ a }} + {{ b }}, end='')
environment:
docker_image: python
interpreter: python
inputs:
- type: integer
channel: a
- type: integer
channel: b
outputs:
- type: integer
channel: ab_sum
source:
stream: stdout
- name: multiply
command: print({{ c }} * {{ ab_sum }}, end='')
environment:
docker_image: python
interpreter: python
inputs:
- type: integer
channel: ab_sum
- type: integer
channel: c
outputs:
- type: integer
channel: result
source:
stream: stdout
Run the add_then_multiply example
loom template import add_then_multiply.yaml
# Run with default input data
loom run start add_then_multiply
# Run with custom input data
loom run start add_then_multiply a=1 b=2 c=3
building_blocks¶
reusing templates
Let’s look at another way to write the previous workflow. The “add” and “multiply” steps can be defined as stand-alone workflows. After they are defined, we can create a template that includes those templates as steps.
add.yaml:
name: add
command: print({{ a }} + {{ b }}, end='')
environment:
docker_image: python
interpreter: python
inputs:
- type: integer
channel: a
- type: integer
channel: b
outputs:
- type: integer
channel: ab_sum
source:
stream: stdout
multiply.yaml:
name: multiply
command: print({{ c }} * {{ ab_sum }}, end='')
environment:
docker_image: python
interpreter: python
inputs:
- type: integer
channel: ab_sum
- type: integer
channel: c
outputs:
- type: integer
channel: result
source:
stream: stdout
building_blocks.yaml:
name: building_blocks
inputs:
- type: integer
channel: a
data:
contents: 3
- type: integer
channel: b
data:
contents: 5
- type: integer
channel: c
data:
contents: 7
outputs:
- type: integer
channel: result
steps:
- add
- multiply
Run the building_blocks example
# Import the parent template along with any dependencies
loom template import building_blocks.yaml
# Run with default input data
loom run start building_blocks
# Run with custom input data
loom run start building_blocks a=1 b=2 c=3
search_file¶
file inputs
Most of these examples use non-file inputs for convenience, but files can be used as inputs and outputs much like other data types.
In this example, the “lorem_ipsum.txt” input file should be imported prior to importing the “search_file.yaml” template that references it.
lorem_ipsum.txt:
Lorem ipsum dolor sit amet, consectetur adipiscing
elit, sed do eiusmod tempor incididunt ut labore et
dolore magna aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris nisi ut aliquip
ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore
eu fugiat nulla pariatur. Excepteur sint occaecat
cupidatat non proident, sunt in culpa qui officia
deserunt mollit anim id est laborum.
search_file.yaml:
name: search_file
command: grep {{pattern}} {{file_to_search}}
environment:
docker_image: ubuntu:latest
inputs:
- channel: file_to_search
type: file
data:
contents: lorem_ipsum.txt
- channel: pattern
type: string
data:
contents: dolor
outputs:
- channel: matches
type: string
mode: scatter
source:
stream: stdout
parser:
type: delimited
options:
delimiter: "\n"
Here is an alternative text file not referenced in the template. We can override the default input file and specify beowulf.txt as the input when starting a run.
beowulf.txt:
Lo! the Spear-Danes' glory through splendid achievements
The folk-kings' former fame we have heard of,
How princes displayed then their prowess-in-battle.
Oft Scyld the Scefing from scathers in numbers
From many a people their mead-benches tore.
Since first he found him friendless and wretched,
The earl had had terror: comfort he got for it,
Waxed 'neath the welkin, world-honor gained,
Till all his neighbors o'er sea were compelled to
Bow to his bidding and bring him their tribute:
An excellent atheling! After was borne him
A son and heir, young in his dwelling,
Whom God-Father sent to solace the people.
Run the search_file example
# Import the template along with dependencies
loom template import search_file.yaml
# Run with default input data
loom run start search_file
# Run with custom input data
loom file import beowulf.txt
loom run start search_file pattern=we file_to_search=beowulf.txt\$20b8f89484673eae4f121801e1fec28c
word_combinations¶
scatter-gather, input groups, output mode gather(n)
When a template step has two inputs rather than one, iteration can be done in two ways:
- collated iteration: [a,b] + [c,d] => [a+c,b+d]
- combinatorial iteration: [a,b] + [c,d] => [a+c, a+d, b+c, b+d]
With more than two inputs, we could employ some combination of these two approaches.
“groups” provide a flexible way to define when to use collated or combinatorial iteration. Each input has an integer group ID (the default is 0). All inputs with a common group ID will be combined with collation. Between groups, combinatorial iteration is used.
In this example, we iterate over two inputs, one with an array of adjectives and one with an array of nouns. Since the inputs have different group IDs, we iterate over all possible combinations of word pairs (combinatorial).
word_combinations.yaml:
name: word_combinations
inputs:
- channel: adjectives
type: string
data:
contents: [green,purple,orange]
- channel: nouns
type: string
data:
contents: [balloon,button]
outputs:
- channel: all_word_pairs
type: file
steps:
- name: combine_words
command: echo "{{adjective}} {{noun}}" > {{word_pair_file}}
environment:
docker_image: ubuntu
inputs:
- channel: adjectives
as_channel: adjective
type: string
group: 0
- channel: nouns
as_channel: noun
type: string
group: 1
outputs:
- channel: word_pair_files
as_channel: word_pair_file
type: file
source:
filename: word_pair.txt
- name: merge_word_pairs
command: cat {{word_pair_files}} > {{all_word_pairs}}
environment:
docker_image: ubuntu
inputs:
- channel: word_pair_files
type: file
mode: gather(2)
outputs:
- channel: all_word_pairs
type: file
source:
filename: all_word_pairs.txt
You may have noticed that we gather the input “word_pair_files” with “mode: gather(2)”. This is because word_pair_files is not just an array, but an array of arrays. We wish to gather it to full depth. You may wish to modify this example to use “mode: gather” (or equivalently “mode: gather(1)”) to see how it affects the result.
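As a sketch of that suggested modification (not part of the example file), only the "merge_word_pairs" input changes; with a shallower gather, the step runs once per element of the remaining dimension and produces an array of merged files rather than a single file:
inputs:
- channel: word_pair_files
  type: file
  mode: gather    # or equivalently gather(1); gathers only the innermost dimension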
Run the word_combinations example
loom template import word_combinations.yaml
# Run with default input data
loom run start word_combinations
# Run with custom input data
loom run start word_combinations adjectives=[little,green] nouns=[men,pickles,apples]
sentence_scoring¶
nested scatter-gather
Why should we bother differentiating between “gather” and “gather(2)”? This example illustrates why, by showing how to construct a scatter-scatter-gather-gather workflow. On the first gather, we do not fully gather the results into an array, but only gather the last level of nested arrays. This lets us group data for the letters in each word while keeping data for different words separate. On the second gather, we combine the data for each word to get an overall result for the sentence.
sentence_scoring.yaml:
name: sentence_scoring
inputs:
- channel: sentence
type: string
hint: Input text to be broken into words and letters
data:
contents: I am robot
outputs:
- channel: sentence_value
type: integer
steps:
- name: split_into_words
command: echo {{ sentence }}
inputs:
- channel: sentence
type: string
outputs:
- channel: words
mode: scatter
type: string
source:
stream: stdout
parser:
type: delimited
options:
delimiter: " "
trim: true
environment:
docker_image: ubuntu
- name: split_into_letters
interpreter: python
command: print(' '.join([letter for letter in '{{ word }}']))
inputs:
- channel: words
as_channel: word
type: string
outputs:
- channel: letters
type: string
mode: scatter
source:
stream: stdout
parser:
type: delimited
options:
delimiter: " "
trim: true
environment:
docker_image: python
- name: letter_to_integer
interpreter: python
command: print(ord( '{{ letter }}' ), end='')
inputs:
- channel: letters
as_channel: letter
type: string
outputs:
- channel: letter_values
type: integer
source:
stream: stdout
environment:
docker_image: python
- name: sum_word
interpreter: python
command: print({{ letter_values|join(' + ') }}, end='')
inputs:
- channel: letter_values
type: integer
mode: gather
outputs:
- channel: word_values
type: integer
source:
stream: stdout
environment:
docker_image: python
- name: multiply_sentence
interpreter: python
command: print({{ word_values|join(' * ') }}, end='')
inputs:
- channel: word_values
type: integer
mode: gather
outputs:
- channel: sentence_value
type: integer
source:
stream: stdout
environment:
docker_image: python
Run the sentence_scoring example
loom template import sentence_scoring.yaml
# Run with default input data
loom run start sentence_scoring
# Run with custom input data
loom run start sentence_scoring sentence='To infinity and beyond'
Special functions¶
The examples above demonstrated how jinja template notation can be used to incorporate input values into commands, e.g. “echo {{input1}}”. The template context contains all input channel names as keys, but it also contains the special functions below.
If an input uses the same name as a special function, the input value takes precedence.
index¶
index[i] returns the one-based index of the current task. So if a run contains 3 parallel tasks, index[1] will return value 1, 2, or 3 for the respective tasks. If the run contains nested parallel tasks, index[i] will return the index of the task in dimension i. If i is a positive integer larger than the dimensionality of the tasks, it will return a default value of 1 (e.g. index[1], index[2], etc. all return 1 for scalar data.). If i is not a positive integer value, a validation error will result.
size¶
size[i] returns the size of the specified dimension. So if a run contains 3 parallel tasks, size[1] will return a value of 3 for all tasks. If the run contains nested parallel tasks, size[i] will return the size of dimension i. If i is a positive integer larger than the dimensionality of the tasks, it will return a value of 1 (e.g. size[1], size[2], etc. all return 1 for scalar data). If i is not a positive integer value, a validation error will result.
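As a sketch of how these functions can appear in a command (the step and channel names here are hypothetical, following the patterns of the earlier examples), a parallel step could label each task's output with its position:
name: label_words
command: echo "task {{ index[1] }} of {{ size[1] }} processing {{ word }}"
environment:
  docker_image: ubuntu
inputs:
- channel: words
  as_channel: word
  type: string
  data:
    contents: [alpha,beta,gamma]
outputs:
- channel: labels
  type: string
  source:
    stream: stdout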
Schemas¶
Template schema¶
field | required | default | type | example |
---|---|---|---|---|
name | yes | | string | 'calculate_error' |
inputs | no | [] | [Input] | [{'channel': 'input1', 'type': 'string'}] |
outputs | no | [] | [Output] | [{'channel': 'output1', 'type': 'string', 'source': {'stream': 'stdout'}}] |
command* | yes | | string | 'echo {{input1}}' |
interpreter* | no | /bin/bash -euo pipefail | string | '/usr/bin/python' |
resources* | no | null | | |
environment* | yes | | string | {'docker_image': 'ubuntu:latest'} |
steps+ | no | [] | [Template|string] | see examples in previous section |
* only on executable steps (leaf nodes)
+ only on container steps (non-leaf nodes)
Input schema¶
field | required | default | type | example |
---|---|---|---|---|
channel | yes | | string | 'sampleid' |
type | yes | | string | 'file' |
mode* | no | no_gather | string | 'gather' |
group* | no | 0 | integer | 2 |
hint | no | | string | 'Enter a quality threshold' |
data | no | null | DataNode | {'contents': [3,7,12]} |
* only on executable steps (leaf nodes)
DataNode schema¶
field | required | default | type | example |
---|---|---|---|---|
contents | yes | | | see notes below |
DataNode contents can be a valid data value of any type. They can also be a list, or nested lists of any of these types, provided all items are of the same type and at the same nested depth.
data type | valid DataNode contents examples | invalid DataNode contents examples |
---|---|---|
integer | 172 | |
float | 3.98 | |
string | 'sx392' | |
boolean | true | |
file | myfile.txt | |
file | myfile.txt$9dd4e461268c8034f5c8564e155c67a6 | |
file | $9dd4e461268c8034f5c8564e155c67a6 | |
file | myfile.txt@ef62b731-e714-4b82-b1a7-057c1032419e | |
file | myfile.txt@ef62b7 | |
file | @ef62b7 | |
integer | [2,3] | |
integer | [[2,2],[2,3,5],[17]] | |
integer | | [2,'three'] (mismatched types) |
integer | | [[2,2],[2,3,[5,17]]] (mismatched depths) |
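For instance, one of the valid nested examples above could be supplied as default data on an input; a minimal sketch (the channel name is hypothetical):
inputs:
- channel: number_groups
  type: integer
  data:
    contents: [[2,2],[2,3,5],[17]]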
Output schema¶
field | required | default | type | example |
---|---|---|---|---|
channel | yes | | string | 'sampleid' |
type | yes | | string | 'file' |
mode* | no | no_gather | string | 'gather' |
parser* | no | null | OutputParser | {'type': 'delimited', 'options': {'delimiter': ','}} |
source* | yes | | OutputSource | {'glob': '*.dat'} |
* only on executable steps (leaf nodes)
OutputParser schema¶
field | required | default | type | example |
---|---|---|---|---|
type* | yes | string | ‘delimited’ | |
options | no | ParserOptions | {‘delimiter’:’ ‘,’trim’:true} |
* Currently “delimited” is the only OutputParser type
OutputSource schema¶
field | required | default | type | example |
---|---|---|---|---|
filename* | false | string | ‘out.txt’ | |
stream* | false | string | ‘stderr’ | |
glob+ | false | string | ‘*.txt’ | |
filenames+ | false | string | [‘out1.txt’,’out2.txt’] |
* When used with outputs with “scatter” mode, an OutputParser is required
+ Only for outputs with “scatter” mode. (No parser required.) The “glob” field supports “*”, ”?”, and character ranges using “[]”.
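As a sketch of a "glob" source (the channel name and file pattern are hypothetical), an output that scatters over all .txt files produced by the command might look like:
outputs:
- channel: result_files
  type: file
  mode: scatter
  source:
    glob: "*.txt"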
Tags and Labels¶
Tags and labels are metadata that can be applied to files, templates, and runs. They can be modified or deleted without affecting the target object.
Each tag is unique and can only identify one object of a given type. Labels are not unique, so the same label can be applied to many objects.
Tags¶
Creating a tag
A tag can be added to an existing object as follows:
loom file tag add FILE_ID TAG
loom template tag add TEMPLATE_ID TAG
loom run tag add RUN_ID TAG
A tag can also be applied when each of these objects is created, as follows:
loom file import myfile.dat --tag TAG
loom template import mytemplate.yaml --tag TAG
loom run start TEMPLATE_ID [INPUT1=VALUE1 [INPUT2=VALUE2 ...]] --tag TAG
Multiple tags can be added at once by repeating the --tag flag:
loom file import myfile.dat --tag TAG1 --tag TAG2
loom template import mytemplate.yaml --tag TAG1 --tag TAG2
loom run start TEMPLATE_ID [INPUT1=VALUE1 [INPUT2=VALUE2 ...]] --tag TAG1 --tag TAG2
Viewing tags
To view existing tags on all objects of a given type, use one of these commands:
loom file tag list
loom template tag list
loom run tag list
To view existing tags on a specific object, use one of these commands:
loom file tag list FILE_ID
loom template tag list TEMPLATE_ID
loom run tag list RUN_ID
Referencing an object by its tag
Just like hashes and UUIDs, tags can be appended to a reference ID string, preceded by the ":" symbol. The tag name can also be used alone as a reference ID. These two statements should return the same file, but the first command will fail if the file tagged as "NEWDATA" does not have UUID "74f5a659-9d03-422b-b5b7-3439465a2455" or filename "myfile.dat".
loom file list myfile.dat@74f5a659-9d03-422b-b5b7-3439465a2455:NEWDATA
loom file list :NEWDATA
The same notation is used for tagged runs and templates.
Removing a tag
A tag can be removed with the following commands:
loom file tag remove FILE_ID TAG
loom template tag remove TEMPLATE_ID TAG
loom run tag remove RUN_ID TAG
Since the tag itself can be used as the reference ID, this command would be one valid way to remove a tag:
loom file tag remove :TAG TAG
Labels¶
Creating a label
A label can be added to an existing object as follows:
loom file label add FILE_ID LABEL
loom template label add TEMPLATE_ID LABEL
loom run label add RUN_ID LABEL
A label can also be applied when each of these objects is created, as follows:
loom file import myfile.dat --label LABEL
loom template import mytemplate.yaml --label LABEL
loom run start TEMPLATE_ID [INPUT1=VALUE1 [INPUT2=VALUE2 ...]] --label LABEL
Multiple labels can be added at once by repeating the --label flag:
loom file import myfile.dat --label LABEL1 --label LABEL2
loom template import mytemplate.yaml --label LABEL1 --label LABEL2
loom run start TEMPLATE_ID [INPUT1=VALUE1 [INPUT2=VALUE2 ...]] --label LABEL1 --label LABEL2
Viewing labels
To view existing labels on all objects of a given type, use one of these commands:
loom file label list
loom template label list
loom run label list
To view existing labels on a specific object, use one of these commands:
loom file label list FILE_ID
loom template label list TEMPLATE_ID
loom run label list RUN_ID
Listing objects by label
Unlike tags, labels cannot be used in reference ID strings since they are not unique. The --label flag can be used with a list statement to show all objects of the specified type with a given label:
loom file list --label LABEL
loom template list --label LABEL
loom run list --label LABEL
If a reference ID is given along with the --label flag, the object will be shown only if it matches the given label:
loom file list --label LABEL FILE_ID
loom template list --label LABEL TEMPLATE_ID
loom run list --label LABEL RUN_ID
Multiple --label flags are allowed. Only objects that match ALL specified labels will be shown:
loom file list --label LABEL1 --label LABEL2
loom template list --label LABEL1 --label LABEL2
loom run list --label LABEL1 --label LABEL2
Removing a label
A label can be removed with the following commands:
loom file label remove FILE_ID LABEL
loom template label remove TEMPLATE_ID LABEL
loom run label remove RUN_ID LABEL
Server Settings¶
Passing settings to the client¶
If you start a Loom server without specifying any settings, Loom uses appropriate defaults for a local Loom server.
loom server start
Non-default settings are assigned to a new server in one of three ways: using the --settings-file/-s flag to specify one or more config files:
loom server start --settings-file mysettings.conf
or individually, using the --extra-settings/-e flag:
loom server start --extra-settings LOOM_SERVER_NAME=MyServerName
or through environment variables that use the LOOM_ prefix:
export LOOM_SERVER_NAME=MyServerName
loom server start
Multiple --settings-file flags are allowed, with the last file given highest priority for any conflicts. Multiple --extra-settings flags are also allowed. If different sources of settings are used together, the order of precedence, from highest to lowest, is:
- environment variable
- --extra-settings
- --settings-file
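For illustration, these sources can be combined; per the precedence above, the environment variable would win any conflict. The file names and values here are hypothetical:
export LOOM_LOG_LEVEL=DEBUG
loom server start --settings-file base.conf --settings-file overrides.conf --extra-settings LOOM_SERVER_NAME=my-loom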
Settings file format¶
Loom settings files are formatted like INI files, but without section headers: a flat list of key-value pairs, separated by ":" or "=" and optional spaces. Comments are denoted with "#", and blank lines are allowed. For example:
demosettings.conf:
LOOM_SERVER_NAME: loom-demo
# Google cloud settings
LOOM_MODE: gcloud
LOOM_GCLOUD_SERVER_INSTANCE_TYPE: n1-standard-4
LOOM_GCLOUD_WORKER_BOOT_DISK_SIZE_GB: 50
Resources¶
Some settings give the name of a file resource used by Loom. For example, LOOM_SSL_CERT_KEY_FILE and LOOM_SSL_CERT_FILE refer to the SSL key and certificate, and LOOM_GCE_PEM_FILE refers to the service account key file for Google Cloud Platform.
Rather than hard-code the path to these files, you must place them in one directory and make the setting indicate the file name or relative path within that directory. The directory is specified with the --resource-dir flag on "loom server start".
ssl-settings.conf file contents:
LOOM_SSL_CERT_KEY_FILE: loom.key
LOOM_SSL_CERT_FILE: loom.crt
resources/ directory contents:
resources/loom.key
resources/loom.crt
Use this command to start the server with the necessary files:
loom server start --resource-dir ./resources --settings-file ssl-settings.conf
On the client machine, Loom will cache a copy of these files in ~/.loom (or the value of the environment variable LOOM_SETTINGS_HOME), and it will create an additional copy on the server if it is remote, so the original copy does not need to be retained for the server to work.
Required settings to launch on Google Cloud¶
To launch a server on Google Cloud Platform, at a minimum you will need these required settings:
LOOM_MODE: gcloud
LOOM_GCE_PEM_FILE: your-gcp-key.json
LOOM_GCE_EMAIL: service-account-id@your-gcp-project.iam.gserviceaccount.com
LOOM_GCE_PROJECT: your-gcp-project
LOOM_GOOGLE_STORAGE_BUCKET: your-gcp-bucket
Updating settings¶
Not all settings can be changed safely. Changing LOOM_MYSQL_DATABASE, for example, will leave Loom unable to connect to the database.
However, sometimes it is necessary to modify settings, and many of them, such as LOOM_DEBUG, can be safely changed. You may want to test changes in a staging server before working with production data.
On the client machine:
- Edit setting(s) in ~/.loom/server-settings.conf
- Edit any resource files in ~/.loom/resources/
- Wait for any runs in progress to complete.
- loom server stop
- loom server start
This will overwrite settings and resource files on the server with any changes you have made.
Modes¶
The LOOM_MODE setting is used to toggle between deployment modes. Currently Loom has two modes: "local" and "gcloud". The default mode is "local".
Custom playbooks¶
The primary difference between modes is that they correspond to different sets of playbooks. You can find these in the Loom source code under “loomengine/client/playbooks/”. Loom requires at least these five playbooks to be defined for any mode:
- {mode}_cleanup_task_attempt.yml
- {mode}_delete_server.yml
- {mode}_run_task_attempt.yml
- {mode}_start_server.yml
- {mode}_stop_server.yml
So for example, in the playbooks directory you will see a “gcloud_stop_server.yml” and a “local_stop_server.yml”.
Loom allows you to use a custom set of playbooks to control how Loom is deployed. To do this, first create a copy of the “loomengine/client/playbooks” directory. Use the “local_*.yml” or “gcloud_*.yml” playbooks as a starting point. You may wish to change the prefix, but make sure that when you launch a new server, the LOOM_MODE setting matches the playbook prefix that you choose.
To pass the custom playbooks directory to Loom when starting a new server, use the --playbook-dir flag:
loom server start --settings-file my-custom-settings.conf --playbook-dir ./my-custom-playbooks
Loom settings are passed to the playbooks as environment variables. You are welcome to use your own settings for custom playbooks, but you may have to disable settings validation with “LOOM_SKIP_SETTINGS_VALIDATION=true”.
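A minimal sketch of that workflow, assuming a custom mode prefix "mymode" (all paths and names here are hypothetical):
# Copy the stock playbooks and rename them with a custom prefix,
# e.g. local_start_server.yml -> mymode_start_server.yml
cp -r loomengine/client/playbooks ./my-custom-playbooks
# Start the server with a matching LOOM_MODE and the custom playbook directory
loom server start --extra-settings LOOM_MODE=mymode \
    --extra-settings LOOM_SKIP_SETTINGS_VALIDATION=true \
    --playbook-dir ./my-custom-playbooks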
Index of settings¶
Each setting below is listed with its default value, its valid values (where applicable), and explanatory notes.
Settings for all modes¶
LOOM_SERVER_NAME¶
default | loom-server |
valid values | String that begins with alpha, ends with alphanumeric, and contains only alphanumeric or -. Max length of 63. |
LOOM_SERVER_NAME determines how several components of the Loom server are named. For example, the Docker container hosting the Loom server web application is named {{LOOM_SERVER_NAME}}-master, and the instance hosting the server in gcloud mode is named {{LOOM_SERVER_NAME}}.
LOOM_MODE¶
default | local |
valid values | local|gcloud|{custom} |
LOOM_MODE selects between different sets of playbooks. It also changes some default settings and the rules for settings validation. Supported modes are “local” and “gcloud”. You may also develop custom playbooks that are compatible with another mode.
LOOM_DEBUG¶
default | false |
valid values | true|false |
When true, it activates several debugging tools and verbose server errors. Primarily for development use.
LOOM_LOG_LEVEL¶
default | INFO |
valid values | CRITICAL|ERROR|WARNING|INFO|DEBUG |
LOOM_DEFAULT_DOCKER_REGISTRY¶
default | none |
LOOM_DEFAULT_DOCKER_REGISTRY applies to the LOOM_DOCKER_IMAGE and “docker_image” values in templates. Anywhere that a repo is given with no specific registry, LOOM_DEFAULT_DOCKER_REGISTRY will be assumed.
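For example (registry and repo names here are hypothetical), with the setting below a template value of "docker_image: my-project/my-tool" would be pulled as "gcr.io/my-project/my-tool", while an image that already names a registry is used as written:
LOOM_DEFAULT_DOCKER_REGISTRY: gcr.io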
LOOM_STORAGE_TYPE¶
default | local |
valid values | local|google_storage |
Sets the type of persistent file storage. Usually google_storage would only be used with gcloud mode, but Loom does not impose this restriction. This may be useful for testing or for a custom deployment mode.
LOOM_STORAGE_ROOT¶
default (local) | ~/loomdata |
default (gcloud) | /loomdata |
valid values | absolute file path |
LOOM_GOOGLE_STORAGE_BUCKET¶
default | None. Setting is required if LOOM_STORAGE_TYPE==google_storage |
valid values | Valid Google Storage bucket name. |
Loom will attempt to create the bucket if it does not exist.
LOOM_ANSIBLE_INVENTORY¶
default (local) | localhost, |
default (gcloud) | gce_inventory_wrapper.py |
valid values | Comma-delimited list of hosts, or executable filename |
Accepts either a comma-separated list of host inventory (e.g. “localhost,” – the comma is required) or a dynamic inventory executable. The executable must be in the playbooks directory.
LOOM_ANSIBLE_HOST_KEY_CHECKING¶
default | false |
valid values | true|false |
Leaving LOOM_ANSIBLE_HOST_KEY_CHECKING as false will ignore warnings about invalid host keys. These errors are common on Google Cloud Platform where IP addresses are frequently reused, causing conflicts with known_hosts.
LOOM_HTTP_PORT¶
default | 80 |
valid values | 1–65535 |
LOOM_HTTPS_PORT¶
default | 443 |
valid values | 1–65535 |
LOOM_HTTP_PORT_ENABLED¶
default | true |
valid values | true|false |
LOOM_HTTPS_PORT_ENABLED¶
default | false |
valid values | true|false |
LOOM_HTTP_REDIRECT_TO_HTTPS¶
default | false |
valid values | true|false |
If true, NGINX will redirect requests on LOOM_HTTP_PORT to LOOM_HTTPS_PORT.
LOOM_SSL_CERT_KEY_FILE¶
default | {{LOOM_SERVER_NAME}}+’-ssl-cert-key.pem’ |
LOOM_SSL_CERT_FILE¶
default | {{LOOM_SERVER_NAME}}+’-ssl-cert.pem’ |
LOOM_SSL_CERT_CREATE_NEW¶
default | false |
valid values | true|false |
If true, Loom will create a self-signed certificate and key. If LOOM_SSL_CERT_CREATE_NEW==false and LOOM_HTTPS_PORT_ENABLED==true, user must provide certificate and key in the resources directory and set LOOM_SSL_CERT_KEY_FILE and LOOM_SSL_CERT_FILE to the correct filenames.
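For example, here is a sketch of HTTPS-related settings using a user-provided certificate; the file names are hypothetical and must exist in the resources directory:
LOOM_HTTPS_PORT_ENABLED: true
LOOM_HTTP_REDIRECT_TO_HTTPS: true
LOOM_SSL_CERT_CREATE_NEW: false
LOOM_SSL_CERT_KEY_FILE: loom.key
LOOM_SSL_CERT_FILE: loom.crt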
LOOM_SSL_CERT_C¶
default | US |
Used in the subject field of the self-signed SSL certificate if LOOM_SSL_CERT_CREATE_NEW==true.
LOOM_SSL_CERT_ST¶
default | California |
Used in the subject field of the self-signed SSL certificate if LOOM_SSL_CERT_CREATE_NEW==true.
LOOM_SSL_CERT_L¶
default | Palo Alto |
Used in the subject field of the self-signed SSL certificate if LOOM_SSL_CERT_CREATE_NEW==true.
LOOM_SSL_CERT_O¶
default | Stanford University |
Used in the subject field of the self-signed SSL certificate if LOOM_SSL_CERT_CREATE_NEW==true.
LOOM_SSL_CERT_CN¶
default | {{ansible_hostname}} |
Used in the subject field of the self-signed SSL certificate if LOOM_SSL_CERT_CREATE_NEW==true.
LOOM_SERVER_ALLOWED_HOSTS¶
default | [*] |
List of hosts from which Loom will accept a connection. Corresponds to the django ALLOWED_HOSTS setting.
LOOM_SERVER_CORS_ORIGIN_ALLOW_ALL¶
default | false |
Whitelist all hosts for cross-origin resource sharing. Corresponds to the django CORS_ORIGIN_ALLOW_ALL setting.
LOOM_SERVER_CORS_ORIGIN_WHITELIST¶
default | [] |
Hosts to be whitelisted for cross-origin resource sharing. Corresponds to the django CORS_ORIGIN_WHITELIST setting.
LOOM_TASKRUNNER_HEARTBEAT_INTERVAL_SECONDS¶
default | 60 |
Frequency of heartbeats sent by the TaskAttempt monitor process to the Loom server.
LOOM_TASKRUNNER_HEARTBEAT_TIMEOUT_SECONDS¶
default | 300 |
Kill any TaskAttempt that has not sent a heartbeat in this time.
LOOM_PRESERVE_ON_FAILURE¶
default | false |
valid values | true|false |
Do not clean up instance or containers for any failed TaskAttempts. May be useful for debugging.
LOOM_PRESERVE_ALL¶
default | false |
valid values | true|false |
Do not clean up instance or containers for any TaskAttempts. May be useful for debugging.
LOOM_SERVER_GUNICORN_WORKERS_COUNT¶
default | 10 |
LOOM_WORKER_CELERY_CONCURRENCY¶
default | 30 |
LOOM_MYSQL_CREATE_DOCKER_CONTAINER¶
default | true |
Create a new Docker container to host the Loom database instead of connecting to an external database.
LOOM_MYSQL_HOST¶
default | {{mysql_container_name}} if LOOM_MYSQL_CREATE_DOCKER_CONTAINER==true; otherwise no default. |
MySQL server connection settings.
LOOM_MYSQL_IMAGE¶
default | mysql:5.7.17 |
Docker image used to create MySQL container if LOOM_MYSQL_CREATE_DOCKER_CONTAINER==true.
LOOM_MYSQL_RANDOM_ROOT_PASSWORD¶
default | true |
Create a random root password when initializing database if LOOM_MYSQL_CREATE_DOCKER_CONTAINER==true.
LOOM_MYSQL_SSL_CA_CERT_FILE¶
default | none |
If needed, certificate files for MySQL database connection should be provided through the resources directory.
LOOM_MYSQL_SSL_CLIENT_CERT_FILE¶
default | none |
If needed, certificate files for MySQL database connection should be provided through the resources directory.
LOOM_MYSQL_SSL_CLIENT_KEY_FILE¶
default | none |
If needed, certificate files for MySQL database connection should be provided through the resources directory.
LOOM_RABBITMQ_USER¶
default | guest |
LOOM_RABBITMQ_PASSWORD¶
default | guest |
LOOM_RABBITMQ_PORT¶
default | 5672 |
LOOM_RABBITMQ_VHOST¶
default | / |
LOOM_NGINX_SERVER_NAME¶
default | localhost |
Value for “server_name” field in NGINX configuration file.
LOOM_FLUENTD_IMAGE¶
default | loomengine/fluentd-forest-googlecloud |
Docker image used to create the fluentd container. The default repo includes fluentd with the forest and google-cloud plugins installed.
LOOM_FLUENTD_PORT¶
default | 24224 |
LOOM_FLUENTD_OUTPUTS¶
default | elasticsearch,file |
valid values | comma-separated list containing any of: elasticsearch, file, gcloud_cloud |
LOOM_ELASTICSEARCH_IMAGE¶
default | docker.elastic.co/elasticsearch/elasticsearch:5.3.2 |
Docker image used for elasticsearch container.
LOOM_ELASTICSEARCH_PORT¶
default | 9200 |
LOOM_ELASTICSEARCH_JAVA_OPTS¶
default | -Xms512m -Xmx512m |
LOOM_KIBANA_VERSION¶
default | 5.3.2 |
LOOM_KIBANA_IMAGE¶
default | docker.elastic.co/kibana/kibana:{{LOOM_KIBANA_VERSION}} |
Docker image to create Kibana container.
LOOM_KIBANA_PORT¶
default | 5601 |
LOOM_FLOWER_INTERNAL_PORT¶
default | 5555 |
LOOM_NOTIFICATION_ADDRESSES¶
default | [] |
Email addresses or http/https URLs to report to whenever a run reaches terminal status. Requires email configuration.
LOOM_NOTIFICATION_HTTPS_VERIFY_CERTIFICATE¶
default | true |
valid values | true|false |
When one or more notification addresses is an https URL, LOOM_NOTIFICATION_HTTPS_VERIFY_CERTIFICATE determines whether to validate SSL certificates. You may wish to set this to false when using self-signed certificates.
Settings for gcloud mode¶
LOOM_GCE_PEM_FILE¶
default | none |
valid values | filename |
This should be a JSON file with your Google Cloud Project service account key. File must be provided to Loom through the resources directory.
LOOM_GCE_PROJECT¶
default | none |
valid values | valid GCE project name |
LOOM_GCE_EMAIL¶
default | none |
valid values | valid GCE email identifier associated with the service account in LOOM_GCE_PEM_FILE |
LOOM_SSH_PRIVATE_KEY_NAME¶
default | loom_id_rsa |
valid values | valid filename string. Will create files in ~/.ssh/{{LOOM_SSH_PRIVATE_KEY_NAME}} and ~/.ssh/{{LOOM_SSH_PRIVATE_KEY_NAME}}.pub |
LOOM_GCLOUD_SERVER_BOOT_DISK_TYPE¶
default | pd-standard |
valid values | valid GCP disk type |
LOOM_GCLOUD_SERVER_BOOT_DISK_SIZE_GB¶
default | 10 |
valid values | float value in GB |
LOOM_GCLOUD_SERVER_INSTANCE_IMAGE¶
default | centos-7 |
valid values | valid GCP image |
LOOM_GCLOUD_SERVER_INSTANCE_TYPE¶
default | none |
valid values | valid GCP instance type |
LOOM_GCLOUD_SERVER_NETWORK¶
default | none |
valid values | valid GCP network name |
LOOM_GCLOUD_SERVER_SUBNETWORK¶
default | none |
valid values | valid GCP subnetwork name |
LOOM_GCLOUD_SERVER_ZONE¶
default | us-central1-c |
valid values | valid GCP zone |
LOOM_GCLOUD_SERVER_SKIP_INSTALLS¶
default | false |
valid values | true|false |
If LOOM_GCLOUD_SERVER_SKIP_INSTALLS==true, when bringing up a server Loom will use the LOOM_GCLOUD_SERVER_INSTANCE_IMAGE directly without installing system packages. Before using this setting, you will need to create the base image. One way to do this is to start a Loom server with the defaults and then create an image from its disk.
Usually the same image can be used for both LOOM_GCLOUD_WORKER_INSTANCE_IMAGE and LOOM_GCLOUD_SERVER_INSTANCE_IMAGE.
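One possible way to capture such an image with the Google Cloud SDK (the disk, zone, and image names below are hypothetical; stop the server first so the disk is quiescent):
gcloud compute images create loom-base-image \
    --source-disk loom-server \
    --source-disk-zone us-central1-c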
LOOM_GCLOUD_SERVER_EXTERNAL_IP¶
default | ephemeral |
valid values | none|ephemeral|desired IP |
LOOM_GCLOUD_SERVER_TAGS¶
default | none |
valid values | comma-separated list of network tags |
LOOM_GCLOUD_WORKER_BOOT_DISK_TYPE¶
default | pd-standard |
valid values | Valid GCP disk type |
LOOM_GCLOUD_WORKER_BOOT_DISK_SIZE_GB¶
default | 10 |
valid values | float value in GB |
LOOM_GCLOUD_WORKER_SCRATCH_DISK_TYPE¶
default | |
valid values |
LOOM_GCLOUD_WORKER_SCRATCH_DISK_MIN_SIZE_GB¶
default | |
valid values |
LOOM_GCLOUD_WORKER_INSTANCE_IMAGE¶
default | centos-7 |
valid values | valid GCP image |
LOOM_GCLOUD_WORKER_INSTANCE_TYPE¶
default | none |
valid values | valid GCP instance type |
LOOM_GCLOUD_WORKER_NETWORK¶
default | none |
valid values | valid GCP network name |
LOOM_GCLOUD_WORKER_SUBNETWORK¶
default | none |
valid values | valid GCP subnetwork name |
LOOM_GCLOUD_WORKER_ZONE¶
default | us-central1-c |
valid values | valid GCP zone |
LOOM_GCLOUD_WORKER_SKIP_INSTALLS¶
default | false |
valid values | true|false |
If LOOM_GCLOUD_WORKER_SKIP_INSTALLS==true, when bringing up a worker Loom will use the LOOM_GCLOUD_WORKER_INSTANCE_IMAGE directly without installing system packages. Before using this setting, you will need to create the base image. One way to do this is to start a Loom server with the defaults and then create an image from its disk.
Usually the same image can be used for both LOOM_GCLOUD_WORKER_INSTANCE_IMAGE and LOOM_GCLOUD_SERVER_INSTANCE_IMAGE.
LOOM_GCLOUD_WORKER_EXTERNAL_IP¶
default | ephemeral |
valid values | none|ephemeral |
Note that using a reserved IP is not allowed, since multiple workers will be started. To restrict IP range, use a subnetwork instead.
LOOM_GCLOUD_WORKER_TAGS¶
default | none |
valid values | comma-separated list of network tags |
LOOM_GCLOUD_WORKER_USES_SERVER_INTERNAL_IP¶
default | false |
valid values | true|false |
If true, worker makes http/https connections to server using private IP.
LOOM_GCLOUD_CLIENT_USES_SERVER_INTERNAL_IP¶
default | false |
valid values | true|false |
If true, client makes ssh/http/https connections to server using private IP.
LOOM_GCLOUD_SERVER_USES_WORKER_INTERNAL_IP¶
default | false |
valid values | true|false |
If true, server makes ssh connections to worker using private IP.