Return a gid, given a group value.
If the group value is unknown, a ValueError is raised.
Return a uid, given a user value. If the value is an integer, make sure it's an existing uid.
If the user value is unknown, a ValueError is raised.
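The behavior described above can be sketched with the standard pwd and grp modules. This is a minimal re-implementation for illustration (the real helpers live in gaffer.util and may differ in detail):

```python
import grp
import pwd

def to_uid(value):
    # Accept an existing numeric uid, or resolve a user name to its uid;
    # raise ValueError when the user is unknown.
    if isinstance(value, int):
        try:
            pwd.getpwuid(value)
        except KeyError:
            raise ValueError("%r isn't a valid uid" % value)
        return value
    try:
        return pwd.getpwnam(value).pw_uid
    except KeyError:
        raise ValueError("%r isn't a valid user" % value)

def to_gid(value):
    # Same logic for groups: check a numeric gid or resolve a group name.
    if isinstance(value, int):
        try:
            grp.getgrgid(value)
        except KeyError:
            raise ValueError("%r isn't a valid gid" % value)
        return value
    try:
        return grp.getgrnam(value).gr_gid
    except KeyError:
        raise ValueError("%r isn't a valid group" % value)
```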
The manager module is a core component of gaffer. A Manager is responsible for maintaining processes and allows you to interact with them.
Bases: object
Manager - keep processes alive
A manager is responsible for keeping processes alive and managing actions on them:
The design is pretty simple. The manager runs on the default event loop and listens for events. Events are sent when a process exits or from any method call. The control of a manager can be extended by adding apps on startup. For example gaffer provides an application allowing you to control processes via HTTP.
Running an application is done like this:
# initialize the application with the default loop
loop = pyuv.Loop.default_loop()
m = Manager(loop=loop)
# start the application
m.start(apps=[HttpHandler])
... # do something
m.stop() # stop the controller
m.run() # run the event loop
Note
The loop can be omitted if the first thing you do is launch a manager. The run function is here for convenience. You can of course just run loop.run() instead.
Warning
The manager should be stopped last to prevent any lock in your application.
Add a process to the manager. All processes should be added using this function.
Return the process status:
{
"active": str,
"running": int,
"max_processes": int
}
subscribe to the manager event eventype
‘on’ is an alias to this function
Convenience function to use in place of loop.run(). If the manager is not started it raises a RuntimeError.
Note: if you want to use the default loop for this thread separately, just use the start function and run the loop somewhere else.
Send a signal to a process or all processes contained in a state.
Stop a process by name or id.
If a name is given, all processes associated with this name will be removed and the process is marked as stopped. If the internal process id is given, only the process with this id will be stopped.
subscribe to the manager event eventype
‘on’ is an alias to this function
subscribe once to the manager event eventype
‘once’ is an alias to this function
Increase the number of system processes for a state. The change is handled once the event loop is idling.
Decrease the number of system processes for a state. The change is handled once the event loop is idling.
Webhooks allow you to register a URL for a specific event (or all events); when the event occurs, it is posted to this URL. Each event can trigger a POST to a given URL.
For example, to listen for all create events on http://echohttp.com/echo you can add this line in the webhooks section of the gaffer settings file:
[webhooks]
create = http://echohttp.com/echo
Or programmatically:
from gaffer.manager import Manager
from gaffer.webhooks import WebHooks
hooks = [("create", "http://echohttp.com/echo")]
webhooks = WebHooks(hooks=hooks)
manager = Manager()
manager.start(apps=[webhooks])
This gaffer application is started like other applications in the manager. All Gaffer events are supported.
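The dispatch logic behind webhooks can be sketched in a few lines: given the registered hooks, an incoming event selects the URLs to POST to. The function name and the "." wildcard pattern here are illustrative, not the actual WebHooks internals:

```python
def urls_for_event(hooks, event):
    # hooks is a list of (pattern, url) pairs; a pattern matches the
    # exact event name, and "." stands for "all events".
    return [url for pattern, url in hooks
            if pattern == event or pattern == "."]

hooks = [("create", "http://echohttp.com/echo"),
         (".", "http://example.com/all-events")]

urls_for_event(hooks, "create")
# the exact match and the wildcard both fire
```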
An HTTP API provided by the gaffer.http_handler.HttpHandler gaffer application can be used to control gaffer via HTTP. To embed it in your app, just initialize your manager with it:
manager = Manager(apps=[HttpHandler()])
The HttpHandler can be configured to accept multiple endpoints and can be extended with new HTTP handlers. Internally we are using Tornado, so you can extend it either with rules using pure Tornado handlers or with WSGI apps.
Gaffer supports the GET, POST, PUT, DELETE and OPTIONS HTTP verbs.
All messages (except some streams) are JSON encoded. All messages sent to gaffer should be JSON encoded.
Gaffer supports cross-origin resource sharing (aka CORS).
The main HTTP endpoints are described in the description of the gafferctl commands in Gafferctl:
Gafferctl makes extensive use of this HTTP API.
The output streams can be fetched by doing:
GET /streams/<pid>/<nameoffeed>
It accepts the feed query parameter, whose value can be continuous, longpoll or eventsource, as in the following examples:
$ curl localhost:5000/streams/1/stderr?feed=continuous
STDERR 12
STDERR 13
STDERR 14
STDERR 15
STDERR 16
STDERR 17
STDERR 18
STDERR 19
STDERR 20
STDERR 21
STDERR 22
STDERR 23
STDERR 24
STDERR 25
STDERR 26
STDERR 27
STDERR 28
STDERR 29
STDERR 30
STDERR 31
$ curl localhost:5000/streams/1/stderr?feed=longpoll
STDERR 215
$ curl localhost:5000/streams/1/stderr?feed=eventsource
event: stderr
data: STDERR 20
event: stderr
data: STDERR 21
event: stderr
data: STDERR 22
$ curl localhost:5000/streams/1/stdout?feed=longpoll
STDOUT 14
It is now possible to write to stdin via the HTTP API by sending:
POST to /streams/<pid>/stdin
Where <pid> is an internal process id that you can retrieve by calling GET /processes/<name>/_pids
ex:
$ curl -XPOST -d $'ECHO\n' localhost:5000/streams/2/stdin
{"ok": true}
$ curl localhost:5000/streams/2/stdout?feed=longpoll
ECHO
It is now possible to get stdin/stdout via a websocket. Writing to ws://HOST:PORT/wstreams/<pid> will send the data to stdin; any information written on stdout will then be sent back to the websocket.
See the echo client/server example in the example folder:
$ python echo_client.py
Sent
Receiving...
Received 'ECHO
'
Note
Unfortunately the echo_client script can only be launched with Python 2.7 :/
Note
To redirect stderr to stdout, just use the same name when setting the redirect_output property on process creation.
Gafferd is a server able to launch and manage processes. It can be controlled via the HTTP API.
$ gafferd -h
usage: gafferd [-h] [-c CONFIG_FILE] [-p PLUGINS_DIR] [-v] [-vv] [--daemon]
[--pidfile PIDFILE] [--bind BIND] [--certfile CERTFILE]
[--keyfile KEYFILE] [--backlog BACKLOG]
[config]
Run some watchers.
positional arguments:
config configuration file
optional arguments:
-h, --help show this help message and exit
-c CONFIG_FILE, --config CONFIG_FILE
configuration file
-p PLUGINS_DIR, --plugins-dir PLUGINS_DIR
default plugin dir
-v verbose mode
-vv like verbose mode but output stream too
--daemon Start gaffer in the background
--pidfile PIDFILE
--bind BIND default HTTP binding
--certfile CERTFILE SSL certificate file for the default binding
--keyfile KEYFILE SSL key file for the default binding
--backlog BACKLOG default backlog
[gaffer]
http_endpoints = public
[endpoint:public]
bind = 127.0.0.1:5000
;certfile=
;keyfile=
[webhooks]
;create = http://some/url
;proc.dummy.spawn = http://some/otherurl
[process:dummy]
cmd = ./dummy.py
;cwd = .
;uid =
;gid =
;detach = false
;shell = false
; flapping format: attempts=2, window=1., retry_in=7., max_retry=5
;flapping = 2, 1., 7., 5
numprocesses = 1
redirect_output = stdout, stderr
; redirect_input = true
; graceful_timeout = 30
[process:echo]
cmd = ./echo.py
numprocesses = 1
redirect_output = stdout, stderr
redirect_input = true
Plugins are a way to enhance the basic gafferd functionality in a custom manner. Plugins allow you to load any gaffer application and site plugins. You can for example use the plugin system to add a simple UI to administrate gaffer using the HTTP interface.
A plugin has the following structure:
/pluginname
_site/
plugin/
__init__.py
...
***.py
A plugin can be discovered by adding one or more modules that expose a class inheriting from gaffer.Plugin. Every plugin file should have an __all__ attribute containing the implemented plugin class. Ex:
from gaffer import Plugin

__all__ = ['DummyPlugin']

from .app import DummyApp

class DummyPlugin(Plugin):
    name = "dummy"
    version = "1.0"
    description = "test"

    def app(self, cfg):
        return DummyApp()
The dummy app here only prints some info when started or stopped:

class DummyApp(object):

    def start(self, loop, manager):
        print("start dummy app")

    def stop(self):
        print("stop dummy")

    def restart(self):
        print("restart dummy")
See the Overview for more information. You can try it in the example folder:
$ cd examples
$ gafferd -c gaffer.ini -p plugins/
Installing plugins can be done by placing the plugin in the plugin folder. The plugin folder is either set in the setting file using the plugin_dir in the gaffer section or using the -p option of the command line.
The default plugin dir is set to ~/.gafferd/plugins .
Plugins can have "sites" in them: if a plugin under the plugins directory contains a _site directory, its content will be served statically when hitting the /_plugin/[plugin_name]/ URL. Sites can be added even after the process has started.
Installed plugins that do not contain any Python-related content will automatically be detected as site plugins, and their content will be moved under _site.
If you rely on some plugins, you can define mandatory plugins using the mandatory attribute of the plugin class, for example:
class DummyPlugin(Plugin):
...
mandatory = ['somedep']
Bases: object
Bases: object
Simple gaffer application that gives HTTP API access to gaffer.
This application can listen on multiple endpoints (tcp or unix sockets) with different options. Each endpoint can also listen on different interfaces.
Gaffer is a process management framework but also a set of command line tools allowing you to manage processes on your machine or a cluster. All the command line tools are obviously using the framework.
gaffer is an interface to the gaffer HTTP API and includes support for loading/unloading apps, scaling them up and down, etc. It can also be used as a manager for Procfile-based applications similar to foreman, but using the gaffer framework. It runs your application directly using a Procfile, or exports it to a gafferd configuration file or simply to a JSON file that you can send to gafferd using the HTTP API.
Gafferd is a server able to launch and manage processes. It can be controlled via the HTTP api. It is controlled by gafferctl and can be used to handle many processes.
The tool Gafferctl allows you to control a local or remote gafferd node via the HTTP API. You can show process information, add new processes, change their configuration, and get changes on the nodes in real time.
Application deployment, monitoring and supervision made simple.
Gaffer is a set of Python modules and tools to easily maintain and interact with your applications.
Framework to manage and interact with your processes
Fully evented. Uses the libuv event loop via the pyuv library
Server and command line tools to manage your processes
Procfile applications support (see Gaffer)
HTTP API (multiple bindings, unix sockets & HTTPS supported)
Flapping: handle cases where your processes crash too much
- Possibility to interact with STDIO:
- websocket stream to write to stdin and receive from stdout (multiple clients can read and write at the same time)
- subscribe to stdout/stderr feeds via longpolling, continuous stream, eventsource or websockets
- write your own client/server using the framework
Subscribe to process statistics per process or process template and get them in near real time.
Easily extensible: add your own endpoint, create your client, embed gaffer in your application, etc.
Compatible with Python 2.6.x, 2.7.x and 3.x
Note
gaffer source code is hosted on Github
Gaffer provides you a simple Client to control a gaffer node via HTTP.
Example of usage:
import pyuv
from gaffer.httpclient import Server
# initialize a loop
loop = pyuv.Loop.default_loop()
s = Server("http://localhost:5000", loop=loop)
# add a process without starting it
process = s.add_process("dummy", "/some/path/to/dummy/script", start=False)
# start a process
process.start()
# increase the number of process by 2 (so 3 will run)
process.add(2)
# stop all processes
process.stop()
loop.run()
Bases: object
Simple client to fetch Gaffer streams using the eventsource stream.
Example of usage:
loop = pyuv.Loop.default_loop()
def cb(event, data):
print(data)
# create a client
url = 'http://localhost:5000/streams/1/stderr?feed=eventsource'
client = EventSourceClient(loop, url)
# subscribe to the stderr event
client.subscribe("stderr", cb)
# start the client
client.start()
Bases: exceptions.Exception
exception raised on HTTP 409
Bases: exceptions.Exception
exception raised on HTTP 404
Bases: object
A blocking HTTP client.
This interface is provided for convenience and testing; most applications that are running an IOLoop will want to use AsyncHTTPClient instead. Typical usage looks like this:
http_client = httpclient.HTTPClient()
try:
response = http_client.fetch("http://www.friendpaste.com/")
print(response.body)
except httpclient.HTTPError as e:
print("Error: %s" % e)
Executes a request, returning an HTTPResponse.
The request may be either a string URL or an HTTPRequest object. If it is a string, we construct an HTTPRequest using any additional kwargs: HTTPRequest(request, **kwargs)
If an error occurs during the fetch, we raise an HTTPError.
Bases: object
Process object. Represents a remote process state.
Bases: object
Process Id object. It represents a pid.
Bases: object
Server, the main object used to connect to a gaffer node. Most of the calls are blocking (but run inside the loop).
Add a process. Use the same arguments as in save_process.
If a process with the same name is already registered, a GafferConflict exception is raised.
Get a process by name or id.
If an id is given, a ProcessId instance is returned; in other cases a Process instance is returned.
save a process.
Args:
If _force_update=True is passed, the existing process template will be overwritten.
Bases: gaffer.httpclient.EventsourceClient
Simple EventsourceClient wrapper that decodes the JSON to a Python object.
Module to parse and manage a Procfile.
Bases: object
Procfile object to parse a procfile and a list of given environment files.
Return a ConfigParser object. It can be used to generate a gafferd settings file or a configuration file that can be included.
This tutorial exposes the usage of gaffer as a tool. For a general overview or how to integrate it in your application you should read the overview page.
Gaffer allows you to launch OS processes and supervise them. Three command line tools allow you to use it for now:
A process template is the way you describe the launch of an OS process: how many you want to launch on startup, how many times you want to restart it in case of failures (flapping), etc. A process template can be loaded using any tool or on gafferd startup using its configuration file.
To use gaffer tools you need to:
For more information about gafferd, go to its documentation page.
To launch gafferd run the following command line:
$ gafferd -c /path/to/gaffer.ini
If you want to launch custom plugins with gafferd you can also set the path to them:
$ gafferd -c /path/to/gaffer.ini -p /path/to/plugins
Note
The default plugin path is relative to the user launching gaffer and is set to ~/.gaffer/plugins.
Note
To launch it in daemon mode use the --daemon option.
Then with the default configuration, you can check if gafferd is alive
The configuration file can be used to set the global configuration of gafferd, setup some processes and webhooks.
Note
Since the configuration is passed to the plugin you can also use this configuration file to setup your plugins.
Here is a simple example of a config to launch the dummy process from the example folder:
[process:dummy]
cmd = ./dummy.py
numprocesses = 1
redirect_output = stdout, stderr
Note
Processes can be grouped. You can then start and stop all processes of a group, and see if a process is a member of a group, using the HTTP API. (Sadly this is not yet possible using the command line.)
For example, if you want dummy to be part of the group test, then [process:dummy] becomes [process:test:dummy]. As you can see, a process template can only be part of one group.
Groups are useful when you want to manage a configuration for one application or processes / users.
Each process section should be prefixed by process:. Possible parameters are:
Sometimes you also want to pass a custom environment to your process. This is done by creating a special configuration section named env:processname. Each environment section is prefixed by env:. For example, to pass a special PORT environment variable to dummy:
[env:dummy]
port = 80
All environment variable keys are passed in uppercase to the process environment.
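The uppercasing rule can be illustrated with the standard configparser module. This is a sketch of how an env: section could be turned into a process environment, not the actual gafferd implementation:

```python
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read_string("[env:dummy]\nport = 80\n")

# build the environment passed to the "dummy" process:
# every key from the env: section is uppercased
env = {key.upper(): value for key, value in cfg.items("env:dummy")}
# env == {"PORT": "80"}
```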
The gaffer command line tool is an interface to the gaffer HTTP API and includes support for loading/unloading Procfile applications, scaling them up and down, etc.
It can also be used as a manager for Procfile-based applications similar to foreman, but using the gaffer framework. It runs your application directly using a Procfile, or exports it to a gafferd configuration file or simply to a JSON file that you can send to gafferd using the HTTP API.
For example using the following Procfile:
dummy: python -u dummy_basic.py
dummy1: python -u dummy_basic.py
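A Procfile like the one above is just a set of name: command lines. The parsing done by gaffer.procfile can be sketched with a simple regex (illustrative only, not the real parser):

```python
import re

LINE_RE = re.compile(r"^([A-Za-z0-9_-]+):\s*(.+)$")

def parse_procfile(content):
    # map each process name to its command line,
    # skipping blank lines and comments
    procs = {}
    for line in content.splitlines():
        line = line.strip()
        if line.startswith("#"):
            continue
        m = LINE_RE.match(line)
        if m:
            procs[m.group(1)] = m.group(2)
    return procs

procfile = """\
dummy: python -u dummy_basic.py
dummy1: python -u dummy_basic.py
"""
parse_procfile(procfile)
# {'dummy': 'python -u dummy_basic.py', 'dummy1': 'python -u dummy_basic.py'}
```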
You can launch all the programs in this procfile using the following command line:
$ gaffer start
Or load them on a gaffer node:
$ gaffer load
All processes in the Procfile will be then loaded to gafferd and started.
If you want to start a process with a specific environment file you can create a .env file in the application folder (or use the command line option to tell gaffer which one to use). Environment variables are passed one per line. Ex:
PORT=80
and then scale them up and down:
$ gaffer scale dummy=3 dummy1+2
Scaling dummy processes... done, now running 3
Scaling dummy1 processes... done, now running 3
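The scaling syntax sets an absolute count with = and a relative change with + or -. Parsing a spec like dummy=3 or dummy1+2 can be sketched as follows (names are illustrative, not the actual gaffer internals):

```python
import re

SPEC_RE = re.compile(r"^([\w-]+)([=+-])(\d+)$")

def parse_scale(spec):
    # returns (name, op, amount); op "=" is an absolute count,
    # "+" and "-" are relative adjustments
    m = SPEC_RE.match(spec)
    if m is None:
        raise ValueError("invalid scaling spec: %r" % spec)
    name, op, num = m.groups()
    return name, op, int(num)

parse_scale("dummy=3")   # ('dummy', '=', 3)
parse_scale("dummy1+2")  # ('dummy1', '+', 2)
```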
Have a look at the Gaffer page for more information about the commands.
gafferctl can be used to run any command listed below. For example, you can get a list of all process templates:
$ gafferctl processes
You can simply add a process using the load command:
$ gafferctl load_process ../test.json
$ cat ../test.json | gafferctl load_process -
$ gafferctl load_process - < ../test.json
test.json can be:
{
    "name": "somename",
    "cmd": "cmd to execute",
    "args": [],
    "env": {},
    "uid": int or "",
    "gid": int or "",
    "cwd": "working dir",
    "detach": false,
    "shell": false,
    "os_env": false,
    "numprocesses": 1
}
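A concrete, valid instance of that template (with the int or "" placeholders filled in) can be sanity-checked with the json module before sending it to gafferd. The field values here are made up for illustration:

```python
import json

raw = """
{
    "name": "somename",
    "cmd": "sleep",
    "args": ["10"],
    "env": {},
    "uid": "",
    "gid": "",
    "cwd": "/tmp",
    "detach": false,
    "shell": false,
    "os_env": false,
    "numprocesses": 1
}
"""

# json.loads fails loudly on malformed templates,
# which catches the kind of typos shown above
config = json.loads(raw)
```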
You can also add a process using the add command:
gafferctl add name inc
where name is the name of the process to create and inc the number of new OS processes to start.
To start a process run the following command:
$ gafferctl start name
And stop it using the stop command.
To scale up a process use the add command. For example, to increase the number of processes by 3:
$ gafferctl add name 3
To decrease the number of processes use the sub command.
The watch command allows you to watch changes on a local or remote gaffer node.
For more information go to the Gafferctl page.
Initial release
Bases: object
This method is called whenever a callback run by the IOLoop throws an exception.
By default simply logs the exception as an error. Subclasses may override this method to customize reporting of exceptions.
The exception itself is not passed explicitly, but is available in sys.exc_info.
Module to return all streams from the managed processes to the console. This application subscribes to the manager to know when a process is created or killed, and displays the information. When an OS process is spawned it subscribes to its streams, if any are redirected, and prints the output on the console. This module is used by Gaffer.
Note
If colorize is set to true, each template will have a different colour.
Bases: object
Wrapper around colorama to ease output creation. Don't use it directly; instead, use colored(name_of_color, lines) to return the colored output.
Colors are: cyan, yellow, green, magenta, red, blue, intense_cyan, intense_yellow, intense_green, intense_magenta, intense_red, intense_blue.
lines can be a list or a string.
The gaffer command line tool is an interface to the gaffer HTTP API and includes support for loading/unloading Procfile applications, scaling them up and down, etc.
It can also be used as a manager for Procfile-based applications similar to foreman, but using the gaffer framework. It runs your application directly using a Procfile, or exports it to a gafferd configuration file or simply to a JSON file that you can send to gafferd using the HTTP API.
For example using the following Procfile:
dummy: python -u dummy_basic.py
dummy1: python -u dummy_basic.py
You can launch all the programs in this procfile using the following command line:
$ gaffer start
Or load them on a gaffer node:
$ gaffer load
and then scale them up and down:
$ gaffer scale dummy=3 dummy1+2
Scaling dummy processes... done, now running 3
Scaling dummy1 processes... done, now running 3
$ gaffer
usage: gaffer [options] command [args]
manage Procfiles applications.
optional arguments:
-h, --help show this help message and exit
-c CONCURRENCY, --concurrency CONCURRENCY
Specify the number of each process type to run. The
value passed in should be in the format
process=num,process=num
-e ENVS [ENVS ...], --env ENVS [ENVS ...]
Specify one or more .env files to load
-f FILE, --procfile FILE
Specify an alternate Procfile to load
-d ROOT, --directory ROOT
Specify an alternate application root. This defaults
to the directory containing the Procfile
--endpoint ENDPOINT Gaffer node URL to connect
--version show program's version number and exit
Commands:
---------
start Start a process
run Run one-off command
export Export a Procfile
load Load a Procfile application to gafferd
unload Unload a Procfile application to gafferd
scale Scaling your process
ps List your process informations
help Get help on a command
Many events happen in gaffer.
Manager events have the following format:
{
    "event": "<nameofevent>",
    "name": "<templatename>"
}
All process events are prefixed by proc.<name> to make pattern matching easier, where <name> is the name of the process template.
Events are:
proc.<name>.start : the template <name> starts to spawn processes
proc.<name>.spawn : one OS process using the process template <name> is spawned. Message is:
{
    "event": "proc.<name>.spawn",
    "name": "<name>",
    "detach": false,
    "pid": int
}
Note
pid is the internal pid
proc.<name>.exit: one OS process of the <name> template has exited. Message is:
{
    "event": "proc.<name>.exit",
    "name": "<name>",
    "pid": int,
    "exit_code": int,
    "term_signal": int
}
proc.<name>.stop: all OS processes in the template <name> are stopped.
proc.<name>.stop_pid: One OS process of the template <name> is stopped. Message is:
{
    "event": "proc.<name>.stop_pid",
    "name": "<name>",
    "pid": int
}
proc.<name>.reap: One OS process of the template <name> is reaped. Message is:
{
    "event": "proc.<name>.reap",
    "name": "<name>",
    "pid": int
}
This module offers a common way to subscribe to and emit events. All events in gaffer use it.
event = EventEmitter()
# subscribe to all events with the pattern a.*
event.subscribe("a", subscriber)
# subscribe to all events "a.b"
event.subscribe("a.b", subscriber2)
# subscribe to all events (wildcard)
event.subscribe(".", subscriber3)
# publish an event
event.publish("a.b", arg, namedarg=val)
In this example all subscribers will be notified of the event. A subscriber is just a callable with the signature (event, *args, **kwargs).
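The subscribe/publish pattern above can be sketched with a toy, synchronous emitter. The real gaffer EventEmitter is asynchronous and threadsafe; the class below only illustrates the pattern matching (exact name, prefix pattern, and the "." wildcard):

```python
class MiniEmitter(object):
    # toy synchronous version of gaffer's EventEmitter, for illustration
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, pattern, listener):
        self._subscribers.setdefault(pattern, []).append(listener)

    def publish(self, evtype, *args, **kwargs):
        # notify exact matches, prefix patterns ("a" matches "a.b"),
        # and the "." wildcard
        for pattern, listeners in self._subscribers.items():
            if (pattern == "." or pattern == evtype
                    or evtype.startswith(pattern + ".")):
                for listener in listeners:
                    listener(evtype, *args, **kwargs)

seen = []
emitter = MiniEmitter()
emitter.subscribe("a", lambda ev, *a, **kw: seen.append(("a", ev)))
emitter.subscribe("a.b", lambda ev, *a, **kw: seen.append(("a.b", ev)))
emitter.subscribe(".", lambda ev, *a, **kw: seen.append((".", ev)))
emitter.publish("a.b", 1, namedarg=2)
# all three subscribers are notified of "a.b"
```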
Bases: object
Many events happen in gaffer. For example a process will emit the events "start", "stop", "exit".
This object offers a common interface to all event emitters.
Close the event emitter.
This function clears the list of listeners and stops all idle callbacks.
Emit an event evtype.
The event will be emitted asynchronously so we don't block here.
The process module wraps a process and its IO redirection.
Bases: object
class wrapping a process
Args:
Return the process info. If the process is monitored, it returns the last information stored asynchronously by the watcher.
Start to monitor the process.
The listener can be any callable and receives ("stat", process_info).
Bases: object
object to retrieve process stats
Bases: object
Redirected stdin; allows multiple senders to write to the same pipe.
Bases: gaffer.process.RedirectStdin
create custom stdio
Gaffer applications are applications that are started by the manager. A gaffer application can be used to interact with the manager or listen for events.
An application is a class with the following structure:
class MyApplication(object):

    def __init__(self):
        # do init
        pass

    def start(self, loop, manager):
        # this method is called by the manager to start the application
        pass

    def stop(self):
        # method called when the manager stops
        pass

    def restart(self):
        # method called when the manager restarts
        pass
Gaffer is a set of Python modules and tools to easily maintain and interact with your processes.
Depending on your needs you can simply use the gaffer tools (eventually extending them) or embed the gaffer possibilities in your own apps.
Gaffer is internally based on an event loop, using libuv from Joyent via the pyuv binding.
All gaffer events are added to the loop and processed asynchronously, which makes it pretty performant when handling multiple processes and their control.
At the lowest level you will find the manager. A manager is responsible for keeping processes alive and managing actions on them:
A process template describes the way a process will be launched and how many OS processes you want to handle for this template. This number can be changed dynamically. The current properties of a template are:
The manager is also responsible for starting and stopping the gaffer applications that you add to the manager to react to different events. An application can fetch information from the manager and interact with it.
Running an application is done like this:
# initialize the controller with the default loop
loop = pyuv.Loop.default_loop()
manager = Manager(loop=loop)
# start the controller
manager.start(applications=[HttpHandler()])
... # do something
manager.stop() # stop the controller
manager.run() # run the event loop
The HttpHandler application allows you to interact with gaffer via HTTP. It is used by the gafferd server, which is able for now to load process templates via an ini file and maintain an HTTP endpoint which can be configured to be accessible on multiple interfaces and transports (tcp & unix sockets).
Note
Only application instances are used by the manager. This allows you to initialize them with your own settings.
Building your own application is easy, basically an application has the following structure:
class MyApplication(object):

    def __init__(self):
        # do init
        pass

    def start(self, loop, manager):
        # this method is called by the manager to start the controller
        pass

    def stop(self):
        # method called when the manager stops
        pass

    def restart(self):
        # method called when the manager restarts
        pass
You can use this structure for anything you want, even add an app to the loop.
To help you in your work, a pyuv implementation of tornado is integrated, and a powerful events module allows you to manage PUB/SUB events (or anything evented) inside your app. An EventEmitter is a threadsafe class to manage subscribers and publishers of events. It is internally used to broadcast process and manager events.
Stats of a process can be monitored continuously (there is a refresh interval of 0.1s to fetch CPU information) using the following method:
manager.monitor(<nameorid>, <listener>)
Where <nameorid> is the name of the process template. In this case the statistics of all the OS processes using this template will be emitted. Stats events are collected in the listener callback.
Callback signature: callback(evtype, msg).
evtype is always “STATS” here and msg is a dict:
{
    "mem_info1": int,
    "mem_info2": int,
    "cpu": int,
    "mem": int,
    "ctime": int,
    "pid": int,
    "username": str,
    "nice": int,
    "cmdline": str,
    "children": [{ stat dict, ... }]
}
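A listener for these stats messages is just a callable with the (evtype, msg) signature. A sketch that keeps the latest cpu/mem values per pid (field names taken from the dict above; the helper name is illustrative):

```python
def make_stats_listener(store):
    # returns a callback matching the (evtype, msg) signature;
    # evtype is always "STATS" for monitor events
    def listener(evtype, msg):
        if evtype == "STATS":
            store[msg["pid"]] = {"cpu": msg["cpu"], "mem": msg["mem"]}
    return listener

stats = {}
listener = make_stats_listener(stats)

# a sample message shaped like the dict above
listener("STATS", {"pid": 1234, "cpu": 5, "mem": 12,
                   "username": "benoitc", "children": []})
# stats == {1234: {"cpu": 5, "mem": 12}}
```

The same callback can be passed to manager.monitor(<nameorid>, listener) and later removed with manager.unmonitor(<nameorid>, listener).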
To unmonitor the process in your app run:
manager.unmonitor(<nameorid>, <listener>)
Note
Internally, monitoring subscribes you to an EventEmitter. A timer runs as long as there are subscribers to the process stats events.
Of course you can monitor a process directly using the internal pid:
process = manager.running[pid]
process.monitor(<listener>)
...
process.unmonitor(<listener>)
You can subscribe to a process's stdout/stderr streams and even write to its stdin if you want.
To be able to receive the stdout/stderr streams in your application, you need to create a process with the redirect_output setting:
manager.add_process("nameofprocesstemplate", cmd,
                    redirect_output=["stdout", "stderr"])
Note
The names of the outputs can be anything; only the order counts. So if you want to name stdout a, just replace stdout with a in the declaration.
If you don't want to receive stderr, just omit it in the list. Also, if you want to redirect stderr to stdout, just use the same name.
Then for example, to monitor the stdout output do:
process.monitor_io("stdout", somecallback)
Callback signature: callback(evtype, msg).
And to unmonitor:
process.unmonitor_io("stdout", somecallback)
Note
To subscribe to all process streams, replace the stream name by '.'.
Writing to stdin is pretty easy. Just do:
process.write("somedata")
or to send multiple lines:
process.writelines(["line", "line"])
You can write lines from multiple publishers, and multiple publishers can write at the same time. This method is threadsafe.
See the HTTP API description for more information.
Gaffer proposes different tools (and more will come soon) to manage your processes without having to code. It can be used like supervisor, god, runit or other similar tools. Speaking of runit, a similar controlling tool will be available in 0.2.
See the Command Line documentation for more information.
gafferctl can be used to run any command listed below. For example, you can get a list of all process templates:
$ gafferctl processes
gafferctl is an HTTP client able to connect to a gaffer node via a unix socket or a tcp connection. It is using the httpclient module to do it.
You can create your own client either by using the client API provided in the httpclient module or by reading the doc here and passing your own message to the gaffer node. All messages are encoded in JSON.
$ gafferctl help
usage: gafferctl [--version] [--connect=<endpoint>]
[--certfile] [--keyfile]
[--help]
<command> [<args>]
Commands:
add Increment the number of OS processes
add_process Add a process to monitor
del_process Delete a process
get_process Fetch a process template
help Get help on a command
kill Send a signal to a process
load_process Load a process from a file
numprocesses Number of processes that should be launched
pids Get launched process ids for a process template
processes List the process templates
running Number of running processes for this process description
start Start a process
status Return the status of a process
stop Stop a process
sub Decrement the number of OS processes
update_process Update a process description