Nirmata Documentation

Contents:

Introduction

About Nirmata

Nirmata’s mission is to help businesses innovate faster, by enabling continuous delivery of software for all enterprises.

Nirmata enables Enterprise DevOps by making it easy for developers to perform fully automated application operations tasks based on policies that are managed by a platform team. Nirmata has been built ground-up for Microservices-style applications, where an application is composed of multiple services and each service is designed to be elastic, resilient, composable, minimal, and complete. Nirmata provides seamless service discovery, registration, load-balancing, and customizable routing for microservices. These features enable several DevOps best practices and the sophisticated automation required for continuous delivery of software.

The Nirmata solution is non-intrusive and easy to use. It integrates with your current build tools and does not try to hide, or abstract-away, the Infrastructure-as-a-Service (IaaS) layer. This approach allows full visibility and control, and yet provides the benefits of a platform. You can configure your cloud or data center resources, using each provider’s security and management best practices and then use Nirmata to orchestrate and manage applications across providers.

Nirmata Overview Video

Core Concepts

The figure below shows the core concepts in Nirmata and their relationships to each other. Each of these concepts is defined below:

_images/nirmata-concepts.png

Applications

Applications are composed of multiple Services and can run in one or more Environments. While Nirmata has been designed for Microservices-style applications, it is easy to model and manage traditional client-server applications as well.

Services

A Service is part of an Application. It is the unit of software versioning and delivery. Each Service runs in its own application container. Some applications may contain a few services, like a web-app and a database. Others can contain several services to enable a Microservices-style architecture.

Environments

An Environment is a runtime instance of an Application. Environments can be created for different stages of development, such as dev-test, staging, production or can be based on deployment characteristics such as regions.

Policies

Policies are used to govern resource usage, enforce application constraints, and ensure scalable, repeatable behaviors and best practices across multiple teams.

Cloud Providers

Cloud Providers supply resources to run application containers. You can create one or more cloud providers, setup pools of hosts from them, and then use policies to manage how applications, services, and environments are mapped to hosts.

Nirmata currently supports the following cloud providers:
  • Public Clouds:
    • Amazon Web Services (AWS)
    • Microsoft Azure
    • Cisco Cloud Services
    • VMware vCloud Air
    • Other (individual machine instances from any other IaaS providers)
  • Private Clouds:
    • VMware vSphere
    • OpenStack
    • Other (individual virtual or physical servers)

Nirmata can securely manage both public and private clouds, without requiring any special network or firewall configuration.

Host Groups

Host Groups are used to model pools of similar hosts (servers) in a Cloud Provider. For example, you can allocate pools of resources based on service tiers, application characteristics, or application lifecycle needs.

Containers

Each Service runs in a Container. Nirmata uses the Docker Engine as its container technology. Since Docker is an open technology, you always keep control of your images and can also run them outside of Nirmata.

Image Registries

An Image Registry stores Docker images, which are typically produced by a build system. Nirmata supports both public and private image registries. You can set up your build tools to generate images for each service, and then trigger Nirmata to deploy the images.

Getting Started

Here are three easy steps to familiarize yourself with Nirmata:
  1. Set up a Cloud Provider and a Host Group (see Cloud Providers and Host Groups).
  2. Import a sample application from the Nirmata OSS Github repository (see Create an Application).
  3. Run the Application in an Environment (see Deploy an Application to an Environment).

Getting Started Video

Cloud Providers and Host Groups

A Cloud Provider is used to enable Nirmata access to your public or private cloud resources.

A Host Group allows you to create pools of identical resources. You can create a single Host Group, or separate Host Groups for services, classes of services, teams, or any other classification, and then use Resource Selection policies to control how the Host Group is used.

The setup for any public Cloud Provider type has the following general steps:
  1. Prepare a VM template, or similar, as detailed in the Host Setup section.
  2. Create the Cloud Provider in Nirmata
  3. Create one or more Host Groups in Nirmata

The setup for private clouds has an additional step. You will first need to run the Nirmata Private Cloud Agent and then configure a Cloud Provider. See details in the Private Cloud setup section.

Host Setup

Here are the general requirements for a Host that is managed by Nirmata.

  1. Can be any Linux flavor that can run Docker.
  2. Requires Docker 1.10+ (see Docker installation guide.)
  3. The Nirmata agent must be installed (see Using cloud-init.)

Using cloud-init

If your OS provisioning system supports cloud-init (sometimes called Cloud-Config) or other post-provisioning mechanisms, you can easily install Docker and the Nirmata Agent as follows (the example shown is for Ubuntu and the Direct Connect Host Group):

#!/bin/bash -x
# Install Docker
wget -qO- https://get.docker.com/ | sh

# Install Nirmata Agent
sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud other --hostgroup <key>

Nirmata Agent Setup

To set up the Nirmata agent on the host instance:

  1. Run the Nirmata Agent configuration script:

    sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud <CloudProvider> --hostgroup <HostGroupId>
    
            where:
              CloudProvider = [aws | azure | oraclecs | openstack | vsphere | google | digitalocean | other]
              HostGroupId = unique ID for the Host Group. Only required for the 'Direct Connect' container hosts
    
  2. Verify that Nirmata Agent has started:

    sudo docker ps
    

Note: setup-nirmata-agent.sh uses upstart/systemd to enable management of nirmata-agent as a service. If upstart or systemd is not available on your host instance, you can run the nirmata-agent container directly using Docker.

To start/stop Nirmata agent you can use:

Upstart:

sudo start nirmata-agent
sudo stop nirmata-agent

Systemd:

systemctl start nirmata-agent
systemctl stop nirmata-agent

To set up the Nirmata agent on any host instance using the command line (for the ‘other’ cloud provider type):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /var/log/nirmata-agent:/var/log/supervisor \
           -v /opt/nirmata/conf:/usr/share/nirmata/conf -v /opt/nirmata/db:/usr/share/nirmata/db --name=nirmata-agent \
           -v /sys:/sys:ro -e NIRMATA_HOST_ADDRESS=<HostName> -e NIRMATA_CLOUD_PROVIDER_TYPE=other -e NIRMATA_USE_HTTP=false \
           -e NIRMATA_HOST_GATEWAY=www.nirmata.io/host-gateway/manager -e NIRMATA_LABELS=<HostLabels> -e NIRMATA_HOST_GROUP_ID=<HostGroupId> nirmata/nirmata-agent:latest

where:
  HostName = Host name or IP address of your host
  HostGroupId = unique ID for the Host Group. Required for the 'other' cloud provider type
  HostLabels = Host labels in json format without any spaces between keys and values. e.g. NIRMATA_LABELS={\"host-type\":\"SSD\",\"host-location\":\"us-east\"}
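The NIRMATA_LABELS value must be compact JSON with no whitespace, which is easy to get wrong when quoting by hand. Here is a small helper sketch (make_nirmata_labels is a hypothetical name, not part of Nirmata) that builds such a value from key=value pairs:

```shell
# Hypothetical helper: build a NIRMATA_LABELS value from key=value pairs,
# emitting compact JSON with no whitespace (as the agent requires).
make_nirmata_labels() {
  out=""
  for kv in "$@"; do
    key=${kv%%=*}
    val=${kv#*=}
    pair="\"${key}\":\"${val}\""
    if [ -z "$out" ]; then out=$pair; else out="$out,$pair"; fi
  done
  printf '{%s}' "$out"
}

# Pass the result via -e NIRMATA_LABELS=... on the docker run command above:
make_nirmata_labels host-type=SSD host-location=us-east
# → {"host-type":"SSD","host-location":"us-east"}
```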

Host Labels

You can set up host labels when installing the Nirmata agent. The Nirmata agent can detect host labels directly from the Docker engine. Host labels can also be passed to the Nirmata agent using environment variables.

Docker engine:

Add the label option when starting the Docker engine. Depending on your docker-engine installation, you can add the label option to DOCKER_OPTS.
   e.g. --label environment-type=\"production\"

Environment variable:

You can add the labels to the Nirmata agent as environment variables. To do this, modify the Nirmata agent init script (/etc/init/nirmata-agent.conf) and update the HOST_LABELS field. The labels should be specified in JSON format without any whitespace.
   e.g. HOST_LABELS={\"host-type\":\"SSD\",\"host-location\":\"us-east\"}

Once you set up the host labels, you will need to restart the Nirmata agent. The host labels should be detected and displayed in the Nirmata console once the host connects. You can then use these labels to control the placement of your containers.
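Which restart command applies depends on the init system on the host (see the Upstart and Systemd commands in the Nirmata Agent Setup section). As a small sketch, the choice can be expressed as a helper (restart_cmd is a hypothetical name, not shipped with Nirmata):

```shell
# Hypothetical helper: print the restart command for the given init system.
restart_cmd() {
  case "$1" in
    upstart) echo "sudo restart nirmata-agent" ;;
    systemd) echo "sudo systemctl restart nirmata-agent" ;;
    *) echo "unknown init system: $1" >&2; return 1 ;;
  esac
}

# e.g. eval "$(restart_cmd systemd)"
```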

HTTP Proxy

You can set up the Nirmata agent to use an HTTP proxy to communicate with the internet. Here are the instructions:

Upstart:

Add the following to the docker run command in /etc/init/nirmata-agent.conf file:
    -e HTTPS_PROXY=<http(s) proxy address and port>

e.g. -e HTTPS_PROXY=https://192.162.11.11:443

Systemd:

Add the following to the docker run command in /opt/nirmata/bin/start-nirmata-agent.sh file:
    -e HTTPS_PROXY=<http(s) proxy address and port>

e.g. -e HTTPS_PROXY=https://192.162.11.11:443

Restart Nirmata agent using the appropriate command.
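A malformed proxy value typically only shows up later as a connection failure, so it can help to sanity-check the format before editing the init script. A minimal sketch, assuming the scheme://host:port form used in the examples above (valid_proxy is a hypothetical helper, not part of Nirmata):

```shell
# Hypothetical format check for the HTTPS_PROXY value: expects scheme://host:port.
valid_proxy() {
  printf '%s' "$1" | grep -Eq '^https?://[^ :/]+:[0-9]+$'
}

valid_proxy "https://192.162.11.11:443" && echo "proxy format ok"
```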

Host Networking

Nirmata does not require any special networking setup and works with your existing Host networking. Nirmata uses the Docker bridge network mode on each Host, so each application service is addressable using the Host IP address and an allocated (static or dynamic) port. Based on the cloud provider, Nirmata will detect the public and private IP addresses of your Host. When available, the Host’s private IP address will be used for communication between services.

Amazon Web Services

Setting up Amazon Web Services (AWS) consists of the following steps:
  1. Create an Amazon Machine Image (AMI).
  2. Setup AWS.
  3. Create an AWS Cloud Provider.
  4. Create an AWS Host Group.

Create an Amazon Machine Image (AMI)

You can create your own AMI with Docker running on it. The AMI has the following requirements:
  1. Can be any Linux flavor that can run Docker
  2. Requires Docker 1.10+ (download)

You will need to set up the Nirmata agent container on that instance by running the command:

sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud aws

Once the instance is setup, create an AMI via the AWS console. This AMI can be used with your Auto Scaling Group or Spot Fleet instances.

Setup AWS

Nirmata needs to know which host instances should be used to deploy applications. For AWS, you can choose to configure hosts using any of the following options:

Using Auto Scaling Groups (ASG)

Detailed information on creating and managing Auto Scaling groups can be found here:

The Launch Configuration of the ASG must use the AMI created earlier, or you can use a standard Linux AMI and cloud-init (see Using cloud-init) to install Docker and the Nirmata agent.

This ensures that Docker and the Nirmata Host Agent are pre-installed.

_images/aws-launch-config.png

Using Spot Fleet Requests

You can use AWS Spot Fleet Requests with Nirmata to take advantage of discounted spot instance pricing. To use Spot Fleet Requests, you need to first create the Spot Fleet Requests using AWS console. Detailed information on creating and managing Spot Fleet Requests can be found here:

The Spot Fleet Request can use the AMI created earlier, or you can use a standard Linux AMI and cloud-init (see Using cloud-init) to install Docker and the Nirmata agent.

Using Launch Configurations

AWS Launch Configurations can be used with Nirmata to provision and deprovision EC2 instances from Nirmata. Detailed information on creating a Launch Configuration can be found here:

The Launch Configuration must use the AMI created earlier, or you can use a standard Linux AMI and cloud-init (see Using cloud-init) to install Docker and the Nirmata agent.

Create an AWS Cloud Provider

Nirmata requires read-only access to the EC2 service if using ASGs or Spot Fleet Requests, and full access to the EC2 service if using Launch Configurations to provision your VMs. The secure way to provide access is by configuring an IAM role for Nirmata in your AWS account. To configure a role, you will need the Nirmata AWS account ID and a unique external ID. When the role is configured, you provide Nirmata the role ARN (Amazon Resource Name).

This process seems involved, but only takes a few minutes to set up! Here are the detailed steps:

  1. Launch the Add Cloud Provider Wizard and select AWS as the provider.
_images/AWS-IAM-Role-0.png
  2. The Settings page will show you the Nirmata Account ID (094919933512) and a unique external ID for the Cloud Provider. You will require these in a later step:
_images/AWS-IAM-Role-6.png
  3. Log in to your AWS account. Select IAM and navigate to the option to create a new user role:
_images/AWS-IAM-Role-1.png _images/AWS-IAM-Role-2.png _images/AWS-IAM-Role-3.png
  4. Provide a name (e.g. ‘nirmata’) and go to the next page:
_images/AWS-IAM-Role-4-1.png
  5. Select the ‘Roles for Cross-Account Access’ option, and then select the ‘Provide access to a 3rd party AWS account’ option:
_images/AWS-IAM-Role-5-1.png
  6. Enter the Nirmata AWS Account ID (094919933512) and the unique Cloud Provider External ID shown in the Nirmata Cloud Provider wizard:
_images/AWS-IAM-Role-7-1.png
  7. On the next page, select ‘AmazonEC2ReadOnlyAccess’ and ‘AmazonEC2AutomationRole’ to allow Nirmata to provision EC2 instances:
_images/AWS-IAM-Role-8-1.png

You can also create a new custom policy (e.g. NirmataAutomationPolicy) for more granular access control. Below is a Policy Document that can be used in the custom policy. This policy limits StartInstances/StopInstances/TerminateInstances to the instances created by Nirmata with the appropriate tag:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "ec2:CreateTags",
                "ec2:Describe*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances"
            ],
            "Resource": "arn:aws:ec2:<region>:<account>:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/com.nirmata.createdBy": "nirmata"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "autoscaling:Describe*",
            "Resource": "*"
        }
    ]
}

NOTE: be sure to replace the <region> and <account> placeholders with the allowed region (or “*” to allow all regions) and your AWS account ID.
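For example, the substitution can be scripted with sed before pasting the policy into AWS; the region and account values below are purely illustrative:

```shell
# Illustrative values only; substitute your own region (or "*") and account ID.
REGION='us-east-1'
ACCOUNT='123456789012'
ARN_TEMPLATE='arn:aws:ec2:<region>:<account>:instance/*'
ARN=$(printf '%s' "$ARN_TEMPLATE" | sed -e "s|<region>|$REGION|" -e "s|<account>|$ACCOUNT|")
echo "$ARN"
# → arn:aws:ec2:us-east-1:123456789012:instance/*
```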

  8. Finish creating the AWS IAM role:
_images/AWS-IAM-Role-10-1.png
  9. Select the role you just created and copy the Role ARN:
_images/AWS-IAM-Role-11.png
  10. Paste the Role ARN into the Nirmata Add Cloud Provider Wizard. You can then click ‘Next’ and Nirmata will validate the settings:
_images/AWS-IAM-Role-12.png

Creating AWS Cloud Provider Video

Create an AWS Host Group

Create a Host Group by specifying the Host Group Name and selecting the Cloud Provider. Then select the host instance type (ASG, Spot Fleet Request, or Launch Configuration).

You can also update the Resource Selection Rule for this Host Group.

_images/create-aws-host-group-1.png _images/create-aws-host-group-2.png

Troubleshooting AWS Host creation

In some cases, host creation on AWS may fail due to various reasons. Here are a few things to check when the host creation fails:

  1. Check whether the Docker engine and the Nirmata agent were successfully installed and started on the instances provisioned on AWS. To check if the Nirmata agent is running, ssh to your AWS instance and use the command: sudo docker ps. This command lists all the running containers. Verify that the nirmata-agent container is running.
  2. If the Nirmata agent is running on your AWS instance but the state displayed on the Nirmata console is ‘Pending Connect’ or ‘Not Connected’, the Nirmata agent might be unable to connect back to the Nirmata server. Verify that the AWS security groups for your instance allow outbound HTTPS traffic. You can check by accessing a website, e.g. www.google.com, using curl or wget (e.g. curl -XGET http://www.google.com).
  3. When creating AWS instances using Launch Configuration mode, ensure that the network selected in Nirmata is the same as the network of the security group configured for the launch configuration. If this is not the case, instance creation will fail.
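The outbound check in step 2 can be wrapped in a small helper so the same probe works with curl or wget (check_outbound is a hypothetical name; the probe command is passed as arguments):

```shell
# Hypothetical probe wrapper: run any command and report CONNECTED or BLOCKED
# based on its exit status.
check_outbound() {
  if "$@" >/dev/null 2>&1; then echo CONNECTED; else echo BLOCKED; fi
}

# e.g., from the AWS instance:
#   check_outbound curl -sSf https://www.google.com
#   check_outbound wget -q -O /dev/null https://www.google.com
```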

Microsoft Azure

Setting up Microsoft Azure consists of the following steps:
  1. Create Azure Provider.
  2. Create Azure Host Group.

Create Azure Provider

Create an Azure Provider by entering the Subscription ID, Tenant ID, Client ID, and Client Secret. Nirmata uses Azure Active Directory for authentication, so you need to ensure that you have set up Azure Active Directory. For setup instructions, refer to the documentation (https://azure.microsoft.com/en-us/documentation/services/active-directory/).

Once you provide the required information, Nirmata will validate access to your account.

Create Azure Host Group

Nirmata integrates with Azure Resource Manager. Prior to creating a Host Group in Nirmata, you need to create a Resource Group in Azure. Within your resource group, you should also create a network, security group(s), and a storage account. These will be used while creating the host group.

Create a host group by selecting the Azure provider, specifying the number of desired hosts, and selecting the resource group. Select the image, instance type, virtual network, security group, and storage account. You can also select whether you want to attach public IP addresses. Enter the username and password to access your instance.

You need to install the Docker engine and the Nirmata agent using cloud-init by adding the installation script in User Data. Clicking ‘Install Nirmata Agent’ will add the following script to User Data:

#!/bin/bash -x
# Install Docker
wget -qO- https://get.docker.com/ | sh

# Install Nirmata Agent
sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud azure

Once this is done, Nirmata will create and setup VMs. Once the VMs are powered on, they will connect to Nirmata SaaS. Now you can create applications and deploy them to your Azure resources.

Google Compute Engine

Setting up Google Compute Engine consists of the following steps:
  1. Create GCE Provider.
  2. Create GCE Host Group.

Create GCE Provider

Here are the detailed steps to create a cloud provider for Google Compute Engine:

Launch the Add Cloud Provider Wizard and select Google Compute Engine as the provider.

In the Settings page provide the service account key. This can be generated from your Google Cloud Platform account.

Click Next to validate the settings and complete the wizard.

Create GCE Host Group

To create a Host Group, provide the name and select the Cloud Provider. Then select the desired host count.

In the Settings Tab enter value for:
  • Zone
  • Configuration Type:
    • Image: to use a pre-built image
    • Instance Group: to use the instance group

Provide additional settings based on the configuration type selected.

On the Tags tab enter the tags to apply to the instances.

On the StartUp Script Tab, click the checkbox to use a default script that installs Docker and the Nirmata agent, or provide your own script.
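The default script itself is not reproduced here; assuming it follows the same pattern as the cloud-init scripts shown for the other providers (and given that google is an accepted --cloud value in the Nirmata Agent Setup section), a custom startup script could look like:

```shell
#!/bin/bash -x
# Install Docker
wget -qO- https://get.docker.com/ | sh

# Install Nirmata Agent (assumed GCE variant of the setup script shown for other providers)
sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud google
```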

Once all the information is provided and the wizard is completed, a host group will be created and the instances will be provisioned. Once the instances connect to Nirmata, they are ready to be used to deploy your application.

Oracle Public Cloud

Setting up Oracle Public Cloud consists of the following steps:
  1. Create Oracle Public Cloud Provider.
  2. Create Oracle Public Cloud Host Group.

Create Oracle Public Cloud Provider

Here are the detailed steps to create a cloud provider for Oracle Public Cloud:

Launch the Add Cloud Provider Wizard and select Oracle Public Cloud Services as the provider.

In the Settings page provide the endpoint URL, identity domain, username and password. The endpoint URL and identity domain can be found in your Oracle Public Cloud account.

Click Next to validate the settings and complete the wizard.

Create Oracle Public Cloud Host Group

To create a Host Group, provide the name and select the Cloud Provider.

In the Settings Tab select host instances:
  • Orchestration - to discover instances provisioned by the selected orchestration
  • Identity Domain - to discover all instances in the identity domain

Once all the information is provided and the wizard is completed, a host group will be created and the instances will be discovered. Once the instances connect to Nirmata, they are ready to be used to deploy your application.

Note: For Oracle Public Cloud, the instances need to be created directly using the Oracle Public Cloud Console or API. For an instance to be discovered by Nirmata, the Docker engine and the Nirmata agent should be installed and running on the instance. See the Host Setup section.

Digital Ocean

Setting up Digital Ocean consists of the following steps:
  1. Create an Image.
  2. Create a Digital Ocean Provider.
  3. Create a Digital Ocean Host Group.

Create an Image

Create and launch a Linux droplet in Digital Ocean. Connect to the VM and set up the Nirmata agent by running the following command:

sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud digitalocean

Now, power down the droplet and create an image snapshot from the Images page. This image should be selected when creating a host group for your Digital Ocean provider.

_images/create-digital-ocean-image.png

Create a Digital Ocean Provider

Create an API key on the Digital Ocean console. You can create the API key here

_images/create-digital-ocean-api-key.png

In Nirmata, create a Digital Ocean cloud provider by providing the API Key.

_images/create-digital-ocean-provider-1.png _images/create-digital-ocean-provider-2.png

Create a Digital Ocean Host Group

Create a host group by selecting the Digital Ocean cloud provider, specifying the number of desired hosts and the region, and selecting the VM image created earlier. Select the droplet size. You can also update the Resource Selection Rule for this Host Group.

_images/create-digital-ocean-host-group-1.png _images/create-digital-ocean-host-group-2.png

Once this is done, Nirmata will create and set up droplets (VMs). Once the VMs are powered on, they will connect to Nirmata SaaS. Now you can create applications and deploy them to your Digital Ocean resources.

Cisco Metapod

Setting up Cisco Metapod consists of the following steps:
  1. Create a VM Template (Optional).
  2. Create a Cisco Metapod Provider.
  3. Create a Cisco Metapod Host Group.

Create a VM Template (Optional)

Create and launch a Linux VM using the Metapod console. Connect to the VM, install the Docker engine, and set up the Nirmata agent by running the following command:

sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud openstack

Now you can create a template for the VM using Metapod Horizon. This template should be selected when creating a host group for your Metapod provider.

Note: You can skip this step by using cloud-init to install Docker and the Nirmata agent while creating the Cisco Metapod Host Group in Nirmata (Create a Cisco Metapod Host Group)

Create a Cisco Metapod Provider

Create a Cisco Metapod provider by providing the Keystone identity service URL (https://serviceaddress:5000/v2.0/), project name, and credentials. After entering the credentials, validate access to your cloud provider before closing the wizard.

_images/create-metapod-provider-1.png _images/create-metapod-provider-2.png

Create a Cisco Metapod Host Group

Create a host group by selecting the Metapod provider, specifying the number of desired hosts and selecting the template created earlier. Provide the resource information such as flavor, keypair, network and security group. You can install docker engine and Nirmata agent using cloud-init by adding the installation script in User Data. You can also update the Resource Selection Rule for this Host Group:

#!/bin/bash -x
# Install Docker
wget -qO- https://get.docker.com/ | sh

# Install Nirmata Agent
sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud openstack
_images/create-metapod-hg-0.png _images/create-metapod-hg-1.png _images/create-metapod-hg-2.png

Once this is done, Nirmata will create and set up VMs. Once the VMs are powered on, they will connect to Nirmata SaaS. Now you can create applications and deploy them to your Cisco Metapod resources.

VMware vSphere

To securely connect Nirmata to a vSphere in your Private Cloud or Data Center, first setup a Private Cloud.

Setting up VMware consists of the following steps:
  1. Setup a Resource Pool in vCenter.
  2. Create a vSphere template.
  3. Create a VMware vSphere Cloud Provider.
  4. Create a VMware vSphere Host Group.

Setup a Resource Pool in vCenter

Refer to VMware vSphere documentation for instructions on setting up a Resource Pool.

Create a vSphere template

Create and launch a Linux VM using vSphere. Connect to the VM and set up the Nirmata agent by running the following command:

sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud vsphere

Now you can create a template for the VM using vCenter. This template should be selected when creating a host group for your vSphere provider.

Create a VMware vSphere Cloud Provider

Create a VMware vSphere provider by providing the vCenter SDK URL (http://serveraddress/sdk) and credentials:

_images/create-vsphere-provider-1.png

After entering the credentials, validate access to your cloud provider before closing the wizard:

_images/create-vsphere-provider-2.png

Create a VMware vSphere Host Group

Create a host group by selecting the vSphere provider, specifying the number of desired hosts, providing the data center information and selecting the resource pool, image template, flavor, network and datastore. You can also update the Resource Selection Rule for this Host Group.

_images/create-vsphere-hg-0.png _images/create-vsphere-hg-1.png

Once this is done, Nirmata will create and set up VMs. Once the VMs are powered on, they will connect to Nirmata SaaS. Now you can create applications and deploy them to your VMware vSphere resources.

OpenStack

To securely connect Nirmata to OpenStack in your Private Cloud or Data Center, first setup a Private Cloud.

Setting up OpenStack consists of the following steps:
  1. Create a VM Template.
  2. Create OpenStack Provider.
  3. Create OpenStack Host Group.

Create a VM Template

Create and launch a Linux VM using OpenStack. Connect to the VM and set up the Nirmata agent by running the following command:

sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud openstack

Now you can create a template for the VM using OpenStack Horizon. This template should be selected when creating a host group for your OpenStack provider.

Create OpenStack Provider

Create an OpenStack provider by providing Keystone identity service URL (https://serviceaddress:5000/v2.0/), project name and the credentials:

_images/create-openstack-provider-1.png

After entering the credentials, validate access to your cloud provider before closing the wizard:

_images/create-openstack-provider-2.png

Create OpenStack Host Group

Create a host group by selecting the OpenStack provider, specifying the number of desired hosts and selecting the template created earlier. Provide the resource information such as flavor, keypair, network and security group. You can also update the Resource Selection Rule for this Host Group.

_images/create-openstack-hg-0.png _images/create-vsphere-hg-1.png

Once this is done, Nirmata will create and set up VMs. Once the VMs start, they will connect to Nirmata SaaS. Now you can create applications and deploy them to your OpenStack resources.

Other Cloud Providers & Direct Connect Hosts

You can bring any (public or private cloud) server into Nirmata using the ‘Direct Connect’ cloud provider type. In this type of deployment you control which servers are made available in a Host Group. Auto Scaling and Auto Recovery of hosts is not enabled for this provider type.

To set up, create a Host Group of type ‘Direct Connect’ by navigating to Host Groups -> Direct Connect -> Add Host Group. Give your Host Group a name and drill down into its details page. This will display a message as follows:

To connect a Host, run the Host Agent with Host Group ID: 9k1617f2-932d-4918-a6f8-e9863a27649f

To complete the setup, launch the Nirmata agent passing in the provider type and Host Group ID:

sudo curl -sSL http://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud other --hostgroup <HostGroupId>

        where HostGroupId is the unique Host Group ID for your Host Group

If your server is a Virtual Machine, you can also save it as a template or image to create additional Nirmata Host instances from it.

Note: Remove the file: /usr/share/nirmata/conf/host_agent.id and the directory: /opt/nirmata/db/ prior to saving the VM as a template.
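The cleanup steps in the note can be collected into a small function (clean_agent_identity and its ROOT prefix are hypothetical, added only so the commands can be rehearsed safely outside a real host; on the VM itself, run the two rm commands as root with no prefix):

```shell
# Sketch of the template-prep cleanup from the note above.
clean_agent_identity() {
  ROOT="${1:-}"   # hypothetical prefix for safe rehearsal; empty on a real host
  rm -f "$ROOT/usr/share/nirmata/conf/host_agent.id"
  rm -rf "$ROOT/opt/nirmata/db"
}
```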

Bare Metal Servers

You can use the Other Cloud Providers & Direct Connect Hosts option to configure Host Groups for bare metal (physical) servers in Nirmata.

Private Cloud

Nirmata can securely manage your VMware and OpenStack cloud resources, and Docker Image Registries, in your Data Center. To connect your Private Cloud, you will need to run the Nirmata Private Cloud Agent, on a system within your Data Center that has connectivity to your cloud management system (e.g. VMware’s vCenter) and/or your private Docker Image Registry. Once the Nirmata Private Cloud Agent is connected, you can then provision Cloud Providers and Image Registries and select the appropriate private cloud for these systems.

Here are the steps to set up a Private Cloud:

  1. In Nirmata go to Settings -> Private cloud, select the option to add a private cloud, and provide a unique name.
_images/private-cloud-setup.png
  2. Set up a system in your Data Center for the Nirmata Private Cloud agent, and run the command to install the agent, using the unique ID for your Private Cloud:

    curl -sSL http://www.nirmata.io/nirmata-private-cloud-agent/setup-nirmata-private-cloud-agent.sh | sudo sh -s b71025b0-068f-40a1-8804-f03e52c598db
    

Once the Private Cloud is connected, you can select it when creating an Image Registry, a VMware vSphere Cloud Provider, or an OpenStack Cloud Provider. Here is an example:

_images/private-image-registry.png

Monitoring Cloud Resources

You can monitor your host groups by clicking on the host group name. You should see a list of all the hosts in your host group. The State column indicates whether the host is connected to Nirmata SaaS. If a host is not connected, follow the instructions in the host group creation section for your cloud provider. Containers will not be deployed to hosts that are not connected.

_images/host-monitoring-1.png

For each host, you can view additional information such as average CPU utilization, average memory utilization, and per-container CPU and memory utilization. You can also start and stop containers and view container logs.

_images/host-monitoring-2.png _images/host-monitoring-3.png

Applications

Create an Application

An Application is composed of one or more Services. The number of services depends on your application architecture. For example, some 3-tier applications may have a few services, or even a single service, whereas Microservices-style applications may have hundreds of services.

To create an Application, you will need to provide a name for the application and an optional description. You can then add services to your application.

You can also define dependencies between applications. If Application A depends on Application B, then services from Application A can talk to services from Application B. You can control precisely which of the services can communicate with each other by specifying an Application Routing Policy. Defining dependencies between applications can be useful when sharing common services, such as a database, between multiple applications.

Service Affinities

Service Affinity rules are used to manage how services are placed in relation to other services. Multiple affinity rules can be defined for each application. They define whether services must be grouped together on the same host or deployed on separate hosts. Affinity rules are optional. When no affinity rule is specified, services are placed on hosts solely based on the placement type specified in the resource selection policy. There are three affinity types supported:

Affinity Type Description
Same Host Services must be placed on the same host. The selected host can be shared with other services.
Unique Host Services must be placed on the same host. The selected host cannot be shared with other services.
Different Hosts Services must be placed on different hosts. The selected hosts can be shared with other services.

Affinity rules take precedence over the placement type specified in a resource selection rule. A given service can only appear in one rule of type “Same Host” or “Unique Host”.
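The constraint that a service appears in at most one “Same Host” or “Unique Host” rule can be illustrated with a short validation sketch. The rule dictionaries below are hypothetical shapes chosen for illustration, not the Nirmata API:

```python
def validate_affinity_rules(rules):
    """Return services that appear in more than one grouping rule.

    `rules` is a list of {"type": str, "services": [names]} dicts.
    Only "Same Host" and "Unique Host" rules are checked; a service
    may freely appear in multiple "Different Hosts" rules.
    """
    seen = set()
    duplicates = []
    for rule in rules:
        if rule["type"] in ("Same Host", "Unique Host"):
            for service in rule["services"]:
                if service in seen:
                    duplicates.append(service)
                seen.add(service)
    return duplicates
```

An empty result means the rule set satisfies the constraint.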

Creating Affinity rules

To create an affinity rule, select the application to which the rule applies, the list of services, the list of environment types applicable for this rule and finally the affinity type.

_images/create-affinity-rule.png

Updating Affinity rules

Affinity rules can be updated after being created. The new definition for a rule will take effect only when a new environment is created or when an existing environment is scaled up.

_images/edit-affinity-rule.png

Affinity rules Example

_images/affinity-rules-example.png

In this example, the first rule specifies that the deals, ratings, wishlist, and loyalty services must always be placed on the same host when the application is deployed in a staging environment or in a production environment.

The second rule specifies that payment, orders and customer services must be placed on the same host when the application is deployed in a staging environment or in a production environment.

The third and fourth rules specify that the shopme-gateway services should never be placed on the host running either the deals service or the payment service.

Finally the fifth rule specifies that when the application is deployed in a sandbox environment, all services must be placed on the same host.

Scaling & Recovery

Scaling rules manage how many instances of each service should be deployed in an environment, and if auto-recovery is enabled for the services.

Service scaling rules can be specified when an environment is created, or directly as part of an application’s blueprint. When added to an application blueprint, scaling rules are used as the default scaling rules when creating new environments. A scaling rule specifies the number of service instances to be deployed. The system will guarantee that the desired number of service instances is running at all times. If a service instance dies or is shut down by a user, the system will automatically start a new one.

_images/create-scaling-rule.png

In this example, the scaling rule specifies that when the application is deployed in a production environment, the wishlist and catalog services must be deployed with three service instances each.

Service Routing

By default, an Application has one routing policy defining how the application services can communicate with each other. In the example below we specified that all the services from the shopme application can talk to each other except for the orders service which cannot communicate with the ratings service:

_images/application-routing-policy-1.png

If your application depends on other applications, then one Routing Policy per required Application is added to your application definition. In the example below, the shopme application depends on MongoDB and ElasticSearch. The shopme application has a total of three Routing Policies: one for its own services, one controlling the communication between the shopme application and the Elasticsearch cluster, and another one defining the communication between shopme and MongoDB.

This picture shows the details of the Shopme/Elasticsearch Routing Policy. None of the shopme application services can talk to elasticsearch, except for orders and customers:

_images/application-routing-policy-2.png

Import or Export an Application

You can export an application blueprint as a JSON file. This file can then be shared, versioned, or archived in any tool of your choice:

_images/export-application.png

You can also import a previously exported application blueprint.

Services

Creating Services

You can create a new Service by adding it to an application. You can also create services, or applications with a single service, from an existing Docker image, for example from Docker Hub.

Add a Service to an Application

To add a Service to an Application, click on the Application name to go to Application details and select the + icon.

_images/create-service-wizard-2.png

The Wizard will guide you through the following steps:

Service

Specify the service name and select a Container Type:

Field Required? Description
Name required a unique name for the Service
Type required the Container Type to use for the Service
Depends on optional other services that this Service depends on at runtime. Also see: Service Dependency Injection.

Image

Select or enter an image path, and optionally the version, for the Service:

Field Required? Description
Image Registry optional the Docker Registry to pull the image from
Image Repository required the full image path, including the registry URL
Tag optional the image tag. The ‘latest’ tag is used if not specified

Run Settings (Optional)

Here, you can manage the runtime command line, host name, and environment details for your container:

Field Required? Description
Run Command optional the command line to invoke in the container
Hostname optional the host name to use for the container
Environment Variables optional environment variables to set in the container. See Service Dependency Injection.

Networking

On this page, you can choose how your Service will communicate with other services. You can enable Nirmata Service Networking, which provides seamless Service Naming, Registration and Discovery, Service Load Balancing, and Customizable Service Routing, or you can choose to manage your own networking.

Field Required? Description
Service Networking required enable, or disable, Nirmata Service Networking. See Service Naming, Registration and Discovery, Dynamic Load Balancing, and Service Gateway for more information on Nirmata’s Service Networking features.
Service Ports optional configure which ports are exposed on the container and host. Also see Service Port Management
Network Mode optional Select the Docker Network mode.
DNS optional Configure one or more DNS servers for the container

Routes (Optional)

Field Required? Description
Routes optional Configure URL or DNS routes, if your service is reachable via the Nirmata Gateway Service. See Service Gateway

Health Check (Optional)

Field Required? Description
Type required Type of health check: TCP, HTTP, HTTPS or NONE (No health check)
Endpoint required Endpoint/port for the health check.
Path optional Path/URL to query for the health check.
Start Delay required Time in seconds after which health check should be started.
Interval required Time in seconds between health checks.
Timeout required Health check timeout when the service doesn’t respond
Failure threshold required Number of health check attempts after which a service should be marked DOWN
Success threshold required Number of health check attempts after which a service should be marked UP

Volumes (Optional)

If your Service requires shared or persistent storage, you can map host file system volumes to the container.

Field Required? Description
Volumes optional configure the Host volume paths to map to the Service container

Advanced (Optional)

Here, you can configure additional advanced settings for your Service container:

Field Required? Description
Privileged Mode optional select if your Service needs access to host devices. Warning: enabling this mode is not a security best practice.

Environment Variables Injection

Several environment variables are injected by default into every container. These environment variables provide information regarding the context in which the container is running. You can retrieve the value of these environment variables from your code if necessary.

Environment Variable Description
NIRMATA_APPLICATION_NAME The name you gave to your application
NIRMATA_APPLICATION_ID A unique ID generated by Nirmata identifying your application. This ID is used in the Nirmata REST APIs.
NIRMATA_ENVIRONMENT_NAME The environment name where the application is being deployed
NIRMATA_ENVIRONMENT_ID A unique ID generated by Nirmata identifying your environment. This ID is used in the Nirmata REST APIs.
NIRMATA_ENVIRONMENT_TYPE_NAME The name of the environment type use to deploy the environment.
NIRMATA_CONTAINER_TYPE_NAME Name of the container type associated to this service instance.
NIRMATA_CLOUD_PROVIDER_NAME The name of the Cloud Provider used to start the container
NIRMATA_CLOUD_PROVIDER_ID Unique ID identifying the Cloud Provider used to start the container
NIRMATA_HOST_GROUP_NAME Name of the Host Group selected to start the container
NIRMATA_HOST_GROUP_ID Unique ID of the Host Group selected to start the container
NIRMATA_HOST_ADDRESS IP address of the host selected to run the container
NIRMATA_HOST_NAME Name of the host selected to run the container
NIRMATA_HOST_ID Unique ID identifying the host selected to run the container
NIRMATA_SERVICE_INSTANCE_ID Unique ID identifying the service instance associated to the container
NIRMATA_SERVICE_NAME The name of the service being instantiated
NIRMATA_SERVICE_ID Unique ID identifying the service being instantiated
NIRMATA_SERVICE_VERSION docker tag of the service being instantiated
NIRMATA_SERVICE_PORTS List of ports exposed by this container. The syntax is: <port-type>:<host-port1>:<container-port1>, ... , <port-type>:<host-portN>:<container-portN> where port type is either TCP or UDP. Example: “TCP:8081:80,TCP:8082:80”
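As a sketch of how an application might consume this variable, the port list can be parsed as follows (illustrative code, not part of Nirmata):

```python
def parse_service_ports(value):
    """Parse a NIRMATA_SERVICE_PORTS value into a list of port mappings.

    Each comma-separated entry has the form
    <port-type>:<host-port>:<container-port>, where port-type is TCP or UDP.
    """
    ports = []
    for entry in value.split(","):
        port_type, host_port, container_port = entry.strip().split(":")
        ports.append({
            "type": port_type,
            "hostPort": int(host_port),
            "containerPort": int(container_port),
        })
    return ports

# parse_service_ports("TCP:8081:80,TCP:8082:80") yields two TCP mappings.
```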

Service Dependency Injection

You may need services to know information about other services. This can be achieved using the Nirmata REST API with the Service, or by configuring a dependency across the services, and then injecting service properties such as hostname, IP addresses and port values.

Service Dependency injection is supported within the following fields:

  • Run Command
  • Environment Variables
  • Volume Mappings

There are three supported syntaxes:

Syntax 1: {property}

The list of properties supported with syntax 1 is:

  • NIRMATA_APPLICATION_NAME
  • NIRMATA_APPLICATION_ID
  • NIRMATA_ENVIRONMENT_NAME
  • NIRMATA_ENVIRONMENT_ID
  • NIRMATA_ENVIRONMENT_TYPE_NAME
  • NIRMATA_CONTAINER_TYPE_NAME
  • NIRMATA_SERVICE_NAME
  • NIRMATA_SERVICE_ID
  • NIRMATA_SERVICE_INSTANCE_ID
  • NIRMATA_SERVICE_INSTANCE_VERSION
  • NIRMATA_CONTAINER_NAME
  • NIRMATA_HOST_ADDRESS
  • NIRMATA_HOST_PUBLIC_ADDRESS
  • NIRMATA_HOST_PRIVATE_ADDRESS
  • NIRMATA_HOST_NAME
  • NIRMATA_HOST_DNS_NAME
  • NIRMATA_HOST_PUBLIC_DNS_NAME
  • NIRMATA_HOST_PRIVATE_DNS_NAME

This form of dependency injection is used to inject parameters related to the environment (both logical and physical) where the service is going to run.

Example

_images/dependency-injection-form-1.png
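As a sketch, syntax 1 behaves like simple template substitution. The helper function and the property values below are illustrative, not part of Nirmata:

```python
import re

def inject_properties(template, properties):
    """Replace {PROPERTY} placeholders with values from a property map.

    Unknown placeholders are left untouched.
    """
    def replace(match):
        return str(properties.get(match.group(1), match.group(0)))
    return re.sub(r"\{([A-Z_]+)\}", replace, template)

cmd = inject_properties(
    "--service {NIRMATA_SERVICE_NAME} --env {NIRMATA_ENVIRONMENT_NAME}",
    {"NIRMATA_SERVICE_NAME": "catalog", "NIRMATA_ENVIRONMENT_NAME": "staging"},
)
# cmd == "--service catalog --env staging"
```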

Syntax 2: {<service-name>.property}

Where <service-name> is the name of a service required by the service being instantiated. When defining the blueprint of your service you must add <service-name> to the list of required services (“Depends On” field).

The list of properties supported with syntax 2 is:

  • NIRMATA_SERVICE_HOST_NAME
  • NIRMATA_SERVICE_HOST_DNS_NAME
  • NIRMATA_SERVICE_HOST_PUBLIC_DNS_NAME
  • NIRMATA_SERVICE_HOST_PRIVATE_DNS_NAME
  • NIRMATA_SERVICE_ADDRESS
  • NIRMATA_SERVICE_PUBLIC_ADDRESS
  • NIRMATA_SERVICE_PRIVATE_ADDRESS

This form of dependency injection is used to inject runtime parameters from one service into another.

Example

_images/dependency-injection-form-2.png

Syntax 3: {<service-name>.NIRMATA_SERVICE_PORT.<port-name>}

This form of dependency injection is used to inject a port value from one service into another. <service-name> must be defined as a required service and <port-name> must be defined in the blueprint of <service-name>.

Example

First we define a management port in the catalog service. In this particular case, the host port is set to 0 to indicate that the port must be allocated dynamically.

_images/dependency-injection-form-3-1.png

Then we can define that the order service depends on the catalog service and we inject the value of catalog management port into the order service.

_images/dependency-injection-form-3-2.png

Service Port Management

The host port can be left blank. In this case, the host port is allocated dynamically within the dynamic port range (49152-65535).

Ports exposed by a service can have an optional name that is used to identify the port, if this port must be injected into another service.

Service Health Checks

You can optionally configure Service Health Checks for a Service. Nirmata supports TCP, HTTP, and HTTPS health checks. The health check is performed after the application container is running, and after a configurable delay.

_images/service-health.png
Field Description
Type The type of check (TCP, HTTP, or HTTPS) to perform.
Endpoint The Service port to perform the check against
Path The URL path to use (required for HTTP and HTTPS checks). An HTTP 2xx code is considered successful.
Start Delay The number of seconds to wait after the container is started, before performing health checks.
Interval The number of seconds between health checks.
Timeout The TCP connection timeout.
Failure Threshold The number of allowed failures, before a service is set to ‘Failed’
Success Threshold The number of required successes, before a service is set to ‘Running’.

When Service Health checks are enabled, the service’s health state is used to determine if a Service Instance should be marked as running or not. This can impact overall deployment times, especially when Service dependencies are configured.
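The threshold behavior can be sketched as a simple state machine over consecutive check results. This is a minimal illustration assuming "consecutive" semantics for the failure and success thresholds, not Nirmata's actual implementation:

```python
def evaluate_health(results, failure_threshold, success_threshold):
    """Track consecutive check results and return the resulting state.

    `results` is an ordered list of booleans (True = check passed).
    The service is marked DOWN after `failure_threshold` consecutive
    failures, and UP after `success_threshold` consecutive successes.
    """
    state = "UNKNOWN"
    consecutive_ok = consecutive_fail = 0
    for ok in results:
        if ok:
            consecutive_ok += 1
            consecutive_fail = 0
            if consecutive_ok >= success_threshold:
                state = "UP"
        else:
            consecutive_fail += 1
            consecutive_ok = 0
            if consecutive_fail >= failure_threshold:
                state = "DOWN"
    return state
```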

Cluster Services

When defining a Service, you can check the “Enable Cluster” flag to specify that this service operates as a cluster.

_images/cluster-service-flag.png

A Cluster Service differs from a regular Service in several ways:

  • Additional environment variables are injected into each container that is part of the cluster. These environment variables provide details about the other members in the cluster.
  • When a member is restarted, it is always restarted on the same host. This is done to make sure environment variables injected into other members remain valid.
  • When a Service depends on a Cluster Service, then its service instances are only started once all the members in the cluster are up and running.
  • Scaling up and scaling down a cluster may require a manual restart of the cluster node in order to re-generate the right configuration files.

Cluster Environment Variables

There are 3 additional environment variables injected into containers that are part of a cluster.

Variable Description
NIRMATA_CLUSTER_NODE_COUNT Indicates the number of nodes in the cluster
NIRMATA_CLUSTER_NODE_ID ID assigned to this node. Numbering starts from 1.
NIRMATA_CLUSTER_INFO JSON formatted string. It provides details about the IP and ports of all the nodes in the cluster.

The syntax for the NIRMATA_CLUSTER_INFO environment variable is:

[
        {
                "nodeId": <integer>,
                "ipAddress": <string>,
                "ports": [
                                { "portName": <string>, "portType": <"TCP"|"UDP"|"HTTP"|"HTTPS">, "hostPort": <integer>, "containerPort": <integer> }
                ]
        }
]

These environment variables can be used to generate the configuration files of your cluster service.
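For example, a startup script could parse NIRMATA_CLUSTER_INFO to build a peer list for a configuration file. The node addresses, port name, and port numbers below are hypothetical values for illustration:

```python
import json

# A hypothetical NIRMATA_CLUSTER_INFO value for a 2-node cluster
cluster_info = json.loads("""
[
  {"nodeId": 1, "ipAddress": "10.0.0.5",
   "ports": [{"portName": "peer", "portType": "TCP",
              "hostPort": 49201, "containerPort": 2888}]},
  {"nodeId": 2, "ipAddress": "10.0.0.6",
   "ports": [{"portName": "peer", "portType": "TCP",
              "hostPort": 49305, "containerPort": 2888}]}
]
""")

def peer_addresses(nodes, port_name):
    """Build host:port pairs for a named port, e.g. for a config file."""
    addresses = []
    for node in nodes:
        for port in node["ports"]:
            if port["portName"] == port_name:
                addresses.append("%s:%d" % (node["ipAddress"], port["hostPort"]))
    return addresses

# peer_addresses(cluster_info, "peer") -> ["10.0.0.5:49201", "10.0.0.6:49305"]
```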

When a service depends on a cluster service, a specific environment variable is injected into the dependent service’s container. If your service depends on multiple clusters, then there will be one environment variable injected per required cluster. The name of the environment variable is formatted using the name of the cluster service:

NIRMATA_CLUSTER_INFO_<cluster-service-name>

The value of the environment variable is a JSON formatted string. Its format is identical to that of NIRMATA_CLUSTER_INFO. The dependent service instances will only be started once all the required cluster members are fully deployed.

Scaling Up a Cluster

To add a node to a deployed cluster, you just need to edit the scaling rule of the cluster service and specify the total number of members you need.

If your cluster service uses static host ports, you must make sure that you have enough hosts available in your host group to avoid port conflicts. It is possible to deploy cluster services on fewer hosts than what you have specified in the scaling rule. In this case, you must use dynamic host ports in the service blueprint.

The procedure used to scale up a cluster varies based on the type of cluster you are deploying. Some clusters, like older versions of Zookeeper, require changing the configuration of existing nodes when a new node is added. Other clusters, such as MongoDB, do not require restarting the nodes to expand the cluster.

You can use the restart action in the service instance pull-down menu to restart a node. A cluster service instance is always restarted on the same host. The environment variables injected into the new container reflect the new configuration and size of the cluster.

Scaling Down a Cluster

To scale down a cluster, just edit the scaling rule of the cluster service and decrease the number of service instances required.

Depending on the type of cluster you are operating, you may have to restart the remaining nodes of the cluster.

Service Networking

Nirmata provides advanced features to help interconnect and network your applications. With Nirmata, you retain full control of your host networking and security. For example, your hosts can be in different zones or subnets. Nirmata only requires that hosts used by service instances with interconnectivity requirements have a Layer 3 connection.

Nirmata uses standard Docker networking capabilities and provides:
  • Service Naming, Registration and Discovery
  • Dynamic load balancing
  • Service Gateway functions
  • Programmable Routing

Each of these features is described below:

Service Naming, Registration and Discovery

Services within an application will often need to communicate with other services in the same application. Traditionally this requires complex multi-host and multi-device configuration, and is dependent on cloud infrastructure.

With Nirmata, application services can easily connect with each other, without requiring code changes or complex configuration. Best of all, the Nirmata solution works on any public or private cloud and fully decouples your application from the underlying cloud infrastructure.

Each Service in Nirmata has a DNS-compliant name. When Nirmata’s Service Networking features are enabled, Nirmata provides seamless registration and discovery for these services. As service instances are deployed, Nirmata automatically tracks the runtime data for each service and populates this in a distributed service registry that is managed across hosts, within the Nirmata Host Agent container.

Enabling Service Networking is easy – it’s a single checkbox!

_images/service-networking-setup.png

Services are named using the following scheme:

<service>.<application>.local

where:
service = the Service Name
application = the Application Name

By default, the service names are resolved within an environment. If you want to reach a service in another environment, you can use following form:

<service>.<environment>.<application>.local

where:
service = the Service name
environment = the Environment name
application = the Application name

When Service Networking is enabled, Nirmata will resolve DNS and TCP requests originating from the application container. Only DNS requests that end with the “.local” domain, and TCP requests for declared Service Ports, are handled by Nirmata; all other requests are propagated upstream. Your application services can now simply use well-known names, such as myservice.newapp.local, to interconnect.
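The naming scheme above can be sketched as a small helper (illustrative code, not a Nirmata API):

```python
def service_dns_name(service, application, environment=None):
    """Build a Nirmata-style service name ending in .local.

    Without an environment, names resolve within the current
    environment; with one, they address a service in another
    environment.
    """
    if environment:
        return "%s.%s.%s.local" % (service, environment, application)
    return "%s.%s.local" % (service, application)

# service_dns_name("catalog", "shopme") == "catalog.shopme.local"
# service_dns_name("catalog", "shopme", "staging") == "catalog.staging.shopme.local"
```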

Dynamic Load Balancing

In a Microservices style application, each Service is designed to be elastic, and can have several instances running within a single environment.

As discussed above, Nirmata automatically resolves Service names to an IP address and port. Service Discovery also has built-in load balancing, to automatically spread requests across available Service Instances.

For example, the orders service can connect to the catalog service using the well-known name: catalog.shopme.local. As shown in the CMD shell output, an HTTP/S request can simply be made as “https://catalog.shopme.local” and Nirmata will dynamically resolve the request to an IP address and port for the service. If multiple instances of the catalog service are running, Nirmata will automatically load balance requests across them, and will keep track of instances that are added, deleted, or become unreachable. The service load-balancing is also fully integrated with service health checks, for maximum resiliency.

_images/service-networking-load-balancing.png

Service Gateway

The Nirmata Service Gateway enables routing of client requests based on the HTTP content or DNS names, to application service instances. The Service Gateway is intended to be used as an entry point to a Microservices style application. Most load balancers provide VM or Host based load balancing. A Service Gateway solves a slightly different problem: with a Microservices style application, an external client must connect to a backend service. However, requests from the client may need to be routed to different services within an application. A Service Gateway can use information in the HTTP content to determine which backend application service should be targeted. Once the application service is selected, the Service Gateway chooses an available instance and resolves its IP address and port.

For example, in the figure below the application has 3 services and each service has multiple instances. The Service Gateway acts as a single client endpoint for all front-end services. This allows a single client endpoint to dynamically address multiple services, on the same connection, by using HTTP information such as the URL path.

_images/service-gateway-example.png

Here is the corresponding Service Gateway configuration in Nirmata:

_images/service-gateway-setup.png

NOTE: A Service Gateway is not required for load-balancing across services in the same environment. Nirmata already provides service discovery and load-balancing across all services deployed using Nirmata. The Service Gateway is intended for external client applications that are not deployed by Nirmata and need access to your application services.

Deploying a Service Gateway

The Nirmata Service Gateway is deployed as part of your application. You can add a Service Gateway to your Application Blueprint:

_images/service-gateway-add.png

A Service Gateway will typically require public or externally routed IP Addresses. In this case, you can define a new Container Type, for example Gateway, in the Policies section, and then configure a Resource Selection Rule to place Service Gateway instances on Hosts that are created with a public IP address:

_images/service-gateway-resource-selection-rule.png

You can configure a Service Gateway when the application is deployed (in the Create Environment Wizard), or within an Environment’s view. The available request routing options are discussed below:

TCP Routing

TCP routing allows the mapping of a well known port to a backend service. For example:

:7000  ---->  catalog:80

This route type is useful for custom TCP protocols, stateful protocols like Websockets, and for web applications that perform redirects to internal endpoints. To configure port-based routing, the Service Gateway’s application blueprint must expose the ports that are routed. The Sticky Sessions and Target URL configuration options are not applicable for TCP routing.

URL Routing

URL routing allows a single client to call multiple backend services, using the same HTTP/S connection. For example, a Web UI may require access to different backend application services. When URL Routing is used, each HTTP request is inspected and then routed based on the URL Path.

You can specify a different Target URL for your service. The Nirmata Service Gateway will rewrite your URL as follows:

/route/(.*)  --->   /targetUrl/$1

The path elements after the route will be captured and appended to the Target URL. If the Target URL is empty the entire URL Path in the HTTP Request will be sent to the backend service.
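The rewrite rule can be sketched with a regular expression. The route and target values below are hypothetical examples:

```python
import re

def rewrite_url(path, route, target_url):
    """Rewrite /<route>/(.*) to /<target_url>/$1.

    An empty target URL passes the original path through unchanged.
    """
    if not target_url:
        return path
    match = re.match("^/%s/(.*)$" % route, path)
    if match is None:
        return path
    return "/%s/%s" % (target_url, match.group(1))

# rewrite_url("/catalog/items/42", "catalog", "api/v1") == "/api/v1/items/42"
```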

DNS Routing

DNS routing allows the mapping of a DNS name to a backend service. For example:

catalog.shopping.com  ---->  catalog:8080

DNS routing is intended to be used for load-balancing HTTP/S and Websocket (WS/S) connections to an available instance of a single service. Nirmata will look at the Host name in the HTTP/S request, resolve the service specified in the route, and connect the client to the service. Once connected, all subsequent requests will be sent to the same service instance and no request routing is performed.

Sticky Sessions

You can optionally configure a URL or DNS Route to be sticky. This means that the Service Gateway will select the same backend Service Instance for a client. This option is useful when services are not stateless and maintain session state.

When the ‘Sticky’ option is enabled, the Service Gateway adds an HTTP cookie with the service address to the response headers. This cookie is then used when subsequent requests are made from the client. If the service instance specified in the cookie is not available, another instance will be automatically selected.

Managing HTTPS

The Nirmata Service Gateway can be used to terminate, or proxy, SSL.

To configure SSL for the Nirmata Service Gateway, you will need to provide a TLS certificate and a key. The certificate and the key can be uploaded to the gateway container by mounting the directory that contains the certificate and key files. You will also need to specify the environment variables that provide the certificate and key file paths to the gateway service.

In Nirmata Service Gateway configuration:

  • Mount the volume, e.g. /usr/share/nirmata/ssl:/usr/share/nirmata/ssl
  • Add the environment variables in the table below
  • Add an HTTPS Port

Environment Variable Description
NIRMATA_GW_TLS_CERT Path for the TLS certificate, e.g. /usr/share/nirmata/ssl/gateway.cer
NIRMATA_GW_TLS_KEY Path for your TLS key, e.g. /usr/share/nirmata/ssl/gateway.key

Note: You will need to ensure that the certificate and the key file are placed at the specified location on the host that is running the gateway container.

If SSL proxying is used, the backend Services must be configured with a valid SSL certificate (self-signed and expired certificates will be rejected).

HTTP Redirect

You can optionally redirect all HTTP connections to the HTTPS port. You first need to configure the HTTPS as specified above. To redirect HTTP, you need to configure an HTTP and HTTPS port in the Service Gateway and then enable the redirection:

_images/service-gateway-redirect-http.png

Programmable Service Routing

When you create an Environment you can set the default routing policy to either Allow All or Deny All service traffic, and then customize which services can communicate to each other. The routing rules can be configured using service names and tags, allowing control over which versions of your application services can communicate with each other.

Nirmata allows you to control routes across services, and also from the Nirmata Gateway Service to other services.

In the example below, we have configured 2 deny rules: one to disallow traffic from catalog to orders, and the other to disallow traffic from the gateway to orders. Note that we have not chosen a tag, but could use one to control traffic to different versions of services in the environment:

_images/programmable-routing.png

Policies

Nirmata offers a rich set of policies that map applications to resources, and control how applications are placed and managed. This abstraction allows proper separation of application definitions from the infrastructure.

Here is a summary of the available policies. Each one of these is described in more detail in the sections below.

Policy Description
Resource Selection Controls which Host Groups and Placement types are used for Applications, Services, and Environments
Host Scaling Defines the conditions upon which hosts must be created or deleted automatically
Environment Types Defines available environment types with optional image labels and update policies
Container Types Defines available container types with optional CPU and memory limits

Resource Selection

Resource selection rules select which Host Group(s) are used for placement, as well as the type of placement to be used (sequential versus available memory). Rules are selected based on the container type associated with the Service being placed and the type of the Environment being created. If multiple rules match these two criteria, the following steps are applied to select a host:

  1. Rules are sorted by rank, in ascending order i.e. a lower number is considered the higher rank
  2. Host Groups from all rules with the same rank are treated as a pool, and the best Host is selected based on the placement type
  3. If no hosts are available in the highest rank, the next rank is processed

A Resource Selection rule can also specify a minimum. A minimum indicates how many instances of a Service must be placed in the Host Group before other Host Groups from the same rule, or other rules of the same rank, are used. This is useful for managing availability across regions. An AWS-specific use case for the minimum is when spot instances are used, in which case you can specify a minimum count for rules that select Host Groups with on-demand or dedicated instances, to guarantee service availability when spot instances are not available.

There are two types of placement supported:

Placement Description
Available Memory The host with the highest amount of available memory is selected first.
Sequential The host with the smallest amount of available memory is selected first.

Sequential placement is used to minimize the number of hosts required to create environments. Available Memory placement is used to take full advantage of all the hosts available to the user.

The placement type is not strongly enforced, as other placement criteria can take precedence. For instance, port collision avoidance and affinity rules may dictate selecting a different host than the one chosen based on the placement type.
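Ignoring those overriding criteria, the rank-based selection steps above can be sketched as follows, using the Available Memory placement type. The rule and host dictionary shapes are hypothetical, chosen for illustration:

```python
def select_host(rules, hosts_by_group):
    """Pick a host by processing rules rank by rank (lower number first).

    `rules` is a list of {"rank": int, "hostGroups": [names]} dicts;
    `hosts_by_group` maps a host group name to its candidate hosts,
    each a {"name": str, "freeMemory": int} dict. Host groups sharing
    a rank are pooled, and the host with the most free memory wins
    (the "Available Memory" placement type). If no hosts are available
    in the highest rank, the next rank is processed.
    """
    for rank in sorted({r["rank"] for r in rules}):
        pool = []
        for rule in rules:
            if rule["rank"] == rank:
                for group in rule["hostGroups"]:
                    pool.extend(hosts_by_group.get(group, []))
        if pool:
            return max(pool, key=lambda h: h["freeMemory"])["name"]
    return None
```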

Creating Resource Selection Rule

A default Resource Selection Rule that matches all containers and all environment types is created in a new account. You can create additional rules at any time:

_images/resource-selection-rules.png _images/create-resource-selection-rule.png

Updating Resource Selection Rule

Once created, resource selection rules can be edited. The modified rule will only take effect when a new environment is created, or when new service instances are deployed in an existing environment.

Host Scaling

Host Scaling is one of the features you can use to optimize your cloud resource utilization. Host Scaling Rules can be defined to control when new Hosts must be created due to an increase in Host Group utilization. A Host Scaling Rule also defines when empty hosts can be removed.

The Host Scaling feature works with all Cloud Provider types except Other, AWS Auto Scaling Groups, and AWS Spot Instances. You can use the host auto-scaling feature on AWS when using a Launch Configuration.

Creating a Host Scaling Rule

The Host Scaling wizard has 3 sections. The first section is used to define the general parameters.

_images/create-host-scaling-rule-general-setting.png

The first parameter is the Auto Scaling flag. You can temporarily disable a specific rule by un-checking the Auto-Scaling flag. When the rule is disabled, none of its parameters apply anymore, not even the minimum or maximum number of hosts.

The second parameter is the name of the rule.

The last parameter is the list of Host Groups to which this rule applies. You must specify at least one Host Group.

The next section of the wizard is used to specify when new hosts must be added to the Host Group.

_images/create-host-scaling-rule-scale-up.png

The first parameter is the scale up Host Group utilization threshold. A new Host is added to the Host Group when the total memory allocated in a Host Group exceeds this threshold.

The second parameter is a cooldown period expressed in seconds. A new Host is added to the Host Group if the utilization threshold is exceeded for at least the time defined by this parameter. If you want to scale up your Host Group as soon as the utilization threshold is exceeded, set this parameter to zero. The cooldown period is used to avoid adding and removing Hosts within a short period of time when the utilization oscillates around the threshold.

The last parameter defines the maximum number of Hosts that can be part of a Host Group. Hosts in failed state or not connected are not taken into account.

The last section of the wizard is used to specify when Hosts must be removed from the Host Group.

_images/create-host-scaling-rule-scale-down.png

The first parameter is the scale down Host Group utilization threshold. An empty Host is removed from the Host Group when the percentage of memory allocated in the Host Group falls below this threshold. An empty host is a host with no managed containers. The scale down threshold must be less than or equal to the scale up threshold.

The second parameter is a cooldown period expressed in seconds. An empty host is removed from the Host Group if the utilization stays below the utilization threshold for at least the time defined by this parameter.

The last parameter defines the minimum number of hosts that can be part of a Host Group. Hosts in failed state or not connected are not taken into account.
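The scale-up and scale-down parameters described above can be summarized in a small decision sketch. This is an illustration of the documented behavior, not Nirmata's implementation; the field names are assumptions:

```python
def scale_decision(util_pct, breach_seconds, rule, host_count, empty_hosts):
    """Return "scale_up", "scale_down", or "hold".

    util_pct: percent of Host Group memory currently allocated.
    breach_seconds: how long util_pct has been past a threshold.
    rule: {"up_threshold": %, "down_threshold": %, "cooldown": seconds,
           "min_hosts": n, "max_hosts": n}.
    empty_hosts: hosts with no managed containers (only these are removed).
    """
    if (util_pct > rule["up_threshold"]
            and breach_seconds >= rule["cooldown"]
            and host_count < rule["max_hosts"]):
        return "scale_up"
    if (util_pct < rule["down_threshold"]
            and breach_seconds >= rule["cooldown"]
            and empty_hosts > 0
            and host_count > rule["min_hosts"]):
        return "scale_down"
    return "hold"
```

Note how the cooldown gates both directions, which is what prevents thrashing when utilization oscillates around a threshold.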

Environment Types

Environment types are used to categorize application environments. An environment type can represent a stage in a code delivery pipeline, e.g. 'Development', 'QA', and 'Test', or can be used to categorize types of customers, e.g. Bronze, Silver, Gold. You can define multiple environment types and then use these types in policies like Resource Selection and Scaling & Recovery.

In a new account, there are four predefined environment types that represent common stages in a code delivery pipeline:

  • Dev: developer sandboxes that can accept any image and promote to the Test environment type
  • Test: test / QA environments that only accept images tagged with the prefix TEST and promote to the Staging environment type
  • Staging: system test environments that only accept images tagged with the prefix STAGE and promote to the Production environment type
  • Production: production environments that only accept images tagged with the prefix PROD
_images/environment-types-table.png

You can customize these types, or delete and define your own environment types.

The optional Accept Tag and Promote Tag fields control how container images are managed across environments. These fields work together with the Update Policy and the Environment’s Promote and Update actions to allow end-to-end automation of your DevOps processes.

Accept Tag

The optional Accept Tag controls which images are accepted into an environment of this type. If not specified, all images are accepted. If specified, only images that are prefixed with the specified tag are accepted.

For example, the predefined Production environment type will only accept tags like PROD_latest or PROD_20150614_101231.
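The Accept Tag check above is a simple prefix match, which can be sketched as:

```python
def is_accepted(image_tag, accept_tag=None):
    """Sketch of Accept Tag semantics: with no Accept Tag configured,
    every image is accepted; otherwise the image tag must start with
    the configured prefix (e.g. PROD)."""
    if not accept_tag:
        return True
    return image_tag.startswith(accept_tag)
```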

Promote Tag

The optional Promote Tag controls how images are promoted out of an environment of this type. If not specified, the Promote action is disabled. If specified, the Promote action will generate image tags that are prefixed with this tag.

For example, when the Promote action is requested for a Service Instance in an environment of the predefined Staging environment type, two new image tags will be created for the Service's image: PROD_latest and PROD_20150615_081054.
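The pair of tags a Promote action generates can be sketched as follows. The timestamp format is inferred from the PROD_20150615_081054 example, so treat it as an assumption:

```python
from datetime import datetime

def promote_tags(promote_tag, now=None):
    """Sketch of Promote-action tag generation: a moving "latest" tag
    plus an immutable timestamped tag, both prefixed with the Promote Tag."""
    now = now or datetime.now()
    return [promote_tag + "_latest",
            promote_tag + now.strftime("_%Y%m%d_%H%M%S")]
```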

Update Policy

The Update Policy determines how new images that are allowed into the environment are handled: either an automated update or a user notification. The Update Policy works in conjunction with the Accept Tag defined for the environment type. If the Environment Type has an Accept Tag defined, only new images tagged as {TAG}_latest will be processed (where {TAG} is the user-specified accept tag). If the Environment Type does not have an Accept Tag, images with the latest tag will be processed.

If you enable auto-restarts, a rolling upgrade will be automatically performed when a new image that is allowed for the environment type is detected. Otherwise, a notification is shown on the Service Instances that require an upgrade.

The Quiet Period controls how many seconds to wait before the upgrade is started, and the Restart Interval controls how many seconds to wait between upgrades of Service Instances.
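The timeline implied by the Quiet Period and Restart Interval can be sketched as below (an illustration only; times are seconds from image detection):

```python
def upgrade_schedule(instance_count, quiet_period, restart_interval):
    """Return the start time of each Service Instance's upgrade:
    the first waits out the quiet period, then instances are restarted
    one by one, spaced by the restart interval."""
    return [quiet_period + i * restart_interval for i in range(instance_count)]
```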

Access Control

You can configure access controls on an Environment Type. Access controls can be used to restrict environment access to a subset of users, and also control which users can create environments of a type. When new environments are created, they inherit the configured access controls from the environment type. If needed, you can also customize access controls in a deployed environment.

The access provided to a user can be of the following types:

  • Read Only Access: the user can view created environments and running services, but cannot modify environment settings or manage the services.
  • Read Write Access: the user can create new environments, view existing environments, modify environment settings, and manage services.

When access control is enabled, you must explicitly add rules for each user role (or individual user) that should be allowed access. Users who do not match the access control rules will not have any access, i.e. they will not see the environment type, will not be able to create new environments of that type, and will not see any existing environments created from the environment type.

If both the user’s role and the user’s email are configured in the access control rules, the rule with the email will be used.
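The precedence rule above (an email rule wins over a role rule) can be sketched as follows; the rule structure is an illustrative assumption:

```python
def resolve_access(user, rules):
    """user: {"email": ..., "role": ...};
    rules: list of {"match": "email"|"role", "value": ..., "access": ...}.
    Returns the access level, or None: no matching rule means no access."""
    email_rule = next((r for r in rules
                       if r["match"] == "email" and r["value"] == user["email"]),
                      None)
    if email_rule:
        return email_rule["access"]
    role_rule = next((r for r in rules
                      if r["match"] == "role" and r["value"] == user["role"]),
                     None)
    return role_rule["access"] if role_rule else None
```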

In the example below, users of type Administrator and Platform have read write access, and users of type DevOps have read only access.

_images/access-controls.png

Container Types

When defining the blueprint of an application, users must associate a container type to each service. For a new account, four container types are available:
  1. 4GB container type
  2. 2GB container type
  3. 1GB container type
  4. 512MB container type

Users can create as many container types as they want. Container Types can optionally specify memory limits and CPU shares.

_images/create-container-type.png

Container types are characterized by the following parameters:

Parameter Description
Name Logical name for this container type
Description Short description specifying the intent of this container type
CPU Shares Number of CPU shares required to run the associated service. Ranges from 1 to 10.
Memory Memory (MB) required to run the associated service.

Environments

Deploy an Application to an Environment

An Environment is a running instance of an Application. You can create multiple Environments from an Application, or from a version of an Application. Each Environment can contain all, or a subset of, the services defined in the Application and each Service can have one or more running instances, based on the scaling policies. Each Service instance is associated with a unique version of a Service.

When an Environment is created, the Application’s Scaling Policy, Routing Policies and its Gateway configuration are copied into the Environment. This allows you to change the Environment runtime policies, without modifying the application blueprint. Since the Environment has a copy of the information, only that Environment is impacted by changes.

To create a new Environment, provide a unique name, select an Environment Type, the Application and version. All other settings are optional.

_images/environment-create.png

The Environment Type and each Services’ Container Type are used to select the Host Group for placing Services.

While creating an Environment, you can optionally select which Services should be deployed, configure the environment dependencies, configure the Environment's Scaling Policy and its Application Gateway Service, configure the Environment's inter-service Routing Policy, or create a Label Selector.

Label Selector

Service Instances are placed on hosts based on the Resource Selection Rules and the Service Affinity Rules that you have configured. You can further control on which host Service Instances are placed by using Labels and Label Selectors. You must first assign Labels to a Host or to a group of Hosts.

_images/host1-label-create.png

Labels are key/value pairs

_images/host1-label-create-2.png

When deploying an environment, you can define a Label Selector. Nirmata orchestration will only use the Hosts matching the Label Selector to run the placement algorithms.

_images/environment1-label-selector-create.png

The Label Selectors defined above indicate that the Customer and Catalog services must be placed on hosts configured with the Label storage-class=huge. The ratings service must be placed on hosts configured with the Label vm-type=public.

The following Label Operators are supported:

  • equals
  • not equals
  • in
  • not in
  • exists

When using the “in” or “not in” operators you can specify one or more values. When using the “exists” operator you don’t have to specify any label value.
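The five operators can be sketched as a single matching function (an illustration of the documented semantics, not Nirmata's code):

```python
def matches(labels, key, op, values=None):
    """labels: a host's key/value labels.
    values: list for "in"/"not in", single-item list for
    "equals"/"not equals", ignored for "exists"."""
    if op == "exists":
        return key in labels
    actual = labels.get(key)
    if op == "equals":
        return actual == values[0]
    if op == "not equals":
        return actual != values[0]
    if op == "in":
        return actual in values
    if op == "not in":
        return actual not in values
    raise ValueError("unsupported operator: " + op)
```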

Label Selectors can also be specified when adding a service to an already deployed environment.

_images/environment1-add-service-label-selector.png

Label Selectors can also be edited once the Environment is deployed:

_images/environment1-label-selector-edit.png

Environment Dependencies

When deploying your environment, you can optionally specify that your environment depends on other environments that are already deployed. To use this feature, you must first define the dependencies at the application level.

In the following example we are deploying the Staging Shopme environment, which relies on the Staging MongoDB environment and the Staging Elasticsearch environment:

_images/environment-dependencies-1.png

A Routing Policy is automatically added for each required environment. The Routing Policy default values are copied from the Application definition. Routing Policies can be edited once the environment is deployed.

Managing Services

Once the services are deployed, you can select an instance and view its logs, status, and also can stop, start, and delete the instance.

_images/environment-manage-service.png

To launch multiple instances of the same service, you can edit the Environment’s Scaling Policy

Edit Scaling Policies

The Environment contains a Scaling Policy, that is copied from the Application, when the Environment is created. You can edit the Scaling Policy of an Environment to change the number of desired instances for a service:

_images/environment-scaling-policy.png

Edit Gateway Routes

If the Nirmata Gateway Service is included in the Application blueprint, each Environment created from the Application will have one or more Gateway Service instances, and you can change the URL or DNS routes to manage traffic flows into the Environment:

_images/environment-gateway.png

Edit Routing Policies

Each Environment has one or more Routing Policies. You can edit the Routing Policy to manage traffic flows across service instances within the environment or across environments.

The Routing Policy defining the traffic flows within the environment is called 'Internal'. The Routing Policies defining the traffic flows with the required environments are named using the name of the required environment. In the example below, the Shopme environment has 3 Routing Policies:

  • Internal: defines the traffic flows within the Shopme environment
  • Staging-elasticsearch: defines the traffic flows between Shopme and the Staging-elasticsearch environment
  • Staging-mongodb: defines the traffic flows between Shopme and the Staging-mongodb environment

_images/environment-routing-policy-1.png

To create a route, you can specify the from and to service names and versions and then can choose to allow or deny communication across those service instances.

_images/environment-routing-policy-2.png

Edit Update Policy

For each environment there is an update policy which allows you to control when service instances are upgraded. If Auto-Restart is enabled, service instances will be automatically restarted when a new image is available for that service. This ensures that your service instances are always using the latest image. You can configure the restart interval and the quiet period to manage the rolling upgrade. For automatic updates to work you will need Docker Hub Integration or CI/CD Integration.

_images/edit-update-policy.png

Deploy a new Version of a Service

You can add new versions of any Service, defined in the Application blueprint, to an Environment. This allows deploying and testing new features, and when used with the Gateway and Routing Policy enables powerful continuous delivery best practices such as canary launches and blue-green deployments.

_images/environment-add-service.png

If all versions of a Service have already been deployed, the dialog will show an error message and prompt you to use the Scaling Policy to manage the service instances.

Environment Access Controls

You can restrict access for an environment, to a role or a set of users. The Environment Access Control policy is inherited from the EnvironmentType. Once an environment is created, you can further customize the access control privileges for that environment, without impacting other environments.

For details on setting up access control, see Access Control.

Monitoring an Environment

Nirmata collects and aggregates several statistics from each Service container. You can view these statistics (aggregated) at the Environment level, or for an individual Service instance:

_images/environment-monitor.png

To view statistics for an individual Service instance, simply click on the Service name in the table:

_images/environment-monitor-service.png

The Environment provides a logical view of an Application instance. You can also view physical host statistics, by drilling down to an individual host (select Host Groups -> <Cloud Provider> -> <Host>).

_images/environment-monitor-host.png

Docker Hub Integration

You can automate the update of your service instances deployed in an environment by setting up Nirmata as a web hook receiver in Docker Hub. To set up Nirmata as a web hook receiver for a repository, configure the below URL in Docker Hub:

https://www.nirmata.io/imageregistry/api/dockerhub/notify/<tenant-id>

You can find the tenant-id on the Settings->Account page in Nirmata web interface.

Once the web hook receiver is configured for a repository, Nirmata will receive an HTTP POST request whenever the repository is updated. When Nirmata receives the POST request, all the environments that contain service instances using the image will be notified. To automatically update your services when an image is updated, you can enable the Update policy for your environment. When the Update policy is enabled, Nirmata will perform a rolling upgrade of your services based on the Update policy settings.

_images/auto-image-update.png

CI/CD Integration

You can also automate the update of your service instances deployed in an environment by integrating Nirmata with your build tools such as Jenkins. To notify Nirmata, you can send an HTTP POST request to the URL:

https://www.nirmata.io/imageregistry/api/build/notify/<tenant-id>

You can find the tenant-id on the Settings->Account page in Nirmata web interface.

In the HTTP POST request, you need to include the following JSON:

{
  "owner": <repository owner>,
  "repository" : <repository name>,
  "user" : <user name>,
  "tag" : <image tag>
}
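Composing this notification from a build script can be sketched as below. The URL format and JSON fields come from the docs above; actually sending the request (e.g. with urllib.request) is left out so the example stays self-contained:

```python
import json

def build_notify_request(tenant_id, owner, repository, user, tag):
    """Return (url, headers, body) for the CI/CD build notification."""
    url = "https://www.nirmata.io/imageregistry/api/build/notify/" + tenant_id
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"owner": owner, "repository": repository,
                       "user": user, "tag": tag})
    return url, headers, body
```

A Jenkins post-build step could pass these three values to any HTTP client.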

When the POST request is received, Nirmata will notify all the environments with service instances that are using this image. Automatic update will be performed if enabled in the Update policy. See Edit Update Policy for more details.

Image Registries

Nirmata supports the following image registries:
  1. Docker Hub
  2. Docker Private Registry
  3. Amazon EC2 Container Registry (ECR)
  4. JFrog Artifactory

Docker Image Registries can be managed using Nirmata. By default, Docker Hub is already configured. Additional public or private registries can also be added.

Docker Hub

Users can search the Docker Hub for images by specifying a keyword. Once an image is found, users can view the details of the image or create a new service with that image.

Docker Private Registry

To securely connect Nirmata with a Docker Registry in your Private Cloud or Data Center, first set up a Private Cloud.

A private registry can be added by specifying the registry URL and the credentials (optional). Once the registry is added, users can view the image repositories in that registry as well as the list of tags for each image repository. Users can also create a new service for an image, view details of a tag or compare two tags.

Amazon ECR

To securely connect Nirmata with Amazon ECR, first set up an AWS Cloud Provider. When setting up the cloud provider, ensure that AmazonEC2ContainerRegistryFullAccess is selected.

_images/ecr-setup.png

Next, add the Image Registry in Nirmata. Enter the name, select Amazon ECR as the provider, enter the location, and select the cloud provider. Ensure that this cloud provider is using an IAM role that has the AmazonEC2ContainerRegistryFullAccess policy enabled.

The registry location can be found on the Amazon ECR page.

_images/ecr-repo-view.png _images/add-ecr.png

Once the repository is added, you can view the images in that registry as well as the tags for each image repository. Now you can select the tag when creating a new service for an image.

JFrog Artifactory

To add Artifactory in Nirmata, go to the Image Registries screen and click on the ‘Add Image Registry…’ button. In the dialog, select JFrog Artifactory as the Registry Provider. Specify the registry name. For Artifactory, the registry name should be same as the registry name displayed in the Artifactory Repository Browser. Also, specify the artifactory URL in the location field along with the username and password. If all the information is correctly specified, you should be able to see a list of all the images stored in the registry.

Create a Service from a Docker Image

To create a new Application, or to add a new Service to an existing Application, you can select an image listed in a registry and select the Create Service option. This will launch a wizard where you can create a new Application or select an existing Application, and then configure the service properties:

_images/image-registry-create-service.png

The remaining steps in the Add Service Wizard are the same as adding a Service to an existing application, and are documented at: Add a Service to an Application.

Example: Deploy Wordpress

Here are the steps to create a Wordpress service based on the jbfink/docker-wordpress image available on Docker index.

  1. Select Image Registries -> Docker Index and search for 'wordpress'. Several wordpress images will be displayed in the table.
  2. Select the 'Create Service...' action for the 'jbfink/docker-wordpress' image.
  3. In the Add Service Wizard, choose to create an Application and specify a name for the new application.
  4. On the Networking page, disable service networking and remove the mapping for HTTPS port. Specify the HTTP host port.
  5. Other sections can be ignored for this example.
  6. Finish the wizard.

You will now see the new application in the Applications section. If you have configured a Host Group, you can also deploy the application to an environment. (See Deploy an Application to an Environment.)

Settings

Users and Roles

An account can have multiple users, and each user has a role that defines what they can see and do. When a new account is created, the first user has an admin role which allows that user to create and manage additional users for the account.

The following user roles are available:

Role Description
admin admin users have full access to the account and can also manage other users and their access.
platform platform users can manage all other resources including Cloud Providers, Host Groups, Policies, Applications, and Environments, but cannot manage users.
devops devops users can manage Applications and Environments. They do not have access to Cloud Providers, Host Groups, and Policies, and cannot manage users.
readonly readonly users can view all data, but cannot create, edit, or delete anything. This role is ideal for system accounts that collect and report data.

REST API

Nirmata provides a REST API for easy integration, using a language of your choice. The API provides support for all operations that can be performed from the Nirmata HTML 5 web application.

API Compatibility

The API is currently in development and is likely to change. Although we will strive to maintain compatibility, some changes may break existing integrations.

API Basics

The Nirmata API model is composed of resources. Each resource type is described by a class and is made up of attributes and relations. Each resource has a modelIndex that indicates its class, and a uri that describes how it can be queried. At runtime, each resource can contain relations to other resources. The relations can be parent-child relations or reference relations.

The Nirmata API is accessible at https://www.nirmata.io/api.

Note: trying the API URL in a browser will return an empty page, as the required HTTP headers are not specified. You can use a REST client, like Postman (http://www.getpostman.com/), to try and learn the API.

Authentication

To authenticate your account, you can use HTTP BASIC authentication or an API Key:

Authorization: Basic <Base64 user:password>

or:

Authorization: NIRMATA-API <your api key>

Since the Nirmata API is only accessible via HTTPS, your credentials are sent over an encrypted connection.

To manage your API Key, login to Nirmata and navigate to Settings -> Account -> Generate API Key. An API key is associated with a User account. When you authenticate an application using the API key, it will get the role and privileges associated with the account.

A best practice recommendation is to create separate accounts for each application, and provide the minimum required role and privileges to the account.
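Building either Authorization header can be sketched as follows (a minimal illustration using only the two schemes documented above):

```python
import base64

def basic_auth_header(user, password):
    """HTTP BASIC authentication: Base64 of "user:password"."""
    token = base64.b64encode((user + ":" + password).encode()).decode()
    return {"Authorization": "Basic " + token}

def api_key_header(api_key):
    """Nirmata API key authentication."""
    return {"Authorization": "NIRMATA-API " + api_key}
```

Either header can then be attached to any request against the API.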

Operations

Operation HTTP Method URI Syntax Description
Create POST /{modelIndex} Creates a new resource. The modelIndex is the resource name.
Create POST /{parent}/{id}/{relation} Creates a new resource as a child of the '{parent}/{id}' resource.
Retrieve GET /{modelIndex} Returns all resources of the type specified by '{modelIndex}'.
Retrieve GET /{modelIndex}/{id} Returns a single resource.
Update PUT /{modelIndex}/{id} Updates a resource.
Delete DELETE /{modelIndex}/{id} Deletes a resource.
Discover OPTIONS / Returns the class definitions for all resources.
Discover OPTIONS /{modelIndex} Returns the class definition for a single resource.
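The operations table can be sketched as a small method/URI mapper. This is an illustrative helper, not part of any Nirmata SDK:

```python
def op_request(op, model_index=None, rid=None, parent=None, relation=None):
    """Map a CRUD/discover operation to an (HTTP method, URI) pair,
    following the URI syntax in the table above.
    parent: optional (parentModelIndex, parentId) tuple for child creation."""
    if op == "create" and parent:
        return ("POST", "/%s/%s/%s" % (parent[0], parent[1], relation))
    if op == "create":
        return ("POST", "/" + model_index)
    if op == "retrieve":
        return ("GET", "/" + model_index + ("/" + rid if rid else ""))
    if op == "update":
        return ("PUT", "/%s/%s" % (model_index, rid))
    if op == "delete":
        return ("DELETE", "/%s/%s" % (model_index, rid))
    if op == "discover":
        return ("OPTIONS", "/" + (model_index or ""))
    raise ValueError("unknown operation: " + op)
```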

HTTP Response Status Codes

The following table lists common HTTP response codes used by the API:

HTTP Status Code Description
200 The operation succeeded
401 The user authentication failed
403 The request was not permitted
406 The request results in an invalid configuration
500 The request caused a server error

Resources

The following resources are available via the API:

  • Root
  • Application
  • EnvironmentType
  • ContainerType
  • ResourceSelectionPolicy
  • ResourceSelectionRule
  • CloudProvider
  • HostGroup
  • Host
  • Container
  • Service
  • Environment
  • ServiceInstance
  • ServicePort
  • ServiceSpec
  • Registry
  • LaunchConfiguration
  • VSphereConfig
  • VCloudConfig
  • OpenStackConfig
  • AzureConfig
  • WebHook
  • ScalingPolicy
  • DesiredService
  • ScalingRule
  • ServiceInstanceAction
  • ServiceAffinityRule
  • ServiceScalingRule
  • RoutingPolicy
  • RoutingRule
  • GatewayRoute
  • GatewayPolicy
  • GatewayRule

HTTP Headers

The following HTTP headers are required:

Header Sample Description
Authorization NIRMATA-API <api-key> API key based authentication
Accept application/json Specifies request for JSON data

URL Parameters

URL Parameters are used to control the returned JSON data.

mode

The mode parameter is used to control the JSON output format for a GET request. Here are the allowed values:
  • normal
  • export
  • exportDetails

In the normal mode the JSON data for each object contains all attributes, and relations are output as links. In the export mode the JSON data contains all attributes, and child relation objects are directly embedded into the parent. Reference relations are represented as links, and also contain the unique attributes of the referred-to object. The mode parameter can be applied to all GET requests and can be combined with other parameters. The export mode is useful when an entire sub-document needs to be retrieved and stored externally. For example, the following query will export the full {model} as JSON:

GET http://www.nirmata.io/api/applications/{id}?mode=export
Accept: application/json

The exportDetails mode also includes meta-data fields and relations to external entities (outside of the target document).

query

The query parameter is used to control which objects are included in a JSON response. The query data is specified as a JSON object. The query parameter can be applied to any GET request that returns multiple objects. For example, the following query returns all ServiceInstance resources that have the attribute state with the value failed:

GET  http://www.nirmata.io/api/applications/api/serviceInstances?query={"state":"failed"}
Accept: application/json

Multiple attributes can be specified for a logical AND operation:

GET  http://www.nirmata.io/api/applications/api/serviceInstances?query={"state":"failed", "serviceRef":"01101AKAJAL10107252418HG"}
Accept: application/json
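Because the query value is itself JSON, it must be URL-encoded when the request is built programmatically. A sketch of constructing such a URL (the helper name is an illustrative assumption):

```python
import json
from urllib.parse import urlencode

def query_url(base, model_index, **attrs):
    """Build a GET URL whose query parameter is a JSON object;
    multiple attributes combine as a logical AND."""
    qs = urlencode({"query": json.dumps(attrs)})
    return "%s/%s?%s" % (base, model_index, qs)
```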

fields

The fields parameter is used to control which fields of a resource should be included in the JSON response. The fields parameter can be applied to any GET request. The fields parameter value is specified as a single field name, or a comma separated list of field names, that should be included in the response.

For example, this query returns only the id and state fields of all ServiceInstance resources:

GET  http://www.nirmata.io/api/applications/api/serviceInstances?fields=id,state
Accept: application/json

excludeFields

The excludeFields parameter is used to exclude one or more fields from the response. The excludeFields parameter can be applied to any GET request. The fields parameter value is specified as a single field name, or a comma separated list of field names, that should be excluded from the response.

For example, this query excludes the state field from ServiceInstance resources:

GET  http://www.nirmata.io/api/applications/api/serviceInstances?excludeFields=state
Accept: application/json

count

The count parameter is used to specify the maximum number of resources to be returned. It can be used when querying a relation, a list of resources, or using a query parameter. A positive value is expected. When not specified, all matching resources are returned. For example the following query returns up to 100 instances of the resource serviceInstances that have attribute state with the value up:

GET  http://www.nirmata.io/api/applications/api/serviceInstances?count=100&query={"state":"up"}
Accept: application/json

start

The start parameter is used to specify the start index in a list of resources. It can be used when querying a relation, a list of resources, or using a query parameter. A positive value is expected. When not specified a value of 0 is assumed. For example the following query returns instances 100-200 (or less) of the resource serviceInstances that have attribute state with the value up:

GET  http://www.nirmata.io/api/applications/api/serviceInstances?start=100&count=100&query={"state":"up"}
Accept: application/json
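Together, start and count give simple pagination. A sketch of generating the page URLs for a large collection (the helper name is an illustrative assumption):

```python
def page_urls(base, model_index, total, page_size):
    """Yield GET URLs that cover `total` resources in pages of `page_size`,
    using the start and count parameters described above."""
    for start in range(0, total, page_size):
        yield "%s/%s?start=%d&count=%d" % (base, model_index, start, page_size)
```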


Service Instance Error Codes

The API user can retrieve a failed Service Instance and read the errorCode field in order to get more details about the error.
This field can take the following values:

===============================  ================================================================
Error Code                       Description
===============================  ================================================================
NoError                           The Service Instance is not in failed state
UnknownError                      Unknown error
ServiceInstanceFailure            Service instance failed: unknown reason
InternalError                     Internal error
ContainerFailed                   Container failed on host
ContainerCreationError            Failed to create container on host
ContainerNotRunning               Container found in non running state on host
ContainerNotFound                 No container found on the host
NoSuitableHost                    Container placement failed: couldn't find suitable host
DependencyInjectionError          Dependency injection error
VolumeCreationFailed              Failed to create storage volume on host
NetworkCreationFailed             Failed to create network on host
MemoryLimitReached                Container exited abruptly, memory limit reached
HostNotConnectedAndUnreachable    Host not connected and unreachable through cloud provider
HostNotConnected                  Host not connected
PullImageFailed                   Failed to pull image
NoSpaceLeftOnHost                 No space left on host
HealthCheckFailed                 Health check failed
===============================  ================================================================

Sample Operations

Get all applications

Request:

GET /api/applications
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

[
 {
     "id": "1696d434-0e97-4428-8875-e53069ea418b",
     "service": "Config",
     "modelIndex": "Application",
     "uri": "/config/api/applications/1696d434-0e97-4428-8875-e53069ea418b",
     "parent": {
         "id": "132eb66a-1c78-4a3e-a096-3ce9c57aefec",
         "service": "Config",
         "modelIndex": "Root",
         "uri": "/config/api/roots/132eb66a-1c78-4a3e-a096-3ce9c57aefec",
         "childRelation": "applications"
     },
     "createdBy": "damien",
     "createdOn": 1421778225729,
     "versionName": "latest",
     "versions": [],
     "name": "shopme-damien",
     "description": "A Microservices architecture style shopping application.",
     "services": [
         {
             "id": "fbd5098b-a096-41f0-9f95-a012b9c86d96",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/fbd5098b-a096-41f0-9f95-a012b9c86d96"
         },
         {
             "id": "eb3b740e-96e8-45ff-b981-76300d491f86",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/eb3b740e-96e8-45ff-b981-76300d491f86"
         },
         {
             "id": "587d997f-c931-4c5c-a6a4-08787b273884",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/587d997f-c931-4c5c-a6a4-08787b273884"
         },
         {
             "id": "cb28c6fd-1583-4151-b76c-c7ef07255670",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/cb28c6fd-1583-4151-b76c-c7ef07255670"
         },
         {
             "id": "2a14cb82-82e9-4d95-a087-8471c131a5c2",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/2a14cb82-82e9-4d95-a087-8471c131a5c2"
         },
         {
             "id": "738ffeb1-3829-4488-b310-56e33d7c3181",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/738ffeb1-3829-4488-b310-56e33d7c3181"
         },
         {
             "id": "4cfda00b-0781-4f73-b08d-40c1cfb96952",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/4cfda00b-0781-4f73-b08d-40c1cfb96952"
         },
         {
             "id": "589dda7c-908a-468e-81be-dd63ba9a3fa9",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/589dda7c-908a-468e-81be-dd63ba9a3fa9"
         },
         {
             "id": "8cace559-8024-44e9-9e7b-a61263361358",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/8cace559-8024-44e9-9e7b-a61263361358"
         },
         {
             "id": "0f1daa18-0493-4fbc-aadf-9a2a26134f6d",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/0f1daa18-0493-4fbc-aadf-9a2a26134f6d"
         },
         {
             "id": "edb40d39-5578-4353-9649-22cbe7554822",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/edb40d39-5578-4353-9649-22cbe7554822"
         }
     ],
     "webHooks": [],
     "serviceAffinityRules": [],
     "serviceScalingRules": [],
     "environments": [],
     "gateway": [
         {
             "id": "cb28c6fd-1583-4151-b76c-c7ef07255670",
             "service": "Config",
             "modelIndex": "Service",
             "uri": "/config/api/services/cb28c6fd-1583-4151-b76c-c7ef07255670"
         }
     ]
 },
     ...
]

Get Service Details

The mode=exportDetails query parameter expands all of the service's relations inline.

Request:

GET /api/services/fbd5098b-a096-41f0-9f95-a012b9c86d96?mode=exportDetails
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{
"id": "fbd5098b-a096-41f0-9f95-a012b9c86d96",
"service": "Config",
"modelIndex": "Service",
"uri": "/config/api/services/fbd5098b-a096-41f0-9f95-a012b9c86d96",
"parent": {
    "id": "1696d434-0e97-4428-8875-e53069ea418b",
    "service": "Config",
    "modelIndex": "Application",
    "uri": "/config/api/applications/1696d434-0e97-4428-8875-e53069ea418b",
    "childRelation": "services"
},
"createdBy": "damien",
"createdOn": 1421778225744,
"versionName": "latest",
"versions": [],
"name": "orders",
"imageRepository": "nirmata/orders",
"serviceSpec": [
    {
        "id": "64917ec6-19cb-4abe-826e-a4a35716f5b6",
        "service": "Config",
        "modelIndex": "ServiceSpec",
        "uri": "/config/api/serviceSpecs/64917ec6-19cb-4abe-826e-a4a35716f5b6",
        "parent": {
            "id": "fbd5098b-a096-41f0-9f95-a012b9c86d96",
            "service": "Config",
            "modelIndex": "Service",
            "uri": "/config/api/services/fbd5098b-a096-41f0-9f95-a012b9c86d96",
            "childRelation": "serviceSpec"
        },
        "createdBy": "damien",
        "createdOn": 1421778225755,
        "versionName": "latest",
        "versions": [],
        "runCommand": "",
        "properties": [],
        "hostname": "",
        "volumeMappings": [],
        "isPrivileged": false,
        "volumesFrom": [],
        "dns": [],
        "networkMode": "bridge",
        "enableServiceNetworking": true,
        "portMappings": [
            {
                "id": "9630c407-003e-4a63-8ba4-2be42be7709e",
                "service": "Config",
                "modelIndex": "ServicePort",
                "uri": "/config/api/servicePorts/9630c407-003e-4a63-8ba4-2be42be7709e",
                "parent": {
                    "id": "64917ec6-19cb-4abe-826e-a4a35716f5b6",
                    "service": "Config",
                    "modelIndex": "ServiceSpec",
                    "uri": "/config/api/serviceSpecs/64917ec6-19cb-4abe-826e-a4a35716f5b6",
                    "childRelation": "portMappings"
                },
                "createdBy": "damien",
                "createdOn": 1421778225769,
                "versionName": "latest",
                "versions": [],
                "containerPort": 80,
                "type": "TCP",
                "hostPort": 0,
                "name": "HTTP"
            }
        ]
    }
],
"gatewayRoutes": [
    {
        "id": "d20f0221-5a1a-48e0-a586-3f91b3333856",
        "service": "Config",
        "modelIndex": "GatewayRoute",
        "uri": "/config/api/gatewayRoutes/d20f0221-5a1a-48e0-a586-3f91b3333856",
        "parent": {
            "id": "fbd5098b-a096-41f0-9f95-a012b9c86d96",
            "service": "Config",
            "modelIndex": "Service",
            "uri": "/config/api/services/fbd5098b-a096-41f0-9f95-a012b9c86d96",
            "childRelation": "gatewayRoutes"
        },
        "createdBy": "damien",
        "createdOn": 1421778225788,
        "versionName": "latest",
        "versions": [],
        "type": "urlRoute",
        "route": "/orders"
    }
],
"containerType": [
    {
        "id": "ffeb8b87-2d61-4144-ac23-ead227fb6c1b",
        "service": "Config",
        "modelIndex": "ContainerType",
        "uri": "/config/api/containerTypes/ffeb8b87-2d61-4144-ac23-ead227fb6c1b",
        "name": "512mb"
    }
],
"dependsOn": [],
"requiredBy": [
    {
        "id": "cce2798b-f92f-49d3-afae-625aea3f77fd",
        "service": "Config",
        "modelIndex": "Service",
        "uri": "/config/api/services/cce2798b-f92f-49d3-afae-625aea3f77fd",
        "name": "customer"
    },
    {
        "id": "eb3b740e-96e8-45ff-b981-76300d491f86",
        "service": "Config",
        "modelIndex": "Service",
        "uri": "/config/api/services/eb3b740e-96e8-45ff-b981-76300d491f86",
        "name": "customer"
    }
],
"serviceInstances": [],
"desiredServices": [],
"serviceAffinityRules": [],
"serviceScalingRules": [],
"gatewayApplication": []
}

Create an Application

Request:

POST https://www.nirmata.io/api/applications/
Accept: application/json
Authorization: NIRMATA-API <key>

{
    "name": "MyApp",
    "description": "this is a sample application"
}

A successful POST returns the complete resource as its result:

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{
  "id" : "c7bf4bfa-346e-44f8-b62c-9b8fdf5c1980",
  "service" : "Config",
  "modelIndex" : "Application",
  "uri" : "/config/api/applications/c7bf4bfa-346e-44f8-b62c-9b8fdf5c1980",
  "parent" : {
    "id" : "132eb66a-1c78-4a3e-a096-3ce9c57aefec",
    "service" : "Config",
    "modelIndex" : "Root",
    "uri" : "/config/api/roots/132eb66a-1c78-4a3e-a096-3ce9c57aefec",
    "childRelation" : "applications"
  },
  "createdBy" : "jim",
  "createdOn" : 1422318766175,
  "versionName" : "latest",
  "versions" : [ ],
  "name" : "MyApp",
  "description" : "this is a sample application",
  "services" : [ ],
  "webHooks" : [ ],
  "serviceAffinityRules" : [ ],
  "serviceScalingRules" : [ ],
  "environments" : [ ],
  "gateway" : [ ]
}

Delete an Application

Request:

DELETE /api/applications/c7bf4bfa-346e-44f8-b62c-9b8fdf5c1980
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{}

Get the ID of an Application named ‘hello-world’

Request:

GET /api/applications?query={"name": "hello-world"}&fields={"include":"id"}
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

[
    {
        "id": "8032f2ce-eae0-402b-8d80-2a47717b52ef"
    }
]
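The query and fields values are JSON documents and must be percent-encoded when placed in a URL. A minimal sketch using Python's standard library (the path is the same endpoint shown above):

```python
import json
from urllib.parse import urlencode

# Build the query string for:
#   GET /api/applications?query={"name": "hello-world"}&fields={"include":"id"}
params = {
    "query": json.dumps({"name": "hello-world"}),
    "fields": json.dumps({"include": "id"}),
}
url = "/api/applications?" + urlencode(params)
print(url)  # JSON values are percent-encoded, e.g. { becomes %7B
```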

Get all Environment IDs and names

Request:

GET /api/environments?fields=id,name
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

[
    {
        "id": "f9594d4a-605e-4ced-a9d0-4c00c4b5abf3",
        "name": "workflow"
    },
    {
        "id": "dd333b0f-0e63-4644-aedc-8871f026a028",
        "name": "zookeeper"
    },
    {
        "id": "dee56b58-792c-4a43-bc51-2b3785868858",
        "name": "test21"
    }
]

Create a new Environment

Request:

POST /api/environments/
Accept: application/json
Authorization: NIRMATA-API <key>

{
  "name": "testEnvironment",
  "environmentType": {"name": "AWS"},
  "application": {"name": "hello-world"}
}

Note the different ways you can specify a relation in the JSON. For a 1-1 relationship you can either:
  1. Specify the related object's ID as a JSON string, e.g. "application": "8032f2ce-eae0-402b-8d80-2a47717b52ef"
  2. Specify an object with fields that uniquely match the relation. In the example above, the name field is used for both the EnvironmentType and the Application.

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{
  "id" : "bf24dc3f-c729-4c8b-911c-991aab09df70",
  "service" : "Config",
  "modelIndex" : "Environment",
  "uri" : "/config/api/environments/bf24dc3f-c729-4c8b-911c-991aab09df70",
  "parent" : {
    "id" : "132eb66a-1c78-4a3e-a096-3ce9c57aefec",
    "service" : "Config",
    "modelIndex" : "Root",
    "uri" : "/config/api/roots/132eb66a-1c78-4a3e-a096-3ce9c57aefec",
    "childRelation" : "environments"
  },
  "createdBy" : "jim",
  "createdOn" : 1422319764842,
  "versionName" : "latest",
  "versions" : [ ],
  "name" : "testEnvironment",
  "serviceInstances" : [ ],
  "desiredServices" : [ ],
  "scalingPolicy" : [ {
    "id" : "55a3399a-5ed0-499e-aa6c-ef06984a8b69",
    "service" : "Config",
    "modelIndex" : "ScalingPolicy",
    "uri" : "/config/api/scalingPolicies/55a3399a-5ed0-499e-aa6c-ef06984a8b69"
  } ],
  "routingPolicy" : [ {
    "id" : "65d60c9c-f98b-4a81-84ef-3a0865164af9",
    "service" : "Config",
    "modelIndex" : "RoutingPolicy",
    "uri" : "/config/api/routingPolicies/65d60c9c-f98b-4a81-84ef-3a0865164af9"
  } ],
  "gatewayPolicy" : [ ],
  "environmentType" : [ {
    "id" : "9d789292-5911-47d3-b4dd-c66bb1f64858",
    "service" : "Config",
    "modelIndex" : "EnvironmentType",
    "uri" : "/config/api/environmentTypes/9d789292-5911-47d3-b4dd-c66bb1f64858"
  } ],
  "application" : [ {
    "id" : "8032f2ce-eae0-402b-8d80-2a47717b52ef",
    "service" : "Config",
    "modelIndex" : "Application",
    "uri" : "/config/api/applications/8032f2ce-eae0-402b-8d80-2a47717b52ef"
  } ]
}

Create a new Environment with environment variables, labels, and Gateway rules

Request Headers:

POST /api/environments/
Accept: application/json
Authorization: NIRMATA-API <key>

Request Body:

{
    "name": "helloOpenStack3",
    "environmentType": {"name": "openstack-private-cloud"},
    "application": {"name": "hello-world"},
    "desiredServices" : [
        {"name": "helloworld:latest", "serviceRef": {"name": "helloworld"}, "tag": "latest" },
        {"name": "hello-world-gateway:latest", "serviceRef": {"name": "hello-world-gateway"}, "tag": "latest" }
    ],
    "environmentVariables": [
        { "key": "DATABASE", "value": "mydb.cluster1.com", "desiredServices": [{"name": "helloworld:latest"}] },
        { "key": "TYPE", "value": "staging", "desiredServices": [] }
    ],
    "labelSelectors": [
        {
            "labelSelectorItem": [{"key": "LOCATION", "operator": "equals", "values": ["Denver"]}],
            "desiredServices": [{"name": "helloworld:latest"}]
        }
    ],
    "gatewayPolicy": [
        {
            "redirectHttp": false,
            "rules": [
                {
                    "type": "urlRoute",
                    "route": "/helloworld",
                    "toService": "helloworld",
                    "toTag": "",
                    "toPort": 8080,
                    "toPortType": "HTTP",
                    "targetUrl": "",
                    "stickySessions": true
                }
            ]
        }
    ]
}

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{
  "id" : "392f7627-4add-4231-8e2f-7f4faabad0dd",
  "service" : "Config",
  "modelIndex" : "Environment",
  "uri" : "/config/api/environments/392f7627-4add-4231-8e2f-7f4faabad0dd",
  "parent" : {
    "id" : "132eb66a-1c78-4a3e-a096-3ce9c57aefec",
    "service" : "Config",
    "modelIndex" : "Root",
    "uri" : "/config/api/roots/132eb66a-1c78-4a3e-a096-3ce9c57aefec",
    "childRelation" : "environments"
  },
  "createdBy" : "jim@nirmata.com",
  "createdOn" : 1451973613713,
  "versionName" : "latest",
  "versions" : [ ],
  "name" : "helloOpenStack3",
  "state" : "pending",
  "status" : [ ],
  "serviceInstances" : [ ],
  "desiredServices" : [ {
    "id" : "69212ca9-ecd9-4636-ac6a-1269790cf986",
    "service" : "Config",
    "modelIndex" : "DesiredService",
    "uri" : "/config/api/desiredServices/69212ca9-ecd9-4636-ac6a-1269790cf986"
  }, {
    "id" : "fe56d2a3-c1d6-43f2-a857-b766c6a18335",
    "service" : "Config",
    "modelIndex" : "DesiredService",
    "uri" : "/config/api/desiredServices/fe56d2a3-c1d6-43f2-a857-b766c6a18335"
  } ],
  "scalingPolicy" : [ {
    "id" : "128bc9f5-da9a-437b-8b3f-b825eaf8a949",
    "service" : "Config",
    "modelIndex" : "ScalingPolicy",
    "uri" : "/config/api/scalingPolicies/128bc9f5-da9a-437b-8b3f-b825eaf8a949"
  } ],
  "routingPolicy" : [ {
    "id" : "f66208cc-b27e-4cac-912f-d9b624efb10b",
    "service" : "Config",
    "modelIndex" : "RoutingPolicy",
    "uri" : "/config/api/routingPolicies/f66208cc-b27e-4cac-912f-d9b624efb10b"
  } ],
  "gatewayPolicy" : [ {
    "id" : "3a37f07f-1c83-4e7c-8e93-f0c4813438b8",
    "service" : "Config",
    "modelIndex" : "GatewayPolicy",
    "uri" : "/config/api/gatewayPolicies/3a37f07f-1c83-4e7c-8e93-f0c4813438b8"
  } ],
  "clusters" : [ ],
  "updatePolicy" : [ {
    "id" : "6302d966-7a12-4237-968c-94dc3f9ddc81",
    "service" : "Config",
    "modelIndex" : "UpdatePolicy",
    "uri" : "/config/api/updatePolicies/6302d966-7a12-4237-968c-94dc3f9ddc81"
  } ],
  "labelSelectors" : [ {
    "id" : "29905746-5792-45bf-8d12-0d98416af33a",
    "service" : "Config",
    "modelIndex" : "DesiredServiceLabelSelector",
    "uri" : "/config/api/labelSelectors/29905746-5792-45bf-8d12-0d98416af33a"
  } ],
  "environmentVariables" : [ {
    "id" : "69777ea7-806d-489a-8ba0-3e46ef5c1e0c",
    "service" : "Config",
    "modelIndex" : "EnvironmentVariable",
    "uri" : "/config/api/environmentVariables/69777ea7-806d-489a-8ba0-3e46ef5c1e0c"
  }, {
    "id" : "cccaeb71-5a5f-45d1-ad44-08d4a8e3e977",
    "service" : "Config",
    "modelIndex" : "EnvironmentVariable",
    "uri" : "/config/api/environmentVariables/cccaeb71-5a5f-45d1-ad44-08d4a8e3e977"
  } ],
  "environmentType" : [ {
    "id" : "d1383ef4-ee02-46eb-9c98-fd9a4d975eed",
    "service" : "Config",
    "modelIndex" : "EnvironmentType",
    "uri" : "/config/api/environmentTypes/d1383ef4-ee02-46eb-9c98-fd9a4d975eed"
  } ],
  "application" : [ {
    "id" : "7eb02867-dc89-44fc-81f3-3cea699377d9",
    "service" : "Config",
    "modelIndex" : "Application",
    "uri" : "/config/api/applications/7eb02867-dc89-44fc-81f3-3cea699377d9"
  } ]
}

Get Services in an Environment

Request:

GET /api/environments/dee56b58-792c-4a43-bc51-2b3785868858/desiredServices?fields=name,id
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

[
    {
        "id": "d0ca8e5e-815d-42e4-b85a-2dff6afb395c",
        "name": "orders:latest"
    },
    {
        "id": "7fae24aa-51e0-4f48-8cf9-43f4587bce27",
        "name": "customer:latest"
    },
    {
        "id": "ca2d9888-4e18-4209-952f-73b1ece9c02c",
        "name": "catalog:latest"
    },
    {
        "id": "66474274-5800-44b2-8c7d-ad4c8265ab04",
        "name": "ratings:latest"
    },
    {
        "id": "ed243ae3-77ff-458b-ae70-21643a328b34",
        "name": "recommendations:latest"
    }
]

Delete a Service Instance

Request:

DELETE /api/serviceInstances/1c98b95b-29bc-4c2b-a0e0-8f46a353115e
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{}

Delete all instances of a Service

You can combine multiple operations using a programming language, shell scripting, or command-line tools like curl and jq:

#!/bin/bash

# Environment ID and Service name:tag are passed in
EnvId=$1
ServiceName=$2

BaseUrl="https://nirmata.io/config/api"
ApiKey=<apikey>
Accept="Accept: application/json"
Auth="Authorization: NIRMATA-API $ApiKey"

curl -H "$Accept" -H "$Auth" "$BaseUrl/environments/$EnvId/desiredServices?fields=name,id,serviceInstances" | \
    jq -r --arg ServiceName "$ServiceName" '.[] | select(.name == $ServiceName) | .serviceInstances | .[] .id' | \
    xargs -I% curl -X DELETE -H "$Accept" -H "$Auth" "$BaseUrl/serviceInstances/%"

The bash script does the following:
  1. Gets the desired services, with their service instances, for the environment
  2. Filters the results for the given service name
  3. Pipes the matching service instance IDs to DELETE API calls that remove them
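The same selection step can be expressed in Python. The sketch below leaves out the HTTP calls and applies the jq filter to an already-fetched payload; the sample data is illustrative:

```python
# Select the service-instance IDs for one service, mirroring the jq filter:
#   .[] | select(.name == $ServiceName) | .serviceInstances | .[] .id
def instance_ids(desired_services, service_name):
    return [
        inst["id"]
        for svc in desired_services
        if svc["name"] == service_name
        for inst in svc.get("serviceInstances", [])
    ]

# Illustrative payload, shaped like the response of
# GET .../desiredServices?fields=name,id,serviceInstances
sample = [
    {"name": "orders:latest", "id": "a-1",
     "serviceInstances": [{"id": "i-1"}, {"id": "i-2"}]},
    {"name": "catalog:latest", "id": "a-2",
     "serviceInstances": [{"id": "i-3"}]},
]
print(instance_ids(sample, "orders:latest"))  # ['i-1', 'i-2']
```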

Add a Service to an Environment

Request:

POST /api/environments/70e12f6e-9961-40d1-92eb-2d8db1c37b34/desiredServices
Accept: application/json
Authorization: NIRMATA-API <key>

{
    "name": "catalog:blue",
    "tag": "blue",
    "serviceRef": {"name": "catalog"}
}

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{
    "id" : "3b45c735-4e71-41ad-9c96-ca5b46b28d88",
    "service" : "Config",
    "modelIndex" : "DesiredService",
    "uri" : "/config/api/desiredServices/3b45c735-4e71-41ad-9c96-ca5b46b28d88",
    "parent" : {
    "id" : "70e12f6e-9961-40d1-92eb-2d8db1c37b34",
    "service" : "Config",
    "modelIndex" : "Environment",
    "uri" : "/config/api/environments/70e12f6e-9961-40d1-92eb-2d8db1c37b34",
    "childRelation" : "desiredServices"
    },
    "createdBy" : "jim@nirmata.com",
    "createdOn" : 1450667141749,
    "versionName" : "latest",
    "versions" : [ ],
    "tag" : "blue",
    "lastStateChange" : 0,
    "startImageUpdateQuietPeriod" : 0,
    "name" : "catalog:blue",
    "actions" : [ ],
    "imageUpdateEvent" : [ ],
    "serviceRef" : [ {
    "id" : "8f7c848f-b7cc-4dd6-805c-5fe2462c555c",
    "service" : "Config",
    "modelIndex" : "Service",
    "uri" : "/config/api/services/8f7c848f-b7cc-4dd6-805c-5fe2462c555c"
    } ],
    "scalingRule" : [ ],
    "serviceInstances" : [ ],
    "cluster" : [ ],
    "labelSelectors" : [ ],
    "environmentVariables" : [ ]
}

Get Environment Information

Request:

GET /api/environments?query={"name": "shop-demo"}&mode=exportDetails&fields=id,name,scalingPolicy,scalingRules,desiredServices
Accept: application/json
Authorization: NIRMATA-API <key>

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

[
    {
        "id": "70e12f6e-9961-40d1-92eb-2d8db1c37b34",
        "name": "shop-demo",
        "desiredServices": [
            {
                "id": "faefb6bf-4eb4-4c84-844c-1486bb7071b4",
                "name": "orders:latest"
            },
            {
                "id": "684b2add-26cb-4873-b6f5-f885a827c017",
                "name": "customer:latest"
            },
            {
                "id": "ec27e08f-c702-4832-bb50-69008393b000",
                "name": "catalog:latest"
            },
            {
                "id": "018f1ca8-b2dd-4c37-a1f7-315d704aa01c",
                "name": "ratings:latest"
            },
            {
                "id": "3dad36c8-12ac-4280-a862-fb1b38bb6ed4",
                "name": "recommendations:latest"
            },
            {
                "id": "2b2f6089-728d-4759-a8c4-f51c98c47077",
                "name": "shop4me-gateway:latest"
            },
            {
                "id": "3b45c735-4e71-41ad-9c96-ca5b46b28d88",
                "name": "catalog:blue"
            }
        ],
        "scalingPolicy": [
            {
                "id": "a9bf6aa3-0d98-401f-b7a8-21965b8f65b5",
                "scalingRules": [
                    {
                        "id": "0c616025-babe-42b3-ba3d-b93f22348baf",
                        "desiredServices": [
                            {
                                "id": "ec27e08f-c702-4832-bb50-69008393b000",
                                "name": "catalog:latest"
                            }
                        ]
                    }
                ]
            }
        ]
    }
]

Scale-up or scale-down a Service

Request:

PUT /api/scalingRules/0c616025-babe-42b3-ba3d-b93f22348baf
Accept: application/json
Authorization: NIRMATA-API <key>

{
    "desiredCount" : 5
}

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{
  "id" : "0c616025-babe-42b3-ba3d-b93f22348baf",
  "service" : "Config",
  "modelIndex" : "ScalingRule",
  "uri" : "/config/api/scalingRules/0c616025-babe-42b3-ba3d-b93f22348baf",
  "parent" : {
    "id" : "a9bf6aa3-0d98-401f-b7a8-21965b8f65b5",
    "service" : "Config",
    "modelIndex" : "ScalingPolicy",
    "uri" : "/config/api/scalingPolicies/a9bf6aa3-0d98-401f-b7a8-21965b8f65b5",
    "childRelation" : "scalingRules"
  },
  "createdBy" : "jim@nirmata.com",
  "createdOn" : 1450667592016,
  "modifiedBy" : "jim@nirmata.com",
  "modifiedOn" : 1450667690973,
  "versionName" : "latest",
  "versions" : [ ],
  "desiredCount" : 5,
  "autoRecovery" : true,
  "desiredServices" : [ {
    "id" : "ec27e08f-c702-4832-bb50-69008393b000",
    "service" : "Config",
    "modelIndex" : "DesiredService",
    "uri" : "/config/api/desiredServices/ec27e08f-c702-4832-bb50-69008393b000"
  } ]
}

Create an AWS Host Group from an AMI

Request:

POST /api/cloudProviders/{cloudProviderId}/hostGroups
Accept: application/json
Authorization: NIRMATA-API <key>

{
    "name" : "testHostGroup",
    "desiredCount" : 1,
    "awsConfig" : {
        "awsConfigType" : "ami",
        "region" : "us-west-1",
        "image" : "ami-d8bdebb8",
        "instanceType" : "m3.medium",
        "keypair" : "{keypair name}",
        "storageCapacity" : 30,
        "network" : "subnet-8bf9f0ee",
        "securityGroups" : ["sg-22db3266", "sg-1a38d97f"],
        "userData" : "#!/bin/bash -x \n# Install docker \n# Please verify docker engine installation for your OS: \n#      https://docs.docker.com/engine/installation/ \nsudo curl -sSL https://get.docker.com/ | sh \n\n# Install Nirmata Agent \nsudo curl -sSL https://www.nirmata.io/nirmata-host-agent/setup-nirmata-agent.sh | sudo sh -s -- --cloud aws \n"
    }
}

Response Headers:

HTTP/1.1 200 OK
...

Response Body:

{
  "id" : "3f07601e-57b3-4e2d-b960-7df375653ae5",
  "service" : "Config",
  "modelIndex" : "HostGroup",
  "uri" : "/config/api/hostGroups/3f07601e-57b3-4e2d-b960-7df375653ae5",
  "parent" : {
    "id" : "125b0984-0476-4c76-850c-b857b9ef3fe3",
    "service" : "Config",
    "modelIndex" : "CloudProvider",
    "uri" : "/config/api/cloudProviders/125b0984-0476-4c76-850c-b857b9ef3fe3",
    "childRelation" : "hostGroups"
  },
  "createdBy" : "jim@nirmata.com",
  "createdOn" : 1483757064734,
  "modifiedBy" : "jim@nirmata.com",
  "modifiedOn" : 1483757064734,
  "alarms" : [ ],
  "name" : "testHostGroup",
  "desiredCount" : 1,
  "status" : [ ],
  "usePublicIpAddresses" : false,
  "hosts" : [ ],
  "vSphereConfig" : [ ],
  "openStackConfig" : [ ],
  "vCloudConfig" : [ ],
  "azureConfig" : [ ],
  "ciscoIcfConfig" : [ ],
  "ciscoUcsdConfig" : [ ],
  "digitalOceanConfig" : [ ],
  "profitBricksConfig" : [ ],
  "awsConfig" : [ {
    "id" : "462b263f-858b-437d-a9ac-95a0a63ed2f2",
    "service" : "Config",
    "modelIndex" : "AwsConfig",
    "uri" : "/config/api/awsConfigs/462b263f-858b-437d-a9ac-95a0a63ed2f2"
  } ],
  "googleComputeConfig" : [ ],
  "softLayerConfig" : [ ],
  "resourceSelectionRules" : [ ],
  "hostScalingRule" : [ ]
}

To use a host group, you must associate it with a ResourceSelectionRule. You can create a new ResourceSelectionRule as follows.

Request:

POST /api/cloudProviders/resourceSelectionRules
Accept: application/json
Authorization: NIRMATA-API <key>

Discover the REST API schema

The Nirmata REST API schema can be queried using the HTTP OPTIONS method. Here is a query and jq filter to obtain all endpoints (model classes):

Request:

curl -X OPTIONS -H "Accept: application/json" -H "Authorization: NIRMATA-API <key>" https://nirmata.io/config/api | jq ".modelClasses | .[] .modelIndex"

Response:

"Root"
"Application"
"EnvironmentType"
"ContainerType"
"ResourceSelectionPolicy"
"ResourceSelectionRule"
"CloudProvider"
"HostGroup"
"Host"
"Container"
"Service"
"Environment"
"ServiceInstance"
"ServicePort"
"ServiceSpec"
"Registry"
"LaunchConfiguration"
"PortRange"
"VSphereConfig"
"OpenStackConfig"
"WebHook"
"ScalingPolicy"
"DesiredService"
"ScalingRule"
"VCloudConfig"
"ServiceInstanceAction"
"ServiceAffinityRule"
"ServiceScalingRule"
"RoutingPolicy"
"RoutingRule"
"GatewayRoute"
"GatewayPolicy"
"GatewayRule"
"AzureConfig"
"DesiredServiceAction"
"PrivateCloud"
"CiscoIcfConfig"
"CiscoUcsdConfig"
"HealthCheck"
"ClusterNode"
"Cluster"
"UpdatePolicy"
"ImageUpdateEvent"
"GatewayConfig"
"ContainerAction"
"DesiredServiceLabelSelector"
"LabelSelectorItem"
"EnvironmentVariable"
"HostAction"
"DigitalOceanConfig"
"ProfitBricksConfig"
"AwsConfig"
"HostScalingRule"

Discover attributes and relations

You can query a single model class to get its attributes and relations.

Request:

curl -X OPTIONS -H "Accept: application/json" -H "Authorization: NIRMATA-API <key>" https://nirmata.io/config/api/ContainerType

Response Body:

{
  "name" : "Config",
  "id" : "d05f7751-8dae-4733-9e14-601d6f3516b3",
  "rootIndex" : "Root",
  "modelClasses" : [ {
    "modelIndex" : "ContainerType",
    "uri" : "/config/api/containerTypes",
    "methods" : [ "OPTIONS", "GET", "POST", "DELETE", "PUT" ],
    "apiLabel" : "containerTypes",
    "isDeleteable" : true,
    "keyField" : "name",
    "parents" : [ "Root" ],
    "attributes" : [ {
      "name" : "name",
      "type" : "String",
      "isRequired" : true,
      "uniqueScope" : "PARENT",
      "isKey" : false,
      "length" : 0
    }, {
      "name" : "description",
      "type" : "String",
      "isRequired" : false,
      "uniqueScope" : "NONE",
      "isKey" : false,
      "length" : 0
    }, {
      "name" : "cpuShares",
      "type" : "Integer",
      "isRequired" : false,
      "uniqueScope" : "NONE",
      "isKey" : false,
      "min" : 0,
      "max" : 1024,
      "default" : 0
    }, {
      "name" : "memory",
      "type" : "Integer",
      "isRequired" : false,
      "uniqueScope" : "NONE",
      "isKey" : false,
      "min" : 0,
      "max" : 64000,
      "default" : 0
    } ],
    "relations" : [ {
      "name" : "resourceSelectionRules",
      "type" : "Reference",
      "relationClass" : "ResourceSelectionRule",
      "cardinality" : {
        "min" : 0,
        "type" : "zeroOrMore"
      },
      "isMany" : true,
      "relationField" : "containerTypes",
      "isStrongReference" : false,
      "uri" : "/config/api/containerTypes/{id}/resourceSelectionRules",
      "methods" : [ "OPTIONS", "GET" ]
    }, {
      "name" : "services",
      "type" : "Reference",
      "relationClass" : "Service",
      "cardinality" : {
        "min" : 0,
        "type" : "zeroOrMore"
      },
      "isMany" : true,
      "relationField" : "containerType",
      "isStrongReference" : false,
      "uri" : "/config/api/containerTypes/{id}/services",
      "methods" : [ "OPTIONS", "GET" ]
    } ]
  } ]
}
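Because the schema response is regular JSON, client tooling can walk it programmatically. For example, the sketch below summarizes each attribute with its type and whether it is required, using a trimmed copy of the ContainerType response above:

```python
# Walk an OPTIONS schema response and summarize the attributes
# declared for each model class.
def summarize_attributes(schema):
    out = {}
    for cls in schema["modelClasses"]:
        out[cls["modelIndex"]] = [
            (a["name"], a["type"], a["isRequired"])
            for a in cls["attributes"]
        ]
    return out

# Trimmed version of the ContainerType schema shown above.
schema = {
    "modelClasses": [{
        "modelIndex": "ContainerType",
        "attributes": [
            {"name": "name", "type": "String", "isRequired": True},
            {"name": "memory", "type": "Integer", "isRequired": False},
        ],
    }]
}
print(summarize_attributes(schema))
```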

Webhooks

Nirmata provides webhooks for easy integration with any web service. A webhook notifies your service when an application lifecycle event occurs.

Currently, webhooks are supported for the following events:
  • Environment post deploy - triggered once all services in the environment are deployed and running
  • Environment post delete - triggered when the environment, including all services, has been deleted

Setup

Webhooks can be created for each application.

For each webhook, the user can specify the following:
  • URL - the URL to be called
  • Application - the application that the webhook is being created for
  • Events - the events that should trigger the webhook. Multiple events can be selected
  • Enable - enable or disable the webhook
  • Secret - an optional secret used to compute the HMAC hex digest of the request body; the digest is sent in HTTP requests as the ‘X-Nirmata-Signature’ header.

Webhook request/response

Once the webhook is set up, Nirmata SaaS issues an HTTP(S) POST request to the specified URL every time a subscribed event occurs.

Request:

POST /17ls5j11
Host: my.webhook-server.com
Connection: keep-alive
Content-Type: application/json
Cache-Control: no-cache
Accept: application/json

{ "id":"9999",
  "triggerEvent":"environment-deploy",
  "applicationName":"Nirmata Demo",
  "applicationId":"54ce9479-0092-4fa9-97e1-2c36413b192d",
  "environmentName":"Staging",
  "url":"http://my.webhook-server.com/17ls5j11",
  "environmentId":"edb5d0e1-aada-4086-b2a7-06371dcf1706"
 }

Response:

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8

The webhook server/receiver should return a valid HTTP status code: 200 OK on success, or a 4xx/5xx code on error.
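If a secret is configured, the receiver should verify the ‘X-Nirmata-Signature’ header by recomputing the HMAC hex digest of the raw request body and comparing it in constant time. A sketch of that check follows; note that this documentation does not state which hash algorithm is used, so SHA-1 is an assumption made purely for illustration, as are the secret and body values:

```python
import hashlib
import hmac

# Verify a webhook body against the X-Nirmata-Signature header value.
# NOTE: the hash algorithm is not specified in this documentation;
# SHA-1 is assumed here for illustration only.
def signature_matches(secret: bytes, body: bytes, header_value: str) -> bool:
    expected = hmac.new(secret, body, hashlib.sha1).hexdigest()
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(expected, header_value)

secret = b"my-webhook-secret"  # illustrative secret
body = b'{"id":"9999","triggerEvent":"environment-deploy"}'
sig = hmac.new(secret, body, hashlib.sha1).hexdigest()
print(signature_matches(secret, body, sig))  # True
```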
