Welcome to OriginTrail documentation!

This document contains an overview of the ODN network structure, guides for installing and running an OriginTrail node, presents ODN functionalities and discusses supported data structuring on the network.

Feel free to leave suggestions or feedback on the GitHub repository for this documentation.

Table of Contents

Company Website: origintrail.io

Introduction

What is OriginTrail

OriginTrail is a purpose-built, open protocol for cross-organizational data sharing in supply chains, supported by blockchain.

The key issues OriginTrail tackles are:

  • Fragmented and siloed data across supply chains
  • Low data interoperability
  • Preventing vendor lock-in
  • Ensuring the integrity of exchanged data

The OriginTrail Ecosystem is built on 3 main pillars:

Neutrality — Being an open-source, decentralized system based on open global standards, neutrality is crucial for the OriginTrail ecosystem as it prevents vendor lock-in, ensures integrity, and effectively breaks data silos. Neutrality means adopting co-creation principles and working with other blockchain ecosystems and solution builders, even when they compete in the same market at the application level.

Usability — Blockchain environments, as well as OriginTrail itself, are fundamental technologies. In order to ensure the onboarding of enterprises, there needs to be a strong focus on enhancing the user experience, as solutions need to meet the expectations of rapid value generation.

Inclusiveness — Continuing to form partnerships with global technological and business leaders that can employ the OriginTrail ecosystem for their communities. Catering to the needs of leading global communities requires us to make strides in designing technical infrastructure and business models that support the adoption of OriginTrail in diverse business communities.

OriginTrail Decentralized Network Overview

OriginTrail protocol is utilized within the permissionless OriginTrail Decentralized Network (ODN). The ODN as a network holds a growing Decentralized Knowledge Graph (DKG) with the following characteristics:

  • Linked data first structure - the graph, enabling connections between data points from all published datasets on the network, conformant with Semantic Web technologies such as RDF and JSON-LD
  • Schema flexibility - enabling the mapping of virtually any data model, preferably structured according to relevant standards (such as GS1 EPCIS and CBV) and recommendations (W3C Web of Things, Verifiable Credentials, PROV, etc.) for machine readability
  • Identity verification - enabling the utilization of novel identity frameworks such as Self-sovereign identity, in conjunction with industry-specific identity frameworks (such as GS1 GTIN, GIAI, GRAI and other identification schemes)
  • Efficient cryptographic integrity verification of subgraphs, using associated dataset graph fingerprints, computed as Merkle roots of the input datasets
  • Cryptographic connection entanglement - allowing linking of data points only when specific cryptographic rules are satisfied
  • Trust minimization through decentralization - utilizing a decentralized p2p overlay network for data exchange and the Ethereum blockchain in the consensus layer

Therefore, the key development principles of the OriginTrail ecosystem are:

  • Connection-first approach - providing ways to connect the world’s data into a global, decentralized knowledge graph
  • Technological neutrality - avoiding technological lock-ins and striving towards agnosticism where possible
  • Decentralization - designing, implementing and utilizing solutions that are not based on trusted third parties or centralized entities
  • Privacy-by-Design approach - according to the 7 Foundational Principles of Privacy by Design
  • Development transparency - towards the OriginTrail Ecosystem community of developers, node holders and businesses
  • Open Source Development - according to Open Source Software principles

Dataset operations

Importing datasets to the local node knowledge graph

The ODN supports a multitude of data models (as listed below), with extensible support due to the flexible native OT-JSON data structure supported by the protocol, based on JSON-LD.

To introduce a structured dataset to the network, we first need to import it to the node’s local knowledge graph using the import command. This command initiates the import process and returns an internally-generated handler ID, which is a UUID. Using this handler ID, you can check the status of the import process, which can take a certain amount of time, depending on the input dataset size. If an import fails for any reason, the data field of the import result response will contain an error message and the status field will have the value “FAILED”.

The import API details are explained at the following link:

https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.0#/import
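
As an illustration, an import call could look like the sketch below. This is only a hypothetical example: the exact route, port and form field names should be verified against the Swagger documentation linked above, and the file name used here is a placeholder.

# Hypothetical sketch: import a GS1 EPCIS XML file into the local knowledge graph
# (route and field names are assumptions - check the linked API documentation)
curl -X POST http://localhost:8900/api/latest/import \
  -F "file=@./my_dataset.xml" \
  -F "standard_id=GS1-EPCIS"

# The response contains a handler_id (UUID); poll the import status with it,
# e.g. via an import result route (again, verify the exact path):
curl http://localhost:8900/api/latest/import/result/YOUR_HANDLER_ID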

Supported standards

The OriginTrail node supports the following standard data models for importing and connecting data in its Decentralized Knowledge Graph. This is by no means a definitive list, as the development community and core development team are extending support for multiple standards on an ongoing basis.

Standard / Data model name | Standardization body | Import Standard ID to be used | Description | Official documentation
CBV | GS1 | GS1-EPCIS | Supported within the standard EPCIS XML structure | Core Business Vocabulary
EPCIS 1.2 | GS1 | GS1-EPCIS | Supported within the standard EPCIS XML structure | EPCIS standard documentation
ID keys (GIAI, GRAI, GTIN, etc.) | GS1 | GS1-EPCIS | Supported within the standard EPCIS XML structure | GS1 Keys
Web of Things | W3C | WOT | Supports the native Web of Things JSON data model | Web of Things documentation (https://www.w3.org/Submission/wot-model)
Verifiable Credentials | W3C | OT-JSON | Supported through the protocol-native OT-JSON file structure | Verifiable Credentials Data Model
PROV | W3C | OT-JSON | Supported through the protocol-native OT-JSON file structure | W3C Provenance Data Model

The following API route returns a list of supported standard IDs:

https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.0#/info/get_standards
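
As a hypothetical example (the exact path is an assumption based on the Swagger documentation linked above), the list can be fetched with a simple GET request:

# Hypothetical sketch: list the standard IDs supported by the node
# (verify the exact route in the linked API documentation)
curl http://localhost:8900/api/latest/standards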

Replicating datasets across the ODN

In order to replicate a dataset to the network, it first has to be imported into the local knowledge graph on your node, so that it is properly structured and prepared for the network replication process. To initiate the replication process, use the replicate API call. This command will create a network data holding offer on the ODN, and the node will start a negotiation process with other nodes using the protocol's incentivized replication procedure. If the replication fails for any reason, the data field in the replication result call will contain an appropriate error message, and the status field will have the value "FAILED". Once the replication process is complete, the ODN offer details are written to the blockchain.

The replication API details are explained at the following link:

https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.0#/replicate
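
As an illustration, a replication request could look like the sketch below. This is only a hypothetical example: the exact route and body fields should be verified against the Swagger documentation linked above, and the dataset ID is a placeholder.

# Hypothetical sketch: create a replication offer for a previously imported dataset
# (route and body fields are assumptions - check the linked API documentation)
curl -X POST http://localhost:8900/api/latest/replicate \
  -H "Content-Type: application/json" \
  -d '{"dataset_id": "0x...your_dataset_id"}'

# Poll the replication status with the returned handler_id (verify the exact path):
curl http://localhost:8900/api/latest/replicate/result/YOUR_HANDLER_ID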

Verification and signature checks

When importing a dataset, the graph representation of the dataset is hashed to generate a unique identifier for that dataset, called the dataset_id.

The entire dataset (graph data together with dataset metadata) is used to generate a Merkle tree of the dataset, and the root hash of the Merkle tree is considered to be the dataset root hash. This process ensures data integrity because changing any part of the dataset would cause the dataset root hash to change.

When a dataset is replicated on the network, the integrity of the dataset can be verified by fetching the Merkle tree root hash from the blockchain (published during the replication procedure).

The fingerprint API route returns the Merkle tree root hash stored on the blockchain for the dataset with the given dataset_id. The fingerprint API details are explained at the following link:

https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.0#/info/get_fingerprint__id
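
As an illustration, fetching a dataset fingerprint could look like the sketch below. The exact route is an assumption and should be verified against the Swagger documentation linked above; the dataset ID is a placeholder.

# Hypothetical sketch: fetch the on-chain Merkle root (fingerprint) of a dataset
# (route is an assumption - check the linked API documentation)
curl http://localhost:8900/api/latest/fingerprint/0x...your_dataset_id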

Querying the data

Decentralized Knowledge Graph querying - Network query

Querying the DKG is done using the network query API. It is used to look up all datasets containing a specific identifier (such as a supply chain identifier, like a GS1 barcode or RFID value).

The query request is an array of values that identify a particular object in a dataset. These identifiers are sent as an array of objects, where the path parameter is the type of identifier (such as ean13, sgtin, sgln, or id for a general identifier), value is the identifier value or an array of possible values, and opcode is either EQ or IN, depending on whether the queried object identifier needs to equal or belong to the given value parameter.

{
  "query": [
    {
      "path": "sgtin",
      "value": "urn:epc:id:sgtin:271119.100294475",
      "opcode": "EQ"
    },
    {
      "path": "urn:epcglobal:cbv:mda#bestBeforeDate",
      "value": ["20-09-2020","21-09-2020", "22-09-2020"],
      "opcode": "IN"
    }
  ]
}

The returned responses contain an array of datasets which contain objects whose identifiers fit the given query. This response can then be used to import a desired dataset on one’s node, which will enable querying the graph locally or exporting and viewing the dataset.

The network query API details are explained at the following link:

https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.0#/network/post_network_query
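
As an illustration, the query body shown above could be submitted with an HTTP call like the sketch below. The exact route is an assumption and should be verified against the Swagger documentation linked above.

# Hypothetical sketch: submit a network query for a specific sgtin identifier
# (route is an assumption - check the linked API documentation)
curl -X POST http://localhost:8900/api/latest/network/query \
  -H "Content-Type: application/json" \
  -d '{"query":[{"path":"sgtin","value":"urn:epc:id:sgtin:271119.100294475","opcode":"EQ"}]}'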

Local Knowledge Graph querying - Graph Trail

Identifier types, identifier values and trail depth

Querying the local knowledge graph performs a graph traversal starting from a particular vertex in the graph and traversing over the specified edge types.

The result of the trail represents all objects found on the trail (the historical provenance trail spanning all datasets), along with an array that indicates which datasets those objects belong to.

{
  "identifier_types": [
    "ean13"
  ],
  "identifier_values": [
    "83213023"
  ],
  "depth": 5,
  "connection_types": [
    "EPC"
  ]
}

identifier_types and identifier_values are two arrays used to determine the starting object of the trail traversal. Note that these two arrays must be of the same length, and will be paired in the order they were given (first element of the identifier_types array corresponds to the first element of the identifier_values array, etc).

The depth parameter determines how far from the starting vertex the traversal will go. If the depth is set to 0, the traversal will return only the objects identified by the given parameters.

Connection types

connection_types is an array which serves as a filter in the graph trail traversal operation. When observing a vertex in the graph, only the vertices which are connected to the currently observed vertex by a relation type which is in the connection_types array will be visited and included in the graph.

_images/connection-example1.png

Example: In the graph pictured above, if the connection_types contained rel_type_1 and not rel_type_2, a traversal starting from vertex B would return vertex A and would not return vertex C.

In order to avoid backtracking in the trail and attaching superfluous information, a vertex will not be visited if the relation types on the path to that vertex are the same two times in a row.

_images/connection-example2.png

Example: In the graph pictured above, if the connection_types contained rel_type_1, a traversal starting from vertex A would return vertex B and would not return vertex C.

If the connection_types parameter is omitted, the entire graph is traversed (to the specified depth), without the backtracking prevention feature. It should be noted that the knowledge graph can be a highly dense graph, and traversing without filters can return extremely large results and might cause problems with node performance.

Reach parameter

Reach is an optional parameter that can be used to modify which objects are retrieved. When the reach parameter is specified as extended the node will execute the trail, then check which objects are referenced in the trail but are not included in it. These objects are then additionally retrieved from the local knowledge graph and appended to the trail response.

The default behaviour can be explicitly requested by setting the reach parameter value to narrow.


The trail API details are explained at the following link:

https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.0#/trail
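
As an illustration, the trail request shown earlier could be submitted with a call like the sketch below. The exact route is an assumption and should be verified against the Swagger documentation linked above.

# Hypothetical sketch: run a trail traversal on the local knowledge graph
# (route is an assumption - check the linked API documentation)
curl -X POST http://localhost:8900/api/latest/trail \
  -H "Content-Type: application/json" \
  -d '{"identifier_types":["ean13"],"identifier_values":["83213023"],"depth":5,"connection_types":["EPC"]}'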

Connectors

Connectors are a special type of vertex in the graph that enable traversing data from different data creators within a single trail request.

When creating a connector it is required to specify a connection identifier and the ERC-725 identity address of the data creator for whom the connection is intended.

If the connector vertices have matching identifiers, and the connector vertices were created by the corresponding dataset creators (which is determined by the dataset creator’s identity address), an additional set of two “connector” graph edges is created to enable a cryptographically verifiable connection between the two respective subgraphs.

Note that the connection between two connector vertices will only be created if both data creators specified the other's ERC-725 node identity in the connector.

Example: Data creator Alice replicates a dataset containing a connector CONN1 designated for a data creator Bob. When Bob adds a dataset containing a connector CONN1 designated for Alice to the local knowledge graph, a pair of edges (one for each direction) with relation type CONNECTION_DOWNSTREAM will be created between two connector vertices.

For specific information on how to create connectors depending on the data standard, see Data Structure Guidelines

Vertex Data permissioning

In cases when disclosing the full graph data publicly is not applicable to the implementation, it is possible to attach permissioned data to graph vertices, so that only trusted identities with permission will be able to read and verify it. This kind of functionality is possible through the OriginTrail protocol by using the permissioned data property.

Example of a permissioned data object can be found here.

Currently, permissioned data is only supported for datasets using the OT-JSON standard. By adding a permissioned_data attribute to any object in the @graph array, that data will not be shared with an arbitrary data holder when replicated. Instead, a Merkle root hash of the data inside the permissioned data object will be created in order to ensure data integrity. If the original data creator discloses the contents of the permissioned data, its integrity can be verified, since the appropriate Merkle root of the data was published to the OriginTrail Decentralized Network.
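
As a rough, hypothetical sketch (the exact placement of the attribute and the required fields should be checked against the linked example and the OT-JSON specification), an object in the @graph array carrying permissioned data might look similar to the following, where the property names are only placeholders:

{
    "@id": "urn:example:object:1",
    "@type": "otObject",
    "properties": {
        "examplePublicProperty": "visible to all data holders"
    },
    "permissioned_data": {
        "data": {
            "exampleSensitiveProperty": "only visible to whitelisted identities"
        }
    }
}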

If a data creator wishes to share permissioned data with a trusted third party, it can enable (whitelist) a specific decentralized identity (an ERC725 node identity, compatible with upcoming identity standards such as DID and the SSI framework) to view this data upon network read (see our API for more information). Although sharing of the permissioned data between parties is enabled by the protocol based on the decentralized identity authentication scheme, the protocol doesn't govern the usage of the data after it has been accessed.

Permissioned data trading and monetization features are currently in development, with support for blockchain purchase verification by implementing the FairSwap blockchain protocol.

Introduction to API

The purpose of this API is to allow data operations on a single node you trust, in order to control the data flow via API routes. For example, importing a single data file into a node’s database, replicating the data on the network or reading it from the node.

Detailed API routes can be found at this link.


Getting started with a single server node

Setup & manage your node

Hardware requirements

The recommended minimum specifications are a 2.2 GHz CPU and 2 GB of RAM, with at least 20 GB of storage space.

Installation instructions

Read Me First

Please keep in mind that we will do our best to support you while setting up and testing the nodes. Some features are subject to change, and we are aware that some defects may show up in different installations and usage scenarios.

If you need help installing OT Node or troubleshooting your installation, you can either:

  • engage in our Discord community and post your question,
  • contact us directly via email at tech@origin-trail.com.

Nodes can be installed in two ways:

  • via Docker, which is the recommended way, also explained on our website
  • manually

NOTE: For best performance when running a node, we recommend using a hosting service such as DigitalOcean.

Prerequisites

System requirements

  • at least 2 GB of RAM and a 2.2 GHz CPU

  • at least 20 GB of storage space

  • Ethereum and/or xDai wallet (you can find wallet setup instructions in the Identity Configuration section)

  • for a testnet node:

    • For Ethereum: at least 3000 test TRAC tokens and at least 0.05 test Ether
    • For xDai: at least 3000 test xTRAC tokens and at least 0.01 xDai
  • for a mainnet node:

    • For Ethereum: at least 3000 TRAC tokens and at least 0.05 Ether
    • For xDai: at least 3000 xTRAC tokens and at least 0.01 xDai

Installation via Docker

Prerequisites

Public IP or open communication

A public IP address, domain name, or open network communication with the Internet is required. If the node is behind NAT, please manually set up port forwarding to all of the node's ports.

Docker installed

The host machine needs to have Docker installed to be able to run the Docker commands specified below. You can find instructions on how to install Docker here:

For Mac https://docs.docker.com/docker-for-mac/install/

For Windows https://docs.docker.com/docker-for-windows/install/

For Ubuntu https://docs.docker.com/install/linux/docker-ce/ubuntu/

It is strongly suggested to use the latest official version.

Open Ports

By default, the Docker container will use ports 8900, 5278 and 3000. These can be mapped differently in the Docker container initialization command. Make sure they are not blocked by your firewall and that they are open to the public.
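
For example, if you want to expose the REST API on a different host port, you can change the host side of the -p mappings when creating the container. A hypothetical illustration, showing only the port flags (the full docker run commands are given below):

# Hypothetical sketch: expose the REST API on host port 8901 instead of 8900
-p 8901:8900 -p 5278:5278 -p 3000:3000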

Please note: port 8900 is used for REST API access, which is not available until the OT node is fully started. You can confirm this once the following log message is displayed by the running node:

info - OT Node started
Installation

Before running a node make sure you configure it properly first. You can proceed to the Node Configuration page.

Run a node on the MAINNET

Let’s just point Docker to the right image and configuration file with the following command:

sudo docker run -i --log-driver json-file --log-opt max-size=1g --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v ~/.origintrail_noderc:/ot-node/.origintrail_noderc origintrail/ot-node:release_mainnet

NOTE: Please make sure that your .origintrail_noderc file is ready before running the above command. In this example, the configuration file .origintrail_noderc is placed in the home folder of the current user (i.e. /home/ubuntu). You should point to the path where you created .origintrail_noderc on your file system.

Run a node on the TESTNET

Let’s just point Docker to the right image and configuration file with the following command:

sudo docker run -i --log-driver json-file --log-opt max-size=1g --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v ~/.origintrail_noderc:/ot-node/.origintrail_noderc origintrail/ot-node:release_testnet

NOTE: Please make sure that your .origintrail_noderc file is ready before running the above command. In this example, the configuration file .origintrail_noderc is placed in the home folder of the current user (i.e. /home/ubuntu). You should point to the path where you created .origintrail_noderc on your file system.

Manual installation

Prerequisites
NodeJS

If you don’t have Node.js installed head to https://nodejs.org/en/ and install version 9.x.x.

Note: Make sure you have precisely the version of Node.js specified above installed. Some features will not work well on versions lower or higher than 9.x.x.

Before starting, make sure your server is up to date. On Ubuntu, you can install Node.js with the following commands:

curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash
sudo apt-get install -y nodejs
Database - ArangoDB

ArangoDB is a native multi-model, open-source database with flexible data models for documents, graphs, and key-values. We are using ArangoDB to store data. In order to run OT node with ArangoDB you need to have a local ArangoDB server installed and running.

Head to arangodb.com/download, select your operating system and download ArangoDB. You may also follow the instructions on how to install it with a package manager, if available. Remember the credentials (username and password) used to log in to the Arango server, since you will need to set them later in .origintrail_noderc.

Installation

Clone the repository

git clone -b release/mainnet https://github.com/OriginTrail/ot-node.git

In the root folder of the project (ot-node), create a .env file. For manually running a mainnet node, add the following variable to the .env file:

NODE_ENV=mainnet

or for manually running a testnet node,

NODE_ENV=testnet

Before running a node, make sure you configure it properly first. You can proceed to the Node Configuration page.

Then run npm from the project root folder:

cd ot-node
npm install
npm run setup
Starting The Node

The OT node consists of two servers: the RPC server and the Kademlia node. Both servers are started with a single command:

npm start

You can find instructions regarding data import in the Import data section.

Important Notes

Before running your node for the first time you need to execute npm run setup to apply the initial configuration.

If you want to reset all settings, you can use npm run setup:hard. If you want to clear all the cache and recreate the database without deleting your identity, just run npm run setup.

In order to make the initial import, your node must whitelist the IP or host of the machine that is requesting the import in its configuration, i.e.:

{
    "network": {
        "remoteWhitelist": [ "host.domain.com", "127.0.0.1"]
    }
}

By default only localhost is whitelisted.

For more information see Node Configuration.

Useful commands

Check node status

To check if your node is running in Terminal, run the following command:

docker ps -a

This command will indicate if your node is running.

Starting OT Node

This command will start your node as a background process.

docker start otnode

This command will start your node in interactive mode and you will see the node’s process written in the terminal, but this command will not run your node as a background process, which means your node will stop if you close your Terminal/Console.

docker start -i otnode
Stopping OT Node

You can stop your node in the following two ways:

If you started your node with the docker start otnode command and you wish to stop it from running, use the following command in your terminal:

docker stop otnode

If you started your node by using the docker start -i otnode command, you can stop it either by closing the Terminal or simply by pressing Ctrl + C.

Configuration

Prerequisites

There’s a minimum set of config parameters that need to be provided in order to run a node, without which the node will refuse to start.

Basic configuration

To properly configure the node, you will need to create a config file in JSON format and provide some basic parameters for node operation. This file will be loaded by ot-node upon startup. Let's create the file .origintrail_noderc in the OT node root directory and store in it all the information about what kind of configuration we want to set up. The bare minimum of settings that need to be provided are two valid blockchain wallet addresses (currently xDai and Ethereum are supported):

  • The address and private key of the operational wallet (OW), which maps to node_wallet (OW public address) and node_private_key (OW private key). The operational wallet will be used by your node to execute basic node functionalities like applying for data holding offers and confirming completed offers.
  • The public address of the management wallet in the management_wallet parameter. The management wallet will be used to indicate which wallet has the rights to withdraw funds from your profile. Make sure that you have access to this wallet and that it is secure.

You have to have at least one blockchain implementation specified for your node to function, but you're free to use any and all of the supported blockchain implementations. Please do not change the blockchain_title and network_id parameters, as they are used to properly connect your configuration with your node.

You also need to provide a public web address or domain name of your node in the hostname field.

We create the .origintrail_noderc file with following content:

{
    "network": {
        "hostname": "your external IP or domain name here",
        "remoteWhitelist": [ "IP or host of the machine that is requesting the import", "127.0.0.1"]
    },
    "blockchain": {
        "implementations": [
            {
                "blockchain_title": "Ethereum",
                "network_id": "ethr:mainnet",
                "rpc_server_url": "url to your RPC server i.e. Infura or own Geth server",
                "node_wallet": "your ethereum wallet address here",
                "node_private_key": "your ethereum wallet's private key here",
                "management_wallet": "your ethereum management wallet public key here"
            },
            {
                "blockchain_title": "xDai",
                "network_id": "xdai:mainnet",
                "rpc_server_url": "url to your RPC server i.e. Infura or own Geth",
                "node_wallet": "your xDai wallet address here",
                "node_private_key": "your xDai wallet's private key here",
                "management_wallet": "your xDai management wallet public key here"
            }
        ]
    }
}

node_wallet and node_private_key - the operational xDai/Ethereum wallet address and its private key.

management_wallet - the management wallet for your node (note: the Management wallet private key is NOT stored on the node)

hostname - the public network address or hostname that will be used in P2P communication with other nodes for the node's self-identification.

remoteWhitelist - a list of IPs or hosts of the machines (e.g. "host.domain.com") that are allowed to communicate with the REST API.

rpc_server_url - a URL to the RPC host server, usually Infura or a self-hosted Geth server. For more, see RPC server host.

Configuration file

In general, the OT node uses the RC Node.js package (https://www.npmjs.com/package/rc) to load its configuration, and everything mentioned in the RC documentation applies to the OT node.

The application name used when detecting the config files is origintrail_node. Translated from the RC package page, the configuration sources are looked up as follows (from bottom towards top):

  • command line arguments, parsed by minimist (e.g. --foo baz, also nested: --foo.bar=baz)
  • environment variables prefixed with origintrail_node_ (use "__" to indicate nested properties, e.g. origintrail_node_foo__bar__baz => foo.bar.baz)
  • if you passed an option --config file, then from that file
  • a local .origintrail_noderc, or the first one found looking in ./ ../ ../../ ../../../ etc.
  • $HOME/.origintrail_noderc
  • $HOME/.origintrail_node/config
  • $HOME/.config/origintrail_node
  • $HOME/.config/origintrail_node/config
  • /etc/origintrail_noderc
  • /etc/origintrail_node/config
  • the defaults object you passed in

All configuration sources that were found will be flattened into one object, so that sources earlier in this list override later ones.
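
For example, based on the rules above, a nested configuration value such as network.hostname could be overridden with an environment variable when starting a manually installed node (a minimal sketch, using the npm start command described earlier):

# Override network.hostname via an RC environment variable instead of editing .origintrail_noderc
origintrail_node_network__hostname=my.domain.com npm start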

NOTE: To see all configuration parameters and their default values you can check this link:

https://github.com/OriginTrail/ot-node/blob/develop/config/config.json

Setting up an Ethereum RPC

For an OT node to use the Ethereum blockchain implementation, it must communicate with the Ethereum blockchain. This communication is achieved using the Ethereum JSON RPC protocol and an RPC-compatible server.

RPC server configuration

The RPC server URL must be provided in the OT node’s configuration file and it should be placed in the Ethereum blockchain section as rpc_server_url. For example:

{
    "blockchain": {
        "implementations": [
            {
                "blockchain_title": "Ethereum",
                "network_id": "ethr:mainnet",
                "rpc_server_url": "https://my.rpc.server.url:9000/"
            }
        ]
    }
}

For more on how to set up the configuration file go to Node Configuration

Using Infura as RPC host

Using Infura has several advantages: you do not need to host your own server, configure an Ethereum node client, or scale the supporting infrastructure.

In order to use it, create an account at https://infura.io. Once logged in, you can create a project, which gives you a project ID, a project secret and an endpoint. That endpoint is the RPC server URL needed for the node to run. Make sure you pick the right one for the target network: select RINKEBY to get the URL used for the OriginTrail testnet, or MAINNET for OriginTrail's mainnet.

Using own Ethereum node as RPC host

To use your own Ethereum node as an RPC server, make sure it is properly configured and the RPC feature is enabled (--rpc parameter). For more details on how to install and configure an Ethereum node, see: https://github.com/ethereum/go-ethereum/wiki/Installing-Geth.

Once the Ethereum node is up and running, point the OT node to its URL.

Setting up an xDai RPC

The RPC server for the xDai blockchain is publicly available, so you do not need to add it in your configuration as it is already included in the ot-node default configuration.

Setting up SSL on a node

Before you begin setting up an SSL connection for a node’s remote API, make sure you have prepared certificates and registered a domain. Once you have enabled a secure connection, it will be used for both API (default port 8900) and remote control (default port 3000). If you are using different ports than the defaults, make sure you map them correctly during container initialization.

Prerequisites

Make sure your certificates are in PEM format and stored locally, as you will need to provide them to the node or Docker container running the node.

Configuration

Let's assume that your domain certificates (for example, for my.domain.com) are stored in /home/user/certs. The fullchain.pem and privkey.pem files should be in that directory.

Edit the node’s configuration file and make sure it has the following items in the JSON root:

"node_rpc_use_ssl": true,
"node_rpc_ssl_cert_path": "/ot-node/certs/fullchain.pem",
"node_rpc_ssl_key_path": "/ot-node/certs/privkey.pem",

With the above, we are telling the node to look for the certificates at the following path: /ot-node/certs/. That is where we are going to place them in the container.

Now, create the docker container and mount cert dir into the container. We can achieve this by adding additional parameters ‘-v /home/user/certs:/ot-node/certs/’ to the container creation command. For example, the initialization of the Docker container for the OT node for the mainnet could look like this:

sudo docker run -i --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v /home/user/certs:/ot-node/certs/ -v ~/.origintrail_noderc:/ot-node/.origintrail_noderc origintrail/ot-node:release_mainnet

After this, the running container will be able to find certificate files at the ‘/ot-node/certs/’ location.

How to update

OT Node has a built-in update functionality which will be triggered upon OT Node start.

Docker

In order to trigger the update, you must restart the OT Node by using the following command:

docker restart otnode

After a successful update OT Node will be rebooted automatically.

NOTE: By default, the node comes with the auto-update feature turned on (it can be turned off in the configuration). If auto-update is on, the node checks for updates every 6 hours and will automatically download and install the newest version when it is available, without the need for a manual restart.

Manual installation

Make sure that you are in the root directory of OT Node. The following commands will update the OT Node.

git pull
docker stop otnode

Database migrations need to be triggered manually.

node_modules/.bin/sequelize --config=./config/sequelizeConfig.js db:migrate

Database seed needs to be triggered manually as well.

node_modules/.bin/sequelize --config=./config/sequelizeConfig.js db:seed

In order to apply the update, you must restart the OT Node by using the following command:

docker start otnode

Setting up a high availability node

Coming soon…

Identity management

Introduction

On the OriginTrail Decentralized Network, your node is identified using two identities: a network layer identity and a blockchain layer identity.

Network Identity

The network layer identity is used by other nodes on the network to contact your node, whether it be for offers, queries, or any other functionality.

Blockchain profile and identity

Each node on the OriginTrail network is represented by its profile, which contains information about node identification, the profile itself and the token balance.

The node profile is a structure inside of a smart contract that is identified by your node’s ERC725 identity, and contains network identity and other operational information. This contract is also responsible for operating with tokens.

ERC 725 is a proposed standard for blockchain-based identity authored by Fabian Vogelsteller, creator of the ERC 20 token standard and Web3.js. ERC 725 describes proxy smart contracts that can be controlled by multiple keys and other smart contracts, and its contracts are deployed on the Ethereum blockchain.

The OriginTrail Blockchain Identity is an ERC725 compatible smart contract and utilizes the standard for key management. It distinguishes two different types of keys in the identity contract:

  • The operational key (wallet), whose private key is stored on the node itself and is used to perform a multitude of operations on the ODN (signing, execution, etc.). It requires a small balance of ETH in order to be able to publish transactions to the blockchain, and it can be topped up periodically. No TRAC tokens are required for this wallet, except for the initial node startup.
  • The management key (wallet), whose private key is NOT stored on the node and is used to deal with the funds (TRAC rewards) and to manage the keys associated with the ERC725 identity. The management wallet can be any ERC20 supporting wallet (Trezor, Ledger, MetaMask etc).

This approach is taken as a safety and convenience measure to provide flexibility with key management and to minimize the risk of losing funds in case the operational key stored on the node somehow gets compromised. It is the node holder’s responsibility to keep both their node and wallet safe.

Identity values and identity files

ERC725 Identity value

On an installed node, the easiest way to find the ERC725 identity value is to look for it in the node log:

notify - Identity created for node ab2e1b1e520cac0d1321cd3760c2e7473970ec8a. Identity is 0x99c67054a8c7b7fa62243f0446eacd80c6ff0aff.

The last value (in the above case, 0x99c67054a8c7b7fa62243f0446eacd80c6ff0aff) represents the blockchain identity. Alternatively, it can be copied from the node's container:

# Copies file to HOME dir
docker cp otnode:/ot-node/data/erc725_identity.json ~

Network Identity value

The node's network identity can be found in the node startup log, in a line similar to this:

notify - My network identity: ab2e1b1e520cac0d1321cd3760c2e7473970ec8a

This value (in the above example, ab2e1b1e520cac0d1321cd3760c2e7473970ec8a) is the value of the network identity. Alternatively, it can be copied from the node's container:

# Copies file to HOME dir
docker cp otnode:/ot-node/data/identity.json ~
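
Assuming the default container name otnode, the identity lines quoted above can also be found by searching the node logs directly, for example:

# Search the node logs for the identity lines quoted above
docker logs otnode 2>&1 | grep "Identity created for node"
docker logs otnode 2>&1 | grep "My network identity"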

Both the network and blockchain identities will be automatically generated when a node is started for the first time (if they were not pre-generated), creating identity files in the configuration directory. These files enable the node to have the same identity every subsequent time it is run.

We highly recommend backing up the identity files as soon as the node is set up.

How to use existing identity files

If you wish to run an identical node on another machine, then in addition to backing up your node configuration (.origintrail_noderc) file, you should back up the erc725_identity.json and identity.json files. If you start a node on a different machine without providing the identity files, the node will create completely new identities, and you will end up having a different node on the network.

Let's say a user already has the network and ERC725 identity files in the home directory:

  • .origintrail_noderc - node configuration
  • .identity.json - network identity
  • .erc725_identity.json - ERC725 identity
docker run -it --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v ~/.origintrail_noderc:/ot-node/.origintrail_noderc -v ~/.identity.json:/ot-node/data/identity.json -v ~/.erc725_identity.json:/ot-node/data/erc725_identity.json origintrail/ot-node:release_mainnet

Note

Please note this example is for mainnet. For testnet use origintrail/ot-node:release_testnet instead

Identity management

To make it easier to interact with your node blockchain profile (to deposit and withdraw tokens) and identity (to edit your operational or management keys), we have provided a convenient UI at this link.

Token management

Staking and locking tokens

In order for a node to create or accept offers on the network, it needs to stake tokens. Those tokens are locked for the duration of the offer and cannot be directly withdrawn by either party. A data holder can pay out a portion of the tokens allotted for the offer, proportional to the percentage of the agreed holding time. Paying out the tokens includes transferring tokens from the data creator’s profile to the data holder’s and unlocking the data holder’s staked tokens.

Withdrawing tokens

Tokens which are not locked can be withdrawn to your management wallet. Be aware that withdrawal is a two step process, where the node requests to withdraw tokens and, after the withdrawal period, the tokens are transferred to the management wallet which executed the second step. This two step process ensures that your node gracefully adapts to new offers within the withdrawal period. The withdrawal period is currently set to 5 minutes.

You can stake or withdraw tokens on the Node Profile interface.

Key (wallet) management

Important: Please note that changing a wallet in the node configuration file does not change the wallet in your ERC725 identity. The wallet you wish to add first needs to have the appropriate permissions on the ERC725 identity before it can be changed in the node configuration.

Multiple management and operational wallets can be registered on a single ERC725 identity. One management wallet must always be registered. It is possible to remove all operational wallets and use a management wallet as the operational wallet at the same time, but we strongly discourage this scenario as it is not as secure as using separate wallets.

We recommend using the Node Profile interface for any changes of key permissions on the ERC725 identity.

Changing operational keys (wallets)

Changing the operational wallet on a node is done using the following steps

  1. Add new operational wallet to the ERC725 identity
  2. Set the new operational wallet and corresponding private key as the node_wallet and node_private_key in the node configuration file (.origintrail_noderc)
  3. Restart the OriginTrail node
  4. Remove the old operational wallet from the ERC725 identity
Changing management keys (wallets)

Changing the management wallet is done by adding the new management wallet to the ERC725 identity and then removing the old one.

The latest version of OriginTrail node supports data backup and restoration. With it you can save all your current data and restore it on a clean docker image. Below is a guide on how to back up and restore your node.

If you need additional assistance there is a support chat available on our knowledge base.

Configuration parameters

The ot-node has many configuration parameters which change how the node behaves and what its identity on the network is.

Note

This page currently does not cover all parameters. The intention is to over time cover all available configuration parameters, but at the time of writing it’s a work in progress.

Important

Some numeric values are specified as strings, while others are specified as numbers. Please be careful to use the correct format for those parameters, otherwise it might cause issues with your node.

Dataset pruning section

How to check your node’s storage usage

Note

The dataset pruning feature is a new addition to the ot-node and we would like to hear from you if you’re experiencing unexpected behaviour. This section will explain how to check your node’s storage usage so you can identify unexpected behaviour and inform us so we can fix any edge cases.

To check your node's storage usage, run the following two commands:

du -h --max-depth 0 /var/lib/docker/
docker system df

Take note of how much space Docker and the Docker containers are using. Now enable the dataset pruning feature in your node configuration and restart your node. Wait until your node shows a log line stating Sucessfully pruned XYZ datasets, then measure your new storage usage with the two previous commands.

If your node has pruned more than a hundred datasets and your node storage usage hasn’t changed, make sure your node is running and execute docker system prune -a -f to prune unused docker data. Along with that you can restart the server your node is running on. If the usage still remains the same, please contact us at tech@origin-trail.com with the subject “Dataset pruning storage issue”.

Dataset pruning section explained

This section enables the node to remove datasets which have expired, freeing up space for new datasets on the network. The dataset_pruning parameter should be specified in the root level of the configuration object.

Warning

Because of their opposing behaviour, the ot-node restore process does not work well when the dataset pruning feature is enabled. We therefore strongly recommend disabling the dataset pruning feature before you run the node restore process, and re-enabling it after the restore process has completed.

The structure and the default values for the section are shown below:

{
    "dataset_pruning": {
        "enabled": false,
        "imported_pruning_delay_in_minutes": 1440,
        "replicated_pruning_delay_in_minutes": 1440
    }
}

The enabled parameter is a boolean value which determines whether the datasets should be pruned or not. If the pruning feature is enabled the node will check every 24 hours which datasets should be pruned and remove them from the node’s graph database and remove corresponding data from its operational database.

The replicated_pruning_delay_in_minutes parameter is a number value and determines how long the node should wait after all offers for a dataset have completed (the holding time for the offers has passed) before pruning the dataset. This is used for datasets which data holder nodes have received over the network and which data creator nodes have replicated.

The imported_pruning_delay_in_minutes parameter is a number value and determines how long the node should wait after importing a dataset before pruning it. This is used for datasets which data creator nodes have imported but have not replicated on the network.

Backup and restore

Introduction

If you've found your node failing due to some unexplained error, a clean docker image might remove the issue. But just downloading a new docker image would cause your node to lose the data it's currently holding, making it susceptible to litigation and loss of tokens. This guide walks you through the process of reinstalling your ot-node while keeping all the data necessary for your node to keep its jobs.

⚠️ Before you start ⚠️

Make sure your node is running the latest version of ot-node. This tutorial downloads the latest version of the code, and using different ot-node versions for backing up and restoring your node can cause issues.

Step 1/3: Backing up your data

The first thing to do is to back up your files and store them outside your docker container, so that you can delete the container and install a new one. Run the following commands to create a backup:

docker exec otnode node /ot-node/current/scripts/backup.js --configDir=/ot-node/data
docker cp otnode:/ot-node/backup ./
ls ./backup

These commands should show something similar to the following image

_images/backup.png

Please check the contents of the latest backup directory, indicated by the directory name (in the above case, the folder named 2020-03-20T09:10:28.059Z). It should contain all of the files shown below.

_images/backup-contents.png

Warning

DO NOT proceed unless the backup folder contains all of the files shown above. Contact support at tech@origin-trail.com for guidance on backing up your node safely.

Step 2/3: Reinstalling your docker image

Now we can stop the container and download a new one. Run the following commands:

docker stop otnode
docker rm otnode
imageId=$(docker images | grep otnode | awk '{print $3}')
docker rmi $imageId

Now that you've successfully removed your image, you can download a new one. Run the following command to download a new docker image:

sudo docker create -i --log-driver json-file --log-opt max-size=1g --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v ~/.origintrail_noderc:/ot-node/.origintrail_noderc origintrail/ot-node:release_mainnet

Note

If you’re running a testnet node, just replace mainnet with testnet in the command. Also, thanks for helping us test new features, you rock! 🤘

The last thing to do is to put your backup into your new container.

Step 3/3: Restoring the node data

Warning

Because of their opposing behaviour, the ot-node restore process does not work well when the dataset pruning feature is enabled. We therefore strongly recommend disabling the dataset pruning feature before you run the node restore process, and re-enabling it after the restore process has completed.

The dataset pruning feature is disabled by default, but if you have it enabled, you can see how to disable it in the configuration parameters section.

Extract the restore script from the container with the following command

docker cp otnode:/ot-node/current/scripts/restore.sh ./

And now run it:

./restore.sh

That’s it! Your node should be running now, you can go ahead and see the logs by running:

docker logs otnode -f

Additional options

If you've backed up your files in a different place or are using a custom directory for your data on the node, you can edit those in the restore script. Run the following command to see all the options for the restore command:

./restore.sh --help

How to update your node from v4 to v5 and enable additional blockchains

This article will guide you through updating your node to version v5 and enabling additional blockchain integrations.

⚠️ Before you start ⚠️

Preparing your host machine

The newest ot-node update can require more memory than the minimum hardware requirements specify. This is why, if you're running the ot-node on a system with 2 GB of memory, we recommend that you do one of two things before you update your node:

  • Increase the amount of memory the host machine has. If you’re running the node on a server please stop your node with docker stop otnode before making changes to a server.
  • Enable swap space on your machine. You can see how to do so here. Once you enable swap space please restart your node.

Command variables

In order to run the commands in this guide you will need to know the name of your docker container and the path to your node’s configuration file.

In the commands listed below, you should substitute DOCKER_CONTAINER_NAME with your docker container name, and you should substitute NODE_RC_PATH with the path to your configuration file on your server.

Note

If you followed the default installation instructions, your container name will be otnode and your configuration file path will be .origintrail_noderc (including the dot)

Checking node version

Before you start, make sure your ot-node is running on v4.1.17, which you can check by running the following command.

docker logs DOCKER_CONTAINER_NAME | grep "Version check" | tail -1

The number beside the "local version" is your node's version. If it is not 4.1.17, this guide will not work; please update your node to v4.1.17 before proceeding with this guide.

Step 0: Back up your node data

We strongly recommend backing up your node data before you update your node, in case of an unexpected failure. You can find the instructions on backing up your node data in the Backup and Restore section. You only need to complete the first step ("Backing up your data") and you can safely continue with updating your node.

Warning

If you do not back up your node, it will not be possible to recover it in case of an error. Skip this step at your own risk.

How to update your node configuration manually

Open your node configuration file in an editor you’re familiar with, for example nano

nano .origintrail_noderc

Then apply the following changes:

1. Edit your blockchain section so that it contains an array called “implementations” which contains objects

Before:

{
  "blockchain": {
     "rpc_server_url": "...",
     "...": "..."
  },
  "...": "..."
}

After:

{
  "blockchain": {
     "implementations": [
        {
           "rpc_server_url": "...",
           "...": "..."
        }
     ]
  },
  "...": "..."
}

2. Move your node wallet values inside the blockchain implementation

Before:

{
  "blockchain": {
     "rpc_server_url": "...",
     "...": "..."
  },
  "node_wallet": "0x123...",
  "node_private_key": "481...",
  "management_wallet": "0xabc...",
  "...": "..."
}

After:

{
  "blockchain": {
     "implementations": [
        {
           "rpc_server_url": "...",
           "node_wallet": "0x123...",
           "node_private_key": "481...",
           "management_wallet": "0xabc...",
           "...": "..."
        }
     ]
  },
  "...": "..."
}

3. If you have a custom ERC725 identity filepath set, move it also to the blockchain section and rename the parameter to “identity_filepath”

Before:

{
  "blockchain": {
     "rpc_server_url": "...",
     "...": "..."
  },
  "node_wallet": "0x123...",
  "node_private_key": "481...",
  "management_wallet": "0xabc...",
  "erc725_identity_filepath": "myid.json",
  "...": "..."
}

After:

{
  "blockchain": {
     "implementations": [
        {
           "rpc_server_url": "...",
           "node_wallet": "0x123...",
           "node_private_key": "481...",
           "management_wallet": "0xabc...",
           "identity_filepath": "myid.json",
           "...": "..."
        }
     ]
  },
  "...": "..."
}

4. If you have the "id" parameter specified in the "network" section, remove it so that it is loaded from the default configuration

Before:

{
  "network": {
    "id": "MainnetV4.0",
    "remoteWhitelist": ["..."],
    "...": "..."
  },
  "...": "..."
}

After:

{
  "network": {
    "remoteWhitelist": ["..."],
    "...": "..."
  },
  "...": "..."
}

5. Add the new necessary fields, “blockchain_title” and “network_id”, to the blockchain implementation:

Before:

{
  "blockchain": {
     "rpc_server_url": "...",
     "...": "..."
  },
  "node_wallet": "0x123...",
  "node_private_key": "481...",
  "management_wallet": "0xabc...",
  "erc725_identity_filepath": "myid.json",
  "...": "..."
}

After (for mainnet):

{
  "blockchain": {
     "implementations": [
        {
           "blockchain_title": "Ethereum",
           "network_id": "ethr:mainnet",
           "rpc_server_url": "...",
           "node_wallet": "0x123...",
           "node_private_key": "481...",
           "management_wallet": "0xabc...",
           "identity_filepath": "myid.json",
           "...": "..."
        }
     ]
  },
  "...": "..."
}
Before:

{
  "blockchain": {
     "rpc_server_url": "...",
     "...": "..."
  },
  "node_wallet": "0x123...",
  "node_private_key": "481...",
  "management_wallet": "0xabc...",
  "erc725_identity_filepath": "myid.json",
  "...": "..."
}

After (for testnet):

{
  "blockchain": {
     "implementations": [
        {
           "blockchain_title": "Ethereum",
           "network_id": "ethr:rinkeby:1",
           "rpc_server_url": "...",
           "node_wallet": "0x123...",
           "node_private_key": "481...",
           "management_wallet": "0xabc...",
           "identity_filepath": "myid.json",
           "...": "..."
        }
     ]
  },
  "...": "..."
}

6. Restart your node and verify update

Restart your node with the following command so that the changes are loaded into the node:

docker restart otnode

After restarting, we recommend observing your node logs with the following command and watching for any errors that show up:

docker logs otnode --tail 1000 -f

Once you see a log line stating OT Node started your node is successfully updated and running on the newest version, congratulations!

In case of any problems or questions, please direct your inquiries to the #v5-update OriginTrail Discord channel to get the quickest support from the OriginTrail community and core developers.

If you have decided to enable xDAI support, please consult the Enabling xDai section to understand the procedure and how it relates to the tokens being used.

Your node identity on Ethereum will not change, and there will be no additional transactions (cost) if you update your configuration with only the Ethereum blockchain enabled. In case of any issues, please get in touch via support@origin-trail.com.

How to update your node automatically (both testnet and mainnet nodes)

Step 1: Extract the migration script for updating the node

First run the following command:

curl -O https://raw.githubusercontent.com/OriginTrail/ot-node/feature/update-migrate-script/scripts/migrate_to_v5.sh

This will download the migration script to your node server; you will need it for the next step (step 2).

Step 2: Run the script

Run the following command:

chmod +x migrate_to_v5.sh

This will set the migration script as an executable file, enabling you to run it.

To update your node run the migration script with the following command:

./migrate_to_v5.sh --node_container_name=DOCKER_CONTAINTER_NAME --node_rc_path=NODE_RC_PATH

This command will adapt your configuration file to the new format required by OT-node v5, install the new node version and restart your node so it starts running on the new version.

Note

If you’re using the default docker container name and configuration file path you can just run the command without any parameters (shown below) instead of the command shown above.

./migrate_to_v5.sh

Step 3: Verifying the update

After the migration script finishes executing, we recommend observing your node logs with the following command and watching for any errors that show up.

docker logs DOCKER_CONTAINER_NAME --tail 1000 -f

Once you see a log line stating OT Node started your node is successfully updated and running on the newest version, congratulations!

In case of any problems or questions, please direct your inquiries to the #v5-update OriginTrail Discord channel to get the quickest support from the OriginTrail community and core developers.

Step 4: Enabling additional blockchain integrations

Once you’ve updated your node to version 5 you can follow the steps below to enable newly introduced OriginTrail blockchain implementations such as xDai on mainnet or an additional rinkeby implementation on testnet.

The instructions below explain how to enable the xDai implementation on a mainnet node. If you're running a testnet node, go to the Testnet Update steps.

MAINNET UPDATE: Enabling xDai on OriginTrail mainnet

Before you start: Acquiring funds

In order for your node to operate with the xDAI blockchain, you’re going to need TRAC on xDAI and xDai tokens, in the same way that your node needs TRAC and ETH to function on Ethereum.

Note

For your OT node to run on xDAI blockchain you will need at least 3000 TRAC on xDAI as the minimum required stake to run an ODN node.

Edit your configuration

The first thing to do when implementing the xDai blockchain is to open your node config file (which is in the root folder and is by default named .origintrail_noderc).

In order to edit your config file, you should open it in a text editor and change its contents. For example, if you’re familiar with using the nano editor, you could run this command:

nano .origintrail_noderc

Once you’ve opened the config file for editing, find the blockchain object and the “implementations” array and add another object to the config, so that it looks as follows:

{
    "implementations": [
        {
            "blockchain_title": "Ethereum",
            "network_id": "ethr:mainnet",
            "node_wallet": "your_wallet_address",
            "node_private_key": "your_wallet_private_key",
            "management_wallet": "your_management_wallet",
            "identity_filepath": "erc_725_identity.json",
            "rpc_server_url": "your_rpc_url"
        },
        {
            "blockchain_title": "xDai",
            "network_id": "xdai:mainnet",
            "node_wallet": "your_wallet_address",
            "node_private_key": "your_wallet_private_key",
            "management_wallet": "your_management_wallet",
            "identity_filepath": "xdai_identity.json"
        }
    ]
}

Replace the values starting with your_ (your_wallet_address, your_wallet_private_key, your_management_wallet, your_rpc_url) with real values and save your changes.

Note

You can use different wallets for different blockchain implementations, assuming you have the appropriate funds on the wallet you specified for each blockchain implementation (ETH and TRAC for the Ethereum implementation and xDai and xTRAC for the xDai implementation). In the case of Ethereum and xDAI, you can use the same wallet as they are compatible.

Restart your node

Once you’ve edited the config, restart your node by running the command below to apply the changes.

docker restart DOCKER_CONTAINER_NAME

Once your node starts it should create a new blockchain identity and profile and start listening to blockchain events on the xDai blockchain.

You can verify that your node successfully connected to the xDai blockchain by checking that there is a log similar to the one pictured below (notice the xdai:mainnet blockchain id):

_images/xdai-profile-creation.png

After that your node will listen to blockchain events from the xDai blockchain and will accept offers that are published via xDai. Your node is successfully running on the xDai chain, congratulations!

Note

If you wish to set a custom dh_price_factor value, specify it inside the implementation object (for example, below the network_id parameter). The parameter must be added to every blockchain implementation you have declared, as shown in the sketch below.
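For illustration, a sketch of where the parameter sits in the configuration (the value shown is only an example, not a recommendation):

{
    "implementations": [
        {
            "blockchain_title": "Ethereum",
            "network_id": "ethr:mainnet",
            "dh_price_factor": "2",
            "...": "..."
        },
        {
            "blockchain_title": "xDai",
            "network_id": "xdai:mainnet",
            "dh_price_factor": "2",
            "...": "..."
        }
    ]
}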

TESTNET UPDATE: Enabling the additional rinkeby implementation for OriginTrail testnet nodes

Before you start: Acquiring funds

In order to attach your node to the additional testnet rinkeby ODN implementation, you’re going to need at least 3000 ATRAC tokens and 0.01 rinkeby Ether on your wallet.

To acquire the ATRAC, you can use the ODN-Faucet Discord bot by joining our Discord server and then sending a message with !fundme your_wallet_address (replace your_wallet_address with your actual wallet address). You can see an example of how to do it in the image below:

_images/faucet-usage.png
Edit your configuration

The first thing to do when enabling the additional implementation is to open your node config file (which is in the root folder and is named .origintrail_noderc by default).

In order to edit your config file, you should open it in a text editor and change its contents. For example, if you’re familiar with using the nano editor, you could run this command:

nano .origintrail_noderc

Once you’ve opened the config file for editing, find the blockchain object and the “implementations” array and add another object to the config, so that it looks as follows:

{
    "implementations": [
          {
              "blockchain_title": "Ethereum",
              "network_id": "ethr:rinkeby:1",
              "node_wallet": "your_wallet_address",
              "node_private_key": "your_wallet_private_key",
              "management_wallet": "your_management_wallet",
              "identity_filepath": "erc_725_identity.json",
              "rpc_server_url": "your_rpc_url"
          },
          {
              "blockchain_title": "xDai",
              "network_id": "ethr:rinkeby:2",
              "node_wallet": "your_wallet_address",
              "node_private_key": "your_wallet_private_key",
              "management_wallet": "your_management_wallet",
              "identity_filepath": "rinkeby_2_identity.json",
              "rpc_server_url": "your_rpc_url"
          }
    ]
}

Replace the values starting with your_ (your_wallet_address, your_wallet_private_key, your_management_wallet, your_rpc_url) with real values and save your changes.

Note

You can use different wallets for different blockchain implementations, assuming you have the appropriate funds on the wallet you specified for each blockchain implementation.

Restart your node

Once you’ve edited the config, restart your node by running the command below to apply the changes.

docker restart DOCKER_CONTAINER_NAME

Once your node starts it should create a new blockchain identity and profile and start listening to blockchain events.

You can verify that your node successfully connected to the additional implementation by checking that there is a log similar to the one pictured below (notice the ethr:rinkeby:2 blockchain id):

_images/rinkeby-2-profile-creation.png

After that your node will listen to blockchain events from the additional implementation and will accept offers that are replicated using it. Your node is successfully running on multiple blockchain implementations simultaneously, congratulations!

In case of any problems or questions, please direct your inquiries to the #v5-update OriginTrail Discord channel to get the quickest support from the OriginTrail community and core developers.

Data Structure Guidelines

GS1 EPCIS XML

The OriginTrail node supports the GS1 EPCIS 1.2 standard for importing and connecting data in the knowledge graph. You can learn more about the GS1 EPCIS standard here.

This document will show how the GS1 EPCIS data is represented in the Knowledge Graph inside one node.

Document data

The EPCIS guideline suggests the “Standard Business Document Header” (SBDH) standard for describing document data. This data resides in the EPCIS Header part of the file and contains basic information about the file (sender, receiver, ID, purpose, etc.).

Although OriginTrail receives the file and could be listed as a receiver (SBDH allows defining multiple receivers), it is not necessary to include it. The receiver should be an entity involved in the business process, not in the data processing.

This data will be stored separately from the dataset contents within the knowledge graph, as metadata.

Master data

The EPCIS standard describes four ways to process Master data. OriginTrail currently supports the most common one: including the Master data in the header of an EPCIS XML document.

Since visibility event data contains only identifiers of objects, locations or parties, the Master data serves to further describe them in a more human-readable way. Master data will be connected to the visibility event data whenever its identifiers are found inside the visibility event data.

Visibility event data

The main focus of the EPCIS standard is formalizing the description of event data generated by activities within the supply chain. OriginTrail supports ObjectEvent, AggregationEvent, TransformationEvent and TransactionEvent, which are thoroughly described in the standard. We strongly advise reading the GS1 EPCIS implementation guideline and evaluating our example files.

Event data describes interactions between entities described with master data. OriginTrail distinguishes between two types of event data:

  • Internal events are related to processes of object movements or transformations (production, repackaging etc) within the scope of one supply chain participant’s business location (read point) as part of some business process.

    For example, this could be production or assembly that results in output which is used for further production or for sale (repackaging, labeling etc). The important distinction is that the ownership of event objects does not change during the event.

  • External events are related to processes between different supply chain participants (sales/purchases, transport). They represent processes where the jurisdiction or ownership of the objects gets changed in the supply chain. These types of events should use connectors for connecting between parties.

How an event is represented in the graph

When converting an EPCIS Visibility Event to a graph, a central vertex will be created for the event. Any event identifiers will be created as separate vertices in the graph, connected to the event vertex, in order to enable connections to other entities with the same identifier.

Any observed objects in the event (the name varies depending on the event type, see the EPCIS data structure) will be added as separate vertices, with a relation created from the object to the event. This enables the objects to connect to their respective master data, if available, as the information about the object will be set as that object’s properties.

If the event contains bizLocation and/or readPoint attributes, those will be created as separate vertices, similar to the way it is done for observed objects in the event.

Another part of a visibility event that generates a separate vertex is a connector, which is explained in the following section.

Connectors in EPCIS files

If the event is external (see above) and should be connected to an event from another data creator’s dataset (such as a business partner), the bizTransactionList should have a bizTransaction attribute containing a connection identifier and the corresponding data creator’s decentralized identity (currently the Ethereum ERC-725 identity is supported), separated by a colon, as sketched below. This will create a connector vertex in the graph and connect it to the event it belongs to.
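For illustration only, such an entry might look like the following sketch, where CONNECTION_ID_1 is a made-up shared connection identifier and the hexadecimal value stands for the partner’s ERC-725 identity:

<bizTransactionList>
    <bizTransaction>CONNECTION_ID_1:0x9353a6c07787170a43c4eb23f59567811336a8f3</bizTransaction>
</bizTransactionList>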

Once the corresponding data creator creates an event containing the same connection identifier with your decentralized identity, an analogous connector vertex will be created and the two connector vertices will be connected together. This feature enables querying the knowledge graph data belonging to multiple parties.

Permissioned data in EPCIS files

In cases where disclosing the full data publicly is not applicable to the implementation, it is possible to add a visibility property to an attribute of a VocabularyElement in the EPCISMasterData section. Data marked as permissioned will be visible only to the data creator and the parties the data creator whitelists via the API. More information on permissioned data is available at Vertex Data permissioning.

There are two visibility options available:

In cases where only the value of the attribute needs to be hidden, use visibility="permissioned.show_attribute". Example:

<VocabularyElement id="id:Company_Green_with_permissioned_data">
    <attribute id="id:name" visibility="permissioned.show_attribute">Green</attribute>
</VocabularyElement>

In cases where the whole attribute needs to be hidden, use visibility="permissioned.hide_attribute". Example:

<VocabularyElement id="id:Company_Green_with_permissioned_data">
    <attribute id="id:wallet" visibility="permissioned.hide_attribute">0xBbAaAd7BD40602B78C0649032D2532dEFa23A4C0</attribute>
</VocabularyElement>

For more information on structuring XML EPCIS files, see XML EPCIS Examples.

Verifiable credentials data model

What is a Verifiable Credential

If we look at the physical world, a credential might consist of:

  • Information related to identifying the subject of the credential (for example, a photo, name, or identification number)
  • Information related to the issuing authority (for example, a city government, national agency, or certification body)
  • Information related to the type of credential this is (for example, a Dutch passport, an American driving license, or a health insurance card)
  • Information related to specific attributes or properties being asserted by the issuing authority about the subject (for example, nationality, the classes of vehicle entitled to drive, or date of birth)
  • Evidence related to how the credential was derived
  • Information related to constraints on the credential (for example, expiration date, or terms of use).

A verifiable credential can represent all of the same information that a physical credential represents. The addition of technologies, such as digital signatures, makes verifiable credentials more tamper-evident and more trustworthy than their physical counterparts.

Verifiable credential data can be placed inside a generic OT-JSON object (see OT-JSON Structure) with an additional identifier and can be queried using the local knowledge graph querying system (see Querying the data). One possible shape is sketched below.
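The following is a purely illustrative sketch of such a wrapper object; the identifier type and property names used here are examples, not a prescribed mapping:

{
    "@id": "urn:uuid:credential-0001",
    "@type": "OTObject",
    "identifiers": [
        {
            "identifierType": "verifiableCredentialId",
            "identifierValue": "urn:uuid:credential-0001"
        }
    ],
    "properties": {
        "verifiableCredential": {
            "@context": ["https://www.w3.org/2018/credentials/v1"],
            "type": ["VerifiableCredential"],
            "issuer": "did:example:issuer",
            "issuanceDate": "2019-01-15T09:43:58Z",
            "credentialSubject": {
                "id": "did:example:subject",
                "claim": "..."
            },
            "proof": { "...": "..." }
        }
    },
    "relations": []
}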

More detailed information about verifiable credentials can be found here:

https://www.w3.org/TR/vc-data-model/

OT-JSON Data Structure and Guidelines

Introduction and Motivation

In order to have a database and standard agnostic data structure, the protocol utilizes a generic data structure format called OT-JSON, based on JSON-LD. The guiding principles for OT-JSON development are:

  • 1-1 convertibility from/to higher level data formats (XML, JSON, CSV, … )
  • 1-1 convertibility from/to generic graph data structure.
  • Generic, use case agnostic graph representation
  • Extendable for future use cases of the protocol
  • Versionable format

OT-JSON essentials

An OT-JSON document represents a dataset as a graph of interconnected dataset objects (use case entities), such as actors, products, batches, etc., together with the relations between them. The structure of dataset objects is generally defined, but extendable to support new use cases.

  • Objects - Use case entities (products, locations, vehicles, people, … )
  • Relations - Relations between use case entities (INSTANCE_OF, BELONGS_TO, … )
  • Metadata - Data about dataset (integrity hashes, data creator, signature, transpilation data, ….)

Example: Assume that the use case requires connecting products with the factories where they are produced. The entities of the use case are Product and Producer, and they are represented as objects in the OT-JSON format. A product can have a PRODUCED_BY relation to the producer that produces it, and the producer can have a HAS_PRODUCED relation to the product. The product and producer have the unique identifiers Product1 and Producer1, respectively.

_images/datalayer4.png

Figure 2. Diagram of the example entities and relations

{
    "@graph": [
        {
            "@id": "Product1",
            "@type": "OTObject",
            "identifiers": [
                {
                    "identifierType": "ean13",
                    "identifierValue": "0123456789123"
                }
            ],
            "properties": {
               "name": "Product 1",
               "quantity": {
                   "value": "0.5",
                   "unit": "l"
                }
            },
            "relations": [
                {
                    "@type": "OTRelation",
                    "linkedObject": {
                            "@id": "Producer1"
                        },
                    "properties": {
                            "relationType": "PRODUCED_BY"
                        }
                }
            ]
        },
        {
            "@id": "Producer1",
            "@type": "OTObject",
            "identifiers": [
                {
                    "identifierType": "sgln",
                    "identifierValue": "0123456789123"
                }
            ],
            "properties": {
               "name": "Factory 1",
               "geolocation": {
                   "lat": "44.123213",
                   "lon": "20.489383"
                }
            },
            "relations": [
                {
                    "@type": "OTRelation",
                    "linkedObject": {
                            "@id": "Product1"
                        },
                    "properties": {
                            "relationType": "HAS_PRODUCED"
                        }
                }
            ]
        }
    ]
}

Figure 3. OT-JSON graph representing example entities

Conceptual essentials

Here are some essential concepts related to the data in a dataset. As an example, consider a book as an object from the physical world, with its information as the data.

  • Every OT-JSON entity (Object) is identified with at least one unique identifier. An identifier is represented as a non-empty string.
  • Entities can have multiple identifiers in addition to the unique one, for example an EAN13, a LOT number, or the time of some event (see the sketch after this list).
  • Data can be connected by arbitrary relations. A user can define their own relations and use them alongside those defined by the standard.
  • Relations are directed from one entity to another. It is possible to create multiple relations between two objects in both directions.
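For illustration only (the lot identifier type and the values below are made up for the example), an object carrying several identifiers and a user-defined relation could look like this:

{
    "@id": "Batch1",
    "@type": "OTObject",
    "identifiers": [
        {
            "identifierType": "ean13",
            "identifierValue": "0123456789123"
        },
        {
            "identifierType": "lot",
            "identifierValue": "LOT-2019-01"
        }
    ],
    "properties": {
       "name": "Batch 1"
    },
    "relations": [
        {
            "@type": "OTRelation",
            "linkedObject": {
                    "@id": "Product1"
                },
            "properties": {
                    "relationType": "INSTANCE_OF"
                }
        }
    ]
}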

For more specific information about OT-JSON, see OT-JSON Structure

Web of Things

WoT (Web of Things) provides mechanisms to formally describe IoT interfaces to allow IoT (Internet of Things) devices and services to communicate with each other, independent of their underlying implementation, and across multiple networking protocols. The OriginTrail node supports the WoT standard for importing and connecting data in the knowledge graph.

The goals of WoT are to improve the interoperability and usability of the IoT. Through a collaboration involving many stakeholders over the past years, several building blocks have been identified that address these challenges. The first set of WoT building blocks is now defined:

  • the Web of Things (WoT) Thing Description
  • the Web of Things (WoT) Binding Templates
  • the Web of Things (WoT) Scripting API
  • the Web of Things (WoT) Security and Privacy Considerations

More details for defined building blocks and use cases are available on the following link: https://www.w3.org/TR/wot-architecture/

The data model is composed of the following resources:

  • Things – A web Thing can be a gateway to other devices that don’t have an internet connection. This resource contains all the web Things that are proxied by this web Thing. This is mainly used by clouds or gateways because they can proxy other devices.
  • Model – A web Thing always has a set of metadata that defines various aspects about it such as its name, description, or configurations.
  • Properties – A property is a variable of a web Thing. Properties represent the internal state of a web Thing. Clients can subscribe to properties to receive a notification message when specific conditions are met; for example, the value of one or more properties changed.
  • Actions – An action is a function offered by a web Thing. Clients can invoke a function on a web Thing by sending an action to the web Thing. Examples of actions are “open” or “close” for a garage door, “enable” or “disable” for a smoke alarm, and “scan” or “check in” for a bottle of soda or a place. The direction of an action is usually from the client to the web Thing. Actions represent the public interface of a web Thing and properties are the private parts.

All these resources are semantically described by simple models serialized in JSON. Resource findability is based on the Web Linking standard, and semantic extensions using JSON-LD are supported. This allows extending basic descriptions using a well-known semantic format such as the GS1 Web Vocabulary. Using this approach, existing services like search engines can automatically retrieve and understand what Things are and how to interact with them. An example of a WoT file is available at the following link:

https://www.w3.org/TR/wot-thing-description/
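As a rough, abbreviated sketch of the Thing Description format referenced above (the device name and URL are invented for the example, and fields such as security definitions are omitted for brevity):

{
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "title": "WarehouseTemperatureSensor",
    "properties": {
        "temperature": {
            "type": "number",
            "forms": [
                { "href": "https://example.com/sensor/temperature" }
            ]
        }
    },
    "actions": {},
    "events": {}
}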

How an event is represented in the graph

When converting a WoT file to a graph, a central vertex will be created for the device described in the file. All sensor measurements will be created as separate vertices in the graph, connected to the main event vertex, in order to enable connection to the rest of the graph via the main vertex. There are two custom vertices denoted as readPoint and observedLocation. These two vertices act as connectors that connect the data with the rest of the graph. An example of a WoT file with connectors is available at the following link: https://github.com/OriginTrail/ot-node/blob/develop/importers/use_cases/perutnina_kakaxi/kakaxi.wot

OT-JSON Structure

Dataset structure

An OT-JSON dataset is the main structure of objects transferred in the OriginTrail network. A dataset consists of a dataset header, a dataset graph and a dataset signature. The dataset header contains dataset metadata, such as the dataset timestamp, data creator information, transpiler data, verification scheme versions, etc. The identifier of a dataset is calculated as a SHA3-256 digest of the dataset header and dataset graph sections. The dataset signature is calculated over the canonicalized form of the entire, unsigned dataset object.

_images/graphrepresentation.png

Figure 1. Graphic representation of a dataset

Example

{
  "@type": "Dataset",
  "@id": "0x123456789034567894567890",
  "datasetHeader": {...},
  "@graph": [...],
  "signature": {...}
}

Example 1. Dataset structure example
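The identifier calculation can be pictured with a minimal sketch (an assumption about the exact serialization, not the node’s implementation), in which the datasetHeader and @graph sections are serialized as sorted, stringified JSON and digested with SHA3-256:

import hashlib
import json

def dataset_id(dataset: dict) -> str:
    # Serialize the header and graph sections deterministically
    # (sorted keys, compact separators), then take a SHA3-256 digest.
    payload = json.dumps(
        {"datasetHeader": dataset["datasetHeader"], "@graph": dataset["@graph"]},
        sort_keys=True,
        separators=(",", ":"),
    )
    return "0x" + hashlib.sha3_256(payload.encode("utf-8")).hexdigest()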

Attribute definitions

_images/table4.1.png

Dataset header

The dataset header contains metadata about the dataset and the transpilation process, including:

  • Version of OT-JSON document
  • Dataset creation timestamp
  • Dataset title
  • Dataset tags
  • Related datasets
  • Validation schemas
  • Data validation information
  • Data creator
  • Transpilation information

{
    "datasetHeader": {
        "OTJSONVersion": "1.0",
        "datasetCreationTimestamp": "2019-01-15T09:43:58Z",
        "datasetTitle": "",
        "datasetTags": ["gs1-datasets", "..."],

        "relatedDatasets": [{
           "datasetId": "0x232134875876125375761936",
           "relationType": "UPDATED",
           "relationDescription": "...",
           "relationDirection": "direct"
        }],

        "validationSchemas": {
          "erc725-main": {
            "schemaType": "ethereum-725",
            "networkId": "1",
            "networkType": "private",
            "hubContractAddress": "0x2345678902345678912321"
          },

          "merkleRoot": {
            "schemaType": "merkle-root",
            "networkId": "1",
            "networkType": "private",
            "hubContractAddress": "0x2345678902345678912321"
          }
        },

        "dataIntegrity": {
          "proofs": [
            {
              "proofValue": "0x54364576754632364577543",
              "proofType": "merkleRootHash",
              "validationSchema": "/schemas/merkleRoot"
            }
          ],
        },

        "dataCreator": {
          "identifiers": [
             {
              "identifierValue": "0x213182735128735218673587612",
              "identifierType": "ERC725",
              "validationSchema": "/schemas/erc725-main"
             }
            ],
          },
        },

        "transpilationInfo": {
          "transpilerType": "GS1-EPCIS",
          "transpilerVersion": "1.0",
          "sourceMetadata": {
            "created": "",
            "modified": "",
            "standard": "GS1-EPCIS",
            "XMLversion": "1.0",
            "encoding": "UTF-8"
          },
          "diff": { "...": "..."}
    }
}

Example 2. Dataset header structure example

Validation schemas

Validation schemas are objects that provide information on how to validate specific values, like identifiers and hashes. Schemas can contain addresses of smart contracts where identifiers are created, network identities, locations of proof hashes, etc.

Attribute definitions

_images/table4.2.png

Hash structure

An OT-JSON document is uniquely identified by its data hash and root hash. Those hashes are generated from the OT-JSON graph object, which stores user-defined data. Before calculating dataset hashes, it is important to determine a uniform order of objects in the OT-JSON object in order to always obtain the same hash values. When a user imports a dataset, the OT-Node converts the dataset to the OT-JSON format (depending on the standard), sorts the dataset, and calculates the data hash and root hash.

The OT-JSON service supports versions 1.0 and 1.1, which differ in their sorting algorithms. The OT-JSON 1.0 service sorts the entire dataset before calculating hash values and saves the unsorted dataset in the graph database. The OT-JSON 1.1 service sorts the entire dataset except for arrays inside properties and saves the sorted dataset in the graph database. The newer version improves overall performance and ensures data integrity by sorting datasets during the import process and when reading data from the graph database. This approach ensures that the dataset is always sorted during processing and requires only one sorting call per dataset processing functionality, such as import or replication.
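A minimal sketch of the 1.1 sorting rule described above (an illustration of the idea, not the node’s exact algorithm): object keys are sorted recursively, while arrays found under a properties object keep their original order.

def sort_ot_json(value, inside_properties=False):
    # Sort dictionary keys recursively; remember when we descend into a
    # "properties" object so that user-defined arrays are left untouched.
    if isinstance(value, dict):
        return {
            key: sort_ot_json(item, inside_properties or key == "properties")
            for key, item in sorted(value.items())
        }
    if isinstance(value, list):
        items = [sort_ot_json(item, inside_properties) for item in value]
        # OT-JSON 1.1: arrays inside properties keep their original order,
        # all other arrays are sorted to obtain a deterministic layout.
        return items if inside_properties else sorted(items, key=str)
    return value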

The following sequence diagrams describe the usage of sort methods for both versions of OT-JSON during the import process.

_images/sortOtJson1.0.png

Figure 2. Import process for OT-JSON version 1.0

_images/sortOtJson1.1.png

Figure 3. Import process for OT-JSON version 1.1

Signing

When the unsigned OT-JSON document is formed, the resulting object is canonicalized (serialized) and prepared for signing by the data creator. The dataset signing process can be done using different signature schemes/suites. The canonicalization of an OT-JSON dataset produces a sorted, stringified JSON object.

The structure of a signature object is defined according to the selected signature suite specification. Signing is done using Koblitz elliptic curve signatures (Ethereum private keys).

Also, if JSON-LD is used as the format for OT-JSON, the Koblitz 2016 signature suite can be used.

Example of JSON-LD Koblitz signature 2016 Signature Suite

The entire JSON-LD dataset document is canonicalized using the URDNA2015 JSON-LD canonicalization algorithm. The resulting N-Quads data is digested using the SHA-256 algorithm. Finally, the digest is signed with an ECDSA private key on the Koblitz elliptic curve. The Koblitz curve is used for generating Ethereum and Bitcoin wallets, so private keys for Ethereum and Bitcoin wallets can be used for signing.

_images/kobilitzSignature.png

Figure 4. Diagram of dataset signing procedure using Koblitz Signature 2016 Signature Suite
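A minimal sketch of this flow (assuming the pyld and ecdsa Python packages; not the node’s actual implementation):

import hashlib
from pyld import jsonld                   # URDNA2015 canonicalization
from ecdsa import SECP256k1, SigningKey   # Koblitz (secp256k1) curve

def sign_dataset(dataset: dict, private_key_hex: str) -> bytes:
    # 1. Canonicalize the JSON-LD document into N-Quads using URDNA2015.
    nquads = jsonld.normalize(
        dataset, {"algorithm": "URDNA2015", "format": "application/n-quads"}
    )
    # 2. Digest the canonical N-Quads with SHA-256.
    digest = hashlib.sha256(nquads.encode("utf-8")).digest()
    # 3. Sign the digest with an ECDSA key on the secp256k1 (Koblitz) curve,
    #    the same curve used for Ethereum and Bitcoin private keys.
    key = SigningKey.from_string(bytes.fromhex(private_key_hex), curve=SECP256k1)
    return key.sign_digest(digest)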

Object structure

OT-JSON dataset objects represent entities which can be interconnected with relations in a graph-like form. Every OT-JSON dataset object is required to have its unique identifier (@id), type (@type) and signature. Other required sections include identifiers, properties and relations, while optional sections include attachments.

Attribute definitions

_images/table4.3.png
{
    "@id": "<UNIQUE_OBJECT_IDENTIFIER>",
    "@type": "<OBJECT_TYPE>",

    "identifiers": ["..."],

    "properties": {"...": "..."},

    "relations": ["..."],

    "attachments": ["..."],

    "signature": {"...": "..."}
}

Example 3. Dataset object structure template

Object identifiers section

The object identifiers section is a list of objects that represent identifier values for a certain object. Identifier objects contain information about the identifier type, the identifier value, and optionally a validation schema used for validating the identifier.

{
    "identifiers": [
        {
            "@type": "sgtin",
            "@value": "1234567.0001",
            "validationSchema": "/datasetHeader/validationSchemas/urn:ot:sgtin"
        },
        {
            "@type": "sgln",
            "@value": "3232317.0001",
            "validationSchema": "/datasetHeader/validationSchemas/urn:ot:sgln"
        }
    ]
}

Example 4. Example of identifiers section

Attribute definitions

_images/table4.4.png

Object properties section

The object properties section is defined as a container for all object property attributes. OT-JSON does not provide specific rules for structuring object properties; those rules are defined within recommendations and data formatting guidelines.

Attribute definitions

_images/table4.5.png

Attachments section

The attachments section contains a list of objects that represent metadata about files related to the object. Objects in the attachments list contain information about the related file id (@id, as a URI), the attachment type (@type), the attachment role (such as certificate, lab results, etc.), the attachment description, the attachment file type, and a SHA3-256 digest of the file content.

{
    "attachments": [
        {
            "@id": "0x4672354967832649786379821",
            "@type": "Attachment",
            "attachmentRole": "Certificate",
            "attachmentDescription": "...",
            "fileUri": "/path/file.jpg",
            "metadata": {
                "fileType": "image/jpeg",
                "fileSize": 1024
            }
        }
    ]
}

Example 6. Example of attachments section

Attribute definitions

_images/table4.6.png

Connector objects

A special type of graph object is the Connector. Connectors are used to connect data from multiple datasets, possibly from different data providers. Every connector contains a connectionId attribute, which represents the value on which connectors are connected to each other. Additionally, the expectedConnectionCreators list contains the data creators that are allowed to connect to the connector.

{
    "@id": "urn:uuid:1230c84b-5cd6-45a7-b6b5-da7ab8b6f2dd",
    "@type": "otConnector",
    "identifiers": [
        {
            "@type": "id",
            "@value": "1A794-2019-01-01"
        }
    ],
    "properties": {
        "expectedConnectionCreators": [
            {
                "@type": "ERC725",
                "@value": "0x9353a6c07787170a43c4eb23f59567811336a8f3",
                "validationSchema": "../ethereum-erc"
            }
        ]
    },
    "relations": [
        {
            "@type": "otRelation",
            "direction": "direct",
            "linkedObject": {
                "@id": "urn:uuid:fe7d4949-6f34-4f4e-8a11-d048e9c0b835"
            },
            "properties": null,
            "relationType": "CONNECTOR_FOR"
        }
    ]
}

Example 7. Example of a connector object

OT-JSON Versions

In order to improve the simplicity and consistency of generating data integrity values, such as dataset signatures, dataset IDs and dataset root hashes, there have been revisions to how dataset integrity values are calculated. These revisions are versioned so that the integrity of datasets already published to the network can still be validated.

The differences between OT-JSON versions are in how data is ordered when generating three different data integrity values:

  1. datasetID, which is generated as a hash of the @graph section of the dataset, and is used to verify data integrity of the dataset
  2. rootHash, which is generated as a hash of the @graph section along with the dataset creator, and is used for verifying the dataset creator
  3. signature, which is generated as a signed hash of the entire dataset, and is used to verify the creator and integrity of a dataset off chain.

OT-JSON 1.2

Note

OT-JSON 1.2 was introduced in order to sort the dataset when generating a signature. Along with that, the sorting of non-user-generated arrays (such as identifiers and relations) was reimplemented.

The datasetID for OT-JSON 1.2 is generated out of the @graph section after sorting every object and array, including the @graph array, without changing the order of any array inside of a properties object.

The rootHash for OT-JSON 1.2 is generated out of the @graph section in the same way as it is for the datasetID.

The signature for OT-JSON 1.2 is generated out of the dataset when the datasetHeader is attached, after sorting the dataset in the same way it was done for the datasetID and rootHash.

OT-JSON 1.1

Note

OT-JSON 1.1 was introduced in order to have the same sorting method for generating hashes. Along with that, sorting of arrays was removed in order to prevent unintentionally changing user-defined data (such as properties of OT-JSON objects).

The datasetID for OT-JSON 1.1 is generated out of the @graph section after sorting every object in the @graph array, without changing the order of any array.

The rootHash for OT-JSON 1.1 is generated out of the @graph section in the same way as it is for the datasetID.

The signature for OT-JSON 1.1 is generated out of the dataset when the datasetHeader is attached.

OT-JSON 1.0

The datasetID for OT-JSON 1.0 is generated out of the @graph section after sorting every object and array, including the @graph array.

The rootHash for OT-JSON 1.0 is generated out of the @graph section after sorting the relations and identifiers of each element, and sorting the @graph array by each array element @id.

The signature for OT-JSON 1.0 is generated out of the dataset after first sorting the relations and identifiers of each element, and sorting the @graph array by each array element @id, and then sorting every object in the dataset.

Sorting differences overview

Below is an image visually showing the differences in how the data integrity values are calculated between the OT-JSON versions.

_images/sorting-process-overview.png

XML EPCIS Examples

The provided examples describe the proposed data structure and data flow. The main goal is to elaborate on the data structuring process and features of the ODN. We have set out a simple Manufacturer-Distributor-Retail (MDR) supply chain where goods move only forward.

The supply chain consists of four entities:

  • Green - Manufacturer of wine
  • Pink - Distributor of beverages
  • Orange and Red - Retail shops

For clarity and analysis, the examples deal with generic items called Product1 and generic locations (with generic read points). Real-life use cases should utilize GS1 identifiers for values (GLN, GTIN, etc.). For example, instead of the value urn:epc:id:sgln:Building_Green there should be a GLN number such as urn:epc:id:sgln:0614141.12345.0.

1. Basic sales example

Supply chain participants map:

_images/Basic_sale.jpg

Use case: Green is producing wine and selling it to Pink. Shipping and receiving events are generating data that is being processed on ODN.

GS1 EPCIS design:

_images/Design.JPG

Sample files

2. Complex manufacturer-distributor-retail (MDR) sale

Supply chain participants map:

_images/MDR.jpg

Use case: Green is producing wine and selling it to Pink. Pink is distributing (selling) wine to retail shop (Orange). Batches on Pink are sold partially. Shipping and receiving events are generating data that is being processed on ODN.

GS1 EPCIS design

_images/DesignMDR.JPG

Sample files

Contribution Guidelines

We’d love for you to contribute to our source code and to make OT protocol better than it is today! Here are the guidelines we’d like you to follow.

If you’re new to OT node development, there are guides in this wiki for getting your dev environment set up. Get to know the commit process with something small like a bug fix. If you’re not sure where to start, post a message in the Discord #general channel.

Once you’ve found your feet, you can start working on larger features. For anything more than a bug fix, it probably makes sense to coordinate through Discord, since it’s possible someone else is working on the same thing.

Please make descriptive commit messages.

The following checklist is worked through for every commit:

  • Check out and try the changeset.
  • Ensure that the code follows the language coding conventions.
  • Ensure that the code is well designed and architected.

Pull Requests

If you report an issue, we’d love to see a pull request attached. Please keep in mind that your commit may end up getting modified. Sometimes we’ll make the change ourselves, but often we’ll just let you know what needs to happen and help you fix it up yourself.

Contributor Code of Conduct

As contributors and maintainers of the OT Node project, we pledge to respect everyone who contributes by posting issues, updating documentation, submitting pull requests, providing feedback in comments, and any other activities.

Communication through any of our channels (GitHub, Discord, Twitter, etc.) must be constructive and never resort to personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

We promise to extend courtesy and respect to everyone involved in this project regardless of gender, gender identity, sexual orientation, disability, age, race, ethnicity, religion, or level of experience. We expect anyone contributing to the project to do the same.

If any member of the community violates this code of conduct, the maintainers of the OT Node project may take action, removing issues, comments, and PRs or blocking accounts as deemed appropriate.

If you are subject to or witness unacceptable behavior, or have any other concerns, please email us.

Questions, Bugs, Features

Got a Question or Problem?

Do not open issues for general support questions as we want to keep GitHub issues for bug reports and feature requests. You’ve got much better chances of getting your question answered on Discord.

Found an Issue or Bug?

If you find a bug in the source code, you can help us by submitting an issue to our GitHub repository. Even better, you can submit a Pull Request with a fix.

Missing a Feature?

You can request a new feature by submitting an issue to our GitHub repository.

If you would like to implement a new feature then consider what kind of change it is:

  • Major changes that you wish to contribute to the project should be discussed first in a GitHub issue that clearly outlines the changes and benefits of the feature.
  • Small changes can be crafted and submitted directly to the GitHub repository as a Pull Request. See the section about Pull Request submission guidelines and, for detailed information, the core development documentation.

Want a Doc Fix?

Should you have a suggestion for the documentation, you can open an issue and outline the problem or improvement you have - however, creating the doc fix yourself is much better!

If you want to help improve the docs, it’s a good idea to let others know what you’re working on to minimize duplication of effort. Create a new issue (or comment on a related existing one) to let others know what you’re working on.

If you’re making a small change (typo, phrasing) don’t worry about filing an issue first. Use the friendly blue “Improve this doc” button at the top right of the doc page to fork the repository in-place and make a quick change on the fly. The commit message is preformatted to the right type and scope, so you only have to add the description.

For large fixes, please build and test the documentation before submitting the PR to be sure you haven’t accidentally introduced any layout or formatting issues.
