歡迎使用 RYU,一套軟體定義網路開發框架

文件目錄:

開始使用

什麼是 Ryu ?

Ryu 是一個以元件為基礎(component-based)的軟體定義網路(SDN)開發框架。

透過 Ryu 所提供的 API,開發者可以很輕鬆地實作他們需要的網路管理功能。Ryu 也提供了許多軟體定義網路管理協定的開發介面,如 OpenFlow、NetConf、OF-config 等等;在 OpenFlow 的部份,Ryu 能夠完整支援 1.0, 1.2, 1.3, 1.4 以及 Nicira 擴充版本。

所有 Ryu 的程式碼都是基於 Python 所撰寫,並且使用 Apache 2.0 授權

快速開始

如果要安裝 Ryu,只需要在終端機中輸入以下指令:

% pip install ryu

如果你想要使用原始碼來安裝 Ryu,只需要在終端機中輸入以下指令:

% git clone git://github.com/osrg/ryu.git
% cd ryu; python ./setup.py install

如果你想要在 OpenStack 中使用 Ryu, 請參考 networking-ofagent project

如果你想要撰寫自己的 Ryu 應用程式,請參考: Writing ryu application 。撰寫完 Ryu 應用程式之後,直接輸入以下指令即可執行:

% ryu-manager yourapp.py

額外的函式庫需求

在 Ryu 中,部分的功能可能會需要安裝其他的函式庫:

  • OF-Config 需要安裝 lxml
  • NETCONF 需要安裝 paramiko
  • BGP speaker (ssh console) 需要安裝 paramiko

如果您想要使用上述功能,使用 pip 去安裝需要的函式庫即可:

% pip install lxml
% pip install paramiko

支援

Ryu 的官方頁面為 http://osrg.github.io/ryu/

如果您有任何問題、建議或是想提供補丁(patch),歡迎來信至 Ryu 的郵件討論串(mailing list)與其他人討論

討論串也可以至 Gmane 觀看

撰寫你的 Ryu 應用程式

第一支 Ryu 應用程式

關於應用程式

如果你想要使用你自己的網路邏輯去管理網路設備(交換器、路由器等等),你會需要 撰寫一支 Ryu 應用程式。你的應用程式會告訴 Ryu 該如何去管理這些網路設備, 然後 Ryu 會透過 OpenFlow 協定去配置這些設備。

撰寫 Ryu 應用程式相當簡單,只需要撰寫 Python script 即可。

開始撰寫

我們透過 Ryu 應用程式將一個 OpenFlow 交換器轉變成為一個第二層(OSI Layer 2)交換器

開啟文字編輯器,並撰寫以下程式:

from ryu.base import app_manager

class L2Switch(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(L2Switch, self).__init__(*args, **kwargs)

由於 Ryu 應用程式是使用 python 所撰寫而成,所以你可以儲存成任何名稱以及任何 副檔名,在這個範例中,我們將它儲存成 ‘l2.py’,並放置在家目錄中。

目前這一支程式並沒有任何的功能,但是它已經是一支完整的 Ryu 應用程式。事實上,你可以輸入以下指令來執行這一支程式:

% ryu-manager ~/l2.py
loading app /Users/fujita/l2.py
instantiating app /Users/fujita/l2.py

在上述程式中我們可以知道,要撰寫一支 Ryu 應用程式,你只需要將你的應用程式類別繼承自 RyuApp 即可。

接著讓我們新增一個可以接收來自所有埠口(port)封包進入事件(Packet in event)的功能,其程式碼如下:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls

class L2Switch(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(L2Switch, self).__init__(*args, **kwargs)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp = dp.ofproto
        ofp_parser = dp.ofproto_parser

        actions = [ofp_parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        out = ofp_parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port,
            actions=actions)
        dp.send_msg(out)

在 L2Switch 中新增一個叫做 ‘packet_in_handler’ 的方法,這一個方法會在 Ryu 接收到 OpenFlow packet_in 訊息時被呼叫。 我們可以透過 ‘set_ev_cls’ 去讓 Ryu 知道當 packet in 訊息傳入時, 它需要將這一個事件帶入此一方法當中。

註:同一事件可以註冊給多個不同的應用程式以及方法
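
舉例來說,以下是一個簡單的示意(非官方範例,假設另存為 logger.py):它與上面的 l2.py 一起執行時,兩支應用程式都會各自收到同一個 packet_in 事件。

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls

class PacketInLogger(app_manager.RyuApp):
    # 與 L2Switch 註冊同一個事件,兩者互不影響
    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def log_packet_in(self, ev):
        self.logger.info('packet_in: %d bytes', len(ev.msg.data))

執行時只需要同時指定兩支應用程式即可:

% ryu-manager ~/l2.py ~/logger.py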

在 set_ev_cls 中,第一個參數是想要接收的事件類別,該類別定義包含事件所需要的訊息。當 Ryu 收到該事件所對應到的訊息時,會透過該類別產生事件物件,並呼叫已註冊的方法。

第二個參數則表示該事件要在交換器的某一個狀態下發送,例如開發者一般來說 在交換器與 Ryu 在完成連接之前不希望收到 packet in 訊息。 使用 ‘MAIN_DISPATCHER’ 可以確保交換器與 Ryu 在完成連接之後才會收到 該訊息。

接下來我們來看一下 ‘packet_in_handler’ 中的一些細節。

  • ev.msg 表示 packet_in 訊息的資料
  • msg.datapath 表示這一個訊息是由哪一個 Datapath(交換器)送過來,我們能夠透過這一個物件與該交換器互動。
  • dp.ofproto 以及 dp.ofproto_parser 表示了這一個訊息使用了哪一個版本 的 OpenFlow 協定,以及它的解析器。
  • OFPActionOutput 是在 packet_out 訊息中,用來指定封包應該從交換器的哪一個埠口(port)送出。這一個應用程式會將封包送到所有其他的埠口,因此我們這邊使用了 OFPP_FLOOD 這一個常數來設定它的目的地。
  • OFPPacketOut 類別用來建立 packet_out 訊息
  • 如果你呼叫了在 Datapath 中的 send_msg 方法,並給予 OpenFlow 訊息物件, Ryu 會將訊息轉換並且送至該交換器中。

在這邊你完成了你的第一個 Ryu 應用程式,你已經能夠使用這個應用程式去讓網路 能夠以第二層交換器的邏輯運作。

若你覺得 L2 交換器過於笨拙以及簡單,您可以參考其他的 應用程式範例

你也可以在 ryu/app 資料夾以及 綜合測試 中學到其他的應用程式以及網路功能的撰寫方式

Ryu 的主要元件

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。

可執行的

bin/ryu-manager

Ryu 最主要的執行檔

基礎元件

ryu.base.app_manager

OpenFlow 控制器

ryu.controller.controller
ryu.controller.dpset
ryu.controller.ofp_event
ryu.controller.ofp_handler

OpenFlow 協定之編碼器(encoder)及解碼器(decoder)

ryu.ofproto.ofproto_v1_0
ryu.ofproto.ofproto_v1_0_parser
ryu.ofproto.ofproto_v1_2
ryu.ofproto.ofproto_v1_2_parser
ryu.ofproto.ofproto_v1_3
ryu.ofproto.ofproto_v1_3_parser
ryu.ofproto.ofproto_v1_4
ryu.ofproto.ofproto_v1_4_parser
ryu.ofproto.ofproto_v1_5
ryu.ofproto.ofproto_v1_5_parser

Ryu 預設可使用的應用程式

ryu.app.cbench
ryu.app.simple_switch
ryu.topology

Switch and link discovery module. Planned to replace ryu/controller/dpset.

函式庫

ryu.lib.packet
ryu.lib.ovs

ovsdb interaction library.

ryu.lib.of_config

OF-Config implementation.

ryu.lib.netconf

NETCONF definitions used by ryu/lib/of_config.

ryu.lib.xflow

An implementation of sFlow and NetFlow.

第三方函式庫

ryu.contrib.ovs

Open vSwitch python binding. Used by ryu.lib.ovs.

ryu.contrib.oslo.config

Oslo configuration library. Used for ryu-manager’s command-line options and configuration files.

ryu.contrib.ncclient

Python library for NETCONF client. Used by ryu.lib.of_config.

Ryu 應用程式開發介面(API)

Ryu 應用程式開發模型

執行緒(Threads)、事件、事件隊列(Queues)

在Ryu 中,Ryu 應用程式是一個實作網路功能的單執行緒(single-threaded)應用 程式。事件與訊息會在不同的應用程式之間傳遞。

Ryu 應用程式會透過非同步的方式將事件傳送至其他應用程式中。事件傳送與接收的執行緒不一定屬於 Ryu 應用程式,它有可能是其他非 Ryu 應用程式所產生的執行緒;舉例來說,OpenFlow Controller 就是一個會產生並傳送 Ryu 事件的非 Ryu 應用程式。一個事件可以夾帶任意的 Python 物件,但這邊並不鼓勵將過於複雜(例如:無法序列化 / pickle)的物件封裝在事件當中傳送。

每一個不同的 Ryu 應用程式都會包含一個專門用來接收事件的隊列。 隊列提供了先進先出(FIFO)的規則,讓每一個事件都會依序的被執行。 而每一個 Ryu 應用程式都會有一個負責處理事件的執行緒,這一個執行緒會持續的 將事件隊列中的事件取出並且將事件傳送給對應的方法。 每一個事件都會在同一個執行緒中處理,因此在設計事件處理的方法時必須要非常小心 避免一個處理方法將事件執行緒停住(Blocking),若一個事件執行緒被其中一個處理事件的 方法給停止,則該 Ryu 應用程式將不會再接收(處理)任何的事件。
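
舉例來說,若需要執行長時間或是週期性的工作,可以透過 Ryu 對 eventlet 的包裝 ryu.lib.hub 另外產生一個執行緒,避免佔住事件處理執行緒(以下僅為示意,非官方範例):

from ryu.base import app_manager
from ryu.lib import hub

class MonitorApp(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(MonitorApp, self).__init__(*args, **kwargs)
        # 將週期性的工作放到另一個執行緒,事件處理執行緒不會被停住
        self.monitor_thread = hub.spawn(self._monitor)

    def _monitor(self):
        while True:
            self.logger.info('periodic work')
            hub.sleep(10)  # 使用 hub.sleep 而非 time.sleep,以便讓出執行權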

有部分的事件是以同步(synchronous)的方式處理,它可以讓不同的 Ryu 應用程式相互呼叫。當一個事件以同步的方式呼叫時,它的回應(reply)會被放置在專用的隊列中,以避免死結(deadlock)問題。

在 Ryu 中所使用的執行緒以及隊列是以 eventlet/greenlet 這一套函式庫實作,但並不鼓勵在 Ryu 應用程式中直接使用它們。

Contexts

Contexts 是一個共享於不同 Ryu 應用程式之間的 Python 物件。使用共享的物件 可以避免一個功能或是物件重複被創造。
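
下方是一個使用 context 的簡單示意(假設我們想要共用 ryu.controller.dpset.DPSet 這一個物件):在 _CONTEXTS 中宣告需要的 context,app_manager 便會建立(或共用)該物件,並透過 kwargs 傳入。

from ryu.base import app_manager
from ryu.controller import dpset

class MyApp(app_manager.RyuApp):
    # 宣告本應用程式需要名為 'dpset' 的 context
    _CONTEXTS = {'dpset': dpset.DPSet}

    def __init__(self, *args, **kwargs):
        super(MyApp, self).__init__(*args, **kwargs)
        # 同一個 DPSet 實體會被所有宣告此 context 的應用程式共用
        self.dpset = kwargs['dpset']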

建立一個 Ryu 應用程式

Ryu 應用程式是一個繼承自 ryu.base.app_manager.RyuApp 的類別。 如果在一個 Python 模組(module)中定義了兩個以上的 Ryu 應用程式類別, 則 app_manager 會以名稱作為排序,並且取用第一個應用程式作為這一個模組中 的 Ryu 應用程式執行。

Ryu 應用程式為一個獨立執行個體(singleton):每一個 Ryu 應用程式僅會有一個 實體(instance)。

事件接收

Ryu 應用程式會事先透過 ryu.controller.handler.set_ev_cls 去註冊需要接收的事件。

產生事件

Ryu 應用程式在產生出事件物件之後,可以透過 send_event 傳送事件至特定的 Ryu 應用程式, 或是透過 send_event_to_observers 傳送到有註冊該事件的應用程式。
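
下方是一個產生與接收自定義事件的簡單示意(類別與名稱皆為假設):Producer 在 _EVENTS 中宣告它會產生的事件,Consumer 則透過 set_ev_cls 註冊接收。

from ryu.base import app_manager
from ryu.controller import event
from ryu.controller.handler import set_ev_cls

class EventHostFound(event.EventBase):
    # 自定義事件,依慣例以「Event」開頭命名
    def __init__(self, mac):
        super(EventHostFound, self).__init__()
        self.mac = mac

class Producer(app_manager.RyuApp):
    # 宣告本應用程式會產生的事件,其他應用程式才能註冊觀察
    _EVENTS = [EventHostFound]

    def notify(self, mac):
        self.send_event_to_observers(EventHostFound(mac))

class Consumer(app_manager.RyuApp):
    @set_ev_cls(EventHostFound)
    def host_found_handler(self, ev):
        self.logger.info('host found: %s', ev.mac)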

事件類別

事件類別表示了在 Ryu 中所產生(發生)的事件。 一般而言,所有事件的類別都會在名稱前方加上「Event」用以區別。 一個事件可以被 Ryu 的核心程式或是一般 Ryu 應用程式產生。 Ryu 應用程式可以事先註冊他所想要接收的 Event 類別,只需在處理該事件的方法前 加上 ryu.controller.handler.set_ev_cls 修飾詞(decorator)即可。

OpenFlow 事件類別

當連上的交換器傳送 OpenFlow 訊息時,ryu.controller.ofp_event 模組能夠將該訊息 轉換成事件類別。

原則上,每一個事件類別都會以 EventOFP 作為開頭,舉例來說 EventOFPPacketIn 就是 packet-in 訊息所產生的事件類別。

Ryu 架構下的 Controller 會自動將接收到的 OpenFlow 訊息解碼,並將事件物件傳送到 使用 ryu.controller.handler.set_ev_cls 去註冊該事件的 Ryu 應用程式。

OpenFlow 事件包含了下列兩項屬性:

Attribute Description
msg OpenFlow 訊息內容,依據接收到的訊息會有所不同。
msg.datapath 接收到該訊息的 ryu.controller.controller.Datapath 實體

每一個訊息物件中都會包含許多額外的資訊,這些資訊都是從原始的訊息所解碼出來的。詳細可以參考 OpenFlow 協定 API 章節。

ryu.base.app_manager.RyuApp

請參閱 Ryu API 參考說明 章節。

ryu.controller.handler.set_ev_cls(ev_cls, dispatchers=None)

set_ev_cls 是一個用於將方法註冊成 Ryu 事件處理器的一個修飾器,被修飾的 方法將會成為一個事件處理器。

ev_cls 表示了一個想要被該 Ryu 應用程式接收的事件類別。

dispatchers 則表示了該事件處理器將會在哪些談判階段(negotiation phases) 去接收此一類型的事件,舉例來說,HANDSHAKE_DISPATCHER 表示了在交換器與 控制器連線(交握)階段所產生的事件。

談判階段(Negotiation phase) 說明
ryu.controller.handler.HANDSHAKE_DISPATCHER 送出以及等待 hello 訊息
ryu.controller.handler.CONFIG_DISPATCHER 版本協議以及送出 feature-request 訊息
ryu.controller.handler.MAIN_DISPATCHER 接收 Switch-features 訊息以及 傳送 set-config 訊息
ryu.controller.handler.DEAD_DISPATCHER 連線被其中一方中斷,或是未知錯誤導致 雙方連線中斷。
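
下方是一個簡單的示意,說明如何針對不同的談判階段註冊事件處理器:switch-features 回覆會在 CONFIG_DISPATCHER 階段收到,而一般的 packet_in 則在 MAIN_DISPATCHER 階段處理。

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls

class DispatcherExample(app_manager.RyuApp):
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # 交握階段收到的 switch-features 回覆
        self.logger.info('switch features: %s', ev.msg)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # 連線完成之後才會收到的 packet_in
        self.logger.info('packet_in from dpid=%s', ev.msg.datapath.id)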

ryu.controller.controller.Datapath

一個用來描述連上控制器之交換器的物件,任何程式要傳送 OpenFlow 訊息給該交換器,均需透過本物件來傳送。

Datapath 類別中包含了以下屬性:

屬性 說明
id 64-bit OpenFlow Datapath ID。 這一個屬性只有在 ryu.controller.handler.MAIN_DISPATCHER 階段有效。
ofproto 一個表示該交換器與控制器協商後所使用之 OpenFlow 版本及其訊息定義的模組,詳細的定義可以參考 OpenFlow 協定 API 章節。此屬性會是 ryu.ofproto.ofproto_vxxx 其中之一,舉例來說 ofproto_v1_0 表示該交換器使用 OpenFlow 1.0 協定。
ofproto_parser 此屬性是依據上一個屬性(協商後的 OpenFlow 版本)所對應的訊息編碼及解碼器模組。舉例來說,若它是 ryu.ofproto.ofproto_v1_0_parser,則它會以 OpenFlow 1.0 協定去編碼及解碼訊息。
ofproto_parser.OFPxxxx(datapath, ....) 透過呼叫 OFPxxxx 來產生出訊息,這一個訊息 可以透過 send_msg 這一個方法去傳送給實體 的交換器。xxxx 表示了訊息名稱,舉例來說 OFPFlowMod 表示了一個 flow-mod 的訊息 每一個訊息的參數都是基於原始訊息去定義的。
set_xid(self, msg) 產生一個 OpenFlow 的 XID 然後將這一個 XID 放置到 msg.xid 中。
send_msg(self, msg) 將訊息放置到一個傳送專用的隊列(queue)中,隨後會由專門傳送訊息的執行緒送出。如果 msg.xid 為 None,則會自動先呼叫 set_xid 方法,再放入隊列中。
send_packet_out 將被棄用。
send_flow_mod 將被棄用。
send_flow_del 將被棄用。
send_delete_all_flows 將被棄用。
send_barrier 將 barrier 訊息放置傳送用的隊列中。
send_nxt_set_flow_format 將被棄用。
is_reserved_port 將被棄用。
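
下方是一個透過 ofproto_parser 以及 send_msg 下發 flow-mod 的簡單示意(以 OpenFlow 1.3 為例,match 與 actions 由呼叫端帶入):

def add_flow(self, datapath, priority, match, actions):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    # 將 actions 包裝成 APPLY_ACTIONS 指令(instruction)
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                         actions)]
    mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                            match=match, instructions=inst)
    # send_msg 會自動設定 xid,並將訊息放入傳送隊列
    datapath.send_msg(mod)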

ryu.controller.event.EventBase

所有的事件類別都會繼承自 EventBase。 若需自行設計事件類別,只需要建立一個繼承自它的類別即可。

ryu.controller.event.EventRequestBase

若需透過 RyuApp.send_request 傳送同步(synchronous)的事件,則 需要讓事件類別繼承自 EventRequestBase。

ryu.controller.event.EventReplyBase

若需要透過 RyuApp.send_reply 來回覆同步請求事件,則該事件需要 繼承 EventReplyBase。
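
下方是一個同步請求/回覆的簡單示意(類別與名稱皆為假設):EchoClient 透過 send_request 送出請求並等待回覆,EchoServer 則透過 reply_to_request 回覆。

from ryu.base import app_manager
from ryu.controller import event
from ryu.controller.handler import set_ev_cls

class EventEchoRequest(event.EventRequestBase):
    def __init__(self, data):
        super(EventEchoRequest, self).__init__()
        self.dst = 'EchoServer'  # 目的應用程式名稱(預設為類別名稱)
        self.data = data

class EventEchoReply(event.EventReplyBase):
    def __init__(self, dst, data):
        super(EventEchoReply, self).__init__(dst)
        self.data = data

class EchoServer(app_manager.RyuApp):
    @set_ev_cls(EventEchoRequest)
    def echo_request_handler(self, req):
        # 回覆會被放入請求端專用的隊列中,以避免死結
        self.reply_to_request(req, EventEchoReply(req.src, req.data))

class EchoClient(app_manager.RyuApp):
    def ask(self):
        rep = self.send_request(EventEchoRequest('hello'))
        self.logger.info('reply: %s', rep.data)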

ryu.controller.ofp_event.EventOFPStateChange

用於傳送談判階段(negotiation phase)切換時所產生的事件,當一個階段轉換完成時,此事件會被傳送。這一個類別包含了以下屬性。

屬性 說明
datapath ryu.controller.controller.Datapath 的實體
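
下方是一個追蹤連線中交換器的簡單示意(其中 ev.state 屬性並未列在上表,為實作上常見的用法):

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, DEAD_DISPATCHER
from ryu.controller.handler import set_ev_cls

class DatapathTracker(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(DatapathTracker, self).__init__(*args, **kwargs)
        self.datapaths = {}

    @set_ev_cls(ofp_event.EventOFPStateChange,
                [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def state_change_handler(self, ev):
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            # 交換器完成連線
            self.datapaths[dp.id] = dp
        elif ev.state == DEAD_DISPATCHER:
            # 交換器斷線
            self.datapaths.pop(dp.id, None)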

ryu.controller.dpset.EventDP

當一個實體交換器連上或是斷線的時候會產生此事件。 對於 OpenFlow 交換器,這一個事件原則上跟 ryu.controller.ofp_event.EventOFPStateChange 是一樣的。EventDP 包含了以下屬性。

屬性 說明
dp ryu.controller.controller.Datapath 的實體,用於表示交換器。
enter 當交換器連上時為 True,斷線時為 False。

ryu.controller.dpset.EventPortAdd

當一個新的埠口連接到一台交換器上面,則此事件會被觸發。 對於 OpenFlow 交換器,這一個事件等同於 ryu.controller.ofp_event.EventOFPPortStatus 這一個事件至少包含了以下屬性:

Attribute Description
dp ryu.controller.controller.Datapath 的實體,用於表示交換器。
port 該埠口的埠口編號

ryu.controller.dpset.EventPortDelete

當一個埠口從交換器上面移除,則此事件會被觸發。 對於 OpenFlow 交換器,這一個事件等同於 ryu.controller.ofp_event.EventOFPPortStatus 這一個事件至少包含了以下屬性:

Attribute Description
dp ryu.controller.controller.Datapath 的實體,用於表示交換器。
port 該埠口的埠口編號

ryu.controller.dpset.EventPortModify

當一個埠口的屬性被更改(例如將埠口設定成OFPPC_NO_STP),則此事件會被觸發。 對於 OpenFlow 交換器,這一個事件等同於 ryu.controller.ofp_event.EventOFPPortStatus 這一個事件至少包含了以下屬性:

Attribute Description
dp ryu.controller.controller.Datapath 的實體,用於表示交換器。
port 該埠口的埠口編號

ryu.controller.network.EventNetworkPort

當一個埠口透過 REST API 在一個網路中加入或是移除,則此事件會被觸發。 這一個事件至少包含了以下屬性:

屬性 說明
network_id 網路編號(Network ID)
dpid 該埠口所存在交換器之 OpenFlow Datapath ID。
port_no 該埠口的 OpenFlow 埠口編號。
add_del 新增時為 True,刪除時則是 False。

ryu.controller.network.EventNetworkDel

當透過 REST API 刪除一個網路資料時便會觸發。 這一個事件至少包含了以下屬性:

屬性 說明
network_id 網路編號(Network ID)

ryu.controller.network.EventMacAddress

當一個終端設備(特定埠口下)的 Mac 位址透過 REST API 更新時,則會觸發此事件。 這一個事件至少包含了以下屬性:

屬性 說明
network_id 網路編號(Network ID)
dpid 該埠口所存在交換器之 OpenFlow Datapath ID。
port_no 該埠口的 OpenFlow 埠口編號。
mac_address 若 add_del 為 False,則此一屬性為舊的 MAC 位址,否則就會是新的 MAC 位址。
add_del 若要移除該終端設備,則此屬性為 False,否則為 True。

ryu.controller.tunnels.EventTunnelKeyAdd

當透過 REST API 註冊(新增)或是更新一個 Tunnel Key 時,則會觸發此一事件。 這一個事件至少包含了以下屬性:

屬性 說明
network_id 網路編號(Network ID)
tunnel_key Tunnel Key

ryu.controller.tunnels.EventTunnelKeyDel

當透過 REST API 刪除一個 Tunnel Key 時,則會觸發此一事件。 這一個事件至少包含了以下屬性:

屬性 說明
network_id 網路編號(Network ID)
tunnel_key Tunnel Key

ryu.controller.tunnels.EventTunnelPort

當一個 tunnel 埠口透過 REST API 新增或是刪除時,則會觸發此一事件。 這一個事件至少包含了以下屬性:

屬性 說明
dpid OpenFlow Datapath ID
port_no OpenFlow 埠口編號。
remote_dpid tunnel 另一端交換器的 Datapath ID。
add_del 新增為 True,刪除為 False。

Ryu 提供的第三方函式庫

Ryu 提供了一些針對網路應用程式常用的函式庫

Packet 函式庫

簡介

Ryu 提供的封包函式庫可以讓開發者解析封包的內容,或是以現有的資料產生一個自定義的封包。另一方面,dpkt 函式庫與此函式庫的目的相同,但是它並沒有辦法處理部分網路協定(例如 vlan、mpls、gre 等等),因此我們在 Ryu 中實作了自己的封包處理函式庫。

網路位址

除非另有說明,否則網路位址如 MAC/IPv4/IPv6 位址都會以可讀的字串表示,舉例來說: ‘08:60:6e:7f:74:e7’, ‘192.0.2.1’, ‘fe80::a60:6eff:fe7f:74e7’ 等等

解析封包

下方範例程式中,我們使用封包解析函式庫去解析來自 OFPPacketIn 訊息所夾帶的封包資料。

import array

from ryu.lib.packet import packet

@handler.set_ev_cls(ofp_event.EventOFPPacketIn, handler.MAIN_DISPATCHER)
def packet_in_handler(self, ev):
    pkt = packet.Packet(array.array('B', ev.msg.data))
    for p in pkt.protocols:
        print p

你可以直接使用接收到的原始資料去產生一個 Packet 物件。這一個物件會去解析 輸入的原始資料,並且將這些資料轉換成為各個針對不同協定的類別物件,這些物件當中 各自包含了該協定的資料(例如 ipv4 包含了 IP 位址)。

在 Packet 中,protocols 這一個屬性是這一個封包所包含的協定物件列表,我們可以從此列表 中取得這一個封包所有的網路協定。

當一個 TCP 封包被控制器接收到並解析,我們可以看到類似下方的協定列表:

<ryu.lib.packet.ethernet.ethernet object at 0x107a5d790>
<ryu.lib.packet.vlan.vlan object at 0x107a5d7d0>
<ryu.lib.packet.ipv4.ipv4 object at 0x107a5d810>
<ryu.lib.packet.tcp.tcp object at 0x107a5d850>

如果該封包不包含 vlan,則我們可以獲得像是下方的列表:

<ryu.lib.packet.ethernet.ethernet object at 0x107a5d790>
<ryu.lib.packet.ipv4.ipv4 object at 0x107a5d810>
<ryu.lib.packet.tcp.tcp object at 0x107a5d850>

我們可以隨意地存取各種不同協定所產生的物件,下方是一個將 VLAN 資料取出的範例程式:

import array

from ryu.lib.packet import packet

@handler.set_ev_cls(ofp_event.EventOFPPacketIn, handler.MAIN_DISPATCHER)
def packet_in_handler(self, ev):
    pkt = packet.Packet(array.array('B', ev.msg.data))
    for p in pkt:
        print p.protocol_name, p
        if p.protocol_name == 'vlan':
            print 'vid = ', p.vid

你可以看到類似下方的結果:

ethernet <ryu.lib.packet.ethernet.ethernet object at 0x107a5d790>
vlan <ryu.lib.packet.vlan.vlan object at 0x107a5d7d0>
vid = 10
ipv4 <ryu.lib.packet.ipv4.ipv4 object at 0x107a5d810>
tcp <ryu.lib.packet.tcp.tcp object at 0x107a5d850>
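
除了逐一走訪 protocols 列表之外,也可以透過 get_protocol 直接取得特定協定的物件(以下僅為示意,找不到該協定時會回傳 None):

import array

from ryu.lib.packet import packet
from ryu.lib.packet import ipv4
from ryu.lib.packet import tcp

@handler.set_ev_cls(ofp_event.EventOFPPacketIn, handler.MAIN_DISPATCHER)
def packet_in_handler(self, ev):
    pkt = packet.Packet(array.array('B', ev.msg.data))
    ip = pkt.get_protocol(ipv4.ipv4)
    t = pkt.get_protocol(tcp.tcp)
    if ip is not None and t is not None:
        print 'tcp %s:%d -> %s:%d' % (ip.src, t.src_port, ip.dst, t.dst_port)
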
產生一個封包

開發者可以自行透過此函式庫產生一個封包。在產生一個 Packet 類別物件之後,透過 add_protocol 這一個方法,我們可以在封包當中新增不同的網路協定以及資料;當封包組裝完成之後,將該封包物件序列化(serialize)產生原始資料(raw data),並將這一份資料送出。下方範例程式示範了如何透過程式產生一個自定義的封包。

from ryu.ofproto import ether
from ryu.lib.packet import ethernet, arp, packet

e = ethernet.ethernet(dst='ff:ff:ff:ff:ff:ff',
                      src='08:60:6e:7f:74:e7',
                      ethertype=ether.ETH_TYPE_ARP)
a = arp.arp(hwtype=1, proto=0x0800, hlen=6, plen=4, opcode=2,
            src_mac='08:60:6e:7f:74:e7', src_ip='192.0.2.1',
            dst_mac='00:00:00:00:00:00', dst_ip='192.0.2.2')
p = packet.Packet()
p.add_protocol(e)
p.add_protocol(a)
p.serialize()
print repr(p.data)  # the on-wire packet

Packet 函式庫 API 參考

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。

Packet class
Stream Parser class
Protocol Header classes

OF-Config 支援

Ryu 提供了一套支援 OF-Config 協定的函式庫。

NETCONF 以及 OF-Config 所用到的 XML 設定檔案

NETCONF 以及 OF-Config 所用到的 XML 設定檔案是從 LINC 中所提取,並以 Apache 2.0 授權釋出。這些檔案僅支援部分 OF-Config 中所定義的規格,所以它僅能夠在一些有限的設備上面執行。

當更多支援 OF-Config 的交換機被這一套函式庫測試之後,這一個函式庫將會去更新 原有的 XML 設定檔,而當它們被更新之後,函式庫就可以支援更多不同的網路設備。

BGP speaker 函式庫

簡介

Ryu BGP speaker 函式庫可以讓開發者能夠去操作以及廣播 BGP 協定的訊息。 這一套函式庫支援了 ipv4, ipv4 vpn 以及 ipv6 vpn 相關網路定址協定

範例

以下範例程式說明了如何產生一個 AS 編號為 64512、Router ID 為 10.0.0.1 的 BGP 實體。它會試圖與一個對等節點(peer,IP 為 192.168.177.32、AS 編號為 64513)建立 BGP session。這一個 BGP 實體會在執行階段中陸續新增一些 prefix。

import eventlet

# BGPSpeaker needs sockets patched
eventlet.monkey_patch()

# initialize a log handler
# this is not strictly necessary but useful if you get messages like:
#    No handlers could be found for logger "ryu.lib.hub"
import logging
import sys
log = logging.getLogger()
log.addHandler(logging.StreamHandler(sys.stderr))

from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

def dump_remote_best_path_change(event):
    print 'the best path changed:', event.remote_as, event.prefix,\
        event.nexthop, event.is_withdraw

def detect_peer_down(remote_ip, remote_as):
    print 'Peer down:', remote_ip, remote_as

if __name__ == "__main__":
    speaker = BGPSpeaker(as_number=64512, router_id='10.0.0.1',
                         best_path_change_handler=dump_remote_best_path_change,
                         peer_down_handler=detect_peer_down)

    speaker.neighbor_add('192.168.177.32', 64513)
    # uncomment the below line if the speaker needs to talk with a bmp server.
    # speaker.bmp_server_add('192.168.177.2', 11019)
    count = 1
    while True:
        eventlet.sleep(30)
        prefix = '10.20.' + str(count) + '.0/24'
        print "add a new prefix", prefix
        speaker.prefix_add(prefix)
        count += 1
        if count == 4:
            speaker.shutdown()
            break

BGP speaker 函式庫 API 參考

BGPSpeaker class

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。

OVSDB Manager library

Introduction

Ryu OVSDB Manager library allows your code to interact with devices speaking the OVSDB protocol. This enables your code to perform remote management of the devices and react to topology changes on them.

Example

The following logs all new OVSDB connections and allows creating a port on a bridge.

import uuid

from ryu.base import app_manager
from ryu.controller.handler import set_ev_cls
from ryu.services.protocols.ovsdb import api as ovsdb
from ryu.services.protocols.ovsdb import event as ovsdb_event


class MyApp(app_manager.RyuApp):
    @set_ev_cls(ovsdb_event.EventNewOVSDBConnection)
    def handle_new_ovsdb_connection(self, ev):
        system_id = ev.system_id
        self.logger.info('New OVSDB connection from system id %s',
                         system_id)

    def create_port(self, system_id, bridge_name, name):
        new_iface_uuid = uuid.uuid4()
        new_port_uuid = uuid.uuid4()

        def _create_port(tables, insert):
            bridge = ovsdb.row_by_name(self, system_id, bridge_name)

            iface = insert(tables['Interface'], new_iface_uuid)
            iface.name = name
            iface.type = 'internal'

            port = insert(tables['Port'], new_port_uuid)
            port.name = name
            port.interfaces = [iface]

            bridge.ports = bridge.ports + [port]

            return (new_port_uuid, new_iface_uuid)

        req = ovsdb_event.EventModifyRequest(system_id, _create_port)
        rep = self.send_request(req)

        if rep.status != 'success':
            self.logger.error('Error creating port %s on bridge %s: %s',
                              name, bridge_name, rep.status)
            return None

        return rep.insert_uuid[new_port_uuid]

OpenFlow 協定 API

OpenFlow 相關類別及函式

OpenFlow 訊息所使用的基底類別(Base class)

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。

函式

OpenFlow v1.0 Messages and Structures

Controller-to-Switch Messages
Handshake
Switch Configuration
Modify State Messages
Queue Configuration Messages
Read State Messages
Send Packet Message
Barrier Message
Asynchronous Messages
Packet-In Message
Flow Removed Message
Port Status Message
Error Message
Symmetric Messages
Hello
Echo Request
Echo Reply
Vendor
Port Structures
Flow Match Structure
Action Structures

OpenFlow v1.2 訊息以及結構

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。 註二:本章節並未針對部分專有名詞進行翻譯

控制器對交換器訊息
OpenFlow 交握(Handshake)協定
交換器設定
Flow Table 設定
狀態修改訊息
讀取訊息
隊列(Queue)設定訊息
Packet-Out 訊息
Barrier 訊息
角色請求(Role Request)訊息
非同步訊息
Packet-In 訊息
Flow 移除訊息
埠口狀態訊息
錯誤訊息
對稱(Symmetric)訊息
OpenFlow Hello 訊息
回應(Echo)要求
回應(Echo)回覆
實驗訊息
Port Structures
Flow Match 架構
Flow 指令(Instruction)架構
動作(Action)架構

OpenFlow v1.3 訊息以及結構

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。 註二:本章節並未針對部分專有名詞進行翻譯

控制器對交換器訊息
OpenFlow 交握(Handshake)協定
交換器設定
Flow Table 設定
狀態修改訊息
Multipart 訊息
隊列(Queue)設定訊息
Packet-Out 訊息
Barrier 訊息
角色請求(Role Request)訊息
非同步設定訊息
非同步訊息
Packet-In 訊息
Flow 移除訊息
埠口狀態訊息
錯誤訊息
對稱(Symmetric)訊息
OpenFlow Hello 訊息
回應(Echo)要求
回應(Echo)回覆
實驗訊息
Port Structures
Flow Match 架構
Flow 指令(Instruction)架構
動作(Action)架構

OpenFlow v1.4 訊息以及結構

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。 註二:本章節並未針對部分專有名詞進行翻譯

控制器對交換器訊息
OpenFlow 交握(Handshake)協定
交換器設定
狀態修改訊息
Multipart 訊息
Packet-Out 訊息
Barrier 訊息
角色請求(Role Request)訊息
Bundle 訊息
非同步設定訊息
非同步訊息
Packet-In 訊息
Flow 移除訊息
埠口狀態訊息
控制器角色狀態(Role Status)訊息
Table 狀態訊息
Request Forward 訊息
錯誤訊息
對稱(Symmetric)訊息
OpenFlow Hello 訊息
回應(Echo)要求
回應(Echo)回覆
實驗訊息
Port Structures
Flow Match 架構
Flow 指令(Instruction)架構
動作(Action)架構

OpenFlow v1.5 Messages and Structures

Controller-to-Switch Messages
Handshake
Switch Configuration
Modify State Messages
Multipart Messages
Packet-Out Message
Barrier Message
Role Request Message
Bundle Messages
Set Asynchronous Configuration Message
Asynchronous Messages
Packet-In Message
Flow Removed Message
Port Status Message
Controller Role Status Message
Table Status Message
Request Forward Message
Controller Status Message
Symmetric Messages
Hello
Echo Request
Echo Reply
Error Message
Experimenter
Port Structures
Flow Match Structure
Flow Stats Structures
Flow Instruction Structures
Action Structures
Controller Status Structure

Ryu API 參考說明

註:本章節部分內容是直接採用程式碼中文件,並由程式自動產生,故部分內容無法直接翻譯。

相關設定

設定 TLS 連線

If you want to use secure channel to connect OpenFlow switches, you need to use TLS connection. This document describes how to setup Ryu to connect to the Open vSwitch over TLS.

Configuring a Public Key Infrastructure

If you don’t have a PKI, the ovs-pki script included with Open vSwitch can help you. This section is based on the INSTALL.SSL in the Open vSwitch source code.

NOTE: How to install Open vSwitch isn’t described in this document. Please refer to the Open vSwitch documents.

Create a PKI by using ovs-pki script:

% ovs-pki init
(Default directory is /usr/local/var/lib/openvswitch/pki)

The pki directory consists of controllerca and switchca subdirectories. Each directory contains CA files.

Create a controller private key and certificate:

% ovs-pki req+sign ctl controller

ctl-privkey.pem and ctl-cert.pem are generated in the current directory.

Create a switch private key and certificate:

% ovs-pki req+sign sc switch

sc-privkey.pem and sc-cert.pem are generated in the current directory.

Testing TLS Connection

Configuring ovs-vswitchd to use CA files using the ovs-vsctl “set-ssl” command, e.g.:

% ovs-vsctl set-ssl /etc/openvswitch/sc-privkey.pem \
  /etc/openvswitch/sc-cert.pem \
  /usr/local/var/lib/openvswitch/pki/controllerca/cacert.pem
% ovs-vsctl add-br br0
% ovs-vsctl set-controller br0 ssl:127.0.0.1:6633

Substitute the correct file names, if they differ from the ones used above. You should use absolute file names.

Run Ryu with CA files:

% ryu-manager --ctl-privkey ctl-privkey.pem \
              --ctl-cert ctl-cert.pem \
              --ca-certs /usr/local/var/lib/openvswitch/pki/switchca/cacert.pem \
              --verbose

You can see something like:

loading app ryu.controller.ofp_handler
instantiating app ryu.controller.ofp_handler
BRICK ofp_event
  CONSUMES EventOFPSwitchFeatures
  CONSUMES EventOFPErrorMsg
  CONSUMES EventOFPHello
  CONSUMES EventOFPEchoRequest
connected socket:<SSLSocket fileno=4 sock=127.0.0.1:6633 peer=127.0.0.1:61302> a
ddress:('127.0.0.1', 61302)
hello ev <ryu.controller.ofp_event.EventOFPHello object at 0x1047806d0>
move onto config mode
switch features ev version: 0x1 msg_type 0x6 xid 0xb0bb34e5 port OFPPhyPort(port
_no=65534, hw_addr='\x16\xdc\xa2\xe2}K', name='br0\x00\x00\x00\x00\x00\x00\x00\x
00\x00\x00\x00\x00\x00', config=0, state=0, curr=0, advertised=0, supported=0, p
eer=0)
move onto main mode

網路拓樸瀏覽器

ryu.app.gui_topology.gui_topology 提供了拓樸視覺化的功能

以下 Ryu 應用程式為本程式在執行階段所需要之相關應用程式。

ryu.app.rest_topology 取得所有節點(交換器)以及連結資訊
ryu.app.ws_topology 在新增連結與中斷連結時會對前端程式送出觸發
ryu.app.ofctl_rest 從交換器上取得 FlowEntry

使用方式

執行 Mininet 網路模擬器(或是實體網路拓樸):

$ sudo mn --controller remote --topo tree,depth=3

執行 Ryu 圖形化應用程式:

$ PYTHONPATH=. ./bin/ryu run --observe-links ryu/app/gui_topology/gui_topology.py

在瀏覽器連接中輸入:

http://<您的主機位址(IP Address)>:8080

預覽畫面

_images/gui.png

相關測試

測試 VRRP 模組

This page describes how to test Ryu VRRP service

Running integrated tests

Some testing scripts are available.

  • ryu/tests/integrated/test_vrrp_linux_multi.py
  • ryu/tests/integrated/test_vrrp_multi.py

Each file includes instructions on how to run it in its comments. Please refer to them.

Running multiple Ryu VRRP in network namespace

The following command lines set up necessary bridges and interfaces.

And then run RYU-VRRP:

# ip netns add gateway1
# ip netns add gateway2

# brctl addbr vrrp-br0
# brctl addbr vrrp-br1

# ip link add veth0 type veth peer name veth0-br0
# ip link add veth1 type veth peer name veth1-br0
# ip link add veth2 type veth peer name veth2-br0
# ip link add veth3 type veth peer name veth3-br1
# ip link add veth4 type veth peer name veth4-br1
# ip link add veth5 type veth peer name veth5-br1

# brctl addif vrrp-br0 veth0-br0
# brctl addif vrrp-br0 veth1-br0
# brctl addif vrrp-br0 veth2-br0
# brctl addif vrrp-br1 veth3-br1
# brctl addif vrrp-br1 veth4-br1
# brctl addif vrrp-br1 veth5-br1

# ip link set vrrp-br0 up
# ip link set vrrp-br1 up

# ip link set veth0 up
# ip link set veth0-br0 up
# ip link set veth1-br0 up
# ip link set veth2-br0 up
# ip link set veth3-br1 up
# ip link set veth4-br1 up
# ip link set veth5 up
# ip link set veth5-br1 up

# ip link set veth1 netns gateway1
# ip link set veth2 netns gateway2
# ip link set veth3 netns gateway1
# ip link set veth4 netns gateway2

# ip netns exec gateway1 ip link set veth1 up
# ip netns exec gateway2 ip link set veth2 up
# ip netns exec gateway1 ip link set veth3 up
# ip netns exec gateway2 ip link set veth4 up

# ip netns exec gateway1 ./ryu-vrrp veth1 '10.0.0.2' 254
# ip netns exec gateway2 ./ryu-vrrp veth2 '10.0.0.3' 100

Caveats

Please make sure that all interfaces and bridges are UP. Don’t forget interfaces in netns gateway1/gateway2.

               ^ veth5
               |
               V veth5-br1
       -----------------------
       |Linux Bridge vrrp-br1|
       -----------------------
veth3-br1^            ^ veth4-br1
         |            |
    veth3V            V veth4
    ----------       ----------
    |netns   |       |netns   |
    |gateway1|       |gateway2|
    |ryu-vrrp|       |ryu-vrrp|
    ----------       ----------
    veth1^            ^ veth2
         |            |
veth1-br0V            V veth2-br0
       -----------------------
       |Linux Bridge vrrp-br0|
       -----------------------
               ^ veth0-br0
               |
               V veth0

Here’s the helper executable, ryu-vrrp:

#!/usr/bin/env python
#
# Copyright (C) 2013 Nippon Telegraph and Telephone Corporation.
# Copyright (C) 2013 Isaku Yamahata <yamahata at valinux co jp>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ryu.lib import hub
hub.patch()

# TODO:
#   Right now, we have our own patched copy of ovs python bindings
#   Once our modification is upstreamed and widely deployed,
#   use it
#
# NOTE: this modifies sys.path and thus affects the following imports.
# eg. oslo.config.cfg.
import ryu.contrib

from oslo.config import cfg
import logging
import netaddr
import sys
import time

from ryu import log
log.early_init_log(logging.DEBUG)

from ryu import flags
from ryu import version
from ryu.base import app_manager
from ryu.controller import controller
from ryu.lib import mac as lib_mac
from ryu.lib.packet import vrrp
from ryu.services.protocols.vrrp import api as vrrp_api
from ryu.services.protocols.vrrp import event as vrrp_event


CONF = cfg.CONF

_VRID = 7
_IP_ADDRESS = '10.0.0.1'
_PRIORITY = 100


class VRRPTestRouter(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(VRRPTestRouter, self).__init__(*args, **kwargs)
        print args
        self.logger.debug('vrrp_config %s', args)
        self._ifname = args[0]
        self._primary_ip_address = args[1]
        self._priority = int(args[2])

    def start(self):
        print 'start'
        hub.spawn(self._main)

    def _main(self):
        print self
        interface = vrrp_event.VRRPInterfaceNetworkDevice(
            lib_mac.DONTCARE, self._primary_ip_address, None, self._ifname)
        self.logger.debug('%s', interface)

        ip_addresses = [_IP_ADDRESS]
        config = vrrp_event.VRRPConfig(
            version=vrrp.VRRP_VERSION_V3, vrid=_VRID, priority=self._priority,
            ip_addresses=ip_addresses)
        self.logger.debug('%s', config)

        rep = vrrp_api.vrrp_config(self, interface, config)
        self.logger.debug('%s', rep)


def main():
    vrrp_config = sys.argv[-3:]
    sys.argv = sys.argv[:-3]
    CONF(project='ryu', version='ryu-vrrp %s' % version)

    log.init_log()
    # always enable ofp for now.
    app_lists = ['ryu.services.protocols.vrrp.manager',
                 'ryu.services.protocols.vrrp.dumper',
                 'ryu.services.protocols.vrrp.sample_manager']

    app_mgr = app_manager.AppManager.get_instance()
    app_mgr.load_apps(app_lists)
    contexts = app_mgr.create_contexts()
    app_mgr.instantiate_apps(**contexts)
    vrrp_router = app_mgr.instantiate(VRRPTestRouter, *vrrp_config, **contexts)
    vrrp_router.start()

    while True:
        time.sleep(999999)

    app_mgr.close()


if __name__ == "__main__":
    main()

在 LINC 上測試 OF-config

This page describes how to setup LINC and test Ryu OF-config with it.

The procedure is as follows. Although the whole procedure is written here for the reader's convenience, please refer to the LINC documentation for the latest information on LINC.

The test procedure

  • install Erlang environment
  • build LINC
  • configure LINC switch
  • setup for LINC
  • run LINC switch
  • run Ryu test_of_config app

For getting/installing Ryu itself, please refer to http://osrg.github.io/ryu/

Install Erlang environment

Since LINC is written in Erlang, you need to install Erlang execution environment. Required version is R15B+.

The easiest way is to use binary package from https://www.erlang-solutions.com/downloads/download-erlang-otp

The distribution may also provide Erlang package.

build LINC

install necessary packages for build
install necessary build tools

On Ubuntu:

# apt-get install git-core bridge-utils libpcap0.8 libpcap-dev libcap2-bin uml-utilities

On RedHat/CentOS:

# yum install git sudo bridge-utils libpcap libpcap-devel libcap tunctl

Note that on RedHat/CentOS 5.x you need a newer version of libpcap:

# yum erase libpcap libpcap-devel
# yum install flex byacc
# wget http://www.tcpdump.org/release/libpcap-1.2.1.tar.gz
# tar xzf libpcap-1.2.1.tar.gz
# cd libpcap-1.2.1
# ./configure
# make && make install
Get the LINC repo and build it

Clone LINC repo:

% git clone git://github.com/FlowForwarding/LINC-Switch.git

Then compile everything:

% cd LINC-Switch
% make

註解

At the time of this writing, test_of_config fails due to a bug of LINC. You can try this test with LINC which is built by the following methods.

% cd LINC-Switch
% make
% cd deps/of_config
% git reset --hard f772af4b765984381ad024ca8e5b5b8c54362638
% cd ../..
% make offline

Setup LINC

Edit the LINC switch configuration file rel/linc/releases/0.1/sys.config. Here is a sample sys.config for test_of_config.py to run.

[{linc,
     [{of_config,enabled},
      {capable_switch_ports,
          [{port,1,[{interface,"linc-port"}]},
           {port,2,[{interface,"linc-port2"}]},
           {port,3,[{interface,"linc-port3"}]},
           {port,4,[{interface,"linc-port4"}]}]},
      {capable_switch_queues,
          [
            {queue,991,[{min_rate,10},{max_rate,120}]},
            {queue,992,[{min_rate,10},{max_rate,130}]},
            {queue,993,[{min_rate,200},{max_rate,300}]},
            {queue,994,[{min_rate,400},{max_rate,900}]}
            ]},
      {logical_switches,
          [{switch,0,
               [{backend,linc_us4},
                {controllers,[{"Switch0-Default-Controller","127.0.0.1",6633,tcp}]},
                {controllers_listener,{"127.0.0.1",9998,tcp}},
                {queues_status,enabled},
                {ports,[{port,1,{queues,[]}},{port,2,{queues,[991,992]}}]}]}
                ,
           {switch,7,
               [{backend,linc_us3},
                {controllers,[{"Switch7-Controller","127.0.0.1",6633,tcp}]},
                {controllers_listener,disabled},
                {queues_status,enabled},
                {ports,[{port,4,{queues,[]}},{port,3,{queues,[993,994]}}]}]}
        ]}]},
 {enetconf,
     [{capabilities,
          [{base,{1,0}},
           {base,{1,1}},
           {startup,{1,0}},
           {'writable-running',{1,0}}]},
      {callback_module,linc_ofconfig},
      {sshd_ip,{127,0,0,1}},
      {sshd_port,1830},
      {sshd_user_passwords,[{"linc","linc"}]}]},
 {lager,
     [{handlers,
          [{lager_console_backend,debug},
           {lager_file_backend,
               [{"log/error.log",error,10485760,"$D0",5},
                {"log/console.log",info,10485760,"$D0",5}]}]}]},
 {sasl,
     [{sasl_error_logger,{file,"log/sasl-error.log"}},
      {errlog_type,error},
      {error_logger_mf_dir,"log/sasl"},
      {error_logger_mf_maxbytes,10485760},
      {error_logger_mf_maxfiles,5}]},
 {sync,[{excluded_modules,[procket]}]}].

setup for LINC

As the above sys.config requires some network interface, create them:

# ip link add linc-port type veth peer name linc-port-peer
# ip link set linc-port up
# ip link add linc-port2 type veth peer name linc-port-peer2
# ip link set linc-port2 up
# ip link add linc-port3 type veth peer name linc-port-peer3
# ip link set linc-port3 up
# ip link add linc-port4 type veth peer name linc-port-peer4
# ip link set linc-port4 up

After stopping LINC, those created interfaces can be deleted:

# ip link delete linc-port
# ip link delete linc-port2
# ip link delete linc-port3
# ip link delete linc-port4

Starting LINC OpenFlow switch

Then run LINC:

# rel/linc/bin/linc console

Run Ryu test_of_config app

Run test_of_config app:

# ryu-manager --verbose ryu.tests.integrated.test_of_config ryu.app.rest

If you don’t install ryu and are working in the git repo directly:

# PYTHONPATH=. ./bin/ryu-manager --verbose ryu.tests.integrated.test_of_config ryu.app.rest

在 OpenStack 中使用 Ryu 作為網路控制器

Ryu cooperates with OpenStack using Quantum Ryu plugin. The plugin is available in the official Quantum releases.

For more information, please visit http://github.com/osrg/ryu/wiki/OpenStack . We describe instructions for installing and configuring OpenStack with Ryu, and provide a pre-configured VM image so that you can easily try OpenStack with Ryu.


與 Snort 整合

This document describes how to integrate Ryu with Snort.

Overview

There are two options for sending alerts to the Ryu controller. Option 1 is easier if you just want to demonstrate or test. Since Snort needs a lot of computation power for analyzing packets, you can choose Option 2 to run them on separate machines.

[Option 1] Ryu and Snort are on the same machine

      +---------------------+
      |      unixsock       |
      |    Ryu  ==  snort   |
      +----eth0-----eth1----+
             |       |
+-------+   +----------+   +-------+
| HostA |---| OFSwitch |---| HostB |
+-------+   +----------+   +-------+

The above depicts the Ryu and Snort architecture. Ryu receives Snort alert packets via a Unix domain socket. To monitor packets between HostA and HostB, install a flow that mirrors packets to Snort.

[Option 2] Ryu and Snort are on the different machines

          +---------------+
          |    Snort     eth0--|
          |   Sniffer     |    |
          +-----eth1------+    |
                 |             |
+-------+   +----------+   +-----------+
| HostA |---| OFSwitch |---| LAN (*CP) |
+-------+   +----------+   +-----------+
                 |             |
            +----------+   +----------+
            |  HostB   |   |   Ryu    |
            +----------+   +----------+

*CP: Control Plane

The above depicts the Ryu and Snort architecture. Ryu receives Snort alert packets via a network socket. To monitor packets between HostA and HostB, install a flow that mirrors packets to Snort.

Installation Snort

Snort is an open source network intrusion prevention and detection system developed by Sourcefire. If you are not familiar with installing/setting up Snort, please refer to the Snort setup guides.

http://www.snort.org/documents

Configure Snort

The configuration example is below:

  • Add a snort rules file into /etc/snort/rules named Myrules.rules

    alert icmp any any -> any any (msg:"Pinging...";sid:1000004;)
    alert tcp any any -> any 80 (msg:"Port 80 is accessing"; sid:1000003;)
    
  • Add the custom rules in /etc/snort/snort.conf

    include $RULE_PATH/Myrules.rules
    

Configure the NIC in promiscuous mode.

$ sudo ifconfig eth1 promisc

Usage

[Option 1]

  1. Modify the simple_switch_snort.py:

    socket_config = {'unixsock': True}
    # True: Unix Domain Socket Server [Option1]
    # False: Network Socket Server [Option2]
    
  2. Run Ryu with sample application:

    $ sudo ./bin/ryu-manager ryu/app/simple_switch_snort.py
    

All incoming packets will be mirrored to port 3, which should be connected to the Snort network interface. You can change the mirror port by assigning a new value to self.snort_port = 3 in simple_switch_snort.py

  1. Run Snort:

    $ sudo -i
    $ snort -i eth1 -A unsock -l /tmp -c /etc/snort/snort.conf
    
  2. Send an ICMP packet from HostA (192.168.8.40) to HostB (192.168.8.50):

    $ ping 192.168.8.50
    
  3. You can see the result under next section.

[Option 2]

  1. Modify the simple_switch_snort.py:

    socket_config = {'unixsock': False}
    # True: Unix Domain Socket Server [Option1]
    # False: Network Socket Server [Option2]
    
  2. Run Ryu with sample application (On the Controller):

    $ ./bin/ryu-manager ryu/app/simple_switch_snort.py
    
  3. Run Snort (On the Snort machine):

    $ sudo -i
    $ snort -i eth1 -A unsock -l /tmp -c /etc/snort/snort.conf
    
  4. Run pigrelay.py (On the Snort machine):

    $ sudo python pigrelay.py
    

This program listens for Snort alert messages on a Unix domain socket and sends them to Ryu over a network socket.

You can clone the source code from this repo. https://github.com/John-Lin/pigrelay

  1. Send an ICMP packet from HostA (192.168.8.40) to HostB (192.168.8.50):

    $ ping 192.168.8.50
    
  2. You can see the alert message below:

    alertmsg: Pinging...
    icmp(code=0,csum=19725,data=echo(data=array('B', [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 97, 98, 99, 100, 101, 102, 103, 104, 105]),id=1,seq=78),type=8)
    
    ipv4(csum=42562,dst='192.168.8.50',flags=0,header_length=5,identification=724,offset=0,option=None,proto=1,src='192.168.8.40',tos=0,total_length=60,ttl=128,version=4)
    
    ethernet(dst='00:23:54:5a:05:14',ethertype=2048,src='00:23:54:6c:1d:17')
    
    
    alertmsg: Pinging...
    icmp(code=0,csum=21773,data=echo(data=array('B', [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 97, 98, 99, 100, 101, 102, 103, 104, 105]),id=1,seq=78),type=0)
    
    ipv4(csum=52095,dst='192.168.8.40',flags=0,header_length=5,identification=7575,offset=0,option=None,proto=1,src='192.168.8.50',tos=0,total_length=60,ttl=64,version=4)
    

Ryu 預設的應用程式

Ryu 中包含了一些預設的應用程式。部分的應用程式為範例應用程式,也有部分的應用程式主要是用來支援其他應用程式。

ryu.app.ofctl

ryu.app.ofctl provides a convenient way to use OpenFlow messages synchronously.

OfctlService ryu application is automatically loaded if your Ryu application imports ofctl.api module.

Example:

import ryu.app.ofctl.api

OfctlService application internally uses OpenFlow barrier messages to ensure message boundaries. As OpenFlow messages are asynchronous and some messages do not have any reply on success, barriers are necessary for correct error handling.
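
A minimal sketch (not part of the original document) of using ofctl.api to send a request and wait for the reply synchronously; it assumes an OpenFlow 1.3 datapath and uses a port-desc stats request as an example:

import ryu.app.ofctl.api as ofctl_api

from ryu.base import app_manager

class OfctlExample(app_manager.RyuApp):
    def get_port_desc(self, dpid):
        # look up the datapath object by its id
        datapath = ofctl_api.get_datapath(self, dpid)
        if datapath is None:
            return None
        parser = datapath.ofproto_parser
        msg = parser.OFPPortDescStatsRequest(datapath, 0)
        # block until the reply (possibly split into multiple messages) arrives
        return ofctl_api.send_msg(self, msg,
                                  reply_cls=parser.OFPPortDescStatsReply,
                                  reply_multi=True)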

api module

exceptions

exception ryu.app.ofctl.exception.InvalidDatapath(result)

Datapath is invalid.

This can happen when the bridge disconnects.

exception ryu.app.ofctl.exception.OFError(result)

OFPErrorMsg is received.

exception ryu.app.ofctl.exception.UnexpectedMultiReply(result)

Two or more replies are received for a reply_multi=False request.

ryu.app.ofctl_rest

ryu.app.ofctl_rest provides REST APIs for retrieving and updating the switch stats. This application helps you debug your application and get various statistics.

This application supports OpenFlow version 1.0, 1.2 and 1.3.

Retrieve the switch stats

Get all switches

Get the list of all switches which are connected to the controller.

Usage:

Method GET
URI /stats/switches

Response message body:

Attribute Description Example
dpid Datapath ID 1

Example of use:

$ curl -X GET http://localhost:8080/stats/switches
[
  1,
  2,
  3
]

註解

The result of the REST command is formatted for easy viewing.

Get the desc stats

Get the desc stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/desc/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
mfr_desc Manufacturer description “Nicira, Inc.”,
hw_desc Hardware description “Open vSwitch”,
sw_desc Software description “2.3.90”,
serial_num Serial number “None”,
dp_desc Human readable description of datapath “None”

Example of use:

$ curl -X GET http://localhost:8080/stats/desc/1
{
  "1": {
    "mfr_desc": "Nicira, Inc.",
    "hw_desc": "Open vSwitch",
    "sw_desc": "2.3.90",
    "serial_num": "None",
    "dp_desc": "None"
  }
}
Get all flows stats

Get all flows stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/flow/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
length Length of this entry 88
table_id Table ID 0
duration_sec Time flow has been alive in seconds 2
duration_nsec Time flow has been alive in nanoseconds beyond duration_sec 6.76e+08
priority Priority of the entry 11111
idle_timeout Number of seconds idle before expiration 0
hard_timeout Number of seconds before expiration 0
flags Bitmap of OFPFF_* flags 1
cookie Opaque controller-issued identifier 1
packet_count Number of packets in flow 0
byte_count Number of bytes in flow 0
match Fields to match {“in_port”: 1}
actions Instruction set [“OUTPUT:2”]

Example of use:

$ curl -X GET http://localhost:8080/stats/flow/1
{
  "1": [
    {
      "length": 88,
      "table_id": 0,
      "duration_sec": 2,
      "duration_nsec": 6.76e+08,
      "priority": 11111,
      "idle_timeout": 0,
      "hard_timeout": 0,
      "flags": 1,
      "cookie": 1,
      "packet_count": 0,
      "byte_count": 0,
      "match": {
        "in_port": 1
      },
      "actions": [
        "OUTPUT:2"
      ]
    }
  ]
}
Get flows stats filtered by fields

Get flows stats of the switch filtered by the OFPFlowStats fields. This is POST method version of Get all flows stats.

Usage:

Method POST
URI /stats/flow/<dpid>

Request message body:

Attribute Description Example Default
table_id Table ID (int) 0 OFPTT_ALL
out_port Require matching entries to include this as an output port (int) 2 OFPP_ANY
out_group Require matching entries to include this as an output group (int) 1 OFPG_ANY
cookie Require matching entries to contain this cookie value (int) 1 0
cookie_mask Mask used to restrict the cookie bits that must match (int) 1 0
match Fields to match (dict) {“in_port”: 1} {} #wildcarded
Response message body:
The same as Get all flows stats

Example of use:

$ curl -X POST -d '{
     "table_id": 0,
     "out_port": 2,
     "cookie": 1,
     "cookie_mask": 1,
     "match":{
         "in_port":1
     }
 }' http://localhost:8080/stats/flow/1
{
  "1": [
    {
      "table_id": 0,
      "duration_sec": 2,
      "duration_nsec": 6.76e+08,
      "priority": 11111,
      "idle_timeout": 0,
      "hard_timeout": 0,
      "cookie": 1,
      "packet_count": 0,
      "byte_count": 0,
      "match": {
        "in_port": 1
      },
      "actions": [
        "OUTPUT:2"
      ]
    }
  ]
}
Get aggregate flow stats

Get aggregate flow stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/aggregateflow/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
packet_count Number of packets in flows 18
byte_count Number of bytes in flows 756
flow_count Number of flows 3

Example of use:

$ curl -X GET http://localhost:8080/stats/aggregateflow/1
{
  "1": [
    {
      "packet_count": 18,
      "byte_count": 756,
      "flow_count": 3
    }
  ]
}
Get aggregate flow stats filtered by fields

Get aggregate flow stats of the switch filtered by the OFPAggregateStats fields. This is POST method version of Get aggregate flow stats.

Usage:

Method POST
URI /stats/aggregateflow/<dpid>

Request message body:

Attribute Description Example Default
table_id Table ID (int) 0 OFPTT_ALL
out_port Require matching entries to include this as an output port (int) 2 OFPP_ANY
out_group Require matching entries to include this as an output group (int) 1 OFPG_ANY
cookie Require matching entries to contain this cookie value (int) 1 0
cookie_mask Mask used to restrict the cookie bits that must match (int) 1 0
match Fields to match (dict) {“in_port”: 1} {} #wildcarded
Response message body:
The same as Get aggregate flow stats

Example of use:

$ curl -X POST -d '{
     "table_id": 0,
     "out_port": 2,
     "cookie": 1,
     "cookie_mask": 1,
     "match":{
         "in_port":1
     }
 }' http://localhost:8080/stats/aggregateflow/1
{
  "1": [
    {
      "packet_count": 18,
      "byte_count": 756,
      "flow_count": 3
    }
  ]
}
Get table stats

Get table stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/table/<dpid>

Response message body(OpenFlow1.0):

Attribute Description Example
dpid Datapath ID “1”
table_id Table ID 0
name Name of Table “classifier”
max_entries Max number of entries supported 1e+06
wildcards Bitmap of OFPFW_* wildcards that are supported by the table [“IN_PORT”,”DL_VLAN”]
active_count Number of active entries 0
lookup_count Number of packets looked up in table 8
matched_count Number of packets that hit table 0

Response message body(OpenFlow1.2):

Attribute Description Example
dpid Datapath ID “1”
table_id Table ID 0
name Name of Table “classifier”
match Bitmap of (1 << OFPXMT_*) that indicate the fields the table can match on [“OFB_IN_PORT”,”OFB_METADATA”]
wildcards Bitmap of (1 << OFPXMT_*) wildcards that are supported by the table [“OFB_IN_PORT”,”OFB_METADATA”]
write_actions Bitmap of OFPAT_* that are supported by the table with OFPIT_WRITE_ACTIONS [“OUTPUT”,”SET_MPLS_TTL”]
apply_actions Bitmap of OFPAT_* that are supported by the table with OFPIT_APPLY_ACTIONS [“OUTPUT”,”SET_MPLS_TTL”]
write_setfields Bitmap of (1 << OFPXMT_*) header fields that can be set with OFPIT_WRITE_ACTIONS [“OFB_IN_PORT”,”OFB_METADATA”]
apply_setfields Bitmap of (1 << OFPXMT_*) header fields that can be set with OFPIT_APPLY_ACTIONS [“OFB_IN_PORT”,”OFB_METADATA”]
metadata_match Bits of metadata table can match 18446744073709552000
metadata_write Bits of metadata table can write 18446744073709552000
instructions Bitmap of OFPIT_* values supported [“GOTO_TABLE”,”WRITE_METADATA”]
config Bitmap of OFPTC_* values []
max_entries Max number of entries supported 1e+06
active_count Number of active entries 0
lookup_count Number of packets looked up in table 0
matched_count Number of packets that hit table 8

Response message body(OpenFlow1.3):

Attribute Description Example
dpid Datapath ID “1”
table_id Table ID 0
active_count Number of active entries 0
lookup_count Number of packets looked up in table 8
matched_count Number of packets that hit table 0

Example of use:

$ curl -X GET http://localhost:8080/stats/table/1

Response (OpenFlow1.0):

{
  "1": [
    {
      "table_id": 0,
      "lookup_count": 8,
      "max_entries": 1e+06,
      "active_count": 0,
      "name": "classifier",
      "matched_count": 0,
      "wildcards": [
       "IN_PORT",
       "DL_VLAN"
      ]
    },
    ...
    {
      "table_id": 253,
      "lookup_count": 0,
      "max_entries": 1e+06,
      "active_count": 0,
      "name": "table253",
      "matched_count": 0,
      "wildcards": [
       "IN_PORT",
       "DL_VLAN"
      ]
    }
  ]
}

Response (OpenFlow1.2):

{
  "1": [
    {
      "apply_setfields": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "match": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "metadata_write": 18446744073709552000,
      "config": [],
      "instructions":[
       "GOTO_TABLE",
       "WRITE_METADATA"
      ],
      "table_id": 0,
      "metadata_match": 18446744073709552000,
      "lookup_count": 8,
      "wildcards": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "write_setfields": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "write_actions": [
       "OUTPUT",
       "SET_MPLS_TTL"
      ],
      "name": "classifier",
      "matched_count": 0,
      "apply_actions": [
       "OUTPUT",
       "SET_MPLS_TTL"
      ],
      "active_count": 0,
      "max_entries": 1e+06
    },
    ...
    {
      "apply_setfields": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "match": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "metadata_write": 18446744073709552000,
      "config": [],
      "instructions": [
       "GOTO_TABLE",
       "WRITE_METADATA"
      ],
      "table_id": 253,
      "metadata_match": 18446744073709552000,
      "lookup_count": 0,
      "wildcards": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "write_setfields": [
       "OFB_IN_PORT",
       "OFB_METADATA"
      ],
      "write_actions": [
       "OUTPUT",
       "SET_MPLS_TTL"
      ],
      "name": "table253",
      "matched_count": 0,
      "apply_actions": [
       "OUTPUT",
       "SET_MPLS_TTL"
      ],
      "active_count": 0,
      "max_entries": 1e+06
    }
  ]
}

Response (OpenFlow1.3):

{
  "1": [
    {
      "active_count": 0,
      "table_id": 0,
      "lookup_count": 8,
      "matched_count": 0
    },
    ...
    {
      "active_count": 0,
      "table_id": 253,
      "lookup_count": 0,
      "matched_count": 0
    }
  ]
}
Get table features

Get table features of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/tablefeatures/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
table_id Table ID 0
name Name of Table “table_0”
metadata_match Bits of metadata table can match 18446744073709552000
metadata_write Bits of metadata table can write 18446744073709552000
config Bitmap of OFPTC_* values 0
max_entries Max number of entries supported 4096
properties struct ofp_table_feature_prop_header [{“type”: “INSTRUCTIONS”,”instruction_ids”: [...]},...]

Example of use:

$ curl -X GET http://localhost:8080/stats/tablefeatures/1
{
  "1": [
    {
      "metadata_write": 18446744073709552000,
      "config": 0,
      "table_id": 0,
      "metadata_match": 18446744073709552000,
      "max_entries": 4096,
      "properties": [
        {
          "type": "INSTRUCTIONS",
          "instruction_ids": [
           {
           "len": 4,
           "type": 1
           },
           ...
          ]
        },
        ...
      ],
      "name": "table_0"
    },
    {
      "metadata_write": 18446744073709552000,
      "config": 0,
      "table_id": 1,
      "metadata_match": 18446744073709552000,
      "max_entries": 4096,
      "properties": [
        {
          "type": "INSTRUCTIONS",
          "instruction_ids": [
           {
           "len": 4,
           "type": 1
           },
           ...
          ]
        },
        ...
      ],
      "name": "table_1"
    },
    ...
  ]
}
Get ports stats

Get ports stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/port/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
port_no Port number 1
rx_packets Number of received packets 9
tx_packets Number of transmitted packets 6
rx_bytes Number of received bytes 738
tx_bytes Number of transmitted bytes 252
rx_dropped Number of packets dropped by RX 0
tx_dropped Number of packets dropped by TX 0
rx_errors Number of receive errors 0
tx_errors Number of transmit errors 0
rx_frame_err Number of frame alignment errors 0
rx_over_err Number of packets with RX overrun 0
rx_crc_err Number of CRC errors 0
collisions Number of collisions 0
duration_sec Time port has been alive in seconds 12
duration_nsec Time port has been alive in nanoseconds beyond duration_sec 9.76e+08

Example of use:

$ curl -X GET http://localhost:8080/stats/port/1
{
  "1": [
    {
      "port_no": 1,
      "rx_packets": 9,
      "tx_packets": 6,
      "rx_bytes": 738,
      "tx_bytes": 252,
      "rx_dropped": 0,
      "tx_dropped": 0,
      "rx_errors": 0,
      "tx_errors": 0,
      "rx_frame_err": 0,
      "rx_over_err": 0,
      "rx_crc_err": 0,
      "collisions": 0,
      "duration_sec": 12,
      "duration_nsec": 9.76e+08
    },
    {
      :
      :
    }
  ]
}
Get ports description

Get ports description of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/portdesc/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
port_no Port number 1
hw_addr Ethernet hardware address “0a:b6:d0:0c:e1:d7”
name Name of port “s1-eth1”
config Bitmap of OFPPC_* flags 0
state Bitmap of OFPPS_* flags 0
curr Current features 2112
advertised Features being advertised by the port 0
supported Features supported by the port 0
peer Features advertised by peer 0
curr_speed Current port bitrate in kbps 1e+07
max_speed Max port bitrate in kbps 0

Example of use:

$ curl -X GET http://localhost:8080/stats/portdesc/1
{
  "1": [
    {
      "port_no": 1,
      "hw_addr": "0a:b6:d0:0c:e1:d7",
      "name": "s1-eth1",
      "config": 0,
      "state": 0,
      "curr": 2112,
      "advertised": 0,
      "supported": 0,
      "peer": 0,
      "curr_speed": 1e+07,
      "max_speed": 0
    },
    {
      :
      :
    }
  ]
}
Get queues stats

Get queues stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/queue/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
port_no Port number 1
queue_id Queue ID 0
tx_bytes Number of transmitted bytes 0
tx_packets Number of transmitted packets 0
tx_errors Number of packets dropped due to overrun 0
duration_sec Time queue has been alive in seconds 4294963425
duration_nsec Time queue has been alive in nanoseconds beyond duration_sec 3912967296

Example of use:

$ curl -X GET http://localhost:8080/stats/queue/1
{
  "1": [
    {
      "port_no": 1,
      "queue_id": 0,
      "tx_bytes": 0,
      "tx_packets": 0,
      "tx_errors": 0,
      "duration_sec": 4294963425,
      "duration_nsec": 3912967296
    },
    {
      "port_no": 1,
      "queue_id": 1,
      "tx_bytes": 0,
      "tx_packets": 0,
      "tx_errors": 0,
      "duration_sec": 4294963425,
      "duration_nsec": 3912967296
    }
  ]
}
Get queues config

Get queues config of the switch specified by Datapath ID and port in the URI.

Usage:

Method GET
URI /stats/queueconfig/<dpid>/<port>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
port Port which was queried 1
queues struct ofp_packet_queue  
– queue_id ID for the specific queue 2
– port Port this queue is attached to 0
– properties struct ofp_queue_prop_header properties [{“property”: “MIN_RATE”,”rate”: 80}]

Example of use:

$ curl -X GET http://localhost:8080/stats/queueconfig/1/1
{
  "1": [
    {
      "port": 1,
      "queues": [
        {
          "properties": [
            {
              "property": "MIN_RATE",
              "rate": 80
            }
          ],
          "port": 0,
          "queue_id": 1
        },
        {
          "properties": [
            {
              "property": "MAX_RATE",
              "rate": 120
            }
          ],
          "port": 2,
          "queue_id": 2
        },
        {
          "properties": [
            {
              "property": "EXPERIMENTER",
              "data": [],
              "experimenter": 999
            }
          ],
          "port": 3,
          "queue_id": 3
        }
      ]
    }
  ]
}
Get groups stats

Get groups stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/group/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
length Length of this entry 56
group_id Group ID 1
ref_count Number of flows or groups that directly forward to this group 1
packet_count Number of packets processed by group 0
byte_count Number of bytes processed by group 0
duration_sec Time group has been alive in seconds 161
duration_nsec Time group has been alive in nanoseconds beyond duration_sec 3.03e+08
bucket_stats struct ofp_bucket_counter  
– packet_count Number of packets processed by bucket 0
– byte_count Number of bytes processed by bucket 0

Example of use:

$ curl -X GET http://localhost:8080/stats/group/1
{
  "1": [
    {
      "length": 56,
      "group_id": 1,
      "ref_count": 1,
      "packet_count": 0,
      "byte_count": 0,
      "duration_sec": 161,
      "duration_nsec": 3.03e+08,
      "bucket_stats": [
        {
          "packet_count": 0,
          "byte_count": 0
        }
      ]
    }
  ]
}
Get group description stats

Get group description stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/groupdesc/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
type One of OFPGT_* “ALL”
group_id Group ID 1
buckets struct ofp_bucket  
– weight Relative weight of bucket (Only defined for select groups) 0
– watch_port Port whose state affects whether this bucket is live (Only required for fast failover groups) 4294967295
– watch_group Group whose state affects whether this bucket is live (Only required for fast failover groups) 4294967295
– actions 0 or more actions associated with the bucket [“OUTPUT:1”]

Example of use:

$ curl -X GET http://localhost:8080/stats/groupdesc/1
{
  "1": [
    {
      "type": "ALL",
      "group_id": 1,
      "buckets": [
        {
          "weight": 0,
          "watch_port": 4294967295,
          "watch_group": 4294967295,
          "actions": [
            "OUTPUT:1"
          ]
        }
      ]
    }
  ]
}
Get group features stats

Get the group features stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/groupfeatures/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
types Bitmap of (1 << OFPGT_*) values supported []
capabilities Bitmap of OFPGFC_* capabilities supported [“SELECT_WEIGHT”,”SELECT_LIVENESS”,”CHAINING”]
max_groups Maximum number of groups for each type [{“ALL”: 4294967040},...]
actions Bitmaps of (1 << OFPAT_*) values supported [{“ALL”: [“OUTPUT”,...]},...]

Example of use:

$ curl -X GET http://localhost:8080/stats/groupfeatures/1
{
  "1": [
    {
      "types": [],
      "capabilities": [
        "SELECT_WEIGHT",
        "SELECT_LIVENESS",
        "CHAINING"
      ],
      "max_groups": [
        {
          "ALL": 4294967040
        },
        {
          "SELECT": 4294967040
        },
        {
          "INDIRECT": 4294967040
        },
        {
          "FF": 4294967040
        }
      ],
      "actions": [
        {
          "ALL": [
            "OUTPUT",
            "COPY_TTL_OUT",
            "COPY_TTL_IN",
            "SET_MPLS_TTL",
            "DEC_MPLS_TTL",
            "PUSH_VLAN",
            "POP_VLAN",
            "PUSH_MPLS",
            "POP_MPLS",
            "SET_QUEUE",
            "GROUP",
            "SET_NW_TTL",
            "DEC_NW_TTL",
            "SET_FIELD"
          ]
        },
        {
          "SELECT": []
        },
        {
          "INDIRECT": []
        },
        {
          "FF": []
        }
      ]
    }
  ]
}
Get meters stats

Get the meter stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/meter/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
meter_id Meter ID 1
len Length in bytes of this stats entry 56
flow_count Number of flows bound to meter 0
packet_in_count Number of packets in input 0
byte_in_count Number of bytes in input 0
duration_sec Time meter has been alive in seconds 37
duration_nsec Time meter has been alive in nanoseconds beyond duration_sec 988000
band_stats struct ofp_meter_band_stats  
– packet_band_count Number of packets in band 0
– byte_band_count Number of bytes in band 0

Example of use:

$ curl -X GET http://localhost:8080/stats/meter/1
{
  "1": [
    {
      "meter_id": 1,
      "len": 56,
      "flow_count": 0,
      "packet_in_count": 0,
      "byte_in_count": 0,
      "duration_sec": 37,
      "duration_nsec": 988000,
      "band_stats": [
        {
          "packet_band_count": 0,
          "byte_band_count": 0
        }
      ]
    }
  ]
}
Get meter config stats

Get the meter config stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/meterconfig/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
flags All OFPMC_* that apply “KBPS”
meter_id Meter ID 1
bands struct ofp_meter_band_header  
– type One of OFPMBT_* “DROP”
– rate Rate for this band 1000
– burst_size Size of bursts 0

Example of use:

$ curl -X GET http://localhost:8080/stats/meterconfig/1
{
  "1": [
    {
      "flags": [
        "KBPS"
      ],
      "meter_id": 1,
      "bands": [
        {
          "type": "DROP",
          "rate": 1000,
          "burst_size": 0
        }
      ]
    }
  ]
}
Get meter features stats

Get the meter features stats of the switch specified by Datapath ID in the URI.

Usage:

Method GET
URI /stats/meterfeatures/<dpid>

Response message body:

Attribute Description Example
dpid Datapath ID “1”
max_meter Maximum number of meters 256
band_types Bitmaps of (1 << OFPMBT_*) values supported [“DROP”]
capabilities Bitmaps of “ofp_meter_flags” [“KBPS”, “BURST”, “STATS”]
max_bands Maximum number of bands per meter 16
max_color Maximum color value 8

Example of use:

$ curl -X GET http://localhost:8080/stats/meterfeatures/1
{
  "1": [
    {
      "max_meter": 256,
      "band_types": [
        "DROP"
      ],
      "capabilities": [
        "KBPS",
        "BURST",
        "STATS"
      ],
      "max_bands": 16,
      "max_color": 8
    }
  ]
}

Update the switch stats

Add a flow entry

Add a flow entry to the switch.

Usage:

Method POST
URI /stats/flowentry/add

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
cookie Opaque controller-issued identifier (int) 1 0
cookie_mask Mask used to restrict the cookie bits (int) 1 0
table_id Table ID to put the flow in (int) 0 0
idle_timeout Idle time before discarding (seconds) (int) 30 0
hard_timeout Max time before discarding (seconds) (int) 30 0
priority Priority level of flow entry (int) 11111 0
buffer_id Buffered packet to apply to, or OFP_NO_BUFFER (int) 1 OFP_NO_BUFFER
flags Bitmap of OFPFF_* flags (int) 1 0
match Fields to match (dict) {“in_port”:1} {} #wildcarded
actions Instruction set (list of dict) [{“type”:”OUTPUT”, “port”:2}] [] #DROP

Note

For a description of match and actions, please see Reference: Description of Match and Actions.

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "cookie": 1,
    "cookie_mask": 1,
    "table_id": 0,
    "idle_timeout": 30,
    "hard_timeout": 30,
    "priority": 11111,
    "flags": 1,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"OUTPUT",
            "port": 2
        }
    ]
 }' http://localhost:8080/stats/flowentry/add
$ curl -X POST -d '{
    "dpid": 1,
    "priority": 22222,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"GOTO_TABLE",
            "table_id": 1
        }
    ]
 }' http://localhost:8080/stats/flowentry/add
$ curl -X POST -d '{
    "dpid": 1,
    "priority": 33333,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"WRITE_METADATA",
            "metadata": 1,
            "metadata_mask": 1
        }
    ]
 }' http://localhost:8080/stats/flowentry/add
$ curl -X POST -d '{
    "dpid": 1,
    "priority": 44444,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"METER",
            "meter_id": 1
        }
    ]
 }' http://localhost:8080/stats/flowentry/add

Note

To confirm flow entry registration, please see Get all flows stats or Get flows stats filtered by fields.
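
As a rough illustration of that workflow, the following Python sketch (using the requests package against a controller assumed to be at localhost:8080) posts a flow entry and then reads /stats/flow/<dpid> back to check that it was installed:

import requests

BASE = "http://localhost:8080"

flow = {
    "dpid": 1,
    "priority": 11111,
    "match": {"in_port": 1},
    "actions": [{"type": "OUTPUT", "port": 2}],
}

# POST the entry; a 2xx status means the request was accepted
requests.post(BASE + "/stats/flowentry/add", json=flow).raise_for_status()

# Read the flow table back and look for the priority we just used
flows = requests.get(BASE + "/stats/flow/1").json()
print([f for f in flows["1"] if f.get("priority") == 11111])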

Modify all matching flow entries

Modify all matching flow entries of the switch.

Usage:

Method POST
URI /stats/flowentry/modify

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
cookie Opaque controller-issued identifier (int) 1 0
cookie_mask Mask used to restrict the cookie bits (int) 1 0
table_id Table ID to put the flow in (int) 0 0
idle_timeout Idle time before discarding (seconds) (int) 30 0
hard_timeout Max time before discarding (seconds) (int) 30 0
priority Priority level of flow entry (int) 11111 0
buffer_id Buffered packet to apply to, or OFP_NO_BUFFER (int) 1 OFP_NO_BUFFER
flags Bitmap of OFPFF_* flags (int) 1 0
match Fields to match (dict) {“in_port”:1} {} #wildcarded
actions Instruction set (list of dict) [{“type”:”OUTPUT”, “port”:2}] [] #DROP

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "cookie": 1,
    "cookie_mask": 1,
    "table_id": 0,
    "idle_timeout": 30,
    "hard_timeout": 30,
    "priority": 11111,
    "flags": 1,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"OUTPUT",
            "port": 2
        }
    ]
 }' http://localhost:8080/stats/flowentry/modify
Modify flow entry strictly

Modify a flow entry that strictly matches the given wildcards and priority.

Usage:

Method POST
URI /stats/flowentry/modify_strict

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
cookie Opaque controller-issued identifier (int) 1 0
cookie_mask Mask used to restrict the cookie bits (int) 1 0
table_id Table ID to put the flow in (int) 0 0
idle_timeout Idle time before discarding (seconds) (int) 30 0
hard_timeout Max time before discarding (seconds) (int) 30 0
priority Priority level of flow entry (int) 11111 0
buffer_id Buffered packet to apply to, or OFP_NO_BUFFER (int) 1 OFP_NO_BUFFER
flags Bitmap of OFPFF_* flags (int) 1 0
match Fields to match (dict) {“in_port”:1} {} #wildcarded
actions Instruction set (list of dict) [{“type”:”OUTPUT”, “port”:2}] [] #DROP

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "cookie": 1,
    "cookie_mask": 1,
    "table_id": 0,
    "idle_timeout": 30,
    "hard_timeout": 30,
    "priority": 11111,
    "flags": 1,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"OUTPUT",
            "port": 2
        }
    ]
 }' http://localhost:8080/stats/flowentry/modify_strict
Delete all matching flow entries

Delete all matching flow entries of the switch.

Usage:

Method POST
URI /stats/flowentry/delete

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
cookie Opaque controller-issued identifier (int) 1 0
cookie_mask Mask used to restrict the cookie bits (int) 1 0
table_id Table ID to put the flow in (int) 0 0
idle_timeout Idle time before discarding (seconds) (int) 30 0
hard_timeout Max time before discarding (seconds) (int) 30 0
priority Priority level of flow entry (int) 11111 0
buffer_id Buffered packet to apply to, or OFP_NO_BUFFER (int) 1 OFP_NO_BUFFER
out_port Output port (int) 1 OFPP_ANY
out_group Output group (int) 1 OFPG_ANY
flags Bitmap of OFPFF_* flags (int) 1 0
match Fields to match (dict) {“in_port”:1} {} #wildcarded
actions Instruction set (list of dict) [{“type”:”OUTPUT”, “port”:2}] [] #DROP

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "cookie": 1,
    "cookie_mask": 1,
    "table_id": 0,
    "idle_timeout": 30,
    "hard_timeout": 30,
    "priority": 11111,
    "flags": 1,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"OUTPUT",
            "port": 2
        }
    ]
 }' http://localhost:8080/stats/flowentry/delete
Delete flow entry strictly

Delete a flow entry that strictly matches the given wildcards and priority.

Usage:

Method POST
URI /stats/flowentry/delete_strict

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
cookie Opaque controller-issued identifier (int) 1 0
cookie_mask Mask used to restrict the cookie bits (int) 1 0
table_id Table ID to put the flow in (int) 0 0
idle_timeout Idle time before discarding (seconds) (int) 30 0
hard_timeout Max time before discarding (seconds) (int) 30 0
priority Priority level of flow entry (int) 11111 0
buffer_id Buffered packet to apply to, or OFP_NO_BUFFER (int) 1 OFP_NO_BUFFER
out_port Output port (int) 1 OFPP_ANY
out_group Output group (int) 1 OFPG_ANY
flags Bitmap of OFPFF_* flags (int) 1 0
match Fields to match (dict) {“in_port”:1} {} #wildcarded
actions Instruction set (list of dict) [{“type”:”OUTPUT”, “port”:2}] [] #DROP

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "cookie": 1,
    "cookie_mask": 1,
    "table_id": 0,
    "idle_timeout": 30,
    "hard_timeout": 30,
    "priority": 11111,
    "flags": 1,
    "match":{
        "in_port":1
    },
    "actions":[
        {
            "type":"OUTPUT",
            "port": 2
        }
    ]
 }' http://localhost:8080/stats/flowentry/delete_strict
Delete all flow entries

Delete all flow entries of the switch specified by Datapath ID in the URI.

Usage:

Method DELETE
URI /stats/flowentry/clear/<dpid>

Example of use:

$ curl -X DELETE http://localhost:8080/stats/flowentry/clear/1
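
To wipe the flow tables of every connected switch, one possible sketch (assuming the controller also exposes the switch list at /stats/switches, described earlier, and that the requests package is available):

import requests

BASE = "http://localhost:8080"

# /stats/switches returns the list of connected Datapath IDs
for dpid in requests.get(BASE + "/stats/switches").json():
    r = requests.delete("%s/stats/flowentry/clear/%d" % (BASE, dpid))
    print("cleared dpid", dpid, "->", r.status_code)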
Add a group entry

Add a group entry to the switch.

Usage:

Method POST
URI /stats/groupentry/add

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
type One of OFPGT_* (string) “ALL” “ALL”
group_id Group ID (int) 1 0
buckets struct ofp_bucket    
– weight Relative weight of bucket (Only defined for select groups) 0 0
– watch_port Port whose state affects whether this bucket is live (Only required for fast failover groups) 4294967295 OFPP_ANY
– watch_group Group whose state affects whether this bucket is live (Only required for fast failover groups) 4294967295 OFPG_ANY
– actions 0 or more actions associated with the bucket (list of dict) [{“type”: “OUTPUT”, “port”: 1}] [] #DROP

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "type": "ALL",
    "group_id": 1,
    "buckets": [
        {
            "actions": [
                {
                    "type": "OUTPUT",
                    "port": 1
                }
            ]
        }
    ]
 }' http://localhost:8080/stats/groupentry/add

Note

To confirm group entry registration, please see Get group description stats.
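
Select and fast-failover groups use the bucket fields that the ALL example above omits. A hedged Python sketch (controller assumed at localhost:8080; port numbers are illustrative, and the type string follows the groupdesc output shown above) installs a fast-failover group that falls back from port 1 to port 2:

import requests

group = {
    "dpid": 1,
    "type": "FF",            # fast failover: the first live bucket is used
    "group_id": 2,
    "buckets": [
        # each bucket stays live only while its watch_port is up
        {"watch_port": 1, "actions": [{"type": "OUTPUT", "port": 1}]},
        {"watch_port": 2, "actions": [{"type": "OUTPUT", "port": 2}]},
    ],
}
requests.post("http://localhost:8080/stats/groupentry/add", json=group)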

Modify a group entry

Modify a group entry on the switch.

Usage:

Method POST
URI /stats/groupentry/modify

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
type One of OFPGT_* (string) “ALL” “ALL”
group_id Group ID (int) 1 0
buckets struct ofp_bucket    
– weight Relative weight of bucket (Only defined for select groups) 0 0
– watch_port Port whose state affects whether this bucket is live (Only required for fast failover groups) 4294967295 OFPP_ANY
– watch_group Group whose state affects whether this bucket is live (Only required for fast failover groups) 4294967295 OFPG_ANY
– actions 0 or more actions associated with the bucket (list of dict) [{“type”: “OUTPUT”, “port”: 1}] [] #DROP

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "type": "ALL",
    "group_id": 1,
    "buckets": [
        {
            "actions": [
                {
                    "type": "OUTPUT",
                    "port": 1
                }
            ]
        }
    ]
 }' http://localhost:8080/stats/groupentry/modify
Delete a group entry

Delete a group entry from the switch.

Usage:

Method POST
URI /stats/groupentry/delete

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
group_id Group ID (int) 1 0

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "group_id": 1
 }' http://localhost:8080/stats/groupentry/delete
Modify the behavior of the port

Modify the behavior of the physical port.

Usage:

Method POST
URI /stats/portdesc/modify

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
port_no Port number (int) 1 0
config Bitmap of OFPPC_* flags (int) 1 0
mask Bitmap of OFPPC_* flags to be changed (int) 1 0

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "port_no": 1,
    "config": 1,
    "mask": 1
    }' http://localhost:8080/stats/portdesc/modify

Note

To confirm port description, please see Get ports description.
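
The config and mask values are raw OFPPC_* bitmaps. As a hedged example, the sketch below (bit values taken from the OpenFlow 1.3 ofp_port_config definition; the controller address is assumed) administratively shuts a port down by changing only the OFPPC_PORT_DOWN bit:

import requests

# ofp_port_config bits (OpenFlow 1.3)
OFPPC_PORT_DOWN = 1 << 0      # port is administratively down
OFPPC_NO_RECV = 1 << 2        # drop all packets received by the port
OFPPC_NO_FWD = 1 << 5         # drop packets forwarded to the port
OFPPC_NO_PACKET_IN = 1 << 6   # do not send packet-in messages for the port

body = {
    "dpid": 1,
    "port_no": 1,
    "config": OFPPC_PORT_DOWN,  # desired bit values
    "mask": OFPPC_PORT_DOWN,    # only this bit is changed
}
requests.post("http://localhost:8080/stats/portdesc/modify", json=body)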

Add a meter entry

Add a meter entry to the switch.

Usage:

Method POST
URI /stats/meterentry/add

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
flags Bitmap of OFPMF_* flags (list) [“KBPS”] [] #Empty
meter_id Meter ID (int) 1 0
bands struct ofp_meter_band_header    
– type One of OFPMBT_* (string) “DROP” None
– rate Rate for this band (int) 1000 None
– burst_size Size of bursts (int) 100 None

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "flags": "KBPS",
    "meter_id": 1,
    "bands": [
        {
            "type": "DROP",
            "rate": 1000
        }
    ]
 }' http://localhost:8080/stats/meterentry/add

Note

To confirm meter entry registration, please see Get meter config stats.

Modify a meter entry

Modify a meter entry on the switch.

Usage:

Method POST
URI /stats/meterentry/modify

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
flags Bitmap of OFPMF_* flags (list) [“KBPS”] [] #Empty
meter_id Meter ID (int) 1 0
bands struct ofp_meter_band_header    
– type One of OFPMBT_* (string) “DROP” None
– rate Rate for this band (int) 1000 None
– burst_size Size of bursts (int) 100 None

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "meter_id": 1,
    "flags": "KBPS",
    "bands": [
        {
            "type": "DROP",
            "rate": 1000
        }
    ]
 }' http://localhost:8080/stats/meterentry/modify
Delete a meter entry

Delete a meter entry from the switch.

Usage:

Method POST
URI /stats/meterentry/delete

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
meter_id Meter ID (int) 1 0

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "meter_id": 1
 }' http://localhost:8080/stats/meterentry/delete

Support for experimenter multipart

Send an experimenter message

Send an experimenter message to the switch specified by Datapath ID in the URI.

Usage:

Method POST
URI /stats/experimenter/<dpid>

Request message body:

Attribute Description Example Default
dpid Datapath ID (int) 1 (Mandatory)
experimenter Experimenter ID (int) 1 0
exp_type Experimenter defined (int) 1 0
data_type Data format type (“ascii” or “base64”) “ascii” “ascii”
data Data to send (string) “data” “” #Empty

Example of use:

$ curl -X POST -d '{
    "dpid": 1,
    "experimenter": 1,
    "exp_type": 1,
    "data_type": "ascii",
    "data": "data"
    }' http://localhost:8080/stats/experimenter/1
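
When the payload is binary, data_type "base64" is the safer choice; in that case the data field is expected to carry the base64-encoded bytes. A hedged Python sketch (standard-library base64 plus the requests package; the IDs are placeholders taken from the example above):

import base64
import requests

payload = b"\x00\x01\x02\x03"   # arbitrary binary experimenter data

body = {
    "dpid": 1,
    "experimenter": 1,
    "exp_type": 1,
    "data_type": "base64",
    "data": base64.b64encode(payload).decode("ascii"),
}
requests.post("http://localhost:8080/stats/experimenter/1", json=body)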

Reference: Description of Match and Actions

Description of Match on request messages

List of Match fields (OpenFlow1.0):

Match field Description Example
in_port Input switch port (int) {“in_port”: 7}
dl_src Ethernet source address (string) {“dl_src”: “aa:bb:cc:11:22:33”}
dl_dst Ethernet destination address (string) {“dl_dst”: “aa:bb:cc:11:22:33”}
dl_vlan Input VLAN id (int) {“dl_vlan”: 5}
dl_vlan_pcp Input VLAN priority (int) {“dl_vlan_pcp”: 3, “dl_vlan”: 3}
dl_type Ethernet frame type (int) {“dl_type”: 123}
nw_tos IP ToS (int) {“nw_tos”: 16, “dl_type”: 2048}
nw_proto IP protocol or lower 8 bits of ARP opcode (int) {“nw_proto”: 5, “dl_type”: 2048}
nw_src IPv4 source address (string) {“nw_src”: “192.168.0.1”, “dl_type”: 2048}
nw_dst IPv4 destination address (string) {“nw_dst”: “192.168.0.1/24”, “dl_type”: 2048}
tp_src TCP/UDP source port (int) {“tp_src”: 1, “nw_proto”: 6, “dl_type”: 2048}
tp_dst TCP/UDP destination port (int) {“tp_dst”: 2, “nw_proto”: 6, “dl_type”: 2048}

Note

The IPv4 address field can be described as an IP prefix, as in the following examples.

IPv4 address:

"192.168.0.1"
"192.168.0.2/24"

List of Match fields (OpenFlow1.2 or later):

Match field Description Example
in_port Switch input port (int) {“in_port”: 7}
in_phy_port Switch physical input port (int) {“in_phy_port”: 5, “in_port”: 3}
metadata Metadata passed between tables (int or string) {“metadata”: 12345} or {“metadata”: “0x1212/0xffff”}
dl_dst Ethernet destination address (string) {“dl_dst”: “aa:bb:cc:11:22:33/00:00:00:00:ff:ff”}
dl_src Ethernet source address (string) {“dl_src”: “aa:bb:cc:11:22:33”}
eth_dst Ethernet destination address (string) {“eth_dst”: “aa:bb:cc:11:22:33/00:00:00:00:ff:ff”}
eth_src Ethernet source address (string) {“eth_src”: “aa:bb:cc:11:22:33”}
dl_type Ethernet frame type (int) {“dl_type”: 123}
eth_type Ethernet frame type (int) {“eth_type”: 2048}
dl_vlan VLAN id (int or string) See Example of VLAN ID match field
vlan_vid VLAN id (int or string) See Example of VLAN ID match field
vlan_pcp VLAN priority (int) {“vlan_pcp”: 3, “vlan_vid”: 3}
ip_dscp IP DSCP (6 bits in ToS field) (int) {“ip_dscp”: 3, “eth_type”: 2048}
ip_ecn IP ECN (2 bits in ToS field) (int) {“ip_ecn”: 0, “eth_type”: 34525}
nw_proto IP protocol (int) {“nw_proto”: 5, “eth_type”: 2048}
ip_proto IP protocol (int) {“ip_proto”: 5, “eth_type”: 34525}
tp_src Transport layer source port (int) {“tp_src”: 1, “ip_proto”: 6, “eth_type”: 2048}
tp_dst Transport layer destination port (int) {“tp_dst”: 2, “ip_proto”: 6, “eth_type”: 2048}
nw_src IPv4 source address (string) {“nw_src”: “192.168.0.1”, “eth_type”: 2048}
nw_dst IPv4 destination address (string) {“nw_dst”: “192.168.0.1/24”, “eth_type”: 2048}
ipv4_src IPv4 source address (string) {“ipv4_src”: “192.168.0.1”, “eth_type”: 2048}
ipv4_dst IPv4 destination address (string) {“ipv4_dst”: “192.168.10.10/255.255.255.0”, “eth_type”: 2048}
tcp_src TCP source port (int) {“tcp_src”: 3, “ip_proto”: 6, “eth_type”: 2048}
tcp_dst TCP destination port (int) {“tcp_dst”: 5, “ip_proto”: 6, “eth_type”: 2048}
udp_src UDP source port (int) {“udp_src”: 2, “ip_proto”: 17, “eth_type”: 2048}
udp_dst UDP destination port (int) {“udp_dst”: 6, “ip_proto”: 17, “eth_type”: 2048}
sctp_src SCTP source port (int) {“sctp_src”: 99, “ip_proto”: 132, “eth_type”: 2048}
sctp_dst SCTP destination port (int) {“sctp_dst”: 99, “ip_proto”: 132, “eth_type”: 2048}
icmpv4_type ICMP type (int) {“icmpv4_type”: 5, “ip_proto”: 1, “eth_type”: 2048}
icmpv4_code ICMP code (int) {“icmpv4_code”: 6, “ip_proto”: 1, “eth_type”: 2048}
arp_op ARP opcode (int) {“arp_op”: 3, “eth_type”: 2054}
arp_spa ARP source IPv4 address (string) {“arp_spa”: “192.168.0.11”, “eth_type”: 2054}
arp_tpa ARP target IPv4 address (string) {“arp_tpa”: “192.168.0.44/24”, “eth_type”: 2054}
arp_sha ARP source hardware address (string) {“arp_sha”: “aa:bb:cc:11:22:33”, “eth_type”: 2054}
arp_tha ARP target hardware address (string) {“arp_tha”: “aa:bb:cc:11:22:33/00:00:00:00:ff:ff”, “eth_type”: 2054}
ipv6_src IPv6 source address (string) {“ipv6_src”: “2001::aaaa:bbbb:cccc:1111”, “eth_type”: 34525}
ipv6_dst IPv6 destination address (string) {“ipv6_dst”: “2001::ffff:cccc:bbbb:1111/64”, “eth_type”: 34525}
ipv6_flabel IPv6 Flow Label (int) {“ipv6_flabel”: 2, “eth_type”: 34525}
icmpv6_type ICMPv6 type (int) {“icmpv6_type”: 3, “ip_proto”: 58, “eth_type”: 34525}
icmpv6_code ICMPv6 code (int) {“icmpv6_code”: 4, “ip_proto”: 58, “eth_type”: 34525}
ipv6_nd_target Target address for Neighbor Discovery (string) {“ipv6_nd_target”: “2001::ffff:cccc:bbbb:1111”, “icmpv6_type”: 135, “ip_proto”: 58, “eth_type”: 34525}
ipv6_nd_sll Source link-layer for Neighbor Discovery (string) {“ipv6_nd_sll”: “aa:bb:cc:11:22:33”, “icmpv6_type”: 135, “ip_proto”: 58, “eth_type”: 34525}
ipv6_nd_tll Target link-layer for Neighbor Discovery (string) {“ipv6_nd_tll”: “aa:bb:cc:11:22:33”, “icmpv6_type”: 136, “ip_proto”: 58, “eth_type”: 34525}
mpls_label MPLS label (int) {“mpls_label”: 3, “eth_type”: 34888}
mpls_tc MPLS Traffic Class (int) {“mpls_tc”: 2, “eth_type”: 34888}
mpls_bos MPLS BoS bit (int) {“mpls_bos”: 1, “eth_type”: 34888}
pbb_isid PBB I-SID (int or string) {“pbb_isid”: 5, “eth_type”: 35047} or {“pbb_isid”: “0x05/0xff”, “eth_type”: 35047}
tunnel_id Logical Port Metadata (int or string) {“tunnel_id”: 7} or {“tunnel_id”: “0x07/0xff”}
ipv6_exthdr IPv6 Extension Header pseudo-field (int or string) {“ipv6_exthdr”: 3, “eth_type”: 34525} or {“ipv6_exthdr”: “0x40/0x1F0”, “eth_type”: 34525}

Note

Some fields can be described with a mask, as in the following examples.

Ethernet address:

"aa:bb:cc:11:22:33"
"aa:bb:cc:11:22:33/00:00:00:00:ff:ff"

IPv4 address:

"192.168.0.11"
"192.168.0.44/24"
"192.168.10.10/255.255.255.0"

IPv6 address:

"2001::ffff:cccc:bbbb:1111"
"2001::ffff:cccc:bbbb:2222/64"
"2001::ffff:cccc:bbbb:2222/ffff:ffff:ffff:ffff::0"

Metadata:

"0x1212121212121212"
"0x3434343434343434/0x01010101010101010"
Example of VLAN ID match field

The following is available in OpenFlow1.0 or later.

  • To match only packets with a VLAN tag and a VLAN ID equal to 5:

    $ curl -X POST -d '{
        "dpid": 1,
        "match":{
            "dl_vlan": 5
        },
        "actions":[
            {
                "type":"OUTPUT",
                "port": 1
            }
        ]
     }' http://localhost:8080/stats/flowentry/add
    

Note

When the “dl_vlan” field is given as a decimal int value, the OFPVID_PRESENT(0x1000) bit is applied automatically.

The following is available in OpenFlow1.2 or later.

  • To match only packets without a VLAN tag:

    $ curl -X POST -d '{
        "dpid": 1,
        "match":{
            "dl_vlan": "0x0000"   # Describe OFPVID_NONE(0x0000)
        },
        "actions":[
            {
                "type":"OUTPUT",
                "port": 1
            }
        ]
     }' http://localhost:8080/stats/flowentry/add
    
  • To match only packets with a VLAN tag regardless of its value:

    $ curl -X POST -d '{
        "dpid": 1,
        "match":{
            "dl_vlan": "0x1000/0x1000"   # Describe OFPVID_PRESENT(0x1000/0x1000)
        },
        "actions":[
            {
                "type":"OUTPUT",
                "port": 1
            }
        ]
     }' http://localhost:8080/stats/flowentry/add
    
  • To match only packets with a VLAN tag and a VLAN ID equal to 5:

    $ curl -X POST -d '{
        "dpid": 1,
        "match":{
            "dl_vlan": "0x1005"   # Describe sum of VLAN-ID(e.g. 5) | OFPVID_PRESENT(0x1000)
        },
        "actions":[
            {
                "type":"OUTPUT",
                "port": 1
            }
        ]
     }' http://localhost:8080/stats/flowentry/add
    

Note

When using the OpenFlow1.2 or later descriptions, give the “dl_vlan” field as a hexadecimal string value; the OFPVID_PRESENT(0x1000) bit is NOT applied automatically.
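
In other words, for OpenFlow1.2 or later you build the value yourself. A tiny Python sketch of that arithmetic (illustrative only):

OFPVID_PRESENT = 0x1000                  # "a VLAN tag is present" bit

vlan_id = 5
dl_vlan = hex(OFPVID_PRESENT | vlan_id)  # -> '0x1005'

match = {"dl_vlan": dl_vlan}             # the value used in the examples above
print(match)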

Description of Actions on request messages

List of Actions (OpenFlow1.0):

Actions Description Example
OUTPUT Output packet from “port” {“type”: “OUTPUT”, “port”: 3}
SET_VLAN_VID Set the 802.1Q VLAN ID using “vlan_vid” {“type”: “SET_VLAN_VID”, “vlan_vid”: 5}
SET_VLAN_PCP Set the 802.1Q priority using “vlan_pcp” {“type”: “SET_VLAN_PCP”, “vlan_pcp”: 3}
STRIP_VLAN Strip the 802.1Q header {“type”: “STRIP_VLAN”}
SET_DL_SRC Set ethernet source address using “dl_src” {“type”: “SET_DL_SRC”, “dl_src”: “aa:bb:cc:11:22:33”}
SET_DL_DST Set ethernet destination address using “dl_dst” {“type”: “SET_DL_DST”, “dl_dst”: “aa:bb:cc:11:22:33”}
SET_NW_SRC Set IP source address using “nw_src” {“type”: “SET_NW_SRC”, “nw_src”: “10.0.0.1”}
SET_NW_DST Set IP destination address using “nw_dst” {“type”: “SET_NW_DST”, “nw_dst”: “10.0.0.1”}
SET_NW_TOS Set IP ToS (DSCP field, 6 bits) using “nw_tos” {“type”: “SET_NW_TOS”, “nw_tos”: 184}
SET_TP_SRC Set TCP/UDP source port using “tp_src” {“type”: “SET_TP_SRC”, “tp_src”: 8080}
SET_TP_DST Set TCP/UDP destination port using “tp_dst” {“type”: “SET_TP_DST”, “tp_dst”: 8080}
ENQUEUE Output to queue with “queue_id” attached to “port” {“type”: “ENQUEUE”, “queue_id”: 3, “port”: 1}

List of Actions (OpenFlow1.2 or later):

Actions Description Example
OUTPUT Output packet from “port” {“type”: “OUTPUT”, “port”: 3}
COPY_TTL_OUT Copy TTL outwards {“type”: “COPY_TTL_OUT”}
COPY_TTL_IN Copy TTL inwards {“type”: “COPY_TTL_IN”}
SET_MPLS_TTL Set MPLS TTL using “mpls_ttl” {“type”: “SET_MPLS_TTL”, “mpls_ttl”: 64}
DEC_MPLS_TTL Decrement MPLS TTL {“type”: “DEC_MPLS_TTL”}
PUSH_VLAN Push a new VLAN tag with “ethertype” {“type”: “PUSH_VLAN”, “ethertype”: 33024}
POP_VLAN Pop the outer VLAN tag {“type”: “POP_VLAN”}
PUSH_MPLS Push a new MPLS tag with “ethertype” {“type”: “PUSH_MPLS”, “ethertype”: 34887}
POP_MPLS Pop the outer MPLS tag with “ethertype” {“type”: “POP_MPLS”, “ethertype”: 2054}
SET_QUEUE Set queue id using “queue_id” when outputting to a port {“type”: “SET_QUEUE”, “queue_id”: 7}
GROUP Apply group identified by “group_id” {“type”: “GROUP”, “group_id”: 5}
SET_NW_TTL Set IP TTL using “nw_ttl” {“type”: “SET_NW_TTL”, “nw_ttl”: 64}
DEC_NW_TTL Decrement IP TTL {“type”: “DEC_NW_TTL”}
SET_FIELD Set a “field” using “value” (The set of keywords available for “field” is the same as match field) See Example of set-field action
PUSH_PBB Push a new PBB service tag with “ethertype” {“type”: “PUSH_PBB”, “ethertype”: 35047}
POP_PBB Pop the outer PBB service tag {“type”: “POP_PBB”}
GOTO_TABLE (Instruction) Set up the next table identified by “table_id” {“type”: “GOTO_TABLE”, “table_id”: 8}
WRITE_METADATA (Instruction) Set up the metadata field using “metadata” and “metadata_mask” {“type”: “WRITE_METADATA”, “metadata”: 0x3, “metadata_mask”: 0x3}
METER (Instruction) Apply meter identified by “meter_id” {“type”: “METER”, “meter_id”: 3}
WRITE_ACTIONS (Instruction) Write the action(s) onto the datapath action set {“type”: “WRITE_ACTIONS”, “actions”: [{“type”: “POP_VLAN”}, {“type”: “OUTPUT”, “port”: 2}]}
CLEAR_ACTIONS (Instruction) Clear all actions from the datapath action set {“type”: “CLEAR_ACTIONS”}
Example of set-field action

To set a VLAN ID on a non-VLAN-tagged frame:

"actions":[
    {
        "type": "PUSH_VLAN",     # Push a new VLAN tag if a input frame is non-VLAN-tagged
        "ethertype": 33024       # Ethertype 0x8100(=33024): IEEE 802.1Q VLAN-tagged frame
    },
    {
        "type": "SET_FIELD",
        "field": "vlan_vid",     # Set VLAN ID
        "value": 4102            # Describe sum of vlan_id(e.g. 6) | OFPVID_PRESENT(0x1000=4096)
    },
    {
        "type": "OUTPUT",
        "port": 2
    }
]
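
To put that action list into context, here is a hedged Python sketch that wraps it in a complete /stats/flowentry/add request (uses the requests package; the priority and match are placeholders, and 4102 is simply OFPVID_PRESENT(4096) | 6):

import requests

OFPVID_PRESENT = 0x1000                 # 4096

flow = {
    "dpid": 1,
    "priority": 10,
    "match": {"in_port": 1},            # placeholder match
    "actions": [
        {"type": "PUSH_VLAN", "ethertype": 33024},    # 0x8100, IEEE 802.1Q
        {"type": "SET_FIELD", "field": "vlan_vid",
         "value": OFPVID_PRESENT | 6},                # = 4102
        {"type": "OUTPUT", "port": 2},
    ],
}
requests.post("http://localhost:8080/stats/flowentry/add", json=flow)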