diff mbox

[ovs-dev,v2] Add Docker integration for OVN.

Message ID 1445291545-16352-1-git-send-email-gshetty@nicira.com
State Changes Requested
Headers show

Commit Message

Gurucharan Shetty Oct. 19, 2015, 9:52 p.m. UTC
Docker removed the 'experimental' tag from its multi-host
networking constructs last week and did a code freeze for
Docker 1.9.

This commit adds two drivers for OVN integration
with Docker. The first driver is a pure overlay driver
that does not need OpenStack integration. The second driver
needs OVN+OpenStack.

The description of the Docker API exists here:
https://github.com/docker/libnetwork/blob/master/docs/remote.md

Signed-off-by: Gurucharan Shetty <gshetty@nicira.com>
---
v1->v2:
Some style adjustments to error messages.
Consolidation of some duplicate code to function: get_logical_port_addresses
---
 INSTALL.Docker.md                        | 301 ++++++++++----
 ovn/utilities/automake.mk                |   8 +
 ovn/utilities/ovn-docker-overlay-driver  | 442 ++++++++++++++++++++
 ovn/utilities/ovn-docker-underlay-driver | 675 +++++++++++++++++++++++++++++++
 rhel/openvswitch-fedora.spec.in          |   2 +
 5 files changed, 1358 insertions(+), 70 deletions(-)
 create mode 100755 ovn/utilities/ovn-docker-overlay-driver
 create mode 100755 ovn/utilities/ovn-docker-underlay-driver
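For readers skimming the patch, two small conventions shared by both drivers can be sketched in isolation. The helper names below are illustrative only; the patch inlines this logic in its NetworkDriver.CreateEndpoint and NetworkDriver.Join handlers.

```python
import random

def veth_pair_names(endpoint_id):
    # The host-side veth takes the first 15 characters of the libnetwork
    # EndpointID (to fit the kernel's 15-character interface-name limit);
    # the container-side peer takes the first 13 characters plus "_c".
    return endpoint_id[0:15], endpoint_id[0:13] + "_c"

def fallback_mac():
    # When libnetwork supplies no MacAddress, the driver generates a
    # locally administered unicast address in the 02:xx:xx:xx:xx:xx range.
    return "02:%02x:%02x:%02x:%02x:%02x" % tuple(
        random.randint(0, 255) for _ in range(5))

outside, inside = veth_pair_names("0123456789abcdef0123456789abcdef")
print(outside, inside)  # 0123456789abcde 0123456789abc_c
```

The same pair of names is later used to attach the host-side end to br-int and hand the container-side end back to libnetwork.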

Comments

Murali R Oct. 19, 2015, 10:48 p.m. UTC | #1
Thanks Gurucharan for this driver. I will try this pure overlay driver in
my next iteration in November, adding L3 without Neutron. For Docker 1.8,
I am currently leveraging the Neutron API for most network management,
plus some Flask-based code for the API and some shell scripts, targeting
next week (not the OpenStack summit). The docker network commands look
promising. Can't wait to try this out.


-Murali

On Mon, Oct 19, 2015 at 2:52 PM, Gurucharan Shetty <shettyg@nicira.com>
wrote:

> Docker removed the 'experimental' tag from its multi-host
> networking constructs last week and did a code freeze for
> Docker 1.9.
>
> This commit adds two drivers for OVN integration
> with Docker. The first driver is a pure overlay driver
> that does not need OpenStack integration. The second driver
> needs OVN+OpenStack.
>
> The description of the Docker API exists here:
> https://github.com/docker/libnetwork/blob/master/docs/remote.md
>
> Signed-off-by: Gurucharan Shetty <gshetty@nicira.com>
> ---
> v1->v2:
> Some style adjustments to error messages.
> Consolidation of some duplicate code to function:
> get_logical_port_addresses
> ---
>  INSTALL.Docker.md                        | 301 ++++++++++----
>  ovn/utilities/automake.mk                |   8 +
>  ovn/utilities/ovn-docker-overlay-driver  | 442 ++++++++++++++++++++
>  ovn/utilities/ovn-docker-underlay-driver | 675 +++++++++++++++++++++++++++++++
>  rhel/openvswitch-fedora.spec.in          |   2 +
>  5 files changed, 1358 insertions(+), 70 deletions(-)
>  create mode 100755 ovn/utilities/ovn-docker-overlay-driver
>  create mode 100755 ovn/utilities/ovn-docker-underlay-driver
>
> diff --git a/INSTALL.Docker.md b/INSTALL.Docker.md
> index 9e14043..d523ecd 100644
> --- a/INSTALL.Docker.md
> +++ b/INSTALL.Docker.md
> @@ -1,109 +1,270 @@
>  How to Use Open vSwitch with Docker
>  ====================================
>
> -This document describes how to use Open vSwitch with Docker 1.2.0 or
> +This document describes how to use Open vSwitch with Docker 1.9.0 or
>  later.  This document assumes that you installed Open vSwitch by following
>  [INSTALL.md] or by using the distribution packages such as .deb or .rpm.
>  Consult www.docker.com for instructions on how to install Docker.
>
> -Limitations
> ------------
> -Currently there is no native integration of Open vSwitch in Docker, i.e.,
> -one cannot use the Docker client to automatically add a container's
> -network interface to an Open vSwitch bridge during the creation of the
> -container.  This document describes addition of new network interfaces to an
> -already created container and in turn attaching that interface as a port to an
> -Open vSwitch bridge.  If and when there is a native integration of Open vSwitch
> -with Docker, the ovs-docker utility described in this document is expected to
> -be retired.
> +Docker 1.9.0 comes with support for multi-host networking.  Integration
> +of Docker networking and Open vSwitch can be achieved via the Open Virtual
> +Network (OVN).
> +
>
>  Setup
> ------
> -* Create your container, e.g.:
> +=====
> +
> +For multi-host networking with OVN and Docker, Docker has to be started
> +with a distributed key-value store.  For example, if you decide to use consul
> +as your distributed key-value store, and your host IP address is $HOST_IP,
> +start your Docker daemon with:
> +
> +```
> +docker daemon --cluster-store=consul://127.0.0.1:8500 --cluster-advertise=$HOST_IP:0
> +```
> +
> +OVN provides network virtualization to containers.  OVN's integration with
> +Docker currently works in two modes - the "underlay" mode or the "overlay"
> +mode.
> +
> +In the "underlay" mode, OVN requires an OpenStack setup to provide container
> +networking.  In this mode, one can create logical networks and can have
> +containers running inside VMs, standalone VMs (without having any containers
> +running inside them) and physical machines connected to the same logical
> +network.  This is a multi-tenant, multi-host solution.
> +
> +In the "overlay" mode, OVN can create a logical network amongst containers
> +running on multiple hosts.  This is a single-tenant (extendable to
> +multi-tenants depending on the security characteristics of the workloads),
> +multi-host solution.  In this mode, you do not need a pre-created OpenStack
> +setup.
> +
> +For both the modes to work, a user has to install and start Open vSwitch in
> +each VM/host where he plans to run his containers.
> +
> +
> +The "overlay" mode
> +==================
> +
> +OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.
> +
> +* Start the central components.
> +
> +OVN architecture has a central component which stores your networking intent
> +in a database.  So on any machine with an IP address of $CENTRAL_IP, where you
> +have installed and started Open vSwitch, you will need to start some
> +central components.
> +
> +Begin by making ovsdb-server listen on a TCP port by running:
> +
> +```
> +ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640
> +```
> +
> +Start the ovn-northd daemon.  This daemon translates networking intent from
> +Docker stored in the OVN_Northbound database to logical flows in the
> +OVN_Southbound database.
> +
> +```
> +/usr/share/openvswitch/scripts/ovn-ctl start_northd
> +```
> +
> +* One time setup.
> +
> +On each host, where you plan to spawn your containers, you will need to
> +run the following commands once.
> +
> +$LOCAL_IP in the below command is the IP address via which other hosts
> +can reach this host.  This acts as your local tunnel endpoint.
> +
> +$ENCAP_TYPE is the type of tunnel that you would like to use for overlay
> +networking.  The options are "geneve" or "stt".
> +
> +```
> +ovs-vsctl set Open_vSwitch . \
> +    external_ids:ovn-remote="tcp:$CENTRAL_IP:6640" \
> +    external_ids:ovn-encap-ip=$LOCAL_IP \
> +    external_ids:ovn-encap-type="$ENCAP_TYPE"
> +```
> +
> +And finally, start the ovn-controller.
> +
> +```
> +/usr/share/openvswitch/scripts/ovn-ctl start_controller
> +```
> +
> +* Start the Open vSwitch network driver.
> +
> +By default Docker uses Linux bridge for networking.  But it has support
> +for external drivers.  To use Open vSwitch instead of the Linux bridge,
> +you will need to start the Open vSwitch driver.
> +
> +The Open vSwitch driver uses Python's flask module to listen to
> +Docker's networking API calls.  So, if your host does not have Python's
> +flask module, install it with:
> +
> +```
> +easy_install -U pip
> +pip install Flask
> +```
> +
> +Start the Open vSwitch driver on every host where you plan to create your
> +containers.
> +
> +```
> +ovn-docker-overlay-driver --detach
> +```
> +
> +Docker has inbuilt primitives that closely match OVN's logical switches
> +and logical port concepts.  Please consult Docker's documentation for
> +all the possible commands.  Here are some examples.
> +
> +* Create your logical switch.
> +
> +To create a logical switch with name 'foo' on subnet '192.168.1.0/24', run:
> +
> +```
> +NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
> +```
> +
> +* List your logical switches.
> +
> +```
> +docker network ls
> +```
> +
> +You can also look at this logical switch in OVN's northbound database by
> +running the following command.
> +
> +```
> +ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lswitch-list
> +```
> +
> +* Docker creates your logical port and attaches it to the logical network
> +in a single step.
> +
> +For example, to attach a logical port to network 'foo' inside the container
> +'busybox', run:
> +
> +```
> +docker run -itd --net=foo --name=busybox busybox
> +```
> +
> +* List all your logical ports.
> +
> +Docker currently does not have a CLI command to list all your logical ports,
> +but you can look at them in the OVN database by running:
>
>  ```
> -% docker run -d ubuntu:14.04 /bin/sh -c \
> -"while true; do echo hello world; sleep 1; done"
> +ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lport-list $NID
>  ```
>
> -The above command creates a container with one network interface 'eth0'
> -and attaches it to a Linux bridge called 'docker0'.  'eth0' by default
> -gets an IP address in the 172.17.0.0/16 space.  Docker sets up iptables
> -NAT rules to let this interface talk to the outside world.  Also since
> -it is connected to 'docker0' bridge, it can talk to all other containers
> -connected to the same bridge.  If you prefer that no network interface be
> -created by default, you can start your container with
> -the option '--net=none', e,g.:
> +* You can also create a logical port and attach it to a running container.
>
>  ```
> -% docker run -d --net=none ubuntu:14.04 /bin/sh -c \
> -"while true; do echo hello world; sleep 1; done"
> +docker network create -d openvswitch --subnet=192.168.2.0/24 bar
> +docker network connect bar busybox
>  ```
>
> -The above commands will return a container id.  You will need to pass this
> -value to the utility 'ovs-docker' to create network interfaces attached to an
> -Open vSwitch bridge as a port.  This document will reference this value
> -as $CONTAINER_ID in the next steps.
> +You can delete your logical port and detach it from a running container by
> +running:
> +
> +```
> +docker network disconnect bar busybox
> +```
>
> -* Add a new network interface to the container and attach it to an Open vSwitch
> -  bridge.  e.g.:
> +* You can delete your logical switch by running:
>
> -`% ovs-docker add-port br-int eth1 $CONTAINER_ID`
> +```
> +docker network rm bar
> +```
>
> -The above command will create a network interface 'eth1' inside the container
> -and then attaches it to the Open vSwitch bridge 'br-int'.  This is done by
> -creating a veth pair.  One end of the interface becomes 'eth1' inside the
> -container and the other end attaches to 'br-int'.
>
> -The script also lets one to add IP address, MAC address, Gateway address and
> -MTU for the interface.  e.g.:
> +The "underlay" mode
> +===================
> +
> +This mode requires that you have an OpenStack setup pre-installed with OVN
> +providing the underlay networking.
> +
> +* One time setup.
> +
> +An OpenStack tenant creates a VM with a single network interface (or multiple)
> +that belongs to their management logical networks.  The tenant needs to fetch
> +the port-id associated with the interface via which he plans to send the
> +container traffic inside the spawned VM.  This can be obtained by running the
> +below command to fetch the 'id' associated with the VM:
>
>  ```
> -% ovs-docker add-port br-int eth1 $CONTAINER_ID --ipaddress=192.168.1.2/24 \
> ---macaddress=a2:c3:0d:49:7f:f8 --gateway=192.168.1.1 --mtu=1450
> +nova list
>  ```
>
> -* A previously added network interface can be deleted.  e.g.:
> +and then by running:
>
> -`% ovs-docker del-port br-int eth1 $CONTAINER_ID`
> +```
> +neutron port-list --device_id=$id
> +```
>
> -All the previously added Open vSwitch interfaces inside a container can be
> -deleted.  e.g.:
> +Inside the VM, download the OpenStack RC file that contains the tenant
> +information (henceforth referred to as 'openrc.sh').  Edit the file and add
> +the previously obtained port-id information to the file by appending the
> +following line: export OS_VIF_ID=$port_id.  After this edit, the file will
> +look something like:
>
> -`% ovs-docker del-ports br-int $CONTAINER_ID`
> +```
> +#!/bin/bash
> +export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
> +export OS_TENANT_ID=fab106b215d943c3bad519492278443d
> +export OS_TENANT_NAME="demo"
> +export OS_USERNAME="demo"
> +export OS_VIF_ID=e798c371-85f4-4f2d-ad65-d09dd1d3c1c9
> +```
> +
> +* Create the Open vSwitch bridge.
> +
> +If your VM has one ethernet interface (e.g.: 'eth0'), you will need to add
> +that device as a port to an Open vSwitch bridge 'breth0' and move its IP
> +address and route related information to that bridge. (If it has multiple
> +network interfaces, you will need to create and attach an Open vSwitch bridge
> +for the interface via which you plan to send your container traffic.)
> +
> +If you use DHCP to obtain an IP address, then you should kill the DHCP client
> +that was listening on the physical Ethernet interface (e.g. eth0) and start
> +one listening on the Open vSwitch bridge (e.g. breth0).
>
> -It is important that the same $CONTAINER_ID be passed to both add-port
> -and del-port[s] commands.
> +Depending on your VM, you can make the above step persistent across reboots.
> +For example, if your VM is Debian/Ubuntu, you can read
> +[openvswitch-switch.README.Debian].
> +If your VM is RHEL based, you can read [README.RHEL].
>
> -* More network control.
>
> -Once a container interface is added to an Open vSwitch bridge, one can
> -set VLANs, create Tunnels, add OpenFlow rules etc for more network control.
> -Many times, it is important that the underlying network infrastructure is
> -plumbed (or programmed) before the application inside the container starts.
> -To handle this, one can create a micro-container, attach an Open vSwitch
> -interface to that container, set the UUIDS in OVSDB as mentioned in
> -[IntegrationGuide.md] and then program the bridge to handle traffic coming out
> -of that container. Now, you can start the main container asking it
> -to share the network of the micro-container. When your application starts,
> -the underlying network infrastructure would be ready. e.g.:
> +* Start the Open vSwitch network driver.
>
> +The Open vSwitch driver uses Python's flask module to listen to
> +Docker's networking API calls.  The driver also uses OpenStack's
> +python-neutronclient libraries.  So, if your host does not have Python's
> +flask module or python-neutronclient, install them with:
> +
> +```
> +easy_install -U pip
> +pip install python-neutronclient
> +pip install Flask
>  ```
> -% docker run -d --net=container:$MICROCONTAINER_ID ubuntu:14.04 /bin/sh -c \
> -"while true; do echo hello world; sleep 1; done"
> +
> +Source the openrc file. e.g.:
> +```
> +source openrc.sh
>  ```
>
> -Please read the man pages of ovs-vsctl, ovs-ofctl, ovs-vswitchd,
> -ovsdb-server and ovs-vswitchd.conf.db etc for more details about Open vSwitch.
> +Start the network driver and provide your OpenStack tenant password
> +when prompted.
>
> -Docker networking is quite flexible and can be used in multiple ways.  For more
> -information, please read:
> -https://docs.docker.com/articles/networking
> +```
> +ovn-docker-underlay-driver --bridge breth0 --detach
> +```
>
> -Bug Reporting
> --------------
> +From here-on you can use the same Docker commands as described in the
> +section 'The "overlay" mode'.
>
> -Please report problems to bugs@openvswitch.org.
> +Please read 'man ovn-architecture' to understand OVN's architecture in
> +detail.
>
> -[INSTALL.md]:INSTALL.md
> -[IntegrationGuide.md]:IntegrationGuide.md
> +[INSTALL.md]: INSTALL.md
> +[openvswitch-switch.README.Debian]: debian/openvswitch-switch.README.Debian
> +[README.RHEL]: rhel/README.RHEL
> diff --git a/ovn/utilities/automake.mk b/ovn/utilities/automake.mk
> index b247a54..50fb4e7 100644
> --- a/ovn/utilities/automake.mk
> +++ b/ovn/utilities/automake.mk
> @@ -8,9 +8,16 @@ man_MANS += \
>
>  MAN_ROOTS += ovn/utilities/ovn-sbctl.8.in
>
> +# Docker drivers
> +bin_SCRIPTS += \
> +    ovn/utilities/ovn-docker-overlay-driver \
> +    ovn/utilities/ovn-docker-underlay-driver
> +
>  EXTRA_DIST += \
>      ovn/utilities/ovn-ctl \
>      ovn/utilities/ovn-ctl.8.xml \
> +    ovn/utilities/ovn-docker-overlay-driver \
> +    ovn/utilities/ovn-docker-underlay-driver \
>      ovn/utilities/ovn-nbctl.8.xml
>
>  DISTCLEANFILES += \
> @@ -27,3 +34,4 @@ ovn_utilities_ovn_nbctl_LDADD = ovn/lib/libovn.la ovsdb/libovsdb.la lib/libopenv
>  bin_PROGRAMS += ovn/utilities/ovn-sbctl
>  ovn_utilities_ovn_sbctl_SOURCES = ovn/utilities/ovn-sbctl.c
>  ovn_utilities_ovn_sbctl_LDADD = ovn/lib/libovn.la ovsdb/libovsdb.la lib/libopenvswitch.la
> +
> diff --git a/ovn/utilities/ovn-docker-overlay-driver b/ovn/utilities/ovn-docker-overlay-driver
> new file mode 100755
> index 0000000..71eac93
> --- /dev/null
> +++ b/ovn/utilities/ovn-docker-overlay-driver
> @@ -0,0 +1,442 @@
> +#! /usr/bin/python
> +# Copyright (C) 2015 Nicira, Inc.
> +#
> +# Licensed under the Apache License, Version 2.0 (the "License");
> +# you may not use this file except in compliance with the License.
> +# You may obtain a copy of the License at:
> +#
> +#     http://www.apache.org/licenses/LICENSE-2.0
> +#
> +# Unless required by applicable law or agreed to in writing, software
> +# distributed under the License is distributed on an "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> +# See the License for the specific language governing permissions and
> +# limitations under the License.
> +
> +import argparse
> +import ast
> +import atexit
> +import json
> +import os
> +import random
> +import re
> +import shlex
> +import subprocess
> +import sys
> +
> +import ovs.dirs
> +import ovs.util
> +import ovs.daemon
> +import ovs.vlog
> +
> +from flask import Flask, jsonify
> +from flask import request, abort
> +
> +app = Flask(__name__)
> +vlog = ovs.vlog.Vlog("ovn-docker-overlay-driver")
> +
> +OVN_BRIDGE = "br-int"
> +OVN_REMOTE = ""
> +PLUGIN_DIR = "/etc/docker/plugins"
> +PLUGIN_FILE = "/etc/docker/plugins/openvswitch.spec"
> +
> +
> +def call_popen(cmd):
> +    child = subprocess.Popen(cmd, stdout=subprocess.PIPE)
> +    output = child.communicate()
> +    if child.returncode:
> +        raise RuntimeError("Fatal error executing %s" % (cmd))
> +    if len(output) == 0 or output[0] is None:
> +        output = ""
> +    else:
> +        output = output[0].strip()
> +    return output
> +
> +
> +def call_prog(prog, args_list):
> +    cmd = [prog, "--timeout=5", "-vconsole:off"] + args_list
> +    return call_popen(cmd)
> +
> +
> +def ovs_vsctl(args):
> +    return call_prog("ovs-vsctl", shlex.split(args))
> +
> +
> +def ovn_nbctl(args):
> +    args_list = shlex.split(args)
> +    database_option = "%s=%s" % ("--db", OVN_REMOTE)
> +    args_list.insert(0, database_option)
> +    return call_prog("ovn-nbctl", args_list)
> +
> +
> +def cleanup():
> +    if os.path.isfile(PLUGIN_FILE):
> +        os.remove(PLUGIN_FILE)
> +
> +
> +def ovn_init_overlay():
> +    br_list = ovs_vsctl("list-br").split()
> +    if OVN_BRIDGE not in br_list:
> +        ovs_vsctl("-- --may-exist add-br %s "
> +                  "-- br-set-external-id %s bridge-id %s "
> +                  "-- set bridge %s other-config:disable-in-band=true "
> +                  "-- set bridge %s fail-mode=secure"
> +                  % (OVN_BRIDGE, OVN_BRIDGE, OVN_BRIDGE, OVN_BRIDGE,
> +                     OVN_BRIDGE))
> +
> +    global OVN_REMOTE
> +    OVN_REMOTE = ovs_vsctl("get Open_vSwitch . "
> +                           "external_ids:ovn-remote").strip('"')
> +    if not OVN_REMOTE:
> +        sys.exit("OVN central database's ip address not set")
> +
> +    ovs_vsctl("set open_vswitch . external_ids:ovn-bridge=%s "
> +              % OVN_BRIDGE)
> +
> +
> +def prepare():
> +    parser = argparse.ArgumentParser()
> +
> +    ovs.vlog.add_args(parser)
> +    ovs.daemon.add_args(parser)
> +    args = parser.parse_args()
> +    ovs.vlog.handle_args(args)
> +    ovs.daemon.handle_args(args)
> +    ovn_init_overlay()
> +
> +    if not os.path.isdir(PLUGIN_DIR):
> +        os.makedirs(PLUGIN_DIR)
> +
> +    ovs.daemon.daemonize()
> +    try:
> +        fo = open(PLUGIN_FILE, "w")
> +        fo.write("tcp://0.0.0.0:5000")
> +        fo.close()
> +    except Exception as e:
> +        ovs.util.ovs_fatal(0, "Failed to write to spec file (%s)" % str(e),
> +                           vlog)
> +
> +    atexit.register(cleanup)
> +
> +
> +@app.route('/Plugin.Activate', methods=['POST'])
> +def plugin_activate():
> +    return jsonify({"Implements": ["NetworkDriver"]})
> +
> +
> +@app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
> +def get_capability():
> +    return jsonify({"Scope": "global"})
> +
> +
> +@app.route('/NetworkDriver.DiscoverNew', methods=['POST'])
> +def new_discovery():
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.DiscoverDelete', methods=['POST'])
> +def delete_discovery():
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
> +def create_network():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    # NetworkID will have docker generated network uuid and it
> +    # becomes 'name' in an OVN Logical switch record.
> +    network = data.get("NetworkID", "")
> +    if not network:
> +        abort(400)
> +
> +    # Limit subnet handling to ipv4 till ipv6 usecase is clear.
> +    ipv4_data = data.get("IPv4Data", "")
> +    if not ipv4_data:
> +        error = "create_network: No ipv4 subnet provided"
> +        return jsonify({'Err': error})
> +
> +    subnet = ipv4_data[0].get("Pool", "")
> +    if not subnet:
> +        error = "create_network: no subnet in ipv4 data from libnetwork"
> +        return jsonify({'Err': error})
> +
> +    gateway_ip = ipv4_data[0].get("Gateway", "").rsplit('/', 1)[0]
> +    if not gateway_ip:
> +        error = "create_network: no gateway in ipv4 data from libnetwork"
> +        return jsonify({'Err': error})
> +
> +    try:
> +        ovn_nbctl("lswitch-add %s -- set Logical_Switch %s "
> +                  "external_ids:subnet=%s external_ids:gateway_ip=%s"
> +                  % (network, network, subnet, gateway_ip))
> +    except Exception as e:
> +        error = "create_network: lswitch-add %s" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.DeleteNetwork', methods=['POST'])
> +def delete_network():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    try:
> +        ovn_nbctl("lswitch-del %s" % (nid))
> +    except Exception as e:
> +        error = "delete_network: lswitch-del %s" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.CreateEndpoint', methods=['POST'])
> +def create_endpoint():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    interface = data.get("Interface", "")
> +    if not interface:
> +        error = "create_endpoint: no interfaces structure supplied by " \
> +                "libnetwork"
> +        return jsonify({'Err': error})
> +
> +    ip_address_and_mask = interface.get("Address", "")
> +    if not ip_address_and_mask:
> +        error = "create_endpoint: ip address not provided by libnetwork"
> +        return jsonify({'Err': error})
> +
> +    ip_address = ip_address_and_mask.rsplit('/', 1)[0]
> +    mac_address_input = interface.get("MacAddress", "")
> +    mac_address_output = ""
> +
> +    try:
> +        ovn_nbctl("lport-add %s %s" % (nid, eid))
> +    except Exception as e:
> +        error = "create_endpoint: lport-add (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    if not mac_address_input:
> +        mac_address = "02:%02x:%02x:%02x:%02x:%02x" % (random.randint(0, 255),
> +                                                       random.randint(0, 255),
> +                                                       random.randint(0, 255),
> +                                                       random.randint(0, 255),
> +                                                       random.randint(0, 255))
> +    else:
> +        mac_address = mac_address_input
> +
> +    try:
> +        ovn_nbctl("lport-set-addresses %s \"%s %s\""
> +                  % (eid, mac_address, ip_address))
> +    except Exception as e:
> +        error = "create_endpoint: lport-set-addresses (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    # Only return a mac address if one did not come as request.
> +    mac_address_output = ""
> +    if not mac_address_input:
> +        mac_address_output = mac_address
> +
> +    return jsonify({"Interface": {
> +                                    "Address": "",
> +                                    "AddressIPv6": "",
> +                                    "MacAddress": mac_address_output
> +                                    }})
> +
> +
> +def get_logical_port_addresses(eid):
> +    ret = ovn_nbctl("--if-exists get Logical_port %s addresses" % (eid))
> +    if not ret:
> +        error = "endpoint not found in OVN database"
> +        return (None, None, error)
> +    addresses = ast.literal_eval(ret)
> +    if len(addresses) == 0:
> +        error = "unexpected return while fetching addresses"
> +        return (None, None, error)
> +    (mac_address, ip_address) = addresses[0].split()
> +    return (mac_address, ip_address, None)
> +
> +
> +@app.route('/NetworkDriver.EndpointOperInfo', methods=['POST'])
> +def show_endpoint():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    try:
> +        (mac_address, ip_address, error) = get_logical_port_addresses(eid)
> +        if error:
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "show_endpoint: get Logical_port addresses. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    veth_outside = eid[0:15]
> +    return jsonify({"Value": {"ip_address": ip_address,
> +                              "mac_address": mac_address,
> +                              "veth_outside": veth_outside
> +                              }})
> +
> +
> +@app.route('/NetworkDriver.DeleteEndpoint', methods=['POST'])
> +def delete_endpoint():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    try:
> +        ovn_nbctl("lport-del %s" % eid)
> +    except Exception as e:
> +        error = "delete_endpoint: lport-del %s" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.Join', methods=['POST'])
> +def network_join():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    sboxkey = data.get("SandboxKey", "")
> +    if not sboxkey:
> +        abort(400)
> +
> +    # sboxkey is of the form: /var/run/docker/netns/CONTAINER_ID
> +    vm_id = sboxkey.rsplit('/')[-1]
> +
> +    try:
> +        (mac_address, ip_address, error) = get_logical_port_addresses(eid)
> +        if error:
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "network_join: %s" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    veth_outside = eid[0:15]
> +    veth_inside = eid[0:13] + "_c"
> +    command = "ip link add %s type veth peer name %s" \
> +              % (veth_inside, veth_outside)
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_join: failed to create veth pair (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    command = "ip link set dev %s address %s" \
> +              % (veth_inside, mac_address)
> +
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_join: failed to set veth mac address (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    command = "ip link set %s up" % (veth_outside)
> +
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_join: failed to up the veth interface (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        ovs_vsctl("add-port %s %s" % (OVN_BRIDGE, veth_outside))
> +        ovs_vsctl("set interface %s external_ids:attached-mac=%s "
> +                  "external_ids:iface-id=%s "
> +                  "external_ids:vm-id=%s "
> +                  "external_ids:iface-status=%s "
> +                  % (veth_outside, mac_address, eid, vm_id, "active"))
> +    except Exception as e:
> +        error = "network_join: failed to create a port (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({"InterfaceName": {
> +                                        "SrcName": veth_inside,
> +                                        "DstPrefix": "eth"
> +                                     },
> +                    "Gateway": "",
> +                    "GatewayIPv6": ""})
> +
> +
> +@app.route('/NetworkDriver.Leave', methods=['POST'])
> +def network_leave():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    veth_outside = eid[0:15]
> +    command = "ip link delete %s" % (veth_outside)
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_leave: failed to delete veth pair (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        ovs_vsctl("--if-exists del-port %s" % (veth_outside))
> +    except Exception as e:
> +        error = "network_leave: failed to delete port (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +if __name__ == '__main__':
> +    prepare()
> +    app.run(host='0.0.0.0')
> diff --git a/ovn/utilities/ovn-docker-underlay-driver b/ovn/utilities/ovn-docker-underlay-driver
> new file mode 100755
> index 0000000..46364da
> --- /dev/null
> +++ b/ovn/utilities/ovn-docker-underlay-driver
> @@ -0,0 +1,675 @@
> +#! /usr/bin/python
> +# Copyright (C) 2015 Nicira, Inc.
> +#
> +# Licensed under the Apache License, Version 2.0 (the "License");
> +# you may not use this file except in compliance with the License.
> +# You may obtain a copy of the License at:
> +#
> +#     http://www.apache.org/licenses/LICENSE-2.0
> +#
> +# Unless required by applicable law or agreed to in writing, software
> +# distributed under the License is distributed on an "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> +# See the License for the specific language governing permissions and
> +# limitations under the License.
> +
> +import argparse
> +import atexit
> +import getpass
> +import json
> +import os
> +import re
> +import shlex
> +import subprocess
> +import sys
> +import time
> +import uuid
> +
> +import ovs.dirs
> +import ovs.util
> +import ovs.daemon
> +import ovs.unixctl.server
> +import ovs.vlog
> +
> +from neutronclient.v2_0 import client
> +from flask import Flask, jsonify
> +from flask import request, abort
> +
> +app = Flask(__name__)
> +vlog = ovs.vlog.Vlog("ovn-docker-underlay-driver")
> +
> +AUTH_STRATEGY = ""
> +AUTH_URL = ""
> +ENDPOINT_URL = ""
> +OVN_BRIDGE = ""
> +PASSWORD = ""
> +PLUGIN_DIR = "/etc/docker/plugins"
> +PLUGIN_FILE = "/etc/docker/plugins/openvswitch.spec"
> +TENANT_ID = ""
> +USERNAME = ""
> +VIF_ID = ""
> +
> +
> +def call_popen(cmd):
> +    child = subprocess.Popen(cmd, stdout=subprocess.PIPE)
> +    output = child.communicate()
> +    if child.returncode:
> +        raise RuntimeError("Fatal error executing %s" % (cmd))
> +    if len(output) == 0 or output[0] is None:
> +        output = ""
> +    else:
> +        output = output[0].strip()
> +    return output
> +
> +
> +def call_prog(prog, args_list):
> +    cmd = [prog, "--timeout=5", "-vconsole:off"] + args_list
> +    return call_popen(cmd)
> +
> +
> +def ovs_vsctl(args):
> +    return call_prog("ovs-vsctl", shlex.split(args))
> +
> +
> +def cleanup():
> +    if os.path.isfile(PLUGIN_FILE):
> +        os.remove(PLUGIN_FILE)
> +
> +
> +def ovn_init_underlay(args):
> +    global USERNAME, PASSWORD, TENANT_ID, AUTH_URL, AUTH_STRATEGY, VIF_ID
> +    global OVN_BRIDGE
> +
> +    if not args.bridge:
> +        sys.exit("OVS bridge name not provided")
> +    OVN_BRIDGE = args.bridge
> +
> +    VIF_ID = os.environ.get('OS_VIF_ID', '')
> +    if not VIF_ID:
> +        sys.exit("env OS_VIF_ID not set")
> +    USERNAME = os.environ.get('OS_USERNAME', '')
> +    if not USERNAME:
> +        sys.exit("env OS_USERNAME not set")
> +    TENANT_ID = os.environ.get('OS_TENANT_ID', '')
> +    if not TENANT_ID:
> +        sys.exit("env OS_TENANT_ID not set")
> +    AUTH_URL = os.environ.get('OS_AUTH_URL', '')
> +    if not AUTH_URL:
> +        sys.exit("env OS_AUTH_URL not set")
> +    AUTH_STRATEGY = "keystone"
> +
> +    PASSWORD = os.environ.get('OS_PASSWORD', '')
> +    if not PASSWORD:
> +        PASSWORD = getpass.getpass()
> +
> +
> +def prepare():
> +    parser = argparse.ArgumentParser()
> +    parser.add_argument('--bridge', help="The Bridge to which containers "
> +                        "interfaces connect to.")
> +
> +    ovs.vlog.add_args(parser)
> +    ovs.daemon.add_args(parser)
> +    args = parser.parse_args()
> +    ovs.vlog.handle_args(args)
> +    ovs.daemon.handle_args(args)
> +    ovn_init_underlay(args)
> +
> +    if not os.path.isdir(PLUGIN_DIR):
> +        os.makedirs(PLUGIN_DIR)
> +
> +    ovs.daemon.daemonize()
> +    try:
> +        fo = open(PLUGIN_FILE, "w")
> +        fo.write("tcp://0.0.0.0:5000")
> +        fo.close()
> +    except Exception as e:
> +        ovs.util.ovs_fatal(0, "Failed to write to spec file (%s)" % str(e),
> +                           vlog)
> +
> +    atexit.register(cleanup)
> +
> +
> +@app.route('/Plugin.Activate', methods=['POST'])
> +def plugin_activate():
> +    return jsonify({"Implements": ["NetworkDriver"]})
> +
> +
> +@app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
> +def get_capability():
> +    return jsonify({"Scope": "global"})
> +
> +
> +@app.route('/NetworkDriver.DiscoverNew', methods=['POST'])
> +def new_discovery():
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.DiscoverDelete', methods=['POST'])
> +def delete_discovery():
> +    return jsonify({})
> +
> +
> +def neutron_login():
> +    try:
> +        neutron = client.Client(username=USERNAME,
> +                                password=PASSWORD,
> +                                tenant_id=TENANT_ID,
> +                                auth_url=AUTH_URL,
> +                                endpoint_url=ENDPOINT_URL,
> +                                auth_strategy=AUTH_STRATEGY)
> +    except Exception as e:
> +        raise RuntimeError("Failed to login into Neutron(%s)" % str(e))
> +    return neutron
> +
> +
> +def get_networkuuid_by_name(neutron, name):
> +    param = {'fields': 'id', 'name': name}
> +    ret = neutron.list_networks(**param)
> +    if len(ret['networks']) > 1:
> +        raise RuntimeError("More than one network for the given name")
> +    elif len(ret['networks']) == 0:
> +        network = None
> +    else:
> +        network = ret['networks'][0]['id']
> +    return network
> +
> +
> +def get_subnetuuid_by_name(neutron, name):
> +    param = {'fields': 'id', 'name': name}
> +    ret = neutron.list_subnets(**param)
> +    if len(ret['subnets']) > 1:
> +        raise RuntimeError("More than one subnet for the given name")
> +    elif len(ret['subnets']) == 0:
> +        subnet = None
> +    else:
> +        subnet = ret['subnets'][0]['id']
> +    return subnet
> +
> +
> +@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
> +def create_network():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    # NetworkID will have docker generated network uuid and it
> +    # becomes 'name' in a neutron network record.
> +    network = data.get("NetworkID", "")
> +    if not network:
> +        abort(400)
> +
> +    # Limit subnet handling to ipv4 till ipv6 usecase is clear.
> +    ipv4_data = data.get("IPv4Data", "")
> +    if not ipv4_data:
> +        error = "create_network: No ipv4 subnet provided"
> +        return jsonify({'Err': error})
> +
> +    subnet = ipv4_data[0].get("Pool", "")
> +    if not subnet:
> +        error = "create_network: no subnet in ipv4 data from libnetwork"
> +        return jsonify({'Err': error})
> +
> +    gateway_ip = ipv4_data[0].get("Gateway", "").rsplit('/', 1)[0]
> +    if not gateway_ip:
> +        error = "create_network: no gateway in ipv4 data from libnetwork"
> +        return jsonify({'Err': error})
> +
> +    try:
> +        neutron = neutron_login()
> +    except Exception as e:
> +        error = "create_network: neutron login. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        if get_networkuuid_by_name(neutron, network):
> +            error = "create_network: network has already been created"
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "create_network: neutron network uuid by name. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        body = {'network': {'name': network, 'admin_state_up': True}}
> +        ret = neutron.create_network(body)
> +        network_id = ret['network']['id']
> +    except Exception as e:
> +        error = "create_network: neutron net-create call. (%s)" % str(e)
> +        return jsonify({'Err': error})
> +
> +    subnet_name = "docker-%s" % (network)
> +
> +    try:
> +        body = {'subnet': {'network_id': network_id,
> +                           'ip_version': 4,
> +                           'cidr': subnet,
> +                           'gateway_ip': gateway_ip,
> +                           'name': subnet_name}}
> +        created_subnet = neutron.create_subnet(body)
> +    except Exception as e:
> +        error = "create_network: neutron subnet-create call. (%s)" % str(e)
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.DeleteNetwork', methods=['POST'])
> +def delete_network():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    try:
> +        neutron = neutron_login()
> +    except Exception as e:
> +        error = "delete_network: neutron login. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        network = get_networkuuid_by_name(neutron, nid)
> +        if not network:
> +            error = "delete_network: failed in network by name. (%s)" % (nid)
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "delete_network: network uuid by name. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        neutron.delete_network(network)
> +    except Exception as e:
> +        error = "delete_network: neutron net-delete. (%s)" % str(e)
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +
> +def reserve_vlan():
> +    reserved_vlan = 0
> +    vlans = ovs_vsctl("--if-exists get Open_vSwitch . "
> +                      "external_ids:vlans").strip('"')
> +    if not vlans:
> +        reserved_vlan = 1
> +        ovs_vsctl("set Open_vSwitch . external_ids:vlans=%s" % reserved_vlan)
> +        return reserved_vlan
> +
> +    vlan_set = str(vlans).split(',')
> +
> +    for vlan in range(1, 4095):
> +        if str(vlan) not in vlan_set:
> +            vlan_set.append(str(vlan))
> +            reserved_vlan = vlan
> +            vlans = re.sub(r'[ \[\]\']', '', str(vlan_set))
> +            ovs_vsctl("set Open_vSwitch . external_ids:vlans=%s" % vlans)
> +            return reserved_vlan
> +
> +    if not reserved_vlan:
> +        raise RuntimeError("No more vlans available on this host")
> +
> +
> +def unreserve_vlan(reserved_vlan):
> +    vlans = ovs_vsctl("--if-exists get Open_vSwitch . "
> +                      "external_ids:vlans").strip('"')
> +    if not vlans:
> +        return
> +
> +    vlan_set = str(vlans).split(',')
> +    if str(reserved_vlan) not in vlan_set:
> +        return
> +
> +    vlan_set.remove(str(reserved_vlan))
> +    vlans = re.sub(r'[ \[\]\']', '', str(vlan_set))
> +    if vlans:
> +        ovs_vsctl("set Open_vSwitch . external_ids:vlans=%s" % vlans)
> +    else:
> +        ovs_vsctl("remove Open_vSwitch . external_ids vlans")
> +
> +
> +def create_port_underlay(neutron, network, eid, ip_address, mac_address):
> +    reserved_vlan = reserve_vlan()
> +    if mac_address:
> +        body = {'port': {'network_id': network,
> +                         'binding:profile': {'parent_name': VIF_ID,
> +                                             'tag': int(reserved_vlan)},
> +                         'mac_address': mac_address,
> +                         'fixed_ips': [{'ip_address': ip_address}],
> +                         'name': eid,
> +                         'admin_state_up': True}}
> +    else:
> +        body = {'port': {'network_id': network,
> +                         'binding:profile': {'parent_name': VIF_ID,
> +                                             'tag': int(reserved_vlan)},
> +                         'fixed_ips': [{'ip_address': ip_address}],
> +                         'name': eid,
> +                         'admin_state_up': True}}
> +
> +    try:
> +        ret = neutron.create_port(body)
> +        mac_address = ret['port']['mac_address']
> +    except Exception as e:
> +        unreserve_vlan(reserved_vlan)
> +        raise RuntimeError("Failed in creation of neutron port (%s)." % str(e))
> +
> +    ovs_vsctl("set Open_vSwitch . external_ids:%s_vlan=%s"
> +              % (eid, reserved_vlan))
> +
> +    return mac_address
> +
> +
> +def get_endpointuuid_by_name(neutron, name):
> +    param = {'fields': 'id', 'name': name}
> +    ret = neutron.list_ports(**param)
> +    if len(ret['ports']) > 1:
> +        raise RuntimeError("More than one endpoint for the given name")
> +    elif len(ret['ports']) == 0:
> +        endpoint = None
> +    else:
> +        endpoint = ret['ports'][0]['id']
> +    return endpoint
> +
> +
> +@app.route('/NetworkDriver.CreateEndpoint', methods=['POST'])
> +def create_endpoint():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    interface = data.get("Interface", "")
> +    if not interface:
> +        error = "create_endpoint: no interfaces supplied by libnetwork"
> +        return jsonify({'Err': error})
> +
> +    ip_address_and_mask = interface.get("Address", "")
> +    if not ip_address_and_mask:
> +        error = "create_endpoint: ip address not provided by libnetwork"
> +        return jsonify({'Err': error})
> +
> +    ip_address = ip_address_and_mask.rsplit('/', 1)[0]
> +    mac_address_input = interface.get("MacAddress", "")
> +    mac_address_output = ""
> +
> +    try:
> +        neutron = neutron_login()
> +    except Exception as e:
> +        error = "create_endpoint: neutron login. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        endpoint = get_endpointuuid_by_name(neutron, eid)
> +        if endpoint:
> +            error = "create_endpoint: Endpoint has already been created"
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "create_endpoint: endpoint uuid by name. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        network = get_networkuuid_by_name(neutron, nid)
> +        if not network:
> +            error = "create_endpoint: neutron network by name. (%s)" % (nid)
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "create_endpoint: network uuid by name. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        mac_address = create_port_underlay(neutron, network, eid, ip_address,
> +                                           mac_address_input)
> +    except Exception as e:
> +        error = "create_endpoint: neutron port-create (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    if not mac_address_input:
> +        mac_address_output = mac_address
> +
> +    return jsonify({"Interface": {
> +                                    "Address": "",
> +                                    "AddressIPv6": "",
> +                                    "MacAddress": mac_address_output
> +                                    }})
> +
> +
> +@app.route('/NetworkDriver.EndpointOperInfo', methods=['POST'])
> +def show_endpoint():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    try:
> +        neutron = neutron_login()
> +    except Exception as e:
> +        error = "%s" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        endpoint = get_endpointuuid_by_name(neutron, eid)
> +        if not endpoint:
> +            error = "show_endpoint: Failed to get endpoint by name"
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "show_endpoint: get endpoint by name. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        ret = neutron.show_port(endpoint)
> +        mac_address = ret['port']['mac_address']
> +        ip_address = ret['port']['fixed_ips'][0]['ip_address']
> +    except Exception as e:
> +        error = "show_endpoint: show port (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    veth_outside = eid[0:15]
> +    return jsonify({"Value": {"ip_address": ip_address,
> +                              "mac_address": mac_address,
> +                              "veth_outside": veth_outside
> +                              }})
> +
> +
> +@app.route('/NetworkDriver.DeleteEndpoint', methods=['POST'])
> +def delete_endpoint():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    try:
> +        neutron = neutron_login()
> +    except Exception as e:
> +        error = "delete_endpoint: neutron login (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    endpoint = get_endpointuuid_by_name(neutron, eid)
> +    if not endpoint:
> +        return jsonify({})
> +
> +    reserved_vlan = ovs_vsctl("--if-exists get Open_vSwitch . "
> +                              "external_ids:%s_vlan" % eid).strip('"')
> +    if reserved_vlan:
> +        unreserve_vlan(reserved_vlan)
> +        ovs_vsctl("remove Open_vSwitch . external_ids %s_vlan" % eid)
> +
> +    try:
> +        neutron.delete_port(endpoint)
> +    except Exception as e:
> +        error = "delete_endpoint: neutron port-delete. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +
> +@app.route('/NetworkDriver.Join', methods=['POST'])
> +def network_join():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    sboxkey = data.get("SandboxKey", "")
> +    if not sboxkey:
> +        abort(400)
> +
> +    # sboxkey is of the form: /var/run/docker/netns/CONTAINER_ID
> +    vm_id = sboxkey.rsplit('/')[-1]
> +
> +    try:
> +        neutron = neutron_login()
> +    except Exception as e:
> +        error = "network_join: neutron login. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    subnet_name = "docker-%s" % (nid)
> +    try:
> +        subnet = get_subnetuuid_by_name(neutron, subnet_name)
> +        if not subnet:
> +            error = "network_join: can't find subnet in neutron"
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "network_join: subnet uuid by name. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        ret = neutron.show_subnet(subnet)
> +        gateway_ip = ret['subnet']['gateway_ip']
> +        if not gateway_ip:
> +            error = "network_join: no gateway_ip for the subnet"
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "network_join: neutron show subnet. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        endpoint = get_endpointuuid_by_name(neutron, eid)
> +        if not endpoint:
> +            error = "network_join: Failed to get endpoint by name"
> +            return jsonify({'Err': error})
> +    except Exception as e:
> +        error = "network_join: neutron endpoint by name. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        ret = neutron.show_port(endpoint)
> +        mac_address = ret['port']['mac_address']
> +    except Exception as e:
> +        error = "network_join: neutron show port. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    veth_outside = eid[0:15]
> +    veth_inside = eid[0:13] + "_c"
> +    command = "ip link add %s type veth peer name %s" \
> +              % (veth_inside, veth_outside)
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_join: failed to create veth pair. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    command = "ip link set dev %s address %s" \
> +              % (veth_inside, mac_address)
> +
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_join: failed to set veth mac address. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    command = "ip link set %s up" % (veth_outside)
> +
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_join: failed to up the veth iface. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        reserved_vlan = ovs_vsctl("--if-exists get Open_vSwitch . "
> +                                  "external_ids:%s_vlan" % eid).strip('"')
> +        if not reserved_vlan:
> +            error = "network_join: no reserved vlan for this endpoint"
> +            return jsonify({'Err': error})
> +        ovs_vsctl("add-port %s %s tag=%s"
> +                  % (OVN_BRIDGE, veth_outside, reserved_vlan))
> +    except Exception as e:
> +        error = "network_join: failed to create an OVS port. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({"InterfaceName": {
> +                                        "SrcName": veth_inside,
> +                                        "DstPrefix": "eth"
> +                                     },
> +                    "Gateway": gateway_ip,
> +                    "GatewayIPv6": ""})
> +
> +
> +@app.route('/NetworkDriver.Leave', methods=['POST'])
> +def network_leave():
> +    if not request.data:
> +        abort(400)
> +
> +    data = json.loads(request.data)
> +
> +    nid = data.get("NetworkID", "")
> +    if not nid:
> +        abort(400)
> +
> +    eid = data.get("EndpointID", "")
> +    if not eid:
> +        abort(400)
> +
> +    veth_outside = eid[0:15]
> +    command = "ip link delete %s" % (veth_outside)
> +    try:
> +        call_popen(shlex.split(command))
> +    except Exception as e:
> +        error = "network_leave: failed to delete veth pair. (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    try:
> +        ovs_vsctl("--if-exists del-port %s" % (veth_outside))
> +    except Exception as e:
> +        error = "network_leave: Failed to delete port (%s)" % (str(e))
> +        return jsonify({'Err': error})
> +
> +    return jsonify({})
> +
> +if __name__ == '__main__':
> +    prepare()
> +    app.run(host='0.0.0.0')
> diff --git a/rhel/openvswitch-fedora.spec.in b/rhel/openvswitch-fedora.spec.in
> index 066086c..cb76500 100644
> --- a/rhel/openvswitch-fedora.spec.in
> +++ b/rhel/openvswitch-fedora.spec.in
> @@ -346,6 +346,8 @@ rm -rf $RPM_BUILD_ROOT
>  %files ovn
>  %{_bindir}/ovn-controller
>  %{_bindir}/ovn-controller-vtep
> +%{_bindir}/ovn-docker-overlay-driver
> +%{_bindir}/ovn-docker-underlay-driver
>  %{_bindir}/ovn-nbctl
>  %{_bindir}/ovn-northd
>  %{_bindir}/ovn-sbctl
> --
> 1.9.1
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
>
Ben Pfaff Nov. 10, 2015, 5:49 a.m. UTC | #2
On Mon, Oct 19, 2015 at 02:52:25PM -0700, Gurucharan Shetty wrote:
> Docker removed 'experimental' tag for their multi-host
> networking constructs last week and did a code freeze for
> Docker 1.9.
> 
> This commit adds two drivers for OVN integration
> with Docker. The first driver is a pure overlay driver
> that does not need OpenStack integration. The second driver
> needs OVN+OpenStack.
> 
> The description of the Docker API exists here:
> https://github.com/docker/libnetwork/blob/master/docs/remote.md
> 
> Signed-off-by: Gurucharan Shetty <gshetty@nicira.com>
> ---
> v1->v2:
> Some style adjustments with error messages.
> Consolidation of some duplicate code to function: get_logical_port_addresses

Thanks for doing this!  I have a few comments, see below.

> +For multi-host networking with OVN and Docker, Docker has to be started
> +with a distributed key-value store.  For example, if you decide to use consul
> +as your distributed key-value store, and your host IP address is $HOST_IP,
> +start your Docker daemon with:
> +
> +```
> +docker daemon --cluster-store=consul://127.0.0.1:8500 --cluster-advertise=$IP:0
> +```

I guess that $IP should be $HOST_IP here.

> +The "overlay" mode
> +==================
> +
> +OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.
> +
> +* Start the central components.
> +
> +OVN architecture has a central component which stores your networking intent
> +in a database.  So on any machine, with an IP Address of $CENTRAL_IP, where you
> +have installed and started Open vSwitch, you will need to start some
> +central components.

I think that the paragraph above means that you have to do this on one
machine selected from your hypervisors.  But "on any machine...where you
have installed and started Open vSwitch" could be interpreted to mean
that you run this on *every* hypervisor.  Maybe s/on any machine/on one
of your machines/?

> +Start ovn_northd daemon.  This daemon translates networking intent from Docker
> +stored in OVN_Northbound database to logical flows in OVN_Southbound database.

It might be worth s/ovn_northd/ovn-northd/ above since that's the
correct name of the daemon.

> +```
> +/usr/share/openvswitch/scripts/ovn-ctl start_northd
> +```
> +
> +* One time setup.
> +
> +On each host, where you plan to spawn your containers, you will need to
> +run the following commands once.

I wonder whether it's worth adding some additional clarification, e.g.:

    (You need to run it again if your OVS database gets cleared.  It is
    harmless to run it again in any case.)

> +```
> +ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6640 \

There's a missing " near the end of the line above.

> +    external_ids:ovn-encap-ip=$LOCAL_IP external_ids:ovn-encap-type="geneve"
> +```
> +
> +And finally, start the ovn-controller.
> +
> +```
> +/usr/share/openvswitch/scripts/ovn-ctl start_controller
> +```

I don't know whether that's really "one time" since it needs to happen
on each boot.

> +Source the openrc file. e.g.:
> +```
> +source openrc.sh
>  ```

The "source" command name isn't portable, since POSIX does not require
it.  You can use "." instead.

Are you expecting openrc.sh to be in the current directory?  POSIX says
that ". ./openrc.sh" is required to source a file in the current
directory (unless . is in $PATH).

In the Python code, I wonder whether there are any concerns about
malicious input.  I mean, what if someone names a subnet "--
emer-reset", for example (or similar)?  Would that delete basically the
whole OVS database?  Or does everything show up as a UUID and therefore
make it safe?  I didn't investigate enough to figure that out.
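The worry can be sketched concretely: the drivers interpolate user-supplied names into a single command string and then `shlex.split()` it, so a name containing `--` tokens becomes extra ovs-vsctl/ovn-nbctl commands. The network name below is a hypothetical malicious value, not anything from the patch:

```python
import shlex

# The drivers build one command string and then tokenize it, e.g.
# ovn_nbctl("lswitch-add %s" % network).  A crafted name splits into
# multiple argv tokens, including a second command after the "--"
# separator that ovs-vsctl/ovn-nbctl use to chain commands.
network = "net1 -- emer-reset"  # hypothetical malicious name
args = shlex.split("lswitch-add %s" % network)
print(args)  # ['lswitch-add', 'net1', '--', 'emer-reset']
```

Whether this is exploitable in practice depends on whether any caller-controlled string reaches these helpers unvalidated, which is exactly the question raised above.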

Assuming there's some kind of security story that way,
Acked-by: Ben Pfaff <blp@ovn.org>
Gurucharan Shetty Nov. 10, 2015, 10:45 p.m. UTC | #3
> Thanks for doing this!  I have a few comments, see below.

Thank you for the review!

>
> I guess that $IP should be $HOST_IP here.
True. I corrected it.


> I think that the paragraph above means that you have to do this on one
> machine selected from your hypervisors.  But "on any machine...where you
> have installed and started Open vSwitch" could be interpreted to mean
> that you run this on *every* hypervisor.  Maybe s/on any machine/on one
> of your machines/?

That is better. I changed the words.

>
>> +Start ovn_northd daemon.  This daemon translates networking intent from Docker
>> +stored in OVN_Northbound database to logical flows in OVN_Southbound database.
>
> It might be worth s/ovn_northd/ovn-northd/ above since that's the
> correct name of the daemon.

corrected it.

>
>> +```
>> +/usr/share/openvswitch/scripts/ovn-ctl start_northd
>> +```
>> +
>> +* One time setup.
>> +
>> +On each host, where you plan to spawn your containers, you will need to
>> +run the following commands once.
>
> I wonder whether it's worth adding some additional clarification, e.g.:
>
>     (You need to run it again if your OVS database gets cleared.  It is
>     harmless to run it again in any case.)

I added the suggested sentences.

>
>> +```
>> +ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6640 \
>
> There's a missing " near the end of the line above.

Right, I corrected it. I also changed the paragraph to now read:

$ENCAP_TYPE is the type of tunnel that you would like to use for overlay
networking.  The options are "geneve" or "stt".  (Please note that your
kernel should have support for your chosen $ENCAP_TYPE.  Both geneve
and stt are part of the Open vSwitch kernel module that is compiled from this
repo.  If you use the Open vSwitch kernel module from upstream Linux,
you will need a minimum kernel version of 3.18 for geneve.  There is no stt
support in upstream Linux.  You can verify whether you have the support in your
kernel by doing a "lsmod | grep $ENCAP_TYPE".)

```
ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6640" \
  external_ids:ovn-encap-ip=$LOCAL_IP external_ids:ovn-encap-type="$ENCAP_TYPE"
```

>> +```
>> +
>> +And finally, start the ovn-controller.
>> +
>> +```
>> +/usr/share/openvswitch/scripts/ovn-ctl start_controller
>> +```
>
> I don't know whether that's really "one time" since it needs to happen
> on each boot.

I changed the line to now read:
And finally, start the ovn-controller.  (You need to run the below command
on every boot)


>
>> +Source the openrc file. e.g.:
>> +````
>> +source openrc.sh
>>  ```
>
> The "source" command name isn't portable, since POSIX does not require
> it.  You can use "." instead.
>
> Are you expecting openrc.sh to be in the current directory?  POSIX says
> that ". ./openrc.sh" is required to source a file in the current
> directory (unless . is in $PATH).

I changed it to now read:
. ./openrc.sh

>
> In the Python code, I wonder whether there are any concerns about
> malicious input.  I mean, what if someone names a subnet "--
> emer-reset", for example (or similar)?  Would that delete basically the
> whole OVS database?  Or does everything show up as a UUID and therefore
> make it safe?  I didn't investigate enough to figure that out.

Thanks for the above warning. Though one could not send malicious
input via docker api (as they check for the validity there), one
could still send a TCP request directly to the driver to carefully
insert " -- $database_command --" as arguments for ovs-vsctl and
ovn-nbctl commands. To handle that I was thinking of doing something
like this:


+def vet_inputs(*args):
+    for arg in args:
+        if arg.find(" -- ") != -1:
+            raise RuntimeError("Input contains invalid characters")
+

     try:
+        vet_inputs(network, subnet, gateway_ip)
         ovn_nbctl("lswitch-add %s -- set Logical_Switch %s "
                   "external_ids:subnet=%s external_ids:gateway_ip=%s"
                   % (network, network, subnet, gateway_ip))


And everywhere else where we pass the user input to ovn_nbctl or
ovs_vsctl calls.
What do you think?

>
> Assuming there's some kind of security story that way,
> Acked-by: Ben Pfaff <blp@ovn.org>
Ben Pfaff Nov. 10, 2015, 11:43 p.m. UTC | #4
On Tue, Nov 10, 2015 at 02:45:05PM -0800, Gurucharan Shetty wrote:
> > In the Python code, I wonder whether there are any concerns about
> > malicious input.  I mean, what if someone names a subnet "--
> > emer-reset", for example (or similar)?  Would that delete basically the
> > whole OVS database?  Or does everything show up as a UUID and therefore
> > make it safe?  I didn't investigate enough to figure that out.
> 
> Thanks for the above warning. Though one could not send malicious
> > input via docker api (as they check for the validity there), one
> could still send a TCP request directly to the driver to carefully
> insert " -- $database_command --" as arguments for ovs-vsctl and
> ovn-nbctl commands. To handle that I was thinking of doing something
> like this:
> 
> 
> +def vet_inputs(*args):
> +    for arg in args:
> +        if arg.find(" -- ") != -1:
> +            raise RuntimeError("Input contains invalid characters")
> +
> 
>      try:
> +        vet_inputs(network, subnet, gateway_ip)
>          ovn_nbctl("lswitch-add %s -- set Logical_Switch %s "
>                    "external_ids:subnet=%s external_ids:gateway_ip=%s"
>                    % (network, network, subnet, gateway_ip))
> 
> 
> And everywhere else where we pass the user input to ovn_nbctl or
> ovs_vsctl calls.
> What do you think?

I was expecting something more like:

    ovn_nbctl("lswitch-add", network, "--", "set", "Logical_Switch",
              network, "external_ids:subnet=" + subnet,
              "external_ids:gateway_ip=" + gateway_ip)

and then change ovn_nbctl to take argv instead of a string to break up.
Is that difficult?
Gurucharan Shetty Nov. 11, 2015, 7:08 p.m. UTC | #5
> I was expecting something more like:
>
>     ovn_nbctl("lswitch-add", network, "--", "set", "Logical_Switch",
>               network, "external_ids:subnet=" + subnet,
>               "external_ids:gateway_ip=" + gateway_ip)
>
> and then change ovn_nbctl to take argv instead of a string to break up.
> Is that difficult?

Your suggestion is clearly better. I changed all the calls to follow
the above model, did a round of sanity testing for both overlay and
underlay drivers and sent a v3.
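For reference, the argv-based wrapper could look roughly like this (a
minimal sketch of the suggested approach, not the committed v3 code;
OVN_REMOTE and the helper names merely mirror those used in the driver):

```python
# Sketch of an argv-style ovn_nbctl wrapper, per the suggestion above.
import subprocess

OVN_REMOTE = "tcp:127.0.0.1:6640"  # assumed placeholder for the central DB


def build_nbctl_cmd(*args):
    # Each caller-supplied value stays a single argv element, so a
    # malicious name like "evil -- emer-reset" can never be re-parsed
    # into an extra ovn-nbctl command.
    return ["ovn-nbctl", "--timeout=5", "-vconsole:off",
            "--db=" + OVN_REMOTE] + list(args)


def call_popen(cmd):
    child = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    output = child.communicate()
    if child.returncode:
        raise RuntimeError("Fatal error executing %s" % (cmd))
    return output[0].strip() if output[0] else ""


def ovn_nbctl(*args):
    return call_popen(build_nbctl_cmd(*args))


# Usage then follows the model from the review:
#     ovn_nbctl("lswitch-add", network, "--", "set", "Logical_Switch",
#               network, "external_ids:subnet=" + subnet,
#               "external_ids:gateway_ip=" + gateway_ip)
```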

Patch

diff --git a/INSTALL.Docker.md b/INSTALL.Docker.md
index 9e14043..d523ecd 100644
--- a/INSTALL.Docker.md
+++ b/INSTALL.Docker.md
@@ -1,109 +1,270 @@ 
 How to Use Open vSwitch with Docker
 ====================================
 
-This document describes how to use Open vSwitch with Docker 1.2.0 or
+This document describes how to use Open vSwitch with Docker 1.9.0 or
 later.  This document assumes that you installed Open vSwitch by following
 [INSTALL.md] or by using the distribution packages such as .deb or .rpm.
 Consult www.docker.com for instructions on how to install Docker.
 
-Limitations
------------
-Currently there is no native integration of Open vSwitch in Docker, i.e.,
-one cannot use the Docker client to automatically add a container's
-network interface to an Open vSwitch bridge during the creation of the
-container.  This document describes addition of new network interfaces to an
-already created container and in turn attaching that interface as a port to an
-Open vSwitch bridge.  If and when there is a native integration of Open vSwitch
-with Docker, the ovs-docker utility described in this document is expected to
-be retired.
+Docker 1.9.0 comes with support for multi-host networking.  Integration
+of Docker networking and Open vSwitch can be achieved via the Open
+Virtual Network (OVN).
+
 
 Setup
------
-* Create your container, e.g.:
+=====
+
+For multi-host networking with OVN and Docker, Docker has to be started
+with a distributed key-value store.  For example, if you decide to use Consul
+as your distributed key-value store, and your host IP address is $HOST_IP,
+start your Docker daemon with:
+
+```
+docker daemon --cluster-store=consul://127.0.0.1:8500 --cluster-advertise=$HOST_IP:0
+```
+
+OVN provides network virtualization to containers.  OVN's integration with
+Docker currently works in two modes - the "underlay" mode or the "overlay"
+mode.
+
+In the "underlay" mode, OVN requires an OpenStack setup to provide container
+networking.  In this mode, one can create logical networks and can have
+containers running inside VMs, standalone VMs (without having any containers
+running inside them) and physical machines connected to the same logical
+network.  This is a multi-tenant, multi-host solution.
+
+In the "overlay" mode, OVN can create a logical network amongst containers
+running on multiple hosts.  This is a single-tenant (extendable to
+multi-tenants depending on the security characteristics of the workloads),
+multi-host solution.  In this mode, you do not need a pre-created OpenStack
+setup.
+
+For both modes to work, a user has to install and start Open vSwitch in
+each VM/host where they plan to run their containers.
+
+
+The "overlay" mode
+==================
+
+OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.
+
+* Start the central components.
+
+OVN architecture has a central component which stores your networking intent
+in a database.  So on any machine with an IP address of $CENTRAL_IP, where you
+have installed and started Open vSwitch, you will need to start some
+central components.
+
+Begin by making ovsdb-server listen on a TCP port by running:
+
+```
+ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640
+```
+
+Start the ovn-northd daemon.  This daemon translates networking intent from
+Docker, stored in the OVN_Northbound database, to logical flows in the
+OVN_Southbound database.
+
+```
+/usr/share/openvswitch/scripts/ovn-ctl start_northd
+```
+
+* One time setup.
+
+On each host, where you plan to spawn your containers, you will need to
+run the following commands once.
+
+$LOCAL_IP in the command below is the IP address via which other hosts
+can reach this host.  This acts as your local tunnel endpoint.
+
+$ENCAP_TYPE is the type of tunnel that you would like to use for overlay
+networking.  The options are "geneve" or "stt".
+
+```
+ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6640" \
+    external_ids:ovn-encap-ip=$LOCAL_IP external_ids:ovn-encap-type="$ENCAP_TYPE"
+```
+
+And finally, start the ovn-controller.
+
+```
+/usr/share/openvswitch/scripts/ovn-ctl start_controller
+```
+
+* Start the Open vSwitch network driver.
+
+By default Docker uses Linux bridge for networking.  But it has support
+for external drivers.  To use Open vSwitch instead of the Linux bridge,
+you will need to start the Open vSwitch driver.
+
+The Open vSwitch driver uses Python's flask module to listen to
+Docker's networking API calls.  So, if your host does not have Python's
+flask module, install it with:
+
+```
+easy_install -U pip
+pip install Flask
+```
+
+Start the Open vSwitch driver on every host where you plan to create your
+containers.
+
+```
+ovn-docker-overlay-driver --detach
+```
+
+Docker has built-in primitives that closely match OVN's logical switch
+and logical port concepts.  Please consult Docker's documentation for
+all the possible commands.  Here are some examples.
+
+* Create your logical switch.
+
+To create a logical switch with name 'foo', on subnet '192.168.1.0/24' run:
+
+```
+NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
+```
+
+* List your logical switches.
+
+```
+docker network ls
+```
+
+You can also look at this logical switch in OVN's northbound database by
+running the following command.
+
+```
+ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lswitch-list
+```
+
+* Docker creates your logical port and attaches it to the logical network
+in a single step.
+
+For example, to attach a logical port to network 'foo' inside the container
+busybox, run:
+
+```
+docker run -itd --net=foo --name=busybox busybox
+```
+
+* List all your logical ports.
+
+Docker currently does not have a CLI command to list all your logical ports.
+But you can look at them in the OVN database, by running:
 
 ```
-% docker run -d ubuntu:14.04 /bin/sh -c \
-"while true; do echo hello world; sleep 1; done"
+ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lport-list $NID
 ```
 
-The above command creates a container with one network interface 'eth0'
-and attaches it to a Linux bridge called 'docker0'.  'eth0' by default
-gets an IP address in the 172.17.0.0/16 space.  Docker sets up iptables
-NAT rules to let this interface talk to the outside world.  Also since
-it is connected to 'docker0' bridge, it can talk to all other containers
-connected to the same bridge.  If you prefer that no network interface be
-created by default, you can start your container with
-the option '--net=none', e,g.:
+* You can also create a logical port and attach it to a running container.
 
 ```
-% docker run -d --net=none ubuntu:14.04 /bin/sh -c \
-"while true; do echo hello world; sleep 1; done"
+docker network create -d openvswitch --subnet=192.168.2.0/24 bar
+docker network connect bar busybox
 ```
 
-The above commands will return a container id.  You will need to pass this
-value to the utility 'ovs-docker' to create network interfaces attached to an
-Open vSwitch bridge as a port.  This document will reference this value
-as $CONTAINER_ID in the next steps.
+You can delete your logical port and detach it from a running container by
+running:
+
+```
+docker network disconnect bar busybox
+```
 
-* Add a new network interface to the container and attach it to an Open vSwitch
-  bridge.  e.g.:
+* You can delete your logical switch by running:
 
-`% ovs-docker add-port br-int eth1 $CONTAINER_ID`
+```
+docker network rm bar
+```
 
-The above command will create a network interface 'eth1' inside the container
-and then attaches it to the Open vSwitch bridge 'br-int'.  This is done by
-creating a veth pair.  One end of the interface becomes 'eth1' inside the
-container and the other end attaches to 'br-int'.
 
-The script also lets one to add IP address, MAC address, Gateway address and
-MTU for the interface.  e.g.:
+The "underlay" mode
+===================
+
+This mode requires that you have an OpenStack setup pre-installed with OVN
+providing the underlay networking.
+
+* One time setup.
+
+An OpenStack tenant creates a VM with a single network interface (or multiple
+interfaces) that belong to management logical networks.  The tenant needs to
+fetch the port-id associated with the interface via which they plan to send
+container traffic inside the spawned VM.  This can be obtained by running the
+command below to fetch the 'id' associated with the VM:
 
 ```
-% ovs-docker add-port br-int eth1 $CONTAINER_ID --ipaddress=192.168.1.2/24 \
---macaddress=a2:c3:0d:49:7f:f8 --gateway=192.168.1.1 --mtu=1450
+nova list
 ```
 
-* A previously added network interface can be deleted.  e.g.:
+and then by running:
 
-`% ovs-docker del-port br-int eth1 $CONTAINER_ID`
+```
+neutron port-list --device_id=$id
+```
 
-All the previously added Open vSwitch interfaces inside a container can be
-deleted.  e.g.:
+Inside the VM, download the OpenStack RC file that contains the tenant
+information (henceforth referred to as 'openrc.sh').  Edit the file and add the
+previously obtained port-id information to the file by appending the following
+line: export OS_VIF_ID=$port_id.  After this edit, the file will look something
+like:
 
-`% ovs-docker del-ports br-int $CONTAINER_ID`
+```
+#!/bin/bash
+export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
+export OS_TENANT_ID=fab106b215d943c3bad519492278443d
+export OS_TENANT_NAME="demo"
+export OS_USERNAME="demo"
+export OS_VIF_ID=e798c371-85f4-4f2d-ad65-d09dd1d3c1c9
+```
+
+* Create the Open vSwitch bridge.
+
+If your VM has one Ethernet interface (e.g., 'eth0'), you will need to add
+that device as a port to an Open vSwitch bridge 'breth0' and move its IP
+address and route related information to that bridge. (If it has multiple
+network interfaces, you will need to create and attach an Open vSwitch bridge
+for the interface via which you plan to send your container traffic.)
+
+If you use DHCP to obtain an IP address, then you should kill the DHCP client
+that was listening on the physical Ethernet interface (e.g. eth0) and start
+one listening on the Open vSwitch bridge (e.g. breth0).
 
-It is important that the same $CONTAINER_ID be passed to both add-port
-and del-port[s] commands.
+Depending on your VM, you can make the above step persistent across reboots.
+For example, if your VM is Debian/Ubuntu, you can read
+[openvswitch-switch.README.Debian].
+If your VM is RHEL based, you can read [README.RHEL].
 
-* More network control.
 
-Once a container interface is added to an Open vSwitch bridge, one can
-set VLANs, create Tunnels, add OpenFlow rules etc for more network control.
-Many times, it is important that the underlying network infrastructure is
-plumbed (or programmed) before the application inside the container starts.
-To handle this, one can create a micro-container, attach an Open vSwitch
-interface to that container, set the UUIDS in OVSDB as mentioned in
-[IntegrationGuide.md] and then program the bridge to handle traffic coming out
-of that container. Now, you can start the main container asking it
-to share the network of the micro-container. When your application starts,
-the underlying network infrastructure would be ready. e.g.:
+* Start the Open vSwitch network driver.
 
+The Open vSwitch driver uses Python's flask module to listen to
+Docker's networking API calls.  The driver also uses OpenStack's
+python-neutronclient libraries.  So, if your host does not have Python's
+flask module or python-neutronclient, install them with:
+
+```
+easy_install -U pip
+pip install python-neutronclient
+pip install Flask
 ```
-% docker run -d --net=container:$MICROCONTAINER_ID ubuntu:14.04 /bin/sh -c \
-"while true; do echo hello world; sleep 1; done"
+
+Source the openrc file. e.g.:
+```
+source openrc.sh
 ```
 
-Please read the man pages of ovs-vsctl, ovs-ofctl, ovs-vswitchd,
-ovsdb-server and ovs-vswitchd.conf.db etc for more details about Open vSwitch.
+Start the network driver and provide your OpenStack tenant password
+when prompted.
 
-Docker networking is quite flexible and can be used in multiple ways.  For more
-information, please read:
-https://docs.docker.com/articles/networking
+```
+ovn-docker-underlay-driver --bridge breth0 --detach
+```
 
-Bug Reporting
--------------
+From here-on you can use the same Docker commands as described in the
+section 'The "overlay" mode'.
 
-Please report problems to bugs@openvswitch.org.
+Please read 'man ovn-architecture' to understand OVN's architecture in
+detail.
 
-[INSTALL.md]:INSTALL.md
-[IntegrationGuide.md]:IntegrationGuide.md
+[INSTALL.md]: INSTALL.md
+[openvswitch-switch.README.Debian]: debian/openvswitch-switch.README.Debian
+[README.RHEL]: rhel/README.RHEL
diff --git a/ovn/utilities/automake.mk b/ovn/utilities/automake.mk
index b247a54..50fb4e7 100644
--- a/ovn/utilities/automake.mk
+++ b/ovn/utilities/automake.mk
@@ -8,9 +8,16 @@  man_MANS += \
 
 MAN_ROOTS += ovn/utilities/ovn-sbctl.8.in
 
+# Docker drivers
+bin_SCRIPTS += \
+    ovn/utilities/ovn-docker-overlay-driver \
+    ovn/utilities/ovn-docker-underlay-driver
+
 EXTRA_DIST += \
     ovn/utilities/ovn-ctl \
     ovn/utilities/ovn-ctl.8.xml \
+    ovn/utilities/ovn-docker-overlay-driver \
+    ovn/utilities/ovn-docker-underlay-driver \
     ovn/utilities/ovn-nbctl.8.xml
 
 DISTCLEANFILES += \
@@ -27,3 +34,4 @@  ovn_utilities_ovn_nbctl_LDADD = ovn/lib/libovn.la ovsdb/libovsdb.la lib/libopenv
 bin_PROGRAMS += ovn/utilities/ovn-sbctl
 ovn_utilities_ovn_sbctl_SOURCES = ovn/utilities/ovn-sbctl.c
 ovn_utilities_ovn_sbctl_LDADD = ovn/lib/libovn.la ovsdb/libovsdb.la lib/libopenvswitch.la
+
diff --git a/ovn/utilities/ovn-docker-overlay-driver b/ovn/utilities/ovn-docker-overlay-driver
new file mode 100755
index 0000000..71eac93
--- /dev/null
+++ b/ovn/utilities/ovn-docker-overlay-driver
@@ -0,0 +1,442 @@ 
+#! /usr/bin/python
+# Copyright (C) 2015 Nicira, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at:
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import ast
+import atexit
+import json
+import os
+import random
+import re
+import shlex
+import subprocess
+import sys
+
+import ovs.dirs
+import ovs.util
+import ovs.daemon
+import ovs.vlog
+
+from flask import Flask, jsonify
+from flask import request, abort
+
+app = Flask(__name__)
+vlog = ovs.vlog.Vlog("ovn-docker-overlay-driver")
+
+OVN_BRIDGE = "br-int"
+OVN_REMOTE = ""
+PLUGIN_DIR = "/etc/docker/plugins"
+PLUGIN_FILE = "/etc/docker/plugins/openvswitch.spec"
+
+
+def call_popen(cmd):
+    child = subprocess.Popen(cmd, stdout=subprocess.PIPE)
+    output = child.communicate()
+    if child.returncode:
+        raise RuntimeError("Fatal error executing %s" % (cmd))
+    if len(output) == 0 or output[0] is None:
+        output = ""
+    else:
+        output = output[0].strip()
+    return output
+
+
+def call_prog(prog, args_list):
+    cmd = [prog, "--timeout=5", "-vconsole:off"] + args_list
+    return call_popen(cmd)
+
+
+def ovs_vsctl(args):
+    return call_prog("ovs-vsctl", shlex.split(args))
+
+
+def ovn_nbctl(args):
+    args_list = shlex.split(args)
+    database_option = "%s=%s" % ("--db", OVN_REMOTE)
+    args_list.insert(0, database_option)
+    return call_prog("ovn-nbctl", args_list)
+
+
+def cleanup():
+    if os.path.isfile(PLUGIN_FILE):
+        os.remove(PLUGIN_FILE)
+
+
+def ovn_init_overlay():
+    br_list = ovs_vsctl("list-br").split()
+    if OVN_BRIDGE not in br_list:
+        ovs_vsctl("-- --may-exist add-br %s "
+                  "-- br-set-external-id %s bridge-id %s "
+                  "-- set bridge %s other-config:disable-in-band=true "
+                  "-- set bridge %s fail-mode=secure"
+                  % (OVN_BRIDGE, OVN_BRIDGE, OVN_BRIDGE, OVN_BRIDGE,
+                     OVN_BRIDGE))
+
+    global OVN_REMOTE
+    OVN_REMOTE = ovs_vsctl("get Open_vSwitch . "
+                           "external_ids:ovn-remote").strip('"')
+    if not OVN_REMOTE:
+        sys.exit("OVN central database's ip address not set")
+
+    ovs_vsctl("set open_vswitch . external_ids:ovn-bridge=%s "
+              % OVN_BRIDGE)
+
+
+def prepare():
+    parser = argparse.ArgumentParser()
+
+    ovs.vlog.add_args(parser)
+    ovs.daemon.add_args(parser)
+    args = parser.parse_args()
+    ovs.vlog.handle_args(args)
+    ovs.daemon.handle_args(args)
+    ovn_init_overlay()
+
+    if not os.path.isdir(PLUGIN_DIR):
+        os.makedirs(PLUGIN_DIR)
+
+    ovs.daemon.daemonize()
+    try:
+        fo = open(PLUGIN_FILE, "w")
+        fo.write("tcp://0.0.0.0:5000")
+        fo.close()
+    except Exception as e:
+        ovs.util.ovs_fatal(0, "Failed to write to spec file (%s)" % str(e),
+                           vlog)
+
+    atexit.register(cleanup)
+
+
+@app.route('/Plugin.Activate', methods=['POST'])
+def plugin_activate():
+    return jsonify({"Implements": ["NetworkDriver"]})
+
+
+@app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
+def get_capability():
+    return jsonify({"Scope": "global"})
+
+
+@app.route('/NetworkDriver.DiscoverNew', methods=['POST'])
+def new_discovery():
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.DiscoverDelete', methods=['POST'])
+def delete_discovery():
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
+def create_network():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    # NetworkID will have the Docker-generated network uuid and it
+    # becomes 'name' in an OVN Logical_Switch record.
+    network = data.get("NetworkID", "")
+    if not network:
+        abort(400)
+
+    # Limit subnet handling to IPv4 until the IPv6 use case is clear.
+    ipv4_data = data.get("IPv4Data", "")
+    if not ipv4_data:
+        error = "create_network: No ipv4 subnet provided"
+        return jsonify({'Err': error})
+
+    subnet = ipv4_data[0].get("Pool", "")
+    if not subnet:
+        error = "create_network: no subnet in ipv4 data from libnetwork"
+        return jsonify({'Err': error})
+
+    gateway_ip = ipv4_data[0].get("Gateway", "").rsplit('/', 1)[0]
+    if not gateway_ip:
+        error = "create_network: no gateway in ipv4 data from libnetwork"
+        return jsonify({'Err': error})
+
+    try:
+        ovn_nbctl("lswitch-add %s -- set Logical_Switch %s "
+                  "external_ids:subnet=%s external_ids:gateway_ip=%s"
+                  % (network, network, subnet, gateway_ip))
+    except Exception as e:
+        error = "create_network: lswitch-add %s" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.DeleteNetwork', methods=['POST'])
+def delete_network():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    try:
+        ovn_nbctl("lswitch-del %s" % (nid))
+    except Exception as e:
+        error = "delete_network: lswitch-del %s" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.CreateEndpoint', methods=['POST'])
+def create_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    interface = data.get("Interface", "")
+    if not interface:
+        error = "create_endpoint: no interfaces structure supplied by " \
+                "libnetwork"
+        return jsonify({'Err': error})
+
+    ip_address_and_mask = interface.get("Address", "")
+    if not ip_address_and_mask:
+        error = "create_endpoint: ip address not provided by libnetwork"
+        return jsonify({'Err': error})
+
+    ip_address = ip_address_and_mask.rsplit('/', 1)[0]
+    mac_address_input = interface.get("MacAddress", "")
+    mac_address_output = ""
+
+    try:
+        ovn_nbctl("lport-add %s %s" % (nid, eid))
+    except Exception as e:
+        error = "create_endpoint: lport-add (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    if not mac_address_input:
+        mac_address = "02:%02x:%02x:%02x:%02x:%02x" % (random.randint(0, 255),
+                                                       random.randint(0, 255),
+                                                       random.randint(0, 255),
+                                                       random.randint(0, 255),
+                                                       random.randint(0, 255))
+    else:
+        mac_address = mac_address_input
+
+    try:
+        ovn_nbctl("lport-set-addresses %s \"%s %s\""
+                  % (eid, mac_address, ip_address))
+    except Exception as e:
+        error = "create_endpoint: lport-set-addresses (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    # Only return a mac address if one did not come in the request.
+    mac_address_output = ""
+    if not mac_address_input:
+        mac_address_output = mac_address
+
+    return jsonify({"Interface": {
+                                    "Address": "",
+                                    "AddressIPv6": "",
+                                    "MacAddress": mac_address_output
+                                    }})
+
+
+def get_logical_port_addresses(eid):
+    ret = ovn_nbctl("--if-exists get Logical_port %s addresses" % (eid))
+    if not ret:
+        error = "endpoint not found in OVN database"
+        return (None, None, error)
+    addresses = ast.literal_eval(ret)
+    if len(addresses) == 0:
+        error = "unexpected return while fetching addresses"
+        return (None, None, error)
+    (mac_address, ip_address) = addresses[0].split()
+    return (mac_address, ip_address, None)
+
+
+@app.route('/NetworkDriver.EndpointOperInfo', methods=['POST'])
+def show_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    try:
+        (mac_address, ip_address, error) = get_logical_port_addresses(eid)
+        if error:
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "show_endpoint: get Logical_port addresses. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    veth_outside = eid[0:15]
+    return jsonify({"Value": {"ip_address": ip_address,
+                              "mac_address": mac_address,
+                              "veth_outside": veth_outside
+                              }})
+
+
+@app.route('/NetworkDriver.DeleteEndpoint', methods=['POST'])
+def delete_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    try:
+        ovn_nbctl("lport-del %s" % eid)
+    except Exception as e:
+        error = "delete_endpoint: lport-del %s" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.Join', methods=['POST'])
+def network_join():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    sboxkey = data.get("SandboxKey", "")
+    if not sboxkey:
+        abort(400)
+
+    # sboxkey is of the form: /var/run/docker/netns/CONTAINER_ID
+    vm_id = sboxkey.rsplit('/')[-1]
+
+    try:
+        (mac_address, ip_address, error) = get_logical_port_addresses(eid)
+        if error:
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "network_join: %s" % (str(e))
+        return jsonify({'Err': error})
+
+    veth_outside = eid[0:15]
+    veth_inside = eid[0:13] + "_c"
+    command = "ip link add %s type veth peer name %s" \
+              % (veth_inside, veth_outside)
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_join: failed to create veth pair (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    command = "ip link set dev %s address %s" \
+              % (veth_inside, mac_address)
+
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_join: failed to set veth mac address (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    command = "ip link set %s up" % (veth_outside)
+
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_join: failed to up the veth interface (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ovs_vsctl("add-port %s %s" % (OVN_BRIDGE, veth_outside))
+        ovs_vsctl("set interface %s external_ids:attached-mac=%s "
+                  "external_ids:iface-id=%s "
+                  "external_ids:vm-id=%s "
+                  "external_ids:iface-status=%s "
+                  % (veth_outside, mac_address, eid, vm_id, "active"))
+    except Exception as e:
+        error = "network_join: failed to create a port (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({"InterfaceName": {
+                                        "SrcName": veth_inside,
+                                        "DstPrefix": "eth"
+                                     },
+                    "Gateway": "",
+                    "GatewayIPv6": ""})
+
+
+@app.route('/NetworkDriver.Leave', methods=['POST'])
+def network_leave():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    veth_outside = eid[0:15]
+    command = "ip link delete %s" % (veth_outside)
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_leave: failed to delete veth pair (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ovs_vsctl("--if-exists del-port %s" % (veth_outside))
+    except Exception as e:
+        error = "network_leave: failed to delete port (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+if __name__ == '__main__':
+    prepare()
+    app.run(host='0.0.0.0')
diff --git a/ovn/utilities/ovn-docker-underlay-driver b/ovn/utilities/ovn-docker-underlay-driver
new file mode 100755
index 0000000..46364da
--- /dev/null
+++ b/ovn/utilities/ovn-docker-underlay-driver
@@ -0,0 +1,675 @@ 
+#! /usr/bin/python
+# Copyright (C) 2015 Nicira, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at:
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import atexit
+import getpass
+import json
+import os
+import re
+import shlex
+import subprocess
+import sys
+import time
+import uuid
+
+import ovs.dirs
+import ovs.util
+import ovs.daemon
+import ovs.unixctl.server
+import ovs.vlog
+
+from neutronclient.v2_0 import client
+from flask import Flask, jsonify
+from flask import request, abort
+
+app = Flask(__name__)
+vlog = ovs.vlog.Vlog("ovn-docker-underlay-driver")
+
+AUTH_STRATEGY = ""
+AUTH_URL = ""
+ENDPOINT_URL = ""
+OVN_BRIDGE = ""
+PASSWORD = ""
+PLUGIN_DIR = "/etc/docker/plugins"
+PLUGIN_FILE = "/etc/docker/plugins/openvswitch.spec"
+TENANT_ID = ""
+USERNAME = ""
+VIF_ID = ""
+
+
+def call_popen(cmd):
+    child = subprocess.Popen(cmd, stdout=subprocess.PIPE)
+    output = child.communicate()
+    if child.returncode:
+        raise RuntimeError("Fatal error executing %s" % (cmd))
+    if len(output) == 0 or output[0] is None:
+        output = ""
+    else:
+        output = output[0].strip()
+    return output
+
+
+def call_prog(prog, args_list):
+    cmd = [prog, "--timeout=5", "-vconsole:off"] + args_list
+    return call_popen(cmd)
+
+
+def ovs_vsctl(args):
+    return call_prog("ovs-vsctl", shlex.split(args))
+
+
+def cleanup():
+    if os.path.isfile(PLUGIN_FILE):
+        os.remove(PLUGIN_FILE)
+
+
+def ovn_init_underlay(args):
+    global USERNAME, PASSWORD, TENANT_ID, AUTH_URL, AUTH_STRATEGY, VIF_ID
+    global OVN_BRIDGE
+
+    if not args.bridge:
+        sys.exit("OVS bridge name not provided")
+    OVN_BRIDGE = args.bridge
+
+    VIF_ID = os.environ.get('OS_VIF_ID', '')
+    if not VIF_ID:
+        sys.exit("env OS_VIF_ID not set")
+    USERNAME = os.environ.get('OS_USERNAME', '')
+    if not USERNAME:
+        sys.exit("env OS_USERNAME not set")
+    TENANT_ID = os.environ.get('OS_TENANT_ID', '')
+    if not TENANT_ID:
+        sys.exit("env OS_TENANT_ID not set")
+    AUTH_URL = os.environ.get('OS_AUTH_URL', '')
+    if not AUTH_URL:
+        sys.exit("env OS_AUTH_URL not set")
+    AUTH_STRATEGY = "keystone"
+
+    PASSWORD = os.environ.get('OS_PASSWORD', '')
+    if not PASSWORD:
+        PASSWORD = getpass.getpass()
+
+
+def prepare():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--bridge', help="The bridge to which container "
+                        "interfaces connect.")
+
+    ovs.vlog.add_args(parser)
+    ovs.daemon.add_args(parser)
+    args = parser.parse_args()
+    ovs.vlog.handle_args(args)
+    ovs.daemon.handle_args(args)
+    ovn_init_underlay(args)
+
+    if not os.path.isdir(PLUGIN_DIR):
+        os.makedirs(PLUGIN_DIR)
+
+    ovs.daemon.daemonize()
+    try:
+        fo = open(PLUGIN_FILE, "w")
+        fo.write("tcp://0.0.0.0:5000")
+        fo.close()
+    except Exception as e:
+        ovs.util.ovs_fatal(0, "Failed to write to spec file (%s)" % str(e),
+                           vlog)
+
+    atexit.register(cleanup)
+
+
+@app.route('/Plugin.Activate', methods=['POST'])
+def plugin_activate():
+    return jsonify({"Implements": ["NetworkDriver"]})
+
+
+@app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
+def get_capability():
+    return jsonify({"Scope": "global"})
+
+
+@app.route('/NetworkDriver.DiscoverNew', methods=['POST'])
+def new_discovery():
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.DiscoverDelete', methods=['POST'])
+def delete_discovery():
+    return jsonify({})
+
+
+def neutron_login():
+    try:
+        neutron = client.Client(username=USERNAME,
+                                password=PASSWORD,
+                                tenant_id=TENANT_ID,
+                                auth_url=AUTH_URL,
+                                endpoint_url=ENDPOINT_URL,
+                                auth_strategy=AUTH_STRATEGY)
+    except Exception as e:
+        raise RuntimeError("Failed to log in to Neutron (%s)" % str(e))
+    return neutron
+
+
+def get_networkuuid_by_name(neutron, name):
+    param = {'fields': 'id', 'name': name}
+    ret = neutron.list_networks(**param)
+    if len(ret['networks']) > 1:
+        raise RuntimeError("More than one network for the given name")
+    elif len(ret['networks']) == 0:
+        network = None
+    else:
+        network = ret['networks'][0]['id']
+    return network
+
+
+def get_subnetuuid_by_name(neutron, name):
+    param = {'fields': 'id', 'name': name}
+    ret = neutron.list_subnets(**param)
+    if len(ret['subnets']) > 1:
+        raise RuntimeError("More than one subnet for the given name")
+    elif len(ret['subnets']) == 0:
+        subnet = None
+    else:
+        subnet = ret['subnets'][0]['id']
+    return subnet
+
+
+@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
+def create_network():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    # NetworkID carries the Docker-generated network UUID; it becomes
+    # the 'name' of the corresponding Neutron network record.
+    network = data.get("NetworkID", "")
+    if not network:
+        abort(400)
+
+    # Limit subnet handling to IPv4 until the IPv6 use case is clear.
+    ipv4_data = data.get("IPv4Data", "")
+    if not ipv4_data:
+        error = "create_network: No ipv4 subnet provided"
+        return jsonify({'Err': error})
+
+    subnet = ipv4_data[0].get("Pool", "")
+    if not subnet:
+        error = "create_network: no subnet in ipv4 data from libnetwork"
+        return jsonify({'Err': error})
+
+    gateway_ip = ipv4_data[0].get("Gateway", "").rsplit('/', 1)[0]
+    if not gateway_ip:
+        error = "create_network: no gateway in ipv4 data from libnetwork"
+        return jsonify({'Err': error})
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "create_network: neutron login. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        if get_networkuuid_by_name(neutron, network):
+            error = "create_network: network has already been created"
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "create_network: neutron network uuid by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        body = {'network': {'name': network, 'admin_state_up': True}}
+        ret = neutron.create_network(body)
+        network_id = ret['network']['id']
+    except Exception as e:
+        error = "create_network: neutron net-create call. (%s)" % str(e)
+        return jsonify({'Err': error})
+
+    subnet_name = "docker-%s" % (network)
+
+    try:
+        body = {'subnet': {'network_id': network_id,
+                           'ip_version': 4,
+                           'cidr': subnet,
+                           'gateway_ip': gateway_ip,
+                           'name': subnet_name}}
+        neutron.create_subnet(body)
+    except Exception as e:
+        error = "create_network: neutron subnet-create call. (%s)" % str(e)
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.DeleteNetwork', methods=['POST'])
+def delete_network():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "delete_network: neutron login. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        network = get_networkuuid_by_name(neutron, nid)
+        if not network:
+            error = "delete_network: failed in network by name. (%s)" % (nid)
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "delete_network: network uuid by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        neutron.delete_network(network)
+    except Exception as e:
+        error = "delete_network: neutron net-delete. (%s)" % str(e)
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+def reserve_vlan():
+    reserved_vlan = 0
+    vlans = ovs_vsctl("--if-exists get Open_vSwitch . "
+                      "external_ids:vlans").strip('"')
+    if not vlans:
+        reserved_vlan = 1
+        ovs_vsctl("set Open_vSwitch . external_ids:vlans=%s" % reserved_vlan)
+        return reserved_vlan
+
+    vlan_set = str(vlans).split(',')
+
+    for vlan in range(1, 4095):
+        if str(vlan) not in vlan_set:
+            vlan_set.append(str(vlan))
+            reserved_vlan = vlan
+            vlans = ','.join(vlan_set)
+            ovs_vsctl("set Open_vSwitch . external_ids:vlans=%s" % vlans)
+            return reserved_vlan
+
+    raise RuntimeError("No more vlans available on this host")
+
+
+def unreserve_vlan(reserved_vlan):
+    vlans = ovs_vsctl("--if-exists get Open_vSwitch . "
+                      "external_ids:vlans").strip('"')
+    if not vlans:
+        return
+
+    vlan_set = str(vlans).split(',')
+    if str(reserved_vlan) not in vlan_set:
+        return
+
+    vlan_set.remove(str(reserved_vlan))
+    vlans = ','.join(vlan_set)
+    if vlans:
+        ovs_vsctl("set Open_vSwitch . external_ids:vlans=%s" % vlans)
+    else:
+        ovs_vsctl("remove Open_vSwitch . external_ids vlans")
+
+
+def create_port_underlay(neutron, network, eid, ip_address, mac_address):
+    reserved_vlan = reserve_vlan()
+    if mac_address:
+        body = {'port': {'network_id': network,
+                         'binding:profile': {'parent_name': VIF_ID,
+                                             'tag': int(reserved_vlan)},
+                         'mac_address': mac_address,
+                         'fixed_ips': [{'ip_address': ip_address}],
+                         'name': eid,
+                         'admin_state_up': True}}
+    else:
+        body = {'port': {'network_id': network,
+                         'binding:profile': {'parent_name': VIF_ID,
+                                             'tag': int(reserved_vlan)},
+                         'fixed_ips': [{'ip_address': ip_address}],
+                         'name': eid,
+                         'admin_state_up': True}}
+
+    try:
+        ret = neutron.create_port(body)
+        mac_address = ret['port']['mac_address']
+    except Exception as e:
+        unreserve_vlan(reserved_vlan)
+        raise RuntimeError("Failed in creation of neutron port (%s)." % str(e))
+
+    ovs_vsctl("set Open_vSwitch . external_ids:%s_vlan=%s"
+              % (eid, reserved_vlan))
+
+    return mac_address
+
+
+def get_endpointuuid_by_name(neutron, name):
+    param = {'fields': 'id', 'name': name}
+    ret = neutron.list_ports(**param)
+    if len(ret['ports']) > 1:
+        raise RuntimeError("More than one endpoint for the given name")
+    elif len(ret['ports']) == 0:
+        endpoint = None
+    else:
+        endpoint = ret['ports'][0]['id']
+    return endpoint
+
+
+@app.route('/NetworkDriver.CreateEndpoint', methods=['POST'])
+def create_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    interface = data.get("Interface", "")
+    if not interface:
+        error = "create_endpoint: no interfaces supplied by libnetwork"
+        return jsonify({'Err': error})
+
+    ip_address_and_mask = interface.get("Address", "")
+    if not ip_address_and_mask:
+        error = "create_endpoint: ip address not provided by libnetwork"
+        return jsonify({'Err': error})
+
+    ip_address = ip_address_and_mask.rsplit('/', 1)[0]
+    mac_address_input = interface.get("MacAddress", "")
+    mac_address_output = ""
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "create_endpoint: neutron login. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        endpoint = get_endpointuuid_by_name(neutron, eid)
+        if endpoint:
+            error = "create_endpoint: Endpoint has already been created"
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "create_endpoint: endpoint uuid by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        network = get_networkuuid_by_name(neutron, nid)
+        if not network:
+            error = "create_endpoint: neutron network by name. (%s)" % (nid)
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "create_endpoint: network uuid by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        mac_address = create_port_underlay(neutron, network, eid, ip_address,
+                                           mac_address_input)
+    except Exception as e:
+        error = "create_endpoint: neutron port-create (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    if not mac_address_input:
+        mac_address_output = mac_address
+
+    return jsonify({"Interface": {
+                                    "Address": "",
+                                    "AddressIPv6": "",
+                                    "MacAddress": mac_address_output
+                                    }})
+
+
+@app.route('/NetworkDriver.EndpointOperInfo', methods=['POST'])
+def show_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "show_endpoint: neutron login. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        endpoint = get_endpointuuid_by_name(neutron, eid)
+        if not endpoint:
+            error = "show_endpoint: Failed to get endpoint by name"
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "show_endpoint: get endpoint by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ret = neutron.show_port(endpoint)
+        mac_address = ret['port']['mac_address']
+        ip_address = ret['port']['fixed_ips'][0]['ip_address']
+    except Exception as e:
+        error = "show_endpoint: show port (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    veth_outside = eid[0:15]
+    return jsonify({"Value": {"ip_address": ip_address,
+                              "mac_address": mac_address,
+                              "veth_outside": veth_outside
+                              }})
+
+
+@app.route('/NetworkDriver.DeleteEndpoint', methods=['POST'])
+def delete_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "delete_endpoint: neutron login (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        endpoint = get_endpointuuid_by_name(neutron, eid)
+    except Exception as e:
+        error = "delete_endpoint: endpoint uuid by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    if not endpoint:
+        return jsonify({})
+
+    reserved_vlan = ovs_vsctl("--if-exists get Open_vSwitch . "
+                              "external_ids:%s_vlan" % eid).strip('"')
+    if reserved_vlan:
+        unreserve_vlan(reserved_vlan)
+        ovs_vsctl("remove Open_vSwitch . external_ids %s_vlan" % eid)
+
+    try:
+        neutron.delete_port(endpoint)
+    except Exception as e:
+        error = "delete_endpoint: neutron port-delete. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+@app.route('/NetworkDriver.Join', methods=['POST'])
+def network_join():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    sboxkey = data.get("SandboxKey", "")
+    if not sboxkey:
+        abort(400)
+
+    # sboxkey is of the form: /var/run/docker/netns/CONTAINER_ID
+    vm_id = sboxkey.rsplit('/')[-1]
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "network_join: neutron login. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    subnet_name = "docker-%s" % (nid)
+    try:
+        subnet = get_subnetuuid_by_name(neutron, subnet_name)
+        if not subnet:
+            error = "network_join: can't find subnet in neutron"
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "network_join: subnet uuid by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ret = neutron.show_subnet(subnet)
+        gateway_ip = ret['subnet']['gateway_ip']
+        if not gateway_ip:
+            error = "network_join: no gateway_ip for the subnet"
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "network_join: neutron show subnet. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        endpoint = get_endpointuuid_by_name(neutron, eid)
+        if not endpoint:
+            error = "network_join: Failed to get endpoint by name"
+            return jsonify({'Err': error})
+    except Exception as e:
+        error = "network_join: neutron endpoint by name. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ret = neutron.show_port(endpoint)
+        mac_address = ret['port']['mac_address']
+    except Exception as e:
+        error = "network_join: neutron show port. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    veth_outside = eid[0:15]
+    veth_inside = eid[0:13] + "_c"
+    command = "ip link add %s type veth peer name %s" \
+              % (veth_inside, veth_outside)
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_join: failed to create veth pair. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    command = "ip link set dev %s address %s" \
+              % (veth_inside, mac_address)
+
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_join: failed to set veth mac address. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    command = "ip link set %s up" % (veth_outside)
+
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_join: failed to up the veth iface. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        reserved_vlan = ovs_vsctl("--if-exists get Open_vSwitch . "
+                                  "external_ids:%s_vlan" % eid).strip('"')
+        if not reserved_vlan:
+            error = "network_join: no reserved vlan for this endpoint"
+            return jsonify({'Err': error})
+        ovs_vsctl("add-port %s %s tag=%s"
+                  % (OVN_BRIDGE, veth_outside, reserved_vlan))
+    except Exception as e:
+        error = "network_join: failed to create an OVS port. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({"InterfaceName": {
+                                        "SrcName": veth_inside,
+                                        "DstPrefix": "eth"
+                                     },
+                    "Gateway": gateway_ip,
+                    "GatewayIPv6": ""})
+
+
+@app.route('/NetworkDriver.Leave', methods=['POST'])
+def network_leave():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    veth_outside = eid[0:15]
+    command = "ip link delete %s" % (veth_outside)
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "network_leave: failed to delete veth pair. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ovs_vsctl("--if-exists del-port %s" % (veth_outside))
+    except Exception as e:
+        error = "network_leave: Failed to delete port (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+if __name__ == '__main__':
+    prepare()
+    app.run(host='0.0.0.0')
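
For reviewers who want to sanity-check the VLAN bookkeeping above without an OVS install: the driver keeps a comma-separated list of in-use tags in the Open_vSwitch table's external_ids. A minimal standalone sketch of that same reserve/release logic, with hypothetical helper names and an in-memory string standing in for the ovs-vsctl round trips:

```python
# Sketch only: mirrors reserve_vlan()/unreserve_vlan() from the underlay
# driver, but operates on a plain string instead of external_ids:vlans.

def reserve_vlan(vlans):
    """Pick the lowest free 802.1Q tag; return (new_vlans_string, tag)."""
    vlan_set = vlans.split(',') if vlans else []
    for vlan in range(1, 4095):          # 1-4094 are the valid tags
        if str(vlan) not in vlan_set:
            vlan_set.append(str(vlan))
            return ','.join(vlan_set), vlan
    raise RuntimeError("No more vlans available on this host")


def unreserve_vlan(vlans, reserved_vlan):
    """Return the vlans string with reserved_vlan removed, if present."""
    vlan_set = vlans.split(',') if vlans else []
    if str(reserved_vlan) in vlan_set:
        vlan_set.remove(str(reserved_vlan))
    return ','.join(vlan_set)
```

Freed tags are reused: after releasing tag 1, the next reservation hands it out again, which is why the driver records the per-endpoint tag in external_ids before releasing it on DeleteEndpoint.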
diff --git a/rhel/openvswitch-fedora.spec.in b/rhel/openvswitch-fedora.spec.in
index 066086c..cb76500 100644
--- a/rhel/openvswitch-fedora.spec.in
+++ b/rhel/openvswitch-fedora.spec.in
@@ -346,6 +346,8 @@  rm -rf $RPM_BUILD_ROOT
 %files ovn
 %{_bindir}/ovn-controller
 %{_bindir}/ovn-controller-vtep
+%{_bindir}/ovn-docker-overlay-driver
+%{_bindir}/ovn-docker-underlay-driver
 %{_bindir}/ovn-nbctl
 %{_bindir}/ovn-northd
 %{_bindir}/ovn-sbctl
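
For context on what the drivers' CreateNetwork handlers receive from libnetwork (see the remote driver API linked in the commit message): the payload carries an IPv4Data list whose first entry holds the pool CIDR and a gateway with a prefix length that the driver strips via rsplit('/', 1). A hedged sketch of that parsing step, with a hypothetical helper name, not code from the patch:

```python
import json

def parse_create_network(request_data):
    """Extract (NetworkID, subnet CIDR, bare gateway IP) from a
    libnetwork /NetworkDriver.CreateNetwork request body."""
    data = json.loads(request_data)
    ipv4 = data["IPv4Data"][0]            # driver uses the first pool only
    subnet = ipv4["Pool"]                 # e.g. "192.168.1.0/24"
    gateway_ip = ipv4["Gateway"].rsplit('/', 1)[0]  # drop prefix length
    return data["NetworkID"], subnet, gateway_ip
```

This is why the Neutron subnet-create call above can pass gateway_ip directly: the prefix length has already been removed before the body is built.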