From patchwork Sat Oct 8 16:30:27 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Finucane <stephen@that.guru>
X-Patchwork-Id: 679921
X-Patchwork-Delegate: rbryant@redhat.com
From: Stephen Finucane <stephen@that.guru>
To: dev@openvswitch.org
Date: Sat, 8 Oct 2016 17:30:27 +0100
Message-Id: <1475944231-25192-6-git-send-email-stephen@that.guru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1475944231-25192-1-git-send-email-stephen@that.guru>
References: <1475944231-25192-1-git-send-email-stephen@that.guru>
Subject: [ovs-dev] [PATCH 5/9] doc: Convert INSTALL.Docker to rST

Signed-off-by: Stephen Finucane <stephen@that.guru>
---
 INSTALL.Docker.md        | 298 ---------------------------------------
 INSTALL.Docker.rst       | 320 +++++++++++++++++++++++++++++++++++++++
 Makefile.am              |   2 +-
 README.md                |   4 +-
 tutorial/OVN-Tutorial.md |   2 +-
 5 files changed, 324 insertions(+), 302 deletions(-)
 delete mode
100644 INSTALL.Docker.md
 create mode 100644 INSTALL.Docker.rst

diff --git a/INSTALL.Docker.md b/INSTALL.Docker.md
deleted file mode 100644
index bb5e711..0000000
--- a/INSTALL.Docker.md
+++ /dev/null
@@ -1,298 +0,0 @@
-How to Use Open Virtual Networking With Docker
-==============================================
-
-This document describes how to use Open Virtual Networking with Docker
-1.9.0 or later. This document assumes that you have installed Open
-vSwitch by following [INSTALL.rst] or by using the distribution packages
-such as .deb or.rpm. Consult www.docker.com for instructions on how to
-install Docker. Docker 1.9.0 comes with support for multi-host networking.
-
-Setup
-=====
-
-For multi-host networking with OVN and Docker, Docker has to be started
-with a destributed key-value store. For e.g., if you decide to use consul
-as your distributed key-value store, and your host IP address is $HOST_IP,
-start your Docker daemon with:
-
-```
-docker daemon --cluster-store=consul://127.0.0.1:8500 \
---cluster-advertise=$HOST_IP:0
-```
-
-OVN provides network virtualization to containers. OVN's integration with
-Docker currently works in two modes - the "underlay" mode or the "overlay"
-mode.
-
-In the "underlay" mode, OVN requires a OpenStack setup to provide container
-networking. In this mode, one can create logical networks and can have
-containers running inside VMs, standalone VMs (without having any containers
-running inside them) and physical machines connected to the same logical
-network. This is a multi-tenant, multi-host solution.
-
-In the "overlay" mode, OVN can create a logical network amongst containers
-running on multiple hosts. This is a single-tenant (extendable to
-multi-tenants depending on the security characteristics of the workloads),
-multi-host solution. In this mode, you do not need a pre-created OpenStack
-setup.
-
-For both the modes to work, a user has to install and start Open vSwitch in
-each VM/host that he plans to run his containers.
-
-
-The "overlay" mode
-==================
-
-OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.
-
-* Start the central components.
-
-OVN architecture has a central component which stores your networking intent
-in a database. On one of your machines, with an IP Address of $CENTRAL_IP,
-where you have installed and started Open vSwitch, you will need to start some
-central components.
-
-Start ovn-northd daemon. This daemon translates networking intent from Docker
-stored in the OVN_Northbound database to logical flows in OVN_Southbound
-database.
-
-```
-/usr/share/openvswitch/scripts/ovn-ctl start_northd
-```
-
-* One time setup.
-
-On each host, where you plan to spawn your containers, you will need to
-run the following command once. (You need to run it again if your OVS database
-gets cleared. It is harmless to run it again in any case.)
-
-$LOCAL_IP in the below command is the IP address via which other hosts
-can reach this host. This acts as your local tunnel endpoint.
-
-$ENCAP_TYPE is the type of tunnel that you would like to use for overlay
-networking. The options are "geneve" or "stt". (Please note that your
-kernel should have support for your chosen $ENCAP_TYPE. Both geneve
-and stt are part of the Open vSwitch kernel module that is compiled from this
-repo. If you use the Open vSwitch kernel module from upstream Linux,
-you will need a minumum kernel version of 3.18 for geneve. There is no stt
-support in upstream Linux. You can verify whether you have the support in your
-kernel by doing a "lsmod | grep $ENCAP_TYPE".)
-
-```
-ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6642" \
-external_ids:ovn-nb="tcp:$CENTRAL_IP:6641" external_ids:ovn-encap-ip=$LOCAL_IP external_ids:ovn-encap-type="$ENCAP_TYPE"
-```
-
-Each Open vSwitch instance in an OVN deployment needs a unique, persistent
-identifier, called the "system-id". If you install OVS from distribution
-packaging for Open vSwitch (e.g. .deb or .rpm packages), or if you use the
-ovs-ctl utility included with Open vSwitch, it automatically configures a
-system-id. If you start Open vSwitch manually, you should set one up yourself,
-e.g.:
-
-```
-id_file=/etc/openvswitch/system-id.conf
-test -e $id_file || uuidgen > $id_file
-ovs-vsctl set Open_vSwitch . external_ids:system-id=$(cat $id_file)
-```
-
-And finally, start the ovn-controller. (You need to run the below command
-on every boot)
-
-```
-/usr/share/openvswitch/scripts/ovn-ctl start_controller
-```
-
-* Start the Open vSwitch network driver.
-
-By default Docker uses Linux bridge for networking. But it has support
-for external drivers. To use Open vSwitch instead of the Linux bridge,
-you will need to start the Open vSwitch driver.
-
-The Open vSwitch driver uses the Python's flask module to listen to
-Docker's networking api calls. So, if your host does not have Python's
-flask module, install it with:
-
-```
-easy_install -U pip
-pip install Flask
-```
-
-Start the Open vSwitch driver on every host where you plan to create your
-containers. (Please read a note on $OVS_PYTHON_LIBS_PATH that is used below
-at the end of this document.)
-
-```
-PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-overlay-driver --detach
-```
-
-Docker has inbuilt primitives that closely match OVN's logical switches
-and logical port concepts. Please consult Docker's documentation for
-all the possible commands. Here are some examples.
-
-* Create your logical switch.
-
-To create a logical switch with name 'foo', on subnet '192.168.1.0/24' run:
-
-```
-NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
-```
-
-* List your logical switches.
-
-```
-docker network ls
-```
-
-You can also look at this logical switch in OVN's northbound database by
-running the following command.
-
-```
-ovn-nbctl --db=tcp:$CENTRAL_IP:6640 ls-list
-```
-
-* Docker creates your logical port and attaches it to the logical network
-in a single step.
-
-For e.g., to attach a logical port to network 'foo' inside cotainer busybox,
-run:
-
-```
-docker run -itd --net=foo --name=busybox busybox
-```
-
-* List all your logical ports.
-
-Docker currently does not have a CLI command to list all your logical ports.
-But you can look at them in the OVN database, by running:
-
-```
-ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lsp-list $NID
-```
-
-* You can also create a logical port and attach it to a running container.
-
-```
-docker network create -d openvswitch --subnet=192.168.2.0/24 bar
-docker network connect bar busybox
-```
-
-You can delete your logical port and detach it from a running container by
-running:
-
-```
-docker network disconnect bar busybox
-```
-
-* You can delete your logical switch by running:
-
-```
-docker network rm bar
-```
-
-
-The "underlay" mode
-===================
-
-This mode requires that you have a OpenStack setup pre-installed with OVN
-providing the underlay networking.
-
-* One time setup.
-
-A OpenStack tenant creates a VM with a single network interface (or multiple)
-that belongs to management logical networks. The tenant needs to fetch the
-port-id associated with the interface via which he plans to send the container
-traffic inside the spawned VM. This can be obtained by running the
-below command to fetch the 'id' associated with the VM.
-
-```
-nova list
-```
-
-and then by running:
-
-```
-neutron port-list --device_id=$id
-```
-
-Inside the VM, download the OpenStack RC file that contains the tenant
-information (henceforth referred to as 'openrc.sh'). Edit the file and add the
-previously obtained port-id information to the file by appending the following
-line: export OS_VIF_ID=$port_id. After this edit, the file will look something
-like:
-
-```
-#!/bin/bash
-export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
-export OS_TENANT_ID=fab106b215d943c3bad519492278443d
-export OS_TENANT_NAME="demo"
-export OS_USERNAME="demo"
-export OS_VIF_ID=e798c371-85f4-4f2d-ad65-d09dd1d3c1c9
-```
-
-* Create the Open vSwitch bridge.
-
-If your VM has one ethernet interface (e.g.: 'eth0'), you will need to add
-that device as a port to an Open vSwitch bridge 'breth0' and move its IP
-address and route related information to that bridge. (If it has multiple
-network interfaces, you will need to create and attach an Open vSwitch bridge
-for the interface via which you plan to send your container traffic.)
-
-If you use DHCP to obtain an IP address, then you should kill the DHCP client
-that was listening on the physical Ethernet interface (e.g. eth0) and start
-one listening on the Open vSwitch bridge (e.g. breth0).
-
-Depending on your VM, you can make the above step persistent across reboots.
-For e.g.:, if your VM is Debian/Ubuntu, you can read
-[openvswitch-switch.README.Debian]. If your VM is RHEL based, you can read
-[README.RHEL]
-
-
-* Start the Open vSwitch network driver.
-
-The Open vSwitch driver uses the Python's flask module to listen to
-Docker's networking api calls. The driver also uses OpenStack's
-python-neutronclient libraries. So, if your host does not have Python's
-flask module or python-neutronclient install them with:
-
-```
-easy_install -U pip
-pip install python-neutronclient
-pip install Flask
-```
-
-Source the openrc file. e.g.:
-````
-. ./openrc.sh
-```
-
-Start the network driver and provide your OpenStack tenant password
-when prompted. (Please read a note on $OVS_PYTHON_LIBS_PATH that is used below
-at the end of this document.)
-
-```
-PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-underlay-driver --bridge breth0 \
---detach
-```
-
-From here-on you can use the same Docker commands as described in the
-section 'The "overlay" mode'.
-
-Please read 'man ovn-architecture' to understand OVN's architecture in
-detail.
-
-Note on $OVS_PYTHON_LIBS_PATH
-=============================
-
-$OVS_PYTHON_LIBS_PATH should point to the directory where Open vSwitch
-python modules are installed. If you installed Open vSwitch python
-modules via the debian package of 'python-openvswitch' or via pip by
-running 'pip install ovs', you do not need to specify the path.
-If you installed it by following the instructions in INSTALL.rst, you
-should specify the path. The path in that case depends on the options passed
-to ./configure. (It is usually either '/usr/share/openvswitch/python' or
-'/usr/local/share/openvswitch/python'.)
-
-[INSTALL.rst]: INSTALL.rst
-[openvswitch-switch.README.Debian]: debian/openvswitch-switch.README.Debian
-[README.RHEL]: rhel/README.RHEL
diff --git a/INSTALL.Docker.rst b/INSTALL.Docker.rst
new file mode 100644
index 0000000..35dcce2
--- /dev/null
+++ b/INSTALL.Docker.rst
@@ -0,0 +1,320 @@
+..
+      Licensed under the Apache License, Version 2.0 (the "License"); you may
+      not use this file except in compliance with the License. You may obtain
+      a copy of the License at
+
+          http://www.apache.org/licenses/LICENSE-2.0
+
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+      License for the specific language governing permissions and limitations
+      under the License.
+
+      Convention for heading levels in Open vSwitch documentation:
+
+      =======  Heading 0 (reserved for the title in a document)
+      -------  Heading 1
+      ~~~~~~~  Heading 2
+      +++++++  Heading 3
+      '''''''  Heading 4
+
+      Avoid deeper levels because they do not render well.
+
+===================================
+Open Virtual Networking With Docker
+===================================
+
+This document describes how to use Open Virtual Networking with Docker 1.9.0
+or later.
+
+.. important::
+
+   Requires Docker version 1.9.0 or later. Only Docker 1.9.0 and later come
+   with support for multi-host networking. Consult www.docker.com for
+   instructions on how to install Docker.
+
+.. note::
+
+   You must build and install Open vSwitch before proceeding with the below
+   guide. Refer to the `installation guide <INSTALL.rst>`__ for more
+   information.
+
+Setup
+-----
+
+For multi-host networking with OVN and Docker, Docker has to be started with a
+distributed key-value store. For example, if you decide to use consul as your
+distributed key-value store and your host IP address is ``$HOST_IP``, start
+your Docker daemon with::
+
+    $ docker daemon --cluster-store=consul://127.0.0.1:8500 \
+        --cluster-advertise=$HOST_IP:0
+
+OVN provides network virtualization to containers. OVN's integration with
+Docker currently works in two modes - the "underlay" mode or the "overlay"
+mode.
+
+In the "underlay" mode, OVN requires an OpenStack setup to provide container
+networking. In this mode, one can create logical networks and can have
+containers running inside VMs, standalone VMs (without having any containers
+running inside them) and physical machines connected to the same logical
+network. This is a multi-tenant, multi-host solution.
+
+In the "overlay" mode, OVN can create a logical network amongst containers
+running on multiple hosts. This is a single-tenant (extendable to multi-tenants
+depending on the security characteristics of the workloads), multi-host
+solution.
+In this mode, you do not need a pre-created OpenStack setup.
+
+For both the modes to work, a user has to install and start Open vSwitch in
+each VM/host that they plan to run their containers on.
+
+.. _docker-overlay:
+
+The "overlay" mode
+------------------
+
+.. note::
+
+   OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.
+
+1. Start the central components.
+
+   The OVN architecture has a central component which stores your networking
+   intent in a database. On one of your machines, with an IP address of
+   ``$CENTRAL_IP``, where you have installed and started Open vSwitch, you
+   will need to start some central components.
+
+   Start the ovn-northd daemon. This daemon translates networking intent from
+   Docker stored in the ``OVN_Northbound`` database to logical flows in the
+   ``OVN_Southbound`` database. For example::
+
+      $ /usr/share/openvswitch/scripts/ovn-ctl start_northd
+
+2. One time setup
+
+   On each host, where you plan to spawn your containers, you will need to run
+   the below command once. You may need to run it again if your OVS database
+   gets cleared. It is harmless to run it again in any case::
+
+      $ ovs-vsctl set Open_vSwitch . \
+          external_ids:ovn-remote="tcp:$CENTRAL_IP:6642" \
+          external_ids:ovn-nb="tcp:$CENTRAL_IP:6641" \
+          external_ids:ovn-encap-ip=$LOCAL_IP \
+          external_ids:ovn-encap-type="$ENCAP_TYPE"
+
+   where:
+
+   ``$LOCAL_IP``
+     is the IP address via which other hosts can reach this host. This acts as
+     your local tunnel endpoint.
+
+   ``$ENCAP_TYPE``
+     is the type of tunnel that you would like to use for overlay networking.
+     The options are ``geneve`` or ``stt``. Your kernel must have support for
+     your chosen ``$ENCAP_TYPE``. Both ``geneve`` and ``stt`` are part of the
+     Open vSwitch kernel module that is compiled from this repo. If you use
+     the Open vSwitch kernel module from upstream Linux, you will need a
+     minimum kernel version of 3.18 for ``geneve``. There is no ``stt``
+     support in upstream Linux.
+     You can verify whether you have the support in your kernel as
+     follows::
+
+        $ lsmod | grep $ENCAP_TYPE
+
+   In addition, each Open vSwitch instance in an OVN deployment needs a
+   unique, persistent identifier, called the ``system-id``. If you install
+   OVS from distribution packaging for Open vSwitch (e.g. .deb or .rpm
+   packages), or if you use the ovs-ctl utility included with Open vSwitch,
+   it automatically configures a system-id. If you start Open vSwitch
+   manually, you should set one up yourself. For example::
+
+      $ id_file=/etc/openvswitch/system-id.conf
+      $ test -e $id_file || uuidgen > $id_file
+      $ ovs-vsctl set Open_vSwitch . external_ids:system-id=$(cat $id_file)
+
+3. Start the ``ovn-controller``.
+
+   You need to run the below command on every boot::
+
+      $ /usr/share/openvswitch/scripts/ovn-ctl start_controller
+
+4. Start the Open vSwitch network driver.
+
+   By default Docker uses Linux bridge for networking. But it has support for
+   external drivers. To use Open vSwitch instead of the Linux bridge, you
+   will need to start the Open vSwitch driver.
+
+   The Open vSwitch driver uses Python's flask module to listen to Docker's
+   networking API calls. So, if your host does not have Python's flask
+   module, install it::
+
+      $ sudo pip install Flask
+
+   Start the Open vSwitch driver on every host where you plan to create your
+   containers. Refer to the note on ``$OVS_PYTHON_LIBS_PATH`` below::
+
+      $ PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-overlay-driver --detach
+
+   .. note::
+
+      The ``$OVS_PYTHON_LIBS_PATH`` variable should point to the directory
+      where Open vSwitch Python modules are installed. If you installed Open
+      vSwitch Python modules via the Debian package of ``python-openvswitch``
+      or via pip by running ``pip install ovs``, you do not need to specify
+      the path. If you installed it by following the instructions in the
+      `installation guide <INSTALL.rst>`__, then you should specify the path.
+      In this case, the path depends on the options passed to
+      ``./configure``. It is usually either
+      ``/usr/share/openvswitch/python`` or
+      ``/usr/local/share/openvswitch/python``.
+
+Docker has inbuilt primitives that closely match OVN's logical switches and
+logical port concepts. Consult Docker's documentation for all the possible
+commands. Here are some examples.
+
+Create a logical switch
+~~~~~~~~~~~~~~~~~~~~~~~
+
+To create a logical switch with name ``foo``, on subnet ``192.168.1.0/24``,
+run::
+
+   $ NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
+
+List all logical switches
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+   $ docker network ls
+
+You can also look at this logical switch in OVN's northbound database by
+running the following command::
+
+   $ ovn-nbctl --db=tcp:$CENTRAL_IP:6640 ls-list
+
+Delete a logical switch
+~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+   $ docker network rm bar
+
+Create a logical port
+~~~~~~~~~~~~~~~~~~~~~
+
+Docker creates your logical port and attaches it to the logical network in a
+single step. For example, to attach a logical port to network ``foo`` inside
+container busybox, run::
+
+   $ docker run -itd --net=foo --name=busybox busybox
+
+List all logical ports
+~~~~~~~~~~~~~~~~~~~~~~
+
+Docker does not currently have a CLI command to list all logical ports but
+you can look at them in the OVN database by running::
+
+   $ ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lsp-list $NID
+
+Create and attach a logical port to a running container
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+   $ docker network create -d openvswitch --subnet=192.168.2.0/24 bar
+   $ docker network connect bar busybox
+
+Detach and delete a logical port from a running container
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can delete your logical port and detach it from a running container by
+running::
+
+   $ docker network disconnect bar busybox
+
+.. _docker-underlay:
+
+The "underlay" mode
+-------------------
+
+.. note::
+
+   This mode requires that you have an OpenStack setup pre-installed with OVN
+   providing the underlay networking.
+
+1. One time setup
+
+   An OpenStack tenant creates a VM with a single network interface (or
+   multiple) that belongs to management logical networks. The tenant needs to
+   fetch the port-id associated with the interface via which they plan to
+   send the container traffic inside the spawned VM. This can be obtained by
+   running the below command to fetch the 'id' associated with the VM::
+
+      $ nova list
+
+   and then by running::
+
+      $ neutron port-list --device_id=$id
+
+   Inside the VM, download the OpenStack RC file that contains the tenant
+   information (henceforth referred to as ``openrc.sh``). Edit the file and
+   add the previously obtained port-id information to the file by appending
+   the following line::
+
+      export OS_VIF_ID=$port_id
+
+   After this edit, the file will look something like::
+
+      #!/bin/bash
+      export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
+      export OS_TENANT_ID=fab106b215d943c3bad519492278443d
+      export OS_TENANT_NAME="demo"
+      export OS_USERNAME="demo"
+      export OS_VIF_ID=e798c371-85f4-4f2d-ad65-d09dd1d3c1c9
+
+2. Create the Open vSwitch bridge
+
+   If your VM has one ethernet interface (e.g. ``eth0``), you will need to
+   add that device as a port to an Open vSwitch bridge ``breth0`` and move
+   its IP address and route related information to that bridge. (If it has
+   multiple network interfaces, you will need to create and attach an Open
+   vSwitch bridge for the interface via which you plan to send your container
+   traffic.)
+
+   If you use DHCP to obtain an IP address, then you should kill the DHCP
+   client that was listening on the physical Ethernet interface (e.g. eth0)
+   and start one listening on the Open vSwitch bridge (e.g. breth0).
+
+   Depending on your VM, you can make the above step persistent across
+   reboots.
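+   The bridge move described above can be sketched as follows. This is only
+   an illustration with placeholder values: it assumes the interface is
+   ``eth0``, the bridge is ``breth0``, and that the VM uses the static
+   address ``192.168.0.10/24`` with gateway ``192.168.0.1``. Adapt every name
+   and address to your VM::
+
+      # create the bridge and attach the physical interface to it
+      $ ovs-vsctl add-br breth0
+      $ ovs-vsctl add-port breth0 eth0
+      # move the IP address and default route from eth0 to breth0
+      $ ip addr flush dev eth0
+      $ ip addr add 192.168.0.10/24 dev breth0
+      $ ip link set breth0 up
+      $ ip route add default via 192.168.0.1 dev breth0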
+   For example, if your VM is Debian/Ubuntu-based, read
+   `openvswitch-switch.README.Debian` found in the `debian` folder. If your
+   VM is RHEL-based, refer to the
+   `RHEL install guide <../../INSTALL.RHEL.md>`__.
+
+3. Start the Open vSwitch network driver
+
+   The Open vSwitch driver uses Python's flask module to listen to Docker's
+   networking API calls. The driver also uses OpenStack's
+   ``python-neutronclient`` libraries. If your host does not have Python's
+   ``flask`` module or ``python-neutronclient``, you must install them. For
+   example::
+
+      $ pip install python-neutronclient
+      $ pip install Flask
+
+   Once installed, source the ``openrc`` file::
+
+      $ . ./openrc.sh
+
+   Start the network driver and provide your OpenStack tenant password when
+   prompted::
+
+      $ PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-underlay-driver \
+          --bridge breth0 --detach
+
+From here on you can use the same Docker commands as described in
+`docker-overlay`_.
+
+Refer to the ovn-architecture man page (``man ovn-architecture``) to
+understand OVN's architecture in detail.
diff --git a/Makefile.am b/Makefile.am
index 4cd5ece..2bee565 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -72,7 +72,7 @@ docs = \
 	FAQ.md \
 	INSTALL.rst \
 	INSTALL.Debian.rst \
-	INSTALL.Docker.md \
+	INSTALL.Docker.rst \
 	INSTALL.DPDK-ADVANCED.md \
 	INSTALL.DPDK.rst \
 	INSTALL.Fedora.md \
diff --git a/README.md b/README.md
index ff23ee9..ab9b1ff 100644
--- a/README.md
+++ b/README.md
@@ -86,7 +86,7 @@ platform, please see one of these files:
 
 To use Open vSwitch...
-- ...with Docker on Linux, read [INSTALL.Docker.md]
+- ...with Docker on Linux, read [INSTALL.Docker.rst]
 
 - ...with KVM on Linux, read [INSTALL.rst], read [INSTALL.KVM.md]
 
@@ -117,7 +117,7 @@ bugs@openvswitch.org
 [INSTALL.rst]:INSTALL.rst
 [INSTALL.Debian.rst]:INSTALL.Debian.rst
-[INSTALL.Docker.md]:INSTALL.Docker.md
+[INSTALL.Docker.rst]:INSTALL.Docker.rst
 [INSTALL.DPDK.rst]:INSTALL.DPDK.rst
 [INSTALL.Fedora.md]:INSTALL.Fedora.md
 [INSTALL.KVM.md]:INSTALL.KVM.md
diff --git a/tutorial/OVN-Tutorial.md b/tutorial/OVN-Tutorial.md
index 5ae8ed5..2f094f7 100644
--- a/tutorial/OVN-Tutorial.md
+++ b/tutorial/OVN-Tutorial.md
@@ -1033,4 +1033,4 @@ and `lport2`.
 [env8packet1]:https://github.com/nickcooper-zhangtonghao/ovs/blob/master/tutorial/ovn/env8/packet1.sh
 [env8packet2]:https://github.com/nickcooper-zhangtonghao/ovs/blob/master/tutorial/ovn/env8/packet2.sh
 [openstack-ovn-acl-blog]:http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/
-[openvswitch-docker]:http://openvswitch.org/support/dist-docs/INSTALL.Docker.md.txt
+[openvswitch-docker]:http://openvswitch.org/support/dist-docs/INSTALL.Docker.rst.txt