From ed4be23d294adbe4bc604c0b08def77fd8832d0c Mon Sep 17 00:00:00 2001
From: Maciek Konstantynowicz
Date: Tue, 5 Feb 2019 21:22:17 +0000
Subject: [PATCH] Report: methodology section, added nfv service density.

Change-Id: Ia5f3a8befd5a9cc6c4b644ddd785e21f11b1c156
Signed-off-by: Maciek Konstantynowicz
(cherry picked from commit 1eb5821ae2975d69d1c655049db02348bb79a5ca)
---
 docs/report/introduction/methodology.rst |   1 +
 .../methodology_nfv_service_density.rst   | 106 +++++++++++++++++++++
 2 files changed, 107 insertions(+)
 create mode 100644 docs/report/introduction/methodology_nfv_service_density.rst

diff --git a/docs/report/introduction/methodology.rst b/docs/report/introduction/methodology.rst
index 1f9fcfe7fb..da8f859d8d 100644
--- a/docs/report/introduction/methodology.rst
+++ b/docs/report/introduction/methodology.rst
@@ -18,6 +18,7 @@ Test Methodology
     methodology_kvm_vms_vhost_user
     methodology_lxc_drc_container_memif
     methodology_k8s_container_memif
+    methodology_nfv_service_density
     methodology_vpp_device_functional
     methodology_ipsec_on_intel_qat
     methodology_trex_traffic_generator
diff --git a/docs/report/introduction/methodology_nfv_service_density.rst b/docs/report/introduction/methodology_nfv_service_density.rst
new file mode 100644
index 0000000000..2946ba2777
--- /dev/null
+++ b/docs/report/introduction/methodology_nfv_service_density.rst
+NFV Service Density
+-------------------
+
+Network Function Virtualization (NFV) service density tests focus on
+measuring total per-server throughput at varied NFV service “packing”
+densities, with the vswitch providing the host dataplane. The goal is to
+compare and contrast the performance of a shared vswitch across
+different network topologies and virtualization technologies, and to
+quantify their impact on vswitch performance and efficiency in a range
+of NFV service configurations.
+
+Each NFV service instance consists of a set of Network Functions (NFs),
+running in VMs (VNFs) or in Containers (CNFs), that are connected into a
+virtual network topology using the VPP vswitch running in Linux
+user-mode. Multiple service instances share the vswitch, which in turn
+provides per-service-chain forwarding context(s). To provide as complete
+a picture as possible, each network topology and service configuration
+is tested in different service density setups by varying two parameters:
+
+- Number of service instances (e.g. 1, 2, 4 .. 10).
+- Number of NFs per service instance (e.g. 1, 2, 4 .. 10).
+
+The initial implementation of NFV service density tests in
+|csit-release| uses two NF applications:
+
+- VNF: DPDK L3fwd running in a KVM VM, configured with /8 IPv4 prefix
+  routing. L3fwd was chosen as a lightweight, fast IPv4 VNF application;
+  this follows the CSIT approach of using DPDK sample applications in
+  VMs for performance testing.
+- CNF: VPP running in a Docker Container, configured with /24 IPv4
+  prefix routing. VPP was chosen as a fast IPv4 NF application that
+  supports the required memif interface (L3fwd does not). This is
+  consistent with all other Container tests in CSIT, which use VPP.
+
+Tests are designed such that in all tested cases the VPP vswitch is the
+most stressed application, as for each flow the vswitch processes each
+packet multiple times, whereas VNFs and CNFs process each packet only
+once. To that end, all VNFs and CNFs are allocated enough resources so
+that they do not become a bottleneck.
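+
+As an illustration of the CNF data-plane setup described above, a single
+chain CNF routing between two memif interfaces could be configured along
+the following lines in VPP CLI. This is a minimal sketch only; the
+interface ids, socket filenames and IPv4 addresses are illustrative and
+do not reproduce the exact CSIT test configuration::
+
+  create memif socket id 1 filename /run/vpp/memif-cnf1-1.sock
+  create memif socket id 2 filename /run/vpp/memif-cnf1-2.sock
+  create interface memif id 0 socket-id 1 slave
+  create interface memif id 0 socket-id 2 slave
+  set interface state memif1/0 up
+  set interface state memif2/0 up
+  set interface ip address memif1/0 10.10.1.2/24
+  set interface ip address memif2/0 10.10.2.2/24
+  ip route add 10.0.0.0/24 via 10.10.1.1 memif1/0
+  ip route add 20.0.0.0/24 via 10.10.2.1 memif2/0
+
+The two memif interfaces connect the CNF into the service topology,
+while the static /24 routes forward test traffic towards the next hop on
+each side of the service instance.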
+
+Service Configurations
+~~~~~~~~~~~~~~~~~~~~~~
+
+The following NFV network topologies and configurations are tested:
+
+- VNF Service Chains (VSC) with L2 vswitch
+
+  - *Network Topology*: Sets of VNFs dual-homed to the VPP vswitch over
+    virtio-vhost links. Each set belongs to a separate service instance.
+  - *Network Configuration*: VPP L2 bridge-domain contexts form logical
+    service chains of VNF sets and connect each chain to physical
+    interfaces.
+
+- CNF Service Chains (CSC) with L2 vswitch
+
+  - *Network Topology*: Sets of CNFs dual-homed to the VPP vswitch over
+    memif links. Each set belongs to a separate service instance.
+  - *Network Configuration*: VPP L2 bridge-domain contexts form logical
+    service chains of CNF sets and connect each chain to physical
+    interfaces.
+
+- CNF Service Pipelines (CSP) with L2 vswitch
+
+  - *Network Topology*: Sets of CNFs connected into pipelines over a
+    series of memif links, with edge CNFs single-homed to the VPP
+    vswitch over memif links. Each set belongs to a separate service
+    instance.
+  - *Network Configuration*: VPP L2 bridge-domain contexts connect each
+    CNF pipeline to physical interfaces.
+
+Thread-to-Core Mapping
+~~~~~~~~~~~~~~~~~~~~~~
+
+CSIT defines specific ratios for mapping software threads of the vswitch
+and of VNFs/CNFs to physical cores, with separate ratios defined for
+main control threads and data-plane threads.
+
+In |csit-release|, NFV service density tests run on Intel Xeon testbeds
+with Intel Hyper-Threading enabled, so each physical core is associated
+with a pair of sibling logical cores corresponding to the hyper-threads.
+
+|csit-release| executes tests with the following mapping ratios of
+software threads to physical cores:
+
+- vSwitch
+
+  - Data-plane on single core
+
+    - (data:core) = (1:1) => 2dt1c - 2 Data-plane Threads on 1 Core.
+    - (main:core) = (1:1) => 1mt1c - 1 Main Thread on 1 Core.
+
+  - Data-plane on two cores
+
+    - (data:core) = (1:2) => 4dt2c - 4 Data-plane Threads on 2 Cores.
+    - (main:core) = (1:1) => 1mt1c - 1 Main Thread on 1 Core.
+
+- VNF and CNF
+
+  - Data-plane on single core
+
+    - (data:core) = (1:1) => 2dt1c - 2 Data-plane Threads on 1 Core per
+      NF.
+    - (main:core) = (2:1) => 2mt1c - 2 Main Threads on 1 Core, 1 Thread
+      per NF, core shared between two NFs.
+
+Maximum tested service densities are limited by the number of physical
+cores per NUMA node. |csit-release| allocates cores within NUMA0.
+Support for multi-NUMA tests is to be added in a future release.
\ No newline at end of file
--
2.16.6