X-Git-Url: https://gerrit.fd.io/r/gitweb?p=csit.git;a=blobdiff_plain;f=docs%2Freport%2Fintroduction%2Fmethodology_nfv_service_density.rst;h=c5407b5125bacbf1c581c2f08b11ba078fd452e0;hp=51e56e294d7e44381c0a43abb826cc77b6a1a17e;hb=83d6c91ed14a474f3c3bfdd0b7366e772fdd13d5;hpb=bbfcee8d3cf51ec01d269245970ef41bb072c580 diff --git a/docs/report/introduction/methodology_nfv_service_density.rst b/docs/report/introduction/methodology_nfv_service_density.rst index 51e56e294d..c5407b5125 100644 --- a/docs/report/introduction/methodology_nfv_service_density.rst +++ b/docs/report/introduction/methodology_nfv_service_density.rst @@ -16,20 +16,16 @@ service chain forwarding context(s). In order to provide a most complete picture, each network topology and service configuration is tested in different service density setups by varying two parameters: -- Number of service instances (e.g. 1,2,4..10). -- Number of NFs per service instance (e.g. 1,2,4..10). +- Number of service instances (e.g. 1, 2, 4, 6, 8, 10). +- Number of NFs per service instance (e.g. 1, 2, 4, 6, 8, 10). -The initial implementation of NFV service density tests in -|csit-release| is using two NF applications: +Implementation of NFV service density tests in |csit-release| is using two NF +applications: -- VNF: DPDK L3fwd running in KVM VM, configured with /8 IPv4 prefix - routing. L3fwd got chosen as a lightweight fast IPv4 VNF application, - and follows CSIT approach of using DPDK sample applications in VMs for - performance testing. -- CNF: VPP running in Docker Container, configured with /24 IPv4 prefix - routing. VPP got chosen as a fast IPv4 NF application that supports - required memif interface (L3fwd does not). This is similar to all - other Container tests in CSIT that use VPP. +- VNF: VPP of the same version as vswitch running in KVM VM, configured with /8 + IPv4 prefix routing. +- CNF: VPP of the same version as vswitch running in Docker Container, + configured with /8 IPv4 prefix routing. Tests are designed such that in all tested cases VPP vswitch is the most stressed application, as for each flow vswitch is processing each packet @@ -84,22 +80,29 @@ physical core mapping ratios: - Data-plane on single core + - (main:core) = (1:1) => 1mt1c - 1 main thread on 1 core. - (data:core) = (1:1) => 2dt1c - 2 Data-plane Threads on 1 Core. - - (main:core) = (1:1) => 1mt1c - 1 Main Thread on 1 Core. - Data-plane on two cores - - (data:core) = (1:2) => 4dt2c - 4 Data-plane Threads on 2 Cores. - (main:core) = (1:1) => 1mt1c - 1 Main Thread on 1 Core. + - (data:core) = (1:2) => 4dt2c - 4 Data-plane Threads on 2 Cores. - VNF and CNF - Data-plane on single core + - (main:core) = (2:1) => 2mt1c - 2 Main Threads on 1 Core, 1 Thread + per NF, core shared between two NFs. - (data:core) = (1:1) => 2dt1c - 2 Data-plane Threads on 1 Core per NF. + + - Data-plane on single logical core (Two NFs per physical core) + - (main:core) = (2:1) => 2mt1c - 2 Main Threads on 1 Core, 1 Thread per NF, core shared between two NFs. + - (data:core) = (2:1) => 2dt1c - 2 Data-plane Threads on 1 Core, 1 + Thread per NF, core shared between two NFs. Maximum tested service densities are limited by a number of physical cores per NUMA. |csit-release| allocates cores within NUMA0. Support for