-#. Test environment changes in VPP data plane performance tests:\r
-\r
- - Further characterization and optimizations of VPP vhost-user and VM test\r
- methodology and test environment;\r
-\r
- - Tests with varying Qemu virtio queue (a.k.a. vring) sizes:\r
- [vr256] default 256 descriptors, [vr1024] 1024 descriptors to\r
- optimize for packet throughput;\r
-\r
- - Tests with varying Linux :abbr:`CFS (Completely Fair Scheduler)`\r
- settings: [cfs] default settings, [cfsrr1] :abbr:`CFS (Completely Fair\r
- Scheduler)` RoundRobin(1) policy applied to all data plane threads\r
- handling test packet path including all VPP worker threads and all Qemu\r
- testpmd poll-mode threads;\r
-\r
- - Resulting test cases are all combinations with [vr256,vr1024] and\r
- [cfs,cfsrr1] settings;\r
-\r
- - For more detail see performance results observations section in\r
- this report;\r
-\r
-#. Code updates and optimizations in CSIT performance framework:\r
-\r
- - Complete CSIT framework code revision and optimizations as descried\r
- on CSIT wiki page `Design_Optimizations\r
- <https://wiki.fd.io/view/CSIT/Design_Optimizations>`_.\r
-\r
- - For more detail see the :ref:`CSIT Framework Design <csit-design>` section\r
- in this report;\r
-\r
-#. Changes to CSIT driver for TRex Traffic Generator:\r
-\r
- - Complete refactor of TRex CSIT driver;\r
-\r
- - Introduction of packet traffic profiles to improve usability and\r
- manageability of traffic profiles for a growing number of test\r
- scenarios.\r
-\r
- - Support for packet traffic profiles to test IPv4/IPv6 stateful and\r
- stateless DUT data plane features;\r
-\r
-#. Added VPP performance tests\r
-\r
- - **Linux Container VPP memif virtual interface tests**\r
-\r
- - VPP Memif virtual interface (shared memory interface) tests\r
- interconnecting VPP instances over memif. VPP vswitch\r
- instance runs in bare-metal user-mode handling Intel x520 NIC\r
- 10GbE interfaces and connecting over memif (Master side) virtual\r
- interfaces to another instance of VPP running in bare-metal Linux\r
- Container (LXC) with memif virtual interfaces (Slave side). LXC\r
- runs in a priviliged mode with VPP data plane worker threads\r
- pinned to dedicated physical CPU cores per usual CSIT practice.\r
- Both VPP run the same version of software. This test topology is\r
- equivalent to existing tests with vhost-user and VMs.\r
-\r
- - **Stateful Security Groups**\r
-\r
- - New tests of VPP stateful security-groups a.k.a. acl-plugin\r
- functionally compatible with networking-vpp OpenStack;\r
-\r
- - New tested security-groups access-control-lists (acl)\r
- configuration variants include: [iaclNsl] input acl stateless,\r
- [oaclNsl] output acl stateless, [iaclNsf] input acl stateful\r
- a.k.a. reflect, [oaclNsf] output acl stateful a.k.a. reflect,\r
- where N is number of access-control-entries (ace) in the acl.\r
-\r
- - Testing packet flows transmitted by TG: 100, 10k, 100k, always\r
- hitting the last permit entry in acl.\r
-\r
- - **VPP vhost and VM tests**\r
-\r
- - New VPP vhost-user and VM test cases to benchmark performance of\r
- VPP and VM topologies with Qemu and CFS policy combinations of\r
- [vr256,vr1024] x [cfs,cfsrr1];\r
-\r
- - Statistical analysis of repeatibility of results;\r
-\r
-Performance Improvements\r
-------------------------\r
-\r
-Substantial improvements in measured packet throughput have been observed in a\r
-number of CSIT |release| tests listed below, with relative increase of\r
-double-digit percentage points. Relative improvements for this release are\r
-calculated against the test results listed in CSIT |release-1| report. The\r
-comparison is calculated between the mean values based on collected and\r
-archived test results' samples for involved VPP releases. Standard deviation\r
-has been also listed for CSIT |release|. VPP-16.09 and VPP-17.01 numbers are\r
-provided for reference.\r
-\r
-NDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Non-Drop Rate Throughput discovery tests:\r
-\r
-.. only:: html\r
-\r
- .. csv-table::\r
- :align: center\r
- :file: performance_improvements/performance_improvements_ndr_top.csv\r
-\r
-.. only:: latex\r
-\r
- .. raw:: latex\r
-\r
- \makeatletter\r
- \csvset{\r
- perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{ m{1.5cm} m{5cm} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},\r
- }\r
- \makeatother\r
-\r
- {\tiny\r
- \csvautobooklongtable[separator=comma,\r
- respect all,\r
- no check column count,\r
- perfimprovements column width=1cm,\r
- late after line={\\\hline},\r
- late after last line={\end{longtable}}\r
- ]{../_tmp/src/vpp_performance_tests/performance_improvements/performance_improvements_ndr_top.csv}\r
- }\r
-\r
-\r
-PDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Partial Drop Rate thoughput discovery tests with packet Loss Tolerance of 0.5%:\r
-\r
-.. only:: html\r
-\r
- .. csv-table::\r
- :align: center\r
- :file: performance_improvements/performance_improvements_pdr_top.csv\r
-\r
-.. only:: latex\r
-\r
- .. raw:: latex\r
-\r
- \makeatletter\r
- \csvset{\r
- perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{ m{1.5cm} m{5cm} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},\r
- }\r
- \makeatother\r
-\r
- {\tiny\r
- \csvautobooklongtable[separator=comma,\r
- respect all,\r
- no check column count,\r
- perfimprovements column width=1cm,\r
- late after line={\\\hline},\r
- late after last line={\end{longtable}}\r
- ]{../_tmp/src/vpp_performance_tests/performance_improvements/performance_improvements_pdr_top.csv}\r
- }\r
-\r
-\r
-Measured improvements are in line with VPP code optimizations listed in\r
-`VPP-17.10 release notes\r
-<https://docs.fd.io/vpp/17.10/release_notes_1710.html>`_.\r
-\r
-Other Performance Changes\r
--------------------------\r
-\r
-Other changes in measured packet throughput, with either minor relative increase\r
-or decrease, have been observed in a number of CSIT |release| tests listed\r
-below. Relative changes are calculated against the test results listed in CSIT\r
-|release-1| report.\r
-\r
-NDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Non-Drop Rate Throughput discovery tests:\r
-\r
-.. only:: html\r
-\r
- .. csv-table::\r
- :align: center\r
- :file: performance_improvements/performance_improvements_ndr_low.csv\r
-\r
-.. only:: latex\r
-\r
- .. raw:: latex\r
-\r
- \makeatletter\r
- \csvset{\r
- perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{ m{1.5cm} m{5cm} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},\r
- }\r
- \makeatother\r
-\r
- {\tiny\r
- \csvautobooklongtable[separator=comma,\r
- respect all,\r
- no check column count,\r
- perfimprovements column width=1cm,\r
- late after line={\\\hline},\r
- late after last line={\end{longtable}}\r
- ]{../_tmp/src/vpp_performance_tests/performance_improvements/performance_improvements_ndr_low.csv}\r
- }\r
-\r
-\r
-PDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Partial Drop Rate thoughput discovery tests with packet Loss Tolerance of 0.5%:\r
-\r
-.. only:: html\r
-\r
- .. csv-table::\r
- :align: center\r
- :file: performance_improvements/performance_improvements_pdr_low.csv\r
-\r
-.. only:: latex\r
-\r
- .. raw:: latex\r
-\r
- \makeatletter\r
- \csvset{\r
- perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{ m{1.5cm} m{5cm} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},\r
- }\r
- \makeatother\r
-\r
- {\tiny\r
- \csvautobooklongtable[separator=comma,\r
- respect all,\r
- no check column count,\r
- perfimprovements column width=1cm,\r
- late after line={\\\hline},\r
- late after last line={\end{longtable}}\r
- ]{../_tmp/src/vpp_performance_tests/performance_improvements/performance_improvements_pdr_low.csv}\r
- }\r
-\r
+#. **Added VPP performance tests**\r
+\r
+ - *MRR tests:* New Maximum Receive Rate (MRR) tests measure the
+ packet forwarding rate under the maximum load offered by the
+ traffic generator over a set trial duration, regardless of packet
+ loss. The maximum load for a specified Ethernet frame size is set
+ to the bi-directional link rate. MRR tests are used for continuous
+ performance trending and for comparison between releases.
+\r
+ - *Service Chaining with SRv6:* New SRv6 (Segment Routing IPv6)
+ proxy tests verifying performance of endpoint to SR-unaware
+ appliance via masquerading (End.AM), dynamic proxy (End.AD) or
+ static proxy (End.AS) functions.
+\r
+#. **Presentation and Analytics Layer (PAL)**\r
+\r
+ - Added continuous performance measuring, trending and anomaly
+ detection. This includes new PAL code and Jenkins jobs for
+ Performance Trending (PT) and Performance Analysis (PA), producing
+ a performance trending dashboard and trendline graphs with summary
+ and drill-down views across all specified tests, for regular review
+ and inspection by the FD.io developer and user community.
+\r
+#. **Test Framework Optimizations**\r
+\r
+ - *Performance test efficiency:* Qemu build/install optimizations,
+ warmup phase handling and VPP restart handling improved stability
+ and reduced total execution time by 30% for a single packet size
+ (e.g. 64B/78B).
+\r
+ - *General code housekeeping:* ongoing Robot Framework (RF) keyword
+ optimizations and removal of redundant RF keywords.
+\r
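
The bi-directional link rate used as the MRR offered load can be derived from the frame size and the NIC link speed. A minimal sketch, assuming a 10GbE link and standard Ethernet per-frame wire overhead; the `max_pps` helper name is illustrative, not part of CSIT:

```python
def max_pps(frame_size_bytes, link_rate_bps=10e9):
    """Theoretical maximum packet rate for one direction of an
    Ethernet link.

    Each frame on the wire carries 20 extra bytes of overhead:
    7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
    """
    wire_bytes = frame_size_bytes + 20
    return link_rate_bps / (wire_bytes * 8)

# Offered load for 64B frames on a 10GbE link, as used for MRR tests:
uni_directional = max_pps(64)       # ~14.88 Mpps per direction
bi_directional = 2 * uni_directional  # ~29.76 Mpps bi-directional
```

For larger frames the achievable packet rate drops accordingly, which is why MRR results are always reported per frame size.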
+Performance Changes\r
+-------------------\r
+\r
+Relative performance changes in measured packet throughput in CSIT
+|release| are calculated against the results from the CSIT |release-1|
+report. Listed mean and standard deviation values are computed based on
+a series of the same tests executed against the respective VPP releases
+to verify test result repeatability, with the percentage change
+calculated for mean values. Note that the standard deviation is quite
+high for a small number of packet throughput tests, which indicates
+poor test result repeatability and makes the relative change of the
+mean throughput value not fully representative for those tests. The
+root causes behind poor repeatability vary between test cases.
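
The mean, standard deviation and relative change described above can be computed as follows; the sample values are hypothetical, for illustration only:

```python
import statistics

# Hypothetical NDR throughput samples (Mpps) from repeated runs of the
# same test against two VPP releases; values are illustrative only.
prev_samples = [9.1, 9.0, 9.2, 9.1]
curr_samples = [10.4, 10.6, 10.5, 10.5]

prev_mean = statistics.mean(prev_samples)
curr_mean = statistics.mean(curr_samples)
curr_stdev = statistics.stdev(curr_samples)  # sample standard deviation

# Relative change is calculated between the mean values, as in the
# report tables.
change_pct = (curr_mean - prev_mean) / prev_mean * 100
```

A large `curr_stdev` relative to `curr_mean` is the signal, noted above, that the percentage change should be read with caution.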
+\r
+NDR Throughput Changes\r
+~~~~~~~~~~~~~~~~~~~~~~\r
+\r
+NDR small packet throughput changes between releases are available in
+CSV and pretty ASCII formats:
+\r
+ - `csv format for 1t1c <../_static/vpp/performance-changes-ndr-1t1c-full.csv>`_,\r
+ - `csv format for 2t2c <../_static/vpp/performance-changes-ndr-2t2c-full.csv>`_,\r
+ - `pretty ASCII format for 1t1c <../_static/vpp/performance-changes-ndr-1t1c-full.txt>`_,\r
+ - `pretty ASCII format for 2t2c <../_static/vpp/performance-changes-ndr-2t2c-full.txt>`_.\r
+\r
+PDR Throughput Changes\r
+~~~~~~~~~~~~~~~~~~~~~~\r
+\r
+PDR small packet throughput changes between releases are available in
+CSV and pretty ASCII formats:
+\r
+ - `csv format for 1t1c <../_static/vpp/performance-changes-pdr-1t1c-full.csv>`_,\r
+ - `csv format for 2t2c <../_static/vpp/performance-changes-pdr-2t2c-full.csv>`_,\r
+ - `pretty ASCII format for 1t1c <../_static/vpp/performance-changes-pdr-1t1c-full.txt>`_,\r
+ - `pretty ASCII format for 2t2c <../_static/vpp/performance-changes-pdr-2t2c-full.txt>`_.\r
+\r
+MRR Throughput Changes\r
+~~~~~~~~~~~~~~~~~~~~~~\r
+\r
+MRR throughput changes between releases are available in CSV and
+pretty ASCII formats:
+\r
+ - `csv format for 1t1c <../_static/vpp/performance-changes-mrr-1t1c-full.csv>`_,\r
+ - `csv format for 2t2c <../_static/vpp/performance-changes-mrr-2t2c-full.csv>`_,\r
+ - `csv format for 4t4c <../_static/vpp/performance-changes-mrr-4t4c-full.csv>`_,\r
+ - `pretty ASCII format for 1t1c <../_static/vpp/performance-changes-mrr-1t1c-full.txt>`_,\r
+ - `pretty ASCII format for 2t2c <../_static/vpp/performance-changes-mrr-2t2c-full.txt>`_,\r
+ - `pretty ASCII format for 4t4c <../_static/vpp/performance-changes-mrr-4t4c-full.txt>`_.\r
+\r
+Throughput Trending\r
+-------------------\r
+\r
+In addition to reporting throughput changes between VPP releases, CSIT
+provides continuous performance trending for the VPP master branch:
+\r
+#. `VPP Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_
+   - per VPP test case throughput trend, trend compliance and summary of
+   detected anomalies.
+
+#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_
+   - throughput test metrics, trend calculations and anomaly classification
+   (progression, regression, outlier).
+
+#. `Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/index.html>`_
+   - per VPP build MRR throughput measurements against the trendline with
+   anomaly highlights, and associated CSIT test jobs.
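
The anomaly classification above (progression, regression) can be illustrated with a simple threshold rule against a trend window; this is a deliberately simplified sketch, not the actual CSIT trending algorithm:

```python
import statistics

def classify(sample, trend_window):
    """Classify a new MRR sample against a window of recent results.

    Simplified illustration: a sample far below the trend mean is a
    regression, far above is a progression; the real CSIT methodology
    uses more robust trend statistics.
    """
    mean = statistics.mean(trend_window)
    stdev = statistics.stdev(trend_window)
    if sample < mean - 3 * stdev:
        return "regression"
    if sample > mean + 3 * stdev:
        return "progression"
    return "normal"
```

For example, with a stable trend window around 10.0 Mpps, a drop to 9.0 Mpps would be flagged as a regression and a jump to 11.0 Mpps as a progression.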