-#. Test environment changes in VPP data plane performance tests:\r
-\r
- - Further characterization and optimizations of VPP vhost-user and VM\r
- test methodology and test environment;\r
-\r
- - Tests with varying Qemu virtio queue (a.k.a. vring) sizes:\r
- [vr256] default 256 descriptors, [vr1024] 1024 descriptors to\r
- optimize for packet throughput;\r
-\r
- - Tests with varying Linux CFS (Completely Fair Scheduler)\r
- settings: [cfs] default settings, [cfsrr1] CFS RoundRobin(1)\r
- policy applied to all data plane threads handling test packet\r
- path including all VPP worker threads and all Qemu testpmd\r
- poll-mode threads;\r
-\r
- - Resulting test cases cover all combinations of [vr256,vr1024]
- and [cfs,cfsrr1] settings;
-\r
- - For more detail see performance results observations section in\r
- this report;\r
-\r
-#. Code updates and optimizations in CSIT performance framework:\r
-\r
- - Complete CSIT framework code revision and optimizations as described
- on CSIT wiki page\r
- `Design_Optimizations <https://wiki.fd.io/view/CSIT/Design_Optimizations>`_.\r
-\r
- - For more detail see the CSIT Framework Design section in this\r
- report;\r
-\r
-#. Changes to CSIT driver for TRex Traffic Generator:\r
-\r
- - Complete refactor of TRex CSIT driver;\r
-\r
- - Introduction of packet traffic profiles to improve usability
- and manageability for a growing number of test scenarios;
-\r
- - Support for packet traffic profiles to test IPv4/IPv6 stateful and\r
- stateless DUT data plane features;\r
-\r
-#. Added VPP performance tests:
-\r
- - **Linux Container VPP memif virtual interface tests**\r
-\r
- - VPP Memif virtual interface (shared memory interface) tests\r
- interconnecting VPP instances over memif. VPP vswitch\r
- instance runs in bare-metal user-mode handling Intel x520 NIC\r
- 10GbE interfaces and connecting over memif (Master side) virtual\r
- interfaces to another instance of VPP running in bare-metal Linux\r
- Container (LXC) with memif virtual interfaces (Slave side). LXC\r
- runs in a privileged mode with VPP data plane worker threads
- pinned to dedicated physical CPU cores per usual CSIT practice.\r
- Both VPP instances run the same software version. This test topology is
- equivalent to existing tests with vhost-user and VMs.\r
-\r
- - **Stateful Security Groups**\r
-\r
- - New tests of VPP stateful security-groups (a.k.a. acl-plugin),
- functionally compatible with OpenStack networking-vpp;
-\r
- - New tested security-groups access-control-lists (acl)\r
- configuration variants include: [iaclNsl] input acl stateless,\r
- [oaclNsl] output acl stateless, [iaclNsf] input acl stateful\r
- a.k.a. reflect, [oaclNsf] output acl stateful a.k.a. reflect,\r
- where N is the number of access-control-entries (ace) in the acl.
-\r
- - Tested packet flow counts transmitted by TG: 100, 10k and 100k
- flows, always hitting the last permit entry in the acl.
-\r
- - **VPP vhost and VM tests**\r
-\r
- - New VPP vhost-user and VM test cases to benchmark performance of\r
- VPP and VM topologies with Qemu and CFS policy combinations of\r
- [vr256,vr1024] x [cfs,cfsrr1];\r
-\r
- - Statistical analysis of repeatability of results;
-\r
-Performance Improvements\r
-------------------------\r
-\r
-Substantial improvements in measured packet throughput have been\r
-observed in a number of CSIT |release| tests listed below, with relative\r
-increase of double-digit percentage points. Relative improvements for\r
-this release are calculated against the test results listed in CSIT\r
-|release-1| report. The comparison is calculated between the mean values\r
-based on collected and archived test results' samples for involved VPP\r
-releases. Standard deviation is also listed for CSIT |release|.
-VPP-16.09 and VPP-17.01 numbers are provided for reference.\r
-\r
-NDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Non-Drop Rate Throughput discovery tests:\r
-\r
-.. csv-table::\r
- :align: center\r
- :header: VPP Functionality,Test Name,VPP-16.09 [Mpps],VPP-17.01 [Mpps],VPP-17.04 mean [Mpps],VPP-17.07 mean [Mpps],VPP-17.07 stdev [Mpps],17.04 to 17.07 change\r
- :file: ../../../docs/report/vpp_performance_tests/performance_improvements/ndr_throughput.csv\r
-\r
-PDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Partial Drop Rate throughput discovery tests with packet Loss Tolerance of 0.5%:
-\r
-.. csv-table::\r
- :align: center\r
- :header: VPP Functionality,Test Name,VPP-16.09 [Mpps],VPP-17.01 [Mpps],VPP-17.04 mean [Mpps],VPP-17.07 mean [Mpps],VPP-17.07 stdev [Mpps],17.04 to 17.07 change\r
- :file: ../../../docs/report/vpp_performance_tests/performance_improvements/pdr_throughput.csv\r
-\r
-Measured improvements are in line with VPP code optimizations listed in\r
-`VPP-17.07 release notes\r
-<https://docs.fd.io/vpp/17.07/release_notes_1707.html>`_.\r
-\r
-Other Performance Changes\r
--------------------------\r
-\r
-Other changes in measured packet throughput, with either minor relative\r
-increase or decrease, have been observed in a number of CSIT |release|\r
-tests listed below. Relative changes are calculated against the test\r
-results listed in CSIT |release-1| report.\r
-\r
-NDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Non-Drop Rate Throughput discovery tests:\r
-\r
-.. csv-table::\r
- :align: center\r
- :header: VPP Functionality,Test Name,VPP-16.09 [Mpps],VPP-17.01 [Mpps],VPP-17.04 mean [Mpps],VPP-17.07 mean [Mpps],VPP-17.07 stdev [Mpps],17.04 to 17.07 change\r
- :file: ../../../docs/report/vpp_performance_tests/performance_improvements/ndr_throughput_others.csv\r
-\r
-PDR Throughput\r
-~~~~~~~~~~~~~~\r
-\r
-Partial Drop Rate throughput discovery tests with packet Loss Tolerance of 0.5%:
-\r
-.. csv-table::\r
- :align: center\r
- :header: VPP Functionality,Test Name,VPP-16.09 [Mpps],VPP-17.01 [Mpps],VPP-17.04 mean [Mpps],VPP-17.07 mean [Mpps],VPP-17.07 stdev [Mpps],17.04 to 17.07 change\r
- :file: ../../../docs/report/vpp_performance_tests/performance_improvements/pdr_throughput_others.csv\r
-\r
+#. **Added VPP performance tests**\r
+\r
+ - *MRR tests:* New Maximum Receive Rate (MRR) tests measure the
+ packet forwarding rate under the maximum load offered by the
+ traffic generator over a set trial duration, regardless of packet
+ loss. Maximum load for the specified Ethernet frame size is set to
+ the bi-directional link rate. MRR tests are used for continuous
+ performance trending and for comparison between releases.
+\r
+ - *Service Chaining with SRv6:* SRv6 (Segment Routing over IPv6)
+ proxy tests verifying performance of the Endpoint to SR-unaware
+ appliance via masquerading (End.AM), dynamic proxy (End.AD) or
+ static proxy (End.AS) functions.
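
The maximum load used in MRR tests follows from Ethernet L1 framing overhead; a minimal sketch (function names are illustrative, not CSIT code):

```python
def max_frame_rate_pps(link_rate_bps: float, frame_size_bytes: int) -> float:
    """Line-rate frame rate for a given Ethernet frame size.

    Each frame occupies 20 extra bytes on the wire: 7B preamble,
    1B start-of-frame delimiter and a 12B inter-frame gap.
    """
    return link_rate_bps / ((frame_size_bytes + 20) * 8)


def mrr_mpps(rx_packets: int, trial_duration_s: float) -> float:
    """MRR: packets received over the trial, regardless of loss."""
    return rx_packets / trial_duration_s / 1e6


# 10GbE at 64B frames saturates at ~14.88 Mpps per direction.
print(round(max_frame_rate_pps(10e9, 64)))  # 14880952
```

The bi-directional maximum load is simply twice the per-direction rate.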
+\r
+#. **Presentation and Analytics Layer (PAL)**\r
+\r
+ - Added continuous performance measuring, trending and anomaly
+ detection. Includes new PAL code and Jenkins jobs for Performance
+ Trending (PT) and Performance Analysis (PA), producing a
+ performance trending dashboard and trendline graphs with summary
+ and drill-down views across all specified tests, for regular
+ review and inspection by the FD.io developer and user community.
+\r
+#. **Test Framework Optimizations**\r
+\r
+ - *Performance test efficiency:* Qemu build/install optimizations,
+ warmup phase handling and VPP restart handling improved test
+ stability and reduced total execution time by 30% for a single
+ packet size (e.g. 64B/78B).
+\r
+ - *General code housekeeping:* ongoing optimization of Robot
+ Framework (RF) keywords and removal of redundant ones.
+\r
+Performance Changes\r
+-------------------\r
+\r
+Relative performance changes in measured packet throughput in CSIT
+|release| are calculated against the results from the CSIT |release-1|
+report. Listed mean and standard deviation values are computed from a
+series of the same tests executed against the respective VPP releases
+to verify test result repeatability, with the percentage change
+calculated for mean values. Note that the standard deviation is quite
+high for a small number of packet throughput tests, which indicates
+poor result repeatability and makes the relative change of mean
+throughput not fully representative for these tests. The root causes
+behind the poor repeatability vary between test cases.
+\r
+NDR Throughput Changes\r
+~~~~~~~~~~~~~~~~~~~~~~\r
+\r
+NDR small packet throughput changes between releases are available in
+CSV and pretty ASCII formats:
+\r
+ - `csv format for 1t1c <../_static/vpp/performance-changes-ndr-1t1c-full.csv>`_,\r
+ - `csv format for 2t2c <../_static/vpp/performance-changes-ndr-2t2c-full.csv>`_,\r
+ - `pretty ASCII format for 1t1c <../_static/vpp/performance-changes-ndr-1t1c-full.txt>`_,\r
+ - `pretty ASCII format for 2t2c <../_static/vpp/performance-changes-ndr-2t2c-full.txt>`_.\r
+\r
+PDR Throughput Changes\r
+~~~~~~~~~~~~~~~~~~~~~~\r
+\r
+PDR small packet throughput changes between releases are available in
+CSV and pretty ASCII formats:
+\r
+ - `csv format for 1t1c <../_static/vpp/performance-changes-pdr-1t1c-full.csv>`_,\r
+ - `csv format for 2t2c <../_static/vpp/performance-changes-pdr-2t2c-full.csv>`_,\r
+ - `pretty ASCII format for 1t1c <../_static/vpp/performance-changes-pdr-1t1c-full.txt>`_,\r
+ - `pretty ASCII format for 2t2c <../_static/vpp/performance-changes-pdr-2t2c-full.txt>`_.\r
+\r
+MRR Throughput Changes\r
+~~~~~~~~~~~~~~~~~~~~~~\r
+\r
+MRR throughput changes between releases are available in CSV and
+pretty ASCII formats:
+\r
+ - `csv format for 1t1c <../_static/vpp/performance-changes-mrr-1t1c-full.csv>`_,\r
+ - `csv format for 2t2c <../_static/vpp/performance-changes-mrr-2t2c-full.csv>`_,\r
+ - `csv format for 4t4c <../_static/vpp/performance-changes-mrr-4t4c-full.csv>`_,\r
+ - `pretty ASCII format for 1t1c <../_static/vpp/performance-changes-mrr-1t1c-full.txt>`_,\r
+ - `pretty ASCII format for 2t2c <../_static/vpp/performance-changes-mrr-2t2c-full.txt>`_,\r
+ - `pretty ASCII format for 4t4c <../_static/vpp/performance-changes-mrr-4t4c-full.txt>`_.\r
+\r
+Throughput Trending\r
+-------------------\r
+\r
+In addition to reporting throughput changes between VPP releases, CSIT provides\r
+continuous performance trending for VPP master branch:\r
+\r
+#. `VPP Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_
+   - per VPP test case throughput trend, trend compliance and summary
+   of detected anomalies.
+
+#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_
+   - throughput test metrics, trend calculations and anomaly
+   classification (progression, regression, outlier).
+
+#. `Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/index.html>`_
+   - per VPP build MRR throughput measurements against the trendline,
+   with anomaly highlights and links to associated CSIT test jobs.
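
A naive sketch of the anomaly classification idea (a simple 3-sigma rule for illustration only; the actual CSIT trending methodology is described at the link above):

```python
from statistics import mean, stdev

def classify(trend_samples, value, sigmas=3.0):
    """Classify a new MRR measurement against recent trend samples."""
    m, s = mean(trend_samples), stdev(trend_samples)
    if value > m + sigmas * s:
        return "progression"
    if value < m - sigmas * s:
        return "regression"
    return "normal"
```

A progression or regression shifts the trendline; isolated excursions that do not persist would be treated as outliers rather than real trend changes.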