Formatting and removing excessive white space in static content.
Change-Id: I7400f4ba6386b85b59b667db026558685ec0d1a1
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
-For description of physical testbeds used for DPDK performance tests
-please refer to :ref:`tested_physical_topologies`.
+DPDK performance test results are reported for all three physical
+testbed types present in FD.io labs: 3-Node Xeon Haswell (3n-hsw),
+3-Node Xeon Skylake (3n-skx), 2-Node Xeon Skylake (2n-skx) and installed
+NIC models. For description of physical testbeds used for DPDK
+performance tests please refer to :ref:`tested_physical_topologies`.
Logical Topologies
------------------
In addition to reporting throughput comparison between DPDK releases,
CSIT provides regular performance trending for DPDK release branches:
-#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_
- - per DPDK test case throughput trend, trend compliance and summary of
+#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_:
+ per DPDK test case throughput trend, trend compliance and summary of
-#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_
- - throughput test metrics, trend calculations and anomaly
+#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_:
+ throughput test metrics, trend calculations and anomaly
classification (progression, regression).
-#. `DPDK Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/dpdk.html>`_
- - weekly DPDK Testpmd and L3fwd MRR throughput measurements against
+#. `DPDK Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/dpdk.html>`_:
+ weekly DPDK Testpmd and L3fwd MRR throughput measurements against
the trendline with anomaly highlights and associated CSIT test jobs.
\ No newline at end of file
============
FD.io |csit-release| report contains system performance and functional
-testing data of |vpp-release|.
-
-`PDF version of this report`_ is also available for download.
+testing data of |vpp-release|. `PDF version of this report`_ is
+available for download.
|csit-release| report is structured as follows:
The following MLRsearch values are measured across a range of L2 frame
sizes and reported:
-- **Non Drop Rate (NDR)**: packet and bandwidth throughput at PLR=0%.
+- NON DROP RATE (NDR): packet and bandwidth throughput at PLR=0%.
- **Aggregate packet rate**: NDR_LOWER <bi-directional packet rate>
pps.
- **Aggregate bandwidth rate**: NDR_LOWER <bi-directional bandwidth
rate> Gbps.
-- **Partial Drop Rate (PDR)**: packet and bandwidth throughput at
- PLR=0.5%.
+- PARTIAL DROP RATE (PDR): packet and bandwidth throughput at PLR=0.5%.
- **Aggregate packet rate**: PDR_LOWER <bi-directional packet rate>
pps.
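The NDR and PDR values above differ only in the packet loss ratio (PLR)
they tolerate: 0% versus 0.5%. A minimal Python sketch of searching for
the highest rate that stays within a loss-ratio threshold (this is not
CSIT's actual MLRsearch implementation, and `toy_measure` is a
hypothetical device model used only for illustration):

```python
# Minimal sketch (not CSIT's MLRsearch code): bisect for the highest
# offered load whose packet loss ratio stays within a threshold.
# NDR uses PLR = 0.0, PDR uses PLR = 0.005 (0.5%), per the report text.

def find_rate(measure, max_rate_pps, plr_threshold, precision_pps=10_000):
    """Binary search for the highest rate (pps) with loss ratio <= threshold."""
    lower, upper = 0.0, float(max_rate_pps)
    while upper - lower > precision_pps:
        mid = (lower + upper) / 2
        if measure(mid) <= plr_threshold:
            lower = mid  # rate sustained within the loss threshold
        else:
            upper = mid  # too much loss, back off
    return lower

# Hypothetical device model: drops packets once offered load exceeds 10 Mpps.
def toy_measure(rate_pps, capacity=10_000_000):
    return max(0.0, (rate_pps - capacity) / rate_pps) if rate_pps else 0.0

ndr = find_rate(toy_measure, 20_000_000, 0.0)    # PLR = 0%
pdr = find_rate(toy_measure, 20_000_000, 0.005)  # PLR = 0.5%
```

In a real trial, `measure` would be a traffic-generator run at the given
bi-directional rate; PDR is always at or above NDR because it tolerates
some loss.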
All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` performance tests listed in this report are
executed on physical testbeds built with bare-metal servers hosted by
-:abbr:`LF (Linux Foundation)` FD.io project.
-
-Two testbed topologies are used:
+:abbr:`LF (Linux Foundation)` FD.io project. Two testbed topologies are
+used:
- **3-Node Topology**: Consisting of two servers acting as SUTs
(Systems Under Test) and one server as TG (Traffic Generator), all
-----------------
FD.io CSIT performance tests are executed in physical testbeds hosted by
-:abbr:`LF (Linux Foundation)` for FD.io project.
-
-Two physical testbed topology types are used:
+:abbr:`LF (Linux Foundation)` for FD.io project. Two physical testbed
+topology types are used:
- **3-Node Topology**: Consisting of two servers acting as SUTs
(Systems Under Test) and one server as TG (Traffic Generator), all
Tested SUT servers are based on a range of processors including Intel
Xeon Haswell-SP, Intel Xeon Skylake-SP, Arm, Intel Atom. More detailed
description is provided in
-:ref:`tested_physical_topologies`.
-
-Tested logical topologies are described in
-:ref:`tested_logical_topologies`.
+:ref:`tested_physical_topologies`. Tested logical topologies are
+described in :ref:`tested_logical_topologies`.
Server Specifications
---------------------
physical (performance tests) and virtual environments (functional
tests).
-Following list provides a brief overview of test scenarios covered in
-this report:
+Brief overview of test scenarios covered in this report:
#. **VPP Performance**: VPP performance tests are executed in physical
FD.io testbeds, focusing on VPP network data plane performance in
client (DUT2) scenario using DMM framework and Linux kernel TCP/IP
stack.
-All CSIT test results listed in this report are sourced and auto-
+All CSIT test data included in this report is auto-
generated from :abbr:`RF (Robot Framework)` :file:`output.xml` files
-resulting from :abbr:`LF (Linux Foundation)` FD.io Jenkins jobs executed
-against |vpp-release| release artifacts. References are provided to the
-original FD.io Jenkins job results. Additional references are provided
-to the :abbr:`RF (Robot Framework)` result files that got archived in
-FD.io Nexus online storage system.
+produced by :abbr:`LF (Linux Foundation)` FD.io Jenkins jobs executed
+against |vpp-release| artifacts. References are provided to the
+original FD.io Jenkins job results and all archived source files.
FD.io CSIT system is developed using two main coding platforms: :abbr:`RF (Robot
Framework)` and Python2.7. |csit-release| source code for the executed test
Virtual Topologies
------------------
-CSIT NSH_SFC functional tests are executed in VM-based virtual topologies
-created on demand using :abbr:`VIRL (Virtual Internet Routing Lab)`
-simulation platform contributed by Cisco. VIRL runs on physical
-baremetal servers hosted by LF FD.io project.
-
-All tests are executed in three-node virtual test topology shown in the
-figure below.
+CSIT NSH_SFC functional tests are executed in VM-based virtual
+topologies created on demand using :abbr:`VIRL (Virtual Internet Routing
+Lab)` simulation platform contributed by Cisco. VIRL runs on physical
+baremetal servers hosted by LF FD.io project. All tests are executed in
+three-node virtual test topology shown in the figure below.
CSIT VPP functional tests are executed in VM-based virtual topologies
created on demand using :abbr:`VIRL (Virtual Internet Routing Lab)`
simulation platform contributed by Cisco. VIRL runs on physical
-baremetal servers hosted by LF FD.io project.
-
-Based on the packet path thru SUT VMs, two distinct logical topology
-types are used for VPP DUT data plane testing:
+baremetal servers hosted by LF FD.io project. Based on the packet path
+thru SUT VMs, two distinct logical topology types are used for VPP DUT
+data plane testing:
#. vNIC-to-vNIC switching topologies.
#. Nested-VM service switching topologies.
+VPP performance test results are reported for all three physical testbed
+types present in FD.io labs: 3-Node Xeon Haswell (3n-hsw), 3-Node Xeon
+Skylake (3n-skx), 2-Node Xeon Skylake (2n-skx) and installed NIC models.
For description of physical testbeds used for VPP performance tests
please refer to :ref:`tested_physical_topologies`.
In addition to reporting throughput comparison between VPP releases,
CSIT provides continuous performance trending for VPP master branch:
-#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_
- - per VPP test case throughput trend, trend compliance and summary of
+#. `Performance Dashboard <https://docs.fd.io/csit/master/trending/introduction/index.html>`_:
+ per VPP test case throughput trend, trend compliance and summary of
-#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_
- - throughput test metrics, trend calculations and anomaly
+#. `Trending Methodology <https://docs.fd.io/csit/master/trending/methodology/index.html>`_:
+ throughput test metrics, trend calculations and anomaly
classification (progression, regression).
-#. `VPP Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/index.html>`_
- - per VPP build MRR throughput measurements against the trendline
+#. `VPP Trendline Graphs <https://docs.fd.io/csit/master/trending/trending/index.html>`_:
+ per VPP build MRR throughput measurements against the trendline
with anomaly highlights and associated CSIT test jobs.
\ No newline at end of file