Change-Id: I0ed346ff30c61d28b5a2232ef2f9d32d26c1ae2c
Signed-off-by: Vratko Polak <vrpolak@cisco.com>
To identify performance changes due to TRex code development between the
previous and current TRex versions, both have been tested in the CSIT
environment of the latest version and compared against each other. All
substantial progressions and regressions have been marked up with RCA
analysis. See :ref:`trex_known_issues`.
modifications of the test environment.

Any benchmark anomalies (progressions, regressions) between releases of
a DUT application (e.g. VPP, DPDK) are determined by testing it in the
same test environment, to avoid test environment changes clouding the
picture.

To better distinguish the impact of test environment changes,
we also execute tests without any SUT (just with the TRex TG sending
packets over a link looping back to the TG).
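The same-environment comparison described above amounts to computing the
relative difference of per-test throughput between two versions and flagging
results outside a tolerance. A minimal sketch; the ``classify`` helper, the
5% threshold and the sample numbers are illustrative, not CSIT's actual
methodology:

```python
# Hypothetical helper: flag progressions/regressions between two DUT
# versions measured in the same test environment. The 5% tolerance is
# an illustrative choice, not the threshold CSIT actually uses.
def classify(old_mpps: float, new_mpps: float, tolerance: float = 0.05) -> str:
    """Return 'progression', 'regression' or 'stable' for one test case."""
    relative_change = (new_mpps - old_mpps) / old_mpps
    if relative_change > tolerance:
        return "progression"
    if relative_change < -tolerance:
        return "regression"
    return "stable"

# Made-up per-test throughputs (Mpps) for the old and new version.
results = {
    "ip4base-ndr": (12.0, 12.1),   # within tolerance -> stable
    "ip4scale-ndr": (10.0, 8.5),   # -15% -> regression
}
for test, (old, new) in results.items():
    print(test, classify(old, new))
```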
A mirror approach is introduced to determine benchmarking anomalies due
to the test environment change. This is achieved by testing the same DUT

TRex performance test results are reported for a range of processors.
For a description of the physical testbeds used for TRex performance
tests, please refer to :ref:`tested_physical_topologies`.
Logical Topology
----------------

CSIT TRex performance tests are executed on the physical testbeds
described in :ref:`tested_physical_topologies`. The logical topology
uses one NIC with loopback-connected ports. See the figure below.
- 10% of discovered PDR throughput.
- Minimal offered load.
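Latency trial loads such as those listed above are derived from the
discovered PDR rate. A sketch under the assumption that each listed level is
a fixed fraction of PDR plus a floor; the ``latency_trial_loads`` helper and
the 9000 pps minimal load value are illustrative, not taken from CSIT code:

```python
# Hypothetical derivation of latency-trial offered loads from a
# discovered PDR throughput. The 9000 pps floor stands in for the
# "minimal offered load" and is an illustrative value only.
def latency_trial_loads(pdr_pps: float, minimal_pps: float = 9000.0) -> list[float]:
    """Return offered loads (pps): fractions of PDR plus the minimal load."""
    fractions = (0.1,)  # e.g. 10% of discovered PDR throughput
    loads = [pdr_pps * f for f in fractions]
    loads.append(minimal_pps)
    return loads

print(latency_trial_loads(1_000_000.0))  # [100000.0, 9000.0]
```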
|csit-release| includes tests using the following TRex traffic profiles
(corresponding to data plane functionality when a DUT is used),
performance tested across a range of NIC drivers and NIC models:

+-----------------------+----------------------------------------------+
| Traffic profile       | Corresponding dataplane functionality        |
+=======================+==============================================+
| IPv4 Base             | IPv4 routing.                                |
+-----------------------+----------------------------------------------+
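An "IPv4 Base" profile exercises plain IPv4 routing. TRex itself defines
such profiles through its Python API (not reproduced here); the stdlib-only
sketch below merely builds and checksums the kind of IPv4 header such a
stream carries. The addresses and payload length are illustrative:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum over 16-bit words, as specified by RFC 791."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """20-byte IPv4 header carrying UDP; addresses here are illustrative."""
    ver_ihl, tos, total_len = 0x45, 0, 20 + payload_len
    ident, flags_frag, ttl, proto = 0, 0, 64, 17  # proto 17 = UDP
    src_b = bytes(int(octet) for octet in src.split("."))
    dst_b = bytes(int(octet) for octet in dst.split("."))
    # Pack with a zero checksum first, then patch the real one in.
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len,
                      ident, flags_frag, ttl, proto, 0, src_b, dst_b)
    csum = ipv4_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

hdr = build_ipv4_header("10.10.10.1", "20.20.20.1", 46)
assert ipv4_checksum(hdr) == 0  # a valid header checksums to zero
```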
Throughput Trending
-------------------

CSIT provides continuous performance trending for the master branch:

#. `TRex Trending Graphs <https://s3-docs.fd.io/csit/master/trending/ndrpdr_trending/trex.html>`_:
   per TRex test case throughput trend, trend compliance and summary of
   detected anomalies. We expect TRex to hit the currently used bps or
   pps limit, so no anomalies are expected here (unless we change those
   limits in CSIT).

#. `TRex Latency Graphs <https://s3-docs.fd.io/csit/master/trending/ndrpdr_latency_trending/trex.html>`_:
   per TRex build NDRPDR latency measurements against the trendline.
   We have seen in the past that the latency numbers can depend on the
   TRex version, NIC firmware, or the driver used.
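Since a no-SUT TRex trend sample is expected to sit at the configured bps or
pps limit, a sample only counts as an anomaly when it departs noticeably
from that limit. A hedged sketch of such a check; the ``is_anomaly`` helper,
the 1% tolerance and the sample values are illustrative, not CSIT's actual
anomaly-detection logic:

```python
# Hypothetical check: a no-SUT TRex trend sample should sit at the
# configured rate limit; flag it only when it deviates noticeably.
# The 1% tolerance is an illustrative choice.
def is_anomaly(measured_pps: float, limit_pps: float, tolerance: float = 0.01) -> bool:
    """True when the sample deviates from the pps limit by more than tolerance."""
    return abs(measured_pps - limit_pps) / limit_pps > tolerance

samples = [36.8e6, 36.9e6, 30.0e6]   # made-up trend samples, limit 37 Mpps
flags = [is_anomaly(s, 37.0e6) for s in samples]
print(flags)  # [False, False, True]
```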