X-Git-Url: https://gerrit.fd.io/r/gitweb?a=blobdiff_plain;f=docs%2Fcpta%2Fintroduction%2Findex.rst;h=76aed6bbcd35058480e7b43a1ea1adcc7836112d;hb=faadf83599b8640c9235c38a4ab57c7adfb9eb96;hp=8b3c17029d4cb9fbc0427c2d02a59e4f83cc69bb;hpb=cadab3a8d0f465cc5ac8b005d9670b80c102170b;p=csit.git
diff --git a/docs/cpta/introduction/index.rst b/docs/cpta/introduction/index.rst
index 8b3c17029d..76aed6bbcd 100644
--- a/docs/cpta/introduction/index.rst
+++ b/docs/cpta/introduction/index.rst
@@ -4,37 +4,39 @@ VPP Performance Dashboard
 
 Description
 -----------
 
-Dashboard tables list a summary of latest per test-case VPP Maximum
-Receive Rate (MRR) performance trend, trend compliance metrics and
-detected number of anomalies. Data samples come from the CSIT VPP
-performance trending jobs executed twice a day, every 12 hrs (02:00,
-14:00 UTC). All trend and anomaly evaluation is based on a rolling
-window of data samples, covering last 7 days.
-
-Legend to table:
-
-  - **Test Case** : name of CSIT test case, naming convention in
-    `CSIT wiki `_.
-  - **Trend [Mpps]** : last value of trend.
-  - **Short-Term Change [%]** : Relative change of last trend value
+Performance dashboard tables provide the latest VPP throughput trend,
+trend compliance and detected anomalies, all on a per VPP test case
+basis. Linked trendline graphs enable further drill-down into trendline
+compliance and the sequence and nature of anomalies, and provide
+pointers to performance test builds/logs and VPP (or DPDK) builds.
+Performance trending is currently based on Maximum Receive Rate (MRR) tests.
+MRR tests measure the packet forwarding rate under the maximum load offered
+by the traffic generator over a set trial duration, regardless of packet
+loss. See the :ref:`trending_methodology` section for more detail, including
+trend and anomaly calculations.
+
+Data samples are generated by the CSIT VPP (and DPDK) performance trending jobs
+executed twice a day (target start: every 12 hrs, 02:00, 14:00 UTC). All
+trend and anomaly evaluation is based on an algorithm that divides test runs
+into groups according to the minimum description length principle.
+The trend value is the population average of the results within a group.
+
+Legend to the tables:
+
+  - **Test Case**: name of FD.io CSIT test case; the naming convention
+    is described `here `_.
+  - **Trend [Mpps]**: last value of the performance trend.
+  - **Short-Term Change [%]**: Relative change of last trend value
     vs. last week trend value.
-  - **Long-Term Change [%]** : Relative change of last trend value vs.
+  - **Long-Term Change [%]**: Relative change of last trend value vs.
     maximum of trend values over the last quarter except last week.
-  - **Regressions [#]** : Number of regressions detected.
-  - **Progressions [#]** : Number of progressions detected.
-  - **Outliers [#]** : Number of outliers detected.
-
-MRR tests measure the packet forwarding rate under the maximum load
-offered by traffic generator over a set trial duration, regardless of
-packet loss.
-
-For more detail about MRR tests, trend and anomaly calculations please
-refer to :ref:`trending_methodology` section.
+  - **Regressions [#]**: Number of regressions detected.
+  - **Progressions [#]**: Number of progressions detected.
 
 Tested VPP worker-thread-core combinations (1t1c, 2t2c, 4t4c) are listed
 in separate tables in section 1.x, followed by trending methodology in
-section 2. and daily trending graphs in sections 3.x. Daily trending
-data used is provided in sections 4.x.
+section 2 and trendline graphs in sections 3.x. Performance test data
+used for trendline graphs is provided in sections 4.x.
 
 VPP worker on 1t1c
 ------------------
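
Below is a minimal sketch of the trend and change metrics introduced in the new
Description text above. It is an editorial illustration only, not the CSIT
trending implementation: the split of test runs into groups, which CSIT derives
via the minimum description length principle, is taken here as a given input,
the trend of each group is its population average, and all MRR sample numbers
are hypothetical::

    # Editorial illustration only -- not the CSIT trending code.
    # The split of test runs into groups is assumed as input here; the
    # dashboard derives it via the minimum description length principle.
    from statistics import mean

    def trend(group):
        """Trend of a group = population average of its MRR samples [Mpps]."""
        return mean(group)

    def relative_change(last, reference):
        """Relative change [%] of the last trend value vs. a reference value."""
        return (last - reference) / reference * 100.0

    # Hypothetical MRR samples [Mpps], already split into three groups
    # (e.g. two regressions were detected between them).
    groups = [[9.9, 10.1, 10.0], [9.4, 9.6, 9.5], [9.0, 9.1, 8.9]]
    trends = [trend(g) for g in groups]    # [10.0, 9.5, 9.0]

    # Simplified mapping of groups to the dashboard columns: the previous
    # group stands in for last week's trend, and the maximum of the earlier
    # groups stands in for the last-quarter maximum.
    short_term = relative_change(trends[-1], trends[-2])
    long_term = relative_change(trends[-1], max(trends[:-1]))

    print(f"Trend [Mpps]:          {trends[-1]:.1f}")   # 9.0
    print(f"Short-Term Change [%]: {short_term:+.1f}")  # -5.3
    print(f"Long-Term Change [%]:  {long_term:+.1f}")   # -10.0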