X-Git-Url: https://gerrit.fd.io/r/gitweb?a=blobdiff_plain;ds=sidebyside;f=docs%2Fcpta%2Fintroduction%2Findex.rst;h=016037b067c5580a37ff1d24894d4ecdffc571b4;hb=c6aea4422456d455efd0c7ffce94aa0bc0a4dcbf;hp=516e8b36e09a96f2a1184387bc04322967ca9da6;hpb=6942369b1102a8b9a3b705f9192f1ecb959382d1;p=csit.git

diff --git a/docs/cpta/introduction/index.rst b/docs/cpta/introduction/index.rst
index 516e8b36e0..016037b067 100644
--- a/docs/cpta/introduction/index.rst
+++ b/docs/cpta/introduction/index.rst
@@ -1,36 +1,42 @@
-VPP MRR Performance Dashboard
-=============================
+VPP Performance Dashboard
+=========================
 
 Description
 -----------
 
-Dashboard tables list a summary of per test-case VPP MRR performance trend
-values and detected anomalies (Maximum Receive Rate - received packet rate
-under line rate load). Data comes from trending MRR jobs executed every 12
-hrs (2:00, 14:00 UTC). Trend, trend compliance and anomaly calculations are
-based on a rolling window of samples, currently N=14 covering last 7 days.
-Separate tables are generated for tested VPP worker-thread-core combinations
-(1t1c, 2t2c, 4t4c).
+Performance dashboard tables provide the latest VPP throughput trend,
+trend compliance and detected anomalies, all on a per VPP test case
+basis. Linked trendline graphs enable further drill-down into the
+trendline compliance, sequence and nature of anomalies, as well as
+pointers to performance test builds/logs and VPP builds. Performance
+trending is currently based on the Maximum Receive Rate (MRR) tests. MRR
+tests measure the packet forwarding rate under the maximum load offered
+by the traffic generator over a set trial duration, regardless of packet
+loss. See the :ref:`trending_methodology` section for more detail,
+including trend and anomaly calculations.
+
+Data samples are generated by the CSIT VPP performance trending jobs
+executed twice a day (target start: every 12 hrs, 02:00, 14:00 UTC). All
+trend and anomaly evaluation is based on a rolling window of data
+samples covering the last 7 days.
 
 Legend to table:
 
-    - "Test Case": name of CSIT test case, naming convention on
-      `CSIT wiki `_.
-    - "Throughput Trend [Mpps]": last value of trend calculated over a
-      rolling window.
-    - "Trend Compliance": calculated based on detected anomalies, listed in
-      precedence order - i) "failure" if 3 consecutive outliers,
-      ii) "regression" if any regressions, iii) "progression" if any
-      progressions, iv) "normal" if data compliant with trend.
-    - "Anomaly Value [Mpps]": i) highest outlier if "failure", ii) highest
-      regression if "regression", iii) highest progression if "progression",
-      iv) "-" if normal i.e. within trend.
-    - "Change [%]": "Anomaly Value" vs. "Throughput Trend", "-" if normal.
-    - "# Outliers": number of outliers detected within a rolling window.
+    - **Test Case** : name of FD.io CSIT test case, naming convention
+      `here `_.
+    - **Trend [Mpps]** : last value of the performance trend.
+    - **Short-Term Change [%]** : relative change of the last trend value
+      vs. the trend value from one week ago.
+    - **Long-Term Change [%]** : relative change of the last trend value
+      vs. the maximum trend value over the last quarter, excluding last week.
+    - **Regressions [#]** : number of regressions detected.
+    - **Progressions [#]** : number of progressions detected.
+    - **Outliers [#]** : number of outliers detected.
-
-Tables are listed in sections 1.x. Followed by daily trending graphs in
-sections 2.x. Daily trending data used to generate the graphs is listed in
-sections 3.x.
+
+Tested VPP worker-thread-core combinations (1t1c, 2t2c, 4t4c) are listed
+in separate tables in sections 1.x, followed by the trending methodology
+in section 2 and trendline graphs in sections 3.x. Performance test data
+used for the trendline graphs is provided in sections 4.x.
 
 VPP worker on 1t1c
 ------------------
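The Short-Term and Long-Term Change columns described in the patched legend are plain relative changes against two different reference points. A minimal Python sketch of that arithmetic (hypothetical helper names, not CSIT's actual code; assumes two trend samples per day, so roughly 14 samples per week and 13 weeks per quarter):

```python
def relative_change(reference, value):
    """Relative change of `value` vs `reference`, in percent."""
    return (value - reference) / reference * 100.0


def dashboard_changes(trend, samples_per_week=14):
    """Compute the dashboard's change columns from a chronological
    list of trend values in Mpps, newest last.

    Hypothetical sketch only; samples_per_week=14 reflects the
    twice-a-day trending job cadence described above.
    """
    last = trend[-1]
    # Short-term reference: trend value from one week ago.
    week_ago = trend[-1 - samples_per_week]
    # Long-term reference: maximum trend value over the last quarter
    # (~13 weeks of samples), excluding the last week.
    quarter = trend[-13 * samples_per_week:-samples_per_week]
    return {
        "short_term_pct": relative_change(week_ago, last),
        "long_term_pct": relative_change(max(quarter), last),
    }
```

For example, a flat 10 Mpps trend whose latest value jumps to 11 Mpps yields a +10% change in both columns, since both reference points are 10 Mpps.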