diff --git a/docs/cpta/introduction/index.rst b/docs/cpta/introduction/index.rst
index 991181aff4..76aed6bbcd 100644
--- a/docs/cpta/introduction/index.rst
+++ b/docs/cpta/introduction/index.rst
@@ -8,39 +8,18 @@
 Performance dashboard tables provide the latest VPP throughput trend,
 trend compliance and detected anomalies, all on a per VPP test case
 basis. Linked trendline graphs enable further drill-down into the
 trendline compliance, sequence and nature of anomalies, as well as
-pointers to performance test builds/logs and VPP builds. Performance
-trending is currently based on the Maximum Receive Rate (MRR) tests. MRR
-tests measure the packet forwarding rate under the maximum load offered
+pointers to performance test builds/logs and VPP (or DPDK) builds.
+Performance trending is currently based on the Maximum Receive Rate (MRR) tests.
+MRR tests measure the packet forwarding rate under the maximum load offered
 by traffic generator over a set trial duration, regardless of packet
 loss. See :ref:`trending_methodology` section for more detail including
 trend and anomaly calculations.
 
-Data samples are generated by the CSIT VPP performance trending jobs
+Data samples are generated by the CSIT VPP (and DPDK) performance trending jobs
 executed twice a day (target start: every 12 hrs, 02:00, 14:00 UTC). All
-trend and anomaly evaluation is based on a rolling window of data
-samples, covering last 7 days.
-
-Failed tests
-------------
-
-The table lists the tests which failed over the runs of the trending
-jobs.
-
-Legend to the table:
-
- - **Test Case**: name of FD.io CSIT test case, naming convention
-   `here `_.
- - **Fails [#]**: number of fails of the tests over the period.
- - **Last Fail [Date]**: the date and time when the test failed the last
-   time.
- - **Last Fail [VPP Build]**: VPP build which was tested when the test failed
-   the last time.
- - **Last Fail [CSIT Build]**: the last CSIT build where the test failed.
-
-.. include:: ../../../_build/_static/vpp/failed-tests.rst
-
-Dashboard
----------
+trend and anomaly evaluation is based on an algorithm which divides test runs
+into groups according to the minimum description length principle.
+The trend value is the population average of the results within a group.
 
 Legend to the tables:
 
@@ -53,7 +32,6 @@ Legend to the tables:
    maximum of trend values over the last quarter except last week.
  - **Regressions [#]**: Number of regressions detected.
  - **Progressions [#]**: Number of progressions detected.
- - **Outliers [#]**: Number of outliers detected.
 
 Tested VPP worker-thread-core combinations (1t1c, 2t2c, 4t4c) are listed
 in separate tables in section 1.x. Followed by trending methodology in
@@ -61,16 +39,16 @@ section 2. and trendline graphs in sections 3.x. Performance test data
 used for trendline graphs is provided in sections 4.x.
 
 VPP worker on 1t1c
-``````````````````
+------------------
 
 .. include:: ../../../_build/_static/vpp/performance-trending-dashboard-1t1c.rst
 
 VPP worker on 2t2c
-``````````````````
+------------------
 
 .. include:: ../../../_build/_static/vpp/performance-trending-dashboard-2t2c.rst
 
 VPP worker on 4t4c
-``````````````````
+------------------
 
 .. include:: ../../../_build/_static/vpp/performance-trending-dashboard-4t4c.rst
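The new text above describes trend evaluation as grouping test runs by the minimum description length (MDL) principle, taking each group's population average as the trend value, and treating jumps between consecutive groups as progressions or regressions. A minimal sketch of that idea follows; it is illustrative only, not the actual CSIT implementation: the cost function, the fixed per-group model penalty, and the function names are assumptions of this example.

```python
import math

MODEL_BITS = 8.0  # assumed fixed per-group model cost (illustrative only)


def group_cost(samples):
    """Crude description length of one group: a fixed model cost plus a
    data cost that grows with the group's spread (n * log2(stdev))."""
    n = len(samples)
    mean = sum(samples) / n
    stdev = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    return MODEL_BITS + n * math.log2(max(stdev, 1e-6))


def mdl_partition(samples):
    """Dynamic programming over all partitions: best[i] is the minimal
    total cost of describing samples[:i]; cut[i] remembers the split."""
    n = len(samples)
    best = [0.0] + [math.inf] * n
    cut = [0] * (n + 1)
    for end in range(1, n + 1):
        for start in range(end):
            cost = best[start] + group_cost(samples[start:end])
            if cost < best[end]:
                best[end], cut[end] = cost, start
    groups, end = [], n
    while end > 0:  # backtrack from the end to recover group boundaries
        groups.append((cut[end], end))
        end = cut[end]
    return groups[::-1]


def trend_and_anomalies(samples):
    """Trend value per group is the population average; each jump between
    consecutive groups is classified as a progression or a regression."""
    groups = mdl_partition(samples)
    trends = [sum(samples[a:b]) / (b - a) for a, b in groups]
    anomalies = ["progression" if curr > prev else "regression"
                 for prev, curr in zip(trends, trends[1:])]
    return trends, anomalies
```

For example, a run of stable results followed by a clearly higher plateau splits into two groups whose description is cheaper than one mixed group, and the jump between the two averages is reported as a progression.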