-Dashboard tables list a summary of per test-case VPP MRR performance trend
-values and detected anomalies (Maximum Receive Rate - received packet rate
-under line rate load). Data comes from trending MRR jobs executed every 12 hrs
-(2:00, 14:00 UTC). Trend and anomaly calculations are done over a rolling
-window of <N> samples, currently with N=14 covering last 7 days. Separate
-tables are generated for tested VPP worker-thread-core combinations (1t1c,
-2t2c, 4t4c).
-
-Legend to table:
-
- - "Test case": name of CSIT test case, naming convention here
- `CSIT/csit-test-naming <https://wiki.fd.io/view/CSIT/csit-test-naming>`_
- - "Thput trend [Mpps]": last value of trend over rolling window.
- - "Anomaly value [Mpps]": in precedence - i) highest outlier if 3
- consecutive outliers, ii) highest regression if regressions detected,
- iii) highest progression if progressions detected, iv) nil if normal i.e.
- within trend.
- - "Anomaly vs. Trend [%]": anomaly value vs. trend value.
- - "Classification": outlier, regression, progression, normal - observed
- over a rolling window.
- - "# Outliers": number of outliers detected.
-
-Tables are listed in sections 1.x. Followed by daily trending graphs in
-sections 2.x. Daily trending data used to generate the graphs is listed in
-sections 3.x.
+Performance dashboard tables provide the latest VPP throughput trend,
+trend compliance and detected anomalies, all on a per VPP test case
+basis. Linked trendline graphs enable further drill-down into trendline
+compliance and the sequence and nature of anomalies, and provide
+pointers to performance test builds/logs and VPP (or DPDK) builds.
+Performance trending is currently based on the Maximum Receive Rate
+(MRR) tests. MRR tests measure the packet forwarding rate under the
+maximum load offered by the traffic generator over a set trial
+duration, regardless of packet loss. See the :ref:`trending_methodology`
+section for more detail, including trend and anomaly calculations.
+
+Data samples are generated by the CSIT VPP (and DPDK) performance trending jobs
+executed twice a day (target start: every 12 hrs, 02:00, 14:00 UTC). All
+trend and anomaly evaluation is based on an algorithm which divides test
+runs into groups according to the minimum description length (MDL)
+principle. The trend value is the population average of the results
+within a group.
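The grouping-based evaluation can be sketched as follows. This is a minimal illustrative example, not the actual CSIT implementation: the group boundaries are taken as given here, whereas the trending jobs derive them with a minimum description length search, and the anomaly labels below are a simplified reading of group-average changes.

```python
# Illustrative sketch only -- not the actual CSIT trending code.
# Group boundaries are assumed to be already known; the real jobs
# derive them via a minimum-description-length search.

def trend_and_anomalies(groups):
    """Given test runs already split into groups (lists of Mpps
    samples), return the latest trend value and the anomaly seen
    at each group boundary."""
    # Trend within a group is the population average of its results.
    averages = [sum(g) / len(g) for g in groups]
    anomalies = []
    for prev, curr in zip(averages, averages[1:]):
        if curr < prev:
            anomalies.append("regression")
        elif curr > prev:
            anomalies.append("progression")
        else:
            anomalies.append("normal")
    # The trend value reported on the dashboard is taken from the
    # most recent group.
    return averages[-1], anomalies

# Two groups: throughput drops from ~10 Mpps to ~8 Mpps.
trend, anomalies = trend_and_anomalies([[10.0, 10.2, 9.8], [8.1, 8.0, 7.9]])
# trend is 8.0 Mpps; one regression detected at the group boundary.
```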
+
+Failed tests
+------------
+
+The table lists the tests that failed over the last <N=14> runs of the
+trending jobs.
+
+Legend to the table:
+
+ - **Test Case**: name of FD.io CSIT test case, naming convention
+ `here <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
+ - **Fails [#]**: number of test failures over the trending period.
+ - **Last Fail [Date]**: date and time of the last test failure.
+ - **Last Fail [VPP Build]**: VPP build tested when the test last
+   failed.
+ - **Last Fail [CSIT Build]**: the last CSIT build in which the test
+   failed.
+
+.. include:: ../../../_build/_static/vpp/failed-tests.rst
+
+Dashboard
+---------
+
+Legend to the tables:
+
+ - **Test Case**: name of FD.io CSIT test case, naming convention
+ `here <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.
+ - **Trend [Mpps]**: last value of performance trend.
+ - **Short-Term Change [%]**: relative change of the last trend value
+   vs. the trend value one week ago.
+ - **Long-Term Change [%]**: relative change of the last trend value
+   vs. the maximum trend value over the last quarter, excluding the
+   last week.
+ - **Regressions [#]**: Number of regressions detected.
+ - **Progressions [#]**: Number of progressions detected.
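The two relative-change columns can be illustrated with a short sketch. The formula and the reference points used here are assumptions (last trend value vs. a reference trend value, in percent); the dashboard generator may compute them differently.

```python
# Illustrative sketch only -- the exact reference points used by the
# dashboard may differ. Relative change is assumed to be
# (last - reference) / reference, expressed in percent.

def relative_change(reference, last):
    """Relative change of the last trend value vs. a reference
    trend value, in percent."""
    return (last - reference) * 100.0 / reference

# Short-Term: last trend value vs. the trend value one week ago.
short_term = relative_change(10.0, 9.0)   # -10.0 %
# Long-Term: last trend value vs. the maximum trend value over the
# last quarter, excluding the last week.
long_term = relative_change(12.0, 9.0)    # -25.0 %
```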
+
+Tested VPP worker-thread-core combinations (1t1c, 2t2c, 4t4c) are listed
+in separate tables in sections 1.x, followed by the trending methodology
+in section 2 and trendline graphs in sections 3.x. Performance test data
+used for the trendline graphs is provided in sections 4.x.