From cadab3a8d0f465cc5ac8b005d9670b80c102170b Mon Sep 17 00:00:00 2001
From: Maciek Konstantynowicz
Date: Fri, 27 Apr 2018 12:20:05 +0100
Subject: [PATCH] Edits in trending docs: methodology, dashboard.
Change-Id: I137f4fd9c0a32435de65c9e25088e6775d6c2dca
Signed-off-by: Maciek Konstantynowicz

 docs/cpta/introduction/index.rst |  31 +++
 docs/cpta/methodology/index.rst  | 233 +++++++++++++++++++++
 2 files changed, 145 insertions(+), 119 deletions(-)

diff --git a/docs/cpta/introduction/index.rst b/docs/cpta/introduction/index.rst
index df47dc5cd9..8b3c17029d 100644
--- a/docs/cpta/introduction/index.rst
+++ b/docs/cpta/introduction/index.rst
@@ -1,30 +1,35 @@
-VPP MRR Performance Dashboard
-=============================
+VPP Performance Dashboard
+=========================
 Description
 -----------

-Dashboard tables list a summary of per test-case VPP MRR performance
-trend and trend compliance metrics, and detected number of anomalies.
-Data samples come from the CSIT VPP trending MRR jobs executed twice a
-day, every 12 hrs (02:00, 14:00 UTC). All trend and anomaly evaluation
-is based on a rolling window of data samples, covering last 7
-days.
+Dashboard tables list a summary of latest per test-case VPP Maximum
+Receive Rate (MRR) performance trend, trend compliance metrics and
+detected number of anomalies. Data samples come from the CSIT VPP
+performance trending jobs executed twice a day, every 12 hrs (02:00,
+14:00 UTC). All trend and anomaly evaluation is based on a rolling
+window of data samples, covering last 7 days.
Legend to table:
 - **Test Case** : name of CSIT test case, naming convention in
   `CSIT wiki `_.
 - **Trend [Mpps]** : last value of trend.
-- **Short-Term Change [%]** : Relative change of last trend value vs. last
-  week trend value.
-- **Long-Term Change [%]** : Relative change of last trend value vs. maximum
-  of trend values over the last quarter except last week.
+- **Short-Term Change [%]** : Relative change of last trend value
+  vs. last week trend value.
+- **Long-Term Change [%]** : Relative change of last trend value vs.
+  maximum of trend values over the last quarter except last week.
 - **Regressions [#]** : Number of regressions detected.
 - **Progressions [#]** : Number of progressions detected.
 - **Outliers [#]** : Number of outliers detected.

-All trend and anomaly calculations are defined in :ref:`trending_methodology`.
+MRR tests measure the packet forwarding rate under the maximum load
+offered by traffic generator over a set trial duration, regardless of
+packet loss.
+
+For more detail about MRR tests, trend and anomaly calculations please
+refer to :ref:`trending_methodology` section.
 Tested VPP worker-thread-core combinations (1t1c, 2t2c, 4t4c) are listed
 in separate tables in section 1.x. Followed by trending methodology in
diff --git a/docs/cpta/methodology/index.rst b/docs/cpta/methodology/index.rst
index 1b3a4c553e..29dcae2e7f 100644
--- a/docs/cpta/methodology/index.rst
+++ b/docs/cpta/methodology/index.rst
@@ -1,11 +1,13 @@
-Trending Methodology
-====================
+Performance Trending Methodology
+================================
+
+.. _trending_methodology:
 Continuous Trending and Analysis
 --------------------------------

 This document describes a high-level design of a system for continuous
-measuring, trending and performance change detection for FD.io VPP SW
+performance measuring, trending and change detection for FD.io VPP SW
data plane. It builds upon the existing FD.io CSIT framework with
extensions to its throughput testing methodology, CSIT data analytics
 engine (PAL - Presentation-and-Analytics-Layer) and associated Jenkins
@@ -24,16 +26,35 @@ Performance Trending Tests

Performance trending is currently relying on the Maximum Receive Rate
-(MRR) tests. MRR tests measure the maximum forwarding rate under the
-line rate packet load over a set trial duration, regardless of packet
-loss.
+(MRR) tests. MRR tests measure the packet forwarding rate under the
+maximum load offered by traffic generator over a set trial duration,
+regardless of packet loss. Maximum load for specified Ethernet frame
+size is set to the bidirectional link rate.
Current parameters for performance trending MRR tests:
-- packet sizes: 64B (78B for IPv6 tests) for all tests, IMIX for
-  selected tests (vhost, memif).
-- trial duration: 10sec.
-- execution frequency: twice a day, every 12 hrs (02:00, 14:00 UTC).
+- Ethernet frame sizes: 64B (78B for IPv6 tests) for all tests, IMIX for
+  selected tests (vhost, memif); all quoted sizes include frame CRC, but
+  exclude per frame transmission overhead of 20B (preamble, inter frame
+  gap).
+
+- Maximum load offered: 10GE and 40GE link (sub)rates depending on NIC
+  tested, with the actual packet rate depending on frame size,
+  transmission overhead and traffic generator NIC forwarding capacity.
+
+  - For 10GE NICs the maximum packet rate load is 2 * 14.88 Mpps for 64B,
+    a 10GE bidirectional link rate.
+  - For 40GE NICs the maximum packet rate load is 2 * 18.75 Mpps for 64B,
+    a 40GE bidirectional link sub-rate limited by TG 40GE NIC used,
+    XL710.
+
+- Trial duration: 10sec.
+- Execution frequency: twice a day, every 12 hrs (02:00, 14:00 UTC).
+
+In the future if tested VPP configuration can handle the packet rate
+higher than bidirectional 10GE link rate, e.g. all IMIX tests and
+64B/78B multi-core tests, a higher maximum load will be offered
+(25GE|40GE|100GE).
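The quoted 14.88 Mpps and 18.75 Mpps figures follow from the Ethernet frame
size, the 20B per frame transmission overhead and the link rate (or the TG
XL710 NIC cap for 40GE). A minimal Python sketch of the per-direction rate
calculation, for illustration only (not CSIT code)::

    # Theoretical maximum packet rate per direction for a given frame size.
    # Frame size includes CRC; preamble + inter frame gap add 20B per frame.
    def max_packet_rate_mpps(frame_size_bytes, link_rate_gbps):
        bits_per_frame = (frame_size_bytes + 20) * 8
        return link_rate_gbps * 1e9 / bits_per_frame / 1e6

    print(max_packet_rate_mpps(64, 10))  # ~14.88 Mpps, 10GE 64B line rate
    print(max_packet_rate_mpps(64, 40))  # ~59.52 Mpps theoretical; the TG
                                         # XL710 NIC caps the offered load
                                         # at 18.75 Mpps per direction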
 Performance Trend Analysis
 --------------------------
@@ -51,25 +72,24 @@ Following statistical metrics are proposed as performance trend
indicators over the rolling window of last sets of historical
measurement data:
-- Q1, Q2, Q3 : Quartiles, three points dividing a ranked data set
-  into four equal parts, Q2 is the median of the data.
-- IQR = Q3 - Q1 : Inter Quartile Range, measure of variability, used
-  here to calculate and eliminate outliers.
-- Outliers : extreme values that are at least (1.5 * IQR) below Q1.
-
-  - Note: extreme values that are at least (1.5 * IQR) above Q3 are not
-    considered outliers, and are likely to be classified as
-    progressions.
-
-- TMA: Trimmed Moving Average, average across the data set of the
-  rolling window of values without the outliers. Used here to
-  calculate TMSD.
-- TMSD: Trimmed Moving Standard Deviation, standard deviation over the
-  data set of the rolling window of values without the outliers,
-  requires calculating TMA. Used for anomaly detection.
-- TMM: Trimmed Moving Median, median across the data set of the rolling
-  window of values with all data points, excluding the outliers.
-  Used as a trending value and as a reference for anomaly detection.
+- Q1, Q2, Q3 : Quartiles, three points dividing a ranked data set
+  of values into four equal parts, Q2 is the median of the data.
+- IQR = Q3 - Q1 : Inter Quartile Range, measure of variability, used
+  here to calculate and eliminate outliers.
+- Outliers : extreme values that are at least (1.5 * IQR) below Q1.
+
+  - Note: extreme values that are at least (1.5 * IQR) above Q3 are not
+    considered outliers, and are likely to be classified as
+    progressions.
+
+- TMA : Trimmed Moving Average, average across the data set of
+  values without the outliers. Used here to calculate TMSD.
+- TMSD : Trimmed Moving Standard Deviation, standard deviation over the
+  data set of values without the outliers, requires calculating TMA.
+  Used for anomaly detection.
+- TMM : Trimmed Moving Median, median across the data set of values
+  excluding the outliers. Used as a trending value and as a reference
+  for anomaly detection.
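Given these definitions, the window metrics reduce to a few lines of code; a
minimal Python sketch using the standard library statistics module (Python
3.8+), with an illustrative helper name that is not part of CSIT PAL::

    import statistics

    def trend_metrics(samples):
        # Quartiles of the rolling window of MRR samples.
        q1, q2, q3 = statistics.quantiles(samples, n=4)
        iqr = q3 - q1
        # Only extreme low values are trimmed as outliers (see Note above).
        valid = [x for x in samples if x >= q1 - 1.5 * iqr]
        tma = statistics.mean(valid)     # Trimmed Moving Average
        tmsd = statistics.stdev(valid)   # Trimmed Moving Standard Deviation
        tmm = statistics.median(valid)   # Trimmed Moving Median (trend value)
        return tma, tmsd, tmm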
Outlier Detection
`````````````````
@@ -77,28 +97,24 @@ Outlier Detection
Outlier evaluation of test result of value follows the definition
from previous section:
::

-    Outlier Evaluation Formula          Evaluation Result
-    ====================================================
-    X < (Q1 - 1.5 * IQR)                Outlier
-    X >= (Q1 - 1.5 * IQR)               Valid (For Trending)
+    Outlier Evaluation Formula          Evaluation Result
+    ====================================================
+    X < (Q1 - 1.5 * IQR)                Outlier
+    X >= (Q1 - 1.5 * IQR)               Valid (For Trending)
Anomaly Detection
`````````````````
-To verify compliance of test result of value against defined trend
-metrics and detect anomalies, three simple evaluation formulas are
+To verify compliance of test result of valid value against defined
+trend metrics and detect anomalies, three simple evaluation formulas are
used:
::

-    Anomaly                                      Compliance          Evaluation
-    Evaluation Formula                           Confidence Level    Result
-    =============================================================================
-    (TMM - 3 * TMSD) <= X <= (TMM + 3 * TMSD)    99.73%              Normal
-    X < (TMM - 3 * TMSD)                                             Anomaly Regression
-    X > (TMM + 3 * TMSD)                                             Anomaly Progression
+    Anomaly                                      Compliance          Evaluation
+    Evaluation Formula                           Confidence Level    Result
+    =============================================================================
+    (TMM - 3 * TMSD) <= X <= (TMM + 3 * TMSD)    99.73%              Normal
+    X < (TMM - 3 * TMSD)                                             Anomaly Regression
+    X > (TMM + 3 * TMSD)                                             Anomaly Progression
TMM is used for the central trend reference point instead of TMA as it
is more robust to anomalies.
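Expressed as code, the classification is a direct comparison of a valid
(non-outlier) sample against the normal trending range; an illustrative
Python sketch, not the actual PAL implementation::

    def classify(x, tmm, tmsd):
        # Normal band is TMM +/- 3 * TMSD (~99.73% confidence level).
        if x < tmm - 3 * tmsd:
            return "Anomaly Regression"
        if x > tmm + 3 * tmsd:
            return "Anomaly Progression"
        return "Normal"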
@@ -113,19 +129,17 @@ ago, TMM[last - 1week] and to the maximum of trend values over last
quarter except last week, max(TMM[(last - 3mths)..(last - 1week)]),
respectively. This results in following trend compliance calculations:
::

-    Trend
-    Compliance Metric      Change Formula     V(alue)      R(eference)
-    =============================================================================================
-    Short-Term Change      ((V - R) / R)      TMM[last]    TMM[last - 1week]
-    Long-Term Change       ((V - R) / R)      TMM[last]    max(TMM[(last - 3mths)..(last - 1week)])
+    Trend
+    Compliance Metric      Change Formula     V(alue)      R(eference)
+    =============================================================================================
+    Short-Term Change      ((V - R) / R)      TMM[last]    TMM[last - 1week]
+    Long-Term Change       ((V - R) / R)      TMM[last]    max(TMM[(last - 3mths)..(last - 1week)])
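Both compliance metrics apply the same relative change formula to different
reference points; an illustrative Python sketch, assuming a list of per-build
TMM values (oldest first) and the twice-a-day cadence (roughly 14 samples per
week, 180 per quarter)::

    def compliance_changes(tmm_history, per_week=14, per_quarter=180):
        def change(v, r):
            return (v - r) / r                 # ((V - R) / R)
        v = tmm_history[-1]
        # Short-Term Change: last trend value vs. trend value one week ago.
        short_term = change(v, tmm_history[-per_week])
        # Long-Term Change: last trend value vs. max over last quarter
        # excluding last week.
        long_term = change(v, max(tmm_history[-per_quarter:-per_week]))
        return short_term, long_term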
-Trend Presentation
-------------------
+Performance Trend Presentation
+------------------------------

-Trend Dashboard
-```````````````
+Performance Dashboard
+`````````````````````
Dashboard tables list a summary of per test-case VPP MRR performance
trend and trend compliance metrics and detected number of anomalies.
@@ -134,20 +148,20 @@ Separate tables are generated for tested VPP worker-thread-core
combinations (1t1c, 2t2c, 4t4c). Test case names are linked to
respective trending graphs for ease of navigation thru the test data.
-Trend Graphs
-````````````
+Trendline Graphs
+````````````````
-Trends graphs show per test case measured MRR throughput values with
+Trendline graphs show per test case measured MRR throughput values with
associated trendlines. The graphs are constructed as follows:
-- X-axis represents performance trend job build Id (csit-vpp-perf-mrr-
-  daily-master-build).
-- Y-axis represents MRR throughput in Mpps.
-- Markers to indicate anomaly classification:
+- X-axis represents performance trend job build Id (csit-vpp-perf-mrr-
+  daily-master-build).
+- Y-axis represents MRR throughput in Mpps.
+- Markers to indicate anomaly classification:
-  - Outlier - gray circle around MRR value point.
-  - Regression - red circle.
-  - Progression - green circle.
+  - Outlier - gray circle around MRR value point.
+  - Regression - red circle.
+  - Progression - green circle.
In addition the graphs show dynamic labels while hovering over graph
data points, representing (trend job build Id, MRR value) and the actual
@@ -160,63 +174,70 @@ Jenkins Jobs Description
Performance Trending (PT)
`````````````````````````
-CSIT PT runs regular performance test jobs finding MRR per test case. PT
-is designed as follows:
+CSIT PT runs regular performance test jobs measuring and collecting MRR
+data per test case. PT is designed as follows:
-#. PT job triggers:
+#. PT job triggers:
-   #. Periodic e.g. daily.
-   #. On-demand gerrit triggered.
+   #. Periodic e.g. daily.
+   #. On-demand gerrit triggered.
-#. Measurements and calculations per test case:
+#. Measurements and data calculations per test case:
-   #. MRR - Max Received Rate
+   #. MRR - Max Received Rate
-      #. Measured: Unlimited tolerance of packet loss.
-      #. Send packets at link rate, count total received packets, divide
-         by test trial period.
+      #. Measured: Unlimited tolerance of packet loss.
+      #. Send packets at link rate, count total received packets, divide
+         by test trial period.
-#. Archive MRR per test case.
-#. Archive all counters collected at MRR.
+#. Archive MRR per test case.
+#. Archive all counters collected at MRR.
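The MRR value archived per test case is simply the received packet count
normalized by the trial period, as described in the measurement step above;
an illustrative one-liner (not CSIT code)::

    def mrr_mpps(total_received_packets, trial_duration_s=10.0):
        # Packets received over the trial, regardless of loss, per second.
        return total_received_packets / trial_duration_s / 1e6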
Performance Analysis (PA)
`````````````````````````
-CSIT PA runs performance analysis including trending and anomaly
-detection using specified trend analysis metrics over the rolling window
-of last sets of historical measurement data. PA is defined as
-follows:
+CSIT PA runs performance analysis including trendline calculation, trend
+compliance and anomaly detection using specified trend analysis metrics
+over the rolling window of last sets of historical measurement data.
+PA is defined as follows:
+
+#. PA job triggers:
+
+   #. By PT job at its completion.
+   #. On-demand gerrit triggered.
-#. PA job triggers:
+#. Download and parse archived historical data and the new data:
-   #. By PT job at its completion.
-   #. On-demand gerrit triggered.
+   #. Download RF output.xml files from latest PT job and compressed
+      archived data.
-#. Download and parse archived historical data and the new data:
+   #. Parse out the data filtering test cases listed in PA specification
+      (part of CSIT PAL specification file).
-   #. Evalute new data from latest PT job against the rolling window of
-      sets of historical data.
-   #. Download RF output.xml files and compressed archived data.
-   #. Parse out the data filtering test cases listed in PA specification
-      (part of CSIT PAL specification file).
+   #. Evaluate new data from latest PT job against the rolling window of
+      sets of historical data for trendline calculation, anomaly
+      detection and short-term trend compliance, and against long-term
+      trendline metrics for long-term trend compliance.
-#. Calculate trend metrics for the rolling window of sets of
-   historical data:
+#. Calculate trend metrics for the rolling window of sets of
+   historical data:
-   #. Calculate quartiles Q1, Q2, Q3.
-   #. Trim outliers using IQR.
-   #. Calculate TMA and TMSD.
-   #. Calculate normal trending range per test case based on TMM and TMSD.
+   #. Calculate quartiles Q1, Q2, Q3.
+   #. Trim outliers using IQR.
+   #. Calculate TMA and TMSD.
+   #. Calculate normal trending range per test case based on TMM and
+      TMSD.
-#. Evaluate new test data against trend metrics:
+#. Evaluate new test data against trend metrics:
-   #. If within the range of (TMA +/- 3*TMSD) => Result = Pass,
-      Reason = Normal.
-   #. If below the range => Result = Fail, Reason = Regression.
-   #. If above the range => Result = Pass, Reason = Progression.
+   #. If within the range of (TMA +/- 3*TMSD) => Result = Pass,
+      Reason = Normal. (to be updated based on final Jenkins code)
+   #. If below the range => Result = Fail, Reason = Regression.
+   #. If above the range => Result = Pass, Reason = Progression.
-#. Generate and publish results
+#. Generate and publish results
-   #. Relay evaluation result to job result.
-   #. Generate a new set of trend summary dashboard and graphs.
-   #. Publish trend dashboard and graphs in html format on https://docs.fd.io/.
+   #. Relay evaluation result to job result. (to be updated based on
+      final Jenkins code)
+   #. Generate a new set of trend summary dashboard and graphs.
+   #. Publish trend dashboard and graphs in html format on https://docs.fd.io/.
--
2.16.6