PAL: Add s3_storage as the data source
[csit.git] / resources / tools / presentation / doc / pal_lld.rst
index 64bde3e..38ab6d9 100644 (file)
@@ -1,5 +1,5 @@
-Presentation and Analytics Layer
-================================
+Presentation and Analytics
+==========================
 
 Overview
 --------
@@ -42,9 +42,10 @@ sub-layers, bottom up:
     .. raw:: latex
 
         \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/csit_framework_documentation/pal_layers}
-            \label{fig:pal_layers}
+            \centering
+                \graphicspath{{../_tmp/src/csit_framework_documentation/}}
+                \includegraphics[width=0.90\textwidth]{pal_layers}
+                \label{fig:pal_layers}
         \end{figure}
 
 .. only:: html
@@ -168,10 +169,7 @@ The structure of the section "Environment" is as follows (example):
         DIR[DTR]: "{DIR[WORKING,SRC]}/detailed_test_results"
         DIR[DTR,PERF,DPDK]: "{DIR[DTR]}/dpdk_performance_results"
         DIR[DTR,PERF,VPP]: "{DIR[DTR]}/vpp_performance_results"
-        DIR[DTR,PERF,HC]: "{DIR[DTR]}/honeycomb_performance_results"
         DIR[DTR,FUNC,VPP]: "{DIR[DTR]}/vpp_functional_results"
-        DIR[DTR,FUNC,HC]: "{DIR[DTR]}/honeycomb_functional_results"
-        DIR[DTR,FUNC,NSHSFC]: "{DIR[DTR]}/nshsfc_functional_results"
         DIR[DTR,PERF,VPP,IMPRV]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_improvements"
 
         # Detailed test configurations
@@ -189,7 +187,10 @@ The structure of the section "Environment" is as follows (example):
 
       urls:
         URL[JENKINS,CSIT]: "https://jenkins.fd.io/view/csit/job"
-        URL[JENKINS,HC]: "https://jenkins.fd.io/view/hc2vpp/job"
+        URL[S3_STORAGE,LOG]: "https://logs.nginx.service.consul/vex-yul-rot-jenkins-1"
+        URL[NEXUS,LOG]: "https://logs.fd.io/production/vex-yul-rot-jenkins-1"
+        URL[NEXUS,DOC]: "https://docs.fd.io/csit"
+        DIR[NEXUS,DOC]: "report/_static/archive"
 
       make-dirs:
       # List the directories which are created while preparing the environment.
@@ -366,10 +367,6 @@ The structure of the section "Debug" is as follows (example):
         -
           build: 9
           file: "csit-dpdk-perf-1707-all__9.xml"
-        csit-nsh_sfc-verify-func-1707-ubuntu1604-virl:
-        -
-          build: 2
-          file: "csit-nsh_sfc-verify-func-1707-ubuntu1604-virl-2.xml"
         csit-vpp-functional-1707-ubuntu1604-virl:
         -
           build: lastSuccessfulBuild
@@ -401,6 +398,7 @@ This section has these parts:
    processed.
 
 ::
+
     -
       type: "static"
       src-path: "{DIR[RST]}"
@@ -471,9 +469,6 @@ The structure of the section "Input" is as follows (example from 17.07 report):
         - 9
         hc2vpp-csit-integration-1707-ubuntu1604:
         - lastSuccessfulBuild
-        csit-nsh_sfc-verify-func-1707-ubuntu1604-virl:
-        - 2
-
 
 Section: Output
 '''''''''''''''
@@ -840,6 +835,35 @@ latency in a box chart):
         width: 700
         height: 1000
 
+The structure of the section "Plot" is as follows (example of a plot showing
+VPP HTTP server performance in a box chart, using the pre-defined data set
+"plot-vpp-http-server-performance" and the plot layout "plot-cps"):
+
+::
+
+    -
+      type: "plot"
+      title: "VPP HTTP Server Performance"
+      algorithm: "plot_http_server_perf_box"
+      output-file-type: ".html"
+      output-file: "{DIR[STATIC,VPP]}/http-server-performance-cps"
+      data:
+        "plot-vpp-http-server-performance"
+      # Keep this formatting, the filter is enclosed with " (quotation mark) and
+      # each tag is enclosed with ' (apostrophe).
+      filter: "'HTTP' and 'TCP_CPS'"
+      parameters:
+      - "result"
+      - "name"
+      traces:
+        hoverinfo: "x+y"
+        boxpoints: "outliers"
+        whiskerwidth: 0
+      layout:
+        title: "VPP HTTP Server Performance"
+        layout:
+          "plot-cps"
+
 
 Section: file
 '''''''''''''
@@ -1242,7 +1266,7 @@ The model specifies:
     -
       type: "table"
       title: "Performance comparison"
-      algorithm: "table_performance_comparison"
+      algorithm: "table_perf_comparison"
       output-file-ext: ".csv"
       output-file: "{DIR[DTR,PERF,VPP,IMPRV]}/vpp_performance_comparison"
       reference:
@@ -1339,6 +1363,209 @@ of an element is required, only a new algorithm needs to be implemented
 and integrated.
 
 
+Continuous Performance Measurements and Trending
+------------------------------------------------
+
+Performance analysis and trending execution sequence:
+`````````````````````````````````````````````````````
+
+CSIT PA runs performance analysis, change detection and trending using the
+specified trend analysis metrics over a rolling window of the last <N> sets of
+historical measurement data. PA is defined as follows:
+
+    #. PA job triggers:
+
+        #. By PT job at its completion.
+        #. Manually from Jenkins UI.
+
+    #. Download and parse archived historical data and the new data:
+
+        #. New data from latest PT job is evaluated against the rolling window
+           of <N> sets of historical data.
+        #. Download RF output.xml files and compressed archived data.
+        #. Parse out the data filtering test cases listed in PA specification
+           (part of CSIT PAL specification file).
+
+    #. Calculate trend metrics for the rolling window of <N> sets of historical
+       data:
+
+        #. Calculate quartiles Q1, Q2, Q3.
+        #. Trim outliers using IQR.
+        #. Calculate TMA and TMSD.
+        #. Calculate normal trending range per test case based on TMA and TMSD.
+
+    #. Evaluate new test data against trend metrics:
+
+        #. If within the range of (TMA +/- 3*TMSD) => Result = Pass,
+           Reason = Normal.
+        #. If below the range => Result = Fail, Reason = Regression.
+        #. If above the range => Result = Pass, Reason = Progression.
+
+    #. Generate and publish results:
+
+        #. Relay evaluation result to job result.
+        #. Generate a new set of trend analysis summary graphs and drill-down
+           graphs.
+
+            #. Summary graphs to include measured values with Normal,
+               Progression and Regression markers. MM shown in the background if
+               possible.
+            #. Drill-down graphs to include MM, TMA and TMSD.
+
+        #. Publish trend analysis graphs in HTML format on
+           https://docs.fd.io/csit/master/trending/.
+
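The metric calculation and evaluation in steps 3 and 4 above can be sketched in Python. This is a minimal illustration only; the function and variable names, and the use of `statistics.quantiles`, are assumptions for the sketch, not the actual CSIT PAL code:

```python
import statistics


def classify(history, new_value, outlier_const=1.5):
    """Classify a new measurement against the rolling window of
    historical data (hypothetical helper, not the CSIT implementation)."""
    # Calculate quartiles Q1, Q2, Q3 of the rolling window.
    q1, q2, q3 = statistics.quantiles(history, n=4)
    # Trim outliers using the inter-quartile range (IQR).
    iqr = q3 - q1
    low, high = q1 - outlier_const * iqr, q3 + outlier_const * iqr
    trimmed = [v for v in history if low <= v <= high]
    # Trimmed Moving Average (TMA) and Trimmed Moving Standard Deviation (TMSD).
    tma = statistics.mean(trimmed)
    tmsd = statistics.pstdev(trimmed)
    # Evaluate against the normal trending range TMA +/- 3*TMSD.
    if new_value < tma - 3 * tmsd:
        return "Fail", "Regression"
    if new_value > tma + 3 * tmsd:
        return "Pass", "Progression"
    return "Pass", "Normal"
```

The result/reason pair then maps directly onto the job result relayed in step 5.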
+
+Parameters to specify:
+``````````````````````
+
+*General section - parameters common to all plots:*
+
+    - type: "cpta";
+    - title: The title of this section;
+    - output-file-type: only ".html" is supported;
+    - output-file: path where the generated files will be stored.
+
+*Plots section:*
+
+    - plot title;
+    - output file name;
+    - input data for plots;
+
+        - job to be monitored - the Jenkins job whose results are used as
+          input data for this test;
+        - builds used for trending plot(s) - specified by a list of build
+          numbers or by a range of builds defined by the first and the last
+          build number;
+
+    - tests to be displayed in the plot defined by a filter;
+    - list of parameters to extract from the data;
+    - plot layout.
+
+*Example:*
+
+::
+
+    -
+      type: "cpta"
+      title: "Continuous Performance Trending and Analysis"
+      output-file-type: ".html"
+      output-file: "{DIR[STATIC,VPP]}/cpta"
+      plots:
+
+        - title: "VPP 1T1C L2 64B Packet Throughput - Trending"
+          output-file-name: "l2-1t1c-x520"
+          data: "plot-performance-trending-vpp"
+          filter: "'NIC_Intel-X520-DA2' and 'MRR' and '64B' and ('BASE' or 'SCALE') and '1T1C' and ('L2BDMACSTAT' or 'L2BDMACLRN' or 'L2XCFWD') and not 'VHOST' and not 'MEMIF'"
+          parameters:
+          - "result"
+          layout: "plot-cpta-vpp"
+
+        - title: "DPDK 4T4C IMIX MRR Trending"
+          output-file-name: "dpdk-imix-4t4c-xl710"
+          data: "plot-performance-trending-dpdk"
+          filter: "'NIC_Intel-XL710' and 'IMIX' and 'MRR' and '4T4C' and 'DPDK'"
+          parameters:
+          - "result"
+          layout: "plot-cpta-dpdk"
+
+The Dashboard
+`````````````
+
+Performance dashboard tables provide the latest VPP throughput trend, trend
+compliance and detected anomalies, all on a per VPP test case basis.
+The Dashboard is generated as three tables for 1t1c, 2t2c and 4t4c MRR tests.
+
+First, the .csv tables are generated (only the table for 1t1c is shown):
+
+::
+
+    -
+      type: "table"
+      title: "Performance trending dashboard"
+      algorithm: "table_perf_trending_dash"
+      output-file-ext: ".csv"
+      output-file: "{DIR[STATIC,VPP]}/performance-trending-dashboard-1t1c"
+      data: "plot-performance-trending-all"
+      filter: "'MRR' and '1T1C'"
+      parameters:
+      - "name"
+      - "parent"
+      - "result"
+      ignore-list:
+      - "tests.vpp.perf.l2.10ge2p1x520-eth-l2bdscale1mmaclrn-mrr.tc01-64b-1t1c-eth-l2bdscale1mmaclrn-ndrdisc"
+      outlier-const: 1.5
+      window: 14
+      evaluated-window: 14
+      long-trend-window: 180
+
+Then, html tables stored inside .rst files are generated:
+
+::
+
+    -
+      type: "table"
+      title: "HTML performance trending dashboard 1t1c"
+      algorithm: "table_perf_trending_dash_html"
+      input-file: "{DIR[STATIC,VPP]}/performance-trending-dashboard-1t1c.csv"
+      output-file: "{DIR[STATIC,VPP]}/performance-trending-dashboard-1t1c.rst"
+
+Root Cause Analysis
+-------------------
+
+Root Cause Analysis (RCA) re-analyses archived performance results for a
+specified:
+
+    - range of job builds,
+    - set of specific tests and
+    - PASS/FAIL criteria to detect performance change.
+
+In addition, PAL generates trending plots to show performance over the specified
+time interval.
+
+Root Cause Analysis - Option 1: Analysing Archived VPP Results
+``````````````````````````````````````````````````````````````
+
+This option can be used to speed up the process, or when the existing data is
+sufficient. In this case, PAL uses existing data saved in Nexus, searches for
+performance degradations and generates plots to show performance over the
+specified time interval for the selected tests.
+
+Execution Sequence
+''''''''''''''''''
+
+    #. Download and parse archived historical data and the new data.
+    #. Calculate trend metrics.
+    #. Find regression / progression.
+    #. Generate and publish results:
+
+        #. Summary graphs to include measured values with Progression and
+           Regression markers.
+        #. List the DUT build(s) where the anomalies were detected.
+
+CSIT PAL Specification
+''''''''''''''''''''''
+
+    - What to test:
+
+        - first build (Good); specified by the Jenkins job name and the build
+          number
+        - last build (Bad); specified by the Jenkins job name and the build
+          number
+        - step (1..n).
+
+    - Data:
+
+        - tests of interest; a list of tests (full names are used) whose
+          results are used.
+
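The Good/Bad build scan described by this specification could be sketched as follows. This is a hypothetical illustration: the function name, the build-to-result mapping and the fixed PASS/FAIL threshold are assumptions for the sketch, not the actual CSIT PAL logic:

```python
def find_regression(build_results, good, bad, step=1, threshold=0.95):
    """Walk builds from the known-good build to the known-bad build with
    the given step and report the first build whose result falls below
    threshold * the good build's result (hypothetical sketch)."""
    baseline = build_results[good]
    for build in range(good + step, bad + 1, step):
        # Builds missing from the archive are skipped (treated as baseline).
        if build_results.get(build, baseline) < threshold * baseline:
            return build  # first build where the performance change shows
    return None  # no regression detected between Good and Bad
```

A larger step trades precision for fewer re-analysed builds; with step > 1 the scan reports the first *checked* build past the change, not necessarily the exact offending build.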
+*Example:*
+
+::
+
+    TODO
+
+
 API
 ---
 
@@ -1461,9 +1688,10 @@ PAL functional diagram
     .. raw:: latex
 
         \begin{figure}[H]
-        \centering
-            \includesvg[width=0.90\textwidth]{../_tmp/src/csit_framework_documentation/pal_func_diagram}
-            \label{fig:pal_func_diagram}
+            \centering
+                \graphicspath{{../_tmp/src/csit_framework_documentation/}}
+                \includegraphics[width=0.90\textwidth]{pal_func_diagram}
+                \label{fig:pal_func_diagram}
         \end{figure}
 
 .. only:: html