-Presentation and Analytics Layer
-================================
+Presentation and Analytics
+==========================
Overview
--------
.. raw:: latex
\begin{figure}[H]
- \centering
- \includesvg[width=0.90\textwidth]{../_tmp/src/csit_framework_documentation/pal_layers}
- \label{fig:pal_layers}
+ \centering
+ \graphicspath{{../_tmp/src/csit_framework_documentation/}}
+ \includegraphics[width=0.90\textwidth]{pal_layers}
+ \label{fig:pal_layers}
\end{figure}
.. only:: html
DIR[DTR]: "{DIR[WORKING,SRC]}/detailed_test_results"
DIR[DTR,PERF,DPDK]: "{DIR[DTR]}/dpdk_performance_results"
DIR[DTR,PERF,VPP]: "{DIR[DTR]}/vpp_performance_results"
- DIR[DTR,PERF,HC]: "{DIR[DTR]}/honeycomb_performance_results"
DIR[DTR,FUNC,VPP]: "{DIR[DTR]}/vpp_functional_results"
- DIR[DTR,FUNC,HC]: "{DIR[DTR]}/honeycomb_functional_results"
- DIR[DTR,FUNC,NSHSFC]: "{DIR[DTR]}/nshsfc_functional_results"
DIR[DTR,PERF,VPP,IMPRV]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_improvements"
# Detailed test configurations
urls:
URL[JENKINS,CSIT]: "https://jenkins.fd.io/view/csit/job"
- URL[JENKINS,HC]: "https://jenkins.fd.io/view/hc2vpp/job"
+ URL[S3_STORAGE,LOG]: "https://logs.nginx.service.consul/vex-yul-rot-jenkins-1"
+ URL[NEXUS,LOG]: "https://logs.fd.io/production/vex-yul-rot-jenkins-1"
+ URL[NEXUS,DOC]: "https://docs.fd.io/csit"
+ DIR[NEXUS,DOC]: "report/_static/archive"
make-dirs:
# List the directories which are created while preparing the environment.
-
build: 9
file: "csit-dpdk-perf-1707-all__9.xml"
- csit-nsh_sfc-verify-func-1707-ubuntu1604-virl:
- -
- build: 2
- file: "csit-nsh_sfc-verify-func-1707-ubuntu1604-virl-2.xml"
csit-vpp-functional-1707-ubuntu1604-virl:
-
build: lastSuccessfulBuild
processed.
::
+
-
type: "static"
src-path: "{DIR[RST]}"
- 9
hc2vpp-csit-integration-1707-ubuntu1604:
- lastSuccessfulBuild
- csit-nsh_sfc-verify-func-1707-ubuntu1604-virl:
- - 2
-
Section: Output
'''''''''''''''
The structure of the section "Plot" is as follows (example of a plot showing
VPP HTTP server performance in a box chart with pre-defined data
-"plot-vpp-httlp-server-performance" set and plot layout "plot-cps"):
+"plot-vpp-http-server-performance" set and plot layout "plot-cps"):
::
-
type: "plot"
title: "VPP HTTP Server Performance"
- algorithm: "plot_http_server_performance_box"
+ algorithm: "plot_http_server_perf_box"
output-file-type: ".html"
output-file: "{DIR[STATIC,VPP]}/http-server-performance-cps"
data:
-
type: "table"
title: "Performance comparison"
- algorithm: "table_performance_comparison"
+ algorithm: "table_perf_comparison"
output-file-ext: ".csv"
output-file: "{DIR[DTR,PERF,VPP,IMPRV]}/vpp_performance_comparison"
reference:
and integrated.
+Continuous Performance Measurements and Trending
+------------------------------------------------
+
+Performance analysis and trending execution sequence:
+`````````````````````````````````````````````````````
+
+CSIT PA runs performance analysis, change detection and trending using the
+specified trend analysis metrics over a rolling window of the last <N> sets of
+historical measurement data. PA is defined as follows:
+
+ #. PA job triggers:
+
+ #. By PT job at its completion.
+ #. Manually from Jenkins UI.
+
+ #. Download and parse archived historical data and the new data:
+
+ #. New data from latest PT job is evaluated against the rolling window
+ of <N> sets of historical data.
+ #. Download RF output.xml files and compressed archived data.
+ #. Parse out the data filtering test cases listed in PA specification
+ (part of CSIT PAL specification file).
+
+ #. Calculate trend metrics for the rolling window of <N> sets of historical
+ data:
+
+ #. Calculate quartiles Q1, Q2, Q3.
+ #. Trim outliers using IQR.
+ #. Calculate TMA and TMSD.
+ #. Calculate normal trending range per test case based on TMA and TMSD.
+
+ #. Evaluate new test data against trend metrics:
+
+ #. If within the range of (TMA +/- 3*TMSD) => Result = Pass,
+ Reason = Normal.
+ #. If below the range => Result = Fail, Reason = Regression.
+ #. If above the range => Result = Pass, Reason = Progression.
+
+ #. Generate and publish results:
+
+ #. Relay evaluation result to job result.
+ #. Generate a new set of trend analysis summary graphs and drill-down
+ graphs.
+
+ #. Summary graphs to include measured values with Normal,
+ Progression and Regression markers. MM shown in the background if
+ possible.
+ #. Drill-down graphs to include MM, TMA and TMSD.
+
+ #. Publish trend analysis graphs in html format on
+ https://s3-docs.fd.io/csit/master/trending/.
+
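Steps 3 and 4 above can be sketched in code. The quartile calculation, IQR trimming and the TMA +/- 3*TMSD evaluation follow the text directly; the outlier constant (1.5, mirroring the ``outlier-const`` dashboard parameter) and the sample window below are illustrative assumptions, not the actual PAL implementation.

```python
import statistics

def trend_metrics(samples, outlier_const=1.5):
    """Compute trimmed moving average (TMA) and trimmed moving standard
    deviation (TMSD) over a rolling window of historical samples."""
    # Step 3a: calculate quartiles Q1, Q2, Q3.
    q1, q2, q3 = statistics.quantiles(samples, n=4)
    # Step 3b: trim outliers using the IQR rule.
    iqr = q3 - q1
    trimmed = [s for s in samples
               if q1 - outlier_const * iqr <= s <= q3 + outlier_const * iqr]
    # Step 3c: calculate TMA and TMSD on the trimmed window.
    tma = statistics.mean(trimmed)
    tmsd = statistics.stdev(trimmed) if len(trimmed) > 1 else 0.0
    return tma, tmsd

def evaluate(new_sample, tma, tmsd):
    """Step 4: classify a new measurement against the normal trending
    range (TMA +/- 3*TMSD)."""
    if new_sample < tma - 3 * tmsd:
        return "Fail", "Regression"   # below the range
    if new_sample > tma + 3 * tmsd:
        return "Pass", "Progression"  # above the range
    return "Pass", "Normal"           # within the range

# Hypothetical window of MRR results; 3.0 is an outlier trimmed by IQR.
window = [9.9, 10.1, 10.0, 10.2, 9.8, 10.0, 3.0, 10.1]
tma, tmsd = trend_metrics(window)
```

A new result of e.g. 4.0 would then evaluate to ``("Fail", "Regression")``, while 10.1 stays ``("Pass", "Normal")``.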
+
+Parameters to specify:
+``````````````````````
+
+*General section - parameters common to all plots:*
+
+ - type: "cpta";
+ - title: The title of this section;
+ - output-file-type: only ".html" is supported;
+ - output-file: path where the generated files will be stored.
+
+*Plots section:*
+
+ - plot title;
+ - output file name;
+ - input data for plots;
+
+ - job to be monitored - the Jenkins job whose results are used as input
+ data for this test;
+ - builds used for trending plot(s) - specified by a list of build
+ numbers or by a range of builds defined by the first and the last
+ build number;
+
+ - tests to be displayed in the plot defined by a filter;
+ - list of parameters to extract from the data;
+ - plot layout.
+
+*Example:*
+
+::
+
+ -
+ type: "cpta"
+ title: "Continuous Performance Trending and Analysis"
+ output-file-type: ".html"
+ output-file: "{DIR[STATIC,VPP]}/cpta"
+ plots:
+
+ - title: "VPP 1T1C L2 64B Packet Throughput - Trending"
+ output-file-name: "l2-1t1c-x520"
+ data: "plot-performance-trending-vpp"
+ filter: "'NIC_Intel-X520-DA2' and 'MRR' and '64B' and ('BASE' or 'SCALE') and '1T1C' and ('L2BDMACSTAT' or 'L2BDMACLRN' or 'L2XCFWD') and not 'VHOST' and not 'MEMIF'"
+ parameters:
+ - "result"
+ layout: "plot-cpta-vpp"
+
+ - title: "DPDK 4T4C IMIX MRR Trending"
+ output-file-name: "dpdk-imix-4t4c-xl710"
+ data: "plot-performance-trending-dpdk"
+ filter: "'NIC_Intel-XL710' and 'IMIX' and 'MRR' and '4T4C' and 'DPDK'"
+ parameters:
+ - "result"
+ layout: "plot-cpta-dpdk"
+
+The Dashboard
+`````````````
+
+Performance dashboard tables provide the latest VPP throughput trend, trend
+compliance and detected anomalies, all on a per VPP test case basis.
+The Dashboard is generated as three tables for 1t1c, 2t2c and 4t4c MRR tests.
+
+First, the .csv tables are generated (only the table for 1t1c is shown):
+
+::
+
+ -
+ type: "table"
+ title: "Performance trending dashboard"
+ algorithm: "table_perf_trending_dash"
+ output-file-ext: ".csv"
+ output-file: "{DIR[STATIC,VPP]}/performance-trending-dashboard-1t1c"
+ data: "plot-performance-trending-all"
+ filter: "'MRR' and '1T1C'"
+ parameters:
+ - "name"
+ - "parent"
+ - "result"
+ ignore-list:
+ - "tests.vpp.perf.l2.10ge2p1x520-eth-l2bdscale1mmaclrn-mrr.tc01-64b-1t1c-eth-l2bdscale1mmaclrn-ndrdisc"
+ outlier-const: 1.5
+ window: 14
+ evaluated-window: 14
+ long-trend-window: 180
+
+Then, HTML tables embedded in .rst files are generated:
+
+::
+
+ -
+ type: "table"
+ title: "HTML performance trending dashboard 1t1c"
+ algorithm: "table_perf_trending_dash_html"
+ input-file: "{DIR[STATIC,VPP]}/performance-trending-dashboard-1t1c.csv"
+ output-file: "{DIR[STATIC,VPP]}/performance-trending-dashboard-1t1c.rst"
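This second step can be approximated as reading the .csv dashboard and wrapping its rows in a ``.. raw:: html`` directive inside the generated .rst file. The sketch below is only an illustration of the idea; the real ``table_perf_trending_dash_html`` algorithm may format the table differently.

```python
import csv
import io

def csv_to_rst_html_table(csv_text):
    """Render csv dashboard rows as an HTML table wrapped in an RST
    '.. raw:: html' directive, roughly what the HTML dashboard step does."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    html = ["<table border=1>"]
    for i, row in enumerate(rows):
        tag = "th" if i == 0 else "td"  # first csv row is the header
        cells = "".join(f"<{tag}>{cell}</{tag}>" for cell in row)
        html.append("  <tr>{}</tr>".format(cells))
    html.append("</table>")
    # Indent the HTML body under the RST directive.
    body = "\n".join("    " + line for line in html)
    return ".. raw:: html\n\n" + body + "\n"

# Hypothetical two-row dashboard csv.
rst = csv_to_rst_html_table("Test Case,Trend,Classification\n"
                            "tc01,10.0,Normal\n")
```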
+
+Root Cause Analysis
+-------------------
+
+Root Cause Analysis (RCA) works on archived performance results, re-analysing
+the available data for a specified:
+
+ - range of job builds,
+ - set of specific tests and
+ - PASS/FAIL criteria to detect performance change.
+
+In addition, PAL generates trending plots to show performance over the specified
+time interval.
+
+Root Cause Analysis - Option 1: Analysing Archived VPP Results
+``````````````````````````````````````````````````````````````
+
+It can be used to speed up the process, or when the existing data is sufficient.
+In this case, PAL uses existing data saved in Nexus, searches for performance
+degradations and generates plots to show performance over the specified time
+interval for the selected tests.
+
+Execution Sequence
+''''''''''''''''''
+
+ #. Download and parse archived historical data and the new data.
+ #. Calculate trend metrics.
+ #. Find regression / progression.
+ #. Generate and publish results:
+
+ #. Summary graphs to include measured values with Progression and
+ Regression markers.
+ #. List the DUT build(s) where the anomalies were detected.
+
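The sequence above amounts to scanning the archived results between the known-good and the known-bad build and flagging the builds where the trend breaks. A minimal sketch, assuming per-build throughput means have already been parsed out of the archived output.xml files; the function name, data layout and PASS/FAIL threshold are hypothetical, not the PAL API:

```python
def find_anomalies(builds, results, threshold=0.95):
    """Scan builds in order and flag each build whose result falls outside
    a band around the last normal result.

    builds    - ordered build numbers between the Good and the Bad build
    results   - mapping build number -> measured throughput (e.g. MRR mean)
    threshold - illustrative PASS/FAIL criterion (0.95 = a 5% band)."""
    anomalies = []
    baseline = results[builds[0]]  # the known-good build sets the baseline
    for build in builds[1:]:
        value = results[build]
        if value < threshold * baseline:
            anomalies.append((build, "Regression"))
        elif value > (2 - threshold) * baseline:
            anomalies.append((build, "Progression"))
        else:
            baseline = value  # a normal result moves the baseline forward
    return anomalies

# Hypothetical archived data: build 102 drops well below the trend.
res = {100: 10.0, 101: 10.1, 102: 9.0, 103: 10.0}
flagged = find_anomalies([100, 101, 102, 103], res)
```

Here ``flagged`` would list build 102 as the regression, i.e. the DUT build to report in the last step of the sequence.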
+CSIT PAL Specification
+''''''''''''''''''''''
+
+ - What to test:
+
+ - first build (Good); specified by the Jenkins job name and the build
+ number
+ - last build (Bad); specified by the Jenkins job name and the build
+ number
+ - step (1..n).
+
+ - Data:
+
+ - tests of interest; list of tests (full name is used) whose results are
+ used.
+
+*Example:*
+
+::
+
+ TODO
+
+
API
---
.. raw:: latex
\begin{figure}[H]
- \centering
- \includesvg[width=0.90\textwidth]{../_tmp/src/csit_framework_documentation/pal_func_diagram}
- \label{fig:pal_func_diagram}
+ \centering
+ \graphicspath{{../_tmp/src/csit_framework_documentation/}}
+ \includegraphics[width=0.90\textwidth]{pal_func_diagram}
+ \label{fig:pal_func_diagram}
\end{figure}
.. only:: html