CSIT-902: Compare performance results before/after meltdown/spectre 97/10197/11
authorTibor Frank <tifrank@cisco.com>
Mon, 22 Jan 2018 14:43:22 +0000 (15:43 +0100)
committerTibor Frank <tifrank@cisco.com>
Fri, 26 Jan 2018 07:40:29 +0000 (08:40 +0100)
 - CSIT-903: LLD
 - CSIT-904: Data model
 - CSIT-905: Algorithm
 - CSIT-906: Static content

Change-Id: Ia7b77fc35ab852110c2f50efb7756ac15576749a
Signed-off-by: Tibor Frank <tifrank@cisco.com>
Signed-off-by: Maciek Konstantynowicz <mkonstan@cisco.com>
docs/report/vpp_performance_tests/index.rst
docs/report/vpp_performance_tests/performance_impact_meltdown/index.rst [new file with mode: 0644]
docs/report/vpp_performance_tests/performance_impact_spectre/index.rst [new file with mode: 0644]
resources/tools/presentation/doc/pal_lld.rst
resources/tools/presentation/generator_tables.py
resources/tools/presentation/requirements.txt
resources/tools/presentation/specification.yaml

index 2d234ce..0cdea48 100644 (file)
@@ -7,6 +7,8 @@ VPP Performance Tests
     csit_release_notes
     packet_throughput_graphs/index
     packet_latency_graphs/index
-    throughput_speedup_multi_core
+    throughput_speedup_multi_core/index
+    performance_impact_meltdown/index
+    performance_impact_spectre/index
     test_environment
     documentation/index
diff --git a/docs/report/vpp_performance_tests/performance_impact_meltdown/index.rst b/docs/report/vpp_performance_tests/performance_impact_meltdown/index.rst
new file mode 100644 (file)
index 0000000..bd3a377
--- /dev/null
@@ -0,0 +1,160 @@
+Performance Impact of Meltdown Patches
+======================================
+
+The following tables present the performance impact on VPP of applying
+patches addressing the Meltdown (Variant3: Rogue Data Cache Load) security
+vulnerability. Incremental kernel patches are applied for Ubuntu 16.04 LTS as
+documented on the `Ubuntu SpectreAndMeltdown page <https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown>`_.
+For a detailed listing of the used software versions and patches, please
+refer to :ref:`test_environment`.
+
+NDR and PDR packet throughput results are compared for 1-core/1-thread,
+2-cores/2-threads and 4-cores/4-threads VPP configurations, with
+reference performance numbers coming from tests without the Meltdown
+patches.
+
+NDR throughput: Best 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_meltdown/meltdown-impact-ndr-1t1c-top.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/meltdown-impact-ndr-1t1c-top.csv}
+      }
+
+NDR throughput: Worst 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_meltdown/meltdown-impact-ndr-1t1c-bottom.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/meltdown-impact-ndr-1t1c-bottom.csv}
+      }
+
+.. only:: html
+
+NDR throughput: All changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Complete results for all NDR tests are available in CSV and pretty
+ASCII formats:
+
+  - `csv format for 1t1c <meltdown-impact-ndr-1t1c-full.csv>`_,
+  - `csv format for 2t2c <meltdown-impact-ndr-2t2c-full.csv>`_,
+  - `csv format for 4t4c <meltdown-impact-ndr-4t4c-full.csv>`_,
+  - `pretty ASCII format for 1t1c <meltdown-impact-ndr-1t1c-full.txt>`_,
+  - `pretty ASCII format for 2t2c <meltdown-impact-ndr-2t2c-full.txt>`_,
+  - `pretty ASCII format for 4t4c <meltdown-impact-ndr-4t4c-full.txt>`_.
+
+PDR throughput: Best 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_meltdown/meltdown-impact-pdr-1t1c-top.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/meltdown-impact-pdr-1t1c-top.csv}
+      }
+
+PDR throughput: Worst 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_meltdown/meltdown-impact-pdr-1t1c-bottom.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/meltdown-impact-pdr-1t1c-bottom.csv}
+      }
+
+.. only:: html
+
+PDR throughput: All changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Complete results for all PDR tests are available in CSV and pretty
+ASCII formats:
+
+  - `csv format for 1t1c <meltdown-impact-pdr-1t1c-full.csv>`_,
+  - `csv format for 2t2c <meltdown-impact-pdr-2t2c-full.csv>`_,
+  - `csv format for 4t4c <meltdown-impact-pdr-4t4c-full.csv>`_,
+  - `pretty ASCII format for 1t1c <meltdown-impact-pdr-1t1c-full.txt>`_,
+  - `pretty ASCII format for 2t2c <meltdown-impact-pdr-2t2c-full.txt>`_,
+  - `pretty ASCII format for 4t4c <meltdown-impact-pdr-4t4c-full.txt>`_.
diff --git a/docs/report/vpp_performance_tests/performance_impact_spectre/index.rst b/docs/report/vpp_performance_tests/performance_impact_spectre/index.rst
new file mode 100644 (file)
index 0000000..cb3b030
--- /dev/null
@@ -0,0 +1,164 @@
+Performance Impact of Meltdown and Spectre Patches
+==================================================
+
+The following tables present the performance impact on VPP of applying
+patches addressing the Meltdown (Variant3: Rogue Data Cache Load) and
+Spectre (Variant1: Bounds Check Bypass; Variant2: Branch Target
+Injection) security vulnerabilities. Incremental kernel patches are
+applied for Ubuntu 16.04 LTS as documented on the
+`Ubuntu SpectreAndMeltdown page <https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown>`_.
+For Spectre, additional processor microcode and BIOS firmware changes
+are applied. For a detailed listing of the used software versions and
+patches, please refer to :ref:`test_environment`.
+
+NDR and PDR packet throughput results are compared for 1-core/1-thread,
+2-cores/2-threads and 4-cores/4-threads VPP configurations, with
+reference performance numbers coming from tests without the Meltdown and
+Spectre patches.
+
+NDR throughput: Best 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_spectre/spectre-impact-ndr-1t1c-top.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/spectre-impact-ndr-1t1c-top.csv}
+      }
+
+NDR throughput: Worst 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_spectre/spectre-impact-ndr-1t1c-bottom.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/spectre-impact-ndr-1t1c-bottom.csv}
+      }
+
+.. only:: html
+
+
+NDR throughput: All changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Complete results for all NDR tests are available in CSV and pretty
+ASCII formats:
+
+  - `csv format for 1t1c <meltdown-spectre-impact-ndr-1t1c-full.csv>`_,
+  - `csv format for 2t2c <meltdown-spectre-impact-ndr-2t2c-full.csv>`_,
+  - `csv format for 4t4c <meltdown-spectre-impact-ndr-4t4c-full.csv>`_,
+  - `pretty ASCII format for 1t1c <meltdown-spectre-impact-ndr-1t1c-full.txt>`_,
+  - `pretty ASCII format for 2t2c <meltdown-spectre-impact-ndr-2t2c-full.txt>`_,
+  - `pretty ASCII format for 4t4c <meltdown-spectre-impact-ndr-4t4c-full.txt>`_.
+
+PDR throughput: Best 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_spectre/spectre-impact-pdr-1t1c-top.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/spectre-impact-pdr-1t1c-top.csv}
+      }
+
+PDR throughput: Worst 20 changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. only:: html
+
+   .. csv-table::
+      :align: center
+      :file: performance_impact_spectre/spectre-impact-pdr-1t1c-bottom.csv
+
+.. only:: latex
+
+   .. raw:: latex
+
+      \makeatletter
+      \csvset{
+        perfimprovements column width/.style={after head=\csv@pretable\begin{longtable}{m{4cm} m{#1} m{#1} m{#1} m{#1} m{#1}}\csv@tablehead},
+      }
+      \makeatother
+
+      {\tiny
+      \csvautobooklongtable[separator=comma,
+        respect all,
+        no check column count,
+        perfimprovements column width=1cm,
+        late after line={\\\hline},
+        late after last line={\end{longtable}}
+        ]{../_tmp/src/vpp_performance_tests/performance_improvements/spectre-impact-pdr-1t1c-bottom.csv}
+      }
+
+.. only:: html
+
+PDR throughput: All changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Complete results for all PDR tests are available in CSV and pretty
+ASCII formats:
+
+  - `csv format for 1t1c <meltdown-spectre-impact-pdr-1t1c-full.csv>`_,
+  - `csv format for 2t2c <meltdown-spectre-impact-pdr-2t2c-full.csv>`_,
+  - `csv format for 4t4c <meltdown-spectre-impact-pdr-4t4c-full.csv>`_,
+  - `pretty ASCII format for 1t1c <meltdown-spectre-impact-pdr-1t1c-full.txt>`_,
+  - `pretty ASCII format for 2t2c <meltdown-spectre-impact-pdr-2t2c-full.txt>`_,
+  - `pretty ASCII format for 4t4c <meltdown-spectre-impact-pdr-4t4c-full.txt>`_.
index 027d6b3..64bde3e 100644 (file)
@@ -1109,8 +1109,9 @@ For example, the element which specification includes:
     filter:
       - "'64B' and 'BASE' and 'NDRDISC' and '1T1C' and ('L2BDMACSTAT' or 'L2BDMACLRN' or 'L2XCFWD') and not 'VHOST'"
 
-will be constructed using data from the job "csit-vpp-perf-1707-all", for all listed
-builds and the tests with the list of tags matching the filter conditions.
+will be constructed using data from the job "csit-vpp-perf-1707-all", for all
+listed builds and the tests with the list of tags matching the filter
+conditions.
 
 The output data structure for filtered test data is:
 
@@ -1189,6 +1190,83 @@ Subset of existing performance tests is covered by TSA graphs.
           "plot-throughput-speedup-analysis"
 
 
+Comparison of results from two sets of the same test executions
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+This algorithm enables comparison of results coming from two sets of the
+same test executions. It is used to quantify performance changes across
+all tests after test environment changes, e.g. operating system
+upgrades/patches or hardware changes.
+
+It is assumed that each set of test executions includes multiple runs
+of the same tests, 10 or more, to verify test result repeatability and
+to yield statistically meaningful results.
+
+Comparison results are presented in a table with a specified number of
+the best and the worst relative changes between the two sets. The
+following table columns are defined:
+
+    - name of the test;
+    - throughput mean value of the reference set;
+    - throughput standard deviation of the reference set;
+    - throughput mean value of the set to compare;
+    - throughput standard deviation of the set to compare;
+    - relative change of the mean values.
+
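The columns above can be illustrated with a minimal sketch using Python's `statistics` module (the function name and sample throughput values are hypothetical, not PAL code; relative change is assumed to be (compare - reference) / reference x 100):

```python
from statistics import mean, stdev

def comparison_row(name, ref_runs, cmp_runs):
    """Build one comparison-table row from repeated throughput runs [pps]."""
    ref_mean = mean(ref_runs) / 1e6   # reference throughput [Mpps]
    cmp_mean = mean(cmp_runs) / 1e6   # compared throughput [Mpps]
    change = (cmp_mean - ref_mean) / ref_mean * 100  # relative change [%]
    return [name,
            round(ref_mean, 2), round(stdev(ref_runs) / 1e6, 2),
            round(cmp_mean, 2), round(stdev(cmp_runs) / 1e6, 2),
            int(round(change))]

# Three runs per set; real comparisons would use 10 or more.
row = comparison_row("64B-1t1c-l2bd",
                     [9.9e6, 10.1e6, 10.0e6],   # reference set
                     [8.9e6, 9.1e6, 9.0e6])     # set to compare
```

Sorting such rows by the last element (relative change) yields the "best" and "worst" ends of the table.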
+**The model**
+
+The model specifies:
+
+    - type: "table" - means this section defines a table.
+    - title: Title of the table.
+    - algorithm: Algorithm used to generate the table. The other parameters
+      in this section must provide all information needed by that algorithm.
+    - output-file-ext: Extension of the output file.
+    - output-file: File which the table will be written to.
+    - reference: The builds used as the reference for the comparison.
+    - compare: The builds which are compared to the reference.
+    - data: Specifies the sources, jobs and builds providing the data for
+      generating the table.
+    - filter: Filter based on tags applied to the input data; if "template"
+      is used, filtering is based on the template.
+    - parameters: Only these parameters will be put into the output data
+      structure.
+    - nr-of-tests-shown: Number of the best and the worst tests presented
+      in the table. Use 0 (zero) to present all tests.
+
+*Example:*
+
+::
+
+    -
+      type: "table"
+      title: "Performance comparison"
+      algorithm: "table_performance_comparison"
+      output-file-ext: ".csv"
+      output-file: "{DIR[DTR,PERF,VPP,IMPRV]}/vpp_performance_comparison"
+      reference:
+        title: "csit-vpp-perf-1801-all - 1"
+        data:
+          csit-vpp-perf-1801-all:
+          - 1
+          - 2
+      compare:
+        title: "csit-vpp-perf-1801-all - 2"
+        data:
+          csit-vpp-perf-1801-all:
+          - 1
+          - 2
+      data:
+        "vpp-perf-comparison"
+      filter: "all"
+      parameters:
+      - "name"
+      - "parent"
+      - "throughput"
+      nr-of-tests-shown: 20
+
+
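After the YAML specification is parsed (e.g. with PyYAML), each element arrives as a plain dict. A hedged sketch of checking the keys the comparison algorithm relies on (the helper and the choice of required keys are illustrative, not actual PAL code):

```python
def validate_table_spec(tbl):
    """Check that a parsed "table" element carries the keys every table needs."""
    required = {"type", "title", "algorithm", "output-file", "output-file-ext"}
    missing = required - set(tbl)
    if missing:
        raise KeyError("table {0!r} is missing keys: {1}".format(
            tbl.get("title", "?"), sorted(missing)))
    return tbl

# Dict mirroring the YAML example above, as it would look after parsing.
spec_element = {
    "type": "table",
    "title": "Performance comparison",
    "algorithm": "table_performance_comparison",
    "output-file-ext": ".csv",
    "output-file": "vpp_performance_comparison",
    "nr-of-tests-shown": 20,
}
validate_table_spec(spec_element)
```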
 Advanced data analytics
 ```````````````````````
 
@@ -1216,7 +1294,8 @@ Tables
  - tables are generated by algorithms implemented in PAL, the model includes the
    algorithm and all necessary information.
  - output format: csv
- - generated tables are stored in specified directories and linked to .rst files.
+ - generated tables are stored in specified directories and linked to .rst
+   files.
 
 
 Plots
@@ -1232,8 +1311,8 @@ Report generation
 -----------------
 
 Report is generated using Sphinx and Read_the_Docs template. PAL generates html
-and pdf formats. It is possible to define the content of the report by specifying
-the version (TODO: define the names and content of versions).
+and pdf formats. It is possible to define the content of the report by
+specifying the version (TODO: define the names and content of versions).
 
 
 The process
@@ -1251,12 +1330,13 @@ The process
 5. Generate the report.
 6. Store the report (Nexus).
 
-The process is model driven. The elements’ models (tables, plots, files and
-report itself) are defined in the specification file. Script reads the elements’
-models from specification file and generates the elements.
+The process is model driven. The elements' models (tables, plots, files
+and report itself) are defined in the specification file. Script reads
+the elements' models from specification file and generates the elements.
 
-It is easy to add elements to be generated, if a new kind of element is
-required, only a new algorithm is implemented and integrated.
+It is easy to add elements to be generated in the report. If a new type
+of element is required, only a new algorithm needs to be implemented
+and integrated.
 
 
 API
@@ -1396,12 +1476,12 @@ PAL functional diagram
 How to add an element
 `````````````````````
 
-Element can be added by adding its model to the specification file. If the
-element will be generated by an existing algorithm, only its parameters must be
-set.
+An element can be added by adding its model to the specification file.
+If the element is to be generated by an existing algorithm, only its
+parameters must be set.
 
-If a brand new type of element will be added, also the algorithm must be
-implemented.
-The algorithms are implemented in the files which names start with "generator".
-The name of the function implementing the algorithm and the name of algorithm in
-the specification file had to be the same.
+If a brand new type of element needs to be added, the algorithm must
+also be implemented. Element generation algorithms are implemented in
+files whose names start with the "generator" prefix. The name of the
+function implementing the algorithm and the name of the algorithm in
+the specification file have to be the same.
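The name-matching convention can be exercised with a small dispatch sketch (the stand-in module and its single generator function are illustrative, not the real PAL code):

```python
import types

# Stand-in for a "generator_*" module holding algorithm implementations.
generator_tables = types.ModuleType("generator_tables")

def table_details(table, input_data):
    # Toy algorithm: real generators write csv/txt files from input_data.
    return "details for {0}".format(table["title"])

generator_tables.table_details = table_details

def generate_element(spec, input_data):
    # The "algorithm" string from the specification file selects the
    # generator function of the same name.
    func = getattr(generator_tables, spec["algorithm"], None)
    if func is None:
        raise NotImplementedError(
            "No algorithm named {0!r}".format(spec["algorithm"]))
    return func(spec, input_data)

result = generate_element({"algorithm": "table_details",
                           "title": "Detailed Test Results"}, None)
```

A misspelled algorithm name in the specification file thus fails fast at dispatch time instead of silently skipping the element.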
index 3bb30b5..71ec431 100644 (file)
@@ -16,6 +16,9 @@
 
 
 import logging
+import csv
+import prettytable
+
 from string import replace
 
 from errors import PresentationError
@@ -64,7 +67,6 @@ def table_details(table, input_data):
 
     # Generate the data for the table according to the model in the table
     # specification
-
     job = table["data"].keys()[0]
     build = str(table["data"][job][0])
     try:
@@ -331,3 +333,174 @@ def _read_csv_template(file_name):
         return tmpl_data
     except IOError as err:
         raise PresentationError(str(err), level="ERROR")
+
+
+def table_performance_comparision(table, input_data):
+    """Generate the table(s) with algorithm: table_performance_comparision
+    specified in the specification file.
+
+    :param table: Table to generate.
+    :param input_data: Data to process.
+    :type table: pandas.Series
+    :type input_data: InputData
+    """
+
+    # Transform the data
+    data = input_data.filter_data(table)
+
+    # Prepare the header of the tables
+    try:
+        header = ["Test case",
+                  "{0} Throughput [Mpps]".format(table["reference"]["title"]),
+                  "{0} stdev [Mpps]".format(table["reference"]["title"]),
+                  "{0} Throughput [Mpps]".format(table["compare"]["title"]),
+                  "{0} stdev [Mpps]".format(table["compare"]["title"]),
+                  "Change [%]"]
+        header_str = ",".join(header) + "\n"
+    except (AttributeError, KeyError) as err:
+        logging.error("The model is invalid, missing parameter: {0}".
+                      format(err))
+        return
+
+    # Prepare data for the table:
+    tbl_dict = dict()
+    for job, builds in table["reference"]["data"].items():
+        for build in builds:
+            for tst_name, tst_data in data[job][str(build)].iteritems():
+                if tbl_dict.get(tst_name, None) is None:
+                    name = "{0}-{1}".format(tst_data["parent"].split("-")[0],
+                                            "-".join(tst_data["name"].
+                                                     split("-")[1:]))
+                    tbl_dict[tst_name] = {"name": name,
+                                          "ref-data": list(),
+                                          "cmp-data": list()}
+                tbl_dict[tst_name]["ref-data"].\
+                    append(tst_data["throughput"]["value"])
+
+    for job, builds in table["compare"]["data"].items():
+        for build in builds:
+            for tst_name, tst_data in data[job][str(build)].iteritems():
+                if tst_name in tbl_dict:  # skip tests absent from reference
+                    tbl_dict[tst_name]["cmp-data"].append(
+                        tst_data["throughput"]["value"])
+
+    tbl_lst = list()
+    for tst_name in tbl_dict.keys():
+        item = [tbl_dict[tst_name]["name"], ]
+        if tbl_dict[tst_name]["ref-data"]:
+            item.append(round(mean(tbl_dict[tst_name]["ref-data"]) / 1000000,
+                              2))
+            item.append(round(stdev(tbl_dict[tst_name]["ref-data"]) / 1000000,
+                              2))
+        else:
+            item.extend([None, None])
+        if tbl_dict[tst_name]["cmp-data"]:
+            item.append(round(mean(tbl_dict[tst_name]["cmp-data"]) / 1000000,
+                              2))
+            item.append(round(stdev(tbl_dict[tst_name]["cmp-data"]) / 1000000,
+                              2))
+        else:
+            item.extend([None, None])
+        if item[1] is not None and item[3] is not None:
+            item.append(int(relative_change(float(item[1]), float(item[3]))))
+        if len(item) == 6:
+            tbl_lst.append(item)
+
+    # Sort the table according to the relative change
+    tbl_lst.sort(key=lambda rel: rel[-1], reverse=True)
+
+    # Generate tables:
+    # All tests in csv:
+    tbl_names = ["{0}-ndr-1t1c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"]),
+                 "{0}-ndr-2t2c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"]),
+                 "{0}-ndr-4t4c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"]),
+                 "{0}-pdr-1t1c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"]),
+                 "{0}-pdr-2t2c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"]),
+                 "{0}-pdr-4t4c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"])
+                 ]
+    for file_name in tbl_names:
+        with open(file_name, "w") as file_handler:
+            file_handler.write(header_str)
+            for test in tbl_lst:
+                if (file_name.split("-")[-3] in test[0] and    # NDR vs PDR
+                        file_name.split("-")[-2] in test[0]):  # cores
+                    test[0] = "-".join(test[0].split("-")[:-1])
+                    file_handler.write(",".join([str(item) for item in test]) +
+                                       "\n")
+
+    # All tests in txt:
+    tbl_names_txt = ["{0}-ndr-1t1c-full.txt".format(table["output-file"]),
+                     "{0}-ndr-2t2c-full.txt".format(table["output-file"]),
+                     "{0}-ndr-4t4c-full.txt".format(table["output-file"]),
+                     "{0}-pdr-1t1c-full.txt".format(table["output-file"]),
+                     "{0}-pdr-2t2c-full.txt".format(table["output-file"]),
+                     "{0}-pdr-4t4c-full.txt".format(table["output-file"])
+                     ]
+
+    for i, txt_name in enumerate(tbl_names_txt):
+        txt_table = None
+        with open(tbl_names[i], 'rb') as csv_file:
+            csv_content = csv.reader(csv_file, delimiter=',', quotechar='"')
+            for row in csv_content:
+                if txt_table is None:
+                    txt_table = prettytable.PrettyTable(row)
+                else:
+                    txt_table.add_row(row)
+        with open(txt_name, "w") as txt_file:
+            txt_file.write(str(txt_table))
+
+    # Selected tests in csv:
+    input_file = "{0}-ndr-1t1c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"])
+    with open(input_file, "r") as in_file:
+        lines = list()
+        for line in in_file:
+            lines.append(line)
+
+    output_file = "{0}-ndr-1t1c-top{1}".format(table["output-file"],
+                                               table["output-file-ext"])
+    with open(output_file, "w") as out_file:
+        out_file.write(header_str)
+        for i, line in enumerate(lines[1:]):
+            if i == table["nr-of-tests-shown"]:
+                break
+            out_file.write(line)
+
+    output_file = "{0}-ndr-1t1c-bottom{1}".format(table["output-file"],
+                                                  table["output-file-ext"])
+    with open(output_file, "w") as out_file:
+        out_file.write(header_str)
+        for i, line in enumerate(lines[-1:0:-1]):
+            if i == table["nr-of-tests-shown"]:
+                break
+            out_file.write(line)
+
+    input_file = "{0}-pdr-1t1c-full{1}".format(table["output-file"],
+                                               table["output-file-ext"])
+    with open(input_file, "r") as in_file:
+        lines = list()
+        for line in in_file:
+            lines.append(line)
+
+    output_file = "{0}-pdr-1t1c-top{1}".format(table["output-file"],
+                                               table["output-file-ext"])
+    with open(output_file, "w") as out_file:
+        out_file.write(header_str)
+        for i, line in enumerate(lines[1:]):
+            if i == table["nr-of-tests-shown"]:
+                break
+            out_file.write(line)
+
+    output_file = "{0}-pdr-1t1c-bottom{1}".format(table["output-file"],
+                                                  table["output-file-ext"])
+    with open(output_file, "w") as out_file:
+        out_file.write(header_str)
+        for i, line in enumerate(lines[-1:0:-1]):
+            if i == table["nr-of-tests-shown"]:
+                break
+            out_file.write(line)
index 8a105fe..d6e0b0e 100644 (file)
@@ -44,6 +44,8 @@
     DIR[DTR,FUNC,HC]: "{DIR[DTR]}/honeycomb_functional_results"
     DIR[DTR,FUNC,NSHSFC]: "{DIR[DTR]}/nshsfc_functional_results"
     DIR[DTR,PERF,VPP,IMPRV]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_improvements"
+    DIR[DTR,PERF,VPP,IMPACT,SPECTRE]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_impact_spectre"
+    DIR[DTR,PERF,VPP,IMPACT,MELTDOWN]: "{DIR[WORKING,SRC]}/vpp_performance_tests/performance_impact_meltdown"
 
     # Detailed test configurations
     DIR[DTC]: "{DIR[WORKING,SRC]}/test_configuration"
 -
   type: "configuration"
   data-sets:
+    vpp-meltdown-impact:
+# TODO: specify data sources
+#      csit-vpp-perf-1801-all:
+#      - 1
+#      - 2
+    vpp-spectre-impact:
+# TODO: specify data sources
+#      csit-vpp-perf-1801-all:
+#      - 1
+#      - 2
     plot-throughput-speedup-analysis:
 # TODO: Add the data sources
 #      csit-vpp-perf-1801-all:
 ###                               T A B L E S                                ###
 ################################################################################
 
+-
+  type: "table"
+  title: "Performance Impact of Meltdown Patches"
+  algorithm: "table_performance_comparision"
+  output-file-ext: ".csv"
+# TODO: specify dir
+  output-file: "{DIR[DTR,PERF,VPP,IMPACT,MELTDOWN]}/meltdown-impact"
+  reference:
+    title: "No Meltdown"
+# TODO: specify data sources
+#    data:
+#      csit-vpp-perf-1801-all:
+#      - 1
+#      - 2
+  compare:
+    title: "Meltdown Patches Applied"
+# TODO: specify data sources
+#    data:
+#      csit-vpp-perf-1801-all:
+#      - 1
+#      - 2
+  data:
+    "vpp-meltdown-impact"
+  filter: "all"
+  parameters:
+  - "name"
+  - "parent"
+  - "throughput"
+  # Number of the best and the worst tests presented in the table. Use 0 (zero)
+  # to present all tests.
+  nr-of-tests-shown: 20
+
+-
+  type: "table"
+  title: "Performance Impact of Spectre Patches"
+  algorithm: "table_performance_comparision"
+  output-file-ext: ".csv"
+# TODO: specify dir
+  output-file: "{DIR[DTR,PERF,VPP,IMPACT,SPECTRE]}/spectre-impact"
+  reference:
+    title: "No Spectre"
+# TODO: specify data sources
+#    data:
+#      csit-vpp-perf-1801-all:
+#      - 1
+#      - 2
+  compare:
+    title: "Spectre Patches Applied"
+# TODO: specify data sources
+#    data:
+#      csit-vpp-perf-1801-all:
+#      - 1
+#      - 2
+  data:
+    "vpp-spectre-impact"
+  filter: "all"
+  parameters:
+  - "name"
+  - "parent"
+  - "throughput"
+  # Number of the best and the worst tests presented in the table. Use 0 (zero)
+  # to present all tests.
+  nr-of-tests-shown: 20
+
 -
   type: "table"
   title: "Performance improvements"