X-Git-Url: https://gerrit.fd.io/r/gitweb?a=blobdiff_plain;f=docs%2Fcpta%2Fintroduction%2Fintroduction.rst;fp=docs%2Fcpta%2Fintroduction%2Fintroduction.rst;h=0000000000000000000000000000000000000000;hb=59734b2f72358f6315cbfadc1b1a0ef56b7e23ec;hp=e095d8f18bdf5a913619624e6fe4b61f4eef50fb;hpb=4aa89eb4eb3c954a5540cb70b23315ce652eef4f;p=csit.git

diff --git a/docs/cpta/introduction/introduction.rst b/docs/cpta/introduction/introduction.rst
deleted file mode 100644
index e095d8f18b..0000000000
--- a/docs/cpta/introduction/introduction.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-Description
-===========
-
-Performance dashboard tables provide the latest VPP throughput trend,
-trend compliance and detected anomalies, all on a per VPP test case
-basis. Linked trendline graphs enable further drill-down into the
-trendline compliance, sequence and nature of anomalies, as well as
-pointers to performance test builds/logs and VPP (or DPDK) builds.
-Performance trending is currently based on Maximum Receive Rate (MRR) tests.
-MRR tests measure the packet forwarding rate under the maximum load offered
-by the traffic generator over a set trial duration, regardless of packet
-loss. See the :ref:`trending_methodology` section for more detail, including
-trend and anomaly calculations.
-
-Data samples are generated by the CSIT VPP (and DPDK) performance trending
-jobs executed twice a day (target start: every 12 hrs, 02:00 and 14:00 UTC).
-All trend and anomaly evaluation is based on an algorithm that divides test
-runs into groups according to the minimum description length principle.
-The trend value is the population average of the results within a group.
-
-Tested VPP worker-thread-core combinations (1t1c, 2t1c, 2t2c, 4t2c, 4t4c,
-8t4c) are listed in separate tables in section 1.x, followed by the trending
-methodology in section 2. and trending graphs in sections 3.x. Performance
-test data used for the trending graphs is provided in sections 4.x.
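
For illustration, the trend value described in the removed text (the
population average of the MRR results within a group) could be computed
along the lines of the sketch below. This is a minimal, hypothetical
example, not the CSIT implementation: the MDL-based grouping is assumed
to have been done elsewhere, and the names ``trend_per_group``,
``group_boundaries`` and the sample values are invented for illustration.

.. code-block:: python

    # Minimal sketch: one trend value (population average of MRR samples)
    # per group. The MDL-based classification that finds the group
    # boundaries is assumed to be done elsewhere and passed in as input.
    from statistics import mean
    from typing import List, Sequence


    def trend_per_group(results: Sequence[float],
                        group_boundaries: Sequence[int]) -> List[float]:
        """Return the average of the results within each group.

        :param results: MRR samples ordered by test run.
        :param group_boundaries: indices where a new group starts.
        """
        starts = list(group_boundaries)
        ends = starts[1:] + [len(results)]
        return [mean(results[start:end]) for start, end in zip(starts, ends)]


    # Hypothetical usage: three runs in the first group, two in the second.
    samples = [10.1, 10.3, 10.2, 7.9, 8.1]
    print(trend_per_group(samples, [0, 3]))  # -> [10.2, 8.0]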