X-Git-Url: https://gerrit.fd.io/r/gitweb?a=blobdiff_plain;f=docs%2Freport%2Fintroduction%2Fmethodology_multi_core_speedup.rst;h=095f0f7796afbdccc63e03300a8fa0b44ce8ec24;hb=60b531215d36e2402b1b6c768bd4fd4d4b210fd0;hp=94840406a1afd6ce2dd7c287088b60ea1ac3f949;hpb=124101d22151239b0411a73ae4d2bf8d70970937;p=csit.git

diff --git a/docs/report/introduction/methodology_multi_core_speedup.rst b/docs/report/introduction/methodology_multi_core_speedup.rst
index 94840406a1..095f0f7796 100644
--- a/docs/report/introduction/methodology_multi_core_speedup.rst
+++ b/docs/report/introduction/methodology_multi_core_speedup.rst
@@ -1,7 +1,7 @@
 Multi-Core Speedup
 ------------------
 
-All performance tests are executed with single processor core and with
+All performance tests are executed with single physical core and with
 multiple cores scenarios.
 
 Intel Hyper-Threading (HT)
@@ -16,7 +16,7 @@ making it impractical for continuous changes of HT mode of operation.
 |csit-release| performance tests are executed with server SUTs' Intel
 XEON processors configured with Intel Hyper-Threading Disabled for all
 Xeon Haswell testbeds (3n-hsw) and with Intel Hyper-Threading Enabled
-for all Xeon Skylake testbeds.
+for all Xeon Skylake and Xeon Cascadelake testbeds.
 
 More information about physical testbeds is provided in
 :ref:`tested_physical_topologies`.
@@ -34,8 +34,8 @@ thread and physical core configurations:
    #. 2t2c - 2 VPP worker threads on 2 physical cores.
    #. 4t4c - 4 VPP worker threads on 4 physical cores.
 
-#. Intel Xeon Skylake testbeds (2n-skx, 3n-skx) with Intel HT enabled
-   (2 logical CPU cores per each physical core):
+#. Intel Xeon Skylake and Cascadelake testbeds (2n-skx, 3n-skx, 2n-clx)
+   with Intel HT enabled (2 logical CPU cores per each physical core):
 
    #. 2t1c - 2 VPP worker threads on 1 physical core.
    #. 4t2c - 4 VPP worker threads on 2 physical cores.
@@ -51,7 +51,7 @@ In all CSIT tests care is taken to ensure that each VPP worker handles
 the same amount of received packet load and does the same amount of
 packet processing work. This is achieved by evenly distributing per
 interface type (e.g. physical, virtual) receive queues over VPP workers
-using default VPP round- robin mapping and by loading these queues with
+using default VPP round-robin mapping and by loading these queues with
 the same amount of packet flows.
 
 If number of VPP workers is higher than number of physical or virtual
@@ -62,5 +62,5 @@ for virtual interfaces are used for this purpose.
 Section :ref:`throughput_speedup_multi_core` includes a set of graphs
 illustrating packet throughout speedup when running VPP worker threads
 on multiple cores. Note that in quite a few test cases running VPP
-workers on 2 or 4 physical cores hits the I/O bandwidth or packets-per-
-second limit of tested NIC.
+workers on 2 or 4 physical cores hits the I/O bandwidth
+or packets-per-second limit of tested NIC.
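
The last two paragraphs touched by this diff describe how received load is kept even across VPP workers: receive queues are spread over workers with the default round-robin mapping, and extra RX queues are configured whenever workers outnumber physical or virtual interfaces. A minimal sketch of that distribution model, under stated assumptions, is shown below. It is written for illustration only and is not code taken from VPP or CSIT; the function name and the 4-worker, 2-interface example values are assumptions::

    # Illustrative model of round-robin RX queue distribution over VPP
    # workers; not VPP or CSIT source code.
    from collections import defaultdict


    def round_robin_rx_assignment(num_workers, num_interfaces,
                                  queues_per_interface):
        """Assign (interface, queue) pairs to workers in round-robin order."""
        assignment = defaultdict(list)
        rx_queues = [(iface, queue)
                     for iface in range(num_interfaces)
                     for queue in range(queues_per_interface)]
        for index, rx_queue in enumerate(rx_queues):
            assignment[index % num_workers].append(rx_queue)
        return assignment


    if __name__ == "__main__":
        # Hypothetical 4t2c case: 4 workers, 2 physical interfaces.
        # Workers outnumber interfaces, so 2 RX queues per interface are
        # needed for every worker to poll a share of the offered load.
        for worker, queues in sorted(
                round_robin_rx_assignment(4, 2, 2).items()):
            print("worker {}: {}".format(worker, queues))

With 4 workers, 2 interfaces and 2 queues per interface, every worker in this model ends up polling exactly one (interface, queue) pair, which is the even-load property the methodology relies on; with only 1 queue per interface, two of the four workers would receive no traffic at all.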