Report: Placeholder for LD preload tests 52/24652/7
author Tibor Frank <tifrank@cisco.com>
Thu, 30 Jan 2020 08:10:38 +0000 (09:10 +0100)
committer Tibor Frank <tifrank@cisco.com>
Fri, 7 Feb 2020 06:27:40 +0000 (06:27 +0000)
- methodology
- test results

Change-Id: I0d102875045ab295d9b44fa7bc328f2a728803d7
Signed-off-by: Tibor Frank <tifrank@cisco.com>
Signed-off-by: Dave Wallace <dwallacelf@gmail.com>
docs/report/index.html.template
docs/report/introduction/methodology.rst
docs/report/introduction/methodology_http_tcp_with_wrk.rst [moved from docs/report/introduction/methodology_http_tcp_with_wrk_tool.rst with 75% similarity]
docs/report/introduction/methodology_quic_with_vppecho.rst [new file with mode: 0644]
docs/report/introduction/methodology_tcp_with_iperf3.rst [new file with mode: 0644]
docs/report/vpp_performance_tests/hoststack_testing/http_server_performance/index.rst [moved from docs/report/vpp_performance_tests/http_server_performance/index.rst with 100% similarity]
docs/report/vpp_performance_tests/hoststack_testing/index.rst [new file with mode: 0644]
docs/report/vpp_performance_tests/hoststack_testing/iperf3/index.rst [new file with mode: 0644]
docs/report/vpp_performance_tests/hoststack_testing/quic/index.rst [new file with mode: 0644]
docs/report/vpp_performance_tests/index.rst

index d8f3b6f..cbbde1e 100644 (file)
@@ -25,7 +25,7 @@ CSIT-2001
     vpp_performance_tests/soak_tests/index
     vpp_performance_tests/reconf_tests/index
     vpp_performance_tests/nf_service_density/index
-    vpp_performance_tests/http_server_performance/index
+    vpp_performance_tests/hoststack_testing/index
     vpp_performance_tests/comparisons/index
     vpp_performance_tests/throughput_trending
     vpp_performance_tests/test_environment
index ff3ecc3..107a695 100644 (file)
@@ -13,6 +13,9 @@ Test Methodology
     methodology_data_plane_throughput/index
     methodology_packet_latency
     methodology_multi_core_speedup
+    methodology_http_tcp_with_wrk
+    methodology_tcp_with_iperf3
+    methodology_quic_with_vppecho
     methodology_reconf
     methodology_vpp_startup_settings
     methodology_kvm_vms_vhost_user
@@ -21,6 +24,3 @@ Test Methodology
     methodology_vpp_device_functional
     methodology_ipsec_on_intel_qat
     methodology_trex_traffic_generator
-
-..
-    methodology_http_tcp_with_wrk_tool
@@ -1,15 +1,14 @@
-HTTP/TCP with WRK Tool
-----------------------
+HTTP/TCP with WRK
+-----------------
 
 `WRK HTTP benchmarking tool <https://github.com/wg/wrk>`_ is used for
-experimental TCP/IP and HTTP tests of VPP TCP/IP stack and built-in
-static HTTP server. WRK has been chosen as it is capable of generating
-significant TCP/IP and HTTP loads by scaling number of threads across
-multi-core processors.
-
-This in turn enables quite high scale benchmarking of the main TCP/IP
-and HTTP service including HTTP TCP/IP Connections-Per-Second (CPS),
-HTTP Requests-Per-Second and HTTP Bandwidth Throughput.
+TCP/IP and HTTP tests of the VPP Host Stack and built-in static HTTP
+server. WRK has been chosen as it is capable of generating significant
+TCP/IP and HTTP loads by scaling the number of threads across
+multi-core processors.
+
+This in turn enables high-scale benchmarking of the VPP Host Stack
+TCP/IP and HTTP service, including HTTP TCP/IP Connections-Per-Second
+(CPS) and HTTP Requests-Per-Second (RPS).
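+
+As a minimal sketch (not part of the CSIT suite), this thread-per-core
+scaling can be illustrated by driving wrk from Python; the target URL
+and connection count below are assumptions for illustration only:
+
+.. code-block:: python
+
+    import os
+    import subprocess
+
+    # Drive wrk with one thread per available core; the target URL and
+    # connection count are illustrative assumptions, not CSIT parameters.
+    threads = os.cpu_count() or 1
+    subprocess.run(
+        ["wrk", "--threads", str(threads), "--connections", "1000",
+         "--duration", "30s", "http://192.168.10.2/"],
+        check=True)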
 
 The initial tests are designed as follows:
 
diff --git a/docs/report/introduction/methodology_quic_with_vppecho.rst b/docs/report/introduction/methodology_quic_with_vppecho.rst
new file mode 100644 (file)
index 0000000..12b6420
--- /dev/null
@@ -0,0 +1,43 @@
+Hoststack Throughput Testing over QUIC/UDP/IP with vpp_echo
+-----------------------------------------------------------
+
+`vpp_echo performance testing tool <https://wiki.fd.io/view/VPP/HostStack#External_Echo_Server.2FClient_.28vpp_echo.29>`_
+is a bespoke performance test application that utilizes the native
+HostStack APIs to verify performance and correct handling of
+connection/stream events with uni-directional and bi-directional
+streams of data.
+
+Because iperf3 does not support the QUIC transport protocol, vpp_echo
+is used for measuring the maximum attainable bandwidth of the VPP Host
+Stack connection utilizing the QUIC transport protocol across two
+instances of VPP running on separate DUT nodes. The QUIC transport
+protocol supports multiple streams per connection, and test cases
+utilize different combinations of QUIC connections and number of
+streams per connection.
+
+The test configuration is as follows:
+
+        DUT1               Network                DUT2
+[ vpp_echo-client -> VPP1 ]=======[ VPP2 -> vpp_echo-server]
+                      N-streams/connection
+
+where,
+
+ 1. vpp_echo server attaches to VPP2 and LISTENs on VPP2:UDP port 1234.
+ 2. vpp_echo client creates one or more connections to VPP1 and opens
+    one or more streams per connection to VPP2:UDP port 1234.
+ 3. vpp_echo client transmits a uni-directional stream as fast as the
+    VPP Host Stack allows to the vpp_echo server for the test duration.
+ 4. At the end of the test the vpp_echo client emits the goodput
+    measurements for all streams and the sum of all streams (see the
+    sketch at the end of this section).
+
+ Test cases include:
+
+ 1. 1 QUIC connection with 1 stream
+ 2. 1 QUIC connection with 10 streams
+ 3. 10 QUIC connections with 1 stream
+ 4. 10 QUIC connections with 10 streams
+
+ Stream sizes are selected to provide reasonable test durations. The
+ VPP Host Stack QUIC transport is configured to utilize the picotls
+ encryption library. In the future, tests utilizing additional
+ encryption algorithms will be added.
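+
+As a minimal sketch of the goodput computation described in step 4
+above (the per-stream byte counts, key layout, and duration are
+illustrative assumptions, not vpp_echo output):
+
+.. code-block:: python
+
+    # Per-stream received byte counts keyed by "connection/stream", as
+    # if collected from the 10-connection, 10-stream test case; byte
+    # counts and duration are illustrative assumptions.
+    rx_bytes = {f"conn{c}/stream{s}": 1_250_000_000
+                for c in range(10) for s in range(10)}
+    duration_s = 20.0
+
+    # Goodput per stream and the sum over all streams, in Gbps.
+    goodput_gbps = {k: v * 8 / duration_s / 1e9
+                    for k, v in rx_bytes.items()}
+    total_gbps = sum(goodput_gbps.values())
+    print(f"total goodput: {total_gbps:.2f} Gbps")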
diff --git a/docs/report/introduction/methodology_tcp_with_iperf3.rst b/docs/report/introduction/methodology_tcp_with_iperf3.rst
new file mode 100644 (file)
index 0000000..ef28dec
--- /dev/null
@@ -0,0 +1,41 @@
+Hoststack Throughput Testing over TCP/IP with iperf3
+----------------------------------------------------
+
+`iperf3 bandwidth measurement tool <https://github.com/esnet/iperf>`_
+is used for measuring the maximum attainable bandwidth of the VPP Host
+Stack connection across two instances of VPP running on separate DUT
+nodes. iperf3 is a popular open source tool for active measurements
+of the maximum achievable bandwidth on IP networks.
+
+Because iperf3 utilizes the POSIX socket interface APIs, the current
+test configuration utilizes the LD_PRELOAD mechanism of the Linux
+dynamic linker to connect iperf3 to the VPP Host Stack using the VPP
+Communications Library (VCL) LD_PRELOAD library (libvcl_ldpreload.so).
+
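+A minimal sketch of this arrangement, with the library and
+configuration paths below being assumptions for illustration:
+
+.. code-block:: python
+
+    import os
+    import subprocess
+
+    # Launch the iperf3 server with the VCL LD_PRELOAD shim so that its
+    # POSIX socket calls are serviced by the VPP Host Stack; both paths
+    # are assumed for illustration.
+    env = dict(os.environ)
+    env["LD_PRELOAD"] = "/usr/lib/libvcl_ldpreload.so"
+    env["VCL_CONFIG"] = "/etc/vpp/vcl.conf"
+    subprocess.run(["iperf3", "--server", "--port", "5201"],
+                   env=env, check=True)
+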
+In the future, a forked version of iperf3 which has been modified to
+directly use the VCL application APIs may be added to determine the
+difference in performance of 'VCL native' applications vs. utilizing
+LD_PRELOAD, which inherently has more overhead and other limitations.
+
+The test configuration is as follows:
+
+       DUT1              Network               DUT2
+[ iperf3-client -> VPP1 ]=======[ VPP2 -> iperf3-server]
+
+where,
+
+ 1. iperf3 server attaches to VPP2 and LISTENs on VPP2:TCP port 5201.
+ 2. iperf3 client attaches to VPP1 and opens one or more stream
+    connections to VPP2:TCP port 5201.
+ 3. iperf3 client transmits a uni-directional stream as fast as the
+    VPP Host Stack allows to the iperf3 server for the test duration.
+ 4. At the end of the test the iperf3 client emits the goodput
+    measurements for all streams and the sum of all streams.
+
+ Test cases include 1 and 10 streams with a 20 second test duration,
+ with the VPP Host Stack configured to utilize the CUBIC TCP
+ congestion control algorithm (see the client-side sketch at the end
+ of this section).
+
+ Note: iperf3 is single threaded, so it is expected that the 10 stream
+ test does not show any performance improvement due to
+ multi-thread/multi-core execution.
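+
+A minimal client-side sketch of these test cases (the server address
+and VCL paths are assumptions): the stream count maps onto iperf3's
+--parallel option and the 20 second duration onto --time, with the
+client run under the same VCL LD_PRELOAD shim as the server:
+
+.. code-block:: python
+
+    import os
+    import subprocess
+
+    # Same VCL LD_PRELOAD environment as on the server side; paths and
+    # the server address are assumptions for illustration.
+    env = dict(os.environ)
+    env["LD_PRELOAD"] = "/usr/lib/libvcl_ldpreload.so"
+    env["VCL_CONFIG"] = "/etc/vpp/vcl.conf"
+
+    # 10 parallel streams for 20 seconds, matching the 10-stream case.
+    subprocess.run(
+        ["iperf3", "--client", "10.0.0.2", "--port", "5201",
+         "--parallel", "10", "--time", "20"],
+        env=env, check=True)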
diff --git a/docs/report/vpp_performance_tests/hoststack_testing/index.rst b/docs/report/vpp_performance_tests/hoststack_testing/index.rst
new file mode 100644 (file)
index 0000000..e6da504
--- /dev/null
@@ -0,0 +1,8 @@
+Hoststack Testing
+=================
+
+.. toctree::
+
+    http_server_performance/index
+    iperf3/index
+    quic/index
diff --git a/docs/report/vpp_performance_tests/hoststack_testing/iperf3/index.rst b/docs/report/vpp_performance_tests/hoststack_testing/iperf3/index.rst
new file mode 100644 (file)
index 0000000..85d120c
--- /dev/null
@@ -0,0 +1,2 @@
+Hoststack Throughput Testing over TCP/IP with iperf3
+----------------------------------------------------
diff --git a/docs/report/vpp_performance_tests/hoststack_testing/quic/index.rst b/docs/report/vpp_performance_tests/hoststack_testing/quic/index.rst
new file mode 100644 (file)
index 0000000..c1ec15b
--- /dev/null
@@ -0,0 +1,2 @@
+Hoststack Throughput Testing over QUIC(picotls)/UDP/IP with vpp_echo
+--------------------------------------------------------------------
index beb4dce..042ee4f 100644 (file)
@@ -13,6 +13,7 @@ VPP Performance
     soak_tests/index
     reconf_tests/index
     nf_service_density/index
+    hoststack_testing/index
     http_server_performance/index
     comparisons/index
     throughput_trending