Overview
========

Tested Physical Topologies
--------------------------

CSIT Honeycomb performance tests are executed on physical baremetal servers
hosted by the LF FD.io project. Testbed physical topology is shown in the
figure below.

::

    +------------------------+           +------------------------+
    |                        |           |                        |
    |  +------------------+  |           |  +------------------+  |
    |  |                  |  |           |  |                  |  |
    |  |                  <----------------->                  |  |
    |  |       DUT1       |  |           |  |       DUT2       |  |
    |  +--^---------------+  |           |  +---------------^--+  |
    |     |                  |           |                  |     |
    |     |            SUT1  |           |  SUT2            |     |
    +------------------------+           +------------------^-----+
          |                                                 |
          |                                                 |
          |                  +-----------+                  |
          |                  |           |                  |
          +------------------>    TG     <------------------+
                             |           |
                             +-----------+

SUT1 runs the VPP SW application in Linux user-mode as a Device Under Test
(DUT), together with a Python script that generates traffic. SUT2 and the TG
are unused.

Performance tests involve sending Netconf requests over localhost to the
Honeycomb listener port and measuring the response time.
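
As an illustration only (not the actual CSIT script), a single synchronous
read and its response time could be measured with the ncclient library as
sketched below; the port and credentials are assumptions and must match the
Netconf listener configured in Honeycomb::

    import time

    from ncclient import manager

    # Assumed Netconf-over-SSH listener on localhost; adjust the port and
    # credentials to match the Honeycomb configuration.
    with manager.connect(host="127.0.0.1", port=2831,
                         username="admin", password="admin",
                         hostkey_verify=False) as m:
        start = time.monotonic()
        m.get_config(source="running")  # single, synchronous read request
        elapsed = time.monotonic() - start

    print("response time: %.3f ms" % (elapsed * 1e3))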

Note that reported performance results are specific to the SUTs tested.
Current LF FD.io SUTs are based on Intel Xeon E5-2699v3 2.3 GHz CPUs. SUTs
with other CPUs are likely to yield different results.

For detailed LF FD.io test bed specification and physical topology please
refer to the `LF FD.io CSIT testbed wiki page
<https://wiki.fd.io/view/CSIT/CSIT_LF_testbed>`_.

Performance Tests Coverage
--------------------------

Currently there is only a single Honeycomb performance test: it measures the
response time of a simple read operation, performed synchronously using
single (not batch) requests.

The tests are not triggered automatically; they can be run on demand from
the hc2vpp project.

Performance Tests Naming
------------------------

CSIT |release| follows a common structured naming convention for all
performance and system functional tests, introduced in CSIT |release-1|.

The naming should be intuitive for the majority of the tests. A complete
description of the CSIT test naming convention is provided on the `CSIT test
naming wiki <https://wiki.fd.io/view/CSIT/csit-test-naming>`_.

Here are a few illustrative examples of the new naming usage for performance
test suites:

#. **Physical port to physical port - a.k.a. NIC-to-NIC, Phy-to-Phy, P2P**

   - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
     PacketProcessingFunction1-...-PacketProcessingFunctionN-TestType*
   - *10ge2p1x520-dot1q-l2bdbasemaclrn-ndrdisc.robot* => 2 ports of 10GE on
     Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain baseline
     switching with MAC learning, NDR throughput discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-ndrchk.robot* => 2 ports of 10GE
     on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain baseline
     switching with MAC learning, verification of reference NDR throughput.
   - *10ge2p1x520-ethip4-ip4base-ndrdisc.robot* => 2 ports of 10GE on Intel
     x520 NIC, IPv4 baseline routed forwarding, NDR throughput discovery.
   - *10ge2p1x520-ethip6-ip6scale200k-ndrdisc.robot* => 2 ports of 10GE on
     Intel x520 NIC, IPv6 scaled up routed forwarding, NDR throughput
     discovery.

#. **Physical port to VM (or VM chain) to physical port - a.k.a. NIC2VM2NIC,
   P2V2P, NIC2VMchain2NIC, P2V2V2P**

   - *PortNICConfig-WireEncapsulation-PacketForwardingFunction-
     PacketProcessingFunction1-...-PacketProcessingFunctionN-VirtEncapsulation-
     VirtPortConfig-VMconfig-TestType*
   - *10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2 ports
     of 10GE on Intel x520 NIC, dot1q tagged Ethernet, L2 bridge-domain
     switching to/from two vhost interfaces and one VM, NDR throughput
     discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm-ndrdisc.robot* => 2
     ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
     switching to/from two vhost interfaces and one VM, NDR throughput
     discovery.
   - *10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-4vhost-2vm-ndrdisc.robot* => 2
     ports of 10GE on Intel x520 NIC, IPv4 VXLAN Ethernet, L2 bridge-domain
     switching to/from four vhost interfaces and two VMs, NDR throughput
     discovery.

Methodology: Multi-Core
-----------------------

**Multi-core Test** - CSIT |release| multi-core tests are executed in the
following thread and core configurations:

#. 1t - 1 Honeycomb Netconf thread on 1 CPU physical core.
#. 8t - 8 Honeycomb Netconf threads on 8 CPU physical cores.
#. 16t - 16 Honeycomb Netconf threads on 16 CPU physical cores.

The traffic generator also uses multiple threads/cores to simulate multiple
Netconf clients accessing the Honeycomb server.
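
A minimal sketch of such a multi-client simulation is shown below, reusing
the connection parameters assumed in the earlier example (port and
credentials are assumptions, as are the client count and duration values)::

    import time
    from concurrent.futures import ThreadPoolExecutor

    from ncclient import manager

    N_CLIENTS = 8       # e.g. the 8t configuration
    DURATION = 30.0     # test duration in seconds (assumed value)

    def client_worker(_):
        """One simulated Netconf client; returns (ok, failed) reply counts."""
        ok, failed = 0, 0
        # Each simulated client keeps its own Netconf session for the
        # whole test and issues synchronous reads in a tight loop.
        with manager.connect(host="127.0.0.1", port=2831,
                             username="admin", password="admin",
                             hostkey_verify=False) as m:
            deadline = time.monotonic() + DURATION
            while time.monotonic() < deadline:
                try:
                    m.get_config(source="running")  # synchronous read
                    ok += 1
                except Exception:
                    failed += 1  # negative reply, counted separately
        return ok, failed

    with ThreadPoolExecutor(max_workers=N_CLIENTS) as pool:
        results = list(pool.map(client_worker, range(N_CLIENTS)))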

Methodology: Performance Measurement
------------------------------------

The following values are measured and reported in tests:

- Average request rate, averaged over the entire test duration and over all
  client threads. Negative replies (if any) are not counted in the rate and
  are reported separately.