.. _tested_physical_topologies:

Physical Testbeds
=================

All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` performance test results included in this
report are executed on physical testbeds hosted by the :abbr:`LF (Linux
Foundation)` FD.io project, unless otherwise noted.

Two physical server topology types are used:

- **2-Node Topology**: Consists of one server acting as a System Under
  Test (SUT) and one server acting as a Traffic Generator (TG), with
  both servers connected into a ring topology. Used for executing tests
  that require frame encapsulations supported by the TG.

- **3-Node Topology**: Consists of two servers acting as Systems Under
  Test (SUT1 and SUT2) and one server acting as a Traffic Generator
  (TG), with all servers connected into a ring topology. Used for
  executing tests that require frame encapsulations not supported by
  the TG, e.g. certain overlay tunnel encapsulations and IPsec. A
  number of native Ethernet, IPv4 and IPv6 encapsulation tests are
  also executed on these testbeds, for comparison with the 2-Node
  Topology.
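
The ring wiring described above can be sketched as plain data. A minimal
illustration (node names and the `is_ring` helper are hypothetical, not
CSIT's actual topology schema):

```python
# Sketch of the two ring topologies described above; names are
# illustrative only, not CSIT's topology-file format.

TOPOLOGY_2N = {
    "nodes": ["TG", "SUT"],
    # Two parallel TG-SUT links close the 2-node ring.
    "links": [("TG", "SUT"), ("SUT", "TG")],
}

TOPOLOGY_3N = {
    "nodes": ["TG", "SUT1", "SUT2"],
    # Traffic flows TG -> SUT1 -> SUT2 -> TG.
    "links": [("TG", "SUT1"), ("SUT1", "SUT2"), ("SUT2", "TG")],
}

def is_ring(topology):
    """True when every node has degree 2, i.e. the links form a ring."""
    degree = {node: 0 for node in topology["nodes"]}
    for a, b in topology["links"]:
        degree[a] += 1
        degree[b] += 1
    return all(d == 2 for d in degree.values())
```

Both topologies satisfy the ring property; the difference is only whether
one or two SUTs sit between the TG's transmit and receive ports.
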

Current FD.io production testbeds are built with SUT servers based on
the following processor architectures:

- Intel Xeon: Skylake Platinum 8180 and Haswell-SP E5-2699v3.
- Intel Atom: Denverton C3858.
- ARM: TaiShan 2280, hip07-d05.

Server SUT performance depends on server and processor type, hence
results for testbeds based on different servers are reported
separately, and compared only where appropriate.

Complete technical specifications of the compute servers used in CSIT
physical testbeds are maintained in the FD.io CSIT repository:
https://git.fd.io/csit/tree/docs/lab/testbed_specifications.md.

The following sections describe the existing production testbeds.

2-Node Xeon Skylake (2n-skx)
----------------------------

Four 2n-skx testbeds are in operation in FD.io labs. Each 2n-skx testbed
is built with two SuperMicro SYS-7049GP-TRT servers, each in turn
equipped with two Intel Xeon Skylake Platinum 8180 processors (38.5 MB
Cache, 2.50 GHz, 28 cores). The 2n-skx physical topology is shown below.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/introduction/}}
        \includegraphics[width=0.90\textwidth]{testbed-2n-skx}
        \label{fig:testbed-2n-skx}

.. only:: html

    .. figure:: testbed-2n-skx.svg
        :alt: testbed-2n-skx
        :align: center

SUT servers are populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: mcx556a-edat ConnectX5 2p100GE Mellanox. (Not used yet.)
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG servers run the T-Rex application and are populated with the
following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: mcx556a-edat ConnectX5 2p100GE Mellanox. (Not used yet.)
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux, with 56 logical
cores and 28 physical cores per processor socket.
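
The core arithmetic above can be checked in a few lines; the
`logical_cores` helper is illustrative, while the socket and core counts
come from this report:

```python
def logical_cores(sockets, cores_per_socket, threads_per_core):
    """Logical cores exposed to Linux on a multi-socket server."""
    return sockets * cores_per_socket * threads_per_core

# 2n-skx / 3n-skx servers: dual Xeon Platinum 8180, 28 cores per socket,
# Hyper-Threading enabled (2 threads per core) -> 112 logical cores.
SKX_LOGICAL = logical_cores(sockets=2, cores_per_socket=28,
                            threads_per_core=2)

# 3n-hsw servers: dual Xeon E5-2699v3, 18 cores per socket,
# Hyper-Threading disabled (1 thread per core) -> 36 logical cores.
HSW_LOGICAL = logical_cores(sockets=2, cores_per_socket=18,
                            threads_per_core=1)
```

The same arithmetic explains why the Haswell testbeds described later
expose exactly their physical core count to Linux.
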

3-Node Xeon Skylake (3n-skx)
----------------------------

Two 3n-skx testbeds are in operation in FD.io labs. Each 3n-skx testbed
is built with three SuperMicro SYS-7049GP-TRT servers, each in turn
equipped with two Intel Xeon Skylake Platinum 8180 processors (38.5 MB
Cache, 2.50 GHz, 28 cores). The 3n-skx physical topology is shown below.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/introduction/}}
        \includegraphics[width=0.90\textwidth]{testbed-3n-skx}
        \label{fig:testbed-3n-skx}

.. only:: html

    .. figure:: testbed-3n-skx.svg
        :alt: testbed-3n-skx
        :align: center

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG servers run the T-Rex application and are populated with the
following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux, with 56 logical
cores and 28 physical cores per processor socket.

3-Node Xeon Haswell (3n-hsw)
----------------------------

Three 3n-hsw testbeds are in operation in FD.io labs. Each 3n-hsw
testbed is built with three Cisco UCS-c240m3 servers, each in turn
equipped with two Intel Xeon Haswell-SP E5-2699v3 processors (45 MB
Cache, 2.3 GHz, 18 cores). The 3n-hsw physical topology is shown below.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/introduction/}}
        \includegraphics[width=0.90\textwidth]{testbed-3n-hsw}
        \label{fig:testbed-3n-hsw}

.. only:: html

    .. figure:: testbed-3n-hsw.svg
        :alt: testbed-3n-hsw
        :align: center

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: VIC 1385 2p40GE Cisco.
#. NIC-2: x520 2p10GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: xl710-QDA2 2p40GE Intel.
#. NIC-5: x710-DA2 2p10GE Intel.
#. NIC-6: QAT 8950 50G (Walnut Hill) Intel.

TG servers run the T-Rex application and are populated with the
following NIC models:

#. NIC-1: xl710-QDA2 2p40GE Intel.
#. NIC-2: x710-DA2 2p10GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: xl710-QDA2 2p40GE Intel.
#. NIC-5: x710-DA2 2p10GE Intel.
#. NIC-6: x710-DA2 2p10GE Intel. (For self-tests.)

All Intel Xeon Haswell servers run with Intel Hyper-Threading disabled,
so the number of logical cores exposed to Linux matches the 18 physical
cores per processor socket.

2-Node Atom Denverton (2n-dnv)
------------------------------

The 2n-dnv testbed is built with: i) one Intel S2600WFT server acting as
TG and equipped with two Intel Xeon Skylake Platinum 8180 processors
(38.5 MB Cache, 2.50 GHz, 28 cores), and ii) one SuperMicro SYS-E300-9A
server acting as SUT and equipped with one Intel Atom C3858 processor
(12 MB Cache, 2.00 GHz, 12 cores). The 2n-dnv physical topology is shown
below.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/introduction/}}
        \includegraphics[width=0.90\textwidth]{testbed-2n-dnv}
        \label{fig:testbed-2n-dnv}

.. only:: html

    .. figure:: testbed-2n-dnv.svg
        :alt: testbed-2n-dnv
        :align: center

The SUT server has four internal 10GE NIC ports:

#. P-1: x553 copper port.
#. P-2: x553 copper port.
#. P-3: x553 fiber port.
#. P-4: x553 fiber port.

The TG server runs the T-Rex software traffic generator and is populated
with the following NIC models:

#. NIC-1: x550-T2 2p10GE Intel.
#. NIC-2: x550-T2 2p10GE Intel.
#. NIC-3: x520-DA2 2p10GE Intel.
#. NIC-4: x520-DA2 2p10GE Intel.

The 2n-dnv testbed is in operation in Intel SH labs.

3-Node Atom Denverton (3n-dnv)
------------------------------

One 3n-dnv testbed is built with: i) one SuperMicro SYS-7049GP-TRT
server acting as TG and equipped with two Intel Xeon Skylake Platinum
8180 processors (38.5 MB Cache, 2.50 GHz, 28 cores), and ii) two
SuperMicro SYS-E300-9A servers acting as SUTs, each equipped with one
Intel Atom C3858 processor (12 MB Cache, 2.00 GHz, 12 cores). The 3n-dnv
physical topology is shown below.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/introduction/}}
        \includegraphics[width=0.90\textwidth]{testbed-3n-dnv}
        \label{fig:testbed-3n-dnv}

.. only:: html

    .. figure:: testbed-3n-dnv.svg
        :alt: testbed-3n-dnv
        :align: center

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: x553 2p10GE fiber Intel.
#. NIC-2: x553 2p10GE copper Intel.

TG servers run the T-Rex application and are populated with the
following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.

3-Node ARM TaiShan (3n-tsh)
---------------------------

One 3n-tsh testbed is built with: i) one SuperMicro SYS-7049GP-TRT
server acting as TG and equipped with two Intel Xeon Skylake Platinum
8180 processors (38.5 MB Cache, 2.50 GHz, 28 cores), and ii) two Huawei
TaiShan 2280 servers acting as SUTs, each equipped with one hip07-d05
processor (64x ARM Cortex-A72 cores). The 3n-tsh physical topology is
shown below.

.. only:: latex

    .. raw:: latex

        \graphicspath{{../_tmp/src/introduction/}}
        \includegraphics[width=0.90\textwidth]{testbed-3n-tsh}
        \label{fig:testbed-3n-tsh}

.. only:: html

    .. figure:: testbed-3n-tsh.svg
        :alt: testbed-3n-tsh
        :align: center

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: ConnectX4 2p25GE Mellanox.
#. NIC-2: x520 2p10GE Intel.

TG servers run the T-Rex application and are populated with the
following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.