.. _tested_physical_topologies:

All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` performance test results included in this
report are executed on the physical testbeds hosted by the :abbr:`LF (Linux
Foundation)` FD.io project, unless otherwise noted.

Two physical server topology types are used:

- **2-Node Topology**: Consists of one server acting as a System Under
  Test (SUT) and one server acting as a Traffic Generator (TG), with
  both servers connected into a ring topology. Used for executing tests
  that require frame encapsulations supported by TG.

- **3-Node Topology**: Consists of two servers acting as Systems Under
  Test (SUTs) and one server acting as a Traffic Generator (TG), with
  all servers connected into a ring topology. Used for executing tests
  that require frame encapsulations not supported by TG, e.g. certain
  overlay tunnel encapsulations and IPsec. A number of native Ethernet,
  IPv4 and IPv6 encapsulation tests are also executed on these testbeds,
  for comparison with the 2-Node Topology.

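
The ring wiring of the two topologies can be sketched with a minimal
model (an illustrative Python sketch only; node names are hypothetical
and the actual wiring is defined in the CSIT topology specifications):

```python
# Illustrative model of the CSIT ring topologies (hypothetical names;
# real testbed wiring lives in the CSIT topology specification files).

def ring(nodes):
    """Connect nodes into a ring: each node links to the next one,
    with the last node wrapping around to the first."""
    return [(nodes[i], nodes[(i + 1) % len(nodes)])
            for i in range(len(nodes))]

# 2-Node Topology: TG and SUT joined by a pair of links.
two_node = ring(["TG", "SUT1"])
# 3-Node Topology: traffic enters SUT1, crosses to SUT2, returns to TG.
three_node = ring(["TG", "SUT1", "SUT2"])

print(two_node)    # [('TG', 'SUT1'), ('SUT1', 'TG')]
print(three_node)  # [('TG', 'SUT1'), ('SUT1', 'SUT2'), ('SUT2', 'TG')]
```

In both cases every node terminates exactly two link endpoints, which is
what allows the TG to both send and receive the test traffic.
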
Current FD.io production testbeds are built with SUT servers based on
the following processor architectures:

- Intel Xeon: Skylake Platinum 8180, Haswell-SP E5-2699v3,
  Cascade Lake Platinum 8280, Cascade Lake 6252N.
- Intel Atom: Denverton C3858.
- Arm: TaiShan 2280, hip07-d05.
- AMD EPYC: Zen2 7532.

Server SUT performance depends on server and processor type, hence
results for testbeds based on different servers must be reported
separately, and compared if appropriate.

Complete technical specifications of compute servers used in CSIT
physical testbeds are maintained in the FD.io CSIT repository:
https://git.fd.io/csit/tree/docs/lab/testbed_specifications.md.

The existing production testbeds are described below.

2-Node AMD EPYC Zen2 (2n-zn2)
-----------------------------

One 2n-zn2 testbed is in operation in FD.io labs. It is built with two
SuperMicro AS-1114S-WTRT servers, with the SUT and TG servers each
equipped with one AMD EPYC Zen2 7532 processor (256 MB Cache, 2.40
GHz, 32 cores). The 2n-zn2 physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-zn2}
    \label{fig:testbed-2n-zn2}

.. figure:: testbed-2n-zn2.svg

SUT server is populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.

TG server runs TRex application and is populated with the following
NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.

All AMD EPYC Zen2 7532 servers run with AMD SMT enabled, doubling the
number of logical cores exposed to Linux.

2-Node Xeon Cascade Lake (2n-clx)
---------------------------------

Three 2n-clx testbeds are in operation in FD.io labs. Each 2n-clx
testbed is built with two SuperMicro SYS-7049GP-TRT servers. SUTs are
equipped with two Intel Xeon Gold 6252N processors (35.75 MB Cache,
2.30 GHz, 24 cores). TGs are equipped with Intel Xeon Cascade Lake
Platinum 8280 processors (38.5 MB Cache, 2.70 GHz, 28 cores). The
2n-clx physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-clx}
    \label{fig:testbed-2n-clx}

.. figure:: testbed-2n-clx.svg

SUT servers are populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG servers run TRex application and are populated with the following
NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Cascade Lake servers run with Intel Hyper-Threading
enabled, doubling the number of logical cores exposed to Linux.

2-Node Xeon Skylake (2n-skx)
----------------------------

Four 2n-skx testbeds are in operation in FD.io labs. Each 2n-skx testbed
is built with two SuperMicro SYS-7049GP-TRT servers, each in turn
equipped with two Intel Xeon Skylake Platinum 8180 processors (38.5 MB
Cache, 2.50 GHz, 28 cores). The 2n-skx physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-skx}
    \label{fig:testbed-2n-skx}

.. figure:: testbed-2n-skx.svg

SUT servers are populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG servers run TRex application and are populated with the following
NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux, with 56 logical
cores and 28 physical cores per processor socket.

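
The arithmetic behind these figures can be checked with a short sketch
(illustrative only, not part of the CSIT tooling):

```python
# Intel Hyper-Threading exposes two hardware threads (logical cores)
# per physical core, so Linux sees twice the physical core count.
physical_cores_per_socket = 28  # Xeon Platinum 8180
threads_per_core = 2            # Hyper-Threading enabled

logical_cores_per_socket = physical_cores_per_socket * threads_per_core
print(logical_cores_per_socket)  # 56

# Each SYS-7049GP-TRT server carries two of these sockets:
logical_cores_per_server = 2 * logical_cores_per_socket
print(logical_cores_per_server)  # 112
```
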
3-Node Xeon Skylake (3n-skx)
----------------------------

Two 3n-skx testbeds are in operation in FD.io labs. Each 3n-skx testbed
is built with three SuperMicro SYS-7049GP-TRT servers, each in turn
equipped with two Intel Xeon Skylake Platinum 8180 processors (38.5 MB
Cache, 2.50 GHz, 28 cores). The 3n-skx physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-3n-skx}
    \label{fig:testbed-3n-skx}

.. figure:: testbed-3n-skx.svg

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG servers run TRex application and are populated with the following
NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: empty, future expansion.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Skylake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux, with 56 logical
cores and 28 physical cores per processor socket.

2-Node Atom Denverton (2n-dnv)
------------------------------

One 2n-dnv testbed is built with: i) one Intel S2600WFT server acting
as TG and equipped with two Intel Xeon Skylake Platinum 8180 processors
(38.5 MB Cache, 2.50 GHz, 28 cores), and ii) one SuperMicro SYS-E300-9A
server acting as SUT and equipped with one Intel Atom C3858 processor
(12 MB Cache, 2.00 GHz, 12 cores). The 2n-dnv physical topology is
shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-dnv}
    \label{fig:testbed-2n-dnv}

.. figure:: testbed-2n-dnv.svg

The SUT server has four internal 10GE NIC ports:

#. P-1: x553 copper port.
#. P-2: x553 copper port.
#. P-3: x553 fiber port.
#. P-4: x553 fiber port.

The TG server runs the TRex software traffic generator and is populated
with the following NIC models:

#. NIC-1: x550-T2 2p10GE Intel.
#. NIC-2: x550-T2 2p10GE Intel.
#. NIC-3: x520-DA2 2p10GE Intel.
#. NIC-4: x520-DA2 2p10GE Intel.

The 2n-dnv testbed is in operation in Intel SH labs.

3-Node Atom Denverton (3n-dnv)
------------------------------

One 3n-dnv testbed is built with: i) one SuperMicro SYS-7049GP-TRT
server acting as TG and equipped with two Intel Xeon Skylake Platinum
8180 processors (38.5 MB Cache, 2.50 GHz, 28 cores), and ii) one
SuperMicro SYS-E300-9A server acting as SUT and equipped with one Intel
Atom C3858 processor (12 MB Cache, 2.00 GHz, 12 cores). The 3n-dnv
physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-3n-dnv}
    \label{fig:testbed-3n-dnv}

.. figure:: testbed-3n-dnv.svg

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: x553 2p10GE fiber Intel.
#. NIC-2: x553 2p10GE copper Intel.

TG server runs TRex application and is populated with the following
NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.

3-Node ARM TaiShan (3n-tsh)
---------------------------

One 3n-tsh testbed is built with: i) one SuperMicro SYS-7049GP-TRT
server acting as TG and equipped with two Intel Xeon Skylake Platinum
8180 processors (38.5 MB Cache, 2.50 GHz, 28 cores), and ii) one Huawei
TaiShan 2280 server acting as SUT and equipped with one hip07-d05
processor (64 ARM Cortex-A72 cores). The 3n-tsh physical topology is
shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-3n-tsh}
    \label{fig:testbed-3n-tsh}

.. figure:: testbed-3n-tsh.svg

SUT1 and SUT2 servers are populated with the following NIC models:

#. NIC-1: connectx4 2p25GE Mellanox.
#. NIC-2: x520 2p10GE Intel.

TG server runs TRex application and is populated with the following
NIC models:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: xl710-QDA2 2p40GE Intel.

2-Node ARM ThunderX2 (2n-tx2)
-----------------------------

One 2n-tx2 testbed is built with: i) one SuperMicro SYS-7049GP-TRT
server acting as TG and equipped with two Intel Xeon Skylake Platinum
8180 processors (38.5 MB Cache, 2.50 GHz, 28 cores), and ii) one
Marvell ThunderX2 server acting as SUT and equipped with two ThunderX2
ARMv8 CN9975 processors (28 cores each). The 2n-tx2 physical topology
is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-tx2}
    \label{fig:testbed-2n-tx2}

.. figure:: testbed-2n-tx2.svg

SUT server is populated with the following NIC models:

#. NIC-1: xl710-QDA2 2p40GE Intel (not connected).
#. NIC-2: xl710-QDA2 2p40GE Intel.

TG server runs TRex application and is populated with the following
NIC models:

#. NIC-1: xl710-QDA2 2p40GE Intel.