.. _tested_physical_topologies:

Performance Physical Testbeds
=============================

All :abbr:`FD.io (Fast Data Input/Output)` :abbr:`CSIT (Continuous System
Integration and Testing)` performance test results included in this
report are executed on the physical testbeds hosted by the :abbr:`LF (Linux
Foundation)` FD.io project, unless otherwise noted.
Two physical server topology types are used:

- **2-Node Topology**: Consists of one server acting as a System Under
  Test (SUT) and one server acting as a Traffic Generator (TG), with
  both servers connected into a ring topology. Used for executing tests
  that require frame encapsulations supported by TG.

- **3-Node Topology**: Consists of two servers acting as Systems Under
  Test (SUTs) and one server acting as a Traffic Generator (TG), with
  all servers connected into a ring topology. Used for executing tests
  that require frame encapsulations not supported by TG, e.g. certain
  overlay tunnel encapsulations and IPsec. A number of native Ethernet,
  IPv4 and IPv6 encapsulation tests are also executed on these testbeds,
  for comparison with the 2-Node Topology.
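The ring wiring described above can be sketched as a small adjacency map. This is an illustrative sketch only (not part of CSIT tooling); node names are hypothetical labels:

```python
# Illustrative sketch: model the CSIT ring topologies as adjacency
# lists. In a ring, each node links to its two neighbors; in the
# 2-node case both links land on the same peer (TG and SUT are
# connected by two links).

def ring(nodes):
    """Connect nodes into a ring: each node maps to its two neighbors."""
    n = len(nodes)
    return {node: [nodes[(i - 1) % n], nodes[(i + 1) % n]]
            for i, node in enumerate(nodes)}

two_node = ring(["TG", "SUT"])            # TG <-> SUT, two links
three_node = ring(["TG", "SUT1", "SUT2"]) # TG -> SUT1 -> SUT2 -> TG
```

In the 3-node case traffic can traverse both SUTs before returning to the TG, which is why encapsulations the TG cannot generate itself are still testable there.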
Current FD.io production testbeds are built with SUT servers based on
the following processor architectures:

- Intel Xeon: Cascadelake 6252N, Icelake 8358.
- Intel Atom: Denverton C3858, Snowridge P5362.
- Arm: TaiShan 2280, hip07-d05, Neoverse N1.
- AMD EPYC: Zen2 7532.

SUT server performance depends on server and processor type, hence
results for testbeds based on different servers must be reported
separately, and compared only where appropriate.

Complete technical specifications of compute servers used in CSIT
physical testbeds are maintained in the FD.io CSIT repository:
https://git.fd.io/csit/tree/docs/lab/testbed_specifications.md.
Physical NICs and Drivers
-------------------------

SUT and TG servers are equipped with a number of different NIC models.

VPP is performance tested on SUTs with the following NICs and drivers:

#. 2p10GE: x550, x553 Intel (codename Niantic)

   - DPDK Poll Mode Driver (PMD).

#. 4p10GE: x710-DA4 Intel (codename Fortville, FVL)
#. 2p25GE: xxv710-DA2 Intel (codename Fortville, FVL)
#. 4p25GE: xxv710-DA4 Intel (codename Fortville, FVL)
#. 4p25GE: E822-CQDA4 Intel (codename Columbiaville, CVL)
#. 2p100GE: cx556a-edat Mellanox ConnectX5

   - RDMA_core in PMD mode.

#. 2p100GE: E810-2CQDA2 Intel (codename Columbiaville, CVL)

DPDK applications, testpmd and l3fwd, are performance tested on the same
SUTs exclusively with DPDK drivers for all NICs.

TRex running on TGs uses DPDK drivers for all NICs.

VPP hoststack tests utilize ab (the Apache HTTP server benchmarking tool)
running on TGs, using Linux kernel drivers for all NICs.

For more information see :ref:`vpp_test_environment`
and :ref:`dpdk_test_environment`.
.. _physical_testbeds_2n_zn2:

2-Node AMD EPYC Zen2 (2n-zn2)
-----------------------------

One 2n-zn2 testbed is in operation in FD.io labs. It is built from two
SuperMicro AS-1114S-WTRT servers, with the SUT and TG servers each
equipped with one AMD EPYC Zen2 7532 processor (256 MB Cache, 2.40
GHz, 32 cores). The 2n-zn2 physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-zn2}
    \label{fig:testbed-2n-zn2}

.. figure:: testbed-2n-zn2.svg

SUT NICs:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.

TG NICs:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.

All AMD EPYC Zen2 7532 servers run with AMD SMT enabled, doubling the
number of logical cores exposed to Linux.
.. _physical_testbeds_2n_clx:

2-Node Xeon Cascadelake (2n-clx)
--------------------------------

Three 2n-clx testbeds are in operation in FD.io labs. Each 2n-clx testbed
is built with two SuperMicro SYS-7049GP-TRT servers. SUTs are equipped with
two Intel Xeon Gold 6252N processors (35.75 MB Cache, 2.30 GHz, 24 cores).
TGs are equipped with Intel Xeon Cascade Lake Platinum 8280 processors
(38.5 MB Cache, 2.70 GHz, 28 cores). The 2n-clx physical topology is shown
below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-clx}
    \label{fig:testbed-2n-clx}

.. figure:: testbed-2n-clx.svg

SUT NICs:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: empty, future expansion.

TG NICs:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: cx556a-edat ConnectX5 2p100GE Mellanox.
#. NIC-4: empty, future expansion.
#. NIC-5: empty, future expansion.
#. NIC-6: x710-DA4 4p10GE Intel. (For self-tests.)

All Intel Xeon Cascadelake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux.
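The "doubling" effect of Hyper-Threading/SMT on logical core counts can be checked with simple arithmetic, using figures stated above (this is an illustrative sketch, not CSIT code):

```python
# Illustrative arithmetic: logical cores exposed to Linux when
# Hyper-Threading (Intel) or SMT (AMD) provides 2 hardware threads
# per physical core.

def logical_cores(sockets, cores_per_socket, threads_per_core=2):
    """Logical CPUs seen by the OS = sockets * cores * threads."""
    return sockets * cores_per_socket * threads_per_core

# 2n-clx SUT: two Xeon Gold 6252N (24 cores each), HT enabled -> 96.
clx_sut = logical_cores(sockets=2, cores_per_socket=24)

# 2n-zn2 SUT: one EPYC Zen2 7532 (32 cores), SMT enabled -> 64.
zn2_sut = logical_cores(sockets=1, cores_per_socket=32)
```

On a live SUT the same numbers can be cross-checked against `lscpu` output (sockets, cores per socket, threads per core).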
.. _physical_testbeds_2n_icx:

2-Node Xeon Icelake (2n-icx)
----------------------------

One 2n-icx testbed is in operation in FD.io labs. It is built with two
SuperMicro SYS-740GP-TNRT servers, each in turn equipped with two Intel Xeon
Platinum 8358 processors (48 MB Cache, 2.60 GHz, 32 cores). The 2n-icx
physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-icx}
    \label{fig:testbed-2n-icx}

.. figure:: testbed-2n-icx.svg

SUT and TG NICs:

#. NIC-1: xxv710-DA2 2p25GE Intel.
#. NIC-2: E810-2CQDA2 2p100GbE Intel (* to be added).
#. NIC-3: E810-CQDA4 4p100GbE Intel (* to be added).

All Intel Xeon Icelake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux.
.. _physical_testbeds_3n_icx:

3-Node Xeon Icelake (3n-icx)
----------------------------

One 3n-icx testbed is in operation in FD.io labs. It is built with three
SuperMicro SYS-740GP-TNRT servers, each in turn equipped with two Intel Xeon
Platinum 8358 processors (48 MB Cache, 2.60 GHz, 32 cores). The 3n-icx
physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-3n-icx}
    \label{fig:testbed-3n-icx}

.. figure:: testbed-3n-icx.svg

SUT and TG NICs:

#. NIC-1: xxv710-DA2 2p25GE Intel.
#. NIC-2: E810-2CQDA2 2p100GbE Intel (* to be added).
#. NIC-3: E810-CQDA4 4p100GbE Intel (* to be added).

All Intel Xeon Icelake servers run with Intel Hyper-Threading enabled,
doubling the number of logical cores exposed to Linux.
.. _physical_testbeds_3n_alt:

3-Node ARM Altra (3n-alt)
-------------------------

One 3n-alt testbed is built with: i) one SuperMicro SYS-740GP-TNRT
server acting as TG and equipped with two Intel Xeon Icelake Platinum
8358 processors (48 MB Cache, 2.60 GHz, 32 cores), and ii) one Ampere
Altra server acting as SUT and equipped with two Q80-30 processors
(80* ARM Neoverse N1). The 3n-alt physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-3n-alt}
    \label{fig:testbed-3n-alt}

.. figure:: testbed-3n-alt.svg

SUT NICs:

#. NIC-1: xl710-QDA2-2p40GE Intel.

TG NICs:

#. NIC-1: xxv710-DA2-2p25GE Intel.
#. NIC-2: xl710-QDA2-2p40GE Intel.
#. NIC-3: e810-XXVDA4-4p25GE Intel.
#. NIC-4: e810-2CQDA2-2p100GE Intel.
.. _physical_testbeds_3n_tsh:

3-Node ARM TaiShan (3n-tsh)
---------------------------

One 3n-tsh testbed is built with: i) one SuperMicro SYS-7049GP-TRT
server acting as TG and equipped with two Intel Xeon Skylake Platinum
8180 processors (38.5 MB Cache, 2.50 GHz, 28 cores), and ii) one Huawei
TaiShan 2280 server acting as SUT and equipped with one hip07-d05
processor (64* ARM Cortex-A72). The 3n-tsh physical topology is shown
below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-3n-tsh}
    \label{fig:testbed-3n-tsh}

.. figure:: testbed-3n-tsh.svg

SUT NICs:

#. NIC-1: connectx4 2p25GE Mellanox.
#. NIC-2: x520 2p10GE Intel.

TG NICs:

#. NIC-1: x710-DA4 4p10GE Intel.
#. NIC-2: xxv710-DA2 2p25GE Intel.
#. NIC-3: xl710-QDA2 2p40GE Intel.
.. _physical_testbeds_2n_tx2:

2-Node ARM ThunderX2 (2n-tx2)
-----------------------------

One 2n-tx2 testbed is built with: i) one SuperMicro SYS-7049GP-TRT
server acting as TG and equipped with two Intel Xeon Skylake Platinum
8180 processors (38.5 MB Cache, 2.50 GHz, 28 cores), and ii) one Marvell
ThunderX2 9975 server acting as SUT and equipped with two ThunderX2
ARMv8 CN9975 processors (28* ThunderX2 cores each). The 2n-tx2 physical
topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-2n-tx2}
    \label{fig:testbed-2n-tx2}

.. figure:: testbed-2n-tx2.svg

SUT NICs:

#. NIC-1: xl710-QDA2 2p40GE Intel (not connected).
#. NIC-2: xl710-QDA2 2p40GE Intel.

TG NICs:

#. NIC-1: xl710-QDA2 2p40GE Intel.
.. _physical_testbeds_3n_snr:

3-Node Atom Snowridge (3n-snr)
------------------------------

One 3n-snr testbed is built with: i) one SuperMicro SYS-740GP-TNRT
server acting as TG and equipped with two Intel Xeon Icelake Platinum
8358 processors (48 MB Cache, 2.60 GHz, 32 cores), and ii) one SUT server
equipped with one Intel Atom P5362 processor (27 MB Cache, 2.20 GHz,
24 cores). The 3n-snr physical topology is shown below.

.. raw:: latex

    \graphicspath{{../_tmp/src/introduction/}}
    \includegraphics[width=0.90\textwidth]{testbed-3n-snr}
    \label{fig:testbed-3n-snr}

.. figure:: testbed-3n-snr.svg

SUT NICs:

#. NIC-1: e822cq-DA4 4p25GE fiber Intel.

TG NICs:

#. NIC-1: e810xxv-DA4 4p25GE Intel.