---
title: NFV Service Density Benchmarking
# abbrev: nf-svc-density
docname: draft-mkonstan-nf-service-density-00
date: 2019-03-11

ipr: trust200902
area: ops
wg: Benchmarking Working Group
kw: Internet-Draft
cat: info

coding: us-ascii
pi:    # can use array (if all yes) or hash here
#  - toc
#  - sortrefs
#  - symrefs
  toc: yes
  sortrefs:   # defaults to yes
  symrefs: yes

author:
      -
        ins: M. Konstantynowicz
        name: Maciek Konstantynowicz
        org: Cisco Systems
        role: editor
        email: mkonstan@cisco.com
      -
        ins: P. Mikus
        name: Peter Mikus
        org: Cisco Systems
        role: editor
        email: pmikus@cisco.com

normative:
  RFC2544:
  RFC8174:

informative:
  RFC8204:
  TST009:
    target: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.01.01_60/gs_NFV-TST009v030101p.pdf
    title: "ETSI GS NFV-TST 009 V3.1.1 (2018-10), Network Functions Virtualisation (NFV) Release 3; Testing; Specification of Networking Benchmarks and Measurement Methods for NFVI"
    date: 2018-10
  BSDP:
    target: https://fd.io/wp-content/uploads/sites/34/2019/03/benchmarking_sw_data_planes_skx_bdx_mar07_2019.pdf
    title: "Benchmarking Software Data Planes Intel® Xeon® Skylake vs. Broadwell"
    date: 2019-03
  draft-vpolak-mkonstan-bmwg-mlrsearch:
    target: https://tools.ietf.org/html/draft-vpolak-mkonstan-bmwg-mlrsearch-00
    title: "Multiple Loss Ratio Search for Packet Throughput (MLRsearch)"
    date: 2018-11
  draft-vpolak-bmwg-plrsearch:
    target: https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-00
    title: "Probabilistic Loss Ratio Search for Packet Throughput (PLRsearch)"
    date: 2018-11
  LFN-FDio-CSIT:
    target: https://wiki.fd.io/view/CSIT
    title: "Fast Data io, Continuous System Integration and Testing Project"
    date: 2019-03
  CNCF-CNF-Testbed:
    target: https://github.com/cncf/cnf-testbed/
    title: "Cloud native Network Function (CNF) Testbed"
    date: 2019-03
  TRex:
    target: https://github.com/cisco-system-traffic-generator/trex-core
    title: "TRex Low-Cost, High-Speed Stateful Traffic Generator"
    date: 2019-03
  CSIT-1901-testbed-2n-skx:
    target: https://docs.fd.io/csit/rls1901/report/introduction/physical_testbeds.html#node-xeon-skylake-2n-skx
    title: "FD.io CSIT Test Bed"
    date: 2019-03
  CSIT-1901-test-environment:
    target: https://docs.fd.io/csit/rls1901/report/vpp_performance_tests/test_environment.html
    title: "FD.io CSIT Test Environment"
    date: 2019-03
  CSIT-1901-nfv-density-methodology:
    target: https://docs.fd.io/csit/rls1901/report/introduction/methodology_nfv_service_density.html
    title: "FD.io CSIT Test Methodology: NFV Service Density"
    date: 2019-03
  CSIT-1901-nfv-density-results:
    target: https://docs.fd.io/csit/rls1901/report/vpp_performance_tests/nf_service_density/index.html
    title: "FD.io CSIT Test Results: NFV Service Density"
    date: 2019-03
  CNCF-CNF-Testbed-Results:
    target: https://github.com/cncf/cnf-testbed/blob/master/comparison/doc/cncf-cnfs-results-summary.md
    title: "CNCF CNF Testbed: NFV Service Density Benchmarking"
    date: 2018-12

--- abstract

Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying the performance of
network services realised with software Network Functions (NF) running
on Commercial-Off-The-Shelf (COTS) servers. One of the main challenges
is getting repeatable and portable benchmarking results and using them
to derive a deterministic operating range suitable for production
deployment.

This document specifies a benchmarking methodology for NFV services
that aims to address this problem space. It defines a way of measuring
the performance of multiple NFV service instances, each composed of
multiple software NFs, while running them at varied service "packing"
densities on a single server.

The aim is to discover the deterministic usage range of an NFV system.
In addition, the specified methodology can be used to compare and
contrast different NFV virtualization technologies.

--- middle

# Terminology

* NFV - Network Function Virtualization, a general industry term
  describing network functionality implemented in software.
* NFV service - a software-based network service realized by a topology
  of interconnected constituent software network function applications.
* NFV service instance - a single instantiation of an NFV service.
* Data-plane optimized software - any software with dedicated threads
  handling data-plane packet processing e.g. FD.io VPP (Vector Packet
  Processor), OVS-DPDK.

# Motivation

## Problem Description

Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying the performance of
network services realised with software Network Functions (NF) running
on Commercial-Off-The-Shelf (COTS) servers. One of the main challenges
is getting repeatable and portable benchmarking results and using them
to derive a deterministic operating range suitable for production
deployment.

The lack of a well-defined and standardised NFV-centric performance
methodology and metrics makes it hard to address fundamental questions
that underpin NFV production deployments:

1. What NFV service and how many instances can run on a single compute
   node?
2. How to choose the best compute resource allocation scheme to maximise
   service yield per node?
3. How do different NF applications compare from the service density
   perspective?
4. How do different virtualisation technologies compare, e.g. Virtual
   Machines vs. Containers?

Getting answers to these points should allow designers to make
data-based decisions about the NFV technology and service design best
suited to meet the requirements of their use cases. Equally, obtaining
the benchmarking data underpinning those answers should make it easier
for operators to work out the expected deterministic operating range of
the chosen design.

## Proposed Solution

The primary goal of the proposed benchmarking methodology is to focus on
the NFV technologies used to construct NFV services. More specifically,
to i) measure packet data-plane performance of multiple NFV service
instances while running them at varied service "packing" densities on a
single server, and ii) quantify the impact of using multiple NFs to
construct each NFV service instance, which introduces multiple packet
processing hops and links on each packet path.

The overarching aim is to discover a set of deterministic usage ranges
that are of interest to NFV system designers and operators. In addition,
the specified methodology can be used to compare and contrast different
NFV virtualisation technologies.

In order to ensure wide applicability of the benchmarking methodology,
the approach is to separate NFV service packet processing from the
shared virtualisation infrastructure by decomposing the software
technology stack into three building blocks:

                  +-------------------------------+
                  |          NFV Service          |
                  +-------------------------------+
                  |  Virtualization Technology    |
                  +-------------------------------+
                  |        Host Networking        |
                  +-------------------------------+

              Figure 1. NFV software technology stack.

The proposed methodology is complementary to existing NFV benchmarking
industry efforts focused on vSwitch benchmarking [RFC8204], [TST009]
and extends the benchmarking scope to NFV services.

This document does not describe a complete benchmarking methodology;
instead it focuses on the system under test configuration. Each of the
compute node configurations identified by (RowIndex, ColumnIndex) in
the NFV Service Density Matrix is to be evaluated for NFV service
data-plane performance using existing and/or emerging network
benchmarking standards. This may include methodologies specified in
[RFC2544], [TST009], [draft-vpolak-mkonstan-bmwg-mlrsearch] and/or
[draft-vpolak-bmwg-plrsearch].

# NFV Service

It is assumed that each NFV service instance is built of one or more
constituent NFs and is described by: topology, configuration and
resulting packet path(s).

Each set of NFs forms an independent NFV service instance, with multiple
sets present in the host.

## Topology

NFV topology describes the number of network functions per service
instance, and their inter-connections over packet interfaces. It
includes all point-to-point virtual packet links within the compute
node, Layer-2 Ethernet or Layer-3 IP, including the ones to the host
networking data-plane.

Theoretically, a large set of NFV topologies can be realised using
software virtualisation, e.g. ring, partial/full mesh, star, line,
tree, ladder. In practice, however, only a few topologies are in actual
use, as NFV services mostly perform either bump-in-the-wire packet
operations (e.g. security filtering/inspection, monitoring/telemetry)
and/or inter-site forwarding decisions (e.g. routing, switching).

Two main NFV topologies have been identified so far for NFV service
density benchmarking:

1. Chain topology: a set of NFs connect to the host data-plane with a
   minimum of two virtual interfaces each, enabling the host data-plane
   to facilitate NF-to-NF service chain forwarding and provide
   connectivity with the external network.

2. Pipeline topology: a set of NFs connect to each other in a line
   fashion, with the edge NFs homed to the host data-plane. The host
   data-plane provides connectivity with the external network.

Both topologies are shown in the figures below.

NF chain topology:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        |  |        |     ....     |        | Service1   |
    | |        |  |        |              |        |            |
    | +-+----+-+  +-+----+-+    +    +    +-+----+-+            |
    |   |    |      |    |      |    |      |    |   Virtual    |
    |   |    |<-CS->|    |<-CS->|    |<-CS->|    |   Interfaces |
    | +-+----+------+----+------+----+------+----+-+            |
    | |                                            | CS: Chain  |
    | |                                            |   Segment  |
    | |             Host Data-Plane                |            |
    | +-+--+----------------------------------+--+-+            |
    |   |  |                                  |  |              |
    +-----------------------------------------------------------+
        |  |                                  |  |   Physical
        |  |                                  |  |   Interfaces
    +---+--+----------------------------------+--+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

      Figure 2. NF chain topology forming a service instance.

NF pipeline topology:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        +--+        +--+  ....  +--+        | Service1   |
    | |        |  |        |              |        |            |
    | +-+------+  +--------+              +------+-+            |
    |   |                                        |   Virtual    |
    |   |<-Pipeline Edge          Pipeline Edge->|   Interfaces |
    | +-+----------------------------------------+-+            |
    | |                                            |            |
    | |                                            |            |
    | |             Host Data-Plane                |            |
    | +-+--+----------------------------------+--+-+            |
    |   |  |                                  |  |              |
    +-----------------------------------------------------------+
        |  |                                  |  |   Physical
        |  |                                  |  |   Interfaces
    +---+--+----------------------------------+--+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

      Figure 3. NF pipeline topology forming a service instance.


## Configuration

NFV configuration includes all packet processing functions in NFs,
including Layer-2, Layer-3 and/or Layer-4-to-7 processing as appropriate
to the specific NF and NFV service design. L2 sub-interface
encapsulations (e.g. 802.1q, 802.1ad) and IP overlay encapsulations
(e.g. VXLAN, IPSec, GRE) may be represented here too as appropriate,
although in most cases they are used as external encapsulation and
handled by the host networking data-plane.

NFV configuration determines logical network connectivity, that is
Layer-2 and/or IPv4/IPv6 switching/routing modes, as well as NFV service
specific aspects. In the context of the NFV density benchmarking
methodology, the initial focus is on the former.

Building on the two identified NFV topologies, two common NFV
configurations are considered:

1. Chain configuration:
   * Relies on chain topology to form NFV service chains.
   * NF packet forwarding designs:
     * IPv4/IPv6 routing.
   * Requirements for host data-plane:
     * L2 switching with L2 forwarding context per each NF chain
       segment, or
     * IPv4/IPv6 routing with IP forwarding context per each NF chain
       segment or per NF chain.

2. Pipeline configuration:
   * Relies on pipeline topology to form NFV service pipelines.
   * Packet forwarding designs:
     * IPv4/IPv6 routing.
   * Requirements for host data-plane:
     * L2 switching with L2 forwarding context per each NF pipeline
       edge link, or
     * IPv4/IPv6 routing with IP forwarding context per each NF pipeline
       edge link or per NF pipeline.

## Packet Path(s)

NFV packet path(s) describe the actual packet forwarding path(s) used
for benchmarking, resulting from NFV topology and configuration. They
aim to resemble true packet forwarding actions during the NFV service
lifecycle.

Based on the specified NFV topologies and configurations, two NFV packet
paths are taken for benchmarking:

1. Snake packet path
   * Requires chain topology and configuration.
   * Packets enter the NFV chain through one edge NF and progress to the
     other edge NF of the chain.
   * Within the chain, packets follow a zigzagging "snake" path entering
     and leaving host data-plane as they progress through the NF chain.
   * Host data-plane is involved in packet forwarding operations between
     NIC interfaces and edge NFs, as well as between NFs in the chain.

2. Pipeline packet path
   * Requires pipeline topology and configuration.
   * Packets enter the NFV pipeline through one edge NF and progress to
     the other edge NF of the pipeline.
   * Within the pipeline, packets follow a straight path entering and
     leaving subsequent NFs as they progress through the NF pipeline.
   * Host data-plane is involved in packet forwarding operations between
     NIC interfaces and edge NFs only.

Both packet paths are shown in the figures below.

Snake packet path:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        |  |        |     ....     |        | Service1   |
    | |  XXXX  |  |  XXXX  |              |  XXXX  |            |
    | +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+            |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Virtual    |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
    | +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
    | |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
    | |  X                                      X  |            |
    | |  X          Host Data-Plane             X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

        Figure 4. Snake packet path through NF chain topology.


Pipeline packet path:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        +--+        +--+  ....  +--+        | Service1   |
    | |  XXXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXX  |            |
    | +--X-----+  +--------+              +-----X--+            |
    |   |X                                      X|   Virtual    |
    |   |X                                      X|   Interfaces |
    | +-+X--------------------------------------X+-+            |
    | |  X                                      X  |            |
    | |  X                                      X  |            |
    | |  X          Host Data-Plane             X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

      Figure 5. Pipeline packet path through NF pipeline topology.

In all cases packets enter the NFV system via shared physical NIC
interfaces controlled by the shared host data-plane, are then associated
with a specific NFV service (based on a service discriminator) and are
subsequently cross-connected/switched/routed by the host data-plane to
and through the NF topologies per one of the above listed schemes.
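
As an illustration of how the two packet paths differ in per-packet
processing cost, the short Python sketch below counts forwarding hops
for a single direction of traffic through a service instance of n NFs.
It reflects one reading of Figures 4 and 5, assumes that pipeline
NF-to-NF connections are direct point-to-point virtual links, and is
illustrative only.

    # Illustrative sketch: per-packet forwarding hops for one direction
    # of traffic through a service instance of n_nfs NFs (Figures 4, 5).
    def packet_path_hops(n_nfs, path):
        if path == "snake":
            # Host data-plane forwards NIC->NF1, NF1->NF2, ..., NFn->NIC.
            return {
                "host_dataplane_forwarding_ops": n_nfs + 1,
                "virtual_link_crossings": 2 * n_nfs,
            }
        if path == "pipeline":
            # Host data-plane forwards only NIC->edge NF and edge NF->NIC;
            # NF-to-NF hops are assumed to be direct point-to-point links.
            return {
                "host_dataplane_forwarding_ops": 2,
                "virtual_link_crossings": 2 + (n_nfs - 1),
            }
        raise ValueError("path must be 'snake' or 'pipeline'")

    # Example: a service instance with 4 NFs.
    print(packet_path_hops(4, "snake"))     # 5 host hops, 8 vlinks
    print(packet_path_hops(4, "pipeline"))  # 2 host hops, 5 vlinks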

# Virtualization Technology

NFV services are built of composite isolated NFs, with virtualisation
technology providing the workload isolation. The following
virtualisation technology types are considered for NFV service density
benchmarking:

1. Virtual Machines (VMs)
   * Relying on host hypervisor technology e.g. KVM, ESXi, Xen.
   * NFs running in VMs are referred to as VNFs.
2. Containers
   * Relying on Linux container technology e.g. LXC, Docker.
   * NFs running in Containers are referred to as CNFs.

Different virtual interface types are available to VNFs and CNFs:

1. VNF
   * virtio-vhostuser: fully user-mode based virtual interface.
   * virtio-vhostnet: involves kernel-mode based backend.
2. CNF
   * memif: fully user-mode based virtual interface.
   * af_packet: involves kernel-mode based backend.
   * (add more common ones)

# Host Networking

The host networking data-plane is the central shared resource that
underpins the creation of NFV services. It handles all of the
connectivity to external physical network devices through physical
network connections using NICs, over which the benchmarking is done.

Assuming that NIC interface resources are shared, here is the list of
widely available host data-plane options for providing packet
connectivity to/from NICs and constructing NFV chain and pipeline
topologies and configurations:

* Linux Kernel-Mode Networking.
* Linux User-Mode vSwitch.
* Virtual Machine vSwitch.
* Linux Container vSwitch.
* SR-IOV NIC Virtual Function - note: restricted support for chain and
  pipeline topologies, as it requires hair-pinning through the NIC and
  oftentimes also through an external physical switch.

Analysing the properties of each of these options and their pros/cons
for the specified NFV topologies and configurations is outside the scope
of this document.

Of all the listed options, the performance optimised Linux user-mode
vSwitch deserves special attention. It decouples NFV services from the
underlying NIC hardware, and offers rich multi-tenant functionality and
the most flexibility for supporting NFV services. At the same time it
consumes compute resources and is harder to benchmark in NFV service
density scenarios.

The following sections therefore focus on the Linux user-mode vSwitch
and its performance benchmarking at increasing levels of NFV service
density.

# NFV Service Density Matrix

In order to evaluate the performance of multiple NFV services running on
a compute node, NFV service instances are benchmarked at increasing
density, allowing an NFV Service Density Matrix to be constructed. The
table below shows an example of such a matrix, capturing the number of
NFV service instances (row indices), the number of NFs per service
instance (column indices) and the resulting total number of NFs
(values).

    NFV Service Density - NF Count View

    SVC   001   002   004   006   008   00N
    001     1     2     4     6     8   1*N
    002     2     4     8    12    16   2*N
    004     4     8    16    24    32   4*N
    006     6    12    24    36    48   6*N
    008     8    16    32    48    64   8*N
    00M   M*1   M*2   M*4   M*6   M*8   M*N

    RowIndex:     Number of NFV Service Instances, 1..M.
    ColumnIndex:  Number of NFs per NFV Service Instance, 1..N.
    Value:        Total number of NFs running in the system.

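As an illustration, the NF Count View above can be generated with a few
lines of Python; the function and variable names below are illustrative
only and not part of the methodology.

    # Illustrative sketch: total NF count per (RowIndex, ColumnIndex)
    # cell of the NFV Service Density Matrix, NF Count View.
    def nf_count_matrix(instance_counts, nfs_per_instance_counts):
        return {
            m: {n: m * n for n in nfs_per_instance_counts}
            for m in instance_counts
        }

    matrix = nf_count_matrix([1, 2, 4, 6, 8], [1, 2, 4, 6, 8])
    assert matrix[4][6] == 24   # 4 service instances x 6 NFs each
    assert matrix[8][8] == 64
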
In order to deliver good and repeatable network data-plane performance,
NFs and host data-plane software require direct access to critical
compute resources. Due to the shared nature of all resources on a
compute node, a clearly defined resource allocation scheme is specified
in the next section to address this.

In each tested configuration the host data-plane is the gateway between
the external network and the internal NFV network topologies. Offered
packet load is generated and received by an external traffic generator,
per usual benchmarking practice.

It is proposed that initial benchmarks are done with the offered packet
load distributed equally across all configured NFV service instances.
This could be followed by various per NFV service instance load ratios
mimicking expected production deployment scenario(s).

The following sections specify compute resource allocation, followed by
examples of applying the NFV service density methodology to VNF and CNF
benchmarking use cases.

# Compute Resource Allocation

Performance optimized NF and host data-plane software threads require
timely execution of packet processing instructions and are very
sensitive to any interruptions (or stalls) of this execution, e.g. CPU
core context switching or CPU jitter. To that end, the NFV service
density methodology treats controlled mapping ratios of data-plane
software threads to physical processor cores, with directly allocated
cache hierarchies, as the first order requirement.

Other compute resources, including memory bandwidth and PCIe bandwidth,
have a lesser impact and as such are a subject for further study. A more
detailed deep-dive analysis of software data-plane performance and its
dependence on different shared compute resources is available in [BSDP].

It is assumed that NFs as well as the host data-plane (e.g. vSwitch) are
performance optimized, with their tasks executed in two types of
software threads:

* data-plane - handling data-plane packet processing and forwarding,
  time critical, requires dedicated cores. To scale data-plane
  performance, most NF apps use multiple data-plane threads and rely on
  NIC RSS (Receive Side Scaling), virtual interface multi-queue and/or
  integrated software hashing to distribute packets across the
  data-plane threads.

* main-control - handling application management, statistics and
  control-planes, less time critical, allows for core sharing. For most
  NF apps this is a single main thread, but often statistics (counters)
  and various control protocol software are run in separate threads.

The core mapping scheme described below allocates cores for all threads
of a specified type belonging to each NF app instance, and separately
lists the mapping of a number of threads to a number of logical/physical
cores for processor configurations with Symmetric Multi-Threading (SMT)
(e.g. AMD SMT, Intel Hyper-Threading) enabled or disabled.

If NFV service density benchmarking is run on server nodes with SMT
enabled for higher performance and efficiency, logical cores allocated
to data-plane threads should be allocated as pairs of sibling logical
cores corresponding to the hyper-threads running on the same physical
core.

Separate core ratios are defined for mapping threads of the vSwitch and
of the NFs. In order to get consistent benchmarking results, the mapping
ratios are enforced using Linux core pinning, as illustrated by the
sketch following the table legend below.

| application | thread type | app:core ratio | threads/pcores (SMT disabled) | threads/lcores map (SMT enabled) |
|:-----------:|:-----------:|:--------------:|:-----------------------------:|:--------------------------------:|
| vSwitch-1c  | data        | 1:1            | 1DT/1PC                       | 2DT/2LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
|             |             |                |                               |                                  |
| vSwitch-2c  | data        | 1:2            | 2DT/2PC                       | 4DT/4LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
|             |             |                |                               |                                  |
| vSwitch-4c  | data        | 1:4            | 4DT/4PC                       | 8DT/8LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
|             |             |                |                               |                                  |
| NF-0.5c     | data        | 1:S2           | 1DT/S2PC                      | 1DT/1LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
|             |             |                |                               |                                  |
| NF-1c       | data        | 1:1            | 1DT/1PC                       | 2DT/2LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
|             |             |                |                               |                                  |
| NF-2c       | data        | 1:2            | 2DT/2PC                       | 4DT/4LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |

* Legend to table
  * Header row
    * application - network application with optimized data-plane, a
      vSwitch or Network Function (NF) application.
    * thread type - either "data", short for data-plane; or "main",
      short for all main-control threads.
    * app:core ratio - ratio of per-application-instance threads of a
      specific thread type to physical cores.
    * threads/pcores (SMT disabled) - number of threads of a specific
      type (DT for data-plane thread, MT for main thread) running on a
      number of physical cores, with SMT disabled.
    * threads/lcores map (SMT enabled) - number of threads of a specific
      type (DT, MT) running on a number of logical cores, with SMT
      enabled. Two logical cores per one physical core.
  * Content rows
    * vSwitch-(1c|2c|4c) - vSwitch with 1 physical core (or 2, or 4)
      allocated to its data-plane software worker threads.
    * NF-(0.5c|1c|2c) - NF application with half of a physical core (or
      1, or 2) allocated to its data-plane software worker threads.
    * Sn - shared core, sharing ratio of (n).
    * DT - data-plane thread.
    * MT - main-control thread.
    * PC - physical core; with SMT/HT enabled it has multiple (mostly 2
      today) logical cores associated with it.
    * LC - logical core; if more than one LC is allocated, LCs are
      allocated in sets of two sibling logical cores running on the same
      physical core.
    * SnPC - shared physical core, sharing ratio of (n).
    * SnLC - shared logical core, sharing ratio of (n).

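As a minimal illustration of core pinning (assuming Linux and Python
3.3+ only), the sketch below restricts a process to an explicit set of
logical cores using the standard os module. Production setups typically
use taskset, cgroup cpusets or NUMA-aware orchestration instead, and the
core ids shown are hypothetical.

    # Illustrative sketch only: pin the calling process (and threads it
    # subsequently spawns) to a given set of logical cores on Linux.
    import os

    def pin_to_logical_cores(logical_cores):
        os.sched_setaffinity(0, set(logical_cores))  # pid 0 = self
        return sorted(os.sched_getaffinity(0))

    # Example: an NF-1c data-plane process on an SMT-enabled system gets
    # one sibling pair of logical cores on the same physical core
    # (hypothetical core ids; actual ids depend on the CPU topology).
    if __name__ == "__main__":
        print(pin_to_logical_cores([2, 30]))
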
Maximum benchmarked NFV service densities are limited by the number of
physical cores on a compute node.

A sample physical core usage view is shown in the matrix below.

    NFV Service Density - Core Usage View
    vSwitch-1c, NF-1c

    SVC   001   002   004   006   008   010
    001     2     3     6     9    12    15
    002     3     6    12    18    24    30
    004     6    12    24    36    48    60
    006     9    18    36    54    72    90
    008    12    24    48    72    96   120
    010    15    30    60    90   120   150

    RowIndex:     Number of NFV Service Instances, 1..10.
    ColumnIndex:  Number of NFs per NFV Service Instance, 1..10.
    Value:        Total number of physical processor cores used for NFs.
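
The Core Usage View values can be reproduced from the mapping ratios
above: with NF-1c each NF consumes one physical core for its data-plane
thread plus half a physical core (1:S2 sharing) for its main-control
thread, and the view counts cores used for NFs only (vSwitch cores come
on top). A minimal Python sketch of this arithmetic, with illustrative
names, follows.

    # Illustrative sketch: physical cores used by NFs in the Core Usage
    # View, assuming NF-1c (1 data-plane core plus 0.5 shared
    # main-control core per NF).
    import math

    def nf_core_usage(instances, nfs_per_instance,
                      data_cores=1.0, main_cores=0.5):
        total_nfs = instances * nfs_per_instance
        return math.ceil(total_nfs * (data_cores + main_cores))

    assert nf_core_usage(1, 1) == 2
    assert nf_core_usage(4, 6) == 36
    assert nf_core_usage(10, 10) == 150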

# NFV Service Density Benchmarks

To illustrate the applicability of the defined NFV service density
methodology, the following sections describe three sets of NFV service
topologies and configurations that have been benchmarked in open source:
i) in [LFN-FDio-CSIT], a continuous testing and data-plane benchmarking
project, and ii) as part of the CNCF CNF Testbed initiative
[CNCF-CNF-Testbed].

In both cases each NFV service instance definition is based on the same
set of NF applications, and varies only by network addressing
configuration to emulate a multi-tenant operating environment.

## Test Methodology - MRR Throughput

Initial NFV density throughput benchmarks have been performed using the
Maximum Receive Rate (MRR) test methodology defined and used in FD.io
CSIT.

MRR tests measure the packet forwarding rate under the maximum load
offered by the traffic generator over a set trial duration, regardless
of packet loss. Maximum load for a specified Ethernet frame size is set
to the bi-directional link rate (2x 10GbE in the referred results).

Tests were conducted with two traffic profiles: i) a continuous stream
of 64B frames, and ii) a continuous stream of an IMIX sequence of
(7x 64B, 4x 570B, 1x 1518B); all sizes are L2 untagged Ethernet.
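
For reference, the maximum offered load in packets per second for a
given Ethernet frame size follows directly from the link rate and the
20B per-frame preamble plus inter-frame gap overhead. A small Python
sketch of this calculation (per direction of a 10GbE link) is shown
below; it is illustrative only.

    # Illustrative sketch: line-rate packet rate for a given frame size.
    ETH_OVERHEAD = 20  # bytes: preamble + start frame delimiter + IFG

    def line_rate_pps(frame_size, link_rate_bps=10e9):
        return link_rate_bps / ((frame_size + ETH_OVERHEAD) * 8)

    imix_avg = (7 * 64 + 4 * 570 + 1 * 1518) / 12  # ~353.8 bytes

    print(round(line_rate_pps(64)))        # ~14.88 Mpps per direction
    print(round(line_rate_pps(imix_avg)))  # ~3.34 Mpps per direction
    # Bi-directional 2x 10GbE doubles the aggregate maximum offered load.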

NFV service topologies tested include: VNF service chains, CNF service
chains and CNF service pipelines.

## VNF Service Chain

VNF Service Chain (VSC) topology is tested with the KVM hypervisor
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs running
in VMs (VNFs). Host data-plane is provided by the FD.io VPP vSwitch.
Virtual interfaces are virtio-vhostuser. The snake forwarding packet
path is tested using the [TRex] traffic generator, see figure.

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | | S1VNF1 |  | S1VNF2 |              | S1VNFn |            |
    | |        |  |        |     ....     |        | Service1   |
    | |  XXXX  |  |  XXXX  |              |  XXXX  |            |
    | +-+X--X+-+  +-+X--X+-+              +-+X--X+-+            |
    |   |X  X|      |X  X|                  |X  X|   Virtual    |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
    | +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
    | |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
    | |  X                                      X  |            |
    | |  X          FD.io VPP vSwitch           X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                  Traffic Generator (TRex)                 |
    |                                                           |
    +-----------------------------------------------------------+

            Figure 6. VNF service chain test setup.


## CNF Service Chain

CNF Service Chain (CSC) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs running
in Containers (CNFs). Host data-plane is provided by the FD.io VPP
vSwitch. Virtual interfaces are memif. The snake forwarding packet path
is tested using the [TRex] traffic generator, see figure.

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | | S1CNF1 |  | S1CNF2 |              | S1CNFn |            |
    | |        |  |        |     ....     |        | Service1   |
    | |  XXXX  |  |  XXXX  |              |  XXXX  |            |
    | +-+X--X+-+  +-+X--X+-+              +-+X--X+-+            |
    |   |X  X|      |X  X|                  |X  X|   Virtual    |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
    | +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
    | |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
    | |  X                                      X  |            |
    | |  X          FD.io VPP vSwitch           X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                  Traffic Generator (TRex)                 |
    |                                                           |
    +-----------------------------------------------------------+

          Figure 7. CNF service chain test setup.

## CNF Service Pipeline

CNF Service Pipeline (CSP) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs running
in Containers (CNFs). Host data-plane is provided by the FD.io VPP
vSwitch. Virtual interfaces are memif. The pipeline forwarding packet
path is tested using the [TRex] traffic generator, see figure.

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        +--+        +--+  ....  +--+        | Service1   |
    | |  XXXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXX  |            |
    | +--X-----+  +--------+              +-----X--+            |
    |   |X                                      X|   Virtual    |
    |   |X                                      X|   Interfaces |
    | +-+X--------------------------------------X+-+            |
    | |  X                                      X  |            |
    | |  X                                      X  |            |
    | |  X          FD.io VPP vSwitch           X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                  Traffic Generator (TRex)                 |
    |                                                           |
    +-----------------------------------------------------------+

              Figure 8. CNF service pipeline test setup.

## Sample Results: FD.io CSIT

The FD.io CSIT project introduced NFV density benchmarking in release
CSIT-1901 and published results for the following NFV service topologies
and configurations:

1. VNF Service Chains
   * VNF: DPDK-L3FWD v18.10
     * IPv4 forwarding
     * NF-1c
   * vSwitch: VPP v19.01-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
2. CNF Service Chains
   * CNF: VPP v19.01-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v19.01-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
3. CNF Service Pipelines
   * CNF: VPP v19.01-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v19.01-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX

More information is available in the FD.io CSIT-1901 report, with
specific references listed below:

* Testbed: [CSIT-1901-testbed-2n-skx]
* Test environment: [CSIT-1901-test-environment]
* Methodology: [CSIT-1901-nfv-density-methodology]
* Results: [CSIT-1901-nfv-density-results]

## Sample Results: CNCF/CNFs

The CNCF CI team introduced a CNF testbed initiative focusing on
benchmarking NFV density with open-source network applications running
as VNFs and CNFs. The following NFV service topologies and
configurations have been tested to date:

1. VNF Service Chains
   * VNF: VPP v18.10-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v18.10-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
2. CNF Service Chains
   * CNF: VPP v18.10-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v18.10-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
3. CNF Service Pipelines
   * CNF: VPP v18.10-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v18.10-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX

More information is available in the CNCF CNF Testbed GitHub repository,
with summary test results presented in a summary markdown file,
referenced below:

* Results: [CNCF-CNF-Testbed-Results]

# IANA Considerations

This document makes no requests of IANA.

# Security Considerations

..

# Acknowledgements

Thanks to Vratko Polak of the FD.io CSIT project and Michael Pedersen of
the CNCF Testbed initiative for their contributions and useful
suggestions.

--- back