---
title: NFV Service Density Benchmarking
# abbrev: nf-svc-density
docname: draft-mkonstan-nf-service-density-01
date: 2019-07-08

ipr: trust200902
area: ops
wg: Benchmarking Working Group
kw: Internet-Draft
cat: info

coding: us-ascii
pi:    # can use array (if all yes) or hash here
#  - toc
#  - sortrefs
#  - symrefs
  toc: yes
  sortrefs:   # defaults to yes
  symrefs: yes

author:
      -
        ins: M. Konstantynowicz
        name: Maciek Konstantynowicz
        org: Cisco Systems
        role: editor
        email: mkonstan@cisco.com
      -
        ins: P. Mikus
        name: Peter Mikus
        org: Cisco Systems
        role: editor
        email: pmikus@cisco.com

normative:
  RFC2544:
  RFC8174:

informative:
  RFC8204:
  TST009:
    target: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.01.01_60/gs_NFV-TST009v030101p.pdf
    title: "ETSI GS NFV-TST 009 V3.1.1 (2018-10), Network Functions Virtualisation (NFV) Release 3; Testing; Specification of Networking Benchmarks and Measurement Methods for NFVI"
    date: 2018-10
  BSDP:
    target: https://fd.io/wp-content/uploads/sites/34/2019/03/benchmarking_sw_data_planes_skx_bdx_mar07_2019.pdf
    title: "Benchmarking Software Data Planes Intel® Xeon® Skylake vs. Broadwell"
    date: 2019-03
  draft-vpolak-mkonstan-bmwg-mlrsearch:
    target: https://tools.ietf.org/html/draft-vpolak-mkonstan-bmwg-mlrsearch
    title: "Multiple Loss Ratio Search for Packet Throughput (MLRsearch)"
    date: 2019-07
  draft-vpolak-bmwg-plrsearch:
    target: https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch
    title: "Probabilistic Loss Ratio Search for Packet Throughput (PLRsearch)"
    date: 2019-07
  LFN-FDio-CSIT:
    target: https://wiki.fd.io/view/CSIT
    title: "Fast Data io, Continuous System Integration and Testing Project"
    date: 2019-07
  CNCF-CNF-Testbed:
    target: https://github.com/cncf/cnf-testbed/
    title: "Cloud native Network Function (CNF) Testbed"
    date: 2019-07
  TRex:
    target: https://github.com/cisco-system-traffic-generator/trex-core
    title: "TRex Low-Cost, High-Speed Stateful Traffic Generator"
    date: 2019-07
  CSIT-1904-testbed-2n-skx:
    target: https://docs.fd.io/csit/rls1904/report/introduction/physical_testbeds.html#node-xeon-skylake-2n-skx
    title: "FD.io CSIT Test Bed"
    date: 2019-06
  CSIT-1904-test-enviroment:
    target: https://docs.fd.io/csit/rls1904/report/vpp_performance_tests/test_environment.html
    title: "FD.io CSIT Test Environment"
    date: 2019-06
  CSIT-1904-nfv-density-methodology:
    target: https://docs.fd.io/csit/rls1904/report/introduction/methodology_nfv_service_density.html
    title: "FD.io CSIT Test Methodology: NFV Service Density"
    date: 2019-06
  CSIT-1904-nfv-density-results:
    target: https://docs.fd.io/csit/rls1904/report/vpp_performance_tests/nf_service_density/index.html
    title: "FD.io CSIT Test Results: NFV Service Density"
    date: 2019-06
  CNCF-CNF-Testbed-Results:
    target: https://github.com/cncf/cnf-testbed/blob/master/comparison/doc/cncf-cnfs-results-summary.md
    title: "CNCF CNF Testbed: NFV Service Density Benchmarking"
    date: 2018-12
  NFVbench:
    target: https://opnfv-nfvbench.readthedocs.io/en/latest/testing/user/userguide/readme.html
    title: "NFVbench Data Plane Performance Measurement Features"
    date: 2019-07

--- abstract

Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying the performance of
network services realised with software Network Functions (NF) running
on Commercial-Off-The-Shelf (COTS) servers. One of the main challenges
is getting repeatable and portable benchmarking results and using them
to derive a deterministic operating range that is production deployment
worthy.

This document specifies a benchmarking methodology for NFV services
that aims to address this problem space. It defines a way to measure
the performance of multiple NFV service instances, each composed of
multiple software NFs, running at a varied service "packing" density on
a single server.

The aim is to discover the deterministic usage range of an NFV system.
In addition, the specified methodology can be used to compare and
contrast different NFV virtualization technologies.

--- middle

# Terminology

* NFV: Network Function Virtualization, a general industry term
  describing network functionality implemented in software.
* NFV service: a software based network service realized by a topology
  of interconnected constituent software network function applications.
* NFV service instance: a single instantiation of NFV service.
* Data-plane optimized software: any software with dedicated threads
  handling data-plane packet processing e.g. FD.io VPP (Vector Packet
  Processor), OVS-DPDK.
* Packet Loss Ratio (PLR): ratio of packets lost relative to packets
  transmitted over the test trial duration, calculated using the
  formula:
  PLR = ( pkts_transmitted - pkts_received ) / pkts_transmitted.
  For bi-directional throughput tests aggregate PLR is calculated based
  on the aggregate number of packets transmitted and received.
* Packet Throughput Rate: maximum packet offered load DUT/SUT forwards
  within the specified Packet Loss Ratio (PLR). In many cases the rate
  depends on the frame size processed by DUT/SUT. Hence packet
  throughput rate MUST be quoted with the specific frame size as
  received by DUT/SUT during the measurement. For bi-directional tests,
  packet throughput rate should be reported as an aggregate for both
  directions. Measured in packets-per-second (pps) or frames-per-second
  (fps), equivalent metrics.
* Non Drop Rate (NDR): maximum packet/bandwidth throughput rate
  sustained by DUT/SUT at PLR equal to zero (zero packet loss),
  specific to tested frame size(s). MUST be quoted with the specific
  packet size as received by DUT/SUT during the measurement. Packet NDR
  is measured in packets-per-second (or fps), bandwidth NDR is
  expressed in bits-per-second (bps).
* Partial Drop Rate (PDR): maximum packet/bandwidth throughput rate
  sustained by DUT/SUT at PLR greater than zero (non-zero packet loss),
  specific to tested frame size(s). MUST be quoted with the specific
  packet size as received by DUT/SUT during the measurement. Packet PDR
  is measured in packets-per-second (or fps), bandwidth PDR is
  expressed in bits-per-second (bps).
* Maximum Receive Rate (MRR): packet/bandwidth rate, regardless of PLR,
  sustained by DUT/SUT under the specified Maximum Transmit Rate (MTR)
  packet load offered by the traffic generator. MUST be quoted with
  both the specific packet size and MTR as received by DUT/SUT during
  the measurement. Packet MRR is measured in packets-per-second (or
  fps), bandwidth MRR is expressed in bits-per-second (bps).
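
The loss metrics above reduce to simple arithmetic. A minimal sketch of
the PLR calculation, including the aggregate form used for
bi-directional tests (variable names are illustrative, not part of the
methodology):

```python
def packet_loss_ratio(pkts_transmitted: int, pkts_received: int) -> float:
    """PLR = (pkts_transmitted - pkts_received) / pkts_transmitted."""
    return (pkts_transmitted - pkts_received) / pkts_transmitted

def aggregate_plr(tx_per_direction, rx_per_direction) -> float:
    """Bi-directional tests: aggregate tx and rx counts first."""
    return packet_loss_ratio(sum(tx_per_direction), sum(rx_per_direction))

# NDR corresponds to the highest rate where PLR == 0; PDR to the
# highest rate where PLR stays within a non-zero target ratio.
print(packet_loss_ratio(1_000_000, 999_000))                  # 0.001
print(aggregate_plr([500_000, 500_000], [500_000, 499_000]))  # 0.001
```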

# Motivation

## Problem Description

Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying the performance of
network services realised with software Network Functions (NF) running
on Commercial-Off-The-Shelf (COTS) servers. One of the main challenges
is getting repeatable and portable benchmarking results and using them
to derive a deterministic operating range that is production deployment
worthy.

The lack of a well defined and standardised NFV centric performance
methodology and metrics makes it hard to address fundamental questions
that underpin NFV production deployments:

1. What NFV service and how many instances can run on a single compute
   node?
2. How to choose the best compute resource allocation scheme to maximise
   service yield per node?
3. How do different NF applications compare from the service density
   perspective?
4. How do the virtualisation technologies compare e.g. Virtual Machines,
   Containers?

Getting answers to these points should allow designers to make data
based decisions about the NFV technology and service design best suited
to meet the requirements of their use cases. The benchmarking data
obtained would aid in the selection of the most appropriate NFV
infrastructure design and platform, and enable more accurate capacity
planning, an important element for the commercial viability of the NFV
service.

## Proposed Solution

The primary goal of the proposed benchmarking methodology is to focus on
NFV technologies used to construct NFV services. More specifically, to
i) measure packet data-plane performance of multiple NFV service
instances while running them at varied service "packing" densities on a
single server and ii) quantify the impact of using multiple NFs to
construct each NFV service instance, introducing multiple packet
processing hops and links on each packet path.

The overarching aim is to discover a set of deterministic usage ranges
that are of interest to NFV system designers and operators. In addition,
the specified methodology can be used to compare and contrast different
NFV virtualisation technologies.

In order to ensure wide applicability of the benchmarking methodology,
the approach is to separate NFV service packet processing from the
shared virtualisation infrastructure by decomposing the software
technology stack into three building blocks:

                  +-------------------------------+
                  |          NFV Service          |
                  +-------------------------------+
                  |   Virtualization Technology   |
                  +-------------------------------+
                  |        Host Networking        |
                  +-------------------------------+

              Figure 1. NFV software technology stack.

The proposed methodology is complementary to existing NFV benchmarking
industry efforts focusing on vSwitch benchmarking [RFC8204], [TST009]
and extends the benchmarking scope to NFV services.

This document does not describe a complete benchmarking methodology;
instead it focuses on the system under test configuration. Each of
the compute node configurations identified in this document is
to be evaluated for NFV service data-plane performance using existing
and/or emerging network benchmarking standards. This may include
methodologies specified in [RFC2544], [TST009],
[draft-vpolak-mkonstan-bmwg-mlrsearch] and/or
[draft-vpolak-bmwg-plrsearch].

# NFV Service

It is assumed that each NFV service instance is built of one or more
constituent NFs and is described by: topology, configuration and
resulting packet path(s).

Each set of NFs forms an independent NFV service instance, with multiple
sets present in the host.

## Topology

NFV topology describes the number of network functions per service
instance, and their inter-connections over packet interfaces. It
includes all point-to-point virtual packet links within the compute
node, Layer-2 Ethernet or Layer-3 IP, including the ones to the host
networking data-plane.

Theoretically, a large set of possible NFV topologies can be realised
using software virtualisation topologies, e.g. ring, partial-/full-
mesh, star, line, tree, ladder. In practice however, only a few
topologies are in actual use, as NFV services mostly perform either
bump-in-the-wire packet operations (e.g. security filtering/inspection,
monitoring/telemetry) and/or inter-site forwarding decisions (e.g.
routing, switching).

Two main NFV topologies have been identified so far for NFV service
density benchmarking:

1. Chain topology: a set of NFs connect to the host data-plane with a
   minimum of two virtual interfaces each, enabling the host data-plane
   to facilitate NF-to-NF service chain forwarding and provide
   connectivity with the external network.

2. Pipeline topology: a set of NFs connect to each other in a line
   fashion, with edge NFs homed to the host data-plane. The host
   data-plane provides connectivity with the external network.

In both cases multiple NFV service topologies are running in parallel.
Both topologies are shown in Figures 2 and 3 below.

NF chain topology:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    |    SmNF1       SmNF2                   SmNFn   Service-m  |
    |     ...         ...                     ...       ...     |
    |    S2NF1       S2NF2                   S2NFn   Service-2  |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        |  |        |     ....     |        | Service-1  |
    | |        |  |        |              |        |            |
    | +-+----+-+  +-+----+-+    +    +    +-+----+-+            |
    |   |    |      |    |      |    |      |    |   Virtual    |
    |   |    |<-CS->|    |<-CS->|    |<-CS->|    |   Interfaces |
    | +-+----+------+----+------+----+------+----+-+            |
    | |                                            | CS: Chain  |
    | |                                            |   Segment  |
    | |             Host Data-Plane                |            |
    | +-+--+----------------------------------+--+-+            |
    |   |  |                                  |  |              |
    +-----------------------------------------------------------+
        |  |                                  |  |   Physical
        |  |                                  |  |   Interfaces
    +---+--+----------------------------------+--+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

      Figure 2. NF chain topology forming a service instance.

NF pipeline topology:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    |    SmNF1       SmNF2                   SmNFn   Service-m  |
    |     ...         ...                     ...       ...     |
    |    S2NF1       S2NF2                   S2NFn   Service-2  |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        +--+        +--+  ....  +--+        | Service-1  |
    | |        |  |        |              |        |            |
    | +-+------+  +--------+              +------+-+            |
    |   |                                        |   Virtual    |
    |   |<-Pipeline Edge          Pipeline Edge->|   Interfaces |
    | +-+----------------------------------------+-+            |
    | |                                            |            |
    | |                                            |            |
    | |             Host Data-Plane                |            |
    | +-+--+----------------------------------+--+-+            |
    |   |  |                                  |  |              |
    +-----------------------------------------------------------+
        |  |                                  |  |   Physical
        |  |                                  |  |   Interfaces
    +---+--+----------------------------------+--+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

      Figure 3. NF pipeline topology forming a service instance.


## Configuration

NFV configuration includes all packet processing functions in NFs
including Layer-2, Layer-3 and/or Layer-4-to-7 processing as appropriate
to the specific NF and NFV service design. L2 sub-interface
encapsulations (e.g. 802.1q, 802.1ad) and IP overlay encapsulations
(e.g. VXLAN, IPSec, GRE) may be represented here too as appropriate,
although in most cases they are used as external encapsulation and
handled by the host networking data-plane.

NFV configuration determines the logical network connectivity, that is
Layer-2 and/or IPv4/IPv6 switching/routing modes, as well as NFV service
specific aspects. In the context of NFV density benchmarking methodology
the initial focus is on the logical network connectivity between the
NFs, and not on NFV service specific configurations. NF specific
functionality is emulated using IPv4/IPv6 routing.

Building on the two identified NFV topologies, two common NFV
configurations are considered:

1. Chain configuration:
   * Relies on chain topology to form NFV service chains.
   * NF packet forwarding designs:
     * IPv4/IPv6 routing.
   * Requirements for host data-plane:
     * L2 switching with L2 forwarding context per each NF chain
       segment, or
     * IPv4/IPv6 routing with IP forwarding context per each NF chain
       segment or per NF chain.

2. Pipeline configuration:
   * Relies on pipeline topology to form NFV service pipelines.
   * Packet forwarding designs:
     * IPv4/IPv6 routing.
   * Requirements for host data-plane:
     * L2 switching with L2 forwarding context per each NF pipeline
       edge link, or
     * IPv4/IPv6 routing with IP forwarding context per each NF pipeline
       edge link or per NF pipeline.

## Packet Path(s)

NFV packet path(s) describe the actual packet forwarding path(s) used
for benchmarking, resulting from NFV topology and configuration. They
aim to resemble true packet forwarding actions during the NFV service
lifecycle.

Based on the specified NFV topologies and configurations, two NFV packet
paths are taken for benchmarking:

1. Snake packet path
   * Requires chain topology and configuration.
   * Packets enter the NFV chain through one edge NF and progress to the
     other edge NF of the chain.
   * Within the chain, packets follow a zigzagging "snake" path entering
     and leaving the host data-plane as they progress through the NF
     chain.
   * Host data-plane is involved in packet forwarding operations between
     NIC interfaces and edge NFs, as well as between NFs in the chain.

2. Pipeline packet path
   * Requires pipeline topology and configuration.
   * Packets enter the NFV pipeline through one edge NF and progress to
     the other edge NF of the pipeline.
   * Within the pipeline, packets follow a straight path entering and
     leaving subsequent NFs as they progress through the NF pipeline.
   * Host data-plane is involved in packet forwarding operations between
     NIC interfaces and edge NFs only.

Both packet paths are shown in the figures below.

Snake packet path:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    |    SmNF1       SmNF2                   SmNFn   Service-m  |
    |     ...         ...                     ...       ...     |
    |    S2NF1       S2NF2                   S2NFn   Service-2  |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        |  |        |     ....     |        | Service-1  |
    | |  XXXX  |  |  XXXX  |              |  XXXX  |            |
    | +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+            |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Virtual    |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
    | +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
    | |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
    | |  X                                      X  |            |
    | |  X          Host Data-Plane             X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

        Figure 4. Snake packet path through NF chain topology.


Pipeline packet path:

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    |    SmNF1       SmNF2                   SmNFn   Service-m  |
    |     ...         ...                     ...       ...     |
    |    S2NF1       S2NF2                   S2NFn   Service-2  |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        +--+        +--+  ....  +--+        | Service-1  |
    | |  XXXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXX  |            |
    | +--X-----+  +--------+              +-----X--+            |
    |   |X                                      X|   Virtual    |
    |   |X                                      X|   Interfaces |
    | +-+X--------------------------------------X+-+            |
    | |  X                                      X  |            |
    | |  X                                      X  |            |
    | |  X          Host Data-Plane             X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                    Traffic Generator                      |
    |                                                           |
    +-----------------------------------------------------------+

      Figure 5. Pipeline packet path through NF pipeline topology.

In all cases packets enter the NFV system via shared physical NIC
interfaces controlled by the shared host data-plane, are then associated
with a specific NFV service (based on a service discriminator) and are
subsequently cross-connected/switched/routed by the host data-plane to
and through NF topologies per one of the above listed schemes.

# Virtualization Technology

NFV services are built of composite isolated NFs, with virtualisation
technology providing the workload isolation. The following
virtualisation technology types are considered for NFV service density
benchmarking:

1. Virtual Machines (VMs)
   * Relying on host hypervisor technology e.g. KVM, ESXi, Xen.
   * NFs running in VMs are referred to as VNFs.
2. Containers
   * Relying on Linux container technology e.g. LXC, Docker.
   * NFs running in Containers are referred to as CNFs.

Different virtual interface types are available to VNFs and CNFs:

1. VNF
   * virtio-vhostuser: fully user-mode based virtual interface.
   * virtio-vhostnet: involves kernel-mode based backend.
2. CNF
   * memif: fully user-mode based virtual interface.
   * af_packet: involves kernel-mode based backend.
   * (add more common ones)

# Host Networking

Host networking data-plane is the central shared resource that underpins
creation of NFV services. It handles all of the connectivity to external
physical network devices through physical network connections using
NICs, through which the benchmarking is done.

Assuming that NIC interface resources are shared, here is the list of
widely available host data-plane options for providing packet
connectivity to/from NICs and constructing NFV chain and pipeline
topologies and configurations:

* Linux Kernel-Mode Networking.
* Linux User-Mode vSwitch.
* Virtual Machine vSwitch.
* Linux Container vSwitch.
* SRIOV NIC Virtual Function - note: restricted support for chain and
  pipeline topologies, as it requires hair-pinning through the NIC and
  oftentimes also through an external physical switch.

Analysing the properties of each of these options and their pros/cons
for the specified NFV topologies and configurations is outside the scope
of this document.

Of all the listed options, the performance optimised Linux user-mode
vSwitch deserves special attention. A Linux user-mode vSwitch decouples
the NFV service from the underlying NIC hardware, and offers rich
multi-tenant functionality and the most flexibility for supporting NFV
services. At the same time it consumes compute resources and is harder
to benchmark in NFV service density scenarios.

The following sections focus on the Linux user-mode vSwitch and its
performance benchmarking at increasing levels of NFV service density.

# NFV Service Density Matrix

In order to evaluate performance of multiple NFV services running on a
compute node, NFV service instances are benchmarked at increasing
density, allowing construction of an NFV Service Density Matrix. The
table below shows an example of such a matrix, capturing the number of
NFV service instances (row indices), the number of NFs per service
instance (column indices) and the resulting total number of NFs
(values).

    NFV Service Density - NF Count View

    SVC   001   002   004   006   008   00N
    001     1     2     4     6     8   1*N
    002     2     4     8    12    16   2*N
    004     4     8    16    24    32   4*N
    006     6    12    24    36    48   6*N
    008     8    16    32    48    64   8*N
    00M   M*1   M*2   M*4   M*6   M*8   M*N

    RowIndex:     Number of NFV Service Instances, 1..M.
    ColumnIndex:  Number of NFs per NFV Service Instance, 1..N.
    Value:        Total number of NFs running in the system.
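
Each matrix value is simply the product of its row and column indices.
A short sketch generating the NF-count view for arbitrary sets of
densities (function name and data shape are illustrative):

```python
def nf_count_matrix(service_counts, nfs_per_service):
    """Map (service instances, NFs per instance) -> total NFs in system."""
    return {(m, n): m * n for m in service_counts for n in nfs_per_service}

matrix = nf_count_matrix([1, 2, 4, 6, 8], [1, 2, 4, 6, 8])
print(matrix[(4, 4)])   # 16: 4 service instances x 4 NFs each
print(matrix[(8, 8)])   # 64
```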

In order to deliver good and repeatable network data-plane performance,
NFs and host data-plane software require direct access to critical
compute resources. Due to the shared nature of all resources on a
compute node, a clear resource allocation scheme is defined in the
next section to address this.

In each tested configuration the host data-plane is a gateway between
the external network and the internal NFV network topologies. Offered
packet load is generated and received by an external traffic generator
per usual benchmarking practice.

It is proposed that benchmarks are done with the offered packet load
distributed equally across all configured NFV service instances.
This approach should provide representative benchmarking data for each
tested topology and configuration, and a good estimate of the maximum
performance required for capacity planning.
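
The equal load split above is plain division; a minimal sketch (rates
in packets-per-second, names illustrative):

```python
def per_instance_offered_load(total_offered_pps: float, instances: int) -> float:
    """Equal share of the offered packet load per NFV service instance."""
    return total_offered_pps / instances

# e.g. 10 Mpps offered load across 4 configured service instances:
print(per_instance_offered_load(10_000_000, 4))   # 2500000.0
```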

The following sections specify compute resource allocation, followed by
examples of applying the NFV service density methodology to VNF and CNF
benchmarking use cases.

# Compute Resource Allocation

Performance optimized NF and host data-plane software threads require
timely execution of packet processing instructions and are very
sensitive to any interruptions (or stalls) of this execution, e.g. cpu
core context switching or cpu jitter. To that end, NFV service density
methodology treats controlled mapping ratios of data-plane software
threads to physical processor cores, with directly allocated cache
hierarchies, as the first order requirement.

Other compute resources including memory bandwidth and PCIe bandwidth
have a lesser impact and as such are subject for further study. A more
detailed deep-dive analysis of software data-plane performance and the
impact of different shared compute resources is available in [BSDP].

It is assumed that NFs as well as the host data-plane (e.g. vSwitch) are
performance optimized, with their tasks executed in two types of
software threads:

* data-plane - handling data-plane packet processing and forwarding,
  time critical, requires dedicated cores. To scale data-plane
  performance, most NF apps use multiple data-plane threads and rely on
  NIC RSS (Receive Side Scaling), virtual interface multi-queue and/or
  integrated software hashing to distribute packets across the data
  threads.

* main-control - handling application management, statistics and
  control-planes, less time critical, allows for core sharing. For most
  NF apps this is a single main thread, but often statistics (counters)
  and various control protocol software are run in separate threads.

The core mapping scheme described below allocates cores for all threads
of a specified type belonging to each NF app instance, and separately
lists thread-to-core mappings (number of threads to number of
logical/physical cores) for processor configurations with Symmetric
Multi-Threading (SMT) (e.g. AMD SMT, Intel Hyper-Threading) enabled or
disabled.

If NFV service density benchmarking is run on server nodes with SMT
enabled for higher performance and efficiency, logical cores allocated
to data-plane threads should be allocated as pairs of sibling logical
cores corresponding to the hyper-threads running on the same physical
core.

Separate core ratios are defined for mapping threads of the vSwitch and
NFs. In order to get consistent benchmarking results, the mapping ratios
are enforced using Linux core pinning.

| application | thread type | app:core ratio | threads/pcores (SMT disabled)   | threads/lcores map (SMT enabled)   |
|:-----------:|:-----------:|:--------------:|:-------------------------------:|:----------------------------------:|
| vSwitch-1c  | data        | 1:1            | 1DT/1PC                         | 2DT/2LC                            |
|             | main        | 1:S2           | 1MT/S2PC                        | 1MT/1LC                            |
|             |             |                |                                 |                                    |
| vSwitch-2c  | data        | 1:2            | 2DT/2PC                         | 4DT/4LC                            |
|             | main        | 1:S2           | 1MT/S2PC                        | 1MT/1LC                            |
|             |             |                |                                 |                                    |
| vSwitch-4c  | data        | 1:4            | 4DT/4PC                         | 8DT/8LC                            |
|             | main        | 1:S2           | 1MT/S2PC                        | 1MT/1LC                            |
|             |             |                |                                 |                                    |
| NF-0.5c     | data        | 1:S2           | 1DT/S2PC                        | 1DT/1LC                            |
|             | main        | 1:S2           | 1MT/S2PC                        | 1MT/1LC                            |
|             |             |                |                                 |                                    |
| NF-1c       | data        | 1:1            | 1DT/1PC                         | 2DT/2LC                            |
|             | main        | 1:S2           | 1MT/S2PC                        | 1MT/1LC                            |
|             |             |                |                                 |                                    |
| NF-2c       | data        | 1:2            | 2DT/2PC                         | 4DT/4LC                            |
|             | main        | 1:S2           | 1MT/S2PC                        | 1MT/1LC                            |

* Legend to table
  * Header row
    * application - network application with optimized data-plane, a
      vSwitch or Network Function (NF) application.
    * thread type - either "data", short for data-plane; or "main",
      short for all main-control threads.
    * app:core ratio - ratio of threads of a specific type per
      application instance to physical cores.
    * threads/pcores (SMT disabled) - number of threads of a specific
      type (DT for data-plane thread, MT for main thread) running on a
      number of physical cores, with SMT disabled.
    * threads/lcores map (SMT enabled) - number of threads of a specific
      type (DT, MT) running on a number of logical cores, with SMT
      enabled. Two logical cores per one physical core.
  * Content rows
    * vSwitch-(1c|2c|4c) - vSwitch with 1 physical core (or 2, or 4)
      allocated to its data-plane software worker threads.
    * NF-(0.5c|1c|2c) - NF application with half of a physical core (or
      1, or 2) allocated to its data-plane software worker threads.
    * Sn - shared core, with a sharing ratio of (n).
    * DT - data-plane thread.
    * MT - main-control thread.
    * PC - physical core; with SMT/HT enabled it has multiple (mostly 2
      today) logical cores associated with it.
    * LC - logical core; if more than one LC is allocated, LCs are
      allocated in sets of two sibling logical cores running on the same
      physical core.
    * SnPC - shared physical core, with a sharing ratio of (n).
    * SnLC - shared logical core, with a sharing ratio of (n).

Maximum benchmarked NFV service densities are limited by the number of
physical cores on a compute node.

A sample physical core usage view is shown in the matrix below.

    NFV Service Density - Core Usage View
    vSwitch-1c, NF-1c

    SVC   001   002   004   006   008   010
    001     2     3     6     9    12    15
    002     3     6    12    18    24    30
    004     6    12    24    36    48    60
    006     9    18    36    54    72    90
    008    12    24    48    72    96   120
    010    15    30    60    90   120   150

    RowIndex:     Number of NFV Service Instances, 1..10.
    ColumnIndex:  Number of NFs per NFV Service Instance, 1..10.
    Value:        Total number of physical processor cores used for NFs.

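The matrix values follow directly from the NF-1c core ratios in the
table above: each NF consumes one physical core for its data-plane
thread plus half of a shared physical core (1:S2) for its main thread,
with the total rounded up to whole cores. A minimal sketch reproducing
the matrix (the helper name is illustrative):

```python
import math

def nf_pcores(services, nfs_per_service):
    """Physical cores used by NFs with NF-1c mapping: 1 PC for the
    data-plane thread plus half of a shared PC (1:S2) for the main
    thread per NF, rounded up to whole physical cores."""
    per_nf = 1.0 + 0.5
    return math.ceil(services * nfs_per_service * per_nf)

# Reproduce the NF-1c core usage matrix above:
for svc in (1, 2, 4, 6, 8, 10):
    print(svc, [nf_pcores(svc, nfs) for nfs in (1, 2, 4, 6, 8, 10)])
```
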
# NFV Service Data-Plane Benchmarking

NF service density scenarios should have their data-plane performance
benchmarked using existing and/or emerging network benchmarking
standards as noted earlier.

The following metrics should be measured (or calculated) and reported:

* Packet throughput rate (packets-per-second)
  * Specific to tested packet size or packet sequence (e.g. some type of
    packet size mix sent in recurrent sequence).
  * Applicable types of throughput rate: NDR, PDR, MRR.
* (Calculated) Bandwidth throughput rate (bits-per-second) corresponding
  to the measured packet throughput rate.
* Packet one-way latency (seconds)
  * Measured at different packet throughput rate loads, e.g. light,
    medium, heavy.

Listed metrics should be itemized per service instance and, for latency,
per direction (e.g. forward/reverse).

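The calculated bandwidth throughput rate can be derived from the
measured packet rate and frame size; a minimal sketch, using the
standard Ethernet L1 per-frame overhead of 20 B (preamble + SFD, 8 B;
minimal inter-frame gap, 12 B):

```python
ETH_L1_OVERHEAD_B = 20  # preamble + SFD (8 B) + minimal inter-frame gap (12 B)

def l1_bandwidth_bps(pps, frame_size_b):
    """Ethernet L1 bandwidth (bits-per-second) for a measured packet rate."""
    return pps * (frame_size_b + ETH_L1_OVERHEAD_B) * 8

def l2_bandwidth_bps(pps, frame_size_b):
    """L2 bandwidth counting only the Ethernet frame octets."""
    return pps * frame_size_b * 8

# 64B frames at the 10GbE line rate of ~14.88 Mpps saturate 10 Gbps at L1:
print(l1_bandwidth_bps(14_880_952, 64))  # 9999999744, i.e. ~10 Gbps
```
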
# Sample NFV Service Density Benchmarks

To illustrate the applicability of the defined NFV service density
benchmarks, the following sections describe three sets of NFV service
topologies and configurations that have been benchmarked in open-source:
i) in [LFN-FDio-CSIT], a continuous testing and data-plane benchmarking
project, ii) as part of the CNCF CNF Testbed initiative
[CNCF-CNF-Testbed] and iii) in the OPNFV NFVbench project.

In the first two cases each NFV service instance definition is based on
the same set of NF applications, and varies only by network addressing
configuration to emulate a multi-tenant operating environment.

The OPNFV NFVbench project focuses on benchmarking actual production
deployments that are aligned with OPNFV specifications.

## Interpreting the Sample Results

TODO How to interpret and avoid misreading the included results? And how
to avoid falling into the trap of using these results to draw
generalized conclusions about performance of different virtualization
technologies, e.g. VMs and Containers, irrespective of deployment
scenarios and which VNFs and CNFs are in actual use.

## Benchmarking MRR Throughput

Initial NFV density throughput benchmarks have been performed using the
Maximum Receive Rate (MRR) test methodology defined and used in FD.io
CSIT.

MRR tests measure the packet forwarding rate under a specified Maximum
Transmit Rate (MTR) packet load offered by the traffic generator over a
set trial duration, regardless of packet loss ratio (PLR). MTR for the
specified Ethernet frame size was set to the bi-directional link rate,
2x 10GbE in the referred results.

Tests were conducted with two traffic profiles: i) continuous stream of
64B frames, ii) continuous stream of IMIX sequence of (7x 64B, 4x 570B,
1x 1518B); all sizes are L2 untagged Ethernet.

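For reporting, the IMIX profile above is often summarized by its
average frame size; a minimal sketch of that calculation (the helper
name is illustrative):

```python
def imix_avg_frame_size(mix):
    """Average frame size (B) of a recurrent IMIX sequence given as
    a list of (frame_count, frame_size_B) tuples."""
    frames = sum(count for count, _ in mix)
    octets = sum(count * size for count, size in mix)
    return octets / frames

# IMIX profile used above: 7x 64B, 4x 570B, 1x 1518B per 12-frame sequence
print(round(imix_avg_frame_size([(7, 64), (4, 570), (1, 1518)]), 2))  # 353.83
```
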
NFV service topologies tested include: VNF service chains, CNF service
chains and CNF service pipelines.

## VNF Service Chain

VNF Service Chain (VSC) topology is tested with KVM hypervisor (Ubuntu
18.04-LTS), with NFV service instances consisting of NFs running in VMs
(VNFs). Host data-plane is provided by FD.io VPP vswitch. Virtual
interfaces are virtio-vhostuser. Snake forwarding packet path is tested
using [TRex] traffic generator, see figure.

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | | S1VNF1 |  | S1VNF2 |              | S1VNFn |            |
    | |        |  |        |     ....     |        | Service1   |
    | |  XXXX  |  |  XXXX  |              |  XXXX  |            |
    | +-+X--X+-+  +-+X--X+-+              +-+X--X+-+            |
    |   |X  X|      |X  X|                  |X  X|   Virtual    |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
    | +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
    | |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
    | |  X                                      X  |            |
    | |  X          FD.io VPP vSwitch           X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                  Traffic Generator (TRex)                 |
    |                                                           |
    +-----------------------------------------------------------+

            Figure 6. VNF service chain test setup.

## CNF Service Chain

CNF Service Chain (CSC) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs running
in Containers (CNFs). Host data-plane is provided by FD.io VPP vswitch.
Virtual interfaces are memif. Snake forwarding packet path is tested
using [TRex] traffic generator, see figure.

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | | S1CNF1 |  | S1CNF2 |              | S1CNFn |            |
    | |        |  |        |     ....     |        | Service1   |
    | |  XXXX  |  |  XXXX  |              |  XXXX  |            |
    | +-+X--X+-+  +-+X--X+-+              +-+X--X+-+            |
    |   |X  X|      |X  X|                  |X  X|   Virtual    |
    |   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
    | +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
    | |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
    | |  X                                      X  |            |
    | |  X          FD.io VPP vSwitch           X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                  Traffic Generator (TRex)                 |
    |                                                           |
    +-----------------------------------------------------------+

          Figure 7. CNF service chain test setup.

## CNF Service Pipeline

CNF Service Pipeline (CSP) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs running
in Containers (CNFs). Host data-plane is provided by FD.io VPP vswitch.
Virtual interfaces are memif. Pipeline forwarding packet path is tested
using [TRex] traffic generator, see figure.

    +-----------------------------------------------------------+
    |                     Host Compute Node                     |
    |                                                           |
    | +--------+  +--------+              +--------+            |
    | |  S1NF1 |  |  S1NF2 |              |  S1NFn |            |
    | |        +--+        +--+  ....  +--+        | Service1   |
    | |  XXXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXX  |            |
    | +--X-----+  +--------+              +-----X--+            |
    |   |X                                      X|   Virtual    |
    |   |X                                      X|   Interfaces |
    | +-+X--------------------------------------X+-+            |
    | |  X                                      X  |            |
    | |  X                                      X  |            |
    | |  X          FD.io VPP vSwitch           X  |            |
    | +-+X-+----------------------------------+-X+-+            |
    |   |X |                                  | X|              |
    +----X--------------------------------------X---------------+
        |X |                                  | X|   Physical
        |X |                                  | X|   Interfaces
    +---+X-+----------------------------------+-X+--------------+
    |                                                           |
    |                  Traffic Generator (TRex)                 |
    |                                                           |
    +-----------------------------------------------------------+

              Figure 8. CNF service pipeline test setup.

## Sample Results: FD.io CSIT

The FD.io CSIT project introduced NFV density benchmarking in release
CSIT-1904 and published results for the following NFV service topologies
and configurations:

1. VNF Service Chains
   * VNF: DPDK-L3FWD v19.02
     * IPv4 forwarding
     * NF-1c
   * vSwitch: VPP v19.04-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
2. CNF Service Chains
   * CNF: VPP v19.04-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v19.04-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
3. CNF Service Pipelines
   * CNF: VPP v19.04-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v19.04-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX

More information is available in the FD.io CSIT-1904 report, with
specific references listed below:

* Testbed: [CSIT-1904-testbed-2n-skx]
* Test environment: [CSIT-1904-test-enviroment]
* Methodology: [CSIT-1904-nfv-density-methodology]
* Results: [CSIT-1904-nfv-density-results]

## Sample Results: CNCF/CNFs

The CNCF CI team introduced a CNF Testbed initiative focusing on
benchmarking NFV density with open-source network applications running
as VNFs and CNFs. The following NFV service topologies and
configurations have been tested to date:

1. VNF Service Chains
   * VNF: VPP v18.10-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v18.10-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
2. CNF Service Chains
   * CNF: VPP v18.10-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v18.10-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX
3. CNF Service Pipelines
   * CNF: VPP v18.10-release
     * IPv4 routing
     * NF-1c
   * vSwitch: VPP v18.10-release
     * L2 MAC switching
     * vSwitch-1c, vSwitch-2c
   * frame sizes: 64B, IMIX

More information is available in the CNCF CNF Testbed GitHub repository,
with summary test results presented in a summary markdown file;
references are listed below:

* Results: [CNCF-CNF-Testbed-Results]

## Sample Results: OPNFV NFVbench

TODO Add a short NFVbench based test description, and an NFVbench sweep
chart with a single VM per service instance: Y-axis packet throughput
rate or bandwidth throughput rate, X-axis number of concurrent service
instances.

# IANA Considerations

This document makes no requests of IANA.

# Security Considerations

Benchmarking activities as described in this memo are limited to
technology characterization of a DUT/SUT using controlled stimuli in a
laboratory environment, with dedicated address space and the constraints
specified in the sections above.

The benchmarking network topology will be an independent test setup and
MUST NOT be connected to devices that may forward the test traffic into
a production network or misroute traffic to the test management network.

Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes.  Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.

# Acknowledgements

Thanks to Vratko Polak of the FD.io CSIT project and Michael Pedersen of
the CNCF Testbed initiative for their contributions and useful
suggestions. Extended thanks to Alec Hothan of the OPNFV NFVbench
project for numerous comments, suggestions and references to his and his
team's work on the OPNFV NFVbench project.

--- back
