2 title: NFV Service Density Benchmarking
3 # abbrev: nf-svc-density
4 docname: draft-mkonstan-nf-service-density-00
9 wg: Benchmarking Working Group
14 pi: # can use array (if all yes) or hash here
19 sortrefs: # defaults to yes
24 ins: M. Konstantynowicz
25 name: Maciek Konstantynowicz
28 email: mkonstan@cisco.com
34 email: pmikus@cisco.com
43 target: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.01.01_60/gs_NFV-TST009v030101p.pdf
44 title: "ETSI GS NFV-TST 009 V3.1.1 (2018-10), Network Functions Virtualisation (NFV) Release 3; Testing; Specification of Networking Benchmarks and Measurement Methods for NFVI"
47 target: https://fd.io/wp-content/uploads/sites/34/2019/03/benchmarking_sw_data_planes_skx_bdx_mar07_2019.pdf
48 title: "Benchmarking Software Data Planes Intel® Xeon® Skylake vs. Broadwell"
50 draft-vpolak-mkonstan-bmwg-mlrsearch:
51 target: https://tools.ietf.org/html/draft-vpolak-mkonstan-bmwg-mlrsearch-00
52 title: "Multiple Loss Ratio Search for Packet Throughput (MLRsearch)"
54 draft-vpolak-bmwg-plrsearch:
55 target: https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch-00
56 title: "Probabilistic Loss Ratio Search for Packet Throughput (PLRsearch)"
59 target: https://wiki.fd.io/view/CSIT
60 title: "Fast Data io, Continuous System Integration and Testing Project"
63 target: https://github.com/cncf/cnf-testbed/
64 title: "Cloud native Network Function (CNF) Testbed"
67 target: https://github.com/cisco-system-traffic-generator/trex-core
68 title: "TRex Low-Cost, High-Speed Stateful Traffic Generator"
70 CSIT-1901-testbed-2n-skx:
71 target: https://docs.fd.io/csit/rls1901/report/introduction/physical_testbeds.html#node-xeon-skylake-2n-skx
72 title: "FD.io CSIT Test Bed"
CSIT-1901-test-environment:
75 target: https://docs.fd.io/csit/rls1901/report/vpp_performance_tests/test_environment.html
76 title: "FD.io CSIT Test Environment"
78 CSIT-1901-nfv-density-methodology:
79 target: https://docs.fd.io/csit/rls1901/report/introduction/methodology_nfv_service_density.html
80 title: "FD.io CSIT Test Methodology: NFV Service Density"
82 CSIT-1901-nfv-density-results:
83 target: https://docs.fd.io/csit/rls1901/report/vpp_performance_tests/nf_service_density/index.html
84 title: "FD.io CSIT Test Results: NFV Service Density"
86 CNCF-CNF-Testbed-Results:
87 target: https://github.com/cncf/cnf-testbed/blob/master/comparison/doc/cncf-cnfs-results-summary.md
88 title: "CNCF CNF Testbed: NFV Service Density Benchmarking"
Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying performance of
network services realised with software Network Functions (NF) running
on Commercial-Off-The-Shelf (COTS) servers. One of the main challenges
is getting repeatable and portable benchmarking results, and using them
to derive a deterministic operating range suitable for production
deployment.
This document specifies a benchmarking methodology for NFV services
that aims to address this problem space. It defines a way of measuring
the performance of multiple NFV service instances, each composed of
multiple software NFs, while running them at a varied service "packing"
density on a single server.
The aim is to discover a deterministic usage range of the NFV system.
In addition, the specified methodology can be used to compare and
contrast different NFV virtualisation technologies.
# Terminology

* NFV - Network Function Virtualization, a general industry term
  describing network functionality implemented in software.
* NFV service - a software based network service realised by a topology
  of interconnected constituent software network function applications.
* NFV service instance - a single instantiation of an NFV service.
* Data-plane optimized software - any software with dedicated threads
  handling data-plane packet processing, e.g. FD.io VPP (Vector Packet
  Processor), OVS-DPDK.
# Motivation

## Problem Description
Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying performance of
network services realised with software Network Functions (NF) running
on Commercial-Off-The-Shelf (COTS) servers. One of the main challenges
is getting repeatable and portable benchmarking results, and using them
to derive a deterministic operating range suitable for production
deployment.
The lack of a well defined and standardised NFV-centric performance
benchmarking methodology and metrics makes it hard to address
fundamental questions that underpin NFV production deployments:
1. What NFV service, and how many instances of it, can run on a single
   compute node?
142 2. How to choose the best compute resource allocation scheme to maximise
143 service yield per node?
3. How do different NF applications compare from the service density
   perspective?
4. How do the virtualisation technologies compare, e.g. Virtual
   Machines vs. Containers?
Getting answers to these questions should allow designers to make
data-based decisions about the NFV technology and service design best
suited to meet the requirements of their use cases. Equally, obtaining
the benchmarking data underpinning those answers should make it easier
for operators to work out the expected deterministic operating range of
the chosen design.
## Proposed Solution

The primary goal of the proposed benchmarking methodology is to focus
on NFV technologies used to construct NFV services. More specifically,
it aims to i) measure packet data-plane performance of multiple NFV
service instances while running them at varied service "packing"
densities on a single server, and ii) quantify the impact of using
multiple NFs to construct each NFV service instance, introducing
multiple packet processing hops and links on each packet path.
The overarching aim is to discover a set of deterministic usage ranges
that are of interest to NFV system designers and operators. In
addition, the specified methodology can be used to compare and contrast
different NFV virtualisation technologies.
171 In order to ensure wide applicability of the benchmarking methodology,
172 the approach is to separate NFV service packet processing from the
173 shared virtualisation infrastructure by decomposing the software
174 technology stack into three building blocks:
+-------------------------------+
|         NFV Service           |
+-------------------------------+
|   Virtualization Technology   |
+-------------------------------+
|        Host Networking        |
+-------------------------------+
184 Figure 1. NFV software technology stack.
The proposed methodology is complementary to existing NFV benchmarking
industry efforts focused on vSwitch benchmarking [RFC8204], [TST009],
and extends the benchmarking scope to NFV services.
This document does not describe a complete benchmarking methodology;
instead it focuses on the system under test configuration. Each of the
compute node configurations identified by (RowIndex, ColumnIndex) is to
be evaluated for NFV service data-plane performance using existing
and/or emerging network benchmarking standards. This may include
methodologies specified in [RFC2544], [TST009],
[draft-vpolak-mkonstan-bmwg-mlrsearch] and/or
[draft-vpolak-bmwg-plrsearch].
# NFV Service

It is assumed that each NFV service instance is built of one or more
constituent NFs and is described by its topology, configuration and
resulting packet path(s).

Each set of NFs forms an independent NFV service instance, with
multiple sets present in the host.
## NFV Topology

NFV topology describes the number of network functions per service
instance and their inter-connections over packet interfaces. It
includes all point-to-point virtual packet links within the compute
node, Layer-2 Ethernet or Layer-3 IP, including the ones to the host
networking data-plane.
Theoretically, a large set of possible NFV topologies can be realised
using software virtualisation, e.g. ring, partial-/full-mesh, star,
line, tree, ladder. In practice, however, only a few topologies are in
actual use, as NFV services mostly perform either bump-in-a-wire packet
operations (e.g. security filtering/inspection, monitoring/telemetry)
and/or inter-site forwarding decisions (e.g. routing, switching).
224 Two main NFV topologies have been identified so far for NFV service
225 density benchmarking:
1. Chain topology: a set of NFs connect to the host data-plane with a
   minimum of two virtual interfaces each, enabling the host data-plane
   to facilitate NF-to-NF service chain forwarding and to provide
   connectivity with the external network.
2. Pipeline topology: a set of NFs connect to each other in a line
   fashion, with the edge NFs homed to the host data-plane, which
   provides connectivity with the external network.
Both topologies are shown in the figures below.

NF chain topology:
+-----------------------------------------------------------+
|                     Host Compute Node                      |
|                                                            |
| +--------+  +--------+              +--------+            |
| | S1NF1  |  | S1NF2  |              | S1NFn  |            |
| |        |  |        |     ....     |        | Service1   |
| |        |  |        |              |        |            |
| +-+----+-+  +-+----+-+    +    +    +-+----+-+            |
|   |    |      |    |      |    |      |    |   Virtual    |
|   |    |<-CS->|    |<-CS->|    |<-CS->|    |   Interfaces |
| +-+----+------+----+------+----+------+----+-+            |
| |                                            |            |
| |                                            |            |
| |              Host Data-Plane               |            |
| +-+--+----------------------------------+--+-+            |
|   |  |                                  |  |              |
+-----------------------------------------------------------+
    |  |                                  |  |  Physical
    |  |                                  |  |  Interfaces
+---+--+----------------------------------+--+--------------+
|                                                           |
|                     Traffic Generator                     |
|                                                           |
+-----------------------------------------------------------+
265 Figure 2. NF chain topology forming a service instance.
267 NF pipeline topology:
+-----------------------------------------------------------+
|                     Host Compute Node                      |
|                                                            |
| +--------+  +--------+              +--------+            |
| | S1NF1  |  | S1NF2  |              | S1NFn  |            |
| |        +--+        +--+  ....  +--+        | Service1   |
| |        |  |        |              |        |            |
| +-+------+  +--------+              +------+-+            |
|   |                                        |   Virtual    |
|   |<-Pipeline Edge          Pipeline Edge->|   Interfaces |
| +-+----------------------------------------+-+            |
| |                                            |            |
| |                                            |            |
| |              Host Data-Plane               |            |
| +-+--+----------------------------------+--+-+            |
|   |  |                                  |  |              |
+-----------------------------------------------------------+
    |  |                                  |  |  Physical
    |  |                                  |  |  Interfaces
+---+--+----------------------------------+--+--------------+
|                                                           |
|                     Traffic Generator                     |
|                                                           |
+-----------------------------------------------------------+
294 Figure 3. NF pipeline topology forming a service instance.
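To make the structural difference concrete, the following illustrative
Python sketch (an editorial aid, not part of any referenced tool; all
names are hypothetical) counts the virtual interfaces and links implied
by each topology for a single service instance of n NFs:

```python
def chain_topology(n_nfs: int) -> dict:
    """Interface/link counts for one chain service instance.

    Every NF attaches to the host data-plane with two virtual
    interfaces, so all NF-to-NF forwarding crosses the vSwitch.
    """
    return {
        "nf_virtual_interfaces": 2 * n_nfs,  # two per NF
        "nf_to_vswitch_links": 2 * n_nfs,    # every link terminates on the vSwitch
        "nf_to_nf_links": 0,                 # no direct NF-to-NF links
    }


def pipeline_topology(n_nfs: int) -> dict:
    """Interface/link counts for one pipeline service instance.

    Only the two edge NFs attach to the host data-plane; inner NFs
    connect directly to their neighbours.
    """
    return {
        "nf_virtual_interfaces": 2 * n_nfs,  # two per NF (neighbour or vSwitch)
        "nf_to_vswitch_links": 2,            # edge NFs only
        "nf_to_nf_links": n_nfs - 1,         # direct neighbour links
    }


assert chain_topology(4)["nf_to_vswitch_links"] == 8
assert pipeline_topology(4)["nf_to_vswitch_links"] == 2
```

Both topologies use the same number of NF virtual interfaces, but they
place very different loads on the shared host data-plane, as quantified
further in the packet path section below.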
## NFV Configuration

NFV configuration includes all packet processing functions in NFs,
including Layer-2, Layer-3 and/or Layer-4-to-7 processing as
appropriate to the specific NF and NFV service design. L2 sub-interface
encapsulations (e.g. 802.1q, 802.1ad) and IP overlay encapsulations
(e.g. VXLAN, IPSec, GRE) may be represented here too as appropriate,
although in most cases they are used as external encapsulation and
handled by the host networking data-plane.
NFV configuration determines the logical network connectivity, that is
the Layer-2 and/or IPv4/IPv6 switching/routing modes, as well as NFV
service specific aspects. In the context of the NFV service density
benchmarking methodology, the initial focus is on the former.
312 Building on the two identified NFV topologies, two common NFV
313 configurations are considered:
1. Chain configuration:
   * Relies on chain topology to form NFV service chains.
   * NF packet forwarding designs: NF specific, e.g. L2 forwarding
     and/or IPv4/IPv6 routing.
   * Requirements for host data-plane:
     * L2 switching with an L2 forwarding context per each NF chain
       segment or per NF chain.
     * IPv4/IPv6 routing with an IP forwarding context per each NF
       chain segment or per NF chain.
2. Pipeline configuration:
   * Relies on pipeline topology to form NFV service pipelines.
   * NF packet forwarding designs: NF specific, e.g. L2 forwarding
     and/or IPv4/IPv6 routing.
   * Requirements for host data-plane:
     * L2 switching with an L2 forwarding context per each NF pipeline
       edge link or per NF pipeline.
     * IPv4/IPv6 routing with an IP forwarding context per each NF
       pipeline edge link or per NF pipeline.
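The per-segment versus per-service forwarding context options above
determine how much state the host data-plane must hold. A minimal
sketch of the resulting context counts for one service instance
(illustrative only; not from any referenced implementation):

```python
def host_forwarding_contexts(topology: str, n_nfs: int, per_segment: bool) -> int:
    """L2 (or IP) forwarding contexts the host data-plane needs for ONE
    service instance, per the two options listed above."""
    if topology == "chain":
        # Segments seen by the vSwitch: NIC-NF1, NF1-NF2, ..., NFn-NIC.
        return n_nfs + 1 if per_segment else 1
    if topology == "pipeline":
        # The vSwitch only sees the two pipeline edge links.
        return 2 if per_segment else 1
    raise ValueError(f"unknown topology: {topology}")


# A 4-NF chain needs 5 per-segment contexts, or a single per-chain context.
assert host_forwarding_contexts("chain", 4, per_segment=True) == 5
assert host_forwarding_contexts("chain", 4, per_segment=False) == 1
```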
## NFV Packet Paths

NFV packet path(s) describe the actual packet forwarding path(s) used
for benchmarking, resulting from NFV topology and configuration. They
are aimed to resemble true packet forwarding actions during NFV service
operation.

Based on the specified NFV topologies and configurations, two NFV
packet paths are taken for benchmarking:

1. Snake packet path
346 * Requires chain topology and configuration.
347 * Packets enter the NFV chain through one edge NF and progress to the
348 other edge NF of the chain.
349 * Within the chain, packets follow a zigzagging "snake" path entering
350 and leaving host data-plane as they progress through the NF chain.
351 * Host data-plane is involved in packet forwarding operations between
352 NIC interfaces and edge NFs, as well as between NFs in the chain.
354 2. Pipeline packet path
355 * Requires pipeline topology and configuration.
* Packets enter the NFV pipeline through one edge NF and progress to
  the other edge NF of the pipeline.
* Within the pipeline, packets follow a straight path, entering and
  leaving subsequent NFs as they progress through the NF pipeline.
360 * Host data-plane is involved in packet forwarding operations between
361 NIC interfaces and edge NFs only.
Both packet paths are shown in the figures below.

Snake packet path:
+-----------------------------------------------------------+
|                     Host Compute Node                      |
|                                                            |
| +--------+  +--------+              +--------+            |
| | S1NF1  |  | S1NF2  |              | S1NFn  |            |
| |        |  |        |     ....     |        | Service1   |
| |  XXXX  |  |  XXXX  |              |  XXXX  |            |
| +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+            |
|   |X  X|      |X  X|      |X  X|      |X  X|   Virtual    |
|   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
| +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
| |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
| |  X                                      X  |            |
| |  X          Host Data-Plane             X  |            |
| +-+X-+----------------------------------+-X+-+            |
|    X                                      X               |
+----X--------------------------------------X---------------+
     X                                      X    Physical
     X                                      X    Interfaces
+---+X-+----------------------------------+-X+--------------+
|                                                           |
|                     Traffic Generator                     |
|                                                           |
+-----------------------------------------------------------+
Figure 4. Snake packet path through the NF chain topology.
395 Pipeline packet path:
+-----------------------------------------------------------+
|                     Host Compute Node                      |
|                                                            |
| +--------+  +--------+              +--------+            |
| | S1NF1  |  | S1NF2  |              | S1NFn  |            |
| |        +--+        +--+  ....  +--+        | Service1   |
| |  XXXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXX  |            |
| +--X-----+  +--------+              +-----X--+            |
|    X                                      X    Virtual    |
|    X<-Pipeline Edge        Pipeline Edge->X  Interfaces   |
| +-+X--------------------------------------X+-+            |
| |  X                                      X  |            |
| |  X                                      X  |            |
| |  X          Host Data-Plane             X  |            |
| +-+X-+----------------------------------+-X+-+            |
|    X                                      X               |
+----X--------------------------------------X---------------+
     X                                      X    Physical
     X                                      X    Interfaces
+---+X-+----------------------------------+-X+--------------+
|                                                           |
|                     Traffic Generator                     |
|                                                           |
+-----------------------------------------------------------+
Figure 5. Pipeline packet path through the NF pipeline topology.
In all cases packets enter the NFV system via shared physical NIC
interfaces controlled by the shared host data-plane, are then
associated with a specific NFV service (based on a service
discriminator), and are subsequently cross-connected/switched/routed by
the host data-plane to and through the NF topologies per one of the
above listed schemes.
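The per-packet cost difference between the two paths can be expressed
as a hop count. The sketch below (illustrative, derived directly from
the path descriptions above) counts data-plane passes per packet
crossing one service instance:

```python
def data_plane_passes(topology: str, n_nfs: int) -> int:
    """vSwitch passes plus NF passes for one packet crossing a service
    instance, per the snake and pipeline path descriptions above."""
    if topology == "chain":        # snake path: vSwitch between every hop
        vswitch_passes = n_nfs + 1
    elif topology == "pipeline":   # vSwitch at the two edges only
        vswitch_passes = 2
    else:
        raise ValueError(f"unknown topology: {topology}")
    return vswitch_passes + n_nfs


# A 4-NF chain costs 9 passes per packet; a 4-NF pipeline costs 6.
assert data_plane_passes("chain", 4) == 9
assert data_plane_passes("pipeline", 4) == 6
```

This illustrates why chain topologies place proportionally more load on
the shared vSwitch as the NF count grows.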
430 # Virtualization Technology
NFV services are built of composite isolated NFs, with virtualisation
technology providing the workload isolation. The following
virtualisation technology types are considered for NFV service density
benchmarking:
436 1. Virtual Machines (VMs)
437 * Relying on host hypervisor technology e.g. KVM, ESXi, Xen.
438 * NFs running in VMs are referred to as VNFs.
2. Containers
* Relying on Linux container technology e.g. LXC, Docker.
* NFs running in Containers are referred to as CNFs.
443 Different virtual interface types are available to VNFs and CNFs:
1. VNF
* virtio-vhostuser: fully user-mode based virtual interface.
* virtio-vhostnet: involves a kernel-mode based backend.
2. CNF
* memif: fully user-mode based virtual interface.
* af_packet: involves a kernel-mode based backend.
# Host Networking

The host networking data-plane is the central shared resource that
underpins creation of NFV services. It handles all of the connectivity
to external physical network devices through physical network
connections using NICs, through which the benchmarking is done.
Assuming that NIC interface resources are shared, the following is a
list of widely available host data-plane options for providing packet
connectivity to/from NICs and for constructing NFV chain and pipeline
topologies and configurations:
465 * Linux Kernel-Mode Networking.
466 * Linux User-Mode vSwitch.
467 * Virtual Machine vSwitch.
468 * Linux Container vSwitch.
* SR-IOV NIC Virtual Function - note: restricted support for chain and
  pipeline topologies, as it requires hair-pinning through the NIC and
  oftentimes also through an external physical switch.
Analysing the properties of each of these options, and their pros and
cons for the specified NFV topologies and configurations, is outside
the scope of this document.
Of all the listed options, the performance-optimised Linux user-mode
vSwitch deserves special attention. It decouples NFV services from the
underlying NIC hardware, and offers rich multi-tenant functionality and
the most flexibility for supporting NFV services. At the same time, it
consumes compute resources and is harder to benchmark in NFV service
density scenarios.
The following sections focus on the Linux user-mode vSwitch and its
performance benchmarking at increasing levels of NFV service density.
488 # NFV Service Density Matrix
In order to evaluate the performance of multiple NFV services running
on a compute node, NFV service instances are benchmarked at increasing
density, allowing an NFV Service Density Matrix to be constructed. The
table below shows an example of such a matrix, capturing the number of
NFV service instances (row indices), the number of NFs per service
instance (column indices) and the resulting total number of NFs
(values).
497 NFV Service Density - NF Count View
SVC   001   002   004   006   008   00N

001     1     2     4     6     8   1*N
002     2     4     8    12    16   2*N
004     4     8    16    24    32   4*N
006     6    12    24    36    48   6*N
008     8    16    32    48    64   8*N
00M   M*1   M*2   M*4   M*6   M*8   M*N
507 RowIndex: Number of NFV Service Instances, 1..M.
508 ColumnIndex: Number of NFs per NFV Service Instance, 1..N.
509 Value: Total number of NFs running in the system.
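The matrix values follow directly from Value = RowIndex * ColumnIndex.
A minimal sketch for generating any cell of the NF Count View
(illustrative helper, not a specified tool):

```python
def nf_count(instances: int, nfs_per_instance: int) -> int:
    """Total NFs running in the system for one matrix cell:
    Value = RowIndex (service instances) * ColumnIndex (NFs per instance)."""
    return instances * nfs_per_instance


# Matches the 008 row, 006 column cell above.
assert nf_count(8, 6) == 48
```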
In order to deliver good and repeatable network data-plane performance,
NFs and host data-plane software require direct access to critical
compute resources. Due to the shared nature of all resources on a
compute node, a clearly defined resource allocation scheme is specified
in the next section to address this.
In each tested configuration the host data-plane is a gateway between
the external network and the internal NFV network topologies. The
offered packet load is generated and received by an external traffic
generator per usual benchmarking practice.
It is proposed that initial benchmarks are done with the offered packet
load distributed equally across all configured NFV service instances.
This could be followed by various per-NFV-service-instance load ratios
mimicking expected production deployment scenario(s).
The following sections specify compute resource allocation, followed by
examples of applying the NFV service density methodology to VNF and CNF
benchmarking use cases.
531 # Compute Resource Allocation
Performance optimized NF and host data-plane software threads require
timely execution of packet processing instructions, and are very
sensitive to any interruptions (or stalls) of this execution, e.g. CPU
core context switching or CPU jitter. To that end, the NFV service
density methodology treats controlled mapping ratios of data-plane
software threads to physical processor cores, with directly allocated
cache hierarchies, as the first order requirement.
Other compute resources, including memory bandwidth and PCIe bandwidth,
have a lesser impact and as such are subject for further study. More
detail and a deep-dive analysis of software data-plane performance and
the impact of different shared compute resources is available in
[BSDP].
It is assumed that NFs as well as the host data-plane (e.g. vSwitch)
are performance optimized, with their tasks executed in two types of
software threads:

* data-plane - handling data-plane packet processing and forwarding,
  time critical, requires dedicated cores. To scale data-plane
  performance, most NF apps use multiple data-plane threads and rely on
  NIC RSS (Receive Side Scaling), virtual interface multi-queue and/or
  integrated software hashing to distribute packets across the
  data-plane threads.
* main-control - handling application management, statistics and
  control-planes, less time critical, allows for core sharing. For most
  NF apps this is a single main thread, but often statistics (counters)
  and various control protocol software run in separate threads.
The core mapping scheme described below allocates cores for all threads
of a specified type belonging to each NF app instance, and separately
lists the mapping of a number of threads to a number of
logical/physical cores for processor configurations with Symmetric
Multi-Threading (SMT) (e.g. AMD SMT, Intel Hyper-Threading) enabled or
disabled.
If NFV service density benchmarking is run on server nodes with
Symmetric Multi-Threading (SMT) (e.g. AMD SMT, Intel Hyper-Threading)
enabled for higher performance and efficiency, logical cores allocated
to data-plane threads should be allocated as pairs of sibling logical
cores, i.e. the hyper-threads running on the same physical core.
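On Linux, sibling logical cores can be discovered from the sysfs CPU
topology. A minimal sketch (assumes a Linux host; standard library
only):

```python
from pathlib import Path


def sibling_lcores(cpu: int) -> list[int]:
    """Logical cores sharing one physical core with `cpu`, as reported
    by the kernel's SMT sibling list."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list")
    text = path.read_text().strip()   # e.g. "2,30" or "2-3" depending on platform
    cores: list[int] = []
    for part in text.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cores.extend(range(lo, hi + 1))
        else:
            cores.append(int(part))
    return cores
```

Data-plane threads then receive whole sibling pairs, i.e. both logical
cores reported for a given physical core, never one half of a pair.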
Separate core ratios are defined for mapping threads of the vSwitch and
of the NFs. In order to get consistent benchmarking results, the
mapping ratios are enforced using Linux core pinning.
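Core pinning is typically enforced with Linux tools such as taskset or
cgroup cpusets; the same effect can be sketched from within a process
using the Python standard library (Linux-only, illustrative):

```python
import os


def pin_to_lcores(lcores: set[int]) -> None:
    """Pin the calling process/thread to an explicit set of logical
    cores, enforcing the chosen thread-to-core mapping ratio."""
    os.sched_setaffinity(0, lcores)                # 0 = the calling process
    assert os.sched_getaffinity(0) == set(lcores)  # verify pinning took effect


# Example: an NF-1c data-plane thread pinned to sibling lcores 2 and 30
# (core IDs are machine specific and shown here only for illustration):
# pin_to_lcores({2, 30})
```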
| application | thread type | app:core ratio | threads/pcores (SMT disabled) | threads/lcores map (SMT enabled) |
|:-----------:|:-----------:|:--------------:|:-----------------------------:|:--------------------------------:|
| vSwitch-1c  | data        | 1:1            | 1DT/1PC                       | 2DT/2LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
| vSwitch-2c  | data        | 1:2            | 2DT/2PC                       | 4DT/4LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
| vSwitch-4c  | data        | 1:4            | 4DT/4PC                       | 8DT/8LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
| NF-0.5c     | data        | 1:S2           | 1DT/S2PC                      | 1DT/1LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
| NF-1c       | data        | 1:1            | 1DT/1PC                       | 2DT/2LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
| NF-2c       | data        | 1:2            | 2DT/2PC                       | 4DT/4LC                          |
|             | main        | 1:S2           | 1MT/S2PC                      | 1MT/1LC                          |
600 * application - network application with optimized data-plane, a
601 vSwitch or Network Function (NF) application.
602 * thread type - either "data", short for data-plane; or "main",
603 short for all main-control threads.
604 * app:core ratio - ratio of per application instance threads of
605 specific thread type to physical cores.
606 * threads/pcores (SMT disabled) - number of threads of specific
607 type (DT for data-plane thread, MT for main thread) running on a
608 number of physical cores, with SMT disabled.
609 * threads/lcores map (SMT enabled) - number of threads of specific
610 type (DT, MT) running on a number of logical cores, with SMT
611 enabled. Two logical cores per one physical core.
613 * vSwitch-(1c|2c|4c) - vSwitch with 1 physical core (or 2, or 4)
614 allocated to its data-plane software worker threads.
615 * NF-(0.5c|1c|2c) - NF application with half of a physical core (or
616 1, or 2) allocated to its data-plane software worker threads.
617 * Sn - shared core, sharing ratio of (n).
618 * DT - data-plane thread.
619 * MT - main-control thread.
* PC - physical core; with SMT/HT enabled each physical core has
  multiple (currently typically 2) logical cores associated with it.
* LC - logical core; if more than one LC is allocated, LCs are assigned
  in sets of two sibling logical cores running on the same physical
  core.
624 * SnPC - shared physical core, sharing ratio of (n).
625 * SnLC - shared logical core, sharing ratio of (n).
Maximum benchmarked NFV service densities are limited by the number of
physical cores on a compute node.
630 A sample physical core usage view is shown in the matrix below.
632 NFV Service Density - Core Usage View
SVC   001   002   004   006   008   010

001     2     3     6     9    12    15
002     3     6    12    18    24    30
004     6    12    24    36    48    60
006     9    18    36    54    72    90
008    12    24    48    72    96   120
010    15    30    60    90   120   150
643 RowIndex: Number of NFV Service Instances, 1..10.
644 ColumnIndex: Number of NFs per NFV Service Instance, 1..10.
645 Value: Total number of physical processor cores used for NFs.
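The Core Usage View cells can be reproduced from the mapping table,
assuming the NF-1c mapping (which matches the sample matrix values):
each NF consumes 1 physical core for its data-plane plus half a shared
physical core (1:S2) for its main-control threads. A minimal sketch
(illustrative; constants mirror the mapping table above):

```python
import math

NF_DATA_PCORES = {"NF-0.5c": 0.5, "NF-1c": 1.0, "NF-2c": 2.0}
MAIN_SHARED_PCORES = 0.5   # main-control threads at a 1:S2 sharing ratio


def nf_core_usage(nf_type: str, instances: int, nfs_per_instance: int) -> int:
    """Physical cores used by NFs for one matrix cell; fractional
    totals round up to whole cores."""
    per_nf = NF_DATA_PCORES[nf_type] + MAIN_SHARED_PCORES
    return math.ceil(per_nf * instances * nfs_per_instance)


# Reproduces the NF-1c Core Usage View above:
assert nf_core_usage("NF-1c", 8, 6) == 72   # 008 row, 006 column
assert nf_core_usage("NF-1c", 1, 1) == 2    # 1.5 cores round up to 2
```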
647 # NFV Service Density Benchmarks
To illustrate the applicability of the defined NFV service density
methodology, the following sections describe three sets of NFV service
topologies and configurations that have been benchmarked in
open-source: i) in [LFN-FDio-CSIT], a continuous testing and data-plane
benchmarking project, and ii) as part of the CNCF CNF Testbed
initiative [CNCF-CNF-Testbed].
In both cases each NFV service instance definition is based on the same
set of NF applications, and varies only by network addressing
configuration to emulate a multi-tenant operating environment.
660 ## Test Methodology - MRR Throughput
Initial NFV density throughput benchmarks have been performed using the
Maximum Receive Rate (MRR) test methodology defined and used in FD.io
CSIT [LFN-FDio-CSIT].
MRR tests measure the packet forwarding rate under the maximum load
offered by the traffic generator over a set trial duration, regardless
of packet loss. The maximum load for a specified Ethernet frame size is
set to the bi-directional link rate (2x 10GbE in the referred results).
Tests were conducted with two traffic profiles: i) a continuous stream
of 64B frames, and ii) a continuous stream of an IMIX sequence of (7x
64B, 4x 570B, 1x 1518B); all sizes are L2 untagged Ethernet.
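For reference, the maximum load and IMIX arithmetic follow standard
Ethernet math. The sketch below (illustrative only, not CSIT code)
computes the offered load ceiling and the measured MRR:

```python
def line_rate_pps(link_bps: float, frame_bytes: float) -> float:
    """Max L2 frames per second on one link; every frame carries 20B of
    wire overhead (7B preamble + 1B SFD + 12B inter-frame gap)."""
    return link_bps / ((frame_bytes + 20) * 8)


# IMIX sequence used above: 7x 64B, 4x 570B, 1x 1518B.
IMIX = [(7, 64), (4, 570), (1, 1518)]
imix_avg_bytes = sum(c * s for c, s in IMIX) / sum(c for c, _ in IMIX)  # ~353.8B

# Maximum load for 64B frames over bi-directional 2x 10GbE:
max_load_64b_pps = 2 * line_rate_pps(10e9, 64)   # ~29.76 Mpps aggregate


def mrr_pps(rx_frames: int, trial_duration_s: float) -> float:
    """Maximum Receive Rate: frames forwarded per second while maximum
    load is offered, counted regardless of loss."""
    return rx_frames / trial_duration_s
```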
675 NFV service topologies tested include: VNF service chains, CNF service
676 chains and CNF service pipelines.
## VNF Service Chain

VNF Service Chain (VSC) topology is tested with the KVM hypervisor
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs
running in VMs (VNFs). Host data-plane is provided by the FD.io VPP
vSwitch. Virtual interfaces are virtio-vhostuser. Snake forwarding
packet path is tested using the [TRex] traffic generator, see the
figure below.
+-----------------------------------------------------------+
|                     Host Compute Node                      |
|                                                            |
| +--------+  +--------+              +--------+            |
| | S1VNF1 |  | S1VNF2 |              | S1VNFn |            |
| |        |  |        |     ....     |        | Service1   |
| |  XXXX  |  |  XXXX  |              |  XXXX  |            |
| +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+            |
|   |X  X|      |X  X|      |X  X|      |X  X|   Virtual    |
|   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
| +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
| |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
| |  X                                      X  |            |
| |  X         FD.io VPP vSwitch            X  |            |
| +-+X-+----------------------------------+-X+-+            |
|    X                                      X               |
+----X--------------------------------------X---------------+
     X                                      X    Physical
     X                                      X    Interfaces
+---+X-+----------------------------------+-X+--------------+
|                                                           |
|                 Traffic Generator (TRex)                  |
|                                                           |
+-----------------------------------------------------------+
711 Figure 6. VNF service chain test setup.
## CNF Service Chain

CNF Service Chain (CSC) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs
running in Containers (CNFs). Host data-plane is provided by the FD.io
VPP vSwitch. Virtual interfaces are memif. Snake forwarding packet path
is tested using the [TRex] traffic generator, see the figure below.
+-----------------------------------------------------------+
|                     Host Compute Node                      |
|                                                            |
| +--------+  +--------+              +--------+            |
| | S1CNF1 |  | S1CNF2 |              | S1CNFn |            |
| |        |  |        |     ....     |        | Service1   |
| |  XXXX  |  |  XXXX  |              |  XXXX  |            |
| +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+            |
|   |X  X|      |X  X|      |X  X|      |X  X|   Virtual    |
|   |X  X|      |X  X|      |X  X|      |X  X|   Interfaces |
| +-+X--X+------+X--X+------+X--X+------+X--X+-+            |
| |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |            |
| |  X                                      X  |            |
| |  X         FD.io VPP vSwitch            X  |            |
| +-+X-+----------------------------------+-X+-+            |
|    X                                      X               |
+----X--------------------------------------X---------------+
     X                                      X    Physical
     X                                      X    Interfaces
+---+X-+----------------------------------+-X+--------------+
|                                                           |
|                 Traffic Generator (TRex)                  |
|                                                           |
+-----------------------------------------------------------+
747 Figure 7. CNF service chain test setup.
749 ## CNF Service Pipeline
CNF Service Pipeline (CSP) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs
running in Containers (CNFs). Host data-plane is provided by the FD.io
VPP vSwitch. Virtual interfaces are memif. Pipeline forwarding packet
path is tested using the [TRex] traffic generator, see the figure
below.
+-----------------------------------------------------------+
|                     Host Compute Node                      |
|                                                            |
| +--------+  +--------+              +--------+            |
| | S1NF1  |  | S1NF2  |              | S1NFn  |            |
| |        +--+        +--+  ....  +--+        | Service1   |
| |  XXXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXX  |            |
| +--X-----+  +--------+              +-----X--+            |
|    X                                      X    Virtual    |
|    X<-Pipeline Edge        Pipeline Edge->X  Interfaces   |
| +-+X--------------------------------------X+-+            |
| |  X                                      X  |            |
| |  X                                      X  |            |
| |  X         FD.io VPP vSwitch            X  |            |
| +-+X-+----------------------------------+-X+-+            |
|    X                                      X               |
+----X--------------------------------------X---------------+
     X                                      X    Physical
     X                                      X    Interfaces
+---+X-+----------------------------------+-X+--------------+
|                                                           |
|                 Traffic Generator (TRex)                  |
|                                                           |
+-----------------------------------------------------------+
Figure 8. CNF service pipeline test setup.
784 ## Sample Results: FD.io CSIT
The FD.io CSIT project introduced NFV service density benchmarking in
release CSIT-1901 and published results for the following NFV service
topologies and configurations:
790 1. VNF Service Chains
791 * VNF: DPDK-L3FWD v18.10
794 * vSwitch: VPP v19.01-release
796 * vSwitch-1c, vSwitch-2c
797 * frame sizes: 64B, IMIX
798 2. CNF Service Chains
799 * CNF: VPP v19.01-release
802 * vSwitch: VPP v19.01-release
804 * vSwitch-1c, vSwitch-2c
805 * frame sizes: 64B, IMIX
806 3. CNF Service Pipelines
807 * CNF: VPP v19.01-release
810 * vSwitch: VPP v19.01-release
812 * vSwitch-1c, vSwitch-2c
813 * frame sizes: 64B, IMIX
More information is available in the FD.io CSIT-1901 report, with
specific references listed below:
818 * Testbed: [CSIT-1901-testbed-2n-skx]
* Test environment: [CSIT-1901-test-environment]
820 * Methodology: [CSIT-1901-nfv-density-methodology]
821 * Results: [CSIT-1901-nfv-density-results]
823 ## Sample Results: CNCF/CNFs
The CNCF CI team introduced the CNF Testbed initiative focusing on
benchmarking NFV service density with open-source network applications
running as VNFs and CNFs. The following NFV service topologies and
configurations have been benchmarked:
830 1. VNF Service Chains
831 * VNF: VPP v18.10-release
834 * vSwitch: VPP v18.10-release
836 * vSwitch-1c, vSwitch-2c
837 * frame sizes: 64B, IMIX
838 2. CNF Service Chains
839 * CNF: VPP v18.10-release
842 * vSwitch: VPP v18.10-release
844 * vSwitch-1c, vSwitch-2c
845 * frame sizes: 64B, IMIX
846 3. CNF Service Pipelines
847 * CNF: VPP v18.10-release
850 * vSwitch: VPP v18.10-release
852 * vSwitch-1c, vSwitch-2c
853 * frame sizes: 64B, IMIX
More information is available in the CNCF CNF Testbed GitHub
repository, with summary test results presented in a summary markdown
file, referenced below:
859 * Results: [CNCF-CNF-Testbed-Results]
# IANA Considerations

No requests of IANA.
865 # Security Considerations
# Acknowledgements

Thanks to Vratko Polak of the FD.io CSIT project and Michael Pedersen
of the CNCF CNF Testbed initiative for their contributions and useful
suggestions.