VNET (VPP Network Stack)
========================
The files associated with the VPP network stack layer are located in
the *./src/vnet* folder. The Network Stack Layer is basically an
instantiation of the code in the other layers. This layer has a vnet
library that provides vectorized layer-2 and 3 networking graph nodes,
a packet generator, and a packet tracer.
In terms of building a packet processing application, vnet provides a
platform-independent subgraph to which one connects a couple of
device-driver nodes.
Typical RX connections include "ethernet-input" \[full software
classification, feeds ipv4-input, ipv6-input, arp-input etc.\] and
"ipv4-input-no-checksum" \[if hardware can classify, perform ipv4
header checksum\].
Effective graph dispatch function coding
----------------------------------------
Over the years, multiple coding styles have emerged: a
single/dual/quad loop coding model (with variations) and a
fully-pipelined coding model.

Single/dual loops
-----------------
The single/dual/quad loop model variations conveniently solve problems
where the number of items to process is not known in advance: typical
hardware RX-ring processing. This coding style is also very effective
when a given node will not need to cover a complex set of dependent
reads.
Here is a quad/single loop which can leverage up-to-avx512 SIMD vector
units to convert buffer indices to buffer pointers:
```c
static uword
simulated_ethernet_interface_tx (vlib_main_t * vm,
				 vlib_node_runtime_t * node,
				 vlib_frame_t * frame)
{
  u32 n_left_from, *from;
  u32 thread_index = vm->thread_index;
  vnet_main_t *vnm = vnet_get_main ();
  vnet_interface_main_t *im = &vnm->interface_main;
  vlib_buffer_t *bufs[VLIB_FRAME_SIZE], **b;
  u16 nexts[VLIB_FRAME_SIZE], *next;

  n_left_from = frame->n_vectors;
  from = vlib_frame_vector_args (frame);

  /* Convert up to VLIB_FRAME_SIZE indices in "from" to
   * buffer pointers in bufs[] */
  vlib_get_buffers (vm, from, bufs, n_left_from);
  b = bufs;
  next = nexts;

  /* While we have at least 4 vector elements (pkts) to process.. */
  while (n_left_from >= 4)
    {
      /* Prefetch next quad-loop iteration. */
      if (PREDICT_TRUE (n_left_from >= 8))
	{
	  vlib_prefetch_buffer_header (b[4], STORE);
	  vlib_prefetch_buffer_header (b[5], STORE);
	  vlib_prefetch_buffer_header (b[6], STORE);
	  vlib_prefetch_buffer_header (b[7], STORE);
	}

      /* $$$ Process 4x packets right here...
       * set next[0..3] to send the packets where they need to go */
      do_something_to (b[0]);
      do_something_to (b[1]);
      do_something_to (b[2]);
      do_something_to (b[3]);

      /* Process the next 0..4 packets */
      b += 4;
      next += 4;
      n_left_from -= 4;
    }

  /* Clean up 0...3 remaining packets at the end of the incoming frame */
  while (n_left_from > 0)
    {
      /* $$$ Process one packet right here...
       * set next[0] to send the packet where it needs to go */
      do_something_to (b[0]);

      /* Process the next packet */
      b += 1;
      next += 1;
      n_left_from -= 1;
    }

  /* Send the packets along their respective next-node graph arcs.
   * Considerable locality of reference is expected: most if not all
   * packets in the inbound vector will traverse the same next-node arc. */
  vlib_buffer_enqueue_to_next (vm, node, from, nexts, frame->n_vectors);

  return frame->n_vectors;
}
```
Given a packet processing task to implement, it pays to scout around
looking for similar tasks, and think about using the same coding
pattern. It is not uncommon to recode a given graph node dispatch
function several times during performance optimization.
Creating Packets from Scratch
-----------------------------
At times, it's necessary to create packets from scratch and send
them. Tasks like sending keepalives or actively opening connections
come to mind. It's not difficult, but accurate buffer metadata setup
is required.
### Allocating Buffers
Use vlib_buffer_alloc, which allocates a set of buffer indices. For
low-performance applications, it's OK to allocate one buffer at a
time. Note that vlib_buffer_alloc(...) does NOT initialize buffer
metadata. See below.
In high-performance cases, allocate a vector of buffer indices,
and hand them out from the end of the vector; decrement _vec_len(..)
as buffer indices are allocated. See tcp_alloc_tx_buffers(...) and
tcp_get_free_buffer_index(...) for an example.
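
For illustration, here's a minimal sketch of that strategy, using a
hypothetical my_get_free_buffer_index(...) helper and a per-thread
cache vector; the real tcp code is more elaborate:

```c
/* Hand out buffer indices from the end of a cached vector,
 * refilling the cache in bulk when it runs dry. */
static inline int
my_get_free_buffer_index (vlib_main_t * vm, u32 ** cached_buffers,
			  u32 * bi)
{
  u32 *cache = *cached_buffers;
  u32 n_cached = vec_len (cache);

  if (PREDICT_FALSE (n_cached == 0))
    {
      /* One bulk allocation call is much cheaper than 32 single calls */
      vec_validate (cache, 31);
      n_cached = vlib_buffer_alloc (vm, cache, 32);
      _vec_len (cache) = n_cached;
      *cached_buffers = cache;
      if (n_cached == 0)
	return -1;		/* out of buffers */
    }

  /* Hand out the last index; shrinking the vector is free */
  *bi = cache[n_cached - 1];
  _vec_len (cache) = n_cached - 1;
  return 0;
}
```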
### Buffer Initialization Example

The following example shows the **main points**, but is not to be
blindly cut-'n-pasted.
```c
u32 bi0;
vlib_buffer_t *b0;
ip4_header_t *ip;
udp_header_t *udp;
u8 *data_dst;

/* Allocate a buffer */
if (vlib_buffer_alloc (vm, &bi0, 1) != 1)
  return -1;

b0 = vlib_get_buffer (vm, bi0);

/* Initialize the buffer */
VLIB_BUFFER_TRACE_TRAJECTORY_INIT (b0);

/* At this point b0->current_data = 0, b0->current_length = 0 */

/* Copy data into the buffer. This example ASSUMES that data will fit
 * in a single buffer, and is e.g. an ip4 packet. */
if (have_packet_rewrite)
  {
    clib_memcpy (b0->data, data, vec_len (data));
    b0->current_length = vec_len (data);
  }
else
  {
    /* OR, build a udp-ip packet (for example) */
    ip = vlib_buffer_get_current (b0);
    udp = (udp_header_t *) (ip + 1);
    data_dst = (u8 *) (udp + 1);

    ip->ip_version_and_header_length = 0x45;
    ip->ttl = 64;		/* pick an appropriate ttl */
    ip->protocol = IP_PROTOCOL_UDP;
    ip->length = clib_host_to_net_u16 (sizeof (*ip) + sizeof (*udp) +
				       vec_len (udp_data));
    ip->src_address.as_u32 = src_address->as_u32;
    ip->dst_address.as_u32 = dst_address->as_u32;
    udp->src_port = clib_host_to_net_u16 (src_port);
    udp->dst_port = clib_host_to_net_u16 (dst_port);
    udp->length = clib_host_to_net_u16 (vec_len (udp_data));
    clib_memcpy (data_dst, udp_data, vec_len (udp_data));

    if (compute_udp_checksum)
      {
	/* RFC 7011 section 10.3.2. */
	udp->checksum = ip4_tcp_udp_compute_checksum (vm, b0, ip);
	if (udp->checksum == 0)
	  udp->checksum = 0xffff;
      }

    b0->current_length = sizeof (*ip) + sizeof (*udp) +
			 vec_len (udp_data);
  }
b0->flags |= VLIB_BUFFER_TOTAL_LENGTH_VALID;

/* sw_if_index 0 is the "local" interface, which always exists */
vnet_buffer (b0)->sw_if_index[VLIB_RX] = 0;

/* Use the default FIB index for tx lookup. Set non-zero to use another fib */
vnet_buffer (b0)->sw_if_index[VLIB_TX] = 0;
```
If your use-case calls for large packet transmission, use
vlib_buffer_chain_append_data_with_alloc(...) to create the requisite
buffer chain.
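
As a hedged sketch only: the exact signature of
vlib_buffer_chain_append_data_with_alloc(...) has varied across VPP
releases, so check vlib/buffer_funcs.h in your tree. The payload
variables below are hypothetical:

```c
/* ASSUMPTION: recent-tree signature (vm, first, &last, data, len);
 * older releases also took a free-list index argument. */
vlib_buffer_t *first = vlib_get_buffer (vm, bi0);
vlib_buffer_t *last = first;
u16 copied;

copied = vlib_buffer_chain_append_data_with_alloc (vm, first, &last,
						   payload, payload_len);
if (copied < payload_len)
  {
    /* Buffer allocation failure: drop or retry as appropriate */
  }
```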
### Enqueueing packets for lookup and transmission
The simplest way to send a set of packets is to use
vlib_get_frame_to_node(...) to allocate fresh frame(s) to
ip4_lookup_node or ip6_lookup_node, add the constructed buffer
indices, and dispatch the frame using vlib_put_frame_to_node(...).
```c
vlib_frame_t *f = vlib_get_frame_to_node (vm, ip4_lookup_node.index);
u32 *to_next, i;

f->n_vectors = vec_len (buffer_indices_to_send);
to_next = vlib_frame_vector_args (f);

for (i = 0; i < vec_len (buffer_indices_to_send); i++)
  to_next[i] = buffer_indices_to_send[i];

vlib_put_frame_to_node (vm, ip4_lookup_node.index, f);
```
It is inefficient to allocate and schedule single packet frames.
That's fine if you need to send one packet per second, but it should
**not** occur in a for-loop!

Packet tracer
-------------
Vlib includes a frame element \[packet\] trace facility, with a simple
debug CLI interface. The CLI is straightforward: "trace add
input-node-name count" to start capturing packet traces.
To trace 100 packets on a typical x86\_64 system running the dpdk
plugin: "trace add dpdk-input 100". When using the packet generator:
"trace add pg-input 100"
To display the packet trace: "show trace"
Each graph node has the opportunity to capture its own trace data. It
is almost always a good idea to do so. The trace capture APIs are
simple.

The packet capture APIs snapshot binary data, to minimize processing
at capture time. Each participating graph node initialization provides
a vppinfra format-style user function to pretty-print data when
required by the VLIB "show trace" command.
Set the VLIB node registration ".format\_trace" member to the name of
the per-graph node format function.
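
For example, a sketch with a hypothetical node name:

```c
VLIB_REGISTER_NODE (my_node) =
{
  .name = "my-node",
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INTERNAL,
  /* Pretty-printer invoked by "show trace" */
  .format_trace = my_node_format_trace,
  /* ... next nodes, error strings, etc. ... */
};
```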
Here's a simple example:
```c
u8 * my_node_format_trace (u8 * s, va_list * args)
{
  vlib_main_t *vm = va_arg (*args, vlib_main_t *);
  vlib_node_t *node = va_arg (*args, vlib_node_t *);
  my_node_trace_t *t = va_arg (*args, my_node_trace_t *);

  s = format (s, "My trace data was: %d", t-><whatever>);
  return s;
}
```
The trace framework hands the per-node format function the data it
captured as the packet whizzed by. The format function pretty-prints
the data as desired.
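
On the capture side, a dispatch-function fragment typically looks
something like this sketch; my_node_trace_t and the captured field
are hypothetical, while the flag checks follow the usual vpp pattern:

```c
/* Inside the dispatch loop: snapshot binary trace data for
 * packets selected by the trace facility */
if (PREDICT_FALSE ((node->flags & VLIB_NODE_FLAG_TRACE)
		   && (b0->flags & VLIB_BUFFER_IS_TRACED)))
  {
    my_node_trace_t *t = vlib_add_trace (vm, node, b0, sizeof (*t));
    t->whatever = b0->current_length;	/* capture whatever is useful */
  }
```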
Graph Dispatcher Pcap Tracing
-----------------------------
The vpp graph dispatcher knows how to capture vectors of packets in
pcap format as they're dispatched. The pcap captures are as follows:

```
VPP graph dispatch trace record description:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Major Version | Minor Version | NStrings      | ProtoHint     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Buffer index (big endian)                                     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    + VPP graph node name ...     ...               | NULL octet    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Buffer Metadata ... ...                       | NULL octet    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Buffer Opaque ... ...                         | NULL octet    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Buffer Opaque 2 ... ...                       | NULL octet    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | VPP ASCII packet trace (if NStrings > 4)      | NULL octet    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Packet data (up to 16K)                                       |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
Graph dispatch records comprise a version stamp, an indication of how
many NULL-terminated strings will follow the record header and precede
packet data, and a protocol hint.
The buffer index is an opaque 32-bit cookie which allows consumers of
these data to easily filter/track single packets as they traverse the
forwarding graph.
Multiple records per packet are normal, and to be expected. Packets
will appear multiple times as they traverse the vpp forwarding
graph. In this way, vpp graph dispatch traces are significantly
different from regular network packet captures from an end-station.
This property complicates stateful packet analysis.
Restricting stateful analysis to records from a single vpp graph node
such as "ethernet-input" seems likely to improve the situation.
As of this writing: major version = 1, minor version = 0. NStrings
SHOULD be 4 or 5. Consumers SHOULD be wary of values less than 4 or
greater than 5. They MAY attempt to display the claimed number of
strings, or they MAY treat the condition as an error.
Here is the current set of protocol hints:

```c
typedef enum
{
  VLIB_NODE_PROTO_HINT_NONE = 0,
  VLIB_NODE_PROTO_HINT_ETHERNET,
  VLIB_NODE_PROTO_HINT_IP4,
  VLIB_NODE_PROTO_HINT_IP6,
  VLIB_NODE_PROTO_HINT_TCP,
  VLIB_NODE_PROTO_HINT_UDP,
  VLIB_NODE_N_PROTO_HINTS,
} vlib_node_proto_hint_t;
```
Example: VLIB_NODE_PROTO_HINT_IP6 means that the first octet of packet
data SHOULD be 0x60, and should begin an ipv6 packet header.
Downstream consumers of these data SHOULD pay attention to the
protocol hint. They MUST tolerate inaccurate hints, which MAY occur
from time to time.
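
For instance, a consumer might apply a plausibility check before
trusting a hint (hypothetical reader-side code, not part of vpp):

```c
/* Return nonzero if the first octet of packet data is consistent
 * with the claimed protocol hint. */
static int
hint_plausible (u8 proto_hint, const u8 * pkt, u32 len)
{
  if (len == 0)
    return 0;
  switch (proto_hint)
    {
    case VLIB_NODE_PROTO_HINT_IP4:
      return (pkt[0] & 0xf0) == 0x40;
    case VLIB_NODE_PROTO_HINT_IP6:
      return (pkt[0] & 0xf0) == 0x60;
    default:
      return 1;			/* no strong expectation */
    }
}
```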
### Dispatch Pcap Trace Debug CLI

To start a dispatch trace capture of up to 10,000 trace records:

```
pcap dispatch trace on max 10000 file dispatch.pcap
```

To start a dispatch trace which will also include standard vpp packet
tracing for packets which originate in dpdk-input:

```
pcap dispatch trace on max 10000 file dispatch.pcap buffer-trace dpdk-input 1000
```

To save the pcap trace, e.g. in /tmp/dispatch.pcap:

```
pcap dispatch trace off
```
### Wireshark dissection of dispatch pcap traces
It almost goes without saying that we built a companion wireshark
dissector to display these traces. As of this writing, we have
upstreamed the wireshark dissector.

Since it will be a while before wireshark/master/latest makes it into
all of the popular Linux distros, please see the "How to build a vpp
dispatch trace aware Wireshark" page for build info.
Here is a sample packet dissection, with some fields omitted for
clarity. The point is that the wireshark dissector accurately
displays **all** of the vpp buffer metadata, and the name of the
graph node in question.
```
Frame 1: 2216 bytes on wire (17728 bits), 2216 bytes captured (17728 bits)
    Encapsulation type: USER 13 (58)
    [Protocols in frame: vpp:vpp-metadata:vpp-opaque:vpp-opaque2:eth:ethertype:ip:tcp:data]
VPP Dispatch Trace
    BufferIndex: 0x00036663
NodeName: ethernet-input
VPP Buffer Metadata
    Metadata: current_data: 0, current_length: 102
    Metadata: current_config_index: 0, flow_id: 0, next_buffer: 0
    Metadata: error: 0, n_add_refs: 0, buffer_pool_index: 0
    Metadata: trace_index: 0, recycle_count: 0, len_not_first_buf: 0
    Metadata: free_list_index: 0
VPP Buffer Opaque
    Opaque: raw: 00000007 ffffffff 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
    Opaque: sw_if_index[VLIB_RX]: 7, sw_if_index[VLIB_TX]: -1
    Opaque: L2 offset 0, L3 offset 0, L4 offset 0, feature arc index 0
    Opaque: ip.adj_index[VLIB_RX]: 0, ip.adj_index[VLIB_TX]: 0
    Opaque: ip.flow_hash: 0x0, ip.save_protocol: 0x0, ip.fib_index: 0
    Opaque: ip.save_rewrite_length: 0, ip.rpf_id: 0
    Opaque: ip.icmp.type: 0 ip.icmp.code: 0, ip.icmp.data: 0x0
    Opaque: ip.reass.next_index: 0, ip.reass.estimated_mtu: 0
    Opaque: ip.reass.fragment_first: 0 ip.reass.fragment_last: 0
    Opaque: ip.reass.range_first: 0 ip.reass.range_last: 0
    Opaque: ip.reass.next_range_bi: 0x0, ip.reass.ip6_frag_hdr_offset: 0
    Opaque: mpls.ttl: 0, mpls.exp: 0, mpls.first: 0, mpls.save_rewrite_length: 0, mpls.bier.n_bytes: 0
    Opaque: l2.feature_bitmap: 00000000, l2.bd_index: 0, l2.l2_len: 0, l2.shg: 0, l2.l2fib_sn: 0, l2.bd_age: 0
    Opaque: l2.feature_bitmap_input: none configured, L2.feature_bitmap_output: none configured
    Opaque: l2t.next_index: 0, l2t.session_index: 0
    Opaque: l2_classify.table_index: 0, l2_classify.opaque_index: 0, l2_classify.hash: 0x0
    Opaque: policer.index: 0
    Opaque: ipsec.flags: 0x0, ipsec.sad_index: 0
    Opaque: map_t.v6.saddr: 0x0, map_t.v6.daddr: 0x0, map_t.v6.frag_offset: 0, map_t.v6.l4_offset: 0
    Opaque: map_t.v6.l4_protocol: 0, map_t.checksum_offset: 0, map_t.mtu: 0
    Opaque: ip_frag.mtu: 0, ip_frag.next_index: 0, ip_frag.flags: 0x0
    Opaque: cop.current_config_index: 0
    Opaque: lisp.overlay_afi: 0
    Opaque: tcp.connection_index: 0, tcp.seq_number: 0, tcp.seq_end: 0, tcp.ack_number: 0, tcp.hdr_offset: 0, tcp.data_offset: 0
    Opaque: tcp.data_len: 0, tcp.flags: 0x0
    Opaque: sctp.connection_index: 0, sctp.sid: 0, sctp.ssn: 0, sctp.tsn: 0, sctp.hdr_offset: 0
    Opaque: sctp.data_offset: 0, sctp.data_len: 0, sctp.subconn_idx: 0, sctp.flags: 0x0
    Opaque: snat.flags: 0x0
VPP Buffer Opaque2
    Opaque2: raw: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
    Opaque2: qos.bits: 0, qos.source: 0
    Opaque2: loop_counter: 0
    Opaque2: gbp.flags: 0, gbp.src_epg: 0
    Opaque2: pg_replay_timestamp: 0
Ethernet II, Src: 06:d6:01:41:3b:92 (06:d6:01:41:3b:92), Dst: IntelCor_3d:f6
Transmission Control Protocol, Src Port: 22432, Dst Port: 54084, Seq: 1, Ack: 1, Len: 36
    Destination Port: 54084
    TCP payload (36 bytes)
Data (36 bytes)
0000  cf aa 8b f5 53 14 d4 c7 29 75 3e 56 63 93 9d 11   ....S...)u>Vc...
0010  e5 f2 92 27 86 56 4c 21 ce c5 23 46 d7 eb ec 0d   ...'.VL!..#F....
0020  a8 98 36 5a                                       ..6Z
    Data: cfaa8bf55314d4c729753e5663939d11e5f2922786564c21…
```
It's a matter of a couple of mouse-clicks in Wireshark to filter the
trace to a specific buffer index. With that specific kind of
filtration, one can watch a packet walk through the forwarding graph,
noting any/all metadata changes, header checksum changes, and so
forth.
This should be of significant value when developing new vpp graph
nodes. If new code mispositions b->current_data, it will be completely
obvious from looking at the dispatch trace in wireshark.
## pcap rx, tx, and drop tracing
vpp also supports rx, tx, and drop packet capture in pcap format,
through the "pcap trace" debug CLI command.

This command is used to start or stop a packet capture, or show the
status of packet capture. Each of "pcap trace rx", "pcap trace tx",
and "pcap trace drop" is implemented. Supply one or more of "rx",
"tx", and "drop" to enable multiple simultaneous capture types.
These commands have the following optional parameters:

- <b>rx</b> - trace received packets.

- <b>tx</b> - trace transmitted packets.

- <b>drop</b> - trace dropped packets.
- <b>max _nnnn_</b> - file size, number of packet captures. Once
  <nnnn> packets have been received, the trace buffer is flushed to
  the indicated file. Defaults to 1000. Can only be updated if packet
  capture is off.
- <b>max-bytes-per-pkt _nnnn_</b> - maximum number of bytes to trace
  on a per-packet basis. Must be >32 and less than 9000. Default
  value: 512.
- <b>filter</b> - Use the pcap rx / tx / drop trace filter, which
  must be configured. Use <b>classify filter pcap...</b> to configure
  the filter. The filter will only be executed if the per-interface
  or any-interface tests fail.
- <b>intfc _interface_ | _any_</b> - Used to specify a given
  interface, or use '<em>any</em>' to run packet capture on all
  interfaces. '<em>any</em>' is the default if not provided. Settings
  from a previous packet capture are preserved, so '<em>any</em>' can
  be used to reset the interface setting.
- <b>file _filename_</b> - Used to specify the output filename. The
  file will be placed in the '<em>/tmp</em>' directory. If _filename_
  already exists, the file will be overwritten. If no filename is
  provided, '<em>/tmp/rx.pcap or tx.pcap</em>' will be used, depending
  on capture direction. Can only be updated when pcap capture is off.
- <b>status</b> - Displays the current status and configured
  attributes associated with a packet capture. If packet capture is in
  progress, '<em>status</em>' also will return the number of packets
  currently in the buffer. Any additional attributes entered on the
  command line with a '<em>status</em>' request will be ignored.
- <b>filter</b> - Capture packets which match the current packet
  trace filter set. See next section. Configure the capture filter
  first.
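
As an example, a plausible capture session combining several of these
parameters might look like this (interface name hypothetical):

```
pcap trace rx tx max 2000 intfc GigabitEthernet3/0/0 file mycapture.pcap
pcap trace status
pcap trace off
```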
## packet trace capture filtering
The "classify filter pcap | <interface-name> | trace" debug CLI
command constructs an arbitrary set of packet classifier tables for
use with "pcap rx | tx | drop trace," and with the vpp packet tracer
on a per-interface or system-wide basis.
Packets which match a rule in the classifier table chain will be
traced. The tables are automatically ordered so that matches in the
most specific table are tried first.
It's reasonably likely that folks will configure a single table with
one or two matches. As a result, we configure 8 hash buckets and 128K
of match rule space by default. One can override the defaults by
specifying "buckets <nnn>" and "memory-size <xxx>" as desired.
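
For instance, a hedged example which overrides both defaults (sizes
chosen arbitrarily):

```
classify filter pcap mask l3 ip4 src buckets 64 memory-size 2m match l3 ip4 src 192.168.1.11
```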
To build up complex filter chains, repeatedly issue the classify
filter debug CLI command. Each command must specify the desired mask
and match values. If a classifier table with a suitable mask already
exists, the CLI command adds a match rule to the existing table. If
not, the CLI command adds a new table with the indicated mask, and
adds the match rule to the new table.
### Configure a simple pcap classify filter

```
classify filter pcap mask l3 ip4 src match l3 ip4 src 192.168.1.11
pcap trace rx max 100 filter
```
### Configure a simple per-interface capture filter

```
classify filter GigabitEthernet3/0/0 mask l3 ip4 src match l3 ip4 src 192.168.1.11
pcap trace rx max 100 intfc GigabitEthernet3/0/0
```
Note that per-interface capture filters are _always_ applied.
### Clear per-interface capture filters

```
classify filter GigabitEthernet3/0/0 del
```
### Configure another fairly simple pcap classify filter

```
classify filter pcap mask l3 ip4 src dst match l3 ip4 src 192.168.1.10 dst 192.168.2.10
pcap trace tx max 100 filter
```
### Configure a vpp packet tracer filter

```
classify filter trace mask l3 ip4 src dst match l3 ip4 src 192.168.1.10 dst 192.168.2.10
trace add dpdk-input 100 filter
```
### Clear all current classifier filters

```
classify filter [pcap | <interface> | trace] del
```
### To inspect the classifier tables

```
show classify table [verbose]
```
The verbose form displays all of the match rules, with hit-counters.
### Terse description of the "mask <xxx>" syntax:

```
l2 src dst proto tag1 tag2 ignore-tag1 ignore-tag2 cos1 cos2 dot1q dot1ad
l3 ip4 <ip4-mask> ip6 <ip6-mask>
<ip4-mask> version hdr_length src[/width] dst[/width]
           tos length fragment_id ttl protocol checksum
<ip6-mask> version traffic-class flow-label src dst proto
           payload_length hop_limit protocol
l4 tcp <tcp-mask> udp <udp_mask>
<tcp-mask> src dst # ports
<udp-mask> src_port dst_port
```
To construct **matches**, add the values to match after the indicated
keywords in the mask syntax. For example: "... mask l3 ip4 src" ->
"... match l3 ip4 src 192.168.1.11"
## VPP Packet Generator
We use the VPP packet generator to inject packets into the forwarding
graph. The packet generator can replay pcap traces, and generate
packets out of whole cloth at respectably high performance.
The VPP pg enables quite a variety of use-cases, ranging from
functional testing of new data-plane nodes to regression testing to
performance tuning.

### PG setup scripts
PG setup scripts describe traffic in detail, and leverage vpp debug
CLI mechanisms. It's reasonably unusual to construct a pg setup
script which doesn't include a certain amount of interface and FIB
configuration.

Here's a simple example:

```
loop create
set int ip address loop0 192.168.1.1/24
set int state loop0 up

packet-generator new {
    name pg0
    limit 1000
    rate 1e6
    size 300-300
    interface loop0
    data { IP4: 1.2.3 -> 4.5.6
           UDP: 192.168.1.10 - 192.168.1.254 -> 192.168.2.10
           UDP: 1234 -> 2345
           incrementing 256
    }
}
```
A packet generator stream definition includes two major sections:
- Stream Parameter Setup
- Packet Data
### Stream Parameter Setup

Given the example above, let's look at how to set up stream
parameters:
- **name pg0** - Name of the stream, in this case "pg0"
- **limit 1000** - Number of packets to send when the stream is
  enabled. "limit 0" means send packets continuously.
- **maxframe \<nnn\>** - Maximum frame size. Handy for injecting
  multiple frames no larger than \<nnn\>. Useful for checking dual /
  quad loop codes.
- **rate 1e6** - Packet injection rate, in this case 1 MPPS. When not
  specified, the packet generator injects packets as fast as possible.

- **size 300-300** - Packet size range, in this case send 300-byte
  packets.
- **interface loop0** - Packets appear as if they were received on
  the specified interface. This datum is used in multiple ways: to
  select graph arc feature configuration, to select IP FIBs.
  Configure features e.g. on loop0 to exercise those features.
- **tx-interface \<name\>** - Packets will be transmitted on the
  indicated interface. Typically required only when injecting packets
  into post-IP-rewrite graph nodes.
- **pcap \<filename\>** - Replay packets from the indicated pcap
  capture file. "make test" makes extensive use of this feature:
  generate packets using scapy, save them in a .pcap file, then
  inject them into the vpp graph via a vpp pg "pcap \<filename\>"
  stream definition.
- **worker \<nn\>** - Generate packets for the stream using the
  indicated vpp worker thread. The vpp pg generates and injects O(10
  MPPS / core). Use multiple stream definitions and worker threads to
  generate and inject enough traffic to easily fill a 40 gbit pipe
  with small packets.
### Data definition

Packet generator data definitions make use of a layered
implementation strategy. Networking layers are specified in order,
and the notation can seem a bit counter-intuitive. In the example
above, the data definition stanza constructs a set of L2-L4 header
layers, and uses an incrementing fill pattern to round out the
requested 300-byte packet.
- **IP4: 1.2.3 -> 4.5.6** - Construct an L2 (MAC) header with the ip4
  ethertype (0x800), src MAC address of 00:01:00:02:00:03 and dst MAC
  address of 00:04:00:05:00:06. MAC addresses may be specified in
  either _xxxx.xxxx.xxxx_ format or _xx:xx:xx:xx:xx:xx_ format.
- **UDP: 192.168.1.10 - 192.168.1.254 -> 192.168.2.10** - Construct
  an incrementing set of L3 (IPv4) headers for successive packets,
  with source addresses ranging from .10 to .254. All packets in the
  stream have a constant dest address of 192.168.2.10. Set the
  protocol field to 17 (UDP).
- **UDP: 1234 -> 2345** - Set the UDP source and destination ports to
  1234 and 2345, respectively.
- **incrementing 256** - Insert up to 256 incrementing data bytes.
Obvious variations involve "s/IP4/IP6/" in the above, along with
changing from IPv4 to IPv6 address notation.
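
As a hedged sketch of such a variation (addresses made up; check
../src/vnet/ip/ip6_pg.c for the exact IP6 stanza syntax):

```
packet-generator new {
    name pg-v6
    limit 100
    size 128-128
    interface loop0
    data { IP6: 1.2.3 -> 4.5.6
           UDP: 2001:db8::10 -> 2001:db8::20
           UDP: 1234 -> 2345
           incrementing 80
    }
}
```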
The vpp pg can set any / all IPv4 header fields, including tos,
packet length, mf / df / fragment id and offset, ttl, protocol,
checksum, and src/dst addresses. Take a look at
../src/vnet/ip/ip[46]_pg.c for details.
If all else fails, specify the entire packet data in hex:

- **hex 0xabcd...** - copy hex data verbatim into the packet
When replaying pcap files ("**pcap \<filename\>**"), do not specify a
data stanza.
### Diagnosing "packet-generator new" parse failures

If you want to inject packets into a brand-new graph node, remember
to tell the packet generator debug CLI how to parse the packet data
stanza.
If the node expects L2 Ethernet MAC headers, specify ".unformat_buffer
= unformat_ethernet_header":

```c
VLIB_REGISTER_NODE (ethernet_input_node) =
{
  ...
  .unformat_buffer = unformat_ethernet_header,
  ...
};
```
Beyond that, it may be necessary to set breakpoints in
.../src/vnet/pg/cli.c. Debug image suggested.
When debugging new nodes, it may be far simpler to directly inject
ethernet frames - and add a corresponding vlib_buffer_advance call in
the new node - than to modify the packet generator. A sketch of that
approach follows.
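
This assumes the new node wants to see the ip4 header at the current
data pointer:

```c
/* Packets were injected as full Ethernet frames; skip the L2
 * header before the node's own processing begins. */
ethernet_header_t *eh = vlib_buffer_get_current (b0);

if (clib_net_to_host_u16 (eh->type) == ETHERNET_TYPE_IP4)
  vlib_buffer_advance (b0, sizeof (ethernet_header_t));
```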
### Packet Generator Debug CLI

The descriptions above describe the "packet-generator new" debug CLI
in detail.
Additional debug CLI commands include:

```
vpp# packet-generator enable [<stream-name>]
```
which enables the named stream, or all streams.

```
vpp# packet-generator disable [<stream-name>]
```
which disables the named stream, or all streams.

```
vpp# packet-generator delete <stream-name>
```
Deletes the named stream.

```
vpp# packet-generator configure <stream-name> [limit <nnn>]
     [rate <f64-pps>] [size <nn>-<nn>]
```
Changes stream parameters without having to recreate the entire
stream definition. Note that re-issuing a "packet-generator new"
command will correctly recreate the named stream.