diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0e50c107..c9ce85c9 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -347,6 +347,13 @@ The available information categories are:
 * ``ieee1588``: Demonstrate L2 IEEE1588 V2 PTP timestamping for RX and TX. Requires ``CONFIG_RTE_LIBRTE_IEEE1588=y``.
 
+* ``tm``: Traffic Management forwarding mode.
+  Demonstrates the use of the ethdev traffic management APIs and the softnic
+  PMD for QoS traffic management. In this mode, a 5-level hierarchical QoS
+  scheduler is available as a default option and can be enabled through the
+  CLI. The user can also modify the default hierarchy, or specify a new one,
+  through the CLI. Requires ``CONFIG_RTE_LIBRTE_PMD_SOFTNIC=y`` and ``CONFIG_RTE_LIBRTE_SCHED=y``.
+
 Note: TX timestamping is only available in the "Full Featured" TX path. To force ``testpmd`` into this mode set ``--txqflags=0``.
 
 Example::
@@ -380,6 +387,20 @@ For example::
 
    testpmd> read txd 0 0 4
         0x00000001 - 0x24C3C440 / 0x000F0000 - 0x2330003C
 
+ddp get list
+~~~~~~~~~~~~
+
+Get the list of loaded dynamic device personalization (DDP) packages::
+
+   testpmd> ddp get list (port_id)
+
+ddp get info
+~~~~~~~~~~~~
+
+Display information about a dynamic device personalization (DDP) profile::
+
+   testpmd> ddp get info (profile_path)
+
 show vf stats
 ~~~~~~~~~~~~~
 
@@ -394,6 +415,14 @@ Reset VF statistics::
 
    testpmd> clear vf stats (port_id) (vf_id)
 
+show port pctype mapping
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+List all items from the pctype mapping table::
+
+   testpmd> show port (port_id) pctype mapping
+
+
 Configuration Functions
 -----------------------
 
@@ -828,6 +857,49 @@ Where:
 
 Check the NIC Datasheet for hardware limits.
 
+RSS queue region
+~~~~~~~~~~~~~~~~
+
+Set the span of an RSS queue region on a port::
+
+   testpmd> set port (port_id) queue-region region_id (value) \
+            queue_start_index (value) queue_num (value)
+
+Set the flowtype mapping of an RSS queue region on a port::
+
+   testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
+
+where:
+
+* For the flowtype (pctype) of a packet, the specific index for each type is
+  defined in the file i40e_type.h as enum i40e_filter_pctype.
+
+Set the user priority (UP) mapping of an RSS queue region on a port::
+
+   testpmd> set port (port_id) queue-region UP (value) region_id (value)
+
+Flush all queue region related configuration on a port::
+
+   testpmd> set port (port_id) queue-region flush (on|off)
+
+where:
+
+* ``on``: commit the queue region configuration to the hardware. All queue
+  region configuration coming from the upper layer is at first only stored
+  in the DPDK driver software; only after ``flush on`` is it committed to
+  the hardware. ``off``: clear all the queue region configuration made so
+  far and restore the DPDK i40e driver defaults used at start up.
+
+Show all queue region related configuration on a port::
+
+   testpmd> show port (port_id) queue-region
+
+.. note::
+
+  Queue regions are currently supported on the PF only, so these commands
+  only configure queue regions on PF ports.
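+
+For example, a hypothetical session that maps flowtype 31 to a 4-queue region
+on an i40e PF port 0 and then commits it to the hardware (all values are
+illustrative) could be::
+
+   testpmd> set port 0 queue-region region_id 0 queue_start_index 0 queue_num 4
+   testpmd> set port 0 queue-region region_id 0 flowtype 31
+   testpmd> set port 0 queue-region flush on
+   testpmd> show port 0 queue-region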
+
 csum parse-tunnel
 ~~~~~~~~~~~~~~~~~
 
@@ -884,6 +956,106 @@ Display the status of TCP Segmentation Offload::
 
    testpmd> tso show (port_id)
 
+set port - gro
+~~~~~~~~~~~~~~
+
+Enable or disable GRO in ``csum`` forwarding engine::
+
+   testpmd> set port <port_id> gro on|off
+
+If enabled, the csum forwarding engine will perform GRO on the TCP/IPv4
+packets received from the given port.
+
+If disabled, GRO is not performed on packets received from the given
+port. By default, GRO is disabled for all ports.
+
+.. note::
+
+   When GRO is enabled for a port, GRO is performed on TCP/IPv4 packets
+   received from that port. After GRO, all merged packets have bad
+   checksums, since the GRO library doesn't re-calculate checksums for
+   the merged packets. Therefore, if users want the merged packets to
+   have correct checksums, please select HW IP checksum calculation and
+   HW TCP checksum calculation for the port to which the merged packets
+   are transmitted.
+
+show port - gro
+~~~~~~~~~~~~~~~
+
+Display GRO configuration for a given port::
+
+   testpmd> show port <port_id> gro
+
+set gro flush
+~~~~~~~~~~~~~
+
+Set the cycle to flush the GROed packets from reassembly tables::
+
+   testpmd> set gro flush <cycles>
+
+When GRO is enabled, the csum forwarding engine performs GRO on received
+packets, and the GROed packets are stored in reassembly tables. Users
+can use this command to determine when the GROed packets are flushed
+from the reassembly tables.
+
+The ``cycles`` is measured in GRO operation times. The csum forwarding
+engine flushes the GROed packets from the tables every ``cycles`` GRO
+operations.
+
+By default, the value of ``cycles`` is 1, which means the GROed packets
+are flushed from the reassembly tables as soon as one GRO operation
+finishes. The value of ``cycles`` should be in the range of 1 to
+``GRO_MAX_FLUSH_CYCLES``.
+
+Please note that a large value of ``cycles`` may degrade TCP/IP stack
+performance, because the GROed packets are delayed before they reach the
+stack, causing more duplicate ACKs and TCP retransmissions.
+
+set port - gso
+~~~~~~~~~~~~~~
+
+Toggle per-port GSO support in ``csum`` forwarding engine::
+
+   testpmd> set port <port_id> gso on|off
+
+If enabled, the csum forwarding engine will perform GSO on supported IPv4
+packets, transmitted on the given port.
+
+If disabled, packets transmitted on the given port will not undergo GSO.
+By default, GSO is disabled for all ports.
+
+.. note::
+
+   When GSO is enabled on a port, supported IPv4 packets transmitted on that
+   port undergo GSO. Afterwards, the segmented packets are represented by
+   multi-segment mbufs; however, the csum forwarding engine doesn't calculate
+   checksums for GSO'd segments in SW. As a result, if users want correct
+   checksums in GSO segments, they should enable HW checksum calculation for
+   GSO-enabled ports.
+
+   For example, HW checksum calculation for VxLAN GSO'd packets may be
+   enabled by setting the following options in the csum forwarding engine:
+
+   testpmd> csum set outer_ip hw
+
+   testpmd> csum set ip hw
+
+   testpmd> csum set tcp hw
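+
+A minimal hypothetical session enabling GRO on port 0 of the ``csum``
+forwarding engine (the port number and flush cycle are illustrative) could
+look like::
+
+   testpmd> set fwd csum
+   testpmd> set port 0 gro on
+   testpmd> set gro flush 2
+   testpmd> show port 0 gro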
+
+set gso segsz
+~~~~~~~~~~~~~
+
+Set the maximum GSO segment size (measured in bytes, including the packet
+header and the packet payload) for all GSO-enabled ports (global setting)::
+
+   testpmd> set gso segsz <length>
+
+show port - gso
+~~~~~~~~~~~~~~~
+
+Display the status of Generic Segmentation Offload for a given port::
+
+   testpmd> show port <port_id> gso
+
 mac_addr add
 ~~~~~~~~~~~~
 
@@ -1048,6 +1220,17 @@ For example, to set rx queue 2 on port 0 to mapping 5::
 
    testpmd>set stat_qmap rx 0 2 5
 
+set xstats-hide-zero
+~~~~~~~~~~~~~~~~~~~~
+
+Set the option to hide zero values for xstats display::
+
+   testpmd> set xstats-hide-zero on|off
+
+.. note::
+
+   By default, zero values are displayed for xstats.
+
 set port - rx/tx (for VF)
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -1211,6 +1394,20 @@ Add an E-tag forwarding filter on a port::
 
 Delete an E-tag forwarding filter on a port::
 
    testpmd> E-tag set filter del e-tag-id (value) port (port_id)
 
+ddp add
+~~~~~~~
+
+Load a dynamic device personalization (DDP) package::
+
+   testpmd> ddp add (port_id) (package_path[,output_path])
+
+ddp del
+~~~~~~~
+
+Delete a loaded dynamic device personalization (DDP) package::
+
+   testpmd> ddp del (port_id) (package_path)
+
 ptype mapping
 ~~~~~~~~~~~~~
 
@@ -1635,6 +1832,23 @@ Enable/disable the E-tag support::
 
    testpmd> port config (port_id|all) l2-tunnel E-tag (enable|disable)
 
+port config pctype mapping
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Reset the pctype mapping table::
+
+   testpmd> port config (port_id) pctype mapping reset
+
+Update the mapping from hardware defined pctypes to software defined flow types::
+
+   testpmd> port config (port_id) pctype mapping update (pctype_id_0[,pctype_id_1]*) (flow_type_id)
+
+where:
+
+* ``pctype_id_x``: hardware pctype id, used as the index of a bit in the bitmask value of the pctype mapping table.
+
+* ``flow_type_id``: software flow type id, used as the index of the pctype mapping table.
+
 Link Bonding Functions
 ----------------------
 
@@ -1651,7 +1865,7 @@ Create a new bonding device::
 
 For example, to create a bonded device in mode 1 on socket 0::
 
-   testpmd> create bonded 1 0
+   testpmd> create bonded device 1 0
    created new bonded device (port X)
 
 add bonding slave
 ~~~~~~~~~~~~~~~~~
 
@@ -1738,6 +1952,23 @@ For example, to set the link status monitoring polling period of bonded device (
 
    testpmd> set bonding mon_period 5 150
 
+set bonding lacp dedicated_queue
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enable dedicated tx/rx queues on bonding device slaves to handle LACP control
+plane traffic when in mode 4 (link-aggregation-802.3ad)::
+
+   testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
+
+
+set bonding agg_mode
+~~~~~~~~~~~~~~~~~~~~
+
+Select the aggregation mode to be used when in mode 4 (link-aggregation-802.3ad)::
+
+   testpmd> set bonding agg_mode (port_id) (bandwidth|count|stable)
+
+
 show bonding config
 ~~~~~~~~~~~~~~~~~~~
 
@@ -1838,6 +2069,261 @@ For example, to set the high bit in the register from the example above::
 
    port 0 PCI register at offset 0xEE00: 0x8000000A (2147483658)
 
+Traffic Management
+------------------
+
+The following section shows functions for configuring traffic management on
+the Ethernet device through the use of the generic TM API.
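+
+The typical order of operations, detailed in the subsections below, is to
+create shaper/WRED profiles first, then build the hierarchy of nodes from the
+root down, and finally commit the hierarchy. For instance, a hypothetical
+session might begin by querying the port capability and creating a 10 Mbps
+(1250000 bytes per second) shaper profile; all IDs and values here are
+illustrative::
+
+   testpmd> show port tm cap 0
+   testpmd> add port tm node shaper profile 0 0 1250000 1000000 24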
+
+show port traffic management capability
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show the traffic management capability of the port::
+
+   testpmd> show port tm cap (port_id)
+
+show port traffic management capability (hierarchy level)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show the traffic management hierarchy level capability of the port::
+
+   testpmd> show port tm cap (port_id) (level_id)
+
+show port traffic management capability (hierarchy node level)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show the traffic management hierarchy node capability of the port::
+
+   testpmd> show port tm cap (port_id) (node_id)
+
+show port traffic management hierarchy node type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show the port traffic management hierarchy node type::
+
+   testpmd> show port tm node type (port_id) (node_id)
+
+show port traffic management hierarchy node stats
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show the port traffic management hierarchy node statistics::
+
+   testpmd> show port tm node stats (port_id) (node_id) (clear)
+
+where:
+
+* ``clear``: When this parameter has a non-zero value, the statistics counters
+  are cleared (i.e. set to zero) immediately after they have been read,
+  otherwise the statistics counters are left untouched.
+
+Add port traffic management private shaper profile
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add a port traffic management private shaper profile::
+
+   testpmd> add port tm node shaper profile (port_id) (shaper_profile_id) \
+   (tb_rate) (tb_size) (packet_length_adjust)
+
+where:
+
+* ``shaper_profile_id``: Shaper profile ID for the new profile.
+* ``tb_rate``: Token bucket rate (bytes per second).
+* ``tb_size``: Token bucket size (bytes).
+* ``packet_length_adjust``: The value (bytes) to be added to the length of
+  each packet for the purpose of shaping. This parameter value can be used to
+  correct the packet length with the framing overhead bytes that are consumed
+  on the wire.
+
+Delete port traffic management private shaper profile
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Delete a port traffic management private shaper profile::
+
+   testpmd> del port tm node shaper profile (port_id) (shaper_profile_id)
+
+where:
+
+* ``shaper_profile_id``: Shaper profile ID of the profile to be deleted.
+
+Add port traffic management shared shaper
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Create a port traffic management shared shaper::
+
+   testpmd> add port tm node shared shaper (port_id) (shared_shaper_id) \
+   (shaper_profile_id)
+
+where:
+
+* ``shared_shaper_id``: Shared shaper ID to be created.
+* ``shaper_profile_id``: Shaper profile ID for the shared shaper.
+
+Set port traffic management shared shaper
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Update a port traffic management shared shaper::
+
+   testpmd> set port tm node shared shaper (port_id) (shared_shaper_id) \
+   (shaper_profile_id)
+
+where:
+
+* ``shared_shaper_id``: Shared shaper ID to be updated.
+* ``shaper_profile_id``: Shaper profile ID for the shared shaper.
+
+Delete port traffic management shared shaper
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Delete a port traffic management shared shaper::
+
+   testpmd> del port tm node shared shaper (port_id) (shared_shaper_id)
+
+where:
+
+* ``shared_shaper_id``: Shared shaper ID to be deleted.
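+
+For example, continuing the hypothetical session above, shared shaper 5 could
+be created from the previously added shaper profile 0, re-pointed to another
+hypothetical profile 1, and finally removed (all IDs are illustrative)::
+
+   testpmd> add port tm node shared shaper 0 5 0
+   testpmd> set port tm node shared shaper 0 5 1
+   testpmd> del port tm node shared shaper 0 5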
+
+Set port traffic management hierarchy node private shaper
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the port traffic management hierarchy node private shaper::
+
+   testpmd> set port tm node shaper profile (port_id) (node_id) \
+   (shaper_profile_id)
+
+where:
+
+* ``shaper_profile_id``: Private shaper profile ID to be enabled on the
+  hierarchy node.
+
+Add port traffic management WRED profile
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Create a new WRED profile::
+
+   testpmd> add port tm node wred profile (port_id) (wred_profile_id) \
+   (color_g) (min_th_g) (max_th_g) (maxp_inv_g) (wq_log2_g) \
+   (color_y) (min_th_y) (max_th_y) (maxp_inv_y) (wq_log2_y) \
+   (color_r) (min_th_r) (max_th_r) (maxp_inv_r) (wq_log2_r)
+
+where:
+
+* ``wred_profile_id``: Identifier for the newly created WRED profile.
+* ``color_g``: Packet color (green).
+* ``min_th_g``: Minimum queue threshold for packets with green color.
+* ``max_th_g``: Maximum queue threshold for packets with green color.
+* ``maxp_inv_g``: Inverse of packet marking probability maximum value (maxp).
+* ``wq_log2_g``: Negated log2 of queue weight (wq).
+* ``color_y``: Packet color (yellow).
+* ``min_th_y``: Minimum queue threshold for packets with yellow color.
+* ``max_th_y``: Maximum queue threshold for packets with yellow color.
+* ``maxp_inv_y``: Inverse of packet marking probability maximum value (maxp).
+* ``wq_log2_y``: Negated log2 of queue weight (wq).
+* ``color_r``: Packet color (red).
+* ``min_th_r``: Minimum queue threshold for packets with red color.
+* ``max_th_r``: Maximum queue threshold for packets with red color.
+* ``maxp_inv_r``: Inverse of packet marking probability maximum value (maxp).
+* ``wq_log2_r``: Negated log2 of queue weight (wq).
+
+Delete port traffic management WRED profile
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Delete the WRED profile::
+
+   testpmd> del port tm node wred profile (port_id) (wred_profile_id)
+
+Add port traffic management hierarchy nonleaf node
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add a nonleaf node to the port traffic management hierarchy::
+
+   testpmd> add port tm nonleaf node (port_id) (node_id) (parent_node_id) \
+   (priority) (weight) (level_id) (shaper_profile_id) \
+   (n_sp_priorities) (stats_mask) (n_shared_shapers) \
+   [(shared_shaper_0) (shared_shaper_1) ...]
+
+where:
+
+* ``parent_node_id``: Node ID of the parent.
+* ``priority``: Node priority (highest node priority is zero). This is used by
+  the SP algorithm running on the parent node for scheduling this node.
+* ``weight``: Node weight (lowest weight is one). The node weight is relative
+  to the weight sum of all siblings that have the same priority. It is used by
+  the WFQ algorithm running on the parent node for scheduling this node.
+* ``level_id``: Hierarchy level of the node.
+* ``shaper_profile_id``: Shaper profile ID of the private shaper to be used by
+  the node.
+* ``n_sp_priorities``: Number of strict priorities.
+* ``stats_mask``: Mask of statistics counter types to be enabled for this node.
+* ``n_shared_shapers``: Number of shared shapers.
+* ``shared_shaper_id``: Shared shaper id.
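+
+For example, a hypothetical nonleaf node 200 could be attached under an
+already created root node 100 at level 1, with priority 0, weight 1, private
+shaper profile 0, one strict priority, no statistics counters and no shared
+shapers (all IDs and values are illustrative)::
+
+   testpmd> add port tm nonleaf node 0 200 100 0 1 1 0 1 0 0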
+
+Add port traffic management hierarchy leaf node
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add a leaf node to the port traffic management hierarchy::
+
+   testpmd> add port tm leaf node (port_id) (node_id) (parent_node_id) \
+   (priority) (weight) (level_id) (shaper_profile_id) \
+   (cman_mode) (wred_profile_id) (stats_mask) (n_shared_shapers) \
+   [(shared_shaper_id) (shared_shaper_id) ...]
+
+where:
+
+* ``parent_node_id``: Node ID of the parent.
+* ``priority``: Node priority (highest node priority is zero). This is used by
+  the SP algorithm running on the parent node for scheduling this node.
+* ``weight``: Node weight (lowest weight is one). The node weight is relative
+  to the weight sum of all siblings that have the same priority. It is used by
+  the WFQ algorithm running on the parent node for scheduling this node.
+* ``level_id``: Hierarchy level of the node.
+* ``shaper_profile_id``: Shaper profile ID of the private shaper to be used by
+  the node.
+* ``cman_mode``: Congestion management mode to be enabled for this node.
+* ``wred_profile_id``: WRED profile id to be enabled for this node.
+* ``stats_mask``: Mask of statistics counter types to be enabled for this node.
+* ``n_shared_shapers``: Number of shared shapers.
+* ``shared_shaper_id``: Shared shaper id.
+
+Delete port traffic management hierarchy node
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Delete a node from the port traffic management hierarchy::
+
+   testpmd> del port tm node (port_id) (node_id)
+
+Update port traffic management hierarchy parent node
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Update the port traffic management hierarchy parent node::
+
+   testpmd> set port tm node parent (port_id) (node_id) (parent_node_id) \
+   (priority) (weight)
+
+This function can only be called after the hierarchy commit invocation. Its
+success depends on the port support for this operation, as advertised through
+the port capability set. This function is valid for all nodes of the traffic
+management hierarchy except the root node.
+
+Commit port traffic management hierarchy
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Commit the traffic management hierarchy on the port::
+
+   testpmd> port tm hierarchy commit (port_id) (clean_on_fail)
+
+where:
+
+* ``clean_on_fail``: When set to a non-zero value, the hierarchy is cleared on
+  function call failure; when set to zero, the hierarchy is preserved.
+
+Set port traffic management default hierarchy (tm forwarding mode)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the traffic management default hierarchy on the port::
+
+   testpmd> set port tm hierarchy default (port_id)
+
 Filter Functions
 ----------------
 
@@ -2264,7 +2750,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction and queries).
+``flow`` command (validation, creation, destruction, queries and operation
+modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
 
@@ -2318,6 +2805,10 @@ following sections.
 
    flow list {port_id} [group {group_id}] [...]
 
+- Restrict ingress traffic to the defined flow rules::
+
+   flow isolate {port_id} {boolean}
+
 Validating flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -2608,6 +3099,14 @@ This section lists supported pattern items and their attributes, if any.
 
 - ``protocol {unsigned}``: protocol type.
 
+- ``fuzzy``: fuzzy pattern match, expected to be faster than the default.
+
+  - ``thresh {unsigned}``: accuracy threshold.
+
+- ``gtp``, ``gtpc``, ``gtpu``: match a GTPv1 header.
+
+  - ``teid {unsigned}``: tunnel endpoint identifier.
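+
+For example, a hypothetical rule steering GTP-U traffic with TEID 1234 to
+queue 2 (assuming the port's PMD supports the ``gtpu`` item; all values are
+illustrative) could be::
+
+   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 1234 / end actions queue index 2 / end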
+ Actions list ^^^^^^^^^^^^ @@ -2894,6 +3393,46 @@ Output can be limited to specific groups:: 7 63 0 i- ETH IPV6 UDP VXLAN => MARK QUEUE testpmd> +Toggling isolated mode +~~~~~~~~~~~~~~~~~~~~~~ + +``flow isolate`` can be used to tell the underlying PMD that ingress traffic +must only be injected from the defined flow rules; that no default traffic +is expected outside those rules and the driver is free to assign more +resources to handle them. It is bound to ``rte_flow_isolate()``:: + + flow isolate {port_id} {boolean} + +If successful, enabling or disabling isolated mode shows either:: + + Ingress traffic on port [...] + is now restricted to the defined flow rules + +Or:: + + Ingress traffic on port [...] + is not restricted anymore to the defined flow rules + +Otherwise, in case of error:: + + Caught error type [...] ([...]): [...] + +Mainly due to its side effects, PMDs supporting this mode may not have the +ability to toggle it more than once without reinitializing affected ports +first (e.g. by exiting testpmd). + +Enabling isolated mode:: + + testpmd> flow isolate 0 true + Ingress traffic on port 0 is now restricted to the defined flow rules + testpmd> + +Disabling isolated mode:: + + testpmd> flow isolate 0 false + Ingress traffic on port 0 is not restricted anymore to the defined flow rules + testpmd> + Sample QinQ flow rules ~~~~~~~~~~~~~~~~~~~~~~ @@ -2942,4 +3481,3 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos ID Group Prio Attr Rule 0 0 0 i- ETH VLAN VLAN=>VF QUEUE 1 0 0 i- ETH VLAN VLAN=>PF QUEUE -