# QoS Hierarchical Scheduler {#qos_doc}
The Quality-of-Service (QoS) scheduler performs egress-traffic management by
prioritizing the transmission of packets of different service types and
subscribers based on Service Level Agreements (SLAs). The QoS scheduler can
be enabled on one or more NIC output interfaces depending upon the
requirement.
The QoS scheduler supports a number of scheduling and shaping levels which
build a hierarchical tree. The first level in the hierarchy is the port (i.e.
the physical interface), which constitutes the root node of the tree. The
subsequent level is the subport, which represents a group of
users/subscribers. An individual user/subscriber is represented by a pipe at
the next level. Each user can have different traffic types based on
criteria such as loss rate, jitter, and latency. These traffic types are
represented at the traffic-class level in the form of different traffic
classes. The last level contains a number of queues, which are grouped
together to host the packets of a specific traffic class, as sketched below.
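
A minimal picture of this hierarchy, using the default dimensions described
in the next section (1 subport, 4096 pipes, 4 traffic classes with 4 queues
each):

```
Port (physical interface, root)
└── Subport 0                      (group of users/subscribers)
    ├── Pipe 0                     (individual user/subscriber)
    │   ├── Traffic class 0 ── Q0 Q1 Q2 Q3
    │   ├── Traffic class 1 ── Q0 Q1 Q2 Q3
    │   ├── Traffic class 2 ── Q0 Q1 Q2 Q3
    │   └── Traffic class 3 ── Q0 Q1 Q2 Q3
    ├── Pipe 1 ...
    └── Pipe 4095 ...
```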
The QoS scheduler implementation requires flow classification, enqueue and
dequeue operations. Flow classification is a mandatory stage for HQoS, in
which incoming packets are classified by mapping the packet field information
to a 5-tuple (HQoS subport, pipe, traffic class, queue within traffic class,
and color) that is stored in the mbuf sched field. The enqueue operation
uses this information to determine the queue for storing the packet; at
this stage, if the specific queue is full, QoS drops the packet. The dequeue
operation consists of scheduling the packet based on its length and the
available credits, and handing the scheduled packet over to the output
interface.
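
As a rough sketch of how these stages map onto the underlying DPDK library
(written against the classic 4-traffic-class librte_sched API; function
signatures differ slightly across DPDK releases, and the helpers below are
purely illustrative):

```c
#include <rte_mbuf.h>
#include <rte_sched.h>

#define BURST_SIZE 64

/* Classification: record the 5-tuple in the mbuf sched field.
 * (Older DPDK signature shown; newer releases also take the port.) */
static void
classify_pkt (struct rte_mbuf *pkt, uint32_t subport, uint32_t pipe,
              uint32_t tc, uint32_t queue)
{
  rte_sched_port_pkt_write (pkt, subport, pipe, tc, queue,
                            e_RTE_METER_GREEN);
}

/* One enqueue/dequeue cycle of an HQoS object. */
static int
hqos_run_once (struct rte_sched_port *port, struct rte_mbuf **pkts_in,
               uint32_t n_in, struct rte_mbuf **pkts_out)
{
  /* Enqueue: packets that hit a full queue are dropped here. */
  rte_sched_port_enqueue (port, pkts_in, n_in);

  /* Dequeue: packets are scheduled by length and available credits;
   * the returned burst is ready for the NIC TX queue. */
  return rte_sched_port_dequeue (port, pkts_out, BURST_SIZE);
}
```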
For more information on the QoS scheduler, please refer to the DPDK
Programmer's Guide:
http://dpdk.org/doc/guides/prog_guide/qos_framework.html
### QoS Scheduler Parameters
The following illustrates the default HQoS configuration for each 10GbE
output port:
Single subport (subport 0):
- Subport rate set to 100% of port rate
- Each of the 4 traffic classes has rate set to 100% of port rate
4K pipes per subport 0 (pipes 0 .. 4095) with identical configuration:
- Pipe rate set to 1/4K of port rate (1250000000 / 4096 ≈ 305175 bytes/second, matching the pipe profile below)
- Each of the 4 traffic classes has rate set to 100% of pipe rate
- Within each traffic class, the byte-level WRR weights for the 4 queues are set to 1:1:1:1
#### Port configuration
```
rate 1250000000           /* Assuming 10GbE port */
frame_overhead 24         /* Overhead fields per Ethernet frame:
                           * 7B (Preamble) +
                           * 1B (Start of Frame Delimiter (SFD)) +
                           * 4B (Frame Check Sequence (FCS)) +
                           * 12B (Inter Frame Gap (IFG))
                           */
mtu 1522                  /* Assuming Ethernet/IPv4 pkt (FCS not included) */
n_subports_per_port 1     /* Number of subports per output interface */
n_pipes_per_subport 4096  /* Number of pipes (users/subscribers) */
queue_sizes 64 64 64 64   /* Packet queue size for each traffic class.
                           * All queues within the same pipe traffic class
                           * have the same size. Queues from different
                           * pipes serving the same traffic class have
                           * the same size. */
```
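
For reference, these settings map one-to-one onto the DPDK port parameter
structure (a minimal sketch against the classic 4-traffic-class librte_sched
API; the structure gained additional fields in later DPDK releases, and the
name shown is hypothetical):

```c
#include <stddef.h>
#include <rte_sched.h>

static struct rte_sched_port_params port_params = {
  .name = "hqos_port_0",        /* hypothetical identifier */
  .socket = 0,                  /* NUMA socket of the NIC */
  .rate = 1250000000,           /* 10GbE, bytes/second */
  .mtu = 1522,
  .frame_overhead = 24,         /* preamble + SFD + FCS + IFG */
  .n_subports_per_port = 1,
  .n_pipes_per_subport = 4096,
  .qsize = { 64, 64, 64, 64 },  /* per-traffic-class queue sizes */
  .pipe_profiles = NULL,        /* points at the pipe profile table */
  .n_pipe_profiles = 1,
};
```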
#### Subport configuration
```
tb_rate 1250000000    /* Subport level token bucket rate (bytes per second) */
tb_size 1000000       /* Subport level token bucket size (bytes) */
tc0_rate 1250000000   /* Subport level token bucket rate for traffic class 0 (bytes per second) */
tc1_rate 1250000000   /* Subport level token bucket rate for traffic class 1 (bytes per second) */
tc2_rate 1250000000   /* Subport level token bucket rate for traffic class 2 (bytes per second) */
tc3_rate 1250000000   /* Subport level token bucket rate for traffic class 3 (bytes per second) */
tc_period 10          /* Time interval for refilling the token bucket associated with traffic class (milliseconds) */
pipe 0 4095 profile 0 /* Pipes (users/subscribers) configured with pipe profile 0 */
```
#### Pipe configuration
```
tb_rate 305175                /* Pipe level token bucket rate (bytes per second) */
tb_size 1000000               /* Pipe level token bucket size (bytes) */
tc0_rate 305175               /* Pipe level token bucket rate for traffic class 0 (bytes per second) */
tc1_rate 305175               /* Pipe level token bucket rate for traffic class 1 (bytes per second) */
tc2_rate 305175               /* Pipe level token bucket rate for traffic class 2 (bytes per second) */
tc3_rate 305175               /* Pipe level token bucket rate for traffic class 3 (bytes per second) */
tc_period 40                  /* Time interval for refilling the token bucket associated with traffic class at pipe level (milliseconds) */
tc3_oversubscription_weight 1 /* Weight for traffic class 3 oversubscription */
tc0_wrr_weights 1 1 1 1       /* Pipe queue WRR weights for traffic class 0 */
tc1_wrr_weights 1 1 1 1       /* Pipe queue WRR weights for traffic class 1 */
tc2_wrr_weights 1 1 1 1       /* Pipe queue WRR weights for traffic class 2 */
tc3_wrr_weights 1 1 1 1       /* Pipe queue WRR weights for traffic class 3 */
```
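
The same pipe profile expressed against the DPDK pipe parameter structure
(again a sketch for the classic 4x4 queue layout; the tc_ov_weight field is
only present when the library is built with oversubscription support):

```c
#include <rte_sched.h>

static struct rte_sched_pipe_params pipe_profile_0 = {
  .tb_rate = 305175,                /* ~ port rate / 4096 pipes */
  .tb_size = 1000000,
  .tc_rate = { 305175, 305175, 305175, 305175 },
  .tc_period = 40,                  /* milliseconds */
#ifdef RTE_SCHED_SUBPORT_TC_OV
  .tc_ov_weight = 1,                /* traffic class 3 oversubscription weight */
#endif
  /* 16 queues: 4 traffic classes x 4 queues, WRR weights 1:1:1:1 */
  .wrr_weights = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },
};
```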
#### Random Early Detection (RED) parameters per traffic class and color (Green / Yellow / Red)
```
tc0_wred_min 48 40 32      /* Minimum threshold for traffic class 0 queue (min_th) in number of packets */
tc0_wred_max 64 64 64      /* Maximum threshold for traffic class 0 queue (max_th) in number of packets */
tc0_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 0 queue (maxp = 1 / maxp_inv) */
tc0_wred_weight 9 9 9      /* Traffic class 0 queue weight */
tc1_wred_min 48 40 32      /* Minimum threshold for traffic class 1 queue (min_th) in number of packets */
tc1_wred_max 64 64 64      /* Maximum threshold for traffic class 1 queue (max_th) in number of packets */
tc1_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 1 queue (maxp = 1 / maxp_inv) */
tc1_wred_weight 9 9 9      /* Traffic class 1 queue weight */
tc2_wred_min 48 40 32      /* Minimum threshold for traffic class 2 queue (min_th) in number of packets */
tc2_wred_max 64 64 64      /* Maximum threshold for traffic class 2 queue (max_th) in number of packets */
tc2_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 2 queue (maxp = 1 / maxp_inv) */
tc2_wred_weight 9 9 9      /* Traffic class 2 queue weight */
tc3_wred_min 48 40 32      /* Minimum threshold for traffic class 3 queue (min_th) in number of packets */
tc3_wred_max 64 64 64      /* Maximum threshold for traffic class 3 queue (max_th) in number of packets */
tc3_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 3 queue (maxp = 1 / maxp_inv) */
tc3_wred_weight 9 9 9      /* Traffic class 3 queue weight */
```
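
For orientation, DPDK's RED implementation follows the classic RED model, so
the values above behave roughly as follows (standard RED equations, with the
three columns giving the Green/Yellow/Red thresholds; this summary is an
interpretation, not taken from the VPP source):

```
avg    <-  avg + 2^(-wred_weight) * (q - avg)      /* EWMA of queue size; weight 9 => 1/512 */

p_drop  =  0                                        if avg <  min_th
p_drop  =  (avg - min_th) / (max_th - min_th)
           * (1 / wred_inv_prob)                    if min_th <= avg < max_th
p_drop  =  1                                        if avg >= max_th
```

With the defaults, a green packet in any traffic class starts to be dropped
once the average queue size exceeds 48 packets, with the drop probability
rising towards 1/10 as the average approaches 64.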
### DPDK QoS Scheduler Integration in VPP
The Hierarchical Quality-of-Service (HQoS) scheduler object can be seen as
part of the logical NIC output interface. To enable HQoS on a specific output
interface, the VPP startup.conf file has to be configured accordingly. The
output interface that requires HQoS should have the "hqos" parameter
specified in the dpdk section. Another optional parameter, "hqos-thread", can
be used to associate the output interface with a specific HQoS thread. In the
cpu section of the config file, "corelist-hqos-threads" is used to assign
logical CPU cores to run the HQoS threads. An HQoS thread can run multiple
HQoS objects, each associated with a different output interface. Instead of
writing packets to the NIC TX queue directly, all worker threads write them
to software queues. The HQoS threads read the software queues, enqueue the
packets into the HQoS objects, and dequeue packets from the HQoS objects to
write them to the NIC output interfaces. The worker threads need to be able
to send packets to any output interface; therefore, each HQoS object
associated with a NIC output interface has as many software queues as there
are worker threads.
The following illustrates a sample startup configuration file with 4x worker
threads feeding 2x hqos threads, each handling the QoS scheduler for 1x
output interface. The device stanzas are placeholders; substitute the PCI
addresses of the interfaces that require HQoS:

```
dpdk {
  socket-mem 16384,16384
  dev <pci-address> {
    hqos
  }
  dev <pci-address> {
    hqos
  }
}

cpu {
  corelist-workers 1, 2, 3, 4
  corelist-hqos-threads 5, 6
}
```
### QoS scheduler CLI Commands
Each QoS scheduler instance is initialised with the default parameters
required to configure the hqos port, subport, pipe and queues. Some of the
parameters can be reconfigured at run time through CLI commands.
The following commands can be used to configure QoS scheduler parameters.
The command below can be used to set the subport level parameters such as
token bucket rate (bytes per second), token bucket size (bytes), traffic
class rates (bytes per second) and token update period (milliseconds).
```
set dpdk interface hqos subport <interface> subport <subport_id> [rate <n>]
    [bktsize <n>] [tc0 <n>] [tc1 <n>] [tc2 <n>] [tc3 <n>] [period <n>]
```
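
For example, a hypothetical invocation restating the default subport
configuration shown earlier:

```
vpp# set dpdk interface hqos subport TenGigabitEthernet2/0/0 subport 0 rate 1250000000 bktsize 1000000 tc0 1250000000 tc1 1250000000 tc2 1250000000 tc3 1250000000 period 10
```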
For setting the pipe profile, the following command can be used.
```
set dpdk interface hqos pipe <interface> subport <subport_id> pipe <pipe_id>
    profile <profile_id>
```
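
For example, a hypothetical invocation assigning pipe profile 0 to pipe 2 of
subport 0:

```
vpp# set dpdk interface hqos pipe TenGigabitEthernet2/0/0 subport 0 pipe 2 profile 0
```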
To assign a QoS scheduler instance to a specific thread, the following
command can be used.
```
set dpdk interface hqos placement <interface> thread <n>
```
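
For example, a hypothetical invocation placing the interface on the HQoS
thread running on lcore 5:

```
vpp# set dpdk interface hqos placement TenGigabitEthernet2/0/0 thread 5
```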
The command below is used to set the packet fields required for classifying
the incoming packet. As a result of the classification process, the packet
field information will be mapped to a 5-tuple (subport, pipe, traffic class,
queue within traffic class, color) and stored in the packet mbuf.
```
set dpdk interface hqos pktfield <interface> id subport|pipe|tc offset <n>
    mask <hex-string>
```
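
For example, a hypothetical invocation matching the tc field layout shown by
the `show dpdk interface hqos` output below (slab position 8, bitmask
0x00000000000000fc):

```
vpp# set dpdk interface hqos pktfield TenGigabitEthernet2/0/0 id tc offset 8 mask 0x00000000000000fc
```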
The DSCP table entries used for identifying the traffic class and queue can
be set using the command below.
```
set dpdk interface hqos tctbl <interface> entry <map_val> tc <tc_id> queue <queue_id>
```
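
For example, a hypothetical invocation mapping table entry 0 to traffic
class 0, queue 0:

```
vpp# set dpdk interface hqos tctbl TenGigabitEthernet2/0/0 entry 0 tc 0 queue 0
```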
The QoS scheduler configuration can be displayed using the command below.
```
vpp# show dpdk interface hqos TenGigabitEthernet2/0/0
 Thread:
   Input SWQ size = 4096 packets
   Enqueue burst size = 256 packets
   Dequeue burst size = 220 packets
   Packet field 0: slab position = 0, slab bitmask = 0x0000000000000000 (subport)
   Packet field 1: slab position = 40, slab bitmask = 0x0000000fff000000 (pipe)
   Packet field 2: slab position = 8, slab bitmask = 0x00000000000000fc (tc)
   Packet field 2 tc translation table: ([Mapped Value Range]: tc/queue tc/queue ...)
     [ 0 .. 15]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3
     [16 .. 31]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3
     [32 .. 47]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3
     [48 .. 63]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3
 Port:
   Rate = 1250000000 bytes/second
   Frame overhead = 24 bytes
   Number of subports = 1
   Number of pipes per subport = 4096
   Packet queue size: TC0 = 64, TC1 = 64, TC2 = 64, TC3 = 64 packets
   Number of pipe profiles = 1
 Subport 0:
   Rate = 120000000 bytes/second
   Token bucket size = 1000000 bytes
   Traffic class rate: TC0 = 120000000, TC1 = 120000000, TC2 = 120000000, TC3 = 120000000 bytes/second
   TC period = 10 milliseconds
 Pipe profile 0:
   Rate = 305175 bytes/second
   Token bucket size = 1000000 bytes
   Traffic class rate: TC0 = 305175, TC1 = 305175, TC2 = 305175, TC3 = 305175 bytes/second
   TC period = 40 milliseconds
   TC0 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1
   TC1 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1
   TC2 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1
   TC3 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1
```
The QoS scheduler placement over the logical CPU cores can be displayed using
the command below.
```
vpp# show dpdk interface hqos placement
Thread 5 (vpp_hqos-threads_0 at lcore 5):
  TenGigabitEthernet2/0/0 queue 0
Thread 6 (vpp_hqos-threads_1 at lcore 6):
  TenGigabitEthernet4/0/1 queue 0
```
### QoS Scheduler Binary APIs
This section explains the available binary APIs for configuring QoS scheduler
parameters at run time.
The following API can be used to set the pipe profile of a pipe that belongs
to the QoS scheduler of a given output interface.
```
sw_interface_set_dpdk_hqos_pipe rx <intfc> | sw_if_index <id>
    subport <subport-id> pipe <pipe-id> profile <profile-id>
```
The data structures used to set the pipe profile parameters are as follows.
```
/** \brief DPDK interface HQoS pipe profile set request
    @param client_index - opaque cookie to identify the sender
    @param context - sender context, to match reply w/ request
    @param sw_if_index - the interface
    @param subport - subport ID
    @param pipe - pipe ID within its subport
    @param profile - pipe profile ID
*/
define sw_interface_set_dpdk_hqos_pipe {
  u32 client_index;
  u32 context;
  u32 sw_if_index;
  u32 subport;
  u32 pipe;
  u32 profile;
};

/** \brief DPDK interface HQoS pipe profile set reply
    @param context - sender context, to match reply w/ request
    @param retval - request return code
*/
define sw_interface_set_dpdk_hqos_pipe_reply {
  u32 context;
  i32 retval;
};
```
The following API can be used to set the subport level parameters, for
example token bucket rate (bytes per second), token bucket size (bytes),
traffic class rate (bytes per second) and tokens update period.
```
sw_interface_set_dpdk_hqos_subport rx <intfc> | sw_if_index <id>
    subport <subport-id> [rate <n>] [bktsize <n>]
    [tc0 <n>] [tc1 <n>] [tc2 <n>] [tc3 <n>] [period <n>]
```
The data structures used to set the subport level parameters are as follows.
```
/** \brief DPDK interface HQoS subport parameters set request
    @param client_index - opaque cookie to identify the sender
    @param context - sender context, to match reply w/ request
    @param sw_if_index - the interface
    @param subport - subport ID
    @param tb_rate - subport token bucket rate (measured in bytes/second)
    @param tb_size - subport token bucket size (measured in credits)
    @param tc_rate - subport traffic class 0 .. 3 rates (measured in bytes/second)
    @param tc_period - enforcement period for rates (measured in milliseconds)
*/
define sw_interface_set_dpdk_hqos_subport {
  u32 client_index;
  u32 context;
  u32 sw_if_index;
  u32 subport;
  u32 tb_rate;
  u32 tb_size;
  u32 tc_rate[4];
  u32 tc_period;
};

/** \brief DPDK interface HQoS subport parameters set reply
    @param context - sender context, to match reply w/ request
    @param retval - request return code
*/
define sw_interface_set_dpdk_hqos_subport_reply {
  u32 context;
  i32 retval;
};
```
The following API can be used to set a DSCP table entry. The DSCP table has
64 entries that map the packet DSCP field onto a traffic class and HQoS
input queue.
```
sw_interface_set_dpdk_hqos_tctbl rx <intfc> | sw_if_index <id>
    entry <n> tc <n> queue <n>
```
The data structures used for setting DSCP table entries are given below.
```
/** \brief DPDK interface HQoS tctbl entry set request
    @param client_index - opaque cookie to identify the sender
    @param context - sender context, to match reply w/ request
    @param sw_if_index - the interface
    @param entry - entry index ID
    @param tc - traffic class (0 .. 3)
    @param queue - traffic class queue (0 .. 3)
*/
define sw_interface_set_dpdk_hqos_tctbl {
  u32 client_index;
  u32 context;
  u32 sw_if_index;
  u32 entry;
  u32 tc;
  u32 queue;
};

/** \brief DPDK interface HQoS tctbl entry set reply
    @param context - sender context, to match reply w/ request
    @param retval - request return code
*/
define sw_interface_set_dpdk_hqos_tctbl_reply {
  u32 context;
  i32 retval;
};
```