*Limitations*::
-* The feature supports 2 packet types:
+* The feature supports two packet L3 header types:
** IPv4 over Ethernet
-** IPv4 with one VLAN tag
+** IPv4 with one VLAN tag (except 82599 which does not support this type of packet)
* Packets must contain at least 16 bytes payload.
* Each stream must have a unique pg_id. This also means that a given "latency collecting" stream cannot be transmitted from two interfaces in parallel (internally, that would create two streams with the same pg_id).
* Latency information may be collected on at most 128 concurrent streams (each with a different pg_id). This is in addition to the streams which collect per-stream statistics.
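The pg_id constraints above can be checked before traffic is started. The helper below is a hypothetical sketch in plain Python (it is not part of the TRex API); it only illustrates the uniqueness and count limits listed above:

[source,python]
----
# Hypothetical helper - not part of the TRex API.
# Enforces the limitations listed above: pg_id values must be unique,
# and at most 128 latency streams may run concurrently.
MAX_LATENCY_STREAMS = 128

def validate_pg_ids(pg_ids):
    if len(set(pg_ids)) != len(pg_ids):
        raise ValueError("each latency stream needs a unique pg_id")
    if len(pg_ids) > MAX_LATENCY_STREAMS:
        raise ValueError("at most %d latency streams allowed" % MAX_LATENCY_STREAMS)
    return True

validate_pg_ids([7, 12])   # two distinct pg_ids - OK
----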
+[NOTE]
+=====================================================================
+IPv6 support is work in progress; IPv6.flow_id will serve as the ID of the rule.
+=====================================================================
+
Two examples follow, one using the console and the other using the Python API.
*Console*::
----
trex>start -f stl/flow_stats.py --port 0
-Removing all streams from port(s) [0]: [SUCCESS]
-
-
-Attaching 2 streams to port(s) [0]: [SUCCESS]
-
-
-Starting traffic on port(s) [0]: [SUCCESS]
-
-155.81 [ms]
-
trex>tui
Latency Statistics (usec)
- PG ID | 12 | 13
+ PG ID | 7 | 12
----------------------------------------------
Max latency | 0 | 0 #<1>
Avg latency | 5 | 5 #<2>
<4> Jitter of latency measurements.
<5> Indication of the number of errors (packets lost/out of order/duplicated) that occurred. In the future it will be possible to 'zoom in' to see specific counters.
For now, if you need to see specific counters, you can use the Python API.
+
+
+An example of API usage follows.
+
+*Example File*:: link:{github_stl_examples_path}/stl_flow_latency_stats.py[stl_flow_latency_stats.py]
+
+[source,python]
+----
+
+ stats = c.get_stats()
+
+ flow_stats = stats['flow_stats'].get(5)
+ lat_stats = stats['latency'].get(5) <1>
+
+
+ tx_pkts = flow_stats['tx_pkts'].get(tx_port, 0)
+ tx_bytes = flow_stats['tx_bytes'].get(tx_port, 0)
+ rx_pkts = flow_stats['rx_pkts'].get(rx_port, 0)
+ drops = lat_stats['err_cntrs']['dropped']
+ ooo = lat_stats['err_cntrs']['out_of_order']
+ dup = lat_stats['err_cntrs']['dup']
+ sth = lat_stats['err_cntrs']['seq_too_high']
+ stl = lat_stats['err_cntrs']['seq_too_low']
+ lat = lat_stats['latency']
+ jitter = lat['jitter']
+ avg = lat['average']
+ tot_max = lat['total_max']
+ last_max = lat['last_max']
+    hist = lat['histogram']
+
+ # lat_stats will be in this format
+
+    lat_stats == {
+ 'err_cntrs':{ # error counters
+ u'dup':0, # The same seq number was received
+            u'out_of_order':0,  # seq out of order - gap is higher than the cyclic window of 1000
+ u'seq_too_high':0, # seq number too high
+ u'seq_too_low':0, # seq number too low
+ u'dropped':0 # Estimate of number of packets that were dropped (using seq number)
+ },
+ 'latency':{
+ 'jitter':0, # in usec
+            'average':15.2,     # average latency, usec
+ 'last_max':0, # last 1 sec window maximum latency
+ 'total_max':44, # maximum latency
+ 'histogram':[ # histogram of latency
+ {
+                    u'key':20,      # bucket start latency in usec (bucket covers 20-30)
+                    u'val':489342   # number of samples that hit this bucket range
+ },
+ {
+ u'key':30,
+ u'val':10512
+ },
+ {
+ u'key':40,
+ u'val':146
+ },
+ {
+ 'key':0,
+ 'val':0
+ }
+ ]
+ }
+ },
+
+
+----
+<1> Get the Latency dictionary
+
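The dictionary layout shown above can be post-processed with plain Python. The helper below is a hypothetical example (its name is illustrative, not part of the TRex API) that condenses a lat_stats dictionary into a few headline numbers:

[source,python]
----
# Hypothetical post-processing helper - not part of the TRex API.
def summarize_latency(lat_stats):
    lat = lat_stats['latency']
    samples = sum(b['val'] for b in lat['histogram'])   # samples in all buckets
    errors = sum(lat_stats['err_cntrs'].values())       # all error counters summed
    return {'samples': samples, 'errors': errors,
            'avg_usec': lat['average'], 'max_usec': lat['total_max']}

# Using the example values documented above:
example = {
    'err_cntrs': {'dup': 0, 'out_of_order': 0, 'seq_too_high': 0,
                  'seq_too_low': 0, 'dropped': 0},
    'latency': {'jitter': 0, 'average': 15.2, 'last_max': 0, 'total_max': 44,
                'histogram': [{'key': 20, 'val': 489342},
                              {'key': 30, 'val': 10512},
                              {'key': 40, 'val': 146}]},
}
print(summarize_latency(example))
----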
+TBD: the Python API documentation needs to be updated with this information.
==== Tutorial: HLT traffic profile