Traditionally, routers have been tested using commercial traffic generators, while performance
typically has been measured using packets per second (PPS) metrics. As router functionality and
-services have become more complex, stateful traffic generators have become necessary to
-provide more realistic application traffic scenarios.
+services have become more complex, stateful traffic generators are now needed to provide more realistic traffic scenarios.
Advantages of realistic traffic generators:
-* Accurate performance metrics
-* Discovering bottlenecks in realistic traffic scenarios
+* Accurate performance metrics.
+* Discovering bottlenecks in realistic traffic scenarios.
==== Current Challenges:
-* *Cost*: Commercial stateful traffic generators are expensive
-* *Scale*: Bandwidth does not scale up well with feature complexity
-* *Standardization*: Lack of standardization of traffic patterns and methodologies
-* *Flexibility*: Commercial tools do not allow agility when flexibility and changes are needed
+* *Cost*: Commercial stateful traffic generators are very expensive.
+* *Scale*: Bandwidth does not scale up well with feature complexity.
+* *Standardization*: Lack of standardization of traffic patterns and methodologies.
+* *Flexibility*: Commercial tools do not allow agility when flexibility and changes are needed.
==== Implications
* Stateful traffic generator based on pre-processing and smart replay of real traffic templates.
* Generates and *amplifies* both client and server side traffic.
* Customized functionality can be added.
-* Scales to 200Gb/sec for one UCS (using Intel 40Gb/sec NICs)
-* Low cost
-* Self-contained package that can be easily installed and deployed
+* Scales to 200Gb/sec for one UCS (using Intel 40Gb/sec NICs).
+* Low cost.
+* Self-contained package that can be easily installed and deployed.
* Virtual interface support enables TRex to be used in a fully virtual environment without physical NICs. Example use cases:
** Amazon AWS
** Cisco LaaS
=== Obtaining the TRex package
-Connect by `ssh` to the TRex machine and execute the commands described below.
+Connect using `ssh` to the TRex machine and execute the commands described below.
NOTE: Prerequisite: *$WEB_URL* is *{web_server_url}* or *{local_web_server_url}* (Cisco internal)
<1> X.XX = Version number
-=== Running TRex for the first time in loopback
+== First time running
-Before jumping to check the DUT, you could verify TRex and NICs working in loopback. +
-For performance-wise, it's better to connect interfaces on the same NUMA (controlled by one physical processor) +
-However, if you have a 10Gb/sec interfaces (based on Intel 520-D2 NICs), and you connect ports that are on the same NIC to each other with SFP+, it might not sync. +
-We have checked many SFP+ (Intel/Cisco/SR/LR) and had link. +
-If you are still facing this issue you could either try to connect interfaces of different NICs or use link:http://www.fiberopticshare.com/tag/cisco-10g-twinax[Cisco twinax copper cable].
+=== Configuring for loopback
+
+Before connecting TRex to your DUT, it is strongly advised to verify that TRex and the NICs work correctly in loopback. +
+To get best performance, it is advised to loopback interfaces on the same NUMA (controlled by the same physical processor). If you do not know how to check this, you can ignore this advice for now. +
+
+[NOTE]
+=====================================================================
+If you are using a 10Gb/sec NIC based on the Intel 520-D2, and you loopback ports on the same NIC using SFP+, the link might not sync, and you will fail to get link up. +
+We checked many types of SFP+ (Intel/Cisco/SR/LR) and it worked for us. +
+If you still encounter link issues, you can either try to loopback interfaces from different NICs, or use link:http://www.fiberopticshare.com/tag/cisco-10g-twinax[Cisco twinax copper cable].
+=====================================================================
.Loopback example
image:images/loopback_example.png[title="Loopback example"]
-If you have a 1Gb/Sec Intel NIC (I350) or XL710/X710 NIC, you can connect any port to any port from loopback perspective *but* first filter the management port - see xref:trex_config[TRex Configuration].
-
==== Identify the ports
[source,bash]
----
- $>sudo ./dpdk_setup_ports.py --s
+ $>sudo ./dpdk_setup_ports.py -s
Network devices using DPDK-compatible driver
============================================
Network devices using kernel driver
===================================
- 0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active*
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<1>
- 0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<2>
- 0000:13:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<3>
- 0000:13:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<4>
-
+ 0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
+ 0000:13:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
+ 0000:13:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
+ 0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active* #<2>
Other network devices
=====================
<none>
----
-<1> TRex interface #1 before unbinding
-<2> TRex interface #2 before unbinding
-<3> TRex interface #3 before unbinding
-<4> TRex interface #4 before unbinding
-Choose a port to use and follow instructions in the next section to create a configuration file.
+<1> If you did not run any DPDK application, you will see a list of interfaces bound to the kernel, or not bound at all.
+<2> The interface marked as 'active' is the one used by your ssh connection. *Never* put it in the TRex config file.
-==== Create minimum configuration file
+Choose the ports to use and follow the instructions in the next section to create a configuration file.
-Create a configuration file: `/etc/trex_cfg.yaml`.
+==== Creating minimum configuration file
-You can copy a basic configuration file from cfg folder by running this command...
+The default configuration file name is `/etc/trex_cfg.yaml`.
+
+You can copy a basic configuration file from the cfg folder:
[source,bash]
----
$cp cfg/simple_cfg.yaml /etc/trex_cfg.yaml
----
-...and edit the configuration file with the desired values.
+Then edit the configuration file and fill in your interface and IP address details.
Example:
[source,bash]
----
<none>
-- port_limit : 4 #<1>
- version : 2 #<2>
- interfaces : ["03:00.0","03:00.1","13:00.1","13:00.0"] #<3>
+- port_limit : 2
+ version : 2
+#List of interfaces. Change to suit your setup. Use ./dpdk_setup_ports.py -s to see available options
+interfaces : ["03:00.0", "03:00.1"] #<1>
+ port_info : # Port IPs. Change to suit your needs. In case of loopback, you can leave as is.
+ - ip : 1.1.1.1
+ default_gw : 2.2.2.2
+ - ip : 2.2.2.2
+ default_gw : 1.1.1.1
----
-<1> Mumber of ports
-<2> Must add version 2 to the configuration file
-<3> List of interfaces displayed by `#>sudo ./dpdk_setup_ports.py -s`
+<1> You need to edit this line to match the interfaces you are using.
+Notice that all NICs you are using should have the same type. You cannot mix different NIC types in one config file. For more info, see link:http://trex-tgn.cisco.com/youtrack/issue/trex-201[trex-201].
-When working with a VM, set the destination MAC of one port as the source or the other for loopback the port in the vSwitch
-and you should take the right value from the hypervisor (in case of a physical NIC you can set the MAC address with virtual you can't and you should take it from the hypervisor)
-and example
+You can find the full list of configuration file options xref:trex_config[here].
-// Clarify paragraph above.
+=== Script for creating config file
-[source,python]
-----
- - port_limit : 2
- version : 2
- interfaces : ["03:00.0","03:00.1"] <2>
- port_info : # set eh mac addr
- - dest_mac : [0x1,0x0,0x0,0x1,0x0,0x00] # port 0
- src_mac : [0x2,0x0,0x0,0x2,0x0,0x00] <1>
- - dest_mac : [0x2,0x0,0x0,0x2,0x0,0x00] # port 1 <1>
- src_mac : [0x1,0x0,0x0,0x1,0x0,0x00]
-----
-<1> Source MAC is like destination MAC (this should be set or taken from VMware). The MAC was taken from the hypervisor.
-<2> Currently TRex supports only one type of NIC at a time. You cannot mix different NIC types in one config file. For more info, see link:http://trex-tgn.cisco.com/youtrack/issue/trex-197[trex-201].
+To help you start with a basic configuration file that suits your needs, there is a script that can automate this process.
+The script helps you get started; you can then edit the file and add advanced options from xref:trex_config[here]
+if needed. +
+There are two ways to run the script: interactively (the script will prompt you for parameters), or by providing all parameters
+using command line options.
-// where can we describe this limitation (TRex supports only one type of NIC at a time. You cannot mix different NIC types in one config file.) and other limitations?
-
-==== Script for creating config file
-
-===== Interactive mode
+==== Interactive mode
[source,bash]
----
sudo ./dpdk_setup_ports.py -i
----
-Will be printed table with all interfaces and related information. +
-Then, user is asked to provide desired interfaces, MAC destinations etc.
+You will see a list of available interfaces with their related information. +
+Just follow the instructions to get a basic config file.
-===== Specifying input arguments from CLI
+==== Specifying input arguments using command line options
-Another option is to run script with all the arguments given directly from CLI. +
-Run this command to see list of all interfaces and related information:
+First, run this command to see the list of all interfaces and their related information:
[source,bash]
----
sudo ./dpdk_setup_ports.py -t
----
-* In case of *Loopback* and/or only *L1-L2 Switches* on the way, no need to provide destination MACs. +
-Will be assumed connection 0↔1, 2↔3 etc. +
+* In case of *Loopback* and/or only *L1-L2 Switches* on the way, you do not need to provide IPs or destination MACs. +
+The script will assume the following interface connections: 0↔1, 2↔3 etc. +
Just run:
[source,bash]
----
sudo ./dpdk_setup_ports.py -c <TRex interface 0> <TRex interface 1> ...
----
-* In case of *Router* (or other next hop device, such as *L3 Switch*), should be specified MACs of router interfaces as destination.
-
-[source,bash]
-----
-sudo ./dpdk_setup_ports.py -c <TRex interface 0> <TRex interface 1> ... --dest-macs <Router interface 0 MAC> <Router interface 1 MAC> ...
-----
+* In case of *Router* (or other next hop device, such as *L3 Switch*), you should specify the TRex IPs and default gateways, or
+MACs of the router as described below.
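For example (a sketch; substitute your own interfaces and your router's addresses):
[source,bash]
----
sudo ./dpdk_setup_ports.py -c 03:00.0 03:00.1 --ip 1.1.1.1 2.2.2.2 --def-gw 1.1.1.2 2.2.2.1 -o /etc/trex_cfg.yaml
----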
.Additional arguments to creating script (dpdk_setup_ports.py -c)
[options="header",cols="2,5,3",width="100%"]
|=================
| Argument | Description | Example
| -c | Create a configuration file by specified interfaces (PCI address or Linux names: eth1 etc.) | -c 03:00.1 eth1 eth4 84:00.0
| --dump | Dump created config to screen. |
| -o | Output the config to this file. | -o /etc/trex_cfg.yaml
-| --dest-macs | Destination MACs to be used in created yaml file per each interface. Without specifying the option, will be assumed loopback (0↔1, 2↔3 etc.) | --dest-macs 11:11:11:11:11:11 22:22:22:22:22:22
+| --dest-macs | Destination MACs to be used per each interface. Specify this option if you want a MAC based config instead of an IP based one. You must not set it together with --ip and --def-gw | --dest-macs 11:11:11:11:11:11 22:22:22:22:22:22
+| --ip | List of IPs to use for each interface. If neither this option nor --dest-macs is specified, the script assumes loopback connections (0↔1, 2↔3 etc.) | --ip 1.2.3.4 5.6.7.8
+|--def-gw | List of default gateways to use for each interface. If --ip is given, you must provide --def-gw as well | --def-gw 3.4.5.6 7.8.9.10
| --ci | Cores include: White list of cores to use. Make sure there is enough for each NUMA. | --ci 0 2 4 5 6
| --ce | Cores exclude: Black list of cores to exclude. Make sure there will be enough for each NUMA. | --ce 10 11 12
| --no-ht | No HyperThreading: Use only one thread of each Core in created config yaml. |
| --ignore-numa | Advanced option: Ignore NUMAs for config creation. Use this option only if you have to, as it might reduce performance. For example, if you have pair of interfaces at different NUMAs |
|=================
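For example, a MAC-based config (for a setup where the next-hop MACs are already known) can be created and inspected like this (the addresses here are illustrative):
[source,bash]
----
sudo ./dpdk_setup_ports.py -c 03:00.0 03:00.1 --dest-macs 11:11:11:11:11:11 22:22:22:22:22:22 --dump -o /etc/trex_cfg.yaml
----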
-==== Run TRex
+=== Configuring ESXi for running TRex
+
+To get best performance, it is advised to run TRex on bare metal hardware, and not use any kind of VM.
+Bandwidth on VM might be limited, and IPv6 might not be fully supported.
+Having said that, there are sometimes benefits to running on a VM. +
+These include: +
+ * Virtual NICs can be used to bridge between TRex and NICs not supported by TRex. +
+ * Cases where you already have a VM installed and do not require high performance. +
+
+1. Click the host machine, enter Configuration -> Networking.
+
+a. One of the NICs should be connected to the main vSwitch network to get an "outside" connection, for the TRex client and ssh: +
+image:images/vSwitch_main.png[title="vSwitch_main"]
+
+b. Other NICs that are used for TRex traffic should each be placed in a distinct vSwitch: +
+image:images/vSwitch_loopback.png[title="vSwitch_loopback"]
+
+2. Right-click guest machine -> Edit settings -> Ensure the NICs are set to their networks: +
+image:images/vSwitch_networks.png[title="vSwitch_networks"]
+
+[NOTE]
+=====================================================================
+Before version 2.10, the following command did not function as expected:
+[subs="quotes"]
+....
+sudo ./t-rex-64 -f cap2/dns.yaml *--lm 1 --lo* -l 1000 -d 100
+....
+The vSwitch did not "know" where to route the packet. This was solved in version 2.10, when TRex started to support ARP.
+=====================================================================
+
+* Pass-through allows the VM to use the host machine's NICs directly. It has no limitations except those of the NIC/hardware itself. The only difference from a bare-metal OS is occasional latency spikes (~10ms). Passthrough settings cannot be saved to an OVA.
+
+1. Click on the host machine. Enter Configuration -> Advanced settings -> Edit. Mark the desired NICs. Reboot the ESXi to apply. +
+image:images/passthrough_marking.png[title="passthrough_marking"]
+
+2. Right click on guest machine. Edit settings -> Add -> *PCI device* -> Choose the NICs one by one. +
+image:images/passthrough_adding.png[title="passthrough_adding"]
+
+=== Configuring for running with router (or other L3 device) as DUT
+
+You can follow link:trex_config_guide.html[this] presentation for an example of how to configure a router as the DUT.
+
+=== Running TRex
-Use the following command to begin operation of a 4x 10Gb/sec TRex:
+When all is set, use the following command to start a basic TRex run for 10 seconds
+(it will use the default config file name /etc/trex_cfg.yaml):
[source,bash]
----
-$sudo ./t-rex-64 -f cap2/dns.yaml -c 4 -m 1 -d 100 -l 1000
+$sudo ./t-rex-64 -f cap2/dns.yaml -c 4 -m 1 -d 10 -l 1000
----
-NOTE: For a 10Gb/sec TRex with 2, 6, or 8 ports, add `--limit-ports [number of ports]` *or* follow xref:trex_config[these instructions] to configure TRex.
-
If successful, the output will be similar to the following:
[source,python]
----
-$ sudo ./t-rex-64 -f cap2/dns.yaml -d 100 -l 1000
-Starting TRex 1.50 please wait ...
+$ sudo ./t-rex-64 -f cap2/dns.yaml -d 10 -l 1000
+Starting TRex 2.09 please wait ...
zmq publisher at: tcp://*:4500
- number of ports founded : 4
+ number of ports found : 4
port : 0
------------
link : link : Link Up - speed 10000 Mbps - full-duplex <1>
Tx Bw | 217.09 Kbps | 217.14 Kbps | 216.83 Kbps | 216.83 Kbps
-Global stats enabled
- Cpu Utilization : 0.0 % <12> 29.7 Gb/core <13>
+ Cpu Utilization : 0.0 % <2> 29.7 Gb/core <3>
Platform_factor : 1.0
- Total-Tx : 867.89 Kbps <2>
- Total-Rx : 867.86 Kbps <3>
+ Total-Tx : 867.89 Kbps <4>
+ Total-Rx : 867.86 Kbps <5>
Total-PPS : 1.64 Kpps
Total-CPS : 0.50 cps
- Expected-PPS : 2.00 pps <9>
- Expected-CPS : 1.00 cps <10>
- Expected-BPS : 1.36 Kbps <11>
+ Expected-PPS : 2.00 pps <6>
+ Expected-CPS : 1.00 cps <7>
+ Expected-BPS : 1.36 Kbps <8>
- Active-flows : 0 <6> Clients : 510 Socket-util : 0.0000 %
- Open-flows : 1 <7> Servers : 254 Socket : 1 Socket/Clients : 0.0
- drop-rate : 0.00 bps <8>
+ Active-flows : 0 <9> Clients : 510 Socket-util : 0.0000 %
+ Open-flows : 1 <10> Servers : 254 Socket : 1 Socket/Clients : 0.0
+ drop-rate : 0.00 bps <11>
current time : 5.3 sec
test duration : 94.7 sec
-Latency stats enabled
- Cpu Utilization : 0.2 % <14>
+ Cpu Utilization : 0.2 % <12>
if| tx_ok , rx_ok , rx ,error, average , max , Jitter , max window
| , , check, , latency(usec),latency (usec) ,(usec) ,
--------------------------------------------------------------------------------------------------
- 0 | 1002, 1002, 0, 0, 51 , 69, 0 | 0 69 67 <4>
- 1 | 1002, 1002, 0, 0, 53 , 196, 0 | 0 196 53 <5>
+ 0 | 1002, 1002, 0, 0, 51 , 69, 0 | 0 69 67 <13>
+ 1 | 1002, 1002, 0, 0, 53 , 196, 0 | 0 196 53
2 | 1002, 1002, 0, 0, 54 , 71, 0 | 0 71 69
3 | 1002, 1002, 0, 0, 53 , 193, 0 | 0 193 52
----
<1> Link must be up for TRex to work.
-<2> Total Rx must be the same as Tx
-<3> Total Rx must be the same as Tx
-<4> Tx_ok == Rx_ok
-<5> Tx_ok == Rx_ok
-<6> Number of TRex active "flows". Could be different than the number of router flows, due to aging issues. Usualy the TRex number of active flows is much lower than that of the router.
-<7> Total number of TRex flows opened since startup (including active ones, and ones already closed).
-<8> Drop rate.
-<9> Expected number of packets per second (calculated without latency packets).
-<10> Expected number of connections per second (calculated without latency packets).
-<11> Expected number of bits per second (calculated without latency packets).
-<12> Average CPU utilization of transmitters threads. For best results it should be lower than 80%.
-<13> Gb/sec generated per core of DP. Higher is better.
-<14> Rx and latency thread CPU utilization.
-
+<2> Average CPU utilization of transmitters threads. For best results it should be lower than 80%.
+<3> Gb/sec generated per core of DP. Higher is better.
+<4> Total Tx must be the same as Rx at the end of the run.
+<5> Total Rx must be the same as Tx at the end of the run.
+<6> Expected number of packets per second (calculated without latency packets).
+<7> Expected number of connections per second (calculated without latency packets).
+<8> Expected number of bits per second (calculated without latency packets).
+<9> Number of TRex active "flows". Could be different from the number of router flows, due to aging issues. Usually the TRex number of active flows is much lower than that of the router because the router ages flows more slowly.
+<10> Total number of TRex flows opened since startup (including active ones, and ones already closed).
+<11> Drop rate.
+<12> Rx and latency thread CPU utilization.
+<13> Tx_ok on port 0 should equal Rx_ok on port 1, and vice versa.
More statistics information:
*Socket/Clients*:: Average of active flows per client, calculated as active_flows/#clients.
-*Socket-util*:: Estimate of how many socket ports are used per client IP. This is approximately ~(100*active_flows/#clients)/64K, calculated as (average active flows per client*100/64K). Utilization of more than 50% means that TRex is generating too many flows per single client, and that more clients must be added.
+*Socket-util*:: Estimation of the number of L4 ports (sockets) used per client IP, expressed as a percentage of the 64K available ports. It is calculated as (100*active_flows/#clients)/64K, i.e. the average number of active flows per client divided by 64K. For example, 100,000 active flows spread over 510 clients is about 196 flows per client, giving a Socket-util of roughly 196*100/64K ≈ 0.3%. Utilization of more than 50% means that TRex is generating too many flows per single client, and more clients must be added in the generator config.
// clarify above, especially the formula
-*Max window*:: Momentary maximum latency for a time window of 500 msec. There are a few numbers per number of windows that are shown.
- The newest number (last 500msec) is on the right. Oldest in the left. This can help identifying spikes of high latency clearing after some time. Maximum latency is the total maximum over the entire test duration.
-//clarify above
+*Max window*:: Momentary maximum latency for a time window of 500 msec. There are a few numbers shown per port.
+ The newest number (last 500msec) is on the right. Oldest on the left. This can help identifying spikes of high latency clearing after some time. Maximum latency is the total maximum over the entire test duration. To best understand this,
+ run TRex with latency option (-l) and watch the results with this section in mind.
-*Platform_factor*:: There are cases in which we duplicate the traffic using splitter/switch and we would like all numbers displayed to be multiplied by this factor (e.g. x2)
-//clarify above
+*Platform_factor*:: There are cases in which we duplicate the traffic using a splitter/switch and we would like all numbers displayed by TRex to be multiplied by this factor, so that the TRex counters will match the DUT counters.
WARNING: If you don't see rx packets, revisit your MAC address configuration.
-//clarify above
-
-==== Running TRex for the first time with ESXi:
-
-* Virtual NICs can be used to bridge between TRex and non-supported NICs, or for basic testing. Bandwidth is limited by vSwitch, has IPv6 issues.
-// clarify, especially what IPv6 issues
-
-1. Click the host machine, enter Configuration -> Networking.
-
-a. One of the NICs should be connected to the main vSwitch network to get an "outside" connection, for the TRex client and ssh: +
-image:images/vSwitch_main.png[title="vSwitch_main"]
-
-b. Other NICs that are used for TRex traffic should be in distinguish vSwitch: +
-image:images/vSwitch_loopback.png[title="vSwitch_loopback"]
-
-2. Right-click guest machine -> Edit settings -> Ensure the NICs are set to their networks: +
-image:images/vSwitch_networks.png[title="vSwitch_networks"]
-
-
-[NOTE]
-=====================================================================
-Current limitation: The following command does not function as expected:
-[subs="quotes"]
-....
-sudo ./t-rex-64 -f cap2/dns.yaml *--lm 1 --lo* -l 1000 -d 100
-....
-The vSwitch does not "know" where to route the packet. This is expected to be fixed when TRex supports ARP.
-=====================================================================
-
-* Pass-through is the way to use directly the NICs from host machine inside the VM. Has no limitations except the NIC/hardware itself. The only difference via bare-metal OS is occasional spikes of latency (~10ms). Passthrough settings cannot be saved to OVA.
-
-1. Click on the host machine. Enter Configuration -> Advanced settings -> Edit. Mark the desired NICs. Reboot the ESXi to apply. +
-image:images/passthrough_marking.png[title="passthrough_marking"]
-
-2. Right click on guest machine. Edit settings -> Add -> *PCI device* -> Choose the NICs one by one. +
-image:images/passthrough_adding.png[title="passthrough_adding"]
-
-==== Running TRex for the first time with router
-
-You can follow this presentation: link:trex_config_guide.html[first time TRex configuration]
-or continue reading.
-Without config file, TRex sets source MAC of all ports to `00:00:00:01:00:00` and expects to receive packets with this destination MAC address.
-So, you just need to configure your router with static ARP entry pointing to the above MAC address.
-
-NOTE: Virtual routers on ESXi (for example, Cisco CSR1000v) must have distinct MAC address for each port. You need to specify the addresses in the configuration file. see more xref:trex_config[here]. Another example is TRex connected to a switch. In this case, each one of the TRex ports should have distinct MAC address.
include::trex_book_basic.asciidoc[]
- duration : 0.1
vlan : { enable : 1 , vlan0 : 100 , vlan1 : 200 } <1>
----
-<1> Enable VLAN feature, valn0==100 , valn1==200
+<1> Enable VLAN feature, vlan0==100 , vlan1==200
*Problem definition:*::
1. Support for pcap files containing IPv6 packets
2. Ability to generate IPv6 traffic from pcap files containing IPv4 packets
-The following switch enables this feature: `--ipv6`
-Two new keywords (`src_ipv6`, `dst_ipv6`) have been added to the YAML file to specify the most significant 96 bits of the IPv6 address - for example:
+The following command line option enables this feature: `--ipv6`
+The keywords (`src_ipv6` and `dst_ipv6`) specify the most significant 96 bits of the IPv6 address, given as six 16-bit words - for example (the values below are only illustrative; use a prefix that suits your setup):
[source,python]
----
     src_ipv6 : [0xFE80,0x0232,0x1002,0x0051,0x0000,0x0000]   # example prefix - replace with your own
     dst_ipv6 : [0x2001,0x0DB8,0x0003,0x0004,0x0000,0x0000]   # example prefix - replace with your own
----
The IPv6 address is formed by placing what would typically be the IPv4
-address into the least significant 32-bits and copying the value provided
-in the src_ipv6/dst_ipv6 keywords into the most signficant 96-bits.
-If src_ipv6 and dst_ipv6 are not specified in the YAML file, the default
-is to form IPv4-compatible addresses (where the most signifcant 96-bits
-are zero).
+address into the least significant 32 bits and copying the value provided
+in the src_ipv6/dst_ipv6 keywords into the most significant 96 bits.
+If src_ipv6 and dst_ipv6 are not specified, the default
+is to form IPv4-compatible addresses (most significant 96 bits are zero).
-There is a support for all plugins (control flows that needed to be changed).
+There is support for all plugins.
*Example:*::
[source,bash]
*Limitations:*::
-* TRex cannot generate both IPv4 and IPv6 traffic. The `--ipv6` switch must be specified even when using a pcap file containing only IPv6 packets.
+* TRex cannot generate both IPv4 and IPv6 traffic.
+* The `--ipv6` switch must be specified even when using pcap file containing only IPv6 packets.
*Router configuration:*::
<1> Enable IPv6
<2> Add pbr
<3> Enable IPv6 routing
-<4> MAC address setting should be like TRex
+<4> MAC address setting. Should be TRex MAC.
<5> PBR configuration
=== Client clustering configuration
-Trex supports testing a complex topology by a feature called "client clustering".
-This feature allows a more detailed clustering of clients.
+TRex supports testing complex topologies, using a feature called "client clustering".
+This feature allows more detailed clustering of clients.
-Let's assume the following topology:
+Let's look at the following topology:
image:images/client_clustering_topology.png[title="Client Clustering"]
We would like to configure two clusters and direct traffic to them.
-Using a config file, you can instruct TRex to generate clients
+Using a config file, you can instruct TRex to generate clients
with specific configuration per cluster.
-A cluster configuration includes:
+Cluster configuration includes:
-* IP start range
-* IP end range
-* Initator side configuration
-* Responder side configuration
+* IP start range.
+* IP end range.
+* Initiator side configuration.
+* Responder side configuration.
[NOTE]
-It is important to state that this is *complimentry* to the client generator
+It is important to understand that this is *complementary* to the client generator
configured per profile - it only defines how the generator will be clustered.
-Let's take a look at an example:
+Let's look at an example.
-We have a profile which defines a client generator:
+We have a profile defining a client generator.
[source,bash]
----
-$more cap2/dns.yaml
+$cat cap2/dns.yaml
- duration : 10.0
generator :
distribution : "seq"
dual_port_mask : "1.0.0.0"
tcp_aging : 1
udp_aging : 1
- mac : [0x00,0x00,0x00,0x01,0x00,0x00]
cap_info :
- name: cap2/dns.pcap
cps : 1.0
w : 1
----
-We would like to create two clusters of 4 devices each.
-We would also like to divide *80%* of the traffic to the upper cluster
-and *20%* to the lower cluster.
+We want to create two clusters with 4 devices each.
+We also want to divide *80%* of the traffic to the upper cluster and *20%* to the lower cluster.
-We create a cluster configuration file in YAML:
+We will create the following cluster configuration file.
[source,bash]
----
# Client configuration example file
# The file must contain the following fields
#
-# 'vlan' - is the entire configuration under VLAN
-# if so, each client group must include vlan
+# 'vlan' - if the entire configuration uses VLAN,
+# each client group must include vlan
# configuration
#
-# 'groups' - each client group must contain a range of IP
-# and initiator and responder maps
-# 'count' represents the number of MAC devices
-# on the group.
+# 'groups' - each client group must contain a range of IPs
+#            and initiator and responder sections.
+#            'count' represents the number of different MAC
+#            addresses in the group.
#
# initiator and responder can contain 'vlan', 'src_mac', 'dst_mac'
#
# each group contains a double way VLAN configuration
-vlan: true <1>
+vlan: true
groups:
-- ip_start : 16.0.0.1 <2>
+- ip_start : 16.0.0.1
ip_end : 16.0.0.204
- initiator : <3>
+ initiator :
vlan : 100
dst_mac : "00:00:00:01:00:00"
- responder : <4>
+ responder :
vlan : 200
dst_mac : "00:00:00:01:00:00"
----
The above configuration will divide the generator range of 255 clients to two clusters,
-where each has 4 devices and VLAN on both ways.
+each with 4 devices and VLAN in both directions.
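The example file above shows only the first group. A second group covering the remaining range might look like the following sketch (the `count`, VLAN, and MAC values here are assumptions for illustration; adjust them to your topology):
[source,bash]
----
- ip_start  : 16.0.0.205
  ip_end    : 16.0.0.255
  count     : 4                        # assumed: 4 devices in this cluster, as in the text above
  initiator :
    vlan    : 100
    dst_mac : "00:00:00:02:00:00"      # assumed MAC of the second cluster
  responder :
    vlan    : 200
    dst_mac : "00:00:00:02:00:00"
----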
-MACs will be allocated incrementaly with a wrap around.
+MACs will be allocated incrementally, with a wrap around.
e.g.
* 16.0.0.1 --> 00:00:00:01:00:00
* 16.0.0.2 --> 00:00:00:01:00:01
-* 16.0.0.3 --> 00:00:00:01:00:03
-* 16.0.0.4 --> 00:00:00:01:00:04
+* 16.0.0.3 --> 00:00:00:01:00:02
+* 16.0.0.4 --> 00:00:00:01:00:03
* 16.0.0.5 --> 00:00:00:01:00:00
* 16.0.0.6 --> 00:00:00:01:00:01
[source,bash]
----
-sudo ./t-rex-64 -f cap2/dns.yaml --client_cfg my_cfg.yaml -c 4 -d 100
+sudo ./t-rex-64 -f cap2/dns.yaml --client_cfg my_cfg.yaml
----
=== NAT support
$sudo ./t-rex-64 -f cap2/http_simple.yaml -c 4 -l 1000 -d 100000 -m 30 --learn-mode 1
----
-*SFR traffic without bundeling/ALG support*
+*SFR traffic without bundling/ALG support*
[source,bash]
----
-$sudo ./t-rex-64 -f avl/sfr_delay_10_1g_no_bundeling.yaml -c 4 -l 1000 -d 100000 -m 10 --learn-mode 2
+$sudo ./t-rex-64 -f avl/sfr_delay_10_1g_no_bundling.yaml -c 4 -l 1000 -d 100000 -m 10 --learn-mode 2
----
*NAT terminal counters:*::
*Configuration for Cisco ASR1000 Series:*::
-This feature was tested with the following configuration and sfr_delay_10_1g_no_bundeling. yaml traffic profile.
+This feature was tested with the following configuration and the sfr_delay_10_1g_no_bundling.yaml traffic profile.
Client address range is 16.0.0.1 to 16.0.0.255
[source,python]
*Limitations:*::
-. The IPv6-IPv6 NAT feature does not exist on routers, so this feature can work on IPv4 only.
+. The IPv6-IPv6 NAT feature does not exist on routers, so this feature can work only with IPv4.
. Does not support NAT64.
-. Bundling/plugin support is not fully supported. Consequently, sfr_delay_10.yaml does not work. Use sfr_delay_10_no_bundeling.yaml instead.
-// verify file name "sfr_delay_10_no_bundeling.yaml" above. english spelling is bundling but maybe the filename has the "e"
+. Bundling/plugin is not fully supported. Consequently, sfr_delay_10.yaml does not work. Use sfr_delay_10_no_bundling.yaml instead.
[NOTE]
=====================================================================
* `--learn-verify` is a TRex debug mechanism for testing the TRex learn mechanism.
-* If the router is configured without NAT, it will verify that the inside_ip==outside_ip and inside_port==outside_port.
+* Run it when the DUT is configured without NAT. It will verify that inside_ip==outside_ip and inside_port==outside_port.
=====================================================================
=== Flow order/latency verification
-In normal mode (without this feature enabled), received traffic is not checked by software. Hardware (Intel NIC) testin for dropped packets occurs at the end of the test. The only exception is the Latency/Jitter packets.
+In normal mode (without this feature enabled), received traffic is not checked by software. Hardware (Intel NIC) testing for dropped packets occurs at the end of the test. The only exception is the Latency/Jitter packets.
This is one reason that with TRex, you *cannot* check features that terminate traffic (for example TCP Proxy).
To enable this feature, add `--rx-check <sample>` to the command line options, where <sample> is the sample rate.
The proportion of flows sent to software for verification is 1/sample_rate. For 40Gb/sec traffic you can use a sample rate of 1/128. Watch the Rx CPU% utilization.
- INFO: This feature changes the TTL of the sampled flows to 255 and expects to receive packets with TTL 254 or 255 (one routing hop). If you have more than one hop in your setup, use `--hops` to change it to a higher value. More than one hop is possible if there are number of routers betwean TRex client side and TRex server side.
+[NOTE]
+============
+This feature changes the TTL of the sampled flows to 255 and expects to receive packets with TTL 254 or 255 (one routing hop). If you have more than one hop in your setup, use `--hops` to change it to a higher value. More than one hop is possible if there are a number of routers between the TRex client side and the TRex server side.
+============
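For example, the following run enables the Rx check at a 1/128 sample rate on a setup with two routing hops (the traffic profile and rates are illustrative):
[source,bash]
----
$sudo ./t-rex-64 -f cap2/http_simple.yaml -c 4 -m 10 -d 100 -l 1000 --rx-check 128 --hops 2
----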
This feature ensures that:
*Notes and Limitations:*::
-** This feature must be enabled with a latency check (`-l`).
** To receive the packets TRex does the following:
*** Changes the TTL to 0xFF and expects 0xFF (loopback) or 0xFE (route). (Use `--hops` to configure this value.)
*** Adds 24 bytes of metadata as ipv4/ipv6 option header.
mac_override_by_ip : true <8>
----
<1> Test duration (seconds). Can be overridden using the `-d` option.
-<2> See the generator section.
+<2> See the link:trex_manual.html#_clients_servers_ip_allocation_scheme[generator] section.
// what does note 2 mean? see somewhere else? isn't this simply the generator section?
<3> Default source/destination MAC address. The configuration YAML can override this.
<4> true (default) indicates that the IPG is taken from the cap file (also taking into account cap_ipg_min and cap_override_ipg if they exist). false indicates that IPG is taken from per template section.
anchor:trex_config[]
The configuration file, in YAML format, configures TRex behavior, including:
-
-- MAC address for each port (source and destination)
-- Masking interfaces (usually for 1Gb/Sec TRex) to ensure that TRex does not take the management ports as traffic ports.
+- IP address or MAC address for each port (source and destination).
+- Masked interfaces, to ensure that TRex does not try to use the management ports as traffic ports.
- Changing the zmq/telnet TCP port.
-==== Basic Configuration
-
-Copy/install the configuration file to `/etc/trex_cfg.yaml`.
-TRex loads it automatically at startup. You still can override options with the command line option switch `--cfg [file]` in the CLI
-Configuration file examples can be found in the `$ROOT/cfg` folder
+You specify which config file to use by adding --cfg <file name> to the command line arguments. +
+If no --cfg is given, the default `/etc/trex_cfg.yaml` is used. +
+Configuration file examples can be found in the `$TREX_ROOT/scripts/cfg` folder.
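For example, to run with a config file that is not at the default location (the file path here is illustrative):
[source,bash]
----
$sudo ./t-rex-64 -f cap2/dns.yaml -d 10 --cfg /home/trex/my_trex_cfg.yaml
----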
+==== Basic Configurations
[source,python]
----
- - port_limit : 2 #mandatory <1>
- version : 2 #mandatory <2>
- interfaces : ["03:00.0","03:00.1"] #mandatory <3>
- #enable_zmq_pub : true <4>
- #zmq_pub_port : 4500 <5>
- #prefix : setup1 <6>
- #limit_memory : 1024 <7>
- c : 4 <8>
- port_bandwidth_gb : 10 <9>
- port_info : # set eh mac addr mandatory
- - dest_mac : [0x1,0x0,0x0,0x1,0x0,0x00] # port 0 <10>
- src_mac : [0x2,0x0,0x0,0x2,0x0,0x00]
- - dest_mac : [0x3,0x0,0x0,0x3,0x0,0x00] # port 1
- src_mac : [0x4,0x0,0x0,0x4,0x0,0x00]
- - dest_mac : [0x5,0x0,0x0,0x5,0x0,0x00] # port 2
- src_mac : [0x6,0x0,0x0,0x6,0x0,0x00]
- - dest_mac : [0x7,0x0,0x0,0x7,0x0,0x01] # port 3
- src_mac : [0x0,0x0,0x0,0x8,0x0,0x02]
- - dest_mac : [0x0,0x0,0x0,0x9,0x0,0x03] # port 4
-----
-<1> The number of ports, should be equal to the number of interfaces in 3) - mandatory
-<2> Must be set to 2 - mandatory
-<3> Interface that should be used. used `sudo ./dpdk_setup_ports.py --show` - mandatory
+ - port_limit : 2 #mandatory <1>
+ version : 2 #mandatory <2>
+ interfaces : ["03:00.0", "03:00.1"] #mandatory <3>
+ #enable_zmq_pub : true #optional <4>
+ #zmq_pub_port : 4500 #optional <5>
+ #prefix : setup1 #optional <6>
+ #limit_memory : 1024 #optional <7>
+ c : 4 #optional <8>
+ port_bandwidth_gb : 10 #optional <9>
+ port_info : # set eh mac addr mandatory
+ - default_gw : 1.1.1.1 # port 0 <10>
+ dest_mac : '00:00:00:01:00:00' # Either default_gw or dest_mac is mandatory <10>
+ src_mac : '00:00:00:02:00:00' # optional <11>
+ ip : 2.2.2.2 # optional <12>
+ vlan : 15 # optional <13>
+ - dest_mac : '00:00:00:03:00:00' # port 1
+ src_mac : '00:00:00:04:00:00'
+ - dest_mac : '00:00:00:05:00:00' # port 2
+ src_mac : '00:00:00:06:00:00'
+ - dest_mac : [0x0,0x0,0x0,0x7,0x0,0x01] # port 3 <14>
+ src_mac : [0x0,0x0,0x0,0x8,0x0,0x02] # <14>
+----
+<1> Number of ports. Should be equal to the number of interfaces listed in 3. - mandatory
+<2> Must be set to 2. - mandatory
+<3> List of interfaces to use. Run `sudo ./dpdk_setup_ports.py --show` to see the list you can choose from. - mandatory
<4> Enable the ZMQ publisher for stats data, default is true.
-<5> ZMQ port number. the default value is good. you can remove this line
-
-<6> The name of the setup should be distinct ( DPDK --file-prefix )
-<7> DPDK -m limit the packet memory
-<8> Number of threads per dual interface ( like -c CLI option )
-<9> The bandwidth of each interface in Gb/sec. In this example we have 10Gb/sec interfaces. for VM put 1. it used to tune the amount of memory allocated by TRex.
-<10> MAC address per port - source and destination.
-
-
-To find out what the interfaces ids, perform the following:
+<5> ZMQ port number. The default value is good. If running two TRex instances on the same machine, each should be given a distinct number. Otherwise, you can remove this line.
+<6> If running two TRex instances on the same machine, each should be given a distinct name. Otherwise, you can remove this line. (Passed to DPDK as the --file-prefix argument.)
+<7> Limit the amount of packet memory used. (Passed to DPDK as the -m argument.)
+<8> Number of threads (cores) TRex will use per interface pair. (Can be overridden by the -c command line option.)
+<9> The bandwidth of each interface in Gb/sec. In this example we have 10Gb/sec interfaces. For a VM, put 1. Used to tune the amount of memory allocated by TRex.
+<10> TRex needs to know the destination MAC address to use on each port. You can specify this in one of two ways: +
+Specify dest_mac directly. +
+Specify default_gw (since version 2.10). In this case (only if no dest_mac is given), TRex will issue an ARP request to this IP, and will use
+the result as the dest MAC. If no dest_mac is given and no ARP response is received, TRex will exit.
+
+<11> Source MAC to use when sending packets from this interface. If not given (since version 2.10), the MAC address of the port will be used.
+<12> If given (since version 2.10), TRex will issue gratuitous ARP for the ip + src MAC pair on the appropriate port. In stateful mode,
+gratuitous ARP for each ip will be sent every 120 seconds (can be changed using the --arp-refresh-period argument).
+<13> If given, gratuitous ARP and ARP requests will be sent using the given VLAN tag.
+<14> Old MAC address format. New format is supported since version v2.09.
+
+To find out which interfaces (NIC ports) can be used, perform the following:
[source,bash]
----
Network devices using kernel driver
===================================
- 0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active*
- 0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<1>
- 0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<2>
- 0000:13:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<3>
- 0000:13:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<4>
+ 0000:02:00.0 '82545EM Gigabit Ethernet Controller' if=eth2 drv=e1000 unused=igb_uio *Active* #<1>
+ 0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<2>
+ 0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
+ 0000:13:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
+ 0000:13:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
Other network devices
=====================
<none>
----
-<1> TRex interface #1 before unbinding
-<2> TRex interface #2 before unbinding
-<3> TRex interface #3 before unbinding
-<4> TRex interface #4 before unbinding
-
+<1> We see that 02:00.0 is active (our management port).
+<2> All other NIC ports (03:00.0, 03:00.1, 13:00.0, 13:00.1) can be used.
The minimum configuration file is:
----
<none>
- port_limit : 4
- version : 2 #<1>
- interfaces : ["03:00.0","03:00.1","13:00.1","13:00.0"] #<2>
+ version : 2
+ interfaces : ["03:00.0","03:00.1","13:00.1","13:00.0"]
----
-<1> must add version 2 to the configuration file
-<2> The list of interfaces from `sudo ./dpdk_setup_ports.py --show`
-
==== Memory section configuration
-The memory section is optional. It is used when there is a need to tune the amount of memory used by packet manager
+The memory section is optional. It is used when there is a need to tune the amount of memory used by the TRex packet manager.
+The default values (from the TRex source code) are usually good for most users. Unless you have unusual needs, you can
+omit this section.
[source,python]
----
dp_flows : 1048576 <4>
global_flows : 10240 <5>
----
-<1> Memory section
-<2> Per dual interfaces number of buffers - buffer for real time traffic generation
-<3> Traffic buffer - when you have many template only this section should be enlarge
-<4> number of TRex flows needed
-<5> reserved
+<1> Memory section header
+<2> Numbers of memory buffers allocated for packets in transit, per port pair. Numbers are specified per packet size.
+<3> Numbers of memory buffers allocated for holding the part of the packet which remains unchanged per template.
+You should increase the numbers here only if you have a very large number of templates.
+<4> Number of TRex flow objects allocated (to get best performance they are allocated upfront, and not dynamically).
+If you expect more concurrent flows than the default (1048576), enlarge this.
+<5> Number of objects TRex allocates for holding NAT ``in transit'' connections. In stateful mode, TRex learns the NAT
+translation by looking at the address changes done by the DUT to the first packet of each flow. So, this is the
+number of flows for which TRex sent the first flow packet but has not learned the translation yet. Again, the default
+here (10240) should be good. Increase it only if you use NAT and see issues.
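As an example, if you expect more concurrent flows than the default, you could enlarge only `dp_flows` and leave the other fields at their defaults (a sketch; the `memory` key name is an assumption based on the memory section header described above):
[source,python]
----
- port_limit : 2
  version    : 2
  interfaces : ["03:00.0", "03:00.1"]
  memory     :                 # assumed section name, matching the memory section header above
    dp_flows : 4194304         # allow ~4M concurrent flows instead of the 1048576 default
----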
==== Platform section configuration
[source,python]
----
- version : 2
- interfaces : ["03:00.0","03:00.1"]
+ interfaces : ["03:00.0","03:00.1"]
port_limit : 2
- prefix : setup1 <1>
- limit_memory : 1024 <2>
- c : 4 <3>
- port_bandwidth_gb : 10 <4>
- platform : <5>
- master_thread_id : 0 <6>
- latency_thread_id : 5 <7>
- dual_if :
- - socket : 0 <8>
- threads : [1,2,3,4] <9>
-----
-<1> The name of the setup should be distinct ( DPDK --file-prefix )
-<2> DPDK -m
-<3> Number of threads per dual interface ( like -c CLI option )
-<4> The bandwidth of each interface in Gb/sec. In this example we have 10Gb/sec interfaces. for VM put 1. it used to tune the amount of memory allocated by TRex.
-<5> the platform section
-<6> The thread_id for control
-<7> The thread_id for latency if used
-<8> Socket of the dual interfaces, in this example of 03:00.0 and 03:00.1, memory should be local to the interface. (Currently dual interface can't use 2 NUMAs.)
-<9> Thread to be used, should be local to the NIC. The threads are pinned to cores, thus specifying threads is like specifying cores.
+....
+ platform : <1>
+ master_thread_id : 0 <2>
+ latency_thread_id : 5 <3>
+ dual_if : <4>
+ - socket : 0 <5>
+ threads : [1,2,3,4] <6>
+----
+<1> Platform section header.
+<2> Hardware thread_id for control thread.
+<3> Hardware thread_id for RX thread.
+<4> ``dual_if'' section defines info for interface pairs (according to the order in ``interfaces'' list).
+Each section, starting with ``- socket'', defines info for a different interface pair.
+<5> The NUMA node from which memory will be allocated for use by the interface pair.
+<6> Hardware threads to be used for sending packets for the interface pair. Threads are pinned to cores, so specifying threads
+actually determines the hardware cores.
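For example, a sketch of a platform section for two interface pairs located on different NUMA nodes might look like this (the thread ids are illustrative; pick cores local to each NIC):
[source,python]
----
  platform :
      master_thread_id  : 0
      latency_thread_id : 8
      dual_if :
        - socket  : 0            # first interface pair - memory and cores from NUMA 0
          threads : [1,2,3,4]
        - socket  : 1            # second interface pair - memory and cores from NUMA 1
          threads : [9,10,11,12]
----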
*Real example:* anchor:numa-example[]
-We've connected 2 Intel XL710 NICs close to each other on motherboard, they shared same NUMA:
+We connected 2 Intel XL710 NICs close to each other on the motherboard. They shared the same NUMA:
image:images/same_numa.png[title="2_NICSs_same_NUMA"]
-The CPU utilization was very high ~100%, with c=2 and c=4 the results were same.
+CPU utilization was very high (~100%); with c=2 and c=4 the results were the same.
Then, we moved the cards to different NUMAs:
anchor:cml-line[]
-*-f=TRAFIC_YAML_FILE*::
- Traffic YAML configuration file.
+*--allow-coredump*::
+Allow creation of core dump.
+
+*--arp-refresh-period <num>*::
+Period in seconds between sending of gratuitous ARP for our addresses. A value of 0 means ``never send''.
+
+*-c <num>*::
+Number of hardware threads to use per interface pair. Use at least 4 for 40Gb/sec TRex. +
+TRex uses 2 threads for its own needs. The rest of the threads can be used. The maximum value here is the number of free threads
+divided by the number of interface pairs. +
+For virtual NICs on a VM, we always use one thread per interface pair.
+
+*--cfg <file name>*::
+TRex configuration file to use. See relevant manual section for all config file options.
-*-c=CORES*::
- Number of cores _per dual interface_. Use 4 for TRex 40Gb/sec. Monitor the CPU% of TRex - it should be ~50%. +
- TRex uses 2 cores for inner needs, the rest of cores can be used divided by number of dual interfaces. +
- For virtual NICs the limit is -c=1.
+*--checksum-offload*::
+Enable IP, TCP and UDP tx checksum offloading, using DPDK. This requires all used interfaces to support this.
-*-l=HZ*::
- Run the latency daemon in this Hz rate. Example: -l 1000 runs 1000 pkt/sec from each interface. A value of zero (0) disables the latency check.
+*--client_cfg <file>*::
+YAML file describing clients configuration. Look link:trex_manual.html#_client_clustering_configuration[here] for details.
-*-d=DURATION*::
- Duration of the test (sec), Default: 0
+*-d <num>*::
+Duration of the test in seconds.
-*-m=MUL*::
- Factor for bandwidth (multiply the CPS of each template by this value).
+*-e*::
+ Same as `-p`, but changes the src/dst IP according to the port. Using this, you will get all the packets of the
+ same flow from the same port, and with the same src/dst IP. +
+ It will not work well with NBAR, as NBAR expects all client IPs to be sent from the same direction.
+
+*-f <yaml file>*::
+Specify traffic YAML configuration file to use. Mandatory option for stateful mode.
+
+*--hops <num>*::
+ Provide number of hops in the setup (default is one hop). Relevant only if the Rx check is enabled.
+ Look link:trex_manual.html#_flow_order_latency_verification[here] for details.
+
+*--iom <mode>*::
+ I/O mode. Possible values: 0 (silent), 1 (normal), 2 (short).
*--ipv6*::
- Convert template to IPv6 mode.
+ Convert templates to IPv6 mode.
+
+*-k <num>*::
+ Run ``warm up'' traffic for num seconds before starting the test. This is needed if TRex is connected to a switch running
+ spanning tree. You want the switch to see traffic from all relevant source MAC addresses before starting to send real
+ data. The traffic sent is the same as that used for the latency test (-l option). +
+ Current limitation (holds for TRex version 1.82): does not work properly on VM.
-*--learn-mode <mode (1-2)>*::
- Learn the dynamic NAT translation. +
- 1 - Use TCP ACK in first SYN to pass NAT translation information. Will work only for TCP streams. Initial SYN packet must be present in stream. +
- 2 - Add special IP option to pass NAT translation information. Will not work on certain firewalls if they drop packets with IP options.
+*-l <rate>*::
+ In parallel to the test, run latency check, sending packets at rate/sec from each interface.
+
+*--learn-mode <mode>*::
+ Learn the dynamic NAT translation. Look link:trex_manual.html#_nat_support[here] for details.
*--learn-verify*::
- Learn the translation. This feature is intended for verification of the mechanism in cases where there is no NAT.
-
-*-p*::
- Flow-flip. Sends all flow packets from the same interface. This can solve the flow order. Does not work with any router configuration.
+ Used for testing the NAT learning mechanism. Do the learning as if the DUT is doing NAT, but verify that packets
+ are not actually changed.
-*-e*::
- same as `-p` but comply to the direction rules and replace source/destination IPs. it might not be good for NBAR as it is expected clients ip to be sent from same direction.
+*--limit-ports <port num>*::
+ Limit the number of ports used. Overrides the ``port_limit'' from config file.
-//TBD: The last 2 sentences (flow order, router configuration) are unclear.
+*--lm <hex bit mask>*::
+Mask specifying which ports will send traffic. For example, 0x1 - Only port 0 will send. 0x4 - only port 2 will send.
+This can be used to verify port connectivity. You can send packets from one port, and look at counters on the DUT.
-
-*--lm=MASK*::
- Latency mask. Use this to verify port connectivity. Possible values: 0x1 (only port 0 will send traffic), 0x2 (only port 1 will send traffic).
+*--lo*::
+ Latency only - Send only latency packets. Do not send packets from the templates/pcap files.
-*--lo*::
- Latency test.
-
-*--limit-ports=PORTS*::
- Limit number of ports. Configure this in the --cfg file. Possible values (number of ports): 2, 4, 6, 8. (Default: 4)
-
-*--nc*::
- If set, will terminate exacly at the end of the duration. This provides a faster, more accurate TRex termination. In default it wait for all the flow to terminate gracefully. In case of a very long flow the termination might be prolong.
+*-m <num>*::
+ Rate multiplier. TRex will multiply the CPS rate of each template by num.
+
+*--mac-spread*::
+ Spread the destination MAC by this factor. For example, a value of 2 will generate traffic to 2 devices: DEST-MAC and DEST-MAC+1. The maximum is 128 devices.
+
+*--nc*::
+ If set, will terminate exactly at the end of the specified duration.
+ This provides faster, more accurate TRex termination.
+ By default (without this option), TRex waits for all flows to terminate gracefully. In case of a very long flow, termination might prolong.
+
+*--no-flow-control-change*::
+ Prevents TRex from changing flow control. By default (without this option), TRex disables flow control at startup for all cards, except for the Intel XL710 40G card.
+
+*--no-key*:: Daemon mode, don't get input from keyboard.
+
+*--no-watchdog*:: Disable watchdog.
-*-pm=MULTIFLIER*::
- Platform factor. If the setup includes a splitter, you can multiply the total results by this factor. Example: --pm 2.0 will multiply all bps results by this factor.
+*-p*::
+Send all packets of the same flow from the same direction. For each flow, TRex will randomly choose between client port and
+server port, and send all the packets from this port. src/dst IPs keep their values as if packets are sent from two ports.
+Meaning, on the same port we get packets from client to server, and from server to client. +
+If you are using this with a router, you cannot rely on routing rules to pass traffic to TRex; you must configure policy
+based routes to pass all traffic from one DUT port to the other. +
+
+*-pm <num>*::
+ Platform factor. If the setup includes a splitter, you can multiply all statistics displayed by TRex by this factor, so that they will match the DUT counters.
*-pubd*::
- Disable ZMQ monitor's publishers.
+ Disable ZMQ monitor's publishers.
-*-1g*::
- Deprecated. Configure TRex to 1G. Configure this in the --cfg file.
-
-*-k=SEC*::
- Run a latency test before starting the test. TRex will wait for x sec before and after sending latency packets at startup.
- Current limitation (holds for TRex version 1.82): does not work properly on VM.
+*--rx-check <sample rate>*::
+ Enable Rx check module. Using this, each thread randomly samples 1/sample_rate of the flows and checks packet order, latency, and additional statistics for the sampled flows.
+ Note: This feature works on the RX thread.
-*-w=SEC*::
- Wait additional time between NICs initialization and sending traffic. Can be useful if DUT needs extra setup time. Default is 1 second.
-
-*--cfg=platform_yaml*::
- Load and configure platform using this file. See example file: cfg/cfg_examplexx.yaml
- This file is used to configure/mask interfaces, cores, affinity, and MAC addresses.
- You can use the example file by copying it to: /etc/trex_cfg.yaml
-
-
-*-v=VERBOSE*::
- Verbose mode (works only on the debug image! )
- 1 Show only stats.
- 2 Run preview. Does not write to file.
- 3 Run preview and write to stats file.
- Note: When using verbose mode, it is not necessary to add an output file.
- Caution: Operating in verbose mode can generate very large files (terabytes). Use with caution, only on a local drive.
-
-
-*--rx-check=SAMPLE_RATE*::
- Enable Rx check module. Using this each thread samples flows (1/sample) and checks order, latency, and additional statistics.
- Note: This feature operates as an additional thread.
-
-*--hops=HOPES*::
- Number of hops in the setup (default is one hop). Relevant only if the Rx check is enabled.
-
-*--iom=MODE*::
- I/O mode for interactive mode. Possible values: 0 (silent), 1 (normal), 2 (short)
-
-*--no-flow-control-change*::
- Prevents TRex from changing flow control. By default (without this option), TRex disables flow control at startup for all cards, except for the Intel XL710 40G card.
+*-v <verbosity level>*::
+ Show debug info. A value of 1 shows debug info on startup. A value of 3 shows debug info during the run in some cases. Might slow down operation.
-*--mac-spread*::
- Spread the destination mac by this this factor. e.g 2 will generate the traffic to 2 devices DEST-MAC ,DEST-MAC+1. The maximum is up to 128 devices.
+*--vlan*:: Relevant only for stateless mode with Intel 82599 10G NIC.
+ When configuring flow stat and latency per stream rules, assume all streams use VLAN.
+*-w <num seconds>*::
+ Wait additional time between NICs initialization and sending traffic. Can be useful if DUT needs extra setup time. Default is 1 second.
ifndef::backend-docbook[]
=== Simulator
-The TRex simulator is a linux application that can process on any Linux CEL (it can run on TRex itself).
-you can create create output pcap file from input of traffic YAML.
+The TRex simulator is a Linux application (no DPDK needed) that can run on any Linux machine (it can also run on the TRex machine itself).
+It can create an output pcap file from a traffic YAML input.
==== Simulator
TRex simulates clients and servers and generates traffic based on the pcap files provided.
.Clients/Servers
-image:images/trex_model.png[title="generator"]
+image:images/trex_model.png[title=""]
The following is an example YAML-format traffic configuration file (cap2/dns_test.yaml), with explanatory notes.
[source,python]
----
-$more cap2/dns_test.yaml
+$cat cap2/dns_test.yaml
- duration : 10.0
generator :
distribution : "seq"
dual_port_mask : "1.0.0.0"
tcp_aging : 1
udp_aging : 1
- mac : [0x00,0x00,0x00,0x01,0x00,0x00]
cap_info :
- name: cap2/dns.pcap <3>
cps : 1.0 <4>
<6> Should be the same as ipg.
.DNS template file
-image:images/dns_wireshark.png[title="generator"]
+image:images/dns_wireshark.png[title=""]
The DNS template file includes:
//TBD: Not sure what the output looks like here, with this line showing only "gives"
.TRex generated output file
+//??? missing picture
image:images/dns_trex_run.png[title="generator"]
As the output file shows...
x axis: time (seconds)
y axis: flow ID
The output indicates that there are 10 flows in 1 second, as expected, and the IPG is 50 msec +
-//TBD: not sure what the "+ +" means ==> [hh] Ascii Doc break page
ifndef::backend-docbook[]
+++++++++++++++++++++++++++++++++
Open-flows : 1 Servers : 254 Socket : 1 Socket/Clients : 0.0
drop-rate : 0.00 bps
----
-<1> Number of clients
+<1> Number of clients
<2> Socket utilization (should be lower than 20%; enlarge the number of clients in case of an issue).
=== DNS, W=1
|=================
-=== Mixing HTTP and DNS template
+=== Mixing HTTP and DNS templates
The following example combines elements of HTTP and DNS templates:
endif::backend-docbook[]
-=== TRex command line
+=== Running examples
-TRex commands typically include the following main arguments, but only `-f` and `-d` are required.
+TRex commands typically include the following main arguments, but only `-f` is required.
[source,bash]
----
-$.sudo /t-rex-64 -f [traffic_yaml] -m [muti] -d [duration] -l [Hz=1000] -c [cores]
+$sudo ./t-rex-64 -f <traffic_yaml> -m <multiplier> -d <duration> -l <latency test rate> -c <cores>
----
-
-*-f=TRAFIC_YAML_FILE*::
- YAML traffic configuration file.
-
-*-m=MUL*::
- Factor for bandwidth (multiplies the CPS of each template by this value).
-
-*-d=DURATION*::
- Duration of the test (sec). Default: 0
-
-*-l=HZ*::
- Rate (Hz) for running the latency daemon. Example: -l 1000 runs 1000 pkt/sec from each interface. A value of zero (0) disables the latency check.
-
-*-c=CORES*::
- Number of cores. Use 4 for TRex 40Gb/sec. Monitor the CPU% of TRex - it should be ~50%.
-
-
-The full reference can be found xref:cml-line[here]
+The full command line reference can be found xref:cml-line[here].
==== TRex command line examples
=== Mimicking stateless traffic under stateful mode
[NOTE]
-TRex now supports a true stateless traffic generation.
+TRex also supports true stateless traffic generation.
If you are looking for stateless traffic, please visit the following link: xref:trex_stateless.html[TRex Stateless Support]
With this feature you can "repeat" flows and create stateless, *IXIA* like streams.
-After injecting the number of flows defined by `limit`, TRex repeats the same flows. If all template has a `limit` the CPS will be zero after a time as there are no new flows after the first iteration.
+After injecting the number of flows defined by `limit`, TRex repeats the same flows. If all templates have a `limit`, the CPS will drop to zero after some time, as there are no new flows after the first iteration.
*IMIX support:*::
Example:
[WARNING]
=====================================================================
The *-p* is used here to send the client side packets from both interfaces.
-(Normally it is sent only from client ports only.)
-Typically, the traffic client side is sent from the TRex client port; with this option, the port is selected by the client IP.
-All the flow packets are sent from the same interface. This may create an issue with routing, as the client's IP will be sent from the server interface. PBR router configuration solves this issue but cannot be used in all cases. So use this `-p` option carefully.
+(Normally it is sent from client ports only.)
+With this option, the port is selected by the client IP.
+All the packets of a flow are sent from the same interface. This may create an issue with routing, as the client's IP will be sent from the server interface. PBR router configuration solves this issue but cannot be used in all cases. So use this `-p` option carefully.
=====================================================================
w : 1
limit : 199
----
-The templates are duplicate here to better utilize DRAM and to get better performance.
+The templates are duplicated here to better utilize DRAM and to get better performance.
//TBD: What exactly repeates the templates - TRex, script, ? Also, how does that better utilize DRAM.
.Imix YAML `cap2/imix_fast_1g_100k_flows.yaml` example
=== Clients/Servers IP allocation scheme
-Currently, there is one global IP pool for clients and servers. It serves all templates. all the templates will allocate IP from this global pool.
-Each TRex client/server "dual-port" (pair of ports, such as port 0 for client, port 1 for server) has it own mask offset taken from the YAML. The mask offset is called `dual_port_mask`.
+Currently, there is one global IP pool for clients and servers. It serves all templates. All templates will allocate IP from this global pool.
+Each TRex client/server "dual-port" (pair of ports, such as port 0 for client, port 1 for server) has its own generator offset, taken from the config file. The offset is called `dual_port_mask`.
Example:
tcp_aging : 0
udp_aging : 0
----
-<1> Mask to add per dual-port pair.
-The reason we introduce dual_port_mask is to make static route configurable. With this mask, different ports has different prefix.
+<1> Offset to add per port pair.
+The reason for the ``dual_port_mask'' is to make static route configuration per port possible. With this offset, different ports have different prefixes.
-//TBD: needs clarification - this is the format of a port mask?
-
-With four ports, TRex produces the following output:
+For example, with four ports, TRex will produce the following IP ranges:
[source,python]
----
- dual-0 (0,1) --> C (16.0.0.1-16.0.0.128 ) <-> S( 48.0.0.1 - 48.0.0.128)
- dual-1 (2,3) --> C (17.0.0.129-17.0.0.255 ) <-> S( 49.0.0.129 - 49.0.0.255) + mask ("1.0.0.0")
+ port pair-0 (0,1) --> C (16.0.0.1-16.0.0.128 ) <-> S( 48.0.0.1 - 48.0.0.128)
+ port pair-1 (2,3) --> C (17.0.0.129-17.0.0.255 ) <-> S( 49.0.0.129 - 49.0.0.255) + mask ("1.0.0.0")
----
-In the case of setting dual-port_mask as 0.0.0.0, both ports will use the same range of ip.
-With four ports and dual_port_mask as 0.0.0.0, the ip range is :
+- Number of clients : 255
+- Number of servers : 255
+- The offset defined by ``dual_port_mask'' (1.0.0.0) is added for each port pair, but the total number of clients/servers will remain constant (255) and will not depend on the number of ports.
+- TCP/UDP aging is the time it takes to return the socket to the pool. It is required when the number of clients is very small and the template defines a very long duration.
+//TBD: not clear - is TCP/UDP aging an option used when the template defines a long duration? also, should specify what "very long" refers to.
+
+If ``dual_port_mask'' were set to 0.0.0.0, both port pairs would use the same IP range.
+For example, with four ports, we would get the following IP ranges:
[source,python]
----
- dual-0 (0,1) --> C (16.0.0.1-16.0.0.128 ) <-> S( 48.0.0.1 - 48.0.0.128)
- dual-1 (2,3) --> C (16.0.0.129-16.0.0.255 ) <-> S( 48.0.0.129 - 48.0.0.255)
+ port pair-0 (0,1) --> C (16.0.0.1-16.0.0.128 ) <-> S( 48.0.0.1 - 48.0.0.128)
+ port pair-1 (2,3) --> C (16.0.0.129-16.0.0.255 ) <-> S( 48.0.0.129 - 48.0.0.255)
----
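+
+For illustration only, the following minimal Python sketch (a hypothetical example, not TRex code) shows how adding k times the ``dual_port_mask'' offset to port pair k's share of the pool produces the ranges shown above:
+
+[source,python]
+----
+# Hypothetical illustration of the dual_port_mask idea (not TRex source code):
+# port pair k adds k * mask to its share of the global client/server pool.
+import ipaddress
+
+def add_offset(ip, mask, pair_index):
+    base = int(ipaddress.IPv4Address(ip))
+    step = int(ipaddress.IPv4Address(mask))
+    return ipaddress.IPv4Address(base + pair_index * step)
+
+mask = "1.0.0.0"   # "1.0.0.0" reproduces the first example; "0.0.0.0" the second
+pairs = [("16.0.0.1", "16.0.0.128", "48.0.0.1", "48.0.0.128"),
+         ("16.0.0.129", "16.0.0.255", "48.0.0.129", "48.0.0.255")]
+for k, (c_lo, c_hi, s_lo, s_hi) in enumerate(pairs):
+    print("port pair-%d --> C (%s-%s) <-> S (%s-%s)" % (
+        k, add_offset(c_lo, mask, k), add_offset(c_hi, mask, k),
+        add_offset(s_lo, mask, k), add_offset(s_hi, mask, k)))
+----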
-//TBD: not clear what the following 5 points are referring to. This looks like it should be a continuation of the footnotes for the example a few lines up.
-- Number of clients : 255
-- Number of servers : 255
-- The mask defined by dual_port_mask (1.0.0.0) is added for each dual-port pair, but the total number of clients/servers from YAML will be constant and does not depend on the amount of dual ports.
-- TCP/UDP aging is required when the number of clients is very small and the template defines a very long duration.
-This is the time it takes to return the socket to the pool.
-//TBD: not clear - is TCP/UDP aging an option used when the template defines a long duration? also, should specify what "very long" refers to.
-- In the current version, the only option for distribution is "seq".
-
*Router configuration for this mode:*::
*One server:*::
-To support a template with one server, you can add a new YAML server_addr ip. Each dual-port pair will be assigned a separate server (in compliance with the mask).
-//TBD: clarify
+To support a template with one server, you can add the ``server_addr'' keyword. Each port pair will get a different server IP
+(according to the ``dual_port_mask'' offset).
[source,python]
----
one_app_server : true <2>
wlength : 1
----
-<1> Server IPv4 address.
+<1> Server IP.
<2> Enable one server mode.
-*w/wlength:*::
-//TBD: looks like this should be a continuation of the footnotes as in 1 and 2 above.
+// TBD - what is wlength???
-not require to configure them, user 1
-//TBD: ?
-
-*new statistic:*::
+In the TRex server output, you will see the following statistics.
+// TBD - need to explain this
[source,python]
----
[NOTE]
=====================================================================
* No backward compatibility with the old generator YAML format.
-* When using -p option, TRex will not comply with the static route rules. Server-side traffic may be sent from the client side (port 0) and vice-versa. Use the -p option only with PBR configuration when the router, switch p1<->p2.
-//TBD: "when router..." unclear
-* VLAN (sub interface feature) does not comply with static route rules. Use it only with PBR.
- VLAN0 <-> VALN1 per interface
- vlan : { enable : 1 , vlan0 : 100 , vlan1 : 200 }
-* Limitation: When using a template with plugins (bundles), the number of servers must be higher than the number of clients.
+* When using -p option, TRex will not comply with the static route rules. Server-side traffic may be sent from the client side (port 0) and vice-versa.
+If you use the -p option, you must configure policy based routing to pass all traffic from router port 1 to router port 2, and vice versa.
+* The xref:trex_vlan[VLAN] feature does not comply with static route rules. If you use it, you also need policy based routing
+rules to pass packets from VLAN0 to VLAN1 and vice versa.
+* Limitation: When using a template with plugins (bundles), the number of servers must be higher than the number of clients.
=====================================================================
==== More Details about IP allocations
-Each time a new flow is creaed, TRex allocates a new Client IP/port and Server IP. This 3-tuple should be distinct among active flows.
+Each time a new flow is created, TRex allocates a new client IP/port pair and server IP. This 3-tuple should be distinct among active flows.
-Currently, only sequcency distribution is supported in IP allocation. That means the IP address is increased one by one.
+Currently, only sequential distribution is supported in IP allocation. This means the IP address is increased by one for each flow.
-Let's say if we have 2 candidate IPs in the pool: 16.0.0.1 and 16.0.0.2. So the sequence of allocated clients should be something like this:
+For example, if we have a pool of two IP addresses: 16.0.0.1 and 16.0.0.2, the allocation of client src/port pairs will be
[source,python]
----
16.0.0.1 [1024]
16.0.0.2 [1024]
16.0.0.1 [1025]
16.0.0.2 [1025]
+16.0.0.1 [1026]
+16.0.0.2 [1026]
+...
----
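+
+The following minimal Python sketch (a hypothetical illustration of the pattern above, not TRex source code) reproduces this sequential client IP/port allocation for a two-address pool:
+
+[source,python]
+----
+# Hypothetical sketch of sequential client IP/port allocation (not TRex code).
+import itertools
+
+def seq_client_tuples(client_ips, first_port=1024):
+    port = first_port
+    while True:
+        for ip in client_ips:   # walk the pool in sequence: .1, .2, .1, .2, ...
+            yield (ip, port)
+        port += 1               # bump the source port once the pool wraps around
+
+gen = seq_client_tuples(["16.0.0.1", "16.0.0.2"])
+for ip, port in itertools.islice(gen, 6):
+    print("%s [%d]" % (ip, port))
+----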
-==== How to decide the PPS and BPS
+==== How to determine the packets per second (PPS) and bits per second (BPS)
-- Example of one flow with 4 packets
-- Green are first packet of flow
-- Lets say the client ip pool starts from 16.0.0.1, and the distribution is seq.
+- Let's look at an example of one flow with 4 packets.
+- Green circles represent the first packet of each flow.
+- The client ip pool starts from 16.0.0.1, and the distribution is seq.
-image:images/ip_allocation.png[title="rigt"]
+image:images/ip_allocation.png[title=""]
latexmath:[$Total PPS = \sum_{k=0}^{n}(CPS_{k}\times {flow\_pkts}_{k})$]
latexmath:[$Concurrent flow = \sum_{k=0}^{n}CPS_{k}\times flow\_duration_k $]
+// TBD Ido: The latexmath formulas only looks good in pdf format. In HTML they are not clear.
-
-The above fomulars can be used to calculate the PPS. The TRex throughput depends on the PPS calculated above and the value of m (a multiplier assigned by TRex cli).
+The above formulas can be used to calculate the PPS. The TRex throughput depends on the PPS calculated above and the value of m (a multiplier given as command line argument -m).
The m value is a multiplier of the total CPS of the pcap files.
The CPS of each pcap file is configured in the YAML file.
The BPS depends on the packet size: BPS = PPS * packet_size.
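+
+As a concrete worked example of the formulas above (all numbers below are made up for illustration; the factor of 8 assumes the packet size is given in bytes):
+
+[source,python]
+----
+# Worked example of the PPS/BPS/concurrent-flow formulas (made-up numbers).
+templates = [
+    # (CPS from the YAML, packets per flow, flow duration [sec], avg packet size [bytes])
+    (1.0, 4, 0.5, 64),     # e.g. a small DNS-like template
+    (2.0, 10, 2.0, 600),   # e.g. a larger HTTP-like template
+]
+m = 100.0                  # the -m multiplier from the command line
+
+total_pps  = sum(m * cps * pkts for cps, pkts, _, _ in templates)
+total_bps  = sum(m * cps * pkts * size * 8 for cps, pkts, _, size in templates)
+concurrent = sum(m * cps * dur for cps, _, dur, _ in templates)
+
+print("Total PPS        : %.0f pkt/sec" % total_pps)   # (1*4 + 2*10) * 100 = 2400
+print("Total BPS        : %.0f bit/sec" % total_bps)   # (256 + 12000) * 8 * 100 = 9804800
+print("Concurrent flows : %.0f" % concurrent)          # (1*0.5 + 2*2.0) * 100 = 450
+----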
-==== Client/Server IP allocation
+==== Per template allocation + future plans
- *1) per-template generator*
w : 1
----
-- *2) More distributions will be supported (normal distribution, random distribution, etc)*
+- *2) More distributions will be supported in the future (normal distribution for example)*
Currently, only sequential and random distributions are supported.
- *3) Histogram of tuple pool will be supported*
-This feature gives user more flexibility to define the IP generator.
+This feature will give the user more flexibility in defining the IP generator.
[source,python]
----
-=== Measure Jitter/Latency
+=== Measuring Jitter/Latency
-To measure jitter/latency on high priorty packets (one SCTP or ICMP flow), use `-l [Hz]` where Hz defines the number of packets to send from each port per second.
+To measure jitter/latency using independent flows (SCTP or ICMP), use `-l [Hz]` where Hz defines the number of packets to send from each port per second.
This option measures latency and jitter. We can define the type of traffic used for the latency measurement using the `--l-pkt-mode` option.
| 1 |
ICMP echo request packets from both sides
| 2 |
-*Stateful*, send ICMP requests from one side, and matching ICMP responses from other side.
+Send ICMP requests from one side, and matching ICMP responses from the other side.
This is particularly useful if your DUT drops traffic from outside, and you need to open a pinhole to let the outside traffic in (for example, when testing a firewall).
| 3 |
-send ICMP request packets with a constant 0 sequence number.
+Send ICMP request packets with a constant 0 sequence number from both sides.
|=================
TRex first time configuration
=============================
-:author: hhaim with the Help of Amir Kroparo
+:author: hhaim with the help of Amir Kroparo. New rev fixes by Ido Barnea.
:description: TRex Getting started - installation guide
:revdate: 2014-11-01
-:revnumber: 0.1
+:revnumber: 0.2
:deckjs_theme: swiss
:deckjs_transition: horizontal-slide
:scrollable:
$('#title-slide').css("background-position","center");
$('h1').html('');
$('h3').html('<font size="4">Hanoch Haim </font>');
- $('h4').html('<font size="4">04/2015</font>');
+ $('h4').html('<font size="4">Updated 10/2016</font>');
</script>
++++++++++++++++++
+== General info
+* This guide will help you configure a Cisco ASR1K as the DUT, connected to TRex running in stateful mode.
+* This can be easily adapted to any L3 device. Equivalent commands for configuring Linux as your DUT are shown at the end as well.
+* Two options are given for configuring the router: policy based routing and static routes. You should
+choose the one appropriate for your needs.
+* TRex should be directly connected to ASR1K ports, and will act as both client and server.
-== Simple configuration
+== Setup description
-* TRex does not implement ARP emulation
-* This guide will help you to configure Cisco ASR1K to work with TRex
-* TRex is directly connected to ASR1K ports.
+* TRex will emulate the networks described in the figure below (on each side of the DUT, the router is connected to one or more client/server networks).
-image::images/TrexConfig.png[title="TRex/Router setup"]
-. TRex port 0 - Client side
-. Router TenG 0/0/0
-. Router TenG 0/0/1
-. TRex port 1 - Server side
-
+image::images/trex-asr-setup.png[title="TRex/Router setup"]
+
+== Not supported setup description
+
+* Notice that the following setup is *not* supported (having TRex emulate a group of hosts connected through a switch to the DUT).
+This means that the TRex IP addresses defined in the ``generator'' section should be in a different network than the DUT addresses
+and the TRex addresses defined in the port_info section.
+
+image::images/trex-not-supported-setup.png[title="Not supported setup"]
== TRex configuration
-* TRex act as both client and server side
-* TRex port mac address should configure correctly, so packet generated from port 1 will get to 2 and vice-versa
-* To use the config file you can add this switch `--cfg [file]`
-* Or edit the configuration file in `/etc/trex_cfg.yaml`
+* You can specify the config file to use with the `--cfg` command line argument,
+or use the default config file `/etc/trex_cfg.yaml`.
+* Below is an example of how to configure TRex IP addresses. TRex will issue an ARP request for default_gw,
+and send gratuitous ARP for ip, on each port. This works starting from TRex version 2.10.
+If you want to configure MAC addresses manually (equivalent to static
+ARP), or are running an older TRex version, please see the full description of the config file parameters in the manual.
[source,python]
----
- - port_limit : 2
- port_info : # set eh mac addr
- - dest_mac : [0x0,0x0,0x0,0x1,0x0,0x00] <1>
- src_mac : [0x0,0x0,0x0,0x2,0x0,0x00] <2>
- - dest_mac : [0x0,0x0,0x0,0x3,0x0,0x00] <3>
- src_mac : [0x0,0x0,0x0,0x4,0x0,0x00] <4>
+ - port_limit : 2
+ port_info :
+ - default_gw : 11.11.11.1 <1>
+ ip : 11.11.11.2 <2>
+ - default_gw : 12.12.12.1 <3>
+ ip : 12.12.12.2 <4>
----
-<1> Correspond to TRex port 0 - should be Router TenG 0/0/0 mac-address
-<2> Should be distinc mac-address, router should be configure to sent to this mac-address
-<3> Correspond to TRex port 1 - should be Router TenG 0/0/1 mac-address
-<4> Should be distinc mac-address, router should be configure to sent to this mac-address
+<1> TRex port 0 config - should be the router's TenG 0/0/0 IP.
+TRex will try to resolve this address by sending an ARP request.
+<2> Next hop of the router's TenG 0/0/0. TRex will send gratuitous ARP for this address.
+<3> TRex port 1 config - should be the router's TenG 0/0/1 IP.
+TRex will try to resolve this address by sending an ARP request.
+<4> Next hop of the router's TenG 0/0/1. TRex will send gratuitous ARP for this address.
+
+== TRex emulated server/client IPs definition in traffic config file
-== Router configuration PBR part 1
+* You specify the traffic config file by running TRex with -f <file name> (TRex stateful mode).
+* Example config files exist in the TREX_ROOT/scripts/cfg directory.
+* Add the following section to the traffic config file to define the range of IPs for clients and servers.
-* Router moves packets from port 0->1 and 1->0 without looking into IP address.
+[source,python]
+----
+generator :
+ distribution : "seq"
+ clients_start : "16.0.0.1"
+ clients_end : "16.0.0.255"
+ servers_start : "48.0.0.1"
+ servers_end : "48.0.0.240"
+----
-* TenG 0/0/0 <-> TenG 0/0/1
+* In this example, there are:
+** 255 clients talking to 240 servers
-*Router configuration:*::
+== Router config. Option 1 - static routes
[source,python]
----
interface TenGigabitEthernet0/0/0
- mac-address 0000.0001.0000 <1>
- mtu 4000 <2>
- ip address 11.11.11.11 255.255.255.0 <3>
- ip policy route-map p1_to_p2 <4>
- load-interval 30
+ ip address 11.11.11.1 255.255.255.0
!
-
interface TenGigabitEthernet0/0/1
- mac-address 0000.0003.0000 <5>
- mtu 4000
- ip address 12.11.11.11 255.255.255.0
- ip policy route-map p2_to_p1
- load-interval 30
+ ip address 12.12.12.1 255.255.255.0
!
+ip route 16.0.0.0 255.0.0.0 11.11.11.2 <1>
+ip route 48.0.0.0 255.0.0.0 12.12.12.2 <2>
----
-<1> Configure mac-address to match TRex destination port-0
-<2> Set MTU
-<3> Set an ip address ( routing can't work without this)
-<4> Configure PBR policy - see next slide
-<5> Configure mac-address to match TRex destination port-1
+<1> Route the clients network to the TRex client emulation interface (11.11.11.2, port 0).
+<2> Route the servers network to the TRex server emulation interface (12.12.12.2, port 1).
-== Router configuration PBR part 2
+== Router config. Option 2 - PBR part 1
+
+* The router is configured with policy based routing to forward packets from 0/0/0 to 0/0/1 and from 0/0/1 to 0/0/0, without looking at the destination IP address.
+
+*Router configuration:*::
[source,python]
----
-
-route-map p1_to_p2 permit 10
- set ip next-hop 12.11.11.12 <1>
+interface TenGigabitEthernet0/0/0
+ ip address 11.11.11.1 255.255.255.0 <1>
+ ip policy route-map p1_to_p2 <2>
+ load-interval 30
!
-route-map p2_to_p1 permit 10
- set ip next-hop 11.11.11.12 <2>
+interface TenGigabitEthernet0/0/1
+ ip address 12.12.12.1 255.255.255.0 <1>
+ ip policy route-map p2_to_p1 <2>
+ load-interval 30
+!
----
+<1> Configure the IP address for the port.
+<2> Configure the PBR policy - see the next slide.
-<1> Set the destination packet to be 12.11.11.12 which correspond to TenG 0/0/1
-<2> Set the destination packet to be 11.11.11.12 which correspond to TenG 0/0/0
-
-
-== Router configuration PBR part 3
-
-* What about destination mac-address it should be TRex source mac-address?
-* The folowing configuration address it
+== Router config. Option 2 - PBR part 2
[source,python]
----
- arp 11.11.11.12 0000.0002.0000 ARPA <1>
- arp 12.11.11.12 0000.0004.0000 ARPA <2>
-----
-<1> Destination mac-address of packets sent from If 0/0/0 is matched to TRex source mac-address port-0
-<2> Destination mac-address of packets sent from If 0/0/1 is matched to TRex source mac-address port-1
-== Static-route configuration - TRex
-
-* You can set static range of IPs for client and server side
+route-map p1_to_p2 permit 10
+ set ip next-hop 12.12.12.2 <1>
+!
+route-map p2_to_p1 permit 10
+ set ip next-hop 11.11.11.2 <2>
-[source,python]
-----
-generator :
- distribution : "seq"
- clients_start : "16.0.0.1"
- clients_end : "16.0.0.255"
- servers_start : "48.0.0.1"
- servers_end : "48.0.0.240"
- dual_port_mask : "1.0.0.0"
- tcp_aging : 0
- udp_aging : 0
----
-* In this example, you should expect:
-** Number of clients 255
-** Number of servers 240
+<1> Set the next hop to 12.12.12.2 (TRex port 1), in the subnet of TenG 0/0/1.
+<2> Set the next hop to 11.11.11.2 (TRex port 0), in the subnet of TenG 0/0/0.
-== Static-route configuration - Router
+== Verify cable connections
-[source,python]
-----
-interface TenGigabitEthernet0/0/0
- mac-address 0000.0001.0000
- mtu 4000
- ip address 11.11.11.11 255.255.255.0
-!
-`
-interface TenGigabitEthernet0/0/1
- mac-address 0000.0003.0000
- mtu 4000
- ip address 22.11.11.11 255.255.255.0
-!
-ip route 16.0.0.0 255.0.0.0 11.11.11.12 <1>
-ip route 48.0.0.0 255.0.0.0 22.11.11.12 <2>
-----
-<1> Match the range of TRex YAML ( client side 0/0/0 )
-<2> Match the range of TRex YAML ( server side 0/0/1)
-
-== Verify configuration
-
-* To verify that TRex port-0 is connected to Router 0/0/0 and not 0/0/1 run
+* To verify that TRex port-0 is really connected to Router 0/0/0, you can run the following.
...........................................
-$./t-rex-64 -f cap2/dns.yaml -m 1 -d 100 -l 1000 --lo --lm 1
+$./t-rex-64 -f cap2/dns.yaml -m 1 -d 10 -l 1000 --lo --lm 1
...........................................
* It sends packets only from TRex port-0 ( `--lm 1` )
* To send packets only from TRex port 1, do this:
...........................................
-$./t-rex-64 -f cap2/dns.yaml -m 1 -d 100 -l 1000 --lo --lm 2
+$./t-rex-64 -f cap2/dns.yaml -m 1 -d 10 -l 1000 --lo --lm 2
...........................................
-* In case you are connected to a Switch you must send packet from both direction first
+* If you are connected to a switch, you must first send packets from both directions for a few seconds, to allow
+the switch to learn the MAC addresses of both sides.
...........................................
-$./t-rex-64 -f cap2/dns.yaml -m 1 -d 100 -l 1000
+$./t-rex-64 -f cap2/dns.yaml -m 1 -d 10 -l 1000
...........................................
+== Linux config
+
+* Assuming the same setup with Linux as the DUT instead of the router, you can do the following.
+* Configure the IPs of the Linux interfaces to 11.11.11.1 and 12.12.12.1.
+* Enable IP forwarding so Linux routes between the two interfaces (for example, `sysctl -w net.ipv4.ip_forward=1`).
+* route add -net 48.0.0.0 netmask 255.0.0.0 gw 12.12.12.2
+* route add -net 16.0.0.0 netmask 255.0.0.0 gw 11.11.11.2
-== Static-route configuration - IPV6
+
+== Static route configuration - IPV6
[source,python]
----
interface TenGigabitEthernet1/0/0
- mac-address 0000.0001.0000
- mtu 4000
- ip address 11.11.11.11 255.255.255.0
+ ip address 11.11.11.1 255.255.255.0
ip policy route-map p1_to_p2
load-interval 30
ipv6 enable #<1>
<5> The mac-address setting should match the TRex configuration.
<6> PBR configuration
-
-
-
-
-