X-Git-Url: https://gerrit.fd.io/r/gitweb?p=csit.git;a=blobdiff_plain;f=resources%2Flibraries%2Frobot%2Fdefault.robot;h=932fcaee07bd97fd71ceb17f890ffd7f04f1cb4f;hp=742906e94a49f2f47367b1a39ee34c218e718e87;hb=5f6802ba1d16005e7562f0eace81512dddab6762;hpb=72be8262ea5dc0136a21032402b7f4ffa5ff4576

diff --git a/resources/libraries/robot/default.robot b/resources/libraries/robot/default.robot
index 742906e94a..932fcaee07 100644
--- a/resources/libraries/robot/default.robot
+++ b/resources/libraries/robot/default.robot
@@ -13,164 +13,283 @@
 *** Settings ***
 | Variables | resources/libraries/python/topology.py
+| Variables | resources/libraries/python/VatHistory.py
 | Library | resources.libraries.python.topology.Topology
+| Library | resources.libraries.python.VatHistory
 | Library | resources.libraries.python.CpuUtils
 | Library | resources.libraries.python.DUTSetup
+| Library | resources.libraries.python.SchedUtils
 | Library | resources.libraries.python.TGSetup
+| Library | resources.libraries.python.L2Util
 | Library | resources/libraries/python/VppConfigGenerator.py
+| Library | resources/libraries/python/VppCounters.py
 | Library | Collections
 
 *** Keywords ***
-| Setup all DUTs before test
-| | [Documentation] | Setup all DUTs in topology before test execution
+| Configure all DUTs before test
+| | [Documentation] | Setup all DUTs in topology before test execution.
+| | ...
 | | Setup All DUTs | ${nodes}
 
-| Setup all TGs before traffic script
-| | [Documentation] | Prepare all TGs before traffic scripts execution
+| Configure all TGs for traffic script
+| | [Documentation] | Prepare all TGs before traffic scripts execution.
+| | ...
 | | All TGs Set Interface Default Driver | ${nodes}
 
-| Show vpp version on all DUTs
-| | [Documentation] | Show VPP version verbose on all DUTs
+| Show VPP version on all DUTs
+| | [Documentation] | Show VPP version verbose on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Vpp show version verbose | ${nodes['${dut}']}
 
-| Show vpp trace dump on all DUTs
-| | [Documentation] | Save API trace and dump output on all DUTs
+| Show Vpp Errors On All DUTs
+| | [Documentation] | Show VPP errors verbose on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Vpp Show Errors | ${nodes['${dut}']}
+
+| Show VPP trace dump on all DUTs
+| | [Documentation] | Save API trace and dump output on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
 | | | Vpp api trace save | ${nodes['${dut}']}
 | | | Vpp api trace dump | ${nodes['${dut}']}
 
-| Add '${m}' worker threads and rxqueues '${n}' in 3-node single-link topo
-| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup
-| | ... | configuration on all DUTs in 3-node single-link topology.
-| | ${m_int}= | Convert To Integer | ${m}
-| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
-| | ... | ${dut1_if1} | ${dut1_if2}
-| | ${dut2_numa}= | Get interfaces numa node | ${dut2}
-| | ... | ${dut2_if1} | ${dut2_if2}
-| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
-| | ... | cpu_cnt=${1}
-| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
-| | ... | skip_cnt=${1} | cpu_cnt=${m_int}
-| | ${dut2_cpu_main}= | Cpu list per node str | ${dut2} | ${dut2_numa}
-| | ... | cpu_cnt=${1}
-| | ${dut2_cpu_w}= | Cpu list per node str | ${dut2} | ${dut2_numa}
-| | ... | skip_cnt=${1} | cpu_cnt=${m_int}
-| | ${dut1_cpu}= | Catenate | main-core | ${dut1_cpu_main}
-| | ... | corelist-workers | ${dut1_cpu_w}
-| | ${dut2_cpu}= | Catenate | main-core | ${dut2_cpu_main}
-| | ... | corelist-workers | ${dut2_cpu_w}
-| | ${rxqueues}= | Catenate | num-rx-queues | ${n}
-| | Add CPU config | ${dut1} | ${dut1_cpu}
-| | Add CPU config | ${dut2} | ${dut2_cpu}
-| | Add rxqueues config | ${dut1} | ${rxqueues}
-| | Add rxqueues config | ${dut2} | ${rxqueues}
-
-| Add '${m}' worker threads and rxqueues '${n}' without HTT to all DUTs
-| | [Documentation] | Setup M worker threads without HTT and rxqueues N in
-| | ... | startup configuration of VPP to all DUTs
-| | ${cpu}= | Catenate | main-core | 0 | corelist-workers
-| | ${cpu}= | Run Keyword If | '${m}' == '1' | Catenate | ${cpu} | 1
-| | ... | ELSE IF | '${m}' == '2' | Catenate | ${cpu} | 1-2
-| | ... | ELSE IF | '${m}' == '4' | Catenate | ${cpu} | 1-4
-| | ... | ELSE IF | '${m}' == '6' | Catenate | ${cpu} | 1-6
-| | ... | ELSE | Fail | Not supported combination
-| | ${rxqueues}= | Catenate | num-rx-queues | ${n}
-| | Add worker threads and rxqueues to all DUTs | ${cpu} | ${rxqueues}
-
-| Add '${m}' worker threads and rxqueues '${n}' with HTT to all DUTs
-| | [Documentation] | Setup M worker threads with HTT and rxqueues N in
-| | ... | startup configuration of VPP to all DUTs
-| | ${cpu}= | Catenate | main-core | 0 | corelist-workers
-| | ${cpu}= | Run Keyword If | '${m}' == '2' | Catenate | ${cpu} | 1,10
-| | ... | ELSE IF | '${m}' == '4' | Catenate | ${cpu} | 1-2,10-11
-| | ... | ELSE IF | '${m}' == '6' | Catenate | ${cpu} | 1-3,10-12
-| | ... | ELSE IF | '${m}' == '8' | Catenate | ${cpu} | 1-4,10-13
-| | ... | ELSE | Fail | Not supported combination
-| | ${rxqueues}= | Catenate | num-rx-queues | ${n}
-| | Add worker threads and rxqueues to all DUTs | ${cpu} | ${rxqueues}
-
-| Add worker threads and rxqueues to all DUTs
-| | [Documentation] | Setup worker threads and rxqueues in VPP startup
-| | ... | configuration to all DUTs
+| Show VPP vhost on all DUTs
+| | [Documentation] | Show Vhost User on all DUTs.
 | | ...
-| | ... | *Arguments:*
-| | ... | - ${cpu} - CPU configuration. Type: string
-| | ... | - ${rxqueues} - rxqueues configuration. Type: string
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Vpp Show Vhost | ${nodes['${dut}']}
+
+| Show Bridge Domain Data On All DUTs
+| | [Documentation] | Show Bridge Domain data on all DUTs.
 | | ...
-| | ... | *Example:*
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Vpp Get Bridge Domain Data | ${nodes['${dut}']}
+
+| Setup Scheduler Policy for Vpp On All DUTs
+| | [Documentation] | Set realtime scheduling policy (SCHED_RR) with priority 1
+| | ... | on all VPP worker threads on all DUTs.
 | | ...
-| | ... | \| Add worker threads and rxqueues to all DUTs \| main-core 0 \
-| | ... | \| rxqueues 2
-| | [Arguments] | ${cpu} | ${rxqueues}
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Add CPU config | ${nodes['${dut}']}
-| | | ... | ${cpu}
-| | | Add rxqueues config | ${nodes['${dut}']}
-| | | ... | ${rxqueues}
-
-| Add all PCI devices to all DUTs
-| | [Documentation] | Add all available PCI devices from topology file to VPP
-| | ... | startup configuration to all DUTs
+| | | Set VPP Scheduling rr | ${nodes['${dut}']}
+
+| Configure crypto device on all DUTs
+| | [Documentation] | Verify if Crypto QAT device virtual functions are
+| | ... | initialized on all DUTs. If parameter force_init is set to True, then
+| | ... | try to initialize them.
+| | ...
+| | ... | *Arguments:*
+| | ... | - ${force_init} - Try to initialize. Type: boolean
+| | ...
+| | ... | *Example:*
+| | ...
+| | ... | \| Configure crypto device on all DUTs \| ${True} \|
+| | ...
+| | [Arguments] | ${force_init}=${False}
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Add PCI all devices | ${nodes['${dut}']}
+| | | Crypto Device Verify | ${nodes['${dut}']} | force_init=${force_init}
 
-| Add PCI device to DUT
-| | [Documentation] | Add PCI device to VPP startup configuration
-| | ... | to DUT specified as argument
+| Configure kernel module on all DUTs
+| | [Documentation] | Verify if specific kernel module is loaded on all DUTs.
+| | ... | If parameter force_load is set to True, then try to load it.
 | | ...
 | | ... | *Arguments:*
-| | ... | - ${node} - DUT node. Type: dictionary
-| | ... | - ${pci_address} - PCI address. Type: string
+| | ... | - ${module} - Module to verify. Type: string
+| | ... | - ${force_load} - Try to load module. Type: boolean
 | | ...
 | | ... | *Example:*
 | | ...
-| | ... | \| Add PCI device to DUT \| ${nodes['DUT1']} \
-| | ... | \| 0000:00:00.0
-| | [Arguments] | ${node} | ${pci_address}
-| | Add PCI device | ${node} | ${pci_address}
+| | ... | \| Configure kernel module on all DUTs \| igb_uio \| ${True} \|
+| | ...
+| | [Arguments] | ${module} | ${force_load}=${False}
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Kernel Module Verify | ${nodes['${dut}']} | ${module}
+| | | ... | force_load=${force_load}
+
+| Create base startup configuration of VPP on all DUTs
+| | [Documentation] | Create base startup configuration of VPP on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Import Library | resources.libraries.python.VppConfigGenerator
+| | | ... | WITH NAME | ${dut}
+| | | Run keyword | ${dut}.Set Node | ${nodes['${dut}']}
+| | | Run keyword | ${dut}.Add Unix Log
+| | | Run keyword | ${dut}.Add Unix CLI Listen
+| | | Run keyword | ${dut}.Add Unix Nodaemon
+| | | Run keyword | ${dut}.Add DPDK Socketmem | "1024,1024"
+| | | Run keyword | ${dut}.Add Heapsize | "3G"
+| | | Run keyword | ${dut}.Add IP6 Hash Buckets | "2000000"
+| | | Run keyword | ${dut}.Add IP6 Heap Size | "3G"
+
+| Add '${m}' worker threads and '${n}' rxqueues in 3-node single-link circular topology
+| | [Documentation] | Setup M worker threads and N rxqueues in VPP startup\
+| | ... | configuration on all DUTs in 3-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut2_numa}= | Get interfaces numa node | ${dut2}
+| | ... | ${dut2_if1} | ${dut2_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
+| | ${dut2_cpu_main}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
+| | ${dut2_cpu_w}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT2.Add CPU Main Core | ${dut2_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT2.Add CPU Corelist Workers | ${dut2_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+| | Run keyword | DUT2.Add DPDK Dev Default RXQ | ${n}
+
+| Add '${m}' worker threads and '${n}' rxqueues in 2-node single-link circular topology
+| | [Documentation] | Setup M worker threads and N rxqueues in VPP startup\
+| | ... | configuration on all DUTs in 2-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+
+| Add '${m}' worker threads using SMT and '${n}' rxqueues in 3-node single-link circular topology
+| | [Documentation] | Setup M worker threads using SMT and N rxqueues in vpp\
+| | ... | startup configuration on all DUTs in 3-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut2_numa}= | Get interfaces numa node | ${dut2}
+| | ... | ${dut2_if1} | ${dut2_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | ${dut2_cpu_main}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut2_cpu_w}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT2.Add CPU Main Core | ${dut2_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT2.Add CPU Corelist Workers | ${dut2_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+| | Run keyword | DUT2.Add DPDK Dev Default RXQ | ${n}
 
-| Add No Multi Seg to all DUTs
-| | [Documentation] | Add No Multi Seg to VPP startup configuration to all
-| | ... | DUTs
+| Add '${m}' worker threads using SMT and '${n}' rxqueues in 2-node single-link circular topology
+| | [Documentation] | Setup M worker threads using SMT and N rxqueues in vpp\
+| | ... | startup configuration on all DUTs in 2-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+
+| Add no multi seg to all DUTs
+| | [Documentation] | Add No Multi Seg to VPP startup configuration on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Add No Multi Seg Config | ${nodes['${dut}']}
+| | | Run keyword | ${dut}.Add DPDK No Multi Seg
 
-| Add Enable Vhost User to all DUTs
-| | [Documentation] | Add Enable Vhost User to VPP startup configuration to all
-| | ... | DUTs
+| Add SNAT to all DUTs
+| | [Documentation] | Add SNAT configuration to all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Add Enable Vhost User Config | ${nodes['${dut}']}
+| | | Run keyword | ${dut}.Add SNAT
 
-| Remove startup configuration of VPP from all DUTs
-| | [Documentation] | Remove VPP startup configuration from all DUTs
+| Add cryptodev to all DUTs
+| | [Documentation] | Add Cryptodev to VPP startup configuration on all DUTs.
+| | ...
+| | ... | *Arguments:*
+| | ... | - ${count} - Number of QAT devices. Type: integer
+| | ...
+| | ... | *Example:*
+| | ...
+| | ... | \| Add cryptodev to all DUTs \| ${4} \|
+| | ...
+| | [Arguments] | ${count}
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Remove All PCI Devices | ${nodes['${dut}']}
-| | | Remove All CPU Config | ${nodes['${dut}']}
-| | | Remove Socketmem Config | ${nodes['${dut}']}
-| | | Remove Heapsize Config | ${nodes['${dut}']}
-| | | Remove Rxqueues Config | ${nodes['${dut}']}
-| | | Remove No Multi Seg Config | ${nodes['${dut}']}
-| | | Remove Enable Vhost User Config | ${nodes['${dut}']}
-
-| Setup default startup configuration of VPP on all DUTs
-| | [Documentation] | Setup default startup configuration of VPP to all DUTs
-| | Remove startup configuration of VPP from all DUTs
-| | Add '1' worker threads and rxqueues '1' without HTT to all DUTs
-| | Add all PCI devices to all DUTs
-| | Apply startup configuration on all VPP DUTs
+| | | Run keyword | ${dut}.Add DPDK Cryptodev | ${count}
 
 | Apply startup configuration on all VPP DUTs
-| | [Documentation] | Apply startup configuration of VPP and restart VPP on all
-| | ... | DUTs
+| | [Documentation] | Write startup configuration and restart VPP on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Apply Config | ${nodes['${dut}']}
+| | | Run keyword | ${dut}.Apply Config
 | | Update All Interface Data On All Nodes | ${nodes} | skip_tg=${TRUE}
+
+| Save VPP PIDs
+| | [Documentation] | Get PIDs of VPP processes from all DUTs in topology and\
+| | ... | set them as a test variable. The PIDs are stored as dictionary items\
+| | ... | where the key is the host and the value is the PID.
+| | ...
+| | ${setup_vpp_pids}= | Get VPP PIDs | ${nodes}
+| | ${keys}= | Get Dictionary Keys | ${setup_vpp_pids}
+| | :FOR | ${key} | IN | @{keys}
+| | | ${pid}= | Get From Dictionary | ${setup_vpp_pids} | ${key}
+| | | Run Keyword If | $pid is None | FAIL | No VPP PID found on node ${key}
+| | | Run Keyword If | ',' in '${pid}'
+| | | ... | FAIL | More than one VPP PID found on node ${key}: ${pid}
+| | Set Test Variable | ${setup_vpp_pids}
+
+| Verify VPP PID in Teardown
+| | [Documentation] | Check if the VPP PIDs on all DUTs are the same at the end\
+| | ... | of the test as they were at the beginning. If they are not, only a\
+| | ... | message is printed to the console and log. The test will not fail.
+| | ...
+| | ${teardown_vpp_pids}= | Get VPP PIDs | ${nodes}
+| | ${err_msg}= | Catenate | ${SUITE NAME} - ${TEST NAME}
+| | ... | \nThe VPP PIDs are not equal!\nTest Setup VPP PIDs:
+| | ... | ${setup_vpp_pids}\nTest Teardown VPP PIDs: ${teardown_vpp_pids}
+| | ${rc} | ${msg}= | Run keyword and ignore error
+| | ... | Dictionaries Should Be Equal
+| | ... | ${setup_vpp_pids} | ${teardown_vpp_pids}
+| | Run Keyword And Return If | '${rc}'=='FAIL' | Log | ${err_msg}
+| | ... | console=yes | level=WARN
+
+| Set up functional test
+| | [Documentation] | Common test setup for functional tests.
+| | ...
+| | Configure all DUTs before test
+| | Save VPP PIDs
+| | Configure all TGs for traffic script
+| | Update All Interface Data On All Nodes | ${nodes}
+| | Reset VAT History On All DUTs | ${nodes}
+
+| Tear down functional test
+| | [Documentation] | Common test teardown for functional tests.
+| | ...
+| | Show Packet Trace on All DUTs | ${nodes}
+| | Show VAT History On All DUTs | ${nodes}
+| | Vpp Show Errors On All DUTs | ${nodes}
+| | Verify VPP PID in Teardown