X-Git-Url: https://gerrit.fd.io/r/gitweb?p=csit.git;a=blobdiff_plain;f=resources%2Flibraries%2Frobot%2Fdefault.robot;h=932fcaee07bd97fd71ceb17f890ffd7f04f1cb4f;hp=51cafdff96dfcf3bb16165eb1d2f6b7c8c4baf4f;hb=5f6802ba1d16005e7562f0eace81512dddab6762;hpb=c1bdb7115f12e7d4ec586ec0673fd19dce3a2414

diff --git a/resources/libraries/robot/default.robot b/resources/libraries/robot/default.robot
index 51cafdff96..932fcaee07 100644
--- a/resources/libraries/robot/default.robot
+++ b/resources/libraries/robot/default.robot
@@ -13,77 +13,283 @@
 *** Settings ***
 | Variables | resources/libraries/python/topology.py
+| Variables | resources/libraries/python/VatHistory.py
 | Library | resources.libraries.python.topology.Topology
+| Library | resources.libraries.python.VatHistory
+| Library | resources.libraries.python.CpuUtils
 | Library | resources.libraries.python.DUTSetup
+| Library | resources.libraries.python.SchedUtils
 | Library | resources.libraries.python.TGSetup
+| Library | resources.libraries.python.L2Util
 | Library | resources/libraries/python/VppConfigGenerator.py
+| Library | resources/libraries/python/VppCounters.py
 | Library | Collections
 
 *** Keywords ***
-| Setup all DUTs before test
-| | [Documentation] | Setup all DUTs in topology before test execution
+| Configure all DUTs before test
+| | [Documentation] | Setup all DUTs in topology before test execution.
+| | ...
 | | Setup All DUTs | ${nodes}
 
-| Setup all TGs before traffic script
-| | [Documentation] | Prepare all TGs before traffic scripts execution
+| Configure all TGs for traffic script
+| | [Documentation] | Prepare all TGs before traffic script execution.
+| | ...
 | | All TGs Set Interface Default Driver | ${nodes}
 
-| Show statistics on all DUTs
-| | [Documentation] | Show VPP statistics on all DUTs after the test failed
-| | Sleep | 10 | Waiting for statistics to be collected
+| Show VPP version on all DUTs
+| | [Documentation] | Show VPP version verbose on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Vpp show stats | ${nodes['${dut}']}
-
-| Setup '${m}' worker threads and rss '${n}' without HTT on all DUTs
-| | [Documentation] | Setup M worker threads without HTT and rss N in startup
-| | ... | configuration of VPP on all DUTs
-| | ${cpu}= | Catenate | main-core | 0 | corelist-workers
-| | ${cpu}= | Run Keyword If | '${m}' == '1' | Catenate | ${cpu} | 1
-| | ... | ELSE IF | '${m}' == '2' | Catenate | ${cpu} | 1-2
-| | ... | ELSE IF | '${m}' == '4' | Catenate | ${cpu} | 1-4
-| | ... | ELSE IF | '${m}' == '6' | Catenate | ${cpu} | 1-6
-| | ... | ELSE | Fail | Not supported combination
-| | ${rss}= | Catenate | rss | ${n}
-| | Setup worker threads and rss on all DUTs | ${cpu} | ${rss}
-
-| Setup '${m}' worker threads and rss '${n}' with HTT on all DUTs
-| | [Documentation] | Setup M worker threads with HTT and rss N in startup
-| | ... | configuration of VPP on all DUTs
-| | ${cpu}= | Catenate | main-core | 0 | corelist-workers
-| | ${cpu}= | Run Keyword If | '${m}' == '2' | Catenate | ${cpu} | 1,10
-| | ... | ELSE IF | '${m}' == '4' | Catenate | ${cpu} | 1-2,10-11
-| | ... | ELSE IF | '${m}' == '6' | Catenate | ${cpu} | 1-3,10-12
-| | ... | ELSE IF | '${m}' == '8' | Catenate | ${cpu} | 1-4,10-13
-| | ... | ELSE | Fail | Not supported combination
-| | ${rss}= | Catenate | rss | ${n}
-| | Setup worker threads and rss on all DUTs | ${cpu} | ${rss}
-
-| Setup worker threads and rss on all DUTs
-| | [Documentation] | Setup worker threads and rss in startup configuration of
-| | ... | VPP on all DUTs
-| | [Arguments] | ${cpu} | ${rss}
+| | | Vpp show version verbose | ${nodes['${dut}']}
+
+| Show Vpp Errors On All DUTs
+| | [Documentation] | Show VPP errors verbose on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Vpp Show Errors | ${nodes['${dut}']}
+
+| Show VPP trace dump on all DUTs
+| | [Documentation] | Save API trace and dump output on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Vpp api trace save | ${nodes['${dut}']}
+| | | Vpp api trace dump | ${nodes['${dut}']}
+
+| Show VPP vhost on all DUTs
+| | [Documentation] | Show Vhost User on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Vpp Show Vhost | ${nodes['${dut}']}
+
+| Show Bridge Domain Data On All DUTs
+| | [Documentation] | Show Bridge Domain data on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Add CPU config | ${nodes['${dut}']}
-| | | ... | ${cpu}
-| | | Add PCI device | ${nodes['${dut}']}
-| | | Add RSS config | ${nodes['${dut}']}
-| | | ... | ${rss}
-| | | Apply config | ${nodes['${dut}']}
-
-| Reset startup configuration of VPP on all DUTs
-| | [Documentation] | Reset startup configuration of VPP on all DUTs
-| | ${cpu}= | Catenate | main-core | 1
+| | | Vpp Get Bridge Domain Data | ${nodes['${dut}']}
+
+| Setup Scheduler Policy for Vpp On All DUTs
+| | [Documentation] | Set realtime scheduling policy (SCHED_RR) with priority 1
+| | ... | on all VPP worker threads on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Set VPP Scheduling rr | ${nodes['${dut}']}
+
+| Configure crypto device on all DUTs
+| | [Documentation] | Verify if Crypto QAT device virtual functions are
+| | ... | initialized on all DUTs. If parameter force_init is set to True, then
+| | ... | try to initialize them.
+| | ...
+| | ... | *Arguments:*
+| | ... | - ${force_init} - Try to initialize. Type: boolean
+| | ...
+| | ... | *Example:*
+| | ...
+| | ... | \| Configure crypto device on all DUTs \| ${True} \|
+| | ...
+| | [Arguments] | ${force_init}=${False}
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Crypto Device Verify | ${nodes['${dut}']} | force_init=${force_init}
+
+| Configure kernel module on all DUTs
+| | [Documentation] | Verify if specific kernel module is loaded on all DUTs.
+| | ... | If parameter force_load is set to True, then try to load it.
+| | ...
+| | ... | *Arguments:*
+| | ... | - ${module} - Module to verify. Type: string
+| | ... | - ${force_load} - Try to load module. Type: boolean
+| | ...
+| | ... | *Example:*
+| | ...
+| | ... | \| Configure kernel module on all DUTs \| ${module} \| ${True} \|
+| | ...
+| | [Arguments] | ${module} | ${force_load}=${False}
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Kernel Module Verify | ${nodes['${dut}']} | ${module}
+| | | ... | force_load=${force_load}
+
+| Create base startup configuration of VPP on all DUTs
+| | [Documentation] | Create base startup configuration of VPP on all DUTs.
+| | ...
 | | ${duts}= | Get Matches | ${nodes} | DUT*
 | | :FOR | ${dut} | IN | @{duts}
-| | | Remove All PCI Devices | ${nodes['${dut}']}
-| | | Remove All CPU Config | ${nodes['${dut}']}
-| | | Remove Socketmem Config | ${nodes['${dut}']}
-| | | Remove Heapsize Config | ${nodes['${dut}']}
-| | | Remove RSS Config | ${nodes['${dut}']}
-| | | Add CPU Config | ${nodes['${dut}']}
-| | | ... | ${cpu}
-| | | Add PCI Device | ${nodes['${dut}']}
-| | | Apply Config | ${nodes['${dut}']}
+| | | Import Library | resources.libraries.python.VppConfigGenerator
+| | | ... | WITH NAME | ${dut}
+| | | Run keyword | ${dut}.Set Node | ${nodes['${dut}']}
+| | | Run keyword | ${dut}.Add Unix Log
+| | | Run keyword | ${dut}.Add Unix CLI Listen
+| | | Run keyword | ${dut}.Add Unix Nodaemon
+| | | Run keyword | ${dut}.Add DPDK Socketmem | "1024,1024"
+| | | Run keyword | ${dut}.Add Heapsize | "3G"
+| | | Run keyword | ${dut}.Add IP6 Hash Buckets | "2000000"
+| | | Run keyword | ${dut}.Add IP6 Heap Size | "3G"
+
+| Add '${m}' worker threads and '${n}' rxqueues in 3-node single-link circular topology
+| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup\
+| | ... | configuration on all DUTs in 3-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut2_numa}= | Get interfaces numa node | ${dut2}
+| | ... | ${dut2_if1} | ${dut2_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
+| | ${dut2_cpu_main}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
+| | ${dut2_cpu_w}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT2.Add CPU Main Core | ${dut2_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT2.Add CPU Corelist Workers | ${dut2_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+| | Run keyword | DUT2.Add DPDK Dev Default RXQ | ${n}
+
+| Add '${m}' worker threads and '${n}' rxqueues in 2-node single-link circular topology
+| | [Documentation] | Setup M worker threads and N rxqueues in vpp startup\
+| | ... | configuration on all DUTs in 2-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+
+| Add '${m}' worker threads using SMT and '${n}' rxqueues in 3-node single-link circular topology
+| | [Documentation] | Setup M worker threads using SMT and N rxqueues in vpp\
+| | ... | startup configuration on all DUTs in 3-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut2_numa}= | Get interfaces numa node | ${dut2}
+| | ... | ${dut2_if1} | ${dut2_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | ${dut2_cpu_main}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut2_cpu_w}= | Cpu list per node str | ${dut2} | ${dut2_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT2.Add CPU Main Core | ${dut2_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT2.Add CPU Corelist Workers | ${dut2_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+| | Run keyword | DUT2.Add DPDK Dev Default RXQ | ${n}
+
+| Add '${m}' worker threads using SMT and '${n}' rxqueues in 2-node single-link circular topology
+| | [Documentation] | Setup M worker threads using SMT and N rxqueues in vpp\
+| | ... | startup configuration on all DUTs in 2-node single-link topology.
+| | ...
+| | ${m_int}= | Convert To Integer | ${m}
+| | ${dut1_numa}= | Get interfaces numa node | ${dut1}
+| | ... | ${dut1_if1} | ${dut1_if2}
+| | ${dut1_cpu_main}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${1} | cpu_cnt=${1} | smt_used=${True}
+| | ${dut1_cpu_w}= | Cpu list per node str | ${dut1} | ${dut1_numa}
+| | ... | skip_cnt=${2} | cpu_cnt=${m_int} | smt_used=${True}
+| | Run keyword | DUT1.Add CPU Main Core | ${dut1_cpu_main}
+| | Run keyword | DUT1.Add CPU Corelist Workers | ${dut1_cpu_w}
+| | Run keyword | DUT1.Add DPDK Dev Default RXQ | ${n}
+
+| Add no multi seg to all DUTs
+| | [Documentation] | Add No Multi Seg to VPP startup configuration on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Run keyword | ${dut}.Add DPDK No Multi Seg
+
+| Add SNAT to all DUTs
+| | [Documentation] | Add SNAT configuration to all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Run keyword | ${dut}.Add SNAT
+
+| Add cryptodev to all DUTs
+| | [Documentation] | Add Cryptodev to VPP startup configuration on all DUTs.
+| | ...
+| | ... | *Arguments:*
+| | ... | - ${count} - Number of QAT devices. Type: integer
+| | ...
+| | ... | *Example:*
+| | ...
+| | ... | \| Add cryptodev to all DUTs \| ${4} \|
+| | ...
+| | [Arguments] | ${count}
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Run keyword | ${dut}.Add DPDK Cryptodev | ${count}
+
+| Apply startup configuration on all VPP DUTs
+| | [Documentation] | Write startup configuration and restart VPP on all DUTs.
+| | ...
+| | ${duts}= | Get Matches | ${nodes} | DUT*
+| | :FOR | ${dut} | IN | @{duts}
+| | | Run keyword | ${dut}.Apply Config
+| | Update All Interface Data On All Nodes | ${nodes} | skip_tg=${TRUE}
+
+| Save VPP PIDs
+| | [Documentation] | Get PIDs of VPP processes from all DUTs in topology and\
+| | ... | set them as a test variable. The PIDs are stored as dictionary items\
+| | ... | where the key is the host and the value is the PID.
+| | ...
+| | ${setup_vpp_pids}= | Get VPP PIDs | ${nodes}
+| | ${keys}= | Get Dictionary Keys | ${setup_vpp_pids}
+| | :FOR | ${key} | IN | @{keys}
+| | | ${pid}= | Get From Dictionary | ${setup_vpp_pids} | ${key}
+| | | Run Keyword If | $pid is None | FAIL | No VPP PID found on node ${key}
+| | | Run Keyword If | ',' in '${pid}'
+| | | ... | FAIL | More than one VPP PID found on node ${key}: ${pid}
+| | Set Test Variable | ${setup_vpp_pids}
+
+| Verify VPP PID in Teardown
+| | [Documentation] | Check if the VPP PIDs on all DUTs are the same at the end\
+| | ... | of the test as they were at the beginning. If they are not, only a\
+| | ... | message is printed to console and log. The test will not fail.
+| | ...
+| | ${teardown_vpp_pids}= | Get VPP PIDs | ${nodes}
+| | ${err_msg}= | Catenate | ${SUITE NAME} - ${TEST NAME}
+| | ... | \nThe VPP PIDs are not equal!\nTest Setup VPP PIDs:
+| | ... | ${setup_vpp_pids}\nTest Teardown VPP PIDs: ${teardown_vpp_pids}
+| | ${rc} | ${msg}= | Run keyword and ignore error
+| | ... | Dictionaries Should Be Equal
+| | ... | ${setup_vpp_pids} | ${teardown_vpp_pids}
+| | Run Keyword And Return If | '${rc}'=='FAIL' | Log | ${err_msg}
+| | ... | console=yes | level=WARN
+
+| Set up functional test
+| | [Documentation] | Common test setup for functional tests.
+| | ...
+| | Configure all DUTs before test
+| | Save VPP PIDs
+| | Configure all TGs for traffic script
+| | Update All Interface Data On All Nodes | ${nodes}
+| | Reset VAT History On All DUTs | ${nodes}
+
+| Tear down functional test
+| | [Documentation] | Common test teardown for functional tests.
+| | ...
+| | Show Packet Trace on All DUTs | ${nodes}
+| | Show VAT History On All DUTs | ${nodes}
+| | Vpp Show Errors On All DUTs | ${nodes}
+| | Verify VPP PID in Teardown
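
Illustrative usage of the new keywords (a sketch, not part of the patch): a functional test suite imports this resource and points its test setup and teardown at the new common keywords, while a suite that tunes the VPP startup configuration chains the configuration keywords and then applies them. The resource path and the test case name below are assumptions made for the sketch, and the worker-thread keyword additionally assumes the usual topology variables (${dut1}, ${dut1_if1}, ${dut1_if2}, ...) are already set by the suite.

*** Settings ***
| Resource | resources/libraries/robot/default.robot
| Test Setup | Set up functional test
| Test Teardown | Tear down functional test

*** Test Cases ***
| Example startup configuration sequence
| | [Documentation] | Sketch only: build a base VPP startup configuration,\
| | ... | add worker threads and rxqueues, then write and apply it.
| | Create base startup configuration of VPP on all DUTs
| | Add '2' worker threads and '1' rxqueues in 3-node single-link circular topology
| | Add no multi seg to all DUTs
| | Apply startup configuration on all VPP DUTs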