Reduce ring size for dpdk NICs and overall mem footprint 68/1768/2
author Damjan Marion <damarion@cisco.com>
Sun, 26 Jun 2016 18:16:57 +0000 (20:16 +0200)
committer Dave Barach <openvpp@barachs.net>
Tue, 28 Jun 2016 15:08:45 +0000 (15:08 +0000)
commit a06dfb39c6bee3fbfd702c10e1e1416b98e65455
tree 2f1e4dc672f0f3af8bfce51aa644618e72d15013
parent 310dca43a3d0c1a541d91f54a8034401c702c8e5
Reduce ring size for dpdk NICs and overall mem footprint

The size of the interface descriptor rings has a direct impact
on Last Level Cache utilization and can significantly affect performance.
In general, a smaller ring size is a good idea as long as
the ring holds enough buffers to accommodate line rate.
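
As a rough illustration (not part of the patch itself): at 10 GbE line
rate with minimum-size 64-byte frames, roughly 14.88 Mpps arrive, so a
512-entry RX ring absorbs about 512 / 14.88e6 ≈ 34 µs of scheduling
jitter before packets are dropped, and a 1024-entry ring about 69 µs,
while keeping the descriptor and buffer working set small.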

Here we reduce the ring size to 1024, which is still larger
than the lab-verified 512 buffers per ring.
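
For illustration only, the change amounts to lowering the per-queue
descriptor defaults that device initialization eventually hands to the
standard DPDK queue setup calls. The macro names and helper below are
hypothetical, not a quote of dpdk_priv.h or init.c:

    #include <rte_ethdev.h>

    /* Hypothetical names; the patch sets both defaults to 1024. */
    #define DPDK_NB_RX_DESC_DEFAULT 1024
    #define DPDK_NB_TX_DESC_DEFAULT 1024

    static int
    dpdk_setup_queues (uint16_t port_id, uint16_t queue_id,
                       unsigned socket_id, struct rte_mempool * mp)
    {
      int rv;

      /* The defaults end up as the nb_rx_desc / nb_tx_desc arguments. */
      rv = rte_eth_rx_queue_setup (port_id, queue_id,
                                   DPDK_NB_RX_DESC_DEFAULT, socket_id,
                                   0 /* use PMD default rxconf */, mp);
      if (rv < 0)
        return rv;

      return rte_eth_tx_queue_setup (port_id, queue_id,
                                     DPDK_NB_TX_DESC_DEFAULT, socket_id,
                                     0 /* use PMD default txconf */);
    }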

Indirectly, this also reduces the memory footprint, as the buffer
allocation can be smaller: it is now 16384 buffers (previously 32768).
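
As a sizing sketch only (init.c builds its pools differently, with VPP
buffer metadata; the pool name and parameters here are assumptions):
each fully populated 1024-entry RX ring pins 1024 buffers, so a
16384-buffer pool still covers a healthy number of queues plus packets
in flight through the graph and sitting in TX rings.

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    #define NB_RX_DESC 1024    /* per RX queue after this patch */
    #define NB_MBUFS   16384   /* per pool; previously 32768 */

    static struct rte_mempool *
    create_example_pool (void)
    {
      /* 16384 mbufs / 1024 descriptors = up to 16 full RX rings,
         before counting buffers queued elsewhere. */
      return rte_pktmbuf_pool_create ("example_mbuf_pool", NB_MBUFS,
                                      256 /* per-lcore cache */,
                                      0 /* priv_size */,
                                      RTE_MBUF_DEFAULT_BUF_SIZE,
                                      rte_socket_id ());
    }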

This patch also fixes an issue with the i40e vector PMD, which was
leaking buffers with the previous default ring sizes.

Change-Id: I58fb40586304b2f0cb5de9a444055da3cd3acb53
Signed-off-by: Damjan Marion <damarion@cisco.com>
vnet/vnet/devices/dpdk/dpdk.h
vnet/vnet/devices/dpdk/dpdk_priv.h
vnet/vnet/devices/dpdk/init.c