The goal of this article is to configure an egress policer Quality-of-Service (QoS) instance for a Data Plane Development Kit (DPDK) interface on Open vSwitch* (OvS) with DPDK. This article was written with network admin users in mind who wish to use QoS to guarantee performance for DPDK port types in their Open vSwitch server deployment.
Note: At the time of writing, QoS for OvS with DPDK is only available on the OvS master branch. Users can download the OvS master branch as a zip here. Installation steps for OvS with DPDK are available here.
Figure 1: Test Environment
Note: Both the host and the virtual machines (VMs) used in this setup run Fedora 23 Server 64bit with Linux* kernel 4.4.6. Each VM has a virtual NIC that is connected to the vSwitch bridge via a DPDK vhost user interface. The vnic appears as a Linux kernel device (for example, "ens0") in the VM OS. Ensure there is connectivity between the VMs (for example, ping VM2 from VM1).
QoS in OvS with DPDK
Before we configure QoS we need to understand where it sits in the vSwitch and how it interacts with traffic. When QoS is configured in OvS with DPDK, it operates only on egress traffic, that is, traffic transmitted from a port on the vSwitch.
A list of supported QoS types for a given port (for example, vhost-user2) can be obtained with the following command.
ovs-appctl -t ovs-vswitchd qos/show-types vhost-user2
Currently OvS with DPDK supports only one QoS type, though this may change over time as new QoS types are added. The call above returns the following:
QoS type: egress-policer
An egress policer is the QoS type currently supported by OvS with DPDK. It simply drops packets once a configured transmission rate is exceeded on the interface (a token-bucket implementation). For a physical device, it drops traffic that would otherwise be transmitted out of the host via a NIC. For a virtual interface, such as a DPDK vhost-user port, it drops traffic transmitted from the vSwitch to the guest, in effect limiting the rate at which the guest receives traffic on that port. Figure 2 below provides an illustration of this.
Figure 2: Egress policer QoS configured for vhost-user port
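The token-bucket behavior described above can be sketched in a few lines. This is a minimal illustration of the general technique, not OvS's actual implementation; the class and parameter names are hypothetical, though ‘cir’ and ‘cbs’ mirror the policer parameters configured later in this article.

```python
# Minimal sketch of the token-bucket check an egress policer performs.
# Illustrative only: names and structure are NOT OvS's implementation.

class TokenBucket:
    def __init__(self, cir_bytes_per_sec, cbs_bytes):
        self.cir = cir_bytes_per_sec   # refill rate (Committed Information Rate)
        self.cbs = cbs_bytes           # bucket depth (Committed Burst Size)
        self.tokens = cbs_bytes        # bucket starts full
        self.last = 0.0                # timestamp of the last refill

    def allow(self, pkt_len, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.cbs, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len     # enough tokens: forward the packet
            return True
        return False                   # over rate: drop the packet

bucket = TokenBucket(cir_bytes_per_sec=1_250_000, cbs_bytes=2048)
print(bucket.allow(1500, now=0.0))  # True: the bucket starts full
print(bucket.allow(1500, now=0.0))  # False: only 548 tokens remain
```

Packets are forwarded as long as tokens remain; sustained traffic above the refill rate drains the bucket and subsequent packets are dropped until tokens accumulate again.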
QoS Configuration and Testing
To test the configuration, make sure iPerf is installed on both VMs. Ensure the rpm version matches the guest OS version; in this case the Fedora 64bit rpm should be used. If using a package manager such as ‘dnf’ on Fedora 23, iPerf can be installed with the following command:
dnf install iperf
iPerf can be run in a client mode or server mode. In this example, we will run the iPerf client on VM1 and the iPerf server on VM2.
Test Case without QoS Configured
From VM2, run the following to deploy an iPerf server in UDP mode on port 8080:
iperf -s -u -p 8080
From VM1, run the following to deploy an iPerf client in UDP mode on port 8080 with a transmission bandwidth of 100Mbps:
iperf -c 220.127.116.11 -u -p 8080 -b 100m
This causes VM1 to attempt to transmit UDP traffic to VM2 at a rate of 100 Mbps. After 10 seconds, iPerf outputs a series of values. Run these commands before QoS is configured and you will see results similar to Figure 3 below. Note that we are interested in the "Bandwidth" column of the server report.
Figure 3: Output without QoS configured.
The figures above indicate that a bandwidth of 100Mbps was attained between the VMs.
Test Case with Egress Policer QoS Type Configured
Now an egress policer will be configured on vhost-user2 to police traffic at a rate of 10Mbps with the following command:
ovs-vsctl set port vhost-user2 qos=@newqos -- --id=@newqos create qos type=egress-policer other-config:cir=1250000 other-config:cbs=2048
The relevant parameters are explained below:
- ‘type=egress-policer’: The QoS type to set on the port; in this case, ‘egress-policer’.
- ‘other-config:cir’: The Committed Information Rate, the maximum rate (in bytes per second) at which the port is allowed to send.
- ‘other-config:cbs’: The Committed Burst Size, measured in bytes, representing the depth of the token bucket. At a minimum it should be set to the size of the largest expected packet.
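Since ‘cir’ is expressed in bytes per second rather than the bits per second usually quoted for link rates, it is worth showing the conversion explicitly. The helper below is hypothetical, included only to show where the value 1250000 in the command above comes from.

```python
# Hypothetical helper: convert a target rate in Mbit/s to the CIR value
# (bytes per second) expected by the egress policer configuration.
def mbps_to_cir(mbps):
    return mbps * 1_000_000 // 8   # bits/s divided by 8 gives bytes/s

print(mbps_to_cir(10))  # 1250000, the cir value used in the command above
```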
Repeat the iPerf UDP bandwidth test and you will now see something similar to Figure 4 below.
Figure 4: Output with QoS configured
Note that the attainable bandwidth with QoS configured is now 9.81 Mbps rather than 100 Mbps. iPerf still sends UDP traffic at 100 Mbits/sec from its client on VM1 to its server on VM2; however, the traffic is policed via QoS on the transmission path of the vSwitch vhost-user2 port. This limits the traffic received at the iPerf server on VM2 to ~10 Mbits/sec.
It should be noted that when using TCP traffic, the CBS parameter should be set to a sizable fraction of the CIR; a general rule of thumb is greater than 10%. TCP reacts poorly to dropped packets: each drop triggers retransmissions and congestion backoff, which can push throughput well below the policed rate.
The current QoS configuration for vhost-user2 can be examined with:
ovs-appctl -t ovs-vswitchd qos/show vhost-user2
To remove the QoS configuration from vhost-user2, use:
ovs-vsctl -- destroy QoS vhost-user2 -- clear Port vhost-user2 qos
In this article, we have shown a simple use case where traffic is transmitted between two VMs over Open vSwitch with DPDK configured with a QoS egress policer. We have demonstrated utility commands to list the supported QoS types, configure QoS on a given DPDK port, examine the current QoS configuration, and finally clear a QoS configuration from a port.
Have a question? Feel free to post it on the Open vSwitch discussion mailing list.
To learn more about Open vSwitch with DPDK, readers are encouraged to check out the following videos on Intel Network Builders University.
About the Author
Ian Stokes is a network software engineer with Intel. His work is primarily focused on accelerated software switching solutions in user space running on Intel Architecture. His contributions to Open vSwitch with DPDK include the OvS DPDK QoS API and egress/ingress policer solutions.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.