Vector Packet Processing Using FD.io
FD.io Vector Packet Processing (VPP) is used to build discrete packet processing appliances, cloud and cloud-native infrastructure, and virtual and cloud-native network functions. In this video, Sujata Tibrewala from Intel gives an overview of Vector Packet Processing, or VPP, within the FD.io project, along with a few of the many FD.io use cases.
Hi, I’m Sujata from Intel. In this video I’ll give you an overview of Vector Packet Processing, or VPP, within the FD.io project.
FD.io is a Linux Foundation project that hosts many sub-projects, including VPP, as well as other projects focused on networking and storage. VPP is a software packet processor designed to take advantage of modern CPU architectures and packet-acceleration libraries such as DPDK. Now let's go into some more details about VPP.
VPP is used to build discrete packet processing appliances, cloud and cloud-native infrastructure, and virtual and cloud-native network functions.
It is a feature-rich network stack, with support for many protocol layers, including Layer 2, Layer 3, and now Layer 4. VPP also implements all the standard overlays, telemetry, traffic management, security, and control-plane features.
Another great thing about VPP is that it is fast -- typically achieving data rates measured in many millions of packets per second.
Designed for fantastic scalability, it achieves linear scaling with the number of available cores. In addition, it maintains performance even with the large switching and routing tables necessary for production environments.
It is also deterministic, typically achieving around 15 microseconds of latency for Layer 2 switching.
VPP is generally benchmarked with multiple cores, thousands of access control list entries, and millions of bridge MAC addresses and IP routes.
It has been benchmarked running on an industry standard 2RU server, where it broke the 1Tbps throughput boundary. This was made possible by the new family of Intel Xeon Scalable Processors and DPDK.
VPP is extensible. Many features have already been implemented as optional plugins, and you can easily enhance and customize it to meet your needs.
It is also developer friendly, with easy tracing, debugging and counters that provide a deep insight into the efficiency and optimization of the packet processing pipeline on Intel Micro-architectures.
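As an illustration of that tracing support, packet traces and per-node runtime counters are typically inspected from VPP's debug CLI. A minimal session might look like the following; the packet count and input node are examples, and exact node names depend on your configuration:

```
# capture the next 50 packets arriving at the DPDK input node
vppctl trace add dpdk-input 50

# display the captured per-packet path through the graph nodes
vppctl show trace

# per-node statistics: calls, vectors, and clocks per packet
vppctl show runtime

# clear the trace buffer when done
vppctl clear trace
```

The `show runtime` output is particularly useful for spotting which graph node is consuming the most cycles per packet.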
VPP has been ported across architectures including x86, ARM, and PowerPC. It can scale down to a device as small as a Raspberry Pi and up to a standard server based on Intel Xeon Scalable Processors.
VPP has a rigorous benchmarking and verification project called CSIT (Continuous System Integration and Testing). It ensures that, patch by patch, build by build, and release by release, VPP continues to maintain its industry-leading performance.
Now that we’ve covered features and performance of VPP, let’s talk more about the use cases.
Although VPP is relatively new, there are already some great examples where it is used in production-ready commercial network functions -- from vendors and operators such as Cisco, ZTE, Yahoo Japan, and Alibaba.
The following is a sampling of the many use cases that can benefit from VPP:
Network connectivity for enterprises using commodity platforms.
Cloud load balancers supporting both Kubernetes and OpenStack deployments, with Maglev, L3DSR, and kube-proxy implementations.
Discrete appliances such as Border Relays implementing multiple variations of network address translation, or NAT.
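For the NAT use case above, VPP's NAT44 plugin is driven from the same debug CLI. A configuration sketch is shown below; command syntax varies between VPP releases, and the interface names and address are placeholders:

```
# translate inside traffic on one interface to an outside address
vppctl nat44 add address 203.0.113.1
vppctl set interface nat44 in GigabitEthernet0/8/0 out GigabitEthernet0/a/0

# inspect active translation sessions
vppctl show nat44 sessions
```

Variants of this same plugin family underpin border-relay deployments such as MAP and other carrier-grade NAT schemes.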
These are only a few of the use cases for VPP. Be sure to follow the links to learn more, contribute to VPP, and use it to develop your packet processing applications.
Thanks for watching and be sure to like and subscribe.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.