How Developers Can Benefit from the DPDK Packet Framework

DPDK architects designed the DPDK Packet Framework with the following objectives in mind:

  • To provide a standard methodology for building complex packet processing pipelines.
  • To be the best trade-off between flexibility and performance, with performance taking the higher priority.
  • To provide a framework that is logically similar to OpenFlow.

With the DPDK Packet Framework, you use port, table, action, and mbuf elements to build an application infrastructure from ingress to egress. Using configuration files, you can generate the application you need, such as a router, security, or load balancing application. Available pre-defined pipeline types include router, ARP, flow action, firewall, and KNI for the kernel network stack. You can also define your own pipeline types.
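As a rough sketch of what such a configuration file might look like, modeled on the .cfg format of the legacy ip_pipeline sample application (the section names, type values, and keys here are assumptions; check the Packet Framework documentation for the exact syntax):

```ini
; One section per pipeline stage; "type" selects a prefabricated
; stage and "core" maps the stage to a CPU core.
[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = ROUTING
core = 1
pktq_in = RXQ0.0
pktq_out = TXQ0.0
```

Swapping `type = ROUTING` for another pipeline type is, in principle, all it takes to generate a different application from the same skeleton.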

This video provides an overview of the DPDK Packet Framework and demonstrates implementation of a simple pass-through pipeline.

Check out the companion video: Deep Dive Into the Architecture of a Pipeline Stage

For more information:

Learn more about DPDK

Packet Framework section of the DPDK Programmer's Guide

Subscribe to the Intel® Software YouTube channel

Hi, I'm MJ from Intel. Today we are going to see how you can benefit from the DPDK Packet Framework and learn how you can generate DPDK applications using your own recipe. The elements of the DPDK Packet Framework can be compared to atomic particles: simple and small to begin with, but you can build a complex infrastructure with them.

The DPDK Packet Framework has simple elements like ports and tables. You can build your entire application from ingress to egress with them. DPDK already has 40 or more sample applications, each one of them a skeleton for a specific vertical segment. Are you building a router? Then you have a router sample application. Are you building a security appliance? You have a security sample application.

The DPDK Packet Framework, by contrast, can be thought of as an application generator rather than an application, in that it can generate a security application, a router application, or a load balancer just through configuration files. Who would use the DPDK Packet Framework? Architects, application developers, and performance tuning experts. Maybe you are coming from a background of using hardware for packet processing, so we totally understand when you say, "I don't want to write code to move packets, I just want to set knobs and press buttons."

Or maybe you are interested in ways to do modeling: what-if scenarios to decide how many cores, and in what topology, it takes to realize your performance target. Our architects had many objectives when designing the DPDK Packet Framework. Let me tell you three. First, to provide a standard methodology for building complex packet processing pipelines. Second, to be the best trade-off between flexibility and performance, with performance, of course, taking the higher priority. And third, to provide a framework that is logically similar to OpenFlow.

Let's look at how you can configure pipelines for an application by taking a bird's-eye view of the anatomy of a pipeline. Here's an example of a pass-through pipeline. It's available as a sample prefabricated stage. You mention the type of pipeline you want in the config file, and the framework generates an instance of the pipeline for you. Input and output ports are specified for packet in and packet out for the stage.
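To make the pass-through stage concrete, here is a minimal, self-contained C sketch of what such a stage does conceptually. This is an illustration only, not the DPDK API: the structs and function names are invented for the example, and plain integer packet ids stand in for mbufs.

```c
#include <assert.h>
#include <stddef.h>

#define RING_SIZE 8

/* A port is modeled as a tiny ring of packet ids (stand-ins for mbufs). */
struct port {
    int pkts[RING_SIZE];
    size_t count;
};

/* "RX": enqueue one packet id on a port; -1 means the ring is full. */
int port_enqueue(struct port *p, int pkt)
{
    if (p->count == RING_SIZE)
        return -1;
    p->pkts[p->count++] = pkt;
    return 0;
}

/* One iteration of the pass-through stage: drain the input port and
 * forward every packet, untouched, to the output port ("TX"). */
size_t passthrough_run(struct port *in, struct port *out)
{
    size_t forwarded = 0;
    for (size_t i = 0; i < in->count; i++)
        if (port_enqueue(out, in->pkts[i]) == 0)
            forwarded++;
    in->count = 0;
    return forwarded;
}
```

In the real framework, the pipeline runner repeatedly invokes this kind of stage on the core you assigned to it.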

Next, you assign a core to that pipeline stage. You can always create your own pipeline types. Some of the pipeline types that are already available for you to use are shown here: router, ARP, flow action, firewall, and KNI. As you can see, compared to the pass-through stage, which only had RX and TX, these stages come with handcrafted code for table lookup and match-action specific functionality. In addition to the handcrafted code, you also get these objects: ports, tables, actions, and mbufs. To put it all together, take a look at the pipeline anatomy.
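The table-lookup idea behind those richer stages can be sketched as a toy match-action table. Again this is an illustration, not DPDK's rte_table API: each entry matches a destination IP and carries an action (here, the output port), and a miss falls back to a default action. A linear scan stands in for the hash and LPM table implementations the framework actually provides.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TABLE_MISS (-1)

/* One table entry: a match field plus an action. */
struct table_entry {
    uint32_t dst_ip;   /* match field */
    int out_port;      /* action: forward to this port */
};

/* Look up a destination IP; return the action's output port,
 * or TABLE_MISS (the default action) if no entry matches. */
int table_lookup(const struct table_entry *tbl, size_t n, uint32_t dst_ip)
{
    for (size_t i = 0; i < n; i++)
        if (tbl[i].dst_ip == dst_ip)
            return tbl[i].out_port;
    return TABLE_MISS;
}
```

This match-then-act structure is what makes the framework logically similar to OpenFlow: the pipeline is a sequence of tables, each pairing a lookup with an action.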

The picture shows two configurations: one, each core handling only one pipeline stage; and two, one core handling all three stages. In fact, this is how you will do what-if modeling. If your core has enough muscle power to handle the functionality of more than one stage, you can assign all those stages to that one core.
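A hedged sketch of how those two core mappings might look in an ip_pipeline-style configuration file (the section names, type values, and keys are assumptions for illustration):

```ini
; Configuration 1: one core per pipeline stage
[PIPELINE1]
type = FLOW_CLASSIFICATION
core = 1

[PIPELINE2]
type = FLOW_ACTIONS
core = 2

[PIPELINE3]
type = ROUTING
core = 3

; Configuration 2: the same three stages mapped to a single core,
; viable when that core has the headroom for all three.
; (Set core = 1 in each of the three sections above.)
```

Because only the `core` values change, you can test both topologies against your performance target without touching any code.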

If you are planning to increase the load by adding more functionality to a pipeline and you want one core handling only that pipeline, then you will assign one core dedicated to that pipeline. You can find the list of all available pipeline stages by looking at the source code. Thanks for watching. To learn more about the DPDK Packet Framework, follow the links provided. And do remember to like this video and subscribe.