Optimize and Integrate Containers: Part 2
Learn how to optimize the runtime for Intel® Clear Containers in Red Hat Fedora* and Ubuntu*, and create a Docker Swarm* in Ubuntu.
Hi, my name is Manohar Castelino. I'm part of the [Intel®] Clear Containers team. As part of this demo, I'll show you some [Intel] Clear Containers running on [Red Hat] Fedora*, as well as on Ubuntu*. I will highlight some of the key QEMU capabilities that we use in [Intel] Clear Containers. And at the very end, I'll show you [Intel] Clear Containers being orchestrated via Docker Swarm*.
Here I'm going to show you [Intel] Clear Containers running on Fedora. If you notice, Fedora is running a 4.9.13 kernel. And on this machine, the default runtime for Docker has been set to [Intel] Clear Containers. Now let's launch a[n Intel] Clear Container and time how long the launch takes. If you notice, it takes around 775 milliseconds to launch a[n Intel] Clear Container.
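The setup described above can be sketched roughly as follows. The runtime name and binary path are assumptions for illustration (they vary across Clear Containers releases), and the timing numbers will differ per machine:

```shell
# Hypothetical sketch: make Intel Clear Containers the default Docker runtime.
# The runtime name "cc-runtime" and its path are illustrative assumptions.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "default-runtime": "cc-runtime",
  "runtimes": {
    "cc-runtime": { "path": "/usr/bin/cc-runtime" }
  }
}
EOF
sudo systemctl restart docker

# Time a container launch with the default (Clear Containers) runtime,
# as in the demo (~775 ms on the demo machine).
time docker run --rm alpine true
```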
Now let us launch a runC-based container. I'm launching the same Alpine container, but with the runtime set to runC. If you notice, it took around 549 milliseconds. So the launch of a[n Intel] Clear Container takes around 200 milliseconds more than the namespace-based container.
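The comparison run can be reproduced by explicitly selecting the runc runtime on the command line, overriding the configured default:

```shell
# Launch the same Alpine image with the standard runc runtime for comparison
# (~549 ms on the demo machine, vs. ~775 ms for a Clear Container).
time docker run --rm --runtime runc alpine true
```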
Let's take a quick look at what a[n Intel] Clear Container looks like. Each [Intel] Clear Container is launched in its own virtual machine. So you will see a QEMU [Quick EMUlator] instance that corresponds to a[n Intel] Clear Container, with a machine type called pc-lite, which is an optimized machine type used with [Intel] Clear Containers.
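One way to see the per-container VM is to look for the backing QEMU process on the host. The exact binary name and flags depend on the installed version, so this is just a sketch:

```shell
# Look for QEMU instances backing Clear Containers; the machine type
# "pc-lite" appears on the QEMU command line. Binary name is illustrative.
ps aux | grep -i qemu | grep -i 'pc-lite'
```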
The root file system for the [Intel] Clear Container is mapped into the virtual machine using [a non-volatile dual inline memory module] NVDIMM, and it is mounted inside of the [virtual machine] VM using [Direct Access] DAX. Lastly, the container workload itself is mapped into the virtual machine using the 9p file system. This allows transparent use of the overlay file system inside of the [Intel] Clear Container.
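A rough way to observe these mappings is to list the mounts visible inside a running container. The grep patterns here are assumptions about how the DAX-backed rootfs and 9p workload mounts appear:

```shell
# Start a container and list mounts that look DAX- or 9p-backed
# (mount source names are illustrative; output varies by version).
docker run --rm alpine sh -c "mount | grep -E 'dax|9p'"
```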
Next I'll show you [Intel] Clear Containers being orchestrated by Docker Swarm on Ubuntu. Here I'm on a machine running Ubuntu. And on this machine, the default runtime has been set to [Intel] Clear Containers. Now let us create a Docker Swarm.
So I'll go and create a Docker Swarm on this machine. Now let me launch an NGINX* service with three replicas, and let's see how long it takes to launch. In the time that it took to run the command, we have three replicas of NGINX running in [Intel] Clear Containers on this machine. To confirm this, we can see that there are three VMs, one for each container running in the Docker Swarm.
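The swarm and service steps can be sketched like this; the service name and published port are illustrative choices, not from the demo:

```shell
# Initialize a single-node swarm on this machine.
docker swarm init

# Launch an NGINX service with three replicas (name and port are assumed).
time docker service create --name web --replicas 3 --publish 8080:80 nginx

# With Clear Containers as the default runtime, each replica gets its own VM:
docker service ls
ps aux | grep -c qemu
```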
Now let's try to reach the service that we're running. Here I'm going to access the service multiple times. You will notice that each time, we get a different response, indicating that the response is coming from a different NGINX replica. Each response comes from one of the [Intel] Clear Containers that has been launched as part of this swarm.
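Repeated requests to the published port go through the swarm's ingress load balancer, which rotates across the replicas. The port assumes the service was published on 8080; with a stock NGINX image the response body may be identical across replicas, in which case the demo presumably serves per-replica content:

```shell
# Hit the published service port several times; the ingress load balancer
# distributes requests across the three replicas.
for i in 1 2 3 4 5; do
  curl -s http://localhost:8080/ | head -n 1
done
```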
So in summary, in this demo, you have seen [Intel] Clear Containers being used seamlessly across multiple distributions. You have seen them used with all the Docker tools that you are used to, and being orchestrated in a Docker Swarm. If you are interested in trying out [Intel] Clear Containers and the entirety of this demo, you can find them at this link. Thanks for watching.