Optimize and Integrate Containers: Part 1
Learn how to optimize Intel® Clear Containers to boot faster, and integrate them with Docker* and Kubernetes.
Hi, my name is Manohar Castelino. I'm part of the [Intel®] Clear Containers team, and today I'd like to give you a quick overview of [Intel] Clear Containers. I will give you an overview of the technology behind [Intel] Clear Containers and how we make [Intel] Clear Containers small and fast to boot. I'm also going to give you a brief overview of how we integrated [Intel] Clear Containers with both Docker* and Kubernetes.
First of all, [Intel] Clear Containers launch containers in virtual machines, but without the usual downsides of virtual machines. [Intel] Clear Containers launch containers in very lightweight virtual machines. We have achieved this by creating a new platform called PC Lite. This is a legacy-free platform on which you can boot a kernel without a BIOS. That means the system boots up very quickly, and the kernel doesn't need to carry any legacy support.
Within [Intel] Clear Containers, we ship a very minimal kernel that is custom configured to support just the PC Lite platform, with just enough features to launch the container workload. Furthermore, we carry a very minimal, systemd-based root file system, which is just enough to launch the container workload. This root file system is mounted inside of the virtual machine using a combination of virtual non-volatile memory (a virtual NVDIMM) and [Direct Access] DAX.
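To illustrate the idea, here is a rough sketch of how a root file system image can be exposed to a guest as a virtual NVDIMM with QEMU* and then mounted with DAX. The machine type, paths, and sizes below are illustrative assumptions, not the exact options [Intel] Clear Containers generates.

    # Host: expose rootfs.img to the guest as an emulated NVDIMM (illustrative options)
    qemu-system-x86_64 -machine pc,nvdimm=on -m 2G,slots=2,maxmem=8G \
      -object memory-backend-file,id=mem0,share=on,mem-path=/var/lib/cc/rootfs.img,size=256M \
      -device nvdimm,id=nv0,memdev=mem0 \
      ...

    # Guest: the image appears as /dev/pmem0 and can be mounted with DAX,
    # so file accesses bypass the guest page cache.
    mount -o dax /dev/pmem0 /rootfs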
What this lets us do is reduce the file system caching overhead inside of the virtual machine. Furthermore, because both the kernel and the root file system are memory mapped, it lets us leverage a feature in [the Kernel-based Virtual Machine] KVM called kernel same-page merging (KSM). This allows us to transparently deduplicate read-only memory across the kernel and root file system of multiple containers, which reduces memory overhead.
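As a hedged illustration, KSM is controlled through sysfs on the host, so you can check whether pages are actually being shared across running containers; the savings you see depend on your kernel configuration and workload.

    # Enable KSM on the host (requires a kernel built with CONFIG_KSM)
    echo 1 | sudo tee /sys/kernel/mm/ksm/run

    # After launching several containers, see how many pages are being shared
    cat /sys/kernel/mm/ksm/pages_shared
    cat /sys/kernel/mm/ksm/pages_sharing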
Lastly, the container workload itself is mounted into the virtual machine using the Plan 9 filesystem (9pfs). That means that we do not copy the workload into the virtual machine; we directly access the overlay file system mounted on the host. This allows [Intel] Clear Containers to have a very minimal footprint and to boot quickly, with latency comparable to that of native containers.
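For context, this is roughly how a host directory is shared into a QEMU guest over virtio-9p; the share name and host path are hypothetical placeholders, not the paths [Intel] Clear Containers actually generates.

    # Host: export the container's overlay rootfs over virtio-9p
    qemu-system-x86_64 ... \
      -fsdev local,id=fs0,path=/var/lib/docker/overlay2/<container-id>/merged,security_model=none \
      -device virtio-9p-pci,fsdev=fs0,mount_tag=rootshare

    # Guest: mount the shared directory as the container's root file system
    mount -t 9p -o trans=virtio,version=9p2000.L rootshare /rootfs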
Next, let's look at how [Intel] Clear Containers integrate with Docker. Starting with Docker 1.12, Docker added support for replacing the default runtime on any given host with any OCI-compliant runtime. [Intel] Clear Containers is an OCI-compliant runtime, so you can install [Intel] Clear Containers on any machine running Docker and set the default runtime to be [Intel] Clear Containers.
By doing this, it's transparent to the end user and to orchestrators like [Docker] Swarm. The user workflow and developer workflow are unchanged. But transparently, when a container is launched on that host, it is launched within a virtual machine, which gives you higher security.
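As a minimal sketch of that mechanism, Docker lets you register an alternative OCI runtime and make it the default through its daemon configuration. The runtime name and binary path below (cc-runtime under /usr/bin) are assumptions that vary with the [Intel] Clear Containers release you install.

    # /etc/docker/daemon.json
    {
      "default-runtime": "cc-runtime",
      "runtimes": {
        "cc-runtime": { "path": "/usr/bin/cc-runtime" }
      }
    }

    # Restart the daemon and launch a container as usual; it now runs inside a VM,
    # so uname -r reports the minimal guest kernel rather than the host kernel.
    sudo systemctl restart docker
    docker run --rm busybox uname -r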
What does it take to install [Intel] Clear Containers on any given machine? Here's an example for Fedora*. All you do is install one of the packages we provide for Fedora, then switch the default runtime in the systemd unit file for Docker to [Intel] Clear Containers. From then on, that machine will use [Intel] Clear Containers as the default runtime.
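Here is a minimal sketch of those steps on Fedora, assuming the package is named cc-oci-runtime and installs its binary under /usr/bin; the actual package name, repository setup, and paths depend on the [Intel] Clear Containers release.

    # Install the Clear Containers runtime package (name assumed)
    sudo dnf install -y cc-oci-runtime

    # Override Docker's systemd unit so the daemon uses Clear Containers by default
    sudo mkdir -p /etc/systemd/system/docker.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/clear-containers.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd --add-runtime cc-runtime=/usr/bin/cc-oci-runtime --default-runtime=cc-runtime
    EOF

    sudo systemctl daemon-reload
    sudo systemctl restart docker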
Similarly, I'm going to talk about how we integrate with Kubernetes. Recent versions of Kubernetes have a specification called [the Container Runtime Interface] CRI, which can be used to launch Kubernetes pods with [Intel] Clear Containers. [Intel] Clear Containers supports CRI.
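As a rough sketch, the kubelet is pointed at a CRI-compatible runtime over a socket; the example below assumes a CRI runtime such as CRI-O that has been configured to launch pods with the [Intel] Clear Containers runtime, and the socket path is illustrative.

    # Point the kubelet at a CRI runtime socket (path is illustrative)
    kubelet --container-runtime=remote \
            --container-runtime-endpoint=unix:///var/run/crio/crio.sock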
And in this case, when you create a pod in Kubernetes, the entirety of that pod, all the containers that constitute that pod, are launched within a[n Intel] Clear Containers virtual machine. So the unit of isolation in the case of Kubernetes is a pod, whereas the unit of isolation in the case of Docker is a container. In the case of Kubernetes, we put the entirety of the pod in a virtual machine; in the case of Docker, we put an individual container inside of a virtual machine.
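For example, with a hypothetical pod spec like this one, both containers would share the same [Intel] Clear Containers virtual machine, because the pod, not the individual container, is the unit of isolation:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
    spec:
      containers:
      - name: app
        image: nginx
      - name: sidecar
        image: busybox
        command: ["sleep", "3600"]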
In summary, in this presentation we talked about how we make [Intel] Clear Containers small and fast. We showed how they integrate with both Kubernetes and Docker, and how easy it is to try them on any given host. We'd like you to try us out. You can find us on GitHub*, and we are also on freenode. We'd love your feedback about [Intel] Clear Containers. Thank you for listening, and we hope to hear from you.