Open Source: The Nerd Version of Formula One



In this first episode of Open at Intel Season 2, we broaden our conversation to discuss the very human aspects of open source software and the always-personal Linux* desktop, but with a cloud native twist. 

Jorge Castro, Developer Advocate at the Cloud Native Computing Foundation* (CNCF), and my fellow Intel Open Source Evangelist Chris Norman join me to geek out on taking the desktop cloud native with immutable Linux* and talk open source community sustainability. 

Katherine Druckman: Please introduce yourself, so people get a picture of who you are.  

Jorge Castro: I’m a community manager currently finishing up a sabbatical. I've been fortunate enough to work in open source for a long time. I've worked on Ubuntu*. I've worked on Kubernetes*, Kubeflow* and Cloud Custodian* and a bunch of affiliated projects around the CNCF landscape and have seen the explosion of open source and do what I can to help move that forward - ensuring that the next generation is set to enjoy the tech that we get to play with every day.

Katherine Druckman: I love what you said about the next generation, and we'll get to that, but first I want to ask about an immutable Linux distribution that you work on called Universal Blue*, or, in other words: “You blew it!” 

Jorge Castro: Yeah, you're supposed to point at your laptop and be like, “It's time to UBlue it!” 

Katherine Druckman: How does it work? What's cool about this? 

Jorge Castro: I’ve always been into desktop Linux. I used GNOME* and Ubuntu back in the day, and I was very fortunate to meet a lot of the people who worked on the client side before 2010, when people were still investing in the Linux desktop. Then I took a detour to cloudland. I’ve always used a Linux desktop, but working in cloud, I've found it interesting that Linux just dominates the computing industry. You can't have a modern world with mobile and all that stuff unless Linux is running it. Yet the client side has never really seen that success outside of mobile and a few vendor-specific use cases. It's strange because it's the same powerful technology, and it just never quite got there. That's always bothered me.

I was at KubeCon + CloudNativeCon* in Detroit and met with Colin Walters, who works on the core operating system of Fedora* and is a long-time Debian* developer. He's also been working on a technology called OSTree, which is a Git-like structure where you shove the OS in there. That's the extent of my technical knowledge. I'm a community manager. It was that weird thing where you don't understand, but you're positive it's useful for someone, somewhere. And when we had breakfast, he was like, “I shoved the whole operating system in there.”  

“Like, what do you mean?”  

“I mean, I shoved the whole OSTree and everything into an OCI container, like a Docker* container.”  

“Can you...could you repeat what you mean?” 

And in the server and cloud use case, it immediately became apparent that the community would run with it. It's the concept of deriving your operating system using common OCI tools. Imagine being able to make your perfect server operating system: you start with “FROM CoreOS*”, then do everything in the Dockerfile that you want on that operating system, using all the existing tools you have. At the end, you type ‘podman build’ and an OCI container comes out. After that, you push it to a registry and boot the metal off of it.
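As a rough sketch of that workflow (the base image, packages, and registry name here are illustrative assumptions, not the actual Universal Blue sources):

```shell
# Illustrative only: derive a custom OS from an OCI base image.
# The base image, packages, and registry below are made-up examples.
cat > Containerfile <<'EOF'
FROM quay.io/fedora/fedora-coreos:stable

# Layer whatever you want onto the OS with ordinary container steps.
RUN rpm-ostree install htop tmux && \
    ostree container commit
EOF

# Build it like any other container image...
podman build -t registry.example.com/my-os:latest .

# ...then push it to a registry and boot machines off the result.
podman push registry.example.com/my-os:latest
```

The point is that nothing here is distro-specific tooling; it's the same build-and-push loop cloud developers already use for application images.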

You then have the same model that cloud developers are using to build the applications they deploy on Kubernetes, but now we can do that at the operating system level, in a way that's consumable, with tools that have been around for years; Kubernetes celebrated its 10th birthday yesterday. So, I immediately thought: “the server nerds are gonna love this!”  

Having ties with Kubernetes folks like the Cluster Lifecycle Special Interest Group (SIG) and the Cluster API SIG, as they started to see this, everyone got it. A vendor recently told me, “Well, sometimes we get Intel Network Interface Controllers (NICs) that aren’t supported in the kernel yet, and we need to do these little tweaks...” I said, “Well, what if, from your existing distro, you just do the business that you need to do to get it to work, and as long as the image builds, you know the machine is going to work? Could that actually work?” 

Before we left, Colin said, “I don't have time to work on any of this, but I'm running my entire laptop out of GitHub*,” because GitHub offers Git hosting, Actions and a registry. So, I thought about it: Could this actually work? What if we could take a bunch of site reliability engineering nerds who aren't distro folks? What if I could make my perfect desktop, but build it in the same way that I'm building and deploying my apps on Kubernetes? It would solve a lot of problems. At the time I didn't think beyond that because I'm a nerd, and I went immediately to my home lab.  

I've got a stack of NUCs just like everybody else, so I got to work and started to shop the idea around with friends, many of whom don’t run Linux desktops. They're just cloud nerds, and because the tooling is common, I didn't need to convince them to install Linux on their laptops. They could just help me script some things out.  

So, we grab Fedora, which now publishes its images as OCI in testing. This isn't in production for them yet; it'll probably land in Fedora 39, the next cycle. And I said, “Couldn't we just ingest all of these images, put whatever we want on them, and then just give people what they want?” A little kit to make your own thing, similar to the first time you learned cloud, when someone said, “Hey, you need to deploy this stack,” and you went looking for that set of YAML files so you could ‘docker compose up’ or whatever it was you wanted to do. What if we could do that for client? It turns out that it works really well. So, we ingest everything from Fedora for versions 37 and 38.

Then people started to say, “Hey, this is neat, but I like KDE* and not GNOME.” In the cloud you just make your little build matrix, so we picked a different set of packages. I'm relearning how to write multi-stage Dockerfiles and things like that. While we're building it, we're not learning distribution-specific tools; we're just reusing our common cloud language, because making distributions is hard. At Ubuntu, getting a change into the distro involves a lot of engineering, and I didn't really want to make a distro. 
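The build matrix Jorge describes can be sketched with plain shell; the image tags and build arguments below are hypothetical, standing in for whatever a real Containerfile would consume:

```shell
# Illustrative build matrix: one image per desktop environment
# and Fedora release. DESKTOP and FEDORA_RELEASE are hypothetical
# build args that the Containerfile would use to pick packages.
for desktop in gnome kde; do
  for release in 37 38; do
    podman build \
      --build-arg DESKTOP="$desktop" \
      --build-arg FEDORA_RELEASE="$release" \
      -t "my-os:${desktop}-${release}" .
  done
done
```

In practice this kind of matrix usually lives in CI (such as the GitHub Actions setup mentioned earlier) rather than a local loop, but the idea is the same: one recipe, many variants.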



Chris Norman: There are a lot of challenges that come with integration and a lot of testing to make sure that all the components work together, right?

Jorge Castro: Right. And you don't want to fall into that trap. We thought, “Well, we know Fedora is going to do this, but the feature isn't ready. Can we prototype now? Could this actually work?” So, we made KDE, GNOME and a few tiling window manager ones. We found all sorts of things in the Fedora archives where people were saying, “I want this cool thing. I want this cool thing.” Usually in Linux, when you're setting something up, like hardware acceleration on my Intel 2-in-1, you see instructions for how to enable it, and it's always a manual step. On the other hand, look at how people buy a Chromebook at Best Buy*: they don't have to set up any of that. So, I said, “Couldn't we just grab all these web pages, shove the instructions into container files, and see what comes out?” What came out was a nice operating system that just works and just boots, because it's image-based. Then we get to remove a lot of the complexity on the client for upgrades, distro upgrades, adding Personal Package Archives (PPAs), having to do ‘dpkg-reconfigure -a’.  

For example, RPM Fusion* builds a kernel module against a kernel that's a day old, so I have to wait for the distro's build system to catch up. These things are still problems, but now they happen in our CI, where we can catch them, and the end user always gets an image that works. Suddenly, they have the granularity to go back in time and boot off an older image. You're not doing a snapshot/backup/restore. You're literally booting off an image, so it's clean.
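On OSTree-based systems this kind of time travel is a built-in operation; a minimal sketch using the standard rpm-ostree commands:

```shell
# Show the deployments (bootable images) currently on disk.
rpm-ostree status

# Point the bootloader at the previous deployment; the next boot
# simply uses the older image. No files are restored from backup.
rpm-ostree rollback
```

Because each deployment is a complete image, the rollback is atomic: the machine either boots the old image or the new one, never a half-applied mixture.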

Chris Norman: That’s the premise of Clear Linux*: all the integration is done at compile time so you don't have to worry about the distribution hell of dependencies. 

Jorge Castro: Right. That's something I learned from Tim Pepper, who worked on Clear Linux.  

I shopped the idea around because I wanted to do it right. I've learned to take advantage of all the senior people around me and ask them for advice. He always told me that Clear Linux was designed to do a bunch of the work before it ever reaches the client, so the package manager can just splat the disk and not do all the other stuff. In traditional Linux land, though, it's all about packages and your package manager; that's what you're picking, and you're supposed to know this stuff. But I started to realize as I was doing it, even as an advanced user who knew all of this, that I don't want to do it either...

Chris Norman: It's like starting a car. You just want to turn the key and have the car go. You don't want to be putting a different set of wheels on every time you want to go to the store, right? 

Jorge Castro: I've done all of that already. I don't have to prove anything. Why would I compile my own kernel when you know Colin King? In traditional Linux, though, there's this feeling of, "Uh oh, but we're supposed to be about packages and doing all that stuff,” which I found very interesting. The Linux desktop culture is very much entrenched in the package. Meanwhile, I'm hanging out in cloud land, and all the best Linux people I look up to, the absolute experts in this stuff, are all running Macs*. They decided that to get their work done, they didn't have time for the Linux desktop. 

So, I thought, “Why don't we tackle it this way then?” That's when I really started to think about the economics. Look at e-waste alone. I live in Ann Arbor, Michigan, and the University of Michigan is here. They have a property disposition program where you can get used computers, and it's just pallets and pallets of machines. You start that nerd thinking: “Well, if I put my image on that thing, I know it could do something. I know it could be something great.” And we know that people can use Linux, because they bought Chromebooks*. The great thing about Chromebooks is they purposely don't mention Linux at all. It's invisible.

Katherine Druckman: Yeah, it's irrelevant to the Chromebook user. 

Jorge Castro: Right. And I think a lot of people get upset by that. Because they want everyone to know that this is Linux. 

Chris Norman: It’s like the Steam Deck* story, right? It's made gaming on Linux a thing, even though people don't realize they’re using it. 

I have my Steam account, I boot it up on the Steam Deck, and it just works. 

Katherine Druckman: No one cares really what the solution is as long as the problem is solved.  At least, for that user... 

For more of this conversation and others, subscribe to the Open at Intel podcast.


About the Author

Katherine Druckman, an Intel Open Source Evangelist, is a host of podcasts Open at Intel, Reality 2.0 and FLOSS Weekly.  A security and privacy advocate, software engineer, and former digital director of Linux Journal, she's a long-time champion of open source and open standards.