Balancing Act: Software Security and Developer Experience


In this episode, Open at Intel host Katherine Druckman speaks with Stacklok CTO Luke Hinds about the open source projects he supports, including Sigstore, Minder, and Trusty; the importance of focusing on software supply chain security; and how those projects can help. They even cover the evolving issues of developer identity in the age of AI-generated code. Enjoy this transcript of their conversation.

 

“They want to ship code, okay? So they need to balance their developers being productive but then having good guardrails around the security. And the people that we are speaking to are people that face that problem.”

—Luke Hinds, CTO, Stacklok 

 

Katherine Druckman: Thank you, Luke Hinds, for joining me today. I really appreciate you scheduling me this morning to talk and take some time out of your very busy schedule. 

 
Luke Hinds: Absolutely, it's great to be here. Always good to be at KubeCon. It is a big one, it's a busy one. 

 
Katherine Druckman: It is, yeah. 

 
Luke Hinds: I'm quite surprised. 

 
Katherine Druckman: Yeah. Salt Lake City is not so easy to get to, but there are still so many people here. 

 
Luke Hinds: Very much, yeah. 

 
Katherine Druckman: Yeah. 

 
Luke Hinds: A good attendance. 

Personal Reflections on Security

Katherine Druckman: It's buzzing for sure. Everybody's talking about AI. But we're going to talk about security, although we can talk about AI a little bit if you want. 

 
Luke Hinds: Sure, why not? 

 
Katherine Druckman: Just give us a little bit of background about who you are and what you do. You're with Stacklok, and you're heavily involved in open source security communities. 

 
Luke Hinds: That's correct. I've always been a software engineer, for a good 25-odd years now, I guess. So somebody that started out quite a few years ago, and I'm still doing it because I love it. A lot of people drop off, they get a bit of money, and they go off and take up woodworking or whatever. But I've stuck around because I love the technology, and I've always been involved in security for quite some time. And I’m always attracted to it because there's the dark side to it, and there's the cat and the mouse, and I've always been attracted to that side. But I've been more of a platform builder, somebody trying to build security tooling rather than somebody that's like a red team, penetration testing, researcher. 

 
Katherine Druckman: Sure. 

 
Luke Hinds: I've always looked to build software. I'm very much a software engineer who just happens to build a lot of security software. And I've been involved in open source for another long amount of time. 

 
Katherine Druckman: Yeah, I stopped counting because it's too awkward. 

 
Luke Hinds: Yeah. Originally working on some of the early IP filter stuff in the kernel, firewall, and that sort of stuff. And over the years I’ve been involved in your OpenStack, your Kubernetes. I used to be on the Kubernetes security response team where we would handle vulnerabilities that were raised, and we ran a bug bounty program. And I got elected to the OpenStack security group quite a few years ago running that, the working group there. And then, of course, OpenSSF and Sigstore, and various other projects that we can get into. But yeah, I've been around open source quite a bit. 

 
Katherine Druckman: Yeah. That's great. 

 
Luke Hinds: That's been my main gig for the past few years. 

 
Katherine Druckman: Yeah, I appreciate you filling us in on what attracts you to security. 

 
Luke Hinds: Yeah. 

 
Katherine Druckman: I find those stories in and of themselves to be very interesting, right? Some people come at it, "I was a software engineer too," and I think I am attracted to security just out of fear. No one wants to be the person responsible for ruining someone else's day. 

 
Luke Hinds: I've been there. There's an element of a car crash. There's a car crash at the side of the road. You don't feel like you should stare, but as you drive past, you do have a look. We had some stuff recently where a really smart young lady, Poppy, who's part of a team at Stacklok, she's a threat hunter, and she revealed these nation-state attackers effectively trying to compromise developers through these mock interviews. 

 
Katherine Druckman: Yeah. 

 
Luke Hinds: It's fascinating to go down the rabbit hole and see how all of that played out. It is a very real threat. It's not just theater, people do get... 

 
Katherine Druckman: Yeah, that's another thing. I wonder, how does it affect you just personally, right? You live your life thinking about security threats and risk, and how does that affect the way you walk down the street or sit in a restaurant? Are you a person who will not sit with their back to the door? 

 
Luke Hinds: I feel like I've become immune to it. 

 
Katherine Druckman: Oh, okay. 

 
Luke Hinds: I've been around it so much. I am a little bit lax with my own security sometimes, quite interestingly. 

 
Katherine Druckman: Cobbler's son has no shoes, yeah. 

 
Luke Hinds: I've pushed secrets to GitHub repos before. I'm not perfect by any means. 

 
Katherine Druckman: We've all done that, I'm just saying. 

 
Luke Hinds: Yeah, exactly. 

 
Katherine Druckman: And then frantically go, "Oh, I should probably scrub that." Yeah. 

 
Luke Hinds: Yeah, you get that minute where you get that, "Oh," pulse in your belly where you think, "Oh, what did I just do?" 

Introduction to Stacklok and Minder

Katherine Druckman: Yeah, been there. Tell us a little bit about the projects that Stacklok is involved in. You mentioned Sigstore. I definitely want us to talk a little bit about Minder, which you've just donated to OpenSSF.

 
Luke Hinds: Yeah, very much so. Minder is a software supply chain platform. And what we mean by platform, it's very Kubernetes, Kubernetes-y, I have to say. There's a word there somewhere. 

 
Katherine Druckman: Kubesque. 

 
Luke Hinds: Kubesque. And we noticed that in the security world there are, for want of a better word, somewhat disparate, fragmented ecosystems with different tools and services that weren't particularly good at talking to each other. They'd all have different sorts of APIs that would allow some amount of interaction, but it was a somewhat disparate system. And we figured it'd be really interesting to build a platform, a community-centric platform where we have good open interfaces that people can integrate with, and then we can effectively monitor the state of the software supply chain within the immediate circle of that platform. Not the entire software supply chain, not every single open source project out there. 

I mean, that'd be the wonderful North Star to land, but within your sort of immediate trust boundaries or your immediate network, and that's Minder. Minder is a system where you can declare policy for the supply chain, and Minder will then make sure you do not drift and you conform to that policy. You can set these security controls, and then Minder will engage with these systems to make sure they stay in conformance with that policy. It is a system where you can actually write your own policy, which is good, if you wanted a particular system to be integrated or you want a particular policy flow, perhaps more the latter. 

Then we have an engine where you can write it in YAML if you wanted to, or you could use an expressive language like Rego or CUE, and you can effectively write your own policies, push them into Minder, and Minder will run them for you. Like I said, it's got that typical platform centricity to it. It's very much something that's conducive to people integrating with it. We're speaking to lots of people that are interested in collaborating on Minder, because it's a project under an open source foundation. There is a good level of governance. We can't do some sneaky flip to a business source license. 
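For readers who want a feel for what a declarative supply chain policy can look like, here is a minimal, hypothetical sketch in YAML. The field names and rule types are illustrative assumptions, not the authoritative Minder schema; consult the Minder project documentation for the real format.

```yaml
# Hypothetical, simplified profile sketch -- field names and rule types are
# illustrative only, not the authoritative Minder schema.
version: v1
type: profile
name: example-github-profile
context:
  provider: github            # the provider whose entities this profile covers
alert: "on"                   # raise an alert when a rule is violated
remediate: "off"              # optionally auto-remediate drift
repository:
  - type: secret_scanning     # require secret scanning to be enabled
    def:
      enabled: true
  - type: branch_protection   # require branch protection on the main branch
    params:
      branch: main
    def:
      required_approvals: 1
```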

 
Katherine Druckman: Right. Sure, sure. 

 
Luke Hinds: It's a good place to have a big tent. Multiple people can play there and use the platform to their advantage. And it's something that myself and my co-founder know well. My co-founder is Craig McLuckie, one of the original three that created Kubernetes, and I've created many platforms myself. It's a play that we know well, really: a very effective, open source, community-centric platform. And that's what Minder is. We're relatively early, and we've got a good, substantial amount of features that are already available there, but it's nowhere near where we want to take it. 

 
Katherine Druckman: Yeah. Well, I'm a big fan of some of your engineers. We'll put it that way. 

 
Luke Hinds: Okay. Cool. 

Target Audience and Use Cases for Minder

Katherine Druckman: Yeah. If you could speak to the ideal target audience or end user or even contributor of Minder, who would that be, and what problems are they looking to solve? 

 
Luke Hinds: Very much. The common personality, not personality type, the common sort of… 

 
Katherine Druckman: Persona. 

 
Luke Hinds: Persona, yeah. The persona that we are engaging with is people that have to manage these multiple actor systems and keep them secure. They need to do so within the context of developers who want to be productive and want to move fast. They do not want to be sort of hit on the head all the time and told, "No, denied, red stamp." They want to ship code, okay? They need to balance their developers being productive but then having good guardrails around the security. And the people that we are speaking to are people that face that problem. They live in an organization, or they work in an organization, where it's very much ship software, build fast, innovate, and they can't have security slowing them down. 

 
Katherine Druckman: No. 

 
Luke Hinds: A good example that I always use, and I don't want to pick on it because it's an incredibly useful piece of software, is SELinux. 

 
Katherine Druckman: Okay. 

 
Luke Hinds: Okay. So SELinux is very strict. It will completely deny you access to a certain file. Developers would switch it off because they would be like, "I can't get anything to work." So they'd switch it off, and then eventually it'd be switched on again. What we wanted, in a way, was for security to be streamlined with how developers work. You see what I mean? 

 
Katherine Druckman: Mm-hmm. 

 
Luke Hinds: Be able to bridge this gap between operations and engineering so that they can understand there's a common policy that we wish to have in place. We could be slightly more lax in certain environments. You might have your research arm or your labs division where you can be a little bit more loose, and you can apply policies to that particular organization that are a little bit more loose. Whereas something in production, you want to be very strict. 

 
Katherine Druckman: Sure. 

 
Luke Hinds: Let's say you're shipping to production, then you can really introduce lots of controls. And Minder has lots of those out of the box that are available. There are things such as, we can make sure that artifacts are signed with Sigstore. They have provenance of some sort, using services like GitHub Actions or Dockerfiles. We can make sure they're tagged through immutable digests rather than sort of floating tags. All sorts of controls that we can do there. We can check packages, are they malicious? Are they known to be malicious? What CVEs do they have? And then you get to know about this stuff early, effectively. It comes into the system early so you understand its risk profile at an early stage. 
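As a concrete illustration of the kinds of controls described above, here is what verifying a Sigstore signature and pinning an image by immutable digest can look like with the cosign and docker CLIs. This is a generic Sigstore example rather than Minder's own rule syntax, and the image name, signing identity, and digest are placeholders.

```sh
# Placeholder image, identity, and digest -- for illustration only.
# Verify a keyless Sigstore signature produced by a GitHub Actions workflow:
cosign verify \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-identity-regexp '^https://github.com/example-org/example-repo/' \
  ghcr.io/example-org/example-app@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Pull by immutable digest rather than a floating tag, so the content you
# deploy cannot silently change underneath you:
docker pull ghcr.io/example-org/example-app@sha256:0000000000000000000000000000000000000000000000000000000000000000
```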

Balancing Security and Developer Productivity

Katherine Druckman: I love that you're addressing making developers lives easier, and acknowledging that weird tension that there's always been between developers, engineers, and the security folk, right? 

 
Luke Hinds: Very much. Yeah. 

 
Katherine Druckman: And I wish that wasn't the case. In my engineering life, I understand the issue, right? You run into something, you just want to get your job done, you want to close a Jira ticket, right? 

 
Luke Hinds: Yeah. 

 
Katherine Druckman: And here is some security control that's in your way. Now, I took the approach maybe, "Ugh, I want to get better at this," so... 

 
Luke Hinds: Sure. 

 
Katherine Druckman: Because I want my thing to be secure, but I am a person who was attracted to the security field. But other people, I sympathize heavily with the, "Ugh, I just want to turn this off because I've got to get something to work." 

 
Luke Hinds: Sure, yeah. 

 
Katherine Druckman: And I understand it. And what do you think we can do to bridge that gap? 

The Importance of Seamless Security

Luke Hinds: I think the key is really the UX. With security, it's interesting because when you try to convince people they should adopt security, you're trying to convince them to take out insurance. The value is that at some point in the future, this undetermined event might play out and then you are happy that you have your insurance. You bang your car up, the insurance is there. But it's a difficult sell if there is a lot of difficulty in adopting that value as well because developers, they want to unlock productivity, that's their key thing. They want to move faster. If you're asking them to take on something where there's this slight intangible event that could occur in the future and you need to learn all of these concepts and acronyms and terms, and configure something and spend hours doing it, then it's very difficult to get that traction in. If it is low effort and high value, you're onto a winner then. Do you see what I mean? 

 
Katherine Druckman: Mm-hmm. Yeah, yeah. 

 
Luke Hinds: If it's seamless, then people will adopt it. If it's high effort and sort of medium to low value, it's very difficult then because they've not seen any tangible value that they derive from it. It's really balancing the ease of use: the more seamless it can be, the better. The value will then be accepted, and people are like, "Well, yeah, it makes sense. I want to be protected, I want to do the right thing." But if you're going to make me jump through hoops and learn about all these arcane concepts and cryptographic principles, yeah, you're going to lose them. And I think that's why Sigstore is a success, because we took an incredibly complex, deep security technology, cryptography, and we made it so that it's at the point now where people don't even know it's there. It's running behind the scenes. 

 
Katherine Druckman: That's the best way. 

 
Luke Hinds: Do you see what I mean? 

 
Katherine Druckman: Mm-hmm. 

 
Luke Hinds: It works very much in line with their common approach to developing software, their modern approach to engaging with the SDLC. 

 
Katherine Druckman: Yeah, fits right in. 

 
Luke Hinds: Yeah, yeah. 

 
Katherine Druckman: Oh, that's fabulous. Tell me a little bit more about the other projects you're involved in as a company that you put engineering resources to. And why? Why those projects? 

Introduction to Trusty: Understanding Open Source Security Risks

Luke Hinds: Yeah, so we have Minder, which we covered, and the other project we've been working on is called Trusty. Trusty is built around understanding the security risk of open source packages. 

 
Katherine Druckman: Okay. 

 
Luke Hinds: Now, open source packages quite often can be malicious. They can have a pretty horrendous payload. I mean, you have vulnerabilities, very important as well. But vulnerabilities are generally bugs. A developer makes a mistake, and an attacker then needs to navigate a network to reach that vulnerability and hope the code is reachable. But a malicious package, when that runs, it will seek to backdoor your machine straight away. Something bad's going to happen. We started off with a focus of trying to understand malicious packages. We started to apply data science to this, looking at the signals around the malicious package, the type of people that produce malicious packages. 

We did a lot of research and we actually started to reveal a lot of pretty risky supply chain results. And we then realized that there are some really interesting aspects to the open source software supply chain. There is a community, and there is a very useful element to that community which you can tap into for security. There is this concept that the blockchain folks use called proof of work, okay? And we realized there is a kind of a proof of work within open source communities. Now, when somebody contributes to a project like, say, Kubernetes, there's a lot of prior work in getting to the point where you are suitable to make a contribution to that project. 

You have found a feature that people feel is meaningful, you found a bug that needs to be fixed, you found docs that need to be changed, so you've invested time to get yourself to that point that a very high profile, high quality project would then accept your contribution. That's a signal there that there's a strong... I don't like the word reputation, but there's a reputation that's been created over time, that's very difficult to game. You can't just create an account and then pretend to be that person. 

Analyzing Malicious Packages and Developer Contributions

Katherine Druckman: But people have gamed it. 

 
Luke Hinds: Well, they have. I mean, you've got stuff like XZ Utils, and that's an incredibly complex, very expensive attack. 

 
Katherine Druckman: Yeah, for sure. 

 
Luke Hinds: But I think that's the key thing. It took about two, three years. That's an expensive attack. Again, you've got this sort of blockchain-y proof of work. 

 
Katherine Druckman: Right, uncommon potentially too. 

 
Luke Hinds: Makes it very expensive. We found that by looking at these, essentially we graphed this out, and we noticed that you could infer a package is likely to be of a certain quality because a certain individual started to contribute to it. Take somebody who's a prolific, well-known open source developer. If they start contributing to a relatively not-that-well-known package, then you can derive that they see value, they see a future, they see something unique about it. You can allow that to propagate through the supply chain. If you look at the supply chain as a graph, we look at it like a spaghetti monster, but it's a series of interconnected nodes. And that's the sort of stuff that we're doing in Trusty. 
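To make the propagation idea concrete, here is a small, purely illustrative Python sketch. This is not Trusty's actual algorithm or data; the package names, scores, and damping factor are invented. The idea is simply that a package's contributor-derived standing lends a damped share of trust to the packages it depends on.

```python
# Illustrative sketch only -- not Trusty's real model or data.
# Each package starts with a score seeded from its contributors' standing,
# then a fraction of that score propagates along dependency edges.

dependencies = {           # package -> packages it depends on
    "my-app": ["small-lib", "big-framework"],
    "small-lib": [],
    "big-framework": ["small-lib"],
}

contributor_standing = {   # package -> seed score from known maintainers
    "my-app": 0.1,
    "small-lib": 0.2,      # a prolific, well-known developer just joined
    "big-framework": 0.9,
}

def propagate(deps, seed, damping=0.5, rounds=10):
    """Spread contributor-derived trust from dependents to their dependencies."""
    score = dict(seed)
    for _ in range(rounds):
        nxt = dict(seed)
        for pkg, requires in deps.items():
            for dep in requires:
                # a dependency inherits a damped share of its dependents' trust
                nxt[dep] = nxt.get(dep, 0.0) + damping * score[pkg]
        score = nxt
    return score

print(propagate(dependencies, contributor_standing))
```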

 
Katherine Druckman: Interesting. 

 
Luke Hinds: It's a lot of data science. And a lot of folks that are way smarter than me in this area... I've never been particularly great at math, and I never really had a great brain for that. I've got some people that are very good at that. And I'd say we're starting to find some interesting things, like these attackers trying to compromise developers' laptops, and all sorts of nasty stuff has surfaced. 

 
Katherine Druckman: Very interesting. 

 
Luke Hinds: Yeah. 

The Role of Developer Identity in Open Source Projects

Katherine Druckman: The conversation around developer identity is very interesting to me, and how it relates to security and then other things, sustainability of projects, all of these things. But the way that we identify ourselves as developers in distributed projects like open source communities is very interesting. I went to a workshop, a little mini workshop at DEF CON, a couple of years ago that was so interesting. And the whole concept was about gaming identity. It's such an interesting thing, right? Because, again, in terms of timeframes, you're talking about massive investments if you want to do this over the time it really takes to build, as you say, reputation, which can be a problematic word. But this was kind of interesting because it was about just completely gaming the green checks and gaming contributions. And that's such an interesting thing to me. And I just wonder, I think it's an evolving and ongoing conversation, right? It's not new. 

 
Luke Hinds: Yeah. 

 
Katherine Druckman: But I wonder where you see it going. What are some more other interesting solutions, right? You're working on this one, but I think there's a lot of other things going on out there in the community, and I wondered if there's anything that's caught your eye. 

AI's Impact on Code Development and Security

Luke Hinds: Yeah, so I can't think of anything in particular in the community, but I do want to visit a topic that you brought up at the beginning, which is AI and how it relates to all of this, something that we've been looking at. The claim is that it's over 50% now, I don't know how true it is, but 50% of code is developed by a large language model, okay? And let's assume that's more or less the ballpark figure. Now when you look at code, it surfaces with a human identity. A pull request will be made and you'll see a picture of Luke Hinds, my sort of GitHub or GitLab account, and then that code is shown as coming from me. But more and more code is actually coming from machines. 

 
Katherine Druckman: Yeah. 

 
Luke Hinds: What does that human identity even really mean anymore? What do you derive from that? I mean, it's very interesting. I don't really have the answers. 

 
Katherine Druckman: Yeah. 

 
Luke Hinds: I'm not going to come down from the mountain here. 

 
Katherine Druckman: I know. 

 

Challenges and Future Directions in Developer Identity

Luke Hinds: But what does it mean now? Human identity and developer identity, especially when you look at the agentic workflows where there's no human in the loop, or it's very minimal, it starts to become very interesting. And that's some of the stuff we've been thinking about in Stacklok. You know, what that means around model provenance. 

 
Katherine Druckman: Yeah, absolutely. Yeah. 

 
Luke Hinds: So where did the data set for this model come from? What was the data set? I mean, if you speak to me as a human, you could go, "Oh, that's Luke. I know Luke, I met him at KubeCon. We had a coffee later. He works at Stacklok, he's got a dog, two children, lives in the UK." You can build up a social… 

 
Katherine Druckman: Yeah. Triangulate a little bit there. 

 
Luke Hinds: You can triangulate, very good word, yeah. Now, you can't with a large language model because it's just completely unseen. If it's an API, you ask it stuff and it comes back with a response, but you don't know the knowledge that it was trained on. Was that knowledge tainted? Was it recent? 

 
Katherine Druckman: Yeah. 

 
Luke Hinds: All of these sorts of things. Again, one of the areas where Trusty's proven to be very useful is that large language models will recommend packages. And they have a very stale knowledge cutoff because these things are very, very expensive to train. And we've found that quite often those packages might be deprecated or abandoned; sometimes hostile takeovers do happen. So Trusty is actually a really good source to get a current view on a package. Is it malicious? Is it actually maintained? And we're using very traditional data science algorithms here, principal component analysis, community clustering, stuff that I'm not smart enough to talk about, but I've learned a few names to be dangerous. And these are really revealing some interesting things. So we're not just looking at how many stars there are or how many years or that... 
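As a rough illustration of the kind of analysis mentioned here, and not Stacklok's actual pipeline, the sketch below standardizes a few invented per-package health signals and projects them with principal component analysis, the sort of low-dimensional view you could then feed into clustering. The feature names and numbers are made up.

```python
# Illustrative sketch only -- invented feature names and toy data,
# not Stacklok's real signals or pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# One row per package: [active maintainers, releases last year,
#                       mean days to close issues, new-contributor ratio]
signals = np.array([
    [12, 24,   5, 0.30],   # busy, healthy project
    [ 3,  6,  40, 0.10],   # slowing down
    [ 1,  0, 400, 0.00],   # effectively abandoned
    [ 2, 30,   2, 0.95],   # sudden burst of unknown contributors -- worth a look
])

# Standardize the features, then project to two principal components
# so outlying packages become easy to spot or cluster.
projected = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(signals))
print(projected)
```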

 
Katherine Druckman: Right. 

 
Luke Hinds: We're really trying to understand the health of something and how that health changes over time as well. Because you make a decision at a point in time, you start a new project, it pulls down a hundred dependencies, and then you're off. 

 
Katherine Druckman: Yep. 

 
Luke Hinds: And nobody goes back and visits those, or very rarely do they go back and visit them until they're a problem. But the juncture between them becoming a problem and you selecting them, there is a change, there's a path. 

 
Katherine Druckman: Right, yeah. It's a living and breathing thing. Yeah. 

 
Luke Hinds: Yeah. Yeah. They're like plants. They get neglected, they don't get watered, and you only know about them when the thing's died and suddenly there's a panic: "Ah, Log4j." And I'm not saying we would have solved Log4j and all vulnerabilities, but there's a drifting quality to these things. And that's what really fascinates us: being able to preemptively see those changes happening. There are these changes of dynamics. Who's contributing? Is the package well maintained? All of these sorts of signals. 

Concluding Thoughts and Future Conversations

Katherine Druckman: Well, this is fabulous. I don't promise that we can come up with all of the answers in 20 to 25 minutes, but I think half of the battle is raising the right questions and I think we've done that. 

 
Luke Hinds: Yeah, very much. 

 
Katherine Druckman: I really appreciate it, and I'd love to revisit this conversation in the future when developer identity starts to really evolve. 

 
Luke Hinds: I would really enjoy that. Yeah, very much, because we are going to be surfacing new work. We've got our skunk works, sort of slightly under-the-radar technology that we're working on. We'd love to talk about that when we surface it. 

 
Katherine Druckman: Fabulous. 

 
Luke Hinds: It's some really interesting stuff so... 

 
Katherine Druckman: Cool. Well, thank you so much. 

 
Luke Hinds: Awesome. Thank you for having me. 

 
Katherine Druckman: You've been listening to Open at Intel. Be sure to check out more about Intel’s work in the open source community at Open.Intel, on X, or on LinkedIn. We hope you join us again next time to geek out about open source.  

About the Guest 

Luke Hinds, CTO, Stacklok 

Luke Hinds is the CTO of Stacklok. He is the creator of the open source project Sigstore, which makes it easier for developers to sign and verify software artifacts. Prior to Stacklok, Luke was a distinguished engineer at Red Hat. 
 

About the Host 

Katherine Druckman, Open Source Security Evangelist, Intel

Katherine Druckman, an Intel open source security evangelist, hosts the podcasts Open at Intel, Reality 2.0, and FLOSS Weekly. A security and privacy advocate, software engineer, and former digital director of Linux Journal, she's a long-time champion of open source and open standards. She is a software engineer and content creator with over a decade of experience in engineering, content strategy, product management, user experience, and technology evangelism. Find her on LinkedIn. 
