How Good Security Hygiene Can Help Ward Off Attacks


Traditionally, cybersecurity has been largely siloed from development and other parts of IT, but according to Microsoft* Cloud Security Advocate Sarah Young, recent shifts in the security landscape—and new tools—are empowering developers to think of security practices as an aspect of writing good code. An important part of this shift is the effort of security specialists to find ways to make it easy for businesses to implement security recommendations. 

On a recent episode of the Open at Intel podcast, Young joined us to talk about the growing focus on security in open source, common misconceptions about hackers, and developer tips to help prevent data breaches. This conversation has been edited and condensed for brevity and clarity. 

Katherine Druckman: You’ve been in security for many years. Can you tell us a little bit about yourself? 

Sarah Young: When I tell people I’m a cloud security advocate at Microsoft, they often think I want to sell them things. But really it means that I talk to the community about good security practices. An important part of that is advocating on behalf of the community to make sure Microsoft understands its concerns. You could also say I’m a professional conference attendee; it’s only a small part of my job, but it’s the most visible part. I really enjoy meeting with the community, not only to find out what’s topical but also to immerse myself in it. That’s an important part of how we uplift security perspectives more generally. 

Katherine Druckman: Lifting up the community is important for all of us, especially right now. Security is on everyone’s mind. There have always been high-profile attacks, but heightened scrutiny of security has emerged in the last few years, especially around open source.  

Sarah Young: If we go back 10 years or so, there was very little focus on security outside of security specialists. Some high-profile breaches that affected millions of people made the general public suddenly more aware of security, especially as their lives have become increasingly digital. Then about five or six years ago, the landscape evolved again as people changed their thinking around open source security. Organizations used to believe that vendors knew how to best secure products and that anything open source must be bad because it’s been developed by lots of people. Today, some argue that open source solutions may be more secure because they’ve been scrutinized by so many different people from all over the world. The truth is probably somewhere in the middle—some projects are secure and some aren’t—but at least now people appreciate that open source security is distinct from enterprise security. And at the same time, the two are intrinsically linked when enterprises use open source solutions. 

Balancing Security in Open Source and Commercial Projects

Katherine Druckman: Just in the last couple of years, more and more nontechnical people are focusing on security, and it’s leading to bizarre situations. For instance, someone will send a vendor checklist to an individual project maintainer. Have you come across situations like this that amuse you? 

Sarah Young: I recently saw an exchange where someone posted an aggressive message in the GitHub* chat of an open source project demanding that the maintainers fix a bug. It was a medium-severity security issue. I could understand why an enterprise would want it resolved before using the solution in their environment, but the person seemed to misunderstand the difference between vendors and open source. If a vendor has a bug, you can typically go through official channels to request a patch, whereas open source is made up of volunteers. In this case, the maintainer sent a very classy response, in my opinion, explaining that the project is supported by volunteers, but that the person could pay the maintainer at their hourly rate to prioritize fixing the issue. People on social media were debating whether this is a proper response; I think it is. A respected colleague once told me that payment merely allows maintainers to reprioritize their time. But I think there’s a long way to go before people who don’t spend much time in the community understand the difference between open source and commercial projects. 

Katherine Druckman: I sometimes worry that reminding people that open source is largely run by volunteers reinforces the perception that open source is the stuff of hobbyists, when in fact most of the world runs on open source software. It provides critical infrastructure. Do you think it can be a double-edged sword? 

Sarah Young: It depends on who you’re talking to. People in the community who work in the weeds appreciate this idea. “Hobbyists” isn’t the right word for these volunteers; I like to call this kind of work “extracurricular activities.” It might not be what the volunteers do in their day-to-day jobs, but they still invest a lot of time and love into this work. Those who are not super technical or haven’t come from an open source background, such as enterprise execs and hands-off tech folks, may not believe that open source solutions can be professional. If you showed enterprise leaders how many of their critical systems undoubtedly run on open source, many would probably be horrified because I’m not sure they realize it.

Making Secure Synonymous with Easy

Katherine Druckman: We’re responsible for keeping open source going, but contributors are only human. We all make mistakes. How does human vulnerability fit into the security ecosystem?  

Sarah Young: The tech side of security is relatively straightforward. We know what to do in technical security terms. It doesn’t mean implementation is always easy, but we know to use good, secure coding practices. People, on the other hand, are much more interesting. For one thing, we have to contend with fatigue. If you lead security awareness trainings and overload people with information or wag your finger too much at them, they’ll get bored and tune out. In fact, I’d like to apologize to anyone who has had a bad experience because of an unhelpful security person. In the past, security often told people they were doing something wrong or prescribed a solution that took four hours without explaining why. We must change this culture. Security teams are getting better at understanding that people have other priorities and pressures that drive them. If an organization has to pick between making USD 10M and fixing a medium-severity security bug, for example, it will almost always pick the money and let the bug slide. We must make doing the secure thing the easy thing.  

You’ve probably heard phrases like “secure by design” and “secure by default” bandied about; both focus on taking the pressure off individual users and developers. Windows* OS isn’t open source, but it’s a good example: if you booted it up years ago, you had to manually turn on many of the security features, whereas newer versions of Windows have those features turned on by default. When it comes to security, it’s not that people don’t want secure operating systems; it’s that they don’t have time to think about it. We’re beginning to understand this better. 

The open source perspective is not much different. There are many commercial and open source tools that help developers prevent silly mistakes. For example, if you accidentally push a secret or keys into a GitHub repo, you have to spend loads of time scrubbing the branch to remove them. It’s a common mistake. Now you can run a tool in your development environment that will identify anything that looks like a secret in your code and alert you before it becomes a problem. While these tools were clunky when they were first introduced, they’ve really improved in the last few years. As the thinking around security has evolved, not only do the tools warn you of a mistake, but they also focus on providing suggestions that are fairly quick and easy to implement so you’ll actually feel inclined to do it. If you haven’t looked at some of these developer tools in a few years, I highly encourage you to do so. 
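To make that concrete, here is a minimal Python sketch of the idea behind these scanners: match text against patterns that look like credentials and fail before the commit lands. The patterns below are a simplified illustration, not any particular tool’s rule set.

```python
import re
import sys

# Simplified examples of the kinds of patterns secret scanners match.
# Real tools ship hundreds of tuned rules plus entropy heuristics to
# reduce false positives.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "generic API key": re.compile(r"(?i)(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan(path):
    """Return a list of suspicious lines found in one file."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: possible {name}")
    return findings

if __name__ == "__main__":
    hits = [hit for path in sys.argv[1:] for hit in scan(path)]
    print("\n".join(hits))
    # A nonzero exit code is what lets a pre-commit hook block the commit.
    sys.exit(1 if hits else 0)
```

Run against staged files from a pre-commit hook, the nonzero exit aborts the commit, so the secret never reaches the repo in the first place.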

Tools will even automate parts of the process when possible. I can’t go on a podcast without talking about all the AI stuff and the copilots out there. Aside from helping you code, they will also highlight where you’ve got a security issue, such as alerting you to a SQL* injection. We’re empowering developers to take more responsibility for their code up front. I’m not saying that means security jobs are going to go away, but the tools are empowering developers to look at security as an integral part of the development process. Developers want to write good quality code, and that includes security. If security is handled up front, developers don’t have to go to the teacher and ask if their work passes the test, which can cause friction because, of course, we’re not at school; we’re all skilled professionals.
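To see what those tools are flagging, here is a minimal, self-contained example in Python using the standard library’s sqlite3 module: the first query splices user input into the SQL string (the classic injection bug), and the second uses a parameterized query, which is the fix such assistants typically suggest.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: string formatting splices the input into the SQL itself,
# so the OR clause above becomes part of the query and matches every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(len(rows))  # 1 -> the injected OR '1'='1' matched all rows

# Safe: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -> no user is literally named "alice' OR '1'='1"
```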

Katherine Druckman: There’s been a lot of progress made in the last few years, including a heightened focus on the developer experience. 

Sarah Young: There’s still a ways to go. Security hasn’t had the best relationship with developers and other parts of the IT organization in the past because many times we’ve acted as a gatekeeper. It’s almost as if security teams purposely failed to explain things to developers to make security seem overly complex and clever, and I don’t know why, because we’ll always need security experts. In my experience, when security is explained in the right way, I’ve never encountered a single person in IT who didn’t understand why it’s important. Therefore, my conclusion is that it’s security’s fault when people don’t understand why they should do things the right way.

How to Prevent Attacks: Protect the Low-Hanging Fruit

Sarah Young: When I go to events, I like to tell a lot of stories, either real-life stories or stats, about how prevalent attacks are because there’s still naivete out there—not because people are dumb but because they haven’t experienced it. One misconception is that attackers are really proud of their craft. They’re not. Many attacks are what we call low-hanging fruit attacks, where attackers target small but common mistakes developers make. Attackers are not spending days, weeks, or months crafting a sophisticated zero-day exploit for a vulnerability with no patch when they can find a load of keys you’ve put in your GitHub repo. They look for the easiest path. Nine times out of 10—if not more—attacks go after the low-hanging fruit.  

It hasn’t helped that when hacks hit the news, businesses will often release a statement calling the attack sophisticated. Though I’m only privy to the same public information everyone else is, I can normally read between the lines and see that the attack was not as sophisticated as the business said it was. There’s a lot of reputational damage that comes with a public breach, and businesses want to save face. This adds to the misconception that using the same username and password in multiple places or missing a few patches doesn’t make a big difference, but it does. 

Katherine Druckman: You’ve given us examples of low-hanging fruit from the hacker side. What is the low-hanging fruit that developers can address to protect organizations?  

Sarah Young: It hasn’t changed that much over the years. As an umbrella term, we call these practices good security hygiene. I have this conversation at least once a day: many people think they can only protect themselves with shiny new tooling or by completely changing the way they work, but research shows that good security hygiene will prevent about 95 percent of breaches. 

In developer terms, this means focusing on supply chain security—are you pulling from random libraries when you’re unsure where they came from? If you don’t know where a library came from, you should verify there’s nothing nasty in it. The same idea applies when using Kubernetes* and containers: don’t pull random container images from the internet. You should be using verified containers from a reputable repo or containers that your organization has built. Use good coding practices, and don’t hard code creds or secrets into your code. You should use a key store and reference it from variables in your code. Even better, no matter what commercial cloud you’re building on, you don’t have to use hard-coded credential strings anymore. Identity has evolved way beyond that. Nowadays, you can use what Microsoft calls a managed identity: a nonhuman identity managed in your identity provider’s identity and access management (IAM) system, which lets you apply more security controls to it and monitor it more easily.  
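As a concrete example of that last point, here is a short sketch using Microsoft’s Python SDKs (the azure-identity and azure-keyvault-secrets packages); the vault URL and secret name are placeholders, and other clouds offer equivalent identity-plus-key-store combinations.

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential automatically uses a managed identity when the
# code runs on Azure and falls back to developer credentials (e.g., an
# Azure CLI login) locally, so no secret ever appears in the code.
credential = DefaultAzureCredential()

# Placeholder vault URL and secret name for this sketch.
client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=credential,
)
db_password = client.get_secret("db-password").value
```

The code itself holds nothing worth stealing: access is governed by the identity the platform assigns at runtime, which can be scoped, rotated, and monitored in the IAM system.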

Of course, I’d be remiss not to mention good, secure coding practices. Things like SQL injection and directory traversal are still a problem, and attackers will look for them in your code. Whichever CI/CD pipeline you use, make sure it’s secure and that your code is being checked as it goes through the pipeline. It’s possible to inject nasty things while code is in the pipeline.

Even more basic is creating identities, which has been a problem since the beginning of time. When you’re creating identities in an application that you’re building, it should be plugged into your enterprise identity provider of choice. Are you giving machine IDs the permissions they need to work, or have you been lazy because you couldn’t work out what permissions they needed and gave them global admin so they have God mode over everything? This is the worst offender, and it makes an attacker’s life really easy. If they compromise that account, they don’t even have to try to escalate privileges because you’ve just given them the keys to the kingdom. I totally get it. It can be difficult to work out precisely what permissions an application needs, and often it’s a tech debt problem because the devs shouldn’t really have to work this out. They should be able to refer to documentation to know precisely what permissions their application needs and no more, but often we find that environments don’t document this well. It’s difficult, but talk to your security people and let them help you work out what permissions you need, because attackers know these things are difficult, and so they look for them.
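For directory traversal specifically, the defense is usually a small, boring check like the following Python sketch. The base directory is hypothetical, but the pattern (resolve the path, then verify it stayed inside the allowed root) is the standard one.

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads").resolve()  # hypothetical content root

def safe_open(requested: str):
    """Open a user-supplied filename, refusing paths that escape BASE_DIR."""
    target = (BASE_DIR / requested).resolve()
    # resolve() collapses ".." segments and symlinks, so a request like
    # "../../etc/passwd" lands outside BASE_DIR and is rejected here.
    if not target.is_relative_to(BASE_DIR):  # Python 3.9+
        raise PermissionError(f"path traversal attempt: {requested!r}")
    return target.open("rb")
```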

Katherine Druckman: The conversation around identity is fascinating. We could record an entire episode on the role identity plays in security—human identity and machine identity. I went to a DEF CON* workshop that was all about how to completely fake a nice-looking reputation on GitHub so that you look like the most impressive developer on the site. 

Sarah Young: Oh, definitely. When everything was on-prem, it used to be that your enterprise perimeter would be largely network based. You could use things like firewalls to build a very easy, clearly defined security perimeter. But you can’t do that in the cloud because you may be mixing cloud and on-prem or multiple clouds. So now the only way we can draw a security perimeter is by using a consistent identity.

The Future of Security Is Bright with AI

Katherine Druckman: On a more personal note, what are you really excited about in security? 

Sarah Young: AI is the big thing in security now, as it is for everyone. There are two parts to it. You can use AI to complement the work security people do. A lot of security work is extremely monotonous, and adding AI to the tool set can reduce some of the drudgery. AI is not going to replace security jobs, but it can free up people to look at what’s important. 

On the other side of the coin, there’s a huge challenge: we need to work out how we secure all the things that people are building on AI. The good news is that traditional security hygiene is still the best thing you can do. AI is changing at a million miles per second, but right now we need to treat AI just like any other application. You still need good, secure coding practices. You need to classify your data. You need to control who can access what on the identity side. There’s a lot of talk about AI model poisoning, and yes, theoretically that’s absolutely possible; it’s only a matter of time until there’s a breach of an AI model. But attackers will still target the path of least resistance—if you’re not patching, or if your accounts’ privileges aren’t soundly scoped, attackers are not going to launch a sophisticated attack to compromise your AI model. The things we’ve been recommending for years still apply.

To hear more of this conversation and others, subscribe to the Open at Intel podcast.

About the Author

Katherine Druckman, Open Source Evangelist, Intel 

Katherine Druckman, an Intel Open Source Evangelist, hosts the podcasts Open at Intel, Reality 2.0, and FLOSS Weekly. A security and privacy advocate, software engineer, and former digital director of Linux Journal, she’s a longtime champion of open source and open standards.