Subscribe: iTunes | Spotify | Google | PodBean | RSS
Environmental sustainability should be one of the most important considerations of technology going forward. Cloud and on-prem data centers consume a significant amount of energy, which maps to a significant carbon footprint. This month, we will be focusing on how developers can affect environmental sustainability by using a variety of efforts and technologies.
Niki Manoledaki is a software engineer who works with the CNCF to advocate for cloud-native environmental sustainability by contributing to the CNCF Environmental Sustainability Technical Advisory Group. She is also a Maintainer of the GitOps WG.
Marlow Weston is a Cloud Software Architect at Intel specializing in resource management. She is co-chair of the CNCF Environmental Sustainability Technical Advisory Group. She’s also worked on MLOps, firmware, drivers, HPC cluster tools, and I/O HPC libraries.
Listen. [36:00]
Learn More:
Tony [00:00:04] Welcome to Code Together, a podcast for developers by developers, where we discuss technology and trends in industry. I'm your host Tony Mongkolsmai.
Tony [00:00:17] Environmental sustainability should be one of the most important considerations of technology going forward. Cloud and on-prem data centers consume a significant amount of energy, which maps to a significant carbon footprint. This month, we will be focusing on how developers can affect environmental sustainability by using a variety of processes and technologies. I recently contributed to a group that created an Environmental Sustainability Technical Advisory Group, or TAG, within the Cloud Native Computing Foundation. Today, I'm joined by two engineers who are also part of that group and, honestly, much better equipped to help us talk about sustainability efforts. Niki Manoledaki is a software engineer who works with the CNCF to advocate for cloud native environmental sustainability by contributing to the Environmental Sustainability TAG. She is also a maintainer of the GitOps Working Group. Welcome to the podcast, Niki.
Niki [00:01:06] Hi, thanks so much for having me.
Tony [00:01:08] We're also joined by Marlow Weston, who's a cloud software architect at Intel specializing in resource management. She's also worked on MLOps, firmware, drivers, HPC cluster tools, and I/O HPC libraries. She's also one of the chairs of the TAG Sustainability group. Welcome to the podcast, Marlow.
Marlow [00:01:27] Thanks for having us.
Tony [00:01:27] So let's start off with what is tech sustainability? How does it fit into CNCF and what are the goals of the TAG?
Marlow [00:01:35] So TAG Sustainability was initially a working group which brought together a group of people passionate about sustainability and trying to save power within data centers, because data centers are using an increasing amount of power percentage-wise and we need to get those numbers down. As we're trying to move more towards zero emissions, saving power is a very big thing. And I know we say saving carbon, but power and carbon are very strongly interlinked. So this was initially a working group, and the CNCF decided to turn it into a technical advisory group. We went there and we ended up having a big collaboration. We had Max and Leo — I'll mess up their last names if I try to say them — and they were my co-chairs and they're wonderful. They work at Liquid Reply. They were two of the frontrunners of this working group before and are now my co-chairs. We've all come together, and now we're starting to do presentations because we're still new. We're doing presentations on current projects. We have one on Kepler, which we'll talk about hopefully later in this podcast, and Scaphandre, which is another project that was just presented this week, and we will continue to take in demos. So there's some knowledge about what projects are being done to reduce power usage and improve sustainability within data centers.
Niki [00:02:58] Yeah, so now we're also creating a new working group in the TAG, which will focus on comms, conferences, and work around that — any kind of outreach and awareness-raising that we can do. There's also a white paper that Marlow and I contributed to. It's on the repo — all the information about the TAG can be found on the TAG Environmental Sustainability repo that is part of the CNCF organization. And please join our meetings; they happen twice monthly.
Tony [00:03:31] So the TAG is open. Everybody can attend...
Marlow [00:03:34] People are encouraged to attend. Also, if you can't make the meetings, there's a Slack channel so you can contribute there. And we're always happy to take contributions to the landscape document. The landscape doc is the white paper that Niki is referring to; you can find it off the webpage. We're also working on a maturity model. We're looking at running small events at future KubeCons to raise awareness, but also so we can start looking at different projects that are addressing the technical challenges in this area. Because we can talk a lot about how we want everything to be better, but we also need to be discussing the technologies that we're using to address it.
Tony [00:04:06] The scope of something like the TAG Sustainability group within the CNCF... what is the scope of what you guys are doing? Can you explain for people who may not be familiar but are interested in potentially joining? Are we trying to influence just the software? Are we trying to influence the hardware? What is the role of this group, and how can we actually make a difference when it comes to sustainability and data centers?
Marlow [00:04:34] So what we're talking about is only cloud native technologies. If you're not running on cloud native software, we're probably not the group for you, but there are other groups that are linked to us. For instance, there's the Green Software Foundation, run by Asim, which has a lot of good work in there and a lot of energy. Niki, I think you know some more that I don't have listed here. But as far as our scope specifically: we want to identify, define, and develop tooling to assess and improve sustainability approaches. We do community outreach and engagement on the work of this technical advisory group. And we're collaborating with other environmental sustainability organizations and efforts that may fall outside the CNCF. But we are trying to stay in a very small space, because otherwise you get scope creep, which is a nightmare for every engineer on the planet. Sustainability specifically has a huge amount of scope creep — I mean, we've had conversations in these meetings about reducing our carbon use by using stickers instead of t-shirts when we're advertising the TAG. Out of scope is any umbrella organization beyond the CNCF, and establishing any compliance or standards bodies. We don't want to evaluate individual infrastructures, so if you want that, you have to find something a little more formal than that.
Niki [00:05:50] Yeah, conversations get difficult as soon as you go wider. Also, when it comes to cloud sustainability, for example, the PV of a data center may or may not be in scope — it depends, right? So that's why we use the CNCF's definition of cloud native as kind of a marker of whether something is in scope or out of scope.
Tony [00:06:17] Yeah. So we don't provide the analysis for companies, but we do want to provide the tooling that enables companies to do that analysis — and maybe not provide the tool itself, but provide a recommendation: this is a good tool for you to do analysis of your power consumption and potentially your carbon footprint. Right?
Marlow [00:06:37] So we don't do official recommendations of tooling, but we will say what tooling is there. The landscape doc tells you what tooling is available, as far as what we know and what's been contributed. And if we're missing something and a listener hears about it, they should go and update our landscape doc — they're welcome to. The landscape doc is just trying to say what tooling is available; as far as maturity of the tooling, you have to evaluate that yourself, or you can see if it's a CNCF Sandbox project. I know we have at least one CNCF Sandbox project that just got approved — Kepler just got approved as a CNCF Sandbox project, which is pretty exciting. So we're not in the business of recommending tooling; we're in the business of helping make sure that tooling is built appropriately and giving people a platform with which they can advertise.
Tony [00:07:28] Okay. And the first thing that we need to worry about when we're talking about trying to figure out whether something is using a lot of energy is measurement. So with that, let's talk about how we measure things within a data center, a cloud native data center. You guys mentioned Kepler. So let's talk a little bit about what's Kepler.
Niki [00:07:46] So Kepler is an eBPF-based tool for energy monitoring. It works with eBPF, so it looks at kernel information and it associates the energy consumption that it can read from different components in the kernel with components in Kubernetes — for example, pods, namespaces, aggregations, or the node. With that ability, it's the first time that we can really see the energy consumption of different Kubernetes components. So it's the first cloud native energy monitoring tool, and it's great that it's now part of the CNCF project landscape. There are various limitations to it that we should mention. It's an eBPF-based tool, and eBPF is not a new technology, but it is a technology that people may be less familiar with. There are various security considerations with eBPF, and with other parts of Kepler as well, that need to be worked out. I've done a couple of talks on Kepler. I did one last week as part of the GitOps Working Group — we have kind of a subgroup there on environmental sustainability, and as part of that, we wanted to do benchmark tests of energy consumption for GitOps tools such as Flux and Argo CD. We used Kepler to test different scenarios, like reconciliation scenarios, and also the idle state — what do these GitOps tools look like, and how much energy do they consume, when idle? I think by the time this airs, this should be out. And it would be really interesting if anyone wants to learn how to use Kepler to do tests like these about the software they're building or using — feel free. We have all the information out there. You can look through the GitOps Working Group or the Environmental Sustainability TAG and find us there.
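Niki's description — associating kernel-level energy readings with Kubernetes components such as pods and namespaces — is, at its core, an aggregation over labeled counters. Here's a minimal sketch of that idea; the label names and sample values are illustrative, not Kepler's exact metric schema:

```python
# Sketch: aggregating per-container energy samples (Kepler-style
# labeled joule counters) into per-namespace totals. Labels and
# values below are made up for illustration.

def joules_by_namespace(samples):
    """Sum joule counters by the 'namespace' label."""
    totals = {}
    for labels, joules in samples:
        ns = labels.get("namespace", "unknown")
        totals[ns] = totals.get(ns, 0.0) + joules
    return totals

# Example samples: (labels, cumulative joules) per container
samples = [
    ({"namespace": "flux-system", "pod": "source-controller"}, 120.5),
    ({"namespace": "flux-system", "pod": "kustomize-controller"}, 98.2),
    ({"namespace": "argocd", "pod": "argocd-repo-server"}, 210.0),
]

print(joules_by_namespace(samples))
```

In a real setup, the samples would come from Kepler's exported metrics (e.g. via a Prometheus query) rather than a hard-coded list.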
Marlow [00:09:52] One other nice thing about Kepler is it has some modularity, so you can plug in different tooling or write your own tooling if the measurements there are not quite what you want. It's a really friendly group of people working on that, so they're very agreeable to constructive feedback.
Niki [00:10:13] And with energy measurements — once you have the energy, you can then deduce the carbon intensity of a piece of software. But that's a step further. Establishing how to use Kepler is a big step, and the one after that would be to deduce the carbon emissions and carbon intensity based on the energy consumption, with things such as a carbon coefficient — if you know the carbon intensity, or the marginal carbon emissions as it's called, of the grid where your software is running, then you can find that information. But it's very challenging in the cloud environment.
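The step Niki describes — going from measured energy to a carbon estimate via a carbon coefficient — is, at its simplest, a unit conversion. A minimal sketch, assuming a fixed grid intensity (real marginal intensity varies by grid and by time of day, which is exactly the hard part she mentions):

```python
# Sketch: converting measured energy (joules) into an estimated
# carbon figure using a grid carbon-intensity coefficient
# (gCO2e per kWh). The coefficient value is illustrative.

JOULES_PER_KWH = 3_600_000  # 1 kWh = 3.6 MJ

def grams_co2e(joules, grid_gco2e_per_kwh):
    """Estimated grams of CO2-equivalent for the measured energy."""
    kwh = joules / JOULES_PER_KWH
    return kwh * grid_gco2e_per_kwh

# 50 MJ measured by an energy monitor, on a grid at ~400 gCO2e/kWh
print(round(grams_co2e(50_000_000, 400), 2))
```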
Tony [00:10:57] And one of the other challenges that we have is that we're really talking about using software-level measurements, which capture whatever hardware vendors choose to expose through their interfaces — in this case, to the Linux kernel subsystem, or maybe even at a higher level. But still, it's a software measurement, which means that we're only capturing information about things that hardware vendors choose to expose. So there are devices that don't have measurements related to power — like, for instance, potentially hard drives, memory DIMMs, even motherboards.
Marlow [00:11:40] That's correct. But additionally, we have to be really careful, because just because you can measure how much CPU you're using doesn't mean you're capturing the whole picture. If you're waiting on latency and you're running longer because you're at lower power, that doesn't mean you're in a better place. One thing we also haven't really dug into yet is what HPC has, right? High performance computing has been trying to deal with sustainability for a long time, so there are lots of lessons we can take by reading those papers. Sandia National Laboratories — I'll call out to them, because James Laros over there kind of pioneered this space. But with all of that, you also have to look at time to failure. How much time does it take for your job to fail? If you're running a large job and you're trying to use minimal power, and then your whole job goes down, then you have to restart it, or you have to start from the last checkpoint — how much energy are you using there? So there are a lot of interesting things that we forget when we're doing measurements or trying to minimize power that can actually hurt us in the long run.
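Marlow's time-to-failure point can be made concrete with some arithmetic: a job run at lower power for longer can cost more total energy once a failure forces re-doing work from a checkpoint. All the numbers below are hypothetical, purely for illustration:

```python
# Sketch: total job energy including work re-done after failures.
# A "slow but low-power" run that fails once can end up using more
# energy than a "fast but high-power" run that finishes cleanly.

def job_energy(power_watts, runtime_s, restarts=0, redo_fraction=0.5):
    """Total joules, counting re-done work after each failure.

    redo_fraction: fraction of the run repeated per restart
    (i.e., work lost since the last checkpoint).
    """
    base = power_watts * runtime_s
    return base + restarts * redo_fraction * base

# Fast run: 200 W for 1 hour, no failures
fast = job_energy(power_watts=200, runtime_s=3600)
# Slow low-power run: 120 W for 2 hours, one failure mid-run
slow = job_energy(power_watts=120, runtime_s=7200, restarts=1)
print(fast, slow, fast < slow)
```

Under these made-up numbers the low-power run loses: its longer runtime plus one restart outweighs the lower draw.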
Niki [00:12:45] Another challenge with energy monitoring — for example, using an eBPF-based tool such as Kepler — is that the amount of data emitted by eBPF is huge. It's a lot. Running an eBPF-based tool gives you this kind of superpower, but at the same time, there's all this data that now is being stored and accumulated, and that in itself is taking up compute power and memory. So there are still a lot of challenges.
Tony [00:13:16] So you're talking a little bit about how we need to be careful, how our measurements affect the behavior of our systems. One of the common things with measurement also is that it can perturb how things are behaving. So typically we think about it in terms of performance. In this case, we might think of it in terms of power, as Niki just mentioned. What are some of the issues that we should be concerned about there?
Marlow [00:13:35] Some of the things we have to look for are performance issues. If you're hurting your performance because you're flooding the network — because you're doing eBPF, which can see lots and lots of data — then your users aren't going to want to use your product. And if you are using a lot of power in order to run your measuring, they're not going to want to use your product either, right? The other thing to be really careful of in all of this is the mean time to failure. If you're failing your job because you're trying to keep your power level down, because you're going back and forth with your measuring, then you probably don't want to be using that method either.
Tony [00:14:14] And we're talking kind of in the realm of the CNCF, so we're talking about things that people and companies are looking to do on-prem. At least that's how I think of the CNCF, right? It's how you try to mimic a cloud environment. Are there things that individual companies should be concerned about, like how they're comparing their spend or their carbon footprint versus what cloud providers are providing? Because one of the things we think about in going from a cloud environment to an on-prem environment is that you're trying to make some kind of tradeoff. So if I care about sustainability as a business — and most businesses are starting to move that way — how do I compare what I'm measuring versus what a cloud vendor is telling me in terms of the sustainability performance that I can measure? Right? We have performance metrics and things like that, but what about sustainability? What should we be looking for there?
Niki [00:14:58] So on that topic, there's been an interesting study that came out not long ago on the sustainability benefits of moving from on-prem to cloud environments, mainly from the scale of the operations. I think at the moment, what we're seeing from an environmental sustainability point of view is that there's not that much data available to us. I'm more familiar with AWS because I was a maintainer of the EKS CLI, and what we see there with AWS, for example, is that there is a customer carbon dashboard — carbon metrics that are available to you based on region. That's available now; it wasn't available before. For EC2 instances and S3, you can get your carbon emissions based on those services. But really, that's as far as the data goes. There are many more issues. There's a time lag of three months: whatever carbon you emit today, from the operations that you run today, will be shown in three months in your dashboard. So you can't have real-time data. A lot of the limitations there are due to legal reasons as well, from my understanding from talking with AWS folks about this. And there's also the data granularity issue. They increased the decimals by which you get carbon data, but you can't, for example, sort it by a tag or label that would allow you to really get information about a specific environment or user or team using the specific account. So there are still a lot of limitations there. And there are no API endpoints either, which could potentially help. That's for AWS, but I know Azure and GCP have, I think, more features around sustainability.
Tony [00:17:27] The lag is really interesting because if I was trying to make decisions about whether or not to use a certain cloud to reduce my carbon footprint, a three month lag is essentially a change of seasons around the world. So I guess I would have to look at the year before to try to make that kind of decision where to schedule my job?
Niki [00:17:44] So you get like a quarter of lag, right? In Q2, you get information about Q1. For yearly carbon accounting, I think that's useful, but for engineers it might not really be useful.
Tony [00:18:03] So the first step is I'm going to measure, I'm going to try to figure out whether or not I have a potential issue or whether or not I can optimize around that. So let's go to optimizations. And I know, Marlow, you and I talked about this a lot. Let's talk a little bit about the optimizations that we can get in a cloud native environment.
Marlow [00:18:20] I think of it as different levels of optimization. There's regional, which is which data center you're going to, if you have more than one data center you're dealing with. There's the cluster level, which is scheduling — which nodes in the cluster are used — because you don't want to be spinning up more nodes than you need. And I'll tell you, from looking at the studies and feedback from users, at best you're using 40% of your CPU capacity on average — really, 20 to 40% are the numbers I've seen. And then there are the node-level optimizations. They're all important, but not everyone has the luxury of the regional option. If you have an on-premise system and your data center is local, being able to choose a place that's getting solar power may not be an option if you're living somewhere rainy, right? So it really depends on the specifics of your system. The other thing that may matter is, if you're in financial services and you need quick turnaround, you need to be close to your data center. So regional isn't always the best thing — it's not easy to move a data set across regions. The cluster-level stuff is where you schedule on your system. There are all sorts of scheduling components within Kubernetes, and there are quite a few carbon-aware schedulers — I think there was just a new one that came out with KEDA. We're working on a project called Intent Driven Orchestration, and my team has another project called Telemetry Aware Scheduling that's been used with power metrics as well. There's a variety of them, so we can do better than not doing any at all as far as choosing where we schedule. But it also depends on the parameters: you want to be spinning nodes up and down, and customers are nervous about using those things because spinning nodes up and down increases the chances of them not coming back up, right?
So you have to put them into sleep states. And then when you talk about node-level optimizations, you can look at power — like trying to keep the power level low on cores that aren't being used. That's kind of an easy case, right? But it's still not done as much as you'd think, because it takes time to spin the power up on those cores, and then where do things get scheduled once the cores get throttled? Kubernetes is still not quite in a place where it does these things easily. And then the last piece is if you're doing multi-socket, and then you start worrying about NUMA nodes. What is the proximity of your cores to your NIC? What is the proximity of your cores to your memory? Are you losing time as you're going across the UPI bus? And are your devices aligned, if you're using a GPU, or are you losing time going across the UPI bus again that you wouldn't have to be spending? There are quite a few different optimizations you can play with in this area, and quite a few tools listed in that landscape doc that try to address various areas. And some companies are selling cohesive strategies. There's quite a bit of space.
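The regional level Marlow describes boils down to a constrained choice: among the regions that meet your latency needs, prefer the one with the lowest carbon intensity. A toy sketch of that decision, with entirely made-up region names and numbers:

```python
# Sketch: carbon-aware region selection under a latency constraint.
# Region data (name, gCO2e/kWh, latency in ms) is illustrative.

def pick_region(regions, max_latency_ms):
    """Pick the lowest-carbon region whose latency is acceptable."""
    eligible = [r for r in regions if r[2] <= max_latency_ms]
    if not eligible:
        raise ValueError("no region satisfies the latency constraint")
    return min(eligible, key=lambda r: r[1])[0]

regions = [
    ("eu-north", 30, 90),   # hydro-heavy grid, but farther away
    ("eu-west", 250, 40),
    ("us-east", 420, 35),
]

print(pick_region(regions, max_latency_ms=100))  # latency-tolerant batch job
print(pick_region(regions, max_latency_ms=50))   # latency-sensitive service
```

This mirrors her financial-services point: tighten the latency budget and the greener region drops out of the eligible set.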
Niki [00:21:26] One area that I'd like to add here is optimizations per environment. For example, in your development environment, you may have more flexibility in terms of which region to use and how you're scheduling. Your integration tests may be scheduled in a specific region that has more renewable energy available to it — like the Nordic countries in Europe, for example — but that may not be something you can do in production. And again, going back to eBPF, which has security considerations that may make some folks less inclined to use it in production: you could still use it in development or for a specific integration test. We've also seen a demo in the TAG for a new tool called kube-green. What that does is scale down some of your deployments outside of office hours. That's difficult in larger companies that may be geographically distributed across the globe, but there is still value in it. And there is a drive in terms of cost efficiency as well — to save on the financial cost by doing this, as well as energy and carbon. But it is something that you could really only do in your dev environment, and it raises the question of who is the persona that can do these optimizations. So another factor in this is the persona doing the optimizations — who has access to which part of the Kubernetes and cloud environment, etc.
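The scale-down-outside-office-hours idea Niki mentions rests on a simple decision rule. Here is just that logic as a pure function — the office-hours window is an assumption for illustration, and wiring the result to the Kubernetes API (as a tool like kube-green does) is deliberately left out:

```python
# Sketch: decide a deployment's replica count from the time of day.
# Assumed office hours: Monday-Friday, 08:00-18:00.

def desired_replicas(hour, weekday, normal_replicas, start=8, end=18):
    """Return 0 outside office hours, else the normal replica count.

    weekday: 0 = Monday ... 6 = Sunday (as in datetime.weekday()).
    """
    in_office_hours = weekday < 5 and start <= hour < end
    return normal_replicas if in_office_hours else 0

print(desired_replicas(hour=10, weekday=2, normal_replicas=3))  # weekday, working hours
print(desired_replicas(hour=22, weekday=2, normal_replicas=3))  # weekday, night
print(desired_replicas(hour=10, weekday=6, normal_replicas=3))  # weekend
```

In practice you'd evaluate this on a schedule and patch the deployment's replica count, which is exactly the part that gets hard in globally distributed teams.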
Tony [00:23:10] Yeah, it's almost like you're having to software-engineer your solutions for sustainability, right? Because you're talking about how you do your infrastructure, how you do your engineering practices. So it's not just how do I run something on a system that I'm deploying in production and make that optimal, but how am I actually stepping back and engineering my practice, or my software development lifecycle, to be as sustainable as possible, leveraging the tools that are available.
Marlow [00:23:40] One other thing that we can look at that isn't necessarily cloud native is what languages you're using. There's an interesting paper that came out a while ago — and it's controversial, because how much power you use depends on what workloads you run and what the language is optimized for — that showed some very interesting numbers regarding what language you're using. For instance, Rust was unexpectedly good. Python was expensive, Java was expensive. C and C++ were cheap. No one has actually sat down — well, maybe people other than me — and laid out what types of workloads each one is good for. Because Golang surprisingly did not perform well, which surprised me. I expected it to do better, considering it's pretty good as far as performance.
Tony [00:24:33] That was just runtime performance though, right? They're measuring the performance of an algorithm at runtime, not necessarily compilation overhead or things like that. Although I would expect Rust and more modern languages to generally compile faster, because we understand how lexers and tokenizers work more efficiently now, etc.
Marlow [00:24:49] Well, you'd expect virtual machines to be more expensive.
Tony [00:24:53] When we talk about the CNCF, we obviously look at and think a lot about Kubernetes, because that's one of the core components that drives the CNCF, in that it's the core orchestration method for a lot of cloud native workloads. But by design, Kubernetes isn't really built to allow you to have fine-grained control of your hardware — the whole goal of Kubernetes is to abstract the hardware away. But for something like sustainability, we actually need to understand what the hardware is so we can make the right optimizations. I think that's what you were talking about, Marlow. For people who aren't really familiar with your projects, can you describe a little bit about how they fit into the Kubernetes landscape?
Marlow [00:25:33] When you're talking about hardware savings, you need to basically lift that hardware up and make it easy for the user to use. The user still shouldn't have to know about the hardware, at least not in dev. Maybe they want to know specifics if they're running an HPC or an AI/ML type workload, and what zone they're in, but other than that, they should not have to know specifics. So what we're basically doing is leveraging algorithms to automatically tune those cores if we can, but they have to plug into the Kubernetes infrastructure. Some of our work is finding ways to plug hardware-specific capabilities into the Kubernetes infrastructure without the user having to understand that world. Because if you look at different types of users — and I have a talk about this that I gave on an HPC batch day at KubeCon NA 2022 — there are different paradigms of users. I think that's sometimes lost on the people trying to build this tooling, because they're used to their own space. Whether it's HPC, Telco, or AI/ML: legacy users assume that they have control of the hardware, while the newer users don't want to care, because it's a high overhead, right? They want to go build the products that they want to build. So it's basically taking that hardware and finding ways to leverage it in the software, within Kubernetes. That doesn't mean that you're not cloud native, and it doesn't mean that you push the pain onto the user. It means you push the pain onto your development teams that are writing these tools.
Tony [00:27:07] I'm sure your development team is happy to hear that.
Marlow [00:27:09] They're cheerful.
Tony [00:27:14] The other thing that we can look at as we consider sustainability is that, at the end of the day, it's really a tradeoff between productive work versus the amount of carbon that we're emitting or the energy that we're using. So when we think about accelerators: accelerators have the interesting property that, at least when we think of GPUs, they're really high power, but they're also much more efficient in terms of the amount of work they get done for that power usage. But then we also have to look at what's the idle power as they sit in the data center, how we cool them, etc. So are we looking at different tools or different frameworks to understand the behavior and the power of accelerators?
Marlow [00:28:00] I think we should be. There are some people looking at accelerator-specific power savings. I will say that when you go and look at the amount of power accelerators use, they blow through your CPU usage by a lot, if you're talking an accelerator versus a CPU. So we should be looking in those ways. I mean, we're looking at ways to share various cards among more than one process. It used to be one process per card — I'm sorry, one pod per card. But now that's changing, right? So now it's more than one process per card, or pod per card.
Tony [00:28:38] So actually the virtualization of the pods onto the accelerators.
Marlow [00:28:42] Exactly. Exactly. So you can share them. But also, sometimes if you look at how you power on these accelerators, they're long, complex processes, and they have to happen in particular orders. So that needs to get better, right? That needs to improve. You need to be able to power the cards on and off, or at least put them into a lower power state. Accelerators — especially the types of accelerators being built these days — are still fairly new compared to CPUs, right? We've had time to sit down and figure out how to put cores into sleep states, but for accelerators, that's not necessarily true across the board. So we need to start looking there, and I do think it's a field that would be very interesting.
Niki [00:29:27] This isn't about accelerators specifically, but more about data centers as a whole. There was a really interesting talk last week at the Open Source Summit, specifically in the SustainabilityCon track, by Chen Wang — she's also a contributor to the Environmental Sustainability TAG — and she presented work by Hua Ye and Fan Jing Meng from IBM China, where they basically did a lot of optimizations in a data center and then showed how they reduced the energy consumption, and the cooling costs as well, using cloud native software. I would not be able to list all of those optimizations myself — I think you'll have to watch the talk, but hopefully we can link it. This is a very recent talk and I think people will learn a lot from it. There's also a blue paper — for anyone who doesn't know, that's a white paper by IBM, because they're the Big Blue or whatever. So there's a blue paper that's going to be coming out very soon. It's available in Chinese, but it will also be in English very, very soon.
Tony [00:30:37] Awesome. Yeah, we will definitely link that. And it's always good to share the different learnings — that's kind of the goal, right, to make sure as many people as possible know about all the possibilities that are available to them. So we're almost out of time. I'd like to ask: what are the things that you think are going to be most interesting or most important in terms of ensuring that we have a sustainable future around data centers? I know we spent a lot of time talking about the CNCF and TAG Sustainability, because that's kind of the thing we can control, but let's make it broader, in terms of just sustainability. What are the things that you guys are most interested in going forward?
Marlow [00:31:22] I'll do the two parts of that question. One is what I think has the most scope for good, and that is probably cooling in general. If you look at the cooling numbers, 30 to 50% of your data center's energy is spent on cooling. And it's important, because engineers want to go measure everything and optimize everything — computer scientists and engineers want to optimize everything that can be optimized on the node — but we're not looking enough at how much heat we're generating. When you start looking at heat exchangers, those heat exchangers are huge power users. As far as what I want to focus on, it's probably going to be more on the node-tuning and cluster-tuning optimizations, because I think we can start building a smarter system that tunes itself, and I'd like to see tooling going in that direction going forward. Niki?
Niki [00:32:22] I'll also give a two-part answer. The first thing that I see in the open source ecosystem, beyond the CNCF: from watching the talks at SustainabilityCon, I realized that there are a lot of open source ecosystems in the Linux Foundation working on very similar things — like measuring carbon emissions, scope one, two, and three, so that's direct, indirect, and embedded emissions — and facing a lot of the same challenges. So I hope that there can be more coordination between the different sustainability-focused groups in the different Linux Foundation organizations. And as an engineer, what I would like to focus on in the future, and what I would like to see more of: for example, in IBM China's research on optimizing sustainability in the data center, they used a lot of Grafana dashboards, and they looked at personas and what each energy and carbon metric looks like for each persona — the person running finances, the engineer, the SRE, the person doing infrastructure work. Everyone has a different persona, different viewpoints for this data. So how can we look at aggregating these dashboards and tell a story, really look at these optimizations and show them to the world — spread the information about the data per tool and per optimization in that way.
Tony [00:34:23] That's great, and I'll give my own — I typically don't give my own at the end, because I just ask my guests. But as someone who thinks a lot about business strategy and sustainability — and I spend a lot more time thinking about business strategy than sustainability — I think it would be really important for us to get a true accounting of the carbon footprint of various large companies. Obviously we have all kinds of things that we do with offsets, where people are saying, "I'm carbon neutral because I had this offset." Well, the reality is, when we generate carbon, we generated carbon, right? There are still ways that we need to reduce that. And rather than looking at some offset that has been decided on by a government or something like that, we should have a true accounting from companies that can actually take the measurements that we're talking about and say: this is the carbon footprint that I'm generating, without the offsets. I think that would be challenging for businesses, but something that would be really good for society, for sustainability, and for the environment. And with that, I hope, if you are still listening, that you are interested in sustainability. We hope to have Asim from the Green Software Foundation, which Marlow mentioned, on one of our podcasts in the coming weeks as we talk about sustainability here in the month of June. So I hope you guys care about this, because the three of us really care about this. And thank you for listening.