If you’re a fan of deep dish pizza, beautiful parks, and celebrated architecture, you’ll have even more reasons to love this year’s KubeCon + CloudNativeCon North America in Chicago from November 6 to 9.
The Cloud Native Computing Foundation’s (CNCF’s) flagship conference is the place to be for anyone interested in advancing the Kubernetes* and cloud native communities. Join the Intel Open Ecosystem team and open source leaders from around the globe for four days of learning from experts, sharing project insights, elevating your skills, and networking with peers. You can also attend virtually.
Intel has been invested in open source since 1991 and is a top corporate contributor to the Linux* kernel, Kubernetes*, OpenJDK*, and PyTorch*. This year, we’re excited to lead the following sessions and demos focused on AI, sustainability, resource management, and more.
Monday, November 6
- Lessons from Scaling AI Powered Translation Services Using Istio*: Learn how AI translation and localization solution Lilt* leverages Istio to drive higher product velocity. We’ll cover how the initiative gained organizational traction, how Lilt capitalizes on Istio’s functionality, and what’s next on the optimization road map.
- Lightning Talk: Environmentally Sustainable AI via Power-Aware Batch Scheduling: Learn about a new solution that can deliver more-efficient training of large language models (LLMs) in batch environments via a pod-scheduling algorithm and a nonlinear hardware power model.
- Exploit Parallelism for AI Workloads with WASM and OpenMP*: Discover the potential benefits WebAssembly* can bring to AI workloads using the WASI-threads interface in concert with OpenMP, and learn what challenges stand in the way.
- Tracing HTTP/2 Traffic Using a gRPC Tapping Filter with Hardware Acceleration: See how a gRPC sink for transport socket-level tapping can help prevent main-thread bottlenecks during admin streaming, and how it provides a set of counters representing traced traffic along with a ConnectionInfo cache-replay mechanism.
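The power-aware batch scheduling lightning talk above pairs a pod-scheduling algorithm with a nonlinear hardware power model. As a purely hypothetical sketch of the general idea (the talk’s actual model and algorithm are not reproduced here; every name and constant below is invented for illustration): under a convex power curve, placing new load on a lightly loaded node costs less marginal power than adding it to a busy one.

```python
# Hypothetical illustration of power-aware placement. The power model
# below (idle power plus a superlinear dynamic term) is a toy stand-in
# for a real nonlinear hardware model, not the talk's actual model.
def predicted_power(utilization: float, idle_watts: float = 200.0,
                    dynamic_watts: float = 350.0) -> float:
    """Toy nonlinear power model: power grows superlinearly with load."""
    return idle_watts + dynamic_watts * utilization ** 1.4

def best_node(node_utilization: dict[str, float], pod_load: float) -> str:
    """Pick the node where placing the pod adds the least marginal power."""
    def marginal(u: float) -> float:
        return predicted_power(min(u + pod_load, 1.0)) - predicted_power(u)
    return min(node_utilization, key=lambda n: marginal(node_utilization[n]))

# Two nodes at 20% and 70% utilization: with a convex power curve,
# a 0.2-load pod costs less marginal power on the lightly loaded node.
print(best_node({"node-a": 0.2, "node-b": 0.7}, pod_load=0.2))  # node-a
```

Real schedulers must of course weigh power against queueing delay, data locality, and fragmentation; the sketch only shows why a nonlinear (rather than linear) power model changes placement decisions at all.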
Tuesday, November 7
- Keynote: Environmental Sustainability in the Cloud Is Not a Mythical Creature: Hear about the latest technologies and practices cloud providers and users are leveraging to accelerate net-zero computing. In this keynote session, Intel will join panelists representing the Green Software Foundation*, CNCF Environmental Sustainability TAG*, O-RAN*, and more.
- CNCF Environmental Sustainability TAG Updates and Information: Get up to speed on CNCF Environmental Sustainability TAG’s latest initiatives and collaborations, including how the TAG evaluates and reports on the success of CNCF projects.
- All Things In-Toto: Supply Chain Attestations, Policies and Adoption Stories, Oh My! Get an introduction to in-toto*, a popular CNCF project for software supply chain security, and learn about community updates on new attestation formats for supply chain use cases.
- High Performance, Low Latency Networking for Edge & Telco: Listen in as panelists from Intel, AMD*, Bloomberg*, and Google* discuss different approaches for supporting Kubernetes Network Infrastructure Offload, an implementation-agnostic solution that’s purpose-built for cloud environments. Following up on last year’s panel, the group will share an update on standardization and open source developments for offloading Kubernetes networking operations.
- Advancing Memory Management in Kubernetes: Next Steps with Memory QoS: Explore how advanced memory controls help improve performance and resource utilization in Kubernetes deployments, including an overview of advanced memory controls available in cgroup v2 and how custom Node Resource Interface (NRI) plugins give administrators complete control over workload memory options.
- KMM: Your Swiss Army Knife for Kernel Modules on Kubernetes: Watch as we demo the Kernel Module Management (KMM) operator to build, load, and upgrade kernel modules directly to nodes, making it simpler and more cost-effective to deploy and manage the life cycle of GPU drivers in Kubernetes for AI use cases.
Wednesday, November 8
- Diversity, Equity, and Inclusion (DEI) Lunch: Join us for lunch and a special program around diversity, equity, and inclusion. We encourage attendees to engage in discussions at their tables about DEI initiatives at Intel.
- Networking event at Fatpour Tap Works at McCormick Place: Catch up with peers from 6 to 9 p.m., sponsored by Intel. Visit the Intel booth to RSVP and grab a wristband!
Thursday, November 9
- Environmentally Sustainable AI via Power-Aware Batch Scheduling: Learn about a new solution that can deliver more-efficient training of large language models (LLMs) in batch environments via a pod-scheduling algorithm and a nonlinear hardware power model.
- gRPC Tapping Filter with Hardware Acceleration in Service Mesh: See how a gRPC sink for transport socket-level tapping can help prevent main-thread bottlenecks during admin streaming, and how it provides a set of counters representing traced traffic along with a ConnectionInfo cache-replay mechanism.
- Lifting the Hood to Take a Look at the Kubernetes Resource Management Evolution: Learn what the rapidly evolving hardware landscape means for Kubernetes resource management, including important UX changes for end users, interfaces between Kubernetes stack components, and how different pluggable algorithms can help deliver on performance, power, and resource utilization goals.
- Flavors of Certificates in Service Mesh: The Whys and Hows! End users often get confused about which certificate deployment option in service mesh security is right for them. Get the scoop on the different flavors of certificates and how to get the most out of them. We’ll also demo common business use cases on the Istio service mesh platform.
Demos at the Intel booth
Between sessions, stop by Intel booth #D3 to catch a technical demo.
- Federated Learning with OpenFL* in an Intel-attested confidential environment: Open Federated Learning (OpenFL) is a Python* framework that allows data scientists to train AI models at different locations without sharing sensitive data. See how OpenFL uses a trusted execution environment to help protect AI models during training while maximizing the benefits of cloud, hybrid, and on-premises deployments.
- Generative AI with Intel® OpenVINO™ and Red Hat OpenShift Data Science (RHODS)*: Generative AI is unlocking game-changing possibilities, but evaluating and adopting gen AI requires significant compute resources and can lead to deployment roadblocks. This demo highlights two open source technologies that can help make generative AI a reality for your organization. Get the knowledge and tools you need to accelerate and deploy generative AI on Intel® hardware.
- Open, Secured, and Accelerated Intel Service Mesh: Istio is an open source service mesh that provides distributed applications with a uniform and efficient way to secure, connect, and monitor services. See how the Intel Managed Distribution of Istio service mesh leverages Intel® hardware to optimize performance, security, and extensions.
- AI workloads, you can have it all: Scalable, confidential, and sustainable: Orchestrating and scaling AI workloads in cloud native environments is challenging; ensuring they’re confidential and sustainable raises the bar even higher. This demo presents an AI pipeline for a retail checkout or security surveillance application, which can be equipped with everything from video camera data processing to object identification, and shows how microservices can scale to handle video input using the Kubernetes KEDA operator. The pipeline scales pods horizontally or vertically as appropriate to help meet frames-per-second (FPS) SLAs for AI stream inference, while running inference in a trusted execution environment to help enhance security. Further, the workload seamlessly leverages matrix instructions to reduce power consumption by up to 20 percent.
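The FPS-driven scaling in the last demo follows the same ratio logic as the Kubernetes Horizontal Pod Autoscaler, which KEDA feeds with external metrics. As a rough, purely illustrative sketch (not the demo’s actual code; the function name and numbers below are invented): the number of replicas needed is the measured stream load divided by the FPS each pod can sustain, rounded up.

```python
import math

# Illustrative HPA/KEDA-style ratio scaling: replicas needed so that
# each pod stays at or below its target FPS load. Names and figures
# are hypothetical, not taken from the demo.
def desired_replicas(measured_fps: float, target_fps_per_pod: float,
                     max_replicas: int) -> int:
    needed = math.ceil(measured_fps / target_fps_per_pod)
    return max(1, min(needed, max_replicas))  # clamp to [1, max_replicas]

# A 275 FPS aggregate stream load with pods rated for 60 FPS each
# needs ceil(275 / 60) = 5 pods.
print(desired_replicas(measured_fps=275, target_fps_per_pod=60,
                       max_replicas=10))  # 5
```

In a real deployment this arithmetic lives inside the autoscaler; the operator only declares the metric source and target value, and KEDA drives the replica count from there.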
You can find us at booth #D3 on Tuesday from 10:30 a.m. to 8:00 p.m., Wednesday from 10:30 a.m. to 5:00 p.m., and Thursday from 10:30 a.m. to 2:30 p.m.
About the author
Sonia Goldsby brings over a decade of experience in event planning, open source program management, and project management to Intel. Her focus at Intel is leading marketing initiatives and driving cross-functional open source events. She enjoys spending time with family and traveling.