Join Us at KubeCon + CloudNativeCon North America 2023


By Sonia Goldsby

If you’re a fan of deep-dish pizza, beautiful parks, and celebrated architecture, you’ll have even more reasons to love this year’s KubeCon + CloudNativeCon North America in Chicago from November 6 to 9.

The Cloud Native Computing Foundation’s (CNCF’s) flagship conference is the place to be for anyone interested in advancing the Kubernetes* and cloud native communities. Join the Intel Open Ecosystem team and open source leaders from around the globe for four days of learning from experts, sharing project insights, elevating your skills, and networking with peers. You can also attend virtually.

Intel has been invested in open source since 1991 and is a top corporate contributor to the Linux* kernel, Kubernetes*, OpenJDK*, and PyTorch*. This year, we’re excited to lead the following sessions and demos focused on AI, sustainability, resource management, and more.

Intel Sessions

Monday, November 6

  • Lessons from Scaling AI-Powered Translation Services Using Istio*: Learn how AI translation and localization solution Lilt* leverages Istio to drive higher product velocity. We’ll cover how the initiative gained organizational traction, how Lilt capitalizes on Istio’s functionality, and what’s next on the optimization road map.

Wednesday, November 8

  • Diversity, Equity, and Inclusion (DEI) Lunch: Join us for lunch and a special program around diversity, equity, and inclusion. We encourage attendees to engage in discussions at their tables about DEI initiatives at Intel.
  • Networking event at Fatpour Tap Works at McCormick Place: Catch up with peers from 6 to 9 p.m., sponsored by Intel. Visit the Intel booth to RSVP and grab a wristband!

Thursday, November 9

  • gRPC Tapping Filter with Hardware Acceleration in Service Mesh: See how a gRPC sink for transport socket-level tapping can address problems such as main-thread bottlenecks during admin streaming, and how it provides a set of counters representing traced traffic along with a ConnectionInfo cache-replay mechanism.
  • Lifting the Hood to Take a Look at the Kubernetes Resource Management Evolution: Learn what the rapidly evolving hardware landscape means for Kubernetes resource management, including important UX changes for end users, interfaces between Kubernetes stack components, and how different pluggable algorithms can help deliver on performance, power, and resource utilization goals.
  • Flavors of Certificates in Service Mesh: The Whys and Hows! End users often get confused about which certificate deployment option in service mesh security is right for them. Get the scoop on the different flavors of certificates and how to get the most out of them. We’ll also demo common business use cases on the Istio service mesh platform.

Demos at the Intel Booth

Between sessions, stop by the Intel booth (#D3) to catch a technical demo.

  • Federated Learning with OpenFL* in an Intel-attested confidential environment: Open Federated Learning (OpenFL) is a Python* framework that allows data scientists to train AI models at different locations without sharing sensitive data. See how OpenFL uses a trusted execution environment to help protect AI models during training while maximizing the benefits of using cloud/hybrid/on-prem providers.
  • Generative AI with Intel® OpenVINO™ and Red Hat OpenShift Data Science (RHODS)*: Generative AI is unlocking game-changing possibilities, but evaluating and adopting gen AI requires significant compute resources and can lead to deployment roadblocks. This demo highlights two open source technologies that can help make generative AI a reality for your organization. Get the knowledge and tools you need to accelerate and deploy generative AI on Intel® hardware.
  • Open, Secured, and Accelerated Intel Service Mesh: Istio is an open source service mesh that provides distributed applications a uniform and efficient way to secure, connect, and monitor services. See how the Intel Managed Distribution of Istio service mesh leverages Intel® hardware to optimize performance, security, and extensions.
  • AI workloads, you can have it all: Scalable, confidential, and sustainable: Orchestrating and scaling AI workloads in cloud-native environments is challenging. Ensuring they’re confidential and sustainable raises the bar even higher. In this demo of an AI pipeline for a retail checkout or security surveillance application, equipped with everything from video camera data processing to object identification, see how microservices can scale to handle video input using the Kubernetes KEDA operator. This methodology can help meet frames-per-second (FPS) SLAs for AI stream inference by scaling pods horizontally or vertically as appropriate, while inference runs in a trusted execution environment to help enhance security. Further, the workload seamlessly leverages matrix instructions that can reduce power consumption by up to 20 percent. For a rough idea of how such KEDA-based scaling is wired up, see the sketch after this list.
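
To make the scaling idea concrete, here is a minimal sketch, assuming a cluster with KEDA installed, that registers a KEDA ScaledObject from Python using the official Kubernetes client. The deployment name, namespace, Prometheus address, FPS metric query, and threshold are illustrative assumptions, not details from the demo.

```python
# Minimal sketch: autoscale a hypothetical "video-inference" Deployment on an
# FPS-style Prometheus metric via KEDA. All names and the query are assumed.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "video-inference-scaler", "namespace": "retail-demo"},
    "spec": {
        "scaleTargetRef": {"name": "video-inference"},  # assumed Deployment
        "minReplicaCount": 1,
        "maxReplicaCount": 10,
        "triggers": [
            {
                # Scale out when per-replica frame throughput nears the SLA.
                "type": "prometheus",
                "metadata": {
                    "serverAddress": "http://prometheus.monitoring:9090",
                    "threshold": "30",  # target FPS per replica (assumed)
                    "query": "sum(rate(frames_processed_total[1m]))",  # assumed metric
                },
            }
        ],
    },
}

# ScaledObject is a custom resource, so create it through the generic API.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="retail-demo",
    plural="scaledobjects",
    body=scaled_object,
)
```

KEDA then manages a Horizontal Pod Autoscaler behind the scenes, so replica counts grow and shrink with the video load instead of staying pinned at peak capacity.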

You can find us at booth #D3 on Tuesday from 10:30 a.m. to 8:00 p.m., Wednesday from 10:30 a.m. to 5:00 p.m., and Thursday from 10:30 a.m. to 2:30 p.m.

Register today. You can find the full conference schedule here.

About the Author

Sonia Goldsby brings more than a decade of experience in event planning and open source program and project management to Intel, where she leads marketing initiatives and drives cross-functional open source events. She enjoys spending time with family and traveling.