Intel at KubeCon + CloudNativeCon Europe 2024
March 19-22, 2024
Paris, France
Kubernetes* experts from Intel present their current open source projects aimed at advancing sustainability and cloud-native computing.
When we work together, technology becomes a force for good, helping us solve the world's most critical challenges.
Join us at booth H5 to learn more.
For more information on Intel Software activities at KubeCon + CloudNativeCon, see Software Events.
Social Events
Community Social Hosted by Intel
Off-site Location: Le Perchoir
Thursday, March 21, 18:00-20:30 (GMT+1)
Diversity, Equity, and Inclusion Lunch
Join this special program featuring a discussion on diversity, equity, and inclusion. More details to be announced.
Seating is limited and will be provided on a first-come, first-served basis.
Thursday, March 21, 12:30-14:30 (GMT+1)
Co-located Events
Istio* Day
Welcome and Opening Remarks
Iris Ding, Intel; Zack Butcher, Tetrate*
Tuesday, March 19, 9:00-9:15 (GMT+1)
Closing Remarks
Iris Ding, Intel; Zack Butcher, Tetrate
Tuesday, March 19, 17:25-17:35 (GMT+1)
Cloud-Native AI Day
Panel—Beyond the Clouds: Charting the Course for AI in the Cloud-Native World
Cathy Zhang, Intel; Rajas Kakodkar, VMware*; Ricardo Aravena, TruEra*; Alolita Sharma, Apple*; Alex Jones, Amazon Web Services (AWS)*
Tuesday, March 19, 12:05-12:40 (GMT+1)
Cloud-Native WebAssembly (Wasm) Day
Pathfinder to GPU Offload in Wasm
Atanas Atanasov and Aaron Dorney, Intel
Tuesday, March 19, 14:20-14:45 (GMT+1)
KubeCon + CloudNativeCon Sessions
Sponsored Session: American Airlines* Increases Velocity Leveraging K8s at Scale and Autonomous App-Level Optimization
Markus Flierl, Intel; Vijay Premkumar, American Airlines*
Wednesday, March 20, 11:15-11:50 (GMT+1)
Beyond Default: Harnessing CPU Affinity for Enhanced Performance Across Your Workload Portfolio
Antti Kervinen, Intel; Dixita Narang, Google* LLC
Wednesday, March 20, 14:30-15:05 (GMT+1)
Tutorial: Cloud-Native Sustainable Large Language Model (LLM) Inference in Action
Cathy Zhang, Intel; Chen Wang, Eun Kyung Lee, and Bo Wen, IBM*; Huamin Chen, Red Hat
Wednesday, March 20, 14:30-16:00 (GMT+1)
Cloud-Native LLM Deployments Made Easy Using LangChain
Arun Gupta and Ezequiel Lanza, Intel
Wednesday, March 20, 16:30-17:05 (GMT+1)
DRAcon: Demystifying Dynamic Resource Allocation—from Myths to Facts
Patrick Ohly, Intel; Kevin Klues, NVIDIA*
Wednesday, March 20, 17:25-18:00 (GMT+1)
Savoir Faire: Cloud Native Technical Leadership
Arun Gupta, Intel; Nikhita Raghunath, VMware; Lin Sun, solo.io; Emily Fox, Red Hat; Nancy Chauhan, LocalStack
Wednesday, March 20, 17:25-18:00 (GMT+1)
Leverage Contextual and Structured Logging in K8s for Enhanced Monitoring
Patrick Ohly, Intel GmbH; Shivanshu Raj Shrivastava, Adyen; Mengjiao Liu, DaoCloud
Thursday, March 21, 15:25-16:00 (GMT+1)
Innovative Optical and Wireless Network (IOWN): Challenges of Kubernetes for Composable Disaggregated Computing
Clara Li, Intel; Naoki Oguchi, Fujitsu*; Hidetsugu Sugiyama, Red Hat; Ryosuke Kurebayashi, NTT Software Innovation Center
Thursday, March 21, 16:30-17:30 (GMT+1)
Demos
Sharing and Isolating GPU Resources for AI Workloads with Kubernetes DRA
Dynamic Resource Allocation (DRA), a recent Kubernetes API, is an alternative to device plug-ins that provides more flexible access to GPU resources in a Kubernetes cluster. Find out how DRA makes it easier to deploy multiple undemanding workloads on the same GPU, or one demanding workload across multiple GPUs, using Intel's GPU resource driver for Kubernetes in the Intel® Developer Cloud.
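For a sense of the mechanics, here is a minimal sketch of requesting a GPU by creating a ResourceClaim through the Kubernetes Python client's dynamic API, assuming a cluster with the v1alpha2 DRA API enabled; the resource class name is a placeholder, not the name registered by Intel's driver.

```python
# Minimal sketch: requesting a GPU through DRA by creating a ResourceClaim
# via the Kubernetes Python client's dynamic API (resource.k8s.io/v1alpha2,
# the alpha version of the API at the time of this event).
# "gpu.example.com" is a placeholder resource class name.
from kubernetes import config, dynamic
from kubernetes.client import api_client

config.load_kube_config()  # or load_incluster_config() inside a pod
client = dynamic.DynamicClient(api_client.ApiClient())

claims = client.resources.get(
    api_version="resource.k8s.io/v1alpha2", kind="ResourceClaim"
)
claims.create(
    body={
        "apiVersion": "resource.k8s.io/v1alpha2",
        "kind": "ResourceClaim",
        "metadata": {"name": "shared-gpu"},
        "spec": {"resourceClassName": "gpu.example.com"},
    },
    namespace="default",
)
# A pod then lists the claim under spec.resourceClaims and references it in
# containers[].resources.claims; the scheduler and the resource driver
# negotiate where the GPU (or a share of it) is allocated.
```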
Deploy LLM Serving and Fine-Tuning Services Using BigDL-LLM on Kubernetes
If you are using an LLM, optimization and serving are necessary steps in your workflow. BigDL, a distributed deep learning framework, includes the BigDL-LLM library, which can help you accomplish both with ease. Come see it in action as we demonstrate building LLM serving and fine-tuning services and deploying them on Kubernetes.
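As a rough sketch of how the library is used (the checkpoint name and prompt are placeholders), BigDL-LLM mirrors the Hugging Face Transformers API while applying low-bit optimization at load time:

```python
# Minimal sketch of BigDL-LLM's Transformers-style API: load a model with
# low-bit (INT4) optimization applied at load time, then generate text as
# with Hugging Face transformers. The checkpoint and prompt are placeholders.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What is Kubernetes?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```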
GenAI: Paint Your Dreams with Optimum for Intel and Intel® Distribution of OpenVINO™ Toolkit
Accelerate the end-to-end process of building, optimizing, containerizing, and deploying generative AI (GenAI) models with Intel® Distribution of OpenVINO™ toolkit. Attendees will gain insights into running Stable Diffusion* and advanced transformer models on CPUs and GPUs with an emphasis on flexibility and edge deployment, including the toolkit's support for Hugging Face* Optimum for Intel.
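As a brief illustration (the checkpoint name and prompt are placeholders), Optimum for Intel can export a Stable Diffusion pipeline to the OpenVINO format and run it in a few lines:

```python
# Minimal sketch with Optimum for Intel: export a Stable Diffusion pipeline
# to the OpenVINO format on the fly (export=True) and run inference on a CPU.
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True  # placeholder checkpoint
)
image = pipe("a watercolor of the Eiffel Tower at dawn").images[0]
image.save("dream.png")
```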
Confidential Computing with ACon Attested Containers
Incorporating confidential computing into your data pipeline can keep your data secure within the CPU. We will demonstrate how a workload is launched with Intel® Trust Domain Extensions into an ACon trust domain virtual machine, which provides an attestation to a demo relying party. The relying party evaluates the attestation and sends a secret into the container over a secure channel.
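The flow can be pictured with a schematic, non-authoritative sketch of the relying party's decision, using only the standard library; the helper function and values are hypothetical stand-ins for real Intel TDX quote verification, not the ACon API:

```python
# Schematic sketch of the relying party's decision: release a secret only
# when the attestation verifies and the reported measurement matches a
# known-good value. verify_quote_signature() is a hypothetical stand-in
# for real Intel TDX quote verification.
import hmac
from typing import Optional

EXPECTED_MEASUREMENT = bytes.fromhex("ab" * 48)  # placeholder TD measurement

def verify_quote_signature(quote: bytes) -> bool:
    """Stand-in: real code validates the quote's certificate chain."""
    return len(quote) > 0

def release_secret(quote: bytes, measurement: bytes) -> Optional[bytes]:
    # hmac.compare_digest gives a constant-time comparison.
    if verify_quote_signature(quote) and hmac.compare_digest(
        measurement, EXPECTED_MEASUREMENT
    ):
        return b"demo-secret"  # sent to the workload over the secure channel
    return None
```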
Wasm GPU Offload APIs for Accelerated Computation
With Wasm, developers can now securely develop once and then deploy across multiple architectures. See how to integrate the Wasm tools WASI-NN and WebGPU into AI, HPC, and visualization applications to offload computation to the GPU and accelerate it.
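As a minimal sketch of the host-embedding side (using the wasmtime Python package, which is an assumption of this example and does not itself expose WASI-NN; the GPU offload path needs a runtime implementing that proposal), the same Wasm bytecode can be instantiated and called on any architecture:

```python
# Minimal host-embedding sketch with the wasmtime Python package, showing
# the "develop once, deploy anywhere" model: the same bytecode runs on any
# host that embeds a Wasm runtime.
from wasmtime import Engine, Store, Module, Instance

engine = Engine()
store = Store(engine)

# A tiny module in WebAssembly text format exporting an add function.
module = Module(engine, """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
""")

instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # -> 5
```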
Up Next
KubeCon + CloudNativeCon North America 2024
November 12-15, 2024, Salt Lake City, UT
Adopters and technologists from leading open source and cloud-native communities attend this Cloud Native Computing Foundation (CNCF) conference.