Intel at KubeCon & CloudNativeCon
Open Source Summit China 2023
September 26-28, 2023
Shanghai, China
Kubernetes* experts from Intel present their current open source projects to advance sustainability and cloud-native computing.
When working together, technology becomes a force for good, helping us to solve the world's most critical challenges.
Co-located Events
A Closer Look into Istio* Network Flows & Configurations
Huailong (Steve) Zhang, Intel; Yuxing (Jessie) Zeng, Alibaba Cloud*
Tuesday, September 26, 9:25 - 9:55 (UTC +8)
Unleash the Magic: Harness eBPF* for Traffic Redirection in Istio Ambient Mode
Chun Li & Iris Ding, Intel
Tuesday, September 26, 11:25 - 11:50 (UTC +8)
Keynotes and Sessions
Keynote: Cloud Native Journey at Intel—Revolutionizing AI & Machine Learning
Grace Lian, Intel
Wednesday, September 27, 9:55 - 10:20 (UTC +8)
Build Proxy with a New Envoy* Asynchronous I/O API: io_uring
Hejie (Alex) Xu & Zhihao Xie, Intel
Wednesday, September 27, 11:00 - 11:35 (UTC +8)
Container Network Interface (CNI)-Agnostic Network Performance Accelerator with eBPF
Yizhou Xu, Intel & Mengxin Liu, Alauda*
Wednesday, September 27, 11:00 - 11:35 (UTC +8)
Kubernetes* on Bare Metal or VMs?
Yang Ailin, Intel & Feng Ye, SmartX*
Wednesday, September 27, 11:50 - 12:25 (UTC +8)
Panel Discussion: How the Enterprise Uses Service Mesh
Huailong Zhang, Intel; Zhonghu Xu, Huawei*; Fei Pei, Netease*; Xi Ning Wang, Alibaba Cloud; Fan Bin, China Mobile*
Wednesday, September 27, 13:55 - 14:30 (UTC +8)
The State and Future of WebAssembly in Dapr
Loong Dai, Intel
Wednesday, September 27, 15:50 - 16:25 (UTC +8)
Optimized Microservices Performance & Sustainability via Istio, Kepler & Smart Scheduling
Yingchun Guo, Intel; Peng Hui Jiang & Kevin Su, IBM*
Wednesday, September 27, 15:50 - 16:25 (UTC +8)
User Space Block Services Based on Ublk and Vduse in the Storage Performance Development Kit (SPDK)
Liu Xiaodong & Changpeng Liu, Intel
Thursday, September 28, 11:00 - 11:35 (UTC +8)
Enhance Workload Quality of Service with Pluggable and Customizable Runtimes
Kang Zhang, Intel & Rougang Han, Alibaba Cloud
Thursday, September 28, 11:50 - 12:25 (UTC +8)
Is Kepler* Accurate on Specific Platforms?
Jie Ren & Ken Lu, Intel
Thursday, September 28, 13:55 - 14:30 (UTC +8)
Fill a Kubernetes Gap: I/O Resource Scheduling & Isolation
Theresa Shan & Cathy Zhang, Intel
Thursday, September 28, 14:45 - 15:20 (UTC +8)
Demos
The Intel booth showcases the following demos.
Enhance Istio and Envoy Service Mesh with Intel Technologies
Explore Intel's innovative contributions to the Istio* and Envoy* open source community with a focus on achieving performance gains, strengthening security, and enhancing network functionalities through unleashing underlying hardware capabilities.
Fill a Kubernetes Gap: I/O Resource Scheduling & Isolation
This demo shows how to ensure optimal performance for critical workloads. A workload is not scheduled to a node whose available I/O resources can't meet its requirements. In addition, each workload's real-time disk I/O usage is monitored, and to guarantee critical workloads' I/O performance, the I/O resource boundary of noisy-neighbor workloads is shrunk dynamically.
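The two behaviors described here (filtering out nodes that lack I/O headroom, then throttling noisy neighbors at runtime) can be sketched as follows. This is a minimal illustration of the idea only; the function names, fields, and IOPS-based accounting are assumptions, not the actual Kubernetes scheduler plugin API.

```python
# Hypothetical sketch of I/O-aware scheduling and isolation.
# All names and data shapes are illustrative assumptions.

def filter_nodes(nodes, requested_iops):
    """Scheduling step: keep only nodes whose available disk IOPS
    can satisfy the workload's request."""
    return [n for n in nodes if n["available_iops"] >= requested_iops]

def throttle_noisy_neighbors(workloads, node_capacity_iops):
    """Isolation step: reserve IOPS for critical workloads, then
    shrink the I/O limit of best-effort (noisy-neighbor) workloads
    to fit within the remaining capacity."""
    critical = [w for w in workloads if w["critical"]]
    noisy = [w for w in workloads if not w["critical"]]
    reserved = sum(w["requested_iops"] for w in critical)
    spare = max(node_capacity_iops - reserved, 0)
    share = spare // max(len(noisy), 1)  # equal split of leftover IOPS
    for w in noisy:
        w["iops_limit"] = min(w["iops_limit"], share)
    return workloads
```

A real implementation would hook into the scheduler's node-filtering phase and a node-local agent (e.g., via cgroup I/O controllers) rather than adjusting dictionaries in memory.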
AI Workload: HPA & VPA Confidential, Sustainable
It is challenging to orchestrate and scale AI workloads in cloud-native environments, especially for cross-domain uses like confidential and sustainable computing. Watch a containerized AI pipeline, built as stateless microservices, demonstrate:
- Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) scaling by a Kubernetes* event-driven autoscaler (KEDA) operator.
- Confidential computing using a confidential cloud-native primitive SDK.
- Sustainable computing based on Kepler to collect a carbon footprint.
End-to-End Secured Large Language Model (LLM) Serving with Intel® TDX on Alibaba Cloud*
Secured LLM serving deploys and uses LLMs to ensure the confidentiality, integrity, and availability of models and data. This demo showcases end-to-end secured LLM pipelines (including multiturn chat and text completion) powered by BigDL Privacy Preserving Machine Learning (PPML) and Intel® Trust Domain Extensions (Intel® TDX) on Alibaba Cloud, which are secured by various security technologies.
Unleashing the Power of Generative AI on Intel Platforms
BigDL-LLM is an open source library for running LLMs on Intel laptops and GPUs using low-bit optimizations (int3, int4, nf4, int5, int8) with very low latency. The library is built on top of various technologies and is optimized on Intel CPUs and GPUs. With BigDL-LLM, you can build and run LLM applications using standard PyTorch* APIs (such as Hugging Face* Transformers and LangChain*). This demo showcases the experience of running LLMs on laptops using BigDL-LLM.
Ad-Serving Showcase from DBApp Based on Secured Cloud Management Stack Around SGX
Secured Cloud Management Stack is an open source software solution to achieve an on-premise and hybrid multicloud management solution with optimized performance that is based on Intel® Software Guard Extensions (Intel® SGX) and Intel TDX. DBApp is the network and information security leader in the People's Republic of China (PRC). This demo shows:
- SGX enablement in the cloud and how SGX is secure
- Internet precision marketing from DBApp, which combines multiparty data to see the ad-serving effect based on SGX
Build Trustworthy and Confidential LLM Inference with Intel TDX Shield
This demo shows an end-to-end, confidential LLM (Retrieval Augmented Generation [RAG]) reference solution that takes advantage of the Intel TDX shield to address privacy and security concerns. The secure design ingredients include an end-user prompt, LLM model, RAG pipeline, and database protection with confidentiality and integrity in the full flow. This demo can be deployed in an Alibaba Cloud for Intel TDX instance with Intel® Advanced Matrix Extensions (Intel® AMX) enabled to accelerate the RAG inference service.
Intelligent Traffic Searching on a Cloud-Native Edge Stack Enabled for a Private 5G Network
This session gives an in-depth introduction to a cloud-native software framework that helps end-to-end private 5G deployment through a cloud-native edge stack from Intel, with intelligent traffic searching as a sample application. The format is a video recording with live Q&A.
Real-time 3D Human Pose Estimation
Estimate a 3D human pose from a single RGB camera without wearable devices in real time, based on container technology. This solution takes the video frames input from a single RGB camera and outputs the 3D human pose in real time. The solution can be readily employed for reconstructing virtual characters in the virtual 3D space by a rendering client, and could be widely used for motion capture, fitness teaching, sport scoring, and other digital twin applications.
A Better Total Cost of Ownership for Cloud Native Solutions through an Open Cloud from Intel
This reference cloud stack is optimized on Intel® architecture for a PRC enterprise cloud ecosystem around OpenStack* and Kubernetes to accelerate development and deployment of highly optimized and scalable cloud solutions. The demo shows how to accelerate the infrastructure and workloads of cloud-native solutions with an Intel® Xeon® platform and technology spanning resource management, storage, and converged HPC and AI.
Up Next
KubeCon + CloudNativeCon North America 2023
November 6-9, 2023, in Chicago, Illinois
KubeCon + CloudNativeCon Europe 2024
March 19-22, 2024, in Paris, France