It is time to bang the big AI-generated computational gong — welcome to Intel Innovation, where company leaders and technologists rally developers around Intel’s latest hardware, software and services.
Intel CEO Pat Gelsinger kicks off the festivities with a live presentation to mark the rise of the “Siliconomy,” a new era of economic expansion enabled by sustainable, open and secure computing power. For developers, the Siliconomy means a new world of opportunity.
Live Blog: Follow along for a live report on the talk, the guests, the demonstrations and plenty of news.
8:15 a.m.: Hello and welcome! This is Jeremy Schultz, communications manager at Intel, and thank you for tuning in for another Intel Innovation. We’re back at the San Jose McEnery Convention Center, where I’m sitting with a laptop in hand and the big stage ahead.
8:25 a.m.: Intel is all about collaboration and this year the venue reflects it. The event hall is set up like a great room — keynote stage at the back and a technology showcase packed with demos filling the rest of the room. Only a data center has more chips inside.
8:32 a.m.: The music stops — and we start off with a video. Pat scribbles S-i-l-i-c-o-n-o-m-y on a whiteboard, a panicked set of employees profess confusion, and Pat escapes with — a soccer ball? ⚽️
In a headband and cleats, Pat dribbles and sprints through a training montage, watched by lunching employees and tracked by some kind of mobile app.
8:33 a.m.: Video Pat awkwardly fist-bumps an employee to end the montage and the real in-the-room Pat charges on stage to a cheering crowd.
8:34 a.m.: Pat’s first message is, appropriately, for developers: “I’m excited to help you unlock the massive opportunities created by the generational shift to AI.”
He’s also got “some exciting achievements to share” when it comes to “the advancements of Moore’s Law, based on choice and trust in an open ecosystem.” Let’s make some chips!
8:35 a.m.: After that brief intro we’ve got our first guest: Rich Felton-Thomas, director of sports science and chief operating officer at ai.io.
Pat says he’s been training both for Innovation and for “the next career opportunities” — he did show some flashes of speed. 🏃
8:36 a.m.: Rich explains that ai.io’s aiScout gives players around the world an equal opportunity to be scouted, in an objective manner, with just a mobile device.
It helps teams like the Chelsea Football Club autonomously evaluate a large pool of athletes and has cut the time between discovering a player and signing them from 18 months to 2 weeks. It’s powered by Xeon-based services from AWS. 🦾
8:38 a.m.: Pat’s eager to find out if he’s got player potential: “How did I do?”
Rich shows the coaches’ view to compare Pat to prospects around the world and adds that cognitive tests are part of the profile.
Sorry, Pat. “Your drills weren’t up to Messi levels,” Rich says, “but the San Jose Earthquakes were impressed with your technology acumen” and sent along a jersey.
8:39 a.m.: “I’m flattered!” Pat says. “I think this shows I’m in the right role.”
Rich: “That’s good news — we’re relying on Intel.”
Thanks, Rich! He trots off stage and we’re back with Pat.
8:40 a.m.: The role of computing is undergoing a fundamental shift, Pat says. This is a new time of global expansion where computing “is foundational to a bigger opportunity and better future for every person on the planet.” 🌎
“Welcome to the Siliconomy!”
8:42 a.m.: “More plentiful, powerful and affordable processing power is a vital component for growing economies everywhere.”
And of course, he adds, “AI represents a generational shift in computing that is giving rise to the Siliconomy.”
8:43 a.m.: “Developers rule: You run the global economy.” 🧐
Chip architectures are getting more diverse and specialized. “Our commitment to you is access to the coolest hardware and software as early as possible.”
8:44 a.m.: Outside of the quick end of Pat’s football career, we have our first bit of news: “I am thrilled to announce the general availability of Intel Developer Cloud.”
It lets developers build and test applications on the whole big-iron family: Intel Gaudi 2 accelerators, 4th Gen Xeon processors, the CPU Max series and Intel Data Center GPUs. And it includes Intel’s sweet suite of software for AI, deep learning, high-performance computing, rendering and more. 🧰
8:45 a.m.: Pat says the Intel Developer Cloud also lets developers access pre-production hardware and thus prepare to get to market faster.
The folks in the room get an extra boost: “For being here today, every one of you received a code good for a free week of Intel Developer Cloud access!”
8:46 a.m.: More fun with the Developer Cloud: Intel invited companies from the Intel Ignite incubator community to pitch what they’re working on. The winner will show a live demo later in the keynote.
First up is Deep Render, which Pat says set out to solve “the massive problem of too much data, too little bandwidth.”
8:48 a.m.: In a video, Deep Render shows AI-only compression for videos — already achieving 5x smaller files, with hopes of reaching 50x versus what’s common today.
8:49 a.m.: AI is old, Pat says — in development for 50 years. He recalls working on the Intel 80486 more than three decades ago, which the team thought would be “a great AI chip.”
“These last 10 years AI has been incredible, redefining itself every two to three years, impacting every domain along the way.”
8:50 a.m.: Intel’s committed “to address every phase of the AI continuum,” Pat adds, including big-time generative AI and large language models.
“We just secured a design that’s a pretty big deal — a large AI supercomputer, built entirely on Intel Xeon processors and 4,000 Intel Gaudi 2 accelerators.” 🎉 Stability AI is an anchor customer, Pat adds.
8:52 a.m.: Dude, Dell’s getting some Gaudi, too. “PowerEdge systems with Xeon and Gaudi will support AI workloads ranging from large-scale training to base-level inferencing,” says Dell COO Jeff Clarke. “We look forward to helping customers transform their business with new applications with this powerful combination.”
8:53 a.m.: Pat’s got MLPerf results to share — we do love our data around here. If you think GPUs when you think of AI, he’s reminding you that 4th Gen Xeon CPUs “can run any general-purpose AI workload.”
And the Xeon CPU Max Series joined the MLPerf party for the first time and “was the only CPU able to achieve 99.9% accuracy” on GPT-J, a large language model.
8:54 a.m.: Alibaba has some fans of Xeon, too. Zhou Jingren, CTO of Alibaba Cloud, is explaining over video that his company’s generative AI and LLMs “achieved an average 3x inference acceleration in response time” on 4th Gen Xeon.
“We look forward to unleashing the power of Intel’s technical advancement, for superior performance and higher efficiency across our AI workloads,” he adds.
8:56 a.m.: Pat shares one of my favorite Gordon Moore quotes: “No physical quantity can continue to change exponentially forever. But that end can be delayed.”
“As the stewards of Moore’s Law, we are in relentless pursuit for more powerful and efficient computing,” Pat says. “We will not rest” in this quest.
8:57 a.m.: And here comes Intel Developer Cloud pitch #2, this time from Scala Biodesign. The startup “uses computational biology and generative AI to dramatically speed up the protein engineering process.” A meaty task, no doubt.
8:59 a.m.: In a video, the founders of Scala explain that proteins make life possible. But it can take years of development and millions of dollars to make new proteins, which often fail. Scala is now taking protein engineering from the lab to the computer.
9:00 a.m.: AI is coming for the PC, too, Pat says, “unleashing personal productivity and creativity.” Ooh, next year we should try an AI-boosted live blog!
“We are ushering in a new age of AI PC.” 🔖
9:01 a.m.: The first Wi-Fi specs came out in 1997, but it wasn’t until Centrino in 2003 that we got “the wireless world we’re used to today,” Pat notes. That’s where we are now with the AI PC, “a sea-change moment in tech innovation.”
9:03 a.m.: So, what’s the killer app? Demo pro Craig comes up to show three demos in one minute.
9:06 a.m.: And it turns out the Stable Diffusion demo wasn’t running on the imminent Meteor Lake platform, but rather its successor, Lunar Lake — props to the validation team for testing in front of a live audience. 👩‍🍳
9:08 a.m.: The Meteor Lake wave is coming soon. “Intel expects to ship tens of millions of new AI-enabled PCs into the market in 2024 and later scaling to hundreds of millions of units,” Pat says.
These new Intel Core Ultra processors include Intel’s first integrated neural processing unit, or NPU, for AI. It’s going to be a great Christmas — Core Ultra launches Dec. 14.
9:09 a.m.: Core Ultra is a “tour de force,” Pat says. It’s a little chip sandwich, built using the Intel 4 process node and Foveros packaging technology.
The NPU adds power-optimized AI performance, available to developers via industry-standard software frameworks and tools, Pat explains.
9:10 a.m.: Intel partners have been working tirelessly to prepare for Core Ultra, Pat says, and we get to meet one. Pat invites Jerry Kao, chief operating officer at Acer, up on stage. Welcome, Jerry!
9:11 a.m.: Jerry: “We’ve been working with Intel for a while to bring Intel Core Ultra to the AI PC.” 🧞‍♂️
Jerry unfolds a sleek laptop and says it’ll include “a suite of Acer AI applications” built with Intel’s help and the OpenVINO toolkit.
9:12 a.m.: What can it do? Jerry gives a peek at built-in image generation and a wild parallax responsive wallpaper effect that moves with your eyes.
9:14 a.m.: Thanks, Jerry!
For systems and software makers of the world, Pat says, “We want to bring the capabilities of the AI PC to realize your software and hardware visions.” Wow, imagine how much meeting time would be saved with AI unmute — when your lips move, you’re unmuted on the call. No more “Pat, you’re on mute!” That’s a free idea I’m putting out there.
9:15 a.m.: There’s more innovation on the horizon. Wafer sighting! “Fresh out of the fab with our Arrow Lake processor,” Pat says, built on Intel 20A. “It’s working as expected.” Pat has chip design superpowers — he can tell the chips are healthy just holding the wafer.
We’re also working on Lunar Lake, Pat says, set for production readiness in 2024. Lunar Lake, shown earlier, promises more AI and “a new architecture designed from the ground up for mobility.”
And after that comes Panther Lake, built on Intel 18A and “heading into fab in Q1 ’24.” Phew, that leaves a little bit of time for hardware-accelerated AI unmute. 🤞
9:16 a.m.: As we pile up these innovations, Pat says, “performance per unit of energy must become our industry’s mission.”
“We have developed technologies across our product lines, for client, network, edge and server that reduce energy consumption.”
9:17 a.m.: Speaking of servers: “Intel Xeon processors continue to deliver on-time against their roadmap.”
And there’s plenty more on the menu.
First up: 5th Gen Intel® Xeon® processors, “launching together with Intel Core Ultra on Dec. 14.” I told you: Save that date!
“5th Gen Xeon boasts more compute and faster memory while still using the same power draw as Intel’s previous generation of Xeon,” Pat says.
9:18 a.m.: That’s not all. 2024’s Xeon platform “will be really good.”
Said platform will deliver innovative E-core efficiency to compete in the cloud and strong P-core performance for critical workloads like AI.
9:19 a.m.: Sierra Forest includes a forest of E-cores and Granite Rapids rocks the P-cores, but together they offer a compatible hardware architecture and shared software stack to tackle any workload.
Looking to 2025, Sierra’s successor, Clearwater Forest, arrives built on Intel 18A.
9:20 a.m.: I wasn’t kidding when I said, “a forest.”
The Xeon team stepped up to the challenge, Pat says, and will deliver a Sierra Forest SKU “with a whopping 288 E-cores” on the 2024 Xeon roadmap. 🤯
“2024 is shaping up to be a really, really good year for Intel Xeon customers.”
9:21 a.m.: So, we’ve got gobs more cores, AI, performance and efficiency — how’s about some security, too?
“Exponential compute also means a vastly broader attack surface,” Pat says. “We must create a sense of urgency and awareness to usher the dawn of confidential computing.”
And for that: “Tomorrow we’ll be announcing a new portfolio of trust and security services,” Pat says. Book it: Don’t miss Intel CTO Greg Lavender’s keynote tomorrow at 9:30 a.m. Pacific.
9:22 a.m.: We’re about halfway through — time to replenish with the five technology superpowers. 🦸
Compute, connectivity, infrastructure, AI and sensing — these technologies “profoundly shape how we experience the world.”
9:23 a.m.: With sensing, “I like to say disabilities become digitally enhanced strengths.”
For Pat, it’s personal. “One of my favorite sounds in the world is my granddaughter’s voice calling me ‘Papa’.” He points to his hearing aid and says without technology, he might not be able to experience that.
Pat invites up Dan Siroker, founder of Rewind.ai, who’s “in search of ways technology can augment human capabilities.” Welcome, Dan!
9:24 a.m.: Dan says he started to go deaf in his 20s and got a hearing aid at 30 — “it was magical.” 🪄
He says that Rewind.ai aims “to give humans superpowers.” It’s like an assistant with perfect memory: Rewind.ai runs in the background capturing your screen and audio, then compresses, transcribes, encrypts and stores your data locally so only you have access.
Then you can ask it questions or give it tasks, like write up notes from a meeting or compose an email.
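The flow Dan describes — capture locally, process, store, then answer questions — can be sketched as a toy pipeline. This is a minimal illustration, not Rewind’s actual implementation; every class and function name here is hypothetical, and a checksum stands in for the real compress/transcribe/encrypt steps.

```python
import hashlib

class ToyRecallAssistant:
    """Toy sketch of a local capture-and-recall assistant.

    A real product also compresses, transcribes and encrypts;
    here we just keep transcript lines in memory and search them.
    """

    def __init__(self) -> None:
        self._store = []  # (checksum, text) pairs, local only

    def capture(self, transcript_line: str) -> None:
        # The SHA-256 digest stands in for the compress/encrypt stage.
        digest = hashlib.sha256(transcript_line.encode()).hexdigest()
        self._store.append((digest, transcript_line))

    def ask(self, keyword: str) -> list:
        # Naive keyword recall over everything captured so far.
        return [text for _, text in self._store
                if keyword.lower() in text.lower()]

assistant = ToyRecallAssistant()
assistant.capture("Pat's favorite sound is his granddaughter saying Papa.")
assistant.capture("Core Ultra launches Dec. 14.")
print(assistant.ask("favorite sound"))
```

A real system would swap the keyword search for an LLM querying an indexed transcript, but the privacy property is the same: nothing in the store ever leaves the machine.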
9:26 a.m.: Pat asks Dan to show what it can do on an Intel Core Ultra machine with OpenVINO.
Dan says Rewind can run an LLM locally, using Llama 2, and preserve your privacy. Rewind has been “listening” to the keynote and Dan types in the prompt: “What is Pat’s favorite sound?”
Rewind gets it right: his granddaughter’s voice saying Papa.
9:28 a.m.: Then Dan asks it to “Write a summary of the keynote. Use emojis.” Phew, I was afraid he’d ask it to live-blog. Thanks, Dan!
9:29 a.m.: Earlier this year, Intel, Microsoft and Samsung teamed up to deliver Bluetooth Low Energy Audio to PCs, which makes it feasible to connect hearing aids.
Now Intel and Starkey Laboratories, a maker of hearing aids, have created a proof-of-concept that Pat says “shows how AI will improve the hearing aid experience.” CEO demo time!
9:30 a.m.: Pat joins a conference call and our demo pro explains the PC is now “contextually aware” and will automatically switch Pat’s hearing aids between “ambient aware” and “focus” modes. 🧘‍♂️
In focus mode, background noise is filtered out, but the computer still hears it when Pat gets a knock at the door. A pop-up notifies him about the knock and he dismisses it.
9:31 a.m.: When a colleague calls for Pat from the side, the computer again hears it and shows a notification. When Pat turns his head, the hearing aids switch to “ambient aware” so he can have a conversation.
When Pat resumes on the computer, it’ll catch him up — a “summarizer” condenses the bit of missed meeting and the hearing aids switch back to “focus.” This is wild. Am I the only one envisioning a future where the computers meet without us?
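The mode switching in this demo can be sketched as a tiny state machine — purely illustrative; the event names and transition rules below are my assumptions, not Intel’s or Starkey’s actual design.

```python
# Toy state machine for context-aware hearing-aid modes, as in the demo:
# side-conversation events pull the user into "ambient aware," resuming
# the call returns them to "focus," and a door knock only notifies.
# All event names here are made up for illustration.
FOCUS, AMBIENT = "focus", "ambient_aware"

def next_mode(current: str, event: str) -> str:
    if event in ("head_turn", "side_conversation"):
        return AMBIENT   # let real-world sound back in
    if event == "resume_call":
        return FOCUS     # filter background noise again
    # e.g. "door_knock": pop a notification, leave the mode unchanged
    return current

mode = FOCUS
for event in ["door_knock", "head_turn", "resume_call"]:
    mode = next_mode(mode, event)
print(mode)  # back in "focus" after resuming the call
```

The interesting part in the real demo is upstream of this logic: the PC classifying ambient audio and head position to generate the events in the first place.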
9:33 a.m.: Speaking of focus, we’re back with Pat.
The next company in the spotlight has a complicated relationship with gravity. The final Intel Ignite pitch is Antaris, which offers “the world’s first software platform that dramatically simplifies the design, simulation and operation of satellites.”
9:34 a.m.: In its video, Antaris shows how its platform aims to make the design and engineering of satellites as easy as building software.
9:36 a.m.: Notice all the OpenVINO? Acer, Rewind and Starkey showed what Pat calls “Intel’s AI inferencing and deployment runtime platform” for developers on client and edge platforms.
The newest release, OpenVINO 2023.1, “brings us closer to the vision of any model, on any hardware, anywhere.” That now includes chips from Arm, too.
9:37 a.m.: More and more inferencing — the day-to-day running of AI to make decisions — “is going to be increasingly hybrid,” Pat says.
To help developers better employ hybrid AI, Pat says Intel will release a Hybrid AI SDK, coming early next year. The SDK is a new developer toolbox for model and application development built into a low-code environment.
9:39 a.m.: And OpenVINO 2023.1 makes generative AI more accessible for real-world work, Pat says, thanks to “great strides” in performance and memory usage and the addition of models for chatbots, instruction following, code generation and much more. Still nothing for live blogs, however.
9:40 a.m.: Just as “cloud-native” applications followed the rise of the cloud, we’re now seeing “a new class of ‘edge-native’ applications,” Pat says. But the wide diversification of systems outside the data center complicates building and managing these apps.
Intel’s answer: “Project Strata, an edge-native software platform with premium services and support offerings.” It’ll offer network, IoT and edge workload optimization on Intel chips, Pat says, while still broadly supporting diverse architectures and avoiding vendor lock-in.
9:41 a.m.: Project Strata enables developers to build, deploy, run, manage, connect and help secure distributed infrastructure, applications and AI models at the edge, Pat says, “launching in early 2024.”
9:42 a.m.: Another thing Pat hears his grandkids say is that “I need cooler clothes that fit better.” Pat calls up Meera Bhatia, COO of Fabletics, to help. Welcome, Meera!
Meera says that “with our partner Fit:match AI, we are aiming to revolutionize the retail industry by solving the universal fit problem.” The goal is to “end the adage of ‘It looked better on the hanger.’”
9:43 a.m.: Pat explains that this solution uses Intel RealSense cameras, Intel Core processors and a PyTorch model optimized by OpenVINO.
With Fit:match Concierge, Meera says, a customer can opt for a 100% private full-body scan, generate a 3-D avatar and then browse a selection of clothes “suited to their shape.”
9:44 a.m.: Pat got scanned and avatar’d yesterday. “I didn’t know my avatar would look so good,” Pat quips.
9:45 a.m.: Fit:match customers are happier, Meera confirms, as measured by higher sales, lower returns and a rise in overall customer satisfaction.
A curated set of shorts and shirts are heading Pat’s way — no try-ons required. Thanks, Meera!
9:47 a.m.: Envelope, please: Taking home the 2023 Intel Startup Innovator Award is… 🥁 Deep Render!
9:48 a.m.: Deep Render co-founder Chri Besenbruch joins Pat on stage to receive the trophy and show his company’s compression solution in action.
Chri’s got an AI PC with Intel Core Ultra and, with the NPU, he says he can achieve 5x better compression. That’s like having 5 times faster internet, Chri adds. Yes, please!
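Chri’s “5x compression is like 5x faster internet” claim is straightforward throughput arithmetic: if each file shrinks 5x, the same link delivers the same content in a fifth of the time. A quick sketch (the numbers are illustrative):

```python
def effective_throughput(link_mbps: float, compression_ratio: float) -> float:
    """Content delivered per second, in 'uncompressed megabits.'

    Each transmitted bit carries compression_ratio bits of original
    content, so the link behaves as if it were that much faster.
    """
    return link_mbps * compression_ratio

print(effective_throughput(100.0, 5))  # 500.0 — a 100 Mbps link feels like 500
```

By the same arithmetic, Deep Render’s hoped-for 50x ratio would make that link feel like 5 Gbps — which is why the pitch framed compression as a bandwidth problem.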
9:51 a.m.: “These innovations are powered by Moore’s Law,” Pat reminds us. And this being Silicon Valley, it’s wafer time again. Intel’s “five nodes in four years” plan is “progressing very well,” Pat confirms, alongside work “on the future beyond 2025.”
Pat ticks through status on the first four nodes: Intel 7 is done ✅; Intel 4 is done ✅ and now heading over to Ireland for high-volume ramp of the Intel Core Ultra chip; Intel 3 will be manufacturing-ready by the end of this year — Sierra Forest and Granite Rapids, the first products on Intel 3, are sampling to customers and on track; and Intel 20A is on track to be manufacturing ready next year.
9:53 a.m.: Last, but not least, is 18A, Pat says, “the Picasso” of wafers. Pat’s got an 18A wafer to show, too. It’s a beaut, Clark!
The 0.9 PDK, or process development kit, is “imminent,” and many 18A test chips and shuttles for both internal and foundry customers are running in the fab.
Ericsson will use Intel 18A for its future custom 5G SoC, Pat says, and “Intel is intensively working with Arm.”
On the menu after 18A, Pat explains, are enhanced RibbonFET transistors, next-generation PowerVia for backside power and new features requested by foundry customers, like a high-voltage transistor.
9:54 a.m.: One more silicon highlight: high-NA EUV technology, which Pat says Intel will be “first in the industry to deploy.”
High-NA will be used in development with 18A and enter production with “Intel Next.” This way, Intel is “de-risking the introduction of high-NA,” Pat adds, making sure future nodes can use both EUV and high-NA.
9:55 a.m.: Intel has an uncommon affinity for sand, which got another boost yesterday: Intel announced industry-first breakthroughs in glass substrates that will enable continued scaling well beyond 2030, Pat says.
It’s another way Intel will keep advancing Moore’s Law, he adds, with the mechanical, physical and optical properties to keep pushing for more transistors in a package.
With wafers and packaging, we’ve got the whole semiconductor manufacturing process on stage. Let’s make some chips!
9:56 a.m.: Sand is also a critical ingredient for concrete — “Intel also intends to play a critical role in rebalancing the supply chain,” Pat says.
Over the course of five years, he adds, “we expect to invest more than $100 billion in the U.S.”
9:57 a.m.: All that chipmaking portends a lot more chiplets, and “chiplet innovations point the way to the next wave of Moore’s Law.”
That’s where the Universal Chiplet Interconnect Express (UCIe) consortium, which Intel helped inaugurate last year, comes in.
Now with more than 120 members, the group is focused on creating an open ecosystem to allow varied chiplets to work together. 🏁
9:58 a.m.: And this year we have real silicon results. Pat points to a test chip called Pike Creek with an Intel chiplet built on Intel 3 and a Synopsys chiplet made on TSMC N3E connected using Intel’s EMIB advanced packaging tech.
This example shows the commitment of TSMC, Synopsys and Intel Foundry to support an open ecosystem with UCIe.
9:59 a.m.: What comes next in AI? Intel Labs is plugging away at neuromorphic computing, which Pat says can “solve challenging optimization problems” like routing deliveries through traffic or assigning tasks in a data center. 🧠
Widespread adoption is still years away, although more than 200 groups are putting Intel’s Loihi 2 research chip and open-source Lava software to the test. Companies working on potential applications include Sony Semiconductor Solutions Corporation, Mercedes-Benz, Sandia National Labs, the U.S. Air Force Research Laboratory, the U.S. Naval Research Laboratory and Teledyne FLIR.
10:00 a.m.: Wilder yet than those brain-like neuromorphic chips are quantum computers, which have the potential to be “exponentially faster than large-scale supercomputers,” Pat explains.
While commercial quantum computing might still be 10 to 15 years away, Intel is taking a uniquely practical approach to quantum hardware. “We’re the only company using the same process line to make our qubits as we do our leading-edge logic technology,” Pat says.
10:01 a.m.: The result of that approach is Tunnel Falls, a 12-qubit device built with the same EUV and other tools used on Intel’s 18A line. Its qubits are just 50 nanometers by 50 nanometers, “1 million times smaller than other qubit types,” Pat says. 🔎
Tunnel Falls is now being used in research at labs including the Laboratory for Physical Sciences in partnership with the U.S. Army Research Office, Sandia National Laboratories, the University of Rochester and the University of Wisconsin-Madison.
10:02 a.m.: Intel’s got quantum software, too. The Intel Quantum SDK launched earlier this year on the Intel Developer Cloud and gives developers hands-on experience programming quantum applications in simulation. Just like life, a quantum simulation. ⏳
10:03 a.m.: Pat’s got one more thing: “I’d like to announce our 2023 recipient of the Intel Innovation Award for a lifetime’s worth of technical achievement.”
“In recognition of her extensive achievements and contributions to the field of AI,” Pat says, 🥁 … “Please join me in welcoming Fei-Fei Li.”
10:04 a.m.: “Our focus at Intel is to bring AI everywhere,” says Pat as the symphony fades in. You made it all the way!
And that’s a wrap for this liveblog! Thank you for following along, and for all the details on Intel’s news today and through Intel Innovation this week, hop one link over to the Innovation press kit.
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.