You can rely on the central processing unit (CPU) for smaller workloads that require low latency.

The graphics processing unit (GPU) is ideal for large workloads that demand parallel throughput.

The neural processing unit (NPU) handles sustained, heavily used AI workloads at low power for greater efficiency.

FAQs


AI PCs represent a new generation of personal computers with dedicated AI acceleration, including a central processing unit (CPU), a graphics processing unit (GPU), and a neural processing unit (NPU), all designed to handle AI workloads more efficiently by working in concert. With cutting-edge capabilities, the AI PC is ready for new applications, such as tools that can act as local AI assistants, saving you time by summarizing meeting transcripts and creating drafts. When powered by Intel® Core™ Ultra processors, AI PCs can balance power and performance for fast and efficient AI experiences.

By pairing Intel® hardware with innovations from Independent Hardware Vendors (IHVs) that advance sensor technology, we’re empowering devices like the AI PC to intelligently perceive and respond to their surroundings—from facial recognition to adaptive color control. We're reshaping power management and battery technology, creating devices that run longer and adapt power consumption to individual usage, charting a course toward a more sustainable and user-centric computing future.

AI PCs featuring Intel® Core™ Ultra processors enable more productive, creative, and secure computing than conventional PCs. As AI tools become more pervasive in everyday computer use, you need hardware that keeps up with this advanced software. Intel® Core™ Ultra processors are purpose-built to support new and emerging AI applications, including tools that boost productivity, security, collaboration, and creation.

Running AI apps and workloads on the PC, as opposed to in the cloud, provides a number of benefits. These include increased privacy and data security, reduced latency, online/offline accessibility, and customization. This means users get assistance with tasks that typically require human intelligence (reasoning, learning, understanding natural language, recognizing patterns, making decisions, and generating content) all on their PC.

Tera operations per second (TOPS) measures a system’s theoretical peak AI performance, assuming 100% AI accelerator utilization. TOPS highlights the processing potential of neural processing units (NPUs) and graphics processing units (GPUs) for AI tasks.

You may have heard of teraFLOPS or tera floating point operations per second (TFLOPS), a similar peak performance metric based on floating-point operations. Both estimate how well a system handles compute-intensive tasks: TOPS for AI performance and TFLOPS for non-AI supercomputer performance.

TOPS is generally calculated using INT8 with the formula:

  • TOPS = (number of cores * frequency (Hz) * operations per cycle) / 10^12
  • For example, Lunar Lake will deliver up to 120 TOPS, measured at INT8 without sparsity, across three AI accelerators. This includes up to 48 TOPS from the NPU, up to 67 TOPS from the GPU, and up to 5 TOPS from the CPU.
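The formula above can be sketched in a few lines of code. Note that the core count, frequency, and operations-per-cycle values below are illustrative assumptions for demonstration only, not published specifications of any particular NPU:

```python
def peak_tops(num_cores: int, frequency_hz: float, ops_per_cycle: int) -> float:
    """Theoretical peak TOPS: cores * frequency * ops/cycle, scaled to tera-ops.

    Assumes 100% accelerator utilization, so real workloads will see less.
    """
    return num_cores * frequency_hz * ops_per_cycle / 1e12


# Hypothetical accelerator: 8 compute arrays at 1.4 GHz, each performing
# 4096 INT8 operations per cycle (illustrative numbers, not a real spec).
estimate = peak_tops(num_cores=8, frequency_hz=1.4e9, ops_per_cycle=4096)
print(f"Estimated peak: {estimate:.1f} TOPS")  # prints "Estimated peak: 45.9 TOPS"
```

Because the metric assumes full utilization of every operation slot on every cycle, it is an upper bound; sustained throughput on real AI workloads is typically lower.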

Intel is focused on fostering an open ecosystem to help developers become more productive and catalyze community innovation. We support open innovation, open platforms, and horizontal competition, and we provide tools to remove code barriers and allow interoperability within existing infrastructure to help developers maximize the value of their investments. Our contributions are part of many layers of the global technology stack, including toolkits optimized for oneAPI, a cross-industry programming model that supports multiple architectures and vendors. oneAPI allows developers to seamlessly keep up with the explosion of novel computer architectures and work with whichever accelerator and tools they prefer.