Intel today announced its latest Meteor Lake processors and presented Meteor Lake’s integrated NPU in detail.
Intel said AI is being integrated into every aspect of people’s lives. Although cloud AI provides scalable compute, it has limitations: it depends on network connectivity and suffers from high latency, high deployment costs, and privacy concerns. Meteor Lake brings AI to client PCs, providing low-latency AI computation that better protects data privacy at lower cost.
Intel said that starting with Meteor Lake, it will bring AI broadly to PCs, leading hundreds of millions of PCs into the AI era, with the huge x86 ecosystem providing a wide range of software models and tools.
A detailed explanation of Intel NPU architecture:
Host Interface and Device Management – The device management area supports Microsoft’s new driver model, the Microsoft Compute Driver Model (MCDM). This enables Meteor Lake’s NPU to support MCDM while ensuring security, and the memory management unit (MMU) provides isolation between multiple scenarios and supports power and workload scheduling, enabling fast transitions to low-power states.
Multi-engine architecture – The NPU has a multi-engine architecture with two neural compute engines that can either work together on a single workload or each process a different workload. Within each neural compute engine there are two main computing components. The first is the inference pipeline, the core driver of energy-efficient computing: it handles the common, large computations by minimizing data movement and leveraging fixed-function operations, making neural-network execution energy efficient. The vast majority of computation occurs in the inference pipeline, fixed-function hardware that supports standard neural-network operations. The pipeline consists of a multiply-accumulate (MAC) array, an activation function block, and a data conversion block. The second is the SHAVE DSP, a highly optimized VLIW (very long instruction word) digital signal processor designed specifically for AI. The Streaming Hybrid Architecture Vector Engine (SHAVE) can be pipelined with the inference pipelines and the direct memory access (DMA) engines to enable truly heterogeneous parallel computing on the NPU and maximize performance.
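To make the three-stage inference pipeline concrete, here is a minimal conceptual sketch in Python/NumPy. It is not Intel's hardware or API; the MAC array is modeled as a matrix multiply, the activation block as ReLU, and the data conversion block as requantization to int8 with an assumed example scale. All function names and values are illustrative.

```python
import numpy as np

def mac_array(activations, weights):
    """MAC array stage: the bulk of neural-network compute is dense
    multiply-accumulate work, i.e. matrix products like this."""
    return activations @ weights

def activation_block(x):
    """Fixed-function activation stage (ReLU shown as one example)."""
    return np.maximum(x, 0.0)

def data_conversion_block(x, scale=0.1):
    """Data conversion stage: requantize float results to int8 so the
    next layer can stay in a compact fixed-point format.
    The scale factor here is an arbitrary illustrative value."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# One pass through the conceptual pipeline with toy data.
acts = np.array([[1.0, -2.0], [0.5, 3.0]])
wts = np.array([[0.2, -0.4], [0.6, 0.1]])
out = data_conversion_block(activation_block(mac_array(acts, wts)))
print(out)
```

In real fixed-function hardware these stages are fused so intermediate results never leave the pipeline, which is exactly the "minimizing data movement" point made above.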
DMA Engine – This engine optimally orchestrates data movement for maximum energy efficiency and performance.
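Overlapping data movement with computation is the classic way a DMA engine earns its keep. The following is a hypothetical double-buffering sketch (not Intel's implementation): while the compute engine consumes one tile from local memory, the "DMA" prefetches the next tile, hiding transfer time behind compute. The helper names and tile data are invented for illustration.

```python
def dma_prefetch(tile):
    """Stand-in for an asynchronous DMA copy into local NPU memory."""
    return list(tile)  # model the copy into a local buffer

def compute(buffer):
    """Stand-in for the inference pipeline consuming a local buffer."""
    return sum(buffer)

tiles = [[1, 2], [3, 4], [5, 6]]  # workload split into tiles
results = []
local = dma_prefetch(tiles[0])    # fill the first local buffer
for i in range(len(tiles)):
    # Kick off the next transfer before computing on the current tile,
    # so transfer and compute conceptually overlap.
    nxt = dma_prefetch(tiles[i + 1]) if i + 1 < len(tiles) else None
    results.append(compute(local))
    local = nxt
print(results)  # [3, 7, 11]
```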