---
layout: post
title: "Fast and Affordable LLMs serving on Intel Arc Pro B-Series GPUs with vLLM"
author: "Intel vLLM Team"
---

The [Intel® Arc™ Pro B-Series GPU Family](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/workstations/b-series/overview.html) delivers powerful AI capabilities with a focus on accessibility and an exceptional price-to-performance ratio. Large memory capacity and multi-GPU scalability make it possible to run the latest large, capable AI models locally, bringing advanced AI inference to professionals who want to deploy Large Language Models (LLMs) without the premium costs typically associated with AI hardware.

vLLM is at the core of the software stack that enables fast and cost-effective LLM serving on Intel Arc Pro B-Series GPUs. Over the past few months, Intel developers have been actively collaborating with the vLLM community to enable and optimize key features, ensuring seamless performance with multi-GPU scaling and PCIe P2P data transfer on Intel Arc Pro B-Series GPUs.

Based on the vLLM v1 engine, Intel® Arc™ Pro B-Series GPUs provide key vLLM features and optimizations, including:

- Solid inference performance for DeepSeek-distilled Llama/Qwen models
- Long context lengths (>50K) with good scaling over batch size
- Support for embedding, reranker, and pooling models
- Support for multi-modal models
- Well-optimized Mixture of Experts (MoE) models (GPT-OSS, DeepSeek-V2-Lite, Qwen3-30B-A3B, etc.)
- Per-layer online quantization to reduce the required GPU memory
- Support for Data Parallelism, Tensor Parallelism, and Pipeline Parallelism
- FP16 and BF16 path support for torch.compile
- Speculative decoding with the n-gram, EAGLE, and EAGLE3 methods
- Async scheduling
- Prefill/decode disaggregation
- Low-Rank Adaptation (LoRA)
- Reasoning output
- Sleep mode
- Structured outputs
- Tool calling
- Mixed-precision support for BF16, FP16, INT4, and FP8 vLLM recipes

## Advanced Optimizations for MoE Models

Mixture of Experts (MoE) is a model approach where multiple specialized expert networks collaborate to process input sequences, guided by a gating mechanism. For each token in the input sequence, the gating network dynamically selects which subset of experts should process that token. Rather than relying on a single dense feedforward layer, MoE architectures employ multiple parallel GEMM operations distributed across expert networks to achieve equivalent computational functionality. This design introduces structured sparsity into the model, as only a subset of experts is activated for any given input, thereby improving computational efficiency while maintaining model capacity. Beyond general optimizations for General Matrix Multiplications (GEMM) and Flash Attention, these MoE components (experts and gating network) represent the key performance contributors in MoE-based language models.
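
To make the routing step concrete, here is a minimal PyTorch-style sketch of top-k expert selection; the tensor names and the choice of `top_k = 2` are illustrative assumptions rather than the configuration of any specific model.

```python
import torch

def route_tokens(hidden: torch.Tensor, gate_weight: torch.Tensor, top_k: int = 2):
    """Illustrative top-k expert routing (names and top_k are assumptions).

    hidden:      [num_tokens, hidden_dim] activations entering the MoE layer
    gate_weight: [hidden_dim, num_experts] weights of the gating network
    """
    # The gate itself is a small GEMM producing one logit per expert per token.
    gate_logits = hidden @ gate_weight                   # [num_tokens, num_experts]
    gate_probs = torch.softmax(gate_logits, dim=-1)

    # Each token keeps only its top-k experts; the rest are skipped entirely,
    # which is where the structured sparsity of MoE comes from.
    topk_probs, topk_ids = torch.topk(gate_probs, top_k, dim=-1)
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    return topk_probs, topk_ids  # per-token expert weights and expert indices
```
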
![moe_diagram](/assets/figures/2025-vllm-on-intel-arc/moe.png)

However, naive implementations of MoE GEMM operations can suffer from significant efficiency bottlenecks. The typical approach, in which individual GEMM kernels are launched sequentially in a for-loop, generates excessive kernel launch overhead and introduces substantial scheduling latency. Furthermore, since expert routing decisions are produced by the gating network, the GEMM operations must wait for the gate computation to complete before execution can begin. This data dependency creates pipeline stalls that disrupt the kernel execution stream and severely limit GPU parallelism, preventing optimal device utilization.
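
In pseudocode, the naive schedule looks roughly like the sketch below (hypothetical names, gating weights omitted for brevity): one small GEMM is enqueued per expert inside a host-side for-loop, and none of them can start until the routing result from the gate is available.

```python
import torch

def naive_moe_ffn(hidden, expert_weights, topk_ids):
    """Naive MoE execution: one GEMM launch per expert inside a for-loop.

    hidden:         [num_tokens, hidden_dim]
    expert_weights: list of [hidden_dim, ffn_dim] tensors, one per expert
    topk_ids:       [num_tokens, top_k] expert indices chosen by the gate
    """
    out = torch.zeros(hidden.shape[0], expert_weights[0].shape[1],
                      device=hidden.device, dtype=hidden.dtype)
    for expert_id, w in enumerate(expert_weights):
        # Data dependency: the routing result must be known before this
        # expert's tokens can even be selected.
        token_mask = (topk_ids == expert_id).any(dim=-1)
        tokens = hidden[token_mask]
        if tokens.numel() == 0:
            continue
        # Each iteration launches a separate, often small, GEMM kernel and
        # pays launch plus scheduling overhead every time.
        out[token_mask] += tokens @ w
    return out
```
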
To address these limitations of MoE GEMM, we designed a persistent, zero-gap kernel that achieves over 80% of the hardware capacity of the Intel® Arc™ Pro B60 GPU.

## Optimization 1. Single kernel launched in persistent loop

The single-kernel design removes the launch and scheduling overhead mentioned above. In addition, the persistent loop removes the dependency of the launch parameters on the results of the expert routing network. Together, these keep device parallelism at its maximum.

Before the persistent kernel, the device sat idle waiting for the host:

![kernel trace](/assets/figures/2025-vllm-on-intel-arc/persistent-kernel1.png)

Enabling the persistent kernel keeps the device busy:

![kernel trace](/assets/figures/2025-vllm-on-intel-arc/persistent-kernel2.png)

The Intel® Arc™ Pro B60 GPU has 20 Xe-cores, each with identical resources that can host multiple SYCL work-groups. In our design, we launch two groups per Xe-core to balance compute and memory bandwidth needs.

## Optimization 2. Dynamic balancing of computing groups

One observation is that each group ends up with a different amount of work because of the imbalance in expert routing. If every group loops over a fixed stride of work, there is always one group that takes the largest amount of work and another that takes the smallest. The gap between them can accumulate to as much as 15% of the total MoE GEMM time. A better alternative is for whichever group finishes a task in one loop iteration to start the next available task immediately.

As a concrete example, suppose 40 groups have to crunch 200 GEMM blocks. With a static stride, group 0 loops through blocks 0, 40, 80, ..., group 1 loops through blocks 1, 41, 81, and so on. The caveat is that, due to the nature of MoE, the GEMM blocks do not all have the same compute intensity, and randomized access patterns let certain groups finish their work faster than others. This limits efficiency, because the groups that always finish early cannot help the ones that always meet heavy loads.

![thread load](/assets/figures/2025-vllm-on-intel-arc/thread-load1.png)
![thread load](/assets/figures/2025-vllm-on-intel-arc/thread-load2.png)

We mitigate this effect by letting each group compete for its next job through an atomic counter. Whenever a group finishes computing one GEMM block, it obtains a ticket from the atomic counter that decides which block it takes next. With this change we eliminated the small gaps in the kernel loop and achieved perfect scheduling across all expert-routing scenarios.
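
The effect of the two policies can be illustrated with a small host-side simulation (block counts and per-block costs below are made up for illustration): a static stride fixes each group's share of work up front, while claiming blocks from a shared counter, which is what the persistent kernel does with its device-side atomic, keeps the finish times of all groups close together.

```python
import random

NUM_GROUPS = 40   # e.g. 20 Xe-cores x 2 groups per core
NUM_BLOCKS = 200  # GEMM blocks produced by expert routing (illustrative)

# Per-block cost is uneven because experts receive different numbers of tokens.
random.seed(0)
cost = [random.uniform(0.5, 2.0) for _ in range(NUM_BLOCKS)]

# Static stride: group g always processes blocks g, g + 40, g + 80, ...
static_work = [sum(cost[b] for b in range(g, NUM_BLOCKS, NUM_GROUPS))
               for g in range(NUM_GROUPS)]

# Dynamic claiming: whichever group finishes first takes the next block
# from a shared counter (the atomic number in the kernel).
finish = [0.0] * NUM_GROUPS
for block in range(NUM_BLOCKS):
    g = min(range(NUM_GROUPS), key=lambda i: finish[i])  # group that frees up first
    finish[g] += cost[block]

print("static  max-min gap:", max(static_work) - min(static_work))
print("dynamic max-min gap:", max(finish) - min(finish))
```
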
## Optimization 3. Fast MXFP4 to BFLOAT16 algorithm with prepack for memory load efficiency

Prepacking has long been known to improve memory load efficiency. For 4-bit memory loads, a hardware-friendly layout can increase efficiency by up to 30%, as observed in our case. In addition, a naive FP4-to-BF16 conversion incurs too many instructions, which prompts the need for a better alternative (borrowed from oneDNN): shift the E2M1 encoding into the corresponding exponent/mantissa bit positions of the wider type and multiply by the scale difference between the two types:

`bitcast<bf16>(((x << 12) >> 6) & 0x81c0) * 2^126`

This solution minimizes the number of instructions needed to convert FP4 to BF16.
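
As a sanity check, the bit trick can be modeled in scalar Python (for verification only; the production kernel applies the same arithmetic on packed 16-bit lanes on the GPU). The snippet below compares it against a straightforward E2M1 decode for all 16 FP4 codes:

```python
import struct

def fp4_e2m1_reference(x4: int) -> float:
    """Straightforward decode of an E2M1 code: 1 sign, 2 exponent (bias 1), 1 mantissa bit."""
    s, e, m = (x4 >> 3) & 1, (x4 >> 1) & 0b11, x4 & 1
    val = 0.5 * m if e == 0 else (1.0 + 0.5 * m) * 2.0 ** (e - 1)
    return -val if s else val

def fp4_to_bf16_trick(x4: int) -> float:
    """Shift/mask trick: move sign/exponent/mantissa into BF16 bit positions."""
    t = (x4 & 0xF) << 12            # nibble into the top 4 bits of a 16-bit lane
    if t & 0x8000:                  # emulate an arithmetic shift (sign bit replicates)
        t -= 0x10000
    bits = (t >> 6) & 0x81C0        # BF16 pattern: sign, two exponent bits, one mantissa bit
    # BF16 occupies the upper half of a float32, so reinterpret the pattern as float32;
    # the multiply by 2^126 undoes the exponent-bias difference in a single instruction.
    f = struct.unpack("<f", struct.pack("<I", bits << 16))[0]
    return f * 2.0 ** 126

for code in range(16):
    assert fp4_to_bf16_trick(code) == fp4_e2m1_reference(code)
print("all 16 E2M1 codes match")
```
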
## Performance

With 24 GB of high-bandwidth VRAM, 456 GB/s of memory bandwidth, and 160 Intel® Xe Matrix Extensions (Intel® XMX) AI engines, Intel Arc Pro B-Series GPUs offer good hardware capacity for optimizing in-demand models on vLLM. The full list of supported models can be found at [intel/ai-containers](https://github.com/intel/ai-containers/blob/main/vllm/0.10.2-xpu.md#supported-models).

DeepSeek-distilled models from 8B to 70B are optimized for good output token throughput on a system with eight Intel® Arc™ Pro GPUs.

![model perf](/assets/figures/2025-vllm-on-intel-arc/perf-figure1.png)

Figure 1: FP8 model output token throughput with maximum concurrency under SLA on a system configured with 8 Intel® Arc™ Pro B60 GPU cards

The system sustains next-token latencies below 100 ms under a substantial concurrency load.

![model perf](/assets/figures/2025-vllm-on-intel-arc/perf-figure2.png)

Figure 2: Qwen-32B next-token latency with an increasing number of prompts on a system configured with 4 Intel® Arc™ Pro B60 GPU cards

Model inference maintains consistent next-token latency across a wide range of input sequence lengths, scaling from 1K to over 40K tokens. This performance is underpinned by highly optimized flash attention kernels that parallelize operations across the sequence-length dimension.

![model perf](/assets/figures/2025-vllm-on-intel-arc/perf-figure3.png)

Figure 3: TTFT/TPOT for Llama-70B single batch with long-context inputs from 1K to 40K tokens on a system configured with 8 Intel® Arc™ Pro B60 GPU cards

GPT-OSS: The Intel® Arc™ Pro B60 GPU also demonstrates exceptional performance with OpenAI's recently launched GPT-OSS models, providing developers and enterprises with a powerful, cost-effective solution for large-scale AI inference, as shown in the table below.

| Model | Data type | TP | Input/output seq length | Concurrency | TTFT (s) | TPOT (ms) | Output Token Throughput (toks/s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-OSS-20b | MXFP4 | 1 | 1024/1024 | 75 | 7.614 | 53.96 | 1210.74 |
| GPT-OSS-20b | MXFP4 | 1 | 2048/2048 | 38 | 7.823 | 42.35 | 818.92 |
| GPT-OSS-20b | MXFP4 | 1 | 5120/5120 | 15 | 8.36 | 34.27 | 416.94 |
| GPT-OSS-120b | MXFP4 | 4 | 1024/1024 | 100 | 8.04 | 58.78 | 1495.12 |
| GPT-OSS-120b | MXFP4 | 4 | 2048/2048 | 50 | 8.11 | 41.98 | 1085.58 |
| GPT-OSS-120b | MXFP4 | 4 | 5120/5120 | 20 | 8.60 | 30.60 | 619.10 |

Table 1: GPT-OSS vLLM inference throughput using 1-4 GPUs on an 8x Intel® Arc™ Pro B-Series system

MLPerf: Intel Arc Pro B-Series GPUs shine in the recently published MLPerf Inference v5.1 results ([link](https://mlcommons.org/benchmarks/inference-datacenter/)). On Llama 8B, the Intel® Arc™ Pro B60 GPU demonstrates performance-per-dollar advantages. The results were achieved with vLLM as the serving framework.

## How to Set Up

The vLLM Docker image with Intel XPU support can be downloaded from [intel/vllm - Docker Image | Docker Hub](https://hub.docker.com/r/intel/vllm). MoE models such as GPT-OSS have been supported since the vLLM 0.10.2 Docker release. The examples below require host OS Ubuntu 25.04 and KMD driver 6.14.0, and were run on a Xeon system configured with 4 Intel® Arc™ Pro B60 GPU cards plugged into PCIe slots.

\# Get the released Docker image:

`$ docker pull intel/vllm:0.10.2-xpu`

\# Instantiate a Docker container:

`$ docker run -t -d --shm-size 10g --net=host --ipc=host --privileged -v /dev/dri/by-path:/dev/dri/by-path --name=vllm-test --device /dev/dri:/dev/dri --entrypoint= intel/vllm:0.10.2-xpu /bin/bash`

\# Run the vLLM server with gpt-oss-120b on 4 Intel® Arc™ Pro B60 cards:

`$ vllm serve openai/gpt-oss-120b --dtype=bfloat16 --enforce-eager --port 8000 --host 0.0.0.0 --trust-remote-code --gpu-memory-util=0.9 --no-enable-prefix-caching --max-num-batched-tokens=8192 --disable-log-requests --max-model-len=16384 --block-size 64 -tp 4`

\# Start another shell and run the benchmark:

`$ vllm bench serve --model openai/gpt-oss-120b --dataset-name sonnet --dataset-path="./benchmarks/sonnet.txt" --sonnet-input-len=1024 --sonnet-output-len=1024 --ignore-eos --num-prompts 1 --trust_remote_code --request-rate inf --backend vllm --port=8000 --host 0.0.0.0`
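
\# Optionally, sanity-check the server through its OpenAI-compatible API. A minimal Python example (assuming the `openai` client package is installed on the client side) could look like this:

```python
from openai import OpenAI

# The vLLM server started above exposes an OpenAI-compatible endpoint on port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "Summarize what vLLM does in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```
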
A more complete list of validated models can be found here: [Supported Models](https://github.com/intel/ai-containers/blob/main/vllm/0.10.2-xpu.md#supported-models)

## Looking Ahead

We are committed to deepening the integration between our optimizations and the core vLLM project. Our roadmap includes full support for upstream vLLM features, state-of-the-art performance optimizations for a broad range of models with a special focus on popular, high-performance LLMs on Intel® hardware, and active contribution of our enhancements back to the upstream vLLM community.

## Acknowledgement

We would like to express our sincere appreciation to the entire vLLM team. Their groundbreaking work has set a new standard for LLM serving. Their openness and support have enabled us to contribute effectively, and we are truly thankful for their partnership in this endeavor.

## Notices & Disclaimers

Performance varies by use, configuration and other factors. Learn more at [www.Intel.com/PerformanceIndex](http://www.intel.com/PerformanceIndex).

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. Visit [MLCommons](https://mlcommons.org/) for more details. No product or component can be absolutely secure.

Intel technologies may require enabled hardware, software or service activation.