Output Projection CTE Kernel API Reference#
This topic provides the API reference for the Output Projection CTE kernel. The kernel computes the output projection operation typically used after an attention block in transformer models, optimized for Context Encoding (Prefill) use cases.
The kernel supports:
- Efficient projection of attention outputs
- Optional bias addition
- LNC sharding for distributed computation
- Optimized memory access patterns
- Head dimension packing for improved performance
Background#
The Output Projection CTE kernel computes the operation out = attention @ weight + bias, which is commonly used to project the output scores after an attention block in transformer models. This kernel is specifically optimized for Context Encoding (Prefill) use cases, where the sequence length can be large (typically S ≥ 512).
The kernel employs efficient tiling strategies and memory access patterns to maximize performance on Neuron hardware, with support for sharding across multiple Logical Neuron Cores (LNCs) to handle large hidden dimensions. When LNC > 1, the H dimension is sharded across the cores; each core produces its own slice of the output tensor, so no inter-core collective operations are needed.
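The NumPy sketch below illustrates the reference semantics of the operation under the layout described above. It is an illustration only, not the NKI implementation, and the sizes are arbitrary example values.

```python
import numpy as np

# Reference semantics only (NumPy sketch), not the NKI kernel implementation.
# Layouts follow the API reference below: attention [B, N, D, S], weight [N*D, H], bias [1, H].
B, N, D, S, H = 1, 4, 128, 512, 2048  # example sizes; S >= 512 is the intended regime

attention = np.random.rand(B, N, D, S).astype(np.float32)
weight = np.random.rand(N * D, H).astype(np.float32)
bias = np.random.rand(1, H).astype(np.float32)

# Fold N into D to form the contraction dimension (N*D), then contract against weight:
# out[b, s, h] = sum_{n,d} attention[b, n, d, s] * weight[n*D + d, h] + bias[0, h]
folded = attention.reshape(B, N * D, S)          # [B, N*D, S]
out = np.einsum("bks,kh->bsh", folded, weight)   # [B, S, H]
out = out + bias                                 # bias broadcasts over B and S
```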
API Reference#
Source code for this kernel API can be found at: aws-neuron/nki-library
output_projection_cte#
- nkilib.core.output_projection.output_projection_cte.output_projection_cte(attention, weight, bias=None)#
Output Projection Kernel optimized for Context Encoding (Prefill) use cases.
This kernel computes out = attention @ weight + bias, typically used to project the output scores after an attention block in transformer models. It is optimized for Context Encoding (aka Prefill) use cases where sequence length S is large; using this kernel with S < 512 may result in degraded performance. The kernel uses a layout shared with other Context Encoding kernels to avoid the need for transposes.
- Parameters:
  - attention (nl.ndarray) – Input tensor in HBM, typically the scores output from an attention block. Shape: [B, N, D, S], where B is batch size, N is number of heads, D is head dimension, and S is sequence length. Indexing: [b, n, d, s].
  - weight (nl.ndarray) – Weight tensor in HBM. Shape: [N*D, H], where H is hidden dimension size. Indexing: [n * D + d, h].
  - bias (nl.ndarray, optional) – Optional bias tensor in HBM. Shape: [1, H]. Indexing: [1, h].
- Returns:
  Output tensor in HBM. Shape: [B, S, H]. Indexing: [b, s, h].
- Return type:
  nl.ndarray
- Data Types:
  This kernel supports nl.float32, nl.float16, and nl.bfloat16 data types. However, for nl.float32, large inputs may not fit in SBUF.
- Dimensions:
  - B: Batch size
  - N: Number of heads
  - S: Sequence length
  - H: Hidden dimension size
  - D: Head dimension size
Restrictions:
- The contract dimension of the input and weight tensors must match (N*D == weight.shape[0]).
- The output projection kernel currently supports H no larger than 32768.
- Hidden dimension (H) must be divisible by the LNC size, since LNC sharding is on the weight hidden dimension.
- Head dimension (D) must be <= 128.
- Maximum validated H size is 20705.
- Maximum validated B*S size is 131072.
- Maximum validated N size is 17.
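As an illustration of these restrictions, the hypothetical host-side helper below (not part of nkilib; the function name and the LNC size value are assumptions for the example) checks candidate shapes against the limits listed above before dispatching to output_projection_cte.

```python
def check_output_projection_cte_shapes(attention_shape, weight_shape, bias_shape=None, lnc_size=2):
    """Hypothetical pre-flight check mirroring the documented restrictions (not part of nkilib)."""
    B, N, D, S = attention_shape
    K, H = weight_shape

    assert N * D == K, "contract dimension must match: N*D == weight.shape[0]"
    assert D <= 128, "head dimension D must be <= 128"
    assert H <= 32768, "H must be no more than 32768"
    assert H % lnc_size == 0, "H must be divisible by the LNC size (sharding is on the weight hidden dimension)"
    if bias_shape is not None:
        assert tuple(bias_shape) == (1, H), "bias must have shape [1, H]"
    if S < 512:
        print("warning: S < 512 may degrade performance for this Context Encoding kernel")

# Example shapes inside the validated envelope (H <= 20705, B*S <= 131072, N <= 17).
check_output_projection_cte_shapes(
    attention_shape=(1, 16, 128, 2048),
    weight_shape=(16 * 128, 8192),
    bias_shape=(1, 8192),
    lnc_size=2,
)
```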
Implementation Details#
The kernel implementation includes several key optimizations:
- Dimension Packing: Optimizes the contraction dimension by folding N (number of heads) into D (head dimension) when beneficial, improving computational efficiency.
- Efficient Tiling Strategy: Uses carefully chosen tile sizes for processing batches and sequences to maximize hardware utilization.
- LNC Sharding: Supports sharding across multiple Logical Neuron Cores (LNCs) by dividing the hidden dimension, enabling processing of larger models (see the sketch after this list).
- Memory Access Optimization: Employs optimized memory access patterns to maximize bandwidth utilization and minimize data movement.
- PSUM Bank Utilization: Efficiently utilizes PSUM banks for accumulating partial results during matrix multiplication operations.
- Stream Shuffle Broadcast: Uses stream shuffle broadcast for bias tensors to efficiently distribute them across processing elements.
- Specialized Engine Selection: Alternates between scalar and vector engines for tensor copy operations to balance workload and improve performance.
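To make the LNC sharding point concrete, here is a NumPy sketch of the idea (an illustration under assumed example sizes, not the kernel's actual code): slicing the weight along H lets each core produce its own slice of the output independently, so no inter-core collectives are needed.

```python
import numpy as np

B, N, D, S, H, LNC = 1, 4, 128, 512, 2048, 2  # example sizes; H must be divisible by LNC

attention = np.random.rand(B, N, D, S).astype(np.float32)
weight = np.random.rand(N * D, H).astype(np.float32)

folded = attention.reshape(B, N * D, S)   # dimension packing: fold N into D
h_per_core = H // LNC

# Each "core" contracts against its own slice of the weight's hidden dimension.
shards = [
    np.einsum("bks,kh->bsh", folded, weight[:, c * h_per_core:(c + 1) * h_per_core])
    for c in range(LNC)
]
out = np.concatenate(shards, axis=-1)     # [B, S, H]

# Concatenating the per-core slices reproduces the unsharded result.
reference = np.einsum("bks,kh->bsh", folded, weight)
assert np.allclose(out, reference, rtol=1e-4, atol=1e-2)
```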