
Transformer Pre-Norm vs Post-Norm Architectures: Which One Keeps LLMs Stable?
16 October 2025

Pre-norm and post-norm describe where Layer Normalization sits in a Transformer block: before each sub-layer, inside the residual branch, or after the residual addition. Pre-norm enables stable training of deep LLMs with 100+ layers, while post-norm becomes hard to train beyond roughly 30 layers. Most modern models, such as GPT-4 and Llama 3, use pre-norm.
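As a minimal PyTorch sketch of that difference (module names, dimensions, and the GELU feed-forward are illustrative assumptions, not the layout of any particular model): post-norm applies LayerNorm after the residual addition, while pre-norm normalizes the input to each sub-layer and leaves the residual path untouched.

```python
import torch
import torch.nn as nn


class PostNormBlock(nn.Module):
    """Original Transformer layout: sub-layer, residual add, then LayerNorm."""
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # attention -> add residual -> LayerNorm (norm sits on the residual stream)
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + attn_out)
        # feed-forward -> add residual -> LayerNorm
        return self.norm2(x + self.ff(x))


class PreNormBlock(nn.Module):
    """Pre-norm layout: LayerNorm before each sub-layer, residual path un-normalized."""
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # LayerNorm -> attention -> add residual (identity path stays clean)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # LayerNorm -> feed-forward -> add residual
        return x + self.ff(self.norm2(x))


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)  # (batch, sequence, d_model)
    print(PostNormBlock(64, 4, 256)(x).shape)  # torch.Size([2, 16, 64])
    print(PreNormBlock(64, 4, 256)(x).shape)   # torch.Size([2, 16, 64])
```

Because the pre-norm residual path is never normalized, gradients can flow through the skip connections unimpeded, which is the usual explanation for why very deep stacks train more stably with this layout.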

Susannah Greenwood