Abstract
Normalization layers are ubiquitous in modern neural networks and have long been considered essential. This work demonstrates that Transformers without normalization can achieve the same or better performance using a remarkably simple technique. We introduce Dynamic Tanh (DyT), an element-wise operation DyT(x) = tanh(alpha x), as a drop-in replacement for normalization layers in Transformers. DyT is inspired by the observation that layer normalization in Transformers often produces tanh-like, S-shaped input-output mappings. By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning. We validate the effectiveness of Transformers with DyT across diverse settings, ranging from recognition to generation, supervised to self-supervised learning, and computer vision to language models. These findings challenge the conventional understanding that normalization layers are indispensable in modern neural networks, and offer new insights into their role in deep networks.
Community
As straightforward as it sounds (with caveats on the sensitivity of alpha), bruh:
import torch
import torch.nn as nn

class DyT(nn.Module):
    def __init__(self, num_features, alpha_init_value=0.5):
        super().__init__()
        # learnable scalar that sets how quickly tanh saturates
        self.alpha = nn.Parameter(torch.ones(1) * alpha_init_value)
        # per-channel affine parameters, analogous to LayerNorm's
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        # DyT(x) = tanh(alpha * x), followed by a channel-wise scale and shift
        x = torch.tanh(self.alpha * x)
        return x * self.weight + self.bias
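For context, here is a minimal usage sketch showing DyT as a drop-in replacement for nn.LayerNorm in a generic pre-norm Transformer block. The block below is illustrative, not the paper's exact architecture, and the dimensions are arbitrary:

import torch
import torch.nn as nn

class Block(nn.Module):
    # Generic pre-norm Transformer block with both normalization layers
    # replaced by DyT (defined above); sizes are placeholders.
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.norm1 = DyT(dim)   # was: nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = DyT(dim)   # was: nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

x = torch.randn(2, 16, 768)   # (batch, tokens, dim)
print(Block()(x).shape)       # torch.Size([2, 16, 768])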
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization (2025)
- Scale-Distribution Decoupling: Enabling Stable and Effective Training of Large Language Models (2025)
- The Sharpness Disparity Principle in Transformers for Accelerating Language Model Pre-Training (2025)
- MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections (2025)
- When Can You Get Away with Low Memory Adam? (2025)
- Accurate INT8 Training Through Dynamic Block-Level Fallback (2025)
- Efficient Language Modeling for Low-Resource Settings with Hybrid RNN-Transformer Architectures (2025)
Hey authors, @JiachenZhu @endernewton @ylecun, awesome paper! Thanks for this!
However, I have a question. I recently read this paper (https://huggingface.co/papers/2502.05795), which discusses the curse of depth. Since DyT mirrors the output structure of norm layers, does it also inherit their drawbacks? That paper notes that, because of the variance accumulated under Pre-LN, half of the layers of Llama-13B can be removed without a significant drop in performance, and it proposes a scaling term attached to layer norm to mitigate this.
I was wondering whether DyT-based LLMs show the same robustness to removal of later layers. Do we need a similar scaling term here to make the deeper layers more effective?
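Not from the paper, but a hedged sketch of one way to probe this: drop the last few Transformer blocks at inference and compare metrics between the LayerNorm and DyT variants. The function and attribute names below are hypothetical placeholders, not an API from either paper.

import torch
import torch.nn as nn

@torch.no_grad()
def forward_with_truncated_depth(blocks: nn.ModuleList, x: torch.Tensor, keep: int) -> torch.Tensor:
    # Run only the first `keep` residual blocks; skipping a residual block
    # is equivalent to replacing it with the identity map.
    for i, block in enumerate(blocks):
        if i >= keep:
            break
        x = block(x)  # assumes each block has signature block(x) -> x
    return x

# Usage sketch: evaluate the model with keep = len(blocks) and
# keep = len(blocks) // 2 for both the LayerNorm and the DyT variant,
# and compare how much each one degrades.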
This is a special case of a DRA activation function. If you think of DyT as an activation function, it is exactly a sub-family of our learnable Dynamic Range Activator (DRA) activation function when (a, c) = 0:
https://openreview.net/forum?id=4X9RpKH4Ls&noteId=4PbJSndLRZ
@akhaliq Would you like to cover this paper as well?