
Optimizing the Learnable RoPE Theta Parameter in Transformers
Author(s) -
Zhigao Huang,
Musheng Chen
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3590604
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Rotary Position Embedding (RoPE) enhances Transformer models by encoding relative positions through a frequency parameter θ, but conventional implementations fix θ, constraining adaptability. We conduct the first systematic study of learnable RoPE θ, introducing four optimization strategies—separate learning rates, layer-wise initialization, cosine annealing scheduling, and sigmoid-based constraints—to stabilize and refine positional learning. Our approach demonstrates modest but consistent benefits across multiple datasets including Tiny Shakespeare, WikiText-103, and IWSLT'14, achieving measurable gains in validation loss, perplexity, and BLEU scores relative to a fixed-θ baseline while maintaining high inference throughput and requiring minimal architectural modifications. Ablation experiments quantify each strategy's contribution and offer practical integration guidelines. This adaptive position encoding framework provides a foundation for large-scale pretraining and diverse sequence modeling applications.
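The abstract's sigmoid-based constraint can be illustrated with a minimal sketch: an unconstrained scalar is squashed into a fixed range so that a learned θ stays positive and bounded, and the resulting θ yields the standard RoPE inverse frequencies and per-position rotations. The range bounds (`theta_min`, `theta_max`) and function names below are illustrative assumptions, not the paper's actual hyperparameters or code.

```python
import math

def constrained_theta(raw, theta_min=1e2, theta_max=1e6):
    """Map an unconstrained learnable scalar to a bounded theta.

    A sigmoid squashes `raw` into (0, 1); we then interpolate
    log-linearly between theta_min and theta_max (assumed bounds),
    keeping theta positive and numerically stable during training.
    """
    s = 1.0 / (1.0 + math.exp(-raw))
    log_theta = math.log(theta_min) + s * (math.log(theta_max) - math.log(theta_min))
    return math.exp(log_theta)

def rope_frequencies(theta, d):
    """Standard RoPE inverse frequencies: omega_i = theta^(-2i/d)."""
    return [theta ** (-2.0 * i / d) for i in range(d // 2)]

def rotate_pair(x0, x1, pos, omega):
    """Rotate a 2-D feature pair by angle pos * omega (the RoPE rotation)."""
    angle = pos * omega
    c, s = math.cos(angle), math.sin(angle)
    return (x0 * c - x1 * s, x0 * s + x1 * c)
```

In an actual Transformer, `raw` would be a per-layer trainable parameter (matching the paper's layer-wise initialization), typically placed in its own optimizer parameter group so it can receive a separate, cosine-annealed learning rate.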