On the Performance and Memory Footprint of Distributed Training: An Empirical Study on Transformers
Author(s) -
Lu Zhengxian,
Wang Fangyu,
Xu Zhiwei,
Yang Fei,
Li Tao
Publication year - 2025
Publication title -
Software: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.437
H-Index - 70
eISSN - 1097-024X
pISSN - 0038-0644
DOI - 10.1002/spe.3421
ABSTRACT
Background: Transformer models have emerged as potent solutions to a wide array of multidisciplinary challenges. However, the deployment of transformer architectures is significantly hindered by their extensive computational and memory requirements, necessitating reliance on efficient distributed training methodologies.
Motivation: Prior research has delved into the performance bottlenecks of distributed training, aiming to unravel them and suggest optimization directions. However, such analyses often overlook three aspects unique to transformer models: their specialized architecture, their dependency on various distributed strategies, and the need to balance computational and memory overhead.
Method: This paper bridges this gap by offering a comprehensive examination of the performance bottlenecks inherent in the distributed training of transformer models, leveraging both theoretical analysis and empirical investigation. We propose an analytical framework tailored to these unique aspects of transformers, enabling a holistic evaluation of model architectures, distributed strategies, and resource consumption. Based on this framework, we conduct a comparative analysis of theoretical performance and systematically explore how various distributed training strategies fare in real-world scenarios.
Results: Most of the experimental results are well explained by the framework's analytical predictions. Notably, our findings suggest an advantage of pipeline parallelism over data parallelism for transformer models. We also shed light on some unexpected outcomes, such as increased total memory overhead caused by suboptimal model partitioning within pipeline parallelism. Additionally, we underscore the significance of communication block size and waiting time in further enhancing performance.
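To make the memory trade-off between data and pipeline parallelism mentioned in the results concrete, the following is a minimal back-of-the-envelope sketch, not the paper's analytical framework. The parameter-count formula, the 16-bytes-per-parameter optimizer-state assumption (mixed-precision Adam), and the per-layer activation size are illustrative assumptions chosen only to show how stashed in-flight micro-batches can erode pipeline parallelism's memory savings when partitioning is suboptimal.

```python
# Hypothetical per-GPU memory estimate for a GPT-style transformer under
# data parallelism (DP) vs. pipeline parallelism (PP). All constants below
# are illustrative assumptions, not values from the paper.

def transformer_params(layers: int, hidden: int, vocab: int) -> float:
    # Rough parameter count: ~12*h^2 per layer (attention + MLP) plus embeddings.
    return layers * 12 * hidden**2 + vocab * hidden

def dp_memory_gb(params: float, act_per_layer: float, layers: int) -> float:
    # DP: every rank replicates parameters, gradients, and Adam states
    # (~16 bytes/param with mixed precision, no ZeRO sharding assumed).
    states = params * 16
    acts = act_per_layer * layers          # activations for one micro-batch
    return (states + acts) / 1e9

def pp_memory_gb(params: float, act_per_layer: float, layers: int,
                 stages: int, in_flight: int) -> float:
    # PP: parameters and optimizer states are split across stages, but each
    # stage must stash activations for every in-flight micro-batch, which can
    # offset the savings if the partition or schedule is unbalanced.
    states = params * 16 / stages
    acts = act_per_layer * (layers / stages) * in_flight
    return (states + acts) / 1e9

if __name__ == "__main__":
    p = transformer_params(layers=48, hidden=4096, vocab=50257)
    a = 2.0e9  # assumed activation bytes per layer per micro-batch
    print(f"params: {p / 1e9:.1f} B")
    print(f"DP per-GPU memory: {dp_memory_gb(p, a, 48):.0f} GB")
    print(f"PP per-GPU memory: {pp_memory_gb(p, a, 48, stages=8, in_flight=8):.0f} GB")
```

Under these assumptions, increasing the number of in-flight micro-batches or skewing the layer partition toward one stage raises the activation term, illustrating how pipeline parallelism's total memory footprint can grow despite sharded parameters.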