Solving the TCP-Incast Problem with Application-Level Scheduling
Author(s) -
Maxim Podlesny,
Carey Williamson
Publication year - 2012
Publication title -
2012 IEEE 20th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS)
Language(s) - English
Resource type - Conference proceedings
eISSN - 2375-0227
pISSN - 1526-7539
DOI - 10.1109/MASCOTS.2012.21
Subject(s) - computing and processing , communication, networking and broadcast technologies , components, circuits, devices and systems
Data center networks are characterized by high link speeds, low propagation delays, small switch buffers, and temporally clustered arrivals of many concurrent TCP flows fulfilling data transfer requests. The combination of these features can lead to transient buffer overflow and bursty packet losses, which in turn trigger TCP retransmission timeouts that degrade the performance of short-lived flows. This so-called TCP-incast problem can cause TCP throughput collapse. In this paper, we explore an application-level approach for solving this problem. The key idea of our solution is to coordinate the scheduling of short-lived TCP flows so that no data loss occurs. We develop a mathematical model of lossless data transmission and estimate the maximum goodput achievable in data center networks. The results indicate non-monotonic goodput that is highly sensitive to specific parameter configurations in the data center network. We validate our model using ns-2 network simulations, which show good correspondence with the theoretical results.
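The paper's analytical model is not reproduced in this abstract; as a rough illustration of the application-level scheduling idea, the sketch below partitions concurrent responses into batches whose worst-case aggregate burst fits within a shared switch buffer, so each batch can complete without loss. The parameter names and the per-flow burst bound are assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch of application-level incast scheduling:
# serve flows in batches sized so that, even if every sender's
# burst arrives at the bottleneck switch simultaneously, the
# aggregate fits in the switch buffer (worst-case assumption).

def max_concurrent_senders(buffer_bytes, burst_bytes):
    """Largest number of senders whose simultaneous bursts
    still fit in the shared switch buffer."""
    return max(1, buffer_bytes // burst_bytes)

def schedule_batches(num_flows, buffer_bytes, burst_bytes):
    """Partition flow indices into loss-free batches to be
    served one batch at a time."""
    k = max_concurrent_senders(buffer_bytes, burst_bytes)
    return [list(range(i, min(i + k, num_flows)))
            for i in range(0, num_flows, k)]

# Example (illustrative numbers): 48 concurrent flows,
# a 64 KB switch buffer, and a 16 KB worst-case burst per flow
# yield batches of 4 flows each.
batches = schedule_batches(48, 64 * 1024, 16 * 1024)
```

Serving batches sequentially trades some added latency for the elimination of timeout-driven throughput collapse, which is the core trade-off the paper's model quantifies.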