TU‐FF‐A1‐06: A Robust Scalable Parallel Processing System for Radiation Therapy
Author(s) -
Morrill S,
Parker B,
Brack C
Publication year - 2006
Publication title -
Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1118/1.2241643
Subject(s) - computer science , scalability , gigabit ethernet , compiler , operating system , computer cluster , benchmark (surveying) , parallel computing , message passing interface , ethernet , cluster (spacecraft) , software , embedded system , distributed computing , message passing , geodesy , geography
Purpose: To develop a robust, Linux-based cluster for the parallel computation of problems of interest in a radiation therapy environment. The system should be robust, scalable, and easy to manage, built from commercially available low-cost hardware, and administered using only open-source software tools.

Method and Materials: The cluster was constructed using a distributed-memory model with the Message Passing Interface (MPI) protocol. A distributed-memory design requires a fast backbone to distribute programs and data efficiently to the cluster nodes; this is provided by a Gigabit Ethernet switch with a peak transfer rate of 100 Mbytes/sec. The cluster currently consists of 76 CPUs, each with a minimum of 512 Mbytes of RAM. The individual nodes run CentOS 4.0, an open-source rebuild of the Red Hat Enterprise Linux 4.0 operating system. The MPI protocol is implemented with the open-source MPICH implementation, and cluster node management is accomplished with the ROCKS 4.0 toolset. The compilers and debuggers (C++ and FORTRAN for Linux) are Intel 9.0, and the Integrated Development Environment (IDE) is the open-source Eclipse project v3.0.1 with PHOTRAN extensions.

Results: The cluster has recently been commissioned and several benchmark tests have been completed. Speed improvements of 15x to 60x have been demonstrated for the parallelizable sections of various codes.

Conclusions: The system is robust enough to solve complex problems that were intractable with our previous computational tools. The demonstrated speed improvements will allow the implementation of codes for problems such as real-time dose calculation, fast IMRT optimization, and the convolution of correlated CT datasets to account for patient motion.
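The abstract does not include source code; the following is a minimal sketch of the distributed-memory pattern it describes, in which a voxel dose grid is split across MPI ranks, each node computes its own block independently, and partial results are combined with a collective operation. The dose kernel compute_voxel_dose() and the grid size are hypothetical placeholders, not the authors' implementation; the MPI calls are the standard MPICH-compatible C API used from C++.

    // Hypothetical sketch of block-decomposed dose computation over MPI ranks.
    // compute_voxel_dose() stands in for a real dose kernel and is illustrative only.
    #include <mpi.h>
    #include <algorithm>
    #include <cstdio>

    // Placeholder per-voxel dose kernel (not the authors' code).
    static double compute_voxel_dose(long voxel_index) {
        return 0.01 * static_cast<double>(voxel_index % 97);
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n_voxels = 1000000;                     // total voxels in the dose grid
        const long chunk = (n_voxels + size - 1) / size;   // voxels per rank
        const long begin = rank * chunk;
        const long end = std::min(begin + chunk, n_voxels);

        // Each rank computes dose for its own block of voxels.
        double local_sum = 0.0;
        for (long v = begin; v < end; ++v) {
            local_sum += compute_voxel_dose(v);
        }

        // Combine partial results on rank 0; a full dose engine would
        // gather the whole dose grid rather than a scalar sum.
        double total = 0.0;
        MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            std::printf("integrated dose (arbitrary units): %f\n", total);
        }

        MPI_Finalize();
        return 0;
    }

With MPICH such a program would typically be compiled with mpicxx and launched across the cluster nodes with mpirun or mpiexec; the speedup of the parallelizable section then scales with the number of ranks, consistent with the 15x to 60x gains reported above.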
