High performance computing and simulation: architectures, systems, algorithms, technologies, services, and applications
Author(s) -
Smari Waleed W.,
Fiore Sandro,
Hill David
Publication year - 2013
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.2955
Subject(s) - pace , computer science , aerospace , supercomputer , exploit , emerging technologies , risk analysis (engineering) , data science , engineering , business , computer security , artificial intelligence , geodesy , aerospace engineering , geography , operating system
In many areas, high-performance computing (HPC) and simulation have become determinants of industrial competitiveness and advanced research. Following the progress made in the aerospace, automobile, environmental, energy, healthcare, and networking industries, most research domains nowadays measure the strategic importance of their developments vis-à-vis the mastering of these critical technologies. Intensive computing and numerical simulation are now essential tools that contribute to the success of system designs, to the effectiveness of public policies such as the prevention of natural hazards and the accounting for climate risks, and also to security and national sovereignty. Yet the fact that simulation is employed by a large number of users does not mean that they all contribute equally to the advancement of this science.

It is widely anticipated that continued progress and investment in HPC and simulation will bring about innovations and technologies that contribute to growth and evolution in all major scientific domains. For instance, the simulation of complex phenomena, such as biological and living systems, will lead to spectacular scientific breakthroughs. In terms of hardware and software architectures, we can expect exaflop-level performance [1] to be reached before 2020. Exascale computing is, however, an inspiring challenge that entails difficult but invigorating technological obstacles. The arrival of General Purpose Graphical Processing Units (GP-GPUs) has accelerated the pace of improvement in peak performance. However, this development implies rethinking the use of such architectures in order to approach peak performance whenever possible. In some cases, these technologies will require significant efforts to adapt existing applications to them. At the same time, they will also influence the design of future applications. Furthermore, they will require acquiring and building new tools and infrastructure [2–5].

HPC has so far been a laboratory for the development of techniques, technologies, services, and applications that sooner or later end up in consumer desktop computers. Nowadays, desktops and laptops have vector processing capabilities through Streaming SIMD Extensions (SSE) instructions, similar to what Cray proposed in the seventies (Advanced Vector Extensions (AVX) have also been available since the introduction of Intel's Sandy Bridge processor). Similarly, the introduction of the personal 'super-computer' in 2008 with NVIDIA's Tesla boards (1 teraflop single precision) changed the way we think about HPC [6]. Such components have since been introduced into the design of supercomputers and clusters [7]. At the time of the High Performance Computing and Simulation (HPCS) 2010 conference, three of the top five supercomputers in the 'Top500' ranking [8] were hybrid, some with Tesla boards and others with the Fermi architecture, which considerably improved double-precision performance [9,10]. At the time of writing this editorial, dual-GPU boards with thousands of cores are available. An IBM BlueGene/Q system named Sequoia has recently been installed at the Department of Energy's Lawrence Livermore National Laboratory; this supercomputer achieved 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores, and it is also one of the most energy-efficient systems in the Top500 list.
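To make the vector processing point concrete, the following is a minimal sketch (not taken from the editorial or its references; the function name and layout are illustrative) of the SIMD capabilities now present in commodity processors: adding two float arrays with AVX intrinsics, eight single-precision elements per instruction.

/* Illustrative sketch: element-wise addition of two float arrays using
 * AVX intrinsics (available since Sandy Bridge). Processes 8 single-precision
 * values per iteration, with a scalar loop for the remainder. */
#include <immintrin.h>

void add_arrays_avx(const float *a, const float *b, float *c, int n)
{
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);              /* load 8 floats from a */
        __m256 vb = _mm256_loadu_ps(b + i);              /* load 8 floats from b */
        _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));  /* c = a + b            */
    }
    for (; i < n; i++)   /* leftover elements handled scalarly */
        c[i] = a[i] + b[i];
}

Compiled with, for example, gcc -mavx, such a loop runs on any AVX-capable desktop processor, which is precisely the kind of HPC capability that has trickled down to consumer hardware.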
In the coming year, supercomputers are expected to become more energy efficient while surpassing the 20-petaflop milestone, and we anticipate even higher peak performance and efficiency in subsequent years. In addition, the introduction in 2009 of Intel's Sandy Bridge [11] and AMD's Accelerated Processing Unit (APU) [12] will also impact the way we design and program these new parallel architectures. These exciting developments also present challenges that we will have to address. Generalist many-core architectures will arrive around 2013 with the commercial availability of the Intel Many Integrated Core (MIC) architecture [13, 14] (Xeon Phi is the name retained by Intel for the commercialization of this architecture).
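As a hedged illustration of the programming model such many-core architectures favor (and not a description of any specific system named above), the sketch below shows a standard OpenMP parallel loop with a reduction, which is how multicore CPUs and MIC/Xeon Phi-class processors are commonly programmed; the array size and workload are arbitrary.

/* Illustrative sketch: a data-parallel loop with a reduction, expressed in
 * standard OpenMP. The same source scales from a dual-core laptop to a
 * many-core processor simply by exposing more hardware threads. */
#include <omp.h>
#include <stdio.h>

#define N (1 << 20)

static double x[N], y[N];

int main(void)
{
    double sum = 0.0;

    /* Iterations are distributed across all available threads;
     * partial sums are combined by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        x[i] = 0.5 * i;
        y[i] = 0.25 * i;
        sum += x[i] * y[i];
    }

    printf("max threads: %d, sum = %e\n", omp_get_max_threads(), sum);
    return 0;
}

Built with gcc -fopenmp, the loop uses every available hardware thread without any architecture-specific code, which is one reason standards-based models remain attractive as core counts grow.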
