Targeted Data Prefetching
Author(s) - Weng-Fai Wong
Publication year - 2005
Publication title - Lecture Notes in Computer Science
Language(s) - English
Resource type - Book series
SCImago Journal Rank - 0.249
H-Index - 400
eISSN - 1611-3349
pISSN - 0302-9743
ISBN - 3-540-29643-3
DOI - 10.1007/11572961_63
Subject(s) - instruction prefetch, computer science, cache, parallel computing, overhead (engineering), cpu cache, focus (optics), scheme (mathematics), cas latency, embedded system, operating system, memory controller, mathematical analysis, physics, mathematics, optics, semiconductor memory
Given the increasing gap between processors and memory, prefetching data into the cache has become an important strategy for preventing the processor from being starved of data. The success of any data prefetching scheme depends on three factors: timeliness, accuracy, and overhead. In most hardware prefetching mechanisms, the focus has been on accuracy - ensuring that the predicted addresses do turn out to be demanded later in the code. In this paper, we introduce a simple hardware prefetching mechanism that targets delinquent loads, i.e., loads that account for a large proportion of the load misses in an application. Our results show that our prefetch strategy can reduce stall cycles by up to 45% for benchmarks running on a simulated out-of-order superscalar processor, with an overhead of 0.0005 prefetches per CPU cycle.
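The scheme hinges on knowing which static loads are delinquent. As a rough illustrative sketch (not taken from the paper), the C code below shows one way such loads could be identified from a per-instruction miss profile gathered in a simulator: sort loads by miss count and keep the smallest set covering most of the misses. The structure names, the 90% coverage threshold, and the sample numbers are assumptions for illustration only.

/* Illustrative sketch (assumptions, not the paper's mechanism):
 * pick the few load PCs that account for most cache misses. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned long pc;      /* address of the static load instruction */
    unsigned long misses;  /* cache misses attributed to this load   */
} LoadStat;

/* qsort comparator: descending order of miss count */
static int by_misses_desc(const void *a, const void *b) {
    const LoadStat *x = a, *y = b;
    return (y->misses > x->misses) - (y->misses < x->misses);
}

/* Return how many of the top loads are needed to cover `coverage`
 * (e.g. 0.90) of all observed misses; those are the delinquent loads. */
size_t find_delinquent_loads(LoadStat *stats, size_t n, double coverage) {
    unsigned long total = 0, running = 0;
    for (size_t i = 0; i < n; i++) total += stats[i].misses;
    qsort(stats, n, sizeof *stats, by_misses_desc);
    for (size_t i = 0; i < n; i++) {
        running += stats[i].misses;
        if ((double)running >= coverage * (double)total)
            return i + 1;
    }
    return n;
}

int main(void) {
    /* hypothetical miss profile from a profiling run */
    LoadStat stats[] = {
        {0x400a10, 12000}, {0x400b24, 300}, {0x400c58, 8500}, {0x400d90, 150},
    };
    size_t n = sizeof stats / sizeof stats[0];
    size_t k = find_delinquent_loads(stats, n, 0.90);
    printf("%zu of %zu loads cover 90%% of misses\n", k, n);
    for (size_t i = 0; i < k; i++)
        printf("  delinquent load at PC 0x%lx (%lu misses)\n",
               stats[i].pc, stats[i].misses);
    return 0;
}

In this toy profile, two of the four loads cover well over 90% of the misses, which matches the typical observation that a handful of delinquent loads dominate the miss stream and are therefore worthwhile prefetch targets.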
