The Joseph and Grunwald study focused primarily on data cache misses and did not compare Markov prefetching with techniques designed specifically for prefetching instructions.
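Markov prefetching of this kind can be pictured as a table that maps each miss address to the addresses that tended to follow it, so the next misses can be prefetched. The sketch below is illustrative only (class name, table depth, and update policy are our assumptions, not details from the study):

```python
from collections import defaultdict, deque

class MarkovPrefetcher:
    """Minimal one-history Markov prefetcher sketch: for each miss
    address, remember which miss addresses followed it recently and
    prefetch those successors the next time that address misses."""

    def __init__(self, successors_per_entry=2):
        # Each table entry keeps the most recent successors, newest first.
        self.table = defaultdict(lambda: deque(maxlen=successors_per_entry))
        self.last_miss = None

    def on_miss(self, addr):
        # Learn: record addr as a successor of the previous miss address.
        if self.last_miss is not None:
            succ = self.table[self.last_miss]
            if addr in succ:
                succ.remove(addr)
            succ.appendleft(addr)
        self.last_miss = addr
        # Predict: return the recorded successors of addr as prefetch candidates.
        return list(self.table[addr])

# Train on a short repeating miss sequence.
p = MarkovPrefetcher()
for a in [0x10, 0x20, 0x30, 0x10, 0x20]:
    p.on_miss(a)
# Having seen 0x30 followed by 0x10 once, a later miss to 0x30
# predicts prefetching 0x10.
```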
Many of these cache misses can be avoided if we augment the demand fetch policy of the cache with a data prefetch operation.
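The benefit can be seen with a toy trace simulator. The sketch below assumes 64-byte blocks, an unbounded cache, and simple next-line prefetching; it is a minimal illustration of augmenting demand fetch, not any particular published scheme:

```python
def misses(addresses, block=64, prefetch_next=False):
    """Count cache misses for an address trace in an unbounded cache.
    With prefetch_next=True, every demand fetch also brings in the
    sequentially next block (next-line prefetching)."""
    cached = set()
    miss_count = 0
    for addr in addresses:
        blk = addr // block
        if blk not in cached:
            miss_count += 1
            cached.add(blk)
        if prefetch_next:
            cached.add(blk + 1)   # prefetch the next block alongside the demand fetch
    return miss_count

# A sequential scan with a 16-byte stride touches each 64-byte block 4 times.
trace = list(range(0, 1024, 16))
```

With demand fetch alone, every one of the 16 distinct blocks misses once; with next-line prefetching only the first block misses, because each fetch pulls in the block the scan will touch next.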
The sum of all these costs should be a predictor of cache misses.
Direct cache-miss measurements indicate that the difference in performance is largely due to differences in the number of level-2 cache misses that the two algorithms generate.
Local cache misses that hit in the server cache avoid expensive disk accesses.
Transforming a control-flow graph into a linear sequence of instructions is called the code layout problem; algorithms that attack code layout attempt to reduce cache misses or pipeline stalls in the program by changing the order of basic blocks or procedures [Calder and Grunwald 1994; Hwu and Chang 1989; McFarling 1993; Pettis and Hansen 1990; Torellas et al.].
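The bottom-up chaining heuristic popularized by Pettis and Hansen can be sketched as follows. The function name and edge representation are our own, and this is a simplification of their algorithm: it greedily merges chains along the hottest control-flow edges so that frequently taken branches become fall-throughs.

```python
def greedy_layout(edges, blocks):
    """Sketch of bottom-up chaining for code layout: repeatedly merge
    the chains joined by the hottest remaining (src, dst, weight) edge,
    so frequently executed paths become contiguous in memory."""
    chain_of = {b: [b] for b in blocks}          # each block starts in its own chain
    for src, dst, _w in sorted(edges, key=lambda e: -e[2]):
        a, b = chain_of[src], chain_of[dst]
        # Merge only if src ends one chain and dst begins a different one,
        # so the branch src -> dst becomes a fall-through.
        if a is not b and a[-1] == src and b[0] == dst:
            a.extend(b)
            for blk in b:
                chain_of[blk] = a
    # Emit chains in first-encountered order.
    seen, layout = set(), []
    for b in blocks:
        c = id(chain_of[b])
        if c not in seen:
            seen.add(c)
            layout.extend(chain_of[b])
    return layout

# Hypothetical profile: the A->B->D path is hot, the path through C is cold.
blocks = ["A", "B", "C", "D"]
edges = [("A", "B", 100), ("A", "C", 10), ("B", "D", 100), ("C", "D", 10)]
```

On this profile the hot path A, B, D is laid out contiguously and the cold block C is pushed to the end.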
We assume the level-L cache is large enough to hold all of the needed data; therefore there are never any level-L cache misses.
Memory performance can also be measured with hardware-based counters that keep track of events such as cache misses in a running system.
In the trace-driven simulation model, however, instructions may wait at different stages in the pipeline because of resource conflicts, incorrect speculative execution, data dependencies, serialization, cache misses, and many other reasons.
For a DNS question that misses in the cache, the cache server queries the authoritative server while the question is kept, with its status marked unanswered, in a waiting queue.
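This miss path can be sketched as follows. Class and method names are hypothetical, and the sketch ignores timeouts, TTLs, and retransmission; it only shows the waiting-queue bookkeeping: a missed question is forwarded upstream once, and all clients asking it are parked until the authoritative answer arrives.

```python
class CachingResolver:
    """Sketch of a caching DNS server's miss path: a question that
    misses the cache is forwarded to the authoritative server and
    parked in a waiting queue until the answer arrives."""

    def __init__(self, forward):
        self.cache = {}       # question -> cached answer
        self.pending = {}     # question -> list of clients still waiting
        self.forward = forward  # callable that sends the query upstream

    def query(self, question, client):
        if question in self.cache:
            return self.cache[question]        # cache hit: answer immediately
        # Cache miss: park the client; forward only the first request.
        waiters = self.pending.setdefault(question, [])
        waiters.append(client)
        if len(waiters) == 1:
            self.forward(question)
        return None                            # status: unanswered

    def on_response(self, question, answer):
        # Authoritative answer arrived: cache it and release the waiters.
        self.cache[question] = answer
        return self.pending.pop(question, [])

# Two clients ask the same missing name; only one upstream query is sent.
forwarded = []
r = CachingResolver(forward=forwarded.append)
r.query("a.example", "client1")
r.query("a.example", "client2")
```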
Unfortunately, due to the limited size of the cache, three types of cache misses occur in a single-processor system: compulsory, capacity, and conflict.
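The three categories can be separated operationally by simulating three caches side by side: an infinite cache (a miss there is compulsory), a fully associative LRU cache of the same size (a miss there, for a previously seen block, is a capacity miss), and the actual cache (any remaining miss is a conflict miss). The direct-mapped geometry below is an assumption for illustration:

```python
from collections import OrderedDict

def classify_misses(trace, num_blocks):
    """Classify the misses of a direct-mapped cache of num_blocks blocks
    using the classic three-C model: compulsory (block never seen),
    capacity (missed even by a fully associative LRU cache of equal
    size), conflict (only the direct-mapped placement caused the miss)."""
    seen = set()              # infinite cache: everything ever referenced
    lru = OrderedDict()       # fully associative LRU cache of num_blocks blocks
    direct = {}               # direct-mapped cache: index -> resident block
    counts = {"compulsory": 0, "capacity": 0, "conflict": 0}
    for blk in trace:
        idx = blk % num_blocks
        dm_hit = direct.get(idx) == blk
        fa_hit = blk in lru
        if not dm_hit:
            if blk not in seen:
                counts["compulsory"] += 1
            elif not fa_hit:
                counts["capacity"] += 1
            else:
                counts["conflict"] += 1
        # Update all three simulated caches.
        seen.add(blk)
        if fa_hit:
            lru.move_to_end(blk)
        else:
            lru[blk] = None
            if len(lru) > num_blocks:
                lru.popitem(last=False)   # evict least recently used
        direct[idx] = blk
    return counts

# Blocks 0 and 4 map to the same direct-mapped index, so after their
# compulsory misses every further reference is a conflict miss.
result = classify_misses([0, 4, 0, 4], num_blocks=4)
```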
These latencies represent the penalty for various types of cache misses. They do not include the overhead of software address translation.
Either way, programmers or compilers need detailed, accurate assessments of when and why cache misses occur.