Cache

Question 1
A direct mapped cache memory of 1 MB has a block size of 256 bytes. The cache has an access time of 3 ns and a hit rate of 94%. During a cache miss, it takes 20 ns to bring the first word of a block from the main memory, while each subsequent word takes 5 ns. The word size is 64 bits. The average memory access time in ns (round off to 1 decimal place) is _____.
A
13.5
       Computer-Organization       Cache       GATE 2020
Question 1 Explanation: 
Cache access time = 3 ns
Hit ratio of cache=0.94
Word size is 64 bits = 8 bytes.
  Cache line size = 256 bytes = 32 words
Main memory access time=20ns(time for first word)+155ns(time for remaining 31 words, 31*5=155ns) = 175 ns
Average access time = h1*t1+(1-h1)(t1+t2) = t1+(1-h1)t2
⇒ 3+(0.06)(175) = 13.5 ns
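The same calculation can be cross-checked with a short C program (a minimal sketch; the variable names are mine, not part of the question):

#include <stdio.h>

int main(void) {
    double t_cache = 3.0;                  /* cache access time in ns */
    double hit_ratio = 0.94;
    int words_per_block = 256 / 8;         /* 256-byte block, 64-bit (8-byte) words = 32 words */
    double miss_penalty = 20.0 + 5.0 * (words_per_block - 1);   /* first word 20 ns, remaining 31 words 5 ns each */
    double amat = t_cache + (1.0 - hit_ratio) * miss_penalty;   /* hierarchical access: t1 + (1-h1)t2 */
    printf("miss penalty = %.1f ns, AMAT = %.1f ns\n", miss_penalty, amat);  /* 175.0 and 13.5 */
    return 0;
}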
Question 2
A computer system with a word length of 32 bits has a 16 MB byte-addressable main memory and a 64 KB, 4-way set associative cache memory with a block size of 256 bytes. Consider the following four physical addresses represented in hexadecimal notation.
A1 = 0x42C8A4, A2 = 0x546888, A3 = 0x6A289C, A4 = 0x5E4880
Which one of the following is TRUE?
A
A1 and A4 are mapped to different cache sets.
B
A1 and A3 are mapped to the same cache set.
C
A3 and A4 are mapped to the same cache set.
D
A2 and A3 are mapped to the same cache set.
       Computer-Organization       Cache       GATE 2020
Question 2 Explanation: 
Main memory is 16MB in size.
Each of the physical addresses mentioned contains 6 hexadecimal digits, so the physical address is 24 bits long.
Block size is 256 bytes, block offset = 8 bits as it is a byte addressable memory.
Cache size = 64KB
Number of blocks in the cache = 64KB/256B = 256
It is a 4-way set associative cache, so no. of sets in the cache = 256/4 = 64 = 2^6
In the physical address we need 6 bits for the SET number.
TAG bits = 24 - 6 - 8 = 10
So the 24-bit physical address is divided as (10 TAG bits + 6 SET number bits + 8 OFFSET bits)
The given physical addresses are in hexadecimal. So when we convert them into binary each hexadecimal digit will take 4 binary bits.
From the binary form of the addresses the least significant 8 bits are for the OFFSET, the next 6 bits are for the SET number. And the next 10 bits are for the TAG bits.
The SET-number bits (bits 8 to 13) of each address are shown below.
A1 = 0x42C8A4 = (0100 0010 1100 1000 1010 0100) ==> set bits = 00 1000
A2 = 0x546888 = (0101 0100 0110 1000 1000 1000) ==> set bits = 10 1000
A3 = 0x6A289C = (0110 1010 0010 1000 1001 1100) ==> set bits = 10 1000
A4 = 0x5E4880 = (0101 1110 0100 1000 1000 0000) ==> set bits = 00 1000
From the given options, option (D) is TRUE as A2 and A3 are mapped to the same cache SET.
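The set numbers can also be checked programmatically. A small sketch (the field widths — 8 offset bits and 6 set bits — are taken from the derivation above):

#include <stdio.h>

int main(void) {
    unsigned int addr[] = {0x42C8A4, 0x546888, 0x6A289C, 0x5E4880};
    const char *name[] = {"A1", "A2", "A3", "A4"};
    for (int i = 0; i < 4; i++) {
        unsigned int set = (addr[i] >> 8) & 0x3F;   /* drop 8 offset bits, keep 6 set-index bits */
        printf("%s -> set %u\n", name[i], set);     /* prints 8, 40, 40, 8 */
    }
    return 0;
}

A1 and A4 fall in set 8 while A2 and A3 fall in set 40, which confirms option (D).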
Question 3

The size of the physical address space of a processor is 2^P bytes. The word length is 2^W bytes. The capacity of cache memory is 2^N bytes. The size of each cache block is 2^M words. For a K-way set-associative cache memory, the length (in number of bits) of the tag field is

A
P - N - log₂K
B
P - N + log₂K
C
P - N - M - W - log₂K
D
P - N - M - W + log₂K
       Computer-Organization       Cache       Gate 2018
Question 3 Explanation: 
Physical memory is of size 2^P bytes.
Each word is of size 2^W bytes.
Number of words in physical memory = 2^(P-W)
So the physical address is P-W bits
Cache size is 2^N bytes.
Number of words in the cache = 2^(N-W)
Block size is 2^M words
No. of blocks in the cache = 2^(N-W-M)
Since it is a K-way set associative cache, each set in the cache will have K blocks.
No. of sets = 2^(N-W-M) / K
SET bits = N-W-M-log₂K
Block offset = M
TAG bits = (P-W) - (N-W-M-log₂K) - M = P - N + log₂K
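As a numeric sanity check of the derived formula, here is a small sketch in C; the values P = 24, W = 2, N = 16, M = 4, K = 4 are arbitrary and chosen only for illustration:

#include <stdio.h>
#include <math.h>

int main(void) {
    int P = 24, W = 2, N = 16, M = 4, K = 4;     /* illustrative parameters, not from the question */
    int word_addr_bits = P - W;                  /* physical address counted in words */
    int set_bits = N - W - M - (int)log2(K);     /* sets = 2^(N-W-M) / K */
    int tag_bits = word_addr_bits - set_bits - M;
    printf("tag bits = %d, P - N + log2(K) = %d\n",
           tag_bits, P - N + (int)log2(K));      /* both print 10 */
    return 0;
}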
Question 4

Consider a two-level cache hierarchy with L1 and L2 caches. An application incurs 1.4 memory accesses per instruction on average. For this application, the miss rate of L1 cache is 0.1; the L2 cache experiences, on average, 7 misses per 1000 instructions. The miss rate of L2 expressed correct to two decimal places is __________.

A
0.05
B
0.06
C
0.07
D
0.08
       Computer-Organization       Cache       Gate 2017 set-01
Question 4 Explanation: 
It is given that average references per instruction = 1.4
For 1000 instructions total number of memory references = 1000 * 1.4 = 1400
These 1400 memory references are first accessed in the L1.
Since the miss rate of L1 is 0.1, for 1400 L1 references the number of misses = 0.1 * 1400 = 140
We know when there is a miss in L1 we next access the L2 cache.
So number of memory references to L2 = 140
It is given that there are 7 misses in L2 cache. Out of 140 memory references to L2 cache there are 7 misses.
Hence the miss rate in L2 cache = 7/140 = 0.05
Question 5

Consider a 2-way set associative cache with 256 blocks that uses LRU replacement. Initially the cache is empty. Conflict misses are those misses which occur due to contention of multiple blocks for the same cache set. Compulsory misses occur due to first time access to the block. The following sequence of accesses to memory blocks

     (0, 128, 256, 128, 0, 128, 256, 128, 1, 129, 257, 129, 1, 129, 257, 129)

is repeated 10 times. The number of conflict misses experienced by the cache is __________.

 
A
76
B
79
C
80
D
81
       Computer-Organization       Cache       Gate 2017 set-01
Question 5 Explanation: 
When a block is accessed for the first time it cannot be present in the cache, so that access is a compulsory miss.
Suppose a block is accessed, then k unique blocks are accessed, and then the original block is accessed again and misses. If the cache holds fewer than k blocks, the miss is a capacity miss: the cache simply cannot hold all the k unique blocks, so the original block was pushed out for lack of space.
If instead the cache holds at least k blocks and the second access still misses, the miss is a conflict miss: the cache had enough total capacity for the k unique blocks, but the original block was evicted because too many blocks mapped to the same set.
Here each accessed block x maps to set (x mod 128).

Number of cache blocks = 256
It is a 2-way set associative cache
So, no. of cache sets = 256/2 = 128
In the access diagram, blue → compulsory miss and red → conflict miss.
At the end of the first round we have 6 compulsory misses & 4 conflict misses.
Similarly, if we continue from Round-2 to last round, in every round we will get 8 conflict misses.
Total conflict misses = 4+9(8) = 4+72 = 76 (conflict misses)
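The counts can be verified with a small LRU simulation of the 2-way, 128-set cache (only a sketch; since the 256-block cache can hold all 6 unique blocks, every non-compulsory miss is classified as a conflict miss, as in the explanation above):

#include <stdio.h>
#include <string.h>

#define SETS 128
#define WAYS 2

int main(void) {
    int seq[] = {0,128,256,128,0,128,256,128,1,129,257,129,1,129,257,129};
    int n = sizeof(seq) / sizeof(seq[0]);
    int tag[SETS][WAYS], stamp[SETS][WAYS], seen[512] = {0};
    int t = 0, compulsory = 0, conflict = 0;
    memset(tag, -1, sizeof tag);
    memset(stamp, 0, sizeof stamp);
    for (int round = 0; round < 10; round++)
        for (int i = 0; i < n; i++, t++) {
            int b = seq[i], s = b % SETS, hit = 0, victim = 0;
            for (int w = 0; w < WAYS; w++)
                if (tag[s][w] == b) { stamp[s][w] = t; hit = 1; break; }
            if (hit) continue;
            if (!seen[b]) compulsory++; else conflict++;
            seen[b] = 1;
            for (int w = 0; w < WAYS; w++) {     /* pick an empty way, otherwise the LRU way */
                if (tag[s][w] == -1) { victim = w; break; }
                if (stamp[s][w] < stamp[s][victim]) victim = w;
            }
            tag[s][victim] = b;
            stamp[s][victim] = t;
        }
    printf("compulsory = %d, conflict = %d\n", compulsory, conflict);  /* 6 and 76 */
    return 0;
}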
Question 6

A cache memory unit with capacity of N words and block size of B words is to be designed. If it is designed as a direct mapped cache, the length of the TAG field is 10 bits. If the cache unit is now designed as a 16-way set-associative cache, the length of the TAG field is ___________ bits.

A
14
B
15
C
16
D
17
       Computer-Organization       Cache       Gate 2017 set-01
Question 6 Explanation: 
When it is a direct mapped cache, the physical address can be divided as
(Tag bits + bits for block number + bits for block offset)
With block size being B words no. of bits for block offset = log (B)
Because the cache capacity is N words and each block is B words, number of blocks in cache = N / B
No. of bits for block number = log (N/B)
So, the physical address in direct mapping case
= 10 + log (N/B) + log (B)
= 10 + log (N) – log B + log B
= 10 + log (N)
If the same cache unit is designed as 16-way set associative, then the physical address becomes
(Tag bits + bits for set no. + Bits for block offset)
There are N/B blocks in the cache and in 16-way set associative cache each set contains 16 blocks.
So no. of sets = (N/B) / 16 = N / (16*B)
Then bits for set no = log (N/16B)
Bits for block offset remain the same in this case also. That is log (B).
So physical address in the set associative case
= tag bits + log (N/16*B) + log B
= tag bits + log (N) – log (16*B) + log B
= tag bits + log (N) – log 16 – log B + log B
= tag bits + log N – 4
The physical address is the same in both the cases.
So, 10 + log N = tag bits + log N – 4
Tag bits = 14
So, no. of tag bits in the case 16-way set associative mapping for the same cache = 14.
Question 7

In a two-level cache system, the access times of L1 and L2 caches are 1 and 8 clock cycles, respectively. The miss penalty from the L2 cache to main memory is 18 clock cycles. The miss rate of L1 cache is twice that of L2. The average memory access time (AMAT) of this cache system is 2 cycles. The miss rates of L1 and L2 respectively are:

A
0.111 and 0.056
B
0.056 and 0.111
C
0.0892 and 0.1784
D
0.1784 and 0.0892
       Computer-Organization       Cache       GATE 2017(set-02)
Question 7 Explanation: 
The Average memory access time is,
AMAT = (L1 hit rate)*(L1 access time) + (L1 miss rate)*(L1 access time + L1 miss penalty)
= L1 hit rate * L1 access time + L1 miss rate * L1 access time + L1 miss rate * L1 miss penalty
We can write,
L1 miss rate = 1 - L1 hit rate
AMAT = L1 hit rate * L1 access time + (1 - L1 hit rate) * L1 access time + L1 miss rate * L1 miss penalty
By taking L1 access time common,
= (L1 hit rate + 1 - L1 hit rate)* L1 access time + L1 miss rate * L1 miss penalty
AMAT = L1 access time + L1 miss rate * L1 miss penalty
We access L2 only when there is a miss in L1.
So, L1 miss penalty is nothing but the effective time taken to access L2.
L1_miss_penalty = Hit_rate_of_L2* Access time of L2 + MissRate of L2 *(Access time of L2+ miss penalty L2)
= Hit_rate_of_L2* Access time of L2 + MissRate of L2 *Access time of L2 + MissRate of L2 * miss penalty L2
By taking Access time of L2 common we get,
= Access time of L2 * (Hit_rate_of_L2 + MissRate of L2 ) + MissRate of L2 * miss penalty L2
We know, MissRate of L2 = 1 - Hit_rate_of_L2 → Hit_rate_of_L2 + MissRate of L2 = 1
So, the above formula becomes,
L1_miss_penalty = Access time of L2 + (MissRate of L2 * miss penalty L2)
It is given,
access time of L1 = 1,
access time of L2 = 8,
miss penalty of L2 = 18,
AMAT = 2.
Let, miss rate of L2 = x.
Since it is given that the L1 miss rate is twice the L2 miss rate, L1 miss rate = 2 * x.
Substituting the above values,
L1_miss_penalty = Access time of L2 + (MissRate of L2 * miss penalty L2)
L1_miss_penalty = 8 + (x*18)
AMAT = L1 access time + L1 miss rate * L1 miss penalty
2 = 1 + (2*x) (8+18*x)
36x^2 + 16x - 1 = 0
By solving the above quadratic equation we get,
x = Miss rate of L2 = 0.056
Miss rate of L1 = 2*x = 0.111
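Solving the quadratic numerically (a small sketch; the coefficients come from the equation derived above):

#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 36.0, b = 16.0, c = -1.0;                    /* 36x^2 + 16x - 1 = 0, x = L2 miss rate */
    double x = (-b + sqrt(b * b - 4 * a * c)) / (2 * a);    /* take the positive root */
    printf("L2 miss rate = %.3f, L1 miss rate = %.3f\n", x, 2 * x);   /* 0.056 and 0.111 */
    return 0;
}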
Question 8

Consider a machine with a byte addressable main memory of 2^32 bytes divided into blocks of size 32 bytes. Assume that a direct mapped cache having 512 cache lines is used with this machine. The size of the tag field in bits is _____________.

A
18
B
19
C
20
D
21
       Computer-Organization       Cache       GATE 2017(set-02)
Question 8 Explanation: 
Given that it is a byte addressable memory and the main memory is of size 2^32 Bytes.
So the physical address is 32 bits long.
Each block is of size 32 (= 2^5) Bytes. So block offset = 5 bits.
Also given that there are 512 (= 2^9) cache lines; since it is a direct mapped cache we need 9 bits for the LINE number.
When it is a direct mapped cache, the physical address can be divided as
(Tag bits + bits for block/LINE number + bits for block offset)
So, tag bits + 9 + 5 = 32
Tag bits = 32 - 14 = 18
Question 9
 

The read access times and the hit ratios for different caches in a memory hierarchy are as given below.
I-cache: read access time 2 ns, hit ratio 0.8
D-cache: read access time 2 ns, hit ratio 0.9
L2-cache: read access time 8 ns, hit ratio 0.9

The read access time of main memory is 90 nanoseconds. Assume that the caches use the referred-word-first read policy and the write back policy. Assume that all the caches are direct mapped caches. Assume that the dirty bit is always 0 for all the blocks in the caches. In execution of a program, 60% of memory reads are for instruction fetch and 40% are for memory operand fetch. The average read access time in nanoseconds (up to 2 decimal places) is ___________.

A
4.72
B
4.73
C
4.74
D
4.75
       Computer-Organization       Cache       GATE 2017(set-02)
Question 9 Explanation: 
Since nothing is given whether it is hierarchical memory or simultaneous memory, by default we consider it as hierarchical memory.
Hierarchical memory (Default case):
For 2-level memory:
The formula for average memory access time = h1 t1 + (1-h1)(t1 + t2)
This can be simplified as
t1 + (1-h1)t2
For 3-level memory:
h1 t1 + (1-h1)(t1 + h2 t2 + (1-h2)(t2 + t3))
This can be simplified as
t1 + (1-h1)t2 + (1-h1)(1-h2)t3
Instruction fetch happens from I-cache whereas operand fetch happens from D-cache.
Using that we need to calculate the instruction fetch time (Using I-cache and L2-cache) and operand fetch time (Using D-cache and L2-cache) separately.
Then calculate 0.6 (instruction fetch time) + 0.4(operand fetch time) to find the average read access time.
The equation for instruction fetch time = t1 + (1-h1 ) t2 + (1-h1 )(1-h2 ) t3
= 2 + 0.2*8 + 0.2*0.1*90 = 5.4ns
Operand fetch time = t1 + (1-h1)t2 + (1-h1)(1-h2)t3 = 2 + 0.1*8 + 0.1*0.1*90 = 3.7ns
The average read access time = 0.6*5.4 + 0.4*3.7 = 4.72ns
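The same arithmetic in C (a sketch; the access times and miss rates are the ones used in the explanation above):

#include <stdio.h>

int main(void) {
    double t1 = 2.0, t2 = 8.0, t3 = 90.0;        /* L1 caches, L2 cache, main memory (ns) */
    double i_miss = 0.2, d_miss = 0.1, l2_miss = 0.1;
    double t_instr = t1 + i_miss * t2 + i_miss * l2_miss * t3;   /* 5.4 ns */
    double t_data  = t1 + d_miss * t2 + d_miss * l2_miss * t3;   /* 3.7 ns */
    printf("average read time = %.2f ns\n", 0.6 * t_instr + 0.4 * t_data);   /* 4.72 */
    return 0;
}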
Question 10
 

The width of the physical address on a machine is 40 bits. The width of the tag field in a 512 KB 8-way set associative cache is _________ bits.

A
24
B
25
C
26
D
27
       Computer-Organization       Cache       GATE 2016 set-2
Question 10 Explanation: 
Given that it is a set associative cache, so the physical address is divided as following:
(Tag bits + bits for set no. + Bits for block offset)
In the question the block size has not been given, so we can assume a block size of 2^x Bytes.
The cache is of size 512KB, so the number of blocks in the cache = 2^19/2^x = 2^(19-x)
It is an 8-way set associative cache, so there will be 8 blocks in each set.
So number of sets = 2^(19-x)/2^3 = 2^(16-x)
So number of bits for the set index = 16-x
Let the number of bits for the Tag = T
Since we assumed the block size is 2^x Bytes, the number of bits for the block offset is x.
So, T + (16-x) + x = 40
T + 16 = 40
T = 24
Question 11

A file system uses an in-memory cache to cache disk blocks. The miss rate of the cache is shown in the figure. The latency to read a block from the cache is 1 ms and to read a block from the disk is 10 ms. Assume that the cost of checking whether a block exists in the cache is negligible. Available cache sizes are in multiples of 10 MB.

The smallest cache size required to ensure an average read latency of less than 6 ms is _______ MB.

 
A
30
B
31
C
32
D
33
       Computer-Organization       Cache       GATE 2016 set-2
Question 11 Explanation: 
Here it is not given whether the memory is hierarchical or simultaneous.
So we consider it as hierarchical memory.
But it is given that “assume that the cost of checking whether a block exists in the cache is negligible”, which means don't consider the checking time in the cache when there is a miss.
So formula for average access time becomes h1t1 + (1-h1)(t2) which is same as for simultaneous access.
Though the memory is hierarchical because of the statement given in the question we ignored the cache access time when there is a miss and effectively the calculation became like simultaneous access.
The average access time or read latency = h1t1 + (1-h1)t2.
It is given that the average read latency has to be less than 6ms.
So, h1t1 + (1-h1)t2 < 6
From the given information t1 = 1ms, t2 = 10ms
h1*1+(1-h1)10 < 6
10 - 9h1 < 6
9h1 > 4
h1 > 4/9 ≈ 0.444
The graph gives the miss rate, which is 1 - h1, so
1 - h1 < 1 - 0.444 = 0.556
So for the average read latency to be less than 6 ms the miss rate has to be less than about 55.6%.
From the given graph, the largest miss rate that is still below 55.6% is 40%.
For 40% miss rate the corresponding cache size is 30MB.
Hence the answer is 30MB.
Question 12
Assume that for a certain processor, a read request takes 50 nanoseconds on a cache miss and 5 nanoseconds on a cache hit. Suppose while running a program, it was observed that 80% of the processor's read requests result in a cache hit. The average read access time in nanoseconds is  _______.
A
14
B
15
C
16
D
17
       Computer-Organization       Cache       GATE 2015 -(Set-2)
Question 12 Explanation: 
Average read access time = [(0.8)(5) + (0.2)(50)] ns = 4 + 10 = 14 ns
Question 13

Consider a machine with a byte addressable main memory of 2^20 bytes, block size of 16 bytes and a direct mapped cache having 2^12 cache lines. Let the addresses of two consecutive bytes in main memory be (E201F)16 and (E2020)16. What are the tag and cache line address (in hex) for main memory address (E201F)16?

A
E, 201
B
F, 201
C
E, E20
D
2, 01F
       Engineering-Mathematics       Cache       GATE 2015(Set-03)
Question 13 Explanation: 
Block size is 16 bytes, so we need 4 offset bits; the lowest 4 bits are the offset bits.
The cache has 2^12 cache lines, which need 12 bits, so the next 12 bits are the line-index bits.
The remaining top 4 bits (out of 20) are the tag bits. For (E201F)16 = 1110 0010 0000 0001 1111, the tag is E and the line index is 201. So the answer is (A).
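The field extraction can be checked with a few shifts and masks (a sketch; the 4/12/4 field widths come from the derivation above):

#include <stdio.h>

int main(void) {
    unsigned int addr = 0xE201F;                  /* 20-bit physical address */
    unsigned int offset = addr & 0xF;             /* lowest 4 bits: byte within the 16-byte block */
    unsigned int line = (addr >> 4) & 0xFFF;      /* next 12 bits: cache line index */
    unsigned int tag = addr >> 16;                /* top 4 bits: tag */
    printf("tag = %X, line = %X, offset = %X\n", tag, line, offset);   /* E, 201, F */
    return 0;
}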
Question 14
An access sequence of cache block addresses is of length N and contains n unique block addresses. The number of unique block addresses between two consecutive accesses to the same block address is bounded above by k. What is the miss ratio if the access sequence is passed through a cache of associativity A≥k exercising least-recently-used replacement policy?
A
n⁄N
B
1⁄N
C
1⁄A
D
k⁄n
       Data-Structures       Cache       GATE 2014(Set-01)
Question 14 Explanation: 
Consider first a k-way set associative cache: a block b0 comes to set 0, then k-1 other unique blocks map to the same set, so the set is full; when one more block arrives, b0 (the least recently used block) is replaced, and a later access to b0 misses. But the question says the associativity A ≥ k, i.e., each set holds at least k blocks. With, say, a (k+1)-way cache, after b0 and the k-1 other unique accesses to set 0 there is still a free place in the set, so when b0 is accessed again it is still present and does not miss. As the associativity grows further there are no misses other than those for the first accesses to the n unique blocks. So out of N accesses only the n unique blocks can miss, and the miss ratio is n/N.
Since the associativity is large enough that no block ever needs to be replaced before it is reused, the LRU policy is of no special significance here.
Question 15
A 4-way set-associative cache memory unit with a capacity of 16 KB is built using a block size of 8 words. The word length is 32 bits. The size of the physical address space is 4 GB. The number of bits for the TAG field is __________.
A
20
B
21
C
22
D
23
       Computer-Organization       Cache       Gate 2014 Set -02
Question 15 Explanation: 
Physical address size = 32 bits
Cache size = 16K bytes = 2^14 Bytes
Block size = 8 words = 8 ⨯ 4 Bytes = 32 Bytes = 2^5 Bytes
(where each word = 4 Bytes)
No. of blocks = 2^14/2^5 = 2^9
Block offset = 5 bits (block size = 2^5 Bytes)
Because it is a 4-way set associative cache, no. of sets = 2^9/4 = 2^7
Set index = 7 bits
TAG = 32 – (7 + 5) = 20 bits
Question 16
Suppose a stack implementation supports an instruction REVERSE, which reverses the order of elements on the stack, in addition to the PUSH and POP instructions. Which one of the following statements is TRUE with respect to this modified stack?
A
A queue cannot be implemented using this stack.
B
A queue can be implemented where ENQUEUE takes a single instruction and DEQUEUE takes a sequence of two instructions.
C
A queue can be implemented where ENQUEUE takes a sequence of three instructions and DEQUEUE takes a single instruction.
D
A queue can be implemented where both ENQUEUE and DEQUEUE take a single instruction each.
       Computer-Organization       Cache       Gate 2014 Set -02
Question 16 Explanation: 
A queue can be implemented where ENQUEUE takes a sequence of three instructions and DEQUEUE takes a single instruction.
Keep the stack so that the front of the queue is always on top.
ENQUEUE(x): REVERSE, PUSH(x), REVERSE — three instructions. The first REVERSE brings the rear of the queue to the top, PUSH(x) adds the new element at the rear, and the second REVERSE restores the front of the queue to the top.
DEQUEUE: POP — a single instruction, since the front element is already on top.
Hence option (C) is correct.
Question 17
In designing a computer’s cache system, the cache block (or cache line) size is an important parameter.  Which one of the following statements is correct in this context?
A
A smaller block size implies better spatial locality
B
A smaller block size implies a smaller cache tag and hence lower cache tag overhead
C
A smaller block size implies a larger cache tag and hence lower cache hit time
D
A smaller block size incurs a lower cache miss penalty
       Computer-Organization       Cache        Gate 2014 Set -02
Question 17 Explanation: 
On a cache miss, the whole block has to be brought in from the next level of memory. A smaller block means fewer bytes to transfer per miss, so the miss penalty is lower; hence option (D) is correct. A smaller block exploits less spatial locality (so A is false), and with more blocks in the same cache the tag becomes larger, not smaller (so B and C are false).
Question 18
If the associativity of a processor cache is doubled while keeping the capacity and block size unchanged, which one of the following is guaranteed to be NOT affected?
A
Width of tag comparator
B
Width of set index decoder
C
Width of way selection multiplexor
D
Width of processor to main memory data bus
       Computer-Organization       Cache       Gate 2014 Set -02
Question 18 Explanation: 
When the associativity is doubled (with capacity and block size unchanged), the number of sets is halved, so one set-index bit moves to the tag. The tag becomes one bit longer, so the width of the tag comparator is affected.
The set index decoder now decodes one bit fewer, so its width is also affected.
A k-way set associative cache needs a k-to-1 way-selection multiplexer; doubling the associativity doubles the width of this multiplexer.
The width of the processor-to-main-memory data bus is guaranteed to be NOT affected, as it does not depend on the cache associativity.
Question 19
In a k-way set associative cache, the cache is divided into v sets, each of which consists of k lines. The lines of a set are placed in sequence one after another. The lines in set s are sequenced before the lines in set (s+1). The main memory blocks are numbered 0 onwards. The main memory block numbered j must be mapped to any one of the cache lines from
A
(j mod v) * k to (j mod v) * k + (k-1)
B
(j mod v) to (j mod v) + (k-1)
C
(j mod k) to (j mod k) + (v-1)
D
(j mod k) * v to (j mod k) * v + (v-1)
       Computer-Organization       Cache       Gate 2013
Question 19 Explanation: 
Number of sets in cache = v. A block numbered j will be mapped to set number (j mod v). Since it is k-way set associative cache, there are k blocks in each set. It is given in the question that the blocks in consecutive sets are sequenced. It means for set-0 the cache lines are numbered 0, 1, .., k-1 and for set-1, the cache lines are numbered k, k+1,... k+k-1 and so on. As the main memory block j will be mapped to set (j mod v), it will be any one of the cache lines from (j mod v) * k to (j mod v) * k + (k-1).
Question 20
A RAM chip has a capacity of 1024 words of 8 bits each (1K×8). The number of 2×4 decoders with enable line needed to construct a 16K×16 RAM from 1K×8 RAM is
A
4
B
5
C
6
D
7
       Computer-Organization       Cache       Gate 2013
Question 20 Explanation: 
The required RAM is 16K × 16 and the available chips are 1K × 8.
So we need 16K/1K = 16 rows of chips (each row has two 1K × 8 chips side by side to form a 16-bit word), and a decoder that selects one of these 16 rows, i.e., a 4 × 16 decoder. But we are only given 2 × 4 decoders.
So 4 decoders are required at the inner level, since one 2 × 4 decoder has only 4 output lines and we need 16.
To select among these 4 decoders, another 2 × 4 decoder is required at the outer level.
Hence the no. of 2 × 4 decoders needed to realize the above implementation of RAM = 1 + 4 = 5
Question 21
A
11
B
14
C
16
D
27
       Computer-Organization       Cache       Gate 2012
Question 21 Explanation: 
It is given that cache size = 256 KB
Cache block size = 32 Bytes
So, number of blocks in the cache = 256K / 32 = 8 K
It is a 4-way set associative cache. Each set has 4 blocks.
So, number of sets in cache = 8 K / 4 = 2 K = 2^11.
So, 11 bits are needed for accessing a set. Inside a set we need to identify the cache block.
Since cache block size is 32 bytes, block offset needs 5 bits.
Out of 32 bit address, no. of TAG bits = 32 - 11 - 5 = 32-16 = 16
So, we need 16 tag bits.
Question 22
A computer has a 256 KByte, 4-way set associative, write back data cache with block size of 32 Bytes. The processor sends 32 bit addresses to the cache controller. Each cache tag directory entry contains, in addition to address tag, 2 valid bits, 1 modified bit and 1 replacement bit. The size of the cache tag directory is
A
160 Kbits
B
136 Kbits
C
40 Kbits
D
32 Kbits
       Computer-Organization       Cache       Gate 2012
Question 22 Explanation: 
It is given that cache size =256 KB
Cache block size = 32 Bytes
So, number of blocks in the cache = 256K / 32 = 8 K
It is a 4-way set associative cache. Each set has 4 blocks.
So, number of sets in cache = 8 K / 4 = 2 K = 2^11.
So, 11 bits are needed for accessing a set. Inside a set we need to identify the cache block.
Since cache block size is 32 bytes, block offset needs 5 bits.
Out of 32 bit address, no. of TAG bits = 32 - 11 - 5 = 32-16 = 16
So, we need 16 tag bits.
It is given that in addition to address tag there are 2 valid bits, 1 modified bit and 1 replacement bit.
So size of each tag entry = 16 tag bits + 2 valid bits + 1 modified bit + 1 replacement bit = 20 bits
Size of cache tag directory=Size of tag entry×Number of tag entries
= 20×8 K
=160 Kbits
Question 23
An 8KB direct-mapped write-back cache is organized as multiple blocks, each of size 32-bytes. The processor generates 32-bit addresses. The cache controller maintains the tag information for each cache block comprising of the following: 1 valid bit, 1 modified bit, and as many bits as the minimum needed to identify the memory block mapped in the cache. What is the total size of memory needed at the cache controller to store meta-data (tags) for the cache?
A
4864 bits
B
6144 bits
C
6656 bits
D
5376 bits
       Computer-Organization       Cache       Gate 2011
Question 23 Explanation: 
No. of cache blocks = cache size/size of a block = 8KB/32B = 256
So we need 8 bits for indexing the 256 blocks in the cache. And since a block is 32 bytes, we need 5 offset bits to address each byte within a block.
So out of remaining (32 - 8 - 5), 19 bits should be tag bits.
So tag entry size = 19 + 1 (valid bit) + 1 (modified bit) = 21 bits
∴ Total size of metadata = 21 × Number blocks = 21 × 256 = 5376 bits
Question 24
Consider a 4-way set associative cache (initially empty) with total 16 cache blocks. The main memory consists of 256 blocks and the request for memory blocks is in the following order: 0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155. Which one of the following memory block will NOT be in cache if LRU replacement policy is used?  
A
3
B
8
C
129
D
216
       Computer-Organization       Cache       2009
Question 24 Explanation: 
It is given that cache contains 16 blocks, it is 4-way set associative cache.
So number of sets = 16 / 4 = 4
Given main memory blocks will be mapped to one of the 4 sets.
The given blocks are 0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155, and they will be mapped to following sets(Since there are 4 sets we get the set number by doing mod 4):
0, 3, 1, 0, 3, 0, 1, 3, 0, 1, 3, 0, 0, 0, 1, 0, 3
The cache mapping and the replacement using LRU can be seen from the below diagram.

We can see that at the end of mapping the given block pattern block number 216 is not in the cache.
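The final cache contents can also be verified with a short LRU simulation (a sketch only; the mapping follows block mod 4 as above):

#include <stdio.h>
#include <string.h>

#define SETS 4
#define WAYS 4

int main(void) {
    int seq[] = {0,255,1,4,3,8,133,159,216,129,63,8,48,32,73,92,155};
    int n = sizeof(seq) / sizeof(seq[0]);
    int tag[SETS][WAYS], stamp[SETS][WAYS];
    memset(tag, -1, sizeof tag);
    memset(stamp, 0, sizeof stamp);
    for (int t = 0; t < n; t++) {
        int b = seq[t], s = b % SETS, hit = 0, victim = 0;
        for (int w = 0; w < WAYS; w++)
            if (tag[s][w] == b) { stamp[s][w] = t + 1; hit = 1; break; }
        if (hit) continue;
        for (int w = 0; w < WAYS; w++) {             /* pick an empty way, otherwise the LRU way */
            if (tag[s][w] == -1) { victim = w; break; }
            if (stamp[s][w] < stamp[s][victim]) victim = w;
        }
        tag[s][victim] = b;
        stamp[s][victim] = t + 1;
    }
    for (int s = 0; s < SETS; s++) {                 /* block 216 does not appear (empty ways print -1) */
        printf("set %d:", s);
        for (int w = 0; w < WAYS; w++) printf(" %d", tag[s][w]);
        printf("\n");
    }
    return 0;
}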
Question 25
For inclusion to hold between two cache levels L1 and L2 in a multi-level cache hierarchy, which of the following are necessary? I.L1 must be a write-through cache II.L2 must be a write-through cache III.The associativity of L2 must be greater than that of L1 IV.The L2 cache must be at least as large as the L1 cache  
A
IV only
B
I and IV only
C
I, III and IV only
D
I, II, III and IV
       Computer-Organization       Cache       Gate-2008
Question 25 Explanation: 
Inclusion property: in this the presence of an address in a given level of the memory system guarantees that the address is present in all lower levels of the memory system.
Statement I is not necessary: the data in L1 and L2 need not be identical at every instant, so L1 can use the write back policy instead of write through.
Statement II is not needed when talking only about L1 and L2: whether L2 is write through affects the levels below L2, not L1.
For statement III, the associativity of L2 can also be equal to that of L1, so it need not be greater.
For statement IV, the L2 cache has to be at least as large as the L1 cache; in practice L2 is generally larger than L1, but it can never be smaller if inclusion is to hold. So only statement IV is necessarily true. Hence option (A) is the answer.
Question 26
Consider a machine with a 2-way set associative data cache of size 64 Kbytes and block size 16 bytes. The cache is managed using 32 bit virtual addresses and the page size is 4 Kbytes. A program to be run on this machine begins as follows:
double ARR[1024][1024];
int i, j;
/* Initialize array ARR to 0.0 */
for (i = 0; i < 1024; i++)
   for (j = 0; j < 1024; j++)
      ARR[i][j] = 0.0;
The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. The total size of the tags in the cache directory is
A
32Kbits
B
34Kbits
C
64Kbits
D
68Kbits
       Computer-Organization       Cache       Gate-2008
Question 26 Explanation: 
It is given that cache size = 64KB
Block size = 16 Bytes = 2^4 Bytes, so block offset = 4 bits
Number of blocks in the cache = 64KB / 16B = 4K
It is 2-way set associative cache, so each set contains 2 blocks.
So, number of sets = 4K / 2 = 2K = 2^11
So, we need 11-bits for set indexing.
Since the address is 32 bits long, number of tag bits = 32−11−4 = 17
Total size of the tag directory = No. of tag bits ×Number of cache blocks =17×4K
=68 Kbits
Question 27
Consider a machine with a 2-way set associative data cache of size 64 Kbytes and block size 16 bytes. The cache is managed using 32 bit virtual addresses and the page size is 4 Kbytes. A program to be run on this machine begins as follows:
double ARR[1024][1024];
int i, j;
/* Initialize array ARR to 0.0 */
for (i = 0; i < 1024; i++)
   for (j = 0; j < 1024; j++)
      ARR[i][j] = 0.0;
The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. Which of the following array elements has the same cache index as ARR[0][0]?
A
ARR [0] [4]
B
ARR [4] [0]
C
ARR [0] [5]
D
ARR [5] [0]
       Computer-Organization       Cache       Gate-2008
Question 27 Explanation: 
It is given that cache size = 64KB
Block size = 16 Bytes
Number of blocks in the cache = 64KB / 16B = 4K
It is 2-way set associative cache, so each set contains 2 blocks.
So, number of sets = 4K / 2 = 2K = 2^11
Each element size = 8B and size of each block = 16 B
No. of elements in one block = 16/8 = 2. We know the no. of elements in one row of the array = 1024. So, no. of blocks for one row of the array = 1024/2 = 512
We know there are 2^11 sets, and each set has two blocks.
For any element to share the same cache index as ARR[0][0], it has to belong to the same set.
ARR[0][0] belongs to set 0. The next element that belongs to set 0 will have block number 2048, because 2048 mod 2^11 = 0.
Block number 2048 will be from row number 2048/512 = 4, because each row has 512 blocks we divide the block number with 512 to get the row number.
From the given options ARR[4][0] will have the same cache index as ARR[0][0].
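The set indices of the candidate elements can also be computed directly (a sketch; the base address follows from virtual page 0xFF000 with 4 KB pages, as stated in the question):

#include <stdio.h>

int main(void) {
    unsigned long base = 0xFF000UL << 12;         /* start of virtual page 0xFF000, 4 KB pages */
    int idx[][2] = {{0,0}, {0,4}, {4,0}, {0,5}, {5,0}};
    for (int k = 0; k < 5; k++) {
        int i = idx[k][0], j = idx[k][1];
        unsigned long addr = base + (unsigned long)(i * 1024 + j) * 8;   /* row major, 8-byte doubles */
        unsigned long set = (addr / 16) % 2048;                          /* 16-byte blocks, 2^11 sets */
        printf("ARR[%d][%d] -> set %lu\n", i, j, set);                   /* only ARR[4][0] gives set 0 */
    }
    return 0;
}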
Question 28
Consider a machine with a 2-way set associative data cache of size 64 Kbytes and block size 16 bytes. The cache is managed using 32 bit virtual addresses and the page size is 4 Kbytes. A program to be run on this machine begins as follows:
double ARR[1024][1024];
int i, j;
/* Initialize array ARR to 0.0 */
for (i = 0; i < 1024; i++)
   for (j = 0; j < 1024; j++)
      ARR[i][j] = 0.0;
The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. The cache hit ratio for this initialization loop is
A
0%
B
25%
C
50%
D
75%
       Computer-Organization       Cache       Gate-2008
Question 28 Explanation: 
Block size = 16B and it is given that double size is 8B, so one element = 8B.
So in one block 2 elements will be stored.
To store 1024×1024 elements the no. of blocks needed = 1024×1024/2 = 2^20/2 = 2^19.
In each block the first element will be a miss and the second element will be a hit, because on every miss the entire block is brought into the cache. So, there will be 2^19 hits and 2^19 misses. Total no. of references = no. of elements in the array = 2^20
⇒ hit ratio = No. of hits / Total no. of references
= 2^19/2^20 = 1/2 = 0.5
=0.5×100=50%
Question 29
Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. The number of bits in the TAG, SET and WORD fields, respectively are:
A
7, 6, 7
B
8, 5, 7
C
8, 6, 6
D
9, 4, 7
       Computer-Organization       Cache       Gate 2008-IT
Question 29 Explanation: 
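As a sketch of the reasoning: 1 MB of main memory with 1-byte words needs a 20-bit address; a block of 128 words needs 7 WORD bits; the 8 KB cache holds 8K/128 = 64 blocks, which with 4-way set associativity gives 16 sets, i.e., 4 SET bits; TAG = 20 - 4 - 7 = 9. So the answer is option (D): 9, 4, 7. A small check in C:

#include <stdio.h>
#include <math.h>

int main(void) {
    int addr_bits = 20;                           /* 1 MB main memory, 1-byte words */
    int word_bits = (int)log2(128);               /* 128 words per block -> 7 */
    int sets = (8 * 1024) / 128 / 4;              /* 8 KB cache, 128-byte blocks, 4-way -> 16 sets */
    int set_bits = (int)log2(sets);               /* 4 */
    int tag_bits = addr_bits - set_bits - word_bits;
    printf("TAG = %d, SET = %d, WORD = %d\n", tag_bits, set_bits, word_bits);   /* 9, 4, 7 */
    return 0;
}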
Question 30
Consider a computer with a 4-ways set-associative mapped cache of the following character­istics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. While accessing the memory location 0C795H by the CPU, the contents of the TAG field of the corresponding cache line is
A
000011000
B
110001111
C
00011000
D
110010101
       Computer-Organization       Cache       Gate 2008-IT
Question 30 Explanation: 
As found in the earlier question, the first 9 bits of the 20-bit address are the TAG bits.
Converting the given hexadecimal address to binary: 0C795H = 0000 1100 0111 1001 0101. The top 9 bits are 000011000, so option (A) is correct.
Question 31
Consider a 4-way set associative cache consisting of 128 lines with a line size of 64 words. The CPU generates a 20-bit address of a word in main memory. The number of bits in the TAG, LINE and WORD fields are respectively:
A
9, 6, 5
B
7, 7, 6
C
7, 5, 8
D
9, 5, 6
       Computer-Organization       Cache       Gate-2007
Question 31 Explanation: 
Cache has 128 lines.
Each line size 64 words, so no. of bits for WORD = 6 bits
Because it is 4-way set associative cache, no. of sets in the cache = 128/4 = 32 = 25
No. of bits for the set number = 5
Because the address is 20-bits long, no. of TAG bits = 20-5-6 = 9
Question 32
Consider a machine with a byte addressable main memory of 2^16 bytes. Assume that a direct mapped data cache consisting of 32 lines of 64 bytes each is used in the system. A 50 × 50 two-dimensional array of bytes is stored in the main memory starting from memory location 1100H. Assume that the data cache is initially empty. The complete array is accessed twice. Assume that the contents of the data cache do not change in between the two accesses. How many data cache misses will occur in total?
A
40
B
50
C
56
D
59
       Computer-Organization       Cache       Gate-2007
Question 32 Explanation: 
No. of bits to represent the address
= log2(2^16) = 16 bits
Cache line size is 64 bytes, so the offset bits required are
log2(2^6) = 6 bits
No. of lines in the cache is 32, so the line bits required are
log2(2^5) = 5 bits.
So, tag bits = 16 - 6 - 5 = 5 bits
No. of elements in array is
50 × 50 = 2500
and each element is of 1 byte.
So, total size of array is 2500 bytes.
So, total no. of lines it will require to get into the cache,
⌈2500/64⌉ = 40
Now, given starting address of array,

Now we need 40 cache lines to hold all array elements, but we have only 32 cache lines.
Now lets divide 2500 array elements into 40 array lines, each of which will contain 64 of its elements.
Now,

So, if the complete array is accessed twice, the total no. of cache misses = 40 (first access, all compulsory) + 16 (second access: the 8 blocks mapped to lines 4 to 11 miss again, and the 8 blocks they evict miss again as well) = 56.
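The miss count can be reproduced with a direct mapped cache simulation (a sketch; addresses and cache geometry as in the question):

#include <stdio.h>
#include <string.h>

#define LINES 32                                  /* direct mapped: 32 lines of 64 bytes */

int main(void) {
    int line_tag[LINES];
    long misses = 0;
    memset(line_tag, -1, sizeof line_tag);
    for (int pass = 0; pass < 2; pass++)          /* the complete array is accessed twice */
        for (int e = 0; e < 50 * 50; e++) {       /* 2500 one-byte elements in row major order */
            int addr = 0x1100 + e;                /* array starts at location 1100H */
            int block = addr / 64;
            int line = block % LINES;
            if (line_tag[line] != block) { misses++; line_tag[line] = block; }
        }
    printf("total misses = %ld\n", misses);       /* 56 */
    return 0;
}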
Question 33
Consider a machine with a byte addressable main memory of 2^16 bytes. Assume that a direct mapped data cache consisting of 32 lines of 64 bytes each is used in the system. A 50 × 50 two-dimensional array of bytes is stored in the main memory starting from memory location 1100H. Assume that the data cache is initially empty. The complete array is accessed twice. Assume that the contents of the data cache do not change in between the two accesses. Which of the following lines of the data cache will be replaced by new blocks in accessing the array for the second time?
A
line 4 to line 11
B
line 4 to line 12
C
line 0 to line 7
D
line 0 to line 8
       Computer-Organization       Cache       Gate-2007
Question 33 Explanation: 
From the figure in the previous question's explanation, when the array is accessed the second time, lines 4 to 11 (holding array lines A33 to A40) are first replaced by A1 to A8; the blocks A9 to A32 are still present and hit; and then A1 to A8 are in turn replaced again by A33 to A40. So the lines replaced by new blocks are line 4 to line 11.
Question 34
Consider a Direct Mapped Cache with 8 cache blocks (numbered 0-7). If the memory block requests are in the following order 3, 5, 2, 8, 0, 63, 9,16, 20, 17, 25, 18, 30, 24, 2, 63, 5, 82,17, 24. Which of the following memory blocks will not be in the cache at the end of the sequence ?    
A
3
B
18
C
20
D
30
       Computer-Organization       Cache       Gate 2007-IT
Question 34 Explanation: 

So memory block 18 is not in the cache.
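The final contents of the direct mapped cache can also be checked with a few lines of C (a sketch only):

#include <stdio.h>

int main(void) {
    int seq[] = {3,5,2,8,0,63,9,16,20,17,25,18,30,24,2,63,5,82,17,24};
    int n = sizeof(seq) / sizeof(seq[0]);
    int line[8];
    for (int i = 0; i < 8; i++) line[i] = -1;
    for (int i = 0; i < n; i++)
        line[seq[i] % 8] = seq[i];                 /* direct mapped: block b goes to line b mod 8 */
    for (int i = 0; i < 8; i++)
        printf("line %d: block %d\n", i, line[i]); /* blocks 3, 20, 30 are present; 18 is not */
    return 0;
}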
Question 35
Consider two cache organizations: The first one is 32 KB 2-way set associative with 32-byte block size. The second one is of the same size but direct mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a k-bit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2. The value of h1 is:
A
2.4 ns
B
2.3 ns
C
1.8 ns
D
1.7 ns
       Computer-Organization       Cache       Gate-2006
Question 35 Explanation: 
Cache size = 32 KB = 32 × 2^10 B
Block size = 32 bytes
Blocks per set = 2 (2-way set associative)
No. of sets = Cache size / (Block size × Blocks per set) = (32 × 2^10 B) / (32 × 2) = 2^9
Set index bits = 9
Block offset bits = 5 (∵ cache block size is 32 bytes = 2^5 bytes)
No. of Tag bits = 32 - 9 - 5 = 18
Hit latency = Multiplexer latency + Comparator latency
= 0.6 + (18/10)
= 0.6 + 1.8
= 2.4 ns
Question 36
A CPU has a 32 KB direct mapped cache with 128-byte block size. Suppose A is a two-dimensional array of size 512×512 with elements that occupy 8-bytes each. Consider the following two C code segments, P1 and P2. P1:
for (i=0; i<512; i++) {
   for (j=0; j<512; j++) {
      x += A[i][j];
   }
}
P2:
for (i=0; i<512; i++) {
   for (j=0; j<512; j++) {
      x += A[j][i];
   }
}
P1 and P2 are executed independently with the same initial state, namely, the array A is not in the cache and i, j, x are in registers. Let the number of cache misses experienced by P1 be M1 and that for P2 be M2 . The value of the ratio M1/M2 is:
A
0
B
1/16
C
1/8
D
16
       Computer-Organization       Cache       Gate-2006
Question 36 Explanation: 
Total misses for [P2] = 512 * 512 = 262144
(P2 accesses A in column major order: consecutive accesses A[j][i] and A[j+1][i] are 512*8 = 4096 bytes, i.e., 32 blocks, apart, so the 512 blocks touched in one column keep mapping onto the same few cache lines and evict each other; every access is a miss.)
M1/M2 = 16384/262144 = 1/16
Question 37
A CPU has a 32 KB direct mapped cache with 128-byte block size. Suppose A is a two-dimensional array of size 512×512 with elements that occupy 8-bytes each. Consider the following two C code segments, P1 and P2. P1:
for (i=0; i<512; i++) {
   for (j=0; j<512; j++) {
      x += A[i][j];
   }
}
P2:
for (i=0; i<512; i++) {
   for (j=0; j<512; j++) {
      x += A[j][i];
   }
}
P1 and P2 are executed independently with the same initial state, namely, the array A is not in the cache and i, j, x are in registers. Let the number of cache misses experienced by P1 be M1 and that for P2 be M2 . The value of M1 is:
A
0
B
2048
C
16384
D
262144
       Computer-Organization       Cache       Gate-2006
Question 37 Explanation: 
[P1] Access A in row major order.
[P2] Access A in column major order.
No. of cache blocks = Cache size / Block size = 32KB / 128B = 2^15 / 2^7 = 2^8 = 256
No. of array elements in each block = Block size / Element size = 128B / 8B = 16
P1 accesses the array in row major order, so only the first access to each block misses. Total misses for (P1) = Total elements / Elements per block = (512×512) / 16 = 16384
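Both M1 and the M2 used in the previous question can be reproduced by simulating the direct mapped cache (a sketch; array layout and cache geometry as in the question):

#include <stdio.h>
#include <string.h>

#define LINES 256                                 /* 32 KB cache / 128-byte blocks */

static long count_misses(int column_major) {
    long line_tag[LINES];
    long misses = 0;
    memset(line_tag, -1, sizeof line_tag);
    for (int i = 0; i < 512; i++)
        for (int j = 0; j < 512; j++) {
            long elem = column_major ? ((long)j * 512 + i) : ((long)i * 512 + j);
            long block = (elem * 8) / 128;        /* 8-byte elements, 128-byte blocks */
            int line = (int)(block % LINES);
            if (line_tag[line] != block) { misses++; line_tag[line] = block; }
        }
    return misses;
}

int main(void) {
    long m1 = count_misses(0), m2 = count_misses(1);
    printf("M1 = %ld, M2 = %ld, M1/M2 = %.4f\n", m1, m2, (double)m1 / m2);   /* 16384, 262144, 0.0625 */
    return 0;
}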
Question 38
Consider two cache organizations: The first one is 32 KB 2-way set associative with 32-byte block size. The second one is of the same size but direct mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a k-bit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2. The value of h2 is:
A
2.4 ns
B
2.3 ns
C
1.8 ns
D
1.7 ns
       Computer-Organization       Cache       Gate-2006
Question 38 Explanation: 
For the direct mapped cache no way-selection multiplexer is needed, so the hit latency is just the k-bit comparator latency = k/10 ns.
For k: the 32 KB direct mapped cache with 32-byte blocks has 2^10 lines, so 10 index bits and 5 offset bits, giving k = 32 - 10 - 5 = 17 tag bits.
∴ Hit latency = 17/10 = 1.7 ns
Question 39
A CPU has a cache with block size 64 bytes. The main memory has k banks, each bank being c bytes wide. Consecutive c-byte chunks are mapped on consecutive banks with wrap-around. All the k banks can be accessed in parallel, but two accesses to the same bank must be serialized. A cache block access may involve multiple iterations of parallel bank accesses depending on the amount of data obtained by accessing all the k banks in parallel. Each iteration requires decoding the bank numbers to be accessed in parallel and this takes k/2 ns. The latency of one bank access is 80 ns. If c = 2 and k = 24, the latency of retrieving a cache block starting at address zero from main memory is:
A
92 ns
B
104 ns
C
172 ns
D
184 ns
       Computer-Organization       Cache       Gate-2006
Question 39 Explanation: 
Cache block size = 64 bytes
Bank access latency = 80 ns
k = 24 banks, each c = 2 bytes wide
Decoding the bank numbers for one iteration of parallel accesses takes k/2 = 24/2 = 12 ns
One iteration of accessing all 24 banks in parallel fetches 24 × 2 = 48 bytes, so fetching a 64-byte cache block needs 2 iterations.
Time per iteration = decoding time + bank latency = 12 + 80 = 92 ns
For 2 iterations = 92 × 2 = 184 ns
Question 40

A cache line is 64 bytes. The main memory has latency 32 ns and bandwidth 1 GBytes/s. The time required to fetch the entire cache line from the main memory is

A
32 ns
B
64 ns
C
96 ns
D
128 ns
       Computer-Organization       Cache       Gate 2006-IT
Question 40 Explanation: 
With a bandwidth of 1 GBytes/s, it takes 1 s to transfer 10^9 bytes.
→ So, transferring 64 bytes takes 64/10^9 s = 64 ns.
Main memory latency = 32 ns
Total time required to fetch the entire cache line = 32 + 64 = 96 ns
Question 41
A computer system has a level-1 instruction cache (I-cache), a level-1 data cache (D-cache) and a level-2 cache (L2-cache) with the following specifications: The length of the physical address of a word in the main memory is 30 bits. The capacity of the tag memory in the I-cache, D-cache and L2-cache is, respectively,
A
1K × 18-bit, 1K × 19-bit, 4K × 16-bit
B
1K × 16-bit, 1K × 19-bit, 4K × 18-bit
C
1K × 16-bit, 512 × 18-bit, 1K × 16-bit
D
1K × 18-bit, 512 × 18-bit, 1K × 18-bit
       Computer-Organization       Cache       Gate 2006-IT
Question 41 Explanation: 
No. of blocks in a cache = Capacity / Block size = 2^m
Bits to represent a block = m
No. of words in a block = 2^n
Bits to represent a word = n
Tag bits = (length of physical address of a word) - (bits to represent blocks) - (bits to represent a word)
⇒ Each block will have its own tag bits.
So, total tag bits = no. of blocks × tag bits.
Question 42
Consider a direct mapped cache of size 32 KB with block size 32 bytes. The CPU generates 32 bit addresses. The number of bits needed for cache indexing and the number of tag bits are respectively.
A
10, 17
B
10, 22
C
15, 17
D
5, 17
       Computer-Organization       Cache       Gate-2005
Question 42 Explanation: 
No. of blocks = cache size/block size = 32KB/32B = 2^10
So indexing requires 10 bits.
No. of offset bit required to access 32 byte block = 5
So, No. of TAG bits = 32 - 10 - 5 = 17
Question 43
A dynamic RAM has a memory cycle time of 64 nsec. It has to be refreshed 100 times per msec and each refresh takes 100 nsec. What percentage of the memory cycle time is used for refreshing?
A
10
B
6.4
C
1
D
.64
       Computer-Organization       Cache       Gate 2005-IT
Question 43 Explanation: 
We know that 1 ms = 10^6 ns.
In every 10^6 ns the memory is refreshed 100 times.
Each refresh takes 100 ns.
Memory cycle time = 64 ns
Refresh time per 1 ms, i.e., per 10^6 ns = 100 * 100 = 10^4 ns
Fraction of time spent refreshing = 10^4 / 10^6 = 0.01
Refresh time per 64 ns memory cycle = 0.01 * 64 = 0.64 ns
Percentage of the memory cycle time used for refreshing = (0.64/64) * 100 = 1%
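The same percentage in C (a trivial sketch, mirroring the arithmetic above):

#include <stdio.h>

int main(void) {
    double period_ns = 1e6;                       /* 1 ms expressed in ns */
    double refresh_ns = 100 * 100.0;              /* 100 refreshes per ms, 100 ns each */
    printf("refresh overhead = %.2f%%\n", refresh_ns / period_ns * 100.0);   /* 1.00% */
    return 0;
}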
Question 44
Consider a small two-way set-associative cache memory, consisting of four blocks. For choosing the block to be replaced, use the least recently used (LRU) scheme. The number of cache misses for the following sequence of block addresses is 8, 12, 0, 12, 8
A
2
B
3
C
4
D
5
       Computer-Organization       Cache       Gate-2004
Question 44 Explanation: 
Cache consists of 4 blocks and is 2-way set associative. So cache have 2 sets each of size 2 blocks.
The given sequence is 8, 12, 0, 12, 8.

So in total 4 misses is there.
Question 45
Consider a system with 2 level caches. Access times of Level 1 cache, Level 2 cache and main memory are 1 ns, 10ns, and 500 ns, respectively. The hit rates of Level 1 and Level 2 caches are 0.8 and 0.9, respectively. What is the average access time of the system ignoring the search time within the cache?
A
13.0 ns
B
12.8 ns
C
12.6 ns
D
12.4 ns
       Computer-Organization       Cache       Gate 2004-IT
Question 45 Explanation: 
Average access time = [H1 * T1] + [(1 - H1) * H2 * T2] + [(1 - H1) * (1 - H2) * Hm * Tm]
H1 = 0.8, (1 - H1) = 0.2
H2 = 0.9, (1 - H2) = 0.1
T1 = Access time for level 1 cache = 1ns
T2 = Access time for level 2 cache = 10ns
Hm = Hit rate of main memory = 1
Tm = Access time for main memory = 500ns
Average access time = [(0.8 * 1) + (0.2 * 0.9 * 10) + (0.2)(0.1) * 1 * 500]
= 0.8 + 1.8 + 10
= 12.6ns
Question 46

A
Theory Explanation is given below.
       Computer-Organization       Cache       Gate-2002
Question 47
The main memory of a computer has 2cm blocks while the cache has 2c blocks. If the cache uses the set associative mapping scheme with 2 blocks per set, then block k of the main memory maps to the set
A
(k mod m) of the cache
B
(k mod c) of the cache
C
(k mod 2c) of the cache
D
(k mod 2cm) of the cache
       Computer-Organization       Cache       Gate-1999
Question 47 Explanation: 
No. of sets in cache = No. of blocks in cache / No. of blocks per set
= 2c/2
= c
∴ Cache set no. to which block k of main memory maps to
= (Main memory block no.) mod (Total set in cache)
= k mod c
Question 48

A computer system has a 4K word cache organized in block-set-associative manner with 4 blocks per set, 64 words per block. The number of bits in the SET and WORD fields of the main memory address format is:

A
15, 40
B
6, 4
C
7, 2
D
4, 6
       Computer-Organization       Cache       Gate-1995
Question 48 Explanation: 
No. of sets = 4K / (64×4) = 2^12 / 2^8 = 2^4
So we need 4 set bits.
And,
64 words per block means the WORD field is 6 bits.
So, answer is option (D).
Question 49
More than one word is put in one cache block to
A
exploit the temporal locality of reference in a program
B
exploit the spatial locality of reference in a program
C
reduce the miss penalty
D
none of the above
       Computer-Organization       Cache       Gate-2001
Question 49 Explanation: 
Spatial locality refers to the use of data elements within relatively close storage locations. To exploit spatial locality, more than one word is put into a cache block.
Temporal locality refers to the reuse of the same data element within a relatively short interval of time.
Question 50
 
A
Theory Explanation is given below.
       Computer-Organization       Cache       Gate-2001
Question 51
A
True
B
False
       Computer-Organization       Cache       Gate-1990
Question 51 Explanation: 
Main memory transfers data to the cache in blocks, while the cache supplies the CPU with words; transferring whole blocks enables an interleaved main memory unit to operate at its maximum speed.