## Computer-Organization

 Question 1

The chip select logic for a certain DRAM chip in a memory system design is shown below. Assume that the memory system has 16 address lines denoted by A15 to A0. What is the range of addresses (in hexadecimal) of the memory system that can get enabled by the chip select (CS) signal?

 A C800 to C8FF B C800 to CFFF C DA00 to DFFF D CA00 to CAFF
Computer-Organization       GATE 2019
Question 1 Explanation:
Total address space from A0 - A15 = 2^16
The chip select logic in the given figure decodes the upper address lines A11 - A15, giving 2^5 combinations.
So, total addressable locations per chip = 2^16/2^5 = 2^11
2^11 = 2048, i.e. locations 0 to 2047
∴ The selected range is C800 to CFFF, since CFFF - C800 = 2047
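The arithmetic above can be double-checked with a short Python snippet (a quick sanity check under the assumption that 5 address lines are fixed by the chip select):

```python
# Q1 check: with 5 address lines decoded by the chip select,
# each chip covers 2^16 / 2^5 = 2^11 = 2048 locations.
total_space = 2 ** 16              # 16 address lines A0..A15
chip_select_combos = 2 ** 5        # assumed: 5 lines fixed by chip select
locations = total_space // chip_select_combos
range_size = 0xCFFF - 0xC800 + 1   # size of the answer range, inclusive
```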
 Question 2

A certain processor uses a fully associative cache of size 16 kB. The cache block size is 16 bytes. Assume that the main memory is byte addressable and uses a 32-bit address. How many bits are required for the Tag and the Index fields respectively in the addresses generated by the processor?

 A 28 bits and 0 bits B 24 bits and 4 bits C 24 bits and 0 bits D 28 bits and 4 bits
Computer-Organization       GATE 2019
Question 2 Explanation:
Since it is a fully associative cache, any block can be placed in any cache line, so there is no index field.
Block size = 16 bytes, so offset bits = log2(16) = 4.
Tag bits = Total bits - Offset bits = 32 - 4 = 28
So, tag bits = 28, Index bits = 0
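The tag/index split can be confirmed numerically with a quick check:

```python
import math

# Q2 check: fully associative cache has no index field.
address_bits = 32
block_size = 16                            # bytes per cache block
offset_bits = int(math.log2(block_size))   # byte offset within a block
index_bits = 0                             # fully associative: no set index
tag_bits = address_bits - index_bits - offset_bits
```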
 Question 3

A certain processor deploys a single-level cache. The cache block size is 8 words and the word size is 4 bytes. The memory system uses a 60-MHz clock. To service a cache miss, the memory controller first takes 1 cycle to accept the starting address of the block, it then takes 3 cycles to fetch all the eight words of the block, and finally transmits the words of the requested block at the rate of 1 word per cycle. The maximum bandwidth for the memory system when the program running on the processor issues a series of read operations is ______ × 10^6 bytes/sec.

 A 160 B 145 C 172 D 124
Computer-Organization       GATE 2019
Question 3 Explanation:

Cache block = 8 words
Word size = 4 bytes
Cache block size = 32 bytes
Clock = 60 MHz
⇒ T = 1/clock = 1/(60×10^6) seconds
Cycles to service a cache miss
= 1 cycle (address) + 3 cycles (fetch all 8 words) + 8 cycles (transfer at 1 word/cycle)
= 12 cycles
Time per miss = 12/(60×10^6) seconds
Total bandwidth = total data/total time = 32 bytes/(12/(60×10^6)) = 160 × 10^6 bytes/second
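The bandwidth calculation written out as a quick numeric check:

```python
# Q3 check: 12 cycles at 60 MHz deliver one 32-byte block.
clock_hz = 60e6
cycles_per_miss = 1 + 3 + 8        # address + block fetch + 8 word transfers
block_bytes = 8 * 4                # 8 words of 4 bytes each
bandwidth = block_bytes * clock_hz / cycles_per_miss   # bytes per second
```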
 Question 4
Consider the following statements.
I. Daisy chaining is used to assign priorities in attending interrupts.
II. When a device raises a vectored interrupt, the CPU does polling to identify the source of the interrupt.
III. In polling, the CPU periodically checks the status bits to know if any device needs its attention.
IV. During DMA, both the CPU and DMA controller can be bus masters at the same time.
Which of the above statements is/are TRUE?
 A I and IV only B I and II only C III only D I and III only
Computer-Organization       Interruption       GATE 2020
Question 4 Explanation:
Statement-I is true as daisy chaining is used to assign priorities in attending interrupts.
Statement-II is false as vectored interrupt doesn’t involve polling but non-vectored interrupt involves polling.
Statement-III is true as polling means that CPU periodically checks the status bits to know if any device needs attention.
Statement-IV is false as during DMA only one of the CPU or DMA can be bus master at a time.
 Question 5
A direct mapped cache memory of 1 MB has a block size of 256 bytes. The cache has an access time of 3 ns and a hit rate of 94%. During a cache miss, it takes 20 ns to bring the first word of a block from the main memory, while each subsequent word takes 5 ns. The word size is 64 bits. The average memory access time in ns (round off to 1 decimal place) is _____.
 A 13.5
Computer-Organization       Cache       GATE 2020
Question 5 Explanation:
Cache access time = 3 ns
Hit ratio of cache=0.94
Word size is 64 bits = 8 bytes.
Cache line size = 256 bytes = 32 words
Main memory access time=20ns(time for first word)+155ns(time for remaining 31 words, 31*5=155ns) = 175 ns
Average access time = h1*t1+(1-h1)(t1+t2) = t1+(1-h1)t2
⇒ 3+(0.06)(175) = 13.5 ns
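The same AMAT computation as a quick Python check:

```python
# Q5 check: hierarchical access, AMAT = t1 + (1 - h1) * t2.
t1, h1 = 3.0, 0.94                 # cache access time (ns) and hit ratio
words_per_line = 256 // 8          # 256-byte line, 64-bit (8-byte) words
t2 = 20 + (words_per_line - 1) * 5 # miss penalty: first word + 31 more words
amat = t1 + (1 - h1) * t2
```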
 Question 6
Consider the following data path diagram.

Consider an instruction: R0 ← R1 + R2. The following steps are used to execute it over the given data path. Assume that PC is incremented appropriately. The subscripts r and w indicate read and write operations, respectively.
Which one of the following is the correct order of execution of the above steps?
 A 3, 5, 1, 2, 4 B 3, 5, 2, 1, 4 C 1, 2, 4, 3, 5 D 2, 1, 4, 5, 3
Computer-Organization       Registers       GATE 2020
Question 6 Explanation:
To execute the given instruction R0 <- R1 + R2,
First the PC value has to be moved into MAR (step-3 from the given sequence),
then the instruction has to be fetched(step-5 from the given sequence).
then Temp1 is loaded with the value of R1 (step-2 from the given sequence),
then the addition operation is performed by accessing the R2 value directly and adding it to Temp1 value and storing the result in Temp2 (step-1 from the given sequence).
Finally the result from Temp2 is stored in R0 (step-4 from the given sequence).
Hence the correct sequence is (3, 5, 2, 1, 4).
 Question 7
Consider a non-pipelined processor operating at 2.5 GHz. It takes 5 clock cycles to complete an instruction. You are going to make a 5-stage pipeline out of this processor. Overheads associated with pipelining force you to operate the pipelined processor at 2 GHz. In a given program, assume that 30% are memory instructions, 60% are ALU instructions and the rest are branch instructions. 5% of the memory instructions cause stalls of 50 clock cycles each due to cache misses and 50% of the branch instructions cause stalls of 2 cycles each. Assume that there are no stalls associated with the execution of ALU instructions. For this program, the speedup achieved by the pipelined processor over the non-pipelined processor (round off to 2 decimal places) is _____.
 A 2.16
Computer-Organization       Pipelining       GATE 2020
Question 7 Explanation:
In the non-pipelined architecture the clock cycle time = 1/(2.5)G = 0.4 ns
It is given that each instruction takes 5 clock cycles to execute in the  non-pipelined architecture, so time taken to execute each instruction = 5 * 0.4 = 2ns
In the pipelined architecture the clock cycle time = 1/2G = 0.5 ns
In the pipelined architecture there are stalls due to memory instructions and branch instructions.
In the pipeline, the updated clocks per instruction CPI = (1+stall frequency due to memory operations * stalls of memory instructions + stall frequency due to branch operations  * stalls due to branch instructions)
Out of the total instructions , 30% are memory instructions. Out of those 30%, only 5% cause stalls of 50 cycles each.
Stalls per instruction due to memory operations = 0.3*0.05*50 = 0.75
Out of the total instructions 10% are branch instructions. Out of those 10% of instructions 50% of them cause stalls of 2 cycles each.
Stalls per instruction due to branch operations = 0.1*0.5*2 = 0.1
The updated CPI in pipeline = 1 + 0.75 + 0.1 = 1.85
The execution time in the pipeline = 1.85 * 0.5 = 0.925 ns
The speed up = Time in non-pipelined architecture / Time in pipelined architecture = 2 / 0.925 = 2.16
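The speedup computation, written out as a quick check:

```python
# Q7 check: per-instruction time, non-pipelined vs pipelined with stalls.
t_nonpipe = 5 / 2.5                          # 5 cycles at 2.5 GHz -> 2 ns
stall_cpi = 0.3 * 0.05 * 50 + 0.1 * 0.5 * 2  # memory + branch stall cycles
t_pipe = (1 + stall_cpi) / 2.0               # CPI cycles at 2 GHz -> ns
speedup = t_nonpipe / t_pipe
```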
 Question 8
A computer system with a word length of 32 bits has a 16 MB byte-addressable main memory and a 64 KB, 4-way set associative cache memory with a block size of 256 bytes. Consider the following four physical addresses represented in hexadecimal notation.
A1 = 0x42C8A4, A2 = 0x546888, A3 = 0x6A289C, A4 = 0x5E4880
Which one of the following is TRUE?
 A A1 and A4 are mapped to different cache sets. B A1 and A3 are mapped to the same cache set. C A3 and A4 are mapped to the same cache set. D A2 and A3 are mapped to the same cache set.
Computer-Organization       Cache       GATE 2020
Question 8 Explanation:
Main memory is 16MB in size.
Each of the physical addresses mentioned contain 6 hexa-decimal digits, so the physical address is 24 bits long.
Block size is 256 bytes, block offset = 8 bits as it is a byte addressable memory.
Cache size = 64KB
Number of blocks in the cache = 64KB/256B = 256
It is a 4-way set associative cache, so no. of sets in the cache = 256/4 = 64 = 2^6
In the physical address we need 6 bits for the SET number.
TAG bits = 24 - 6 - 8 = 10
So the 24-bit physical address is divided as (10 TAG bits + 6 SET number bits + 8 OFFSET bits)
The given physical addresses are in hexadecimal. So when we convert them into binary each hexadecimal digit will take 4 binary bits.
From the binary form of the addresses the least significant 8 bits are for the OFFSET, the next 6 bits are for the SET number. And the next 10 bits are for the TAG bits.
The SET number bits (bits 13 to 8) of each address are shown after the arrow.
A1 = 0x42C8A4 = (0100 0010 1100 1000 1010 0100) ==> set bits = 00 1000
A2 = 0x546888 = (0101 0100 0110 1000 1000 1000) ==> set bits = 10 1000
A3 = 0x6A289C = (0110 1010 0010 1000 1001 1100) ==> set bits = 10 1000
A4 = 0x5E4880 = (0101 1110 0100 1000 1000 0000) ==> set bits = 00 1000
From the given options option-4 is TRUE as A2, A3 are mapped to the same cache SET.
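The set indices can also be extracted programmatically; a small helper confirms the mapping:

```python
# Q8 check: set index = bits 13..8 of the 24-bit physical address
# (8 offset bits, 6 set bits, 10 tag bits).
def cache_set(addr, offset_bits=8, set_bits=6):
    return (addr >> offset_bits) & ((1 << set_bits) - 1)

a1, a2, a3, a4 = 0x42C8A4, 0x546888, 0x6A289C, 0x5E4880
```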
 Question 9
A processor has 64 registers and uses 16-bit instruction format. It has two types of instructions: I-type and R-type. Each I-type instruction contains an opcode, a register name, and a 4-bit immediate value. Each R-type instruction contains an opcode and two register names. If there are 8 distinct I-type opcodes, then the maximum number of distinct R-type opcodes is _____.
 A 14
Computer-Organization       Registers       GATE 2020
Question 9 Explanation:
Instruction is of size 16-bits.
All possible binary combinations = 2^16
There are 64 registers, so no. of bits needed to identify a register = 6
I-type instruction has (Opcode+Register+4-bit immediate value).
There are 8 distinct I-type instructions.
All the binary combinations possible with the I-type instructions are = 8*2^6*2^4 = 2^13
R-type instructions have 2 register operands.
Let x be the number of R-type instructions.
All the possible binary combinations of R-type instructions = x*2^6*2^6 = x*2^12
The sum of I-type and R-type binary combinations should be equal to 2^16.
x*2^12 + 2^13 = 2^16
2^12 (x+2) = 2^16
x+2 = 2^4
x = 16 -2 = 14
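The encoding-space argument in code form, as a quick check:

```python
# Q9 check: 16-bit encodings shared between I-type and R-type formats.
total_encodings = 2 ** 16
i_type = 8 * 2 ** 6 * 2 ** 4       # 8 opcodes, 6-bit register, 4-bit immediate
r_type_operand_space = 2 ** 6 * 2 ** 6  # two 6-bit register operands
r_type_opcodes = (total_encodings - i_type) // r_type_operand_space
```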
 Question 10

Consider the following processor design characteristics.

I. Register-to-register arithmetic operations only
II. Fixed-length instruction format
III. Hardwired control unit

Which of the characteristics above are used in the design of a RISC processor?

 A I and II only B II and III only C I and III only D I, II and III
Computer-Organization       RISC-and-CISC       Gate 2018
Question 10 Explanation:
RISC processor has all the three given characteristics. So option D is the correct answer.
 Question 11

A 32-bit wide main memory unit with a capacity of 1 GB is built using 256M×4-bit DRAM chips. The number of rows of memory cells in the DRAM chip is 2^14. The time taken to perform one refresh operation is 50 nanoseconds. The refresh period is 2 milliseconds. The percentage (rounded to the closest integer) of the time available for performing the memory read/write operations in the main memory unit is _________.

 A 59% B 60% C 61% D 62%
Computer-Organization       Memory-Interfacing       Gate 2018
Question 11 Explanation:
Time taken to refresh one row = 50 ns
There are 2^14 rows, so time taken to refresh all the rows = 2^14 × 50 ns ≈ 0.82 milliseconds
It is given that total refresh period is 2ms. The refresh period contains the time to refresh all the rows and also the time to perform read/write operation.
So % time spent in refresh = (Time taken to refresh all rows / refresh period)*100
= (0.82 ms / 2ms)*100
= 41%
So the % of time for read/write operation = 100 - 41 = 59%
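The refresh-overhead percentage, checked numerically:

```python
# Q11 check: fraction of each 2 ms refresh period spent refreshing rows.
refresh_all_rows = 2 ** 14 * 50e-9   # 2^14 rows, 50 ns each (seconds)
refresh_period = 2e-3                # 2 ms
available_pct = round((1 - refresh_all_rows / refresh_period) * 100)
```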
 Question 12

The following are some events that occur after a device controller issues an interrupt while process L is under execution.

(P) The processor pushes the process status of L onto the control stack.
(Q) The processor finishes the execution of the current instruction.
(R) The processor executes the interrupt service routine.
(S) The processor pops the process status of L from the control stack.
(T) The processor loads the new PC value based on the interrupt.

 A QPTRS B PTRSQ C TRPQS D QTPRS
Computer-Organization       Interrupt-Service-Routing       Gate 2018
Question 12 Explanation:
The sequence of execution in the case of an interrupt while executing a process L is:
(Q) The processor finishes the execution of the current instruction
(P) The processor pushes the process status of L onto the control stack
(T) The processor loads the new PC value based on the interrupt
(R) The processor executes the interrupt service routine
(S) The processor pops the process status of L from the control stack
This is the sequence because when a process is in the middle of execution if an interrupt comes then that process execution is completed then the interrupt is serviced.
 Question 13

The size of the physical address space of a processor is 2^P bytes. The word length is 2^W bytes. The capacity of cache memory is 2^N bytes. The size of each cache block is 2^M words. For a K-way set-associative cache memory, the length (in number of bits) of the tag field is

 A P - N - log2K B P - N + log2K C P - N - M - W - log2K D P - N - M - W + log2K
Computer-Organization       Cache       Gate 2018
Question 13 Explanation:
Physical memory is of size 2^P bytes.
Each word is of size 2^W bytes.
Number of words in physical memory = 2^(P-W)
So the physical (word) address is P-W bits long.
Cache size is 2^N bytes.
Number of words in the cache = 2^(N-W)
Block size is 2^M words
No. of blocks in the cache = 2^(N-W-M)
Since it is a K-way set-associative cache, each set in the cache has K blocks.
No. of sets = 2^(N-W-M) / K
SET bits = N-W-M-log2K
Block offset = M
TAG bits = (P-W) - (N-W-M-log2K) - M = P-W-N+W+M+log2K-M = P - N + log2K
 Question 14

A processor has 16 integer registers (R0, R1, …, R15) and 64 floating point registers (F0, F1, …, F63). It uses a 2-byte instruction format. There are four categories of instructions: Type-1, Type-2, Type-3 and Type-4. Type-1 category consists of four instructions, each with 3 integer register operands (3Rs). Type-2 category consists of eight instructions, each with 2 floating point register operands (2Fs). Type-3 category consists of fourteen instructions, each with one integer register operand and one floating point register operand (1R+1F). Type-4 category consists of N instructions, each with a floating point register operand (1F).

The maximum value of N is ________.

 A 32 B 33 C 34 D 35
Computer-Organization       Machine-Instructions       Gate 2018
Question 14 Explanation:
Instruction size = 2 bytes.
So, total number of instruction encodings = 2^16
There are 16 possible integer registers, so no. of bits required for an integer operand = 4
There are 64 possible floating point registers, so no. of bits required for a floating point operand = 6
Type-1 instructions:
There are 4 type-1 instructions and each takes 3 integer operands.
No. of encodings consumed by type-1 = 4 × 2^4 × 2^4 × 2^4 = 2^14.
Type-2 instructions:
There are 8 type-2 instructions and each takes 2 floating point operands.
No. of encodings consumed by Type-2 instructions = 8 × 2^6 × 2^6 = 2^15.
Type-3 instructions:
There are 14 type-3 instructions and each takes one integer operand and one floating point operand.
No. of encodings consumed by Type-3 instructions = 14 × 2^4 × 2^6 = 14336.
So, no. of encodings left for Type-4 = 2^16 − (2^14 + 2^15 + 14336) = 2048.
Since type-4 instructions take one floating point register, no. of different instructions of Type-4 = 2048 / 64 = 32.
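The counting argument as a quick check:

```python
# Q14 check: encodings left over for Type-4 (1F) instructions.
total = 2 ** 16
type1 = 4 * 2 ** 4 * 2 ** 4 * 2 ** 4   # 3 integer register operands
type2 = 8 * 2 ** 6 * 2 ** 6            # 2 floating point operands
type3 = 14 * 2 ** 4 * 2 ** 6           # 1 integer + 1 floating point operand
n_max = (total - type1 - type2 - type3) // 2 ** 6
```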
 Question 15

The instruction pipeline of a RISC processor has the following stages: Instruction Fetch (IF), Instruction Decode (ID), Operand Fetch (OF), Perform Operation (PO) and Writeback (WB). The IF, ID, OF and WB stages take 1 clock cycle each for every instruction. Consider a sequence of 100 instructions. In the PO stage, 40 instructions take 3 clock cycles each, 35 instructions take 2 clock cycles each, and the remaining 25 instructions take 1 clock cycle each. Assume that there are no data hazards and no control hazards.

The number of clock cycles required for completion of execution of the sequence of instructions is ___________.

 A 219 B 220 C 221 D 222
Computer-Organization       Pipeling       Gate 2018
Question 15 Explanation:
Total Instruction = 100
Number of stages = 5
We know in a normal pipeline with k-stages time taken for n-instructions = k + n - 1 clock cycles.
So, in normal case total cycles = 100 + 5 - 1 = 104 cycles
But in this question it is given that PO stage of 40 instructions takes 3 cycles, 35 instructions takes 2 cycles and 25 instructions takes 1 cycle.
It is also given that all other stages take one clock cycle in all the 100 instructions.
PO stage of 40 instructions takes 3 cycles so these instructions will cause 2 stall cycle each, PO stage of 35 instructions takes 2 cycles so these instructions will cause 1 stall cycle each, But the 25 instruction whose PO stage takes 1 cycle, there are no stall cycles for these.
So, extra stall cycles = 40*2 + 35*1 = 80+35 = 115 cycles. So, total clock cycles = 104 + 115 = 219
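The cycle count in code form:

```python
# Q15 check: ideal pipeline cycles plus extra PO-stage stall cycles.
k, n = 5, 100
ideal = k + n - 1                     # no-stall pipeline cycles
extra = 40 * (3 - 1) + 35 * (2 - 1) + 25 * (1 - 1)  # stalls beyond 1 PO cycle
total_cycles = ideal + extra
```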
 Question 16

Consider the C struct defined below:

    struct data {
        int marks[100];
        int cnumber;
    };
    struct data student;


The base address of student is available in register R1. The field student.cnumber can be accessed efficiently using

 A Post-increment addressing mode, (R1)+ B Pre-decrement addressing mode, -(R1) C Register direct addressing mode, R1 D Index addressing mode, X(R1), where X is an offset represented in 2’s complement 16-bit representation
Question 16 Explanation:
struct data {
    int marks[100];
    int cnumber;
};
struct data student;
Base Address of student is available in R1.
It is clearly mentioned X is the offset address to be summed with Base Address of R1.

Hence Index Addressing mode X(R1), where X is an offset represented in 2’s complement 16-bit representation.
⇾ Relative, Base Indexed & all subtypes of Indirect addressing modes are used with Arrays.
 Question 17

Consider a two-level cache hierarchy with L1 and L2 caches. An application incurs 1.4 memory accesses per instruction on average. For this application, the miss rate of L1 cache is 0.1; the L2 cache experiences, on average, 7 misses per 1000 instructions. The miss rate of L2 expressed correct to two decimal places is __________.

 A 0.05 B 0.06 C 0.07 D 0.08
Computer-Organization       Cache       Gate 2017 set-01
Question 17 Explanation:
It is given that average references per instruction = 1.4
For 1000 instructions total number of memory references = 1000 * 1.4 = 1400
These 1400 memory references are first accessed in the L1.
Since the miss rate of L1 is 0.1, for 1400 L1 references the number of misses = 0.1 * 1400 = 140
We know when there is a miss in L1 we next access the L2 cache.
So number of memory references to L2 = 140
It is given that there are 7 misses in L2 cache. Out of 140 memory references to L2 cache there are 7 misses.
Hence the miss rate in L2 cache = 7/140 = 0.05
 Question 18

Consider a RISC machine where each instruction is exactly 4 bytes long. Conditional and unconditional branch instructions use PC-relative addressing mode with Offset specified in bytes to the target location of the branch instruction. Further the Offset is always with respect to the address of the next instruction in the program sequence. Consider the following instruction sequence

If the target of the branch instruction is i, then the decimal value of the Offset is ___________.

 A -16 B -17 C -18 D -19
Computer-Organization       Instruction-Cycle       Gate 2017 set-01
Question 18 Explanation:
Size of each instruction = 4 Bytes
⇾ Assuming the first instruction starts at address zero

⇒ Offset = 0 – 16 = (-16) ⇾ Final answer
 Question 19

Instruction execution in a processor is divided into 5 stages, Instruction Fetch (IF), Instruction Decode (ID), Operand Fetch (OF), Execute (EX), and Write Back (WB). These stages take 5, 4, 20, 10 and 3 nanoseconds (ns) respectively. A pipelined implementation of the processor requires buffering between each pair of consecutive stages with a delay of 2 ns. Two pipelined implementations of the processor are contemplated:

(i) a naive pipeline implementation (NP) with 5 stages and
(ii) an efficient pipeline (EP) where the OF stage is divided into stages OF1 and OF2 with execution times of 12 ns and 8 ns respectively.

The speedup (correct to two decimal places) achieved by EP over NP in executing 20 independent instructions with no hazards is _________.

 A 1.51 B 1.52 C 1.53 D 1.54
Computer-Organization       Pipelining       Gate 2017 set-01
Question 19 Explanation:
Naive Pipeline implementation:
The stage delays are 5, 4, 20, 10 and 3. And buffer delay = 2ns
So clock cycle time = max of stage delays + buffer delay
= max(5, 4, 20, 10,3)+2
= 20+2
= 22ns
Execution time for n-instructions in a pipeline with k-stages = (k+n-1) clock cycles
= (k+n-1)* clock cycle time
In this case execution time for 20 instructions in the pipeline with 5-stages
= (5+20-1)*22ns
= 24*22
= 528ns
Efficient Pipeline implementation:
OF phase is split into two stages OF1, OF2 with execution times of 12ns, 8ns
New stage delays in this case = 5, 4, 12, 8, 10, 3
Buffer delay is the same 2ns.
So clock cycle time = max of stage delays + buffer delay
= max(5, 4, 12, 8, 10,3) + 2
= 12+2
= 14ns
Execution time = (k+n-1) clock cycles
= (k+n-1)* clock cycle time
In this case no. of pipeline stages, k = 6
No. of instructions = 20
Execution time = (6+20-1)*14 = 25*14 = 350ns
Speed up of Efficient pipeline over native pipeline
= Naive pipeline execution time / efficient pipeline execution time
= 528 / 350
≌ 1.51
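Both pipeline variants can be compared with a small helper function (a quick check of the numbers above):

```python
# Q19 check: execution time = (k + n - 1) * (max stage delay + buffer delay).
def exec_time(stage_delays, buffer_delay, n):
    cycle = max(stage_delays) + buffer_delay
    return (len(stage_delays) + n - 1) * cycle

naive = exec_time([5, 4, 20, 10, 3], 2, 20)          # NP, in ns
efficient = exec_time([5, 4, 12, 8, 10, 3], 2, 20)   # EP, OF split in two
speedup = naive / efficient
```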
 Question 20

Consider a 2-way set associative cache with 256 blocks that uses LRU replacement. Initially the cache is empty. Conflict misses are those misses which occur due to contention of multiple blocks for the same cache set. Compulsory misses occur due to first time access to the block. The following sequence of accesses to memory blocks

(0, 128, 256, 128, 0, 128, 256, 128, 1, 129, 257, 129, 1, 129, 257, 129)

is repeated 10 times. The number of conflict misses experienced by the cache is __________.

 A 76 B 79 C 80 D 81
Computer-Organization       Cache       Gate 2017 set-01
Question 20 Explanation:
When a block is accessed for the first time it cannot be present in the cache, so that miss is a compulsory miss.
If a block is accessed again after k unique blocks were accessed in between, and the cache capacity is less than k, the resulting miss is a capacity miss: the cache simply cannot hold all k unique blocks, so the original block is no longer present when it is accessed again.
If the cache capacity is at least k and the second access still misses, it is a conflict miss: the cache could have held all k unique blocks, but the original block was evicted due to contention within its set.
Each memory block x maps to cache set (x mod 128).

Number of blocks in the cache = 256
2-way set associative cache
So, no. of cache sets = 256/2 = 128
Blue → Compulsory miss
Red → Conflict miss
At the end of the first round we have 6 compulsory misses and 4 conflict misses.
Similarly, if we continue from Round-2 to the last round, in every round we will get 8 conflict misses.
Total conflict misses = 4 + 9×8 = 4 + 72 = 76
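The access pattern can be simulated directly; the sketch below models the two-way LRU sets and classifies each miss (all non-first-time misses here are conflict misses, since a fully associative cache of 256 blocks would easily hold all six distinct blocks):

```python
# Q20 check: simulate a 2-way set-associative LRU cache with 128 sets.
def count_misses(trace, num_sets=128, ways=2):
    sets = {}                        # set index -> resident blocks, MRU last
    seen = set()
    compulsory = conflict = 0
    for block in trace:
        s = sets.setdefault(block % num_sets, [])
        if block in s:
            s.remove(block)          # hit: re-append below as MRU
        else:
            if block in seen:
                conflict += 1        # block was evicted earlier
            else:
                compulsory += 1      # first-ever access
                seen.add(block)
            if len(s) == ways:
                s.pop(0)             # evict the least recently used block
        s.append(block)
    return compulsory, conflict

one_round = [0, 128, 256, 128, 0, 128, 256, 128,
             1, 129, 257, 129, 1, 129, 257, 129]
compulsory, conflict = count_misses(one_round * 10)
```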
 Question 21

A cache memory unit with capacity of N words and block size of B words is to be designed. If it is designed as a direct mapped cache, the length of the TAG field is 10 bits. If the cache unit is now designed as a 16-way set-associative cache, the length of the TAG field is ___________ bits.

 A 14 B 15 C 16 D 17
Computer-Organization       Cache       Gate 2017 set-01
Question 21 Explanation:
When it is a direct mapped cache, the physical address can be divided as
(Tag bits + bits for block number + bits for block offset)
With block size being B words no. of bits for block offset = log (B)
Because the cache capacity is N words and each block is B words, number of blocks in cache = N / B
No. of bits for block number = log (N/B)
So, the physical address in direct mapping case
= 10 + log (N/B) + log (B)
= 10 + log (N) – log B + log B
= 10 + log (N)
If the same cache unit is designed as 16-way set associative, then the physical address becomes
(Tag bits + bits for set no. + Bits for block offset)
There are N/B blocks in the cache and in 16-way set associative cache each set contains 16 blocks.
So no. of sets = (N/B) / 16 = N / (16*B)
Then bits for set no = log (N/16B)
Bits for block offset remain the same in this case also. That is log (B).
So physical address in the set associative case
= tag bits + log (N/16*B) + log B
= tag bits + log (N) – log (16*B) + log B
= tag bits + log (N) – log 16 – log B + log B
= tag bits + log N – 4
The physical address is the same in both the cases.
So, 10 + log N = tag bits + log N – 4
Tag bits = 14
So, no. of tag bits in the case 16-way set associative mapping for the same cache = 14.
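The symbolic result can be spot-checked with concrete values of N and B (the values below are assumed for the check; the resulting tag length is independent of the choice):

```python
import math

# Q21 check with assumed sizes: N = 2^12 words, B = 2^4 words per block.
N, B = 2 ** 12, 2 ** 4
blocks = N // B
addr_bits = 10 + int(math.log2(blocks)) + int(math.log2(B))  # direct mapped
sets_16way = blocks // 16
tag_16way = addr_bits - int(math.log2(sets_16way)) - int(math.log2(B))
```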
 Question 22

In a two-level cache system, the access times of L1 and L2 caches are 1 and 8 clock cycles, respectively. The miss penalty from the L2 cache to main memory is 18 clock cycles. The miss rate of L1 cache is twice that of L2. The average memory access time (AMAT) of this cache system is 2 cycles. The miss rates of L1 and L2 respectively are:

 A 0.111 and 0.056 B 0.056 and 0.111 C 0.0892 and 0.1784 D 0.1784 and 0.0892
Computer-Organization       Cache       GATE 2017(set-02)
Question 22 Explanation:
The Average memory access time is,
AMAT = (L1 hit rate)*(L1 access time) + (L1 miss rate)*(L1 access time + L1 miss penalty)
= L1 hit rate * L1 access time + L1 miss rate * L1 access time + L1 miss rate * L1 miss penalty
We can write,
L1 miss rate = 1 - L1 hit rate
AMAT = L1 hit rate * L1 access time + (1 - L1 hit rate) * L1 access time + L1 miss rate * L1 miss penalty
By taking L1 access time common,
= (L1 hit rate + 1 - L1 hit rate)* L1 access time + L1 miss rate * L1 miss penalty
AMAT = L1 access time + L1 miss rate * L1 miss penalty
We access L2 only when there is a miss in L1.
So, L1 miss penalty is nothing but the effective time taken to access L2.
L1_miss_penalty = Hit_rate_of_L2* Access time of L2 + MissRate of L2 *(Access time of L2+ miss penalty L2)
= Hit_rate_of_L2* Access time of L2 + MissRate of L2 *Access time of L2 + MissRate of L2 * miss penalty L2
By taking Access time of L2 common we get,
= Access time of L2 * (Hit_rate_of_L2 + MissRate of L2 ) + MissRate of L2 * miss penalty L2
We know, MissRate of L2 = 1 - Hit_rate_of_L2 → Hit_rate_of_L2 + MissRate of L2 = 1
So, the above formula becomes,
L1_miss_penalty = Access time of L2 + (MissRate of L2 * miss penalty L2)
It is given,
access time of L1 = 1,
access time of L2 = 8,
miss penalty of L2 = 18,
AMAT = 2.
Let, miss rate of L2 = x.
Since it is given that the L1 miss rate is twice the L2 miss rate, L1 miss rate = 2 * x.
Substituting the above values,
L1_miss_penalty = Access time of L2 + (MissRate of L2 * miss penalty L2)
L1_miss_penalty = 8 + (x*18)
AMAT = L1 access time + L1 miss rate * L1 miss penalty
2 = 1 + (2*x)*(8 + 18*x)
36*x^2 + 16*x - 1 = 0
By solving the above quadratic equation we get,
x = Miss rate of L2 = 0.056
Miss rate of L1 = 2*x = 0.111
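Solving the quadratic numerically confirms the miss rates:

```python
import math

# Q22 check: positive root of 36x^2 + 16x - 1 = 0.
a, b, c = 36.0, 16.0, -1.0
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # L2 miss rate
l1_miss = 2 * x
amat = 1 + l1_miss * (8 + x * 18)                  # should come back to 2
```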
 Question 23

Consider a machine with a byte addressable main memory of 2^32 bytes divided into blocks of size 32 bytes. Assume that a direct mapped cache having 512 cache lines is used with this machine. The size of the tag field in bits is _____________.

 A 18 B 19 C 20 D 21
Computer-Organization       Cache       GATE 2017(set-02)
Question 23 Explanation:
Given that it is a byte addressable memory and the main memory is of size 2^32 bytes.
So the physical address is 32 bits long.
Each block is of size 32 (= 2^5) bytes, so the block offset is 5 bits.
Also given that there are 512 (= 2^9) cache lines; since it is a direct mapped cache we need 9 bits for the LINE number.
In a direct mapped cache, the physical address can be divided as
(Tag bits + bits for block/LINE number + bits for block offset)
So, tag bits + 9 + 5 = 32
Tag bits = 32 - 14 = 18
 Question 24

The read access times and the hit ratios for different caches in a memory hierarchy are as given below.

The read access time of main memory is 90 nanoseconds. Assume that the caches use the referred-word-first read policy and the write back policy. Assume that all the caches are direct mapped caches. Assume that the dirty bit is always 0 for all the blocks in the caches. In execution of a program, 60% of memory reads are for instruction fetch and 40% are for memory operand fetch. The average read access time in nanoseconds (up to 2 decimal places) is ___________.

 A 4.72 B 4.73 C 4.74 D 4.75
Computer-Organization       Cache       GATE 2017(set-02)
Question 24 Explanation:
Since nothing is given whether it is hierarchical memory or simultaneous memory, by default we consider it as hierarchical memory.
Hierarchical memory (Default case):
For 2-level memory:
The formula for average memory access time = h1 t1 + (1-h1)(t1 + t2)
This can be simplified as
t1 + (1-h1)t2
For 3-level memory:
h1 t1 + (1-h1)(t1 + h2 t2 + (1-h2)(t2 + t3))
This can be simplified as
t1 + (1-h1)t2 + (1-h1)(1-h2)t3
Instruction fetch happens from I-cache whereas operand fetch happens from D-cache.
Using that we need to calculate the instruction fetch time (Using I-cache and L2-cache) and operand fetch time (Using D-cache and L2-cache) separately.
Then calculate 0.6 (instruction fetch time) + 0.4(operand fetch time) to find the average read access time.
The equation for instruction fetch time = t1 + (1-h1 ) t2 + (1-h1 )(1-h2 ) t3
= 2 + 0.2*8 + 0.2*0.1*90 = 5.4ns
Operand fetch time = t1 + (1-h1)t2 + (1-h1)(1-h2)t3 = 2 + 0.1*8 + 0.1*0.1*90 = 3.7ns
The average read access time = 0.6*5.4 + 0.4*3.7 = 4.72ns
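The computation as code, with the hit ratios the explanation reads off the (omitted) table, i.e. I-cache 0.8, D-cache 0.9 and L2 0.9:

```python
# Q24 check: 3-level hierarchical read time t1 + (1-h1)t2 + (1-h1)(1-h2)t3.
def read_time(t1, h1, t2, h2, t3):
    return t1 + (1 - h1) * t2 + (1 - h1) * (1 - h2) * t3

t_inst = read_time(2, 0.8, 8, 0.9, 90)   # instruction fetch: I-cache, L2, MM
t_data = read_time(2, 0.9, 8, 0.9, 90)   # operand fetch: D-cache, L2, MM
avg_read = 0.6 * t_inst + 0.4 * t_data
```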
 Question 25
A processor can support a maximum memory of 4GB, where the memory is word-addressable (a word consists of two bytes). The size of the address bus of the processor is at least _________ bits.
 A 32 B 34 C 31 D 33
Computer-Organization       Memory-Interfacing       2016 set-01
Question 25 Explanation:
Maximum memory = 4GB = 2^32 bytes
Size of a word = 2 bytes
Therefore, number of words = 2^32 / 2 = 2^31
So, we require 31 bits for the address bus of the processor.
 Question 26

The size of the data count register of a DMA controller is 16 bits. The processor needs to transfer a file of 29,154 kilobytes from disk to main memory. The memory is byte addressable. The minimum number of times the DMA controller needs to get the control of the system bus from the processor to transfer the file from the disk to main memory is ________.

 A 456 B 457 C 458 D 459
Computer-Organization       DMA       2016 set-01
Question 26 Explanation:
Since nothing is mentioned about the mode of DMA working whether it is cycle stealing mode or burst mode, we consider it as burst mode by default.
As the data count register of the DMA controller is 16 bits long, in burst mode the DMA transfers 2^16 bytes (= 64 KB) once it gets control.
To transfer 29,154 KB, no. of times DMA needs to take control
= (29,154 KB / 64KB)
= 29,154/64
= 455.53, means 456 times.
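The grant count as a quick check:

```python
import math

# Q26 check: each bus grant moves 2^16 bytes = 64 KB in burst mode.
file_kb = 29154
grant_kb = 2 ** 16 // 1024               # 64 KB per grant
grants = math.ceil(file_kb / grant_kb)   # a partial last transfer still needs a grant
```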
 Question 27

The stage delays in a 4-stage pipeline are 800, 500, 400 and 300 picoseconds. The first stage (with delay 800 picoseconds) is replaced with a functionally equivalent design involving two stages with respective delays 600 and 350 picoseconds. The throughput increase of the pipeline is ________ percent.

 A 33.33% B 33.34% C 33.35% D 33.36%
Computer-Organization       Pipelining       2016 set-01
Question 27 Explanation:
In a pipelined processor the throughput is 1/clock cycle time.
Cycle time = max of all stage delays.
In the first case max stage delay = 800.
So throughput = 1/800 initially.
After replacing this stage with two stages of delays 600, 350... the cycle time = maximum stage delay = 600.
So the new throughput = 1/600.
The new throughput > old throughput.
And the increase in throughput = 1/600 - 1/800.
We calculate the percentage increase in throughput w.r.t initial throughput, so the % increase in throughput
= (1/600 - 1/800) / (1/800) * 100
= ((800 / 600) - 1) * 100
= ((8/6) -1) * 100
= 33.33%
 Question 28

A processor has 40 distinct instructions and 24 general purpose registers. A 32-bit instruction word has an opcode, two register operands and an immediate operand. The number of bits available for the immediate operand field is __________.

 A 16 bits B 17 bits C 18 bits D 19 bits
Computer-Organization       Machine-Instructions       GATE 2016 set-2
Question 28 Explanation:
6 bits are needed for 40 distinct instructions (because 2^5 < 40 ≤ 2^6)
5 bits are needed for 24 general purpose registers (because 2^4 < 24 ≤ 2^5)
32-bit instruction word has an opcode (6 bit), two register operands (total 10 bits) and an immediate operand (x bits).
The number of bits available for the immediate operand field
⇒ x = 32 – (6 + 10) = 16 bits
 Question 29

Suppose the functions F and G can be computed in 5 and 3 nanoseconds by functional units UF and UG, respectively. Given two instances of UF and two instances of UG, it is required to implement the computation F(G(Xi)) for 1 ≤ i ≤ 10. Ignoring all other delays, the minimum time required to complete this computation is _________ nanoseconds.

 A 28 B 29 C 30 D 31
Computer-Organization       Datapath       GATE 2016 set-2
Question 29 Explanation:
With two instances of UG, the first pair G(X1), G(X2) is ready at t = 3 ns, and a new pair becomes ready every 3 ns thereafter. The two UF units then process the outputs in pairs, each pair taking 5 ns; since a new G pair is always ready before the UF units become free, the five F pairs finish at 8, 13, 18, 23 and 28 ns. So the minimum time is 3 + 5×5 = 28 ns.
 Question 30

Consider a processor with 64 registers and an instruction set of size twelve. Each instruction has five distinct fields, namely, opcode, two source register identifiers, one destination register identifier, and a twelve-bit immediate value. Each instruction must be stored in memory in a byte-aligned fashion. If a program has 100 instructions, the amount of memory (in bytes) consumed by the program text is _________.

 A 500 bytes B 501 bytes C 502 bytes D 503 bytes
Computer-Organization       Machine-Instructions       GATE 2016 set-2
Question 30 Explanation:
One instruction is divided into five parts,
(i) The opcode: as we have an instruction set of size 12, an opcode needs 4 bits, since 2^4 = 16 and 2^3 = 8 < 12, so we cannot go any lower.
(ii) & (iii) Two source register identifiers- As there are total 64 registers, they can be identified by 6 bits. As they are two i.e. 6 bit + 6 bit.
iv) One destination register identifier- Again it will be 6 bits.
v) A twelve bit immediate value- 12 bit.
4 + 6 + 6 + 6 + 12 = 34 bit = 34/8 byte = 4.25 bytes.
Due to byte alignment total bytes per instruction = 5 bytes.
As there are 100 instructions, total size = 5*100 = 500 Bytes.
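The field widths and the byte-aligned total can be reproduced in Python (a sketch, with variable names of our choosing):

```python
import math

opcode_bits = math.ceil(math.log2(12))  # 12 instructions -> 4 bits
reg_bits = math.ceil(math.log2(64))     # 64 registers -> 6 bits
imm_bits = 12
instr_bits = opcode_bits + 3 * reg_bits + imm_bits  # two sources + one destination
bytes_per_instr = math.ceil(instr_bits / 8)         # byte alignment -> 5 bytes
print(bytes_per_instr * 100)                        # 100 instructions -> 500
```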
 Question 31

The width of the physical address on a machine is 40 bits. The width of the tag field in a 512 KB 8-way set associative cache is _________ bits.

 A 24 B 25 C 26 D 27
Computer-Organization       Cache       GATE 2016 set-2
Question 31 Explanation:
Given that it is a set associative cache, so the physical address is divided as following:
(Tag bits + bits for set no. + Bits for block offset)
The block size is not given in the question, so assume it is 2^x bytes.
The cache is of size 512 KB, so the number of blocks in the cache = 2^19 / 2^x = 2^(19-x)
It is an 8-way set associative cache, so there are 8 blocks in each set.
So the number of sets = 2^(19-x) / 8 = 2^(16-x)
So the number of bits for the set index = 16 - x
Let the number of tag bits = T
Since we assumed the block size is 2^x bytes, the number of bits for the block offset is x.
So, T + (16-x) + x = 40
T + 16 = 40
T = 24
 Question 32

Consider a 3 GHz (gigahertz) processor with a three-stage pipeline and stage latencies τ1, τ2, and τ3 such that τ1 = 3τ2/4 = 2τ3. If the longest pipeline stage is split into two pipeline stages of equal latency, the new frequency is _________ GHz, ignoring delays in the pipeline registers.

 A 4 B 5 C 6 D 7
Computer-Organization       Pipelining       GATE 2016 set-2
Question 32 Explanation:
Given 3 stage pipeline, with 3 GHz processor.
Given: τ1 = 3τ2/4 = 2τ3
Put τ1 = 6t, we get τ2 = 8t, τ3 = 3t
Now largest stage time is 8t.
So, frequency is 1/8t
⇒ 1/8t = 3 GHz
⇒ 1/t = 24 GHz
From the given 3 stages, τ1 = 6t, τ2 = 8t and τ3 = 3t.
So, τ2 > τ1 > τ3.
The longest stage is τ2 = 8t and we will split that into two stages of 4t and 4t.
New processor has 4 stages - 6t, 4t, 4t, 3t.
Now largest stage time is 6t.
So, new frequency is = 1/6t
We can substitute 24 in place of 1/t, which gives the new frequency as 24/6 = 4 GHz
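The substitution can be verified with a few lines of Python (stage lists and names are ours):

```python
# Stage latencies expressed in units of t (from τ1 = 6t, τ2 = 8t, τ3 = 3t).
old_stages = [6, 8, 3]
new_stages = [6, 4, 4, 3]        # longest stage (8t) split into 4t + 4t
inv_t = 3 * max(old_stages)      # 1/(8t) = 3 GHz  =>  1/t = 24 GHz
print(inv_t / max(new_stages))   # new frequency in GHz -> 4.0
```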
 Question 33

A file system uses an in-memory cache to cache disk blocks. The miss rate of the cache is shown in the figure. The latency to read a block from the cache is 1 ms and to read a block from the disk is 10 ms. Assume that the cost of checking whether a block exists in the cache is negligible. Available cache sizes are in multiples of 10 MB.

The smallest cache size required to ensure an average read latency of less than 6 ms is _______ MB.

 A 30 B 31 C 32 D 33
Computer-Organization       Cache       GATE 2016 set-2
Question 33 Explanation:
Here it is not given whether the memory is hierarchical or simultaneous.
So we consider it as hierarchical memory.
But it is given that “assume that the cost of checking whether a block exists in the cache is negligible”, which means don't consider the checking time in the cache when there is a miss.
So formula for average access time becomes h1t1 + (1-h1)(t2) which is same as for simultaneous access.
Though the memory is hierarchical because of the statement given in the question we ignored the cache access time when there is a miss and effectively the calculation became like simultaneous access.
The average access time or read latency = h1t1 + (1-h1)t2.
It is given that the average read latency has to be less than 6ms.
So, h1*t1 + (1-h1)*t2 < 6
From the given information, t1 = 1 ms and t2 = 10 ms
h1*1 + (1-h1)*10 < 6
10 - 9h1 < 6
h1 > 4/9 ≈ 0.444
Since the given graph shows the miss rate, and 1-h1 is the miss rate, the condition becomes
1 - h1 < 1 - 0.444 = 0.556
So for the average read latency to be less than 6 ms, the miss rate has to be less than 55.6%.
From the given graph, the closest miss rate below 55.6% is 40%.
For a 40% miss rate the corresponding cache size is 30 MB.
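The miss-rate bound can be computed directly in Python (a sketch; `max_miss` is our name for the largest tolerable miss rate):

```python
t_cache, t_disk = 1, 10  # ms
# Average latency = h*t_cache + (1-h)*t_disk; require it below 6 ms.
# Solving for the miss rate (1-h) gives (6 - t_cache) / (t_disk - t_cache).
max_miss = (6 - t_cache) / (t_disk - t_cache)
print(round(max_miss, 3))  # 0.556
```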
 Question 34

For computers based on three-address instruction formats, each address field can be used to specify which of the following:

(S1) A memory operand
(S2) A processor register
(S3) An implied accumulator register
 A Either S1 or S2 B Either S2 or S3 C Only S2 and S3 D All of S1, S2 and S3
Computer-Organization       Machine-Instructions       GATE 2015 (Set-01)
Question 34 Explanation:
Any implied register is never explicitly mentioned as an operand in an operation.
So as the question asks what can be specified using the address fields, implied accumulator register can't be represented in address field.
So, S3 is wrong.
Hence Option A is the correct answer.
 Question 35
Consider a non-pipelined processor with a clock rate of 2.5 gigahertz and average cycles per instruction of four. The same processor is upgraded to a pipelined processor with five stages; but due to the internal pipeline delay, the clock speed is reduced to 2 gigahertz. Assume that there are no stalls in the pipeline. The speed up achieved in this pipelined processor is __________.
 A 3.2 B 3.3 C 3.4 D 3.5
Computer-Organization       Pipelining       GATE 2015 (Set-01)
Question 35 Explanation:
Given that the processor clock rate = 2.5 GHz, the processor takes 2.5 G cycles in one second.
Time taken to complete one cycle = (1 / 2.5 G) seconds
Since it is given that average number of cycles per instruction = 4, the time taken for completing one instruction = (4 / 2.5 G) = 1.6 ns
In the pipelined case we know in the ideal case CPI = 1, and the clock speed = 2 GHz.
Time taken for one instruction in the pipelined case = (1 / 2 G) = 0.5 ns
Speedup = 1.6/0.5 = 3.2
 Question 36

Consider a disk pack with a seek time of 4 milliseconds and rotational speed of 10000 rotations per minute (RPM). It has 600 sectors per track and each sector can store 512 bytes of data. Consider a file stored in the disk. The file contains 2000 sectors. Assume that every sector access necessitates a seek, and the average rotational latency for accessing each sector is half of the time for one complete rotation. The total time (in milliseconds) needed to read the entire file is _________.

 A 14020 B 14021 C 14022 D 14023
Computer-Organization       Secondary-Storage       GATE 2015 (Set-01)
Question 36 Explanation:
Given:
Seek time = 4 ms
10000 rotations take 60 s, so one rotation takes 6 ms
∴ Rotational latency = 1/2 × 6 ms = 3 ms
One rotation covers one track of 600 sectors in 6 ms
Time to read 1 sector = 6 ms / 600 = 0.01 ms
∴ Total time needed to read the entire file (2000 sectors, each needing a seek and average rotational latency)
= 2000 × (4 + 3 + 0.01)
= 8000 + 6000 + 20
= 14020 ms
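The per-sector accounting above can be reproduced in Python (variable names are ours):

```python
seek_ms = 4
rotation_ms = 60_000 / 10_000     # 6 ms per rotation at 10000 RPM
rot_latency_ms = rotation_ms / 2  # average: half a rotation = 3 ms
sector_ms = rotation_ms / 600     # 600 sectors per track -> 0.01 ms each
total_ms = 2000 * (seek_ms + rot_latency_ms + sector_ms)
print(round(total_ms))  # 14020
```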
 Question 37
Assume that for a certain processor, a read request takes 50 nanoseconds on a cache miss and 5 nanoseconds on a cache hit. Suppose while running a program, it was observed that 80% of the processor's read requests result in a cache hit. The average read access time in nanoseconds is _______.
 A 14 B 15 C 16 D 17
Computer-Organization       Cache       GATE 2015 -(Set-2)
Question 37 Explanation:
Average read access time = 0.8 × 5 + 0.2 × 50 = 4 + 10 = 14 ns
 Question 38
Consider a processor with byte-addressable memory. Assume that all registers, including the Program Counter (PC) and Program Status Word (PSW), are of size 2 bytes. A stack in the main memory is implemented from memory location (0100)16 and it grows upward. The stack pointer (SP) points to the top element of the stack. The current value of SP is (016E)16. The CALL instruction is of two words; the first word is the op-code and the second word is the starting address of the subroutine (one word = 2 bytes). The CALL instruction is implemented as follows
 A (016A)16 B (016C)16 C (0170)16 D (0172)16
Computer-Organization       Machine-Instructions       GATE 2015 -(Set-2)
Question 38 Explanation:
The CALL instruction is implemented as follows:
- Store the current value of PC in the stack.
PC is 2 bytes, so when we push PC onto the stack, SP is increased by 2; SP becomes (016E)16 + 2.
- Store the value of the PSW register in the stack.
PSW is also 2 bytes, so pushing it increases SP by 2 again.
So the current value of SP is (016E)16 + 2 + 2 = (0172)16.
 Question 39

Consider the sequence of machine instruction given below

MUL R5,R0,R1
DIV R6,R2,R3
ADD R7,R5,R6
SUB R8,R7,R4
In the above sequence, R0 to R8 are general purpose registers. In the instructions shown, the first register stores the result of the operation performed on the second and the third registers. This sequence of instructions is to be executed in a pipelined instruction processor with the following 4 stages: (1) Instruction Fetch and Decode (IF), (2) Operand Fetch (OF), (3) Perform Operation (PO) and (4) Write back the result (WB). The IF, OF and WB stages take 1 clock cycle each for any instruction. The PO stage takes 1 clock cycle for ADD or SUB instructions, 3 clock cycles for the MUL instruction and 5 clock cycles for the DIV instruction. The pipelined processor uses operand forwarding from the PO stage to the OF stage. The number of clock cycles taken for the execution of the above sequence of instructions is _______

 A 11 B 12 C 13 D 14
Computer-Organization       Pipelining       GATE 2015 -(Set-2)
Question 39 Explanation:
I ⇒ Instruction Fetch and Decode
O ⇒ Operand Fetch
P ⇒ Perform the operation
W ⇒ Write back the result
MUL finishes its PO stage at cycle 5 and DIV finishes its PO stage at cycle 8. With PO→OF forwarding, ADD fetches its operands in cycle 9, performs PO in cycle 10 and writes back in cycle 11. SUB then gets R7 forwarded, performs PO in cycle 12 and writes back in cycle 13. So the sequence takes 13 clock cycles.
 Question 40
Consider the following reservation table for a pipeline having three stages S1, S2 and S3.
         Time -->
         1    2    3    4    5
    S1 | X  |    |    |    | X  |
    S2 |    | X  |    | X  |    |
    S3 |    |    | X  |    |    |
The minimum average latency (MAL) is __________
 A 4 B 5 C 6 D 7
Computer-Organization       Pipelining       GATE 2015(Set-03)
Question 40 Explanation:
Minimum average latency is based on an advanced concept in pipelining.
S1 is needed at time 1 and 5, so its forbidden latency is 5-1 = 4.
S2 is needed at time 2 and 4, so its forbidden latency is 4-2 = 2.
So, forbidden latency = (2,4,0) (0 by default is forbidden)
Allowed latency = (1,3,5) (any value more than 5 also).
Collision vector (4,3,2,1,0) = 10101 which is the initial state as well.
From initial state we can have a transition after "1" or "3" cycles and we reach new states with collision vectors
(10101 >> 1 + 10101 = 11111) and (10101 >> 3 + 10101 = 10111) respectively.
These 2 becomes states 2 and 3 respectively.
For "5" cycles we come back to state 1 itself.
From state 2 (11111), the new collision vector is 11111.
We can have a transition only when we see first 0 from right.
So, here it happens on 5th cycle only which goes to initial state. (Any transition after 5 or more cycles goes to initial state as we have 5 time slices).
From state 3 (10111), the new collision vector is 10111.
So, we can have a transition on 3, which will give (10111 >> 3 + 10101 = 10111) third state itself. For 5, we get the initial state.
Thus all the transitions are complete.
    State\Time   1   3   5
    1 (10101)    2   3   1
    2 (11111)    -   -   1
    3 (10111)    -   3   1
So the minimum length cycle is of length 3, either from 3-3 or from 1-3. So the minimum average latency is also 3.
 Question 41
Consider the following code sequence having five instructions I1 to I5. Each of these instructions has the following format.
    OP Ri, Rj, Rk
where operation OP is performed on contents of registers Rj and Rk and the result is stored in register Ri.
   I1 : ADD R1, R2, R3
I2 : MUL R7, R1, R3
I3 : SUB R4, R1, R5
I4 : ADD R3, R2, R4
I5 : MUL R7, R8, R9
Consider the following three statements:
S1: There is an anti-dependence between instructions I2 and I5.
S2: There is an anti-dependence between instructions I2 and I4.
S3: Within an instruction pipeline an anti-dependence always
creates one or more stalls.

 A Only S1 is true B Only S2 is true C Only S1 and S3 are true D Only S2 and S3 are true
Computer-Organization       Pipelining       GATE 2015(Set-03)
Question 41 Explanation:
S1: False. Antidependency means WAR dependency. There is no WAR dependency between I2 and I5.
S2: True. There is WAR dependency between I2 and I4.
S3: False. Because WAR or antidependency can be resolved by register renaming.
 Question 42
A machine has a 32-bit architecture, with 1-word long instructions. It has 64 registers, each of which is 32 bits long.  It needs to support 45 instructions, which have an immediate operand in addition to two register operands.  Assuming that the immediate operand is an unsigned integer, the maximum value of the immediate operand is ____________.
 A 16383 B 16384 C 16385 D 16386
Computer-Organization       Machine-Instructions       GATE 2014(Set-01)
Question 42 Explanation:
1 Word = 32 bits
Each instruction has 32 bits.
To support 45 instructions, opcode must contain 6-bits.
Register operand1 requires 6 bits, since the total registers are 64.
Register operand 2 also requires 6 bits

14 bits are left over for the immediate operand. Using 14 bits, the maximum value we can represent is 16383, since 2^14 = 16384 (values range from 0 to 16383).
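The same field arithmetic can be reproduced in Python (a sketch; names are ours):

```python
import math

opcode_bits = math.ceil(math.log2(45))  # 45 instructions -> 6 bits
reg_bits = math.ceil(math.log2(64))     # 64 registers -> 6 bits each
imm_bits = 32 - opcode_bits - 2 * reg_bits  # 14 bits remain
print(2**imm_bits - 1)                  # largest unsigned immediate -> 16383
```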
 Question 43
Consider two processors P1 and P2 executing the same instruction set. Assume that under identical conditions, for the same input, a program running on P2 takes 25% less time but incurs 20% more CPI (clock cycles per instruction) as compared to the program running on P1. If the clock frequency of P1 is 1GHz, then the clock frequency of P2 (in GHz) is _________.
 A 1.6 B 1.7 C 1.8 D 1.9
Computer-Organization       Speed-Up       GATE 2014(Set-01)
Question 43 Explanation:
Cycle time for P1 = 1 / 1 GHz = 1 ns
Assume P1 takes 5 cycles for a program; then P2 takes 20% more, i.e., 6 cycles.
P2 takes 25% less time: if P1 takes 5 ns, then P2 takes 3.75 ns.
Assume the clock frequency of P2 is x GHz.
P2 takes 6 cycles, so 6 / (x GHz) = 3.75 ns, giving x = 1.6.
 Question 44
A 4-way set-associative cache memory unit with a capacity of 16 KB is built using a block size of 8 words. The word length is 32 bits. The size of the physical address space is 4 GB. The number of bits for the TAG field is __________.
 A 20 B 21 C 22 D 23
Computer-Organization       Cache       Gate 2014 Set -02
Question 44 Explanation:
Physical address size = 32 bits
Cache size = 16 KB = 2^14 bytes
Block size = 8 words = 8 × 4 bytes = 32 bytes = 2^5 bytes
(where each word = 4 bytes)
No. of blocks = 2^14 / 2^5 = 2^9
Because it is a 4-way set associative cache, no. of sets = 2^9 / 4 = 2^7
Set index = 7 bits; block offset = 5 bits
TAG = 32 - (7 + 5) = 20 bits
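The tag-width computation generalizes to any set-associative cache; a small Python helper (our own, hypothetical) reproduces it:

```python
import math

def tag_bits(addr_bits, cache_bytes, block_bytes, ways):
    # Hypothetical helper: tag width for a byte-addressable,
    # set-associative cache.
    sets = cache_bytes // (block_bytes * ways)
    set_bits = int(math.log2(sets))            # bits to select a set
    offset_bits = int(math.log2(block_bytes))  # bits to select a byte in a block
    return addr_bits - set_bits - offset_bits

print(tag_bits(32, 16 * 1024, 32, 4))   # 16 KB, 32 B blocks, 4-way -> 20
```

The same helper also gives the 16-bit tag of the 256 KB, 4-way cache in the GATE 2012 questions further below: `tag_bits(32, 256 * 1024, 32, 4)` evaluates to 16.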
 Question 45
Suppose a stack implementation supports an instruction REVERSE, which reverses the order of elements on the stack, in addition to the PUSH and POP instructions. Which one of the following statements is TRUE with respect to this modified stack?
 A A queue cannot be implemented using this stack. B A queue can be implemented where ENQUEUE takes a single instruction and DEQUEUE takes a sequence of two instructions. C A queue can be implemented where ENQUEUE takes a sequence of three instructions and DEQUEUE takes a single instruction. D A queue can be implemented where both ENQUEUE and DEQUEUE take a single instruction each.
Computer-Organization       Cache       Gate 2014 Set -02
Question 45 Explanation:
A queue can be implemented where ENQUEUE takes 3 instructions and DEQUEUE takes 1 instruction, maintaining the invariant that the oldest element is always on top of the stack.
ENQUEUE(x): REVERSE, PUSH x, REVERSE. The first REVERSE moves the oldest element to the bottom, PUSH places x on top, and the second REVERSE restores the order so the oldest element is back on top.
DEQUEUE: a single POP removes the oldest element.
 Question 46
In designing a computer’s cache system, the cache block (or cache line) size is an important parameter.  Which one of the following statements is correct in this context?
 A A smaller block size implies better spatial locality B A smaller block size implies a smaller cache tag and hence lower cache tag overhead C A smaller block size implies a larger cache tag and hence lower cache hit time D A smaller block size incurs a lower cache miss penalty
Computer-Organization       Cache        Gate 2014 Set -02
Question 46 Explanation:
When the cache block size is smaller, less data has to be fetched from the next level on a miss, so the miss penalty (the time to service a miss) is lower. Note that a smaller block size actually exploits spatial locality less well (ruling out A) and makes the tag larger, not smaller (ruling out B and C). Hence option D is correct.
 Question 47
If the associativity of a processor cache is doubled while keeping the capacity and block size unchanged, which one of the following is guaranteed to be NOT affected?
 A Width of tag comparator B Width of set index decoder C Width of way selection multiplexor D Width of processor to main memory data bus
Computer-Organization       Cache       Gate 2014 Set -02
Question 47 Explanation:
When associativity is doubled, the number of sets halves, so the set index shrinks by one bit and the tag grows by one bit; hence the width of the tag comparator is affected.
The width of the set index decoder is also affected when the set index changes.
A k-way set associative cache needs a k-to-1 way selection multiplexer. If the associativity is doubled, the width of the way selection multiplexer will also be doubled.
The width of the processor to main memory data bus is guaranteed to be NOT affected, as it does not depend on the cache associativity.
 Question 48
The value of a float type variable is represented using the single-precision 32-bit floating point format of IEEE-754 standard that uses 1 bit for sign, 8 bits for biased exponent and 23 bits for mantissa. A float type variable X is assigned the decimal value of −14.25. The representation of X in hexadecimal notation is
 A C1640000H B 416C0000H C 41640000H D C16C0000H
Computer-Organization       Number-Systems       Gate 2014 Set -02
Question 48 Explanation:
The given number is negative, so the sign bit = 1.
(14.25)10 = 1110.01
= 1.11001 × 2^3
23-bit mantissa = 11001000000000000000000
Biased exponent = exponent + bias
= 3 + 127 = 130 = 1000 0010
(-14.25) in 32-bit IEEE-754 floating point representation is
1 10000010 11001000000000000000000
= 1100 0001 0110 0100 0000 0000 0000 0000
= (C1640000)16
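Python's standard `struct` module can confirm the encoding by packing the value as a big-endian single-precision float:

```python
import struct

# Pack -14.25 into IEEE-754 single precision (big-endian) and
# print the four bytes in hexadecimal.
raw = struct.pack('>f', -14.25)
print(raw.hex().upper())  # C1640000
```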
 Question 49
Consider a main memory system that consists of 8 memory modules attached to the system bus, which is one word wide. When a write request is made, the bus is occupied for 100 nanoseconds (ns) by the data, address, and control signals. During the same 100 ns, and for 500 ns thereafter, the addressed memory module executes one cycle accepting and storing the data. The (internal) operation of different memory modules may overlap in time, but only one request can be on the bus at any time. The maximum number of stores (of one word each) that can be initiated in 1 millisecond is ____________
 A 10000 B 10001 C 10002 D 10003
Computer-Organization       Instruction-Cycle       Gate 2014 Set -02
Question 49 Explanation:
For each write request, the bus is occupied for 100 ns.
Since the internal operations of the memory modules can overlap, a new store can be initiated every 100 ns.
In 1 ms = 10^6 ns, the number of stores that can be initiated = 10^6 / 100 = 10000 stores.
 Question 50
Consider the following processors (ns stands for nanoseconds). Assume that the pipeline registers have zero latency. P1: Four-stage pipeline with stage latencies 1 ns, 2 ns, 2 ns, 1 ns. P2: Four-stage pipeline with stage latencies 1 ns, 1.5 ns, 1.5 ns, 1.5 ns. P3: Five-stage pipeline with stage latencies 0.5 ns, 1 ns, 1 ns, 0.6 ns, 1 ns. P4: Five-stage pipeline with stage latencies 0.5 ns, 0.5 ns, 1 ns, 1 ns, 1.1 ns. Which processor has the highest peak clock frequency?
 A P1 B P2 C P3 D P4
Computer-Organization       Pipelining-and-addressing-modes       Gate 2014 Set -03
Question 50 Explanation:
Clock period (CP) = max stage delay + overhead
So CP(P1) = Max(1, 2, 2, 1) = 2 ns
CP(P2) = Max(1, 1.5, 1.5, 1.5) = 1.5 ns
CP(P3) = Max(0.5, 1, 1, 0.6, 1) = 1 ns
CP(P4) = Max(0.5, 0.5, 1, 1, 1.1) = 1.1 ns
As frequency ∝ 1/CP, the smallest clock period gives the highest peak clock frequency.
So, f(P3) = 1 / 1 ns = 1 GHz, and P3 has the highest peak clock frequency.
 Question 51
An instruction pipeline has five stages, namely, instruction fetch (IF), instruction decode and register fetch (ID/RF), instruction execution (EX), memory access (MEM), and register writeback (WB) with stage latencies 1 ns, 2.2 ns, 2 ns, 1 ns, and 0.75 ns, respectively (ns stands for nanoseconds). To gain in terms of frequency, the designers have decided to split the ID/RF stage into three stages (ID, RF1, RF2) each of latency 2.2/3 ns. Also, the EX stage is split into two stages (EX1, EX2) each of latency 1 ns. The new design has a total of eight pipeline stages. A program has 20% branch instructions which execute in the EX stage and produce the next instruction pointer at the end of the EX stage in the old design and at the end of the EX2 stage in the new design. The IF stage stalls after fetching a branch instruction until the next instruction pointer is computed. All instructions other than the branch instruction have an average CPI of one in both the designs. The execution times of this program on the old and the new design are P and Q nanoseconds, respectively. The value of P/Q is __________.
 A 1.54 B 1.55 C 1.56 D 1.57
Computer-Organization       Pipelining-and-addressing-modes       Gate 2014 Set -03
Question 51 Explanation:
There are five stages:
Instruction Fetch (IF),
instruction decode and register fetch (ID/RF),
Instruction execution (EX),
Memory access (MEM),
and register writeback (WB)
P old design:
With stage latencies 1 ns, 2.2 ns, 2 ns, 1 ns, and 0.75 ns
Cycle time = MAX(1 ns, 2.2 ns, 2 ns, 1 ns, and 0.75 ns) = 2.2nsec
Branch penalty = 3-1 = 2 because the next instruction pointer at the end of the EX stage(which is 3rd stage) in the old design
AVG instruction execution time is
P = Tavg = (1 + branch fraction × branch penalty) × cycle time
=(1+0.20*2)2.2
P =3.08 nsec
Q new design:
ID/RF stage is split into three stages (ID, RF1, RF2) each of latency (2.2)/3 ns = 0.73ns.
The EX stage is split into two stages (EX1, EX2) each of latency 1 ns.
The new design has a total of eight pipeline stages.
Time of stages in new design ={1ns, 0.73ns, 0.73ns, 0.73ns , 1ns, 1ns, 1ns, and 0.75ns}
Cycle time = MAX( 1ns, 0.73ns, 0.73ns, 0.73ns , 1ns, 1ns, 1ns, and 0.75ns) =1nsec
Branch penalty = 6-1 = 5 because the next instruction pointer at the end of the EX2 stage(which is 6th stage) in the new design.
AVG instruction execution time is
Q = Tavg = (1 + branch fraction × branch penalty) × cycle time
= (1+0.20*5)1
Q = 2 nsec
Therefore, P/Q=3.08/2 =1.54
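The two designs can be compared with a small Python function built on the same formula (the function name is ours):

```python
# Average time per instruction = (1 + branch_frac * penalty) * cycle time.
def avg_time(branch_frac, branch_penalty, cycle_ns):
    return (1 + branch_frac * branch_penalty) * cycle_ns

P = avg_time(0.20, 2, 2.2)  # old 5-stage design, branch resolved in stage 3
Q = avg_time(0.20, 5, 1.0)  # new 8-stage design, branch resolved in stage 6
print(round(P / Q, 2))  # 1.54
```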
 Question 52
In a k-way set associative cache, the cache is divided into v sets, each of which consists of k lines. The lines of a set are placed in sequence one after another. The lines in set s are sequenced before the lines in set (s+1). The main memory blocks are numbered 0 onwards. The main memory block numbered j must be mapped to any one of the cache lines from
 A (j mod v) * k to (j mod v) * k + (k-1) B (j mod v) to (j mod v) + (k-1) C (j mod k) to (j mod k) + (v-1) D (j mod k) * v to (j mod k) * v + (v-1)
Computer-Organization       Cache       Gate 2013
Question 52 Explanation:
Number of sets in cache = v. A block numbered j will be mapped to set number (j mod v). Since it is k-way set associative cache, there are k blocks in each set. It is given in the question that the blocks in consecutive sets are sequenced. It means for set-0 the cache lines are numbered 0, 1, .., k-1 and for set-1, the cache lines are numbered k, k+1,... k+k-1 and so on. As the main memory block j will be mapped to set (j mod v), it will be any one of the cache lines from (j mod v) * k to (j mod v) * k + (k-1).
 Question 53
Consider the following sequence of micro-operations.
     MBR ← PC
MAR ← X
PC ← Y
Memory ← MBR
Which one of the following is a possible operation performed by this sequence?
 A Instruction fetch B Operand fetch C Conditional branch D Initiation of interrupt service
Computer-Organization       CPU-Control-Design-and-Interfaces       Gate 2013
Question 53 Explanation:
We know PC holds the value of the next instruction to be executed. Here the value of PC is stored in MBR( MBR ← PC ) and then to the memory (Memory ← MBR). We are saving the value of PC in memory and a new address is loaded into the PC (PC ← Y), so we start executing from the new address Y which is loaded into PC. So this behaviour matches with the interrupt service execution as we have already saved the address of current running instruction into memory and then we are loading new address in to the PC and start executing from there. Hence option D is the answer.
 Question 54
Consider an instruction pipeline with five stages without any branch prediction: Fetch Instruction (FI), Decode Instruction (DI), Fetch Operand (FO), Execute Instruction (EI) and Write Operand (WO). The stage delays for FI, DI, FO, EI and WO are 5 ns, 7 ns, 10 ns, 8 ns and 6 ns, respectively. There are intermediate storage buffers after each stage and the delay of each buffer is 1 ns. A program consisting of 12 instructions I1, I2, I3, ..., I12 is executed in this pipelined processor. Instruction I4 is the only branch instruction and its branch target is I9. If the branch is taken during the execution of this program, the time (in ns) needed to complete the program is
 A 132 B 165 C 176 D 328
Question 54 Explanation:

Cycle time = max of all stage delays + buffer delay = max (5 ns, 7 ns, 10 ns, 8 ns, 6 ns) + 1 = 10+1 = 11ns
Out of all the instructions I1, I2, I3....I12 it is given that only I4 is a branch instruction and when I4 takes branch the control will jump to instruction I9 as I9 is the target instruction.
As can be seen from the timing diagram there is a gap of only 3 stall cycles between I4 and I9 because after I4 enters Decode Instruction (DI) whether there is a branch or not will be known at the end of Execute Instruction (EI) phase. So there are total 3 phases here namely DI, FO, EI. After 3 stall cycles I9 will start executing as that is the branch target.
As per the timing diagram total no. of clock cycles to complete the program = 15
Since 1 clock cycle = 11ns, time to complete the program = 15*11 = 165ns
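The cycle count can be reproduced in Python (a sketch; the accounting follows the explanation above):

```python
cycle_ns = max(5, 7, 10, 8, 6) + 1  # slowest stage delay plus 1 ns buffer
stages = 5
executed = 8                        # I1..I4, then I9..I12 (branch taken)
stalls = 3                          # DI, FO, EI of I4 before the target is known
total_cycles = (stages - 1) + executed + stalls  # pipeline fill + issues + stalls
print(total_cycles * cycle_ns)  # 165
```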
 Question 55
A RAM chip has a capacity of 1024 words of 8 bits each (1K×8). The number of 2×4 decoders with enable line needed to construct a 16K×16 RAM from 1K×8 RAM is
 A 4 B 5 C 6 D 7
Computer-Organization       Cache       Gate 2013
Question 55 Explanation:
The capacity of the RAM needed = 16K
Capacity of the chips available = 1K
No. of 1K chips to be selected along the address dimension = 16K / 1K = 16
Hence a single 4 × 16 decoder could select among them, but we are only given 2 × 4 decoders.
So 4 decoders are required in inner level as from one 2×4 decoder we have only 4 output lines whereas we need 16 output lines.
Now to point to these 4 decoders, another 2×4 decoder is required in the outer level.
Hence no. of 2×4 decoders to realize the above implementation of RAM = 1 + 4 = 5
 Question 56
The following code segment is executed on a processor which allows only register operands in its instructions. Each instruction can have atmost two source operands and one destination operand. Assume that all variables are dead after this code segment.
   c = a + b;
d = c * a;
e = c + a;
x = c * c;
if (x > a) {
y = a * a;
}
else {
d = d * d;
e = e * e;
}
Suppose the instruction set architecture of the processor has only two registers. The only allowed compiler optimization is code motion, which moves statements from one place to another while preserving correctness. What is the minimum number of spills to memory in the compiled code?
 A 3 B 4 C 5 D 6
Computer-Organization       Code-Segment       Gate 2013
Question 56 Explanation:
Let R1← a, R2← b
The 1st statement c = a + b; can be written as R2 = R1+R2, here R2 is used to store the value of 'c' as we don't need 'b' any further.
Let R3 be used to store 'd', so we can write the 2nd statement d = c * a; as R3 = R2*R1
Let R4 be used to store 'e', so we can write the 3rd statement e = c + a; as R4 = R2+R1
We can reuse R2 to store 'x' as the value of 'c' is not needed after statement-4.
We can write the 4th statement x = c*c; as R2 = R2*R2;
Using the four registers R1, R2, R3 and R4 we can write the remaining statements of the program without needing any new registers...
if (x > a) {
y = a * a;
}
else {
d = d * d;
e = e * e;
}
So the minimum no. of registers needed is 4.
 Question 57

 A Only one B Only two C Only three D All four
Computer-Organization       General       Gate 2013
Question 57 Explanation:
void find_and_replace (char *A, char *oldc, char *newc)
{
for (int i=0; i<5; i++)
for (int j=0; j<3; j++)
if (A[i] == oldc[j])
A[i] = newc[j];
}
The procedure is tested with the following four test cases.
1. oldc = “abc”, newc = ”dab”
2. oldc = “cde”, newc = ”bcd”
3. oldc = “bca”, newc = ”cda”
4. oldc = “abc”, newc = ”bac”
Given input string of length 5 consisting of a, b, c, d, e, duplicates allowed.

Similarly for (4), two replacements happen for A[0]: first with newc[0] and then with newc[1].
Replacing a character that has already been replaced with a newc value is the flaw here. Hence two of the test cases expose the flaw.
 Question 58
Register renaming is done in pipelined processors
 A as an alternative to register allocation at compile time B for efficient access to function parameters and local variables C to handle certain kinds of hazards D as part of address translation
Computer-Organization       Instruction-Pipelining       Gate 2012
Question 58 Explanation:
Register renaming is used to eliminate hazards that arise due to WAR (Write After Read) and WAW(Write After Write) dependencies. Hence option C is the answer.
 Question 59
 A 11 B 14 C 16 D 27
Computer-Organization       Cache       Gate 2012
Question 59 Explanation:
It is given that cache size = 256 KB
Cache block size = 32 Bytes
So, number of blocks in the cache = 256K / 32 = 8 K
It is a 4-way set associative cache. Each set has 4 blocks.
So, number of sets in cache = 8 K / 4 = 2 K = 2^11.
So, 11 bits are needed for accessing a set. Inside a set we need to identify the cache block.
Since cache block size is 32 bytes, block offset needs 5 bits.
Out of the 32-bit address, no. of TAG bits = 32 - (11 + 5) = 32 - 16 = 16.
So, we need 16 tag bits.
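The tag/set/offset split above can be reproduced with a small Python sketch (written for this page, not part of the original solution; `cache_fields` is a made-up helper name):

```python
import math

def cache_fields(cache_size, block_size, ways, addr_bits):
    """Split an address into (tag, set-index, block-offset) bit widths
    for a set-associative cache. Sizes are in bytes."""
    blocks = cache_size // block_size          # total cache blocks
    sets = blocks // ways                      # blocks grouped into sets
    offset_bits = int(math.log2(block_size))   # byte within a block
    index_bits = int(math.log2(sets))          # which set
    tag_bits = addr_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

# 256 KB, 4-way, 32-byte blocks, 32-bit addresses
print(cache_fields(256 * 1024, 32, 4, 32))   # (16, 11, 5)
```

With `ways=1` the same sketch covers direct-mapped caches, e.g. the 8 KB cache in a later question gives (19, 8, 5).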
 Question 60
A computer has a 256 KByte, 4-way set associative, write back data cache with block size of 32 Bytes. The processor sends 32 bit addresses to the cache controller. Each cache tag directory entry contains, in addition to address tag, 2 valid bits, 1 modified bit and 1 replacement bit. The size of the cache tag directory is
 A 160 Kbits B 136 Kbits C 40 Kbits D 32 Kbits
Computer-Organization       Cache       Gate 2012
Question 60 Explanation:
It is given that cache size =256 KB
Cache block size = 32 Bytes
So, number of blocks in the cache = 256K / 32 = 8 K
It is a 4-way set associative cache. Each set has 4 blocks.
So, number of sets in cache = 8 K / 4 = 2 K = 2^11.
So, 11 bits are needed for accessing a set. Inside a set we need to identify the cache block.
Since cache block size is 32 bytes, block offset needs 5 bits.
Out of the 32-bit address, no. of TAG bits = 32 - (11 + 5) = 32 - 16 = 16.
So, we need 16 tag bits.
It is given that in addition to address tag there are 2 valid bits, 1 modified bit and 1 replacement bit.
So size of each tag entry = 16 tag bits + 2 valid bits + 1 modified bit + 1 replacement bit = 20 bits
Size of cache tag directory = Size of tag entry × Number of tag entries
= 20 × 8 K
= 160 Kbits
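The tag-directory arithmetic can be checked with a short Python sketch (a hypothetical helper written for this page, not from the original solution):

```python
import math

def tag_directory_bits(cache_size, block_size, ways, addr_bits, status_bits):
    """Total bits in the cache tag directory: one entry per block,
    each entry = tag bits + per-block status bits."""
    blocks = cache_size // block_size
    sets = blocks // ways
    offset = int(math.log2(block_size))
    index = int(math.log2(sets))
    tag = addr_bits - index - offset
    return (tag + status_bits) * blocks

# 2 valid + 1 modified + 1 replacement = 4 status bits per entry
print(tag_directory_bits(256 * 1024, 32, 4, 32, 4))  # 163840 bits = 160 Kbits
```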
 Question 61
Consider a hypothetical processor with an instruction of type LW R1, 20 (R2), which during execution reads a 32-bit word from memory and stores it in a 32-bit register R1. The effective address of the memory location is obtained by the addition of constant 20 and the contents of register R2. Which of the following best reflects the addressing mode implemented by this instruction for the operand memory?
Computer-Organization       Machine-Instructions       Gate 2011
Question 61 Explanation:
Here the constant 20 acts as the base and the contents of R2 act as the index.
 Question 62
A computer handles several interrupt sources of which the following are relevant for this question.
- Interrupt from CPU temperature sensor (raises interrupt if CPU temperature is too high)
- Interrupt from Mouse (raises interrupt if the mouse is moved or a button is pressed)
- Interrupt from Keyboard (raises interrupt when a key is pressed or released)
- Interrupt from Hard Disk (raises interrupt when a disk read completes)
Which one of these will be handled at the HIGHEST priority?

 A Interrupt from Hard Disk B Interrupt from Mouse C Interrupt from Keyboard D Interrupt from CPU temperature sensor
Computer-Organization       Interrupt       Gate 2011
Question 62 Explanation:
Higher priorities are assigned to interrupt requests which, if delayed, can cause serious problems. High-speed transfer devices such as magnetic disks are given higher priority, while slow devices such as the keyboard and mouse receive low priority. Moreover, delaying the CPU temperature sensor interrupt can cause serious problems, such as damage to the CPU.
Therefore, we can say that priorities are,
CPU temperature sensor > Hard disk > Mouse > Keyboard.
 Question 63
Consider an instruction pipeline with four stages (S1, S2, S3 and S4) each with combinational circuit only. The pipeline registers are required between each stage and at the end of the last stage. Delays for the stages and for the pipeline registers are as given in the figure: What is the approximate speed up of the pipeline in steady state under ideal conditions when compared to the corresponding non-pipeline implementation?
 A 4 B 2.5 C 1.1 D 3
Computer-Organization       Instruction-Pipelining       Gate 2011
Question 63 Explanation:
Speedup = Time without pipelining / Time with pipelining
Clock cycle time of pipelining
= max (5ns, 6ns, 11ns, 8ns) + 1ns
= 11 + 1
= 12ns
Time without pipelining = 5 + 6 + 11 + 8 = 30 ns
∴ Speedup = 30/12 = 2.5
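The speedup calculation above is easy to sanity-check in a few lines of Python (a sketch written for this page, assuming the stage and register delays quoted in the explanation):

```python
stage_delays = [5, 6, 11, 8]   # ns, stages S1..S4 (from the explanation)
register_delay = 1             # ns, pipeline register

cycle = max(stage_delays) + register_delay   # pipelined clock: 12 ns
non_pipelined = sum(stage_delays)            # 30 ns per instruction
speedup = non_pipelined / cycle
print(round(speedup, 1))  # 2.5
```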
 Question 64
An 8KB direct-mapped write-back cache is organized as multiple blocks, each of size 32 bytes. The processor generates 32-bit addresses. The cache controller maintains the tag information for each cache block comprising the following: 1 valid bit, 1 modified bit, and as many bits as the minimum needed to identify the memory block mapped in the cache. What is the total size of memory needed at the cache controller to store meta-data (tags) for the cache?
 A 4864 bits B 6144 bits C 6656 bits D 5376 bits
Computer-Organization       Cache       Gate 2011
Question 64 Explanation:
No. of cache blocks = cache size/size of a block = 8KB/32B = 256
So we need 8 bits for indexing the 256 blocks in the cache. And since a block is 32 bytes we need 5 word bits to address each byte.
So out of remaining (32 - 8 - 5), 19 bits should be tag bits.
So tag entry size = 19 + 1 (valid bit) + 1 (modified bit) = 21 bits
∴ Total size of metadata = 21 × Number blocks = 21 × 256 = 5376 bits
 Question 65
An application loads 100 libraries at startup. Loading each library requires exactly one disk access. The seek time of the disk to a random location is given as 10 ms. Rotational speed of disk is 6000 rpm. If all 100 libraries are loaded from random locations on the disk, how long does it take to load all libraries? (The time to transfer data from the disk block once the head has been positioned at the start of the block may be neglected).
 A 0.50 s B 1.50 s C 1.25 s D 1.00 s
Computer-Organization       Secondary-Storage       Gate 2011
Question 65 Explanation:
Time per access = Seek time + Average rotational latency + Transfer time
At 6000 rpm one rotation takes 60/6000 = 10 ms, so the average rotational latency is 10/2 = 5 ms.
Time per access = 10 ms + 5 ms + 0 = 15 ms
For 100 libraries: 100 × 15 ms = 1.5 seconds
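The same arithmetic as a short Python sketch (written for this page; variable names are made up):

```python
seek_time = 10e-3                 # seconds, given
rpm = 6000
rotation_time = 60 / rpm          # one full rotation: 10 ms
avg_latency = rotation_time / 2   # 5 ms on average
per_access = seek_time + avg_latency   # transfer time is neglected
total = 100 * per_access
print(total)  # 1.5 seconds
```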
 Question 66
On a non-pipelined sequential processor, a program segment, which is a part of the interrupt service routine, is given to transfer 500 bytes from an I/O device to memory.
Initialize the address register
Initialize the count to 500
LOOP: Load a byte from device
Store in memory at address given by address register
Increment the address register
Decrement the count
If count != 0 go to LOOP

Assume that each statement in this program is equivalent to machine instruction which takes one clock cycle to execute if it is a non-load/store instruction. The load-store instructions take two clock cycles to execute. The designer of the system also has an alternate approach of using DMA controller to implement the same transfer. The DMA controller requires 20 clock cycles for initialization and other overheads. Each DMA transfer cycle takes two clock cycles to transfer one byte of data from the device to the memory. What is the approximate speedup when the DMA controller based design is used in place of the interrupt driven program based input-output?

 A 3.4 B 4.4 C 5.1 D 6.7
Computer-Organization       DMA       Gate 2011
Question 66 Explanation:
No. of clock cycles required using the interrupt-driven (load-store) approach = 2 + 500 × 7 = 3502, and using
DMA = 20 + 500 × 2 = 1020.
Required speedup = 3502/1020 ≈ 3.4
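The cycle counts can be reproduced directly (a sketch written for this page, using the per-instruction cycle costs stated in the question):

```python
# Loop body: load (2) + store (2) + increment (1) + decrement (1) + branch (1)
cycles_per_byte = 2 + 2 + 1 + 1 + 1      # 7 cycles per byte
pio_cycles = 2 + 500 * cycles_per_byte   # 2 init instructions + loop: 3502
dma_cycles = 20 + 500 * 2                # init overhead + 2 cycles/byte: 1020
print(round(pio_cycles / dma_cycles, 1))  # 3.4
```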
 Question 67
A 5-stage pipelined processor has Instruction Fetch (IF), Instruction Decode (ID), Operand Fetch (OF), Perform Operation (PO) and Write Operand (WO) stages. The IF, ID, OF and WO stages take 1 clock cycle each for any instruction. The PO stage takes 1 clock cycle for ADD and SUB instructions, 3 clock cycles for MUL instruction, and 6 clock cycles for DIV instruction respectively. Operand forwarding is used in the pipeline. What is the number of clock cycles needed to execute the following sequence of instructions?
     Instruction             Meaning of instruction
I0: MUL R2, R0, R1        R2 ← R0 * R1
I1: DIV R5, R3, R4        R5 ← R3 / R4
I2: ADD R2, R5, R2        R2 ← R5 + R2
I3: SUB R5, R2, R6        R5 ← R2 - R6

 A 13 B 15 C 17 D 19
Computer-Organization       Pipelining       2010
Question 67 Explanation:
It is given that there is operand forwarding. In the case of operand forwarding the updated value from previous instruction’s PO stage is forwarded to the present instruction’s PO stage. Here there’s RAW dependency between I1-I2 for R5 and between I2-I3 for R2. These dependencies are resolved by using operand forwarding as shown in the below timeline diagram. The total number of clock cycles needed is 15.
 Question 68

 A 2 nanoseconds B 20 nanoseconds C 22 nanoseconds D 88 nanoseconds
Computer-Organization       Memory-Hierarchy       2010
Question 68 Explanation:
The L1 cache block size is 4 words and the bus width between L1 and L2 is also 4 words, so the block needs to be transferred from L2 to L1 only once.
This requires one access of the L2 cache and one access of the L1 cache. So the time taken = 20 + 2 = 22 ns.
 Question 69
A computer system has an L1 cache, an L2 cache, and a main memory unit connected as shown below. The block size in L1 cache is 4 words. The block size in L2 cache is 16 words. The memory access times are 2 nanoseconds. 20 nanoseconds and 200 nanoseconds for L1 cache, L2 cache and main memory unit respectively. When there is a miss in L1 cache and a hit in L2 cache, a block is transferred from L2 cache to L1 cache. What is the time taken for this transfer?
 A 222 nanoseconds B 888 nanoseconds C 902 nanoseconds D 968 nanoseconds
Computer-Organization       Memory-Hierarchy       2010
Question 69 Explanation:
The L1 cache block size is 4 words and the bus width between L1 and L2 is also 4 words, so a block transfer from L2 to L1 needs only one bus transfer. It requires one access of the L2 cache and one access of the L1 cache: 20 + 2 = 22 ns.
The L2 cache block size is 16 words and nothing is mentioned about the main memory block size, so we assume it is also 16 words. The bus between L2 and main memory is only 4 words wide, so a block transferred from main memory to L2 must be sent 4 times through the data bus.
When a data request comes to L1 and misses, it is searched in L2. If it hits in L2, the required block is transferred from L2 to L1 in a single bus transfer. If it misses in L2, it is searched in main memory, and the 16-word block is transferred from main memory to L2 in 4 transfers through the data bus. Time taken for this = 4 × (200 + 20) = 4 × 220 = 880 ns.
Then from L2 to L1 the 4-word block is transferred through the data bus in a single transfer, which takes 22 ns as computed above.
So total time taken = 880 + 22 = 902 ns
 Question 70
A CPU generally handles an interrupt by executing an interrupt service routine
 A As soon as an interrupt is raised. B By checking the interrupt register at the end of fetch cycle. C By checking the interrupt register after finishing the execution of the current instruction. D By checking the interrupt register at fixed time intervals.
Computer-Organization       Interrupt       2009
Question 70 Explanation:
As soon as an interrupt is raised the corresponding register is set. But CPU acts on it only after execution of its current instruction. This is followed to ensure integrity of instructions.
 Question 71
Consider a 4 stage pipeline processor. The number of cycles needed by the four instructions I1, I2, I3, I4 in stages S1, S2, S3, S4 is shown below: What is the number of cycles needed to execute the following loop? For (i=1 to 2) {I1; I2; I3; I4;}
 A 16 B 23 C 28 D 30
Computer-Organization       Pipelining       2009
Question 71 Explanation:
 Question 72
Consider a 4-way set associative cache (initially empty) with total 16 cache blocks. The main memory consists of 256 blocks and the request for memory blocks is in the following order: 0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155. Which one of the following memory block will NOT be in cache if LRU replacement policy is used?
 A 3 B 8 C 129 D 216
Computer-Organization       Cache       2009
Question 72 Explanation:
It is given that cache contains 16 blocks, it is 4-way set associative cache.
So number of sets = 16 / 4 = 4
Given main memory blocks will be mapped to one of the 4 sets.
The given blocks are 0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155, and they will be mapped to following sets(Since there are 4 sets we get the set number by doing mod 4):
0, 3, 1, 0, 3, 0, 1, 3, 0, 1, 3, 0, 0, 0, 1, 0, 3
The cache mapping and the replacement using LRU can be seen from the below diagram.

We can see that at the end of mapping the given block pattern block number 216 is not in the cache.
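The mapping and LRU replacement described above can be verified with a short Python simulation (written for this page, not part of the original solution; `simulate_lru` is a made-up helper name):

```python
def simulate_lru(refs, num_sets, ways):
    """Simulate an LRU set-associative cache. Each set is kept as a
    list ordered from least- to most-recently used."""
    sets = [[] for _ in range(num_sets)]
    for block in refs:
        s = sets[block % num_sets]   # set index = block number mod num_sets
        if block in s:
            s.remove(block)          # hit: refresh recency
        elif len(s) == ways:
            s.pop(0)                 # miss on a full set: evict the LRU block
        s.append(block)
    return {b for s in sets for b in s}

refs = [0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155]
final = simulate_lru(refs, num_sets=4, ways=4)
print(216 in final)   # False: block 216 was evicted from set 0
```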
 Question 73
A hard disk has 63 sectors per track, 10 platters each with 2 recording surfaces and 1000 cylinders. The address of a sector is given as a triple <c, h, s>, where c is the cylinder number, h is the surface number and s is the sector number. Thus, the 0th sector is addressed as <0, 0, 0>, the 1st sector as <0, 0, 1>, and so on. The address <400, 16, 29> corresponds to sector number:
 A 505035 B 505036 C 505037 D 505038
Computer-Organization       Secondary-Storage       2009
Question 73 Explanation:
By default the cylinders numbering starts from 0. In the given hard disk there are 1000 cylinders, 1st cylinder is numbered 0, 2nd cylinder is numbered 1 and so on 1000th cylinder is numbered 999.
To reach a cylinder numbered n, we need to cross cylinders numbered from 0 to n-1.
To reach cylinder numbered 400 (401th cylinder) we need to skip cylinders numbered from 0 to 399, (from cylinder number 0 to 399 there are total 400 cylinders).
Each cylinder consists of 10 platters with 2 recording surfaces each and 63 sectors per track.
So the total number of sectors skipped to reach the cylinder numbered 400 = 400 * (10*2) * 63 = 504,000 sectors.
Then, to reach the 16th surface of the cylinder numbered 400, we need to skip another 16*63 = 1,008 sectors.
Finally, to reach the sector numbered 29, we move another 29 sectors.
In total, we moved 504,000 + 1,008 + 29 = 505,037 sectors.
Hence, option C is the answer.
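The step-by-step count above amounts to a single formula, sketched below in Python (written for this page; `chs_to_sector` is a made-up helper name):

```python
def chs_to_sector(c, h, s, heads=20, sectors_per_track=63):
    """Linear sector number for the zero-based <c, h, s> triple:
    skip c full cylinders, then h full tracks, then s sectors."""
    return (c * heads + h) * sectors_per_track + s

print(chs_to_sector(400, 16, 29))  # 505037
```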
 Question 74

 A 〈0, 15, 31〉 B 〈0, 16, 30〉 C 〈0, 16, 31〉 D 〈0, 17, 31〉
Computer-Organization       Secondary-Storage       2009
Question 74 Explanation:
We know each track has 63 sectors, and within a cylinder each surface contributes one track.
From the given options we can calculate the sector numbers as
Option A - 15*63+31 = 976
Option B - 16*63+30 = 1038
Option C - 16*63+31 = 1039
Option D - 17*63+31 = 1102
Hence Option C is the answer.
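The question text is missing here, but the option arithmetic implies the task is to find the <c, h, s> address of a given linear sector number (1039, per option C's calculation). Under that assumption, the inverse conversion can be sketched in Python (`sector_to_chs` is a made-up helper name):

```python
def sector_to_chs(n, heads=20, sectors_per_track=63):
    """Convert a linear sector number back to a zero-based
    <cylinder, head, sector> triple."""
    c, rem = divmod(n, heads * sectors_per_track)   # sectors per cylinder
    h, s = divmod(rem, sectors_per_track)           # sectors per track
    return c, h, s

print(sector_to_chs(1039))  # (0, 16, 31)
```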
 Question 75
For a magnetic disk with concentric circular tracks, the seek latency is not linearly proportional to the seek distance due to
 A non-uniform distribution of requests B arm starting and stopping inertia C higher capacity of tracks on the periphery of the platter D use of unfair arm scheduling policies
Computer-Organization       Secondary-Storage       Gate-2008
Question 75 Explanation:
Whenever the head moves from one track to another, its speed changes: the arm must accelerate from rest and then decelerate to stop, which is the effect of inertia.
Inertia is the property of matter by which a body continues in its existing state of rest or uniform motion in a straight line unless that state is changed by an external force. Because of this starting and stopping inertia, seek latency is not linearly proportional to seek distance.
 Question 76
Which of the following is/are true of the auto-increment addressing mode? I.It is useful in creating self-relocating code II.If it is included in an Instruction Set Architecture, then an additional ALU is required for effective address calculation III.The amount of increment depends on the size of the data item accessed
 A I only B II only C III Only D II and III only
Question 76 Explanation:
I: Self-relocating code must adjust the addresses it uses at load time; the auto-increment addressing mode does not help in creating self-relocating code. Hence this statement is wrong.
II: An additional ALU is not necessary for auto-increment; the increment can be performed by the existing address-calculation hardware. So this statement is wrong.
III: In auto-increment addressing mode, the register is incremented automatically after the access, and the amount of the increment equals the size of the data item accessed, as in pointer arithmetic. So this statement is true.
Hence option C is the answer.
 Question 77
Which of the following must be true for the RFE (Return From Exception) instruction on a general purpose processor? I.It must be a trap instruction II.It must be a privileged instruction III.An exception cannot be allowed to occur during execution of an RFE instruction
 A I only B II only C I and II only D I, II and III only
Computer-Organization       Machine-Instructions       Gate-2008
Question 77 Explanation:
RFE is a privileged instruction performed explicitly by the Operating System to switch from kernel mode back to user mode at the end of handling an exception; hence it must also be a trap instruction. While a trap/interrupt is being handled, no other trap/interrupt is allowed to execute until it completes, so an exception cannot occur during execution of RFE. So option D is the correct answer.
 Question 78
For inclusion to hold between two cache levels L1 and L2 in a multi-level cache hierarchy, which of the following are necessary? I.L1 must be a write-through cache II.L2 must be a write-through cache III.The associativity of L2 must be greater than that of L1 IV.The L2 cache must be at least as large as the L1 cache
 A IV only B I and IV only C I, III and IV only D I, II, III and IV
Computer-Organization       Cache       Gate-2008
Question 78 Explanation:
Inclusion property: in this the presence of an address in a given level of the memory system guarantees that the address is present in all lower levels of the memory system.
Statement I is not always required, since the data need not be exactly the same at the same point of time; a write-back policy can be used instead of write-through.
2nd is not needed when talking only about L1 and L2, as whether L2 is write through will have impact on any memory higher than L2, not on L1.
For 3rd, associativity can be equal also, so it need not be true.
For 4th statement, L2 cache has to be as large as L1 cache. In most cases L2 cache will be generally larger than L1 cache, but we will never have an L2 cache smaller than L1 cache. So only 4th statement is necessarily true. Hence option A is the answer.
 Question 79
Which of the following are NOT true in a pipelined processor? I.Bypassing can handle all RAW hazards II.Register renaming can eliminate all register carried WAR hazards III.Control hazard penalties can be eliminated by dynamic branch prediction
 A I and II only B I and III only C II and III only D I, II and III
Computer-Organization       Pipelining       Gate-2008
Question 79 Explanation:
I. False. Bypassing cannot handle all RAW hazards; for example, a load followed immediately by a dependent instruction still requires a stall.
II. True. Register renaming can eliminate all WAR hazards as well as WAW hazards.
III. If the statement had said that control hazard penalties can be completely eliminated by dynamic branch prediction, it would be false. But it only says that control hazard penalties can be eliminated (i.e., reduced) by dynamic branch prediction, so it is taken as true.
Hence, none of the given options is correct.
 Question 80
The use of multiple register windows with overlap causes a reduction in the number of memory accesses for I.Function locals and parameters II.Register saves and restores III.Instruction fetches
 A I only B II only C III only D I, II and III
Computer-Organization       Run-Time-Environments       Gate-2008
Question 80 Explanation:
I is true: when function F() calls function G(), overlapping register windows let F()'s output registers serve as G()'s input registers, so F()'s outputs need not be written to memory and read back by G(). This reduces memory accesses for function locals and parameters.
II is false, as register saves and restores would still be required for each and every variable that does not fit in a window.
III is also false, as instruction fetches are not affected by the use of multiple register windows.
 Question 81
In an instruction execution pipeline, the earliest that the data TLB (Translation Lookaside Buffer) can be accessed is
 A Before effective address calculation has started B During effective address calculation C After effective address calculation has completed D After data cache lookup has completed
Computer-Organization       Virtual Memory       Gate-2008
Question 81 Explanation:
The TLB is a mini page table containing the frequently accessed page table entries. Since the logical (virtual) address is the effective address, the TLB can be accessed only after the effective address has been calculated. Hence option C is the correct answer.
 Question 82
Consider a machine with a 2-way set associative data cache of size 64 Kbytes and block size 16 bytes. The cache is managed using 32-bit virtual addresses and the page size is 4 Kbytes. A program to be run on this machine begins as follows: double ARR[1024][1024]; int i, j; /* Initialize array ARR to 0.0 */ for (i = 0; i < 1024; i++) for (j = 0; j < 1024; j++) ARR[i][j] = 0.0; The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. The total size of the tags in the cache directory is
 A 32Kbits B 34Kbits C 64Kbits D 68Kbits
Computer-Organization       Cache       Gate-2008
Question 82 Explanation:
It is given that cache size = 64KB
Block size = 16 Bytes = 24 Bytes, so block offset = 4 bits
Number of blocks in the cache = 64KB / 16B = 4K
It is 2-way set associative cache, so each set contains 2 blocks.
So, number of sets = 4K / 2 = 2K = 2^11
So, we need 11 bits for set indexing.
Since the address is 32 bits long, number of tag bits = 32−11−4 = 17
Total size of the tag directory = No. of tag bits ×Number of cache blocks =17×4K
=68 Kbits
 Question 83
Consider a machine with a 2-way set associative data cache of size 64 Kbytes and block size 16 bytes. The cache is managed using 32-bit virtual addresses and the page size is 4 Kbytes. A program to be run on this machine begins as follows: double ARR[1024][1024]; int i, j; /* Initialize array ARR to 0.0 */ for (i = 0; i < 1024; i++) for (j = 0; j < 1024; j++) ARR[i][j] = 0.0; The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. Which of the following array elements has the same cache index as ARR[0][0]?
 A ARR [0] [4] B ARR [4] [0] C ARR [0] [5] D ARR [5] [0]
Computer-Organization       Cache       Gate-2008
Question 83 Explanation:
It is given that cache size = 64KB
Block size = 16 Bytes
Number of blocks in the cache = 64KB / 16B = 4K
It is 2-way set associative cache, so each set contains 2 blocks.
So, number of sets = 4K / 2 = 2K = 2^11
Each element size = 8B and size of each block = 16 B
No. of elements in one block = 16/8 = 2. We know the no. of elements in one row of the array = 1024. So, no. of blocks for one row of the array = 1024/2 = 512.
We know there are 2^11 sets, and each set has two blocks.
For any element to share the same cache index as ARR[0][0], it has to belong to the same set.
ARR[0][0] belongs to set-0. The next element that belongs to set-0 will have block number 2048, because 2048 mod 2^11 = 0.
Block number 2048 will be from row number 2048/512 = 4; since each row has 512 blocks, we divide the block number by 512 to get the row number.
From the given options ARR[4][0] will have the same cache index as ARR[0][0].
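The set-index reasoning can be checked numerically. Below is a Python sketch written for this page (the `set_index` helper and its structure are made up; the base address assumes 4 KB pages, so page 0xFF000 starts at byte address 0xFF000 × 4096):

```python
BLOCK = 16     # bytes per cache block
SETS = 2048    # 64 KB / 16 B = 4K blocks, 2-way -> 2K sets
ELEM = 8       # sizeof(double)
COLS = 1024    # elements per row of ARR

def set_index(i, j, base=0xFF000 * 4096):
    """Cache set that ARR[i][j] maps to (row-major layout)."""
    addr = base + (i * COLS + j) * ELEM
    return (addr // BLOCK) % SETS

print(set_index(4, 0) == set_index(0, 0))  # True
```

ARR[4][0] lies exactly 4 × 1024 × 8 = 32768 bytes (2048 blocks) after ARR[0][0], and 2048 mod 2^11 = 0, so both map to the same set.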
 Question 84
Consider a machine with a 2-way set associative data cache of size 64 Kbytes and block size 16 bytes. The cache is managed using 32-bit virtual addresses and the page size is 4 Kbytes. A program to be run on this machine begins as follows: double ARR[1024][1024]; int i, j; /* Initialize array ARR to 0.0 */ for (i = 0; i < 1024; i++) for (j = 0; j < 1024; j++) ARR[i][j] = 0.0; The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0xFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. The cache hit ratio for this initialization loop is
 A 0% B 25% C 50% D 75%
Computer-Organization       Cache       Gate-2008
Question 84 Explanation:
Block size = 16B and it is given that double size is 8B, so one element = 8B.
So in one block 2 elements will be stored.
To store 1024×1024 elements, no. of blocks needed = 1024×1024/2 = 2^20/2 = 2^19.
In each block the first element will be a miss and the second element will be a hit, because on every miss the entire block is brought into the cache. So, there will be 2^19 hits and 2^19 misses. Total no. of references = no. of elements in the array = 2^20
⇒ Hit ratio = No. of hits / Total no. of references
= 2^19/2^20 = 1/2 = 0.5
= 0.5 × 100 = 50%
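Because the loop walks the array sequentially, only the most recently fetched block matters, so the hit ratio can be simulated without modelling the whole cache (a sketch written for this page under that observation):

```python
hits = misses = 0
cached_block = None   # sequential access: only the current block matters
for i in range(1024):
    for j in range(1024):
        addr = (i * 1024 + j) * 8    # byte offset of ARR[i][j], 8-byte doubles
        block = addr // 16           # 16-byte cache blocks
        if block == cached_block:
            hits += 1
        else:
            misses += 1
            cached_block = block     # miss brings the whole block in
print(hits / (hits + misses))  # 0.5
```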
 Question 85
Delayed branching can help in the handling of control hazards. For all delayed conditional branch instructions, irrespective of whether the condition evaluates to true or false,
 A The instruction following the conditional branch instruction in memory is executed. B The first instruction in the fall through path is executed. C The first instruction in the taken path is executed. D The branch takes longer to execute than any other instruction.
Computer-Organization       Pipelining       Gate-2008
Question 85 Explanation:
In order to avoid the pipeline delay due to conditional branch instruction, a suitable instruction is placed below the conditional branch instruction such that the instruction will be executed irrespective of whether branch is taken or not and won't affect the program behaviour. Hence option A is the answer.
 Question 86
Delayed branching can help in the handling of control hazards. The following code is to run on a pipelined processor with one branch delay slot:
I1: ADD R2 ← R7 + R8
I2: SUB R4 ← R5 - R6
I3: ADD R1 ← R2 + R3
I4: STORE Memory[R4] ← R1
BRANCH to Label if R1 == 0
Which of the instructions I1, I2, I3 or I4 can legitimately occupy the delay slot without any other program modification?
 A I1 B I2 C I3 D I4
Computer-Organization       Pipelining       Gate-2008
Question 86 Explanation:
It is the method to maximize the use of the pipeline by finding and executing an instruction that can be safely executed whether the branch is taken or not. So, when a branch instruction is encountered, the hardware puts the instruction following the branch into the pipe and begins executing it. Here we do not need to worry about whether the branch is taken or not, as we do not need to clear the pipe because no matter whether the branch is taken or not, we know the instruction is safe to execute.
From the given set of instructions I3 is updating R1, and the branch condition is based on the value of R1 so I3 can’t be executed in the delay slot.
Instruction I1 is updating the value of R2 and R2 is used in I3. So I1 also can’t be executed in the delay slot.
Instruction I2 is updating R4, and at the memory location represented by R4 the value of R1 is stored. So if I2 is executed in the delay slot then the memory location where R1 is to be stored as part of I4 will be in a wrong place. Hence between I2 and I4, I2 can’t be executed after I4. Hence I2 can’t be executed in the delay slot.
Instruction I4 can be executed in the delay slot as this is storing the value of R1 in a memory location and executing this in the delay slot will have no effect. Hence option D is the answer.
 Question 87
How many bytes of data can be sent in 15 seconds over a serial link with baud rate of 9600 in asynchronous mode with odd parity and two stop bits in the frame?
 A 10,000 bytes B 12,000 bytes C 15,000 bytes D 27,000 bytes
Computer-Organization       Serial-Communication       Gate 2008-IT
Question 87 Explanation:
In the asynchronous mode of transmission, along with each data byte we need to send some extra bits such as start, stop and parity bits.
Each frame here carries 1 start bit, 8 data bits, 1 parity bit and 2 stop bits, i.e., 12 bits per byte. At 9600 baud, 9600/12 = 800 bytes can be sent per second, so in 15 seconds 800 × 15 = 12,000 bytes are transferred.
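The framing arithmetic as a tiny Python sketch (written for this page; variable names are made up):

```python
baud = 9600                   # signalling rate: bits per second
frame_bits = 1 + 8 + 1 + 2    # start + data + odd parity + 2 stop bits
bytes_per_sec = baud // frame_bits   # 800 frames (i.e. bytes) per second
print(bytes_per_sec * 15)     # 12000 bytes in 15 seconds
```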
 Question 88

Assume that EA = (X)+ is the effective address equal to the contents of location X, with X incremented by one word length after the effective address is calculated; EA = −(X) is the effective address equal to the contents of location X, with X decremented by one word length before the effective address is calculated; EA = (X)− is the effective address equal to the contents of location X, with X decremented by one word length after the effective address is calculated. The format of the instruction is (opcode, source, destination), which means (destination ← source op destination). Using X as a stack pointer, which of the following instructions can pop the top two elements from the stack, perform the addition operation and push the result back to the stack.

Computer-Organization       Machine-Instructions       Gate 2008-IT
Question 88 Explanation:
Answer is A as 998 ← 1000 + 998(These are the memory locations).
Let's say SP is 1000 initially. After the effective address of the source is calculated (which is 1000, since X is decremented after the EA calculation), the destination becomes 998, and this is where we want to store the result since the stack grows downwards.
In case of C and D it becomes,
998 ← 998 + 998
 Question 89

Consider a CPU where all the instructions require 7 clock cycles to complete execution. There are 140 instructions in the instruction set. It is found that 125 control signals are needed to be generated by the control unit. While designing the horizontal microprogrammed control unit, single address field format is used for branch control logic. What is the minimum size of the control word and control address register?

 A 125, 7 B 125, 10 C 135, 7 D 135, 10
Computer-Organization       Micro-Programmed-Control-Unit       Gate 2008-IT
Question 89 Explanation:
Each instruction takes 7 cycles,
i.e., 140 instructions take 140 × 7 = 980 cycles (microinstructions).
So, size of the control address register = ⌈log2 980⌉
= 10 bits
Since horizontal microprogramming is used, so 125 control signals will require 125 bits.
Hence, size of control word = 125 + 10 = 135 bits
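A quick Python check of the two sizes (a sketch written for this page, not from the original solution):

```python
import math

microinstructions = 140 * 7                       # 980 at most
car_bits = math.ceil(math.log2(microinstructions))   # control address register
control_word = 125 + car_bits   # horizontal: 1 bit/signal + 1 address field
print(control_word, car_bits)   # 135 10
```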
 Question 90

A non-pipelined single cycle processor operating at 100 MHz is converted into a synchronous pipelined processor with five stages requiring 2.5 nsec, 1.5 nsec, 2 nsec, 1.5 nsec and 2.5 nsec, respectively. The delay of the latches is 0.5 nsec. The speedup of the pipeline processor for a large number of instructions is

 A 4.5 B 4 C 3.33 D 3
Computer-Organization       Pipelining       Gate 2008-IT
Question 90 Explanation:
For the non-pipelined system, time per instruction = 2.5 + 1.5 + 2.0 + 1.5 + 2.5 = 10 ns
For the pipelined system, cycle time = max(stage delay) + latch delay = 2.5 + 0.5 = 3.0 ns
Speedup = Time in non-pipelined system / Time in pipelined system = 10/3 = 3.33
 Question 91
Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. The number of bits in the TAG, SET and WORD fields, respectively are:
 A 7, 6, 7 B 8, 5, 7 C 8, 6, 6 D 9, 4, 7
Computer-Organization       Cache       Gate 2008-IT
Question 91 Explanation:
Block size = 128 words and word size = 1 byte, so the WORD field needs 7 bits (2^7 = 128).
Number of cache blocks = 8 KB / 128 B = 64. It is a 4-way set-associative cache, so number of sets = 64/4 = 16, and the SET field needs 4 bits.
Main memory is 1 MB, so the address is 20 bits long. No. of TAG bits = 20 - 4 - 7 = 9.
Hence option D (9, 4, 7) is the answer.
 Question 92
Consider a computer with a 4-way set-associative mapped cache of the following characteristics: a total of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. While accessing the memory location 0C795H by the CPU, the contents of the TAG field of the corresponding cache line is
 A 000011000 B 110001111 C 00011000 D 110010101
Computer-Organization       Cache       Gate 2008-IT
Question 92 Explanation:
As found in the earlier question, the first 9 bits of the address are the TAG bits.
Converting the given hexadecimal number into binary: 0C795H = 0000 1100 0111 1001 0101. The first 9 bits are 000011000, which is option A.
 Question 93
Consider a 4-way set associative cache consisting of 128 lines with a line size of 64 words. The CPU generates a 20-bit address of a word in main memory. The number of bits in the TAG, LINE and WORD fields are respectively:
 A 9, 6, 5 B 7, 7, 6 C 7, 5, 8 D 9, 5, 6
Computer-Organization       Cache       Gate-2007
Question 93 Explanation:
Cache has 128 lines.
Each line size 64 words, so no. of bits for WORD = 6 bits
Because it is 4-way set associative cache, no. of sets in the cache = 128/4 = 32 = 25
No. of bits for the set number = 5
Because the address is 20-bits long, no. of TAG bits = 20-5-6 = 9
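The field-width arithmetic in Questions 91 and 93 follows one pattern; a short Python sketch (the helper name `cache_fields` is illustrative) reproduces it:

```python
# Sketch: derive TAG / SET / WORD widths for a set-associative cache.
from math import log2

def cache_fields(addr_bits, lines, ways, words_per_line):
    word = int(log2(words_per_line))   # offset within a block
    sets = lines // ways               # lines grouped into ways-sized sets
    set_bits = int(log2(sets))
    tag = addr_bits - set_bits - word
    return tag, set_bits, word

print(cache_fields(20, 128, 4, 64))    # (9, 5, 6) -> Question 93, option D
print(cache_fields(20, 64, 4, 128))    # (9, 4, 7) -> Question 91, option D
```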
 Question 94
Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of data are stored in a bit serial manner in a sector. The capacity of the disk pack and the number of bits required to specify a particular sector in the disk are respectively:
 A 256 Mbyte, 19 bits B 256 Mbyte, 28 bits C 512 Mbyte, 20 bits D 64 Gbyte, 28 bit
Computer-Organization       Secondary-Storage       Gate-2007
Question 94 Explanation:
Given that the disk pack has 16 surfaces, 128 tracks per surface, 256 sectors per track and each sector size is 512 bytes.
So the disk pack capacity = 16 * 128 * 256 * 512 bytes = 256 MB
To specify a sector we need the information about surface number, track number and sector number within a track. The surface number needs 4 bits as there are 16 surfaces (2^4), the track number needs 7 bits as there are 128 tracks (2^7) within a surface, and within a track the sector number needs 8 bits as there are 256 sectors (2^8). Total number of bits needed to specify a particular sector = 4+7+8 = 19 bits. Hence option A is the answer.
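The same capacity and addressing arithmetic in a quick sketch:

```python
# Capacity and sector-addressing arithmetic for Question 94.
from math import log2

surfaces, tracks, sectors, sector_bytes = 16, 128, 256, 512
capacity = surfaces * tracks * sectors * sector_bytes
bits = int(log2(surfaces)) + int(log2(tracks)) + int(log2(sectors))
print(capacity // 2**20, bits)  # 256 (MB), 19 -> option A
```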
 Question 95
Consider a pipelined processor with the following four stages:
IF: Instruction Fetch
ID: Instruction Decode and Operand Fetch
EX: Execute
WB: Write Back
The IF, ID and WB stages take one clock cycle each to complete the operation. The number of clock cycles for the EX stage depends on the instruction. The ADD and SUB instructions need 1 clock cycle and the MUL instruction needs 3 clock cycles in the EX stage. Operand forwarding is used in the pipelined processor. What is the number of clock cycles taken to complete the following sequence of instructions?
ADD R2, R1, R0       R2 <- R0 + R1
MUL R4, R3, R2       R4 <- R3 * R2
SUB R6, R5, R4       R6 <- R5 - R4
 A 7 B 8 C 10 D 14
Computer-Organization       Pipelining       Gate-2007
Question 95 Explanation:
Since operand forwarding is there, by default we consider the operand forwarding from EX stage to EX stage.

So, total no. of clock cycles needed to execute the given 3 instructions is 8.
 Question 96
In a simplified computer the instructions are: Assume that all operands are initially in memory. The final value of the computation should be in memory. What is the minimum number of MOV instructions in the code generated for this basic block?
 A 2 B 3 C 5 D 6
Computer-Organization       Machine-Instructions       Gate-2007
Question 96 Explanation:
We can write the given four instructions using OP and MOV operations as below.
MOV a, R1
MOV c, R2
SUB e, R2
SUB R1, R2
MOV R2, m
So, from the above total no. of MOV instructions = 3
 Question 97
Consider the following program segment. Here R1, R2 and R3 are the general purpose registers. Assume that the content of memory location 3000 is 10 and the content of the register R3 is 2000. The content of each of the memory locations from 2000 to 2010 is 100. The program is loaded from the memory location 1000. All the numbers are in decimal. Assume that the memory is word addressable. The number of memory references for accessing the data in executing the program completely is:
 A 10 B 11 C 20 D 21
Computer-Organization       Machine-Instructions       Gate-2007
Question 97 Explanation:
Firstly before the loop one memory reference is,
R1 ← m[3000]
Now the loop will run 10 times because R1 contains the value 10, and each iteration of the loop makes two memory references,
R2 ← M[R3]
M[R3] ← R2
So, in total, the no. of memory references are,
2(10) + 1 = 21
 Question 98
Consider the following program segment. Here R1, R2 and R3 are the general purpose registers. Assume that the content of memory location 3000 is 10 and the content of the register R3 is 2000. The content of each of the memory locations from 2000 to 2010 is 100. The program is loaded from the memory location 1000. All the numbers are in decimal. Assume that the memory is word addressable. After the program is executed completely, the value of memory location 2010 is:
 A 100 B 101 C 102 D 110
Computer-Organization       Machine-Instructions       Gate-2007
Question 98 Explanation:
The loop executes 10 times since the value stored in R1 in the first step is 10.
The loop changes the values at memory locations 2000 to 2009 and does not touch memory location 2010.
Hence, the value at memory location 2010 remains the old value, i.e., 100.
 Question 99

 A 1005 B 1020 C 1024 D 1040
Computer-Organization       Machine-Instructions       Gate-2007
Question 99 Explanation:
Memory is byte addressable and each instruction occupies 1 word, i.e., 4 bytes, such that
→ Starts at memory location 1000.

The interrupt occurs during the execution of the instruction “INC R3”.
Then the return address, i.e., 1024, is pushed onto the stack.
 Question 100
Consider a machine with a byte addressable main memory of 2^16 bytes. Assume that a direct mapped data cache consisting of 32 lines of 64 bytes each is used in the system. A 50 × 50 two-dimensional array of bytes is stored in the main memory starting from memory location 1100H. Assume that the data cache is initially empty. The complete array is accessed twice. Assume that the contents of the data cache do not change in between the two accesses. How many data cache misses will occur in total?
 A 40 B 50 C 56 D 59
Computer-Organization       Cache       Gate-2007
Question 100 Explanation:
No. of bits to represent an address
= log2(2^16) = 16 bits
Cache line size is 64 bytes, so the offset field needs
log2(2^6) = 6 bits
No. of lines in the cache is 32, so the line field needs
log2(2^5) = 5 bits
So, tag bits = 16 - 6 - 5 = 5 bits
No. of elements in array is
50 × 50 = 2500
and each element is of 1 byte.
So, total size of array is 2500 bytes.
So, total no. of lines it will require to get into the cache,
⌈2500/64⌉ = 40
Now, given starting address of array,

Now we need 40 cache lines to hold all array elements, but we have only 32 cache lines.
Now lets divide 2500 array elements into 40 array lines, each of which will contain 64 of its elements.
Now,

So, when the complete array is accessed twice, the total no. of cache misses = 40 (first access) + 8 + 8 (second access: A1 to A8 evict A33 to A40 from lines 4 to 11, and A33 to A40 later evict them back) = 56.
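The miss count can be verified with a minimal direct-mapped cache simulation under the question's parameters (variable names are illustrative):

```python
# Question 100: 32-line direct-mapped cache, 64-byte blocks,
# 2500-byte array at 1100H, traversed twice.
LINES, BLOCK = 32, 64
base, size = 0x1100, 50 * 50

cache = [None] * LINES          # cache[line] holds the block number it stores
misses = 0
for _ in range(2):              # the array is accessed twice
    for addr in range(base, base + size):
        block = addr // BLOCK
        line = block % LINES
        if cache[line] != block:
            cache[line] = block
            misses += 1
print(misses)  # 56 -> option C
```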
 Question 101
Consider a machine with a byte addressable main memory of 2^16 bytes. Assume that a direct mapped data cache consisting of 32 lines of 64 bytes each is used in the system. A 50 × 50 two-dimensional array of bytes is stored in the main memory starting from memory location 1100H. Assume that the data cache is initially empty. The complete array is accessed twice. Assume that the contents of the data cache do not change in between the two accesses. Which of the lines of the data cache will be replaced by new blocks when the array is accessed for the second time?
 A line 4 to line 11 B line 4 to line 12 C line 0 to line 7 D line 0 to line 8
Computer-Organization       Cache       Gate-2007
Question 101 Explanation:
As seen in the previous question, when the array is accessed the second time, blocks A33 to A40 in lines 4 to 11 are replaced by A1 to A8; blocks A9 to A32 are then found in the cache, and finally A1 to A8 are replaced by A33 to A40 again.
 Question 102
A processor takes 12 cycles to complete an instruction I. The corresponding pipelined processor uses 6 stages with the execution times of 3, 2, 5, 4, 6 and 2 cycles respectively. What is the asymptotic speedup assuming that a very large number of instructions are to be executed?
 A 1.83 B 2 C 3 D 6
Computer-Organization       Pipelining       Gate 2007-IT
Question 102 Explanation:
Let there be n instructions.
For a non-pipelined processor each instruction takes 12 cycles.
So for n instructions total execution time be 12 × n = 12n
For the pipelined processor the clock period is determined by the slowest stage,
max(3, 2, 5, 4, 6, 2) = 6 cycles.
So for n instructions the total execution time is
(6 + (n - 1)) × 6
= (n + 5) × 6
= 6n + 30
Speedup = 12n/(6n + 30)
So, if n is very large, speedup → 12n/6n = 2.
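The limit can be checked numerically with a short sketch:

```python
# Question 102: asymptotic speedup of the 6-stage pipeline.
def speedup(n):
    t_seq = 12 * n                  # 12 cycles per instruction, unpipelined
    t_pipe = (6 + (n - 1)) * 6      # n instructions, 6-cycle stage time
    return t_seq / t_pipe

print(round(speedup(10**6), 3))  # approaches 2.0 for large n
```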
 Question 103

The floating point unit of a processor using a design D takes 2t cycles compared to t cycles taken by the fixed point unit. There are two more design suggestions D1 and D2. D1 uses 30% more cycles for fixed point unit but 30% less cycles for floating point unit as compared to design D. D2 uses 40% less cycles for fixed point unit but 10% more cycles for floating point unit as compared to design D. For a given program which has 80% fixed point operations and 20% floating point operations, which of the following ordering reflects the relative performances of three designs? (Di > Dj denotes that Di is faster than Dj)

 A D1 > D > D2 B D2 > D > D1 C D > D2 > D1 D D > D1 > D2
Computer-Organization       Clock-Frequency       Gate 2007-IT
Question 103 Explanation:
T = 0.8 × time taken in fixed point + 0.2 × time taken in floating point
D = 0.8 × t + 0.2 × 2t = 1.2t
D1 = 0.8 × 1.3t + 0.2 × 0.7 × 2t = 1.32t
D2 = 0.8 × 0.6t + 0.2 × 1.1 × 2t = 0.92t
⇒ D2 > D > D1
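The weighted-time comparison can be expressed directly (t is normalised to 1):

```python
# Question 103: execution time per operation mix for designs D, D1, D2.
t = 1.0
D  = 0.8 * t       + 0.2 * 2 * t          # baseline
D1 = 0.8 * 1.3 * t + 0.2 * 0.7 * 2 * t    # +30% fixed point, -30% floating point
D2 = 0.8 * 0.6 * t + 0.2 * 1.1 * 2 * t    # -40% fixed point, +10% floating point
print(round(D, 2), round(D1, 2), round(D2, 2))  # 1.2 1.32 0.92 -> D2 > D > D1
```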
 Question 104
Consider a Direct Mapped Cache with 8 cache blocks (numbered 0-7). If the memory block requests are in the following order 3, 5, 2, 8, 0, 63, 9,16, 20, 17, 25, 18, 30, 24, 2, 63, 5, 82,17, 24. Which of the following memory blocks will not be in the cache at the end of the sequence ?
 A 3 B 18 C 20 D 30
Computer-Organization       Cache       Gate 2007-IT
Question 104 Explanation:

So memory block 18 is not in the cache.
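Since the mapping table (figure) is missing above, the reference string can be replayed directly on an 8-line direct-mapped cache:

```python
# Question 104: each block maps to line (block mod 8); later blocks evict earlier ones.
requests = [3, 5, 2, 8, 0, 63, 9, 16, 20, 17, 25, 18, 30, 24, 2, 63, 5, 82, 17, 24]
cache = [None] * 8
for b in requests:
    cache[b % 8] = b
print(sorted(cache))   # final cache contents
print(18 in cache)     # False -> block 18 was evicted, option B
```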
 Question 105

Data forwarding techniques can be used to speed up the operation in presence of data dependencies. Consider the following replacements of LHS with RHS.

In which of the following options, will the result of executing the RHS be the same as executing the LHS irrespective of the instructions that follow ?

 A (i) and (iii) B (i) and (iv) C (ii) and (iii) D (ii) and (iv) E Only (i)
Computer-Organization       Data-Dependencies       Gate 2007-IT
Question 105 Explanation:
(i) Is true. Both Loc and R2 are getting the value of R1 in LHS and RHS.
(ii) Is false. Because R2 gets the correct data in both LHS and RHS, but Loc is not updated in RHS.
(iii) Is false. Because R2 is writing last in LHS, but not in RHS.
(iv) Is true. The first write to Loc in LHS is useless as it is overwritten by the next write.
 Question 106
Following table indicates the latencies of operations between the instruction producing the result and instruction using the result.
 Load R1, Loc 1;	 Load R1 from memory location Loc1
Add R1, R2, R1;	 Add R1 and R2 and save result in R1
Dec R2;	 Decrement R2
Dec R1;	 Decrement R1
Mpy R1, R2, R3;	 Multiply R1 and R2 and save result in R3
Store R3, Loc 3;	 Store R3 in memory location Loc 3

What is the number of cycles needed to execute the above code segment assuming each instruction takes one cycle to execute ?

 A 7 B 10 C 13 D 14
Computer-Organization       Machine-Instructions       Gate 2007-IT
Question 106 Explanation:
In the given question, there are 6 instructions, each of which takes 1 clock cycle to execute.
Since only one instruction can be in the execution phase at a time, at least 6 clock cycles are needed.
Now, it is given that between two instructions latency or delay should be there based on their operation.
 Question 107
Consider a hard disk with the following specifications:
• Number of tracks = 500
• Number of sectors/track = 100
• Number of bytes /sector = 500
• Time taken by the head to move from one track to adjacent track = 1 ms
• Rotation speed = 600 rpm.
What is the average time taken for transferring 250 bytes from the disk ?
 A 300.5 ms B 255.5 ms C 255.0 ms D 300.0 ms
Computer-Organization       Secondary-Storage       Gate 2007-IT
Question 107 Explanation:
Avg. time to transfer = Avg. seek time + Avg. rotational delay + Data transfer time
For Avg. seek time:
Given that, time to move between successive tracks is 1ms.
Time to move from track1 to track1 = 0ms
Time to move from track1 to track2 = 1ms
Time to move from track1 to track3 = 2ms
⋮
Time to move from track1 to track500 = 499 ms
∴ Avg. seek time = (0+1+2+...+499)/500 = 249.5 ms
Avg. rotational delay:
600 rotations in 60sec.
One rotation takes 60/600 s = 100 ms.
∴ Avg. rotational delay = 100/2 = 50 ms
Data transfer time:
In one rotation, we can read data on one complete track
= 100 × 500 = 50,000B data is read in one complete rotation
One complete rotation takes 100ms.
100ms → 50,000B
250B → 100/50000 × 250 = 0.5ms
∴ Avg. time to transfer = 249.5 + 50 + 0.5 = 300 ms
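The three components add up as follows (a quick sketch, all times in ms):

```python
# Question 107: average time to transfer 250 bytes.
avg_seek = sum(range(500)) / 500          # seek of 0..499 ms equally likely -> 249.5
avg_rot  = (60_000 / 600) / 2             # half of one 100 ms rotation -> 50
transfer = 100 * 250 / (100 * 500)        # 250 B of the 50,000 B read per 100 ms rotation
print(avg_seek + avg_rot + transfer)  # 300.0 -> option D
```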
 Question 108
Consider two cache organizations: The first one is 32 KB 2-way set associative with 32-byte block size. The second one is of the same size but direct mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a k-bit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2. The value of h1 is:
 A 2.4 ns B 2.3 ns C 1.8 ns D 1.7 ns
Computer-Organization       Cache       Gate-2006
Question 108 Explanation:
Cache size = 32 KB = 32 × 2^10 B
Block size = 32 bytes
Blocks per set = 2 (2-way set associative)
No. of sets = Cache size / (Block size × Blocks per set) = (32 × 2^10 B)/(32 × 2) = 2^9
Index bits = 9
No. of offset bits = 5 (∵ cache block size is 32 bytes = 2^5 bytes)
No. of Tag bits = 32 - 9 - 5 = 18
Hit latency = multiplexer latency + comparator latency
= 0.6 + (18/10)
= 0.6 + 1.8
= 2.4 ns
 Question 109
A CPU has 24-bit instructions. A program starts at address 300 (in decimal). Which one of the following is a legal program counter (all values in decimal)?
 A 400 B 500 C 600 D 700
Computer-Organization       Machine-Instructions       Gate-2006
Question 109 Explanation:
The instruction is of 24 bits or 3 bytes. The program starts at address 300, so the next instructions will be at 303, then 306, then 309, and so on.
So finally we can say that the values in the program counter will always be the multiple of 3.
Hence, option (C) is correct.
 Question 110
A CPU has a 32 KB direct mapped cache with 128-byte block size. Suppose A is a two-dimensional array of size 512×512 with elements that occupy 8 bytes each. Consider the following two C code segments, P1 and P2.
P1:
 for (i=0; i<512; i++) {
    for (j=0; j<512; j++) {
       x += A[i][j];
    }
 }
P2:
 for (i=0; i<512; i++) {
    for (j=0; j<512; j++) {
       x += A[j][i];
    }
 }
P1 and P2 are executed independently with the same initial state, namely, the array A is not in the cache and i, j, x are in registers. Let the number of cache misses experienced by P1 be M1 and that for P2 be M2 . The value of the ratio M1/M2 is:
 A 0 B 1/16 C 1/8 D 16
Computer-Organization       Cache       Gate-2006
Question 110 Explanation:
Total misses for P2 (column-major access) = 512 × 512 = 262144
(every element access is a miss)
Total misses for P1 (row-major access) = (512 × 512)/16 = 16384, since each 128-byte block holds 16 elements.
M1/M2 = 16384/262144 = 1/16
 Question 111
A CPU has a 32 KB direct mapped cache with 128-byte block size. Suppose A is a two-dimensional array of size 512×512 with elements that occupy 8 bytes each. Consider the following two C code segments, P1 and P2.
P1:
 for (i=0; i<512; i++) {
    for (j=0; j<512; j++) {
       x += A[i][j];
    }
 }
P2:
 for (i=0; i<512; i++) {
    for (j=0; j<512; j++) {
       x += A[j][i];
    }
 }
P1 and P2 are executed independently with the same initial state, namely, the array A is not in the cache and i, j, x are in registers. Let the number of cache misses experienced by P1 be M1 and that for P2 be M2 . The value of M1 is:
 A 0 B 2048 C 16384 D 262144
Computer-Organization       Cache       Gate-2006
Question 111 Explanation:
[P1] Access A in row major order.
[P2] Access A in column major order.
No. of cache blocks = Cache size/Block size = 32 KB/128 B = 2^15/2^7 = 256
No. of array elements in each block = Block size/Element size = 128 B/8 B = 16
Total misses for P1 = Total no. of elements/Elements per block = (512×512)/16 = 16384
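Both miss counts can be reproduced by simulating the direct-mapped cache, assuming the array starts at a block-aligned address (alignment does not change these counts):

```python
# Questions 110/111: 32 KB direct-mapped cache, 128 B lines, 512x512 array of 8 B elements.
LINES, BLOCK, ELEM = 256, 128, 8

def misses(column_major):
    cache = [None] * LINES
    count = 0
    for i in range(512):
        for j in range(512):
            r, c = (j, i) if column_major else (i, j)   # P2 walks columns
            block = (r * 512 * ELEM + c * ELEM) // BLOCK
            line = block % LINES
            if cache[line] != block:
                cache[line] = block
                count += 1
    return count

m1, m2 = misses(False), misses(True)
print(m1, m2)  # 16384 262144 -> M1/M2 = 1/16
```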
 Question 112
Consider two cache organizations: The first one is 32 KB 2-way set associative with 32-byte block size. The second one is of the same size but direct mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a k-bit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2. The value of h2 is:
 A 2.4 ns B 2.3 ns C 1.8 ns D 1.7 ns
Computer-Organization       Cache       Gate-2006
Question 112 Explanation:
Hit latency = k/10 ns, where k is the number of tag bits compared.
For the direct mapped cache: offset bits = 5 (32-byte blocks) and line bits = 10 (32 KB/32 B = 2^10 lines),
so k = 32 - 10 - 5 = 17.
∴ Hit latency = 17/10 = 1.7 ns
 Question 113
Consider a new instruction named branch-on-bit-set (mnemonic bbs). The instruction “bbs reg, pos, label” jumps to label if bit in position pos of register operand reg is one. A register is 32 bits wide and the bits are numbered 0 to 31, bit in position 0 being the least significant. Consider the following emulation of this instruction on a processor that does not have bbs implemented.
temp ← reg & mask
Branch to label if temp is non-zero.
The variable temp is a temporary register. For correct emulation, the variable mask must be generated by:
Computer-Organization       Machine-Instructions       Gate-2006
Question 113 Explanation:
Using the operation "temp ← reg & mask" we are checking whether the bit at position pos in register reg is 1 or not. For that, mask should have 1 only in position pos; in all the other positions mask has 0s.
So for mask to have 1 only in position pos and 0s in all the other positions, we can get it by doing left shift on 1, pos number of times.
Out of the given options, in option A this left shift operation on 1 is performed pos number of times. Hence option A is the answer.
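The emulation can be sketched directly (the helper name `bbs_taken` is illustrative):

```python
# Question 113: mask = 1 << pos isolates bit 'pos' of the register.
def bbs_taken(reg, pos):
    mask = 1 << pos          # 1 shifted left pos times: set only at position pos
    temp = reg & mask
    return temp != 0         # branch is taken iff temp is non-zero

print(bbs_taken(0b1010, 1), bbs_taken(0b1010, 2))  # True False
```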
 Question 114
A CPU has a cache with block size 64 bytes. The main memory has k banks, each bank being c bytes wide. Consecutive c-byte chunks are mapped on consecutive banks with wrap-around. All the k banks can be accessed in parallel, but two accesses to the same bank must be serialized. A cache block access may involve multiple iterations of parallel bank accesses, depending on the amount of data obtained by accessing all the k banks in parallel. Each iteration requires decoding the bank numbers to be accessed in parallel and this takes k/2 ns. The latency of one bank access is 80 ns. If c = 2 and k = 24, the latency of retrieving a cache block starting at address zero from main memory is:
 A 92 ns B 104 ns C 172 ns D 184 ns
Computer-Organization       Cache       Gate-2006
Question 114 Explanation:
Cache with block size = 64 bytes
Latency = 80 ns
k = 24
No. of banks are accessed in parallel , then it takes k/2 ns = 24/2 = 12ns
Decoding time = 12ns
Size of each bank C = 2 bytes
Each Bank memory is 2 bytes, and there is 24 banks are there, in one iteration they may get 2*24 = 48 bytes
And getting 64 bytes requires 2 iteration.
T = decoding time + latency = 12+80 = 92
For 2 iterations = 92×2 = 184ns
 Question 115
A CPU has a five-stage pipeline and runs at 1 GHz frequency. Instruction fetch happens in the first stage of the pipeline. A conditional branch instruction computes the target address and evaluates the condition in the third stage of the pipeline. The processor stops fetching new instructions following a conditional branch until the branch outcome is known. A program executes 109 instructions out of which 20% are conditional branches. If each instruction takes one cycle to complete on average, the total execution time of the program is:
 A 1.0 second B 1.2 seconds C 1.4 seconds D 1.6 seconds
Computer-Organization       Pipelining       Gate-2006
Question 115 Explanation:
No. of total instructions = 10^9
20% of these are conditional branches
⇒ (20/100) × 10^9
⇒ 2 × 10^8
The branch outcome is known only after the third stage, so each conditional branch incurs 2 stall cycles.
Total cycle penalty = 2 × 2 × 10^8 = 4 × 10^8 cycles
Clock speed = 1 GHz
Each instruction takes 1 cycle on average, i.e., 10^9 cycles.
Total execution time of the program
= (10^9/10^9) + ((4 × 10^8)/10^9) = 1 + 0.4 = 1.4 seconds
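The same arithmetic as a quick sketch:

```python
# Question 115: execution time with branch stalls.
freq = 1e9                      # 1 GHz -> 10^9 cycles per second
n = 1e9                         # instructions
base_cycles = n                 # 1 cycle per instruction on average
branch_stalls = 0.2 * n * 2     # 20% branches, 2 stall cycles each (resolved in stage 3)
print((base_cycles + branch_stalls) / freq)  # 1.4 seconds
```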
 Question 116
Which of the following DMA transfer modes and interrupt handling mechanisms will enable the highest I/O band-width?
 A Transparent DMA and Polling interrupts B Cycle-stealing and Vectored interrupts C Block transfer and Vectored interrupts D Block transfer and Polling interrupts
Computer-Organization       Modes-of-Transfer       Gate 2006-IT
Question 116 Explanation:
→ Transparent DMA and polling maximize CPU bandwidth, but the question asks for I/O bandwidth, so option A is wrong.
→ In cycle stealing the device transfers one word at a time and then waits several CPU cycles before the next transfer, so option B is wrong.
→ With polling the CPU takes the initiative, so the I/O bandwidth cannot be high; option D is wrong.
→ In block transfer the device sends an entire block in one go, giving the highest I/O bandwidth. This makes option (C) correct.
 Question 117
 A It enables reduced instruction size B It allows indexing of array elements with same instruction C It enables easy relocation of data D It enables faster address calculations than absolute addressing
Question 117 Explanation:
 Question 118
The memory locations 1000, 1001 and 1020 have data values 18, 1 and 16 respectively before the following program is executed. MOVI Rs, 1 ; Move immediate LOAD Rd, 1000(Rs) ; Load from memory ADDI Rd, 1000 ; Add immediate STOREI 0(Rd), 20 ; Store immediate Which of the statements below is TRUE after the program is executed ?
 A Memory location 1000 has value 20 B Memory location 1020 has value 20 C Memory location 1021 has value 20 D Memory location 1001 has value 20
Computer-Organization       Machine-Instructions       Gate 2006-IT
Question 118 Explanation:
Rs ← 1
Rd ← M[1000 + 1] = M[1001] = 1
Rd ← 1 + 1000 = 1001
M[1001] ← 20
So memory location 1001 has value 20, i.e., option D.
 Question 119
The data path shown in the figure computes the number of 1s in the 32-bit input word corresponding to an unsigned even integer stored in the shift register. The unsigned counter, initially zero, is incremented if the most significant bit of the shift register is 1. The micro-program for the control is shown in the table below with missing control words for micro-instructions I1, I2, ..... In. The counter width (k), the number of missing micro-instructions (n), and the control word for micro-instructions I2, ..... In are, respectively,
 A 32, 5, 010 B 5, 32, 010 C 5, 31, 011 D 5, 31, 010
Computer-Organization       Micro-Instructions       Gate 2006-IT
Question 119 Explanation:
→ For a number to be even, its LSB must be 0.
→ So an unsigned even 32-bit integer can contain at most 31 ones, and the counter must count up to 31, which needs k = 5 bits.
→ Hence n = 31 shift micro-instructions are needed to examine all the bits that can be 1.
 Question 120

A cache line is 64 bytes. The main memory has latency 32 ns and bandwidth 1 GBytes/s. The time required to fetch the entire cache line from the main memory is

 A 32 ns B 64 ns C 96 ns D 128 ns
Computer-Organization       Cache       Gate 2006-IT
Question 120 Explanation:
For 1 GByte/s bandwidth → it takes 1 s to transfer 10^9 bytes.
→ So, for 64 bytes it will take 64/10^9 s = 64 ns.
Main memory latency = 32 ns.
Total time required to fetch the entire cache line is
64 + 32 = 96 ns
 Question 121
A computer system has a level-1 instruction cache (I-cache), a level-1 data cache (D-cache) and a level-2 cache (L2-cache) with the following specifications: The length of the physical address of a word in the main memory is 30 bits. The capacity of the tag memory in the I-cache, D-cache and L2-cache is, respectively,
 A 1K × 18-bit, 1K × 19-bit, 4K × 16-bit B 1K × 16-bit, 1K × 19-bit, 4K × 18-bit C 1K × 16-bit, 512 × 18-bit, 1K × 16-bit D 1K × 18-bit, 512 × 18-bit, 1K × 18-bit
Computer-Organization       Cache       Gate 2006-IT
Question 121 Explanation:
No. of blocks in cache = Capacity/Block size = 2^m
Bits to represent a block = m
No. of words in a block = 2^n
Bits to represent a word = n
Tag bits = (length of physical address of a word) - (bits to represent blocks) - (bits to represent a word)
⇒ Each block will have its own tag bits.
So, total tag bits = no. of blocks × tag bits.
 Question 122

A pipelined processor uses a 4-stage instruction pipeline with the following stages: Instruction fetch (IF), Instruction decode (ID), Execute (EX) and Writeback (WB). The arithmetic operations as well as the load and store operations are carried out in the EX stage. The sequence of instructions corresponding to the statement X = (S - R * (P + Q))/T is given below. The values of variables P, Q, R, S and T are available in the registers R0, R1, R2, R3 and R4 respectively, before the execution of the instruction sequence.

The number of Read-After-Write (RAW) dependencies, Write-After-Read( WAR) dependencies, and Write-After-Write (WAW) dependencies in the sequence of instructions are, respectively,

 A 2, 2, 4 B 3, 2, 3 C 4, 2, 2 D 3, 3, 2
Computer-Organization       Pipelining       Gate 2006-IT
Question 122 Explanation:
RAW:
I1 - I2 (R5)
I2 - I3 (R6)
I3 - I4 (R5)
I4 - I5 (R6)
WAR:
I2 - I3 (R5)
I3 - I4 (R6)
WAW:
I1 - I3 (R5)
I3 - I4 (R6)
 Question 123
A pipelined processor uses a 4-stage instruction pipeline with the following stages: Instruction fetch (IF), Instruction decode (ID), Execute (EX) and Writeback (WB). The arithmetic operations as well as the load and store operations are carried out in the EX stage. The sequence of instructions corresponding to the statement X = (S - R * (P + Q))/T is given below. The values of variables P, Q, R, S and T are available in the registers R0, R1, R2, R3 and R4 respectively, before the execution of the instruction sequence. The IF, ID and WB stages take 1 clock cycle each. The EX stage takes 1 clock cycle each for the ADD, SUB and STORE operations, and 3 clock cycles each for MUL and DIV operations. Operand forwarding from the EX stage to the ID stage is used. The number of clock cycles required to complete the sequence of instructions is
 A 10 B 12 C 14 D 16
Computer-Organization       Pipelining       Gate 2006-IT
Question 123 Explanation:
 Question 124

Consider the following data path of a CPU.

The ALU, the bus and all the registers in the data path are of identical size. All operations including incrementation of the PC and the GPRs are to be carried out in the ALU. Two clock cycles are needed for a memory read operation - the first one for loading the address in the MAR and the next one for loading the data from the memory bus into the MDR. The instruction "call Rn, sub" is a two word instruction. Assuming that PC is incremented during the fetch cycle of the first word of the instruction, its register transfer interpretation is

Rn <= PC + 1;
PC <= M[PC];

The minimum number of CPU clock cycles needed during the execution cycle of this instruction is:

 A 2 B 3 C 4 D 5
Computer-Organization       Machine-Instructions       Gate-2005
Question 124 Explanation:
1 cycle → Increment
1 cycle → Load PC to MAR
1 cycle → Fetch memory and load to PC
----------------------------------------------
3 cycles
----------------------------------------------
 Question 125
Consider the following data path of a CPU. The ALU, the bus and all the registers in the data path are of identical size. All operations including incrementation of the PC and the GPRs are to be carried out in the ALU. Two clock cycles are needed for a memory read operation - the first one for loading the address in the MAR and the next one for loading the data from the memory bus into the MDR. The instruction "add R0, R1" has the register transfer interpretation R0 <= R0 + R1. The minimum number of clock cycles needed for the execution cycle of this instruction is.
 A 2 B 3 C 4 D 5
Computer-Organization       Machine-Instructions       Gate-2005
Question 125 Explanation:
Required minimum no. of clock cycles = 3:
1) Move R0 over the bus to one ALU input.
2) Move R1 over the bus to the other ALU input and perform the addition.
3) Move the ALU output back to R0.
 Question 126
Consider a three word machine instruction
ADD A[R0], @ B
The first operand (destination) "A [R0]" uses indexed addressing mode with R0 as the index register. The second operand (source) "@ B" uses indirect addressing mode. A and B are memory addresses residing at the second and the third words, respectively. The first word of the instruction specifies the opcode, the index register designation and the source and destination addressing modes. During execution of ADD instruction, the two operands are added and stored in the destination (first operand). The number of memory cycles needed during the execution cycle of the instruction is
 A 3 B 4 C 5 D 6
Computer-Organization       Machine-Instructions       Gate-2005
Question 126 Explanation:
Total 4 memory cycles are required:
1 → To read the address stored at location B (Read).
1 → To get the 2nd operand from the address given by the previous read (Read).
1 → To read the 1st operand from A[R0] (Read).
1 → To store the result in the destination A[R0] (Write).
3R + 1W = 4
 Question 127
Match each of the high level language statements given on the left hand side with the most natural addressing mode from those listed on the right hand side.
 1. A[1] = B[J];     a. Indirect addressing
2. while [*A++];     b. Indexed addressing
3. int temp = *x;    c. Autoincrement
 A (1, c), (2, b), (3, a) B (1, a), (2, c), (3, b) C (1, b), (2, c), (3, a) D (1, a), (2, b), (3, c)
Computer-Organization       Match-the-Following       Gate-2005
Question 127 Explanation:
(1) A[1] = B[J]; → Indexed addressing, since an index register is used.
(2) while [*A++]; → Autoincrement, since the pointer is incremented automatically.
(3) int temp = *x; → Indirect addressing, since temp is assigned the integer value at the address contained in x.
Hence (1, b), (2, c), (3, a), i.e., option C.
 Question 128
Consider a direct mapped cache of size 32 KB with block size 32 bytes. The CPU generates 32 bit addresses. The number of bits needed for cache indexing and the number of tag bits are respectively.
 A 10, 17 B 10, 22 C 15, 17 D 5, 17
Computer-Organization       Cache       Gate-2005
Question 128 Explanation:
No. of blocks = cache size/block size = 32 KB/32 B = 2^10
So indexing requires 10 bits.
No. of offset bit required to access 32 byte block = 5
So, No. of TAG bits = 32 - 10 - 5 = 17
 Question 129
A 5 stage pipelined CPU has the following sequence of stages:
IF — Instruction fetch from instruction memory,
RD — Instruction decode and register read,
EX — Execute: ALU operation for data and address computation,
MA — Data memory access - for write access, the register read
at RD stage is used,
WB — Register write back.
Consider the following sequence of instructions:
I1 : L R0, 1oc1;        R0 <= M[1oc1]
I2 : A R0, R0;           R0 <= R0 + R0
I3 : S R2, R0;           R2 <= R2 - R0
Let each stage take one clock cycle.
What is the number of clock cycles taken to complete the above sequence of instructions starting from the fetch of I1 ?
 A 8 B 10 C 12 D 15
Computer-Organization       Pipelining       Gate-2005
Question 129 Explanation:
With operand forwarding from the MA stage (the loaded value is available only after memory access), the sequence completes in 8 clock cycles.
Without operand forwarding it would take 11 clock cycles.
Since 11 is not among the options, the answer is 8.
 Question 130

A device with data transfer rate 10 KB/sec is connected to a CPU. Data is transferred byte-wise. Let the interrupt overhead be 4 μsec. The byte transfer time between the device interfaces register and CPU or memory is negligible. What is the minimum performance gain of operating the device under interrupt mode over operating it under program-controlled mode?

 A 15 B 25 C 35 D 45
Computer-Organization       Interrupt       Gate-2005
Question 130 Explanation:
Data transfer rate = 10 KB/sec
In 1 sec it transfers 10 × 10^3 B = 10^4 B, i.e., 10^4 interrupts/sec in interrupt mode.
Interrupt overhead = 4 μsec = 4 × 10^-6 sec
Performance gain = 1/(4 × 10^-6 × 10^4) = 10^2/4 = 25
 Question 131
Consider a disk drive with the following specifications: 16 surfaces, 512 tracks/surface, 512 sectors/track, 1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle stealing mode whereby whenever one byte word is ready it is sent to memory; similarly, for writing, the disk interface reads a 4 byte word from the memory in each DMA cycle. Memory cycle time is 40 nsec. The maximum percentage of time that the CPU gets blocked during DMA operation is:
 A 10 B 25 C 40 D 50
Computer-Organization       Secondary-Storage       Gate-2005
Question 131 Explanation:
Time for one rotation = 60/3000 s = 20 ms
One track of 512 sectors × 1 KB = 512 KB is read in one rotation.
Time to read 4 B = (20 ms/(512 × 1024)) × 4 ≈ 153 ns, so a 4-byte word is ready roughly every 160 ns.
Each DMA transfer then blocks the CPU for one memory cycle of 40 ns.
Maximum percentage of time the CPU is blocked = (40 × 100)/160 = 25
 Question 132
Which one of the following is true for a CPU having a single interrupt request line and a single interrupt grant line?
 A Neither vectored interrupt nor multiple interrupting devices are possible. B Vectored interrupts are not possible but multiple interrupting devices are possible. C Vectored interrupts and multiple interrupting devices are both possible. D Vectored interrupt is possible but multiple interrupting devices are not possible.
Computer-Organization       Interrupt       Gate-2005
Question 132 Explanation:
We can use one Interrupt line for all the devices connected.
Also for vectored Interrupts it is always possible if we implement in daisy chain mechanism.
 Question 133
What is the swap space in the disk used for?
 A Saving temporary html pages B Saving process data C Storing the super-block D Storing device drivers
Computer-Organization       Secondary-Storage       Gate-2005
Question 133 Explanation:
A swap file is a space on hard disk used as the virtual memory extension of a computer's real memory.
→ The interchange of data between a virtual memory and a real memory is called as swapping and space on a disk as 'swap space'.
→ Swap space in the disk can be used for saving process data.
 Question 134
Using Booth’s Algorithm for multiplication, the multiplier -57 will be recoded as
 A 0 -1 0 0 1 0 0 -1 B 1 1 0 0 0 1 1 1 C 0 -1 0 0 1 0 0 0 D 0 1 0 0 -1 0 0 1
Computer-Organization       Booth's-Algorithm       Gate 2005-IT
Question 134 Explanation:
Consider option A:
−57 in 8-bit two's complement is 1100 0111. Booth recoding replaces each bit bᵢ by bᵢ₋₁ − bᵢ (with an implicit 0 to the right of the LSB), giving, from MSB to LSB: 0 −1 0 0 1 0 0 −1, which is option A.
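The recoding can be checked mechanically. A minimal Python sketch of Booth recoding, scanning the two's-complement multiplier from the LSB with an implicit 0 to its right:

```python
def booth_recode(value, bits=8):
    # Booth digit i = b[i-1] - b[i], where b[-1] is an implicit 0.
    b = [(value >> i) & 1 for i in range(bits)]   # two's-complement bits, LSB first
    prev, digits = 0, []
    for bit in b:
        digits.append(prev - bit)
        prev = bit
    return digits[::-1]                           # MSB first, as the options are written

digits = booth_recode(-57 & 0xFF)
print(digits)            # [0, -1, 0, 0, 1, 0, 0, -1]  -> option A
# The signed digits still encode -57:
assert sum(d * (1 << i) for i, d in enumerate(reversed(digits))) == -57
```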
 Question 135
A dynamic RAM has a memory cycle time of 64 nsec. It has to be refreshed 100 times per msec and each refresh takes 100 nsec. What percentage of the memory cycle time is used for refreshing?
 A 10 B 6.4 C 1 D 0.64
Computer-Organization       Cache       Gate 2005-IT
Question 135 Explanation:
We know that 1 ms = 10⁶ ns.
In 10⁶ ns the DRAM is refreshed 100 times.
Each refresh takes 100 ns.
Memory cycle time = 64 ns (this cancels out of the calculation).
Refresh time per 1 ms, i.e., per 10⁶ ns = 100 × 100 = 10⁴ ns
Fraction of time spent refreshing = 10⁴/10⁶ = 10⁻²
Equivalently, per 64 ns memory cycle the refresh overhead is (10⁴ × 64)/10⁶ = 0.64 ns.
Percentage of the memory cycle time used for refreshing = (0.64 × 100)/64 = 1%
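The same figure follows from one line of arithmetic; a quick Python check (note the 64 ns memory cycle time cancels out):

```python
refresh_time_per_ms = 100 * 100        # 100 refreshes/ms x 100 ns each, in ns
period_ns = 10**6                      # 1 ms expressed in ns
pct = refresh_time_per_ms / period_ns * 100
print(pct)                             # 1.0 (% of memory time spent refreshing)
```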
 Question 136

We have two designs D1 and D2 for a synchronous pipeline processor. D1 has 5 pipeline stages with execution times of 3 nsec, 2 nsec, 4 nsec, 2 nsec and 3 nsec while the design D2 has 8 pipeline stages each with 2 nsec execution time. How much time can be saved using design D2 over design D1 for executing 100 instructions?

 A 214 nsec B 202 nsec C 86 nsec D -200 nsec
Question 136 Explanation:
k = total no. of stages
n = no. of instructions
Total execution time = (k+n-1) * maximum clock cycle
In case of D1:
k = 5
n = 100
Maximum clock cycle = 4 ns
Total execution time = (5+100-1) * 4 = 416
In case of D2:
k = 8
n = 100
Maximum clock cycle = 2 ns
Total execution time = (8+100-1) * 2 = 214
Time saved by D2 over D1 = 416 − 214 = 202 nsec
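The (k + n − 1) × clock formula used above can be checked with a small Python helper:

```python
def pipeline_time(stage_delays_ns, n):
    # (k + n - 1) cycles, each as long as the slowest stage
    return (len(stage_delays_ns) + n - 1) * max(stage_delays_ns)

d1 = pipeline_time([3, 2, 4, 2, 3], 100)   # 104 cycles x 4 ns
d2 = pipeline_time([2] * 8, 100)           # 107 cycles x 2 ns
print(d1, d2, d1 - d2)                     # 416 214 202
```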
 Question 137
A hardwired CPU uses 10 control signals S1 to S10, in various time steps T1 to T5, to implement 4 instructions I1 to I4 as shown below: Which of the following pairs of expressions represent the circuit for generating control signals S5 and S10 respectively? ((Ij+Ik)Tn indicates that the control signal should be generated in time step Tn if the instruction being executed is Ij or lk)
 A S5=T1+I2⋅T3 and S10=(I1+I3)⋅T4+(I2+I4)⋅T5 B S5=T1+(I2+I4)⋅T3 and S10=(I1+I3)⋅T4+(I2+I4)⋅T5 C S5=T1+(I2+I4)⋅T3 and S10=(I2+I3+I4)⋅T2+(I1+I3)⋅T4+(I2+I4)⋅T5 D S5=T1+(I2+I4)⋅T3 and S10=(I2+I3)⋅T2+I4⋅T3+(I1+I3)⋅T4+(I2+I4)⋅T5
Computer-Organization       CPU-Control-Design-an-Interfaces       Gate 2005-IT
Question 137 Explanation:
It is an example of Hardwired CU programming used in RISC processors.
→ If we look at the table, we need to find those timestamps which are using these control signals.
→ For example, consider S5: T1 is used as a control signal for all the instructions, i.e., irrespective of the instruction.
→ S5 is also used by instructions I2 and I4 in time step T3, so that
S5 = T1 + I2⋅T3 + I4⋅T3 = T1 + (I2+I4)⋅T3
→ Similarly,
S10 = (I2+I3)⋅T2 + I4⋅T3 + (I1+I3)⋅T4 + (I2+I4)⋅T5
 Question 138

In a computer system, four files of size 11050 bytes, 4990 bytes, 5170 bytes and 12640 bytes need to be stored. For storing these files on disk, we can use either 100 byte disk blocks or 200 byte disk blocks (but can’t mix block sizes). For each block used to store a file, 4 bytes of bookkeeping information also needs to be stored on the disk. Thus, the total space used to store a file is the sum of the space taken to store the file and the space taken to store the book keeping information for the blocks allocated for storing the file. A disk block can store either bookkeeping information for a file or data from a file, but not both.

What is the total space required for storing the files using 100 byte disk blocks and 200 byte disk blocks respectively?

 A 35400 and 35800 bytes B 35800 and 35400 bytes C 35600 and 35400 bytes D 35400 and 35600 bytes
Computer-Organization       Secondary-memory-and-DMA       Gate 2005-IT
Question 138 Explanation:
For 100 bytes block,
11050 B needs ⌈11050/100⌉ = 111 blocks, requiring 111×4 = 444 B of bookkeeping information, which needs another ⌈444/100⌉ = 5 disk blocks. So, totally 111+5 = 116 disk blocks. Similarly,
4990 = 50 + ⌈(50×4)/100⌉ = 52
5170 = 52 + ⌈(52×4)/100⌉ = 55
12640 = 127 + ⌈(127×4)/100⌉ = 133
∴ Total no. of blocks required
= 52 + 55 + 133 + 116
= 356
So, total space required
= 35600 Bytes
For 200 Bytes block:
11050 B: 56 + ⌈(56×4)/200⌉ = 58
4990 B: 25 + ⌈(25×4)/200⌉ = 26
5170 B: 26 + ⌈(26×4)/200⌉ = 27
12640 B: 64 + ⌈(64×4)/200⌉ = 66
∴ Total space required,
(58+26+27+66) × 200
= 177 × 200
= 35400 Bytes
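The per-file block counts above can be reproduced with a short Python sketch of the bookkeeping rule stated in the question:

```python
from math import ceil

def blocks_for_file(size, block):
    data = ceil(size / block)              # blocks holding the file's data
    book = ceil(data * 4 / block)          # 4 B of bookkeeping per data block
    return data + book

files = [11050, 4990, 5170, 12640]
for block in (100, 200):
    total_blocks = sum(blocks_for_file(f, block) for f in files)
    print(block, total_blocks * block)     # prints 100 35600, then 200 35400
```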
 Question 139

 A (i) and (iv) B (i) and (ii) C (ii) and (iii) D (i), (ii) and (iv)
Question 139 Explanation:
A fixed address in memory indicates a location by specifying a distance from another location; for relocation, displacement-type addressing is preferred.
So, option A is false.
This scheme is used by computers to control access to memory: pointers are replaced by protected objects which can be executed only by the kernel or some other privileged process.
So, this is suitable for program relocation at runtime.
The offset of relative addressing allows reference to code both before and after the instruction.
This is also suitable.
The last mode leads to an extra memory location, which cannot be adjusted at run time.
This is not suitable.
 Question 140

 A 1007 B 1020 C 1024 D 1028
Computer-Organization       Machine-Instructions       Gate-2004
Question 140 Explanation:
Byte addressable, with word size = 32 bits = 4 bytes
→ Interrupt occurs after executing halt instruction
So, number of instructions = 2+1+1+2+1 = 7
→ Each instruction with 4 bytes, then total instruction size = 7 * 4 = 28
→ Memory start location = 1000
→ Instruction saved location = 1000 + 28 = 1028
1028 is the starting address of next instruction.
 Question 141

 A 29 B 24 C 23 D 20
Computer-Organization       Machine-Instructions       Gate-2004
Question 141 Explanation:
 Question 142
Consider a small two-way set-associative cache memory, consisting of four blocks. For choosing the block to be replaced, use the least recently used (LRU) scheme. The number of cache misses for the following sequence of block addresses is 8, 12, 0, 12, 8
 A 2 B 3 C 4 D 5
Computer-Organization       Cache       Gate-2004
Question 142 Explanation:
Cache consists of 4 blocks and is 2-way set associative, so the cache has 2 sets of 2 blocks each.
The given sequence is 8, 12, 0, 12, 8.
All of these block numbers are even, so they map to the same set: 8, 12 and 0 are compulsory misses (0 evicts the LRU block 8), the second 12 is a hit, and the final 8 misses again since it was evicted.
So, in total there are 4 misses.
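The miss count can also be obtained by simulating the cache. A minimal Python sketch of a 2-set, 2-way LRU cache (set index = block number mod number of sets):

```python
def lru_misses(blocks, sets=2, ways=2):
    cache = [[] for _ in range(sets)]      # each set kept in LRU order, MRU last
    misses = 0
    for b in blocks:
        s = cache[b % sets]                # set index = block number mod #sets
        if b in s:
            s.remove(b)                    # hit: just refresh its LRU position
        else:
            misses += 1
            if len(s) == ways:
                s.pop(0)                   # evict the least recently used block
        s.append(b)
    return misses

print(lru_misses([8, 12, 0, 12, 8]))       # 4
```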
 Question 143

 A 10, 3, 1024 B 8, 5, 256 C 5, 8, 2048 D 10, 3, 512
Computer-Organization       Micro-programming       Gate-2004
Question 143 Explanation:
Control word width = 26 bits
Microoperation field width = 13 bits
The MUX has 8 state bits as inputs, so it requires 3 select bits (Y).
No. of bits for the next-address field (X) = 26 − 13 − 3 = 10
A 10-bit address field supports a control memory of 2¹⁰ = 1024 words.
So (X, Y, control memory size) = 10, 3, 1024
 Question 144

A hard disk with a transfer rate of 10 Mbytes/second is constantly transferring data to memory using DMA. The processor runs at 600 MHz, and takes 300 and 900 clock cycles to initiate and complete DMA transfer respectively. If the size of the transfer is 20 Kbytes, what is the percentage of processor time consumed for the transfer operation?

 A 5.0% B 1.0% C 0.5% D 0.1%
Computer-Organization       DMA       Gate-2004
Question 144 Explanation:
Clock cycle time = 1/(600 × 10⁶) s (time = 1/frequency)
DMA initiation and completion together take 300 + 900 = 1200 cycles, so the processor time per transfer = 1200 × 1/(600 × 10⁶) = 2 μs
Disk transfer rate = 10 MB/s,
so transferring 1 B takes 1/10⁷ s.
Time to transfer 20 KB = 20 × 10³ × 1/10⁷ s
= 2000 μs
∴ Percentage of processor time consumed is,
2/(2+2000) × 100 ≈ 0.1%
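A quick Python check of the percentage:

```python
cpu_time_us = (300 + 900) / 600e6 * 1e6        # 1200 cycles at 600 MHz = 2 us
transfer_us = 20e3 / 10e6 * 1e6                # 20 KB at 10 MB/s = 2000 us
pct = cpu_time_us / (cpu_time_us + transfer_us) * 100
print(round(pct, 1))                           # 0.1
```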
 Question 145

A 4-stage pipeline has the stage delays as 150, 120, 160 and 140 nanoseconds respectively. Registers that are used between the stages have a delay of 5 nanoseconds each. Assuming constant clocking rate, the total time taken to process 1000 data items on this pipeline will be

 A 120.4 microseconds B 160.5 microseconds C 165.5 microseconds D 590.0 microseconds
Computer-Organization       Pipelining       Gate-2004
Question 145 Explanation:
The first data item takes all four stages to complete, i.e., 4 cycles; after that, one result is completed every cycle, so the remaining 999 items take 999 cycles. The clock period is the slowest stage delay plus the register delay = 160 + 5 = 165 ns. So the time required to process 1000 data items is,
(4 × 165 ns) + (999 × 1 × 165 ns)
= 165495 ns
= 165.5 μs
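A one-line Python check of the clock period and total time:

```python
stages = [150, 120, 160, 140]                  # stage delays in ns
clock = max(stages) + 5                        # slowest stage + 5 ns register delay
total_ns = (len(stages) + 1000 - 1) * clock    # (4 + 999) cycles
print(clock, total_ns, total_ns / 1000)        # 165 165495 165.495 (i.e. ~165.5 us)
```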
 Question 146
Consider a system with 2 level caches. Access times of Level 1 cache, Level 2 cache and main memory are 1 ns, 10ns, and 500 ns, respectively. The hit rates of Level 1 and Level 2 caches are 0.8 and 0.9, respectively. What is the average access time of the system ignoring the search time within the cache?
 A 13.0 ns B 12.8 ns C 12.6 ns D 12.4 ns
Computer-Organization       Cache       Gate 2004-IT
Question 146 Explanation:
Average access time = [H1 × T1] + [(1 − H1) × H2 × T2] + [(1 − H1) × (1 − H2) × Hm × Tm]
H1 = 0.8, (1 - H1) = 0.2
H2 = 0.9, (1 - H2) = 0.1
T1 = Access time for level 1 cache = 1ns
T2 = Access time for level 2 cache = 10ns
Hm = Hit rate of main memory = 1
Tm = Access time for main memory = 500ns
Average access time = [(0.8 * 1) + (0.2 * 0.9 * 10) + (0.2)(0.1) * 1 * 500]
= 0.8 + 1.8 + 10
= 12.6ns
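The three-term formula evaluates as follows (a minimal Python check):

```python
h1, h2 = 0.8, 0.9            # hit rates of L1 and L2
t1, t2, tm = 1, 10, 500      # access times in ns
avg = h1*t1 + (1 - h1)*h2*t2 + (1 - h1)*(1 - h2)*tm
print(round(avg, 1))         # 12.6
```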
 Question 147
If we use internal data forwarding to speed up the performance of a CPU (R1, R2 and R3 are registers and M[100] is a memory reference), then the sequence of operations
 A R1 → R3 R2 → M[100] B M[100] → R2 R1 → R2 R1 → R3 C R1 → M[100] R2 → R3 D R1 → R2 R1 → R3 R1 → M[100]
Question 147 Explanation:
Data forwarding means that if the CPU writes to a memory location and subsequently reads from the same location, the second instruction can take the value directly from the register used for the write rather than waiting for memory. So, this increases performance.
 Question 148
For a pipelined CPU with a single ALU, consider the following situations
1. The j + 1-st instruction uses the result of the j-th instruction
as an operand
2. The execution of a conditional jump instruction
3. The j-th and j + 1-st instructions require the ALU at the same
time
Which of the above can cause a hazard ?
 A I and II only B II and III only C III only D All the three
Computer-Organization       Pipelining       Gate-2003
Question 148 Explanation:
I corresponds to a data hazard.
II corresponds to a control hazard.
III corresponds to a structural hazard.
→ Hazards are the problems with the instruction pipeline in CPU micro architectures.
 Question 149
Using a larger block size in a fixed block size file system leads to
 A better disk throughput but poorer disk space utilization B better disk throughput and better disk space utilization C poorer disk throughput but better disk space utilization D poorer disk throughput and poorer disk space utilization
Computer-Organization       Disk-Scheduling       Gate-2003
Question 149 Explanation:
A larger block size means a file is spread over fewer blocks, so more data is transferred per seek and disk throughput improves. But with a fixed block size, the unused part of each file's last block is wasted (internal fragmentation), so disk space utilization is poorer. Hence: better disk throughput but poorer disk space utilization.
 Question 150
Consider the following assembly language program for a hypothetical processor. A, B, and C are 8 bit registers. The meanings of various instructions are shown as comments.
 MOV B, # 0 ; B ← 0 MOV C, # 8 ; C ← 8 Z : CMP C, # 0 ; compare C with 0 JZX ; jump to X if zero flag is set SUB C, # 1 ; C ← C - 1 RRC A, # 1 ; right rotate A through carry by one bit. Thus: ; if the initial values of A and the carry flag are a7...a0 and ; c0 respectively, their values after the execution of this ; instruction will be c0a7...a1 and a0 respectively. JC Y ; jump to Y if carry flag is set JMP Z ; jump to Z Y : ADD B, # 1 ; B ← B + 1 JMP Z ; jump to Z X :
 A the number of 0 bits in A0 B the number of 1 bits in A0 C A0 D 8
Computer-Organization       Machine-Instructions       Gate-2003
Question 150 Explanation:
B is incremented each time a 1 bit of A is rotated into the carry.
So the code counts the number of 1 bits in A0 (the initial value of A).
 Question 151
Consider the following assembly language program for a hypothetical processor. A, B, and C are 8 bit registers. The meanings of various instructions are shown as comments.
 MOV B, # 0 ; B ← 0 MOV C, # 8 ; C ← 8 Z : CMP C, # 0 ; compare C with 0 JZX ; jump to X if zero flag is set SUB C, # 1 ; C ← C - 1 RRC A, # 1 ; right rotate A through carry by one bit. Thus: ; if the initial values of A and the carry flag are a7...a0 and ; c0 respectively, their values after the execution of this ; instruction will be c0a7...a1 and a0 respectively. JC Y ; jump to Y if carry flag is set JMP Z ; jump to Z Y : ADD B, # 1 ; B ← B + 1 JMP Z ; jump to Z X :
Which of the following instructions when inserted at location X will ensure that the value of register A after program execution is the same as its initial value ?
 A RRC A, #1 B NOP ; no operation C LRC A, #1 ; left rotate A through carry flag by one bit D ADD A, #1
Computer-Organization       Machine-Instructions       Gate-2003
Question 151 Explanation:
Initially, the 8 bits will be,
a7, a6, a5, a4, a3 , a2, a1, a0
Now right rotate it once,
C0, a7, a6, a5, a4, a3 , a2, a1, now a0 is the new carry.
Now again right rotate it,
a0, c0, a7, a6, a5, a4, a3, a2

So after 8 rotations,
a6, a5, a4, a3, a2, a1, a0, c0 and the carry is a7
Now, one more rotation (the RRC inserted at X) will restore the original value of A.
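The restore property is easy to verify by simulation. A minimal Python sketch of an 8-bit rotate-right-through-carry: the carry and A together form a 9-bit circular register, so 9 rotations (the 8 in the loop plus the RRC inserted at X) return A to its initial value. An arbitrary test pattern is used here:

```python
def rrc(a, carry):
    # new MSB of A <- old carry; new carry <- old LSB of A
    return (carry << 7) | (a >> 1), a & 1

a0 = 0b10110101                   # arbitrary 8-bit test value
a, c = a0, 0
for _ in range(9):                # 8 rotations in the loop + the one inserted at X
    a, c = rrc(a, c)
print(a == a0, c)                 # True 0
```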
 Question 152
A device employing INTR line for device interrupt puts the CALL instruction on the data bus while
 A INTA is active B HOLD is active C READY is active D None of the above
Computer-Organization       Microprocessor       Gate-2002
Question 152 Explanation:
When interrupts are enabled and the microprocessor receives a high INTR signal, it activates the INTA signal, and the interrupting device places the CALL instruction on the data bus while INTA is active. Another request cannot be accepted while the CPU is busy servicing the interrupt.
 Question 153
In 8085 which of the following modifies the program counter?
 A Only PCHL instruction B Only ADD instructions C Only JMP and CALL instructions D All instructions
Computer-Organization       Microprocessor       Gate-2002
Question 153 Explanation:
PCHL instruction: copies the contents of H and L into the PC.
ADD instructions: like every instruction, their fetch increments the program counter.
JMP & CALL: load new values into the PC.
So all instructions modify the program counter.
 Question 154
In serial data transmission, every byte of data is padded with a ‘0’ in the beginning and one or two ‘1’s at the end of byte because
 A Receiver is to be synchronized for byte reception B Receiver recovers lost ‘0’s and ‘1’s from these padded bits C Padded bits are useful in parity computation D None of the above
Computer-Organization       Serial-Communication       Gate-2002
Question 154 Explanation:
The ‘0’ (start bit) at the beginning tells the receiver that a new byte is arriving, and the ‘1’s (stop bits) at the end mark the byte boundary, so the receiver can synchronize for the reception of every byte.
 Question 155
Which of the following is not a form of memory?
 A instruction cache B instruction register C instruction opcode D translation look-a-side buffer
Computer-Organization       Instruction-Execution       Gate-2002
Question 155 Explanation:
Opcode is the portion of a machine language instruction that specifies the operation to be performed.
 Question 156
 A the operand is inside the instruction B the address of the operand is inside the instruction C the register containing the address of the operand is specified inside the instruction D the location of the operand is implicit
Question 156 Explanation:
The operand is inside the instruction---Immediate addressing.
The address of the operand is inside the instruction --- absolute addressing.
The register containing the address of the operand is specified inside the instruction --- Register addressing.
The location of the operand is implicit --- Implicit addressing.
 Question 157

 A AC = 0 and CY =0 B AC = 1 and CY =1 C AC = 1 and CY =0 D AC = 0 and CY =1
Computer-Organization       Microprocessor       Gate-2002
Question 157 Explanation:
MVI H, 5DH
⇒ H = 0101 1101
MVI L, 6BH
⇒ L = 0110 1011
MOV A, H
A = 0101 1101
Adding L: 0101 1101 + 0110 1011 = 1100 1000 (C8H), with a carry out of bit 3 into bit 4 but no carry out of bit 7.
Here, AC=1; CY=0
 Question 158
The performance of a pipelined processor suffers if
 A the pipeline stages have different delays B consecutive instructions are dependent on each other C the pipeline stages share hardware resources D All of the above
Computer-Organization       Pipelining       Gate-2002
Question 158 Explanation:
The ideal speedup from pipelining equals the number of pipeline stages, but only when the stages are perfectly balanced; pipelining also involves some overhead.
Performance suffers if the pipeline stages have different delays (the clock is set by the slowest stage), if consecutive instructions depend on each other, or if the stages share hardware resources. Hence all of the above.
 Question 159
Horizontal microprogramming
 A does not require use of signal decoders B results in larger sized microinstructions than vertical microprogramming C uses one bit for each control signal D all of the above
Computer-Organization       Microprogramming       Gate-2002
Question 159 Explanation:
In horizontal microprogramming each control signal has its own bit in the microinstruction, so no signal decoders are needed, but the microinstructions are larger than in vertical microprogramming. If a bit is set to ‘1’ the corresponding control signal is activated; if it is ‘0’ the signal is deactivated. So all three statements are true.
 Question 160

 A Theory Explanation is given below.
Computer-Organization       Cache       Gate-2002
 Question 161
To put the 8085 microprocessor in the wait state
 A lower the HOLD input B lower the READY input C raise the HOLD input D raise the READY input
Computer-Organization       Microprocessor       Gate-2000
Question 161 Explanation:
If the READY pin is high, the microprocessor completes the current operation and proceeds to the next one. If the READY pin is low, the microprocessor waits (a wait state) until it goes high. So the 8085 is put in the wait state by lowering the READY input.
 Question 162
Comparing the time T1 taken for a single instruction on a pipelined CPU with time T2 taken on a non-pipelined but identical CPU, we can say that
 A T1 ≤ T2 B T1 ≥ T2 C T1 < T2 D T1 is T2 plus the time taken for one instruction fetch cycle
Computer-Organization       Pipelining       Gate-2000
Question 162 Explanation:
PIPELINING SYSTEM:
Pipelining is an implementation technique where multiple instructions are overlapped in execution. It gives high throughput (the number of instructions completed per unit time): many instructions are in progress at the same time, so a stream of instructions finishes in fewer total cycles. The pipeline is kept filled by the CPU scheduler from a pool of waiting work, so each execution unit always has work planned. The efficiency of a pipelined system depends on the effectiveness of the CPU scheduler.
NON- PIPELINING SYSTEM:
All the actions (fetching, decoding, executing instructions and writing the results into memory) are grouped into a single step. It has a low throughput.
Only one instruction is executed per unit time, and execution requires more cycles. The CPU scheduler of a non-pipelined system merely chooses from the pool of waiting work when an execution unit signals that it is free, so performance hardly depends on the CPU scheduler.
 Question 163
The 8085 microprocessor responds to the presence of an interrupt
 A as soon as the TRAP pin becomes ‘high’ B by checking the TRAP pin for ‘high’ status at the end of each instruction each C by checking the TRAP pin for ‘high’ status at the end of the execution of each instruction D by checking the TRAP pin for ‘high’ status at regular intervals
Computer-Organization       Microprocessor       Gate-2000
Question 163 Explanation:
TRAP is the non-maskable interrupt of the 8085: active high, both edge- and level-triggered, and of the highest priority. When the TRAP line is active, execution automatically restarts at the vector location of TRAP.
 Question 164

 A X – 3 Y – 2 Z - 1 B X – 1 Y – 3 Z - 2 C X – 2 Y – 3 Z - 1 D X – 3 Y – 1 Z - 2
Computer-Organization       Match-the-Following       Gate-2000
Question 164 Explanation:
Indirect addressing means that the address of the data is held in an intermediate location so that the address is first 'looked up' and then used to locate the data itself.
Immediate Addressing. An immediate operand has a constant value or an expression. When an instruction with two operands uses immediate addressing, the first operand may be a register or memory location, and the second operand is an immediate constant.
Auto-increment or auto-decrement addressing can be used for stepping through arrays in loops.
 Question 165

 A Theory Explanation is given below.
Computer-Organization       Descriptive       Gate-2000
Question 165 Explanation:
Note: Out of syllabus.
 Question 166

 A Theory Explanation is given below.
Computer-Organization       Descriptive       Gate-2000
Question 166 Explanation:
Note: Out of syllabus.
 Question 167

 A Theory Explanation is given below.
Computer-Organization       Descriptive       Gate-2000
Question 167 Explanation:
(a) Since it is said that the instruction after branch is not fetched till the branch instruction is completed. So CPI for branch instruction is 5. And for normal instruction CPI is 1.
So total execution time
= (0.2×5 + 0.8×1) × 2ns
= 3.6 ns
(b) Total branch instructions are 20% as given in previous part. In which 80% are conditional and in which 50% of the conditional branch is taken. So total conditional branch instruction which is taken is
50% of 80% of 20%
0.5×0.8×0.2 = 0.08
And 20% of total branch instruction are not conditional means for that branch is necessarily taken. So total unconditional branch instruction is,
0.2×0.2 = 0.04
Hence, total branch instruction which is taken is,
0.08+0.04 = 0.12
Now lets calculate total execution time,
= (0.12×5 + 0.88×1) × 2ns
= 2.96 ns
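Both parts reduce to a weighted-CPI computation; a minimal Python check of the figures above:

```python
cycle = 2                                   # ns per cycle
# (a) every branch (20% of instructions) takes 5 cycles, the rest take 1
t_a = (0.2*5 + 0.8*1) * cycle               # 3.6 ns
# (b) only taken branches stall: 50% of the conditional branches (80% of all
# branches) plus all unconditional branches (20% of all branches)
taken = 0.2 * (0.8*0.5 + 0.2)               # 0.12 of all instructions
t_b = (taken*5 + (1 - taken)*1) * cycle     # 2.96 ns
print(t_a, round(t_b, 2))                   # 3.6 2.96
```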
 Question 168
The main memory of a computer has 2cm blocks while the cache has 2c blocks. If the cache uses the set associative mapping scheme with 2 blocks per set, then block k of the main memory maps to the set
 A (k mod m) of the cache B (k mod c) of the cache C (k mod 2c) of the cache D (k mod 2 cm) of the cache
Computer-Organization       Cache       Gate-1999
Question 168 Explanation:
No. of sets in cache = No. of blocks in cache/No. of blocks per set
= 2c/2
= c
∴ Cache set no. to which block k of main memory maps to
= (Main memory block no.) mod (Total set in cache)
= k mod c
 Question 169
Raid configurations of the disks are used to provide
 A Fault-tolerance B High speed C High data density D None of the above
Computer-Organization       RAID       Gate-1999
Question 169 Explanation:
Raid is a way of storing the same data in different places on multiple hard disks to protect data in the case of drive failure.
So, we can say that RAID is used to provide fault tolerence.
 Question 170
Arrange the following configuration for CPU in decreasing order of operating speeds: Hard wired control, vertical microprogramming, horizontal microprogramming.
 A Hard wired control, vertical micro-programming, horizontal micro- programming. B Hard wired control, horizontal micro-programming, vertical micro- programming. C Horizontal micro-programming, vertical micro-programming, Hard wired control. D Vertical micro-programming, horizontal micro-programming, hard wired control.
Computer-Organization       Micro-Programming       Gate-1999
Question 170 Explanation:
Hardwired control involves only hardware which is definitely faster than microprogramming approach.
Now between horizontal and vertical microprogramming, the signals in vertical microprogramming are in encoded form which has to be decoded first using decoder. So, horizontal microprogramming is faster than vertical microprogramming.
 Question 171
The main difference(s) between a CISC and a RISC processor is/are that a RISC processor typically
 A has fewer instructions B has fewer addressing modes C has more registers D is easier to implement using hard-wired control logic E All the above
Computer-Organization       CISC-and-RISC       Gate-1999
Question 171 Explanation:
All are properties of RISC processor.
 Question 172
A certain processor supports only the immediate and the direct addressing modes. Which of the following programming language features cannot be implemented on this processor?
 A Pointers B Arrays C Records D Recursive procedures with local variable E All the above
Question 172 Explanation:
A) Cannot be implemented because pointers need the indirect addressing mode.
B) Cannot be implemented because arrays need indexed addressing.
C) Records also need pointers, which need indirect addressing, so this cannot be implemented either.
D) Recursive procedures need a stack, and hence a stack pointer, which in turn needs indirect addressing. So this also cannot be implemented.
 Question 173
In serial communication employing 8 data bits, a parity bit and 2 stop bits, the minimum baud rate required to sustain a transfer rate of 300 characters per second is
 A 2400 baud B 19200 baud C 4800 baud D 1200 baud
Computer-Organization       Serial-Communication       Gate-1998
Question 173 Explanation:
Since stop bits are given, this is asynchronous communication, which also implies 1 start bit per character. Bits per character = 8 (data) + 2 (stop) + 1 (parity) + 1 (start) = 12, so
12 × 300
= 3600
The minimum baud rate among the options that can sustain this is 4800 baud.
 Question 174
Which of the following devices should get higher priority in assigning interrupts?
 A Hard disk B Printer C Keyboard D Floppy disk
Computer-Organization       Interrupt       Gate-1998
Question 174 Explanation:
Hard disk has the higher priority in assigning interrupts.
 Question 175
Which of the following addressing modes permits relocation without any change whatsoever in the code?
Question 175 Explanation:
With PC-relative addressing, the code can be relocated without any change.
 Question 176
Which of the following is true?
 A Unless enabled, a CPU will not be able to process interrupts. B Loop instructions cannot be interrupted till they complete. C A processor checks for interrupts before executing a new instruction. D Only level triggered interrupts are possible on microprocessors.
Computer-Organization       Interrupt       Gate-1998
Question 176 Explanation:
Options B & D are false.
Option 'C' is also false: a processor checks for interrupts after completing the current instruction, i.e., before fetching a new instruction, not merely before executing it.
 Question 177
Formatting for a floppy disk refers to
 A arranging the data on the disk in contiguous fashion B writing the directory C erasing the system area D writing identification information on all tracks and sectors
Computer-Organization       Disk       Gate-1998
Question 177 Explanation:
The formatted disk capacity is always less than the 'raw' unformatted capacity specified by the disk manufacturers, because some portion of each track is used for sector identification and for gaps between sectors and at the end of track.
 Question 178
The address space of 8086 CPU is
 A one Megabyte B 256 Kilobytes C 1 K Megabytes D 64 Kilobytes
Computer-Organization       Microprocessor       Gate-1998
Question 178 Explanation:
Note: Out of syllabus.
 Question 179
RST 7.5 interrupt in 8085 microprocessor executes service routing from interrupt vector location
 A 0000H B 0075H C 003CH D 0034H
Computer-Organization       Microprocessor       Gate-1997
Question 179 Explanation:
The RST 7.5 vector location = 7.5 × 8 = 60, since the 8085 spaces its restart vectors 8 bytes apart.
→ 60 in hexadecimal is 003CH.
 Question 180
Purpose of a start bit in RS 232 serial communication protocol is
 A to synchronize receiver for receiving every byte B to synchronize receiver for receiving a sequence of bytes C a parity bit D to synchronize receiver for receiving the last byte
Computer-Organization       Serial-Communication       Gate-1997
Question 180 Explanation:
RS-232 needs a start bit before each byte transmission for synchronization.
 Question 181

 A A – 4 B – 3 C – 1 D – 2 B A – 2 B – 1 C – 3 D – 4 C A – 4 B – 3 C – 2 D – 1 D A – 2 B – 3 C – 4 D – 1
Computer-Organization       Match-the-Following       Gate-1997
Question 181 Explanation:
DMA I/O → Disk
Cache → High speed RAM
Interrupt I/O → Printer
Condition code register → ALU
 Question 182
When an interrupt occurs, an operating system
 A ignores the interrupt B always changes state of interrupted process after processing the interrupt C always resumes execution of interrupted process after processing the interrupt D may change state of interrupted process to 'blocked’ and schedule another process
Computer-Organization       Interrupt       Gate-1997
Question 182 Explanation:
Option A: Based on the priority.
Option B: Not always.
Option C: Not always. If some high priority interrupt comes during execution of current interrupt then it fails.
Option D: It is True always.
 Question 183

 A ‘op’ is ’+’ or ‘*’ B ‘op’ is ’↑’ or ‘*’ C ‘op’ is ’↑’ or ‘+’ D not possible to evaluate without storing
Computer-Organization       Register-Allocation       Gate-1997
Question 183 Explanation:
Lets say, the expression is one of the below:
(a*b)*c + d
(a*b)*c * d
(a*b)*c ∧ d
In any case, brackets always have the highest priority, so the bracketed part must be computed first. For + and *, the rest of the operations can be done with the results kept in the same register. But for exponentiation, the result of a*b has to be stored, then c↑d computed, and then the two results multiplied.
Hence, (A) is correct.
 Question 184

 A 7AH B 80H C 50H D 22H
Computer-Organization       Microprocessor       Gate-1997
Question 184 Explanation:
Note: Out of syllabus.
 Question 185

 A 9 B 5 C 8 D None of the above
Computer-Organization       Micro-Instructions       Gate-1997
Question 185 Explanation:
(a) This is vertical microprogramming because at any given time at most one microoperation is activated.
So, no. of bits required
⌈log₂(3)⌉ = 2
(b) This is horizontal microprogramming because at any given time at most six microoperations are activated.
So, no. of bits required = 6
So, total minimum no. of bits required = 6+2 = 8
 Question 186
Relative mode of addressing is most relevant to writing
 A coroutines B position – independent code C shareable code D interrupt handlers
Question 186 Explanation:
The main advantage of PC- relative addressing is that code may be position independent, i.e., it can be loaded anywhere in memory without the need to adjust any address.
 Question 187
Number of machine cycles required for RET instruction in 8085 microprocessor is
 A 1 B 2 C 3 D 5
Computer-Organization       Microprocessor       Gate-1996
Question 187 Explanation:
1 for instruction fetch.
2 for stack operation.
Total no. of cycles = 2+1 = 3
 Question 188
For the daisy chain scheme of connecting I/O devices, which of the following statements is true?
 A It gives non-uniform priority to various devices. B It gives uniform priority to all devices. C It is only useful for connecting slow devices to a processor device. D It requires a separate interrupt pin on the processor for each device.
Computer-Organization       Interrupt       Gate-1996
Question 188 Explanation:
The daisy chaining technique establishes the order in which the processor handles interrupts by assigning priorities to the devices.
→ In this scheme all devices are connected serially.
→ High priority devices are placed first, followed by low priority devices.
 Question 189
A micro program control unit is required to generate a total of 25 control signals. Assume that during any microinstruction, at most two control signals are active. Minimum number of bits required in the control word to generate the required control signals will be
 A 2 B 2.5 C 10 D 12
Computer-Organization       Control-Signals       Gate-1996
Question 189 Explanation:
To generate 1 out of 25 different control signals we need 5 bits (5 bits can encode the 25 signals plus a "no signal" code). But at any time at most 2 signals can be active. So control word length
= 5+5
= 10 bits
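As a sketch of the same count (the "+1" reserves a code for "no signal active" in each field):

```python
import math

signals = 25        # total control signals
active_at_once = 2  # at most two active per microinstruction

# Each encoded field must select one of the 25 signals or "none".
bits_per_field = math.ceil(math.log2(signals + 1))  # 5

control_word_bits = active_at_once * bits_per_field
print(control_word_bits)  # 10
```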
 Question 190
A single instruction to clear the lower four bits of the accumulator in 8085 assembly language is:
 A XRI 0FH B ANI F0H C XRI F0H D ANI 0FH
Computer-Organization       Microprocessor       Gate-1995
Question 190 Explanation:
Here we AND the accumulator with an immediate value: the F leaves the high nibble as it is, and the 0 clears the lower nibble, so ANI F0H is correct.
→ The XORs don't reliably clear bits (they flip them), and ANI 0FH clears the upper nibble, not the lower one.
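The masking effect can be sketched with a bitwise AND; the accumulator value 0xA7 below is just an illustrative example:

```python
acc = 0xA7  # sample accumulator contents

# ANI F0H: AND with F0H keeps the high nibble, clears the low nibble.
print(hex(acc & 0xF0))  # 0xa0

# ANI 0FH would clear the HIGH nibble instead:
print(hex(acc & 0x0F))  # 0x7
```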
 Question 191
In a vectored interrupt
 A the branch address is assigned to a fixed location in memory B the interrupt source supplies the branch information to the processor through an interrupt vector C the branch address is obtained from a register in the processor D none of the above
Computer-Organization       Interrupt       Gate-1995
Question 191 Explanation:
A vectored interrupt is a processing technique in which the interrupting device directs the processor to the appropriate interrupt service routine vector.
 Question 192
The principle of locality justifies the use of
 A interrupts B DMA C Polling D Cache Memory
Computer-Organization       General       Gate-1995
Question 192 Explanation:
The locality of reference is also known as principle of locality.
→ The things which are used more frequently those are stored in locality of reference.
→ For this purpose we use the cache memory.
 Question 193

 A Variables B Identifiers C Actual parameters D Formal parameters
Computer-Organization       Microprocessor       Gate-1995
Question 193 Explanation:
Formal arguments (or formal parameters) are a special kind of variable used in a subroutine to refer to one of the pieces of data provided as input to the subroutine.
 Question 194
The capacity of a memory unit is defined by the number of words multiplied by the number of bits/word. How many separate address and data lines are needed for a memory of 4K × 16?
 A 10 address, 16 data lines B 11 address, 8 data lines C 12 address, 16 data lines D 12 address, 12 data lines
Computer-Organization       Memory-Management       Gate-1995
Question 194 Explanation:
Memory size = 2^m × n
m = no. of address lines
n = no. of data lines
Given, 4K × 16 = 2^12 × 16
So, address lines = 12 and data lines = 16.
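A quick sketch of the computation:

```python
words = 4 * 1024  # 4K words
word_size = 16    # bits per word

address_lines = (words - 1).bit_length()  # log2(4096) = 12
data_lines = word_size                    # one line per bit of a word

print(address_lines, data_lines)  # 12 16
```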
 Question 195

A computer system has a 4K word cache organized in block-set-associative manner with 4 blocks per set, 64 words per block. The number of bits in the SET and WORD fields of the main memory address format is:

 A 15, 40 B 6, 4 C 7, 2 D 4, 6
Computer-Organization       Cache       Gate-1995
Question 195 Explanation:
No. of sets = 4K/(64×4) = 2^12/2^8 = 2^4
So we need 4 set bits.
And,
64 words per block means WORD bit is 6-bits.
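The field widths can be checked numerically (a sketch using the sizes from the question):

```python
cache_words = 4 * 1024  # 4K word cache
blocks_per_set = 4
words_per_block = 64

sets = cache_words // (words_per_block * blocks_per_set)  # 16 sets
set_bits = (sets - 1).bit_length()                        # 4
word_bits = (words_per_block - 1).bit_length()            # 6

print(set_bits, word_bits)  # 4 6
```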
 Question 196
Which one of the following statements is true?
 A Macro definitions cannot appear within other macro definitions in assembly language programs B Overlaying is used to run a program which is longer than the address space of computer C Virtual memory can be used to accommodate a program which is longer than the address space of a computer D It is not possible to write interrupt service routines in a high level language
Computer-Organization       General       Gate-1994
Question 196 Explanation:
A macro body can also contain further macro definitions. However, these nested macro definitions aren't valid until the enclosing macro has been expanded, i.e., the enclosing macro must have been called before the nested macros can be called.
 Question 197

 A True B False
Computer-Organization       Microprocessor       Gate-1994
Question 197 Explanation:
Note: Out of syllabus.
The major reason for multiplexing the address and data bus is to reduce the number of pins needed for address and data, and to dedicate those pins to several other functions of the microprocessor.
 Question 198

 A True B False
Computer-Organization       Finite-State-Machine       Gate-1994
Question 198 Explanation:
An FA (finite state machine) to add two integers can be constructed using two states:
→ q0: Start state to represent carry bit is 0.
→ q1: State to represent carry bit is 1.
The inputs to the FA will be pairs of bits, i.e., 00, 01, 10, 11.

The FA starts in state q0 (since the carry is 0) and inputs a pair of bits. If the pair is 11, the FA outputs a '0' and switches to state q1 (since the carry is 1), where the next pair of bits is input and is added to a carry bit of 1.
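The two-state adder described above can be sketched as a minimal simulation (inputs are bit pairs, least significant bit first):

```python
def fsm_add(a_bits, b_bits):
    """Add two binary numbers (LSB first) with a two-state FSM:
    carry = 0 corresponds to state q0, carry = 1 to state q1."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s % 2)  # output bit for this pair
        carry = s // 2     # next state (the carry)
    out.append(carry)      # final carry
    return out

# 3 (binary 11, LSB first) + 1 (binary 01, LSB first) = 4 (binary 100)
print(fsm_add([1, 1], [1, 0]))  # [0, 0, 1]
```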
 Question 199

 A True B False
Computer-Organization       RISC       Gate-1994
Question 199 Explanation:
RISC systems use fixed-length instructions to simplify the pipeline.
Now the challenge is: how to fit multiple instruction types into a limited, fixed-size instruction format.
Here expanding opcodes come into the picture, so RISC systems use the expanding opcode technique to have fixed-size instructions.
 Question 200

 A Out of syllabus.
Computer-Organization       General       Gate-1994
Question 200 Explanation:
(i) - (a), (ii) - (b), (iii) - (d), (iv) - (c)
 Question 201
Which of the following is (are) valid FORTRAN 77 statement(s)?
 A DO 13 I = 1 B A = DIM ***7 C READ = 15.0 D GO TO 3 = 10
Computer-Organization       FORTRAN       Gate-1993
Question 201 Explanation:
Note: Out of syllabus.
 Question 202
Assume that each character code consists of 8 bits. The number of characters that can be transmitted per second through an asynchronous serial line at 2400 baud rate, and with two stop bits, is
 A 109 B 216 C 218 D 219 E 240
Computer-Organization       Serial-Communication       Gate-1993
Question 202 Explanation:
Total bits per character
= 8 bit data + 2 stop bit + 1 start bit
= 11 bits
No. of characters = 2400/11 = 218.18
Since only whole characters can be transmitted, we take the floor and the answer is 218.
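The same computation as a sketch:

```python
baud = 2400             # bits per second on the line
frame_bits = 8 + 1 + 2  # data bits + start bit + stop bits

chars_per_sec = baud // frame_bits  # floor: whole characters only
print(chars_per_sec)  # 218
```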
 Question 203

 A 90%
Computer-Organization       Interrupt       Gate-1993
Question 203 Explanation:
Time to service an interrupt
= saving state of CPU + ISR execution + restoring of CPU state
= (80 + 10 + 10) × 10^-6 s
= 100 μs
For every 1ms an interrupt occurs which is served for 100 μs.
1ms = 1000μs
Thus, for every 1000μs, (1000 - 100) = 900 μs of main program and 100μs of interrupt overhead exists.
Thus, 900/1000 is the fraction of CPU time used to execute the main program.
∴ % of CPU to execute main program is (900/1000) × 100 = 90%
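A sketch of the utilization calculation (all timings are from the question, in microseconds):

```python
save_state = 80     # save CPU state
isr = 10            # interrupt service routine execution
restore_state = 10  # restore CPU state
service_time = save_state + isr + restore_state  # 100 us

period = 1000  # an interrupt arrives every 1 ms = 1000 us
main_fraction = (period - service_time) / period
print(main_fraction * 100)  # 90.0
```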
 Question 204
A simple two-pass assembler does the following in the first pass:
 A It allocates space for the literals. B It computes the total length of the program C It builds the symbol table for the symbols and their values. D It generates code for all the load and store register instructions. E A, B and C
Computer-Organization       Assembler       Gate-1993
Question 204 Explanation:
Pass 1:
1) Assign address to all statements in the program.
2) Save the values assigned to all tables for use in pass 2.
3) Perform some processing of assembler directives.
 Question 205
Many microprocessors have a specified lower limit on clock frequency (apart from the maximum clock frequency limit) because ______
 A clock frequency can't go below this value.
Computer-Organization       Microprocessor       Gate-1992
Question 205 Explanation:
When the clock frequency goes down, the clock time period goes up. When this time period increases beyond the interval within which the dynamic memory contents must be refreshed, we lose those contents. So the clock frequency can't go below this value.
 Question 206
Many of the advanced microprocessors prefetch instructions and store it in an instruction buffer to speed up processing. This speed up is achieved because _________
 A prefetching the instructions to be executed can save considerable amount of waiting time.
Computer-Organization       Microprocessor       Gate-1992
Question 206 Explanation:
Because the CPU is much faster than memory, fetching instructions from memory requires a considerable amount of time. So prefetching the instructions to be executed can save a considerable amount of waiting time.
 Question 207

In an 11-bit computer instruction format, the size of the address field is 4 bits. The computer uses the expanding opcode technique and has 5 two-address instructions and 32 one-address instructions. The number of zero-address instructions it can support is _________

 A 256
Computer-Organization       OP-Code       Gate-1992
Question 207 Explanation:
In the encoding, no. of possible instructions = 2^11 = 2048
No. of encodings taken by two-address instructions = 5×2^4×2^4 = 1280
By one-address instructions = 32×2^4 = 512
So, the no. of possible zero-address instructions = 2048 − (1280+512) = 256
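The expanding-opcode count as a sketch:

```python
instruction_bits = 11
address_bits = 4
encodings = 2 ** instruction_bits  # 2048 total bit patterns

# Each two-address instruction consumes 2^4 * 2^4 patterns;
# each one-address instruction consumes 2^4 patterns.
two_addr = 5 * 2 ** address_bits * 2 ** address_bits  # 1280
one_addr = 32 * 2 ** address_bits                     # 512

zero_addr = encodings - two_addr - one_addr
print(zero_addr)  # 256
```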
 Question 208
Macro expansion is done in pass one instead of pass two in a two-pass macro assembler because _________
 A all macro definitions are processed during the first pass, so all macro expansions are done during pass 1 and not in pass 2.
Computer-Organization       Macro-Expansion       Gate-1992
 Question 209
The purpose of instruction location counter in an assembler is _______
 A used to assign storage address to the program's statements.
Computer-Organization       Counters       Gate-1992
Question 209 Explanation:
It is used to assign storage addresses to the program's statements. As the instructions of a source module are being assembled, the location counter keeps track of the current location in storage.
 Question 210
Bit-slice processors
 A Can be cascaded to get any desired word length processor B speed of operation is independent of the word length configured C don’t contain anything equivalent of program counter in a ‘normal’ microprocessor D contain only the data path of a ‘normal’ CPU
Computer-Organization       Bit-slicing       Gate-1992
Question 210 Explanation:
Bit slicing is a technique for constructing a processor from modules of processors of smaller bit width, for the purpose of increasing the word length.
 Question 211
PCHL is an instruction in 8085 which transfers the contents of the register pair HL to PC. This is not a very commonly used instruction as it changes the flow of control in rather ‘unstructured’ fashion. This instruction can be useful in implementing.
 A if ……. then ….. else ….. construct B while …… construct C case …… construct D call …… construct
Computer-Organization       Microprocessor       Gate-1992
Question 211 Explanation:
Note: Out of syllabus.
 Question 212
 A Theory Explanation.
Computer-Organization       Macro-Expansion       Gate-1992
 Question 213

 A a-2, b-1, c-2, d-1
Computer-Organization       Assembler       Gate-1992
Question 213 Explanation:
Two pass assembler.
Pass 1:
→ Assign addresses to all statements.
→ Save the values assigned to all labels for use in pass 2.
→ Perform some processing of assembler directives.
Pass 2:
→ Assembler instructions by translating opcode and symbolic operands.
→ Generate data values defined by BYTE, WORD.
→ Perform processing of assembler directives not done in pass 1.
→ Write the object program and the assembly listing.
 Question 214
More than one word is put in one cache block to
 A exploit the temporal locality of reference in a program B exploit the spatial locality of reference in a program C reduce the miss penalty D none of the above
Computer-Organization       Cache       Gate-2001
Question 214 Explanation:
Spatial locality refers to the use of data elements within relatively close storage locations. To exploit spatial locality, more than one word is put into each cache block.
Temporal locality refers to the reuse of the same data elements within relatively small time intervals.
 Question 215
A slow memory can be connected to 8085 by using
 A INTER B C HOLD D READY
Computer-Organization       Microprocessor       Gate-2001
Question 215 Explanation:
Communication is only possible when the READY signal is set. So a slow memory can be connected to 8085 using the READY signal.
 Question 216
Suppose a processor does not have any stack pointer register. Which of the following statements is true?
 A It cannot have subroutine call instruction B It can have subroutine call instruction, but no nested subroutine calls C Nested subroutine calls are possible, but interrupts are not D All sequences of subroutine calls and also interrupts are possible
Computer-Organization       Subroutines       Gate-2001
Question 216 Explanation:
If a stack pointer register is not available, then activation records cannot be created on the stack. So it cannot have a subroutine call instruction.
 Question 217
A processor needs software interrupt to
 A test the interrupt system of the processor B implement co-routines C obtain system services which need execution of privileged instructions D return from subroutine
Computer-Organization       Interrupt       Gate-2001
Question 217 Explanation:
System services which need the execution of privileged instructions are obtained using a software interrupt.
 Question 218
A CPU has two modes-privileged and non-privileged. In order to change the mode from privileged to non-privileged
 A a hardware interrupt is needed B a software interrupt is needed C a privileged instruction (which does not generate an interrupt) is needed D a non-privileged instruction (which does not generate an interrupt is needed
Computer-Organization       Interrupt       Gate-2001
Question 218 Explanation:
For switching from privileged to non-privileged mode, a non-privileged instruction (which does not generate an interrupt) is used.
 Question 219
Where does the swap space reside?
 A RAM B Disk C ROM D On-chip cache
Computer-Organization       Disk-Scheduling       Gate-2001
Question 219 Explanation:
Swap space is an area on disk that temporarily holds a process memory image.
When memory is full and a process needs memory, inactive parts of processes are put in the swap space on disk.
 Question 220
Which of the following requires a device driver?
 A Register B Cache C Main memory D Disk
Computer-Organization       DMA       Gate-2001
Question 220 Explanation:
A disk driver is software that enables communication between an internal hard disk (or drive) and the computer.
 Question 221

 A (X, III) (Y, I) (Z, II) B (X, II) (Y, III) (Z, I) C (X, III) (Y, II) (Z, I) D (X, I) (Y, III) (Z, II)
Computer-Organization       Match-the-Following       Gate-2001
Question 221 Explanation:
⇒ Array implementation can be done by Indexed addressing.
⇒ Writing relocatable code can be done by Base Register addressing by changing the value of Base Register.
⇒ While passing an array as parameter we use pointer and hence can use Indirect addressing.
 Question 222

 A 2 B 3 C 4 D 5
Computer-Organization       Machine-Instructions       Gate-2001
Question 222 Explanation:
T1: SP → MAR, 2 cycles (as SP is 16 bits and data bus is 8 bits so needs 2 cycles to move data)
T2: 8 → MBR, 1 cycle
T3: M[MAR] ← MBR, 2 cycles (As it is a memory operation)
So, total 5 clock cycles.
 Question 223

 A Theory Explanation is given below.
Computer-Organization       Secondary-Storage       Gate-2001
 Question 224

 A Theory Explanation is given below.
Computer-Organization       Cache       Gate-2001
 Question 225

 A Theory Explanation is given below.
Computer-Organization       Pipelining       Gate-2001
 Question 226

 A Theory Explanation is given below.
Computer-Organization       Secondary-Storage       Gate-2001
 Question 227
In interleaved memory organization, consecutive words are stored in consecutive memory modules in ________ interleaving, whereas consecutive words are stored within the module in ________ interleaving.
 A low order, high order
Computer-Organization       Memory-Organization       Gate-1991
Question 227 Explanation:
→ Consecutive words in consecutive memory modules in low order interleaving as the lower order bits determine the module.
→ Consecutive words in same memory module in high order interleaving as the higher order bits determine the module.
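The two schemes can be sketched with a toy 16-word memory split across 4 modules (sizes chosen only for illustration):

```python
modules = 4
words_per_module = 4  # 16-word memory in total

def low_order_module(addr):
    # Low-order interleaving: the low-order address bits select the
    # module, so consecutive words fall in consecutive modules.
    return addr % modules

def high_order_module(addr):
    # High-order interleaving: the high-order address bits select the
    # module, so consecutive words stay within one module.
    return addr // words_per_module

print([low_order_module(a) for a in range(4)])   # [0, 1, 2, 3]
print([high_order_module(a) for a in range(4)])  # [0, 0, 0, 0]
```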
 Question 228
Using the 8087 arithmetic coprocessor with the 8086 CPU requires that the 8086 CPU is operated ________.
 A Out of syllabus.
Computer-Organization       Microprocessor       Gate-1991
 Question 229

 A Out of syllabus.
Computer-Organization       Microprocessor       Gate-1991
 Question 230
 A lower power dissipation B greater speed C smaller chip size D fewer masks for fabrication E none of the above
Computer-Organization       CMOS       Gate-1991
Question 230 Explanation:
Note: Out of syllabus.
 Question 231
 A executes an instruction supplied by an external device through the INTA signal B executes an instruction from memory location 20H C executes a NOP D none of the above
Computer-Organization       Microprocessor       Gate-1991
Question 231 Explanation:
Note: Out of syllabus.
 Question 232

 A latch the output of an I/O instruction into an external latch B deactivate the chip-select signal from memory devices C latch the 8 bits of address lines AD7-AD0 into an external latch D find the interrupt enable status of the TRAP interrupt E None of the above
Computer-Organization       Microprocessor       Gate-1991
Question 232 Explanation:
Note: Out of syllabus.
 Question 233

 A True B False
Computer-Organization       Modes-of-Transfer       Gate-1990
Question 233 Explanation:
In programmed I/O, there is no interface register, but in interrupt-driven I/O an interface register is used. So interrupt-driven I/O is faster than programmed I/O.
 Question 234

 A True B False
Computer-Organization       Instruction-Execution       Gate-1990
Question 234 Explanation:
Flags are tested during conditional calls and jumps; they are not affected or changed.
 Question 235
 A True B False
Computer-Organization       Cache       Gate-1990
Question 235 Explanation:
Main memory transfers data to the cache in blocks, and the cache transfers data in words, which enables the interleaved main memory unit to operate at maximum speed.
 Question 236

 A (A)-(r), (B)-(s), (C)-(q), (D)-(p)
Computer-Organization       Match-the-Following       Gate-1989
Question 236 Explanation:
 Question 237

 A (A)-(s), (B)-(r), (C)-(p), (D)-(q)
Computer-Organization       Match-the-Following       Gate-1989
Question 237 Explanation:
Base addressing gives position independence: by changing the value in the base register, the referenced addresses also change.
Indexed mode can be used to access an array whose elements are in successive memory locations.
Stack addressing supports reentrancy: whenever the code happens to be used again, the addresses need not be the same.
Implied addressing belongs to the accumulator: if an address is not specified, it is implied to be the accumulator.
 Question 238
In a microprocessor-based system, if a bus (DMA) request and an interrupt request arrive simultaneously, the microprocessor attends first to the bus request.
 A True. B False.
Computer-Organization       Modes-of-Transfer       GATE-1987
Question 238 Explanation:
The HOLD input has a higher priority than the INTR or NMI interrupt inputs.
 Question 239
Data transfer between a microprocessor and an I/O device is usually faster in memory-mapped-I/O scheme than in I/O-mapped - I/O scheme.
 A True B False
Computer-Organization       I/O-Handling       GATE-1987
Question 239 Explanation:
Memory-mapped I/O runs faster than I/O-mapped I/O.
 Question 240
The most relevant addressing mode to write position-independent codes is:
 A Direct mode B Indirect mode C Relative mode D Indexed mode
Question 240 Explanation:
Relative mode, since addresses are specified relative to the PC, so the code can be relocated without adjusting the addresses.
 Question 241
A microprogrammed control unit
 A Is faster than a hard-wired control unit. B Facilitates easy implementation of new instruction. C Is useful when very small programs are to be run. D Usually refers to the control unit of a microprocessor.
Computer-Organization       Micro-Programmed-Control-Unit       GATE-1987
Question 241 Explanation:
In micro-programmed control unit we can add new instruction by changing the content of control memory.
 Question 242
On receiving an interrupt from a I/O device the CPU:
 A Halts for a predetermined time. B Hands over control of address bus and data bus to the interrupting device. C Branches off to the interrupt service routine immediately. D Branches off to the interrupt service routine after completion of the current instruction.
Computer-Organization       Interrupt       GATE-1987
Question 242 Explanation:
On receiving an interrupt from a I/O device the CPU branches off to the interrupt service routine after completion of the current instruction.