Operating-Systems
Question 1 |
Consider three concurrent processes P1, P2 and P3 as shown below, which access a shared variable D that has been initialized to 100.

The processes are executed on a uniprocessor system running a time-shared operating system. If the minimum and maximum possible values of D after the three processes have completed execution are X and Y respectively, then the value of Y–X is _______.
10 | |
40 | |
60 | |
80 |

Minimum: P2 reads D=100 and is preempted.
P1 executes D=D+20, making D=120.
P3 executes D=D+10, making D=130.
P2 resumes with its stale value D=100 and executes
D = D-50 = 100-50 = 50
P2 writes D=50 as the final value. This is the minimum, X = 50.
Maximum:
P1 and P2 both read D=100. P2 computes D-50 and writes D=50.
P1 then writes D = 100+20 = 120, overwriting P2's update.
P3 now reads D=120 and executes D=D+10, giving the maximum value D=130.
So, Y - X = 130 - 50 = 80.
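Both interleavings can be replayed deterministically; the sketch below (an illustration, not part of the original solution) models each process's private read with a local variable and prints X and Y.

#include <stdio.h>

int main(void) {
    /* Minimum: P2 reads a stale value and writes last. */
    int D = 100;
    int p2 = D;          /* P2 reads 100, then is preempted        */
    D = D + 20;          /* P1 runs to completion: D = 120         */
    D = D + 10;          /* P3 runs to completion: D = 130         */
    D = p2 - 50;         /* P2 resumes with stale 100: D = 50      */
    printf("minimum X = %d\n", D);   /* 50 */

    /* Maximum: P2's update is overwritten and P3 runs last. */
    D = 100;
    int p1 = D;          /* P1 reads 100                           */
    D = D - 50;          /* P2 runs to completion: D = 50          */
    D = p1 + 20;         /* P1 writes 120, losing P2's update      */
    D = D + 10;          /* P3 runs last: D = 130                  */
    printf("maximum Y = %d\n", D);   /* 130 */
    return 0;
}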
Question 2 |
The following C program is executed on a Unix/Linux system:
#include <unistd.h>
int main()
{
    int i;
    for (i = 0; i < 10; i++)
        if (i % 2 == 0)
            fork();
    return 0;
}
The total number of child processes created is _____.
26 | |
33 | |
31 | |
28 |
#include <unistd.h>
int main( )
{
int i;
fork( );
fork( );
fork( );
fork( );
fork( );
}
n unconditional fork statements create 2^n - 1 child processes.
Hence, n = 5 ⇒ We have 31 child processes.
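As a quick empirical check (a sketch, not part of the original solution), every process created by the question's loop can print one line; running it should produce 32 lines, i.e. 31 children plus the parent.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i;
    for (i = 0; i < 10; i++)
        if (i % 2 == 0)     /* fork happens for i = 0, 2, 4, 6, 8 */
            fork();
    /* every process in the tree executes this line exactly once */
    printf("pid %d\n", (int)getpid());
    return 0;
}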
Question 3 |
Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?
8 × 2^20 | |
4 × 2^20 | |
16 × 2^10 | |
256 × 2^10 |
The TLB has 128 = 2^7 valid entries, so it can translate 2^7 distinct pages without a miss.
Each page is 8 kB and a word is 4 bytes, so each page holds 8 kB / 4 B = 2^11 words.
Since the memory is word addressable, the number of distinct virtual addresses that can be translated without a TLB miss is 2^7 × 2^11 = 2^18 = 256 × 2^10.

Question 4 |
Consider the following four processes with arrival times (in milliseconds) and their length of CPU bursts (in milliseconds) as shown below:

These processes are run on a single processor using preemptive Shortest Remaining Time First scheduling algorithm. If the average waiting time of the processes is 1 millisecond, then the value of Z is _____.
2 | |
7 | |
1 | |
4 |

At this point P4 arrives with burst Z while P3 is waiting in the queue with burst 3.
P1 and P2 have already executed, with P1 having waited 1 unit and P2 having waited 0 units.
For the average waiting time to be 1 ms, the total waiting time must be 4 ms: (0 + 1 + x)/4 = 1
⇒ x = 3, i.e., the remaining waiting time must be 3 units. This happens if Z = 2:
P4 executes next and completes at t = 6,
so P3 waits for a total of 3 units.
Hence, Avg = 1 ms.

Z=2
Question 5 |
The index node (inode) of a Unix-like file system has 12 direct, one single-indirect and one double-indirect pointers. The disk block size is 4 kB, and the disk block address is 32-bits long. The maximum possible file size is (rounded off to 1 decimal place) ______ GB.
7.0 | |
9.0 | |
2.0 | |
4.0 |
Max. file size
= (12 + 1k + 1k × 1k) × 4 kB ≈ (12 + 1024 + 1024 × 1024) × 4 × 1024 bytes
≈ 4.0 GB
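The same arithmetic as a small sketch; the 1024 addresses per block follow from the 4 kB block size and 4-byte (32-bit) block addresses.

#include <stdio.h>

int main(void) {
    const long long block = 4 * 1024;       /* disk block size: 4 kB              */
    const long long addrs = block / 4;      /* 4-byte addresses per block: 1024   */
    long long blocks = 12                   /* direct pointers                    */
                     + addrs                /* single indirect                    */
                     + addrs * addrs;       /* double indirect                    */
    double gb = (double)(blocks * block) / (1LL << 30);
    printf("max file size = %.1f GB\n", gb);   /* ~4.0 GB */
    return 0;
}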
Question 6 |
Consider the following snapshot of a system running n concurrent processes. Process i is holding Xi instances of a resource R, 1 ≤ i ≤ n. Assume that all instances of R are currently in use. Further, for all i, process i can place a request for at most Yi additional instances of R while holding the Xi instances it already has. Of the n processes, there are exactly two processes p and q such that Yp = Yq = 0. Which one of the following conditions guarantees that no other process apart from p and q can complete execution?
Min (Xp, Xq) ≥ Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} | |
Xp + Xq < Max {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} | |
Min (Xp, Xq) ≤ Max {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} | |
Xp + Xq < Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} |
Pi holds Xi instances.
Pi can request additional Yi instances.
Given are two processes p and q whose additional requests are zero, i.e.,
Yp = Yq = 0
{Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} means that out of 'n' processes, we are left with (n-2) process (except p&q), i.e., Yk indicates additional request of all the processes (n-2) except p & q.
After p and q complete and release their instances, no other process must be able to have its additional request satisfied, so
Xp + Xq < Min {Yk}
Option D is correct.
There are exactly two processes p and q which do not need any additional instances of resources.
So, p and q will complete their execution and will release Xp and Xq instances of resources.
Now to guarantee that no other process apart from p and q can complete execution, the no. of instances of resources available must be less than the minimum no. of instances of resources required by any other process, i.e.,
Xp + Xq < Min {Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q}
Question 7 |
The hole created by worst fit is always larger than the hole created by first fit.
| |
The hole created by best fit is never larger than the hole created by first fit.
| |
The hole created by first fit is always larger than the hole created by next fit. | |
The hole created by next fit is never larger than the hole created by best fit.
|
Question 8 |
I. A running process can move to ready state.
II. A ready process can move to running state.
III. A blocked process can move to running state.
IV. A blocked process can move to ready state.
Which of the above statements are TRUE?
II and III only
| |
I, II and III only
| |
I, II, III and IV
| |
I, II and IV only
|

Question 9 |

If the time quantum for RR is 4 ms, then the absolute value of the difference between the average turnaround times (in ms) of SJF and RR (round off to 2 decimal places) is _____.
5.25 |

Question 10 |

wait (a); count = count+1;
if (count==n) signal (b);
signal (a); wait (b); signal (b);

What does the code achieve?
It ensures that all processes execute CODE SECTION P mutually exclusively.
| |
It ensures that at most two processes are in CODE SECTION Q at any time. | |
It ensures that no process executes CODE SECTION Q before every process has finished CODE SECTION P.
| |
It ensures that at most n-1 processes are in CODE SECTION P at any time.
|
Question 11 |
(P, 155), (Q, 85), (R, 110), (S, 30), (T, 115)
Assume the head is positioned at cylinder 100. The scheduler follows Shortest Seek Time First scheduling to service the requests.
Which one of the following statements is FALSE?
The head reverses its direction of movement between servicing of Q and P. | |
T is serviced before P. | |
R is serviced before P.
| |
Q is serviced after S, but before T.
|

Question 12 |
154.5ns |
T = 20 ns
M = 100 ns (memory access time, inferred from the calculation below)
D = 5000 ns
h = 0.95
p = 0.1, 1-p = 0.9
d = 0.2, 1-d = 0.8
EMAT = h×(T+M)+(1-h)[(1-p)×2M+p[(1-d)[D+M]+d(2D+M)]+T]
= 0.95×(20+100)+(1-0.95)[(1-0.1)×200+(0.1)[(1-0.2)[5000+100]+(0.2)(10000+100)]+20] = 154.5ns
Question 13 |
Consider a process executing on an operating system that uses demand paging. The average time for a memory access in the system is M units if the corresponding memory page is available in memory and D units if the memory access causes a page fault. It has been experimentally measured that the average time taken for a memory access in the process is X units.
Which one of the following is the correct expression for the page fault rate experienced by the process?
(D-M)/(X-M) | |
(X-M)/(D-M) | |
(D-X)/(D-M) | |
(X-M)/(D-X) |
X = (1 - P)M + D × P
X = M - PM + DP
(X - M) = P(D - M)
⇒ P = (X - M) / (D - M)
Question 14 |
Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of K instances. Resource instances can be requested and released only one at a time. The largest value of K that will always avoid deadlock is _________.
2 | |
3 | |
4 | |
5 |
No. of resources = 4
Let us give each process one less than its maximum demand, i.e., (k - 1) resources.
So, for three processes, 3(k - 1) resources are allocated.
To guarantee that deadlock is avoided, one additional resource must be available for any one of the processes.
∴ Total resources required to avoid deadlock in every case = 3(k - 1) + 1 = 3k - 2
This must be less than or equal to the available number of resources, i.e.,
3k - 2 ≤ 4
k ≤ 2
So the largest value of k is 2.
Question 15 |
Consider the following solution to the producer-consumer synchronization problem. The shared buffer size is N. Three semaphores empty, full and mutex are defined with respective initial values of 0, N and 1. Semaphore empty denotes the number of available slots in the buffer, for the consumer to read from. Semaphore full denotes the number of available slots in the buffer, for the producer to write to. The placeholder variables, denoted by P, Q, R and S, in the code below can be assigned either empty or full. The valid semaphore operations are: wait() and signal().

Which one of the following assignments to P, Q, R and S will yield the correct solution?
P: full, Q: full, R: empty, S: empty
| |
P: empty, Q: empty, R: full, S: full
| |
P: full, Q: empty, R: empty, S: full
| |
P: empty, Q: full, R: full, S: empty
|
Initial: mutex = 1
empty = 0
full = N

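The question's code is not reproduced above, so the sketch below only assumes the usual structure (producer: wait(P); wait(mutex); add item; signal(mutex); signal(Q) and consumer: wait(R); wait(mutex); remove item; signal(mutex); signal(S)). With the swapped definitions used here, option C (P: full, Q: empty, R: empty, S: full) makes the producer wait for a writable slot and the consumer wait for a readable item.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4
static int buf[N], in = 0, out = 0;
static sem_t empty, full, mutex;   /* empty: readable items, full: free slots */

static void *producer(void *arg) {
    for (int i = 1; i <= 8; i++) {
        sem_wait(&full);                 /* P = full : wait for a free slot   */
        sem_wait(&mutex);
        buf[in] = i; in = (in + 1) % N;  /* add item to buffer                */
        sem_post(&mutex);
        sem_post(&empty);                /* Q = empty: one more readable item */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 1; i <= 8; i++) {
        sem_wait(&empty);                /* R = empty: wait for an item       */
        sem_wait(&mutex);
        int v = buf[out]; out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&full);                 /* S = full : one more free slot     */
        printf("consumed %d\n", v);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, 0);              /* no items yet  */
    sem_init(&full, 0, N);               /* N free slots  */
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}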
Question 16 |
Consider a storage disk with 4 platters (numbered as 0, 1, 2 and 3), 200 cylinders (numbered as 0, 1, … , 199), and 256 sectors per track (numbered as 0, 1, … 255). The following 6 disk requests of the form [sector number, cylinder number, platter number] are received by the disk controller at the same time:
-
[120, 72, 2], [180, 134, 1], [60, 20, 0], [212, 86, 3], [56, 116, 2], [118, 16, 1]
Currently head is positioned at sector number 100 of cylinder 80, and is moving towards higher cylinder numbers. The average power dissipation in moving the head over 100 cylinders is 20 milliwatts and for reversing the direction of the head movement once is 15 milliwatts. Power dissipation associated with rotational latency and switching of head between different platters is negligible.
The total power consumption in milliwatts to satisfy all of the above disk requests using the Shortest Seek Time First disk scheduling algorithm is ______ .
85 | |
86 | |
87 | |
88 |
→ A storage disk with 4 platters (0, 1, 2 & 3), 200 cylinders (0, 1, …, 199), and 256 sectors per track.
In the above question the disk requests are given in the form of 〈sector no, cylinder no, platter no〉.
In SSTF (Shortest Seek Time First) Disk Scheduling algo, requests having shortest seek time are executed first.
So, the seek time of every request is calculated in advance in queue and then they are scheduled according to their calculated seek time. Head is positioned at 80 and moving towards higher cylinder no.

Head Movement in SSTF = (86 – 80) + (86 – 72) + (134 – 72) + (134 – 16) = 200
P1: Power dissipated by 200 movements = 0.2 * 200 = 40 mW
Power dissipated in reversing Head direction once = 15 mW
No. of times the head changes its direction = 3
P2: Power dissipated in reversing head direction = 3 × 15 = 45 mW
Total power consumption is P1 + P2 = 85 mW
Question 17 |
In a system, there are three types of resources: E, F and G. Four processes P0, P1, P2 and P3 execute concurrently. At the outset, the processes have declared their maximum resource requirements using a matrix named Max as given below. For example, Max[P2, F] is the maximum number of instances of F that P2 would require. The number of instances of the resources allocated to the various processes at any given state is given by a matrix named Allocation.
Consider a state of the system with the Allocation matrix as shown below, and in which 3 instances of E and 3 instances of F are the only resources available.

From the perspective of deadlock avoidance, which one of the following is true?
The system is in safe state.
| |
The system is not in safe state, but would be safe if one more instance of E were available. | |
The system is not in safe state, but would be safe if one more instance of F were available.
| |
The system is not in safe state, but would be safe if one more instance of G were available.
|

Safe sequence:
〈P0, P2, P1, P3〉
P0: P0 can be allotted 〈3,3,0〉.
After completion Available = 〈4,3,1〉
P2: P2 can be allotted 〈0,3,0〉.
After completion: Available = 〈5,3,4〉
P1: P1 can be allotted 〈1,0,2〉.
After completion: Available = 〈6,4,6〉
P3: P3 can be allotted 〈3,4,1〉.
After completion: Available = 〈8,4,6〉
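A generic Banker's-algorithm safety check reproduces this sequence; the Allocation and Need values below are reconstructed from the walkthrough above (the original tables are images), with Need = Max - Allocation and Available = (3, 3, 0).

#include <stdio.h>
#include <stdbool.h>

#define NP 4   /* processes P0..P3        */
#define NR 3   /* resource types E, F, G  */

int main(void) {
    int alloc[NP][NR] = {{1,0,1},{1,1,2},{1,0,3},{2,0,0}};
    int need [NP][NR] = {{3,3,0},{1,0,2},{0,3,0},{3,4,1}};
    int avail[NR]     = {3,3,0};
    bool done[NP]     = {false};

    printf("safe sequence:");
    for (int count = 0; count < NP; count++) {
        int pick = -1;
        for (int p = 0; p < NP && pick < 0; p++) {
            if (done[p]) continue;
            bool ok = true;
            for (int r = 0; r < NR; r++)
                if (need[p][r] > avail[r]) ok = false;
            if (ok) pick = p;
        }
        if (pick < 0) { printf("  -> NOT SAFE\n"); return 0; }
        for (int r = 0; r < NR; r++)
            avail[r] += alloc[pick][r];    /* process finishes and releases all */
        done[pick] = true;
        printf(" P%d", pick);
    }
    printf("  -> SAFE\n");                 /* prints P0 P2 P1 P3 -> SAFE */
    return 0;
}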
Question 18 |
global variables but not heap. | |
heap but not global variables. | |
neither global variables nor heap. | |
both heap and global variables. |

Question 19 |
Consider the following CPU processes with arrival times (in milliseconds) and length of CPU bursts (in milliseconds) as given below:

If the pre-emptive shortest remaining time first scheduling algorithm is used to schedule the processes, then the average waiting time across all processes is __________ milliseconds.
3 | |
4 | |
5 | |
6 |

Question 20 |
A multithreaded program P executes with x number of threads and used y number of locks for ensuring mutual exclusion while operating on shared memory locations. All locks in the program are non-reentrant, i.e., if a thread holds a lock l, then it cannot re-acquire lock l without releasing it. If a thread is unable to acquire a lock, it blocks until the lock becomes available. The minimum value of x and the minimum value of y together for which execution of P can result in a deadlock are:
x = 1, y = 2 | |
x = 2, y = 1 | |
x = 2, y = 2 | |
x = 1, y = 1 |
A non-reentrant lock cannot be acquired again by the thread that already holds it; if a thread tries to re-acquire a lock it owns, it blocks waiting for itself, which is a deadlock.
Hence, from the given options, already with x = 1 (a single thread) and y = 1 (a single lock) a deadlock is possible in the situation described in the question.
Question 21 |
Recall that Belady’s anomaly is that the page-fault rate may increase as the number of allocated frames increases. Now, consider the following statements:
-
S1: Random page replacement algorithm (where a page chosen at random is replaced) suffers from Belady’s anomaly
S2: LRU page replacement algorithm suffers from Belady’s anomaly
Which of the following is CORRECT?
S1 is true, S2 is true | |
S1 is true, S2 is false | |
S1 is false, S2 is true | |
S1 is false, S2 is false |
Page replacement algorithm suffers from Belady's anomaly when it is not a stack algorithm.
S1: Random page replacement algorithm is not a stack algorithm. So, S1 is true.
S2: LRU is a stack algorithm. Therefore, it doesn't suffer from Belady's anomaly. S2 is false.
Question 22 |
Which of the following is/are shared by all the threads in a process?
-
I. Program counter
II. Stack
III. Address space
IV. Registers
I and II only | |
III only | |
IV only | |
III and IV only |
A process, in the simplest terms, is an executing program.
One or more threads run in the context of the process.
A thread is the basic unit to which the operating system allocates processor time.
A thread can execute any part of the process code, including parts currently being executed by another thread.
Each thread has its own stack, register and PC.
So the address space is the item shared by all threads of a process; the program counter, stack and registers are per-thread.
Question 23 |
In a file allocation system, which of the following allocation scheme(s) can be used if no external fragmentation is allowed?
-
I. Contiguous
II. Linked
III. Indexed
I and III only | |
II only | |
III only | |
II and III only |
→ Contiguous allocation:
Advantage:
i) Both sequential and direct access is possible.
ii) Extremely fast.
Disadvantage:
i) Suffers from both internal and external fragmentation.
ii) Increasing file size is difficult, because it depends on the availability of contiguous memory at a particular instance.
→ Linked list allocation:
Advantage:
i) Flexible in terms of size.
ii) Does not suffer from external fragmentation.
Disadvantage:
i) Allocation is slower.
ii) Does not support random or direct access.
iii) Pointers require some extra overhead.
→ Indexed allocation:
Advantage:
i) Support direct access, so provides fast access to the file blocks.
ii) No external fragmentation.
Disadvantage:
i) The pointer overhead for indexed allocation is greater than linked allocation.
ii) Inefficient in terms of memory utilization.
Question 24 |
A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for three processes are shown below:

Which of the following best describe current state of the system?
Safe, Deadlocked | |
Safe, Not Deadlocked | |
Not Safe, Deadlocked | |
Not Safe, Not Deadlocked |

Available: (9 - (3 + 1 + 3)) = 2, P3 can be satisfied.
New available = 3 + 2 = 5
Now, P2 can be satisfied.
New available: 5 + 1 = 6
Now, P1 can be satisfied. Thus safe sequence: P3 → P2 → P1
That is not deadlocked.
Question 25 |
Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (0 is the highest priority) shown below. None of the processes have I/O burst time.

The average waiting time (in milliseconds) of all the processes using preemptive priority scheduling algorithm is __________.
29 | |
30 | |
31 | |
32 |

Waiting Time = 0 + (33 - 5) + (40 - 2) + (49 - 12) + (51 - 9) = 145
Average waiting time: 145/5 = 29
Question 26 |
Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same time to a computer system. Which one of the following process scheduling algorithms would minimize the average waiting time in the ready queue?
Shortest remaining time first | |
Round-robin with time quantum less than the shortest CPU burst | |
Uniform random | |
Highest priority first with priority proportional to CPU burst length |
We can consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same time to a computer system.
We have to choose the appropriate process scheduling algorithms, which would minimize the average waiting time in the ready queue.
Waiting time is the time for which process is ready to run but not executed by CPU scheduler.
In all CPU Scheduling algorithms, shortest job first is optimal.
It gives minimum turnaround time, minimum average waiting time and high throughput and the most important thing is that shortest remaining time first is the preemptive version of shortest job first.
This scheduling algorithm may lead to starvation because if the short processes are added to the CPU scheduler continuously then the currently running process will never be able to execute as they will get pre-empted but here all the processes are arrived at same time so there will be no issue such as starvation.
SRTF would be same as SJF.
So, A is the answer. Shortest remaining time first.
Question 27 |
Consider a computer system with 40-bit virtual addressing and page size of sixteen kilobytes. If the computer system has a one-level page table per process and each page table entry requires 48 bits, then the size of the per-process page table is __________ megabytes.
384 MB | |
385 MB | |
386 MB | |
387 MB |
Virtual address space = 2^40 bytes and page size = 16 KB = 2^14 bytes
No. of pages = virtual address space / page size = 2^40 / 2^14 = 2^26
Size of page table = 2^26 × 48/8 bytes = 2^26 × 6 B = 6 × 64 MB = 384 MB
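The same calculation as a one-off sketch:

#include <stdio.h>

int main(void) {
    unsigned long long pages = (1ULL << 40) / (1ULL << 14);   /* 2^26 pages      */
    unsigned long long bytes = pages * (48 / 8);               /* 6 bytes per PTE */
    printf("page table = %llu MB\n", bytes >> 20);             /* 384 MB          */
    return 0;
}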
Question 28 |
Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-LOOK scheduling algorithm is used. The head is initially at cylinder number 63, moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is ___________.
346 | |
347 | |
348 | |
349 |
I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10.
C-LOOK scheduling algorithm is used.
Head is initially at cylinder number 63.
First start from 63
63 → 87 → 92 → 121 → 191 = 24 + 5 + 29 + 70 movements = 128
191 → 10 movements = 181
10 → 11 → 38 → 47 = 1 + 27 + 9 movements = 37
Total = 128 + 181 + 37 = 346
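The arithmetic can be checked by summing absolute movements along the C-LOOK service order derived above, counting the jump from 191 back to 10 as head movement, as the solution does.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* service order under C-LOOK starting at 63, moving towards larger numbers */
    int order[] = {63, 87, 92, 121, 191, 10, 11, 38, 47};
    int n = sizeof(order) / sizeof(order[0]);
    int total = 0;
    for (int i = 1; i < n; i++)
        total += abs(order[i] - order[i - 1]);
    printf("total head movement = %d cylinders\n", total);   /* 346 */
    return 0;
}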
Question 29 |
Consider a computer system with ten physical page frames. The system is provided with an access sequence (a1,a2,…,a20,a1,a2,…,a20), where each ai is a distinct virtual page number. The difference in the number of page faults between the last-in-first-out page replacement policy and the optimal page replacement policy is ___________.
1 | |
2 | |
3 | |
4 |
First we can consider LIFO (Last In First Out) →
a1 to a10 will result in page faults = 10 page faults,
Then a11 will replace a10 (last in is a10), a12 replace a11 and ...till a20 = 10 page faults and a20 will be top of stack and a9…a1 are remained as such.
Then a1 to a9 are already there.
So, 0 page faults from a1 to a9.
a10 will replace a20, a11 will replace a10 and so on = So 11 page faults.
So total faults will be 10+10+11 = 31.
Second Optimal Page Replacement Policy →
a1 to a10 = 10 page faults,
a11 will replace a10 because among a1 to a10, a10 will be used later, a12 will replace a11 and so on = 10 page faults.
a20 will be top of stack and a9…a1 are remained as such.
a1 to a9 = 0 page fault. a10 will replace a1 because it will not be used afterwards and so on, a10 to a19 will have 10 page faults.
So no page fault for a20.
Total = 10+ 10 +10 = 30.
So the difference between LIFO - Optimal = 1
Question 30 |
Consider the following proposed solution for the critical section problem. There are n processes: P0...P(n-1). In the code, function pmax returns an integer not smaller than any of its arguments. For all i, t[i] is initialized to zero.
Code for Pi:
do {
    c[i] = 1;
    t[i] = pmax(t[0],...,t[n-1]) + 1;
    c[i] = 0;
    for every j ≠ i in {0,...,n-1} {
        while (c[j]);
        while (t[j] != 0 && t[j] <= t[i]);
    }
    Critical Section;
    t[i] = 0;
    Remainder Section;
} while (true);
Which one of the following is TRUE about the above solution?
At most one process can be in the critical section at any time | |
The bounded wait condition is satisfied | |
The progress condition is satisfied | |
It cannot cause a deadlock |
Based on the above code, options B, C and D are not satisfied.
Because t[i] = pmax(t[0],...,t[n-1]) + 1 is not executed atomically, two processes can obtain equal tokens, i.e., t[j] == t[i] is possible.
If that happens, each process waits in while (t[j] != 0 && t[j] <= t[i]); for the other, so a deadlock is possible.
Since a deadlock is possible, the progress condition is not satisfied (no process can make progress even though none is in the critical section).
Bounded waiting is also not satisfied: the same t[j] == t[i] situation leads to infinite waiting.
Mutual exclusion is satisfied.
Any process j that chose its token before process i must have t[j] ≤ t[i], since pmax() returns an integer not smaller than any of its arguments.
Such a process j with a positive token keeps process i waiting in the while loop as long as t[j] > 0 && t[j] <= t[i] holds.
Only when process j leaves its critical section does it set t[j] = 0, after which process i moves on to the next j in the for loop.
So, when process i reaches its critical section, no process j that obtained a smaller token is still inside its critical section.
This ensures that only one process executes its critical section at a time.
So, A is the answer.
Question 31 |
In which one of the following page replacement algorithms it is possible for the page fault rate to increase even when the number of allocated frames increases?
LRU (Least Recently Used) | |
OPT (Optimal Page Replacement) | |
MRU (Most Recently Used) | |
FIFO (First In First Out) |
Belady's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns.
The First In First Out (FIFO) page replacement algorithm suffers from it: in some situations, FIFO gives more page faults when the number of page frames is increased.
Question 32 |
Consider the following processes, with the arrival time and the length of the CPU burst given in milliseconds. The scheduling algorithm used is preemptive shortest remaining-time first.

The average turn around time of these processes is _________ milliseconds.
8.25 | |
8.26 | |
8.27 | |
8.28 |
To answer the question we need to design the gantt chart:

In this algorithm, the processes will be scheduled on the CPU which will be having least remaining burst time.
Turnaround Time (TAT) = Completion Time (CT) - Arrival Time (AT)
TAT for P1 = 20 - 0 = 20,
TAT for P2 = 10 - 3 = 7,
TAT for P3 = 8 - 7 = 1,
TAT for P4 = 13 - 8 = 5.
Total TAT = 20 + 7 + 1 + 5 = 33; Average TAT = 33/4 = 8.25 ms
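The completion times above imply the arrival/burst pairs P1(0, 10), P2(3, 6), P3(7, 1), P4(8, 3) (a reconstruction, since the original table is an image); a small SRTF simulator over these values reproduces the average turnaround time.

#include <stdio.h>

#define N 4

int main(void) {
    int at[N] = {0, 3, 7, 8};     /* reconstructed arrival times */
    int bt[N] = {10, 6, 1, 3};    /* reconstructed CPU bursts    */
    int rem[N], done = 0, t = 0;
    double tat_sum = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];

    while (done < N) {
        int pick = -1;
        for (int i = 0; i < N; i++)          /* pick smallest remaining time */
            if (at[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }     /* CPU idle */
        rem[pick]--; t++;
        if (rem[pick] == 0) {
            done++;
            tat_sum += t - at[pick];         /* TAT = completion - arrival */
            printf("P%d completes at %d\n", pick + 1, t);
        }
    }
    printf("average TAT = %.2f ms\n", tat_sum / N);   /* 8.25 */
    return 0;
}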
Question 33 |
Consider the following two-process synchronization solution.
Process 0                          Process 1
---------                          ---------
Entry: loop while (turn == 1);     Entry: loop while (turn == 0);
       (critical section)                 (critical section)
Exit:  turn = 1;                   Exit:  turn = 0;
The shared variable turn is initialized to zero. Which one of the following is TRUE?
This is a correct two-process synchronization solution. | |
This solution violates mutual exclusion requirement. | |
This solution violates progress requirement. | |
This solution violates bounded wait requirement. |
B) Mutual exclusion is satisfied because of the strict alternation on turn, so both processes can never be in the critical section together. So False.
C) Progress means if one process does not want to enter the critical section then it should not stop other process to enter the critical section.
But we can see that if process 0 will not enter the critical section then value of turn will not become 1 and process 1 will not be able to enter critical section.
So progress not satisfied. True.
D) Bounded waiting solution as there is a strict alteration.
So, False.
Question 34 |
Consider a non-negative counting semaphore S. The operation P(S) decrements S, and V(S) increments S. During an execution, 20 P(S) operations and 12 V(S) operations are issued in some order. The largest initial value of S for which at least one P(S) operation will remain blocked is _________.
7 | |
8 | |
9 | |
10 |
For at least one P(S) operation to remain blocked, the final value of the semaphore must be -1.
The negative value of the counting semaphore indicates the number of processes in the suspended list (or blocked).
Take any sequence of 20 P and 12 V operations; we need at least one process to remain blocked.
So, X - 20 + 12 = -1
Here P(S) = 20 and V(S) = 12
X = 7
Question 35 |
P1() {
    C = B - 1;
    B = 2*C;
}
P2() {
    D = 2 * B;
    B = D - 1;
}
The number of distinct values that B can possibly take after the execution is
3 | |
4 | |
5 | |
6 |
With the initial value B = 2: executing P1 after P2 gives B = 4, and executing P2 after P1 gives B = 3.
With preemption, B = 2 (preemption from P1 to P2) or B = 3 (preemption from P2 to P1); among these two interleavings only one value is finally stored in B. So the distinct values B can possibly take after the execution are 2, 3 and 4, i.e., 3 values.
Question 36 |
13 | |
12 | |
15 | |
16 |

∴ At the end of 12 milliseconds, 1st instance of T3 will complete its execution.
Question 37 |
Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it starts execution) is _________ tracks.
10 | |
11 | |
12 | |
13 |

∴ Total head movements,
= 10 + 10 + 10 + 10 + 10 + 55 + 20 + 5 + 10
= 140
SSTF:

∴ Total head movements
= 5 + 15 + 10 + 10 + 65 + 5 + 10 + 10
= 130
∴ Additional distance that will be traversed by R/W head is
= 140 - 130
= 10
Question 38 |
Consider a main memory with five page frames and the following sequence of page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which one of the following is true with respect to page replacement policies First-In-First Out (FIFO) and Least Recently Used (LRU)?
Both incur the same number of page faults
| |
FIFO incurs 2 more page faults than LRU
| |
LRU incurs 2 more page faults than FIFO | |
FIFO incurs 1 more page faults than LRU |

∴ Number of page faults = 9

∴ Number of page faults = 9
Question 39 |
A system has 6 identical resources and N processes competing for them. Each process can request at most 2 resources. Which one of the following values of N could lead to a deadlock?
1 | |
2 | |
3 | |
6 |
Question 40 |
22 | |
23 | |
24 | |
25 |

Question 41 |
36 | |
37 | |
38 | |
39 |
PAS = 32 bits
∴ No. of frames = PA / page size = 2^32 / 2^13 = 2^19
Also, it is given that each page table entry contains a valid bit, a dirty bit and 3 permission bits:
= 5 bits reserved
So one page table entry size is
= 19 + 5 = 24 bits = 3 bytes
Now, page table size = no. of entries × entry size
24 × 2^20 = no. of entries × 3
No. of entries = 8 × 2^20 = 2^23
∴ Virtual address space = no. of entries × page size = 2^23 × 2^13 = 2^36
∴ Virtual address size = 36 bits
Question 42 |
n | |
n2 | |
2n | |
Independent of n |
Maximum number of processes that will be in ready state is independent of number of processors.
Question 43 |
I. Processes should acquire all their resources at the beginning of execution. If any resource is not available, all resources acquired so far are released.
II. The resources are numbered uniquely, and processes are allowed to request for resources only in increasing resource numbers.
III. The resources are numbered uniquely, and processes are allowed to request for resources only in decreasing resource numbers.
IV. The resources are numbered uniquely. A process is allowed to request only for a resource with resource number larger than its currently held resources.
Which of the above policies can be used for preventing deadlock?
Any one of I and III but not II or IV | |
Any one of I, III, and IV but not II | |
Any one of II and III but not I or IV | |
Any one of I, II, III, and IV |
Question 44 |

First Come First Serve
| |
Non-preemptive Shortest Job First | |
Shortest Remaining Time | |
Round Robin with Quantum value two |

FCFS:

TAT for A = completion time(A) - AT(A) = 3 - 0 = 3
TAT of B = 9 - 1 = 8
TAT of C = 13 - 4 = 9
TAT of D = 15 - 6 = 9
∴ Avg. TAT = 29/4
SJF:

TAT of A = 3 - 0 = 3
TAT of B = 9 - 1 = 8
TAT of C = 15 - 4 = 11
TAT of D = 11 - 6 = 5
∴ Avg. TAT = 27/4
SRTF:

TAT of A = 3 - 0 = 3
TAT of B = 15 - 1 = 14
TAT of C = 8 - 4 = 4
TAT of D = 10 - 6 = 4
∴ Avg. TAT = 25/4
RR:

TAT of A = 5 - 0 = 5
TAT of B = 15 - 1 = 14
TAT of C = 13 - 4 = 9
TAT of D = 11 - 6 = 5
∴ Avg. TAT = 33/4
∴ SRTF has lowest Avg. TAT.
Question 45 |
3 | |
4 | |
5 | |
6 |

90 is serviced after servicing three requests, i.e.,
100 → 105 → 110 → 90
Question 46 |
User level threads are not scheduled by the kernel. | |
When a user level thread is blocked, all other threads of its process are blocked. | |
Context switching between user level threads is faster than context switching between kernel level threads. | |
Kernel level threads cannot share the code segment. |
Question 47 |


REQ1: P0 requests 0 units of X, 0 units of Y and 2 units of Z REQ2: P1 requests 2 units of X, 0 units of Y and 0 units of Z
Only REQ1 can be permitted. | |
Only REQ2 can be permitted. | |
Both REQ1 and REQ2 can be permitted. | |
Neither REQ1 nor REQ2 can be permitted. |
After allowing Req 1,

Available: X=3, Y=2, Z=0
With this we can satisfy P1's requirement. So available becomes: X=6, Y=4, Z=0.
Since Z is not available, neither P0's nor P2's requirement can be satisfied. So, it is an unsafe state.
Lets take 2nd case,
After allowing Req 2,

Available: X=1, Y=2, Z=2
With this we can satisfy any one of P1's or P2's requirement. Lets first satisfy P1's requirement. So Available now becomes:
X=6, Y=4, Z=2
Now with the availability we can satisfy P2's requirement. So Available now becomes,
X=8, Y=5, Z=3
With this availability P0 can also be satisfied. So, hence it is in safe state.
So from above two cases Req 1 cannot be permitted but Req 2 can be permitted.
Question 48 |

7.2 | |
7.3 | |
7.4 | |
7.5 |

TAT(A) = 8-0 = 8, TAT(B)= 5-3=2, TAT(C)= 12-5=7, TAT(D)= 21-7= 14, TAT(E)=15-10=5
Average turnaround time = (8 + 2 + 7 + 14 + 5)/5 = 36/5 = 7.2 ms
Question 49 |
7 | |
8 | |
9 | |
10 |

Total 7 faults are there.
Question 50 |
99.60 | |
99.61 | |
99.62 | |
99.63 |
No. of disk blocks = total disk size / disk block size
= 100 × 10^6 / 10^3
= 100 × 10^3
∴ Total overhead = 100 × 10^3 × 4 B
= 400 × 10^3 B
= 0.4 × 10^6 B
So, the maximum size of a file that can be stored on the disk is
total disk size - total overhead
= 100 × 10^6 - 0.4 × 10^6
= 99.6 × 10^6 B
Question 51 |

The producer will be able to add an item to the buffer, but the consumer can never consume it. | |
The consumer will remove no more than one item from the buffer. | |
Deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty. | |
The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation. |
Question 52 |
Process id   tc        tio
A            100 ms    500 ms
B            350 ms    500 ms
C            200 ms    500 ms
The processes A, B, and C are started at times 0, 5 and 10 milliseconds respectively, in a pure time sharing system (round robin scheduling) that uses a time slice of 50 milliseconds. The time in milliseconds at which process C would complete its first I/O operation is ___________.
1000 | |
1001 | |
1002 | |
1003 |

So 'C' completes its I/O at 1000 time units.
Question 53 |
Least-recently-used | |
First-in-first-out | |
Last-in-first-out | |
Most-recently-used |

So, there would be 300 page faults in total (each pass through the reference string causes 100 page faults). It is also visible that every replacement evicts the page that was most recently referred, since that page will be the least recently referred in the future. So, for the given page reference string the optimal page replacement policy behaves exactly like the most recently used policy, and the number of page faults is the same for both.
Question 54 |
7 | |
8 | |
9 | |
10 |
Question 55 |
6 and 6 | |
8 and 10 | |
9 and 12 | |
4 and 4 |
c = a + d
d = b + c
e = d - b = b + c - b = c
a = d - b + b = d
So, DAG representation of above basic block respectively,

So, number of nodes = 6 &
number of edges = 6
Question 56 |


The average waiting time (in milliseconds) of the processes is |
5.5 | |
5.6 | |
5.7 | |
5.8 |

WT - Waiting Time
CT - Completion Time
TAT - Turn Around Time
TAT = CT - AT
WT = TAT - BT
Gantt chart using Shortest remaining time first,

Avg. WT = (15 + 0 + 3 + 4)/4 = 22/4 = 5.5
Question 57 |
122 | |
123 | |
124 | |
125 |
= 10 + 0.4 × 80 + 80
= 10 + 32 + 80
= 122 ms
Question 58 |
1.68 | |
1.69 | |
1.70 | |
1.71 |
Total memory operations = 100 instruction fetches + 60 operand reads + 40 operand writes = 200 operations
Time taken for fetching 100 instructions (equivalent to read) = 90*1ns + 10*5ns = 140ns
Memory operand Read operations = 90% (60)*1ns + 10% (60) × 5ns = 54ns + 30 ns = 84ns
Memory operands Write operations time = 90% (40)*2ns + 10% (40)*10ns
= 72ns + 40ns = 112ns
Total time taken for executing 200 instructions = 140 + 84 + 112 = 336ns
∴ Average memory access time = 336 ns / 200 = 1.68 ns
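The same computation as a sketch; the 100/60/40 operation counts and 90%/10% hit split are the figures used above, with writes costing 2 ns on a hit and 10 ns on a miss as in the solution.

#include <stdio.h>

int main(void) {
    double fetch  = 100 * (0.9 * 1 + 0.1 * 5);    /* instruction fetches (reads) */
    double reads  = 60  * (0.9 * 1 + 0.1 * 5);    /* operand reads               */
    double writes = 40  * (0.9 * 2 + 0.1 * 10);   /* operand writes              */
    double ops = 100 + 60 + 40;                   /* 200 memory operations       */
    printf("average access time = %.2f ns\n", (fetch + reads + writes) / ops);  /* 1.68 */
    return 0;
}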
Question 59 |
This algorithm is equivalent to the first-come-first-serve algorithm. | |
This algorithm is equivalent to the round-robin algorithm. | |
This algorithm is equivalent to the shortest-job-first algorithm. | |
This algorithm is equivalent to the shortest-remaining-time-first algorithm. |
Question 60 |
X: P(a)P(b)P(c) Y: P(b)P(c)P(d) Z: P(c)P(d)P(a) | |
X: P(b)P(a)P(c) Y: P(b)P(c)P(d) Z: P(a)P(c)P(d) | |
X: P(b)P(a)P(c) Y: P(c)P(b)P(d) Z: P(a)P(c)P(d) | |
X: P(a)P(b)P(c) Y: P(c)P(b)P(d) Z: P(c)P(d)P(a) |
Question 61 |
1281 | |
1282 | |
1283 | |
1284 |
Each cylinder has 16 surfaces with 64 sectors of 512 bytes each, so one cylinder holds 16 × 64 × 512 B = 512 KB and one track holds 64 × 512 B = 32 KB; we need to store 42797 KB of data.
Starting at ⟨1200, 9, 40⟩, cylinder 1200 can still hold the remaining 24 sectors of surface 9 plus surfaces 10 to 15, i.e. 24 + 6 × 64 = 408 sectors = 204 KB.
Remaining data = 42797 - 204 = 42593 KB. Since 42593 / 512 ≈ 83.2, the next 83 cylinders (1201 to 1283) are completely filled and the last 97 KB spill into one more cylinder.
Hence the file ends on cylinder 1200 + 83 + 1 = 1284.
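The same computation as a sketch (disk geometry assumed from the solution above: 16 surfaces, 64 sectors per track, 512 bytes per sector, file of 42797 KB starting at ⟨1200, 9, 40⟩):

#include <stdio.h>

int main(void) {
    long long sector = 512;
    long long track  = 64 * sector;             /* 32 KB  */
    long long cyl    = 16 * track;              /* 512 KB */
    long long file   = 42797LL * 1024;

    /* space still free in cylinder 1200 from surface 9, sector 40 onwards */
    long long left = (64 - 40) * sector + (15 - 9) * track;
    long long rest = file - left;

    long long full = rest / cyl;                /* completely filled cylinders: 83 */
    long long last = 1200 + full + (rest % cyl ? 1 : 0);
    printf("last cylinder = %lld\n", last);     /* 1284 */
    return 0;
}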
Question 62 |
-2 | |
-1 | |
1 | |
2 |

Suppose, 'W' executes first, and makes value of semaphore S=1. Now it increment value of x to 1 and comes out and make S=2. Now 'X' executes, where the value of x=1 is stored in R1 and then R1 is incremented and value in R1 is 2. Now 'X' gets preempted. Also to be noted that value of semaphore S is now '1' because the execution during process X got preempted and wasn't able to increase value of S to 2.
Now process Y will get completely execute and then Z will get completely execute. Now the value of 'x' is -3. Now again 'X' will get executed and the value stored in R1 i.e., '2' will get stored in 'x'. So the value in 'x' will be finally 2.
So the maximum possible value of 'x' is 2.
Question 63 |
Process X: Process Y: private i; private i; for (i=0; i < n; i++) { for (i=0; i < n; i++) { a[i] = f(i); EntryY(R, S); ExitX(R, S); b[i]=g(a[i]); } }Which one of the following represents the CORRECT implementations of ExitX and EntryY?
ExitX(R,S) { P(R); V(S); } EntryY(R,S) { P(S); V(R); } | |
ExitX(R,S) { V(R); V(S); } EntryY(R,S) { P(R); P(S); } | |
ExitX(R,S) { P(S); V(R); } EntryY(R,S) { V(S); P(R); } | |
ExitX(R,S) { V(R); P(S); } EntryY(R,S) { V(S); P(R); } |
Option C - Correct because each and every value inserted by the process X in the array will be immediately consumed by process Y.
Question 64 |
2 | |
4 | |
8 | |
16 |
Let the page size be 2^p bytes. Then the number of entries in one page of the page table is 2^p / 4, where 4 bytes is the page table entry size given in the question.
We know that the process size (virtual address space) equals
no. of entries × page size.
The total number of entries addressable through the 3-level page table is
(2^p/4) × (2^p/4) × (2^p/4)
So, no. of entries × page size = VAS:
(2^p/4) × (2^p/4) × (2^p/4) × 2^p = 2^46
2^(4p) / 2^6 = 2^46
4p = 52
p = 13
∴ Page size = 2^13 bytes = 8 KB
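A one-line check that p = 13 indeed satisfies (2^p/4)^3 × 2^p = 2^46:

#include <stdio.h>

int main(void) {
    unsigned long long p = 13;
    unsigned long long entries = (1ULL << p) / 4;   /* 2^11 entries per level */
    unsigned long long vas = entries * entries * entries * (1ULL << p);
    printf("p = 13 covers 2^46 bytes of VAS: %s\n",
           vas == (1ULL << 46) ? "yes" : "no");
    return 0;
}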
Question 65 |
2 | |
4 | |
8 | |
16 |

Architecture of virtual indexed physically tagged (VIPT):

VIPT cache and aliasing effect and synonym.
Alias: Same physical address can be mapped to multiple virtual addresses.
Synonym: Different virtual addresses mapped to same physical address (for data sharing).
So these synonyms should be in same set to avoid write-update problems.
In our problem VA = 46bits

We are using 16bits for indexing into cache.
To have two synonym is same set we need to have same 16 bits index for PA & VA.
Assume that physical pages are colored and each set should have pages of same color so that any synonyms are in same set.
Since page size = 8KB ⇒ 13bits
These 13 bits are not translated during VA→PA, so 13 of the 16 index bits are already identical in the virtual and physical addresses; the remaining 16 - 13 = 3 bits must be made consistent.
3 bits give 2^3 = 8 combinations, each mapping to a different group of sets, so we need 8 different colours for the pages.
In a physically indexed cache, indexing is done via physical address bits and the mapping is one-to-one (one index maps to one block). In a virtually indexed cache, the index comes from the (offset + set) bits of the virtual address, so there are extra untranslated bits and the mapping is not one-to-one. These extra bits have to be taken care of: if two virtual addresses (synonyms) refer to the same physical page, they must fall into the same set, i.e., pages of the same colour are placed in the same group of sets, to avoid write-update problems.
Question 66 |
fork();
fork();
fork();
The total number of child processes created is
3 | |
4 | |
7 | |
8 |
With 3 fork() calls, 2^3 - 1 = 7 child processes are created.
Question 67 |
Process   Arrival time   Time Units Required
P1        0              5
P2        1              7
P3        3              4
The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling with CPU quantum of 2 time units) are
FCFS: P1, P2, P3 RR2: P1, P2, P3 | |
FCFS: P1, P3, P2 RR2: P1, P3, P2 | |
FCFS: P1, P2, P3 RR2: P1, P3, P2 | |
FCFS: P1, P3, P2 RR2: P1, P2, P3 |
FCFS is clear.
RR Queue: In RR queue time slot is of 2 units.
Processes are assigned in following order
P1, P2, P1, P3, P2, P1, P3, P2, P2
This question used ready queue concept. At t=2, P2 starts and P1 is sent to the ready queue and at t=3 P3 arrives so then the job P3 is queued in ready queue after P1. So at t=4, again P1 is executed then P3 is executed for first time at t=6.
RR2: P1, P3, P2 So option C.
Question 68 |
AcquireLock(L){
    while (Fetch_And_Add(L,1))
        L = 1;
}
ReleaseLock(L){
    L = 0;
}
This implementation
fails as L can overflow | |
fails as L can take on a non-zero value when the lock is actually available | |
works correctly but may starve some processes | |
works correctly without starvation |
while (Fetch_And_Add (L,1))
L = 1; // A waiting process can be here just after
// the lock is released, and can make L = 1.
Assume P1 executes until while condition and preempts before executing L =1. Now P2 executes all statements, hence L = 0. Then P1 without checking L it makes L = 1 by executing the statement where it was preempted.
It takes a non-zero value (L=1) when the lock is actually available (L = 0). So option B.
Question 69 |
3 KBytes | |
35 KBytes | |
280 KBytes | |
dependent on the size of the disk |
So, one direct block addressing will point to 8 disk blocks = 8*128 B = 1 KB
Singly Indirect block addressing will point to 1 disk block which has 128/8 disc block addresses = (128/8)*128 B = 2 KB
Doubly indirect block addressing will point to 1 disk block which has 128/8 addresses to disk blocks which in turn has 128/8 addresses to disk blocks = 16*16*128 B = 32 KB
Maximum possible file size = 1 KB + 2 KB + 32 KB = 35 KB
Question 70 |
OPTIMAL < LRU < FIFO | |
OPTIMAL < FIFO < LRU | |
OPTIMAL = LRU | |
OPTIMAL = FIFO |

No. of page faults with FIFO = 6
LRU:

No. of page faults with LRU = 9
Optimal:

No. of page faults with optimal = 5
∴ Optimal < FIFO < LRU
Question 71 |
21 ns | |
30 ns | |
23 ns | |
35 ns |
EA = p × page fault service time + (1 - p) × memory access time
= (1/10^6) × 10 × 10^6 ns + (1 - 1/10^6) × 20 ns ≈ 30 ns
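The same expected-value computation as a sketch (page fault service time 10 ms, memory access time 20 ns, one fault per 10^6 accesses, as used above):

#include <stdio.h>

int main(void) {
    double p = 1e-6;             /* one page fault per 10^6 accesses       */
    double service = 10e6;       /* page fault service time: 10 ms, in ns  */
    double mem = 20.0;           /* memory access time in ns               */
    double ea = p * service + (1 - p) * mem;
    printf("effective access time = %.2f ns\n", ea);   /* ~30 ns */
    return 0;
}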
Question 72 |
On per-thread basis, the OS maintains only CPU register state | |
The OS does not maintain a separate stack for each thread | |
On per-thread basis, the OS does not maintain virtual memory state | |
On per thread basis, the OS maintains only scheduling and accounting information
|
B) False, OS do maintain a separate stack for each thread.
C) True
D) False
Question 73 |
(t1) > (t2) | |
(t1) = (t2) | |
(t1) < (t2) | |
Nothing can be said about the relation between t1 and t2 |
Question 74 |
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or completion of processes. What is the average waiting time for the three processes?
5.0 ms | |
4.33 ms | |
6.33 ms | |
7.33 ms |

CT = Completion time
TAT = Turn Around Time
WT = Waiting Time
TAT = CT - AT
WT = TAT - BT
Gantt Chart using pre-emptive shortest job first scheduling algorithm,

Avg. WT = (4 + 0 + 11)/3 = 15/3 = 5 ms
Question 75 |
I. Shortest remaining time first scheduling may cause starvation II. Preemptive scheduling may cause starvation III. Round robin is better than FCFS in terms of response time
I only
| |
I and III only
| |
II and III only | |
I, II and III |
- Pre-emptive scheduling causes starvation (for example lower priority jobs are waiting).
- Best Response time is given by RR.
Question 76 |
Which one of the following statements describes the properties achieved?
Mutual exclusion but not progress
| |
Progress but not mutual exclusion | |
Neither mutual exclusion nor progress
| |
Both mutual exclusion and progress |
Question 77 |
196 | |
192 | |
197 | |
195 |
Pages 1 to 100 cause 100 page faults.
Now, when the pages are accessed in reverse order, pages 100, 99, 98 and 97 are already present, so the remaining accesses cause 96 page faults. Total = 100 + 96 = 196 page faults.
Question 78 |

At least twice | |
Exactly twice | |
Exactly thrice | |
Exactly once |
S1=0
S2=0
P0 enters the critical section first,
prints (‘0’)
releases S1,S2(i.e., S1=1 & S2=1)
Now P1 and P2 can both proceed and each releases S0, so P0 can enter again and print ‘0’.
This process continues, hence the number of zeros printed is at least 2.
Question 79 |
if (i % 2 == 0) {
    if (i < n) request Ri
    if (i+2 < n) request Ri+2
} else {
    if (i < n) request Rn-i
    if (i+2 < n) request Rn-i-2
}
In which one of the following situations is a deadlock possible?
n = 40, k = 26 | |
n = 21, k = 12 | |
n = 20, k = 10
| |
n = 41, k = 19
|
For n = 21, k = 12: P8 (even) requests R8 and then R10, while P11 (odd) requests R(21-11) = R10 and then R(21-13) = R8.
If P8 acquires R8 and P11 acquires R10, each then waits for the resource held by the other.
Hence P8 and P11 can be in deadlock, so a deadlock is possible for n = 21, k = 12.
Question 80 |
FIFO
| |
Optimal
| |
LRU | |
MRU
|
Question 81 |
Virtual page number | |
Page frame number
| |
Both virtual page number and page frame number
| |
Access right information
|
The virtual page number is used as an index into the page table to obtain the page frame number.
Question 82 |

All processes will finish without any deadlock. | |
Only P1 and P2 will be in deadlock.
| |
Only P1 and P3 will be in a deadlock.
| |
All three processes will be in deadlock. |
→ P1 requests 2 units of R2 which is granted.
→ P2 requests 2 units of R3 which is granted.
→ P3 requests 1 units of R4 which is granted.
Now Available resources are,

At t=1,
→ P1 requests 1 unit of R3 which is granted because it is available.
Now Available resources are,

At t=2,
→ P2 requests 1 unit of R4 which is granted.
→ P3 requests 2 units of R1.
Now Available resources are,

At t=3,
→ P1 requests 2 units of R1 which cannot be granted, and will wait for other processes to release.
Available resources are,

At t=4,
→ P2 requests 1 unit of R1, which is granted.
Available resources are

At t=5,
→ P3 releases 2 units of R1.
Now Available resources are,

→ Now P1 is waiting for R1, so now P1 will be granted 2 units of R1.
Now Available resources are,

→ Now P1 releases 1 unit of R2 and 1 unit of R1.
Now Available resources are

At t=6,
→ Now P2 releases 1 unit of R3.
Now available resources are,

At t=7,
→ P3 requests 1 unit of R2, which is granted.
→ P1 releases 1 unit of R3.
Now Available resources are,

At t=8,
→ P2 finishes and releases its remaining resources. So now the available resources are,

→ P3 requests 1 unit of R3 which is granted.
Now Available resources are,

→ P1 requests 2 units of R4 which cannot be granted, so it will wait for other process to release them.
At t=9,
→ P3 finishes, and releases rest of the resources.
Now Available resources are

→ Now, P1 can be granted with resources 2 units of R4 for which it was waiting for.
Now Available resources are,

At t=10,
→ P1 finishes its execution.
So, finally we can conclude that all processes will finish without any deadlock.
Question 83 |
95 ms | |
119 ms
| |
233 ms | |
276 ms
|
4, 34, 10,7, 19, 73, 2, 15, 6, 20
Arrange the sequence in order
2, 4, 6, 7, 10, 15, 19, 20, 34, 73

⇒ 1 ms to move from one cylinder to adjacent one
⇒ (16*1)+(14*1)+(1*1)+(4*1)+(5*1)+(3*1)+(1*1)+(2*1)+(2*1)+(71*1)
⇒ 16+14+1+4+5+3+1+2+2+71
⇒ 119 ms
Question 84 |


I and II
| |
I and III
| |
II and III | |
II and IV
|
Statement II is true, a process can move to ready state when I/O completes irrespective of other process being in running state or not.
Statement III is true, the transition C, represents preemptive scheduling.
Statement IV is false, because it is preemptive scheduling.
Correct Answer is Option-C (II & III are true).
Question 85 |
- The enter_CS() and leave_CS() functions to implement critical section of a process are realized using test-and-set instruction as follows:
I only
| |
I and II | |
II and III
| |
IV only |
It is not using any queue to avoid starvation and there is no use of FIFO.
In CS, only one process can enter.
So, Answer is option A.
Question 86 |
It reduces the memory access time to read or write a memory location.
| |
It helps to reduce the size of page table needed to implement the virtual address space of a process.
| |
It is required by the translation lookaside buffer.
| |
It helps to reduce the number of page faults in page replacement algorithms.
|
Question 87 |
contiguous allocation
| |
linked allocation
| |
indexed allocation | |
an extension of indexed allocation
|
Question 88 |
0 and 0
| |
0 and 1 | |
1 and 0 | |
1 and 1 |
Yb must be '0' initially, because if Yb is '1' initially then two processes can be in the critical section at the same time.
So answer is Option (C).
Question 89 |
An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
| |
In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after completion of the I/O
| |
A process making a synchronous I/O call waits until I/O is complete, but a process making an asynchronous I/O call does not wait for completion of the I/O
| |
In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR that is invoked after the completion of I/O
|
Synchronous I/O -- some execution vehicle (like a process or thread) that initiates the I/O also waits for the I/O to complete (and perhaps completes it). When the I/O completes, that same execution vehicle goes on to do something else, perhaps using the results of the I/O.
Asynchronous I/O -- no execution vehicle waits for the I/O to complete. When the I/O completes, whatever execution vehicle happens to complete the I/O may arrange for later things to happen.
Option B is not true, because both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is not invoked after completion of the I/O.
Question 90 |
In deadlock prevention, the request for resources is always granted if the resulting state is safe
| |
In deadlock avoidance, the request for resources is always granted if the result state is safe | |
Deadlock avoidance is less restrictive than deadlock prevention | |
Deadlock avoidance requires knowledge of resource requirements a priori
|
Question 91 |
n | |
2^n - 1
| |
2n | |
2^(n+1) - 1
|
fork() is an operation by which a process creates a copy of itself.

1, 3, 7, 15, 31, ... ⇒ 2^n - 1
Question 92 |
- Bits 30-31 are used to index into the first level page table
- Bits 21-29 are used to index into the second level page table
- Bits 12-20 are used to index into the third level page table, and
- Bits 0-11 are used as offset within the page
20, 20 and 20
| |
24, 24 and 24
| |
24, 24 and 20
| |
25, 25 and 24
|
From the question we can see the following:
Physical address size = 36 bits
Physical memory size = 2^36 bytes
Page frame size = 4K bytes = 2^12 bytes
No. of bits for offset = 12
No. of bits required to address a physical memory frame = 36 - 12 = 24
So an entry in the third-level page table needs 24 bits.
A third-level page table is indexed by 9 bits of the virtual address, so it has 2^9 entries; taking 4 bytes per entry, its size is 2^9 × 4 = 2^11 bytes. Such a table can be placed at any of 2^36 / 2^11 = 2^25 locations in physical memory, so a second-level entry requires 25 bits to address it. By the same argument a first-level entry also needs 25 bits.
Answer - D
Question 93 |
Mathematics Information Technology SAME. | |
Mathematics Information Technology. | |
Information Technology. | |
Information Technology SAME. |
Question 94 |
6 and 1, 2, 3, 4 | |
7 and 1, 2, 4, 5 | |
8 and 1, 2, 4, 5 | |
9 and 1, 2, 3, 5 |
(Each page contains 16 bytes, so say for page0, it contains the virtual address 0-15)
0: page fault-1, pages in memory-0
4: page fault-1, pages in memory-0
8: page fault-1, pages in memory-0
20: page faults-2, pages in memory-0,1
24: page faults-2, pages in memory-0,1
36: page faults-3, pages in memory-0,1,2
44: page faults-3, pages in memory-0,1,2
12: page faults-3, pages in memory-1,2,0
68: page faults-4, pages in memory-1,2,0,4
72: page faults-4, pages in memory-1,2,0,4
80: page faults-5, pages in memory-2,0,4,5
84: page faults-5, pages in memory-2,0,4,5
28: page faults-6, pages in memory-0,4,5,1
32: page faults-7, pages in memory-4,5,1,2
88: page faults-7, pages in memory-4,1,2,5
92: page faults-7, pages in memory-4,1,2,5
Question 95 |
semaphore S = 1, Q = 0; integer x;
producer:                      consumer:
while (true) do                while (true) do
    P(S);                          P(Q);
    x = produce();                 consume(x);
    V(Q);                          V(S);
done                           done
Which of the following is TRUE about the program above?
The process can deadlock |
One of the threads can starve | |
Some of the items produced by the producer may be lost | |
Values generated and stored in ‘x’ by the producer will always be consumed before the producer can generate a new value |
(B) No starvation happens because the producer and the consumer strictly alternate.
(C) No item is lost.
Question 96 |
Both starvation and deadlock can occur | |
Starvation can occur but deadlock cannot occur | |
Starvation cannot occur but deadlock can occur | |
Neither starvation nor deadlock can occur
|
Now, maybe the process has not used the resources it released yet. This may happen again when the process requests another resource.
So, the process starved for proper utilization of resources.
Deadlock will not occur as it is one of the deadlock prevention scheme.
Question 97 |
degenerate to shortest job first | |
degenerate to priority scheduling | |
degenerate to first come first serve | |
none of the above |
Question 98 |

I-d, II-a, III-b, IV-c | |
I-b, II-c, III-a, IV-d | |
I-c, II-d, III-a, IV-b | |
I-b, II-c, III-d, IV-a |
The bit indicates that its associated block of memory has been modified and has not been saved to storage yet. Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
R/W bit:
If the bit is set, the page is read/ write. Otherwise when it is not set, the page is read only.
Reference bit:
Reference bit is used in a version of FIFO called second chance policy, in order to avoid replacement of heavily used page.
Valid bit:
Valid bit is not used for page replacement; it is not used in any page replacement policy. It tells whether the page in memory is valid. If it is valid it is used directly, otherwise a fresh page is loaded. So the valid bit is essentially about page initialization: nothing is replaced, an empty frame is filled. Hence the matching in option (D).
Question 99 |
Group I Group II (P) Gang Scheduling (1) Guaranteed Scheduling (Q) Rate Monotonic Scheduling (2) Real-time Scheduling (R) Fair Share Scheduling (3) Thread Scheduling
P – 3 Q – 2 R – 1 | |
P – 1 Q – 2 R – 3
| |
P – 2 Q – 3 R – 1
| |
P – 1 Q – 3 R – 2
|
⇒ Gang scheduling runs a group of related threads simultaneously, so it corresponds to thread scheduling.
⇒ Rate-monotonic scheduling is used in real-time operating systems.
⇒ Fair-share scheduling distributes CPU time equally among users, which corresponds to guaranteed scheduling.
Question 100 |
Context switch time is longer for kernel level threads than for user level threads. | |
User level threads do not need any hardware support.
| |
Related kernel level threads can be scheduled on different processors in a multi-processor system.
| |
Blocking one kernel level thread blocks all related threads.
|
B) True, because kernel is not involved in it.
C) True
D) Blocking one kernel level thread blocks all related threads is false, because kernel level threads are independent.
Question 101 |
Process Execution time Arrival time P1 20 0 P2 25 15 P3 10 30 P4 15 45What is the total waiting time for process P2?
5 | |
15 | |
40 | |
55 |

Waiting time for process P2 = TAT - Execution time
= Completion time - AT - ET
= 55 - 15 - 25
= 15
Question 102 |
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate. Q: Some programs do not exhibit locality of reference.Which one of the following is TRUE?
Both P and Q are true, and Q is the reason for P
| |
Both P and Q are true, but Q is not the reason for P | |
P is false, but Q is true
| |
Both P and Q are false
|
FIFO suffers from Belady's anomaly. Belady's anomaly states that increasing the number of page frames may increase the number of page faults.
Statement Q,
Locality of reference also known as the principle of locality, is the tendency of a processor to accesses the same set of memory locations respectively over a short period of time.
And yes some programs do not exhibit locality of reference.
Hence, both P and Q are true. But Q is not the reason for P.
Question 103 |
alloc request X Y Z X Y Z P0 1 2 1 1 0 3 P1 2 0 1 0 1 2 P2 2 2 1 1 2 0
P0 | |
P1
| |
P2 | |
None of the above, since the system is in a deadlock.
|

Total allocated = (5, 4, 3).
System is in safe state
Order of Execution =
P2 will execute last.
Question 104 |
/* P1 */ while (true) { wants1 = true; while (wants2 == true); /* Critical Section */ wants1=false; } /* Remainder section */ /* P2 */ while (true) { wants2 = true; while (wants1==true); /* Critical Section */ wants2 = false; } /* Remainder section */
It does not ensure mutual exclusion. | |
It does not ensure bounded waiting.
| |
It requires that processes enter the critical section in strict alternation.
| |
It does not prevent deadlocks, but ensures mutual exclusion. |
Now, if in P1
wants1 = true ;
executed and preempted and now if in P2
wants2 = true ;
executed.
Then the deadlock situation will be created because both will fall into infinite loop.
Question 105 |
7 | |
8 | |
9 | |
10 |
1, 2, 1, 3, 7, 4, 5, 6, 3, 1
No. of frames = 3
Using Optimal page replacement,

Total 7 page faults.
Question 106 |
A process has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1. Least Recently Used (LRU) page replacement policy is a practical approximation to optimal page replacement. For the above reference string, how many more page faults occur with LRU than with the optimal page replacement policy?
0 | |
1 | |
2 | |
3 |
1, 2, 1, 3, 7, 4, 5, 6, 3, 1
No. of frames = 3 (using LRU)

No. of page faults with LRU = 9
⟶ In the LRU we replace the page with least recently used page.
Using optimal page replacement
⟶ 1,2,1,3,7,4,5,6,3,1

No. of page faults with optimal = 7
Difference between optimal and LRU = 9 - 7 = 2
In optimal we replace the page which will occur last in the future.
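For comparison, a minimal LRU simulation in the same style (again an illustration with assumed variable names, not the original solution) reproduces the 9 faults counted above:

#include <stdio.h>

#define FRAMES 3
#define REFS   10

int main(void) {
    int refs[REFS] = {1, 2, 1, 3, 7, 4, 5, 6, 3, 1};
    int frames[FRAMES] = {-1, -1, -1};   /* -1 marks an empty frame */
    int last_use[FRAMES] = {0, 0, 0};    /* time of most recent use per frame */
    int faults = 0;

    for (int t = 0; t < REFS; t++) {
        int hit = -1;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[t]) hit = f;

        if (hit != -1) {                 /* page already resident */
            last_use[hit] = t;
            continue;
        }
        faults++;
        /* use an empty frame if any, otherwise evict the least recently used page */
        int victim = 0;
        for (int f = 0; f < FRAMES; f++) {
            if (frames[f] == -1) { victim = f; break; }
            if (last_use[f] < last_use[victim]) victim = f;
        }
        frames[victim] = refs[t];
        last_use[victim] = t;
    }
    printf("Page faults (LRU) = %d\n", faults);   /* prints 9 */
    return 0;
}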
Question 107 |
get_exclusive_access ( ) { if (critical_flag == FALSE) { critical_flag = TRUE ; critical_region () ; critical_flag = FALSE; } }
Consider the following statements.
i. It is possible for both P1 and P2 to access critical_region concurrently.
ii. This may lead to a deadlock.
Which of the following holds?
(i) is false and (ii) is true | |
Both (i) and (ii) are false | |
(i) is true and (ii) is false | |
Both (i) and (ii) are true |
Question 108 |

16 | |
19 | |
20 | |
37 |

At t = 8

At t = 10

At t = 11

J7 can be finished at t = 19.
Question 109 |
0 | |
4 | |
7 | |
8 |
0200 - Page fault, address till 299 in memory
0430 - Page fault, address till 529 in memory
0499 - No page fault
0510 - No page fault
0530 - Page fault, address till 629 in memory
0560 - No page fault.
0120 - Page fault, address till 219 in memory
0220 - Page fault, address till 319 in memory
0240 - No page fault
0260 - No page fault
0320 - Page fault, address till 419 in memory
0410 - No page fault
So, total page faults = 7.
Question 110 |
Consider n jobs J1, J2,......Jn such that job Ji has execution time ti and a non-negative integer weight wi. The weighted mean completion time of the jobs is defined to be
where Ti is the completion time of job Ji. Assuming that there is only one processor available, in what order must the jobs be executed in order to minimize the weighted mean completion time of the jobs?
Non-decreasing order of ti | |
Non-increasing order of wi | |
Non-increasing order of wi·ti | |
Non-increasing order of wi/ti |
So the answer is the non-increasing order of wi/ti: by an exchange argument, swapping two adjacent jobs lowers the weighted mean completion time exactly when the job with the larger wi/ti ratio is moved earlier, so ordering by non-increasing wi/ti is optimal.
Question 111 |
A demand paging system takes 100 time units to service a page fault and 300 time units to replace a dirty page. Memory access time is 1 time unit. The probability of a page fault is p. In case of a page fault, the probability of page being dirty is also p. It is observed that the average access time is 3 time units. Then the value of p is
0.194 | |
0.233 | |
0.514 | |
0.981 | |
0.0194 |
(if the page is not dirty)
Page fault service time = 300
(if there is dirty page)
Probability of page fault = P
Probability of page being dirty = P
So, total page fault service time
= P(300) + (1 - P)100
= 300P + 100 - 100P
= 200P + 100
Now given,
Effective average memory access time = 3
So,
3 = m + P × total page fault service time
= 1 + P(200P + 100)
= 1 + 200P^2 + 100P
⇒ 200P^2 + 100P - 2 = 0
⇒ P = 0.0194
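For completeness, solving the quadratic with the standard formula (a worked step not written out in the original):
P = [-100 + √(100^2 + 4 × 200 × 2)] / (2 × 200)
  = (-100 + √11600) / 400
  ≈ 7.7 / 400
  ≈ 0.019
which is the listed value 0.0194 up to rounding.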
Question 112 |
11, 139, 170, 178, 181, 184, 201, 265 | |
10, 138, 170, 178, 181, 185, 201, 265 | |
10, 139, 169, 178, 181, 184, 201, 265 | |
10, 138, 170, 178, 181, 185, 200, 265 |

Hence, correct option is (B).
Question 113 |
9 | |
10 | |
11 | |
12 |
1) The alternating direction with SSTF policy.
2) Maximize the no. of requests.
The first condition can be satisfied by not having two requests at equal distance from the current head position; as shown below, no request may be located at the circle-marked positions.
Now, to maximize the number of requests, the requests must be packed as compactly as possible, which is done by placing each request at the position just after the circle-marked position in the direction in which the head must move.
Now to satisfy the 1st criteria:

Seek length sequence for maximum cardinality and alternating head movements:
→ 1, 3, 7, 15, ...
→ or, 2^1-1, 2^2-1, 2^3-1, 2^4-1, ...
→ We have 2048 tracks so, maximum swing (seek length) can be 2047.
→ Which corresponds to a seek length of 2^11 - 1 in the 11th service.
Question 114 |
1 | |
2 | |
3 | |
4 |

Total no.of context switches is 2.
Question 115 |
void barrier ( void ) { 1: P(S); 2: process_arrived++; 3. V(S); 4: while (process_arrived !=3); 5: P(S); 6: process_left++; 7: if (process_left==3) { 8: process_arrived = 0; 9: process_left = 0; 10: } 11: V(S); } |
The barrier implementation is wrong due to the use of binary semaphore S
| |
The barrier implementation may lead to a deadlock if two barrier invocations are
used in immediate succession
| |
Lines 6 to 10 need not be inside a critical section
| |
The barrier implementation is correct if there are only two processes instead of three
|
Hence, using two barrier invocations in immediate succession leads to a deadlock.
Question 116 |
void barrier ( void ) { 1: P(S); 2: process_arrived++; 3. V(S); 4: while (process_arrived !=3); 5: P(S); 6: process_left++; 7: if (process_left==3) { 8: process_arrived = 0; 9: process_left = 0; 10: } 11: V(S); } |
Lines 6 to 10 are simply replaced by process_arrived--
| |
At the beginning of the barrier the first process to enter the barrier waits until process_arrived becomes zero before proceeding to execute P(S).
| |
Context switch is disabled at the beginning of the barrier and re-enabled at the end.
| |
The variable process_left is made private instead of shared.
|
Question 117 |
Consider the following snapshot of a system running n processes. Process i is holding xi instances of a resource R, 1 ≤ i ≤ n. Currently, all instances of R are occupied. Further, for all i, process i has placed a request for an additional yi instances while holding the xi instances it already has. There are exactly two processes p and q such that yp = yq = 0. Which one of the following can serve as a necessary condition to guarantee that the system is not approaching a deadlock?
min (xp, xq) < maxk≠p,qyk | |
xp + xq ≥ mink≠p,qyk | |
max (xp, xq) > 1 | |
min (xp, xq) > 1 |
→ A deadlock arises when two or more processes wait for one another to release resources.
→ Processes p and q need no additional resources (Yp = Yq = 0), so they can finish and release the Xp + Xq instances they hold.
→ Option B, Xp + Xq ≥ min{Yk | k ≠ p, q}, guarantees that these released instances are enough to satisfy the smallest pending request, so at least one more process can proceed.
Hence the system has sufficient resources and is not approaching a deadlock.
Question 118 |
0% | |
10.6% | |
30.0% | |
89.4%
|

Total time needed to complete the execution = 47
Idle time = 2+3 = 5
Percentage of Idle time = 5/47 × 100 =10.6%
Question 119 |
13 units
| |
14 units
| |
15 units
| |
16 units
|

Avg TAT = (12+13+14)/3 = 39/3 = 13 units
Question 120 |
void P (binary_semaphore *s) { unsigned y; unsigned *x = &(s->value); do { fetch-and-set x, y; } while (y); } void V (binary_semaphore *s) { S->value = 0; }
Which one of the following is true?
The implementation may not work if context switching is disabled in P | |
Instead of using fetch-and-set, a pair of normal load/store can be used | |
The implementation of V is wrong
| |
The code does not implement a binary semaphore |
B) If a normal load/store pair is used instead of fetch-and-set, more than one process may see s->value as 0 and enter the critical section, so mutual exclusion is not guaranteed. This option is wrong.
C) Setting s->value to 0 in V is correct, so the implementation of V is not wrong; this option is also wrong.
D) The code does implement a binary semaphore, since only one process can be in the critical section at a time. So this option is wrong.
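For intuition, here is a minimal C11 sketch (the author's illustration, not the question's code) of the same spin-then-clear pattern, with atomic_flag_test_and_set standing in for the fetch-and-set instruction:

#include <stdatomic.h>
#include <stdio.h>

/* flag set means the binary semaphore is taken (analogue of s->value == 1) */
static atomic_flag lock = ATOMIC_FLAG_INIT;

static void P(void) {
    /* atomically read the old value and set the flag; spin while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait, like the do { fetch-and-set } while (y) loop */
}

static void V(void) {
    atomic_flag_clear(&lock);   /* analogue of s->value = 0 */
}

int main(void) {
    P();
    printf("in critical section\n");
    V();
    return 0;
}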
Question 121 |
11 bits
| |
13 bits | |
15 bits
| |
20 bits |
Virtual Address = 32 bit
No. of bits needed to address the page frame = 32 - 12 = 20
TLB can hold 128 page table entries with 4-way set associative
⇒ 128/4 = 32 = 2^5 sets
→ 5 bits are needed to address a set.
→ The size of TLB tag = 20 - 5 = 15 bits
Question 122 |
Efficient implementation of multi-user support is no longer possible
| |
The processor cache organization can be made more efficient now
| |
Hardware support for memory management is no longer needed | |
CPU scheduling can be made more efficient now |
→ Special hardware support for memory management (address translation) is needed only when virtual memory is supported, so it is no longer needed.
Question 123 |
- It initiates another process if there are enough extra frames.
- It selects a process to suspend if the sum of the sizes of the working-sets exceeds the total number of available frames.
1 only | |
2 only | |
Neither 1 nor 2 | |
Both 1 and 2 |
Question 124 |
It is a multiprogrammed operating system | |
It uses preemptive scheduling | |
It uses non-preemptive scheduling | |
It is a multi-user operating system
|
Question 125 |
The arrival time, priority, and duration of the CPU and I/O bursts for each of three processes P1, P2 and P3 are given in the table below. Each process has a CPU burst followed by an I/O burst followed by another CPU burst. Assume that each process has its own I/O resource.
The multi-programmed operating system uses preemptive priority scheduling. What are the finish times of the processes P1, P2 and P3 ?
11, 15, 9 | |
10, 15, 9 | |
11, 16, 10 | |
12, 17, 11 |

Hence, finish times of process.
P1 - 10
P2 - 11
P3 - 9
Question 126 |
- Interchanging Wait (F) and Wait (S) in the Producer process
- Interchanging Signal (S) and Signal (F) in the Consumer process
1 only | |
2 only | |
Neither 1 nor 2 | |
Both 1 and 2 |
Now if Wait(S) in the producer succeeds, the producer will wait on Wait(F), which can never succeed because the consumer is waiting on Wait(S). So a deadlock can happen.
If Signal(S) and Signal(F) are interchanged in the consumer, deadlock won't happen; it merely gives priority to a producer over the next waiting consumer.
Question 127 |

P < S < T | |
S < P < T | |
S < T < P | |
T < S < P |
For P1,
Page size is 1KB. So, the number of pages required for P1 = 195. An entry in the page table is 4 bytes, and assuming an inner-level page table takes the size of a page, we can have up to 256 entries in a second-level page table; we require only 195 for P1, so one second-level page table is enough. So, memory overhead = 1KB (for the first level) + 1KB (for the second level) = 2KB.
For P2 and P3 also, we get 2KB each and for P4 we get 1+2=3KB as it requires 1 first level page table and 2 second level page tables (364 > 256). So, total overhead for their concurrent execution = 2×3+3 = 9KB. Thus P = 9KB.
Case-2 (For segmentation method):
P1 uses 4 segments → 4 entries in segment table = 4×8 = 32Bytes
Similarly, for P2, P3 and P4 we get 5×8, 3×8 and 8×8 Bytes respectively and the total overhead will be
32+40+24+64 = 160 Bytes
So, S = 160 Bytes
Case-3 (For segmentation with paging):
Here, we segment first and then page. So, we need the page table size. We are given maximum size of a segment is 256KB and page size is 1KB and thus we require 256 entries in the page table. So, total size of page table = 256 × 4 = 1024 Bytes (exactly one page size).
So, now for P1 we require 1 segment table of size 32 Bytes plus 1 page table of size 1KB.
Similarly,
P2 - 40 Bytes and 1 KB
P3 - 24 Bytes and 1 KB
P4 - 64 Bytes and 1KB
Thus, total overhead = 160 Bytes + 4KB = 4256 Bytes
So, T = 4256 Bytes
So, answer should be S < T < P.
Question 128 |
-
- x is a condition variable,
- mutex is a semaphore initialized to 1,
- x_sem is a semaphore initialized to 0,
- x_count is the number of processes waiting on semaphore x_sem, initially 0, next is a semaphore initialized to 0,
- next_count is the number of processes waiting on semaphore next, initially 0. The body of each procedure that is visible outside the monitor is replaced with the following:
P(mutex); body of procedure if (next_count > 0) V(next); else V(mutex);
-
- Each occurrence of x.wait is replaced with the following:
x_count = x_count + 1; if (next_count > 0) V(next) else V(mutex); ------------------------------------------------------------ E1; x_count = x_count - 1;
-
- Each occurrence of x.signal is replaced with the following:
if (x_count > 0) { next_count = next_count + 1; ------------------- E2; P(next), next_count = next_count - 1; }
- For correct implementation of the monitor, statements E1 and E2 are, respectively,
P(x_sem), V(next) | |
V(next), P(x_sem) | |
P(next), V(x_sem) | |
P(x_sem), V(x_sem) |
x_count is incremented and decremented in x_wait, which shows that in between them wait(x_sem) must happen which is P(x_sem). Correspondingly V(x_sem) must happen in x_signal. So, D choice.
Question 129 |
if (fork() == 0) { a = a + 5; printf("%d,%d\n", a, &a); } else { a = a - 5; printf("%d, %d\n", a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process. Which one of the following is TRUE?
u = x + 10 and v = y
| |
u = x + 10 and v is ≠ y
| |
u + 10 = x and v = y
| |
u + 10 = x and v ≠ y
|
Values printed by the parent process: u = a - 5, and v is the address of a, so a = u + 5.
Values printed by the child process: x = a + 5, and y is the address of a.
The child operates on its own copy of a (with the same initial value), so
x = (u + 5) + 5 = u + 10
The virtual address of a is identical in parent and child, so v = y.
Hence u + 10 = x and v = y.
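A runnable version of the fragment (the includes and the initial value of a are assumptions, since the question does not show them) makes the relationships visible: the two printed values differ by 10 while the printed address of a is the same in both processes.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int a = 100;               /* assumed initial value; any value works */

    if (fork() == 0) {         /* child */
        a = a + 5;
        printf("child : %d, %p\n", a, (void *)&a);
    } else {                   /* parent */
        a = a - 5;
        printf("parent: %d, %p\n", a, (void *)&a);
    }
    return 0;
}
/* The child prints a+5 and the parent prints a-5 (so x = u + 10), and both
   print the same virtual address of a because the child's address space is
   a copy of the parent's. */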
Question 130 |
Suppose n processes, P1, …., Pn share m identical resource units, which can be reserved and released one at a time. The maximum resource requirement of process Pi is si, where si > 0. Which one of the following is a sufficient condition for ensuring that deadlock does not occur?
![]() | |
![]() | |
![]() | |
![]() |
If deadlock is never to occur among these processes, the following condition must hold:

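In its usual textbook form (assuming this is the condition shown in the figure above), with m identical resource units and maximum demands s1, ..., sn:
s1 + s2 + ... + sn < m + n
Reason: a deadlock would require every process to hold si - 1 units while waiting for one more, which needs (s1 - 1) + ... + (sn - 1) ≥ m units in use; the condition rules this out, so some process can always acquire its maximum need, complete, and release its units.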
Question 131 |
Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for a CPU with memory mapped I/O?
I/O protection is ensured by operating system routine(s)
| |
I/O protection is ensured by a hardware trap
| |
I/O protection is ensured during system configuration
| |
I/O protection is not possible
|
Question 132 |
Virtual memory increases
| |
Larger RAMs are faster
| |
Fewer page faults occur
| |
Fewer segmentation faults occur
|
→ For example, if the RAM size increases, more page frames are available to a process, so fewer page faults occur.
Question 133 |
ln -s file1 file2
ln -s file2 file3
Which of the following types of information would be lost from her file system? (I) Hobbies (II) Friends (III) Courses
(I) and (II) only | |
(II) and (III) only | |
(II) only | |
(I) and (III) only |
Question 134 |
find -name passwd -print
is executed in /etc directory of a computer system running Unix. Which of the following shell commands will give the same information as the above command when executed in the same directory?
ls passwd | |
cat passwd | |
grep name passwd | |
grep print passwd
|
Question 135 |
A user level process in Unix traps the signal sent on a Ctrl-C input, and has a signal handling routine that saves appropriate files before terminating the process. When a Ctrl-C input is given to this process, what is the mode in which the signal handling routine executes?
kernel mode | |
superuser mode | |
privileged mode | |
user mode |
Question 136 |
P1 | P2 |
---|---|
repeat forever V (X) ; Compute ; P(X) ; | repeat forever P(X) ; Compute ; V(X) ; |
- It is possible for process P1 to starve.
- It is possible for process P2 to starve.
Both 1 and 2 are true | |
1 is true but 2 is false | |
2 is true but 1 is false | |
Both 1 and 2 are false |
P2 can starve if it remains blocked on P(X) while P1 keeps looping, and similarly P1 can starve while blocked on its P(X) if P2 keeps looping.
Both statements (i) and (ii) are True.
Question 137 |
P1 | P2 |
---|---|
Compute: Use R1; Use R2; Use R3; Use R4; | Compute; Use R1; Use R2; Use R3;. Use R4; |
- P2 must complete use of R1 before P1 gets access to R1
- P1 must complete use of R2 before P2 gets access to R2.
- P2 must complete use of R3 before P1 gets access to R3.
- P1 must complete use of R4 before P2 gets access to R4.
1 | |
2 | |
3 | |
4 |

Question 138 |

000 101 111 | |
101 110 111 | |
011 101 111 | |
001 110 111 | |
None of these |

While filling done in reverse order, all operations are not satisfied.
Question 139 |
Process | Priority | CPU time required | Arrival time (hh:mm:ss) |
P1 | 10(highest) | 20 sec | 00:00:05 |
P2 | 9 | 10 sec | 00:00:03 |
P3 | 8 (lowest) | 15 sec | 00:00:00 |
30 sec, 30 sec | |
30 sec, 10 sec | |
42 sec, 42 sec | |
30 sec, 42 sec
|

TAT of P2 = Completion time of P2 - Arrival time of P2
= 33 - 3
= 30 sec
For non-preemptive,

TAT of P2 = Completion time of P2 - Arrival time of P2
= 45 - 3
= 42 sec
Question 140 |
0 3 5 7 16 55 | |
0 3 5 7 9 16 55 | |
0 5 7 9 16 55 | |
3 5 7 9 16 55 |
So,

Since each set has only 2 places, 3 will be thrown out as it is the least recently used block. So the final content of the cache will be
0 5 7 9 16 55
Hence, answer is (C).
Question 141 |
- T11 > T21
- T12 > T22
- T11 < T21
- T12 < T22
(I) and (IV) | |
(II) and (III) | |
(I) and (II) | |
None of the above |
Question 142 |
(i) 80 MB (ii) 2040 MB | |
(i) 2040 MB (ii) 80 MB | |
(i) 80 MB (ii) 360 MB | |
(i) 80 MB (ii) 360 MB |
Diameter of inner track = d = 1 cm
Circumference of inner track
= 2 * 3.14 * d/2
= 3.14 cm
Storage capacity = 10 MB (given)
Circumference of all equidistant tracks
= 2 * 3.14 * (0.5+1+1.5+2+2.5+3+3.5+4)
≈ 113.1 cm
Here, 3.14 cm holds 10 MB
Therefore, 1 cm holds 3.18 MB.
So, 113.1 cm holds
113.1 × 3.18 ≈ 360 MB
So, total amount of data that can be hold on the disk = 360 MB.
For constant angular velocity:
In case of CAV, the disk rotates at a constant angular speed. Same rotation time is taken by all the tracks.
Total amount of data that can be stored on the disk
= 8 * 10 = 80 MB
Question 143 |
13.5 ms | |
10 ms | |
9.5 ms | |
20 ms |
So, the head has to seek (4 - 0.5) = 3.5 cm.
For 10m ------- 1s
1m ------- 1/10 s
100cm ------- 1/(10×100) s
3.5cm ------- 3.5/1000 s = 3.5ms
So, the head will take 3.5 ms.
Now, the angular velocity is constant and the head is at the end of the 5th sector. To reach the start of the 4th sector it must rotate over 18 sectors.
6000 rotation in 60000ms.
1 rotation in 10ms (time to traverse 20 sectors).
So, to traverse 18 sectors, it takes 9ms.
In 10ms, 10MB data is read.
So, 1MB data can be read in 1ms.
∴ Total time = 1+9+3.5 = 13.5ms
Question 144 |
(ii), (iii) and (iv) only
| |
(ii) and (iii) only
| |
(i) and (iii) only
| |
(i) and (ii) only
|
→ If one user-level thread performs a blocking operation then the entire process is blocked. Option B is true.
→ User-level threads are transparent to the kernel, because they are created and managed entirely in user space. Option D is true.
→ Kernel-supported threads can be scheduled independently by the operating system. Option C is true.
Question 145 |
Consider an operating system capable of loading and executing a single sequential user process at a time. The disk head scheduling algorithm used is First Come First Served (FCFS). If FCFS is replaced by Shortest Seek Time First (SSTF), claimed by the vendor to give 50% better benchmark results, what is the expected improvement in the I/O performance of user programs?
50% | |
40% | |
25% | |
0% |
→ The vendor's better benchmark results do not affect the I/O performance of user programs here.
→ Since only one sequential user process runs at a time, there is never more than one pending I/O request to choose from, so FCFS and SSTF make the same choices and the improvement is 0%.
Question 146 |
the instruction set architecture
| |
page size
| |
physical memory size
| |
number of processes in memory
|
→ An ISA permits multiple implementations that may vary in performance, physical size and monetary cost.
Question 147 |
5.50 | |
5.75 | |
6.00 | |
6.25 |

Avg. TAT = (12+3+6+1)/4 = 22/4 = 5.50
Question 148 |
Consider a system with a two-level paging scheme in which a regular memory access takes 150 nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100 nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is one in every 10,000 instructions. What is the effective average instruction execution time?
645 nanoseconds
| |
1050 nanoseconds
| |
1215 nanoseconds
| |
2060 nanoseconds |
= 100 ns + 2 × EMAT
Now lets calculate EMAT,
EMAT = TLB access time + TLB miss rate × (2 × 150 ns) + 150 ns + (1/10000) × 8 ms
= 0 + 0.1 × 300 ns + 150 ns + 800 ns
= 980 ns
∴ Effective average instruction time,
= 100ns + 2 × 980ns
= 2060ns
Question 149 |
P(Sy), P(Sx); P(Sx), P(Sy)
| |
P(Sx), P(Sy); P(Sy), P(Sx)
| |
P(Sx), P(Sx); P(Sy), P(Sy)
| |
P(Sx), P(Sy); P(Sx), P(Sy)
|
P1 : line 1
P2 : line 3 (blocked, needs Sx)
P1 : line 2
P2 : still blocked at line 3
P1 : executes its CS, then releases (V) Sx
P2 : line 3 succeeds; line 4 (blocked, needs Sy)
P1 : releases (V) Sy
P2 : line 4 gets Sy and enters its CS
P2 : completes its execution.
So option (D) is the answer, since this ordering never deadlocks.
Question 150 |
2^24 bytes
| |
2^32 bytes
| |
2^34 bytes | |
2^48 bytes
|
= 10 + 2^8 + (2^8)^2 + (2^8)^3
≈ 2^24 blocks (approximately)
Size of each block = 2^10 bytes
Maximum file size = 2^24 × 2^10 = 2^34 bytes
Question 151 |
What is the bit rate of a video terminal unit with 80 characters/line, 8 bits/character and horizontal sweep time of lOOµs (including 20 µs of retrace time)?
8 Mbps | |
6.4 Mbps | |
0.8 Mbps | |
0.64 Mbps |
Total number of bits transmitted per sweep = 80 × 8 = 640 bits
Bit rate = 640 bits / 100 µs = 6.4 × 10^6 bps = 6.4 Mbps
Question 152 |
Stack | |
Address Space | |
File Descriptor Table | |
Message Queue |
Question 153 |
II and III | |
II and IV | |
I and III | |
II only |
Question 154 |

11 ns | |
12 ns | |
13 ns | |
28 ns |

So, total time would be 13 ns.
Question 155 |
4 | |
5 | |
6 | |
7 |
4, 3, 25, 8, 19, 6, 16, 35
CPU array (first element being least recently used)
[4, 3, 19, 6, 25, 8, 16, 35]
So, 45 replaces 4.
45, 3, 25, 8, 19, 6, 16, 35 [3, 19, 6, 25, 8, 16, 35, 45]
Similarly, 22 replaces 3 to give,
45, 22, 25, 8, 19, 6, 16, 35 [19, 6, 25, 8, 16, 35, 45, 22]
8 hits in cache.
45, 22, 25, 8, 19, 6, 16, 35 [19, 6, 25, 16, 35, 45, 22, 8]
3 replaces 19,
45, 22, 25, 8, 3, 6, 16, 35 [6, 25, 16, 35, 45, 22, 8, 3]
16 and 25 hits in cache,
45, 22, 25, 8, 3, 6, 16, 35 [6, 35, 45, 22, 8, 3, 16, 25]
Finally, 7 replaces 6, which is in block 5.
Question 156 |
T1.I1 + T2.I3 + T4.I3 + T3 | |
(T1 + T2 + T3).I3 + T1.I1 | |
(T1 + T2 ).I1 + (T2 + T4).I3 + T3 | |
(T1 + T2 ).I2 + (T1 + T3).I1 + T3 |
So, Ain is 1 in T3 of I1, I2 and I3. Also during T1, and T2 and T4 of I3. So, answer will be
T1.I1 + T2.I3 + T4.I3 + T3.I1 + T3.I2 + T3.I3
Since, CPU is having only 3 instructions, T3.I1 + T3.I2 + T3.I3 can be replaced by T3.
So, T1.I1 + T2.I3 + T4.I3 + T3
Question 157 |
1.155 | |
1.185 | |
1.255 | |
1.285 |

Question 158 |
The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The maximum storage density of the disk is 1400 bits/cm. The disk rotates at a speed of 4200 RPM. The main memory of a computer has 64-bit word length and 1µs cycle time. If cycle stealing is used for data transfer from the disk, the percentage of memory cycles stolen for transferring one word is
0.5% | |
1% | |
5% | |
10% |
x μs is data transfer time.
% of time CPU idle is,
y/x × 100
Maximum storage density is given, so consider innermost track to get the capacity
= 2 × 3.14 × 5 × 1400 bits
= 3.14 × 14000 bits
Rotational latency = 60/4200 s = 1/70 s
Therefore, to read 64 bits, time required
(10^6 × 64)/(70 × 3.14 × 14000) μs ≈ 20.8 μs
As memory cycle time given is 1μs,
Therefore, % CPU cycle stolen = (1/20.8) × 100 ≈ 5%
Question 159 |
2 and 3 | |
3 and 3 | |
3 and 4 | |
4 and 4
|
Direction changes at 120, 110, 130.
FCFS: (90) 120 30 70 115 130 110 80 20 25
Direction changes at 120, 30, 130, 20.
Question 160 |
if T(Pr) < T(Ph) then kill Pr else wait
Which one of the following is TRUE?
The scheme is deadlock-free, but not starvation-free | |
The scheme is not deadlock-free, but starvation-free | |
The scheme is neither deadlock-free nor starvation-free | |
The scheme is both deadlock-free and starvation-free |
So, starvation is possible, but deadlock is not possible.
Question 161 |
for (i = 1; i <= n; i++) fork();
The number of new processes created is
n | |
![]() | |
2^n - 1 | |
3^n - 1 |
2^n - 1.
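A one-line way to see this (a worked step, not in the original answer): each fork() call doubles the number of processes, so
processes after n iterations = 2^n  ⇒  new (child) processes = 2^n - 1.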
Question 162 |
P(full), V(empty), P(full), V(empty) | |
P(full), V(empty), P(empty), V(full) | |
P(empty), V(full), P(empty), V(full) | |
P(empty), V(full), P(full), V(empty) |
In process P1, wait on semaphore 'empty' signifies that if there is no space in buffer then P2 cannot produce more items. Signal on semaphore 'full' is to signify that one item has been added to the buffer.
In process P2, wait on semaphore 'full' signifies that if the buffer is empty then consumer cannot consume any item. Signal on semaphore 'empty' increments a space in the buffer after consumption of an item.
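A minimal sketch of this ordering with POSIX semaphores (the buffer, its size N, and the mutex are assumptions added for illustration; the question itself only fixes the placement of the empty/full operations):

#include <semaphore.h>
#include <pthread.h>

#define N 8                      /* assumed buffer capacity */

int buffer[N];
int in = 0, out = 0;
sem_t empty_slots;               /* initialised to N : free slots    */
sem_t full_slots;                /* initialised to 0 : filled slots  */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void produce(int item) {
    sem_wait(&empty_slots);      /* P(empty): block if there is no space */
    pthread_mutex_lock(&m);
    buffer[in] = item;
    in = (in + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&full_slots);       /* V(full): one more item available */
}

int consume(void) {
    int item;
    sem_wait(&full_slots);       /* P(full): block if the buffer is empty */
    pthread_mutex_lock(&m);
    item = buffer[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&empty_slots);      /* V(empty): one more free slot */
    return item;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    produce(42);
    return consume() == 42 ? 0 : 1;
}

(Compile with -pthread; a real producer/consumer pair would run these functions in separate threads.)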
Question 163 |
In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is 4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number of bits that can be used for storing protection and other information in each page table entry?
2 | |
10 | |
12 | |
14 |
No. of frames = Physical memory size / Page size
= 2^30 / 2^12
= 2^18
A frame number therefore needs 18 bits; since the page table entry size is 32 bits, 18 + k = 32
⇒ k = 14 bits
Question 164 |
T1 | T2 |
---|---|
Read(A)A = A - 10 | Read (A)Temp = 0.2*A Write(A) Read(B) |
Write(A)Read(B) B = B + 10 Write(B) | B = B + TempWrite(B) |
S is serializable only as T1, T2 | |
S is serializable only as T2, T1 | |
S is serializable both as T1, T2 and T2, T1 | |
S is serializable either as T1 or as T2 | |
None of these |
Question 165 |
the large amount of internal fragmentation | |
the large amount of external fragmentation
| |
the large memory overhead in maintaining page tables | |
the large computation overhead in the translation process
|
Virtual address = 32 bits ⇒ virtual address space = 2^32
No. of page table entries = 2^32 / 2^10
= 2^22
= 4M entries (too large a page table)
Question 166 |
First come first served scheduling
| |
Shortest remaining time first scheduling
| |
Static priority scheduling with different priorities for the two processes | |
Round robin scheduling with a time quantum of 5 ms
|
(i) FCFS:
0-10: Process 1 can execute
10-20: Process 2 can execute
100-110: Process 1 Terminate
110-120: Process 2 Terminate
CPU utilization = 20/100 (the CPU is busy for 20 ms in every 100 ms)
= 20%
=20%
(ii) SRTF: the processes run in the same way as under FCFS,
so CPU utilization = 20%
(iii) Round robin: with TQ-5
0-5: Process P1
5-10: Process P2
10-15: Process P1
15-20: Process P2
105-110: Process P1
110-115: Process P2
CPU utilization = 20/105 ≈ 19%
Question 167 |
1.5 ns | |
2 ns | |
3 ns
| |
4 ns |
EMAT = (TLB hit, cache hit) + (TLB hit, cache miss) + (TLB miss, cache hit) + (TLB miss, cache miss) contributions
= (0.96 × 0.9 × 2) + (0.96 × 0.1 × 12) + (0.04 × 0.9 × 22) + (0.04 × 0.1 × 32)
= 3.8 ns
≈ 4 ns (approximately)
Question 168 |
8 KB | |
12 KB
| |
16 KB | |
20 KB |
32bits are broken up as 10bits (L2) | 10bits (L1) | 12bits (offset)
First code page:
0x00000000 = 0000 0000 00 | 00 0000 0000 | 0000 0000 0000
So next code page will start from
0x00001000 = 0000 0000 00 | 00 0000 0001 | 0000 0000 0000
First data page:
0x00400000 = 0000 0000 01 | 00 0000 0000 | 0000 0000 0000
So next data page will start from
0x00401000 = 0000 0000 01 | 00 0000 0001 | 0000 0000 0000
Only one stack page:
0xFFFFF000 = 1111 1111 11 | 11 1111 1111 | 0000 0000 0000
Now, for second level page table, we will just require 1 Page
which will contain following 3 distinct entries i.e. 0000 0000 00, 0000 0000 01, 1111 1111 11.
Now, for each of these distinct entries, we will have 1-1 page in Level-1.
Hence, we will have in total 4 pages and page size = 2^12 = 4KB.
Therefore, Memory required to store page table = 4*4KB = 16KB.
Question 169 |
Process P: while (1) { W: print '0'; print '0'; X: } Process Q: while (1) { Y: print '1'; print '1'; Z: }
Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will always lead to an output starting with '001100110011' ?
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
| |
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0
| |
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1 | |
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S initially 1, and T initially 0
|
Process P will be executed first and then process Q can execute next.
At the process P: W=P(S)
X=V(T)
At the process Q: Y=P(T)
Z=V(S)
Here, with S = 1 and T = 0, process P executes first and then Q; the two processes then alternate strictly, starting with P, so the output always begins 001100110011...
Question 170 |
Process P: while (1) { W: print '0'; print '0'; X: } Process Q: while (1) { Y: print '1'; print '1'; Z: }
Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will ensure that the output string never contains a substring of the form 01^n0 or 10^n1 where n is odd?
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1 | |
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1 | |
P(S) at W, V(S) at X, P(S) at Y, V(S) at Z, S initially 1
| |
V(S) at W, V(T) at X, P(S) at Y, P(T) at Z, S and T initially 1 |
Question 171 |
Round Robin | |
First-In First-Out | |
Multilevel Queue Scheduling | |
Multilevel Queue Scheduling with Feedback |
Question 172 |
Has not been used for the longest time in the past. | |
Will not be used for the longest time in the future. | |
Has been used least number of times. | |
Has been used most number of times. |
Question 173 |
Security is dynamic | |
The path for searching dynamic libraries is not known till runtime | |
Linking is insecure | |
Cryptographic procedures are not available for dynamic linking |
Question 174 |
A | |
A and B | |
A and C | |
A, B and C |
Question 175 |
the size of the blocks, and the size of the address of the blocks. | |
the number of blocks used for the index, and the size of the blocks. | |
the size of the blocks, the number of blocks used for the index, and the size of the address of the blocks. | |
None of the above |
No. of data block addresses available for one file (B) = maximum number of blocks usable for the index × number of block addresses that fit in one index block (A), where A = block size / block-address size.
Size of file = B × size of block
So the maximum file size depends on the block size, the number of blocks used for the index, and the size of the block addresses.
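A worked instance with assumed numbers (not taken from the question): with 1 KB blocks and 8-byte block addresses,
A = 1024 / 8 = 128 addresses per index block
so a file that uses one index block can span 128 × 1 KB = 128 KB; with two index blocks, 256 KB, and so on.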
Question 179 |
General purpose registers | |
Translation look-aside buffer | |
Program counter | |
All of the above |
Question 180 |
Thrashing | |
Deadlock | |
Starvation, but not deadlock | |
None of the above |

Question 181 |
1600 × 400 resolution with 256 colours on a 17 inch monitor | |
1600 × 400 resolution with 16 million colours on a 14 inch monitor | |
800 × 400 resolution with 16 million colours on a 17 inch monitor | |
800 × 800 resolution with 256 colours on a 14 inch monitor |
Option A:
1600×400 = 640000 pixels = 640 K pixels
256 colours ⇒ 8 bits = 1 byte per pixel
⇒ 640K × 1 = 640 KB (fits in 1 MB)
Option B:
1600×400 = 640 K pixels
16 million colours ⇒ 24 bits = 3 bytes per pixel
⇒ 640K × 3 = 1920 KB (does not fit in 1 MB)
Option C:
800×400 = 320 K pixels
16 million colours ⇒ 3 bytes per pixel ⇒ 320K × 3 = 960 KB (fits in 1 MB)
Option D:
800×800 = 640 K pixels
256 colours ⇒ 1 byte per pixel ⇒ 640K × 1 = 640 KB (fits in 1 MB)
Question 182 |
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of
1.9999 milliseconds | |
1 millisecond | |
9.999 microseconds | |
1.9999 microseconds |
= (0.9999*1) + [(1-0.9999) *10000]
= (0.9999) + (0.0001 * 10000)
= 0.9999 + 1
= 1.9999 microseconds
Question 183 |
Release all resources before requesting a new resource | |
Number the resources uniquely and never request a lower numbered resource than the last one requested | |
Never request a resource after releasing any resource | |
Request and all required resources be allocated before execution |
Question 184 |
Theory Explanation is given below. |
(ii) Signal (mutex)
(iii) R ⇒ 1 (or) W == 1
(b) If at least one reader is always present in the critical section, a writer can never enter the critical section.
This leads to starvation of writers, which is the classical readers-writers problem.
Question 185 |
(A) – 2 (B) – 4 (C) – 3 (D) - 1 | |
(A) – 1 (B) – 2 (C) – 3 (D) – 4 | |
(A) – 3 (B) – 2 (C) – 4 (D) - 1 | |
(A) – 4 (B) – 1 (C) – 2 (D) – 3 |

⇒ Threads are scheduled on the CPU.
⇒ A virtual address is a memory concept.
⇒ The file system is used to manage the disk.
⇒ An interrupt is a signal.
Question 186 |
Farthest cylinder next | |
Nearest cylinder next | |
First come first served | |
Elevator algorithm |
Nearest cylinder next - SSTF
First come first served - FCFS
Elevator - SCAN
SSTF always serves the request on the nearest cylinder first, which reduces the head movement needed.
Question 187 |
a software interrupt | |
polling | |
an indirect jump | |
a privileged instruction |
Question 188 |
Address translation | |
DMA for disk transfer | |
At least two modes of CPU execution (privileged and non-privileged) | |
Demand paging | |
Both A and C |
Question 189 |
Faster access to memory on an average. | |
Processes can be given protected address spaces. | |
Linker can assign addresses independent of where the program will be loaded in physical memory. | |
Programs larger than the physical memory size can be run.
| |
Both B and D |
B) True, because address translation in virtual memory gives each process a protected address space, so one process cannot interfere with another.
C) False.
D) True.
Question 190 |
Saving current register values and restoring saved register values for process B.
| |
Changing address translation tables. | |
Swapping out the memory image of process A to the disk. | |
Invalidating the translation look-aside buffer. |
B) True.
C) False, because swapping is done when the process is suspended and not during context switching.
D) True, Invalidation of TLB is necessary because, if the TLB is not invalidated then the new process might end up using the translation of old process. Note that Invalidation of TLB is necessary but saving and reuse of TLB is not necessary.
Question 191 |
0.125 0.125 | |
0.25 0.25 | |
0.25 0.125 | |
0.125 0.25
|
Question 192 |
0, 200, 500, 600 | |
0, 200, 1000, 1600 | |
200, 500, 600, 800 | |
200, 700, 1300, 2100 |
Now 2nd will start loading at 200, since size is 800, so last address is 999.
Now 3rd module will start loading at 1000, since size is 600. So last address is 1599.
Now 4th module will start loading at 1600 and go till 2099.
Question 193 |
The terminal used to enter the input data for the C program being executed | |
An output device used to print the output of a number of jobs | |
The secondary memory device in a virtual storage system | |
The swapping area on a disk used by the swapper |
For example, if a process attempts to print a document while the printer is busy printing another document, the process, instead of waiting for the printer to become available, writes its output to disk. When the printer becomes available, the data on disk is printed.
Spooling thus allows a process to request an operation on a peripheral device without requiring that the device be ready to service the request.
Question 194 |
cycle stealing | |
race condition | |
a time lock | |
a deadlock |
The final outcome depends on the relative speed (i.e., the order) in which the processes execute, which is the definition of a race condition.
Question 195 |
A counting semaphore was initialized to 10. Then 6P (wait) operations and 4V (signal) operations were completed on this semaphore. The resulting value of the semaphore is
0 | |
8 | |
10 | |
12 |
S = 10
Now 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. So the final value of S will be
S = 10 - 6 + 4 = 8
Question 196 |
6 | |
5 | |
4 | |
3 |
So maximum no. of process for the system to be deadlock free is 5.
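Assuming the usual form of this question (6 identical resource units, each process needing at most 2), the check is the standard deadlock-free condition:
(max1 - 1) + (max2 - 1) + ... + (maxn - 1) < m  ⇒  n(2 - 1) < 6  ⇒  n ≤ 5
so at most 5 processes can be supported without any possibility of deadlock.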
Question 197 |
12 KB | |
14 KB | |
10 KB | |
8 KB |
For the above program, maximum memory will be required when running code portion present at leaves.
Maximum requirement
= MAX(12, 14, 10, 14)
= 14
Question 198 |
Consider n processes sharing the CPU in a round-robin fashion. Assuming that each process switch takes s seconds, what must be the quantum size q such that the overhead resulting from process switching is minimized but at the same time each process is guaranteed to get its turn at the CPU at least every t seconds?
![]() | |
![]() | |
![]() | |
![]() |

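A short derivation of the usual answer q = (t - ns)/n (assuming that is the formula shown in the figure): in one complete round each of the n processes runs for q and there are n context switches of s each, and the round must fit within t so that every process gets its turn at least every t seconds:
n(q + s) ≤ t  ⇒  q ≤ (t - ns)/n
Taking the largest permissible q minimizes the relative switching overhead, so q = (t - ns)/n.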
Question 199 |
If an instruction takes i microseconds and a page fault takes an additional j microseconds, the effective instruction time if on the average a page fault occurs every k instruction is:
![]() | |
![]() | |
![]() | |
![]() |
= service time + page fault rate * page fault service time
= i + 1/k * j
= i + j/k
Question 200 |
will always be to the page used in the previous page reference | |
is likely to be to one of the pages used in the last few page references | |
will always be to one of the pages existing in memory | |
will always lead to a page fault |
(1) Temporal locality
(2) Spatial locality
(3) Sequential locality
→ In programs, related data items are stored in consecutive locations (spatial locality) and the locations inside a loop are used repeatedly (temporal locality).
Question 201 |
A – 3 B – 4 C – 2 D – 1 | |
A – 4 B – 3 C – 2 D – 1 | |
A – 2 B – 4 C – 1 D – 3 | |
A – 3 B – 4 C – 3 D – 2 |
Batch processing - FIFO
Time sharing - Round Robin
Interrupt processing - LIFO
Question 202 |
reduces page I/O | |
decreases the degree of multiprogramming | |
implies excessive page I/O | |
improve the system performance |
Question 203 |
implies changing the name of a file | |
can be employed to use an existing file as input file for a program | |
implies connection 2 programs through a pipe | |
None of the above
|
Question 204 |
helps avoid unnecessary writes on a paging device | |
helps maintain LRU information | |
allows only read on a page | |
None of the above |
Question 205 |
3 | |
5 | |
4 | |
6 |
Question 206 |
1 | |
2 | |
3 | |
None of the above |
repeat
{
V(mutex)
C.S.
V(mutex)
}
forever
Now, let us say P1 is in the CS.
then P10 comes and executes CS (upon mutex).
Now, P2 comes (down on mutex).
Now P10 moves out of CS (again binary semaphore will be 1).
Now P3 comes (down on mutex).
Now P10 comes (upon mutex).
Now P4 comes (down mutex).
... So, if we keep taking P10 in and out of the CS repeatedly, all 10 processes can be in the CS at the same time using only a binary semaphore.
Question 207 |
a batch operating system | |
an operating system with a preemptive scheduler | |
an operating system with a non-preemptive scheduler | |
a uni-programmed operating system |
So this is preemptive.
Question 208 |
which should run in a certain specified amount of time | |
which avoids deadlocks | |
where shared resources are accessed | |
which must be enclosed by a pair of semaphore operations, P and V
|
Question 209 |
A line printer used to print the output of a number of jobs. | |
A terminal used to enter input data to a running program. | |
A secondary storage device in a virtual memory sytem. | |
A graphic display device. |
Question 210 |
A – 3 B – 4 C – 1 D – 2 | |
A – 4 B – 3 C – 1 D – 2 | |
A – 4 B – 3 C – 2 D – 1 | |
A – 3 B – 4 C – 2 D – 1 |
An assembler uses location counter value to give address to each instruction which is needed for relative addressing as well as for jump labels.
Reference count is used by the garbage collector to reclaim memory whose reference count becomes 0.
Linker loader is a loader which can load several compiled codes and link them together into a single executable. Thus it needs to do relocation of the object codes.
Question 211 |
151 | |
181 | |
231 | |
541 |

So, smallest allocation request which can be denied is 181 KB.
Question 212 |
ensure that all philosophers pick up the left fork before the right fork | |
ensure that all philosophers pick up the right fork before the left fork | |
ensure that one particular philosopher picks up the left fork before the right fork, and that all other philosophers pick up the right fork before the left fork | |
None of the above |
To avoid this, at least one philosopher should pick up the forks in the opposite order from the others, so that a circular wait cannot form.
Question 213 |
10 | |
4 | |
8 | |
9 |

∴ Completion time of A is 9 unit.
Question 214 |
ROM is a Read/Write memory | |
PC points to the last instruction that was executed | |
Stack works on the principle of LIFO | |
All instructions affect the flags |
Question 215 |
the segment table is often too large to fit in one page | |
each segment is spread over a number of pages | |
segment tables point to page table and not to the physical locations of the segment | |
the processor’s description base register points to a page table | |
Both A and B |
Segment paging is different from paged segmentation.
Question 216 |
Optimal replacement | |
LRU | |
FIFO | |
Both (A) and (C)
|
Question 217 |
Shortest Job First | |
Round Robin | |
First Come First Serve | |
Elevator |
Question 218 |
{3,2,1),1 | |
(2,1,3},0 | |
{3,2,1),0 | |
{1,2,3},5 |
So, (B) and (C) will be eliminated.
Now in (A) and (D):
For r(A),

So, the idle time is 1 unit (from time 0 to 1), which is the case for option (A).
For option(D),

We can see there is no idle time at all, but the option states an idle time of 5, which does not match our chart. So option (D) is eliminated.
Therefore, the correct sequence is option (A).
Question 219 |
13 | |
8 | |
7 | |
10 |
01, 02, 04, 04, 05, 05, 05, 01, 02, 02, 02, 03, 03.
Clearly 7 page faults.
Question 220 |
smaller, smaller | |
smaller, larger | |
larger, smaller | |
larger, larger |
Question 221 |
LRU page replacement algorithm is used | |
FIFO page replacement algorithm is used | |
LFU page replacement algorithm is used | |
None of the above |
In LRU, the page that was least recently used is the one removed.
In LFU, the page with the lowest use frequency is removed; a page in constant use has a high frequency and so will not be replaced.
Question 222 |
either first fit or best fit policy (any one) | |
first fit but not best fit policy
| |
best fit but first fit policy
| |
None of the above |
So, request for 300 will be satisfied by 350 size block reducing the free size to 50.
Request for 25, satisfied by the 150 size block, reducing it to 125.
Request for 125 satisfied by the 125 size block.
And request for 50 satisfied by 50 size block.
So, all requests can be satisfied.
In best fit strategy, a block request is satisfied by the smallest block in which can fit it. So, request for 300 will be satisfied by 350 size block reducing the free size to 50.
Request for 25, satisfied by 50 size block as its the smallest size that fits 25, reducing it to 25.
Request for 125, satisfied by 150 size block, reducing it to 25.
Now, request for 50 cannot be satisfied as the two 25 size blocks are not contiguous.
Question 223 |
P = 12.5, Q = 2.5×10^6 |
So, in 60 seconds the disk rotates 2400 times (2400 RPM).
Average latency is the time for half a rotation
= 0.5×60/2400 s
= 12.5 ms
In one full rotation, entire data in a track can be transferred. Track storage capacity = 62500 bits
So, disk transfer rate
= 62500 × 2400/60
= 2.5 × 10^6 bps
So,
P = 12.5, Q = 2.5×10^6
Question 224 |
7 | |
9 | |
13, 15 | |
13 | |
15 |
→ If A have 2, B have 3, C have 5 then deadlock will occur i.e., 2+3+5=10.
→ If we have one extra resource then deadlock will not occur i.e., 10+1=11.
→ If we have equal (or) more than 11 resources then deadlock will never occur.
Question 225 |
7 | |
9 | |
13, 15 | |
13 | |
15 |
→ If A have 2, B have 3, C have 5 then deadlock will occur i.e., 2+3+5=10.
→ If we have one extra resource then deadlock will not occur i.e., 10+1=11.
→ If we have equal (or) more than 11 resources then deadlock will never occur.
Question 226 |
4 | |
10 | |
11 | |
12 | |
None of the above |

p is departure at 11.
Question 227 |
42 | |
2 | |
7 | |
12 |
Initial value of S is 7.
So after 20P operations and 15V operations the value of semaphore S will be,
S = 7-20+15 = 2
Question 228 |
2 | |
3 | |
4 | |
1 |
Question 229 |
The terminal used to the input data for a program being executed. | |
The secondary memory device in a virtual memory system | |
A line printer used to print the output of a number of jobs. | |
None of the above |
Question 230 |
FIFO |
Question 231 |
Virtual memory implements the translation of a program’s address space into physical memory address space | |
Virtual memory allows each program to exceed the size of the primary memory | |
Virtual memory increases the degree of multiprogramming | |
Virtual memory reduces the context switching overhead |
Question 232 |
Consider a set of n tasks with known runtimes r1, r2, ..., rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
Round-Robin | |
Shortest-Job-First | |
Highest-Response-Ratio-Next | |
First-Come-First-Served |
Question 233 |
always decrease the number of page faults | |
always increase the number of page faults | |
sometimes increase the number of page faults | |
never affect the number of page faults |
Belady's anomaly is the name given to the phenomenon where increasing the number of page frames results in an increase in the number of page faults for a given memory access pattern.
This phenomenon is commonly experienced in following page replacement algorithms: First in first out (FIFO), Second chance algorithm, Random page replacement algorithm.
Question 234 |
A device | |
Timer | |
Scheduler process | |
Power failure |
Question 235 |
16 MB | |
8 MB | |
2 MB | |
24 MB |
Number of frames = 2^26 / 2^12 = 2^14
A frame number needs 14 bits, so each page table entry takes 2 bytes; hence size of PT = 2^20 × 2 = 2 MB
Question 236 |
flag[j]=true and turn=i | |
flag[j]=true and turn=j | |
flag[i]=true and turn=j | |
flag[i]=true and turn=i |
Question 238 |
A-R, B-P, C-S, D-Q |
B-P
C-S
D-Q
Buddy system: The buddy system is a technique to allocate the memory by dividing the memory into partitions to try to satisfy a memory request as suitably as possible.
Interpretation: Runtime.
Pointer type: Garbage collector dereference the dangling pointer.
Virtual memory: Segmentation.
Question 239 |
the length of MAR | |
the available secondary storage | |
the available main memory | |
all of the above | |
none of the above
| |
Both A and B |
MAR can hold the address that is generated from CPU and limits the total virtual memory address space.
Question 240 |
The amount of virtual memory available is limited by the availability of secondary storage. | |
Any implementation of a critical section requires the use of an indivisible machine-instruction, such as test-and-set. | |
The LRU page replacement policy may cause thrashing for some types of programs.
| |
The best fit techniques for memory allocation ensures the memory will never be fragmented.
| |
B and D |
(B) is false, because Peterson's solution is a purely software-based solution to the critical section problem that does not use an indivisible machine instruction.
(C) Is true.
(D) False, memory can get fragmented with best fit technique.
Question 241 |
(a) - (q), (b) - (p), (c) - (r), (d) - (s) |
Question 242 |
True | |
False |
Whereas in the link-load-and-go scheme, the linkage editor does not co-exist with the program in main memory while performing the linking task.
Question 243 |
(A)-(r), (B)-(s), (C)-(q), (D)-(p) |
Shared Memory - Mutual Exclusion
Look-ahead buffer - Spatial Locality
Look-aside buffer - Temporal Locality
Question 244 |
One which is enclosed by a pair of P and V operations on semaphores. | |
A program segment that has not been proved bug-free. | |
A program segment that often causes unexpected system crashes. | |
A program segment where shared resources are accessed. |