Virtual Memory
Question 1
Consider a process executing on an operating system that uses demand paging. The average time for a memory access in the system is M units if the corresponding memory page is available in memory and D units if the memory access causes a page fault. It has been experimentally measured that the average time taken for a memory access in the process is X units.
Which one of the following is the correct expression for the page fault rate experienced by the process?
(A) (D-M)/(X-M)
(B) (X-M)/(D-M)
(C) (D-X)/(D-M)
(D) (X-M)/(D-X)
Let P be the page fault rate. Then
X = (1 - P)M + P × D
X = M - PM + PD
⇒ (X - M) = P(D - M)
⇒ P = (X - M) / (D - M), so option (B) is correct.
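A quick numeric check of this formula; the values of M, D and X below are made up purely for illustration:

```python
# Hypothetical numbers: M = 100 ns (no fault), D = 8,000,000 ns (fault),
# X = 900 ns (measured average access time).
M, D, X = 100, 8_000_000, 900

# Page fault rate from the derivation above.
P = (X - M) / (D - M)
print(P)                        # ~1e-4

# Sanity check: plugging P back into X = (1 - P)M + PD recovers X.
print((1 - P) * M + P * D)      # ~900
```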
Question 2
(A) 22
(B) 23
(C) 24
(D) 25

Question 3
(A) 36
(B) 37
(C) 38
(D) 39
Physical address size = 32 bits and page size = 8 KB = 2^13 bytes.
∴ No. of frames = physical address space / page size = 2^32 / 2^13 = 2^19
Each page table entry also contains a valid bit, a dirty bit and 3 permission bits, i.e. 5 extra bits besides the frame number.
So the page table entry size
= 19 + 5 = 24 bits = 3 bytes
Now, page table size (24 MB) = no. of entries × entry size:
24 × 2^20 = no. of entries × 3
⇒ no. of entries = 8 × 2^20 = 2^23
∴ Virtual address space = no. of entries × page size = 2^23 × 2^13 = 2^36 bytes
∴ Virtual address size = 36 bits
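As a quick check, the same figures used above (8 KB pages, a 24 MB page table, 5 flag bits per entry) can be run through directly:

```python
import math

# Figures used in the explanation above.
page_size   = 2**13                                  # 8 KB
frames      = 2**32 // page_size                     # 2^19 frames
entry_bits  = int(math.log2(frames)) + 5             # 19 frame bits + 5 flag bits = 24
entry_bytes = entry_bits // 8                        # 3 bytes
entries     = (24 * 2**20) // entry_bytes            # 2^23 entries
va_bits     = int(math.log2(entries * page_size))    # 36
print(entry_bits, entries, va_bits)                  # 24 8388608 36
```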
Question 4
(A) It reduces the memory access time to read or write a memory location.
(B) It helps to reduce the size of page table needed to implement the virtual address space of a process.
(C) It is required by the translation lookaside buffer.
(D) It helps to reduce the number of page faults in page replacement algorithms.
Question 5
(A) Before effective address calculation has started
(B) During effective address calculation
(C) After effective address calculation has completed
(D) After data cache lookup has completed
Question 6
- Bits 30-31 are used to index into the first level page table
- Bits 21-29 are used to index into the second level page table
- Bits 12-20 are used to index into the third level page table, and
- Bits 0-11 are used as offset within the page
(A) 20, 20 and 20
(B) 24, 24 and 24
(C) 24, 24 and 20
(D) 25, 25 and 24
From the question we can see the following:
Physical address size = 36 bits, so physical memory size = 2^36 bytes.
Page frame size = 4 KB = 2^12 bytes, so 12 bits are used for the offset.
No. of bits required to address a physical memory frame = 36 - 12 = 24, so a third-level page table entry needs 24 bits to hold a frame number.
A second-level (or third-level) page table is indexed by 9 bits of the virtual address and each page table entry is 4 bytes, so such a table occupies 2^9 × 4 = 2^11 bytes. Assuming page tables are aligned on their own size, there are 2^36 / 2^11 = 2^25 possible locations for one, so 25 bits are needed to address it. Hence a first-level entry (which points to a second-level table) and a second-level entry (which points to a third-level table) each need 25 bits.
Answer: (D) 25, 25 and 24
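A minimal sketch of the same bit-count argument, assuming (as the explanation does) 4-byte page table entries and page tables aligned on their own size:

```python
import math

# 36-bit physical addresses, 4 KB frames, 9 index bits per inner level,
# 4-byte page table entries (assumptions taken from the explanation above).
phys_bits   = 36
offset_bits = 12
index_bits  = 9
entry_bytes = 4

table_bytes    = (2**index_bits) * entry_bytes              # 2 KB per 2nd/3rd level table
frame_ptr_bits = phys_bits - offset_bits                    # 3rd level entry: 24 bits
table_ptr_bits = phys_bits - int(math.log2(table_bytes))    # 1st/2nd level entry: 25 bits
print(table_ptr_bits, table_ptr_bits, frame_ptr_bits)       # 25 25 24
```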
Question 7
(A) 54
(B) 60
(C) 65
(D) 75
Effective memory access time
= TLB lookup time + TLB miss rate × page table access + memory access
= 10 + 0.1 × 50 + 50
= 10 + 5 + 50
= 65
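Reproducing this arithmetic; the 10 ns TLB lookup, 50 ns memory access and 90% TLB hit ratio are the values implied by the expression above:

```python
# All times in nanoseconds.
tlb_lookup = 10
mem_access = 50
miss_ratio = 0.1       # 90% TLB hit ratio

# On a TLB miss, one extra memory access fetches the page table entry.
emat = tlb_lookup + miss_ratio * mem_access + mem_access
print(emat)            # 65.0
```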
Question 8

(A) I-d, II-a, III-b, IV-c
(B) I-b, II-c, III-a, IV-d
(C) I-c, II-d, III-a, IV-b
(D) I-b, II-c, III-d, IV-a
Dirty bit:
The bit indicates that its associated block of memory has been modified and has not yet been written back to storage. Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
R/W bit:
If the bit is set, the page is read/write; otherwise the page is read-only.
Reference bit:
The reference bit is used in a version of FIFO called the second-chance policy, in order to avoid replacing heavily used pages.
Valid bit:
The valid bit is not used by any page replacement policy. It tells whether the page currently in memory is valid: if it is, the page is used directly; if not, a fresh page is loaded. Nothing is being replaced here, an empty slot is being filled, so the valid bit corresponds to page initialization. Hence option (D).
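To make the flag descriptions concrete, here is a hypothetical page table entry layout with one bit per flag; the bit positions are made up purely for illustration:

```python
# Hypothetical single-bit flags of a page table entry (positions are made up).
VALID      = 1 << 0   # page is present in memory and usable
READ_WRITE = 1 << 1   # set: read/write page, clear: read-only page
REFERENCED = 1 << 2   # set on access; consulted by the second-chance policy
DIRTY      = 1 << 3   # set on write; page must be written back before eviction

entry = VALID | READ_WRITE | DIRTY
print(bool(entry & DIRTY), bool(entry & REFERENCED))  # True False
```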
Question 9
(A) 11 bits
(B) 13 bits
(C) 15 bits
(D) 20 bits
Virtual address = 32 bits, with a 12-bit page offset.
No. of bits in the virtual page number = 32 - 12 = 20
The TLB holds 128 page table entries and is 4-way set associative
⇒ number of sets = 128 / 4 = 32 = 2^5
→ 5 bits are needed to select a set.
→ Size of the TLB tag = 20 - 5 = 15 bits
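The same calculation in a few lines, using the figures above (32-bit virtual addresses, 4 KB pages, a 128-entry 4-way set-associative TLB):

```python
import math

# 32-bit virtual addresses, 12-bit page offset,
# 128-entry 4-way set-associative TLB.
vpn_bits = 32 - 12                 # 20-bit virtual page number
sets     = 128 // 4                # 32 sets
set_bits = int(math.log2(sets))    # 5 bits select a set
tag_bits = vpn_bits - set_bits
print(tag_bits)                    # 15
```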
Question 10
(A) Efficient implementation of multi-user support is no longer possible
(B) The processor cache organization can be made more efficient now
(C) Hardware support for memory management is no longer needed
(D) CPU scheduling can be made more efficient now
→ Special memory-management hardware (such as an MMU for address translation) is needed mainly to support virtual memory, so once virtual memory is dropped that hardware support is no longer needed.
Question 11
(A) the instruction set architecture
(B) page size
(C) physical memory size
(D) number of processes in memory
→ The instruction set architecture is the determining factor here: all pages that a single instruction can reference (the instruction itself, its operands and any indirect references) must be resident for that instruction to complete, and that worst case is fixed by the ISA rather than by the page size, physical memory size or number of processes.
Question 12
Consider a system with a two-level paging scheme in which a regular memory access takes 150 nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100 nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is one in every 10,000 instructions. What is the effective average instruction execution time?
(A) 645 nanoseconds
(B) 1050 nanoseconds
(C) 1215 nanoseconds
(D) 2060 nanoseconds
Effective average instruction time = CPU time + 2 × EMAT = 100 ns + 2 × EMAT.
Now let us calculate EMAT:
EMAT = TLB lookup time + TLB miss rate × (2 × 150 ns) + 150 ns + (1/10000) × 8 ms
= 0 + 0.1 × 300 ns + 150 ns + 800 ns
= 30 ns + 150 ns + 800 ns
= 980 ns
∴ Effective average instruction time
= 100 ns + 2 × 980 ns
= 2060 ns
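Reproducing that arithmetic: the TLB lookup time is taken as negligible, a TLB miss costs two extra 150 ns accesses for the two-level page table, and the page fault cost is amortised as 8 ms per 10,000 instructions, exactly as above:

```python
# All times in nanoseconds.
cpu_time      = 100
mem_access    = 150
tlb_miss_rate = 0.1
page_fault    = 8_000_000 / 10_000   # 8 ms amortised over 10,000 instructions

# EMAT as computed in the explanation above.
emat = 0 + tlb_miss_rate * (2 * mem_access) + mem_access + page_fault
print(emat)                   # 980.0
print(cpu_time + 2 * emat)    # 2060.0
```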
Question 13
(A) the large amount of internal fragmentation
(B) the large amount of external fragmentation
(C) the large memory overhead in maintaining page tables
(D) the large computation overhead in the translation process
Virtual address = 32 bits ⇒ virtual address space = 2^32 bytes.
With 1 KB pages, a single-level page table would need
2^32 / 2^10 = 2^22 = 4 M entries
per process, which is a very large table, i.e. a large memory overhead in maintaining page tables.
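A small illustration of why that table is considered too large, assuming (for illustration only) 4-byte page table entries:

```python
# Single-level page table size for 32-bit virtual addresses and 1 KB pages,
# with an assumed 4-byte page table entry.
entries     = 2**32 // 2**10           # 2^22 = 4 M entries
entry_bytes = 4                        # assumed entry size
table_bytes = entries * entry_bytes
print(entries, table_bytes // 2**20)   # 4194304 16  -> a 16 MB table per process
```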
Question 14
(A) 1.5 ns
(B) 2 ns
(C) 3 ns
(D) 4 ns
Effective access time
= (TLB hit × cache hit) + (TLB hit × cache miss) + (TLB miss × cache hit) + (TLB miss × cache miss)
= (0.96 × 0.9 × 2) + (0.96 × 0.1 × 12) + (0.04 × 0.9 × 22) + (0.04 × 0.1 × 32)
= 1.728 + 1.152 + 0.792 + 0.128
= 3.8 ns
≈ 4 ns (approximately)
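Reproducing the sum above; the four per-case access times (2, 12, 22 and 32 ns) and the 96%/90% hit ratios are the values used in the expression:

```python
# Per-case access times (ns) and hit ratios taken from the expression above.
tlb_hit, cache_hit = 0.96, 0.9
t_hh, t_hm, t_mh, t_mm = 2, 12, 22, 32   # TLB hit/miss x cache hit/miss

eat = (tlb_hit * cache_hit * t_hh
       + tlb_hit * (1 - cache_hit) * t_hm
       + (1 - tlb_hit) * cache_hit * t_mh
       + (1 - tlb_hit) * (1 - cache_hit) * t_mm)
print(round(eat, 2))   # 3.8, i.e. approximately 4 ns
```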
Question 15
(A) 8 KB
(B) 12 KB
(C) 16 KB
(D) 20 KB
The 32-bit virtual address is broken up as 10 bits (top-level index) | 10 bits (second-level index) | 12 bits (offset).
First code page:
0x00000000 = 0000 0000 00 | 00 0000 0000 | 0000 0000 0000
So the next code page will start from
0x00001000 = 0000 0000 00 | 00 0000 0001 | 0000 0000 0000
First data page:
0x00400000 = 0000 0000 01 | 00 0000 0000 | 0000 0000 0000
So the next data page will start from
0x00401000 = 0000 0000 01 | 00 0000 0001 | 0000 0000 0000
Only one stack page:
0xFFFFF000 = 1111 1111 11 | 11 1111 1111 | 0000 0000 0000
The top-level page table needs just one page, containing the 3 distinct entries 0000 0000 00, 0000 0000 01 and 1111 1111 11.
For each of these distinct entries we need one second-level page table, i.e. 3 more pages.
Hence we need 4 pages in total, and page size = 2^12 bytes = 4 KB.
Therefore, memory required to store the page tables = 4 × 4 KB = 16 KB.
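The count of required page-table pages can also be read off programmatically from the mapped addresses, as a small sketch of the reasoning above:

```python
# Count the pages needed for the two-level page table by looking at which
# top-level indices (bits 31-22) the mapped virtual addresses actually use.
pages = [0x00000000, 0x00001000,          # two code pages
         0x00400000, 0x00401000,          # two data pages
         0xFFFFF000]                      # one stack page

top_indices = {addr >> 22 for addr in pages}   # distinct top-level entries
page_tables = 1 + len(top_indices)             # 1 top-level + one 2nd-level table each
print(page_tables, page_tables * 4, "KB")      # 4 16 KB
```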
Question 16
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of
(A) 1.9999 milliseconds
(B) 1 millisecond
(C) 9.999 microseconds
(D) 1.9999 microseconds
Average memory access time (in microseconds)
= (0.9999 × 1) + [(1 - 0.9999) × 10,000]
= 0.9999 + (0.0001 × 10,000)
= 0.9999 + 1
= 1.9999 microseconds
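The same calculation, with everything expressed in microseconds (10 ms = 10,000 µs):

```python
# Everything in microseconds.
hit_ratio  = 0.9999
mem_access = 1
fault_time = 10_000     # 10 milliseconds

amat = hit_ratio * mem_access + (1 - hit_ratio) * fault_time
print(round(amat, 4))   # 1.9999
```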
Question 17
(A) Address translation
(B) DMA for disk transfer
(C) At least two modes of CPU execution (privileged and non-privileged)
(D) Demand paging
(E) Both A and C
Question 18
(A) Faster access to memory on an average.
(B) Processes can be given protected address spaces.
(C) Linker can assign addresses independent of where the program will be loaded in physical memory.
(D) Programs larger than the physical memory size can be run.
(E) Both B and D
(B) True: the address translation used by virtual memory gives each process its own protected address space, so one process cannot interfere with another.
(C) False.
(D) True.
Question 19
If an instruction takes i microseconds and a page fault takes an additional j microseconds, the effective instruction time, if on average a page fault occurs every k instructions, is:
(The four answer choices are given as formula images and are not reproduced here; the derivation below gives i + j/k.)
Effective instruction time
= instruction time + page fault rate × page fault service time
= i + (1/k) × j
= i + j/k
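A quick numeric check of i + j/k, with made-up values for i, j and k:

```python
# Made-up values: 1 us per instruction, 100 us extra per page fault,
# one page fault every 1,000 instructions.
i, j, k = 1.0, 100.0, 1000
effective = i + j / k
print(effective)   # 1.1 microseconds per instruction on average
```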
Question 20
(A) helps avoid unnecessary writes on a paging device
(B) helps maintain LRU information
(C) allows only read on a page
(D) None of the above
Question 21
(A) 151
(B) 181
(C) 231
(D) 541

So, smallest allocation request which can be denied is 181 KB.
Question 22
(A) smaller, smaller
(B) smaller, larger
(C) larger, smaller
(D) larger, larger
Question 23
(A) Virtual memory implements the translation of a program’s address space into physical memory address space
(B) Virtual memory allows each program to exceed the size of the primary memory
(C) Virtual memory increases the degree of multiprogramming
(D) Virtual memory reduces the context switching overhead
Question 24
(A) the length of MAR
(B) the available secondary storage
(C) the available main memory
(D) all of the above
(E) none of the above
(F) Both A and B
The MAR holds the address generated by the CPU, so its length limits the total virtual address space; the amount of virtual memory that can actually be backed is further limited by the available secondary storage.
Question 25
(A) The amount of virtual memory available is limited by the availability of secondary storage.
(B) Any implementation of a critical section requires the use of an indivisible machine instruction, such as test-and-set.
(C) The LRU page replacement policy may cause thrashing for some types of programs.
(D) The best fit technique for memory allocation ensures that memory will never be fragmented.
(E) B and D
(B) is false, because Peterson's solution is a purely software-based solution to the critical section problem that needs no special hardware instruction.
(C) is true.
(D) is false: memory can still become fragmented with the best fit technique.