PAS = 32 bits
∴ No. of frames = PA / Page size = 2^32 / 2^13 = 2^19
Also, it is given that each page table entry contains a valid bit, a dirty bit, and 3 permission bits:
= 5 bits reserved
So one page table entry size is
= 19 + 5 = 24 bits = 3 bytes
Now, Page table size = No. of entries × Entry size
24 × 2^20 = No. of entries × 3
No. of entries = 8 × 2^20 = 2^23
∴ Virtual Address Space = No. of entries × Page size = 2^23 × 2^13 = 2^36
∴ Virtual Address size = 36 bits
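As a quick check, the derivation above can be replayed in Python (the variable names are mine, not from the question; it assumes 32-bit physical addresses, 8 KB pages, and the given 24 MB page table):

```python
# Replaying the page-table arithmetic: 32-bit PAS, 2^13-byte pages,
# 5 extra bits (valid + dirty + 3 permission) per entry, 24 MB table.

PHYS_ADDR_BITS = 32
PAGE_OFFSET_BITS = 13                             # 8 KB pages
frame_bits = PHYS_ADDR_BITS - PAGE_OFFSET_BITS    # bits to name a frame: 19
entry_bits = frame_bits + 5                       # 24 bits per entry
entry_bytes = entry_bits // 8                     # 3 bytes per entry

page_table_bytes = 24 * 2**20                     # given: 24 MB page table
num_entries = page_table_bytes // entry_bytes     # 2^23 entries

# VA space = entries * page size = 2^36 -> 36 address bits
virtual_addr_bits = (num_entries * 2**PAGE_OFFSET_BITS).bit_length() - 1
print(frame_bits, entry_bytes, num_entries, virtual_addr_bits)
```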
It reduces the memory access time to read or write a memory location.
It helps to reduce the size of page table needed to implement the virtual address space of a process.
It is required by the translation lookaside buffer.
It helps to reduce the number of page faults in page replacement algorithms.
Before effective address calculation has started
During effective address calculation
After effective address calculation has completed
After data cache lookup has completed
20, 20 and 20
24, 24 and 24
24, 24 and 20
25, 25 and 24
From the question we can see the following:
Physical address size = 36 bits
Physical memory size = 2^36 bytes
Page frame size = 4K bytes = 2^12 bytes
No. of bits for offset = 12
No. of bits required to address a physical memory frame = 36 − 12 = 24
So in the third-level page table, 24 bits are required to address a frame in an entry.
In the second level, 9 bits of the virtual address are used to index the second-level page table, and each entry is 4 bytes.
So the size of a second-level page table is 2^9 × 4 = 2^11 bytes. That means there are 2^36 / 2^11 = 2^25 possible locations to store such a page table, so a second-level page table entry requires 25 bits to address it. By the same argument, a first-level page table entry also needs 25 bits.
Answer - D
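The bit widths above can be sketched in Python (a check under the question's assumed parameters: 36-bit physical addresses, 4 KB frames, 9 virtual-address bits per inner level, 4-byte entries):

```python
# Sketch of the three-level page-table bit arithmetic.
import math

PHYS_BITS = 36
OFFSET_BITS = 12                      # 4 KB frames
frame_bits = PHYS_BITS - OFFSET_BITS  # bits in a third-level entry: 24

ENTRIES_L2 = 2**9                     # 9 VA bits index the second level
ENTRY_SIZE = 4                        # bytes per page-table entry
l2_table_bytes = ENTRIES_L2 * ENTRY_SIZE        # 2^11 bytes per table
l2_locations = 2**PHYS_BITS // l2_table_bytes   # 2^25 possible positions
l2_addr_bits = int(math.log2(l2_locations))     # 25 bits to address one

print(frame_bits, l2_addr_bits)
```

The same 25-bit figure applies to the first level, matching option D (25, 25 and 24).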
= [(0.9) × 60] + [0.1 × 110]
= 54 + 11 = 65 ns
I-d, II-a, III-b, IV-c
I-b, II-c, III-a, IV-d
I-c, II-d, III-a, IV-b
I-b, II-c, III-d, IV-a
The bit indicates that its associated block of memory has been modified and has not been saved to storage yet. Dirty bits are used by the CPU cache and in the page replacement algorithms of an operating system.
If the bit is set, the page is read/write; otherwise, the page is read-only.
Reference bit is used in a version of FIFO called second chance policy, in order to avoid replacement of heavily used page.
The valid bit is not used for page replacement; it appears in no replacement policy. It tells whether the page in memory is valid or not: if valid, the page is used directly; if not, a fresh page is loaded. So it is essentially page initialization, because we are not replacing (knocking out) an existing page, we are filling empty space. Hence option (D).
Virtual Address = 32 bits
No. of bits needed to address the page frame = 32 − 12 = 20
The TLB can hold 128 page table entries and is 4-way set associative, so it has 128 / 4 = 32 sets.
→ 5 bits are needed to address a set.
→ The size of the TLB tag = 20 − 5 = 15 bits
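A minimal sketch of this tag-width computation (variable names are mine; it assumes a 32-bit virtual address with 4 KB pages and a 128-entry 4-way TLB, as in the question):

```python
# TLB tag width: VPN bits minus set-index bits.

VA_BITS = 32
OFFSET_BITS = 12                        # 4 KB pages
vpn_bits = VA_BITS - OFFSET_BITS        # 20-bit virtual page number

TLB_ENTRIES = 128
WAYS = 4
sets = TLB_ENTRIES // WAYS              # 32 sets
set_index_bits = sets.bit_length() - 1  # 5 bits index a set
tag_bits = vpn_bits - set_index_bits    # 15 tag bits

print(vpn_bits, set_index_bits, tag_bits)
```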
Efficient implementation of multi-user support is no longer possible
The processor cache organization can be made more efficient now
Hardware support for memory management is no longer needed
CPU scheduling can be made more efficient now
→ Because this special hardware support is needed only for virtual memory; without virtual memory, hardware support for memory management is no longer needed.
the instruction set architecture
physical memory size
number of processes in memory
→ An ISA permits multiple implementations that may vary in performance, physical size and monetary cost.
Consider a system with a two-level paging scheme in which a regular memory access takes 150 nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100 nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is one in every 10,000 instructions. What is the effective average instruction execution time?
= 100 ns + 2 × EMAT
Now let us calculate EMAT:
EMAT = TLB access time + miss rate × 2 × 150 ns + 150 ns + (1/10000) × 8 ms
= 0 + 0.1 × 300 ns + 150 ns + 800 ns
= 980 ns
∴ Effective average instruction time
= 100 ns + 2 × 980 ns = 2060 ns
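The same arithmetic, replayed as a short Python sketch (it follows the solution's assumptions: TLB access is free, a TLB miss costs two extra 150 ns page-table accesses, and the page-fault term is folded into each access):

```python
# Effective average instruction time, all values in nanoseconds.

MEM = 150.0                  # regular memory access
TLB_MISS_RATE = 0.1          # 90% TLB hit ratio
PAGE_FAULT = 8e6             # 8 ms page-fault service
FAULT_RATE = 1 / 10_000      # one fault per 10,000 instructions
CPU = 100.0                  # CPU time per instruction

emat = TLB_MISS_RATE * 2 * MEM + MEM + FAULT_RATE * PAGE_FAULT  # 30 + 150 + 800
instr_time = CPU + 2 * emat                                     # 100 + 2 * 980
print(emat, instr_time)
```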
the large amount of internal fragmentation
the large amount of external fragmentation
the large memory overhead in maintaining page tables
the large computation overhead in the translation process
Virtual address = 32 bits → 2^32 addresses
No. of page table entries = 2^32 / 2^10
= 2^22 = 4M (too large a size)
(TLB Miss * Cache Hit) + (TLB Miss * Cache Miss)
= 4 (approximately)
32 bits are broken up as 10 bits (L2) | 10 bits (L1) | 12 bits (offset)
First code page:
0x00000000 = 0000 0000 00 | 00 0000 0000 | 0000 0000 0000
So next code page will start from
0x00001000 = 0000 0000 00 | 00 0000 0001 | 0000 0000 0000
First data page:
0x00400000 = 0000 0000 01 | 00 0000 0000 | 0000 0000 0000
So next data page will start from
0x00401000 = 0000 0000 01 | 00 0000 0001 | 0000 0000 0000
Only one stack page:
0xFFFFF000 = 1111 1111 11 | 11 1111 1111 | 0000 0000 0000
Now, for the second-level (top) page table, we will require just 1 page,
which will contain the following 3 distinct entries: 0000 0000 00, 0000 0000 01 and 1111 1111 11.
Now, for each of these distinct entries, we will have one page in Level 1.
Hence, we will have 4 pages in total, and page size = 2^12 bytes = 4 KB.
Therefore, memory required to store the page tables = 4 × 4 KB = 16 KB.
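The page counting above can be checked directly from the region addresses (a sketch assuming the 10 | 10 | 12 split of a 32-bit virtual address, as in the question):

```python
# Count pages of page table needed for the code, data and stack regions.

addresses = [0x00000000, 0x00001000,   # two code pages
             0x00400000, 0x00401000,   # two data pages
             0xFFFFF000]               # one stack page

top_indices = {addr >> 22 for addr in addresses}   # top 10 VA bits
# one top-level table + one next-level table per distinct top index
pages_of_table = 1 + len(top_indices)
print(sorted(top_indices), pages_of_table, pages_of_table * 4, "KB")
```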
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory access time of
= (0.9999*1) + [(1-0.9999) *10000]
= (0.9999) + (0.0001 * 10000)
= 0.9999 + 1
= 1.9999 microseconds
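The hit-ratio computation above, as a one-line Python check (constant names are mine):

```python
# Average memory access time with a 99.99% hit ratio, in microseconds.
HIT = 0.9999
MEM_US = 1.0            # memory access: 1 microsecond
FAULT_US = 10_000.0     # page-fault service: 10 ms = 10,000 microseconds

avg = HIT * MEM_US + (1 - HIT) * FAULT_US
print(avg)
```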
DMA for disk transfer
At least two modes of CPU execution (privileged and non-privileged)
Both A and C
We also need at least 2 modes of CPU execution (privileged and non-privileged) to ensure that user processes share resources properly and that the OS maintains control.
DMA enhances performance but is not strictly necessary for demand paging.
Faster access to memory on an average.
Processes can be given protected address spaces.
Linker can assign addresses independent of where the program will be loaded in physical memory.
Programs larger than the physical memory size can be run.
Both B and D
(A) False. In the virtual memory concept, a virtual-to-physical address translation is required before main memory can be accessed, so the average access is slower, not faster.
(B) True. Without virtual memory it is difficult to give processes protected address spaces, since they would access physical memory directly; no protection mechanism can be enforced inside physical memory alone, as processes are dynamic and their number changes over time.
(C) False. Position-independent code can be produced even without virtual memory support.
(D) This is one primary use of virtual memory. So true.
If an instruction takes i microseconds and a page fault takes an additional j microseconds, the effective instruction time, if on average a page fault occurs every k instructions, is:
= instruction time + page fault rate × page fault service time
= i + (1/k) × j
= i + j/k
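The formula is small enough to wrap in a helper (the function name and example values are mine, not from the question):

```python
# Effective instruction time with one page fault every k instructions.

def effective_instruction_time(i_us: float, j_us: float, k: int) -> float:
    """i_us: base instruction time, j_us: extra page-fault time,
    k: one fault occurs every k instructions."""
    return i_us + j_us / k

# e.g. 1 us instructions, 10,000 us faults, one fault per 10,000 instructions
print(effective_instruction_time(1, 10_000, 10_000))  # 2.0
```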
helps avoid unnecessary writes on a paging device
helps maintain LRU information
allows only read on a page
None of the above
So, smallest allocation request which can be denied is 181 KB.
the length of MAR
the available secondary storage
the available main memory
all of the above
none of the above
Both A and B
The MAR holds the address generated by the CPU, so its length limits the total virtual memory address space.
The amount of virtual memory available is limited by the availability of secondary storage.
Any implementation of a critical section requires the use of an indivisible machine-instruction, such as test-and-set.
The LRU page replacement policy may cause thrashing for some types of programs.
The best-fit technique for memory allocation ensures the memory will never be fragmented.
B and D
(B) Is false, because one of the solutions is Peterson's solution, a purely software-based solution that uses no special hardware instruction.
(C) Is true.
(D) Is false; memory can still get fragmented with the best-fit technique.
Virtual memory implements the translation of a program’s address space into physical memory address space
Virtual memory allows each program to exceed the size of the primary memory
Virtual memory increases the degree of multiprogramming
Virtual memory reduces the context switching overhead