Time-Complexity
Question 1 |
Θ(loglogn) | |
Θ(logn) | |
Θ(√n) | |
Θ(n) |
T(n) = 2T(n^(1/2)) + 1
Assume n = 2^k, so k = log n.
T(2^k) = 2T(2^(k/2)) + 1
Let S(k) = T(2^k). Then
S(k) = 2S(k/2) + 1
Apply the Master theorem, Case 1: a = 2, b = 2, so
S(k) = Θ(k^(log₂2)) = Θ(k)
Since S(k) = T(2^k) and k = log n,
T(n) = Θ(log n)
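As a sanity check (not part of the original solution), here is a small instrumented sketch of the recurrence: it counts the calls generated by T(n) = 2T(√n) + 1 and prints the count next to log₂n; the count grows in proportion to log n.
#include <stdio.h>
#include <math.h>

/* Hypothetical instrumentation (illustrative only): count how many calls the
   recurrence T(n) = 2T(n^(1/2)) + 1 generates before n shrinks to a constant. */
static long calls = 0;

void T(double n) {
    calls++;                 /* the "+1" unit of work in each call */
    if (n <= 2.0) return;    /* base case: constant-size problem   */
    T(sqrt(n));              /* first  T(n^(1/2)) subproblem       */
    T(sqrt(n));              /* second T(n^(1/2)) subproblem       */
}

int main(void) {
    double n = 16.0;
    for (int i = 0; i < 4; i++, n *= n) {   /* n = 16, 256, 65536, ~4.3e9 */
        calls = 0;
        T(n);
        printf("n = %.0f  calls = %ld  log2(n) = %.0f\n", n, calls, log2(n));
    }
    return 0;
}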
Question 2 |
Consider the following C function.
int fun(int n)
{
   int i, j;
   for (i = 1; i <= n; i++) {
      for (j = 1; j < n; j += i) {
         printf("%d %d", i, j);
      }
   }
}
Time complexity of fun in terms of Θ notation is
Θ(n√n) | |
Θ(n²) | |
Θ(n log n) | |
Θ(n² log n) |
int fun(int n)
{
   int i, j;
   for (i = 1; i <= n; i++)       /* outer loop runs n times */
   {
      for (j = 1; j < n; j += i)  /* for a given i, the inner loop runs about n/i times */
      {
         printf("%d %d", i, j);   /* O(1) work per iteration */
      }
   }
}
The total work is therefore about n/1 + n/2 + ... + n/n = n(1 + 1/2 + ... + 1/n) = Θ(n log n), by the harmonic series.
So, the overall complexity of the program is Θ(n log n). A small counting version is sketched below.
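A counting sketch (an illustration, not the original code) that replaces the printf with a counter and compares the total against n·ln n:
#include <stdio.h>
#include <math.h>

/* The inner loop body runs about n/i times for each i, so the total count is
   n*(1 + 1/2 + ... + 1/n) = Θ(n log n). */
int main(void) {
    int n = 1000;
    long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j < n; j += i)
            count++;                     /* stands in for the printf call */
    printf("n = %d, inner-body executions = %ld, n*ln(n) = %.0f\n",
           n, count, n * log((double)n));
    return 0;
}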
Question 3 |
The given diagram shows the flowchart for a recursive function A(n). Assume that all statements, except for the recursive calls, have O(1) time complexity. If the worst case time complexity of this function is O(n^α), then the least possible value (accurate up to two decimal positions) of α is _________.

2.3219280 | |
2.3219281 | |
2.3219282 | |
2.3219283 |
According to the flowchart, in the worst case the function makes a total of 5 recursive calls.

The remaining statements and return paths execute in only a constant amount of time.
So, there are 5 recursive calls per invocation.
The recurrence will be
A(n) = 5A(n/2) + O(1)
Apply the Master theorem with a = 5, b = 2, k = 0, p = 0.
Since a > b^k, Case 1 applies:
A(n) = Θ(n^(log_b a)) = Θ(n^(log₂5)) = Θ(n^2.3219280)
∴ α = 2.3219280
Question 4 |
An algorithm performs (log N)^(1/2) find operations, N insert operations, (log N)^(1/2) delete operations, and (log N)^(1/2) decrease-key operations on a set of data items with keys drawn from a linearly ordered set. For a delete operation, a pointer is provided to the record that must be deleted. For the decrease-key operation, a pointer is provided to the record that has its key decreased. Which one of the following data structures is the most suited for the algorithm to use, if the goal is to achieve the best total asymptotic complexity considering all the operations?
Unsorted array | |
Min-heap | |
Sorted array | |
Sorted doubly linked list |
With an unsorted array, insert takes O(1); delete and decrease-key also take O(1) because a pointer to the record is given (for delete, overwrite the record with the last element); only find takes O(N). The total cost is N·O(1) + (log N)^(1/2)·O(N) + (log N)^(1/2)·O(1) + (log N)^(1/2)·O(1) = O(N·(log N)^(1/2)). A min-heap already spends O(log N) per insert (O(N log N) in total), and the sorted array / sorted doubly linked list spend O(N) per insert (O(N²) in total). So the unsorted array gives the best total asymptotic complexity.
Question 5 |
1 | |
2 | |
3 | |
4 |
Substitute the final values of a and b into the expression:
⇒ a + 10b
⇒ 1 + 10×0
⇒ 1 + 0
⇒ 1
Question 6 |
D = 2
for i = 1 to n do
   for j = i to n do
      for k = j + 1 to n do
         D = D * 3
Half of the product of the 3 consecutive integers. | |
One-third of the product of the 3 consecutive integers. | |
One-sixth of the product of the 3 consecutive integers. | |
None of the above. |
for i = 1 to n do
for j = i to n do
for k = j + 1 to n do
D = D * 3;
The statement D = D * 3 executes once for every triple (i, j, k) with 1 ≤ i ≤ j < k ≤ n. The number of such triples is Σ_{k=2..n} k(k−1)/2 = (n−1)n(n+1)/6, i.e., one-sixth of the product of 3 consecutive integers.
You can also verify this for smaller values of n, as in the sketch below.
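A small verification sketch that counts the executions of D = D * 3 for small n and compares the count with (n−1)n(n+1)/6:
#include <stdio.h>

/* Count how many times the innermost statement executes and compare the
   count with (n-1)*n*(n+1)/6. */
int main(void) {
    for (int n = 1; n <= 8; n++) {
        long count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = i; j <= n; j++)
                for (int k = j + 1; k <= n; k++)
                    count++;             /* one execution of D = D*3 */
        printf("n = %d: count = %ld, (n-1)n(n+1)/6 = %ld\n",
               n, count, (long)(n - 1) * n * (n + 1) / 6);
    }
    return 0;
}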
Question 7 |
T(n) = 2T(n/2) + log n
Θ(n) | |
Θ(n log n) | |
Θ(n²) | |
Θ(log n) |
Apply the (extended) Master theorem with a = 2, b = 2, k = 0, p = 1.
Since a > b^k, Case 1 applies:
T(n) = Θ(n^(log_b a)) = Θ(n^(log₂2)) = Θ(n)
Question 8 |
Suppose you want to move from 0 to 100 on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a pre-specified pair of integers i, j with i < j. Given a shortcut i, j if you are at position i on the number line, you may directly move to j. Suppose T(k) denotes the smallest number of steps needed to move from k to 100. Suppose further that there is at most 1 shortcut involving any number, and in particular from 9 there is a shortcut to 15. Let y and z be such that T(9) = 1 + min(T(y),T(z)). Then the value of the product yz is _____.
150 | |
151 | |
152 | |
153 |
Now, it is given that
T(9) = 1 + min(T(y),T(z))
where,
T(y) = steps from y to 100
T(z) = steps from z to 100
where y and z are two possible values that can be reached from 9.
One position reachable from 9 is 10 (a unit step to the right). The other is 15, via the shortcut from 9 given in the question.
∴ The values of y and z are 10 and 15.
So, y × z = 10 × 15 = 150. A small dynamic-programming check is sketched below.
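A dynamic-programming sketch of this reasoning (the target 100 and the single 9 → 15 shortcut come from the question; everything else is illustrative):
#include <stdio.h>

#define TARGET 100

/* T[k] is the least number of steps from k to 100, with one shortcut 9 -> 15. */
int main(void) {
    int T[TARGET + 1];
    int shortcut[TARGET + 1];
    for (int k = 0; k <= TARGET; k++) shortcut[k] = -1;
    shortcut[9] = 15;                     /* the only shortcut in the question */

    T[TARGET] = 0;
    for (int k = TARGET - 1; k >= 0; k--) {
        T[k] = 1 + T[k + 1];              /* move right by one unit            */
        if (shortcut[k] != -1 && 1 + T[shortcut[k]] < T[k])
            T[k] = 1 + T[shortcut[k]];    /* or take the shortcut              */
    }
    /* T(9) = 1 + min(T(10), T(15)), so y = 10, z = 15 and y*z = 150 */
    printf("T(9) = %d, T(10) = %d, T(15) = %d\n", T[9], T[10], T[15]);
    return 0;
}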
Question 9 |
MultiDequeue(Q) {
   m = k
   while (Q is not empty and m > 0) {
      Dequeue(Q)
      m = m - 1
   }
}
What is the worst case time complexity of a sequence of n MultiDequeue() operations on an initially empty queue? (GATE CS 2013)
Θ(n) | |
Θ(n + k) | |
Θ(nk) | |
Θ(n²) |
Question 10 |
A(n) = Ω (W(n)) | |
A(n) = Θ (W(n)) | |
A(n) = O (W(n)) | |
A(n) = o (W(n)) |
A(n) (the average case) is upper bounded by W(n) (the worst case), but the bound is not strict, since the two can be of the same order — e.g. in bubble sort and in merge sort the average and worst cases match asymptotically.
A(n) = O(W(n))
Note: Option A is wrong because A(n) need not be Ω(W(n)); in quick sort, for example, the average case is strictly smaller than the worst case.
Question 11 |
12 | |
10 | |
6 | |
5 |
Let n = 10^k records. Substitute into 10·n·log₁₀n ≤ 0.0001·n²:
10·(10^k)·log₁₀(10^k) ≤ 0.0001·(10^k)²
10^(k+1)·k ≤ 10^(2k−4)
k ≤ 10^(2k−k−1−4)
k ≤ 10^(k−5)
The smallest value of k that satisfies this is k = 6 (6 ≤ 10¹), whereas k = 5 fails (5 ≤ 10⁰ = 1 is false).
Question 12 |

θ(n) | |
θ(n log n) | |
θ(n²) | |
θ(n² log n) |
The recurrence relation is T(n) = T(n/3) + cn.
Here a = 1, b = 3, k = 1, p = 0.
Since a < b^k and p ≥ 0, Case 3 of the (extended) Master theorem applies.
Therefore T(n) = θ(n^k·log^p n) = θ(n¹·log⁰n) = θ(n)
Question 13 |
f(n) = O(g(n)); g(n) = O(h(n))
| |
f(n) = Ω(g(n)); g(n) = O(h(n))
| |
g(n) = O(f(n)); h(n) = O(f(n)) | |
h(n) = O(f(n)); g(n) = Ω(f(n))
|
1. Check whether each function is polynomial or exponential.
2. Group them into polynomial functions and exponential functions; exponential functions always dominate polynomial ones.
Among the three given functions, g(n) grows fastest and f(n) grows faster than h(n), so the order of growth is h(n) < f(n) < g(n).
Note: When comparing growth rates, evaluate the functions at a large value of n.
Question 14 |
θ(n)
| |
θ(logn) | |
θ(log*n) | |
θ(1) |
1. The first occurrence of the element can be found in O(log n) time using divide and conquer (binary search); call its index i.
2. The last occurrence can be found in O(log n) time the same way; call its index j.
3. The number of occurrences of the element (the count) is then j − i + 1.
Overall time complexity = log n + log n + 1 = O(log n)
Program:
/* If x is present in arr[low...high] then returns the index of the first
   occurrence of x, otherwise returns -1 */
int binarySearch(int arr[], int low, int high, int x)
{
    if (high >= low) {
        int mid = low + (high - low) / 2;   /* avoids overflow of (low + high)/2 */
        /* arr[mid] is the first occurrence of x if either:
           (i)  mid == 0 and arr[mid] == x, or
           (ii) arr[mid-1] < x and arr[mid] == x */
        if ((mid == 0 || x > arr[mid - 1]) && (arr[mid] == x))
            return mid;
        else if (x > arr[mid])
            return binarySearch(arr, mid + 1, high, x);
        else
            return binarySearch(arr, low, mid - 1, x);
    }
    return -1;
}
Note: The code finds the first occurrence of the element; since the array is sorted, checking whether the element n/2 positions further on is the same also answers the "appears more than n/2 times" variant. Either way the work is O(log n). A symmetric sketch for the last occurrence follows.
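A possible companion sketch (not in the original) for the last occurrence, mirroring the boundary test of the function above; the count of occurrences is then last − first + 1:
/* Returns the index of the last occurrence of x in arr[low...high],
   or -1 if x is absent; n is the total length of the array. */
int binarySearchLast(int arr[], int low, int high, int x, int n)
{
    if (high >= low) {
        int mid = low + (high - low) / 2;
        /* arr[mid] is the last occurrence if it is the final element,
           or the next element is strictly greater than x */
        if ((mid == n - 1 || x < arr[mid + 1]) && arr[mid] == x)
            return mid;
        else if (x < arr[mid])
            return binarySearchLast(arr, low, mid - 1, x, n);
        else
            return binarySearchLast(arr, mid + 1, high, x, n);
    }
    return -1;
}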
Question 15 |
θ(n) and θ(n) | |
θ(2^n) and θ(n) | |
θ(n) and θ(2^n) | |
θ(2^n) and θ(2^n) |
T(n) = T(n-1) + T(n-2) (the multiplications by 2 and 3 are constant-time operations, so they do not affect the complexity)
T(0) = T(1) = 1
The recurrence is similar to the Fibonacci series, so the time complexity is O(2^n). In more detail:
T(n) = T(n-1) + T(n-2) + c
T(0) = 1
T(n-1) ≈ T(n-2) (in reality T(n-1) > T(n-2)),
so for an upper bound,
T(n) ≤ 2T(n-1) + c
     ≤ 4T(n-2) + 3c
     ≤ 8T(n-3) + 7c
     ⋮
     ≤ 2^k T(n-k) + (2^k − 1)c
Taking n − k = 0, i.e. k = n:
T(n) ≤ 2^n T(0) + (2^n − 1)c = 2^n + (2^n − 1)c
⇒ T(n) = O(2^n)
The time required for f2 is O(n) (because it consists of only one loop).
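The exact code of f1 and f2 is not reproduced here, but assuming the underlying recurrence has the shape f(n) = 2f(n−1) + 3f(n−2) with f(0) = f(1) = 1, the sketch below shows how the same value can be computed bottom-up in Θ(n) time and Θ(1) extra space, in contrast to the Θ(2^n) recursive version:
#include <stdio.h>

/* Bottom-up evaluation of f(n) = 2*f(n-1) + 3*f(n-2), f(0) = f(1) = 1
   (an assumed recurrence, consistent with the explanation above). */
long f_iterative(int n) {
    if (n <= 1) return 1;
    long prev2 = 1, prev1 = 1, cur = 1;
    for (int i = 2; i <= n; i++) {   /* one pass: Θ(n) work */
        cur = 2 * prev1 + 3 * prev2;
        prev2 = prev1;
        prev1 = cur;
    }
    return cur;
}

int main(void) {
    printf("f(10) = %ld\n", f_iterative(10));
    return 0;
}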
Question 16 |
A, D, C, E, B | |
D, A, C, E, B | |
A, C, D, E, B | |
A, C, D, B, E |
n^(1/3) < n log⁹n < n^(7/4) < 1.0000001^n < e^n
Question 17 |
Question 18 |
Merge sort
| |
Bubble Sort | |
Quick Sort
| |
Selection Sort |
Merge sort → O(n log n)
Bubble sort → O(n²)
Quick sort → O(n²)
Selection sort → O(n²)
Question 19 |
int j, n;
j = 1;
while (j <= n)
   j = j*2;
The number of comparisons made in the execution of the loop for any n > 0 is (the base of log is 2 in all options):
⌈log₂n⌉ + 1 | |
n | |
⌈log₂n⌉ | |
⌊log₂n⌋ + 1 |
Take n = 6 and trace the loop:
1 ≤ 6 (✔️)
2 ≤ 6 (✔️)
4 ≤ 6 (✔️)
8 ≤ 6 (❌)
So 4 comparisons are made.
Option A: ⌈log₂n⌉ + 1 = ⌈log₂6⌉ + 1 = 3 + 1 = 4 (✔)
Option B: n = 6 (❌)
Option C: ⌈log₂n⌉ = ⌈log₂6⌉ = 3 (❌)
Option D: ⌊log₂n⌋ + 1 = ⌊log₂6⌋ + 1 = 2 + 1 = 3 (❌)
Question 20 |
Dijkstra’s algorithm starting from S. | |
Warshall’s algorithm.
| |
Performing a DFS starting from S. | |
Performing a BFS starting from S.
|
→ Warshall's algorithm takes O(n³) time; it is basically used to find all-pairs shortest paths, which is more than what is asked.
→ DFS cannot be used for finding shortest paths.
→ BFS takes O(V + E) time and, in an unweighted graph, finds the shortest paths from S, so it is the most efficient choice here. A sketch is given below.
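A minimal BFS sketch (the graph is a made-up example): with an adjacency list the traversal is O(V + E); the adjacency matrix below is used only to keep the sketch short.
#include <stdio.h>

#define V 6   /* hypothetical small unweighted graph */

/* BFS from source S: in an unweighted graph, the first time a vertex is
   discovered its distance from S is already the shortest. */
int main(void) {
    int adj[V][V] = {0};
    int edges[][2] = {{0,1},{0,2},{1,3},{2,3},{3,4},{4,5}};
    for (int e = 0; e < 6; e++) {
        adj[edges[e][0]][edges[e][1]] = 1;
        adj[edges[e][1]][edges[e][0]] = 1;
    }
    int dist[V], queue[V], head = 0, tail = 0, s = 0;
    for (int i = 0; i < V; i++) dist[i] = -1;
    dist[s] = 0;
    queue[tail++] = s;
    while (head < tail) {
        int u = queue[head++];
        for (int v = 0; v < V; v++)
            if (adj[u][v] && dist[v] == -1) {
                dist[v] = dist[u] + 1;   /* shortest distance via u */
                queue[tail++] = v;
            }
    }
    for (int i = 0; i < V; i++) printf("dist(%d -> %d) = %d\n", s, i, dist[i]);
    return 0;
}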
Question 21 |
int gcd(n, m)
{
   if (n % m == 0)
      return m;
   n = n % m;
   return gcd(m, n);
}
θ(log₂n) | |
Ω(n) | |
θ(log₂log₂n) | |
θ(√n) |
In each call the pair (n, m) is replaced by (m, n mod m), and n mod m < n/2, so the arguments shrink geometrically and the number of recursive calls is O(log n). Hence the time complexity is θ(log₂n). The instrumented sketch below counts the calls on worst-case (Fibonacci) inputs.
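An instrumented sketch (purely illustrative) of the gcd function from the question, counting the recursive calls on consecutive Fibonacci numbers — the classic worst case:
#include <stdio.h>

static int calls = 0;

/* Same logic as the question's gcd, plus a call counter. */
int gcd(int n, int m) {
    calls++;
    if (n % m == 0) return m;
    return gcd(m, n % m);
}

int main(void) {
    int a = 1, b = 1;                 /* consecutive Fibonacci numbers */
    for (int i = 0; i < 40; i += 10) {
        /* advance (a, b) by 10 Fibonacci steps */
        for (int s = 0; s < 10; s++) { int t = a + b; a = b; b = t; }
        calls = 0;
        gcd(b, a);
        printf("gcd(%d, %d): %d recursive calls\n", b, a, calls);
    }
    return 0;
}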
Question 22 |
int DoSomething(int n)
{
   if (n <= 2)
      return 1;
   else
      return (DoSomething(floor(sqrt(n))) + n);
}
Ω(n²) | |
θ(n log₂n) | |
θ(log₂n) | |
θ(log₂log₂n) |
The recurrence is T(n) = T(√n) + O(1). Substituting n = 2^m and S(m) = T(2^m), this becomes S(m) = S(m/2) + O(1).
Now apply the Master theorem:
a = 1, b = 2, k = 0, p = 0
a = b^k, so Case 2 applies, and p > -1, so
S(m) = θ(log m), i.e. T(n) = θ(log log n) = θ(log₂log₂n).
Question 23 |
At least 2n - c comparisons, for some constant c, are needed. | |
At most 1.5n - 2 comparisons are needed. | |
At least n log₂n comparisons are needed. | |
None of the above. |
→ First compare one pair of elements (1 comparison) and initialise min and max from them. Then take the remaining elements in pairs: compare the two elements of the pair with each other, then compare the smaller with min and the larger with max, updating them. So each of the remaining (n-2)/2 pairs costs 3 comparisons.
So in total, the number of comparisons required
= 3(n-2)/2 + 1
= 3n/2 - 3 + 1
= 3n/2 - 2
= 1.5n - 2
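A sketch of the pairwise min-max procedure described above, counting comparisons (shown for even n):
#include <stdio.h>

/* Process the array two elements at a time so that each pair after the first
   costs 3 comparisons, giving 3n/2 - 2 comparisons overall for even n. */
void minMax(int a[], int n, int *mn, int *mx, int *comparisons) {
    *comparisons = 0;
    if (a[0] < a[1]) { *mn = a[0]; *mx = a[1]; }    /* 1 comparison            */
    else             { *mn = a[1]; *mx = a[0]; }
    (*comparisons)++;
    for (int i = 2; i + 1 < n; i += 2) {
        int lo = a[i], hi = a[i + 1];
        (*comparisons)++;
        if (lo > hi) { lo = a[i + 1]; hi = a[i]; }  /* compare the pair        */
        (*comparisons)++;
        if (lo < *mn) *mn = lo;                     /* smaller vs current min  */
        (*comparisons)++;
        if (hi > *mx) *mx = hi;                     /* larger  vs current max  */
    }
}

int main(void) {
    int a[] = {7, 3, 9, 1, 4, 8, 2, 6};
    int mn, mx, cmp;
    minMax(a, 8, &mn, &mx, &cmp);
    printf("min = %d, max = %d, comparisons = %d (1.5*8 - 2 = 10)\n", mn, mx, cmp);
    return 0;
}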
Question 24 |
int IsPrime(n)
{
   int i, n;
   for (i = 2; i <= sqrt(n); i++)
      if (n % i == 0) {
         printf("Not Prime\n");
         return 0;
      }
   return 1;
}
T(n) = O(√n) and T(n) = Ω(√n) | |
T(n) = O(√n) and T(n) = Ω(1) | |
T(n) = O(n) and T(n) = Ω(√n) | |
None of the above |
Question 25 |
for (i = n, j = 0; i > 0; i /= 2, j += i);
val(j) = θ(log n) | |
val(j) = θ(√n) | |
val(j) = θ(n) | |
val(j) = θ(n log n) |
Question 26 |
Solves it in linear time using a left to right pass of the array | |
Solves it in linear time using a right to left pass of the array | |
Solves it using divide and conquer in time θ(n log n) | |
Solves it in time θ(n²) |
Question 27 |
θ(n) | |
θ(n log n) | |
θ(n²) | |
θ(n³) |
T(n) = 2T(n/2) + O(n)
In Master theorem form: a = 2, b = 2, k = 1, p = 0. Since a = b^k, Case 2 applies, and p > -1, so
T(n) = θ(n^k·log^(p+1) n) = θ(n log n)
Question 28 |
Takes O(3^n) and Ω(2^n) time if hashing is permitted | |
Takes O(n³) and Ω(n^2.5) time in the key comparison model | |
Takes θ(n) time and space | |
Takes O(√n) time only if the sum of the 2n elements is an even number |
1. Since there are n elements in total, the maximum sum is n for both arrays.
2. The difference between the two prefix sums therefore varies from -n to n, so there are only 2n + 1 possible values of the difference.
3. If the difference between the prefix sums of the two arrays is the same at two points, then the subarrays between these two points have the same sum.
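A sketch of this pigeonhole argument, assuming two 0/1 arrays of length n (consistent with the maximum sum of n mentioned in point 1); the arrays A and B below are only examples:
#include <stdio.h>

#define N 6   /* hypothetical example size */

/* d = prefixA - prefixB stays in [-n, n], so among the n+1 prefixes two must
   share a value of d, and the subarrays between those indices have equal
   sums. Θ(n) time and space. seen[] records the first index of each d value. */
int main(void) {
    int A[N] = {1, 0, 1, 1, 0, 1};
    int B[N] = {0, 1, 1, 0, 1, 1};
    int seen[2 * N + 1];
    for (int v = 0; v <= 2 * N; v++) seen[v] = -2;   /* -2 = not seen yet  */
    int d = 0;
    seen[0 + N] = -1;                                /* empty prefix: d = 0 */
    for (int i = 0; i < N; i++) {
        d += A[i] - B[i];
        if (seen[d + N] != -2) {
            printf("Equal-sum subarrays over indices %d..%d\n",
                   seen[d + N] + 1, i);
            return 0;
        }
        seen[d + N] = i;
    }
    printf("No equal-sum pair found\n");
    return 0;
}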
Question 29 |
T(n) = θ(log log n) | |
T(n) = θ(log n) | |
T(n) = θ(√n) | |
T(n) = θ(n) |
Let n = 2^m.
T(2^m) = 2T(2^(m/2)) + 1 --------(1)
Let T(2^m) = S(m).
Then equation (1) becomes
S(m) = 2S(m/2) + 1
Now apply the Master theorem with a = 2, b = 2, k = 0.
Since a > b^k, Case 1 applies:
S(m) = θ(m^(log₂2)) = θ(m)
But m = log n,
so finally the answer is θ(log n).
Question 30 |
O(n) | |
O(n log n) | |
O(n^(3/2)) | |
O(n³) |
For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights". Informally, the transitive closure gives you the set of all places you can get to from any starting place.
To find the transitive closure of the given relation, represent the relation as a directed graph: if x R y, draw a directed edge from x to y. Then apply Warshall's algorithm (the boolean form of Floyd–Warshall) to this graph to compute its transitive closure.
The time complexity of this algorithm is O(V³), where V is the number of vertices. Here V is the number of elements in the set, i.e. n, so the time complexity for the given question is O(n³). A sketch is given below.
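A minimal sketch of Warshall's algorithm on a small hypothetical relation, showing the three nested loops that give the O(n³) bound:
#include <stdio.h>

#define N 4   /* hypothetical small relation on 4 elements */

/* reach[i][j] becomes 1 if j is reachable from i (transitive closure). */
int main(void) {
    int reach[N][N] = {
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {0, 0, 0, 1},
        {0, 0, 0, 0},
    };
    for (int k = 0; k < N; k++)            /* intermediate element */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (reach[i][k] && reach[k][j])
                    reach[i][j] = 1;
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) printf("%d ", reach[i][j]);
        printf("\n");
    }
    return 0;
}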
Question 31 |
O(n log log n) | |
θ(n log n) | |
Ω(n log n) | |
Ω(n^(3/2)) |
P = log n
Q = n/log n
∴ Putting values we get,
O(n log log n)
Question 32 |
T(n) = O(n²) | |
T(n) = θ(n log n) | |
T(n) = Ω(n²) | |
T(n) = O(n log n) |
Here a = 2, b = 2, k = 1, p = 0, so a = b^k and Case 2 of the Master theorem gives T(n) = θ(n^k·log^(p+1) n) = O(n log n).
Question 33 |
n | |
n2 | |
nlogn | |
nlog2n |
→ The tightest upper bound is expressed with big-O notation, and the tightest lower bound with big-Ω notation.
Question 34 |
best if A is in row-major, and B is in column major order
| |
best if both are in row-major order
| |
best if both are in column-major order
| |
independent of the storage scheme
|
The number of scalar multiplications performed by the standard matrix-multiplication algorithm does not depend on how the arrays are stored, so asymptotically the running time is independent of the storage scheme. But if the question had asked about the best implementation in practice (cache behaviour) rather than the algorithm, option (A) would be correct.
Question 35 |
Ω(n²) | |
Ω(n log n) and O(n²) | |
θ(n) | |
o(n) |
If the string contains all 1s, then it takes O(n) time, because counter++ can execute at most n times in total.
If it contains half 0s and half 1s, it also takes O(n) time.
Question 36 |
O(n) | |
O(n log n) | |
O(n²) | |
O(2^n) |
return (1)
else
return (recursive(n-1) + recursive(n-1))
For n > 0:
T(n) = T(n-1) + T(n-1) = 2T(n-1)
T(2) = 2T(1)
T(3) = 2T(2) = 2·2T(1) = 2²T(1)
T(4) = 2³T(1)
⋮
T(n) = 2^(n-1)·T(1) = O(2^n)
Question 37 |
A program takes as input a balanced binary search tree with n leaf nodes and computes the value of a function g(x) for each node x. If the cost of computing g(x) is min(number of leaf-nodes in left-subtree of x, number of leaf-nodes in right-subtree of x) then the worst-case time complexity of the program is
Θ(n) | |
Θ(n log n) | |
Θ(n²) | |
Θ(n² log n) |
⇒ g(x) = min(number of leaf nodes in the left subtree of x, number of leaf nodes in the right subtree of x)
→ Total number of leaves = n. In a balanced tree, the root has about n/2 leaves on each side, so computing g at the root costs n/2, and each subtree poses the same problem on n/2 leaves:
C(n) = 2C(n/2) + n/2 = Θ(n log n)
So the worst-case time complexity of the program is Θ(n log n).
Question 38 |
1. (n + k)^m = Θ(n^m), where k and m are constants
2. 2^(n+1) = O(2^n)
3. 2^(2n+1) = O(2^n)
Which of these claims are correct?
I and II
| |
I and III | |
II and III
| |
I, II, and III |
Claim 1: (n + k)^m is a polynomial of degree m in n, so by its leading term it is Θ(n^m). True.
Claim 2: 2^(n+1) = 2·2^n = O(2^n). True.
Claim 3: 2^(2n+1) = 2·2^n·2^n, and 2^n·2^n cannot be written as O(2^n). So this claim is false.
Question 39 |
remain Θ(n²) | |
become Θ(n (log n)²) | |
become Θ(n log n) | |
become Θ(n) |
If, instead of linear search, binary search is used, then locating the correct position of an element takes Θ(log n) comparisons in the worst case, but up to n elements still have to be shifted to place it in that position.
i.e., n × (log n + n)
= Θ(n log n + n²)
= Θ(n²)
So the worst case remains the same. A sketch of this "binary insertion sort" is given below.
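A sketch of binary insertion sort illustrating the point: the binary search saves comparisons, but the shifting step keeps the worst case at Θ(n²):
#include <stdio.h>

/* Binary search finds the insertion point in O(log i) comparisons, but
   shifting elements to make room is still O(i), so the overall worst case
   stays Θ(n^2). */
void binaryInsertionSort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int lo = 0, hi = i;              /* binary search: O(log i) */
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] <= key) lo = mid + 1;
            else hi = mid;
        }
        for (int j = i; j > lo; j--)     /* shifting: up to i moves */
            a[j] = a[j - 1];
        a[lo] = key;
    }
}

int main(void) {
    int a[] = {5, 2, 9, 1, 7};
    binaryInsertionSort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}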
Question 40 |
O(n) but not O(n^0.5) | |
O(n^0.5) but not O((log n)^k) for any constant k > 0 | |
O((log n)^k) for some constant k > 0, but not O((log log n)^m) for any constant m > 0 | |
O((log log n)^k) for some constant k > 0.5, but not O((log log n)^0.5) |
The time complexity is at least Ω(log n), because a binary search alone already takes O(log n) time ... (II)
From (I), options A and B are false.
From (II), option D is false.
Hence option C is correct.
Question 41 |
O(n) | |
O(log n) | |
O(log log n) | |
O(1) |
Let n = 2^m.
So, T(2^m) = T(2^(m/2)) + 1
Substitute T(2^m) = S(m), so the equation becomes
S(m) = S(m/2) + 1
Applying the Master theorem (a = 1, b = 2, k = 0, p = 0; a = b^k, Case 2):
S(m) = O(log m)
T(n) = O(log log n)
Question 42 |
h(n) is O (f(n)) | |
h(n) is O (g(n)) | |
g(n) is not O (f(n)) | |
f(n) is O(g(n)) |
With n = 1024 = 2^10:
f(n) = 3·(2^10)^32 = 3·2^320
g(n) = 2^320
h(n) = 1024!
So the relation between the functions is:
f(n) and g(n) are of the same order, so f(n) = O(g(n)) and g(n) = O(f(n)). Hence option C is wrong.
h(n) = 1024!, which is of a higher order than both f(n) and g(n). So options A and B are wrong.
Question 43 |
n - 1 | |
n | |
n + 1 | |
None of the above |
Swap the 1st number with the nth, the 2nd with the (n−1)th, the 3rd with the (n−2)th, and so on. This reverses the array with only ⌊n/2⌋ interchanges, even in the worst case, which is less than every listed option — hence None of the above. A sketch is given below.
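A small sketch of the in-place reversal, counting the swaps:
#include <stdio.h>

/* Swap a[i] with a[n-1-i], moving from both ends toward the middle:
   only floor(n/2) swaps are needed. */
int main(void) {
    int a[] = {1, 2, 3, 4, 5, 6, 7};
    int n = 7, swaps = 0;
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
        swaps++;
    }
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("(%d swaps)\n", swaps);
    return 0;
}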
Question 44 |
log2 n | |
√n | |
n-1 | |
n |
a → a² → a⁴ → a⁸ → a¹⁶ (squaring at each step)
We require 4 multiplications to calculate a¹⁶ ..... (I)
→ Likewise, 3 multiplications are required to calculate a⁸ ..... (II)
(I) and (II) are consistent with option A, i.e. about log₂n multiplications.
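A sketch of exponentiation by squaring with a multiplication counter (the power function and its signature are illustrative, not from the question):
#include <stdio.h>

/* Each level squares the result of the half-size problem, so only about
   log2(n) multiplications are needed, e.g. a -> a^2 -> a^4 -> a^8 -> a^16
   uses 4 multiplications for n = 16. */
long power(long a, unsigned n, int *mults) {
    if (n == 0) return 1;
    if (n == 1) return a;
    long half = power(a, n / 2, mults);
    long result = half * half;            /* one squaring multiplication */
    (*mults)++;
    if (n % 2) {                           /* odd exponent: one extra     */
        result *= a;
        (*mults)++;
    }
    return result;
}

int main(void) {
    int mults = 0;
    printf("2^16 = %ld using %d multiplications\n", power(2, 16, &mults), mults);
    return 0;
}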
Question 45 |
M – W N – V O – U P - X | |
M – W N – U O – X P - V | |
M – V N – W O – X P - U | |
None of the above |
(N) Apply the Master theorem:
T(n) = θ(n) = O(n)
(O) Apply the Master theorem:
T(n) = θ(n log n) = O(n log n)
(P) Here we are adding the logarithms of the first n natural numbers. So,
T(n) = log 1 + log 2 + log 3 + ... + log n
     = log(1 × 2 × ... × n)
     = log(n!)
     = θ(n log n)
Question 46 |
Question 47 |
O(n) | |
O(log n) | |
![]() | |
None of the above |
Question 48 |
Total number of comparisons
= (comparisons if the element is at the 1st position) + (comparisons if it is at the 2nd position) + ... + (comparisons if it is at the nth position)
= 1 + 2 + 3 + ... + n
= n(n+1)/2
Since there are n possible positions, the average number of comparisons
= total comparisons / number of positions
= (n(n+1)/2) / n
= (n+1)/2
Question 49 |
C1 < C2 | |
C1 > C2 | |
C1 = C2 | |
we cannot say anything for arbitrary n. |
So, option (C) is correct.
Question 50 |
Question 51 |
O(n) | |
O(n2) | |
O(n3) | |
O(3n2) | |
O(1.5n2) | |
B, C, D and E |
⇒ Here n is added to itself n times (n itself does not change inside the loop), which is Θ(n²).
Hence (A) O(n) is wrong, and the remaining options (B), (C), (D), (E) are all correct, since each of them is a valid upper bound on n².
Question 52 |
f(n) = O(g(n)) and g(n) ≠ O(f(n)) | |
g(n) = O(f(n)) and f(n) ≠ O(g(n)) | |
f(n) ≠ O(g(n)) and g(n) ≠ O(f(n)) | |
f(n) = O(g(n)) and g(n) = O(f(n)) |
Cancel the common factor n log n from f(n) and g(n):
f(n) → n;  g(n) → (log n)⁹
n is asymptotically much larger than (log n)⁹.
So f(n) ≠ O(g(n)) and g(n) = O(f(n)).
Question 53 |
O(n2) | |
O(mn) | |
O(m+n) | |
O(m logn) | |
O(m2) | |
B, D and E |
→ where m is the number of edges and n is the number of vertices, n = O(m²)
→ O(m log n) is smaller than O(mn)