Time-Complexity

Question 1
A
Θ(log⁡log⁡n)
B
Θ(log⁡n)
C
Θ(√n)
D
Θ(n)
       Algorithms       Time-Complexity       GATE 2017(set-02)
Question 1 Explanation: 
T(n) = 2T(√n) + 1
(or)
T(n) = 2T(n^(1/2)) + 1
Here, assume n = 2^k:
T(2^k) = 2T((2^k)^(1/2)) + 1
T(2^k) = 2T(2^(k/2)) + 1
Assume T(2^k) = S(k):
S(k) = 2S(k/2) + 1
Apply Master's theorem, Case 1 (a = 2, b = 2, and a > b^0):
S(k) = θ(k^(log₂2))
S(k) = θ(k)
But S(k) = T(2^k), so
T(2^k) = θ(k)
and since n = 2^k,
k = log n
T(n) = θ(log n)
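The derivation can also be checked numerically. Below is a small C sketch (not part of the original solution) that counts the constant-time work done by T(n) = 2T(√n) + 1, assuming a base case of T(n) = 1 for n ≤ 2; the count grows linearly in log₂n, which is consistent with θ(log n).

#include <stdio.h>
#include <math.h>

/* Counts the O(1) work performed by T(n) = 2T(sqrt(n)) + 1,
   with the assumed base case T(n) = 1 for n <= 2. */
static long count_ops(double n) {
    if (n <= 2.0)
        return 1;
    return 2 * count_ops(sqrt(n)) + 1;
}

int main(void) {
    /* The count is about 2*log2(n) - 1, i.e. Theta(log n). */
    for (int e = 4; e <= 64; e *= 2)
        printf("n = 2^%-2d  ops = %ld\n", e, count_ops(ldexp(1.0, e)));
    return 0;
}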
Question 2

Consider the following C function.

int fun (int n)   {
         int i, j;
         for (i = 1; i <= n; i++)   {
                for (j = 1; j < n; j += i)   {
                        printf("%d %d",i,j);
                }
          }
}

Time complexity of fun in terms of Θ notation is

A
Θ(n√n)
B
Θ(n^2)
C
Θ(n log n)
D
Θ(n^2 log n)
       Algorithms       Time-Complexity       GATE 2017(set-02)
Question 2 Explanation: 
We can find the running time of such iterative programs by summing the work done by the loops.
int fun(int n)
{
    int i, j;
    for (i = 1; i <= n; i++)        /* outer loop runs n times */
    {
        for (j = 1; j < n; j += i)  /* for a given i, the inner loop runs about n/i times */
        {
            printf("%d %d", i, j);  /* O(1) work per iteration */
        }
    }
}
The total work is therefore n/1 + n/2 + n/3 + ... + n/n = n(1 + 1/2 + ... + 1/n) = n·H(n) = Θ(n log n).
So the overall complexity of the program is Θ(n log n).
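As a quick empirical check (not part of the original answer), the following C program counts the exact number of inner-loop iterations and compares it with n·ln n:

#include <stdio.h>
#include <math.h>

/* Counts how many times the printf in fun(n) would execute. */
static long count_iterations(int n) {
    long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j < n; j += i)
            count++;                 /* stands in for the printf */
    return count;
}

int main(void) {
    /* The count grows like n*ln(n), i.e. Theta(n log n). */
    for (int n = 1000; n <= 100000; n *= 10)
        printf("n = %6d  iterations = %8ld  n*ln(n) = %8.0f\n",
               n, count_iterations(n), n * log(n));
    return 0;
}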
Question 3

The given diagram shows the flowchart for a recursive function A(n). Assume that all statements, except for the recursive calls, have O(1) time complexity. If the worst case time complexity of this function is O(nα), then the least possible value (accurate upto two decimal  positions) of α is _________.

 
A
2.3219280
B
2.3219281
C
2.3219282
D
2.3219283
       Algorithms       Time-Complexity       GATE 2016 set-2
Question 3 Explanation: 
For this type of problem we have to consider the worst case, which means taking the maximum possible number of recursive calls.
According to the flowchart, in the worst case there are a total of 5 recursive function calls.

The remaining statements and return paths execute in only a constant amount of time.
So the total number of recursive calls is 5, each on input size n/2.
The Recurrence will be
A(n) = 5A(n/2) + O(1)
Apply Master’s theorem,
a = 5, b = 2, k = 0, p = 0
a > b^k, so Case 1 applies:
A(n) = θ(n^(log_b a)) = θ(n^(log₂5)) = θ(n^2.3219280)
∴α = 2.3219280
Question 4

An algorithm performs (log N)1/2 find operations, N insert operations, (log N)1/2 delete operations, and (log N)1/2 decrease-key operations on a set of data items with keys drawn from a linearly ordered set. For a delete operation, a pointer is provided to the record that must be deleted. For the decrease-key operation, a pointer is provided to the record that has its key decreased. Which one of the following data structures is the most suited for the algorithm to use, if the goal is to achieve the best total asymptotic complexity considering all the operations?

A
Unsorted array
B
Min-heap
C
Sorted array
D
Sorted doubly linked list
       Algorithms       Time-Complexity       GATE 2015 (Set-01)
Question 4 Explanation: 
The N insert operations dominate the operation counts, so insertion must be cheap. With an unsorted array, insert takes O(1), and delete and decrease-key also take O(1) since a pointer to the record is given; only the (log N)^(1/2) find operations cost O(N) each, for a total of O(N·(log N)^(1/2)). A min-heap spends O(log N) per insert, i.e. Θ(N log N) overall, which is asymptotically larger, and the sorted structures make every insert O(N). Hence an unsorted array gives the best total asymptotic complexity.
Question 5
Consider a rooted n node binary tree represented using pointers. The best upper bound on the time required to determine the number of subtrees having exactly 4 nodes is O(n^a log^b n). Then the value of a + 10b is ________.
A
1
B
2
C
3
D
4
       Data-Structures       Time-Complexity       GATE 2014(Set-01)
Question 5 Explanation: 
A single (post-order) traversal of the tree computes the size of every subtree in O(n) time; checking whether a subtree size equals 4 is constant extra work per node, and the number of nodes is O(n). So the whole computation takes O(n) = O(n^1 log^0 n), which gives a = 1 and b = 0.
Substitute into a and b values in equation.
⇒ a+10b
⇒ a*1 + 10*0
⇒ 1+0
⇒ 1
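A minimal C sketch of that idea (the node structure, helper names and the sample tree are illustrative, not part of the question):

#include <stdio.h>
#include <stdlib.h>

/* One post-order traversal computes every subtree size in O(n);
   counting the sizes equal to 4 adds only constant work per node. */
struct node { struct node *left, *right; };

static int count4;                      /* subtrees with exactly 4 nodes */

static int subtree_size(struct node *t) {
    if (t == NULL)
        return 0;
    int size = 1 + subtree_size(t->left) + subtree_size(t->right);
    if (size == 4)
        count4++;
    return size;
}

static struct node *new_node(void) {
    return calloc(1, sizeof(struct node));
}

int main(void) {
    /* Hand-built 5-node tree whose left subtree has exactly 4 nodes. */
    struct node *root = new_node();
    root->left = new_node();
    root->left->left = new_node();
    root->left->right = new_node();
    root->left->left->left = new_node();
    subtree_size(root);
    printf("subtrees with exactly 4 nodes: %d\n", count4);   /* prints 1 */
    return 0;
}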
Question 6
A
Half of the product of the 3 consecutive integers.
B
One-third of the product of the 3 consecutive integers.
C
One-sixth of the product of the 3 consecutive integers.
D
None of the above.
       Data-Structures       Time-Complexity       GATE 2014(Set-01)
Question 6 Explanation: 
D = 2
for i = 1 to n do
for j = i to n do
for k = j + 1 to n do
D = D * 3;

The statement D = D * 3 is executed once for every triple (i, j, k) with 1 ≤ i ≤ j < k ≤ n, and the number of such triples is (n-1)n(n+1)/6, i.e. one-sixth of the product of 3 consecutive integers. You can also verify this for smaller values of n, as in the sketch below.
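A small brute-force check (not part of the original solution) that counts the multiplications and compares the count with (n-1)n(n+1)/6:

#include <stdio.h>

int main(void) {
    for (int n = 1; n <= 10; n++) {
        long count = 0;                      /* times D = D * 3 runs */
        for (int i = 1; i <= n; i++)
            for (int j = i; j <= n; j++)
                for (int k = j + 1; k <= n; k++)
                    count++;
        printf("n = %2d  count = %4ld  (n-1)n(n+1)/6 = %4ld\n",
               n, count, (long)(n - 1) * n * (n + 1) / 6);
    }
    return 0;
}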
Question 7
A
Θ(n)
B
Θ(nlog n)
C
Θ(n^2)
D
Θ(log n)
       Algorithms       Time-Complexity       Gate 2014 Set -02
Question 7 Explanation: 
T(n) = 2T(n/2) + log n
Apply Master's theorem:
a = 2, b = 2, k = 0, p = 1
According to Master's theorem, we can apply Case 1:
If a > b^k, then T(n) = θ(n^(log_b a)) = θ(n^(log₂2)) = θ(n)
Question 8

Suppose you want to move from 0 to 100 on the number line. In each step, you either move right by a unit distance or you take a shortcut. A shortcut is simply a pre-specified pair of integers i, j with i < j.  Given a shortcut i, j if you are at position i on the number line, you may directly move to j. Suppose T(k) denotes the smallest number of steps needed to move from k to 100. Suppose further that there is at most 1 shortcut involving any number, and in particular from 9 there is a shortcut to 15. Let y and z be such that T(9) = 1 + min(T(y),T(z)).  Then the value of the product yz is _____.

A
150
B
151
C
152
D
153
       Theory-of-Computation       Time-Complexity       Gate 2014 Set -03
Question 8 Explanation: 
T(k) is the smallest no. of steps needed to move from k to 100.
Now, it is given that
T(9) = 1 + min(T(y),T(z))
where,
T(y) = steps from y to 100
T(z) = steps from z to 100
where y and z are two possible values that can be reached from 9.
One number that can be reached from 9 is 10. Another no. is 15, the shortcut path from 9, as given in the question.
∴ The value of 'y' and 'z' are 10 and 15.
So, y × z = 10 × 15 = 150
Question 9
A
Θ(n)
B
Θ(n + k)
C
Θ(nk)
D
Θ(n^2)
       Algorithms       Time-Complexity       Gate 2013
Question 9 Explanation: 
Initially the queue is empty and we have to perform n operations.
i. One option is to perform all Enqueue operations, i.e. n Enqueue operations. Complexity will be Θ(n).
or
ii. We can perform a mix of Enqueue and Dequeue operations. It can be Enqueue for the first n/2 operations and then Dequeue for the next n/2, or Enqueue and Dequeue alternately, or any permutation of Enqueues and Dequeues totaling n operations. Complexity will be Θ(n).
or
iii. We can perform Enqueues and MultiDequeues. A general pattern could be: Enqueue, Enqueue, ... (k times), MultiDequeue, Enqueue, Enqueue, ... (k times), MultiDequeue, ... up to a total of n operations, i.e. k items enqueued, then k items deleted, then k items enqueued, then k items deleted, and so on.
The number of times this (k Enqueues + 1 MultiDequeue) cycle is performed = n/(k+1).
So the complexity will be (cost of k Enqueues + cost of 1 MultiDequeue) × n/(k+1),
which is (2k) × n/(k+1) = Θ(n).
or
iv. We can just perform n MultiDequeues (or n Dequeues for that matter):
each time the while condition is false (empty queue), so the condition is checked just once for each of the n operations. So Θ(n).
Question 10
Let W(n) and A(n) denote respectively, the worst case and average case running time of an algorithm executed on an input of size n.  Which of the following is ALWAYS TRUE?
A
A(n) = Ω (W(n))
B
A(n) = Θ (W(n))
C
A(n) = O (W(n))
D
A(n) = o (W(n))
       Algorithms       Time-Complexity       Gate 2012
Question 10 Explanation: 
The average case time can be less than or equal to the worst case time. So A(n) is upper bounded by W(n), and the bound need not be strict, since the two can be of the same order (e.g. bubble sort and merge sort, where the average and worst cases have the same order).
A(n) = O(W(n))
Note: Option A is wrong because A(n) need not be Ω(W(n)); for quick sort, for example, A(n) = Θ(n log n) while W(n) = Θ(n^2).
Question 11
Two alternative packages A and B are available for processing a database having 10^k records. Package A requires 0.0001n^2 time units and package B requires 10n log₁₀n time units to process n records. What is the smallest value of k for which package B will be preferred over A?
A
12
B
10
C
6
D
5
       Algorithms        Time-Complexity       2010
Question 11 Explanation: 
Package B is preferred over A when its cost does not exceed A's, i.e. when 10n log₁₀n ≤ 0.0001n^2 (note that n^2 is asymptotically larger than n log n, so this holds for all sufficiently large n).
Let n = 10^k records and substitute into 10n log₁₀n ≤ 0.0001n^2:
10(10^k) log₁₀(10^k) ≤ 0.0001(10^k)^2
10^(k+1) · k ≤ 0.0001 × 10^2k
k ≤ 10^(2k-k-1-4)
k ≤ 10^(k-5)
For k = 5 this gives 5 ≤ 10^0 = 1, which fails; for k = 6 it gives 6 ≤ 10, which holds. So the smallest suitable value of k is 6.
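A quick numeric check of the two cost formulas at n = 10^k (illustrative only):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (int k = 4; k <= 7; k++) {
        double n = pow(10, k);
        double a = 0.0001 * n * n;       /* package A: 0.0001 n^2    */
        double b = 10 * n * log10(n);    /* package B: 10 n log10(n) */
        printf("k = %d  A = %12.0f  B = %12.0f  -> %s preferred\n",
               k, a, b, b <= a ? "B" : "A");
    }
    return 0;
}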
Question 12
 
A
θ(n)
B
θ(n log n)
C
θ(n^2)
D
θ(n^2 log n)
       Algorithms        Time-Complexity       2009
Question 12 Explanation: 
The above recurrence relation is form of master theorem. So, we can apply directly master's theorem T(n) = aT(n/b) + nk logp n)
The recurrence relation is T(n) = T(n/3) + cn
Here a=1 , b=3, k=1, p=0.
So, ak and p>=0. So, it comes under case 3.
Therefore T(n) = θ(nk(logpn)) => θ(n1(log0 n)) => θ(n)
Question 13
 
A
f(n) = O(g(n)); g(n) = O(h(n))
B
f(n) = Ω(g(n)); g(n) = O(h(n))
C
g(n) = O(f(n)); h(n) = O(f(n))
D
h(n) = O(f(n)); g(n) = Ω(f(n))
       Algorithms        Time-Complexity       Gate-2008
Question 13 Explanation: 
When we want to find which functions are asymptotically bigger or smaller, we basically follow 2 steps:
1. Check whether each function is polynomial or exponential.
2. Group them into polynomial functions and exponential functions; the exponential functions always grow faster than the polynomial ones.
For the given 3 functions, g(n) is the largest and f(n) is larger than h(n). So the order of growth is h(n) < f(n) < g(n).
Note: When comparing numerically, always take a big value of n.
Question 14
The minimum number of comparisons required to determine if an integer appears more than n/2 times in a sorted array of n integers is
A
θ(n)
B
θ(logn)
C
θ(log*n)
D
θ(1)
       Algorithms        Time-Complexity       Gate-2008
Question 14 Explanation: 
The best way to find out whether an integer appears more than n/2 times in a sorted (ascending order) array of n integers is a binary search approach:
1. The first occurrence of the element can be found in O(log n) time using divide and conquer; let its index be i.
2. The last occurrence of the element can be found in O(log n) time using divide and conquer; let its index be j.
3. The number of occurrences of that element (count) is then j - i + 1.
Overall time complexity = log n + log n + 1 = O(log n)
Program:
/* If x is present in arr[low...high] then returns the index of the first
   occurrence of x, otherwise returns -1 */
int binarySearch(int arr[], int low, int high, int x) {
    if (high >= low) {
        int mid = low + (high - low)/2;
        /* arr[mid] is the first occurrence of x if one of the following is true:
           (i)  mid == 0 and arr[mid] == x
           (ii) arr[mid-1] < x and arr[mid] == x */
        if ((mid == 0 || x > arr[mid-1]) && (arr[mid] == x))
            return mid;
        else if (x > arr[mid])
            return binarySearch(arr, mid + 1, high, x);
        else
            return binarySearch(arr, low, mid - 1, x);
    }
    return -1;
}
Note: The code finds the first occurrence of the element in the array; since the array is sorted, it is then enough to check whether the element n/2 positions after that first occurrence is the same. This takes O(log n) time.
Question 15
A
θ(n) and θ(n)
B
θ(2^n) and θ(n)
C
θ(n) and θ(2^n)
D
θ(2^n) and θ(2^n)
       Algorithms        Time-Complexity       Gate-2008
Question 15 Explanation: 
Time complexity of f1 is given by
T(n) = T(n-1) + T(n-2) (the multiplications by 2 and 3 won't affect the complexity, as they are constant-time operations)
T(0) = T(1) = 1
The recurrence is similar to the Fibonacci series, and the time complexity is O(2^n):
T(n) = T(n-1) + T(n-2) + c
T(0) = 1
T(n-1) ≈ T(n-2)
But in reality, T(n-1) > T(n-2).
So to find an upper bound the expression will be
T(n) = 2T(n-1) + c
= 4T(n-2) + 3c
= 8T(n-3) + 7c
= 2^k T(n-k) + (2^k - 1)c
n - k = 0 ⇒ k = n
T(n) = 2^n T(0) + (2^n - 1)c
= 2^n + (2^n - 1)c
⇒ T(n) = O(2^n)
The time required for f2 is O(n) (because it consists of only one loop).
Question 16
 
A
A, D, C, E, B
B
D, A, C, E, B
C
A, C, D, E, B
D
A, C, D, B, E
       Algorithms        Time-Complexity       Gate 2008-IT
Question 16 Explanation: 
A < D < C < E < B
n^(1/3) < n log^9 n < n^(7/4) < 1.0000001^n < e^n
Question 17
 
A
B
C
D
       Algorithms        Time-Complexity       Gate 2008-IT
Question 17 Explanation: 
Question 18
Which of the following sorting algorithms has the lowest worst-case complexity?
A
Merge sort
B
Bubble Sort
C
Quick Sort
D
Selection Sort
       Algorithms        Time-Complexity       Gate-2007
Question 18 Explanation: 
Worst case time complexities are:
Merge sort → O(n log n)
Bubble sort → O(n^2)
Quick sort → O(n^2)
Selection sort → O(n^2)
Question 19
 
A
⌈log₂n⌉ + 1
B
n
C
⌈log₂n⌉
D
⌊log₂n⌋ + 1
       Algorithms        Time-Complexity       Gate-2007
Question 19 Explanation: 
Let us consider n = 6. The comparisons made are
1 <= 6 (✔️)
2 <= 6 (✔️)
4 <= 6 (✔️)
8 <= 6 (❌)
So 4 comparisons are required.
Option A:
⌈log₂n⌉ + 1 = ⌈log₂6⌉ + 1 = 3 + 1 = 4 (✔)
Option B:
n = 6 (❌)
Option C:
⌈log₂n⌉ = ⌈log₂6⌉ = 3 (❌)
Option D:
⌊log₂n⌋ + 1 = ⌊log₂6⌋ + 1 = 2 + 1 = 3 (❌)
Question 20
In an unweighted, undirected connected graph, the shortest path from a node S to every other node is computed most efficiently, in terms of time complexity, by
A
Dijkstra’s algorithm starting from S.
B
Warshall’s algorithm.
C
Performing a DFS starting from S.
D
Performing a BFS starting from S.
       Algorithms        Time-Complexity       Gate-2007
Question 20 Explanation: 
→ Time complexity of Dijkstra's algorithm: it depends on the implementation. The simple array-based implementation takes O(V^2); with the graph represented as an adjacency list and a binary heap, it can be reduced to O(E log V).
→ Time complexity of Warshall's algorithm: O(n^3). Warshall's algorithm is basically used to find all-pairs shortest paths.
→ DFS cannot be used for finding shortest paths.
→ Time complexity of BFS: O(E + V), which is the most efficient of these on an unweighted, undirected graph.
Question 21
A
θ(log₂ n)
B
Ω(n)
C
θ(log₂ log₂ n)
D
θ(√n)
       Algorithms        Time-Complexity       Gate-2007
Question 21 Explanation: 
The given code calculates the greatest common divisor (GCD) using the Euclidean algorithm.
Its time complexity is O(log(a·b)) = O(log(a+b)) = O(log n).
Question 22
 
A
Ω(n^2)
B
θ(n log₂ n)
C
θ(log₂ n)
D
θ(log₂ log₂ n)
       Algorithms        Time-Complexity       Gate-2007
Question 22 Explanation: 

Now apply Master's theorem:
a = 1, b = 2, k = 0, p = 0
a = b^k, so Case 2 applies, and since p > -1, T(n) = θ(n^(log_b a) · log^(p+1) n) = θ(log n), i.e. θ(log₂ n).
Question 23
An array of n numbers is given, where n is an even number. The maximum as well as the minimum of these n numbers needs to be determined. Which of the following is TRUE about the number of comparisons needed?
A
At least 2n - c comparisons, for some constant c, are needed.
B
At most 1.5n - 2 comparisons are needed.
C
At least nlog2n comparisons are needed.
D
None of the above.
       Algorithms        Time-Complexity       Gate-2007
Question 23 Explanation: 
→ Take the first two elements of the array, compare them, and mark the larger one as max and the other as min. So far 1 comparison is required.
→ Now take the next two elements and compare them with each other; the smaller of the two is compared with min and the larger with max, updating them accordingly. So 3 comparisons are required per pair, and we do this for each of the remaining (n-2)/2 pairs.
So in total, the number of comparisons required
= 3(n-2)/2 + 1
= 3n/2 - 3 + 1
= 3n/2 - 2
= 1.5n - 2
A sketch of this pairwise method is given below.
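A minimal C sketch of the pairwise method (the sample array is illustrative):

#include <stdio.h>

/* 1 comparison for the first pair, then 3 comparisons for each of the
   remaining (n-2)/2 pairs: 3(n-2)/2 + 1 = 1.5n - 2 for even n. */
static void min_max(const int a[], int n, int *min, int *max) {
    if (a[0] < a[1]) { *min = a[0]; *max = a[1]; }
    else             { *min = a[1]; *max = a[0]; }
    for (int i = 2; i + 1 < n; i += 2) {
        int lo = a[i], hi = a[i + 1];
        if (lo > hi) { int t = lo; lo = hi; hi = t; }   /* comparison 1 */
        if (lo < *min) *min = lo;                       /* comparison 2 */
        if (hi > *max) *max = hi;                       /* comparison 3 */
    }
}

int main(void) {
    int a[] = {7, 3, 9, 1, 4, 8}, min, max;
    min_max(a, 6, &min, &max);
    printf("min = %d, max = %d\n", min, max);           /* min = 1, max = 9 */
    return 0;
}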
Question 24
 
A
T(n) = O(√n) and T(n) = Ω(√n)
B
T(n) = O(√n) and T(n) = Ω(1)
C
T(n) = O(n) and T(n) = Ω(√n)
D
None of the above
       Algorithms        Time-Complexity       Gate-2007
Question 24 Explanation: 
Here the best case time complexity is Ω(1) and the worst case time complexity is O(√n).
Question 25
 
A
val(j) = θ(logn)
B
val(j) = θ(√n)
C
val(j) = θ(n)
D
val(j) = θ(n logn)
       Algorithms        Time-Complexity       Gate-2006
Question 25 Explanation: 
The loop follows the series n/2 + n/4 + n/8 + ... + 1, which sums to Θ(n).
Question 26
An element in an array X is called a leader if it is greater than all elements to the right of it in X. The best algorithm to find all leaders in an array
A
Solves it in linear time using a left to right pass of the array
B
Solves it in linear time using a right to left pass of the array
C
Solves it using divide and conquer in time θ(nlogn)
D
Solves it in time θ(n2)
       Algorithms        Time-Complexity       Gate-2006
Question 26 Explanation: 
Solving it in linear time with a right-to-left pass of the array: keep track of the maximum element seen so far, and every element greater than that maximum is a leader. This takes O(n) time; a sketch is given below.
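A minimal C sketch of the right-to-left pass (the sample array is illustrative):

#include <stdio.h>

/* Scan from the right, remembering the maximum seen so far; every
   element strictly greater than it is a leader. One pass, Theta(n). */
static void print_leaders(const int x[], int n) {
    int max_right = x[n - 1];
    printf("%d ", x[n - 1]);            /* the last element is always a leader */
    for (int i = n - 2; i >= 0; i--) {
        if (x[i] > max_right) {
            max_right = x[i];
            printf("%d ", x[i]);
        }
    }
    printf("\n");
}

int main(void) {
    int x[] = {16, 17, 4, 3, 5, 2};
    print_leaders(x, 6);                /* prints 2 5 17 (right to left) */
    return 0;
}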
Question 27
The median of n elements can be found in O(n) time. Which one of the following is correct about the complexity of quick sort, in which median is selected as pivot?
A
θ(n)
B
θ(nlog n)
C
θ(n^2)
D
θ(n^3)
       Algorithms        Time-Complexity       Gate-2006
Question 27 Explanation: 
If we choose the pivot element as the median (which can be found in O(n)), then the recurrence relation will be
T(n) = 2T(n/2) + O(n)
In terms of the Master theorem: a = 2, b = 2, k = 1, p = 0. Since a = b^k, Case 2 is applied,
and also p > -1, so T(n) = θ(n^(log_b a) · log^(p+1) n) = θ(n log n).
Question 28
   
A
Takes O(3^n) and Ω(2^n) time if hashing is permitted
B
Takes O(n^3) and Ω(n^2.5) time in the key comparison model
C
Takes θ(n) time and space
D
Takes O(√n) time only if the sum of the 2n elements is an even number
       Algorithms        Time-Complexity       Gate-2006
Question 28 Explanation: 
As per the question, the algorithm proceeds as follows:
1. Since there are a total of n elements, the maximum sum is n for both arrays.
2. The difference between the two sums varies from -n to n, so there are a total of 2n + 1 possible values of the difference.
3. If the differences between the prefix sums of the two arrays become equal at two points, then the subarrays between these two points have the same sum.
Question 29
 
A
T(n) = θ(loglogn)
B
T(n) = θ(logn)
C
T(n) = θ(√n)
D
T(n) = θ(n)
       Algorithms        Time-Complexity       Gate-2006
Question 29 Explanation: 
T(n) = 2T(√n) + 1
Let n = 2^m.
T(2^m) = 2T(2^(m/2)) + 1 --------(1)
Let T(2^m) = S(m).
Then equation (1) becomes
S(m) = 2S(m/2) + 1
So now let's apply Master's theorem:
a = 2, b = 2, k = 0
Since a > b^k, Case 1 applies here:
S(m) = θ(m^(log₂2)) = θ(m)
But m = log n
So finally the answer is
O(log n)
Question 30
The time complexity of computing the transitive closure of a binary relation on a set of n elements is known to be:
A
O(n)
B
O(n log n)
C
O(n^(3/2))
D
O(n^3)
       Algorithms        Time-Complexity       Gate-2005
Question 30 Explanation: 
First of all, In mathematics the transitive closure of a binary relation R on a set X is defined as the smallest relation on X that contains R and is transitive. More formally, the transitive closure of a binary relation R on a set X is the transitive relation R+ on set X such that R+ contains R and R+ is minimal.
For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights". Informally, the transitive closure gives you the set of all places you can get to from any starting place.
To find the transitive closure of the given relation, we represent the relation by a directed graph: if x R y then there is a directed edge from x to y. On this graph we apply Warshall's algorithm (the boolean form of Floyd–Warshall) to compute the transitive closure.
The time complexity of this algorithm is O(V^3), where V is the number of vertices in the graph. Here V is the number of elements in the set, i.e. n, so the time complexity for the given question is O(n^3), as sketched below.
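A minimal C sketch of Warshall's algorithm on a boolean reachability matrix (N and the sample relation are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define N 3                             /* number of elements in the set */

/* Three nested loops over the n x n boolean matrix give O(n^3). */
static void transitive_closure(bool reach[N][N]) {
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (reach[i][k] && reach[k][j])
                    reach[i][j] = true;
}

int main(void) {
    /* R = {(0,1), (1,2)}; the closure adds (0,2). */
    bool r[N][N] = { {0, 1, 0}, {0, 0, 1}, {0, 0, 0} };
    transitive_closure(r);
    printf("0 R+ 2 ? %s\n", r[0][2] ? "yes" : "no");    /* yes */
    return 0;
}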
Question 31
Suppose there are ⌈log n⌉ sorted lists of ⌊n/log n⌋ elements each. The time complexity of producing a sorted list of all these elements is: (Hint: Use a heap data structure)
A
O(nlog log n)
B
θ(nlog n )
C
Ω(nlog n )
D
Ω(n^(3/2))
       Algorithms        Time-Complexity       Gate-2005
Question 31 Explanation: 
We can merge P sorted lists, each of size Q, in O(PQ log P) time using a min-heap that holds one cursor per list.
P = ⌈log n⌉
Q = ⌊n/log n⌋
∴ Putting in these values we get
O(PQ log P) = O(n log log n)
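A sketch of the heap-based P-way merge (names such as struct cursor and k_way_merge are illustrative, not from the question):

#include <stdio.h>
#include <stdlib.h>

/* Each of the PQ output elements costs O(log P) heap work, giving
   O(PQ log P); with P = log n and Q = n/log n this is O(n log log n). */
struct cursor { int value; int list; int index; };

static void sift_down(struct cursor *h, int size, int i) {
    for (;;) {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < size && h[l].value < h[smallest].value) smallest = l;
        if (r < size && h[r].value < h[smallest].value) smallest = r;
        if (smallest == i) break;
        struct cursor t = h[i]; h[i] = h[smallest]; h[smallest] = t;
        i = smallest;
    }
}

/* lists[p][0..len[p]-1] are sorted; out receives every element in order. */
static void k_way_merge(int **lists, const int *len, int p, int *out) {
    struct cursor *heap = malloc(p * sizeof *heap);
    int size = 0, n_out = 0;
    for (int i = 0; i < p; i++)
        if (len[i] > 0)
            heap[size++] = (struct cursor){ lists[i][0], i, 0 };
    for (int i = size / 2 - 1; i >= 0; i--)   /* build the min-heap */
        sift_down(heap, size, i);
    while (size > 0) {
        struct cursor top = heap[0];          /* smallest remaining element */
        out[n_out++] = top.value;
        if (top.index + 1 < len[top.list])    /* advance within the same list */
            heap[0] = (struct cursor){ lists[top.list][top.index + 1],
                                       top.list, top.index + 1 };
        else
            heap[0] = heap[--size];           /* that list is exhausted */
        sift_down(heap, size, 0);
    }
    free(heap);
}

int main(void) {
    int l0[] = {1, 4, 7}, l1[] = {2, 5, 8}, l2[] = {3, 6, 9};
    int *lists[] = {l0, l1, l2};
    int len[] = {3, 3, 3}, out[9];
    k_way_merge(lists, len, 3, out);
    for (int i = 0; i < 9; i++)
        printf("%d ", out[i]);                /* 1 2 3 4 5 6 7 8 9 */
    printf("\n");
    return 0;
}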
Question 32
 
A
T(n) = O(n^2)
B
T(n) = θ(n log n)
C
T(n) = Ω(n^2)
D
T(n) = O(n log n)
       Algorithms        Time-Complexity       Gate-2005
Question 32 Explanation: 
Apply Master's theorem to the given recurrence:
a = 2, b = 2, k = 1, p = 0
Since a = b^k and p > -1, T(n) = θ(n log n) = O(n log n).
Question 33
The tightest lower bound on the number of comparisons, in the worst case, for comparison-based sorting is of the order of
A
n
B
n^2
C
nlogn
D
n log^2 n
       Algorithms        Time-Complexity       Gate-2004
Question 33 Explanation: 
For comparison-based sorting, the tightest lower bound on the number of comparisons in the worst case is of the order of n log n.
→ A tightest upper bound is expressed with big O.
→ A tightest lower bound is expressed with big Ω.
Question 34
Two matrices M1 and M2 are to be stored in arrays A and B respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute M1 × M2 will be
A
best if A is in row-major, and B is in column major order
B
best if both are in row-major order
C
best if both are in column-major order
D
independent of the storage scheme
       Algorithms        Time-Complexity       Gate-2004
Question 34 Explanation: 
The running time of the algorithm is independent of the storage scheme: while computing the running time we assume that accessing any element takes the same amount of time. So the answer is (D).
But if the question had asked which implementation (not algorithm) performs best in practice, because of locality of reference, then option (A) would be correct.
Question 35
 
A
Ω(n^2)
B
Ω(n log n) and O(n^2)
C
θ(n)
D
o(n)
       Algorithms        Time-Complexity       Gate-2004
Question 35 Explanation: 
If the string contains all 0's then it takes O(n) time, because f() can be called n times.
If the string contains all 1's then it takes O(n) time, because counter++ can execute n times.
If it contains half 0's and half 1's then it also takes O(n) time.
Question 36
 
A
O(n)
B
O(n log n)
C
O(n^2)
D
O(2^n)
       Algorithms        Time-Complexity       Gate-2004
Question 36 Explanation: 
if (n == 1)
    return 1;
else
    return recursive(n-1) + recursive(n-1);
For n > 1:
T(n) = 2T(n-1)
T(2) = T(1) + T(1) = 2T(1)
T(3) = T(2) + T(2) = 2T(2) = 2·2T(1) = 2^2 T(1)
T(4) = 2^3 T(1)
T(n) = 2^(n-1) T(1) = O(2^n)
Question 37

A program takes as input a balanced binary search tree with n leaf nodes and computes the value of a function g(x) for each node x. If the cost of computing g(x) is min(number of leaf-nodes in left-subtree of x, number of leaf-nodes in right-subtree of x) then the worst-case time complexity of the program is

A
Θ (n)
B
Θ (n log n)
C
Θ(n^2)
D
Θ(n^2 log n)
       Algorithms        Time-Complexity       Gate-2004
Question 37 Explanation: 
The input is a balanced binary search tree with n leaf nodes.
⇒ g(x) = min(no. of leaf nodes in the left subtree of x, no. of leaf nodes in the right subtree of x)
→ Total no. of leaves = n
In a balanced tree the nodes on any one level together contribute O(n) to the total cost, and there are O(log n) levels.
Worst-case time complexity of the program = Θ(n log n)
Question 38
 
A
I and II
B
I and III
C
II and III
D
I, II, and III
       Algorithms        Time-Complexity       Gate-2003
Question 38 Explanation: 

The claim about the polynomial expression is true, by considering its leading-order term.
2^n × 2^n = 4^n cannot be written as Θ(2^n),
so that claim is false.
Question 39
The usual Θ(n2) implementation of Insertion Sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If, instead, we use binary search to identify the position, the worst case running time will  
A
remain Θ(n^2)
B
become Θ(n (log n)^2)
C
become Θ(n log n)
D
become Θ(n)
       Algorithms        Time-Complexity       Gate-2003
Question 39 Explanation: 
Using insertion sort with linear search to find the insertion position gives time complexity Θ(n^2).
If binary search is used instead, finding the position takes Θ(log n) per element, but in the worst case up to n elements must still be shifted to place each element in its right position,
i.e. n × (log n + n)
= Θ(n log n + n^2)
= Θ(n^2)
So the worst case running time remains the same.
Question 40
The cube root of a natural number n is defined as the largest natural number m such that m3≤n. The complexity of computing the cube root of n (n is represented in binary notation) is  
A
O(n) but not O(n^0.5)
B
O(n^0.5) but not O((log n)^k) for any constant k > 0
C
O((log n)^k) for some constant k > 0, but not O((log log n)^m) for any constant m > 0
D
O((log log n)^k) for some constant k > 0.5, but not O((log log n)^0.5)
       Algorithms        Time-Complexity       Gate-2003
Question 40 Explanation: 
The cube root can be found by binary search for m over the range 1 to n, which takes O(log n) iterations, and the work in each iteration is again of the order of log n, so the time taken to find the cube root is O(log^2 n) ... (I)
The time complexity can never be less than O(log n), because just reading the binary representation of n already takes O(log n) time, so it is not O((log log n)^m) for any constant m ... (II)
From (I), options A and B are false.
From (II), option D is false.
Question 41
 
A
O(n)
B
O(log n)
C
O(log log n)
D
O(1)
       Algorithms        Time-Complexity       Gate-2002
Question 41 Explanation: 
T(n) = T(√n) + 1
Let n = 2^m.
So, T(2^m) = T(2^(m/2)) + 1
We substitute T(2^m) = S(m), so the equation becomes
S(m) = S(m/2) + 1
Now, applying Master's theorem,
S(m) = O(log m)
Since m = log n,
T(n) = O(log log n)
Question 42
 
A
h(n) is O (f(n))
B
h(n) is O (g(n))
C
g(n) is not O (f(n))
D
f(n) is O(g(n))
       Algorithms        Time-Complexity       Gate-2000
Question 42 Explanation: 
Consider the value n = 2^10. Then
f(n) = 3·(2^10)^32 = 3·2^320
g(n) = 2^320
h(n) = 1024!
So the relation between the functions can be seen as follows:
f(n) and g(n) are of the same order, so f(n) is O(g(n)) and g(n) = O(f(n)). Option C is wrong.
h(n) is n!, which is of a higher order than f(n) and g(n). So options A and B are wrong.
Question 43
Suppose we want to arrange the n numbers stored in an array such that all negative values occur before all positive ones. Minimum number of exchanges required in the worst case is
A
n - 1
B
n
C
n + 1
D
None of the above
       Algorithms        Time-Complexity       Gate-1999
Question 43 Explanation: 
The minimum number of exchanges is largest when the first half of the array contains all positive numbers and the second half contains all negative numbers.
Then we swap the 1st element with the nth, the 2nd with the (n-1)th, the 3rd with the (n-2)th, and so on; this needs only n/2 exchanges in the worst case (a two-pointer sketch is given below).
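A minimal two-pointer C sketch of the rearrangement (the sample values are illustrative):

#include <stdio.h>

/* Two-pointer pass: swap a non-negative value on the left with a
   negative value on the right; at most n/2 exchanges are ever needed. */
static void separate(int a[], int n) {
    int i = 0, j = n - 1;
    while (i < j) {
        while (i < j && a[i] < 0) i++;     /* already in place */
        while (i < j && a[j] >= 0) j--;    /* already in place */
        if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }
}

int main(void) {
    int a[] = {3, -1, 5, -7, -2, 6};
    separate(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);   /* all negatives before all positives */
    printf("\n");
    return 0;
}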
Question 44
If n is a power of 2, then the minimum number of multiplications needed to compute a^n is
A
log₂ n
B
√n
C
n-1
D
n
       Algorithms        Time-Complexity       Gate-1999
Question 44 Explanation: 

By repeated squaring (a^2, a^4, a^8, a^16) we require only 4 multiplications to calculate a^16 .....(I)
→ Likewise, 3 multiplications are required to calculate a^8 .....(II)
(I) and (II) are satisfied by option A, log₂ n.
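A minimal repeated-squaring sketch that also counts the multiplications (the helper name power() is illustrative):

#include <stdio.h>

/* When n is a power of 2, a^n is obtained by squaring repeatedly,
   which takes exactly log2(n) multiplications. */
static long long power(long long a, unsigned n, int *mults) {
    long long result = a;
    *mults = 0;
    while (n > 1) {              /* each iteration is one multiplication */
        result *= result;
        n /= 2;
        (*mults)++;
    }
    return result;
}

int main(void) {
    int mults;
    long long v = power(2, 16, &mults);
    printf("2^16 = %lld using %d multiplications\n", v, mults);   /* 65536, 4 */
    return 0;
}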
Question 45
 
A
M – W N – V O – U P - X
B
M – W N – U O – X P - V
C
M – V N – W O – X P - U
D
None of the above
       Algorithms        Time-Complexity       Gate-1999
Question 45 Explanation: 
(M) T(n) = sum of the first n natural nos. = n(n+1)/2 = O(n^2)
(N) Apply Master's theorem:
T(n) = θ(n) = O(n)
(O) Apply Master's theorem:
T(n) = θ(n log n) = O(n log n)
(P) Here we are adding the logs of the first n natural numbers.
So,
T(n) = log 1 + log 2 + log 3 + ... + log n
= log(1 × 2 × ... × n)
= log(n!)
= θ(n log n)
Question 46
Which of the following is false?
A
B
C
D
       Algorithms        Time-Complexity       Gate-1996
Question 46 Explanation: 
Question 47
 
A
O(n)
B
O(log n)
C
D
None of the above
       Algorithms        Time-Complexity       Gate-1996
Question 47 Explanation: 
Apply Master's theorem.
Question 48
The average number of key comparisons done on a successful sequential search in list of length n is
A
B
C
D
       Algorithms        Time-Complexity       Gate-1996
Question 48 Explanation: 
Total comparisons required
= No. of comparisons if element present in 1st position + No. of comparisons if element present in 2nd position + ............. + No. of comparisons if element present in nth position
= 1 + 2 + 3 + ... + n
= n(n+1)/2
Since there are n elements in the list, so average no. of comparisons
= Total comparisons/Total no. of elements
= (n(n+1)/2)/n
= (n+1)/2
Question 49
 
A
C1 < C2
B
C1 > C2
C
C1 = C2
D
we cannot say anything for arbitrary n.
       Algorithms        Time-Complexity       Gate-1996
Question 49 Explanation: 
Both represent worst cases of quick sort, i.e., either the elements are already in increasing order or already in decreasing order, and the number of comparisons is the same in both cases.
So option (C) is correct.
Question 50
The recurrence relation that arises in relation with the complexity of binary search is:
A
B
C
D
       Algorithms       Time-Complexity       Gate-1994
Question 50 Explanation: 
In binary search, each step recurses on one half of the list and spends constant time on the comparison, so the recurrence is T(n) = T(n/2) + c.
Question 51
 
A
O(n)
B
O(n^2)
C
O(n^3)
D
O(3n^2)
E
O(1.5n^2)
F
B, C, D and E
       Algorithms       Time-Complexity       Gate-1993
Question 51 Explanation: 

⇒ Here 'n' is added to itself n times, so the total work is n · n = Θ(n^2), not O(n).
Hence (A) is wrong, and the rest, (B), (C), (D) and (E), are all correct upper bounds.
Question 52
Let f(n) = n^2 log n and g(n) = n(log n)^10 be two positive functions of n. Which of the following statements is correct?
A
f(n) = O(g(n)) and g(n) ≠ O(f(n))
B
g(n) = O(f(n)) and f(n) ≠ O(g(n))
C
f(n) ≠ O(g(n)) and g(n) ≠ O(f(n))
D
f(n) = O(g(n)) and g(n) = O(f(n))
       Algorithms        Time-Complexity       Gate-2001
Question 52 Explanation: 
f(n) = n^2 log n; g(n) = n(log n)^10
Cancel the common factor n log n from f(n) and g(n):
f(n) → n; g(n) → (log n)^9
n grows much faster than (log n)^9.
So f(n) ≠ O(g(n)) and g(n) = O(f(n)).
Question 53
 
A
O(n^2)
B
O(mn)
C
O(m+n)
D
O(m logn)
E
O(m^2)
F
B, D and E
       Algorithms       Time-Complexity       Gate-1991
Question 53 Explanation: 
Though the edges are already given in sorted order, the union–find operations still make the complexity O(m log n).
→ Here m is the no. of edges and n is the number of vertices, and n = O(m^2).
→ O(m log n) < O(mn)