
2 Divide and Conquer

The divide and conquer strategy involves three main steps.
● Step 1: The problem is divided into subproblems that are smaller instances of the same problem.
● Step 2: Solve (conquer) the subproblems recursively. If a subproblem is small enough, solve it directly.
● Step 3: Combine the solutions of all the subproblems into the solution of the original problem.
● When a subproblem is not small enough to be solved directly (without recursion), i.e., it has not reached the base case, it is divided into smaller subproblems.

The Maximum Subarray Problem

Problem:
Given an array A[1...n], find a contiguous subarray whose values sum to the maximum possible value.

Input: Array A[1...n]
Output: The maximum sum achievable by a subarray A[i...j], together with the indices i and j.

Note:
The maximum subarray might not be unique.

Example:

Index:  1   2   3   4   5   6
A:      5  −4   3   4  −2   1

Maximum subarray: A[1…4] (i = 1, j = 4), with sum = 8.

Divide and Conquer Strategy:
1. Divide: The array A[l…h] is divided into two subarrays of as equal size as possible by computing the midpoint m.
2. Conquer: By the algorithm given below:
   a) Find the maximum subarrays of A[l…m] and A[m+1…h].
   b) Find a maximum subarray that crosses the midpoint.
3. Combine: Return the maximum of the above three.

This process works because any maximum subarray must lie entirely on one side of the midpoint, entirely on the other side, or cross the midpoint.

Algorithm:
FIND-MAXIMUM-CROSSING-SUBARRAY (Arr, l, m, h)
l-sum ← −∞
sum ← 0
for i ← m downto l
  sum ← sum + Arr[i]
  if sum > l-sum
    l-sum ← sum
    max-left ← i
r-sum ← −∞
sum ← 0
for j ← m + 1 to h
  sum ← sum + Arr[j]
  if sum > r-sum
    r-sum ← sum
    max-right ← j
return (max-left, max-right, l-sum + r-sum)

The algorithm returns the indices max-left and max-right and the combined sum of the two subarrays.
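The same computation in Python, as a minimal sketch (0-based indices rather than the 1-based pseudocode; function names are illustrative). It also includes the recursive driver FIND-MAXIMUM-SUBARRAY, developed in pseudocode below:

def find_max_crossing_subarray(arr, low, mid, high):
    # Best sum of a subarray arr[i..mid], scanning leftward from mid.
    left_sum, best_left, total = float("-inf"), mid, 0
    for i in range(mid, low - 1, -1):
        total += arr[i]
        if total > left_sum:
            left_sum, best_left = total, i
    # Best sum of a subarray arr[mid+1..j], scanning rightward.
    right_sum, best_right, total = float("-inf"), mid + 1, 0
    for j in range(mid + 1, high + 1):
        total += arr[j]
        if total > right_sum:
            right_sum, best_right = total, j
    return best_left, best_right, left_sum + right_sum

def find_maximum_subarray(arr, low, high):
    if low == high:                          # base case: one element
        return low, high, arr[low]
    mid = (low + high) // 2
    left = find_maximum_subarray(arr, low, mid)
    right = find_maximum_subarray(arr, mid + 1, high)
    cross = find_max_crossing_subarray(arr, low, mid, high)
    return max(left, right, cross, key=lambda t: t[2])   # largest sum wins

find_maximum_subarray([5, -4, 3, 4, -2, 1], 0, 5)   # (0, 3, 8): A[1..4], sum 8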



● If the subarray Arr[l…h] contains n entries, then n = h−l+1.
● Each iteration of the two for loops takes θ(1) time, so the number of iterations must be counted to get the total time complexity.
● The first for loop makes m−l+1 iterations, and the second for loop makes h−m iterations.
Total number of iterations = (m−l+1) + (h−m) = h−l+1 = n
∴ FIND-MAXIMUM-CROSSING-SUBARRAY(Arr, l, m, h) takes θ(n) time.
● With the FIND-MAXIMUM-CROSSING-SUBARRAY procedure in hand, which runs in linear time, divide and conquer pseudocode can be written to solve the maximum subarray problem.

FIND-MAXIMUM-SUBARRAY (Arr, l, h)
1. if h == l
2.   return (l, h, Arr[l])
3. else m = ⌊(l + h) / 2⌋
4.   (l-low, l-high, l-sum) = FIND-MAXIMUM-SUBARRAY (Arr, l, m)
5.   (r-low, r-high, r-sum) = FIND-MAXIMUM-SUBARRAY (Arr, m+1, h)
6.   (cross-low, cross-high, cross-sum) = FIND-MAXIMUM-CROSSING-SUBARRAY (Arr, l, m, h)
7.   if l-sum ≥ r-sum and l-sum ≥ cross-sum
8.     return (l-low, l-high, l-sum)
9.   else if r-sum ≥ l-sum and r-sum ≥ cross-sum
10.    return (r-low, r-high, r-sum)
11.  else return (cross-low, cross-high, cross-sum)

Remarks:
1. Initial call: FIND-MAXIMUM-SUBARRAY (Arr, 1, n).
2. The base case is reached when the subarray cannot be divided further, i.e., when it has 1 element.
3. Divide is based on the value of m. Conquer by the two recursive calls to FIND-MAXIMUM-SUBARRAY and a call to FIND-MAXIMUM-CROSSING-SUBARRAY. Combine the solutions by finding which of the three results gives the maximum sum.
4. Complexity:
T(n) = 2T(n/2) + θ(n) + θ(1) = θ(n log n) by the master theorem.

Previous Years' Question

Consider a sequence of 14 elements:
A = [−5, −10, 6, 3, −1, −2, 13, 4, −9, −1, 4, 12, −3, 0]
The sequence sum is S(i, j) = Σ_{k=i}^{j} A[k]. Determine the maximum of S(i, j), where 0 ≤ i ≤ j < 14.
(A divide and conquer approach may be used.)
Solution: Sum = 6 + 3 − 1 − 2 + 13 + 4 − 9 − 1 + 4 + 12 = 29

Strassen's Algorithm for Matrix Multiplication

Given two square matrices A and B of size n×n each, find the product of the two matrices.

Naive method:
SQUARE–MATRIX–MULTIPLY (A, B)
1. n ← A.rows
2. let C be a new n×n matrix
3. for i ← 1 to n
4.   for j ← 1 to n
5.     C[i,j] ← 0
6.     for k ← 1 to n
7.       C[i,j] ← C[i,j] + A[i,k] · B[k,j]
8. return C

Because each of the triply-nested loops runs for exactly n iterations, and each execution of line 7 takes constant time,
∴ Time complexity = O(n³) for the naive method.

A simple divide and conquer algorithm:
● Divide the matrices A and B into four n/2 × n/2 submatrices:

A = [A11 A12; A21 A22],  B = [B11 B12; B21 B22],  C = [C11 C12; C21 C22]   ...(1)

so that the equation C = A·B can be rewritten as

[C11 C12; C21 C22] = [A11 A12; A21 A22] · [B11 B12; B21 B22]

where
C11 = A11·B11 + A12·B21
C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21
C22 = A21·B12 + A22·B22



● Using these equations, a straightforward recursive divide-and-conquer algorithm can be written:

SQUARE–MATRIX–MULTIPLY–RECURSIVE (A, B)
1. n = A.rows
2. let C be a new n×n matrix
3. if n == 1
4.   C11 = a11 · b11
5. else partition A, B, and C as in equations (1)
6.   C11 = SQUARE–MATRIX–MULTIPLY–RECURSIVE (A11, B11) + SQUARE–MATRIX–MULTIPLY–RECURSIVE (A12, B21)
7.   C12 = SQUARE–MATRIX–MULTIPLY–RECURSIVE (A11, B12) + SQUARE–MATRIX–MULTIPLY–RECURSIVE (A12, B22)
8.   C21 = SQUARE–MATRIX–MULTIPLY–RECURSIVE (A21, B11) + SQUARE–MATRIX–MULTIPLY–RECURSIVE (A22, B21)
9.   C22 = SQUARE–MATRIX–MULTIPLY–RECURSIVE (A21, B12) + SQUARE–MATRIX–MULTIPLY–RECURSIVE (A22, B22)
10. return C

● In the above method, we perform eight recursive multiplications of n/2 × n/2 matrices and four additions.
● Adding two matrices takes O(n²) time.
So the time complexity can be given as
T(n) = θ(1) if n = 1
T(n) = 8T(n/2) + θ(n²) if n > 1
By the master theorem, the time complexity is O(n³), which is the same as the naive method.

Strassen's method:
● Strassen's method makes the recursion tree sparser.
● Instead of eight recursive multiplications of n/2 × n/2 matrices, it performs only seven.
● Strassen's method also uses the divide and conquer strategy, i.e., it divides each matrix into smaller matrices of order n/2 × n/2.
● In Strassen's matrix multiplication, the result is assembled from the four sub-matrices using the formula below:

[A11 A12; A21 A22] × [B11 B12; B21 B22] = [P5+P4−P2+P6   P1+P2;   P3+P4   P5+P1−P3−P7]

where
P1 = A11 · (B12 − B22)
P2 = (A11 + A12) · B22
P3 = (A21 + A22) · B11
P4 = A22 · (B21 − B11)
P5 = (A11 + A22) · (B11 + B22)
P6 = (A12 − A22) · (B21 + B22)
P7 = (A11 − A21) · (B11 + B12)

So, a total of 7 multiplications is required.

Time complexity of Strassen's method:
Adding and subtracting two matrices takes O(n²) time.
Hence, the time complexity of the algorithm can be written as
T(n) = θ(1) if n = 1
T(n) = 7T(n/2) + θ(n²) if n > 1
By the master theorem, the time complexity is O(n^(log₂ 7)), which is approximately O(n^2.8074).

NOTE:
Strassen's method is not suggested for practical purposes, due to the following reasons:
● The constant factors are larger, and the naive method performs better in most cases.
● There are better methods available for sparse matrices in particular.
● The sub-matrices occupy more space in the recursive method.
● Strassen's algorithm accumulates more error, due to the precision limits of computer arithmetic on non-integer values.



Problem of Sorting

Input: An array A of size n.
Output: A reordering of the array such that A[0] ≤ A[1] ≤ … ≤ A[n−1].

Properties of sorting algorithms:

In place:
A sorting algorithm is said to be in place if at most O(1) elements (in some cases slightly more) are stored outside the array at any moment.
E.g., insertion sort is in place.

Stability:
A sorting algorithm is said to be stable if the relative order of repeated elements is not changed.
E.g., insertion sort, merge sort, etc.

Adaptive:
A sorting algorithm falls into the adaptive family if it takes advantage of the existing order in its input.
E.g., insertion sort.

Merge sort:
The merge sort algorithm uses the divide and conquer approach.

Divide:
Divide the array of n elements into two subarrays of n/2 elements each.

Conquer:
Sort the subarrays recursively using merge sort.

Combine:
Merge the two sorted subarrays to produce the sorted array.

● The main operation of the merge sort algorithm is to merge two sorted arrays.
● MERGE(A, p, q, r) is used to merge the subarrays, where p ≤ q < r. It assumes the two subarrays A[p...q] and A[q+1...r] are each in sorted order.

Algorithm
MERGE_FUNCTION (arr, low, mid, high)
1. n1 ← (mid − low) + 1
2. n2 ← high − mid
3. let arr1[1...(n1+1)] and arr2[1...(n2+1)] be new arrays
4. for i ← 1 to n1
5.   arr1[i] ← arr[low + i − 1]
6. for j ← 1 to n2
7.   arr2[j] ← arr[mid + j]
8. arr1[n1+1] ← ∞
9. arr2[n2+1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← low to high
13.   if arr1[i] ≤ arr2[j]
14.     arr[k] ← arr1[i]
15.     i ← i + 1
16.   else arr[k] ← arr2[j]
17.     j ← j + 1

● Line 1 finds the length n1 of the subarray arr[low...mid], and line 2 finds the length n2 of the subarray arr[mid+1...high].
● The arrays arr1 and arr2 are created with lengths n1+1 and n2+1, respectively; the extra position in each array holds the sentinel (∞).
● The for loops of lines 4-5 and 6-7 copy the subarray arr[low…mid] into arr1[1…n1] and arr[mid+1…high] into arr2[1…n2].
● The final loop traverses arr1 and arr2 simultaneously from left to right, writing the smaller of the two current elements back into arr.
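A sketch of the sentinel-based merge in Python (0-based indices; list slicing stands in for the explicit copy loops of the pseudocode):

def merge(arr, low, mid, high):
    left = arr[low:mid + 1] + [float("inf")]      # sentinel at the end
    right = arr[mid + 1:high + 1] + [float("inf")]
    i = j = 0
    for k in range(low, high + 1):                # exactly high-low+1 writes
        if left[i] <= right[j]:                   # <= keeps the sort stable
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1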



Example: (diagram of the merge of two sorted subarrays; not reproduced.)



MERGE-SORT (arr, low, high)
1. if low < high
2.   mid ← ⌊(low + high) / 2⌋
3.   MERGE-SORT (arr, low, mid)
4.   MERGE-SORT (arr, mid+1, high)
5.   MERGE (arr, low, mid, high)

Example: (merge sort recursion-tree diagram; not reproduced.)

Analysis:
● Merge sort works correctly even on odd-length arrays; the recurrence analysis is simply easier if we assume the input array length is a power of 2.

Breakdown of the running time:

Divide:
This step finds the middle of the subarray in constant time, i.e., θ(1).

Conquer:
Solving the two subproblems of size n/2 each recursively contributes 2T(n/2) to the running time.

Combine:
The MERGE_FUNCTION procedure applied to an n-sized subarray takes θ(n) time.

Recurrence for the worst-case running time T(n) of merge sort:
T(n) = θ(1) if n = 1
T(n) = 2T(n/2) + θ(n) if n > 1

By the master theorem:
T(n) = θ(n log n)



Rack Your Brain

Think about the properties of merge sort and other implementations of merge sort.
Hint: There is an in-place merge sort implementation.

Previous Years' Question

Assume that a merge sort algorithm in the worst case takes 30 seconds for an input of size 64. Which of the following most closely approximates the maximum input size of a problem that can be solved in 6 minutes?
(A) 256  (B) 512
(C) 1024 (D) 2048
Solution: (B)

Quick sort:
Quick sort uses a 3-step divide and conquer approach to sort an array A[p...r].

Divide:
Rearrange the array A into two subarrays A[p...q−1] and A[q+1...r]. The partition procedure computes the index q such that the elements of A[p...q−1] are smaller than A[q] and the elements of A[q+1...r] are greater.

Conquer:
Sort the two subarrays A[p…q−1] and A[q+1…r] recursively with quicksort.

Combine:
Since the subarrays are sorted in place, no extra work is needed to combine them. The array A[p...r] is now sorted.

Algorithm:
QUICKSORT_ALGO (Arr, low, high)
1. if low < high
2.   mid ← PARTITION_OF_ARRAY (Arr, low, high)
3.   QUICKSORT_ALGO (Arr, low, mid−1)
4.   QUICKSORT_ALGO (Arr, mid+1, high)

The main part of the algorithm is the PARTITION_OF_ARRAY procedure, which partitions the subarray Arr[low...high] in place.

PARTITION_OF_ARRAY (Arr, low, high)
1. x ← Arr[high]
2. i ← low − 1
3. for j ← low to high−1
4.   if Arr[j] ≤ x
5.     i ← i + 1
6.     swap Arr[i] with Arr[j]
7. swap Arr[i+1] with Arr[high]
8. return i + 1

Example: (step-by-step partition diagrams (A)-(I); not reproduced.)
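A minimal Python sketch of the two procedures above (Lomuto-style partition; 0-based indices):

def partition(arr, low, high):
    x = arr[high]                          # last element as pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] <= x:                    # grow the "no greater than x" region
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1                           # final position of the pivot

def quicksort(arr, low, high):
    if low < high:
        mid = partition(arr, low, high)
        quicksort(arr, low, mid - 1)
        quicksort(arr, mid + 1, high)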



PARTITION_OF_ARRAY applied on an array:
● Array entry A[r] is the pivot element x.
● Orange-coloured elements are all in the first partition, with values no greater than x.
● Pink-coloured elements are in the second partition, with values greater than x.
● Blue-coloured elements have not yet been put into one of the first two partitions.

Analysing quicksort:
The choice of pivot is the most crucial:
● A poor pivot may lead to the worst-case time complexity O(n²), whereas a pivot that divides the array into two equal-sized subarrays gives the best-case time complexity, i.e., O(n log n).

The worst-case choice:
The pivot may be the largest or the smallest of all the elements in the array.
● When the first iteration is done, one subarray contains 0 elements and the other subarray has n−1 elements.
● Quicksort is then applied recursively on this second subarray of n−1 elements.
However, quicksort is fast on "randomly scattered" pivots.
The partition step needs at least n−1 comparisons.
● The recurrence for the running time is
T(n) = T(n−1) + Θ(n), with T(1) = θ(1)
Expanding, with a constant c replacing Θ:
T(n) = T(n−1) + cn
     = T(n−2) + c(n−1) + cn
     = T(n−3) + c(n−2) + c(n−1) + cn
     = T(1) + c(2 + 3 + … + n)
     ≤ c(1 + 2 + 3 + … + n)
     = c · n(n+1)/2 = O(n²)

Average case:
T(n): the average number of comparisons during quicksort on n elements.
T(1) = θ(1), T(0) = θ(1)
T(n) = (1/n) Σ_{i=1}^{n} (T(i−1) + T(n−i) + n − 1)
nT(n) = 2 Σ_{i=1}^{n} T(i−1) + n(n−1)   ...(1)
For n−1:
(n−1)T(n−1) = 2 Σ_{i=1}^{n−1} T(i−1) + (n−1)(n−2)   ...(2)
Subtracting equation (2) from (1), we get
nT(n) − (n−1)T(n−1) = 2T(n−1) + 2(n−1)
nT(n) − (n+1)T(n−1) = 2(n−1)
T(n)/(n+1) − T(n−1)/n = 2(n−1)/(n(n+1))
g(n) − g(n−1) = 2(n−1)/(n(n+1)), where g(n) = T(n)/(n+1)
Taking the RHS and splitting it into partial fractions:



2(n−1)/(n(n+1)) = (2(n+1) − 4)/(n(n+1))
               = 2/n − 4/(n(n+1))
               = 2/n − 4/n + 4/(n+1)
               = 4/(n+1) − 2/n

Now,
g(n) − g(n−1) = 4/(n+1) − 2/n
Hence,
g(n) = 4/(n+1) + 2 Σ_{j=2}^{n} (1/j) − 2
     = 4/(n+1) + 2 Σ_{j=1}^{n} (1/j) − 4
     = 4/(n+1) + 2H(n) − 4
Now,
T(n) = (n+1) (4/(n+1) + 2H(n) − 4)
T(n) = 2(n+1)H(n) − 4n
T(n) ≈ 2(n+1) ln n + 1.16(n+1) − 4n
T(n) = 2n ln n − 2.84n + O(log n)
T(n) ≈ 2n ln n

The average number of comparisons during quicksort on n elements therefore approaches
2n ln n ≈ 1.39 n log₂ n − O(n)
So, the average-case time complexity of quicksort is O(n log n).

Previous Years' Question

Let P be a quicksort program to sort numbers in ascending order using the first element as pivot. Let t₁ and t₂ be the number of comparisons made by P for the inputs {1, 2, 3, 4, 5} and {4, 1, 5, 3, 2}, respectively. Which one of the following holds?
[CS 2014 (Set-1)]
(A) t₁ = 5
(B) t₁ < t₂
(C) t₁ > t₂
(D) t₁ = t₂
Solution: (C)

Previous Years' Question

In quicksort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst-case time complexity of the quicksort?
[CS 2009]
(A) θ(n)
(B) θ(n log n)
(C) θ(n²)
(D) θ(n² log n)
Solution: (B)

Previous Years' Question

Randomised quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst-case complexity of sorting n numbers using randomised quicksort?
[CS 2001]
(A) O(n)
(B) O(n log n)
(C) O(n²)
(D) O(n!)
Solution: (C)



Randomised quicksort:
● Assume all elements are distinct.
● Partition around a random element, i.e., the pivot is chosen randomly from the set of elements.
● This implies that the splits (0, n−1), (1, n−2), …, (n−1, 0) all occur with the same probability, 1/n.

Note:
Randomisation is a general tool to improve algorithms with a bad worst case but good average-case complexity.

Randomised_Partition (Arr, p, r)
{
  i ← Random(p, r)   /* generates a random number between p and r, including p and r */
  exchange Arr[r] ↔ Arr[i]
  return Partition (Arr, p, r)
}

Randomised_Quicksort (Arr, p, r)
{
  if p < r
    q ← Randomised_Partition (Arr, p, r)
    Randomised_Quicksort (Arr, p, q−1)
    Randomised_Quicksort (Arr, q+1, r)
}

Time complexity:
● Let T(n) be the number of comparisons needed to sort n numbers using quicksort.
● Each split occurs with probability 1/n. Every partition requires at least (n−1) comparisons, because (n−1) elements have to be compared with the pivot element.
● Let the pivot be the i-th smallest element. Then we get (i−1) elements on the left side and (n−i) elements on the right side, and randomised quicksort must be run on the (i−1)-element and (n−i)-element subarrays, taking T(i−1) and T(n−i) time, respectively.
● So T(n) = T(i−1) + T(n−i) + (n−1) with probability 1/n; the (n−1) term appears because at least (n−1) comparisons are required.
Hence, taking the expectation (and replacing i with j for simplicity):
T(n) = (1/n) Σ_{j=1}^{n} (T(j−1) + T(n−j) + n − 1)
Σ_{j=1}^{n} T(j−1): this quantity varies from T(0) to T(n−1).
Σ_{j=1}^{n} T(n−j): this quantity varies from T(n−1) to T(0).
So every term T(0) to T(n−1) appears twice, and we can write:
T(n) = (2/n) Σ_{j=0}^{n−1} T(j) + (1/n) Σ_{j=1}^{n} (n − 1)
     = (2/n) Σ_{j=0}^{n−1} T(j) + n(n−1)/n
     = (2/n) Σ_{j=0}^{n−1} T(j) + (n − 1)   ...(3)
     = O(n log₂ n)
∴ The expected number of comparisons is O(n log n), but the worst-case time complexity of randomised quicksort is still O(n²).
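A Python sketch of the randomised variant, reusing the partition function sketched earlier:

import random

def randomised_partition(arr, p, r):
    i = random.randint(p, r)               # uniform pivot index in [p, r]
    arr[r], arr[i] = arr[i], arr[r]        # move the pivot to the end
    return partition(arr, p, r)

def randomised_quicksort(arr, p, r):
    if p < r:
        q = randomised_partition(arr, p, r)
        randomised_quicksort(arr, p, q - 1)
        randomised_quicksort(arr, q + 1, r)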



Note:
● The running time of randomised quicksort does not depend on the input ordering; it depends on the random numbers provided by the generator.
● Thus, for the same input, the program might take 2 seconds today and 5 seconds tomorrow.
● The average time taken over many different runs of the program gives us the expected time.
● This is the same as saying that we take the average over all possible random number sequences provided by the generator.

Solution of the quicksort recurrence relation:
T(n) = (2/n) Σ_{i=0}^{n−1} T(i) + (n − 1)   ...(1)  (from eq. (3))
Note that T(0) = 0 and T(1) = 0.
Now,
T(n−1) = (2/(n−1)) Σ_{i=0}^{n−2} T(i) + (n − 2)
or, T(n−1) − (n − 2) = (2/(n−1)) Σ_{i=0}^{n−2} T(i)
Multiplying both sides by (n−1)/n:
(2/n) Σ_{i=0}^{n−2} T(i) = ((n−1)/n) (T(n−1) − n + 2)   ...(2)
From (1) and (2):
T(n) = ((n−1)/n)(T(n−1) − n + 2) + (2/n) T(n−1) + (n − 1)
     = ((n−1+2)/n) T(n−1) − (n−1)(n−2)/n + (n − 1)
     = ((n+1)/n) T(n−1) + 2(n−1)/n

So we get T(n) = ((n+1)/n) T(n−1) + 2(n−1)/n.

Since 2(n−1)/n < 2 (for example, with n = 5, 2 × 4/5 = 1.6 < 2), repeatedly expanding gives
T(n) < ((n+1)/n) T(n−1) + 2
     < ((n+1)/n) ((n/(n−1)) T(n−2) + 2) + 2
     = ((n+1)/(n−1)) T(n−2) + 2(n+1)/n + 2
     < ((n+1)/(n−1)) (((n−1)/(n−2)) T(n−3) + 2) + 2(n+1)/n + 2
     = ((n+1)/(n−2)) T(n−3) + 2(n+1) (1/(n−1) + 1/n) + 2
     ⋮
     < ((n+1)/1) T(0) + 2(n+1) (1/2 + 1/3 + … + 1/n) + 2
Since T(0) = 0 and 1 + 1/2 + 1/3 + … + 1/n is the harmonic series H(n) = Θ(log n):
T(n) < 2(n+1)(H(n) − 1) + 2
     = O(n log n)



(n + 1) 1 1 1 1 Heap
1  as a tree:
< T(0) + 2(n + 1)  + + + ... + +  + 2
(n − n + 1)  n (n − 1) (n − 2) 3 y2 Root:
First element in the array
(n + 1) 1 1 1 1 1
T(0) + 2(n + 1)  + + + ... + +  + 2 y parent i) =  i 2 :
n − n + 1)  n (n − 1) (n − 2) 3 2
Returns index of node’s parent
1 1 1 1 y left i) = 2i :
< 2(n + 1)  + + + ... +  + 2
n (n − 1) (n − 2)
 2
  Returns index of node’s left child.
y right i) = 2i + 1:
1 1 1 1 1 1 1 1 
 + + + + + + ... + + =harmonic seriesReturns

index of node’s right child.
 1 2 3 4 5 6 (n − 1) n Height
 of a binary heap is O(logn).
 = (log n) 
There are two types of binary heaps:
= harmonic series
Max–heaps and min–heaps
= (log n) In both the heaps, the values in the nodes
satisfy the heap property.
< 2(n + 1)(log n − 1) + 2
In a max heap, the max–heap property is that
= O(nlog n) for every node i other than the root node,
A[PARENTi)] ≥ A[i]
Heap sort: A min–heap property is that for every node i
y Heap is a data structure used to store other than the root,
and manage information. It is the design A[PARENT i)] ≤ A[i]
technique used by heapsort.
y A binary heap is an almost complete The smallest valued node in a min-heap is
binary tree. at the root.
y For the heap sort algorithm, we use max–
16
heap.
14 10
16 14 10 8 7 9 3 2 4 1
y Min–heaps commonly implement priority
8 7 9 3 queues.
2 4 1 Heap operations:
y Build–Max–Heap:
y The tree is said to be almost complete Produce a max–heap from an unordered
if all the levels are filled except the last array
level. The nodes in the last level are also y Max_heapify:
filled from left to right. Changes every single violation of the heap
Binary tree representation: property in every subtree at its root.
y Insert, extract_max, heap sort
1 1 Max_heapify:
2 3 2 3
y Assume that the trees rooted at left i) and
4 5 6 7 4 5 6 7
right i) are max_heap.
8 9 10 11 12 13 14 15 8 9 10 y If element A[i] violates the max_heap
property, correct violation by “trickling”
A full binary tree of height 3. A complete binary tree with
10 nodes and height 3. element A[i] down the tree, making the
subtree rooted at index i a max_heap.



Max-heapify (example): (tree diagrams not reproduced.)

● Max-Heapify pseudocode:
MAX–HEAPIFY (Arr, i)
1. l ← left(i)
2. r ← right(i)
3. if (l ≤ Arr.size_of_heap && Arr[l] > Arr[i])
4.   maximum ← l
5. else maximum ← i
6. if (r ≤ Arr.size_of_heap && Arr[r] > Arr[maximum])
7.   maximum ← r
8. if (maximum ≠ i)
9.   swap Arr[i] and Arr[maximum]
10.  MAX–HEAPIFY (Arr, maximum)

Note:
The number of nodes at height h in a complete binary tree with n total nodes is at most ⌈n/2^(h+1)⌉ (counting the leaves as height 0).

● Observe, however, that MAX-HEAPIFY takes O(1) time for nodes that are one level above the leaves and, in general, O(L) time for nodes that are L levels above the leaves.
● We have ⌈n/4⌉ nodes one level above the leaves, ⌈n/8⌉ nodes two levels above the leaves, …, and 1 root node that is log n levels above the leaves.
● The total amount of work in the BUILD-MAX-HEAP for loop can therefore be summed as
(n/4)(1c) + (n/8)(2c) + (n/16)(3c) + … + 1(c log n)
Substituting n/4 = 2^k:
c · 2^k (1/2^0 + 2/2^1 + 3/2^2 + … + (k+1)/2^k)
The term in brackets is bounded by a constant.
● This means that BUILD-MAX-HEAP takes O(n) time.

● BUILD-MAX-HEAP converts the array A[1…n] into a max-heap, where n = A.length.
● The elements in the subarray A[(⌊n/2⌋+1)…n] are all leaves of the tree, so each is a 1-element heap to begin with.

BUILD-MAX-HEAP (A):
1. A.size_of_heap = A.length
2. for i = ⌊A.length/2⌋ downto 1
3.   MAX–HEAPIFY (A, i)
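A Python sketch of both procedures (0-based array, so the children of node i are 2i+1 and 2i+2; names are illustrative):

def max_heapify(a, i, heap_size):
    l, r = 2 * i + 1, 2 * i + 2
    largest = l if l < heap_size and a[l] > a[i] else i
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]   # trickle the violation down
        max_heapify(a, largest, heap_size)

def build_max_heap(a):
    # Nodes len(a)//2 .. len(a)-1 are leaves, i.e., already 1-element heaps.
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))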



Build-max-heap (example):
A = 4 1 3 2 16 9 10 14 8 7
(Tree diagrams not reproduced; the sequence of calls is:)
MAX-HEAPIFY (A, 5): no change
MAX-HEAPIFY (A, 4): swap A[4] and A[8]
MAX-HEAPIFY (A, 3): swap A[3] and A[7]
MAX-HEAPIFY (A, 2): swap A[2] and A[5], then A[5] and A[10]
MAX-HEAPIFY (A, 1): swap A[1] with A[2], A[2] with A[4], A[4] with A[9]
Result: A = 16 14 10 8 7 9 3 2 4 1

Heap Sort

Sorting strategy:
1. Build a max-heap from the unordered array.
2. The maximum element is now at A[1].
3. Exchange the values of A[n] and A[1], which moves the maximum to the end of the array.
4. Delete node n from the heap.
5. The new root may violate the max-heap property, but its children are still max-heaps. Run MAX-HEAPIFY to restore the property.
6. Go to step 2 if the heap is not empty.

Heapsort (A)
{
1. BUILD–MAX–HEAP (A)
2. for (i = A.length downto 2)
3.   swap A[1] with A[i]
4.   A.heap–size = A.heap–size − 1
5.   MAX–HEAPIFY (A, 1)
}



Heap-sort demo: (sequence of tree diagrams not reproduced: repeatedly swap A[1] with the last element of the heap, mark that slot as no longer part of the heap, and call MAX-HEAPIFY (A, 1).)

Running time:
● The heap is empty after n iterations.
● Each iteration does a swap and a MAX-HEAPIFY operation, which takes O(log n) time.
● So heapsort takes O(n log n) time in total.

Note:
Heapsort is an efficient algorithm and has its own uses, but a correctly implemented quicksort usually beats it in practice.
● The most common application of heaps is priority queues.
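A Python sketch of the full heapsort, built on the max_heapify and build_max_heap functions sketched earlier:

def heapsort(a):
    build_max_heap(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]    # move the current maximum to its final slot
        max_heapify(a, 0, end)         # restore the heap on the shrunk prefix

data = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
heapsort(data)   # data becomes [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]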



● As with heaps, priority queues are of two types:
  max-priority queues and min-priority queues.
● A max-priority queue supports the following operations:
  INSERT(S, x): adds an element x to the set S, i.e., S = S ∪ {x}.
  MAXIMUM(S): returns the largest element in the set.
  EXTRACT-MAX(S): returns the largest element in S and deletes it from the set.
  INCREASE-KEY(S, x, k): increases the value of x's key to k; it is assumed that k is at least as large as x's current key.
● Correspondingly, a min-priority queue supports the operations MINIMUM, EXTRACT-MIN, INSERT, and DECREASE-KEY.
● The procedure HEAP-MAXIMUM on a max-priority queue implements the MAXIMUM operation in θ(1) time.

Heap–maximum (A):
1. return A[1]

The procedure HEAP–EXTRACT–MAX implements the EXTRACT–MAX operation.

Heap–extract–max (A):
1. if A.heap–size < 1
2.   error "heap underflow"
3. max = A[1]
4. A[1] = A[A.heap–size]
5. A.heap–size = A.heap–size − 1
6. MAX–HEAPIFY (A, 1)
7. return max

● HEAP-EXTRACT-MAX takes O(log n) time, since it performs only O(1) work on top of the O(log n) time for MAX-HEAPIFY.
● The procedure HEAP-INCREASE-KEY implements the INCREASE-KEY operation.

Heap_increase_key (Arr, i, key):
1. if key < Arr[i]
2.   error /* the new key is smaller than the current element */
3. Arr[i] ← key
4. while (i > 1) && (Arr[PARENT(i)] < Arr[i])
5.   exchange Arr[i] with Arr[PARENT(i)]
6.   i ← PARENT(i)

HEAP-INCREASE-KEY takes O(log n) time on an n-element heap: the path from the node updated in line 3 to the root has length O(log n).

● The procedure MAX–HEAP–INSERT implements the INSERT operation.

Max–heap–insert (A, key):
1. A.heap–size = A.heap–size + 1
2. A[A.heap–size] = −∞
3. HEAP–INCREASE–KEY (A, A.heap–size, key)

The running time of MAX–HEAP–INSERT on an n-element heap is O(log n).

Note:
Time taken to perform some operations on a max-heap is given below:

Find max: O(1) | Delete max: O(log n) | Insert: O(log n) | Increase key: O(log n) | Decrease key: O(log n) | Find min: O(n) | Search: O(n) | Delete: O(n)
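A Python sketch of the three max-priority-queue operations above (0-based array, so PARENT(i) is (i−1)//2; max_heapify as sketched earlier):

def heap_extract_max(a, heap_size):
    if heap_size < 1:
        raise IndexError("heap underflow")
    mx = a[0]
    a[0] = a[heap_size - 1]            # move the last heap element to the root
    max_heapify(a, 0, heap_size - 1)   # heap size shrinks by one
    return mx

def heap_increase_key(a, i, key):
    if key < a[i]:
        raise ValueError("new key is smaller than current key")
    a[i] = key
    while i > 0 and a[(i - 1) // 2] < a[i]:   # bubble up past smaller parents
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

def max_heap_insert(a, key):
    a.append(float("-inf"))            # new leaf with a -infinity sentinel
    heap_increase_key(a, len(a) - 1, key)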



Rack Your Brain

● Give an O(n log k)-time algorithm to merge k sorted lists into one sorted list, where n is the total number of elements in all the input lists. Hint: Use a min-heap for k-way merging.
● Write pseudocode for the procedures HEAP-MINIMUM, HEAP-EXTRACT-MIN, HEAP-DECREASE-KEY, and MIN-HEAP-INSERT that implement a min-priority queue with a min-heap.

Previous Years' Question

The number of elements that can be sorted in O(log n) time using heap sort is
[CS 2013]
(A) Θ(1)
(B) Θ(√(log n))
(C) Θ(log n / log log n)
(D) Θ(log n)
Solution: (C)

Previous Years' Question

Consider a max heap, represented by the array 40, 30, 20, 10, 15, 16, 17, 8, 4. Now consider that a value 35 is inserted into this heap. After insertion, the new heap is
[CS 2015]
(A) 40, 30, 20, 10, 15, 16, 17, 8, 4, 35
(B) 40, 35, 20, 10, 30, 16, 17, 8, 4, 15
(C) 40, 30, 20, 10, 35, 16, 17, 8, 4, 15
(D) 40, 35, 20, 10, 15, 16, 17, 8, 4, 30
Solution: (B)

Previous Years' Question

Consider the following array of elements: (89, 19, 50, 17, 12, 15, 2, 5, 7, 11, 6, 9, 100). The minimum number of interchanges needed to convert it into a max-heap is
[CS 2015]
(A) 4  (B) 5
(C) 2  (D) 3
Solution: (D)

Insertion Sort

Insertion sort is an efficient algorithm for sorting arrays with a small number of elements.
● It works the way you sort playing cards in your hand.
● Start with an empty left hand and pick up one card at a time from the cards that are face down on the table.
● Compare the card in the right hand with the cards in the left hand, from right to left, and place it in its correct position.
● At any given time, the cards in the left hand are sorted; they were the top cards of the pile on the table.

Pseudocode for insertion sort:
Insertion–sort (A):
1. for j from 2 to A.length
2.   key = A[j]
3.   // Insert A[j] into the sorted sequence A[1…j−1]
4.   i = j − 1
5.   while i > 0 and A[i] > key
6.     A[i+1] = A[i]
7.     i = i − 1
8.   A[i+1] = key

● INSERTION–SORT takes as a parameter an array A[1..n] containing a sequence of length n that is to be sorted.
● The algorithm sorts the input within the array A, with at most a constant number of elements stored outside the array at any time.
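The same procedure in Python, as a minimal sketch (0-based indices):

def insertion_sort(a):
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                 # drop the key into the gap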



● The operation of INSERTION–SORT on the array A = {5, 2, 4, 6, 1, 3}:
(A) 5 2 4 6 1 3
(B) 2 5 4 6 1 3
(C) 2 4 5 6 1 3
(D) 2 4 5 6 1 3
(E) 1 2 4 5 6 3
(F) 1 2 3 4 5 6

Analysis of Insertion Sort

Worst-case behaviour on an array of length n:
● The inner loop may be executed i times for each j
● i shifts per iteration ⇒ O(n²) total shifts

Rack Your Brain

What sort of input leads to the worst-case time complexity in insertion sort?

Best-case behaviour on an array of length n:
● When the input array is already sorted
● The inner loop is executed 0 times ⇒ 0 shifts
● The while condition is an entry condition (always evaluated at least once)
So, O(n) comparisons suffice in the best case to verify that the array is indeed sorted.

NOTE:
INSERTION_SORT's running time depends on how nearly sorted the input sequence is. It performs better on almost-sorted sequences.

Rack Your Brain

Rewrite the INSERTION-SORT procedure to sort into non-increasing instead of non-decreasing order.

Binary Insertion Sort
BINARY–INSERTION–SORT (A, n)
for j ← 2 to n
  insert key A[j] into the (already sorted) subarray A[1...j−1],
  using binary search to find its correct position

● Binary search takes O(log n) per insertion. However, shifting the elements after the insertion point still takes O(n) time per insertion.

Complexity:
O(n log n) comparisons and O(n²) shifts, so the overall time complexity is still O(n²).

Previous Years' Question

Consider the following array:
23 32 45 69 72 73 89 97
Which algorithm out of the following options uses the least number of comparisons (among the array elements) to sort the above array in ascending order?
[CS-2021 (Set-1)]
(A) Selection sort
(B) Merge sort
(C) Insertion sort
(D) Quicksort using the last element as pivot
Solution: (C)

Bubble sort:
Bubble sort is a popular yet inefficient sorting algorithm.
● It works by repeatedly swapping adjacent elements that are out of order.

Bubble sort (Arr)
1. for i ← 1 to Arr.length − 1
2.   for j ← Arr.length downto i + 1
3.     if Arr[j] < Arr[j−1]
4.       swap Arr[j] and Arr[j−1]
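A direct Python transcription of the pseudocode above (0-based indices):

def bubble_sort(a):
    for i in range(len(a) - 1):
        for j in range(len(a) - 1, i, -1):   # bubble the minimum toward index i
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]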



Analysis:

Worst case:
● The i-th iteration of the for loop of lines 1-4 causes n−i iterations of the for loop of lines 2-4, each taking constant time, so the worst-case running time of bubble sort is O(n²).

Selection Sort
Selection sort repeatedly finds the minimum element and places it at the beginning.
● It divides the array into two subarrays: one sorted and one unsorted.
● The minimum element of the unsorted subarray is found and appended at the end of the sorted subarray.

Example:
A = {1, 20, 6, 30, 42}
Find the minimum element in A[0...4] and place it at the beginning.
Find the minimum element in A[1...4] and place it at the beginning of A[1...4].
Find the minimum element in A[2...4] and place it at the beginning of A[2...4].
Find the minimum element in A[3...4] and place it at the beginning of A[3...4].
(The intermediate array states are not reproduced.)

Pseudocode of selection sort:
Selection–sort (A):
for j ← 1 to n−1
  smallest ← j
  for i ← j + 1 to n
    if A[i] < A[smallest]
      smallest ← i
  exchange A[j] ↔ A[smallest]

Time complexity:
● (n−1) comparisons are needed to find the minimum element in an array of n elements.
● The unsorted part shrinks to (n−1) elements after the minimum element is placed in its proper position, and then (n−2) comparisons are required to find the minimum of the unsorted part.
Therefore,
(n−1) + (n−2) + … + 1 = n(n−1)/2 comparisons and at most n swaps.
∴ Time complexity = O(n²)

Counting Sort
● Counting sort assumes every element in the n-sized array is an integer in the range 0 to k, for some integer k.

The algorithm:
Input: Array A[1…n], where each A[j] ∈ {0, 1, 2, …, k}
Output: The array B[1…n] holding the sorted output; the array count[0…k] provides temporary working storage.

Counting_sort (array, size):
1. max ← maximum element in the array; create an array count[0…max] initialized with zeroes
2. for j ← 1 to size: increment count[array[j]], so that count holds the total count of each unique value
3. for i ← 1 to max: replace count[i] with the cumulative sum count[i] + count[i−1], stored in the count array itself
4. for j ← size downto 1: place array[j] at index count[array[j]] of the output and decrement count[array[j]]



Counting sort example:

Input: A[1…8] = 2 5 3 0 2 3 0 3  (values in the range 0…5)
After counting: count[0…5] = 2 0 2 3 0 1
After cumulative sums: count[0…5] = 2 2 4 7 7 8
Output: B[1…8] = 0 0 2 2 3 3 3 5
(The intermediate placement diagrams are not reproduced.)
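A Python sketch of counting sort on this example (a stable version; the backward scan in the last loop is what preserves stability):

def counting_sort(a, k):
    count = [0] * (k + 1)
    for v in a:                  # histogram of the values
        count[v] += 1
    for i in range(1, k + 1):    # cumulative: count[v] = number of elements <= v
        count[i] += count[i - 1]
    out = [0] * len(a)
    for v in reversed(a):        # scan backwards so equal keys keep their order
        count[v] -= 1
        out[count[v]] = v
    return out

counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5)   # [0, 0, 2, 2, 3, 3, 3, 5]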

Finding the maximum element takes O(n) time. Creating the count array and initializing it to zeroes takes O(k). Therefore, the total time complexity is O(n+k), and the space complexity is O(k) for the new array.
Since the sorted elements are written back into the array, this variant is described as in-place.

Rack Your Brain

Why don't we always use counting sort?
Hint: It depends on the range k of the elements.

Bucket Sort
● Counting sort assumes every element in the array of size n is an integer in the range 0 to k, where k is an integer.
● Bucket sort instead assumes the input is drawn from a uniform distribution, and runs in O(n) average time.
● Like counting sort, bucket sort is fast because it assumes something about the input.
● Counting sort assumes that the input consists of n integers in a small range, whereas bucket sort assumes that the input is generated by a random process that distributes elements uniformly and independently over the interval [0, 1).
● Bucket sort divides the [0, 1) interval into n equal-sized intervals (known as buckets) and distributes the n input elements into the buckets.
● Since the numbers are uniformly and independently distributed over [0, 1), the chance that many elements fall into the same bucket is very small.



● To get the output, sort the elements in each bucket and list the buckets in order to get the final sorted order.
● Bucket sort assumes that the input is an array A of n elements and that each element A[i] satisfies 0 ≤ A[i] < 1.
● The code requires an auxiliary array B[0...n−1] of linked lists (buckets) and assumes that there is a mechanism for maintaining such lists.

Bucket-sort (A):
1. let B[0...n−1] be a new array
2. n = A.length
3. for itr ← 0 to n−1
4.   make B[itr] an empty list
5. for itr ← 1 to n
6.   insert A[itr] into list B[⌊n · A[itr]⌋]
7. for itr ← 0 to n−1
8.   sort list B[itr] with insertion sort
9. concatenate the lists B[0], B[1], …, B[n−1] together in order

Example:
Let array A have 10 elements; each A[i] is placed in bucket ⌊10 · A[i]⌋, and B[0…9] is the array of sorted lists (buckets) after line 8 of the algorithm. (Diagrams not reproduced.)

Time complexity:

Best case:
● If the elements are uniformly distributed, then every insertion takes O(1) time (assuming that each number to be inserted goes at the beginning of its bucket's list).
● We insert all n numbers, so the total insertion cost is O(n); we then scan all the buckets once, which is again O(n).
● Therefore, T(n) = O(n).

Worst case:
● Assume we have k buckets. A bucket might get ⌈n/k⌉ elements as a list; if insertion sort is applied to the elements present in a bucket, each element may have to be compared with up to ⌈n/k⌉ others, and in that case the time complexity is O((n/k) · n) = O(n²).
● Therefore, O(n²) is the worst-case time complexity.

Space complexity:
● If the number of buckets is k, then k cells are dedicated to the buckets, and all n elements together need space of size n.
● So, the space complexity is O(n + k).

Radix sort:
● Like the counting and bucket sorting algorithms, radix sort assumes the input has some structure.
● It assumes that the input values can be sorted digit by digit, i.e., all numbers in the array have d digits.
● In radix sort, sorting is done on each digit, from the last (least significant) to the first.
● A stable sorting algorithm is used to sort the numbers by the least significant digit, then by the next one, and so on.
● Counting sort gives a time complexity of O(nd) ≈ O(n) for this procedure in radix sort (for constant d).
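Before turning to the radix-sort algorithm itself, here is a minimal Python sketch of the bucket sort just described (inputs are assumed to lie in [0, 1); insertion_sort as sketched earlier):

def bucket_sort(a):
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(n * x)].append(x)   # bucket index = floor(n * x)
    for b in buckets:
        insertion_sort(b)               # each bucket is expected to be tiny
    return [x for b in buckets for x in b]   # concatenate in order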



Algorithm:
RADIX-SORT (A, d)
/* Each key in A[1…n] is a d-digit integer; if the keys do not all have d digits, make them d-digit numbers by padding with zeroes. Digits are numbered 1 to d from right to left. */
for i ← 1 to d
  use a stable sorting algorithm to sort array A on digit i

Example:
● The operation of radix sort on a list of seven 3-digit numbers sorts on the ones digit, then the tens digit, then the hundreds digit. (The column-by-column diagram is not reproduced.)

● Let n d-digit numbers be given, in which each digit can take k possible values.
● RADIX-SORT sorts the input numbers correctly in Θ(d(n+k)) time if the intermediate stable sorting algorithm it uses takes Θ(n+k) time.
● When each digit is between 0 and k−1 and k is not too large, counting sort is the best choice. Each pass over the n d-digit numbers then takes Θ(n+k) time, and there are d such passes, so radix sort takes Θ(d(n+k)) time in total.
● When d is constant and k = O(n), radix sort runs in linear time. More generally, we have some flexibility in how to break each key into digits.

Searching
● Searching is the process of locating a specific element among a set of elements.
● If the required element is found, the search is considered successful; otherwise, it is unsuccessful.

Searching algorithms: linear search and binary search.

Linear Search
● Given an array arr[] of n elements and a search element x to be searched for in arr[]:
● Begin with the leftmost element of arr[] and compare x with each element of arr[] one by one.
● Return the index if x matches an element.
● Return −1 if x does not match any of the elements.

Example:
Find 20, i.e., x = 20.
Index:  0  1  2  3  4  5  6  7
arr:   10 30 50 70 60 80 20 40
→ return 6

The time complexity of linear search is O(n).

Note:
Binary search and hash tables are used far more in practice than linear search, since they allow faster searching.

Binary Search
● Binary search is one of the most efficient search algorithms available.
● It is based on the divide and conquer strategy.

Note:
The binary search algorithm can be applied only on sorted arrays.

● As a result, the elements must be arranged in a certain order.
● If the elements are numbers, use either ascending or descending order.
● If the elements are strings, use dictionary order.

To use binary search on an array that is not sorted:
● Sort the array first using a sorting technique.



● After that, employ the binary search algorithm.

Binary search (A, low, high, key)
  if high < low
    return (low − 1)   // unsuccessful search
  mid ← low + ⌊(high − low) / 2⌋
  if key = A[mid]
    return mid
  else if key < A[mid]
    return Binary search (A, low, mid−1, key)
  else
    return Binary search (A, mid+1, high, key)

● Every call to the binary search algorithm divides the array into two equal halves and compares the key with the middle element; based on that comparison, it recurses into the left or right half.
∴ T(n) = T(n/2) + C
By the master method, T(n) = Θ(log n).
● Binary search needs O(1) space in the iterative implementation, and O(log n) recursion call stack space in the recursive implementation.

Chapter Summary

Sorting Algorithm | Best Case   | Average Case | Worst Case  | Space Complexity (Worst Case)
Bubble Sort       | O(n)        | θ(n²)        | O(n²)       | O(1)
Insertion Sort    | O(n)        | θ(n²)        | O(n²)       | O(1)
Selection Sort    | O(n²)       | θ(n²)        | O(n²)       | O(1)
Merge Sort        | O(n log n)  | θ(n log n)   | O(n log n)  | O(n)
Quick Sort        | O(n log n)  | θ(n log n)   | O(n²)       | O(n)
Heap Sort         | O(n log n)  | θ(n log n)   | O(n log n)  | O(1)

Sorting Algorithm | In-place | Stable
Bubble Sort       | Yes      | Yes
Insertion Sort    | Yes      | Yes
Selection Sort    | Yes      | No
Merge Sort        | No       | Yes
Quick Sort        | Yes      | No

● An algorithm that can sort a dynamic input added while sorting is called an online sorting algorithm.
E.g., the insertion sort algorithm.
● The application and implementation details play a big role in determining which algorithm is best.
● Quicksort outperforms heapsort in practice and is widely used for sorting large input arrays, since its average-case time complexity is O(n log n).
● If stability is required, merge sort is the best choice (at the cost of O(n) extra space).
● When the input array is nearly sorted or the input size is small, insertion sort is preferable.

