ADA Lab Manual
COMPUTER ENGINEERING
Laboratory Manual
DEPARTMENT OF COMPUTER ENGINEERING
VISION
To be recognized for the quality education and research in the field of Information Technology
known for its accomplished graduates.
MISSION
The mission of Information Technology Department, Silver Oak College of Engineering and
Technology, Ahmedabad is to:
1. Continually improve the standard of our graduates by engaging in innovative teaching
learning methods with high caliber motivated faculty members keeping in-line with the rapid
technological advancements.
2. Promote and support research activities over a wide range of academic interests among
students and staff for growth of individual knowledge and continuous learning.
3. Provide an education system that promotes innovation, creativity, entrepreneurial spirit,
leadership as well as freedom of thought with emphasis on professionalism and ethical
behavior
PREFACE
It gives us immense pleasure to present the first edition of Analysis and Design of
Algorithms Practical Book for the B.E. 3rd year students of SILVER OAK GROUP OF
INSTITUTES.
The Analysis and Design of Algorithms theory and laboratory course at SILVER OAK
COLLEGE OF ENGINEERING & TECHNOLOGY, AHMEDABAD is designed in such a
way that students develop a basic understanding of the subject in the theory classes and then try
their hands at the experiments to implement the various algorithms and techniques learnt during the
theoretical sessions. The main objective of the ADA laboratory course is learning ADA through
algorithms. The objective of this ADA Practical Book is to provide a comprehensive source for all
the experiments included in the ADA laboratory course.
We acknowledge the authors and publishers of all the books which we have consulted
while developing this Practical book. Hopefully this Analysis and Design of Algorithms Practical
Book will serve the purpose for which it has been developed.
Lab Manual Revised By: Prof. Nirzari Patel, Silver Oak College of Engineering and Technology
Head of Department:...........................................
TABLE OF CONTENT

Sr. No | Experiment Title | Page No (From - To)
1  | Implementation and Time analysis of sorting algorithms: a) Bubble sort, b) Selection sort, c) Insertion sort. | 1 - 7
2  | Implementation and Time analysis of sorting algorithms: a) Merge sort, b) Quick sort. | 8 - 16
3  | Implementation and Time analysis of linear and binary search algorithm. | 17 - 22
4  | Implementation of max-heap sort algorithm. | 23 - 27
5  | Implementation of a knapsack problem using dynamic programming. | 28 - 31
6  | Implementation of chain matrix multiplication using dynamic programming. | 32 - 35
7  | Implementation of making a change problem using dynamic programming. | 36 - 38
8  | Implementation of a knapsack problem using greedy algorithm. | 39 - 44
9  | Implementation of Graph and Searching (DFS). | 45 - 50
10 | Implementation of Graph and Searching (BFS). | 51 - 57
11 | Implement Prim's algorithm. | 58 - 62
12 | Implement Kruskal's algorithm. | 63 - 68
(The Date of Start, Date of Completion, Sign and Marks (out of 10) columns of the original index are to be filled in by the student.)
PRACTICAL SET - 1
AIM: Implementation and Time analysis of different sorting algorithms.
Bubble Sort:
Bubble Sort is a simple algorithm which is used to sort a given set of n elements provided in the
form of an array. Bubble Sort compares the elements one by one and sorts them based on their
values.
If the given array has to be sorted in ascending order, then bubble sort will start by comparing the
first element of the array with the second element, if the first element is greater than the second
element, it will swap both the elements, and then move on to compare the second and the third
element, and so on. If we have total n elements, then we need to repeat this process for n-1 times.
It is known as bubble sort, because with every complete iteration the largest element in the given
array, bubbles up towards the last place or the highest index, just like a water bubble rises up to
the water surface.
Sorting takes place by stepping through all the elements one-by-one and comparing it with the
adjacent element and swapping them if required.
Following are the steps involved in bubble sort (for sorting a given array in ascending order):
1. Starting with the first element (index = 0), compare the current element with the next
element of the array.
2. If the current element is greater than the next element of the array, swap them.
3. If the current element is less than the next element, move to the next element. Repeat Step 1.
Following are the Time and Space complexity for the Bubble Sort algorithm.
● Worst Case Time Complexity [ Big-O ]: O(n^2)
● Best Case Time Complexity [Big-omega]: O(n)
● Average Time Complexity [Big-theta]: O(n^2)
● Space Complexity: O(1)
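Since the aim of this practical set includes time analysis, the running time of any of the sorting programs that follow can be measured with the standard C library clock() function. The sketch below is only an illustration (it uses bubble sort on the same sample array); the same wrapper works for selection and insertion sort.
#include <stdio.h>
#include <time.h>

/* Bubble sort is used here only as an example; selection or insertion sort
   can be timed in exactly the same way. */
void bubble_sort(int arr[], int n)
{
    int i, j, swp;
    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - i - 1; j++)
            if (arr[j] > arr[j + 1]) {
                swp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = swp;
            }
}

int main()
{
    int arr[] = {80, 70, 50, 90, 10, 45, 20, 15, 55, 60};
    int n = 10;
    clock_t start, end;

    start = clock();                       /* CPU time before sorting */
    bubble_sort(arr, n);
    end = clock();                         /* CPU time after sorting  */

    printf("Time taken to sort %d elements: %f seconds\n",
           n, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}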
AIM 1.1: Write a program to implement Bubble sort.
Code:
#include<stdio.h>
void main(){
int arr[]={80,70,50,90,10,45,20,15,55,60};
int k=0,i=0,j=0;
for(k=0;k<10;k++){
printf("%d ",arr[k]);
}
printf("\n");
for(i=0;i<10;i++){
for(j=0;j<10-i-1;j++){
if(arr[j]>arr[j+1]){
int swp=arr[j+1];
arr[j+1]=arr[j];
arr[j]=swp;
}
}
}
k=0;
for(k=0;k<10;k++){
printf("%d ",arr[k]);
}
}
Output:
80 70 50 90 10 45 20 15 55 60
10 15 20 45 50 55 60 70 80 90
Selection Sort:
Selection sort is conceptually the simplest sorting algorithm. This algorithm will first find
the smallest element in the array and swap it with the element in the first position, then it will
find the second smallest element and swap it with the element in the second position, and it will
keep on doing this until the entire array is sorted.
It is called selection sort because it repeatedly selects the next-smallest element and swaps it into
the right place.
1. Starting from the first element, we search the smallest element in the array, and replace it with
the element in the first position.
2. We then move on to the second position, and look for the smallest element present in
the subarray, starting from index 1, till the last index.
3. We replace the element at the second position in the original array, or we can say at the first
position in the subarray, with the second smallest element.
4. This is repeated until the array is completely sorted.
The following will be the time and space complexity for selection sort algorithm:
● Worst Case Time Complexity [ Big-O ]: O(n^2)
● Best Case Time Complexity [Big-omega]: O(n^2)
● Average Time Complexity [Big-theta]: O(n^2)
● Space Complexity: O(1)
AIM 1.2: Write a program to implement Selection sort.
Code:
#include<stdio.h>
int main()
{
int n = 10, i, j, temp, min;
int arr[]={80,70,50,90,10,45,20,15,55,60};
for(i = 0; i < n; i++)
{
printf("%d ", arr[i]);
}
printf("\n");
for(i = 0; i < n - 1; i++)
{
min = i;
for (j = i + 1; j < n; j++)
if (arr[j] < arr[min])
min = j;
temp = arr[min];
arr[min] = arr[i];
arr[i] = temp;
}
i=0;
for(i = 0; i < n; i++)
{
printf("%d ", arr[i]);}}
Output:
80 70 50 90 10 45 20 15 55 60
10 15 20 45 50 55 60 70 80 90
Insertion Sort:
The insertion sort works in a slightly different way. It always maintains a sorted sublist in the
lower positions of the list. Each new item is then "inserted" into that sorted sublist such that the
sorted sublist is one item larger after each pass.
AIM 1.3: Write a program to implement Insertion sort.
Code:
#include<stdio.h>
int main()
{
    int i, j, key, k;
    int arr[]={80,70,50,90,10,45,20,15,55,60}, n=10;
    for(k=0;k<n;k++) printf("%d ",arr[k]);
    printf("\n");
    /* insertion sort: insert arr[i] into the sorted sublist arr[0..i-1] */
    for(i=1;i<n;i++){
        key=arr[i];
        for(j=i-1;j>=0 && arr[j]>key;j--)
            arr[j+1]=arr[j];
        arr[j+1]=key;
    }
    for(k=0;k<n;k++) printf("%d ",arr[k]);
    return 0;
}
Output:
80 70 50 90 10 45 20 15 55 60
10 15 20 45 50 55 60 70 80 90
1) The way a card game player arranges his cards as he picks them one by one can be compared to
2) The correct order of the efficiency of the following sorting algorithms according to their overall
running time comparison is
a) Insertion>selection>bubble b) Insertion>bubble>selection
c) Selection>bubble>insertion d) bubble>selection>insertion
3) Analyse the different sorting algorithms of Bubble, Selection and Insertion Sort.
Justify your answer: which one is the better sorting algorithm among them?
Best case time complexity of bubble, selection and insertion sort is O(n), O(n^2) and
O(n) respectively. Worst case time complexity of all three is O(n^2).
Insertion sort does fewer comparisons than bubble sort, so insertion sort is the best
among these three. Insertion sort also outperforms merge sort when the number of
elements is small.
PRACTICAL SET - 2
AIM: Implementation and Time analysis of Merge & Quick sort algorithms.
Merge Sort:
Merge sort is a divide-and-conquer algorithm based on the idea of breaking down a list into
several sublists until each sublist consists of a single element, and then merging those sublists in a
manner that results in a sorted list.
While comparing two sublists for merging, the first element of both lists is taken into
consideration. While sorting in ascending order, the element that is of a lesser value becomes a
new element of the sorted list. This procedure is repeated until both the smaller sublists are empty
and the new combined sublist comprises all the elements of both the sublists.
Algorithm:
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves: middle m = (l + r) / 2
2. Call mergeSort for the first half: Call mergeSort(arr, l, m)
3. Call mergeSort for the second half: Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in steps 2 and 3: Call merge(arr, l, m, r)
The following will be the time and space complexity for merge sort algorithm:
Worst Case Time Complexity [ Big-O ]: O(n log n)
Best Case Time Complexity [Big-omega]: O(n log n)
Average Time Complexity [Big-theta]: O(n log n)
Space Complexity: O(n)
Example:
2.1 : Write a program to implement Merge sort.
Code:
#include<stdio.h>

void merge(int a[],int i1,int j1,int i2,int j2)
{
    int temp[50];
    int i=i1,j=i2,k=0;
    /* merge the two sorted runs a[i1..j1] and a[i2..j2] into temp[] */
    while(i<=j1 && j<=j2)
    {
        if(a[i]<a[j])
            temp[k++]=a[i++];
        else
            temp[k++]=a[j++];
    }
    while(i<=j1)
        temp[k++]=a[i++];
    while(j<=j2)
        temp[k++]=a[j++];
    /* copy the merged run back into the original array */
    for(i=i1,j=0;i<=j2;i++,j++)
        a[i]=temp[j];
}

void mergesort(int a[],int i,int j)
{
    int mid;
    if(i<j)
    {
        mid=(i+j)/2;
        mergesort(a,i,mid);      /* sort the left half  */
        mergesort(a,mid+1,j);    /* sort the right half */
        merge(a,i,mid,mid+1,j);  /* merge the two sorted halves */
    }
}

int main()
{
    int n=10,i;
    int arr[]={80,70,50,90,10,45,20,15,55,60};
    for(i=0;i<n;i++)
        printf("%d ",arr[i]);
    printf("\n");
    mergesort(arr,0,n-1);
    for(i=0;i<n;i++)
        printf("%d ",arr[i]);
    return 0;
}
Output:
80 70 50 90 10 45 20 15 55 60
10 15 20 45 50 55 60 70 80 90
Quick Sort:
Quick sort is based on the divide-and-conquer approach based on the idea of choosing one
element as a pivot element and partitioning the array around it such that the left side of the pivot
contains all the elements that are less than the pivot element and the right side contains all
elements greater than the pivot.
It reduces the space complexity and removes the use of the auxiliary array that is used in merge
sort. Selecting a random pivot in an array results in an improved time complexity in most of the
cases.
Select the first or the last element of the array as the pivot element. First, we will see how the
partition of the array takes place around the pivot (here the highest index is taken as the pivot):
1. Choose the highest index value as the pivot.
2. Take two variables to point left and right of the list excluding the pivot.
3. left points to the low index.
4. right points to the high index.
5. While the value at left is less than the pivot, move right.
6. While the value at right is greater than the pivot, move left.
7. If both step 5 and step 6 do not match, swap left and right.
8. If left >= right, the point where they meet is the new pivot position.
Algorithm:
quickSort(left, right)
if right-left <= 0
return
else
pivot = A[right]
partition = partitionFunc(left, right, pivot)
quickSort(left,partition-1)
quickSort(partition+1,right)
end if
end procedure
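The pseudocode above calls partitionFunc without defining it. A minimal sketch of one common partitioning scheme (Lomuto partitioning, consistent with pivot = A[right] above) is given below; the name storeIndex is illustrative, not from the original manual.
partitionFunc(left, right, pivot)
   storeIndex = left
   for i = left to right - 1
      if A[i] <= pivot
         swap A[i] and A[storeIndex]
         storeIndex = storeIndex + 1
   swap A[storeIndex] and A[right]   // place the pivot in its final position
   return storeIndex
end procedure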
Complexity Analysis of Quick Sort:
The following will be the time and space complexity for the quick sort algorithm:
Worst Case Time Complexity [ Big-O ]: O(n^2)
Best Case Time Complexity [Big-omega]: O(n log n)
Average Time Complexity [Big-theta]: O(n log n)
Space Complexity: O(log n) on average for the recursion stack (O(n) in the worst case)
Example:
2.2 : Write a program to implement Quick sort.
Code:
#include<stdio.h>
void quicksort(int number[],int first,int last){
int i, j, pivot, temp;
if(first<last){
pivot=first;
i=first;
j=last;
while(i<j){
while(number[i]<=number[pivot]&&i<last)
i++;
while(number[j]>number[pivot])
j--;
if(i<j){
temp=number[i];
number[i]=number[j];
number[j]=temp;
}
}
temp=number[pivot];
number[pivot]=number[j];
number[j]=temp;
quicksort(number,first,j-1);
quicksort(number,j+1,last);
}
}
int main(){
int i, count = 10;
int arr[]={80,70,50,90,10,45,20,15,55,60},k;
for(k=0;k<10;k++){
printf("%d ",arr[k]);
}
quicksort(arr,0,count-1);
k=0;
for(k=0;k<10;k++){
printf("%d ",arr[k]);
}
return 0;
}
Output:
80 70 50 90 10 45 20 15 55 60
10 15 20 45 50 55 60 70 80 90
Solve the below mentioned questionnaires:
Merge sort trace for the list 54 26 93 17 77 31 44 55 20:
Splitting:
54 26 93 17 | 77 31 44 55 20
54 26 | 93 17 | 77 31 | 44 55 20
54 | 26 | 93 | 17 | 77 | 31 | 44 | 55 20
55 | 20
Merging:
26 54 | 17 93 | 31 77 | 20 55 -> 20 44 55
17 26 54 93 | 20 31 44 55 77
17 20 26 31 44 54 55 77 93
PRACTICAL SET - 3
AIM: Implementation and Time analysis of linear and binary search algorithm.
Linear Search Algorithm:
Code:
#include<stdio.h>
void main(){
int arr[]={80 ,70 ,50 ,90 ,10 ,45 ,20 ,15 ,55 ,60};
printf("enter the key:");
int key,i;
scanf("%d",&key);
for(i=0;i<10;i++){
if(arr[i]==key){
printf(" found at index number %d",i);
break;
}
}
}
Output:
Enter the key:80
Found at index number 0
Binary Search Algorithm:
The binary search algorithm can be used only with a sorted list of elements. That means binary
search can be used only with a list of elements which are already arranged in order. Binary
search cannot be used for a list of elements which are in random order.
This search process starts by comparing the search element with the middle element in the
list. If both are matched, then the result is "element found". Otherwise, we check whether the
search element is smaller or larger than the middle element in the list. If the search element is
smaller, then we repeat the same process for left sublist of the middle element. If the search
element is larger, then we repeat the same process for right sublist of the middle element.
We repeat this process until we find the search element in the list or until we are left with a
sublist of only one element. If that element also doesn't match the search element, then
the result is "Element not found in the list".
Algorithm:
Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched
   Set lowerBound = 1
   Set upperBound = n
   while lowerBound <= upperBound
      set midPoint = (lowerBound + upperBound) / 2
      if A[midPoint] < x
         set lowerBound = midPoint + 1
      if A[midPoint] > x
         set upperBound = midPoint - 1
      if A[midPoint] == x
         EXIT: x found at location midPoint
   end while
end procedure
Code:
#include<stdio.h>
int binary_search(int a[],int low,int high,int key){
    while(low<=high){
        int mid=(low+high)/2;         /* compare the key with the middle element */
        if(a[mid]==key) return mid;
        if(a[mid]<key) low=mid+1; else high=mid-1;
    }
    return -1;                        /* key not present in the list */
}
void main(){
    int arr[]={10 ,15 ,20 ,40 ,45 ,50 ,55 ,60 ,70 ,80};
    int key;
    printf("enter the key:");
    scanf("%d",&key);
    printf("found at %d",binary_search(arr,0,9,key));
}
Output:
enter the key:45
found at 4
Solve the below mention questionnaires:
Answer: ( B )
Answer: ( D )
Answer: ( A )
Conclusion / Outcome:
I learned the linear and binary search algorithms.
PRACTICAL SET – 4
AIM: Implementation of max-heap sort algorithm.
Explanation: Heaps can be used in sorting an array. In max-heaps, maximum element will
always be at the root. Heap Sort uses this property of heap to sort the array. It is similar to
selection sort where we first find the maximum element and place the maximum element at the
end. We repeat the same process for remaining elements.
Code:
#include<stdio.h>

void swap(int *a, int *b) {
    int t = *a; *a = *b; *b = t;
}

/* heapify the subtree rooted at index i of a heap of size n */
void heapify(int arr[], int n, int i) {
    int max = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && arr[left] > arr[max])
        max = left;
    if (right < n && arr[right] > arr[max])
        max = right;
    if (max != i) {
        swap(&arr[i], &arr[max]);
        heapify(arr, n, max);
    }
}

void heapSort(int arr[], int n) {
    int i;
    for (i = n / 2 - 1; i >= 0; i--)   /* build the max-heap */
        heapify(arr, n, i);
    for (i = n - 1; i > 0; i--) {      /* move the maximum to the end, then re-heapify */
        swap(&arr[0], &arr[i]);
        heapify(arr, i, 0);
    }
}

int main() {
    int arr[]={80,70,50,90,10,45,20,15,55,60};
    int n = 10, k;
    for(k=0;k<10;k++)
        printf("%d ",arr[k]);
    printf("\n");
    heapSort(arr, n);
    for(k=0;k<10;k++)
        printf("%d ",arr[k]);
    return 0;
}
Output:
80 70 50 90 10 45 20 15 55 60
10 15 20 45 50 55 60 70 80 90
Solve the below mention questionnaires:
Answer : ( C )
Answer : ( A )
Answer : ( B )
4. The max heap constructed from the list of numbers 30, 10, 80, 60, 15, 55 is
a) 80, 55, 60, 15, 10, 30 c) 80, 60, 55, 30, 10, 15
b) 60, 80, 55, 30, 10, 15 d) None
Answer : ( D )
5. A heap is always a
a) Binary search tree c) None
b) Full binary tree d) Complete binary tree
Answer : ( D )
Conclusion / Outcome:
I learned the max-heap sort algorithm.
Dynamic Programming (background for Practical Sets 5 to 7):
Dynamic programming is a method for solving a complex problem by breaking it down into
simpler sub-problems, solving each of those sub-problems just once, and storing their solutions –
in an array usually.
Now, every time the same sub-problem occurs, instead of recomputing its solution, the previously
calculated solutions are used, thereby saving computation time at the expense of storage space.
Memoization – Memoization uses the top-down technique to solve the problem, i.e., it begins with the
original problem, then breaks it into sub-problems and solves these sub-problems in the same way.
In this approach, you assume that you have already computed all sub-problems. You typically
perform a recursive call (or some iterative equivalent) from the main problem. You ensure that the
recursive call never recomputes a sub-problem because you cache the results, and thus duplicate
sub-problems are not recomputed.
Tabulation – Tabulation is the typical Dynamic Programming approach. Tabulation uses the
bottom up approach to solve the problem, i.e., by solving all related sub-problems first, typically
by storing the results in an array. Based on the results stored in the array, the solution to the “top”
/ original problem is then computed.
Memoization and tabulation are both storage techniques applied to avoid recomputation of a sub-
problem
The idea behind dynamic programming, in general, is to solve a given problem by solving
different parts of the problem (sub-problems), then using the cached solutions of the sub-problems
to reach an overall solution.
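As a small illustration of the two approaches described above (not one of the prescribed experiments), the following C sketch computes Fibonacci numbers first with memoization (top-down) and then with tabulation (bottom-up):
#include <stdio.h>

#define N 40
long long memo[N + 1];   /* cache for the top-down version; 0 means "not yet computed" */

/* Memoization: recursive, but every sub-problem is solved only once. */
long long fib_memo(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != 0)
        return memo[n];
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}

/* Tabulation: fill the table of sub-problems from the smallest one upwards. */
long long fib_tab(int n)
{
    long long table[N + 1];
    int i;
    table[0] = 0;
    table[1] = 1;
    for (i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main()
{
    printf("fib(40) by memoization = %lld\n", fib_memo(40));
    printf("fib(40) by tabulation  = %lld\n", fib_tab(40));
    return 0;
}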
PRACTICAL – 5
AIM : Implementation of a knapsack problem using dynamic programming.
● In this problem we have a knapsack that has a weight limit W, which represents the knapsack
capacity.
● Given two integer arrays val[0..n-1] and wt[0..n-1] which represent values and weights
associated with n items respectively.
● Find out the maximum value subset of val[] such that sum of the weights of this subset
is smaller than or equal to W.
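For clarity, the table-filling code below is based on the standard 0/1 knapsack recurrence, where t[i][j] denotes the best value obtainable using the first i items with capacity j:
if wt[i-1] <= j : t[i][j] = max( t[i-1][j] , vl[i-1] + t[i-1][j - wt[i-1]] )
else            : t[i][j] = t[i-1][j]
with t[0][j] = t[i][0] = 0
For the data used below (wt = {10, 20, 30}, vl = {60, 100, 120}, W = 50) the best subset is items 2 and 3, with total weight 50 and value 100 + 120 = 220, which is the value the program prints.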
Code:
#include <stdio.h>
int max(int a,int b){
if(a>b){
return a;
}
return b;
}
void main(){
int n=3,W=50;
int t[n+1][W+1];
int wt[]={10,20,30},vl[]={60,100,120};
int i,j;
for( i=0;i<=n;i++){
for( j=0;j<=W;j++){
if( i==0||j==0){
t[i][j]=0;
}
}
}
i=0;j=0;
for(i=1;i<=n;i++){
for(j=1;j<=W;j++){
if(wt[i-1]<=j){
t[i][j]=max(t[i-1][j],(vl[i-1]+t[i-1][j-wt[i-1]]));
}else{
t[i][j]=t[i-1][j];
}
}
}
printf("%d",t[n][W]);
}
Output:
220
1. You are given a knapsack that can carry a maximum weight of 60. There are 4 items
with weights {20, 30, 40, 70} and values {70, 80, 90, 200}. What is the maximum value
of the items you can carry using the knapsack?
a) 160 c) 170
b) 200 d) 90
Answer : ( A )
Answer : ( C )
Answer : ( C )
Answer : ( C)
Answer : (A )
Conclusion / Outcome:
I learned the 0/1 knapsack problem and how it works.
PRACTICAL – 6
AIM : Implementation of chain matrix multiplication using dynamic
programming.
Explanation: Given a sequence of matrices, find the most efficient way to multiply these
matrices together. The problem is not actually to perform the multiplications, but merely to decide
in which order to perform the multiplications.
However, the order in which we parenthesize the product affects the number of simple arithmetic
operations needed to compute the product, or the efficiency. For example, suppose A is a 10 × 30
matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then computing (AB)C needs
(10 × 30 × 5) + (10 × 5 × 60) = 1500 + 3000 = 4500 operations, while computing A(BC) needs
(30 × 5 × 60) + (10 × 30 × 60) = 9000 + 18000 = 27000 operations.
Given an array p[] which represents the chain of matrices such that the ith matrix Ai is of
dimension p[i-1] x p[i]. We need to write a function MatrixChainOrder() that should return the
minimum number of multiplications needed to multiply the chain.
This problem can be solved using Dynamic Programming. First we will compute results for sub-
chains of the original chain and store these results in a 2-D array. Then we will use these results to
compute results for larger chains.
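For reference, the table entries computed by the program below follow the standard matrix-chain recurrence, where m[i][j] is the minimum number of scalar multiplications needed to compute the product Ai...Aj and d[] holds the matrix dimensions:
m[i][i] = 0
m[i][j] = min over i <= k < j of ( m[i][k] + m[k+1][j] + d[i-1] * d[k] * d[j] )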
Code:
#include <stdio.h>
#include <conio.h>
#include <string.h>
#define P printf("\n")
int n,i,j,k,l,s,round;
int d[20],m[20][20],mul[5];
void PrintLine();
void main()
{
clrscr();
printf("\nEnter Number of Matrix To Multiply :--> ");
scanf("%d",&n);
for(i=0;i<=n;i++)
{
printf("\nEnter D%d :--> ",i);
scanf("%d",&d[i]);
}
round=n-1;
s=0;
PrintLine();
while(s<=round)
{
if(s==0)
{
for(i=1;i<=n;i++)
{
m[i][i]=0;
printf("\nm%d%d=%d",i,i,m[i][i]);
}
PrintLine();
getch();
}
// ---------------------------------------------
if(s==1)
{
for(i=1;i<n;i++)
{
m[i][i+1]=d[i-1]*d[i]*d[i+1];
printf("\nm%d%d=%d",i,i+1,m[i][i+1]);
}
PrintLine();
getch();
getch();
}
// ---------------------------------------------
if(s>1)
{
for(i=1;i<=n-s;i++)
{
l=i+1;
j=0;
for(k=i;k<(s+i);k++)
{
mul[j]=(m[i][k]+m[l][s+i]+
d[i-1]*d[k]*d[s+i]);
l++;
j++;
}
int temp,x,y;
for(x=0;x<j-1;x++)
{
for(y=x+1;y<j;y++)
{
if(mul[x]>mul[y])
{
temp=mul[x];
mul[x]=mul[y];
mul[y]=temp;
}
}
}
m[i][i+s]=mul[0];
printf("\nm%d%d=%d",i,i+s,m[i][i+s]);
}
PrintLine();
getch();
}
s++;
}
printf("\n Minimum Number Of Multiplications :--> %d ",m[1][n]);
getch();
}
void PrintLine()
{
printf("\n-----------");
}
Output:
Enter Number of Matrix To Multiply :--> 4
Enter D0 :--> 5
Enter D1 :--> 4
Enter D2 :--> 6
Enter D3 :--> 2
Enter D4 :--> 7
-----------
m11=0
m22=0
m33=0
m44=0
-----------
m12=120
m23=48
m34=84
-----------
m13=88
m24=104
-----------
m14=158
-----------
Minimum Number Of Multiplications :--> 158
Solve the below mention questionnaires :
3. Which of the following methods can be used to solve the matrix chain
multiplication problem?
a) Dynamic programming c) Recursion
b) Brute force d) All of the mentioned
Answer : (D )
4. Consider the matrices P, Q and R which are 10 x 20, 20 x 30 and 30 x 40 matrices
respectively. What is the minimum number of multiplications required tomultiply
the three matrices?
a) 18000 c) 24000
b) 12000 d) 32000
Answer : ( A ). (PQ)R needs 10 × 20 × 30 + 10 × 30 × 40 = 6000 + 12000 = 18000 multiplications.
Answer : ( D)
Conclusion / Outcome:
I learned how to implement chain matrix multiplication using dynamic programming.
PRACTICAL – 7
AIM : Implementation of making a change problem using dynamic programming.
Explanation:
Given a value N, if we want to make change for N cents, and we have infinite supply of each of
S = { S1, S2, .. , Sm} valued coins, how many ways can we make the change? The order of coins
doesn't matter.
For example, for N = 4 and S = {1,2,3}, there are four solutions: {1,1,1,1},{1,1,2},{2,2},{1,3}. So
output should be 4. For N = 10 and S = {2, 5, 3, 6}, there are five solutions: {2,2,2,2,2}, {2,2,3,3},
{2,2,6}, {2,3,5} and {5,5}. So the output should be 5
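The table-filling code below uses the standard counting recurrence, where t[i][j] is the number of ways to make the amount j using the first i coin denominations (stored in wt[] in the program):
if wt[i-1] <= j : t[i][j] = t[i-1][j] + t[i][j - wt[i-1]]   (ways without this coin + ways using it at least once)
else            : t[i][j] = t[i-1][j]
with t[0][j] = 0 for j > 0 and t[i][0] = 1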
Code:
#include <stdio.h>
int max(int a,int b){
if(a>b){
return a;
}
return b;
}
void main(){
int n=3,W=4;
int wt[]={1,2,3};
int t[n+1][W+1];
int i,j;
for(i=0;i<=n;i++){
for(j=0;j<=W;j++){
if(i==0){
t[i][j]=0;
}else if(j==0){
t[i][j]=1;
}
}
}
i=0;j=0;
for(i=1;i<=n;i++){
for(j=1;j<=W;j++){
if(wt[i-1]<=j){
t[i][j]=t[i-1][j]+t[i][j-wt[i-1]];
}else{
t[i][j]=t[i-1][j];
}
}
}
printf("%d ",t[i][j]);
Output:
4
Give the Answer to the below Short Questions:
1. What is the time complexity of making change using dynamic programming?
O(nW), where n = number of coin denominations and W = the amount for which change is made.
2. If S = 5 and N = {1,2,3}, means we have a sum of 5 and want to make change for this sum
using coins of denomination 1,2 and 3. There is infinite supply of these coins. We have to
find in how many ways we can make this change.
Number of ways = 5: {1,1,1,1,1}, {1,1,1,2}, {1,2,2}, {1,1,3}, {2,3}.
Conclusion / Outcome:
I learned the making change problem using dynamic programming.
PRACTICAL – 8
AIM : Implementation of a knapsack problem using greedy algorithm.
Description:
Given a set of items, each with a weight and a value, determine a subset of items to include in a
collection so that the total weight is less than or equal to a given limit and the total value is as
large as possible.
Problem Scenario
A thief is robbing a store and can carry a maximal weight of W into his knapsack. There are n
items available in the store and weight of ith item is wi and its profit is pi. What items should the
thief take?
In this context, the items should be selected in such a way that the thief will carry those items for
which he will gain maximum profit. Hence, the objective of the thief is to maximize the profit.
● Fractional Knapsack
● Knapsack
Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select fractions of items.
The objective is to maximize Σ (xi . pi) for i = 1 to n,
subject to the constraint Σ (xi . wi) ≤ W, where 0 ≤ xi ≤ 1.
It is clear that an optimal solution must fill the knapsack exactly, otherwise we could add a
fraction of one of the remaining items and increase the overall profit.
Analysis
If the provided items are already sorted into a decreasing order of pi/wi, then the while loop takes
a time in O(n); therefore, the total time including the sort is in O(n log n).
Example
Let us consider that the capacity of the knapsack W = 60 and the list of provided
items are shown in the following table −
Item            A     B     C     D
Profit (pi)     280   100   120   120
Weight (wi)     40    10    20    24
Ratio (pi/wi)   7     10    6     5
As the provided items are not sorted based on pi/wi. After sorting, the items are as shown in the
following table.
Item            B     A     C     D
Profit (pi)     100   280   120   120
Weight (wi)     10    40    20    24
Ratio (pi/wi)   10    7     6     5
Solution
After sorting all the items according to pi/wi, first all of B is chosen, as the weight of B is less than
the capacity of the knapsack. Next, item A is chosen, as the available capacity of the knapsack is
greater than the weight of A. Now, C is chosen as the next item. However, the whole item cannot
be chosen as the remaining capacity of the knapsack is less than the weight of C.
Hence, fraction of C (i.e. (60 − 50)/20) is chosen.
Now, the capacity of the Knapsack is equal to the selected items. Hence, no more item can be
selected.
And the total profit is 100 + 280 + 120 * (10/20) = 380 + 60 = 440
This is the optimal solution. We cannot gain more profit selecting any different combination of
items.
Code:
#include <stdio.h>

int main() {
    /* example data from the table above: items are taken greedily by profit/weight ratio */
    float weight[20] = {40, 10, 20, 24};
    float profit[20] = {280, 100, 120, 120};
    float capacity = 60, tp = 0, u, x[20], ratio[20], temp;
    int n = 4, i, j;

    /* sort the items in decreasing order of profit/weight ratio */
    for (i = 0; i < n; i++)
        ratio[i] = profit[i] / weight[i];
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (ratio[i] < ratio[j]) {
                temp = ratio[i]; ratio[i] = ratio[j]; ratio[j] = temp;
                temp = weight[i]; weight[i] = weight[j]; weight[j] = temp;
                temp = profit[i]; profit[i] = profit[j]; profit[j] = temp;
            }

    /* take whole items while they fit, then a fraction of the next one */
    u = capacity;
    for (i = 0; i < n; i++)
        x[i] = 0;
    for (i = 0; i < n && weight[i] <= u; i++) {
        x[i] = 1;
        u = u - weight[i];
        tp = tp + profit[i];
    }
    if (i < n) {
        x[i] = u / weight[i];
        tp = tp + (x[i] * profit[i]);
    }
    printf("Maximum profit = %.2f\n", tp);
    return 0;
}
2. Which of the following methods can be used to solve the Knapsack problem?
a) Brute force algorithm b) Recursion
c) Dynamic programming d) All of the mentioned Answer: ( D )
3. You are given a knapsack that can carry a maximum weight of 60. There are 4 items with weights
{20, 30, 40, 70} and values {70, 80, 90, 200}. What is the maximum value of the items you can
carry using the knapsack?
a) 160 b) 200
c) 170 d) 90 Answer: ( A )
4. Which of the following problems is equivalent to the 0-1 Knapsack problem?
a) You are given a bag that can carry a maximum weight of W. You are given N items which
have
a weight of {w1, w2, w3,…., wn} and a value of {v1, v2, v3,…., vn}. You can break the items
into smaller pieces. Choose the items in such a way that you get the maximum value.
b) You are studying for an exam and you have to study N questions. The questions take {t1, t2,
t3,… .., tn} time(in hours) and carry {m1, m2, m3,…., mn} marks. You can study for a
maximum of T hours. You can either study a question or leave it. Choose the questions in such a
way that your score is maximized.
c) You are given infinite coins of denominations {v1, v2, v3,….., vn} and a sum S. You have
to find the minimum number of coins required to get the sum S.
Answer: (B )
Practical Set-9
AIM: Implementation of Graph and Searching (DFS)
Explanation:
A standard DFS implementation puts each vertex of the graph into one of two categories:
1. Visited
2. Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding cycles.
DFS Example
Let's see how the Depth First Search algorithm works with an example. We use an undirected
graph with 5 vertices.
We start from vertex 0, the DFS algorithm starts by putting it in the Visited list and putting all its
adjacent vertices in the stack.
Next, we visit the element at the top of stack i.e. 1 and go to its adjacent nodes. Since 0 has
already been visited, we visit 2 instead.
Vertex 2 has an unvisited adjacent vertex in 4, so we add that to the top of the stack and visit it.
After we visit the last element 3, it doesn't have any unvisited adjacent nodes, so we have
completed the Depth First Traversal of the graph.
Code:
#include<stdio.h>
#include<stdlib.h>

int graph[20][20], completed[20], n;

int DepthFirstSearch(int i)
{
    int j;
    printf("%d ", i);            /* print the vertex when it is visited */
    completed[i] = 1;
    for(j=0;j<n;j++)
    {
        if(!completed[j] && graph[i][j] == 1)
        {
            DepthFirstSearch(j);
        }
    }
    return 0;
}

int main()
{
    int i, j;
    printf("Enter the number of vertices: ");
    scanf("%d",&n);
    printf("Enter the adjacency matrix:\n");
    for(i=0;i<n;i++)
    {
        for(j=0;j<n;j++)
        {
            scanf("%d",&graph[i][j]);
        }
    }
    for(i=0;i<n;i++)
        completed[i]=0;
    DepthFirstSearch(0);
    return 0;
}
Output:
Conclusion/ Outcome:
I learned the depth first search algorithm.
Practical Set-10
AIM: Implementation of Graph and Searching (BFS)
Explanation:
Traversal means visiting all the nodes of a graph. Breadth first traversal or Breadth First Search
(BFS) is an algorithm for visiting all the vertices of a graph or tree data structure level by level. In
this practical, you will learn the BFS algorithm, its pseudocode and its implementation in C.
BFS algorithm
A standard BFS implementation puts each vertex of the graph into one of two categories:
1. Visited
2. Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding cycles.
The algorithm works as follows:
1. Start by putting any one of the graph's vertices at the back of aqueue.
2. Take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to
the back of the queue.
4. Keep repeating steps 2 and 3 until the queue is empty.
The graph might have two different disconnected parts so to make sure that we cover every
vertex, we can also run the BFS algorithm on every node
BFS example
Let's see how the Breadth First Search algorithm works with an example. We use an undirected
graph with 5 vertices.
We start from vertex 0. The BFS algorithm starts by putting it in the Visited list and putting all its
adjacent vertices in the queue.
Next, we visit the element at the front of queue i.e. 1 and go to its adjacent nodes. Since 0 has
already been visited, we visit 2 instead.
Vertex 2 has an unvisited adjacent vertex in 4, so we add that to the back of the queue and visit 3,
which is at the front of the queue.
Only 4 remains in the queue since the only adjacent node of 3 i.e. 0 is already visited. We visit it.
Since the queue is empty, we have completed the Breadth First Traversal of the graph.
Code:
#include<stdio.h>

int a[20][20], q[20], visited[20], n, i, j, f = 0, r = -1;

void bfs(int v) {
    for(i = 1; i <= n; i++)
        if(a[v][i] && !visited[i])
            q[++r] = i;              /* enqueue every unvisited neighbour of v */
    if(f <= r) {
        visited[q[f]] = 1;           /* mark the front of the queue as visited */
        bfs(q[f++]);                 /* continue the traversal from it */
    }
}

void main() {
    int v;
    printf("\n Enter the number of vertices:");
    scanf("%d", &n);
    printf("\n Enter the adjacency matrix:\n");
    for(i = 1; i <= n; i++)
        for(j = 1; j <= n; j++)
            scanf("%d", &a[i][j]);
    v = 1;                           /* start the traversal from vertex 1 */
    visited[v] = 1;
    bfs(v);
    printf("\n The nodes which are reachable are:\n");
    for(i = 1; i <= n; i++)
        if(visited[i])
            printf("%d\t", i);
}
Solve the below mentioned questionnaires:
2. The time complexity of Breadth First Search is (V = number of vertices, E = number of edges)
a) O(V + E) b) O(V)
c) O(E) d) None Answer: (A)
3. The data structure used in the standard implementation of Breadth First Search is?
a) Stack b) Queue
c) Linked List d) None Answer: (B)
4. What can be the applications of Breadth First Search?
a) Finding shortest path between two nodes
b) Finding bipartitions of a graph
c) GPS navigation system
d) All of the mentioned Answer: (D)
Conclusion/ Outcome:
I learned the breadth first search algorithm.
Practical Set-11
AIM: Implement Prim's algorithm.
Explanation:
The idea behind Prim's algorithm is simple: a spanning tree means all vertices must be connected.
So the two disjoint subsets of vertices (those already included in the MST and those not yet
included) must be connected to make a Spanning Tree. And they must be connected with the
minimum weight edge to make it a Minimum Spanning Tree.
Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.
2) Assign a key value to all vertices in the input graph. Initialize all key values as
INFINITE. Assign key value as 0 for the first vertex so that it is picked first.
3) While mstSet doesn’t include all vertices
a) Pick a vertex u which is not there in mstSet and has minimum key value.
b) Include u to mstSet.
c) Update key value of all adjacent vertices of u. To update the key values,
iterate through all adjacent vertices. For every adjacent vertex v, if weight of edge u-v is
less than the previous key value of v, update the key value as weight of u-v
The idea of using key values is to pick the minimum weight edge from cut. The key values are
used only for vertices which are not yet included in MST, the key value for these vertices indicate
the minimum weight edges connecting them to the set of vertices included in MST.
Example:
The set mstSet is initially empty and keys assigned to vertices are {0, INF, INF, INF, INF, INF,
INF, INF} where INF indicates infinite. Now pick the vertex with minimum key value. The vertex
0 is picked, include it in mstSet. So mstSet becomes {0}. After including to mstSet, update key
values of adjacent vertices. Adjacent vertices of 0 are 1 and 7. The key values of 1 and 7 are
updated as 4 and 8. Following subgraph shows vertices and their key values, only the vertices
with finite key values are shown. The vertices included in MST are shown in green color.
Pick the vertex with minimum key value and not already included in MST (not in mstSET). The
vertex 1 is picked and added to mstSet. So mstSet now becomes {0, 1}. Update the key values of
adjacent vertices of 1. The key value of vertex 2 becomes 8.
Pick the vertex with minimum key value and not already included in MST (not in mstSET).
We can either pick vertex 7 or vertex 2, let vertex 7 is picked. So mstSet now becomes {0, 1, 7}.
Update the key values of adjacent vertices of 7. The key value of vertex 6 and 8 becomes finite
(7 and 1 respectively).
Pick the vertex with minimum key value and not already included in MST (not in mstSET).
Vertex 6 is picked. So mstSet now becomes {0, 1, 7, 6}. Update the key values of adjacent
vertices of 6. The key value of vertex 5 and 8 are updated.
We repeat the above steps until mstSet includes all vertices of given graph. Finally, we get the
following graph.
Code:
#include<stdio.h>
#include<stdlib.h>
#define MAX 20
#define infinity 9999

int G[MAX][MAX],spanning[MAX][MAX],n;
int prims();
int main()
{
int i,j,total_cost;
printf("Enter no. of vertices:");
scanf("%d",&n);
printf("Enter the adjacency matrix:\n");
for(i=0;i<n;i++)
for(j=0;j<n;j++)
scanf("%d",&G[i][j]);
total_cost=prims();
printf("\nspanning tree matrix:\n");
for(i=0;i<n;i++)
{
printf("\n");
for(j=0;j<n;j++)
printf("%d\t",spanning[i][j]);
}
printf("\n\nTotal cost of spanning tree=%d",total_cost);
return 0;
}
int prims()
{
int cost[MAX][MAX];
int u,v,min_distance,distance[MAX],from[MAX];
int visited[MAX],no_of_edges,i,min_cost,j;
for(i=0;i<n;i++)
for(j=0;j<n;j++)
{
if(G[i][j]==0)
cost[i][j]=infinity;
else
cost[i][j]=G[i][j];
spanning[i][j]=0;
}
distance[0]=0;
visited[0]=1;
for(i=1;i<n;i++)
{
distance[i]=cost[0][i];
from[i]=0;
visited[i]=0;
}
min_cost=0;
no_of_edges=n-1;
while(no_of_edges>0)
{
min_distance=infinity;
for(i=1;i<n;i++)
if(visited[i]==0&&distance[i]<min_distance)
{
v=i;
min_distance=distance[i];
}
u=from[v];
spanning[u][v]=distance[v];
spanning[v][u]=distance[v];
no_of_edges--;
visited[v]=1;
for(i=1;i<n;i++)
if(visited[i]==0&&cost[i][v]<distance[i])
{
distance[i]=cost[i][v];
from[i]=v;
}
min_cost=min_cost+cost[u][v];
}
return(min_cost);
}
Output:
Enter no. of vertices:6
Enter the adjacency matrix:
0 3 1 6 0 0
3 0 5 0 3 0
1 5 0 5 6 4
6 0 5 0 0 2
0 3 6 0 0 6
0 0 4 2 6 0
spanning tree matrix:
0 3 1 0 0 0
3 0 0 0 3 0
1 0 0 0 0 4
0 0 0 0 0 2
0 3 0 0 0 0
0 0 4 2 0 0
Total cost of spanning tree=13
Give the Answer to the below Questions:
1. What is the time complexity to extract a vertex from the priority queue in Prim's algorithm?
a) log(V) c) V.V
b) E.E d) log(E) Answer: ( A )
2. What algorithm technique is used in the implementation of Prim’s solution for the MST?
a) Greedy Technique
b) Divide-and-Conquer Technique
c) Dynamic Programming Technique
d) The algorithm combines more than one of the above techniques Answer: (A )
3. The output of Prim's algorithm is an MST.
Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected
graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the
total weight of all the edges in the tree is minimized.
Conclusion/ Outcome:
I learned how to implement Prim’s Algorithm.
Practical Set-12
AIM: Implement Kruskal's algorithm.
Explanation:
Below are the steps for finding MST using Kruskal’s algorithm
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If
a cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
The algorithm is a Greedy Algorithm. The Greedy Choice is to pick the smallest weight edge that
does not cause a cycle in the MST constructed so far. Let us understand it with an example:
Consider the below input graph.
The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed will be having
(9 – 1) = 8 edges.
After sorting:
Weight  Src  Dest
1       7    6
2       8    2
2       6    5
4       0    1
4       2    5
6       8    6
7       2    3
7       7    8
8       0    7
8       1    2
9       3    4
10      5    4
11      1    7
14      3    5
Now pick all edges one by one from the sorted list of edges:
1. Pick edge 7-6: No cycle is formed, include it.
2. Pick edge 8-2: No cycle is formed, include it.
3. Pick edge 6-5: No cycle is formed, include it.
4. Pick edge 0-1: No cycle is formed, include it.
5. Pick edge 2-5: No cycle is formed, include it.
6. Pick edge 8-6: Since including this edge results in cycle, discard it.
7. Pick edge 2-3: No cycle is formed, include it.
8. Pick edge 7-8: Since including this edge results in cycle, discard it.
9. Pick edge 0-7: No cycle is formed, include it.
10. Pick edge 1-2: Since including this edge results in cycle, discard it.
11. Pick edge 3-4: No cycle is formed, include it.
Since the number of edges included equals (V – 1), the algorithm stops here.
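A short note on how the program below detects cycles (a minimal sketch of the disjoint-set idea behind its find() and union1() helpers; belongs[v] stores the component number of vertex v):
for each edge (u, v) taken from the sorted edge list
   if find(u) != find(v)        // u and v lie in different components
      add (u, v) to the spanning tree
      union1(u, v)              // merge the two components
   else
      discard (u, v)            // the edge would create a cycle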
Code:
#include<stdio.h>
#define MAX 30

typedef struct edge
{
    int u,v,w;
} edge;

typedef struct edgelist
{
    edge data[MAX];
    int n;
} edgelist;
edgelist elist;
int G[MAX][MAX],n;
edgelist spanlist;
void kruskal();
int find(int belongs[],int vertexno);
void union1(int belongs[],int c1,int c2);
void sort();
void print();
void main()
{
int i,j,total_cost;
scanf("%d",&n);
for(i=0;i<n;i++)
for(j=0;j<n;j++)
scanf("%d",&G[i][j]);
kruskal();
print();
}
void kruskal()
{
int belongs[MAX],i,j,cno1,cno2;
elist.n=0;
for(i=1;i<n;i++)
for(j=0;j<i;j++)
{
if(G[i][j]!=0)
{
elist.data[elist.n].u=i;
elist.data[elist.n].v=j;
elist.data[elist.n].w=G[i][j];
elist.n++;
}
}
sort();
for(i=0;i<n;i++)
belongs[i]=i;
spanlist.n=0;
for(i=0;i<elist.n;i++)
{
cno1=find(belongs,elist.data[i].u);
cno2=find(belongs,elist.data[i].v);
if(cno1!=cno2)
{
spanlist.data[spanlist.n]=elist.data[i];
spanlist.n=spanlist.n+1;
union1(belongs,cno1,cno2);
}
}
}
int find(int belongs[],int vertexno)
{
    return(belongs[vertexno]);
}

void union1(int belongs[],int c1,int c2)
{
    int i;
    for(i=0;i<n;i++)
        if(belongs[i]==c2)
            belongs[i]=c1;
}
void sort()
{
int i,j;
edge temp;
for(i=1;i<elist.n;i++)
for(j=0;j<elist.n-1;j++)
if(elist.data[j].w>elist.data[j+1].w)
{
temp=elist.data[j];
elist.data[j]=elist.data[j+1];
elist.data[j+1]=temp;
}
}
void print()
{
int i,cost=0;
for(i=0;i<spanlist.n;i++)
{
printf("\n%d\t%d\t%d",spanlist.data[i].u,spanlist.data[i].v,spanlist.data[i].w);
cost=cost+spanlist.data[i].w;
}
printf("\n\nCost of the spanning tree=%d",cost);
}
Output:
2 0 1
5 3 2
1 0 3
4 1 3
5 2 4
1. What is the time complexity to extract a vertex from the priority queue
in Kruskal's algorithm?
4. Assume a graph is having 10 vertices and 20 edges. In Krushkal’s minimum spanning tree
method, 5 edges are rejected. How many edges are not considered during execution of
algorithm on the given graph?
a) 6 c) 5
b) 4 d) 10 Answer: ( A )
Conclusion/ Outcome:
I learned how to implement Kruskal's algorithm.
References
1. http://www.personal.kent.edu/~rmuhamma/Algorithms/algorithm.html
2. http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Merge_sort.html
3. http://cs.uef.fi/pages/franti/asa/notes.html
4. https://www8.cs.umu.se/kurser/TDBA77/VT06/algorithms/INDEX.HTM
5. IETE e Material
6. https://www.geeksforgeeks.org
7. https://www.programiz.com/
8. https://www.thecrazyprogrammer.com
9. http://c-program-example.com/2011/10
10. https://www.tutorialspoint.com/data_structures_algorithms
11. http://www.c4learn.com/c-programs
12. http://c-program-example.com/