DS Theory Notes
Abstract Data Types (ADTs) – ADTs and classes – Overview of linked lists, operations on singly
linked list – Doubly linked list and Circular linked list and their operations. Implementing
problems on Linked Lists: Search a book in a list. Reverse a names list using a doubly linked list.
Data Structures
Based on how data is organized, data structures can be classified into two types – linear
data structures and non-linear data structures. Array, Linked list, Stack and Queue are
linear data structures. An array is a static linear data structure with a fixed memory size whose
elements can be accessed easily. Linked list, Stack and Queue are dynamic linear data
structures in which the memory size is not fixed and can grow or shrink at run time. In linear
data structures, data is organized sequentially.
Graphs and trees are non-linear data structures. The data elements of graphs and trees are
not organized sequentially, and all the elements cannot be accessed in a single run.
An array is a linear data structure that contains elements of the same data type and
stores them in contiguous memory locations.
Array elements are accessed using their indices / subscripts, and the
index of the first element is always 0.
Declaring a 1D array: array-type array-name[size];
Example: int arr[5];
Initializing a 1D array: int arr[5] = {10, 20, 30, 40, 50};
Reading and printing a 1D array element inside a loop over index i:
scanf("%d",&arr[i]);
printf("%d\t",arr[i]);
Declaring a 2D array: array-type array-name[rows][cols];
Example: int mat[3][3];
Deleting an element from an array:
Step 1: Search for the given element in the array. If the element is found, note its index and
go to Step 2; otherwise go to Step 3.
Step 2: Shift all the elements from index + 1 onwards one position to the left and stop.
Step 3: Display the message 'Element not found'.
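The steps above can be turned into a small C routine; the following is a minimal sketch (the array contents and the element to delete are only illustrative):

#include <stdio.h>

int main()
{
    int arr[10] = {10, 20, 30, 40, 50};
    int n = 5, key = 30, index = -1, i;

    /* Step 1: search for the element */
    for (i = 0; i < n; i++)
        if (arr[i] == key) { index = i; break; }

    if (index == -1)
        printf("Element not found\n");      /* Step 3 */
    else
    {
        for (i = index + 1; i < n; i++)     /* Step 2: shift elements one position left */
            arr[i - 1] = arr[i];
        n--;
        for (i = 0; i < n; i++)
            printf("%d\t", arr[i]);
        printf("\n");
    }
    return 0;
}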
Array elements are stored in contiguous memory locations and hence they can be
accessed easily and quickly. They also consume less memory.
Memory for an array is allocated during compile-time and they have a fixed size.
Hence there may be wastage of memory space.
Linked List
A linked list is a linear data structure, in which the elements are not stored at contiguous
memory locations.
The elements in a linked list are linked using pointers.
Linked List forms a series of connected nodes, where each node stores the data and the
address of the next node.
Node Structure: A node in a linked list typically consists of two components:
Data: It holds the actual value or data associated with the node.
Next Pointer: It stores the memory address (reference) of the next node in the
sequence.
Head and Tail: The linked list is accessed through the head node, which points to the
first node in the list. The last node in the list points to NULL or null pointer, indicating
the end of the list. This node is known as the tail node.
Linked list is a dynamic data structure for which memory can be allocated or de-
allocated dynamically during run-time. Hence memory is utilized efficiently.
Insertion and deletion of elements can be done quickly.
Types of linked lists
1. Single (singly) linked list
• In this type of linked list, each node contains the reference of the next node.
• Traversing a single linked list is done in forward direction.
• Applications: used for memory management (allocation and de-allocation of
memory), used in database indexing for faster insert and delete operations, used to
represent polynomials and sparse matrices, used in operating system for scheduling
processes and managing system resources.
2. Double or doubly linked list
• Each node contains references to both the next and previous nodes.
• Allows traversal in both forward and backward directions.
3. Circular linked list
• The last node points back to the head node, creating a circular structure.
• It can be either singly or doubly linked.
1. Insertion: Adding a new node to a linked list involves adjusting the pointers of the
existing nodes to maintain the proper sequence. Insertion can be performed at the
beginning, end, or any position within the list
2. Deletion: Removing a node from a linked list requires adjusting the pointers of the
neighboring nodes to bridge the gap left by the deleted node. Deletion can be performed
at the beginning, end, or any position within the list.
3. Searching: Searching for a specific value in a linked list involves traversing the list
from the head node until the value is found or the end of the list is reached.
A node in a singly linked list has two parts – data and address of next node.
A structure is defined to represent nodes of a singly linked list. The members in the
structure definition must include data / datum of any data type and a pointer variable of
same structure type to store address of next node.
Example
struct node
{
int data;
struct node *next;
};
The following pointer variables must be declared for the above struct definition.
*head – to store the address of the first node
*tail – to store the address of the last node
*n – to point to the newly created node
*ptr – to traverse the nodes in the list.
struct node
{
int data;
struct node *next;
}*head, *tail, *n, *ptr;
Adding a new node at the beginning of the list
If the list is empty, the new node (for example a node containing 10) becomes both the head and
the tail node, and its next pointer is set to NULL.
If list is not empty, and new node is added at the beginning of the list
n->next = head; //new node next is made to point to existing head node
head = n; //new node is made head node
[Diagram: the new node 25 becomes the new head, giving head -> 25 -> 30 -> 15 -> NULL.]
If list is not empty and new node is added at the end of the list
tail->next = n; //last node next is made to point to new node
tail = n; // new node is made tail node
[Diagram: the new node 25 is linked after the old tail, giving head -> 30 -> 15 -> 25 -> NULL.]
If the list is not empty and the new node is to be inserted between two nodes, first locate the
node after which the new node is to be inserted; this node is called the previous node.
The next pointer of the new node is made to point to the node currently following the previous
node, and then the next pointer of the previous node is made to point to the new node.
[Diagram: before insertion the list is head -> 30 -> 15 -> 25 -> NULL; after inserting the new node 5
after the node 15 the list becomes head -> 30 -> 15 -> 5 -> 25 -> NULL.]
Traversing a singly linked list
To traverse the list, a pointer ptr starts at the head node and is moved from one node to the next
using the next pointer until it becomes NULL, visiting the nodes 30, 15, 5 and 25 in order.
A for loop or a while loop may be used for traversing a singly linked list.
for( ptr = head ; ptr != NULL ; ptr = ptr->next)
    printf("%d\n", ptr->data);
OR
ptr = head;
while (ptr != NULL)
{
    printf("%d\n", ptr->data);
    ptr = ptr->next;
}
Deleting a node from a singly linked list
If the node located is the tail node, make the previous node the new tail node, set the new tail's
next pointer to NULL and free the memory of the located node.
If the node located lies between two nodes, link the previous node to the node following the
located node and free the memory of the located node.
[Diagram: deleting node 5 from the list head -> 30 -> 15 -> 5 -> 25 -> NULL gives
head -> 30 -> 15 -> 25 -> NULL.]
#include <stdio.h>
#include <malloc.h>
#include <stdlib.h>
struct node
{
int data;
struct node *next;
}*head, *tail, *ptr, *n;
//function to create a new node
void createnode()
{
//allocate memory for the new node
n = (struct node *) malloc( sizeof(struct node) );
//accept input for new node data
printf("\nEnter data for new node ");
scanf("%d",&n->data);
}
//function to add node in the beginning
void addbeg()
{
createnode(); //function call to create a node
if (head==NULL) //list is empty
{
tail = head = n; //new node is head node and tail node
n->next = NULL;
}
else
{
n->next = head;
head = n;
}
printf("\nNew node inserted at the beginning of the list");
}
//function to add node at the end of the list
void addend()
{
createnode(); //function call to create a node
if (head==NULL) //list is empty
{
tail = head = n; //new node is head node and tail node
}
else
{
tail->next = n;
tail = n;
}
n->next=NULL;
printf("\nNew node inserted at the end of the list");
}
//function to insert node in between
void insert()
{
int s, flag=0;
if ( head == NULL )
{
printf("\nEmpty list...Cannot insert...\n");
return;
}
printf("\nEnter search node data ");
scanf("%d",&s);
//locate the node containing search data s
for(ptr=head; ptr!=NULL; ptr=ptr->next)
{
if (ptr->data == s)
{
flag=1; //search node located
break;
}
}
if (flag==0)
printf("\nSearch node not located...Cannot insert...\n");
else
{
createnode(); //function call to create a new node
n->next = ptr->next;
ptr->next = n;
printf("\nNode inserted...\n");
}
}
//function to delete node from the list containing data x
void delnode(int x)
{
struct node *previous, *t;
int flag=0;
if (head==NULL)
{
printf("\nEmpty list...Cannot delete...\n");
return;
}
for(ptr=previous=head; ptr!=NULL ; ptr=ptr->next)
{
if (ptr->data == x)
{
flag=1; //search node located
break;
}
previous=ptr;
}
if (flag==0)
printf("\nSearch node does not exist...Cannot delete...\n");
else
{
if (ptr==head && ptr==tail)
{
printf("\nDeleting single node\n");
free(ptr);
head=tail=NULL;
}
else if (ptr==tail) //deleting tail node
{
printf("\nDeleting tail node\n");
t=tail; //store tail node address in pointer t
tail=previous; //make previous node as tail node
free(t); //free memory
tail->next=NULL;
}
else if (ptr==head)
{
printf("\nDeleting head node\n");
t=ptr;
head = head->next;
free(t);
}
else
{
printf("\nDeleting node in between \n");
t=ptr;
previous->next = ptr->next;
free(t);
}
printf("\nNode deleted...\n");
}
}
//function to display data in all the nodes
void display()
{
if (head==NULL)
{
printf("\nEmpty list\n");
return;
}
for(ptr=head; ptr!=NULL; ptr=ptr->next)
printf("%d\n",ptr->data);
}
/* main() function */
int main()
{
int choice, d;
head = tail = NULL;
do
{
printf("\n1.Add node in the beginning\n");
printf("\n2.Add node at the end\n");
printf("\n3.Insert node\n");
printf("\n4.Delete node\n");
printf("\n5.Display data in all the nodes\n");
printf("\n6.Exit \n");
printf("Enter your choice ");
scanf("%d",&choice);
switch( choice )
{
case 1:
addbeg();
break;
case 2:
addend();
break;
case 3:
insert();
break;
case 4:
printf("Enter data of node to delete ");
scanf("%d",&d);
delnode(d);
break;
case 5:
display();
break;
case 6:
exit(0);
break;
default:
printf("\nInvalid choice\n");
}
}while (choice!=6);
return 0;
}
A node in a doubly linked list has three parts – address of previous node, data and address of
next node.
A structure is defined to represent nodes of a doubly linked list. The members in the
structure definition must include data / datum of any data type and a pointer variable of
same structure type to store address of next node and previous node.
Example
struct node
{
int data;
struct node *next, *prev;
};
The following pointer variables must be declared for the above struct definition.
*head – to store the address of the first node
*tail – to store the address of the last node
*n – to point to the newly created node
*ptr – to traverse the nodes in the list.
struct node
{
int data;
struct node *next, *prev;
}*head, *tail, *n, *ptr;
Inserting a new node in a doubly linked list is very similar to inserting a new node in a singly
linked list. A little extra work is required to maintain the links of the previous nodes. A node
can be inserted in a doubly linked list in the following ways:
At the front of the doubly linked list.
In between two nodes: either after a given node or before a given node.
At the end of the doubly linked list.
To add a new node, first check whether the list is empty by verifying head == NULL.
If the list is empty, the new node (for example a node containing 10) becomes both the head and
the tail node, and both its prev and next pointers are set to NULL.
Adding new node to the beginning of list containing nodes
n->next = head; //new node next is made to point to existing head node
head->prev = n; //head node prev pointer is made to point to new node
head = n; //new node is made head node
[Diagram: the new node is linked in front of the existing head node and becomes the new head.]
Adding a node at the end of the list
If the list is not empty and the new node is added at the end of the list:
tail->next = n; //last node's next is made to point to the new node
n->prev = tail; //new node's prev is made to point to the old tail node
tail = n; // new node is made the tail node
n->next = NULL;
[Diagram: the new node is linked after the existing tail node and becomes the new tail.]
Inserting a new node in between
If the list is not empty and the new node is to be inserted between two nodes, first locate the
node after which it is to be inserted; this node is called the previous node.
The new node's next is made to point to the node currently following the previous node, and the
new node's prev is made to point to the previous node.
The previous node's next is then made to point to the new node, and the prev pointer of the node
that now follows the new node is made to point back to the new node.
[Diagram: the new node is linked between the previous node and its old successor.]
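The complete C program given later in these notes shows only the delete operation on a doubly linked list; the insertion functions are sketched below, assuming the same global struct node, the pointers head, tail, n and ptr, and the createnode() helper used elsewhere in these notes.

//function to add a node at the beginning of a doubly linked list
void addbeg()
{
    createnode();                 //create new node n and read its data
    n->prev = n->next = NULL;
    if (head == NULL)             //list is empty
        head = tail = n;
    else
    {
        n->next = head;
        head->prev = n;
        head = n;
    }
}
//function to add a node at the end of a doubly linked list
void addend()
{
    createnode();
    n->prev = n->next = NULL;
    if (head == NULL)
        head = tail = n;
    else
    {
        tail->next = n;
        n->prev = tail;
        tail = n;
    }
}
//function to insert a new node after the node containing the search data s
void insert()
{
    int s;
    printf("\nEnter search node data ");
    scanf("%d", &s);
    for (ptr = head; ptr != NULL; ptr = ptr->next)
        if (ptr->data == s)
            break;
    if (ptr == NULL)
        printf("\nSearch node not located...Cannot insert...\n");
    else if (ptr == tail)
        addend();                 //inserting after the tail is the same as adding at the end
    else
    {
        createnode();
        n->next = ptr->next;
        n->prev = ptr;
        ptr->next->prev = n;
        ptr->next = n;
    }
}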
Traversing a doubly linked list can be done in both directions, forward and reverse.
To traverse the list in forward direction, start from head node and access the other nodes using
next pointer till the last node.
[Diagram: ptr moves from the head node towards the tail node using the next pointers.]
To traverse the list in reverse direction, start from tail node and access the previous nodes
using prev pointer till the head node.
[Diagram: ptr moves from the tail node towards the head node using the prev pointers.]
Deleting node from doubly linked list
[Diagram: doubly linked list NULL <- 30 <-> 15 <-> 25 -> NULL.]
If the node located is the head node, make the next node the new head node, set the new head's
prev pointer to NULL and free the memory of the located node.
If the node located is the tail node, make the previous node the new tail node, set the new tail's
next pointer to NULL and free the memory of the located node.
If the node located is in between two nodes, link the previous node and the next node to each
other and free the memory of the located node.
[Diagram: deleting node 5 from NULL <- 30 <-> 15 <-> 5 <-> 25 -> NULL gives
NULL <- 30 <-> 15 <-> 25 -> NULL.]
A Doubly linked list can be traversed in both forward and backward directions.
The delete operation in Doubly linked list is more efficient if a pointer to the node to be
deleted is given.
We can quickly insert a new node before a given node.
In a singly linked list, to delete a node, a pointer to the previous node is needed. To get this
previous node, sometimes the list is traversed. In Doubly linked list, we can get the previous
node using the previous pointer.
Every node of Doubly linked list requires extra space for a previous pointer.
All operations require an extra pointer previous to be maintained.
//function to delete the node containing data x from the doubly linked list
void delnode(int x)
{
    struct node *t;
    int flag=0;
    if (head==NULL)
    {
        printf("\nEmpty list...Cannot delete...\n");
        return;
    }
    //locate the node containing data x
    for(ptr=head; ptr!=NULL; ptr=ptr->next)
    {
        if (ptr->data == x)
        {
            flag=1; //search node located
            break;
        }
    }
if (flag==0)
printf("\nSearch node does not exist...\n");
else
{
if (ptr==head && ptr==tail)
{
free(ptr);
head=tail=NULL;
printf("\nNode deleted...List is empty...\n");
}
else if (ptr==tail) //deleting tail node
{
t=tail; //store tail node address in pointer t
tail=tail->prev; //make previous node as tail node
tail->next = NULL;
free(t); //free memory
printf("\nTail node deleted...\n");
}
else if (ptr==head)
{
t=ptr;
head = head->next;
head->prev = NULL;
free(t);
printf("\nHead node deleted...\n");
}
else
{
t=ptr;
ptr->prev->next = ptr->next;
ptr->next->prev = ptr->prev;
free(t);
printf("\nNode in between deleted...\n");
}
}
}
//function to display data in all the nodes
void display()
{
if (head==NULL)
{
printf("\nEmpty list...\n");
return;
}
for(ptr=head; ptr!=NULL; ptr=ptr->next)
printf("%d\n",ptr->data);
}
/* main() function */
int main()
{
int choice, d;
head = tail = NULL;
do
{
printf("\n1.Add node in the beginning\n");
printf("\n2.Add node at the end\n");
printf("\n3.Insert node\n");
printf("\n4.Delete node\n");
printf("\n5.Display data in all the nodes\n");
printf("\n6.Exit \n");
printf("Enter your choice ");
scanf("%d",&choice);
switch( choice )
{
case 1:
addbeg();
break;
case 2:
addend();
break;
case 3:
insert();
break;
case 4:
printf("Enter data of node to delete ");
scanf("%d",&d);
delnode(d);
break;
case 5:
display();
break;
case 6:
exit(0);
break;
default:
printf("\nInvalid choice\n");
}
}while (choice!=6);
return 0;
}
In a circular linked list, the first node and the last node are connected to
each other to form a circle. There is no NULL pointer at the end.
Example
There are two types of circular linked list: Circular Singly linked list and Circular
Doubly linked list.
Circular Singly linked list
A node in a circular singly linked list has the same two parts – data and the address of the next node.
A structure is defined to represent the nodes. The members in the structure definition must
include data of any data type and a pointer variable of the same structure type to store the
address of the next node.
Example
struct node
{
int data;
struct node *next;
};
The following pointer variables must be declared for the above struct definition.
*head – to store the address of the first node
*tail – to store the address of the last node
*n – to point to the newly created node
*ptr – to traverse the nodes in the list.
Structure definition along with pointer variables
struct node
{
int data;
struct node *next;
}*head, *tail, *n, *ptr;
If the list is empty, the new node (for example a node containing 10) becomes both the head and
the tail node, and its next pointer is made to point to itself.
If list is not empty, and new node is added at the beginning of the list
n->next = head; //new node next is made to point to existing head node
head = n; //new node is made head node
tail->next = head; //tail node next is made to point to new head
[Diagram: the new node 25 becomes the new head; the tail node's next now points back to 25.]
If the list is not empty and the new node is added at the end of the list:
tail->next = n; //last node's next is made to point to the new node
tail = n; // new node is made the tail node
tail->next = head; // new tail's next is made to point back to the head
[Diagram: the new node 25 becomes the tail and its next pointer points back to the head node.]
If the list is not empty and the new node is to be inserted between two nodes, first locate the
node after which the new node is to be inserted; this node is called the previous node.
The next pointer of the new node is made to point to the node currently following the previous
node, and then the next pointer of the previous node is made to point to the new node.
[Diagram: before insertion the circular list is head -> 30 -> 15 -> 25 -> back to head; after
inserting the new node 5 after the node 15 it becomes head -> 30 -> 15 -> 5 -> 25 -> back to head.]
Traversing a circular singly linked list
To traverse the list, a pointer ptr starts at the head node and follows the next pointers, visiting
30, 15, 5 and 25; the traversal stops when ptr reaches the head node again.
A do..while loop is suitable for traversing a circular singly linked list.
ptr = head;
do
{
    printf("%d\n", ptr->data);
    ptr = ptr->next;
}while(ptr != head);
Deleting a node from a circular singly linked list
If the node located is the tail node, make the previous node the new tail node, make the new
tail's next pointer point back to the head node and free the memory of the located node.
[Diagram: deleting node 5 from head -> 30 -> 15 -> 5 -> 25 -> back to head gives
head -> 30 -> 15 -> 25 -> back to head.]
#include <stdio.h>
#include <malloc.h>
#include <stdlib.h>
struct node
{
int data;
struct node *next;
}*head, *tail, *ptr, *n;
//function to create a new node
void createnode()
{
//allocate memory for the new node
n = (struct node *) malloc( sizeof(struct node) );
//accept input for new node data
printf("\nEnter data for new node ");
scanf("%d",&n->data);
}
//function to add node in the beginning
void addbeg()
{
createnode(); //function call to create a node
if (head==NULL) //list is empty
{
tail = head = n; //new node is head node and tail node
n->next = tail;
}
else
{
n->next = head;
head = n;
tail->next=head;
}
printf("\nNew node inserted at the beginning of the list");
}
//function to add node at the end of the list
void addend()
{
createnode(); //function call to create a node
if (head==NULL) //list is empty
{
tail = head = n; //new node is head node and tail node
n->next = head;  //single node points to itself
}
else
{
tail->next = n;
tail = n;
tail->next = head;
}
}
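main() below also calls insert(), delnode() and display(), which are not reproduced in the notes for the circular list. As an illustration, a display() function based on the do..while traversal described above might look like this (insert and delete follow the same pattern as the singly linked list versions, with the extra step of keeping tail->next pointing to head):

//function to display data in all the nodes of the circular list
void display()
{
    if (head == NULL)
    {
        printf("\nEmpty list\n");
        return;
    }
    ptr = head;
    do
    {
        printf("%d\n", ptr->data);
        ptr = ptr->next;
    } while (ptr != head);
}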
/* main() function */
int main()
{
int choice, d;
head = tail = NULL;
do
{
printf("\n1.Add node in the beginning\n");
printf("\n2.Add node at the end\n");
printf("\n3.Insert node\n");
printf("\n4.Delete node\n");
printf("\n5.Display data in all the nodes\n");
printf("\n6.Exit \n");
printf("Enter your choice ");
scanf("%d",&choice);
switch( choice )
{
case 1:
addbeg();
break;
case 2:
addend();
break;
case 3:
insert();
break;
case 4:
printf("Enter data of node to delete ");
scanf("%d",&d);
delnode(d);
break;
case 5:
display();
break;
case 6:
exit(0);
break;
default:
printf("\nInvalid choice...\n");
}
}while (choice!=6);
return 0;
}
UNIT II - STACK AND QUEUE DATA STRUCTURE
Overview of Stack data structure, push and pop operations on stack using array and linked
list – Queue ADT – double ended queues. Problems on Stack: evaluation of expressions,
balanced parentheses, string reverse using stack. Overview of Queue data structure, insert and
delete operations on Queue using array and linked list. Implement Queue based programs:
generate binary numbers from 1 to N using a queue.
A stack is a linear data structure in which the insertion of a new element and removal of an
existing element takes place at the same end represented as the top of the stack.
All operations in a Stack are performed in LIFO (Last In First Out) order.
Real life examples:
Stack Of plates – plates in a canteen placed one on top of another
Stack follows LIFO (Last In First Out) strategy. According to this strategy the item that is inserted last will
come out first.
A top pointer is always used to maintain top of stack.
Basic operations that can be carried out on a stack are
Push – Inserting an element into the stack
Pop – Removing an element from stack
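The main() function below uses push(), pop() and display() on a linked-list stack (note that top is initialised to NULL), but those functions are not shown in the notes. A minimal sketch of what they might look like, assuming a struct node with data and next members, is given here; pop() returns -1 when the stack is empty, matching the check in main():

#include <stdio.h>
#include <stdlib.h>
struct node
{
    int data;
    struct node *next;
}*top;
//push an element onto the stack (insert at the head of the list)
void push()
{
    struct node *n = (struct node *) malloc(sizeof(struct node));
    printf("\nEnter data to push ");
    scanf("%d", &n->data);
    n->next = top;
    top = n;
}
//pop an element from the stack (remove the head node); returns -1 if the stack is empty
int pop()
{
    struct node *t;
    int d;
    if (top == NULL)
    {
        printf("\nStack is empty\n");
        return -1;
    }
    d = top->data;
    t = top;
    top = top->next;
    free(t);
    return d;
}
//display all elements from top to bottom
void display()
{
    struct node *ptr;
    if (top == NULL)
    {
        printf("\nStack is empty\n");
        return;
    }
    for (ptr = top; ptr != NULL; ptr = ptr->next)
        printf("%d\n", ptr->data);
}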
int main()
{
int choice, d;
top = NULL;
do
{
printf("\nChoose your stack operation");
printf("\n1.Push");
printf("\n2.Pop");
printf("\n3.Display");
printf("\n4.Exit\n");
printf("Enter your choice ");
scanf("%d",&choice);
switch( choice )
{
case 1:
push();
break;
case 2:
d=pop();
if (d != -1)
printf("\nPopped %d from stack \n",d);
break;
case 3:
display();
break;
case 4:
exit(0);
break;
default:
printf("\nInvalid choice\n");
}
}while (choice!=4);
return 0;
}
Example: evaluating the postfix expression 2 3 1 * + 9 -
Step 1: The first element is 2, which is an operand, so push it into the stack.
Step 2: The second element is 3, which is an operand, so push it into the stack.
Step 3: The third element is 1, which is an operand, so push it into the stack.
Step 4: The fourth element is *, which is an operator. Pop two operands 1 and 3 from the stack,
evaluate 3 * 1 and push the result 3 back into the stack.
Step 5: The fifth element is +, which is an operator. Pop two operands 3 and 2 from the stack,
evaluate 2 + 3 and push the result 5 back into the stack.
Step 6: The sixth element is 9, which is an operand, so push it into the stack.
Step 7: The seventh element is -, which is an operator. Pop two operands 9 and 5 from the stack,
evaluate 5 - 9 and push the result -4 back into the stack.
Step 8: There are no more elements, so pop the element -4 from the top of the stack and display
it as the result of the expression.
#include <stdio.h>
#include <ctype.h>
#define SIZE 20
int stack[SIZE];
char expr[SIZE];
int top = -1;
int i;
//function to push element into stack
void push( int x )
{
if (top==SIZE-1)
printf("\n\tStack is full\n");
else
{
stack[++top]= x;
}
}
//function to pop element from stack
int pop()
{
    if (top==-1)
    {
        printf("\n\tStack is empty\n");
        return -1;
    }
    return stack[top--];
}
/* main() function */
int main()
{
int a,b;
printf("\n\tEnter postfix expression : ");
scanf("%s",expr);
for(i=0; expr[i] != '\0' ; i++)
{
if (isdigit(expr[i]))
push(expr[i]-48);
else
{
b = pop();
a = pop();
switch(expr[i])
{
case '+':
push(a + b);
break;
case '-':
push(a - b);
break;
case '*':
push(a * b);
break;
case '/':
push(a / b);
break;
}
}
}
printf("Result of expression %s is %d ",expr, pop());
return 0;
}
#include <stdio.h>
char str[20], stack[20];
int i, top=-1;
void push( char ch )
{
stack[++top] = ch;
}
char pop()
{
if (top==-1)
return '\0';
else
return stack[top--];
}
int main()
{
char ch;
printf("Enter a string ");
scanf("%s",str);
for(i=0; str[i]!= '\0' ; i++)
{
push( str[i] );
}
i=0;
while ( (ch=pop()) != '\0')
{
str[i++]=ch;
}
printf("Reversed string is %s\n", str);
return 0;
}
An item which enters the queue first must leave the queue first.
Queue data structure can be implemented using array or linked list.
Adding an item to a queue is called enqueue operation and removing an item from queue is called dequeue
operation.
Queues in which items are added at both ends and removed from both ends are known as Double Ended
queues.
In the array implementation of queue, the array size is fixed before data is stored in the
queue.
If all the elements in the array used queue is stored, the queue is full and no more
elements can be added to the queue.
When removing an element from the queue, the elements from index 1 up to rear are each shifted
one position to the left. This requires extra time.
#include <stdio.h>
#define SIZE 5
int que[SIZE]={0};
int front =-1, rear= -1;
//function to add item to queue
void enqueue( int x )
{
if (front==-1) //QUEUE IS EMPTY
{
front++;
rear++;
que[rear]=x;
printf("\n\tItem was added to the queue\n");
}
else if (rear==SIZE-1)
printf("\n\tQueue is full...Cannot add new item\n");
else
{
que[++rear]=x;
printf("\n\tItem was added to the queue\n");
}
}
//function to display items from queue
void display()
{
if (rear==-1)
{
printf("\n\tQueue is Empty...\n");
return;
}
printf("\n\tDisplaying items in queue\n");
for(int i=front; i<=rear ; i++)
printf("\t%d\n",que[i]);
}
//function to delete item from queue
void dequeue()
{
    int i;
    if (rear==-1)
        printf("\n\tCannot delete...Queue is empty...\n");
    else
    {
        printf("\n\tDeleting .... %d\n",que[front]);
        //shift the remaining elements one position to the left
        for(i=0; i<rear; i++)
            que[i] = que[i+1];
        rear--;
    }
}
/* main() function */
int main()
{
int n, choice;
char ans='y';
while (ans=='y')
{
printf("\n\tChoose your queue operation \n");
printf("\n\t1.Enqueue\n\t2.Dequeue\n\t3.Display\n\tEnter your choice : ");
scanf("%d",&choice);
switch(choice)
{
case 1:
printf("\n\tEnter value to add to the queue ");
scanf("%d",&n);
enqueue( n );
break;
case 2:
dequeue();
break;
case 3:
display();
break;
default:
printf("\n\tInvalid choice...\n");
}
printf("\n\tContinue y / n ? ");
fflush(stdin);
scanf("%c",&ans);
}
return 0;
}
Insert and delete operations on Queue using linked list
In the linked-list implementation of a queue, a front pointer and a rear pointer keep track of the
two ends of the list. A new item is enqueued by linking a new node after the rear node, and an
item is dequeued by removing the front node, so no shifting of elements is required and the queue
can grow as long as memory is available.
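The notes do not reproduce the program for this section; a minimal sketch of the enqueue and dequeue operations on a linked-list queue, in the same style as the other programs in these notes (the struct and pointer names are illustrative), might be:

#include <stdio.h>
#include <stdlib.h>
struct qnode
{
    int data;
    struct qnode *next;
}*front = NULL, *rear = NULL;
//add an item at the rear of the queue
void enqueue(int x)
{
    struct qnode *n = (struct qnode *) malloc(sizeof(struct qnode));
    n->data = x;
    n->next = NULL;
    if (rear == NULL)         //queue is empty
        front = rear = n;
    else
    {
        rear->next = n;
        rear = n;
    }
}
//remove and return the item at the front of the queue; returns -1 if the queue is empty
int dequeue()
{
    struct qnode *t;
    int d;
    if (front == NULL)
    {
        printf("\n\tQueue is empty\n");
        return -1;
    }
    d = front->data;
    t = front;
    front = front->next;
    if (front == NULL)        //queue became empty
        rear = NULL;
    free(t);
    return d;
}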
UNIT III - SORTING AND SEARCHING
Bubble sort – selection sort – insertion sort – merge sort – quick sort – time complexity of the sorting
methods – linear search – binary search – Hashing techniques: linear probing and chaining method,
analysis of time complexity.
Bubble Sort
This simple sorting algorithm starts at the beginning of the list of values stored in an array.
It compares each pair of adjacent elements and swaps them if they are in the wrong order.
These comparisons and passes are continued until no swaps are needed, which indicates that the
list of values stored in the array is sorted.
The algorithm is a comparison sort, and is named for the way smaller elements "bubble" to the top
of the list.
Although the algorithm is simple, it is slow and less efficient when compared to insertion
sort and other sorting methods.
Pseudo code
1. Start with the first element (index = 0) and compare the current element with the next element
of the array.
2. If the current element is greater than the next element of the array, swap them.
3. If the current element is less than the next element, move to the next element. Go to Step 1
and repeat until the end of the array is reached.
Example
Let's consider an array with the values {15, 11, 16, 12, 14, 13}.
Bubble sort repeatedly passes over this array, swapping adjacent out-of-order pairs, until the
array becomes {11, 12, 13, 14, 15, 16}.
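As an illustration, a small C function implementing the passes described above (the swapped flag stops early once a pass makes no swaps, which is the stopping condition mentioned above):

//bubble sort: repeatedly compare and swap adjacent elements
void bubble_sort(int a[], int n)
{
    int i, j, temp, swapped;
    for (i = 0; i < n - 1; i++)
    {
        swapped = 0;
        for (j = 0; j < n - 1 - i; j++)
        {
            if (a[j] > a[j + 1])      //adjacent pair in the wrong order
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
                swapped = 1;
            }
        }
        if (!swapped)                 //no swaps in this pass: the array is sorted
            break;
    }
}

For the example array above, calling bubble_sort(arr, 6) produces {11, 12, 13, 14, 15, 16}.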
Selection sort
This simple sorting algorithm improves on the performance of bubble sort by making only
one exchange for every pass through the list.
The algorithm first finds the smallest element in the array and swaps it with the element in
the first position of the array.
It then finds the second smallest element and swaps it with the element in the
second position, and continues until the entire array is sorted.
The algorithm repeatedly selects the next-smallest element and swaps it into the right place
on every pass; hence it is called selection sort.
Pseudo code
1. Starting from the first element (index 0), search for the smallest element in the array and
swap it with the element in the first position.
2. Move on to the second position and look for the smallest element in the remaining sub-array,
from that index to the last index.
3. Swap the second smallest element found in step 2 with the element at the second position (the
first position of the sub-array).
4. Repeat until the array is completely sorted.
Example
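The worked example figure is not reproduced in these notes; as an illustration, a small C function corresponding to the steps above is:

//selection sort: on each pass, place the smallest remaining element at position i
void selection_sort(int a[], int n)
{
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++)
    {
        min = i;                      //index of the smallest element seen so far
        for (j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        if (min != i)                 //one exchange per pass
        {
            temp = a[i];
            a[i] = a[min];
            a[min] = temp;
        }
    }
}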
Insertion Sort
This simple sorting algorithm works by taking elements from the list one by one and
inserting them in their correct position in a growing sorted portion of the list.
The algorithm builds the final sorted array one element at a time.
It uses n-1 passes to produce the final sorted list.
Pseudo code
Example
At the end of the passes, the insertion sort algorithm gives the sorted output in ascending
order.
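The pseudo code and worked example are not reproduced in these notes; a minimal C sketch of the usual insertion sort is:

//insertion sort: insert a[i] into its correct position among a[0..i-1]
void insertion_sort(int a[], int n)
{
    int i, j, key;
    for (i = 1; i < n; i++)
    {
        key = a[i];
        j = i - 1;
        while (j >= 0 && a[j] > key)  //shift larger elements one place to the right
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;               //insert the saved element
    }
}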
Merge Sort
Example 1
Example 2
Illustration
Step 1: Declare the variables low and high to mark the start and end of the array.
Step 2: low will equal 0, and high will equal array size - 1.
Step 3: Calculate mid as (low + high) / 2.
Step 4: Call the mergeSort function on the parts (low, mid) and (mid+1, high).
Step 5: These recursive calls continue as long as low < high.
Step 6: Finally, call the merge function to merge the two sorted halves.
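A compact C version of this scheme is sketched below; it assumes the array has at most 100 elements, and mergeSort(a, 0, n-1) sorts the whole array:

//merge two sorted halves a[low..mid] and a[mid+1..high]
void merge(int a[], int low, int mid, int high)
{
    int temp[100];                       //assumes at most 100 elements
    int i = low, j = mid + 1, k = 0;
    while (i <= mid && j <= high)
        temp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)  temp[k++] = a[i++];
    while (j <= high) temp[k++] = a[j++];
    for (i = low, k = 0; i <= high; i++, k++)
        a[i] = temp[k];
}
//recursively split the array and merge the sorted halves
void mergeSort(int a[], int low, int high)
{
    if (low < high)
    {
        int mid = (low + high) / 2;
        mergeSort(a, low, mid);
        mergeSort(a, mid + 1, high);
        merge(a, low, mid, high);
    }
}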
Quick Sort
This sorting algorithm is based on the Divide and Conquer algorithm that picks an element as
a pivot and partitions the given array around the picked pivot by placing the pivot in its
correct position in the sorted array.
The key routine in quick sort is partition(). The goal of partitioning is to place the pivot
(any element can be chosen as the pivot) at its correct position in the sorted array, putting all
smaller elements to the left of the pivot and all greater elements to the right of it.
After the pivot is placed in its correct position, the same procedure is applied recursively to
the sub-arrays on each side of the pivot, which finally sorts the array.
Example – Considering the last element as Pivot element.
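A C sketch using the last element as the pivot, as in the example (quickSort(a, 0, n-1) sorts the whole array):

//partition a[low..high] around the pivot a[high] and return the pivot's final index
int partition(int a[], int low, int high)
{
    int pivot = a[high];
    int i = low - 1, j, temp;
    for (j = low; j < high; j++)
    {
        if (a[j] < pivot)             //move smaller elements to the left part
        {
            i++;
            temp = a[i]; a[i] = a[j]; a[j] = temp;
        }
    }
    temp = a[i + 1]; a[i + 1] = a[high]; a[high] = temp;  //place the pivot
    return i + 1;
}
void quickSort(int a[], int low, int high)
{
    if (low < high)
    {
        int p = partition(a, low, high);
        quickSort(a, low, p - 1);     //sort the elements before the pivot
        quickSort(a, p + 1, high);    //sort the elements after the pivot
    }
}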
Best Time Complexity: Define the input for which algorithm takes less time or
minimum time. In the best case calculate the lower bound of an algorithm. Example: In
the linear search when search data is present at the first location of large data then the
best case occurs.
Average Time Complexity: In the average case take all random inputs and calculate
the computation time for all inputs.
And then we divide it by the total number of inputs.
Worst Time Complexity: Define the input for which algorithm takes a long time or
maximum time. In the worst calculate the upper bound of an algorithm. Example: In
the linear search when search data is present at the last location of large data then the
worst case occurs.
Best Case: The best case occurs when the array is already sorted. So the number of
comparisons required is N-1 and the number of swaps required = 0. Hence the best
case complexity is O(N).
Worst Case: The worst-case condition for bubble sort occurs when elements of the
array are arranged in decreasing order.
In the worst case, the total number of iterations or passes required to sort a given array
is (N-1). where ‘N’ is the number of elements present in the array.
At pass 1:
Number of comparisons = (N-1)
Number of swaps = (N-1)
At pass 2:
Number of comparisons = (N-2)
Number of swaps = (N-2)
At pass 3:
Number of comparisons = (N-3)
Number of swaps = (N-3)
At pass N-1:
Number of comparisons = 1
Number of swaps = 1
Now, calculating total number of comparison required to sort the array
= (N-1) + (N-2) + (N-3) + . . . 2 + 1
= (N-1)*(N-1+1)/2 { by using sum of N natural Number formula }
= (N * (N-1)) / 2
In worst case, Total number of swaps = Total number of comparison
Total number of comparison (Worst case) = N(N-1)/2
Total number of swaps (Worst case) = N(N-1)/2
So the worst case time complexity is O(N²), as N² is the highest order term.
Average Case Time Complexity: The number of comparisons in Bubble Sort does not depend on the
arrangement of the elements, so the comparison count C(N) is the same for every input and in the
average case there are O(N²) comparisons.
For the number of swaps, consider the following points:
If an element is in index I1 but it should be in index I2, then it will take a minimum of
(I2-I1) swaps to bring the element to the correct position.
Consider an element E is at a distance of I3 from its position in sorted array. Then the
maximum value of I3 will be (N-1) for the edge elements and N/2 for the elements at
the middle.
The sum of the maximum difference in position across all elements will be:
(N - 1) + (N - 3) + (N - 5) + ... + 0 + ... + (N - 3) + (N - 1)
= 2 × [1 + 3 + 5 + ... + (N - 1)]
= 2 × (N/2)²
= N²/2
Therefore, in the average case the number of swaps is also O(N²), and the average case time
complexity of bubble sort is O(N²).
Linear Search
Linear Search is the most basic method of searching an element from a list.
It is also known as sequential search, as it sequentially checks each element of the list
until the key element is found or the entire list has been traversed.
Example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30.
Step 1: Start from the first element (index 0) and compare the key with each element (arr[i]).
Step 2: If arr[i] is equal to the key, return the index i; here the key 30 is found at index 2.
Step 3: If the key is not found after checking all the elements, report that the element is not
present.
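A minimal C function for this (it returns the index of the key, or -1 if the key is not present):

//linear (sequential) search: check each element until the key is found
int linear_search(int arr[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
        if (arr[i] == key)
            return i;      //key found at index i
    return -1;             //key not present in the array
}

For the example above, linear_search(arr, 8, 30) returns 2.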
Advantages of linear search
Linear search can be used irrespective of whether the array is sorted or not. It can
be used on arrays of any data type.
Does not require any additional memory.
It is a well-suited algorithm for small datasets.
Time Complexity:
Best Case: In the best case, the key might be present at the first index. So the best
case complexity is O(1)
Worst Case: In the worst case, the key might be present at the last index i.e.,
opposite to the end from which the search has started in the list. So the worst-case
complexity is O(N) where N is the size of the list.
Average Case: O(N)
Binary Search
Binary search works only on a sorted array. It repeatedly compares the search element with the
middle element and discards the half of the array in which the search element cannot lie.
Example
Array elements:- 10 20 30 40 50 60 70 80 90 100
Search element:- 60
Calculate mid index as
mid = low + (high – low) / 2
low = 0, high = 9
mid = 0 +(9-0)/2 = 9/2 = 4
Element at mid index 4 is 50
Search value 60 > mid value 50. Select the elements to the right side of 50.
low = mid + 1 = 4 + 1 = 5
Calculate mid index
mid = 5 + ( 9 – 5 )/ 2 = 5 + 4/2 = 5 + 2 = 7
Element at mid index 7 is 80
Mid element 80 is > search value 60. Select the elements to the left of 80.
high = mid – 1 = 7 – 1 = 6
Calculate mid index
mid = 5 + (6 - 5) / 2 = 5
Element at mid index 5 is 60.
Mid element 60 = search value 60, so the element is found.
Display the message 'Element found at index 5'.
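A C function following these steps (a sketch; the array must already be sorted):

//binary search on a sorted array: returns the index of the key, or -1 if it is not found
int binary_search(int arr[], int n, int key)
{
    int low = 0, high = n - 1, mid;
    while (low <= high)
    {
        mid = low + (high - low) / 2;
        if (arr[mid] == key)
            return mid;            //key found at the middle index
        else if (key > arr[mid])
            low = mid + 1;         //search the right half
        else
            high = mid - 1;        //search the left half
    }
    return -1;                     //key not present
}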
Hashing Techniques: Linear probing and Chaining methods
A group of values can be stored, accessed and processed efficiently using array data structure.
Storing data in an array can be done quickly, but searching requires more time if dataset is
large.
This inefficiency of array data structure is overcome by using hash data structure.
Hash or Hash table is a data structure that maps keys (data) to values.
It uses a hash function to calculate index for the data key and the data key is stored in the
index.
Hashing is a technique or process of generating index for data keys using hash function and
storing them into a hash table.
[Figure: a hash table mapping data keys to the indexes computed by the hash function.]
Storage and retrieval in a hash data structure can be done in constant time (on average).
Hash function
The hash function creates a mapping between key and value, this is done through the use of
mathematical formulas known as hash functions. The result of the hash function is referred to
as a hash value or hash.
Division method
In the division method the hash index is computed as h(key) = key mod table size.
Example: the keys 35, 50, 11, 76, 79, 58, 92, 53, 64 and 17 are inserted one by one into a hash
table of size 10 using h(key) = key % 10:
35 % 10 = 5, 50 % 10 = 0, 11 % 10 = 1, 76 % 10 = 6, 79 % 10 = 9,
58 % 10 = 8, 92 % 10 = 2, 53 % 10 = 3, 64 % 10 = 4, 17 % 10 = 7.
The final hash table is:
index : 0   1   2   3   4   5   6   7   8   9
key   : 50  11  92  53  64  35  76  17  58  79
Collision
The hashing process generates a small number for a big key, so there is a possibility that two
keys could produce the same value. The situation where a newly inserted key maps to an
already occupied slot is called a collision, and it must be handled using some collision
handling technique.
A hash collision occurs when a hash algorithm produces the same hash value for two different
input values.
Linear probing
In linear probing, if the slot at the computed hash index is already occupied, the probe moves to
the next index (wrapping around the table) until an empty slot is found. With an auxiliary hash
function h(k), the i-th probe is h(k, i) = [h(k) + i] mod m, where m is the table size.
Example:
Consider the sequence of keys {9, 7, 11, 13, 12, 8}, the hash function h(k) = 2k + 5 and a table
of size m = 10.
Step 1: Start with an empty hash table of size 10 with possible hash values 0 to 9.
Step 2:
First Key to be inserted in the hash table = 9.
h(k) = 2k + 5
h(9) = 2*9 + 5 = 23
h(k, i) = [h(k) + i] mod m
h(9, 0) = [23 + 0] mod 10 = 3
So, key 9 will be inserted at index 3 of the hash table.
Step 3:
Next Key to be inserted in the hash table = 7.
h(k) = 2k + 5
h(7) = 2*7 + 5 = 19
h(k, i) = [h(k) + i] mod m
h(7, 0) = [19 + 0] mod 10 = 9
So, key 7 will be inserted at index 9 of the hash table
Step 4:
Next Key to be inserted in the hash table = 11.
h(k) = 2k + 5
h(11) = 2*11 + 5 = 27
h(k, i) = [h(k) + i] mod m
h(11, 0) = [27 + 0] mod 10 = 7
So, key 11 will be inserted at index 7 of the hash table
Step 5:
Next Key to be inserted in the hash table = 13.
h(k) = 2k + 5
h(13) = 2*13 + 5 = 31
h(k, i) = [h(k) + i] mod m
h(13, 0) = [31 + 0] mod 10 = 1
So, key 13 will be inserted at index 1 of the hash table
Step 6:
Next key to be inserted in the hash table = 12.
h(k) = 2k + 5
h(12) = 2*12 + 5 = 27
h(k, i) = [h(k) + i] mod m
h(12, 0) = [27 + 0] mod 10 = 7
Here Collision has occurred because index 7 is already filled.
Now we will increase i by 1.
h(12, 1) = [27 + 1] mod 10 = 8
So, key 12 will be inserted at index 8 of the hash table.
Step 7:
Next key to be inserted in the hash table = 8.
h(k) = 2k + 5
h(8) = 2*8 + 5 = 21
h(k, i) = [h(k) + i] mod m
h(8, 0) = [21 + 0] mod 10 = 1
Here a collision has occurred because index 1 is already filled.
Now we will increase i by 1, so i becomes 1.
h(k) = 2k + 5
h(8) = 2*8 + 5 = 21
h(k, i) = [h(k) + i] mod m
h(8, 1) = [21 + 1] mod 10 = 2
Index 2 is vacant, so key 8 will be inserted at index 2.
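A small C sketch of insertion with linear probing, using the hash function from the example (h(k) = 2k + 5, table size 10, and -1 marking an empty slot):

#include <stdio.h>
#define M 10
int table[M];
//insert a key using linear probing; returns the index used, or -1 if the table is full
int insert_linear_probe(int key)
{
    int i, index;
    for (i = 0; i < M; i++)
    {
        index = ((2 * key + 5) + i) % M;   //h(k, i) = [h(k) + i] mod m
        if (table[index] == -1)
        {
            table[index] = key;
            return index;
        }
    }
    return -1;                             //no empty slot found
}
int main()
{
    int keys[] = {9, 7, 11, 13, 12, 8}, i;
    for (i = 0; i < M; i++)
        table[i] = -1;                     //mark all slots empty
    for (i = 0; i < 6; i++)
        printf("key %d -> index %d\n", keys[i], insert_linear_probe(keys[i]));
    return 0;
}

Running this reproduces the placements worked out above: 9 -> 3, 7 -> 9, 11 -> 7, 13 -> 1, 12 -> 8 and 8 -> 2.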
Chaining
In chaining, each index of the hash table holds a linked list (a chain) of all the keys that hash
to that index; when a collision occurs, the new key is simply added to the chain at that index.
Example:
Using hash function h(key) = key % table size
Hash table size 7
Data key 42 hash index is 42 % 7 = 0
Data key 38 hash index is 38 % 7 = 3.
Data key 52 hash index = 52 % 7 = 3, the same as the hash index for 38. A collision has
occurred, so data key 52 is also added to the chain at index 3.
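A minimal C sketch of chained insertion (the struct and table names are illustrative):

#include <stdlib.h>
#define TSIZE 7
struct cnode
{
    int key;
    struct cnode *next;
};
struct cnode *chain[TSIZE];          //one chain (linked list) per index
//insert a key at the front of the chain for its hash index
void chain_insert(int key)
{
    int index = key % TSIZE;         //h(key) = key % table size
    struct cnode *n = (struct cnode *) malloc(sizeof(struct cnode));
    n->key = key;
    n->next = chain[index];
    chain[index] = n;
}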
Time Complexity is a concept in computer science that deals with the quantification of the
amount of time taken by a set of code or algorithm to process or run as a function of the amount
of input. In other words, the time complexity is how long a program takes to process a given
input. The efficiency of an algorithm depends on two parameters:
Time Complexity
Space Complexity
Time Complexity: It is defined as the number of times a particular instruction set is executed
rather than the total time taken. It is because the total time taken also depends on some external
factors like the compiler used, the processor’s speed, etc.
Space Complexity: It is the total memory space required by the program for its execution.
1. Big O Notation
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it
gives the worst-case complexity of an algorithm. Using big-O notation we can say that, up to a
constant factor, the running time grows no faster than the stated function. It is a model for
quantifying algorithm performance.
2. Omega Notation
Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best-case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm’s time complexity. It is defined
as the condition that allows an algorithm to complete statement execution in the shortest
amount of time.
3. Theta Notation
Theta notation encloses the function from above and below. Since it represents the upper and
the lower bound of the running time of an algorithm, it is used for analyzing the average-
case complexity of an algorithm. The execution time serves as both a lower and upper bound
on the algorithm’s time complexity. It exists as both, the most, and least boundaries for a
given input value.
Tree ADT
A tree is a non-linear data structure in which the data elements (nodes) are arranged in a
hierarchy and connected by edges.
Example
[Figure: a sample tree with 11 nodes, rooted at node A.]
Root node
The top most node or very first node in a tree is called the Root node. Every tree must
have a root node; it is the origin of the tree data structure. In the above tree structure, A is
the root node. There must be only one root node in a tree.
Edge
The connecting link between any 2 nodes is called an Edge. If there are n nodes in a tree,
there will be n-1 edges. In the above tree there is an edge between nodes A and B, A and C,
B and E, etc. There are 11 nodes in the above tree and 10 edges.
The node which is the predecessor of any node, or a node which has a child / children, is
called a Parent node. In the above tree, A, B, C, E and G are the parent nodes.
The node which is a descendant of any node is called a Child node. In the above tree, every
node except the root A is a child node.
A node that does not have any child or a node with degree 0 is called Leaf node.
Degree of any node is the total number of children the node has. In the above tree,
degree of A is 2, degree of B is 3, degree of node K is zero.
The height of a node is the total number of edges from a leaf node to that node along the
longest path, and the height of the tree is the height of its root. In the above tree the
longest paths are A-B-E-I, A-B-E-J and A-C-G-K, and hence the height of the tree is 3. The
height of a leaf node is 0.
Depth of a node is the total number of edges from Root node to that particular node.
Path between 2 nodes is the sequence of nodes and edges from one node to another
node. The path between nodes A and J is A – B – E – J.
Binary Tree ADT
A binary tree is a tree in which every node has at most two children, called the left child and
the right child.
Full binary tree - A binary tree is a full binary tree if all leaves are at the
same level and every non-leaf node has exactly two children.
Complete binary tree - Every non-leaf node has exactly two children, but all
leaves are not necessarily at the same level.
Tree traversal
A tree traversal can be defined as the process of visiting each node exactly once in
some order. There are 3 different traversal methods
1. Pre Order Traversal: DLR: Visit the root node first, then visit left node,
then right node.
2. In Order Traversal: LDR: Visit left sub tree, then root then right sub
tree.
3. Post Order Traversal: LRD: visit the left sub tree first, then right sub
tree, then the root node
In pre-order traversal, each node is processed before (pre) either of its sub-
trees.
This is the simplest traversal.
[Figure: a binary tree with root 1, whose left child 2 has children 4 and 5 and whose right
child 3 has children 6 and 7.]
In the tree given above, node 1 is visited first, followed by its left subtree and then its
right subtree.
Final Pre-order traversal is 1, 2, 4, 5, 3, 6, 7.
In-order traversal (LDR) of the same tree visits the left subtree first, then the root, then
the right subtree, giving 4, 2, 5, 1, 6, 3, 7.
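The three traversals can be written as short recursive C functions; a sketch, assuming a node structure named tnode with data, left and right members:

#include <stdio.h>
struct tnode
{
    int data;
    struct tnode *left, *right;
};
//pre-order (DLR): root, left subtree, right subtree
void preorder(struct tnode *root)
{
    if (root == NULL) return;
    printf("%d ", root->data);
    preorder(root->left);
    preorder(root->right);
}
//in-order (LDR): left subtree, root, right subtree
void inorder(struct tnode *root)
{
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}
//post-order (LRD): left subtree, right subtree, root
void postorder(struct tnode *root)
{
    if (root == NULL) return;
    postorder(root->left);
    postorder(root->right);
    printf("%d ", root->data);
}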
Binary Search Tree is a node-based binary tree data structure which has
the following properties:
The left subtree of a node contains only nodes with keys lesser than
the node’s key.
The right subtree of a node contains only nodes with keys greater
than the node’s key.
The left and right subtree each must also be a binary search tree.
Example
Consider data elements - 45, 15, 79, 90, 10, 55, 12, 20, 50
The first data 45 is inserted as the Root node of the tree. Each of the remaining elements is
then inserted by comparing it with the nodes starting from the root: values smaller than a node
go into its left subtree and larger values go into its right subtree. After all nine elements
are inserted, the creation of the Binary Search Tree is completed.
Searching in a binary search tree
Searching a binary search tree is finding a specific data in the tree. It is easy to search for a
node in binary search tree because the nodes are arranged in a specific order.
Steps to search for a node in a binary search tree:
1. First, compare the element to be searched with the root element of the tree.
2. If root is matched with the target element, then return the node's location.
3. If it is not matched, then check whether the item is less than the root element, if it is
smaller than the root element, then move to the left subtree.
4. If it is larger than the root element, then move to the right subtree.
5. Repeat the above procedure recursively until the match is found.
6. If the element is not found or not present in the tree, then return NULL.
Example: Consider the following binary search tree. The node to be searched is 20.
Step 1: Compare root node data 45 and search node data 20. Search node data 20 < root node
data 45, so move to the left subtree, i.e. Root = Root->Left.
Step 2: The current node data is 15. Search node data 20 > current node data 15, so move
to the right subtree, i.e. Root = Root->Right.
Step 3: The new current node data is 20, which is the search node data. Hence the search node
is located.
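These steps correspond to the following recursive C function (a sketch, using the same struct tnode as in the traversal sketch above):

//search for a key in a binary search tree; returns the node or NULL if it is not present
struct tnode *bst_search(struct tnode *root, int key)
{
    if (root == NULL || root->data == key)
        return root;                          //found the key or reached an empty subtree
    if (key < root->data)
        return bst_search(root->left, key);   //the key can only be in the left subtree
    else
        return bst_search(root->right, key);  //the key can only be in the right subtree
}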
Deleting a node with two children from a binary search tree:
Find the inorder successor of the node to be deleted, i.e. find the minimum element in
the right subtree of the node.
Replace the node's value with the inorder successor's value, and repeat this step until the
value to be removed reaches a node at the leaf level of the tree.
Finally, replace that leaf node with NULL and free the allocated space.
AVL tree
An AVL tree is a self-balancing binary search tree in which, for every node, the balance factor
(the height of the left sub-tree minus the height of the right sub-tree) is -1, 0 or +1.
If the balance factor of a node is 1, it means that the left sub-tree is one level higher
than the right sub-tree.
If the balance factor of a node is 0, it means that the left sub-tree and right sub-tree
are of equal height.
If the balance factor of a node is -1, it means that the left sub-tree is one level lower
than the right sub-tree.
An AVL tree is given in the following figure. We can see that, balance factor
associated with each node is in between -1 and +1. therefore, it is an example of AVL
tree.
Example
Heaps
A heap is a complete binary tree. A complete binary tree is a binary tree in which all levels
are completely filled except possibly the last level, which is filled from left to right.
The heap is a special balanced binary tree data structure in which the root node is
compared with its children and arranged accordingly.
There are 2 types of heaps – Min heap, Max heap.
In a Min heap the value of the parent node must be less than or equal to the values of both of
its children, i.e. for every node, the node value is greater than or equal to its parent's
value.
Example:
In Max heap tree, the value of Parent node is greater than or equal to its children.
Example
A multiway tree is defined as a tree that can have more than two children. If a
multiway tree can have maximum m children, then this tree is called as multiway
tree of order m (or an m-way tree).
As with the other trees that have been studied, the nodes in an m-way tree will be
made up of m-1 key fields and pointers to children.
Graph ADT – representations of graph – graph traversals – Search methods : BFS, DFS -
DAG – topological ordering – greedy algorithms – dynamic programming – shortest paths –
minimum spanning trees – introduction to complexity classes and intractability.
Graph ADT
Non-linear data structure.
Collection of nodes that have data and are connected to other nodes.
Graph is composed of a set of vertices (V) and a set of edges (E).
The graph is denoted by G(V, E).
Vertices:
Fundamental units of the graph.
Also known as nodes.
Can be labeled or unlabelled.
Edges:
Edges are lines that connect any two nodes in the graph.
Can connect any two nodes in any possible way. There are no rules.
Also known as arcs.
Every edge can be labeled / unlabelled.
Graph example
Facebook uses the graph data structure to store data.
Everything posted on Facebook, such as a user, photo, group, album, comment, story,
video or link, is considered a node in the graph.
When a photo is posted on Facebook by a user, an edge is drawn between the user
node and the photo node.
Facebook is thus a collection of nodes and edges.
Application of graphs
Facebook Graph
Trivial graph
A graph is called a trivial graph if it has only one vertex present in it. The trivial
graph is the smallest possible graph that can be created with the least number of
vertices that is one vertex only.
Disconnected graph
A graph is said to be a disconnected graph where there does not exist any path
between at least one pair of vertices.
Cyclic graph
If a graph has a minimum of one cycle present, it is called a cyclic graph.
There are 2 cycles in the graph given below.
Acyclic graph
A graph that does not contain any cycle is called an acyclic graph.
Representations of Graph
Adjacency matrix
A graph with n vertices can be represented by an n x n adjacency matrix in which the entry in
row i and column j is 1 if there is an edge between vertex i and vertex j, and 0 otherwise.
Example: a graph on the vertices 1, 2, 3 and 4 with edges 1-2, 1-3 and 3-4.
For the undirected graph the matrix is symmetric:
      1  2  3  4
  1   0  1  1  0
  2   1  0  0  0
  3   1  0  0  1
  4   0  0  1  0
For the directed version of the same graph (edges 1->2, 1->3 and 3->4), only the row of the
source vertex of each edge is marked:
      1  2  3  4
  1   0  1  1  0
  2   0  0  0  0
  3   0  0  0  1
  4   0  0  0  0
Starting from the root, all the nodes at a particular level are visited first and then the
nodes of the next level are traversed till all the nodes are visited.
BFS uses a queue for traversal. All the adjacent unvisited nodes of the node currently being
processed are added to the queue; a node is marked visited and is removed from the queue when
its turn to be processed comes.
To avoid visiting the same node again and again, the vertices are divided into 2
categories – visited nodes, not visited nodes.
An array is used to mark the visited nodes, the values stored in this array is 1 or 0.
Example:
Consider undirected graph given below:
Step 1:
Initially Queue and Visited array is empty.
Step 2:
Add the starting node 0 to the Queue and mark it as visited.
Step 3:
Remove Node 0 from the front of the Queue, visit its unvisited neighbours and add
them to the Queue.
Step 4:
Remove Node 1 from front of Queue and visit the unvisited neighbours and push
them into the Queue.
Step 5
Remove node 2 from front of queue and visit the unvisited neighbours and add
them to the Queue.
Step 6
Remove node 3 from front of queue and visit the unvisited neighbours
and add them to the Queue.
Every neighbor of node 3 is visited.
Move to next node in front of Queue
Step 7
Remove node 4 from the front of the queue and visit the unvisited neighbours
and add them to the Queue.
Every neighbour of node 4 is already visited, so nothing is added.
The Queue is now empty, so the process is terminated.
C program to implement BFS graph traversal method for the graph
given below
      0  1  2  3  4
  0   0  1  1  0  0
  1   1  0  1  1  0
  2   1  1  0  0  1
  3   0  1  0  0  1
  4   0  0  1  1  0
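The program below omits its global declarations and the bfs() function; a minimal sketch consistent with the main() that follows (global n, loop indexes i and j, adj and visited arrays, and a queue implemented with a plain array) might be:

int n, i, j;
int adj[20][20];              //adjacency matrix of the graph
int visited[20];              //visited[i] is 1 if node i has been reached
//breadth first search starting from vertex v
void bfs(int v)
{
    int queue[20], front = 0, rear = 0, u, w;
    visited[v] = 1;
    queue[rear++] = v;        //enqueue the starting vertex
    while (front < rear)
    {
        u = queue[front++];   //dequeue the next vertex
        for (w = 0; w < n; w++)
        {
            if (adj[u][w] == 1 && visited[w] == 0)
            {
                visited[w] = 1;
                queue[rear++] = w;   //enqueue each unvisited neighbour
            }
        }
    }
}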
#include <stdio.h>
int main()
{
int v;
printf("Enter the number of vertices: ");
scanf("%d", &n);
printf("Enter graph data in matrix form: \n");
for (i = 0; i < n; i++)
for (j = 0; j < n; j++)
{
printf("Enter 1 if there is an edge between nodes %d and %d, otherwise 0 : ",i,j);
scanf("%d", &adj[i][j]);
}
printf("Enter the starting vertex: ");
scanf("%d", &v);
bfs(v);
printf("The nodes that cab be visited from %d are: \n",v);
for (i = 1; i < n; i++)
if (visited[i])
printf("%d\t", i);
return 0;
}
Depth-first search is an algorithm for traversing graph data structure. The algorithm
starts at the root node (any node in the graph can be selected as root node) and explores as
far as possible along each path
DFS can be used to detect cycles in a graph. It can also be used to find the path
between two nodes.
Pseudo code
Step 1: Initialise the Visited array to 0 and create an empty stack.
Step 2: Push the starting node onto the stack.
Step 3: If the stack is not empty, pop a node from the top of the stack; otherwise go to Step 6.
Step 4: If the popped node has not been visited, mark it visited and process (print) it.
Step 5: Push all nodes that can be visited from the current node onto the stack, if they are not
already visited. Go to Step 3.
Step 6: Stop.
Sample graph (undirected), empty Visited array and empty stack are given below:
Initially the starting node 0 is pushed onto the stack and the Visited array is all zeros:
Stack: 0        Visited array: 0 0 0 0 0   (indexes 0 to 4)
Node 0 is then popped, marked visited (Visited[0] = 1) and the nodes that can be visited from it
(1, 2 and 3) are pushed onto the stack, with node 1 on top:
Stack: 1 2 3    Visited array: 1 0 0 0 0
Step 6: Pop node 1 from Stack. It is not already visited. Update Visited [1] to 1.
Stack: 2 3        Visited: 1 1 0 0 0
Step 7: Push the nodes that can be visited from node 1 to stack if they are not already
visited. But nodes 0 and 1 are already visited and they are not pushed to stack.
Step 8: Pop node 2 from top of stack. It is not already visited. Update Visited [2] to 1.
Stack: 3        Visited: 1 1 1 0 0
Step 9: Push the nodes that can be visited from node 2 to stack if they are not already
visited. Node 4 is not already visited and 4 is pushed to stack.
Stack: 3 4
Step 10: Pop node 3 from top of stack. It is not already visited. Update Visited [3] to 1. The
node that can be visited from 3 is 0. It is already visited and not pushed to stack.
Visited: 1 1 1 1 0        Stack: 4
Step 11: Pop node 4 from Stack and update Visited [4] to 1.
Visited: 1 1 1 1 1
Empty Stack (top is -1)
#include <stdio.h>
#include <stdlib.h>
int n;
int adj[10][10];
int visited[10];
int stack[10];
int top = -1;
void create_graph()
{
    int i, max_edges, origin, destin;
    printf("\n\tEnter number of nodes : ");
    scanf("%d",&n);
    max_edges = n * (n - 1);
    for(i = 1; i <= max_edges; i++)
    {
        printf("\n\tEnter edge %d( -1 -1 to quit ) : ", i);
        scanf("%d %d", &origin, &destin);
        if (origin == -1 && destin == -1)   //no more edges
            break;
        adj[origin][destin] = 1;            //undirected graph: mark the edge
        adj[destin][origin] = 1;            //in both directions
    }
}
void push(int v)
{
if(top == 9)
{
printf("\nStack Overflow\n");
return;
}
top=top+1;
stack[top] = v;
}
int pop()
{
int v;
if(top == -1)
{
printf("\nStack Underflow\n");
exit(1);
}
else
{
v = stack[top];
top=top-1;
return v;
}
}
int isEmpty_stack( )
{
if(top == -1)
return 1;
else
return 0;
}
void DFS(int v)
{
int i;
push(v);
while(!isEmpty_stack())
{
v = pop();
if(visited[v]==0)
{
printf("%d ",v);
visited[v]=1;
}
for(i=n-1; i>=0; i--)
{
if(adj[v][i]==1 && visited[i]==0)
push(i);
}
}
}
void DF_Traversal()
{
    int v;
    printf("\nEnter starting node for DFS : ");
    scanf("%d", &v);
    DFS(v);
    printf("\n");
}
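main() below also calls a display() function that is not shown in the notes; a minimal sketch that prints the adjacency matrix might be:

//display the adjacency matrix of the graph
void display()
{
    int i, j;
    printf("\nAdjacency matrix :\n");
    for (i = 0; i < n; i++)
    {
        for (j = 0; j < n; j++)
            printf("%d ", adj[i][j]);
        printf("\n");
    }
}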
int main()
{
create_graph();
display();
DF_Traversal();
return 0;
}
Example (three-address code, which forms a DAG of dependent computations):
T1 = a + b
T2 = T1 + c
T3 = T1 x T2
Shortest Paths
The shortest path problem involves finding the shortest path between two vertices
(or nodes) in a graph.
Algorithms such as the Floyd-Warshall algorithm and different variations of
Dijkstra's algorithm are used to find solutions to the shortest path problem.
Dijkstra's Algorithm was designed and published by Dr. Edsger W. Dijkstra, a
Dutch Computer Scientist, Software Engineer, Programmer, Science Essayist, and
Systems Scientist.
Dijkstra's algorithm is a simple modification to breadth first search. It is used to
find the shortest path from a given node to all other nodes, where edges may have
non-negative lengths.
Step 1: First, we will mark the source node with a current distance of 0 and set the
rest of the nodes to INFINITY.
Step 2: We will then set the unvisited node with the smallest current distance as the
current node, suppose X.
Step 3: For each neighbor N of the current node X: We will then add the current
distance of X with the weight of the edge joining X-N. If it is smaller than the
current distance of N, set it as the new current distance of N.
Step 4: We will then mark the current node X as visited.
Step 5: We will repeat the process from 'Step 2' if there is any node unvisited left in
the graph.
Example: Consider the input graph given below with start node as A.
1. With the above graph as the input, let node A be the source.
2. Mark all the nodes as unvisited.
3. Set the path to 0 at node A and INFINITY for all the other nodes.
4. Mark source node A as visited and access its neighboring nodes.
Note: Neighboring nodes are only accessed and not visited.
5. Update the path to node B by 4 with the help of relaxation because the path to node A is 0 and
the path from node A to B is 4, and the minimum((0 + 4), INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because the path to
node A is 0 and the path from node A to C is 5, and the minimum((0 + 5), INFINITY) is 5.
Both the neighbors of node A are now relaxed; therefore, we can move ahead.
7. Select the next unvisited node with the least path and visit it. Visit node B and perform
relaxation on its unvisited neighbors. After performing relaxation, the path to node C will
remain 5, whereas the path to node E will become 11, and the path to node D will become 13.
8. We will now visit node E and perform relaxation on its neighboring nodes B, D, and F. Since
only node F is unvisited, it will be relaxed. Thus, the path to node B will remain as it is,
i.e., 4, the path to node D will also remain 13, and the path to node F will become 14 (8 + 6).
9. Now we will visit node D, and only node F will be relaxed. However, the path to node F will
remain unchanged, i.e., 14.
10. Since only node F is remaining, we will visit it but not perform any relaxation as all its
neighboring nodes are already visited.
11. Once all the nodes of the graphs are visited, the program will end.
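As an illustration of these steps, a compact C sketch for a graph stored as an adjacency matrix of edge weights is given below; the number of vertices V, the INF constant and the convention that a weight of 0 means "no edge" are assumptions of this sketch, not part of the notes:

#define V 6
#define INF 99999
//Dijkstra's algorithm: shortest distances from src to every vertex (non-negative weights)
void dijkstra(int graph[V][V], int src, int dist[V])
{
    int visited[V] = {0}, i, count, u, v, min;
    for (i = 0; i < V; i++)
        dist[i] = INF;                 //Step 1: all distances start at INFINITY
    dist[src] = 0;                     //except the source, which is 0
    for (count = 0; count < V; count++)
    {
        min = INF; u = -1;
        for (v = 0; v < V; v++)        //Step 2: pick the unvisited vertex with smallest distance
            if (!visited[v] && dist[v] < min)
            {
                min = dist[v];
                u = v;
            }
        if (u == -1) break;            //remaining vertices are unreachable
        visited[u] = 1;                //Step 4: mark it visited
        for (v = 0; v < V; v++)        //Step 3: relax every neighbour of u
            if (graph[u][v] && !visited[v] && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
}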
Minimum Spanning Trees
Kruskal's Algorithm
This is one of the popular algorithms for finding the minimum spanning tree of a
connected, undirected graph. It is a greedy algorithm. The algorithm workflow is
as below:
First, it sorts all the edges of the graph by their weights,
Then starts the iterations of finding the spanning tree.
At each iteration, the algorithm adds the next lowest-weight edge one by one,
such that the edges picked until now does not form a cycle.
This algorithm can be implemented efficiently using a DSU ( Disjoint-Set ) data
structure to keep track of the connected components of the graph. This is used in a
variety of practical applications such as network design, clustering, and data
analysis.
Disjoint Set (Union-Find) data structure
A data structure that stores non-overlapping or disjoint subsets of elements is called a
disjoint set data structure. The disjoint set data structure supports the following
operations:
Adding new sets to the disjoint set.
Merging disjoint sets to a single disjoint set using Union operation.
Finding representative of a disjoint set using Find operation.
Check if two sets are disjoint or not.
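A minimal C sketch of the Find and Union operations using a parent array (the array size and the path-compression detail are choices of this sketch):

int parent[100];               //parent chain leading to each element's representative
//create n singleton sets: each element is its own representative
void make_sets(int n)
{
    int i;
    for (i = 0; i < n; i++)
        parent[i] = i;
}
//Find: return the representative of the set containing x (with path compression)
int find_set(int x)
{
    if (parent[x] != x)
        parent[x] = find_set(parent[x]);
    return parent[x];
}
//Union: merge the sets containing x and y; returns 0 if they were already in the same set
int union_sets(int x, int y)
{
    int rx = find_set(x), ry = find_set(y);
    if (rx == ry)
        return 0;              //x and y already connected: adding edge x-y would form a cycle
    parent[ry] = rx;
    return 1;
}

In Kruskal's algorithm, an edge (u, v) is added to the spanning tree only when union_sets(u, v) returns 1.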
Prim's Algorithm
This is also a greedy algorithm. It has the following workflow:
It starts by selecting an arbitrary vertex and then adding it to the MST.
Then, it repeatedly checks for the minimum edge weight that connects one
vertex of MST to another vertex that is not yet in the MST.
This process is continued until all the vertices are included in the MST.
To efficiently select the minimum weight edge for each iteration, this algorithm uses
priority_queue to store the vertices sorted by their minimum edge weight currently.
It also simultaneously keeps track of the MST using an array or other data structure
suitable considering the data type it is storing.
This algorithm can be used in various scenarios such as image segmentation based
on color, texture, or other features. For Routing, as in finding the shortest path
between two points for a delivery truck to follow.
Introduction to complexity classes and intractability
In computer science, there exist problems that are hard or even impossible to solve efficiently;
to study such problems, they are divided into classes known as Complexity Classes.
In complexity theory, a Complexity Class is a set of problems with related
complexity.
These classes help scientists to group problems based on how much time and space
they require to solve problems and verify the solutions.
It is the branch of the theory of computation that deals with the resources required to
solve a problem.
The common resources are time and space, meaning how much time the algorithm
takes to solve a problem and the corresponding memory usage.
The time complexity of an algorithm is used to describe the number of steps
required to solve a problem, but it can also be used to describe how long it takes to
verify the answer.
The space complexity of an algorithm describes how much memory is required for
the algorithm to operate.
Complexity classes are useful in organizing similar types of problems.
The following complexity classes are available
P Class
NP Class
CoNP Class
NP-hard
NP-complete
The P Class
The P in the P class stands for Polynomial Time. It is the collection of decision
problems(problems with a “yes” or “no” answer) that can be solved by a
deterministic machine in polynomial time.
Features:
1. The solution to P problems is easy to find.
2. P is often a class of computational problems that are solvable and tractable.
Tractable means that the problems can be solved in theory as well as in
practice. But the problems that can be solved in theory but not in practice are
known as intractable.
This class contains many natural problems:
1. Calculating the greatest common divisor.
2. Finding a maximum matching.
3. Decision versions of linear programming.