UNIT 1 DSA
INTRODUCTION
Today, due to the advent of technology, a huge amount of data is created and
stored regularly in digital format all over the world. It becomes difficult to fetch
the required data from the entire pool and present it in an understandable and
usable manner to the client or the user. Let’s compare the situation to a
supermarket which is not at all well-organized. No staff knows where the
commodities are placed, and the entire place seems a mess. Isn't it difficult for a
customer to find the required commodities, and equally difficult for the
storekeepers to keep track of the inventory? The same happens with an
unorganized set of data. Here comes the role of 'Data Structures'.
Processor speed: To handle very large amounts of data, high-speed processing is
required; but as the data grows day by day to billions of files per entity, the
processor may fail to deal with that much data.
Data Structure
struct structure_name
{
data_type member_1;
data_type member_2;
data_type member_3;
.
.
.
data_type member_n;
};
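For instance, a structure holding a student record could be declared as follows (the field names roll_no, name and marks are illustrative and not taken from these notes):

struct student
{
    int roll_no;       /* data_type member 1 */
    char name[30];     /* data_type member 2 */
    float marks;       /* data_type member 3 */
};

struct student s1 = { 1, "Asha", 87.5 };   /* one variable of the structure type */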
(a) STACK:- A Stack is a linear data structure that follows the LIFO
(Last-In-First-Out) principle. A Stack has only one end, whereas a Queue has
two ends (front and rear). It contains only one pointer, the top pointer, which
points to the topmost element of the stack. Whenever an element is added to the
stack, it is added on the top of the stack, and an element can be deleted only
from the top of the stack. In other words, a stack can be defined as a container
in which insertion and deletion can be done from one end, known as the top of
the stack.
Working of Stack
Stack works on the LIFO pattern. As we can observe in the below figure there
are five memory blocks in the stack; therefore, the size of the stack is 5.
Suppose we want to store the elements in a stack and let's assume that stack is
empty. We have taken the stack of size 5 as shown below in which we are
pushing the elements one by one until the stack becomes full.
Since the size of the stack is 5, the stack becomes full after five push
operations. In the cases above, we can observe that the stack gets filled up from
the bottom to the top: each new element is placed above the previous one.
When we perform a delete operation on the stack, there is only one way for
entry and exit, as the other end is closed. It follows the LIFO pattern, which
means that the value entered first will be removed last. In the above case, the
value 5 was entered first, so it will be removed only after the deletion of all the
other elements.
(b)Queue: - A queue in the data structure can be considered similar to the queue
in the real-world. A queue is a data structure in which whatever comes first will
go out first. It follows the FIFO (First-In-First-Out) policy. In Queue, the
insertion is done from one end known as the rear end or the tail of the queue,
whereas the deletion is done from another end known as the front end or the
head of the queue. In other words, it can be defined as a list or a collection with
a constraint that the insertion can be performed at one end called as the rear end
or tail of the queue and deletion is performed on another end called as the front
end or the head of the queue.
(c) Linked List:- A Linked List can be defined as a collection of objects
called nodes that are randomly stored in the memory.
o A node contains two fields i.e. data stored at that particular address and
the pointer which contains the address of the next node in the memory.
o The last node of the list contains a pointer to null.
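A minimal sketch of such a node in C is shown below (the identifiers node, data and next are illustrative); creating a node with malloc() requires <stdlib.h>:

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;             /* data stored at this node                    */
    struct node *next;    /* address of the next node; NULL for the last */
};

int main(void)
{
    struct node *head = malloc(sizeof(struct node));   /* single-node list      */
    head->data = 10;
    head->next = NULL;                                  /* last node points to null */
    printf("%d\n", head->data);
    free(head);
    return 0;
}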
(i) Tree
(ii) Graph
(i) TREE:- A tree is also one of the data structures that represent
hierarchical data. Suppose we want to show the employees and their
positions in the hierarchical form then it can be represented as shown
below:
The above tree shows the organization hierarchy of some company. In the
above structure, John is the CEO of the company, and John has two direct
reports named as Steve and Rohan. Steve has three direct reports named Lee,
Bob, Ella where Steve is a manager. Bob has two direct reports
named Sal and Emma. Emma has two direct reports named Tom and Raj.
Tom has one direct report named Bill. This particular logical structure is known
as a Tree. Its structure is similar to the real tree, so it is named a Tree. In this
structure, the root is at the top, and its branches are moving in a downward
direction. Therefore, we can say that the Tree data structure is an efficient way
of storing the data in a hierarchical way.
o Root: The root node is the topmost node in the tree hierarchy. In other
words, the root node is the one that doesn't have any parent. In the above
structure, node numbered 1 is the root node of the tree. If a node is
directly linked to some other node, it would be called a parent-child
relationship.
o Child node: If the node is a descendant of any node, then the node is
known as a child node.
o Parent: If the node contains any sub-node, then that node is said to be the
parent of that sub-node.
o Sibling: The nodes that have the same parent are known as siblings.
o Leaf Node:- The node of the tree, which doesn't have any child node, is
called a leaf node. A leaf node is the bottom-most node of the tree. There
can be any number of leaf nodes present in a general tree. Leaf nodes can
also be called external nodes.
o Internal nodes: A node that has at least one child node is known as an internal node.
o Ancestor node:- An ancestor of a node is any predecessor node on a path
from the root to that node. The root node doesn't have any ancestors. In
the tree shown in the above image, nodes 1, 2, and 5 are the ancestors of
node 10.
o Descendant: A successor of a given node (its child, or any node further down
that branch) is known as a descendant of that node. In the above figure, 10 is a
descendant of node 5.
(ii) GRAPH:-
A graph can be defined as a group of vertices and edges that are used to connect
these vertices. A graph can be seen as a cyclic tree in which the vertices (nodes)
can maintain any complex relationship among themselves, instead of having only
a parent-child relationship.
Definition
A graph G can be defined as an ordered set G(V, E) where V(G) represents the
set of vertices and E(G) represents the set of edges which are used to connect
these vertices.
A Graph G(V, E) with 5 vertices (A, B, C, D, E) and six edges ((A,B), (B,C),
(C,E), (E,D), (D,B), (D,A)) is shown in the following figure.
In a directed graph, edges form an ordered pair. Edges represent a specific path
from some vertex A to another vertex B. Node A is called initial node while
node B is called terminal node.
GRAPH TERMINOLOGY
Path
A path can be defined as the sequence of nodes that are followed in order to
reach some terminal node V from the initial node U.
Closed Path
A path will be called as closed path if the initial node is same as terminal node.
A path will be closed path if V0=VN.
Simple Path
If all the nodes on the path are distinct, with the possible exception V0 = VN,
then such a path P is called a simple path (a closed simple path when V0 = VN).
Cycle
A cycle can be defined as the path which has no repeated edges or vertices
except the first and last vertices.
Connected Graph
A connected graph is the one in which some path exists between every two
vertices (u, v) in V. There are no isolated nodes in connected graph.
Complete Graph
A complete graph is the one in which every node is connected with all other
nodes. A complete graph contains n(n-1)/2 edges, where n is the number of nodes
in the graph.
Weighted Graph
In a weighted graph, each edge is assigned with some data such as length or
weight. The weight of an edge e can be given as w(e) which must be a positive
(+) value indicating the cost of traversing the edge.
Digraph
A digraph is a directed graph in which each edge of the graph is associated with
some direction and the traversing can be done only in the specified direction.
Loop
An edge that is associated with the similar end points can be called as Loop.
Adjacent Nodes
If two nodes u and v are connected via an edge e, then the nodes u and v are
called as neighbours or adjacent nodes.
Degree of a Node
The degree of a node is the number of edges that are connected with that node. A
node with degree 0 is called an isolated node.
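As a sketch of how such a graph can be stored, the program below builds an adjacency matrix for the graph G(V, E) given above, with vertices A to E mapped to indices 0 to 4 (an assumption made for illustration) and the six listed edges; the degree of each node is then simply the number of 1s in its row.

#include <stdio.h>
#define V 5   /* vertices A, B, C, D, E mapped to indices 0..4 */

int main(void)
{
    /* adjacency matrix of the undirected graph G(V, E) shown above:
       edges (A,B), (B,C), (C,E), (E,D), (D,B), (D,A)              */
    int adj[V][V] = {0};
    int edges[6][2] = { {0,1}, {1,2}, {2,4}, {4,3}, {3,1}, {3,0} };
    int i, j;

    for (i = 0; i < 6; i++)
    {
        adj[edges[i][0]][edges[i][1]] = 1;
        adj[edges[i][1]][edges[i][0]] = 1;   /* undirected: symmetric entry */
    }

    /* degree of each node = number of 1s in its row */
    for (i = 0; i < V; i++)
    {
        int degree = 0;
        for (j = 0; j < V; j++)
            degree += adj[i][j];
        printf("degree(%c) = %d\n", 'A' + i, degree);
    }
    return 0;
}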
If the size of data structure is n then we can only insert n-1 data elements into it.
4) Searching: The process of finding the location of an element within the data
structure is called Searching. There are two algorithms to perform searching,
Linear Search and Binary Search. We will discuss each one of them later in this
tutorial.
6) Merging: When two lists, List A and List B, of sizes M and N respectively and
of similar types of elements, are clubbed or joined to produce a third list, List C
of size (M+N), this process is called merging.
Analysis of Algorithm:-
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following −
A Priori Analysis − This is a theoretical analysis of an algorithm.
Efficiency of an algorithm is measured by assuming that all other
factors, for example, processor speed, are constant and have no effect on
the implementation.
A Posteriori Analysis − This is an empirical analysis of an algorithm.
The selected algorithm is implemented using a programming language and
then executed on the target computer machine. In this analysis, actual
statistics like running time and space required are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with
the execution or running time of various operations involved. The running time
of an operation can be defined as the number of computer instructions executed
per operation.
Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space
used by the algorithm X are the two main factors, which decide the efficiency
of X.
Time Factor − Time is measured by counting the number of key
operations such as comparisons in the sorting algorithm.
Space Factor − Space is measured by counting the maximum memory
space required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage
space required by the algorithm in terms of n as the size of input data.
Asymptotic Notation
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running
time complexity of an algorithm.
Ο Notation
Ω Notation
θ Notation
f(n) = Ο(g(n))
f(n) is said to be Ο(g(n)) if there exist a constant C and a value n0 such that
f(n) ≤ C g(n) for all n > n0.
Consider the following graph drawn for the values of f(n) and C g(n), with the
input value (n) on the X-axis and the time required on the Y-axis.
In the above graph, after a particular input value n0, C g(n) is always greater
than f(n), which indicates the algorithm's upper bound.
Example
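A representative worked example (not from the original notes): f(n) = 3n + 2 is Ο(n), because 3n + 2 ≤ 4n for every n ≥ 2; here C = 4 and n0 = 2.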
f(n) = Ω(g(n))
f(n) is said to be Ω(g(n)) if there exist a constant C and a value n0 such that
f(n) ≥ C g(n) for all n > n0.
Consider the following graph drawn for the values of f(n) and C g(n), with the
input value (n) on the X-axis and the time required on the Y-axis.
In the above graph, after a particular input value n0, C g(n) is always less than
f(n), which indicates the algorithm's lower bound.
Example
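A representative worked example (not from the original notes): f(n) = 3n + 2 is Ω(n), because 3n + 2 ≥ 3n for every n ≥ 1; here C = 3 and n0 = 1.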
f(n) = Θ(g(n))
f(n) is said to be Θ(g(n)) if there exist constants C1 and C2 and a value n0 such
that C1 g(n) ≤ f(n) ≤ C2 g(n) for all n > n0.
Consider the following graph drawn for the values of f(n), C1 g(n) and C2 g(n),
with the input value (n) on the X-axis and the time required on the Y-axis.
In the above graph, after a particular input value n0, C1 g(n) is always less than
f(n) and C2 g(n) is always greater than f(n), which indicates the algorithm's tight
(average) bound.
Example
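A representative worked example (not from the original notes): f(n) = 3n + 2 is Θ(n), because 3n ≤ 3n + 2 ≤ 4n for every n ≥ 2; here C1 = 3, C2 = 4 and n0 = 2.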
Space-Time Trade-off:-
Worst Case: The record we are trying to search is the last record of the
list. If f(n) is the function which gives the running time and/ or storage
space requirement of the algorithm in terms of the size n of the input
data, this particular case of the algorithm will produce a complexity
C(n)=n for our algorithm f(n) as the algorithm will run n times until
it finds the desired record.
Average Case: The record we are trying to search can be any record in the
list. In this case we do not know at which position it might be, so we take an
average of all the possible times our algorithm may run. Assuming n data
items, the probability of searching for any one of them is 1/n. Multiplying
each of these probabilities by the number of comparisons needed to find that
item and summing the products, we obtain the complexity C(n) for our
algorithm f(n) in the average case as follows:
C(n) = (1 + 2 + 3 + ... + n) / n = (n + 1) / 2, which is approximately n/2.
The time and space it uses are two major measures of the efficiency of an
algorithm. Sometimes the choice of data structure involves a space-time
trade-off; by increasing the amount of space for storing the data one may
be able to reduce the time needed for processing the data or vice
versa. Hence let us look over them in detail:
The best algorithm to solve a given problem is one that requires less space in
memory and takes less time to complete its execution. But in practice it is not
always possible to achieve both these objectives. As we know there may be
more than one approach to solve a particular problem. One approach may take
more space but takes less time to complete its execution while the other
approach may take less space but takes more time to complete its
execution. We may have to sacrifice one at the cost of the other. If space is
our constraint, then we have to choose a program that requires less space
at the cost of more execution time. On the other hand if time is our
constraint then we have to choose a program that takes less time to
complete its execution at the cost of more space.
Binary Search:-
Binary Search Algorithm is one of the widely used searching techniques. It is used to
search for an element in a sorted array. This searching technique follows the divide and
conquer strategy: the search space is reduced to half in every iteration.
Binary Search Algorithm is a very efficient technique for searching, but the elements must
be in sorted order so that the array can be partitioned correctly.
o Complexity
SN | Performance | Complexity
1 | Worst case | O(log n)
2 | Best case | O(1)
3 | Average case | O(log n)
4 | Space | O(1)
Suppose we search for item = 23 in a sorted array a[] of 9 elements (indices 0 to 8).
In the 1st step:
1. BEG = 0
2. END = 8
3. MID = (0 + 8) / 2 = 4
4. a[mid] = a[4] = 13 < 23, therefore
In the 2nd step:
1. BEG = mid + 1 = 5
2. END = 8
3. MID = (5 + 8) / 2 = 6
4. a[mid] = a[6] = 20 < 23, therefore
In the 3rd step:
1. BEG = mid + 1 = 7
2. END = 8
3. MID = (7 + 8) / 2 = 7
4. a[mid] = a[7]
5. a[7] = 23 = item
6. Therefore, set location = mid.
7. The location of the item is 7.
As the desired element is present in the array, the search is successful.
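A minimal C sketch of the search traced above (the full array contents are an assumption; the trace only fixes a[4] = 13, a[6] = 20 and a[7] = 23, and the item searched is 23):

#include <stdio.h>

/* returns the index of item in the sorted array a[0..n-1], or -1 if absent */
int binary_search(int a[], int n, int item)
{
    int beg = 0, end = n - 1;
    while (beg <= end)
    {
        int mid = (beg + end) / 2;
        if (a[mid] == item)
            return mid;            /* found: set location = mid */
        else if (a[mid] < item)
            beg = mid + 1;         /* search the right half     */
        else
            end = mid - 1;         /* search the left half      */
    }
    return -1;                     /* search unsuccessful       */
}

int main(void)
{
    int a[9] = { 2, 5, 8, 10, 13, 15, 20, 23, 28 };      /* sorted; a[4]=13, a[6]=20, a[7]=23 */
    printf("location = %d\n", binary_search(a, 9, 23));  /* prints 7 */
    return 0;
}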
Difference between Linear Search and Binary Search
Basis | Linear Search | Binary Search
Definition | The linear search starts searching from the first element and compares each element with the searched element until it is found. | It finds the position of the searched element by repeatedly comparing the middle element of the (sorted) array with it.
Sorted data | In a linear search, the elements don't need to be arranged in sorted order. | The pre-condition for the binary search is that the elements must be arranged in sorted order.
Size | It is preferable for small-sized data sets. | It is preferable for large-sized data sets.
Efficiency | It is less efficient in the case of large-sized data sets. | It is more efficient in the case of large-sized data sets.
Worst-case scenario | In a linear search, the worst-case complexity for finding an element is O(n). | In a binary search, the worst-case complexity for finding an element is O(log2 n).
Best-case scenario | In a linear search, the best case, finding the element at the first position, is O(1). | In a binary search, the best case, finding the element on the first (middle) comparison, is O(1).
MODULE-2
ADT Stack
A stack is an Abstract Data Type (ADT),
commonly used in most programming languages. It is named stack as it behaves
like a real-world stack, for example – a deck of cards or a pile of plates, etc.
A real-world stack allows operations at one end only. For example, we can
place or remove a card or plate from the top of the stack only. Likewise, Stack
ADT allows all data operations at one end only. At any given time, we can only
access the top element of a stack.
This feature makes it LIFO data structure. LIFO stands for Last-in-first-out.
Here, the element which is placed (inserted or added) last, is accessed first. In
stack terminology, insertion operation is called PUSH operation and removal
operation is called POP operation.
Stack Representation
The following diagram depicts a stack and its operations −
Basic Operations
Stack operations may involve initializing the stack, using it, and then de-initializing
it. Apart from these basics, a stack is used for the following two
primary operations −
push() − Pushing (storing) an element on the stack.
pop() − Removing (accessing) an element from the stack.
Both operations take place at the same end: data is PUSHed onto the top of the stack and POPped from the top of the stack.
To use a stack efficiently, we need to check the status of stack as well. For the
same purpose, the following functionality is added to stacks −
peek() − get the top data element of the stack, without removing it.
isFull() − check if stack is full.
isEmpty() − check if stack is empty.
At all times, we maintain a pointer to the last PUSHed data on the stack. As this
pointer always represents the top of the stack, hence named top. The top pointer
provides top value of the stack without actually removing it.
First we should learn about procedures to support stack functions −
peek()
Algorithm of peek() function −
begin procedure peek
return stack[top]
end procedure
Implementation of peek() function in C programming language −
Example
int peek() {
return stack[top];
}
isfull()
Algorithm of isfull() function −
begin procedure isfull
   if top equals to MAXSIZE - 1
      return true
   else
      return false
   endif
end procedure
Implementation of isfull() function in C programming language −
Example
bool isfull() {
   if(top == MAXSIZE - 1)
      return true;
   else
      return false;
}
isempty()
Algorithm of isempty() function −
begin procedure isempty
   if top less than 1
      return true
   else
      return false
   endif
end procedure
Implementation of isempty() function in C programming language is slightly
different. We initialize top at -1, as the index in array starts from 0. So we check
if the top is below zero or -1 to determine if the stack is empty. Here's the code
−
Example
bool isempty() {
if(top == -1)
return true;
else
return false;
}
Push Operation
The process of putting a new data element onto stack is known as a Push
Operation. Push operation involves a series of steps −
Step 1 − Checks if the stack is full.
Step 2 − If the stack is full, produces an error and exit.
Step 3 − If the stack is not full, increments top to point next empty space.
Step 4 − Adds data element to the stack location, where top is pointing.
Step 5 − Returns success.
If the linked list is used to implement the stack, then in step 3, we need to
allocate space dynamically.
Algorithm for PUSH Operation
A simple algorithm for Push operation can be derived as follows −
begin procedure push: stack, data
if stack is full
return null
endif
top ← top + 1
stack[top] ← data
end procedure
Implementation of this algorithm in C, is very easy. See the following code −
Example
void push(int data) {
   if(!isfull()) {
      top = top + 1;
      stack[top] = data;
   } else {
      printf("Could not insert data, Stack is full.\n");
   }
}
Pop Operation
Accessing the content while removing it from the stack, is known as a Pop
Operation. In an array implementation of pop() operation, the data element is
not actually removed, instead top is decremented to a lower position in the stack
to point to the next value. But in linked-list implementation, pop() actually
removes data element and deallocates memory space.
A Pop operation may involve the following steps −
Step 1 − Checks if the stack is empty.
Step 2 − If the stack is empty, produces an error and exit.
Step 3 − If the stack is not empty, accesses the data element at
which top is pointing.
Step 4 − Decreases the value of top by 1.
Step 5 − Returns success.
A simple algorithm for Pop operation can be derived as follows −
begin procedure pop: stack
   if stack is empty
      return null
   endif
   data ← stack[top]
   top ← top - 1
   return data
end procedure
Implementation of this algorithm in C, is as follows −
Example
int pop() {
   int data;
   if(!isempty()) {
      data = stack[top];
      top = top - 1;
      return data;
   } else {
      printf("Could not retrieve data, Stack is empty.\n");
      return -1;   /* sentinel value for underflow */
   }
}
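The snippets above use stack[], top and MAXSIZE without declaring them. A self-contained sketch that ties the pieces together (MAXSIZE = 5 and the test values are illustrative) could look like this:

#include <stdio.h>
#include <stdbool.h>

#define MAXSIZE 5

int stack[MAXSIZE];
int top = -1;                        /* empty stack */

bool isfull(void)  { return top == MAXSIZE - 1; }
bool isempty(void) { return top == -1; }

void push(int data)
{
    if (!isfull())
        stack[++top] = data;         /* increment top, then store */
    else
        printf("Could not insert data, Stack is full.\n");
}

int pop(void)
{
    if (!isempty())
        return stack[top--];         /* read the top, then decrement */
    printf("Could not retrieve data, Stack is empty.\n");
    return -1;                       /* sentinel value for underflow */
}

int main(void)
{
    push(10); push(20); push(30);
    printf("%d\n", pop());           /* 30 - last in, first out */
    printf("%d\n", pop());           /* 20 */
    return 0;
}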
Applications of Stacks:
1. Expression Evaluation and Conversion
There are 3 types of expression we use in Programming, which are Infix
Expression, Prefix Expression and Postfix Expression.
Similarly, Stack is also used for Converting one expression into another. For
example, converting Infix to Postfix or Infix to Prefix.
For example, converting the infix expression 3*3/(4-1)+6*2 to postfix proceeds as follows:
Symbol | Stack | Postfix Expression
3 | Empty | 3
* | * | 3
3 | * | 33
/ | / | 33*
( | /( | 33*
4 | /( | 33*4
- | /(- | 33*4
1 | /(- | 33*41
) | / | 33*41-
+ | + | 33*41-/
6 | + | 33*41-/6
* | +* | 33*41-/6
2 | +* | 33*41-/62
(end) | Empty | 33*41-/62*+
#include <stdio.h>
#include <ctype.h>
#define SIZE 50
char s[SIZE];
int top=-1;
int push(char elem)
{
s[++top]=elem;
return 0;
}
char pop()
{
return(s[top--]);
}
int pr(char elem)
{
switch(elem)
{
case '#': return 0;
case '(': return 1;
case '+':
case '-': return 2;
case '*':
case '/': return 3;
}
return 0;
}
void main()
{
char infx[50], pofx[50], ch, elem;
int i=0, k=0;
printf("\n\nEnter Infix Expression: ");
scanf("%s",infx);
push('#');
while( (ch=infx[i++]) != '\0')
{
if( ch == '(') push(ch);
else
if(isalnum(ch)) pofx[k++]=ch;
else
if( ch == ')')
{
while( s[top] != '(')
pofx[k++]=pop();
elem=pop();
}
else
{
while( pr(s[top]) >= pr(ch) )
pofx[k++]=pop();
push(ch);
}
}
while( s[top] != '#')
pofx[k++]=pop();
pofx[k]='\0';
printf("\n\n Given Infix Expression: %s \n Postfix Expresssion: %s\
n",infx,pofx);
}
Output:
2. Infix to Prefix
Algorithm for Infix to Prefix Conversion:
Step 1: Insert ")" onto the stack, and add "(" to the end of A.
Step 2: Scan A from right to left and repeat Steps 3 to 6 for each element of A
until the stack is empty.
Step 7: Exit
#include<stdio.h>
#include <conio.h>
#include <string.h>
#include <ctype.h>
#define MAX 50
struct infix
{
char target[MAX] ;
char stack[MAX] ;
char *s, *t ;
int top, l ;
};
void initinfix ( struct infix * ) ;
void setexpr ( struct infix *, char * ) ;
void push ( struct infix *, char ) ;
char pop ( struct infix * ) ;
void convert ( struct infix * ) ;
int priority ( char c ) ;
void show ( struct infix ) ;
void main( )
{
struct infix q ;
char expr[MAX] ;
clrscr( ) ;
initinfix ( &q ) ;
printf ( "\nEnter an expression in infix form: " ) ;
gets ( expr ) ;
setexpr ( &q, expr ) ;
convert ( &q ) ;
printf ( "The Prefix expression is: " ) ;
show ( q ) ;
getch( ) ;
}
/* initializes elements of structure variable */
void initinfix ( struct infix *pq )
{
pq -> top = -1 ;
strcpy ( pq -> target, "" ) ;
strcpy ( pq -> stack, "" ) ;
pq -> l = 0 ;
}
/* reverses the given expression */
void setexpr ( struct infix *pq, char *str )
{
pq -> s = str ;
strrev ( pq -> s ) ;
pq -> l = strlen ( pq -> s ) ;
*( pq -> target + pq -> l ) = '\0' ;
pq -> t = pq -> target + ( pq -> l - 1 ) ;
}
/* adds operator to the stack */
void push ( struct infix *pq, char c )
{
if ( pq -> top == MAX - 1 )
printf ( "\nStack is full.\n" ) ;
else
{
pq -> top++ ;
pq -> stack[pq -> top] = c ;
}
}
/* pops an operator from the stack */
char pop ( struct infix *pq )
{
if ( pq -> top == -1 )
{
printf ( "Stack is empty\n" ) ;
return -1 ;
}
else
{
char item = pq -> stack[pq -> top] ;
pq -> top-- ;
return item ;
}
}
/* converts the infix expr. to prefix form */
void convert ( struct infix *pq )
{
char opr ;
while ( *( pq -> s ) )
{
if ( *( pq -> s ) == ' ' || *( pq -> s ) == '\t' )
{
pq -> s++ ;
continue ;
}
if ( isdigit ( *( pq -> s ) ) || isalpha ( *( pq -> s ) ) )
{
while ( isdigit ( *( pq -> s ) ) || isalpha ( *( pq -> s ) ) )
{
*( pq -> t ) = *( pq -> s ) ;
pq -> s++ ;
pq -> t-- ;
}
}
if ( *( pq -> s ) == ')' )
{
push ( pq, *( pq -> s ) ) ;
pq -> s++ ;
}
if ( *( pq -> s ) == '*' || *( pq -> s ) == '+' || *( pq -> s ) == '/' ||*( pq -> s )
== '%' || *( pq -> s ) == '-' || *( pq -> s ) == '$' )
{
if ( pq -> top != -1 )
{
opr = pop ( pq ) ;
while ( priority ( opr ) > priority ( *( pq -> s ) ) )
{
*( pq -> t ) = opr ;
pq -> t-- ;
opr = pop ( pq ) ;
}
push ( pq, opr ) ;
push ( pq, *( pq -> s ) ) ;
}
else
push ( pq, *( pq -> s ) ) ;
pq -> s++ ;
}
if ( *( pq -> s ) == '(' )
{
opr = pop ( pq ) ;
while ( opr != ')' )
{
*( pq -> t ) = opr ;
pq -> t-- ;
opr = pop ( pq ) ;
}
pq -> s++ ;
}
}
while ( pq -> top != -1 )
{
opr = pop ( pq ) ;
*( pq -> t ) = opr ;
pq -> t-- ;
}
pq -> t++ ;
}
/* returns the priority of the operator */
int priority ( char c )
{
if ( c == '$' )
return 3 ;
if ( c == '*' || c == '/' || c == '%' )
return 2 ;
else
{
if ( c == '+' || c == '-' )
return 1 ;
else
return 0 ;
}
}
/* displays the prefix form of given expr. */
void show ( struct infix pq )
{
while ( *( pq.t ) )
{
printf ( " %c", *( pq.t ) ) ;
pq.t++ ;
}
}
Output:
3. Postfix to Infix
Following is an algorithm for Postfix to Infix conversion:
Step 6: Else,
a. Delete the top 2 values from the stack.
b. Put the operator, with the values as arguments and form a string.
c. Encapsulate the resulted string with parenthesis.
d. Insert the resulted string back to stack.
Step 7: If there is only one value in the stack, that value in the stack is the
desired infix string.
Step 8: If there are more values in the stack, show error /* The user input has
too many values */
For example, the postfix expression efg-+he-sh-o+/* is converted as follows (the stack is shown from top to bottom):
Remaining Expression | Stack
efg-+he-sh-o+/* | NULL
fg-+he-sh-o+/* | "e"
g-+he-sh-o+/* | "f", "e"
-+he-sh-o+/* | "g", "f", "e"
+he-sh-o+/* | "f-g", "e"
he-sh-o+/* | "e+f-g"
e-sh-o+/* | "h", "e+f-g"
-sh-o+/* | "e", "h", "e+f-g"
sh-o+/* | "h-e", "e+f-g"
h-o+/* | "s", "h-e", "e+f-g"
-o+/* | "h", "s", "h-e", "e+f-g"
o+/* | "s-h", "h-e", "e+f-g"
+/* | "o", "s-h", "h-e", "e+f-g"
/* | "s-h+o", "h-e", "e+f-g"
* | "(h-e)/(s-h+o)", "e+f-g"
NULL | "(e+f-g)*((h-e)/(s-h+o))"
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>   /* for clrscr() and getch() (Turbo C) */
int top = 10;
struct node
{
char ch;
struct node *next;
struct node *prev;
} *stack[11];
typedef struct node node;
void push(node *str)
{
if (top <= 0)
printf("Stack is Full ");
else
{
stack[top] = str;
top--;
}
}
node *pop()
{
node *exp;
if (top >= 10)
printf("Stack is Empty ");
else
exp = stack[++top];
return exp;
}
void convert(char exp[])
{
node *op1, *op2;
node *temp;
int i;
for (i=0;exp[i]!='\0';i++)
if (exp[i] >= 'a'&& exp[i] <= 'z'|| exp[i] >= 'A' && exp[i] <= 'Z')
{
temp = (node*)malloc(sizeof(node));
temp->ch = exp[i];
temp->next = NULL;
temp->prev = NULL;
push(temp);
}
else if (exp[i] == '+' || exp[i] == '-' || exp[i] == '*' || exp[i] == '/' || exp[i] == '^')
{
op1 = pop();
op2 = pop();
temp = (node*)malloc(sizeof(node));
temp->ch = exp[i];
temp->next = op1;
temp->prev = op2;
push(temp);
}
}
void display(node *temp)
{
if (temp != NULL)
{
display(temp->prev);
printf("%c", temp->ch);
display(temp->next);
}
}
void main()
{
char exp[50];
clrscr();
printf("Enter the postfix expression :");
scanf("%s", exp);
convert(exp);
printf("\nThe Equivalant Infix expression is:");
display(pop());
printf("\n\n");
getch();
}
Output:
4. Prefix to Infix
Algorithm for Prefix to Infix Conversion
Step 1: The reversed input string is inserted into a stack -> prefixToInfix(stack)
For example, the prefix expression /-bc+-pqr (scanned from right to left) is converted as follows (the stack is shown from top to bottom):
Expression | Stack
/-bc+-pqr | Empty
/-bc+-pq | "r"
/-bc+-p | "q", "r"
/-bc+- | "p", "q", "r"
/-bc+ | "p-q", "r"
/-bc | "p-q+r"
/-b | "c", "p-q+r"
/- | "b", "c", "p-q+r"
/ | "b-c", "p-q+r"
NULL | "((b-c)/((p-q)+r))"
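The program listing for this conversion is missing here; the following is only a minimal sketch of one possible implementation (MAX, the buffer sizes and single-character operands are assumptions), scanning the prefix string from right to left with a stack of strings:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define MAX 50

char *stk[MAX];      /* stack of pointers to the strings built so far */
int top = -1;

void push(char *s) { stk[++top] = s; }
char *pop(void)    { return stk[top--]; }

int main(void)
{
    char pre[MAX], buf[MAX][200];
    int i, used = 0;

    printf("Enter a prefix expression: ");
    scanf("%s", pre);

    for (i = strlen(pre) - 1; i >= 0; i--)       /* scan right to left */
    {
        if (isalnum(pre[i]))
        {
            sprintf(buf[used], "%c", pre[i]);    /* operand: push it as a string */
            push(buf[used++]);
        }
        else
        {
            char *op1 = pop(), *op2 = pop();     /* operator: pop two operands   */
            sprintf(buf[used], "(%s%c%s)", op1, pre[i], op2);
            push(buf[used++]);
        }
    }
    printf("The Infix expression is: %s\n", pop());
    return 0;
}

For the input /-bc+-pqr this sketch prints ((b-c)/((p-q)+r)), matching the trace above.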
Output:
2. Backtracking
Backtracking is a recursive technique used for solving optimization and search
problems; the stack keeps track of the choices made so that they can be undone
when a path fails.
3. Parenthesis Checking
In programming, we make use of different types of parentheses, like ( ), { } and [ ],
which are used for opening and closing a block of code. A stack can be used to check
that they are balanced: every opening parenthesis is pushed onto the stack, and every
closing parenthesis must match the parenthesis popped from the top. A sketch of this
check is given below.
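A minimal sketch of this check in C (the function name is_balanced and the fixed stack size are assumptions):

#include <stdio.h>

#define SIZE 50

char s[SIZE];
int top = -1;

/* returns 1 if every (, {, [ in expr is closed in the right order */
int is_balanced(const char *expr)
{
    int i;
    top = -1;
    for (i = 0; expr[i] != '\0'; i++)
    {
        char ch = expr[i];
        if (ch == '(' || ch == '{' || ch == '[')
            s[++top] = ch;                       /* push the opening bracket  */
        else if (ch == ')' || ch == '}' || ch == ']')
        {
            char open = (ch == ')') ? '(' : (ch == '}') ? '{' : '[';
            if (top == -1 || s[top] != open)
                return 0;                        /* mismatch or extra closer  */
            top--;                               /* pop the matching opener   */
        }
    }
    return top == -1;                            /* stack must end up empty   */
}

int main(void)
{
    printf("%d\n", is_balanced("{(a+b)*[c-d]}"));   /* 1: balanced     */
    printf("%d\n", is_balanced("(a+b]"));           /* 0: not balanced */
    return 0;
}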
4. Function Call
In programming, whenever you make a call from one function to another function,
the return address of the calling function gets stored in the stack.
So, when the called function terminates, program control moves back to
the calling function with the help of the address which was stored in the stack.
Thus, the stack plays the main role when it comes to calling one function from
another.
5. String Reversal
String reversal is another application of the stack. Here, the characters of the
string are pushed onto the stack one by one, so the first character of the string
ends up at the bottom of the stack and the last character of the string ends up on
the top of the stack.
Popping the characters one by one then gives the string in reverse order, as
sketched below.
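A minimal sketch in C (the stack size SIZE is an assumption; the input is read as a single word):

#include <stdio.h>

#define SIZE 50

char s[SIZE];
int top = -1;

int main(void)
{
    char str[SIZE];
    int i;

    printf("Enter a string: ");
    scanf("%s", str);

    for (i = 0; str[i] != '\0'; i++)
        s[++top] = str[i];            /* push each character of the string   */

    printf("Reversed: ");
    while (top != -1)
        printf("%c", s[top--]);       /* pop: last character comes out first */
    printf("\n");
    return 0;
}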
6. Syntax Parsing
As many programming languages are context-free languages, the stack is also
heavily used for syntax parsing by most compilers.
7. Memory Management
Memory Management is the important function of the Operating System. Stack
also plays the main role when it comes to Memory Management.
Queue Representation
As we now understand that in queue, we access both ends for different reasons.
The following diagram given below tries to explain queue representation as
data structure −
Basic Operations
Queue operations may involve initializing or defining the queue, utilizing it,
and then completely erasing it from the memory. Here we shall try to
understand the basic operations associated with queues −
enqueue() − add (store) an item to the queue.
dequeue() − remove (access) an item from the queue.
Few more functions are required to make the above-mentioned queue operation
efficient. These are −
peek() − Gets the element at the front of the queue without removing it.
isfull() − Checks if the queue is full.
isempty() − Checks if the queue is empty.
In a queue, we always dequeue (or access) data pointed to by the front pointer,
and while enqueuing (or storing) data in the queue we take the help of the rear pointer.
Let's first learn about supportive functions of a queue −
peek()
This function helps to see the data at the front of the queue. The algorithm of
peek() function is as follows −
Algorithm
begin procedure peek
return queue[front]
end procedure
Implementation of peek() function in C programming language −
Example
int peek() {
return queue[front];
}
isfull()
As we are using a single-dimension array to implement the queue, we just check
whether the rear pointer has reached MAXSIZE - 1 to determine that the queue is
full. In case we maintain the queue in a circular linked list, the algorithm will differ.
Algorithm of isfull() function −
Algorithm
begin procedure isfull
   if rear equals to MAXSIZE - 1
      return true
   else
      return false
   endif
end procedure
isempty()
Algorithm of isempty() function −
Algorithm
begin procedure isempty
   if front is greater than rear
      return true
   else
      return false
   endif
end procedure
Queues maintain two data pointers, front and rear. Therefore, its operations
are comparatively more difficult to implement than those of stacks.
The following steps should be taken to enqueue (insert) data into a queue −
Step 1 − Check if the queue is full.
Step 2 − If the queue is full, produce overflow error and exit.
Step 3 − If the queue is not full, increment rear pointer to point the next
empty space.
Step 4 − Add data element to the queue location, where the rear is
pointing.
Step 5 − return success.
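A minimal C sketch of these steps for an array-based queue (the declarations of queue[], front, rear and MAXSIZE are assumptions, chosen to match the peek() and dequeue() snippets in this section):

#include <stdio.h>

#define MAXSIZE 8

int queue[MAXSIZE];
int front = 0;
int rear = -1;                     /* empty queue */

int isfull() { return rear == MAXSIZE - 1; }

int enqueue(int data)
{
    if (isfull())
    {
        printf("Queue is full (overflow).\n");
        return 0;                  /* failure */
    }
    rear = rear + 1;               /* point to the next empty space      */
    queue[rear] = data;            /* add the element where rear points  */
    return 1;                      /* success */
}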
Accessing data from the queue is a process of two tasks − access the data
where front is pointing and remove the data after access. The following steps
are taken to perform dequeue operation −
Step 1 − Check if the queue is empty.
Step 2 − If the queue is empty, produce underflow error and exit.
Step 3 − If the queue is not empty, access the data where front is
pointing.
Step 4 − Increment front pointer to point to the next available data
element.
Step 5 − Return success.
Algorithm for dequeue operation −
begin procedure dequeue
   if queue is empty
      return underflow
   endif
   data = queue[front]
   front ← front + 1
   return true
end procedure
Implementation of dequeue() in C programming language −
Example
int dequeue() {
   int data;
   if(isempty())
      return 0;
   data = queue[front];
   front = front + 1;
   return data;
}
Circular Queue
Before we start to learn about Circular queue, we
should first understand, why we need a circular queue, when we already
have linear queue data structure.
In a Linear queue, once the queue is completely full, it's not possible to
insert more elements. Even if we dequeue the queue to remove some of the
elements, until the queue is reset, no new elements can be inserted. You
must be wondering why?
The only way is to reset the linear queue, for a fresh start.
Circular Queue is also a linear data structure, which follows the principle
of FIFO(First In First Out), but instead of ending the queue at the last
position, it again starts from the first position after the last, hence making
the queue behave like a circular data structure.
Basic features of Circular Queue
1. In case of a circular queue, head pointer will always point to the front
of the queue, and tail pointer will always point to the end of the
queue.
2. Initially, the head and the tail pointers will be pointing to the same
location, this would mean that the queue is empty.
3. New data is always added to the location pointed by the tail pointer,
and once the data is added, tail pointer is incremented to point to the
next available location.
6. Also, the head and the tail pointers can cross each other. In other
words, head pointer can be greater than the tail. Sounds odd? This
will happen when we dequeue the queue a couple of times and
the tail pointer gets reinitialised upon reaching the end of the queue.
Going Round and Round
Another very important point is keeping the value of the tail and
the head pointer within the maximum queue size.
In the diagrams above the queue has a size of 8, hence, the value
of tail and head pointers will always be between 0 and 7.
So the formula to increment the head and tail pointers so that they go round and
round over the array again and again is:
head = (head + 1) % maxSize
tail = (tail + 1) % maxSize
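A short sketch of these two formulas applied to an array-based circular queue (maxSize, the count variable and the function names are assumptions):

#include <stdio.h>

#define maxSize 8

int cq[maxSize];
int head = 0, tail = -1, count = 0;    /* count tracks how many items are stored */

int cq_enqueue(int data)
{
    if (count == maxSize)
        return 0;                      /* queue full  */
    tail = (tail + 1) % maxSize;       /* tail goes round and round */
    cq[tail] = data;
    count++;
    return 1;
}

int cq_dequeue(int *data)
{
    if (count == 0)
        return 0;                      /* queue empty */
    *data = cq[head];
    head = (head + 1) % maxSize;       /* head goes round and round */
    count--;
    return 1;
}

int main(void)
{
    int x, i;
    for (i = 1; i <= 10; i++)          /* only the first 8 fit */
        cq_enqueue(i * 10);
    while (cq_dequeue(&x))
        printf("%d ", x);              /* prints 10 20 30 40 50 60 70 80 */
    printf("\n");
    return 0;
}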
Basis | Linear Queue | Circular Queue
Insertion and Deletion | In a linear queue, insertion is done from the rear end, and deletion is done from the front end. | In a circular queue, the insertion and deletion can take place from any end.
Memory space | The memory space occupied by the linear queue is more than that of the circular queue. | It requires less memory as compared to the linear queue.
Order of execution | |
Below we have some common real-world examples where circular queues are used:
CPU scheduling (round robin), memory and buffer management for data streams, and
traffic light controllers.
Priority queue
The priority queue in the data structure is an
extension of the “normal” queue. It is an abstract data type that contains a
group of items. It is like the “normal” queue except that the dequeuing
elements follow a priority order. The priority order dequeues those items
first that have the highest priority. This section will give you a deeper
understanding of the priority queue and its implementation in the C
programming language.
Ascending Order Priority Queue
An ascending order priority queue gives the highest priority to the lowest number in
that queue. For example, for the six numbers 4, 8, 12, 45, 35, 20, the list arranged in
ascending order is:
4 8 12 20 35 45
In the above table, 4 has the highest priority, and 45 has the lowest priority.
Descending Order Priority Queue
A descending order priority queue gives the highest priority to the highest
number in that queue. For example, you have six numbers in the priority
queue that are 4, 8, 12, 45, 35, 20. Firstly, you will arrange these numbers
in descending order. The new list is as follows: 45, 35, 20, 12, 8, 4. In this
list, 45 is the highest number. Hence, the descending order priority queue
treats number 45 as the highest priority.
45 35 20 12 8 4
In the above table, 4 has the lowest priority, and 45 has the highest
priority.
Implementation of the Priority Queue in Data
Structure
You can implement the priority queues in one of the following ways:
Linked list
Binary heap
Arrays
Binary search tree
The binary heap is the most efficient method for implementing
the priority queue in the data structure.
The table below summarizes the typical complexity of different operations in a
priority queue for each implementation (the binary search tree is assumed to be balanced):
Operation | Unordered Array | Ordered Array | Binary Heap | Binary Search Tree
Insert | O(1) | O(n) | O(log n) | O(log n)
Delete (highest priority) | O(n) | O(1) | O(log n) | O(log n)
Peek (highest priority) | O(n) | O(1) | O(1) | O(1)
Max Heap
The max heap is a binary heap in which a parent node has a value either
equal to or greater than the child node value. The root node of the tree has
the highest value.
Inserting an Element in a Max Heap Binary Tree
You can perform the following steps to insert an element/number in
the priority queue in the data structure.
1. The algorithm scans the tree from top to bottom and left to right to find an
empty slot. It then inserts the element at the last node in the tree.
2. After inserting the element, the order of the binary tree is disturbed. You
must swap the data with each other to sort the order of the max heap
binary tree. You must keep shuffling the data until the tree satisfies the
max-heap property.
Algorithm to Insert an Element in a Max Heap Binary Tree
If the tree is empty and contains no node,
create a new parent node newElement.
else (a parent node is already available)
insert the newElement at the end of the tree (i.e., last node of the tree
from left to right.)
max-heapify the tree
Deleting an Element in a Max Heap Binary Tree
You can perform the following steps to delete an element in the priority queue in a
data structure:
1. Choose the element that you want to delete from the binary tree.
2. Shift it to the end of the tree by swapping it with the data of the last node.
3. Remove the last element of the binary tree.
4. After deleting the element, the order of the binary tree is disturbed. You
must sort the order to satisfy the property of max-heap. You must keep
shuffling (heapifying) the data until the tree meets the max-heap property.
A compact sketch of both the insertion and deletion procedures is given below.
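A compact array-based sketch of the two procedures above (the array heap[], its capacity CAP and the helper names are assumptions; index 0 is the root, and the six test values are the ones used earlier in this section):

#include <stdio.h>

#define CAP 20

int heap[CAP];
int n = 0;                              /* current number of elements */

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* insert: place the element at the last node, then sift it up */
void insert(int value)
{
    int i = n++;
    heap[i] = value;
    while (i > 0 && heap[(i - 1) / 2] < heap[i])
    {
        swap(&heap[(i - 1) / 2], &heap[i]);   /* parent smaller: swap upwards */
        i = (i - 1) / 2;
    }
}

/* delete the root (maximum): move the last node to the root, then sift down */
int delete_max(void)
{
    int max = heap[0], i = 0;
    heap[0] = heap[--n];
    while (1)
    {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && heap[l] > heap[largest]) largest = l;
        if (r < n && heap[r] > heap[largest]) largest = r;
        if (largest == i) break;              /* max-heap property restored */
        swap(&heap[i], &heap[largest]);
        i = largest;
    }
    return max;
}

int main(void)
{
    int vals[] = { 4, 8, 12, 45, 35, 20 }, i;
    for (i = 0; i < 6; i++)
        insert(vals[i]);
    for (i = 0; i < 6; i++)
        printf("%d ", delete_max());          /* prints 45 35 20 12 8 4 */
    printf("\n");
    return 0;
}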
The priority queue in a data structure is used in Google Maps for searching the
optimal path to reach any destination. Dijkstra’s Shortest Path algorithm utilizes
a min priority queue to store all possible paths to reach the destination,
considering distance as a parameter for priority assignment. In the end, it will
return the path with the highest priority (the shortest distance) as the optimal route.
So, in this tutorial, you will discover the priority queue in data structures and
understand its functionalities and applications.
Priority queue in a data structure is an extension of a linear queue that possesses the
following properties:
It will delete the element with higher priority before the element with lower priority.
If multiple elements have the same priority, it does their removal from the queue according to the
FCFS principle.
Now, understand these properties with the help of an example. Consider you have to insert 7,
2, 45, 32, and 12 in a priority queue. The element with the least value has the highest
priority. Thus, you should maintain the lowest element at the front node.
The image above shows how it maintains the priority during insertion in a queue. But, if you
carry out up to N comparisons for each insertion, the time complexity will become O(N^2).
Representation of Priority Queue
You can also implement a priority queue using arrays, however, if you consider
array implementation of the priority queue, then inserting elements into the
sorted array will cost you O(n). In general, processing each element will further
cost you O(n^2). Because of this complexity, implementing a priority queue
using arrays is not an ideal approach. Hence, you will only learn about the
representation of the priority queue using a linked list.
The diagram above shows how it will insert the new node consisting of
elements in a linked queue. This particular scenario of insertion seems perfect
as it doesn’t cost you more time. But what if the element is significantly larger
than all the nodes of a queue?
For instance, say you want to insert a node consisting of element 45. Here, it
will compare element 45 with each element inside the queue. However, this
insertion will cost you O(N). Representation of the linked queue below displays
how it will insert element 45 in a priority queue.
Different Implementation Strategies for Priority Queue
Binary heaps and binary search trees provide almost similar complexities. Both
approaches cost you O(log n) for insertion and deletion and O(1) for the peek
operation. But which approach is the best for implementing a priority queue?
To answer this question, you need to explore memory management in the case
of both data structures. Since binary heap utilizes arrays, there is always a better
locality of reference, and operations become more cache-friendly. Whereas, the
Binary search trees use pointers to implement front and rear nodes, which takes
up more space in memory. Hence, building a self-balancing BST costs O(N log N),
whereas building a binary heap costs only O(N). These facts clarify that the
binary heap is the best data structure for priority queue implementation.
Moving forward, you will understand what heap data structure is and how it
works.
There are two types of priority queues based on the priority of elements.
If the element with the smallest value has the highest priority, then that
priority queue is called the min priority queue.
If the element with a higher value has the highest priority, then that priority
queue is known as the max priority queue.
Furthermore, you can implement the min priority queue using a min heap,
whereas you can implement the max priority queue using a max heap.
The following are common applications of the priority queue in data structures:
Dijkstra's shortest path algorithm, Prim's minimum spanning tree algorithm, Huffman
coding for data compression, and CPU/job scheduling in operating systems.
Inserting the Element in a Priority Queue: Once you perform the new
insertion, the new element will move to the empty space from top to bottom
and left to right. Additionally, if the element is not in the correct position, it
will compare it with the parent node. Following that, if the element is not in
proper order, then it swaps the elements. The swapping process continues
until all the elements inside the queue are in the correct positions.
In the example above, it inserted the new element 43 into the max heap
representation of the priority queue. But because of this insertion, the order gets
disturbed. To make sure that all the elements arrive in proper order, a swapping
operation takes place. This operation is also known as Heapify in the Heap data
structure.
Deletion in Priority Queue: As you know, in a max heap the maximum element is
the root node, and the element with maximum priority is removed first. Thus, you
remove the root node from the queue. This removal creates an empty slot at the
root, which is filled by moving the last element of the heap into it. Then, the moved
element is compared with its children and swapped downwards until the heap
invariant is restored.