UNIT 1 DSA

DSA Module

MODULE-1

INTRODUCTION
Today, due to the advent of technology, a huge amount of data is created and
stored in digital format all over the world, every day. It becomes difficult to fetch
the required data from this entire pool and present it in an understandable and
usable manner to the client or the user. Compare the situation to a supermarket
which is not at all well organized: no staff member knows where the commodities
are placed, and the entire place is a mess. Isn't it difficult for a customer to find
the required commodities, and equally difficult for the storekeepers to keep track
of the inventory? The same happens with an unorganized set of data. Here comes
the role of 'Data Structures'.

DEFINITION OF DATA STRUCTURE: -


A data structure refers to a way of assembling or organising data, i.e. the
logical or mathematical model of organising data is called a data structure. It
describes the representation of data and the operations which can be performed
on the data. A data structure can also be viewed as a collection of data elements
whose organisation is characterized by accessing functions; the accessing
functions are used to store and access the individual data items. In short, a data
structure is a method of storing and organizing data efficiently so that it can be
fetched whenever required.
In the digital world, it is imperative that data is not only identified and
acquired but is also arranged in a particular fashion that makes sense and makes
easy recovery of data possible. This task is done through data structures. Data
structures are the ways data is organized and stored in memory. By applying
these data structures, data is organized to form 'information' so that it is
presented as required by the user.

HOW ARE DATA STRUCTURES USED?


Data structures form part of the implementation phase of programs in the
digital world. They help to store and retrieve information kept in primary as
well as secondary memory, and they form an essential and integral part of
programming. The actual processing is done by algorithms. Algorithms are
finite sets of instructions used to carry out a specific task and get the desired
output. Irrespective of the programming language for which they are designed,
the same algorithms can be used with multiple programming languages.
Algorithms paired with data structures together make up a program.

Usage of Data Structure involves the following terminologies:

 Data: These are the basic values or collections of values.


 Group Items: This includes a grouping of subordinate data items.
 Record: This is the collection of various data items that can be grouped.
 File: A File is a collection of various records of one type of entity.
 Entity and Attribute: An entity contains various attributes, where the entity
represents the class of Objects, and each attribute represents a particular
property of that entity.

NEED OF DATA STRUCTURES


As applications are getting more complex and the amount of data is increasing
day by day, the following problems may arise:

Processor speed: To handle a very large amount of data, high-speed processing is
required, but as the data grows day by day, up to billions of records per entity,
the processor may fail to deal with that much data.

Data Search: Consider an inventory of 10^6 items in a store. If our
application needs to search for a particular item, it has to traverse all 10^6 items
every time, which slows down the search process.

Multiple requests: If thousands of users are searching the data simultaneously
on a web server, then there is a chance that even a very large server fails
during that process.

To solve the above problems, data structures are used. Data is organized to form
a data structure in such a way that not all items need to be searched and the
required data can be found almost instantly.

ADVANTAGES OF DATA STRUCTURES


Efficiency: The efficiency of a program depends upon the choice of data structures.
For example, suppose we have some data and we need to search for a particular
record. If we organize our data in a plain array, we will have to search
sequentially, element by element, so using an array may not be very efficient
here. There are better data structures which can make the search process
efficient, such as an ordered array, a binary search tree or a hash table.
Reusability: Data structures are reusable, i.e. once we have implemented a
particular data structure, we can use it at any other place. Implementation of
data structures can be compiled into libraries which can be used by different
clients.

Abstraction: Data structure is specified by the ADT which provides a level of


abstraction. The client program uses the data structure through interface only,
without getting into the implementation details.

DATA STRUCTURE CLASSIFICATION

Data structures are of two types:-


1. Built-in data structure
2. User-defined data structure

The classification can be shown as follows:

Data Structure
  1. Built-in data structure: Array, Structure
  2. User-defined data structure
     (i) Linear data structure: Stack, Queue, Linked List
     (ii) Non-linear data structure: Tree, Graph

1. BUILT-IN DATA STRUCTURE: - A built-in data structure is


a data structure which is internally provided by the high-level language.
Built-in data structures are created with the help of basic data types (such as int,
char, float etc.).
It is of two types:-

(I) ARRAY: - A finite ordered set of homogeneous elements is


called an array.
Where
 Finite means the elements are countable.
 Ordered set means that each element of the set has a unique position,
i.e. an ordering is imposed on the elements.
 Homogeneous means that all the elements of the set are of the same type,
i.e. they may all be real, integer or character.
Aspiring data scientists and programmers should master arrays before moving on to
other structures such as queues or stacks.
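
For instance, a minimal C sketch of declaring and accessing an array (the name marks and the values are purely illustrative):

#include <stdio.h>

int main(void)
{
    int marks[5] = {40, 55, 63, 72, 81};   /* finite, ordered, homogeneous elements */

    /* every element is reached through its position (index) */
    printf("first = %d, last = %d\n", marks[0], marks[4]);
    return 0;
}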

(II) STRUCTURE: - A structure is a data structure which


groups more than one data element, possibly of different data types, into a
single unit.

The syntax of the structure is given below:-

struct structure_name
{
data_type member1;
data_type member2;
data_type member3;
:
:
data_type memberN;
};
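
As an illustration only (the structure and member names below are hypothetical), a structure grouping a student record together with a small usage example:

#include <stdio.h>

struct student
{
    int roll_no;       /* members may have different data types */
    char name[30];
    float marks;
};

int main(void)
{
    struct student s1 = {1, "Ram", 78.5f};   /* all members travel as one unit */
    printf("%d %s %.1f\n", s1.roll_no, s1.name, s1.marks);
    return 0;
}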

2. USER-DEFINED DATA STRUCTURE: - User-


defined data structures are those data structures which are formed
by users according to their needs.
These data structures are built with the help of the built-in data structures.
The user-defined data structures are of the following types:
(i) Linear data structure
(ii) Non-linear data structure

(i) LINEAR DATA STRUCTURE:- A data structure is said to


be linear if its elements are in sequence. It is of following types:

(a) STACK: - A Stack is a linear data structure that follows the LIFO
(Last-In-First-Out) principle. A stack has only one open end, whereas a queue has
two ends (front and rear). It contains only one pointer, the top
pointer, which points to the topmost element of the stack. Whenever an
element is added to the stack, it is added on the top of the stack, and an
element can be deleted only from the top of the stack. In other words, a stack can
be defined as a container in which insertion and deletion can be done
from one end, known as the top of the stack.

Some key points related to stack


o It is called a stack because it behaves like a real-world stack, e.g. a pile of
books.
o A Stack is an abstract data type with a pre-defined capacity, which means
that it can store the elements of a limited size.
o It is a data structure that follows some order to insert and delete the
elements, and that order can be LIFO or FILO.

Working of Stack

A stack works on the LIFO pattern. Suppose a stack has five memory blocks;
then the size of the stack is 5. Assume that the stack is initially empty and that
we push elements into it one by one until the stack becomes full.

Once five elements have been pushed, the stack is full, since its size is 5.
Although a stack is usually drawn from the top downward, it gets filled up from
the bottom to the top.

When we perform a delete operation on the stack, there is only one way for
entry and exit, as the other end is closed. The stack follows the LIFO pattern,
which means that the value entered first will be removed last: the element pushed
first can be removed only after the deletion of all the other elements.

(b)Queue: - A queue in the data structure can be considered similar to the queue
in the real-world. A queue is a data structure in which whatever comes first will
go out first. It follows the FIFO (First-In-First-Out) policy. In Queue, the
insertion is done from one end known as the rear end or the tail of the queue,
whereas the deletion is done from another end known as the front end or the
head of the queue. In other words, it can be defined as a list or a collection with
a constraint that the insertion can be performed at one end called as the rear end
or tail of the queue and deletion is performed on another end called as the front
end or the head of the queue.
(c) LINKED LIST: - A linked list can be defined as a collection of objects
called nodes that are randomly stored in the memory.

o A node contains two fields, i.e. the data stored at that particular address and
a pointer which contains the address of the next node in the memory.
o The last node of the list contains a pointer to null.

Uses of Linked List


o The list is not required to be contiguously present in the memory. The
nodes can reside anywhere in the memory and are linked together to make a
list. This achieves optimized utilization of space.
o List size is limited to the memory size and doesn't need to be declared in
advance.
o Empty node cannot be present in the linked list.
o We can store values of primitive types or objects in the singly linked list.
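
A minimal C sketch of a singly linked list node and a traversal loop (the names node and traverse are only illustrative):

#include <stdio.h>

struct node
{
    int data;             /* data stored at this node */
    struct node *next;    /* address of the next node; NULL for the last node */
};

/* visit every node until the null pointer of the last node is reached */
void traverse(struct node *head)
{
    struct node *p;
    for (p = head; p != NULL; p = p->next)
        printf("%d ", p->data);
}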

(2) NON-LINEAR DATA STRUCTURE: - A data structure is


said to be non-linear if its elements do not form a sequence, but instead
represent a hierarchical relationship between the elements.
It is of two types:-

(i) Tree
(ii) Graph

(i) TREE:- A tree is also one of the data structures that represent
hierarchical data. Suppose we want to show the employees and their
positions in the hierarchical form then it can be represented as shown
below:

The above tree shows the organization hierarchy of some company. In this
structure, John is the CEO of the company, and John has two direct
reports named Steve and Rohan. Steve is a manager and has three direct reports
named Lee, Bob and Ella. Bob has two direct reports
named Sal and Emma. Emma has two direct reports named Tom and Raj.
Tom has one direct report named Bill. This particular logical structure is known
as a Tree. Its structure is similar to a real tree, so it is named a Tree. In this
structure, the root is at the top, and its branches grow in a downward
direction. Therefore, we can say that the Tree data structure is an efficient way
of storing data in a hierarchical way.


Let's understand some key points of the Tree data structure.

o A tree data structure is defined as a collection of objects or entities known


as nodes that are linked together to represent or simulate hierarchy.
o A tree data structure is a non-linear data structure because it does not
store in a sequential manner. It is a hierarchical structure as elements in a
Tree are arranged in multiple levels.
o In the Tree data structure, the topmost node is known as a root node. Each
node contains some data, and data can be of any type. In the above tree
structure, the node contains the name of the employee, so the type of data
would be a string.
o Each node contains some data and the link or reference of other nodes
that can be called children.

Some basic terms used in Tree data structure.

Let's consider the tree structure, which is shown below:


In the above structure, each node is labeled with some number. Each arrow
shown in the above figure is known as a link between the two nodes.

o Root: The root node is the topmost node in the tree hierarchy. In other
words, the root node is the one that doesn't have any parent. In the above
structure, node numbered 1 is the root node of the tree. If a node is
directly linked to some other node, it would be called a parent-child
relationship.
o Child node: If the node is a descendant of any node, then the node is
known as a child node.
o Parent: If the node contains any sub-node, then that node is said to be the
parent of that sub-node.
o Sibling: The nodes that have the same parent are known as siblings.
o Leaf Node:- The node of the tree, which doesn't have any child node, is
called a leaf node. A leaf node is the bottom-most node of the tree. There
can be any number of leaf nodes present in a general tree. Leaf nodes can
also be called external nodes.
o Internal nodes: A node that has at least one child node is known as an internal node.
o Ancestor node:- An ancestor of a node is any predecessor node on a path
from the root to that node. The root node doesn't have any ancestors. In
the tree shown in the above image, nodes 1, 2, and 5 are the ancestors of
node 10.
o Descendant: Any node reachable by moving downward from a given node (its
child, its child's child, and so on) is a descendant of that node. In the above
figure, 10 is a descendant of node 5.

(ii) GRAPH:-
A graph can be defined as a group of vertices and edges that are used to connect
these vertices. A graph can be seen as a generalization of a tree in which cycles
are allowed: the vertices (nodes) can maintain any complex relationship among
themselves instead of only a parent-child relationship.

Definition

A graph G can be defined as an ordered set G(V, E) where V(G) represents the
set of vertices and E(G) represents the set of edges which are used to connect
these vertices.

A Graph G(V, E) with 5 vertices (A, B, C, D, E) and six edges ((A,B), (B,C),
(C,E), (E,D), (D,B), (D,A)) is shown in the following figure.

DIRECTED AND UNDIRECTED GRAPH


A graph can be directed or undirected. In an undirected graph, the edges are not
associated with any direction. The graph described above is undirected, since its
edges are not attached to any direction; if an edge exists between vertices A and
B, then it can be traversed from B to A as well as from A to B.

In a directed graph, edges form an ordered pair. Edges represent a specific path
from some vertex A to another vertex B. Node A is called initial node while
node B is called terminal node.

A directed graph is shown in the following figure.

GRAPH TERMINOLOGY
Path

A path can be defined as the sequence of nodes that are followed in order to
reach some terminal node V from the initial node U.

Closed Path

A path will be called as closed path if the initial node is same as terminal node.
A path will be closed path if V0=VN.

Simple Path

If all the nodes of the path are distinct, then the path P is called a simple path;
if, in addition, V0 = VN, it is called a closed simple path.
Cycle

A cycle can be defined as the path which has no repeated edges or vertices
except the first and last vertices.

Connected Graph

A connected graph is the one in which some path exists between every two
vertices (u, v) in V. There are no isolated nodes in connected graph.

Complete Graph

A complete graph is one in which every node is connected to every other
node. A complete graph contains n(n-1)/2 edges, where n is the number of nodes
in the graph.

Weighted Graph

In a weighted graph, each edge is assigned with some data such as length or
weight. The weight of an edge e can be given as w(e) which must be a positive
(+) value indicating the cost of traversing the edge.

Digraph

A digraph is a directed graph in which each edge of the graph is associated with
some direction and the traversing can be done only in the specified direction.

Loop

An edge whose two end points are the same node is called a loop.

Adjacent Nodes

If two nodes u and v are connected via an edge e, then the nodes u and v are
called neighbours or adjacent nodes.

Degree of the Node

The degree of a node is the number of edges that are connected to that node. A
node with degree 0 is called an isolated node.

OPERATIONS ON DATA STRUCTURE


1) Traversing: Every data structure contains the set of data elements.
Traversing the data structure means visiting each element of the data structure
in order to perform some specific operation like searching or sorting.

Example: If we need to calculate the average of the marks obtained by a


student in 6 different subjects, we need to traverse the complete array of marks
and calculate the total sum, then divide that sum by the number of
subjects, i.e. 6, in order to find the average, as sketched below.
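
A minimal C sketch of this traversal (the marks used are only sample values):

#include <stdio.h>

int main(void)
{
    int marks[6] = {56, 78, 90, 65, 83, 71};   /* sample marks in 6 subjects */
    int i, sum = 0;

    for (i = 0; i < 6; i++)     /* visit every element exactly once */
        sum = sum + marks[i];

    printf("Average = %.2f\n", sum / 6.0);
    return 0;
}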

2) Insertion: Insertion can be defined as the process of adding an element to


the data structure at any location.

If the size of the data structure is n, then at most n data elements can be stored in
it; attempting to insert into a full structure causes overflow.

3) Deletion:The process of removing an element from the data structure is


called Deletion. We can delete an element from the data structure at any random
location.

If we try to delete an element from an empty data structure


then underflow occurs.

4) Searching: The process of finding the location of an element within the data
structure is called Searching. There are two algorithms to perform searching,
Linear Search and Binary Search. We will discuss each one of them later in this
tutorial.

5) Sorting: The process of arranging the data structure in a specific order is


known as Sorting. There are many algorithms that can be used to perform
sorting, for example, insertion sort, selection sort, bubble sort, etc.
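
As a rough illustration of one of the algorithms named above, a minimal bubble sort sketch in C (the function name is only illustrative):

void bubble_sort(int a[], int n)
{
    int i, j, temp;
    /* after each outer pass, the largest element of the unsorted part settles at the end */
    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1])
            {
                temp = a[j];          /* swap the out-of-order neighbours */
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
}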

6) Merging: When two lists, List A of size M and List B of size N, containing
elements of the same type, are clubbed or joined to produce a third list, List C of
size (M+N), the process is called merging.
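
A hedged C sketch of merging, assuming both input lists are already sorted (the names A, B, C and merge are only illustrative):

void merge(int A[], int m, int B[], int n, int C[])
{
    int i = 0, j = 0, k = 0;

    while (i < m && j < n)              /* pick the smaller front element each time */
        C[k++] = (A[i] <= B[j]) ? A[i++] : B[j++];

    while (i < m)                       /* copy whatever is left in A */
        C[k++] = A[i++];
    while (j < n)                       /* copy whatever is left in B */
        C[k++] = B[j++];
}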

 Analysis of Algorithm:-

The term "analysis of algorithms" was coined by Donald Knuth.


Algorithm analysis is an important part of computational complexity theory,
which provides theoretical estimation for the required resources of an
algorithm to solve a specific computational problem. Most algorithms are
designed to work with inputs of arbitrary length. Analysis of algorithms is the
determination of the amount of time and space resources required to execute it.
Usually, the efficiency or running time of an algorithm is stated as a function
relating the input length to the number of steps, known as time complexity, or
volume of memory, known as space complexity.

The Need for Analysis


In this chapter, we will discuss the need for analysis of algorithms and how to
choose a better algorithm for a particular problem as one computational
problem can be solved by different algorithms.
By considering an algorithm for a specific problem, we can begin to develop
pattern recognition so that similar types of problems can be solved by the help
of this algorithm.
Algorithms are often quite different from one another, though the objective of
these algorithms are the same. For example, we know that a set of numbers can
be sorted using different algorithms. Number of comparisons performed by one
algorithm may vary with others for the same input. Hence, time complexity of
those algorithms may differ. At the same time, we need to calculate the
memory space required by each algorithm.
Analysis of algorithm is the process of analyzing the problem-solving
capability of the algorithm in terms of the time and size required (the size of
memory for storage while implementation). However, the main concern of
analysis of algorithms is the required time or performance. Generally, we
perform the following types of analysis −
 Worst-case − The maximum number of steps taken on any instance of
size n.
 Best-case − The minimum number of steps taken on any instance of
size n.
 Average case − The average number of steps taken over all instances of
size n.
 Amortized − A sequence of operations applied to an input of
size n, averaged over the sequence.
To solve a problem, we need to consider time as well as space complexity, as
the program may run on a system where memory is limited but adequate time
is available, or vice versa. In this context, compare bubble sort and merge sort:
bubble sort does not require additional memory, but merge sort requires
additional space. Though the time complexity of bubble sort is higher than that
of merge sort, we may still need to apply bubble sort if the program has to run
in an environment where memory is very limited.

Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before
implementation and after implementation. They are the following −
 A Priori Analysis − This is a theoretical analysis of an algorithm.
Efficiency of an algorithm is measured by assuming that all other
factors, for example, processor speed, are constant and have no effect on
the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm.
The selected algorithm is implemented using a programming language and
then executed on a target computer machine. In this analysis, actual
statistics like running time and space required are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with
the execution or running time of various operations involved. The running time
of an operation can be defined as the number of computer instructions executed
per operation.

Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space
used by the algorithm X are the two main factors, which decide the efficiency
of X.
 Time Factor − Time is measured by counting the number of key
operations such as comparisons in the sorting algorithm.
 Space Factor − Space is measured by counting the maximum memory
space required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage
space required by the algorithm in terms of n as the size of input data.

 Asymptotic Notation

Whenever we want to perform analysis of an algorithm, we need to calculate the


complexity of that algorithm. But when we calculate the complexity of an
algorithm it does not provide the exact amount of resource required. So instead
of taking the exact amount of resource, we represent that complexity in a
general form (Notation) which produces the basic nature of that algorithm. We
use that general form (Notation) for analysis process.
Asymptotic notation of an algorithm is a mathematical representation of its
complexity.
Note - In asymptotic notation, when we want to represent the complexity of an
algorithm, we use only the most significant terms in the complexity of that
algorithm and ignore least significant terms in the complexity of that algorithm
(Here complexity can be Space Complexity or Time Complexity).

When analysing an algorithm we usually consider THREE cases, and those are as


follows...
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.

Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running
time complexity of an algorithm.

 Ο Notation
 Ω Notation
 θ Notation

Big - Oh Notation (O)


Big - Oh notation is used to define the upper bound of an algorithm in terms of
Time Complexity.
That means Big - Oh notation always indicates the maximum time required by
an algorithm for all input values. That means Big - Oh notation describes the
worst case of an algorithm time complexity.
Big - Oh Notation can be defined as follows...

Consider a function f(n) as the time complexity of an algorithm and g(n) as its


most significant term. If f(n) <= C g(n) for all n >= n0, for some constants C > 0
and n0 >= 1, then we can represent f(n) as O(g(n)).
f(n) = O(g(n))

Consider a graph drawn for the values of f(n) and C g(n), with the input size (n)
on the X-axis and the time required on the Y-axis.

In such a graph, after a particular input value n0, C g(n) is always greater than
f(n), which indicates the algorithm's upper bound.

Example

Consider the following f(n) and g(n)...


f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then it must satisfy f(n) <= C g(n) for some
constants C > 0 and n0 >= 1:
f(n) <= C g(n)
⇒ 3n + 2 <= C n
The above condition is TRUE for C = 4 and all n >= 2.
By using Big - Oh notation we can represent the time complexity as follows...
3n + 2 = O(n)

Big - Omega Notation (Ω)

Big - Omega notation is used to define the lower bound of an algorithm in


terms of Time Complexity.
That means Big-Omega notation always indicates the minimum time required
by an algorithm for all input values. That means Big-Omega notation describes
the best case of an algorithm time complexity.
Big - Omega Notation can be defined as follows...

Consider a function f(n) as the time complexity of an algorithm and g(n) as its


most significant term. If f(n) >= C g(n) for all n >= n0, for some constants C > 0
and n0 >= 1, then we can represent f(n) as Ω(g(n)).

f(n) = Ω(g(n))

Consider a graph drawn for the values of f(n) and C g(n), with the input size (n)
on the X-axis and the time required on the Y-axis.
In such a graph, after a particular input value n0, C g(n) is always less than f(n),
which indicates the algorithm's lower bound.

Example

Consider the following f(n) and g(n)...


f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then it must satisfy f(n) >= C g(n) for some
constants C > 0 and n0 >= 1:
f(n) >= C g(n)
⇒ 3n + 2 >= C n
The above condition is TRUE for C = 1 and all n >= 1.
By using Big - Omega notation we can represent the time complexity as
follows...
3n + 2 = Ω(n)
Big - Theta Notation (Θ)
Big - Theta notation is used to define the tight bound of an algorithm in
terms of Time Complexity.
That means Big - Theta notation bounds the time required by an algorithm both
from above and below, so it describes the exact order of growth of the
algorithm's time complexity.
Big - Theta Notation can be defined as follows...

Consider a function f(n) as the time complexity of an algorithm and g(n) as its


most significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, for some
constants C1 > 0, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).

f(n) = Θ(g(n))

Consider a graph drawn for the values of f(n), C1 g(n) and C2 g(n), with the input
size (n) on the X-axis and the time required on the Y-axis.
In such a graph, after a particular input value n0, C1 g(n) is always less than f(n)
and C2 g(n) is always greater than f(n), which indicates that g(n) bounds the
algorithm from both sides.

Example

Consider the following f(n) and g(n)...


f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)), then it must satisfy C1 g(n) <= f(n) <=
C2 g(n) for some constants C1 > 0, C2 > 0 and n0 >= 1:
C1 g(n) <= f(n) <= C2 g(n)
⇒ C1 n <= 3n + 2 <= C2 n
The above condition is TRUE for C1 = 1, C2 = 4 and all n >= 2.
By using Big - Theta notation we can represent the time complexity as follows...
3n + 2 = Θ(n)

 Space-Time Trade-off:-

The complexity of an algorithm is the function which gives the


running time and/or space in terms of the input size. In simple words, the
complexity of an algorithm refers to how fast or slow a particular
algorithm performs. We define complexity as a numerical
function T(n) - time versus the input size n.
Further, each of our algorithms will involve a particular data structure.
Accordingly, we may not always be able to use the most efficient
algorithm, since the choice of data structure depends on many things,
including the type of data and the frequency with which various data
operations are applied.

 How to Find the Complexity of an Algorithm?


Let us understand this with the help of an example. Suppose we are
implementing an algorithm that searches for a record amongst a list
of records. We can have the following three cases, which relate to the relative
success our algorithm can achieve with respect to time:
 Best Case: The record we are trying to search for is the first record of the list.
If f(n) is the function which gives the running time and/or storage space
requirement of the algorithm in terms of the size n of the input data, this
particular case produces a complexity C(n) = 1 for
our algorithm f(n), as the algorithm makes only 1 comparison before it finds
the desired record.

 Worst Case: The record we are trying to search is the last record of the
list. If f(n) is the function which gives the running time and/ or storage
space requirement of the algorithm in terms of the size n of the input
data, this particular case of the algorithm will produce a complexity
C(n)=n for our algorithm f(n) as the algorithm will run n times until
it finds the desired record.

 Average Case: The record we are trying to search for can be any record in the
list. In this case we do not know at which position it might be, hence we
take an average over all the possible positions. Assuming n records, the
probability of the record being at any particular position is 1/n.
Multiplying each possible number of comparisons by this probability and
summing over all positions, we obtain the average-case complexity C(n) for
our algorithm f(n) as:

C(n) = (1/n)(1 + 2 + ... + n) = (n + 1)/2

Hence in this way, we can find the complexity of an algorithm.

The time and space it uses are two major measures of the efficiency of an
algorithm. Sometimes the choice of data structure involves a space-time
trade-off; by increasing the amount of space for storing the data one may
be able to reduce the time needed for processing the data or vice
versa. Hence let us look over them in detail:

 Space Complexity: It is also known as memory requirement. The space


complexity of an algorithm is the amount of memory it needs to run to
completion. We would usually want our algorithm to take the least possible
memory for operation, however in more powerful machines more resources
are usually allocated for the operation in order to reduce the time taken.

 Time Complexity: It is also known as the performance requirement. Time


complexity is calculated, or referred to, in instances when we may be
interested to know in advance whether the program will provide a
satisfactory real-time response or not. There may be several possible
solutions to a problem with different time requirements, i.e. with different
time complexities. Time complexity is taken care of heavily in cases when
an algorithm needs to be modelled to run even on the least powerful
machines.

How To Balance The Two Then?

The best algorithm to solve a given problem is one that requires less space in
memory and takes less time to complete its execution. But in practice it is not
always possible to achieve both these objectives. As we know there may be
more than one approach to solve a particular problem. One approach may take
more space but takes less time to complete its execution while the other
approach may take less space but takes more time to complete its
execution. We may have to sacrifice one at the cost of the other. If space is
our constraint, then we have to choose a program that requires less space
at the cost of more execution time. On the other hand if time is our
constraint then we have to choose a program that takes less time to
complete its execution at the cost of more space.

 Linear Search Algorithm (Sequential Search Algorithm)


Linear search algorithm finds a given element in a list of elements
with O(n) time complexity where n is total number of elements in the list. This
search process starts comparing search element with the first element in the list.
If both are matched then result is element found otherwise search element is
compared with the next element in the list. Repeat the same until search element
is compared with the last element in the list, if that last element also doesn't
match, then the result is "Element not found in the list". That means the search
element is compared, one by one, with every element in the list.

Linear search is implemented using following steps...

 Step 1 - Read the search element from the user.


 Step 2 - Compare the search element with the first element in the list.
 Step 3 - If both are matched, then display "Given element is found!!!"
and terminate the function
 Step 4 - If both are not matched, then compare search element with the
next element in the list.
 Step 5 - Repeat steps 3 and 4 until search element is compared with last
element in the list.
 Step 6 - If last element in the list also doesn't match, then display
"Element is not found!!!" and terminate the function.

Advantages of a linear search

 It performs fast searches of small to medium lists. With today's


powerful computers, small to medium arrays can be searched
relatively quickly.

 The list does not need to be sorted.

 It is not affected by insertions and deletions.

Disadvantages of Linear search

1. Linear search is very slow for large lists.


2. As the number of elements in the array/list increases, the time
complexity also increases.
Example
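
The example code is not reproduced above, so here is a minimal C sketch that follows the steps listed (the array contents and size are only illustrative):

#include <stdio.h>

int main(void)
{
    int list[8] = {10, 4, 25, 7, 31, 18, 2, 12};
    int n = 8, key, i, pos = -1;

    printf("Enter the search element: ");   /* Step 1 - read the search element */
    scanf("%d", &key);

    for (i = 0; i < n; i++)                 /* Steps 2-5 - compare element by element */
        if (list[i] == key)
        {
            pos = i;
            break;
        }

    if (pos != -1)
        printf("Given element is found at index %d!!!\n", pos);
    else
        printf("Element is not found!!!\n");  /* Step 6 */
    return 0;
}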

Binary Search:-

Binary Search Algorithm is one of the widely used searching techniques. It is applied to
sorted arrays and follows the divide and conquer strategy: the search space reduces to
half in every iteration.
Binary Search Algorithm is a very efficient technique for searching, but it needs the
elements to be arranged in some order on which the partitioning of the array can occur.

Advantages of Binary Search Algorithm


1. Since it eliminates half of the remaining array elements at every step, it
is more efficient than linear search for large data.
2. Better time complexity and thus less execution time.
3. An improvement over linear search, as it breaks the array down in half
rather than sequentially traversing through the array elements.

Limitations of Binary Search Algorithm


1. Binary Search algorithm could only be implemented over a sorted
array.
2. Small unsorted arrays would take considerable time in sorting and then
searching for the desired element, so binary search is not preferred in
such cases.
3. It has poor locality of reference compared to the linear search algorithm
when it comes to in-memory searching over short intervals.

Applications of Binary Search


1. This algorithm is used to search for an element in a given sorted array
with more efficiency.
2. It can also be used for a few other operations, such as finding the
smallest or the largest element in the array.

BINARY_SEARCH (A, lower_bound, upper_bound, VAL)
o Step 1: [INITIALIZE] SET BEG = lower_bound,
END = upper_bound, POS = -1
o Step 2: Repeat Steps 3 and 4 while BEG <=END
o Step 3: SET MID = (BEG + END)/2
o Step 4: IF A[MID] = VAL
SET POS = MID
PRINT POS
Go to Step 6
ELSE IF A[MID] > VAL
SET END = MID - 1
ELSE
SET BEG = MID + 1
[END OF IF]
[END OF LOOP]
o Step 5: IF POS = -1
PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
[END OF IF]
o Step 6: EXIT
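
A hedged C sketch of the BINARY_SEARCH steps above (variable names follow the pseudocode; the function itself is only illustrative):

int binary_search(int a[], int lower_bound, int upper_bound, int val)
{
    int beg = lower_bound, end = upper_bound, mid, pos = -1;

    while (beg <= end)                 /* Step 2 - repeat while BEG <= END */
    {
        mid = (beg + end) / 2;         /* Step 3 */
        if (a[mid] == val)
        {
            pos = mid;                 /* Step 4 - value found at index mid */
            break;
        }
        else if (a[mid] > val)
            end = mid - 1;             /* search the left half */
        else
            beg = mid + 1;             /* search the right half */
    }
    return pos;                        /* -1 means the value is not present */
}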

o Complexity

SN Performance Complexity

1 Worst case O(log n)

2 Best case O(1)

3 Average Case O(log n)

4 Worst case space complexity O(1)


Example
Let us consider an array arr = {1, 5, 7, 8, 13, 19, 20, 23, 29}. Find the location of the item 23
in the array.

In the 1st step:

1. BEG = 0
2. END = 8
3. MID = (0 + 8)/2 = 4
4. a[mid] = a[4] = 13 < 23, therefore BEG = MID + 1

In the 2nd step:

1. BEG = MID + 1 = 5
2. END = 8
3. MID = (5 + 8)/2 = 6
4. a[mid] = a[6] = 20 < 23, therefore BEG = MID + 1

In the 3rd step:

1. BEG = MID + 1 = 7
2. END = 8
3. MID = (7 + 8)/2 = 7
4. a[mid] = a[7]
5. a[7] = 23 = item
6. therefore, set location = MID
7. The location of the item is 7.
As the desired element is present in the array, the search
is successful.
Difference between Linear Search and Binary Search

Definition: The linear search starts searching from the first element and compares
each element with the searched element till the element is found. Binary search
finds the position of the searched element by repeatedly comparing it with the
middle element of the remaining array.

Sorted data: In a linear search, the elements don't need to be arranged in sorted
order. The pre-condition for the binary search is that the elements must be
arranged in sorted order.

Approach: Linear search is based on the sequential approach. Binary search is
based on the divide and conquer approach.

Size: Linear search is preferable for small-sized data sets. Binary search is
preferable for large-sized data sets.

Efficiency: Linear search is less efficient in the case of large-sized data sets.
Binary search is more efficient in the case of large-sized data sets.

Worst-case scenario: In a linear search, the worst-case scenario for finding the
element is O(n). In a binary search, the worst-case scenario for finding the
element is O(log2 n).

Best-case scenario: In a linear search, the best-case scenario (the element is the
first in the list) is O(1). In a binary search, the best-case scenario is also O(1).
MODULE-2

ADT Stack
A stack is an Abstract Data Type (ADT),
commonly used in most programming languages. It is named stack as it behaves
like a real-world stack, for example – a deck of cards or a pile of plates, etc.

A real-world stack allows operations at one end only. For example, we can
place or remove a card or plate from the top of the stack only. Likewise, Stack
ADT allows all data operations at one end only. At any given time, we can only
access the top element of a stack.
This feature makes it LIFO data structure. LIFO stands for Last-in-first-out.
Here, the element which is placed (inserted or added) last, is accessed first. In
stack terminology, insertion operation is called PUSH operation and removal
operation is called POP operation.

Stack Representation
The following diagram depicts a stack and its operations −

A stack can be implemented by means of Array, Structure, Pointer, and Linked


List. Stack can either be a fixed size one or it may have a sense of dynamic
resizing. Here, we are going to implement stack using arrays, which makes it a
fixed size stack implementation.

Basic Operations
Stack operations may involve initializing the stack, using it and then de-
initializing it. Apart from these basic stuffs, a stack is used for the following two
primary operations −
 push() − Pushing (storing) an element on the stack.
 pop() − Removing (accessing) an element from the stack.
Data is PUSHed onto the stack and POPped off the stack.
To use a stack efficiently, we need to check the status of stack as well. For the
same purpose, the following functionality is added to stacks −
peek() − get the top data element of the stack, without removing it.
isFull() − check if stack is full.
isEmpty() − check if stack is empty.
At all times, we maintain a pointer to the last PUSHed data on the stack. As this
pointer always represents the top of the stack, hence named top. The top pointer
provides top value of the stack without actually removing it.
First we should learn about procedures to support stack functions −
peek()
Algorithm of peek() function −
begin procedure peek
return stack[top]
end procedure
Implementation of peek() function in C programming language −
Example
int peek() {
return stack[top];
}
isfull()
Algorithm of isfull() function −
begin procedure isfull
if top equals to MAXSIZE - 1
return true
else
return false
endif

end procedure
Implementation of isfull() function in C programming language −
Example
bool isfull() {
if(top == MAXSIZE - 1)
return true;
else
return false;
}
isempty()
Algorithm of isempty() function −
begin procedure isempty

if top less than 0


return true
else
return false
endif

end procedure
Implementation of isempty() function in C programming language is slightly
different. We initialize top at -1, as the index in array starts from 0. So we check
if the top is below zero or -1 to determine if the stack is empty. Here's the code

Example
bool isempty() {
if(top == -1)
return true;
else
return false;
}

Push Operation
The process of putting a new data element onto stack is known as a Push
Operation. Push operation involves a series of steps −
 Step 1 − Checks if the stack is full.
 Step 2 − If the stack is full, produces an error and exit.
 Step 3 − If the stack is not full, increments top to point next empty space.
 Step 4 − Adds data element to the stack location, where top is pointing.
 Step 5 − Returns success.

If the linked list is used to implement the stack, then in step 3, we need to
allocate space dynamically.
Algorithm for PUSH Operation
A simple algorithm for Push operation can be derived as follows −
begin procedure push: stack, data

if stack is full
return null
endif

top ← top + 1
stack[top] ← data

end procedure
Implementation of this algorithm in C, is very easy. See the following code −
Example
void push(int data) {
if(!isFull()) {
top = top + 1;
stack[top] = data;
} else {
printf("Could not insert data, Stack is full.\n");
}
}

Pop Operation
Accessing the content while removing it from the stack, is known as a Pop
Operation. In an array implementation of pop() operation, the data element is
not actually removed, instead top is decremented to a lower position in the stack
to point to the next value. But in linked-list implementation, pop() actually
removes data element and deallocates memory space.
A Pop operation may involve the following steps −
 Step 1 − Checks if the stack is empty.
 Step 2 − If the stack is empty, produces an error and exit.
 Step 3 − If the stack is not empty, accesses the data element at
which top is pointing.
 Step 4 − Decreases the value of top by 1.
 Step 5 − Returns success.

Algorithm for Pop Operation


A simple algorithm for Pop operation can be derived as follows −
begin procedure pop: stack

if stack is empty
return null
endif

data ← stack[top]
top ← top - 1
return data

end procedure
Implementation of this algorithm in C, is as follows −
Example
int pop() {
int data;

if(!isempty()) {
data = stack[top];
top = top - 1;
return data;
} else {
printf("Could not retrieve data, Stack is empty.\n");
return -1;
}
}
Applications of Stacks:
1. Expression Evaluation and Conversion
There are 3 types of expression we use in Programming, which are Infix
Expression, Prefix Expression and Postfix Expression.

Infix Expression is represented as X + Y. Prefix Expression is represented as


+XY and Postfix Expression is represented as XY+.

In order to evaluate these expressions in Programming, a Data Structure called


Stack is used.

Similarly, Stack is also used for Converting one expression into another. For
example, converting Infix to Postfix or Infix to Prefix.

Expression Representation

There are three popular methods used for representation of an expression:

Infix A+B Operator between operands.

Prefix + AB Operator before operands.

Postfix AB + Operator after operands.

1. Conversion of Infix to Postfix


Algorithm for Infix to Postfix

Step 1: Consider the next element in the input.

Step 2: If it is operand, display it.

Step 3: If it is opening parenthesis, insert it on stack.

Step 4: If it is an operator, then

 If stack is empty, insert operator on stack.


 If the top of stack is opening parenthesis, insert the operator on stack
 If it has higher priority than the top of stack, insert the operator on stack.
 Else, delete the operator from the stack and display it, repeat Step 4.
Step 5: If it is a closing parenthesis, delete the operator from stack and display
them until an opening parenthesis is encountered. Delete and discard the
opening parenthesis.

Step 6: If there is more input, go to Step 1.

Step 7: If there is no more input, delete the remaining operators to output.

Example: Suppose we are converting 3*3/(4-1)+6*2 expression into postfix


form.

Following table shows the evaluation of Infix to Postfix:

Expression Stack Output

3 Empty 3

* * 3

3 * 33

/ / 33*

( /( 33*

4 /( 33*4

- /(- 33*4

1 /(- 33*41

) / 33*41-

+ + 33*41-/

6 + 33*41-/6

* +* 33*41-/6

2 +* 33*41-/62

(end) Empty 33*41-/62*+

So, the Postfix Expression is 33*41-/62*+

Example: Program for Infix to Postfix Conversion

#include <stdio.h>
#include <ctype.h>
#define SIZE 50
char s[SIZE];
int top=-1;
int push(char elem)
{
s[++top]=elem;
return 0;
}
char pop()
{
return(s[top--]);
}
int pr(char elem)
{
switch(elem)
{
case '#': return 0;
case '(': return 1;
case '+':
case '-': return 2;
case '*':
case '/': return 3;
}
return 0;
}
void main()
{
char infx[50], pofx[50], ch, elem;
int i=0, k=0;
printf("\n\nEnter Infix Expression: ");
scanf("%s",infx);
push('#');
while( (ch=infx[i++]) != '\0')
{
if( ch == '(') push(ch);
else
if(isalnum(ch)) pofx[k++]=ch;
else
if( ch == ')')
{
while( s[top] != '(')
pofx[k++]=pop();
elem=pop();
}
else
{
while( pr(s[top]) >= pr(ch) )
pofx[k++]=pop();
push(ch);
}
}
while( s[top] != '#')
pofx[k++]=pop();
pofx[k]='\0';
printf("\n\n Given Infix Expression: %s \n Postfix Expresssion: %s\
n",infx,pofx);
}

Output:
2. Infix to Prefix
Algorithm for Infix to Prefix Conversion:

Step 1: Insert “)” onto stack, and add “(” to end of the A .

Step 2: Scan A from right to left and repeat Step 3 to 6 for each element of A
until the stack is empty .

Step 3: If an operand is encountered, add it to B .

Step 4: If a right parenthesis is encountered, insert it onto stack .

Step 5: If an operator is encountered then,


a. Delete from stack and add to B (each operator on the top of stack)
which has same or higher precedence than the operator.
b. Add operator to stack.

Step 6: If left parenthesis is encountered then ,


a. Delete from the stack and add to B (each operator on top of stack until
a left parenthesis is encountered).
b. Remove the left parenthesis.

Step 7: Exit

Example: Program for Infix to Prefix Conversion

#include<stdio.h>
#include <conio.h>
#include <string.h>
#include <ctype.h>
#define MAX 50
struct infix
{
char target[MAX] ;
char stack[MAX] ;
char *s, *t ;
int top, l ;
};
void initinfix ( struct infix * ) ;
void setexpr ( struct infix *, char * ) ;
void push ( struct infix *, char ) ;
char pop ( struct infix * ) ;
void convert ( struct infix * ) ;
int priority ( char c ) ;
void show ( struct infix ) ;
void main( )
{
struct infix q ;
char expr[MAX] ;
clrscr( ) ;
initinfix ( &q ) ;
printf ( "\nEnter an expression in infix form: " ) ;
gets ( expr ) ;
setexpr ( &q, expr ) ;
convert ( &q ) ;
printf ( "The Prefix expression is: " ) ;
show ( q ) ;
getch( ) ;
}
/* initializes elements of structure variable */
void initinfix ( struct infix *pq )
{
pq -> top = -1 ;
strcpy ( pq -> target, "" ) ;
strcpy ( pq -> stack, "" ) ;
pq -> l = 0 ;
}
/* reverses the given expression */
void setexpr ( struct infix *pq, char *str )
{
pq -> s = str ;
strrev ( pq -> s ) ;
pq -> l = strlen ( pq -> s ) ;
*( pq -> target + pq -> l ) = '\0' ;
pq -> t = pq -> target + ( pq -> l - 1 ) ;
}
/* adds operator to the stack */
void push ( struct infix *pq, char c )
{
if ( pq -> top == MAX - 1 )
printf ( "\nStack is full.\n" ) ;
else
{
pq -> top++ ;
pq -> stack[pq -> top] = c ;
}
}
/* pops an operator from the stack */
char pop ( struct infix *pq )
{
if ( pq -> top == -1 )
{
printf ( "Stack is empty\n" ) ;
return -1 ;
}
else
{
char item = pq -> stack[pq -> top] ;
pq -> top-- ;
return item ;
}
}
/* converts the infix expr. to prefix form */
void convert ( struct infix *pq )
{
char opr ;
while ( *( pq -> s ) )
{
if ( *( pq -> s ) == ' ' || *( pq -> s ) == '\t' )
{
pq -> s++ ;
continue ;
}
if ( isdigit ( *( pq -> s ) ) || isalpha ( *( pq -> s ) ) )
{
while ( isdigit ( *( pq -> s ) ) || isalpha ( *( pq -> s ) ) )
{
*( pq -> t ) = *( pq -> s ) ;
pq -> s++ ;
pq -> t-- ;
}
}
if ( *( pq -> s ) == ')' )
{
push ( pq, *( pq -> s ) ) ;
pq -> s++ ;
}
if ( *( pq -> s ) == '*' || *( pq -> s ) == '+' || *( pq -> s ) == '/' ||*( pq -> s )
== '%' || *( pq -> s ) == '-' || *( pq -> s ) == '$' )
{
if ( pq -> top != -1 )
{
opr = pop ( pq ) ;
while ( priority ( opr ) > priority ( *( pq -> s ) ) )
{
*( pq -> t ) = opr ;
pq -> t-- ;
opr = pop ( pq ) ;
}
push ( pq, opr ) ;
push ( pq, *( pq -> s ) ) ;
}
else
push ( pq, *( pq -> s ) ) ;
pq -> s++ ;
}
if ( *( pq -> s ) == '(' )
{
opr = pop ( pq ) ;
while ( opr != ')' )
{
*( pq -> t ) = opr ;
pq -> t-- ;
opr = pop ( pq ) ;
}
pq -> s++ ;
}
}
while ( pq -> top != -1 )
{
opr = pop ( pq ) ;
*( pq -> t ) = opr ;
pq -> t-- ;
}
pq -> t++ ;
}
/* returns the priority of the operator */
int priority ( char c )
{
if ( c == '$' )
return 3 ;
if ( c == '*' || c == '/' || c == '%' )
return 2 ;
else
{
if ( c == '+' || c == '-' )
return 1 ;
else
return 0 ;
}
}
/* displays the prefix form of given expr. */
void show ( struct infix pq )
{
while ( *( pq.t ) )
{
printf ( " %c", *( pq.t ) ) ;
pq.t++ ;
}
}

Output:

3. Postfix to Infix
Following is an algorithm for Postfix to Infix conversion:

Step 1: While there are input symbol left.

Step 2: Read the next symbol from input.

Step 3: If the symbol is an operand, insert it onto the stack.

Step 4: Otherwise, the symbol is an operator.


Step 5: If there are fewer than 2 values on the stack, show error /* input not
sufficient values in the expression */

Step 6: Else,
a. Delete the top 2 values from the stack.
b. Put the operator, with the values as arguments and form a string.
c. Encapsulate the resulted string with parenthesis.
d. Insert the resulted string back to stack.

Step 7: If there is only one value in the stack, that value in the stack is the
desired infix string.

Step 8: If there are more values in the stack, show error /* The user input has
too many values */

Example: Suppose we are converting efg-+he-sh-o+/* expression into Infix.

Following table shows the evaluation of Postfix to Infix:

efg-+he-sh-o+/* NULL

fg-+he-sh-o+/* “e”

g-+he-sh-o+/* “f”
“e”

-+he-sh-o+/* “g”
“f”
“e”

+he-sh-o+/* “f”-“g”
“e”

he-sh-o+/* “e+f-g”

e-sh-o+/* “h”
“e+f-g”

-sh-o+/* “e”
“h”
“e+f-g”

sh-o+/* “h-e”
“e+f-g”

h-o+/* “s”
“h-e”
“e+f-g”

-o+/* “h”
“s”
“h-e”
“e+f-g”

o+/* “s-h”
“h-e”
“e+f-g”

+/* “o”
“s-h”
“h-e”
“e+f-g”

/* “s-h+o”
“h-e”
“e+f-g”

* “(h-e)/( s-h+o)”
“e+f-g”

NULL “(e+f-g)* (h-e)/( s-h+o)”

So, the Infix Expression is (e+f-g)* (h-e)/(s-h+o)

Example: Program for Postfix to Infix conversion

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>   /* for clrscr() and getch() used below */
int top = 10;
struct node
{
char ch;
struct node *next;
struct node *prev;
} *stack[11];
typedef struct node node;
void push(node *str)
{
if (top <= 0)
printf("Stack is Full ");
else
{
stack[top] = str;
top--;
}
}
node *pop()
{
node *exp;
if (top >= 10)
printf("Stack is Empty ");
else
exp = stack[++top];
return exp;
}
void convert(char exp[])
{
node *op1, *op2;
node *temp;
int i;
for (i=0;exp[i]!='\0';i++)
if (exp[i] >= 'a'&& exp[i] <= 'z'|| exp[i] >= 'A' && exp[i] <= 'Z')
{
temp = (node*)malloc(sizeof(node));
temp->ch = exp[i];
temp->next = NULL;
temp->prev = NULL;
push(temp);
}
else if (exp[i] == '+' || exp[i] == '-' || exp[i] == '*' || exp[i] == '/' || exp[i] == '^')
{
op1 = pop();
op2 = pop();
temp = (node*)malloc(sizeof(node));
temp->ch = exp[i];
temp->next = op1;
temp->prev = op2;
push(temp);
}
}
void display(node *temp)
{
if (temp != NULL)
{
display(temp->prev);
printf("%c", temp->ch);
display(temp->next);
}
}
void main()
{
char exp[50];
clrscr();
printf("Enter the postfix expression :");
scanf("%s", exp);
convert(exp);
printf("\nThe Equivalant Infix expression is:");
display(pop());
printf("\n\n");
getch();
}
Output:

4. Prefix to Infix
Algorithm for Prefix to Infix Conversion

Step 1: The reversed input string is inserted into a stack -> prefixToInfix(stack)

Step 2: If stack is not empty,


a. Temp -> pop the stack
b. If temp is an operator,
Write a opening parenthesis “(“ to show -> prefixToInfix(stack)
Write temp to show -> prefixToInfix(stack)
Write a closing parenthesis “)” to show
c. Else If temp is a space ->prefixToInfix(stack)
d. Else, write temp to show if stack.top != space ->prefixToInfix(stack)

Example: Suppose we are converting /-bc+-pqr expression from Prefix to Infix.

Following table shows the evaluation of Prefix to Infix Conversion:

Expression Stack

/-bc+-pqr Empty

/-bc+-p “q”
“r”

/-bc+- “p”
“q”
“r”

/-bc+ “p-q”
“r”
/-bc “p-q+r”

/-b “c”
“p-q+r”

/- “b”
“c”
“p-q+r”

/ “b-c”
“p-q+r”

NULL “((b-c)/((p-q)+r))”

So, the Infix Expression is ((b-c)/((p-q)+r))

Example: Program for Prefix to Infix Conversion

#include <stdio.h>
#include <conio.h>
#include <string.h>
#include <ctype.h>

char opnds[50][80], oprs[50];


int topr=-1, topd=-1;
pushd(char *opnd)
{
strcpy(opnds[++topd],opnd);
}
char *popd()
{
return(opnds[topd--]);
}
pushr(char opr)
{
oprs[++topr]=opr;
}
char popr()
{
return(oprs[topr--]);
}
int empty(int t)
{
if( t == 0) return(1);
return(0);
}
void main()
{
char prfx[50],ch,str[50],opnd1[50],opnd2[50],opr[2];
int i=0,k=0,opndcnt=0;
clrscr();
printf("Enter Prefix Expression = ");
gets(prfx);
printf(" Given Prefix Expression is : %s\n",prfx);
while( (ch=prfx[i++]) != '\0')
{
if(isalnum(ch))
{
str[0]=ch; str[1]='\0';
pushd(str); opndcnt++;
if(opndcnt >= 2)
{
strcpy(opnd2,popd());
strcpy(opnd1,popd());
strcpy(str,"(");
strcat(str,opnd1);
ch=popr();
opr[0]=ch;opr[1]='\0';
strcat(str,opr);
strcat(str,opnd2);
strcat(str,")");
pushd(str);
opndcnt-=1;
}
}
else
{
pushr(ch);
if(opndcnt==1)opndcnt=0; /* operator followed by single operand*/
}
}
if(!empty(topd))
{
strcpy(opnd2,popd());
strcpy(opnd1,popd());
strcpy(str,"(");
strcat(str,opnd1);
ch=popr();
opr[0]=ch;opr[1]='\0';
strcat(str,opr);
strcat(str,opnd2);
strcat(str,")");
pushd(str);
}
printf(" Infix Expression: ");
puts(opnds[topd]);
getch();
}

Output:

2. Backtracking
Backtracking is a recursive technique which is used for solving optimization and
search problems.

In order to find the optimized solution of a problem with backtracking, we


have to explore each and every possible solution of the problem, no matter
whether it turns out to be correct or not.
In backtracking, while exploring every possible solution of a problem, we
store the previously computed partial solutions on a Stack and use them while
working on the upcoming possibilities, popping them off the Stack whenever we
need to backtrack.

3. Parenthesis Checking
In programming, we make use of different types of parentheses, like (, ), {, },
which are used for opening and closing a block of code or an expression.

While checking whether an expression is balanced, the opening parentheses get
stored on a Stack and are popped off when the matching closing parentheses
arrive; a minimal sketch follows.
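
A minimal C sketch of this idea for round and curly brackets only (the function name balanced is illustrative, and the character array used as a stack assumes fewer than 100 unmatched opening brackets):

int balanced(const char *expr)
{
    char stack[100];
    int top = -1, i;

    for (i = 0; expr[i] != '\0'; i++)
    {
        if (expr[i] == '(' || expr[i] == '{')
            stack[++top] = expr[i];                 /* opening bracket: push */
        else if (expr[i] == ')' || expr[i] == '}')
        {
            if (top == -1)
                return 0;                           /* closing bracket with empty stack */
            if ((expr[i] == ')' && stack[top] != '(') ||
                (expr[i] == '}' && stack[top] != '{'))
                return 0;                           /* mismatched pair */
            top--;                                  /* matching bracket: pop */
        }
    }
    return top == -1;                               /* balanced only if the stack is empty */
}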

4. Function Call
In programming, whenever you make a call from one function to another
function, the return address in the calling function gets stored on the Stack.

So, when the called function terminates, program control moves back to the
calling function with the help of the address which was stored on the Stack.

So, the Stack plays the main role when it comes to calling one function from
another function.

5. String Reversal
String reversal is another nice application of a Stack. Here, the characters of the
String are pushed onto the Stack one by one, as sketched below.

So, the first character of the String ends up at the bottom of the Stack and the
last character of the String ends up on the top of the Stack.

After performing pop operations on the Stack, we get the String in reverse order.
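
A minimal C sketch of this idea (the string "stack" is only a sample value):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char str[50] = "stack", rev[50], st[50];
    int top = -1, i, k = 0, len = strlen(str);

    for (i = 0; i < len; i++)      /* push every character of the string */
        st[++top] = str[i];

    while (top != -1)              /* popping gives the characters in reverse order */
        rev[k++] = st[top--];
    rev[k] = '\0';

    printf("%s reversed is %s\n", str, rev);
    return 0;
}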

6. Syntax Parsing
Since many programming languages are context-free languages, the Stack is also
heavily used for syntax parsing by most compilers.

7. Memory Management
Memory Management is the important function of the Operating System. Stack
also plays the main role when it comes to Memory Management.

QUEUE Queue is an abstract data structure, somewhat similar to Stacks.


Unlike stacks, a queue is open at both its ends. One end is always used to insert
data (enqueue) and the other is used to remove data (dequeue). Queue follows
First-In-First-Out methodology, i.e., the data item stored first will be accessed
first.

A real-world example of a queue is a single-lane one-way road, where the


vehicle that enters first exits first. More real-world examples can be seen as
queues at ticket windows and bus stops.

Queue Representation

As we now understand that in queue, we access both ends for different reasons.
The following diagram given below tries to explain queue representation as
data structure −

As in stacks, a queue can also be implemented using Arrays, Linked-lists,


Pointers and Structures. For the sake of simplicity, we shall implement queues
using one-dimensional array.

Basic Operations
Queue operations may involve initializing or defining the queue, utilizing it,
and then completely erasing it from the memory. Here we shall try to
understand the basic operations associated with queues −
 enqueue() − add (store) an item to the queue.
 dequeue() − remove (access) an item from the queue.
A few more functions are required to make the above-mentioned queue
operations efficient. These are −
 peek() − Gets the element at the front of the queue without removing it.
 isfull() − Checks if the queue is full.
 isempty() − Checks if the queue is empty.
In a queue, we always dequeue (access) the data pointed to by the front pointer,
and while enqueueing (storing) data in the queue we take the help of the rear
pointer.
Let's first learn about supportive functions of a queue −

 peek()
This function helps to see the data at the front of the queue. The algorithm of
peek() function is as follows −
Algorithm
begin procedure peek
return queue[front]
end procedure
Implementation of peek() function in C programming language −
Example
int peek() {
return queue[front];
}

 isfull()
As we are using a single-dimensional array to implement the queue, we just
check whether the rear pointer has reached MAXSIZE - 1 (the last valid index)
to determine that the queue is full. If we maintain the queue in a circular
linked list, the algorithm will differ.
Algorithm of isfull() function −
Algorithm
begin procedure isfull

if rear equals MAXSIZE - 1
return true
else
return false
endif
end procedure
Implementation of isfull() function in C programming language −
Example
bool isfull() {
if(rear == MAXSIZE - 1)
return true;
else
return false;
}

 isempty()
Algorithm of isempty() function −
Algorithm
begin procedure isempty

if front is less than 0 OR front is greater than rear
return true
else
return false
endif
end procedure
If the value of front is less than 0, the queue has not yet been initialized (or
holds no elements), hence it is empty.
Here's the C programming code −
Example
bool isempty() {
if(front < 0 || front > rear)
return true;
else
return false;
}
 Enqueue Operation

Queues maintain two data pointers, front and rear. Therefore, queue operations
are comparatively more difficult to implement than those of stacks.
The following steps should be taken to enqueue (insert) data into a queue −
 Step 1 − Check if the queue is full.
 Step 2 − If the queue is full, produce overflow error and exit.
 Step 3 − If the queue is not full, increment rear pointer to point the next
empty space.
 Step 4 − Add data element to the queue location, where the rear is
pointing.
 Step 5 − return success.

Sometimes, we also check whether the queue is initialized or not, to handle any
unforeseen situations.
Algorithm for enqueue operation
procedure enqueue(data)
if queue is full
return overflow
endif
rear ← rear + 1
queue[rear] ← data
return true
end procedure
Implementation of enqueue() in C programming language −
Example
int enqueue(int data) {
    if (isfull())
        return 0;
    rear = rear + 1;
    queue[rear] = data;
    return 1;
}
 Dequeue Operation

Accessing data from the queue is a process of two tasks − access the data
where front is pointing and remove the data after access. The following steps
are taken to perform dequeue operation −
 Step 1 − Check if the queue is empty.
 Step 2 − If the queue is empty, produce underflow error and exit.
 Step 3 − If the queue is not empty, access the data where front is
pointing.
 Step 4 − Increment front pointer to point to the next available data
element.
 Step 5 − Return success.

Algorithm for dequeue operation


procedure dequeue
if queue is empty
return underflow
end if

data = queue[front]
front ← front + 1
return true

end procedure
Implementation of dequeue() in C programming language −
Example
int dequeue() {
    if (isempty())
        return 0;

    int data = queue[front];
    front = front + 1;

    return data;
}
Circular Queue
Before we start to learn about the circular queue, we should first understand
why we need a circular queue when we already have the linear queue data
structure.

In a linear queue, once the queue is completely full, it is not possible to insert
more elements. Even if we dequeue the queue to remove some of the elements,
no new elements can be inserted until the queue is reset. Why is that?

When we dequeue an element to remove it from the queue, we are actually
moving the front of the queue forward, thereby reducing the usable size of the
queue. We still cannot insert new elements, because the rear pointer is already
at the end of the array.

The only way out is to reset the linear queue, for a fresh start.

A circular queue is also a linear data structure, which follows the principle of
FIFO (First In First Out), but instead of ending the queue at the last position, it
starts again from the first position after the last, hence making the queue behave
like a circular data structure.
Basic features of Circular Queue

1. In case of a circular queue, the head pointer will always point to the front
of the queue, and the tail pointer will always point to the end of the queue.
2. Initially, the head and the tail pointers point to the same location; this
means that the queue is empty.
3. New data is always added at the location pointed to by the tail pointer, and
once the data is added, the tail pointer is incremented to point to the next
available location.
4. In a circular queue, data is not actually removed from the queue. Only
the head pointer is incremented by one position when dequeue is executed.
Since the queue data is only the data between head and tail, the data left
outside this range is no longer part of the queue, and is therefore
effectively removed.
5. The head and the tail pointers are reinitialised to 0 every time they reach
the end of the queue.
6. The head and the tail pointers can also cross each other. In other words,
the head pointer can be greater than the tail. Sounds odd? This happens
when we dequeue the queue a couple of times while the tail pointer has
already been reinitialised upon reaching the end of the queue.
Going Round and Round

Another very important point is keeping the value of the tail and the head
pointer within the maximum queue size.

In the diagrams above the queue has a size of 8, hence the value of the tail and
head pointers will always be between 0 and 7.

This can be controlled either by checking every time whether tail or head has
reached maxSize and then setting it back to 0, or, in a better way, by using the
modulo operator: for a value x divided by 8, the remainder can never be greater
than 7, so it will always lie between 0 and 7, which is exactly what we want.

So the formula to increment the head and tail pointers to make them go round
and round over and again is head = (head + 1) % maxSize or tail =
(tail + 1) % maxSize.

Operations on Circular Queue
The following are the two operations that can be performed on a circular
queue:

o Enqueue: It inserts an element into the queue. The scenarios to be
considered while inserting an element are:
1. If the queue is empty, then front and rear are both set to 0 to insert
the new element.
2. If the queue is not empty and rear is not at the last index (n-1), then
the value of rear is incremented.
3. If the queue is not empty and rear is equal to n-1, then rear is set
back to 0 (wrap-around).
o Dequeue: It performs the deletion operation on the queue. The cases to be
considered while deleting an element are:
1. If there is only one element in the queue, then after the dequeue
operation the queue becomes empty. In this case, the front and rear
values are set to -1.
2. If the value of front is equal to n-1, then after the dequeue operation
the value of front is set back to 0.
3. If neither of the above conditions holds, then the value of front is
incremented.

Differences between Linear Queue and Circular Queue

Meaning:
Linear Queue - The linear queue is a type of linear data structure that contains
the elements in a sequential manner.
Circular Queue - The circular queue is also a linear data structure, in which the
last element of the queue is connected to the first element, thus creating a
circle.

Insertion and Deletion:
Linear Queue - In a linear queue, insertion is done from the rear end, and
deletion is done from the front end.
Circular Queue - In a circular queue, insertion and deletion can take place from
any end.

Memory space:
Linear Queue - The memory space occupied by the linear queue is more than
that of the circular queue.
Circular Queue - It requires less memory as compared to the linear queue.

Memory utilization:
Linear Queue - The usage of memory is inefficient.
Circular Queue - The memory can be utilized more efficiently.
Application of Circular Queue

Below are some common real-world examples where circular queues are used:

1. Computer-controlled traffic signal systems use circular queues.
2. CPU scheduling and memory management.

Implementation of Circular Queue

Below is an outline of the implementation of a circular queue:

1. Initialize the queue, with the size of the queue defined (maxSize), and the
head and tail pointers.
2. enqueue: Check if the number of elements is equal to maxSize - 1:
o If yes, then return "Queue is full".
o If no, then add the new data element at the location of the tail pointer
and increment the tail pointer.
3. dequeue: Check if the number of elements in the queue is zero:
o If yes, then return "Queue is empty".
o If no, then increment the head pointer.
4. Finding the size:
o If tail >= head, size = (tail - head) + 1
o But if head > tail, then size = maxSize - (head - tail) + 1
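The steps above can be turned into code in more than one way. The sketch below is one possible array-based version in C; the names MAXSIZE, cq, head, tail, and count are assumptions made for this example, and an explicit count variable is used instead of the size formula in step 4, since it makes the full/empty tests straightforward:

#include <stdio.h>

#define MAXSIZE 8

int cq[MAXSIZE];
int head = 0;      /* index of the front element            */
int tail = -1;     /* index of the last inserted element    */
int count = 0;     /* current number of elements in queue   */

/* enqueue: add an element at the tail, wrapping around with %  */
int enqueue(int data)
{
    if (count == MAXSIZE) {
        printf("Queue is full\n");
        return 0;
    }
    tail = (tail + 1) % MAXSIZE;   /* go round and round          */
    cq[tail] = data;
    count++;
    return 1;
}

/* dequeue: remove the element at the head, wrapping around with % */
int dequeue(void)
{
    int data;
    if (count == 0) {
        printf("Queue is empty\n");
        return -1;
    }
    data = cq[head];
    head = (head + 1) % MAXSIZE;
    count--;
    return data;
}

int main(void)
{
    int i;
    for (i = 1; i <= 8; i++)
        enqueue(i * 10);           /* fills the queue: 10..80     */
    printf("Dequeued: %d\n", dequeue());
    printf("Dequeued: %d\n", dequeue());
    enqueue(90);                   /* reuses the freed slots      */
    enqueue(100);
    printf("Front element is now %d\n", cq[head]);
    return 0;
}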

Priority queue
The priority queue in the data structure is an extension of the "normal" queue.
It is an abstract data type that contains a group of items. It is like the "normal"
queue except that dequeuing follows a priority order: the items with the highest
priority are dequeued first. This section gives a deeper understanding of the
priority queue and its implementation in the C programming language.

What is a Priority Queue?


It is an abstract data type that provides a way to maintain a dataset. The
"normal" queue follows a first-in-first-out pattern: it dequeues elements in the
same order in which they were inserted. In a priority queue, however, the order
depends on each element's priority: the highest-priority elements move to the
front of the priority queue and the lowest-priority elements move to the back.
It supports only elements that are comparable; hence, a priority queue in the
data structure arranges its elements in either ascending or descending order.
You can think of a priority queue as several patients waiting in line at a
hospital, where the condition of each patient defines the priority order. The
patient with the most severe injury would be the first in the queue.
What are the Characteristics of a Priority Queue?
A queue is termed a priority queue if it has the following characteristics:
 Each item has some priority associated with it.
 An item with the highest priority is moved to the front and deleted first.
 If two elements share the same priority value, then the priority queue
follows the first-in-first-out principle for the dequeue operation.
What are the Types of Priority Queue?
A priority queue is of two types:
 Ascending Order Priority Queue
 Descending Order Priority Queue
Ascending Order Priority Queue
An ascending order priority queue gives the highest priority to the lowest
number in the queue. For example, suppose you have six numbers in the
priority queue: 4, 8, 12, 45, 35, 20. First, arrange these numbers in ascending
order. The new list is as follows: 4, 8, 12, 20, 35, 45. In this list, 4 is the
smallest number; hence, the ascending order priority queue treats number 4 as
the highest priority.
4 8 12 20 35 45

In the above table, 4 has the highest priority, and 45 has the lowest priority.
Descending Order Priority Queue
A descending order priority queue gives the highest priority to the highest
number in the queue. For example, suppose you have six numbers in the
priority queue: 4, 8, 12, 45, 35, 20. First, arrange these numbers in descending
order. The new list is as follows: 45, 35, 20, 12, 8, 4. In this list, 45 is the
highest number; hence, the descending order priority queue treats number 45 as
the highest priority.
45 35 20 12 8 4

In the above table, 4 has the lowest priority, and 45 has the highest priority.
Implementation of the Priority Queue in Data Structure
You can implement the priority queue in one of the following ways:
 Linked list
 Binary heap
 Arrays
 Binary search tree
The binary heap is the most efficient method for implementing the priority
queue in the data structure.
The table below summarizes the complexity of different operations in a
priority queue.

Operation   Unordered Array   Ordered Array   Binary Heap   Binary Search Tree
Insert      O(1)              O(N)            O(log N)      O(log N)
Peek        O(N)              O(1)            O(1)          O(1)
Delete      O(N)              O(1)            O(log N)      O(log N)


Binary Heap
A binary heap tree organises all the parent and child nodes of the tree in a
particular order. In a binary heap tree, a parent node can have a maximum
of 2 child nodes. The value of the parent node could either be:
 equal to or less than the value of a child node.
 equal to or more than the value of a child node.
This ordering divides binary heaps into two types: the max heap and the
min heap.

Max Heap
The max heap is a binary heap in which a parent node has a value either
equal to or greater than the child node value. The root node of the tree has
the highest value.
Inserting an Element in a Max Heap Binary Tree
You can perform the following steps to insert an element/number in
the priority queue in the data structure.
1. The algorithm scans the tree from top to bottom and left to right to find an
empty slot. It then inserts the element at the last node in the tree.
2. After inserting the element, the order of the binary tree may be disturbed.
You must swap the new element with its parent to restore the order of the max
heap binary tree, and keep doing so until the tree satisfies the max-heap
property.
Algorithm to Insert an Element in a Max Heap Binary Tree
If the tree is empty and contains no node,
create a new parent node newElement.
else (a parent node is already available)
insert the newElement at the end of the tree (i.e., last node of the tree
from left to right.)
max-heapify the tree
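A possible C sketch of this insertion is shown below, assuming an array-based heap where index 0 is the root and the parent of node i is at (i - 1) / 2; the names heap, heapSize, insertMaxHeap and swap are chosen for this illustration:

#include <stdio.h>

#define MAXNODES 100

int heap[MAXNODES];
int heapSize = 0;

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* insert newElement at the next free slot (the last node), then move it up
   until its parent is no longer smaller, so the max-heap property holds   */
void insertMaxHeap(int newElement)
{
    int i = heapSize++;            /* place at the last node        */
    heap[i] = newElement;

    while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
        swap(&heap[(i - 1) / 2], &heap[i]);   /* swap with parent   */
        i = (i - 1) / 2;
    }
}

int main(void)
{
    int data[] = {9, 4, 7, 1, 12}, i;
    for (i = 0; i < 5; i++)
        insertMaxHeap(data[i]);
    printf("Root (highest priority) = %d\n", heap[0]);   /* prints 12 */
    return 0;
}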
Deleting an Element in a Max Heap Binary Tree
You can perform the following steps to delete an element in the priority queue
in the data structure:
1. Choose the element that you want to delete from the binary tree.
2. Shift it to the end of the tree by swapping it with the data of the last node.
3. Remove the last element of the binary tree.
4. After deleting the element, the order of the binary tree may be disturbed.
You must reorder it to satisfy the max-heap property, and keep shuffling
the data until the tree meets that property.

Algorithm to Delete an Element in a Max Heap Binary Tree


If the elementUpForDeletion is the lastNode,
delete the elementUpForDeletion
else replace elementUpForDeletion with the lastNode
delete the elementUpForDeletion
max-heapify the tree
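Continuing the insertion sketch above (it reuses the assumed heap array, heapSize and swap), the functions below show one possible way to delete the root, i.e. the maximum element; maxHeapify moves the replacement value down until the max-heap property holds again:

/* restore the max-heap property by moving heap[i] downwards */
void maxHeapify(int i)
{
    int left = 2 * i + 1, right = 2 * i + 2, largest = i;

    if (left < heapSize && heap[left] > heap[largest])   largest = left;
    if (right < heapSize && heap[right] > heap[largest]) largest = right;

    if (largest != i) {
        swap(&heap[i], &heap[largest]);
        maxHeapify(largest);       /* continue further down the tree   */
    }
}

/* remove and return the root (maximum element) of the max heap */
int deleteMax(void)
{
    int max = heap[0];
    heap[0] = heap[heapSize - 1];  /* replace root with the last node  */
    heapSize--;                    /* drop the last node               */
    maxHeapify(0);                 /* sift the new root down           */
    return max;
}

For instance, calling deleteMax() on the heap built in the previous sketch would return 12 and leave 9 at the root.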

The priority queue in a data structure is used in Google Maps for searching the
optimal path to reach a destination. Dijkstra's shortest path algorithm uses a
min priority queue to store the candidate paths to the destination, with distance
as the parameter for priority assignment; at each step it extracts the path with
the highest priority (smallest distance), which finally yields the optimal route.
In this section, you will explore the priority queue in the data structure further,
to understand its functionality and applications.

Introduction to Priority Queue in Data Structure

Priority Queue is an abstract data type that performs operations on data
elements according to their priority. To understand it better, first analyze a
real-life scenario of a priority queue.

The hospital emergency queue is an ideal real-life example of a priority queue.
In this queue of patients, the patient with the most critical condition is the first
in the queue, and the patient who doesn't need immediate medical attention
will be the last. In this queue, the priority depends on the medical condition of
the patients.

The priority queue in the data structure resembles the properties of the hospital
emergency queue; thus, it is widely used in sorting algorithms. It behaves
similarly to a linear queue except for the fact that each element has some
priority assigned to it. The priority of the elements determines the order of
removal from the queue, i.e., the element with higher priority leaves the queue
first, whereas the element with the lowest priority leaves last.
Characteristics of Priority Queue

Priority queue in a data structure is an extension of a linear queue that possesses the
following properties:

 Every element has a certain priority assigned to it.

 Every element of this queue must be comparable.

 It will delete the element with higher priority before the element with lower priority.

 If multiple elements have the same priority, it removes them from the queue according to the
FCFS (first-come-first-served) principle.

Now, understand these properties with the help of an example. Consider you have to insert 7,
2, 45, 32, and 12 into a priority queue. The element with the least value has the highest
priority. Thus, you should maintain the lowest element at the front node.

The queue maintains this priority order during every insertion. But if you carry out N
comparisons for each insertion, the overall time complexity will become O(N^2).
Representation of Priority Queue

You can also implement a priority queue using arrays; however, if you consider
the array implementation of the priority queue, inserting an element into the
sorted array will cost you O(n). In general, processing each element will then
cost you O(n^2). Because of this complexity, implementing a priority queue
using arrays is not an ideal approach. Hence, here you will only learn about the
representation of the priority queue using a linked list.

Consider a linked queue having 3 data elements 3, 17, 43, respectively,
arranged according to priority. What if you have to insert a new node with
value 2 into this linked queue? Since 2 is smaller than the element at the front
(head) node, insertion at the front will be most efficient. Additionally, it allows
you to have O(1) time complexity during deletion.

This particular scenario of insertion seems perfect, as it doesn't cost you extra
time. But what if the new element is larger than most of the nodes of the
queue? For instance, say you want to insert a node consisting of element 45.
Here, element 45 has to be compared with each element inside the queue
before its position is found, so this insertion will cost you O(N).
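A possible C sketch of this sorted insertion into a linked priority queue is given below; the Node structure, the helper names insert and removeMin, and the sample values are assumptions made for this example (smaller values sit nearer the front, so deletion from the front stays O(1)):

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

/* insert value so that the list stays sorted in ascending order
   (smallest value = highest priority = front node)               */
void insert(struct Node **front, int value)
{
    struct Node *newNode = (struct Node *)malloc(sizeof(struct Node));
    struct Node *cur;
    newNode->data = value;

    if (*front == NULL || value < (*front)->data) {
        newNode->next = *front;        /* O(1) insertion at the front  */
        *front = newNode;
        return;
    }
    cur = *front;
    while (cur->next != NULL && cur->next->data <= value)
        cur = cur->next;               /* walk until the right spot    */
    newNode->next = cur->next;
    cur->next = newNode;
}

/* remove and return the highest-priority (front) element in O(1) */
int removeMin(struct Node **front)
{
    struct Node *tmp = *front;
    int value = tmp->data;
    *front = tmp->next;
    free(tmp);
    return value;
}

int main(void)
{
    struct Node *pq = NULL;
    insert(&pq, 17); insert(&pq, 3); insert(&pq, 43); insert(&pq, 2);
    printf("%d ", removeMin(&pq));     /* prints 2 */
    printf("%d\n", removeMin(&pq));    /* prints 3 */
    return 0;
}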
Different Implementation Strategies for Priority Queue

Three approaches implement a priority queue in the data structure with a
complexity less than O(n^2): the linked list, the binary heap, and the binary
search tree.

Binary heaps and binary search trees provide almost similar complexities: both
cost O(log n) for insertion and deletion and O(1) for the peek operation. But
which approach is the best for implementing a priority queue?

To answer this question, you need to look at memory management in the case
of both data structures. Since a binary heap is stored in an array, there is always
a better locality of reference and operations become more cache-friendly.
Binary search trees, on the other hand, use pointers for their nodes, which takes
up more space in memory. Moreover, building a self-balancing BST costs
O(N log N), whereas building a binary heap costs only O(N). These facts clarify
that the binary heap is the best data structure for priority queue implementation.

Moving forward, you will understand what heap data structure is and how it
works.

Types of Priority Queue

There are two types of priority queues based on the priority of elements:
 If the element with the smallest value has the highest priority, then that
priority queue is called the min priority queue.
 If the element with the highest value has the highest priority, then that
priority queue is known as the max priority queue.
Furthermore, you can implement the min priority queue using a min heap, and
the max priority queue using a max heap.

Applications of Priority Queue in Data Structure

The following are the applications of the priority queue in data structures:

 IP Routing to Find Open Shortest Path First: OSPF is a link-state routing
protocol that is used to find the best path between the source and the
destination router. This protocol works on the principle of Dijkstra's shortest
path algorithm, using a priority queue to track the optimal route.
 Data Compression in WINZIP / GZIP: The Huffman encoding algorithm uses
a priority queue to maintain the codes for data contents. They store these
codes in a min heap, considering the size of codes as a parameter to decide
the priority.
 Used in implementing Prim’s algorithm: Prim’s algorithm generates a
minimum spanning tree from an undirected, connected, and weighted graph.
It uses a min priority queue to maintain the order of elements for generating a
minimum spanning tree.
 Used to perform the heap sort: When you provide an unsorted array to this
algorithm, it converts it into a sorted array. This algorithm uses a min priority
queue to generate an order of elements.
 Load balancing and interrupt handling: Load balancing fields the requests
from the users and distributes them to the different available servers, whereas
interrupt handling is the mechanism for handling interrupts in an OS. In both
functionalities, a priority queue is used to manage the pending requests or
interrupts.

Priority Queue Operations


The operations that you can perform on a priority queue in data structure are
insertion, deletion, and peek. Let’s analyze how you can achieve these
operations on max heap representation of priority queue.

 Inserting the Element in a Priority Queue: When you perform a new
insertion, the new element moves into the first empty space found from top to
bottom and left to right. If the element is then not in its correct position, it is
compared with its parent node, and if the two are out of order, they are
swapped. This swapping process continues until all the elements inside the
queue are in the correct positions.

For example, suppose the new element 43 is inserted into the max heap
representation of the priority queue. Because of this insertion, the order gets
disturbed, so a swapping operation takes place to bring every element back into
the proper order. This operation is also known as heapify in the heap data
structure.
 Deletion in Priority Queue: As you know, in a max heap the maximum
element is the root node, and the element with the maximum priority is
removed first. Thus, you remove the root node from the queue. This removal
creates an empty slot at the root, which is filled with the last node of the heap.
The element moved into the root is then compared with its children and sifted
down until the heap invariant is restored.
