OS CH-1 and 2

This document provides an overview of operating systems, detailing their roles, purposes, and types. It discusses the importance of operating systems in managing hardware and software, as well as their evolution through various generations. Different types of operating systems, such as batch, time-sharing, distributed, network, real-time, multiprogramming, and mobile operating systems are also described, along with their advantages and disadvantages.


Chapter 1

Introduction to Operating System

1.1. Role and purpose of operating systems


1.2. History of operating system development
1.3. Types of operating systems

Introduction:
An operating system is a layer of software between the user and the computer hardware. It
controls and manages all of a computer's resources, including CPU (central processing unit)
time, memory, input and output, and network connections. An operating system also provides a
UI (user interface) through which users interact with the computer and its programs. This includes the
desktop, shortcut icons, file explorers, menus, taskbars, and much more. The operating system
therefore takes care of interacting with the hardware on the user's behalf.

It manages all the computer hardware. It provides the base for application programs and acts as
an intermediary between a user and the computer hardware.

The operating system has two main objectives:

First, an operating system controls the computer’s hardware.
Second, it provides an interactive interface to the user and interprets
commands so that the user can communicate with the hardware.

The operating system is a very important part of almost every computer system.

Managing Hardware

Goals of an Operating System


The primary objective of a computer is to execute instructions efficiently and to
increase the productivity of the processing resources attached to the computer system, such as
hardware resources, software resources, and the users. In other words, maximum
CPU utilisation is the main objective, because the CPU is the main device used for the
execution of programs. In brief, the goals are:
1. The primary goal of an operating system is to make the computer convenient to use.
2. The secondary goal is to use the hardware in an efficient manner.

Role of Operating System


The role of the operating system in the computer system is sub-divided into two parts, namely:
1. The role of the operating system in managing the hardware components of a computer.
2. The role of the operating system in software management.

1. Role of the operating system in managing the hardware components of a computer.
The operating system, as we all know, performs a vital role for the hardware components of a computer
system. Without the operating system, hardware components cannot be driven by their software,
because only the operating system offers a platform for software to be installed and run. In line with
this importance, Barnsley [2009] averred that an operating system provides the environment
within which programs are executed. Internally, operating systems
vary greatly in their make-up, since they are organized along many different lines.
Laban [2000] opined that an operating system provides services to programs and to the users of
those programs.

Role of operating system in software management


Just as with hardware, the operating system plays a vital role in software management.
All application packages (software) are installed on a computer through the operating
system. Without an operating system, packages like CorelDRAW, Photoshop, MS Office, etc. will
not work on a computer.
All software drivers that enable hardware to work are installed in the operating system
environment.
According to Nidpy [2009], an operating system is the most fundamental of all system software
programs, and it performs the following roles:
Hiding the complexities of the hardware from the user
Managing the hardware’s resources, which include the processor, memory, data storage,
and input/output (I/O) controllers
Handling “interrupts” generated by the I/O controllers
Sharing of I/O between many programs using the CPU

Types of operating systems


An operating system performs all the basic tasks like managing files, processes, and
memory. Thus the operating system acts as the manager of all the resources, i.e. a resource
manager, and becomes an interface between user and machine.
Types of Operating Systems: Some widely used operating systems are as follows:
1. Batch Operating System –
This type of operating system does not interact with the computer directly. An
operator takes similar jobs having the same requirements and groups them into
batches. It is the responsibility of the operator to sort jobs with similar needs.

Advantages of Batch Operating System:


 Although it is very difficult to estimate the time required for any job to complete, the
processors of batch systems know how long a job will take while it is in the queue
 Multiple users can share the batch systems
 The idle time for the batch system is very small
 It is easy to manage large work repeatedly in batch systems
Disadvantages of Batch Operating System:
 The computer operators must be familiar with batch systems
 Batch systems are hard to debug
 It is sometimes costly
 The other jobs will have to wait for an unknown time if any job fails
Examples of Batch based Operating System: Payroll System, Bank Statements, etc.
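The operator's grouping step described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of any real batch monitor; the job names and requirement labels are invented for the example:

```python
from collections import defaultdict

def group_into_batches(jobs):
    """Group submitted jobs by their resource requirement,
    mimicking the operator's role in a batch system."""
    batches = defaultdict(list)
    for name, requirement in jobs:
        batches[requirement].append(name)
    return dict(batches)

# Hypothetical jobs: (job name, required resource)
jobs = [("payroll", "tape"), ("report", "printer"), ("billing", "tape")]
print(group_into_batches(jobs))
# {'tape': ['payroll', 'billing'], 'printer': ['report']}
```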

2. Time-Sharing Operating Systems –


Each task is given some time to execute so that all the tasks work smoothly. Each user gets
a share of CPU time, as all users share a single system. These systems are also known as Multitasking
Systems. The tasks can come from a single user or from different users. The time that each task
gets to execute is called the quantum. After this time interval is over, the OS switches over to the next
task.

Advantages of Time-Sharing OS:


 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of the security and integrity of user programs and data
 Data communication problem
Examples of Time-Sharing OSs are: Multics, Unix, etc.
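The quantum-based switching described above can be illustrated with a small round-robin simulation. This is a hedged sketch of the idea, not any real scheduler; task names and burst times are invented:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate time-sharing: each task runs for at most one
    quantum before the OS switches to the next ready task.
    Returns the sequence of (task, time_run) slices."""
    queue = deque(tasks)          # each entry: (name, remaining_time)
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining - run > 0:
            queue.append((name, remaining - run))  # not finished: requeue
    return schedule

print(round_robin([("A", 5), ("B", 3)], quantum=2))
# [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```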

3. Distributed Operating System –


Distributed operating systems are a recent advancement in the world of computer
technology and are being widely adopted all over the world at a great pace.
Various autonomous interconnected computers communicate with each other over a shared
communication network. The independent systems possess their own memory units and CPUs,
and are referred to as loosely coupled systems or distributed systems. These systems’
processors differ in size and function. The major benefit of working with this type of
operating system is that a user can access files or software
which are not actually present on his own system but on some other system connected within the
network; i.e., remote access is enabled among the devices connected in that network.

Advantages of Distributed Operating System:
 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 The languages used to establish distributed systems are not yet well defined
 These types of systems are not readily available as they are very expensive. Not only that
the underlying software is highly complex and not understood well yet
Examples of Distributed Operating System are- LOCUS, etc.

4. Network Operating System –


These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. These types of operating systems
allow shared access of files, printers, security, applications, and other networking functions
over a small private network. One more important aspect of Network Operating Systems is
that all the users are well aware of the underlying configuration, of all other users within the
network, their individual connections, etc. and that’s why these computers are popularly
known as tightly coupled systems.

Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware up-gradation are easily integrated into the system
 Server access is possible remotely from different locations and types of systems
Disadvantages of Network Operating System:
 Servers are costly
 User has to depend on a central location for most operations
 Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD, etc.

5. Real-Time Operating System –


These types of OSs serve real-time systems. The time interval required to process and
respond to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
There are two types of Real-Time Operating Systems, as follows:
 Hard Real-Time Systems:
These OSs are meant for applications where time constraints are very strict and even the
shortest possible delay is not acceptable. These systems are built for saving life like
automatic parachutes or airbags which are required to be readily available in case of any
accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are meant for applications where the time constraint is less strict.

Advantages of RTOS:
 Maximum Consumption: Maximum utilization of devices and system, thus more output
from all the resources
 Task Shifting: The time taken to shift between tasks in these systems is very short. For
example, older systems take about 10 microseconds to shift from one task to another,
while the latest systems take about 3 microseconds.
 Focus on Application: Focus on running applications and less importance to applications
which are in the queue.
 Real-time operating system in the embedded system: Since the size of programs are
small, RTOS can also be used in embedded systems like in transport and others.
 Error Free: These types of systems are designed to be error-free.
 Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
 Limited Tasks: Very few tasks run at the same time, and concentration is kept on a
few applications to avoid errors.
 Heavy use of system resources: These systems consume significant system resources and
are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to
write.
 Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals
to respond to interrupts as quickly as possible.
 Thread Priority: It is not good to set thread priority, as these systems rarely
switch tasks.
Examples of applications of Real-Time Operating Systems are: scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
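A hard real-time requirement can be expressed as a simple check that every response arrives within the required response time. The sketch below is illustrative only; the timestamps and the 6 ms deadline are invented:

```python
def meets_deadline(events, deadline_ms):
    """Return True only if every (request_time, response_time) pair
    stays within the required response time -- the hard real-time
    criterion, where even one missed deadline is a failure."""
    return all(resp - req <= deadline_ms for req, resp in events)

# Timestamps in milliseconds: (input arrived, response produced)
events = [(0, 3), (10, 12), (20, 26)]
print(meets_deadline(events, deadline_ms=6))  # True
print(meets_deadline(events, deadline_ms=5))  # False (last pair takes 6 ms)
```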

Multiprogramming Operating System


Like the name suggests, a multiprogramming operating system is an operating system that is
capable of running multiple programs at the same time. The main aim in multiprogramming
operating systems is to improve resource utilization and system throughput. This is achieved by
organizing the computing jobs in a manner that ensures that the CPU always has a job to execute
at any one time.

These operating systems are sometimes referred to as multitasking operating systems because
they allow two or more processes to run simultaneously. This is to mean that data from two or
more processes can be held in the primary memory at a given time.

There are mainly two types of multiprogramming operating systems: multitasking operating
systems and multiuser operating systems.
Examples: Windows, Linux, Unix, etc.
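The throughput gain from keeping several jobs resident is often estimated with the textbook formula 1 − pⁿ, where p is the fraction of time a job waits on I/O and n is the number of resident jobs. This formula is a standard approximation from the OS literature, not something stated in this document:

```python
def cpu_utilization(io_wait_fraction, degree):
    """Classic multiprogramming estimate: if each job waits for I/O
    a fraction p of the time, the chance that all n resident jobs
    are waiting at once is p**n, so the CPU is busy the rest."""
    return 1 - io_wait_fraction ** degree

# With jobs that wait on I/O 80% of the time:
print(round(cpu_utilization(0.8, 1), 2))  # 0.2  -- one job: CPU mostly idle
print(round(cpu_utilization(0.8, 4), 2))  # 0.59 -- four jobs keep it busier
```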

Mobile Operating System:


A mobile operating system is an operating system that helps to run other application software on
mobile devices. It is the same kind of software as familiar desktop operating systems like
Linux and Windows, but lighter and simpler to some extent.
The operating systems found on smartphones include Symbian OS, iPhone OS, RIM's
BlackBerry, Windows Mobile, Palm WebOS, Android, and Maemo. Android, WebOS, and
Maemo are all derived from Linux. The iPhone OS originated from BSD and NeXTSTEP,
which are related to Unix

History of Operating System


Operating systems have been evolving over the years. You will briefly look at this development
of the operating systems with respect to the evolution of the hardware/architecture of the
computer systems in this section. Since operating systems have historically been closely tied
with the architecture of the computers on which they run, you will look at successive generations
of computers to see what their operating systems were like. You may not exactly map the
operating systems generations to the generations of the computer, but roughly it provides the
idea behind them.

You can roughly divide them into five distinct generations that are characterized by hardware
component technology, software development, and mode of delivery of computer services.

0th Generation
The term 0th generation is used to refer to the period of development of computing, which
predated the commercial production and sale of computer equipment. Consider that this
period may reach as far back as when Charles Babbage invented the Analytical Engine. Afterwards came
the computer built by John Atanasoff in 1940; the Mark I, built by Howard Aiken and a group of
IBM engineers at Harvard in 1944; the ENIAC, designed and constructed at the University of
Pennsylvania by J. Presper Eckert and John Mauchly; and the EDVAC, developed in 1944-46 by
John von Neumann, Arthur Burks, and Herman Goldstine, which was the first to fully
implement the idea of the stored program and serial execution of instructions.
The development of EDVAC set the stage for the evolution of commercial computing and
operating system software.

The hardware component technology of this period was electronic vacuum tubes.
The actual operation of these early computers took place without the benefit of an operating
system. Early programs were written in machine language and each contained code for initiating
operation of the computer itself.

The mode of operation was called “open-shop” and this meant that users signed up for computer
time and when a user’s time arrived, the entire (in those days quite large) computer system was
turned over to the user. The individual user (programmer) was responsible for all machine
set-up and operation, and subsequent clean-up and preparation for the next user. This system was
clearly inefficient and dependent on the varying competencies of the individual programmers as
operators.

First Generation (1951-1956)


The first generation marked the beginning of commercial computing, including the introduction
of Eckert and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701 which was also
known as the Defence Calculator. The first generation was characterized again by the vacuum
tube as the active component technology.

The mode of operation was called “closed shop” and was characterised by the appearance of hired operators who would select the
job to be run, initial program load the system, run the user’s program, and then select another
job, and so forth. Programs began to be written in higher level, procedure-oriented languages,
and thus the operator’s routine expanded. The operator now selected a job, ran the translation
program to assemble or compile the source program, combined the translated object program
along with any existing library programs that the program might need for input to the linking
program, loaded and ran the composite linked program, and then handled the next job in a
similar fashion.
Application programs were run one at a time, and were translated with absolute computer
addresses that bound them to be loaded and run from preassigned storage addresses set by
the translator, obtaining their data from specific physical I/O devices. There was no provision for
moving a program to a different location in storage for any reason. Similarly, a program bound to
specific devices could not be run at all if any of those devices were busy or broken.

The inefficiencies inherent in the above methods of operation led to the development of the
mono-programmed operating system, which eliminated some of the human intervention in
running jobs and provided programmers with a number of desirable functions. The OS consisted
of a permanently resident kernel in main storage, and a job scheduler and a number of utility
programs kept in secondary storage. User application programs were preceded by control or
specification cards (in those days, computer programs were submitted on punched cards) which
informed the OS of what system resources (software resources such as compilers and loaders;
and hardware resources such as tape drives and printers) were needed to run a particular
application.

The systems were designed to be operated as batch processing systems.


These systems continued to operate under the control of a human operator who initiated
operation by mounting a magnetic tape that contained the operating system executable code
onto a “boot device”, and then pushing the IPL (Initial Program Load) or “boot” button to
initiate the bootstrap loading of the operating system. Once the system was loaded, the operator entered
the date and time, and then initiated the operation of the job scheduler program, which read
and interpreted the control statements, secured the needed resources, executed the first user
program, recorded timing and accounting information, and then went back to begin processing
of another user program.

The first generation saw the evolution from hands-on operation to closed shop operation to the
development of mono-programmed operating systems. At the same time, the development of
programming languages was moving away from the basic machine languages; first to assembly
language, and later to procedure oriented languages, the most significant being the development
of FORTRAN by John W. Backus in 1956. Several problems remained, however, the most
obvious was the inefficient use of system resources, which was most evident when the CPU
waited while the relatively slower, mechanical I/O devices were reading or writing program
data. In addition, system protection was a problem because the operating system kernel was not
protected from being overwritten by an erroneous application program.

Second Generation (1956-1964)


The second generation of computer hardware was most notably characterized by transistors
replacing vacuum tubes as the hardware component technology. In addition, some very
important changes in hardware and software architectures occurred during this period. For the
most part, computer systems remained card and tape-oriented systems. Significant use of
random access devices, that is, disks, did not appear until towards the end of the second
generation.
These hardware developments led to enhancements of the operating system. I/O and data
channel communication and control became functions of the operating system, both to relieve
the application programmer from the difficult details of I/O programming, to protect the
integrity of the system, and to provide improved service to users by segmenting jobs and running
shorter jobs first (during “prime time”) while relegating longer jobs to lower priority or night-time
runs. System libraries became more widely available and more comprehensive as new utilities
and application software components were available to programmers.

In order to further mitigate the I/O wait problem, systems were set up to spool the input batch
from slower I/O devices such as the card reader to the much higher-speed tape drive, and
similarly the output from the higher-speed tape to the slower printer. In this scenario, the user
submitted a job at a window, a batch of jobs was accumulated and spooled from cards to tape
“off line,” the tape was moved to the main computer, the jobs were run, and their output was
collected on another tape that later was taken to a satellite computer for off-line tape-to-printer
output. Users then picked up their output at the submission window.
Program processing was, for the most part, provided by large centralized computers operated
under mono-programmed batch processing operating systems.
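The card-to-tape spooling described above is essentially a buffer queue between a slow producer and a fast consumer. A minimal Python sketch of the idea follows; the class and method names are invented for illustration:

```python
from collections import deque

class Spooler:
    """Minimal sketch of spooling: jobs from a slow device are
    buffered in a queue so the fast device never has to wait
    on the slow one directly."""
    def __init__(self):
        self.buffer = deque()

    def read_from_cards(self, job):
        self.buffer.append(job)       # slow device fills the buffer

    def next_for_cpu(self):
        # fast device drains the buffer at its own pace
        return self.buffer.popleft() if self.buffer else None

spool = Spooler()
for job in ["job1", "job2"]:
    spool.read_from_cards(job)
print(spool.next_for_cpu())  # job1
```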

The most significant innovations addressed the problem of excessive central processor delay
due to waiting for input/output operations. Recall that programs were executed by processing
the machine instructions in a strictly sequential order. As a result, the CPU, with its high speed
electronic component, was often forced to wait for completion of I/O operations which involved
mechanical devices (card readers and tape drives) that were orders of magnitude slower. This
problem led to the introduction of the data channel, an integral and special-purpose computer
with its own instruction set, registers, and control unit designed to process input/output
operations separately and asynchronously from the operation of the computer’s main CPU near
the end of the first generation, and its widespread adoption in the second generation.

The second generation was a period of intense operating system development. Also it was the
period for sequential batch processing. But the sequential processing of one job at a time
remained a significant limitation. Thus, there continued to be low CPU utilization for I/O bound
jobs and low I/O device utilization for CPU bound jobs. This was a major concern, since
computers were still very large (room-size) and expensive machines. Researchers began to
experiment with multiprogramming and multiprocessing in their computing services called the
time-sharing system.

Third Generation (1964-1979)


The third generation officially began in April 1964 with IBM’s announcement of its System/360
family of computers. Hardware technology began to use Integrated Circuits (ICs) which yielded
significant advantages in both speed and economy.
Operating system development continued with the introduction and widespread adoption of
multiprogramming. This was marked first by the appearance of more sophisticated I/O buffering
in the form of spooling operating systems, such as the HASP (Houston Automatic Spooling
Priority) system that accompanied IBM's OS/360. These systems worked by introducing two
new systems programs, a system reader to move input jobs from cards to disk, and a system
writer to move job output from disk to printer, tape, or cards. Operation of the spooling system was,
as before, transparent to the computer user who perceived input as coming directly from the
cards and output going directly to the printer.
The idea of taking fuller advantage of the computer’s data channel I/O capabilities continued to
develop. That is, designers recognized that I/O needed only to be initiated by a CPU instruction;
the actual I/O data transmission could take place under the control of a separate and asynchronously
operating channel program. Thus, by switching control of the CPU between the currently
executing user program, the system reader program, and the system writer program, it was
possible to keep the slower mechanical I/O devices running and to minimize the amount of time
the CPU spent waiting for I/O completion. The net result was an increase in system throughput
and resource utilization, to the benefit of both users and providers of computer services.

The spooling operating system in fact had multiprogramming, since more than one program
was resident in main storage at the same time. Later this basic idea of multiprogramming was
extended to include more than one active user program in memory at a time. To accommodate
this extension, both the scheduler and the dispatcher were enhanced. The scheduler became able
to manage the diverse resource needs of the several concurrently active user programs, and
the dispatcher included policies for allocating processor resources among the competing user
programs. In addition, memory management became more sophisticated in order to assure that
the program code for each job, or at least the part of the code being executed, was resident in
main storage.

The third generation was an exciting time, indeed, for the development of both computer
hardware and the accompanying operating system. During this period, the topic of operating
systems became, in reality, a major element of the discipline of computing.

Fourth Generation (1979 – Present)


The fourth generation is characterized by the appearance of the personal computer and the
workstation. Miniaturisation of electronic circuits and components continued and Large Scale
Integration (LSI), the component technology of the third generation, was replaced by Very Large
Scale Integration (VLSI), which characterizes the fourth generation. VLSI with its capacity for
containing thousands of transistors on a small chip, made possible the development of desktop
computers with capabilities exceeding those of machines that filled entire rooms and floors of
buildings just twenty years earlier.
The operating systems that control these desktop machines have brought us back full circle to
the open-shop type of environment, where each user occupies an entire computer for the duration
of a job’s execution. This works better now because the progress made over the years
has made the virtual computer resulting from the operating system/hardware combination so
much easier to use, or, in the words of the popular press, “user-friendly.”
However, improvements in hardware miniaturisation and technology have evolved so fast that we
now have inexpensive workstation-class computers capable of supporting multiprogramming
and time-sharing. Hence the operating systems that support today’s personal computers and
workstations look much like those which were available for the minicomputers of the third
generation.
However, many of these desktop computers are now connected as networked or distributed
systems. Computers in a networked system each have their operating systems augmented with
communication capabilities that enable users to remotely log into any system on the network
and transfer information among machines that are connected to the network. The machines that
make up a distributed system operate as a virtual single-processor system from the user’s point of
view; a central operating system controls and makes transparent the location in the system of the
particular processor or processors and file systems that are handling any given program.

Functions of Operating System


Some typical operating system functions may include managing memory, files, processes, I/O
system & devices, security, etc.

Below are the main functions of an operating system:

1. Process management: Process management helps the OS create and delete processes. It
also provides mechanisms for synchronization and communication among processes.

2. Memory management: The memory management module performs the task of allocation and
de-allocation of memory space to programs that need it.

3. File management: It manages all file-related activities such as organization, storage,
retrieval, naming, sharing, and protection of files.

4. Device Management: Device management keeps track of all devices. The module
responsible for this task is known as the I/O controller. It also performs the tasks of
allocation and de-allocation of devices.

5. I/O System Management: One of the main objectives of any OS is to hide the peculiarities
of hardware devices from the user.

6. Secondary-Storage Management: Systems have several levels of storage, which include
primary storage, secondary storage, and cache storage. Instructions and data must be
stored in primary storage or cache so that a running program can reference them.

7. Security: The security module protects the data and information of a computer system against
malware threats and unauthorized access.

8. Command interpretation: This module interprets commands given by the user and uses
system resources to process those commands.

9. Networking: A distributed system is a group of processors which do not share memory,


hardware devices, or a clock. The processors communicate with one another through the
network.

10. Job accounting: Keeping track of the time and resources used by various jobs and users.

11. Communication management: Coordination and assignment of compilers, interpreters,
and other software resources to the various users of the computer systems.
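The process-management function (item 1 above) can be sketched as a tiny process table that creates and deletes entries. This is an illustrative model only, not how any real kernel stores its process control blocks; the class and field names are invented:

```python
class ProcessTable:
    """Toy sketch of process management: the OS creates and
    deletes processes and tracks each one in a table."""
    def __init__(self):
        self.table = {}
        self.next_pid = 1

    def create(self, program):
        pid = self.next_pid
        self.next_pid += 1
        # a new entry starts READY, waiting to be dispatched
        self.table[pid] = {"program": program, "state": "READY"}
        return pid

    def delete(self, pid):
        self.table.pop(pid, None)

pt = ProcessTable()
pid = pt.create("editor")
print(pid, pt.table[pid]["state"])  # 1 READY
pt.delete(pid)
print(pt.table)  # {}
```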

Features of Operating System (OS)


Here is a list of important features of an OS:

 Protected and supervisor mode
 Disk access and file systems
 Device drivers
 Networking
 Security
 Program execution
 Memory management
 Virtual memory
 Multitasking
 Handling I/O operations
 Manipulation of the file system
 Error detection and handling
 Resource allocation
 Information and resource protection

Advantage of Operating System

 Allows you to hide details of the hardware by creating an abstraction
 Easy to use with a GUI
 Offers an environment in which a user may execute programs/applications
 Makes sure that the computer system is convenient to use
 Acts as an intermediary among applications and the hardware components
 Provides the computer system’s resources in an easy-to-use format
 Acts as an intermediary between all the hardware and software of the system

Disadvantages of Operating System

 If any issue occurs in OS, you may lose all the contents which have been stored in your
system
 Operating system software is quite expensive for small organizations, which adds a
burden on them (for example, Windows)

 It is never entirely secure as a threat can occur at any time

Difference between Firmware and Operating System


Below are the key differences between firmware and an operating system:

Definition: Firmware is a kind of programming that is embedded on a chip in a device and
controls that specific device. An operating system provides functionality over and above that
which is provided by the firmware.

Modifiability: Firmware is encoded by the manufacturer of the IC and cannot be changed. An
OS is a program that can be installed by the user and can be changed.

Storage: Firmware is stored in non-volatile memory. An OS is stored on the hard drive.

Difference between 32-Bit and 64-Bit Operating System


Below are the Key Differences between 32-Bit and 64-Bit Operating System:

Architecture and software:
32-bit: Allows 32 bits of data to be processed simultaneously.
64-bit: Allows 64 bits of data to be processed simultaneously.

Compatibility:
32-bit: 32-bit applications require a 32-bit OS and CPU.
64-bit: 64-bit applications require a 64-bit OS and CPU.

Systems available:
32-bit: Windows XP Professional, Vista, 7, Mac OS X, and Linux.
64-bit: All versions of Windows 8, Windows 7, Windows Vista, Windows XP, Linux, etc.

Memory limits:
32-bit: 32-bit systems can address at most 4 GB of RAM (of which roughly 3.2 GB is usable on 32-bit Windows).
64-bit: 64-bit systems allow a theoretical maximum of about 17 billion GB of RAM.

Chapter 2
Processes and process management

2.1 Process and Thread :

A process is an instance of a program that is being executed. When we run a program, it does not execute directly; the system follows a series of steps to execute it, and a program being executed in this way is known as a process.

A process can create other processes to perform multiple tasks at a time; the created processes are known as child processes, and the creating process is known as the parent process. Each process has its own memory space and does not share it with other processes. A process is known as an active entity.

A process in OS can remain in any of the following states:

o NEW: A new process is being created.


o READY: A process is ready and waiting to be allocated to a processor.
o RUNNING: The program is being executed.
o WAITING: Waiting for some event to happen or occur.
o TERMINATED: Execution has finished.

How do Processes work?

When we start executing the program, the processor begins to process it. It takes the following
steps:

o Firstly, the program is loaded into the computer's memory in binary code after
translation.
o A program requires memory and other OS resources to run. Resources such as registers, a program counter, and a stack are provided by the OS.
o A register can hold an instruction, a storage address, or other data that is required by the process.
o The program counter keeps track of the program sequence.
o The stack holds information on the active subroutines of the program.
o A program may have several instances, and each instance of the running program is known as an individual process.

Features of Process
o Each time we create a process, we need to make a separate system call to the OS. On UNIX-like systems, the fork() system call creates the process.

13
o Each process exists within its own address or memory space.
o Each process is independent and treated as an isolated process by the OS.
o Processes need IPC (Inter-process Communication) in order to communicate with each
other.
o Processes are isolated from one another, so synchronization between processes is generally not required.

What is Thread?

A thread is a subset of a process and is also known as a lightweight process. A process can have more than one thread, and these threads are managed independently by the scheduler. All the threads within one process are interrelated. Threads share common information, such as the data segment, code segment, and open files, with their peer threads, but each thread has its own registers, stack, and program counter.

How does thread work?

As we have discussed, a thread is a subprocess or an execution unit within a process. A process can contain anywhere from a single thread to many threads. A thread works as follows:

o When a process starts, the OS assigns memory and resources to it. All threads within a process share the memory and resources of that process.
o Threads are mainly used to improve the responsiveness of an application. On a single core, only one thread executes at a time, but fast context switching between threads gives the illusion that the threads are running in parallel.
o If a single thread executes in a process, it is known as a single-threaded process, and if multiple threads execute simultaneously, it is known as multithreading.

Types of Threads

There are two types of threads, which are:

1. User Level Thread


As the name suggests, user-level threads are managed entirely by user code, and the kernel is not aware of them.
They are faster and easy to create and manage.
The kernel treats all these threads as a single process and handles them as one process only.
User-level threads are implemented by user-level libraries, not by system calls.

2. Kernel-Level Thread

The kernel-level threads are handled by the Operating system and managed by its kernel. These
threads are slower than user-level threads because context information is managed by the kernel.
To create and implement a kernel-level thread, we need to make a system call.

Features of Thread
o Threads share data, memory, resources, files, etc., with their peer threads within a
process.
o One system call is capable of creating more than one thread.
o Each thread has its own stack and registers.
o Threads can directly communicate with each other as they share the same address space.
o Threads need to be synchronized in order to avoid unexpected scenarios.

Key Differences Between Process and Thread


o A process is independent and is not contained within another process, whereas all threads are logically contained within a process.
o Processes are heavyweight, whereas threads are lightweight.
o A process can exist on its own, as it contains its own memory and other resources, whereas a thread cannot exist on its own.
o Proper synchronization between processes is not required. In contrast, threads need to be synchronized in order to avoid unexpected scenarios.
o Processes can communicate with each other only by using inter-process communication; in contrast, threads can communicate directly with each other, as they share the same address space.

Difference Table Between Process and Thread


Process: A process is an instance of a program that is being executed or processed.
Thread: A thread is a segment of a process, a lightweight process that is managed by the scheduler independently.

Process: Processes are independent of each other and hence don't share memory or other resources.
Thread: Threads are interdependent and share memory.

Process: Each process is treated as a separate process by the operating system.
Thread: The operating system treats all the user-level threads of a process as a single process.

Process: If one process gets blocked by the operating system, the other processes can continue execution.
Thread: If any user-level thread gets blocked, all of its peer threads also get blocked, because the OS treats all of them as a single process.

Process: Context switching between two processes takes much time, as processes are heavy compared to threads.
Thread: Context switching between threads is fast, because they are very lightweight.

Process: The data segment and code segment of each process are independent of the others.
Thread: Threads share the data segment and code segment with their peer threads; hence these are the same for all threads of a process.

Process: The operating system takes more time to terminate a process.
Thread: Threads can be terminated in very little time.

Process: New process creation is more time-consuming, as each new process takes all the resources.
Thread: A thread needs less time for creation.

2.2 The concept of multi-threading


Multithreading allows an application to divide its task into individual threads. With multithreading, the same process or task can be carried out by a number of threads; in other words, there is more than one thread to perform the task. With the use of multithreading, multitasking can be achieved.

The main drawback of single-threaded systems is that only one task can be performed at a time, so to overcome this drawback there is multithreading, which allows multiple tasks to be performed.
For example, in a multithreaded web server, client1, client2, and client3 can all access the server without waiting for one another: in multithreading, several tasks can run at the same time.
In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are handled independently, above the kernel, and are thereby managed without any kernel support. On the other hand, the operating system directly manages kernel-level threads. Nevertheless, there must be some form of relationship between user-level and kernel-level threads.

There are three established multithreading models classifying these relationships:
Many-to-one multithreading model
One-to-one multithreading model
Many-to-many multithreading model
Many-to-one multithreading model:
The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment, easily implemented even on a simple kernel with no thread support.
The disadvantage of this model is that since only one kernel-level thread is scheduled at any given time, it cannot take advantage of the hardware parallelism offered by multithreaded processors or multiprocessor systems. In this model, all thread management is done in user space, and if one thread makes a blocking call, the whole process blocks.

In this model, all user-level threads are associated with a single kernel-level thread.
One-to-one multithreading model
The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship facilitates the running of multiple threads in parallel. However, this benefit comes with a drawback: the creation of every new user thread requires creating a corresponding kernel thread, causing overhead which can hinder the performance of the parent process. The Windows series and Linux operating systems try to tackle this problem by limiting the growth of the thread count.

In this model, each user-level thread is associated with its own kernel-level thread.
Many-to-many multithreading model
In this type of model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends upon the particular application; the developer can create many threads at both levels, though the numbers need not be the same. The many-to-many model is a compromise between the other two models. In this model, if any thread makes a blocking system call, the kernel can schedule another thread for execution. Also, the complexity of the previous models is reduced. Though this model allows the creation of multiple kernel threads, true concurrency still cannot be achieved on a single processor, because the kernel can schedule only one thread at a time.

The many-to-many model associates several user-level threads with the same or a smaller number of kernel-level threads.

Inter-Process Communication
In general, inter-process communication (IPC) is a mechanism provided by the operating system (OS). The main goal of this mechanism is to provide communication between several processes. In short, IPC allows one process to let another process know that some event has occurred.

Let us now look at the general definition of inter-process communication, which explains the same thing that we have discussed above.

Definition
"Inter-process communication is used for exchanging useful information between numerous threads in one or more processes (or programs)."

Need interprocess communication.


There are numerous reasons to use inter-process communication for sharing data. Here are some of the most important ones:
o It supports modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps processes communicate with each other and synchronize their actions
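As a hedged sketch of one common IPC mechanism, a pipe, using Python's multiprocessing module (the child function and message text are illustrative; the "fork" start method assumes a POSIX system):

```python
import multiprocessing as mp

def child(conn):
    # Child process: receive a message over the pipe, transform it, reply.
    msg = conn.recv()
    conn.send(msg.upper())
    conn.close()

# The POSIX "fork" start method lets this run without a __main__ guard.
ctx = mp.get_context("fork")
parent_end, child_end = ctx.Pipe()
p = ctx.Process(target=child, args=(child_end,))
p.start()
parent_end.send("event occurred")   # let the other process know an event happened
reply = parent_end.recv()
p.join()
print(reply)  # EVENT OCCURRED
```

Because the two processes have separate address spaces, the pipe (rather than a shared variable) is what carries the data between them.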

Race Condition

A race condition is a situation in which a critical section (a part of the program where shared memory is accessed) is concurrently executed by two or more threads. It leads to incorrect behavior of a program.

In layman's terms, a race condition is a condition in which two or more threads compete to get certain shared resources.

For example, if thread A is reading data from a linked list while another thread B is trying to delete the same data, this is a race condition that may result in a run-time error.

Types of Race Condition :

There are two types of race conditions:

1. Read-modify-write
2. Check-then-act

The read-modify-write pattern means that more than one thread first reads a variable, then alters the value and writes it back. This race occurs when two or more threads operate on the same object without proper synchronization, so that their operations interleave with each other.

Example of Race condition

Suppose there are two processes, A and B, executing on different processors. Both processes try to call the function bankAccount() concurrently. The value of the shared variable that we pass to the function is 1000.

Consider that A calls the function bankAccount(), passing the value 200 as a parameter. In the same way, process B also calls the function bankAccount(), passing the value 100 as a parameter.

The result looks as follows:

o Process A loads 1000 into its CPU register.
o Process B also loads 1000 into its register, before A has written its result back.
o Process A adds 200 to its register; its result is 1200.
o Process B adds 100 to its register; its result is 1100.
o Process A stores 1200 in the shared variable, and then process B stores 1100, overwriting A's update. The deposit of 200 is lost, instead of the correct final value of 1300.
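The lost-update interleaving above can be shown deterministically. This sketch simulates the two processes' reads and writes in the unfavourable order, rather than relying on real thread timing (which is hard to reproduce):

```python
# Deterministic simulation of the read-modify-write race: both "processes"
# read the shared balance before either writes back, so one update is lost.
balance = 1000

a_local = balance          # A reads 1000
b_local = balance          # B reads 1000 (before A writes back!)

a_local += 200             # A computes 1200
b_local += 100             # B computes 1100

balance = a_local          # A writes 1200
balance = b_local          # B writes 1100, overwriting A's deposit

print(balance)  # 1100, not the expected 1300
```

Any schedule in which both reads happen before either write produces a lost update; the fix is to make the read-modify-write sequence atomic.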

Check-then-act.

This race condition happens when two processes check a value on which each will then take an external action. Both processes check the value, but only one of them can actually take the value; the later process reads it as null. This results in a potentially out-of-date or unavailable observation being used to determine what the program does next. For example, if a map application runs two processes simultaneously that require the same location data, one takes the value first, so the other can't use it; the later process reads the data as null.
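A check-then-act race can be sketched deterministically as follows; the cache, the take_location helper, and the coordinates are hypothetical illustrations of the map example above:

```python
# Both workers pass the "check" while the value is present, but only the
# first "act" gets the value; the second reads it back as missing (None).
cache = {"location": (51.5, -0.1)}   # shared location data

def take_location(worker_checked: bool):
    # The "act" happens some time after the "check"; the value may be gone.
    if worker_checked:
        return cache.pop("location", None)
    return None

# Both workers check while the key is still present...
w1_checked = "location" in cache
w2_checked = "location" in cache

# ...but the first pop removes the entry, so the second acts on a stale check.
first = take_location(w1_checked)    # (51.5, -0.1)
second = take_location(w2_checked)   # None
print(first, second)
```

The check and the act must be performed as one atomic step (e.g. under a lock) to close this window.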

Avoiding Race Condition :

There are the following two solutions to avoid race conditions:

o Mutual exclusion
o Synchronizing the processes

The first solution is mutual exclusion: if one thread is using a shared variable or resource, other threads are excluded from doing the same thing at the same time.

To prevent race conditions, one should ensure that only one process can access the shared data at a time. This is the main reason why we need to synchronize the processes: a given part of the program can then be executed by only one thread at a time.
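A minimal sketch of mutual exclusion using a lock (Python's threading.Lock; the deposit function and amounts are illustrative, matching the bank-account example above):

```python
import threading

balance = 1000
lock = threading.Lock()   # mutual exclusion: one thread in the CS at a time

def deposit(amount: int) -> None:
    global balance
    with lock:            # entry section: acquire; exit section: release
        local = balance   # the read-modify-write is now atomic with respect
        balance = local + amount  # to every other thread using the same lock

threads = [threading.Thread(target=deposit, args=(a,)) for a in (200, 100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 1300: both deposits survive, in either order
```

With the lock, no interleaving can place both reads before both writes, so the lost update cannot occur.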

2.3.2 Critical Sections and mutual exclusion

A critical section (CS) is a section of code in which a process accesses shared resources.

It is a section of code, or a collection of operations, in which only one process may be executing at a given time and which we want to make "sort of" atomic. Atomic means an operation either happens in its entirety (everything happens at once) or not at all; i.e., it cannot be interrupted in the middle. Atomic operations are used to ensure that cooperating processes execute correctly. Mutual exclusion mechanisms are used to solve the critical section problem.

2.4 Process Scheduling


2.4.1 Pre-emptive and non-pre-emptive scheduling
2.4.2 Scheduling policies
Scheduling is a fundamental function of an OS. When a computer is multi-programmed, it has multiple processes competing for the CPU at the same time. If only one CPU is available, then a choice has to be made regarding which process to execute next. This decision-making process is known as scheduling, and the part of the OS that makes this choice is called the scheduler. The algorithm it uses in making this choice is called the scheduling algorithm.

Scheduling queues: As processes enter the system, they are put into a queue called the job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.

This queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue. The list of processes waiting for a particular I/O device is kept on a list called the device queue; each device has its own device queue. A new process is initially put in the ready queue, where it waits until it is selected for execution and given the CPU.

SCHEDULERS:

A process migrates between the various scheduling queues throughout its lifetime. The OS must select processes from these queues for scheduling in some fashion; this selection is carried out by the appropriate scheduler. In a batch system, more processes are submitted than can be executed immediately, so these processes are spooled to a mass-storage device such as a disk, where they are kept for later execution.
Types of schedulers:

There are 3 types of schedulers mainly used:

1. Long-term scheduler: The long-term scheduler selects processes from the disk and loads them into memory for execution. It controls the degree of multiprogramming, i.e., the number of processes in memory. It executes less frequently than the other schedulers. If the degree of multiprogramming is stable, then the average rate of process creation is equal to the average departure rate of processes leaving the system, so the long-term scheduler needs to be invoked only when a process leaves the system. Due to the longer intervals between executions, it can afford to take more time to decide which process should be selected for execution. Most processes are either I/O bound or CPU bound. An I/O-bound process (such as an interactive 'C' program) is one that spends more of its time doing I/O operations than doing computations. A CPU-bound process is one that spends more of its time doing computations than I/O operations (such as a complex sorting program). It is important that the long-term scheduler select a good mix of I/O-bound and CPU-bound processes.
2. Short-term scheduler: The short-term scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them. The primary distinction between these two schedulers is the frequency of their execution. The short-term scheduler must select a new process for the CPU quite frequently, at least once every 100 ms. Due to the short duration of time between executions, it must be very fast.

3. Medium-term scheduler: Some operating systems introduce an additional, intermediate level of scheduling known as the medium-term scheduler. The main idea behind this scheduler is that sometimes it is advantageous to remove processes from memory and thus reduce the degree of multiprogramming. At some later time, the process can be reintroduced into memory, and its execution can be continued from where it left off. This scheme is called swapping: the process is swapped out and later swapped in by the medium-term scheduler. Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted the available memory, requiring some memory to be freed up.

Process control block:


Each process is represented in the OS by a process control block, also known as a task control block. A process control block contains many pieces of information associated with a specific process, including the following.
Process state: The state may be new, ready, running, waiting or terminated state.
Program counter: It indicates the address of the next instruction to be executed for this process.
CPU registers: The registers vary in number and type depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. This state must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU scheduling information: This information includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
Memory management information: This information may include the values of the base and limit registers, the page tables, or the segment tables, depending upon the memory system used by the operating system.
Accounting information: This information includes the amount of CPU and real time used,
time limits, account number, job or process numbers and so on.
I/O Status Information: This information includes the list of I/O devices allocated to this
process, a list of open files and so on. The PCB simply serves as the repository for any
information that may vary from process to process.
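The PCB fields listed above can be modelled as a small structure; the field names below are simplified, illustrative stand-ins rather than a real OS definition:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy model of a process control block."""
    pid: int
    state: str = "NEW"            # NEW / READY / RUNNING / WAITING / TERMINATED
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: int = 0        # accounting information

pcb = PCB(pid=42)
pcb.state = "READY"               # e.g. the scheduler moves it to the ready queue
print(pcb.pid, pcb.state)
```

A real kernel would also keep the pointer to the next PCB in its queue, which is how the linked-list ready queue described above is formed.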

CPU Scheduling Algorithm:


CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU first. Four common CPU scheduling algorithms are described below.
1. First Come, First Served (FCFS) Scheduling Algorithm: This is the simplest CPU scheduling algorithm. In this scheme, the process which requests the CPU first is allocated the CPU first. The implementation of the FCFS algorithm is easily managed with a FIFO queue: when a process enters the ready queue, its PCB is linked onto the rear of the queue. The average waiting time under the FCFS policy is often quite long.
Example :

Process CPU time
P1 3
P2 5
P3 2
P4 4
Using the FCFS algorithm, find the average waiting time and average turnaround time if the order is P1, P2, P3, P4.


Solution: If the process arrived in the order P1, P2, P3, P4 then according to the FCFS the
Gantt chart will be:

P1 P2 P3 P4
0 3 8 10 14

The waiting times are P1 = 0, P2 = 3, P3 = 8, P4 = 10, and the turnaround times are P1 = 0 + 3 = 3, P2 = 3 + 5 = 8, P3 = 8 + 2 = 10, P4 = 10 + 4 = 14.
Then average waiting time = (0 + 3 + 8 + 10)/4 = 21/4 = 5.25

Average turnaround time = (3 + 8 + 10 + 14)/4 = 35/4 = 8.75

The FCFS algorithm is non-pre-emptive: once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
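The FCFS computation above can be sketched in a few lines of code (a toy model that assumes all processes arrive at time 0, as in the example):

```python
def fcfs(bursts):
    """Waiting and turnaround times for processes run in arrival order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)     # time spent in the ready queue so far
        clock += burst
        turnaround.append(clock)  # completion time (all arrive at t = 0)
    return waiting, turnaround

w, t = fcfs([3, 5, 2, 4])         # P1..P4 from the example
print(w, sum(w) / len(w))         # [0, 3, 8, 10] 5.25
print(t, sum(t) / len(t))         # [3, 8, 10, 14] 8.75
```

The averages match the hand calculation above: 5.25 ms waiting and 8.75 ms turnaround.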

2. Shortest Job First (SJF) Scheduling Algorithm: This algorithm associates with each process the length of its next CPU burst, and when the CPU is available it is assigned to the process with the smallest next burst. This scheduling is also known as shortest-next-CPU-burst scheduling, because scheduling is done by examining the length of the next CPU burst of a process rather than its total length. Consider the same four processes:

Process CPU time
P1 3
P2 5
P3 2
P4 4

Solution: According to SJF, the Gantt chart will be

P3 P1 P4 P2
0 2 5 9 14

The waiting times are P3 = 0, P1 = 2, P4 = 5, P2 = 9, and the turnaround times are P3 = 0 + 2 = 2, P1 = 2 + 3 = 5, P4 = 5 + 4 = 9, P2 = 9 + 5 = 14.

Then the average waiting time = (0 + 2 + 5 + 9)/4 = 16/4 = 4

Average turnaround time = (2 + 5 + 9 + 14)/4 = 30/4 = 7.5
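Non-preemptive SJF amounts to sorting by burst length and then proceeding as in FCFS. A sketch reproducing the example (all processes assumed to arrive at time 0):

```python
def sjf(procs):
    """Non-preemptive SJF: procs maps name -> burst, all arriving at t = 0."""
    order = sorted(procs, key=procs.get)   # shortest burst first
    waiting, clock = {}, 0
    for name in order:
        waiting[name] = clock              # waits until every shorter job is done
        clock += procs[name]
    return order, waiting

order, w = sjf({"P1": 3, "P2": 5, "P3": 2, "P4": 4})
print(order)                               # ['P3', 'P1', 'P4', 'P2']
print(w, sum(w.values()) / len(w))         # average waiting time 4.0
```

SJF is provably optimal for average waiting time when all jobs are available at once, which is why its 4.0 ms average beats the 5.25 ms of FCFS on the same workload.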

The SJF algorithm may be either pre-emptive or non-pre-emptive. The pre-emptive SJF is also known as shortest remaining time first. Consider the following example.

Process Arrival Time CPU time


P1 0 8
P2 1 4
P3 2 9
P4 3 5
In this case the Gantt chart will be

P1 P2 P4 P1 P3
0 1 5 10 17 26

The waiting times for the processes are:
P1 = 10 - 1 = 9

P2 = 1 – 1 = 0

P3 = 17 – 2 = 15

P4 = 5 – 3 = 2

The average waiting time = (9 + 0 + 15 + 2)/4 = 26/4 = 6.5
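The pre-emptive example can be checked with a tick-by-tick simulation; this sketch assumes time advances in 1 ms units and picks the arrived process with the least remaining time at each tick:

```python
def srtf(procs):
    """Shortest remaining time first: procs maps name -> (arrival, burst)."""
    remaining = {n: b for n, (a, b) in procs.items()}
    completion, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= clock]
        # run the arrived process with the least remaining time for one tick
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            del remaining[current]
            completion[current] = clock
    # waiting = turnaround - burst = (completion - arrival) - burst
    return {n: completion[n] - procs[n][0] - procs[n][1] for n in procs}

w = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(w, sum(w.values()) / len(w))  # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} 6.5
```

The simulation reproduces the Gantt chart above: P1 is pre-empted at t = 1 when P2 arrives with a shorter burst, and the average waiting time is 6.5 ms.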

3. Priority Scheduling Algorithm: In this scheduling, a priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS manner. Consider the following example (a smaller priority number means a higher priority):

Process CPU time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

According to priority scheduling, the Gantt chart will be

P2 P5 P1 P3 P4
0 1 6 16 18 19

The waiting times for the processes are:

P1 = 6

P2 = 0

P3 = 16

P4 = 18

P5 = 1

The average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 41/5 = 8.2

4. Round Robin Scheduling Algorithm: This type of algorithm is designed for time-sharing systems. It is similar to FCFS scheduling, with pre-emption added to switch between processes. A small unit of time, called the time quantum or time slice, is used to switch between the processes. The average waiting time under the round-robin policy is often quite long. Consider the following example:

Process CPU time


P1 3
P2 5
P3 2
P4 4
Time Slice = 1 millisecond.

P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P4 P2 P4 P2
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14

The waiting time for process

P1 = 0 + (4 – 1) + (8 – 5) = 0 + 3 + 3 = 6

P2 = 1 + (5 – 2) + (9 – 6) + (11 – 10) + (13 – 12) = 1 + 3 + 3 + 1 + 1 = 9

P3 = 2 + (6 – 3) = 2 + 3 = 5

P4 = 3 + (7 – 4) + (10 – 8) + (12 – 11) = 3 + 3 + 2 + 1 = 9

The average waiting time = (6 + 9 + 5 + 9)/4 = 29/4 = 7.25
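The round-robin example can be simulated with a FIFO ready queue. This sketch assumes all processes arrive at time 0 and uses the 1 ms quantum from the example:

```python
from collections import deque

def round_robin(procs, quantum=1):
    """Round robin: procs is a list of (name, burst), all arriving at t = 0."""
    remaining = dict(procs)
    queue = deque(n for n, _ in procs)
    completion, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)        # back to the tail of the ready queue
        else:
            completion[name] = clock
    # waiting time = completion - burst, since every process arrived at t = 0
    return {n: completion[n] - b for n, b in procs}

w = round_robin([("P1", 3), ("P2", 5), ("P3", 2), ("P4", 4)])
print(w, sum(w.values()) / len(w))    # {'P1': 6, 'P2': 9, 'P3': 5, 'P4': 9} 7.25
```

The simulation reproduces the Gantt chart above (P3 finishes at 7, P1 at 9, P4 at 13, P2 at 14), giving an average waiting time of 7.25 ms.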

Process Synchronization:
A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes may either directly share a logical address space or be allowed to share data only through files. Coordinating such concurrent access is known as process synchronization.
Critical Section Problem:
Consider a system consisting of n processes (P0, P1, ..., Pn-1). Each process has a segment of code, known as its critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that when one process is executing in its critical section, no other process is allowed to execute in its critical section: the execution of critical sections by the processes is mutually exclusive. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section; the section of code implementing this request is the entry section. The critical section is followed by an exit section, and the remaining code is the remainder section. A solution must satisfy the following three requirements:

1. Mutual exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Semaphores:
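A semaphore is an integer variable that, apart from initialization, is accessed only through two atomic operations: wait (P), which decrements the count and blocks if it is zero, and signal (V), which increments it and may wake a blocked waiter. As a hedged sketch using Python's threading.Semaphore (the resource counters below are illustrative):

```python
import threading

# A counting semaphore initialized to 2: at most two threads may hold a
# "slot" (e.g. one of two identical resource instances) at the same time.
sem = threading.Semaphore(2)
in_use, peak = 0, 0
guard = threading.Lock()   # protects the bookkeeping counters themselves

def use_resource():
    global in_use, peak
    sem.acquire()          # the classic wait()/P() operation
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    with guard:
        in_use -= 1
    sem.release()          # the classic signal()/V() operation

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent holders:", peak)  # never exceeds 2
```

A semaphore initialized to 1 (a binary semaphore) behaves like a mutual-exclusion lock for the critical-section problem above.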

2.5 Dead lock


Deadlock is a situation involving the interaction of more than one resource and process with each other. When a process requests a resource that is being held by another process, which in turn needs a resource held by the first process in order to continue, the result is a deadlock. Deadlock prevention works by preventing one of the four Coffman conditions from occurring. Removing the mutual exclusion condition means that no process will have exclusive access to a resource; algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
The following four conditions must hold for there to be a deadlock:
Mutual exclusion condition: each resource is assigned to exactly one process.
Hold and wait condition: processes holding resources can request additional resources.
No preemption condition: previously granted resources can't be forcibly taken away; only the process holding a resource can voluntarily release it.
Circular wait condition: there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
One mechanism for attacking deadlock is to try to negate some of these conditions: deadlock avoidance.

Deadlock Avoidance
Deadlock can be avoided by careful resource scheduling. This requires that the system have some additional prior information available. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.

Operating System Page 31


The deadlock avoidance algorithm dynamically examines the resource allocation state to ensure
that there can never be a circular wait.
Resource allocation state is defined by the number of available and allocated resources, and the
maximum demands of the process.
Deadlock Prevention
Deadlock can be prevented by scheduling resources so as to negate at least one of the four conditions:
Attacking the mutual exclusion condition
Attacking the hold and wait condition
Attacking the no preemption condition
Attacking the circular wait condition
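One common way of attacking the circular wait condition is to impose a total order on the resources and make every process acquire them in that order, so no circular chain can form. A hedged sketch (the transfer helper and ordering by id() are illustrative choices, not a prescribed API):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second, work):
    # Sort the two locks into one fixed global order before acquiring them.
    # Even if callers name the locks in opposite orders, both threads acquire
    # them in the same sequence, so the circular wait condition cannot hold.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            work()

results = []
t1 = threading.Thread(target=transfer,
                      args=(lock_a, lock_b, lambda: results.append("t1")))
t2 = threading.Thread(target=transfer,
                      args=(lock_b, lock_a, lambda: results.append("t2")))
t1.start(); t2.start()
t1.join(); t2.join()       # completes without deadlock
print(sorted(results))     # ['t1', 't2']
```

Without the ordering step, the (lock_a, lock_b) and (lock_b, lock_a) acquisition orders could each hold one lock while waiting for the other, forming exactly the circular chain described above.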

