OSY Board Questions With Answers

Chapter-1

1. Define operating system?


 It is the interface between a User of the Computer, and the Computer Hardware.
 It controls and coordinates the use of the Hardware among the various Application
Programs, for the various Users.
 The Purpose of an Operating System (OS) is to provide an environment, in which the User
may execute Programs.
 The Primary goal of an Operating System is to make the Computer System convenient to use.
 And, the Secondary Goal of the Operating System is to use the Computer Hardware, in an
efficient manner.

2. Describe the computer system architecture with neat diagram?

User
Application Program
Operating System
Hardware

 An Operating System is an important part of almost every Computer System. The Computer
System can roughly be divided into Four components:
1) Hardware:
It provides the Basic Computing Resources.
2) Operating System:
It provides the Interface, or the means for the proper use of the Resources, in the
operation of the Computer System.
3) Application Program:
It defines the ways in which these Resources are used to solve the Computing Problems of
the User.
4) Users:
Users are classified into Three Categories:
i. Programmer:
 The One who creates the Software.
ii. Operational Users:
 The one who carries out Installation and Maintenance Process.
iii. End- Users:
 Any User of the System.
3. Describe the operating system operations?
1) Program Management:
 A Program gets executed in the Main Memory with the help of Resources; it requires
Resources to complete its Execution.
 Scheduling of the Programs is done by the Operating System, so that the Programs do not
interfere with each other's activities.
 Proper Synchronization is carried out.
 The Operating System is responsible for providing access control, with respect to Single
Mode or Dual Mode operation.
2) Resource Management:
 A System is a set of Resources, so their management is necessary.
 The Operating System is responsible for managing the Resources. The major Resource
Types in focus are Time and Space.
 For Time Management, it decides which Program should be allocated the CPU, for what
purpose, and for how much Time.
 Space Management decides what amount of Space is necessary, and how much Space is to
be allocated for a Program to execute.
3) Protection and Security:
 Protection is a Mechanism for controlling the access of Programs, or Users, to the
Resources.
 Security refers to the Defense System against the Internal and External Attacks.

4. Explain Sequential or Serial processing with neat diagram, advantages and disadvantages.
 It is a technique in which Jobs are executed sequentially, one after the other.
 Early Computers were very large machines, which were run from the Console. The
Programmer would write a Program, and then would operate the Program directly from the
Console.
 Firstly, the program would be loaded manually into the Memory, either from Paper
Tape, or from Punch Cards. Then, with the help of appropriate Buttons, Execution of the
Program would start. If some error occurred, the Programmer could halt the Program,
examine the Contents of the Memory and Registers, and debug the Program directly from
the Console.
 The Output was printed either on the Paper Tapes, or the Punch Cards.
 Only after this whole process was accomplished could the programmer start the
Execution of the next job.
 Thus, the jobs were processed sequentially, one after the other. This process was Time-
Consuming: the User was not able to use the Hardware Resources efficiently, and the
CPU could sit Idle for long periods of time.
 For example, suppose the User has to load three Programs: a Cobol Program, then a
Fortran Program, and then a Cobol Program again.
 Then, for the First Program, the Cobol Compiler, with other Resources, was loaded. After
the First Program was executed, the Cobol Compiler, with its other Resources, was
unloaded.
 Then, the Fortran Compiler, and the other Resources, were loaded.
 After finishing the Fortran Program, its Resources were unloaded, and the Cobol Resources
were loaded once again. This process continued for all the Programs.
 This increases the Job Setup Time, and the CPU sits Idle for long durations. But, it
provides the fastest Computation Time.

5. Explain batch operating system with neat diagram.


 The major problem with Sequential, or Serial, Processing was the high Job-Setup Time;
the goal was to reduce it.
 For example, suppose the three Programs of the previous example were grouped together;
the Setup Time could then be reduced.

 If the Machine Time were divided sequentially among all the three Programs, the work of
loading and unloading the Compilers would be repeated.
 Instead of this, in a Batch Operating System, the Jobs with similar needs are batched together,
and executed as a Group, thus reducing the Setup Time required for the Jobs.
 In the above case, the Cobol Compiler, with its other Resources, will be loaded only once,
so less Setup Time is required.
 This type of Batching improves the Job Setup Time, and also improves the CPU
Utilization. But, the major drawback of a Batch Operating System is that the CPU sits idle
during the transition from one Batch of Processes to the other.
6. Explain Multiprogramming system with neat diagram, advantages and disadvantages.
 Batch Operating System improves the performance of a Single User- Access System, but
still, a Single User cannot keep the CPU, or the I/O Devices busy, at all the time.
 So, an attempt of Multiprogramming was developed. It is an attempt to increase the CPU
Utilization, by always giving the CPU some Process to execute.
 The Idea behind the approach of Multiprogramming is as follows:
 The Operating System keeps several Jobs in the Memory, and begins to execute one of
them. Eventually, the Job being executed may have to wait for some other Task, such as an
I/O Operation to complete.
 When the Job has to wait, the Operating System switches to execute another Job.
 Again, when this particular Job needs to wait for some I/O Operation, the Operating
System switches to execute some other Job, and so on. On the other hand, when the First
Job finishes its I/O Operation, it gets allocated the CPU again.
 In this way, there is always at least one job allocated to the CPU, and the CPU never sits
idle.
 Multiprogramming Operating System increases the complexity of the System, as there
are several processes active in the Memory.
 So, the Operating System has to perform some CPU Scheduling, to schedule one job at a
time, in a way to improve the CPU Utilization.
 The Operating System has to apply some Security Mechanisms, so that two processes
cannot interfere in the activities of each other.

7. Explain Time sharing system with neat diagram.


 Time Sharing is a Logical Extension of Multiprogramming and Multitasking.
 A Time Sharing System allows many Users to share the System simultaneously. Since
each Action, or Command, in a Time Sharing System requires only a short burst of CPU
Time, the CPU switches rapidly from one user to the next.
 Each user is given the impression that the entire System is dedicated to their use, even
though it is shared by many users.

8. Explain Multiprocessor system with its types, advantages and disadvantages.


 It means a System with multiple Processors. It is also called a “Parallel
System”, or a “Tightly Coupled System”. Such Systems have more than one Processor in
close communication, sharing the Computer Bus, Clock, Memory, Peripheral Devices, etc.
 Multiprocessor Systems have 3 main Advantages.
1) Increased Throughput.
By increasing the number of Processors, we get more work done in less Time.
2) Economy of Scale.
These Systems can cost less than multiple Single-Processor Systems, because
they can share Resources. If several Programs operate on the same set
of Data, it is cheaper to store the Data once, and to have all the Processors share it.
3) Increased Reliability.
If the Functions can be distributed properly among several Processors, then the failure of
one Processor will not halt the entire System, but only slow it down. For example, if we
have 10 Processors, and one of them fails, then each of the remaining nine Processors must
pick up a share of the work of the failed Processor. Thus, the entire System runs only 10%
slower, rather than failing altogether.
 Disadvantages of Multiprocessor System are as follows:
1. Scheduling and Security are more complex.
2. Mechanisms need to be implemented so that the processes do not interfere in the
activities of each other, and also use the Resources in an efficient manner.
 Types of Multiprocessor System:
There are 2 types of Multiprocessor System, namely:
1. Symmetric Multiprocessor System:
 In this type of Multiprocessor System, each Processor runs an identical copy of the Operating System.
 Each Processor performs all the tasks within the System. The Processes communicate
with one another, as needed.
 Peer- To- Peer (Peers) Relationship exists among the Processors.
2. Asymmetric Multiprocessor System:
 In this type of Multiprocessor System, each Processor is assigned with a specific Task.
 A Master Processor controls the System, and the other Processors either look to the
Master Processor for the instructions, or have Pre- Defined Tasks.
 This Scheme defines the Master- Slave Relationship, where the Master Processor
schedules the work to be given to a Slave Processor.

9. Explain Real time systems with its types.


 Real- Time Systems are used in environments where a large number of events, mostly
external, must be accepted and processed.
 Such Applications include Industrial Control, Flight Control, Military Applications, etc.
 A Real- Time System has well defined, fixed Time Constraints. Processing must be done
within the defined Constraints of the System, otherwise the System will fail.
 The Real- Time System functions correctly, only if it returns the correct Results, within it’s
Time Constraints.
 The Primary Objective of Real- Time Systems is to provide quick Event- Response Time,
and thus, to meet the Scheduled deadline.
 The Real- Time Systems are classified into:
1. Hard Real- Time Systems:
This System guarantees that the Critical Tasks are completed on time.
2. Soft Real- Time Systems:
It is a less restrictive type of System, where a Critical Real- Time Task gets a higher
Priority, over the other Tasks, and retains that priority, until it completes.

10. Explain Mobile OS


 The Android Operating System is an Open- Source Operating System, developed by the OHA
(Open Handset Alliance), led by Google, especially for Mobile Devices.
 It provides a set of various core applications, like Email, SMS, Calendar, Maps, Browser,
etc., and many more.
 The Android architecture consists of Applications, Application Framework, Libraries,
Android Runtime DVM (Dalvik Virtual Machine), and Linux Kernel.
11. Differentiate between DOS and UNIX O.S.
DOS vs. Unix Operating System:
1. DOS is only used in x86-based computer systems; Unix is used in all types of computer systems.
2. DOS is a single-tasking, command-line OS; Unix is a multitasking OS.
3. DOS was first released in 1981; Unix was first released in November 1971.
4. DOS is an abbreviation for Disk Operating System; Unix derives from Uniplexed Information and Computing System (UNICS).
5. DOS consumes low power; Unix consumes high power.
6. DOS is not case-sensitive; Unix is case-sensitive.
7. DOS uses backslashes in paths; Unix uses forward slashes.
8. DOS operates from a hard disk device; Unix systems were installed after being obtained from the original AT&T Unix.
9. DOS contains batch files; Unix contains shell files (scripts).
10. DOS has three proprietary versions (IBM DOS, MS-DOS, and DR-DOS) and one free version (FreeDOS); Unix has numerous proprietary and free, open-source implementations.
11. DOS is mainly used in embedded systems; Unix is mainly used in servers.
12. DOS has neither virtual memory nor protected memory; Unix usually has both virtual memory and protected memory.

12. Describe Distributed systems with its types.


 A Network is a Communication Path, between two or more Communication Systems.
 Distributed Systems depend upon Networking for their functionality.
 Being able to communicate, Distributed Systems are able to share Computational Tasks, and
provide a rich set of Features, to the Users.
 The Distributed Systems are classified into two parts, as follows:
1. Client- Server System:
 These Systems maintain a Client- Server Architecture, where the Server Systems are
broadly categorized as:
i.) Compute Server:
 The Compute Server Systems provide an interface, to which the Clients can send
Request, to perform an action, in response to which they can execute the Action,
and send back the Result to the Clients.
ii.) File Server:
 File Server Systems provide a File System Interface, where the Clients can
Create, Update, Read, Delete, and can perform many such Operations.
2. Peer- To- Peer System:
 In Peer- To- Peer System, the Computers on a Network communicate with each other
as Equals. Each Computer is responsible for making it’s own resources available to
others, on the Network.
 It is also responsible for the Security of it’s Resources.
 Each Computer is responsible for accessing the Network Resources they need from
other Computers, in the Network.
 These Systems are also referred to as “Loosely Coupled System”.

13. Describe Clustered systems with its types.


 These Systems gather together multiple CPUs, to accomplish a Computational Work.
 Clustered Systems differ from Parallel Systems, as they are composed of two or more
individual Systems, coupled together.
 The Clustered Systems share storage, and are closely linked via a LAN (Local Area Network).
 Clustering can be done in two ways, which are:
1. Asymmetric Clustering:
 In Asymmetric Clustering, one Machine is running in Hot- Standby Mode, while the
others are running the Application.
 The Hot- Standby Machine does nothing but monitor the Active Server; if that
Server fails, the Hot- Standby Machine becomes the Active Server.
2. Symmetric Clustering:
 In Symmetric Clustering, two or more Systems are running the Applications, sharing the
resources, and they are monitoring each other.
 This works in an efficient way, as it uses all the available resources, without interfering in
the Activities of each other.
Chapter-2
1. List all and explain any 3 services of Operating system?
1.) Program Execution,
2.) Input- Output Operations,
3.) File System Manipulation,
4.) Communication,
5.) Error- Detection,
6.) Resource Allocation,
7.) Accounting, and
8.) Protection.

1.) Program Execution: The User needs to execute many Programs, so the System must be
able to load a Program into the Memory, and run it. The Program must be able to end its
Execution, either normally, or abnormally.
2.) Input- Output (I/O) Operations: A running program may require an Input from a
Keyboard, or from a File, or from any other Input devices. Similarly, the Program may also
produce an Output to be displayed on the Screen, to a File, or to some Output devices. Since
the User program cannot execute the I/O Operations directly, the Operating System must
provide some means to do so.
3.) File System Manipulation: A program needs to access the File System, so that it
can read and write Files, and create or delete Files and Directories, identified by
their names.
4.) Communication: One process may need to exchange information with another process. The
other process may be on the same machine, or on some other machine that belongs to the
same network, or to a different network. Communication can be implemented via Shared
Memory, or via the Message Passing Technique.
5.) Error- Detection: The Operating System needs to be constantly aware of possible
errors. Errors may occur in the CPU and Memory Hardware, in I/O Devices, in User
Programs, etc. For each type of Error, the Operating System should take appropriate
action, to provide consistent Computing.
6.) Resource Allocation: When there are multiple processes running at the same time, the
Resources must be allocated to each of them. Many different types of Resources are
managed by the Operating System, and have a general request, or Release Code.
7.) Accounting: The Operating System keeps track of which user uses how much, and which
kind of, Computer Resources. This Record Keeping may be used for accounting (so that the
Users can be billed), or simply for accumulating Usage Statistics. Usage Statistics may be
a valuable tool in trying to configure the System to improve the Computing Services.
8.) Protection: The owners of the information may want to control the use of the information,
when several processes execute concurrently. It should not be possible for one process to
interfere with others, or with the Operating System itself. Protection involves ensuring that
all access to the System Resources, and the Data, is controlled. This can be done either
by means of Passwords, or by granting Access Permissions.

2. Explain the different O.S services in detail.


Refer the above question for the Answer.
3. List all system components and explain any 1 in detail.
 A System as large and complex as an Operating System can be created only by partitioning it
into smaller parts. Each part is a well- designed portion of the System, with designated
Inputs, Outputs, and Functions.
 The different System Components are as follows:
1.) Process Management,
2.) Main Memory Management,
3.) File Management,
4.) I/O System Management, and
5.) Secondary Storage Management.

1. Process Management:
 A Process is an instance of a Program in its Execution. It needs certain Resources, like
CPU Time, I/O Devices, etc., to accomplish its Tasks. These Resources are provided
when the Process is created, or allocated while it is running. In addition to the various
Physical and Logical Resources that a Process obtains, various Initialization Inputs may
be passed along.
 For example, consider a Process whose function is to display the status of the File or the
Terminal. This Process will be given Input as the name of the file, and will execute the
appropriate System Calls, to obtain and display the desired information on the Terminal.
The Operating System will reclaim any Reusable Resources.
 A Process is a unit of Work in a System. Such a System consists of a collection of
processes, some of which are Operating System Processes (those which execute the
System Code), and the rest of which are User Processes (those which execute the User
Code).
 The Operating System is responsible in concern with the following activities:
 Creating and Deleting both, the User Processes, and the System Processes.
 Suspending and Resuming the Processes.
 Providing mechanisms for Process Synchronization, Communication, and Deadlock
Handling.

2. Main Memory Management:


 Main Memory is central to the Operation of a System. It is a Repository of
quickly accessible Data, shared by the CPU and the I/O Devices.
 The Central Processor reads instructions from the Main Memory during the Instruction-
Fetch Cycle, and reads and writes Data during the Data- Fetch Cycle.
 It is generally the only large Storage that the CPU is able to address and access
directly, so it is used to improve both the Utilization of the CPU, and the speed of the
System's response to its Users.
 The System must keep several Programs in the Memory, creating a need for Memory
Management.
 Many different Memory Management Schemes are used. These Schemes reflect various
approaches, and effectiveness of any given Algorithm, which depends on the situation.
 The Operating System is responsible in concern with the following activities:
 Keeping a track of which parts of the Memory are currently being used, and by whom.
 Deciding which processes and Data are to be moved into, and out of the Memory.
 Allocating and De- Allocating Memory Space, as needed.
3. File Management:
 A File is a collection of all the related information, defined by it’s creator. Commonly,
Files represent Programs and Data. It is one of the most visible components of an
Operating System.
 Computers can store information on different types of Physical Mediums. Each of these
Media has it’s own characteristics, and Physical Organization.
 Their properties include Access Speed, Capacity, Data Transfer Rate, Access Methods, etc.
 The Operating System is responsible in concern with the following activities:
 Creating and Deleting Files
 Creating and Deleting the Directories, to organize Files.
 Supporting Primitives for manipulating Files and Directories.
 Mapping the Files onto the Secondary Storage.
 Backing up the Files on the Storage Media.

4. I/O System Management:


 I/O System Management is the major function of an Operating System. The Functions
include the Management of the I/O Devices, the Device Drivers, and it’s Supporting
Systems.
 One of the purposes of an Operating System is to hide the Peculiarities of the Devices
from the User. It hides the Complexity, and ensures proper Utilization of the
Devices. The Operating System keeps Track of the I/O Devices, to assign the
required jobs to them.
 The Operating System is responsible in concern with the following activities:
 General Device Interface.
 Drivers of Specific Hardware.
NOTE:- The Device Driver is the interface between the Operating System (OS), and the Hardware.

5. Secondary Storage Management:


The Main Memory is too small to accommodate all the Data and all the Programs, and the
Data that it holds is lost when Power is lost. Hence, the Computer System must
provide Secondary Storage to back up the Main Memory. Most systems use a
Disk as the principal Storage Medium, for both Programs and Data.
Most Programs, including Compilers, Assemblers, Processors, Editors, Formatters,
etc., are stored on a Disk until they are loaded into the Memory; the Disk is then
used as both the Source and the Destination of their Processing. Hence, proper
management of the Disk Storage is of central importance to a Computer System.
 The Operating System is responsible in concern with the following activities:
 Free Space Management.
 Storage Allocation.
 Disk Scheduling.

4. Explain the system component Process management in detail.


Refer the above question for the Answer.

5. Explain the system component File management in detail.


Refer Question Number 3 for the Answer.
6. Explain the system component Main memory management in detail.
Refer Question Number 3 for the Answer.

7. Explain the system component I/O system management in detail.


Refer Question Number 3 for the Answer.

8. Explain the system component Secondary storage management in detail.


Refer Question Number 3 for the Answer.

9. Define system call.


System Calls provide an interface between the Processes, and the Operating System.

10. Explain the system call with neat diagram.


Refer the above question for the Answer.

11. List all and explain any 1 system call in detail.


The System Calls are grouped into 5 major categories:
1.) Process and Job Control,
2.) File Management,
3.) Device Management,
4.) Information Maintenance, and
5.) Communication.

1.) Process and Job Control:


 It is also called as Process Management.
 A running program can control its own execution by invoking System Calls.
 Following System Calls are used in Process Management:
i.) End, Abort:
 A running program needs to be able to halt its execution, either normally, or
abnormally. Normal Termination means that the program exits on its own, by
executing an “end” instruction.
 Whereas, Abnormal Termination occurs when some interruption occurs, or when the
“Abort” System Call is used to terminate the process.
ii.) Load, Execute:
 A process that is executing may need to load and execute another program, as directed by
the User's Command. For example, after the Shell is initialized, the Command Interpreter
executes the User's Commands by loading and running other programs.
 Command Interpreter: It is a special program that runs when the job is initialized,
or when the first User logs in.
iii.) Create, Terminate Process:
 These System Calls are used to create various processes, and terminate the processes.
iv.) Get, Set Process Attributes:
 If a Process is created, then the System, and a User, should be able to control it’s
execution.
 This control requires the ability to retrieve and reset the attributes of various
processes. For this, we require Get and Set process attributes.
v.) Wait for Time, Wait Event, and Signal Event:
 Having created new processes, we may need to wait for them to finish their
execution.
 We may want to wait for a certain amount of time to pass. This is called as Wait for
Time.
 More probably, we will want to wait for a specific event to occur, or a task gets
accomplished. This is called as Wait Event.
 The process should then Signal, when that Event has occurred, or the Task gets
accomplished. This is called as Signal Event.
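
A minimal C sketch of these calls on a Unix-like system (an illustration added here, not part of the original answer): fork() creates a process, execlp() loads and executes another program, wait() waits for the child-termination event, and exit()/abort() terminate normally or abnormally.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* Create Process */
    if (pid < 0) {
        perror("fork");
        abort();                      /* Abort: abnormal termination */
    } else if (pid == 0) {
        /* Load, Execute: replace the child image with another program */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(1);                     /* reached only if exec fails */
    }
    int status;
    wait(&status);                    /* Wait Event: child termination */
    printf("child %d finished with status %d\n", (int)pid, status);
    return 0;                         /* End: normal termination */
}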

2.) File Management:


 The several System Calls used in File Management are as follows:
i.) Create, Delete File:
 The User may need to create new Files, or Directories, and also delete the existing
Files and Directories, by their identity. For this, Create and Delete System Calls are
activated.
ii.) Open and Close:
 Once the File is created, the User needs to Open a File, or Close the File, as per the
needs of the User. For this, Open and Close System Calls are activated.
iii.) Read, Write, and Reposition:
 Read System Call is used to perform Read Operation.
 Whereas, Write System Call is used for Writing, Appending, Copying, etc.
 Repositioning System Call considers the position of the Cursor in the File.
iv.) Get, Set File Attributes:
 When a File is created, or is in use, information about it is maintained by the
Operating System. This information can be viewed, or reset, using the Get and Set
File Attribute System Calls.
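
A hedged C sketch of these File Management calls on a POSIX system ("demo.txt" is an arbitrary example name):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    /* Create/Open: create the file if absent, open it for read/write */
    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char *msg = "hello, file system\n";
    write(fd, msg, strlen(msg));               /* Write */

    lseek(fd, 0, SEEK_SET);                    /* Reposition to the start */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf - 1); /* Read */
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

    close(fd);                                 /* Close */
    unlink("demo.txt");                        /* Delete */
    return 0;
}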
3.) Device Management:
 A Process may need several Resources to execute.
 If the Resources are available, they can be granted, and the Control can be returned to the
User Process. Otherwise, the Process will have to wait, until sufficient Resources are
available.
 The various Resources are controlled by the Operating System. Some of these are Physical
Devices, while others are Virtual Devices.
 The various System Calls used in Device Management are as follows:
1.) Request, Release Device,
2.) Read, Write, and Reposition, and
3.) Get, Set Device Attributes.
 In a Multi- User System, a User must initially request for a device to ensure that the User
has the exclusive use of it.
 After the Operation is finished, the User must release the Device. The Functions are similar
to Open and Close System Calls of Files.
 Once the Device has been Requested and Allocated, the User can Read from a Device,
Write to a Device, or can Reposition the Device, as it is done with the Files.

4.) Information Maintenance:


 Many System Calls exist simply for the purpose of transferring information
between the User Program and the Operating System.
 Most Systems have System Calls to return the Date, Time, and System Data, such as the
number of Users, the Version Number of the Operating System, the amount of
free Memory or Disk Space, etc.
 The System Calls used in Information Maintenance are as follows:-
1.) Get, Set Date,
2.) Get, Set Time, and
3.) Get, Set System Data.
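
A small C sketch of Information Maintenance calls, assuming a Unix-like system (the exact calls differ between Operating Systems):

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/utsname.h>

int main(void) {
    time_t now = time(NULL);                 /* Get Date, Time */
    printf("now: %s", ctime(&now));

    struct utsname info;                     /* Get System Data: OS name */
    if (uname(&info) == 0)                   /* and version number */
        printf("os: %s %s\n", info.sysname, info.release);

    printf("pid: %d\n", (int)getpid());      /* data about this process */
    return 0;
}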

5.) Communication:
 The System Calls used in Communication are as follows:
1.) Create and Delete Communication Connection,
2.) Send and Receive Messages, and
3.) Attach and Detach Remote Devices.
 The two Communication Models are as follows:
1.) Message Passing Model:
 In this Model, information is exchanged through the Inter- Process Communication
Facility, provided by the Operating System.
 Before communication can take place, a connection must be established
between the processes. The name of the other process with which we want to
communicate must be known; that Process may be on the same Machine, or on
some other System in the Network.
 Similarly, each Process has a Process Identifier, by which the Operating System can
refer to it. The “get process id” and “get host id” System Calls are used to identify
the Process, and the Machine (Host) on which it runs.
2.) Shared Memory Model:
 In this type of Communication, there is an exchange of information by reading and
Writing the Data in the Shared Area of the Memory.
 The processes use “shared memory create” and “shared memory attach” System
Calls to create, and gain access to, regions of Memory owned by
other Processes.
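
An illustrative C sketch of the Shared Memory Model (assumes Linux/BSD, where mmap() supports MAP_ANONYMOUS): a parent and child exchange data by reading and writing one region mapped into both address spaces.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    /* One shared page, visible to the process created by fork() below */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                        /* child: the writer */
        strcpy(shared, "written by child");
        _exit(0);
    }
    wait(NULL);                               /* reader waits for the writer */
    printf("parent read: %s\n", shared);      /* same memory, no copying */
    munmap(shared, 4096);
    return 0;
}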

12. Explain the system call Process management in detail.


Refer the above question for the Answer.

13. Explain the system call file management in detail.


Refer Question Number 11 for the Answer.

14. Explain the system call device management in detail.


Refer Question Number 11 for the Answer.

15. Explain the system call Information maintenance in detail.


Refer Question Number 11 for the Answer.

16. Explain the system call Communication in detail with neat diagram.
Refer Question Number 11 for the Answer.

17. Explain the use of OS tools.


Refer Question Number 1 for the Answer.
18. What is the difference between a loosely coupled and a tightly coupled system?
Loosely Coupled vs. Tightly Coupled Multiprocessor System:
1. In a loosely coupled system, every CPU has its own memory module; in a tightly coupled system, the CPUs share a common memory module.
2. Memory conflict is rare in loosely coupled systems; it is common in tightly coupled systems.
3. A loosely coupled system is very efficient when the processes executing on several CPUs have minimal interaction; a tightly coupled system allows more process interaction, and is very useful for high-speed and real-time processing.
4. The interconnection network of a loosely coupled system is the Message Transfer System (MTS); the interconnections of a tightly coupled system are the PMIN, IOPIN, and ISIN.
5. A loosely coupled system is less costly but bigger in size; a tightly coupled system is more costly but smaller in size.
6. The data rate of a loosely coupled system is low; that of a tightly coupled system is high.
7. The power consumption of a loosely coupled system is high; that of a tightly coupled system is low.
8. Loosely coupled multiprocessor applications are used in distributed computing systems; tightly coupled multiprocessor applications are found in parallel processing systems.
9. A loosely coupled system has a high delay; a tightly coupled system has a low delay.
10. A loosely coupled system runs on multiple OS instances; a tightly coupled system runs on a single OS.
11. In a loosely coupled system, every processor has its own cache memory; in a tightly coupled system, cache memory is allocated to processes according to their processing requirements.
12. A loosely coupled system has low scalability; a tightly coupled system has high scalability.
13. Security is low in a loosely coupled system; it is high in a tightly coupled system.
Chapter-3
1. Define process.
 Early Computer Systems allowed only one Program to be executed at a time. This
Program had complete control of the System, and had access to all the System
Resources. Current Systems allow multiple programs to be executed concurrently. This
evolution required firmer control, and more compartmentalization, of the various programs.
This need resulted in the notion of a Process.
 A Process is an instance of a Program in execution. It represents the current activity, which
includes various parameters, such as the Program Counter, Registers, Stack, etc.
 A Program is a Passive Entity, such as a File stored on a Disk.
 Whereas, a Process is an Active Entity, with a Program Counter specifying the next
instruction to execute, and a set of associated Resources.

2. List all and explain process states with the help of neat diagram.
 As a Process executes, it changes its State. The State of a Process is defined in part by
the current activity of that Process.
 Each Process may be in one of the following States:

1.) New:
It is the state where the Process is being created.
2.) Ready:
It is the State when the Process is waiting to be assigned to a Processor.
3.) Running:
It is the State where the instructions are executed.
4.) Waiting:
It is the State when the Process is waiting for some event to occur, such as, an I/O
operation completion.
5.) Terminated:
It is the state when the Process has finished its execution, and has released all its
Resources.
3. Describe process control block with the help of neat diagram.
 Each process is represented in the Operating System by a Process Control Block. The
Process Control Block is also called a Task Control Block. A PCB contains many pieces
of information, associated with a specific process, as shown in the figure below:

1.) Pointer:
 The Pointer points to the next PCB in the Ready Queue.
2.) Process State:
 The Process State may be New, Ready, Running, etc.
3.) Program Counter:
 The Program Counter indicates the Address of the next instruction to be executed for
the Process.
4.) Process ID (Number):
 The Process ID is the identification number given by the System to the Process.
5.) CPU Registers:
 The Registers vary in number and Type, depending on the System Architecture. They
include Accumulators, Index Registers, Stack Pointers, General- Purpose Registers, etc.
 Along with the Program Counter, this State information must be saved when an
interrupt occurs, to allow the Process to be continued and executed correctly afterwards.
6.) CPU Scheduling Information:
 This information includes Priorities, Pointers to the Scheduling Queues, and any other
Scheduling Parameters.
7.) Memory Management Information:
 It consists of information, such as value of Base and Limit Register, Page Tables,
Segment Tables, depending on the Memory Management Scheme, used by the
Operating System.
8.) Accounting Information:
 This information includes the amount of CPU Time used, Time Limits, etc.
9.) I/O Status Information:
 This information includes the List of I/O Devices allocated to the Process, the List of
Open Files, and so on.
*NOTE: A Process simply serves as a repository for any information that may vary from
process to process.
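
The fields above can be pictured as a C structure. The sketch below is purely illustrative (no real kernel uses exactly this layout; all names are invented here):

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    struct pcb     *next;            /* Pointer: next PCB in the Ready Queue */
    enum proc_state state;           /* Process State */
    uint64_t        program_counter; /* address of the next instruction */
    int             pid;             /* Process ID (Number) */
    uint64_t        registers[16];   /* saved CPU Registers (arch-dependent) */
    int             priority;        /* CPU Scheduling Information */
    uint64_t        base, limit;     /* Memory Management Information */
    uint64_t        cpu_time_used;   /* Accounting Information */
    int             open_files[16];  /* I/O Status Information */
};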
4. Explain process scheduling with scheduling queues with neat diagram.
 In some Systems, the Long- Term Schedulers may be absent. In these Systems, every new
process is kept in the Main Memory for the execution, which affects the stability of the
System.
 So, an intermediate level of Scheduling was introduced. The Scheduler was used in it. It is the
Medium- Term Scheduler.
 The figure below shows the addition of the Medium- Term Scheduling.:

Medium- Term Scheduler:


 The main purpose of Medium- Term Scheduler is that it removes the Process from the Main
Memory, and again reloads it afterwards, whenever it is required. This process is called as
“Swapping”, which is a Memory Management technique. This is helpful to improve the
performance of the System, and it can maintain the degree of Multiprogramming.
Long- Term Scheduler:
 The Long- Term Scheduler selects the Group, or a Batch of Processes, brought into the Ready
Queue.
Short- Term Scheduler:
 The Short- Term Scheduler selects the processes which should be executed next, and
allocated to the CPU.

5. Define all schedulers.


Refer Question Number 4 for the Answer.

6. Describe the concept of schedulers.


 A Process migrates between the various Scheduling Queues throughout it’s Lifetime. The
Operating System must select the Processes for these Queues, for the Scheduling Process.
 The Selection Process is carried out by the appropriate Scheduler. In a System, often more
Processes are submitted than can be executed immediately. These processes are kept on
a List (Queue), for later execution.
 The Primary distinction between these Schedulers is the Frequency of their execution.
 The Short- Term Scheduler must select a new process for the CPU frequently, whereas the
Long- term Scheduler’s Frequency is less, as it takes some minutes to select the Batch of
processes, and load them into the Memory.
 The main aspect of Schedulers is to maintain the degree of Multiprogramming. The degree
is stable if the Average Rate of Process Creation is equal to the Average
Departure Rate of the processes leaving the System.
7. Describe the concept of medium term scheduling with neat diagram.
Refer Question Number 4 for the Answer.

8. Explain context switch with the help of neat diagram.


 Switching the CPU to another process requires saving the State of the old process, and
Loading the saved State of the new process. This task is known as “Context Switch”.
 The context of a process is represented in the PCB of the process, which includes the value of
Registers, Process States, Memory Management information, etc. The Context Switch Time is
pure overhead, because the System does no useful work while Switching.
 The speed varies from machine to machine, depending upon the System Architecture, and the
existence of the Special Instruction. More Complex the Operating System, more the work
must be done during the Context Switch.
 The Advanced Memory Management technique may require extra information to be switched
with each context. How the Address Space is preserved, and what amount of work is needed
to preserve it, depends upon the Memory Management methods, used by the Operating
System.
 The following figure shows the CPU Switch from process to process:

9. Describe message passing in IPC.


 The Inter- Process Communication, or simply the IPC, provides a mechanism, to allow the
processes to communicate, and to synchronize their actions.
 IPC is best provided by Message Passing System, and the Message Passing Systems can be
designed in many ways. An IPC facility provides at least the two Operations, which are:
1.) Send (message), and
2.) Receive (message).
 If the processes “P” and ”Q” want to communicate, they must Send messages to, and Receive
messages from each other, and a communication link must exist between them.
 The several methods for logically implementing a Link, and Send/ Receive are:
1.) Direct or Indirect Communication,
2.) Symmetric and Asymmetric Communication,
3.) Automatic or Explicit Buffering,
4.) Fixed or Variable Sized Message.
 Now, to achieve communication throughout these techniques, we require Naming, Buffering,
and Synchronization.

10. Describe direct and indirect communication in IPC.


i.) Direct Communication:
 With Direct Communication, the (each pair of) processes that wants to communicate with
each other must explicitly name the recipient, or the Sender of the Communication.
 In this Scheme, the Send and Receive Primitives are defined as:
a.) Send (P, Message): Sends a message to Process- P.
b.) Receive (Q, Message): Receives a message from Process- Q.
ii.) Indirect Communication:
 With Indirect Communication, the (each pair of) processes that wants to communicate
with each other must need to have a shared mailbox. The Sender places the message in
the Mailbox (also known as a port), and the Receiver removes the data or message from
the Mailbox.
 In this Scheme, the Send and Receive Primitives are defined as follows:
a.) Send (P, message): Sends a message to Mailbox- P.
b.) Receive (Q, message): Receives a message from Mailbox- Q.
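
A hedged C sketch of Send/Receive using a POSIX pipe, which behaves much like a one-way mailbox between a parent and a child process:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child plays process Q: the sender */
        close(fd[0]);
        const char *msg = "hello from Q";
        write(fd[1], msg, strlen(msg) + 1);    /* Send (message) */
        _exit(0);
    }
    close(fd[1]);                     /* parent plays process P: the receiver */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf);  /* Receive (message) */
    if (n > 0) printf("P received: %s\n", buf);
    wait(NULL);
    return 0;
}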

11. Describe synchronization in IPC.


 Communication between processes takes place through calls to the Send and Receive
primitives.
 There are different Design Options for implementing each Primitive.
 Message Passing may be either Blocking or Non- Blocking, also known as
Synchronous or Asynchronous.
i.) Blocking Send:
 The Sending process is blocked, until the message is received by the receiving
process, or by the Mailbox.
ii.) Non- Blocking Send:
 The Sending Process sends the message, and resumes its operation.
iii.) Blocking Receive:
 The Receiver blocks until the message is available.
iv.) Non- Blocking Receive:
 The receiver retrieves either a Valid Message, or a Null Message. Here, different
combinations of Send and Receive are possible.
12. Describe buffering in IPC.
 Whether the communication is Direct or Indirect, the messages exchanged by the
communicating processes reside in a Temporary Queue.
 They are implemented in 3 ways, which are as follows:
1.) Zero Capacity.
 The Queue has a maximum length of 0. Thus, the link cannot have any messages
waiting in it.
 In this case, the Sender must block, until the Receiver receives the message.
2.) Bounded Capacity.
 The Queue has finite length “n”. Thus, at most, “n” messages can reside in the
Queue.
 If a Queue is not full, a new message can reside in it, and the Sender can continue the
execution without waiting.
 If the link is full, the Sender must Block, until the space is available in the Queue.
3.) Unbounded Capacity.
 The Queue has Infinite length. Thus, any number of messages can wait in it, and the
Sender never Blocks.
 Note: The Zero Capacity Case is referred to as “No Buffering”, whereas the other two
Cases are referred to as “Automatic Buffering”.
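
A sketch of a Bounded Capacity queue of length n as a C ring buffer (illustrative only; the names are invented, and a real system would block instead of returning false):

#include <stdbool.h>

#define N 8                           /* the "n" of the bounded capacity */

struct msg_queue {
    int buf[N];
    int head, tail, count;
};

bool mq_send(struct msg_queue *q, int msg) {
    if (q->count == N) return false;  /* full: the Sender must block/retry */
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % N;
    q->count++;
    return true;                      /* otherwise the Sender continues */
}

bool mq_receive(struct msg_queue *q, int *msg) {
    if (q->count == 0) return false;  /* empty: the Receiver must block/retry */
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % N;
    q->count--;
    return true;
}

Removing the buffer entirely would give the Zero Capacity (No Buffering) case, while letting it grow without limit would give the Unbounded Capacity case.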

13. Define threads.


A Thread, sometimes called a Light- Weight Process, is the basic unit of CPU Utilization. It
comprises a Thread ID, a Program Counter, a Register Set, and a Stack.
It shares its Code Section, Data Section, and other Operating System Resources with the
other Threads belonging to the same Process.
A traditional process has a single Thread of Control. The figure below shows the difference
between Single Thread Process, and a Multi- Thread Process.
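
A minimal pthreads sketch in C (a POSIX assumption; compile with -lpthread): one process, two Threads of control that share the process's global data, while each has its own Stack.

#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;               /* data section shared by all threads */

void *worker(void *arg) {
    /* unsynchronized here for brevity; real code would use a mutex */
    shared_counter++;
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}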

14. What is thread? Explain classical thread model with neat diagram. OR Explain threads in
detail with neat diagram.
Refer the above question for the Answer.
15. Explain the benefits of threads.
1.) Responsiveness:
 Multithreading an interactive application may allow a Program to continue running, even
if a part of it is blocked, or is performing a lengthy operation, thereby increasing the
responsiveness to the user.
 For instance, a Multithreaded Web- Browser could still allow User interaction in one
Thread, while an Image is being loaded in another Thread.
2.) Resource- Sharing:
 By default, Threads share the Memory and the Resources, of the process to which they
belong.
 The benefit of the Code Sharing is that it allows an Application to have several different
Threads of activity, all within the same Address Space.
3.) Economy:
 Allocating Memory and Resources for process creation is costly.
 Because the Threads share the Resources of the Process to which they belong, it
is more economical to create and Context- Switch Threads.
4.) Utilization of Multiprocessor Architecture:
 The benefit of Multithreading can be greatly increased in a Multiprocessor Architecture,
where each Thread maybe running in parallel, on different Processors.

16. Explain the types of threads.


 The Threads can be divided into two types, which are:
1.) User Threads:
 The User Threads are supported above the Kernel, and are implemented by a Thread
Library, at the User Level.
 The Library provides support for Thread Creation, Scheduling, and Management with no
support from the Kernel, because the Kernel is unaware of the User- Level Threads, as all
the Thread Creations and Executions are done in the User Space, without any need of
some Kernel intervention.
 Therefore, the User- Level Threads are generally fast to create and manage.
2.) Kernel Threads:
 Kernel Threads are directly supported by the Operating System.
 The Kernel performs the Thread- Created Scheduling and Management in the Kernel
Space, because the Thread Management is done by the Operating System.
 The Kernel Threads are generally slower than the User Threads, to Create and Manage.
17. Describe different multithreading model with neat diagram.
1.) Many- To- One Model:
 The Many- To- One Model maps many User- Level Threads to one Kernel Thread. Thread
Management is done in User Space, so it is efficient.
 But, the entire Process blocks if a Thread makes a Blocking System Call.
 Also, because only 1 Thread can access the Kernel at a time, multiple Threads are unable
to run in parallel on a Multiprocessor.

2.) Many- To- Many Model:


 The Many- To- Many Model multiplexes many User- Level Threads to a smaller or equal
number of Kernel Threads, which may be specific to a particular Application, or a
particular Machine.
 The Many- To- One Model allows the developer to create as many User Threads as
desired, but true Concurrency is not gained, because the Kernel can schedule only 1
Thread at a time.
 The One- To- One Model allows for greater Concurrency, but the developer has to be
careful not to create too many Threads within an Application.
 The Many- To- Many Model suffers from neither of these shortcomings. The developers
can create as many User Threads as necessary, and the corresponding Kernel Threads can
run in parallel on a Multiprocessor. Also, when a Thread performs a Blocking System Call,
the Kernel can schedule another Thread for execution.
3.) One- To- One Model:
 The One- To- One Model maps each one of the User Threads to the Kernel Thread.
 It provides more Concurrency than the Many- To- One Model, by allowing another Thread
to run, when a Thread makes a Blocking System Call. It also allows multiple Threads to
run in parallel, on the Multiprocessor.
 The only drawback of this Model is that creating a User Thread requires creating the
corresponding Kernel Thread, and the Overhead of creating Kernel Threads can
burden the performance of an Application.
Chapter-4
1. Explain CPU-I/O burst cycle with the help of neat diagram.
 The success of Scheduling depends on an observed property of the Processes.
 The Process Execution consists of a “Cycle” of CPU Execution, and I/O Wait. The
Process alternates between these two States.
 The Process Execution begins with a CPU Burst, followed by an I/O Burst, then another
CPU Burst, then another I/O Burst, and so on.
 Eventually, the last CPU Burst ends with a System Request to terminate the execution.
 [Diagram: the alternating sequence of CPU Bursts and I/O Bursts of a Process.]

2. Explain scheduling circumstances in detail.


Ans.)
Scheduling Queries or Circumstances:
 The CPU Scheduling Decisions may be taken under the following four circumstances:
1. When a Process switches from the Running State to the Waiting State (for example: an I/O Request).
2. When a Process switches from the Running State to the Ready State (for example: when an interrupt occurs).
3. When a Process switches from the Waiting State to the Ready State (for example: completion of an I/O Operation).
4. When a Process terminates.
 Under circumstances 1 and 4 (from the above list), there is no choice in
terms of Scheduling: a new Process must be selected for execution. However, there is a
choice under circumstances 2 and 3.
 When Scheduling takes place only under circumstances 1 and 4, we say that the
Scheduling Scheme is Non- Preemptive. Otherwise, if Scheduling can also take place under
circumstances 2 and 3, the Scheduling Scheme is Pre- Emptive.
3. Explain types of scheduling.
Ans.)
Types of Scheduling:
 Scheduling is divided into 2 types, which are as follows:
1.) Pre- Emptive Scheduling, and
2.) Non Pre- Emptive Scheduling.
 The types of Scheduling are explained below:
1.) Pre- Emptive Scheduling:
 Under Pre- Emptive Scheduling, once the CPU has been allocated to a Process, it may
be taken away from that Process (Pre- Empted) in the middle of its execution, and
allocated to another Process.
2.) Non Pre- Emptive Scheduling:
 Under Non- Preemptive Scheduling, once the CPU is allocated to the Process, the
Process keeps the CPU until it releases the CPU, either by terminating, or by switching
to the Waiting State.

4. Explain scheduling components.


Ans.)
1.) CPU Scheduler:
 The CPU Scheduler is also known as Process Scheduler, or Short- Term Scheduler.
 Whenever the CPU becomes idle, the Operating System must select one of the processes in
the Ready Queue to be executed.
 The Selection Process is carried out by the CPU Scheduler.
2.) Dispatcher:
 Another component involved in the CPU Scheduling Function is the Dispatcher.
 The Dispatcher is the Module that gives the control of the CPU to the Process, selected
by the CPU Scheduler, or the Short- Term Scheduler.

5. Describe different scheduling criteria.


Ans.)
1.) CPU Utilization:
 It means to keep the CPU as busy as possible.
 Conceptually, CPU Utilization can range from 0% to 100%. In a Real System, it
should range from 40% (for a lightly loaded system) to 90% (for a heavily loaded system).
2.) Throughput:
 If the CPU is busy executing processes, then work is being done.
 One measure of work is the number of processes completed per unit Time, called
the Throughput.
 For Long processes, this rate may be one process per Minute; for Short processes, the
rate may be 10 processes per Minute.
3.) Turnaround Time:
 The interval from the Time of submission of a process, to the Time of its completion, is
the Turnaround Time.
 It is the sum of the time spent waiting to get into the Memory, waiting in the Ready
Queue, executing on the CPU, and doing the I/O Operations.
4.) Waiting Time:
 It is the sum of the time spent, waiting in the Ready Queue.
5.) Response Time:
 It is the time from the submission of a Request, until the first response is produced.
 It is desirable to maximize the CPU Utilization and the Throughput, and to minimize the
Turnaround Time, Waiting Time, and Response Time.

6. Describe FCFS algorithm with the help of example, advantages and disadvantages.
Ans.)
 This is the simplest Scheduling Algorithm.
 With this scheme, the process that requests the CPU first, is allocated to the CPU first.
 The implementation of the FCFS Policy is easily managed by the FIFO (First In First Out)
Queue.
 When a process enters the Ready Queue, it’s PCB (Process Control Block) is linked at the
Tail of the Queue.
 When the CPU is free, it is allocated to the process at the Head of the Queue.
 The FCFS process is a Non Pre- Emptive Technique.

Advantage of the FCFS Algorithm is:


 It is the simplest Algorithm.
Disadvantage of the FCFS Algorithm is:
 The Average Waiting Time is often quite long.
 Consider the following set of processes with the Burst Time, given in milliseconds, as:
Process Burst Time
P1 24
P2 3
P3 3
Gantt Chart:
P1 P2 P3
0 24 27 30
The Average Waiting Time (AWT) = (0 + 24 + 27) ÷ 3 = 17.0 ms
The Average Turnaround Time (ATT) = (24 + 27 + 30) ÷ 3 = 27.0 ms
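
A tiny C sketch that reproduces the FCFS numbers above (under FCFS, the waiting time of each process is simply the sum of the bursts before it):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 */
    int n = 3, start = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        total_wait += start;             /* waiting time = start time  */
        total_tat  += start + burst[i];  /* turnaround  = finish time  */
        start += burst[i];
    }
    printf("AWT = %.1f ms, ATT = %.1f ms\n",
           (double)total_wait / n, (double)total_tat / n);
    /* prints: AWT = 17.0 ms, ATT = 27.0 ms */
    return 0;
}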

7. Describe SJF algorithm with the help of example, advantages and disadvantages.
Ans.)
 Another approach to CPU Scheduling is the Shortest Job First (SJF) Algorithm.
 This algorithm associates each process with the length of the next CPU Burst.
 When the CPU is available, it is assigned to the process that has the smallest next CPU
Burst.
 If the Burst Time of 2 processes is the same, the concept of FCFS Scheduling Algorithm is
used.
 SJF is also referred to as “Shortest Next CPU Burst Algorithm”.
 SJF can be either a Pre- Emptive, or a Non Pre- Emptive, Scheduling Technique.
Advantage of the SJF Algorithm is:
 It is an Optimal Algorithm, that is, it gives minimal Average Waiting Time.
Disadvantage of the SJF Algorithm is:
 The real difficulty is in knowing the length of the next CPU Burst; Long processes may also starve.
 Consider the following set of processes with the Burst Time given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Gantt Chart:
P4 P1 P3 P2
0 3 9 16 24
The Average Waiting Time (AWT) = (3 + 16 + 9 + 0) ÷ 4 = 7.0 ms
The Average Turnaround Time (ATT) = (3 + 9 + 16 + 24) ÷ 4 = 13.0 ms
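
Since non-preemptive SJF is just "sort by burst length, then FCFS", sorting the bursts and reusing the FCFS computation reproduces the numbers above (a sketch, not a full scheduler):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};              /* P1..P4 from the example */
    int n = 4, start = 0, total_wait = 0, total_tat = 0;

    qsort(burst, n, sizeof burst[0], cmp);   /* shortest job first */
    for (int i = 0; i < n; i++) {
        total_wait += start;
        total_tat  += start + burst[i];
        start += burst[i];
    }
    printf("AWT = %.1f ms, ATT = %.1f ms\n",
           (double)total_wait / n, (double)total_tat / n);
    /* prints: AWT = 7.0 ms, ATT = 13.0 ms */
    return 0;
}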

8. Describe Priority algorithm with the help of example, advantages and disadvantages.
Ans.)
 In Priority Scheduling, a Priority is associated with each process, and the CPU is allocated to
the process with the highest Priority. The processes with equal Priority are scheduled in
FCFS Order. Priorities are generally indicated by numbers, such as 1, 4, 5, and so on.
However, there is no general agreement on whether 1 is the Highest or the Lowest Priority.
 Some Systems use low numbers to represent low Priority, and high numbers for the
representation of high Priority, while, others use low numbers for high Priority, and high
numbers for low Priority. We assume low numbers for the higher Priority.
 Priority Scheduling can be Pre- Emptive or Non Pre- Emptive.
For Example:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Gantt Chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
The Average Waiting Time (AWT) = (6 + 0 + 16 + 18 + 1) ÷ 5 = 8.2 ms
The Average Turnaround Time (ATT) = (16 + 1 + 18 + 19 + 6) ÷ 5 = 12.0 ms
A major problem with Priority Scheduling is Indefinite Blocking (Starvation): a low-priority
process may wait indefinitely. A solution is Aging, which gradually increases the priority of
processes that wait in the System for a long time.

9. Describe round robin algorithm with the help of example, advantages and disadvantages.
Ans.)
 Round Robin Scheduling Algorithm is designed especially for Time- Sharing System. It is
much similar to FCFS Scheduling, but the Pre- Emption is added to the Switch between the
processes. A small unit of Time, called Time Quantum, or Time Slice, is defined. Time
Quantum is generally between 1 and 100 milliseconds. The Ready Queue is treated as the
Circular Queue. The CPU Scheduler goes around the Ready Queue, selecting the processes
and the Dispatcher, allocating the CPU to each process, for a Time Interval of up to 1 Time
Quantum. To implement Round Robin Scheduling, we keep the Ready Queue as the FIFO
Queue of the processes. New processes are added to the Tail of the Ready Queue.
 For Example:
(ts= 4ms)
Process Burst Time
P1 24
P2 3
P3 3
Gantt Chart:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
The Average Waiting Time (AWT) = (6 + 4 + 7) ÷ 3 = 5.67 ms
The Average Turnaround Time (ATT) = (30 + 7 + 10) ÷ 3 = 15.67 ms
 In the Round Robin Scheduling Algorithm, no process is allocated the CPU for more than 1
Time Quantum in a row. If the process's CPU Burst exceeds 1 Time Quantum, the process is
Pre- Empted, and is put back in the Ready Queue. Thus, Round Robin Scheduling is a
Pre- Emptive Technique.
Advantage of the Round Robin Scheduling Algorithm is:
 The Average Waiting Time is minimal.
Disadvantage of the Round Robin Scheduling Algorithm is:
 If the Time Quantum is too large, then it is as good as the FCFS Policy.
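
A C sketch of Round Robin with a 4 ms Time Quantum, reproducing the example above (each process runs at most one quantum per pass over the queue):

#include <stdio.h>

int main(void) {
    int burst[]     = {24, 3, 3};     /* P1, P2, P3 */
    int remaining[] = {24, 3, 3};
    int finish[3], n = 3, q = 4, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {          /* circular ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            t += slice;                        /* run for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = t; done++; }
        }
    }
    double awt = 0, att = 0;
    for (int i = 0; i < n; i++) {
        att += finish[i];                      /* turnaround = finish time */
        awt += finish[i] - burst[i];           /* waiting = turnaround - burst */
    }
    printf("AWT = %.2f ms, ATT = %.2f ms\n", awt / n, att / n);
    /* prints: AWT = 5.67 ms, ATT = 15.67 ms */
    return 0;
}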

10. Explain multilevel queue scheduling with the help of diagram.


Ans.)
 Another class of scheduling algorithm has been created for situations in which the
processes are easily classified into groups.
 For example, a common division is made between foreground and background processes.
These 2 types of process have different response time requirements and so may have
different scheduling needs. In addition, foreground processes may have a higher priority
(externally defined) over background processes.
 A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some
property of process such as memory size, process priority or process type.
 For example, separate queues might be used for foreground and background processes. The
foreground process queue might be scheduled by the RR algorithm while the background queue is
scheduled by the FCFS algorithm. The technique or algorithm used is defined by the Operating
System.
 Consider the following example of a multilevel queue scheduling algorithm with 5 queues, listed
below in order of priority.
1. System Process.
2. Interactive Process.
3. Interactive Editing Process.
4. Batch Process.
5. Student Process.
 Each queue has absolute priority over lower priority queues. No process in the batch queue
could run unless all the higher priority queues are empty. Another possibility is to
time slice among the queues. Each queue gets a certain portion of the CPU time, which it can
then schedule among the various processes in its queue.
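The absolute priority rule can be illustrated with a minimal Python sketch (the queue names follow the example above; the job names are hypothetical):

# Multilevel queue with absolute priority: always serve the highest non-empty queue.
queues = {
    "system":      ["swapper"],
    "interactive": ["editor", "shell"],
    "batch":       ["payroll"],
}
order = ["system", "interactive", "batch"]    # fixed priority order

def next_job():
    for level in order:                   # scan from the highest priority queue down
        if queues[level]:
            return queues[level].pop(0)   # no lower queue runs until this one is empty
    return None

while (job := next_job()) is not None:
    print("running", job)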

11. Define deadlock.
Ans.)
In a multiprogramming environment several processes may compete for a finite no. of resources.
A process requests resources; if the resources are not available at that time, the process enters a
wait state. Sometimes waiting processes never again change state, because the resources they have
requested are held by other waiting processes. This situation is called “Deadlock”.

12. Describe the necessary conditions that lead to deadlock.
Ans.)
 A deadlock situation can arise if the following 4 conditions hold simultaneously in a system.
1. Mutual Exclusion:
At least one resource must be held in a non shareable mode i.e. only one process at a time
can use the resource. If another process requests that resource, the requesting process must
be delayed until the resource has been released.
2. Hold &Wait:
A process must be holding at least one resource & waiting to acquire additional resources
that are currently being held by other processes.
3. No Pre-Emption:
Resources cannot be pre-empted, that is, a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
4. Circular Wait:
A set {P0, P1, P2, ….. Pn} of waiting processes must exist such that P0 is waiting for the
resource held by P1, P1 is waiting for the resource held by P2, …, Pn-1 is waiting for the
resource held by Pn, and Pn is waiting for the resource held by P0.
 It is emphasized that all the 4 conditions must hold for deadlock to occur. The Circular Wait
Condition implies the Hold and Wait condition, so the 4 conditions are not completely
independent.

13. Explain deadlock prevention in detail.
Ans.)
 Deadlock prevention provides a set of methods for ensuring that at least one of the necessary
conditions cannot hold; in this way we can prevent the occurrence of a deadlock. We elaborate on this
approach by examining each of the four necessary conditions separately.
 Elimination of Mutual Exclusion:
 The mutual exclusion condition must hold for non-sharable devices like printers and
tape drives; these devices are non-sharable as they can serve only a single process at a
time.
 The best example of sharable resources is read-only files. If multiple processes try to open
these files simultaneously, then permission is granted to all of them to open and access
them.
 In general we cannot prevent deadlock by denying the mutual exclusion condition, as some
resources are intrinsically non-sharable.
 Elimination of Hold and Wait condition:
 One protocol requires each process to request and be allocated all its resources before it
begins execution. Another protocol permits a process to request resources only when it has
none: a process may request some resources and utilize them, but it must release all the
resources it currently holds before it can request any additional ones.
 Let's take an example. Imagine a process which needs to copy data from a DVD to the hard disk,
sort the file on the disk and take a printout of that data. According to the first protocol, the
process will request the DVD drive, the disk and the printer at the start, and it will hold the printer
till it finishes working with the DVD drive and the disk.
 In case of the second protocol, the process will first request the DVD drive and the disk only. It will
finish working with these two resources, release them, and then put in a request for the disk and
the printer. After printing the data, the disk and the printer will also be released.
 Both these protocols have disadvantages as below:
1. Resource utilization can be low, as resources may be allocated but not used for a long time.
2. Starvation is possible, because a process that needs several common or popular
resources may have to wait indefinitely.

 Elimination of no Pre- Emption condition:
 To ensure that this condition does not hold we can use the following protocol.
 If a process is holding some resources and request another resource that cannot be
immediately allocated to it, then all the resources currently being held are pre-empted,
that is, the resources are implicitly released.
 The pre-empted resources are added to the list of resources for which the process is
waiting.
 Alternatively, if a process requests some resources, we 1st check whether they are
available. If yes, we allocate them; else, we check whether they are allocated to some other
process that is waiting for additional resources. If so, we pre-empt the desired resources from
the waiting process and allocate them to the requesting process.

 Elimination of Circular Wait:
 One way to ensure that this condition never holds is to impose a total ordering of all the
resource types and to require that each process request resources in an increasing order
of enumeration.
 To illustrate, we let R = {R1, R2, …, Rn} be the set of resource types.
 We assign to each resource type a unique integer number, which allows us to compare
two resources and to determine whether one precedes another in our ordering.
 Formally, we define a function F: R → N, where N is the set of natural numbers.
 A process can initially request any number of instances of a resource type, say Ri. After
that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri).
 If several instances of the same resource type are needed, then a single request for all of
them must be issued.
 Alternatively, we can require that whenever a process requests an instance of resource
type Rj, it has released any resources Ri such that F(Ri) >= F(Rj). If these two protocols
are used, then the circular wait condition cannot hold.
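The resource ordering protocol is easy to express in code. The following is a minimal Python sketch (the resource names and the function F are hypothetical, chosen only for illustration):

# Circular wait prevention: always acquire resources in increasing order of F().
import threading

F = {"disk": 1, "dvd": 2, "printer": 3}     # hypothetical enumeration F: R -> N
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    # Sorting by F() guarantees that every thread locks resources in the same
    # global order, so no cycle of waiting threads can ever form.
    for name in sorted(names, key=F.get):
        locks[name].acquire()

acquire_in_order("printer", "disk")   # actually locks the disk first, then the printer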

14. Explain Resource allocation graph with example of cycle that leads in deadlock.
Ans.)
Deadlocks can be described more precisely in terms of a directed graph called a system resource
allocation graph. This graph consists of a set of vertices ‘V’ and a set of edges ‘E’. The set of
vertices ‘V’ is partitioned into two different types of nodes: P = {P1, P2, ….. Pn}, the set consisting
of all active processes in the system, and R = {R1, R2, ….. Rm}, the set consisting of all resource
types in the system.

A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process
Pi requested an instance of resource type Rj and is currently waiting for that resource. A directed
edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of
resource type Rj has been allocated to process Pi. A directed edge Pi → Rj is called a request
edge; a directed edge Rj → Pi is called an assignment edge. Pictorially, we represent each process
Pi as a circle, and each resource type Rj as a square. Since resource type Rj may have more than
one instance, we represent each such instance as a dot within the square. Note that a request edge
points to only the square Rj, whereas an assignment edge must also designate one of the dots in
the square. When process Pi requests an instance of resource type Rj, a request edge is inserted in
the resource allocation graph. When this request can be fulfilled, the request edge is
instantaneously transformed to an assignment edge. When the process no longer needs access to the
resource, it releases the resource, and as a result the assignment edge is deleted.
The resource allocation graph shown in the figure depicts the following situation.
1) The sets P, R and E
P= {P1, P2, P3}
R= {R1, R2, R3, R4}
E= {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
2) Resource instances
One instance of resource type R1
Two instance of resource type R2
One instance of resource type R3
Three instance of resource type R4
3) Process states
 Process P1 is holding an instance of resource type R2, and is waiting for an instance of
resource type R1.
 Process P2 is holding an instance of resources R1 & R2, and is waiting for an instance of
resource type R3.
 Process P3 is holding an instance of R3.
Given the definition of a resource allocation graph, it can be shown that, if the graph contains no
cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a
deadlock may exist. If each resource has exactly one instance, then a cycle implies that a
deadlock has occurred. If the cycle involves only a set of resource types, each of which has only
a single instance, then deadlock has occurred. Each process involved in the cycle is deadlocked. In
this case, a cycle in the graph is both a necessary and a sufficient condition for the existence of
deadlock.
If each resource type has several instances, then a cycle does not necessarily imply that a deadlock
has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the
existence of deadlock.
To illustrate this concept, see the resource allocation graph shown in the figure before. Suppose
that process P3 requests an instance of resource type R2. Since no resource instance is currently
available, a request edge
P3 → R2 is added to the graph as shown below.

At this point, two minimal cycles exist in the system.
P1→R1→P2→R3→P3→R2→P1
P2→R3→P3→R2→P2
Processes P1, P2 and P3 are deadlocked. Process P2 is waiting for resource R3, which is held by
process P3. Process P3, on the other hand, is waiting for either process P1 or process P2 to release
resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Now consider the resource allocation graph shown below:
In this example we also have a cycle.
P1→R1→P3→R2→P1
However there is no deadlock. Observe that process P4 may release its instance of resource type
R2. That resource can then be allocated to P3, breaking the cycle.
In summary, if a resource allocation graph does not have a cycle, then the system is not in the
deadlock state.
On the other hand, if there is a cycle, then the system may or may not be in deadlock state. This
observation is important when we deal with the deadlock problem.
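Since a cycle is the key indicator, deadlock detection on a single-instance system reduces to cycle detection in a directed graph. Below is a minimal Python sketch (an illustration, not part of the original answer); the edge set is a simplified version of the first deadlocked example above:

# Depth-first search for a cycle in a resource allocation graph.
graph = {
    "P1": ["R1"], "R1": ["P2"], "P2": ["R3"], "R3": ["P3"],
    "P3": ["R2"], "R2": ["P1"],
}

def has_cycle(g):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in g.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True               # back edge found: the graph has a cycle
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in g if n not in visited)

print(has_cycle(graph))   # True: each resource here has one instance, so deadlock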

15. Explain Resource allocation graph with example of cycle that does not lead in deadlock.
Ans.)
If we have a resource allocation system with only one instance of each resource type, a variant of
the resource allocation graph defined above can be used for deadlock avoidance.
In addition to request & assignment edges, we introduce a new type of edge called “claim edge”.
A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in future.
This edge resembles the request edge in direction but is represented by a "dashed line". When
process Pi requests resource Rj the claim edge Pi → Rj is converted to a request edge. Similarly
when a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge
Pi → Rj. We note that the resources must be claimed a priori in the system. That is, before
process Pi starts executing, all its claim edges must already appear in the resource allocation
graph. We can allow a claim edge Pi → Rj to be added to the graph only if all the edges
associated with process Pi are claim edges.
Suppose that process Pi requests resource Rj. The request can be granted only if converting the
request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in
the resource allocation graph. If no cycle exists, then the allocation of the resource will leave the
system in a safe state. If a cycle is found, then the allocation will put the system in an unsafe
state. Therefore process Pi will have to wait for its requests to be satisfied.
To illustrate this algorithm we consider the resource allocation graph as shown below:

16. Explain Resource allocation graph algorithm.
Ans.)
Same as above (Q.15).
17. Describe Banker’s algorithm in detail.
Ans.)
The resource allocation graph algorithm is not applicable to a resource allocation system with
multiple instances of each resource type. The deadlock avoidance algorithm that we describe
next is applicable to such a system but is less efficient than the resource allocation graph
scheme. This algorithm is commonly known as the Banker’s algorithm. The name was chosen
because this algorithm could be used in a banking system to ensure that the bank never allocates
its available cash such that it can no longer satisfy the needs of all its customers. When a new
process enters the system, it must declare the maximum no. of instances of each resource type
that it may need. This no. may not exceed the total no. of resources in the system. When a user
requests a set of resources, the system must determine whether the allocation of these resources
will leave the system in the safe state. If it will, the resources are allocated otherwise the process
must wait until some other process releases enough resources.
Several data structures must be maintained to implement the banker’s algorithm. These data
structures encode the state of resource allocation system. Let “n” be the number of processes on
the system and “m” be the number of resource types, we need the following data structures.
1.) Available:
A vector of length “m” indicates the no. of available resources of each type. If Available [j]
= k, there are k instances of resource type Rj available.
2.) Max:
An n*m matrix defines the maximum demand of each process. If Max [i,j] = k, then process Pi
may request at most k instances of resource type Rj.
3.) Allocation:
An n*m matrix defines the no. of resources of each type currently allocated to each
process. If Allocation [i,j] = k, then process Pi is currently allocated k instances of resource
type Rj.
4.) Need:
An n*m matrix indicates the remaining resources need of each process. If Need [i,j] = k
then process Pi may need k more instances of resource type Rj to complete its task. Note
that Need [i,j] = Max [i,j] – Allocation [i,j]
These data structures vary over time in both size & value.
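The safety check at the heart of the Banker's algorithm can be sketched in a few lines of Python (an illustration; the numbers are illustrative values, not from this text): repeatedly look for a process whose Need can be met by Work, let it finish, and reclaim its Allocation.

# Safety algorithm: is there an order in which every process can finish?
available  = [3, 3, 2]
allocation = {"P0": [0, 1, 0], "P1": [2, 0, 0], "P2": [3, 0, 2],
              "P3": [2, 1, 1], "P4": [0, 0, 2]}
need       = {"P0": [7, 4, 3], "P1": [1, 2, 2], "P2": [6, 0, 0],
              "P3": [0, 1, 1], "P4": [4, 3, 1]}

def is_safe(available, allocation, need):
    work, finished = available[:], set()
    while len(finished) < len(allocation):
        for p in allocation:
            if p not in finished and all(n <= w for n, w in zip(need[p], work)):
                # p can run to completion, then releases everything it holds
                work = [w + a for w, a in zip(work, allocation[p])]
                finished.add(p)
                break
        else:
            return False        # no runnable process found: the state is unsafe
    return True

print(is_safe(available, allocation, need))   # True: a safe sequence exists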
Chapter-5
1. Explain static memory partitioning with neat diagram.
Ans.)
Static partition is also called fixed memory partitioning. In static partition, the partitions are
fixed. Any remaining space in a partition cannot be reutilized by any other process.

Input Queue is also known as Ready Queue.
The Static Partition Descriptor Table contains two fields: the Base Address and the Offset.
The figure shows that the memory is divided into various fixed size partitions. For example, one
partition is of 200k. The input queue (ready queue) shown holds the processes which demand
memory allocation. When a partition is free, a process is selected from the input queue by the O.S.
& is loaded into the free partition. The static partition descriptor table consists of the base address & offset.
If a process doesn’t need all of its allocated space, then that unoccupied space is
wasted, because it cannot be used by any other process. This wastage within a static partition is
known as internal fragmentation.

2. Explain dynamic memory partitioning with neat diagram.
Ans.)
Dynamic partition is also called variable memory partitioning. In dynamic memory partitioning,
memory is partitioned in various partitions that are of variable length. When a process arrives in
the input queue, and needs memory, the system searches for a partition that is large enough for
the process. If the partition is too large then it is divided into two parts. One part is allocated to
the arriving process, the other is returned back to the partition available list.
In this type of partitioning, the partitions are of variable length.
In the above figure, the partitions get created and allocated as per each process’s memory
requirement. But as various processes allocate and release space, the system slowly reaches a state in
which a lot of small partitions remain unutilized. In the above figure, 2Mb may remain
unutilized because it may be too small to hold any process. This particular situation of un-
utilization of memory is known as external fragmentation.
Another problem arises: how to satisfy a request of a process of size ‘n’ to be allocated in
memory. There are mainly 3 strategies used for selecting a free partition from a set of available
ones, that are:
1. First Fit: Allocate first partition that is big enough
2. Best Fit: Allocate the smallest partition that is big enough.
3. Worst Fit: Allocate the largest partition.

3. What are the different memory management techniques?
Ans.)
Paging:
To overcome the problem of external fragmentation, the Paging technique was developed. It is a
memory management scheme in which physical memory is broken into fixed size blocks called
“frames”, and logical memory is broken into blocks of the same size called “pages”. When a process is to
be executed, its pages are loaded into any available memory frames from the backing store. The
hardware support for paging is shown in the figure below:

The page size is defined by the hardware. The size of a page is typically a power of 2, varying
between 512 bytes and 16 MB per page depending on the computer architecture. Every logical
address is mapped to a physical address, so paging can be seen as a form of dynamic
relocation.
As the above figure shows, every address that is generated is divided into 2 parts, i.e. the page number and
the page offset or displacement.
The page no. is used as an index into the page table, and the page offset is the displacement within the
page. Depending on the page size, the logical and physical addresses are generated. For example, in
the above figure we consider a page size of 4 bytes; then logical & physical addresses are
generated 4 bytes apart from each other, like 0, 4, 8, 12, …
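Because the page size is a power of 2, the split of a logical address into page number and offset is a simple division. A minimal Python sketch (the page table mapping is hypothetical), using the 4-byte pages of the example:

# Logical-to-physical translation in a paged system.
PAGE_SIZE = 4                                  # 4-byte pages, as in the example

def translate(logical, page_table):
    page, offset = divmod(logical, PAGE_SIZE)  # page number indexes the page table
    frame = page_table[page]                   # page table maps page -> frame
    return frame * PAGE_SIZE + offset          # physical address

page_table = {0: 5, 1: 6, 2: 1, 3: 2}          # hypothetical page -> frame mapping
print(translate(13, page_table))               # page 3, offset 1 -> 2*4 + 1 = 9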
Segmentation:
In segmentation a logical address space is a collection of segments. Each segment has a name
and length. So the address specifies both the segment name and the offset (displacement/ length)
within the segment. For simplicity of implementation segments are numbered & are referred to
by a segment number rather than segment name. Thus a logical address consists of segment
number and offset. The figure below shows the address translation.

As we have seen, in segmentation an object is referred to by a logical address, i.e. by a two-dimensional
address. But the actual or physical address is a one dimensional sequence. So it is necessary
to translate a logical address into a physical address so that the program gets executed. This
mapping is done by the segment table, which consists of a base and a limit. The segment base contains
the starting physical address, whereas the limit specifies the length of the segment.
The segment number is used as an index into the segment table. The offset of the logical address must
lie between 0 and the segment limit. The offset is compared with the limit; if it is within the limit, the
actual physical address is generated from the values of the limit & base register contents. If it
is not in the specified range, an error (trap) occurs. So address translation from logical to physical is done by
adding the contents of the base register & the offset. The figure below shows that the segment gets stored in
physical memory as shown. The segment table has a separate entry for each segment.
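The base/limit check can be written out directly. A minimal Python sketch (the segment table values are hypothetical):

# Segmented address translation: check the offset against the limit, add the base.
segment_table = {0: (1400, 1000), 1: (6300, 400)}   # segment -> (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:        # offset must lie between 0 and the limit
        raise MemoryError("addressing error: trap to the operating system")
    return base + offset               # physical address

print(translate(1, 53))                # 6300 + 53 = 6353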
Memory Partitioning:
Multiple or memory partitioning is one of the memory management techniques. With this
technique, memory is divided into several fixed sized partitions, and each partition may
contain exactly one process. In the partition method the processes are waiting in an Input Queue
(or the Ready Queue). The Operating System takes into account the memory required for each
process. It also keeps track of free space. There are 2 types of partitioning, i.e. static (fixed) &
dynamic (variable) partitioning.

4. Explain free space management with bit maps in detail.
Ans.)
 Frequently the free space list is implemented as a Bit Map or Bit Vector. Each block is
represented by 1 bit. If the block is free, the bit is 1, and if the block is allocated, it is 0. For example,
consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26 & 27 are free and the rest of the
blocks are allocated. The bit map would be:
001111001111110001100000011100000.....
 The main advantage of this approach is its relative simplicity & its efficiency in finding the
first free block or "n" consecutive free blocks on the disk. Indeed many computers supply bit
manipulation instructions that can be used effectively for that purpose. One technique for
finding the first free block on a system that uses a bit vector to allocate disk space is to
sequentially check each word in the bit map.
 Again we see the hardware features driving software functionality. Unfortunately, bit vectors
are inefficient unless the entire vector is kept in main memory. Keeping it in main memory is
possible for smaller disks but not necessarily for larger ones.
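Finding free blocks in a bit map is a simple scan. A minimal Python sketch using the example vector above (a real system would scan machine words with bit instructions rather than characters):

# 1 = free, 0 = allocated, one character per disk block.
bitmap = "001111001111110001100000011100000"

print(bitmap.find("1"))        # first free block: 2
print(bitmap.find("1" * 6))    # first run of 6 consecutive free blocks: starts at 8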

5. Explain free space management with linked list in detail.
Ans.)
This is the approach to link together all the free disk blocks, keeping a pointer to the first free
block in a special location on the disk and caching it in memory.
For example, we would keep a pointer to block 2 as the first free block; then block 2 would
contain a pointer to block 3, which would point to block 4, and so on. However, this scheme is not
efficient: to traverse the list, we must read each block, which requires substantial input output
time. Fortunately, traversing the free list is not a frequent action. Usually the operating system
simply needs a free block so that it can allocate it to a file, so the first block in the free list is used.
The FAT (File Allocation Table) method incorporates free block accounting into the allocation
data structure. No separate method is needed.
6. Explain following allocation algorithm. a. First fit b. Best fit c. Worst fit with the help of
example.
Ans.)
The problem in Memory Management is how to satisfy a request of a process of size
‘n’ to be allocated in memory. There are mainly 3 strategies used for selecting a free partition
from a set of available ones, which are:
First Fit: Allocate the first partition that is big enough.
Best Fit: Allocate the smallest partition that is big enough.
Worst Fit: Allocate the largest partition.
Consider a swapping system in which memory consists of the following partition sizes,
in memory order: 10kb, 4kb, 20kb, 18kb, 7kb, 9kb, 12kb, 15kb. Now we will
see which partition is selected for successive process requests of i) 12kb ii) 10kb iii) 9kb
under first fit, best fit and worst fit.
First Fit:
Allocate first partition that is big enough.

Best Fit:
Allocate the smallest partition that is big enough.

Worst Fit:
Allocate the largest partition.
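The three strategies can be compared with a minimal Python sketch (an illustration, not part of the original answer), assuming static partitioning, i.e. each partition holds at most one process:

# First/best/worst fit over the fixed partitions of the example above.
def allocate(partitions, request, strategy):
    free = [(i, s) for i, s in enumerate(partitions) if s is not None and s >= request]
    if not free:
        return None                               # no partition is big enough
    if strategy == "first":
        i, _ = free[0]                            # first partition that fits
    elif strategy == "best":
        i, _ = min(free, key=lambda f: f[1])      # smallest partition that fits
    else:
        i, _ = max(free, key=lambda f: f[1])      # largest partition
    partitions[i] = None                          # mark the partition occupied
    return i

for strategy in ("first", "best", "worst"):
    parts = [10, 4, 20, 18, 7, 9, 12, 15]         # sizes in kb, in memory order
    print(strategy, [allocate(parts, r, strategy) for r in (12, 10, 9)])
# first -> 20kb, 10kb, 18kb; best -> 12kb, 10kb, 9kb; worst -> 20kb, 18kb, 15kb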

7. What is the internal and external memory fragmentation.
Ans.)
Internal Fragmentation:
Internal fragmentation happens when the memory is split into fixed sized blocks. Whenever a
process requests memory, a fixed sized block is allotted to the process. In case the
memory allotted to the process is somewhat larger than the memory requested, the
difference between the allotted and requested memory is the Internal fragmentation.
External Fragmentation:
External fragmentation exists when enough total disk space exists to satisfy the request but it is
not contiguous. Storage is fragmented into large number of small partitions depending upon the
total amount of disk storage and average file size; external fragmentation may be either a minor
or a major problem.
8. What is paging? Discuss basic paging technique in details with diagram. OR Explain paging in
detail with diagram.
Ans.)
The basic paging technique is the same as described in Q.3 above: physical memory is broken
into fixed size blocks called “frames”, logical memory is broken into blocks of the same size called
“pages”, and when a process is to be executed its pages are loaded into any available memory
frames from the backing store. In addition:
The logic of the address translation process in a paged system is illustrated in the figure. The virtual
address is split by the hardware into a page no. & an offset within that page. The page number is used to
index the page table & to obtain the corresponding physical frame number. This value is
concatenated with the offset to produce the physical address, which is used to reference the target
item in memory.
9. Explain Segmentation in detail with diagram. OR What is segmentation? Explain the basic
segmentation method with diagram.
Ans.)
Same as Q.3 above (Segmentation).

10. Write short on: fixed & variable partition.
Ans.)
Same as Q.1 and Q.2 of this chapter (static & dynamic partitioning).

11. Explain following page replacement algorithm in detail. i. LRU ii. FIFO
Ans.)
Least Recently Used (LRU): Associates with each page, the time that page is last used.
LRU replacement associates with each page the time of that page’s last use. When a page must
be replaced, LRU chooses the page that has not been used for the longest period of time. For
example consider the following reference string:

The result of applying LRU replacement to our example is shown in the above figure. The LRU
algorithm produces 12 faults. Notice that the 1st 5 faults are the same as those for optimal
replacement. When the reference to page 4 occurs, however, LRU replacement sees that, of the
three frames in memory, page 2 was used least recently. Thus the LRU algorithm replaces page 2,
not knowing that page 2 is about to be used. When it then faults for page 2, the
LRU algorithm replaces page 3, since it is now the least recently used of the three pages in
memory. Despite these problems, LRU replacement with 12 faults is much better than FIFO
replacement with 15 faults.
The LRU policy is often used as a page replacement algorithm and is considered to be good. But
the disadvantage of LRU is that it requires additional hardware assistance.
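In software, LRU can be simulated by time-stamping each reference. A minimal Python sketch (an illustration; the reference string below is the classic textbook string that matches the 12-fault count described above):

# LRU page replacement: evict the page whose last use is furthest in the past.
def lru_faults(refs, frame_count):
    frames, last_used, faults = [], {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                victim = min(frames, key=lambda p: last_used[p])  # least recently used
                frames.remove(victim)
            frames.append(page)
        last_used[page] = t       # every reference refreshes the page's timestamp
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))        # 12 faults with 3 frames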

Optimal Page Replacement (OPT): Replaces each page that will not be used for the longest
period of time.
In optimal page replacement it replaces the page that will not be used for the longest time period.
An optimal page replacement algorithm has the lowest page fault rate of all algorithms. For
example consider the following reference string.

For our example reference string, our three frames are initially empty. The Optimal Page
Replacement Algorithm would yield nine (9) page faults, as shown in the figure. The first three
references cause faults that fill the 3 empty frames. The reference to page 2 replaces page 7,
because 7 will not be used for the longest period. The reference to page 3 replaces page 1, as page 1
will be the last of the 3 pages in memory to be referenced again. With only 9 faults, the Optimal
Page Replacement Algorithm is much better than the FIFO algorithm, which resulted in 15 faults.
Unfortunately, OPT is difficult to implement because it requires future knowledge of the
reference string. The main difference between FIFO and OPT is that FIFO uses the time when a
page was brought into memory, whereas OPT uses the time when a page is to be used.

12. Explain the following page replacement algorithm. a) Optimal page replacement b) Least
recently used page replacement.
Ans.)
a.) For OPT, refer the above Question (Q.11, Page Numbers: 42, 43)

b.) First In First Out page replacement (FIFO): Associates with each page, the time when
that page was brought into the memory.
The simplest page replacement algorithm is first in first out (FIFO). A FIFO replacement
algorithm associates with each page the time when that page was brought into memory. When a
page must be replaced, the oldest page is chosen. It is not strictly necessary to record the time
when a page is brought in: we can create a FIFO queue to hold all pages in memory. We
replace the page at the head of the queue. When a page is brought into memory, we insert it at the
tail of the queue. For example, consider the following reference string:

For our example reference string, our three frames are initially empty. The first three references
(7,0,1) cause page faults & are brought into these empty frames. The next reference (2) replaces
page 7, because page 7 was brought in first. Since 0 is the next reference & 0 is already in memory,
we have no fault for this reference. The first reference to 3 results in the replacement of page 0,
because it is now first in line. Because of this replacement, the next reference to 0 will fault.
Page 1 is then replaced by page 0. This process continues as shown in the frames. Every time a fault
occurs, we show which pages are in our three frames. Altogether, according to the frames shown,
there are 15 faults. The FIFO page replacement algorithm is easy to understand;
however, its performance is not always good. It increases the rate of page faults & slows process
execution.

13. Consider the following page reference string: 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2. How
many page faults would occur for the following replacement algorithms, assuming 3 frames
respectively? a. OPT page replacement. b. FIFO page replacement.
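Q.13 can be answered mechanically with a minimal Python sketch (an illustration; it prints the fault counts for parts a and b):

# Page fault counting for FIFO and OPT with a pluggable victim-selection rule.
def count_faults(refs, frame_count, pick_victim):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: no fault
        faults += 1
        if len(frames) == frame_count:
            frames.remove(pick_victim(frames, refs, i))
        frames.append(page)
    return faults

def fifo_victim(frames, refs, i):
    return frames[0]                      # oldest page sits at the head of the list

def opt_victim(frames, refs, i):
    future = refs[i + 1:]
    # Replace the page not needed for the longest time (or never needed again).
    return max(frames, key=lambda p: future.index(p) if p in future else len(future) + 1)

refs = [1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2]
print("OPT :", count_faults(refs, 3, opt_victim))
print("FIFO:", count_faults(refs, 3, fifo_victim))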

14. Free memory holes of sizes 15K, 10K, 5K, 25K, 30K, 40K are available. The processes of
size 12K, 2K, 25K, 20K is to be allocated. How processes are placed in first fit, best fit, worst fit.
Chapter-6
1. Explain the file concept.
Ans.)
Storing and retrieving information is a necessity of all computer applications. There are 3
problems which occur while storing and retrieving information:
1. A process can store limited amount of information within its own address space when a
process is executed.
2. When process gets terminated information gets lost.
3. It is necessary for multiple processes to access the information at same time.
To overcome the above problems information must be stored on files. File management is the
most visible service of an operating system. Computer can store information on various storage
media such as tapes, disks, etc.
A file is a collection of related information defined by its creator. A file may be the collection of
various types of data depending upon data types. In general file is a collection of bits, bytes, lines
or records the meaning of which is defined by the creator.

2. Explain any 4 file attributes.
Ans.)
 Name: The symbolic name is the only information kept in human readable form. A file is
named for the convenience of its human users.
 Identifier: A unique tag usually a number that identifies the file within the file system.
 Type: This information is needed for the system that supports different types of files.
 Size: The current size of the file & possibly the maximum allowed size are included in this
attribute.
 Location: This information is the pointer to the device & to the location of file on that
device.
 Protection: Access control information determines who can do, reading, writing & so on.
 Time, date and User Identifier: This information may be kept for creation, last modification,
etc. This can be used for protection & usage monitoring.

3. Explain any 4 file operations.
Ans.)
The O.S can provide system calls for doing file operations.
1. Creating a File: Two steps are necessary for creating a file:
i.) Space must be found for the file in the file system.
ii.) An entry for the new file must be made in the directory.
2. Writing a File: To write a file we make a system call specifying both the name of the file &
the information to be written to the file. Given the name of the file, the system searches the
directory to find the file’s location. The system must keep a write pointer to the location in the file
where the next write is to take place. The write pointer must be updated whenever a write
occurs.
3. Reading a File: To read from file we use the system call that specifies the name of the file.
Again the directory is searched for associated entry & the system needs to keep the read
pointer to the location in the file where the next read is to take place.
4. Repositioning within the File: A directory is searched for the appropriate entry & current
file position is set to a given value. This file operation is also known as “File Seek”.
5. Deleting a File: To delete a file we search the directory for the named file and release the
space and directory entry associated with it.
6. Truncating a File: Sometimes a user may want to erase the contents of a file but keep its
attributes. In these cases truncating is done; this allows the attributes to remain unchanged. These
operations are the ones which are always required. Other common operations are appending,
renaming, copying, etc.

4. Describe file system structure.
Ans.)
The structure of a file is of mainly 3 types. Some files store data plainly, that is, characters or
numbers are stored one by one; in other types there might be different arrangements made by the file
system. Files can be structured in different ways.
Here are some common possibilities; the following are the three major file structures:
1) Unstructured sequence of bytes (byte sequence)
2) Sequence of fixed length of records (record sequence)
3) Tree records of different or maybe of same length (tree sequence)
The figure below shows the type of file structures...

 In the first type of file structure, contents are stored in the form of bytes, i.e. sequences of zeros
and ones. Though this is a binary file, its contents from the user’s point of view form a normal file.
So we can say that internally data is stored in bytes, but user programs make these files
accessible and readable by the user.
 In the next type, data is stored in the form of records. A record is a collection of
related data; for example, the details of one employee, like employee code, name, salary, address,
etc., could form one record.
 The third type is mainly used in commercial applications, where data searching would otherwise
be quite tedious; in this type, data is kept in hierarchical (tree) order.

5. Describe the different file access methods.
Ans.)
Sequential Access Method: The sequential access method is the simplest method. Information
in a file is processed in order, one record after the other; the bulk of the operations on the file are reads
and writes. In a read operation, “read next” reads the next portion of the file and automatically
advances a file pointer. Similarly, the write operation “write next” appends to the end of the file
and advances to the end of the newly written material, that is, the new end of the file.

The above figure shows the tape model for the file storage. The current portion shows the current
access of the file. In the sequential access method the required information is to be stored or
scanned sequentially one after another.
Direct Access Method: It is also called the relative access method. A file is made up of fixed
length records that allow programs to read and write records rapidly in no particular order. The direct
access method is based on disk type storage devices, since a disk allows random access to any file
block. For direct access, the file is viewed as a numbered sequence of blocks or records.
For example, we may read block 14, then we may read block 53, and then write block 7. There are
no restrictions on the order of reading or writing files for direct access. For the direct access method, the
file operations must be modified to include the block number as a parameter. Thus we have read
‘n’ or write ‘n’, where n is the block number, rather than read next or write next. The table below
shows the simulation of sequential access on direct access.
Sequential Access | Implementation for Direct Access
reset | cp = 0
read next | read cp; cp = cp + 1
write next | write cp; cp = cp + 1

6. Explain contiguous file allocation method with neat diagram, advantages and disadvantages.
Ans.)
It requires that each file occupy a set of contiguous blocks on disk. Disk addresses define a linear
ordering, i.e. proceeding straight from one block to another. With this ordering, assuming that only
one job is accessing the disk, accessing block b+1 after block b requires no head movement.
When head movement is needed, the head need only move from one track to the next, that is,
from the last sector of one cylinder to the first sector of the next cylinder. Contiguous allocation of a
file is defined by the disk address of the first block and the length. If the file is n blocks long and
starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file
indicates the address of the starting block and the length of the area allocated for the file, as the
figure shows.
Advantages:
1. Accessing of a file that has been allocated contiguously is easy.
2. Both sequential and direct access can be supported by contiguous allocation.
Disadvantages:
1. In contiguous allocation, the problem is that one couldn’t predict or determine the space
required for the allocation of the file.
2. Even if the total amount of space needed for a file is known in advance, pre-allocation
may be inefficient because it may lead to external fragmentation.
7. Explain Linked file allocation method with neat diagram, advantages and disadvantages.
Ans.)
Linked allocation solves all the problems of contiguous allocation. With linked allocation, each
file is a linked list of disk blocks. The disk blocks are scattered anywhere on the disk. The
directory structure contains a pointer to the 1st and last block of the file, that is, the start and the end.
The figure below shows the linked allocation.

For example, a file of 5 blocks might start at block 9 & continue at block 16, then block 1,
then block 10, and finally block 25. Each block contains a pointer to the next block. These
pointers are not made available to the users. To create a file we simply create a new entry in the
directory. Each directory entry has a pointer to the 1st disk block of the file. When a new file is created,
the start pointer has a null value, indicating an empty file. The file gets extended
according to the pointer value of the next block. These blocks get linked together with the help of
the pointer value.
Advantages:
1. There is no external fragmentation with linked allocation. Any free block can be used to
satisfy the request.
2. There is no need of declaring the size of the file when it is created. A file can continue
growing as long as free blocks are available.
Disadvantages:
1. The major problem is that it can be used effectively only for sequential access of a file.
2. Extra space is required for the pointers.
3. It is not reliable, that is, if a pointer gets lost or damaged, the blocks that follow it can no
longer be reached.
8. Explain Indexed file allocation method with neat diagram, advantages and disadvantages.
Ans.)
Linked allocation cannot support efficient direct access, because the pointers to the blocks are
scattered with the blocks themselves all over the disk & must be retrieved in order. Indexed
allocation solves this problem by bringing all the pointers into one location called the “index block”.
The figure below shows the indexed allocation.

Each file has its own index block, which is an array of disk block addresses. The ith entry in the
index block points to the ith block of the file. The directory contains the address of the index block. To
find & read the ith block, we use the pointer in the ith entry of the index block.
Advantage:
It supports direct access without suffering from external fragmentation.
Disadvantage:
Index allocation thus suffers from wasted space. The pointer overhead of index block is
generally greater than the pointer overhead of linked allocation.
For example, consider a common case in which we have a file of only 1 or 2 blocks.
With linked allocation we lose the space of only 1 pointer per block. With index allocation an
entire index block must be allocated even if only 1 or 2 pointers will be used, so the remaining
space is wasted.

9. Describe the different directory structures.
Ans.)
1. Single level directory structure:
It is a simple Directory Structure, where all the files are present in the same directory. These
directories are very easy to understand. This directory has some restrictions: if the number of
files increases, the file names must still be unique within the same directory. The fig shows the
single level directory structure.

There is another drawback: even if a single user works on this single level system, it may be difficult
to remember the names of the files if the files are many. Therefore the two major difficulties are
naming and grouping.
Advantages:
i. Single level directory is very simple, easy to understand, maintain and easy to implement.
ii. The software design is also very simple.
iii. The ability to locate files is very fast as there is only one directory.
Disadvantages:
i. It is not used in a multiuser system. For e.g. if two users A and B use the same filename for a
particular file, then the file of user A or B may be overwritten.
ii. It is difficult to remember the names of the files if the files are very large in quantity.

2. Two- Level directory structure:
In Two- Level Directory Structure, the system maintains a master block that has one entry for
each user. This master block contains the addresses of the directory of the users. In this system
each user has his/her own “user file directory” (UFD). When user logs into the system the
“master file directory” (MFD) is searched. This MFD contains the index names of the users with
their account details. Once the login is done, 1st MFD & then UFD is searched.

See the figure above; it indicates that there are different users who have their own directories.
The process is very simple: when a user logs on, he/she can access only his/her own directory
and the files present in it. There might be different users like user1, user2, user3 etc., and each
directory may contain files and subdirectories. In Unix, all the devices, directories and files,
each & every object, are treated as files.
There are many advantages of this type of structure. One of them is isolation, which keeps one user
separate from other users. But ultimately, if two users need to communicate with each other, then
it becomes difficult. We can call this structure a two level structure, which can even be called an
“inverted root tree”.
Here we use the terminology path name, which is nothing but the exact location of the file where it
is placed. There could be the same file name for different users. This structure allows efficient
searching, but as different users are kept separate, grouping cannot be done.

3. Tree- Level Directory Structure:
It is the most popular directory structure. It is very useful because it solves the problem of grouping, that
is, it can group different directories. Again the data can be efficiently searched due to path
concept. Path describes where exactly that file or directory is stored. This tree like structure
allows multiple users to share multiple directories & even file. Directories can even contain the
subdirectories. These subdirectories further may have files or again some subdirectories. This
helps users to organize their data in a particular and efficient manner.
There are two types of paths: one is the relative path & the other is the absolute path. The relative path
starts from the current working directory, e.g. if you are present in one directory, then it becomes
your current directory where the file may be present. If your file is not present in the current directory,
then the absolute path is used. The absolute path is the full path given from the root directory.
Now there is one important issue: how to delete a directory if it already contains sub-
directories or files. Some O.S. directly allow deleting this type of directory, which already
has subdirectories or files. But other O.S. do not allow the same thing. In that case, 1st the
directory should be made empty & then it can be deleted. This means the subdirectories & files should
be deleted first.

10. Explain single level directory structure.
Ans.) Refer Q.9 for the answer.

11. Explain two level directory structures.
Ans.) Refer Q.9 for the answer.

12. Explain tree directory structure.
Ans.) Refer Q.9 for the answer.

13. List all and explain any 3 RAID levels.
Ans.)
Mirroring provides reliability but is expensive; striping improves performance but does not
improve reliability. Accordingly, there are a number of different schemes that combine the
principles of mirroring and striping in different ways, in order to balance reliability versus
performance versus cost. These are described by different RAID levels, which are as follows:
(In the diagram that follows, "C" indicates a copy of the data, and "P" indicates parity, that is,
error correcting or checksum bits.)
Raid Level 0:- This level includes striping only at the level of blocks, with no mirroring (i.e. no
redundancy).
Raid Level 1:- This level includes mirroring only, no striping.
Raid Level 2:- It is also known as memory-style error-correcting-code (ECC) organization. This
level stores error-correcting codes on additional disks, allowing for any damaged data to be
reconstructed by subtraction from the remaining undamaged data. Each byte in the memory
system may have a parity bit associated with it that records whether the number of bits in the
byte set to 1 (parity=0) or odd (parity=1). If one of the bits in the byte is damaged, the parity of
byte changes and thus does not match the computed parity. similarly if the stored parity bits is
damaged , it does not match the computed parity. Thus, all single bit errors are detected by the
memory system. Note that this scheme requires only three extra disks to protect 4 disks worth of
data, as opposed to full mirroring. The number of disks required is a function of the error-
correcting algorithms, and the means by which the particular bad bits are identified.
Raid Level 3:- Also called as bit-interleaved parity organization. This level is similar to level 2,
except that it takes advantage of the fact that each disk is still doing its own error-detection, so
that when an error occurs, there is no question about which disk in the array has the bad data.
Disk controllers can detect whether the sector has been read correctly or not. As a result, a single
parity bit is all that is needed to recover the lost data from an array of disks. Level 3 also includes
striping, which improves performance. The weakness with the parity approach is that every disk
must take part in every disk access, and the parity bits must be constantly calculated and
checked, reducing performance. Hardware-level parity calculations and NVRAM cache can help
with both of those issues. In practice level 3 is greatly preferred over level 2.
Raid Level 4:- Also called as block-interleaved parity organization. This level is similar to level
3, employing block-level striping instead of bit-level striping. The benefits are that multiple
blocks can be read independently, and changes to a block only require writing two blocks (data
and parity) rather than involving all disks. If one of the disks fails, the parity block can be used
with the corresponding blocks from the other disks to restore the blocks of the failed disk. The data
transfer rate for each access is slower, but multiple read accesses can proceed in parallel,
leading to a higher overall I/O rate. The transfer rates for large reads are high, since all the disks
can be read in parallel; large writes also have high transfer rates. Small independent writes cannot
be performed in parallel. A write of data smaller than a block requires that the block be
read, modified with the new data, and written back; the parity block must also be updated. This is
known as the read-modify-write cycle. Note that new disks can be added seamlessly to the system
provided they are initialized to all zeros, as this does not affect the parity results.
Raid Level 5:- Also known as block-interleaved distributed parity. This level is similar to level
4, except the parity blocks are distributed over all disks, thereby more evenly balancing the load
on the system. For any given block on the disks, one of the disks will hold the parity information
for that block and the other N-1 disks will hold the data. Note that the same disk cannot hold
both data and parity for the same block, as both would be lost in the event of a disk crash and
will not be recoverable. By spreading the parity across all the disks this level avoids the potential
overuse of a single parity disk.
Raid Level 6:- Also called as P+ Q redundancy scheme. It stores extra information to guard
against multiple disk failures. This level extends raid level 5 by storing multiple bits of error-
recovery codes, such as the Reed-Solomon codes for each bit position of data, rather than a
single parity bit. In the example shown below, 2 bits of ECC are stored for every 4 bits of data,
allowing data recovery in the face of up to two simultaneous disk failures. Note that this still
involves only 50% increase in storage needs, as opposed to 100% for simple mirroring which
could only tolerate a single disk failure.
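The parity idea behind levels 3-5 is just XOR: the parity block is the XOR of the data blocks, so any one lost block equals the XOR of the parity with the surviving blocks. A minimal Python sketch with toy 2-byte blocks (illustrative values, not part of the original answer):

# Rebuild a lost data block from the parity block and the surviving blocks.
data_blocks = [b"\x0f\xa0", b"\x33\x11", b"\x55\x02"]

def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for block in blocks:
        out = bytes(a ^ b for a, b in zip(out, block))
    return out

parity = xor_blocks(data_blocks)                      # written to the parity disk

# Suppose the disk holding data_blocks[1] fails:
recovered = xor_blocks([parity, data_blocks[0], data_blocks[2]])
assert recovered == data_blocks[1]                    # the lost block is rebuilt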