OSY Board Questions With Answers
[Figure: Abstract view of the components of a Computer System — User, Application Program, Operating System, Hardware]
An Operating System is an important part of almost every Computer System. The Computer
System can roughly be divided into Four components:
1) Hardware:
It provides the Basic Computing Resources.
2) Operating System:
It provides the Interface, or the means, for the proper use of these Resources in the
Operation of the Computer System.
3) Application Program:
It defines the ways in which these Resources are used to solve the Computing Problems of
the User.
4) Users:
Users are classified into Three Categories:
i. Programmer:
The One who creates the Software.
ii. Operational Users:
The one who carries out Installation and Maintenance Process.
iii. End- Users:
Any User of the System.
3. Describe the operating system operations?
1) Program Management:
The Program gets executed in the Main Memory, with the help of Resources. It requires
Resources to complete its Execution.
Scheduling of the Programs is done by the Operating System, so that the Programs do not
interfere in each other's activities.
Proper Synchronization is carried out.
The Operating System is responsible for providing access with respect to Single Mode, or
Dual Mode.
2) Resource Management:
The System is a set of Resources, so its Management is necessary.
The Operating System is responsible for managing the Resources. The major Resource
Types in focus are Time and Space.
For Time Management, it checks which Program should be allocated the CPU, for what
purpose, and for how much Time.
Space Management checks what amount of Space is necessary, and what amount of
Space is to be allocated, for a Program to execute.
3) Protection and Security:
Protection is a Mechanism for controlling the access of a Program, or a User of the
Resource.
Security refers to the Defense System against the Internal and External Attacks.
4. Explain Sequential or Serial processing with neat diagram, advantages and disadvantages.
It is a technique in which Jobs are executed sequentially, one after the other.
Early Computers were very large machines, which were run from the Console. The
Programmer would write a Program, and then would operate the Program directly from the
Console.
Firstly, the Program would be loaded manually into the Memory, either from the Paper
Tape, or from Punch Cards. Then, with the help of appropriate Buttons, Execution of the
Program starts. If some error occurred, the Programmer could halt the Program,
examine the Contents of the Memory and Registers, and Debug the Program directly from the
Console.
The Output was printed either on the Paper Tapes, or the Punch Cards.
Only after this whole process was accomplished could the Programmer start the
Execution of the next Job.
Thus, the Jobs are processed sequentially, one after the other. This process was Time
Consuming. The User was not able to use the Hardware Resources efficiently, and the
CPU could sit Idle for a long period of time.
For example, suppose the User has to load three Programs, that is, a Cobol Program, then a
Fortran Program, and then a Cobol Program again.
Then, for the First Program, the Cobol Compiler, with other Resources, was loaded. After
the First Program was executed, the Cobol Compiler, with its other Resources, was
unloaded.
Then, the Fortran Compiler, and the other Resources, were loaded.
After finishing the Fortran Program, its Resources were unloaded, and the Cobol Resources
were loaded once again. This process continued for all the Programs.
This increases the Job Setup Time, and the CPU sits Idle for a long duration. But, it
provides the fastest Computation Time for each individual Job.
If the Machine-Time were sequentially divided among all the three Programs, this would
cause repetition of work.
Instead of this, in Batch Operating System, the Jobs with similar needs are batched together,
and executed as a Group, thus reducing the Setup Time, required for the Job.
In the above case, the Cobol Compiler, with it’s other Resources, will be loaded only once,
with less Setup time required.
This type of Batching System improves the Job Setup Time, and also improves the CPU
Utilization. But, the major drawback of Batch Operating System is that the CPU sits idle,
while the Transition of one Batch of Process, to the other.
6. Explain Multiprogramming system with neat diagram, advantages and disadvantages.
Batch Operating System improves the performance of a Single User-Access System, but
still, a Single User cannot keep the CPU, or the I/O Devices, busy at all times.
So, an attempt of Multiprogramming was developed. It is an attempt to increase the CPU
Utilization, by always giving the CPU some Process to execute.
The Idea behind the approach of Multiprogramming is as follows:
The Operating System keeps several Jobs in the Memory simultaneously, and begins to execute one of them.
Eventually, the Job being executed may have to wait for some other Task, such as an I/O
Operation Completion.
When the Job has to wait, the Operating System switches to execute another Job.
Again, when this particular Job needs to wait for some I/O Operation, then again the
Operating System switches to execute some other Job, and so on. On the other hand,
when the First job finishes it’s I/O Operation, it gets allocated back, to the CPU.
In this way, there is always at least one job allocated to the CPU, and the CPU never sits
idle.
Multiprogramming Operating System increases the complexity of the System, as there
are several processes active in the Memory.
So, the Operating System has to perform some CPU Scheduling, to schedule one job at a
time, in a way to improve the CPU Utilization.
The Operating System has to apply some Security Mechanisms, so that two Processes
cannot interfere in each other's activities.
1.) Program Execution: The User needs to execute many Programs, so, the System must be
able to load the Programs in the Memory, and run it. The Program must be able to terminate
the Execution, either normally, or abnormally.
2.) Input- Output (I/O) Operations: A running program may require an Input from a
Keyboard, or from a File, or from any other Input devices. Similarly, the Program may also
produce an Output to be displayed on the Screen, to a File, or to some Output devices. Since
the User program cannot execute the I/O Operations directly, the Operating System must
provide some means to do so.
3.) File System Manipulation: In this, the Program needs to access the File System, so that it
can read and write Files, create or delete Files or Directories, and search for the Files it
needs.
4.) Communication: One process may need to exchange information with another process. The
other process may be on the same machine, or on some other machine that belongs into that
same network, or a different network. Communication can be implemented via Shared
Memory, or Message Passing Technique.
5.) Error- Detection: The Operating System needs to be constantly aware of the possible
errors. Errors may occur in the CPU and Memory Hardware, in I/O Devices, in User Programs, etc. For
each type of Error, the Operating System should take appropriate action, and provide
consistent Computing.
6.) Resource Allocation: When there are multiple processes running at the same time, the
Resources must be allocated to each of them. Many different types of Resources are
managed by the Operating System, and have a general request, or Release Code.
7.) Accounting: To keep track of which User uses how much, and which kind, of Computer
Resources. This Record Keeping may be used for accounting, so that the Users can be billed,
or simply for accumulating Usage Statistics. Usage Statistics may be a valuable tool in
trying to configure the System to improve the Computing Services.
8.) Protection: The owners of the information may want to control the use of the information,
when several processes execute concurrently. It should not be possible for one process to
interfere with others, or with the Operating System itself. Protection involves ensuring that
all the access to the System Resources, and the Data, is controlled. This can be done either
by the means of Password, or by giving the Access permission.
1. Process Management:
A Process is an instance of a Program in its Execution. It needs certain Resources, like
CPU Time, I/O Devices, etc., to accomplish its Tasks. These Resources are provided
when the Process is created, or allocated while it is running. In addition to the various
Physical and Logical Resources that a Process obtains, various Initialization Inputs may
be passed along.
For example, consider a Process whose function is to display the status of a File on the
Terminal. This Process will be given the name of the File as Input, and will execute the
appropriate System Calls to obtain and display the desired information on the Terminal.
The Operating System will reclaim any Reusable Resources.
A Process is a unit of Work in a System. Such a System consists of a collection of
Processes, some of which are Operating System Processes (those which execute the
System Code), while the rest are User Processes (those which execute the User
Code).
The Operating System is responsible for the following activities:
Creating and Deleting both, the User Processes, and the System Processes.
Suspending and Resuming the Processes.
Providing mechanisms for Process Synchronization, Communication, and Deadlock
Handling.
5.) Communication:
The System Calls used in Communication are as follows:
1.) Create and Delete Communication Connection,
2.) Send and Receive Messages, and
3.) Attach and Detach Remote Devices.
The two Communication Models are as follows:
1.) Message Passing Model:
In this Model, information is exchanged through the Inter- Process Communication
Facility, provided by the Operating System.
Before communication can take place, a connection must be established
between the Processes. The name of the other Process with whom we need to
communicate must be known; that Process may be on the same Machine, or on
some other System in the Network.
Each Process has a Process Identifier, by which the Operating System can
refer to it. System Calls get the Process ID, and also the Host ID, which are
used to identify the Process and the Host.
2.) Shared Memory Model:
In this type of Communication, there is an exchange of information by reading and
Writing the Data in the Shared Area of the Memory.
The Processes use the Shared Memory Create and Shared Memory Attach
System Calls to create, and gain access to, the regions of the Memory owned by
the other Processes.
16. Explain the system call Communication in detail with neat diagram.
Refer Question Number 11 for the Answer.
2. List all and explain process states with the help of neat diagram.
As a Process executes, it changes its State. The State of a Process is defined in part by
the current activity of that Process.
Each Process may be in one of the following States:
1.) New:
It is the state where the Process is being created.
2.) Ready:
It is the State when the Process is waiting to be assigned to a Processor.
3.) Running:
It is the State where the instructions are executed.
4.) Waiting:
It is the State when the Process is waiting for some event to occur, such as, an I/O
operation completion.
5.) Terminated:
It is the State when the Process has finished its execution, and has released all its
Resources.
3. Describe process control block with the help of neat diagram.
Each Process is represented in the Operating System by a Process Control Block (PCB). The
Process Control Block is also called the Task Control Block. A PCB contains many pieces
of information associated with a specific Process, as shown in the figure below:
1.) Pointer:
The Pointer points to the next PCB in the Ready Queue.
2.) Process State:
The Process State may be New, Ready, Running, etc.
3.) Program Counter:
The Program Counter indicates the Address of the next instruction to be executed for
the Process.
4.) Process ID (Number):
The Process ID is the identification number given by the System to the Process.
5.) CPU Registers:
The Registers vary in number and Type, depending on the System Architecture. They
include Accumulators, Index Registers, Stack Pointers, General- Purpose Registers, etc.
Along with the Program Counter, this State information must be saved when an interrupt
occurs, to allow the Process to be continued and executed correctly afterwards.
6.) CPU Scheduling Information:
This information includes Priorities, Pointers to the Scheduling Queues, and any other
Scheduling Parameters.
7.) Memory Management Information:
It consists of information, such as value of Base and Limit Register, Page Tables,
Segment Tables, depending on the Memory Management Scheme, used by the
Operating System.
8.) Accounting Information:
This information includes the amount of CPU Time used, Time Limits, etc.
9.) I/O Status Information:
This information includes the List of I/O Devices allocated to the Process, the List of Open
Files, and so on.
*NOTE: The PCB simply serves as a repository for any information that may vary from
Process to Process.
4. Explain process scheduling with scheduling queues with neat diagram.
In some Systems, the Long- Term Scheduler may be absent. In these Systems, every new
Process is kept in the Main Memory for execution, which affects the stability of the
System.
So, an intermediate level of Scheduling was introduced, which uses the
Medium- Term Scheduler.
The figure below shows the addition of the Medium- Term Scheduling.:
14. What is thread? Explain classical thread model with neat diagram. OR Explain threads in
detail with neat diagram.
Refer the above question for the Answer.
15. Explain the benefits of threads.
1.) Responsiveness:
Multithreading an interactive Application may allow a Program to continue running, even
if a part of it is blocked, or is performing a lengthy operation, thereby increasing the
responsiveness to the User.
For instance, a Multithreaded Web- Browser could still allow User interaction in one Thread,
while an Image is being loaded in another Thread.
2.) Resource- Sharing:
By default, Threads share the Memory and the Resources, of the process to which they
belong.
The benefit of the Code Sharing is that it allows an Application to have several different
Threads of activity, all within the same Address Space.
3.) Economy:
Allocating the Memory and the Resources required for Process creation is
costly.
Alternatively, because the Threads share Resources of the Process to which they belong, it
is more economical to create and Context- Switch Threads.
4.) Utilization of Multiprocessor Architecture:
The benefits of Multithreading can be greatly increased in a Multiprocessor Architecture,
where each Thread may be running in parallel, on a different Processor.
6. Describe FCFS algorithm with the help of example, advantages and disadvantages.
Ans.)
This is the simplest Scheduling Algorithm.
With this scheme, the process that requests the CPU first, is allocated to the CPU first.
The implementation of the FCFS Policy is easily managed by the FIFO (First In First Out)
Queue.
When a process enters the Ready Queue, it’s PCB (Process Control Block) is linked at the
Tail of the Queue.
When the CPU is free, it is allocated to the Process at the Head of the Queue.
FCFS is a Non Pre- Emptive Technique.
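The FCFS policy above can be sketched in a few lines. This is a minimal sketch, assuming all processes arrive at time 0; the burst times are illustrative, not from the text, and show the classic convoy effect: one long burst ahead of two short ones inflates the average wait.

```python
def fcfs(bursts):
    """Waiting time of each process under First Come First Served."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits while all earlier bursts finish
        clock += burst
    return waits

print(fcfs([24, 3, 3]))  # → [0, 24, 27]  (average 17 ms)
print(fcfs([3, 3, 24]))  # → [0, 3, 6]    (average 3 ms: short jobs first)
```

Reordering the same bursts shows why FCFS can perform poorly: the average waiting time depends entirely on arrival order.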
7. Describe SJF algorithm with the help of example, advantages and disadvantages.
Ans.)
Another approach to CPU Scheduling is the Shortest Job First (SJF) Algorithm.
This algorithm associates each process with the length of the next CPU Burst.
When the CPU is available, it is assigned to the process that has the smallest next CPU
Burst.
If the Burst Time of 2 processes is the same, the concept of FCFS Scheduling Algorithm is
used.
SJF is also referred to as “Shortest Next CPU Burst Algorithm”.
SJF is Pre- Emptive, as well as Non Pre- Emptive Scheduling Technique.
Advantage of the SJF Algorithm is:
It is an Optimal Algorithm, that is, it gives minimal Average Waiting Time.
Disadvantage of the SJF Algorithm is:
The real difficulty is knowing the length of the next CPU Burst in advance; it can only be
predicted, not known exactly.
Consider the following set of processes with the Burst Time given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Gantt Chart:
P4 P1 P3 P2
0 3 9 16 24
The Average Waiting Time (AWT) = (3 + 16 + 9 + 0) ÷ 4 = 7.0 ms
The Average Turnaround Time (ATT) = (3 + 9 + 16 + 24) ÷ 4 = 13.0 ms
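The figures above can be verified with a short simulation. This is a sketch of non pre-emptive SJF, assuming all four processes arrive at time 0 and that ties fall back to FCFS order, as stated earlier:

```python
def sjf(bursts):
    """Non pre-emptive SJF; returns waiting times in the original process order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:            # stable sort keeps FCFS order for equal bursts
        waits[i] = clock
        clock += bursts[i]
    return waits

bursts = [6, 8, 7, 3]                   # P1..P4 from the example
waits = sjf(bursts)
print(waits, sum(waits) / len(waits))   # → [3, 16, 9, 0] 7.0
```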
8. Describe Priority algorithm with the help of example, advantages and disadvantages.
Ans.)
In Priority Scheduling, a Priority is associated with each process, and the CPU is allocated to
the process with the highest Priority. The processes with equal Priority are scheduled in
FCFS Order. Priorities are generally indicated by some numbers, such as 1, 4, 5, and so on.
However, there is no general agreement on whether 1 is the Highest or the Lowest Priority.
Some Systems use low numbers to represent low Priority, and high numbers for the
representation of high Priority, while, others use low numbers for high Priority, and high
numbers for low Priority. We assume low numbers for the higher Priority.
Priority Scheduling can be Pre- Emptive or Non Pre- Emptive.
For Example:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Gantt Chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
The Average Waiting Time (AWT) = (6 + 0 + 16 + 18 + 1) ÷ 5 = 8.2 ms
The Average Turnaround Time (ATT) = (16 + 1 + 18 + 19 + 6) ÷ 5 = 12.0 ms
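The Priority example can also be checked by simulation. A sketch of non pre-emptive Priority Scheduling, assuming all five processes arrive at time 0, lower numbers mean higher Priority, and ties fall back to FCFS:

```python
def priority_schedule(procs):
    """procs: list of (burst, priority); lower priority number runs first.
    Returns waiting times in the original process order."""
    order = sorted(range(len(procs)), key=lambda i: (procs[i][1], i))
    waits, clock = [0] * len(procs), 0
    for i in order:
        waits[i] = clock
        clock += procs[i][0]
    return waits

procs = [(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)]   # P1..P5 from the example
waits = priority_schedule(procs)
print(waits, sum(waits) / len(waits))               # → [6, 0, 16, 18, 1] 8.2
```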
9. Describe round robin algorithm with the help of example, advantages and disadvantages.
Ans.)
Round Robin Scheduling Algorithm is designed especially for Time- Sharing System. It is
much similar to FCFS Scheduling, but the Pre- Emption is added to the Switch between the
processes. A small unit of Time, called Time Quantum, or Time Slice, is defined. Time
Quantum is generally between 1 and 100 milliseconds. The Ready Queue is treated as the
Circular Queue. The CPU Scheduler goes around the Ready Queue, selecting the processes
and the Dispatcher, allocating the CPU to each process, for a Time Interval of up to 1 Time
Quantum. To implement Round Robin Scheduling, we keep the Ready Queue as the FIFO
Queue of the processes. New processes are added to the Tail of the Ready Queue.
For Example:
(ts= 4ms)
Process Burst Time
P1 24
P2 3
P3 3
Gantt Chart:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
The Average Waiting Time (AWT) = (6 + 4 + 7) ÷ 3 = 5.66 ms
The Average Turnaround Time (ATT) = (30 + 7 + 10) ÷ 3 = 15.66 ms
In Round Robin Scheduling Algorithm, no process is allocated to the CPU for more than 1
Time Quantum in a row. If the process's CPU Burst exceeds 1 Time Quantum, the process is
Pre- Empted, and is put back in the Ready Queue. Thus, the Round Robin Scheduling is a
Pre- Emptive Technique.
Advantage of the Round Robin Scheduling Algorithm is:
Every process gets the CPU at regular intervals, so the Response Time is good for
Time- Sharing Systems.
Disadvantage of the Round Robin Scheduling Algorithm is:
If the Time Quantum is too large, then it is as good as the FCFS Policy.
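The Round Robin run above (quantum = 4 ms) can be reproduced with a queue-based sketch, assuming all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return (waiting, turnaround) times per process; all arrive at time 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue, clock = deque(range(len(bursts))), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)            # pre-empted: back to the tail
        else:
            finish[i] = clock
    turnaround = finish                # arrival time is 0 for every process
    waiting = [finish[i] - bursts[i] for i in range(len(bursts))]
    return waiting, turnaround

w, t = round_robin([24, 3, 3], quantum=4)
print(w, t)   # → [6, 4, 7] [30, 7, 10]
```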
14. Explain Resource allocation graph with example of cycle that leads in deadlock.
Ans.)
Deadlocks can be described more precisely in terms of a directed graph called a system resource
allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of
vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting
of all active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource
types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process
Pi has requested an instance of resource type Rj, and is currently waiting for that resource. A directed
edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of
resource type Rj has been allocated to process Pi. A directed edge Pi → Rj is called a request
edge; a directed edge Rj → Pi is called an assignment edge. Pictorially, we represent each process
Pi as a circle, and each resource type Rj as a square. Since resource type Rj may have more than
one instance, we represent each such instance as a dot within the square. Note that a request edge
points to only the square Rj, whereas an assignment edge must also designate one of the dots in
the square. When process Pi requests an instance of resource type Rj, a request edge is inserted in
the resource allocation graph. When this request can be fulfilled, the request edge is
instantaneously transformed to an assignment edge. When the process no longer needs access to the
resource, it releases the resource, and as a result, the assignment edge is deleted.
The resource allocation graph shown in the figure depicts the following situation.
1) The sets P,R and E
P= {P1, P2, P3}
R= {R1, R2, R3, R4}
E= {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
2) Resource instances
One instance of resource type R1
Two instances of resource type R2
One instance of resource type R3
Three instances of resource type R4
3) Process states
Process P1 is holding an instance of resource type R2, and is waiting for an instance of
resource type R1.
Process P2 is holding an instance of resource types R1 and R2, and is waiting for an instance of
resource type R3.
Process P3 is holding an instance of R3.
Given the definition of a resource allocation graph, it can be shown that, if the graph contains no
cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a
deadlock may exist. If each resource type has exactly one instance, then a cycle implies that a
deadlock has occurred. If the cycle involves only a set of resource types, each of which has only
a single instance, then a deadlock has occurred. Each process involved in the cycle is deadlocked. In
this case, a cycle in the graph is both a necessary and a sufficient condition for the existence of
deadlock.
If each resource type has several instances, then a cycle does not necessarily imply that a deadlock
has occurred. In this case, a cycle in the graph is a necessary, but not a sufficient, condition for the
existence of deadlock.
To illustrate this concept, see the resource allocation graph shown in the figure before. Suppose
that process P3 requests an instance of resource type R2. Since no resource instance is currently
available, a request edge P3 → R2 is added to the graph as shown below.
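Cycle detection in such a graph can be sketched with a depth-first search. The node naming ('P1', 'R1') and the `has_cycle` helper below are illustrative, and — as noted above — a cycle guarantees deadlock only when every resource type involved has a single instance:

```python
def has_cycle(edges):
    """DFS cycle detection on a directed resource-allocation graph.
    Nodes are strings like 'P1' or 'R1'; edges is a list of (src, dst) pairs."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
        adj.setdefault(dst, [])
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {n: WHITE for n in adj}

    def dfs(n):
        color[n] = GRAY
        for m in adj[n]:
            if color[m] == GRAY:          # back edge: cycle found
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in adj)

# The example graph from the text has no cycle, so no deadlock.
E = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
     ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
print(has_cycle(E))                    # → False
# Adding the request edge P3 → R2 closes a cycle.
print(has_cycle(E + [("P3", "R2")]))   # → True
```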
15. Explain Resource allocation graph with example of cycle that does not lead in deadlock.
Ans.)
If we have a resource allocation system with only one instance of each resource type, a variant of
the resource allocation graph defined above can be used for deadlock avoidance.
In addition to request & assignment edges, we introduce a new type of edge called “claim edge”.
A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in future.
This edge resembles the request edge in direction but is represented by a "dashed line". When
process Pi requests resource Rj the claim edge Pi → Rj is converted to a request edge. Similarly
when a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge
Pi → Rj. We note that the resources must be claimed a priori in the system. That is, before
process Pi starts executing, all its claim edges must already appear in the resource allocation
graph. We can allow a claim edge Pi → Rj to be added to the graph only if all the edges
associated with process Pi are claim edges.
Suppose that process Pi requests resource Rj. The request can be granted only if converting the
request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in
the resource allocation graph. If no cycle exists then the allocation of the resource will leave the
system in a safe state. If a cycle is found, then the allocation will put the system in an unsafe
state. Therefore, process Pi will have to wait for its request to be satisfied.
To illustrate this algorithm we consider the resource allocation graph as shown below:
The page size is defined by the hardware. The size of a page is typically a power of 2, varying
between 512 bytes and 16 MB per page, depending on the computer architecture. Every logical
address is mapped to a physical address. So, paging can be thought of as a form of dynamic
relocation.
As shown in the above figure, every address that is generated is divided into 2 parts, i.e. the page
number and the page offset (displacement).
The page number is used as an index into the page table, and the page offset is the displacement
within the page. Depending on the page size, the logical and physical addresses are generated. For
example, in the above figure we consider a page size of 4 bytes; then logical and physical
addresses are generated 4 bytes apart from each other, like 0, 4, 8, 12, ...
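The split into page number and offset can be sketched for the 4-byte pages used in the example. The page-table contents below are assumed for illustration, since the figure is not reproduced here:

```python
PAGE_SIZE = 4   # bytes, matching the example in the text

def translate(logical, page_table):
    """Split a logical address into (page number, offset) and map it to a frame."""
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]              # page table maps page → frame
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # hypothetical page-to-frame mapping
print(translate(3, page_table))    # page 0, offset 3 → 5*4 + 3 = 23
print(translate(4, page_table))    # page 1, offset 0 → 6*4 + 0 = 24
```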
Segmentation:
In segmentation a logical address space is a collection of segments. Each segment has a name
and length. So the address specifies both the segment name and the offset (displacement/ length)
within the segment. For simplicity of implementation segments are numbered & are referred to
by a segment number rather than segment name. Thus a logical address consists of segment
number and offset. The figure below shows the address translation.
As we have seen, in segmentation an object is referred to by a logical address, i.e. by a two-
dimensional address. But the actual, or physical, address is a one- dimensional sequence. So it is necessary
to translate a logical address into a physical address, so that the program gets executed. This
mapping is done by the segment table, which consists of a base and a limit for each segment. The
segment base contains the starting physical address, whereas the limit specifies the length of the segment.
The segment number is used as an index into the segment table. The offset of the logical address must
be between 0 and the segment limit. The offset is compared with the limit; if it is within the limit, the
actual physical address is generated by adding the offset to the contents of the base register. If it
is not in the specified range, an addressing error occurs. So the address translation from logical to physical is done by
adding the contents of the base register and the offset. The figure below shows that the segment gets stored in the
physical address space as shown. The segment table has a separate entry for each segment.
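The base/limit check and translation can be sketched as follows; the segment table values are assumed (the classic textbook figure), since the figure itself is not reproduced here:

```python
# Segment table: entry per segment, each holding (base, limit). Assumed values.
segment_table = [(1400, 1000), (6300, 400), (4300, 400), (3200, 1100), (4700, 1000)]

def translate(segment, offset):
    """Map a two-dimensional (segment, offset) address to a physical address."""
    base, limit = segment_table[segment]
    if not (0 <= offset < limit):         # offset must lie within the segment
        raise ValueError("addressing error: offset outside segment limit")
    return base + offset

print(translate(2, 53))    # byte 53 of segment 2 → 4300 + 53 = 4353
print(translate(3, 852))   # byte 852 of segment 3 → 3200 + 852 = 4052
```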
Memory Partitioning:
The Multiple, or Memory, Partitioning is one of the memory management techniques. With this
technique, memory is divided into several fixed-sized partitions, and each partition may
contain exactly one process. In the partition method, the processes wait in an Input Queue
(or the Ready Queue). The Operating System takes into account the memory required by each
process. It also keeps track of free space. There are 2 types of partitioning, i.e. static (fixed) and
dynamic (variable) partitioning.
First Fit:
Allocate the first partition that is big enough.
Best Fit:
Allocate the smallest partition that is big enough.
Worst Fit:
Allocate the largest partition.
11. Explain following page replacement algorithm in detail. i. LRU ii. FIFO
Ans.)
Least Recently Used (LRU): Associates with each page, the time that page is last used.
LRU replacement associates with each page the time of that page’s last use. When a page must
be replaced, LRU chooses the page that has not been used for the longest period of time. For
example consider the following reference string:
The result of applying LRU replacement to our example is shown in the above figure. The LRU
algorithm produces 12 faults. Notice that the first 5 faults are the same as those for optimal
replacement. When the reference to page 4 occurs, however, LRU replacement sees that, of the
three frames in memory, page 2 was used least recently. Thus, the LRU algorithm replaces page 2,
not knowing that page 2 is about to be used. When it then faults for page 2, the
LRU algorithm replaces page 3, since it is now the least recently used of the three pages in
memory. Despite these problems, LRU replacement with 12 faults is much better than FIFO
replacement with 15 faults.
The LRU policy is often used as a page replacement algorithm and is considered to be good. But
the disadvantage of LRU is that it requires additional hardware assistance.
Optimal Page Replacement (OPT): Replaces each page that will not be used for the longest
period of time.
In optimal page replacement it replaces the page that will not be used for the longest time period.
An optimal page replacement algorithm has the lowest page fault rate of all algorithms. For
example consider the following reference string.
For our example reference string, our three frames are initially empty. The Optimal Page
Replacement Algorithm would yield nine (9) page faults, as shown in the figure. The first three
references cause faults that fill the 3 empty frames. The reference to page 2 replaces page 7,
because page 7 will not be used again for the longest period. The reference to page 3 replaces page 1, as page 1
will be the last of the 3 pages in memory to be referenced again. With only 9 faults, the Optimal
Page Replacement Algorithm is much better than the FIFO algorithm, which resulted in 15 faults.
Unfortunately, OPT is difficult to implement, because it requires future knowledge of the
reference string. The main difference between FIFO and OPT is that FIFO uses the time when a
page was brought into memory, whereas OPT uses the time when a page is to be used.
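The reference-string figure is not reproduced in the text; the string below is the classic one whose LRU fault count (12) matches the number quoted above, so it is presumably the intended example. A minimal LRU sketch:

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU replacement with nframes frames."""
    frames = []                      # least recently used page at the front
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)      # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)          # most recently used page at the back
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # → 12
```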
12. Explain the following page replacement algorithm. a) Optimal page replacement b) Least
recently used page replacement.
Ans.)
a.) For OPT, refer the above Question (Q.11, Page Numbers: 42, 43)
b.) First In First Out page replacement (FIFO): Associates with each page, the time when
that page was brought into the memory.
The simplest page replacement algorithm is First In First Out (FIFO). A FIFO replacement
algorithm associates with each page the time when that page was brought into memory. When a
page must be replaced, the oldest page is chosen. It is not strictly necessary to record the time
when a page is brought in; we can create a FIFO queue to hold all pages in memory. We
replace the page at the head of the queue. When a page is brought into memory, we insert it at the
tail of the queue. For example, consider the following reference string:
For our example reference string, our three frames are initially empty. The first three references
(7, 0, 1) cause page faults, and are brought into these empty frames. The next reference (2) replaces
page 7, because page 7 was brought in first. Since 0 is the next reference, and 0 is already in memory,
we have no fault for this reference. The first reference to 3 results in the replacement of page 0,
since it is now first in line. Because of this replacement, the next reference to 0 will fault.
Page 1 is then replaced by page 0. This process continues as shown in the frames. Every time a fault
occurs, we show which pages are in our three frames. Therefore, according to the frames shown,
there are altogether 15 faults. The FIFO page replacement algorithm is easy to understand;
however, its performance is not always good. It increases the rate of page faults, and slows process
execution.
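Using the same classic reference string as before (assumed, since the figure is not reproduced), a FIFO sketch reproduces the 15 faults quoted above:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())   # evict the oldest page
            frames.add(page)
            queue.append(page)                   # newest page at the tail
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # → 15
```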
14. Free memory holes of sizes 15K, 10K, 5K, 25K, 30K, 40K are available. Processes of
size 12K, 2K, 25K, 20K are to be allocated. Show how the processes are placed under first fit, best fit and worst fit.
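A worked answer can be sketched in code. The sketch below assumes the usual board-exam
convention that each hole holds at most one process (holes are not split and reused); under a
different convention, where a hole's remaining size stays available, the placements would differ.

```python
def allocate(holes, processes, strategy):
    """Place each process in a free hole; one process per hole."""
    free = list(holes)               # None marks a hole already used
    placement = []
    for p in processes:
        candidates = [(size, i) for i, size in enumerate(free)
                      if size is not None and size >= p]
        if not candidates:
            placement.append(None)   # no hole fits: process must wait
            continue
        if strategy == "first":
            _, i = min(candidates, key=lambda c: c[1])  # lowest index
        elif strategy == "best":
            _, i = min(candidates)                      # smallest fitting hole
        else:                                           # "worst"
            _, i = max(candidates)                      # largest hole
        placement.append(free[i])
        free[i] = None
    return placement

holes = [15, 10, 5, 25, 30, 40]          # sizes in K
procs = [12, 2, 25, 20]
print(allocate(holes, procs, "first"))   # [15, 10, 25, 30]
print(allocate(holes, procs, "best"))    # [15, 5, 25, 30]
print(allocate(holes, procs, "worst"))   # [40, 30, 25, None] -> 20K waits
```

Note that under worst fit the 20K process cannot be placed, since the remaining holes
(15K, 10K, 5K) are all too small.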
Chapter-6
1. Explain the file concept.
Ans.)
Storing and retrieving information is a necessity of almost every computer application. Three
problems occur when a process keeps information only in its own address space:
1. A process can store only a limited amount of information within its own address space.
2. When the process terminates, the information is lost.
3. It is often necessary for multiple processes to access the information at the same time.
To overcome these problems, information must be stored in files. File management is the
most visible service of an operating system. A computer can store information on various storage
media such as tapes and disks.
A file is a collection of related information defined by its creator. A file may be a collection of
various types of data, depending on the data types. In general, a file is a collection of bits, bytes,
lines or records, the meaning of which is defined by the creator.
In the first type of file structure, contents are stored as an unstructured sequence of bytes,
i.e. zeros and ones. Though such a file is internally a binary stream, from the user's point
of view it is a normal file: the data is stored as bytes, and the user program makes the file
accessible and readable.
In the second type, data is stored in the form of records. A record is a collection of related
fields; for example, the details of one employee (employee code, name, salary, address,
etc.) could form one record.
In the third type, records are organized in hierarchical order, as a tree. This structure is
used mainly in commercial applications, where searching through flat data would be
quite tedious.
The above figure shows the tape model of file storage. The current-position marker shows the
current access point in the file. In the sequential access method, the required information is
stored or scanned sequentially, one record after another.
Direct Access Method: It is also called the relative access method. A file is made up of fixed-length
records that allow a program to read and write records rapidly in no particular order. The
direct access method is based on a disk model of a file, since a disk allows random access to any
file block. For direct access, the file is viewed as a numbered sequence of blocks or records.
For example, we may read block 14, then read block 53, and then write block 7; there are no
restrictions on the order of reading or writing for a direct-access file. For the direct access
method, the file operations must be modified to include the block number as a parameter. Thus
we have read n or write n, where n is the block number, rather than read next or write next.
The table below shows the simulation of sequential access on a direct-access file.
Sequential Access        Direct Access
reset                    cp = 0
read next                read cp; cp = cp + 1
write next               write cp; cp = cp + 1
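The table can be turned into a tiny sketch that keeps a current-position counter cp over a
direct-access file. The class name and block contents here are invented for illustration.

```python
class SequentialOnDirect:
    """Simulate sequential access on top of a direct-access file."""

    def __init__(self, blocks):
        self.blocks = blocks   # direct-access file: blocks addressable by index
        self.cp = 0            # current position

    def reset(self):
        self.cp = 0            # reset: cp = 0

    def read_next(self):
        data = self.blocks[self.cp]   # read cp
        self.cp += 1                  # cp = cp + 1
        return data

    def write_next(self, data):
        self.blocks[self.cp] = data   # write cp
        self.cp += 1                  # cp = cp + 1

f = SequentialOnDirect(["a", "b", "c"])
print(f.read_next(), f.read_next())   # reads blocks 0 and 1 in order
```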
6. Explain contiguous file allocation method with neat diagram, advantages and disadvantages.
Ans.)
It requires that each file occupy a set of contiguous block on disk. This address is defined linear
ordering i.e. proceeding straight from one stage to another. With this ordering assuming that only
one job is accessing the disk, accessing block b+1 after the block b requires no head movement.
When head movement is needed, the head need only to move from one track to next. That is
from last sector to the first sector of next cylinder. Contiguous allocation of the file is defined by
the disk address and the length. If the file is in n blocks long and starts at location b then it
occupies b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of starting
block and the length of the area allocated for this file. Figure shows directory of each file
indicating the address of starting block and the length of the area allocated for this file.
Advantages:
1. Accessing a file that has been allocated contiguously is easy.
2. Both sequential and direct access can be supported by contiguous allocation.
Disadvantages:
1. In contiguous allocation, the problem is that one cannot always predict or determine in
advance the space required for a file.
2. Even if the total amount of space needed for a file is known in advance, preallocation
may be inefficient, and contiguous allocation leads to external fragmentation.
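The address arithmetic above is simple enough to express directly. This is a sketch; the start
block and length used in the example are arbitrary.

```python
def contiguous_blocks(start, length):
    """Blocks occupied by a contiguously allocated file: b .. b+n-1."""
    return list(range(start, start + length))

def physical_block(start, logical):
    """Map logical block i of the file to its disk block (direct access)."""
    return start + logical

# A file of n = 3 blocks starting at b = 14 occupies blocks 14, 15, 16.
print(contiguous_blocks(14, 3))   # [14, 15, 16]
print(physical_block(14, 2))      # 16
```

Because a logical block maps to the disk with one addition, both sequential and direct access
are cheap, which is the first advantage listed above.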
7. Explain Linked file allocation method with neat diagram, advantages and disadvantages.
Ans.)
Linked allocation solves all the problems of contiguous allocation. With linked allocation, each
file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The
directory contains a pointer to the first and last blocks of the file, that is, the start and the end.
The figure below shows linked allocation.
For example, a file of 5 blocks might start at block 9, continue at block 16, then block 1,
then block 10, and finally block 25. Each block contains a pointer to the next block. These
pointers are not made available to the user. To create a file, we simply create a new entry in the
directory. Each directory entry has a pointer to the first disk block of the file. When a new file is
created, the start pointer has a null value, indicating an empty file. The file is extended by
setting the pointer in the last block to a newly allocated block; the blocks are thus linked
together through these pointer values.
Advantages:
1. There is no external fragmentation with linked allocation; any free block can be used to
satisfy a request.
2. There is no need to declare the size of a file when it is created. A file can continue
growing as long as free blocks are available.
Disadvantages:
1. The major problem is that it can be used effectively only for sequential access to a file.
2. Extra space is required for the pointers.
3. It is not reliable: if a pointer is lost or damaged, the remaining blocks of the file become
inaccessible.
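The 5-block example above (block 9, then 16, then 1, then 10, then 25) can be traced with a
small sketch. The pointer table is a hypothetical disk state, with -1 standing in for the null
end-of-file pointer.

```python
END = -1   # null pointer marking the end of the file

# next_block[b] = pointer stored inside block b (hypothetical disk state)
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: END}

def read_linked_file(start):
    """Follow the per-block pointers from the start block to the end."""
    blocks, b = [], start
    while b != END:
        blocks.append(b)
        b = next_block[b]   # each block stores the address of the next
    return blocks

print(read_linked_file(9))   # [9, 16, 1, 10, 25]
```

The traversal also makes the main disadvantage visible: reaching block i requires following
i pointers from the start, so only sequential access is efficient.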
8. Explain Indexed file allocation method with neat diagram, advantages and disadvantages.
Ans.)
Linked allocation cannot support efficient direct access, because the pointers to the blocks are
scattered with the blocks themselves all over the disk and must be retrieved in order. Indexed
allocation solves this problem by bringing all the pointers together into one location called the
"index block". The figure below shows indexed allocation.
Each file has its own index block, which is an array of disk block addresses. The i-th entry in the
index block points to the i-th block of the file. The directory contains the address of the index
block. To find and read the i-th block, we use the pointer in the i-th entry of the index block.
Advantage:
It supports direct access without suffering from external fragmentation, because any free block
on the disk can satisfy a request for more space.
Disadvantage:
Indexed allocation suffers from wasted space. The pointer overhead of the index block is
generally greater than the pointer overhead of linked allocation.
For example, consider a common case in which we have a file of only 1 or 2 blocks.
With linked allocation we lose the space of only 1 pointer per block. With indexed allocation an
entire index block must be allocated even if only 1 or 2 of its pointers will be used, so the
remaining space is wasted.
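Both the direct-access advantage and the wasted-space example can be sketched together. The
index-block capacity of 128 pointers and the block addresses are assumptions for illustration.

```python
UNUSED = None
INDEX_CAPACITY = 128          # pointers per index block (assumed)

# Index block for a 2-block file: only 2 of the 128 entries are used.
index_block = [217, 618] + [UNUSED] * (INDEX_CAPACITY - 2)

def block_address(i):
    """Direct access: the i-th index entry points to the i-th file block."""
    return index_block[i]

print(block_address(1))       # one array lookup reaches any block directly
wasted = index_block.count(UNUSED)
print(wasted)                 # 126 pointer slots allocated but never used
```

A single lookup reaches any block, unlike the pointer chase of linked allocation, but for this
small file 126 of the 128 index entries are pure overhead.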
There is another drawback: even if a single user works on this single-level system, it may be
difficult to remember the names of the files as the number of files grows. Therefore the two
major difficulties are naming and grouping.
Advantages:
i. Single level directory is very simple, easy to understand, maintain and easy to implement.
ii. The software design is also very simple.
iii. The ability to locate files is very fast as there is only one directory.
Disadvantages:
i. It is not suitable for multiuser systems. For example, if two users A and B use the same
filename for different files, the file of user A or B may be overwritten.
ii. It is difficult to remember the names of the files if they are very large in number.
See the figure above: it indicates that there are different users, each with their own directory.
The process is very simple: when a user logs on, he/she can access only his/her own directory
and the files present in it. There might be different users like user1, user2, user3, etc., each
owning files and directories. In Unix, every object (devices, directories, files) is treated as a
file.
There are many advantages to this type of structure. One of them is isolation, which keeps each
user separate from the others; but if two users need to share files with each other, this isolation
makes it difficult. We can call this structure a two-level structure, which is also called an
"inverted root tree".
Here we use the term path name, which is nothing but the exact location where a file is placed.
The same file name can be used by different users without conflict. This structure gives efficient
searching; however, since files are separated only by user, grouping of files cannot be done.
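The path-name idea can be sketched with a two-level dictionary: the master directory maps user
names to user directories, which map file names to contents. The user and file names here are
hypothetical.

```python
# Two-level directory: master directory -> user directory -> file
root = {
    "user1": {"report.txt": "blocks of user1's report"},
    "user2": {"report.txt": "blocks of user2's report"},  # same name, no clash
}

def lookup(path):
    """Resolve a path name of the form /user/file."""
    _, user, name = path.split("/")
    return root[user][name]

print(lookup("/user2/report.txt"))   # finds user2's file, not user1's
```

Because each lookup starts from the user's own directory, two users may reuse the same file
name, which is exactly the isolation property described above.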