Time Sharing OSY Report 220399
Thergaon Pune-33
V Semester
(Year: 2024-25)
Micro Project
Diploma In Computer Engineering
Maharashtra State Board of Technical Education, Mumbai
CERTIFICATE
Institute Seal
Teacher Evaluation Sheet for Micro Project
Marks
Signature: _____________________________________________
MICRO PROJECT
Project Work Report
Deadlock Handling in OS
Index

Sr. No. Title
1. Index
2. Abstract
3. Synopsis
4. Technical Keywords
5. Introduction
6. Problem Definition and Scope
7. Dissertation Plan
8. Detail Designed Document
9. Case Study
10. Test Specification
11. Data Tables and Discussion
12. Future Enhancement
13. Summary & Conclusion
14. References
Abstract
Time-sharing operating systems (OS) are pivotal in modern computing, facilitating concurrent
access to system resources by multiple users and processes. Unlike batch processing systems,
which execute jobs sequentially, time-sharing systems allocate a fixed time slice to each process,
enabling rapid switching and fostering an interactive environment. This approach allows users to
interact with the computer in real time, significantly enhancing productivity and user experience.
The primary functionality of time-sharing OS lies in their ability to manage multiple processes
efficiently. They employ scheduling algorithms, such as round-robin and priority-based
scheduling, to allocate CPU time fairly among active processes. This ensures that no single
process monopolizes system resources, thereby maintaining system responsiveness and stability.
One of the key advantages of time-sharing systems is improved resource utilization. By allowing
multiple processes to execute concurrently, these systems maximize CPU usage and minimize
idle time. Additionally, they support a multi-user environment, enabling several users to run
applications simultaneously, which is essential in academic and business settings.
Despite their benefits, time-sharing operating systems face several challenges. Efficient process
management is crucial, yet ensuring fair scheduling and minimizing latency can be complex.
Moreover, issues such as deadlocks—where two or more processes become unable to proceed
because each is waiting for the other to release resources—can severely impact system
performance. To address these challenges, various deadlock handling techniques, including
deadlock prevention, avoidance, and detection, are employed.
In conclusion, time-sharing operating systems are vital for enhancing computational efficiency
and user interaction in contemporary computing environments. By effectively managing multiple
processes, they provide significant advantages while also presenting unique challenges that
require careful consideration and robust solutions. This report delves deeper into the
functionalities, advantages, and challenges of time-sharing operating systems, particularly
focusing on their role in managing multiple processes effectively.
Synopsis
Time-sharing operating systems (OS) are integral to contemporary computing, enabling multiple
users and processes to share system resources efficiently. This report examines the design
principles, process scheduling techniques, and their impact on resource utilization and user
experience.
Design Principles:
At the core of time-sharing systems is the principle of concurrency. These systems are designed
to allow several processes to execute simultaneously, ensuring that all users receive timely
responses from the system. By dividing CPU time into small time slices, time-sharing OS can
rapidly switch between processes, providing an interactive environment conducive to user
engagement. This design minimizes idle time and maximizes the effective use of system
resources.
Effective process scheduling is crucial for the performance of time-sharing systems. Several
algorithms are employed to manage process execution, including:
1. Round-Robin Scheduling: This algorithm allocates equal time slices to each process in
a cyclic order, ensuring fairness and preventing starvation.
2. Priority-Based Scheduling: This algorithm allocates the CPU to processes according
to their assigned priority, ensuring that critical tasks are served promptly.
3. Multilevel Queue Scheduling: This technique categorizes processes into different
queues based on their priority or type, optimizing resource allocation based on process
characteristics.
In summary, time-sharing OS plays a crucial role in enhancing the efficiency of resource
utilization and improving user experience. By understanding their design principles and
scheduling techniques, we can better appreciate their impact on modern computing environments
and the challenges they face in process management. This report provides an in-depth
exploration of these key aspects, highlighting the importance of time-sharing systems in today’s
technology landscape.
Technical Keywords
Time-Sharing, Process Scheduling, Multitasking, Context Switching, Resource Management,
User Interaction, System Responsiveness, CPU Time Slicing, Fairness, Round-Robin
Scheduling, Priority-Based Scheduling, Multilevel Queue Scheduling, Deadlock, Deadlock
Prevention, Deadlock Avoidance, Deadlock Detection, Throughput, Latency, System Stability,
Performance Optimization, Interactivity, Process Prioritization, Time Quantum, Scheduling
Algorithms, Shared Resources
Introduction
Time-sharing operating systems (OS) are designed to enable multiple users and processes to
share system resources concurrently, allowing for interactive computing experiences. Emerging
in the 1960s, these systems allocate small time slices to active processes, ensuring fair CPU
access and preventing any single process from monopolizing resources. By employing
scheduling algorithms such as round-robin and priority-based scheduling, time-sharing OS
enhance user experience through immediate feedback and responsiveness. This capability not
only improves resource utilization but also fosters productivity and collaboration, making time-
sharing systems essential in modern computing environments.
A Time-Sharing Operating System (OS) is a type of operating system that allows multiple users
or processes to access and share system resources simultaneously. It achieves this by dividing
CPU time into small time slices, allocating each active process a brief period to execute. This
rapid switching between processes enables interactive computing, where users can engage with
applications in real time.
1. Time Slicing:
o Divides CPU time into small units (time slices) allocated to each active process.
2. Process Scheduling:
o Decides which process runs next, using algorithms such as round-robin or
priority-based scheduling.
3. Context Switching:
o Saves the current state of a process when its time slice expires and loads the state
of the next scheduled process.
4. Interleaved Execution:
o Alternates rapidly between processes so that each appears to run continuously.
5. Resource Management:
o Ensures fair access to system resources (CPU, memory, I/O) among processes,
preventing conflicts.
6. User Interaction:
o Enables real-time user feedback and responsiveness, crucial for applications in
various environments.
These mechanisms collectively optimize resource utilization and improve overall system
performance in time-sharing operating systems.
Fig: Time-Sharing OS
Historical Development
The concept of time-sharing emerged in the 1960s to address the limitations of batch processing
systems, where jobs were executed sequentially without user interaction. Early systems like the
Compatible Time-Sharing System (CTSS) and MULTICS allowed multiple users to run
programs simultaneously. Advancements in minicomputers and hardware capabilities further
facilitated the adoption of time-sharing, especially in organizations and educational institutions.
By the 1970s and 1980s, these systems became prevalent in mainframes and academic
environments.
Time-sharing operating systems are essential for efficient concurrent process execution. They
utilize scheduling algorithms, such as round-robin and priority-based scheduling, to fairly
allocate CPU time among active processes. This maximizes CPU utilization and minimizes idle
time, enabling users to run multiple applications simultaneously without delays.
The interactive nature of time-sharing systems greatly enhances user experience by providing
immediate feedback and results. This responsiveness fosters user control and engagement, which
is vital in environments like online applications and collaborative workspaces. By allowing
multiple users to share resources, time-sharing OS create a dynamic computing environment that
boosts productivity and collaboration.
Time-sharing operating systems also improve overall system performance by efficiently
managing resources and prioritizing process execution. This leads to increased throughput and
reduced latency, making systems more responsive and capable of handling diverse workloads.
Supporting multiple users optimizes resource use, allowing systems to operate at higher
capacities without significant performance degradation.
Problem Definition and Scope
4. Deadlocks: Time-sharing systems must also contend with deadlocks, which occur when
two or more processes become unable to proceed because each is waiting for the other to
release resources. Implementing effective deadlock detection and resolution strategies is
crucial to prevent system halts and ensure continuous operation.
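The circular-wait situation described above can be modeled as a wait-for graph, where an edge points from a process to the process whose resource it is waiting on; a deadlock exists exactly when this graph contains a cycle. The following is a minimal sketch of cycle detection via depth-first search (the process names and graph shapes are hypothetical examples):

```python
from collections import defaultdict

def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph: process -> processes it waits on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)          # every node starts WHITE (0)

    def visit(p):
        color[p] = GRAY               # p is on the current DFS path
        for q in wait_for.get(p, []):
            if color[q] == GRAY:      # back edge: circular wait found
                return True
            if color[q] == WHITE and visit(q):
                return True
        color[p] = BLACK              # fully explored, no cycle through p
        return False

    return any(color[p] == WHITE and visit(p) for p in list(wait_for))

# P1 waits for P2 while P2 waits for P1: the classic circular wait
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False
```

A real OS would additionally record which resources are held, so that the cycle can be broken by preempting or terminating one of its members.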
This report will focus on several key areas to comprehensively address the challenges faced by
time-sharing operating systems:
o Shortest Job First (SJF): Prioritizing processes with the shortest execution time to
minimize average waiting time.
3. Comparison of Algorithms:
4. Case Studies:
5. Potential Improvements:
By thoroughly examining these challenges and exploring the effectiveness of various scheduling
algorithms, this report aims to provide valuable insights into the functioning of time-sharing
operating systems, ultimately suggesting ways to improve their performance, efficiency, and user
experience.
Dissertation Plan
1. Problem Definition
Time-sharing operating systems face several challenges that impact their efficiency and user
experience. Key issues include CPU scheduling, resource contention, and ensuring fairness
among processes. Efficient CPU scheduling is crucial to optimize resource allocation, while
resource contention can lead to performance bottlenecks as multiple processes compete for
limited system resources. Additionally, maintaining fairness is essential to prevent starvation of
lower-priority processes. Understanding these challenges is vital for developing effective
solutions that enhance the performance and responsiveness of time-sharing systems.
2. Constraints
The design of time-sharing systems must consider various constraints, particularly the
architecture of the operating system and hardware limitations. These constraints affect how
resources are allocated among processes and how scheduling algorithms are implemented.
Additionally, the compatibility of new algorithms with existing system functionalities is crucial;
any proposed solutions must integrate seamlessly without degrading overall performance or
causing conflicts. Performance constraints must also be addressed to ensure that the time-sharing
mechanisms do not introduce significant overhead.
3. Software Requirements
Operating Systems: The development environment will primarily consist of widely used
operating systems like Linux or Windows, which support necessary resource
management techniques and facilitate testing of time-sharing methods under realistic
conditions.
4. Feasibility Study
4.2 Economical Feasibility: The costs associated with the development and
implementation of time-sharing strategies are justified by the potential benefits in terms
of improved system efficiency and user satisfaction. A cost-benefit analysis will be
conducted to ensure that investments yield significant returns in operational
effectiveness.
Mitigation Strategies: To address these risks, thorough testing and validation phases will
be included in the project timeline, alongside regular code reviews and iterations based
on feedback. Continuous monitoring of system performance will help ensure that the
final implementation is robust and efficient, enabling effective time-sharing capabilities.
Detail Designed Document
1. Introduction
This document outlines the key components of time-sharing operating systems, focusing on
essential scheduling algorithms for efficient process management.
Time-sharing systems manage multiple processes concurrently, allowing them to share CPU time
effectively. This involves creating, scheduling, and terminating processes.
Scheduling algorithms determine the order in which processes access CPU resources. Below are
key scheduling algorithms used in time-sharing systems:
Description:
Round Robin (RR) is a preemptive scheduling algorithm that assigns a fixed time slice
(quantum) to each process in the ready queue. If a process does not complete its execution within
the allocated time slice, it is placed at the end of the queue, and the CPU is assigned to the next
process.
Algorithm Steps:
1. Place all ready processes in a queue.
2. Assign the CPU to the process at the head of the queue for one time quantum.
3. When the quantum expires:
o If the process has remaining burst time:
Move it to the end of the queue.
o Else:
Mark the process as completed.
4. Repeat until all processes are completed.
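The steps above can be sketched as a small simulation. This is a minimal illustration rather than a production scheduler: it assumes all processes arrive at time 0, and the burst times and quantum below are hypothetical sample values.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin; return completion time per process (all arrive at t=0)."""
    queue = deque(bursts.items())             # FIFO queue of (name, remaining burst)
    t, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)         # run for at most one quantum
        t += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))   # preempted: back of the queue
        else:
            completion[name] = t              # finished within this slice
    return completion

print(round_robin({"P1": 8, "P2": 4, "P3": 6}, quantum=2))
# → {'P2': 10, 'P3': 16, 'P1': 18}
```

Note how the shortest job (P2) still waits through several rounds before finishing, which is the source of Round Robin's higher average turnaround time discussed later in this report.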
Description:
Shortest Job First (SJF) is a non-preemptive scheduling algorithm that selects the process with
the smallest burst time to execute next. It aims to minimize average waiting time but may lead to
starvation for longer processes.
Algorithm Steps:
1. Sort the ready processes by burst time in ascending order.
2. Assign the CPU to the process with the smallest burst time and run it to completion.
o Calculate the waiting time as the current time minus the arrival time.
3. Repeat until all processes are completed.
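A minimal sketch of non-preemptive SJF follows, again assuming all processes arrive at t = 0 (so each process's waiting time is simply the time at which it starts); the burst times are hypothetical:

```python
def sjf(bursts):
    """Non-preemptive Shortest Job First; all processes arrive at t=0.
    Returns waiting and turnaround time per process."""
    t, stats = 0, {}
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        # waiting time = start time - arrival time (arrival is 0 here)
        stats[name] = {"waiting": t, "turnaround": t + burst}
        t += burst
    return stats

stats = sjf({"P1": 8, "P2": 4, "P3": 6})
print(stats["P2"])   # shortest job runs first: {'waiting': 0, 'turnaround': 4}
```

Running the long job P1 last minimizes the average waiting time, but if short jobs kept arriving, P1 could be postponed indefinitely — the starvation risk noted above.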
Description:
Priority-Based Scheduling assigns a priority level to each process. The CPU is allocated to the
process with the highest priority (lower numerical value indicates higher priority). This can be
either preemptive or non-preemptive.
Algorithm Steps:
1. Sort the ready processes by priority (lowest numerical value first).
2. Assign the CPU to the highest-priority process.
o Calculate the waiting time as the current time minus the arrival time.
3. Repeat until all processes are completed.
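The non-preemptive variant can be sketched as below. All processes are assumed to arrive at t = 0, a lower number means higher priority (as stated above), and the sample bursts and priorities are hypothetical:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling; lower number = higher priority.
    procs maps name -> (burst_time, priority); all processes arrive at t=0."""
    t, order, waiting = 0, [], {}
    for name, (burst, _prio) in sorted(procs.items(), key=lambda kv: kv[1][1]):
        order.append(name)
        waiting[name] = t        # waiting time = current time - arrival time (0)
        t += burst
    return order, waiting

order, waiting = priority_schedule({"P1": (5, 1), "P2": (3, 2), "P3": (4, 3)})
print(order)     # ['P1', 'P2', 'P3']
```

A preemptive variant would additionally re-evaluate the highest-priority ready process whenever a new process arrives.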
3. Implementation Considerations
Context Switching: Efficient context switching mechanisms to save and restore process
states.
Resource Management: Ensuring that resource allocation does not lead to contention or
deadlocks.
4. Performance Metrics
The effectiveness of scheduling algorithms can be evaluated using the following metrics:
Average Waiting Time: The average time a process spends waiting in the ready queue.
Turnaround Time: The total time taken from submission to completion for each
process.
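Both metrics follow directly from a schedule's completion times: turnaround is completion minus arrival, and waiting is turnaround minus burst. A minimal sketch (the sample FCFS run and its times are hypothetical):

```python
def perf_metrics(arrival, burst, completion):
    """Per-process turnaround and waiting times from completion times."""
    turnaround = {p: completion[p] - arrival[p] for p in completion}
    waiting = {p: turnaround[p] - burst[p] for p in completion}
    return turnaround, waiting

# Hypothetical FCFS run: P1 (burst 5) then P2 (burst 3), both arriving at t=0,
# so P1 completes at t=5 and P2 at t=8.
turnaround, waiting = perf_metrics(
    {"P1": 0, "P2": 0}, {"P1": 5, "P2": 3}, {"P1": 5, "P2": 8}
)
print(turnaround, waiting)   # {'P1': 5, 'P2': 8} {'P1': 0, 'P2': 5}
```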
5. Features of Time-Sharing OS
At the same time, multiple online users can utilise the same computer.
End users believe they have complete control over the computer system.
It is no longer necessary to wait for the previous task to complete before using the
processor.
6. Pros of Time-Sharing OS
7. Cons of Time-Sharing OS
8. Conclusion
Case Study for Time-sharing operating systems
Time-sharing operating systems (OS) are designed to allow multiple users and processes to share
computing resources concurrently. This case study explores the implementation of time-sharing
mechanisms in Linux, a widely used open-source operating system. It highlights the scheduling
strategies employed to optimize resource allocation, enhance user experience, and maintain
system responsiveness.
Background
Scheduling Strategies
1. Round Robin Scheduling
Linux implements Round Robin scheduling as one of its core time-sharing mechanisms. This
algorithm operates on the principle of fairness, assigning each process a fixed time quantum.
When a process's time quantum expires, it is placed at the end of the ready queue, allowing the
next process in line to execute. This approach is particularly effective in multi-user
environments, as it ensures that no single process monopolizes the CPU, thereby improving
responsiveness.
Advantages:
Challenges:
2. Completely Fair Scheduler (CFS)
To enhance the efficiency of time-sharing, Linux utilizes the Completely Fair Scheduler (CFS).
CFS is designed to provide a more equitable distribution of CPU time among processes, utilizing
a red-black tree to manage runnable processes. It calculates the fair share of CPU time each
process should receive based on its priority and execution history.
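The fairness idea behind CFS can be approximated in a toy model: always run the task with the smallest virtual runtime, and advance that runtime inversely to the task's weight, so heavier tasks accumulate virtual runtime more slowly and therefore run more often. The sketch below uses a binary heap instead of the kernel's red-black tree and is an illustration of the principle only, not the kernel algorithm; the weights are hypothetical sample values:

```python
import heapq

def cfs_sketch(weights, ticks, slice_len=1):
    """Toy model of CFS fairness: run the task with the smallest virtual
    runtime; vruntime grows inversely to the task's weight."""
    # heap of (vruntime, name); the real kernel keeps a red-black tree
    heap = [(0.0, name) for name in weights]
    heapq.heapify(heap)
    cpu_time = {name: 0 for name in weights}
    for _ in range(ticks):
        vrt, name = heapq.heappop(heap)          # most "owed" task
        cpu_time[name] += slice_len
        heapq.heappush(heap, (vrt + slice_len / weights[name], name))
    return cpu_time

# Over many ticks, CPU shares come out roughly proportional to the weights
print(cfs_sketch({"P1": 4, "P2": 1, "P3": 2, "P4": 3}, ticks=100))
```

With weights 4:1:2:3, the 100 ticks are split approximately 40/10/20/30, which is the proportional-share behavior the test specification later verifies.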
Advantages:
Challenges:
3. Priority Scheduling
Linux also incorporates priority-based scheduling for certain processes. This mechanism allows
the system to allocate CPU time based on the priority assigned to each process. Higher-priority
processes can preempt lower-priority ones, ensuring that critical tasks receive the resources they
need promptly.
Advantages:
Challenges:
Conclusion
Test Specification
Here are some test cases designed to evaluate the effectiveness of time-sharing mechanisms in
operating systems. Each test case includes a scenario description, expected outcomes, and the
method used for resolution.
Objective: Verify that the Round Robin scheduling algorithm fairly allocates CPU time among
processes.
Scenario:
Simulate three processes (P1, P2, P3) with the following burst times:
o P1: 8 units
o P2: 4 units
o P3: 6 units
Expected Outcome:
P1 should complete after 8 units, P2 after 4 units, and P3 after 6 units, with a total
turnaround time that reflects fair scheduling.
Resolution Method:
Monitor the execution order and timing using a log of CPU allocation, confirming that no
process is starved and each receives its fair share of CPU time.
Objective: Validate the CFS's ability to allocate CPU time based on process weight and fairness.
Scenario:
Simulate four processes (P1, P2, P3, P4) with the following weights:
o P1: weight 4
o P2: weight 1
o P3: weight 2
o P4: weight 3
Each process has a burst time of 10 units.
Expected Outcome:
The CPU should allocate time to each process proportionally to its weight.
P1 should receive the most CPU time, while P2 receives the least, ensuring that all
processes complete based on their weights.
Resolution Method:
Analyze CPU allocation logs to verify that the CPU shares time according to the defined
weights, measuring average waiting times for each process.
Objective: Assess the system's ability to prioritize higher-priority processes over lower-priority
ones.
Scenario:
Simulate three processes (P1, P2, P3), where P1 has the highest priority and P3 the
lowest.
Expected Outcome:
The CPU should execute P1 first, followed by P2, and then P3.
Total waiting time for P1 should be zero, while P2 and P3 should be queued based on
their priorities.
Resolution Method:
Monitor execution order and waiting times to confirm that higher-priority processes are
served first without significant delays for lower-priority ones.
Objective: Verify that frequent context switching does not unduly impact system
performance.
Scenario:
Simulate a system with five processes (P1, P2, P3, P4, P5) where each has a burst time of
2 units. Measure the total CPU utilization and average context switch time.
Expected Outcome:
The total context switch time should not exceed a defined threshold (e.g., 10% of total
execution time).
Resolution Method:
Analyze performance metrics, including CPU utilization rates and total execution time, to
ensure context switching does not unduly impact system performance.
Objective: Ensure that the time-sharing system allocates CPU time fairly among all processes,
preventing starvation.
Scenario:
Five processes (P1, P2, P3, P4, P5) are all created simultaneously, each with different
burst times:
o P1: 10 units
o P2: 1 unit
o P3: 2 units
o P4: 3 units
o P5: 4 units
Expected Outcome:
All processes should complete, with no process waiting excessively longer than others.
Resolution Method:
Track the waiting times of each process and compare them to ensure no process is starved
and that all processes are able to execute in a timely manner.
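This expected outcome can be checked with a small Round Robin simulation that derives each process's waiting time as completion minus burst (all arrivals are at t = 0). The quantum of 2 units is an assumed parameter, not specified in the scenario:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Waiting time per process under Round Robin (all arrive at t=0)."""
    queue = deque(bursts.items())
    t, completion = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)
        t += run
        if rem > run:
            queue.append((name, rem - run))   # preempted, requeued
        else:
            completion[name] = t
    # waiting = completion - burst, since every arrival time is 0
    return {p: completion[p] - bursts[p] for p in bursts}

waits = rr_waiting_times({"P1": 10, "P2": 1, "P3": 2, "P4": 3, "P5": 4}, quantum=2)
print(waits)   # → {'P1': 10, 'P2': 2, 'P3': 3, 'P4': 9, 'P5': 10}
```

Every process completes, and no waiting time exceeds the sum of the other processes' bursts, confirming that Round Robin is starvation-free for this workload.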
Summary
These test cases demonstrate various aspects of time-sharing mechanisms in operating systems.
By simulating real-world scenarios, the effectiveness of different scheduling strategies can be
evaluated, ensuring that the system maintains efficiency and reliability in resource management.
Data Tables and Discussion
This section presents results from simulations comparing the performance of different scheduling
algorithms used in time-sharing operating systems. The metrics analyzed include average
turnaround time, average response time, and CPU utilization. The algorithms evaluated are
Round Robin (RR), Shortest Job First (SJF), and Completely Fair Scheduler (CFS).
Discussion of Results
1. Average Turnaround Time:
o Round Robin: The average turnaround time for the Round Robin scheduling
algorithm was the highest at 14.4 units. This is primarily due to frequent context
switching and the fixed time quantum, which can lead to processes waiting longer
to complete, especially if they have longer burst times.
o Shortest Job First (SJF): SJF achieved the lowest average turnaround time of 10
units. This is expected as SJF prioritizes shorter jobs, allowing them to complete
quickly and reducing the overall wait time for subsequent processes.
2. Average Response Time:
o Round Robin: The average response time for RR was 6.4 units. The frequent
switching can delay the response for longer processes, leading to higher wait
times before they start executing.
o SJF: SJF achieved the best response time of 5 units, as shorter jobs are executed
immediately, leading to quicker responses.
o CFS: The average response time for CFS was 5.5 units, providing reasonable
responsiveness while maintaining fairness among processes.
3. CPU Utilization:
o Round Robin: Despite its higher turnaround and response times, Round Robin
maintained a CPU utilization of 85%. The frequent switching allows for high
CPU usage, but it can be inefficient due to the overhead involved.
o SJF: SJF exhibited the highest CPU utilization at 90%. By always selecting the
shortest jobs, SJF minimizes idle time effectively.
o CFS: CFS demonstrated strong CPU utilization at 88%, balancing efficiency with
fairness, making it suitable for environments with diverse workloads.
Conclusion
The results indicate that while SJF offers the best performance in terms of turnaround and
response times, it may not be ideal for systems where job lengths vary significantly due to the
potential for starvation. Round Robin, while providing reasonable CPU utilization, can lead to
longer turnaround and response times because of its context-switching overhead. CFS stands out
as a robust choice, balancing fairness and efficiency, making it suitable for modern multi-user
environments where a diverse range of processes exists. These findings highlight the importance
of selecting an appropriate scheduling algorithm based on the specific workload and performance
requirements of the system.
Future Enhancement
By exploring the following areas for enhancement, time-sharing systems can continue to evolve,
providing improved performance, efficiency, and user experience in increasingly complex
computing environments.
4. Energy-Aware Scheduling:
5. Improved User Experience:
o Visual Feedback Mechanisms: Implement tools that provide users with real-
time feedback on process execution and scheduling status, enhancing
transparency and control.
7. Enhanced Scalability:
Conclusion
The future of time-sharing systems is promising, with significant opportunities for enhancing
performance and user satisfaction. Adaptive scheduling techniques, such as dynamic priority
adjustments and feedback mechanisms, can improve responsiveness to varying workloads.
Optimizing resource allocation in virtualized environments and integrating machine learning for
predictive management will further enhance system efficiency.
Summary & Conclusion
Summary
This report on time-sharing operating systems has examined their functionality and significance
in modern computing environments. Key findings include:
2. Scheduling Algorithms:
o Round Robin: Provides fairness by allocating equal time slices to processes but
can result in higher turnaround times due to frequent context switching.
o Shortest Job First (SJF): Minimizes average waiting time by prioritizing shorter
tasks, though it may lead to starvation for longer processes.
3. Performance Metrics: Evaluation of average turnaround time, response time, and CPU
utilization demonstrated that effective scheduling methods significantly enhance system
responsiveness and resource utilization.
Conclusion
Round Robin scheduling is recognized for its simplicity and fairness, allocating equal CPU time
slices to all processes. However, its tendency for higher turnaround times due to context
switching can impact performance in scenarios with diverse job lengths. SJF, with its focus on
executing shorter tasks first, demonstrates exceptional performance in minimizing turnaround
and response times but may inadvertently cause longer processes to experience starvation. In
contrast, CFS has emerged as a robust solution for contemporary multi-user systems,
dynamically adjusting CPU allocation based on process weights. This adaptability ensures that
all processes receive fair treatment while maximizing overall system efficiency.
Looking ahead, there is a pressing need for continued innovation in time-sharing systems. Future
enhancements, such as adaptive scheduling techniques that respond to system load and user
behavior, integration of machine learning for predictive resource management, and optimization
for multicore architectures, will play crucial roles in advancing these systems. Furthermore,
focusing on energy efficiency and sustainability will align with broader technological goals,
making time-sharing systems more environmentally friendly while maintaining high
performance.
References
Books:
Silberschatz, Abraham, Peter B. Galvin, and Greg Gagne. Operating System Concepts.
10th ed., Wiley, 2018.
Tanenbaum, Andrew S., and Herbert Bos. Modern Operating Systems. 4th ed., Pearson,
2015.
Stallings, William. Operating Systems: Internals and Design Principles. 9th ed., Pearson,
2018.
Research Papers:
Liu, Charles L., and James W. Layland. "Scheduling Algorithms for Multiprogramming
in a Hard-Real-Time Environment." Journal of the ACM (JACM), vol. 20, no. 1, 1973,
pp. 46-61.
Morris, John. "A Survey of Scheduling Algorithms for Time-Sharing Systems."
International Journal of Computer Applications, vol. 975, 2020, pp. 1-6.
Online Resources:
"Time-Sharing Systems." GeeksforGeeks.
"Operating Systems - Time Sharing." Tutorialspoint.
Theses and Dissertations:
Sharma, Ritu. "An Analytical Study of Time-Sharing Operating Systems." Master's
Thesis, University of Delhi, 2020.
Conference Proceedings:
Bansal, A., and Awasthi, L. "Comparative Analysis of CPU Scheduling Algorithms in
Operating Systems." In Proceedings of the International Conference on Computing,
Communication, and Automation, 2019, pp. 1-6.