os_qs

An Operating System (OS) is a software layer that manages hardware resources and provides services for computer programs, including process, memory, file, and device management. Key concepts include processes, threads, context switching, deadlock, virtual memory, and scheduling algorithms. The document also discusses synchronization mechanisms like semaphores, the differences between monolithic and microkernels, and techniques for efficient memory management such as paging and demand paging.

1. What is an Operating System?

An Operating System (OS) is a software layer between the
hardware and the user. It manages hardware resources (CPU,
memory, storage) and provides services for computer
programs. It acts as an intermediary, providing an environment
where users can execute programs efficiently and safely.
Key functions include:
• Process Management
• Memory Management
• File System Management
• I/O System Management
• Security and Protection
• User Interface (CLI/GUI)

2. What are the main functions of an Operating System?


The main functions are:
• Process Management: Scheduling and management of
processes (programs in execution).
• Memory Management: Keeping track of each byte in
memory, allocation and deallocation.
• File Management: Handling the reading/writing of files
and storage organization.
• Device Management: Managing device communication
via drivers.
• Security & Protection: Safeguarding data from
unauthorized access.
• User Interface: Command Line (CLI) or Graphical (GUI) to
interact with the system.

3. What is a Process and how is it different from a Program?


• A program is a passive entity, like a file stored on disk.
• A process is an active entity, a program in execution with
a process state (running, waiting, ready).
• A process also has:
  o Program Counter (PC)
  o Stack (function calls)
  o Heap (dynamic memory)
  o Data section (global variables)

4. Explain Process States.


Typical states are:
• New: Process is being created.
• Ready: Process is waiting to be assigned to a CPU.
• Running: Instructions are being executed.
• Waiting: Process is waiting for an event (e.g., I/O
completion).
• Terminated: Process has finished execution.
Processes switch between these states through events like
scheduling, I/O requests, interrupts, etc.

5. What is a Thread? How is it different from a Process?


• A Thread is a lightweight process.
• Threads within the same process share the same
memory space but have separate stacks and program
counters.
• Differences:
o Process: Heavyweight, separate memory.
o Thread: Lightweight, shared memory.
Threads improve performance through parallelism but
require careful synchronization.
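
A minimal sketch using POSIX threads (pthreads, assumed available on a Unix-like system; the worker function and values are made-up examples). It shows that threads share global data while each keeps its own stack variables; the unsynchronized update is left in deliberately to motivate the synchronization point above:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                 /* global: visible to every thread in the process */

void *worker(void *arg) {
    int local = *(int *)arg;    /* local: lives on this thread's own stack */
    shared += local;            /* unsynchronized update, for illustration only */
    printf("thread saw local=%d, shared=%d\n", local, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final shared=%d\n", shared);
    return 0;
}

Compile with -lpthread; a process-based version of the same work would need two separate address spaces and explicit IPC to share the counter.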

6. What is Context Switching?


Context Switching is the mechanism to save the state of a
running process and load the state of the next process. This
involves saving:
• CPU registers
• Program counter
• Memory maps
Importance: It allows multitasking, but adds overhead as
switching consumes CPU time.
7. What is Deadlock? What are the conditions for deadlock?
A deadlock occurs when a set of processes are blocked
because each process is holding a resource and waiting for
another resource held by another process.
Four Necessary Conditions (Coffman’s conditions):
• Mutual Exclusion: At least one resource must be held in
a non-sharable mode.
• Hold and Wait: Process holding resources can request
additional ones.
• No Preemption: Resources cannot be forcibly taken.
• Circular Wait: Set of processes are waiting for each other
in a cycle.
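
A small sketch (assuming POSIX threads; thread names t1/t2 are illustrative) showing all four conditions arising at once: each thread holds one mutex and waits for the other in opposite order, forming a circular wait:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {
    pthread_mutex_lock(&A);     /* holds A ... */
    sleep(1);
    pthread_mutex_lock(&B);     /* ... and waits for B (held by t2) */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *t2(void *arg) {
    pthread_mutex_lock(&B);     /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&A);     /* ... and waits for A (held by t1): circular wait */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);      /* almost certainly never returns: the threads deadlock */
    pthread_join(y, NULL);
    return 0;
}

Making both threads acquire A before B breaks the circular-wait condition, which is exactly the prevention idea discussed in question 8.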

8. How can Deadlock be prevented?


Deadlock can be handled using:
• Deadlock Prevention: Ensure at least one Coffman
condition is violated (e.g., request all resources at once).
• Deadlock Avoidance: Use algorithms like Banker’s
Algorithm to ensure safe resource allocation.
• Deadlock Detection and Recovery: Allow deadlocks but
detect them and recover.
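(The two-mutex sketch after question 7 becomes deadlock-free once both threads take the locks in the same fixed order, illustrating prevention by breaking circular wait.)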

9. What is Virtual Memory?


Virtual Memory is a technique where the OS uses hard disk
space as an extension of RAM, allowing execution of
processes that may not be fully in memory.
• It uses paging and segmentation to swap pages between
disk and RAM.
• Virtual memory provides the illusion of a very large
memory to programs, while only a part is physically in
RAM.

10. Explain Paging and Page Replacement Algorithms.


Paging:
• Logical memory is divided into fixed-size pages and physical
memory into frames of the same size.
• The OS maintains a Page Table to map virtual addresses
to physical addresses.
Page Replacement Algorithms:
• FIFO (First-In, First-Out): Oldest page replaced first.
• LRU (Least Recently Used): Page not used for the longest
time replaced.
• Optimal: Replace the page that will not be used for the
longest future time (ideal but impractical).
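
A minimal FIFO simulation in C (the reference string and frame count are made-up example values) that counts page faults by evicting the oldest page whenever a referenced page is not in a frame:

#include <stdio.h>

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};   /* example reference string */
    int n = sizeof(refs) / sizeof(refs[0]);
    int frames[3] = {-1, -1, -1};                  /* -1 means the frame is empty */
    int next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {                    /* page fault: evict the oldest page (FIFO) */
            frames[next] = refs[i];
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);
    return 0;
}

LRU would instead evict the frame whose page was referenced longest ago, which usually lowers the fault count for the same reference string.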

11. What is a System Call?


A System Call is a way for programs to interact with the OS.
When a program needs a service (e.g., file operations,
memory allocation, process control), it uses system calls.
Examples:
• open(), read(), write() for file handling
• fork(), exec() for process management
System calls switch the process mode from user mode to
kernel mode.
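
A short C sketch exercising these calls on a Unix-like system (the file path and echoed message are just examples):

#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* File handling via system calls: open(), read(), write() */
    char buf[256];
    int fd = open("/etc/hostname", O_RDONLY);   /* example path */
    if (fd >= 0) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            write(STDOUT_FILENO, buf, n);       /* echo the bytes to stdout */
        close(fd);
    }

    /* Process management via system calls: fork() + exec() */
    pid_t pid = fork();                         /* create a child process */
    if (pid == 0) {
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        _exit(1);                               /* only reached if exec fails */
    }
    wait(NULL);                                 /* parent waits for the child */
    return 0;
}

Each of these library wrappers traps into the kernel, which is where the user-mode to kernel-mode switch mentioned above happens.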

12. What is Scheduling? What are the types of Scheduling algorithms?

Scheduling is the method the OS uses to decide which
process runs next.
Types:
• FCFS (First Come First Serve): Processes served in the
order they arrive.
• SJF (Shortest Job First): Process with smallest burst time
first.
• Round Robin: Fixed time slice (quantum) assigned
cyclically.
• Priority Scheduling: Based on process priority.
Each has trade-offs between fairness, throughput, and
response time.
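
A tiny FCFS sketch in C (burst times are made-up values and all processes are assumed to arrive at t=0) showing how waiting time accumulates when processes simply run in arrival order:

#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};         /* CPU bursts of P1, P2, P3 (example values) */
    int n = 3, elapsed = 0;
    double total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d, runs %d\n", i + 1, elapsed, burst[i]);
        total_wait += elapsed;       /* time spent in the ready queue before running */
        elapsed += burst[i];         /* CPU stays busy until this burst ends */
    }
    printf("average waiting time = %.2f\n", total_wait / n);
    return 0;
}

SJF would sort the bursts shortest-first to minimize average waiting time, while Round Robin would interleave fixed quanta to improve response time.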

13. What is a Semaphore? How is it used?


A Semaphore is a synchronization primitive used to manage
concurrent processes.
• It is an integer variable modified by two atomic operations:
wait() (decrement) and signal() (increment).
• If a process calls wait() when the value is already 0, it
blocks until another process performs signal().
Types:
• Binary Semaphore: 0 or 1 (like a mutex lock).
• Counting Semaphore: Any non-negative integer value.
Usage: Prevent race conditions, synchronize access to shared
resources.
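
A minimal sketch using POSIX semaphores and threads (assumed available on a Linux-like system; counts and names are illustrative). A binary semaphore guards the shared counter so increments cannot interleave:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t sem;              /* binary semaphore (initial value 1) guarding `counter` */
int counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);     /* wait(): blocks if the value is already 0 */
        counter++;          /* critical section: only one thread at a time */
        sem_post(&sem);     /* signal(): releases the semaphore */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);            /* 0 = shared between threads, not processes */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* 200000 with the semaphore in place */
    sem_destroy(&sem);
    return 0;
}

A counting semaphore would simply be initialized to N to allow up to N concurrent holders (e.g., a pool of N identical resources).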

14. What is the difference between Monolithic Kernel and Microkernel?


Feature      | Monolithic Kernel                                | Microkernel
Design       | All OS services run in kernel space              | Only minimal services (IPC, scheduling) run in kernel space
Performance  | Faster (less context switching)                  | Slightly slower (more context switches)
Stability    | Less stable (one bug can crash the whole system) | More stable (services are isolated)
Example      | Linux, Unix                                      | Minix, QNX; modern Windows kernels are hybrid (a mix)

15. What is Demand Paging?


Demand Paging is a technique where pages are loaded into
memory only when required during program execution.
• Initially, no pages are loaded (or minimal set).
• Page faults occur when a process tries to access a page
not in RAM.
• OS loads the required page from disk.
This helps in efficient memory usage but can increase page
fault rate if not managed carefully.
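
A Linux-specific sketch (mincore() and MAP_ANONYMOUS are assumed available) that makes demand paging visible: mmap() only reserves virtual address space, and a physical frame appears only after the first access triggers a page fault:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    unsigned char resident;

    mincore(p, page, &resident);
    printf("before first touch: resident=%d\n", resident & 1);  /* expect 0 */

    p[0] = 'x';                       /* first write triggers a page fault;     */
                                      /* the kernel allocates a frame on demand */
    mincore(p, page, &resident);
    printf("after first touch:  resident=%d\n", resident & 1);  /* expect 1 */

    munmap(p, (size_t)page);
    return 0;
}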

16. What is a Race Condition?


A race condition happens when two or more processes or
threads access shared resources and try to change them at
the same time.
• The final outcome depends on the sequence of access →
Unpredictable results.
• Occurs commonly in multithreading.
Solution:
Use synchronization techniques (like mutex locks,
semaphores) to serialize access.
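
A small pthread sketch (the loop counts and worker name are illustrative) where two threads race on a shared counter; the read-modify-write steps interleave, so the printed total is usually well below the expected value:

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        counter++;   /* racy: load, add, store can interleave between threads */
        /* Fixed version: serialize the increment with the mutex instead:
         *   pthread_mutex_lock(&lock); counter++; pthread_mutex_unlock(&lock);
         */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}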

17. Difference between Preemptive and Non-preemptive Scheduling.


Aspect          | Preemptive Scheduling                    | Non-preemptive Scheduling
CPU Allocation  | CPU can be taken from a running process  | CPU is kept until the process finishes or blocks
Responsiveness  | More responsive to high-priority tasks   | Less responsive
Examples        | Round Robin, SRTF                        | FCFS, SJF (non-preemptive version)
Preemption is needed in real-time or interactive systems to
ensure fast reaction.

18. What is the difference between Paging and Segmentation?


Feature        | Paging                                 | Segmentation
Division       | Divides memory into fixed-size pages   | Divides memory into logical variable-size segments
Purpose        | Simplifies memory management           | Reflects program structure (e.g., code, stack, heap)
Fragmentation  | Internal fragmentation                 | External fragmentation
• Paging: Breaks memory into uniform chunks.
• Segmentation: Breaks memory based on logical parts.

19. What is Thrashing in OS?


Thrashing occurs when the system spends more time
swapping pages in and out of memory than executing
processes.
• It happens if too many processes are running with
insufficient RAM.
• Leads to a severe drop in system performance.
Solution:
• Reduce degree of multiprogramming.
• Use good page replacement algorithms.
• Allocate sufficient frames to each process.

20. Explain Banker’s Algorithm briefly.


Banker’s Algorithm is used for deadlock avoidance.
It checks whether a resource allocation request can be safely
granted without pushing the system into deadlock.
Steps:
1. Pretend to allocate the requested resource.
2. Check if the system remains in a "safe state".
3. If yes, proceed; if no, the process must wait.
It is called the "banker's" algorithm because it works like a
banker, granting a loan only if it can be safely repaid later.
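
A minimal sketch of the safety check at the heart of the algorithm (the matrices for 3 processes and 2 resource types are made-up example values): repeatedly find a process whose remaining need fits in what is available, pretend it finishes and releases its allocation, and declare the state safe if every process can finish this way:

#include <stdio.h>

#define P 3
#define R 2

int main(void) {
    int avail[R]    = {3, 3};
    int max[P][R]   = {{7, 3}, {3, 2}, {4, 2}};
    int alloc[P][R] = {{0, 1}, {2, 0}, {3, 0}};
    int need[P][R], finished[P] = {0};

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* what each process may still request */

    int progress = 1, done = 0;
    while (progress) {
        progress = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int can_run = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) can_run = 0;
            if (can_run) {                          /* pretend P_i runs to completion */
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];        /* and releases what it holds */
                finished[i] = 1;
                progress = 1;
                done++;
                printf("P%d can finish\n", i);
            }
        }
    }
    printf(done == P ? "state is SAFE\n" : "state is UNSAFE\n");
    return 0;
}

To decide on an actual request, the same check is run on the state as it would look after tentatively granting the request; the request is granted only if that state is still safe.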

(Figure: process state diagram)

(Figure: paging)
