OS_Nov_2024.pdf - Compiled Answers
PART C - (3 × 10 = 30 marks)
Answer any THREE questions, each in 500 words.
First-Come, First-Served (FCFS): Processes are executed in the order of arrival. It’s
simple but can lead to the “convoy effect,” where short processes wait behind long
ones, increasing average waiting time. For example, if processes P1 (10ms), P2 (5ms),
and P3 (8ms) arrive sequentially, P1 finishes at 10ms, P2 at 15ms, and P3 at 23ms,
with P2 waiting 10ms unnecessarily.
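A minimal sketch of that calculation in C (the burst times from the example are hard-coded; all processes are assumed to arrive at time 0):

#include <stdio.h>

int main(void) {
    /* Burst times for P1, P2, P3 from the example; all arrive at time 0. */
    int burst[] = {10, 5, 8};
    int n = 3, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        int wait = clock;          /* time spent waiting before first run */
        clock += burst[i];         /* process runs to completion (FCFS)   */
        total_wait += wait;
        printf("P%d: waits %2d ms, finishes at %2d ms\n", i + 1, wait, clock);
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}

This reproduces the figures above: P1 finishes at 10ms, P2 at 15ms, P3 at 23ms, with P2 waiting 10ms.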
Shortest Job Next (SJN) or Shortest Process Next (SPN): The process with the
shortest execution time is scheduled next. It minimizes waiting time but requires prior
knowledge of process length, which is impractical. For instance, with P1 (5ms), P2
(10ms), and P3 (2ms), P3 runs first (finishing at 2ms), then P1 (at 7ms), then P2 (at
17ms), reducing total waiting time.
Round Robin (RR): Each process gets a fixed time slice (quantum), and the CPU
switches to the next process after the slice expires. It’s fair and prevents starvation but
can increase context-switching overhead if the quantum is too short. With a 4ms
quantum, P1 (10ms), P2 (5ms), and P3 (8ms) cycle as P1 (4ms), P2 (4ms), P3
(4ms), P1 (4ms), P2 (1ms), P3 (4ms), and finally P1 (2ms).
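That timeline can be verified with a small round-robin simulation; a sketch in C, again assuming all three processes arrive at time 0:

#include <stdio.h>

int main(void) {
    int remaining[] = {10, 5, 8};   /* P1, P2, P3 burst times (ms) */
    int n = 3, quantum = 4, clock = 0, left = 3;

    /* Cycle through the ready processes, giving each at most one quantum. */
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            remaining[i] -= slice;
            clock += slice;
            printf("P%d runs %d ms (t=%d)\n", i + 1, slice, clock);
            if (remaining[i] == 0) {
                printf("P%d completes at %d ms\n", i + 1, clock);
                left--;
            }
        }
    }
    return 0;
}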
Priority Scheduling: Processes are assigned priorities, and the highest-priority
process runs first. Preemptive versions allow higher-priority tasks to interrupt lower
ones. It’s effective but can cause starvation if low-priority processes are indefinitely
delayed, mitigated by aging (gradually increasing priority).
Multilevel Queue Scheduling: Processes are divided into queues (e.g., system,
interactive, batch) with different scheduling algorithms per queue. Higher-priority
queues get more CPU time, ensuring responsiveness for interactive tasks.
Multilevel Feedback Queue (MFQ): Extends multilevel queues by allowing process
movement between queues based on behavior (e.g., long CPU bursts move to lower
queues). It adapts dynamically, balancing fairness and efficiency.
These algorithms are evaluated using metrics like turnaround time, waiting time, and
throughput. FCFS suits simple systems, while RR and MFQ variants are common in modern
OSs; Linux used an MFQ-style O(1) scheduler before replacing it with the Completely Fair
Scheduler (CFS). The choice depends on workload and system goals, with preemptive
algorithms (e.g., RR) enhancing responsiveness in interactive and time-sharing systems.
The resource allocation graph represents processes and resources as nodes, joined by request
edges (process to resource) and assignment edges (resource to process). A cycle
indicates a potential deadlock. For example, consider three processes (P1, P2, P3) and two
resources (R1, R2). P1 holds R1 and requests R2, P2 holds R2 and requests R1, and P3 is
idle. If P1 and P2 enter this state simultaneously, a cycle forms (P1 → R2 → P2 → R1 →
P1), signaling a deadlock. The OS uses a wait-for graph, derived by collapsing resource
nodes, to confirm the cycle.
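A sketch in C of the cycle check on the wait-for graph, with the example above encoded as an adjacency matrix (an edge Pi → Pj means Pi waits for a resource Pj holds):

#include <stdio.h>

#define N 3  /* P1, P2, P3 */

/* wait_for[i][j] = 1 means Pi waits for a resource held by Pj.
 * From the example: P1 waits for P2 (via R2), P2 waits for P1 (via R1). */
int wait_for[N][N] = {
    {0, 1, 0},
    {1, 0, 0},
    {0, 0, 0},
};

/* DFS: returns 1 if a cycle is reachable from node u.
 * state: 0 = unvisited, 1 = on current path, 2 = done. */
int dfs(int u, int state[]) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (state[v] == 1) return 1;                 /* back edge: cycle */
        if (state[v] == 0 && dfs(v, state)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    int state[N] = {0};
    for (int u = 0; u < N; u++)
        if (state[u] == 0 && dfs(u, state)) {
            printf("deadlock: cycle found involving P%d\n", u + 1);
            return 0;
        }
    printf("no cycle: no deadlock\n");
    return 0;
}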
Alternatively, a Banker’s-style algorithm (illustrated here for a single resource type) tracks
allocated and available resources. With processes P1 (needs 3, holds 1), P2 (needs 2, holds 1),
and 2 units available, if P1 requests 2 more and P2 requests 1, the system checks whether
granting either leaves enough for the other; here a safe sequence exists (e.g., P2 finishes and
releases its units, then P1 can run). If no safe sequence lets every process complete, the state
is unsafe, and a detection algorithm reports deadlock once the outstanding requests can never
be satisfied.
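A sketch of the safety check for this single-resource example in C (the need and hold arrays carry the figures from the example above; need is the additional units each process still requires):

#include <stdio.h>

#define N 2

int main(void) {
    int need[N] = {2, 1};   /* P1 still needs 2 units, P2 needs 1 */
    int hold[N] = {1, 1};   /* units currently held               */
    int avail   = 2;        /* free units                         */
    int done[N] = {0};

    /* Repeatedly pick any process whose remaining need fits in avail;
     * when it finishes, it releases everything it holds. Either order
     * works for this example. */
    int progress = 1, finished = 0;
    while (progress) {
        progress = 0;
        for (int i = 0; i < N; i++) {
            if (!done[i] && need[i] <= avail) {
                avail += hold[i];    /* process completes and releases */
                done[i] = 1;
                finished++;
                progress = 1;
                printf("P%d can finish; avail now %d\n", i + 1, avail);
            }
        }
    }
    if (finished == N) printf("safe state: all processes can complete\n");
    else               printf("unsafe: possible deadlock\n");
    return 0;
}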
Recovery involves process termination (e.g., killing P1) or resource preemption (e.g., seizing
R1 from P1). In a database system, if two transactions lock tables A and B circularly,
detection triggers a rollback of one, resolving the deadlock. Modern OSs like Windows use
timeout mechanisms alongside graph-based detection, balancing overhead and accuracy.
22. What are the various techniques used for mapping virtual addresses to real
addresses under paging? Explain.
Paging is a memory management technique that maps virtual addresses to physical
(real) addresses using a page table, enabling efficient memory utilization and
protection. Virtual memory is divided into fixed-size pages, and physical memory into
frames, with mapping techniques ensuring seamless translation.
Single-Level Page Table: Each process has a page table mapping virtual page
numbers (VPNs) to physical frame numbers (PFNs). The virtual address splits into a
page number and offset. The CPU’s Memory Management Unit (MMU) uses the page
table base register to locate the table, indexes it by the VPN to find the PFN, and
combines the PFN with the offset to form the physical address. For example, a 32-bit
address with 4KB pages has 20 bits for the VPN and 12
for the offset. This is simple but memory-intensive for large address spaces.
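A sketch of that split-and-combine arithmetic in C (the tiny page table is a stand-in for a real per-process table, and the address value is illustrative):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12                      /* 4KB pages -> 12 offset bits */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

int main(void) {
    /* Toy page table: VPN -> PFN (a stand-in for the real table). */
    uint32_t page_table[] = {7, 3, 9, 1};

    uint32_t vaddr  = 0x00002ABC;            /* example virtual address */
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* top 20 bits             */
    uint32_t offset = vaddr & OFFSET_MASK;   /* low 12 bits             */
    uint32_t pfn    = page_table[vpn];       /* table lookup            */
    uint32_t paddr  = (pfn << PAGE_SHIFT) | offset;

    printf("vaddr 0x%08x -> vpn %u, offset 0x%03x -> paddr 0x%08x\n",
           vaddr, vpn, offset, paddr);
    return 0;
}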
Multi-Level Page Table: For large virtual spaces, a hierarchical structure (e.g., two-
level in x86) reduces memory use. The VPN is split into multiple indices (e.g., page
directory and page table indices), each pointing to the next level. The final level yields
the PFN. This is efficient for sparse memory but adds lookup overhead.
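For instance, the four 9-bit indices of x86-64's four-level scheme can be extracted as follows (a sketch; the address value is illustrative):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t vaddr = 0x00007f1234567abcULL;  /* example 48-bit address */

    /* x86-64 four-level paging: 9 bits per level, 12-bit page offset. */
    unsigned pml4 = (vaddr >> 39) & 0x1ff;   /* level 4 (page map level 4)  */
    unsigned pdpt = (vaddr >> 30) & 0x1ff;   /* level 3 (page dir pointers) */
    unsigned pd   = (vaddr >> 21) & 0x1ff;   /* level 2 (page directory)    */
    unsigned pt   = (vaddr >> 12) & 0x1ff;   /* level 1 (page table)        */
    unsigned off  =  vaddr        & 0xfff;   /* page offset                 */

    printf("pml4=%u pdpt=%u pd=%u pt=%u offset=0x%03x\n",
           pml4, pdpt, pd, pt, off);
    return 0;
}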
Inverted Page Table: Instead of one entry per virtual page, it keeps one entry per
physical frame, locating a mapping by hashing the process ID and VPN. Because the
table size scales with physical memory rather than with each process's virtual address
space, it saves space, though lookups require collision resolution.
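A minimal sketch of an inverted-table lookup with chained collision resolution in C (frame count, hash, and PIDs are illustrative; a real design also folds the PID into the hash):

#include <stdio.h>
#include <stdint.h>

#define NFRAMES 8   /* one table entry per physical frame */

/* Each entry records which (pid, vpn) pair occupies the frame, plus a
 * chain link for hash collisions. */
struct entry { int pid; uint32_t vpn; int next; };

struct entry table[NFRAMES];
int bucket[NFRAMES];   /* hash bucket -> first frame in its chain */

int lookup(int pid, uint32_t vpn) {
    int f = bucket[vpn % NFRAMES];           /* illustrative hash */
    while (f != -1) {
        if (table[f].pid == pid && table[f].vpn == vpn)
            return f;                        /* frame number = PFN */
        f = table[f].next;
    }
    return -1;                               /* not resident: page fault */
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) bucket[i] = -1;

    /* Frames 0 and 1 hold the same VPN for two different processes,
     * so they collide and are chained in one bucket. */
    table[0] = (struct entry){1, 0x2A, 1};
    table[1] = (struct entry){2, 0x2A, -1};
    bucket[0x2A % NFRAMES] = 0;

    printf("pid 2, vpn 0x2A -> frame %d\n", lookup(2, 0x2A));
    printf("pid 3, vpn 0x2A -> frame %d\n", lookup(3, 0x2A));
    return 0;
}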
Translation Lookaside Buffer (TLB): A hardware cache stores recent VPN-to-PFN
mappings. On a TLB hit, translation is fast; on a miss, the page table is consulted, and
the entry is cached. TLBs significantly boost performance, with hit rates often
exceeding 90%.
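A sketch of the hit/miss path in C (a tiny fully associative TLB with FIFO replacement in front of a toy page table; all sizes and values are illustrative):

#include <stdio.h>
#include <stdint.h>

#define TLB_SIZE 4

struct tlb_entry { uint32_t vpn, pfn; int valid; };
struct tlb_entry tlb[TLB_SIZE];
int next_slot;   /* trivial FIFO replacement */

uint32_t page_table[16] = {7, 3, 9, 1, 8, 2, 6, 5};   /* toy VPN -> PFN */

uint32_t translate(uint32_t vpn) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            printf("TLB hit  vpn %u\n", vpn);
            return tlb[i].pfn;
        }
    /* Miss: walk the page table, then cache the translation. */
    printf("TLB miss vpn %u\n", vpn);
    uint32_t pfn = page_table[vpn];
    tlb[next_slot] = (struct tlb_entry){vpn, pfn, 1};
    next_slot = (next_slot + 1) % TLB_SIZE;
    return pfn;
}

int main(void) {
    translate(2);   /* miss, fills TLB */
    translate(2);   /* hit             */
    translate(5);   /* miss            */
    return 0;
}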
Modern OSs like Linux use multi-level paging with TLBs, supporting 64-bit architectures.
For instance, x86-64 uses four-level paging, handling vast virtual spaces (up to 2^48 bytes)
while minimizing memory overhead. These techniques ensure flexibility, security (via page
permissions), and efficient memory sharing.
Sequential Access: Data is accessed in a linear, consecutive order, starting from the
beginning of the file. Each read or write operation moves the file pointer to the next
record. This method is efficient for files processed in order, such as logs or audio
streams. For example, reading a text file line by line using a tape drive (historically)
or a sequential file in a program relies on this. The OS maintains a current position
pointer, advancing it with each operation. While simple and fast for ordered data, it’s
inefficient for non-sequential access, requiring rewinding or fast-forwarding, which
can be time-consuming on mechanical devices.
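A sketch of sequential access in C (the file name is illustrative; each read advances the file position, so the data is consumed strictly in order):

#include <stdio.h>

int main(void) {
    FILE *fp = fopen("log.txt", "r");   /* illustrative file name */
    if (!fp) { perror("fopen"); return 1; }

    char line[256];
    /* Each fgets call reads from the current position and advances it,
     * so the file is consumed line by line, in order. */
    while (fgets(line, sizeof line, fp))
        fputs(line, stdout);

    fclose(fp);
    return 0;
}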
Direct Access (Random Access): Data can be accessed at any position without
reading preceding content, using an offset or record number. This is ideal for
databases or indexed files where specific records (e.g., a customer record by ID) are
retrieved. The OS uses file allocation tables (e.g., FAT) or inodes to locate blocks
directly. For instance, in a disk file, the OS calculates the physical block address from
the offset, enabling instant jumps. Modern systems like NTFS support this with
efficient indexing, though it requires more complex file system management and can
lead to fragmentation if not handled properly.
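A sketch of direct access in C (file name and record size are illustrative; the offset calculation jumps straight to the requested record):

#include <stdio.h>

#define RECORD_SIZE 64   /* illustrative fixed record size */

int main(void) {
    FILE *fp = fopen("customers.dat", "rb");   /* illustrative file */
    if (!fp) { perror("fopen"); return 1; }

    long n = 41;                               /* record to fetch */
    char record[RECORD_SIZE];

    /* Compute the byte offset and jump there directly, without
     * reading any of the records that precede it. */
    if (fseek(fp, n * RECORD_SIZE, SEEK_SET) == 0 &&
        fread(record, RECORD_SIZE, 1, fp) == 1)
        printf("read record %ld\n", n);

    fclose(fp);
    return 0;
}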
Indexed Access: An index table maps keys (e.g., record IDs) to file locations,
allowing rapid access to specific records. Common in database systems, the index acts
as a lookup mechanism, stored separately or within the file. For example, a library
catalog might use an indexed file to find a book by ISBN, jumping directly to its
storage location. The OS maintains the index, updating it with file modifications. This
method balances speed and flexibility but increases overhead due to index
maintenance and storage.
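A sketch of indexed access in C (a small sorted in-memory index maps keys to byte offsets; the keys and offsets are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct index_entry { long key; long offset; };

/* Index sorted by key, as the file system or library would maintain it. */
struct index_entry idx[] = {
    {1001, 0}, {1005, 64}, {1042, 128}, {1099, 192},
};

int cmp(const void *a, const void *b) {
    long ka = ((const struct index_entry *)a)->key;
    long kb = ((const struct index_entry *)b)->key;
    return (ka > kb) - (ka < kb);
}

int main(void) {
    struct index_entry want = {1042, 0}, *hit;
    hit = bsearch(&want, idx, sizeof idx / sizeof idx[0],
                  sizeof idx[0], cmp);
    if (hit)
        printf("key %ld -> seek to offset %ld, then read the record\n",
               hit->key, hit->offset);
    return 0;
}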
Keyed Access: A variation of indexed access, it uses keys (e.g., names or numbers) to
locate data, often in hierarchical or B-tree structures. Used in advanced file systems or
databases, it supports complex queries. For instance, a payroll system might access
employee records by name via a B-tree index. The OS ensures index integrity, making
it suitable for large, dynamic datasets, though it requires significant computational
resources.
These methods are implemented differently across file systems. UNIX uses inodes for direct
access, while Windows supports random access via the Master File Table (MFT). The choice
depends on application needs—sequential for streaming, random for databases. Modern OSs
optimize these with caching and prefetching, enhancing performance. For example, Linux’s
ext4 uses extent-based allocation to improve direct access speed, adapting to diverse
workloads.
Device Management: The subsystem identifies and initializes devices during boot,
maintaining a device table with details like type, status, and drivers. For example, it
detects a USB drive and loads the appropriate driver, ensuring compatibility across
diverse hardware.
I/O Scheduling: It prioritizes and schedules I/O requests to optimize performance
and fairness. Algorithms like FCFS (First-Come, First-Served) or Shortest Seek Time
First (SSTF) for disk I/O minimize latency. For instance, in a multi-user system, it
queues print jobs, preventing resource contention.
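A sketch of SSTF ordering in C (the request queue and initial head position are illustrative values; from the current head, the closest pending request is always serviced next):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int pending[] = {98, 183, 37, 122, 14, 124, 65, 67};  /* track numbers */
    int n = 8, head = 53, total = 0;
    int served[8] = {0};

    for (int k = 0; k < n; k++) {
        /* Pick the unserved request closest to the current head. */
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            int d = abs(pending[i] - head);
            if (!served[i] && (best == -1 || d < best_dist)) {
                best = i;
                best_dist = d;
            }
        }
        served[best] = 1;
        total += best_dist;
        head = pending[best];
        printf("seek to %3d (moved %3d)\n", head, best_dist);
    }
    printf("total head movement: %d tracks\n", total);
    return 0;
}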
Buffering: Data is temporarily stored in buffers to handle speed mismatches between
devices and the CPU. For example, when reading from a slow disk, the kernel buffers
data in memory, allowing the CPU to process it without waiting, improving
throughput.
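A userspace analogy of the same idea in C (a sketch: reading in large buffer-sized chunks so processing runs at memory speed rather than device speed; the file name stands in for a slow device):

#include <stdio.h>

#define BUF_SIZE 4096   /* one buffer-sized chunk per device read */

int main(void) {
    FILE *in = fopen("input.dat", "rb");   /* stands in for a slow device */
    if (!in) { perror("fopen"); return 1; }

    char buf[BUF_SIZE];
    size_t n, total = 0;

    /* Each fread fills the buffer from the device; the inner loop then
     * works entirely on memory instead of waiting per byte. */
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        for (size_t i = 0; i < n; i++)
            total += (unsigned char)buf[i];   /* stand-in processing */
    }
    printf("checksum: %zu\n", total);
    fclose(in);
    return 0;
}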
Caching: Frequently accessed data is cached in memory (e.g., disk blocks in the page
cache) to reduce I/O operations. The kernel manages cache consistency, flushing
changes to devices as needed, enhancing performance in systems like Linux.
Interrupt Handling: The subsystem processes hardware interrupts generated by
devices, signaling completion or errors. For instance, a keyboard interrupt triggers the
kernel to read input, ensuring real-time responsiveness.
Error Handling: It detects and manages I/O errors (e.g., disk failure or network
drop), retrying operations or notifying users. This ensures system reliability, with logs
maintained for diagnostics.
Device Driver Interface: The kernel provides a standardized interface for device
drivers, abstracting hardware specifics. Drivers translate OS commands into device-
specific instructions, as seen in graphics card drivers for rendering.
Synchronization: It coordinates I/O operations among multiple processes, using
semaphores or locks to prevent race conditions. For example, two processes writing to
a printer are serialized to avoid data corruption.
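A sketch of that serialization using a POSIX mutex in C (the "printer" here is just standard output):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t printer_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread holds the lock for its whole write, so the output of the
 * two "jobs" can never interleave. */
void *print_job(void *arg) {
    const char *name = arg;
    pthread_mutex_lock(&printer_lock);
    for (int i = 0; i < 3; i++)
        printf("[%s] line %d\n", name, i);
    pthread_mutex_unlock(&printer_lock);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, print_job, "job-A");
    pthread_create(&b, NULL, print_job, "job-B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}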
Security and Access Control: The subsystem enforces permissions, ensuring only
authorized processes access devices. For instance, it restricts raw disk access to
privileged users, enhancing system integrity.
Modern OSs like Windows and Linux implement these via layered architectures, with the
kernel I/O subsystem interacting with user-space libraries (e.g., POSIX I/O). For example,
Linux’s VFS (Virtual File System) layer unifies I/O across devices, while device files (/dev)
provide a uniform interface. This subsystem is vital for multitasking, supporting real-time
systems (e.g., automotive controllers) and high-performance computing, adapting to evolving
hardware like SSDs with tailored scheduling.