Paging in Operating System
Last Updated: 22 May, 2025
Paging is the process of moving parts of a program, called pages, from secondary storage (like a hard drive) into the main memory (RAM). The main idea behind paging is to break a program into smaller fixed-size blocks called pages.
To keep track of where each page is stored in memory, the operating system uses a page table. This table shows the connection between the logical page numbers (used by the program) and the physical page frames (actual locations in RAM). The memory management unit uses the page table to convert logical addresses into physical addresses, so the program can access the correct data in memory.
Paging in Memory Management

Paging is a memory management technique that addresses common challenges in allocating and managing memory efficiently. Here is why paging is needed as a memory management technique:
Memory isn't always available in a single block: Programs often need more memory than what is available in a single continuous block. Paging breaks memory into smaller, fixed-size pieces, making it easier to allocate scattered free spaces.
Process size can increase or decrease: Programs don't need to occupy contiguous memory, so they can grow dynamically without needing to be moved.

Terminologies Associated with Memory Control

Logical Address Space or Virtual Address Space: The Logical Address Space, also known as the Virtual Address Space, refers to the set of all possible logical addresses that a process can generate during its execution. It is a conceptual range of memory addresses used by a program and is independent of the actual physical memory (RAM).
Physical Address Space: The Physical Address Space refers to the total range of addresses available in a computer's physical memory (RAM). It represents the actual memory locations that can be accessed by the system hardware to store or retrieve data.

Important Features of Paging

Logical to physical address mapping: Paging divides a process's logical address space into fixed-size pages. Each page maps to a frame in physical memory, enabling flexible memory management.
Fixed page and frame size: Pages and frames have the same fixed size. This simplifies memory management and improves system performance.
Page table entries: Each logical page is represented by a page table entry (PTE). A PTE stores the corresponding frame number and control bits.
Number of page table entries: The page table has one entry per logical page. Thus, its size equals the number of pages in the process's address space.
Page table stored in main memory: The page table is kept in main memory. This can add overhead when processes are swapped in or out.

Working of Paging

When a process requests memory, the operating system allocates one or more page frames to the process and maps the process's logical pages to the physical page frames.
When a program runs, its pages are loaded into any available frames in the physical memory.
Each program has a page table, which the operating system uses to keep track of where each page is stored in physical memory. When a program accesses data, the system uses this table to convert the program's address into a physical memory address.
Steps Involved in Paging

Step 1 - Divide Memory: Logical address space → Pages, Physical address space → Frames.
Step 2 - Allocate Pages: Load pages into available frames.
Step 3 - Page Table: Map logical pages to physical frames.
Step 4 - Translate Address: Convert logical addresses to physical addresses.
Step 5 - Handle Page Fault: Load missing pages from disk.
Step 6 - Run Program: The CPU uses the page table during execution.
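The steps above can be sketched in a few lines of Python. This is a minimal illustration, not how an OS implements it: the page table here is a hypothetical dictionary mapping page numbers to frame numbers, with None marking a page that is not resident in memory (a page fault).

```python
PAGE_SIZE = 1024  # 1 K words per page/frame (assumed for illustration)

# Hypothetical page table: logical page number -> physical frame number.
# None marks a page that is not currently loaded into a frame.
page_table = {0: 5, 1: 2, 2: None, 3: 7}

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page = logical_address // PAGE_SIZE      # page number (p)
    offset = logical_address % PAGE_SIZE     # page offset (d)
    frame = page_table.get(page)
    if frame is None:
        # In a real OS this would trigger loading the page from disk.
        raise LookupError(f"page fault: page {page} is not in memory")
    return frame * PAGE_SIZE + offset        # physical address

print(translate(1050))  # page 1, offset 26 -> frame 2 -> 2*1024 + 26 = 2074
```

Accessing logical address 1050 lands in page 1 at offset 26; since page 1 is mapped to frame 2, the physical address is 2074. Accessing any address in page 2 would raise the page-fault error instead.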
If Logical Address Space = 128 M words = 2^7 * 2^20 = 2^27 words, then Logical Address = log2(2^27) = 27 bits.
If Physical Address Space = 16 M words = 2^4 * 2^20 = 2^24 words, then Physical Address = log2(2^24) = 24 bits.
The mapping from virtual to physical address is done by the Memory Management Unit (MMU), which is a hardware device, and this mapping is known as the paging technique.
The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames. The Logical Address Space is also split into fixed-size blocks, called pages.

Page Size = Frame Size

Example:
Physical Address = 12 bits, then Physical Address Space = 4 K words
Logical Address = 13 bits, then Logical Address Space = 8 K words
Page size = frame size = 1 K words (assumption)
Number of frames = Physical Address Space / Frame Size = 4K / 1K = 4 = 2^2
Number of pages = Logical Address Space / Page Size = 8K / 1K = 8 = 2^3
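The arithmetic of the example can be checked directly. The sketch below just recomputes the frame/page counts and the bit widths from the assumed 12-bit physical and 13-bit logical addresses with 1 K-word pages:

```python
import math

physical_bits = 12        # physical address width (from the example)
logical_bits = 13         # logical address width (from the example)
page_size = 1 << 10       # 1 K words, page size = frame size (assumed)

physical_space = 1 << physical_bits   # 4 K words
logical_space = 1 << logical_bits     # 8 K words

num_frames = physical_space // page_size   # 4K / 1K = 4 = 2^2
num_pages = logical_space // page_size     # 8K / 1K = 8 = 2^3

frame_bits = int(math.log2(num_frames))    # 2 bits for the frame number
page_bits = int(math.log2(num_pages))      # 3 bits for the page number
offset_bits = int(math.log2(page_size))    # 10 bits for the offset

print(num_frames, num_pages, frame_bits, page_bits, offset_bits)
```

Note that frame number + offset bits (2 + 10) recover the 12-bit physical address, and page number + offset bits (3 + 10) recover the 13-bit logical address.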
The address generated by the CPU is divided into:

Page number (p): the number of bits required to represent a page in the Logical Address Space.
Page offset (d): the number of bits required to represent a particular word within a page.

A Physical Address is divided into two main parts:

Frame number (f): the number of bits required to represent a frame in the Physical Address Space.
Frame offset (d): the number of bits required to represent a particular word within a frame.

So, a physical address in this scheme may be represented as follows:
Physical Address = (Frame Number << Number of Bits in Frame Offset) + Frame Offset where "<<" represents a bitwise left shift operation.
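This formula translates directly into code. The sketch below assumes the 1 K-word frames from the example above, so the frame offset occupies 10 bits:

```python
OFFSET_BITS = 10  # 1 K-word frames (assumed), so the offset needs 10 bits

def physical_address(frame_number, frame_offset):
    """Compose a physical address by shifting the frame number past the offset bits."""
    return (frame_number << OFFSET_BITS) + frame_offset

# Frame 3, offset 26: 3 * 1024 + 26 = 3098
print(physical_address(3, 26))
```

The left shift by OFFSET_BITS is equivalent to multiplying the frame number by the frame size, placing the frame number in the high-order bits and the offset in the low-order bits.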
Hardware Implementation of Paging

The hardware implementation of the page table can be done by using dedicated registers. But the use of registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a tag and a value. When this memory is used, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.

Hit and Miss in Paging

Main memory access time = m
If the page table is kept in main memory, then
Effective access time = m (for the page table) + m (for the particular page) = 2m
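The effective access time can be computed as a small function. The no-TLB case (2m) is from the text above; the TLB-weighted version uses the standard hit-ratio formula, which is an addition here and assumes a hypothetical TLB lookup time and hit ratio:

```python
def effective_access_time(m, tlb_hit_ratio=0.0, tlb_time=0.0):
    """Effective memory access time under paging.

    m: main-memory access time.
    Without a TLB (defaults), every reference costs one page-table
    access plus one data access: 2m.  With a TLB, a hit skips the
    page-table access in main memory.
    """
    hit_cost = tlb_time + m            # TLB hit: TLB lookup + data access
    miss_cost = tlb_time + m + m       # TLB miss: TLB lookup + page table + data
    return tlb_hit_ratio * hit_cost + (1 - tlb_hit_ratio) * miss_cost

print(effective_access_time(100))            # no TLB: 2 * 100 = 200
print(effective_access_time(100, 0.9, 10))   # 0.9*110 + 0.1*210 = 120.0
```

With a 90% hit ratio and a 10-unit TLB lookup, the average drops from 200 to 120 time units, which is why a TLB is essential for keeping paging overhead low.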
Read more about - TLB hit and miss
Advantages of Paging

Eliminates External Fragmentation: Paging divides memory into fixed-size blocks (pages and frames), so processes can be loaded wherever there is free space in memory. This prevents wasted space due to fragmentation.
Efficient Memory Utilization: Since pages can be placed in non-contiguous memory locations, even small free spaces can be utilized, leading to better memory allocation.
Supports Virtual Memory: Paging enables the implementation of virtual memory, allowing processes to use more memory than physically available by swapping pages between RAM and secondary storage.
Ease of Swapping: Individual pages can be moved between physical memory and disk (swap space) without affecting the entire process, making swapping faster and more efficient.
Improved Security and Isolation: Each process works within its own set of pages, preventing one process from accessing another's memory space.

Disadvantages of Paging

Internal Fragmentation: If the size of a process is not a perfect multiple of the page size, the unused space in the last page results in internal fragmentation.
Increased Overhead: Maintaining the page table requires additional memory and processing. For large processes, the page table can grow significantly, consuming valuable memory resources.
Page Table Lookup Time: Accessing memory requires translating logical addresses to physical addresses using the page table. This additional step increases memory access time, although Translation Lookaside Buffers (TLBs) can help reduce the impact.
I/O Overhead During Page Faults: When a required page is not in physical memory (page fault), it needs to be fetched from secondary storage, causing delays and increased I/O operations.
Complexity in Implementation: Paging requires sophisticated hardware and software support, including the Memory Management Unit (MMU) and algorithms for page replacement, which add complexity to the system.

Read more about - Memory Management Unit (MMU)
Also read - Multilevel Paging in Operating System
Also read - Paged Segmentation and Segmented Paging