Chapter 6 - Memory Management
Operating Systems
Chapter Objectives
To provide a detailed description of various ways of
organizing memory hardware.
To discuss various memory-management techniques,
including paging and segmentation.
Topics Included
Introduction
Basic Memory Management
Multiprogramming with variable partitions – swapping
Virtual Memory
Paging
Page Replacement Algorithms
Segmentation
Introduction
Memory is one of the most important resources of the computer system
that is used to store data and programs temporarily.
A program must be brought into memory and placed within a process for it
to be run.
Ideally programmers want memory that is
large
fast
non-volatile
Memory hierarchy
small amount of fast, expensive memory – cache
some medium-speed, medium price main memory
gigabytes of slow, cheap disk storage
The operating system coordinates how these memories are used.
Introduction(cont’d)
Part of the operating system that manages memory is called
the memory manager (MM).
The main functions of the MM are the following:
Keeping track of which parts of memory are in use and which parts are
free
Allocating and de-allocating memory to processes
Managing swapping between memory and disk when memory is not big
enough to hold all the processes
Basic Memory Management
Memory management systems can be divided into
two basic classes:
those that move processes back and forth between main
memory and disk during execution (swapping and paging),
and those that do not.
Monoprogramming without Swapping or Paging
Memory Partitioning
In order to run multiple programs in parallel, memory must be
partitioned.
There are two partitioning options – fixed partitioning and
dynamic partitioning.
Fixed partitioning – partitioning is done before processes
come to memory.
Dynamic partitioning – partitioning is done when processes
request memory space.
1. Fixed partitioning
It is the simplest and oldest partitioning technique
Variations of fixed partitioning
Equal sized fixed partitions
Unequal sized fixed partitions
Equal sized fixed partitions
The memory is partitioned into equal-sized partitions
Any process whose size is less than or equal to the partition size can be
loaded into any available partition
If all partitions are occupied with processes that are not ready to run, then
one of these processes is swapped-out and a new process is loaded or a
suspended process is swapped in and loaded
Advantage: It is simple to implement and requires minimal management
overhead
Problems:
A program may be too big to fit into a partition
Inefficient use of memory due to internal fragmentation.
Can happen if there is hole of size 15,000 bytes and a process needs 14,900 bytes;
Keeping a hole of size 100 bytes is not worth the effort so the process is allocated
15,000 bytes.
The size difference of 100 bytes is memory internal to a partition, but not being used
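The 15,000-byte example above can be stated as a one-line calculation; the numbers are the ones from the text.

```python
# Internal fragmentation for the example above: a 15,000-byte partition
# holds a 14,900-byte process, so 100 bytes inside the partition go unused.
partition_size = 15_000
process_size = 14_900
internal_fragmentation = partition_size - process_size
print(internal_fragmentation)  # 100
```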
Example of Fixed Partitioning
Unequal sized fixed partitioning
Memory is partitioned into unequal-sized fixed partitions
In this case there are two memory allocation schemes:
Using multiple queues
Using a single queue
Using multiple queues
Each partition has its own queue
A process is assigned to the smallest partition within which it will fit, and it can
only be loaded into the partition it belongs to
Advantage: Minimizes internal fragmentation
Disadvantage: Some partitions may remain unused while processes wait in
the queues of other partitions.
Unequal sized fixed partitioning(cont’d)
2. Dynamic partitioning
Partitions are created dynamically during the allocation of
memory.
The size and number of partitions vary throughout the
operation of the system.
Processes are allocated exactly as much memory as they
require.
If there is not enough main memory to hold all the
currently active processes, excess processes must be
kept on disk and brought in to run dynamically.
Dynamic partitioning(cont’d)
Dynamic partitioning leads to a situation in which there are
a lot of small useless holes in memory. This phenomenon
is called external fragmentation.
Solution to external fragmentation is compaction –
combining the multiple useless holes in to one big hole.
Problem: it requires a lot of CPU time. For instance, on a 1-GB
machine that can copy at a rate of 2 GB/sec (0.5 nsec/byte) it takes
about 0.5 sec to compact all of memory.
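The compaction-time figure in the text follows directly from the copy rate; here is the same arithmetic in code.

```python
# Compaction-time estimate from the text: copying 1 GB of memory
# at 2 GB/sec (0.5 ns per byte) takes about 0.5 seconds.
memory_bytes = 1 * 2**30   # 1 GB machine
copy_rate = 2 * 2**30      # 2 GB/sec
compaction_time = memory_bytes / copy_rate
print(compaction_time)  # 0.5
```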
Dynamic partitioning(cont’d)
Dynamic partitioning(cont’d)
Dynamic Memory Summary
Advantage
Dynamic memory allocation provides a better memory
utilization
Disadvantage
Management is more complex
Includes memory compaction overhead
Relocation and Protection
Multiprogramming introduces two essential problems
that must be solved – protection and relocation
Protection
Programs in other processes should not be able to reference
memory locations in a process for reading or writing purposes
without permission.
A user program shouldn’t be able to address a memory area
which is not in its partition.
A user process must not access any portion of the OS.
Usually a program in one process cannot branch to an instruction
in another process.
Without permission, a program in one process cannot access the
data area of another process.
Relocation and Protection(cont’d)
Relocation
Location of a program in main memory is unpredictable due to loading,
swapping and compaction.
To solve this problem, a distinction is made among several types of
addresses.
There are two types of addresses:
Logical address
It is a reference to a memory location independent of the current assignment of data to
memory.
Physical/absolute address
An actual location in main memory
The user program deals with logical addresses; it never sees the
real physical addresses.
The memory-mapping hardware converts logical addresses into
physical addresses.
Logical vs. Physical Address Space
The concept of a logical address space that is bound to a separate physical
address space is central to proper memory management
Logical address – generated by the CPU.
Physical address – address seen by the memory unit
Logical and physical addresses are:
The same in compile-time and load-time address-binding schemes;
They differ in execution-time address-binding scheme. In that case the logical
address is referred to as virtual address.
We use the terms logical address and virtual address interchangeably
Logical address space is the set of all logical addresses generated by a
program
Physical address space is the set of all physical addresses corresponding to a
given logical address space.
Memory-Management Unit (MMU)
Hardware device that at run time maps virtual
addresses to physical addresses
Relocation techniques
There are two possible solutions for relocation problems –
static relocation and dynamic relocation.
Static Relocation
When a process is first loaded, all relative memory references are
replaced by absolute addresses based on the base address of the
loaded process
It can be applied to processes which are always assigned to the
same partition, for instance in the case of the unequal-sized fixed-partition
scheme with multiple process queues.
It doesn’t solve the protection problem. A malicious program can
always construct a new instruction and jump to it.
Relocation techniques(cont’d)
Dynamic Relocation
When a process is dispatched to run, a special processor
register (the base register) is loaded with the starting address of
the program and a limit register is loaded with the ending address
of the partition
The limit and base registers are protected by the hardware so that
they cannot be modified by user programs
Hardware mechanisms translate relative
addresses to physical addresses at the time of execution of
the instructions that contain the reference.
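A minimal sketch of the base/limit check described above. The register values and the `MemoryError` fault are illustrative assumptions, not a model of any particular CPU.

```python
# Dynamic relocation: physical address = base + logical address,
# with the limit register enforcing protection.

def relocate(logical, base, limit):
    """Map a logical address to a physical one, faulting outside the partition."""
    physical = base + logical
    if physical > limit:  # reference falls outside the process's partition
        raise MemoryError("protection fault: address outside partition")
    return physical

# A process loaded at 40,000 in a partition that ends at 50,000 (assumed values).
print(relocate(1_000, base=40_000, limit=50_000))  # 41000
```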
Relocation techniques(cont’d)
Swapping
A process can be swapped temporarily out of memory to a backing store and then brought
back into memory for continued execution
Total physical memory space of all processes can exceed the real physical memory of the system.
Swapping(cont’d)
A process has three major sections - stack, data and code
With fixed partitions, the space given to a process is usually larger
than it asked for, and hence the process can expand if it needs to.
With dynamic partitions, since a process is allocated exactly as
much memory as it requires, moving and swapping processes
whenever a process grows dynamically creates overhead.
Most processes are expected to grow as they run, so it is a
good idea to allocate a little extra memory whenever a process is
loaded into memory.
As shown in the following diagram the extra space is located
between the stack and data segments because the code segment
doesn’t grow.
Swapping(cont’d)
Schematic View of Swapping
Memory Usage Management
When memory is assigned dynamically, the operating
system must manage it.
There are two ways to keep track of memory usage –
bitmap and linked list
Bitmaps
The memory is divided into fixed size allocation units.
Each allocation unit is represented by a bit in the bitmap. If the
bit is 0, the allocation unit is free; if it is 1, it is
occupied.
Example: Look at the following portion of memory. Let the size
of each allocation unit be 5 KB, as shown in the diagram.
Memory Management with Bitmaps
Memory Management with Bitmaps(cont’d)
The size of the allocation unit is an important design issue.
The smaller the allocation unit, the larger the bitmap,
which results in memory and CPU overhead: searching the bitmap
takes time.
E.g. assume an allocation unit of 4 bytes and a
computer with 800 KB of memory
Number of bits required = 800 KB / 4 B = 200 Kbits
Size of bitmap = 200 Kbits / 8 = 25 KB
Percentage of the memory used for the bitmap = (25/800) × 100%
≈ 3%
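The bitmap-overhead calculation above, carried out in code with the same numbers:

```python
# One bit tracks each 4-byte allocation unit of an 800 KB memory.
memory = 800 * 1024       # 800 KB in bytes
unit = 4                  # allocation unit size in bytes
bits = memory // unit     # one bit per unit -> 204,800 bits (200 Kbits)
bitmap_bytes = bits // 8  # 25,600 bytes = 25 KB
overhead = bitmap_bytes / memory * 100
print(bitmap_bytes // 1024, "KB,", overhead, "%")  # 25 KB, 3.125 %
```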
Memory Management with Linked Lists
Maintain a linked list of allocated and free memory segments
Each segment is either a process (P) or a hole (H) between two
processes and contains a number of allocation units
Each entry in the list consists of
Segment type: P/H (1 bit)
The address at which it starts
The length of the segment
A pointer to the next entry
Example 1: The memory in the bitmap example can be represented
using linked list as follows:
Example 2: bitmap and linked list
Memory Management with Linked Lists(cont’d)
The list is kept sorted by address. Sorting this way
makes it easy to update the list when a process
terminates or is swapped out.
A terminating process normally has two neighbors,
except when it is at the very beginning or end of the
list.
The neighbors can be either process or holes, leading
to four combinations as shown on the next slide:
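The neighbour-merging idea can be sketched in a few lines. Segments are kept sorted by address as `(kind, start, length)` tuples; this tuple layout is an illustrative assumption, not the only possible list representation.

```python
# Hole coalescing when a process terminates: mark the segment as a hole,
# then merge it with any adjacent hole on either side.

def free_segment(segments, index):
    """Turn segments[index] into a hole and merge with hole neighbours."""
    _, start, length = segments[index]
    segments[index] = ('H', start, length)
    # Merge with the following hole, if any.
    if index + 1 < len(segments) and segments[index + 1][0] == 'H':
        _, _, nlen = segments.pop(index + 1)
        length += nlen
        segments[index] = ('H', start, length)
    # Merge with the preceding hole, if any.
    if index > 0 and segments[index - 1][0] == 'H':
        _, pstart, plen = segments.pop(index - 1)
        segments[index - 1] = ('H', pstart, plen + length)
    return segments

# Hole on both sides: all three collapse into one big hole.
print(free_segment([('H', 0, 2), ('P', 2, 3), ('H', 5, 1)], 1))  # [('H', 0, 6)]
```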
Memory Management with Linked Lists(cont’d)
Memory Allocation Algorithms
Memory Allocation algorithms (Placement algorithms)
When a new process is created or swapped in, the OS
must decide which free block to allocate.
The following are some of the placement algorithms
First-Fit
It scans the memory from the beginning and chooses the first
available block that is large enough
It is the simplest and fastest algorithm
Disadvantage: Always searching from the beginning may result in
slow performance
Memory Allocation Algorithms(cont’d)
Next Fit
It starts scanning the memory from the location of the last
placement and chooses the next available block that is large
enough
Disadvantage: It may cause the last section of memory to be
quickly fragmented
Best Fit
It searches the entire list of available holes and chooses the block
that is closest in size to the request
Disadvantage: External fragmentation, and it is slower because it
scans the whole list.
Memory Allocation Algorithms(cont’d)
Worst Fit
It searches for the largest available hole, so that the hole
broken off will be big enough to be useful
Disadvantage: It is as slow as best fit
Quick Fit
It maintains separate list for some of the more common size
requests
Advantage: Finding a hole of the required size is extremely
fast
Disadvantage: When a process terminates or is swapped out,
finding its neighbors to see if a merge is possible is expensive.
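The first-, best- and worst-fit strategies above can be sketched over a list of free holes given as `(start, size)` pairs; the hole sizes below are illustrative assumptions.

```python
# Placement algorithms over a free-hole list. Each returns the chosen
# hole, or None if no hole is large enough.

def first_fit(holes, size):
    # First hole large enough, scanning from the beginning.
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    # Smallest hole that still fits the request.
    return min((h for h in holes if h[1] >= size),
               key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    # Largest hole, so the leftover piece stays usable.
    return max((h for h in holes if h[1] >= size),
               key=lambda h: h[1], default=None)

holes = [(0, 5), (20, 12), (50, 8)]  # (start, size) in KB, assumed layout
print(first_fit(holes, 8))   # (20, 12): first hole large enough
print(best_fit(holes, 8))    # (50, 8): closest in size to the request
print(worst_fit(holes, 8))   # (20, 12): the largest hole
```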
Virtual Memory
A process may be larger than the available main memory
In the early days overlays were used to solve this problem
The programmer divides the program into modules, and the main
program is responsible for switching the modules in and out of
memory as needed.
Although the actual work of swapping overlays in and out was done
by the system, the decision of how to split the program into pieces
had to be made by the programmer.
Drawback: The programmer must know how much memory is
available, and splitting the program wastes the programmer's time.
Virtual Memory(cont’d)
Pieces of a program are swapped between disk and
memory as needed.
Example, a 512-MB program can run on a 256-MB
machine by carefully choosing which 256 MB to keep in
memory at each instant, with pieces of the program
being swapped between disk and memory as needed.
Program generated addresses are called virtual
addresses and the set of such addresses form the
virtual address space.
The virtual address will go to the Memory Management
Unit (MMU) that maps the virtual addresses into physical
addresses.
Paging
There are two virtual memory implementation techniques
Paging
Segmentation
Paging
The virtual address space is divided into units called pages. The
corresponding units in the physical memory are called page frames.
The size of a frame is a power of 2, typically between 512 bytes and 16 MB
Pages and page frames have the same size
Transfers between memory and disk are always in units of pages
The operation of dividing a process into pages is done by the compiler or
the MM
Example: Assume a 64KB virtual address and each process is
divided into 4K pages and the physical memory size is 32KB.
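The example's page and frame counts follow directly from the sizes given:

```python
# 64 KB virtual address space / 4 KB pages -> 16 pages;
# 32 KB physical memory / 4 KB frames -> 8 page frames.
virtual_space = 64 * 1024
physical_memory = 32 * 1024
page_size = 4 * 1024
print(virtual_space // page_size)    # 16 pages
print(physical_memory // page_size)  # 8 page frames
```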
Virtual Memory…
Paging(cont’d)
Examples:
1. When the program tries to access address 0 using the instruction -
MOVE REG, 0
The MMU sees that this virtual address falls in page 0 (0-4095), which according
to its mapping is page frame 2 (8192 to 12287). It thus transforms the address to
8192 and outputs address 8192 onto the bus.
Virtual address ranges of the 16 pages (4 KB each):
Page 0: 0 – 4095          Page 8: 32768 – 36863
Page 1: 4096 – 8191       Page 9: 36864 – 40959
Page 2: 8192 – 12287      Page 10: 40960 – 45055
Page 3: 12288 – 16383     Page 11: 45056 – 49151
Page 4: 16384 – 20479     Page 12: 49152 – 53247
Page 5: 20480 – 24575     Page 13: 53248 – 57343
Page 6: 24576 – 28671     Page 14: 57344 – 61439
Page 7: 28672 – 32767     Page 15: 61440 – 65535
Paging(cont’d)
Example: Address bits = 16, page
size = 4 KB, logical address = 8196.
The following figure shows the
internal operation of the MMU with 16
4-KB pages. The 16-bit address is split into a 4-bit page
number and a 12-bit offset.
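The split can be written out as bit operations. The page table below is a hypothetical mapping chosen for illustration (page 2 assumed to map to frame 6); only the split itself comes from the example.

```python
# With 4 KB pages, the low 12 bits of a 16-bit virtual address are the
# offset and the high 4 bits are the page number.
PAGE_SIZE = 4096
page_table = {2: 6}  # page 2 -> frame 6 (assumed mapping)

def translate(vaddr):
    page = vaddr >> 12                # high 4 bits: page number
    offset = vaddr & (PAGE_SIZE - 1)  # low 12 bits: offset within page
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(8196))  # page 2, offset 4 -> 6*4096 + 4 = 24580
```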
Paging(cont’d)
Page Size
There are two issues:
The smaller the page size, the less internal
fragmentation, which optimizes the use of main memory.
The smaller the page size, the greater the number of pages,
and thus the larger the page table.
Paging(cont’d)
If an instruction in a process tries to access an address in an
unmapped page
The MMU causes the CPU to generate an interrupt, called a page
fault
The OS puts the interrupted process in a blocked state and takes
control
The OS chooses a page frame to remove from memory and
writes its contents back to the disk if it has been modified since it
was last loaded
The OS issues a disk I/O read request to fetch the page just
referenced into the page frame just freed, and the OS can dispatch
another process to run while the disk I/O is performed.
Page Replacement Algorithms
Page Replacement Algorithms (PRA)
When a page fault occurs, the OS has to choose a page to remove
from memory to make room for the page that has to be brought in.
Optimal PRA
It says replace the page that will not be used for the longest time
Drawback – it is impossible to know when a page will next be referenced
in the future (it is unrealizable)
Page Replacement Algorithms(cont’d)
FIFO algorithm uses the time when a page was
brought into memory, whereas the OPT algorithm
uses the time when a page is to be used.
Least Recently Used (LRU) PRA
Replace the page that has been used least recently in the past.
We can think of this strategy as the optimal page-replacement
algorithm looking backward in time, rather than forward.
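The FIFO/LRU contrast above can be made concrete with a small simulation. The reference string and the 3-frame memory are illustrative assumptions.

```python
# Count page faults under FIFO and LRU for the same reference string.
from collections import OrderedDict, deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict oldest arrival
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 9 8
```

On this string LRU saves one fault over FIFO because it keeps the recently re-used page 0 in memory where FIFO evicts it.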
Page Replacement Algorithms(cont’d)
Second Chance PRA
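The slide names the algorithm without detail; this is a minimal sketch of the usual scheme: pages are kept in FIFO order, but a referenced page gets its R bit cleared and one more trip through the queue before it can be evicted.

```python
# Second-chance page replacement: FIFO plus a reference (R) bit.
from collections import deque

def second_chance(refs, nframes):
    frames = deque()  # entries are [page, R_bit]
    faults = 0
    for p in refs:
        hit = next((f for f in frames if f[0] == p), None)
        if hit:
            hit[1] = 1            # page referenced again: set its R bit
            continue
        faults += 1
        if len(frames) == nframes:
            while frames[0][1]:   # R bit set: clear it, move to the tail
                old = frames.popleft()
                old[1] = 0
                frames.append(old)
            frames.popleft()      # R bit clear: evict this page
        frames.append([p, 0])
    return faults

print(second_chance([1, 2, 3, 1, 4], 3))  # 4
```

In the example, page 1 is referenced again before memory fills, so when page 4 faults, page 1 survives and page 2 is evicted instead.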
Segmentation
Memory-management scheme that supports user’s view of memory
A program is a collection of segments -- a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
Each segment can reside in a different part of memory; this is a way to circumvent the
contiguous-allocation requirement.
In a one-dimensional memory, these segments would have to be allocated
contiguous chunks of virtual address space
Segmentation(cont’d)
Example of Segmentation
Paging vs. Segmentation