Memory Management

Learning Objectives

By the end of this reading, you will be able to:

  • Understand the role of memory management in operating systems
  • Explain logical vs physical address spaces
  • Describe contiguous memory allocation schemes
  • Understand paging and its benefits over contiguous allocation
  • Explain how page tables work and their structure
  • Understand segmentation and its advantages
  • Describe the Translation Lookaside Buffer (TLB) and its role
  • Explain virtual memory and demand paging
  • Understand page replacement algorithms

Table of Contents

  1. Introduction to Memory Management
  2. Address Binding
  3. Logical vs Physical Address Space
  4. Contiguous Memory Allocation
  5. Paging
  6. Page Tables
  7. Translation Lookaside Buffer (TLB)
  8. Segmentation
  9. Virtual Memory
  10. Page Replacement Algorithms
  11. Exercises
  12. Summary

Introduction to Memory Management

Memory management is one of the most critical functions of an operating system. It governs the computer's primary memory (RAM): tracking which bytes are in use, which process owns each allocation, and how memory is allocated and reclaimed.

Goals of Memory Management

  1. Abstraction: Provide each process with its own address space
  2. Protection: Prevent processes from accessing each other's memory
  3. Efficiency: Maximize memory utilization and minimize fragmentation
  4. Sharing: Allow controlled sharing of memory between processes
  5. Relocation: Allow processes to be moved in memory

Memory Hierarchy

+------------------+  Fastest
|    Registers     |  Smallest
+------------------+  Most Expensive
|      Cache       |
|   (L1, L2, L3)   |
+------------------+
|   Main Memory    |  <-- Focus of this reading
|      (RAM)       |
+------------------+
|   Secondary      |
|    Storage       |
|  (HDD, SSD)      |
+------------------+  Slowest
|   Tertiary       |  Largest
|    Storage       |  Least Expensive
+------------------+

Memory Manager Responsibilities

  1. Allocation: Assign memory to processes
  2. Deallocation: Reclaim memory from terminated processes
  3. Protection: Ensure process isolation
  4. Translation: Map logical addresses to physical addresses
  5. Swapping: Move processes between memory and disk

Address Binding

Address binding is the mapping from one address space to another. Programs reference memory using symbolic addresses (variable names), which must eventually map to physical memory locations.

Binding Times

Source Code:           int x = 5;
                       ↓ (Compilation)
Object Code:           LOAD R1, [x]
                       ↓ (Link/Load)
Logical Address:       LOAD R1, [0x1000]
                       ↓ (Execution)
Physical Address:      LOAD R1, [0x5000]

  1. Compile Time: Absolute addresses generated if memory location known in advance
  2. Load Time: Relocatable addresses generated; binding occurs when loaded
  3. Execution Time: Binding delayed until runtime; allows process to move in memory

Address Binding Example

// Source code
#include <stdio.h>
#include <stdlib.h>

int global_var = 100;

int main() {
    int local_var = 50;
    int *ptr = malloc(sizeof(int));
    *ptr = 25;

    printf("Global: %p\n", (void *)&global_var);  // Data segment
    printf("Local:  %p\n", (void *)&local_var);   // Stack
    printf("Heap:   %p\n", (void *)ptr);          // Heap

    free(ptr);
    return 0;
}

Sample Output:

Global: 0x601040    (Logical address)
Local:  0x7ffc8b2e  (Logical address)
Heap:   0x1a4e010   (Logical address)

These are logical addresses as seen by the program. The MMU, under OS control, translates them to physical addresses in RAM.

Logical vs Physical Address Space

Definitions

  • Logical Address (Virtual Address): Generated by CPU; seen by the process
  • Physical Address: Actual location in RAM

Address Translation

CPU generates          Memory Management Unit          Physical Memory
Logical Address  -->      (MMU) translates      -->    Physical Address

Example:
Logical: 0x1000  -->  MMU (Base=0x4000)  -->  Physical: 0x5000

Simple Base Register Scheme

+--------+
|  CPU   |
+--------+
    |
    | Logical Address (1000)
    v
+--------+
|  MMU   |
+--------+
|  Base  |  (4000)
|Register|
+--------+
    |
    | Physical Address = Base + Logical = 4000 + 1000 = 5000
    v
+-----------+
|  Memory   |
+-----------+
| 0x0000    |
| ...       |
| 0x4000    |  <-- Process starts here
| 0x5000    |  <-- Accessed location
| ...       |
+-----------+

Base and Limit Registers

To provide protection, use both base and limit registers:

Logical Address: 1000

+--------+
|  MMU   |
+--------+
| Base:  |  4000
| Limit: |  8000
+--------+

Check: 0 ≤ 1000 < 8000  ✓ (within bounds)
Physical Address = 4000 + 1000 = 5000

If Logical Address = 9000:
Check: 0 ≤ 9000 < 8000  ✗ (out of bounds)
Result: SEGMENTATION FAULT
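The base-and-limit check above can be sketched in a few lines of Python (the base and limit values are the ones from the example):

```python
# Base/limit translation with bounds checking, using the example values.
def translate(logical, base=4000, limit=8000):
    """Return the physical address, or raise on an out-of-bounds access."""
    if not (0 <= logical < limit):
        raise MemoryError(f"segmentation fault: {logical} outside [0, {limit})")
    return base + logical

print(translate(1000))   # 5000
# translate(9000) would raise: 9000 is not below the limit of 8000
```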

Contiguous Memory Allocation

In contiguous allocation, each process occupies a single continuous section of memory.

Fixed Partitioning

Memory divided into fixed-size partitions.

+------------------+  0 KB
|   Operating      |
|     System       |
+------------------+  100 KB
|   Partition 1    |
|    (100 KB)      |
+------------------+  200 KB
|   Partition 2    |
|    (200 KB)      |
+------------------+  400 KB
|   Partition 3    |
|    (300 KB)      |
+------------------+  700 KB
|   Partition 4    |
|    (400 KB)      |
+------------------+  1100 KB

Problems:

  • Internal fragmentation (wasted space within partition)
  • Limited flexibility

Dynamic Partitioning

Memory partitions created dynamically based on process needs.

Initial:                    After Process Allocation:
+------------------+        +------------------+
|                  |        | OS (100 KB)      |
|                  |        +------------------+
|   Free Memory    |        | P1 (150 KB)      |
|                  |        +------------------+
|                  |        | P2 (200 KB)      |
+------------------+        +------------------+
                            | Free (550 KB)    |
                            +------------------+

After P1 terminates:        After P3 allocated (100 KB):
+------------------+        +------------------+
| OS (100 KB)      |        | OS (100 KB)      |
+------------------+        +------------------+
| Free (150 KB)    | Hole  | P3 (100 KB)      |
+------------------+        +------------------+
| P2 (200 KB)      |        | Free (50 KB)     | Hole
+------------------+        +------------------+
| Free (550 KB)    | Hole  | P2 (200 KB)      |
+------------------+        +------------------+
                            | Free (550 KB)    | Hole
                            +------------------+

Dynamic Partition Allocation Algorithms

  1. First Fit: Allocate first hole large enough

    • Fast
    • May leave many small holes
  2. Best Fit: Allocate smallest hole large enough

    • Minimizes wasted space
    • Slower (must search entire list)
    • Creates tiny unusable holes
  3. Worst Fit: Allocate largest hole

    • Leaves larger remaining holes
    • Also slow
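A minimal sketch of the three placement policies, operating on a list of hole sizes; each function returns the index of the chosen hole, or None if no hole is large enough (the hole sizes in the usage example are made up for illustration):

```python
# First Fit: scan the hole list and take the first one large enough.
def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

# Best Fit: take the smallest hole that still fits the request.
def best_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

# Worst Fit: take the largest hole.
def worst_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # KB
print(first_fit(holes, 212))   # 1 (first hole >= 212 KB is the 500 KB one)
print(best_fit(holes, 212))    # 3 (the 300 KB hole wastes the least)
print(worst_fit(holes, 212))   # 4 (the 600 KB hole leaves the biggest remainder)
```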

Fragmentation

External Fragmentation: Free memory is broken into small pieces

| P1 | Free(10) | P2 | Free(15) | P3 | Free(20) |
Total free = 45 KB, but a 40 KB process cannot be allocated
(no single contiguous hole is large enough)

Internal Fragmentation: Allocated memory larger than requested

Process needs 99 KB, allocated 100 KB partition → 1 KB wasted

Solution: Compaction (expensive) or Paging (better)

Paging

Paging eliminates external fragmentation by dividing memory into fixed-size blocks.

Basic Concepts

  • Frame: Fixed-size block of physical memory
  • Page: Fixed-size block of logical memory (same size as frame)
  • Page Size: Typically 4 KB (4096 bytes)

Paging Mechanism

Logical Memory                Physical Memory
(Process View)                (RAM)

Page 0 (4KB)                  Frame 0
Page 1 (4KB)   ----\          Frame 1
Page 2 (4KB)   -----)------>  Frame 2  (Page 2)
Page 3 (4KB)   ---/-\         Frame 3  (Page 0)
                  \  \----->  Frame 4  (Page 1)
                   \          Frame 5  (Page 3)
                    \------>  Frame 6
                              Frame 7

Process sees continuous memory
But physically scattered across frames

Address Translation in Paging

A logical address is divided into:

  • Page Number (p): Index into page table
  • Page Offset (d): Offset within the page

Logical Address (m bits):
+------------------+------------------+
|   Page Number    |   Page Offset    |
|      (p)         |       (d)        |
+------------------+------------------+
     n bits              m-n bits

Example with 16-bit addresses, 4KB pages:
- Page size = 4096 = 2^12 bytes
- Offset needs 12 bits
- Page number gets 16-12 = 4 bits

Address: 0x2ABC (binary: 0010 1010 1011 1100)
Page number: 0010 (2)
Offset:      1010 1011 1100 (0xABC = 2748)
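Because the page size is a power of two, the split can be done with a shift and a mask rather than division; a sketch matching the 12-bit-offset example above:

```python
# Page number / offset extraction with shifts and masks
# (16-bit addresses, 4 KB pages, as in the example above).
OFFSET_BITS = 12
OFFSET_MASK = (1 << OFFSET_BITS) - 1   # 0xFFF

addr = 0x2ABC
page = addr >> OFFSET_BITS             # high bits -> page number
offset = addr & OFFSET_MASK            # low 12 bits -> offset
print(page, offset)                    # 2 2748
```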

Address Translation Example

Logical Address: 2748 (in decimal)
Page Size: 1024 bytes (1 KB)

Step 1: Calculate page number and offset
Page Number = 2748 / 1024 = 2
Offset = 2748 % 1024 = 700

Step 2: Look up page table
+-------+-------+
| Page  | Frame |
+-------+-------+
|   0   |   5   |
|   1   |   3   |
|   2   |   7   | <-- Page 2 is in Frame 7
|   3   |   1   |
+-------+-------+

Step 3: Calculate physical address
Physical Address = (Frame Number × Page Size) + Offset
                 = (7 × 1024) + 700
                 = 7168 + 700
                 = 7868

Paging Example in C

#include <stdio.h>

#define PAGE_SIZE 1024  // 1 KB pages
#define NUM_PAGES 4
#define NUM_FRAMES 8

// Simplified page table
int page_table[NUM_PAGES] = {5, 3, 7, 1};

int translate_address(int logical_addr) {
    // Calculate page number and offset
    int page_num = logical_addr / PAGE_SIZE;
    int offset = logical_addr % PAGE_SIZE;

    // Check if page number is valid
    if (page_num >= NUM_PAGES) {
        printf("Error: Invalid page number %d\n", page_num);
        return -1;
    }

    // Look up frame number in page table
    int frame_num = page_table[page_num];

    // Calculate physical address
    int physical_addr = (frame_num * PAGE_SIZE) + offset;

    printf("Logical Address: %d\n", logical_addr);
    printf("  Page: %d, Offset: %d\n", page_num, offset);
    printf("  Frame: %d\n", frame_num);
    printf("Physical Address: %d\n\n", physical_addr);

    return physical_addr;
}

int main() {
    translate_address(0);      // Page 0
    translate_address(1500);   // Page 1
    translate_address(2748);   // Page 2
    translate_address(3072);   // Page 3

    return 0;
}

Benefits of Paging

  1. No External Fragmentation: All frames are same size
  2. Easy Allocation: Any free frame can be used
  3. Easy to Share: Multiple processes can share pages
  4. Enables Virtual Memory: More on this later

Drawbacks of Paging

  1. Internal Fragmentation: Last page may not be fully used
  2. Page Table Space: Each process needs a page table
  3. Translation Overhead: Extra memory access for page table

Page Tables

The page table is the data structure used to store the mapping from pages to frames.

Basic Page Table Structure

Page Table for Process
+-------+-------+---------+
| Page  | Frame | Valid   |
+-------+-------+---------+
|   0   |   5   |    1    |
|   1   |   3   |    1    |
|   2   |   -   |    0    | (Not in memory)
|   3   |   7   |    1    |
|   4   |   1   |    1    |
+-------+-------+---------+

Each entry contains:
- Frame number (where page is stored)
- Valid bit (is page in memory?)
- Protection bits (read/write/execute)
- Modified bit (dirty bit)
- Referenced bit (for replacement)

Page Table Entry (PTE)

32-bit Page Table Entry:
+---+---+---+---+--------+--------------------+
| V | R | W | X |Reserved|   Frame Number     |
+---+---+---+---+--------+--------------------+
  1   1   1   1     4            24 bits

V = Valid bit
R = Referenced bit
W = Write/Dirty bit
X = Execute permission

Page Table Storage

For a 32-bit address space with 4 KB pages:

  • 32 bits = 4 GB logical address space
  • Page size = 4 KB = 2^12 bytes
  • Number of pages = 4 GB / 4 KB = 2^20 = 1,048,576 pages
  • Page table size = 1,048,576 entries × 4 bytes = 4 MB per process!

Problem: Large page tables consume significant memory.
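The size calculation above, written out as a quick check:

```python
# Page-table size for a 32-bit address space with 4 KB pages and
# 4-byte entries (the figures used in the text).
address_bits = 32
page_size = 4 * 1024          # 2^12 bytes
pte_size = 4                  # bytes per entry

num_pages = 2**address_bits // page_size    # 2^20 = 1,048,576 pages
table_bytes = num_pages * pte_size
print(num_pages, table_bytes // (1024 * 1024))   # 1048576 4  (i.e., 4 MB)
```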

Solutions to Large Page Tables

1. Hierarchical (Multi-Level) Page Tables

Break page table into smaller pieces.

Two-Level Page Table:

Logical Address:
+----------+----------+----------+
|  P1 (10) |  P2 (10) | Offset(12)|
+----------+----------+----------+

        P1              P2
    +-------+       +-------+
    |   0   | ----> |   0   | ---> Frame
    |   1   |       |   1   |
    |  ...  |       |  ...  |
    +-------+       +-------+
  Outer Table     Inner Table
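A sketch of extracting the two 10-bit indices and the 12-bit offset from a 32-bit address (the sample address is made up for illustration):

```python
# Split a 32-bit logical address into (P1, P2, offset) for the
# 10/10/12 two-level layout shown above.
def split(addr):
    offset = addr & 0xFFF          # low 12 bits
    p2 = (addr >> 12) & 0x3FF      # next 10 bits: inner-table index
    p1 = (addr >> 22) & 0x3FF      # top 10 bits: outer-table index
    return p1, p2, offset

print(split(0x00403ABC))   # (1, 3, 2748)
```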

2. Hashed Page Tables

Use a hash table to store page mappings. Efficient for sparse address spaces.

Hash(page_num) -> Linked List of (page, frame) pairs

+---+     +--------+--------+--------+
| 0 | --> | P: 100 | F: 5   | Next --|--> | P: 356 | F: 9 | NULL |
+---+     +--------+--------+--------+
| 1 | --> | P: 7   | F: 12  | NULL |
+---+

3. Inverted Page Table

One entry per frame (not per page). Saves space but slower lookup.

Frame Table:
+-------+------+------+
| Frame | PID  | Page |
+-------+------+------+
|   0   |  2   |  5   | Frame 0 holds page 5 of process 2
|   1   |  1   |  3   | Frame 1 holds page 3 of process 1
|   2   |  2   |  0   | Frame 2 holds page 0 of process 2
+-------+------+------+
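A sketch of the lookup: because the table is indexed by frame, finding a (PID, page) pair means searching the entries (the table contents mirror the example above):

```python
# Inverted page table: one entry per frame, frame_table[frame] = (pid, page).
frame_table = [(2, 5), (1, 3), (2, 0)]

def lookup(pid, page):
    """Return the frame holding (pid, page), or None (a page fault)."""
    for frame, entry in enumerate(frame_table):
        if entry == (pid, page):
            return frame
    return None

print(lookup(2, 0))   # 2
print(lookup(1, 9))   # None -> page fault
```

The linear search is what makes inverted tables slow in practice; real systems pair them with hashing to find the entry quickly.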

Translation Lookaside Buffer (TLB)

The TLB is a fast cache for page table entries. It dramatically speeds up address translation.

The Problem

Without TLB, each memory access requires two physical memory accesses:

  1. Access page table to get frame number
  2. Access actual memory location

This doubles memory access time!

TLB Structure

Fast Cache Memory:
+-------+-------+-------+
| Page  | Frame | Other |
+-------+-------+-------+
|   2   |   7   |  ...  |
|   5   |   3   |  ...  |
|   8   |   1   |  ...  |
|   12  |   9   |  ...  |
+-------+-------+-------+
Small (64-256 entries)
Very fast (< 1 clock cycle)

Address Translation with TLB

                    CPU
                     |
                     | Logical Address
                     v
              +-------------+
              |     TLB     |
              +-------------+
                   / \
                  /   \
           Hit   /     \   Miss
                /       \
               v         v
          Frame #    Page Table
              |          |
              |          | Frame #
              v          v
          Physical Address
                |
                v
            Memory

TLB Access Process

1. CPU generates logical address (page, offset)
2. Check TLB for page number

   IF found (TLB hit):
       Use frame number from TLB
       Calculate physical address
       Access memory (1 memory access total)

   ELSE (TLB miss):
       Access page table in memory
       Get frame number
       Update TLB with new entry
       Calculate physical address
       Access memory (2 memory accesses total)

Effective Access Time

TLB Hit Ratio = 80%
TLB Access Time = 1 ns
Memory Access Time = 100 ns

With TLB:
  Hit:  1 ns (TLB) + 100 ns (memory) = 101 ns
  Miss: 1 ns (TLB) + 100 ns (page table) + 100 ns (memory) = 201 ns

Effective Access Time = 0.8 × 101 + 0.2 × 201 = 121 ns

Without TLB:
  100 ns (page table) + 100 ns (memory) = 200 ns

Speedup = 200 / 121 ≈ 1.65x faster!
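The effective-access-time formula can be wrapped in a small helper for experimenting with different hit ratios (times in ns):

```python
# Effective access time with a TLB: a hit costs one memory access,
# a miss costs two (page table + actual access).
def eat(hit_ratio, tlb_time, mem_time):
    hit = tlb_time + mem_time
    miss = tlb_time + 2 * mem_time
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat(0.8, 1, 100))          # ≈ 121 ns
print(200 / eat(0.8, 1, 100))    # speedup vs. no TLB (≈ 1.65x)
```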

TLB Example in Python

class TLB:
    def __init__(self, size=4):
        self.size = size
        self.entries = {}  # {page: frame}
        self.order = []    # For LRU replacement

    def lookup(self, page):
        """Returns frame if found, None otherwise"""
        if page in self.entries:
            # TLB hit - move to end (most recently used)
            self.order.remove(page)
            self.order.append(page)
            return self.entries[page]
        return None

    def insert(self, page, frame):
        """Insert page-frame mapping into TLB"""
        if len(self.entries) >= self.size:
            # TLB full - remove least recently used
            lru_page = self.order.pop(0)
            del self.entries[lru_page]

        self.entries[page] = frame
        self.order.append(page)

# Page table
page_table = {0: 5, 1: 3, 2: 7, 3: 1, 4: 9}

# TLB (holds 4 entries)
tlb = TLB(size=4)

def translate(logical_addr, page_size=1024):
    page = logical_addr // page_size
    offset = logical_addr % page_size

    print(f"\nLogical Address: {logical_addr}")
    print(f"  Page: {page}, Offset: {offset}")

    # Check TLB first
    frame = tlb.lookup(page)
    if frame is not None:
        print(f"  TLB HIT! Frame: {frame}")
    else:
        print(f"  TLB MISS")
        # Access page table
        frame = page_table[page]
        print(f"  Page Table: Frame {frame}")
        # Update TLB
        tlb.insert(page, frame)

    physical_addr = frame * page_size + offset
    print(f"Physical Address: {physical_addr}")

# Test sequence
translate(0)      # Page 0 - TLB miss
translate(1024)   # Page 1 - TLB miss
translate(500)    # Page 0 - TLB hit!
translate(2048)   # Page 2 - TLB miss
translate(3072)   # Page 3 - TLB miss
translate(4096)   # Page 4 - TLB miss (evicts page 1, the LRU entry)
translate(1500)   # Page 1 - TLB miss (page 1 was just evicted)

Segmentation

Segmentation is a memory management scheme that supports the user's view of memory. A program is a collection of segments (code, data, stack, heap).

Logical View vs Physical View

User's View (Logical):
+----------------+
| main()         |
| function1()    |  Code Segment
| function2()    |
+----------------+
| global_vars    |  Data Segment
| constants      |
+----------------+
| local_vars     |  Stack Segment
| return addrs   |
+----------------+
| malloc data    |  Heap Segment
+----------------+

Physically, each segment occupies one contiguous block of memory, but the
segments themselves may be scattered anywhere in RAM.

Segment Table

+----------+--------+-------+
| Segment  | Base   | Limit |
+----------+--------+-------+
| Code     | 5000   | 2000  |
| Data     | 8000   | 1000  |
| Stack    | 12000  | 3000  |
| Heap     | 20000  | 5000  |
+----------+--------+-------+

Segmentation Address Translation

Logical Address = (segment number, offset)

Example: (2, 500)  [Access stack segment, offset 500]

1. Check segment table for segment 2 (Stack)
   Base = 12000
   Limit = 3000

2. Verify offset within limit
   500 < 3000 ✓

3. Calculate physical address
   Physical Address = Base + Offset = 12000 + 500 = 12500
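The three translation steps can be sketched directly from the segment table above:

```python
# Segment-table translation with bounds checking.
# Segments indexed 0..3 = code, data, stack, heap (values from the table above).
segment_table = [(5000, 2000), (8000, 1000), (12000, 3000), (20000, 5000)]

def translate_segment(segment, offset):
    base, limit = segment_table[segment]
    if not (0 <= offset < limit):                 # step 2: bounds check
        raise MemoryError(f"segmentation fault: offset {offset} >= limit {limit}")
    return base + offset                          # step 3: base + offset

print(translate_segment(2, 500))   # 12500 (stack segment, offset 500)
```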

Segmentation with Paging

Combine benefits of both: Paged Segmentation

Logical Address:
+----------+--------+--------+
| Segment  | Page   | Offset |
+----------+--------+--------+

1. Use segment number to find segment's page table
2. Use page number to find frame
3. Use offset within frame

Virtual Memory

Virtual memory allows execution of processes that may not be completely in physical memory. This provides several benefits:

  1. Programs can be larger than physical memory
  2. More processes can be in memory simultaneously
  3. Less I/O needed for process swapping

Demand Paging

Pages are loaded into memory only when needed (on demand).

Page Table with Valid Bit:
+------+-------+-------+
| Page | Frame | Valid |
+------+-------+-------+
|  0   |   5   |   1   | In memory
|  1   |   -   |   0   | On disk
|  2   |   7   |   1   | In memory
|  3   |   -   |   0   | On disk
+------+-------+-------+

Page Fault Handling

When a process accesses a page not in memory:

1. Process accesses page 1
        ↓
2. MMU checks page table
        ↓
3. Valid bit = 0 → PAGE FAULT
        ↓
4. Trap to OS
        ↓
5. OS finds page on disk
        ↓
6. Find free frame (or evict a page)
        ↓
7. Read page from disk into frame
        ↓
8. Update page table
        ↓
9. Restart instruction

Page Fault Illustration

Before Page Fault:              After Page Fault:
Physical Memory:                Physical Memory:
+-------+-------+               +-------+-------+
| Frame | Page  |               | Frame | Page  |
+-------+-------+               +-------+-------+
|   0   |  P0   |               |   0   |  P0   |
|   1   |  P2   |               |   1   |  P1   | <- Loaded from disk
|   2   |  P5   |               |   2   |  P5   |
+-------+-------+               +-------+-------+

Disk:                           Disk:
Contains P1, P3, P4, P6...      Contains P3, P4, P6...

Page Replacement Algorithms

When memory is full and a page fault occurs, we must choose which page to evict.

FIFO (First-In-First-Out)

Replace the oldest page.

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
Frames: 3

Time:  1  2  3  4  5  6  7  8  9  10 11 12 13
Ref:   7  0  1  2  0  3  0  4  2  3  0  3  2
      --- --- --- --- --- --- --- --- --- --- --- --- ---
F1:   7  7  7  2  2  2  2  4  4  4  0  0  0
F2:      0  0  0  0  3  3  3  2  2  2  2  2
F3:         1  1  1  1  0  0  0  3  3  3  3
      --- --- --- --- --- --- --- --- --- --- --- --- ---
Fault: F  F  F  F     F  F  F  F  F  F

Total Page Faults: 10

Optimal (OPT)

Replace page that won't be used for longest time in future.

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
Frames: 3

Time:  1  2  3  4  5  6  7  8  9  10 11 12 13
Ref:   7  0  1  2  0  3  0  4  2  3  0  3  2
      --- --- --- --- --- --- --- --- --- --- --- --- ---
F1:   7  7  7  2  2  2  2  2  2  2  2  2  2
F2:      0  0  0  0  0  0  4  4  4  0  0  0
F3:         1  1  1  3  3  3  3  3  3  3  3
      --- --- --- --- --- --- --- --- --- --- --- --- ---
Fault: F  F  F  F     F     F        F

Total Page Faults: 7 (the theoretical minimum)

LRU (Least Recently Used)

Replace page that hasn't been used for longest time.

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
Frames: 3

Time:  1  2  3  4  5  6  7  8  9  10 11 12 13
Ref:   7  0  1  2  0  3  0  4  2  3  0  3  2
      --- --- --- --- --- --- --- --- --- --- --- --- ---
F1:   7  7  7  2  2  2  2  4  4  4  0  0  0
F2:      0  0  0  0  0  0  0  0  3  3  3  3
F3:         1  1  1  3  3  3  2  2  2  2  2
      --- --- --- --- --- --- --- --- --- --- --- --- ---
Fault: F  F  F  F     F     F  F  F  F

Total Page Faults: 9

Page Replacement Simulator in Python

def fifo(pages, num_frames):
    frames = []
    faults = 0
    queue = []

    for page in pages:
        if page not in frames:
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
                queue.append(page)
            else:
                old_page = queue.pop(0)
                frames.remove(old_page)
                frames.append(page)
                queue.append(page)
    return faults

def lru(pages, num_frames):
    frames = []
    faults = 0
    recent = []

    for page in pages:
        if page not in frames:
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
            else:
                lru_page = recent.pop(0)
                frames.remove(lru_page)
                frames.append(page)
        else:
            recent.remove(page)
        recent.append(page)
    return faults

# Test
reference_string = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
num_frames = 3

print(f"Reference String: {reference_string}")
print(f"Number of Frames: {num_frames}")
print(f"FIFO Page Faults: {fifo(reference_string, num_frames)}")
print(f"LRU Page Faults: {lru(reference_string, num_frames)}")
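The exercises later ask for the Optimal algorithm as well; following the same structure as fifo() and lru() above, a sketch looks like:

```python
# Optimal (OPT) replacement: on a fault with full frames, evict the page
# whose next use lies farthest in the future (or that is never used again).
def optimal(pages, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(pages):
        if page not in frames:
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
            else:
                future = pages[i + 1:]
                # Pages never referenced again sort past every real distance.
                victim = max(frames,
                             key=lambda p: future.index(p) if p in future
                             else len(future) + 1)
                frames[frames.index(victim)] = page
    return faults

print(optimal([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # 7
```

OPT requires knowing the future reference string, so it is not implementable in a real OS; it serves as the lower bound against which FIFO and LRU are judged.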

Exercises

Basic Exercises

  1. Address Translation

    • Given logical address 5342, page size 1024, calculate page number and offset
    • If page table shows this page is in frame 6, what is the physical address?
  2. Page Table Size

    • Calculate page table size for 32-bit address space, 4KB pages, 4-byte PTEs
    • How many pages does the page table itself require?
  3. TLB Effectiveness

    • TLB hit ratio = 75%, TLB time = 2ns, Memory time = 100ns
    • Calculate effective access time with and without TLB

Intermediate Exercises

  1. Paging Simulation Implement a paging simulator in C or Python:

    • Support logical to physical address translation
    • Include page table with valid/invalid bits
    • Generate page faults for invalid pages
    • Track memory accesses and page faults
  2. TLB Simulator Create a TLB simulator:

    • Implement TLB with configurable size
    • Use LRU replacement policy
    • Track hit ratio
    • Compare performance with different TLB sizes
  3. Page Replacement Comparison Implement FIFO, LRU, and Optimal algorithms:

    • Test with different reference strings
    • Compare page fault rates
    • Vary the number of frames and observe behavior

Advanced Exercises

  1. Multi-Level Page Table Implement a two-level page table:

    • Support 32-bit addresses with 4KB pages
    • Use 10 bits for each level
    • Calculate memory savings vs single-level table
  2. Virtual Memory Simulator Build a complete virtual memory system:

    • Demand paging with page faults
    • Page replacement algorithm (LRU or Clock)
    • Track physical and virtual memory
    • Simulate disk I/O for page swapping
  3. Segmentation System Implement segmentation:

    • Support multiple segments (code, data, stack, heap)
    • Segment table with base and limit
    • Address translation with bounds checking
    • Handle segmentation faults
  4. Working Set Model Implement the working set page replacement:

    • Track page references in time window
    • Determine working set for process
    • Implement WS-Clock algorithm

Challenge Exercises

  1. Complete Memory Manager Design and implement a memory manager:

    • Support both paging and segmentation
    • Implement multiple page replacement algorithms
    • Include TLB simulation
    • Provide statistics (hit ratio, page faults, etc.)
    • Create visualizations of memory state
  2. Belady's Anomaly

    • Demonstrate Belady's Anomaly with FIFO
    • Show example where more frames → more page faults
    • Prove LRU doesn't suffer from this anomaly
    • Implement stack-based algorithms

Summary

In this reading, we explored memory management in operating systems:

Key Takeaways

  1. Address Spaces: Programs use logical addresses; OS translates to physical addresses

  2. Contiguous Allocation: Simple but suffers from fragmentation

  3. Paging: Eliminates external fragmentation by dividing memory into fixed-size blocks

    • Pages (logical) map to frames (physical)
    • Enables efficient memory utilization
    • Supports virtual memory
  4. Page Tables: Store page-to-frame mappings

    • Can be large (multi-level tables help)
    • Each memory access requires page table lookup
    • Critical data structure for memory management
  5. TLB: Cache for page table entries

    • Dramatically improves performance
    • Small but effective (high hit ratios)
    • Essential for practical paging systems
  6. Segmentation: Supports programmer's view of memory

    • Logical division (code, data, stack, heap)
    • Can be combined with paging
  7. Virtual Memory: Allows programs larger than physical memory

    • Demand paging loads pages as needed
    • Page faults trigger OS intervention
    • Enables multiprogramming with limited RAM
  8. Page Replacement: When memory full, choose victim page

    • FIFO: Simple but not optimal
    • Optimal: Best but requires future knowledge
    • LRU: Good practical approximation

Important Concepts to Remember

  • Paging eliminates external fragmentation but introduces internal fragmentation
  • TLB is crucial for performance - without it, every memory access becomes two
  • Page tables can be huge; hierarchical tables save space
  • Virtual memory allows illusion of unlimited memory
  • Page replacement algorithms significantly impact performance

Looking Ahead

In the next reading, we'll explore File Systems, examining how operating systems organize and manage persistent storage, including file allocation methods, directories, and modern file system designs like FAT, inodes, and journaling file systems.

Memory management provides the foundation for process execution, but file systems provide persistent storage. Together, they form the core of OS resource management.


Previous Reading: 02-threads.md - Threads and Concurrency

Next Reading: 04-file-systems.md - File Systems