Synchronization and Deadlocks
Learning Objectives
By the end of this reading, you will be able to:
- Understand the need for synchronization in concurrent systems
- Explain critical sections and race conditions
- Implement mutual exclusion using locks and mutexes
- Use semaphores for synchronization and signaling
- Understand monitors and condition variables
- Identify the four conditions necessary for deadlock
- Implement deadlock prevention and avoidance strategies
- Detect and recover from deadlocks
- Solve classical synchronization problems (Producer-Consumer, Readers-Writers, Dining Philosophers)
Table of Contents
- Introduction to Synchronization
- Critical Section Problem
- Locks and Mutexes
- Semaphores
- Monitors and Condition Variables
- Classical Synchronization Problems
- Deadlock Introduction
- Deadlock Conditions
- Deadlock Prevention
- Deadlock Avoidance
- Deadlock Detection and Recovery
- Exercises
- Summary
Introduction to Synchronization
Synchronization is the coordination of multiple threads or processes to ensure correct execution when accessing shared resources.
Why Synchronization is Needed
Without Synchronization:
balance = 100

Thread 1:              Thread 2:
read balance (100)     read balance (100)
add 50                 add 30
write 150              write 130

Final balance = 130 (WRONG! Thread 1's update was lost)
With Synchronization:
balance = 100

Thread 1:              Thread 2:
lock acquired          (waiting...)
read balance (100)     (waiting...)
add 50                 (waiting...)
write 150              (waiting...)
lock released          lock acquired
                       read balance (150)
                       add 30
                       write 180
                       lock released

Final balance = 180 (CORRECT!)
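The schedule above can be reproduced in Python. A minimal sketch (the `deposit` helper and thread setup are illustrative, not taken from any particular library) that serializes the read-modify-write with a lock:

```python
import threading

balance = 100
lock = threading.Lock()

def deposit(amount):
    global balance
    with lock:  # the whole read-modify-write becomes one atomic step
        current = balance
        balance = current + amount

t1 = threading.Thread(target=deposit, args=(50,))
t2 = threading.Thread(target=deposit, args=(30,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # 180, regardless of scheduling
```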
Synchronization Goals
- Mutual Exclusion: Only one thread in critical section at a time
- Progress: Selection of next thread cannot be postponed indefinitely
- Bounded Waiting: Limit on how long a thread waits to enter critical section
- No Assumptions: Don't assume relative speeds of threads
Types of Synchronization
- Mutual Exclusion: Prevent simultaneous access (locks, mutexes)
- Coordination: Order execution of threads (semaphores, condition variables)
- Communication: Exchange data between threads (message passing, shared memory)
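As a small illustration of the communication category, Python's thread-safe `queue.Queue` passes data between threads without explicit locking; the sentinel-based shutdown shown here is one common convention, not the only one:

```python
import queue
import threading

q = queue.Queue()  # thread-safe FIFO: locking is handled internally

def sender():
    for i in range(3):
        q.put(i)    # hand data to the other thread
    q.put(None)     # sentinel: no more items

received = []

def receiver():
    while True:
        item = q.get()
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [0, 1, 2]
```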
Critical Section Problem
A critical section is a code segment that accesses shared resources and must not be executed by more than one thread at a time.
Critical Section Structure
do {
    // Entry section
    acquire_lock();

    // Critical section: access shared resource
    shared_variable++;

    // Exit section
    release_lock();

    // Remainder section: other work
} while (1);
Critical Section Requirements
Requirements:
1. Mutual Exclusion
┌──────────────┐
│ Thread 1 │ ← Only one thread
│ (in critical)│ in critical section
└──────────────┘
┌──────────────┐
│ Thread 2 │ ← Must wait
│ (waiting) │
└──────────────┘
2. Progress
If no thread in critical section and some want to enter,
only threads not in remainder section can participate in
decision, and decision cannot be postponed indefinitely.
3. Bounded Waiting
After thread requests entry, there is a limit on number
of times other threads can enter before this thread.
Race Condition Example
import threading

counter = 0

def increment():
    global counter
    for _ in range(100000):
        # Race condition: the read-modify-write is not atomic!
        temp = counter
        temp = temp + 1
        counter = temp

# Create threads
t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=increment)

# Start threads
t1.start()
t2.start()

# Wait for completion
t1.join()
t2.join()

print(f"Final counter: {counter}")
print(f"Expected: 200000")
# Output might be less than 200000 due to the race condition!
Locks and Mutexes
A lock (or mutex - mutual exclusion) is the simplest synchronization primitive.
Lock Concept
Lock States:
┌─────────────┐
│ Unlocked │
└─────────────┘
↓ acquire()
┌─────────────┐
│ Locked │
│ (Owner: T1) │
└─────────────┘
↓ release()
┌─────────────┐
│ Unlocked │
└─────────────┘
Lock Properties:
- Binary state (locked/unlocked)
- Provides mutual exclusion
- Owner thread must release
Mutex Implementation in C (pthreads)
#include <stdio.h>
#include <pthread.h>

int counter = 0;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&mutex);
        // Critical section
        counter++;
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("Final counter: %d\n", counter); // Always 200000
    pthread_mutex_destroy(&mutex);
    return 0;
}
Mutex in Python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(100000):
        lock.acquire()
        counter += 1
        lock.release()

# Better: use the context manager
def increment_better():
    global counter
    for _ in range(100000):
        with lock:
            counter += 1

# Create and start threads
t1 = threading.Thread(target=increment_better)
t2 = threading.Thread(target=increment_better)
t1.start()
t2.start()
t1.join()
t2.join()

print(f"Final counter: {counter}")  # Always 200000
Spin Lock vs Blocking Lock
Spin Lock: Thread busy-waits (keeps checking)
void spin_lock(int *lock) {
    while (__sync_lock_test_and_set(lock, 1)) {
        // Spin (busy wait) until the lock looks free, then retry
        while (*lock) {
            // Keep checking
        }
    }
}

void spin_unlock(int *lock) {
    __sync_lock_release(lock);
}
Blocking Lock: Thread sleeps until lock available
# Python locks are blocking by default
lock.acquire() # Thread sleeps if lock held
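Python's blocking `acquire` also accepts an optional timeout, which bounds how long a thread sleeps waiting for the lock. A small sketch (the `worker` function and `result` dict are illustrative):

```python
import threading

lock = threading.Lock()
lock.acquire()  # the main thread holds the lock

result = {}

def worker():
    # Block for at most 0.1 s instead of waiting indefinitely
    got_it = lock.acquire(timeout=0.1)
    result["acquired"] = got_it
    if got_it:
        lock.release()

t = threading.Thread(target=worker)
t.start()
t.join()
print(result["acquired"])  # False: the lock was still held
```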
Lock Granularity
Coarse-Grained Locking:
┌────────────────────────────┐
│ One lock for all data │
│ │
│ +------+ +------+ │
│ |Data 1| |Data 2| │
│ +------+ +------+ │
│ │
└────────────────────────────┘
+ Simple
- Low concurrency
Fine-Grained Locking:
┌────────────────────────────┐
│ ┌──────┐ ┌──────┐ │
│ │Lock 1│ │Lock 2│ │
│ └───┬──┘ └───┬──┘ │
│ │ │ │
│ +---v--+ +---v--+ │
│ |Data 1| |Data 2| │
│ +------+ +------+ │
└────────────────────────────┘
+ High concurrency
- Complex, risk of deadlock
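One common middle ground is lock striping: hash each key to one of several locks, so unrelated updates rarely contend. A rough sketch (the `StripedCounter` class is illustrative, not a standard library type):

```python
import threading

class StripedCounter:
    """Fine-grained locking: one lock per bucket instead of one global lock."""
    def __init__(self, num_stripes=8):
        self.locks = [threading.Lock() for _ in range(num_stripes)]
        self.counts = [0] * num_stripes

    def increment(self, key):
        stripe = hash(key) % len(self.locks)
        with self.locks[stripe]:        # contend only within one bucket
            self.counts[stripe] += 1

    def total(self):
        # Take every stripe lock (in a fixed order) for a consistent snapshot
        for lock in self.locks:
            lock.acquire()
        try:
            return sum(self.counts)
        finally:
            for lock in self.locks:
                lock.release()

c = StripedCounter()

def work():
    for i in range(1000):
        c.increment(i)

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.total())  # 4000
```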
Semaphores
A semaphore is a synchronization primitive with an integer value that can be used for signaling and counting.
Semaphore Operations
Semaphore S (value = 3)
wait(S) / P(S) / down(S):
    S = S - 1
    if S < 0:
        block this thread

signal(S) / V(S) / up(S):
    S = S + 1
    if S <= 0:
        wake up one blocked thread
Operations are atomic!
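The operations above can be approximated with a mutex and a condition variable. Note that this sketch keeps the value non-negative instead of letting it go below zero as in the pseudocode; blocked threads simply wait until the value is positive again (the class name `SimpleSemaphore` is illustrative):

```python
import threading

class SimpleSemaphore:
    """Sketch of wait()/signal() built on a mutex plus condition variable."""
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()  # wraps its own internal lock

    def wait(self):       # P / down
        with self.cond:
            while self.value == 0:
                self.cond.wait()      # sleep until signalled
            self.value -= 1

    def signal(self):     # V / up
        with self.cond:
            self.value += 1
            self.cond.notify()        # wake one blocked thread

s = SimpleSemaphore(2)
s.wait()
s.wait()
print(s.value)  # 0: both permits taken
s.signal()
print(s.value)  # 1: one permit returned
```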
Binary Semaphore (Like Mutex)
Binary Semaphore (value = 0 or 1):
Thread 1:                   Thread 2:
wait(S)   // S: 1 -> 0      wait(S)   // S: 0 -> -1, BLOCK
// Critical section         (waiting...)
signal(S) // S: -1 -> 0     (wakes up)
                            // Critical section
                            signal(S) // S: 0 -> 1
Counting Semaphore
Used to limit number of concurrent accesses.
Counting Semaphore (value = 3) for resource pool:
Thread 1: wait(S) // S = 3 -> 2 (allowed)
Thread 2: wait(S) // S = 2 -> 1 (allowed)
Thread 3: wait(S) // S = 1 -> 0 (allowed)
Thread 4: wait(S) // S = 0 -> -1 (BLOCKED)
Thread 5: wait(S) // S = -1 -> -2 (BLOCKED)
Thread 1: signal(S) // S = -2 -> -1, wake Thread 4
Thread 4: (wakes up, enters)
Semaphore Implementation in C
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t semaphore;

void *worker(void *arg) {
    int id = *((int *)arg);
    printf("Thread %d: Waiting for semaphore\n", id);
    sem_wait(&semaphore);   // P operation
    // Critical section
    printf("Thread %d: In critical section\n", id);
    sleep(2);               // Simulate work
    printf("Thread %d: Leaving critical section\n", id);
    sem_post(&semaphore);   // V operation
    return NULL;
}

int main() {
    pthread_t threads[5];
    int ids[5] = {1, 2, 3, 4, 5};

    // Initialize semaphore with value 2 (allow 2 concurrent threads)
    sem_init(&semaphore, 0, 2);

    // Create threads
    for (int i = 0; i < 5; i++) {
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }

    // Wait for threads
    for (int i = 0; i < 5; i++) {
        pthread_join(threads[i], NULL);
    }

    sem_destroy(&semaphore);
    return 0;
}
Semaphore in Python
import threading
import time

# Counting semaphore (max 3 concurrent threads)
semaphore = threading.Semaphore(3)

def worker(thread_id):
    print(f"Thread {thread_id}: Waiting for semaphore")
    semaphore.acquire()
    # Critical section
    print(f"Thread {thread_id}: Acquired semaphore")
    time.sleep(2)  # Simulate work
    print(f"Thread {thread_id}: Releasing semaphore")
    semaphore.release()

# Create 10 threads
threads = []
for i in range(10):
    t = threading.Thread(target=worker, args=(i,))
    threads.append(t)
    t.start()

# Wait for all
for t in threads:
    t.join()
print("All threads completed")
Semaphore for Signaling
import threading
import time

# Binary semaphore for signaling (starts at 0)
event = threading.Semaphore(0)

def producer():
    print("Producer: Preparing data...")
    time.sleep(2)
    print("Producer: Data ready, signaling")
    event.release()   # Signal the consumer

def consumer():
    print("Consumer: Waiting for data...")
    event.acquire()   # Wait for the signal
    print("Consumer: Data received, processing")

# Start threads (the consumer simply blocks until signaled)
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start()
t1.start()
t1.join()
t2.join()
Monitors and Condition Variables
A monitor is a high-level synchronization construct that encapsulates shared data and procedures operating on it.
Monitor Concept
Monitor BankAccount {
    private:
        int balance = 0
        mutex lock
        condition_variable cv

    public:
        deposit(amount):
            lock.acquire()
            balance += amount
            cv.signal()        // Notify waiting threads
            lock.release()

        withdraw(amount):
            lock.acquire()
            while (balance < amount):
                cv.wait(lock)  // Wait until sufficient balance
            balance -= amount
            lock.release()
}
Monitor ensures:
- Only one thread executes monitor procedure at a time
- Automatic mutual exclusion
Condition Variables
Condition variables allow threads to wait for certain conditions.
Condition Variable Operations:
wait(condition, lock):
    1. Release lock
    2. Put thread to sleep
    3. When woken up, re-acquire lock

signal(condition):
    Wake up one waiting thread

broadcast(condition):
    Wake up all waiting threads
Condition Variable in Python
import threading
import time

class BankAccount:
    def __init__(self):
        self.balance = 0
        self.lock = threading.Lock()
        self.condition = threading.Condition(self.lock)

    def deposit(self, amount):
        with self.condition:
            self.balance += amount
            print(f"Deposited {amount}, balance = {self.balance}")
            self.condition.notify()  # Wake up a waiting withdrawer

    def withdraw(self, amount):
        with self.condition:
            while self.balance < amount:
                print(f"Insufficient balance {self.balance}, waiting...")
                self.condition.wait()  # Wait for a deposit
            self.balance -= amount
            print(f"Withdrew {amount}, balance = {self.balance}")

# Example usage
account = BankAccount()

def depositor():
    time.sleep(1)  # Wait a bit
    account.deposit(100)

def withdrawer():
    account.withdraw(50)
    account.withdraw(30)

t1 = threading.Thread(target=withdrawer)
t2 = threading.Thread(target=depositor)
t1.start()
t2.start()
t1.join()
t2.join()
Condition Variable in C (pthreads)
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int ready = 0;

void *producer(void *arg) {
    sleep(2);  // Simulate preparation
    pthread_mutex_lock(&mutex);
    ready = 1;
    printf("Producer: Data ready, signaling\n");
    pthread_cond_signal(&cond);  // Wake up the consumer
    pthread_mutex_unlock(&mutex);
    return NULL;
}

void *consumer(void *arg) {
    pthread_mutex_lock(&mutex);
    while (!ready) {
        printf("Consumer: Waiting for data...\n");
        pthread_cond_wait(&cond, &mutex);  // Release lock and wait
    }
    printf("Consumer: Data received, processing\n");
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t2, NULL, consumer, NULL);
    pthread_create(&t1, NULL, producer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&mutex);
    pthread_cond_destroy(&cond);
    return 0;
}
Classical Synchronization Problems
1. Producer-Consumer Problem (Bounded Buffer)
Producers add items to buffer, consumers remove items.
Buffer: [_][_][_][_][_] (size = 5)
Constraints:
- Producer can't add when buffer full
- Consumer can't remove when buffer empty
- Only one thread accesses buffer at a time
Solution with Semaphores:
import threading
import time
import random
from collections import deque

class BoundedBuffer:
    def __init__(self, size):
        self.buffer = deque()
        self.size = size
        self.mutex = threading.Semaphore(1)     # Mutual exclusion
        self.empty = threading.Semaphore(size)  # Count empty slots
        self.full = threading.Semaphore(0)      # Count full slots

    def produce(self, item):
        self.empty.acquire()   # Wait for an empty slot
        self.mutex.acquire()   # Enter critical section
        self.buffer.append(item)
        print(f"Produced: {item}, Buffer size: {len(self.buffer)}")
        self.mutex.release()   # Exit critical section
        self.full.release()    # Signal item available

    def consume(self):
        self.full.acquire()    # Wait for an item
        self.mutex.acquire()   # Enter critical section
        item = self.buffer.popleft()
        print(f"Consumed: {item}, Buffer size: {len(self.buffer)}")
        self.mutex.release()   # Exit critical section
        self.empty.release()   # Signal slot available
        return item

# Example usage
buffer = BoundedBuffer(5)

def producer():
    for i in range(10):
        buffer.produce(i)
        time.sleep(random.uniform(0.1, 0.5))

def consumer():
    for _ in range(10):
        buffer.consume()
        time.sleep(random.uniform(0.1, 0.5))

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()
2. Readers-Writers Problem
Multiple readers can read simultaneously, but writers need exclusive access.
Allowed:
R R R R (Multiple readers)
W (Single writer)
Not Allowed:
R W (Reader and writer)
W W (Multiple writers)
Solution:
import threading
import time

class ReadersWriters:
    def __init__(self):
        self.readers = 0
        self.mutex = threading.Lock()       # Protects the reader count
        self.write_lock = threading.Lock()  # Writers' exclusive access

    def start_read(self):
        with self.mutex:
            self.readers += 1
            if self.readers == 1:
                self.write_lock.acquire()   # First reader blocks writers

    def end_read(self):
        with self.mutex:
            self.readers -= 1
            if self.readers == 0:
                self.write_lock.release()   # Last reader unblocks writers

    def start_write(self):
        self.write_lock.acquire()           # Writers get exclusive access

    def end_write(self):
        self.write_lock.release()

# Example
rw = ReadersWriters()
shared_data = 0

def reader(reader_id):
    for i in range(3):
        rw.start_read()
        print(f"Reader {reader_id}: Reading data = {shared_data}")
        time.sleep(0.1)
        rw.end_read()
        time.sleep(0.2)

def writer(writer_id):
    global shared_data
    for i in range(2):
        rw.start_write()
        shared_data += 1
        print(f"Writer {writer_id}: Wrote data = {shared_data}")
        time.sleep(0.3)
        rw.end_write()
        time.sleep(0.4)

# Create readers and writers
readers = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
writers = [threading.Thread(target=writer, args=(i,)) for i in range(2)]
for t in readers + writers:
    t.start()
for t in readers + writers:
    t.join()
3. Dining Philosophers Problem
5 philosophers sit at a table, each needs 2 forks to eat.
         P0
      F0    F1
   P4          P1
   F4          F2
      P3    P2
         F3

Each philosopher Pi needs both adjacent forks: fork i on the left
and fork (i+1) mod 5 on the right.

Problem: Deadlock if every philosopher picks up their left fork simultaneously
Solution (Resource Hierarchy):
import threading
import time

class DiningPhilosophers:
    def __init__(self, num_philosophers=5):
        self.num = num_philosophers
        # Create forks (mutexes)
        self.forks = [threading.Lock() for _ in range(num_philosophers)]

    def philosopher(self, philosopher_id):
        left_fork = philosopher_id
        right_fork = (philosopher_id + 1) % self.num
        for _ in range(3):
            # Think
            print(f"Philosopher {philosopher_id}: Thinking")
            time.sleep(0.5)
            # Pick up forks in global order to avoid deadlock
            first_fork = min(left_fork, right_fork)
            second_fork = max(left_fork, right_fork)
            self.forks[first_fork].acquire()
            print(f"Philosopher {philosopher_id}: Picked up fork {first_fork}")
            self.forks[second_fork].acquire()
            print(f"Philosopher {philosopher_id}: Picked up fork {second_fork}")
            # Eat
            print(f"Philosopher {philosopher_id}: EATING")
            time.sleep(0.3)
            # Put down forks
            self.forks[second_fork].release()
            self.forks[first_fork].release()
            print(f"Philosopher {philosopher_id}: Put down forks")

# Example
dp = DiningPhilosophers(5)
philosophers = [threading.Thread(target=dp.philosopher, args=(i,))
                for i in range(5)]
for p in philosophers:
    p.start()
for p in philosophers:
    p.join()
print("All philosophers finished dining")
Deadlock Introduction
A deadlock is a situation where a set of processes is blocked because each process holds a resource while waiting for a resource held by another process in the set.
Deadlock Example
Thread 1:           Thread 2:
lock(A)             lock(B)
lock(B)  <- WAIT    lock(A)  <- WAIT
unlock(B)           unlock(A)
unlock(A)           unlock(B)
Thread 1 holds A, waits for B
Thread 2 holds B, waits for A
DEADLOCK! Neither can proceed
Resource Allocation Graph
Processes: P1, P2
Resources: R1, R2
P1 → R1 (P1 requests R1)
R1 → P2 (R1 allocated to P2)
P2 → R2 (P2 requests R2)
R2 → P1 (R2 allocated to P1)
P1 ──→ R1
 ↑      │
 │      ↓
R2 ←── P2

Cycle detected (P1 → R1 → P2 → R2 → P1) → Deadlock!
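Detecting such a cycle programmatically amounts to a depth-first search for a back edge. A sketch over the graph above, encoded as an adjacency dict (the encoding is illustrative):

```python
# Allocation graph from above: edges point from a requester to the
# resource, and from a resource to its current holder.
graph = {
    "P1": ["R1"],  # P1 requests R1
    "R1": ["P2"],  # R1 is held by P2
    "P2": ["R2"],  # P2 requests R2
    "R2": ["P1"],  # R2 is held by P1
}

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:  # back edge -> cycle
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

print(has_cycle(graph))  # True -> deadlock
```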
Deadlock Conditions
Four conditions must hold simultaneously for deadlock:
1. Mutual Exclusion
Only one thread can hold resource at a time
Thread 1: holds Lock A
Thread 2: wants Lock A → BLOCKED
Can't be prevented (needed for correctness)
2. Hold and Wait
Thread holds resources while waiting for others
Thread 1:
- Holds Lock A
- Waiting for Lock B
Can prevent by requiring all resources at once
3. No Preemption
Resources can't be forcibly taken away
Thread 1 holds Lock A
System can't take Lock A from Thread 1
Can allow preemption (complex)
4. Circular Wait
Circular chain of threads waiting for resources
T1 waits for T2
T2 waits for T3
T3 waits for T1
Cycle: T1 → T2 → T3 → T1
Can prevent by resource ordering
Visualizing All Four Conditions
┌─────────────────────┐
│ Mutual Exclusion │
│ (Lock exclusive) │
└─────────────────────┘
+
┌─────────────────────┐
│ Hold and Wait │
│ (Hold while request)│
└─────────────────────┘
+
┌─────────────────────┐
│ No Preemption │
│ (Can't take away) │
└─────────────────────┘
+
┌─────────────────────┐
│ Circular Wait │
│ (Cycle exists) │
└─────────────────────┘
║
▼
DEADLOCK!
Deadlock Prevention
Prevent deadlock by ensuring at least one of the four conditions cannot hold.
1. Prevent Mutual Exclusion
Make resources shareable (not always possible).
Read-only files: Can be shared
Printers: Must be exclusive
Usually can't prevent this condition
2. Prevent Hold and Wait
Require threads to request all resources at once.
# Bad: hold and wait possible
lock_a.acquire()
# ... do work ...
lock_b.acquire()  # Hold A while waiting for B

# Good: acquire all locks or none
# (Lock.acquire() never raises on contention, so use non-blocking
# acquisition and roll back explicitly on failure)
def acquire_all(locks):
    """Acquire every lock, or release what was taken and report failure"""
    acquired = []
    for lock in locks:
        if lock.acquire(blocking=False):
            acquired.append(lock)
        else:
            # Release all acquired locks
            for held in acquired:
                held.release()
            return False
    return True

if acquire_all([lock_a, lock_b]):
    # Work with both locks
    lock_a.release()
    lock_b.release()
3. Allow Preemption
If thread can't get resource, release all held resources.
def try_lock_with_timeout(lock1, lock2, timeout=1.0):
    """Try to acquire both locks, giving up lock1 on timeout"""
    lock1.acquire()
    if lock2.acquire(timeout=timeout):
        return True   # Got both locks
    else:
        # Couldn't get lock2, release lock1
        lock1.release()
        return False  # Failed, try again later
4. Prevent Circular Wait
Impose ordering on resource acquisition.
# Assign a global order to locks (threading.Lock objects don't accept
# new attributes, so keep the ordering in a separate dict keyed by id)
lock_a, lock_b, lock_c = threading.Lock(), threading.Lock(), threading.Lock()
lock_order = {id(lock_a): 1, id(lock_b): 2, id(lock_c): 3}

def acquire_ordered(lock1, lock2):
    """Always acquire in global order"""
    if lock_order[id(lock1)] > lock_order[id(lock2)]:
        lock1, lock2 = lock2, lock1
    lock1.acquire()
    lock2.acquire()

# Thread 1:
acquire_ordered(lock_a, lock_c)  # Acquires order 1, then 3
# Thread 2:
acquire_ordered(lock_c, lock_a)  # Also acquires order 1, then 3
# No cycle possible!
Deadlock Avoidance
Avoid deadlock by making smart allocation decisions using additional information.
Banker's Algorithm
System has information about maximum resource needs.
Resources: R1 = 10 instances

Processes:
       Max   Allocated   Need   Available
P1      7        2        5         3
P2      5        2        3
P3      9        3        6

Safe sequence: P2 → P1 → P3
1. P2's need (3) ≤ available (3) → Grant
   P2 completes, releases all 5 it held → available = 5
2. P1's need (5) ≤ available (5) → Grant
   P1 completes, releases all 7 it held → available = 7
3. P3's need (6) ≤ available (7) → Grant
   P3 completes, releases all 9 it held → available = 10 (all of R1)

System is in a SAFE STATE (deadlock-free)
Banker's Algorithm Implementation
class BankersAlgorithm:
    def __init__(self, num_processes, num_resources, available):
        self.num_processes = num_processes
        self.num_resources = num_resources
        self.available = available
        self.maximum = [[0] * num_resources for _ in range(num_processes)]
        self.allocation = [[0] * num_resources for _ in range(num_processes)]

    def set_max(self, process, resources):
        self.maximum[process] = resources

    def is_safe(self):
        """Check if the system is in a safe state"""
        work = self.available.copy()
        finish = [False] * self.num_processes
        safe_sequence = []
        while len(safe_sequence) < self.num_processes:
            found = False
            for i in range(self.num_processes):
                if finish[i]:
                    continue
                # Calculate need
                need = [self.maximum[i][j] - self.allocation[i][j]
                        for j in range(self.num_resources)]
                # Check if need can be satisfied
                if all(need[j] <= work[j] for j in range(self.num_resources)):
                    # Process can complete and release its allocation
                    for j in range(self.num_resources):
                        work[j] += self.allocation[i][j]
                    finish[i] = True
                    safe_sequence.append(i)
                    found = True
                    break
            if not found:
                # No process can complete -> unsafe
                return False, []
        return True, safe_sequence

    def request_resources(self, process, request):
        """Try to allocate resources"""
        # Check if the request can be served at all
        for i in range(self.num_resources):
            if request[i] > self.available[i]:
                print("Request denied (not enough resources available)")
                return False
        # Tentatively allocate
        for i in range(self.num_resources):
            self.available[i] -= request[i]
            self.allocation[process][i] += request[i]
        # Check if the resulting state is safe
        safe, sequence = self.is_safe()
        if safe:
            print(f"Request granted. Safe sequence: {sequence}")
            return True
        else:
            # Roll back the allocation
            for i in range(self.num_resources):
                self.available[i] += request[i]
                self.allocation[process][i] -= request[i]
            print("Request denied (would cause unsafe state)")
            return False

# Example (processes 0-2 correspond to P1-P3 in the table above)
banker = BankersAlgorithm(3, 1, [3])
banker.set_max(0, [7])
banker.set_max(1, [5])
banker.set_max(2, [9])
banker.allocation[0] = [2]
banker.allocation[1] = [2]
banker.allocation[2] = [3]

safe, seq = banker.is_safe()
print(f"Initial state safe: {safe}, sequence: {seq}")

banker.request_resources(1, [3])  # Granted: process 1 can then finish
banker.request_resources(0, [2])  # Denied: no instances remain available
Deadlock Detection and Recovery
Deadlock Detection
Periodically check for deadlocks using resource allocation graphs.
class DeadlockDetector:
    def __init__(self, num_processes, num_resources):
        self.num_processes = num_processes
        self.num_resources = num_resources
        self.allocation = [[0] * num_resources for _ in range(num_processes)]
        self.request = [[0] * num_resources for _ in range(num_processes)]
        self.available = [0] * num_resources

    def detect(self):
        """Return the list of deadlocked processes (empty if none)"""
        work = self.available.copy()
        # A process holding nothing cannot be part of a deadlock
        finish = [all(self.allocation[i][j] == 0
                      for j in range(self.num_resources))
                  for i in range(self.num_processes)]
        while True:
            found = False
            for i in range(self.num_processes):
                if finish[i]:
                    continue
                # Check if this process's outstanding request can be satisfied
                if all(self.request[i][j] <= work[j]
                       for j in range(self.num_resources)):
                    # Process can finish and release what it holds
                    for j in range(self.num_resources):
                        work[j] += self.allocation[i][j]
                    finish[i] = True
                    found = True
            if not found:
                break
        # Processes that can never finish are deadlocked
        return [i for i in range(self.num_processes) if not finish[i]]

# Example
detector = DeadlockDetector(3, 2)
detector.available = [0, 0]
detector.allocation = [[1, 0], [0, 1], [0, 0]]
detector.request = [[0, 1], [1, 0], [0, 0]]

deadlocked = detector.detect()
if deadlocked:
    print(f"Deadlock detected involving processes: {deadlocked}")
else:
    print("No deadlock")
Deadlock Recovery
Process Termination
- Abort all deadlocked processes
- Abort one at a time until deadlock resolved
Resource Preemption
- Preempt resources from processes
- Rollback process to safe state
- Risk of starvation
def recover_from_deadlock(deadlocked_processes):
    """Recover by terminating processes, lowest priority first"""
    # Sort by priority (lower number = lower priority)
    sorted_processes = sorted(deadlocked_processes,
                              key=lambda p: p.priority)
    for process in sorted_processes:
        print(f"Terminating process {process.id}")
        process.terminate()
        # Re-run detection (e.g., DeadlockDetector.detect() above)
        # and stop as soon as the deadlock is resolved
        if not detect_deadlock():
            print("Deadlock resolved")
            break
Exercises
Basic Exercises
Race Condition
- Create a program with a race condition
- Demonstrate the bug with multiple runs
- Fix it using a mutex
- Verify correctness
Critical Section
- Identify critical sections in given code
- Protect them with appropriate locks
- Test with multiple threads
Semaphore vs Mutex
- Implement the same problem with mutex and semaphore
- Compare implementations
- Explain when to use each
Intermediate Exercises
Producer-Consumer
- Implement bounded buffer with semaphores
- Support multiple producers and consumers
- Add statistics (items produced/consumed, wait times)
Readers-Writers
- Implement reader-writer lock
- Test with varying reader/writer ratios
- Implement writer-preference variant
Dining Philosophers
- Implement with resource ordering
- Implement with odd-even strategy
- Compare deadlock-free solutions
Advanced Exercises
Banker's Algorithm
- Implement complete Banker's algorithm
- Support multiple resource types
- Test with various scenarios
- Visualize safe sequences
Deadlock Detector
- Build deadlock detection system
- Create resource allocation graph
- Detect cycles
- Suggest recovery actions
Lock-Free Data Structure
- Implement lock-free queue using atomic operations
- Compare performance with locked version
- Test under high contention
Thread Pool
- Design thread pool with work queue
- Use semaphores for synchronization
- Support task priorities
- Implement graceful shutdown
Challenge Exercises
Database Transaction Manager
- Implement two-phase locking
- Detect and resolve deadlocks
- Support transaction rollback
- Measure throughput and latency
Concurrent Cache
- Build thread-safe LRU cache
- Use reader-writer locks
- Implement lock striping for scalability
- Benchmark with different workloads
Summary
In this reading, we explored synchronization and deadlocks:
Key Takeaways
Synchronization Need: Concurrent access to shared resources requires coordination
Critical Sections: Code accessing shared resources must be protected
Locks/Mutexes: Provide mutual exclusion
- Simple binary state
- Owner must release
- Can be spin or blocking
Semaphores: Counting synchronization primitive
- Binary semaphore like mutex
- Counting semaphore limits concurrent access
- Useful for signaling
Condition Variables: Wait for conditions
- Always used with mutex
- Support wait, signal, broadcast
- Avoid busy waiting
Classical Problems: Standard synchronization challenges
- Producer-Consumer: Bounded buffer coordination
- Readers-Writers: Concurrent read, exclusive write
- Dining Philosophers: Resource deadlock
Deadlock: Processes stuck waiting for each other
- Four necessary conditions
- Can prevent, avoid, or detect/recover
Deadlock Prevention: Break one of four conditions
- Resource ordering most practical
- Hold and wait prevention possible
Deadlock Avoidance: Banker's algorithm
- Requires advance knowledge
- Checks for safe state
- Conservative but safe
Deadlock Detection: Find and recover
- Periodic detection
- Terminate or preempt
- Balance overhead vs recovery cost
Important Concepts to Remember
- Race conditions cause non-deterministic bugs
- Always protect critical sections
- Choose right synchronization primitive for task
- Deadlock requires all four conditions
- Prevention often simpler than avoidance
- Test concurrent code thoroughly
Best Practices
- Minimize Critical Sections: Hold locks briefly
- Consistent Lock Ordering: Prevent circular wait
- Avoid Nested Locks: Reduce deadlock risk
- Use High-Level Constructs: Prefer monitors over raw locks
- Test Concurrency: Use stress tests and race detectors
Looking Ahead
This completes our exploration of operating systems fundamentals. You've learned about:
- Processes and threads
- Memory management
- File systems
- Synchronization and deadlocks
These concepts form the foundation of modern operating systems and are essential for systems programming, distributed systems, and high-performance computing.
Previous Reading: 04-file-systems.md - File Systems
Module Complete: You have finished Module 6 - Operating Systems!