COMP 3511: Lecture 14
Date: 2024-10-22 15:07:19
Reviewed:
Topic / Chapter:
summary
❓ Questions
Notes
Synchronization Tools
Background
- processes: execute concurrently
- might be interrupted at any time, for many possible reasons
- concurrent access to any shared data: may result in data inconsistency
- maintaining data consistency: requires OS mechanism
- to ensure orderly execution of cooperating processes
Illustration of Problem
- e.g. producer-consumer problem
- integer `counter`: used to keep track of no. of buffers occupied; `counter = 0` initially
  - incremented each time producer places item in buffer
  - decremented each time consumer consumes item in buffer
- example code

```c
// producer
while (true) {
    // produce an item
    while (counter == BUFFER_SIZE)
        ;
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

// consumer
while (true) {
    while (counter == 0)
        ;                       // do nothing
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    // consume an item
}
```
- race condition:
  - `counter++` could be implemented as:

```c
register1 = counter
register1 = register1 + 1
counter = register1
```

  - `counter--` could be implemented as:

```c
register2 = counter
register2 = register2 - 1
counter = register2
```
- order of execution: cannot be determined
- following scenario is possible, from `counter = 5`:

```c
register1 = counter         // 5
register1 = register1 + 1   // 6
register2 = counter         // 5
counter = register1         // counter = 6
register2 = register2 - 1   // 4
counter = register2         // counter = 4
```
- expected result: `counter = 5`
- but the interleaving above leaves `counter = 4` (a different ordering could leave 6)!
- only the internal order within the producer / consumer can be trusted, not the interleaving between the two processes
- 👨‍🏫 fundamental problem of sharing data
- very hard to debug, since it only happens with small probability (see the threaded sketch below)
- individual code & logic: has no error!
- similar case: two `fork()` calls occurring at almost the same time
  - could result in identical `pid` values
- must ensure: no interleaving with another process sharing the variable
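- to make the lost-update race concrete, here is a minimal sketch (not from the lecture; the thread count, iteration count, and names are chosen for illustration) that increments a shared counter from two POSIX threads with no synchronization:

```c
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

long counter = 0;                   // shared, unprotected

void *worker(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                  // load / add / store: not atomic
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    // expected 2000000, but updates are frequently lost to the race
    printf("counter = %ld\n", counter);
    return 0;
}
```

  - compiled with `gcc -pthread`, repeated runs typically print different values below 2000000, matching the interleaving argument above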
Race condition
- race condition: undesirable situation where several processes access / manipulate shared data concurrently
- and the outcome depends on the particular order in which the accesses / executions take place
  - which is not guaranteed
  - thus: resulting in a non-deterministic result
- critical section: code segment during which
  - the process / thread may be changing shared variables, updating a table, writing a file, etc.
- must ensure: when one process is in critical section, no other can be in its critical section
- mutual exclusion and critical section: implies the same
- 👨‍🏫 can be long, can be short (= hundreds of lines)
- critical section problem: to design a protocol to solve it
- i.e. each process:
- ask permission before entering a critical section (entry section)
- notify exit (exit section)
- and remainder section
- general structure
```c
do {
    // entry section
        critical section        // do something critical
    // exit section
        remainder section       // others can access critical section
} while (true);
```
Solution of critical section problem
- 3 requirements for a solution
- mutual exclusion: if process is in its critical section
- no other processes can be executing their critical section
- progress: if a process wishes to enter its critical section, it cannot be delayed / made to wait indefinitely
- i.e. one must be able to know, e.g., the no. of processes in front of it in the queue
- bounded waiting: a bound must exist on the no. of times other processes are allowed to enter their critical sections after a process has requested entry and before that request is granted
- 👨‍🏫 one shouldn't use the toilet alone for too long
- assume: each process executes at a nonzero speed
  - and with no assumption on the relative speed of individual processes
- critical section problem in kernel
- kernel code: the code the OS runs
- subject to: many possible race conditions
- kernel data structure: keeps list of all open files
- that can be updated by multiple kernel processes
- other kernel DS: e.g. for maintaining memory allocation, process lists, interrupt handling, etc.
- two general approaches
- preemptive: allows preemption of a process running in kernel mode
- possible race condition
- more difficult to maintain in SMP architecture
- 👨‍🏫 much more effective!
- non-preemptive: runs until exiting kernel mode, blocks, or giving up CPU
- free of race conditions in kernel mode
- this only holds on single-processor systems (on SMP, processes on different CPUs can still race)
Synchronization tools
- many systems: provide hardware support on critical section code implementation
- for uni-processor systems: simply disable interrupts
- such approach: inefficient for multiprocessor
- OS: provide hardware & high level API support for critical section code
- example: levels of support

  | level | examples |
  | --- | --- |
  | programs | shared programs |
  | hardware | load / store, disable interrupts, test & set, compare & swap |
  | high level APIs | locks, semaphores |

- synchronization hardware
- modern OS: provides atomic hardware instructions
- atomic: non-interruptible
- ensures: execution of atomic instruction: cannot be interrupted
- => no race condition!
- building blocks for more sophisticated mechanisms
- two commonly used atomic hardware instructions
- test a memory word & set a value: `Test_and_Set()`
- swap contents of two memory words: `Compare_and_Swap()`
- implementation of `Test_and_Set()`

```c
bool test_and_set(bool *target) {
    bool rv = *target;
    *target = true;     // set here
    return rv;
}
```
- if `target` was initially `true`: then it returns `true`
  - i.e. we shouldn't interfere (the lock is already held)
- integrated solution: let shared boolean variable `lock`, initially `false`

```c
do {
    while (test_and_set(&lock))
        ;
    critical section            // do something critical
    lock = false;
    remainder section           // others can access critical section
} while (true);
```
- 👨‍🏫 this only guarantees mutual exclusion!
- dedicated data structures exist for such test variables
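- a minimal sketch of the same spin-lock pattern using C11 `<stdatomic.h>` (assuming a C11 compiler; `atomic_flag_test_and_set` is the standard-library counterpart of the `test_and_set()` above, and the function names here are made up for illustration):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;     // shared lock, initially clear (false)

void enter_critical(void) {
    // atomically sets the flag and returns its previous value,
    // just like test_and_set(&lock) in the lecture code
    while (atomic_flag_test_and_set(&lock))
        ;                                       // busy wait while someone holds the lock
}

void exit_critical(void) {
    atomic_flag_clear(&lock);                   // lock = false: let others in
}
```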
`Compare_and_Swap()`

```c
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}
```
- 👨‍🏫 return value: always the original value; it doesn't tell you whether the swap happened
  - the comparison only determines whether the variable is updated or not
- example

```c
do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;
    critical section            // !!!
    lock = 0;
    remainder section           // !!!
} while (true);
```
- ❓ can we modify it a bit and use it directly?
  - this function code can be used,
  - but using more sophisticated tools built upon these primitives is easier
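- similarly, a minimal sketch of a lock built on compare-and-swap via C11's `atomic_compare_exchange_strong` (the names and the 0 / 1 encoding are illustrative assumptions, not from the lecture):

```c
#include <stdatomic.h>

static atomic_int lock = 0;         // 0 = free, 1 = held

void acquire_cas(void) {
    int expected = 0;
    // if lock == expected (0), set it to 1 and return true;
    // otherwise the current value is copied into expected and false is returned,
    // so expected must be reset to 0 before retrying
    while (!atomic_compare_exchange_strong(&lock, &expected, 1))
        expected = 0;               // spin until the swap succeeds
}

void release_cas(void) {
    atomic_store(&lock, 0);         // lock = 0: others may enter
}
```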
- direct use of the primitive: for bounded-waiting mutual exclusion (a runnable sketch appears at the end of this section)

```c
do {
    waiting[i] = true;
    key = true;                     // local variable
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;

    critical section                // !!!

    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;               // no one waiting: release the lock
    else
        waiting[j] = false;         // unblock process j
                                    // force process j to get out of waiting loop

    remainder section               // !!!
} while (true);
```
- shared: `lock` & `waiting`; local: `key`
- sketch proof
- mutual exclusion: process `i` enters its critical section only if:
- either `waiting[i] = false` or `key = false`
- `key = false` only if `test_and_set` is executed
is executed- only first process executing
test_and_set
findkey==false
- others: wait
waiting[i]
: can become false only if another process leaves critical section- only one
waiting[i]
set to false
- only one
- maintain: mutex requirement
- either
- progress: process exiting its section:
- either sets `lock` to `false` or `waiting[j]` to `false`
- both: allow a waiting process to proceed into its critical section
- bounded waiting: count in turns
- when a process leaving critical section:
- it scans the array `waiting` in cyclic order `{i+1, i+2, ..., n-1, 0, 1, ..., i-1}`
- then designates: the first process in that ordering with `waiting[j] == true`
  - as the next one to enter the critical section
- 👨‍🏫 worst case: waiting for `n-1` processes in front of me
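- a runnable translation of the bounded-waiting algorithm above, sketched with C11 atomics and POSIX threads (assumptions: `test_and_set` is emulated with `atomic_exchange`, and `N`, `ROUNDS`, and all names are made up for illustration):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

#define N 4                         // number of competing threads
#define ROUNDS 100000

static atomic_bool lock;            // shared, initially false
static atomic_bool waiting[N];      // shared, initially all false
static long shared_counter = 0;     // data protected by the protocol

// stand-in for the hardware test_and_set() instruction
static bool test_and_set(atomic_bool *target) {
    return atomic_exchange(target, true);
}

static void *proc(void *arg) {
    int i = (int)(long)arg;         // this thread's index
    for (int r = 0; r < ROUNDS; r++) {
        waiting[i] = true;
        bool key = true;            // local variable
        while (waiting[i] && key)
            key = test_and_set(&lock);
        waiting[i] = false;

        shared_counter++;           // critical section

        int j = (i + 1) % N;        // scan waiting[] in cyclic order
        while (j != i && !waiting[j])
            j = (j + 1) % N;
        if (j == i)
            lock = false;           // no one waiting: release the lock
        else
            waiting[j] = false;     // hand the critical section to thread j
        // remainder section
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, proc, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    printf("shared_counter = %ld (expected %d)\n", shared_counter, N * ROUNDS);
    return 0;
}
```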
Atomic variables
- provides atomic updates on basic data types (integers & booleans)
- e.g. `increment()` on an atomic variable ensures the increment happens without interruption

```c
void increment(atomic_int *v) {
    int temp;
    do {
        temp = *v;
    } while (temp != compare_and_swap(v, temp, temp + 1));
}
```
- supported by Linux
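- for comparison, a minimal sketch of an atomic integer in C11 `<stdatomic.h>` (illustrative names; `atomic_fetch_add` / `atomic_fetch_sub` bundle the load, arithmetic, and store of `counter++` / `counter--` into single uninterruptible operations):

```c
#include <stdatomic.h>
#include <stdio.h>

int main(void) {
    atomic_int counter = 5;

    atomic_fetch_add(&counter, 1);      // atomic counter++
    atomic_fetch_sub(&counter, 1);      // atomic counter--

    // unlike the register1 / register2 interleaving earlier, these updates
    // cannot lose each other even when issued from different threads
    printf("counter = %d\n", atomic_load(&counter));
    return 0;
}
```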
Mutex locks
- OS: builds a no. of SW tools to solve critical section problem
- supported by most OS: mutex lock
- to access a critical region: must ask with `acquire()`
  - use `release()` after use
- calls to `acquire()`, `release()`: must be atomic
  - often implemented via hardware atomic instructions
- however: solution requires busy waiting - must keep checking
- thus: has a nickname: spinlock
- wastes CPU cycles due to busy waiting
- advantage: context switch is not required when process is waiting
- context switch: might take long time
- spinlock: thus useful when locks are expected to be held for a short time
- often used in multiprocessor systems
- as one thread: spin on one processor
- while another thread performs its critical section on another processor
- as one thread: spin on one processor
`acquire()` and `release()`
- solution: based on the idea of a lock protecting the critical section
- operations: atomic
- lock before entering critical section (accessing shared data)
- unlock upon departure from critical section after access
- wait if locked: this synchronization involves busy waiting
- "sleep" or "block" if waiting for a long time
```c
void acquire() {
    while (!available)
        ;                       /* busy wait */
    available = false;
}

void release() {
    available = true;
}

do {
    acquire lock
        // critical section
    release lock
        // remainder section
} while (true);
```
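- for reference, a minimal sketch of the same acquire / release pattern with POSIX mutexes (`pthread_mutex_lock` / `pthread_mutex_unlock`); unlike the spinlock sketch above, a waiting thread is normally blocked instead of busy-waiting (the names and counts are illustrative):

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                // shared data

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);      // acquire: enter critical section
        counter++;                      // critical section
        pthread_mutex_unlock(&lock);    // release: leave critical section
        // remainder section
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); // always 2000000 with the mutex
    return 0;
}
```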