A Synchronization Mechanism¶
All direct implementations of await statements require busy-waiting. That is suitable if there are not more processes than processors or if the expected waiting time is short (for example, when incrementing a counter). The Linux kernel supports spinlocks for that purpose.
Semaphores were introduced by E.W. Dijkstra in the early 60's as a means for efficient synchronization, allowing variables to be used only for communication.
Abstractly, a semaphore is a non-negative integer with two operations, historically called P (from Dutch for "pass") and V (from Dutch for "free"), as used in railroad semaphores:
var s: semaphore = init              # declaration with initial value
P(s): ⟨await s > 0 then s := s - 1⟩  # the wait operation
V(s): ⟨s := s + 1⟩                   # the signal operation
If nP is the number of completed P operations and nV is the number of completed V operations, then the semaphore invariant of s is:
s = init + nV − nP ∧ s ≥ 0
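As a sketch of how the invariant plays out, the following Python fragment counts completed operations alongside a threading.Semaphore; the counters nP and nV are bookkeeping added for illustration, not part of the semaphore itself:

```python
from threading import Semaphore

init = 2
s = Semaphore(init)          # semaphore with initial value 2
nP = 0                       # completed P (acquire) operations
nV = 0                       # completed V (release) operations

def P():
    global nP
    s.acquire()              # blocks while the value is 0
    nP += 1

def V():
    global nV
    s.release()
    nV += 1

P(); P(); V(); P()           # value goes 2, 1, 0, 1, 0
value = init + nV - nP       # by the invariant, this is the semaphore's value
assert value == 0 and value >= 0
```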
As semaphores are defined using await, everything said previously carries over:

- If two processes execute P(s) with s = 1, only one will succeed. If one executes P(s) and one V(s), both will succeed, in some order.
- If a process executes P(s) and s > 0 is true continuously, then with a weakly fair scheduler the process will eventually succeed.
- If a process executes P(s) and s > 0 is true only infinitely often, then P(s) will succeed only with a strongly fair scheduler.
Mutual Exclusion with Semaphores¶
The critical section problem assumes that processes repeatedly try to enter a critical section, but only one is allowed to do so. This can be enforced by using a binary semaphore, i.e. a semaphore whose value is either 0 or 1:
algorithm
var mutex: semaphore = 1
algorithm
process CS1
while true do
P(mutex)
critical section
V(mutex)
noncritical section
algorithm
process CS2
while true do
P(mutex)
critical section
V(mutex)
noncritical section
To argue for the correctness, we add ghost variables in1, in2 to the two processes, CS1, CS2, that indicate if the process is in its critical section; in1 = 1 if CS1 is in its critical section, otherwise in1 = 0. Ghost variables are only assigned to and appear in invariants, but are not used in the program; obviously, they can be left out without affecting the program:
The critical section property is in1 + in2 ≤ 1, which is a consequence of CS above. The assumption is that neither critical section nor noncritical section contains operations on mutex, in1, in2.
Question: What are the correctness conditions for the transitions and for non-interference?
Answer: The conditions for the correctness of the transitions of CS1
are:
CS ∧ C1₁ ∧ mutex > 0 ⇒ (CS ∧ C1₂)[mutex, in1 := mutex − 1, 1]
CS ∧ C1₃ ⇒ (CS ∧ C1₄)[mutex, in1 := mutex + 1, 0]
The conditions for the transition with mutex := mutex - 1
of CS1
not interfering with CS2
are:
CS ∧ C1₁ ∧ mutex > 0 ∧ C2₁ ⇒ C2₁[mutex, in1 := mutex − 1, 1]
CS ∧ C1₁ ∧ mutex > 0 ∧ C2₂ ⇒ C2₂[mutex, in1 := mutex − 1, 1]
CS ∧ C1₁ ∧ mutex > 0 ∧ C2₃ ⇒ C2₃[mutex, in1 := mutex − 1, 1]
CS ∧ C1₁ ∧ mutex > 0 ∧ C2₄ ⇒ C2₄[mutex, in1 := mutex − 1, 1]
CS ∧ C1₁ ∧ mutex > 0 ∧ C2₅ ⇒ C2₅[mutex, in1 := mutex − 1, 1]
CS ∧ C1₁ ∧ mutex > 0 ∧ C2₆ ⇒ C2₆[mutex, in1 := mutex − 1, 1]
The conditions for the transition with mutex := mutex + 1
of CS1
not interfering with CS2
are:
CS ∧ C1₃ ∧ C2₁ ⇒ C2₁[mutex, in1 := mutex + 1, 0]
CS ∧ C1₃ ∧ C2₂ ⇒ C2₂[mutex, in1 := mutex + 1, 0]
CS ∧ C1₃ ∧ C2₃ ⇒ C2₃[mutex, in1 := mutex + 1, 0]
CS ∧ C1₃ ∧ C2₄ ⇒ C2₄[mutex, in1 := mutex + 1, 0]
CS ∧ C1₃ ∧ C2₅ ⇒ C2₅[mutex, in1 := mutex + 1, 0]
CS ∧ C1₃ ∧ C2₆ ⇒ C2₆[mutex, in1 := mutex + 1, 0]
These can be shown to hold. Because of symmetry, it follows that each transition of CS2 is also correct and that CS2 does not interfere with the states of CS1.
A single binary semaphore is also sufficient for mutual exclusion of an arbitrary number of processes:
var mutex: semaphore = 1
process CS(i: 0 .. N – 1)
while true do
P(mutex)
critical section
V(mutex)
noncritical section
To prove the correctness, we add to each process i the ghost variable in(i), which indicates if it is in its critical section; in(i) = 1 if process i is in its critical section, otherwise in(i) = 0:
var mutex: semaphore = 1
var in: array 0 .. N - 1 of 0 .. 1 = [0, ..., 0]
process CS(i: 0 .. N - 1)
while true do
⟨await mutex > 0 then mutex := mutex – 1 ; in(i) := 1⟩
critical section
⟨mutex := mutex + 1 ; in(i) := 0⟩
noncritical section
The critical section property is now:
CS: in(0) + … + in(N - 1) ≤ 1
This can be proved by showing that the following property is an invariant:
P: 0 ≤ mutex = 1 - (in(0) + … + in(N - 1)) ≤ 1
We then have that P ⇒ CS. The annotated program is:
algorithm
process CS(i: 0 .. N - 1)
while true do
{P}
⟨await mutex > 0 then mutex := mutex - 1 ; in(i) := 1⟩
{P ∧ mutex = 0 ∧ in(i) = 1}
critical section
{P ∧ mutex = 0 ∧ in(i) = 1}
⟨mutex := mutex + 1 ; in(i) := 0⟩
{P}
noncritical section
Exercise: Prove that each process CS(i) is correct with respect to the above annotation. Prove that CS(i) does not interfere with CS(j), for i ≠ j.
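The N-process algorithm and its critical section property can be mirrored in Python; the inside list plays the role of the ghost variables in(i), and the violation check is instrumentation added for illustration (a sketch, not part of the original program):

```python
from threading import Thread, Semaphore

N = 4
mutex = Semaphore(1)
inside = [0] * N             # ghost variables in(0), ..., in(N - 1)
violations = []

def cs(i, rounds):
    for _ in range(rounds):
        mutex.acquire()      # P(mutex)
        inside[i] = 1
        if sum(inside) > 1:  # check CS: in(0) + ... + in(N - 1) <= 1
            violations.append(i)
        inside[i] = 0
        mutex.release()      # V(mutex)

threads = [Thread(target=cs, args=(i, 1000)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
assert violations == []      # mutual exclusion was never violated
```

All modifications of inside happen while holding mutex, so the check is sound: a non-empty violations list would witness two processes in their critical sections at once.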
Here is an example in Python with two processes outputting strings; due to the different durations of sleep, one process will output roughly twice as much as the other.
from threading import Thread, Semaphore
from time import sleep
from sys import stdout
class Ping(Thread):
def run(self):
while True:
s.acquire() # wait
print('ping') # critical section
s.release() # signal
sleep(2) # noncritical section
class Pong(Thread):
def run(self):
while True:
s.acquire() # wait
print('pong') # critical section
s.release() # signal
sleep(4) # noncritical section
s = Semaphore(1) # create semaphore
ping = Ping()
pong = Pong() # create new threads
ping.start()
pong.start() # run threads
The output may be garbled, as print is not atomic: print('str') leads to stdout.write('str') followed by stdout.write('\n'); write is atomic, but print is not. Replace print('str') with stdout.write('str\n') to get proper output!
Barrier Synchronization¶
Suppose we repeatedly need to perform two (or more) tasks that can be done in parallel, but both have to finish before a new cycle can start:
while true do
task1 ‖ task2
This pattern is typical for scientific computations, e.g. the discrete simulation of planetary movements: each task calculates the position of a planet in the next time step based on the current position of all other planets.
Alternatively, two worker processes are created only once rather than in each iteration:
process worker1
while true do
task 1
wait for task 2 to complete
process worker2
while true do
task 2
wait for task 1 to complete
How can synchronization be expressed with semaphores? How many semaphores are needed?
Two semaphores are needed:
var barrier1, barrier2: semaphore = 0, 0
algorithm
process worker1
while true do
task 1
V(barrier1)
P(barrier2)
algorithm
process worker2
while true do
task 2
V(barrier2)
P(barrier1)
To argue for the correctness, we introduce ghost variables. Let

- arrive1, arrive2 be the number of times worker1, worker2 has arrived at the barrier,
- depart1, depart2 be the number of times worker1, worker2 has departed from the barrier.
The barrier property BR states that worker1 cannot get past the barrier any more times than worker2 has arrived, and symmetrically for worker2:
BR: depart1 ≤ arrive2 ∧ depart2 ≤ arrive1
The processes with updates to the ghost variables maintain invariant P and we have that P ⇒ BR:
algorithm
var arrive1, depart1, arrive2, depart2: integer = 0, 0, 0, 0
{P: BR ∧ barrier1 = arrive1 - depart2 ∧ barrier2 = arrive2 - depart1}
algorithm
process worker1
while true do
task 1
{P ∧ arrive1 = depart1}
⟨barrier1, arrive1 := barrier1 + 1, arrive1 + 1⟩
{P ∧ arrive1 = depart1 + 1}
⟨await barrier2 > 0 then
barrier2, depart1 := barrier2 - 1, depart1 + 1⟩
{P ∧ arrive1 = depart1}
algorithm
process worker2
while true do
task 2
{P ∧ arrive2 = depart2}
⟨barrier2, arrive2 := barrier2 + 1, arrive2 + 1⟩
{P ∧ arrive2 = depart2 + 1}
⟨await barrier1 > 0 then
barrier1, depart2 := barrier1 - 1, depart2 + 1⟩
{P ∧ arrive2 = depart2}
Exercise: Draw the state diagram and give the conditions for the correctness of the transitions and for non-interference.
Exercise: Generalize this to N processes!
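One possible generalization to N processes, sketched here, is a reusable two-phase barrier built from a counter protected by a semaphore and two "turnstile" semaphores; the class and variable names are made up for this sketch:

```python
from threading import Thread, Semaphore

class Barrier:
    def __init__(self, n):
        self.n, self.count = n, 0
        self.mutex = Semaphore(1)
        self.turnstile1 = Semaphore(0)   # opened when all have arrived
        self.turnstile2 = Semaphore(1)   # opened when all have left

    def wait(self):
        self.mutex.acquire()
        self.count += 1
        if self.count == self.n:         # last to arrive:
            self.turnstile2.acquire()    # lock the second turnstile,
            self.turnstile1.release()    # open the first
        self.mutex.release()
        self.turnstile1.acquire()        # pass the first turnstile,
        self.turnstile1.release()        # letting the next process through
        self.mutex.acquire()
        self.count -= 1
        if self.count == 0:              # last to leave:
            self.turnstile1.acquire()    # lock the first turnstile,
            self.turnstile2.release()    # open the second
        self.mutex.release()
        self.turnstile2.acquire()        # pass the second turnstile
        self.turnstile2.release()

N, log, b = 3, [], Barrier(3)
def worker(rounds):
    for r in range(rounds):
        log.append(r)                    # the "task" of round r
        b.wait()                         # round r + 1 starts only after all finish

threads = [Thread(target=worker, args=(2,)) for _ in range(N)]
for t in threads: t.start()
for t in threads: t.join()
assert log == [0, 0, 0, 1, 1, 1]         # rounds never overlap
```

The second turnstile prevents a fast process from racing around the loop and re-entering the barrier before a slow process has left it, which is what makes the barrier reusable across iterations.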
Here is an implementation in Python with tasks of different duration, but progressing in sync.
from threading import Thread, Semaphore
from time import sleep
from sys import stdout
class Ping(Thread):
def run(self):
while True:
stdout.write('ping\n')
sleep(2) # task
barrier1.release() # signal
barrier2.acquire() # wait
class Pong(Thread):
def run(self):
while True:
stdout.write('pong\n')
sleep(4) # task
barrier2.release() # signal
barrier1.acquire() # wait
barrier1, barrier2 = Semaphore(0), Semaphore(0) # create semaphores
ping = Ping()
pong = Pong() # create new threads
ping.start()
pong.start() # run threads
Producers and Consumers¶
A producer and consumer accessing a shared buffer, buf, need two binary semaphores for synchronization:
var buf: T
var empty, full: semaphore = 1, 0
algorithm
process producer
while true do
produce data
P(empty)
buf := data
V(full)
algorithm
process consumer
while true do
P(full)
data := buf
V(empty)
consume data
To argue for the correctness, we introduce ghost variables. Let

- inD and afterD be the number of times the producer has started and finished depositing data into buf,
- inF and afterF be the number of times the consumer has started and finished fetching data from buf.
The producer-consumer property PC specifies that deposit and fetch must alternate:
PC: inD ≤ afterF + 1 ∧ inF ≤ afterD
The processes with updates to the ghost variables maintain invariant P and we have that P ⇒ PC:
algorithm
var buf: T
var inD, afterD, inF, afterF: integer = 0, 0, 0, 0
{P: PC ∧ empty = afterF - inD + 1 ∧ full = afterD - inF}
algorithm
process producer
while true do
produce data
⟨await empty > 0 then
empty, inD := empty - 1, inD + 1⟩
buf := data
⟨full, afterD := full + 1, afterD + 1⟩
algorithm
process consumer
while true do
⟨await full > 0 then
full, inF := full - 1, inF + 1⟩
data := buf
⟨empty, afterF := empty + 1, afterF + 1⟩
consume data
Exercise: Draw the state diagram and give the conditions for the correctness of the transitions and for non-interference!
Exercise: Generalize this to M producers and N consumers!
Here is an implementation in Python in which the producer is "slow":
from threading import Thread, Semaphore
from time import sleep
from sys import stdout
class Producer(Thread):
def run(self):
global buf
n = 0
while True:
n += 1
sleep(4) # produce
stdout.write('producing' + str(n) + '\n')
empty.acquire() # wait
buf = n # deposit
full.release() # signal
class Consumer(Thread):
def run(self):
global buf
while True:
full.acquire() # wait
data = buf # fetch
empty.release() # signal
stdout.write('fetching' + str(data) + '\n')
empty, full = Semaphore(1), Semaphore(0) # create semaphores
producer = Producer()
consumer = Consumer() # create new threads
producer.start()
consumer.start() # run threads
In addition to being binary, the producer-consumer semaphores have the property that at most one of them is 1; this is called a split binary semaphore. In this case, for semaphores b1, …, bn, the following global invariant has to hold:
0 ≤ b1 + … + bn ≤ 1
Split binary semaphores can be used to implement general mutual exclusion among a number of processes. Suppose one semaphore is initialized with 1 and all others with 0. If every execution path starts with a P on one of the semaphores and ends with a V on one of the semaphores, then all statements between P and V execute with mutual exclusion: if one process is between P and V, then all semaphores must be 0, and no other process can complete a P.
Bounded Buffers¶
If producer and consumer progress is in bursts, as it commonly is, the potential blocking of either one can be reduced by increasing the buffer capacity.
To avoid dynamic memory allocation (which may fail) and to prevent the producer from "running away", buffers are allocated of a fixed maximal size, typically as arrays used in a circular fashion.
In this example, empty and full are general semaphores that are used as resource counters. We assume that there is only a single producer and a single consumer:
algorithm
var buf: array 0 .. C - 1 of T
var in, out: integer = 0, 0
var empty, full: semaphore = C, 0 # of empty, full slots
{C - 2 ≤ empty + full ≤ C}
algorithm
process producer
while true do
produce data
P(empty)
buf(in) := data
in := (in + 1) mod C
V(full)
algorithm
process consumer
while true do
P(full)
data := buf(out)
out := (out + 1) mod C
V(empty)
consume data
Question:

- When is empty + full = C?
- When is empty + full = C - 1?
- When is empty + full = C - 2?

Answer:

- When neither process is between P and V.
- When exactly one process is between P and V.
- When both processes are between P and V.
If two or more producers were to deposit an element in the buffer, P(empty) would not block while there is space left, and the updates of buf and in would be executed concurrently. This is avoided by introducing semaphores protecting the updates of buf and in for producers and of buf and out for consumers:
algorithm
var buf: array 0 .. C - 1 of T
var in, out: integer = 0, 0
var empty, full: semaphore = C, 0
{C - 2 ≤ empty + full ≤ C}
var deposit, fetch: semaphore = 1, 1
algorithm
process producer(i: 0 .. M - 1)
while true do
produce data
P(empty)
P(deposit)
buf(in) := data
in := (in + 1) mod C
V(deposit)
V(full)
algorithm
process consumer(j: 0 .. N - 1)
while true do
P(full)
P(fetch)
data := buf(out)
out := (out + 1) mod C
V(fetch)
V(empty)
consume data
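This algorithm carries over to Python directly; the buffer capacity, item ranges, and result lists below are made up for demonstration:

```python
from threading import Thread, Semaphore

C = 4
buf = [None] * C
inp, out = 0, 0                       # "in" is a Python keyword, so inp
empty, full = Semaphore(C), Semaphore(0)
deposit, fetch = Semaphore(1), Semaphore(1)

def producer(items):
    global inp
    for x in items:
        empty.acquire()               # P(empty): wait for a free slot
        deposit.acquire()             # P(deposit): protect buf and inp
        buf[inp] = x
        inp = (inp + 1) % C
        deposit.release()             # V(deposit)
        full.release()                # V(full): one more full slot

def consumer(n, result):
    global out
    for _ in range(n):
        full.acquire()                # P(full): wait for a full slot
        fetch.acquire()               # P(fetch): protect buf and out
        result.append(buf[out])
        out = (out + 1) % C
        fetch.release()               # V(fetch)
        empty.release()               # V(empty): one more free slot

r1, r2 = [], []
ts = [Thread(target=producer, args=(range(0, 10),)),
      Thread(target=producer, args=(range(10, 20),)),
      Thread(target=consumer, args=(10, r1)),
      Thread(target=consumer, args=(10, r2))]
for t in ts: t.start()
for t in ts: t.join()
assert sorted(r1 + r2) == list(range(20))   # every item fetched exactly once
```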
Dining Philosophers¶
The problem is due to E. W. Dijkstra, who solved it with semaphores; above is the formulation of C. A. R. Hoare. It is representative of the problem of processes competing for a non-disjoint set of resources, where the processes do not communicate directly with each other (they may not "know" about each other).
Each philosopher becomes a process and each fork a resource that is protected by a binary semaphore:
var fork: array 0 .. 4 of semaphore = [1, ..., 1]
process philosopher(i: 0 .. 4)
while true do
think
acquire forks
eat
release forks
Suppose we acquire forks by first picking up the left and the right fork, and release them similarly:
acquire forks: P(fork(i)) ; P(fork((i + 1) mod 5))
release forks: V(fork(i)) ; V(fork((i + 1) mod 5))
Question: Is it guaranteed that every philosopher will eat with two forks, which implies that no two neighbours can eat at the same time (safety), that philosophers don't deadlock (safety), and that every philosopher will eventually eat (liveness)?
from threading import Thread, Semaphore
from time import sleep
from sys import stdout
class Philosopher(Thread):
def __init__(self, i):
self.i = i
Thread.__init__(self)
def run(self):
while True:
stdout.write(str(self.i) + ' is thinking\n')
sleep(2)
fork[self.i].acquire()
fork[(self.i + 1) % 5].acquire()
stdout.write(str(self.i) + ' is eating\n')
sleep(1)
fork[self.i].release()
fork[(self.i + 1) % 5].release()
fork = [Semaphore(1) for i in range(5)]
phil = [Philosopher(i) for i in range(5)]
for i in range(5):
phil[i].start()
Answer: Philosophers will eat with both forks, but if all philosophers pick up first their left fork, no one can pick up their right fork and we have a deadlock.
Deadlocking is avoided if we "break the cycle" and have one process, say the last one, pick up the right fork before the left fork:
- philosopher 0 picks up forks 0, 1
- philosopher 1 picks up forks 1, 2
- philosopher 2 picks up forks 2, 3
- philosopher 3 picks up forks 3, 4
- philosopher 4 picks up forks 0, 4
Essentially, we have ordered the 5
forks and ensured that they are always taken in that order. In general, circular waiting can be avoided by ordering the resources. This is a global decision to which all processes have to adhere. If processes acquire new resources as they progress and these may have a lower order than those they are holding, the processes first have to release the resources they are holding and then acquire them again.
In the Python implementation, each philosopher constructs a set f with the two forks they need and then acquires first fork min(f) and then fork max(f):
from threading import Thread, Semaphore
from time import sleep
from sys import stdout
class Philosopher(Thread):
def __init__(self, i):
self.i = i
Thread.__init__(self)
def run(self):
while True:
stdout.write(str(self.i) + ' is thinking\n')
sleep(1)
f = {self.i, (self.i + 1) % 5}
fork[min(f)].acquire()
fork[max(f)].acquire()
stdout.write(str(self.i) + ' is eating\n')
sleep(1)
fork[min(f)].release()
fork[max(f)].release()
fork = [Semaphore(1) for i in range(5)]
phil = [Philosopher(i) for i in range(5)]
for i in range(5):
phil[i].start()
Readers and Writers¶
Two kinds of processes, readers and writers, access common data, like a database, file, or a data structure. Readers only examine the data while writers both examine and update the data. To preclude interference, a writer process must have exclusive access to the data. Assuming no writer is accessing the data, any number of readers may concurrently access the data. In a first attempt we use a single semaphore when accessing the data:
var rw: semaphore = 1
algorithm
procedure reader
while true do
P(rw)
read data
V(rw)
algorithm
procedure writer
while true do
P(rw)
write data
V(rw)
Question: With multiple readers, as in reader ‖ reader ‖ writer, what is the problem with this solution?
Answer: Only one reader (or writer) at a time can access the data.
We can relax this restriction by having only the first reader get a lock for all readers, and the last reader release it again. A variable nr counts the number of readers. A semaphore mutexRW is used for exclusive access to nr:
var nr: integer = 0
var rw, mutexRW: semaphore = 1, 1
algorithm
procedure reader
while true do
P(mutexRW)
nr := nr + 1
if nr = 1 then P(rw)
V(mutexRW)
read data
P(mutexRW)
nr := nr - 1
if nr = 0 then V(rw)
V(mutexRW)
algorithm
procedure writer
while true do
P(rw)
write data
V(rw)
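This solution can be rendered in Python with an added safety check; the reading and writing counters, the instrumentation lock ilock, and the violations list are illustration only and not part of the algorithm:

```python
from threading import Thread, Semaphore, Lock

nr = 0
rw, mutexRW = Semaphore(1), Semaphore(1)
ilock = Lock()                        # instrumentation only
reading, writing = 0, 0               # processes currently reading, writing
violations = []

def reader(rounds):
    global nr, reading
    for _ in range(rounds):
        mutexRW.acquire()
        nr += 1
        if nr == 1: rw.acquire()      # first reader locks out writers
        mutexRW.release()
        with ilock:
            reading += 1
            if writing > 0: violations.append('r')
        # read data
        with ilock:
            reading -= 1
        mutexRW.acquire()
        nr -= 1
        if nr == 0: rw.release()      # last reader admits writers again
        mutexRW.release()

def writer(rounds):
    global writing
    for _ in range(rounds):
        rw.acquire()
        with ilock:
            writing += 1
            if reading > 0: violations.append('w')
        # write data
        with ilock:
            writing -= 1
        rw.release()

ts = [Thread(target=reader, args=(500,)), Thread(target=reader, args=(500,)),
      Thread(target=writer, args=(500,))]
for t in ts: t.start()
for t in ts: t.join()
assert violations == []               # a writer never overlapped with a reader
```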
Question: What happens when

- a writer accesses the data and both another reader and writer try as well,
- a reader accesses the data and both another reader and writer try as well?

Answer: The previous solution may lead to writers never being able to access the data if a reader gets access first and new readers arrive before existing readers finish their access. Hence writers may starve.
For developing a fair solution, we start with a coarse-grained formulation. Let nr and nw be the number of readers and writers; RW is the reader-writer invariant:
algorithm
var nr, nw: integer = 0, 0
{RW: (nr = 0 ∨ nw = 0) ∧ nw ≤ 1}
algorithm
procedure reader
while true do
⟨await nw = 0 then nr := nr + 1⟩
read data
⟨nr := nr - 1⟩
algorithm
procedure writer
while true do
⟨await nr = 0 ∧ nw = 0 then nw := nw + 1⟩
write data
⟨nw := nw - 1⟩
The idea for implementing the entry and exit protocols is to use three binary semaphores and counters for the number of delayed processes:
var e: semaphore = 1
var r, w: semaphore = 0, 0
var nr, nw : integer = 0, 0
var dr, dw : integer = 0, 0
procedure reader
while true do
–– ⟨await nw = 0 then nr := nr + 1⟩
P(e)
if nw > 0 then
dr := dr + 1 ; V(e) ; P(r)
nr := nr + 1
if dr > 0 then dr := dr - 1 ; V(r)
else V(e)
read data
–– ⟨nr := nr - 1⟩
P(e)
nr := nr - 1
if nr = 0 and dw > 0 then
dw := dw - 1 ; V(w)
else V(e)
procedure writer
while true do
–– ⟨await nr = 0 ∧ nw = 0 then nw := nw + 1⟩
P(e)
if nr > 0 or nw > 0 then
dw := dw + 1 ; V(e) ; P(w)
nw := nw + 1
V(e)
write data
–– ⟨nw := nw - 1⟩
P(e)
nw := nw - 1
if dr > 0 then dr := dr - 1 ; V(r)
else if dw > 0 then dw := dw - 1 ; V(w)
else V(e)
In both entry and exit protocols, every process needs to get a lock on e. Then, depending on dr and dw, the readers and writers decide either to continue and access the data or to pass the baton to another reader or writer, who could then release the lock or pass it on further.
- The entry protocol of reader delays a new reader if a writer is accessing the data; once a reader can proceed, it checks if there are delayed readers and wakes up one (who in turn may wake up another one, etc.).
- The exit protocol of reader has the last reader wake up a delayed writer.
- The entry protocol of writer delays a new writer if there is already a reader or writer.
- The exit protocol of writer wakes up a delayed reader, if there is one, or otherwise wakes up a delayed writer, if there is one.
from threading import Thread, Semaphore
from time import sleep
from sys import stdout
class Reader(Thread):
def __init__(self, name):
self.n = name
Thread.__init__(self)
def run(self):
global nr, nw, dr, dw
while True:
# ⟨await nw == 0 then nr += 1⟩
e.acquire()
if nw > 0:
# if nw > 0 or dw > 0 :
dr += 1
e.release()
r.acquire()
nr += 1
if dr > 0:
dr -= 1
r.release()
else:
e.release()
# read data
stdout.write(self.n + ' reading\n')
sleep(1)
# ⟨nr -= 1⟩
e.acquire()
nr -= 1
if nr == 0 and dw > 0:
dw -= 1
w.release()
else:
e.release()
class Writer(Thread):
def __init__(self, name):
self.n = name
Thread.__init__(self)
def run(self):
global nr, nw, dr, dw
while True:
# ⟨await nr == 0 and nw = 0 then nw += 1⟩
e.acquire()
if nr > 0 or nw > 0:
dw += 1
e.release()
w.acquire()
nw += 1
e.release()
# write data
stdout.write(self.n + ' writing\n')
sleep(2)
# ⟨nw -= 1⟩
e.acquire()
nw -= 1
if dr > 0:
dr -= 1
r.release()
elif dw > 0:
dw -= 1
w.release()
else:
e.release()
# if dw > 0: dw -= 1; w.release()
# elif dr > 0: dr -= 1; r.release()
# else: e.release()
e = Semaphore(1)
r, w = Semaphore(0), Semaphore(0)
nr, nw = 0, 0
dr, dw = 0, 0
r1 = Reader('R1')
r2 = Reader('R2')
w1 = Writer('W1')
w2 = Writer('W2')
r1.start()
r2.start()
w1.start()
w2.start()
This solution still gives readers preference over writers. To give writers preference, we modify it such that

- new readers are delayed if a writer is waiting; the first if statement in reader is modified to:

if nw > 0 or dw > 0 then dr := dr + 1 ; V(e) ; P(r)

- a delayed reader is awakened only if no writer is currently writing; the second if statement in writer is modified to:

if dw > 0 then dw := dw - 1 ; V(w)
else if dr > 0 then dr := dr - 1 ; V(r)
else V(e)
To have a fair solution, with e.g. readers and writers taking alternate turns when both are waiting, we need to
- delay a new reader when a writer is waiting,
- delay a new writer when a reader is waiting,
- awaken one waiting writer, if any, when a reader finishes,
- awaken all waiting readers, if any, when a writer finishes, otherwise awaken one waiting writer, if any.
Implementing Semaphores¶
Operating systems distinguish between processes, which don't share memory, and threads, which share memory. As they are scheduled in similar ways, we will just refer to processes. Every process is in one of four states:
- executing: running on one of the processors (cores), not in any queue
- ready: in the queue of ready processes
- blocked: in the queue of one semaphore
- terminated: reached its end
A semaphore consists, in principle, of two fields:
- a non-negative integer counter
- a pointer to a queue of processes blocked on that semaphore
A process is represented by a pointer to a process descriptor (process control block) that contains the location of the code, stack, and possibly other saved registers. The following operations affect the state of processes and semaphores. A timer interrupts one of the processors periodically:
- fork(p): adds process p to the ready queue and calls dispatch()
- P(s): decrements the counter of s if it is positive; if it is zero, adds the current process to the queue of s and calls dispatch()
- V(s): increments the counter of s if the queue is empty; if not empty, moves one process from the queue of s to the ready queue and calls dispatch()
- timer interrupt: adds the current process to the ready queue and calls dispatch()
Procedure dispatch() assigns a ready process to a processor:
- the state of the calling process is saved in the process descriptor
- a process is removed from the ready queue and its previous state is restored
The ready queue and the semaphore queues are typically first-in first-out queues, which ensures (weak) fairness.
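The described semaphore structure, a counter plus a FIFO queue of blocked processes, can be sketched in Python; here an Event stands in for blocking a process, a Lock plays the role of the spinlock, and all names are made up:

```python
from collections import deque
from threading import Thread, Lock, Event

class QueueSemaphore:
    def __init__(self, init):
        self.count = init
        self.blocked = deque()            # FIFO queue of blocked "processes"
        self.lock = Lock()                # plays the role of the spinlock

    def P(self):
        self.lock.acquire()
        if self.count > 0:
            self.count -= 1               # counter positive: decrement
            self.lock.release()
        else:
            ev = Event()
            self.blocked.append(ev)       # counter zero: join the queue
            self.lock.release()
            ev.wait()                     # block until a V wakes us

    def V(self):
        with self.lock:
            if self.blocked:
                self.blocked.popleft().set()  # move one process to ready
            else:
                self.count += 1           # queue empty: increment

s = QueueSemaphore(0)
log = []
t = Thread(target=lambda: (s.P(), log.append('woken')))
t.start()
assert log == []                          # the thread is blocked in P
s.V()                                     # wake it
t.join()
assert log == ['woken']
```

Because the queue is first-in first-out, processes blocked on P are woken in arrival order, which gives the weak fairness mentioned above.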
Interrupts may also occur due to I/O requests: the process waiting on I/O is moved to the ready queue and dispatch() is called.
The dispatcher takes priorities into account. An absolute priority requires that ready processes with higher priority are always given preference. A relative priority only suggests to the dispatcher to give proportionally more or less time.
Question: Give examples of absolute and relative priorities in scheduling!
Answer:
- Absolute priority: Processes handling network and file I/O have higher priority than processes handling keyboard and mouse events, which have higher priority than regular processes. Some microcontrollers employ only two priorities, the higher for interrupts.
- Relative priority: A router gives requests for streaming media higher priority than web pages, and those higher priority than downloads. An operating system gives foreground processes higher priority than background processes. A browser gives visible tabs higher priority than other tabs. Unix (Linux, macOS) has the nice shell command. Java threads can have 10 different priority levels.
Python and Java add a created state:
producer = Producer() # producer is created, but not ready
producer.start() # producer is ready, dispatcher can switch to executing
producer.join() # producer has terminated
Linux semaphores maintain an additional field with a spinlock for waiting while the semaphore structure is updated:
struct semaphore {
raw_spinlock_t lock;
unsigned int count;
struct list_head wait_list;
};