Why Programs Fail¶
The specification may be faulty:
- It is incomplete or inconsistent
- It does not capture the user's intent
The program design may be faulty:
- It has a logical error, like a forgotten case or synchronization error
- It is based on idealized hypotheses, e.g. about integer range, available memory, or processing speed
- The program makes incorrect assumptions about other components
The underlying machine may be faulty:
- The compilation is faulty
- The run-time system or library is faulty
- The hardware fails
Failure, Faults, and Errors¶
A failure is an event that causes a deviation of the program (or system in general) from its specification. The deviation itself is the error. The cause of the error is the fault (e.g. an index computation is off by one). However, a fault does not necessarily cause an error (e.g. if a faulty procedure is never called). Errors may or may not be observable by a user and may or may not lead to a failure. If the specification allows it, a failure may lead to a degraded mode in which the specification is only partially fulfilled; this is then a partial failure.
According to the taxonomy by Avizienis, Laprie, Randell, and Landwehr, the attributes of dependability include:
- availability: readiness for correct service
- reliability: continuity of correct service
- safety: absence of catastrophic consequences on the users and the environment
- integrity: absence of improper system alterations
- maintainability: ability to undergo modifications and repairs
Detecting Failures¶
Some errors are commonly detected by the underlying machine and are indicative of a failure:
- Indexing an array out of bounds
- Allocating memory when none is available
- Reading a file beyond its end
Some failures can be detected by instrumenting programs, e.g. in Eiffel:
    class STACK
        capacity: INTEGER
        count: INTEGER
    invariant
        count <= capacity
    push is ...
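The same instrumentation idea can be sketched in Python; the class, its capacity parameter, and the assertion message are illustrative, not part of the original:

```python
class Stack:
    """Sketch of a class instrumented with an invariant check."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self._check_invariant()

    def _check_invariant(self):
        # invariant: count <= capacity
        assert len(self.items) <= self.capacity, "invariant violated"

    def push(self, x):
        self.items.append(x)
        self._check_invariant()  # detect the failure as early as possible
```

A violated invariant then surfaces as an exception at the operation that broke it, rather than as a silent error later.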
Some failures are either impractical or impossible to detect, for example that
- only a single pointer to an object exists,
- the precondition and invariant of binary search are valid,
- a program terminates.
Responding to Detected Failures¶
Even with best efforts in preventing failures, the possibility of a failure in a complex system remains. We need a systematic approach to dealing with failures. Consider the statement
S1 ; S2 ; S3 ; S4
where S1, S3 may detect a failure; in that case, T should be executed instead. One approach is to test for an expected error explicitly a priori or for an observed error a posteriori:
A priori test:
    if S1 possible then
        S1 ; S2
        if S3 possible then
            S3 ; S4
        else T
    else T
A posteriori test:
    S1
    if S1 successful then
        S2 ; S3
        if S3 successful then
            S4
        else T
    else T
A posteriori detection is common under C and Unix. The alternative is a dedicated control structure for exception handling:
try
S1 ; S2 ; S3 ; S4
catch
T
With proper exception handling, no additional variables and control structures are interspersed; the original program structure remains visible:
    f = fopen(filename, "r");
    if (f == NULL) {
        ... treat error
    } else {
        ... read file, checking for failure
        fclose(f);
    }

    try {
        f = fopen(filename, "r");
        ... read file, possibly failing
        fclose(f);
    } catch {
        ... treat error
    }
As an example, consider Monte Carlo integration in Python: function f is evaluated at randomly chosen points, which may lead to an arithmetic exception:
    import random

    def area(f, a, b, l, u, n):
        c = 0
        for i in range(n):
            try:
                x = random.uniform(a, b)
                y = random.uniform(l, u)
                if 0 <= y <= f(x):
                    c = c + 1
                elif f(x) <= y <= 0:
                    c = c - 1
            except:
                pass
        return (u - l) * (b - a) * c / n

    def reciprocal(x):
        return 1 / x

    >>> area(reciprocal, -1, 1, -1000, 1000, 10000)
    -0.4
Here, an exception is “rare and undesired”, but possible. The exception handler does nothing, but the quality of the result is affected.
Exception handling
- allows dealing with unanticipated failures that result from faults in the design,
- is useful for rare or undesired cases that would otherwise obstruct the original design, e.g. when operating on files or in floating point computations that may overflow or underflow,
- allows for imperfections during the design process, supporting extension and contraction, e.g. MS Developer Documentation for .NET:
    static void FutureFeature()
    {
        // Not developed yet.
        throw new NotImplementedException();
    }
- is more efficient if a priori tests require substantial overhead, e.g. testing arithmetic addition for possible overflow requires a subtraction, which means doubling the number of operations in numerical computations,
- is needed when the underlying machine fails and there are no means to test for that a priori or a posteriori, e.g. when a procedure call leads to a call stack overflow or when a transient hardware failure occurs.
Exception handling allows these cases to be treated uniformly.
Statements With Exceptions¶
In the presence of exceptions, every statement has one entry, but two exits, the normal and the exceptional exit. In state diagrams, exceptional final states are visually distinguished by a dashed contour. In what follows, every statement is revisited and new statements for exception handling are introduced.
The skip statement always terminates normally:
    skip
The raise statement always terminates exceptionally:
    raise
The assignment x := E evaluates expression E. If the evaluation fails, the assignment terminates exceptionally; otherwise the value of E is assigned to variable x:
    x := E
In the generalization to multiple assignments, all expressions are first evaluated and only if no evaluation fails, the values are assigned to the variables.
The sequential composition S ; T first attempts S and, if that terminates normally, continues with T. If either one terminates exceptionally, the whole statement terminates exceptionally:
    S ; T
The exceptional composition try S catch T first attempts S and, if that terminates exceptionally, continues with T. If either one terminates normally, the whole statement terminates normally:
    try S catch T
The conditional statement if B then S else T evaluates Boolean expression B and executes either S or T, depending on the value of B, provided that the evaluation of B succeeds. If the evaluation of B fails or if S or T terminates exceptionally, the whole statement terminates exceptionally:
    if B then S else T
The conditional statement if B then S is similar, except that T is skip:
    if B then S
The repetitive statement while B do S evaluates the condition B. If that succeeds and the value is true, the body S is executed and B is evaluated again. If B is false, the statement terminates normally. If either the evaluation of B fails or S terminates exceptionally, the whole statement terminates exceptionally:
    while B do S
Annotations and Correctness¶
For predicates P, Q, X, we extend the correctness assertions and the state diagrams to express that under precondition P, statement S either establishes postcondition Q on normal exit or postcondition X on exceptional exit:
    {P} S {Q, X}
If X is false, i.e. S does not raise an exception, we leave out X and have as previously:
    {P} S {Q}
The revised correctness rules with exceptions are directly derived from the corresponding state diagrams.
Rules for correctness of statements with exceptions:
    {P} x := E {Q, X}                if P ∧ ∆E ⇒ Q[x := E] and P ∧ ¬∆E ⇒ X
    {P} S ; T {R, X}                 if {P} S {Q, X} and {Q} T {R, X}
    {P} try S catch T {Q, Y}         if {P} S {Q, X} and {X} T {Q, Y}
    {P} if B then S else T {Q, X}    if {P ∧ ∆B ∧ B'} S {Q, X} and {P ∧ ∆B ∧ ¬B'} T {Q, X} and P ∧ ¬∆B ⇒ X
    {P} if B then S {Q, X}           if {P ∧ ∆B ∧ B'} S {Q, X} and P ∧ ∆B ∧ ¬B' ⇒ Q and P ∧ ¬∆B ⇒ X
    {P} while B do S {Q, X}          if {P ∧ ∆B ∧ B'} S {P, X} and P ∧ ∆B ∧ ¬B' ⇒ Q and P ∧ ¬∆B ⇒ X
Derived Statements¶
An array assignment is defined in terms of the alter function; the only difference from before is that the evaluation of E and F may fail, in which case an exception is raised:
    x(E) := F  =  x := (x; E: F)
The statement assert B raises an exception if B does not hold and otherwise does nothing:
    assert B  =  if ¬B then raise
The statement try S finally U always executes U, whether S terminates normally or exceptionally. The whole statement terminates normally only if both S and U do so; otherwise it terminates exceptionally:
    try S finally U  =  try S catch (U ; raise) ; U
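The defining equation can be checked operationally with a small Python sketch; the helper names and the use of Python exceptions to model raise are assumptions of the sketch:

```python
def try_finally(S, U):
    """Executes 'try S finally U' via its defining equation:
    try S catch (U ; raise) ; U"""
    try:
        S()
    except Exception:
        U()      # U runs on the exceptional path ...
        raise    # ... and the exception is passed on
    U()          # U also runs on the normal path

log = []

def ok():
    log.append("S")

def bad():
    log.append("S")
    raise ValueError("failure in S")

def clean():
    log.append("U")
```

On both paths U is executed exactly once, and an exception raised by S is propagated after U has run.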
The statement try S catch T finally U unifies try-catch and try-finally. It can be defined in two equivalent ways:
    try S catch T finally U  =  try S catch (try T catch (U ; raise)) ; U
                             =  try (try S catch T) finally U
Example: Saturating Vector Division¶
The following annotated program computes c(i) = a(i) div b(i) for each i, saturating with maxint where b(i) = 0:
    {true}
    i := 0
    {P: 0 ≤ i ≤ n ∧ ∀ j ∈ 0 .. i - 1 • (b(j) ≠ 0 ∧ c(j) = a(j) div b(j)) ∨ (b(j) = 0 ∧ c(j) = maxint)}
    while i < n do
        {i < n ∧ P}
        try
            c(i) := a(i) div b(i)
            {i < n ∧ P ∧ b(i) ≠ 0 ∧ c(i) = a(i) div b(i), i < n ∧ P ∧ b(i) = 0}
        catch
            {i < n ∧ P ∧ b(i) = 0}
            c(i) := maxint
            {i < n ∧ P ∧ b(i) = 0 ∧ c(i) = maxint}
        {i < n ∧ P ∧ ((b(i) ≠ 0 ∧ c(i) = a(i) div b(i)) ∨ (b(i) = 0 ∧ c(i) = maxint))}
        i := i + 1
        {P}
    {∀ j ∈ 0 .. n - 1 • (b(j) ≠ 0 ∧ c(j) = a(j) div b(j)) ∨ (b(j) = 0 ∧ c(j) = maxint)}
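A Python transcription of the algorithm might look as follows; MAXINT and the use of Python's ZeroDivisionError and floor division // for div are illustrative assumptions:

```python
MAXINT = 2**31 - 1  # illustrative bound standing in for maxint

def saturating_div(a, b):
    """c(i) = a(i) div b(i) where defined, MAXINT where b(i) = 0."""
    c = []
    for i in range(len(a)):
        try:
            c.append(a[i] // b[i])
        except ZeroDivisionError:  # the exceptional exit of the division
            c.append(MAXINT)
    return c
```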
Procedure Specifications¶
So far we considered only a single exceptional exit. Languages like Java allow specifying different exception types that can be caught selectively, allowing statements with multiple exits to be expressed. A method specification can then state what has to hold at each exit, for example:
    public static int search(int[] a, int x)
            throws NullPointerException, NotFoundException
    /* requires: a is sorted
       ensures: 0 <= result < a.length && a[result] == x
       signals NullPointerException: a == null
       signals NotFoundException: x not in a
    */
However, this style of specification assumes that all exceptions are anticipated, which does not allow coping with unanticipated failures; it amounts to using exceptions as a control structure for undesired or rare cases.
Pattern: Masking¶
try request next command
catch command := help
The desired (but possibly weakened) postcondition is always established. This is an instance of forward recovery.
If
    {P} S {Q, H}
    {H} T {Q}
then:
    {P} try S catch T {Q}
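As a Python sketch, with an illustrative command reader and "help" default:

```python
def next_command(read_command):
    """Masking: the postcondition 'a command is set' is always
    established, possibly in the weakened form of a default."""
    try:
        return read_command()
    except Exception:
        return "help"  # forward recovery: fall back to a safe command
```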
Pattern: Masking with Re-raising¶
try process file A and output file B
catch (delete file B ; raise)
In a modular design, each module restores a consistent state before passing on the exception. This pattern can be used for backward recovery (rolling back) or forward recovery.
If
    {P} S {Q, H}
    {H} T {R, R}
then:
    {P} try S catch (T ; raise) {Q, R}
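A Python sketch of this pattern for the file example; the paths and the transform function are illustrative:

```python
import os
import tempfile

def process(src_path, dst_path, transform):
    """Masking with re-raising: on failure, delete the partial output
    file to restore a consistent state, then propagate the exception."""
    try:
        with open(src_path) as src, open(dst_path, "w") as dst:
            for line in src:
                dst.write(transform(line))
    except Exception:
        if os.path.exists(dst_path):
            os.remove(dst_path)  # restore a consistent state
        raise                    # pass the exception on
```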
Pattern: Flagging¶
try (process file A and output file B ; done := true)
catch (delete file B ; done := false)
The occurrence of an exception is recorded for further actions, e.g. further forward or backward recovery.
If
    {P} S {Q, H}
    {H} T {R}
then:
    {P} try (S ; done := true) catch (T ; done := false) {(done ∧ Q) ∨ (¬done ∧ R)}
The role of T is to establish an alternate postcondition; in the above formulation, T must not fail.
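A generic Python sketch, with S and T passed as callables (an assumption of the sketch):

```python
def run_flagged(S, T):
    """Flagging: record whether S succeeded; on failure T establishes
    the alternate postcondition and is assumed not to fail."""
    try:
        S()
        done = True
    except Exception:
        T()
        done = False
    return done
```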
Pattern: Rollback with Masking¶
u0, v0, w0 := u, v, w
try display form for entering u, v, w
catch u, v, w := u0, v0, w0
The pattern prevents leaving an inconsistent state, e.g. with a broken invariant, or an undesirable state, e.g. one which only allows termination.
Let B stand for "backup available". If
    {P} backup {P ∧ B}
    {B} restore {P}
    {P ∧ B} S {Q, B}
    {P} T {Q}
then:
    {P} backup ; try S catch (restore ; T) {Q}
Statement T can either do some clean-up for backward recovery or do forward recovery. The pattern assumes that both restore and T do not fail.
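A Python sketch, modelling the state as a dict so that backup and restore can be done by copying (an assumption of the sketch; restore and T are indeed assumed not to fail):

```python
import copy

def rollback_masking(state, S, T):
    """Rollback with masking: back up the state, attempt S, and on
    failure restore the backup and let T recover."""
    backup = copy.deepcopy(state)  # backup
    try:
        S(state)
    except Exception:
        state.clear()
        state.update(backup)       # restore
        T(state)                   # clean-up or forward recovery
```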
Pattern: Rollback with Propagation¶
The difference to Rollback with Masking is that now restore can fail. If
    {P} backup {P ∧ B, P}
    {B} restore {P, P}
    {P ∧ B} S {Q, B}
then:
    {P} backup ; try S catch (restore ; raise) {Q, P}
Partial Correctness¶
Suppose S either establishes the desired postcondition or fails but re-establishes the precondition in that case:
    {P} S {Q, P}
Then S is partially correct with respect to P, Q.
Several patterns ensure partial correctness. Eiffel method specifications can be understood as partial correctness specifications:
    method is
        require
            pre
        do
            body
        ensure
            post
        rescue
            handler
Pattern: Degraded Service¶
Here is a procedure for computing √(x² + y²) by Hull et al. (1994):
    try -- try the simplest formula, will work most of the time
        z := √(x² + y²)
    catch -- overflow or underflow has occurred
        try
            m := max(abs(x), abs(y))
            try -- try the formula with scaling
                t := √((x / m)² + (y / m)²)
            catch -- underflow has occurred
                t := 1
            z := m × t
        catch -- overflow on unscaling has occurred
            z := +∞ ; raise
Several alternatives achieve the same goal, but some are preferred over others. If the first one fails, we fall back to a less desirable one. Suppose the alternatives are S₁, S₂, S₃. If
    {P} S₁ {Q, H₁}
    {H₁} S₂ {Q, H₂}
    {H₂} S₃ {Q, R}
then:
    {P} try S₁ catch (try S₂ catch S₃) {Q, R}
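The procedure above can be sketched in Python. Python's ** operator raises OverflowError on float overflow, so the overflow structure carries over; IEEE gradual underflow does not raise in Python, so the t := 1 branch is modelled by the same handler only as an approximation:

```python
import math

def sqrt_sum_squares(x, y):
    """Degraded-service sketch of computing sqrt(x^2 + y^2)."""
    try:
        return math.sqrt(x ** 2 + y ** 2)  # simplest formula first
    except OverflowError:                  # overflow has occurred
        m = max(abs(x), abs(y))
        try:
            # the formula with scaling; arguments are at most 1
            t = math.sqrt((x / m) ** 2 + (y / m) ** 2)
        except OverflowError:
            t = 1.0
        return m * t                       # may overflow on unscaling
```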
Pattern: Recovery Block¶
The recovery block goes back to Horning et al. (1974) and is based on an analysis of existing software. Randell (1975) gives it the following syntactic form, where A is a Boolean expression, the acceptance test, and S₁, S₂, S₃ are the alternatives, in that order of preference. If one alternative fails, the original state is restored before the next is taken. If all fail, the whole statement fails. The second formulation below uses exceptions and generalizes the original formulation by allowing each alternative to have a different acceptance test.
    ensure A
    by S₁
    else by S₂
    else by S₃
    else error

    backup
    try S₁ ; assert A₁
    catch
        restore
        try S₂ ; assert A₂
        catch
            restore
            try S₃ ; assert A₃
            catch restore ; raise
Let RB be the above statement, P the precondition, and Q the desired postcondition. If
- backup succeeds making a backup, or fails but preserves the precondition in that case,
    {P} backup {P ∧ B, P}
- restore always succeeds restoring the original precondition and preserves the backup,
    {B} restore {P ∧ B}
- each alternative either succeeds or fails and in any case preserves the backup,
    {P ∧ B} Sᵢ {Qᵢ ∧ B, B}
- if the acceptance test passes, the desired postcondition is established,
    Qᵢ ∧ Aᵢ ⇒ Q
then the recovery block is partially correct with respect to P, Q:
    {P} RB {Q, P}
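A Python sketch of the recovery block; alternatives and acceptance tests are passed as callables over a dict state (assumptions of the sketch):

```python
import copy

def recovery_block(state, alternatives):
    """'alternatives' is a list of (body, acceptance test) pairs in
    order of preference; the state is backed up first and restored
    before each further attempt. If all alternatives fail, so does
    the whole statement."""
    backup = copy.deepcopy(state)        # backup
    for body, accept in alternatives:
        try:
            body(state)
            if not accept(state):        # assert A_i
                raise AssertionError("acceptance test failed")
            return
        except Exception:
            state.clear()
            state.update(backup)         # restore
    raise RuntimeError("all alternatives failed")  # restore ; raise
```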
Pattern: Bounded Retry¶
    while n > 0 do
        try S ; n := 0
        catch
            T ; n := n - 1
            if n = 0 then raise
The role of statement T is to "clean up" so that S can be attempted again. Let BR stand for the above statement. If
    {P} S {Q, R}
    {R} T {P}
then
    {n ≥ 0 ∧ P} BR {Q, P}
The postconditions P and Q can be strengthened to include n = 0.
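A Python sketch, with S, T, and n as illustrative parameters and any exception counting as a failure:

```python
def bounded_retry(S, T, n):
    """Attempt S up to n times; T 'cleans up' between attempts so
    that S can be attempted again; the last failure is propagated."""
    while n > 0:
        try:
            S()
            n = 0
        except Exception:
            T()
            n = n - 1
            if n == 0:
                raise
```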
Pattern: Bounded Retry with Rollback and Pause¶
This is similar to a recommended pattern for Azure.
    backup
    while n > 0 do
        try S ; n := 0
        catch
            restore ; n := n - 1
            if n = 0 then raise
            pause
Statements S, restore, and pause must not modify n. The role of pause is to delay execution so that another attempt is more likely to succeed. Let RR stand for the above statement. If
    {P} backup {P ∧ B, P}
    {B} restore {P ∧ B}
    {P ∧ B} S {Q, B}
then
    {n ≥ 0 ∧ P} RR {Q, P}
The postconditions P and Q can be strengthened to include n = 0.
Pattern: Conditional Retry¶
This pattern mimics Eiffel's rescue and retry statements:
    done := false
    while ¬done and B do
        try S ; done := true
        catch T
    if ¬done then raise
Let CR stand for the above statement. If
    {∆B ∧ B' ∧ P} S {Q, R}
    {R} T {P}
then:
    {P} CR {Q, P}
If B is not defined, i.e. ¬∆B, then the statement terminates exceptionally with P.
Eiffel Exceptions¶
Eiffel statements have three exits,
- the normal exit,
- an exit when an exception is raised, and
- an exit for retrying a method body.
Postconditions are now triples {Q, X, R} of the normal postcondition Q, the exceptional postcondition X, and the retry postcondition R. If R or X, R are left out, they are assumed to be false.
Method sqrt computes the approximate integer square root of n in the interval l .. u. Let p be 0 ≤ l < u ∧ l² ≤ n < u²:
    sqrt(n, l, u : INTEGER) : INTEGER
        {p}
        local
            m : INTEGER
        {rescue invariant: p}
        do
            {loop invariant: p}
            from until u − l = 1 loop
                m := l + (u − l) // 2
                {p ∧ m = (l + u) // 2}
                if n < m ∗ m then u := m else l := m end
                {p, p ∧ m = (l + u) // 2 ∧ n < m²}
            end
            {p ∧ u − l = 1}
            Result := l
        rescue
            {p ∧ m = (l + u) // 2 ∧ n < m²}
            u := m
            {p}
            retry
        {false, false, p}
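Ignoring the rescue clause, which only matters when the evaluation of m * m fails, the loop can be transcribed to Python, whose unbounded integers avoid that overflow:

```python
def sqrt_interval(n, l, u):
    """Binary search for the integer square root of n, assuming the
    precondition p: 0 <= l < u and l*l <= n < u*u."""
    while u - l != 1:
        m = l + (u - l) // 2
        if n < m * m:
            u = m
        else:
            l = m
    return l
```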
Try-Catch-Finally¶
The statement try S finally U always executes U, whether S terminates normally or exceptionally. The whole statement terminates normally only if both S and U do so; otherwise it terminates exceptionally:
    try S finally U  =  try S catch (U ; raise) ; U
Intuitively,
- the catch statement ensures safety by establishing a consistent state,
- the finally statement ensures liveness by freeing all resources (freeing memory; closing files, windows, network connections).
The difficulty is that U is entered with two different preconditions and exited differently. If
    {P} S {Q, X}
    {Q} U {R, Y}
    {X} U {Y, Y}
then:
    {P} try S finally U {R, Y}
This rule would require two separate annotations of U, which would make it awkward in practice. The key is to design U such that
- U never fails: suppose U failed when started in Q; in that case Y has to be sufficiently weak so it can also be established (by U) from X, and whatever S achieved by establishing Q is "lost". Suppose U failed when started in X; as both normal and exceptional termination lead to the same condition Y, that has to be weak enough,
- U only "cleans up" by affecting some variables that are not mentioned in Q and X, so R can include Q and Y can include X,
- the precondition of U is implied by both Q and X.
(Draft) The nondeterministic assignment v := ? sets v to an arbitrary value of its type, and S ⫿ T chooses nondeterministically between S and T. Suppose File is defined by:
    class File
        var s: {closed, reading, writing} = closed
        var c: seq(byte)
        var p: integer
        method reset(fn: string)
            if s = closed → skip ⫿ true → raise
            s, p, c := reading, 0, ?
        method rewrite(fn: string)
            if s = closed → skip ⫿ true → raise
            s, c := writing, []
        method close()
            if s ≠ closed → skip ⫿ true → raise
            s := closed
        method read() → (b: byte)
            if s = reading → skip ⫿ true → raise
            b, p := c(p), p + 1
        method write(b: byte)
            if s = writing → skip ⫿ true → raise
            c := c + [b]
        method eof → (e: boolean)
            if s = reading → skip ⫿ true → raise
            e := p = #c
Note: if s = closed → skip ⫿ true → raise can equivalently be expressed as raise ⫿ assert s = closed.
Exercise: Develop a reliable program to copy file "hocus" to file "pocus"!
    var h, p: File
    var e: boolean, b: byte
    h ← new File() ; p ← new File()
    try
        h.reset("hocus")
        try
            p.rewrite("pocus")
            e ← h.eof()
            while ¬e do
                b ← h.read() ; p.write(b) ; e ← h.eof()
        finally
            p.rewrite("pocus") ; p.close()
    finally
        h.close()
Procedures¶
Consider a general procedure specification of the form:
    procedure p(v: V) → (w: W)
        modifies u
        requires P(u, v)
        ensures Q(u₀, u, w)
        S
Extending partial correctness to procedures, the implementation S is partially correct with respect to its specification if S either establishes the desired postcondition Q on normal termination or preserves the precondition on exceptional termination:
    {P(u, v) and u₀ = u} S {Q(u₀, u, w), P(u, v)}
If S does not raise an exception, we have as previously:
    {P(u, v) and u₀ = u} S {Q(u₀, u, w)}
Classes¶
Consider the following specification of stacks; all fields are "private" and all methods are "public". The class is meant to be abstract, as it serves only for specification. (In Java, this could be expressed by an interface.)
    class Stack
        var s: seq(integer)
        method push(x: integer)
            ensures s = [x] + s₀
        method pop() → (x: integer)
            requires s ≠ []
            ensures x, s = s₀(0), s₀[1:]
        method empty → (r: boolean)
            ensures r, s = (s = []), s₀
The implementation shall store the elements of the sequence in an array of fixed size. This implies that push cannot be implemented faithfully to its specification, which does not set an upper bound on the length of the stack. Using dynamically allocated data structures can alleviate that, but due to the finiteness of memory, at some point that will lead to the same problem. For simplicity, we discuss a partial class implementation with arrays of fixed size:
    class ArrayStack implements Stack
        const C = 100
        var a: array 0 .. C - 1 of integer
        var n: integer = 0
        {CI: 0 ≤ n ≤ C ∧ s = reverse(a[0 : n - 1])}
        method push(x: integer)
            a(n), n := x, n + 1
        method pop() → (x: integer)
            x, n := a(n - 1), n - 1
        method empty → (r: boolean)
            r := n = 0
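A Python transcription of ArrayStack for illustration: since the multiple assignment in push first evaluates a(n), a push on a full stack raises without changing a or n, which the sketch mirrors by indexing before updating. As in the specification, pop assumes its precondition:

```python
class ArrayStack:
    C = 100  # fixed capacity, as in the pseudocode

    def __init__(self):
        self.a = [0] * self.C
        self.n = 0

    def push(self, x):
        # raises IndexError if n = C, leaving a and n unchanged
        self.a[self.n] = x
        self.n = self.n + 1

    def pop(self):
        # assumes the stack is non-empty
        x, self.n = self.a[self.n - 1], self.n - 1
        return x

    def empty(self):
        return self.n == 0
```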
The concrete invariant of ArrayStack is 0 ≤ n ≤ C and the abstraction function from ArrayStack to Stack is s = reverse(a[0 : n - 1]). Their conjunction is the coupling invariant or abstraction relation. With a totally correct implementation, each method would preserve the coupling invariant, which is not the case here.
Consider
    class C
        var a: A
        method m
            requires P(a)
            ensures Q(a₀, a)

    class D implements C
        var b: B
        invariant I(a, b)
        method m
            T
Class D is a (totally) correct implementation of C if, for every method:
    {a = a₀ ∧ I(a, b) ∧ P(a)} T {∃ a · I(a, b) ∧ Q(a₀, a)}
Class D is a partially correct implementation of C if, for every method:
    {a = a₀ ∧ I(a, b) ∧ P(a)} T {∃ a · I(a, b) ∧ Q(a₀, a), ∃ a · a = a₀ ∧ I(a, b) ∧ P(a)}
Method push is only partially correct, meaning:
    {s = s₀ ∧ CI} a(n), n := x, n + 1 {∃ s · CI ∧ s = [x] + s₀, ∃ s · s = s₀ ∧ CI}
For the proof, we rewrite a(n) := x as a := (a; n: x), so the condition becomes:
(1)
    {s = s₀ ∧ CI} a, n := (a; n: x), n + 1 {∃ s · CI ∧ s = [x] + s₀, ∃ s · s = s₀ ∧ CI}
From the definition of ∆ it follows that ∆(a; n: x) = 0 ≤ n < C. Assuming that addition will not overflow, we have ∆(n + 1) = true. Applying the rules for the correctness of statements with exceptions, (1) holds if:
(1.1)
    CI ∧ s = s₀ ∧ 0 ≤ n < C ⇒ (∃ s · CI ∧ s = [x] + s₀)[a, n := (a; n: x), n + 1]
(1.2)
    CI ∧ s = s₀ ∧ ¬(0 ≤ n < C) ⇒ (∃ s · s = s₀ ∧ CI)
For (1.1) we have:
    CI ∧ s = s₀ ∧ 0 ≤ n < C ⇒ (∃ s · CI ∧ s = [x] + s₀)[a, n := (a; n: x), n + 1]
    ≡ «substitution, definition of CI»
    CI ∧ s = s₀ ∧ 0 ≤ n < C ⇒ (∃ s · 0 ≤ n + 1 ≤ C ∧ s = reverse((a; n: x)[0 : n]) ∧ s = [x] + s₀)
    ≡ «definition of CI, one-point rule»
    0 ≤ n < C ⇒ 0 ≤ n + 1 ≤ C ∧ reverse((a; n: x)[0 : n]) = [x] + reverse(a[0 : n - 1])
    ≡ «arithmetic, property of reverse»
    reverse((a; n: x)[0 : n]) = reverse(a[0 : n - 1] + [x])
    ⇐ «Leibniz»
    (a; n: x)[0 : n] = a[0 : n - 1] + [x]
    ≡ «from definition of slicing»
    true
For (1.2) we have:
    CI ∧ s = s₀ ∧ ¬(0 ≤ n < C) ⇒ (∃ s · s = s₀ ∧ CI)
    ≡ «definition of CI, one-point rule»
    0 ≤ n ≤ C ∧ s = reverse(a[0 : n - 1]) ∧ ¬(0 ≤ n < C) ⇒ 0 ≤ n ≤ C ∧ s = reverse(a[0 : n - 1])
    ⇐ «arithmetic, logic»
    n = C ⇒ 0 ≤ n ≤ C
    ≡ «logic»
    true
Exercise: State whether pop and empty are totally or partially correct and give the proofs!
Question: Is the following implementation of push and pop also partially correct?
    method push(x: integer)
        a(n) := x ; n := n + 1
    method pop() → (x: integer)
        n := n - 1 ; x := a(n)
Above, push on a full stack will raise an exception without changing a or n. However, pop on an empty stack will change n before raising an exception, therefore violating the coupling invariant.
The alternative to employing partial correctness is to incorporate all possible implementation restrictions in the specification. In that case, Stack would be an unimplementable specification.
Summary¶
- Despite putting forth best effort in the design, the possibility of faults remains, and programs need to respond to failures.
- Exception handling with try-catch statements allows systematic treatment of faults (cf. resumption).
- The notion of partial correctness is a methodological guide: either the desired postcondition is established or the precondition is re-established. In particular, it allows for unanticipated failures.
- Exception patterns: masking, flagging, propagating, rollback, degraded service, recovery block, bounded retry, conditional retry. These state what kind of unanticipated exception can be tolerated.
- Use of exceptions should be reserved for failures rather than as an extra control structure; otherwise the intention gets blurred.
- Distinguishing between different exception types is not essential for most exception patterns. In the above patterns, their only use is in the retrying patterns, to guide whether a retry is advisable or not.