Typically, we assume that basic operations on natural numbers (e.g., adding or multiplying two natural numbers) are performed in constant time. In practice, this assumption is reasonable whenever we restrict ourselves to natural numbers of bounded size (e.g., 64-bit natural numbers, for which basic operations are supported directly by modern processors). Applications such as cryptography often work with huge natural numbers, however (e.g., 4048-bit values, which can represent numbers up to ≈3.7⋅10^1218). Hence, for these applications we can no longer assume that operations on natural numbers take constant time: such applications require efficient algorithms even for basic operations on natural numbers.
Consider two n-digit natural numbers A = a_1…a_n and B = b_1…b_n written in base 10: the digits a_1,…,a_n and b_1,…,b_n each have a value in 0…9.
For example, if n=4, then we could have A=3456, B=9870, in which case a_1=3, a_2=4, a_3=5, a_4=6, b_1=9, b_2=8, b_3=7, b_4=0.
1.1
Write an algorithm ADD(A, B) that computes A+B in Θ(n). Explain why your algorithm is correct and the runtime complexity is Θ(n).
Assumption: we first convert A and B into two arrays of n integers, A=[a_1,…,a_n] and B=[b_1,…,b_n].
Algorithm 25 ADD(A, B)
Input: A:=[a_1,…,a_n]
Input: B:=[b_1,…,b_n]
1:C←[0,…,0] with |C|=n+1
2:carry←0
3:i←n
4:while i≥1 do
5:C[i]←(a_i+b_i+carry) mod 10
6:carry←⌊(a_i+b_i+carry)/10⌋
7:i←i−1
8:end while
9:C[0]←carry
10:if C[0]==0 then
11:C←C[1…n]
12:end if
Output: C
Runtime complexity: Θ(n)
Line 1 takes Θ(n) time to initialise C.
The while loop iterates n times, and each iteration performs only constant-time operations (addition, modulo, integer division), i.e., Θ(1) per iteration.
Finally, the adjustment of the output array C takes at most O(n) time (dropping a leading zero). In total: Θ(n) + n·Θ(1) + O(n) = Θ(n).
Loop invariant: after the iteration for position i, the subarray C[i…n] holds the correct last n−i+1 digits of A+B and carry holds the correct carry into position i−1.
Termination: the loop variable i strictly decreases in each iteration (i_new := i − 1), so the loop terminates after n iterations.
Therefore the invariant holds when the loop exits; together with line 9, which writes the final carry into C[0], this gives C = A+B, so ADD is correct.
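For concreteness, here is a minimal Python sketch of Algorithm 25 under the assumptions above (digits stored most-significant first in 0-indexed lists); the function and variable names are our own, not part of the assignment:

# Sketch of Algorithm 25 (ADD): A and B are lists of n decimal digits,
# most significant first. Python lists are 0-indexed, so digit a_i of the
# pseudocode corresponds to A[i-1] here.
def add(A, B):
    n = len(A)
    C = [0] * (n + 1)
    carry = 0
    for i in range(n, 0, -1):            # i = n, n-1, ..., 1
        s = A[i - 1] + B[i - 1] + carry
        C[i] = s % 10                    # current output digit
        carry = s // 10                  # carry into the next position
    C[0] = carry
    return C[1:] if C[0] == 0 else C     # drop a leading zero, if any

# Example: 3456 + 9870 = 13326
# add([3, 4, 5, 6], [9, 8, 7, 0]) == [1, 3, 3, 2, 6]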
1.2
What is the runtime complexity of this algorithm in terms of the number of digits in A and B?
Runtime complexity is Θ(n^2), where n is the number of digits in A and B.
For each digit of B, we multiply it by every digit of A, which results in n^2 single-digit multiplications.
Each of the n resulting partial products is then added into the result; each such addition involves at most 2n digits, so the additions take O(n^2) time in total.
Overall, pen-and-paper multiplication of two n-digit numbers takes Θ(n^2) time.
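A minimal Python sketch of the pen-and-paper (schoolbook) method, using the same digit-array representation as ADD; the layout and names are our own choice:

# Schoolbook multiplication: A and B are lists of n decimal digits,
# most significant first. The result has at most 2n digits.
def pen_and_paper_multiply(A, B):
    n = len(A)
    total = [0] * (2 * n)
    for j in range(n - 1, -1, -1):        # for each digit of B, right to left
        carry = 0
        for i in range(n - 1, -1, -1):    # multiply it with each digit of A
            pos = i + j + 1               # position of this partial product
            s = total[pos] + A[i] * B[j] + carry
            total[pos] = s % 10
            carry = s // 10
        total[j] = total[j] + carry       # final carry of this row
    k = 0                                 # strip leading zeros
    while k < len(total) - 1 and total[k] == 0:
        k += 1
    return total[k:]

# Example: 3456 * 9870 = 34110720
# pen_and_paper_multiply([3, 4, 5, 6], [9, 8, 7, 0]) == [3, 4, 1, 1, 0, 7, 2, 0]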
1.3
Let C be an n-digit number with n=2m. Hence, C = C_high⋅10^m + C_low, where C_high is the first m digits of C and C_low is the remaining m digits of C. For example, if n=4, A=3456, B=9870, then m=2 and A_high=34, A_low=56, B_high=98, B_low=70.
Consider the following recursive algorithm BREAKSDOWNMULTIPLY(A, B) that computes A×B:
Algorithm 26 BREAKSDOWNMULTIPLY(A, B)
Input: A and B have n=2m digits
1:if n=1 then
2:return a_1×b_1
3:else
4:hh:=BREAKSDOWNMULTIPLY(A_high,B_high)
5:hl:=BREAKSDOWNMULTIPLY(A_high,B_low)
6:lh:=BREAKSDOWNMULTIPLY(A_low,B_high)
7:ll:=BREAKSDOWNMULTIPLY(A_low,B_low)
8:return hh⋅10^(2m)+(hl+lh)⋅10^m+ll
9:end if
Prove that algorithm BREAKSDOWNMULTIPLY(A, B) is correct.
The proposed BREAKSDOWNMULTIPLY(A, B) is the four-multiplication divide-and-conquer scheme that Karatsuba's algorithm improves upon. We prove correctness by induction on n.
Base case: n=1, in which case the algorithm returns a_1×b_1, the product of two single-digit numbers, which is correct.
Inductive step: assume the algorithm correctly computes the product of any two inputs with fewer than n digits. For n>1, split the inputs as
A = A_high⋅10^m + A_low and B = B_high⋅10^m + B_low, where m = n/2.
Expanding the product gives
A×B = (A_high×B_high)⋅10^(2m) + (A_high×B_low + A_low×B_high)⋅10^m + (A_low×B_low).
By the induction hypothesis, the recursive calls correctly compute hh = A_high×B_high, hl = A_high×B_low, lh = A_low×B_high and ll = A_low×B_low, since each operand has only m = n/2 < n digits.
Line 8 combines these partial products exactly according to the identity above, so the algorithm returns A×B. By induction, BREAKSDOWNMULTIPLY is correct for all n.
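As an illustration, here is a hedged Python sketch of the same four-way recursion. For brevity it operates directly on Python integers rather than digit arrays, and it relaxes the base case to "either operand has a single digit"; the function name is our own:

# Four-way divide-and-conquer multiplication (the scheme of Algorithm 26),
# sketched on Python ints instead of digit arrays.
def breaks_down_multiply(a, b):
    if a < 10 or b < 10:                      # base case: a single-digit operand
        return a * b
    m = max(len(str(a)), len(str(b))) // 2    # split position (half the digits)
    a_high, a_low = divmod(a, 10 ** m)        # a = a_high * 10**m + a_low
    b_high, b_low = divmod(b, 10 ** m)
    hh = breaks_down_multiply(a_high, b_high)
    hl = breaks_down_multiply(a_high, b_low)
    lh = breaks_down_multiply(a_low, b_high)
    ll = breaks_down_multiply(a_low, b_low)
    return hh * 10 ** (2 * m) + (hl + lh) * 10 ** m + ll

# Example: breaks_down_multiply(3456, 9870) == 34110720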
1.4
Give a recurrence T(n) for the runtime complexity of BREAKSDOWNMULTIPLY(A, B). Explain each term in the recurrence.
Draw a recurrence tree for T(n) and use this recurrence tree to solve the recurrence T(n) by proving that T(n)=Θ(f(n)) for some function f(n).
What is the runtime complexity of BREAKSDOWNMULTIPLY(A, B)? Do you expect this algorithm to be faster than the pen-and-paper multiplication algorithm?
Hint: Feel free to assume that n=2^k, k∈N. Feel free to assume that we can add two v-digit numbers in Θ(v) time (e.g., using ADD) and that we can multiply a v-digit number by 10^w in Θ(v+w) time.
For two n-digit numbers A and B, the recurrence T(n) is:
T(n) = Θ(1) if n = 1, and T(n) = 4T(n/2) + Θ(n) if n > 1.
The base case n=1 costs Θ(1), as the algorithm only performs a single-digit multiplication, with no recursive calls.
The recursive case n>1 performs 4 recursive calls, each on numbers with n/2 digits (half the size of the original input, since n=2m), hence the term 4T(n/2).
The Θ(n) term covers combining the four partial products: by assumption, multiplying a v-digit number by 10^w takes Θ(v+w), and the subsequent additions (e.g., using ADD) take Θ(n) each.
The recurrence tree for T(n) has the following structure:
The number of nodes at depth k is 4^k, since each node makes four recursive calls.
The work done at depth k is 4^k⋅(n/2^k) = 2^k⋅n, since each of the 4^k nodes at that depth does Θ(n/2^k) work.
The depth of the tree is log_2 n, since the input size is halved at each level.
Summing over all levels: T(n) = Σ_{k=0}^{log_2 n} 2^k⋅n = n⋅(2^(log_2 n + 1) − 1) = Θ(n⋅n) = Θ(n^2).
Hence BREAKSDOWNMULTIPLY runs in Θ(n^2) time, so we do not expect it to be asymptotically faster than the pen-and-paper multiplication algorithm.
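As a quick sanity check on this bound, the following small sketch (our own, not part of the assignment) counts the single-digit multiplications M(n) = 4⋅M(n/2), M(1) = 1 performed by the four-way recursion and compares them to n^2:

# Count the single-digit multiplications performed by the 4-way recursion.
def mults_4way(n):
    return 1 if n == 1 else 4 * mults_4way(n // 2)

for k in range(6):                      # n = 1, 2, 4, ..., 32 (powers of two)
    n = 2 ** k
    print(n, mults_4way(n), n ** 2)     # the last two columns are always equal

The count equals n^2 exactly, matching the Θ(n^2) solution of the recurrence.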
1.5
The final rewritten form of A×B only requires three multiplication terms, namely A_high×B_high, A_low×B_low, and (A_high+A_low)×(B_high+B_low).
Use this observation to construct a recursive multiplication algorithm SMARTMATHSMULTIPLY(A, B) that only performs three recursive multiplications. Argue why SMARTMATHSMULTIPLY(A, B) is correct.
The proposed SMARTMATHSMULTIPLY(A, B) is precisely Karatsuba's algorithm. We argue correctness by induction on n.
Base case: n=1, in which case the algorithm returns the product of two single-digit numbers, which is correct.
Induction hypothesis: assume that SMARTMATHSMULTIPLY(A, B) correctly computes the product A×B for all A, B with fewer than n digits.
The following facts hold in each recursive call:
A = A_high⋅10^m + A_low and B = B_high⋅10^m + B_low, where m = n/2 (true from the problem statement and n = 2^k);
the recursive calls compute P1, P2, P3 correctly, where P1 = A_high×B_high, P2 = A_low×B_low, P3 = (A_high+A_low)×(B_high+B_low), since the operands have fewer than n digits (induction hypothesis);
combination: P4 = P3 − P1 − P2 = A_high×B_low + A_low×B_high, and A×B = P1⋅10^(2m) + P4⋅10^m + P2 (this follows by expanding the products, as in the rewritten form above).
Thus, by induction, the algorithm is correct.
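Since the question asks us to construct the algorithm, here is a hedged Python sketch of the three-multiplication scheme described above. As before it works on Python integers rather than digit arrays, relaxes the base case to "either operand has a single digit", and all names are our own:

# Three-multiplication (Karatsuba-style) recursion described above,
# sketched on Python ints instead of digit arrays.
def smart_maths_multiply(a, b):
    if a < 10 or b < 10:                      # base case: a single-digit operand
        return a * b
    m = max(len(str(a)), len(str(b))) // 2    # split position (half the digits)
    a_high, a_low = divmod(a, 10 ** m)        # a = a_high * 10**m + a_low
    b_high, b_low = divmod(b, 10 ** m)
    p1 = smart_maths_multiply(a_high, b_high)
    p2 = smart_maths_multiply(a_low, b_low)
    p3 = smart_maths_multiply(a_high + a_low, b_high + b_low)
    p4 = p3 - p1 - p2                         # = a_high*b_low + a_low*b_high
    return p1 * 10 ** (2 * m) + p4 * 10 ** m + p2

# Example: smart_maths_multiply(3456, 9870) == 34110720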
1.6
Give a recurrence T(n) for the runtime complexity of SMARTMATHSMULTIPLY(A, B). Explain each term in the recurrence.
Solve the recurrence T(n) by proving that T(n)=Θ(f(n)) for some function f(n). Use any methods that you find comfortable with.
What is the runtime complexity of SMARTMATHSMULTIPLY(A, B)? Do you expect this algorithm to be faster than the pen-and-paper multiplication algorithm?
Hint: Feel free to assume that n=2^k, k∈N. Feel free to assume that we can add two v-digit numbers in Θ(v) time (e.g., using ADD) and that we can multiply a v-digit number by 10^w in Θ(v+w) time.
For two n-digit numbers A and B, the recurrence T(n) is:
T(n) = Θ(1) if n = 1, and T(n) = 3T(n/2) + Θ(n) if n > 1.
The base case n=1 costs Θ(1), as the algorithm only performs a single-digit multiplication, with no recursive calls.
The recursive case n>1 performs 3 recursive calls, each on numbers with n/2 digits (half the size of the original input, since n=2m), hence the term 3T(n/2).
The Θ(n) term covers splitting the inputs and combining the three partial products: by assumption, multiplying a v-digit number by 10^w takes Θ(v+w), and the additions and subtractions (e.g., using ADD) take Θ(n) each.
Using the Master Theorem with a=3, b=2 and f(n)=Θ(n), we have n^(log_b a) = n^(log_2 3) ≈ n^1.585.
Since f(n) = Θ(n) = O(n^(log_2 3 − ε)) for some ε > 0 (e.g., ε = 0.5), case 1 of the Master Theorem applies.
Thus T(n) = Θ(n^(log_2 3)).
The runtime complexity of SMARTMATHSMULTIPLY(A, B) is therefore Θ(n^(log_2 3)) ≈ Θ(n^1.585).
This algorithm is expected to be faster than the pen-and-paper multiplication algorithm (and than BREAKSDOWNMULTIPLY), both of which take Θ(n^2) time, since n^(log_2 3) grows asymptotically slower than n^2.
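Analogously to the four-way case, a small sketch (our own) that counts the single-digit multiplications M(n) = 3⋅M(n/2), M(1) = 1 illustrates the gap between n^(log_2 3) and n^2:

import math

# Count the single-digit multiplications performed by the 3-way recursion.
def mults_3way(n):
    return 1 if n == 1 else 3 * mults_3way(n // 2)

for k in range(0, 11, 2):               # n = 1, 4, 16, 64, 256, 1024
    n = 2 ** k
    print(n, mults_3way(n), round(n ** math.log2(3)), n ** 2)
# mults_3way(n) equals n^(log_2 3) exactly for n a power of two,
# and grows much more slowly than n^2.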