Problem 1 [8 points] Consider the system $Ax = b$, where
$$A=\begin{bmatrix} 0.1 & 0.3 & 0.9\\ 0.3 & 0.9 & 2.7\\ 0.6 & 0.7 & 0.1 \end{bmatrix}, \qquad b = \begin{bmatrix} 1.3 & 3.9 & 1.4\end{bmatrix}^T$$

a. [2 points] Show that $A$ is singular.

b. [2 points] If we were to use Gaussian elimination with partial pivoting to solve this system using exact arithmetic, show where the process fails.

c. [2 points] Solve this system in double precision using partial pivoting. Do not use Matlab’s functions. What is the solution that you obtain?

d. [2 points] Matlab’s A\b produces NaN -Inf Inf as a solution. Explain why NaN, -Inf, and Inf appear.

Answer:

a. To show that $A$ is singular, we show that $\det(A) = 0$. (Note that $R_2 = 3R_1$, so the rows are linearly dependent, which already implies singularity.)

Using Gaussian elimination without partial pivoting:

$$
\begin{aligned}
A|b &= \begin{bmatrix} 0.1 & 0.3 & 0.9 & | & 1.3\\ 0.3 & 0.9 & 2.7 & | & 3.9\\ 0.6 & 0.7 & 0.1 & | & 1.4 \end{bmatrix} \\
R_2 - R_1 \rightarrow A|b &= \begin{bmatrix} 0.1 & 0.3 & 0.9 & | & 1.3\\ 0.2 & 0.6 & 1.8 & | & 2.6\\ 0.6 & 0.7 & 0.1 & | & 1.4 \end{bmatrix} \\
R_3 - 3R_1 \rightarrow A|b &= \begin{bmatrix} 0.1 & 0.3 & 0.9 & | & 1.3\\ 0.2 & 0.6 & 1.8 & | & 2.6\\ 0.3 & -0.2 & -2.6 & | & -2.5 \end{bmatrix} \\
R_3 - \tfrac{1}{2}R_2 \rightarrow A|b &= \begin{bmatrix} 0.1 & 0.3 & 0.9 & | & 1.3\\ 0.2 & 0.6 & 1.8 & | & 2.6\\ 0.2 & -0.5 & -3.5 & | & -3.8 \end{bmatrix}
\end{aligned}
$$

These operations (adding a multiple of one row to another) leave the determinant unchanged. Using the cofactor expansion
$$\det(A)=a(ei-fh)-b(di-fg)+c(dh-eg) \quad \text{for} \quad A=\begin{bmatrix} a & b & c \\ d & e & f\\ g & h & i \end{bmatrix},$$
applied to the reduced matrix:
$$\det(A) = 0.1(-0.6 \cdot 3.5+1.8 \cdot 0.5) - 0.3(-0.2 \cdot 3.5-1.8 \cdot 0.2) + 0.9(-0.5 \cdot 0.2-0.6 \cdot 0.2) = 0$$

Lemma

$A$ is singular.

b. With partial pivoting:

$$
\begin{aligned}
A|b &= \begin{bmatrix} 0.1 & 0.3 & 0.9 & | & 1.3\\ 0.3 & 0.9 & 2.7 & | & 3.9\\ 0.6 & 0.7 & 0.1 & | & 1.4 \end{bmatrix} \\
R_3 \leftrightarrow R_1 \rightarrow A|b &= \begin{bmatrix} 0.6 & 0.7 & 0.1 & | & 1.4\\ 0.3 & 0.9 & 2.7 & | & 3.9\\ 0.1 & 0.3 & 0.9 & | & 1.3 \end{bmatrix} \\
R_2 - \tfrac{1}{2}R_1 \rightarrow A|b &= \begin{bmatrix} 0.6 & 0.7 & 0.1 & | & 1.4\\ 0 & 0.55 & 2.65 & | & 3.2\\ 0.1 & 0.3 & 0.9 & | & 1.3 \end{bmatrix} \\
R_3 - \tfrac{1}{6}R_1 \rightarrow A|b &= \begin{bmatrix} 0.6 & 0.7 & 0.1 & | & 1.4\\ 0 & 0.55 & 2.65 & | & 3.2\\ 0 & \tfrac{11}{60} & \tfrac{53}{60} & | & \tfrac{16}{15} \end{bmatrix} \\
R_3 - \tfrac{1}{3}R_2 \rightarrow A|b &= \begin{bmatrix} 0.6 & 0.7 & 0.1 & | & 1.4\\ 0 & 0.55 & 2.65 & | & 3.2\\ 0 & 0 & 0 & | & 0 \end{bmatrix}
\end{aligned}
$$

In exact arithmetic $\frac{11}{60} = \frac{1}{3}\cdot 0.55$, $\frac{53}{60} = \frac{1}{3}\cdot 2.65$, and $\frac{16}{15} = \frac{1}{3}\cdot 3.2$, so the step $R_3-\frac{1}{3}R_2$ cancels the last row completely, including the right-hand side (the system is consistent, since $R_2 = 3R_1$ and $b_2 = 3b_1$). The process fails at the third elimination step: the pivot is $0$, there is no nonzero entry below it to swap in, and so elimination cannot proceed and the system has no unique solution.
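This exact-arithmetic cancellation can be verified mechanically. A minimal sketch in Python, using `fractions.Fraction` to emulate exact arithmetic (the decimal entries are not exactly representable in binary floating point):

```python
from fractions import Fraction as F

# Augmented matrix [A | b] with exact rational entries
M = [[F('0.1'), F('0.3'), F('0.9'), F('1.3')],
     [F('0.3'), F('0.9'), F('2.7'), F('3.9')],
     [F('0.6'), F('0.7'), F('0.1'), F('1.4')]]

# Partial pivoting: swap R1 and R3 (0.6 is the largest candidate pivot)
M[0], M[2] = M[2], M[0]

# Eliminate column 1
for i in (1, 2):
    m = M[i][0] / M[0][0]
    M[i] = [a - m * b for a, b in zip(M[i], M[0])]

# Eliminate column 2
m = M[2][1] / M[1][1]
M[2] = [a - m * b for a, b in zip(M[2], M[1])]

print(M[2])  # last row is identically zero: a zero pivot and a consistent 0 = 0
```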

c. With partial pivoting in double precision

The $LU$ decomposition of
$$A=\begin{bmatrix} 0.1 & 0.3 & 0.9\\ 0.3 & 0.9 & 2.7\\ 0.6 & 0.7 & 0.1 \end{bmatrix}$$

The following steps compute $U$ (upper triangular):

$$
\begin{aligned}
R_3 \leftrightarrow R_1 \rightarrow U &= \begin{bmatrix} 0.6 & 0.7 & 0.1\\ 0.3 & 0.9 & 2.7\\ 0.1 & 0.3 & 0.9 \end{bmatrix}, \quad P_1 = \begin{bmatrix} 0 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{bmatrix} \\
R_2 - \tfrac{1}{2}R_1 \rightarrow U &= \begin{bmatrix} 0.6 & 0.7 & 0.1\\ 0 & 0.55 & 2.6500000000000004\\ 0.1 & 0.3 & 0.9 \end{bmatrix} \\
R_3 - \tfrac{1}{6}R_1 \rightarrow U &= \begin{bmatrix} 0.6 & 0.7 & 0.1\\ 0 & 0.55 & 2.6500000000000004\\ 0 & 0.18333333333333335 & 0.8833333333333333 \end{bmatrix} \\
R_3 - \tfrac{1}{3}R_2 \rightarrow U &= \begin{bmatrix} 0.6 & 0.7 & 0.1\\ 0 & 0.55 & 2.6500000000000004\\ 0 & 0 & 4.8109664400423476 \times 10^{-17} \end{bmatrix}
\end{aligned}
$$

_note: the pivot $u_{33}$ is only rounding error away from zero, consistent with the singularity found in part (a)_

$$L=\begin{bmatrix} 1 & 0 & 0\\ 0.5 & 1 & 0\\ 0.16666666666666669 & 0.33333333333333326 & 1 \end{bmatrix}$$

To solve for $x$ with the $LU$ decomposition, we solve $L(Ux)=Pb$: forward substitution gives $z$ in $Lz = Pb$, then backward substitution gives $x$ in $Ux = z$.

$$\rightarrow x=\begin{bmatrix} 14.006993006993 & -10.48951048951048 & 3.3846153846153832\end{bmatrix}^T$$
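Part (c) forbids Matlab's built-ins, so the elimination is easy to reproduce by hand; the sketch below does it in Python, whose floats are also IEEE doubles. The exact digits of the tiny pivot, and hence of the computed "solution", depend on the rounding of each individual operation:

```python
# Gaussian elimination with partial pivoting on [A | b] in double precision
M = [[0.1, 0.3, 0.9, 1.3],
     [0.3, 0.9, 2.7, 3.9],
     [0.6, 0.7, 0.1, 1.4]]
n = 3

for k in range(n - 1):
    # Partial pivoting: bring the largest |entry| in column k to row k
    p = max(range(k, n), key=lambda i: abs(M[i][k]))
    M[k], M[p] = M[p], M[k]
    for i in range(k + 1, n):
        m = M[i][k] / M[k][k]
        M[i] = [a - m * b for a, b in zip(M[i], M[k])]

pivot = M[2][2]
print(pivot)  # tiny rounding residue (may even be exactly 0, as in Matlab)

# Back substitution, guarding the degenerate exactly-zero-pivot case
if pivot != 0.0:
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    print(x)  # a large but finite "solution"; its values are rounding-dependent
```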

d. Since $A$ is singular, it doesn't have an inverse and the system has no unique solution. Matlab uses $LU$ decomposition with partial pivoting, and in its factorization the third pivot comes out exactly zero, so the last equation reads $0x_3 = \text{nonzero value}$. In IEEE double precision, dividing a nonzero value by zero yields $\pm$Inf, so $x_3 = $ Inf. Back substitution then propagates this: $x_2 = (3.2 - 2.65 \cdot \text{Inf})/0.55 = -$Inf, and $x_1 = (1.4 - 0.7 \cdot (-\text{Inf}) - 0.1 \cdot \text{Inf})/0.6 = (\text{Inf} - \text{Inf})/0.6 = $ NaN, since Inf $-$ Inf is undefined.


Problem 2 [2 points] Apply Gaussian elimination with partial pivoting on the following matrix
$$A=\begin{bmatrix} 1 & 0 & 0 & 0 & 1\\ -1 & 1 & 0 & 0 & 1\\ -1 & -1 & 1 & 0 & 1\\ -1 & -1 & -1 & 1 & 1\\ -1 & -1 & -1 & -1 & 1 \end{bmatrix}$$

Show all the steps.

Answer:

$$A=\begin{bmatrix} 1 & 0 & 0 & 0 & 1\\ -1 & 1 & 0 & 0 & 1\\ -1 & -1 & 1 & 0 & 1\\ -1 & -1 & -1 & 1 & 1\\ -1 & -1 & -1 & -1 & 1 \end{bmatrix}$$

At every step the current pivot has magnitude $1$, the largest absolute value in its column, so partial pivoting requires no row interchanges.

$$R_2+R_1,\ R_3+R_1,\ R_4+R_1,\ R_5+R_1 \rightarrow A=\begin{bmatrix} 1 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 2\\ 0 & -1 & 1 & 0 & 2\\ 0 & -1 & -1 & 1 & 2\\ 0 & -1 & -1 & -1 & 2 \end{bmatrix}$$

$$R_3+R_2,\ R_4+R_2,\ R_5+R_2 \rightarrow A=\begin{bmatrix} 1 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 2\\ 0 & 0 & 1 & 0 & 4\\ 0 & 0 & -1 & 1 & 4\\ 0 & 0 & -1 & -1 & 4 \end{bmatrix}$$

$$R_4+R_3,\ R_5+R_3 \rightarrow A=\begin{bmatrix} 1 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 2\\ 0 & 0 & 1 & 0 & 4\\ 0 & 0 & 0 & 1 & 8\\ 0 & 0 & 0 & -1 & 8 \end{bmatrix}$$

$$R_5+R_4 \rightarrow A=\begin{bmatrix} 1 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 2\\ 0 & 0 & 1 & 0 & 4\\ 0 & 0 & 0 & 1 & 8\\ 0 & 0 & 0 & 0 & 16 \end{bmatrix}$$
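This matrix is the classic worst case for element growth under partial pivoting: each elimination step doubles the entries of the last column, ending at $2^{n-1} = 16$. The steps above can be sketched as:

```python
n = 5
# A(i,j) = 1 on the diagonal and in the last column, -1 below the diagonal
A = [[1 if j == i or j == n - 1 else (-1 if j < i else 0) for j in range(n)]
     for i in range(n)]

for k in range(n - 1):
    # |pivot| = 1 is already maximal in column k, so no swaps occur
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]          # m = -1: the row operation R_i + R_k
        A[i] = [a - m * b for a, b in zip(A[i], A[k])]

print(A[n - 1][n - 1])  # 16.0 = 2^(n-1)
```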

Problem 3 [5 points] (a) (3 points) Let $A$, $B$, and $C$ be $n \times n$ matrices, where $B$ and $C$ are nonsingular. For an $n$-vector $b$, describe how you would implement the formula $x = C^{-1}(A + I)(A + B^{-1})b$ without computing any inverses. Here, $I$ is the $n \times n$ identity matrix.

(b) (2 points) What is the complexity of your approach in terms of big-O notation?

Answer:

a. Given that $B$ and $C$ are nonsingular:

  1. Step 1: Compute the $LU$ decomposition of $B$, such that $B=LU$.
  2. Step 2: Solve for $y$ in $By=b$ (so $y=B^{-1}b$):
    1. solve for $t$ in $Lt=b$ via forward substitution;
    2. solve for $y$ in $Uy=t$ via backward substitution.
  3. Step 3: Compute $z=(A+B^{-1})b$:
    1. this becomes $z=Ab+y$ (one matrix–vector product and one vector addition).
  4. Step 4: Compute $w = (A+I)z$:
    1. via one matrix–vector product, $w=Az + z$.
  5. Step 5: Compute the $LU$ decomposition of $C$, such that $C=LU$.
  6. Step 6: Solve for $x$ in $Cx=w$ (so $x=C^{-1}w$):
    1. solve for $z'$ in $Lz'=w$ via forward substitution;
    2. solve for $x$ in $Ux=z'$ via backward substitution.

Expanding these steps reproduces $x = C^{-1}(A + I)(A + B^{-1})b$ without computing any inverse.
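The steps above can be sketched in Python; `solve` and `matvec` are helper names invented here, with a small elimination-based solver standing in for the two $LU$ factor-and-substitute steps:

```python
def matvec(M, v):
    # Matrix-vector product M v
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

def solve(M, rhs):
    # Solve M y = rhs by Gaussian elimination with partial pivoting,
    # equivalent to LU factorization followed by the two substitutions.
    n = len(rhs)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        for i in range(k + 1, n):
            m = aug[i][k] / aug[k][k]
            aug[i] = [a - m * b for a, b in zip(aug[i], aug[k])]
    y = [0.0] * n
    for i in reversed(range(n)):
        s = sum(aug[i][j] * y[j] for j in range(i + 1, n))
        y[i] = (aug[i][n] - s) / aug[i][i]
    return y

def evaluate(A, B, C, b):
    y = solve(B, b)                                  # y = B^{-1} b
    z = [u + v for u, v in zip(matvec(A, b), y)]     # z = A b + y
    w = [u + v for u, v in zip(matvec(A, z), z)]     # w = A z + z
    return solve(C, w)                               # x = C^{-1} w

# Example with easily hand-checked inverses
A = [[1.0, 1.0], [0.0, 1.0]]
B = [[2.0, 0.0], [0.0, 4.0]]
C = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, 2.0]
print(evaluate(A, B, C, b))  # [9.5, 2.5]
```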

b. Complexity analysis

Let total_cost denote the accumulated cost in big-O notation.

Step 1: the $LU$ decomposition of $B$ costs $O(n^3)$ $\rightarrow \text{total\_cost}=O(n^3)$

Step 2: the forward and backward substitutions take $O(n^2)$ each, thus solving $By=b$ via the $LU$ factors takes $O(2n^2)$ $\rightarrow \text{total\_cost}=O(n^3) + O(2n^2)$

Step 3: compute $z=(A+B^{-1})b$

  • The matrix–vector product $Ab$ is $O(n^2)$
  • The vector addition $Ab+y$ is $O(n)$
  • Total for this step: $O(n^2) + O(n)$ $\rightarrow \text{total\_cost}=O(n^3) + O(3n^2) + O(n)$

Step 4: compute $w = (A+I)z$

  • The matrix–vector product $Az$ is $O(n^2)$
  • The vector addition $Az+z$ is $O(n)$
  • Total for this step: $O(n^2) + O(n)$ $\rightarrow \text{total\_cost}=O(n^3) + O(4n^2) + O(2n)$

Step 5: the $LU$ decomposition of $C$ costs $O(n^3)$ $\rightarrow \text{total\_cost}=O(2n^3) + O(4n^2) + O(2n)$

Step 6: the two triangular solves for $Cx=w$ using the $LU$ decomposition take $O(2n^2)$ $\rightarrow \text{total\_cost}=O(2n^3) + O(6n^2) + O(2n)$

The dominant term is the factorization cost, so the overall complexity is $O(n^3)$.


Problem 4 [6 points] An $n \times n$ Hilbert matrix, denote it by $H$, has entries $h_{ij} = \frac{1}{i+j-1}$, $i, j = 1, \ldots, n$.

For $n = 2, 3, \ldots$, generate the Hilbert matrix of order $n$, and also generate the $n$-vector $b = Hx$, where $x$ is a random vector. Solve the resulting system $Hx = b$ to obtain an approximate solution $\hat{x}$. (See the functions hilb and rand.)

(a) [2 points] How large can you take $n$ before the error $\frac{\Vert\hat{x} - x\Vert}{\Vert x\Vert}$ is 100 percent?

(b) [2 points] For $n$ up to the value you find in (a), report $\frac{\Vert r\Vert}{\Vert b\Vert}$, where $r = b - H\hat{x}$, and $\frac{\Vert\hat{x} - x\Vert}{\Vert x\Vert}$.

(c) [2 points] As $n$ increases, how does the number of correct digits in the computed solution relate to the condition number of the matrix? See the cond function.

Submit your Matlab program producing the above results. Name the Matlab file hilb_problem.m.

Answer:

The following hilb_problem.m is used:

hilb_problem.m
```matlab
function hilb_problem()
    n = 1;
    while true
        % Generate Hilbert matrix of order n
        H = hilb(n);

        % Generate random vector x
        x = rand(n, 1);

        % Compute b = Hx
        b = H * x;

        % Solve the system Hx = b
        x_hat = H \ b;

        % Compute the relative error (rel_err avoids shadowing the builtin 'error')
        rel_err = norm(x_hat - x) / norm(x);
        fprintf('error=%g, n=%d\n', rel_err, n)

        % If the error is 100 percent, break
        if rel_err >= 1
            break;
        end

        n = n + 1;
    end

    fprintf('\n=============\n\nThe largest n before the error is 100 percent is: %d\n\n=============\n', n-1);

    for i = 1:n-1
        H = hilb(i);
        x = rand(i, 1);
        b = H * x;
        x_hat = H \ b;

        r = b - H * x_hat;
        rel_resid = norm(r) / norm(b);
        rel_error = norm(x_hat - x) / norm(x);

        fprintf('| %d | %.32f | %.32f |\n', i, rel_resid, rel_error);
    end

    cond_num = cond(H);
    fprintf('The condition number of the matrix for n = %d is: %f\n', n-1, cond_num);
end
```

a. The largest $n$ before the error $\frac{\Vert\hat{x} - x\Vert}{\Vert x\Vert}$ reaches 100 percent is $n=12$.

b. The following table lists $\frac{\Vert r\Vert}{\Vert b\Vert}$ and $\frac{\Vert\hat{x} - x\Vert}{\Vert x\Vert}$:

| $n$ | $\frac{\Vert r\Vert}{\Vert b\Vert}$ | $\frac{\Vert\hat{x} - x\Vert}{\Vert x\Vert}$ |
|---|---|---|
| 1 | 0.00000000000000000000000000000000 | 0.00000000000000000000000000000000 |
| 2 | 0.00000000000000000000000000000000 | 0.00000000000000013220372219891702 |
| 3 | 0.00000000000000000000000000000000 | 0.00000000000000363350625815651572 |
| 4 | 0.00000000000000000000000000000000 | 0.00000000000006709266750580992637 |
| 5 | 0.00000000000000007733975117624287 | 0.00000000000747821082933078000054 |
| 6 | 0.00000000000000013934207506736382 | 0.00000000023960543432895825359428 |
| 7 | 0.00000000000000010660570398371085 | 0.00000000837749558262967895463873 |
| 8 | 0.00000000000000007165565184570407 | 0.00000009992506975169996005028294 |
| 9 | 0.00000000000000007076549838447114 | 0.00000608952488692639798140973303 |
| 10 | 0.00000000000000012662840530707719 | 0.00002450986238666613242472361311 |
| 11 | 0.00000000000000011997633780813789 | 0.00379971054180424641297242338567 |
| 12 | 0.00000000000000006503338066505365 | 0.25404291536273732043937911839748 |

c. As $n$ increases, the condition number increases and the matrix becomes more ill-conditioned, so fewer digits in the computed solution are correct.

IMPORTANT

The number of correct digits in the computed solution decreases as the condition number grows with $n$.


Problem 5 [4 points] You have to interpolate $\sin(x)$ by a polynomial of degree five using equally spaced points in $[0, 1]$.

(a) [2 points] What (absolute) error would you expect if you use this polynomial?

(b) [2 points] Using equally spaced points, what degree polynomial would you use to achieve a maximum error of $10^{-8}$?

Answer:

a. Interpolating $\sin(x)$ by a polynomial of degree five at equally spaced points in $[0,1]$, the interpolation error is

$$f(x) - p_n(x) = E(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\prod_{i=0}^{n}(x-x_i)$$

where

  • $n$ is the degree of the polynomial ($n=5$)
  • $f^{(n+1)}(\xi)$ is the $(n+1)$-th derivative of $f$

The derivatives of $\sin(x)$ cycle with period four: $\cos(x), -\sin(x), -\cos(x), \sin(x)$. Therefore the 6th derivative is $-\sin(x)$.

Here $h=\frac{b-a}{n}=\frac{1}{5}$ and $M = \max_{0\leq t\leq 1}|-\sin(t)| = \sin(1) \approx 0.841471$.

Therefore $|E(x)| = |\sin(x) - p_5(x)| \leq \frac{M}{4(n+1)}h^{n+1}=\frac{\sin(1)}{4 \cdot 6}\left(\frac{1}{5}\right)^6 \approx 2.2439\times 10^{-6}$
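The bound arithmetic can be checked numerically (using $f^{(6)}(t) = -\sin t$, whose maximum magnitude on $[0,1]$ is $\sin 1$); a quick Python sketch:

```python
import math

n = 5
h = 1 / n                            # equal spacing on [0, 1]
M = math.sin(1)                      # max of |f^(6)(t)| = |-sin t| on [0, 1]
bound = M / (4 * (n + 1)) * h ** (n + 1)
print(bound)  # ~2.2439e-06
```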

b. To achieve a maximum error of $10^{-8}$, we need

$$|\sin(x) - p_n(x)| \leq\frac{\max_{0\leq t\leq 1}|\sin^{(n+1)}(t)|}{4(n+1)\,n^{n+1}} \leq 10^{-8}$$

(here $h = \frac{1}{n}$, so $h^{n+1} = n^{-(n+1)}$).

The derivatives of $\sin(x)$ cycle every four terms, so $\max_{0\leq t\leq 1}|\sin^{(n+1)}(t)| \leq 1$.

Thus we solve for $n$ in $\frac{1}{4(n+1)n^{n+1}}\leq 10^{-8}$; by trial and error, $n = 7$ suffices (for $n=7$ the bound is $\approx 5.4\times 10^{-9}$, while $n=6$ gives only $\approx 1.3\times 10^{-7}$).
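The trial-and-error search is easy to automate; a minimal Python sketch of the same bound:

```python
def bound(n):
    # Equally spaced interpolation error bound with max|f^(n+1)| <= 1 and h = 1/n
    return 1 / (4 * (n + 1) * n ** (n + 1))

n = 2
while bound(n) > 1e-8:
    n += 1
print(n, bound(n))  # first degree meeting the 1e-8 target
```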

Hence a polynomial of degree seven achieves the desired error bound.


Problem 6 [3 points] You are given the values of $\sqrt{x}$ at three points:

| $x$ | 1 | 4 | 9 |
|---|---|---|---|
| $\sqrt{x}$ | 1 | 2 | 3 |

(a) [2 points] Construct the interpolating polynomial interpolating these data.

(b) [1 point] Using this polynomial, approximate $\sqrt{1.5}$.

Answer:

a. To construct the interpolating polynomial for these data, we will use the Lagrange basis:

$$P(x)=\sum_{i=0}^{n-1}y_iL_i(x)$$

where $L_i(x)$ is the $i$-th Lagrange basis polynomial, defined as

$$L_i(x) = \prod_{j=0,\,j\neq i}^{n-1}\frac{x-x_j}{x_i-x_j}$$

With $y(x) = \sqrt{x}$ and data points $x_0=1, y_0=1$; $x_1=4, y_1=2$; $x_2=9, y_2=3$:

$$P(x)=\sum_{i=0}^{2}y_iL_i(x) \text{ where } L_i(x) = \prod_{j=0,\,j\neq i}^{2}\frac{x-x_j}{x_i-x_j}$$

$$L_0(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)} = \frac{(x-4)(x-9)}{(1-4)(1-9)} = \frac{(x-4)(x-9)}{24}$$
$$L_1(x) = \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}=\frac{(x-1)(x-9)}{(4-1)(4-9)}=\frac{(x-1)(9-x)}{15}$$
$$L_2(x) = \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}=\frac{(x-1)(x-4)}{(9-1)(9-4)} = \frac{(x-4)(x-1)}{40}$$

$$P(x) = y_0L_0(x) + y_1L_1(x) + y_2L_2(x) = 1 \cdot \frac{(x-4)(x-9)}{24} + 2\cdot\frac{(x-1)(9-x)}{15} + 3\cdot\frac{(x-4)(x-1)}{40}$$

IMPORTANT

The interpolating polynomial is $P(x)=\frac{(x-4)(x-9)}{24} + \frac{2(x-1)(9-x)}{15} + \frac{3(x-4)(x-1)}{40}$

b. The approximation is $\sqrt{1.5} \approx P(1.5)=\frac{(1.5-4)(1.5-9)}{24} + \frac{2(1.5-1)(9-1.5)}{15} + \frac{3(1.5-4)(1.5-1)}{40}=1.1875$
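The evaluation is easy to double-check; a small Python sketch of $P$ from part (a):

```python
def P(x):
    # Lagrange interpolant through (1, 1), (4, 2), (9, 3)
    return ((x - 4) * (x - 9) / 24
            + 2 * (x - 1) * (9 - x) / 15
            + 3 * (x - 4) * (x - 1) / 40)

print(P(1.5))  # 1.1875 (the true value is sqrt(1.5) ~ 1.2247)
```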


Problem 7 [7 points] Let $f(x) = \frac{\sin(x)}{(1+20x)^2}$. Interpolate this function over $x \in [-1, 1]$ using

(a) [2 points] polynomial interpolation of degree $n = 15$ at equally spaced points. Then evaluate this polynomial at $N = 100$ equally spaced points. Denote the interpolating polynomial by $p(x)$. Plot

  • $f(x)$ and $p(x)$ versus $x$ at the interpolation points and at the $N$ points (on the same plot);
  • $|f(x) - p(x)|$ versus $x$ at the $N$ points. You can use the polyfit function. See the linspace function.

(b) [2 points] Repeat (a) but now using Chebyshev points.

(c) [2 points] Repeat (a) but now using spline interpolation at $n + 1$ equally spaced points. See the spline function.

(d) [1 points] Discuss the accuracies of your results.

Submit your plots (6 in total) and the Matlab code producing them. Name your Matlab file interp_problem.m.

Answer

The implementation of $f(x)$ in Matlab is as follows:

```matlab
f = @(x) sin(x)./((1 + 20*x).^2);
```

a. The following is a snippet of interp_problem.m for polynomial interpolation of degree $n=15$:

```matlab
% (a) Polynomial interpolation of degree n = 15 at equally spaced points

% Define the number of interpolation points and the degree of the polynomial
n = 15;
N = 100;

% Generate n+1 equally spaced points in the interval [-1, 1]
x = linspace(-1, 1, n+1);
y = f(x);

% Interpolate using polyfit
p_coeff = polyfit(x, y, n);

% Evaluate the interpolating polynomial at N equally spaced points
x_N = linspace(-1, 1, N);
p_N = polyval(p_coeff, x_N);

% Plot f(x) and p(x) on the same graph
figure;
plot(x_N, f(x_N), 'b-', x_N, p_N, 'r--', x, y, 'go');
legend('f(x)', 'p(x)', 'Interpolation Points');
title('f(x) and p(x) vs. x');
xlabel('x');
ylabel('y');

% Plot the absolute error |f(x) - p(x)| at the N points
figure;
plot(x_N, abs(f(x_N) - p_N), 'm-');
title('Absolute Error |f(x) - p(x)| vs. x');
xlabel('x');
ylabel('Error');
```

b. The following is a snippet of interp_problem.m for Chebyshev points:

```matlab
% (b) Polynomial interpolation using Chebyshev points

% Generate the n+1 Chebyshev points in the interval [-1, 1]
x_cheb = cos((2*(1:n+1)-1)*pi/(2*(n+1)));
y_cheb = f(x_cheb);

% Interpolate using polyfit
p_cheb_coeff = polyfit(x_cheb, y_cheb, n);

% Evaluate the interpolating polynomial at N equally spaced points
p_cheb_N = polyval(p_cheb_coeff, x_N);

% Plot f(x) and p(x) using Chebyshev points on the same graph
figure;
plot(x_N, f(x_N), 'b-', x_N, p_cheb_N, 'r--', x_cheb, y_cheb, 'go');
legend('f(x)', 'p(x) with Chebyshev', 'Interpolation Points');
title('f(x) and p(x) with Chebyshev vs. x');
xlabel('x');
ylabel('y');

% Plot the absolute error |f(x) - p(x)| using Chebyshev points at the N points
figure;
plot(x_N, abs(f(x_N) - p_cheb_N), 'm-');
title('Absolute Error |f(x) - p(x) with Chebyshev| vs. x');
xlabel('x');
ylabel('Error');
```

c. The following is a snippet of interp_problem.m for spline interpolation at $n + 1$ equally spaced points:

```matlab
% (c) Spline interpolation at n+1 equally spaced points

% Evaluate the function at n+1 equally spaced points
y_spline = f(x);

% Use the spline function to get the piecewise polynomial representation
pp = spline(x, y_spline);

% Evaluate the spline at N equally spaced points
spline_N = ppval(pp, x_N);

% Plot f(x) and the spline on the same graph
figure;
plot(x_N, f(x_N), 'b-', x_N, spline_N, 'r--', x, y_spline, 'go');
legend('f(x)', 'spline(x)', 'Interpolation Points');
title('f(x) and spline(x) vs. x');
xlabel('x');
ylabel('y');

% Plot the absolute error |f(x) - spline(x)| at the N points
figure;
plot(x_N, abs(f(x_N) - spline_N), 'm-');
title('Absolute Error |f(x) - spline(x)| vs. x');
xlabel('x');
ylabel('Error');
```

d. Discussion

  1. Polynomial interpolation at equally spaced points shows large oscillations near the endpoints of the interval due to the Runge phenomenon; this oscillation is visible in the error plot.
  2. Polynomial interpolation at Chebyshev points mitigates these oscillations, since the points cluster near the endpoints.
  3. Spline interpolation provides a piecewise polynomial that fits the function smoothly and typically offers better accuracy here than the equally spaced polynomial interpolant.

Problem 8 [4 points] Given the three data points $(-1, 1), (0, 0), (1, 1)$, determine the interpolating polynomial of degree two using:

a. [1 point] monomial basis

b. [1 point] Lagrange basis

c. [1 point] Newton basis

[1 point] Show that the three representations give the same polynomial.

Answer:

a. Monomial basis

The monomial basis for a polynomial of degree two gives: $p(x)=a_0+a_1x+a_2x^2$

Interpolating the data yields the linear system

$$a_0-a_1+a_2=1,\qquad a_0=0,\qquad a_0+a_1+a_2=1$$

Solving this system gives $a_0=0$, $a_1=0$, $a_2=1$.

NOTE

Thus the monomial-basis representation of the degree-two interpolant is $p(x) = x^2$

b. Lagrange basis

The Lagrange basis for a polynomial of degree two gives: $p(x)=\sum_{j=0}^{2}y_jL_j(x) = f(x_0)L_0(x) + f(x_1)L_1(x) + f(x_2)L_2(x)$ where
$$L_0(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)} = \frac{x(x-1)}{2}$$
$$L_1(x) = \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}=-(x+1)(x-1)$$
$$L_2(x) = \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}=\frac{x(x+1)}{2}$$

Thus $p(x) = 1\cdot\frac{x(x-1)}{2} + 0\cdot\bigl(-(x+1)(x-1)\bigr) + 1\cdot\frac{x(x+1)}{2} = x^2$

NOTE

Thus the Lagrange-basis representation of the degree-two interpolant is $p(x) = x^2$

c. Newton basis

The Newton basis for a polynomial of degree two gives:
$$p(x)=f(x_0)+(x-x_0)f[x_0, x_1] + (x-x_0)(x-x_1)f[x_0, x_1, x_2]$$
where the divided differences are
$$f[x_0,x_1]=\frac{f(x_1)-f(x_0)}{x_1-x_0} = \frac{0-1}{0+1} = -1$$
$$f[x_1,x_2]=\frac{f(x_2)-f(x_1)}{x_2-x_1} = \frac{1-0}{1-0} = 1$$
$$f[x_0,x_1,x_2]=\frac{f[x_1, x_2]-f[x_0, x_1]}{x_2-x_0} = \frac{1+1}{1+1} = 1$$

Thus $p(x)=1+(x+1)(-1)+(x+1)(x)(1) = 1 - x - 1 + x^2 + x = x^2$

NOTE

Thus the Newton-basis representation of the degree-two interpolant is $p(x) = x^2$

Therefore, all three representations yield the same polynomial of degree two, $p(x)=x^2$.
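The equivalence can also be spot-checked numerically; a short Python sketch of the three forms:

```python
def p_monomial(x):
    return x ** 2

def p_lagrange(x):
    # y0*L0 + y1*L1 + y2*L2 for the data (-1, 1), (0, 0), (1, 1)
    return 1 * (x * (x - 1) / 2) + 0 * (-(x + 1) * (x - 1)) + 1 * (x * (x + 1) / 2)

def p_newton(x):
    # f(x0) + (x - x0) f[x0,x1] + (x - x0)(x - x1) f[x0,x1,x2]
    return 1 + (x + 1) * (-1) + (x + 1) * x * 1

xs = [-1, -0.5, 0, 0.25, 1, 2]
print([(p_monomial(x), p_lagrange(x), p_newton(x)) for x in xs])
```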