The graph shows an extreme spike in the number of function evaluations at both ends of the x range, close to ±1. This is consistent with the expectation that as x approaches ±1, the integrand of the complete elliptic integral of the first kind, ∫_0^{π/2} dθ/√(1 − x^2 sin^2θ), approaches a singularity for some θ within the interval [0, π/2].
When x is near ±1, the term x^2 sin^2θ can approach 1, causing the denominator √(1 − x^2 sin^2θ) to approach zero and the integrand to become very large, especially as θ approaches π/2.
The adaptive Simpson’s method tries to maintain the specified tolerance by increasing the number of intervals (thus function evaluations) where the integrand varies rapidly or becomes difficult to approximate due to singular behavior. Near these singularities, even small intervals can have large differences in the integrand values, leading the adaptive algorithm to recursively subdivide the intervals, resulting in a substantial increase in function evaluations.
The sharp increase in function evaluations at the edges of the graph indicates that the algorithm is working as expected, refining the integration intervals to handle the challenging behavior of the integrand near the points where it is not well-behaved. The function evaluations become extremely high as the integrand requires very fine subdivisions to approximate the integral within the specified tolerance near the singular points.
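To make this concrete, here is a small sketch (in Python rather than the assignment's MATLAB) of an adaptive Simpson integrator that counts integrand evaluations. The tolerance and the probe values x = 0.5 and x = 0.999 are illustrative choices, not taken from the assignment:

```python
import math

evals = 0  # global counter of integrand evaluations

def integrand(theta, x):
    """Integrand of the complete elliptic integral: 1/sqrt(1 - x^2 sin^2(theta))."""
    global evals
    evals += 1
    return 1.0 / math.sqrt(1.0 - x**2 * math.sin(theta)**2)

def simpson_rule(fa, fm, fb, width):
    """Simpson's rule on one interval from endpoint and midpoint values."""
    return width / 6.0 * (fa + 4.0 * fm + fb)

def adaptive(f, a, b, fa, fm, fb, whole, tol):
    """Recursively subdivide until the Simpson estimate meets the tolerance."""
    m = 0.5 * (a + b)
    flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
    left = simpson_rule(fa, flm, fm, m - a)
    right = simpson_rule(fm, frm, fb, b - m)
    if abs(left + right - whole) <= 15.0 * tol:
        return left + right + (left + right - whole) / 15.0
    return (adaptive(f, a, m, fa, flm, fm, left, tol / 2.0) +
            adaptive(f, m, b, fm, frm, fb, right, tol / 2.0))

def K(x, tol=1e-6):
    """Complete elliptic integral of the first kind; returns (value, #evals)."""
    global evals
    evals = 0
    f = lambda t: integrand(t, x)
    a, b = 0.0, math.pi / 2
    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    return adaptive(f, a, b, fa, fm, fb, simpson_rule(fa, fm, fb, b - a), tol), evals
```

Running K(0.999) takes far more evaluations than K(0.5), reproducing the spikes near ±1.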
Problem 3
C Trapezoidal rule
C Simpson’s rule
a.
Given ∫_0^{π/2} e^x sin(x) dx with absolute error of at most tol = 10^−4
Trapezoidal
The error bound is given by E_T ≤ ((b−a)^3/(12n^2)) · max_{a≤x≤b} |f''(x)|, where f(x) = e^x sin(x)
f'(x) = e^x(sin(x) + cos(x)), so f''(x) = 2e^x cos(x)
Setting the derivative of f''(x), namely 2e^x(cos(x) − sin(x)), to zero shows that |f''(x)| is maximised at x = π/4 on the interval [0, π/2]
max|f''(x)| = |2e^{π/4} cos(π/4)| = √2·e^{π/4} ≈ 3.1018
Then, we need to solve ((π/2)^3/(12n^2)) · √2·e^{π/4} ≤ 10^−4, which gives n ≥ 101 to satisfy the tolerance
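As a sanity check on this arithmetic (sketched in Python rather than the course's MATLAB):

```python
import math

b_minus_a = math.pi / 2
tol = 1e-4
M2 = math.sqrt(2) * math.exp(math.pi / 4)  # max |f''| on [0, pi/2], ~3.1018

# Smallest integer n with (b-a)^3 * M2 / (12 n^2) <= tol
n = math.ceil(math.sqrt(b_minus_a**3 * M2 / (12 * tol)))
print(n)  # -> 101
```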
Simpson’s
The error bound is given by E_S ≤ ((b−a)^5/(180n^4)) · max_{a≤x≤b} |f^{(4)}(x)|
f^{(4)}(x) = −4e^x sin(x), whose maximum absolute value on the interval [0, π/2] is 4e^{π/2} ≈ 19.2419 (attained at x = π/2)
Then, we need to solve ((π/2)^5/(180n^4)) · max|f^{(4)}(x)| ≤ 10^−4, which (with n even) yields n ≥ 12
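The corresponding check for the Simpson bound (again a Python sketch; rounding up to an even n is required by composite Simpson's rule):

```python
import math

b_minus_a = math.pi / 2
tol = 1e-4
M4 = 4 * math.exp(math.pi / 2)  # max |f''''| on [0, pi/2], ~19.2419

n = math.ceil((b_minus_a**5 * M4 / (180 * tol)) ** 0.25)
if n % 2:  # Simpson's rule needs an even number of subintervals
    n += 1
print(n)  # -> 12
```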
b.
Trapezoidal
Using the following
yields n ≥ 110
Simpson’s
Using the following
yields n ≥ 8
c.
Trapezoidal
The following
yields
Simpson’s
The following:
yields
d.
Trapezoidal
The theoretical error bound is proportional to 1/n^2, so on the log-log plot it appears as a straight line with negative slope. The slope should be −2, because the bound decreases with the square of n.
The actual observed error also diminishes as n becomes larger. Like the error bound, it is expected to decrease with increasing n, though possibly faster or slower; on the log-log plot it likewise appears as a straight line.
Simpson’s
The theoretical error bound is proportional to 1/n^4, so on the log-log plot it appears as a straight line with negative slope; here the slope should be −4.
The actual error observed when using Simpson’s rule also shows a rapid decrease with increasing n. The actual error may decrease faster than the error bound predicts because the bound is a worst-case estimate. The true error often is less than this bound, especially for well-behaved functions.
The difference in slopes between the actual error curve and the theoretical error bound curve is expected. The theoretical curve represents the maximum possible error, not the exact error, which can be much less depending on how the function behaves within each subinterval.
The actual error may flatten as n increases past a certain point, due to the limits of floating-point precision in MATLAB.
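The slopes can be spot-checked numerically: doubling n should cut the trapezoid error by about 4 (slope −2) and the Simpson error by about 16 (slope −4). A Python sketch, using the exact value (e^{π/2} + 1)/2 obtained from the antiderivative e^x(sin x − cos x)/2:

```python
import math

def f(x):
    return math.exp(x) * math.sin(x)

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3.0

a, b = 0.0, math.pi / 2
exact = (math.exp(math.pi / 2) + 1) / 2  # antiderivative e^x (sin x - cos x) / 2

et1 = abs(trapezoid(f, a, b, 100) - exact)
et2 = abs(trapezoid(f, a, b, 200) - exact)
es1 = abs(simpson(f, a, b, 20) - exact)
es2 = abs(simpson(f, a, b, 40) - exact)
slope_trap = math.log(et1 / et2) / math.log(2)  # ~2
slope_simp = math.log(es1 / es2) / math.log(2)  # ~4
```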
Problem 4
Yields
Reasons for k_row ≈ 3:
Overhead of function calls: the timing includes a lot of measurement noise from the function-call overhead itself, as well as from system load and other processes.
addR memory access: addR is not optimal given MATLAB's column-major storage order. Accessing elements row-wise can lead to cache misses and inefficient use of memory bandwidth.
Additional overheads, possibly associated with MATLAB's JIT compilation and memory management.
Polynomial fitting: the least-squares model fits a polynomial of the form t = c + kn^2. If the measurement error grows with n, the quadratic term tends to be overestimated.
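A minimal sketch of this fitting step, using made-up timing numbers (the synthetic data below assume true values k = 2·10^−9 and c = 10^−4; none of these come from the assignment's measurements):

```python
# Synthetic "timings" following t = c + k n^2 exactly (hypothetical values).
ns = [500, 1000, 2000, 4000, 8000]
ts = [1e-4 + 2e-9 * n**2 for n in ns]

# Least squares for t = c + k u with u = n^2, via the normal equations.
m = len(ns)
su = sum(n**2 for n in ns)                      # sum of u
suu = sum(n**4 for n in ns)                     # sum of u^2
st = sum(ts)                                    # sum of t
sut = sum(n**2 * t for n, t in zip(ns, ts))     # sum of u*t

k = (m * sut - su * st) / (m * suu - su**2)
c = (st - k * su) / m
```

With noise-free data the fit recovers k and c exactly; error that grows with n would bias k upward, as noted above.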
Problem 5
y = a·e^x + b·x^3
For each datapoint (xi,yi), compute the residual as
r_i = a·e^{x_i} + b·x_i^3 − y_i
Sum of squared residuals: S = Σ_{i=1}^n r_i^2
Or in this case, for the data points (−1, 0), (0, 1), and (1, 2), S = (ae^{−1} − b − 0)^2 + (a − 1)^2 + (ae + b − 2)^2 is minimized
Or ∂S/∂a = 0 and ∂S/∂b = 0
which results in 2(ae^{−1} − b)e^{−1} + 2(a − 1) + 2(ae + b − 2)e = 0 and −2(ae^{−1} − b) + 2(ae + b − 2) = 0
a = (2e + 2e^2 + 2e^3)/(1 + 4e^2 + e^4) and b = (2 + e + 4e^2 − e^3)/(1 + 4e^2 + e^4)
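These closed forms can be verified by substituting them back into the two normal equations (a Python sketch; both partial derivatives should vanish up to rounding):

```python
import math

e = math.e
den = 1 + 4 * e**2 + e**4
a = (2 * e + 2 * e**2 + 2 * e**3) / den
b = (2 + e + 4 * e**2 - e**3) / den

# Gradient of S; both components should be zero at the minimizer.
dS_da = 2 * (a / e - b) / e + 2 * (a - 1) + 2 * (a * e + b - 2) * e
dS_db = -2 * (a / e - b) + 2 * (a * e + b - 2)
```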
Problem 6
a.
r_i = k(l_i − l_0) − F(l_i), with i indexing the data points (so the index does not clash with the spring constant k)
ϕ(k) = Σ_{i=1}^n [k(l_i − l_0) − F(l_i)]^2
∂ϕ/∂k = Σ_{i=1}^n 2[k(l_i − l_0) − F(l_i)](l_i − l_0) = 0
Or k·Σ_{i=1}^n (l_i − l_0)^2 = Σ_{i=1}^n F(l_i)(l_i − l_0), so k = Σ_{i=1}^n F(l_i)(l_i − l_0) / Σ_{i=1}^n (l_i − l_0)^2
k≈0.8996N/m
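The closed-form estimator is easy to exercise on synthetic measurements (the l_i and F_i values below are hypothetical, not the assignment's data; an ideal Hooke's-law spring with k = 0.9 N/m is assumed):

```python
l0 = 0.10                              # natural length (m), hypothetical
ls = [0.15, 0.20, 0.25, 0.30]          # stretched lengths (m), hypothetical
Fs = [0.9 * (l - l0) for l in ls]      # forces (N) from an ideal k = 0.9 spring

# k = sum F_i (l_i - l0) / sum (l_i - l0)^2
k = sum(F * (l - l0) for F, l in zip(Fs, ls)) / sum((l - l0)**2 for l in ls)
```

With noise-free data the estimator recovers k = 0.9 exactly; with real measurements it returns the least-squares compromise, as in parts a and b.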
b.
Using the same logic with the additional data, we get k ≈ 0.9052 N/m
To determine which constant k best fits the data, we calculate the sum of squared residuals (SSR) for each over the entire dataset.
This yields an SSR of approximately 0.9062 for part a's k, versus approximately 0.8962 for part b's.
Lower is better here, which means the part b fit describes the entire dataset better than the part a fit.