1 Truncation Errors and the Taylor series
1.1 Definition
Truncation errors are those that result from using an approximation in place of an exact mathematical procedure.
Example: approximation of the derivative. A common formulation is the forward-difference formula

f'(x) ≈ (f(x + h) − f(x)) / h
One of the most important tools in numerical methods for approximating mathematical functions is the Taylor series:

f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(x) + …
1.2 Using Taylor series to estimate truncation errors
Let us see how Taylor series may be used to estimate truncation errors.
How can we determine the error introduced by using this formulation to compute the derivative instead of the exact mathematical definition?
Using a Taylor series truncated at the first order,

f(x + h) = f(x) + h f'(x) + (h²/2) f''(ξ), for some ξ between x and x + h.

Therefore,

(f(x + h) − f(x)) / h = f'(x) + (h/2) f''(ξ)

We now have an estimate of the truncation error:

Truncation error = (h/2) f''(ξ) = O(h)

So the order of the error due to the formulation used to compute the derivative is h: halving the step size roughly halves the error.
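As a quick numerical check of this O(h) behaviour, the following Python sketch (not part of the original notes; the test function sin(x) is chosen only for illustration) approximates the derivative with the forward-difference formula and prints the error for successively halved step sizes:

```python
import math

def forward_diff(f, x, h):
    """Forward-difference approximation of f'(x) with step size h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # exact derivative of sin(x) at x
for h in [0.1, 0.05, 0.025, 0.0125]:
    error = abs(forward_diff(math.sin, x, h) - exact)
    # Halving h roughly halves the error, consistent with a truncation error of order h.
    print(f"h = {h:8.4f}   error = {error:.6f}")
```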
2 Numerical Solution of Ordinary Differential Equations (ODE)
2.1 Definition
An equation that contains derivatives is called a differential equation. Differential equations have applications in all areas of science and engineering, and the mathematical formulation of most physical and engineering problems leads to differential equations. It is therefore important for engineers and scientists to know how to set up differential equations and solve them.
2.2 Euler’s Method:
Numerically approximate values for the solution of the initial-value problem y' = F(x, y), y(x_0) = y_0, with step size h, at x_n = x_{n-1} + h, are

y_n = y_{n-1} + h · F(x_{n-1}, y_{n-1})
Example:
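The worked example itself is not reproduced in these notes; as a minimal illustration, here is a Python sketch applying the update rule above (the problem y' = y, y(0) = 1 is a hypothetical choice, picked because its exact solution e^x is easy to compare against):

```python
def euler(F, x0, y0, h, n_steps):
    """Advance the IVP y' = F(x, y), y(x0) = y0 with Euler's method."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * F(x, y)  # y_n = y_{n-1} + h * F(x_{n-1}, y_{n-1})
        x = x + h            # x_n = x_{n-1} + h
    return y

# Illustrative problem: y' = y, y(0) = 1, exact solution e^x.
print(euler(lambda x, y: y, 0.0, 1.0, h=0.1, n_steps=10))  # about 2.5937 vs e = 2.7183
```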
2.3 Runge-Kutta 2nd order:
The Runge-Kutta 2nd order method is a numerical technique used to solve an ordinary differential equation of the form

dy/dx = F(x, y), y(x_0) = y_0
Only first order ordinary differential equations can be solved by using the Runge-Kutta 2nd order method. In other sections, we will discuss how the Euler and Runge-Kutta methods are used to solve higher order ordinary differential equations or coupled (simultaneous) differential equations.
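The notes do not state which RK2 variant is intended, so the sketch below uses one common choice (Heun's method; the midpoint and Ralston variants differ only in their coefficients), applied to the same first-order problem as above:

```python
def rk2_heun(F, x0, y0, h, n_steps):
    """Heun's (2nd order Runge-Kutta) method for y' = F(x, y), y(x0) = y0."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = F(x, y)               # slope at the start of the step
        k2 = F(x + h, y + h * k1)  # slope at the Euler-predicted end point
        y = y + (h / 2.0) * (k1 + k2)
        x = x + h
    return y

# Same illustrative problem as before: y' = y, y(0) = 1.
print(rk2_heun(lambda x, y: y, 0.0, 1.0, h=0.1, n_steps=10))  # about 2.7141 vs e = 2.7183
```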
3 Numerical Integration
3.1 Newton-Raphson method:
Let x_0 be an initial guess for the root α of f(x) = 0, and let h be the correction, i.e. α = x_0 + h. Then f(α) = 0 implies f(x_0 + h) = 0. Assuming h is small and f is twice continuously differentiable, a first-order Taylor expansion gives

f(x_0 + h) ≈ f(x_0) + h f'(x_0) = 0, so h ≈ −f(x_0) / f'(x_0),

and the improved estimate of the root is

x_1 = x_0 − f(x_0) / f'(x_0).
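A short Python sketch of the resulting iteration x_{n+1} = x_n − f(x_n) / f'(x_n); the equation x² − 2 = 0 is an illustrative choice, not from the original notes:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step
        if abs(step) < tol:  # stop once the correction is negligible
            break
    return x

# Illustrative example: root of f(x) = x^2 - 2 starting from x0 = 1.
print(newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # about 1.414214
```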
3.2 TRAPEZOIDAL RULE
The trapezoidal rule is based on the Newton-Cotes formula: if we approximate the integrand by an nth-order polynomial, then the integral of the function is approximated by the integral of that nth-order polynomial.
The trapezoidal rule works by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area. It follows that

∫_a^b f(x) dx ≈ (b − a) · (f(a) + f(b)) / 2
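In practice the interval is usually split into several segments and the rule is applied to each; a minimal Python sketch of this composite form (the integral of sin(x) over [0, π], with exact value 2, is chosen only for illustration):

```python
import math

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal-width segments on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)  # each interior point is shared by two trapezoids, so it carries full weight
    return h * total

# Illustrative example: integral of sin(x) from 0 to pi (exact value 2).
print(trapezoidal(math.sin, 0.0, math.pi, 100))  # about 1.99984
```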
3.3 SIMPSON’S 1/3RD RULE
The trapezoidal rule was based on approximating the integrand by a first-order polynomial and then integrating that polynomial over the interval of integration. Simpson's 1/3rd rule is an extension of the trapezoidal rule in which the integrand is approximated by a second-order polynomial.
Since for Simpson's 1/3rd rule the interval [a, b] is broken into 2 segments, the segment width is

h = (b − a) / 2

Hence Simpson's 1/3rd rule is given by

∫_a^b f(x) dx ≈ (h / 3) [f(a) + 4 f((a + b) / 2) + f(b)]

Since the above form has 1/3 in its formula, it is called Simpson's 1/3rd rule.
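A corresponding Python sketch of the two-segment rule above (the integrand x³ on [0, 2] is an illustrative choice; Simpson's 1/3rd rule happens to integrate cubics exactly):

```python
def simpson_13(f, a, b):
    """Simpson's 1/3rd rule on [a, b] with 2 segments of width h = (b - a) / 2."""
    h = (b - a) / 2.0
    mid = a + h  # midpoint of the interval
    return (h / 3.0) * (f(a) + 4.0 * f(mid) + f(b))

# Illustrative example: integral of x^3 from 0 to 2 (exact value 4).
print(simpson_13(lambda x: x**3, 0.0, 2.0))  # 4.0
```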
4 Roots of Equations
4.1 False position method
A shortcoming of the bisection method is that, in dividing the interval from x_l to x_u into equal halves, no account is taken of the magnitudes of f(x_l) and f(x_u). Yet if f(x_l) is much closer to zero than f(x_u), the root is likely to lie closer to x_l than to x_u.
The false position method uses this property:
A straight line joins the points (x_l, f(x_l)) and (x_u, f(x_u)). The intersection of this line with the x-axis represents an improved estimate of the root. This new estimate can be computed as

x_r = x_u − f(x_u) (x_l − x_u) / (f(x_l) − f(x_u))

This is called the false-position formula.
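A Python sketch of the false-position iteration, bracketing the root of the illustrative function f(x) = x² − 2 on [1, 2] (this example is not from the original notes):

```python
def false_position(f, xl, xu, tol=1e-10, max_iter=100):
    """False position: repeatedly replace one bracket end with the chord's x-intercept."""
    xr = xl
    for _ in range(max_iter):
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))  # false-position formula
        if abs(f(xr)) < tol:
            break
        if f(xl) * f(xr) < 0:
            xu = xr  # the root lies in [xl, xr]
        else:
            xl = xr  # the root lies in [xr, xu]
    return xr

# Illustrative example: root of x^2 - 2 bracketed by [1, 2].
print(false_position(lambda x: x**2 - 2, 1.0, 2.0))  # about 1.414214
```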
4.2 Secant method
Here we do not insist on bracketing the root. Given two approximations x_{n−1} and x_n, we take the next approximation x_{n+1} as the intersection of the line joining (x_{n−1}, f(x_{n−1})) and (x_n, f(x_n)) with the x-axis:

x_{n+1} = x_n − f(x_n) (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))

Thus x_{n+1} need not lie in the interval [x_{n−1}, x_n]. If the root α is a simple zero of f, it can be proved that the method converges for initial guesses in a sufficiently small neighborhood of α.
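Finally, a Python sketch of the secant iteration described above, again on the illustrative equation x² − 2 = 0 with starting guesses 1 and 2:

```python
def secant(f, x_prev, x_curr, tol=1e-10, max_iter=50):
    """Secant method: the next iterate is the x-intercept of the line through
    (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n))."""
    for _ in range(max_iter):
        f_prev, f_curr = f(x_prev), f(x_curr)
        x_next = x_curr - f_curr * (x_curr - x_prev) / (f_curr - f_prev)
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr

# Illustrative example: root of x^2 - 2; note the guesses need not bracket the root.
print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # about 1.414214
```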
Thanks,
Team Gradeup