Rounding and Truncation Errors
Numerical Methods for Scientific Computing with Python

When you perform calculations with computers, you often encounter two main types of errors: rounding errors and truncation errors. Understanding the difference between these errors is essential for ensuring that your numerical results are as accurate as possible.

Rounding errors occur because computers cannot represent most real numbers exactly due to their finite precision. For example, the decimal number 0.1 cannot be represented exactly in binary floating-point arithmetic, so it is stored as the closest possible value. Each arithmetic operation can introduce small discrepancies, and these can accumulate over many calculations.
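A quick way to see this in Python is shown in the minimal sketch below; it only uses built-in arithmetic and string formatting, and the exact digits printed depend on your platform's IEEE 754 double-precision arithmetic:

# Rounding error in action: 0.1 is stored as the nearest binary fraction
print(0.1 + 0.2)             # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)      # False, because each value carries a tiny representation error

# Show the value actually stored for 0.1, to 20 decimal places
print(f"{0.1:.20f}")         # e.g. 0.10000000000000000555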

Truncation errors, on the other hand, arise when an infinite process is approximated by a finite one. For example, when you use a finite number of terms in a Taylor series to approximate a function, the difference between the exact value and the approximation is the truncation error. Similarly, using a finite number of steps in a numerical integration method introduces truncation error.
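As a small illustration of the integration case, the sketch below (the helper name trapezoid_integral is introduced here only for this example) approximates the integral of exp(x) from 0 to 1 with the composite trapezoidal rule. The gap between the result and the exact value e - 1 is truncation error, and it shrinks as the number of steps grows:

import numpy as np

def trapezoid_integral(f, a, b, n_steps):
    # Composite trapezoidal rule with a finite number of steps;
    # the remaining gap to the exact integral is truncation error (O(h**2)).
    x = np.linspace(a, b, n_steps + 1)
    h = (b - a) / n_steps
    return h * (0.5 * f(x[0]) + f(x[1:-1]).sum() + 0.5 * f(x[-1]))

exact = np.exp(1) - 1.0  # exact value of the integral of exp(x) on [0, 1]
for n in [4, 16, 64]:
    approx = trapezoid_integral(np.exp, 0.0, 1.0, n)
    print(f"{n} steps: truncation error ~ {abs(approx - exact):.2e}")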

To see the difference, imagine calculating the value of Ο€ using the Gregory-Leibniz series:

Ο€ β‰ˆ 4 Γ— (1 βˆ’ 1/3 + 1/5 βˆ’ 1/7 + ...)

If you stop after the first four terms, the error is a truncation error, since you have not summed the infinite series. If you continue summing more terms, rounding errors from limited floating-point precision may also start to affect the result, especially if you use a large number of terms.
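The short sketch below makes this concrete (the helper name leibniz_pi is just for illustration): it sums the series for a few different numbers of terms and compares each partial sum to math.pi.

import math

def leibniz_pi(n_terms):
    # Partial sum of the Gregory-Leibniz series; stopping early causes truncation error
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4.0 * total

for n in [4, 100, 100000]:
    approx = leibniz_pi(n)
    print(f"{n} terms: {approx:.10f}, truncation error ~ {abs(approx - math.pi):.2e}")

The longer example that follows combines both effects: repeatedly adding 0.1 accumulates rounding error, while a truncated Taylor series for exp(1) illustrates truncation error.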

import numpy as np

# Demonstrate accumulation of rounding errors
sum_rounding = 0.0
for i in range(1000000):
    sum_rounding += 0.1  # 0.1 cannot be represented exactly

expected = 0.1 * 1000000
print("Sum with rounding error:", sum_rounding)
print("Expected sum:", expected)
print("Rounding error:", abs(sum_rounding - expected))

# Demonstrate accumulation of truncation errors using a Taylor series for exp(1)
def taylor_exp1(n_terms):
    result = 0.0
    factorial = 1.0
    for n in range(n_terms):
        if n > 0:
            factorial *= n
        result += 1.0 / factorial
    return result

approx_5_terms = taylor_exp1(5)
approx_15_terms = taylor_exp1(15)
true_value = np.exp(1)

print("\nexp(1) with 5 terms (truncation error):", approx_5_terms)
print("exp(1) with 15 terms (truncation error):", approx_15_terms)
print("True exp(1):", true_value)
print("Truncation error (5 terms):", abs(approx_5_terms - true_value))
print("Truncation error (15 terms):", abs(approx_15_terms - true_value))
Note: Study More

The IEEE 754 standard defines how floating-point numbers are represented and manipulated in computers. This standard is fundamental to understanding rounding errors in scientific computing, as it specifies the precision, rounding rules, and special values (such as NaN and infinity) used by most modern hardware and software. Exploring the IEEE 754 standard can help you appreciate why certain rounding errors are unavoidable and how they can affect your numerical algorithms.
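If you want to inspect these properties directly, the brief sketch below uses the standard sys module and NumPy's finfo to print the double-precision machine epsilon and the special values the standard defines:

import sys
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable double
print("Machine epsilon (sys):", sys.float_info.epsilon)       # about 2.22e-16
print("Machine epsilon (NumPy):", np.finfo(np.float64).eps)

# IEEE 754 special values
print("Infinity:", np.inf, " NaN:", np.nan)
print("NaN never equals itself:", np.nan == np.nan)           # False, as required by IEEE 754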
