Introduction

How to choose a tolerance

The close_at_tolerance algorithm

The check_is_close algorithm

The check_is_small algorithm

Implementation

Acknowledgements

References

Introduction

In most cases it is unreasonable to use operator==(...) for a floating-point
equality check. A simple solution like abs(f1-f2) <= e does not work for very
small or very large values. This floating-point comparison algorithm is based on
the more reliable solution presented by Knuth in [1]. For given floating-point
values *u* and *v* and a tolerance *e*:

| u - v | <= e * |u| and | u - v | <= e * |v|
defines a "very close with tolerance e" relationship between u and v    (1)

| u - v | <= e * |u| or | u - v | <= e * |v|
defines a "close enough with tolerance e" relationship between u and v    (2)

Both relationships are commutative, but neither is transitive. The relationship
defined by inequalities (**1**) is stronger than the relationship defined by
inequalities (**2**) (i.e. (**1**) => (**2**)). Because the multiplication on
the right side of the inequalities could cause an unwanted underflow, the
implementation uses a modified version of inequalities (**1**) and (**2**) in
which all underflow and overflow conditions can be guarded safely:

| u - v | / |u| <= e and | u - v | / |v| <= e    (1')

| u - v | / |u| <= e or | u - v | / |v| <= e    (2')
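As an illustrative sketch (not the library source; the function names here are made up), inequalities (1') and (2') can be evaluated directly. A production implementation would additionally guard the divisions when either value is zero:

```cpp
#include <cmath>

// Strong relationship (1'): both relative differences must be within e.
bool close_strong( double u, double v, double e )
{
    double diff = std::abs( u - v );
    return diff / std::abs( u ) <= e && diff / std::abs( v ) <= e;
}

// Weak relationship (2'): either relative difference within e suffices.
bool close_weak( double u, double v, double e )
{
    double diff = std::abs( u - v );
    return diff / std::abs( u ) <= e || diff / std::abs( v ) <= e;
}
```

For example, u = 1 and v = 2 with e = 0.5 satisfy the weak relationship (|u-v|/|v| = 0.5) but not the strong one (|u-v|/|u| = 1), showing that (1) is indeed stricter than (2).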

How to choose a tolerance

In the absence of domain-specific requirements, the tolerance can be chosen as the sum of the predicted upper limits of the "relative rounding errors" of the compared values. "Rounding" is the operation by which a real value x is represented in a floating-point format with p binary digits (bits) as the floating-point value X. The "relative rounding error" is the difference between the real and the floating-point values, relative to the real value: |x-X|/|x|. The discrepancy between a real value and its floating-point counterpart may be caused by several operations:

- Type promotion
- Arithmetic operations
- Conversion from a decimal representation to a binary representation
- Non-arithmetic operations

The first two operations are proven to have a relative rounding error that does not exceed 1/2 * "machine epsilon" for the appropriate floating-point type (represented by std::numeric_limits<FPT>::epsilon()). Conversion to a binary representation, unfortunately, carries no such guarantee. So we can't assume that float 1.1 is close to the real value 1.1 with tolerance 1/2 * "machine epsilon" for float (though for 11./10 we can). Non-arithmetic operations likewise have no predicted upper limit on the relative rounding error. Note that both arithmetic and non-arithmetic operations may also produce other, "non-rounding" errors, such as underflow/overflow, division-by-zero, or 'operation errors'.
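A small illustration of the point about decimal-to-binary conversion (a sketch, not part of the library): the decimal literal 1.1 has no exact binary representation, so rounding it at float precision and at double precision yields two different values, while the arithmetic operation 11./10 is correctly rounded and reproduces the double literal exactly.

```cpp
// The same decimal literal 1.1, rounded at two precisions, gives two
// different binary values; the comparison is done at double precision
// after the float operand widens losslessly.
bool float_literal_equals_double_literal()
{
    return 1.1f == 1.1;     // false: 1.1f widens to 1.10000002384...
}

// Division is an arithmetic operation and is correctly rounded under
// IEEE754, so it reproduces the double rounding of 1.1 exactly.
bool division_matches_literal()
{
    return 11.0 / 10.0 == 1.1;
}
```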

All theorems about the upper limit of a rounding error, including that of 1/2 * epsilon, refer only to the 'rounding' operation, nothing more. This means that the 'operation error', that is, the error incurred by the operation itself besides rounding, isn't considered. In order for numerical software to be able to actually predict error bounds, the IEEE754 standard requires arithmetic operations to be 'correctly or exactly rounded'. That is, the internal computation of a given operation must be such that the floating-point result is the exact result rounded to the number of working bits. In other words, the computation used by the operation itself must not introduce any additional errors. The IEEE754 standard does not require the same behavior from most non-arithmetic operations. Underflow/overflow and division-by-zero errors may cause rounding errors with unpredictable upper limits.

Finally, be aware that the 1/2 * epsilon rule is not transitive. In other words, the combination of two arithmetic operations may produce a rounding error that significantly exceeds 2 * 1/2 * epsilon. All in all, there are no generic rules on how to select a tolerance; users need to apply common sense and domain/problem-specific knowledge to decide on a tolerance value.
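As a concrete illustration (a sketch under the assumption of IEEE754 double precision, not library code), the single expression 0.1 + 0.2 combines two conversions and one addition, and its relative discrepancy from the literal 0.3 already exceeds 1/2 * epsilon, even though each individual rounding stayed within that bound:

```cpp
#include <cmath>
#include <limits>

// Accumulated error of several rounded steps can exceed the per-step
// bound: |(0.1 + 0.2) - 0.3| / 0.3 is larger than epsilon / 2.
bool sum_error_exceeds_half_epsilon()
{
    double sum = 0.1 + 0.2;                  // 0.30000000000000004...
    double rel = std::abs( sum - 0.3 ) / 0.3;
    return rel > std::numeric_limits<double>::epsilon() / 2;
}
```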

To simplify things in most usage cases, the latest version of the algorithms below opted to use percentage values for the tolerance specification (instead of fractions of the related values). In other words, you now use them to check that the difference between two values does not exceed x percent.

For more reading about floating-point comparison see references at the end.

The close_at_tolerance algorithm

The close_at_tolerance algorithm checks the relationship defined by
inequalities (**1**) or (**2**). It is implemented as a binary predicate.

enum floating_point_comparison_type { FPC_STRONG, FPC_WEAK };

template<typename FPT, typename PersentType = FPT>
class close_at_tolerance {
public:
    explicit close_at_tolerance( PersentType percentage_tolerance,
                                 floating_point_comparison_type fpc_type = FPC_STRONG );

    bool operator()( FPT left, FPT right ) const;
};

The constructor allows specifying the percentage tolerance value to compare
against. The fpc_type switch selects the comparison type. The default behavior
is to check the strong relationship defined by inequalities (**1**). Use
FPC_WEAK to check the weak relationship defined by inequalities (**2**).
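A minimal sketch of how such a predicate could be implemented (this is an illustration, not the actual library source; the percent-to-fraction scaling and the zero guard are assumptions):

```cpp
#include <cmath>

enum floating_point_comparison_type { FPC_STRONG, FPC_WEAK };

// Illustrative implementation sketch of the close_at_tolerance interface.
template<typename FPT, typename PersentType = FPT>
class close_at_tolerance {
public:
    explicit close_at_tolerance( PersentType percentage_tolerance,
                                 floating_point_comparison_type fpc_type = FPC_STRONG )
    : m_fraction( static_cast<FPT>( percentage_tolerance ) / 100 )  // percent -> fraction
    , m_type( fpc_type )
    {}

    bool operator()( FPT left, FPT right ) const
    {
        FPT diff = std::abs( left - right );
        if( diff == FPT(0) )                   // guards 0/0 when values are equal
            return true;
        FPT d1 = diff / std::abs( left );      // |u-v|/|u|
        FPT d2 = diff / std::abs( right );     // |u-v|/|v|
        return m_type == FPC_STRONG ? (d1 <= m_fraction && d2 <= m_fraction)
                                    : (d1 <= m_fraction || d2 <= m_fraction);
    }

private:
    FPT                            m_fraction;  // tolerance as a fraction
    floating_point_comparison_type m_type;
};
```

For example, close_at_tolerance<double>(1.0) built with a 1% tolerance accepts 100.0 vs 100.5 and rejects 100.0 vs 102.0.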

The check_is_close algorithm

The check_is_close algorithm presents an alternative interface to the close_at_tolerance algorithm. check_is_close is defined as a predicate with four arguments:

struct check_is_close_t {
    typedef bool result_type;

    template<typename FPT, typename PersentType>
    bool operator()( FPT left, FPT right,
                     PersentType percentage_tolerance,
                     floating_point_comparison_type fpc_type = FPC_STRONG );
};

namespace { check_is_close_t check_is_close; }

The check_is_small algorithm

The check_is_small algorithm checks that the absolute value of its argument is small enough. The absolute value of the tolerance is supplied as the second argument. check_is_small is defined as a binary predicate.

struct check_is_small_t {
    typedef bool result_type;

    template<typename FPT>
    bool operator()( FPT fpv, FPT tolerance );
};

namespace { check_is_small_t check_is_small; }

In spite of the fact that it is not recommended to use absolute values for floating-point comparisons, the need for a simple "is it small" check arises from time to time. This algorithm fills that niche.
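One plausible implementation of this predicate is a single absolute-value comparison (an illustrative sketch, not the library source):

```cpp
#include <cmath>

// Illustrative sketch: check_is_small compares the absolute value of the
// argument against an absolute (not relative) tolerance.
struct check_is_small_t {
    typedef bool result_type;

    template<typename FPT>
    bool operator()( FPT fpv, FPT tolerance ) const
    {
        return std::abs( fpv ) <= std::abs( tolerance );
    }
};
```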

Implementation

All the algorithms are implemented in the header file floating_point_comparison.hpp. It is recommended to use the test tool wrappers located in test_tools.hpp. Note that you still need to include floating_point_comparison.hpp yourself, since it is not included automatically.

Acknowledgements

Thanks to Fernando Cacciola for a very helpful discussion of floating-point arithmetic on the Boost forum.

References

[1] Knuth D.E., *The Art of Computer Programming*, vol. II.

[2] David Goldberg, *What Every Computer Scientist Should Know About Floating-Point Arithmetic*.

[3] Kulisch U., *Rounding Near Zero*.

[4] Philippe Langlois, *From Rounding Error Estimation to Automatic Correction with Automatic Differentiation*.

[5] Lots of information on William Kahan's home page.

[6] Alberto Squassabia, *Comparing Floats: How To Determine if Floating Quantities Are Close Enough Once a Tolerance Has Been Reached*, C++ Report, March 2000.

[7] Pete Becker, *The Journeyman's Shop: Trap Handlers, Sticky Bits, and Floating-Point Comparisons*, C/C++ Users Journal, December 2000.