
Minimax Approximations and the Remez Algorithm

The directory libs/math/minimax contains a command-line-driven program for generating minimax approximations using the Remez algorithm. Both polynomial and rational approximations are supported, although the latter are tricky to converge: it is not uncommon for convergence of rational forms to fail. No such limitations apply to polynomial approximations, which should always converge smoothly.

It's worth stressing that developing rational approximations to functions is often not an easy task, and one to which many books have been devoted. To use this tool, you will need to have a reasonable grasp of what the Remez algorithm is, and the general form of the approximation you want to achieve.

Unless you are already familiar with the Remez method, you should first read the brief background article explaining the principles behind the Remez algorithm.

The program consists of two parts:

main.cpp

Contains the command line parser, and all the calls to the Remez code.

f.cpp

Contains the function to approximate.

Therefore, to use this tool, you must modify f.cpp so that it returns the function to be approximated. The tool supports multiple function approximations within the same compiled program, each as a separate variant:

NTL::RR f(const NTL::RR& x, int variant);

Returns the value of function number variant at the point x. So, if you wish, you can simply add the function to approximate as a new variant after the existing examples.
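
For example, a new variant could be added to f.cpp along the following lines. This is only a minimal sketch: the functions used in it are purely hypothetical examples, and the real f.cpp already contains a number of worked variants and the necessary includes.

#include <NTL/RR.h>

NTL::RR f(const NTL::RR& x, int variant)
{
   NTL::RR result;   // default-constructed to zero
   switch(variant)
   {
   case 0:
      // ... existing examples left unchanged ...
      result = exp(x);
      break;
   case 1:
      // Hypothetical new variant: the function we wish to approximate.
      result = exp(-x * x);
      break;
   }
   return result;
}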

In addition to those two files, the program needs to be linked against a patched NTL library in order to build.

Note that the function f must return the rational part of the approximation: for example, if you are approximating a function f(x) then it is quite common to use:

f(x) = g(x)(Y + R(x))

where g(x) is the dominant part of f(x), Y is some constant, and R(x) is the rational approximation part, usually optimised for a low absolute error compared to |Y|.

In this case you would define f to return f(x)/g(x) and then set the y-offset of the approximation to Y (see command line options below).

Many other forms are possible, but in all cases the objective is to split f(x) into a dominant part that you can evaluate easily using standard math functions, and a smooth and slowly changing rational approximation part. Refer to your favourite textbook for more examples.
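
As a purely hypothetical illustration (not one of the variants shipped with the tool), suppose we wanted an approximation to expm1(x) = e^x - 1 over a small interval about the origin. The dominant part is g(x) = x, so we could write expm1(x) = x(Y + R(x)) and have f return expm1(x)/x, leaving the constant Y to be supplied via the y-offset option described below. Continuing the f.cpp sketch above, one more case would be added to the switch:

   case 2:
      // Hypothetical variant: the rational part of expm1(x) = x * (Y + R(x)),
      // i.e. return expm1(x) / x.  The limit of (e^x - 1) / x at x == 0 is 1,
      // so handle that point explicitly.
      if(x == 0)
         result = 1;
      else
         result = (exp(x) - 1) / x;
      break;

The tool would then be run with a range such as [-0.5, 0.5] and the y-offset set by hand (or with y-offset auto), so that only the slowly varying remainder R(x) is left for the Remez code to fit.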

Command line options for the program are as follows:

variant N

Sets the current function variant to N. This allows multiple functions that are to be approximated to be compiled into the same executable. Defaults to 0.

range a b

Sets the domain for the approximation to the range [a,b], defaults to [0,1].

relative

Sets the Remez code to optimise for relative error. This is the default at program startup. Note that relative error can only be used if f(x) has no roots over the range being optimised.

absolute

Sets the Remez code to optimise for absolute error.

pin [true|false]

"Pins" the approximation so that it passes through the origin. Obviously, only set this to true if R(0) must be zero. This is typically used when trying to preserve a root at the origin while also optimising for relative error.

order N D

Sets the order of the approximation to N in the numerator and D in the denominator. If D is zero then the result will be a polynomial approximation. There will be N+D+2 coefficients in total, the first coefficient of the numerator is zero if pin was set to true, and the first coefficient of the denominator is always one.
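
For example, order 3 2 requests an approximation of the form

R(x) = (a0 + a1 x + a2 x^2 + a3 x^3) / (1 + b1 x + b2 x^2)

that is, 4 numerator and 3 denominator coefficients (3 + 2 + 2 = 7 in total), with a0 forced to zero if pin was set to true.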

working-precision N

Sets the working precision of NTL::RR to N binary digits. Defaults to 250.

target-precision N

Sets the precision of the printed output to N binary digits: set this to match the precision of the type that will be used to evaluate the approximation. Defaults to 53 (for double precision).

skew val

"Skews" the initial interpolated control points towards one end or the other of the range. Positive values skew the initial control points towards the left-hand side of the range, and negative values towards the right-hand side. If an approximation won't converge (a common situation), try adjusting the skew parameter until the first step yields the smallest possible error. val should be in the range [-100,+100]; the default is zero.

brake val

Sets a brake on each step so that the change in the control points is reduced by val%. Defaults to 50; try a higher value if an approximation won't converge, or a lower value for speedier convergence.

x-offset val

Sets the x-offset to val: the approximation will be generated for f(x + X) + Y where X is the x-offset and Y is the y-offset. Defaults to zero. To avoid rounding errors, take care to specify a value that can be exactly represented as a floating point number.

y-offset val

Sets the y-offset to val: the approximation will be generated for f(x + X) + Y where X is the x-offset and Y is the y-offset. Defaults to zero. To avoid rounding errors, take care to specify a value that can be exactly represented as a floating point number.

y-offset auto

Sets the y-offset to the average of f(x) evaluated at the two endpoints of the range and at the midpoint of the range. The calculated value is deliberately truncated to float precision (and should be stored as a float in your code). The approximation will be generated for f(x + X) + Y where X is the x-offset and Y is the y-offset. Defaults to zero.

graph N

Prints N evaluations of f(x) at evenly spaced points over the range being optimised. If unspecified then N defaults to 3. Use this to check that f(x) is indeed smooth over the range of interest.

step N

Performs N steps, or one step if N is unspecified. After each step prints: the peak error at the extrema of the error function of the approximation, the theoretical error term solved for on the last step, and the maximum relative change in the location of the Chebyshev control points. The approximation has converged on the minimax solution when the two error terms are (approximately) equal, and the change in the control points has decreased to a suitably small value.

test [float|double|long]

Tests the current approximation at float, double, or long double precision. Useful to check for rounding errors in evaluating the approximation at fixed precision. Tests are conducted at the extrema of the error function of the approximation, and at the zeros of the error function.

test [float|double|long] N

Tests the current approximation at float, double, or long double precision. Useful to check for rounding errors in evaluating the approximation at fixed precision. Tests are conducted at N evenly spaced points over the range of the approximation. If none of float, double, or long is specified then the tests are conducted using NTL::RR; this can be used to obtain the error function of the approximation.

rescale a b

Takes the current Chebyshev control points and rescales them over a new interval [a,b]. Sometimes this can be used to obtain starting control points for an approximation that cannot otherwise be converged.

rotate

Moves one term from the numerator to the denominator, but keeps the Chebyshev control points the same. Sometimes this can be used to obtain starting control points for an approximation that cannot otherwise be converged.

info

Prints out the current approximation: the location of the zeros of the error function, the location of the Chebyshev control points, the x and y offsets, and of course the coefficients of the polynomials.
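
To illustrate how the options fit together, a hypothetical session for a simple polynomial approximation might proceed as follows (variant 1 here is assumed to be the hypothetical example from the f.cpp sketch above; the settings are illustrative only):

variant 1
range 0 0.5
relative
order 6 0
target-precision 53
graph 10
step 20
test double
info

Here graph gives a quick sanity check that f(x) really is smooth over the range, step 20 runs twenty Remez iterations, and the approximation has converged once the two error terms printed after each step are approximately equal and the change in the control points has become small. test double then checks for rounding problems when the approximation is evaluated at double precision, and info prints the coefficients. For a rational approximation that refuses to converge, the skew, brake, rescale and rotate commands described above can be used to obtain better starting control points.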

