```cpp
#include <boost/math/special_functions/gamma.hpp>

namespace boost{ namespace math{

template <class T1, class T2>
calculated-result-type gamma_p(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_p(T1 a, T2 z, const Policy&);

template <class T1, class T2>
calculated-result-type gamma_q(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_q(T1 a, T2 z, const Policy&);

template <class T1, class T2>
calculated-result-type tgamma_lower(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma_lower(T1 a, T2 z, const Policy&);

template <class T1, class T2>
calculated-result-type tgamma(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma(T1 a, T2 z, const Policy&);

}} // namespaces
```
There are four incomplete gamma functions: two are normalised versions (also known as regularized incomplete gamma functions) that return values in the range [0, 1], and two are non-normalised and return values in the range [0, Γ(a)]. Users interested in statistical applications should use the normalised versions (gamma_p and gamma_q).
All of these functions require a > 0 and z >= 0, otherwise they return the result of a domain_error.
The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use etc. Refer to the policy documentation for more details.
The return type of these functions is computed using the result type calculation rules when T1 and T2 are different types, otherwise the return type is simply T1.
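For example, a minimal usage sketch (not taken from the library's own examples; the promote_double policy is just one of the documented policies, shown here only to illustrate the optional third argument):

```cpp
// A minimal usage sketch; the printed values are illustrative only.
#include <boost/math/special_functions/gamma.hpp>
#include <boost/math/policies/policy.hpp>
#include <iostream>

int main()
{
   using namespace boost::math;

   double a = 5.0, z = 4.0;

   // Normalised (regularized) incomplete gamma functions, results in [0, 1]:
   double p = gamma_p(a, z);        // lower
   double q = gamma_q(a, z);        // upper; p + q == 1 up to rounding

   // Non-normalised versions, results in [0, tgamma(a)]:
   double lower = tgamma_lower(a, z);
   double upper = tgamma(a, z);     // two-argument overload: upper incomplete gamma

   // Mixed argument types are fine: the return type follows the
   // result type calculation rules (here: double).
   double mixed = gamma_p(5, 4.0);

   // The optional Policy argument tweaks error handling and internal behaviour,
   // e.g. disabling internal promotion of double to long double:
   double p2 = gamma_p(a, z, policies::make_policy(policies::promote_double<false>()));

   std::cout << p << ' ' << q << ' ' << lower << ' ' << upper << ' '
             << mixed << ' ' << p2 << '\n';
}
```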
```cpp
template <class T1, class T2>
calculated-result-type gamma_p(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_p(T1 a, T2 z, const Policy&);
```
Returns the normalised lower incomplete gamma function of a and z:

$P(a, z) = \dfrac{\gamma(a, z)}{\Gamma(a)} = \dfrac{1}{\Gamma(a)} \displaystyle\int_{0}^{z} t^{a-1} e^{-t}\,dt$

This function changes rapidly from 0 to 1 around the point z == a.
```cpp
template <class T1, class T2>
calculated-result-type gamma_q(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_q(T1 a, T2 z, const Policy&);
```
Returns the normalised upper incomplete gamma function of a and z:

$Q(a, z) = \dfrac{\Gamma(a, z)}{\Gamma(a)} = \dfrac{1}{\Gamma(a)} \displaystyle\int_{z}^{\infty} t^{a-1} e^{-t}\,dt$

This function changes rapidly from 1 to 0 around the point z == a.
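To see how sharp this transition is, a quick sketch (output omitted; the exact values are easy to inspect by running it):

```cpp
// Sketch: gamma_p rises rapidly from ~0 to ~1 (and gamma_q falls) as z crosses a.
#include <boost/math/special_functions/gamma.hpp>
#include <initializer_list>
#include <iostream>

int main()
{
   double a = 100.0;
   for(double z : { 70.0, 90.0, 100.0, 110.0, 130.0 })
   {
      std::cout << "z = " << z
                << "  gamma_p = " << boost::math::gamma_p(a, z)
                << "  gamma_q = " << boost::math::gamma_q(a, z) << '\n';
   }
}
```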
```cpp
template <class T1, class T2>
calculated-result-type tgamma_lower(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma_lower(T1 a, T2 z, const Policy&);
```
Returns the full (non-normalised) lower incomplete gamma function of a and z:

$\gamma(a, z) = \displaystyle\int_{0}^{z} t^{a-1} e^{-t}\,dt$
```cpp
template <class T1, class T2>
calculated-result-type tgamma(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma(T1 a, T2 z, const Policy&);
```
Returns the full (non-normalised) upper incomplete gamma function of a and z:

$\Gamma(a, z) = \displaystyle\int_{z}^{\infty} t^{a-1} e^{-t}\,dt$
The following tables give peak and mean relative errors over various domains of a and z, along with comparisons to the GSL-1.9 and Cephes libraries. Note that only results for the widest floating-point type on the system are given, as narrower types have effectively zero error.
Note that errors grow as a grows larger.
Note also that the higher error rates for the 80 and 128 bit long double results are somewhat misleading: expected results that are zero at 64-bit double precision may be non-zero - but exceptionally small - with the larger exponent range of a long double. These results therefore reflect the more extreme nature of the tests conducted for these types.
All values are in units of epsilon.
Table 8.9. Error rates for gamma_p

| | GNU C++ version 7.1.0 | GNU C++ version 7.1.0 | Sun compiler version 0x5150 | Microsoft Visual C++ version 14.1 |
|---|---|---|---|---|
| tgamma(a, z) medium values | Max = 0.955ε (Mean = 0.05ε) | Max = 41.6ε (Mean = 8.09ε) | Max = 239ε (Mean = 30.2ε) | Max = 35.1ε (Mean = 6.98ε) |
| tgamma(a, z) small values | Max = 0ε (Mean = 0ε) | Max = 2ε (Mean = 0.464ε) | Max = 2ε (Mean = 0.461ε) | Max = 1.54ε (Mean = 0.439ε) |
| tgamma(a, z) large values | Max = 0ε (Mean = 0ε) | Max = 3.08e+04ε (Mean = 1.86e+03ε) | Max = 3.02e+04ε (Mean = 1.91e+03ε) | Max = 243ε (Mean = 20.2ε) |
| tgamma(a, z) integer and half integer values | Max = 0ε (Mean = 0ε) | Max = 11.8ε (Mean = 2.66ε) | Max = 71.6ε (Mean = 9.47ε) | Max = 13ε (Mean = 2.97ε) |
Table 8.10. Error rates for gamma_q

| | GNU C++ version 7.1.0 | GNU C++ version 7.1.0 | Sun compiler version 0x5150 | Microsoft Visual C++ version 14.1 |
|---|---|---|---|---|
| tgamma(a, z) medium values | Max = 0.927ε (Mean = 0.035ε) | Max = 32.3ε (Mean = 6.61ε) | Max = 199ε (Mean = 26.6ε) | Max = 23.7ε (Mean = 4ε) |
| tgamma(a, z) small values | Max = 0ε (Mean = 0ε) | Max = 2.45ε (Mean = 0.885ε) | Max = 2.45ε (Mean = 0.819ε) | Max = 2.26ε (Mean = 0.74ε) |
| tgamma(a, z) large values | Max = 0ε (Mean = 0ε) | Max = 6.82e+03ε (Mean = 414ε) | Max = 1.15e+04ε (Mean = 733ε) | Max = 469ε (Mean = 31.5ε) |
| tgamma(a, z) integer and half integer values | Max = 0ε (Mean = 0ε) | Max = 11.1ε (Mean = 2.07ε) | Max = 54.7ε (Mean = 6.16ε) | Max = 8.72ε (Mean = 1.48ε) |
Table 8.11. Error rates for tgamma_lower

| | GNU C++ version 7.1.0 | GNU C++ version 7.1.0 | Sun compiler version 0x5150 | Microsoft Visual C++ version 14.1 |
|---|---|---|---|---|
| tgamma(a, z) medium values | Max = 0.833ε (Mean = 0.0315ε) | Max = 6.79ε (Mean = 1.46ε) | Max = 363ε (Mean = 63.8ε) | Max = 5.62ε (Mean = 1.49ε) |
| tgamma(a, z) small values | Max = 0ε (Mean = 0ε) | Max = 1.97ε (Mean = 0.555ε) | Max = 1.97ε (Mean = 0.558ε) | Max = 1.57ε (Mean = 0.525ε) |
| tgamma(a, z) integer and half integer values | Max = 0ε (Mean = 0ε) | Max = 4.83ε (Mean = 1.15ε) | Max = 84.7ε (Mean = 17.5ε) | Max = 2.69ε (Mean = 0.849ε) |
Table 8.12. Error rates for tgamma (incomplete)

| | GNU C++ version 7.1.0 | GNU C++ version 7.1.0 | Sun compiler version 0x5150 | Microsoft Visual C++ version 14.1 |
|---|---|---|---|---|
| tgamma(a, z) medium values | Max = 0ε (Mean = 0ε) | Max = 8.47ε (Mean = 1.9ε) | Max = 412ε (Mean = 95.5ε) | Max = 8.14ε (Mean = 1.76ε) |
| tgamma(a, z) small values | Max = 0.753ε (Mean = 0.0474ε) | Max = 2.31ε (Mean = 0.775ε) | Max = 2.13ε (Mean = 0.717ε) | Max = 2.53ε (Mean = 0.66ε) |
| tgamma(a, z) integer and half integer values | Max = 0ε (Mean = 0ε) | Max = 5.52ε (Mean = 1.48ε) | Max = 79.6ε (Mean = 20.9ε) | Max = 5.16ε (Mean = 1.33ε) |
There are two sets of tests: spot tests compare values taken from Mathworld's online evaluator with this implementation to perform a basic "sanity check". Accuracy tests use data generated at very high precision (using NTL's RR class set at 1000-bit precision) with this implementation using a very high precision 60-term Lanczos approximation, and some, but not all, of the special case handling disabled. This is less than satisfactory: an independent method should really be used, but no such methods appear to be available. We can't even use a deliberately naive implementation without special case handling, since Legendre's continued fraction (see below) is unstable for small a and z.
These four functions share a common implementation since they are all related via:
1)   $P(a, z) = \dfrac{\gamma(a, z)}{\Gamma(a)}$   (gamma_p: the normalised lower integral)

2)   $Q(a, z) = \dfrac{\Gamma(a, z)}{\Gamma(a)}$   (gamma_q: the normalised upper integral)

3)   $P(a, z) + Q(a, z) = 1$   (equivalently, $\gamma(a, z) + \Gamma(a, z) = \Gamma(a)$)
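These identities are straightforward to check numerically, for example (a quick sketch, not part of the library's test suite; the tolerances are illustrative):

```cpp
// Sketch: verify the relations between the four incomplete gamma functions.
#include <boost/math/special_functions/gamma.hpp>
#include <cassert>
#include <cmath>

int main()
{
   double a = 3.5, z = 2.25;

   double p = boost::math::gamma_p(a, z);
   double q = boost::math::gamma_q(a, z);
   double lower = boost::math::tgamma_lower(a, z);
   double upper = boost::math::tgamma(a, z);     // upper incomplete gamma
   double whole = boost::math::tgamma(a);        // complete gamma

   assert(std::fabs(lower / whole - p) < 1e-14);       // relation (1)
   assert(std::fabs(upper / whole - q) < 1e-14);       // relation (2)
   assert(std::fabs(p + q - 1) < 1e-14);               // relation (3)
   assert(std::fabs(lower + upper - whole) < 1e-12);   // gamma + Gamma = Gamma(a)
}
```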
The lower incomplete gamma is computed from its series representation:
4)   $\gamma(a, x) = x^{a} e^{-x} \displaystyle\sum_{n=0}^{\infty} \frac{x^{n}}{a(a+1)\cdots(a+n)}$
Or by subtraction of the upper integral from either Γ(a) or 1 when x − 1/(3x) > a and x > 1.1.
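For illustration, a naive summation of series (4) might look like the sketch below; the library's actual implementation adds the scaling, iteration policies and special-case handling discussed in this section:

```cpp
// Illustrative only: sum series (4) for the lower incomplete gamma function.
#include <cmath>
#include <limits>

double lower_gamma_series(double a, double x)
{
   // gamma(a, x) = x^a * e^-x * sum_{n >= 0} x^n / (a (a+1) ... (a+n))
   double term = 1 / a;      // n = 0 term of the sum
   double sum = term;
   for(unsigned n = 1; n < 1000; ++n)
   {
      term *= x / (a + n);
      sum += term;
      if(std::fabs(term) < std::fabs(sum) * std::numeric_limits<double>::epsilon())
         break;
   }
   return std::pow(x, a) * std::exp(-x) * sum;
}
```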
The upper integral is computed from Legendre's continued fraction representation:
5)   $\Gamma(a, x) = \cfrac{x^{a} e^{-x}}{x+1-a- \cfrac{1\,(1-a)}{x+3-a- \cfrac{2\,(2-a)}{x+5-a- \cdots}}}$
when x > 1.1, or by subtraction of the lower integral from either Γ(a) or 1 when x − 1/(3x) < a.
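For illustration only, the continued fraction (5) can be evaluated with the modified Lentz algorithm along the following lines; the library's own implementation differs in detail (iteration limits, policies and scaling):

```cpp
// Illustrative only: evaluate Legendre's continued fraction (5) for the upper
// incomplete gamma function using the modified Lentz algorithm.
#include <cmath>
#include <limits>

double upper_gamma_cf(double a, double x)
{
   const double tiny = std::numeric_limits<double>::min() / std::numeric_limits<double>::epsilon();
   const double eps  = std::numeric_limits<double>::epsilon();

   // Continued fraction b0 + a1/(b1 + a2/(b2 + ...)) with
   // b_k = x + 2k + 1 - a  and  a_k = -k * (k - a).
   double b = x + 1 - a;
   double C = (b != 0) ? b : tiny;
   double D = 0;
   double f = C;

   for(unsigned k = 1; k < 1000; ++k)
   {
      double ak = -static_cast<double>(k) * (k - a);
      b += 2;                          // b_k = x + 2k + 1 - a
      D = b + ak * D;  if(D == 0) D = tiny;
      C = b + ak / C;  if(C == 0) C = tiny;
      D = 1 / D;
      double delta = C * D;
      f *= delta;
      if(std::fabs(delta - 1) < eps)
         break;
   }
   return std::pow(x, a) * std::exp(-x) / f;
}
```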
For x < 1.1 computation of the upper integral is more complex as the continued fraction representation is unstable in this area. However there is another series representation for the lower integral:
6)   $\gamma(a, x) = x^{a} \displaystyle\sum_{k=0}^{\infty} \frac{(-x)^{k}}{k!\,(a+k)}$
That lends itself to calculation of the upper integral via rearrangement to:
7)   $\Gamma(a, x) = \dfrac{\bigl(\Gamma(a+1) - 1\bigr) - \bigl(x^{a} - 1\bigr)}{a} \;-\; x^{a} \displaystyle\sum_{k=1}^{\infty} \frac{(-x)^{k}}{k!\,(a+k)}$
Refer to the documentation for powm1 and tgamma1pm1 for details of their implementation.
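A sketch of that rearrangement, written in terms of powm1 and tgamma1pm1 (illustrative only; the function name and iteration limit are my own):

```cpp
// Illustrative only: the full upper incomplete gamma for small x (x < 1.1) via
// the rearranged series (7), using powm1 and tgamma1pm1 to avoid cancellation.
#include <boost/math/special_functions/gamma.hpp>   // tgamma1pm1
#include <boost/math/special_functions/powm1.hpp>
#include <cmath>
#include <limits>

double upper_gamma_small_x(double a, double x)
{
   // ((Gamma(a+1) - 1) - (x^a - 1)) / a:
   double result = (boost::math::tgamma1pm1(a) - boost::math::powm1(x, a)) / a;

   // Subtract the k >= 1 tail of series (6): x^a * sum (-x)^k / (k! (a + k))
   double pow_xa = std::pow(x, a);
   double term = 1;                        // (-x)^k / k!, starting at k = 0
   for(unsigned k = 1; k < 1000; ++k)
   {
      term *= -x / k;                      // now (-x)^k / k!
      double delta = pow_xa * term / (a + k);
      result -= delta;
      if(std::fabs(delta) < std::fabs(result) * std::numeric_limits<double>::epsilon())
         break;
   }
   return result;                          // == tgamma(a, x) for small x
}
```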
For x < 1.1 the crossover point where the result is ~0.5 no longer occurs for x ~ a. Using x * 0.75 < a as the crossover criterion for 0.5 < x <= 1.1 keeps the maximum value computed (whether it's the upper or lower integral) to around 0.75. Likewise for x <= 0.5, using -0.4 / log(x) < a as the crossover criterion keeps the maximum value computed to around 0.7 (whether it's the upper or lower integral).
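Putting these crossover rules together with the x > 1.1 criterion given earlier, the selection logic looks roughly like the following sketch (a simplified reading of the rules described here, not the library's actual control flow):

```cpp
// Simplified sketch: decide whether to evaluate the lower integral directly;
// the other integral is then obtained by subtraction.
#include <cmath>

bool evaluate_lower_integral_first(double a, double x)
{
   if(x > 1.1)
      return x - 1 / (3 * x) < a;     // crossover near x ~ a
   if(x > 0.5)
      return x * 0.75 < a;            // keeps the value computed below ~0.75
   return -0.4 / std::log(x) < a;     // keeps the value computed below ~0.7
}
```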
There are two special cases used when a is an integer or half integer, and the crossover conditions listed above indicate that we should compute the upper integral Q. If a is an integer in the range 1 <= a < 30 then the following finite sum is used:
9)   $Q(a, x) = e^{-x} \displaystyle\sum_{n=0}^{a-1} \frac{x^{n}}{n!}$
While for half-integers in the range 0.5 <= a < 30 the following finite sum is used:
10)   $Q(a, x) = \operatorname{erfc}\!\left(\sqrt{x}\right) + \dfrac{e^{-x}}{\sqrt{\pi x}} \displaystyle\sum_{n=1}^{a-\frac{1}{2}} \frac{x^{n}}{\prod_{k=1}^{n}\left(k-\frac{1}{2}\right)}$
These are both more stable and more efficient than the continued fraction alternative.
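For illustration, assuming the finite sums are as reconstructed in (9) and (10) above, they might be coded along these lines (std::erfc stands in for the library's erfc):

```cpp
// Illustrative only: Q(a, x) for small positive integer and half-integer a.
#include <cmath>

// Integer a, 1 <= a < 30:  Q(a, x) = exp(-x) * sum_{n=0}^{a-1} x^n / n!
double gamma_q_integer(unsigned a, double x)
{
   double term = 1;       // x^n / n! at n = 0
   double sum = 1;
   for(unsigned n = 1; n < a; ++n)
   {
      term *= x / n;
      sum += term;
   }
   return std::exp(-x) * sum;
}

// Half-integer a = i + 0.5, 0.5 <= a < 30:
// Q(a, x) = erfc(sqrt(x)) + exp(-x)/sqrt(pi*x) * sum_{n=1}^{i} x^n / prod_{k=1}^{n}(k - 0.5)
double gamma_q_half_integer(double a, double x)
{
   unsigned i = static_cast<unsigned>(a);      // i = a - 0.5
   double result = std::erfc(std::sqrt(x));
   if(i > 0)
   {
      double term = 1;
      double sum = 0;
      for(unsigned n = 1; n <= i; ++n)
      {
         term *= x / (n - 0.5);
         sum += term;
      }
      result += std::exp(-x) / std::sqrt(3.14159265358979323846 * x) * sum;
   }
   return result;
}
```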
When the argument a is large and x ~ a, the series (4) and continued fraction (5) above are very slow to converge. In this area an expansion due to Temme is used:
11)   $Q(a, x) = \tfrac{1}{2}\operatorname{erfc}\!\left(\eta\sqrt{\tfrac{a}{2}}\right) + \dfrac{e^{-\frac{1}{2}a\eta^{2}}}{\sqrt{2\pi a}}\,T(a, \eta)$

12)   $T(a, \eta) \sim \displaystyle\sum_{k=0}^{\infty}\frac{1}{a^{k}}\sum_{n=0}^{\infty} C_{k}^{n}\,\eta^{n}$

13)   $\tfrac{1}{2}\eta^{2} = \lambda - 1 - \ln\lambda, \qquad \operatorname{sign}(\eta) = \operatorname{sign}(\lambda - 1)$

14)   $\lambda = \dfrac{x}{a}$
The double sum is truncated to a fixed number of terms - to give a specific target precision - and evaluated as a polynomial-of-polynomials (a schematic sketch is given after the discussion of the usage zones below). There are versions for up to 128-bit long double precision: types requiring greater precision than that do not use these expansions. The coefficients $C_k^n$ are computed in advance using the recurrence relations given by Temme. The zone where these expansions are used is
(a > 20) && (a < 200) && fabs(x-a)/a < 0.4
And:
(a > 200) && (fabs(x-a)/a < 4.5/sqrt(a))
The latter range is valid for all types up to 128-bit long doubles, and is designed to ensure that the result is larger than 10⁻⁶; the first range is used only for types up to 80-bit long doubles. These domains are narrower than the ones recommended by either Temme or Didonato and Morris. However, using a wider range results in large and inexact (i.e. computed) values being passed to the exp and erfc functions, resulting in significantly larger error rates. In other words there is a fine trade-off here between efficiency and error. The current limits should keep the number of terms required by (4) and (5) to no more than ~20 at double precision.
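The phrase "polynomial-of-polynomials" just means a nested Horner scheme: each inner polynomial in η is evaluated first, and its value becomes a coefficient of an outer polynomial in 1/a. A schematic sketch (the coefficient table and the lengths are parameters here; Temme's actual coefficient values are not reproduced):

```cpp
// Schematic sketch of a "polynomial-of-polynomials" evaluation: the inner
// polynomials (in eta) supply the coefficients of an outer polynomial (in 1/a).
#include <cstddef>

double evaluate_temme_sum(const double* const* C,        // C[k]: coefficients of the k-th inner polynomial
                          const std::size_t* inner_lengths,
                          std::size_t outer_length,
                          double a, double eta)
{
   double outer = 0;
   // Horner in 1/a, from the highest-order term down:
   for(std::size_t k = outer_length; k-- > 0; )
   {
      // Horner in eta for the k-th inner polynomial:
      double inner = 0;
      for(std::size_t n = inner_lengths[k]; n-- > 0; )
         inner = inner * eta + C[k][n];
      outer = outer / a + inner;
   }
   return outer;   // sum_k (1/a^k) * sum_n C[k][n] * eta^n
}
```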
For the normalised incomplete gamma functions, calculation of the leading power terms is central to the accuracy of the function. For smallish a and x combining the power terms with the Lanczos approximation gives the greatest accuracy:
15)   $\dfrac{x^{a} e^{-x}}{\Gamma(a)} = \left(\dfrac{x}{a+g-\frac{1}{2}}\right)^{a} e^{\,a+g-\frac{1}{2}-x}\, \dfrac{\sqrt{a+g-\frac{1}{2}}}{L(a)}$

where g is the Lanczos parameter and L(a) is the Lanczos sum (with its √(2π) scaling included).
In the event that this causes underflow/overflow then the exponent can be reduced by a factor of a and brought inside the power term.
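Concretely, writing the Lanczos shift as g − ½ as above, that rescaling amounts to the identity:

$\left(\dfrac{x}{a+g-\frac{1}{2}}\right)^{a} e^{\,a+g-\frac{1}{2}-x} = \left(\dfrac{x}{a+g-\frac{1}{2}}\, e^{\frac{a+g-\frac{1}{2}-x}{a}}\right)^{a}$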
When a and x are large, we end up with a very large exponent with a base near one: this will not be computed accurately via the pow function, and taking logs simply leads to cancellation errors. The worst of the errors can be avoided by using:
16)   $\left(\dfrac{x}{a}\right)^{a} e^{\,a-x} = \exp\!\left(a\left(\ln\!\left(1+\dfrac{x-a}{a}\right) - \dfrac{x-a}{a}\right)\right)$
when a-x is small and a and x are large. There is still a subtraction and therefore some cancellation errors - but the terms are small so the absolute error will be small - and it is absolute rather than relative error that counts in the argument to the exp function. Note that for sufficiently large a and x the errors will still get you eventually, although this does delay the inevitable much longer than other methods. Use of log(1+x)-x here is inspired by Temme (see references below).
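A small sketch of that trick (the thresholds are illustrative only; boost::math::log1p could be used in place of std::log1p where the C++11 version is unavailable):

```cpp
// Illustrative only: compute (x/a)^a * exp(a - x) without raising a base close
// to 1 to a huge power, using log1p(d) - d as in (16).
#include <cmath>

double scaled_power_term(double a, double x)
{
   double d = (x - a) / a;
   if(std::fabs(d) < 0.5 && a > 100)                 // thresholds illustrative only
      return std::exp(a * (std::log1p(d) - d));      // stable form: (x/a)^a * e^(a-x)
   return std::pow(x / a, a) * std::exp(a - x);      // direct form elsewhere
}
```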