Interval arithmetic

Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to put bounds on rounding errors and measurement errors in mathematical computation. Numerical methods using interval arithmetic can guarantee reliable, mathematically correct results. Instead of representing a value as a single number, interval arithmetic represents each value as a range of possibilities. For example, instead of estimating the height of someone as exactly 2.0 metres, using interval arithmetic one might be certain that that person is somewhere between 1.97 and 2.03 metres.

Interval arithmetic is suitable for a variety of purposes. The most common use is in software, to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems.

The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds for the range of a function in one or more variables. These endpoints are not necessarily the true supremum or infimum, since the precise calculation of those values can be difficult or impossible; the bounds need only contain the function's range as a subset.

This treatment is typically limited to real intervals, so quantities of the form [a, b] = {x ∈ ℝ : a ≤ x ≤ b}.

As with traditional calculations with real numbers, simple arithmetic operations and functions on elementary intervals must first be defined.[1] More complicated functions can be calculated from these basic elements.[1]
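As an illustration, the elementary operations can be sketched in C++ as follows. This is a minimal, hypothetical Interval type in which the endpoints are ordinary doubles and outward (directed) rounding of the results is deliberately omitted, so it shows only the set-theoretic rules, not a rigorous implementation.

#include <algorithm>
#include <stdexcept>

// Minimal interval type: the invariant is lo <= hi.
struct Interval {
    double lo, hi;
};

// Addition and subtraction combine the endpoints directly.
Interval operator+(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }
Interval operator-(Interval a, Interval b) { return {a.lo - b.hi, a.hi - b.lo}; }

// The extrema of x*y over a box are attained at its corners.
Interval operator*(Interval a, Interval b) {
    double p[] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}

// Division is only defined here when the divisor does not contain zero.
Interval operator/(Interval a, Interval b) {
    if (b.lo <= 0.0 && 0.0 <= b.hi)
        throw std::domain_error("divisor interval contains zero");
    return a * Interval{1.0 / b.hi, 1.0 / b.lo};
}

Each single operation returns the exact range of results; over-estimation only enters when such operations are composed, as discussed further below.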

The error in this case does not affect the conclusion (normal weight), but this is not always so. If the man were slightly heavier, the BMI's range might include the cutoff value of 25; in that case the scale's precision would be insufficient to make a definitive conclusion.

Interval arithmetic states the range of possible outcomes explicitly. Results are no longer stated as numbers, but as intervals that represent imprecise values. The sizes of the intervals are similar to error bars in expressing the extent of uncertainty.

Figure: Body mass index for different weights in relation to height L (in metres)

In this case, the man may have a normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion. This demonstrates interval arithmetic's ability to correctly track and propagate error.

The multiplication of two intervals can be interpreted as computing the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.
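For instance, reusing the hypothetical Interval type sketched above, the area of a rectangle whose sides are known only to within a tolerance can be enclosed as follows; the measurements are made up for illustration.

#include <iostream>
// Assumes the Interval type and operator* from the sketch above.

int main() {
    Interval width{2.9, 3.1};      // side measured as 3.0 +/- 0.1
    Interval height{1.9, 2.1};     // side measured as 2.0 +/- 0.1
    Interval area = width * height;
    std::cout << "[" << area.lo << ", " << area.hi << "]\n";   // prints [5.51, 6.51]
}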

To shorten the notation of intervals in formulas, brackets can be used; for example, [x] may denote an interval.

Interval functions beyond the four basic operators may also be defined.

From this, basic properties of interval functions can easily be derived.

Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions and operators.
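As a sketch of this, continuing with the hypothetical Interval type from above, the natural interval extension of f(x) = x·(1 − x) is obtained by replacing each real operation with its interval counterpart; the enclosure it returns is guaranteed but not necessarily tight.

#include <iostream>
// Assumes the Interval type and its operators from the sketch above.

// Natural interval extension of f(x) = x * (1 - x): each real operation is
// replaced by the corresponding interval operation.
Interval f(Interval x) {
    Interval one{1.0, 1.0};
    return x * (one - x);
}

int main() {
    Interval range = f(Interval{0.0, 1.0});
    // Prints [0, 1]; the true range of f over [0, 1] is only [0, 0.25].
    std::cout << "[" << range.lo << ", " << range.hi << "]\n";
}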

An interval can also be defined as the set of points within a given distance of its centre, and this definition can be extended from real numbers to complex numbers.[3] As is the case with computing with real numbers, computing with complex numbers involves uncertain data. Since an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers.[4] Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers.[4]

The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic.[4] It can be shown that, as with real interval arithmetic, there is no distributivity between addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers.[4] Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties of ordinary complex conjugates do not hold for complex interval conjugates.[4]
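One common concrete representation, sketched below under the same assumptions as the earlier Interval type, is the rectangular one: a complex interval is a pair of real intervals for the real and imaginary parts, and the ordinary complex formulas are evaluated with interval operations. The result always encloses every possible product, but it is generally wider than the exact image, which is one way the differences from ordinary complex arithmetic show up.

// Assumes the real Interval type and its operators from the sketch above.

// Rectangular complex interval: a box in the complex plane.
struct CInterval {
    Interval re, im;
};

CInterval operator+(CInterval a, CInterval b) {
    return {a.re + b.re, a.im + b.im};
}

CInterval operator*(CInterval a, CInterval b) {
    // (a + bi)(c + di) = (ac - bd) + (ad + bc)i, with interval operations;
    // this encloses all products of points in the two boxes.
    return {a.re * b.re - a.im * b.im,
            a.re * b.im + a.im * b.re};
}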

Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, at the expense of sacrificing other useful properties of ordinary arithmetic.[4]

The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.

The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e. up), or rounding towards negative infinity (i.e. down).
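These directed rounding modes are what make rigorous interval implementations possible on floating-point hardware: the lower endpoint is rounded downward and the upper endpoint upward, so the true result is never lost. The following standalone sketch (it defines its own small Interval type) uses the standard C++ <cfenv> facilities for interval addition; whether a given compiler actually honours run-time rounding-mode changes depends on build settings such as support for FENV_ACCESS, which is an assumption here.

#include <cfenv>

// Tell the compiler that the rounding mode is changed at run time
// (compiler support for this pragma varies).
#pragma STDC FENV_ACCESS ON

struct Interval { double lo, hi; };

// Outward-rounded addition: the lower bound is rounded toward -infinity and
// the upper bound toward +infinity, so the result always encloses the exact sum.
Interval add(Interval a, Interval b) {
    int old = std::fegetround();
    std::fesetround(FE_DOWNWARD);
    double lo = a.lo + b.lo;
    std::fesetround(FE_UPWARD);
    double hi = a.hi + b.hi;
    std::fesetround(old);
    return {lo, hi};
}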

The so-called dependency problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true for more complicated functions. If an interval occurs several times in a calculation and each occurrence is treated independently, this can lead to an unwanted expansion of the resulting intervals.
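A concrete illustration, again using the hypothetical Interval type sketched earlier: evaluating x − x, or the natural extension of x² − x, treats each occurrence of x as if it could vary independently, so the enclosure is much wider than the true range.

#include <iostream>
// Assumes the Interval type and its operators from the sketch above.

int main() {
    Interval x{1.0, 2.0};
    Interval d = x - x;        // gives [-1, 1] instead of the exact [0, 0]
    Interval g = x * x - x;    // gives [-1, 3]; the true range of x^2 - x on [1, 2] is [0, 2]
    std::cout << "[" << d.lo << ", " << d.hi << "] "
              << "[" << g.lo << ", " << g.hi << "]\n";
}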

The over-estimation of the value range caused by the dependency problem can go so far that the result covers an enormous range, preventing meaningful conclusions.

An additional increase in the range stems from solution sets that do not have the form of an interval vector. The solution set of a linear system with interval coefficients is generally not a box, so enclosing it in an interval vector introduces further over-estimation.

Figure: Reduction of the search area in the interval Newton step for "thick" functions

The method converges on all zeros in the starting region. Division by zero can lead to separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method.
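The following self-contained sketch illustrates the idea for f(x) = x² − 2 on the starting box [−3, 3]; the choice of function, tolerance and the bisection fallback when the derivative interval contains zero are all illustrative, and directed rounding is again omitted for brevity.

#include <algorithm>
#include <iostream>
#include <vector>

struct Interval { double lo, hi; };

Interval sub(Interval a, Interval b) { return {a.lo - b.hi, a.hi - b.lo}; }
Interval mul(Interval a, Interval b) {
    double p[] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}
bool intersect(Interval a, Interval b, Interval& out) {
    out = {std::max(a.lo, b.lo), std::min(a.hi, b.hi)};
    return out.lo <= out.hi;
}

// Natural extensions of f(x) = x^2 - 2 and its derivative f'(x) = 2x.
Interval F(Interval x)  { return sub(mul(x, x), Interval{2.0, 2.0}); }
Interval dF(Interval x) { return mul(Interval{2.0, 2.0}, x); }

// Interval Newton step with a bisection fallback; a zero may be reported as
// several adjacent tiny boxes.
void newton(Interval X, std::vector<Interval>& zeros) {
    Interval fx = F(X);
    if (fx.lo > 0.0 || fx.hi < 0.0) return;            // f has no zero in X
    if (X.hi - X.lo < 1e-10) { zeros.push_back(X); return; }
    Interval d = dF(X);
    double m = 0.5 * (X.lo + X.hi);
    if (d.lo <= 0.0 && 0.0 <= d.hi) {                  // derivative encloses 0: bisect
        newton({X.lo, m}, zeros);
        newton({m, X.hi}, zeros);
        return;
    }
    double fm = m * m - 2.0;                           // f(midpoint), treated as a thin interval
    Interval q{std::min(fm / d.lo, fm / d.hi), std::max(fm / d.lo, fm / d.hi)};
    Interval N{m - q.hi, m - q.lo};                    // Newton operator N(X) = m - f(m)/f'(X)
    Interval next;
    if (!intersect(N, X, next)) return;                // empty intersection: no zero in X
    if (next.hi - next.lo > 0.8 * (X.hi - X.lo)) {     // too little progress: bisect instead
        double mm = 0.5 * (next.lo + next.hi);
        newton({next.lo, mm}, zeros);
        newton({mm, next.hi}, zeros);
    } else {
        newton(next, zeros);
    }
}

int main() {
    std::vector<Interval> zeros;
    newton({-3.0, 3.0}, zeros);
    for (Interval z : zeros)                           // encloses -sqrt(2) and +sqrt(2)
        std::cout << "[" << z.lo << ", " << z.hi << "]\n";
}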

Figure: Rough estimate (turquoise) and improved estimates through "mincing" (red)

The various interval methods deliver conservative results, as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals.

With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
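The following standalone sketch of mincing (with its own minimal interval operations) uses the function f(x) = x·(1 − x) on [0, 1]: as the number of subintervals grows, the hull of the sub-results approaches the true range [0, 0.25].

#include <algorithm>
#include <iostream>

struct Interval { double lo, hi; };

Interval sub(Interval a, Interval b) { return {a.lo - b.hi, a.hi - b.lo}; }
Interval mul(Interval a, Interval b) {
    double p[] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}
Interval hull(Interval a, Interval b) {
    return {std::min(a.lo, b.lo), std::max(a.hi, b.hi)};
}

// Natural extension of f(x) = x * (1 - x); on a wide interval it overestimates.
Interval f(Interval x) { return mul(x, sub(Interval{1.0, 1.0}, x)); }

// Mincing: split [0, 1] into n equal subintervals, evaluate f on each piece,
// and take the hull of the results.
Interval minced_range(int n) {
    Interval result = f(Interval{0.0, 1.0 / n});
    for (int i = 1; i < n; ++i)
        result = hull(result, f(Interval{double(i) / n, double(i + 1) / n}));
    return result;
}

int main() {
    int ns[] = {1, 4, 16, 64};
    for (int n : ns) {
        Interval r = minced_range(n);
        std::cout << "n=" << n << ": [" << r.lo << ", " << r.hi << "]\n";
    }
}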

Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation or stability analysis) to treat estimates with no exact numerical value.[7]

Interval arithmetic is used in error analysis to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The width of the interval, that is, the distance between its boundaries, gives a direct bound on the accumulated rounding error.

Interval analysis supplements, rather than replaces, traditional methods for error reduction, such as pivoting.

Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely.[2]

The range of possible results can be found by interval methods. This provides an alternative to traditional propagation-of-error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, since other probability-based distributions are not considered.
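As a small illustrative sketch of parameters known only to within tolerances (the numbers and the self-contained interval division below are made up for illustration), consider the current through a resistor via Ohm's law, I = V / R, when both the supply voltage and the resistance carry tolerances.

#include <algorithm>
#include <iostream>
#include <stdexcept>

struct Interval { double lo, hi; };

// Interval division, defined here only when the divisor excludes zero.
Interval div(Interval a, Interval b) {
    if (b.lo <= 0.0 && 0.0 <= b.hi)
        throw std::domain_error("divisor interval contains zero");
    double p[] = {a.lo / b.lo, a.lo / b.hi, a.hi / b.lo, a.hi / b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}

int main() {
    Interval V{4.9, 5.1};      // hypothetical supply: 5 V with a 2% tolerance
    Interval R{95.0, 105.0};   // hypothetical resistor: 100 ohm with a 5% tolerance
    Interval I = div(V, R);    // every admissible combination yields a current in this range
    std::cout << "I in [" << I.lo << ", " << I.hi << "] A\n";   // about [0.0467, 0.0537]
}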

Warwick Tucker used interval arithmetic to solve the 14th of Smale's problems, that is, to show that the Lorenz attractor is a strange attractor.[9] Thomas Hales used interval arithmetic to prove the Kepler conjecture.

Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.

Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young.[10] Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer;[11] these intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga in 1958.[12]

The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966.[13][14] He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic.[15] Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.

Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals,[16] though Moore found the first non-trivial applications.

In the following twenty years, German groups of researchers around Ulrich W. Kulisch[1][17] and Götz Alefeld[18] carried out pioneering work at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimisation, including what is now known as Hansen's method, perhaps the most widely used interval algorithm.[6] Classical methods for this problem can often determine only a local optimum without being able to tell whether better values exist; Helmut Ratschek and Jon George Rokne addressed this by extending branch and bound methods, which until then had applied only to integer values, to continuous problems using intervals.

In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions for initial value problems using ordinary differential equations.[19]

The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimisation, has contributed significantly to the unification of notation and terminology used in interval arithmetic.[20]

In recent years work has concentrated in particular on the estimation of preimages of parameterised functions and on robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France.[21]

There are many software packages that permit the development of numerical applications using interval arithmetic.[22] These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.

Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran and Pascal.[23] The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library C-XSC supported many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal in order to conform to the improved C++ standard.

Another C++ class library, Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), was created in 1993 at the Hamburg University of Technology; it made the usual interval operations more user friendly. It emphasized the efficient use of hardware, portability, and independence from a particular representation of intervals.

The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language.[24]

The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.

Gaol[25] is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming.

The Moore library[26] is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the "concepts" feature of C++.

The Julia programming language[27] has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real- and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package.[28]

In addition, computer algebra systems such as FriCAS, Mathematica, Maple, Maxima[29] and MuPAD can handle intervals. The Matlab extension Intlab[30] builds on BLAS routines, and the Toolbox b4m provides a Profil/BIAS interface.[30][31] Moreover, the Euler Math Toolbox includes interval arithmetic.

A library for the functional language OCaml was written in assembly language and C.[32]

A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015.[33] Two reference implementations are freely available.[34] These have been developed by members of the standard's working group: the libieeep1788[35] library for C++ and the interval package[36] for GNU Octave.

A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed the production of implementations.[37]

Several international conferences and workshops take place every year around the world. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), and REC (International Workshop on Reliable Engineering Computing).