# Factorial

In mathematics, the factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n:

n! = n × (n − 1) × (n − 2) × ⋯ × 2 × 1.

The value of 0! is 1, according to the convention for an empty product.[1]

The factorial operation is encountered in many areas of mathematics, notably in combinatorics, algebra, and mathematical analysis. Its most basic use counts the possible distinct sequences – the permutations – of n distinct objects: there are n! of them.

The factorial function can also be extended to non-integer arguments while retaining its most important properties by defining x! = Γ(x + 1), where Γ is the gamma function; this is undefined when x is a negative integer.

The use of factorials is documented since the Talmudic period (200 to 500 CE), one of the earliest examples being the Hebrew Book of Creation Sefer Yetzirah which lists factorials as a means of counting permutations.[2] Indian scholars have been using factorial formulas since at least the 12th century.[3] Siddhānta Shiromani by Bhāskara II (c. 1114–1185) mentioned factorials for permutations in Volume I, the Līlāvatī. Fabian Stedman later described factorials as applied to change ringing, a musical art involving the ringing of several tuned bells.[4] After describing a recursive approach, Stedman gives a statement of a factorial (using the language of the original):

Now the nature of these methods is such, that the changes on one number comprehends [includes] the changes on all lesser numbers ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body.[5]

The notation n! was introduced by the French mathematician Christian Kramp in 1808.[6]

Although the factorial function has its roots in combinatorics, formulas involving factorials occur in many areas of mathematics.

Most approximations for n! are based on approximating its natural logarithm

ln n! = ln 1 + ln 2 + ⋯ + ln n = ∑_{x=1}^{n} ln x.

The graph of the function f(n) = ln n! is shown in the figure on the right. It looks approximately linear for all reasonable values of n, but this intuition is false. We get one of the simplest approximations for ln n! by bounding the sum with an integral from above and below as follows:

∫₁ⁿ ln x dx ≤ ln n! ≤ ∫₁ⁿ⁺¹ ln x dx,

which gives us the estimate n ln n − n + 1 ≤ ln n! ≤ (n + 1) ln(n + 1) − n.

Hence ln n! ∼ n ln n (see Big O notation). This result plays a key role in the analysis of the computational complexity of sorting algorithms (see comparison sort). From the bounds on ln n! deduced above, by exponentiating, we get that

e (n/e)^n ≤ n! ≤ e ((n + 1)/e)^(n+1).

It is sometimes practical to use weaker but simpler estimates. Using the above bounds it is easily shown that for all n we have (n/3)^n < n!, and for all n ≥ 6 we have n! < (n/2)^n.
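These elementary bounds can be checked numerically; the sketch below (a minimal illustration, not part of the original text) verifies them over a sample range, capped so that the float powers stay in range:

```python
import math

# Check the elementary bounds quoted above:
# (n/3)^n < n! for all n, and n! < (n/2)^n for n >= 6.
# The range is capped so (n/2)**n stays within float range.
for n in range(1, 150):
    f = math.factorial(n)
    assert (n / 3) ** n < f
    if n >= 6:
        assert f < (n / 2) ** n
print("bounds hold for all n < 150")
```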

For large n we get a better estimate for the number n! using Stirling's approximation:

n! ∼ √(2πn) (n/e)^n.
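A short sketch comparing Stirling's approximation to the exact value (the function name is illustrative); the relative error shrinks roughly like 1/(12n):

```python
import math

def stirling(n: int) -> float:
    """Stirling's approximation: sqrt(2*pi*n) * (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 50):
    exact = math.factorial(n)
    rel_err = (exact - stirling(n)) / exact
    print(n, rel_err)  # relative error decreases as n grows
```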

This in fact comes from an asymptotic series for the logarithm, and n! lies between this and the next approximation:

√(2πn) (n/e)^n < n! < √(2πn) (n/e)^n e^(1/(12n)).

Another approximation for ln n! was given by Srinivasa Ramanujan (Ramanujan 1988):

ln n! ≈ n ln n − n + (1/6) ln(n(1 + 4n(1 + 2n))) + (1/2) ln π.

Both this and Stirling's approximation give a relative error on the order of 1/n^3, but Ramanujan's is about four times more accurate. However, if we use two correction terms in a Stirling-type approximation, as with Ramanujan's approximation, the relative error will be of order 1/n^5:[13]

If efficiency is not a concern, computing factorials is trivial from an algorithmic point of view: successively multiplying a variable initialized to 1 by the integers up to n (if any) will compute n!, provided the result fits in the variable. In functional languages, the recursive definition is often implemented directly to illustrate recursive functions.
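The two textbook approaches described above can be sketched as follows (a minimal illustration with hypothetical function names; Python's arbitrary-precision integers sidestep the overflow concerns discussed below):

```python
def factorial_iter(n: int) -> int:
    """Successively multiply an accumulator initialized to 1 by 2..n."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_rec(n: int) -> int:
    """The recursive definition, as often written in functional languages."""
    return 1 if n <= 1 else n * factorial_rec(n - 1)

print(factorial_iter(10))  # 3628800
```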

The main practical difficulty in computing factorials is the size of the result. To ensure that the exact result will fit for all legal values of even the smallest commonly used integral type (8-bit signed integers) would require more than 700 bits, so no reasonable specification of a factorial function using fixed-size types can avoid questions of overflow. The values 12! and 20! are the largest factorials that can be stored in, respectively, the 32-bit and 64-bit integers commonly used in personal computers; however, many languages support variable-length integer types capable of representing very large values.[14] Floating-point representation of an approximated result allows going a bit further, but this also remains quite limited by possible overflow. Most calculators use scientific notation with 2-digit decimal exponents, and the largest factorial that fits is then 69!, because 69! < 10^100 < 70!. Other implementations (such as spreadsheet programs) can often handle larger values.
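The overflow limits quoted above are easy to confirm directly (a quick check, not part of the original text):

```python
import math

# 12! is the largest factorial fitting a signed 32-bit integer,
# 20! the largest fitting a signed 64-bit integer, and 69! the
# largest below a calculator's 10^100 ceiling.
assert math.factorial(12) < 2**31 - 1 < math.factorial(13)
assert math.factorial(20) < 2**63 - 1 < math.factorial(21)
assert math.factorial(69) < 10**100 < math.factorial(70)
print("limits confirmed")
```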

Most software applications will compute small factorials by direct multiplication or table lookup. Larger factorial values can be approximated using Stirling's formula. Wolfram Alpha can calculate exact results for the ceiling function and floor function applied to the binary, natural and common logarithm of n! for values of n up to 249999, and up to 20000000! for the integers.

If the exact values of large factorials are needed, they can be computed using arbitrary-precision arithmetic. Instead of doing the sequential multiplications ((1 × 2) × 3) × 4..., a program can partition the sequence into two parts, whose products are roughly the same size, and multiply them using a divide-and-conquer method. This is often more efficient.[15]
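The divide-and-conquer scheme can be sketched as below (function names are illustrative; the split at the midpoint of the range is one simple choice that keeps the two partial products of roughly equal size, which is what lets fast big-integer multiplication pay off):

```python
import math

def prod_range(lo: int, hi: int) -> int:
    """Product of the integers lo..hi, split recursively at the midpoint."""
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return prod_range(lo, mid) * prod_range(mid + 1, hi)

def factorial_dc(n: int) -> int:
    return prod_range(2, n) if n >= 2 else 1

assert factorial_dc(100) == math.factorial(100)
```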

The asymptotically best efficiency is obtained by computing n! from its prime factorization. As documented by Peter Borwein, prime factorization allows n! to be computed in time O(n (log n log log n)^2), provided that a fast multiplication algorithm is used (for example, the Schönhage–Strassen algorithm).[16] Peter Luschny presents source code and benchmarks for several efficient factorial algorithms, with or without the use of a prime sieve.[17]

Factorials have many applications in number theory. In particular, n! is necessarily divisible by all prime numbers up to and including n. As a consequence, n > 5 is a composite number if and only if

(n − 1)! ≡ 0 (mod n).

Legendre's formula gives the multiplicity of the prime p occurring in the prime factorization of n! as

∑_{i≥1} ⌊n/p^i⌋,

where only finitely many terms are nonzero, since p^i > n for all sufficiently large i.
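Legendre's formula translates directly into a short routine (the function name is illustrative):

```python
def prime_multiplicity(n: int, p: int) -> int:
    """Exponent of the prime p in n!, via Legendre's formula:
    sum of floor(n / p^i) over i >= 1 until p^i exceeds n."""
    count, power = 0, p
    while power <= n:
        count += n // power
        power *= p
    return count

# Exponent of 5 in 100!: floor(100/5) + floor(100/25) = 20 + 4
print(prime_multiplicity(100, 5))  # 24
```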

Adding 1 to a factorial n! yields a number that is only divisible by primes that are larger than n. This fact can be used to prove Euclid's theorem that the number of primes is infinite.[20] Primes of the form n! ± 1 are called factorial primes.

The reciprocals of factorials produce a convergent series whose sum is the exponential base e:

∑_{n=0}^{∞} 1/n! = 1/1 + 1/1 + 1/2 + 1/6 + ⋯ = e.
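The series converges very quickly; a handful of terms already reproduces e to double precision (a quick numerical illustration):

```python
import math

# Partial sum of the reciprocal-factorial series; 20 terms suffice
# to agree with math.e to near machine precision.
s = sum(1 / math.factorial(n) for n in range(20))
print(abs(s - math.e))  # tiny: the tail beyond n=19 is below 1/20!
```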

Although the sum of this series is an irrational number, it is possible to multiply the factorials by positive integers to produce a convergent series with a rational sum:

The gamma function interpolates the factorial function to non-integer values. The main clue is the recurrence relation generalized to a continuous domain.

Besides nonnegative integers, the factorial can also be defined for non-integer values, but this requires more advanced tools from mathematical analysis.

One function that fills in the values of the factorial (but with a shift of 1 in the argument), that is often used, is called the gamma function, denoted Γ(z). It is defined for all complex numbers z except for the non-positive integers, and given when the real part of z is positive by

Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt.

Its relation to the factorial is that n! = Γ(n + 1) for every nonnegative integer n.

Carl Friedrich Gauss used the notation Π(z) to denote the same function, but with argument shifted by 1, so that it agrees with the factorial for nonnegative integers. This pi function is defined by

Π(z) = ∫₀^∞ t^z e^(−t) dt.

The pi function and gamma function are related by the formula Π(z) = Γ(z + 1). Likewise, Π(n) = n! for any nonnegative integer n.

In addition to this, the pi function satisfies the same recurrence as factorials do, but at every complex value z where it is defined:

Π(z) = z Π(z − 1).

This is no longer a recurrence relation but a functional equation. In terms of the gamma function, it is

Γ(z + 1) = z Γ(z).

The values of these functions at half-integer arguments are therefore determined by a single one of them:

Γ(1/2) = Π(−1/2) = √π.

The pi function is certainly not the only way to extend factorials to a function defined at almost all complex values, and not even the only one that is analytic wherever it is defined. Nonetheless it is usually considered the most natural way to extend the values of the factorials to a complex function. For instance, the Bohr–Mollerup theorem states that the gamma function is the only function that takes the value 1 at 1, satisfies the functional equation Γ(n + 1) = nΓ(n), is meromorphic on the complex numbers, and is log-convex on the positive real axis. A similar statement holds for the pi function as well, using the Π(n) = nΠ(n − 1) functional equation.

However, there exist complex functions that are arguably simpler in the sense of analytic function theory and which interpolate the factorial values. One example is Hadamard's gamma function (Hadamard 1894), which, unlike the gamma function, is an entire function.[22]

Euler also developed a convergent product approximation for the non-integer factorials, which can be seen to be equivalent to the formula for the gamma function above:

However, this formula does not provide a practical means of computing the pi function or the gamma function, as its rate of convergence is slow.

Representation through the gamma function allows evaluation of the factorial for complex arguments. Equilines of the amplitude and phase of the factorial are shown in the figure. Let

f = ρ e^(iφ) = (x + iy)!.

Several levels of constant modulus (amplitude) ρ and constant phase φ are shown. The grid covers the range −3 ≤ x ≤ 3, −2 ≤ y ≤ 2, with unit steps, and the dashed line shows the level φ = ±π.

Thin lines show intermediate levels of constant modulus and constant phase. At the poles at the negative integers, phase and amplitude are not defined, and equilines are dense in the vicinity of these singularities.

The factorial has the Taylor series expansion

ln z! = −γz + ∑_{k=2}^{∞} (−1)^k ζ(k) z^k / k, valid for |z| < 1,

where γ is the Euler–Mascheroni constant and ζ is the Riemann zeta function. Computer algebra systems such as SageMath can generate many terms of this expansion.

For large values of the argument, the factorial can be approximated through the integral of the digamma function, using the continued fraction representation. This approach is due to T. J. Stieltjes (1894).[citation needed] Writing z! = e^(P(z)), where P(z) is

There is a misconception that ln z! = P(z) or ln Γ(z + 1) = P(z) for any complex z ≠ 0.[citation needed] Indeed, the relation through the logarithm is valid only for a specific range of values of z in the vicinity of the real axis, where −π < Im(ln Γ(z + 1)) < π. The larger the real part of the argument, the smaller the imaginary part should be. However, the inverse relation, z! = e^(P(z)), is valid for the whole complex plane apart from z = 0. The convergence is poor in the vicinity of the negative part of the real axis;[citation needed] it is difficult to have good convergence of any approximation in the vicinity of the singularities. When |Im z| > 2 or Re z > 2, the six coefficients above are sufficient for evaluating the factorial to complex double precision. For higher precision, more coefficients can be computed by a rational QD scheme (Rutishauser's QD algorithm).[24]

The relation n! = n × (n − 1)! allows one to compute the factorial for an integer given the factorial for a smaller integer. The relation can be inverted so that one can compute the factorial for an integer given the factorial for a larger integer:

(n − 1)! = n!/n.

However, this recursion does not permit us to compute the factorial of a negative integer; use of the formula to compute (−1)! would require a division of a nonzero value by zero, and thus blocks us from computing a factorial value for any negative integer. Similarly, the gamma function is not defined for zero or negative integers, though it is defined for all other complex numbers.
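The downward recursion can be sketched as follows (a hypothetical helper, not from the original text; note that stepping from 0! to (−1)! would require dividing by zero, exactly as described above):

```python
def factorial_down(n: int, known_n: int, known_fact: int) -> int:
    """Recover n! from a known larger factorial known_n! = known_fact
    by repeatedly applying (k - 1)! = k! / k. Requires 0 <= n <= known_n."""
    value = known_fact
    for k in range(known_n, n, -1):
        value //= k  # exact integer division at every step
    return value

print(factorial_down(3, 6, 720))  # 3! = 6
```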

There are several other integer sequences similar to the factorial that are used in mathematics:

The product of all the odd integers up to some odd positive integer n is called the double factorial of n, and denoted by n!!.[26] That is,

n!! = n × (n − 2) × (n − 4) × ⋯ × 3 × 1.
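A minimal sketch of this definition, using the recurrence n!! = n × (n − 2)!! (the function name is illustrative):

```python
def double_factorial(n: int) -> int:
    """n!! = n * (n-2) * (n-4) * ... down to 1 (odd n) or 2 (even n)."""
    return 1 if n <= 1 else n * double_factorial(n - 2)

print(double_factorial(9))  # 9*7*5*3*1 = 945
```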

Double factorial notation may be used to simplify the expression of certain trigonometric integrals,[27] to provide an expression for the values of the gamma function at half-integer arguments and the volume of hyperspheres,[28] and to solve many counting problems in combinatorics including counting binary trees with labeled leaves and perfect matchings in complete graphs.[26][29]

A common related notation is to use multiple exclamation points to denote a multifactorial, the product of integers in steps of two (n!!), three (n!!!), or more (see generalizations of the double factorial). The double factorial is the most commonly used variant, but one can similarly define the triple factorial (n!!!) and so on. One can define the k-tuple factorial, denoted by n!(k), recursively for positive integers as

For sufficiently large n ≥ 1, the ordinary single factorial function is expanded through the multifactorial functions as follows:

In the same way that n! is not defined for negative integers, and n!! is not defined for negative even integers, n!(k) is not defined for negative integers divisible by k.

The primorial of a natural number n (sequence in the OEIS), denoted n#, is similar to the factorial, but with the product taken only over the prime numbers less than or equal to n. That is,

n# = ∏ p,

where p ranges over the prime numbers less than or equal to n. For example, the primorial of 11 is

11# = 2 × 3 × 5 × 7 × 11 = 2310.
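A short sketch of the primorial (function names are illustrative; the trial-division primality check is adequate only for small n):

```python
def is_prime(k: int) -> bool:
    """Naive trial-division primality test, fine for small k."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def primorial(n: int) -> int:
    """Product of all primes <= n."""
    result = 1
    for p in range(2, n + 1):
        if is_prime(p):
            result *= p
    return result

print(primorial(11))  # 2*3*5*7*11 = 2310
```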

Neil Sloane and Simon Plouffe defined a superfactorial in The Encyclopedia of Integer Sequences (Academic Press, 1995) to be the product of the first n factorials. So the superfactorial of 4 is

sf(4) = 1! × 2! × 3! × 4! = 288.
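This definition is a one-liner over the first n factorials (a minimal sketch; the function name is illustrative):

```python
import math

def superfactorial(n: int) -> int:
    """Sloane-Plouffe superfactorial: product of 1!, 2!, ..., n!."""
    result = 1
    for k in range(1, n + 1):
        result *= math.factorial(k)
    return result

print(superfactorial(4))  # 1 * 2 * 6 * 24 = 288
```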

By this definition, we can define the k-superfactorial of n (denoted sf_k(n)) as:

In his 1995 book Keys to Infinity, Clifford Pickover defined a different function n$ that he called the superfactorial. It is defined by

(Here, as is usual for compound exponentiation, the grouping is understood to be from right to left: a^b^c = a^(b^c).)

Occasionally the hyperfactorial of n is considered. It is written as H(n) and defined by

H(n) = ∏_{k=1}^{n} k^k = 1^1 · 2^2 · ⋯ · n^n.

For n = 1, 2, 3, 4,... the values of H(n) are 1, 4, 108, 27648,... (sequence in the OEIS).
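The listed values can be reproduced directly from the definition (a minimal sketch; the function name is illustrative):

```python
def hyperfactorial(n: int) -> int:
    """H(n) = 1^1 * 2^2 * ... * n^n."""
    result = 1
    for k in range(1, n + 1):
        result *= k ** k
    return result

print([hyperfactorial(n) for n in range(1, 5)])  # [1, 4, 108, 27648]
```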

The hyperfactorial satisfies the asymptotic formula H(n) ∼ A n^((6n² + 6n + 1)/12) e^(−n²/4), where A = 1.2824... is the Glaisher–Kinkelin constant.[30] H(14) ≈ 1.8474×10^99 is already almost equal to a googol, and H(15) ≈ 8.0896×10^116 is almost of the same magnitude as the Shannon number, the theoretical number of possible chess games. Compared to the Pickover definition of the superfactorial, the hyperfactorial grows relatively slowly.

The hyperfactorial function can be generalized to complex numbers in a similar way as the factorial function. The resulting function is called the K-function.