Modular arithmetic

In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.

A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Simple addition would result in 7 + 8 = 15, but clocks "wrap around" every 12 hours. Because the hour number starts over after it reaches 12, this is arithmetic modulo 12. In terms of the definition below, 15 is congruent to 3 modulo 12, so "15:00" on a 24-hour clock is displayed "3:00" on a 12-hour clock.
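
As a minimal illustration in C (the language used for the routines at the end of this section), the wrap-around is just a remainder operation; the variable names are arbitrary:

#include <stdio.h>

int main(void)
{
    int now = 7, elapsed = 8;
    /* Simple addition gives 15, but the clock wraps around every 12 hours.
       (A real 12-hour clock would display 12 instead of 0.) */
    int hour = (now + elapsed) % 12;   /* 15 mod 12 = 3 */
    printf("%d:00\n", hour);           /* prints 3:00 */
    return 0;
}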

Given an integer n > 1, called a modulus, two integers a and b are said to be congruent modulo n if n is a divisor of their difference (i.e., if there is an integer k such that a − b = kn).
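
Translated directly into C, the definition becomes a one-line test; the function name is_congruent is illustrative, not standard:

#include <stdbool.h>

/* Returns true when a ≡ b (mod n), i.e. when n divides the difference a - b.
   Testing the difference sidesteps the question of how % treats negative operands. */
bool is_congruent(long long a, long long b, long long n)
{
    return (a - b) % n == 0;
}

For instance, is_congruent(38, 14, 12) and is_congruent(-8, 7, 5) both return true, matching the examples below.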

Congruence modulo n is a congruence relation, meaning that it is an equivalence relation that is compatible with the operations of addition, subtraction, and multiplication. Congruence modulo n is denoted:

a ≡ b (mod n).

The congruence relation may be rewritten as

a = kn + b,

explicitly showing its relationship with Euclidean division. However, the b here need not be the remainder of the division of a by n. Instead, what the statement a ≡ b (mod n) asserts is that a and b have the same remainder when divided by n. That is,

a = pn + r and b = qn + r,

where 0 ≤ r < n is the common remainder. Subtracting these two expressions, we recover the previous relation:

a − b = kn,

by setting k = p − q.

For example, 38 ≡ 14 (mod 12) because 38 − 14 = 24, which is a multiple of 12. Another way to express this is to say that both 38 and 14 have the same remainder 2 when divided by 12.

The definition of congruence also applies to negative values. For example:

2 ≡ −3 (mod 5)
−8 ≡ 7 (mod 5)
−3 ≡ −8 (mod 5)

The congruence relation satisfies all the conditions of an equivalence relation:

Reflexivity: a ≡ a (mod n)
Symmetry: if a ≡ b (mod n), then b ≡ a (mod n)
Transitivity: if a ≡ b (mod n) and b ≡ c (mod n), then a ≡ c (mod n)

If a ≡ b (mod n), then it is generally false that k^a ≡ k^b (mod n). However, the following is true: if c ≡ d (mod φ(n)), where φ is Euler's totient function, then a^c ≡ a^d (mod n), provided that a is coprime with n.

In particular, if p is a prime number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse exists for every a that is not congruent to zero modulo p.
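
As a concrete illustration, such an inverse can be computed with the extended Euclidean algorithm. The sketch below assumes gcd(a, n) = 1, which, as noted above, always holds for 0 < a < p with p prime; the name inv_mod is illustrative:

#include <stdint.h>

/* Returns x with a*x ≡ 1 (mod n), assuming gcd(a, n) = 1 and n > 1.
   The loop maintains the invariant r ≡ x*a (mod n) for the current
   remainder r, which is the extended Euclidean algorithm. */
int64_t inv_mod(int64_t a, int64_t n)
{
    int64_t r0 = n, r1 = a % n;
    int64_t x0 = 0, x1 = 1;
    while (r1 != 0) {
        int64_t q = r0 / r1;
        int64_t t;
        t = r0 - q * r1; r0 = r1; r1 = t;   /* Euclidean step on the remainders */
        t = x0 - q * x1; x0 = x1; x1 = t;   /* same step on the coefficients of a */
    }
    /* Here r0 = gcd(a, n) = 1 and x0*a ≡ 1 (mod n); normalize to 0..n-1. */
    return x0 < 0 ? x0 + n : x0;
}

For example, inv_mod(3, 7) returns 5, since 3 · 5 = 15 ≡ 1 (mod 7).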

Some of the more advanced properties of congruence relations are the following:

Fermat's little theorem: if p is prime and does not divide a, then a^(p − 1) ≡ 1 (mod p).
Euler's theorem: if a and n are coprime, then a^φ(n) ≡ 1 (mod n), where φ is Euler's totient function.
Wilson's theorem: p is prime if and only if (p − 1)! ≡ −1 (mod p).
Chinese remainder theorem: if m and n are coprime, then for any integers a and b there is a unique x modulo mn with x ≡ a (mod m) and x ≡ b (mod n).

Each residue class modulo n may be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class[2] (since this is the proper remainder which results from division). Any two members of different residue classes modulo n are incongruent modulo n. Furthermore, every integer belongs to one and only one residue class modulo n.[3]

Addition, subtraction, and multiplication of residue classes are defined by applying the corresponding integer operation to any representatives of the classes. The verification that this is a proper definition, that is, that the result does not depend on the chosen representatives, uses the properties given before.
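
A small C spot check (not a proof) of this well-definedness: modulo 7, replacing the representatives 3 and 5 with 10 and 12 from the same classes does not change the resulting class:

#include <assert.h>

int main(void)
{
    int n = 7;
    /* 3 ≡ 10 (mod 7) and 5 ≡ 12 (mod 7), so sums and products of the
       representatives land in the same residue class. */
    assert((3 + 5) % n == (10 + 12) % n);   /* both are in the class of 1 */
    assert((3 * 5) % n == (10 * 12) % n);   /* both are in the class of 1 */
    return 0;
}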

In theoretical mathematics, modular arithmetic is one of the foundations of number theory, touching on almost every aspect of its study, and it is also used extensively in group theory, ring theory, knot theory, and abstract algebra. In applied mathematics, it is used in computer algebra, cryptography, computer science, chemistry and the visual and musical arts.

A very practical application is to calculate checksums within serial number identifiers. For example, the International Standard Book Number (ISBN) uses modulo 11 (for 10-digit ISBNs) or modulo 10 (for 13-digit ISBNs) arithmetic for error detection. Likewise, International Bank Account Numbers (IBANs) make use of modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of the CAS registry number (a unique identifying number for each chemical compound) is a check digit, calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3, etc., adding these up, and computing the sum modulo 10.
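
As an illustration of such a checksum, the sketch below computes the ISBN-10 check digit: the rule is that the weighted sum of all ten digits must be divisible by 11, and since 10 ≡ −1 (mod 11) the check digit equals the weighted sum of the first nine digits modulo 11. The function name is hypothetical:

#include <stdio.h>

/* Returns the ISBN-10 check digit (0-9, or 10, which is printed as 'X')
   for the first nine digits of the number. */
int isbn10_check_digit(const int digits[9])
{
    int sum = 0;
    for (int i = 0; i < 9; i++)
        sum += (i + 1) * digits[i];   /* weights 1, 2, ..., 9 */
    return sum % 11;
}

int main(void)
{
    int d[9] = {0, 3, 0, 6, 4, 0, 6, 1, 5};   /* ISBN 0-306-40615-? */
    int c = isbn10_check_digit(d);
    if (c == 10)
        printf("X\n");
    else
        printf("%d\n", c);   /* prints 2, the published check digit */
    return 0;
}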

In cryptography, modular arithmetic directly underpins public key systems such as RSA and Diffie–Hellman, provides the finite fields that underlie elliptic curves, and is used in a variety of symmetric key algorithms including the Advanced Encryption Standard (AES), the International Data Encryption Algorithm (IDEA), and RC4. RSA and Diffie–Hellman use modular exponentiation.

In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used in polynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations of polynomial greatest common divisor, exact linear algebra and Gröbner basis algorithms over the integers and the rational numbers. As posted on Fidonet in the 1980s and archived at Rosetta Code, modular arithmetic was used to disprove Euler's sum of powers conjecture on a Sinclair QL microcomputer using just one-fourth of the integer precision used by a CDC 6600 supercomputer to disprove it two decades earlier via a brute force search.[9]

In computer science, modular arithmetic is often applied in bitwise operations and other operations involving fixed-width, cyclic data structures. The modulo operation, as implemented in many programming languages and calculators, is an application of modular arithmetic that is often used in this context. The logical operator XOR sums 2 bits, modulo 2.
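
Both observations can be checked directly in C: arithmetic on a fixed-width unsigned type is arithmetic modulo a power of two (which the C standard guarantees for unsigned types), and XOR on single bits coincides with addition modulo 2:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* A 32-bit unsigned integer wraps around modulo 2^32. */
    uint32_t x = 4294967295u;   /* 2^32 - 1 */
    assert((uint32_t)(x + 1) == 0);

    /* XOR of two bits equals their sum modulo 2. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            assert((a ^ b) == (a + b) % 2);
    return 0;
}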

In music, arithmetic modulo 12 is used in the consideration of the system of twelve-tone equal temperament, where octave and enharmonic equivalency occurs (that is, pitches in a 1∶2 or 2∶1 ratio are equivalent, and C-sharp is considered the same as D-flat).

The method of casting out nines offers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9).
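
A sketch of the check in C: because 10 ≡ 1 (mod 9), every nonnegative number is congruent modulo 9 to the sum of its decimal digits, so the two sides of a computation can be compared modulo 9. A mismatch proves an error, although a match does not guarantee correctness:

#include <assert.h>

/* Sum of decimal digits, reduced modulo 9 (equals n mod 9 for n >= 0). */
int digit_sum_mod9(long long n)
{
    int s = 0;
    for (; n > 0; n /= 10)
        s = (s + (int)(n % 10)) % 9;
    return s;
}

int main(void)
{
    /* Check the hand computation 12345 * 6789 = 83810205 by casting out nines. */
    long long a = 12345, b = 6789, claimed = 83810205LL;
    assert(digit_sum_mod9(a) * digit_sum_mod9(b) % 9 == digit_sum_mod9(claimed));
    return 0;
}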

Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular, Zeller's congruence and the Doomsday algorithm make heavy use of modulo-7 arithmetic.
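
A sketch of Zeller's congruence for the Gregorian calendar in C; January and February are counted as months 13 and 14 of the previous year, and the result is taken modulo 7:

#include <stdio.h>

/* Day of the week for a Gregorian date: 0 = Saturday, 1 = Sunday, ..., 6 = Friday. */
int day_of_week(int year, int month, int day)
{
    if (month < 3) {          /* treat January and February as months 13 and 14 */
        month += 12;
        year -= 1;
    }
    int k = year % 100;       /* year within the century */
    int j = year / 100;       /* zero-based century */
    return (day + 13 * (month + 1) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
}

int main(void)
{
    printf("%d\n", day_of_week(2000, 1, 1));   /* prints 0: 1 January 2000 was a Saturday */
    return 0;
}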

More generally, modular arithmetic also has application in disciplines such as law (e.g., apportionment), economics (e.g., game theory) and other areas of the social sciences, where proportional division and allocation of resources play a central part in the analysis.

Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved in polynomial time with a form of Gaussian elimination; for details, see the linear congruence theorem. Algorithms, such as Montgomery reduction, also exist to allow simple arithmetic operations, such as multiplication and exponentiation modulo n, to be performed efficiently on large numbers.
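
As a small illustration of the linear congruence theorem mentioned above, the sketch below solves a single congruence ax ≡ b (mod n) with the extended Euclidean algorithm; a solution exists exactly when gcd(a, n) divides b. The function names are illustrative, and the moduli are assumed small enough that intermediate products fit in 64 bits:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Extended Euclidean algorithm: returns g = gcd(a, b) and sets *x, *y
   so that a*x + b*y = g. */
static int64_t ext_gcd(int64_t a, int64_t b, int64_t *x, int64_t *y)
{
    if (b == 0) { *x = 1; *y = 0; return a; }
    int64_t x1, y1;
    int64_t g = ext_gcd(b, a % b, &x1, &y1);
    *x = y1;
    *y = x1 - (a / b) * y1;
    return g;
}

/* Finds the smallest nonnegative x with a*x ≡ b (mod n), if one exists.
   The full solution set is x, x + n/g, ..., x + (g-1)*n/g modulo n. */
bool solve_linear_congruence(int64_t a, int64_t b, int64_t n, int64_t *x)
{
    int64_t u, v;
    int64_t g = ext_gcd(a, n, &u, &v);   /* a*u + n*v = g */
    if (b % g != 0)
        return false;                    /* no solution */
    int64_t m = n / g;
    int64_t sol = ((b / g) % m) * (u % m) % m;   /* x ≡ (b/g) * u (mod n/g) */
    if (sol < 0)
        sol += m;
    *x = sol;
    return true;
}

int main(void)
{
    int64_t x;
    if (solve_linear_congruence(14, 30, 100, &x))
        printf("%lld\n", (long long)x);  /* prints 45: 14 * 45 = 630 ≡ 30 (mod 100) */
    return 0;
}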

Some operations, like finding a discrete logarithm or solving a quadratic congruence, appear to be as hard as integer factorization and thus are a starting point for cryptographic algorithms and encryption. These problems might be NP-intermediate.

Solving a system of non-linear modular arithmetic equations is NP-complete.[10]

Below are three reasonably fast C functions, two for performing modular multiplication and one for modular exponentiation on unsigned integers not larger than 63 bits, without overflow in the intermediate operations.
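
A sketch of such a modular multiplication routine, using the binary (double-and-add) method so that every intermediate value stays below 2^64 for moduli up to 63 bits:

#include <stdint.h>

/* Computes a * b (mod m) for m < 2^63 without overflowing uint64_t.
   The product is accumulated bit by bit: for each set bit of b, the
   current power-of-two multiple of a is added, and everything is
   reduced modulo m as it goes. */
uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m)
{
    uint64_t result = 0;
    a %= m;
    while (b > 0) {
        if (b & 1)
            result = (result + a) % m;   /* both terms are below m < 2^63 */
        a = (a + a) % m;                 /* double a; the sum stays below 2^64 */
        b >>= 1;
    }
    return result;
}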

On computer architectures where an extended precision format with at least 64 bits of mantissa is available (such as the long double type of most x86 C compilers), the following routine can be faster, by employing the trick that, by hardware, floating-point multiplication keeps the most significant bits of the product, while integer multiplication keeps the least significant bits:
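
A sketch of that approach; as an extra safety assumption not stated above, this version restricts the modulus to below 2^62 so that a possible off-by-one error in the floating-point quotient estimate is corrected unambiguously by the final sign and range checks:

#include <stdint.h>

/* Computes a * b (mod m), assuming m < 2^62 and a long double with at
   least a 64-bit mantissa. The floating-point product keeps the most
   significant bits, giving an estimate q of the quotient a*b/m; the
   64-bit integer products keep the least significant bits, so
   a*b - q*m modulo 2^64 is the remainder up to one multiple of m. */
uint64_t mul_mod_fp(uint64_t a, uint64_t b, uint64_t m)
{
    a %= m;
    b %= m;
    uint64_t q = (uint64_t)((long double)a * (long double)b / (long double)m);
    uint64_t r = a * b - q * m;        /* exact modulo 2^64 */
    if ((int64_t)r < 0)                /* q was one too large */
        r += m;
    else if (r >= m)                   /* q was one too small */
        r -= m;
    return r;
}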

Below is a C function for performing modular exponentiation that uses the mul_mod function implemented above.
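
A sketch using binary (square-and-multiply) exponentiation, built on the mul_mod sketch above so that no intermediate product overflows:

#include <stdint.h>

uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m);   /* as above */

/* Computes b^e (mod m) by scanning the bits of the exponent:
   the base is squared at each step and multiplied into the result
   whenever the corresponding bit of e is set. */
uint64_t pow_mod(uint64_t b, uint64_t e, uint64_t m)
{
    if (m == 1)
        return 0;
    uint64_t result = 1;
    b %= m;
    while (e > 0) {
        if (e & 1)
            result = mul_mod(result, b, m);
        b = mul_mod(b, b, m);
        e >>= 1;
    }
    return result;
}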