Natural number

Natural numbers can be used for counting (one apple, two apples, three apples, ...)

In mathematics, the natural numbers are those numbers used for counting (as in "there are six coins on the table") and ordering (as in "this is the third largest city in the country"). In common mathematical terminology, words colloquially used for counting are "cardinal numbers", and words used for ordering are "ordinal numbers". The natural numbers can, at times, appear as a convenient set of codes (labels or "names"), that is, as what linguists call nominal numbers, forgoing many or all of the properties of being a number in a mathematical sense.[1][2]

Some definitions, including the standard ISO 80000-2,[3][a] begin the natural numbers with 0, corresponding to the non-negative integers 0, 1, 2, 3, ..., whereas others start with 1, corresponding to the positive integers 1, 2, 3, ...[4][b] Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, while in other writings, that term is used instead for the integers (including negative integers).[5]

Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics.

In common language, particularly in primary school education, natural numbers may be called counting numbers[6] to intuitively exclude the negative integers and zero, and also to contrast the discreteness of counting to the continuity of measurement—a hallmark characteristic of real numbers.

The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess or shortage—by striking out a mark and removing an object from the set.

The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BCE and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10, using base sixty, so that the symbol for sixty was the same as the symbol for one—its value being determined from context.[10]

A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation (within other numbers) dates back as early as 700 BCE by the Babylonians, who omitted such a digit when it would have been the last symbol in the number.[e] The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BCE, but this usage did not spread beyond Mesoamerica.[12][13] The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628 CE. However, 0 had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525 CE, without being denoted by a numeral (standard Roman numerals do not have a symbol for 0). Instead, nulla (or the genitive form nullae) from nullus, the Latin word for "none", was employed to denote a 0 value.[14]

The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes. Some Greek mathematicians treated the number 1 differently than larger numbers, sometimes even not as a number at all.[f] Euclid, for example, defined a unit first and then a number as a multitude of units, thus by his definition, a unit is not a number and there are no unique numbers (e.g., any two units from indefinitely many units is a 2).[16]

Independent studies on numbers also occurred at around the same time in India, China, and Mesoamerica.[17]

In 19th-century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized his belief as "God made the integers, all else is the work of man".[g]

In opposition to the Naturalists, the constructivists saw a need to improve upon the logical rigor in the foundations of mathematics.[h] In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural—but a consequence of definitions. Later, two classes of such formal definitions were constructed; later still, they were shown to be equivalent in most practical applications.

Set-theoretical definitions of natural numbers were initiated by Frege. He initially defined a natural number as the class of all sets that are in one-to-one correspondence with a particular set. However, this definition turned out to lead to paradoxes, including Russell's paradox. To avoid such paradoxes, the formalism was modified so that a natural number is defined as a particular set, and any set that can be put into one-to-one correspondence with that set is said to have that number of elements.[20]

The second class of definitions was introduced by Charles Sanders Peirce, refined by Richard Dedekind, and further explored by Giuseppe Peano; this approach is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation. Theorems that can be proved in ZFC but cannot be proved using the Peano Axioms include Goodstein's theorem.[21]

With all these definitions, it is convenient to include 0 (corresponding to the empty set) as a natural number. Including 0 is now the common convention among set theorists[22] and logicians.[23] Other mathematicians also include 0,[a] and computer languages often start from zero when enumerating items like loop counters and string- or array-elements.[24][25] On the other hand, many mathematicians have kept the older tradition to take 1 to be the first natural number.[26]
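
As a trivial illustration of this zero-based convention (an example added here, not taken from the original text), loop counters and list indices in Python start at 0:

```python
letters = ["a", "b", "c"]
for i in range(len(letters)):   # the loop counter runs 0, 1, 2
    print(i, letters[i])        # the first element has index 0
```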

Since different properties are customarily associated with the tokens 0 and 1 (e.g., neutral elements for addition and multiplication, respectively), it is important to know which version of natural numbers is employed in the case under consideration. This can be done by explanation in prose, by explicitly writing down the set, or by qualifying the generic identifier with a super- or subscript,[3][29] for example, writing ℕ₀ = {0, 1, 2, ...} for the natural numbers with zero and ℕ* (or ℕ⁺, ℕ₁) = {1, 2, 3, ...} for the natural numbers without zero.

Given the set ℕ of natural numbers and the successor function S sending each natural number to the next one, addition is defined recursively by a + 0 = a and a + S(b) = S(a + b) for all a, b; multiplication is defined by a × 0 = 0 and a × S(b) = (a × b) + a. If 1 is defined as S(0), then b + 1 = b + S(0) = S(b + 0) = S(b). That is, b + 1 is simply the successor of b.
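
As an informal illustration (not part of the formal development above), these recursive definitions can be sketched in Python, representing natural numbers by the built-in int type; the names succ, add, and mul are chosen here only for the sketch:

```python
def succ(n):
    """Successor function S(n)."""
    return n + 1

def add(a, b):
    """Recursive addition: a + 0 = a, a + S(b) = S(a + b)."""
    if b == 0:
        return a
    return succ(add(a, b - 1))

def mul(a, b):
    """Recursive multiplication: a × 0 = 0, a × S(b) = (a × b) + a."""
    if b == 0:
        return 0
    return add(mul(a, b - 1), a)

assert add(7, 1) == succ(7)                            # b + 1 is the successor of b
assert mul(3, add(4, 5)) == add(mul(3, 4), mul(3, 5))  # distributive law
```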

Addition and multiplication are compatible, which is expressed in the distributive law: a × (b + c) = (a × b) + (a × c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact that ℕ is not closed under subtraction (that is, subtracting one natural from another does not always result in another natural), means that ℕ is not a ring; instead it is a semiring (also known as a rig).

If the natural numbers are taken as "excluding 0", and "starting at 1", the definitions of + and × are as above, except that they begin with a + 1 = S(a) and a × 1 = a.

In this section, juxtaposed variables such as ab indicate the product a × b,[31] and the standard order of operations is assumed.

A total order on the natural numbers is defined by letting a ≤ b if and only if there exists another natural number c where a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc.
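
A minimal sketch of this definition of the order, again using Python ints purely for illustration (the helper name leq is an assumption of this example):

```python
def leq(a, b):
    """a ≤ b iff some natural number c satisfies a + c = b."""
    return any(a + c == b for c in range(b + 1))

assert leq(3, 7)       # 3 + 4 == 7
assert not leq(7, 3)   # no natural c gives 7 + c == 3

# Compatibility with addition and multiplication:
a, b, c = 3, 7, 5
assert leq(a + c, b + c) and leq(a * c, b * c)
```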

An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers, this is denoted as ω (omega).


While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder or Euclidean division is available as a substitute: for any two natural numbers a and b with b ≠ 0 there are natural numbers q and r such that a = bq + r and r < b.

The number q is called the quotient and r is called the remainder of the division of a by b. The numbers q and r are uniquely determined by a and b. This Euclidean division is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory.
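
As a quick illustration, Python's built-in divmod returns exactly this quotient-and-remainder pair (the sample values below are arbitrary):

```python
a, b = 4622, 276            # any naturals with b != 0
q, r = divmod(a, b)         # Euclidean division of a by b
assert a == b * q + r and 0 <= r < b
print(q, r)                 # 16 206
```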

The addition (+) and multiplication (×) operations on natural numbers as defined above have several algebraic properties: closure (the sum and the product of two natural numbers are again natural numbers), associativity and commutativity of both operations, the existence of identity elements (0 for addition and 1 for multiplication), distributivity of multiplication over addition, and the absence of nonzero zero divisors (if a × b = 0, then a = 0 or b = 0).

The set of natural numbers is an infinite set. By definition, this kind of infinity is called countable infinity. All sets that can be put into a bijective relation to the natural numbers are said to have this kind of infinity. This is also expressed by saying that the cardinal number of the set is aleph-nought (ℵ₀).[35]
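
As a concrete illustration of such a bijection (an example added here, not part of the original text), the map n ↦ 2n pairs each natural number with exactly one even natural number, so the even natural numbers are also countably infinite:

```python
def to_even(n):
    """Bijection from the naturals onto the even naturals: n ↦ 2n."""
    return 2 * n

def from_even(m):
    """Inverse map: 2n ↦ n."""
    assert m % 2 == 0
    return m // 2

assert all(from_even(to_even(n)) == n for n in range(1000))
```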

Two important generalizations of natural numbers arise from the two uses of counting and ordering: cardinal numbers and ordinal numbers.

The least ordinal of cardinality ℵ₀ (that is, the initial ordinal of ℵ₀) is ω, but many well-ordered sets with cardinal number ℵ₀ have an ordinal number greater than ω.

For finite well-ordered sets, there is a one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence.

A countable non-standard model of arithmetic satisfying Peano arithmetic (that is, the first-order Peano axioms) was developed by Skolem in 1933. The hypernatural numbers are an uncountable model that can be constructed from the ordinary natural numbers via the ultrapower construction.

Georges Reeb used to claim provocatively that "The naïve integers don't fill up ℕ". Other generalizations are discussed in the article on numbers.

Many properties of the natural numbers can be derived from the five Peano axioms:[36][i]

1. 0 is a natural number.
2. Every natural number has a successor which is also a natural number.
3. 0 is not the successor of any natural number.
4. If the successor of x equals the successor of y, then x equals y.
5. The axiom of induction: if a statement is true of 0, and if the truth of that statement for a number implies its truth for the successor of that number, then the statement is true for every natural number.

In the area of mathematics called set theory, a specific construction due to John von Neumann[37][38] defines the natural numbers as follows: call 0 = { }, the empty set, and define S(a) = a ∪ {a} for every set a, so that S(a) is the successor of a. Then 0 = { }, 1 = {0} = {{ }}, 2 = {0, 1}, 3 = {0, 1, 2}, and, in general, each natural number n is the set {0, 1, ..., n − 1} of all smaller natural numbers. The set of natural numbers itself is the smallest set containing 0 and closed under the successor function.

With this definition, a natural number n is a particular set with n elements, and n ≤ m if and only if n is a subset of m. The standard definition, now called the definition of von Neumann ordinals, is: "each ordinal is the well-ordered set of all smaller ordinals."

Also, with this definition, different possible interpretations of notations like ℝⁿ (n-tuples versus mappings of n into ℝ) coincide.

Even if one does not accept the axiom of infinity and therefore cannot accept that the set of all natural numbers exists, it is still possible to define any one of these sets.
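
A small Python sketch (added here for illustration; the names von_neumann and successor are assumptions of this example) builds the first few von Neumann naturals as nested frozensets, so that n has exactly n elements and n ≤ m corresponds to n being a subset of m:

```python
def successor(a):
    """Von Neumann successor: S(a) = a ∪ {a}."""
    return a | frozenset([a])

def von_neumann(n):
    """The von Neumann natural n = {0, 1, ..., n-1} as a frozenset."""
    a = frozenset()              # 0 is the empty set
    for _ in range(n):
        a = successor(a)
    return a

two, five = von_neumann(2), von_neumann(5)
assert len(five) == 5            # n is a set with n elements
assert two <= five               # n ≤ m  iff  n is a subset of m
assert von_neumann(3) in five    # each number contains all smaller numbers
```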

Although the standard construction is useful, it is not the only possible construction. Ernst Zermelo's construction goes as follows:[38] define 0 = { } and define S(a) = {a} for every set a, so that 1 = {0} = {{ }}, 2 = {1} = {{{ }}}, and so on.

Each natural number is then equal to the set containing just the natural number preceding it. This is the definition of Zermelo ordinals. Unlike von Neumann's construction, the Zermelo ordinals do not account for infinite ordinals.
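
For comparison with the von Neumann sketch above, the Zermelo successor simply wraps its argument in a singleton set (again a hypothetical illustration, not part of the original text):

```python
def zermelo_successor(a):
    """Zermelo successor: S(a) = {a}."""
    return frozenset([a])

def zermelo(n):
    """The Zermelo natural n: 0 = {}, n + 1 = {n}."""
    a = frozenset()
    for _ in range(n):
        a = zermelo_successor(a)
    return a

# Every nonzero Zermelo natural has exactly one element: its predecessor.
assert all(len(zermelo(n)) == 1 for n in range(1, 6))
assert zermelo(4) == frozenset([zermelo(3)])
```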