# Mathematics

Mathematics (from Ancient Greek μάθημα (máthēma) 'knowledge, study, learning') is an area of knowledge that includes the study of such topics as numbers (arithmetic and number theory),[1] formulas and related structures (algebra),[2] shapes and spaces in which they are contained (geometry),[1] and quantities and their changes (calculus and analysis).[3][4][5] There is no general consensus about its exact scope or epistemological status.[6][7]

Most mathematical activity consists of discovering and proving (by pure reasoning) properties of abstract objects. These objects are either abstractions from nature (such as natural numbers or lines) or (in modern mathematics) abstract entities of which certain properties, called axioms, are stipulated. A proof consists of a succession of applications of deductive rules to already established results, including previously proved theorems, axioms, and (in the case of abstraction from nature) some basic properties that are considered true starting points of the theory under consideration. The result of a proof is called a theorem.

Mathematics is widely used in science for modeling phenomena. This enables the extraction of quantitative predictions from experimental laws. For example, the movement of planets can be predicted with high accuracy using Newton's law of gravitation combined with mathematical computation. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model for describing reality. Thus, when inaccurate predictions arise, it means that the model must be improved or changed, not that the mathematics is wrong. For example, the perihelion precession of Mercury cannot be explained by Newton's law of gravitation, but is accurately explained by Einstein's general relativity. This experimental validation of Einstein's theory shows that Newton's law of gravitation is only an approximation (though still a very accurate one in everyday life).

Mathematics is essential in many fields, including natural sciences, engineering, medicine, finance, computer science and social sciences. Some areas of mathematics, such as statistics and game theory, are developed in direct correlation with their applications, and are often grouped under the name of applied mathematics. Other mathematical areas are developed independently from any application (and are therefore called pure mathematics), but practical applications are often discovered later.[8][9] A fitting example is the problem of integer factorization, which goes back to Euclid, but which had no practical application before its use in the RSA cryptosystem (for the security of computer networks).
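The asymmetry that makes integer factorization useful for cryptography can be sketched in a few lines: multiplying two primes is instantaneous, while recovering them by the naive method below takes time that grows with the square root of the smallest factor. This is only an illustrative trial-division sketch, not one of the algorithms actually used against RSA-sized numbers.

```python
def factorize(n):
    """Return the prime factorization of n as a sorted list of primes
    (naive trial division; illustrative only)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(factorize(2023))  # → [7, 17, 17]
```

For the 600-digit moduli used in practice, no known classical method runs in reasonable time, which is precisely what RSA's security rests on.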

Mathematics has been a human activity from as far back as written records exist. However, the concept of a "proof" and its associated "mathematical rigour" first appeared in Greek mathematics, most notably in Euclid's Elements.[10] Mathematics developed at a relatively slow pace until the Renaissance, when algebra and infinitesimal calculus were added to arithmetic and geometry as main areas of mathematics. Since then, the interaction between mathematical innovations and scientific discoveries has led to a rapid increase in the rate of mathematical discoveries. At the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method. This, in turn, gave rise to a dramatic increase in the number of areas of mathematics and their fields of application; evidence of this is the Mathematics Subject Classification, which lists more than sixty first-level areas of mathematics.

Before the Renaissance, mathematics was divided into two main areas: arithmetic, devoted to the manipulation of numbers, and geometry, devoted to the study of shapes. There were also pseudosciences, such as numerology and astrology, that were not then clearly distinguished from mathematics.

During the Renaissance, two new main areas appeared. The introduction of mathematical notation led to algebra, which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, consisting of the two subfields differential calculus and integral calculus, is the study of continuous functions, which model the changes of, and the relationships between, varying quantities (variables). This division into four main areas (arithmetic, geometry, algebra, and calculus) remained valid until the end of the 19th century, although some areas, such as celestial mechanics and solid mechanics, which were then often considered part of mathematics, are now regarded as belonging to physics. Also, some subjects developed during this period, such as probability theory and combinatorics, only later came to be regarded as autonomous areas of their own.

At the end of the 19th century, the foundational crisis in mathematics and the resulting systematization of the axiomatic method led to an explosion in the number of areas of mathematics. Today, the Mathematics Subject Classification contains more than 60 first-level areas. Some of these areas correspond to the older division into four main areas. This is the case for number theory (the modern name for higher arithmetic) and geometry. However, there are several other first-level areas that have "geometry" in their name or are commonly considered as belonging to geometry. Algebra and calculus do not appear as first-level areas, but are each split into several first-level areas. Other first-level areas did not exist at all before the 20th century (for example, category theory, homological algebra, and computer science) or were not previously considered as mathematics, such as mathematical logic and foundations (including model theory, computability theory, set theory, proof theory, and algebraic logic).

One characteristic of number theory is that many problems that can be stated in very elementary terms are nevertheless very difficult, and their solutions often require sophisticated methods from various parts of mathematics. A prominent example is Fermat's Last Theorem. This conjecture was stated in 1637 by Pierre de Fermat, but it was proved only in 1994 by Andrew Wiles, using, among other tools, algebraic geometry (more specifically scheme theory), category theory, and homological algebra. Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven to this day despite considerable effort.
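Goldbach's conjecture illustrates how elementarily such problems can be stated: it takes only a few lines to check it mechanically for small even numbers. The sketch below is illustrative only; the conjecture concerns all even integers, and no finite check can settle it.

```python
def is_prime(n):
    """Naive primality test by trial division (illustrative only)."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return one pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Verified here only for a finite range; the conjecture itself is open.
assert all(goldbach_pair(n) is not None for n in range(4, 1000, 2))
print(goldbach_pair(100))  # → (3, 97)
```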

Due to the great diversity of the problems studied and the methods of solution used, number theory is presently split into several subareas, including analytic number theory, algebraic number theory, geometry of numbers (method oriented), Diophantine equations and transcendence theory (problem oriented).

Geometry is, with arithmetic, one of the oldest branches of mathematics. It started with empirical recipes concerning shapes, such as lines, angles and circles, which were developed mainly for the need of surveying and architecture.

A fundamental innovation was the elaboration of proofs by the ancient Greeks: it is not sufficient to verify by measurement that, say, two lengths are equal. Such a property must be proved by abstract reasoning from previously proven results (theorems) and from basic properties that are considered self-evident because they are too basic to be the subject of a proof (postulates). This principle, which is foundational for all mathematics, was elaborated for the sake of geometry and was systematized by Euclid around 300 BC in his book Elements.

The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and the (three-dimensional) Euclidean space.[b]

Euclidean geometry was developed without a change of methods or scope until the 17th century, when René Descartes introduced what are now called Cartesian coordinates. This was a major change of paradigm: instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points by numbers (their coordinates), and the use of algebra and, later, calculus for solving geometrical problems. This split geometry into two parts that differ only in their methods: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically.

Analytic geometry allows the study of new shapes, in particular curves that are not related to circles and lines; these curves are defined either as graphs of functions (whose study led to differential geometry) or by implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry also makes it possible to consider spaces of dimension higher than three (it suffices to consider more than three coordinates), which are no longer a model of physical space.

Geometry expanded quickly during the 19th century. A major event was the discovery (in the second half of the 19th century) of non-Euclidean geometries, which are geometries where the parallel postulate is abandoned. This is, besides Russell's paradox, one of the starting points of the foundational crisis of mathematics, as it called into question the truth of the aforementioned postulate. This aspect of the crisis was solved by systematizing the axiomatic method and accepting that the truth of the chosen axioms is not a mathematical problem. In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that are invariant under specific transformations of the space. This results in a number of subareas and generalizations of geometry.

Algebra may be viewed as the art of manipulating equations and formulas. Diophantus (3rd century) and Al-Khwarizmi (9th century) were two main precursors of algebra. The first solved some relations between unknown natural numbers (that is, equations) by deducing new relations until the solution was obtained. The second introduced systematic methods for transforming equations (such as moving a term from one side of an equation to the other). The term algebra is derived from the Arabic word that he used to name one of these methods in the title of his main treatise.

Algebra began to be a specific area only with François Viète (1540–1603), who introduced the use of letters (variables) for representing unknown or unspecified numbers. This allows describing concisely the operations that have to be done on the numbers represented by the variables.

Until the 19th century, algebra consisted mainly of the study of linear equations, now called linear algebra, and of polynomial equations in a single unknown, which were called algebraic equations (a term that is still in use, although it may be ambiguous). During the 19th century, variables began to represent things other than numbers (such as matrices, modular integers, and geometric transformations), on which generalizations of arithmetic operations can act. To deal with this, the concept of an algebraic structure was introduced: a set whose elements are unspecified, together with operations acting on the elements of the set and rules that these operations must follow. The scope of algebra thus evolved to become essentially the study of algebraic structures. This new scope of algebra was called modern algebra or abstract algebra, the latter term still being used, mainly in an educational context, in contrast with elementary algebra, which is concerned with the older way of manipulating formulas.

Rubik's cube: the study of its possible moves is a concrete application of group theory

Some types of algebraic structures have properties that are useful, and often fundamental, in many areas of mathematics. Their study now forms several autonomous parts of algebra.

The study of types of algebraic structures as mathematical objects is the subject of universal algebra and category theory. The latter applies to every mathematical structure (not only algebraic ones). Category theory was originally introduced, together with homological algebra, to allow the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology.

Calculus, formerly called infinitesimal calculus, was introduced in the 17th century by Newton and Leibniz, independently and simultaneously. It is fundamentally the study of the relationship between two changing quantities, called variables, such that one depends on the other. Calculus was greatly expanded in the 18th century by Euler, with the introduction of the concept of a function, and many other results. Presently, "calculus" refers mainly to the elementary part of this theory, while "analysis" is commonly used for its advanced parts.

Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers. There are now many subareas of analysis, some of them shared with other areas of mathematics.

Discrete mathematics is a recently emerging wide area of mathematics that aggregates several previously existing areas dealing with finite mathematical structures and processes in which continuous variation is not allowed. These areas have in common that, because of their discrete nature, the standard methods of calculus and mathematical analysis do not apply directly.[c] They also have in common that algorithms, their implementation, and their computational complexity play a major role. Although they may have very different objects of study, they often share similar algorithmic methods.

The four color theorem and optimal sphere packing are two major problems of discrete mathematics that have been solved since the second half of the 20th century. The open problem of whether P = NP is important for discrete mathematics, since its solution, whatever it turns out to be, would affect most parts of the field.

Combinatorics may be viewed primarily as the art of enumerating a prescribed set of objects. The history of combinatorics is quite long, with combinatorial techniques being developed in a variety of ancient societies. The term combinatorics in its modern mathematical sense was coined by Leibniz in the 17th century AD,[11] though it was through the work of Euler that many of its modern tools, such as generating functions, were developed.
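Euler's generating functions can be illustrated with the classical partition function: the number of partitions p(n) is the coefficient of x^n in the infinite product of the series 1/(1 - x^k) over all k. The sketch below (an illustrative modern implementation, not Euler's own procedure) computes those coefficients by multiplying the truncated series one factor at a time.

```python
def partition_counts(limit):
    """Coefficients [p(0), p(1), ..., p(limit)] of the partition
    generating function, computed by multiplying in 1/(1 - x^k)
    for k = 1, 2, ..., limit."""
    coeffs = [1] + [0] * limit
    for k in range(1, limit + 1):
        # Multiplying by 1/(1 - x^k) adds the coefficient k places back.
        for n in range(k, limit + 1):
            coeffs[n] += coeffs[n - k]
    return coeffs

print(partition_counts(10))  # → [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

For instance, the last value says there are 42 ways to write 10 as a sum of positive integers, a fact that would be tedious to obtain by direct enumeration.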

The breadth of combinatorics is quite large: it has been used to study enumeration problems arising in pure mathematics within algebra, number theory, probability theory, topology, and geometry,[12] as well as in many areas of applied mathematics. Due to the wide variety of objects that may be enumerated, the theory is often subdivided into areas based on either the type of objects under consideration or the methods used.

Combinatorics is also frequently used in graph theory, as well as forming one of the basic tools used in the analysis of algorithms.

These subjects have belonged to mathematics only since the end of the 19th century. Before this period, sets were not considered mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians.

Before the study of infinite sets by Georg Cantor, mathematicians were reluctant to consider actually infinite collections and regarded infinity as the result of an endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets but also by showing that this implies different sizes of infinity (see Cantor's diagonal argument) and the existence of mathematical objects that cannot be computed, or even explicitly described (for example, Hamel bases of the real numbers over the rational numbers). This led to the controversy over Cantor's set theory.
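Cantor's diagonal argument can be shown in miniature: from any list of binary sequences, flipping the i-th digit of the i-th sequence produces a sequence that differs from every listed one in at least one position. The real argument applies this to infinite lists of infinite sequences, so the code below is only a finite sketch of the construction.

```python
def diagonal(sequences):
    """Flip the i-th digit of the i-th binary sequence: the result
    differs from sequence i at position i, for every i."""
    return [1 - seq[i] for i, seq in enumerate(sequences)]

listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal(listed)
print(d)  # → [1, 0, 1, 0]

# The diagonal sequence disagrees with row i at position i, so it
# cannot equal any row of the list.
assert all(d[i] != listed[i][i] for i in range(len(listed)))
```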

In the same period, it became apparent in various areas of mathematics that the former intuitive definitions of basic mathematical objects were insufficient for ensuring mathematical rigour. Examples of such intuitive definitions are "a set is a collection of objects", "a natural number is what is used for counting", "a point is a shape with zero length in every direction", and "a curve is a trace left by a moving point".

This is the origin of the foundational crisis of mathematics.[13] It was eventually solved in mainstream mathematics by systematizing the axiomatic method inside a formalized set theory. Roughly speaking, each mathematical object is defined by the set of all similar objects and the properties that these objects must have. For example, in Peano arithmetic, the natural numbers are defined by "zero is a number", "each number has a unique successor", "each number but zero has a unique predecessor", and some rules of reasoning. The "nature" of the objects defined this way is a philosophical problem that mathematicians leave to philosophers, even if many mathematicians have opinions on this nature and use their opinion, sometimes called "intuition", to guide their study and find proofs.
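The Peano-style characterization can be sketched by modeling the natural numbers purely through a zero object and a successor operation, with addition defined by nothing but the two standard recursion equations. The nested-tuple encoding below is an illustrative choice of representation, not part of Peano's axioms themselves.

```python
# Natural numbers from zero and successor only (an illustrative encoding).
ZERO = ()

def succ(n):
    """Successor: wrap n in one more layer."""
    return (n,)

def add(m, n):
    """Peano recursion for addition:
       m + 0    = m
       m + S(n) = S(m + n)"""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

def to_int(n):
    """Decode a Peano numeral to an ordinary Python int."""
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # → 5
```

The point of the exercise is that arithmetic facts such as 2 + 3 = 5 follow from the definitions alone, without any appeal to what numbers "really are".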

This approach allows considering "logics" (that is, sets of allowed deduction rules), theorems, proofs, and so on as mathematical objects, and proving theorems about them. For example, Gödel's incompleteness theorems assert, roughly speaking, that in every consistent theory that contains the natural numbers, there are theorems that are true (that is, provable in a larger theory) but not provable inside the theory.

This approach to the foundations of mathematics was challenged during the first half of the 20th century by mathematicians led by L. E. J. Brouwer, who promoted an intuitionistic logic that excludes the law of excluded middle.

These problems and debates led to a wide expansion of mathematical logic, with subareas such as model theory (modeling some logical theories inside another theory), proof theory, type theory, computability theory, and computational complexity theory. Although these aspects of mathematical logic were introduced before the rise of computers, their use in compiler design, program certification, proof assistants, and other aspects of computer science contributed in turn to the expansion of these logical theories.[14]

Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice.

In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics.

Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments;[15] the design of a statistical sample or experiment specifies the analysis of the data (before the data becomes available). When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference—with model selection and estimation; the estimated models and consequential predictions should be tested on new data.[d]

Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints: For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence.[16] Because of its use of optimization, the mathematical theory of statistics shares concerns with other decision sciences, such as operations research, control theory, and mathematical economics.[17]

Computational mathematics proposes and studies methods for solving mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretisation with special focus on rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.

The history of mathematics can be seen as an ever-increasing series of abstractions. Evolutionarily speaking, the first abstraction to ever take place, which is shared by many animals,[18] was probably that of numbers: the realization that a collection of two apples and a collection of two oranges (for example) have something in common, namely the quantity of their members. As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time—days, seasons, or years.[19][20]

Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra, and geometry for taxation and other financial calculations, for building and construction, and for astronomy.[21] The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic (addition, subtraction, multiplication, and division) first appear in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time.[22]

Beginning in the 6th century BC with the Pythagoreans, with Greek mathematics the Ancient Greeks began a systematic study of mathematics as a subject in its own right.[23] Around 300 BC, Euclid introduced the axiomatic method still used in mathematics today, consisting of definition, axiom, theorem, and proof. His book, Elements, is widely considered the most successful and influential textbook of all time.[24] The greatest mathematician of antiquity is often held to be Archimedes (c. 287–212 BC) of Syracuse.[25] He developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus.[26] Other notable achievements of Greek mathematics are conic sections (Apollonius of Perga, 3rd century BC),[27] trigonometry (Hipparchus of Nicaea, 2nd century BC),[28] and the beginnings of algebra (Diophantus, 3rd century AD).[29]
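Archimedes' quadrature of the parabola shows that a parabolic segment has 4/3 the area of its largest inscribed triangle: each stage of the exhaustion adds triangles whose total area is a quarter of the previous stage's, giving the series T + T/4 + T/16 + ..., which sums to (4/3)T. A numerical sketch of that convergence, taking T = 1:

```python
# Partial sums of Archimedes' series 1 + 1/4 + 1/16 + ... , which
# converge to 4/3, the area of the parabolic segment relative to
# its inscribed triangle.
partial = 0.0
term = 1.0
for _ in range(30):
    partial += term
    term /= 4

print(round(partial, 10))  # → 1.3333333333
```

Archimedes did not pass to a limit as modern calculus does; he bounded the area above and below by finite sums and showed any other value leads to a contradiction.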

The numerals used in the Bakhshali manuscript, dated between the 2nd century BC and the 2nd century AD.

The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition and approximation of sine and cosine, and an early form of infinite series.

During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics. The most notable achievement of Islamic mathematics was the development of algebra. Other achievements of the Islamic period include advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system.[30] Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī.

During the early modern period, mathematics began to develop at an accelerating pace in Western Europe. The development of calculus by Isaac Newton and Gottfried Leibniz in the 17th century revolutionized mathematics. Leonhard Euler was the most notable mathematician of the 18th century, contributing numerous theorems and discoveries. Perhaps the foremost mathematician of the 19th century was the German mathematician Carl Gauss, who made numerous contributions to fields such as algebra, analysis, differential geometry, matrix theory, number theory, and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show in part that any consistent axiomatic system—if powerful enough to describe arithmetic—will contain true propositions that cannot be proved.

Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made to this very day. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs."[31]

The word mathematics comes from Ancient Greek máthēma (μάθημα), meaning "that which is learnt,"[32] "what one gets to know," hence also "study" and "science". The word for "mathematics" came to have the narrower and more technical meaning "mathematical study" even in Classical times.[33] Its adjective is mathēmatikós (μαθηματικός), meaning "related to learning" or "studious," which likewise further came to mean "mathematical." In particular, mathēmatikḗ tékhnē (μαθηματικὴ τέχνη; Latin: ars mathematica) meant "the mathematical art."

Similarly, one of the two main schools of thought in Pythagoreanism was known as the mathēmatikoi (μαθηματικοί)—which at the time meant "learners" rather than "mathematicians" in the modern sense.

In Latin, and in English until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This has resulted in several mistranslations. For example, Saint Augustine's warning that Christians should beware of mathematici, meaning astrologers, is sometimes mistranslated as a condemnation of mathematicians.[34]

The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural ta mathēmatiká (τὰ μαθηματικά), used by Aristotle (384–322 BC), and meaning roughly "all things mathematical", although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, which were inherited from Greek.[35] In English, the noun mathematics takes a singular verb. It is often shortened to maths or, in North America, math.[36]

There is no general consensus about the exact definition or epistemological status of mathematics.[6][7] A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable.[6] There is not even consensus on whether mathematics is an art or a science.[7] Some just say, "Mathematics is what mathematicians do."[6]

Aristotle defined mathematics as "the science of quantity" and this definition prevailed until the 18th century. However, Aristotle also noted a focus on quantity alone may not distinguish mathematics from sciences like physics; in his view, abstraction and studying quantity as a property "separable in thought" from real instances set mathematics apart.[37]

In the 19th century, when the study of mathematics increased in rigor and began to address abstract topics such as group theory and projective geometry, which have no clear-cut relation to quantity and measurement, mathematicians and philosophers began to propose a variety of new definitions.[38] To this day, philosophers continue to tackle questions in philosophy of mathematics, such as the nature of mathematical proof.[39]

Mathematicians strive to develop their results with systematic reasoning in order to avoid mistaken "theorems". These false proofs often arise from fallible intuitions and have been common throughout the history of mathematics. To allow deductive reasoning, some basic assumptions must be explicitly admitted as axioms. Traditionally, these axioms were selected on the grounds of common sense, but modern axioms typically express formal guarantees for primitive notions, such as simple objects and relations.

The validity of a mathematical proof is fundamentally a matter of rigor, and misunderstanding rigor is a notable cause for some common misconceptions about mathematics. Mathematical language may give more precision than in everyday speech to ordinary words like or and only. Other words such as open and field are invested with new meanings for specific mathematical concepts. Sometimes even entirely new terms (e.g. homeomorphism) are coined. This technical vocabulary is both precise and compact, making it possible to mentally process complex ideas. Mathematicians refer to this precision of language and logic as "rigor".

The rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but in Isaac Newton's heyday, the methods employed were less rigorous. Problems inherent in the definitions used by Newton led to a resurgence of careful analysis and formal proof in the 19th century. Later in the early 20th century, Bertrand Russell and Alfred North Whitehead would publish their Principia Mathematica, an attempt to show that all mathematical concepts and statements could be defined, then proven entirely through symbolic logic. This was part of a wider philosophical program known as logicism, which sees mathematics as primarily an extension of logic.

Despite mathematics' concision, many proofs require hundreds of pages to express. The emergence of computer-assisted proofs has allowed proof lengths to expand further. Assisted proofs may be erroneous if the proving software has flaws and, if they are lengthy, may be difficult to check.[e][40] On the other hand, proof assistants allow for the verification of details that cannot be given in a hand-written proof, and provide certainty of the correctness of long proofs such as that of the 255-page Feit–Thompson theorem.[f]

Leonhard Euler created and popularized much of the mathematical notation used today.

In addition to special language, contemporary mathematics makes heavy use of special notation. These symbols also contribute to rigor, both by simplifying the expression of mathematical ideas and allowing routine operations that follow consistent rules. Modern notation makes mathematics much more efficient for the adept, though beginners can find it daunting.

Most of the mathematical notation in use today was invented after the 15th century, with many contributions by Leonhard Euler (1707–1783) in particular.[41] Before then, mathematical arguments were typically written out in words, limiting mathematical discovery.[42]

Beginning in the 19th century, a school of thought known as formalism developed. To a formalist, mathematics is primarily about formal systems of symbols and rules for combining them. From this point of view, even axioms are just privileged formulas in an axiomatic system, given without being derived procedurally from other elements in the system. A maximal instance of formalism was David Hilbert's call in the early 20th century, often called Hilbert's program, to encode all mathematics in this way.

Kurt Gödel proved this goal was fundamentally impossible with his incompleteness theorems, which showed that any formal system rich enough to describe even simple arithmetic could not guarantee its own completeness or consistency. Nonetheless, formalist concepts continue to influence mathematics greatly, to the point that statements are expected by default to be expressible in set-theoretic formulas. Only very exceptional results are accepted as not fitting into one axiomatic system or another.[43]

Isaac Newton (left) and Gottfried Wilhelm Leibniz developed infinitesimal calculus.

In practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences, notably deductive reasoning from assumptions. Like scientists, mathematicians develop mathematical hypotheses, known as conjectures, through trial and error guided by intuition.[44] Experimental mathematics and computational methods such as simulation also continue to grow in importance within mathematics.

Today, all sciences pose problems studied by mathematicians, and conversely, results from mathematics often lead to new questions and realizations in the sciences. For example, the physicist Richard Feynman combined mathematical reasoning and physical insight to invent the path integral formulation of quantum mechanics. String theory, on the other hand, is a proposed framework for unifying much of modern physics that has inspired new techniques and results in mathematics.[45]

The German mathematician Carl Friedrich Gauss even went so far as to call mathematics "the Queen of the Sciences",[46] and more recently, Marcus du Sautoy has described mathematics as "the main driving force behind scientific discovery".[47] However, some authors emphasize that mathematics differs from the modern notion of science in a major way: it does not rely on empirical evidence.[48][49][50][51]

Mathematical knowledge has exploded in scope since the Scientific Revolution, and as with other fields of study, this has driven specialization. As of 2010, the latest Mathematics Subject Classification of the American Mathematical Society recognizes hundreds of subfields, with the full classification running to 46 pages.[52] Many concepts in a subfield can remain isolated from other branches of mathematics indefinitely; results may serve primarily as scaffolding to support other theorems and techniques, or they may not have a clear relation to anything outside the subfield.

Mathematics shows a remarkable tendency to evolve though, and in time, mathematicians often discover surprising applications or links between concepts. One very influential instance of this was the Erlangen program of Felix Klein, which established innovative and profound links between geometry and algebra. This in turn opened up both fields to greater abstraction and spawned entirely new subfields.

A distinction is often made between applied mathematics and mathematics oriented entirely towards abstract questions and concepts, known as pure mathematics. As with other divisions of mathematics though, the boundary is fluid. Ideas that initially develop with a specific application in mind are often generalized later, thereupon joining the general stock of mathematical concepts. Several areas of applied mathematics have even merged with practical fields to become disciplines in their own right, such as statistics, operations research, and computer science.

Perhaps even more surprising is when ideas flow in the other direction, and even the "purest" mathematics leads to unexpected predictions or applications. For example, number theory occupies a central place in modern cryptography, and in physics, derivations from Maxwell's equations anticipated experimental evidence of radio waves and the constancy of the speed of light. Physicist Eugene Wigner named this phenomenon the "unreasonable effectiveness of mathematics".[9]
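The role of number theory in cryptography can be sketched with a toy RSA-style exchange. This is an illustrative sketch only: the primes below are far too small for real security, which in practice rests on the difficulty of factoring the modulus.

```python
# Toy RSA-style encryption using textbook-sized numbers.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(recovered)  # 42 == message
```

Decryption works because of Euler's theorem, a result of pure number theory proved long before any cryptographic application existed.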

The uncanny connection between abstract mathematics and material reality has led to philosophical debates since at least the time of Pythagoras. The ancient philosopher Plato argued this was possible because material reality reflects abstract objects that exist outside time. As a result, the view that mathematical objects somehow exist on their own in abstraction is often referred to as Platonism. While most mathematicians do not typically concern themselves with the questions raised by Platonism, some more philosophically minded ones do, identifying as Platonists even in contemporary times.[53]

The need for correctness and rigor does not mean mathematics has no place for creativity. On the contrary, most mathematical work beyond rote calculations requires clever problem-solving and exploring novel perspectives intuitively.

The mathematically inclined often see not only creativity, but also an aesthetic value in mathematics, commonly described as elegance. Qualities like simplicity, symmetry, completeness, and generality are particularly valued in proofs and techniques. G. H. Hardy in A Mathematician's Apology expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He also identified other criteria such as significance, unexpectedness, and inevitability, which contribute to a mathematical aesthetic.[54]

Paul Erdős expressed this sentiment more ironically by speaking of "The Book", a supposed divine collection of the most beautiful proofs. Inspired by Erdős, a collection of particularly succinct and revelatory mathematical arguments has been published in Proofs from THE BOOK. Some examples of particularly elegant results are Euclid's proof that there are infinitely many prime numbers and the fast Fourier transform for harmonic analysis.
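Euclid's argument can be illustrated computationally. The helper below is a hypothetical sketch, not Euclid's own formulation: given any finite list of primes, the product of the list plus one is divisible by no prime in the list, so its smallest prime factor is always a new prime.

```python
def a_new_prime(primes):
    """Return a prime not in the given finite list of primes,
    following Euclid's argument: N = (product of the list) + 1
    leaves remainder 1 when divided by any listed prime, so its
    smallest prime factor cannot appear in the list."""
    N = 1
    for p in primes:
        N *= p
    N += 1
    # Find the smallest prime factor of N by trial division.
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d
        d += 1
    return N  # N itself is prime

print(a_new_prime([2, 3, 5]))           # 31: 2*3*5 + 1 is itself prime
print(a_new_prime([2, 3, 5, 7, 11, 13]))  # 59: 30031 = 59 * 509
```

Note that the product plus one need not itself be prime (30031 is not), which is exactly why the proof speaks of a prime factor rather than the number itself.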

Some feel that to consider mathematics a science is to downplay its artistry and history in the seven traditional liberal arts.[55] One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematical results are created (as in art) or discovered (as in science).[56] The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions.

In the 20th century, the mathematician L. E. J. Brouwer even initiated a philosophical perspective known as intuitionism, which primarily identifies mathematics with certain creative processes in the mind.[57] Intuitionism is in turn one flavor of a stance known as constructivism, which only considers a mathematical object valid if it can be directly constructed, not merely guaranteed by logic indirectly. This leads committed constructivists to reject certain results, particularly arguments like existential proofs based on the law of excluded middle.[58]
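A standard textbook illustration of what constructivists object to (a classical example, not drawn from Brouwer's own writings) is the following nonconstructive proof that there exist irrational numbers a and b with a^b rational:

```latex
% The law of the excluded middle guarantees one of the two cases holds,
% but the proof never determines which, so no explicit pair is produced.
\begin{itemize}
  \item If $\sqrt{2}^{\sqrt{2}}$ is rational, take $a = b = \sqrt{2}$.
  \item If $\sqrt{2}^{\sqrt{2}}$ is irrational, take
        $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$; then
        $a^b = \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}}
             = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2.$
\end{itemize}
```

The proof establishes existence without exhibiting a witness, which is precisely the kind of indirect argument a committed constructivist rejects.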

In the end, neither constructivism nor intuitionism displaced classical mathematics or achieved mainstream acceptance. However, these programs have motivated specific developments, such as intuitionistic logic and other foundational insights, which are appreciated in their own right.[58]

Even when difficult, mathematics has a remarkable ability to cross cultural boundaries and time periods. However, as a human activity, mathematical practice has a social side too, including concerns such as education, careers, and recognition.

The most prestigious award in mathematics is the Fields Medal,[59][60] established in 1936 and awarded every four years (except around World War II) to as many as four individuals.[61][62] It is often considered the mathematical equivalent of the Nobel Prize.[62]

A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert.[69] This list achieved great celebrity among mathematicians,[70] and at least thirteen of the problems (depending on how some are interpreted) have now been solved.[69] A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. Only one of them, the Riemann hypothesis, duplicates one of Hilbert's problems. A solution to any of these problems carries a $1 million reward.[71] To date, only one of these problems, the Poincaré conjecture, has been solved.[72]