# Algebra

**Algebra** (from Arabic: الجبر, romanized: *al-jabr*, lit. 'reunion of broken parts,^{[1]} bonesetting^{[2]}') is one of the broad areas of mathematics. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols;^{[3]} it is a unifying thread of almost all of mathematics.^{[4]} It includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians.

The word *algebra* is also used in certain specialized ways. A special kind of algebraic structure is called an "algebra", and the word is used, for example, in the phrases linear algebra and algebraic topology.

The word *algebra* comes from the Arabic: الجبر, romanized: *al-jabr*, lit. 'reunion of broken parts,^{[1]} bonesetting^{[2]}', from the title of the early 9th-century book *ʿIlm al-jabr wa l-muqābala* "The Science of Restoring and Balancing" by the Persian mathematician and astronomer al-Khwarizmi. In his work, the term *al-jabr* referred to the operation of moving a term from one side of an equation to the other, and المقابلة *al-muqābala* "balancing" referred to adding equal terms to both sides. Shortened to just *algeber* or *algebra* in Latin, the word eventually entered the English language during the 15th century, from either Spanish, Italian, or Medieval Latin. It originally referred to the surgical procedure of setting broken or dislocated bones. The mathematical meaning was first recorded (in English) in the 16th century.^{[7]}

The word "algebra" has several related meanings in mathematics, as a single word or with qualifiers.

Algebra began with computations similar to those of arithmetic, with letters standing for numbers.^{[5]} This allowed proofs of properties that are true no matter which numbers are involved. For example, in the quadratic equation *ax*^{2} + *bx* + *c* = 0, the coefficients *a*, *b* and *c* can be any numbers whatsoever (except that *a* cannot be 0), and the quadratic formula *x* = (−*b* ± √(*b*^{2} − 4*ac*)) / 2*a* can be used to quickly find the values of the unknown quantity *x* that satisfy the equation.
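The quadratic formula, which expresses the roots of *ax*^{2} + *bx* + *c* = 0 in terms of the coefficients, translates directly into code. A minimal sketch (the function name and structure here are illustrative, not from any particular library):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0, for any coefficients with a != 0."""
    disc = b * b - 4 * a * c   # the discriminant b**2 - 4ac
    if disc < 0:
        return []              # no real roots
    root = math.sqrt(disc)
    # the quadratic formula: x = (-b +/- sqrt(b**2 - 4ac)) / (2a)
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

print(solve_quadratic(1, 2, -3))  # [-3.0, 1.0]
```

The point of the algebraic formulation is exactly that this one function works for *every* choice of coefficients, not just a particular equation.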

Historically, and in current teaching, the study of algebra starts with the solving of equations, such as the quadratic equation above. Then more general questions, such as "does an equation have a solution?", "how many solutions does an equation have?", "what can be said about the nature of the solutions?" are considered. These questions led to the extension of algebra to non-numerical objects, such as permutations, vectors, matrices, and polynomials. The structural properties of these non-numerical objects were then abstracted into algebraic structures such as groups, rings, and fields.

Before the 16th century, mathematics was divided into only two subfields, arithmetic and geometry. Even though some methods, which had been developed much earlier, may be considered nowadays as algebra, the emergence of algebra and, soon thereafter, of infinitesimal calculus as subfields of mathematics only dates from the 16th or 17th century. From the second half of the 19th century on, many new fields of mathematics appeared, most of which made use of both arithmetic and geometry, and almost all of which used algebra.

Today, algebra has grown considerably and includes many branches of mathematics, as can be seen in the Mathematics Subject Classification^{[8]}
where none of the first-level areas (two-digit entries) are called *algebra*. Today algebra includes the sections 08-General algebraic systems, 12-Field theory and polynomials, 13-Commutative algebra, 15-Linear and multilinear algebra; matrix theory, 16-Associative rings and algebras, 17-Nonassociative rings and algebras, 18-Category theory; homological algebra, 19-K-theory and 20-Group theory. Algebra is also used extensively in 11-Number theory and 14-Algebraic geometry.

The roots of algebra can be traced to the ancient Babylonians,^{[9]} who developed an advanced arithmetical system with which they were able to do calculations in an algorithmic fashion. The Babylonians developed formulas to calculate solutions for problems typically solved today by using linear equations, quadratic equations, and indeterminate linear equations. By contrast, most Egyptians of this era, as well as Greek and Chinese mathematics in the 1st millennium BC, usually solved such equations by geometric methods, such as those described in the *Rhind Mathematical Papyrus*, Euclid's *Elements*, and *The Nine Chapters on the Mathematical Art*. The geometric work of the Greeks, typified in the *Elements*, provided the framework for generalizing formulae beyond the solution of particular problems into more general systems of stating and solving equations, although this would not be realized until mathematics developed in medieval Islam.^{[10]}

By the time of Plato, Greek mathematics had undergone a drastic change. The Greeks created a geometric algebra where terms were represented by sides of geometric objects, usually lines, that had letters associated with them.^{[5]} Diophantus (3rd century AD) was an Alexandrian Greek mathematician and the author of a series of books called *Arithmetica*. These texts deal with solving algebraic equations,^{[11]} and have led, in number theory, to the modern notion of Diophantine equation.

Earlier traditions discussed above had a direct influence on the Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī (c. 780–850). He later wrote *ʿIlm al-jabr wa l-muqābala* ("The Science of Restoring and Balancing"), which established algebra as a mathematical discipline that is independent of geometry and arithmetic.^{[12]}

The Hellenistic mathematicians Hero of Alexandria and Diophantus,^{[13]} as well as Indian mathematicians such as Brahmagupta, continued the traditions of Egypt and Babylon, though Diophantus' *Arithmetica* and Brahmagupta's *Brāhmasphuṭasiddhānta* are on a higher level.^{[14]}^{[better source needed]} For example, the first complete arithmetic solution to quadratic equations, written in words instead of symbols^{[15]} and including zero and negative solutions, was described by Brahmagupta in his book *Brahmasphutasiddhanta*, published in 628 AD.^{[16]} Later, Persian and Arab mathematicians developed algebraic methods to a much higher degree of sophistication. Although Diophantus and the Babylonians used mostly special *ad hoc* methods to solve equations, Al-Khwarizmi's contribution was fundamental. He solved linear and quadratic equations without algebraic symbolism, negative numbers or zero, and thus had to distinguish several types of equations.^{[17]}

In the context where algebra is identified with the theory of equations, the Greek mathematician Diophantus has traditionally been known as the "father of algebra" and in the context where it is identified with rules for manipulating and solving equations, Persian mathematician al-Khwarizmi is regarded as "the father of algebra".^{[18]}^{[19]}^{[20]}^{[21]}^{[22]}^{[23]}^{[24]} It is open to debate whether Diophantus or al-Khwarizmi is more entitled to be known, in the general sense, as "the father of algebra". Those who support Diophantus point to the fact that the algebra found in *Al-Jabr* is slightly more elementary than the algebra found in *Arithmetica* and that *Arithmetica* is syncopated while *Al-Jabr* is fully rhetorical.^{[25]} Those who support Al-Khwarizmi point to the fact that he introduced the methods of "reduction" and "balancing" (the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation) which the term *al-jabr* originally referred to,^{[26]} and that he gave an exhaustive explanation of solving quadratic equations,^{[27]} supported by geometric proofs while treating algebra as an independent discipline in its own right.^{[22]} His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study". He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems".^{[28]}

Another Persian mathematician, Omar Khayyam, is credited with identifying the foundations of algebraic geometry and with finding the general geometric solution of the cubic equation. His book *Treatise on Demonstrations of Problems of Algebra* (1070), which laid down the principles of algebra, is part of the body of Persian mathematics that was eventually transmitted to Europe.^{[29]} Yet another Persian mathematician, Sharaf al-Dīn al-Tūsī, found algebraic and numerical solutions to various cases of cubic equations.^{[30]} He also developed the concept of a function.^{[31]} The Indian mathematicians Mahavira and Bhaskara II, the Persian mathematician Al-Karaji,^{[32]} and the Chinese mathematician Zhu Shijie solved various cases of cubic, quartic, quintic and higher-order polynomial equations using numerical methods. In the 13th century, the solution of a cubic equation by Fibonacci is representative of the beginning of a revival in European algebra. Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī (1412–1486) took "the first steps toward the introduction of algebraic symbolism". He also computed Σ*n*^{2}, Σ*n*^{3} and used the method of successive approximation to determine square roots.^{[33]}

François Viète's work on new algebra at the close of the 16th century was an important step towards modern algebra. In 1637, René Descartes published *La Géométrie*, inventing analytic geometry and introducing modern algebraic notation. Another key event in the further development of algebra was the general algebraic solution of the cubic and quartic equations, developed in the mid-16th century. The idea of a determinant was developed by Japanese mathematician Seki Kōwa in the 17th century, followed independently by Gottfried Leibniz ten years later, for the purpose of solving systems of simultaneous linear equations using matrices. Gabriel Cramer also did some work on matrices and determinants in the 18th century. Permutations were studied by Joseph-Louis Lagrange in his 1770 paper "*Réflexions sur la résolution algébrique des équations*" devoted to solutions of algebraic equations, in which he introduced Lagrange resolvents. Paolo Ruffini was the first person to develop the theory of permutation groups, and like his predecessors, also in the context of solving algebraic equations.

Abstract algebra was developed in the 19th century, deriving from the interest in solving equations, initially focusing on what is now called Galois theory, and on constructibility issues.^{[34]} George Peacock was the founder of axiomatic thinking in arithmetic and algebra. Augustus De Morgan discovered relation algebra in his *Syllabus of a Proposed System of Logic*. Josiah Willard Gibbs developed an algebra of vectors in three-dimensional space, and Arthur Cayley developed an algebra of matrices (this is a noncommutative algebra).^{[35]}

Some subareas of algebra have the word algebra in their name; linear algebra is one example. Others do not: group theory, ring theory, and field theory are examples. In this section, we list some areas of mathematics with the word "algebra" in the name.

Elementary algebra is the most basic form of algebra. It is taught to students who are presumed to have no knowledge of mathematics beyond the basic principles of arithmetic. In arithmetic, only numbers and their arithmetical operations (such as +, −, ×, ÷) occur. In algebra, numbers are often represented by symbols called variables (such as *a*, *n*, *x*, *y* or *z*). This is useful because it allows the general formulation of arithmetical laws (such as *a* + *b* = *b* + *a* for all *a* and *b*), it allows reference to "unknown" numbers and the formulation of equations to solve for them, and it allows the formulation of functional relationships.

A polynomial is an expression that is the sum of a finite number of non-zero terms, each term consisting of the product of a constant and a finite number of variables raised to whole number powers. For example, *x*^{2} + 2*x* − 3 is a polynomial in the single variable *x*. A polynomial expression is an expression that may be rewritten as a polynomial, by using commutativity, associativity and distributivity of addition and multiplication. For example, (*x* − 1)(*x* + 3) is a polynomial expression, that, properly speaking, is not a polynomial. A polynomial function is a function that is defined by a polynomial, or, equivalently, by a polynomial expression. The two preceding examples define the same polynomial function.
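The claim that (*x* − 1)(*x* + 3) and *x*^{2} + 2*x* − 3 define the same polynomial function can be checked numerically. A small sketch (the function names are illustrative):

```python
def poly(x):
    """The polynomial x**2 + 2*x - 3."""
    return x**2 + 2*x - 3

def poly_expr(x):
    """The polynomial expression (x - 1)*(x + 3)."""
    return (x - 1) * (x + 3)

# Two polynomials of degree at most 2 that agree at three distinct points
# are identical, so sampling three values is a complete check here.
assert all(poly(x) == poly_expr(x) for x in (0, 1, 2))
```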

Two important and related problems in algebra are the factorization of polynomials, that is, expressing a given polynomial as a product of other polynomials that cannot be factored any further, and the computation of polynomial greatest common divisors. The example polynomial above can be factored as (*x* − 1)(*x* + 3). A related class of problems is finding algebraic expressions for the roots of a polynomial in a single variable.
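Polynomial greatest common divisors can be computed with the same Euclidean algorithm used for integers: repeatedly replace the pair (p, q) by (q, remainder of p ÷ q). The following sketch (a hypothetical helper of our own, using exact rational arithmetic) recovers the shared factor *x* − 1 of *x*^{2} + 2*x* − 3 and *x*^{2} − 1:

```python
from fractions import Fraction

def poly_gcd(p, q):
    """Greatest common divisor of two polynomials via the Euclidean
    algorithm.  A polynomial is a coefficient list, highest degree first;
    the result is made monic (leading coefficient 1)."""
    p = [Fraction(c) for c in p]
    q = [Fraction(c) for c in q]
    while q:
        r = p[:]                          # compute the remainder of p / q
        while len(r) >= len(q):
            factor = r[0] / q[0]
            pad = q + [Fraction(0)] * (len(r) - len(q))
            r = [a - factor * b for a, b in zip(r, pad)][1:]
        while r and r[0] == 0:            # drop leading zero coefficients
            r = r[1:]
        p, q = q, r
    return [c / p[0] for c in p]

# x**2 + 2x - 3 = (x - 1)(x + 3) and x**2 - 1 = (x - 1)(x + 1)
# share the factor x - 1, i.e. the coefficient list [1, -1]:
assert poly_gcd([1, 2, -3], [1, 0, -1]) == [1, -1]
```

Exact `Fraction` arithmetic matters here: with floating-point coefficients, the "is this remainder zero?" test would be unreliable.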

It has been suggested that elementary algebra should be taught to students as young as eleven years old,^{[36]} though in recent years it has become more common for public instruction to begin at the eighth-grade level (around age 13) in the United States.^{[37]} However, in some US schools, algebra is started in ninth grade.

Abstract algebra extends the familiar concepts found in elementary algebra and arithmetic of numbers to more general concepts. The fundamental concepts of abstract algebra are described below.

Sets: Rather than just considering the different types of numbers, abstract algebra deals with the more general concept of *sets*: a collection of objects (called elements) selected according to a property specific to the set. All collections of the familiar types of numbers are sets. Other examples of sets include the set of all two-by-two matrices, the set of all second-degree polynomials (*ax*^{2} + *bx* + *c*), the set of all two-dimensional vectors of a plane, and the various finite groups such as the cyclic groups, which are the groups of integers modulo *n*. Set theory is a branch of logic and not technically a branch of algebra.

Binary operations: The notion of addition (+) is abstracted to give a *binary operation*, say ∗. The notion of binary operation is meaningless without the set on which the operation is defined. For two elements *a* and *b* in a set *S*, *a* ∗ *b* is another element in the set; this condition is called closure. Addition (+), subtraction (−), multiplication (×), and division (÷) can be binary operations when defined on different sets, as are addition and multiplication of matrices, vectors, and polynomials.
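Closure depends on which set the operation is defined on, which a small sketch makes concrete (the helper name is illustrative):

```python
def make_mod_add(n):
    """Addition modulo n -- a binary operation on the set {0, 1, ..., n-1}."""
    return lambda a, b: (a + b) % n

S = set(range(5))
add5 = make_mod_add(5)

# Closure: combining any two elements of S yields another element of S.
assert all(add5(a, b) in S for a in S for b in S)

# Ordinary addition is NOT closed on this finite set: 3 + 4 = 7 lies outside.
assert (3 + 4) not in S
```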

Identity elements: The numbers zero and one are abstracted to give the notion of an *identity element* for an operation. Zero is the identity element for addition and one is the identity element for multiplication. For a general binary operator ∗ the identity element *e* must satisfy *a* ∗ *e* = *a* and *e* ∗ *a* = *a*, and is necessarily unique, if it exists. This holds for addition as *a* + 0 = *a* and 0 + *a* = *a* and multiplication *a* × 1 = *a* and 1 × *a* = *a*. Not all sets and operator combinations have an identity element; for example, the set of positive natural numbers (1, 2, 3, ...) has no identity element for addition.

Inverse elements: The negative numbers give rise to the concept of *inverse elements*. For addition, the inverse of *a* is written −*a*, and for multiplication the inverse is written *a*^{−1}. A general two-sided inverse element *a*^{−1} satisfies the property that *a* ∗ *a*^{−1} = *e* and *a*^{−1} ∗ *a* = *e*, where *e* is the identity element.

Associativity: Addition of integers has a property called associativity. That is, the grouping of the numbers to be added does not affect the sum. For example: (2 + 3) + 4 = 2 + (3 + 4). In general, this becomes (*a* ∗ *b*) ∗ *c* = *a* ∗ (*b* ∗ *c*). This property is shared by most binary operations, but not subtraction or division or octonion multiplication.
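The failure of associativity for subtraction is easy to verify numerically:

```python
# Grouping matters for subtraction, so it is not associative:
assert (2 - 3) - 4 == -5
assert 2 - (3 - 4) == 3
assert (2 - 3) - 4 != 2 - (3 - 4)

# Addition, by contrast, gives the same result either way:
assert (2 + 3) + 4 == 2 + (3 + 4) == 9
```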

Commutativity: Addition and multiplication of real numbers are both commutative. That is, the order of the numbers does not affect the result. For example: 2 + 3 = 3 + 2. In general, this becomes *a* ∗ *b* = *b* ∗ *a*. This property does not hold for all binary operations. For example, matrix multiplication and quaternion multiplication are both non-commutative.
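Non-commutativity of matrix multiplication can be demonstrated with a single pair of 2×2 matrices. A minimal sketch (the helper `mat_mul` is our own):

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

# A*B and B*A differ, so matrix multiplication is not commutative.
assert mat_mul(A, B) != mat_mul(B, A)
print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]]
```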

Combining the above concepts gives one of the most important structures in mathematics: a group. A group is a combination of a set *S* and a single binary operation ∗, defined in any way you choose, but with the following properties: the operation is closed and associative, an identity element exists, and every element of *S* has an inverse.

If a group is also commutative – that is, for any two members *a* and *b* of *S*, *a* ∗ *b* is identical to *b* ∗ *a* – then the group is said to be abelian.

For example, the set of integers under the operation of addition is a group. In this group, the identity element is 0 and the inverse of any element *a* is its negation, −*a*. The associativity requirement is met, because for any integers *a*, *b* and *c*, (*a* + *b*) + *c* = *a* + (*b* + *c*).

The non-zero rational numbers form a group under multiplication. Here, the identity element is 1, since 1 × *a* = *a* × 1 = *a* for any rational number *a*. The inverse of *a* is 1/*a*, since *a* × 1/*a* = 1.

The integers under the multiplication operation, however, do not form a group. This is because, in general, the multiplicative inverse of an integer is not an integer. For example, 4 is an integer, but its multiplicative inverse is 1/4, which is not an integer.
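For a *finite* set, the group axioms can be checked by brute force. A sketch using modular arithmetic in place of the infinite examples above (the helper `is_group` is our own):

```python
def is_group(S, op):
    """Brute-force check of the group axioms for a finite set S under op:
    closure, associativity, an identity element, and two-sided inverses."""
    S = list(S)
    if any(op(a, b) not in S for a in S for b in S):
        return False                      # not closed
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in S for b in S for c in S):
        return False                      # not associative
    ids = [e for e in S if all(op(e, a) == a == op(a, e) for a in S)]
    if not ids:
        return False                      # no identity element
    e = ids[0]
    # every element must have a two-sided inverse
    return all(any(op(a, b) == e == op(b, a) for b in S) for a in S)

# The integers modulo 6 form a group under addition mod 6 ...
assert is_group(range(6), lambda a, b: (a + b) % 6)
# ... but the nonzero residues mod 6 under multiplication do not:
# 2 * 3 = 6 = 0 (mod 6) falls outside the set, and 2 has no inverse.
assert not is_group(range(1, 6), lambda a, b: (a * b) % 6)
```

Replacing the modulus 6 by a prime makes the multiplicative example succeed, mirroring the distinction between the integers and the non-zero rationals described above.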

The theory of groups is studied in group theory. A major result of this theory is the classification of finite simple groups, mostly published between about 1955 and 1983, which separates the finite simple groups into roughly 30 basic types.

Semi-groups, quasi-groups, and monoids are algebraic structures similar to groups, but with fewer constraints on the operation. They comprise a set and a closed binary operation but do not necessarily satisfy the other conditions. A semi-group has an *associative* binary operation but might not have an identity element. A monoid is a semi-group which does have an identity but might not have an inverse for every element. A quasi-group satisfies a requirement that any element can be turned into any other by either a unique left-multiplication or right-multiplication; however, the binary operation might not be associative.
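Strings under concatenation are a standard example of a monoid that is not a group. A brief sketch:

```python
# Strings under concatenation: an associative operation with identity "".
a, b, c = "al", "ge", "bra"
assert (a + b) + c == a + (b + c)   # associative, so at least a semi-group
assert "" + a == a + "" == a        # identity element, so a monoid

# But there are no inverses: no string s satisfies "al" + s == "",
# because concatenation never shortens a string -- so this is not a group.
```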

Groups just have one binary operation. To fully explain the behaviour of the different types of numbers, structures with two operators need to be studied. The most important of these are rings and fields.

A ring has two binary operations (+) and (×), with × distributive over +. Under the first operator (+) it forms an *abelian group*. Under the second operator (×) it is associative, but it does not need to have an identity, or inverse, so division is not required. The additive (+) identity element is written as 0 and the additive inverse of *a* is written as −*a*.

Distributivity generalises the *distributive law* for numbers. For the integers (*a* + *b*) × *c* = *a* × *c* + *b* × *c* and *c* × (*a* + *b*) = *c* × *a* + *c* × *b*, and × is said to be *distributive* over +.

The integers are an example of a ring. The integers have additional properties which make them an integral domain.

A field is a *ring* with the additional property that all the elements excluding 0 form an *abelian group* under ×. The multiplicative (×) identity is written as 1 and the multiplicative inverse of *a* is written as *a*^{−1}.

The rational numbers, the real numbers and the complex numbers are all examples of fields.
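As a computational sketch, the integers modulo a prime *p* also form a field: every nonzero element has a multiplicative inverse. The example below uses Python's built-in modular inverse (`pow(a, -1, p)`, available since Python 3.8); the helper name is our own:

```python
def inverses_mod(p):
    """Map each nonzero residue a (mod p) to its multiplicative inverse.
    When p is prime, every nonzero element has one, making Z/pZ a field."""
    return {a: pow(a, -1, p) for a in range(1, p)}

inv = inverses_mod(7)
assert all((a * inv[a]) % 7 == 1 for a in range(1, 7))
# For instance 3 * 5 = 15 = 2*7 + 1, so the inverse of 3 mod 7 is 5.
assert inv[3] == 5
```

For a composite modulus some inverses fail to exist (as with 2 mod 6), which is why only prime moduli yield fields.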