Mathematics is the study of representing and reasoning about abstract objects (such as numbers, points, spaces, sets, structures, and games). Mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, and the social sciences. Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered. (Full article...)
These are Featured articles, which represent some of the best content on English Wikipedia.
Image 1
Damage from Hurricane Katrina in 2005. Actuaries need to estimate long-term levels of such damage in order to accurately price property insurance, set appropriate reserves, and design appropriate reinsurance and capital management strategies.
An actuary is a business professional who deals with the measurement and management of risk and uncertainty. The name of the corresponding field is actuarial science. These risks can affect both sides of the balance sheet and require asset management, liability management, and valuation skills. Actuaries provide assessments of financial security systems, with a focus on their complexity, their mathematics, and their mechanisms.
While the concept of insurance dates to antiquity, the concepts needed to scientifically measure and mitigate risks have their origins in the 17th century studies of probability and annuities. Actuaries of the 21st century require analytical skills, business knowledge, and an understanding of human behavior and information systems to design and manage programs that control risk. The actual steps needed to become an actuary are usually country-specific; however, almost all processes share a rigorous schooling or examination structure and take many years to complete. (Full article...)
Figure 1: A solution (in purple) to Apollonius's problem. The given circles are shown in black.
In Euclidean plane geometry, Apollonius's problem is to construct circles that are tangent to three given circles in a plane (Figure 1). Apollonius of Perga (c. 262 BC – c. 190 BC) posed and solved this famous problem in his work Ἐπαφαί (Epaphaí, "Tangencies"); this work has been lost, but a 4th-century AD report of his results by Pappus of Alexandria has survived. Three given circles generically have eight different circles that are tangent to them (Figure 2), a pair of solutions for each way to divide the three given circles into two subsets (there are four ways to partition a set of three elements into two parts).
In the 16th century, Adriaan van Roomen solved the problem using intersecting hyperbolas, but this solution does not use only straightedge and compass constructions. François Viète found such a solution by exploiting limiting cases: any of the three given circles can be shrunk to zero radius (a point) or expanded to infinite radius (a line). Viète's approach, which uses simpler limiting cases to solve more complicated ones, is considered a plausible reconstruction of Apollonius' method. The method of van Roomen was simplified by Isaac Newton, who showed that Apollonius' problem is equivalent to finding a position from the differences of its distances to three known points. This has applications in navigation and positioning systems such as LORAN. (Full article...)
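Newton's distance-difference formulation is essentially the hyperbolic positioning used by systems such as LORAN. A minimal numerical sketch of the idea (the station coordinates and receiver position are hypothetical; NumPy and SciPy are assumed) recovers a location from the differences of its distances to three known points:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical transmitter positions and an unknown receiver location.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_pos = np.array([30.0, 40.0])

# Observed data: differences of distances to station 0 (as in LORAN).
dist = np.linalg.norm(stations - true_pos, axis=1)
observed_diffs = dist[1:] - dist[0]

def residuals(p):
    r = np.linalg.norm(stations - p, axis=1)
    return (r[1:] - r[0]) - observed_diffs

fit = least_squares(residuals, x0=np.array([50.0, 50.0]))
print(fit.x)  # approximately [30. 40.]
```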
Amalie Emmy Noether (23 March 1882 – 14 April 1935) was a German mathematician known for her landmark contributions to abstract algebra and theoretical physics. Noether was born to a Jewish family in the Franconian town of Erlangen; her father was the mathematician Max Noether. She originally planned to teach French and English after passing the required examinations, but instead studied mathematics at the University of Erlangen, where her father lectured. After completing her doctorate in 1907 under the supervision of Paul Gordan, she worked at the Mathematical Institute of Erlangen without pay for seven years. At the time, women were largely excluded from academic positions. In 1915, she was invited by David Hilbert and Felix Klein to join the mathematics department at the University of Göttingen, a world-renowned center of mathematical research. The philosophical faculty objected, however, and she spent four years lecturing under Hilbert's name. Her habilitation was approved in 1919, allowing her to obtain the rank of Privatdozent. (Full article...)
Image 5
Rejewski, c. 1932
Marian Adam Rejewski (Polish: [ˈmarjan rɛˈjɛfskʲi]; 16 August 1905 – 13 February 1980) was a Polish mathematician and cryptologist who in late 1932 reconstructed, sight-unseen, the German military Enigma cipher machine, aided by limited documents obtained by French military intelligence. Over the next nearly seven years, Rejewski and fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski developed and used techniques and equipment to decrypt the German machine ciphers, even as the Germans introduced modifications to their equipment and encryption procedures. Five weeks before the outbreak of World War II, the Poles, at a conference in Warsaw, shared their achievements with the French and British, thus enabling Britain to begin reading German Enigma-encrypted messages, seven years after Rejewski's original reconstruction of the machine. The intelligence gained by the British from Enigma decrypts formed part of what was code-named Ultra and contributed – perhaps decisively – to the defeat of Germany.
In 1929, while studying mathematics at Poznań University, Rejewski attended a secret cryptology course conducted by the Polish General Staff's Cipher Bureau (Biuro Szyfrów), which he joined in September 1932. The Bureau had had no success in reading Enigma-enciphered messages and set Rejewski to work on the problem in late 1932; he deduced the machine's secret internal wiring after only a few weeks. Rejewski and his two colleagues then developed successive techniques for the regular decryption of Enigma messages. His own contributions included the cryptologic card catalog, derived using the cyclometer that he had invented, and the cryptologic bomb. (Full article...)
Image 6
Josiah Willard Gibbs
Josiah Willard Gibbs (/ɡɪbz/; February 11, 1839 – April 28, 1903) was an American scientist who made significant theoretical contributions to physics, chemistry, and mathematics. His work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous inductive science. Together with James Clerk Maxwell and Ludwig Boltzmann, he created statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of ensembles of the possible states of a physical system composed of many particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. As a mathematician, he invented modern vector calculus (independently of the British scientist Oliver Heaviside, who carried out similar work during the same period).
In 1863, Yale awarded Gibbs the first American doctorate in engineering. After a three-year sojourn in Europe, Gibbs spent the rest of his career at Yale, where he was a professor of mathematical physics from 1871 until his death in 1903. Working in relative isolation, he became the earliest theoretical scientist in the United States to earn an international reputation and was praised by Albert Einstein as "the greatest mind in American history." In 1901, Gibbs received what was then considered the highest honor awarded by the international scientific community, the Copley Medal of the Royal Society of London, "for his contributions to mathematical physics." (Full article...)
Image 7
The repeating decimal continues infinitely
In mathematics, 0.999... (also written as 0.9 with an overline over the 9, in repeating decimal notation) denotes the repeating decimal consisting of an unending sequence of 9s after the decimal point. This repeating decimal represents the smallest number no less than every number in the sequence (0.9, 0.99, 0.999, ...); that is, the supremum of this sequence. This number is equal to 1. In other words, "0.999..." is not "almost exactly" or "very, very nearly but not quite" 1 – rather, "0.999..." and "1" represent exactly the same number.
There are many ways of showing this equality, from intuitive arguments to mathematically rigorous proofs. The technique used depends on the target audience, background assumptions, historical context, and preferred development of the real numbers, the system within which 0.999... is commonly defined. In other systems, 0.999... can have the same meaning, a different definition, or be undefined. (Full article...)
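One of the standard intuitive arguments, shown here as a sketch (informal, since it assumes that ordinary arithmetic carries over to infinite decimals, which the rigorous proofs must justify):

```latex
\begin{aligned}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x      &= 9 \quad\Longrightarrow\quad x = 1.
\end{aligned}
```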
In mathematics, a group is a set and an operation that combines any two elements of the set to produce a third element of the set, in such a way that the operation is associative, an identity element exists and every element has an inverse. These three axioms hold for number systems and many other mathematical structures. For example, the integers together with the addition operation form a group. The concept of a group and the axioms that define it were elaborated for handling, in a unified way, essential structural properties of very different mathematical entities such as numbers, geometric shapes and polynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics.
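To make the axioms concrete, here is a minimal brute-force check (our own illustrative sketch, not from the article) that the integers modulo 5 under addition form a group:

```python
# Verify the group axioms for Z/5Z under addition modulo 5.
n = 5
elements = range(n)
op = lambda a, b: (a + b) % n

# Closure: combining two elements always lands back in the set.
assert all(op(a, b) in elements for a in elements for b in elements)
# Associativity: (a op b) op c equals a op (b op c).
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in elements for b in elements for c in elements)
# Identity: 0 leaves every element unchanged.
assert all(op(0, a) == a and op(a, 0) == a for a in elements)
# Inverses: every element has a partner that sums to the identity.
assert all(any(op(a, b) == 0 for b in elements) for a in elements)
print("(Z_5, +) satisfies the group axioms")
```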
Euler is held to be one of the greatest mathematicians in history and the greatest of the 18th century. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Carl Friedrich Gauss remarked: "The study of Euler's works will remain the best school for the different fields of mathematics, and nothing else can replace it." Euler is also widely considered to be the most prolific mathematician of all time; his 866 publications and his correspondence are collected in the Opera Omnia Leonhard Euler which, when completed, will consist of 81 quarto volumes. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. (Full article...)
For thousands of years, mathematicians have attempted to extend their understanding of π, sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of π for practical computations. Around 250 BC, the Greek mathematician Archimedes created an algorithm to approximate π with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated π to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for π, based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706. (Full article...)
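Archimedes' approach can be sketched with the classical two-mean recurrence on half-perimeters of circumscribed and inscribed regular polygons, starting from hexagons about a unit circle (a modern reconstruction of the idea, not his exact computation):

```python
import math

# a: half-perimeter of the circumscribed polygon (upper bound for pi)
# b: half-perimeter of the inscribed polygon (lower bound for pi)
# Starting values for regular hexagons about a unit circle.
a, b = 2.0 * math.sqrt(3.0), 3.0
sides = 6
for _ in range(4):
    a = 2.0 * a * b / (a + b)  # harmonic mean: circumscribed 2n-gon
    b = math.sqrt(a * b)       # geometric mean: inscribed 2n-gon
    sides *= 2
    print(f"{sides:3d}-gon: {b:.6f} < pi < {a:.6f}")
# The 96-gon line reproduces Archimedes' bounds 3.1408 < pi < 3.1429.
```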
Image 11
Plots of logarithm functions, with three commonly used bases. The special points log_{b} b = 1 are indicated by dotted lines, and all curves intersect at log_{b} 1 = 0.
In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised to produce that number x. In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g. since 1000 = 10 × 10 × 10 = 10^3, the "logarithm base 10" of 1000 is 3, or log_{10} (1000) = 3. The logarithm of x to base b is denoted as log_{b} (x), or without parentheses, log_{b} x, or even without the explicit base, log x, when no confusion is possible or when the base does not matter, such as in big O notation.
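A quick numerical sanity check of this definition, using only Python's standard library (a minimal sketch):

```python
import math

x, b = 1000.0, 10.0
exponent = math.log(x, b)              # to what power must b be raised to give x?
assert math.isclose(exponent, 3.0)     # the logarithm base 10 of 1000 is 3
assert math.isclose(b ** exponent, x)  # exponentiation inverts the logarithm

# Change of base: log_b(x) = ln(x) / ln(b), so any one base suffices.
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))
```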
Originally, Cantor's theory of transfinite numbers was regarded as counter-intuitive – even shocking. This caused it to encounter resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections; see Controversy over Cantor's theory. Cantor, a devout Lutheran Christian, believed the theory had been communicated to him by God. Some Christian theologians (particularly neo-Scholastics) saw Cantor's work as a challenge to the uniqueness of the absolute infinity in the nature of God – on one occasion equating the theory of transfinite numbers with pantheism – a proposition that Cantor vigorously rejected. Not all theologians opposed Cantor's theory, however; the prominent neo-scholastic philosopher Constantin Gutberlet was in favor of it, and Cardinal Johann Baptist Franzelin accepted it as a valid theory (after Cantor made some important clarifications). (Full article...)
Image 14
The first 15,000 partial sums of 0 + 1 − 2 + 3 − 4 + ... The graph is situated with positive integers to the right and negative integers to the left.
The infinite series 1 − 2 + 3 − 4 + ⋯ diverges, meaning that its sequence of partial sums, (1, −1, 2, −2, ...), does not tend towards any finite limit. Nonetheless, in the mid-18th century, Leonhard Euler wrote what he admitted to be a paradoxical equation: 1 − 2 + 3 − 4 + ⋯ = 1/4.
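One modern way to make sense of Euler's value is Abel summation: the power series x − 2x² + 3x³ − ⋯ sums to x/(1 + x)², which tends to 1/4 as x approaches 1 from below. A brief numerical sketch of this (our own illustration, not Euler's derivation):

```python
# Abel summation sketch for 1 - 2 + 3 - 4 + ...: sum the power series
# x - 2x^2 + 3x^3 - ... (which equals x/(1 + x)^2) as x approaches 1.
for x in (0.9, 0.99, 0.999):
    s = sum(n * (-1) ** (n - 1) * x ** n for n in range(1, 20001))
    print(f"x = {x}: series = {s:.6f}, x/(1+x)^2 = {x / (1 + x) ** 2:.6f}")
# Both columns tend to 0.25 = 1/4, Euler's paradoxical value.
```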
Title page of the first edition of Wright's Certaine Errors in Navigation (1599)
Edward Wright (baptised 8 October 1561; died November 1615) was an English mathematician and cartographer noted for his book Certaine Errors in Navigation (1599; 2nd ed., 1610), which for the first time explained the mathematical basis of the Mercator projection by building on the works of Pedro Nunes. The book set out a reference table giving the linear scale multiplication factor as a function of latitude, calculated for each minute of arc up to a latitude of 75°. This was in fact a table of values of the integral of the secant function, and it was the essential step needed to make practical both the making and the navigational use of Mercator charts.
A Lorenz curve shows the distribution of income in a population by plotting the percentage y of total income that is earned by the bottom x percent of households (or individuals). Developed by economist Max O. Lorenz in 1905 to describe income inequality, the curve is typically plotted with a diagonal line (reflecting a hypothetical "equal" distribution of incomes) for comparison. This leads naturally to a derived quantity called the Gini coefficient, first published in 1912 by Corrado Gini, which is the ratio of the area between the diagonal line and the curve (area A in this graph) to the area under the diagonal line (the sum of A and B); higher Gini coefficients reflect more income inequality.

Lorenz's curve is a special kind of cumulative distribution function used to characterize quantities that follow a Pareto distribution, a type of power law. More specifically, it can be used to illustrate the Pareto principle, a rule of thumb stating that roughly 80% of the identified "effects" in a given phenomenon under study will come from 20% of the "causes" (in the first decade of the 20th century Vilfredo Pareto showed that 80% of the land in Italy was owned by 20% of the population). As this so-called "80–20 rule" implies a specific level of inequality (i.e., a specific power law), more or less extreme cases are possible.

For example, in the United States in the first half of the 2010s, 95% of the financial wealth was held by the top 20% of wealthiest households (in 2010), the top 1% of individuals held approximately 40% of the wealth (2012), and the top 1% of income earners received approximately 20% of the pre-tax income (2013). Observations such as these have brought income and wealth inequality into popular consciousness and have given rise to various slogans about "the 1%" versus "the 99%".
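The area-ratio definition of the Gini coefficient translates directly into code. A minimal sketch (the function name and sample data are ours) that builds the empirical Lorenz curve and reads off the coefficient:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the Lorenz curve: the ratio of the area
    between the equality diagonal and the curve (area A) to the whole
    area under the diagonal (A + B = 1/2)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    lorenz = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    p = np.linspace(0.0, 1.0, len(lorenz))  # cumulative population share
    area_under_curve = np.trapz(lorenz, p)  # area B under the Lorenz curve
    return (0.5 - area_under_curve) / 0.5

print(gini([1, 1, 1, 1]))    # 0.0  -- perfect equality
print(gini([0, 0, 0, 10]))   # 0.75 -- with four people, one holds everything
```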
These are Good articles, which meet a core set of high editorial standards.
Image 1
Fredrik Carl Mülertz Størmer (3 September 1874 – 13 August 1957) was a Norwegian mathematician and astrophysicist. In mathematics, he is known for his work in number theory, including the calculation of π and Størmer's theorem on consecutive smooth numbers. In physics, he is known for studying the movement of charged particles in the magnetosphere and the formation of aurorae, and for his book on these subjects, From the Depths of Space to the Heart of the Atom. He worked for many years as a professor of mathematics at the University of Oslo in Norway. A crater on the far side of the Moon is named after him. (Full article...)
Image 2
As the degree of the Taylor polynomial rises, it approaches the correct function. This image shows sin x and its Taylor approximations by polynomials of degree 1, 3, 5, 7, 9, 11, and 13 at x = 0.
In mathematics, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. If 0 is the point where the derivatives are considered, a Taylor series is also called a Maclaurin series, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the mid 1700s.
The partial sum formed by the first n + 1 terms of a Taylor series is a polynomial of degree n that is called the nth Taylor polynomial of the function. Taylor polynomials are approximations of a function, which become generally better as n increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point x if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing x. This implies that the function is analytic at every point of the interval (or disk). (Full article...)
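For instance, the Maclaurin polynomials of sin x can be evaluated and compared with the true value, showing the error shrink as the degree grows (a minimal sketch; the helper name is ours):

```python
import math

def sin_taylor(x, degree):
    """Evaluate the Maclaurin (Taylor at 0) polynomial of sin of the
    given odd degree: x - x^3/3! + x^5/5! - ..."""
    total, term, k = 0.0, x, 1
    while k <= degree:
        total += term
        term *= -x * x / ((k + 1) * (k + 2))  # next odd-power term
        k += 2
    return total

x = 1.0
for degree in (1, 3, 5, 7, 9):
    approx = sin_taylor(x, degree)
    print(f"degree {degree}: {approx:.10f}  error {abs(approx - math.sin(x)):.2e}")
```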
In this graph, an even number of vertices (the four vertices numbered 2, 4, 5, and 6) have odd degrees. The sum of the degrees of all six vertices is twice the number of edges.
In graph theory, a branch of mathematics, the handshaking lemma is the statement that, in every finite undirected graph, the number of vertices that touch an odd number of edges is even. In more colloquial terms, in a party of people some of whom shake hands, the number of people who shake an odd number of other people's hands is even. The handshaking lemma is a consequence of the degree sum formula, also sometimes called the handshaking lemma, according to which the sum of the degrees (the numbers of times each vertex is touched) equals twice the number of edges in the graph. Both results were proven by Leonhard Euler (1736) in his famous paper on the Seven Bridges of Königsberg that began the study of graph theory.
Beyond the Bridges of Königsberg and their generalization to Euler tours, other applications include proving that for certain combinatorial structures, the number of structures is always even, and assisting with the proofs of Sperner's lemma and the mountain climbing problem. The complexity class PPA encapsulates the difficulty of finding a second odd-degree vertex, given one such vertex in a large implicitly defined graph. (Full article...)
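Both statements are easy to check computationally on any edge list (a sketch with a made-up graph):

```python
from collections import defaultdict

# A small undirected graph as an edge list (hypothetical example).
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (1, 5)]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Degree sum formula: every edge contributes to exactly two degrees.
assert sum(degree.values()) == 2 * len(edges)

# Handshaking lemma: the number of odd-degree vertices is even.
odd_vertices = [v for v, d in degree.items() if d % 2 == 1]
assert len(odd_vertices) % 2 == 0
print(f"odd-degree vertices: {sorted(odd_vertices)}")
```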
Emanuel Lasker (24 December 1868 – 11 January 1941) was a German chess player, mathematician, and philosopher who was World Chess Champion for 27 years, from 1894 to 1921. His contemporaries used to say that Lasker used a "psychological" approach to the game, and even that he sometimes deliberately played inferior moves to confuse opponents. Recent analysis, however, indicates that he was ahead of his time and used a more flexible approach than his contemporaries, which mystified many of them. Lasker knew contemporary analyses of openings well but disagreed with many of them. He published chess magazines and five chess books, but later players and commentators found it difficult to draw lessons from his methods. (Full article...)
The maximum spacing method tries to find a distribution function such that the spacings, D_{(i)}, are all approximately of the same length. This is done by maximizing their geometric mean.
The concept underlying the maximum product of spacings (MPS) method is based on the probability integral transform: a set of independent random samples derived from any random variable should on average be uniformly distributed with respect to the cumulative distribution function of the random variable. The MPS method chooses the parameter values that make the observed data as uniform as possible, according to a specific quantitative measure of uniformity. (Full article...)
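A minimal sketch of the idea for an exponential model with unknown rate (the data are simulated, and a plain grid search stands in for a proper optimizer):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.sort(rng.exponential(scale=2.0, size=200))  # true rate = 0.5

def mean_log_spacing(rate):
    """Mean log of the CDF spacings D_(i) under an Exponential(rate)
    model; MPS maximizes this (equivalently, their geometric mean)."""
    cdf = 1.0 - np.exp(-rate * data)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    return np.mean(np.log(np.clip(spacings, 1e-300, None)))

rates = np.linspace(0.05, 2.0, 400)  # crude grid search over the rate
estimate = rates[np.argmax([mean_log_spacing(r) for r in rates])]
print(f"MPS estimate of the rate: {estimate:.3f} (true value 0.5)")
```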
Image 10
The Shapley–Folkman lemma is illustrated by the Minkowski addition of four sets. The point (+) in the convex hull of the Minkowski sum of the four non-convex sets (right) is the sum of four points (+) from the (left-hand) sets—two points in two non-convex sets plus two points in the convex hulls of two sets. The convex hulls are shaded pink. The original sets each have exactly two points (shown as red dots).
The Shapley–Folkman lemma is a result in convex geometry with applications in mathematical economics that describes the Minkowski addition of sets in a vector space. Minkowski addition is defined as the addition of the sets' members: for example, adding the set consisting of the integers zero and one to itself yields the set consisting of zero, one, and two: {0, 1} + {0, 1} = {0 + 0, 0 + 1, 1 + 0, 1 + 1} = {0, 1, 2}. The Shapley–Folkman lemma and related results provide an affirmative answer to the question, "Is the sum of many sets close to being convex?" A set is defined to be convex if every line segment joining two of its points is a subset of the set: for example, the solid disk is a convex set but the circle is not, because the line segment joining two distinct points of the circle is not a subset of the circle. The Shapley–Folkman lemma suggests that if the number of summed sets exceeds the dimension of the vector space, then their Minkowski sum is approximately convex.
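For finite sets of numbers, Minkowski addition is a one-line computation; the example from the text (a trivial sketch, with a helper name of our choosing):

```python
from itertools import product

def minkowski_sum(A, B):
    """Minkowski sum of two finite sets of numbers: all pairwise sums."""
    return {a + b for a, b in product(A, B)}

print(minkowski_sum({0, 1}, {0, 1}))  # {0, 1, 2}
```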
The Shapley–Folkman lemma was introduced as a step in the proof of the Shapley–Folkman theorem, which states an upper bound on the distance between the Minkowski sum and its convex hull. The convex hull of a set Q is the smallest convex set that contains Q. This distance is zero if and only if the sum is convex. The theorem's bound on the distance depends on the dimension D and on the shapes of the summand-sets, but not on the number of summand-sets N, when N > D. The shapes of a subcollection of only D summand-sets determine the bound on the distance between the Minkowski average of N sets, (1/N)(Q_{1} + Q_{2} + ... + Q_{N}), and its convex hull. As N increases to infinity, the bound decreases to zero (for summand-sets of uniformly bounded size). The Shapley–Folkman theorem's upper bound was decreased by Starr's corollary (alternatively, the Shapley–Folkman–Starr theorem). (Full article...)
Image 11
In mathematics, the Schwarz lantern is a polyhedral approximation to a cylinder, used as a pathological example of the difficulty of defining the area of a smooth (curved) surface as the limit of the areas of polyhedra. It is formed by stacked rings of isosceles triangles, arranged within each ring in the same pattern as an antiprism. The resulting shape can be folded from paper, and is named after mathematician Hermann Schwarz and for its resemblance to a cylindrical paper lantern. It is also known as Schwarz's boot, Schwarz's polyhedron, or the Chinese lantern.
As Schwarz showed, for the surface area of a polyhedron to converge to the surface area of a curved surface, it is not sufficient to simply increase the number of rings and the number of isosceles triangles per ring. Depending on the relation of the number of rings to the number of triangles per ring, the area of the lantern can converge to the area of the cylinder, to a limit arbitrarily larger than the area of the cylinder, or to infinity—in other words, the area can diverge. The Schwarz lantern demonstrates that sampling a curved surface by close-together points and connecting them by small triangles is inadequate to ensure an accurate approximation of area, in contrast to the accurate approximation of arc length by inscribed polygonal chains. (Full article...)
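The divergence is easy to reproduce numerically by building the lantern's vertices and summing triangle areas directly (a sketch; the unit radius and height and the exact vertex layout are our choices):

```python
import math

def lantern_area(rings, n, radius=1.0, height=1.0):
    """Total area of a Schwarz lantern: `rings` bands of 2n triangles
    inscribed in a cylinder, consecutive vertex rings rotated by pi/n."""
    def ring(k):
        off = (math.pi / n) * (k % 2)
        z = height * k / rings
        return [(radius * math.cos(2 * math.pi * j / n + off),
                 radius * math.sin(2 * math.pi * j / n + off), z)
                for j in range(n)]

    def tri_area(a, b, c):
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        cross = (u[1] * v[2] - u[2] * v[1],
                 u[2] * v[0] - u[0] * v[2],
                 u[0] * v[1] - u[1] * v[0])
        return 0.5 * math.hypot(*cross)

    total = 0.0
    for k in range(rings):
        lo, hi = ring(k), ring(k + 1)
        for j in range(n):
            jj = (j + 1) % n
            if k % 2 == 0:  # upper ring is the rotated one
                total += tri_area(lo[j], lo[jj], hi[j]) + tri_area(hi[j], hi[jj], lo[jj])
            else:           # lower ring is the rotated one
                total += tri_area(lo[j], lo[jj], hi[jj]) + tri_area(hi[j], hi[jj], lo[j])
    return total

print(f"cylinder lateral area: {2 * math.pi:.4f}")
for n in (4, 8, 16):
    # rings growing like n converges to the cylinder's area;
    # rings growing like n**3 makes the area blow up.
    print(f"n = {n:2d}: {lantern_area(n, n):.4f} (rings = n), "
          f"{lantern_area(n**3, n):.1f} (rings = n^3)")
```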
...that the Catalan numbers solve a number of problems in combinatorics such as the number of ways to completely parenthesize an algebraic expression with n+1 factors?
...that a ball can be cut up and reassembled into two balls, each the same size as the original (Banach–Tarski paradox)?
A fractal is "a rough or fragmented geometric shape that can be subdivided in parts, each of which is (at least approximately) a reduced-size copy of the whole". The term was coined by Benoît Mandelbrot in 1975 and was derived from the Latin fractus meaning "broken" or "fractured".
A fractal as a geometric object generally has the following features:
It has a fine structure at arbitrarily small scales.
It is too irregular to be easily described in traditional Euclidean geometric language.
Because they appear similar at all levels of magnification, fractals are often considered to be infinitely complex (in informal terms). Natural objects that approximate fractals to a degree include clouds, mountain ranges, lightning bolts, coastlines, and snowflakes. However, not all self-similar objects are fractals – for example, the real line (a straight Euclidean line) is formally self-similar but fails to have other fractal characteristics. When zoomed in, a fractal keeps revealing more and more detail, at every scale, without end. (Full article...)
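The "fine structure at arbitrarily small scales" can be made concrete with the Koch curve: each refinement replaces every segment by four segments one-third as long, so new detail appears at every scale while the total length grows without bound (a small sketch of the bookkeeping, not a plot of the curve):

```python
# Koch-curve refinement: each step replaces every segment with four
# segments a third as long, so the length grows by a factor of 4/3
# per step while the curve stays inside a bounded region.
length, segments = 1.0, 1
for step in range(1, 9):
    segments *= 4
    length *= 4 / 3
    print(f"step {step}: {segments:7d} segments, total length {length:.4f}")
```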