In number theory, given a positive integer "n" and an integer "a" coprime to "n", the multiplicative order of "a" modulo "n" is the smallest positive integer "k" such that $a^k \equiv 1 \pmod n$. In other words, the multiplicative order of "a" modulo "n" is the order of "a" in the multiplicative group of the units in the ring of the integers modulo "n". The order of "a" modulo "n" is sometimes written as $\operatorname{ord}_n(a)$. Example. The powers of 4 modulo 7 are as follows: $\begin{array}{llll} 4^0 &= 1 &=0 \times 7 + 1 &\equiv 1\pmod7 \\ 4^1 &= 4 &=0 \times 7 + 4 &\equiv 4\pmod7 \\ 4^2 &= 16 &=2 \times 7 + 2 &\equiv 2\pmod7 \\ 4^3 &= 64 &=9 \times 7 + 1 &\equiv 1\pmod7 \\ 4^4 &= 256 &=36 \times 7 + 4 &\equiv 4\pmod7 \\ 4^5 &= 1024 &=146 \times 7 + 2 &\equiv 2\pmod7 \\ \vdots\end{array}$ The smallest positive integer "k" such that $4^k \equiv 1 \pmod 7$ is 3, so the order of 4 (mod 7) is 3. Properties. Even without knowing that we are working in the multiplicative group of integers modulo "n", we can show that "a" actually has an order, by noting that the powers of "a" can only take a finite number of different values modulo "n", so according to the pigeonhole principle there must be two exponents, say "s" and "t", with "s" > "t" without loss of generality, such that $a^s \equiv a^t \pmod n$. Since "a" and "n" are coprime, "a" has an inverse element $a^{-1}$, and we can multiply both sides of the congruence by $a^{-t}$, yielding $a^{s-t} \equiv 1 \pmod n$. The concept of multiplicative order is a special case of the order of group elements. The multiplicative order of a number "a" modulo "n" is the order of "a" in the multiplicative group whose elements are the residues modulo "n" of the numbers coprime to "n", and whose group operation is multiplication modulo "n". This is the group of units of the ring Z"n"; it has "φ"("n") elements, φ being Euler's totient function, and is denoted as "U"("n") or "U"(Z"n"). As a consequence of Lagrange's theorem, the order of "a" (mod "n") always divides "φ"("n"). If the order of "a" is actually equal to "φ"("n"), and therefore as large as possible, then "a" is called a primitive root modulo "n". This means that the group "U"("n") is cyclic and the residue class of "a" generates it. The order of "a" (mod "n") also divides λ("n"), a value of the Carmichael function, which is an even stronger statement than the divisibility of "φ"("n").
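The order can be computed by the naive repeated multiplication implicit in the example above. The following is a minimal Python sketch (the function name is our own choice, not a standard API); termination is guaranteed by the pigeonhole argument just given.

```python
# A minimal sketch of computing the multiplicative order by repeated
# multiplication; multiplicative_order is our own helper name.
from math import gcd

def multiplicative_order(a: int, n: int) -> int:
    """Smallest k > 0 with a**k = 1 (mod n); requires gcd(a, n) == 1."""
    if gcd(a, n) != 1:
        raise ValueError("a must be coprime to n")
    k, power = 1, a % n
    while power != 1:
        power = (power * a) % n
        k += 1
    return k

print(multiplicative_order(4, 7))  # 3, matching the example above
```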
Irreducible polynomial whose roots are nth roots of unity In mathematics, the "n"th cyclotomic polynomial, for any positive integer "n", is the unique irreducible polynomial with integer coefficients that is a divisor of $x^n-1$ and is not a divisor of $x^k-1$ for any "k" < "n". Its roots are all the primitive "n"th roots of unity $e^{2i\pi\frac{k}{n}}$, where "k" runs over the positive integers not greater than "n" and coprime to "n" (and "i" is the imaginary unit). In other words, the "n"th cyclotomic polynomial is equal to $ \Phi_n(x) = \prod_{\stackrel{1\leqslant k\leqslant n}{\gcd(k,n)=1}}\left(x-e^{2i\pi\frac{k}{n}}\right). $ It may also be defined as the monic polynomial with integer coefficients that is the minimal polynomial over the field of the rational numbers of any primitive "n"th root of unity ($ e^{2i\pi/n} $ is an example of such a root). An important relation linking cyclotomic polynomials and primitive roots of unity is $\prod_{d\mid n}\Phi_d(x) = x^n - 1,$ showing that "x" is a root of $x^n - 1$ if and only if it is a primitive "d"th root of unity for some "d" that divides "n". Examples. If "n" is a prime number, then $\Phi_n(x) = 1+x+x^2+\cdots+x^{n-1}=\sum_{k=0}^{n-1} x^k.$ If "n" = 2"p" where "p" is an odd prime number, then $\Phi_{2p}(x) = 1-x+x^2-\cdots+x^{p-1}=\sum_{k=0}^{p-1} (-x)^k.$ For "n" up to 30, the cyclotomic polynomials are: $\begin{align} \Phi_1(x) &= x - 1 \\ \Phi_2(x) &= x + 1 \\ \Phi_3(x) &= x^2 + x + 1 \\ \Phi_4(x) &= x^2 + 1 \\ \Phi_5(x) &= x^4 + x^3 + x^2 + x +1 \\ \Phi_6(x) &= x^2 - x + 1 \\ \Phi_7(x) &= x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 \\ \Phi_8(x) &= x^4 + 1 \\ \Phi_9(x) &= x^6 + x^3 + 1 \\ \Phi_{10}(x) &= x^4 - x^3 + x^2 - x + 1 \\ \Phi_{11}(x) &= x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 \\ \Phi_{12}(x) &= x^4 - x^2 + 1 \\ \Phi_{13}(x) &= x^{12} + x^{11} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 \\ \Phi_{14}(x) &= x^6 - x^5 + x^4 - x^3 + x^2 - x + 1 \\ \Phi_{15}(x) &= x^8 - x^7 + x^5 - x^4 + x^3 - x + 1 \\ \Phi_{16}(x) &= x^8 + 1 \\ \Phi_{17}(x) &= x^{16} + x^{15} + x^{14} + x^{13} + x^{12} + x^{11} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1\\ \Phi_{18}(x) &= x^6 - x^3 + 1 \\ \Phi_{19}(x) &= x^{18} + x^{17} + x^{16} + x^{15} + x^{14} + x^{13} + x^{12} + x^{11} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1\\ \Phi_{20}(x) &= x^8 - x^6 + x^4 - x^2 + 1 \\ \Phi_{21}(x) &= x^{12} - x^{11} + x^9 - x^8 + x^6 - x^4 + x^3 - x + 1 \\ \Phi_{22}(x) &= x^{10} - x^9 + x^8 - x^7 + x^6 - x^5 + x^4 - x^3 + x^2 - x + 1 \\ \Phi_{23}(x) &= x^{22} + x^{21} + x^{20} + x^{19} + x^{18} + x^{17} + x^{16} + x^{15} + x^{14} + x^{13} + x^{12} \\ & \qquad\quad + x^{11} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 \\ \Phi_{24}(x) &= x^8 - x^4 + 1 \\ \Phi_{25}(x) &= x^{20} + x^{15} + x^{10} + x^5 + 1 \\ \Phi_{26}(x) &= x^{12} - x^{11} + x^{10} - x^9 + x^8 - x^7 + x^6 - x^5 + x^4 - x^3 + x^2 - x + 1 \\ \Phi_{27}(x) &= x^{18} + x^9 + 1 \\ \Phi_{28}(x) &= x^{12} - x^{10} + x^8 - x^6 + x^4 - x^2 + 1 \\ \Phi_{29}(x) &= x^{28} + x^{27} + x^{26} + x^{25} + x^{24} + x^{23} + x^{22} + x^{21} + x^{20} + x^{19} + x^{18} + x^{17} + x^{16} + x^{15} \\ & \qquad\quad + x^{14} + x^{13} + x^{12} + x^{11} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 \\ \Phi_{30}(x) &= x^8 + x^7 - x^5 - x^4 - x^3 + x + 1. 
\end{align}$ The case of the 105th cyclotomic polynomial is interesting because 105 is the least positive integer that is the product of three distinct odd prime numbers (3 × 5 × 7) and this polynomial is the first one that has a coefficient other than 1, 0, or −1: $\begin{align} \Phi_{105}(x) &= x^{48} + x^{47} + x^{46} - x^{43} - x^{42} - 2 x^{41} - x^{40} - x^{39} + x^{36} + x^{35} + x^{34} + x^{33} + x^{32} + x^{31} - x^{28} - x^{26} \\ &\qquad\quad - x^{24} - x^{22} - x^{20} + x^{17} + x^{16} + x^{15} + x^{14} + x^{13} + x^{12} - x^9 - x^8 - 2 x^7 - x^6 - x^5 + x^2 + x + 1. \end{align}$ Properties. Fundamental tools. The cyclotomic polynomials are monic polynomials with integer coefficients that are irreducible over the field of the rational numbers. Except for "n" equal to 1 or 2, they are palindromes of even degree. The degree of $\Phi_n$, or in other words the number of "n"th primitive roots of unity, is $\varphi (n)$, where $\varphi$ is Euler's totient function. The fact that $\Phi_n$ is an irreducible polynomial of degree $\varphi (n)$ in the ring $\Z[x]$ is a nontrivial result due to Gauss. Depending on the chosen definition, it is either the value of the degree or the irreducibility which is a nontrivial result. The case of prime "n" is easier to prove than the general case, thanks to Eisenstein's criterion. A fundamental relation involving cyclotomic polynomials is $\begin{align} x^n - 1 &=\prod_{1\leqslant k\leqslant n} \left(x- e^{2i\pi\frac{k}{n}} \right) \\ &= \prod_{d \mid n} \prod_{1 \leqslant k \leqslant n \atop \gcd(k, n) = d} \left(x- e^{2i\pi\frac{k}{n}} \right) \\ &=\prod_{d \mid n} \Phi_{\frac{n}{d}}(x) = \prod_{d\mid n} \Phi_d(x).\end{align}$ which means that each "n"-th root of unity is a primitive "d"-th root of unity for a unique "d" dividing "n". The Möbius inversion formula allows the expression of $\Phi_n(x)$ as an explicit rational fraction: $\Phi_n(x)=\prod_{d\mid n}(x^d-1)^{\mu \left (\frac{n}{d} \right )}, $ where $\mu$ is the Möbius function. The cyclotomic polynomial $\Phi_{n}(x)$ may be computed by (exactly) dividing $x^n-1$ by the cyclotomic polynomials of the proper divisors of "n", previously computed recursively by the same method: $\Phi_n(x)=\frac{x^{n}-1}{\prod_{\stackrel{d\mid n}{d<n}}\Phi_{d}(x)}$ This formula defines an algorithm for computing $\Phi_n(x)$ for any "n", provided integer factorization and division of polynomials are available. Many computer algebra systems, such as SageMath, Maple, Mathematica, and PARI/GP, have a built-in function to compute the cyclotomic polynomials.
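The recursive division just described is straightforward to implement directly. The following minimal Python sketch uses only the standard library; the function names and the coefficient-list representation (index = degree) are our own choices, not a fixed API.

```python
# A sketch of the recursive algorithm above: divide x^n - 1 by the
# cyclotomic polynomials of the proper divisors of n.  Polynomials are
# lists of integer coefficients, index = degree.

def poly_div_exact(num, den):
    """Exact division of integer polynomials (ascending coefficients)."""
    num = num[:]
    quo = [0] * (len(num) - len(den) + 1)
    for i in range(len(quo) - 1, -1, -1):
        quo[i] = num[i + len(den) - 1] // den[-1]
        for j, c in enumerate(den):
            num[i + j] -= quo[i] * c
    assert all(c == 0 for c in num), "division was not exact"
    return quo

def cyclotomic(n, _cache={}):
    if n not in _cache:
        p = [-1] + [0] * (n - 1) + [1]          # x^n - 1
        for d in range(1, n):
            if n % d == 0:                      # proper divisor of n
                p = poly_div_exact(p, cyclotomic(d))
        _cache[n] = p
    return _cache[n]

print(cyclotomic(12))        # [1, 0, -1, 0, 1], i.e. x^4 - x^2 + 1
print(min(cyclotomic(105)))  # -2, the coefficient outside {-1, 0, 1}
```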
Easy cases for computation. As noted above, if "n" is a prime number, then $\Phi_n(x) = 1+x+x^2+\cdots+x^{n-1}=\sum_{k=0}^{n-1}x^k.$ If "n" is an odd integer greater than one, then $\Phi_{2n}(x) = \Phi_n(-x).$ In particular, if "n" = 2"p" is twice an odd prime, then (as noted above) $\Phi_n(x) = 1-x+x^2-\cdots+x^{p-1}=\sum_{k=0}^{p-1}(-x)^k.$ If $n = p^m$ is a prime power (where "p" is prime), then $\Phi_n(x) = \Phi_p(x^{p^{m-1}}) =\sum_{k=0}^{p-1}x^{kp^{m-1}}.$ More generally, if $n = p^m r$ with "r" relatively prime to "p", then $\Phi_n(x) = \Phi_{pr}(x^{p^{m-1}}).$ These formulas may be applied repeatedly to get a simple expression for any cyclotomic polynomial $\Phi_n(x)$ in terms of a cyclotomic polynomial of square-free index: If "q" is the product of the prime divisors of "n" (its radical), then $\Phi_n(x) = \Phi_q(x^{n/q}).$ This makes it possible to give formulas for the "n"th cyclotomic polynomial when "n" has at most one odd prime factor: If "p" is an odd prime number, and "h" and "k" are positive integers, then: $\Phi_{2^h}(x) = x^{2^{h-1}}+1$ $\Phi_{p^k}(x) = \sum_{j=0}^{p-1}x^{jp^{k-1}}$ $\Phi_{2^hp^k}(x) = \sum_{j=0}^{p-1}(-1)^jx^{j2^{h-1}p^{k-1}}$ For the other values of "n", the computation of the "n"th cyclotomic polynomial is similarly reduced to that of $\Phi_q(x),$ where "q" is the product of the distinct odd prime divisors of "n". To deal with this case, one has that, for "p" prime and not dividing "n", $\Phi_{np}(x)=\Phi_{n}(x^p)/\Phi_n(x).$ Integers appearing as coefficients. The problem of bounding the magnitude of the coefficients of the cyclotomic polynomials has been the object of a number of research papers. Several survey papers give an overview. If "n" has at most two distinct odd prime factors, then Migotti showed that the coefficients of $\Phi_n$ are all in the set {1, −1, 0}. The first cyclotomic polynomial for a product of three different odd prime factors is $\Phi_{105}(x);$ it has a coefficient −2 (see its expression above). The converse is not true: $\Phi_{231}(x)=\Phi_{3\times 7\times 11}(x)$ only has coefficients in {1, −1, 0}. If "n" is a product of more different odd prime factors, the coefficients may increase to very high values. E.g., $\Phi_{15015}(x) =\Phi_{3\times 5\times 7\times 11\times 13}(x)$ has coefficients running from −22 to 23, and $\Phi_{255255}(x)=\Phi_{3\times 5\times 7\times 11\times 13\times 17}(x)$, for the smallest "n" that is a product of 6 different odd primes, has coefficients of magnitude up to 532. Let "A"("n") denote the maximum absolute value of the coefficients of $\Phi_n$. It is known that for any positive "k", the number of "n" up to "x" with "A"("n") > $n^k$ is at least "c"("k")⋅"x" for a positive "c"("k") depending on "k" and "x" sufficiently large. In the opposite direction, for any function ψ("n") tending to infinity with "n", we have "A"("n") bounded above by $n^{\psi(n)}$ for almost all "n". A combination of theorems of Bateman and Vaughan states that, on the one hand, for every $\varepsilon>0$, we have $A(n) < e^{\left(n^{(\log 2+\varepsilon)/(\log\log n)}\right)}$ for all sufficiently large positive integers $n$, and on the other hand, we have $A(n) > e^{\left(n^{(\log 2)/(\log\log n)}\right)}$ for infinitely many positive integers $n$. This implies in particular that univariate polynomials (concretely $x^n-1$ for infinitely many positive integers $n$) can have factors (like $\Phi_n$) whose coefficients are superpolynomially larger than the original coefficients. This is not too far from the general Landau–Mignotte bound. Gauss's formula.
Let "n" be odd, square-free, and greater than 3. Then: $4\Phi_n(z) = A_n^2(z) - (-1)^{\frac{n-1}{2}}nz^2B_n^2(z)$ where both "An"("z") and "Bn"("z") have integer coefficients, "An"("z") has degree "φ"("n")/2, and "Bn"("z") has degree "φ"("n")/2 − 2. Furthermore, "An"("z") is palindromic when its degree is even; if its degree is odd it is antipalindromic. Similarly, "Bn"("z") is palindromic unless "n" is composite and ≡ 3 (mod 4), in which case it is antipalindromic. The first few cases are $\begin{align} 4\Phi_5(z) &=4(z^4+z^3+z^2+z+1)\\ &= (2z^2+z+2)^2 - 5z^2 \\[6pt] 4\Phi_7(z) &=4(z^6+z^5+z^4+z^3+z^2+z+1)\\ &= (2z^3+z^2-z-2)^2+7z^2(z+1)^2 \\ [6pt] 4\Phi_{11}(z) &=4(z^{10}+z^9+z^8+z^7+z^6+z^5+z^4+z^3+z^2+z+1)\\ &= (2z^5+z^4-2z^3+2z^2-z-2)^2+11z^2(z^3+1)^2 \end{align}$ Lucas's formula. Let "n" be odd, square-free and greater than 3. Then $\Phi_n(z) = U_n^2(z) - (-1)^{\frac{n-1}{2}}nzV_n^2(z)$ where both "Un"("z") and "Vn"("z") have integer coefficients, "Un"("z") has degree "φ"("n")/2, and "Vn"("z") has degree "φ"("n")/2 − 1. This can also be written $\Phi_n \left ((-1)^{\frac{n-1}{2}}z \right ) = C_n^2(z) - nzD_n^2(z).$ If "n" is even, square-free and greater than 2 (this forces "n"/2 to be odd), $\Phi_{\frac{n}{2}} \left (-z^2 \right ) = \Phi_{2n}(z)= C_n^2(z) - nzD_n^2(z)$ where both "Cn"("z") and "Dn"("z") have integer coefficients, "Cn"("z") has degree "φ"("n"), and "Dn"("z") has degree "φ"("n") − 1. "Cn"("z") and "Dn"("z") are both palindromic. The first few cases are: $\begin{align} \Phi_3(-z) &=\Phi_6(z) =z^2-z+1 \\ &= (z+1)^2 - 3z \\[6pt] \Phi_5(z) &=z^4+z^3+z^2+z+1 \\ &= (z^2+3z+1)^2 - 5z(z+1)^2 \\[6pt] \Phi_{6/2}(-z^2) &=\Phi_{12}(z)=z^4-z^2+1 \\ &= (z^2+3z+1)^2 - 6z(z+1)^2 \end{align}$ Sister Beiter conjecture. The Sister Beiter conjecture is concerned with the maximal size (in absolute value) $A(pqr)$ of coefficients of "ternary cyclotomic polynomials" $\Phi_{pqr}(x)$ where $3\leq p\leq q\leq r$ are three prime numbers. Cyclotomic polynomials over a finite field and over the "p"-adic integers. Over a finite field with a prime number "p" of elements, for any integer "n" that is not a multiple of "p", the cyclotomic polynomial $\Phi_n$ factorizes into $\frac{\varphi (n)}{d}$ irreducible polynomials of degree "d", where $\varphi (n)$ is Euler's totient function and "d" is the multiplicative order of "p" modulo "n". In particular, $\Phi_n$ is irreducible if and only if "p" is a primitive root modulo n, that is, "p" does not divide "n", and its multiplicative order modulo "n" is $\varphi(n)$, the degree of $\Phi_n$. These results are also true over the p-adic integers, since Hensel's lemma allows lifting a factorization over the field with "p" elements to a factorization over the "p"-adic integers. Polynomial values. If "x" takes any real value, then $\Phi_n(x)>0$ for every "n" ≥ 3 (this follows from the fact that the roots of a cyclotomic polynomial are all non-real, for "n" ≥ 3). For studying the values that a cyclotomic polynomial may take when "x" is given an integer value, it suffices to consider only the case "n" ≥ 3, as the cases "n" = 1 and "n" = 2 are trivial (one has $\Phi_1(x)=x-1$ and $\Phi_2(x)=x+1$). For "n" ≥ 2, one has $\Phi_n(0) =1,$ $\Phi_n(1) =1$ if "n" is not a prime power, $\Phi_n(1) =p$ if $n=p^k$ is a prime power with "k" ≥ 1. The values that a cyclotomic polynomial $\Phi_n(x)$ may take for other integer values of "x" is strongly related with the multiplicative order modulo a prime number. 
Polynomial values. If "x" takes any real value, then $\Phi_n(x)>0$ for every "n" ≥ 3 (this follows from the fact that the roots of a cyclotomic polynomial are all non-real, for "n" ≥ 3). For studying the values that a cyclotomic polynomial may take when "x" is given an integer value, it suffices to consider only the case "n" ≥ 3, as the cases "n" = 1 and "n" = 2 are trivial (one has $\Phi_1(x)=x-1$ and $\Phi_2(x)=x+1$). For "n" ≥ 2, one has $\Phi_n(0) =1,$ $\Phi_n(1) =1$ if "n" is not a prime power, $\Phi_n(1) =p$ if $n=p^k$ is a prime power with "k" ≥ 1. The values that a cyclotomic polynomial $\Phi_n(x)$ may take for other integer values of "x" is strongly related to the multiplicative order modulo a prime number. More precisely, given a prime number "p" and an integer "b" coprime with "p", the multiplicative order of "b" modulo "p" is the smallest positive integer "n" such that "p" is a divisor of $b^n-1.$ For "b" > 1, the multiplicative order of "b" modulo "p" is also the shortest period of the representation of 1/"p" in the numeral base "b" (see Unique prime; this explains the notation choice). The definition of the multiplicative order implies that, if "n" is the multiplicative order of "b" modulo "p", then "p" is a divisor of $\Phi_n(b).$ The converse is not true, but one has the following. If "n" > 0 is a positive integer and "b" > 1 is an integer, then (see below for a proof) $\Phi_n(b)=2^kgh,$ where "k" is a non-negative integer that is always 0 when "b" is even (and, when "n" is neither 1 nor 2, is at most 1, being 0 whenever "n" is not a power of 2), "g" is 1 or the largest prime factor of "n", and "h" is odd, coprime with "n", and its prime factors are exactly the odd primes "p" such that "n" is the multiplicative order of "b" modulo "p". This implies that, if "p" is an odd prime divisor of $\Phi_n(b),$ then either "n" is a divisor of "p" − 1 or "p" is a divisor of "n". In the latter case, $p^2$ does not divide $\Phi_n(b).$ Zsigmondy's theorem implies that the only cases where "b" > 1 and "h" = 1 are $\begin{align} \Phi_1(2) &=1 \\ \Phi_2 \left (2^k-1 \right ) & =2^k && k >0 \\ \Phi_6(2) &=3 \end{align}$ It follows from the above factorization that the odd prime factors of $\frac{\Phi_n(b)}{\gcd(n,\Phi_n(b))}$ are exactly the odd primes "p" such that "n" is the multiplicative order of "b" modulo "p". This fraction may be even only when "b" is odd. In this case, the multiplicative order of "b" modulo 2 is always 1. There are many pairs ("n", "b") with "b" > 1 such that $\Phi_n(b)$ is prime. In fact, the Bunyakovsky conjecture implies that, for every "n", there are infinitely many "b" > 1 such that $\Phi_n(b)$ is prime. See OEIS:  for the list of the smallest "b" > 1 such that $\Phi_n(b)$ is prime (the smallest "b" > 1 such that $\Phi_n(b)$ is prime is about $\lambda \cdot \varphi(n)$, where $\lambda$ is the Euler–Mascheroni constant, and $\varphi$ is Euler's totient function). See also OEIS:  for the list of the smallest primes of the form $\Phi_n(b)$ with "n" > 2 and "b" > 1, and, more generally, OEIS: , for the smallest positive integers of this form. Applications. Using $\Phi_n$, one can give an elementary proof for the infinitude of primes congruent to 1 modulo "n", which is a special case of Dirichlet's theorem on arithmetic progressions.
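As a small numeric check of the divisor property stated above (assuming SymPy; factorint, n_order and cyclotomic_poly are the SymPy functions used):

```python
# Every odd prime factor p of Phi_n(b) satisfies n | p - 1 or p | n.
from sympy import cyclotomic_poly, factorint, n_order

n, b = 10, 3
value = int(cyclotomic_poly(n, b))      # Phi_10(3) = 61
for p in factorint(value):
    if p == 2:
        continue
    assert (p - 1) % n == 0 or n % p == 0
    print(p, n_order(b, p))             # 61 10: 10 is the order of 3 mod 61
```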
In the mathematical field of graph theory, the Hoffman graph is a 4-regular graph with 16 vertices and 32 edges discovered by Alan Hoffman. Published in 1963, it is cospectral to the hypercube graph Q4. The Hoffman graph shares many properties with the hypercube Q4—both are Hamiltonian and have chromatic number 2, chromatic index 4, girth 4 and diameter 4. It is also a 4-vertex-connected graph and a 4-edge-connected graph. However, it is not distance-regular. It has book thickness 3 and queue number 2. Algebraic properties. The Hoffman graph is not a vertex-transitive graph and its full automorphism group is a group of order 48 isomorphic to the direct product of the symmetric group S4 and the cyclic group Z/2Z. The characteristic polynomial of the Hoffman graph is equal to $(x-4) (x-2)^4 x^6 (x+2)^4 (x+4),$ making it an integral graph—a graph whose spectrum consists entirely of integers. Its spectrum is the same as that of the hypercube Q4.
Pollard's "p" − 1 algorithm is a number theoretic integer factorization algorithm, invented by John Pollard in 1974. It is a special-purpose algorithm, meaning that it is only suitable for integers with specific types of factors; it is the simplest example of an algebraic-group factorisation algorithm. The factors it finds are ones for which the number preceding the factor, "p" − 1, is powersmooth; the essential observation is that, by working in the multiplicative group modulo a composite number "N", we are also working in the multiplicative groups modulo all of "N"'s factors. The existence of this algorithm leads to the concept of safe primes, being primes for which "p" − 1 is two times a Sophie Germain prime "q" and thus minimally smooth. These primes are sometimes construed as "safe for cryptographic purposes", but they might be "unsafe" — in current recommendations for cryptographic strong primes ("e.g." ANSI X9.31), it is necessary but not sufficient that "p" − 1 has at least one large prime factor. Most sufficiently large primes are strong; if a prime used for cryptographic purposes turns out to be non-strong, it is much more likely to be through malice than through an accident of random number generation. This terminology is considered obsolete by the cryptography industry: the ECM factorization method is more efficient that Pollard's algorithm and finds safe prime factors just as quickly as it finds non-safe prime factors of similar size, thus the size of "p" is the key security parameter, not the smoothness of "p-1". Base concepts. Let "n" be a composite integer with prime factor "p". By Fermat's little theorem, we know that for all integers "a" coprime to "p" and for all positive integers "K": $a^{K(p-1)} \equiv 1\pmod{p}$ If a number "x" is congruent to 1 modulo a factor of "n", then the gcd("x" − 1, "n") will be divisible by that factor. The idea is to make the exponent a large multiple of "p" − 1 by making it a number with very many prime factors; generally, we take the product of all prime powers less than some limit "B". Start with a random "x", and repeatedly replace it by $x^w \bmod n$ as "w" runs through those prime powers. Check at each stage, or once at the end if you prefer, whether gcd("x" − 1, "n") is not equal to 1. Multiple factors. It is possible that for all the prime factors "p" of "n", "p" − 1 is divisible by small primes, at which point the Pollard "p" − 1 algorithm gives you "n" again. Algorithm and running time. The basic algorithm can be written as follows: Inputs: "n": a composite number Output: a nontrivial factor of "n" or failure # select a smoothness bound "B" # define $M = \prod_{\text{primes}~q \le B} q^{ \lfloor \log_q{B} \rfloor }$ (note: explicitly evaluating "M" may not be necessary) # randomly pick "a" coprime to "n" (note: we can actually fix "a", e.g. if "n" is odd, then we can always select "a" = 2, random selection here is not imperative) # compute "g" gcd("a""M" − 1, "n") (note: exponentiation can be done modulo "n") # if 1 < "g" < "n" then return "g" # if "g" 1 then select a larger "B" and go to step 2 or return failure # if "g" "n" then select a smaller "B" and go to step 2 or return failure If "g" 1 in step 6, this indicates there are no prime factors "p" for which "p-1" is "B"-powersmooth. If "g" "n" in step 7, this usually indicates that all factors were "B"-powersmooth, but in rare cases it could indicate that "a" had a small order modulo "n". 
The running time of this algorithm is $O(B \log B \log^2 n)$; larger values of "B" make it run slower, but are more likely to produce a factor. Example. If we want to factor the number "n" = 299: # We select "B" = 5. # Thus $M = 2^2 \times 3 \times 5 = 60$. # We select "a" = 2. # "g" = gcd("a""M" − 1, "n") = 13. # Since 1 < 13 < 299, return 13. # 299 / 13 = 23 is prime, thus it is fully factored: 299 = 13 × 23. How to choose "B"? Since the algorithm is incremental, it can just keep running with the bound constantly increasing. Assume that "p" − 1, where "p" is the smallest prime factor of "n", can be modelled as a random number of size less than √"n". By Dixon's theorem, the probability that the largest factor of such a number is less than $(p-1)^{1/\varepsilon}$ is roughly $\varepsilon^{-\varepsilon}$; so there is a probability of about $3^{-3} = 1/27$ that a "B" value of $n^{1/6}$ will yield a factorisation. In practice, the elliptic curve method is faster than the Pollard "p" − 1 method once the factors are at all large; running the "p" − 1 method up to "B" = $2^{32}$ will find a quarter of all 64-bit factors and 1/27 of all 96-bit factors. Two-stage variant. A variant of the basic algorithm is sometimes used; instead of requiring that "p" − 1 has all its factors less than "B", we require it to have all but one of its factors less than some "B"1, and the remaining factor less than some "B"2 ≫ "B"1. After completing the first stage, which is the same as the basic algorithm, instead of computing a new $M' = \prod_{\text{primes }q \le B_2} q^{ \lfloor \log_q B_2 \rfloor } $ for "B"2 and checking gcd("a""M"′ − 1, "n"), we compute $Q = \prod_{\text{primes } q \in (B_1, B_2]} (H^q - 1)$ where "H" = "a""M" and check whether gcd("Q", "n") produces a nontrivial factor of "n". As before, exponentiations can be done modulo "n". Let {"q"1, "q"2, …} be successive prime numbers in the interval ("B"1, "B"2] and $d_n = q_n - q_{n-1}$ the differences between consecutive prime numbers. Since typically "B"1 > 2, the $d_n$ are even numbers. The distribution of prime numbers is such that the $d_n$ will all be relatively small. It is suggested that $d_n \le \ln^2 B_2$. Hence, the values of $H^2, H^4, H^6, \ldots \pmod n$ can be stored in a table, and each $H^{q_n}$ can be computed from $H^{q_{n-1}}\cdot H^{d_n}$, saving the need for exponentiations.
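The second stage can be sketched in the same style. The sketch below assumes H = a**M % n from a completed first stage, B1 > 2, and at least two primes in (B1, B2]; the helper name and the naive primality test are ours.

```python
# A sketch of the two-stage variant's second stage, under the assumptions
# stated in the lead-in (not a fixed API).
from math import gcd

def stage_two(H, n, B1, B2):
    primes = [p for p in range(B1 + 1, B2 + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    max_gap = max(q - p for p, q in zip(primes, primes[1:]))
    # gaps between odd primes are even, so a table of H^2, H^4, ... suffices
    gap_power = {d: pow(H, d, n) for d in range(2, max_gap + 1, 2)}
    Hq = pow(H, primes[0], n)
    Q = (Hq - 1) % n
    for p, q in zip(primes, primes[1:]):
        Hq = (Hq * gap_power[q - p]) % n   # H^q from H^p times H^(q - p)
        Q = (Q * (Hq - 1)) % n
    g = gcd(Q, n)
    return g if 1 < g < n else None
```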
In the mathematical field of graph theory, the Brinkmann graph is a 4-regular graph with 21 vertices and 42 edges discovered by Gunnar Brinkmann in 1992. It was first published by Brinkmann and Meringer in 1997. It has chromatic number 4, chromatic index 5, radius 3, diameter 3 and girth 5. It is also a 3-vertex-connected graph and a 3-edge-connected graph. It is the smallest 4-regular graph of girth 5 with chromatic number 4. It has book thickness 3 and queue number 2. By Brooks’ theorem, every "k"-regular graph (except for odd cycles and cliques) has chromatic number at most "k". It has also been known since 1959 that, for every "k" and "l", there exist "k"-chromatic graphs with girth "l". In connection with these two results and several examples including the Chvátal graph, Branko Grünbaum conjectured in 1970 that for every "k" and "l" there exist "k"-chromatic "k"-regular graphs with girth "l". The Chvátal graph solves the case "k" = "l" = 4 of this conjecture and the Brinkmann graph solves the case "k" = 4, "l" = 5. Grünbaum's conjecture was disproved for sufficiently large "k" by Johannsen, who showed that the chromatic number of a triangle-free graph is O(Δ/log Δ), where Δ is the maximum vertex degree and the O introduces big O notation. However, despite this disproof, it remains of interest to find examples, and only very few are known. The chromatic polynomial of the Brinkmann graph is $x^{21} - 42x^{20} + 861x^{19} - 11480x^{18} + 111881x^{17} - 848708x^{16} + 5207711x^{15} - 26500254x^{14} + 113675219x^{13} - 415278052x^{12} + 1299042255x^{11} - 3483798283x^{10} + 7987607279x^9 - 15547364853x^8 + 25384350310x^7 - 34133692383x^6 + 36783818141x^5 - 30480167403x^4 + 18168142566x^3 - 6896700738x^2 + 1242405972x$ (sequence in the OEIS). Algebraic properties. The Brinkmann graph is not a vertex-transitive graph and its full automorphism group is isomorphic to the dihedral group of order 14, the group of symmetries of a heptagon, including both rotations and reflections. The characteristic polynomial of the Brinkmann graph is $(x-4)(x-2)(x+2)(x^3-x^2-2x+1)^2(x^6+3x^5-8x^4-21x^3+27x^2+38x-41)^2$.
(Mathematical) decomposition into a product In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several "factors", usually smaller or simpler objects of the same kind. For example, 3 × 5 is an "integer factorization" of 15, and ("x" – 2)("x" + 2) is a "polynomial factorization" of "x"2 – 4. Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any $x$ can be trivially written as $(xy)\times(1/y)$ whenever $y$ is not zero. However, a meaningful factorization for a rational number or a rational function can be obtained by writing it in lowest terms and separately factoring its numerator and denominator. Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem of arithmetic, which asserts that every positive integer may be factored into a product of prime numbers, which cannot be further factored into integers greater than 1. Moreover, this factorization is unique up to the order of the factors. Although integer factorization is a sort of inverse to multiplication, it is much more difficult algorithmically, a fact which is exploited in the RSA cryptosystem to implement public-key cryptography. Polynomial factorization has also been studied for centuries. In elementary algebra, factoring a polynomial reduces the problem of finding its roots to finding the roots of the factors. Polynomials with coefficients in the integers or in a field possess the unique factorization property, a version of the fundamental theorem of arithmetic with prime numbers replaced by irreducible polynomials. In particular, a univariate polynomial with complex coefficients admits a unique (up to ordering) factorization into linear polynomials: this is a version of the fundamental theorem of algebra. In this case, the factorization can be done with root-finding algorithms. The case of polynomials with integer coefficients is fundamental for computer algebra. There are efficient computer algorithms for computing (complete) factorizations within the ring of polynomials with rational number coefficients (see factorization of polynomials). A commutative ring possessing the unique factorization property is called a unique factorization domain. There are number systems, such as certain rings of algebraic integers, which are not unique factorization domains. However, rings of algebraic integers satisfy the weaker property of Dedekind domains: ideals factor uniquely into prime ideals. "Factorization" may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations. For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix L with all diagonal entries equal to one, an upper triangular matrix U, and a permutation matrix P; this is a matrix formulation of Gaussian elimination. Integers. By the fundamental theorem of arithmetic, every integer greater than 1 has a unique (up to the order of the factors) factorization into prime numbers, which are those integers which cannot be further factorized into the product of integers greater than one. 
For computing the factorization of an integer n, one needs an algorithm for finding a divisor q of n or deciding that n is prime. When such a divisor is found, the repeated application of this algorithm to the factors q and "n" / "q" eventually gives the complete factorization of n. For finding a divisor q of n, if any, it suffices to test all values of q such that $1 < q$ and $q^2 \leq n$. In fact, if "r" is a divisor of n such that $r^2 > n$, then "q" = "n" / "r" is a divisor of n such that $q^2 \leq n$. If one tests the values of q in increasing order, the first divisor that is found is necessarily a prime number, and the "cofactor" "r" = "n" / "q" cannot have any divisor smaller than q. For getting the complete factorization, it thus suffices to continue the algorithm by searching for a divisor of r that is not smaller than q and not greater than $\sqrt r$. There is no need to test all values of q for applying the method. In principle, it suffices to test only prime divisors. This requires a table of prime numbers, which may be generated for example with the sieve of Eratosthenes. As the method of factorization does essentially the same work as the sieve of Eratosthenes, it is generally more efficient to test for a divisor only those numbers for which it is not immediately clear whether they are prime or not. Typically, one may proceed by testing 2, 3, 5, and the numbers > 5 whose last digit is 1, 3, 7, 9 and whose sum of digits is not a multiple of 3. This method works well for factoring small integers, but is inefficient for larger integers. For example, Pierre de Fermat was unable to discover that the 6th Fermat number $1 + 2^{2^5} = 1 + 2^{32} = 4\,294\,967\,297$ is not a prime number. In fact, applying the above method would require more than , for a number that has 10 decimal digits. There are more efficient factoring algorithms. However they remain relatively inefficient, as, with the present state of the art, one cannot factorize, even with the more powerful computers, a number of 500 decimal digits that is the product of two randomly chosen prime numbers. This ensures the security of the RSA cryptosystem, which is widely used for secure internet communication. Example. For factoring "n" = 1386 into primes: $1386 = 2 \cdot 3^2 \cdot 7 \cdot 11$.
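A direct Python rendering of this method might look as follows (a minimal sketch with our own function name; for simplicity every candidate divisor is tried, rather than only primes or the reduced candidate list described above).

```python
# Trial division as described above: the first divisor found is prime,
# and the search resumes from it on the cofactor.

def trial_division(n):
    factors = []
    q = 2
    while q * q <= n:
        if n % q == 0:
            factors.append(q)   # q is necessarily prime here
            n //= q             # continue with the cofactor, same q
        else:
            q += 1
    if n > 1:
        factors.append(n)       # the remaining cofactor is prime
    return factors

print(trial_division(1386))     # [2, 3, 3, 7, 11]
```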
Expressions. Manipulating expressions is the basis of algebra. Factorization is one of the most important methods for expression manipulation for several reasons. If one can put an equation in a factored form "E"⋅"F" = 0, then the problem of solving the equation splits into two independent (and generally easier) problems "E" = 0 and "F" = 0. When an expression can be factored, the factors are often much simpler, and may thus offer some insight into the problem. For example, $x^3-ax^2-bx^2-cx^2+ abx+acx+bcx-abc,$ having 16 multiplications, 4 subtractions and 3 additions, may be factored into the much simpler expression $(x-a)(x-b)(x-c),$ with only two multiplications and three subtractions. Moreover, the factored form immediately gives the roots "x" = "a", "b", "c" of the polynomial. On the other hand, factorization is not always possible, and when it is possible, the factors are not always simpler. For example, $x^{10}-1$ can be factored into the two factors $x-1$ and $x^{9}+x^{8}+\cdots+x^2+x+1$. Various methods have been developed for finding factorizations; some are described below. Solving algebraic equations may be viewed as a problem of polynomial factorization. In fact, the fundamental theorem of algebra can be stated as follows: every polynomial in x of degree "n" with complex coefficients may be factorized into "n" linear factors $x-a_i,$ for "i" = 1, ..., "n", where the "a""i"s are the roots of the polynomial. Even though the structure of the factorization is known in these cases, the "a""i"s generally cannot be computed in terms of radicals ("n"th roots), by the Abel–Ruffini theorem. In most cases, the best that can be done is computing approximate values of the roots with a root-finding algorithm. History of factorization of expressions. The systematic use of algebraic manipulations for simplifying expressions (more specifically, equations) may be dated to the 9th century, with al-Khwarizmi's book "The Compendious Book on Calculation by Completion and Balancing", which is titled with two such types of manipulation. However, even for solving quadratic equations, the factoring method was not used before Harriot's work published in 1631, ten years after his death. In his book "Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas", Harriot drew tables for addition, subtraction, multiplication and division of monomials, binomials, and trinomials. Then, in a second section, he set up the equation "aa" − "ba" + "ca" = + "bc", and showed that this matches the form of multiplication he had previously provided, giving the factorization ("a" − "b")("a" + "c"). General methods. The following methods apply to any expression that is a sum, or that may be transformed into a sum. Therefore, they are most often applied to polynomials, whose terms are monomials (products of variables and constants), though they may also be applied when the terms of the sum are not monomials. Common factor. It may occur that all terms of a sum are products and that some factors are common to all terms. In this case, the distributive law allows factoring out this common factor. If there are several such common factors, it is preferable to divide out the greatest such common factor. Also, if there are integer coefficients, one may factor out the greatest common divisor of these coefficients. For example, $6x^3y^2 + 8x^4y^3 - 10x^5y^3 = 2x^3y^2(3 + 4xy -5x^2y),$ since 2 is the greatest common divisor of 6, 8, and 10, and $x^3y^2$ divides all terms. Grouping. Grouping terms may allow using other methods for getting a factorization. For example, to factor $4x^2+20x+3xy+15y, $ one may remark that the first two terms have a common factor x, and the last two terms have the common factor y. Thus $4x^2+20x+3xy+15y = (4x^2+20x)+(3xy+15y) = 4x(x+5)+3y(x+5). $ Then a simple inspection shows the common factor "x" + 5, leading to the factorization $4x^2+20x+3xy+15y = (4x+3y)(x+5).$ In general, this works for sums of 4 terms that have been obtained as the product of two binomials. Although not frequently, this may also work for more complicated examples. Adding and subtracting terms. Sometimes, some term grouping reveals part of a recognizable pattern. It is then useful to add and subtract terms to complete the pattern. A typical use of this is the completing the square method for getting the quadratic formula. Another example is the factorization of $x^4 + 1.$ If one introduces the non-real square root of –1, commonly denoted i, then one has a difference of squares $x^4+1=(x^2+i)(x^2-i).$ However, one may also want a factorization with real number coefficients.
By adding and subtracting $2x^2,$ and grouping three terms together, one may recognize the square of a binomial: $x^4+1 = (x^4+2x^2+1)-2x^2 = (x^2+1)^2 - \left(x\sqrt2\right)^2 =\left(x^2+x\sqrt2+1\right)\left(x^2-x\sqrt2+1\right).$ Subtracting and adding $2x^2$ also yields the factorization: $x^4+1 = (x^4-2x^2+1)+2x^2 = (x^2-1)^2 + \left(x\sqrt2\right)^2 =\left(x^2+x\sqrt{-2}-1\right)\left(x^2-x\sqrt{-2}-1\right).$ These factorizations work not only over the complex numbers, but also over any field where either –1, 2 or –2 is a square. In a finite field, the product of two non-squares is a square; this implies that the polynomial $x^4 + 1,$ which is irreducible over the integers, is reducible modulo every prime number. For example, $x^4 + 1 \equiv (x+1)^4 \pmod 2;$ $x^4 + 1 \equiv (x^2+x-1)(x^2-x-1) \pmod 3,\qquad$since $1^2 \equiv -2 \pmod 3;$ $x^4 + 1 \equiv (x^2+2)(x^2-2) \pmod 5,\qquad$since $2^2 \equiv -1 \pmod 5;$ $x^4 + 1 \equiv (x^2+3x+1)(x^2-3x+1) \pmod 7,\qquad$since $3^2 \equiv 2 \pmod 7.$ Recognizable patterns. Many identities provide an equality between a sum and a product. The above methods may be used for letting the sum side of some identity appear in an expression, which may therefore be replaced by a product. Below are identities whose left-hand sides are commonly used as patterns (this means that the variables E and F that appear in these identities may represent any subexpression of the expression that has to be factorized). $ E^2 - F^2 = (E+F)(E-F)$ For example, $\begin{align} a^2 + &2ab + b^2 - x^2 +2xy - y^2 \\ &= (a^2 + 2ab + b^2) - (x^2 -2xy + y^2) \\ &= (a+b)^2 - (x -y)^2 \\ &= (a+b + x -y)(a+b -x + y). \end{align} $ $ E^3 + F^3 = (E + F)(E^2 - EF + F^2)$ $ E^3 - F^3 = (E - F)(E^2 + EF + F^2)$ $\begin{align} E^4 - F^4 &= (E^2 + F^2)(E^2 - F^2) \\ &= (E^2 + F^2)(E + F)(E - F) \end{align}$ In the following identities, the factors may often be further factorized. Difference, even exponent: $E^{2n}-F^{2n}= (E^n+F^n)(E^n-F^n)$ Difference, even or odd exponent: $ E^n - F^n = (E-F)(E^{n-1} + E^{n-2}F + E^{n-3}F^2 + \cdots + EF^{n-2} + F^{n-1} )$ This is an example showing that the factors may be much larger than the sum that is factorized. Sum, odd exponent: $ E^n + F^n = (E+F)(E^{n-1} - E^{n-2}F + E^{n-3}F^2 - \cdots - EF^{n-2} + F^{n-1} )$ (obtained by replacing F by –"F" in the preceding formula) Sum, even exponent: if the exponent is a power of two, then the expression cannot, in general, be factorized without introducing complex numbers (if E and F contain complex numbers, this may not be the case). If "n" has an odd divisor, that is, if "n" = "pq" with p odd, one may use the preceding formula (in "Sum, odd exponent") applied to $(E^q)^p+(F^q)^p.$ $\begin{align} &x^2 + y^2 + z^2 + 2(xy +yz+xz)= (x + y+ z)^2 \\ &x^3 + y^3 + z^3 - 3xyz = (x + y + z)(x^2 + y^2 + z^2 - xy - xz - yz)\\ &x^3 + y^3 + z^3 + 3x^2(y + z) +3y^2(x+z) + 3z^2(x+y) + 6xyz = (x + y+z)^3 \\ &x^4 + x^2y^2 + y^4 = (x^2 + xy+y^2)(x^2 - xy + y^2). \end{align}$ The binomial theorem supplies patterns that can easily be recognized from the integers that appear in them. In low degree: $ a^2 + 2ab + b^2 = (a + b)^2$ $ a^2 - 2ab + b^2 = (a - b)^2$ $ a^3 + 3a^2b + 3ab^2 + b^3 = (a+b)^3 $ $ a^3 - 3a^2b + 3ab^2 - b^3 = (a-b)^3 $ More generally, the coefficients of the expanded forms of $(a+b)^n$ and $(a-b)^n$ are the binomial coefficients, which appear in the "n"th row of Pascal's triangle.
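The modular factorizations of $x^4+1$ listed above can be reproduced mechanically; a short check with SymPy (assuming it is installed; factor with a modulus is the SymPy call used):

```python
# x^4 + 1 is reducible modulo every prime, as claimed above.
from sympy import symbols, factor

x = symbols('x')
for p in (2, 3, 5, 7, 11, 13):
    print(p, factor(x**4 + 1, modulus=p))
```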
Roots of unity. The nth roots of unity are the complex numbers each of which is a root of the polynomial $x^n-1.$ They are thus the numbers $e^{2ik\pi/n}=\cos \tfrac{2\pi k}n +i\sin \tfrac{2\pi k}n$ for $k=0, \ldots, n-1.$ It follows that for any two expressions E and F, one has: $E^n-F^n= (E-F)\prod_{k=1}^{n-1} \left(E-F e^{2ik\pi/n}\right)$ $E^{n}+F^{n}=\prod_{k=0}^{n-1} \left(E-F e^{(2k+1)i\pi/n}\right) \qquad \text{if } n \text{ is even}$ $E^{n}+F^{n}=(E+F)\prod_{k=1}^{n-1}\left(E+F e^{2ik\pi/n}\right) \qquad \text{if } n \text{ is odd}$ If E and F are real expressions, and one wants real factors, one has to replace every pair of complex conjugate factors by its product. As the complex conjugate of $e^{i\alpha}$ is $e^{-i\alpha},$ and $\left(a-be^{i\alpha}\right)\left(a-be^{-i\alpha}\right)= a^2-ab\left(e^{i\alpha}+e^{-i\alpha}\right)+b^2e^{i\alpha}e^{-i\alpha}= a^2-2ab\cos\,\alpha +b^2, $ one has the following real factorizations (one passes from one to the other by changing k into "n" – "k" or "n" + 1 – "k", and applying the usual trigonometric formulas): $\begin{align}E^{2n}-F^{2n}&= (E-F)(E+F)\prod_{k=1}^{n-1} \left(E^2-2EF \cos\,\tfrac{k\pi}n +F^2\right)\\ &=(E-F)(E+F)\prod_{k=1}^{n-1} \left(E^2+2EF \cos\,\tfrac{k\pi}n +F^2\right)\end{align}$ $ \begin{align}E^{2n} + F^{2n} &= \prod_{k=1}^n \left(E^2 + 2EF\cos\,\tfrac{(2k-1)\pi}{2n}+F^2\right)\\ &=\prod_{k=1}^n \left(E^2 - 2EF\cos\,\tfrac{(2k-1)\pi}{2n}+F^2\right) \end{align}$ The cosines that appear in these factorizations are algebraic numbers, and may be expressed in terms of radicals (this is possible because their Galois group is cyclic); however, these radical expressions are too complicated to be used, except for low values of n. For example, $ a^4 + b^4 = (a^2 - \sqrt 2 ab + b^2)(a^2 + \sqrt 2 ab + b^2).$ $ a^5 - b^5 = (a - b)\left(a^2 + \frac{1-\sqrt 5}2 ab + b^2\right)\left(a^2 +\frac{1+\sqrt 5}2 ab + b^2\right),$ $ a^5 + b^5 = (a + b)\left(a^2 - \frac{1-\sqrt 5}2 ab + b^2\right)\left(a^2 -\frac{1+\sqrt 5}2 ab + b^2\right),$ Often one wants a factorization with rational coefficients. Such a factorization involves cyclotomic polynomials. To express rational factorizations of sums and differences of powers, we need a notation for the homogenization of a polynomial: if $P(x)=a_0x^n+a_1x^{n-1} +\cdots +a_n,$ its "homogenization" is the bivariate polynomial $\overline P(x,y)=a_0x^n+a_1x^{n-1}y +\cdots +a_ny^n.$ Then, one has $E^n-F^n=\prod_{d\mid n}\overline Q_d(E,F),$ $E^n+F^n=\prod_{d\mid 2n,\,d\nmid n}\overline Q_d(E,F),$ where the products are taken over all divisors of n, or all divisors of 2"n" that do not divide n, and $Q_n(x)$ is the nth cyclotomic polynomial. For example, $a^6-b^6= \overline Q_1(a,b)\overline Q_2(a,b)\overline Q_3(a,b)\overline Q_6(a,b)=(a-b)(a+b)(a^2-ab+b^2)(a^2+ab+b^2),$ $a^6+b^6=\overline Q_4(a,b)\overline Q_{12}(a,b) = (a^2+b^2)(a^4-a^2b^2+b^4),$ since the divisors of 6 are 1, 2, 3, 6, and the divisors of 12 that do not divide 6 are 4 and 12.
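These last two factorizations can be checked with a computer algebra system; for instance, with SymPy (assuming it is installed), up to the ordering of the factors:

```python
# The rational factorizations of a^6 - b^6 and a^6 + b^6 shown above.
from sympy import symbols, factor

a, b = symbols('a b')
print(factor(a**6 - b**6))  # (a - b)(a + b)(a^2 - ab + b^2)(a^2 + ab + b^2)
print(factor(a**6 + b**6))  # (a^2 + b^2)(a^4 - a^2 b^2 + b^4)
```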
Polynomials. For polynomials, factorization is strongly related to the problem of solving algebraic equations. An algebraic equation has the form $P(x)\ \,\stackrel{\text{def}}{=}\ \,a_0x^n+a_1x^{n-1}+\cdots+a_n=0,$ where "P"("x") is a polynomial in x with $a_0\ne 0.$ A solution of this equation (also called a root of the polynomial) is a value r of x such that $P(r)=0.$ If $P(x)=Q(x)R(x)$ is a factorization of "P"("x") as a product of two polynomials, then the roots of "P"("x") are the union of the roots of "Q"("x") and the roots of "R"("x"). Thus solving "P"("x") = 0 is reduced to the simpler problems of solving "Q"("x") = 0 and "R"("x") = 0. Conversely, the factor theorem asserts that, if r is a root of "P"("x") = 0, then "P"("x") may be factored as $P(x)=(x-r)Q(x),$ where "Q"("x") is the quotient of the Euclidean division of "P"("x") by the linear (degree one) factor "x" – "r". If the coefficients of "P"("x") are real or complex numbers, the fundamental theorem of algebra asserts that "P"("x") has a real or complex root. Using the factor theorem recursively, it follows that $P(x)=a_0(x-r_1)\cdots (x-r_n),$ where $r_1, \ldots, r_n$ are the real or complex roots of P, with some of them possibly repeated. This complete factorization is unique up to the order of the factors. If the coefficients of "P"("x") are real, one generally wants a factorization where factors have real coefficients. In this case, the complete factorization may have some quadratic (degree two) factors. This factorization may easily be deduced from the above complete factorization. In fact, if "r" = "a" + "ib" is a non-real root of "P"("x"), then its complex conjugate "s" = "a" − "ib" is also a root of "P"("x"). So, the product $(x-r)(x-s) = x^2-(r+s)x+rs = x^2-2ax+a^2+b^2$ is a factor of "P"("x") with real coefficients. Repeating this for all non-real factors gives a factorization with linear or quadratic real factors. For computing these real or complex factorizations, one needs the roots of the polynomial, which in general cannot be computed exactly, only approximated using root-finding algorithms. In practice, most algebraic equations of interest have integer or rational coefficients, and one may want a factorization with factors of the same kind. The fundamental theorem of arithmetic may be generalized to this case, stating that polynomials with integer or rational coefficients have the unique factorization property. More precisely, every polynomial with rational coefficients may be factorized in a product $P(x)=q\,P_1(x)\cdots P_k(x),$ where q is a rational number and $P_1(x), \ldots, P_k(x)$ are non-constant polynomials with integer coefficients that are irreducible and primitive; this means that none of the $P_i(x)$ may be written as the product of two polynomials (with integer coefficients) that are neither 1 nor –1 (integers are considered as polynomials of degree zero). Moreover, this factorization is unique up to the order of the factors and the signs of the factors. There are efficient algorithms for computing this factorization, which are implemented in most computer algebra systems. See Factorization of polynomials. Unfortunately, these algorithms are too complicated to use for paper-and-pencil computations. Besides the heuristics above, only a few methods are suitable for hand computations, which generally work only for polynomials of low degree, with few nonzero coefficients. The main such methods are described in the next subsections. Primitive part and content factorization. Every polynomial with rational coefficients may be factorized, in a unique way, as the product of a rational number and a polynomial with integer coefficients which is primitive (that is, the greatest common divisor of the coefficients is 1) and has a positive leading coefficient (coefficient of the term of the highest degree).
For example: $-10x^2 + 5x + 5 = (-5)\cdot (2x^2 - x - 1)$ $\frac{1}{3}x^5 + \frac{7}{2} x^2 + 2x + 1 = \frac{1}{6} ( 2x^5 + 21x^2 + 12x + 6)$ In this factorization, the rational number is called the content, and the primitive polynomial is the primitive part. The computation of this factorization may be done as follows: firstly, reduce all coefficients to a common denominator, for getting the quotient by an integer q of a polynomial with integer coefficients. Then one divides out the greatest common divisor p of the coefficients of this polynomial for getting the primitive part, the content being $p/q.$ Finally, if needed, one changes the signs of p and all coefficients of the primitive part. This factorization may produce a result that is larger than the original polynomial (typically when there are many coprime denominators), but, even when this is the case, the primitive part is generally easier to manipulate for further factorization. Using the factor theorem. The factor theorem states that, if r is a root of a polynomial $P(x)=a_0x^n+a_1x^{n-1}+\cdots+a_{n-1}x+a_n,$ meaning "P"("r") = 0, then there is a factorization $P(x)=(x-r)Q(x),$ where $Q(x)=b_0x^{n-1}+\cdots+b_{n-2}x+b_{n-1},$ with $a_0=b_0$. Then polynomial long division or synthetic division give: $b_i=a_0r^i +\cdots+a_{i-1}r+a_i \ \text{ for }\ i = 1,\ldots,n{-}1.$ This may be useful when one knows or can guess a root of the polynomial. For example, for $P(x) = x^3 - 3x + 2,$ one may easily see that the sum of its coefficients is 0, so "r" = 1 is a root. As $b_1 = r + 0 = 1$, and $b_2 = r^2 +0r-3=-2,$ one has $x^3 - 3x + 2 = (x - 1)(x^2 + x - 2).$ Rational roots. For polynomials with rational number coefficients, one may search for roots which are rational numbers. Primitive part-content factorization (see above) reduces the problem of searching for rational roots to the case of polynomials with integer coefficients having no non-trivial common divisor. If $x=\tfrac pq$ is a rational root of such a polynomial $P(x)=a_0x^n+a_1x^{n-1}+\cdots+a_{n-1}x+a_n,$ the factor theorem shows that one has a factorization $P(x)=(qx-p)Q(x),$ where both factors have integer coefficients (the fact that Q has integer coefficients results from the above formula for the quotient of "P"("x") by $x-p/q$). Comparing the coefficients of degree n and the constant coefficients in the above equality shows that, if $\tfrac pq$ is a rational root in reduced form, then q is a divisor of $a_0,$ and p is a divisor of $a_n.$ Therefore, there is a finite number of possibilities for p and q, which can be systematically examined. For example, if the polynomial $P(x)=2x^3 - 7x^2 + 10x - 6$ has a rational root $\tfrac pq$ with "q" > 0, then p must divide 6; that is $p\in\{\pm 1,\pm 2,\pm3, \pm 6\}, $ and q must divide 2, that is $q\in\{1, 2\}. $ Moreover, if "x" < 0, all terms of the polynomial are negative, and, therefore, a root cannot be negative. That is, one must have $\tfrac pq \in \{1, 2, 3, 6, \tfrac 12, \tfrac 32\}.$ A direct computation shows that only $\tfrac 32$ is a root, so there can be no other rational root. Applying the factor theorem leads finally to the factorization $2x^3 - 7x^2 + 10x - 6 = (2x -3)(x^2 -2x + 2).$
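This finite search is easy to automate. A minimal Python sketch (the function name is our own; it assumes integer coefficients given in decreasing degree and a nonzero constant term):

```python
# Rational-root search by the divisor criterion described above.
from fractions import Fraction

def rational_roots(coeffs):
    """Rational roots of a_0*x^n + ... + a_n (integer coeffs, descending)."""
    a0, an = coeffs[0], coeffs[-1]          # assumes an != 0
    def divisors(m):
        m = abs(m)
        return [d for d in range(1, m + 1) if m % d == 0]
    n = len(coeffs) - 1
    roots = set()
    for p in divisors(an):                  # p divides the constant term
        for q in divisors(a0):              # q divides the leading term
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if sum(c * cand ** (n - i) for i, c in enumerate(coeffs)) == 0:
                    roots.add(cand)
    return sorted(roots)

print(rational_roots([2, -7, 10, -6]))      # [Fraction(3, 2)]
```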
Quadratic ac method. The above method may be adapted for quadratic polynomials, leading to the "ac method" of factorization. Consider the quadratic polynomial $P(x)=ax^2 + bx + c$ with integer coefficients. If it has a rational root, its denominator must divide "a" evenly and it may be written as a possibly reducible fraction $r_1 = \tfrac ra.$ By Vieta's formulas, the other root $r_2$ is $r_2 = -\frac ba - r_1 = -\frac ba-\frac ra =-\frac{b+r}a = \frac sa,$ with $s=-(b+r).$ Thus the second root is also rational, and Vieta's second formula $r_1 r_2=\frac ca$ gives $\frac sa\frac ra =\frac ca,$ that is $rs=ac\quad \text{and}\quad r+s=-b.$ Checking all pairs of integers whose product is "ac" gives the rational roots, if any. In summary, if $ax^2 +bx+c$ has rational roots there are integers r and s such that $rs=ac$ and $r+s=-b$ (a finite number of cases to test), and the roots are $\tfrac ra$ and $\tfrac sa.$ In other words, one has the factorization $a(ax^2+bx+c) = (ax-r)(ax-s).$ For example, let us consider the quadratic polynomial $6x^2 + 13x + 6.$ Inspection of the factors of "ac" = 36 leads to 4 + 9 = 13 = "b", giving the two roots $r_1 = -\frac 46 =-\frac 23 \quad \text{and} \quad r_2 = -\frac96 = -\frac 32,$ and the factorization $ 6x^2 + 13x + 6 = 6(x+\tfrac 23)(x+\tfrac 32)= (3x+2)(2x+3). $ Using formulas for polynomial roots. Any univariate quadratic polynomial $ax^2+bx+c$ can be factored using the quadratic formula: $ ax^2 + bx + c = a(x - \alpha)(x - \beta) = a\left(x - \frac{-b + \sqrt{b^2-4ac}}{2a}\right) \left(x - \frac{-b - \sqrt{b^2-4ac}}{2a}\right), $ where $\alpha$ and $\beta$ are the two roots of the polynomial. If "a, b, c" are all real, the factors are real if and only if the discriminant $b^2-4ac$ is non-negative. Otherwise, the quadratic polynomial cannot be factorized into non-constant real factors. The quadratic formula is valid when the coefficients belong to any field of characteristic different from two, and, in particular, for coefficients in a finite field with an odd number of elements. There are also formulas for roots of cubic and quartic polynomials, which are, in general, too complicated for practical use. The Abel–Ruffini theorem shows that there are no general root formulas in terms of radicals for polynomials of degree five or higher. Using relations between roots. It may occur that one knows some relationship between the roots of a polynomial and its coefficients. Using this knowledge may help factoring the polynomial and finding its roots. Galois theory is based on a systematic study of the relations between roots and coefficients, which include Vieta's formulas. Here, we consider the simpler case where two roots $x_1$ and $x_2$ of a polynomial $P(x)$ satisfy the relation $x_2=Q(x_1),$ where Q is a polynomial. This implies that $x_1$ is a common root of $P(Q(x))$ and $P(x).$ It is therefore a root of the greatest common divisor of these two polynomials. It follows that this greatest common divisor is a non-constant factor of $P(x).$ The Euclidean algorithm for polynomials allows computing this greatest common divisor. For example, if one knows or guesses that $P(x)=x^3 -5x^2 -16x +80$ has two roots that sum to zero, one may apply the Euclidean algorithm to $P(x)$ and $P(-x).$ The first division step consists of adding $P(x)$ to $P(-x),$ giving the remainder $-10(x^2-16).$ Then, dividing $P(x)$ by $x^2-16$ gives zero as a new remainder, and "x" – 5 as a quotient, leading to the complete factorization $x^3 - 5x^2 - 16x + 80 = (x -5)(x-4)(x+4).$
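The same computation can be delegated to a computer algebra system; with SymPy (assuming it is installed; gcd and factor are the SymPy functions used):

```python
# The worked example above: two roots summing to zero are found via
# gcd(P(x), P(-x)).
from sympy import symbols, gcd, factor

x = symbols('x')
P = x**3 - 5*x**2 - 16*x + 80
print(gcd(P, P.subs(x, -x)))   # x**2 - 16
print(factor(P))               # (x - 5)*(x - 4)*(x + 4)
```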
Unique factorization domains. The integers and the polynomials over a field share the property of unique factorization, that is, every nonzero element may be factored into a product of an invertible element (a unit, ±1 in the case of integers) and a product of irreducible elements (prime numbers, in the case of integers), and this factorization is unique up to rearranging the factors and shifting units among the factors. Integral domains which share this property are called unique factorization domains (UFDs). Greatest common divisors exist in UFDs; conversely, every integral domain in which greatest common divisors exist and which satisfies the ascending chain condition on principal ideals is a UFD. Every principal ideal domain is a UFD. A Euclidean domain is an integral domain on which is defined a Euclidean division similar to that of integers. Every Euclidean domain is a principal ideal domain, and thus a UFD. In a Euclidean domain, Euclidean division allows defining a Euclidean algorithm for computing greatest common divisors. However, this does not imply the existence of a factorization algorithm. There is an explicit example of a field F such that there cannot exist any factorization algorithm in the Euclidean domain "F"["x"] of the univariate polynomials over F. Ideals. In algebraic number theory, the study of Diophantine equations led mathematicians, during the 19th century, to introduce generalizations of the integers called algebraic integers. The first rings of algebraic integers that were considered were the Gaussian integers and the Eisenstein integers, which share with the usual integers the property of being principal ideal domains, and thus have the unique factorization property. Unfortunately, it soon appeared that most rings of algebraic integers are not principal and do not have unique factorization. The simplest example is $\mathbb Z[\sqrt{-5}],$ in which $9=3\cdot 3 = (2+\sqrt{-5})(2-\sqrt{-5}),$ and all these factors are irreducible. This lack of unique factorization is a major difficulty for solving Diophantine equations. For example, many wrong proofs of Fermat's Last Theorem (probably including Fermat's "truly marvelous proof of this, which this margin is too narrow to contain") were based on the implicit supposition of unique factorization. This difficulty was resolved by Dedekind, who proved that the rings of algebraic integers have unique factorization of ideals: in these rings, every ideal is a product of prime ideals, and this factorization is unique up to the order of the factors. The integral domains that have this unique factorization property are now called Dedekind domains. They have many nice properties that make them fundamental in algebraic number theory. Matrices. Matrix rings are non-commutative and have no unique factorization: there are, in general, many ways of writing a matrix as a product of matrices. Thus, the factorization problem consists of finding factors of specified types. For example, the LU decomposition gives a matrix as the product of a lower triangular matrix and an upper triangular matrix. As this is not always possible, one generally considers the "LUP decomposition" having a permutation matrix as its third factor. See Matrix decomposition for the most common types of matrix factorizations. A logical matrix represents a binary relation, and matrix multiplication corresponds to composition of relations. Decomposition of a relation through factorization serves to profile the nature of the relation, such as a difunctional relation.
38979
abstract_algebra
Wreath product of a cyclic group and a symmetric group In mathematics, the generalized symmetric group is the wreath product $S(m,n) := Z_m \wr S_n$ of the cyclic group of order "m" and the symmetric group of degree "n". Representation theory. There is a natural representation of elements of $S(m,n)$ as generalized permutation matrices, where the nonzero entries are "m"-th roots of unity: $Z_m \cong \mu_m.$ The representation theory has been studied since ; see references in . As with the symmetric group, the representations can be constructed in terms of Specht modules; see . Homology. The first group homology group (concretely, the abelianization) is $Z_m \times Z_2$ (for "m" odd this is isomorphic to $Z_{2m}$): the $Z_m$ factors (which are all conjugate, and hence must have the same image under any map to an abelian group, since conjugation is trivial in an abelian group) can be mapped to $Z_m$ (concretely, by taking the product of all the $Z_m$ values), while the sign map on the symmetric group yields the $Z_2.$ These are independent and generate the group, hence give the abelianization. The second homology group (in classical terms, the Schur multiplier) is given by : $H_2(S(2k+1,n)) = \begin{cases} 1 & n < 4\\ \mathbf{Z}/2 & n \geq 4.\end{cases}$ $H_2(S(2k+2,n)) = \begin{cases} 1 & n = 0, 1\\ \mathbf{Z}/2 & n = 2\\ (\mathbf{Z}/2)^2 & n = 3\\ (\mathbf{Z}/2)^3 & n \geq 4. \end{cases}$ Note that it depends on "n" and the parity of "m": $H_2(S(2k+1,n)) \approx H_2(S(1,n))$ and $H_2(S(2k+2,n)) \approx H_2(S(2,n)),$ which are the Schur multipliers of the symmetric group and the signed symmetric group.
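To make the matrix representation concrete, the sketch below (an illustration, not a library API; encoding a group element as a permutation plus a list of exponents is one convenient choice) builds the generalized permutation matrix of an element of $S(m,n)$ with numpy:

import numpy as np

def gen_perm_matrix(sigma, exps, m):
    """Matrix of an element of S(m, n), encoded as a permutation sigma of
    {0, ..., n-1} together with integer exponents exps; column j carries the
    weight omega**exps[j] in row sigma[j], where omega = exp(2*pi*i/m)."""
    n = len(sigma)
    omega = np.exp(2j * np.pi / m)
    M = np.zeros((n, n), dtype=complex)
    for j in range(n):
        M[sigma[j], j] = omega ** exps[j]
    return M

# An element of S(4, 3) = Z_4 wr S_3: the 3-cycle (0 1 2) with weights i^0, i^1, i^3
M = gen_perm_matrix([1, 2, 0], [0, 1, 3], 4)

Multiplying two such matrices again yields a matrix of the same shape, matching the wreath-product group law.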
2877132
abstract_algebra
Algorithm for generating numbers coprime with the first few primes Wheel factorization is a method for generating a sequence of natural numbers by repeated additions, as determined by a number of the first few primes, so that the generated numbers are coprime with these primes, by construction. Description. For a chosen number "n" (usually no larger than 4 or 5), the first "n" primes determine the specific way to generate a sequence of natural numbers which are all known in advance to be coprime with these primes, i.e. are all known not to be multiples of any of these primes. This method can thus be used to improve the trial division method for integer factorization, as none of the generated numbers need be tested in trial divisions by those small primes. The trial division method consists of dividing the number to be factorized by the integers in increasing order (2, 3, 4, 5, ...) successively. A common improvement consists of testing only by primes, i.e. by 2, 3, 5, 7, 11, ... . With wheel factorization, one starts from a small list of numbers, called the "basis" — generally the first few prime numbers; then one generates the list, called the "wheel", of the integers that are coprime with all the numbers in the basis. Then, for the numbers generated by "rolling the wheel", one needs to consider only the primes "not" in the basis as their possible factors. It is as if these generated numbers had already been tested and found not to be divisible by any of the primes in the basis. This is an optimization: all those divisions become redundant and are spared from being performed at all. When used in finding primes, or sieving in general, this method reduces the number of candidates to be considered as possible primes. With the basis {2, 3}, the reduction is to 1/3 < 34% of all the numbers, meaning that fully 2/3 of all the candidate numbers are skipped automatically. Larger bases reduce this proportion even further: for example, with basis {2, 3, 5} to 8/30 < 27%, and with basis {2, 3, 5, 7} to 48/210 < 23%. The bigger the wheel, though, the larger the computational resources involved and the smaller the additional improvement, so this is a case of quickly diminishing returns. Introduction. Natural numbers from 1 and up are enumerated by repeated addition of 1: 1, 2, 3, 4, 5, ... Considered by spans of two numbers each, they are enumerated by repeated additions of 2: 1, 2  ;  3, 4  ;  5, 6, ... Every second number thus generated is even. Thus the odd numbers are generated by repeated additions of 2: 1  ;  3  ;  5  ;  7 ... Considered by spans of three numbers each, they are enumerated by repeated additions of 2 * 3 = 6: 1, 3, 5  ;  7, 9, 11  ;  ... The middle number of each such triplet is a multiple of 3, because the numbers of the form 3 + 6k are exactly the odd multiples of 3. Thus all the numbers coprime with the first two primes 2 and 3, i.e. the 2 * 3 = 6–coprime numbers, are generated by repeated additions of 6, starting from {1, 5}: 1, 5  ;  7, 11  ;  13, 17  ;  ... The "same" sequence can be generated by repeated additions of 2 * 3 * 5 = 30, turning each "five" consecutive spans, of "two" numbers each, into one joined span of "ten" numbers: 1, 5, 7, 11, 13, 17, 19, 23, 25, 29  ;  31, 35, 37, ... Out of each ten of these 6–coprime numbers, two are multiples of 5, so the remaining eight are 30–coprime: 1, 7, 11, 13, 17, 19, 23, 29  ;  31, 37, 41, 43, 47, 49, ... This is naturally generalized.
The above showcases the first three wheels: Another representation of these wheels is obtained by turning a wheel's numbers, as seen above, into a "circular list" of the "differences" between consecutive numbers, and then generating the sequence starting from 1 by repeatedly adding these increments, one after another, to the last generated number, indefinitely. This is the closest it comes to the "rolling the wheel" metaphor. For instance, this turns {1, 7, 11, 13, 17, 19, 23, 29, 31} into {6, 4, 2, 4, 2, 4, 6, 2}, and then the sequence is generated as 1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59, 61, 67, ... A typical example. With a given basis of the first few prime numbers {2, 3, 5}, the "first turn" of the wheel consists of: 7, 11, 13, 17, 19, 23, 29, 31. The second turn is obtained by adding 30, the product of the basis, to the numbers in the first turn. The third turn is obtained by adding 30 to the second turn, and so on. For implementing the method, one may remark that the increments between two consecutive elements of the wheel, that is inc = [4, 2, 4, 2, 4, 6, 2, 6], remain the same after each turn. The suggested implementation that follows uses an auxiliary function div("n", "k"), which tests whether "n" is evenly divisible by "k", and returns "true" in this case, "false" otherwise. In this implementation, the number to be factorized is "n", and the program returns the smallest divisor of "n" – returning "n" itself if it is prime (a Python transcription is given at the end of this section):

if div("n", 2) = true then return 2
if div("n", 3) = true then return 3
if div("n", 5) = true then return 5
"k" := 7; "i" := 0
while "k" * "k" ≤ "n" do
    if div("n", "k") = true then return "k"
    "k" := "k" + inc["i"]
    if "i" < 7 then "i" := "i" + 1 else "i" := 0
return "n"

For getting the complete factorization of an integer, the computation may be continued without restarting the wheel at the beginning. This leads to the following program for a complete factorization, where the function "add" appends its first argument to the end of its second argument, which must be a list.

factors := [ ]
while div("n", 2) = true do
    factors := add(2, factors)
    "n" := "n" / 2
while div("n", 3) = true do
    factors := add(3, factors)
    "n" := "n" / 3
while div("n", 5) = true do
    factors := add(5, factors)
    "n" := "n" / 5
"k" := 7; "i" := 0
while "k" * "k" ≤ "n" do
    if div("n", "k") = true then
        factors := add("k", factors)
        "n" := "n" / "k"
    else
        "k" := "k" + inc["i"]
        if "i" < 7 then "i" := "i" + 1 else "i" := 0
if "n" > 1 then factors := add("n", factors)
return factors

Another presentation. Wheel factorization is used for generating lists of mostly prime numbers from a simple mathematical formula and a much smaller list of the first prime numbers. These lists may then be used in trial division or sieves. Because not all the numbers in these lists are prime, doing so introduces inefficient redundant operations. However, the generators themselves require very little memory compared to keeping a pure list of prime numbers. The small list of initial prime numbers constitutes the complete parameters for the algorithm to generate the remainder of the list. These generators are referred to as wheels. While each wheel may generate an infinite list of numbers, past a certain point the numbers cease to be mostly prime. The method may further be applied recursively as a prime number wheel sieve to generate more accurate wheels. Much definitive work on wheel factorization, sieves using wheel factorization, and the wheel sieve was done by Paul Pritchard in formulating a series of different algorithms.
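The smallest-divisor pseudocode above translates directly into Python; a minimal sketch (the function name is arbitrary):

def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n > 1, or n itself if n is prime,
    using the {2, 3, 5} wheel with the increment list inc."""
    for p in (2, 3, 5):
        if n % p == 0:
            return p
    inc = [4, 2, 4, 2, 4, 6, 2, 6]
    k, i = 7, 0
    while k * k <= n:
        if n % k == 0:
            return k
        k += inc[i]
        i = i + 1 if i < 7 else 0
    return n

For example, smallest_factor(91) returns 7 after trial divisions by only 2, 3, 5, and 7.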
To visualize the use of a factorization wheel, one may start by writing the natural numbers around circles, one "turn" per circle, with a fixed number of spokes. The number of spokes is chosen such that prime numbers tend to accumulate in a minority of the spokes. Example. 1. Find the first 2 prime numbers: 2 and 3. 2. "n" = 2 × 3 = 6. 3. Write the first turn: 1 2 3 4 5 6. 4. Strike off the multiples of 2 and 3 beyond the base primes themselves: 4 and 6 are struck as multiples of 2; 6, the only relevant multiple of 3, is already struck. This leaves 1 2 3 5. 5. "x" = 1; "xn" + 1 = 1 · 6 + 1 = 7; ("x" + 1)"n" = (1 + 1) · 6 = 12. Write 7 to 12 as a second turn, with 7 aligned under 1. 6. "x" = 2; "xn" + 1 = 2 · 6 + 1 = 13; ("x" + 1)"n" = (2 + 1) · 6 = 18. Write 13 to 18, and repeat for the next few turns, giving the rows 1–6, 7–12, 13–18, 19–24, 25–30. 7 and 8. Sieving: strike off every number lying in the same spokes (columns) as the struck numbers 4 and 6, i.e. 10, 16, 22, 28 and 12, 18, 24, 30. 9. Sieving: likewise strike off the spokes below the base primes 2 and 3, i.e. 8, 14, 20, 26 and 9, 15, 21, 27. 10. The resulting list 2 3 5 7 11 13 17 19 23 25 29 contains the non-prime 25 = 52. Use other methods, such as a sieve, to eliminate it to arrive at 2 3 5 7 11 13 17 19 23 29. Note that by using exactly 5 wheel cycles (5 being the next prime) and eliminating the multiples of that prime (and only that prime) from the resulting list, we have obtained the base wheel, as per step 4, for a factorization wheel with base primes 2, 3, and 5; this is one wheel in advance of the previous 2/3 factorization wheel. One could then follow the steps up to step 10 using 7 cycles (7 being the next succeeding prime) and eliminating only the multiples of 7 from the resulting list in step 10 (which, in this and all successive cases, leaves some numbers that are merely coprime to the basis rather than true primes), to get the next, further advanced wheel, recursively repeating the steps as necessary to get successively larger wheels. Analysis and computer implementation. Formally, the method makes use of the following insights: First, the set of base primes, unioned with the (infinite) set of the numbers coprime to them, is a superset of the primes. Second, the infinite set of coprimes can be enumerated easily from the coprimes to the base set lying between 2 and the product of the base set. (Note that 1 requires special handling.) As seen in the example above, repeated application of the above recursive procedure from steps 4 to 10 yields a wheel list spanning any desired sieving range (to which it can be truncated); the resulting list then contains, besides the base primes, only numbers whose prime factors all exceed the base primes. Note that once a wheel spans the desired upper limit of the sieving range, one can stop generating further wheels and use the information in that wheel to cull the remaining composite numbers from that last wheel list using a Sieve of Eratosthenes–type technique, but one exploiting the gap pattern inherent to the wheel to avoid redundant culls; some optimizations can be based on the fact (stated in the next section) that no composite number is culled more than once: each remaining composite is culled exactly once.
Alternatively, one can continue to generate truncated wheel lists using primes up to the square root of the desired sieve range, in which case all remaining number representations in the wheel will be prime; however, although this method never culls composite numbers more than once, it spends so much time, outside the culling operations proper, on processing the successive wheel sweeps that it takes much longer overall. The elimination of composite numbers by a factorization wheel is based on the following: given a number k > n, we know that k is not prime if k mod n and n are not relatively prime. From that, the fraction of numbers that the wheel sieve eliminates can be determined (although not all need be physically struck off; many can be culled automatically in the operations of copying lesser wheels to greater wheels) as $1 - \varphi(n)/n$, which is also the efficiency of the sieve. It is known that $ \liminf_{n\to\infty}\frac{\varphi(n)}{n}\log\log n = e^{-\gamma}\approx 0.56145948, $ where "γ" is Euler's constant. Thus $\varphi(n)/n$ goes to zero slowly as "n" increases to infinity along the products of the first few primes, and it can be seen that this efficiency rises very slowly to 100% for infinitely large "n". From the properties of φ, it can easily be seen that the most efficient sieve smaller than "x" is the one where $ n = p_1 p_2 \cdots p_i < x $ and $ n p_{i+1} \geq x $ (i.e. wheel generation can stop when the last wheel passes, or has a sufficient circumference to include, the highest number in the sieving range). To be of maximum use on a computer, we want the numbers that are smaller than "n" and relatively prime to it as a set. Using a few observations, the set can easily be generated:
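For instance, a minimal sketch (a plain gcd filter; Pritchard's wheel sieves construct the same set incrementally instead):

from math import gcd, prod

def wheel(basis):
    """Return (n, spokes): the product n of the basis primes together with
    the numbers below n that are relatively prime to n."""
    n = prod(basis)
    return n, [k for k in range(1, n) if gcd(k, n) == 1]

print(wheel([2, 3]))     # (6, [1, 5])
print(wheel([2, 3, 5]))  # (30, [1, 7, 11, 13, 17, 19, 23, 29])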
934000
abstract_algebra
Trial division is the most laborious but easiest to understand of the integer factorization algorithms. The essential idea behind trial division tests to see if an integer "n", the integer to be factored, can be divided by each number in turn that is less than "n". For example, for the integer "n" = 12, the only numbers that divide it are 1, 2, 3, 4, 6, 12. Selecting only the largest powers of primes in this list gives that $12 = 3 \times 4 = 3 \times 2^2$. Trial division was first described by Fibonacci in his book "Liber Abaci" (1202). Method. Given an integer "n" ("n" refers to "the integer to be factored"), trial division consists of systematically testing whether "n" is divisible by any smaller number. Clearly, it is only worthwhile to test candidate factors less than "n", and in order from two upwards because an arbitrary "n" is more likely to be divisible by two than by three, and so on. With this ordering, there is no point in testing for divisibility by four if the number has already been determined not divisible by two, and so on for three and any multiple of three, etc. Therefore, the effort can be reduced by selecting only prime numbers as candidate factors. Furthermore, the trial factors need go no further than $\scriptstyle\sqrt{n}$ because, if "n" is divisible by some number "p", then "n" = "p" × "q" and if "q" were smaller than "p", "n" would have been detected earlier as being divisible by "q" or by a prime factor of "q". A definite bound on the prime factors is possible. Suppose $P_i$ is the "i"th prime, so that $P_1 = 2$, $P_2 = 3$, $P_3 = 5$, etc. Then the last prime number worth testing as a possible factor of "n" is $P_i$ where $P_{i+1}^2 > n$; equality here would mean that $P_{i+1}$ is a factor. Thus, testing with 2, 3, and 5 suffices up to "n" = 48, not just 25, because the square of the next prime is 49, and below "n" = 25 just 2 and 3 are sufficient. Should the square root of "n" be an integer, then it is a factor and "n" is a perfect square. An example of the trial division algorithm, using successive integers as trial factors, is as follows (in Python):

def trial_division(n: int) -> list[int]:
    """Return a list of the prime factors for a natural number."""
    a = []               # Prepare an empty list.
    f = 2                # The first possible factor.
    while n > 1:         # While n still has remaining factors...
        if n % f == 0:   # The remainder of n divided by f might be zero.
            a.append(f)  # If so, it divides n. Add f to the list.
            n //= f      # Divide that factor out of n.
        else:            # But if f is not a factor of n,
            f += 1       # add one to f and try again.
    return a             # Prime factors may be repeated: 12 factors to 2, 2, 3.

Or, about twice as efficient:

def trial_division(n: int) -> list[int]:
    a = []
    while n % 2 == 0:    # Divide out all factors of 2 first,
        a.append(2)
        n //= 2
    f = 3                # so that only odd trial factors remain.
    while f * f <= n:
        if n % f == 0:
            a.append(f)
            n //= f
        else:
            f += 2
    if n != 1:
        a.append(n)      # Whatever remains is itself a (necessarily odd) prime.
    return a

These versions of trial division are guaranteed to find a factor of "n" if there is one, since they check "all possible factors" of "n" — and if "n" is a prime number, this means trial factors all the way up to "n". Thus, if the algorithm finds only the one factor "n", it is proof that "n" is a prime. If more than one factor is found, then "n" is a composite integer. A more computationally advantageous way of saying this is: if any prime whose square does not exceed "n" divides it without a remainder, then "n" is not prime.
Below is a version in C++ (without squaring "f"):

template <class T, class U>
std::vector<T> TrialDivision(U n)
{
    std::vector<T> v;
    while (n % 2 == 0) { v.push_back(2); n /= 2; }
    while (n % 3 == 0) { v.push_back(3); n /= 3; }
    T f = 5;
    T ac = 9, temp = 16;
    do {
        ac += temp; // Assume addition does not cause overflow with U type;
                    // ac is kept equal to f * f without ever squaring f.
        if (ac > n)
            break;
        if (n % f == 0) {
            v.push_back(f);
            n /= f;
            ac -= temp; // Undone by the next ac += temp, so ac stays f * f.
        } else {
            f += 2;
            temp += 8;  // (f + 2)^2 - f^2 = 4f + 4 grows by 8 at each step.
        }
    } while (1);
    if (n != 1)
        v.push_back(n);
    return v;
}

Speed. In the worst case, trial division is a laborious algorithm. For a base-2 "n"-digit number "a", if it starts from two and works up only to the square root of "a", the algorithm requires $\pi(2^{n/2}) \approx {2^{n/2} \over \left(\frac{n}{2}\right) \ln 2} $ trial divisions, where $\scriptstyle \pi(x)$ denotes the prime-counting function, the number of primes less than "x". This does not take into account the overhead of primality testing to obtain the prime numbers as candidate factors. A useful table need not be large: "P"(3512) = 32749, the last prime that fits into a sixteen-bit signed integer, and "P"(6542) = 65521 for unsigned sixteen-bit integers. That would suffice to test primality for numbers up to $65537^2 = 4{,}295{,}098{,}369$. Preparing such a table (usually via the Sieve of Eratosthenes) would only be worthwhile if many numbers were to be tested. If instead a variant is used without primality testing, but simply dividing by every odd number less than the square root of the base-2 "n"-digit number "a", prime or not, it can take up to about $2^{n/2}$ trial divisions. In both cases, the required time grows exponentially with the digits of the number. Even so, this is a quite satisfactory method, considering that even the best-known algorithms have superpolynomial time growth. For "a" chosen uniformly at random from integers of a given length, there is a 50% chance that 2 is a factor of "a" and a 33% chance that 3 is a factor of "a", and so on. It can be shown that 88% of all positive integers have a factor under 100 and that 92% have a factor under 1000. Thus, when confronted by an arbitrary large "a", it is worthwhile to check for divisibility by the small primes (for example, for "a" near 1000, the bit length is only about "n" = 10). However, many-digit numbers that have no factors among the small primes can require days or months to factor with trial division. In such cases other methods are used, such as the quadratic sieve and the general number field sieve (GNFS). Because these methods also have superpolynomial time growth, a practical limit of "n" digits is reached very quickly. For this reason, in public key cryptography, values for "a" are chosen to have large prime factors of similar size so that they cannot be factored by any publicly known method in a useful time period on any available computer system or computer cluster such as supercomputers and computer grids. The largest cryptography-grade number that has been factored is RSA-250, a 250-digit number, using the GNFS and the resources of several supercomputers. The running time was 2700 core-years.
217297
abstract_algebra
In the mathematical field of graph theory, the Tutte 12-cage or Benson graph is a 3-regular graph with 126 vertices and 189 edges, named after W. T. Tutte. The Tutte 12-cage is the unique (3,12)-cage (sequence in the OEIS). It was discovered by C. T. Benson in 1966. It has chromatic number 2 (bipartite), chromatic index 3, girth 12 (as a 12-cage) and diameter 6. Its crossing number is known to be less than 165; see Wolfram MathWorld. Construction. The Tutte 12-cage is a cubic Hamiltonian graph and can be defined by the LCF notation [17, 27, –13, –59, –35, 35, –11, 13, –53, 53, –27, 21, 57, 11, –21, –57, 59, –17]7. There are, up to isomorphism, precisely two generalized hexagons of order (2,2), as proved by Cohen and Tits. They are the split Cayley hexagon "H"(2) and its point-line dual. Clearly both of them have the same incidence graph, which is in fact isomorphic to the Tutte 12-cage. The Balaban 11-cage can be constructed by excision from the Tutte 12-cage by removing a small subtree and suppressing the resulting vertices of degree two. Algebraic properties. The automorphism group of the Tutte 12-cage is of order 12,096 and is a semidirect product of the projective special unitary group PSU(3,3) with the cyclic group Z/2Z. It acts transitively on its edges but not on its vertices, making it a semi-symmetric graph, a regular graph that is edge-transitive but not vertex-transitive. In fact, the automorphism group of the Tutte 12-cage preserves the bipartite parts and acts primitively on each part. Such graphs are called bi-primitive graphs and only five cubic bi-primitive graphs exist; they are named the Iofinova–Ivanov graphs and are of order 110, 126, 182, 506 and 990. All the cubic semi-symmetric graphs on up to 768 vertices are known. According to Conder, Malnič, Marušič and Potočnik, the Tutte 12-cage is the unique cubic semi-symmetric graph on 126 vertices and is the fifth smallest possible cubic semi-symmetric graph after the Gray graph, the Iofinova–Ivanov graph on 110 vertices, the Ljubljana graph and a graph on 120 vertices with girth 8. The characteristic polynomial of the Tutte 12-cage is $(x-3)x^{28}(x+3)(x^2-6)^{21}(x^2-2)^{27}.\ $ It is the only graph with this characteristic polynomial; therefore, the 12-cage is determined by its spectrum.
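The LCF description makes the graph easy to build with standard tools; for example, with the networkx library (the printed checks restate properties from the text):

import networkx as nx

# One period of the LCF notation given above; it is applied 7 times.
shifts = [17, 27, -13, -59, -35, 35, -11, 13, -53, 53, -27, 21,
          57, 11, -21, -57, 59, -17]
G = nx.LCF_graph(126, shifts, 7)

print(G.number_of_nodes(), G.number_of_edges())   # 126 189
print(all(d == 3 for _, d in G.degree()))         # True: the graph is cubic
print(nx.is_bipartite(G))                         # True: chromatic number 2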
2807989
abstract_algebra
Test for determining whether a number is prime In mathematics, the Lucas–Lehmer–Riesel test is a primality test for numbers of the form "N" = "k" ⋅ 2"n" − 1 (Riesel numbers) with odd "k" < 2"n". The test was developed by Hans Riesel, and it is based on the Lucas–Lehmer primality test. It is the fastest deterministic algorithm known for numbers of that form. For numbers of the form "N" = "k" ⋅ 2"n" + 1 (Proth numbers), either an application of Proth's theorem (a Las Vegas algorithm) or one of the deterministic proofs described in Brillhart–Lehmer–Selfridge 1975 (see Pocklington primality test) is used. The algorithm. The algorithm is very similar to the Lucas–Lehmer test, but with a variable starting point depending on the value of "k". Define a sequence "u""i" for all "i" > 0 by: $u_i = u_{i-1}^2-2. \, $ Then "N" = "k" ⋅ 2"n" − 1, with odd "k" < 2"n", is prime if and only if it divides "u""n"−2. Finding the starting value. The starting value "u"0 is determined as follows. An alternative method for finding the starting value "u"0 is given in Rödseth 1994. The selection method is much easier than that used by Riesel for the case where 3 divides "k". First find a "P" value that satisfies the following equalities of Jacobi symbols: $\left(\frac{P-2}{N}\right)=1 \quad\text{and}\quad \left(\frac{P+2}{N}\right)=-1$. In practice, only a few "P" values need be checked before one is found (5, 8, 9, or 11 work in about 85% of trials). To find the starting value "u"0 from the "P" value, we can use a Lucas("P",1) sequence, as shown in as well as page 124 of. The latter explains that when 3 ∤ "k", "P" = 4 may be used as above, and no further search is necessary. The starting value "u"0 is the Lucas sequence term "V""k"("P",1) taken mod "N". This process of selection takes very little time compared to the main test. How the test works. The Lucas–Lehmer–Riesel test is a particular case of group-order primality testing; we demonstrate that some number is prime by showing that some group has the order that it would have were that number prime, and we do this by finding an element of that group of precisely the right order. For Lucas-style tests on a number "N", we work in the multiplicative group of a quadratic extension of the integers modulo "N"; if "N" is prime, the order of this multiplicative group is "N"2 − 1, it has a subgroup of order "N" + 1, and we try to find a generator for that subgroup. We start off by trying to find a non-iterative expression for the $u_i$. Following the model of the Lucas–Lehmer test, put $u_i = a^{2^i} + a^{-2^i}$; by induction we have $u_i = u_{i-1}^2 - 2$. So we can consider ourselves as looking at the 2"i"th term of the sequence $v(i) = a^i + a^{-i}$. If "a" satisfies a quadratic equation, this is a Lucas sequence, and has an expression of the form $v(i) = \alpha v(i-1) + \beta v(i-2)$. In fact, we are looking at the "k" ⋅ 2"i"th term of a different sequence, but since decimations (taking every "k"th term, starting with the zeroth) of a Lucas sequence are themselves Lucas sequences, we can deal with the factor "k" by picking a different starting point. LLR software. LLR is a program that can run the LLR tests. The program was developed by Jean Penné. Vincent Penné has modified the program so that it can obtain tests via the Internet. The software is used both by individual prime searchers and by some distributed computing projects, including Riesel Sieve and PrimeGrid.
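As a concrete illustration of the algorithm described above (a sketch, not the LLR program itself), the following Python code assumes 3 ∤ "k", so that "P" = 4 can be used for the starting value:

def lucas_v(P, k, N):
    """V_k of the Lucas sequence V(P, 1), mod N, via the doubling identities
    V_{2m} = V_m^2 - 2 and V_{2m+1} = V_m * V_{m+1} - P (valid since Q = 1)."""
    v0, v1 = 2 % N, P % N          # (V_0, V_1)
    for bit in bin(k)[2:]:
        if bit == '1':
            v0, v1 = (v0 * v1 - P) % N, (v1 * v1 - 2) % N
        else:
            v0, v1 = (v0 * v0 - 2) % N, (v0 * v1 - P) % N
    return v0

def llr(k, n):
    """Lucas-Lehmer-Riesel test for N = k * 2^n - 1, odd k < 2^n, 3 not dividing k."""
    N = k * (1 << n) - 1
    u = lucas_v(4, k, N)           # u_0 = V_k(4, 1) mod N
    for _ in range(n - 2):
        u = (u * u - 2) % N        # u_i = u_{i-1}^2 - 2
    return u == 0                  # N is prime iff N divides u_{n-2}

print(llr(5, 4), llr(1, 11))       # True (79 is prime), False (2047 = 23 * 89)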
1975482
abstract_algebra
In algebraic geometry, the problem of resolution of singularities asks whether every algebraic variety "V" has a resolution, a non-singular variety "W" with a proper birational map "W"→"V". For varieties over fields of characteristic 0 this was proved in Hironaka (1964), while for varieties over fields of characteristic "p" it is an open problem in dimensions at least 4. Definitions. Originally the problem of resolution of singularities was to find a nonsingular model for the function field of a variety "X", in other words a complete non-singular variety "X′" with the same function field. In practice it is more convenient to ask for a different condition as follows: a variety "X" has a resolution of singularities if we can find a non-singular variety "X′" and a proper birational map from "X′" to "X". The condition that the map is proper is needed to exclude trivial solutions, such as taking "X′" to be the subvariety of non-singular points of "X". More generally, it is often useful to resolve the singularities of a variety "X" embedded into a larger variety "W". Suppose we have a closed embedding of "X" into a regular variety "W". A strong desingularization of "X" is given by a proper birational morphism from a regular variety "W"′ to "W" subject to some of the following conditions (the exact choice of conditions depends on the author): Hironaka showed that there is a strong desingularization satisfying the first three conditions above whenever "X" is defined over a field of characteristic 0, and his construction was improved by several authors (see below) so that it satisfies all conditions above. Resolution of singularities of curves. Every algebraic curve has a unique nonsingular projective model, which means that all resolution methods are essentially the same because they all construct this model. In higher dimensions this is no longer true: varieties can have many different nonsingular projective models. lists about 20 ways of proving resolution of singularities of curves. Newton's method. Resolution of singularities of curves was essentially first proved by Newton (1676), who showed the existence of Puiseux series for a curve from which resolution follows easily. Riemann's method. Riemann constructed a smooth Riemann surface from the function field of a complex algebraic curve, which gives a resolution of its singularities. This can be done over more general fields by using the set of discrete valuation rings of the field as a substitute for the Riemann surface. Albanese's method. Albanese's method consists of taking a curve that spans a projective space of sufficiently large dimension (more than twice the degree of the curve) and repeatedly projecting down from singular points to projective spaces of smaller dimension. This method extends to higher-dimensional varieties, and shows that any "n"-dimensional variety has a projective model with singularities of multiplicity at most "n"!. For a curve, "n = 1", and thus there are no singular points. Normalization. gave a one step method of resolving singularities of a curve by taking the normalization of the curve. Normalization removes all singularities in codimension 1, so it works for curves but not in higher dimensions. Valuation rings. Another one-step method of resolving singularities of a curve is to take a space of valuation rings of the function field of the curve. This space can be made into a nonsingular projective curve birational to the original curve. Blowing up. 
Repeatedly blowing up the singular points of a curve will eventually resolve the singularities. The main task with this method is to find a way to measure the complexity of a singularity and to show that blowing up improves this measure. There are many ways to do this. For example, one can use the arithmetic genus of the curve. Noether's method. Noether's method takes a plane curve and repeatedly applies quadratic transformations (determined by a singular point and two points in general position). Eventually this produces a plane curve whose only singularities are ordinary multiple points (all tangent lines have multiplicity two). Bertini's method. Bertini's method is similar to Noether's method. It starts with a plane curve, and repeatedly applies birational transformations to the plane to improve the curve. The birational transformations are more complicated than the quadratic transformations used in Noether's method, but produce the better result that the only singularities are ordinary double points. Resolution of singularities of surfaces. Surfaces have many different nonsingular projective models (unlike the case of curves where the nonsingular projective model is unique). However a surface still has a unique minimal resolution, that all others factor through (all others are resolutions of it). In higher dimensions there need not be a minimal resolution. There were several attempts to prove resolution for surfaces over the complex numbers by , , , , and , but points out that none of these early attempts are complete, and all are vague (or even wrong) at some critical point of the argument. The first rigorous proof was given by , and an algebraic proof for all fields of characteristic 0 was given by . gave a proof for surfaces of non-zero characteristic. Resolution of singularities has also been shown for all excellent 2-dimensional schemes (including all arithmetic surfaces) by . Zariski's method. Zariski's method of resolution of singularities for surfaces is to repeatedly alternate normalizing the surface (which kills codimension 1 singularities) with blowing up points (which makes codimension 2 singularities better, but may introduce new codimension 1 singularities). Although this will resolve the singularities of surfaces by itself, Zariski used a more roundabout method: he first proved a local uniformization theorem showing that every valuation of a surface could be resolved, then used the compactness of the Zariski–Riemann surface to show that it is possible to find a finite set of surfaces such that the center of each valuation is simple on at least one of these surfaces, and finally by studying birational maps between surfaces showed that this finite set of surfaces could be replaced by a single non-singular surface. Jung's method. By applying strong embedded resolution for curves, reduces to a surface with only rather special singularities (abelian quotient singularities) which are then dealt with explicitly. The higher-dimensional version of this method is de Jong's method. Albanese method. In general the analogue of Albanese's method for curves shows that for any variety one can reduce to singularities of order at most "n"!, where "n" is the dimension. For surfaces this reduces to the case of singularities of order 2, which are easy enough to do explicitly. Abhyankar's method. proved resolution of singularities for surfaces over a field of any characteristic by proving a local uniformization theorem for valuation rings. 
The hardest case is valuation rings of rank 1 whose valuation group is a nondiscrete subgroup of the rational numbers. The rest of the proof follows Zariski's method. Hironaka's method. Hironaka's method for arbitrary characteristic varieties gives a resolution method for surfaces, which involves repeatedly blowing up points or smooth curves in the singular set. Lipman's method. showed that a surface "Y" (a 2-dimensional reduced Noetherian scheme) has a desingularization if and only if its normalization is finite over "Y" and analytically normal (the completions of its singular points are normal) and has only finitely many singular points. In particular if "Y" is excellent then it has a desingularization. His method was to consider normal surfaces "Z" with a birational proper map to "Y" and show that there is a minimal one with minimal possible arithmetic genus. He then shows that all singularities of this minimal "Z" are pseudo rational, and shows that pseudo rational singularities can be resolved by repeatedly blowing up points. Resolution of singularities in higher dimensions. The problem of resolution of singularities in higher dimensions is notorious for many incorrect published proofs and announcements of proofs that never appeared. Zariski's method. For 3-folds the resolution of singularities was proved in characteristic 0 by . He first proved a theorem about local uniformization of valuation rings, valid for varieties of any dimension over any field of characteristic 0. He then showed that the Zariski–Riemann space of valuations is quasi-compact (for any variety of any dimension over any field), implying that there is a finite family of models of any projective variety such that any valuation has a smooth center over at least one of these models. The final and hardest part of the proof, which uses the fact that the variety is of dimension 3 but which works for all characteristics, is to show that given 2 models one can find a third that resolves the singularities that each of the two given models resolve. Abhyankar's method. proved resolution of singularities for 3-folds in characteristic greater than 6. The restriction on the characteristic arises because Abhyankar shows that it is possible to resolve any singularity of a 3-fold of multiplicity less than the characteristic, and then uses Albanese's method to show that singularities can be reduced to those of multiplicity at most (dimension)! = 3! = 6. gave a simplified version of Abhyankar's proof. Cossart and Piltant (2008, 2009) proved resolution of singularities of 3-folds in all characteristics, by proving local uniformization in dimension at most 3, and then checking that Zariski's proof that this implies resolution for 3-folds still works in the positive characteristic case. Hironaka's method. Resolution of singularities in characteristic 0 in all dimensions was first proved by . He proved that it was possible to resolve singularities of varieties over fields of characteristic 0 by repeatedly blowing up along non-singular subvarieties, using a very complicated argument by induction on the dimension. Simplified versions of his formidable proof were given by several people, including , , , , , , . Some of the recent proofs are about a tenth of the length of Hironaka's original proof, and are easy enough to give in an introductory graduate course. For an expository account of the theorem, see and for a historical discussion see . De Jong's method. 
found a different approach to resolution of singularities, generalizing Jung's method for surfaces, which was used by and by to prove resolution of singularities in characteristic 0. De Jong's method gave a weaker result for varieties of all dimensions in characteristic "p", which was strong enough to act as a substitute for resolution for many purposes. De Jong proved that for any variety "X" over a field there is a dominant proper morphism, preserving the dimension, from a regular variety onto "X". This need not be a birational map, so it is not a resolution of singularities, as it may be generically finite to one and so involves a finite extension of the function field of "X". De Jong's idea was to try to represent "X" as a fibration over a smaller space "Y" with fibers that are curves (this may involve modifying "X"), then eliminate the singularities of "Y" by induction on the dimension, then eliminate the singularities in the fibers. Resolution for schemes and status of the problem. It is easy to extend the definition of resolution to all schemes. Not all schemes have resolutions of their singularities: showed that if a locally Noetherian scheme "X" has the property that one can resolve the singularities of any finite integral scheme over "X", then "X" must be quasi-excellent. Grothendieck also suggested that the converse might hold: in other words, if a locally Noetherian scheme "X" is reduced and quasi-excellent, then it is possible to resolve its singularities. When "X" is defined over a field of characteristic 0 and is Noetherian, this follows from Hironaka's theorem, and when "X" has dimension at most 2 it was proved by Lipman. gave a survey of work on the unsolved characteristic "p" resolution problem. Method of proof in characteristic zero. The lingering perception that the proof of resolution is very hard gradually diverged from reality. ... it is feasible to prove resolution in the last two weeks of a beginning algebraic geometry course. There are many constructions of strong desingularization, but all of them give essentially the same result. In every case the global object (the variety to be desingularized) is replaced by local data (the ideal sheaf of the variety, those of the exceptional divisors, and some "orders" that represent how far the ideal should be resolved in that step). With these local data the centers of blowing up are defined. The centers are defined locally, and it is therefore a problem to guarantee that they will match up into a global center. This can be done by defining which blowings-up are allowed to resolve each ideal. Done appropriately, this will make the centers match automatically. Another way is to define a local invariant, depending on the variety and the history of the resolution (the previous local centers), so that the centers consist of the maximum locus of the invariant. The definition is made in such a way that making this choice is meaningful, giving smooth centers transversal to the exceptional divisors. In either case the problem is reduced to resolving the singularities of the tuple formed by the ideal sheaf and the extra data (the exceptional divisors and the order, "d", to which the resolution should go for that ideal). This tuple is called a "marked ideal", and the set of points at which the order of the ideal is larger than "d" is called its co-support. The proof that there is a resolution for the marked ideals is done by induction on dimension.
The induction breaks into two steps: Here we say that a marked ideal is of "maximal order" if at some point of its co-support the order of the ideal is equal to "d". A key ingredient in the strong resolution is the use of the Hilbert–Samuel function of the local rings of the points in the variety. This is one of the components of the resolution invariant. Examples. Multiplicity need not decrease under blowup. The most obvious invariant of a singularity is its multiplicity. However, this need not decrease under blowup, so it is necessary to use more subtle invariants to measure the improvement. For example, the rhamphoid cusp "y"2 = "x"5 has a singularity of order 2 at the origin. After blowing up at its singular point it becomes the ordinary cusp "y"2 = "x"3, which still has multiplicity 2. It is clear that the singularity has improved, since the degree of the defining polynomial has decreased. This does not happen in general. An example where it does not is given by the isolated singularity of "x"2 + "y"3"z" + "z"3 = 0 at the origin. Blowing it up gives the singularity "x"2 + "y"2"z" + "yz"3 = 0. It is not immediately obvious that this new singularity is better, as both singularities have multiplicity 2 and are given by the sum of monomials of degrees 2, 3, and 4. Blowing up the most singular points does not work. A natural idea for improving singularities is to blow up the locus of the "worst" singular points. The Whitney umbrella "x"2 = "y"2"z" has as its singular set the "z" axis, most of whose points are ordinary double points, but there is a more complicated pinch point singularity at the origin, so blowing up the worst singular points suggests that one should start by blowing up the origin. However, blowing up the origin reproduces the same singularity on one of the coordinate charts. So blowing up the (apparently) "worst" singular points does not improve the singularity. Instead the singularity can be resolved by blowing up along the "z"-axis. There are algorithms that work by blowing up the "worst" singular points in some sense, such as , but this example shows that the definition of the "worst" points needs to be quite subtle. For more complicated singularities, such as "x"2 = "y""m""z""n", which is singular along "x" = "yz" = 0, blowing up the worst singularity at the origin produces the singularities "x"2 = "y""m"+"n"−2"z""n" and "x"2 = "y""m""z""m"+"n"−2, which are worse than the original singularity if "m" and "n" are both at least 3. Incremental resolution procedures need memory. A natural way to resolve singularities is to repeatedly blow up some canonically chosen smooth subvariety. This runs into the following problem. The singular set of "x"2 = "y"2"z"2 is the pair of lines given by the "y" and "z" axes.
The only reasonable varieties to blow up are the origin, one of these two axes, or the whole singular set (both axes). However, the whole singular set cannot be used since it is not smooth, and choosing one of the two axes breaks the symmetry between them, so is not canonical. This means we have to start by blowing up the origin, but this reproduces the original singularity, so we seem to be going round in circles. The solution to this problem is that although blowing up the origin does not change the type of the singularity, it does give a subtle improvement: it breaks the symmetry between the two singular axes, because one of them is an exceptional divisor for a previous blowup, so it is now permissible to blow up just one of these. However, in order to exploit this, the resolution procedure needs to treat these two singularities differently, even though they are locally the same. This is sometimes done by giving the resolution procedure some memory, so that the center of the blowup at each step depends not only on the singularity, but on the previous blowups used to produce it. Resolutions are not functorial. Some resolution methods (in characteristic 0) are functorial for all smooth morphisms. However, it is not possible to find a strong resolution functorial for all (possibly non-smooth) morphisms. An example is given by the map from the affine plane "A"2 to the conical singularity "x"2 + "y"2 = "z"2 taking ("X","Y") to (2"XY", "X"2 − "Y"2, "X"2 + "Y"2). The "XY"-plane is already nonsingular, so should not be changed by resolution, and any resolution of the conical singularity factorizes through the minimal resolution given by blowing up the singular point. However, the rational map from the "XY"-plane to this blowup does not extend to a regular map. Minimal resolutions need not exist. Minimal resolutions (resolutions such that every resolution factors through them) exist in dimensions 1 and 2, but not always in higher dimensions. The Atiyah flop gives an example in 3 dimensions of a singularity with no minimal resolution. Let "Y" be the zeros of "xy" = "zw" in A4, and let "V" be the blowup of "Y" at the origin. The exceptional locus of this blowup is isomorphic to P1×P1, and can be blown down to P1 in 2 different ways, giving two small resolutions "X"1 and "X"2 of "Y", neither of which can be blown down any further. Resolutions need not commute with products. gives the following example, showing that one cannot expect a sufficiently good resolution procedure to commute with products. If "f":"A"→"B" is the blowup of the origin of a quadric cone "B" in affine 3-space, then "f"×"f":"A"×"A"→"B"×"B" cannot be produced by an étale local resolution procedure, essentially because the exceptional locus has 2 components that intersect. Singularities of toric varieties. Singularities of toric varieties give examples of high-dimensional singularities that are easy to resolve explicitly. A toric variety is defined by a fan, a collection of cones in a lattice. The singularities can be resolved by subdividing each cone into a union of cones each of which is generated by a basis for the lattice, and taking the corresponding toric variety. Choosing centers that are regular subvarieties of "X". Construction of a desingularization of a variety "X" may not produce centers of blowings up that are smooth subvarieties of "X".
Many constructions of a desingularization of an abstract variety "X" proceed by locally embedding "X" in a smooth variety "W", considering its ideal in "W", and computing a canonical desingularization of this ideal. The desingularization of ideals uses the order of the ideal as a measure of how singular the ideal is. The desingularization of the ideal can be made in such a way that the local centers can be justified to patch together to give global centers. This method leads to a proof that is relatively simpler to present, compared to Hironaka's original proof, which uses the Hilbert–Samuel function as the measure of how bad the singularities are. For example, the proofs in , , , and use this idea. However, this method only ensures centers of blowings up that are regular in "W". The following example shows that this method can produce centers that have non-smooth intersections with the (strict transform of) "X". Therefore, the resulting desingularization, when restricted to the abstract variety "X", is not obtained by blowing up regular subvarieties of "X". Let "X" be the subvariety of four-dimensional affine space, with coordinates "x","y","z","w", generated by "y"2−"x"3 and "x"4+"xz"2−"w"3. The canonical desingularization of the ideal with these generators would blow up the center "C"0 given by "x"="y"="z"="w"=0. The transform of the ideal in the "x"-chart is generated by "x"−"y"2 and "y"2("y"2+"z"2−"w"3). The next center of blowing up "C"1 is given by "x"="y"=0. However, the strict transform of "X" is "X"1, which is generated by "x"−"y"2 and "y"2+"z"2−"w"3. This means that the intersection of "C"1 and "X"1 is given by "x"="y"=0 and "z"2−"w"3=0, which is not regular. To produce centers of blowings up that are regular subvarieties of "X", stronger proofs use the Hilbert–Samuel function of the local rings of "X" rather than the order of its ideal in the local embedding in "W". Other variants of resolutions of singularities. After the resolution the total transform, the union of the strict transform "X" and the exceptional divisor, is a variety that can be made, at best, to have simple normal crossing singularities. Then it is natural to consider the possibility of resolving singularities without resolving this type of singularities. The problem is to find a resolution that is an isomorphism over the set of smooth and simple normal crossing points. When "X" is a divisor, i.e. it can be embedded as a codimension-one subvariety in a smooth variety, the existence of a strong resolution avoiding simple normal crossing points is known to be true. The general case, and generalizations avoiding different types of singularities, are still not known. Avoiding certain singularities is impossible. For example, one cannot resolve singularities avoiding blowing up the normal crossings singularities. In fact, to resolve the pinch point singularity the whole singular locus needs to be blown up, including points where normal crossing singularities are present.
1474929
abstract_algebra
In mathematics, specifically the area of algebraic number theory, a cubic field is an algebraic number field of degree three. Definition. If "K" is a field extension of the rational numbers Q of degree ["K":Q] = 3, then "K" is called a cubic field. Any such field is isomorphic to a field of the form $\mathbf{Q}[x]/(f(x))$ where "f" is an irreducible cubic polynomial with coefficients in Q. If "f" has three real roots, then "K" is called a totally real cubic field and it is an example of a totally real field. If, on the other hand, "f" has a non-real root, then "K" is called a complex cubic field. A cubic field "K" is called a cyclic cubic field if it contains all three roots of its generating polynomial "f". Equivalently, "K" is a cyclic cubic field if it is a Galois extension of Q, in which case its Galois group over Q is cyclic of order three. This can only happen if "K" is totally real. It is a rare occurrence in the sense that if the set of cubic fields is ordered by discriminant, then the proportion of cubic fields which are cyclic approaches zero as the bound on the discriminant approaches infinity. A cubic field is called a pure cubic field if it can be obtained by adjoining the real cube root $\sqrt[3]{n}$ of a cube-free positive integer "n" to the rational number field Q. Such fields are always complex cubic fields since each positive number has two complex non-real cube roots. Galois closure. A cyclic cubic field "K" is its own Galois closure with Galois group Gal("K"/Q) isomorphic to the cyclic group of order three. However, any other cubic field "K" is a non-Galois extension of Q and has a field extension "N" of degree two as its Galois closure. The Galois group Gal("N"/Q) is isomorphic to the symmetric group "S"3 on three letters. Associated quadratic field. The discriminant of a cubic field "K" can be written uniquely as "df"2 where "d" is a fundamental discriminant. Then, "K" is cyclic if and only if "d" = 1, in which case the only subfield of "K" is Q itself. If "d" ≠ 1 then the Galois closure "N" of "K" contains a unique quadratic field "k" whose discriminant is "d" (in the case "d" = 1, the subfield Q is sometimes considered as the "degenerate" quadratic field of discriminant 1). The conductor of "N" over "k" is "f", and "f"2 is the relative discriminant of "N" over "K". The discriminant of "N" is "d"3"f"4. The field "K" is a pure cubic field if and only if "d" = −3. This is the case for which the quadratic field contained in the Galois closure of "K" is the cyclotomic field of cube roots of unity. Discriminant. Since the sign of the discriminant of a number field "K" is (−1)"r"2, where "r"2 is the number of conjugate pairs of complex embeddings of "K" into C, the discriminant of a cubic field will be positive precisely when the field is totally real, and negative if it is a complex cubic field. Given some real number "N" > 0 there are only finitely many cubic fields "K" whose discriminant "D""K" satisfies |"D""K"| ≤ "N". Formulae are known which calculate the prime decomposition of "D""K", and so it can be explicitly calculated. Unlike quadratic fields, several non-isomorphic cubic fields "K"1, ..., "Km" may share the same discriminant "D". The number "m" of these fields is called the multiplicity of the discriminant "D". Some small examples are "m" = 2 for "D"  = −1836, 3969, "m" = 3 for "D"  = −1228, 22356, "m"  = 4 for "D" = −3299, 32009, and "m" = 6 for "D" = −70956, 3054132. 
Any cubic field "K" will be of the form "K" = Q(θ) for some number θ that is a root of an irreducible polynomial $f(X)=X^3-aX+b$ where "a" and "b" are integers. The discriminant of "f" is Δ = 4"a"3 − 27"b"2. Denoting the discriminant of "K" by "D", the index "i"(θ) of θ is then defined by Δ = "i"(θ)2"D". In the case of a non-cyclic cubic field "K", this index formula can be combined with the conductor formula "D" = "f"2"d" to obtain a decomposition of the polynomial discriminant Δ = "i"(θ)2"f"2"d" into the square of the product "i"(θ)"f" and the discriminant "d" of the quadratic field "k" associated with the cubic field "K", where "d" is squarefree up to a possible factor 22 or 23. Georgy Voronoy gave a method for separating "i"(θ) and "f" in the square part of Δ. The study of the number of cubic fields whose discriminant is less than a given bound is a current area of research. Let "N"+("X") (respectively "N"−("X")) denote the number of totally real (respectively complex) cubic fields whose discriminant is bounded by "X" in absolute value. In the early 1970s, Harold Davenport and Hans Heilbronn determined the first term of the asymptotic behaviour of "N"±("X") (i.e. as "X" goes to infinity). By means of an analysis of the residue of the Shintani zeta function, combined with a study of the tables of cubic fields compiled by Karim Belabas and some heuristics, David P. Roberts conjectured a more precise asymptotic formula: $N^\pm(X)\sim\frac{A_\pm}{12\zeta(3)}X+\frac{4\zeta(\frac{1}{3})B_\pm}{5\Gamma(\frac{2}{3})^3\zeta(\frac{5}{3})}X^{\frac{5}{6}}$ where "A"± = 1 or 3, "B"± = 1 or $\sqrt{3}$, according to the totally real or complex case, ζ("s") is the Riemann zeta function, and Γ("s") is the Gamma function. Proofs of this formula have been published using methods based on Bhargava's earlier work, as well as via the Shintani zeta function. Unit group. According to Dirichlet's unit theorem, the torsion-free unit rank "r" of an algebraic number field "K" with "r"1 real embeddings and "r"2 pairs of conjugate complex embeddings is determined by the formula "r" = "r"1 + "r"2 − 1. Hence a totally real cubic field "K" with "r"1 = 3, "r"2 = 0 has two independent units ε1, ε2, and a complex cubic field "K" with "r"1 = "r"2 = 1 has a single fundamental unit ε1. These fundamental systems of units can be calculated by means of generalized continued fraction algorithms by Voronoi, which have been interpreted geometrically by Delone and Faddeev.
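The sign convention above is easy to experiment with; here is a small sketch using sympy (for these two classical polynomials the polynomial discriminant happens to coincide with the field discriminant, i.e. the index "i"(θ) is 1):

from sympy import symbols, discriminant, real_roots

x = symbols('x')
for f in (x**3 - x - 1, x**3 - 3*x - 1):
    print(f, discriminant(f, x), len(real_roots(f)))
# x**3 - x - 1:   discriminant -23, 1 real root  -> a complex cubic field
# x**3 - 3*x - 1: discriminant  81, 3 real roots -> totally real (81 = 1 * 9^2, a cyclic field)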
2580100
abstract_algebra
In mathematics, the Pocklington–Lehmer primality test is a primality test devised by Henry Cabourn Pocklington and Derrick Henry Lehmer. The test uses a partial factorization of $N - 1$ to prove that an integer $N$ is prime. It produces a primality certificate that can be found with less effort than that of the Lucas primality test, which requires the full factorization of $N - 1$. Pocklington criterion. The basic version of the test relies on the Pocklington theorem (or Pocklington criterion), which is formulated as follows: Let $N > 1$ be an integer, and suppose there exist natural numbers a and p such that $a^{N-1} \equiv 1 \pmod{N}$, (1) p is prime, $p \mid N - 1$ and $p > \sqrt{N} - 1$, (2) and $\gcd(a^{(N-1)/p} - 1, N) = 1$. (3) Then N is prime. Note: Equation (1) is simply a Fermat primality test. If we find "any" value of a, not divisible by N, such that equation (1) is false, we may immediately conclude that N is not prime. (This divisibility condition is not explicitly stated because it is implied by equation (3).) For example, let $N = 35$. With $a = 2$, we find that $a^{N-1} \equiv 9 \pmod{N}$. This is enough to prove that N is not prime. Given N, if p and a can be found which satisfy the conditions of the theorem, then N is prime. Moreover, the pair (p, a) constitutes a "primality certificate" which can be quickly verified to satisfy the conditions of the theorem, confirming N as prime. The main difficulty is finding a value of p which satisfies (2). First, it is usually difficult to find a large prime factor of a large number. Second, for many primes N, such a p does not exist. For example, $N = 17$ has no suitable p because $N - 1 = 2^4$, and $p = 2 < \sqrt{N}-1$, which violates the inequality in (2); other examples include $N = 19, 37, 41, 61, 71, 73,$ and $97$. Given p, finding a is not nearly as difficult. If N is prime, then by Fermat's little theorem, any a in the interval $1 \leq a \leq N - 1$ will satisfy (1) (however, the cases $a = 1$ and $a = N - 1$ are trivial and will not satisfy (3)). This a will satisfy (3) as long as ord(a) does not divide $(N - 1)/p$. Thus a randomly chosen a in the interval $2 \leq a \leq N - 2$ has a good chance of working. If a is a generator mod N, its order is $N - 1$, and so the method is guaranteed to work for this choice. Generalized Pocklington test. The above version of Pocklington's theorem is sometimes impossible to apply because some primes $N$ are such that there is no prime $p$ dividing $N - 1$ where $p > \sqrt{N} - 1$. The following generalized version of Pocklington's theorem is more widely applicable. Theorem: Factor "N" − 1 as "N" − 1 = "AB", where A and B are relatively prime, $A > \sqrt{N}$, the prime factorization of A is known, but the factorization of B is not necessarily known. If for each prime factor p of A there exists an integer $a_p$ so that $a_p^{N-1} \equiv 1 \pmod{N}$ (6) and $\gcd(a_p^{(N-1)/p} - 1, N) = 1$, (7) then "N" is prime. Comments. The Pocklington–Lehmer primality test follows directly from this corollary. To use this corollary, first find enough factors of "N" − 1 so the product of those factors exceeds $\sqrt{N}$. Call this product A. Then let "B" = ("N" − 1)/"A" be the remaining, unfactored portion of "N" − 1. It does not matter whether B is prime. We merely need to verify that no prime that divides A also divides B, that is, that A and B are relatively prime. Then, for every prime factor p of A, find an $a_p$ which fulfills conditions (6) and (7) of the corollary. If such $a_p$s can be found, the Corollary implies that N is prime. According to Koblitz, $a_p$ = 2 often works. Example. Determine whether $N = 27457$ is prime. First, search for small prime factors of $N - 1$.
We quickly find that $N - 1 = 2^6 \cdot 3 \cdot B = 192 \cdot B$. We must determine whether $A = 192$ and $B = (N - 1)/A = 143$ meet the conditions of the Corollary. $A^2 = 36864 > N$, so $A > \sqrt{N}$. Therefore, we have factored enough of $N - 1$ to apply the Corollary. We must also verify that $\gcd{(A, B)} = 1$. It does not matter whether B is prime (in fact, it is not). Finally, for each prime factor p of A, use trial and error to find an $a_p$ that satisfies (6) and (7). For $p = 2$, try $a_2 = 2$. Raising $a_2$ to this high power can be done efficiently using binary exponentiation: $a_2^{N - 1} \equiv 2^{27456} \equiv 1 \pmod{27457}$ $\gcd{(a_2^{(N - 1)/2} - 1, N)} = \gcd{(2^{13728} - 1, 27457)} = 27457$. So, $a_2 = 2$ satisfies (6) but not (7). As we are allowed a different $a_p$ for each p, try $a_2 = 5$ instead: $a_2^{N - 1} \equiv 5^{27456} \equiv 1 \pmod{27457}$ $\gcd{(a_2^{(N - 1)/2} - 1, N)} = \gcd{(5^{13728} - 1, 27457)} = 1$. So $a_2 = 5$ satisfies both (6) and (7). For $p = 3$, the second prime factor of A, try $a_3 = 2$: $a_3^{N - 1} \equiv 2^{27456} \equiv 1 \pmod{27457}$. $\gcd{(a_3^{(N - 1)/3} - 1, N)} = \gcd{(2^{9152} - 1, 27457)} = 1$. $a_3 = 2$ satisfies both (6) and (7). This completes the proof that $N = 27457$ is prime. The certificate of primality for $N = 27457$ would consist of the two $(p, a_p)$ pairs (2, 5) and (3, 2). We have chosen small numbers for this example, but in practice when we start factoring A we may get factors that are themselves so large their primality is not obvious. We cannot prove N is prime without proving that the factors of A are prime as well. In such a case we use the same test recursively on the large factors of A, until all of the primes are below a reasonable threshold. In our example, we can say with certainty that 2 and 3 are prime, and thus we have proved our result. The primality certificate is the list of $(p, a_p)$ pairs, which can be quickly checked against the conditions of the corollary. If our example had included large prime factors, the certificate would be more complicated. It would first consist of our initial round of $a_p$ values corresponding to the 'prime' factors of A; next, for each factor of A whose primality was uncertain, we would have more $a_p$ values, and so on for factors of these factors until we reach factors whose primality is certain. This can continue for many layers if the initial prime is large, but the important point is that a certificate can be produced, containing at each level the prime to be tested and the corresponding $a_p$ values, which can easily be verified. Extensions and variants. The 1975 paper by Brillhart, Lehmer, and Selfridge gives a proof for what is shown above as the "generalized Pocklington theorem" as Theorem 4 on page 623. Additional theorems are shown which allow less factoring. This includes their Theorem 3 (a strengthening of an 1878 theorem of Proth): Let $N-1 = mp$ where p is an odd prime such that $2p+1 > \sqrt N$. If there exists an a for which $a^{(N-1)/2} \equiv -1 \pmod{N}$, but $a^{m/2} \not\equiv -1 \pmod{N}$, then N is prime. If N is large, it is often difficult to factor enough of $N - 1$ to apply the above corollary. Theorem 5 of the Brillhart, Lehmer, and Selfridge paper allows a primality proof when the factored part has reached only $(N/2)^{1/3}$. Many additional such theorems are presented that allow one to prove the primality of N based on the partial factorization of $N - 1$, $N + 1$, $N^2 + 1$, and $N^2 \pm N + 1$.
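The example above is small enough to re-check mechanically. Here is a short Python sketch (our own check, not code from the test's literature) that verifies the certificate pairs (2, 5) and (3, 2) against conditions (6) and (7):

```python
# Verify the Pocklington certificate for N = 27457 from the example:
# N - 1 = A * B with A = 192 = 2^6 * 3 (fully factored) and B = 143.
from math import gcd

N = 27457
A, B = 192, 143
assert A * B == N - 1 and A * A > N and gcd(A, B) == 1

for p, a in [(2, 5), (3, 2)]:                        # the (p, a_p) pairs
    assert pow(a, N - 1, N) == 1                     # condition (6)
    assert gcd(pow(a, (N - 1) // p, N) - 1, N) == 1  # condition (7)
print("certificate checks out: 27457 is prime, given that 2 and 3 are")
```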
2936333
abstract_algebra
Psychological concept Collective self-esteem is a concept originating in the field of psychology that describes the aspect of an individual's self-image that stems from how the individual interacts with others and the groups that the individual is a part of. The idea originated during the research of Jennifer Crocker, during which she was trying to learn about the connection between a person's self-esteem and their attitude towards or about the group that the person is part of. Collective self-esteem is discussed subjectively as a concept as well as measured objectively with various scales and assessments. The data from such research is used practically to give importance and weight to the idea that most individuals benefit from being in a group setting at least some of the time, as well as from being able to identify as part of a group. History. Jennifer Crocker and Riia Luhtanen were the first to study collective self-esteem. They believed there was a relationship between people's self-esteem and how they felt about groups they were a part of. Crocker hypothesized that people who were high in the trait of collective self-esteem would be more likely to “react to threats to collective self-esteem by derogating out-groups and enhancing the in-group.” The idea of collective self-esteem arose out of social identity theory (Tajfel & Turner). Social identity theory focused on an individual's personal beliefs about themselves and beliefs that stemmed from the groups they were part of. Collective self-esteem described a more group-oriented idea of self-esteem. It focused more on how groups, when they are threatened or perceive themselves to be threatened, will increase bias in favor of the in-group and increase prejudice toward the out-group. Crocker published a paper titled “Collective self-esteem and in-group bias” in the Journal of Personality and Social Psychology. Crocker developed a scale that consisted of four categories to measure collective self-esteem. The study concluded that prejudice and discrimination toward out-groups were not so much motivated by personal self-esteem needs but rather they were attempts to increase or enhance collective self-esteem. Research. Collective Self-Esteem Scale. The Collective Self-Esteem Scale (CSES) was developed by Luhtanen and Crocker as a measure to assess individuals’ social identity based on their membership in ascribed groups such as race, ethnicity, gender, and the like. The CSES has been one of the most widely used instruments in the field of psychology to assess collective group identity among racial and ethnic populations. In the initial CSES development study, exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) techniques were used to examine the underlying factor structure of the proposed four-factor CSES. The preliminary prototype of the CSES consisted of 43 items. To shorten the instrument, Crocker and Luhtanen selected the four items with the highest factor loadings from each subscale to represent the final 16-item CSES. The actual scale is a list of statements that pertain to the person's membership in a group or category and each is rated on a seven-point scale. The scoring is done through four subscales that are categorized as follows: 1) Items 1, 5, 9, and 13 = Membership self-esteem. 2) Items 2, 6, 10, and 14 = Private collective self-esteem. 3) Items 3, 7, 11, and 15 = Public collective self-esteem. 4) Items 4, 8, 12, and 16 = Importance to identity. 
First, reverse-score answers to items 2, 4, 5, 7, 10, 12, 13, and 15, such that (1=7), (2=6), (3=5), (4=4), (5=3), (6=2), (7=1). Then sum the answers to the four items for each respective subscale score, and divide each by four. Because the subscales measure distinct constructs, creating an overall or composite score for collective self-esteem is strongly discouraged. Race. In the context of cross-cultural research using the CSES, Crocker, Luhtanen, Blaine, and Broadnax explored the nature of collective self-esteem in Asian, Black, and White college students. Crocker et al. found that collective self-esteem was a significant predictor of psychological adjustment and that among the Black students in their sample, the correlation between public and private collective self-esteem was essentially zero. Based on this latter finding, Crocker et al. surmised that Black college students might separate how they privately feel about their group from how they believe others may evaluate them. This separation between public and private evaluations may represent an important survival strategy for Black Americans because of the prejudice and discrimination they face in the United States. The literature has suggested that Black Americans may generally feel good about their own racial group but still believe that external perceptions of their racial group may be negative or derogatory. This phenomenon might indicate a CSES factor structure for Black Americans that is different from that of other racial or ethnic groups in the United States. In-group and out-group bias. Cremer et al. began a study expecting that people high in CSE would engage in indirect enhancement of the in-group, and their findings suggest that predictions made by Social Identity Theory are more applicable to individuals with a high level of CSE. In this study, Cremer et al. found that participants identified more with the in-group than with the out-group. Participants high in CSE evaluated in-group members as fairer and more competent than participants low in CSE. Cremer et al. also found that women expressed a higher level of CSE than men. These findings provide additional evidence that individuals high in CSE are more likely to engage in distorted, in-group-favoring evaluations when there is a possible threat to their CSE. Applying the CSE scale in situations of a potential threat to the in-group (i.e., success or failure feedback), Crocker and Luhtanen found that, in contrast to people low in CSE, people high in CSE showed in-group favoritism, thereby indirectly enhancing the in-group. Individuals high in CSE will evaluate in-group members more positively than those low in CSE. The results concerning the in-group and out-group evaluations seem to suggest that people with high CSE can be considered more confident about their esteemed social identity, making them search for more opportunities to enhance the collective self. As a result, those people will feel a greater need to evaluate their in-group members more positively (i.e., in-group favoritism) than people low in CSE. On the other hand, people low in CSE do not feel very confident about their social identity and, in order to avoid failure, they will consider out-group derogation as a more useful strategy to protect their social identity. Real-world application. Collective self-esteem can be seen in real-world applications through the use of BIRGing and CORFing. Though they are distinct concepts, both affect and draw on collective self-esteem in our everyday lives. 
BIRGing (basking in reflected glory) is when a person uses the association of another person's success to boost their own self-esteem or self-glory. The act of BIRGing is often used to offset any threats made to a person's or a group's self-esteem. Sometimes this is done unknowingly and unintentionally. An example of BIRGing can be seen when your favorite sports team, say the University of Kansas men's basketball team, has just won the national title. You will notice, when walking around campus or reading the local newspaper, that you are more likely to hear or read “We won!” or “We are the champs!” instead of "KU won!" CORFing (cutting off reflected failure) can be seen in this same example, except when the team suffers a loss, the fans will change “we” to “they”. For example, “They did not show enough heart” or “They really got outplayed today”. When someone uses BIRGing or CORFing, their collective self-esteem will be altered in either a positive or negative way, and one must be careful not only about how frequently one uses these techniques, but also about the fact that they are often used unintentionally and can be taken out of context in many situations. In an article titled “BIRGing and CORFing: From the Hardcourt to the Boardroom” by Kevin Meyer, the author provides an example of BIRGing and CORFing being used in the workplace for job security. This can be seen when an employee aligns themselves only with good projects or products and distances themselves from poor outcomes and products. Employees must be careful not to use these techniques too often, because their coworkers may start to believe that the individual is only doing certain things to benefit themselves and, as Meyer states, "throwing us under the bus" when hard times arise. Meyer also states “An effective leader must be willing to weather the storm, sharing in the collective successes but also standing up for their team when things don't go to plan. For most, BIRG and CORF can be more difficult to accomplish in the workplace as our affiliation with a particular team or project is often more obvious”.
3936170
abstract_algebra
Relationship between certain vector spaces In mathematics, triality is a relationship among three vector spaces, analogous to the duality relation between dual vector spaces. Most commonly, it describes those special features of the Dynkin diagram D4 and the associated Lie group Spin(8), the double cover of the 8-dimensional rotation group SO(8), arising because the group has an outer automorphism of order three. There is a geometrical version of triality, analogous to duality in projective geometry. Of all simple Lie groups, Spin(8) has the most symmetrical Dynkin diagram, D4. The diagram has four nodes with one node located at the center, and the other three attached symmetrically. The symmetry group of the diagram is the symmetric group "S"3 which acts by permuting the three legs. This gives rise to an "S"3 group of outer automorphisms of Spin(8). This automorphism group permutes the three 8-dimensional irreducible representations of Spin(8); these being the "vector" representation and two chiral "spin" representations. These automorphisms do not project to automorphisms of SO(8). The vector representation—the natural action of SO(8) (hence Spin(8)) on "F"8—consists over the real numbers of Euclidean 8-vectors and is generally known as the "defining module", while the chiral spin representations are also known as "half-spin representations", and all three of these are fundamental representations. No other connected Dynkin diagram has an automorphism group of order greater than 2; for other D"n" (corresponding to other even Spin groups, Spin(2"n")), there is still the automorphism corresponding to switching the two half-spin representations, but these are not isomorphic to the vector representation. Roughly speaking, symmetries of the Dynkin diagram lead to automorphisms of the Tits building associated with the group. For special linear groups, one obtains projective duality. For Spin(8), one finds a curious phenomenon involving 1-, 2-, and 4-dimensional subspaces of 8-dimensional space, historically known as "geometric triality". The exceptional 3-fold symmetry of the D4 diagram also gives rise to the Steinberg group 3D4. General formulation. A duality between two vector spaces over a field F is a non-degenerate bilinear form $ V_1\times V_2\to F,$ i.e., for each non-zero vector v in one of the two vector spaces, the pairing with v is a non-zero linear functional on the other. Similarly, a triality between three vector spaces over a field F is a non-degenerate trilinear form $ V_1\times V_2\times V_3\to F,$ i.e., each non-zero vector in one of the three vector spaces induces a duality between the other two. By choosing vectors "e""i" in each "V""i" on which the trilinear form evaluates to 1, we find that the three vector spaces are all isomorphic to each other, and to their duals. Denoting this common vector space by V, the triality may be re-expressed as a bilinear multiplication $ V \times V \to V$ where each "e""i" corresponds to the identity element in V. The non-degeneracy condition now implies that V is a composition algebra. It follows that V has dimension 1, 2, 4 or 8. If further "F" = R and the form used to identify V with its dual is positive definite, then V is a Euclidean Hurwitz algebra, and is therefore isomorphic to R, C, H or O. Conversely, composition algebras immediately give rise to trialities by taking each "V""i" equal to the algebra, and contracting the multiplication with the inner product on the algebra to make a trilinear form. 
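As a concrete instance of the composition-algebra construction just described, the following Python sketch (our own illustration; the helper names `qmul` and `dot` are hypothetical) builds the trilinear form "T"("x", "y", "z") = ⟨"xy", "z"⟩ on the quaternions H, the 4-dimensional case, and checks that fixing the identity element in one slot recovers the pairing between the other two slots:

```python
# Trilinear form T(x, y, z) = <x*y, z> on the quaternions H.
def qmul(p, q):
    """Hamilton product of quaternions given as (real, i, j, k) tuples."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def dot(p, q):
    """Euclidean inner product on H identified with R^4."""
    return sum(x * y for x, y in zip(p, q))

T = lambda x, y, z: dot(qmul(x, y), z)

one = (1, 0, 0, 0)
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert T(i, j, k) == 1            # since i*j = k
assert T(one, j, j) == dot(j, j)  # fixing e = 1 recovers the duality pairing
```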
An alternative construction of trialities uses spinors in dimensions 1, 2, 4 and 8. The eight-dimensional case corresponds to the triality property of Spin(8).
251531
abstract_algebra
In number theory, a congruence of squares is a congruence commonly used in integer factorization algorithms. Derivation. Given a positive integer "n", Fermat's factorization method relies on finding numbers "x" and "y" satisfying the equality $x^2 - y^2 = n$ We can then factor "n" = "x"2 − "y"2 = ("x" + "y")("x" − "y"). This algorithm is slow in practice because we need to search many such numbers, and only a few satisfy the equation. However, "n" may also be factored if we can satisfy the weaker congruence of squares condition: $x^2 \equiv y^2 \pmod{n}$ $x \not\equiv \pm y \,\pmod{n}$ From here we easily deduce $x^2 - y^2 \equiv 0 \pmod{n}$ $(x + y)(x - y) \equiv 0 \pmod{n}$ This means that "n" divides the product ("x" + "y")("x" − "y"). Thus ("x" + "y") and ("x" − "y") each contain factors of "n", but those factors can be trivial. In this case we need to find another "x" and "y". Computing the greatest common divisors of ("x" + "y", "n") and of ("x" − "y", "n") will give us these factors; this can be done quickly using the Euclidean algorithm. Congruences of squares are extremely useful in integer factorization algorithms and are extensively used in, for example, the quadratic sieve, general number field sieve, continued fraction factorization, and Dixon's factorization. Conversely, because finding square roots modulo a composite number turns out to be probabilistic polynomial-time equivalent to factoring that number, any integer factorization algorithm can be used efficiently to identify a congruence of squares. Further generalizations. It is also possible to use factor bases to help find congruences of squares more quickly. Instead of looking for $\textstyle x^2 \equiv y^2 \pmod{n}$ from the outset, we find many $\textstyle x^2 \equiv y \pmod{n}$ where the "y" have small prime factors, and try to multiply a few of these together to get a square on the right-hand side. Examples. Factorize 35. We take "n" = 35 and find that $\textstyle 6^2 = 36 \equiv 1 = 1^2 \pmod{35}$. We thus factor as $ \gcd( 6-1, 35 ) \cdot \gcd( 6+1, 35 ) = 5 \cdot 7 = 35$ Factorize 1649. Using "n" = 1649, as an example of finding a congruence of squares built up from the products of non-squares (see Dixon's factorization method), first we obtain several congruences $41^2 \equiv 32 \pmod{1649}$ $42^2 \equiv 115 \pmod{1649}$ $43^2 \equiv 200 \pmod{1649}$ of these, two have only small primes as factors $32 = 2^5$ $200 = 2^3 \cdot 5^2$ and a combination of these has an even power of each small prime, and is therefore a square $32 \cdot 200 = 2^{5+3} \cdot 5^2 = 2^8 \cdot 5^2 = (2^4 \cdot 5)^2 = 80^2$ yielding the congruence of squares $32 \cdot 200 = 80^2 \equiv 41^2 \cdot 43^2 \equiv 114^2 \pmod{1649}$ So using the values of 80 and 114 as our "x" and "y" gives factors $\gcd( 114-80, 1649 ) \cdot \gcd( 114+80, 1649 ) = 17 \cdot 97 = 1649.$
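The "n" = 1649 example above can be replayed in a few lines of Python (our own illustration, not part of the original article):

```python
# Combine the relations 41^2 = 32 and 43^2 = 200 (mod 1649): the product
# of the right-hand sides is the perfect square 80^2, giving x^2 = y^2.
from math import gcd, isqrt

n = 1649
x = 41 * 43 % n                  # 114
y = isqrt(32 * 200)              # 80
assert x * x % n == y * y % n    # congruence of squares
assert x % n not in (y % n, (n - y) % n)   # and x is not +/- y mod n
print(gcd(x - y, n), gcd(x + y, n))        # 17 97
```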
222354
abstract_algebra
Methods to check primality In mathematics, elliptic curve primality testing techniques, or elliptic curve primality proving (ECPP), are among the quickest and most widely used methods in primality proving. It is an idea put forward by Shafi Goldwasser and Joe Kilian in 1986 and turned into an algorithm by A. O. L. Atkin the same year. The algorithm was altered and improved by several collaborators subsequently, and notably by Atkin and François Morain, in 1993. The concept of using elliptic curves in factorization had been developed by H. W. Lenstra in 1985, and the implications for its use in primality testing (and proving) followed quickly. Primality testing is a field that has been around since the time of Fermat, in whose time most algorithms were based on factoring, which becomes unwieldy with large input; modern algorithms treat the problems of determining whether a number is prime and what its factors are separately. It became of practical importance with the advent of modern cryptography. Although many current tests result in a probabilistic output ("N" is either shown composite, or probably prime, such as with the Baillie–PSW primality test or the Miller–Rabin test), the elliptic curve test proves primality (or compositeness) with a quickly verifiable certificate. Previously known prime-proving methods such as the Pocklington primality test required at least partial factorization of $N \pm 1$ in order to prove that $N$ is prime. As a result, these methods required some luck and are generally slow in practice. Elliptic curve primality proving. It is a general-purpose algorithm, meaning it does not depend on the number being of a special form. ECPP is currently in practice the fastest known algorithm for testing the primality of general numbers, but the worst-case execution time is not known. ECPP heuristically runs in time: $ O((\log n)^{5+\varepsilon})\, $ for some $\varepsilon > 0$. This exponent may be decreased to $4+\varepsilon$ for some versions by heuristic arguments. ECPP works the same way as most other primality tests do, finding a group and showing its size is such that $p$ is prime. For ECPP the group is an elliptic curve modulo the number $p$ being tested, whose order is chosen, by way of quadratic forms of negative discriminant, to be easy to factor. ECPP generates an Atkin–Goldwasser–Kilian–Morain certificate of primality by recursion and then attempts to verify the certificate. The step that takes the most CPU time is the certificate generation, because factoring over a class field must be performed. The certificate can be verified quickly, so checking it takes very little time. As of 2023, the largest prime that has been proved with the ECPP method is $5^{104824} + 104824^5$. The certification was performed by Andreas Enge using his fastECPP software "CM". Proposition. The elliptic curve primality tests are based on criteria analogous to the Pocklington criterion, on which that test is based, where the group $(\Z/n\Z)^*$ is replaced by $E(\Z/n\Z),$ and "E" is a properly chosen elliptic curve. We will now state a proposition on which to base our test, which is analogous to the Pocklington criterion, and gives rise to the Goldwasser–Kilian–Atkin form of the elliptic curve primality test. Let "N" be a positive integer, and "E" be the set which is defined by the equation $y^2 = x^3 + ax + b \bmod{N}.$ Consider "E" over $\Z/N\Z,$ use the usual addition law on "E", and write 0 for the neutral element on "E". Let "m" be an integer. 
If there is a prime "q" which divides "m", and is greater than $\left (\sqrt[4]{N}+1 \right )^2$ and there exists a point "P" on "E" such that (1) "mP" = 0 (2) ("m"/"q")"P" is defined and not equal to 0 Then "N" is prime. Proof. If "N" is composite, then there exists a prime $p \le \sqrt{N}$ that divides "N". Define $E_p$ as the elliptic curve defined by the same equation as "E" but evaluated modulo "p" rather than modulo "N". Define $m_p$ as the order of the group $E_p$. By Hasse's theorem on elliptic curves we know $m_p \le p+1+2\sqrt{p} = \left (\sqrt{p} + 1 \right )^2 \le \left (\sqrt[4]{N}+1 \right )^2 < q$ and thus $\gcd(q,m_p)=1$ and there exists an integer "u" with the property that $uq \equiv 1 \bmod{m_p}.$ Let $P_p$ be the point "P" evaluated modulo "p". Thus, on $E_p$ we have $(m/q)P_p = uq(m/q)P_p = umP_p = 0,$ by (1), as $mP_p$ is calculated using the same method as "mP", except modulo "p" rather than modulo "N" (and $p \mid N$). This contradicts (2), because if ("m"/"q")"P" is defined and not equal to 0 (mod "N"), then the same method calculated modulo "p" instead of modulo "N" will yield: $(m/q)P_p \ne 0.$ Goldwasser–Kilian algorithm. From this proposition an algorithm can be constructed to prove an integer, "N", is prime. This is done as follows: Choose three integers at random, "a, x, y" and set $b \equiv y^2 - x^3 - ax \pmod{N}$ Now "P" = ("x","y") is a point on "E", where we have that "E" is defined by $y^2 = x^3 + ax + b$. Next we need an algorithm to count the number of points on "E". Applied to "E", this algorithm (Koblitz and others suggest Schoof's algorithm) produces a number "m" which is the number of points on curve "E" over F"N", provided "N" is prime. If the point-counting algorithm stops at an undefined expression, this allows us to determine a non-trivial factor of "N". If it succeeds, we apply a criterion for deciding whether our curve "E" is acceptable. If we can write "m" in the form $m = kq$ where $ k \ge 2 $ is a small integer and "q" a large probable prime (a number that passes a probabilistic primality test, for example), then we do not discard "E". Otherwise, we discard our curve and randomly select another triple "(a, x, y)" to start over. The idea here is to find an "m" that is divisible by a large prime number "q". This prime is a few digits smaller than "m" (or "N") so "q" will be easier to prove prime than "N". Assuming we find a curve which passes the criterion, proceed to calculate "mP" and "kP". If either of the two calculations produces an undefined expression, we can get a non-trivial factor of "N". If both calculations succeed, we examine the results. If $mP \neq 0$ it is clear that "N" is not prime, because if "N" were prime then "E" would have order "m", and any element of "E" would become 0 on multiplication by "m". If "kP" = 0, then the algorithm discards "E" and starts over with a different "a, x, y" triple. Now if $mP = 0$ and $kP \neq 0 $ then our previous proposition tells us that "N" is prime. However, there is one possible problem, which is the primality of "q". This is verified using the same algorithm. So we have described a recursive algorithm, where the primality of "N" depends on the primality of "q" and indeed smaller 'probable primes' until some threshold is reached where "q" is considered small enough to apply a non-recursive deterministic algorithm. Problems with the algorithm. Atkin and Morain state "the problem with GK is that Schoof's algorithm seems almost impossible to implement." 
It is very slow and cumbersome to count all of the points on "E" using Schoof's algorithm, which is the preferred algorithm for the Goldwasser–Kilian algorithm. However, the original algorithm by Schoof is not efficient enough to provide the number of points in a short time. These comments have to be seen in the historical context, before the improvements by Elkies and Atkin to Schoof's method. A second problem Koblitz notes is the difficulty of finding the curve "E" whose number of points is of the form "kq", as above. There is no known theorem which guarantees we can find a suitable "E" in polynomially many attempts. The distribution of primes on the Hasse interval $[p+1-2\sqrt{p},p+1+2\sqrt{p}]$, which contains "m", is not the same as the distribution of primes in the group orders, counting curves with multiplicity. However, this is not a significant problem in practice. Atkin–Morain elliptic curve primality test (ECPP). In a 1993 paper, Atkin and Morain described an algorithm ECPP which avoided the trouble of relying on a cumbersome point counting algorithm (Schoof's). The algorithm still relies on the proposition stated above, but rather than randomly generating elliptic curves and searching for a proper "m", their idea was to construct a curve "E" where the number of points is easy to compute. Complex multiplication is key in the construction of the curve. Now, given an "N" for which primality needs to be proven we need to find a suitable "m" and "q", just as in the Goldwasser–Kilian test, that will fulfill the proposition and prove the primality of "N". (Of course, a point "P" and the curve itself, "E", must also be found.) ECPP uses complex multiplication to construct the curve "E", doing so in a way that allows for "m" (the number of points on "E") to be easily computed. We will now describe this method: Utilization of complex multiplication requires a negative discriminant, "D", such that "N" can be written as the product of two elements $N = \pi \bar{\pi}$, or completely equivalently, we can write the equation: $a^2 + |D|b^2 = 4N \, $ For some "a, b". If we can describe "N" in terms of either of these forms, we can create an elliptic curve "E" on $\mathbb{Z}/N\mathbb{Z}$ with complex multiplication (described in detail below), and the number of points is given by: $|E(\mathbb{Z}/N\mathbb{Z})| = N + 1 - \pi - \bar{\pi} = N + 1 - a. \, $ The test. Pick discriminants "D" in sequence of increasing "h"("D"). For each "D" check if $\left(\frac{D}{N}\right) = 1$ and whether 4"N" can be written as: $a^2 + |D|b^2 = 4N \, $ This part can be verified using Cornacchia's algorithm. Once acceptable "D" and "a" have been discovered, calculate $m = N + 1 - a$. Now if "m" has a prime factor "q" of size $q>(N^{1/4}+1)^2$ use the complex multiplication method to construct the curve "E" and a point "P" on it. Then we can use our proposition to verify the primality of "N". Note that if "m" does not have a large prime factor or cannot be factored quickly enough, another choice of "D" can be made. Complex multiplication method. For completeness, we will provide an overview of complex multiplication, the way in which an elliptic curve can be created, given our "D" (for which "N" can be written as a product of two elements). Assume first that $D \neq -3$ and $D \neq -4$ (these cases are much more easily done). It is necessary to calculate the elliptic j-invariants of the "h"("D") classes of the order of discriminant "D" as complex numbers. There are several formulas to calculate these. 
Next create the monic polynomial $H_D(X)$, which has roots corresponding to the "h"("D") values. Note that $H_D(X)$ is the class polynomial. From complex multiplication theory, we know that $H_D(X)$ has integer coefficients, which allows us to estimate these coefficients accurately enough to discover their true values. Now, if "N" is prime, CM tells us that $H_D(X)$ splits modulo "N" into a product of "h"("D") linear factors, based on the fact that "D" was chosen so that "N" splits as the product of two elements. Now if "j" is one of the "h"("D") roots "modulo N" we can define "E" as: $y^2 = x^3 + 3kc^{2r}x + 2kc^{3r},\text{ where } k = \frac{j}{1728-j},$ "c" is any quadratic nonresidue mod "N", and "r" is either 0 or 1. Given a root "j" there are only two possible nonisomorphic choices of "E", one for each choice of "r". We have the cardinality of these curves as $|E(\mathbb{Z}/N\mathbb{Z})| = N+1-a$ or $|E(\mathbb{Z}/N\mathbb{Z})| = N+1+a$ Discussion. Just as with the Goldwasser–Kilian test, this one leads to a down-run procedure. Again, the culprit is "q". Once we find a "q" that works, we must check it to be prime, so in fact we are doing the whole test now for "q". Then again we may have to perform the test for factors of "q". This leads to a nested certificate where at each level we have an elliptic curve "E", an "m" and the prime in doubt, "q". Example of Atkin–Morain ECPP. We construct an example to prove that $N = 167$ is prime using the Atkin–Morain ECPP test. First proceed through the set of 13 possible discriminants, testing whether the Legendre Symbol $(D/N) = 1$, and if 4"N" can be written as $ 4N = a^2 + |D|b^2$. For our example $D = -43$ is chosen. This is because $(D/N) = (-43/167) = 1$ and also, using Cornacchia's algorithm, we know that $4\cdot (167) = 25^2 + (43)(1^2)$ and thus "a" = 25 and "b" = 1. The next step is to calculate "m". This is easily done as $m = N + 1 - a$ which yields $ m = 167 + 1 - 25 = 143.$ Next we need to find a probable prime divisor of "m", which was referred to as "q". It must satisfy the condition that $q>(N^{1/4}+1)^2.$ In this case, "m" = 143 = 11×13. So unfortunately we cannot choose 11 or 13 as our "q", for it does not satisfy the necessary inequality. We are saved, however, by an analogous proposition to that which we stated before the Goldwasser–Kilian algorithm, which comes from a paper by Morain. It states, that given our "m", we look for an "s" which divides "m", $s>(N^{1/4}+1)^2$, but is not necessarily prime, and check whether, for each $p_i$ which divides "s" $(m/p_i)P \neq P_\infty$ for some point "P" on our yet to be constructed curve. If "s" satisfies the inequality, and its prime factors satisfy the above, then "N" is prime. So in our case, we choose "s" = "m" = 143. Thus our possible $p_i$'s are 11 and 13. First, it is clear that $143 >(167^{1/4}+1)^2$, and so we need only check the values of $(143/11)P = 13P \qquad \text{ and } \qquad (143/13)P = 11P.$ But before we can do this, we must construct our curve, and choose a point "P". In order to construct the curve, we make use of complex multiplication. In our case we compute the j-invariant $j \equiv -960^3 \pmod{167} \equiv 107 \pmod{167}. $ Next we compute $k = \frac{j}{1728-j} \pmod{167} \equiv 158 \pmod{167}$ and we know our elliptic curve is of the form: $y^2 = x^3 + 3kc^2x + 2kc^3$, where "k" is as described previously and "c" a non-square in $\mathbb{F}_{167}$. 
So we can begin with $\begin{align} r &= 0\\ 3k &\equiv 140 \pmod{167} \\ \end{align}$ which yields $E: y^2 = x^3 + 140x + 149 \pmod{167}$ Now, utilizing the point "P" = (6,6) on "E" it can be verified that $143 P = P_\infty.$ It is simple to check that 13(6, 6) = (12, 65) and 11"P" = (140, 147), and so, by Morain's proposition, "N" is prime. Complexity and running times. Goldwasser and Kilian's elliptic curve primality proving algorithm terminates in expected polynomial time for at least $1 - O\left(2^{-n^{\frac{1}{\log \log n}}}\right)$ of prime inputs. Conjecture. Let $\pi(x)$ be the number of primes smaller than "x". Then $\exists c_1, c_2 > 0: \pi(x+\sqrt{x}) - \pi(x) \ge \frac{c_2\sqrt{x}}{\log^{c_1}x}$ for sufficiently large "x". If one accepts this conjecture then the Goldwasser–Kilian algorithm terminates in expected polynomial time for every input. Also, if our "N" is of length "k", then the algorithm creates a certificate of size $O(k^2)$ that can be verified in time $O(k^4)$. Now consider another conjecture, which will give us a bound on the total time of the algorithm. Conjecture 2. Suppose there exist positive constants $c_1$ and $c_2$ such that the number of primes in the interval $[x, x+\sqrt{2x}], x \ge 2$ is larger than $c_1\sqrt{x}(\log x)^{-c_2}$. Then the Goldwasser–Kilian algorithm proves the primality of "N" in an expected time of $O(\log^{10 + c_2} n).$ For the Atkin–Morain algorithm, the running time stated is $O((\log N)^{6+\epsilon})$ for some $\epsilon > 0$ Primes of special form. For some forms of numbers, it is possible to find 'short-cuts' to a primality proof. This is the case for the Mersenne numbers. In fact, due to their special structure, which allows for easier verification of primality, the six largest known prime numbers are all Mersenne numbers. There has been a method in use for some time to verify primality of Mersenne numbers, known as the Lucas–Lehmer test. This test does not rely on elliptic curves. However, we present a result where numbers of the form $N = 2^kn - 1$ where $k,n \in \Z, k \ge 2$, "n" odd can be proven prime (or composite) using elliptic curves. Of course this will also provide a method for proving primality of Mersenne numbers, which correspond to the case where "n" = 1. The following method is drawn from the paper "Primality Test for $2^kn - 1$ using Elliptic Curves", by Yu Tsumura. Group structure of "E"(FN). We take "E" as our elliptic curve, where "E" is of the form $y^2 = x^3 - mx$ for $m \in \Z, m \not\equiv 0 \bmod{p},$ where $p \equiv 3 \bmod{4}$ is prime, and $p+1 = 2^kn,$ with $k \in \Z, k \ge 2, n$ odd. Theorem 1. $|E(\mathbb{F}_p)| = p+1.$ Theorem 2. $E(\mathbb{F}_p) \cong \Z_{2^kn}$ or $E(\mathbb{F}_p) \cong \Z_2 \oplus \Z_{2^{k-1}n}$ depending on whether or not "m" is a quadratic residue "modulo p". Theorem 3. Let "Q" = ("x","y") on "E" be such that "x" is a quadratic non-residue "modulo p". Then the order of "Q" is divisible by $2^k$ in the cyclic group $E(\mathbb{F}_p) \cong \Z_{2^{k}n}.$ First we will present the case where "n" is relatively small with respect to $2^k$, and this will require one more theorem: Theorem 4. Choose a $\lambda > 1$ and suppose $n \le \frac{\sqrt{p}}{\lambda} \qquad \text{and} \qquad \lambda\sqrt{p} > \left (\sqrt[4]{p} + 1 \right )^2.$ Then "p" is prime if and only if there exists a "Q" = ("x","y") on "E", such that $\gcd{(S_i,p)} = 1$ for "i" = 1, 2, ...,"k" − 1 and $S_k \equiv 0\pmod{p},$ where $S_i$ is a sequence with initial value $S_0 = x$. The algorithm. 
We provide the following algorithm, which relies mainly on Theorems 3 and 4. To verify the primality of a given number $N$, perform the following steps: (1) Choose $x \in \mathbb{Z}$ such that $\left(\frac{x}{N}\right) = -1$, and find $y \in \mathbb{Z}, y \not\equiv 0 \pmod{2}$ such that $\left(\frac{x^3-y^2}{N}\right) = 1$. Take $m \equiv \frac{x^3-y^2}{x} \pmod N$ with $m \not\equiv 0 \pmod{N}$. Then $Q'=(x,y)$ is on $E: y^2=x^3-mx$. Calculate $Q = nQ'$. If $Q = P_\infty$ then $N$ is composite; otherwise proceed to (2). (2) Set $S_i$ as the sequence with initial value $S_0$ given by the "x"-coordinate of $Q$. Calculate $S_i$ for $i = 1, 2, 3, \ldots, k-1$. If $\gcd{(S_i,N)}>1$ for some $i$, where $1 \le i \le k-1$, then $N$ is composite. Otherwise, proceed to (3). (3) If $S_k \equiv 0 \pmod{N}$ then $N$ is prime. Otherwise, $N$ is composite. This completes the test. Justification of the algorithm. In (1), an elliptic curve, "E" is picked, along with a point $Q'$ on "E", such that the "x"-coordinate of $Q'$ is a quadratic nonresidue. We can say $\left(\frac{m}{N}\right) = \left(\frac{\frac{x^3-y^2}{x}}{N}\right) = \left(\frac{x}{N}\right)\left(\frac{x^3-y^2}{N}\right) = -1\cdot 1=-1.$ Thus, if "N" is prime, "Q"' has order divisible by $2^k$, by Theorem 3, and therefore the order of "Q"' is $2^k d$ with "d" | "n". This means "Q" = "nQ"' has order $2^k$. Therefore, if (1) concludes that "N" is composite, it truly is composite. (2) and (3) check if "Q" has order $2^k$. Thus, if (2) or (3) concludes "N" is composite, it is composite. Now, if the algorithm concludes that "N" is prime, then that means the sequence $S_i$ satisfies the conditions of Theorem 4, and so "N" is truly prime. There is an algorithm as well for when "n" is large; however, for this we refer to the aforementioned article.
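All of the elliptic-curve tests above rest on curve arithmetic modulo "N" in which an "undefined expression" (a non-invertible denominator) exposes a factor of "N". The following Python sketch is our own minimal implementation of that group law, not code from any of the cited papers; as a sanity check, it replays the Atkin–Morain example for "N" = 167 using the point values stated in the example.

```python
# Elliptic-curve arithmetic modulo N; a non-invertible denominator
# (the "undefined expression" of the text) would expose a factor of N.
from math import gcd

def inv_mod(d, N):
    g = gcd(d % N, N)
    if g != 1:
        raise ValueError(f"found a factor of N: {g}")
    return pow(d, -1, N)

def ec_add(P, Q, a, N):
    """Add points of y^2 = x^3 + ax + b over Z/NZ; None is P_infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % N == 0:
        return None                            # P + (-P) = P_infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, N) % N
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, N) % N
    x3 = (lam * lam - x1 - x2) % N
    return (x3, (lam * (x1 - x3) - y1) % N)

def ec_mul(k, P, a, N):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, N)
        P = ec_add(P, P, a, N)
        k >>= 1
    return R

# Re-check the Atkin-Morain example: E: y^2 = x^3 + 140x + 149 mod 167.
N, a, P = 167, 140, (6, 6)
assert ec_mul(143, P, a, N) is None        # 143P = P_infinity
assert ec_mul(13, P, a, N) == (12, 65)     # (143/11)P, not P_infinity
assert ec_mul(11, P, a, N) == (140, 147)   # (143/13)P, not P_infinity
```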
2942158
abstract_algebra
In mathematics, the Freiheitssatz (German: "freedom/independence theorem": "Freiheit" + "Satz") is a result in the presentation theory of groups, stating that certain subgroups of a one-relator group are free groups. Statement. Consider a group presentation $G = \langle x_{1}, \dots, x_{n} | r = 1 \rangle$ given by n generators "x""i" and a single cyclically reduced relator r. If "x"1 appears in r, then (according to the freiheitssatz) the subgroup of G generated by "x"2, ..., "x""n" is a free group, freely generated by "x"2, ..., "x""n". In other words, the only relations involving "x"2, ..., "x""n" are the trivial ones. History. The result was proposed by the German mathematician Max Dehn and proved by his student, Wilhelm Magnus, in his doctoral thesis. Although Dehn expected Magnus to find a topological proof, Magnus instead found a proof based on mathematical induction and amalgamated products of groups. Different induction-based proofs were given later by other authors. Significance. The freiheitssatz has become "the cornerstone of one-relator group theory", and motivated the development of the theory of amalgamated products. It also provides an analogue, in non-commutative group theory, of certain results on vector spaces and other commutative groups.
1736256
abstract_algebra
Abstract ring with finite number of elements In mathematics, more specifically abstract algebra, a finite ring is a ring that has a finite number of elements. Every finite field is an example of a finite ring, and the additive part of every finite ring is an example of an abelian finite group, but the concept of finite rings in their own right has a more recent history. Although rings have more structure than groups do, the theory of finite rings is simpler than that of finite groups. For instance, the classification of finite simple groups was one of the major breakthroughs of 20th century mathematics, its proof spanning thousands of journal pages. On the other hand, it has been known since 1907 that any finite simple ring is isomorphic to the ring $\mathrm{M}_n(\mathbb{F}_q)$ – the "n"-by-"n" matrices over a finite field of order "q" (as a consequence of Wedderburn's theorems, described below). The number of rings with "m" elements, for "m" a natural number, is listed in the On-Line Encyclopedia of Integer Sequences. Finite field. The theory of finite fields is perhaps the most important aspect of finite ring theory due to its intimate connections with algebraic geometry, Galois theory and number theory. An important, but fairly old aspect of the theory is the classification of finite fields: the number of elements of a finite field is always a prime power, and for each prime power "q" there is exactly one finite field with "q" elements, up to isomorphism. Despite the classification, finite fields are still an active area of research, including recent results on the Kakeya conjecture and open problems regarding the size of smallest primitive roots (in number theory). A finite field "F" may be used to build an "n"-dimensional vector space over "F". The matrix ring "A" of "n" × "n" matrices with elements from "F" is used in Galois geometry, with the projective linear group serving as the multiplicative group of "A". Wedderburn's theorems. Wedderburn's little theorem asserts that any finite division ring is necessarily commutative: If every nonzero element "r" of a finite ring "R" has a multiplicative inverse, then "R" is commutative (and therefore a finite field). Nathan Jacobson later discovered yet another condition which guarantees commutativity of a ring: if for every element "r" of "R" there exists an integer "n" > 1 such that "r" n = "r", then "R" is commutative. More general conditions that imply commutativity of a ring are also known. Yet another theorem by Wedderburn has, as its consequence, a result demonstrating that the theory of finite simple rings is relatively straightforward in nature. More specifically, any finite simple ring is isomorphic to the ring $\mathrm{M}_n(\mathbb{F}_q)$, the "n"-by-"n" matrices over a finite field of order "q". This follows from two theorems of Joseph Wedderburn established in 1905 and 1907 (one of which is Wedderburn's little theorem). Enumeration. (Warning: the enumerations in this section include rings that do not necessarily have a multiplicative identity, sometimes called rngs.) In 1964 David Singmaster proposed the following problem in the American Mathematical Monthly: "(1) What is the order of the smallest non-trivial ring with identity which is not a field? Find two such rings with this minimal order. Are there more? (2) How many rings of order four are there?" One can find the solution by D.M. Bloom in a two-page proof that there are eleven rings of order 4, four of which have a multiplicative identity. Indeed, four-element rings introduce the complexity of the subject. There are three rings over the cyclic group C4 and eight rings over the Klein four-group. 
There is an interesting display of the discriminatory tools (nilpotents, zero-divisors, idempotents, and left- and right-identities) in Gregory Dresden's lecture notes. The occurrence of "non-commutativity" in finite rings was described in two theorems: If the order "m" of a finite ring with 1 has a cube-free factorization, then it is commutative. And if a non-commutative finite ring with 1 has the order of a prime cubed, then the ring is isomorphic to the upper triangular 2 × 2 matrix ring over the Galois field of the prime. The study of rings of order the cube of a prime was developed further in subsequent work. Next, Flor and Wessenbauer (1975) made improvements on the cube-of-a-prime case. Definitive work on the isomorphism classes came with the proof that for "p" > 2, the number of classes is 3"p" + 50. There are earlier references on the topic of finite rings, such as the work of Robert Ballieu and Scorza. A number of facts are known about the number of finite rings (not necessarily with unity) of a given order. The number of rings with "n" elements is (with "a"(0) = 1) 1, 1, 2, 2, 11, 2, 4, 2, 52, 11, 4, 2, 22, 2, 4, 4, 390, 2, 22, 2, 22, 4, 4, 2, 104, 11, 4, 59, 22, 2, 8, 2, >18590, 4, 4, 4, 121, 2, 4, 4, 104, 2, 8, 2, 22, 22, 4, 2, 780, 11, 22, ... (sequence in the OEIS)
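The order-4 counts quoted above are small enough to confirm by brute force. The sketch below is our own Python illustration (not from the cited literature); it enumerates the bilinear associative multiplications on the Klein four-group and counts them up to additive automorphism, and should print 8 if Bloom's count of "eight rings over the Klein four-group" is reproduced correctly.

```python
# Count the rings (rngs) on the Klein four-group V = F_2 x F_2 up to
# isomorphism; distributivity makes the product bilinear, so it is
# determined by the four products e_i * e_j.
from itertools import product

ELTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
add = lambda a, b: (a[0] ^ b[0], a[1] ^ b[1])

def mul_from_table(t):
    """t[2*i + j] = e_i * e_j; extend bilinearly to all of V."""
    def mul(a, b):
        r = (0, 0)
        for i in (0, 1):
            for j in (0, 1):
                if a[i] and b[j]:
                    r = add(r, t[2 * i + j])
        return r
    return mul

tables = []                                  # associative products only
for t in product(ELTS, repeat=4):
    mul = mul_from_table(t)
    if all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in ELTS for b in ELTS for c in ELTS):
        tables.append({(a, b): mul(a, b) for a in ELTS for b in ELTS})

autos = []                                   # the 6 additive automorphisms
for f1 in ELTS[1:]:
    for f2 in ELTS[1:]:
        if f2 != f1:
            autos.append(lambda a, f1=f1, f2=f2: add(
                (f1[0] * a[0], f1[1] * a[0]),
                (f2[0] * a[1], f2[1] * a[1])))

def canonical(tab):
    """Smallest relabelling of a multiplication table under autos."""
    return min(tuple(sorted((phi(a), phi(b), phi(v))
                            for (a, b), v in tab.items()))
               for phi in autos)

print(len({canonical(tab) for tab in tables}))   # expected output: 8
```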
2738622
abstract_algebra
Any algorithm which solves the search problem In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values. Although search engines use search algorithms, they belong to the study of information retrieval, not algorithmics. The appropriate search algorithm to use often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes. Search algorithms can be classified based on their mechanism of searching into three types of algorithms: linear, binary, and hashing. Linear search algorithms check every record for the one associated with a target key in a linear fashion. Binary, or half-interval, searches repeatedly target the center of the search structure and divide the search space in half. Comparison search algorithms improve on linear searching by successively eliminating records based on comparisons of the keys until the target record is found, and can be applied on data structures with a defined order. Digital search algorithms work based on the properties of digits in data structures by using numerical keys. Finally, hashing directly maps keys to records based on a hash function. Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of "O"(log "n"), or logarithmic time. In simple terms, the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space. Applications of search algorithms. Specific applications of search algorithms include retrieving a record from a database, finding the maximum or minimum of a function, checking whether a set contains a given value, and combinatorial optimization problems such as route planning. Classes. For virtual search spaces. Algorithms for searching virtual spaces are used in the constraint satisfaction problem, where the goal is to find a set of value assignments to certain variables that will satisfy specific mathematical equations and inequations. They are also used when the goal is to find a variable assignment that will maximize or minimize a certain function of those variables. Algorithms for these problems include the basic brute-force search (also called "naïve" or "uninformed" search), and a variety of heuristics that try to exploit partial knowledge about the structure of this space, such as linear relaxation, constraint generation, and constraint propagation. An important subclass are the local search methods, that view the elements of the search space as the vertices of a graph, with edges defined by a set of heuristics applicable to the case; and scan the space by moving from item to item along the edges, for example according to the steepest descent or best-first criterion, or in a stochastic search. This category includes a great variety of general metaheuristic methods, such as simulated annealing, tabu search, A-teams, and genetic programming, that combine arbitrary heuristics in specific ways. The opposite of local search would be global search methods. This method is applicable when the search space is not limited and all aspects of the given network are available to the entity running the search algorithm. 
This class also includes various tree search algorithms, that view the elements as vertices of a tree, and traverse that tree in some special order. Examples of the latter include the exhaustive methods such as depth-first search and breadth-first search, as well as various heuristic-based search tree pruning methods such as backtracking and branch and bound. Unlike general metaheuristics, which at best work only in a probabilistic sense, many of these tree-search methods are guaranteed to find the exact or optimal solution, if given enough time. This is called "completeness". Another important sub-class consists of algorithms for exploring the game tree of multiple-player games, such as chess or backgammon, whose nodes consist of all possible game situations that could result from the current situation. The goal in these problems is to find the move that provides the best chance of a win, taking into account all possible moves of the opponent(s). Similar problems occur when humans or machines have to make successive decisions whose outcomes are not entirely under one's control, such as in robot guidance or in marketing, financial, or military strategy planning. This kind of problem — combinatorial search — has been extensively studied in the context of artificial intelligence. Examples of algorithms for this class are the minimax algorithm, alpha–beta pruning, and the A* algorithm and its variants. For sub-structures of a given structure. The name "combinatorial search" is generally used for algorithms that look for a specific sub-structure of a given discrete structure, such as a graph, a string, a finite group, and so on. The term combinatorial optimization is typically used when the goal is to find a sub-structure with a maximum (or minimum) value of some parameter. (Since the sub-structure is usually represented in the computer by a set of integer variables with constraints, these problems can be viewed as special cases of constraint satisfaction or discrete optimization; but they are usually formulated and solved in a more abstract setting where the internal representation is not explicitly mentioned.) An important and extensively studied subclass are the graph algorithms, in particular graph traversal algorithms, for finding specific sub-structures in a given graph — such as subgraphs, paths, circuits, and so on. Examples include Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm. Another important subclass of this category are the string searching algorithms, that search for patterns within strings. Famous examples include the Boyer–Moore and Knuth–Morris–Pratt algorithms, and several algorithms based on the suffix tree data structure. Search for the maximum of a function. In 1953, American statistician Jack Kiefer devised Fibonacci search which can be used to find the maximum of a unimodal function and has many other applications in computer science. For quantum computers. There are also search methods designed for quantum computers, like Grover's algorithm, that are theoretically faster than linear or brute-force search even without the help of data structures or heuristics. While the ideas and applications behind quantum computers are still entirely theoretical, studies have been conducted with algorithms like Grover's that accurately replicate the hypothetical physical versions of quantum computing systems.
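As a concrete illustration of the half-interval idea described in the classification above, here is a minimal Python binary search (our own sketch):

```python
# Binary search: O(log n) probes of a sorted list, halving the
# remaining search space at every step.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1       # target can only be in the upper half
        else:
            hi = mid - 1       # target can only be in the lower half
    return -1                  # target is not present

assert binary_search([2, 3, 5, 7, 11, 13], 7) == 3
assert binary_search([2, 3, 5, 7, 11, 13], 6) == -1
```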
13972
abstract_algebra
Finite field of two elements GF(2) (also denoted $\mathbb F_2$, Z/2Z or $\mathbb Z/2\mathbb Z$) is the finite field of two elements (GF is the initialism of "Galois field", another name for finite fields). Notations Z2 and $\mathbb Z_2$ may be encountered although they can be confused with the notation of 2-adic integers. GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual. The elements of GF(2) may be identified with the two possible values of a bit and with the boolean values "true" and "false". It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations. Definition. GF(2) is the unique field with two elements with its additive and multiplicative identities respectively denoted 0 and 1. Its addition is defined as the usual addition of integers but modulo 2 and corresponds to the table below: $\begin{array}{c|cc} + & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \end{array}$ If the elements of GF(2) are seen as boolean values, then the addition is the same as that of the logical XOR operation. Since each element equals its opposite, subtraction is thus the same operation as addition. The multiplication of GF(2) is again the usual multiplication modulo 2 (see the table below), and on boolean variables corresponds to the logical AND operation. $\begin{array}{c|cc} \times & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 0 & 1 \end{array}$ GF(2) can be identified with the field of the integers modulo 2, that is, the quotient ring of the ring of integers Z by the ideal 2Z of all even numbers: GF(2) = Z/2Z. Properties. Because GF(2) is a field, many of the familiar properties of number systems such as the rational numbers and real numbers are retained: addition and multiplication are commutative and associative, multiplication is distributive over addition, addition has an identity element (0) and every element has an additive inverse, and multiplication has an identity element (1) and every nonzero element has a multiplicative inverse. Properties that are not familiar from the real numbers include: every element "x" of GF(2) satisfies "x" + "x" = 0, so that "x" = −"x"; and every element "x" satisfies "x"2 = "x". Applications. Because of the algebraic properties above, many familiar and powerful tools of mathematics work in GF(2) just as well as other fields. For example, matrix operations, including matrix inversion, can be applied to matrices with elements in GF(2) ("see" matrix ring). Any group "V" with the property "v" + "v" = 0 for every "v" in "V" (i.e. every element is an involution) is necessarily abelian and can be turned into a vector space over GF(2) in a natural fashion, by defining 0"v" = 0 and 1"v" = "v". This vector space will have a basis, implying that the number of elements of "V" must be a power of 2 (or infinite). In modern computers, data are represented with bit strings of a fixed length, called "machine words". These are endowed with the structure of a vector space over GF(2). The addition of this vector space is the bitwise operation called XOR (exclusive or). The bitwise AND is another operation on this vector space, which makes it a Boolean algebra, a structure that underlies all computer science. These spaces can also be augmented with a multiplication operation that makes them into a field GF(2"n"), but the multiplication operation cannot be a bitwise operation. When "n" is itself a power of two, the multiplication operation can be nim-multiplication; alternatively, for any "n", one can use multiplication of polynomials over GF(2) modulo an irreducible polynomial (as for instance for the field GF(28) in the description of the Advanced Encryption Standard cipher). Vector spaces and polynomial rings over GF(2) are widely used in coding theory, and in particular in error correcting codes and modern cryptography. 
For example, many common error correcting codes (such as BCH codes) are linear codes over GF(2) (codes defined from vector spaces over GF(2)), or polynomial codes (codes defined as quotients of polynomial rings over GF(2)).
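To make the field construction concrete, here is a small Python sketch (our own illustration) of multiplication in GF(28) as used by AES: carry-less polynomial multiplication over GF(2), reduced modulo the irreducible polynomial "x"8 + "x"4 + "x"3 + "x" + 1 (bitmask 0x11B); addition in such fields is plain XOR. The test pair 0x53 · 0xCA = 0x01 is the standard example of a pair of inverses in this field.

```python
# Multiplication in GF(2^8), AES-style: bytes are polynomials over GF(2).
def gf256_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a           # "add" the current multiple of a (XOR)
        b >>= 1
        a <<= 1              # multiply a by x
        if a & 0x100:
            a ^= 0x11B       # reduce modulo x^8 + x^4 + x^3 + x + 1
    return r

assert gf256_mul(0x53, 0xCA) == 0x01   # 0x53 and 0xCA are inverses
```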
1568809
abstract_algebra
Subgroup invariant under conjugation In abstract algebra, a normal subgroup (also known as an invariant subgroup or self-conjugate subgroup) is a subgroup that is invariant under conjugation by members of the group of which it is a part. In other words, a subgroup $N$ of the group $G$ is normal in $G$ if and only if $gng^{-1} \in N$ for all $g \in G$ and $n \in N.$ The usual notation for this relation is $N \triangleleft G.$ Normal subgroups are important because they (and only they) can be used to construct quotient groups of the given group. Furthermore, the normal subgroups of $G$ are precisely the kernels of group homomorphisms with domain $G,$ which means that they can be used to internally classify those homomorphisms. Évariste Galois was the first to realize the importance of the existence of normal subgroups. Definitions. A subgroup $N$ of a group $G$ is called a normal subgroup of $G$ if it is invariant under conjugation; that is, the conjugation of an element of $N$ by an element of $G$ is always in $N.$ The usual notation for this relation is $N \triangleleft G.$ Equivalent conditions. For any subgroup $N$ of $G,$ the following conditions are equivalent to $N$ being a normal subgroup of $G.$ Therefore, any one of them may be taken as the definition: the image of $N$ under conjugation by any element of $G$ is contained in $N$; the image of $N$ under conjugation by any element of $G$ is equal to $N$; the left and right cosets of $N$ coincide, that is, $gN = Ng$ for every $g \in G$; $N$ is a union of conjugacy classes of $G$; and $N$ is the kernel of some group homomorphism with domain $G.$ Examples. For any group $G,$ the trivial subgroup $\{ e \}$ consisting of just the identity element of $G$ is always a normal subgroup of $G.$ Likewise, $G$ itself is always a normal subgroup of $G.$ (If these are the only normal subgroups, then $G$ is said to be simple.) Other named normal subgroups of an arbitrary group include the center of the group (the set of elements that commute with all other elements) and the commutator subgroup $[G,G].$ More generally, since conjugation is an isomorphism, any characteristic subgroup is a normal subgroup. If $G$ is an abelian group then every subgroup $N$ of $G$ is normal, because $gN = \{gn\}_{n\in N} = \{ng\}_{n\in N} = Ng.$ More generally, for any group $G$, every subgroup of the "center" $Z(G)$ of $G$ is normal in $G$. (In the special case that $G$ is abelian, the center is all of $G$, hence the fact that all subgroups of an abelian group are normal.) A group that is not abelian but for which every subgroup is normal is called a Hamiltonian group. A concrete example of a normal subgroup is the subgroup $N = \{(1), (123), (132)\}$ of the symmetric group $S_3,$ consisting of the identity and both three-cycles. In particular, one can check that every coset of $N$ is either equal to $N$ itself or is equal to $(12)N = \{ (12), (23), (13)\}.$ On the other hand, the subgroup $H = \{(1), (12)\}$ is not normal in $S_3$ since $(123)H = \{(123), (13) \} \neq \{(123), (23) \} = H(123).$ The example of $N$ illustrates the general fact that any subgroup $H \leq G$ of index two is normal. As an example of a normal subgroup within a matrix group, consider the general linear group $\mathrm{GL}_n(\mathbf{R})$ of all invertible $n\times n$ matrices with real entries under the operation of matrix multiplication and its subgroup $\mathrm{SL}_n(\mathbf{R})$ of all $n\times n$ matrices of determinant 1 (the special linear group). To see why the subgroup $\mathrm{SL}_n(\mathbf{R})$ is normal in $\mathrm{GL}_n(\mathbf{R})$, consider any matrix $X$ in $\mathrm{SL}_n(\mathbf{R})$ and any invertible matrix $A$. 
Then using the two important identities $\det(AB)=\det(A)\det(B)$ and $\det(A^{-1})=\det(A)^{-1}$, one has that $\det(AXA^{-1}) = \det(A) \det(X) \det(A)^{-1} = \det(X) = 1$, and so $AXA^{-1} \in \mathrm{SL}_n(\mathbf{R})$ as well. This means $\mathrm{SL}_n(\mathbf{R})$ is closed under conjugation in $\mathrm{GL}_n(\mathbf{R})$, so it is a normal subgroup. In the Rubik's Cube group, the subgroups consisting of operations which only affect the orientations of either the corner pieces or the edge pieces are normal. The translation group is a normal subgroup of the Euclidean group in any dimension. This means: applying a rigid transformation, followed by a translation and then the inverse rigid transformation, has the same effect as a single translation. By contrast, the subgroup of all rotations about the origin is "not" a normal subgroup of the Euclidean group, as long as the dimension is at least 2: first translating, then rotating about the origin, and then translating back will typically not fix the origin and will therefore not have the same effect as a single rotation about the origin. Properties. Lattice of normal subgroups. Given two normal subgroups, $N$ and $M,$ of $G,$ their intersection $N\cap M$ and their product $N M = \{n m : n \in N\; \text{ and }\; m \in M \}$ are also normal subgroups of $G.$ The normal subgroups of $G$ form a lattice under subset inclusion with least element, $\{ e \},$ and greatest element, $G.$ The meet of two normal subgroups, $N$ and $M,$ in this lattice is their intersection and the join is their product. The lattice is complete and modular. Normal subgroups, quotient groups and homomorphisms. If $N$ is a normal subgroup, we can define a multiplication on cosets as follows: $\left(a_1 N\right) \left(a_2 N\right) := \left(a_1 a_2\right) N.$ This relation defines a mapping $G/N\times G/N \to G/N.$ To show that this mapping is well-defined, one needs to prove that the choice of representative elements $a_1, a_2$ does not affect the result. To this end, consider some other representative elements $a_1'\in a_1 N, a_2' \in a_2 N.$ Then there are $n_1, n_2\in N$ such that $a_1' = a_1 n_1, a_2' = a_2 n_2.$ It follows that $a_1' a_2' N = a_1 n_1 a_2 n_2 N =a_1 a_2 n_1' n_2 N=a_1 a_2 N,$ where we also used the fact that $N$ is a normal subgroup, and therefore there is $n_1'\in N$ such that $n_1 a_2 = a_2 n_1'.$ This proves that this product is a well-defined mapping between cosets. With this operation, the set of cosets is itself a group, called the quotient group and denoted with $G/N.$ There is a natural homomorphism, $f : G \to G/N,$ given by $f(a) = a N.$ This homomorphism maps $N$ into the identity element of $G/N,$ which is the coset $e N = N,$ that is, $\ker(f) = N.$ In general, a group homomorphism, $f : G \to H$ sends subgroups of $G$ to subgroups of $H.$ Also, the preimage of any subgroup of $H$ is a subgroup of $G.$ We call the preimage of the trivial group $\{ e \}$ in $H$ the kernel of the homomorphism and denote it by $\ker f.$ As it turns out, the kernel is always normal and the image of $G, f(G),$ is always isomorphic to $G / \ker f$ (the first isomorphism theorem). In fact, this correspondence is a bijection between the set of all quotient groups of $G, G / N,$ and the set of all homomorphic images of $G$ (up to isomorphism). It is also easy to see that the kernel of the quotient map, $f : G \to G/N,$ is $N$ itself, so the normal subgroups are precisely the kernels of homomorphisms with domain $G.$
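The $S_3$ example above lends itself to a direct machine check; the following Python sketch (purely illustrative) verifies that $N$ is closed under conjugation while $H$ is not.

# Permutations of {0, 1, 2} are tuples p with p[i] the image of i.
from itertools import permutations

def compose(p, q):                        # (p*q)[i] = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))          # the symmetric group S3
N = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}     # identity and the two 3-cycles
H = {(0, 1, 2), (1, 0, 2)}                # identity and one transposition

def is_normal(S, G):
    return all(compose(compose(g, s), inverse(g)) in S for g in G for s in S)

assert is_normal(N, G)                    # gng^-1 always stays in N
assert not is_normal(H, G)                # some conjugate of the transposition escapes H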
10726
abstract_algebra
Abelian group extending a commutative monoid In mathematics, the Grothendieck group, or group of differences, of a commutative monoid "M" is a certain abelian group. This abelian group is constructed from "M" in the most universal way, in the sense that any abelian group containing a homomorphic image of "M" will also contain a homomorphic image of the Grothendieck group of "M". The Grothendieck group construction takes its name from a specific case in category theory, introduced by Alexander Grothendieck in his proof of the Grothendieck–Riemann–Roch theorem, which resulted in the development of K-theory. This specific case is the monoid of isomorphism classes of objects of an abelian category, with the direct sum as its operation. Grothendieck group of a commutative monoid. Motivation. Given a commutative monoid "M", "the most general" abelian group "K" arising from "M" is constructed by introducing inverse elements for all elements of "M". Such an abelian group "K" always exists; it is called the Grothendieck group of "M". It is characterized by a certain universal property and can also be concretely constructed from "M". If "M" does not have the cancellation property (that is, there exist "a", "b" and "c" in "M" such that $a\ne b$ and $ac=bc$), then the Grothendieck group "K" cannot contain "M". In particular, in the case of a monoid operation denoted multiplicatively that has a zero element satisfying $0 \cdot x=0$ for every $x\in M,$ the Grothendieck group must be the trivial group (group with only one element), since one must have $x=1\cdot x=(0^{-1}\cdot 0)\cdot x = 0^{-1}\cdot (0\cdot x)=0^{-1}\cdot (0\cdot 0)=(0^{-1}\cdot 0)\cdot 0=1\cdot 0=0$ for every "x". Universal property. Let "M" be a commutative monoid. Its Grothendieck group is an abelian group "K" with a monoid homomorphism $i \colon M \to K$ satisfying the following universal property: for any monoid homomorphism $f \colon M \to A$ from "M" to an abelian group "A", there is a unique group homomorphism $g \colon K \to A$ such that $f = g \circ i.$ This expresses the fact that any abelian group "A" that contains a homomorphic image of "M" will also contain a homomorphic image of "K", "K" being the "most general" abelian group containing a homomorphic image of "M". Explicit constructions. To construct the Grothendieck group "K" of a commutative monoid "M", one forms the Cartesian product $M \times M$. The two coordinates are meant to represent a positive part and a negative part, so $(m_1, m_2)$ corresponds to $m_1- m_2$ in "K". Addition on $M\times M$ is defined coordinate-wise: $(m_1, m_2) + (n_1,n_2) = (m_1+n_1, m_2+n_2)$. Next one defines an equivalence relation on $M \times M$, such that $(m_1, m_2)$ is equivalent to $(n_1, n_2)$ if, for some element "k" of "M", "m"1 + "n"2 + "k" = "m"2 + "n"1 + "k" (the element "k" is necessary because the cancellation law does not hold in all monoids). The equivalence class of the element ("m"1, "m"2) is denoted by [("m"1, "m"2)]. One defines "K" to be the set of equivalence classes. Since the addition operation on "M" × "M" is compatible with our equivalence relation, one obtains an addition on "K", and "K" becomes an abelian group. The identity element of "K" is [(0, 0)], and the inverse of [("m"1, "m"2)] is [("m"2, "m"1)]. The homomorphism $i:M\to K$ sends the element "m" to [("m", 0)]. 
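As a concrete illustration, here is a minimal Python sketch of the pair construction for the monoid $(\N, +)$; since this monoid has the cancellation property, each equivalence class can be reduced to a canonical representative with at least one zero coordinate.

# Elements of K are classes [(m1, m2)], thought of as m1 - m2.
def reduce(pair):
    m1, m2 = pair
    m = min(m1, m2)
    return (m1 - m, m2 - m)               # canonical representative of the class

def add(p, q):                            # coordinate-wise addition, then reduce
    return reduce((p[0] + q[0], p[1] + q[1]))

def neg(p):                               # the inverse of [(m1, m2)] is [(m2, m1)]
    return (p[1], p[0])

def i(m):                                 # the homomorphism i : M -> K, m -> [(m, 0)]
    return (m, 0)

assert add(i(3), neg(i(5))) == (0, 2)             # represents the new element 3 - 5 = -2
assert add(add(i(3), neg(i(5))), i(5)) == i(3)    # inverses behave as they should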
Alternatively, the Grothendieck group "K" of "M" can also be constructed using generators and relations: denoting by $(Z(M), +')$ the free abelian group generated by the set "M", the Grothendieck group "K" is the quotient of $Z(M)$ by the subgroup generated by $\{(x+'y)-'(x+y)\mid x,y\in M\}$. (Here +′ and −′ denote the addition and subtraction in the free abelian group $Z(M)$ while + denotes the addition in the monoid "M".) This construction has the advantage that it can be performed for any semigroup "M" and yields a group which satisfies the corresponding universal properties for semigroups, i.e. the "most general and smallest group containing a homomorphic image of "M" ". This is known as the "group completion of a semigroup" or "group of fractions of a semigroup". Properties. In the language of category theory, any universal construction gives rise to a functor; one thus obtains a functor from the category of commutative monoids to the category of abelian groups which sends the commutative monoid "M" to its Grothendieck group "K". This functor is left adjoint to the forgetful functor from the category of abelian groups to the category of commutative monoids. For a commutative monoid "M", the map "i" : "M" → "K" is injective if and only if "M" has the cancellation property, and it is bijective if and only if "M" is already a group. Example: the integers. The easiest example of a Grothendieck group is the construction of the integers $\Z$ from the (additive) natural numbers $\N$. First one observes that the natural numbers (including 0) together with the usual addition indeed form a commutative monoid $(\N, +).$ Now when one uses the Grothendieck group construction one obtains the formal differences between natural numbers as elements "n" − "m" and one has the equivalence relation $n - m \sim n' - m' \iff n + m' + k = n'+ m + k$ for some $k \iff n + m' = n' + m$. Now define $\forall n \in \N: \qquad \begin{cases} n := [n - 0] \\ -n := [0 - n] \end{cases}$ This defines the integers $\Z$. Indeed, this is the usual construction to obtain the integers from the natural numbers. See "Construction" under Integers for a more detailed explanation. Example: the positive rational numbers. Similarly, the Grothendieck group of the multiplicative commutative monoid $(\N^*, \times)$ (starting at 1) consists of formal fractions $p/q$ with the equivalence $p/q \sim p'/q' \iff pq'r = p'qr $ for some $r \iff pq' = p'q$ which of course can be identified with the positive rational numbers. Example: the Grothendieck group of a manifold. The Grothendieck group is the fundamental construction of K-theory. The group $K_0(M)$ of a compact manifold "M" is defined to be the Grothendieck group of the commutative monoid of all isomorphism classes of vector bundles of finite rank on "M" with the monoid operation given by direct sum. This gives a contravariant functor from manifolds to abelian groups. This functor is studied and extended in topological K-theory. Example: The Grothendieck group of a ring. The zeroth algebraic K group $K_0(R)$ of a (not necessarily commutative) ring "R" is the Grothendieck group of the monoid consisting of isomorphism classes of finitely generated projective modules over "R", with the monoid operation given by the direct sum. Then $K_0$ is a covariant functor from rings to abelian groups. The two previous examples are related: consider the case where $R = C^\infty(M)$ is the ring of complex-valued smooth functions on a compact manifold "M". 
In this case the projective "R"-modules are dual to vector bundles over "M" (by the Serre–Swan theorem). Thus $K_0(R)$ and $K_0(M)$ are the same group. Grothendieck group and extensions. Definition. Another construction that carries the name Grothendieck group is the following: Let "R" be a finite-dimensional algebra over some field "k" or more generally an artinian ring. Then define the Grothendieck group $G_0(R)$ as the abelian group generated by the set $\{[X] \mid X \in R\text{-mod}\}$ of isomorphism classes of finitely generated "R"-modules and the following relations: For every short exact sequence $0 \to A \to B \to C \to 0$ of "R"-modules, add the relation $[A] - [B] + [C] = 0.$ This definition implies that for any two finitely generated "R"-modules "M" and "N", $[M \oplus N] = [M] + [N]$, because of the split short exact sequence $ 0 \to M \to M \oplus N \to N \to 0. $ Examples. Let "K" be a field. Then the Grothendieck group $G_0(K)$ is an abelian group generated by symbols $[V]$ for any finite-dimensional "K"-vector space "V". In fact, $G_0(K)$ is isomorphic to $\Z$ whose generator is the element $[K]$. Here, the symbol $[V]$ for a finite-dimensional "K"-vector space "V" is defined as $[V] = \dim_K V$, the dimension of the vector space "V". Suppose one has the following short exact sequence of "K"-vector spaces. $0 \to V \to T \to W \to 0$ Since any short exact sequence of vector spaces splits, it holds that $T \cong V \oplus W $. In fact, for any two finite-dimensional vector spaces "V" and "W" the following holds: $\dim_K(V \oplus W) = \dim_K(V) + \dim_K(W)$ This equality hence shows that the symbol $[V]$ satisfies the defining relation of the Grothendieck group. $[T] = [V \oplus W] = [V] + [W]$ Note that any two isomorphic finite-dimensional "K"-vector spaces have the same dimension. Also, any two finite-dimensional "K"-vector spaces "V" and "W" of the same dimension are isomorphic to each other. In fact, every finite "n"-dimensional "K"-vector space "V" is isomorphic to $K^{\oplus n}$. The observation from the previous paragraph hence proves the following equation: $[V] = \left[ K^{\oplus n} \right] = n[K]$ Hence, every symbol $[V]$ is generated by the element $[K]$ with integer coefficients, which implies that $G_0(K)$ is isomorphic to $\Z$ with the generator $[K]$. More generally, let $\Z$ be the set of integers. The Grothendieck group $G_0(\Z)$ is an abelian group generated by symbols $[A]$ for any finitely generated abelian groups "A". One first notes that any finite abelian group "G" satisfies $[G] = 0$. The following short exact sequence holds, where the map $\Z \to \Z$ is multiplication by "n". $0 \to \Z \to \Z \to \Z /n\Z \to 0$ The exact sequence implies that $[\Z /n\Z] = [\Z] - [\Z] = 0$, so every finite cyclic group has its symbol equal to 0. This in turn implies that every finite abelian group "G" satisfies $[G] = 0$ by the fundamental theorem of finite abelian groups. Observe that by the fundamental theorem of finitely generated abelian groups, every finitely generated abelian group "A" is isomorphic to a direct sum of a torsion subgroup and a torsion-free abelian group isomorphic to $\Z^r$ for some non-negative integer "r", called the rank of "A" and denoted by $r = \mbox{rank}(A)$. Define the symbol $[A]$ as $[A] = \mbox{rank}(A)$. Then the Grothendieck group $G_0(\Z)$ is isomorphic to $\Z$ with generator $[\Z].$ Indeed, the observation made from the previous paragraph shows that every finitely generated abelian group "A" has its symbol $[A]$ equal to the symbol $[\Z^r] = r[\Z]$ where $r = \mbox{rank}(A)$. 
Furthermore, the rank of an abelian group is additive on short exact sequences, so the symbol $[A] = \operatorname{rank}(A)$ satisfies the defining relations of the Grothendieck group. Suppose one has the following short exact sequence of abelian groups: $0 \to A \to B \to C \to 0$ Then tensoring with the rational numbers $\Q$ implies the following equation. $0 \to A \otimes_\Z \Q \to B \otimes_\Z \Q \to C \otimes_\Z \Q \to 0$ Since the above is a short exact sequence of $\Q$-vector spaces, the sequence splits. Therefore, one has the following equation. $\dim_\Q (B \otimes_\Z \Q ) = \dim_\Q (A \otimes_\Z \Q) + \dim_\Q (C \otimes_\Z \Q )$ On the other hand, one also has the following relation; for more information, see Rank of an abelian group. $\operatorname{rank}(A) = \dim_\Q (A \otimes_\Z \Q )$ Therefore, the following equation holds: $[B] = \operatorname{rank}(B) = \operatorname{rank}(A) + \operatorname{rank}(C) = [A] + [C]$ Hence one has shown that $G_0(\Z)$ is isomorphic to $\Z$ with generator $[\Z].$ Universal property. The Grothendieck group satisfies a universal property. One makes a preliminary definition: A function $\chi$ from the set of isomorphism classes to an abelian group $X$ is called "additive" if, for each exact sequence $0 \to A \to B \to C \to 0$, one has $\chi(A)-\chi(B)+\chi(C)= 0.$ Then, for any additive function $\chi: R\text{-mod} \to X$, there is a "unique" group homomorphism $f:G_0(R) \to X$ such that $\chi$ factors through "$f$" and the map that takes each object of $R\text{-mod}$ to the element representing its isomorphism class in $G_0(R).$ Concretely this means that $f$ satisfies the equation $f([V])=\chi(V)$ for every finitely generated $R$-module $V$ and $f$ is the only group homomorphism that does that. Examples of additive functions are the character functions of representation theory: If $R$ is a finite-dimensional $k$-algebra, then one can associate the character $\chi_V: R \to k$ to every finite-dimensional $R$-module $V$: $\chi_V(x)$ is defined to be the trace of the $k$-linear map that is given by multiplication with the element $x \in R$ on $V$. By choosing a suitable basis and writing the corresponding matrices in block triangular form one easily sees that character functions are additive in the above sense. By the universal property this gives us a "universal character" $\chi: G_0(R)\to \mathrm{Hom}_k(R,k)$ such that $\chi([V]) = \chi_V$. If $k=\Complex$ and $R$ is the group ring $\Complex[G]$ of a finite group $G$ then this character map even gives a natural isomorphism of $G_0(\Complex[G])$ and the character ring $Ch(G)$. In the modular representation theory of finite groups, $k$ can be a field $\overline{\mathbb{F}_p},$ the algebraic closure of the finite field with "p" elements. In this case the analogously defined map that associates to each $k[G]$-module its Brauer character is also a natural isomorphism $G_0(\overline{\mathbb{F}_p}[G])\to \mathrm{BCh}(G)$ onto the ring of Brauer characters. In this way Grothendieck groups show up in representation theory. This universal property also makes $G_0(R)$ the 'universal receiver' of generalized Euler characteristics. In particular, for every bounded complex of objects in $R\text{-mod}$ $\cdots \to 0 \to 0 \to A^n \to A^{n+1} \to \cdots \to A^{m-1} \to A^m \to 0 \to 0 \to \cdots$ one has a canonical element $[A^*] = \sum_i (-1)^i [A^i] = \sum_i (-1)^i [H^i (A^*)] \in G_0(R).$ In fact the Grothendieck group was originally introduced for the study of Euler characteristics. Grothendieck groups of exact categories. 
A common generalization of these two concepts is given by the Grothendieck group of an exact category $\mathcal{A}$. Simply put, an exact category is an additive category together with a class of distinguished short sequences "A" → "B" → "C". The distinguished sequences are called "exact sequences", hence the name. The precise axioms for this distinguished class do not matter for the construction of the Grothendieck group. The Grothendieck group is defined in the same way as before as the abelian group with one generator ["M" ] for each (isomorphism class of) object(s) of the category $\mathcal{A}$ and one relation $[A]-[B]+[C] = 0$ for each exact sequence $A\hookrightarrow B\twoheadrightarrow C$. Alternatively and equivalently, one can define the Grothendieck group using a universal property: A map $\chi: \mathrm{Ob}(\mathcal{A})\to X$ from $\mathcal{A}$ into an abelian group "X" is called "additive" if for every exact sequence $A\hookrightarrow B\twoheadrightarrow C$ one has $\chi(A)-\chi(B)+\chi(C)=0$; an abelian group "G" together with an additive mapping $\phi: \mathrm{Ob}(\mathcal{A})\to G$ is called the Grothendieck group of $\mathcal{A}$ iff every additive map $\chi: \mathrm{Ob}(\mathcal{A})\to X$ factors uniquely through $\phi$. Every abelian category is an exact category if one just uses the standard interpretation of "exact". This gives the notion of a Grothendieck group in the previous section if one chooses for $\mathcal{A}$ the category $R\text{-mod}$ of finitely generated "R"-modules. This category is indeed abelian because "R" was assumed to be artinian (and hence noetherian) in the previous section. On the other hand, every additive category is also exact if one declares those and only those sequences to be exact that have the form $A\hookrightarrow A\oplus B\twoheadrightarrow B$ with the canonical inclusion and projection morphisms. This procedure produces the Grothendieck group of the commutative monoid $(\mathrm{Iso}(\mathcal{A}),\oplus)$ in the first sense (here $\mathrm{Iso}(\mathcal{A})$ means the "set" [ignoring all foundational issues] of isomorphism classes in $\mathcal{A}$.) Grothendieck groups of triangulated categories. Generalizing even further it is also possible to define the Grothendieck group for triangulated categories. The construction is essentially similar but uses the relations ["X"] − ["Y"] + ["Z"] = 0 whenever there is a distinguished triangle "X" → "Y" → "Z" → "X"[1]. Further examples. In the abelian category of finite-dimensional vector spaces over a field $k$, two vector spaces are isomorphic if and only if they have the same dimension, so, for a vector space $V$, $[V] = \big[ k^{\dim(V)} \big] \in K_0 (\mathrm{Vect}_{\mathrm{fin}}).$ Moreover, for an exact sequence $0 \to k^l \to k^m \to k^n \to 0$ "m" = "l" + "n", so $\left[ k^{l+n} \right] = \left[ k^l \right] + \left[ k^n \right] = (l+n)[k].$ Thus $[V] = \dim(V)[k],$ and $K_0(\mathrm{Vect}_{\mathrm{fin}})$ is isomorphic to $\Z$ and is generated by $[k].$ Finally for a bounded complex of finite-dimensional vector spaces "V" *, $[V^*] = \chi(V^*)[k]$ where $\chi$ is the standard Euler characteristic defined by $\chi(V^*)= \sum_i (-1)^i \dim V^i = \sum_i (-1)^i \dim H^i(V^*).$
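To see the Euler-characteristic identity above in action, here is a small Python check (an illustrative sketch, with an ad hoc rank routine over the rationals) on a toy two-step complex of vector spaces.

from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) by Gaussian elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# toy complex 0 -> Q --d0--> Q^2 --d1--> Q -> 0, with d1*d0 = 0
d0 = [[1], [1]]                 # x -> (x, x)
d1 = [[1, -1]]                  # (u, v) -> u - v
dims = [1, 2, 1]                # dimensions of the terms A^0, A^1, A^2
r0, r1 = rank(d0), rank(d1)
h = [dims[0] - r0,              # dim H^0 = dim ker d0
     dims[1] - r1 - r0,         # dim H^1 = dim ker d1 - dim im d0
     dims[2] - r1]              # dim H^2 = dim coker d1
assert sum((-1) ** i * d for i, d in enumerate(dims)) \
    == sum((-1) ** i * x for i, x in enumerate(h))   # both alternating sums agree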
457366
abstract_algebra
Covering group In mathematics, a quasisimple group (also known as a covering group) is a group that is a perfect central extension "E" of a simple group "S". In other words, there is a short exact sequence $1 \to Z(E) \to E \to S \to 1$ such that $E = [E, E]$, where $Z(E)$ denotes the center of "E" and [ , ] denotes the commutator. Equivalently, a group is quasisimple if it is equal to its commutator subgroup and its inner automorphism group Inn("G") (its quotient by its center) is simple; it follows that Inn("G") must be non-abelian simple, since an inner automorphism group is never a non-trivial cyclic group. All non-abelian simple groups are quasisimple. The subnormal quasisimple subgroups of a finite insoluble group control its structure in much the same way as the minimal normal subgroups of a finite soluble group do, and so are given a name, component. The subgroup generated by the subnormal quasisimple subgroups is called the layer, and along with the minimal normal soluble subgroups generates a subgroup called the generalized Fitting subgroup. The quasisimple groups are often studied alongside the simple groups and groups related to their automorphism groups, the almost simple groups. The representation theory of the quasisimple groups is nearly identical to the projective representation theory of the simple groups. Examples. The covering groups of the alternating groups $A_n$ are quasisimple but not simple for $n \geq 5.$ Notes.
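The smallest such covering group, the double cover $2 \cdot A_5 \cong \mathrm{SL}(2,5)$, can be checked by brute force; the following Python sketch (illustrative only) generates $\mathrm{SL}(2,5)$ from two standard generators and verifies that it is perfect with a center of order 2.

P = 5
def mul(a, b):                 # 2x2 matrices over F5 as flat tuples (a, b, c, d)
    return ((a[0]*b[0] + a[1]*b[2]) % P, (a[0]*b[1] + a[1]*b[3]) % P,
            (a[2]*b[0] + a[3]*b[2]) % P, (a[2]*b[1] + a[3]*b[3]) % P)

def inv(m):                    # inverse of a determinant-1 matrix
    return (m[3], -m[1] % P, -m[2] % P, m[0])

def closure(gens):             # subgroup generated by gens, by breadth-first search
    S, frontier = set(gens), list(gens)
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = mul(x, g)
            if y not in S:
                S.add(y)
                frontier.append(y)
    return S

E = closure([(1, 1, 0, 1), (0, 1, 4, 0)])           # SL(2,5) from two generators
assert len(E) == 120
Z = {z for z in E if all(mul(z, g) == mul(g, z) for g in E)}
assert len(Z) == 2                                  # the center is {I, -I}
commutators = {mul(mul(a, b), mul(inv(a), inv(b))) for a in E for b in E}
assert closure(list(commutators)) == E              # E = [E, E]: the group is perfect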
459015
abstract_algebra
Fitting's theorem is a mathematical theorem proved by Hans Fitting. It can be stated as follows: If "M" and "N" are nilpotent normal subgroups of a group "G", then their product "MN" is also a nilpotent normal subgroup of "G"; if, moreover, "M" is nilpotent of class "m" and "N" is nilpotent of class "n", then "MN" is nilpotent of class at most "m" + "n". By induction it follows also that the subgroup generated by a finite collection of nilpotent normal subgroups is nilpotent. This can be used to show that the Fitting subgroup of certain types of groups (including all finite groups) is nilpotent. However, a subgroup generated by an "infinite" collection of nilpotent normal subgroups need not be nilpotent. Order-theoretic statement. In terms of order theory, (part of) Fitting's theorem can be stated as: The set of nilpotent normal subgroups forms a lattice of subgroups. Thus the nilpotent normal subgroups of a "finite" group also form a bounded lattice, and have a top element, the Fitting subgroup. However, nilpotent normal subgroups do not in general form a complete lattice, as a subgroup generated by an infinite collection of nilpotent normal subgroups need not be nilpotent, though it will be normal. The join of all nilpotent normal subgroups is still defined as the Fitting subgroup, but it need not be nilpotent.
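A minimal Python sketch (illustrative; it uses the dihedral group of order 12, realized as permutations of the six vertices of a hexagon, as an assumed example) checks the theorem on the nilpotent normal subgroups M = ⟨r³⟩ and N = ⟨r²⟩: their product is again a nilpotent normal subgroup, of class at most the sum of the classes.

def compose(p, q):
    return tuple(p[q[i]] for i in range(6))

def inverse(p):
    inv = [0] * 6
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def comm(a, b):                 # the commutator a b a^-1 b^-1
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

def closure(gens):              # subgroup generated by gens
    e = tuple(range(6))
    S, frontier = {e}, list(gens)
    while frontier:
        x = frontier.pop()
        if x not in S:
            S.add(x)
            frontier.extend(compose(x, g) for g in gens)
    return S

def nilpotency_class(S):        # length of the lower central series; assumes S nilpotent
    e, gamma, c = tuple(range(6)), S, 0
    while gamma != {e}:
        gamma = closure([comm(x, s) for x in gamma for s in S])
        c += 1
    return c

r, s = (1, 2, 3, 4, 5, 0), (0, 5, 4, 3, 2, 1)       # rotation and a reflection
G = closure([r, s]); assert len(G) == 12            # the dihedral group of order 12
M = closure([compose(r, compose(r, r))])            # <r^3>, order 2
N = closure([compose(r, r)])                        # <r^2>, order 3
MN = {compose(m, n) for m in M for n in N}          # the product MN, order 6
assert all(compose(compose(g, x), inverse(g)) in MN for g in G for x in MN)  # MN normal
assert nilpotency_class(MN) <= nilpotency_class(M) + nilpotency_class(N)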
861758
abstract_algebra
Group obtained by aggregating similar elements of a larger group A quotient group or factor group is a mathematical group obtained by aggregating similar elements of a larger group using an equivalence relation that preserves some of the group structure (the rest of the structure is "factored" out). For example, the cyclic group of addition modulo "n" can be obtained from the group of integers under addition by identifying elements that differ by a multiple of $n$ and defining a group structure that operates on each such class (known as a congruence class) as a single entity. It is part of the mathematical field known as group theory. For a congruence relation on a group, the equivalence class of the identity element is always a normal subgroup of the original group, and the other equivalence classes are precisely the cosets of that normal subgroup. The resulting quotient is written $G\,/\,N$, where $G$ is the original group and $N$ is the normal subgroup. (This is pronounced $G\bmod N$, where $\mbox{mod}$ is short for modulo.) Much of the importance of quotient groups is derived from their relation to homomorphisms. The first isomorphism theorem states that the image of any group "G" under a homomorphism is always isomorphic to a quotient of $G$. Specifically, the image of $G$ under a homomorphism $\varphi: G \rightarrow H$ is isomorphic to $G\,/\,\ker(\varphi)$ where $\ker(\varphi)$ denotes the kernel of $\varphi$. The dual notion of a quotient group is a subgroup, these being the two primary ways of forming a smaller group from a larger one. Any normal subgroup has a corresponding quotient group, formed from the larger group by eliminating the distinction between elements of the subgroup. In category theory, quotient groups are examples of quotient objects, which are dual to subobjects. Definition and illustration. Given a group $G$ and a subgroup $H$, and a fixed element $a \in G$, one can consider the corresponding left coset: $aH := \left\{ah: h \in H \right\}$. Cosets are a natural class of subsets of a group; for example consider the abelian group "G" of integers, with operation defined by the usual addition, and the subgroup $H$ of even integers. Then there are exactly two cosets: $0+H$, which are the even integers, and $1+H$, which are the odd integers (here we are using additive notation for the binary operation instead of multiplicative notation). For a general subgroup "$H$", it is desirable to define a compatible group operation on the set of all possible cosets, $\left\{aH: a \in G \right\}$. This is possible exactly when "$H$" is a normal subgroup, see below. A subgroup $N$ of a group "$G$" is normal if and only if the coset equality $aN = Na$ holds for all $a \in G$. A normal subgroup of "$G$" is denoted $N \triangleleft G$. Definition. Let "$N$" be a normal subgroup of a group "$G$". Define the set $G\,/\,N$ to be the set of all left cosets of "$N$" in "$G$". That is, $G\,/\,N = \left\{aN: a \in G\right\}$. Since the identity element $e \in N$, $a \in aN$. Define a binary operation on the set of cosets, $G\,/\,N$, as follows. For each $aN$ and $bN$ in $G\,/\,N$, the product of $aN$ and $bN$, $(aN)(bN)$, is $(ab)N$. This works only because $(ab)N$ does not depend on the choice of the representatives, $a$ and $b$, of each left coset, $aN$ and $bN$. To prove this, suppose $xN = aN$ and $yN = bN$ for some $x, y, a, b \in G$. Then $(ab)N = a(bN) = a(yN) = a(Ny) = (aN)y = (xN)y = x(Ny) = x(yN) = (xy)N$. This depends on the fact that "N" is a normal subgroup. 
It still remains to be shown that this condition is not only sufficient but necessary to define the operation on "G"/"N". To show that it is necessary, consider that for a subgroup "$N$" of "$G$", we have been given that the operation is well defined. That is, for all $xN = aN$ and $yN = bN,$ with $x, y, a, b \in G$, we have $(ab)N = (xy)N$. Let $n \in N$ and $g \in G$. Since $eN = nN,$ we have $gN = (eg)N = (eN)(gN) = (nN)(gN) = (ng)N$. Now, $gN = (ng)N \Leftrightarrow N = (g^{-1}ng)N \Leftrightarrow g^{-1}ng \in N, \; \forall \, n \in N$ and $g \in G$. Hence "$N$" is a normal subgroup of "$G$". It can also be checked that this operation on $G\,/\,N$ is always associative, that $G\,/\,N$ has identity element "$N$", and that the inverse of element $aN$ can always be represented by $a^{-1}N$. Therefore, the set $G\,/\,N$ together with the operation defined by $(aN)(bN) = (ab)N$ forms a group, the quotient group of "$G$" by "$N$". Due to the normality of "$N$", the left cosets and right cosets of "$N$" in "$G$" are the same, and so, $G\,/\,N$ could have been defined to be the set of right cosets of "$N$" in "$G$". Example: Addition modulo 6. For example, consider the group with addition modulo 6: $G = \left\{0, 1, 2, 3, 4, 5 \right\}$. Consider the subgroup $N = \left\{0, 3 \right\}$, which is normal because "$G$" is abelian. Then the set of (left) cosets is of size three: $G\,/\,N = \left\{a+N: a \in G \right\} = \left\{ \left\{0, 3 \right\}, \left\{1, 4 \right\}, \left\{2, 5 \right\} \right\} = \left\{0+N, 1+N, 2+N \right\}$. The binary operation defined above makes this set into a group, known as the quotient group, which in this case is isomorphic to the cyclic group of order 3. Motivation for the name "quotient". The reason $G\,/\,N$ is called a quotient group comes from division of integers. When dividing 12 by 3 one obtains the answer 4 because one can regroup 12 objects into 4 subcollections of 3 objects. The quotient group is the same idea, although we end up with a group for a final answer instead of a number because groups have more structure than an arbitrary collection of objects. To elaborate, when looking at $G\,/\,N$ with "$N$" a normal subgroup of "$G$", the group structure is used to form a natural "regrouping". These are the cosets of "$N$" in "$G$". Because we started with a group and normal subgroup, the final quotient contains more information than just the number of cosets (which is what regular division yields), but instead has a group structure itself. Examples. Even and odd integers. Consider the group of integers $\Z$ (under addition) and the subgroup $2\Z$ consisting of all even integers. This is a normal subgroup, because $\Z$ is abelian. There are only two cosets: the set of even integers and the set of odd integers, and therefore the quotient group $\Z\,/\,2\Z$ is the cyclic group with two elements. This quotient group is isomorphic with the set $\left\{0,1 \right\}$ with addition modulo 2; informally, it is sometimes said that $\Z\,/\,2\Z$ "equals" the set $\left\{0,1 \right\}$ with addition modulo 2. Example further explained. Let $ \gamma(m) $ be the remainder of $ m \in \Z $ when dividing by $ 2 $. Then, $ \gamma(m)=0 $ when $ m $ is even and $ \gamma(m)=1 $ when $ m $ is odd. By definition of $ \gamma $, the kernel of $ \gamma $, $ \ker(\gamma) = \{ m \in \Z : \gamma(m)=0 \} $, is the set of all even integers. Let $ H=\ker(\gamma) $. 
Then $ H $ is a subgroup: the identity in $ \Z $, which is $ 0 $, is in $ H $; the sum of two even integers is even, so if $ m $ and $ n $ are in $ H $ then $ m+n $ is in $ H $ (closure); and if $ m $ is even then $ -m $ is also even, so $ H $ contains its inverses. Define $ \mu : \mathbb{Z} / H \to \Z_2 $ by $ \mu(aH)=\gamma(a) $ for $ a\in\Z $, where $\mathbb{Z} / H$ is the quotient group of left cosets, $\mathbb{Z} / H=\{H,1+H\} $. Note that, by the way we have defined $ \mu $, $ \mu(aH) $ is $ 1 $ if $ a $ is odd and $ 0 $ if $ a $ is even. Thus, $ \mu $ is an isomorphism from $\mathbb{Z} / H$ to $ \Z_2 $. Remainders of integer division. A slight generalization of the last example: once again consider the group of integers $\Z$ under addition. Let "n" be any positive integer. We will consider the subgroup $n\Z$ of $\Z$ consisting of all multiples of "$n$". Once again $n\Z$ is normal in $\Z$ because $\Z$ is abelian. The cosets are the collection $\left\{n\Z, 1+n\Z, \; \ldots, (n-2)+n\Z, (n-1)+n\Z \right\}$. An integer "$k$" belongs to the coset $r+n\Z$, where "$r$" is the remainder when dividing "$k$" by "$n$". The quotient $\Z\,/\,n\Z$ can be thought of as the group of "remainders" modulo $n$. This is a cyclic group of order "$n$". Complex integer roots of 1. The twelfth roots of unity, which are points on the complex unit circle, form a multiplicative abelian group "$G$". Consider its subgroup "$N$" made of the fourth roots of unity. This normal subgroup splits the group into three cosets, and one can check that these cosets themselves form a group of three elements: the product of elements from two given cosets always lies in a coset that depends only on the two cosets chosen, and inverses behave likewise. Thus, the quotient group "$G\,/\,N$" turns out to be the cyclic group with three elements. The real numbers modulo the integers. Consider the group of real numbers $\R$ under addition, and the subgroup $\Z$ of integers. Each coset of $\Z$ in $\R$ is a set of the form $a+\Z$, where $a$ is a real number. Since $a_1+\Z$ and $a_2+\Z$ are identical sets when the non-integer parts of "$a_1$" and "$a_2$" are equal, one may impose the restriction $0 \leq a < 1$ without change of meaning. Adding such cosets is done by adding the corresponding real numbers, and subtracting 1 if the result is greater than or equal to 1. The quotient group $\R\,/\,\Z$ is isomorphic to the circle group, the group of complex numbers of absolute value 1 under multiplication, or correspondingly, the group of rotations in 2D about the origin, that is, the special orthogonal group $\mbox{SO}(2)$. An isomorphism is given by $f(a+\Z) = \exp(2\pi ia)$ (see Euler's identity). Matrices of real numbers. If "$G$" is the group of invertible $3 \times 3$ real matrices, and "$N$" is the subgroup of $3 \times 3$ real matrices with determinant 1, then "$N$" is normal in "$G$" (since it is the kernel of the determinant homomorphism). The cosets of "$N$" are the sets of matrices with a given determinant, and hence "$G\,/\,N$" is isomorphic to the multiplicative group of non-zero real numbers. The group "$N$" is known as the special linear group $\mbox{SL}(3)$. Integer modular arithmetic. Consider the abelian group $\Z_4 = \Z\,/\,4 \Z$ (that is, the set $\left\{0, 1, 2, 3 \right\}$ with addition modulo 4), and its subgroup $\left\{0, 2\right\}$. 
The quotient group $\Z_4\,/\,\left\{0, 2\right\}$ is $\left\{\left\{ 0, 2 \right\}, \left\{1, 3 \right\} \right\}$. This is a group with identity element $\left\{0, 2\right\}$, and group operations such as $\left\{0, 2 \right\} + \left\{1, 3 \right\} = \left\{1, 3 \right\}$. Both the subgroup $\left\{0, 2\right\}$ and the quotient group $\left\{\left\{ 0, 2 \right\}, \left\{1, 3 \right\} \right\}$ are isomorphic with $\Z_2$. Integer multiplication. Consider the multiplicative group $G=(\Z_{n^2})^{\times}$. The set "$N$" of $n$th residues is a multiplicative subgroup isomorphic to $(\Z_{n})^{\times}$. Then "$N$" is normal in "$G$" and the factor group "$G\,/\,N$" has the cosets $N, (1+n)N, (1+n)^2 N, \;\ldots, (1+n)^{n-1} N$. The Paillier cryptosystem is based on the conjecture that it is difficult to determine the coset of a random element of "$G$" without knowing the factorization of "$n$". Properties. The quotient group $G\,/\,G$ is isomorphic to the trivial group (the group with one element), and $G\,/\,\left\{e \right\}$ is isomorphic to "$G$". The order of "$G\,/\,N$", by definition the number of elements, is equal to $\vert G : N \vert$, the index of "$N$" in "$G$". If "$G$" is finite, the index is also equal to the order of "$G$" divided by the order of "$N$". The set "$G\,/\,N$" may be finite, although both "$G$" and "$N$" are infinite (for example, $\Z\,/\,2\Z$). There is a "natural" surjective group homomorphism $\pi: G \rightarrow G\,/\,N$, sending each element $g$ of "$G$" to the coset of "$N$" to which "$g$" belongs, that is: $\pi(g) = gN$. The mapping $\pi$ is sometimes called the "canonical projection of $G$ onto $G\,/\,N$". Its kernel is "$N$". There is a bijective correspondence between the subgroups of "$G$" that contain "$N$" and the subgroups of "$G\,/\,N$"; if $H$ is a subgroup of "$G$" containing "$N$", then the corresponding subgroup of "$G\,/\,N$" is $\pi(H)$. This correspondence holds for normal subgroups of "$G$" and "$G\,/\,N$" as well, and is formalized in the lattice theorem. Several important properties of quotient groups are recorded in the fundamental theorem on homomorphisms and the isomorphism theorems. If "$G$" is abelian, nilpotent, solvable, cyclic or finitely generated, then so is "$G\,/\,N$". If "$H$" is a subgroup in a finite group "$G$", and the order of "$H$" is one half of the order of "$G$", then "$H$" is guaranteed to be a normal subgroup, so "$G\,/\,H$" exists and is isomorphic to $C_2$. This result can also be stated as "any subgroup of index 2 is normal", and in this form it applies also to infinite groups. Furthermore, if $p$ is the smallest prime number dividing the order of a finite group, "$G$", then if "$G\,/\,H$" has order "$p$", "$H$" must be a normal subgroup of "$G$". Given "$G$" and a normal subgroup "$N$", then "$G$" is a group extension of "$G\,/\,N$" by "$N$". One could ask whether this extension is trivial or split; in other words, one could ask whether "$G$" is a direct product or semidirect product of "$N$" and "$G\,/\,N$". This is a special case of the extension problem. An example where the extension is not split is as follows: Let $G = \Z_4 = \left\{0, 1, 2, 3 \right\}$, and $N = \left\{0, 2 \right\}$, which is isomorphic to $\Z_2$. Then "$G\,/\,N$" is also isomorphic to $\Z_2$. But $\Z_2$ has only the trivial automorphism, so the only semi-direct product of "$N$" and "$G\,/\,N$" is the direct product. Since $\Z_4$ is different from $\Z_2 \times \Z_2$, we conclude that "$G$" is not a semi-direct product of "$N$" and "$G\,/\,N$". 
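Returning to the addition-modulo-6 example, the well-definedness of coset addition and the structure of the quotient can be verified mechanically; a minimal Python sketch (illustrative only):

G = range(6)
N = frozenset({0, 3})

def coset(a):                                 # the coset a + N in Z6
    return frozenset((a + n) % 6 for n in N)

assert len({coset(a) for a in G}) == 3        # {0,3}, {1,4}, {2,5}

# Well-definedness: the coset of a sum depends only on the cosets of the summands.
for a in G:
    for b in G:
        assert all(coset(x + y) == coset(a + b) for x in coset(a) for y in coset(b))

def add(C, D):                                # (a + N) + (b + N) := (a + b) + N
    return coset(min(C) + min(D))

e, g = coset(0), coset(1)
assert add(g, add(g, g)) == e                 # 1 + N has order 3: the quotient is cyclic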
Quotients of Lie groups. If "$G$" is a Lie group and "$N$" is a normal and closed (in the topological rather than the algebraic sense of the word) Lie subgroup of "$G$", the quotient "$G$" / "$N$" is also a Lie group. In this case, the original group "$G$" has the structure of a fiber bundle (specifically, a principal "$N$"-bundle), with base space "$G$" / "$N$" and fiber "$N$". The dimension of "$G$" / "$N$" equals $ \dim G - \dim N$. Note that the condition that "$N$" is closed is necessary. Indeed, if "$N$" is not closed then the quotient space is not a T1-space (since there is a coset in the quotient which cannot be separated from the identity by an open set), and thus not a Hausdorff space. For a non-normal Lie subgroup "$N$", the space $G\,/\,N$ of left cosets is not a group, but simply a differentiable manifold on which "$G$" acts. The result is known as a homogeneous space. Notes.
5470
abstract_algebra
In mathematics, the Bruhat decomposition (introduced by François Bruhat for classical groups and by Claude Chevalley in general) "G" = "BWB" of certain algebraic groups "G" into cells can be regarded as a general expression of the principle of Gauss–Jordan elimination, which generically writes a matrix as the product of an upper triangular and a lower triangular matrix—but with exceptional cases. It is related to the Schubert cell decomposition of flag varieties: see Weyl group for this. More generally, any group with a ("B", "N") pair has a Bruhat decomposition. Definitions. The Bruhat decomposition of "G" is the decomposition $G=BWB =\bigsqcup_{w\in W}BwB$ of "G" as a disjoint union of double cosets of "B" parameterized by the elements of the Weyl group "W". (Note that although "W" is not in general a subgroup of "G", the coset "wB" is still well defined because the maximal torus is contained in "B".) Examples. Let "G" be the general linear group GL"n" of invertible $n \times n$ matrices with entries in some algebraically closed field, which is a reductive group. Then the Weyl group "W" is isomorphic to the symmetric group "S""n" on "n" letters, with permutation matrices as representatives. In this case, we can take "B" to be the subgroup of upper triangular invertible matrices, so Bruhat decomposition says that one can write any invertible matrix "A" as a product "U"1"PU"2 where "U"1 and "U"2 are upper triangular, and "P" is a permutation matrix. Writing this as "P" = "U"1−1"AU"2−1, this says that any invertible matrix can be transformed into a permutation matrix via a series of row and column operations, where we are only allowed to add row "i" (resp. column "i") to row "j" (resp. column "j") if "i" > "j" (resp. "i" < "j"). The row operations correspond to "U"1−1, and the column operations correspond to "U"2−1. The special linear group SL"n" of invertible $n \times n$ matrices with determinant 1 is a semisimple group, and hence reductive. In this case, "W" is still isomorphic to the symmetric group "S""n". However, the determinant of a permutation matrix is the sign of the permutation, so to represent an odd permutation in SL"n", we can take one of the nonzero elements to be −1 instead of 1. Here "B" is the subgroup of upper triangular matrices with determinant 1, so the interpretation of Bruhat decomposition in this case is similar to the case of GL"n". Geometry. The cells in the Bruhat decomposition correspond to the Schubert cell decomposition of flag varieties. The dimension of the cells corresponds to the length of the word "w" in the Weyl group. Poincaré duality constrains the topology of the cell decomposition, and thus the algebra of the Weyl group; for instance, the top dimensional cell is unique (it represents the fundamental class), and corresponds to the longest element of a Coxeter group. Computations. The number of cells in a given dimension of the Bruhat decomposition is given by the coefficients of the "q"-polynomial of the associated Dynkin diagram. Double Bruhat cells. With two opposite Borel subgroups, one may intersect the Bruhat cells for each of them: $G=\bigsqcup_{w_1 , w_2\in W} ( Bw_1 B \cap B_- w_2 B_- )$
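For the smallest case the decomposition can be checked exhaustively; the following Python sketch (illustrative only) verifies that GL2 over the two-element field is the disjoint union of the double cosets "BwB" for the two Weyl group elements.

from itertools import product

def mul(a, b):                    # 2x2 matrices over F2 as flat tuples
    return tuple((a[2*i] * b[j] + a[2*i + 1] * b[2 + j]) % 2
                 for i in range(2) for j in range(2))

GL2 = [m for m in product(range(2), repeat=4) if (m[0]*m[3] - m[1]*m[2]) % 2 == 1]
B = [m for m in GL2 if m[2] == 0]             # Borel subgroup: upper triangular
W = [(1, 0, 0, 1), (0, 1, 1, 0)]              # Weyl group: identity and the swap

cells = [frozenset(mul(mul(b1, w), b2) for b1 in B for b2 in B) for w in W]
assert sum(len(c) for c in cells) == len(GL2)  # the double cosets partition GL2(F2)
assert set().union(*cells) == set(GL2)         # cell sizes come out as 2 and 4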
355196
abstract_algebra
Set of elements that commute with every element of a group In abstract algebra, the center of a group, "G", is the set of elements that commute with every element of "G". It is denoted Z("G"), from German "Zentrum," meaning "center". In set-builder notation, Z("G") = {"z" ∈ "G" | ∀"g" ∈ "G", "zg" = "gz"}. The center is a normal subgroup, Z("G") ⊲ "G". As a subgroup, it is always characteristic, but is not necessarily fully characteristic. The quotient group, "G" / Z("G"), is isomorphic to the inner automorphism group, Inn("G"). A group "G" is abelian if and only if Z("G") = "G". At the other extreme, a group is said to be centerless if Z("G") is trivial; i.e., consists only of the identity element. The elements of the center are sometimes called central. As a subgroup. The center of "G" is always a subgroup of "G". In particular: the identity element of "G" is in Z("G"); if "x" and "y" are in Z("G"), then so is "xy"; and if "x" is in Z("G"), then so is "x"−1. Furthermore, the center of "G" is always an abelian and normal subgroup of "G". Since all elements of Z("G") commute, it is closed under conjugation. Note that a homomorphism "f": "G" → "H" between groups generally does not restrict to a homomorphism between their centers. Although "f" ("Z" ("G")) commutes with "f" ( "G" ), unless "f" is surjective "f" ("Z" ("G")) need not commute with all of "H" and therefore need not be a subset of "Z" ( "H" ). Put another way, there is no "center" functor between categories Grp and Ab. Even though we can map objects, we cannot map arrows. Conjugacy classes and centralizers. By definition, the center is the set of elements for which the conjugacy class of each element is the element itself; i.e., Cl("g") = {"g"}. The center is also the intersection of all the centralizers of each element of "G". As centralizers are subgroups, this again shows that the center is a subgroup. Conjugation. Consider the map, "f": "G" → Aut("G"), from "G" to the automorphism group of "G" defined by "f"("g") = "ϕ""g", where "ϕ""g" is the automorphism of "G" defined by "f"("g")("h") = "ϕ""g"("h") = "ghg"−1. The function, "f" is a group homomorphism, and its kernel is precisely the center of "G", and its image is called the inner automorphism group of "G", denoted Inn("G"). By the first isomorphism theorem we get, "G"/Z("G") ≃ Inn("G"). The cokernel of this map is the group Out("G") of outer automorphisms, and these form the exact sequence 1 ⟶ Z("G") ⟶ "G" ⟶ Aut("G") ⟶ Out("G") ⟶ 1. Examples. For example, the center of the Heisenberg group, the group of 3 × 3 upper unitriangular matrices, consists of the matrices of the form $\begin{pmatrix} 1 & 0 & z\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.$ Higher centers. Quotienting out by the center of a group yields a sequence of groups called the upper central series: ("G"0 = "G") ⟶ ("G"1 = "G"0/Z("G"0)) ⟶ ("G"2 = "G"1/Z("G"1)) ⟶ ⋯ The kernel of the map "G" → "G""i" is the "i"th center of "G" (second center, third center, etc.) and is denoted Z"i"("G"). Concretely, the ("i" + 1)-st center consists of the elements that commute with all elements up to an element of the "i"th center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter. The ascending chain of subgroups 1 ≤ Z("G") ≤ Z2("G") ≤ ⋯ stabilizes at "i" (equivalently, Z"i"("G") = Z"i"+1("G")) if and only if "G""i" is centerless. Notes.
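The center of a small group is easy to compute by brute force; the following Python sketch (illustrative only) finds the center of the dihedral group of the square and reads off the order of Inn("G") = "G"/Z("G").

def compose(p, q):                       # permutations of the square's 4 vertices
    return tuple(p[q[i]] for i in range(4))

def closure(gens):                       # subgroup generated by gens
    S, frontier = {tuple(range(4))}, list(gens)
    while frontier:
        x = frontier.pop()
        if x not in S:
            S.add(x)
            frontier.extend(compose(x, g) for g in gens)
    return S

r = (1, 2, 3, 0)                         # rotation by 90 degrees
s = (0, 3, 2, 1)                         # reflection fixing vertex 0
G = closure([r, s])
assert len(G) == 8                       # the dihedral group of order 8
Z = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
assert Z == {tuple(range(4)), compose(r, r)}    # the center is {e, r^2}
assert len(G) // len(Z) == 4             # |Inn(G)| = |G / Z(G)| = 4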
3245
abstract_algebra
In combinatorial mathematics a cycle index is a polynomial in several variables which is structured in such a way that information about how a group of permutations acts on a set can be simply read off from the coefficients and exponents. This compact way of storing information in an algebraic form is frequently used in combinatorial enumeration. Each permutation π of a finite set of objects partitions that set into cycles; the cycle index monomial of π is a monomial in variables "a"1, "a"2, … that describes the cycle type of this partition: the exponent of "a""i" is the number of cycles of π of size "i". The cycle index polynomial of a permutation group is the average of the cycle index monomials of its elements. The phrase cycle indicator is also sometimes used in place of "cycle index". Knowing the cycle index polynomial of a permutation group, one can enumerate equivalence classes due to the group's action. This is the main ingredient in the Pólya enumeration theorem. Performing formal algebraic and differential operations on these polynomials and then interpreting the results combinatorially lies at the core of species theory. Permutation groups and group actions. A bijective map from a set "X" onto itself is called a permutation of "X", and the set of all permutations of "X" forms a group under the composition of mappings, called the symmetric group of "X", and denoted Sym("X"). Every subgroup of Sym("X") is called a permutation group of "degree" |"X"|. Let "G" be an abstract group with a group homomorphism φ from "G" into Sym("X"). The image, φ("G"), is a permutation group. The group homomorphism can be thought of as a means for permitting the group "G" to "act" on the set "X" (using the permutations associated with the elements of "G"). Such a group homomorphism is formally called a group action and the image of the homomorphism is a "permutation representation" of "G". A given group can have many different permutation representations, corresponding to different actions. Suppose that group "G" acts on set "X" (that is, a group action exists). In combinatorial applications the interest is in the set "X"; for instance, counting things in "X" and knowing what structures might be left invariant by "G". Little is lost by working with permutation groups in such a setting, so in these applications, when a group is considered, it is a permutation representation of the group which will be worked with, and thus, a group action must be specified. Algebraists, on the other hand, are more interested in the groups themselves and would be more concerned with the kernels of the group actions, which measure how much is lost in passing from the group to its permutation representation. Disjoint cycle representation of permutations. Finite permutations are most often represented as group actions on the set "X" = {1,2, ..., n}. A permutation in this setting can be represented by a two line notation. Thus, $\left ( \begin{matrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 3 & 4 & 5 & 1 \end{matrix} \right )$ corresponds to a bijection on "X" = {1, 2, 3, 4, 5} which sends 1 → 2, 2 → 3, 3 → 4, 4 → 5 and 5 → 1. This can be read off from the columns of the notation. When the top row is understood to be the elements of "X" in an appropriate order, only the second row need be written. In this one line notation, our example would be [2 3 4 5 1]. This example is known as a "cyclic permutation" because it "cycles" the numbers around and a third notation for it would be (1 2 3 4 5). 
This "cycle notation" is to be read as: each element is sent to the element on its right, but the last element is sent to the first one (it "cycles" to the beginning). With cycle notation, it does not matter where a cycle starts, so (1 2 3 4 5) and (3 4 5 1 2) and (5 1 2 3 4) all represent the same permutation. The "length of a cycle" is the number of elements in the cycle. Not all permutations are cyclic permutations, but every permutation can be written as a product of disjoint (having no common element) cycles in essentially one way. As a permutation may have "fixed points" (elements that are unchanged by the permutation), these will be represented by cycles of length one. For example: $\left ( \begin{matrix}1 & 2 & 3 & 4& 5& 6\\2&1&3&5&6&4 \end{matrix} \right ) = (1 2)(3)(4 5 6).$ This permutation is the product of three cycles, one of length two, one of length three and a fixed point. The elements in these cycles are disjoint subsets of "X" and form a partition of "X". The cycle structure of a permutation can be coded as an algebraic monomial in several (dummy) variables in the following way: a variable is needed for each distinct cycle length of the cycles that appear in the cycle decomposition of the permutation. In the previous example there were three different cycle lengths, so we will use three variables, "a"1, "a"2 and "a"3 (in general, use the variable "a""k" to correspond to length "k" cycles). The variable "a""i" will be raised to the "j""i"("g") power where "j""i"("g") is the number of cycles of length "i" in the cycle decomposition of permutation "g". We can then associate the "cycle index monomial" $\prod_{k=1}^n a_k^{j_k(g)}$ to the permutation "g". The cycle index monomial of our example would be "a"1"a"2"a"3, while the cycle index monomial of the permutation (1 2)(3 4)(5)(6 7 8 9)(10 11 12 13) would be "a"1"a"22"a"42. Definition. The cycle index of a permutation group "G" is the average of the cycle index monomials of all the permutations "g" in "G". More formally, let "G" be a permutation group of order "m" and degree "n". Every permutation "g" in "G" has a unique decomposition into disjoint cycles, say "c"1 "c"2 "c"3 ... . Let the length of a cycle "c" be denoted by |"c"|. Now let "j"k(g) be the number of cycles of "g" of length "k", where $0 \le j_k(g) \le \lfloor n/k \rfloor \mbox{ and } \sum_{k=1}^n k \, j_k(g) \; = n.$ We associate to "g" the monomial $ \prod_{c\in g} a_ = \prod_{k=1}^n a_k^{j_k(g)}$ in the variables "a"1, "a"2, ..., "a""n". Then the cycle index "Z"("G") of "G" is given by $ Z(G) = \frac{1} \sum_{g\in G} \prod_{k=1}^n a_k^{j_k(g)}.$ Example. Consider the group "G" of rotational symmetries of a square in the Euclidean plane. Such symmetries are completely determined by the images of just the corners of the square. By labeling these corners 1, 2, 3 and 4 (consecutively going clockwise) we can represent the elements of "G" as permutations of the set "X" = {1,2,3,4}. The permutation representation of "G" consists of the four permutations (1 4 3 2), (1 3)(2 4), (1 2 3 4) and e = (1)(2)(3)(4) which represent the counter-clockwise rotations by 90°, 180°, 270° and 360° respectively. Notice that the identity permutation e is the only permutation with fixed points in this representation of "G". As an abstract group, "G" is known as the cyclic group "C"4, and this permutation representation of it is its "regular representation". The cycle index monomials are "a"4, "a"22, "a"4, and "a"14 respectively. 
Thus, the cycle index of this permutation group is: $Z(C_4) = \frac{1}{4}\left ( a_1^4 + a_2^2 + 2a_4 \right).$ The group "C"4 also acts on the unordered pairs of elements of "X" in a natural way. Any permutation "g" would send {"x","y"} → {"x""g", "y""g"} (where "x""g" is the image of the element "x" under the permutation "g"). The set "X" is now {"A", "B", "C", "D", "E", "F"} where "A" = {1,2}, "B" = {2,3}, "C" = {3,4}, "D" = {1,4}, "E" = {1,3} and "F" = {2,4}. These elements can be thought of as the sides and diagonals of the square or, in a completely different setting, as the edges of the complete graph "K"4. Acting on this new set, the four group elements are now represented by ("A" "D" "C" "B")("E" "F"), ("A C")("B D")("E")("F"), ("A B C D")("E F") and e = ("A")("B")("C")("D")("E")("F") and the cycle index of this action is: $Z(C_4) = \frac{1}{4}\left ( a_1^6 + a_1^2 a_2^2 + 2a_2a_4 \right).$ The group "C"4 can also act on the ordered pairs of elements of "X" in the same natural way. Any permutation "g" would send ("x","y") → ("x""g", "y""g") (in this case we would also have ordered pairs of the form ("x", "x")). The elements of "X" could be thought of as the arcs of the complete digraph "D"4 (with loops at each vertex). The cycle index in this case would be: $Z(C_4) = \frac{1}{4}\left ( a_1^{16} + a_2^8 + 2a_4^4 \right).$ Types of actions. As the above example shows, the cycle index depends on the group action and not on the abstract group. Since there are many permutation representations of an abstract group, it is useful to have some terminology to distinguish them. When an abstract group is defined in terms of permutations, it is a permutation group and the group action is the identity homomorphism. This is referred to as the "natural action". The symmetric group "S"3 in its natural action has the elements $S_3 = \{e,(2 3),(1 2),(1 2 3),(1 3 2),(1 3)\}$ and so, its cycle index is: $Z(S_3) = \frac{1}{6} \left( a_1^3 + 3 a_1 a_2 + 2 a_3 \right).$ A permutation group "G" on the set "X" is "transitive" if for every pair of elements "x" and "y" in "X" there is at least one "g" in "G" such that "y" = "x""g". A transitive permutation group is "regular" (or sometimes referred to as "sharply transitive") if the only permutation in the group that has fixed points is the identity permutation. A finite transitive permutation group "G" on the set "X" is regular if and only if |"G"| = |"X"|. Cayley's theorem states that every abstract group has a regular permutation representation given by the group acting on itself (as a set) by (right) multiplication. This is called the "regular representation" of the group. The cyclic group "C"6 in its regular representation contains the six permutations (one-line form of the permutation is given first): [1 2 3 4 5 6] = (1)(2)(3)(4)(5)(6) [2 3 4 5 6 1] = (1 2 3 4 5 6) [3 4 5 6 1 2] = (1 3 5)(2 4 6) [4 5 6 1 2 3] = (1 4)(2 5)(3 6) [5 6 1 2 3 4] = (1 5 3)(2 6 4) [6 1 2 3 4 5] = (1 6 5 4 3 2). Thus its cycle index is: $Z(C_6) = \frac{1}{6} \left( a_1^6 + a_2^3 + 2 a_3^2 + 2 a_6 \right).$ Often, when an author does not wish to use the group action terminology, the permutation group involved is given a name which implies what the action is. The following three examples illustrate this point. The cycle index of the "edge permutation group" of the complete graph on three vertices. We will identify the complete graph "K"3 with an equilateral triangle in the Euclidean plane. 
This permits us to use geometric language to describe the permutations involved as symmetries of the equilateral triangle. Every permutation in the group "S"3 of "vertex permutations" ("S"3 in its natural action, given above) induces an edge permutation. These are the permutations: the identity induces the identity on the three edges; each of the three transpositions of vertices fixes one edge and interchanges the other two; and each of the two 3-cycles of vertices cyclically permutes the three edges. The cycle index of the group "G" of edge permutations induced by vertex permutations from "S"3 is $ Z(G) = \frac{1}{6} \left(a_1^3 + 3 a_1 a_2 + 2 a_3 \right).$ It happens that the complete graph "K"3 is isomorphic to its own line graph (vertex-edge dual) and hence the edge permutation group induced by the vertex permutation group is the same as the vertex permutation group, namely "S"3 and the cycle index is "Z"("S"3). This is not the case for complete graphs on more than three vertices, since these have strictly more edges ($\binom{n}{2}$) than vertices ("n"). The cycle index of the edge permutation group of the complete graph on four vertices. This is entirely analogous to the three-vertex case. These are the vertex permutations ("S"4 in its natural action) and the edge permutations ("S"4 acting on unordered pairs) that they induce: the identity induces the identity on the six edges; each of the six transpositions fixes two edges and interchanges the remaining four in two 2-cycles; each of the three double transpositions likewise fixes two edges and interchanges the remaining four in two 2-cycles; each of the eight 3-cycles splits the edges into two 3-cycles; and each of the six 4-cycles produces one 2-cycle and one 4-cycle of edges. We may visualize the types of permutations geometrically as symmetries of a regular tetrahedron. The cycle index of the edge permutation group "G" of "K"4 is: $ Z(G) = \frac{1}{24} \left( a_1^6 + 9 a_1^2 a_2^2 + 8 a_3^2 + 6 a_2 a_4 \right). $ The cycle index of the face permutations of a cube. Consider an ordinary cube in three-space and its group of rotational symmetries, call it "C". It permutes the six faces of the cube. There are twenty-four such rotations, of five types. The identity: there is one such permutation and its contribution is $a_1^6.$ Six face rotations by ±90°: we rotate about the axis passing through the centers of a face and the face opposing it. This will fix the face and the face opposing it and create a four-cycle of the faces parallel to the axis of rotation. The contribution is $6 a_1^2 a_4.$ Three face rotations by 180°: we rotate about the same axis as in the previous case, but now there is no four-cycle of the faces parallel to the axis, but rather two two-cycles. The contribution is $3 a_1^2 a_2^2.$ Eight vertex rotations by ±120°: this time we rotate about the axis passing through two opposite vertices (the endpoints of a main diagonal). This creates two three-cycles of faces (the faces incident on the same vertex form a cycle). The contribution is $8 a_3^2.$ Six edge rotations by 180°: these edge rotations rotate about the axis that passes through the midpoints of opposite edges not incident on the same face and parallel to each other, and each exchanges the two faces that are incident on the first edge, the two faces incident on the second edge, and the two faces that share two vertices but no edge with the two edges, i.e. there are three two-cycles and the contribution is $6 a_2^3.$ The conclusion is that the cycle index of the group "C" is $Z(C) = \frac{1}{24} \left( a_1^6 + 6 a_1^2 a_4 + 3 a_1^2 a_2^2 + 8 a_3^2 + 6 a_2^3 \right) .$ Cycle indices of some permutation groups. Identity group "E""n". This group contains one permutation that fixes every element (this must be a natural action). $ Z(E_n) = a_1^n.$ Cyclic group "C"n. A cyclic group, "C"n, is the group of rotations of a regular "n"-gon, that is, "n" elements equally spaced around a circle. This group has φ("d") elements of order "d" for each divisor "d" of "n", where φ("d") is the Euler φ-function, giving the number of natural numbers not greater than "d" which are relatively prime to "d". 
In the regular representation of "C"n, a permutation of order "d" has "n"/"d" cycles of length "d", thus: $ Z(C_n) = \frac{1}{n} \sum_{d|n} \varphi(d) a_d^{n/d}.$ Dihedral group "D"n. The dihedral group is like the cyclic group, but also includes reflections. In its natural action, $ Z(D_n) = \frac{1}{2} Z(C_n) + \begin{cases} \frac{1}{2} a_1 a_2^{(n-1)/2}, & n \text{ odd,} \\ \frac{1}{4} \left( a_1^2 a_2^{(n-2)/2} + a_2^{n/2} \right), & n \text{ even.} \end{cases}$ Alternating group "A"n. The cycle index of the alternating group in its natural action as a permutation group is $ Z(A_n) = \sum_{j_1+2 j_2 + 3 j_3 + \cdots + n j_n = n} \frac{1 + (-1)^{j_2+j_4+\cdots}}{\prod_{k=1}^n k^{j_k} j_k!} \prod_{k=1}^n a_k^{j_k}. $ The numerator is 2 for the even permutations, and 0 for the odd permutations. The 2 is needed because $\frac{1}{|A_n|}=\frac{2}{n!}$. Symmetric group "S""n". The cycle index of the symmetric group "S""n" in its natural action is given by the formula: $ Z(S_n) = \sum_{j_1+2 j_2 + 3 j_3 + \cdots + n j_n = n} \frac{1}{\prod_{k=1}^n k^{j_k} j_k!} \prod_{k=1}^n a_k^{j_k}$ that can be also stated in terms of complete Bell polynomials: $ Z(S_n) = \frac{B_n(0!\,a_1, 1!\,a_2, \dots, (n-1)!\,a_n)}{n!}.$ This formula is obtained by counting how many times a given permutation shape can occur. There are three steps: first, partition the set of "n" labels into subsets, where there are $j_k$ subsets of size "k". Second, every such subset of size "k" can be arranged into $k!/k = (k-1)!$ distinct cycles of length "k". Third, since we do not distinguish between cycles of the same size, the $j_k$ cycles of length "k" may be permuted among themselves by $S_{j_k}$, so we divide by $j_k!$. This yields $ \frac{n!}{\prod_{k=1}^n k^{j_k} j_k!}$ permutations of each shape. The formula may be further simplified if we sum up cycle indices over every $n \geq 0$ (with $Z(S_0) = 1$), while using an extra variable $y$ to keep track of the total size of the cycles: $ \sum\limits_{n \geq 0} y^n Z(S_n) = \sum\limits_{n \geq 0} \sum_{j_1+2 j_2 + 3 j_3 + \cdots + n j_n = n} \prod_{k=1}^n \frac{a_k^{j_k} y^{k j_k}}{k^{j_k} j_k!} = \prod\limits_{k \geq 1} \sum\limits_{j_k \geq 0} \frac{(a_k y^k)^{j_k}}{k^{j_k} j_k!} = \prod\limits_{k \geq 1} \exp\left(\frac{a_k y^k}{k}\right),$ thus giving a simplified form for the cycle index of $S_n$: $\begin{align}Z(S_n) &=[y^n] \prod\limits_{k \geq 1} \exp\left(\frac{a_k y^k}{k}\right) \\ &= [y^n] \exp\left(\sum\limits_{k \geq 1}\frac{a_k y^k}{k}\right). \end{align}$ There is a useful recursive formula for the cycle index of the symmetric group. Set $Z(S_0) = 1$ and consider the size "l" of the cycle that contains "n", where $1 \le l \le n$. There are ${n-1 \choose l-1}$ ways to choose the remaining $l-1$ elements of the cycle and every such choice generates $\frac{l!}{l}=(l-1)!$ different cycles. This yields the recurrence $ Z(S_n) = \frac{1}{n!} \sum_{l=1}^n {n-1 \choose l-1} \; (l-1)! \; a_l \; (n-l)! \; Z(S_{n-l}) $ or $ Z(S_n) = \frac{1}{n} \sum_{l=1}^n a_l \; Z(S_{n-l}).$ (These formulas are exercised in the short computational sketch at the end of this article.) Applications. Throughout this section we will modify the notation for cycle indices slightly by explicitly including the names of the variables. Thus, for the permutation group "G" we will now write: $Z(G) = Z(G; a_1, a_2, \ldots, a_n).$ Let "G" be a group acting on the set "X". "G" also induces an action on the "k"-subsets of "X" and on the "k"-tuples of distinct elements of "X" (see #Example for the case "k" = 2), for 1 ≤ "k" ≤ "n". Let "f""k" and "F""k" denote the number of orbits of "G" in these actions respectively. By convention we set "f"0 = "F"0 = 1. 
We have: a) The ordinary generating function for "f""k" is given by: $\sum_{k=0}^n f_k t^k = Z(G; 1+t, 1+t^2, \ldots, 1+t^n),$ and b) The exponential generating function for "F""k" is given by: $\sum_{k=0}^n F_k t^k/k! = Z(G; 1+t, 1, 1, \ldots, 1).$ Let "G" be a group acting on the set "X" and "h" a function from "X" to "Y". For any "g" in "G", "h"("x""g") is also a function from "X" to "Y". Thus, "G" induces an action on the set "Y""X" of all functions from "X" to "Y". The number of orbits of this action is Z("G"; "b", "b", ...,"b") where "b" = |"Y"|. This result follows from the orbit counting lemma (also known as the Cauchy–Frobenius lemma, or "the lemma that is not Burnside's", though traditionally called Burnside's lemma), and the weighted version of the result is Pólya's enumeration theorem. The cycle index is a polynomial in several variables and the above results show that certain evaluations of this polynomial give combinatorially significant results. As polynomials they may also be formally added, subtracted, differentiated and integrated. The area of symbolic combinatorics provides combinatorial interpretations of the results of these formal operations. The question of what the cycle structure of a random permutation looks like is an important question in the analysis of algorithms. An overview of the most important results may be found at random permutation statistics. Notes.
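As a computational footnote (not part of the original article): the following minimal Python sketch recomputes a cycle index directly from a list of permutations, implements the recurrence for $Z(S_n)$ given above, and performs the substitution $a_k = b$ that counts orbits of colorings. All function names here are our own, and monomials are represented simply as sorted tuples of cycle lengths.

```python
from collections import Counter
from fractions import Fraction

def cycle_type(perm):
    """Sorted cycle lengths of a permutation given as a dict {point: image}."""
    seen, lengths = set(), []
    for start in perm:
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

def cycle_index(perms):
    """Cycle index of a permutation group, as a dict {cycle type: coefficient}."""
    terms = Counter(cycle_type(p) for p in perms)
    return {t: Fraction(c, len(perms)) for t, c in terms.items()}

def z_symmetric(n):
    """Z(S_n) via the recurrence Z(S_n) = (1/n) * sum_l a_l * Z(S_{n-l})."""
    if n == 0:
        return {(): Fraction(1)}
    out = Counter()
    for l in range(1, n + 1):
        for mono, coeff in z_symmetric(n - l).items():
            out[tuple(sorted(mono + (l,)))] += coeff / n
    return dict(out)

def count_colorings(Z, b):
    """Substitute a_k = b for every k: the number of orbits of b-colorings."""
    return sum(coeff * b ** len(ctype) for ctype, coeff in Z.items())

# C4 in its natural action on the vertices of the square
rot = {1: 2, 2: 3, 3: 4, 4: 1}
group, g = [], {x: x for x in rot}
for _ in range(4):
    group.append(dict(g))
    g = {x: rot[g[x]] for x in g}

print(cycle_index(group))                      # Z(C4) = (a1^4 + a2^2 + 2 a4)/4
print(z_symmetric(3))                          # Z(S3) = (a1^3 + 3 a1 a2 + 2 a3)/6
print(count_colorings(cycle_index(group), 2))  # 6 necklaces on 4 beads, 2 colors
```

The final evaluation illustrates the orbit counting lemma: with $b = 2$ colors, $Z(C_4; 2, 2, 2, 2) = (16 + 4 + 4)/4 = 6$ distinct 2-colorings of the square's corners up to rotation.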
946310
abstract_algebra
In mathematics, especially in the area of algebra known as group theory, the Fitting subgroup "F" of a finite group "G", named after Hans Fitting, is the unique largest normal nilpotent subgroup of "G". Intuitively, it represents the smallest subgroup which "controls" the structure of "G" when "G" is solvable. When "G" is not solvable, a similar role is played by the generalized Fitting subgroup "F*", which is generated by the Fitting subgroup and the components of "G". For an arbitrary (not necessarily finite) group "G", the Fitting subgroup is defined to be the subgroup generated by the nilpotent normal subgroups of "G". For infinite groups, the Fitting subgroup is not always nilpotent. The remainder of this article deals exclusively with finite groups. The Fitting subgroup. The nilpotency of the Fitting subgroup of a finite group is guaranteed by Fitting's theorem which says that the product of a finite collection of normal nilpotent subgroups of "G" is again a normal nilpotent subgroup. It may also be explicitly constructed as the product of the p-cores of "G" over all of the primes "p" dividing the order of "G". If "G" is a finite non-trivial solvable group then the Fitting subgroup is always non-trivial, i.e. if "G"≠1 is finite solvable, then "F"("G")≠1. Similarly the Fitting subgroup of "G"/"F"("G") will be nontrivial if "G" is not itself nilpotent, giving rise to the concept of Fitting length. Since the Fitting subgroup of a finite solvable group contains its own centralizer, this gives a method of understanding finite solvable groups as extensions of nilpotent groups by faithful automorphism groups of nilpotent groups. In a nilpotent group, every chief factor is centralized by every element. Relaxing the condition somewhat, and taking the subgroup of elements of a general finite group which centralize every chief factor, one simply gets the Fitting subgroup again : $\operatorname{Fit}(G) = \bigcap \{ C_G(H/K) : H/K \text{ a chief factor of } G \}.$ The generalization to "p"-nilpotent groups is similar. The generalized Fitting subgroup. A component of a group is a subnormal quasisimple subgroup. (A group is quasisimple if it is a perfect central extension of a simple group.) The layer "E"("G") or "L"("G") of a group is the subgroup generated by all components. Any two components of a group commute, so the layer is a perfect central extension of a product of simple groups, and is the largest normal subgroup of "G" with this structure. The generalized Fitting subgroup "F"*("G") is the subgroup generated by the layer and the Fitting subgroup. The layer commutes with the Fitting subgroup, so the generalized Fitting subgroup is a central extension of a product of "p"-groups and simple groups. The layer is also the maximal normal semisimple subgroup, where a group is called semisimple if it is a perfect central extension of a product of simple groups. This definition of the generalized Fitting subgroup can be motivated by some of its intended uses. Consider the problem of trying to identify a normal subgroup "H" of "G" that contains its own centralizer and the Fitting group. If "C" is the centralizer of "H" we want to prove that "C" is contained in "H". If not, pick a minimal characteristic subgroup "M/Z(H)" of "C/Z(H)", where "Z(H)" is the center of "H", which is the same as the intersection of "C" and "H". Then "M"/"Z"("H") is a product of simple or cyclic groups as it is characteristically simple. If "M"/"Z"("H") is a product of cyclic groups then "M" must be in the Fitting subgroup. 
If "M"/"Z"("H") is a product of non-abelian simple groups then the derived subgroup of "M" is a normal semisimple subgroup mapping onto "M"/"Z"("H"). So if "H" contains the Fitting subgroup and all normal semisimple subgroups, then "M"/"Z"("H") must be trivial, so "H" contains its own centralizer. The generalized Fitting subgroup is the smallest subgroup that contains the Fitting subgroup and all normal semisimple subgroups. The generalized Fitting subgroup can also be viewed as a generalized centralizer of chief factors. A nonabelian semisimple group cannot centralize itself, but it does act on itself as inner automorphisms. A group is said to be quasi-nilpotent if every element acts as an inner automorphism on every chief factor. The generalized Fitting subgroup is the unique largest subnormal quasi-nilpotent subgroup, and is equal to the set of all elements which act as inner automorphisms on every chief factor of the whole group : $\operatorname{Fit}^*(G) = \bigcap\{ HC_G(H/K) : H/K \text{ a chief factor of } G \}.$ Here an element "g" is in "H"C"G"("H"/"K") if and only if there is some "h" in "H" such that for every "x" in "H", "x""g" ≡ "x""h" mod "K". Properties. If "G" is a finite solvable group, then the Fitting subgroup contains its own centralizer. The centralizer of the Fitting subgroup is the center of the Fitting subgroup. In this case, the generalized Fitting subgroup is equal to the Fitting subgroup. More generally, if "G" is a finite group, then the generalized Fitting subgroup contains its own centralizer. This means that in some sense the generalized Fitting subgroup controls "G", because "G" modulo the centralizer of "F"*("G") is contained in the automorphism group of "F"*("G"), and the centralizer of "F"*("G") is contained in "F"*("G"). In particular there are only a finite number of groups with given generalized Fitting subgroup. Applications. The normalizers of nontrivial "p"-subgroups of a finite group are called the "p"-local subgroups and exert a great deal of control over the structure of the group (allowing what is called local analysis). A finite group is said to be of characteristic "p" type if "F"*("G") is a "p"-group for every "p"-local subgroup, because any group of Lie type defined over a field of characteristic "p" has this property. In the classification of finite simple groups, this allows one to guess over which field a simple group should be defined. Note that a few groups are of characteristic "p" type for more than one "p". If a simple group is not of Lie type over a field of given characteristic "p", then the "p"-local subgroups usually have components in the generalized Fitting subgroup, though there are many exceptions for groups that have small rank, are defined over small fields, or are sporadic. This is used to classify the finite simple groups, because if a "p"-local subgroup has a known component, it is often possible to identify the whole group . The analysis of finite simple groups by means of the structure and embedding of the generalized Fitting subgroups of their maximal subgroups was originated by Helmut Bender and has come to be known as Bender's method. It is especially effective in the exceptional cases where components or signalizer functors are not applicable.
413722
abstract_algebra
Assumption in the kinetic theory of gases In the kinetic theory of gases in physics, the molecular chaos hypothesis (also called Stosszahlansatz in the writings of Paul Ehrenfest) is the assumption that the velocities of colliding particles are uncorrelated, and independent of position. This means the probability that a pair of particles with given velocities will collide can be calculated by considering each particle separately, ignoring any correlation between the probability of finding one particle with velocity "v" and the probability of finding another particle with velocity "v'" in a small region "δr". James Clerk Maxwell introduced this approximation in 1867, although its origins can be traced back to his first work on the kinetic theory in 1860. The assumption of molecular chaos is the key ingredient that allows proceeding from the BBGKY hierarchy to Boltzmann's equation, by reducing the 2-particle distribution function showing up in the collision term to a product of 1-particle distributions. This in turn leads to Boltzmann's H-theorem of 1872, which attempted to use kinetic theory to show that the entropy of a gas prepared in a state of less than complete disorder must inevitably increase, as the gas molecules are allowed to collide. This drew the objection from Loschmidt that it should not be possible to deduce an irreversible process from time-symmetric dynamics and a time-symmetric formalism: something must be wrong (Loschmidt's paradox). The resolution (1895) of this paradox is that the velocities of two particles "after a collision" are no longer truly uncorrelated. By asserting that it was acceptable to ignore these correlations in the population at times after the initial time, Boltzmann had introduced an element of time asymmetry through the formalism of his calculation. Though the "Stosszahlansatz" is usually understood as a physically grounded hypothesis, it was recently highlighted that it could also be interpreted as a heuristic hypothesis. This interpretation allows using the principle of maximum entropy in order to generalize the "ansatz" to higher-order distribution functions.
698358
abstract_algebra
In algebra, a group ring is a free module and at the same time a ring, constructed in a natural way from any given ring and any given group. As a free module, its ring of scalars is the given ring, and its basis is the set of elements of the given group. As a ring, its addition law is that of the free module and its multiplication extends "by linearity" the given group law on the basis. Less formally, a group ring is a generalization of a given group, by attaching to each element of the group a "weighting factor" from a given ring. If the ring is commutative then the group ring is also referred to as a group algebra, for it is indeed an algebra over the given ring. A group algebra over a field has a further structure of a Hopf algebra; in this case, it is thus called a group Hopf algebra. The apparatus of group rings is especially useful in the theory of group representations. Definition. Let $G$ be a group, written multiplicatively, and let $R$ be a ring. The group ring of $G$ over $R$, which we will denote by $R[G]$, or simply $RG$, is the set of mappings $f : G \to R$ of finite support ($f(g)$ is nonzero for only finitely many elements $g$), where the module scalar product $\alpha f $ of a scalar $\alpha$ in $R$ and a mapping $f$ is defined as the mapping $x \mapsto \alpha \cdot f(x)$, and the module group sum of two mappings $f$ and $g$ is defined as the mapping $x \mapsto f(x) + g(x)$. To turn the additive group $R[G]$ into a ring, we define the product of $f$ and $g$ to be the mapping $x\mapsto\sum_{uv=x}f(u)g(v)=\sum_{u\in G}f(u)g(u^{-1}x).$ The summation is legitimate because $f$ and $g$ are of finite support, and the ring axioms are readily verified. Some variations in the notation and terminology are in use. In particular, the mappings such as $f : G \to R$ are sometimes written as what are called "formal linear combinations of elements of $G$, with coefficients in $R$": $\sum_{g\in G}f(g) g,$ or simply $\sum_{g\in G}f_g g,$ where this doesn't cause confusion. Note that if the ring $R$ is in fact a field $K$, then the module structure of the group ring $RG$ is in fact a vector space over $K$. Examples. 1. Let "G" = "C"3, the cyclic group of order 3, with generator $a$ and identity element 1"G". An element "r" of C["G"] can be written as $r = z_0 1_G + z_1 a + z_2 a^2\,$ where "z"0, "z"1 and "z"2 are in C, the complex numbers. This is the same thing as a polynomial ring in variable $a$ such that $a^3=a^0=1$, i.e., C["G"] is isomorphic to the ring C[$a$]/$(a^3-1)$. Writing a different element "s" as $s=w_0 1_G +w_1 a +w_2 a^2$, their sum is $r + s = (z_0+w_0) 1_G + (z_1+w_1) a + (z_2+w_2) a^2\,$ and their product is $rs = (z_0w_0 + z_1w_2 + z_2w_1) 1_G +(z_0w_1 + z_1w_0 + z_2w_2)a +(z_0w_2 + z_2w_0 + z_1w_1)a^2.$ Notice that the identity element 1"G" of "G" induces a canonical embedding of the coefficient ring (in this case C) into C["G"]; however strictly speaking the multiplicative identity element of C["G"] is 1⋅1"G" where the first "1" comes from C and the second from "G". The additive identity element is zero. When "G" is a non-commutative group, one must be careful to preserve the order of the group elements (and not accidentally commute them) when multiplying the terms. 2. A different example is that of the Laurent polynomials over a ring "R": these are nothing more or less than the group ring of the infinite cyclic group Z over "R". 3. Let "Q" be the quaternion group with elements $\{e, \bar{e}, i, \bar{i}, j, \bar{j}, k, \bar{k}\}$. 
Consider the group ring R"Q", where R is the set of real numbers. An arbitrary element of this group ring is of the form $x_1 \cdot e + x_2 \cdot \bar{e} + x_3 \cdot i + x_4 \cdot \bar{i} + x_5 \cdot j + x_6 \cdot \bar{j} + x_7 \cdot k + x_8 \cdot \bar{k}$ where $x_i $ is a real number. Multiplication, as in any other group ring, is defined based on the group operation. For example, $\begin{align} \big(3 \cdot e + \sqrt{2} \cdot i \big)\left(\frac{1}{2} \cdot \bar{j}\right) &= (3 \cdot e)\left(\frac{1}{2} \cdot \bar{j}\right) + (\sqrt{2} \cdot i)\left(\frac{1}{2} \cdot \bar{j}\right)\\ &= \frac{3}{2} \cdot \big((e)(\bar{j})\big) + \frac{\sqrt{2}}{2} \cdot \big((i)(\bar{j})\big)\\ \end{align}.$ Note that RQ" is not the same as the skew field of quaternions over R. This is because the skew field of quaternions satisfies additional relations in the ring, such as $-1 \cdot i = -i$, whereas in the group ring RQ", $-1\cdot i$ is not equal to $1\cdot \bar{i}$. To be more specific, the group ring R"Q" has dimension 8 as a real vector space, while the skew field of quaternions has dimension 4 as a real vector space. 4. Another example of a non-abelian group ring is $\mathbb{Z}[\mathbb{S}_3]$ where $\mathbb{S}_3$ is the symmetric group on 3 letters. This is not an integral domain since we have $[1 - (12)]*[1+(12)] = 1 -(12)+(12) -(12)(12) = 1 - 1 = 0$ where the element $(12)\in \mathbb{S}_3$ is a transposition-a permutation which only swaps 1 and 2. Therefore the group ring need not be an integral domain even when the underlying ring is an integral domain. Some basic properties. Using 1 to denote the multiplicative identity of the ring "R", and denoting the group unit by 1"G", the ring "R"["G"] contains a subring isomorphic to "R", and its group of invertible elements contains a subgroup isomorphic to "G". For considering the indicator function of {1"G"}, which is the vector "f" defined by $f(g)= 1\cdot 1_G + \sum_{g\not= 1_G}0 \cdot g= \mathbf{1}_{\{1_G\}}(g)=\begin{cases} 1 & g = 1_G \\ 0 & g \ne 1_G \end{cases},$ the set of all scalar multiples of "f" is a subring of "R"["G"] isomorphic to "R". And if we map each element "s" of "G" to the indicator function of {"s"}, which is the vector "f" defined by $f(g)= 1\cdot s + \sum_{g\not= s}0 \cdot g= \mathbf{1}_{\{s\}}(g)=\begin{cases} 1 & g = s \\ 0 & g \ne s \end{cases}$ the resulting mapping is an injective group homomorphism (with respect to multiplication, not addition, in "R"["G"]). If "R" and "G" are both commutative (i.e., "R" is commutative and "G" is an abelian group), "R"["G"] is commutative. If "H" is a subgroup of "G", then "R"["H"] is a subring of "R"["G"]. Similarly, if "S" is a subring of "R", "S"["G"] is a subring of "R"["G"]. If "G" is a finite group of order greater than 1, then "R"["G"] always has zero divisors. For example, consider an element "g" of "G" of order |"g"| = m > 1. Then 1 - "g" is a zero divisor: $ (1 - g)(1 + g+\cdots+g^{m-1}) = 1 - g^m = 1 - 1 =0. $ For example, consider the group ring Z["S"3] and the element of order 3 "g"=(123). In this case, $ (1 - (123))(1 + (123)+ (132)) = 1 - (123)^3 = 1 - 1 =0. $ A related result: If the group ring $ K[G] $ is prime, then "G" has no nonidentity finite normal subgroup (in particular, "G" must be infinite). Proof: Considering the contrapositive, suppose $ H $ is a nonidentity finite normal subgroup of $ G $. Take $ a = \sum_{h \in H} h $. Since $ hH = H $ for any $ h \in H $, we know $ ha = a $, therefore $ a^2 = \sum_{h \in H} h a = |H|a $. 
Taking $ b = |H|\,1 - a $, we have $ ab = 0 $. By normality of $ H $, $ a $ commutes with a basis of $ K[G] $, and therefore $ aK[G]b=K[G]ab=0 $. And we see that $ a,b $ are not zero, which shows $ K[G] $ is not prime. This proves the contrapositive, and hence the original statement. Group algebra over a finite group. Group algebras occur naturally in the theory of group representations of finite groups. The group algebra "K"["G"] over a field "K" is essentially the group ring, with the field "K" taking the place of the ring. As a set and vector space, it is the free vector space on "G" over the field "K". That is, for "x" in "K"["G"], $x=\sum_{g\in G} a_g g.$ The algebra structure on the vector space is defined using the multiplication in the group: $g \cdot h = gh,$ where on the left, "g" and "h" indicate elements of the group algebra, while the multiplication on the right is the group operation (denoted by juxtaposition). Because the above multiplication can be confusing, one can also write the basis vectors of "K"["G"] as "e""g" (instead of "g"), in which case the multiplication is written as: $e_g \cdot e_h = e_{gh}.$ Interpretation as functions. Thinking of the free vector space as "K"-valued functions on "G", the algebra multiplication is convolution of functions. While the group algebra of a "finite" group can be identified with the space of functions on the group, for an infinite group these are different. The group algebra, consisting of "finite" sums, corresponds to functions on the group that vanish for cofinitely many points; topologically (using the discrete topology), these correspond to functions with compact support. However, the group algebra "K"["G"] and the space of functions "K""G" := Hom("G", "K") are dual: given an element of the group algebra $x = \sum_{g\in G} a_g g$ and a function on the group "f" : "G" → "K" these pair to give an element of "K" via $(x,f) = \sum_{g\in G} a_g f(g),$ which is a well-defined sum because it is finite. Representations of a group algebra. Taking "K"["G"] to be an abstract algebra, one may ask for representations of the algebra acting on a "K"-vector space "V" of dimension "d". Such a representation $\tilde{\rho}:K[G]\rightarrow \mbox{End} (V)$ is an algebra homomorphism from the group algebra to the algebra of endomorphisms of "V", which is isomorphic to the ring of "d × d" matrices: $\mathrm{End}(V)\cong M_{d}(K) $. Equivalently, this is a left "K"["G"]-module over the abelian group "V". Correspondingly, a group representation $\rho:G\rightarrow \mbox{Aut}(V),$ is a group homomorphism from "G" to the group of linear automorphisms of "V", which is isomorphic to the general linear group of invertible matrices: $\mathrm{Aut}(V)\cong \mathrm{GL}_d(K) $. Any such representation induces an algebra representation $\tilde{\rho}:K[G]\rightarrow \mbox{End}(V),$ simply by letting $\tilde{\rho}(e_g) = \rho(g)$ and extending linearly. Thus, representations of the group correspond exactly to representations of the algebra, and the two theories are essentially equivalent. Regular representation. The group algebra is an algebra over itself; under the correspondence of representations over "R" and "R"["G"] modules, it is the regular representation of the group. Written as a representation, it is the representation "g" ↦ "ρ""g" with the action given by $\rho(g)\cdot e_h = e_{gh}$, or, for an element $r = \sum_{h\in G} k_h e_h$ of the group algebra, $\rho(g)\cdot r = \sum_{h\in G} k_h \rho(g)\cdot e_h = \sum_{h\in G} k_h e_{gh}. $ Semisimple decomposition. 
The dimension of the vector space "K"["G"] is just equal to the number of elements in the group. The field "K" is commonly taken to be the complex numbers C or the reals R, so that one discusses the group algebras C["G"] or R["G"]. The group algebra C["G"] of a finite group over the complex numbers is a semisimple ring. This result, Maschke's theorem, allows us to understand C["G"] as a finite product of matrix rings with entries in C. Indeed, if we list the complex irreducible representations of "G" as "Vk" for "k" = 1, . . . , "m", these correspond to group homomorphisms $\rho_k: G\to \mathrm{Aut}(V_k)$ and hence to algebra homomorphisms $\tilde\rho_k: \mathbb{C}[G]\to \mathrm{End}(V_k)$. Assembling these mappings gives an algebra isomorphism $\tilde\rho : \mathbb{C}[G] \to \bigoplus_{k=1}^m \mathrm{End}(V_k) \cong \bigoplus_{k=1}^m M_{d_k}(\mathbb{C}), $ where "dk" is the dimension of "Vk". The subalgebra of C["G"] corresponding to End("Vk") is the two-sided ideal generated by the idempotent $\epsilon_k = \frac{d_k}{|G|}\sum_{g\in G}\chi_k(g^{-1})\,g, $ where $\chi_k(g)=\mathrm{tr}\,\rho_k(g) $ is the character of "Vk". These form a complete system of orthogonal idempotents, so that $\epsilon_k^2 =\epsilon_k $, $\epsilon_j \epsilon_k = 0 $ for "j ≠ k", and $1 = \epsilon_1+\cdots+\epsilon_m $. The isomorphism $\tilde\rho$ is closely related to Fourier transform on finite groups. For a more general field "K," whenever the characteristic of "K" does not divide the order of the group "G", then "K"["G"] is semisimple. When "G" is a finite abelian group, the group ring "K"["G"] is commutative, and its structure is easy to express in terms of roots of unity. When "K" is a field of characteristic "p" which divides the order of "G", the group ring is "not" semisimple: it has a non-zero Jacobson radical, and this gives the corresponding subject of modular representation theory its own, deeper character. Center of a group algebra. The center of the group algebra is the set of elements that commute with all elements of the group algebra: $\mathrm{Z}(K[G]) := \left\{ z \in K[G] : \forall r \in K[G], zr = rz \right\}.$ The center is equal to the set of class functions, that is the set of elements that are constant on each conjugacy class $\mathrm{Z}(K[G]) = \left\{ \sum_{g \in G} a_g g : \forall g,h \in G, a_g = a_{h^{-1}gh}\right\}.$ If "K" = C, the set of irreducible characters of "G" forms an orthonormal basis of Z("K"["G"]) with respect to the inner product $\left \langle \sum_{g \in G} a_g g, \sum_{g \in G} b_g g \right \rangle = \frac{1}{|G|} \sum_{g \in G} \bar{a}_g b_g.$ Group rings over an infinite group. Much less is known in the case where "G" is countably infinite, or uncountable, and this is an area of active research. The case where "R" is the field of complex numbers is probably the one best studied. In this case, Irving Kaplansky proved that if "a" and "b" are elements of C["G"] with "ab" = 1, then "ba" = 1. Whether this is true if "R" is a field of positive characteristic remains unknown. A long-standing conjecture of Kaplansky (~1940) says that if "G" is a torsion-free group, and "K" is a field, then the group ring "K"["G"] has no non-trivial zero divisors. This conjecture is equivalent to "K"["G"] having no non-trivial nilpotents under the same hypotheses for "K" and "G". In fact, the condition that "K" is a field can be relaxed to any ring that can be embedded into an integral domain. 
The conjecture remains open in full generality; however, some special cases of torsion-free groups have been shown to satisfy the zero divisor conjecture. The case where "G" is a topological group is discussed in greater detail in the article Group algebra of a locally compact group. Category theory. Adjoint. Categorically, the group ring construction is left adjoint to "group of units"; the following functors are an adjoint pair: $R[-]\colon \mathbf{Grp} \to R\mathbf{\text{-}Alg}$ $(-)^\times\colon R\mathbf{\text{-}Alg} \to \mathbf{Grp}$ where $R[-]$ takes a group to its group ring over "R", and $(-)^\times$ takes an "R"-algebra to its group of units. When "R" = Z, this gives an adjunction between the category of groups and the category of rings, and the unit of the adjunction takes a group "G" to a group that contains trivial units: "G" × {±1} = {±"g"}. In general, group rings contain nontrivial units. If "G" contains elements "a" and "b" such that $a^n=1$ and "b" does not normalize $\langle a\rangle$ then the square of $x=(a-1)b \left (1+a+a^2+\cdots+a^{n-1} \right )$ is zero, hence $(1+x)(1-x)=1$. The element 1 + "x" is a unit of infinite order. Universal property. The above adjunction expresses a universal property of group rings. Let R be a (commutative) ring, let G be a group, and let S be an R-algebra. For any group homomorphism $f:G\to S^\times$, there exists a unique R-algebra homomorphism $\overline{f}:R[G]\to S$ such that $\overline{f}^\times \circ i=f$ where i is the inclusion $\begin{align} i:G &\longrightarrow R[G] \\ g &\longmapsto 1_Rg \end{align}$ In other words, $\overline{f}$ is the unique algebra homomorphism through which $f$ factors. Any other ring satisfying this property is canonically isomorphic to the group ring. Hopf algebra. The group algebra "K"["G"] has a natural structure of a Hopf algebra. The comultiplication is defined by $\Delta(g)=g\otimes g $, extended linearly, and the antipode is $S(g)=g^{-1}$, again extended linearly. Generalizations. The group algebra generalizes to the monoid ring and thence to the category algebra, of which another example is the incidence algebra. Filtration. If a group has a length function – for example, if there is a choice of generators and one takes the word metric, as in Coxeter groups – then the group ring becomes a filtered algebra. Notes.
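As a computational footnote (not part of the original article): the following minimal Python sketch carries out the zero-divisor computations of the Examples section in Z[S3], representing elements as dictionaries from permutations (in zero-indexed one-line form) to integer coefficients. All names here are our own illustrations.

```python
from collections import Counter

def compose(p, q):
    """(p q)(i) = p(q(i)) for permutations in zero-indexed one-line form."""
    return tuple(p[q[i]] for i in range(len(q)))

def multiply(f, g):
    """Product in Z[S3]; elements are dicts {permutation tuple: integer coeff}."""
    out = Counter()
    for u, cu in f.items():
        for v, cv in g.items():
            out[compose(u, v)] += cu * cv
    return {k: c for k, c in out.items() if c != 0}

e = (0, 1, 2)      # identity
t = (1, 0, 2)      # the transposition (1 2)
g3 = (1, 2, 0)     # the 3-cycle (1 2 3)

# (1 - (12))(1 + (12)) = 0 and (1 - g)(1 + g + g^2) = 0 for g = (123)
print(multiply({e: 1, t: -1}, {e: 1, t: 1}))                        # {} i.e. 0
print(multiply({e: 1, g3: -1}, {e: 1, g3: 1, compose(g3, g3): 1}))  # {} i.e. 0
```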
154098
abstract_algebra
Mathematical version of an order change In mathematics, a permutation of a set is, loosely speaking, an arrangement of its members into a sequence or linear order, or if the set is already ordered, a rearrangement of its elements. The word "permutation" also refers to the act or process of changing the linear order of an ordered set. For example, there are six permutations (orderings) of the set {1,2,3}: written as tuples, they are (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1). Anagrams of a word whose letters are all different are also permutations: the letters are already ordered in the original word, and the anagram reorders them. The study of permutations of finite sets is an important topic in combinatorics and group theory. Permutations are used in almost every branch of mathematics and in many other fields of science. In computer science, they are used for analyzing sorting algorithms; in quantum physics, for describing states of particles; and in biology, for describing RNA sequences. The number of permutations of "n" distinct objects is "n" factorial, usually written as "n"!, which means the product of all positive integers less than or equal to "n". Formally, a permutation of a set "S" is defined as a bijection from "S" to itself. That is, it is a function from "S" to "S" for which every element occurs exactly once as an image value. Such a function $f:S\to S$ is equivalent to the rearrangement of the elements of "S" in which each element "i" is replaced by the corresponding $f(i)$. For example, the permutation (3,1,2) is described by the function $f$ defined as $f(1) = 3, \quad f(2) = 1, \quad f(3) = 2$. The collection of all permutations of a set forms a group called the symmetric group of the set. The group operation is the composition of functions (performing one rearrangement after the other), which results in another function (rearrangement). The properties of permutations do not depend on the nature of the elements being permuted, only on their number, so one often considers the standard set $S=\{1, 2, \ldots, n\}$. In elementary combinatorics, the "k"-permutations, or partial permutations, are the ordered arrangements of "k" distinct elements selected from a set. When "k" is equal to the size of the set, these are the permutations in the previous sense. History. Permutations called hexagrams were used in China in the I Ching (Pinyin: Yi Jing) as early as 1000 BC. In Greece, Plutarch wrote that Xenocrates of Chalcedon (396–314 BC) discovered the number of different syllables possible in the Greek language. This would have been the first attempt on record to solve a difficult problem in permutations and combinations. Al-Khalil (717–786), an Arab mathematician and cryptographer, wrote the "Book of Cryptographic Messages". It contains the first use of permutations and combinations, to list all possible Arabic words with and without vowels. The rule to determine the number of permutations of "n" objects was known in Indian culture around 1150 AD. The "Lilavati" by the Indian mathematician Bhaskara II contains a passage that translates as follows: The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures. In 1677, Fabian Stedman described factorials when explaining the number of permutations of bells in change ringing. 
Starting from two bells: "first, "two" must be admitted to be varied in two ways", which he illustrates by showing 1 2 and 2 1. He then explains that with three bells there are "three times two figures to be produced out of three" which again is illustrated. His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting away argument showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the "casting away" method and tabulates the resulting 120 combinations. At this point he gives up and remarks: Now the nature of these methods is such, that the changes on one number comprehends the changes on all lesser numbers, ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body; Stedman widens the consideration of permutations; he goes on to consider the number of permutations of the letters of the alphabet and of horses from a stable of 20. A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics, there are many similar situations in which understanding a problem requires studying certain permutations related to it. Permutations played an important role in the cryptanalysis of the Enigma machine, a cipher device used by Nazi Germany during World War II. In particular, one important property of permutations, namely, that two permutations are conjugate exactly when they have the same cycle type, was used by cryptologist Marian Rejewski to break the German Enigma cipher at the turn of 1932–1933. Definition. In mathematics texts it is customary to denote permutations using lowercase Greek letters. Commonly, either $\alpha,\beta,\gamma$ or $\sigma, \tau,\rho,\pi$ are used. A permutation can be defined as a bijection (an invertible mapping, a one-to-one and onto function) from a set "S" to itself: $\sigma : S\ \stackrel{\sim}{\longrightarrow}\ S.$ The identity permutation is defined by $\sigma(x) = x $ for all elements $x\in S $, and can be denoted by the number $1$, by $\text{id}= \text{id}_S $, or by a single 1-cycle (x). The set of all permutations of a set with "n" elements forms the symmetric group $S_n$, where the group operation is composition of functions. Thus for two permutations $\sigma$ and $\tau$ in the group $S_n$, their product $\pi = \sigma\tau$ is defined by: $\pi(i)=\sigma(\tau(i)).$ Composition is usually written without a dot or other sign. In general, composition of two permutations is not commutative: $\tau\sigma \neq \sigma\tau.$ As a bijection from a set to itself, a permutation is a function that "performs" a rearrangement of a set, termed an "active permutation" or "substitution". An older viewpoint sees a permutation as an ordered arrangement or list of all the elements of "S", called a "passive permutation" (see below). 
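The product and inverse just defined are straightforward to compute. A minimal Python sketch (our own illustration, not from the article), storing permutations as 1-indexed tuples in one-line form:

```python
def compose(sigma, tau):
    """(sigma tau)(i) = sigma(tau(i)); permutations as 1-indexed one-line tuples."""
    return tuple(sigma[tau[i - 1] - 1] for i in range(1, len(tau) + 1))

def inverse(sigma):
    """The inverse permutation: inverse(v) = i whenever sigma(i) = v."""
    inv = [0] * len(sigma)
    for i, v in enumerate(sigma, start=1):
        inv[v - 1] = i
    return tuple(inv)

sigma, tau = (2, 1, 3), (2, 3, 1)
print(compose(sigma, tau))            # (1, 3, 2): tau is applied first
print(compose(tau, sigma))            # (3, 2, 1): composition is not commutative
print(inverse((2, 6, 5, 4, 3, 1)))    # (6, 1, 5, 4, 3, 2)
```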
A permutation $\sigma$ can be decomposed into one or more disjoint "cycles" which are the orbits of the cyclic group $\langle\sigma\rangle = \{1, \sigma, \sigma^2,\ldots\} $ acting on the set "S". A cycle is found by repeatedly applying the permutation to an element: $x, \sigma(x),\sigma(\sigma(x)),\ldots, \sigma^{k-1}(x)$, where we assume $\sigma^k(x)=x$. A cycle consisting of "k" elements is called a "k"-cycle. (See below.) A fixed point of a permutation $\sigma$ is an element "x" which is taken to itself, that is $\sigma(x)=x $, forming a 1-cycle $(\,x\,)$. A permutation with no fixed points is called a derangement. A permutation exchanging two elements (a single 2-cycle) and leaving the others fixed is called a transposition. Notations. Several notations are widely used to represent permutations conveniently. "Cycle notation" is a popular choice, as it is compact and shows the permutation's structure clearly. This article will use cycle notation unless otherwise specified. Two-line notation. Cauchy's "two-line notation" lists the elements of "S" in the first row, and the image of each element below it in the second row. For example, the permutation of "S" = {1, 2, 3, 4, 5, 6} given by the function $\sigma(1) = 2, \ \ \sigma(2) = 6, \ \ \sigma(3) = 5, \ \ \sigma(4) = 4, \ \ \sigma(5) = 3, \ \ \sigma(6) = 1 $ can be written as $\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 6 & 5 & 4 & 3 & 1 \end{pmatrix}.$ The elements of "S" may appear in any order in the first row, so this permutation could also be written: $\sigma = \begin{pmatrix} 2 & 3 & 4 & 5 & 6 & 1 \\ 6 & 5 & 4 & 3 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 6 & 5 & 4 & 3 & 2 & 1 \\ 1 & 3 & 4 & 5 & 6 & 2 \end{pmatrix}.$ One-line notation. If there is a "natural" order for the elements of "S", say $x_1, x_2, \ldots, x_n$, then one uses this for the first row of the two-line notation: $\sigma = \begin{pmatrix} x_1 & x_2 & x_3 & \cdots & x_n \\ \sigma(x_1) & \sigma(x_2) & \sigma(x_3) & \cdots & \sigma(x_n) \end{pmatrix}.$ Under this assumption, one may omit the first row and write the permutation in "one-line notation" as $\sigma = \sigma(x_1) \; \sigma(x_2) \; \sigma(x_3) \; \cdots \; \sigma(x_n) $, that is, as an ordered arrangement of the elements of "S". Care must be taken to distinguish one-line notation from the cycle notation described below: a common usage is to omit parentheses or other enclosing marks for one-line notation, while using parentheses for cycle notation. The one-line notation is also called the "word representation". The example above would then be $\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 6 & 5 & 4 & 3 & 1 \end{pmatrix} = 2\;6\;5\;4\;3\;1.$ (It is typical to use commas to separate these entries only if some have two or more digits.) This compact form is common in elementary combinatorics and computer science. It is especially useful in applications where the permutations are to be compared as larger or smaller using lexicographic order. Cycle notation. Cycle notation describes the effect of repeatedly applying the permutation on the elements of the set "S", with an orbit being called a "cycle". The permutation is written as a list of cycles; since distinct cycles involve disjoint sets of elements, this is referred to as "decomposition into disjoint cycles". To write down the permutation $\sigma$ in cycle notation, one proceeds as follows: write an opening bracket followed by an arbitrary element "x" of "S"; trace the orbit of "x", writing down the successive images under $\sigma$; close the bracket when the orbit returns to "x"; then continue with an element of "S" not yet written down, repeating until every element of "S" appears in some cycle. Also, it is common to omit 1-cycles, since these can be inferred: for any element "x" in "S" not appearing in any cycle, one implicitly assumes $\sigma(x) = x$. 
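A short Python sketch of this procedure (our own illustration, assuming 1-indexed one-line tuples as above):

```python
def cycles(sigma):
    """Disjoint cycle decomposition of a 1-indexed one-line permutation."""
    seen, result = set(), []
    for start in range(1, len(sigma) + 1):
        if start not in seen:
            cycle, x = [], start
            while x not in seen:           # trace the orbit of start
                seen.add(x)
                cycle.append(x)
                x = sigma[x - 1]
            result.append(tuple(cycle))
    return result

print(cycles((2, 6, 5, 4, 3, 1)))   # [(1, 2, 6), (3, 5), (4,)] i.e. (126)(35)(4)
```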
Following the convention of omitting 1-cycles, one may interpret an individual cycle as a permutation which fixes all the elements not in the cycle (a cyclic permutation having only one cycle of length greater than 1). Then the list of disjoint cycles can be seen as the composition of these cyclic permutations. For example, the one-line permutation $\sigma = 2 6 5 4 3 1 $ can be written in cycle notation as:$\sigma = (126)(35)(4) = (126)(35). $This may be seen as the composition $\sigma = \kappa_1 \kappa_2 $ of cyclic permutations:$\kappa_1 = (126) = (126)(3)(4)(5),\quad \kappa_2 = (35)= (35)(1)(2)(6). $ While permutations in general do not commute, disjoint cycles do; for example:$\sigma = (126)(35) = (35)(126). $Also, each cycle can be rewritten from a different starting point; for example,$\sigma = (126)(35) = (261)(53). $Thus one may write the disjoint cycles of a given permutation in many different ways. A convenient feature of cycle notation is that inverting the permutation is given by reversing the order of the elements in each cycle. For example, $\sigma^{-1} = \left(\vphantom{A^2}(126)(35)\right)^{-1} = (621)(53). $ Canonical cycle notation. In some combinatorial contexts it is useful to fix a certain order for the elements in the cycles and of the (disjoint) cycles themselves. Miklós Bóna calls the following ordering choices the "canonical cycle notation": each cycle lists its largest element first, and the cycles are written in increasing order of their first (largest) elements. For example, (513)(6)(827)(94) is a permutation of "S =" {1, 2, . . . , 9} in canonical cycle notation. Richard Stanley calls this the "standard representation" of a permutation, and Martin Aigner uses "standard form". Sergey Kitaev also uses the "standard form" terminology, but reverses both choices; that is, each cycle lists its minimal element first, and the cycles are sorted in decreasing order of their minimal elements. Composition of permutations. There are two ways to denote the composition of two permutations. In the most common notation, $\sigma\cdot \tau$ is the function that maps any element "x" to $\sigma(\tau(x))$. The rightmost permutation is applied to the argument first, because the argument is written to the right of the function. A "different" rule for multiplying permutations comes from writing the argument to the left of the function, so that the leftmost permutation acts first. In this notation, the permutation is often written as an exponent, so "σ" acting on "x" is written "x""σ"; then the product is defined by $x^{\sigma\cdot\tau} = (x^\sigma)^\tau$. This article uses the first definition, where the rightmost permutation is applied first. The function composition operation satisfies the axioms of a group. It is associative, meaning $(\rho\sigma)\tau = \rho(\sigma\tau)$, and products of more than two permutations are usually written without parentheses. The composition operation also has an identity element (the identity permutation $\text{id}$), and each permutation $\sigma$ has an inverse $\sigma^{-1}$ (its inverse function) with $\sigma^{-1}\sigma = \sigma\sigma^{-1}=\text{id}$. Other uses of the term "permutation". The concept of a permutation as an ordered arrangement admits several generalizations that have been called "permutations", especially in older literature. "k"-permutations of "n". In older literature and elementary textbooks, a "k"-permutation of "n" (sometimes called a partial permutation, sequence without repetition, variation, or arrangement) means an ordered arrangement (list) of a "k"-element subset of an "n"-set. 
The number of such "k"-permutations ("k"-arrangements) of $n$ is denoted variously by such symbols as $P^n_k$, $_nP_k$, $^n\!P_k$, $P_{n,k}$, $P(n,k)$, or $A^k_n$, computed by the formula: $P(n,k) = \underbrace{n\cdot(n-1)\cdot(n-2)\cdots(n-k+1)}_{k\ \mathrm{factors}}$, which is 0 when "k" > "n", and otherwise is equal to $\frac{n!}{(n-k)!}.$ The product is well defined without the assumption that $n$ is a non-negative integer, and is of importance outside combinatorics as well; it is known as the Pochhammer symbol $(n)_k$ or as the $k$-th falling factorial power $n^{\underline k}$:$P(n,k)={_n} P_k =(n)_k = n^{\underline{k}} .$This usage of the term "permutation" is closely associated with the term "combination" to mean a subset. A "k-combination" of a set "S" is a "k-"element subset of "S": the elements of a combination are not ordered. Ordering the "k"-combinations of "S" in all possible ways produces the "k"-permutations of "S". The number of "k"-combinations of an "n"-set, "C"("n","k"), is therefore related to the number of "k"-permutations of "n" by: $C(n,k) = \frac{P(n,k)}{P(k,k)}= \frac{n^{\underline{k}}}{k!} = \frac{n!}{(n-k)!\,k!}.$ These numbers are also known as binomial coefficients, usually denoted $\tbinom{n}{k}$:$C(n,k)={_n} C_k =\binom{n}{k} .$ Permutations with repetition. Ordered arrangements of "k" elements of a set "S", where repetition is allowed, are called "k"-tuples. They have sometimes been referred to as permutations with repetition, although they are not permutations in the usual sense. They are also called words over the alphabet "S". If the set "S" has "n" elements, the number of "k"-tuples over "S" is $n^k.$ A formal language is a set of words obeying specified rules. Permutations of multisets. If "M" is a finite multiset, then a multiset permutation is an ordered arrangement of elements of "M" in which each element appears a number of times equal exactly to its multiplicity in "M". An anagram of a word having some repeated letters is an example of a multiset permutation. If the multiplicities of the elements of "M" (taken in some order) are $m_1$, $m_2$, ..., $m_l$ and their sum (that is, the size of "M") is "n", then the number of multiset permutations of "M" is given by the multinomial coefficient, $ {n \choose m_1, m_2, \ldots, m_l} = \frac{n!}{m_1!\, m_2!\, \cdots\, m_l!} = \frac{\left(\sum_{i=1}^l{m_i}\right)!}{\prod_{i=1}^l{m_i!}}. $ For example, the number of distinct anagrams of the word MISSISSIPPI is: $\frac{11!}{1!\, 4!\, 4!\, 2!} = 34650$. A "k"-permutation of a multiset "M" is a sequence of "k" elements of "M" in which each element appears "a number of times less than or equal to" its multiplicity in "M" (an element's "repetition number"). Circular permutations. Permutations, when considered as arrangements, are sometimes referred to as "linearly ordered" arrangements. If, however, the objects are arranged in a circular manner this distinguished ordering is weakened: there is no "first element" in the arrangement, as any element can be considered as the start. An arrangement of distinct objects in a circular manner is called a circular permutation. These can be formally defined as equivalence classes of ordinary permutations of these objects, for the equivalence relation generated by moving the final element of the linear arrangement to its front. Two circular permutations are equivalent if one can be rotated into the other. The following four circular permutations on four letters are considered to be the same. 
The circular arrangements are to be read counter-clockwise, so the following two are not equivalent since no rotation can bring one to the other. There are ("n" – 1)! circular permutations of a set with "n" elements. Properties. The number of permutations of "n" distinct objects is "n"!. The number of "n"-permutations with "k" disjoint cycles is the signless Stirling number of the first kind, denoted $c(n,k)$ or $[\begin{smallmatrix}n \\ k\end{smallmatrix}]$. Cycle type. The cycles (including the fixed points) of a permutation $\sigma$ of a set with n elements partition that set; so the lengths of these cycles form an integer partition of n, which is called the cycle type (or sometimes cycle structure or cycle shape) of $\sigma$. There is a "1" in the cycle type for every fixed point of $\sigma$, a "2" for every transposition, and so on. The cycle type of $\beta = (1\,2\,5)(3\,4)(6\,8)(7)$ is $(3, 2, 2, 1).$ This may also be written in a more compact form as $[1^1 2^2 3^1]$. More precisely, the general form is $[1^{\alpha_1}2^{\alpha_2}\dotsm n^{\alpha_n}]$, where $\alpha_1,\ldots,\alpha_n$ are the numbers of cycles of respective length. The number of permutations of a given cycle type is $\frac{n!}{1^{\alpha_1}2^{\alpha_2}\dotsm n^{\alpha_n}\alpha_1!\alpha_2!\dotsm \alpha_n!}$. The number of cycle types of a set with n elements equals the value of the partition function $p(n)$. Polya's cycle index polynomial is a generating function which counts permutations by their cycle type. Conjugating permutations. In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. However the cycle type is preserved in the special case of conjugating a permutation $\sigma$ by another permutation $\pi$, which means forming the product $\pi\sigma\pi^{-1}$. Here, $\pi\sigma\pi^{-1}$ is the "conjugate" of $\sigma$ by $\pi$ and its cycle notation can be obtained by taking the cycle notation for $\sigma$ and applying $\pi$ to all the entries in it. It follows that two permutations are conjugate exactly when they have the same cycle type. Order of a permutation. The order of a permutation $\sigma$ is the smallest positive integer "m" so that $\sigma^m = \mathrm{id}$. It is the least common multiple of the lengths of its cycles. For example, the order of $\sigma=(152)(34) $ is $\text{lcm}(3,2) = 6$. Parity of a permutation. Every permutation of a finite set can be expressed as the product of transpositions. Although many such expressions for a given permutation may exist, either they all contain an even number of transpositions or they all contain an odd number of transpositions. Thus all permutations can be classified as even or odd depending on this number. This result can be extended so as to assign a "sign", written $\operatorname{sgn}\sigma$, to each permutation. $\operatorname{sgn}\sigma = +1$ if $\sigma$ is even and $\operatorname{sgn}\sigma = -1$ if $\sigma$ is odd. Then for two permutations $\sigma$ and $\pi$ $\operatorname{sgn}(\sigma\pi) = \operatorname{sgn}\sigma\cdot\operatorname{sgn}\pi.$ It follows that $\operatorname{sgn}\left(\sigma\sigma^{-1}\right) = +1.$ The sign of a permutation is equal to the determinant of its permutation matrix (below). Matrix representation. A "permutation matrix" is an "n" × "n" matrix that has exactly one entry 1 in each column and in each row, and all other entries are 0. 
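Before the matrix construction is detailed below, a quick Python sketch (ours; a "k"-cycle is a product of "k" − 1 transpositions, which gives the sign) of the three invariants just defined:

```python
from math import lcm   # variadic lcm requires Python >= 3.9

def cycle_lengths(sigma):
    """Cycle lengths of a 1-indexed one-line permutation, largest first."""
    seen, lengths = set(), []
    for s in range(1, len(sigma) + 1):
        if s not in seen:
            n, x = 0, s
            while x not in seen:
                seen.add(x)
                x = sigma[x - 1]
                n += 1
            lengths.append(n)
    return sorted(lengths, reverse=True)

beta = (2, 5, 4, 3, 1, 8, 7, 6)                 # (1 2 5)(3 4)(6 8)(7)
print(cycle_lengths(beta))                       # [3, 2, 2, 1], the cycle type
print(lcm(*cycle_lengths(beta)))                 # 6, the order
print((-1) ** sum(k - 1 for k in cycle_lengths(beta)))   # 1, the sign
```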
There are several ways to assign a permutation matrix to a permutation of {1, 2, ..., "n"}. One natural approach is to define $L_{\sigma}$ to be the linear transformation of $\mathbb{R}^n$ which permutes the standard basis $\{\mathbf{e}_1,\ldots,\mathbf{e}_n\}$ by $L_\sigma(\mathbf{e}_j)=\mathbf{e}_{\sigma(j)}$, and define $M_{\sigma}$ to be its matrix. That is, $M_{\sigma}$ has its "j"th column equal to the "n" × 1 column vector $\mathbf{e}_{\sigma(j)}$: its ("i", "j") entry is 1 if "i" = "σ"("j"), and 0 otherwise. Since composition of linear mappings is described by matrix multiplication, it follows that this construction is compatible with composition of permutations: $M_\sigma M_\tau = M_{\sigma\tau}$. For example, the one-line permutations $\sigma=213,\ \tau=231$ have product $\sigma\tau = 132$, and the corresponding matrices are: <math display="block"> M_{\sigma} M_{\tau} = \begin{pmatrix} 0&1&0\\1&0&0\\0&0&1\end{pmatrix} \begin{pmatrix} 0&0&1\\1&0&0\\0&1&0\end{pmatrix} = \begin{pmatrix} 1&0&0\\0&0&1\\0&1&0\end{pmatrix} = M_{\sigma\tau}.$ It is also common in the literature to find the inverse convention, where a permutation "σ" is associated to the matrix $P_{\sigma} = (M_{\sigma})^{-1} = (M_{\sigma})^{T}$ whose ("i", "j") entry is 1 if "j" = "σ"("i") and is 0 otherwise. In this convention, permutation matrices multiply in the opposite order from permutations, that is, $P_\sigma P_{\tau} = P_{\tau\sigma}$. In this correspondence, permutation matrices act on the right side of the standard $1 \times n$ row vectors $({\bf e}_i)^T$: $({\bf e}_i)^T P_{\sigma} = ({\bf e}_{\sigma(i)})^T$. Permutations of totally ordered sets. In some applications, the elements of the set being permuted will be compared with each other. This requires that the set "S" has a total order so that any two elements can be compared. The set {1, 2, ..., "n"} with the usual ≤ relation is the most frequently used set in these applications. A number of properties of a permutation are directly related to the total ordering of "S," considering the permutation written in one-line notation as a sequence $\sigma = \sigma(1)\sigma(2)\cdots\sigma(n)$. Ascents, descents, runs, exceedances, records. An "ascent" of a permutation "σ" of "n" is any position "i" < "n" where the following value is bigger than the current one. That is, "i" is an ascent if $\sigma(i)<\sigma(i{+}1)$. For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6. Similarly, a "descent" is a position "i" < "n" with $\sigma(i)>\sigma(i{+}1)$, so every "i" with $1 \leq i<n$ is either an ascent or a descent. An "ascending run" of a permutation is a nonempty increasing contiguous subsequence that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast an "increasing subsequence" of a permutation is not necessarily contiguous: it is an increasing sequence obtained by omitting some of the values of the one-line notation. For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367. If a permutation has "k" − 1 descents, then it must be the union of "k" ascending runs. The number of permutations of "n" with "k" ascents is (by definition) the Eulerian number $\textstyle\left\langle{n\atop k}\right\rangle$; this is also the number of permutations of "n" with "k" descents. 
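A small Python sketch (ours) checking the ascent positions from the example above and tabulating the Eulerian numbers for "n" = 4 by brute force:

```python
from collections import Counter
from itertools import permutations

def ascents(sigma):
    """1-indexed positions i with sigma(i) < sigma(i+1)."""
    return [i + 1 for i in range(len(sigma) - 1) if sigma[i] < sigma[i + 1]]

def descents(sigma):
    """1-indexed positions i with sigma(i) > sigma(i+1)."""
    return [i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1]]

print(ascents((3, 4, 5, 2, 1, 6, 7)))    # [1, 2, 5, 6], as in the text

# Eulerian numbers for n = 4: count permutations by their number of descents
hist = Counter(len(descents(p)) for p in permutations(range(1, 5)))
print([hist[k] for k in range(4)])        # [1, 11, 11, 1]
```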
Some authors however define the Eulerian number $\textstyle\left\langle{n\atop k}\right\rangle$ as the number of permutations with "k" ascending runs, which corresponds to "k" − 1 descents. An exceedance of a permutation "σ"1"σ"2..."σ""n" is an index "j" such that "σ""j" > "j". If the inequality is not strict (that is, "σ""j" ≥ "j"), then "j" is called a "weak exceedance". The number of "n"-permutations with "k" exceedances coincides with the number of "n"-permutations with "k" descents. A "record" or "left-to-right maximum" of a permutation "σ" is an element "i" such that "σ"("j") < "σ"("i") for all "j < i". Foata's transition lemma. Foata's "fundamental bijection" transforms a permutation $\sigma $ with a given canonical cycle form into the permutation $f(\sigma) = \hat\sigma $ whose one-line notation has the same sequence of elements with parentheses removed. For example: $\sigma = (513)(6)(827)(94) = \begin{pmatrix} 1&2&3&4&5&6&7&8&9\\ 3&7&5&9&1&6&8&2&4 \end{pmatrix}, $ $\hat\sigma = 513682794 = \begin{pmatrix} 1&2&3&4&5&6&7&8&9\\ 5&1&3&6&8&2&7&9&4 \end{pmatrix}. $ Here the first element in each canonical cycle of $\sigma $ becomes a record (left-to-right maximum) of $\hat\sigma $. Given $\hat\sigma $, one may find its records and insert parentheses to construct the inverse transformation $\sigma=f^{-1}(\hat\sigma) $. Underlining the records in the above example: $\hat\sigma = \underline{5}\, 1\, 3\, \underline{6}\, \underline{8}\,2\,7\,\underline{9}\,4 $, which allows the reconstruction of the cycles of $\sigma $. The following table shows $\hat\sigma $ and $\sigma $ for the six permutations of "S" = {1,2,3}, with the bold text on each side showing the notation used in the bijection: one-line notation for $\hat\sigma $ and canonical cycle notation for $\sigma $. Note that the first 3 permutations are mapped to themselves under the bijection. As a first corollary, the number of "n"-permutations with exactly "k" records is equal to the number of "n"-permutations with exactly "k" cycles: this last number is the signless Stirling number of the first kind, $c(n, k)$. Furthermore, Foata's mapping takes an "n"-permutation with "k" weak exceedances to an "n"-permutation with "k" − 1 ascents. For example, (2)(31) = 321 has "k =" 2 weak exceedances (at index 1 and 2), whereas "f"(321) = 231 has "k" − 1 = 1 ascent (at index 1; that is, from 2 to 3). Inversions. An "inversion" of a permutation "σ" is a pair ("i", "j") of positions where the entries of a permutation are in the opposite order: $i < j$ and $\sigma(i)> \sigma(j)$. Thus a descent is an inversion at two adjacent positions. For example, "σ" = 23154 has inversions ("i", "j") = (1,3), (2,3), and (4,5), where ("σ"("i"), "σ"("j")) = (2,1), (3,1), and (5,4). Sometimes an inversion is defined as the pair of values ("σ"("i"), "σ"("j")); this makes no difference for the "number" of inversions, and the reverse pair ("σ"("j"), "σ"("i")) is an inversion in the above sense for the inverse permutation "σ"−1. The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for "σ" and for "σ"−1. To bring a permutation with "k" inversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of "k" such operations. 
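A Python sketch (our own illustration) of both directions of Foata's bijection, using the canonical cycle form described earlier (largest element first in each cycle, cycles by increasing first elements, then parentheses erased):

```python
def foata(sigma):
    """f(sigma): erase the parentheses of the canonical cycle form."""
    seen, cycs = set(), []
    for s in range(1, len(sigma) + 1):
        if s not in seen:
            c, x = [], s
            while x not in seen:
                seen.add(x)
                c.append(x)
                x = sigma[x - 1]
            m = c.index(max(c))
            cycs.append(c[m:] + c[:m])       # rotate: largest element first
    cycs.sort(key=lambda c: c[0])            # increasing first (= largest) elements
    return tuple(v for c in cycs for v in c)

def foata_inverse(word):
    """Cut the word before each left-to-right maximum to recover the cycles."""
    groups, best = [], 0
    for v in word:
        if v > best:                         # v is a record: a new cycle starts
            groups.append([])
            best = v
        groups[-1].append(v)
    sigma = [0] * len(word)
    for c in groups:
        for a, b in zip(c, c[1:] + c[:1]):   # close each cycle back on itself
            sigma[a - 1] = b
    return tuple(sigma)

sigma = (3, 7, 5, 9, 1, 6, 8, 2, 4)          # (513)(6)(827)(94)
print(foata(sigma))                           # (5, 1, 3, 6, 8, 2, 7, 9, 4)
print(foata_inverse(foata(sigma)) == sigma)   # True
```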
Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of "i" and "i" + 1 where "i" is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutation "σ" can be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transforms "σ" into the identity. In fact, by enumerating all sequences of adjacent transpositions that would transform "σ" into the identity, one obtains (after reversal) a "complete" list of all expressions of minimal length writing "σ" as a product of adjacent transpositions. The number of permutations of "n" with "k" inversions is expressed by a Mahonian number: it is the coefficient of "X""k" in the expansion of the product <math display="block">\prod_{m=1}^n\sum_{i=0}^{m-1}X^i = 1 \left(1 + X\right)\left(1 + X + X^2\right) \cdots \left(1 + X + X^2 + \cdots + X^{n-1}\right),$ which is also known (with "q" substituted for "X") as the q-factorial ["n"]"q"!. The expansion of the product appears in Necklace (combinatorics). Let $\sigma \in S_n$ and $i, j\in \{1, 2, \dots, n\}$ be such that $i<j$ and $\sigma(i)>\sigma(j)$. In this case, say the weight of the inversion $(i, j)$ is $\sigma(i)-\sigma(j)$. Kobayashi (2011) proved the enumeration formula <math display="block">\sum_{i<j, \sigma(i)>\sigma(j)}(\sigma(i)-\sigma(j)) = |\{\tau \in S_n \mid \tau\le \sigma, \tau \text{ is bigrassmannian}\}|$ where $\le$ denotes Bruhat order in the symmetric groups. This graded partial order often appears in the context of Coxeter groups. Permutations in computing. Numbering permutations. One way to represent permutations of "n" things is by an integer "N" with 0 ≤ "N" < "n"!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when "n" is small enough that "N" can be held in a machine word; for 32-bit words this means "n" ≤ 12, and for 64-bit words this means "n" ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers "d""n", "d""n"−1, ..., "d"2, "d"1, where "d""i" is a non-negative integer less than "i" (one may omit "d"1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply express "N" in the "factorial number system", which is just a particular mixed radix representation, where, for numbers less than "n"!, the bases (place values or multiplication factors) for successive digits are ("n" − 1)!, ("n" − 2)!, ..., 2!, 1!. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table. In the Lehmer code for a permutation "σ", the number "d""n" represents the choice made for the first term "σ"1, the number "d""n"−1 represents the choice made for the second term "σ"2 among the remaining "n" − 1 elements of the set, and so forth.
More precisely, each "d""n"+1−"i" gives the number of "remaining" elements strictly less than the term "σ""i". Since those remaining elements are bound to turn up as some later term "σ""j", the digit "d""n"+1−"i" counts the "inversions" ("i","j") involving "i" as smaller index (the number of values "j" for which "i" < "j" and "σ""i" > "σ""j"). The inversion table for "σ" is quite similar, but here "d""n"+1−"k" counts the number of inversions ("i","j") where "k" = "σ""j" occurs as the smaller of the two values appearing in inverted order. Both encodings can be visualized by an "n" by "n" Rothe diagram (named after Heinrich August Rothe) in which dots at ("i","σ""i") mark the entries of the permutation, and a cross at ("i","σ""j") marks the inversion ("i","j"); by the definition of inversions a cross appears in any square that comes both before the dot ("j","σ""j") in its column, and before the dot ("i","σ""i") in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa. To effectively convert a Lehmer code "d""n", "d""n"−1, ..., "d"2, "d"1 into a permutation of an ordered set "S", one can start with a list of the elements of "S" in increasing order, and for "i" increasing from 1 to "n" set "σ""i" to the element in the list that is preceded by "d""n"+1−"i" other ones, and remove that element from the list. To convert an inversion table "d""n", "d""n"−1, ..., "d"2, "d"1 into the corresponding permutation, one can traverse the numbers from "d"1 to "d""n" while inserting the elements of "S" from largest to smallest into an initially empty sequence; at the step using the number "d" from the inversion table, the element from "S" is inserted into the sequence at the point where it is preceded by "d" elements already present. Alternatively one could process the numbers from the inversion table and the elements of "S" both in the opposite order, starting with a row of "n" empty slots, and at each step place the element from "S" into the empty slot that is preceded by "d" other empty slots. Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the "place" of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover, the positions of the zeroes in the inversion table give the values of the left-to-right maxima of the permutation (in the example 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example, positions 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code "d""n", "d""n"−1, ..., "d"2, "d"1 has an ascent "n" − "i" if and only if "d""i" ≥ "d""i"+1. Algorithms to generate permutations. In computing it may be required to generate permutations of a given sequence of values.
The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence. An obvious way to generate permutations of "n" is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to "n"!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requires "n" operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about "n"2/4 operations to perform the conversion. With "n" likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in "O"("n" log "n") time. Random generation of permutations. For generating random permutations of a given sequence of "n" values, it makes no difference whether one applies a randomly selected permutation of "n" to the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in the case of repeated values there can be many distinct permutations of "n" that result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes infeasible for large "n" due to the growth of the number "n"!, there is no reason to assume that "n" will be small for random generation. The basic idea to generate a random permutation is to generate at random one of the "n"! sequences of integers "d"1, "d"2, ..., "d""n" satisfying 0 ≤ "d""i" < "i" (since "d"1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald Fisher and Frank Yates. While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using "d""i" to select an element among "i" remaining elements of the sequence (for decreasing values of "i"), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequences of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction. When the selected element happens to be the final remaining element, the swap operation can be omitted.
This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated. The resulting algorithm for generating a random permutation of <code>"a"[0], "a"[1], ..., "a"["n" − 1]</code> can be described as follows in pseudocode:

for "i" from "n" downto 2 do
    "d""i" ← random element of { 0, ..., "i" − 1 }
    swap "a"["d""i"] and "a"["i" − 1]

This can be combined with the initialization of the array <code>"a"["i"] = "i"</code> as follows:

for "i" from 0 to "n"−1 do
    "d""i"+1 ← random element of { 0, ..., "i" }
    "a"["i"] ← "a"["d""i"+1]
    "a"["d""i"+1] ← "i"

If "d""i"+1 = "i", the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value "i". However, Fisher–Yates is not the fastest algorithm for generating a permutation, because Fisher–Yates is essentially a sequential algorithm and "divide and conquer" procedures can achieve the same result in parallel. Generation in lexicographic order. There are many ways to systematically generate all permutations of a given sequence. One classic, simple, and flexible algorithm is based upon finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, in which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been rediscovered frequently. The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place. Given that the index is zero-based, the steps are as follows:
1. Find the largest index "k" such that "a"["k"] < "a"["k" + 1]. If no such index exists, the permutation is the last permutation.
2. Find the largest index "l" greater than "k" such that "a"["k"] < "a"["l"].
3. Swap the value of "a"["k"] with that of "a"["l"].
4. Reverse the sequence from "a"["k" + 1] up to and including the final element "a"["n" − 1].
For example, given the sequence [1, 2, 3, 4] (which is in increasing order), the next lexicographic permutation will be [1, 2, 4, 3], and the 24th permutation will be [4, 3, 2, 1], at which point "a"["k"] < "a"["k" + 1] does not exist, indicating that this is the last permutation. This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort. Generation with minimal changes. An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same method can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation. An alternative to Steinhaus–Johnson–Trotter is Heap's algorithm, said by Robert Sedgewick in 1977 to be the fastest algorithm for generating permutations in applications. The following figure shows the output of all three aforementioned algorithms for generating all permutations of length $n=4$, and of six additional algorithms described in the literature.
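To make two of the systematic generators above concrete, here is a minimal Python sketch (an illustration, not part of the original article; the function names are chosen for exposition) of the lexicographic next-permutation step and of Heap's algorithm:

```python
def next_permutation(a):
    """Advance list a in place to its lexicographic successor.
    Returns False when a is already the last (weakly decreasing) permutation.
    Repeated values are handled: each distinct multiset permutation occurs once."""
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:      # largest k with a[k] < a[k+1]
        k -= 1
    if k < 0:
        return False
    l = len(a) - 1
    while a[l] <= a[k]:                      # largest l > k with a[k] < a[l]
        l -= 1
    a[k], a[l] = a[l], a[k]
    a[k + 1:] = a[:k:-1]                     # reverse the tail a[k+1:]
    return True

def heap_permutations(a, k=None):
    """Yield all permutations of a by Heap's algorithm;
    consecutive outputs differ by a single swap of two values."""
    k = len(a) if k is None else k
    if k == 1:
        yield tuple(a)
        return
    for i in range(k - 1):
        yield from heap_permutations(a, k - 1)
        j = i if k % 2 == 0 else 0           # even level: swap a[i]; odd: a[0]
        a[j], a[k - 1] = a[k - 1], a[j]
    yield from heap_permutations(a, k - 1)

a = [1, 2, 3, 4]
next_permutation(a)
assert a == [1, 2, 4, 3]                     # successor of 1234
assert len(set(heap_permutations([1, 2, 3]))) == 6
```

Calling `next_permutation` repeatedly starting from the sorted sequence enumerates all (multiset) permutations in lexicographic order, while `heap_permutations` realizes the minimal-change behaviour discussed above.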
Meandric permutations. Meandric systems give rise to "meandric permutations", a special subset of "alternate permutations". An alternate permutation of the set {1, 2, ..., 2"n"} is a cyclic permutation (with no fixed points) such that, in its cycle-notation form, the digits alternate between odd and even integers. Meandric permutations are useful in the analysis of RNA secondary structure. Not all alternate permutations are meandric. A modification of Heap's algorithm has been used to generate all alternate permutations of order "n" (that is, of length 2"n") without generating all (2"n")! permutations. Generation of these alternate permutations is needed before they are analyzed to determine if they are meandric or not. The algorithm is recursive. The following table exhibits a step in the procedure. In the previous step, all alternate permutations of length 5 have been generated. Three copies of each of these have a "6" added to the right end, and then a different transposition involving this last entry and a previous entry in an even position is applied (including the identity; that is, no transposition). Applications. Permutations are used in the interleaver component of error detection and correction algorithms, such as turbo codes; for example, the 3GPP Long Term Evolution mobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212). Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on permutation polynomials. Permutations are also used as a basis for optimal hashing in Unique Permutation Hashing.
22570
abstract_algebra
Matrix with one nonzero entry in each row and column In mathematics, a generalized permutation matrix (or monomial matrix) is a matrix with the same nonzero pattern as a permutation matrix, i.e. there is exactly one nonzero entry in each row and each column. Unlike a permutation matrix, where the nonzero entry must be 1, in a generalized permutation matrix the nonzero entry can be any nonzero value. An example of a generalized permutation matrix is $\begin{bmatrix} 0 & 0 & 3 & 0\\ 0 & -7 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & \sqrt2\end{bmatrix}.$ Structure. An invertible matrix "A" is a generalized permutation matrix if and only if it can be written as a product of an invertible diagonal matrix "D" and an (implicitly invertible) permutation matrix "P": i.e., $A = DP.$ Group structure. The set of "n" × "n" generalized permutation matrices with entries in a field "F" forms a subgroup of the general linear group GL("n", "F"), in which the group of nonsingular diagonal matrices Δ("n", "F") forms a normal subgroup. Indeed, the generalized permutation matrices are the normalizer of the diagonal matrices, meaning that the generalized permutation matrices are the "largest" subgroup of GL("n", "F") in which diagonal matrices are normal. The abstract group of generalized permutation matrices is the wreath product of "F"× and "S""n". Concretely, this means that it is the semidirect product of Δ("n", "F") by the symmetric group "S""n": "S""n" ⋉ Δ("n", "F"), where "S""n" acts by permuting coordinates and the diagonal matrices Δ("n", "F") are isomorphic to the "n"-fold product ("F"×)"n". To be precise, the generalized permutation matrices are a (faithful) linear representation of this abstract wreath product: a realization of the abstract group as a subgroup of matrices. Generalizations. One can generalize further by allowing the entries to lie in a ring, rather than in a field. In that case if the non-zero entries are required to be units in the ring, one again obtains a group. On the other hand, if the non-zero entries are only required to be non-zero, but not necessarily invertible, this set of matrices forms a semigroup instead. One may also schematically allow the non-zero entries to lie in a group "G", with the understanding that matrix multiplication will only involve multiplying a single pair of group elements, not "adding" group elements. This is an abuse of notation, since the entries of matrices being multiplied must admit both multiplication and addition, but it is a suggestive notation for the (formally correct) abstract group $G \wr S_n$ (the wreath product of the group "G" by the symmetric group). Signed permutation group. A signed permutation matrix is a generalized permutation matrix whose nonzero entries are ±1; the signed permutation matrices are exactly the generalized permutation matrices with integer entries and integer inverse. Applications. Monomial representations. Monomial matrices occur in representation theory in the context of monomial representations. A monomial representation of a group "G" is a linear representation "ρ" : "G" → GL("n", "F") of "G" (here "F" is the defining field of the representation) such that the image "ρ"("G") is a subgroup of the group of monomial matrices.
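As a quick illustration of the factorization "A" = "DP" (a sketch assuming NumPy, not part of the article), the permutation matrix "P" is just the nonzero pattern of "A", and the diagonal of "D" collects the row-wise nonzero entries:

```python
import numpy as np

# The example generalized permutation matrix from above.
A = np.array([[0.0,  0.0, 3.0, 0.0],
              [0.0, -7.0, 0.0, 0.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 0.0, np.sqrt(2)]])

P = (A != 0).astype(float)   # the nonzero pattern: a permutation matrix
D = np.diag(A[A != 0])       # one nonzero per row, read off in row order

assert np.allclose(D @ P, A)
```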
122280
abstract_algebra
In mathematics, in the field of abstract algebra, the structure theorem for finitely generated modules over a principal ideal domain is a generalization of the fundamental theorem of finitely generated abelian groups and roughly states that finitely generated modules over a principal ideal domain (PID) can be uniquely decomposed in much the same way that integers have a prime factorization. The result provides a simple framework to understand various canonical form results for square matrices over fields. Statement. When a vector space over a field "F" has a finite generating set, one may extract from it a basis consisting of a finite number "n" of vectors, and the space is therefore isomorphic to "F""n". The corresponding statement with "F" replaced by a principal ideal domain "R" is no longer true, since a basis for a finitely generated module over "R" might not exist. However such a module is still isomorphic to a quotient of some module "Rn" with "n" finite (to see this it suffices to construct the morphism that sends the elements of the canonical basis of "Rn" to the generators of the module, and take the quotient by its kernel). By changing the choice of generating set, one can in fact describe the module as the quotient of some "Rn" by a particularly simple submodule, and this is the structure theorem. The structure theorem for finitely generated modules over a principal ideal domain usually appears in the following two forms. Invariant factor decomposition. For every finitely generated module "M" over a principal ideal domain "R", there is a unique decreasing sequence of proper ideals $(d_1)\supseteq(d_2)\supseteq\cdots\supseteq(d_n)$ such that "M" is isomorphic to the sum of cyclic modules: $M\cong\bigoplus_i R/(d_i) = R/(d_1)\oplus R/(d_2)\oplus\cdots\oplus R/(d_n).$ The generators $d_i$ of the ideals are unique up to multiplication by a unit, and are called invariant factors of "M". Since the ideals should be proper, these factors must not themselves be invertible (this avoids trivial factors in the sum), and the inclusion of the ideals means one has the divisibility $d_1\,|\,d_2\,|\,\cdots\,|\,d_n$. The free part is visible in the part of the decomposition corresponding to factors $d_i = 0$. Such factors, if any, occur at the end of the sequence. While the direct sum is uniquely determined by "M", the isomorphism giving the decomposition itself is "not unique" in general. For instance if "R" is actually a field, then all occurring ideals must be zero, and one obtains the decomposition of a finite dimensional vector space into a direct sum of one-dimensional subspaces; the number of such factors is fixed, namely the dimension of the space, but there is a lot of freedom for choosing the subspaces themselves (if dim "M" > 1). The nonzero elements $d_i$, together with the number of $d_i$ which are zero, form a complete set of invariants for the module. Explicitly, this means that any two modules sharing the same set of invariants are necessarily isomorphic. Some prefer to write the free part of "M" separately: $R^f \oplus \bigoplus_i R/(d_i) = R^f \oplus R/(d_1)\oplus R/(d_2)\oplus\cdots\oplus R/(d_{n-f})$ where the visible $d_i$ are nonzero, and "f" is the number of $d_i$'s in the original sequence which are 0. Primary decomposition. Every finitely generated module "M" over a principal ideal domain "R" is isomorphic to one of the form $\bigoplus_i R/(q_i)$ where $(q_i) \neq R$ and the $(q_i)$ are primary ideals. The $q_i$ are unique (up to multiplication by units).
The elements $q_i$ are called the "elementary divisors" of "M". In a PID, nonzero primary ideals are powers of primes, and so $(q_i)=(p_i^{r_i}) = (p_i)^{r_i}$. When $q_i=0$, the resulting indecomposable module is $R$ itself, and this is inside the part of "M" that is a free module. The summands $R/(q_i)$ are indecomposable, so the primary decomposition is a decomposition into indecomposable modules, and thus every finitely generated module over a PID is a completely decomposable module. Since PIDs are Noetherian rings, this can be seen as a manifestation of the Lasker–Noether theorem. As before, it is possible to write the free part (where $q_i=0$) separately and express "M" as: $R^f \oplus(\bigoplus_i R/(q_i))$ where the visible $q_i $ are nonzero. Proofs. One proof proceeds by presenting "M" as the cokernel of a matrix over "R" and reducing that matrix to its Smith normal form. This yields the invariant factor decomposition, and the diagonal entries of the Smith normal form are the invariant factors. Corollaries. This includes the classification of finite-dimensional vector spaces as a special case, where $R = K$. Since fields have no non-trivial ideals, every finitely generated vector space is free. Taking $R=\mathbb{Z}$ yields the fundamental theorem of finitely generated abelian groups. Let "T" be a linear operator on a finite-dimensional vector space "V" over "K". Taking $R = K[T]$, the algebra of polynomials with coefficients in "K" evaluated at "T", yields structure information about "T". "V" can be viewed as a finitely generated module over $K[T]$. The last invariant factor is the minimal polynomial, and the product of invariant factors is the characteristic polynomial. Combined with a standard matrix form for $K[T]/p(T)$, this yields various canonical forms: the rational canonical form comes from the invariant factor decomposition, and the Jordan canonical form from the primary decomposition (when "K" contains the eigenvalues of "T"). Uniqueness. While the invariants (rank, invariant factors, and elementary divisors) are unique, the isomorphism between "M" and its canonical form is not unique, and does not even preserve the direct sum decomposition. This follows because there are non-trivial automorphisms of these modules which do not preserve the summands. However, one has a canonical torsion submodule "T", and similar canonical submodules corresponding to each (distinct) invariant factor, which yield a canonical sequence: $0 < \cdots < T < M.$ Compare composition series in the Jordan–Hölder theorem. For instance, if $M \approx \mathbf{Z} \oplus \mathbf{Z}/2$, and $(1,\bar{0}), (0,\bar{1})$ is one basis, then $(1,\bar{1}), (0,\bar{1})$ is another basis, and the change of basis matrix $\begin{bmatrix}1&0\\1&1\end{bmatrix}$ does not preserve the summand $\mathbf{Z}$. However, it does preserve the $\mathbf{Z}/2$ summand, as this is the torsion submodule (equivalently here, the 2-torsion elements). Generalizations. Groups. The Jordan–Hölder theorem is a more general result for finite groups (or modules over an arbitrary ring). In this generality, one obtains a composition series, rather than a direct sum. The Krull–Schmidt theorem and related results give conditions under which a module has something like a primary decomposition, a decomposition as a direct sum of indecomposable modules in which the summands are unique up to order. Primary decomposition. The primary decomposition generalizes to finitely generated modules over commutative Noetherian rings, and this result is called the Lasker–Noether theorem. Indecomposable modules. By contrast, unique decomposition into "indecomposable" submodules does not generalize as far, and the failure is measured by the ideal class group, which vanishes for PIDs.
For rings that are not principal ideal domains, unique decomposition need not even hold for modules over a ring generated by two elements. For the ring "R" = Z[√−5], both the module "R" and its submodule "M" generated by 2 and 1 + √−5 are indecomposable. While "R" is not isomorphic to "M", "R" ⊕ "R" is isomorphic to "M" ⊕ "M"; thus the images of the "M" summands give indecomposable submodules "L"1, "L"2 < "R" ⊕ "R" which give a different decomposition of "R" ⊕ "R". The failure of uniquely factorizing "R" ⊕ "R" into a direct sum of indecomposable modules is directly related (via the ideal class group) to the failure of the unique factorization of elements of "R" into irreducible elements of "R". However, over a Dedekind domain the ideal class group is the only obstruction, and the structure theorem generalizes to finitely generated modules over a Dedekind domain with minor modifications. There is still a unique torsion part, with a torsionfree complement (unique up to isomorphism), but a torsionfree module over a Dedekind domain is no longer necessarily free. Torsionfree modules over a Dedekind domain are determined (up to isomorphism) by rank and Steinitz class (which takes value in the ideal class group), and the decomposition into a direct sum of copies of "R" (rank one free modules) is replaced by a direct sum into rank one projective modules: the individual summands are not uniquely determined, but the Steinitz class (of the sum) is. Non-finitely generated modules. Similarly for modules that are not finitely generated, one cannot expect such a nice decomposition: even the number of factors may vary. There are Z-submodules of Q4 which are simultaneously direct sums of two indecomposable modules and direct sums of three indecomposable modules, showing that the analogue of the primary decomposition cannot hold for infinitely generated modules, even over the integers, Z. Another issue that arises with non-finitely generated modules is that there are torsion-free modules which are not free. For instance, consider the ring Z of integers. Then Q is a torsion-free Z-module which is not free. Another classical example of such a module is the Baer–Specker group, the group of all sequences of integers under termwise addition. In general, the question of which infinitely generated torsion-free abelian groups are free depends on which large cardinals exist. A consequence is that any structure theorem for infinitely generated modules depends on a choice of set theory axioms and may be invalid under a different choice.
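Returning to the finitely generated case over the motivating PID "R" = Z, the invariant factors of a module presented by an integer matrix can be read off its Smith normal form. The following sketch assumes SymPy's smith_normal_form helper is available; it is an illustration, not part of the original text:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Z^2 / M Z^2 has invariant factor decomposition Z/2 (+) Z/4,
# since the Smith normal form of M is diag(2, 4) and 2 | 4.
M = Matrix([[2, 4],
            [6, 8]])
print(smith_normal_form(M, domain=ZZ))   # Matrix([[2, 0], [0, 4]])
```

Here gcd of the entries gives the first invariant factor 2, and |det M| = 8 forces the second to be 4, consistent with the divisibility condition in the theorem.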
1473926
abstract_algebra
In mathematics, the Artin–Hasse exponential, introduced by Artin and Hasse (1928), is the power series given by $ E_p(x) = \exp\left(x + \frac{x^p}{p} + \frac{x^{p^2}}{p^2} + \frac{x^{p^3}}{p^3} +\cdots\right).$ Motivation. One motivation for considering this series to be analogous to the exponential function comes from infinite products. In the ring of formal power series Q[["x"]] we have the identity $e^x = \prod_{n \geq 1}(1-x^n)^{-\mu(n)/n},$ where μ(n) is the Möbius function. This identity can be verified by showing that the logarithmic derivatives of the two sides are equal and that both sides have the same constant term. In a similar way, one can verify a product expansion for the Artin–Hasse exponential: $E_p(x) = \prod_{(p,n)=1}(1-x^n)^{-\mu(n)/n}.$ So passing from a product over all "n" to a product over only "n" prime to "p", which is a typical operation in "p"-adic analysis, leads from "e""x" to "E""p"("x"). Properties. The coefficients of "E""p"("x") are rational. We can use either formula for "E""p"("x") to prove that, unlike "e""x", all of its coefficients are "p"-integral; in other words, the denominators of the coefficients of "E""p"("x") are not divisible by "p". A first proof uses the definition of "E""p"("x") and Dwork's lemma, which says that a power series "f"("x") = 1 + ... with rational coefficients has "p"-integral coefficients if and only if "f"("x""p")/"f"("x")"p" ≡ 1 mod "p"Z"p"[["x"]]. When "f"("x") = "E""p"("x"), we have "f"("x""p")/"f"("x")"p" = "e"−"px", whose constant term is 1 and all higher coefficients are in "p"Z"p". A second proof comes from the infinite product for "E""p"("x"): each exponent −μ("n")/"n" for "n" not divisible by "p" is "p"-integral, and when a rational number "a" is "p"-integral all coefficients in the binomial expansion of (1 − "x""n")"a" are "p"-integral, by "p"-adic continuity of the binomial coefficient polynomials "t"("t"−1)...("t"−"k"+1)/"k"! in "t" together with their obvious integrality when "t" is a nonnegative integer ("a" is a "p"-adic limit of nonnegative integers). Thus each factor in the product of "E""p"("x") has "p"-integral coefficients, so "E""p"("x") itself has "p"-integral coefficients. The ("p"-integral) series expansion has radius of convergence 1. Combinatorial interpretation. The Artin–Hasse exponential is the generating function for the probability that a uniformly randomly selected element of "S""n" (the symmetric group on "n" letters) has "p"-power order (the number of which is denoted by "t""p,n"): $E_p(x)=\sum_{n\ge 0} \frac{t_{p,n}}{n!}x^n.$ This gives a third proof that the coefficients of "E""p"("x") are "p"-integral, using the theorem of Frobenius that in a finite group of order divisible by "d" the number of elements of order dividing "d" is also divisible by "d". Apply this theorem to the "n"th symmetric group with "d" equal to the highest power of "p" dividing "n"!. More generally, for any topologically finitely generated profinite group "G" there is an identity $\exp(\sum_{H \subset G} x^{[G:H]}/[G:H])=\sum_{n\ge 0} \frac{a_{G,n}}{n!}x^n,$ where "H" runs over open subgroups of "G" with finite index (there are finitely many of each index since "G" is topologically finitely generated) and "aG,n" is the number of continuous homomorphisms from "G" to "Sn". Two special cases are worth noting.
(1) If "G" is the "p"-adic integers, it has exactly one open subgroup of each "p"-power index and a continuous homomorphism from "G" to "Sn" is essentially the same thing as choosing an element of "p"-power order in "Sn", so we have recovered the above combinatorial interpretation of the Taylor coefficients in the Artin–Hasse exponential series. (2) If "G" is a finite group then the sum in the exponential is a finite sum running over all subgroups of "G", and continuous homomorphisms from "G" to "Sn" are simply homomorphisms from "G" to "Sn". The result in this case is due to Wohlfahrt (1977). The special case when "G" is a finite cyclic group is due to Chowla, Herstein, and Scott (1952), and takes the form $\exp(\sum_{d|m} x^d/d)=\sum_{n\ge 0} \frac{a_{m,n}}{n!}x^n,$ where "am,n" is the number of solutions to "gm" = 1 in "Sn". David Roberts provided a natural combinatorial link between the Artin–Hasse exponential and the regular exponential in the spirit of the ergodic perspective (linking the "p"-adic and regular norms over the rationals) by showing that the Artin–Hasse exponential is also the generating function for the probability that an element of the symmetric group is unipotent in characteristic "p", whereas the regular exponential is the probability that an element of the same group is unipotent in characteristic zero. Conjectures. At the 2002 PROMYS program, Keith Conrad conjectured that the coefficients of $E_p(x)$ are uniformly distributed in the p-adic integers with respect to the normalized Haar measure, with supporting computational evidence. The problem is still open. Dinesh Thakur has also posed the problem of whether the Artin–Hasse exponential reduced mod "p" is transcendental over $\mathbb{F}_p(x)$.
1067394
abstract_algebra
In mathematics, especially in the area of algebra known as group theory, the term Z-group refers to a number of distinct types of groups. Groups whose Sylow subgroups are cyclic. In the study of finite groups, a Z-group is a finite group whose Sylow subgroups are all cyclic. The Z originates from the German "zyklisch" (cyclic) and from the classification of these groups in the early literature. In many standard textbooks these groups have no special name, other than metacyclic groups, but that term is often used more generally today. See metacyclic group for more on the general, modern definition, which includes non-cyclic "p"-groups; the stricter, classical definition is more closely related to Z-groups. Every group whose Sylow subgroups are cyclic is itself metacyclic, so supersolvable. In fact, such a group has a cyclic derived subgroup with cyclic maximal abelian quotient. Such a group has the presentation: $G(m,n,r) = \langle a,b | a^m = b^n = 1, bab^{-1} = a^r \rangle $, where "mn" is the order of "G"("m","n","r"), gcd(("r"−1)"n", "m") = 1, and "r""n" ≡ 1 (mod "m"). The character theory of Z-groups is well understood, as they are monomial groups. The derived length of a Z-group is at most 2, so Z-groups may be insufficient for some uses. A generalization due to Hall is the class of A-groups, those groups with abelian Sylow subgroups. These groups behave similarly to Z-groups, but can have arbitrarily large derived length. Another generalization allows the Sylow 2-subgroup more flexibility, including dihedral and generalized quaternion groups. Group with a generalized central series. The definition of central series used for Z-group is somewhat technical. A series of "G" is a collection "S" of subgroups of "G", linearly ordered by inclusion, such that for every "g" in "G", the subgroups "A""g" = ∩ { "N" in "S" : "g" in "N" } and "B""g" = ∪ { "N" in "S" : "g" not in "N" } are both in "S". A (generalized) central series of "G" is a series such that every "N" in "S" is normal in "G" and such that for every "g" in "G", the quotient "A""g"/"B""g" is contained in the center of "G"/"B""g". A Z-group is a group with such a (generalized) central series. Examples include the hypercentral groups whose transfinite upper central series form such a central series, as well as the hypocentral groups whose transfinite lower central series form such a central series. Special 2-transitive groups. A (Z)-group is a group faithfully represented as a doubly transitive permutation group in which no non-identity element fixes more than two points. A (ZT)-group is a (Z)-group that is of odd degree and not a Frobenius group, that is, a Zassenhaus group of odd degree, also known as one of the groups PSL(2, $2^{k+1}$) or Sz($2^{2k+1}$), for "k" any positive integer.
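A small illustrative sketch (not from the original text; the function names are invented for exposition): the stated conditions on ("m", "n", "r") are easy to test, and the relation "bab"−1 = "a""r" gives a normal-form multiplication ("a""i""b""j")("a""k""b""l") = "a""i"+"kr""j" "b""j"+"l":

```python
from math import gcd

def valid_params(m, n, r):
    """Check the stated conditions for the presentation G(m, n, r)."""
    return gcd((r - 1) * n, m) == 1 and pow(r, n, m) == 1

def multiply(x, y, m, n, r):
    """Multiply a^i b^j and a^k b^l in G(m, n, r), using b^j a = a^(r^j) b^j."""
    (i, j), (k, l) = x, y
    return ((i + k * pow(r, j, m)) % m, (j + l) % n)

# The symmetric group S_3 arises as G(3, 2, 2); brute-force associativity check.
assert valid_params(3, 2, 2)
G = [(i, j) for i in range(3) for j in range(2)]
assert all(multiply(multiply(x, y, 3, 2, 2), z, 3, 2, 2)
           == multiply(x, multiply(y, z, 3, 2, 2), 3, 2, 2)
           for x in G for y in G for z in G)
```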
2015329
abstract_algebra
Mathematics concept In mathematics, especially group theory, the Zappa–Szép product (also known as the Zappa–Rédei–Szép product, general product, knit product, exact factorization or bicrossed product) describes a way in which a group can be constructed from two subgroups. It is a generalization of the direct and semidirect products. It is named after Guido Zappa (1940) and Jenő Szép (1950) although it was independently studied by others including B.H. Neumann (1935), G.A. Miller (1935), and J.A. de Séguier (1904). Internal Zappa–Szép products. Let "G" be a group with identity element "e", and let "H" and "K" be subgroups of "G". The following statements are equivalent: (1) "G" = "HK" and "H" ∩ "K" = {"e"}; (2) for each "g" in "G", there exist unique "h" in "H" and "k" in "K" such that "g" = "hk". If either (and hence both) of these statements hold, then "G" is said to be an internal Zappa–Szép product of "H" and "K". Examples. Let "G" = GL("n",C), the general linear group of invertible "n × n" matrices over the complex numbers. For each matrix "A" in "G", the QR decomposition asserts that there exists a unique unitary matrix "Q" and a unique upper triangular matrix "R" with positive real entries on the main diagonal such that "A" = "QR". Thus "G" is a Zappa–Szép product of the unitary group "U"("n") and the group (say) "K" of upper triangular matrices with positive diagonal entries. One of the most important examples of this is Philip Hall's 1937 theorem on the existence of Sylow systems for soluble groups. This shows that every soluble group is a Zappa–Szép product of a Hall "p'"-subgroup and a Sylow "p"-subgroup, and in fact that the group is a (multiple factor) Zappa–Szép product of a certain set of representatives of its Sylow subgroups. In 1935, George Miller showed that any non-regular transitive permutation group with a regular subgroup is a Zappa–Szép product of the regular subgroup and a point stabilizer. He gives PSL(2,11) and the alternating group of degree 5 as examples, and of course every alternating group of prime degree is an example. This same paper gives a number of examples of groups which cannot be realized as Zappa–Szép products of proper subgroups, such as the quaternion group and the alternating group of degree 6. External Zappa–Szép products. As with the direct and semidirect products, there is an external version of the Zappa–Szép product for groups which are not known "a priori" to be subgroups of a given group. To motivate this, let "G" = "HK" be an internal Zappa–Szép product of subgroups "H" and "K" of the group "G". For each "k" in "K" and each "h" in "H", there exist α("k", "h") in "H" and β("k", "h") in "K" such that "kh" = α("k", "h") β("k", "h"). This defines mappings α : "K" × "H" → "H" and β : "K" × "H" → "K" which turn out to have the following properties: (1) α("e", "h") = "h" and β("k", "e") = "k"; (2) α("k"1"k"2, "h") = α("k"1, α("k"2, "h")); (3) β("k", "h"1"h"2) = β(β("k", "h"1), "h"2); (4) α("k", "h"1"h"2) = α("k", "h"1) α(β("k", "h"1), "h"2); (5) β("k"1"k"2, "h") = β("k"1, α("k"2, "h")) β("k"2, "h"), for all "h"1, "h"2 in "H", "k"1, "k"2 in "K". From these, it follows that α("k", "e") = "e" and β("e", "h") = "e". More concisely, the first three properties above assert that the mapping α : "K" × "H" → "H" is a left action of "K" on (the underlying set of) "H" and that β : "K" × "H" → "K" is a right action of "H" on (the underlying set of) "K". If we denote the left action by "h" → "k""h" and the right action by "k" → "k""h", then the last two properties amount to "k"("h"1"h"2) = "k""h"1 "k""h"1"h"2 and ("k"1"k"2)"h" = "k"1"k"2"h" "k"2"h". Turning this around, suppose "H" and "K" are groups (and let "e" denote each group's identity element) and suppose there exist mappings α : "K" × "H" → "H" and β : "K" × "H" → "K" satisfying the properties above.
On the Cartesian product "H" × "K", define a multiplication and an inversion mapping by, respectively, ("h"1, "k"1) ("h"2, "k"2) = ("h"1 α("k"1, "h"2), β("k"1, "h"2) "k"2) and ("h", "k")−1 = (α("k"−1, "h"−1), β("k"−1, "h"−1)). Then "H" × "K" is a group called the external Zappa–Szép product of the groups "H" and "K". The subsets "H" × {"e"} and {"e"} × "K" are subgroups isomorphic to "H" and "K", respectively, and "H" × "K" is, in fact, an internal Zappa–Szép product of "H" × {"e"} and {"e"} × "K". Relation to semidirect and direct products. Let "G" = "HK" be an internal Zappa–Szép product of subgroups "H" and "K". If "H" is normal in "G", then the mappings α and β are given by, respectively, α("k","h") = "khk"−1 and β("k", "h") = "k". This is easy to see because $(h_1k_1)(h_2k_2) = (h_1k_1h_2k_1^{-1})(k_1k_2)$ and $h_1k_1h_2k_1^{-1}\in H$ since by normality of $H$, $k_1h_2k_1^{-1}\in H$. In this case, "G" is an internal semidirect product of "H" and "K". If, in addition, "K" is normal in "G", then α("k","h") = "h". In this case, "G" is an internal direct product of "H" and "K".
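A minimal computational sketch (not from the original text) of the external product: given α and β satisfying the axioms, pairs multiply by ("h"1, "k"1)("h"2, "k"2) = ("h"1 α("k"1, "h"2), β("k"1, "h"2) "k"2). Below, "H" = Z/7 and "K" = Z/3 (both written additively), with "K" acting on "H" by "h" ↦ 2"k""h" mod 7 and β("k", "h") = "k", which reproduces the semidirect-product case just described:

```python
def alpha(k, h):          # left action of K on H: h -> 2^k * h mod 7
    return (pow(2, k, 7) * h) % 7

def beta(k, h):           # trivial right action: the semidirect-product case
    return k

def multiply(p, q):
    (h1, k1), (h2, k2) = p, q
    return ((h1 + alpha(k1, h2)) % 7, (beta(k1, h2) + k2) % 3)

# Sanity check: the multiplication is associative on all 21 elements,
# so this realizes the (nonabelian) group Z/7 x| Z/3 of order 21.
G = [(h, k) for h in range(7) for k in range(3)]
assert all(multiply(multiply(a, b), c) == multiply(a, multiply(b, c))
           for a in G for b in G for c in G)
```

The action is well defined because 2 has order 3 mod 7, so 2^"k" depends only on "k" mod 3.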
1220812
abstract_algebra
Universal construction of a complex Lie group from a real Lie group In mathematics, the complexification or universal complexification of a real Lie group is given by a continuous homomorphism of the group into a complex Lie group with the universal property that every continuous homomorphism of the original group into another complex Lie group extends compatibly to a complex analytic homomorphism between the complex Lie groups. The complexification, which always exists, is unique up to unique isomorphism. Its Lie algebra is a quotient of the complexification of the Lie algebra of the original group. They are isomorphic if the original group has a quotient by a discrete normal subgroup which is linear. For compact Lie groups, the complexification, sometimes called the Chevalley complexification after Claude Chevalley, can be defined as the group of complex characters of the Hopf algebra of representative functions, i.e. the matrix coefficients of finite-dimensional representations of the group. In any finite-dimensional faithful unitary representation of the compact group it can be realized concretely as a closed subgroup of the complex general linear group. It consists of operators with polar decomposition "g" = "u" ⋅ exp "iX", where "u" is a unitary operator in the compact group and "X" is a skew-adjoint operator in its Lie algebra. In this case the complexification is a complex algebraic group and its Lie algebra is the complexification of the Lie algebra of the compact Lie group. Universal complexification. Definition. If "G" is a Lie group, a universal complexification is given by a complex Lie group "G"C and a continuous homomorphism "φ": "G" → "G"C with the universal property that, if "f": "G" → "H" is an arbitrary continuous homomorphism into a complex Lie group "H", then there is a unique complex analytic homomorphism "F": "G"C → "H" such that "f" = "F" ∘ "φ". Universal complexifications always exist and are unique up to a unique complex analytic isomorphism (preserving inclusion of the original group). Existence. If "G" is connected with Lie algebra 𝖌, then its universal covering group G is simply connected. Let GC be the simply connected complex Lie group with Lie algebra 𝖌C = 𝖌 ⊗ C, let Φ: G → GC be the natural homomorphism (the unique morphism such that Φ*: 𝖌 ↪ 𝖌 ⊗ C is the canonical inclusion) and suppose "π": G → "G" is the universal covering map, so that ker "π" is the fundamental group of "G". We have the inclusion Φ(ker "π") ⊂ Z(GC), which follows from the fact that the kernel of the adjoint representation of GC equals its centre, combined with the equality $(C_{\Phi(k)})_*\circ \Phi_* = \Phi_* \circ (C_k)_* = \Phi_*$ which holds for any "k" ∈ ker "π". Denoting by Φ(ker "π")* the smallest closed normal Lie subgroup of GC that contains Φ(ker "π"), we must now also have the inclusion Φ(ker "π")* ⊂ Z(GC). We define the universal complexification of "G" as $G_{\mathbf C}=\frac{\mathbf G_{\mathbf C}}{\Phi(\ker \pi)^*}.$ In particular, if "G" is simply connected, its universal complexification is just GC. The map "φ": "G" → "G"C is obtained by passing to the quotient. Since "π" is a surjective submersion, smoothness of the map "π"C ∘ Φ implies smoothness of "φ".
For non-connected Lie groups "G" with identity component "G"o and component group Γ = "G" / "G"o, the extension $ \{1\} \rightarrow G^o \rightarrow G \rightarrow \Gamma \rightarrow \{1\} $ induces an extension $\{1\} \rightarrow (G^o)_{\mathbf{C}} \rightarrow G_{\mathbf{C}} \rightarrow \Gamma \rightarrow \{1\} $ and the complex Lie group "G"C is a complexification of "G". Proof of the universal property. The map "φ": "G" → "G"C indeed possesses the universal property which appears in the above definition of complexification. The proof of this statement naturally follows from considering the following instructive diagram. Here, $f\colon G\rightarrow H$ is an arbitrary smooth homomorphism of Lie groups with a complex Lie group as the codomain. Existence of the map F. For simplicity, we assume $G$ is connected. To establish the existence of $F$, we first naturally extend the morphism of Lie algebras $f_*\colon \mathfrak g\rightarrow \mathfrak h$ to the unique morphism $\overline f_*\colon \mathfrak g_{\mathbf C}\rightarrow \mathfrak h$ of complex Lie algebras. Since $\mathbf G_{\mathbf C}$ is simply connected, Lie's second fundamental theorem now provides us with a unique complex analytic morphism $\overline F\colon \mathbf G_{\mathbf C}\rightarrow H$ between complex Lie groups, such that $(\overline F)_*=\overline f_*$. We define $F\colon G_{\mathbf C}\rightarrow H$ as the map induced by $\overline F$, that is: $F(g\,\Phi(\ker \pi)^*)=\overline F(g)$ for any $g\in\mathbf G_{\mathbf C}$. To show well-definedness of this map (i.e. $\Phi(\ker\pi)^*\subset \ker \overline F$), consider the derivative of the map $\overline F\circ \Phi$. For any $v\in T_e \mathbf G\cong \mathfrak g$, we have $(\overline F)_*\Phi_*v=(\overline F)_*(v\otimes 1)=f_*\pi_*v$, which (by simple connectedness of $\mathbf G$) implies $\overline F\circ\Phi=f\circ\pi$. This equality finally implies $\Phi(\ker\pi)\subset \ker \overline F$, and since $\ker \overline F$ is a closed normal Lie subgroup of $\mathbf G_{\mathbf C}$, we also have $\Phi(\ker \pi)^*\subset \ker \overline F$. Since $\pi_{\mathbb C}$ is a complex analytic surjective submersion, the map $F$ is complex analytic since $\overline F$ is. The desired equality $F\circ\varphi=f$ is immediate. Uniqueness of the map F. To show uniqueness of $F$, suppose that $F_1,F_2$ are two maps with $F_1\circ\varphi=F_2\circ\varphi=f$. Composing with $\pi$ from the right and differentiating, we get $(F_1)_*(\pi_{\mathbf C})_*\Phi_*=(F_2)_*(\pi_{\mathbf C})_*\Phi_*$, and since $\Phi_*$ is the inclusion $\mathfrak g\hookrightarrow \mathfrak g_{\mathbf C}$, we get $(F_1)_*(\pi_{\mathbf C})_*=(F_2)_*(\pi_{\mathbf C})_*$. But $\pi_{\mathbf C}$ is a submersion, so $(F_1)_*=(F_2)_*$, thus connectedness of $G$ implies $F_1=F_2$. Uniqueness. The universal property implies that the universal complexification is unique up to complex analytic isomorphism. Injectivity. If the original group is linear, so too is the universal complexification and the homomorphism between the two is an inclusion. There is an example of a connected real Lie group for which the homomorphism is not injective even at the Lie algebra level: take the product of T by the universal covering group of SL(2,R) and quotient out by the discrete cyclic subgroup generated by an irrational rotation in the first factor and a generator of the center in the second. Basic examples.
The following isomorphisms of complexifications of Lie groups with known Lie groups can be constructed directly from the general construction of the complexification. $\mathrm{SU}(2)_{\mathbf C}\cong \mathrm{SL}(2,\mathbf C)$. This follows from the isomorphism of Lie algebras $\mathfrak{su}(2)_{\mathbf C}\cong \mathfrak{sl}(2,\mathbf C)$, together with the fact that $\mathrm{SU}(2)$ is simply connected. $\mathrm{SL}(2,\mathbf C)_{\mathbf C}\cong \mathrm{SL}(2,\mathbf C)\times \mathrm{SL}(2,\mathbf C)$. This follows from the isomorphism of Lie algebras $\mathfrak{sl}(2,\mathbf C)_{\mathbf C}\cong \mathfrak{sl}(2,\mathbf C) \oplus \mathfrak{sl}(2,\mathbf C)$, together with the fact that $\mathrm{SL}(2,\mathbf C)$ is simply connected. $\mathrm{SO}(3)_{\mathbf C}\cong \frac{\mathrm{SL}(2,\mathbf C)}{\mathbf Z_2}\cong \mathrm{SO}^+(1,3)$, where $\mathrm{SO}^+(1,3)$ denotes the proper orthochronous Lorentz group. This follows from the fact that $\mathrm{SU}(2)$ is the universal (double) cover of $\mathrm{SO}(3)$, hence: $\mathfrak{so}(3)_{\mathbf C}\cong \mathfrak{su}(2)_{\mathbf C} \cong\mathfrak{sl}(2,\mathbf C)$. We also use the fact that $\mathrm{SL}(2,\mathbf C)$ is the universal (double) cover of $\mathrm{SO}^+(1,3)$. $\mathrm{SO}^+(1,3)_{\mathbf C}\cong \frac{\mathrm{SL}(2,\mathbf C)\times \mathrm{SL}(2,\mathbf C)}{\mathbf Z_2}$. This follows from the same isomorphism of Lie algebras as in the second example, again using the universal (double) cover of the proper orthochronous Lorentz group. $\mathrm{SO}(4)_{\mathbf C}\cong \frac{\mathrm{SL}(2,\mathbf C)\times \mathrm{SL}(2,\mathbf C)}{\mathbf Z_2}$. This follows from the fact that $\mathrm{SU}(2)\times\mathrm{SU}(2)$ is the universal (double) cover of $\mathrm{SO}(4)$, hence $\mathfrak{so}(4)\cong \mathfrak{su}(2)\oplus\mathfrak{su}(2)$ and so $\mathfrak{so}(4)_{\mathbf C}\cong \mathfrak{sl}(2,\mathbf C)\oplus\mathfrak{sl}(2,\mathbf C)$. The last two examples show that Lie groups with isomorphic complexifications may not be isomorphic. Furthermore, the complexifications of Lie groups $\mathrm{SU}(2)$ and $\mathrm{SL}(2,\mathbf C)$ show that complexification is not an idempotent operation, i.e. $(G_{\mathbf C})_{\mathbf C}\not\cong G_{\mathbf C}$ (this is also shown by complexifications of $\mathrm{SO}(3)$ and $\mathrm{SO}^+(1,3)$). Chevalley complexification. Hopf algebra of matrix coefficients. If "G" is a compact Lie group, the *-algebra "A" of matrix coefficients of finite-dimensional unitary representations is a uniformly dense *-subalgebra of "C"("G"), the *-algebra of complex-valued continuous functions on "G". It is naturally a Hopf algebra with comultiplication given by $\displaystyle{\Delta f(g,h)= f(gh).}$ The characters of "A" are the *-homomorphisms of "A" into C. They can be identified with the point evaluations "f" ↦ "f"("g") for "g" in "G" and the comultiplication allows the group structure on "G" to be recovered. The homomorphisms of "A" into C also form a group. It is a complex Lie group and can be identified with the complexification "G"C of "G". The *-algebra "A" is generated by the matrix coefficients of any faithful representation σ of "G". It follows that σ defines a faithful complex analytic representation of "G"C. Invariant theory. The original approach of Chevalley to the complexification of a compact Lie group can be concisely stated within the language of classical invariant theory. Let "G" be a closed subgroup of the unitary group "U"("V") where "V" is a finite-dimensional complex inner product space.
Its Lie algebra consists of all skew-adjoint operators "X" such that exp "tX" lies in "G" for all real "t". Set "W" = "V" ⊕ C with the trivial action of "G" on the second summand. The group "G" acts on "W"⊗"N", with an element "u" acting as "u"⊗"N". The commutant (or centralizer algebra) is denoted by "A""N" = End"G" "W"⊗"N". It is generated as a *-algebra by its unitary operators and its commutant is the *-algebra spanned by the operators "u"⊗"N". The complexification "G"C of "G" consists of all operators "g" in GL("V") such that "g"⊗"N" commutes with "A""N" and "g" acts trivially on the second summand in C. By definition it is a closed subgroup of GL("V"). The defining relations (as a commutant) show that "G" is an algebraic subgroup. Its intersection with "U"("V") coincides with "G", since it is "a priori" a larger compact group for which the irreducible representations stay irreducible and inequivalent when restricted to "G". Since "A""N" is generated by unitaries, an invertible operator "g" lies in "G"C if the unitary operator "u" and positive operator "p" in its polar decomposition "g" = "u" ⋅ "p" both lie in "G"C. Thus "u" lies in "G" and the operator "p" can be written uniquely as "p" = exp "T" with "T" a self-adjoint operator. By the functional calculus for polynomial functions it follows that "h"⊗"N" lies in the commutant of "A""N" if "h" = exp "z""T" with "z" in C. In particular taking "z" purely imaginary, "T" must have the form "iX" with "X" in the Lie algebra of "G". Since every finite-dimensional representation of "G" occurs as a direct summand of "W"⊗"N", it is left invariant by "G"C and thus every finite-dimensional representation of "G" extends uniquely to "G"C. The extension is compatible with the polar decomposition. Finally the polar decomposition implies that "G" is a maximal compact subgroup of "G"C, since a strictly larger compact subgroup would contain all integer powers of a positive operator "p", a closed infinite discrete subgroup. Decompositions in the Chevalley complexification. Cartan decomposition. The decomposition derived from the polar decomposition $\displaystyle{G_{\mathbf{C}} = G\cdot P =G \cdot \exp i\mathfrak{g},}$ where 𝖌 is the Lie algebra of "G", is called the Cartan decomposition of "G"C. The exponential factor "P" is invariant under conjugation by "G" but is not a subgroup. The complexification is invariant under taking adjoints, since "G" consists of unitary operators and "P" of positive operators. Gauss decomposition. The Gauss decomposition is a generalization of the LU decomposition for the general linear group and a specialization of the Bruhat decomposition. For GL("V") it states that with respect to a given orthonormal basis "e"1, ..., "e""n" an element "g" of GL("V") can be factorized in the form $\displaystyle{g=XDY}$ with "X" lower unitriangular, "Y" upper unitriangular and "D" diagonal if and only if all the principal minors of "g" are non-vanishing. In this case "X", "Y" and "D" are uniquely determined. In fact Gaussian elimination shows there is a unique "X" such that "X"−1 "g" is upper triangular. The upper and lower unitriangular matrices, N+ and N−, are closed unipotent subgroups of GL("V"). Their Lie algebras consist of upper and lower strictly triangular matrices. The exponential mapping is a polynomial mapping from the Lie algebra to the corresponding subgroup by nilpotence. The inverse is given by the logarithm mapping which by unipotence is also a polynomial mapping.
In particular there is a correspondence between closed connected subgroups of N± and subalgebras of their Lie algebras. The exponential map is onto in each case, since the polynomial function log ( "e""A" "e""B" ) lies in a given Lie subalgebra if "A" and "B" do and are sufficiently small. The Gauss decomposition can be extended to complexifications of other closed connected subgroups "G" of U("V") by using the root decomposition to write the complexified Lie algebra as $\displaystyle{\mathfrak{g}_{\mathbf{C}} = \mathfrak{n}_- \oplus \mathfrak{t}_{\mathbf{C}} \oplus \mathfrak{n}_+,}$ where 𝖙 is the Lie algebra of a maximal torus "T" of "G" and 𝖓± are the direct sum of the corresponding positive and negative root spaces. In the weight space decomposition of "V" as eigenspaces of "T", 𝖙 acts diagonally, 𝖓+ acts as lowering operators and 𝖓− as raising operators. 𝖓± are nilpotent Lie algebras acting as nilpotent operators; they are each other's adjoints on "V". In particular "T" acts by conjugation on 𝖓+, so that 𝖙C ⊕ 𝖓+ is a semidirect product of a nilpotent Lie algebra by an abelian Lie algebra. By Engel's theorem, if 𝖆 ⊕ 𝖓 is a semidirect product, with 𝖆 abelian and 𝖓 nilpotent, acting on a finite-dimensional vector space "W" with operators in 𝖆 diagonalizable and operators in 𝖓 nilpotent, there is a vector "w" that is an eigenvector for 𝖆 and is annihilated by 𝖓. In fact it is enough to show there is a vector annihilated by 𝖓, which follows by induction on dim 𝖓, since the derived algebra 𝖓' annihilates a non-zero subspace of vectors on which 𝖓 / 𝖓' and 𝖆 act, satisfying the same hypotheses. Applying this argument repeatedly to 𝖙C ⊕ 𝖓+ shows that there is an orthonormal basis "e"1, ..., "e""n" of "V" consisting of eigenvectors of 𝖙C with 𝖓+ acting as upper triangular matrices with zeros on the diagonal. If "N"± and "T"C are the complex Lie groups corresponding to 𝖓± and 𝖙C, then the Gauss decomposition states that the subset $\displaystyle{N_- T_{\mathbf{C}} N_+}$ is a direct product and consists of the elements in "G"C for which the principal minors are non-vanishing. It is open and dense. Moreover, if T denotes the maximal torus in U("V"), $\displaystyle{N_\pm=\mathbf{N}_\pm\cap G_{\mathbf{C}},\,\,\, T_{\mathbf{C}} = \mathbf{T}_{\mathbf{C}}\cap G_{\mathbf{C}}.}$ These results are an immediate consequence of the corresponding results for GL("V"). Bruhat decomposition. If "W" = "N""G"("T") / "T" denotes the Weyl group of "T" and "B" denotes the Borel subgroup "T"C "N"+, the Gauss decomposition is also a consequence of the more precise Bruhat decomposition $\displaystyle{G_{\mathbf{C}} =\bigcup_{\sigma\in W} B\sigma B,}$ decomposing "G"C into a disjoint union of double cosets of "B". The complex dimension of a double coset "BσB" is determined by the length of σ as an element of "W". The dimension is maximized at the Coxeter element and gives the unique open dense double coset. Its inverse conjugates "B" into the Borel subgroup of lower triangular matrices in "G"C. The Bruhat decomposition is easy to prove for SL("n",C). Let "B" be the Borel subgroup of upper triangular matrices and "T"C the subgroup of diagonal matrices. So N("T"C) / "T"C ≅ "S""n". For "g" in SL("n",C), take "b" in "B" so that "bg" maximizes the number of zeros appearing at the beginning of its rows. Because a multiple of one row can be added to another, each row has a different number of zeros in it. Multiplying by a matrix "w" in N("T"C), it follows that "wbg" lies in "B".
For uniqueness, if "w"1"b" = "w"2"b"0, then the entries of "w"1−1"w"2 vanish below the diagonal. So the product lies in "T"C, proving uniqueness. It can further be shown that the expression of an element "g" as "g" = "b"1"σb"2 becomes unique if "b"1 is restricted to lie in the upper unitriangular subgroup "N"σ = "N"+ ∩ "σ N"− "σ"−1. In fact, if "M""σ" = "N"+ ∩ "σ N"+ "σ"−1, this follows from the identity $\displaystyle{N_+=N_\sigma\cdot M_\sigma.}$ The group "N"+ has a natural filtration by normal subgroups "N"+("k") with zeros in the first "k" − 1 superdiagonals and the successive quotients are Abelian. Defining "N""σ"("k") and "M""σ"("k") to be the intersections with "N"+("k"), it follows by decreasing induction on "k" that "N"+("k") = "N""σ"("k") ⋅ "M""σ"("k"). Indeed, "N""σ"("k")"N"+("k" + 1) and "M""σ"("k")"N"+("k" + 1) are specified in "N"+("k") by the vanishing of complementary entries ("i", "j") on the "k"th superdiagonal according to whether σ preserves the order "i" < "j" or not. The Bruhat decomposition for the other classical simple groups can be deduced from the above decomposition using the fact that they are fixed point subgroups of folding automorphisms of SL("n",C). For Sp("n",C), let "J" be the "n" × "n" matrix with 1's on the antidiagonal and 0's elsewhere and set $\displaystyle{A=\begin{pmatrix} 0 & J\\ -J & 0\end{pmatrix}.}$ Then Sp("n",C) is the fixed point subgroup of the involution "θ"("g") = "A" ("g""t")−1 "A"−1 of SL(2"n",C). It leaves the subgroups "N"±, "T"C and "B" invariant. If the basis elements are indexed by "n", "n"−1, ..., 1, −1, ..., −"n", then the Weyl group of Sp("n",C) consists of the σ satisfying "σ"(−"j") = −"σ"("j"), i.e. commuting with θ. Analogues of "B", "T"C and "N"± are defined by intersection with Sp("n",C), i.e. as fixed points of θ. The uniqueness of the decomposition "g" = "nσb" = "θ"("n") "θ"("σ") "θ"("b") implies the Bruhat decomposition for Sp("n",C). The same argument works for SO("n",C). It can be realised as the fixed points of "ψ"("g") = "B" ("g""t")−1 "B"−1 in SL("n",C) where "B" = "J". Iwasawa decomposition. The Iwasawa decomposition $\displaystyle{G_{\mathbf{C}} = G\cdot A \cdot N}$ gives a decomposition for "G"C for which, unlike the Cartan decomposition, the direct factor "A" ⋅ "N" is a closed subgroup, but it is no longer invariant under conjugation by "G". It is the semidirect product of the nilpotent subgroup "N" by the Abelian subgroup "A". For U("V") and its complexification GL("V"), this decomposition can be derived as a restatement of the Gram–Schmidt orthonormalization process. In fact let "e"1, ..., "e""n" be an orthonormal basis of "V" and let "g" be an element in GL("V"). Applying the Gram–Schmidt process to "ge"1, ..., "ge""n", there is a unique orthonormal basis "f"1, ..., "f""n" and positive constants "a""i" such that $\displaystyle{f_i= a_i ge_i + \sum_{j<i} b_{ij}\, ge_j}$ for suitable coefficients "b""ij". If "k" is the unitary taking ("e""i") to ("f""i"), it follows that "g"−1"k" lies in the subgroup AN, where A is the subgroup of positive diagonal matrices with respect to ("e""i") and N is the subgroup of upper unitriangular matrices. Using the notation for the Gauss decomposition, the subgroups in the Iwasawa decomposition for "G"C are defined by $\displaystyle{A=\exp i\mathfrak{t} = \mathbf{A} \cap G_{\mathbf{C}}, \,\,\, N=\exp \mathfrak{n}_+=\mathbf{N} \cap G_{\mathbf{C}}.}$ Since the decomposition is direct for GL("V"), it is enough to check that "G"C = "GAN". From the properties of the Iwasawa decomposition for GL("V"), the map "G" × "A" × "N" is a diffeomorphism onto its image in "G"C, which is closed.
On the other hand, the dimension of the image is the same as the dimension of "G"C, so it is also open. So "G"C = "GAN" because "G"C is connected. There is a method for explicitly computing the elements in the decomposition. For "g" in "G"C set "h" = "g"*"g". This is a positive self-adjoint operator so its principal minors do not vanish. By the Gauss decomposition, it can therefore be written uniquely in the form "h" = "XDY" with "X" in "N"−, "D" in "T"C and "Y" in "N"+. Since "h" is self-adjoint, uniqueness forces "Y" = "X"*. Since it is also positive, "D" must lie in "A" and have the form "D" = exp "iT" for some unique "T" in 𝖙. Let "a" = exp "iT"/2 be its unique square root in "A". Set "n" = "Y" and "k" = "g" "n"−1 "a"−1. Then "k" is unitary, so is in "G", and "g" = "kan". Complex structures on homogeneous spaces. The Iwasawa decomposition can be used to describe complex structures on the "G"-orbits in complex projective space of highest weight vectors of finite-dimensional irreducible representations of "G". In particular the identification between "G" / "T" and "G"C / "B" can be used to formulate the Borel–Weil theorem. It states that each irreducible representation of "G" can be obtained by holomorphic induction from a character of "T", or equivalently that it is realized in the space of sections of a holomorphic line bundle on "G" / "T". The closed connected subgroups of "G" containing "T" are described by Borel–de Siebenthal theory. They are exactly the centralizers of tori "S" ⊆ "T". Since every torus "S" is generated topologically by a single element "x", these are the same as the centralizers C"G"("x") of such elements. By a result of Hopf, C"G"("x") is always connected: indeed any element "y" of C"G"("x") is, along with "S", contained in some maximal torus, which is necessarily contained in C"G"("x"). Given an irreducible finite-dimensional representation "V"λ with highest weight vector "v" of weight "λ", the stabilizer of C "v" in "G" is a closed subgroup "H". Since "v" is an eigenvector of "T", "H" contains "T". The complexification "G"C also acts on "V"λ and the stabilizer is a closed complex subgroup "P" containing "T"C. Since "v" is annihilated by every raising operator corresponding to a positive root "α", "P" contains the Borel subgroup "B". The vector "v" is also a highest weight vector for the copy of sl2 corresponding to "α", so it is annihilated by the lowering operator generating 𝖌−"α" if ("λ", "α") = 0. The Lie algebra 𝖕 of "P" is the direct sum of 𝖙C and the root spaces annihilating "v", so that $\displaystyle{\mathfrak{p}=\mathfrak{b}\oplus \bigoplus_{(\alpha,\lambda)=0} \mathfrak{g}_{-\alpha}.}$ The Lie algebra of "H" = "P" ∩ "G" is given by 𝖕 ∩ 𝖌. By the Iwasawa decomposition "G"C = "GAN". Since "AN" fixes C "v", the "G"-orbit of "v" in the complex projective space of "V""λ" coincides with the "G"C orbit and $\displaystyle{G/H=G_{\mathbf{C}}/P.}$ In particular $\displaystyle{G/T=G_{\mathbf{C}}/B.}$ Using the identification of the Lie algebra of "T" with its dual, "H" equals the centralizer of λ in "G", and hence is connected. The group "P" is also connected. In fact the space "G" / "H" is simply connected, since it can be written as the quotient of the (compact) universal covering group of the compact semisimple group "G" / "Z" by a connected subgroup, where "Z" is the center of "G". If "P"o is the identity component of "P", "G"C / "P" has "G"C / "P"o as a covering space, so that "P" = "P"o. The homogeneous space "G"C / "P" has a complex structure, because "P" is a complex subgroup.
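For GL("n",C) the recipe just described ("h" = "g"*"g", Gauss decomposition of "h", then "g" = "kan") can be carried out with a Cholesky factorization, which produces the Gauss factors of the positive matrix "h". A numerical sketch (ours):

import numpy as np

def iwasawa(g):
    """g = k a n with k unitary, a positive diagonal, n upper unitriangular:
    h = g* g = X D X*, a = D^{1/2}, n = X*, k = g n^{-1} a^{-1}.
    The Cholesky factor of h is exactly X D^{1/2}."""
    h = g.conj().T @ g                       # positive self-adjoint
    c = np.linalg.cholesky(h)                # h = c c*, c lower triangular
    d = np.diag(c).real                      # positive diagonal of c
    x = c / d                                # lower unitriangular: c = x diag(d)
    a = np.diag(d)                           # a^2 = D
    n = x.conj().T                           # n = Y = X*, upper unitriangular
    k = g @ np.linalg.inv(n) @ np.linalg.inv(a)
    return k, a, n

g = np.array([[1, 2], [0, 1j]], dtype=complex)
k, a, n = iwasawa(g)
assert np.allclose(k @ a @ n, g)
assert np.allclose(k.conj().T @ k, np.eye(2))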
The orbit in complex projective space is closed in the Zariski topology by Chow's theorem, so is a smooth projective variety. The Borel–Weil theorem and its generalizations are discussed in this context in , , and . The parabolic subgroup "P" can also be written as a union of double cosets of "B" $\displaystyle{P=\bigcup_{\sigma\in W_\lambda} B\sigma B,}$ where "W""λ" is the stabilizer of λ in the Weyl group "W". It is generated by the reflections corresponding to the simple roots orthogonal to λ. Noncompact real forms. There are other closed subgroups of the complexification of a compact connected Lie group "G" which have the same complexified Lie algebra. These are the other real forms of "G"C. Involutions of simply connected compact Lie groups. If "G" is a simply connected compact Lie group and σ is an automorphism of order 2, then the fixed point subgroup "K" = "G"σ is "automatically connected". (In fact this is true for any automorphism of "G", as shown for inner automorphisms by Steinberg and in general by Borel.) This can be seen most directly when the involution σ corresponds to a Hermitian symmetric space. In that case σ is inner and implemented by an element "z" lying in a one-parameter subgroup exp "tT" contained in the center of "G"σ. The innerness of σ implies that "K" contains a maximal torus of "G", so has maximal rank. On the other hand, the centralizer of the subgroup generated by the torus "S" of elements exp "tT" is connected, since if "x" is any element of this centralizer there is a maximal torus containing "x" and "S", which lies in the centralizer. This centralizer contains "K", since "S" is central in "K"; and it is contained in "K", since "z" lies in "S". So "K" is the centralizer of "S" and hence connected. In particular "K" contains the center of "G". For a general involution σ, the connectedness of "G"σ can be seen as follows. The starting point is the Abelian version of the result: if "T" is a maximal torus of a simply connected group "G" and σ is an involution leaving invariant "T" and a choice of positive roots (or equivalently a Weyl chamber), then the fixed point subgroup "T"σ is connected. In fact the kernel of the exponential map from $\mathfrak{t}$ onto "T" is a lattice Λ with a Z-basis indexed by simple roots, which σ permutes. Splitting up according to orbits, "T" can be written as a product of factors T on which σ acts trivially and factors T × T on which σ interchanges the coordinates. The fixed point subgroup just corresponds to taking the diagonal subgroups in the second case, so is connected. Now let "x" be any element fixed by σ, let "S" be a maximal torus in C"G"("x")σ and let "T" be the identity component of C"G"("x", "S"). Then "T" is a maximal torus in "G" containing "x" and "S". It is invariant under σ and the identity component of "T"σ is "S". In fact since "x" and "S" commute, they are contained in a maximal torus which, because it is connected, must lie in "T". By construction "T" is invariant under σ. The identity component of "T"σ contains "S", lies in C"G"("x")σ and centralizes "S", so it equals "S". But "S" is central in "T", so "T" must be Abelian and hence a maximal torus. For σ acts as multiplication by −1 on the Lie algebra $\mathfrak{t}\ominus \mathfrak{s}$, so it and therefore also $\mathfrak{t}$ are Abelian. The proof is completed by showing that σ preserves a Weyl chamber associated with "T". For then "T"σ is connected so must equal "S". Hence "x" lies in "S". Since "x" was arbitrary, "G"σ must therefore be connected.
To produce a Weyl chamber invariant under σ, note that there is no root space $\mathfrak{g}_\alpha$ on which both "x" and "S" act trivially, for this would contradict the fact that C"G"("x", "S") has the same Lie algebra as "T". Hence there must be an element "s" in "S" such that "t" = "xs" acts non-trivially on each root space. In this case "t" is a "regular element" of "T": the identity component of its centralizer in "G" equals "T". There is a unique Weyl alcove "A" in $\mathfrak{t}$ such that "t" lies in exp "A" and 0 lies in the closure of "A". Since "t" is fixed by σ, the alcove is left invariant by σ and hence so also is the Weyl chamber "C" containing it. Conjugations on the complexification. Let "G" be a simply connected compact Lie group with complexification "G"C. The map "c"("g") = ("g"*)−1 defines an automorphism of "G"C as a real Lie group with "G" as fixed point subgroup. It is conjugate-linear on $\mathfrak{g}_{\mathbf{C}}$ and satisfies "c"2 = id. Such automorphisms of either "G"C or $\mathfrak{g}_{\mathbf{C}}$ are called conjugations. Since "G"C is also simply connected, any conjugation "c"1 on $\mathfrak{g}_{\mathbf{C}}$ corresponds to a unique automorphism "c"1 of "G"C. The classification of conjugations "c"0 reduces to that of involutions σ of "G" because given a "c"1 there is an automorphism φ of the complex group "G"C such that $\displaystyle{c_0=\varphi\circ c_1\circ \varphi^{-1}}$ commutes with "c". The conjugation "c"0 then leaves "G" invariant and restricts to an involutive automorphism σ. By simple connectivity the same is true at the level of Lie algebras. At the Lie algebra level "c"0 can be recovered from σ by the formula $\displaystyle{c_0(X+iY)=\sigma(X)- i\sigma(Y)}$ for "X", "Y" in $\mathfrak{g}$. To prove the existence of φ, let ψ = "c"1"c", an automorphism of the complex group "G"C. On the Lie algebra level it defines a self-adjoint operator for the complex inner product $\displaystyle{(X,Y)=-B(X,c(Y)),}$ where "B" is the Killing form on $\mathfrak{g}_{\mathbf{C}}$. Thus ψ2 is a positive operator and an automorphism along with all its real powers. In particular take $\displaystyle{\varphi=(\psi^2)^{1/4}.}$ It satisfies $\displaystyle{c_0c=\varphi c_1 \varphi^{-1} c=\varphi cc_1 \varphi=(\psi^2)^{1/2} \psi^{-1} =\varphi^{-1} cc_1 \varphi^{-1}=c \varphi c_1\varphi^{-1}=cc_0.}$ Cartan decomposition in a real form. For the complexification "G"C, the Cartan decomposition is described above. Derived from the polar decomposition in the complex general linear group, it gives a diffeomorphism $\displaystyle{G_{\mathbf{C}} = G\cdot \exp i\mathfrak{g} = G\cdot P = P\cdot G.}$ On "G"C there is a conjugation operator "c" corresponding to "G" as well as an involution σ commuting with "c". Let "c"0 = "c" σ and let "G"0 be the fixed point subgroup of "c"0. It is closed in the matrix group "G"C and therefore a Lie group. The involution σ acts on both "G" and "G"0. For the Lie algebra of "G" there is a decomposition $\displaystyle{\mathfrak{g}=\mathfrak{k} \oplus \mathfrak{p}}$ into the +1 and −1 eigenspaces of σ. The fixed point subgroup "K" of σ in "G" is connected since "G" is simply connected. Its Lie algebra is the +1 eigenspace $\mathfrak{k}$. The Lie algebra of "G"0 is given by $\displaystyle{\mathfrak{g}_0=\mathfrak{k} \oplus i\mathfrak{p}}$ and the fixed point subgroup of σ in "G"0 is again "K", so that "G" ∩ "G"0 = "K".
In "G"0, there is a Cartan decomposition $\displaystyle{G_0=K\cdot \exp i\mathfrak{p} =K\cdot P_0 = P_0\cdot K}$ which is again a diffeomorphism onto the direct and corresponds to the polar decomposition of matrices. It is the restriction of the decomposition on "G"C. The product gives a diffeomorphism onto a closed subset of "G"0. To check that it is surjective, for "g" in "G"0 write "g" = "u" ⋅ "p" with "u" in "G" and "p" in "P". Since "c"0 "g" = "g", uniqueness implies that σ"u" = "u" and σ"p" = "p"−1. Hence "u" lies in "K" and "p" in "P"0. The Cartan decomposition in "G"0 shows that "G"0 is connected, simply connected and noncompact, because of the direct factor "P"0. Thus "G"0 is a noncompact real semisimple Lie group. Moreover, given a maximal Abelian subalgebra $\mathfrak{a}$ in $\mathfrak{p}$, "A" = exp $\mathfrak{a}$ is a toral subgroup such that σ("a") = "a"−1 on "A"; and any two such $\mathfrak{a}$'s are conjugate by an element of "K". The properties of "A" can be shown directly. "A" is closed because the closure of "A" is a toral subgroup satisfying σ("a") = "a"−1, so its Lie algebra lies in $\mathfrak{m}$ and hence equals $\mathfrak{a}$ by maximality. "A" can be generated topologically by a single element exp "X", so $\mathfrak{a}$ is the centralizer of "X" in $\mathfrak{m}$. In the "K"-orbit of any element of $\mathfrak{m}$ there is an element "Y" such that (X,Ad "k" Y) is minimized at "k" = 1. Setting "k" = exp "tT" with "T" in $\mathfrak{k}$, it follows that ("X",["T","Y"]) = 0 and hence ["X","Y"] = 0, so that "Y" must lie in $\mathfrak{a}$. Thus $\mathfrak{m}$ is the union of the conjugates of $\mathfrak{a}$. In particular some conjugate of "X" lies in any other choice of $\mathfrak{a}$, which centralizes that conjugate; so by maximality the only possibilities are conjugates of $\mathfrak{a}$. A similar statements hold for the action of "K" on $\mathfrak{a}_0=i\mathfrak{a}$ in $\mathfrak{p}_0$. Morevoer, from the Cartan decomposition for "G"0, if "A"0 = exp $\mathfrak{a}_0$, then $\displaystyle{G_0=KA_0K.}$ Notes.
In mathematics, a group is called boundedly generated if it can be expressed as a finite product of cyclic subgroups. The property of bounded generation is also closely related to the congruence subgroup problem (see ). Definitions. A group "G" is called "boundedly generated" if there exists a finite subset "S" of "G" and a positive integer "m" such that every element "g" of "G" can be represented as a product of at most "m" powers of the elements of "S": $g = s_1^{k_1} \cdots s_m^{k_m},$ where $s_i \in S$ and $k_i$ are integers. The finite set "S" generates "G", so a boundedly generated group is finitely generated. An equivalent definition can be given in terms of cyclic subgroups. A group "G" is called "boundedly generated" if there is a finite family "C"1, …, "C""M" of not necessarily distinct cyclic subgroups such that "G" = "C"1…"C""M" as a set. Properties. A "pseudocharacter" on a discrete group "G" is defined to be a real-valued function "f" on "G" such that "f"("gh") − "f"("g") − "f"("h") is uniformly bounded and "f"("g""n") = "n"·"f"("g"). Free groups are not boundedly generated. Several authors have stated in the mathematical literature that it is obvious that finitely generated free groups are not boundedly generated. This section contains various obvious and less obvious ways of proving this. Some of the methods, which touch on bounded cohomology, are important because they are geometric rather than algebraic, so can be applied to a wider class of groups, for example Gromov-hyperbolic groups. Since for any "n" ≥ 2, the free group on 2 generators F2 contains the free group on "n" generators F"n" as a subgroup of finite index (in fact of index "n" − 1), once one non-cyclic free group on finitely many generators is known to be not boundedly generated, this will be true for all of them. Similarly, since SL2(Z) contains F2 as a subgroup of index 12, it is enough to consider SL2(Z). In other words, to show that no F"n" with "n" ≥ 2 has bounded generation, it is sufficient to prove this for one of them, or even just for SL2(Z). Burnside counterexamples. Since bounded generation is preserved under taking homomorphic images, if a single finitely generated group with at least two generators is known to be not boundedly generated, this will be true for the free group on the same number of generators, and hence for all free groups. To show that no (non-cyclic) free group has bounded generation, it is therefore enough to produce one example of a finitely generated group which is not boundedly generated, and any finitely generated infinite torsion group will work. The existence of such groups constitutes Golod and Shafarevich's negative solution of the generalized Burnside problem in 1964; later, other explicit examples of infinite finitely generated torsion groups were constructed by Aleshin, Olshanskii, and Grigorchuk, using automata. Consequently, free groups of rank at least two are not boundedly generated. Symmetric groups. The symmetric group S"n" can be generated by two elements, a 2-cycle and an "n"-cycle, so that it is a quotient group of F2. On the other hand, it is easy to show that the maximal order "M"("n") of an element in S"n" satisfies log "M"("n") ≤ "n"/"e" where "e" is Euler's number (Edmund Landau proved the more precise asymptotic estimate log "M"("n") ~ ("n" log "n")1/2).
In fact if the cycles in a cycle decomposition of a permutation have length "N"1, ..., "N""k" with "N"1 + ··· + "N""k" = "n", then the order of the permutation divides the product "N"1 ··· "N""k", which in turn is bounded by ("n"/"k")"k", using the inequality of arithmetic and geometric means. On the other hand, ("n"/"x")"x" is maximized when "x" = "n"/"e", which gives the bound "M"("n") ≤ "e""n"/"e". If F2 could be written as a product of "m" cyclic subgroups, then necessarily "n"! would have to be less than or equal to "M"("n")"m" for all "n", contradicting Stirling's asymptotic formula. Hyperbolic geometry. There is also a simple geometric proof that "G" = SL2(Z) is not boundedly generated. It acts by Möbius transformations on the upper half-plane H, with the Poincaré metric. Any compactly supported 1-form α on a fundamental domain of "G" extends uniquely to a "G"-invariant 1-form on H. If "z" is in H and γ is the geodesic from "z" to "g"("z"), the function defined by $F(g)\equiv F_{\alpha,z}(g)=\int_{\gamma}\, \alpha$ satisfies the first condition for a pseudocharacter since by the Stokes theorem $F(gh) - F(g)-F(h) = \int_{\Delta}\, d\alpha,$ where Δ is the geodesic triangle with vertices "z", "g"("z") and "h"−1("z"), and geodesic triangles have area bounded by π. The homogenized function $f_\alpha(g) = \lim_{n\rightarrow \infty} F_{\alpha,z}(g^n)/n$ defines a pseudocharacter, depending only on α. As is well known from the theory of dynamical systems, any orbit ("g""k"("z")) of a hyperbolic element "g" has limit set consisting of two fixed points on the extended real axis; it follows that the geodesic segment from "z" to "g"("z") cuts through only finitely many translates of the fundamental domain. It is therefore easy to choose α so that "f"α equals one on a given hyperbolic element and vanishes on a finite set of other hyperbolic elements with distinct fixed points. Since "G" therefore has an infinite-dimensional space of pseudocharacters, it cannot be boundedly generated. Dynamical properties of hyperbolic elements can similarly be used to prove that any non-elementary Gromov-hyperbolic group is not boundedly generated. Brooks pseudocharacters. Robert Brooks gave a combinatorial scheme to produce pseudocharacters of any free group F"n"; this scheme was later shown to yield an infinite-dimensional family of pseudocharacters (see ). Epstein and Fujiwara later extended these results to all non-elementary Gromov-hyperbolic groups. Gromov boundary. This simple folklore proof uses dynamical properties of the action of hyperbolic elements on the Gromov boundary of a Gromov-hyperbolic group. For the special case of the free group F"n", the boundary (or space of ends) can be identified with the space "X" of semi-infinite reduced words "g"1 "g"2 ··· in the generators and their inverses. It gives a natural compactification of the tree, given by the Cayley graph with respect to the generators. A sequence of semi-infinite words converges to another such word provided that the initial segments agree after a certain stage, so that "X" is compact (and metrizable). The free group acts by left multiplication on the semi-infinite words. Moreover, any element "g" in F"n" has exactly two fixed points "g" ±∞, namely the reduced infinite words given by the limits of "g" "n" as "n" tends to ±∞. Furthermore, "g" "n"·"w" tends to "g" ±∞ as "n" tends to ±∞ for any semi-infinite word "w"; and more generally if "w""n" tends to "w" ≠ "g" ±∞, then "g" "n"·"w""n" tends to "g" +∞ as "n" tends to ∞.
If F"n" were boundedly generated, it could be written as a product of cyclic groups C"i" generated by elements "h""i". Let "X"0 be the countable subset given by the finitely many F"n"-orbits of the fixed points "h""i" ±∞, the fixed points of the "h""i" and all their conjugates. Since "X" is uncountable, there is an element of "g" with fixed points outside "X"0 and a point "w" outside "X"0 different from these fixed points. Then for some subsequence ("g""m") of ("g""n") "g""m" = "h"1"n"("m",1) ··· "h""k""n"("m","k"), with each "n"("m","i" ) constant or strictly monotone. On the one hand, by successive use of the rules for computing limits of the form "h" "n"·"w""n", the limit of the right hand side applied to "x" is necessarily a fixed point of one of the conjugates of the "h""i"'s. On the other hand, this limit also must be "g" +∞, which is not one of these points, a contradiction.
Mathematical structure The affine symmetric groups are a family of mathematical structures that describe the symmetries of the number line and the regular triangular tiling of the plane, as well as related higher-dimensional objects. In addition to this geometric description, the affine symmetric groups may be defined in other ways: as collections of permutations (rearrangements) of the integers (..., −2, −1, 0, 1, 2, ...) that are periodic in a certain sense, or in purely algebraic terms as a group with certain generators and relations. They are studied as part of the fields of combinatorics and representation theory. A finite symmetric group consists of all permutations of a finite set. Each affine symmetric group is an infinite extension of a finite symmetric group. Many important combinatorial properties of the finite symmetric groups can be extended to the corresponding affine symmetric groups. Permutation statistics such as descents and inversions can be defined in the affine case. As in the finite case, the natural combinatorial definitions for these statistics also have a geometric interpretation. The affine symmetric groups have close relationships with other mathematical objects, including juggling patterns and certain complex reflection groups. Many of their combinatorial and geometric properties extend to the broader family of affine Coxeter groups. Definitions. The affine symmetric group may be equivalently defined as an abstract group by generators and relations, or in terms of concrete geometric and combinatorial models. Algebraic definition. One way of defining groups is by generators and relations. In this type of definition, generators are a subset of group elements that, when combined, produce all other elements. The relations of the definition are a system of equations that determine when two combinations of generators are equal. In this way, the affine symmetric group $\widetilde{S}_n$ is generated by a set <math display="block"> s_0, s_1, \ldots, s_{n - 1}$ of n elements that satisfy the following relations when $ n \geq 3 $: (1) $s_i^2 = 1$ (each generator is an involution), (2) $s_i s_j = s_j s_i$ if j is not "i" − 1 or "i" + 1, and (3) $s_i s_{i + 1} s_i = s_{i + 1} s_i s_{i + 1}$. In the relations above, indices are taken modulo n, so that the third relation includes as a particular case $ s_0s_{n - 1}s_0 = s_{n - 1}s_0s_{n - 1} $. (The second and third relation are sometimes called the braid relations.) When $ n = 2$, the affine symmetric group $\widetilde{S}_2$ is the infinite dihedral group generated by two elements $s_0, s_1$ subject only to the relations $s_0^2 = s_1^2 = 1$. These relations can be rewritten in the special form that defines the Coxeter groups, so the affine symmetric groups are Coxeter groups, with the $s_i$ as their Coxeter generating sets. Each Coxeter group may be represented by a Coxeter–Dynkin diagram, in which vertices correspond to generators and edges encode the relations between them. For $ n \geq 3$, the Coxeter–Dynkin diagram of $\widetilde{S}_n$ is the n-cycle (where the edges correspond to the relations between pairs of consecutive generators and the absence of an edge between other pairs of generators indicates that they commute), while for $ n = 2$ it consists of two nodes joined by an edge labeled $\infty$. Geometric definition. In the Euclidean space $\R^{n}$ with coordinates $(x_1, \ldots, x_n)$, the set V of points for which $x_1 + x_2 + \cdots + x_n = 0$ forms a (hyper)plane, an ("n" − 1)-dimensional subspace.
For every pair of distinct elements i and j of $\{1, \ldots, n\}$ and every integer k, the set of points in V that satisfy $x_i - x_j = k$ forms an ("n" − 2)-dimensional subspace within V, and there is a unique reflection of V that fixes this subspace. Then the affine symmetric group $\widetilde{S}_n$ can be realized geometrically as a collection of maps from V to itself, the compositions of these reflections. Inside V, the subset of points with integer coordinates forms the "root lattice", Λ. It is the set of all the integer vectors $(a_1, \ldots, a_n)$ such that $a_1 + \cdots + a_n = 0$. Each reflection preserves this lattice, and so the lattice is preserved by the whole group. The fixed subspaces of these reflections divide V into congruent simplices, called "alcoves". The situation when $n = 3$ is shown in the figure; in this case, the root lattice is a triangular lattice, the reflecting lines divide V into equilateral triangle alcoves, and the roots are the centers of nonoverlapping hexagons made up of six triangular alcoves. To translate between the geometric and algebraic definitions, one fixes an alcove and considers the n hyperplanes that form its boundary. The reflections through these boundary hyperplanes may be identified with the Coxeter generators. In particular, there is a unique alcove (the "fundamental alcove") consisting of points $(x_1, \ldots, x_n)$ such that $ x_1 \geq x_2 \geq \cdots \geq x_n \geq x_1 - 1$, which is bounded by the hyperplanes $ x_1 - x_2 = 0,$ $x_2 - x_3 = 0,$ ..., and $ x_1 - x_n = 1,$ illustrated in the case $n = 3$. For $i = 1, \ldots, n- 1$, one may identify the reflection through $x_i - x_{i + 1} = 0$ with the Coxeter generator $s_i$, and also identify the reflection through $ x_1 - x_n = 1$ with the generator $ s_0 = s_n$. Combinatorial definition. The elements of the affine symmetric group may be realized as a group of periodic permutations of the integers. In particular, say that a function $u \colon \Z \to \Z$ is an "affine permutation" if it is a bijection such that $u(x + n) = u(x) + n$ for all integers x, and $u(1) + u(2) + \cdots + u(n) = 1 + 2 + \cdots + n$. For every affine permutation, and more generally every shift-equivariant bijection, the numbers $u(1), \ldots, u(n)$ must all be distinct modulo n. An affine permutation is uniquely determined by its "window notation" $[u(1), \ldots, u(n)]$, because all other values of $u$ can be found by shifting these values. Thus, affine permutations may also be identified with tuples $[u(1), \ldots, u(n)]$ of integers that contain one element from each congruence class modulo n and sum to $1 + 2 + \cdots + n$. To translate between the combinatorial and algebraic definitions, for $i = 1, \ldots, n- 1$ one may identify the Coxeter generator $s_i$ with the affine permutation that has window notation $[1, 2, \ldots, i - 1, i + 1, i, i + 2, \ldots, n ] $, and also identify the generator $ s_0 = s_n$ with the affine permutation $[0, 2, 3, \ldots, n - 2, n - 1, n + 1] $. More generally, every reflection (that is, a conjugate of one of the Coxeter generators) can be described uniquely as follows: for distinct integers i, j in $\{1, \ldots, n\}$ and arbitrary integer k, it maps i to "j" − "kn", maps j to "i" + "kn", and fixes all inputs not congruent to i or j modulo n. Representation as matrices. Affine permutations can be represented as infinite periodic permutation matrices. If $u : \mathbb{Z} \to \mathbb{Z}$ is an affine permutation, the corresponding matrix has entry 1 at position $(i, u(i))$ in the infinite grid $ \mathbb{Z} \times \mathbb{Z}$ for each integer i, and all other entries are equal to 0.
Since u is a bijection, the resulting matrix contains exactly one 1 in every row and column. The periodicity condition on the map u ensures that the entry at position $(a, b)$ is equal to the entry at position $(a + n, b + n)$ for every pair of integers $(a, b)$. For example, a portion of the matrix for the affine permutation $[2, 0, 4] \in \widetilde{S}_3$ is shown in the figure. In row 1, there is a 1 in column 2; in row 2, there is a 1 in column 0; and in row 3, there is a 1 in column 4. The rest of the entries in those rows and columns are all 0, and all the other entries in the matrix are fixed by the periodicity condition. Relationship to the finite symmetric group. The affine symmetric group $\widetilde{S}_n$ contains the finite symmetric group $S_n$ of permutations on $n$ elements as both a subgroup and a quotient group. These connections allow a direct translation between the combinatorial and geometric definitions of the affine symmetric group. As a subgroup. There is a canonical way to choose a subgroup of $\widetilde{S}_n$ that is isomorphic to the finite symmetric group $S_n$. In terms of the algebraic definition, this is the subgroup of $\widetilde{S}_n$ generated by $s_1, \ldots, s_{n - 1}$ (excluding the simple reflection $s_0 = s_n$). Geometrically, this corresponds to the subgroup of transformations that fix the origin, while combinatorially it corresponds to the window notations for which $\{u(1), \ldots, u(n) \} = \{1, 2, \ldots, n \}$ (that is, in which the window notation is the one-line notation of a finite permutation). If $ u = [u(1), u(2), \ldots, u(n)]$ is the window notation of an element of this standard copy of $S_n \subset \widetilde{S}_n$, its action on the hyperplane V in $\R^n$ is given by permutation of coordinates: $ (x_1, x_2, \ldots, x_n) \cdot u = (x_{u(1)}, x_{u(2)}, \ldots, x_{u(n)})$. (In this article, the geometric action of permutations and affine permutations is on the right; thus, if u and v are two affine permutations, the action of "uv" on a point is given by first applying u, then applying v.) There are also many nonstandard copies of $S_n$ contained in $\widetilde{S}_n$. A geometric construction is to pick any point a in Λ (that is, an integer vector whose coordinates sum to 0); the subgroup $(\widetilde{S}_n)_a$ of $\widetilde{S}_n$ of isometries that fix a is isomorphic to $S_n$. As a quotient. There is a simple map (technically, a surjective group homomorphism) π from $\widetilde{S}_n$ onto the finite symmetric group $S_n$. In terms of the combinatorial definition, an affine permutation can be mapped to a permutation by reducing the window entries modulo n to elements of $\{1, 2, \ldots, n\}$, leaving the one-line notation of a permutation. In this article, the image $\pi(u)$ of an affine permutation u is called the "underlying permutation" of u. The map π sends the Coxeter generator $ s_0 = [0, 2, 3, 4, \ldots, n - 2, n - 1, n + 1]$ to the permutation whose one-line notation and cycle notation are $ [n , 2 , 3 , 4, \ldots , n - 2 , n - 1 , 1]$ and $(1 \; n)$, respectively. The kernel of π is by definition the set of affine permutations whose underlying permutation is the identity. The window notations of such affine permutations are of the form $[1 - a_1 \cdot n, 2 - a_2 \cdot n, \ldots, n - a_n \cdot n]$, where $(a_1, a_2, \ldots, a_n)$ is an integer vector such that $a_1 + a_2 + \ldots + a_n = 0$, that is, where $(a_1, \ldots, a_n) \in \Lambda$. 
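The combinatorial definitions above translate directly into code. A short sketch (ours; the helper names are illustrative) extends a window to the full bijection of Z, checks the defining conditions, and computes the underlying permutation under π:

def make_u(window):
    """Extend a window [u(1), ..., u(n)] to the affine permutation u: Z -> Z
    using the periodicity u(x + n) = u(x) + n."""
    n = len(window)
    def u(x):
        q, r = divmod(x - 1, n)
        return window[r] + q * n
    return u

def is_affine_permutation(window):
    n = len(window)
    return (sorted(w % n for w in window) == list(range(n))   # one per class mod n
            and sum(window) == n * (n + 1) // 2)              # sums to 1 + ... + n

def underlying(window):
    """One-line notation of pi(u): reduce the window entries modulo n
    into {1, ..., n}."""
    n = len(window)
    return [(w - 1) % n + 1 for w in window]

win = [2, 0, 4]                              # the running example in S~_3
assert is_affine_permutation(win)
assert make_u(win)(4) == 5                   # u(4) = u(1) + 3
assert underlying(win) == [2, 3, 1]
assert underlying([-2, 5, 3]) == [1, 2, 3]   # a translation: in the kernel of pi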
Geometrically, this kernel consists of the translations, the isometries that shift the entire space V without rotating or reflecting it. In an abuse of notation, the symbol Λ is used in this article for all three of these sets (integer vectors in V, affine permutations with underlying permutation the identity, and translations); in all three settings, the natural group operation turns Λ into an abelian group, generated freely by the "n" − 1 vectors $\{(1, -1, 0, \ldots, 0), (0, 1, -1, \ldots, 0), \ldots, (0, \ldots, 0, 1, -1)\}$. Connection between the geometric and combinatorial definitions. The affine symmetric group $\widetilde{S}_n$ has Λ as a normal subgroup, and is isomorphic to the semidirect product <math display = block> \widetilde{S}_n \cong S_n \ltimes \Lambda $ of this subgroup with the finite symmetric group $S_n$, where the action of $S_n$ on Λ is by permutation of coordinates. Consequently, every element u of $\widetilde{S}_n$ has a unique realization as a product $ u = r \cdot t $ where $r$ is a permutation in the standard copy of $S_n$ in $\widetilde{S}_n$ and $t$ is a translation in Λ. This point of view allows for a direct translation between the combinatorial and geometric definitions of $\widetilde{S}_n$: if one writes $ [u(1), \ldots, u(n)] = [r_1 - a_1 \cdot n, \ldots, r_n - a_n \cdot n]$ where $r = [r_1, \ldots, r_n] = \pi(u)$ and $(a_1, a_2, \ldots, a_n) \in \Lambda$ then the affine permutation u corresponds to the rigid motion of V defined by <math display = block> (x_1, \ldots, x_n) \cdot u = \left(x_{r(1)} + a_1, \ldots, x_{r(n)} + a_n \right).$ Furthermore, as with every affine Coxeter group, the affine symmetric group acts transitively and freely on the set of alcoves: for each two alcoves, a unique group element takes one alcove to the other. Hence, making an arbitrary choice of alcove $A_0$ places the group in one-to-one correspondence with the alcoves: the identity element corresponds to $A_0$, and every other group element g corresponds to the alcove $A = A_0 \cdot g$ that is the image of $A_0$ under the action of g. Example: "n" = 2. Algebraically, $\widetilde{S}_2$ is the infinite dihedral group, generated by two generators $s_0, s_1$ subject to the relations $s_0^2 = s_1^2 = 1$. Every other element of the group can be written as an alternating product of copies of $s_0$ and $s_1$. Combinatorially, the affine permutation $s_1$ has window notation $[2, 1]$, corresponding to the bijection $2k \mapsto 2k - 1, 2k - 1 \mapsto 2k$ for every integer k. The affine permutation $s_0$ has window notation $[0, 3]$, corresponding to the bijection $2k \mapsto 2k + 1, 2k + 1 \mapsto 2k$ for every integer k. Other elements have the following window notations: $\begin{align} \overbrace{s_0 s_1 \cdots s_0 s_1}^{2k \text{ factors}} & = [1 + 2k, 2 - 2k ], \\[5pt] \overbrace{s_1 s_0 \cdots s_1 s_0}^{2k \text{ factors}} & = [1 - 2k, 2 + 2k ], \\[5pt] \overbrace{s_0 s_1 \cdots s_0}^{2k + 1 \text{ factors}} & = [2 + 2k, 1 - 2k ], \\[5pt] \overbrace{s_1 s_0 \cdots s_1}^{2k + 1 \text{ factors}} & = [2 - 2(k + 1), 1 + 2(k + 1) ]. \end{align}$ Geometrically, the space V on which $\widetilde{S}_2$ acts is a line, with infinitely many equally spaced reflections. It is natural to identify the line V with the real line $ \R^1$, $s_0$ with reflection around the point 0, and $s_1$ with reflection around the point 1.
In this case, the reflection $(s_0 s_1)^k s_0$ reflects across the point –"k" for any integer k, the composition $s_0 s_1$ translates the line by –2, and the composition $s_1 s_0$ translates the line by 2. Permutation statistics and permutation patterns. Many permutation statistics and other features of the combinatorics of finite permutations can be extended to the affine case. Descents, length, and inversions. The "length" $\ell(g)$ of an element g of a Coxeter group G is the smallest number k such that g can be written as a product $ g= s_{i_1} \cdots s_{i_k}$ of k Coxeter generators of G. Geometrically, the length of an element g in $\widetilde{S}_n$ is the number of reflecting hyperplanes that separate $A_0$ and $ A_0 \cdot g$, where $A_0$ is the fundamental alcove (the simplex bounded by the reflecting hyperplanes of the Coxeter generators $s_0, s_1, \ldots, s_{n - 1}$). Combinatorially, the length of an affine permutation is encoded in terms of an appropriate notion of inversions: for an affine permutation u, the length is <math display = block> \ell(u) = \# \left\{ (i, j) \colon i \in \{1, \ldots, n\}, i < j, \text{ and } u(i) > u(j) \right\}.$ Alternatively, it is the number of equivalence classes of pairs $ (i, j) \in \mathbb{Z} \times \mathbb{Z} $ such that $ i < j$ and $ u(i) > u(j)$ under the equivalence relation $ (i, j) \equiv (i', j') $ if $(i - i', j - j') = (kn, kn)$ for some integer k. The generating function for length in $\widetilde{S}_n$ is <math display = block> \sum_{g \in \widetilde{S}_n} q^{\ell(g)} = \frac{1 - q^n}{(1 - q)^n}.$ Similarly, there is an affine analogue of descents in permutations: an affine permutation u has a descent in position i if $u(i) > u(i + 1)$. (By periodicity, u has a descent in position i if and only if it has a descent in position $i + kn$ for all integers k.) Algebraically, the descents correspond to the "right descents" in the sense of Coxeter groups; that is, i is a descent of u if and only if $ \ell(u \cdot s_i) < \ell (u)$. The left descents (that is, those indices i such that $ \ell(s_i \cdot u) < \ell (u)$) are the descents of the inverse affine permutation $u^{-1}$; equivalently, they are the values i such that i occurs before "i" − 1 in the sequence $ \ldots, u(-2), u(-1), u(0), u(1), u(2), \ldots$. Geometrically, i is a descent of u if and only if the fixed hyperplane of $s_i$ separates the alcoves $A_0$ and $ A_0 \cdot u.$ Because there are only finitely many possibilities for the number of descents of an affine permutation, but infinitely many affine permutations, it is not possible to naively form a generating function for affine permutations by number of descents (an affine analogue of Eulerian polynomials). One possible resolution is to consider affine descents (equivalently, cyclic descents) in the finite symmetric group $S_n$. Another is to consider simultaneously the length and number of descents of an affine permutation; the multivariate generating function for these statistics over $\widetilde{S}_n$ simultaneously for all n can be expressed in closed form in terms of the number des("w") of descents of an affine permutation w and the q-exponential function $\exp(x; q) = \sum_{n \geq 0} \frac{x^n (1 - q)^n}{(1 - q)(1 - q^2) \cdots (1 - q^n)}$. Cycle type and reflection length.
Any bijection $u : \mathbb{Z} \to \mathbb{Z}$ partitions the integers into a (possibly infinite) list of (possibly infinite) cycles: for each integer i, the cycle containing i is the sequence $ ( \ldots, u^{-2}(i), u^{-1}(i), i, u(i), u^2(i), \ldots )$ where exponentiation represents functional composition. For an affine permutation u, the following conditions are equivalent: all cycles of u are finite, u has finite order, and the geometric action of u on the space V has at least one fixed point. The "reflection length" $\ell_R(u)$ of an element u of $\widetilde{S}_n$ is the smallest number k such that there exist reflections $r_1, \ldots, r_k$ such that $u = r_1 \cdots r_k$. (In the symmetric group, reflections are transpositions, and the reflection length of a permutation u is $n - c(u)$, where $c(u)$ is the number of cycles of u.) In , the following formula was proved for the reflection length of an affine permutation u: for each cycle of u, define the "weight" to be the integer "k" such that consecutive entries congruent modulo n differ by exactly "kn". Form a tuple of cycle weights of u (counting translates of the same cycle by multiples of n only once), and define the "nullity" $\nu(u)$ to be the size of the smallest set partition of this tuple so that each part sums to 0. Then the reflection length of u is <math display = block> \ell_R(u) = n - 2 \nu(u) + c(\pi(u)),$ where $\pi(u)$ is the underlying permutation of u. For every affine permutation u, there is a choice of subgroup W of $\widetilde{S}_n$ such that $W \cong S_n$, $ \widetilde{S}_n = W \ltimes \Lambda$, and for the standard form $ u = w \cdot t $ implied by this semidirect product, the reflection lengths are additive, that is, $ \ell_R(u) = \ell_R(w) + \ell_R(t)$. Fully commutative elements and pattern avoidance. A "reduced word" for an element g of a Coxeter group is a tuple $(s_{i_1}, \ldots, s_{i_{\ell(g)}})$ of Coxeter generators of minimum possible length such that $ g = s_{i_1} \cdots s_{i_{\ell(g)}}$. The element g is called "fully commutative" if any reduced word can be transformed into any other by sequentially swapping pairs of factors that commute. For example, in the finite symmetric group $S_4$, the element $ 2143 = (12)(34)$ is fully commutative, since its two reduced words $(s_1, s_3)$ and $(s_3, s_1)$ can be connected by swapping commuting factors, but $ 4132 = (142)(3)$ is not fully commutative because there is no way to reach the reduced word $(s_3, s_2, s_3, s_1)$ starting from the reduced word $(s_2, s_3, s_2, s_1)$ by commutations. It was proved that in the finite symmetric group $S_n$, a permutation is fully commutative if and only if it avoids the permutation pattern 321, that is, if and only if its one-line notation contains no three-term decreasing subsequence. In , this result was extended to affine permutations: an affine permutation u is fully commutative if and only if there do not exist integers $ i < j < k$ such that $ u(i) > u(j) > u(k)$. The number of affine permutations avoiding a single pattern p is finite if and only if p avoids the pattern 321, so in particular there are infinitely many fully commutative affine permutations. These were enumerated by length in . Parabolic subgroups and other structures. The parabolic subgroups of $\widetilde{S}_n$ and their coset representatives offer a rich combinatorial structure. Other aspects of affine symmetric groups, such as their Bruhat order and representation theory, may also be understood via combinatorial models.
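Before turning to parabolic subgroups, the statistics defined above can be computed directly from window notation. In the sketch below (ours), each search terminates because the defining periodicity bounds u(j) from above and below, so only finitely many candidate pairs need inspection:

def make_u(window):
    n = len(window)
    def u(x):
        q, r = divmod(x - 1, n)
        return window[r] + q * n
    return u

def affine_length(window):
    """ell(u) = #{(i, j) : 1 <= i <= n, i < j, u(i) > u(j)}."""
    u, n, lo = make_u(window), len(window), min(window)
    inv = 0
    for i in range(1, n + 1):
        j = i + 1
        while lo + n * ((j - 1) // n) <= u(i):   # u(j) >= this lower bound
            if u(j) < u(i):
                inv += 1
            j += 1
    return inv

def descents(window):
    """Positions i in {1, ..., n} with u(i) > u(i + 1)."""
    u, n = make_u(window), len(window)
    return [i for i in range(1, n + 1) if u(i) > u(i + 1)]

def has_321(window):
    """Existence of i < j < k with u(i) > u(j) > u(k); u is fully
    commutative exactly when this fails. By periodicity the middle
    index j may be restricted to one window."""
    u, n = make_u(window), len(window)
    lo, hi = min(window), max(window)
    for j in range(1, n + 1):
        left, i = False, j - 1
        while hi + n * ((i - 1) // n) > u(j):    # u(i) <= this upper bound
            if u(i) > u(j):
                left = True
                break
            i -= 1
        right, k = False, j + 1
        while lo + n * ((k - 1) // n) < u(j):
            if u(k) < u(j):
                right = True
                break
            k += 1
        if left and right:
            return True
    return False

assert affine_length([2, 3, 1]) == 2 and descents([2, 3, 1]) == [2]
assert not has_321([2, 3, 1])            # [2, 3, 1] is fully commutative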
Parabolic subgroups, coset representatives. A "standard parabolic subgroup" of a Coxeter group is a subgroup generated by a subset of its Coxeter generating set. The maximal parabolic subgroups are those that come from omitting a single Coxeter generator. In $\widetilde{S}_n$, all maximal parabolic subgroups are isomorphic to the finite symmetric group $S_n$. The subgroup generated by the subset $ \{s_0, \ldots, s_{n - 1} \} \smallsetminus \{s_i\}$ consists of those affine permutations that stabilize the interval $[i + 1, i + n]$, that is, that map every element of this interval to another element of the interval. For a fixed element i of $\{0, \ldots, n - 1\}$, let $ J = \{s_0, \ldots, s_{n - 1} \} \smallsetminus \{s_i\}$ be the maximal proper subset of Coxeter generators omitting $s_i$, and let $(\widetilde{S}_n)_J$ denote the parabolic subgroup generated by J. Every coset $ g \cdot (\widetilde{S}_n)_J $ has a unique element of minimum length. The collection of such representatives, denoted $(\widetilde{S}_n)^J$, consists of the following affine permutations: <math display = block> (\widetilde{S}_n)^J = \left \{ u \in \widetilde{S}_n \colon u(i - n + 1) < u(i - n + 2) < \cdots < u(i - 1) < u(i) \right \}.$ In the particular case that $ J = \{s_1, \ldots, s_{n - 1} \}$, so that $(\widetilde{S}_n)_J \cong S_n$ is the standard copy of $S_n$ inside $\widetilde{S}_n$, the elements of $(\widetilde{S}_n)^J \cong \widetilde{S}_n/S_n$ may naturally be represented by "abacus diagrams": the integers are arranged in an infinite strip of width n, increasing sequentially along rows and then from top to bottom; integers are circled if they lie directly above one of the window entries of the minimal coset representative. For example, the minimal coset representative $ u = [-5, 0, 6, 9]$ is represented by the abacus diagram at right. To compute the length of the representative from the abacus diagram, one adds up the number of uncircled numbers that are smaller than the last circled entry in each column. (In the example shown, this gives $ 5 + 3 + 0 + 1 = 9$.) Other combinatorial models of minimum-length coset representatives for $\widetilde{S}_n/S_n$ can be given in terms of core partitions (integer partitions in which no hook length is divisible by n) or bounded partitions (integer partitions in which no part is larger than "n" − 1). Under these correspondences, it can be shown that the weak Bruhat order on $\widetilde{S}_n / S_n$ is isomorphic to a certain subposet of Young's lattice. Bruhat order. The Bruhat order on $\widetilde{S}_n$ has the following combinatorial realization. If u is an affine permutation and i and j are integers, define $ u [i, j] $ to be the number of integers a such that $ a \leq i $ and $ u(a) \geq j$. (For example, with $ u = [2, 0, 4] \in \widetilde{S}_3$, one has $u [ 3, 1 ] = 3$: the three relevant values are $ a = 0, 1, 3 $, which are respectively mapped by u to 1, 2, and 4.) Then for two affine permutations u, v, one has that $u \leq v$ in Bruhat order if and only if $ u[i, j] \leq v[i, j] $ for all integers i, j. Representation theory and an affine Robinson–Schensted correspondence. In the finite symmetric group, the Robinson–Schensted correspondence gives a bijection between the group and pairs $(P, Q)$ of standard Young tableaux of the same shape. This bijection plays a central role in the combinatorics and the representation theory of the symmetric group. 
For example, in the language of Kazhdan–Lusztig theory, two permutations lie in the same left cell if and only if their images under Robinson–Schensted have the same tableau Q, and in the same right cell if and only if their images have the same tableau P. In , Jian-Yi Shi showed that left cells for $\widetilde{S}_n$ are indexed instead by "tabloids", and later he gave an algorithm to compute the tabloid analogous to the tableau P for an affine permutation. In , the authors extended Shi's work to give a bijective map between $\widetilde{S}_n$ and triples $(P, Q, \rho)$ consisting of two tabloids of the same shape and an integer vector whose entries satisfy certain inequalities. Their procedure uses the matrix representation of affine permutations and generalizes the shadow construction, introduced in . Inverse realizations. In some situations, one may wish to consider the action of the affine symmetric group on $\Z$ or on alcoves that is inverse to the one given above. These alternate realizations are described below. In the combinatorial action of $\widetilde{S}_n$ on $\Z$, the generator $s_i$ acts by switching the "values" i and "i" + 1. In the inverse action, it instead switches the entries in "positions" i and "i" + 1. Similarly, the action of a general reflection will be to switch the entries at "positions" "j" − "kn" and "i" + "kn" for each k, fixing all inputs at positions not congruent to i or j modulo n. In the geometric action of $\widetilde{S}_n$, the generator $s_i$ acts on an alcove A by reflecting it across one of the bounding planes of the fundamental alcove "A"0. In the inverse action, it instead reflects A across one of "its own" bounding planes. From this perspective, a reduced word corresponds to an "alcove walk" on the tessellated space V. Relationship to other mathematical objects. The affine symmetric groups are closely related to a variety of other mathematical objects. Juggling patterns. In , a correspondence is given between affine permutations and juggling patterns encoded in a version of siteswap notation. Here, a juggling pattern of period n is a sequence $(a_1, \ldots, a_n)$ of nonnegative integers (with certain restrictions) that captures the behavior of balls thrown by a juggler, where the number $a_i$ indicates the length of time the ith throw spends in the air (equivalently, the height of the throw). The number b of balls in the pattern is the average $b = \frac{a_1 + \cdots + a_n}{n}$. The Ehrenborg–Readdy correspondence associates to each juggling pattern ${\bf a} = (a_1, \ldots, a_n)$ of period n the function $w_{\bf a} \colon \Z \to \Z$ defined by <math display = block> w_{\bf a}(i) = i + a_i - b,$ where indices of the sequence a are taken modulo n. Then $w_{\bf a}$ is an affine permutation in $\widetilde{S}_n$, and moreover every affine permutation arises from a juggling pattern in this way. Under this bijection, the length of the affine permutation is encoded by a natural statistic in the juggling pattern: <math display = block>\ell(w_{\bf a}) = (b - 1)n - \operatorname{cross}({\bf a}),$ where $\operatorname{cross}({\bf a})$ is the number of crossings (up to periodicity) in the arc diagram of a. This allows an elementary proof of the generating function for affine permutations by length. For example, the juggling pattern 441 has $n = 3$ and $ b = \frac{4 + 4 + 1}{3} = 3$. Therefore, it corresponds to the affine permutation $w_{441} = [1 + 4 - 3, 2 + 4 - 3, 3 + 1 - 3] = [2, 3, 1]$.
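The Ehrenborg–Readdy map is essentially one line of code. The following sketch (ours) reproduces the 441 example; its length can then be confirmed with the "affine_length" function from the earlier sketch:

def juggling_to_affine(pattern):
    """Siteswap (a_1, ..., a_n) -> window of w_a, where w_a(i) = i + a_i - b
    and b is the (integer) average number of balls."""
    n = len(pattern)
    b, r = divmod(sum(pattern), n)
    assert r == 0, "throw heights must average to an integer"
    return [i + 1 + a - b for i, a in enumerate(pattern)]

assert juggling_to_affine([4, 4, 1]) == [2, 3, 1]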
The juggling pattern has four crossings, and the affine permutation has length $\ell(w_{441}) = (3 - 1) \cdot 3 - 4 = 2$. Similar techniques can be used to derive the generating function for minimal coset representatives of $\widetilde{S}_n/S_n$ by length. Complex reflection groups. In a finite-dimensional real inner product space, a "reflection" is a linear transformation that fixes a linear hyperplane pointwise and negates the vector orthogonal to the plane. This notion may be extended to vector spaces over other fields. In particular, in a complex inner product space, a "reflection" is a unitary transformation T of finite order that fixes a hyperplane. This implies that the vectors orthogonal to the hyperplane are eigenvectors of T, and the associated eigenvalue is a complex root of unity. A "complex reflection group" is a finite group of linear transformations on a complex vector space generated by reflections. The complex reflection groups were fully classified by : each complex reflection group is isomorphic to a product of irreducible complex reflection groups, and every irreducible either belongs to an infinite family $G(m, p, n)$ (where m, p, and n are positive integers such that p divides m) or is one of 34 other (so-called "exceptional") examples. The group $G(m, 1, n)$ is the generalized symmetric group: algebraically, it is the wreath product $ (\Z / m \Z) \wr S_n$ of the cyclic group $\Z / m \Z$ with the symmetric group $S_n$. Concretely, the elements of the group may be represented by monomial matrices (matrices having one nonzero entry in every row and column) whose nonzero entries are all mth roots of unity. The groups $G(m, p, n)$ are subgroups of $G(m, 1, n)$, and in particular the group $G(m, m, n)$ consists of those matrices in which the product of the nonzero entries is equal to 1. In , Shi showed that the affine symmetric group is a "generic cover" of the family $\left\{G(m, m, n) \colon m \geq 1 \right\}$, in the following sense: for every positive integer m, there is a surjection $\pi_m$ from $\widetilde{S}_n$ to $G(m, m, n)$, and these maps are compatible with the natural surjections $G(m, m, n) \twoheadrightarrow G(p, p, n)$ when $p \mid m$ that come from raising each entry to the "m"/"p"th power. Moreover, these projections respect the reflection group structure, in that the image of every reflection in $\widetilde{S}_n$ under $\pi_m$ is a reflection in $G(m, m, n)$; and similarly when $m > 1$ the image of the standard Coxeter element $s_0 \cdot s_1 \cdots s_{n - 1}$ in $\widetilde{S}_n$ is a Coxeter element in $ G(m, m, n)$. Affine Lie algebras. Each affine Coxeter group is associated to an affine Lie algebra, a certain infinite-dimensional non-associative algebra with unusually nice representation-theoretic properties. In this association, the Coxeter group arises as a group of symmetries of the root space of the Lie algebra (the dual of the Cartan subalgebra). In the classification of affine Lie algebras, the one associated to $\widetilde{S}_n$ is of (untwisted) type $A_{n - 1}^{(1)}$, with Cartan matrix $ \left[ \begin{array}{rr} 2 & - 2 \\ - 2& 2 \end{array} \right] $ for $n = 2$ and <math display = block> \left[ \begin{array}{rrrrrr} 2 & -1 & 0 & \cdots & 0 & -1 \\ -1 & 2 & -1 & \cdots & 0 & 0 \\ 0 & -1 & 2 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 2 & -1 \\ -1 & 0 & 0 & \cdots & -1 & 2 \end{array} \right] $ (a circulant matrix) for $ n > 2 $. 
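These Cartan matrices are simple to generate programmatically; a sketch (ours):

import numpy as np

def affine_cartan_A(n):
    """Cartan matrix of untwisted type A_{n-1}^(1): a circulant with 2 on
    the diagonal and -1 at the neighbouring positions of the n-cycle;
    for n = 2 the two off-diagonal contributions merge into -2."""
    if n == 2:
        return np.array([[2, -2], [-2, 2]])
    m = 2 * np.eye(n, dtype=int)
    for i in range(n):
        m[i, (i + 1) % n] = -1
        m[i, (i - 1) % n] = -1
    return m

print(affine_cartan_A(4))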
Like other Kac–Moody algebras, affine Lie algebras satisfy the Weyl–Kac character formula, which expresses the characters of the algebra in terms of their highest weights. In the case of affine Lie algebras, the resulting identities are equivalent to the Macdonald identities. In particular, for the affine Lie algebra of type $A_1^{(1)}$, associated to the affine symmetric group $\widetilde{S}_2$, the corresponding Macdonald identity is equivalent to the Jacobi triple product. Braid group and group-theoretic properties. Coxeter groups have a number of special properties not shared by all groups. These include that their word problem is decidable (that is, there exists an algorithm that can determine whether or not any given product of the generators is equal to the identity element) and that they are linear groups (that is, they can be represented by a group of invertible matrices over a field). Each Coxeter group W is associated to an Artin–Tits group $B_W$, which is defined by a similar presentation that omits relations of the form $s^2 = 1$ for each generator s. In particular, for $n \geq 3$ the Artin–Tits group associated to $\widetilde{S}_n$ is generated by n elements $ \sigma_0, \sigma_1, \ldots, \sigma_{n - 1}$ subject to the relations $\sigma_i \sigma_{i + 1} \sigma_i = \sigma_{i + 1}\sigma_i \sigma_{i + 1}$ for $i = 0, \ldots, n - 1$ and $\sigma_i \sigma_j = \sigma_j \sigma_i$ when j is not congruent to "i" ± 1 (and no others), where as before the indices are taken modulo n (so $ \sigma_n = \sigma_0$). Artin–Tits groups of Coxeter groups are conjectured to have many nice properties: for example, they are conjectured to be torsion-free, to have trivial center, to have solvable word problem, and to satisfy the $K(\pi, 1)$ conjecture. These conjectures are not known to hold for all Artin–Tits groups, but in it was shown that $B_{\widetilde{S}_n}$ has these properties. (Subsequently, they have been proved for the Artin–Tits groups associated to affine Coxeter groups.) In the case of the affine symmetric group, these proofs make use of an associated Garside structure on the Artin–Tits group. Artin–Tits groups are sometimes also known as "generalized braid groups", because the Artin–Tits group $B_{S_n}$ of the (finite) symmetric group is the braid group on n strands. Not all Artin–Tits groups have a natural representation in terms of geometric braids. However, the Artin–Tits group of the hyperoctahedral group $S^{\pm}_n$ (geometrically, the symmetry group of the "n"-dimensional hypercube; combinatorially, the group of signed permutations of size "n") does have such a representation: it is given by the subgroup of the braid group on $n + 1$ strands consisting of those braids for which a particular strand ends in the same position it started in, or equivalently as the braid group of n strands in an annular region. Moreover, the Artin–Tits group of the hyperoctahedral group $S^{\pm}_n$ can be written as a semidirect product of $B_{\widetilde{S}_n}$ with an infinite cyclic group. It follows that $B_{\widetilde{S}_n}$ may be interpreted as a certain subgroup consisting of geometric braids, and also that it is a linear group. Extended affine symmetric group. The affine symmetric group is a subgroup of the "extended affine symmetric group". The extended group is isomorphic to the wreath product $ \Z \wr S_n$. Its elements are "extended affine permutations": bijections $u \colon \Z \to \Z$ such that $u(x + n) = u(x) + n$ for all integers x. Unlike the affine symmetric group, the extended affine symmetric group is not a Coxeter group.
But it has a natural generating set that extends the Coxeter generating set for $\widetilde{S}_n$: the "shift operator" $\tau$ whose window notation is $\tau = [2, 3, \ldots, n, n + 1]$ generates the extended group with the simple reflections, subject to the additional relations $ \tau s_i \tau^{-1} = s_{i + 1}$. Combinatorics of other affine Coxeter groups. The geometric action of the affine symmetric group $\widetilde{S}_n$ places it naturally in the family of affine Coxeter groups, each of which has a similar geometric action on an affine space. The combinatorial description of $\widetilde{S}_n$ may also be extended to many of these groups: an axiomatic description has been given of certain permutation groups acting on $\Z$ (the "George groups", in honor of George Lusztig), and it is shown that they are exactly the "classical" Coxeter groups of finite and affine types A, B, C, and D. (In the classification of affine Coxeter groups, the affine symmetric group is type A.) Thus, the combinatorial interpretations of descents, inversions, etc., carry over in these cases. Abacus models of minimum-length coset representatives for parabolic quotients have also been extended to this context. History. The study of Coxeter groups in general could be said to first arise in the classification of regular polyhedra (the Platonic solids) in ancient Greece. The modern systematic study (connecting the algebraic and geometric definitions of finite and affine Coxeter groups) began in work of Coxeter in the 1930s. The combinatorial description of the affine symmetric group first appears in work of Lusztig, and was expanded upon by Shi; both authors used the combinatorial description to study the Kazhdan–Lusztig cells of $\widetilde{S}_n$. A proof that the combinatorial definition agrees with the algebraic definition was given subsequently.
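The relation $\tau s_i \tau^{-1} = s_{i + 1}$ for the shift operator defined above can be checked directly in window notation. The following Python sketch (an added illustration; all function names are ours) evaluates, composes, and inverts affine permutations via the periodicity rule $u(x + n) = u(x) + n$ and verifies the relation for all indices modulo n.

n = 4  # any n >= 2 works

def ev(w, x):
    # Evaluate the affine permutation with window w at any integer x,
    # using u(x + n) = u(x) + n.
    r = (x - 1) % n + 1          # representative of x in 1..n
    q = (x - r) // n             # number of whole periods shifted
    return w[r - 1] + q * n

def compose(u, v):
    # Window of the composite x -> u(v(x)).
    return [ev(u, ev(v, x)) for x in range(1, n + 1)]

def inverse(u):
    # Window of the inverse: u(j) = y implies u^{-1}(r) = j + (r - y),
    # where r is the representative of y in 1..n.
    w = [None] * n
    for j in range(1, n + 1):
        y = ev(u, j)
        r = (y - 1) % n + 1
        w[r - 1] = j + (r - y)
    return w

def s(i):
    # Simple reflection s_i as a window; s_0 swaps 0 and 1, so that
    # u(1) = 0 and u(n) = n + 1.
    w = list(range(1, n + 1))
    if 1 <= i <= n - 1:
        w[i - 1], w[i] = w[i], w[i - 1]
    else:
        w[0], w[n - 1] = 0, n + 1
    return w

tau = list(range(2, n + 2))      # the shift operator [2, 3, ..., n + 1]

for i in range(n):
    lhs = compose(compose(tau, s(i)), inverse(tau))
    assert lhs == s((i + 1) % n)
print("tau s_i tau^{-1} = s_{i+1} holds for all i mod", n)

Because an affine permutation is determined by its window, comparing windows suffices to compare the permutations themselves.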
6211210
abstract_algebra
Class of mathematical orderings In mathematics, a well-order (or well-ordering or well-order relation) on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering. The set S together with the well-order relation is then called a well-ordered set. In some academic articles and textbooks these terms are instead written as wellorder, wellordered, and wellordering or well order, well ordered, and well ordering. Every non-empty well-ordered set has a least element. Every element s of a well-ordered set, except a possible greatest element, has a unique successor (next element), namely the least element of the subset of all elements greater than s. There may be elements besides the least element which have no predecessor (see below for an example). A well-ordered set S contains for every subset T with an upper bound a least upper bound, namely the least element of the subset of all upper bounds of T in S. If ≤ is a non-strict well ordering, then < is a strict well ordering. A relation is a strict well ordering if and only if it is a well-founded strict total order. The distinction between strict and non-strict well orders is often ignored since they are easily interconvertible. Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The well-ordering theorem, which is equivalent to the axiom of choice, states that every set can be well ordered. If a set is well ordered (or even if it merely admits a well-founded relation), the proof technique of transfinite induction can be used to prove that a given statement is true for all elements of the set. The observation that the natural numbers are well ordered by the usual less-than relation is commonly called the well-ordering principle (for natural numbers). Ordinal numbers. Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The position of each element within the ordered set is also given by an ordinal number. In the case of a finite set, the basic operation of counting, to find the ordinal number of a particular object, or to find the object with a particular ordinal number, corresponds to assigning ordinal numbers one by one to the objects. The size (number of elements, cardinal number) of a finite set is equal to the order type. Counting in the everyday sense typically starts from one, so it assigns to each object the size of the initial segment with that object as last element. Note that these numbers are one more than the formal ordinal numbers according to the isomorphic order, because these are equal to the number of earlier objects (which corresponds to counting from zero). Thus for finite n, the expression "n-th element" of a well-ordered set requires context to know whether this counts from zero or one. In a notation "β-th element" where β can also be an infinite ordinal, it will typically count from zero. For an infinite set the order type determines the cardinality, but not conversely: well-ordered sets of a particular cardinality can have many different order types (see below for an example). For a countably infinite set, the set of possible order types is uncountable. Examples and counterexamples. Natural numbers. The standard ordering ≤ of the natural numbers is a well ordering and has the additional property that every non-zero natural number has a unique predecessor. 
Another well ordering of the natural numbers is given by defining that all even numbers are less than all odd numbers, and the usual ordering applies within the evens and the odds: $\begin{matrix} 0 & 2 & 4 & 6 & 8 & \dots & 1 & 3 & 5 & 7 & 9 & \dots \end{matrix}$ This is a well-ordered set of order type "ω" + "ω". Every element has a successor (there is no largest element). Two elements lack a predecessor: 0 and 1. Integers. Unlike the standard ordering ≤ of the natural numbers, the standard ordering ≤ of the integers is not a well ordering, since, for example, the set of negative integers does not contain a least element. The following binary relation R is an example of well ordering of the integers: x R y if and only if one of the following conditions holds: x = 0 and y ≠ 0; x and y are both positive and x < y; x and y are both negative and |x| < |y|; or x is positive and y is negative. This relation R can be visualized as follows: $\begin{matrix} 0 & 1 & 2 & 3 & 4 & \dots & -1 & -2 & -3 & \dots \end{matrix}$ R is isomorphic to the ordinal number "ω" + "ω". Another relation for well ordering the integers is the following definition: $x \leq_z y$ if and only if $|x| < |y| \qquad \text{or} \qquad |x| = |y| \text{ and } x \leq y.$ This well order can be visualized as follows: $\begin{matrix} 0 & -1 & 1 & -2 & 2 & -3 & 3 & -4 & 4 & \dots \end{matrix}$ This has the order type ω. Reals. The standard ordering ≤ of any real interval is not a well ordering, since, for example, the open interval (0, 1) does not contain a least element. From the ZFC axioms of set theory (including the axiom of choice) one can show that there is a well order of the reals. Also Wacław Sierpiński proved that ZF + GCH (the generalized continuum hypothesis) imply the axiom of choice and hence a well order of the reals. Nonetheless, it is possible to show that the ZFC+GCH axioms alone are not sufficient to prove the existence of a definable (by a formula) well order of the reals. However it is consistent with ZFC that a definable well ordering of the reals exists—for example, it is consistent with ZFC that V=L, and it follows from ZFC+V=L that a particular formula well orders the reals, or indeed any set. An uncountable subset of the real numbers with the standard ordering ≤ cannot be a well order: Suppose X is a subset of $\mathbb{R}$ well ordered by ≤. For each x in X, let "s"("x") be the successor of x in the ≤ ordering on X (unless x is the last element of X). Let $A = \{(x,s(x)) \mid x \in X\}$, whose elements are nonempty and pairwise disjoint intervals. Each such interval contains at least one rational number, so there is an injective function from A to $\mathbb{Q}$. There is an injection from X to A (except possibly for a last element of X, which could be mapped to zero later). And it is well known that there is an injection from $\mathbb{Q}$ to the natural numbers (which could be chosen to avoid hitting zero). Thus there is an injection from X to the natural numbers, which means that X is countable. On the other hand, a countably infinite subset of the reals may or may not be a well order with the standard ≤. For example, the set of natural numbers is well ordered under the standard ordering, while the set $\{1/n \mid n = 1, 2, 3, \ldots\}$ has no least element and is therefore not a well order. Examples of well orders: the set $\{-2^{-n} \mid n = 0, 1, 2, \ldots\}$ has order type ω, and the set $\{-2^{-n} \mid n = 0, 1, 2, \ldots\} \cup \{0\}$ has order type ω + 1. Equivalent formulations. If a set is totally ordered, then the following are equivalent to each other: the set is well ordered, that is, every nonempty subset has a least element; transfinite induction works for the entire ordered set; every strictly decreasing sequence of elements of the set terminates after finitely many steps (assuming the axiom of dependent choice); every subordering is isomorphic to an initial segment. Order topology. Every well-ordered set can be made into a topological space by endowing it with the order topology. With respect to this topology there can be two kinds of elements: isolated points, namely the minimum and the elements with a predecessor; and limit points, the elements without a predecessor (other than the minimum), which can occur only in infinite sets. For subsets we can distinguish: subsets with a maximum; subsets without a maximum that are bounded in the whole set, whose supremum is then a limit point of the whole set; and subsets that are unbounded in the whole set. A subset is cofinal in the whole set if and only if it is unbounded in the whole set or it has a maximum which is also the maximum of the whole set. 
A well-ordered set as topological space is a first-countable space if and only if it has order type less than or equal to ω1 (omega-one), that is, if and only if the set is countable or has the smallest uncountable order type.
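As a small computational illustration (added here, not part of the original article), the order-type-ω well ordering $\leq_z$ of the integers defined above can be realized in Python by sorting with the key $(|x|, x)$, since tuple comparison is lexicographic:

def z_key(x):
    # Compare integers first by absolute value, then by sign,
    # matching x <=_z y  iff  |x| < |y|, or |x| = |y| and x <= y.
    return (abs(x), x)

print(sorted(range(-4, 5), key=z_key))
# [0, -1, 1, -2, 2, -3, 3, -4, 4]

The output reproduces the visualization 0, −1, 1, −2, 2, −3, 3, ... given above.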
16399
abstract_algebra
Theorem relating a group with the image and kernel of a homomorphism In abstract algebra, the fundamental theorem on homomorphisms, also known as the fundamental homomorphism theorem, or the first isomorphism theorem, relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism. The homomorphism theorem is used to prove the isomorphism theorems. Group theoretic version. Given two groups "G" and "H" and a group homomorphism "f" : "G" → "H", let "N" be a normal subgroup in "G" and φ the natural surjective homomorphism "G" → "G"/"N" (where "G"/"N" is the quotient group of "G" by "N"). If "N" is a subset of ker("f") then there exists a unique homomorphism "h": "G"/"N" → "H" such that "f" = "h" ∘ φ. In other words, the natural projection φ is universal among homomorphisms on "G" that map "N" to the identity element. The situation is described by the following commutative diagram: "h" is injective if and only if "N" = ker("f"). Therefore, by setting "N" = ker("f") we immediately get the first isomorphism theorem. We can write the statement of the fundamental theorem on homomorphisms of groups as "every homomorphic image of a group is isomorphic to a quotient group". Proof. The proof follows from two basic facts about homomorphisms, namely their preservation of the group operation, and their mapping of the identity element to the identity element. We need to show that if $ \phi: G \to H $ is a homomorphism of groups, then: 1. $ \text{im}(\phi) $ is a subgroup of $ H $; 2. $ G / \ker(\phi) $ is isomorphic to $ \text{im}(\phi) $. Proof of 1. The operation that is preserved by $ \phi $ is the group operation. If $ a, b \in \text{im}(\phi) $, then there exist elements $ a', b' \in G $ such that $ \phi(a')=a $ and $ \phi(b')=b $. For these $ a $ and $ b $, we have $ ab = \phi(a')\phi(b') = \phi(a'b') \in \text{im}(\phi) $ (since $ \phi $ preserves the group operation), and thus, the closure property is satisfied in $ \text{im}(\phi) $. The identity element $ e \in H $ is also in $ \text{im}(\phi) $ because $ \phi $ maps the identity element of $ G $ to it. Since every element $ a' $ in $ G $ has an inverse $ (a')^{-1} $ such that $ \phi((a')^{-1}) = \phi(a')^{-1} $ (because $ \phi $ preserves the inverse property as well), we have an inverse for each element $ \phi(a') = a $ in $ \text{im}(\phi) $; therefore, $ \text{im}(\phi) $ is a subgroup of $ H $. Proof of 2. Construct a map $ \psi: G / \ker(\phi) \to \text{im}(\phi) $ by $ \psi(a\ker(\phi)) = \phi(a) $. This map is well-defined, as if $ a\ker(\phi) = b\ker(\phi) $, then $ b^{-1}a \in \ker(\phi) $ and so $ \phi(b^{-1}a) = e \Rightarrow \phi(b^{-1})\phi(a) = e $, which gives $ \phi(a) = \phi(b) $. This map is an isomorphism. $ \psi $ is surjective onto $ \text{im}(\phi) $ by definition. To show injectiveness, if $ \psi(a\ker(\phi)) = \psi(b\ker(\phi)) $, then $ \phi(a) = \phi(b) $, which implies $ b^{-1}a \in\ker(\phi) $, so $ a\ker(\phi) = b\ker(\phi) $. Finally, <math display="block">\psi((a\ker(\phi))(b\ker(\phi))) = \psi(ab\ker(\phi)) = \phi(ab) = \phi(a)\phi(b) = \psi(a\ker(\phi))\psi(b\ker(\phi))$ hence $ \psi $ preserves the group operation. Hence $ \psi $ is an isomorphism between $ G / \ker(\phi) $ and $ \text{im}(\phi) $, which completes the proof. Other versions. Similar theorems are valid for monoids, vector spaces, modules, and rings.
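The theorem can be checked by brute force on a small example. The following Python sketch (a hypothetical example added here, not from the original text) takes the homomorphism φ : Z12 → Z12 given by x ↦ 3x mod 12, lists the cosets of its kernel, and confirms that ψ(a + ker φ) = φ(a) is a well-defined bijection onto the image, exactly as in the proof above.

n = 12
phi = lambda x: (3 * x) % n          # an additive homomorphism Z_12 -> Z_12

G = range(n)
image = sorted({phi(x) for x in G})             # {0, 3, 6, 9}
kernel = [x for x in G if phi(x) == 0]          # {0, 4, 8}

# The cosets of the kernel, each represented as a frozenset.
cosets = {frozenset((x + k) % n for k in kernel) for x in G}

# psi maps each coset to the common value of phi on it.
psi = {}
for c in cosets:
    values = {phi(x) for x in c}
    assert len(values) == 1          # well-defined: phi is constant on cosets
    psi[c] = values.pop()

assert sorted(psi.values()) == image   # surjective onto the image
assert len(psi) == len(image)          # injective: |G / ker| = |im|
print(len(cosets), "cosets <->", len(image), "image elements")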
5471
abstract_algebra
In computer science, a y-fast trie is a data structure for storing integers from a bounded domain. It supports exact and predecessor or successor queries in time "O"(log log "M"), using "O"("n") space, where "n" is the number of stored values and "M" is the maximum value in the domain. The structure was proposed by Dan Willard in 1982 to decrease the "O"("n" log "M") space used by an x-fast trie. Structure. A y-fast trie consists of two data structures: the top half is an x-fast trie and the lower half consists of a number of balanced binary trees. The keys are divided into groups of "O"(log "M") consecutive elements and for each group a balanced binary search tree is created. To facilitate efficient insertion and deletion, each group contains at least (log "M")/4 and at most 2 log "M" elements. For each balanced binary search tree a representative "r" is chosen. These representatives are stored in the x-fast trie. A representative "r" need not be an element of the tree associated with it, but it does need to be an integer smaller than the successor of "r" and the minimum element of the tree associated with that successor, and greater than the predecessor of "r" and the maximum element of the tree associated with that predecessor. Initially, the representative of a tree will be an integer between the minimum and maximum element in its tree. Since the x-fast trie stores "O"("n" / log "M") representatives and each representative occurs in "O"(log "M") hash tables, this part of the y-fast trie uses "O"("n") space. The balanced binary search trees store "n" elements in total which uses "O"("n") space. Hence, in total a y-fast trie uses "O"("n") space. Operations. Like van Emde Boas trees and x-fast tries, y-fast tries support the operations of an "ordered associative array". This includes the usual associative array operations, along with two more "order" operations, "Successor" and "Predecessor": Find. A key "k" can be stored in either the tree of the smallest representative "r" greater than "k" or in the tree of the predecessor of "r" since the representative of a binary search tree need not be an element stored in its tree. Hence, one first finds the smallest representative "r" greater than "k" in the x-fast trie. Using this representative, one retrieves the predecessor of "r". These two representatives point to two balanced binary search trees, both of which one searches for "k". Finding the smallest representative "r" greater than "k" in the x-fast trie takes "O"(log log "M"). Using "r", finding its predecessor takes constant time. Searching the two balanced binary search trees containing "O"(log "M") elements each takes "O"(log log "M") time. Hence, a key "k" can be found, and its value retrieved, in "O"(log log "M") time. Successor and Predecessor. Similarly to the key "k" itself, its successor can be stored in either the tree of the smallest representative "r" greater than "k" or in the tree of the predecessor of "r". Hence, to find the successor of a key "k", one first searches the x-fast trie for the smallest representative greater than "k". Next, one uses this representative to retrieve its predecessor in the x-fast trie. These two representatives point to two balanced binary search trees, which one searches for the successor of "k". Finding the smallest representative "r" greater than "k" in the x-fast trie takes "O"(log log "M") time and using "r" to find its predecessor takes constant time. 
Searching the two balanced binary search trees containing "O"(log "M") elements each takes "O"(log log "M") time. Hence, the successor of a key "k" can be found, and its value retrieved, in "O"(log log "M") time. Searching for the predecessor of a key "k" is highly similar to finding its successor. One searches the x-fast trie for the largest representative "r" smaller than "k" and one uses "r" to retrieve its predecessor in the x-fast trie. Finally, one searches the two balanced binary search trees of these two representatives for the predecessor of "k". This takes "O"(log log "M") time. Insert. To insert a new key/value pair ("k", "v"), one first needs to determine in which balanced binary search tree one needs to insert "k". To this end, one finds the tree "T" containing the successor of "k". Next, one inserts "k" into "T". To ensure that all balanced binary search trees contain "O"(log "M") elements, one splits "T" into two balanced binary trees and removes its representative from the x-fast trie if it contains more than 2 log "M" elements. Each of the two new balanced binary search trees contains at most log "M" + 1 elements. One picks a representative for each tree and insert these into the x-fast trie. Finding the successor of "k" takes "O"(log log "M") time. Inserting "k" into a balanced binary search tree that contains "O"(log "M") elements also takes "O"(log log "M") time. Splitting a binary search tree that contains "O"(log "M") elements can be done in "O"(log log "M") time. Finally, inserting and deleting the three representatives takes "O"(log "M") time. However, since one splits the tree at most once every "O"(log "M") insertions and deletions, this takes constant amortized time. Therefore, inserting a new key/value pair takes "O"(log log "M") amortized time. Delete. Deletions are very similar to insertions. One first finds the key "k" in one of the balanced binary search trees and deletes it from this tree "T". To ensure that all balanced binary search trees contain "O"(log "M") elements, one merges "T" with the balanced binary search tree of its successor or predecessor if it contains less than (log "M")/4 elements. The representatives of the merged trees are removed from the x-fast trie. It is possible for the merged tree to contain more than 2 log "M" elements. If this is the case, the newly formed tree is split into two trees of about equal size. Next, one picks a new representative for each of the new trees and one inserts these into the x-fast trie. Finding the key "k" takes "O"(log log "M") time. Deleting "k" from a balanced binary search tree that contains "O"(log "M") elements also takes "O"(log log "M") time. Merging and possibly splitting the balanced binary search trees takes "O"(log log "M") time. Finally, deleting the old representatives and inserting the new representatives into the x-fast trie takes "O"(log "M") time. Merging and possibly splitting the balanced binary search tree, however, is done at most once for every "O"(log "M") insertions and deletions. Hence, it takes constant amortized time. Therefore, deleting a key/value pair takes "O"(log log "M") amortized time.
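The two-tree lookup logic described above can be sketched in Python as follows (an added toy illustration, not Willard's actual structure: the x-fast trie over representatives is replaced by a sorted list searched with bisect, and each balanced binary search tree by a sorted list, so this sketch achieves only O(log n) rather than O(log log M) per query).

import bisect

reps   = [10, 40, 70]                      # representatives, kept sorted
groups = {10: [3, 8, 12], 40: [35, 41, 55], 70: [66, 80, 91]}

def successor(k):
    # Smallest stored key >= k, or None: consult the group of the smallest
    # representative >= k and the group of that representative's predecessor.
    i = bisect.bisect_left(reps, k)
    candidates = []
    for r in (reps[i - 1] if i > 0 else None,
              reps[i] if i < len(reps) else None):
        if r is not None:
            j = bisect.bisect_left(groups[r], k)
            if j < len(groups[r]):
                candidates.append(groups[r][j])
    return min(candidates, default=None)

print(successor(36))   # 41
print(successor(13))   # 35
print(successor(92))   # None

The point of the sketch is the search pattern: a query always consults two neighbouring groups, because a key may live in either the tree of the nearest representative above it or the tree of that representative's predecessor.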
3398597
abstract_algebra
Category In mathematics, specifically in category theory, a pre-abelian category is an additive category that has all kernels and cokernels. Spelled out in more detail, this means that a category C is pre-abelian if: 1. C is preadditive, that is, enriched over the monoidal category of abelian groups (so all hom-sets are abelian groups and composition of morphisms is bilinear); 2. C has all finite biproducts, which are simultaneously finite products and finite coproducts (in particular, C has a zero object); 3. given any morphism "f": "A" → "B" in C, the equaliser of "f" and the zero morphism from "A" to "B" exists (this is the kernel of "f"), as does the coequaliser (this is the cokernel of "f"). Note that the zero morphism in item 3 can be identified as the identity element of the hom-set Hom("A","B"), which is an abelian group by item 1; or as the unique morphism "A" → 0 → "B", where 0 is a zero object, guaranteed to exist by item 2. Examples. The original example of an additive category is the category Ab of abelian groups. Ab is preadditive because it is a closed monoidal category, the biproduct in Ab is the finite direct sum, the kernel is inclusion of the ordinary kernel from group theory and the cokernel is the quotient map onto the ordinary cokernel from group theory. Other common examples include the category of (left) modules over an arbitrary ring, the category of Banach spaces with continuous linear maps, and the category of Hausdorff topological abelian groups. These will give you an idea of what to think of; for more examples, see abelian category (every abelian category is pre-abelian). Elementary properties. Every pre-abelian category is of course an additive category, and many basic properties of these categories are described under that subject. This article concerns itself with the properties that hold specifically because of the existence of kernels and cokernels. Although kernels and cokernels are special kinds of equalisers and coequalisers, a pre-abelian category actually has "all" equalisers and coequalisers. We simply construct the equaliser of two morphisms "f" and "g" as the kernel of their difference "g" − "f"; similarly, their coequaliser is the cokernel of their difference. Since pre-abelian categories have all finite products and coproducts (the biproducts) and all binary equalisers and coequalisers (as just described), then by a general theorem of category theory, they have all finite limits and colimits. That is, pre-abelian categories are finitely complete and finitely cocomplete. The existence of both kernels and cokernels gives a notion of image and coimage. We can define these as im "f" := ker coker "f"; coim "f" := coker ker "f". That is, the image is the kernel of the cokernel, and the coimage is the cokernel of the kernel. Note that this notion of image may not correspond to the usual notion of image, or range, of a function, even assuming that the morphisms in the category "are" functions. For example, in the category of topological abelian groups, the image of a morphism actually corresponds to the inclusion of the "closure" of the range of the function. For this reason, people will often distinguish the meanings of the two terms in this context, using "image" for the abstract categorical concept and "range" for the elementary set-theoretic concept. In many common situations, such as the category of sets, where images and coimages exist, their objects are isomorphic. Put more precisely, we have a factorisation of "f": "A" → "B" as "A" → "C" → "I" → "B", where the morphism on the left is the coimage, the morphism on the right is the image, and the morphism in the middle (called the "parallel" of "f") is an isomorphism. In a pre-abelian category, "this is not necessarily true". The factorisation shown above does always exist, but the parallel might not be an isomorphism. In fact, the parallel of "f" is an isomorphism for every morphism "f" if and only if the pre-abelian category is an abelian category. An example of a non-abelian, pre-abelian category is, once again, the category of topological abelian groups. 
As remarked, the image is the inclusion of the "closure" of the range; however, the coimage is a quotient map onto the range itself. Thus, the parallel is the inclusion of the range into its closure, which is not an isomorphism unless the range was already closed. Exact functors. Recall that all finite limits and colimits exist in a pre-abelian category. In general category theory, a functor is called "left exact" if it preserves all finite limits and "right exact" if it preserves all finite colimits. (A functor is simply "exact" if it's both left exact and right exact.) In a pre-abelian category, exact functors can be described in particularly simple terms. First, recall that an additive functor is a functor "F": C → D between preadditive categories that acts as a group homomorphism on each hom-set. Then it turns out that a functor between pre-abelian categories is left exact if and only if it is additive and preserves all kernels, and it's right exact if and only if it's additive and preserves all cokernels. Note that an exact functor, because it preserves both kernels and cokernels, preserves all images and coimages. Exact functors are most useful in the study of abelian categories, where they can be applied to exact sequences. Maximal exact structure. On every pre-abelian category $\mathcal A$ there exists an exact structure $\mathcal{E}_{\text{max}}$ that is maximal in the sense that it contains every other exact structure. The exact structure $\mathcal{E}_{\text{max}}$ consists of precisely those kernel-cokernel pairs $(f,g)$ where $f$ is a semi-stable kernel and $g$ is a semi-stable cokernel. Here, $f:X\rightarrow Y$ is a semi-stable kernel if it is a kernel and for each morphism $h:X\rightarrow Z$ in the pushout diagram $ \begin{array}{ccc} X & \xrightarrow{f} & Y \\ \downarrow_{h} & & \downarrow_{h'} \\ Z & \xrightarrow{f'} & Q \end{array} $ the morphism $f'$ is again a kernel. $g: X\rightarrow Y$ is a semi-stable cokernel if it is a cokernel and for every morphism $h: Z\rightarrow Y$ in the pullback diagram $ \begin{array}{ccc} P & \xrightarrow{g'} & Z \\ \downarrow_{h'} & & \downarrow_{h} \\ X & \xrightarrow{g} & Y \end{array} $ the morphism $g'$ is again a cokernel. A pre-abelian category $\mathcal A$ is quasi-abelian if and only if all kernel-cokernel pairs form an exact structure. An example for which this is not the case is the category of (Hausdorff) bornological spaces. The result is also valid for additive categories that are not pre-abelian but Karoubian. Special cases. The pre-abelian categories most commonly studied are in fact abelian categories; for example, Ab is an abelian category. Pre-abelian categories that are not abelian appear for instance in functional analysis.
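The construction "equaliser = kernel of the difference" is easy to see concretely in Ab. The following Python sketch (an added illustration, not from the article; the homomorphisms f and g are hypothetical examples) compares the equaliser of two homomorphisms $\Z^2 \to \Z$ with the kernel of their difference on a finite window of the lattice:

# Two homomorphisms Z^2 -> Z, given by integer row vectors.
f = lambda v: 2 * v[0] + 3 * v[1]
g = lambda v: 4 * v[0] + 1 * v[1]
d = lambda v: g(v) - f(v)              # their difference g - f

# The equaliser of f and g is the subgroup where f = g, i.e. ker(g - f).
box = range(-5, 6)
eq  = [(x, y) for x in box for y in box if f((x, y)) == g((x, y))]
ker = [(x, y) for x in box for y in box if d((x, y)) == 0]
assert eq == ker
print(eq)   # multiples of (1, 1), since 2x + 3y = 4x + y  iff  y = x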
31491
abstract_algebra
A finite group In mathematical group theory, a normal p-complement of a finite group for a prime "p" is a normal subgroup of order coprime to "p" and index a power of "p". In other words the group is a semidirect product of the normal "p"-complement and any Sylow "p"-subgroup. A group is called p-nilpotent if it has a normal "p"-complement. Cayley normal 2-complement theorem. Cayley showed that if the Sylow 2-subgroup of a group "G" is cyclic then the group has a normal 2-complement, which shows that the Sylow 2-subgroup of a simple group of even order cannot be cyclic. Burnside normal p-complement theorem. Burnside (1911, Theorem II, section 243) showed that if a Sylow "p"-subgroup of a group "G" is in the center of its normalizer then "G" has a normal "p"-complement. This implies that if "p" is the smallest prime dividing the order of a group "G" and the Sylow "p"-subgroup is cyclic, then "G" has a normal "p"-complement. Frobenius normal p-complement theorem. The Frobenius normal "p"-complement theorem is a strengthening of the Burnside normal "p"-complement theorem, that states that if the normalizer of every non-trivial subgroup of a Sylow "p"-subgroup of "G" has a normal "p"-complement, then so does "G". More precisely, the following conditions are equivalent: "G" has a normal "p"-complement; the normalizer of every non-trivial "p"-subgroup of "G" has a normal "p"-complement; for every non-trivial "p"-subgroup "Q" of "G", the quotient N("Q")/C("Q") is a "p"-group. Thompson normal p-complement theorem. The Frobenius normal "p"-complement theorem shows that if every normalizer of a non-trivial subgroup of a Sylow "p"-subgroup has a normal "p"-complement then so does "G". For applications it is often useful to have a stronger version where instead of using all non-trivial subgroups of a Sylow "p"-subgroup, one uses only the non-trivial characteristic subgroups. For odd primes "p" Thompson found such a strengthened criterion: in fact he did not need all characteristic subgroups, but only two special ones. He showed that if "p" is an odd prime and the groups N(J("P")) and C(Z("P")) both have normal "p"-complements for a Sylow "p"-subgroup "P" of "G", then "G" has a normal "p"-complement. In particular if the normalizer of every nontrivial characteristic subgroup of "P" has a normal "p"-complement, then so does "G". This consequence is sufficient for many applications. The result fails for "p" = 2 as the simple group PSL2(F7) of order 168 is a counterexample. A weaker version of this theorem had been given earlier. Glauberman normal p-complement theorem. Thompson's normal "p"-complement theorem used conditions on two particular characteristic subgroups of a Sylow "p"-subgroup. Glauberman improved this further by showing that one only needs to use one characteristic subgroup: the center of the Thompson subgroup. He used his ZJ theorem to prove a normal "p"-complement theorem, that if "p" is an odd prime and the normalizer of Z(J(P)) has a normal "p"-complement, for "P" a Sylow "p"-subgroup of "G", then so does "G". Here "Z" stands for the center of a group and "J" for the Thompson subgroup. The result fails for "p" = 2 as the simple group PSL2(F7) of order 168 is a counterexample.
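The Burnside criterion can be checked by brute force on the smallest interesting example. In the following Python sketch (an added illustration, not from the text), |S3| = 6, the smallest prime divisor is 2, the Sylow 2-subgroups of S3 are cyclic of order 2, and the code verifies that the alternating group A3 is indeed a normal 2-complement: it has order 3 (coprime to 2) and index 2 (a power of 2).

from itertools import permutations

def compose(p, q):                        # (p * q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def conjugate(g, p):                      # g p g^{-1}
    g_inv = [0] * 3
    for i, v in enumerate(g):
        g_inv[v] = i
    return compose(compose(g, p), tuple(g_inv))

def parity(p):                            # number of inversions mod 2
    return sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2

G  = list(permutations(range(3)))         # the symmetric group S3
A3 = [p for p in G if parity(p) == 0]     # the even permutations

assert len(A3) == 3 and len(G) // len(A3) == 2
assert all(conjugate(g, p) in A3 for g in G for p in A3)   # A3 is normal
print("A3 is a normal 2-complement in S3")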
2556693
abstract_algebra
In mathematical finite group theory, an N-group is a group all of whose local subgroups (that is, the normalizers of nontrivial "p"-subgroups) are solvable groups. The non-solvable ones were classified by Thompson during his work on finding all the minimal finite simple groups. Simple N-groups. The simple N-groups were classified by Thompson (1968, 1970, 1971, 1973, 1974, 1974b) in a series of 6 papers totaling about 400 pages. The simple N-groups consist of the special linear groups PSL2("q"), PSL3(3), the Suzuki groups Sz(22"n"+1), the unitary group U3(3), the alternating group "A"7, the Mathieu group M11, and the Tits group. (The Tits group was overlooked in Thompson's original announcement in 1968, but Hearn pointed out that it was also a simple N-group.) More generally Thompson showed that any non-solvable N-group is a subgroup of Aut("G") containing "G" for some simple N-group "G". Thompson's theorem was later generalized to the case of groups where all 2-local subgroups are solvable. The only extra simple groups that appear are the unitary groups U3("q"). Proof. In outline, Thompson's classification of N-groups proceeds as follows. The primes dividing the order of the group are divided into four classes π1, π2, π3, π4. The proof is subdivided into several cases depending on which of these four classes the prime 2 belongs to, and also on an integer "e", which is the largest integer for which there is an elementary abelian subgroup of rank "e" normalized by a nontrivial 2-subgroup intersecting it trivially. Consequences. A minimal simple group is a non-cyclic simple group all of whose proper subgroups are solvable. The complete list of minimal finite simple groups is given as follows: PSL2(2"p"), "p" a prime; PSL2(3"p"), "p" an odd prime; PSL2("p"), "p" > 3 a prime congruent to 2 or 3 mod 5; Sz(2"p"), "p" an odd prime; and PSL3(3). In other words a non-cyclic finite simple group must have a subquotient isomorphic to one of these groups.
3244057
abstract_algebra
Mathematical term In mathematics, a slender group is a torsion-free abelian group that is "small" in a sense that is made precise in the definition below. Definition. Let ZN denote the Baer–Specker group, that is, the group of all integer sequences, with termwise addition. For each natural number "n", let "e""n" be the sequence with "n"-th term equal to 1 and all other terms 0. A torsion-free abelian group "G" is said to be slender if every homomorphism from ZN into "G" maps all but finitely many of the "e""n" to the identity element. Examples. Every free abelian group is slender. The additive group of rational numbers Q is not slender: any mapping of the "e""n" into Q extends to a homomorphism from the free subgroup generated by the "e""n", and as Q is injective this homomorphism extends over the whole of ZN. Therefore, a slender group must be reduced. Every countable reduced torsion-free abelian group is slender, so every proper subgroup of Q is slender.
1065644
abstract_algebra
In geometric group theory, Gromov's theorem on groups of polynomial growth, first proved by Mikhail Gromov, characterizes finitely generated groups of "polynomial" growth as those groups which have nilpotent subgroups of finite index. Statement. The growth rate of a group is a well-defined notion from asymptotic analysis. To say that a finitely generated group has polynomial growth means the number of elements of length (relative to a symmetric generating set) at most "n" is bounded above by a polynomial function "p"("n"). The "order of growth" is then the least degree of any such polynomial function "p". A nilpotent group "G" is a group with a lower central series terminating in the identity subgroup. Gromov's theorem states that a finitely generated group has polynomial growth if and only if it has a nilpotent subgroup that is of finite index. Growth rates of nilpotent groups. There is a vast literature on growth rates, leading up to Gromov's theorem. An earlier result of Joseph A. Wolf showed that if "G" is a finitely generated nilpotent group, then the group has polynomial growth. Yves Guivarc'h and independently Hyman Bass (with different proofs) computed the exact order of polynomial growth. Let "G" be a finitely generated nilpotent group with lower central series $ G = G_1 \supseteq G_2 \supseteq \cdots, $ where $G_{k+1} = [G, G_k]$. In particular, the quotient group "G""k"/"G""k"+1 is a finitely generated abelian group. The Bass–Guivarc'h formula states that the order of polynomial growth of "G" is $ d(G) = \sum_{k \geq 1} k \operatorname{rank}(G_k/G_{k+1}) $ where "rank" denotes the rank of an abelian group, i.e. the largest number of independent and torsion-free elements of the abelian group. For example, for the discrete Heisenberg group the successive quotients $G_1/G_2$ and $G_2/G_3$ have ranks 2 and 1, so its order of polynomial growth is $1 \cdot 2 + 2 \cdot 1 = 4$. In particular, Gromov's theorem and the Bass–Guivarc'h formula imply that the order of polynomial growth of a finitely generated group is always either an integer or infinity (excluding for example, fractional powers). Another nice application of Gromov's theorem and the Bass–Guivarc'h formula is to the quasi-isometric rigidity of finitely generated abelian groups: any group which is quasi-isometric to a finitely generated abelian group contains a free abelian group of finite index. Proofs of Gromov's theorem. In order to prove this theorem Gromov introduced a convergence for metric spaces. This convergence, now called the Gromov–Hausdorff convergence, is currently widely used in geometry. A relatively simple proof of the theorem was found by Bruce Kleiner. Later, Terence Tao and Yehuda Shalom modified Kleiner's proof to make an essentially elementary proof as well as a version of the theorem with explicit bounds. Gromov's theorem also follows from the classification of approximate groups obtained by Breuillard, Green and Tao. A simple and concise proof based on functional analytic methods is given by Ozawa. The gap conjecture. Beyond Gromov's theorem one can ask whether there exists a gap in the growth spectrum for finitely generated group just above polynomial growth, separating virtually nilpotent groups from others. Formally, this means that there would exist a function $f: \mathbb N \to \mathbb N$ such that a finitely generated group is virtually nilpotent if and only if its growth function is an $O(f(n))$. Such a theorem was obtained by Shalom and Tao, with an explicit function $n^{\log\log(n)^c}$ for some $c > 0$. All known groups with intermediate growth (i.e. 
both superpolynomial and subexponential) are essentially generalizations of Grigorchuk's group, and have faster growth functions; so all known groups of intermediate growth have growth faster than $e^{n^\alpha}$, with $\alpha = \log(2)/\log(2/\eta ) \approx 0.767$, where $\eta$ is the real root of the polynomial $x^3+x^2+x-2$. It is conjectured that the true lower bound on growth rates of groups with intermediate growth is $e^{\sqrt n}$. This is known as the "Gap conjecture".
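The polynomial/exponential dichotomy is easy to observe experimentally. The following Python sketch (an added illustration, not from the source) counts balls of radius r in the Cayley graph of $\Z^2$ with the standard generators, where growth is quadratic, and compares them with the balls of the free group $F_2$, which grow exponentially:

def ball_Z2(r):
    # Breadth-first search in Z^2 with generators {±e1, ±e2}.
    frontier, seen = {(0, 0)}, {(0, 0)}
    for _ in range(r):
        frontier = {(x + dx, y + dy)
                    for (x, y) in frontier
                    for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))}
        frontier -= seen
        seen |= frontier
    return len(seen)

def ball_F2(r):
    # In the free group F_2 there are 4 * 3^(k-1) reduced words of length k.
    return 1 + sum(4 * 3 ** (k - 1) for k in range(1, r + 1))

for r in (1, 2, 3, 4, 5):
    print(r, ball_Z2(r), ball_F2(r))

The counts follow the closed forms $2r^2 + 2r + 1$ for $\Z^2$ (degree 2, matching the Bass–Guivarc'h formula for a rank-2 abelian group) and $2 \cdot 3^r - 1$ for $F_2$.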
239443
abstract_algebra
In number theory and algebraic geometry, the Tate conjecture is a 1963 conjecture of John Tate that would describe the algebraic cycles on a variety in terms of a more computable invariant, the Galois representation on étale cohomology. The conjecture is a central problem in the theory of algebraic cycles. It can be considered an arithmetic analog of the Hodge conjecture. Statement of the conjecture. Let "V" be a smooth projective variety over a field "k" which is finitely generated over its prime field. Let "k"s be a separable closure of "k", and let "G" be the absolute Galois group Gal("k"s/"k") of "k". Fix a prime number ℓ which is invertible in "k". Consider the ℓ-adic cohomology groups (coefficients in the ℓ-adic integers Zℓ, scalars then extended to the ℓ-adic numbers Qℓ) of the base extension of "V" to "k"s; these groups are representations of "G". For any "i" ≥ 0, a codimension-"i" subvariety of "V" (understood to be defined over "k") determines an element of the cohomology group $ H^{2i}(V_{k_s},\mathbf{Q}_{\ell}(i)) = W $ which is fixed by "G". Here Qℓ("i" ) denotes the "i"th Tate twist, which means that this representation of the Galois group "G" is tensored with the "i"th power of the cyclotomic character. The Tate conjecture states that the subspace "W""G" of "W" fixed by the Galois group "G" is spanned, as a Qℓ-vector space, by the classes of codimension-"i" subvarieties of "V". An algebraic cycle means a finite linear combination of subvarieties; so an equivalent statement is that every element of "W""G" is the class of an algebraic cycle on "V" with Qℓ coefficients. Known cases. The Tate conjecture for divisors (algebraic cycles of codimension 1) is a major open problem. For example, let "f" : "X" → "C" be a morphism from a smooth projective surface onto a smooth projective curve over a finite field. Suppose that the generic fiber "F" of "f", which is a curve over the function field "k"("C"), is smooth over "k"("C"). Then the Tate conjecture for divisors on "X" is equivalent to the Birch and Swinnerton-Dyer conjecture for the Jacobian variety of "F". By contrast, the Hodge conjecture for divisors on any smooth complex projective variety is known (the Lefschetz (1,1)-theorem). Probably the most important known case is that the Tate conjecture is true for divisors on abelian varieties. This is a theorem of Tate for abelian varieties over finite fields, and of Faltings for abelian varieties over number fields, part of Faltings's solution of the Mordell conjecture. Zarhin extended these results to any finitely generated base field. The Tate conjecture for divisors on abelian varieties implies the Tate conjecture for divisors on any product of curves "C"1 × ... × "C""n". The (known) Tate conjecture for divisors on abelian varieties is equivalent to a powerful statement about homomorphisms between abelian varieties. Namely, for any abelian varieties "A" and "B" over a finitely generated field "k", the natural map $ \text{Hom}(A,B)\otimes_{\mathbf{Z}}\mathbf{Q}_{\ell} \to \text{Hom}_G \left (H_1 \left (A_{k_s},\mathbf{Q}_{\ell} \right), H_1 \left (B_{k_s},\mathbf{Q}_{\ell} \right) \right )$ is an isomorphism. In particular, an abelian variety "A" is determined up to isogeny by the Galois representation on its Tate module "H"1("A""k"s, Zℓ). The Tate conjecture also holds for K3 surfaces over finitely generated fields of characteristic not 2. (On a surface, the nontrivial part of the conjecture is about divisors.) 
In characteristic zero, the Tate conjecture for K3 surfaces was proved by André and Tankeev. For K3 surfaces over finite fields of characteristic not 2, the Tate conjecture was proved by Nygaard, Ogus, Charles, Madapusi Pera, and Maulik. Totaro surveys known cases of the Tate conjecture. Related conjectures. Let "X" be a smooth projective variety over a finitely generated field "k". The semisimplicity conjecture predicts that the representation of the Galois group "G" = Gal("k"s/"k") on the ℓ-adic cohomology of "X" is semisimple (that is, a direct sum of irreducible representations). For "k" of characteristic 0, Moonen showed that the Tate conjecture (as stated above) implies the semisimplicity of $H^i \left (X \times_k \overline{k}, \mathbf{Q}_\ell(n) \right ).$ For "k" finite of order "q", Tate showed that the Tate conjecture plus the semisimplicity conjecture would imply the strong Tate conjecture, namely that the order of the pole of the zeta function "Z"("X", "t") at "t" = "q"−"j" is equal to the rank of the group of algebraic cycles of codimension "j" modulo numerical equivalence. Like the Hodge conjecture, the Tate conjecture would imply most of Grothendieck's standard conjectures on algebraic cycles. Namely, it would imply the Lefschetz standard conjecture (that the inverse of the Lefschetz isomorphism is defined by an algebraic correspondence); that the Künneth components of the diagonal are algebraic; and that numerical equivalence and homological equivalence of algebraic cycles are the same.
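The strong Tate conjecture can at least be verified numerically in the simplest case. The following Python sketch (an added illustration, not from the text) builds the zeta function of $X = \mathbf{P}^1$ over $\mathbf{F}_q$ from its point counts $N_m = q^m + 1$ via $Z(X,t) = \exp\left(\sum_{m \geq 1} N_m t^m / m\right)$ and checks that it equals $1/((1-t)(1-qt))$, whose pole at $t = q^{-1}$ has order 1, matching the rank 1 of the codimension-1 cycles on $\mathbf{P}^1$:

from fractions import Fraction

q, K = 3, 8                       # P^1 over F_3, series truncated at degree 8
l = [Fraction(0)] + [Fraction(q**m + 1, m) for m in range(1, K + 1)]

# z = exp(l) as a power series, via the recurrence n*z_n = sum k*l_k*z_{n-k}.
z = [Fraction(1)] + [Fraction(0)] * K
for n in range(1, K + 1):
    z[n] = sum(k * l[k] * z[n - k] for k in range(1, n + 1)) / n

# Coefficients of 1/((1-t)(1-qt)) are (q^{n+1} - 1)/(q - 1).
expected = [Fraction(q**(n + 1) - 1, q - 1) for n in range(K + 1)]
assert z == expected
print("Z(P^1, t) = 1/((1-t)(1-qt)) verified to degree", K)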
725318
abstract_algebra
Algebra of formal sums In mathematics, a free abelian group is an abelian group with a basis. Being an abelian group means that it is a set with an addition operation that is associative, commutative, and invertible. A basis, also called an integral basis, is a subset such that every element of the group can be uniquely expressed as an integer combination of finitely many basis elements. For instance the two-dimensional integer lattice forms a free abelian group, with coordinatewise addition as its operation, and with the two points (1,0) and (0,1) as its basis. Free abelian groups have properties which make them similar to vector spaces, and may equivalently be called free $\Z$-modules, the free modules over the integers. Lattice theory studies free abelian subgroups of real vector spaces. In algebraic topology, free abelian groups are used to define chain groups, and in algebraic geometry they are used to define divisors. The elements of a free abelian group with basis $B$ may be described in several equivalent ways. These include formal sums over $B$, which are expressions of the form <math display=inline>\sum a_i b_i $ where each $a_i$ is a nonzero integer, each $b_i$ is a distinct basis element, and the sum has finitely many terms. Alternatively, the elements of a free abelian group may be thought of as signed multisets containing finitely many elements of $B$, with the multiplicity of an element in the multiset equal to its coefficient in the formal sum. Another way to represent an element of a free abelian group is as a function from $B$ to the integers with finitely many nonzero values; for this functional representation, the group operation is the pointwise addition of functions. Every set $B$ has a free abelian group with $B$ as its basis. This group is unique in the sense that every two free abelian groups with the same basis are isomorphic. Instead of constructing it by describing its individual elements, a free abelian group with basis $B$ may be constructed as a direct sum of copies of the additive group of the integers, with one copy per member of $B$. Alternatively, the free abelian group with basis $B$ may be described by a presentation with the elements of $B$ as its generators and with the commutators of pairs of members as its relators. The rank of a free abelian group is the cardinality of a basis; every two bases for the same group give the same rank, and every two free abelian groups with the same rank are isomorphic. Every subgroup of a free abelian group is itself free abelian; this fact allows a general abelian group to be understood as a quotient of a free abelian group by "relations", or as a cokernel of an injective homomorphism between free abelian groups. The only free abelian groups that are free groups are the trivial group and the infinite cyclic group. Definition and examples. A free abelian group is an abelian group that has a basis. 
Here, being an abelian group means that it is described by a set $S$ of its elements and a binary operation on $S$, conventionally denoted as an additive group by the $+$ symbol (although it need not be the usual addition of numbers), that obeys the following properties: the operation is commutative (the order of the two elements being combined does not affect the result) and associative; one element of $S$ serves as an identity element for the operation; and every element of $S$ has an inverse element, whose combination with the given element produces the identity. A basis is a subset $B$ of the elements of $S$ with the property that every element of $S$ may be formed in a unique way by choosing finitely many basis elements $b_i$ of $B$, choosing a nonzero integer $k_i$ for each of the chosen basis elements, and adding together $k_i$ copies of the basis elements $b_i$ for which $k_i$ is positive, and $-k_i$ copies of $-b_i$ for each basis element for which $k_i$ is negative. As a special case, the identity element can always be formed in this way as the combination of zero basis elements, according to the usual convention for an empty sum, and it must not be possible to find any other combination that represents the identity. The integers $\mathbb{Z}$, under the usual addition operation, form a free abelian group with the basis $\{1\}$. The integers are commutative and associative, with 0 as the additive identity and with each integer having an additive inverse, its negation. Each non-negative integer $x$ is the sum of $x$ copies of $1$, and each negative integer $x$ is the sum of $-x$ copies of $-1$, so the basis property is also satisfied. An example where the group operation is different from the usual addition of numbers is given by the positive rational numbers $\mathbb{Q}^+$, which form a free abelian group with the usual multiplication operation on numbers and with the prime numbers as their basis. Multiplication is commutative and associative, with the number $1$ as its identity and with $1/x$ as the inverse element for each positive rational number $x$. The fact that the prime numbers form a basis for multiplication of these numbers follows from the fundamental theorem of arithmetic, according to which every positive rational number can be factorized uniquely into the product of finitely many primes or their inverses. If $q=a/b$ is a positive rational number expressed in simplest terms, then $q$ can be expressed as a finite combination of the primes appearing in the factorizations of $a$ and $b$. The number of copies of each prime to use in this combination is its exponent in the factorization of $a$, or the negation of its exponent in the factorization of $b$. The polynomials of a single variable $x$, with integer coefficients, form a free abelian group under polynomial addition, with the powers of $x$ as a basis. As an abstract group, this is the same as (an isomorphic group to) the multiplicative group of positive rational numbers. One way to map these two groups to each other, showing that they are isomorphic, is to reinterpret the exponent of the $i$th prime number in the multiplicative group of the rationals as instead giving the coefficient of $x^{i-1}$ in the corresponding polynomial, or vice versa. For instance the rational number $5/27$ has exponents of $0, -3, 1$ for the first three prime numbers $2, 3, 5$ and would correspond in this way to the polynomial $-3x+x^2$ having the same coefficients $0, -3, 1$ for its constant, linear, and quadratic terms. Because these mappings merely reinterpret the same numbers, they define a bijection between the elements of the two groups. 
And because the group operation of multiplying positive rationals acts additively on the exponents of the prime numbers, in the same way that the group operation of adding polynomials acts on the coefficients of the polynomials, these maps preserve the group structure; they are homomorphisms. A bijective homomorphism is called an isomorphism, and its existence demonstrates that these two groups have the same properties. Although the representation of each group element in terms of a given basis is unique, a free abelian group generally has more than one basis, and different bases will generally result in different representations of its elements. For example, if one replaces any element of a basis by its inverse, one gets another basis. As a more elaborated example, the two-dimensional integer lattice $\Z^2$, consisting of the points in the plane with integer Cartesian coordinates, forms a free abelian group under vector addition with the basis $\{(1,0),(0,1)\}$. For this basis, the element $(4,3)$ can be written as $(4,3) = 4 \cdot (1,0) + 3 \cdot (0,1)$, where 'multiplication' is defined so that, for instance, $4 \cdot (1,0) = (1,0) + (1,0) + (1,0) + (1,0)$. There is no other way to write $(4,3)$ in the same basis. However, with a different basis such as $\{(1,0),(1,1)\}$, it can be written as $(4,3) = 1 \cdot (1,0) + 3 \cdot (1,1)$. Generalizing this example, every lattice forms a finitely-generated free abelian group. The $d$-dimensional integer lattice $\Z^d$ has a natural basis consisting of the positive integer unit vectors, but it has many other bases as well: if $M$ is a $d\times d$ integer matrix with determinant $\pm 1$, then the rows of $M$ form a basis, and conversely every basis of the integer lattice has this form. For more on the two-dimensional case, see fundamental pair of periods. Constructions. Every set can be the basis of a free abelian group, which is unique up to group isomorphisms. The free abelian group for a given basis set can be constructed in several different but equivalent ways: as a direct sum of copies of the integers, as a family of integer-valued functions, as a signed multiset, or by a presentation of a group. Products and sums. The direct product of groups consists of tuples of an element from each group in the product, with componentwise addition. The direct product of two free abelian groups is itself free abelian, with basis the disjoint union of the bases of the two groups. More generally the direct product of any finite number of free abelian groups is free abelian. The $d$-dimensional integer lattice, for instance, is isomorphic to the direct product of $d$ copies of the integer group $\Z$. The trivial group $\{0\}$ is also considered to be free abelian, with basis the empty set. It may be interpreted as an empty product, the direct product of zero copies of $\Z$. For infinite families of free abelian groups, the direct product is not necessarily free abelian. For instance the Baer–Specker group $\mathbb{Z}^\mathbb{N}$, an uncountable group formed as the direct product of countably many copies of $\mathbb{Z}$, was shown in 1937 by Reinhold Baer to not be free abelian, although Ernst Specker proved in 1950 that all of its countable subgroups are free abelian. Instead, to obtain a free abelian group from an infinite family of groups, the direct sum rather than the direct product should be used. The direct sum and direct product are the same when they are applied to finitely many groups, but differ on infinite families of groups. 
In the direct sum, the elements are again tuples of elements from each group, but with the restriction that all but finitely many of these elements are the identity for their group. The direct sum of infinitely many free abelian groups remains free abelian. It has a basis consisting of tuples in which all but one element is the identity, with the remaining element part of a basis for its group. Every free abelian group may be described as a direct sum of copies of $\mathbb{Z}$, with one copy for each member of its basis. This construction allows any set $B$ to become the basis of a free abelian group. Integer functions and formal sums. Given a set $B$, one can define a group $\mathbb{Z}^{(B)}$ whose elements are functions from $B$ to the integers, where the parenthesis in the superscript indicates that only the functions with finitely many nonzero values are included. If $f(x)$ and $g(x)$ are two such functions, then $f+g$ is the function whose values are sums of the values in $f$ and $g$: that is, $(f+g)(x) = f(x) + g(x)$ for every $x$ in $B$. This pointwise addition operation gives $\mathbb{Z}^{(B)}$ the structure of an abelian group. Each element $x$ from the given set $B$ corresponds to a member of $\mathbb{Z}^{(B)}$, the function $e_x$ for which $e_x(x)=1$ and for which $e_x(y)=0$ for all $y\ne x$. Every function $f$ in $\mathbb{Z}^{(B)}$ is uniquely a linear combination of a finite number of basis elements: <math display=block>f=\sum_{\{x\mid f(x)\ne 0\}} f(x) e_x.$ Thus, these elements $e_x$ form a basis for $\mathbb{Z}^{(B)}$, and $\mathbb{Z}^{(B)}$ is a free abelian group. In this way, every set $B$ can be made into the basis of a free abelian group. The elements of $\mathbb{Z}^{(B)}$ may also be written as formal sums, expressions in the form of a sum of finitely many terms, where each term is written as the product of a nonzero integer with a distinct member of $B$. These expressions are considered equivalent when they have the same terms, regardless of the ordering of terms, and they may be added by forming the union of the terms, adding the integer coefficients to combine terms with the same basis element, and removing terms for which this combination produces a zero coefficient. They may also be interpreted as the signed multisets of finitely many elements of $B$. Presentation. A presentation of a group is a set of elements that generate the group (meaning that all group elements can be expressed as products of finitely many generators), together with "relators", products of generators that give the identity element. The elements of a group defined in this way are equivalence classes of sequences of generators and their inverses, under an equivalence relation that allows inserting or removing any relator or generator-inverse pair as a contiguous subsequence. The free abelian group with basis $B$ has a presentation in which the generators are the elements of $B$, and the relators are the commutators of pairs of elements of $B$. Here, the commutator of two elements $x$ and $y$ is the product $x^{-1}y^{-1}xy$; setting this product to the identity causes $xy$ to equal $yx$, so that $x$ and $y$ commute. More generally, if all pairs of generators commute, then all pairs of products of generators also commute. Therefore, the group generated by this presentation is abelian, and the relators of the presentation form a minimal set of relators needed to ensure that it is abelian. 
When the set of generators is finite, the presentation of a free abelian group is also finite, because there are only finitely many different commutators to include in the presentation. This fact, together with the fact that every subgroup of a free abelian group is free abelian (below) can be used to show that every finitely generated abelian group is finitely presented. For, if $G$ is finitely generated by a set $B$, it is a quotient of the free abelian group over $B$ by a free abelian subgroup, the subgroup generated by the relators of the presentation of $G$. But since this subgroup is itself free abelian, it is also finitely generated, and its basis (together with the commutators over $B$) forms a finite set of relators for a presentation of $G$. As a module. The modules over the integers are defined similarly to vector spaces over the real numbers or rational numbers: they consist of systems of elements that can be added to each other, with an operation for scalar multiplication by integers that is compatible with this addition operation. Every abelian group may be considered as a module over the integers, with a scalar multiplication operation defined as follows: However, unlike vector spaces, not all abelian groups have a basis, hence the special name "free" for those that do. A free module is a module that can be represented as a direct sum over its base ring, so free abelian groups and free $\mathbb Z$-modules are equivalent concepts: each free abelian group is (with the multiplication operation above) a free $\mathbb Z$-module, and each free $\mathbb Z$-module comes from a free abelian group in this way. As well as the direct sum, another way to combine free abelian groups is to use the tensor product of $\Z$-modules. The tensor product of two free abelian groups is always free abelian, with a basis that is the Cartesian product of the bases for the two groups in the product. Many important properties of free abelian groups may be generalized to free modules over a principal ideal domain. For instance, submodules of free modules over principal ideal domains are free, a fact that allows for "automatic generalization" of homological machinery to these modules. Additionally, the theorem that every projective $\Z$-module is free generalizes in the same way. Properties. Universal property. A free abelian group $F$ with basis $B$ has the following universal property: for every function $f$ from $B$ to an abelian group $A$, there exists a unique group homomorphism from $F$ to $A$ which extends $f$. Here, a group homomorphism is a mapping from one group to the other that is consistent with the group product law: performing a product before or after the mapping produces the same result. By a general property of universal properties, this shows that "the" abelian group of base $B$ is unique up to an isomorphism. Therefore, the universal property can be used as a definition of the free abelian group of base $B$. The uniqueness of the group defined by this property shows that all the other definitions are equivalent. It is because of this universal property that free abelian groups are called "free": they are the free objects in the category of abelian groups, the category that has abelian groups as its objects and homomorphisms as its arrows. The map from a basis to its free abelian group is a functor, a structure-preserving mapping of categories, from sets to abelian groups, and is adjoint to the forgetful functor from abelian groups to sets. 
However, a "free abelian" group is "not" a free group except in two cases: a free abelian group having an empty basis (rank zero, giving the trivial group) or having just one element in the basis (rank one, giving the infinite cyclic group). Other abelian groups are not free groups because in free groups $ab$ must be different from $ba$ if $a$ and $b$ are different elements of the basis, while in free abelian groups the two products must be identical for all pairs of elements. In the general category of groups, it is an added constraint to demand that $ab=ba$, whereas this is a necessary property in the category of abelian groups. Rank. Every two bases of the same free abelian group have the same cardinality, so the cardinality of a basis forms an invariant of the group known as its rank. Two free abelian groups are isomorphic if and only if they have the same rank. A free abelian group is finitely generated if and only if its rank is a finite number $n$, in which case the group is isomorphic to $\mathbb{Z}^n$. This notion of rank can be generalized, from free abelian groups to abelian groups that are not necessarily free. The rank of an abelian group $G$ is defined as the rank of a free abelian subgroup $F$ of $G$ for which the quotient group $G/F$ is a torsion group. Equivalently, it is the cardinality of a maximal subset of $G$ that generates a free subgroup. The rank is a group invariant: it does not depend on the choice of the subgroup. Subgroups. Every subgroup of a free abelian group is itself a free abelian group. This result of Richard Dedekind was a precursor to the analogous Nielsen–Schreier theorem that every subgroup of a free group is free, and is a generalization of the fact that every nontrivial subgroup of the infinite cyclic group is infinite cyclic. The proof needs the axiom of choice. A proof using Zorn's lemma (one of many equivalent assumptions to the axiom of choice) can be found in Serge Lang's "Algebra". Solomon Lefschetz and Irving Kaplansky argue that using the well-ordering principle in place of Zorn's lemma leads to a more intuitive proof. In the case of finitely generated free abelian groups, the proof is easier, does not need the axiom of choice, and leads to a more precise result. If $G$ is a subgroup of a finitely generated free abelian group $F$, then $G$ is free and there exists a basis $(e_1, \ldots, e_n)$ of $F$ and positive integers $d_1|d_2|\ldots|d_k$ (that is, each one divides the next one) such that $(d_1e_1,\ldots, d_ke_k)$ is a basis of $G.$ Moreover, the sequence $d_1,d_2,\ldots,d_k$ depends only on $F$ and $G$ and not on the basis. A constructive proof of the existence part of the theorem is provided by any algorithm computing the Smith normal form of a matrix of integers. Uniqueness follows from the fact that, for any $r\le k$, the greatest common divisor of the minors of rank $r$ of the matrix is not changed during the Smith normal form computation and is the product $d_1\cdots d_r$ at the end of the computation. Torsion and divisibility. All free abelian groups are torsion-free, meaning that there is no non-identity group element $x$ and nonzero integer $n$ such that $nx=0$. Conversely, all finitely generated torsion-free abelian groups are free abelian. The additive group of rational numbers $\mathbb{Q}$ provides an example of a torsion-free (but not finitely generated) abelian group that is not free abelian. 
One reason that $\mathbb{Q}$ is not free abelian is that it is divisible, meaning that, for every element $x\in\mathbb{Q}$ and every nonzero integer $n$, it is possible to express $x$ as a scalar multiple $ny$ of another element $y=x/n$. In contrast, non-trivial free abelian groups are never divisible, because in a free abelian group the basis elements cannot be expressed as multiples of other elements. Symmetry. The symmetries of any group can be described as group automorphisms, the invertible homomorphisms from the group to itself. In non-abelian groups these are further subdivided into inner and outer automorphisms, but in abelian groups all non-identity automorphisms are outer. They form another group, the automorphism group of the given group, under the operation of composition. The automorphism group of a free abelian group of finite rank $n$ is the general linear group $\operatorname{GL}(n,\mathbb{Z})$, which can be described concretely (for a specific basis of the free abelian group) as the set of $n\times n$ invertible integer matrices under the operation of matrix multiplication. Their action as symmetries on the free abelian group $\Z^n$ is just matrix-vector multiplication. The automorphism groups of two infinite-rank free abelian groups have the same first-order theories as each other if and only if their ranks are equivalent cardinals from the point of view of second-order logic. This result depends on the structure of involutions of free abelian groups, the automorphisms that are their own inverse. Given a basis for a free abelian group, one can find involutions that map any set of disjoint pairs of basis elements to each other, or that negate any chosen subset of basis elements, leaving the other basis elements fixed. Conversely, for every involution of a free abelian group, one can find a basis of the group for which all basis elements are swapped in pairs, negated, or left unchanged by the involution. Relation to other groups. If a free abelian group is a quotient $A/B$ of two groups, then $A$ is the direct sum $B\oplus A/B$. Given an arbitrary abelian group $A$, there always exists a free abelian group $F$ and a surjective group homomorphism from $F$ to $A$. One way of constructing a surjection onto a given group $A$ is to let $F=\mathbb{Z}^{(A)}$ be the free abelian group over $A$, represented as formal sums. Then a surjection can be defined by mapping formal sums in $F$ to the corresponding sums of members of $A$. That is, the surjection maps $\sum_{\{x\mid a_x\ne 0\}} a_x e_x \mapsto \sum_{\{x\mid a_x\ne 0\}} a_x x,$ where $a_x$ is the integer coefficient of basis element $e_x$ in a given formal sum, the first sum is in $F$, and the second sum is in $A$. This surjection is the unique group homomorphism which extends the function $e_x\mapsto x$, and so its construction can be seen as an instance of the universal property. When $F$ and $A$ are as above, the kernel $G$ of the surjection from $F$ to $A$ is also free abelian, as it is a subgroup of $F$ (the subgroup of elements mapped to the identity). Therefore, these groups form a short exact sequence $0\to G\to F\to A\to 0$ in which $F$ and $G$ are both free abelian and $A$ is isomorphic to the factor group $F/G$. This is a free resolution of $A$. Furthermore, assuming the axiom of choice, the free abelian groups are precisely the projective objects in the category of abelian groups. Applications. Algebraic topology.
In algebraic topology, a formal sum of $k$-dimensional simplices is called a $k$-chain, and the free abelian group having a collection of $k$-simplices as its basis is called a chain group. The simplices are generally taken from some topological space, for instance as the set of $k$-simplices in a simplicial complex, or the set of singular $k$-simplices in a manifold. Any $k$-dimensional simplex has a boundary that can be represented as a formal sum of $(k-1)$-dimensional simplices, and the universal property of free abelian groups allows this boundary operator to be extended to a group homomorphism from $k$-chains to $(k-1)$-chains. The system of chain groups linked by boundary operators in this way forms a chain complex, and the study of chain complexes forms the basis of homology theory. Algebraic geometry and complex analysis. Every rational function over the complex numbers can be associated with a signed multiset of complex numbers $c_i$, the zeros and poles of the function (points where its value is zero or infinite). The multiplicity $m_i$ of a point in this multiset is its order as a zero of the function, or the negation of its order as a pole. Then the function itself can be recovered from this data, up to a scalar factor, as $f(q)=\prod (q-c_i)^{m_i}.$ If these multisets are interpreted as members of a free abelian group over the complex numbers, then the product or quotient of two rational functions corresponds to the sum or difference of two group members. Thus, the multiplicative group of rational functions can be factored into the multiplicative group of complex numbers (the associated scalar factors for each function) and the free abelian group over the complex numbers. The rational functions that have a nonzero limiting value at infinity (the meromorphic functions on the Riemann sphere) form a subgroup of this group in which the sum of the multiplicities is zero. This construction has been generalized, in algebraic geometry, to the notion of a divisor. There are different definitions of divisors, but in general they form an abstraction of a codimension-one subvariety of an algebraic variety, the set of solution points of a system of polynomial equations. In the case where the system of equations has one degree of freedom (its solutions form an algebraic curve or Riemann surface), a subvariety has codimension one when it consists of isolated points, and in this case a divisor is again a signed multiset of points from the variety. The meromorphic functions on a compact Riemann surface have finitely many zeros and poles, and their divisors form a subgroup of a free abelian group over the points of the surface, with multiplication or division of functions corresponding to addition or subtraction of group elements. To be the divisor of a meromorphic function, an element of the free abelian group must have multiplicities summing to zero, and meet certain additional constraints depending on the surface. Group rings. The integral group ring $\Z[G]$, for any group $G$, is a ring whose additive group is the free abelian group over $G$. When $G$ is finite and abelian, the multiplicative group of units in $\Z[G]$ has the structure of a direct product of a finite group and a finitely generated free abelian group.
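Returning to the chain groups described above, here is a minimal Python sketch (our own illustration, not from the article) of the boundary operator, extended linearly from simplices to chains exactly as the universal property prescribes:

from collections import defaultdict

def boundary(chain):
    """Boundary of a formal sum of k-simplices (vertex tuples) as a
    formal sum of (k-1)-simplices, with alternating signs."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i+1:]   # omit the i-th vertex
            out[face] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

c = {(0, 1, 2): 1}            # one 2-simplex (a triangle)
print(boundary(c))            # {(1, 2): 1, (0, 2): -1, (0, 1): 1}
print(boundary(boundary(c)))  # {} -- the chain complex property d(d(x)) = 0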
Number of elements in a subset of a commutative group In mathematics, the rank, Prüfer rank, or torsion-free rank of an abelian group "A" is the cardinality of a maximal linearly independent subset. The rank of "A" determines the size of the largest free abelian group contained in "A". If "A" is torsion-free then it embeds into a vector space over the rational numbers of dimension rank "A". For finitely generated abelian groups, rank is a strong invariant and every such group is determined up to isomorphism by its rank and torsion subgroup. Torsion-free abelian groups of rank 1 have been completely classified. However, the theory of abelian groups of higher rank is more involved. The term rank has a different meaning in the context of elementary abelian groups. Definition. A subset {"a""α"} of an abelian group "A" is linearly independent (over Z) if the only linear combination of these elements that is equal to zero is trivial: if $\sum_\alpha n_\alpha a_\alpha = 0, \quad n_\alpha\in\mathbb{Z},$ where all but finitely many coefficients "n""α" are zero (so that the sum is, in effect, finite), then all coefficients are zero. Any two maximal linearly independent sets in "A" have the same cardinality, which is called the rank of "A". The rank of an abelian group is analogous to the dimension of a vector space. The main difference from the vector space case is the presence of torsion. An element of an abelian group "A" is classified as torsion if its order is finite. The set of all torsion elements is a subgroup, called the torsion subgroup and denoted "T"("A"). A group is called torsion-free if it has no non-trivial torsion elements. The factor-group "A"/"T"("A") is the unique maximal torsion-free quotient of "A" and its rank coincides with the rank of "A". The notion of rank with analogous properties can be defined for modules over any integral domain, the case of abelian groups corresponding to modules over Z. For this, see finitely generated module#Generic rank. Properties. If $0\to A\to B\to C\to 0$ is a short exact sequence of abelian groups, then rk "B" = rk "A" + rk "C". This follows from the flatness of Q and the corresponding fact for vector spaces. Rank is also additive over arbitrary direct sums: $\operatorname{rank}\left(\bigoplus_{j\in J}A_j\right) = \sum_{j\in J}\operatorname{rank}(A_j),$ where the sum in the right hand side uses cardinal arithmetic. Groups of higher rank. Abelian groups of rank greater than 1 are sources of interesting examples. For instance, for every cardinal "d" there exist torsion-free abelian groups of rank "d" that are indecomposable, i.e. cannot be expressed as a direct sum of a pair of their proper subgroups. These examples demonstrate that torsion-free abelian groups of rank greater than 1 cannot be simply built by direct sums from torsion-free abelian groups of rank 1, whose theory is well understood. Moreover, for every integer $n\ge 3$, there is a torsion-free abelian group of rank $2n-2$ that is simultaneously a direct sum of two indecomposable groups and a direct sum of "n" indecomposable groups. Hence even the number of indecomposable summands of a group of an even rank greater than or equal to 4 is not well-defined. Another result about non-uniqueness of direct sum decompositions is due to A.L.S. Corner: given integers $n\ge k\ge 1$, there exists a torsion-free abelian group "A" of rank "n" such that for any partition $n = r_1 + \cdots + r_k$ into "k" natural summands, the group "A" is the direct sum of "k" indecomposable subgroups of ranks $r_1, r_2, \ldots, r_k$.
Thus the sequence of ranks of indecomposable summands in a certain direct sum decomposition of a torsion-free abelian group of finite rank is very far from being an invariant of "A". Other surprising examples include torsion-free rank 2 groups "A""n","m" and "B""n","m" such that "A""n" is isomorphic to "B""n" if and only if "n" is divisible by "m". For abelian groups of infinite rank, there is an example of a group "K" and a subgroup "G" such that "K" is indecomposable, "K" is generated by "G" and a single other element, and every nonzero direct summand of "G" is decomposable. Generalization. The notion of rank can be generalized for any module "M" over an integral domain "R", as the dimension over "R"0, the quotient field, of the tensor product of the module with the field: $\operatorname{rank} (M)=\dim_{R_0} M\otimes_R R_0$ This makes sense, since "R"0 is a field, and thus any module (or, to be more specific, vector space) over it is free. It is a generalization, since every abelian group is a module over the integers. It easily follows that the dimension of the tensor product with Q is the cardinality of a maximal linearly independent subset, since for any torsion element "x" and any rational "q", $x\otimes_{\mathbf Z} q = 0.$
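A sketch of the rank computation just described (our own illustration): tensoring with Q turns integer generators into rational vectors, torsion dies since "x" ⊗ "q" = 0, and the rank is the rank of the resulting matrix over Q.

from fractions import Fraction

def rank(vectors):
    """Rank over Q of a list of integer vectors, by Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r, col = 0, 0
    while r < len(rows) and col < len(rows[0]):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r, col = r + 1, col + 1
    return r

# Subgroup of Z^3 generated by (1,2,3), (2,4,6), (0,1,1) has rank 2,
# since the second generator is a multiple of the first.
print(rank([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # 2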
Type of group in mathematics In mathematics, a primary cyclic group is a group that is both a cyclic group and a "p"-primary group for some prime number "p". That is, it is a cyclic group of order "p""m", C"p""m", for some prime number "p", and natural number "m". Every finite abelian group "G" may be written as a finite direct sum of primary cyclic groups, as stated in the fundamental theorem of finite abelian groups: $G=\bigoplus_{1\leq i \leq n}\mathrm{C}_{p_i^{m_i}}.$ This expression is essentially unique: there is a bijection between the sets of groups in two such expressions, which maps each group to one that is isomorphic. Primary cyclic groups are characterised among finitely generated abelian groups as the torsion groups that cannot be expressed as a direct sum of two non-trivial groups. As such they, along with the group of integers, form the building blocks of finitely generated abelian groups. The subgroups of a primary cyclic group are linearly ordered by inclusion. The only other groups that have this property are the quasicyclic groups.
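For a cyclic group the decomposition is given by the Chinese remainder theorem: C"n" splits as the direct sum of C"p""m" over the prime powers "p""m" dividing "n" exactly. A small Python sketch (our own illustration):

def primary_decomposition(n):
    """Prime-power orders of the primary cyclic summands of C_n."""
    parts, p = [], 2
    while p * p <= n:
        if n % p == 0:
            q = 1
            while n % p == 0:
                n //= p
                q *= p
            parts.append(q)
        p += 1
    if n > 1:
        parts.append(n)
    return parts

print(primary_decomposition(360))  # [8, 9, 5]: C_360 = C_8 + C_9 + C_5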
In the mathematical subject of group theory, the rank of a group "G", denoted rank("G"), can refer to the smallest cardinality of a generating set for "G", that is $ \operatorname{rank}(G)=\min\{ |X|: X\subseteq G, \langle X\rangle =G\}.$ If "G" is a finitely generated group, then the rank of "G" is a nonnegative integer. The notion of rank of a group is a group-theoretic analog of the notion of dimension of a vector space. Indeed, for "p"-groups, the rank of the group "P" is the dimension of the vector space "P"/Φ("P"), where Φ("P") is the Frattini subgroup. The rank of a group is also often defined in such a way as to ensure subgroups have rank less than or equal to the whole group, which is automatically the case for dimensions of vector spaces, but not for groups such as affine groups. To distinguish these different definitions, one sometimes calls this rank the subgroup rank. Explicitly, the subgroup rank of a group "G" is the maximum of the ranks of its subgroups: $ \operatorname{sr}(G)=\max_{H \leq G} \min\{ |X|: X \subseteq H, \langle X \rangle = H \}.$ Sometimes the subgroup rank is restricted to abelian subgroups. If "H" and "K" are finitely generated subgroups of a free group and "L" = "H" ∩ "K" is their intersection, then rank("L") − 1 ≤ 2(rank("K") − 1)(rank("H") − 1). This result is due to Hanna Neumann. The Hanna Neumann conjecture states that in fact one always has rank("L") − 1 ≤ (rank("K") − 1)(rank("H") − 1). The Hanna Neumann conjecture has recently been solved by Igor Mineyev and announced independently by Joel Friedman. By Grushko's theorem, rank is additive with respect to free products: rank("A"$\ast$"B") = rank("A") + rank("B"). The rank problem. There is an algorithmic problem studied in group theory, known as the rank problem. The problem asks, for a particular class of finitely presented groups, whether there exists an algorithm that, given a finite presentation of a group from the class, computes the rank of that group. The rank problem is one of the harder algorithmic problems studied in group theory and relatively little is known about it. Known results include: Generalizations and related notions. The rank of a finitely generated group "G" can be equivalently defined as the smallest cardinality of a set "X" such that there exists an onto homomorphism "F"("X") → "G", where "F"("X") is the free group with free basis "X". There is a dual notion of co-rank of a finitely generated group "G" defined as the "largest" cardinality of "X" such that there exists an onto homomorphism "G" → "F"("X"). Unlike rank, co-rank is always algorithmically computable for finitely presented groups, using the algorithm of Makanin and Razborov for solving systems of equations in free groups. The notion of co-rank is related to the notion of a "cut number" for 3-manifolds. If "p" is a prime number, then the "p"-rank of "G" is the largest rank of an elementary abelian "p"-subgroup. The sectional "p"-rank is the largest rank of an elementary abelian "p"-section (quotient of a subgroup).
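A brute-force illustration of the definition (our own sketch, using nothing beyond minimality of generating sets): for the elementary abelian group (Z/2)^3, the rank is 3, matching the dimension of "P"/Φ("P") = "P" over the field with two elements.

from itertools import combinations, product

G = list(product((0, 1), repeat=3))          # elements of (Z/2)^3

def span(gens):
    """Subgroup generated by gens: all Z/2-linear combinations."""
    out = set()
    for coeffs in product((0, 1), repeat=len(gens)):
        v = tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2
                  for i in range(3))
        out.add(v)
    return out

rank = next(k for k in range(1, 4)
            if any(len(span(S)) == 8 for S in combinations(G, k)))
print(rank)  # 3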
In mathematics, in the realm of group theory, a group is said to be a CA-group or centralizer abelian group if the centralizer of any nonidentity element is an abelian subgroup. Finite CA-groups are of historical importance as an early example of the type of classifications that would be used in the Feit–Thompson theorem and the classification of finite simple groups. Several important infinite groups are CA-groups, such as free groups, Tarski monsters, and some Burnside groups, and the locally finite CA-groups have been classified explicitly. CA-groups are also called commutative-transitive groups (or CT-groups for short) because commutativity is a transitive relation amongst the non-identity elements of a group if and only if the group is a CA-group. History. Locally finite CA-groups were classified by several mathematicians from 1925 to 1998. First, finite CA-groups were shown to be simple or solvable (1925). Then, in the Brauer–Suzuki–Wall theorem, finite CA-groups of even order were shown to be Frobenius groups, abelian groups, or two-dimensional projective special linear groups over a finite field of even order, PSL(2, 2"f") for "f" ≥ 2. Finally, finite CA-groups of odd order were shown to be Frobenius groups or abelian groups, and so in particular are never non-abelian simple. CA-groups were important in the context of the classification of finite simple groups. Michio Suzuki showed that every finite, simple, non-abelian CA-group is of even order. This result was first extended to the Feit–Hall–Thompson theorem showing that finite, simple, non-abelian CN-groups have even order, and then to the Feit–Thompson theorem, which states that every finite, simple, non-abelian group is of even order. A textbook exposition of the classification of finite CA-groups is available. A more detailed description of the Frobenius groups appearing is included in Wu's work, where it is shown that a finite, solvable CA-group is a semidirect product of an abelian group and a fixed-point-free automorphism, and that conversely every such semidirect product is a finite, solvable CA-group. Wu also extended the classification of Suzuki et al. to locally finite groups. Examples. Every abelian group is a CA-group, and a group with a non-trivial center is a CA-group if and only if it is abelian. The finite CA-groups are classified: the solvable ones are semidirect products of abelian groups by cyclic groups such that every non-trivial element acts fixed-point-freely and include groups such as the dihedral groups of order 4"k"+2, and the alternating group on 4 points of order 12, while the nonsolvable ones are all simple and are the 2-dimensional projective special linear groups PSL(2, 2"n") for "n" ≥ 2. Infinite CA-groups include free groups, PSL(2, R), and Burnside groups of large prime exponent. Some more recent results in the infinite case are also due to Wu, including a classification of locally finite CA-groups. Wu also observes that Tarski monsters are obvious examples of infinite simple CA-groups.
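A quick computational check of the smallest non-abelian case in the list above (our own sketch): S"3", the dihedral group of order 6 = 4·1 + 2, is a CA-group, i.e. every centralizer of a non-identity element is abelian.

from itertools import permutations

G = list(permutations(range(3)))                    # S_3 as permutation tuples
compose = lambda a, b: tuple(a[b[i]] for i in range(3))
e = (0, 1, 2)

def centralizer(g):
    return [h for h in G if compose(g, h) == compose(h, g)]

def is_abelian(H):
    return all(compose(a, b) == compose(b, a) for a in H for b in H)

print(all(is_abelian(centralizer(g)) for g in G if g != e))  # True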
In mathematics, the category Grp (or Gp) has the class of all groups for objects and group homomorphisms for morphisms. As such, it is a concrete category. The study of this category is known as group theory. Relation to other categories. There are two forgetful functors from Grp, M: Grp → Mon from groups to monoids and U: Grp → Set from groups to sets. M has two adjoints: one right, I: Mon→Grp, and one left, K: Mon→Grp. I: Mon→Grp is the functor sending every monoid to the submonoid of invertible elements and K: Mon→Grp the functor sending every monoid to the Grothendieck group of that monoid. The forgetful functor U: Grp → Set has a left adjoint given by the composite KF: Set→Mon→Grp, where F is the free functor; this functor assigns to every set "S" the free group on "S." Categorical properties. The monomorphisms in Grp are precisely the injective homomorphisms, the epimorphisms are precisely the surjective homomorphisms, and the isomorphisms are precisely the bijective homomorphisms. The category Grp is both complete and cocomplete. The category-theoretical product in Grp is just the direct product of groups while the category-theoretical coproduct in Grp is the free product of groups. The zero objects in Grp are the trivial groups (consisting of just an identity element). Every morphism "f" : "G" → "H" in Grp has a category-theoretic kernel (given by the ordinary kernel from algebra, ker "f" = {"x" in "G" | "f"("x") = "e"}), and also a category-theoretic cokernel (given by the factor group of "H" by the normal closure of "f"("G") in "H"). Unlike in abelian categories, it is not true that every monomorphism in Grp is the kernel of its cokernel. Not additive and therefore not abelian. The category of abelian groups, Ab, is a full subcategory of Grp. Ab is an abelian category, but Grp is not. Indeed, Grp is not even an additive category, because there is no natural way to define the "sum" of two group homomorphisms. A proof of this is as follows: The set of morphisms from the symmetric group "S"3 of degree three to itself, $E=\operatorname{Hom}(S_3,S_3)$, has ten elements: an element "z" whose product on either side with every element of "E" is "z" (the homomorphism sending every element to the identity), three elements such that their product on one fixed side is always itself (the projections onto the three subgroups of order two), and six automorphisms. If Grp were an additive category, then this set "E" of ten elements would be a ring. In any ring, the zero element is singled out by the property that 0"x" = "x"0 = 0 for all "x" in the ring, and so "z" would have to be the zero of "E". However, there are no two nonzero elements of "E" whose product is "z", so this finite ring would have no zero divisors. A finite ring with no zero divisors is a field by Wedderburn's little theorem, but there is no field with ten elements because the order of every finite field is a prime power. Exact sequences. The notion of exact sequence is meaningful in Grp, and some results from the theory of abelian categories, such as the nine lemma, the five lemma, and their consequences, hold true in Grp. Grp is a regular category.
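The ten-element count used in the argument above can be verified by brute force; a small Python sketch (our own illustration):

from itertools import permutations, product

G = list(permutations(range(3)))                     # S_3 as permutation tuples
compose = lambda a, b: tuple(a[b[i]] for i in range(3))

count = 0
for images in product(range(6), repeat=6):           # candidate maps G -> G
    f = {G[i]: G[images[i]] for i in range(6)}
    if all(f[compose(a, b)] == compose(f[a], f[b]) for a in G for b in G):
        count += 1
print(count)  # 10: the trivial map, three projections onto order-2 subgroups,
              # and six automorphisms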
On jumps of upper numbering filtration of the Galois group of a finite Galois extension In mathematics, specifically in local class field theory, the Hasse–Arf theorem is a result concerning jumps of the upper numbering filtration of the Galois group of a finite Galois extension. A special case of it when the residue fields are finite was originally proved by Helmut Hasse, and the general result was proved by Cahit Arf. Statement. Higher ramification groups. The theorem deals with the upper numbered higher ramification groups of a finite abelian extension "L"/"K". So assume "L"/"K" is a finite Galois extension, and that "v""K" is a discrete normalised valuation of "K", whose residue field has characteristic "p" > 0, and which admits a unique extension to "L", say "w". Denote by "v""L" the associated normalised valuation "ew" of "L" and let $\scriptstyle{\mathcal{O}}$ be the valuation ring of "L" under "v""L". Let "L"/"K" have Galois group "G" and define the "s"-th ramification group of "L"/"K" for any real "s" ≥ −1 by $G_s(L/K)=\{\sigma\in G\,:\,v_L(\sigma a-a)\geq s+1 \text{ for all }a\in\mathcal{O}\}.$ So, for example, "G"−1 is the Galois group "G". To pass to the upper numbering one has to define the function "ψ""L"/"K", which is the inverse of the function "η""L"/"K" defined by $\eta_{L/K}(s)=\int_0^s \frac{dx}{[G_0:G_x]}.$ The upper numbering of the ramification groups is then defined by "G""t"("L"/"K") = "G""s"("L"/"K") where "s" = "ψ""L"/"K"("t"). These higher ramification groups "G""t"("L"/"K") are defined for any real "t" ≥ −1, but since "v""L" is a discrete valuation, the groups will change in discrete jumps and not continuously. Thus we say that "t" is a jump of the filtration {"G""t"("L"/"K") : "t" ≥ −1} if "G""t"("L"/"K") ≠ "G""u"("L"/"K") for any "u" > "t". The Hasse–Arf theorem tells us the arithmetic nature of these jumps. Statement of the theorem. With the above set up, the theorem states that the jumps of the filtration {"G""t"("L"/"K") : "t" ≥ −1} are all rational integers. Example. Suppose "G" is cyclic of order $p^n$, where $p$ is the residue characteristic, and let $G(i)$ be the subgroup of $G$ of order $p^{n-i}$. The theorem says that there exist positive integers $i_0, i_1, ..., i_{n-1}$ such that $G_0 = \cdots = G_{i_0} = G = G^0 = \cdots = G^{i_0}$ $G_{i_0 + 1} = \cdots = G_{i_0 + p i_1} = G(1) = G^{i_0 + 1} = \cdots = G^{i_0 + i_1}$ $G_{i_0 + p i_1 + 1} = \cdots = G_{i_0 + p i_1 + p^2 i_2} = G(2) = G^{i_0 + i_1 + 1} = \cdots = G^{i_0 + i_1 + i_2}$ $\vdots$ $G_{i_0 + p i_1 + \cdots + p^{n-1}i_{n-1} + 1} = 1 = G^{i_0 + \cdots + i_{n-1} + 1}.$ Non-abelian extensions. For non-abelian extensions the jumps in the upper filtration need not be at integers. Serre gave an example of a totally ramified extension with Galois group the quaternion group "Q"8 of order 8 whose upper numbering filtration has a jump at the non-integral value "n" = 3/2.
Algebraic structure with addition, multiplication, and division In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers do. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics. The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and "p"-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The relation of two fields is expressed by the notion of a field extension. Galois theory, initiated by Évariste Galois in the 1830s, is devoted to understanding the symmetries of field extensions. Among other results, this theory shows that angle trisection and squaring the circle cannot be done with a compass and straightedge. Moreover, it shows that quintic equations are, in general, algebraically unsolvable. Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects. Definition. Informally, a field is a set, along with two operations defined on that set: an addition operation written as "a" + "b", and a multiplication operation written as "a" ⋅ "b", both of which behave similarly to the corresponding operations on rational numbers and real numbers, including the existence of an additive inverse −"a" for all elements "a", and of a multiplicative inverse "b"−1 for every nonzero element "b". This allows one to also consider the so-called "inverse" operations of subtraction, "a" − "b", and division, "a" / "b", by defining: "a" − "b" := "a" + (−"b"), "a" / "b" := "a" ⋅ "b"−1. Classic definition. Formally, a field is a set "F" together with two binary operations on F called "addition" and "multiplication". A binary operation on F is a mapping "F" × "F" → "F", that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F. The result of the addition of "a" and "b" is called the sum of "a" and "b", and is denoted "a" + "b". Similarly, the result of the multiplication of "a" and "b" is called the product of "a" and "b", and is denoted "ab" or "a" ⋅ "b". These operations are required to satisfy the following properties, referred to as "field axioms" (in these axioms, a, b, and c are arbitrary elements of the field F): associativity of addition and multiplication, "a" + ("b" + "c") = ("a" + "b") + "c" and "a" ⋅ ("b" ⋅ "c") = ("a" ⋅ "b") ⋅ "c"; commutativity of addition and multiplication, "a" + "b" = "b" + "a" and "a" ⋅ "b" = "b" ⋅ "a"; additive and multiplicative identity, i.e., there exist two distinct elements 0 and 1 in "F" such that "a" + 0 = "a" and "a" ⋅ 1 = "a"; additive inverses, i.e., for every "a" in "F" there exists an element −"a" in "F" such that "a" + (−"a") = 0; multiplicative inverses, i.e., for every "a" ≠ 0 in "F" there exists an element "a"−1 in "F" such that "a" ⋅ "a"−1 = 1; and distributivity of multiplication over addition, "a" ⋅ ("b" + "c") = ("a" ⋅ "b") + ("a" ⋅ "c"). An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with 0 as the additive identity; the nonzero elements are a group under multiplication with 1 as the multiplicative identity; and multiplication distributes over addition.
Even more succinct: a field is a commutative ring where $0 \ne 1$ and all nonzero elements are invertible under multiplication. Alternative definition. Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants 0 and 1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants 1 and −1, since 0 = 1 + (−1) and −"a" = (−1)"a". Examples. Rational numbers. Rational numbers were widely used long before the elaboration of the concept of field. They are numbers that can be written as fractions "a"/"b", where "a" and "b" are integers, and "b" ≠ 0. The additive inverse of such a fraction is −"a"/"b", and the multiplicative inverse (provided that "a" ≠ 0) is "b"/"a", which can be seen as follows: $ \frac b a \cdot \frac a b = \frac{ba}{ab} = 1.$ The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows: $\begin{align} & \frac a b \cdot \left(\frac c d + \frac e f \right) \\[6pt] = {} & \frac a b \cdot \left(\frac c d \cdot \frac f f + \frac e f \cdot \frac d d \right) \\[6pt] = {} & \frac{a}{b} \cdot \left(\frac{cf}{df} + \frac{ed}{fd}\right) = \frac{a}{b} \cdot \frac{cf + ed}{df} \\[6pt] = {} & \frac{a(cf + ed)}{bdf} = \frac{acf}{bdf} + \frac{aed}{bdf} = \frac{ac}{bd} + \frac{ae}{bf} \\[6pt] = {} & \frac a b \cdot \frac c d + \frac a b \cdot \frac e f. \end{align}$ Real and complex numbers. The real numbers R, with the usual operations of addition and multiplication, also form a field. The complex numbers C consist of expressions "a" + "bi", with "a", "b" real, where "i" is the imaginary unit, i.e., a (non-real) number satisfying "i"2 = −1. Addition and multiplication of real numbers are defined in such a way that expressions of this type satisfy all field axioms, which thus hold for C. For example, the distributive law enforces ("a" + "bi")("c" + "di") = "ac" + "bci" + "adi" + "bdi"2 = ("ac" − "bd") + ("bc" + "ad")"i". It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines. Constructible numbers.
In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field Q of rational numbers. A classical construction yields square roots of constructible numbers, not necessarily contained within Q: given a segment "AB" of length "p" and a segment "BD" of length one, construct a semicircle over "AD" (center at the midpoint "C"); it intersects the perpendicular line through "B" in a point "F" at a distance of exactly $h=\sqrt p$ from "B". Not all real numbers are constructible. It can be shown that $\sqrt[3] 2$ is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks. A field with four elements. In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called "O", "I", "A", and "B". The notation is chosen such that "O" plays the role of the additive identity element (denoted 0 in the axioms above), and "I" is the multiplicative identity (denoted 1 in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example, "A" ⋅ ("B" + "A") = "A" ⋅ "I" = "A", which equals "A" ⋅ "B" + "A" ⋅ "A" = "I" + "B" = "A", as required by the distributivity. This field is called a finite field or Galois field with four elements, and is denoted F4 or GF(4). The subset consisting of "O" and "I" is also a field, known as the "binary field" F2 or GF(2). In the context of computer science and Boolean algebra, "O" and "I" are often denoted respectively by "false" and "true", and the addition is then denoted XOR (exclusive or). In other words, the structure of the binary field is the basic structure that allows computing with bits. Elementary notions. In this section, "F" denotes an arbitrary field and "a" and "b" are arbitrary elements of "F". Consequences of the definition. One has "a" ⋅ 0 = 0 and −"a" = (−1) ⋅ "a". In particular, one may deduce the additive inverse of every element as soon as one knows −1. If "ab" = 0 then "a" or "b" must be 0, since, if "a" ≠ 0, then "b" = ("a"−1"a")"b" = "a"−1("ab") = "a"−1 ⋅ 0 = 0. This means that every field is an integral domain. In addition, the following properties are true for any elements "a" and "b": −0 = 0, 1−1 = 1, −(−"a") = "a", (−"a") ⋅ "b" = "a" ⋅ (−"b") = −("a" ⋅ "b"), and ("a"−1)−1 = "a" if "a" ≠ 0. The additive and the multiplicative group of a field. The axioms of a field "F" imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by ("F", +) when denoting it simply as "F" could be confusing.
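The four-element field above can be realized concretely as the polynomials over F2 modulo "x"2 + "x" + 1. A minimal Python sketch (our own encoding: "O", "I", "A", "B" stored as the 2-bit integers 0–3, with addition as XOR), reproducing the distributivity check from the text:

O, I, A, B = 0, 1, 2, 3
names = {O: "O", I: "I", A: "A", B: "B"}

def add(u, v):
    return u ^ v                     # coefficientwise addition over F_2

def mul(u, v):
    r = 0
    for i in range(2):               # schoolbook polynomial multiplication
        if (v >> i) & 1:
            r ^= u << i
    if r & 0b100:                    # reduce modulo x^2 + x + 1 (0b111)
        r ^= 0b111
    return r

print(names[mul(A, add(B, A))])          # A, as computed in the text
print(names[add(mul(A, B), mul(A, A))])  # A, matching distributivity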
Similarly, the "nonzero" elements of "F" form an abelian group under multiplication, called the multiplicative group, and denoted by ("F" \ {0}, ⋅) or just "F" \ {0} or "F"*. A field may thus be defined as set "F" equipped with two operations denoted as an addition and a multiplication such that "F" is an abelian group under addition, "F" \ {0} is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses −"a" and "a"−1 are uniquely determined by "a". The requirement 1 ≠ 0 follows, because 1 is the identity element of a group that does not contain 0. Thus, the trivial ring, consisting of a single element, is not a field. Every finite subgroup of the multiplicative group of a field is cyclic (see ). Characteristic. In addition to the multiplication of two elements of "F", it is possible to define the product "n" ⋅ "a" of an arbitrary element "a" of "F" by a positive integer "n" to be the "n"-fold sum "a" + "a" + ⋯ + "a" (which is an element of "F".) If there is no positive integer such that "n" ⋅ 1 = 0, then "F" is said to have characteristic 0. For example, the field of rational numbers Q has characteristic 0 since no positive integer "n" is zero. Otherwise, if there "is" a positive integer "n" satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by "p" and the field is said to have characteristic "p" then. For example, the field F4 has characteristic 2 since (in the notation of the above addition table) I + I = O. If "F" has characteristic "p", then "p" ⋅ "a" = 0 for all "a" in "F". This implies that ("a" + "b")"p" = "a""p" + "b""p", since all other binomial coefficients appearing in the binomial formula are divisible by "p". Here, "a""p" := "a" ⋅ "a" ⋅ ⋯ ⋅ "a" ("p" factors) is the "p"-th power, i.e., the "p"-fold product of the element "a". Therefore, the Frobenius map Fr: "F" → "F", "x" ⟼ "x""p" is compatible with the addition in "F" (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic "p" quite different from fields of characteristic 0. Subfields and prime fields. A "subfield" "E" of a field "F" is a subset of "F" that is a field with respect to the field operations of "F". Equivalently "E" is a subset of "F" that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that 1 ∊ "E", that for all "a", "b" ∊ "E" both "a" + "b" and "a" ⋅ "b" are in "E", and that for all "a" ≠ 0 in "E", both −"a" and 1/"a" are in "E". Field homomorphisms are maps "φ": "E" → "F" between two fields such that "φ"("e"1 + "e"2) = "φ"("e"1) + "φ"("e"2), "φ"("e"1"e"2) = "φ"("e"1) "φ"("e"2), and "φ"(1E) = 1F, where "e"1 and "e"2 are arbitrary elements of "E". All field homomorphisms are injective. If "φ" is also surjective, it is called an isomorphism (or the fields "E" and "F" are called isomorphic). A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field "F" contains a prime field. If the characteristic of "F" is "p" (a prime number), the prime field is isomorphic to the finite field F"p" introduced below. Otherwise the prime field is isomorphic to Q. Finite fields. 
"Finite fields" (also called "Galois fields") are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example F4 is a field with four elements. Its subfield F2 is the smallest field, because by definition a field has at least two distinct elements 1 ≠ 0. The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer "n", arithmetic "modulo "n"" means to work with the numbers Z/"n"Z = {0, 1, ..., "n" − 1}. The addition and multiplication on this set are done by performing the operation in question in the set Z of integers, dividing by "n" and taking the remainder as result. This construction yields a field precisely if "n" is a prime number. For example, taking the prime "n" = 2 results in the above-mentioned field F2. For "n" = 4 and more generally, for any composite number (i.e., any number "n" which can be expressed as a product "n" = "r" ⋅ "s" of two strictly smaller natural numbers), Z/"nZ is not a field: the product of two non-zero elements is zero since "r" ⋅ "s" = 0 in Z/"nZ, which, as was explained above, prevents Z/"nZ from being a field. The field Z/"pZ with "p" elements ("p" being prime) constructed in this way is usually denoted by F"p". Every finite field "F" has "q" = "p""n" elements, where "p" is prime and "n" ≥ 1. This statement holds since "F" may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say "n", which implies the asserted statement. A field with "q" = "p""n" elements can be constructed as the splitting field of the polynomial "f" ("x") = "x""q" − "x". Such a splitting field is an extension of F"p" in which the polynomial "f" has "q" zeros. This means "f" has as many zeros as possible since the degree of "f" is "q". For "q" = 22 = 4, it can be checked case by case using the above multiplication table that all four elements of F4 satisfy the equation "x"4 = "x", so they are zeros of "f". By contrast, in F2, "f" has only two zeros (namely 0 and 1), so "f" does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of "the" finite field with "q" elements, denoted by F"q" or GF("q"). History. Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros "x"1, "x"2, "x"3 of a cubic polynomial in the expression ("x"1 + "ωx"2 + "ω"2"x"3)3 (with "ω" being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown "x" to a quadratic equation for "x"3. Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his "Disquisitiones Arithmeticae" (1801), studied the equation "x" "p" = 1 for a prime "p" and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular "p"-gon can be constructed if "p" = 22"k" + 1. 
Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group. In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word "Körper", which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Moore (1893): By a field we will mean every infinite system of real or complex numbers so closed in itself and perfect that addition, subtraction, multiplication, and division of any two of these numbers again yields a number of the system. In 1881 Leopold Kronecker defined what he called a "domain of rationality", which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q("X"). Prior to this, examples of transcendental numbers had been known since Joseph Liouville's work in 1844; Charles Hermite (1873) and Ferdinand von Lindemann (1882) proved the transcendence of "e" and π, respectively. The first clear definition of an abstract field is due to Weber (1893). In particular, Heinrich Martin Weber's notion included the field F"p". Giuseppe Veronese (1891) studied the field of formal power series, which led Hensel (1904) to introduce the field of "p"-adic numbers. Steinitz (1910) synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. Artin and Schreier (1927) linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem. Constructing fields. Constructing fields from rings. A commutative ring is a set, equipped with an addition and multiplication operation, satisfying all the axioms of a field, except for the existence of multiplicative inverses "a"−1. For example, the integers Z form a commutative ring, but not a field: the reciprocal of an integer "n" is not itself an integer, unless "n" = ±1. In the hierarchy of algebraic structures, fields can be characterized as the commutative rings "R" in which every nonzero element is a unit (which means every element is invertible). Similarly, fields are the commutative rings with precisely two distinct ideals, (0) and "R". Fields are also precisely the commutative rings in which (0) is the only prime ideal. Given a commutative ring "R", there are two ways to construct a field related to "R", i.e., two ways of modifying "R" such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields.
The field of fractions of Z is Q, the rationals, while the residue fields of Z are the finite fields F"p". Field of fractions. Given an integral domain "R", its field of fractions "Q"("R") is built with the fractions of two elements of "R" exactly as Q is constructed from the integers. More precisely, the elements of "Q"("R") are the fractions "a"/"b" where "a" and "b" are in "R", and "b" ≠ 0. Two fractions "a"/"b" and "c"/"d" are equal if and only if "ad" = "bc". The operations on the fractions work exactly as for rational numbers. For example, $\frac{a}{b}+\frac{c}{d} = \frac{ad+bc}{bd}.$ It is straightforward to show that, if the ring is an integral domain, the set of the fractions forms a field. The field "F"("x") of the rational fractions over a field (or an integral domain) "F" is the field of fractions of the polynomial ring "F"["x"]. The field "F"(("x")) of Laurent series $\sum_{i=k}^\infty a_i x^i \ (k \in \Z, a_i \in F)$ over a field "F" is the field of fractions of the ring $F[[x]]$ of formal power series (in which "k" ≥ 0). Since any Laurent series is a fraction of a power series divided by a power of "x" (as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though. Residue fields. In addition to the field of fractions, which embeds "R" injectively into a field, a field can be obtained from a commutative ring "R" by means of a surjective map onto a field "F". Any field obtained in this way is a quotient $R/m$, where "m" is a maximal ideal of "R". If "R" has only one maximal ideal "m", this field is called the residue field of "R". The ideal generated by a single polynomial "f" in the polynomial ring "R" = "E"["X"] (over a field "E") is maximal if and only if "f" is irreducible over "E", i.e., if "f" cannot be expressed as the product of two polynomials in "E"["X"] of smaller degree. This yields a field "F" = "E"["X"] / ( "f" ("X")). This field "F" contains an element "x" (namely the residue class of "X") which satisfies the equation "f" ("x") = 0. For example, C is obtained from R by adjoining the imaginary unit symbol i, which satisfies "f" ("i") = 0, where "f" ("X") = "X"2 + 1. Moreover, "f" is irreducible over R, which implies that the map that sends a polynomial "f" ("X") ∊ R["X"] to "f" ("i" ) yields an isomorphism $\mathbf R[X]/\left(X^2 + 1\right) \ \stackrel \cong \longrightarrow \ \mathbf C.$ Constructing fields within a bigger field. Fields can be constructed inside a given bigger container field. Suppose we are given a field "E", and a field "F" containing "E" as a subfield. For any element "x" of "F", there is a smallest subfield of "F" containing "E" and "x", called the subfield of "F" generated by "x" and denoted "E"("x"). The passage from "E" to "E"("x") is referred to by "adjoining an element" to "E". More generally, for a subset "S" ⊂ "F", there is a minimal subfield of "F" containing "E" and "S", denoted by "E"("S"). The compositum of two subfields "E" and "E' " of some field "F" is the smallest subfield of "F" containing both "E" and "E'." The compositum can be used to construct the biggest subfield of "F" satisfying a certain property, for example the biggest subfield of "F", which is, in the language introduced below, algebraic over "E". Field extensions. The notion of a subfield "E" ⊂ "F" can also be regarded from the opposite point of view, by referring to "F" being a "field extension" (or just extension) of "E", denoted by "F" / "E", and read ""F" over "E"".
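The field-of-fractions construction above translates directly into code. A minimal Python sketch (our own class names), built over the integers so it rebuilds Q; the same code works verbatim over any integral domain whose elements support *, + and ==:

class Frac:
    def __init__(self, a, b):
        assert b != 0
        self.a, self.b = a, b
    def __eq__(self, o):
        return self.a * o.b == self.b * o.a       # a/b = c/d iff ad = bc
    def __add__(self, o):
        return Frac(self.a * o.b + o.a * self.b, self.b * o.b)
    def __mul__(self, o):
        return Frac(self.a * o.a, self.b * o.b)
    def __repr__(self):
        return f"{self.a}/{self.b}"

print(Frac(1, 2) + Frac(1, 3))                    # 5/6
print(Frac(2, 4) == Frac(1, 2))                   # True: 2*2 == 4*1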
A basic datum of a field extension is its degree ["F" : "E"], i.e., the dimension of "F" as an "E"-vector space. For a tower of fields "E" ⊂ "F" ⊂ "G", the degree satisfies the multiplicativity formula ["G" : "E"] = ["G" : "F"] ["F" : "E"]. Extensions whose degree is finite are referred to as finite extensions. The extensions C / R and F4 / F2 are of degree 2, whereas R / Q is an infinite extension. Algebraic extensions. A pivotal notion in the study of field extensions "F" / "E" is that of algebraic elements. An element is "algebraic" over "E" if it is a root of a polynomial with coefficients in "E", that is, if it satisfies a polynomial equation "e""n" "x""n" + "e""n"−1"x""n"−1 + ⋯ + "e"1"x" + "e"0 = 0, with "e""n", ..., "e"0 in E, and "e""n" ≠ 0. For example, the imaginary unit "i" in C is algebraic over R, and even over Q, since it satisfies the equation "i"2 + 1 = 0. A field extension in which every element of "F" is algebraic over "E" is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula. The subfield "E"("x") generated by an element "x", as above, is an algebraic extension of "E" if and only if "x" is an algebraic element. That is to say, if "x" is algebraic, all other elements of "E"("x") are necessarily algebraic as well. Moreover, the degree of the extension "E"("x") / "E", i.e., the dimension of "E"("x") as an "E"-vector space, equals the minimal degree "n" such that there is a polynomial equation involving "x", as above. If this degree is "n", then the elements of "E"("x") have the form $\sum_{k=0}^{n-1} a_k x^k, \ \ a_k \in E.$ For example, the field Q("i") of Gaussian rationals is the subfield of C consisting of all numbers of the form "a" + "bi" where both "a" and "b" are rational numbers: summands of the form "i"2 (and similarly for higher exponents) do not have to be considered here, since "a" + "bi" + "ci"2 can be simplified to "a" − "c" + "bi". Transcendence bases. The above-mentioned field of rational fractions "E"("X"), where "X" is an indeterminate, is not an algebraic extension of "E" since there is no polynomial equation with coefficients in "E" whose zero is "X". Elements, such as "X", which are not algebraic are called transcendental. Informally speaking, the indeterminate "X" and its powers do not interact with elements of "E". A similar construction can be carried out with a set of indeterminates, instead of just one. Once again, the field extension "E"("x") / "E" discussed above is a key example: if "x" is not algebraic (i.e., "x" is not a root of a polynomial with coefficients in "E"), then "E"("x") is isomorphic to "E"("X"). This isomorphism is obtained by substituting "x" for "X" in rational fractions. A subset "S" of a field "F" is a transcendence basis if it is algebraically independent (its elements do not satisfy any nontrivial polynomial relation with coefficients in "E") over "E" and if "F" is an algebraic extension of "E"("S"). Any field extension "F" / "E" has a transcendence basis. Thus, field extensions can be split into ones of the form "E"("S") / "E" (purely transcendental extensions) and algebraic extensions. Closure operations. A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation "f""n" "x""n" + "f""n"−1"x""n"−1 + ⋯ + "f"1"x" + "f"0 = 0, with coefficients "f""n", ..., "f"0 ∈ "F", "n" > 0, has a solution "x" ∊ "F". By the fundamental theorem of algebra, C is algebraically closed, i.e., "any" polynomial equation with complex coefficients has a complex solution.
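A hedged example of the degree computation for algebraic elements, assuming a SymPy version that exposes minimal_polynomial: the degree of Q(√2 + √3) over Q is the degree of the minimal polynomial of √2 + √3.

from sympy import minimal_polynomial, sqrt, Symbol

x = Symbol('x')
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1
# So [Q(sqrt(2) + sqrt(3)) : Q] = 4.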
The rational and the real numbers are "not" algebraically closed since the equation "x"2 + 1 = 0 does not have any rational or real solution. A field containing "F" is called an "algebraic closure" of "F" if it is algebraic over "F" (roughly speaking, not too big compared to "F") and is algebraically closed (big enough to contain solutions of all polynomial equations). By the above, C is an algebraic closure of R. The situation that the algebraic closure is a finite extension of the field "F" is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily 2, and "F" is elementarily equivalent to R. Such fields are also known as real closed fields. Any field "F" has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as "the" algebraic closure and denoted $\overline F$. For example, the algebraic closure of Q is called the field of algebraic numbers. The field $\overline F$ is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of F"q" is exceptionally simple. It is the union of the finite fields containing F"q" (the ones of order "q""n"). For any algebraically closed field "F" of characteristic 0, the algebraic closure of the field "F"(("t")) of Laurent series is the field of Puiseux series, obtained by adjoining roots of "t". Fields with additional structure. Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas. Ordered fields. A field "F" is called an "ordered field" if any two elements can be compared, so that "x" + "y" ≥ 0 and "xy" ≥ 0 whenever "x" ≥ 0 and "y" ≥ 0. For example, the real numbers form an ordered field, with the usual ordering ≥. The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that any quadratic equation $x_1^2 + x_2^2 + \dots + x_n^2 = 0$ only has the solution "x"1 = "x"2 = ⋯ = "x""n" = 0. The set of all possible orders on a fixed field "F" is in one-to-one correspondence with the set of ring homomorphisms from the Witt ring W("F") of quadratic forms over "F", to Z. An Archimedean field is an ordered field such that for each element there exists a finite expression 1 + 1 + ⋯ + 1 whose value is greater than that element, that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (elements smaller than all rational numbers); or, equivalently, the field is isomorphic to a subfield of R. An ordered field is Dedekind-complete if all upper bounds, lower bounds (see Dedekind cut), and limits that should exist actually do exist. More formally, each bounded subset of "F" is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence 1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit. Since every proper subfield of the reals also contains such gaps, R is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals. The hyperreals R* form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers. These are larger, respectively smaller, than any real number.
The hyperreals form the foundational basis of non-standard analysis. Topological fields. Another refinement of the notion of a field is a topological field, in which the set "F" is a topological space, such that all operations of the field (addition, multiplication, the maps "a" ↦ −"a" and "a" ↦ "a"−1) are continuous maps with respect to the topology of the space. The topology of all the fields discussed below is induced from a metric, i.e., a function "d" : "F" × "F" → R, that measures a "distance" between any two elements of "F". The completion of "F" is another field in which, informally speaking, the "gaps" in the original field "F" are filled, if there are any. For example, any irrational number "x", such as "x" = √2, is a "gap" in the rationals Q in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers "p"/"q": the distance between "x" and "p"/"q", given by the absolute value | "x" − "p"/"q" |, is as small as desired. For example, completing Q with respect to the usual absolute value yields R, while completing it with respect to the "p"-adic absolute value yields the field Q"p"; in the latter case, "p", "p"2, "p"3, ... is a zero sequence, i.e., a sequence whose limit (for "n" → ∞) is zero. The field Q"p" is used in number theory and "p"-adic analysis. The algebraic closure of Q"p" carries a unique norm extending the one on Q"p", but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by C"p". Local fields. The following topological fields are called "local fields": finite extensions of Q"p" (local fields of characteristic zero) and finite extensions of F"p"(("t")), the field of Laurent series over F"p" (local fields of characteristic "p"). These two types of local fields share some fundamental similarities. In this relation, the elements "p" ∈ Q"p" and "t" ∈ F"p"(("t")) (each referred to as a uniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in F"p". (However, since the addition in Q"p" is done using carrying, which is not the case in F"p"(("t")), these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper: for example, by the Ax–Kochen theorem, a given first-order statement holds in Q"p" for all but finitely many "p" if and only if it holds in F"p"(("t")) for all but finitely many "p". Differential fields. Differential fields are fields equipped with a derivation, i.e., an operation that allows one to take derivatives of elements of the field. For example, the field R("X"), together with the standard derivative of polynomials, forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations. Galois theory. Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions "F" / "E", which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form "F" = "E"["X"] / "f" ("X"), where "f" is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of "f" are contained in "F" and that "f" has only simple zeros. The latter condition is always satisfied if "E" has characteristic 0. For a finite Galois extension, the Galois group Gal("F"/"E") is the group of field automorphisms of "F" that are trivial on "E" (i.e., the bijections "σ" : "F" → "F" that preserve addition and multiplication and that send elements of "E" to themselves).
The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal("F"/"E") and the set of intermediate extensions of the extension "F"/"E". By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of "f" "cannot" be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving $\sqrt[n]{\ }$. For example, the symmetric group S"n" is not solvable for "n" ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem: "f"("X") = "X"5 − 4"X" + 2 (and "E" = Q), "f"("X") = "X" "n" + "a""n"−1"X" "n"−1 + ⋯ + "a"0 (where "f" is regarded as a polynomial in "E"("a"0, ..., "a""n"−1), for some indeterminates "a""i", "E" is any field, and "n" ≥ 5). The tensor product of fields is not usually a field. For example, a finite extension "F" / "E" of degree "n" is a Galois extension if and only if there is an isomorphism of "F"-algebras "F" ⊗"E" "F" ≅ "F""n". This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects. Invariants of fields. Basic invariants of a field "F" include the characteristic and the transcendence degree of "F" over its prime field. The latter is defined as the maximal number of elements in "F" that are algebraically independent over the prime field. Two algebraically closed fields "E" and "F" are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, C"p" and C are isomorphic (but "not" isomorphic as topological fields). Model theory of fields. In model theory, a branch of mathematical logic, two fields "E" and "F" are called elementarily equivalent if every mathematical statement that is true for "E" is also true for "F" and conversely. The mathematical statements in question are required to be first-order sentences (involving 0, 1, and the operations of addition and multiplication). A typical example, for "n" > 0, "n" an integer, is "φ"("E") = "any polynomial of degree "n" in "E" has a zero in "E"" The set of such formulas for all "n" expresses that "E" is algebraically closed. The Lefschetz principle states that C is elementarily equivalent to any algebraically closed field "F" of characteristic zero. Moreover, any fixed statement "φ" holds in C if and only if it holds in any algebraically closed field of sufficiently high characteristic. If "U" is an ultrafilter on a set "I", and "F""i" is a field for every "i" in "I", the ultraproduct of the "F""i" with respect to "U" is a field. It is denoted by ulim"i"→∞ "F""i", since it behaves in several ways as a limit of the fields "F""i": Łoś's theorem states that any first-order statement that holds for all but finitely many "F""i" also holds for the ultraproduct. Applied to the above sentence φ, this shows that there is an isomorphism $\operatorname{ulim}_{p \to \infty} \overline{\mathbf F}_p \cong \mathbf C.$ The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes "p") ulim"p" Q"p" ≅ ulim"p" F"p"(("t")). 
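The power-series description above (every element of Q"p" expanded in the uniformizer "p" with coefficients drawn from F"p") can be illustrated for integers in a few lines. A minimal sketch, with a helper name of our own choosing:

```python
# A minimal sketch: the first k digits of the p-adic expansion of an integer,
# illustrating how elements of Q_p are written as power series in the
# uniformizer p with coefficients in {0, ..., p-1} (representatives of F_p).

def padic_digits(x: int, p: int, k: int) -> list[int]:
    """Return digits a_0, ..., a_{k-1} with x = sum a_i p^i (mod p^k)."""
    digits = []
    for _ in range(k):
        a = x % p          # coefficient in {0, ..., p-1}
        digits.append(a)
        x = (x - a) // p   # shift: divide by the uniformizer
    return digits

print(padic_digits(11, 2, 6))   # [1, 1, 0, 1, 0, 0] since 11 = 1 + 2 + 8
print(padic_digits(-1, 5, 6))   # [4, 4, 4, 4, 4, 4]: -1 = (p-1) + (p-1)p + ...
```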
In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function exp : "F" → "F"×). The absolute Galois group. For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal("F") is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs "all" finite separable extensions of "F". By elementary means, the group Gal(F"q") can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only algebraic extensions of F"q" are the fields F"q""n" for "n" > 0, and that the Galois groups of these finite extensions are given by Gal(F"q""n" / F"q") = Z/"n"Z. A description in terms of generators and relations is also known for the Galois groups of "p"-adic number fields (finite extensions of Q"p"). Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple "F"-algebras, can be reinterpreted as a Galois cohomology group, namely Br("F") = H2("F", Gm). K-theory. Milnor K-theory is defined as $K_n^M(F) = F^\times \otimes \cdots \otimes F^\times / \left\langle x \otimes (1-x) \mid x \in F \setminus \{0, 1\} \right\rangle.$ The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism $K_n^M(F) / p = H^n(F, \mu_p^{\otimes n}).$ Algebraic K-theory is related to the group of invertible matrices with coefficients in the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism "K"1("F") = "F"×. Matsumoto's theorem shows that "K"2("F") agrees with "K"2"M"("F"). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general. Applications. Linear algebra and commutative algebra. If "a" ≠ 0, then the equation "ax" = "b" has a unique solution "x" in a field "F", namely $x=a^{-1}b.$ This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis. The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular, systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring Z of the integers. Finite fields: cryptography and coding theory. A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing "a""n" = "a" ⋅ "a" ⋅ ⋯ ⋅ "a" ("n" factors, for an integer "n" ≥ 1) in a (large) finite field F"q", can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution "n" to an equation "a""n" = "b". In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form "y"2 = "x"3 + "ax" + "b". Finite fields are also used in coding theory and combinatorics. 
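The efficiency gap just described is visible even at toy scale. A minimal sketch (toy parameters and helper names of our own; real systems use fields whose order has hundreds of digits):

```python
# Square-and-multiply exponentiation in F_p versus brute-force discrete log.
# Assumed toy parameters: p = 101 and the generator a = 2 of F_101^x.

def discrete_exp(a: int, n: int, p: int) -> int:
    """Compute a^n mod p by repeated squaring: O(log n) multiplications."""
    result = 1
    a %= p
    while n > 0:
        if n & 1:
            result = result * a % p
        a = a * a % p
        n >>= 1
    return result

def discrete_log(a: int, b: int, p: int) -> int:
    """Find n with a^n = b mod p by exhaustive search: O(p) in the worst case."""
    x = 1
    for n in range(1, p):
        x = x * a % p
        if x == b:
            return n
    raise ValueError("no solution")

p, a = 101, 2                    # 2 is a primitive root modulo 101
b = discrete_exp(a, 70, p)       # fast
print(b, discrete_log(a, b, p))  # slow recovery of the exponent 70
```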
Geometry: field of functions. Functions on a suitable topological space "X" into a field k can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain: ("f" ⋅ "g")("x") = "f"("x") ⋅ "g"("x"). This makes these functions a commutative "k"-algebra. To have a "field" of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form $\frac{f(x)}{g(x)},$ form a field, called the field of functions. This occurs in two main cases. The first is when "X" is a complex manifold. In this case, one considers the algebra of holomorphic functions, i.e., complex differentiable functions. Their ratios form the field of meromorphic functions on "X". The function field of an algebraic variety "X" (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. The function field of the "n"-dimensional space over a field "k" is "k"("x"1, ..., "x""n"), i.e., the field consisting of ratios of polynomials in "n" indeterminates. The function field of "X" is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing "X" by a (slightly) smaller subvariety. The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of "k"("X"), is invariant under birational equivalence. For curves (i.e., the dimension is one), the function field "k"("X") is very close to "X": if "X" is smooth and proper (the analogue of being compact), "X" can be reconstructed, up to isomorphism, from its field of functions. In higher dimensions the function field remembers less, but still carries decisive information about "X". The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field. Number theory: global fields. Global fields are in the limelight in algebraic number theory and arithmetic geometry. They are, by definition, number fields (finite extensions of Q) or function fields over F"q" (finite extensions of F"q"("t")). As for local fields, these two types of fields share several similar features, even though they are of characteristic 0 and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne). Cyclotomic fields are among the most intensely studied number fields. They are of the form Q(ζ"n"), where ζ"n" is a primitive "n"-th root of unity, i.e., a complex number satisfying $\zeta_n^n = 1$ and $\zeta_n^m \ne 1$ for all $0 < m < n$. For "n" a regular prime, Kummer used cyclotomic fields to prove the case of exponent "n" of Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation "x""n" + "y""n" = "z""n". 
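The defining property of a primitive root of unity can be checked numerically. A small sketch (floating point, so equality is tested up to a tolerance; the helper name is our own):

```python
# Numerical illustration: zeta = e^{2*pi*i/n} is a primitive n-th root of unity,
# i.e. zeta^n = 1 while zeta^m != 1 for 0 < m < n.
import cmath

def is_primitive_root_of_unity(z: complex, n: int, tol: float = 1e-9) -> bool:
    powers = [z ** m for m in range(1, n + 1)]
    return (abs(powers[-1] - 1) < tol                         # z^n = 1
            and all(abs(w - 1) >= tol for w in powers[:-1]))  # z^m != 1 for 0 < m < n

zeta = cmath.exp(2j * cmath.pi / 7)
print(is_primitive_root_of_unity(zeta, 7))  # True
```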
Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of Q, a global field, are the local fields Q"p" and R. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local-global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in R and Q"p", whose solutions can easily be described. Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem of whether any finite group is the Galois group Gal("F"/Q) for some number field "F". Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension Qab of Q: it is the field Q(ζ"n", "n" ≥ 2) obtained by adjoining all primitive "n"-th roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of "F"ab for general number fields "F". For imaginary quadratic fields, $F=\mathbf Q(\sqrt{-d})$, "d" > 0, the theory of complex multiplication describes "F"ab using elliptic curves. For general number fields, no such explicit description is known. Related notions. Beyond the additional structure that fields may enjoy, fields admit various other related notions. Since in any field 0 ≠ 1, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields F"p", as "p" tends to 1. In addition to division rings, there are various other weaker algebraic structures related to fields such as quasifields, near-fields and semifields. There are also proper classes with field structure, which are sometimes called Fields, with a capital F. The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well. Division rings. Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a "division ring" or "skew field"; sometimes associativity is weakened as well. The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, form a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor. The non-existence of an odd-dimensional division algebra is more classical: it can be deduced from the hairy ball theorem. Notes.
4999
abstract_algebra
In mathematics, especially in the area of algebra studying the theory of abelian groups, a pure subgroup is a generalization of direct summand. It has found many uses in abelian group theory and related areas. Definition. A subgroup $S$ of a (typically abelian) group $G$ is said to be pure if whenever an element of $S$ has an $n^{\text{th}}$ root in $G$, it necessarily has an $n^{\text{th}}$ root in $S$. Formally: for all $n \in \Z$ and $a \in S$, if there exists $x \in G$ such that $x^n = a$, then there exists $y \in S$ such that $y^n = a$. Origins. Pure subgroups are also called isolated subgroups or serving subgroups and were first investigated in Prüfer's 1923 paper, which described conditions for the decomposition of primary abelian groups as direct sums of cyclic groups using pure subgroups. The work of Prüfer was complemented by Kulikoff, where many results were proved again using pure subgroups systematically. In particular, a proof was given that pure subgroups of finite exponent are direct summands. A more complete discussion of pure subgroups, their relation to infinite abelian group theory, and a survey of their literature is given in Irving Kaplansky's little red book. Examples. Since in a finitely generated Abelian group the torsion subgroup is a direct summand, one might ask if the torsion subgroup is always a direct summand of an Abelian group. It turns out that it is not always a summand, but it "is" a pure subgroup. Under certain mild conditions, pure subgroups are direct summands. So, one can still recover the desired result under those conditions, as in Kulikoff's paper. Pure subgroups can be used as an intermediate property between a result on direct summands with finiteness conditions and a full result on direct summands with less restrictive finiteness conditions. Another example of this use is Prüfer's paper, where the fact that "finite torsion Abelian groups are direct sums of cyclic groups" is extended to the result that "all torsion Abelian groups of finite exponent are direct sums of cyclic groups" via an intermediate consideration of pure subgroups. Generalizations. Pure subgroups were generalized in several ways in the theory of abelian groups and modules. Pure submodules were defined in a variety of ways, but the definition eventually settled on the modern one in terms of tensor products or systems of equations; earlier definitions were usually more direct generalizations, such as the single equation used above for $n$th roots. Pure injective and pure projective modules follow closely from the ideas of Prüfer's 1923 paper. While pure projective modules have not found as many applications as pure injectives, they are more closely related to the original work: A module is pure projective if it is a direct summand of a direct sum of finitely presented modules. In the case of the integers and Abelian groups a pure projective module amounts to a direct sum of cyclic groups.
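The definition can be tested by brute force in small cyclic groups, written additively so that the condition $x^n = a$ becomes $nx = a$. A minimal sketch; the helper below is hypothetical and for illustration only:

```python
# Brute-force purity check in the cyclic group Z/mZ, written additively.
# It shows that {0, 2} is not pure in Z/4Z, while {0, 2, 4} is pure in Z/6Z
# (there it is even a direct summand).

def is_pure(subgroup: set[int], m: int) -> bool:
    G = range(m)
    for n in range(1, m + 1):   # exponents n; n = 0 and negative n add nothing here
        for a in subgroup:
            has_root_in_G = any(n * x % m == a for x in G)
            has_root_in_S = any(n * y % m == a for y in subgroup)
            if has_root_in_G and not has_root_in_S:
                return False
    return True

print(is_pure({0, 2}, 4))     # False: 2 = 2*1 in Z/4, but 2*y != 2 for y in {0, 2}
print(is_pure({0, 2, 4}, 6))  # True: Z/6 decomposes as {0,2,4} + {0,3}
```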
1065693
abstract_algebra
Group with a cyclic order respected by the group operation In mathematics, a cyclically ordered group is a set with both a group structure and a cyclic order, such that left and right multiplication both preserve the cyclic order. Cyclically ordered groups were first studied in depth by Ladislav Rieger in 1947. They are a generalization of cyclic groups: the infinite cyclic group Z and the finite cyclic groups Z/"n". Since a linear order induces a cyclic order, cyclically ordered groups are also a generalization of linearly ordered groups: the rational numbers Q, the real numbers R, and so on. Some of the most important cyclically ordered groups fall into neither previous category: the circle group T and its subgroups, such as the subgroup of rational points. Quotients of linear groups. It is natural to depict cyclically ordered groups as quotients: one has Z"n" = Z/"n"Z and T = R/Z. Even a once-linear group like Z, when bent into a circle, can be thought of as Z2 / Z. Rieger (1946, 1947, 1948) showed that this picture is a generic phenomenon. For any ordered group L and any central element z that generates a cofinal subgroup Z of L, the quotient group "L" / "Z" is a cyclically ordered group. Moreover, every cyclically ordered group can be expressed as such a quotient group. The circle group. Świerczkowski built upon Rieger's results in another direction. Given a cyclically ordered group K and an ordered group L, the product "K" × "L" is a cyclically ordered group. In particular, if T is the circle group and L is an ordered group, then any subgroup of T × "L" is a cyclically ordered group. Moreover, every cyclically ordered group can be expressed as a subgroup of such a product with T. By analogy with an Archimedean linearly ordered group, one can define an Archimedean cyclically ordered group as a group that does not contain any pair of elements "x", "y" such that [e, "x""n", "y"] for every positive integer n. Since only positive n are considered, this is a stronger condition than its linear counterpart. For example, Z no longer qualifies, since one has [0, "n", −1] for every n. As a corollary to Świerczkowski's proof, every Archimedean cyclically ordered group is a subgroup of T itself. This result is analogous to Otto Hölder's 1901 theorem that every Archimedean linearly ordered group is a subgroup of R. Topology. Every compact cyclically ordered group is a subgroup of T. Related structures. It has been shown that a certain subcategory of cyclically ordered groups, the "projectable Ic-groups with weak unit", is equivalent to a certain subcategory of MV-algebras, the "projectable MV-algebras". Notes.
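The defining compatibility can be illustrated concretely for the finite cyclic groups. A minimal sketch (toy group Z/7Z; the ternary relation below is the standard cyclic order on residues):

```python
# The cyclic order on Z/NZ: [a, b, c] holds when, walking around the circle
# from a, one meets b strictly before c. The check verifies that translation
# by any group element preserves the relation (left and right translation
# coincide here, since the group is abelian).

N = 7

def cyclic(a: int, b: int, c: int) -> bool:
    return 0 < (b - a) % N < (c - a) % N

triples = [(a, b, c) for a in range(N) for b in range(N) for c in range(N)
           if cyclic(a, b, c)]
print(all(cyclic((a + g) % N, (b + g) % N, (c + g) % N)
          for g in range(N) for (a, b, c) in triples))  # True
```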
3452523
abstract_algebra
Group in which each element has finite order In group theory, a branch of mathematics, a torsion group or a periodic group is a group in which every element has finite order. The exponent of such a group, if it exists, is the least common multiple of the orders of the elements. For example, it follows from Lagrange's theorem that every finite group is periodic and it has an exponent that divides its order. Infinite examples. Examples of infinite periodic groups include the additive group of the ring of polynomials over a finite field, and the quotient group of the rationals by the integers, as well as their direct summands, the Prüfer groups. Another example is the direct sum of all dihedral groups. None of these examples has a finite generating set. Explicit examples of finitely generated infinite periodic groups were constructed by Golod, based on joint work with Shafarevich (see "Golod–Shafarevich theorem"), and by Aleshin and Grigorchuk using automata. These groups have infinite exponent; examples with finite exponent are given for instance by Tarski monster groups constructed by Olshanskii. Burnside's problem. Burnside's problem is a classical question that deals with the relationship between periodic groups and finite groups, when only finitely generated groups are considered: Does specifying an exponent force finiteness? The existence of infinite, finitely generated periodic groups as in the previous paragraph shows that the answer is "no" for an arbitrary exponent. Though much more is known about which exponents can occur for infinite finitely generated groups, there are still some for which the problem is open. For some classes of groups, for instance linear groups, the answer to Burnside's problem restricted to the class is positive. Mathematical logic. An interesting property of periodic groups is that the definition cannot be formalized in terms of first-order logic. This is because doing so would require an axiom of the form $\forall x,\big((x = e) \lor (x\circ x=e) \lor ((x\circ x)\circ x=e) \lor \cdots\big) ,$ which contains an infinite disjunction and is therefore inadmissible: first-order logic permits quantifiers over one type and cannot capture properties or subsets of that type. It is also not possible to get around this infinite disjunction by using an infinite set of axioms: the compactness theorem implies that no set of first-order formulae can characterize the periodic groups. Related notions. The torsion subgroup of an abelian group "A" is the subgroup of "A" that consists of all elements that have finite order. A torsion abelian group is an abelian group in which every element has finite order. A torsion-free abelian group is an abelian group in which the identity element is the only element with finite order.
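The example of the rationals modulo the integers can be made concrete: in Q/Z, the order of a coset is just the reduced denominator. A small sketch using exact rational arithmetic:

```python
# Every element of the quotient group Q/Z has finite order: the coset q + Z
# has order equal to the denominator of q in lowest terms.
from fractions import Fraction

def order_in_Q_mod_Z(q: Fraction) -> int:
    n = 1
    while (n * q) % 1 != 0:   # add q to itself until the result is an integer
        n += 1
    return n

print(order_in_Q_mod_Z(Fraction(3, 8)))    # 8
print(order_in_Q_mod_Z(Fraction(5, 12)))   # 12
```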
790960
abstract_algebra
In group theory, a Dedekind group is a group "G" such that every subgroup of "G" is normal. All abelian groups are Dedekind groups. A non-abelian Dedekind group is called a Hamiltonian group. The most familiar (and smallest) example of a Hamiltonian group is the quaternion group of order 8, denoted by Q8. Dedekind and Baer have shown (in the finite and infinite order cases, respectively) that every Hamiltonian group is a direct product of the form "G" = Q8 × "B" × "D", where "B" is an elementary abelian 2-group, and "D" is a torsion abelian group with all elements of odd order. Dedekind groups are named after Richard Dedekind, who investigated them and proved a form of the above structure theorem (for finite groups). He named the non-abelian ones after William Rowan Hamilton, the discoverer of quaternions. In 1898 George Miller delineated the structure of a Hamiltonian group in terms of its order and that of its subgroups. For instance, he showed that "a Hamiltonian group of order $2^a$ has $2^{2a-6}$ quaternion groups as subgroups". In 2005 Horvat "et al." used this structure to count the number of Hamiltonian groups of any order "n" = $2^e o$ where "o" is an odd integer. When "e" < 3, there are no Hamiltonian groups of order "n"; otherwise, there are the same number as there are Abelian groups of order "o". Notes.
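The smallest case of the structure theorem can be checked directly. The following brute-force sketch (quaternion arithmetic and helper names are our own; standard library only) verifies that every subgroup of Q8 is normal, i.e., that Q8 is a Hamiltonian Dedekind group:

```python
# Elements of Q8 are the unit quaternions +-1, +-i, +-j, +-k, stored as
# integer 4-tuples (a, b, c, d) = a + bi + cj + dk.
from itertools import combinations

def mul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

units = [(s, 0, 0, 0) for s in (1, -1)] + [(0, s, 0, 0) for s in (1, -1)] \
      + [(0, 0, s, 0) for s in (1, -1)] + [(0, 0, 0, s) for s in (1, -1)]

def inverse(p):                      # for unit quaternions the inverse is the conjugate
    a, b, c, d = p
    return (a, -b, -c, -d)

# Nonempty subsets closed under multiplication are exactly the subgroups.
subgroups = [set(S) for r in range(1, 9) for S in combinations(units, r)
             if all(mul(x, y) in S for x in S for y in S)]
print(all(all(mul(mul(g, h), inverse(g)) in H for g in units for h in H)
          for H in subgroups))       # True: every subgroup is normal
```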
88253
abstract_algebra
Theorem in homotopy theory Segal's Burnside ring conjecture, or, more briefly, the Segal conjecture, is a theorem in homotopy theory, a branch of mathematics. The theorem relates the Burnside ring of a finite group "G" to the stable cohomotopy of the classifying space "BG". The conjecture was made in the mid 1970s by Graeme Segal and proved in 1984 by Gunnar Carlsson. As of 2016, this statement is still commonly referred to as the Segal conjecture, even though it now has the status of a theorem. Statement of the theorem. The Segal conjecture has several different formulations, not all of which are equivalent. Here is a weak form: there exists, for every finite group "G", an isomorphism $\varprojlim \pi_S^0 \left( BG^{(k)}_+ \right) \to \widehat{A}(G).$ Here, $\varprojlim$ denotes the inverse limit, $\pi_S^*$ denotes the stable cohomotopy ring, "B" denotes the classifying space, the superscript "k" denotes the "k"-skeleton, and the subscript + denotes the addition of a disjoint basepoint. On the right-hand side, the hat denotes the completion of the Burnside ring with respect to its augmentation ideal. The Burnside ring. The Burnside ring of a finite group "G" is constructed from the category of finite "G"-sets as a Grothendieck group. More precisely, let "M"("G") be the commutative monoid of isomorphism classes of finite "G"-sets, with addition the disjoint union of "G"-sets and identity element the empty set (which is a "G"-set in a unique way). Then "A"("G"), the Grothendieck group of "M"("G"), is an abelian group. It is in fact a free abelian group with basis elements represented by the "G"-sets "G"/"H", where "H" varies over the subgroups of "G". (Note that "H" is not assumed here to be a normal subgroup of "G", for while "G"/"H" is not a group in this case, it is still a "G"-set.) The ring structure on "A"("G") is induced by the direct product of "G"-sets; the multiplicative identity is the (isomorphism class of any) one-point set, which becomes a "G"-set in a unique way. The Burnside ring is the analogue of the representation ring in the category of finite sets, as opposed to the category of finite-dimensional vector spaces over a field (see motivation below). It has proven to be an important tool in the representation theory of finite groups. The classifying space. For any topological group "G" admitting the structure of a CW-complex, one may consider the category of principal "G"-bundles. One can define a functor from the category of CW-complexes to the category of sets by assigning to each CW-complex "X" the set of principal "G"-bundles on "X". This functor descends to a functor on the homotopy category of CW-complexes, and it is natural to ask whether the functor so obtained is representable. The answer is affirmative, and the representing object is called the classifying space of the group "G" and typically denoted "BG". If we restrict our attention to the homotopy category of CW-complexes, then "BG" is unique. Any CW-complex that is homotopy equivalent to "BG" is called a "model" for "BG". For example, if "G" is the group of order 2, then a model for "BG" is infinite-dimensional real projective space. It can be shown that if "G" is finite, then any CW-complex modelling "BG" has cells of arbitrarily large dimension. On the other hand, if "G" = Z, the integers, then the classifying space "BG" is homotopy equivalent to the circle "S"1. Motivation and interpretation. The content of the theorem becomes somewhat clearer if it is placed in its historical context. 
In the theory of representations of finite groups, one can form an object $R[G]$ called the representation ring of $G$ in a way entirely analogous to the construction of the Burnside ring outlined above. The stable cohomotopy is in a sense the natural analog to complex K-theory, which is denoted $KU^*$. Segal was inspired to make his conjecture after Michael Atiyah proved the existence of an isomorphism $KU^0(BG) \to \widehat{R}[G]$ which is a special case of the Atiyah–Segal completion theorem.
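The free-abelian-group description of "A"("G") above can be made concrete for a small group. The following sketch (helper names our own) enumerates the subgroups of S3 up to conjugacy by brute force, exhibiting the four basis elements "G"/"H" of "A"(S3):

```python
# Subgroups of S3 (as permutations of {0, 1, 2}) up to conjugacy: the trivial
# group, the three conjugate subgroups of order 2, the subgroup of order 3,
# and S3 itself, so A(S3) is free abelian of rank 4.
from itertools import permutations, combinations

G = list(permutations(range(3)))                       # S3, order 6

def compose(p, q):                                     # (p*q)(x) = p(q(x))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

# Nonempty subsets closed under composition are exactly the subgroups.
subgroups = [frozenset(S) for r in (1, 2, 3, 6) for S in combinations(G, r)
             if all(compose(x, y) in S for x in S for y in S)]

def conjugate(H, g):
    return frozenset(compose(compose(g, h), inv(g)) for h in H)

classes = set(frozenset(conjugate(H, g) for g in G) for H in subgroups)
print(len(classes))   # 4 conjugacy classes of subgroups
```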
966551
abstract_algebra
Abelian group with no non-trivial torsion elements In mathematics, specifically in abstract algebra, a torsion-free abelian group is an abelian group which has no non-trivial torsion elements; that is, a group in which the group operation is commutative and the identity element is the only element with finite order. While finitely generated abelian groups are completely classified, not much is known about infinitely generated abelian groups, even in the torsion-free countable case. Definitions. An abelian group $ \langle G, + ,0\rangle $ is said to be torsion-free if no element other than the identity $0$ is of finite order. Explicitly, for any $n > 0$, the only element $x \in G$ for which $nx = 0$ is $x = 0$. A natural example of a torsion-free group is $ \langle \mathbb Z,+,0\rangle $, as no nonzero integer added to itself finitely many times yields 0. More generally, the free abelian group $\mathbb Z^r$ is torsion-free for any $r \in \mathbb N$. An important step in the proof of the classification of finitely generated abelian groups is that every such torsion-free group is isomorphic to a $\mathbb Z^r$. A non-finitely generated countable example is given by the additive group of the polynomial ring $\mathbb Z[X]$ (the free abelian group of countable rank). More complicated examples are the additive group of the rational field $\mathbb Q$, or its subgroups such as $\mathbb Z[p^{-1}]$ (rational numbers whose denominator is a power of $p$). Yet more involved examples are given by groups of higher rank. Groups of rank 1. Rank. The "rank" of an abelian group $A$ is the dimension of the $\mathbb Q$-vector space $\mathbb Q \otimes_{\mathbb Z} A$. Equivalently it is the maximal cardinality of a linearly independent (over $\Z$) subset of $A$. If $A$ is torsion-free then it injects into $\mathbb Q \otimes_{\mathbb Z} A$. Thus, torsion-free abelian groups of rank 1 are exactly subgroups of the additive group $\mathbb Q$. Classification. Torsion-free abelian groups of rank 1 have been completely classified. To do so one associates to a group $A$ the sequence of $p$-heights of an element: pick any $x \in A \setminus \{0\}$ and, for each prime $p$, let the height $h_p(x) \in \mathbb N \cup \{\infty\}$ be the supremum of the $k$ such that $x \in p^k A$. Choosing another $y \in A\setminus \{0\}$ changes the height sequence only at finitely many primes, and only at primes where both heights are finite (since there exist $n, m \in \mathbb Z\setminus\{0\}$ such that $ny = mx$); height sequences related in this way are called equivalent, and the equivalence class is called the type of $A$. Baer proved that the type is a complete isomorphism invariant for rank-1 torsion-free abelian groups. Classification problem in general. The hardness of a classification problem for a certain type of structures on a countable set can be quantified using model theory and descriptive set theory. In this sense it has been proved that the classification problem for countable torsion-free abelian groups is as hard as possible. Notes.
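The height sequences entering the classification can be computed directly in a rank-1 example. A small sketch (the group $\mathbb Z[1/3]$ is hard-coded, heights are capped so the search terminates, and the helper names are our own):

```python
# p-heights of 1 in the rank-1 group Z[1/3] (rationals whose denominator is a
# power of 3), viewed inside Q: the height is infinite at p = 3 and zero at
# every other prime.
from fractions import Fraction

def in_Z_inv3(q: Fraction) -> bool:
    d = q.denominator
    while d % 3 == 0:
        d //= 3
    return d == 1                      # denominator is a power of 3

def height(x: Fraction, p: int, cap: int = 20):
    k = 0
    while k < cap and in_Z_inv3(x / p ** (k + 1)):   # is x divisible by p^{k+1}?
        k += 1
    return "infinite (capped)" if k == cap else k

print(height(Fraction(1), 3))   # infinite (capped)
print(height(Fraction(1), 2))   # 0
print(height(Fraction(1), 5))   # 0
```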
1613744
abstract_algebra
In the mathematical field of group theory, the transfer defines, given a group "G" and a subgroup "H" of finite index, a group homomorphism from "G" to the abelianization of "H". It can be used in conjunction with the Sylow theorems to obtain certain numerical results on the existence of finite simple groups. The transfer was defined by Issai Schur (1902) and rediscovered by Emil Artin (1929). Construction. The construction of the map proceeds as follows: Let ["G":"H"] = "n" and select coset representatives, say $x_1, \dots, x_n,\,$ for "H" in "G", so "G" can be written as a disjoint union $G = \bigcup\ x_i H.$ Given "y" in "G", each "yxi" is in some coset "xjH" and so $yx_i = x_jh_i$ for some index "j" and some element "h""i" of "H". The value of the transfer for "y" is defined to be the image of the product $\textstyle \prod_{i=1}^n h_i $ in "H"/"H"′, where "H"′ is the commutator subgroup of "H". The order of the factors is irrelevant since "H"/"H"′ is abelian. It is straightforward to show that, though the individual "hi" depends on the choice of coset representatives, the value of the transfer does not. It is also straightforward to show that the mapping defined this way is a homomorphism. Example. If "G" is cyclic then the transfer takes any element "y" of "G" to $y^{[G:H]}$. A simple case is that seen in the Gauss lemma on quadratic residues, which in effect computes the transfer for the multiplicative group of non-zero residue classes modulo a prime number "p", with respect to the subgroup {1, −1}. One advantage of looking at it that way is the ease with which the correct generalisation can be found, for example for cubic residues in the case that "p" − 1 is divisible by three. Homological interpretation. This homomorphism may be set in the context of group homology. In general, given any subgroup "H" of "G" and any "G"-module "A", there is a corestriction map of homology groups $\mathrm{Cor} : H_n(H,A) \to H_n(G,A)$ induced by the inclusion map $i: H \to G$, but if we have that "H" is of finite index in "G", there are also restriction maps $\mathrm{Res} : H_n(G,A) \to H_n(H,A)$. In the case "n" = 1 and $A=\mathbb{Z}$ with the trivial "G"-module structure, we have the map $\mathrm{Res} : H_1(G,\mathbb{Z}) \to H_1(H,\mathbb{Z})$. Noting that $H_1(G,\mathbb{Z})$ may be identified with $G/G'$ where $G'$ is the commutator subgroup, this gives the transfer map via $G \xrightarrow{\pi} G/G' \xrightarrow{\mathrm{Res}} H/H'$, with $\pi$ denoting the natural projection. The transfer is also seen in algebraic topology, when it is defined between classifying spaces of groups. Terminology. The name "transfer" translates the German "Verlagerung", which was coined by Helmut Hasse. Commutator subgroup. If "G" is finitely generated, the commutator subgroup "G"′ of "G" has finite index in "G", and "H" = "G"′, then the corresponding transfer map is trivial. In other words, the map sends "G" to 0 in the abelianization of "G"′. This is important in proving the principal ideal theorem in class field theory. See the Emil Artin-John Tate "Class Field Theory" notes.
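The Gauss-lemma example above is easy to compute. A minimal sketch (helper name our own) evaluates the transfer for "G" = (Z/"p"Z)× and "H" = {1, −1} with the coset representatives 1, ..., ("p" − 1)/2, recovering the Legendre symbol:

```python
# Transfer of y for G = (Z/pZ)^x and H = {1, -1}: multiply y against each coset
# representative and accumulate the sign h_i = +-1, as in Gauss's lemma.

def transfer(y: int, p: int) -> int:
    reps = range(1, (p - 1) // 2 + 1)        # coset representatives of H in G
    sign = 1
    for x in reps:
        z = y * x % p
        # z lies in some coset: z = x_j * h with h = +-1
        if z > (p - 1) // 2:                 # z is congruent to -x_j, so h = -1
            sign = -sign
    return sign

p = 11
print([transfer(y, p) for y in range(1, p)])
# [1, -1, 1, 1, 1, -1, -1, -1, 1, -1]: the squares mod 11 are {1, 3, 4, 5, 9}
```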
242769
abstract_algebra
Group whose operation is composition of permutations In mathematics, a permutation group is a group "G" whose elements are permutations of a given set "M" and whose group operation is the composition of permutations in "G" (which are thought of as bijective functions from the set "M" to itself). The group of "all" permutations of a set "M" is the symmetric group of "M", often written as Sym("M"). The term "permutation group" thus means a subgroup of the symmetric group. If "M" = {1, 2, ..., "n"} then Sym("M") is usually denoted by S"n", and may be called the "symmetric group on n letters". By Cayley's theorem, every group is isomorphic to some permutation group. The way in which the elements of a permutation group permute the elements of the set is called its group action. Group actions have applications in the study of symmetries, combinatorics and many other branches of mathematics, physics and chemistry. Basic properties and terminology. Being a subgroup of a symmetric group, all that is necessary for a set of permutations to satisfy the group axioms and be a permutation group is that it contain the identity permutation, the inverse permutation of each permutation it contains, and be closed under composition of its permutations. A general property of finite groups implies that a finite nonempty subset of a symmetric group is again a group if and only if it is closed under the group operation. The degree of a group of permutations of a finite set is the number of elements in the set. The order of a group (of any type) is the number of elements (cardinality) in the group. By Lagrange's theorem, the order of any finite permutation group of degree "n" must divide "n"! since "n"-factorial is the order of the symmetric group "S""n". Notation. Since permutations are bijections of a set, they can be represented by Cauchy's "two-line notation". This notation lists each of the elements of "M" in the first row, and for each element, its image under the permutation below it in the second row. If $\sigma$ is a permutation of the set $M = \{x_1,x_2,\ldots,x_n\}$ then, $ \sigma = \begin{pmatrix} x_1 & x_2 & x_3 & \cdots & x_n \\ \sigma(x_1) &\sigma(x_2) & \sigma(x_3) & \cdots& \sigma(x_n)\end{pmatrix}.$ For instance, a particular permutation of the set {1, 2, 3, 4, 5} can be written as $\sigma=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 5 & 4 & 3 & 1\end{pmatrix};$ this means that "σ" satisfies "σ"(1) = 2, "σ"(2) = 5, "σ"(3) = 4, "σ"(4) = 3, and "σ"(5) = 1. The elements of "M" need not appear in any special order in the first row, so the same permutation could also be written as $\sigma=\begin{pmatrix} 3 & 2 & 5 & 1 & 4 \\ 4 & 5 & 1 & 2 & 3\end{pmatrix}.$ Permutations are also often written in cycle notation ("cyclic form") so that given the set "M" = {1, 2, 3, 4}, a permutation "g" of "M" with "g"(1) = 2, "g"(2) = 4, "g"(4) = 1 and "g"(3) = 3 will be written as (1, 2, 4)(3), or more commonly, (1, 2, 4) since 3 is left unchanged; if the objects are denoted by single letters or digits, commas and spaces can also be dispensed with, and we have a notation such as (124). The permutation written above in 2-line notation would be written in cycle notation as $ \sigma = (125)(34).$ Composition of permutations–the group product. The product of two permutations is defined as their composition as functions, so $\sigma \cdot \pi$ is the function that maps any element "x" of the set to $\sigma (\pi (x))$. 
Note that the rightmost permutation is applied to the argument first, because of the way function composition is written. Some authors prefer the leftmost factor acting first, but to that end permutations must be written to the "right" of their argument, often as a superscript, so the permutation $\sigma$ acting on the element $x$ results in the image $x ^{\sigma}$. With this convention, the product is given by $x ^{\sigma \cdot \pi} = (x ^{\sigma})^{\pi}$. However, this gives a "different" rule for multiplying permutations. This convention is commonly used in the permutation group literature, but this article uses the convention where the rightmost permutation is applied first. Since the composition of two bijections always gives another bijection, the product of two permutations is again a permutation. In two-line notation, the product of two permutations is obtained by rearranging the columns of the second (leftmost) permutation so that its first row is identical with the second row of the first (rightmost) permutation. The product can then be written as the first row of the first permutation over the second row of the modified second permutation. For example, given the permutations $P = \begin{pmatrix}1 & 2 & 3 & 4 & 5 \\2 & 4 & 1 & 3 & 5 \end{pmatrix}\quad \text{ and } \quad Q = \begin{pmatrix}1 & 2 & 3 & 4 & 5 \\ 5 & 4 & 3 & 2 & 1 \end{pmatrix},$ the product "QP" is: $QP =\begin{pmatrix}1 & 2 & 3 & 4 & 5 \\ 5 & 4 & 3 & 2 & 1 \end{pmatrix}\begin{pmatrix}1 & 2 & 3 & 4 & 5 \\2 & 4 & 1 & 3 & 5 \end{pmatrix} = \begin{pmatrix} 2 & 4 & 1 & 3 & 5 \\ 4 & 2 & 5 & 3 & 1 \end{pmatrix} \begin{pmatrix}1 & 2 & 3 & 4 & 5 \\2 & 4 & 1 & 3 & 5 \end{pmatrix} = \begin{pmatrix}1 & 2 & 3 & 4 & 5 \\4 & 2 & 5 & 3 & 1 \end{pmatrix}.$ The composition of permutations, when they are written in cycle notation, is obtained by juxtaposing the two permutations (with the second one written on the left) and then simplifying to a disjoint cycle form if desired. Thus, the above product would be given by: $Q \cdot P = (1 5)(2 4) \cdot (1 2 4 3) = (1 4 3 5).$ Since function composition is associative, so is the product operation on permutations: $(\sigma \cdot \pi) \cdot \rho = \sigma \cdot(\pi \cdot \rho)$. Therefore, products of two or more permutations are usually written without adding parentheses to express grouping; they are also usually written without a dot or other sign to indicate multiplication (the dots of the previous example were added for emphasis; the product would simply be written as $\sigma \pi \rho$). Neutral element and inverses. The identity permutation, which maps every element of the set to itself, is the neutral element for this product. In two-line notation, the identity is $\begin{pmatrix}1 & 2 & 3 & \cdots & n \\ 1 & 2 & 3 & \cdots & n\end{pmatrix}.$ In cycle notation, "e" = (1)(2)(3)...("n") which by convention is also denoted by just (1) or even (). Since bijections have inverses, so do permutations, and the inverse "σ"−1 of "σ" is again a permutation. Explicitly, whenever "σ"("x")="y" one also has "σ"−1("y")="x". In two-line notation the inverse can be obtained by interchanging the two lines (and sorting the columns if one wishes the first line to be in a given order). For instance $\begin{pmatrix}1 & 2 & 3 & 4 & 5 \\ 2 & 5 & 4 & 3 & 1\end{pmatrix}^{-1} =\begin{pmatrix}1 & 2 & 3 & 4 & 5 \\ 5 & 1 & 4 & 3 & 2\end{pmatrix}.$ To obtain the inverse of a single cycle, we reverse the order of its elements. 
Thus, $ (1 2 5)^{-1} = (5 2 1) = (152).$ To obtain the inverse of a product of cycles, we first reverse the order of the cycles, and then we take the inverse of each as above. Thus, $ [(1 2 5)(3 4)]^{-1} = (34)^{-1}(125)^{-1} = (43)(521) = (34)(152).$ Having an associative product, an identity element, and inverses for all its elements makes the set of all permutations of "M" into a group, Sym("M"): a permutation group. Examples. Consider the following set "G"1 of permutations of the set "M" = {1, 2, 3, 4}: the identity "e" = (1)(2)(3)(4), "a" = (1 2), "b" = (3 4), and "c" = (1 2)(3 4). "G"1 forms a group, since "aa" = "bb" = "e", "ba" = "ab", and "abab" = "e". This permutation group is, as an abstract group, the Klein group "V"4. As another example consider the group of symmetries of a square. Let the vertices of a square be labeled 1, 2, 3 and 4 (counterclockwise around the square starting with 1 in the top left corner). The symmetries are determined by the images of the vertices, that can, in turn, be described by permutations. The rotation by 90° (counterclockwise) about the center of the square is described by the permutation (1234). The 180° and 270° rotations are given by (13)(24) and (1432), respectively. The reflection about the horizontal line through the center is given by (12)(34) and the corresponding vertical line reflection is (14)(23). The reflection about the 1,3−diagonal line is (24) and reflection about the 2,4−diagonal is (13). The only remaining symmetry is the identity (1)(2)(3)(4). This permutation group is known, as an abstract group, as the dihedral group of order 8. Group actions. In the above example of the symmetry group of a square, the permutations "describe" the movement of the vertices of the square induced by the group of symmetries. It is common to say that these group elements are "acting" on the set of vertices of the square. This idea can be made precise by formally defining a group action. Let "G" be a group and "M" a nonempty set. An action of "G" on "M" is a function "f": "G" × "M" → "M" such that "f"("e", "x") = "x" for all "x" in "M" (where "e" is the identity element of "G"), and "f"("g", "f"("h", "x")) = "f"("gh", "x") for all "g", "h" in "G" and all "x" in "M". This pair of conditions can also be expressed as saying that the action induces a group homomorphism from "G" into "Sym"("M"). Any such homomorphism is called a "(permutation) representation" of "G" on "M". For any permutation group, the action that sends ("g", "x") → "g"("x") is called the natural action of "G" on "M". This is the action that is assumed unless otherwise indicated. In the example of the symmetry group of the square, the group's action on the set of vertices is the natural action. However, this group also induces an action on the set of four triangles in the square, which are: "t"1 = 234, "t"2 = 134, "t"3 = 124 and "t"4 = 123. It also acts on the two diagonals: "d"1 = 13 and "d"2 = 24. Transitive actions. The action of a group "G" on a set "M" is said to be "transitive" if, for every two elements "s", "t" of "M", there is some group element "g" such that "g"("s") = "t". Equivalently, the set "M" forms a single orbit under the action of "G". Of the examples above, the group {e, (1 2), (3 4), (1 2)(3 4)} of permutations of {1, 2, 3, 4} is not transitive (no group element takes 1 to 3) but the group of symmetries of a square is transitive on the vertices. Primitive actions. A permutation group "G" acting transitively on a non-empty finite set "M" is "imprimitive" if there is some nontrivial set partition of "M" that is preserved by the action of "G", where "nontrivial" means that the partition isn't the partition into singleton sets nor the partition with only one part. 
Otherwise, if "G" is transitive but does not preserve any nontrivial partition of "M", the group "G" is "primitive". For example, the group of symmetries of a square is imprimitive on the vertices: if they are numbered 1, 2, 3, 4 in cyclic order, then the partition into opposite pairs is preserved by every group element. On the other hand, the full symmetric group on a set "M" is always primitive. Cayley's theorem. Any group "G" can act on itself (the elements of the group being thought of as the set "M") in many ways. In particular, there is a regular action given by (left) multiplication in the group. That is, "f"("g", "x") = "gx" for all "g" and "x" in "G". For each fixed "g", the function "f""g"("x") = "gx" is a bijection on "G" and therefore a permutation of the set of elements of "G". Each element of "G" can be thought of as a permutation in this way and so "G" is isomorphic to a permutation group; this is the content of Cayley's theorem. For example, consider the group "G"1 acting on the set {1, 2, 3, 4} given above. Let the elements of this group be denoted by "e", "a", "b" and "c" = "ab" = "ba". The action of "G"1 on itself described in Cayley's theorem gives the following permutation representation: "f""e" ↦ ("e")("a")("b")("c") "f""a" ↦ ("ea")("bc") "f""b" ↦ ("eb")("ac") "f""c" ↦ ("ec")("ab"). Isomorphisms of permutation groups. If "G" and "H" are two permutation groups on sets "X" and "Y" with actions "f"1 and "f"2 respectively, then we say that "G" and "H" are "permutation isomorphic" (or "isomorphic as permutation groups") if there exists a bijective map "λ" : "X" → "Y" and a group isomorphism "ψ" : "G" → "H" such that "λ"("f"1("g", "x")) = "f"2("ψ"("g"), "λ"("x")) for all "g" in "G" and "x" in "X". If "X" = "Y" this is equivalent to "G" and "H" being conjugate as subgroups of Sym("X"). The special case where "G" = "H" and "ψ" is the identity map gives rise to the concept of "equivalent actions" of a group. In the example of the symmetries of a square given above, the natural action on the set {1,2,3,4} is equivalent to the action on the triangles. The bijection "λ" between the sets is given by "i" ↦ "t""i". The natural action of group "G"1 above and its action on itself (via left multiplication) are not equivalent as the natural action has fixed points and the second action does not. Oligomorphic groups. When a group "G" acts on a set "S", the action may be extended naturally to the Cartesian product "Sn" of "S", consisting of "n"-tuples of elements of "S": the action of an element "g" on the "n"-tuple ("s"1, ..., "s""n") is given by "g"("s"1, ..., "s""n") = ("g"("s"1), ..., "g"("s""n")). The group "G" is said to be "oligomorphic" if the action on "Sn" has only finitely many orbits for every positive integer "n". (This is automatic if "S" is finite, so the term is typically of interest when "S" is infinite.) The interest in oligomorphic groups is partly based on their application to model theory, for example when considering automorphisms in countably categorical theories. History. The study of groups originally grew out of an understanding of permutation groups. Permutations had themselves been intensively studied by Lagrange in 1770 in his work on the algebraic solutions of polynomial equations. This subject flourished and by the mid 19th century a well-developed theory of permutation groups existed, codified by Camille Jordan in his book "Traité des Substitutions et des Équations Algébriques" of 1870. 
Jordan's book was, in turn, based on the papers that were left by Évariste Galois in 1832. When Cayley introduced the concept of an abstract group, it was not immediately clear whether or not this was a larger collection of objects than the known permutation groups (which had a definition different from the modern one). Cayley went on to prove that the two concepts were equivalent in Cayley's theorem. Another classical text containing several chapters on permutation groups is Burnside's "Theory of Groups of Finite Order" of 1911. The first half of the twentieth century was a fallow period in the study of group theory in general, but interest in permutation groups was revived in the 1950s by H. Wielandt whose German lecture notes were reprinted as "Finite Permutation Groups" in 1964.
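As a computational footnote, the worked product and inverse examples earlier in this article can be re-derived in a few lines. A minimal sketch in the article's convention (rightmost permutation acts first; the helper names are our own):

```python
# Permutations on {1, ..., n} stored as dicts mapping each point to its image.

def compose(sigma: dict, pi: dict) -> dict:
    """(sigma . pi)(x) = sigma(pi(x)): apply pi first, then sigma."""
    return {x: sigma[pi[x]] for x in pi}

def inverse(sigma: dict) -> dict:
    return {y: x for x, y in sigma.items()}

P = {1: 2, 2: 4, 3: 1, 4: 3, 5: 5}
Q = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}

print(compose(Q, P))   # {1: 4, 2: 2, 3: 5, 4: 3, 5: 1}, i.e. the cycle (1 4 3 5)
print(inverse({1: 2, 2: 5, 3: 4, 4: 3, 5: 1}))  # {2: 1, 5: 2, 4: 3, 3: 4, 1: 5}
```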
12140
abstract_algebra
In abstract algebra, a torsion abelian group is an abelian group in which every element has finite order. For example, the torsion subgroup of an abelian group is a torsion abelian group.
4637537
abstract_algebra
In mathematics, in the realm of group theory, the term complemented group is used in two distinct, but similar ways. In the sense due to Philip Hall, a complemented group is one in which every subgroup has a group-theoretic complement. Such groups are called completely factorizable groups in the Russian literature. Later, a group was said to be complemented if the lattice of subgroups is a complemented lattice, that is, if for every subgroup "H" there is a subgroup "K" such that "H" ∩ "K" = 1 and ⟨"H", "K" ⟩ is the whole group. Hall's definition required in addition that "H" and "K" permute, that is, that "HK" = { "hk" : "h" in "H", "k" in "K" } form a subgroup. Such groups are also called K-groups in the Italian and lattice theoretic literature. The Frattini subgroup of a K-group is trivial; if a group has a core-free maximal subgroup that is a K-group, then it itself is a K-group; hence subgroups of K-groups need not be K-groups, but quotient groups and direct products of K-groups are K-groups. It has been shown that every finite simple group is a complemented group. Note that in the classification of finite simple groups, "K"-group is more often used to mean a group whose proper subgroups only have composition factors amongst the known finite simple groups. An example of a group that is not complemented (in either sense) is the cyclic group of order "p"2, where "p" is a prime number. This group only has one nontrivial proper subgroup "H", the cyclic group of order "p", so there can be no other subgroup "L" to be the complement of "H".
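The closing example can be verified by brute force. A minimal sketch (additive notation for Z/9, i.e., "p" = 3; the helper names are our own):

```python
# In the cyclic group Z/9, the unique nontrivial proper subgroup H = {0, 3, 6}
# has no complement K with H n K = {0} and H + K the whole group.
from itertools import combinations

m = 9
elements = range(m)

def is_subgroup(S):                    # nonempty + closed under addition mod m
    return all((x + y) % m in S for x in S for y in S)

subgroups = [set(S) for r in range(1, m + 1) for S in combinations(elements, r)
             if is_subgroup(set(S))]

H = {0, 3, 6}
complements = [K for K in subgroups
               if H & K == {0} and {(h + k) % m for h in H for k in K} == set(elements)]
print(complements)   # []: H has no complement, so Z/9 is not complemented
```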
1070651
abstract_algebra
Any of certain special normal subgroups of a group In group theory, a branch of mathematics, a core is any of certain special normal subgroups of a group. The two most common types are the normal core of a subgroup and the "p"-core of a group. The normal core. Definition. For a group "G", the normal core or normal interior of a subgroup "H" is the largest normal subgroup of "G" that is contained in "H" (or equivalently, the intersection of the conjugates of "H"). More generally, the core of "H" with respect to a subset "S" ⊆ "G" is the intersection of the conjugates of "H" under "S", i.e. $\mathrm{Core}_S(H) := \bigcap_{s \in S}{s^{-1}Hs}.$ Under this more general definition, the normal core is the core with respect to "S" = "G". The normal core of any normal subgroup is the subgroup itself. Significance. Normal cores are important in the context of group actions on sets, where the normal core of the isotropy subgroup of any point acts as the identity on its entire orbit. Thus, in case the action is transitive, the normal core of any isotropy subgroup is precisely the kernel of the action. A core-free subgroup is a subgroup whose normal core is the trivial subgroup. Equivalently, it is a subgroup that occurs as the isotropy subgroup of a transitive, faithful group action. The solution for the hidden subgroup problem in the abelian case generalizes to finding the normal core in case of subgroups of arbitrary groups. The "p"-core. In this section "G" will denote a finite group, though some aspects generalize to locally finite groups and to profinite groups. Definition. For a prime "p", the "p"-core of a finite group is defined to be its largest normal "p"-subgroup. It is the normal core of every Sylow "p"-subgroup of the group. The "p"-core of "G" is often denoted $O_p(G)$, and in particular appears in one of the definitions of the Fitting subgroup of a finite group. Similarly, the "p"′-core is the largest normal subgroup of "G" whose order is coprime to "p" and is denoted $O_{p'}(G)$. In the area of finite insoluble groups, including the classification of finite simple groups, the 2′-core is often called simply the core and denoted $O(G)$. This causes only a small amount of confusion, because one can usually distinguish between the core of a group and the core of a subgroup within a group. The "p"′,"p"-core, denoted $O_{p',p}(G)$, is defined by $O_{p',p}(G)/O_{p'}(G) = O_p(G/O_{p'}(G))$. For a finite group, the "p"′,"p"-core is the unique largest normal "p"-nilpotent subgroup. The "p"-core can also be defined as the unique largest subnormal "p"-subgroup; the "p"′-core as the unique largest subnormal "p"′-subgroup; and the "p"′,"p"-core as the unique largest subnormal "p"-nilpotent subgroup. The "p"′ and "p"′,"p"-core begin the upper "p"-series. For sets "π"1, "π"2, ..., "π""n"+1 of primes, one defines subgroups O"π"1, "π"2, ..., "π""n"+1("G") by: $O_{\pi_1,\pi_2,\dots,\pi_{n+1}}(G)/O_{\pi_1,\pi_2,\dots,\pi_{n}}(G) = O_{\pi_{n+1}}( G/O_{\pi_1,\pi_2,\dots,\pi_{n}}(G) )$ The upper "p"-series is formed by taking "π"2"i"−1 = "p"′ and "π"2"i" = "p"; there is also a lower "p"-series. A finite group is said to be "p"-nilpotent if and only if it is equal to its own "p"′,"p"-core. A finite group is said to be "p"-soluble if and only if it is equal to some term of its upper "p"-series; its "p"-length is the length of its upper "p"-series. A finite group "G" is said to be "p"-constrained for a prime "p" if $C_G(O_{p',p}(G)/O_{p'}(G)) \subseteq O_{p',p}(G)$. 
Every nilpotent group is "p"-nilpotent, and every "p"-nilpotent group is "p"-soluble. Every soluble group is "p"-soluble, and every "p"-soluble group is "p"-constrained. A group is "p"-nilpotent if and only if it has a normal "p"-complement, which is just its "p"′-core. Significance. Just as normal cores are important for group actions on sets, "p"-cores and "p"′-cores are important in modular representation theory, which studies the actions of groups on vector spaces. The "p"-core of a finite group is the intersection of the kernels of the irreducible representations over any field of characteristic "p". For a finite group, the "p"′-core is the intersection of the kernels of the ordinary (complex) irreducible representations that lie in the principal "p"-block. For a finite group, the "p"′,"p"-core is the intersection of the kernels of the irreducible representations in the principal "p"-block over any field of characteristic "p". Also, for a finite group, the "p"′,"p"-core is the intersection of the centralizers of the abelian chief factors whose order is divisible by "p" (all of which are irreducible representations over a field of size "p" lying in the principal block). For a finite, "p"-constrained group, an irreducible module over a field of characteristic "p" lies in the principal block if and only if the "p"′-core of the group is contained in the kernel of the representation. Solvable radicals. A related subgroup in concept and notation is the solvable radical. The solvable radical is defined to be the largest solvable normal subgroup, and is denoted $O_\infty(G)$. There is some variance in the literature in defining the "p"′-core of "G". A few authors in only a few papers (for instance Thompson's N-group papers, but not his later work) define the "p"′-core of an insoluble group "G" as the "p"′-core of its solvable radical in order to better mimic properties of the 2′-core.
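The normal core as an intersection of conjugates is directly computable in small cases. A minimal sketch (helper names our own) for "G" = S3 and "H" generated by a transposition:

```python
# Normal core of H in G = S3 (permutations of {0, 1, 2}), computed as the
# intersection of the conjugates s^{-1} H s over all s in G.
from itertools import permutations

G = list(permutations(range(3)))

def compose(p, q):                      # (p*q)(x) = p(q(x))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

H = {(0, 1, 2), (1, 0, 2)}              # identity and the transposition (0 1)

core = set(G)
for s in G:
    core &= {compose(compose(inv(s), h), s) for h in H}
print(core)                             # {(0, 1, 2)}: H is core-free in S3
```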
296131
abstract_algebra
In mathematics, a topological group "G" is called a discrete group if there is no limit point in it (i.e., for each element in "G", there is a neighborhood which only contains that element). Equivalently, the group "G" is discrete if and only if its identity is isolated. A subgroup "H" of a topological group "G" is a discrete subgroup if "H" is discrete when endowed with the subspace topology from "G". In other words, there is a neighborhood of the identity in "G" containing no other element of "H". For example, the integers, Z, form a discrete subgroup of the reals, R (with the standard metric topology), but the rational numbers, Q, do not. Any group can be endowed with the discrete topology, making it a discrete topological group. Since every map from a discrete space is continuous, the topological homomorphisms between discrete groups are exactly the group homomorphisms between the underlying groups. Hence, there is an isomorphism between the category of groups and the category of discrete groups. Discrete groups can therefore be identified with their underlying (non-topological) groups. There are some occasions when a topological group or Lie group is usefully endowed with the discrete topology, 'against nature'. This happens for example in the theory of the Bohr compactification, and in group cohomology theory of Lie groups. A discrete isometry group is an isometry group such that for every point of the metric space the set of images of the point under the isometries is a discrete set. A discrete symmetry group is a symmetry group that is a discrete isometry group. Properties. Since topological groups are homogeneous, one need only look at a single point to determine if the topological group is discrete. In particular, a topological group is discrete if and only if the singleton containing the identity is an open set. A discrete group is the same thing as a zero-dimensional Lie group (uncountable discrete groups are not second-countable, so authors who require Lie groups to satisfy this axiom do not regard these groups as Lie groups). The identity component of a discrete group is just the trivial subgroup while the group of components is isomorphic to the group itself. Since the only Hausdorff topology on a finite set is the discrete one, a finite Hausdorff topological group must necessarily be discrete. It follows that every finite subgroup of a Hausdorff group is discrete. A discrete subgroup "H" of "G" is cocompact if there is a compact subset "K" of "G" such that "HK" = "G". Discrete normal subgroups play an important role in the theory of covering groups and locally isomorphic groups. A discrete normal subgroup of a connected group "G" necessarily lies in the center of "G" and is therefore abelian. Citations.
191592
abstract_algebra
Model of set theory constructed using permutations In mathematical set theory, a permutation model is a model of set theory with atoms (ZFA) constructed using a group of permutations of the atoms. A symmetric model is similar except that it is a model of ZF (without atoms) and is constructed using a group of permutations of a forcing poset. One application is to show the independence of the axiom of choice from the other axioms of ZFA or ZF. Permutation models were introduced by Fraenkel (1922) and developed further by Mostowski (1938). Symmetric models were introduced by Paul Cohen. Construction of permutation models. Suppose that "A" is a set of atoms, and "G" is a group of permutations of "A". A normal filter of "G" is a collection "F" of subgroups of "G" such that: "G" is in "F"; the intersection of any two members of "F" is in "F"; any subgroup of "G" containing a member of "F" is in "F"; every conjugate "gHg"−1 of a member "H" of "F" is in "F"; and, for each atom "a", the stabilizer of "a" is in "F". If "V" is a model of ZFA with "A" the set of atoms, then an element of "V" is called symmetric if the subgroup fixing it is in "F", and is called hereditarily symmetric if it and all elements of its transitive closure are symmetric. The permutation model consists of all hereditarily symmetric elements, and is a model of ZFA. Construction of filters on a group. A filter on a group can be constructed from an invariant ideal on the Boolean algebra of subsets of "A" containing all elements of "A". Here an ideal is a collection "I" of subsets of "A" closed under taking finite unions and subsets, and is called invariant if it is invariant under the action of the group "G". For each element "S" of the ideal one can take the subgroup of "G" consisting of all elements fixing every element of "S". These subgroups generate a normal filter of "G".
4465754
abstract_algebra
Graph where all pairs of edges are automorphic In the mathematical field of graph theory, an edge-transitive graph is a graph G such that, given any two edges "e"1 and "e"2 of G, there is an automorphism of G that maps "e"1 to "e"2. In other words, a graph is edge-transitive if its automorphism group acts transitively on its edges. Examples and properties. The number of connected simple edge-transitive graphs on n vertices is 1, 1, 2, 3, 4, 6, 5, 8, 9, 13, 7, 19, 10, 16, 25, 26, 12, 28 ... (sequence in the OEIS) Edge-transitive graphs include all symmetric graphs, such as the vertices and edges of the cube. Symmetric graphs are also vertex-transitive (if they are connected), but in general edge-transitive graphs need not be vertex-transitive. Every connected edge-transitive graph that is not vertex-transitive must be bipartite (and hence can be colored with only two colors), and either semi-symmetric or biregular. Examples of edge- but not vertex-transitive graphs include the complete bipartite graphs $K_{m,n}$ where m ≠ n, which includes the star graphs $K_{1,n}$. For graphs on n vertices, there are (n-1)/2 such graphs for odd n and (n-2)/2 for even n. Additional edge-transitive graphs which are not symmetric can be formed as subgraphs of these complete bipartite graphs in certain cases. Subgraphs of complete bipartite graphs Km,n exist when m and n share a factor greater than 2. When the greatest common factor is 2, subgraphs exist when 2n/m is even or if m=4 and n is an odd multiple of 6. So edge-transitive subgraphs exist for K3,6, K4,6 and K5,10 but not K4,10. An alternative construction for some edge-transitive graphs is to add vertices to the midpoints of edges of a symmetric graph with v vertices and e edges, creating a bipartite graph with e vertices of order 2, and v of order 2e/v. An edge-transitive graph that is also regular, but still not vertex-transitive, is called semi-symmetric. The Gray graph, a cubic graph on 54 vertices, is an example of a regular graph which is edge-transitive but not vertex-transitive. The Folkman graph, a quartic graph on 20 vertices, is the smallest such graph. The vertex connectivity of an edge-transitive graph always equals its minimum degree.
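The definition lends itself to a direct computational check. The following Python sketch (an illustration added here, not part of the original article) enumerates all vertex permutations, keeps those preserving the edge set, and tests whether every edge can be carried to every other; the example graphs are chosen for illustration only.

```python
from itertools import permutations

# Brute-force edge-transitivity test, straight from the definition:
# every pair of edges must be related by some automorphism.
# Exponential in the number of vertices, so only for tiny graphs.
def is_edge_transitive(vertices, edges):
    vertices = list(vertices)
    edges = [frozenset(e) for e in edges]
    edge_set = set(edges)
    autos = []
    for perm in permutations(vertices):
        f = dict(zip(vertices, perm))
        if all(frozenset((f[u], f[v])) in edge_set for u, v in edges):
            autos.append(f)
    return all(any(frozenset((f[u], f[v])) == e2 for f in autos)
               for u, v in edges for e2 in edges)

# The star K_{1,3} is edge-transitive (but not vertex-transitive):
print(is_edge_transitive([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]))  # True
# The path on 4 vertices is not edge-transitive:
print(is_edge_transitive([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))  # False
```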
224293
abstract_algebra
In abstract algebra, a basic subgroup is a subgroup of an abelian group which is a direct sum of cyclic subgroups and satisfies further technical conditions. This notion was introduced by L. Ya. Kulikov (for "p"-groups) and by László Fuchs (in general) in an attempt to formulate classification theory of infinite abelian groups that goes beyond the Prüfer theorems. It helps to reduce the classification problem to classification of possible extensions between two well understood classes of abelian groups: direct sums of cyclic groups and divisible groups. Definition and properties. A subgroup, "B", of an abelian group, "A", is called "p"-basic, for a fixed prime number, "p", if the following conditions hold: (1) "B" is a direct sum of cyclic groups of order "p""n" and infinite cyclic groups; (2) "B" is "p"-pure in "A"; (3) the quotient "A"/"B" is a "p"-divisible group. Conditions 1–3 imply that the subgroup, "B", is Hausdorff in the "p"-adic topology of "B", which moreover coincides with the topology induced from "A", and that "B" is dense in "A". Picking a generator in each cyclic direct summand of "B" creates a " "p"-basis" of "B", which is analogous to a basis of a vector space or a free abelian group. Every abelian group, "A", contains "p"-basic subgroups for each "p", and any two "p"-basic subgroups of "A" are isomorphic. Abelian groups that contain a unique "p"-basic subgroup have been completely characterized. For the case of "p"-groups they are either divisible or "bounded"; i.e., have bounded exponent. In general, the isomorphism class of the quotient, "A"/"B" by a basic subgroup, "B", may depend on "B". Generalization to modules. The notion of a "p"-basic subgroup in an abelian "p"-group admits a direct generalization to modules over a principal ideal domain. The existence of such a "basic submodule" and uniqueness of its isomorphism type continue to hold.
3117675
abstract_algebra
In mathematics, the term socle has several related meanings. Socle of a group. In the context of group theory, the socle of a group "G", denoted soc("G"), is the subgroup generated by the minimal normal subgroups of "G". It can happen that a group has no minimal non-trivial normal subgroup (that is, every non-trivial normal subgroup properly contains another such subgroup) and in that case the socle is defined to be the subgroup generated by the identity. The socle is a direct product of minimal normal subgroups. As an example, consider the cyclic group Z12 with generator "u", which has two minimal normal subgroups, one generated by "u"4 (which gives a normal subgroup with 3 elements) and the other by "u"6 (which gives a normal subgroup with 2 elements). Thus the socle of Z12 is the group generated by "u"4 and "u"6, which is just the group generated by "u"2. The socle is a characteristic subgroup, and hence a normal subgroup. It is not necessarily transitively normal, however. If a group "G" is a finite solvable group, then the socle can be expressed as a product of elementary abelian "p"-groups. Thus, in this case, it is just a product of copies of Z/"p"Z for various "p", where the same "p" may occur multiple times in the product. Socle of a module. In the context of module theory and ring theory the socle of a module "M" over a ring "R" is defined to be the sum of the minimal nonzero submodules of "M". It can be considered as a dual notion to that of the radical of a module. In set notation, $\mathrm{soc}(M) = \sum_{N \text{ is a simple submodule of }M} N. $ Equivalently, $\mathrm{soc}(M) = \bigcap_{E \text{ is an essential submodule of }M} E. $ The socle of a ring "R" can refer to one of two sets in the ring. Considering "R" as a right "R"-module, $\mathrm{soc}(R_R)$ is defined, and considering "R" as a left "R"-module, $\mathrm{soc}({}_R R)$ is defined. Both of these socles are ring ideals, and it is known they are not necessarily equal. Socle of a Lie algebra. In the context of Lie algebras, a socle of a symmetric Lie algebra is the eigenspace of its structural automorphism that corresponds to the eigenvalue −1. (A symmetric Lie algebra decomposes into the direct sum of its socle and cosocle.)
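As a quick computational illustration of the cyclic-group example (this sketch and its function names are additions, not from the source), the socle of Z"n" can be computed from the radical of "n", since each minimal subgroup of Z"n" has prime order "p" for a prime "p" dividing "n":

```python
# Socle of the cyclic group Z_n: the minimal normal subgroups are the
# subgroups of prime order p for each prime p | n, and their product is
# the subgroup of order rad(n), generated by u^(n / rad(n)).
def radical(n):
    r, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            r *= p
            while m % p == 0:
                m //= p
        p += 1
    return r * m if m > 1 else r

def socle_generator_exponent(n):
    return n // radical(n)

print(socle_generator_exponent(12))  # 2: soc(Z_12) = <u^2>, of order 6
```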
791429
abstract_algebra
Method for partitioning partial orders into levels The Coffman–Graham algorithm is an algorithm for arranging the elements of a partially ordered set into a sequence of levels. The algorithm chooses an arrangement such that an element that comes after another in the order is assigned to a lower level, and such that each level has a number of elements that does not exceed a fixed width bound W. When "W" = 2, it uses the minimum possible number of distinct levels, and in general it uses at most 2 − 2/"W" times as many levels as necessary. It is named after Edward G. Coffman, Jr. and Ronald Graham, who published it in 1972 for an application in job shop scheduling. In this application, the elements to be ordered are jobs, the bound W is the number of jobs that can be scheduled at any one time, and the partial order describes prerequisite relations between the jobs. The goal is to find a schedule that completes all jobs in minimum total time. Subsequently, the same algorithm has also been used in graph drawing, as a way of placing the vertices of a directed graph into layers of fixed widths so that most or all edges are directed consistently downwards. For a partial ordering given by its transitive reduction (covering relation), the Coffman–Graham algorithm can be implemented in linear time using the partition refinement data structure as a subroutine. If the transitive reduction is not given, it takes polynomial time to construct it. Problem statement and applications. In the version of the job shop scheduling problem solved by the Coffman–Graham algorithm, one is given a set of n jobs "J"1, "J"2, ..., "J""n", together with a system of precedence constraints "Ji" < "Jj" requiring that job "Ji" be completed before job "Jj" begins. Each job is assumed to take unit time to complete. The scheduling task is to assign each of these jobs to time slots on a system of W identical processors, minimizing the makespan of the assignment (the time from the beginning of the first job until the completion of the final job). Abstractly, the precedence constraints define a partial order on the jobs, so the problem can be rephrased as one of assigning the elements of this partial order to levels (time slots) in such a way that each time slot has at most as many jobs as processors (at most W elements per level), respecting the precedence constraints. This application was the original motivation for Coffman and Graham to develop their algorithm. In the layered graph drawing framework, the input is a directed graph, and a drawing of the graph is constructed in several stages: a cycle-removal step, the assignment of vertices to layers, an ordering of the vertices within each layer that reduces crossings, and an assignment of coordinates consistent with the layering. In this framework, the y-coordinate assignment again involves grouping elements of a partially ordered set (the vertices of the graph, with the reachability ordering on the vertex set) into layers (sets of vertices with the same y-coordinate), which is the problem solved by the Coffman–Graham algorithm. Although there exist alternative approaches to the layering step besides the Coffman–Graham algorithm, these alternatives in general are either not able to incorporate a bound on the maximum width of a level or rely on complex integer programming procedures. More abstractly, both of these problems can be formalized as a problem in which the input consists of a partially ordered set and an integer W. 
The desired output is an assignment of integer level numbers to the elements of the partially ordered set such that, if "x" < "y" is an ordered pair of related elements of the partial order, the number assigned to x is smaller than the number assigned to y, such that at most W elements are assigned the same number as each other, and minimizing the difference between the smallest and the largest assigned numbers. The algorithm. The Coffman–Graham algorithm performs the following steps: it first constructs a topological ordering of the partial order in which, at each step, among the elements whose predecessors have all been ordered, it chooses the element whose predecessors' positions, sorted in decreasing order, are lexicographically smallest; it then processes the elements in the reverse of this ordering, placing each element into the lowest level that is strictly higher than the levels of all of its successors and that contains fewer than "W" elements, creating a new level when necessary. (A code sketch of these two phases is given at the end of this article.) Analysis. Output quality. As originally proved, their algorithm computes an optimal assignment for "W" = 2; that is, for scheduling problems with unit length jobs on two processors, or for layered graph drawing problems with at most two vertices per layer. A closely related algorithm also finds the optimal solution for scheduling of jobs with varying lengths, allowing pre-emption of scheduled jobs, on two processors. For "W" > 2, the Coffman–Graham algorithm uses a number of levels (or computes a schedule with a makespan) that is within a factor of 2 − 2/"W" of optimal. For instance, for "W" = 3, this means that it uses at most 4/3 times as many levels as is optimal. When the partial order of precedence constraints is an interval order, or belongs to several related classes of partial orders, the Coffman–Graham algorithm finds a solution with the minimum number of levels regardless of its width bound. As well as finding schedules with small makespan, the Coffman–Graham algorithm (modified from the presentation here so that it topologically orders the reverse graph of G and places the vertices as early as possible rather than as late as possible) minimizes the total flow time of two-processor schedules, the sum of the completion times of the individual jobs. A related algorithm can be used to minimize the total flow time for a version of the problem in which preemption of jobs is allowed. Time complexity. The time complexity of the Coffman–Graham algorithm, on an n-element partial order, has been stated to be "O"("n"2). However, this analysis omits the time for constructing the transitive reduction, which is not known to be possible within this bound. Sethi shows how to implement the topological ordering stage of the algorithm in linear time, based on the idea of partition refinement. Sethi also shows how to implement the level assignment stage of the algorithm efficiently by using a disjoint-set data structure. In particular, with a version of this structure published later, this stage also takes linear time.
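The two phases just described can be realized in a few dozen lines. The sketch below is an illustration only, not the authors' original formulation; the function name and the input convention (a transitively reduced DAG given as a dictionary mapping each vertex to its set of successors) are assumptions made here. It fills levels from the sinks upward, at most W vertices per level.

```python
def coffman_graham(succ, W):
    """succ: dict mapping each vertex to its successors (transitively
    reduced DAG); W: maximum number of vertices per level.
    Returns a list of levels, bottom level first."""
    verts = set(succ)
    pred = {v: set() for v in verts}
    for u, ws in succ.items():
        for w in ws:
            pred[w].add(u)

    # Phase 1: label vertices 0..n-1; among vertices whose predecessors
    # are all labelled, pick the one whose decreasing sequence of
    # predecessor labels is lexicographically smallest.
    label = {}
    while len(label) < len(verts):
        ready = [v for v in verts - set(label)
                 if all(u in label for u in pred[v])]
        v = min(ready,
                key=lambda v: sorted((label[u] for u in pred[v]),
                                     reverse=True))
        label[v] = len(label)

    # Phase 2: fill levels from the bottom; a vertex is eligible once
    # all its successors lie on strictly lower levels, and among the
    # eligible vertices the one with the largest label is placed first.
    level, levels, unplaced = {}, [[]], set(verts)
    while unplaced:
        k = len(levels) - 1
        ready = [v for v in unplaced
                 if all(level.get(w, k) < k for w in succ[v])]
        if not ready or len(levels[k]) == W:
            levels.append([])
            continue
        v = max(ready, key=label.get)
        level[v] = k
        levels[k].append(v)
        unplaced.remove(v)
    return levels

# A small example: a < c, b < c, c < d, with width bound 2.
# Prints [['d'], ['c'], ['b', 'a']] (order within a level may vary).
print(coffman_graham({'a': {'c'}, 'b': {'c'}, 'c': {'d'}, 'd': set()}, 2))
```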
3398749
abstract_algebra
In mathematics, a Koszul–Tate resolution or Koszul–Tate complex of the quotient ring "R"/"M" is a projective resolution of it as an "R"-module which also has a structure of a dg-algebra over "R", where "R" is a commutative ring and "M" ⊂ "R" is an ideal. They were introduced by Tate (1957) as a generalization of the Koszul resolution for the quotient "R"/("x"1, ..., "x"n) of "R" by a regular sequence of elements. Friedemann Brandt, Glenn Barnich, and Marc Henneaux (2000) used the Koszul–Tate resolution to calculate BRST cohomology. The differential of this complex is called the Koszul–Tate derivation or Koszul–Tate differential. Construction. First suppose for simplicity that all rings contain the rational numbers "Q". Assume we have a graded supercommutative ring "X", so that "ab" = (−1)deg("a")deg("b")"ba", with a differential "d", with "d"("ab") = "d"("a")"b" + (−1)deg("a")"a""d"("b"), and "x" ∈ "X" is a homogeneous cycle ("dx" = 0). Then we can form a new ring "Y" = "X"[T] of polynomials in a variable "T", where the differential is extended to "T" by "dT"="x". (The polynomial ring is understood in the super sense, so if "T" has odd degree then "T"2 = 0.) The result of adding the element "T" is to kill off the element of the homology of "X" represented by "x", and "Y" is still a supercommutative ring with derivation. A Koszul–Tate resolution of "R"/"M" can be constructed as follows. We start with the commutative ring "R" (graded so that all elements have degree 0). Then add new variables as above of degree 1 to kill off all elements of the ideal "M" in the homology. Then keep on adding more and more new variables (possibly an infinite number) to kill off all homology of positive degree. We end up with a supercommutative graded ring with derivation "d" whose homology is just "R"/"M". If we are not working over a field of characteristic 0, the construction above still works, but it is usually neater to use the following variation of it. Instead of using polynomial rings "X"["T"], one can use a "polynomial ring with divided powers" "X"〈"T"〉, which has a basis of elements "T"("i") for "i" ≥ 0, where "T"("i")"T"("j") = (("i" + "j")!/"i"!"j"!)"T"("i"+"j"). Over a field of characteristic 0, "T"("i") is just "T""i"/"i"!.
1011917
abstract_algebra
In cryptography, most public key cryptosystems are founded on problems that are believed to be intractable. The higher residuosity problem (also called the n th-residuosity problem) is one such problem. This problem is "easier" to solve than integer factorization, so the assumption that this problem is hard to solve is "stronger" than the assumption that integer factorization is hard. Mathematical background. If "n" is an integer, then the integers modulo "n" form a ring. If "n"="pq" where "p" and "q" are primes, then the Chinese remainder theorem tells us that $\mathbb{Z}/n\mathbb{Z} \simeq \mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/q\mathbb{Z}$ The units of any ring form a group under multiplication, and the group of units in $\mathbb{Z}/n\mathbb{Z}$ is traditionally denoted $(\mathbb{Z}/n\mathbb{Z}) ^*$. From the isomorphism above, we have $(\mathbb{Z}/n\mathbb{Z})^* \simeq (\mathbb{Z}/p\mathbb{Z})^* \times (\mathbb{Z}/q\mathbb{Z})^*$ as an isomorphism of "groups". Since "p" and "q" were assumed to be prime, the groups $(\mathbb{Z}/p\mathbb{Z})^*$ and $(\mathbb{Z}/q\mathbb{Z})^*$ are cyclic of orders "p"-1 and "q"-1 respectively. If "d" is a divisor of "p"-1, then the set of "d"th powers in $(\mathbb{Z}/p\mathbb{Z})^*$ forms a subgroup of index "d". If gcd("d","q"-1) = 1, then "every" element in $(\mathbb{Z}/q\mathbb{Z})^*$ is a "d"th power, so the set of "d"th powers in $(\mathbb{Z}/n\mathbb{Z})^*$ is also a subgroup of index "d". In general, if gcd("d","q"-1) = "g", then there are ("q"-1)/"g" "d"th powers in $(\mathbb{Z}/q\mathbb{Z})^*$, so the set of "d"th powers in $(\mathbb{Z}/n\mathbb{Z})^*$ has index "dg". This is most commonly seen when "d" = 2, where we consider the subgroup of quadratic residues: it is well known that exactly one quarter of the elements in $(\mathbb{Z}/n\mathbb{Z})^*$ are quadratic residues (when "n" is the product of exactly two primes, as it is here). The important point is that for any divisor "d" of "p"-1 (or "q"-1) the set of "d"th powers forms a subgroup of $(\mathbb{Z}/n\mathbb{Z})^*.$ Problem statement. Given an integer "n" = "pq" where "p" and "q" are unknown, an integer "d" such that "d" divides "p"-1, and an integer "x" < "n", it is believed to be infeasible to determine whether "x" is a "d"th power (equivalently "d"th residue) modulo "n". Notice that if "p" and "q" are known it is easy to determine whether "x" is a "d"th residue modulo "n", because "x" will be a "d"th residue modulo "p" if and only if $x^{(p-1)/d} \equiv 1 \pmod p$ When "d"=2, this is called the quadratic residuosity problem. Applications. The semantic security of the Benaloh cryptosystem and the Naccache–Stern cryptosystem rests on the intractability of this problem.
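The "easy direction" noted in the problem statement can be made concrete. The Python sketch below is illustrative only; it assumes, as in the setup above, that "d" divides "p"-1 and gcd("d","q"-1) = 1, so that the single Euler-criterion test modulo "p" decides "d"th residuosity modulo "n" = "pq".

```python
from math import gcd

def is_dth_residue(x, p, q, d):
    # With n = p*q factored, Euler's criterion modulo p decides d-th
    # residuosity: x is a d-th power mod p iff x^((p-1)/d) = 1 (mod p).
    # Since gcd(d, q-1) = 1, every unit mod q is a d-th power, so the
    # mod-p test alone decides d-th residuosity modulo n.
    assert (p - 1) % d == 0 and gcd(d, q - 1) == 1
    return pow(x, (p - 1) // d, p) == 1

# Example: p = 7, q = 11, d = 3; the cubes modulo 7 are {1, 6}.
print([x for x in range(1, 7) if is_dth_residue(x, 7, 11, 3)])  # [1, 6]
```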
1121727
abstract_algebra
In mathematics, the binary cyclic group of the "n"-gon is the cyclic group of order 2"n", $C_{2n}$, thought of as an extension of the cyclic group $C_n$ by a cyclic group of order 2. Coxeter writes the "binary cyclic group" with angle-brackets, ⟨"n"⟩, and the index 2 subgroup as ("n") or ["n"]+. It is the binary polyhedral group corresponding to the cyclic group. In terms of binary polyhedral groups, the binary cyclic group is the preimage of the cyclic group of rotations ($C_n < \operatorname{SO}(3)$) under the 2:1 covering homomorphism $\operatorname{Spin}(3) \to \operatorname{SO}(3)\,$ of the special orthogonal group by the spin group. As a subgroup of the spin group, the binary cyclic group can be described concretely as a discrete subgroup of the unit quaternions, under the isomorphism $\operatorname{Spin}(3) \cong \operatorname{Sp}(1)$ where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Presentation. The "binary cyclic group" can be defined as the group of 2"n"th roots of unity, $\left\{ 1, \omega_n, \omega_n^2, \ldots, \omega_n^{2n-1} \right\},$ generated by $\omega_n = e^{i\pi/n} = \cos\frac{\pi}{n} + i\sin\frac{\pi}{n}$ under multiplication.
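As a quick numerical illustration (an addition, not from the original article), one can verify that ω"n" has multiplicative order 2"n", so its powers form a cyclic group of order 2"n":

```python
import cmath

# Check numerically that w = e^{i*pi/n} has order 2n, here for n = 5:
# the smallest k with w^k = 1 should be 10.
n = 5
w = cmath.exp(1j * cmath.pi / n)
k, z = 1, w
while abs(z - 1) > 1e-9:
    z *= w
    k += 1
print(k)  # 10, i.e. 2n
```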
1889756
abstract_algebra
The women's team event at the 2020 Summer Olympics in Tokyo, Japan, took place at the Tokyo Aquatics Centre on 6 and 7 August 2021. Competition format. Only one round of competition is held. Each team will perform a technical routine and a free routine. The scores from the two routines are added together to decide the overall winners. Both free and technical routines starting lists are decided by random draw. The technical routine must be completed in between 2 minutes 35 seconds and 3 minutes 5 seconds. There are 3 panels of 5 judges for each routine. In the technical routine, one panel each considers execution (30% of score), impression (30%), and elements (40%). The execution and impression judges each give a single score, while the elements judges give a score for each element. Scores are between 0 and 10, with 0.1 point increments. The highest and lowest score from each panel (including within each element, for the elements panel) are discarded. The remaining scores are averaged and weighted by the percentage for that panel, with element scores weighted within the element panel by degree of difficulty. The maximum possible score is 100. The routine must contain two highlight moves, one with the full team and one with the team split into subgroups. It must also contain a cadence, a circle, and a straight line. There are 5 required elements, which must be done in order: The free routine time limits are 3 minutes 45 seconds to 4 minutes 15 seconds. There is no restriction on the routine, except that there is a maximum of 6 acrobatic movements. The 3 panels for the free routine consider execution (30% of score), artistic impression (40%), and difficulty (30%). Each judge gives a single score. The highest and lowest score from each panel are discarded, with the remaining scores averaged and weighted. The maximum possible score is 100. Qualification. A total of 10 teams qualify for the event. The 2 National Olympic Committees (NOC) with the best result at the 2019 World Aquatics Championships qualify. Each continent also received one dedicated duet place; Africa and Oceania used the 2019 World Aquatics Championships to determine their selections, while the 2019 Pan American Games and the 2019 European Champions Cup served as qualifiers for the Americas and Europe. The Asia spot was guaranteed to the Olympic host, Japan. The final 3 places will be determined through a 2020 Olympic Qualification Tournament. Schedule. The schedule for the women's team event covers two consecutive days of competition. Notes.
5718109
abstract_algebra
Algebraic ring without a multiplicative identity In mathematics, and more specifically in abstract algebra, a rng (or non-unital ring or pseudo-ring) is an algebraic structure satisfying the same properties as a ring, but without assuming the existence of a multiplicative identity. The term "rng" is meant to suggest that it is a ring without "i", that is, without the requirement for an identity element. There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms. The term "rng" was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity. A number of algebras of functions considered in analysis are not unital, for instance the algebra of functions decreasing to zero at infinity, especially those with compact support on some (non-compact) space. Definition. Formally, a rng is a set "R" with two binary operations (+, ·) called "addition" and "multiplication" such that ("R", +) is an abelian group, ("R", ·) is a semigroup (that is, multiplication is associative), and multiplication distributes over addition on both sides. A rng homomorphism is a function "f": "R" → "S" from one rng to another such that "f"("x" + "y") = "f"("x") + "f"("y") and "f"("xy") = "f"("x")"f"("y") for all "x" and "y" in "R". If "R" and "S" are rings, then a ring homomorphism "R" → "S" is the same as a rng homomorphism "R" → "S" that maps 1 to 1. Examples. All rings are rngs. A simple example of a rng that is not a ring is given by the even integers with the ordinary addition and multiplication of integers. Another example is given by the set of all 3-by-3 real matrices whose bottom row is zero. Both of these examples are instances of the general fact that every (one- or two-sided) ideal is a rng. Rngs often appear naturally in functional analysis when linear operators on infinite-dimensional vector spaces are considered. Take for instance any infinite-dimensional vector space "V" and consider the set of all linear operators "f" : "V" → "V" with finite rank (i.e. dim "f"("V") < ∞). Together with addition and composition of operators, this is a rng, but not a ring. Another example is the rng of all real sequences that converge to 0, with component-wise operations. Also, many test function spaces occurring in the theory of distributions consist of functions decreasing to zero at infinity, like e.g. Schwartz space. Thus, the function everywhere equal to one, which would be the only possible identity element for pointwise multiplication, cannot exist in such spaces, which therefore are rngs (for pointwise addition and multiplication). In particular, the real-valued continuous functions with compact support defined on some topological space, together with pointwise addition and multiplication, form a rng; this is not a ring unless the underlying space is compact. Example: even integers. The set 2Z of even integers is closed under addition and multiplication and has an additive identity, 0, so it is a rng, but it does not have a multiplicative identity, so it is not a ring. In 2Z, the only multiplicative idempotent is 0, the only nilpotent is 0, and the only element with a reflexive inverse is 0. Example: finite quinary sequences. The direct sum $\mathcal T = \bigoplus_{i=1}^\infty \mathbf{Z}/5 \mathbf{Z}$ equipped with coordinate-wise addition and multiplication is a rng. Adjoining an identity element (Dorroh extension). Every rng "R" can be enlarged to a ring "R"^ by adjoining an identity element. 
A general way in which to do this is to formally add an identity element 1 and let "R"^ consist of integral linear combinations of 1 and elements of "R", with the premise that no nonzero integral multiple of 1 coincides with an element of "R". That is, elements of "R"^ are of the form "n" ⋅ 1 + "r" where "n" is an integer and "r" ∈ "R". Multiplication is defined by linearity: ("n"1 + "r"1) ⋅ ("n"2 + "r"2) = "n"1"n"2 + "n"1"r"2 + "n"2"r"1 + "r"1"r"2. More formally, we can take "R"^ to be the cartesian product Z × "R" and define addition and multiplication by ("n"1, "r"1) + ("n"2, "r"2) = ("n"1 + "n"2, "r"1 + "r"2), ("n"1, "r"1) · ("n"2, "r"2) = ("n"1"n"2, "n"1"r"2 + "n"2"r"1 + "r"1"r"2). The multiplicative identity of "R"^ is then (1, 0). There is a natural rng homomorphism "j" : "R" → "R"^ defined by "j"("r") = (0, "r"). This map has the following universal property: Given any ring "S" and any rng homomorphism "f" : "R" → "S", there exists a unique ring homomorphism "g" : "R"^ → "S" such that "f" = "gj". The map "g" can be defined by "g"("n", "r") = "n" · 1"S" + "f"("r"). There is a natural surjective ring homomorphism "R"^ → Z which sends ("n", "r") to "n". The kernel of this homomorphism is the image of "R" in "R"^. Since "j" is injective, we see that "R" is embedded as a (two-sided) ideal in "R"^ with the quotient ring "R"^/"R" isomorphic to Z. It follows that "Every rng is an ideal in some ring, and every ideal of a ring is a rng." Note that "j" is never surjective. So, even when "R" already has an identity element, the ring "R"^ will be a larger one with a different identity. The ring "R"^ is often called the Dorroh extension of "R" after the American mathematician Joe Lee Dorroh, who first constructed it. The process of adjoining an identity element to a rng can be formulated in the language of category theory. If we denote the category of all rings and ring homomorphisms by Ring and the category of all rngs and rng homomorphisms by Rng, then Ring is a (nonfull) subcategory of Rng. The construction of "R"^ given above yields a left adjoint to the inclusion functor "I" : Ring → Rng. Notice that Ring is not a reflective subcategory of Rng because the inclusion functor is not full. Properties weaker than having an identity. There are several properties that have been considered in the literature that are weaker than having an identity element, but not so general. For example: It is not hard to check that these properties are weaker than having an identity element and weaker than the previous one. Rng of square zero. A rng of square zero is a rng "R" such that "xy" = 0 for all "x" and "y" in "R". Any abelian group can be made a rng of square zero by defining the multiplication so that "xy" = 0 for all "x" and "y"; thus every abelian group is the additive group of some rng. The only rng of square zero with a multiplicative identity is the zero ring {0}. Any additive subgroup of a rng of square zero is an ideal. Thus a rng of square zero is simple if and only if its additive group is a simple abelian group, i.e., a cyclic group of prime order. Unital homomorphism. Given two unital algebras "A" and "B", an algebra homomorphism "f" : "A" → "B" is unital if it maps the identity element of "A" to the identity element of "B". 
If the associative algebra "A" over the field "K" is "not" unital, one can adjoin an identity element as follows: take "A" × "K" as underlying "K"-vector space and define multiplication ∗ by ("x", "r") ∗ ("y", "s") = ("xy" + "sx" + "ry", "rs") for "x", "y" in "A" and "r", "s" in "K". Then ∗ is an associative operation with identity element (0, 1). The old algebra "A" is contained in the new one, and in fact "A" × "K" is the "most general" unital algebra containing "A", in the sense of universal constructions. Notes.
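To make the constructions above concrete, here is a minimal Python sketch of the Dorroh extension Z × "R", taking "R" to be the even integers 2Z; the class and its names are illustrative assumptions, not a standard library API.

```python
class Dorroh:
    """Element (n, r) of the Dorroh extension Z x R, where R = 2Z."""
    def __init__(self, n, r):
        assert r % 2 == 0, "r must lie in the rng 2Z"
        self.n, self.r = n, r

    def __add__(self, other):
        return Dorroh(self.n + other.n, self.r + other.r)

    def __mul__(self, other):
        # (n1, r1) . (n2, r2) = (n1 n2, n1 r2 + n2 r1 + r1 r2)
        return Dorroh(self.n * other.n,
                      self.n * other.r + other.n * self.r + self.r * other.r)

    def __repr__(self):
        return f"({self.n}, {self.r})"

one = Dorroh(1, 0)       # the adjoined identity
x = Dorroh(0, 4)         # j(4), the image of 4 from 2Z
print(one * x, x * one)  # (0, 4) (0, 4): (1, 0) acts as an identity
```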
767003
abstract_algebra
In mathematical group theory, the Hall–Higman theorem, due to Philip Hall and Graham Higman (1956, Theorem B), describes the possibilities for the minimal polynomial of an element of prime power order for a representation of a "p"-solvable group. Statement. Suppose that "G" is a "p"-solvable group with no normal "p"-subgroups, acting faithfully on a vector space over a field of characteristic "p". If "x" is an element of order "p""n" of "G" then the minimal polynomial is of the form ("X" − 1)"r" for some "r" ≤ "p""n". The Hall–Higman theorem states that one of the following 3 possibilities holds: "r" = "p""n"; or "p" is a Fermat prime and the Sylow 2-subgroups of "G" are non-abelian, in which case "r" ≥ "p""n" − "p""n"−1; or "p" = 2 and the Sylow "q"-subgroups of "G" are non-abelian for some Mersenne prime "q" = 2"m" − 1 with "m" ≤ "n", in which case "r" ≥ 2"n" − 2"n"−"m". Examples. The group SL2(F3) is 3-solvable (in fact solvable) and has an obvious 2-dimensional representation over a field of characteristic "p"=3, in which the elements of order 3 have minimal polynomial ("X"−1)2 with "r" = 2 = 3−1.
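The SL2(F3) example can be verified by direct computation. The short Python check below (illustrative, not from the source) takes the order-3 matrix M = [[1,1],[0,1]] over F3 and confirms that M3 = I while M − I is nonzero with (M − I)2 = 0, so the minimal polynomial of M is (X − 1)2:

```python
P = 3  # work over the field F_3

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % P
             for j in range(2)] for i in range(2)]

M = [[1, 1], [0, 1]]
I = [[1, 0], [0, 1]]
N = [[(M[i][j] - I[i][j]) % P for j in range(2)] for i in range(2)]

print(matmul(M, matmul(M, M)) == I)        # True: M has order 3
print(N != [[0, 0], [0, 0]] and
      matmul(N, N) == [[0, 0], [0, 0]])    # True: (M - I)^2 = 0, M != I
```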
3206137
abstract_algebra
Largest and smallest value taken by a function at a given point In mathematical analysis, the maximum and minimum of a function are, respectively, the largest and smallest value taken by the function. Known generically as extrema, they may be defined either within a given range (the "local" or "relative" extrema) or on the entire domain (the "global" or "absolute" extrema) of a function. Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions. As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum. In statistics, the corresponding concept is the sample maximum and minimum. Definition. A real-valued function "f" defined on a domain "X" has a global (or absolute) maximum point at "x"∗, if "f"("x"∗) ≥ "f"("x") for all "x" in "X". Similarly, the function has a global (or absolute) minimum point at "x"∗, if "f"("x"∗) ≤ "f"("x") for all "x" in "X". The value of the function at a maximum point is called the maximum value of the function, denoted $\max(f(x))$, and the value of the function at a minimum point is called the minimum value of the function. Symbolically, this can be written as follows: $x_0 \in X$ is a global maximum point of function $f:X \to \R$, if $(\forall x \in X)\, f(x_0) \geq f(x).$ The definition of global minimum point also proceeds similarly. If the domain "X" is a metric space, then "f" is said to have a local (or relative) maximum point at the point "x"∗, if there exists some "ε" > 0 such that "f"("x"∗) ≥ "f"("x") for all "x" in "X" within distance "ε" of "x"∗. Similarly, the function has a local minimum point at "x"∗, if "f"("x"∗) ≤ "f"("x") for all "x" in "X" within distance "ε" of "x"∗. A similar definition can be used when "X" is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows: Let $(X, d_X)$ be a metric space and $f:X \to \R$ a function. Then $x_0 \in X$ is a local maximum point of function $f$ if $(\exists \varepsilon > 0)$ such that $(\forall x \in X)\, d_X(x, x_0)<\varepsilon \implies f(x_0)\geq f(x).$ The definition of local minimum point can also proceed similarly. In both the global and local cases, the concept of a strict extremum can be defined. For example, "x"∗ is a strict global maximum point if for all "x" in "X" with "x" ≠ "x"∗, we have "f"("x"∗) > "f"("x"), and "x"∗ is a strict local maximum point if there exists some "ε" > 0 such that, for all "x" in "X" within distance "ε" of "x"∗ with "x" ≠ "x"∗, we have "f"("x"∗) > "f"("x"). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points. A continuous real-valued function with a compact domain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and bounded interval of real numbers (see the graph above). Search. Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. 
So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the largest (or smallest) one. For differentiable functions, Fermat's theorem states that local extrema in the interior of a domain must occur at critical points (or points where the derivative equals zero). However, not all critical points are extrema. One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using the first derivative test, second derivative test, or higher-order derivative test, given sufficient differentiability. For any function that is defined piecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is largest (or smallest). Examples. For a practical example, assume a situation where someone has $200$ feet of fencing and is trying to maximize the square footage of a rectangular enclosure, where $x$ is the length, $y$ is the width, and $xy$ is the area: $ 2x+2y = 200 $ $ 2y = 200-2x $ $ \frac{2y}{2} = \frac{200-2x}{2} $ $ y = 100 - x$ $ xy=x(100-x) $ The derivative with respect to $x$ is: $\begin{align} \frac{d}{dx}xy&=\frac{d}{dx}x(100-x) \\ &=\frac{d}{dx} \left(100x-x^2 \right) \\ &=100-2x \end{align}$ Setting this equal to $0$ $0=100-2x$ $2x=100$ $x=50$ reveals that $x=50$ is our only critical point. Now retrieve the endpoints by determining the interval to which $x$ is restricted. Since the width is positive, $x>0$; and since $y = 100 - x$ must also be positive, $x < 100$. Plug in critical point $50$, as well as endpoints $0$ and $100$, into $xy = x(100-x)$, and the results are $2500, 0,$ and $0$ respectively. Therefore, the greatest area attainable with a rectangle of $200$ feet of fencing is $2500$ square feet. Functions of more than one variable. For functions of more than one variable, similar conditions apply. For example, in the figure on the right, the necessary conditions for a "local" maximum are similar to those of a function with only one variable. The first partial derivatives as to "z" (the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of a saddle point. For use of these conditions to solve for a maximum, the function "z" must also be differentiable throughout. The second partial derivative test can help classify the point as a relative maximum or relative minimum. In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable function "f" defined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use the intermediate value theorem and Rolle's theorem to prove this by contradiction). In two and more dimensions, this argument fails. This is illustrated by the function $f(x,y)= x^2+y^2(1-x)^3,\qquad x,y \in \R,$ whose only critical point is at (0,0), which is a local minimum with "f"(0,0) = 0. However, it cannot be a global one, because "f"(2,3) = −5. Maxima or minima of a functional. If the domain of a function for which an extremum is to be found consists itself of functions (i.e. 
if an extremum is to be found of a functional), then the extremum is found using the calculus of variations. In relation to sets. Maxima and minima can also be defined for sets. In general, if an ordered set "S" has a greatest element "m", then "m" is a maximal element of the set, also denoted as $\max(S)$. Furthermore, if "S" is a subset of an ordered set "T" and "m" is the greatest element of "S" (with respect to the order induced by "T"), then "m" is a least upper bound of "S" in "T". Similar results hold for least element, minimal element and greatest lower bound. The maximum and minimum function for sets are used in databases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions. In the case of a general partial order, the least element (i.e., one that is smaller than all others) should not be confused with a minimal element (nothing is smaller). Likewise, a greatest element of a partially ordered set (poset) is an upper bound of the set which is contained within the set, whereas a maximal element "m" of a poset "A" is an element of "A" such that if "m" ≤ "b" (for any "b" in "A"), then "m" = "b". Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable. In a totally ordered set, or "chain", all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the terms minimum and maximum. If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set of natural numbers has no maximum, though it has a minimum. If an infinite chain "S" is bounded, then the closure "Cl"("S") of the set occasionally has a minimum and a maximum, in which case they are called the greatest lower bound and the least upper bound of the set "S", respectively. Notes.
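Returning to the fencing example earlier in this article, the closed-interval method (comparing the function value at the interior critical point with the values at the endpoints) is easy to check numerically; the short Python sketch below is purely illustrative:

```python
# Maximize the area x*(100 - x) on [0, 100]: compare the value at the
# interior critical point (where the derivative 100 - 2x vanishes)
# with the values at the endpoints.
def area(x):
    return x * (100 - x)

candidates = [0, 50, 100]          # endpoints and the critical point
best = max(candidates, key=area)
print(best, area(best))            # 50 2500
```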
137279
abstract_algebra
In mathematics, Green's relations are five equivalence relations that characterise the elements of a semigroup in terms of the principal ideals they generate. The relations are named for James Alexander Green, who introduced them in a paper of 1951. John Mackintosh Howie, a prominent semigroup theorist, described this work as "so all-pervading that, on encountering a new semigroup, almost the first question one asks is 'What are the Green relations like?'" (Howie 2002). The relations are useful for understanding the nature of divisibility in a semigroup; they are also valid for groups, but in this case tell us nothing useful, because groups always have divisibility. Instead of working directly with a semigroup "S", it is convenient to define Green's relations over the monoid "S"1. ("S"1 is ""S" with an identity adjoined if necessary"; if "S" is not already a monoid, a new element is adjoined and defined to be an identity.) This ensures that principal ideals generated by some semigroup element do indeed contain that element. For an element "a" of "S", the relevant ideals are the principal left ideal "S"1"a", the principal right ideal "a""S"1, and the principal two-sided ideal "S"1"a""S"1. The L, R, and J relations. For elements "a" and "b" of "S", Green's relations "L", "R" and "J" are defined by: "a" "L" "b" if and only if "S"1"a" = "S"1"b"; "a" "R" "b" if and only if "a""S"1 = "b""S"1; and "a" "J" "b" if and only if "S"1"a""S"1 = "S"1"b""S"1. That is, "a" and "b" are "L"-related if they generate the same left ideal; "R"-related if they generate the same right ideal; and "J"-related if they generate the same two-sided ideal. These are equivalence relations on "S", so each of them yields a partition of "S" into equivalence classes. The "L"-class of "a" is denoted "L""a" (and similarly for the other relations). The "L"-classes and "R"-classes can be equivalently understood as the strongly connected components of the left and right Cayley graphs of "S"1. Further, the "L", "R", and "J" relations define three preorders ≤"L", ≤"R", and ≤"J", where "a" ≤"J" "b" holds for two elements "a" and "b" of "S" if the ideal generated by "a" is included in that of "b", i.e., "S"1 "a" "S"1 ⊆ "S"1 "b" "S"1, and ≤"L" and ≤"R" are defined analogously. Green used the lowercase blackletter $\mathfrak{l}$, $\mathfrak{r}$ and $\mathfrak{j}$ for these relations, and wrote $a \equiv b (\mathfrak{l})$ for "a" "L" "b" (and likewise for "R" and "J"). Mathematicians today tend to use script letters such as $\mathcal{R}$ instead, and replace Green's modular arithmetic-style notation with the infix style used here. Ordinary letters are used for the equivalence classes. The "L" and "R" relations are left-right dual to one another; theorems concerning one can be translated into similar statements about the other. For example, "L" is "right-compatible": if "a" "L" "b" and "c" is another element of "S", then "ac" "L" "bc". Dually, "R" is "left-compatible": if "a" "R" "b", then "ca" "R" "cb". If "S" is commutative, then "L", "R" and "J" coincide. The H and D relations. The remaining relations are derived from "L" and "R". Their intersection is "H": "a" "H" "b" if and only if "a" "L" "b" and "a" "R" "b". This is also an equivalence relation on "S". The class "H""a" is the intersection of "L""a" and "R""a". More generally, the intersection of any "L"-class with any "R"-class is either an "H"-class or the empty set. "Green's Theorem" states that for any $\mathcal H$-class "H" of a semigroup S either (i) $H^2 \cap H = \emptyset$ or (ii) $H^2 \subseteq H$ and "H" is a subgroup of "S". 
An important corollary is that the equivalence class "H""e", where "e" is an idempotent, is a subgroup of "S" (its identity is "e", and all elements have inverses), and indeed is the largest subgroup of "S" containing "e". No $\mathcal H$-class can contain more than one idempotent, thus $\mathcal H$ is "idempotent separating". In a monoid "M", the class "H"1 is traditionally called the group of units. (Beware that unit does not mean identity in this context, i.e. in general there are non-identity elements in "H"1. The "unit" terminology comes from ring theory.) For example, in the transformation monoid on "n" elements, "T""n", the group of units is the symmetric group "S""n". Finally, "D" is defined: "a" "D" "b" if and only if there exists a "c" in "S" such that "a" "L" "c" and "c" "R" "b". In the language of lattices, "D" is the join of "L" and "R". (The join for equivalence relations is normally more difficult to define, but is simplified in this case by the fact that "a" "L" "c" and "c" "R" "b" for some "c" if and only if "a" "R" "d" and "d" "L" "b" for some "d".) As "D" is the smallest equivalence relation containing both "L" and "R", we know that "a" "D" "b" implies "a" "J" "b"—so "J" contains "D". In a finite semigroup, "D" and "J" are the same, as also in a rational monoid. Furthermore they also coincide in any epigroup. There is also a formulation of "D" in terms of equivalence classes, derived directly from the above definition: "a" "D" "b" if and only if the intersection of "R""a" and "L""b" is not empty. Consequently, the "D"-classes of a semigroup can be seen as unions of "L"-classes, as unions of "R"-classes, or as unions of "H"-classes. Clifford and Preston (1961) suggest thinking of this situation in terms of an "egg-box": Each row of eggs represents an "R"-class, and each column an "L"-class; the eggs themselves are the "H"-classes. For a group, there is only one egg, because all five of Green's relations coincide, and make all group elements equivalent. The opposite case, found for example in the bicyclic semigroup, is where each element is in an "H"-class of its own. The egg-box for this semigroup would contain infinitely many eggs, but all eggs are in the same box because there is only one "D"-class. (A semigroup for which all elements are "D"-related is called "bisimple".) It can be shown that within a "D"-class, all "H"-classes are the same size. For example, the transformation semigroup "T"4 contains four "D"-classes, within which the "H"-classes have 1, 2, 6, and 24 elements respectively. Recent advances in the combinatorics of semigroups have used Green's relations to help enumerate semigroups with certain properties. A typical result (Satoh, Yama, and Tokizawa 1994) shows that there are exactly 1,843,120,128 non-equivalent semigroups of order 8, including 221,805 that are commutative; their work is based on a systematic exploration of possible "D"-classes. (By contrast, there are only five groups of order 8.) Example. The full transformation semigroup "T"3 consists of all functions from the set {1, 2, 3} to itself; there are 27 of these. Write ("a" "b" "c") for the function that sends 1 to "a", 2 to "b", and 3 to "c". Since "T"3 contains the identity map, (1 2 3), there is no need to adjoin an identity. The egg-box diagram for "T"3 has three "D"-classes. They are also "J"-classes, because these relations coincide for a finite semigroup. In "T"3, two functions are "L"-related if and only if they have the same image. 
Such functions appear in the same column of the table above. Likewise, the functions "f" and "g" are "R"-related if and only if "f"("x") = "f"("y") ⇔ "g"("x") = "g"("y") for "x" and "y" in {1, 2, 3}; such functions are in the same table row. Consequently, two functions are "D"-related if and only if their images are the same size. The elements in bold are the idempotents. Any "H"-class containing one of these is a (maximal) subgroup. In particular, the third "D"-class is isomorphic to the symmetric group "S"3. There are also six subgroups of order 2, and three of order 1 (as well as subgroups of these subgroups). Six elements of "T"3 are not in any subgroup. Generalisations. There are essentially two ways of generalising an algebraic theory. One is to change its definitions so that it covers more or different objects; the other, more subtle way, is to find some desirable outcome of the theory and consider alternative ways of reaching that conclusion. Following the first route, analogous versions of Green's relations have been defined for semirings (Grillet 1970) and rings (Petro 2002). Some, but not all, of the properties associated with the relations in semigroups carry over to these cases. Staying within the world of semigroups, Green's relations can be extended to cover relative ideals, which are subsets that are only ideals with respect to a subsemigroup (Wallace 1963). For the second kind of generalisation, researchers have concentrated on properties of bijections between "L"- and "R"- classes. If "x" "R" "y", then it is always possible to find bijections between "L""x" and "L""y" that are "R"-class-preserving. (That is, if two elements of an "L"-class are in the same "R"-class, then their images under a bijection will still be in the same "R"-class.) The dual statement for "x" "L" "y" also holds. These bijections are right and left translations, restricted to the appropriate equivalence classes. The question that arises is: how else could there be such bijections? Suppose that Λ and Ρ are semigroups of partial transformations of some semigroup "S". Under certain conditions, it can be shown that if "x" Ρ = "y" Ρ, with "x" "ρ"1 = "y" and "y" "ρ"2 = "x", then the restrictions "ρ"1 : Λ "x" → Λ "y" "ρ"2 : Λ "y" → Λ "x" are mutually inverse bijections. (Conventionally, arguments are written on the right for Λ, and on the left for Ρ.) Then the "L" and "R" relations can be defined by "x" "L" "y" if and only if Λ "x" = Λ "y" "x" "R" "y" if and only if "x" Ρ = "y" Ρ and "D" and "H" follow as usual. Generalisation of "J" is not part of this system, as it plays no part in the desired property. We call (Λ, Ρ) a "Green's pair". There are several choices of partial transformation semigroup that yield the original relations. One example would be to take Λ to be the semigroup of all left translations on "S"1, restricted to "S", and Ρ the corresponding semigroup of restricted right translations. These definitions are due to Clark and Carruth (1980). They subsume Wallace's work, as well as various other generalised definitions proposed in the mid-1970s. The full axioms are fairly lengthy to state; informally, the most important requirements are that both Λ and Ρ should contain the identity transformation, and that elements of Λ should commute with elements of Ρ.
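The "T"3 example invites a brute-force computation. The Python sketch below (illustrative only, not from the original article) builds all 27 transformations, computes the principal ideals directly from the definitions, and counts the resulting classes; with multiplication taken left-to-right, "L"-classes correspond to images (7 of them), "R"-classes to kernels (5), and "J"-classes to image sizes (3).

```python
from itertools import product
from collections import defaultdict

# Brute-force computation of Green's classes in the full transformation
# monoid T_3 from the example above. A function f on {1, 2, 3} is stored
# as the tuple (f(1), f(2), f(3)); products are taken left-to-right,
# (f*g)(x) = g(f(x)), the convention under which L-related means
# "same image" as in the text.
elems = list(product((1, 2, 3), repeat=3))

def mul(f, g):
    return tuple(g[f[x - 1] - 1] for x in (1, 2, 3))

def L_ideal(a):  # S^1 a; T_3 contains the identity (1, 2, 3)
    return frozenset(mul(s, a) for s in elems)

def R_ideal(a):  # a S^1
    return frozenset(mul(a, s) for s in elems)

def J_ideal(a):  # S^1 a S^1
    return frozenset(mul(mul(s, a), t) for s in elems for t in elems)

counts = {}
for name, ideal in (("L", L_ideal), ("R", R_ideal), ("J", J_ideal)):
    classes = defaultdict(list)
    for a in elems:
        classes[ideal(a)].append(a)
    counts[name] = len(classes)

print(counts)  # {'L': 7, 'R': 5, 'J': 3}: images, kernels, image sizes
```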
366841
abstract_algebra
In mathematics, the term maximal subgroup is used to mean slightly different things in different areas of algebra. In group theory, a maximal subgroup "H" of a group "G" is a proper subgroup, such that no proper subgroup "K" contains "H" strictly. In other words, "H" is a maximal element of the partially ordered set of subgroups of "G" that are not equal to "G". Maximal subgroups are of interest because of their direct connection with primitive permutation representations of "G". They are also much studied for the purposes of finite group theory: see for example Frattini subgroup, the intersection of the maximal subgroups. In semigroup theory, a maximal subgroup of a semigroup "S" is a subgroup (that is, a subsemigroup which forms a group under the semigroup operation) of "S" which is not properly contained in another subgroup of "S". Notice that, here, there is no requirement that a maximal subgroup be proper, so if "S" is in fact a group then its unique maximal subgroup (as a semigroup) is "S" itself. Considering subgroups, and in particular maximal subgroups, of semigroups often allows one to apply group-theoretic techniques in semigroup theory. There is a one-to-one correspondence between idempotent elements of a semigroup and maximal subgroups of the semigroup: each idempotent element is the identity element of a unique maximal subgroup. Existence of maximal subgroup. Any proper subgroup of a finite group is contained in some maximal subgroup, since the proper subgroups form a finite partially ordered set under inclusion. There are, however, infinite abelian groups that contain no maximal subgroups, for example the Prüfer group. Maximal normal subgroup. Similarly, a normal subgroup "N" of "G" is said to be a maximal normal subgroup (or maximal proper normal subgroup) of "G" if "N" < "G" and there is no normal subgroup "K" of "G" such that "N" < "K" < "G". We have the following theorem: Theorem: A normal subgroup "N" of a group "G" is a maximal normal subgroup if and only if the quotient "G"/"N" is simple. Hasse diagrams. These Hasse diagrams show the lattices of subgroups of the symmetric group S4, the dihedral group D4, and C23, the third direct power of the cyclic group C2.<br> The maximal subgroups are linked to the group itself (on top of the Hasse diagram) by an edge of the Hasse diagram.
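As a small computational illustration (an addition, not part of the original article): in the cyclic group Z"n" the subgroups correspond to the divisors of "n", and a subgroup is maximal exactly when its index is prime, so the orders of the maximal subgroups are easy to list:

```python
# Maximal subgroups of the cyclic group Z_n: the subgroups correspond
# to the divisors of n, and a subgroup of order d is maximal exactly
# when its index n/d is prime.
def is_prime(k):
    return k > 1 and all(k % i for i in range(2, int(k**0.5) + 1))

def maximal_subgroup_orders(n):
    return [d for d in range(1, n + 1) if n % d == 0 and is_prime(n // d)]

print(maximal_subgroup_orders(12))  # [4, 6]: the index-3 and index-2 subgroups
```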
425615
abstract_algebra
In computer science, a double-ended priority queue (DEPQ) or double-ended heap is a data structure similar to a priority queue or heap, but allows for efficient removal of both the maximum and minimum, according to some ordering on the "keys" (items) stored in the structure. Every element in a DEPQ has a priority or value. In a DEPQ, it is possible to remove the elements in both ascending as well as descending order. Operations. A double-ended priority queue features the following operations: getMin (return an element with least priority), getMax (return an element with greatest priority), put (insert an element), removeMin (remove and return an element with least priority), and removeMax (remove and return an element with greatest priority), together with the usual isEmpty and size queries. If an operation is to be performed on two elements having the same priority, then the element inserted first is chosen. Also, the priority of any element can be changed once it has been inserted in the DEPQ. Implementation. Double-ended priority queues can be built from balanced binary search trees (where the minimum and maximum elements are the leftmost and rightmost leaves, respectively), or using specialized data structures like min-max heap and pairing heap. Generic methods of arriving at double-ended priority queues from normal priority queues are: Dual structure method. In this method two different priority queues for min and max are maintained. The same elements in both the PQs are shown with the help of correspondence pointers. Here, the minimum and maximum elements are values contained in the root nodes of min heap and max heap respectively. Total correspondence. Half the elements are in the min PQ and the other half in the max PQ. Each element in the min PQ has a one-to-one correspondence with an element in max PQ. If the number of elements in the DEPQ is odd, one of the elements is retained in a buffer. The priority of every element in the min PQ will be less than or equal to the corresponding element in the max PQ. Leaf correspondence. In contrast to a total correspondence, in this method only the leaf elements of the min and max PQ form corresponding one-to-one pairs. It is not necessary for non-leaf elements to be in a one-to-one correspondence pair. If the number of elements in the DEPQ is odd, one of the elements is retained in a buffer. Interval heaps. Apart from the above-mentioned correspondence methods, DEPQ's can be obtained efficiently using interval heaps. An interval heap is like an embedded min-max heap in which each node contains two elements. It is a complete binary tree in which: the left element of each node is less than or equal to the right element of the same node; the left elements of the nodes form a min heap; and the right elements of the nodes form a max heap. Depending on the number of elements, two cases are possible: when the number of elements is even, every node contains two elements, and when it is odd, the last node contains a single element. Inserting an element. Depending on the number of elements already present in the interval heap, the new element is placed either in the partially filled last node or in a newly created one; then, if it is smaller than the left (min) element of its parent it is bubbled up along the min heap, while if it is larger than the right (max) element of its parent it is bubbled up along the max heap, and otherwise it stays in place. The time required for inserting an element depends on the number of movements required to meet all the conditions and is O(log "n"). Deleting an element. Thus, with interval heaps, both the minimum and maximum elements can be removed efficiently by traversing from root to leaf. Thus, a DEPQ can be obtained from an interval heap where the elements of the interval heap are the priorities of elements in the DEPQ. Time complexity. Interval heaps. When DEPQ's are implemented using interval heaps consisting of "n" elements, isEmpty, size, getMin and getMax take O(1) time, while put, removeMin and removeMax take O(log "n") time. Pairing heaps. When DEPQ's are implemented using heaps or pairing heaps consisting of "n" elements, the same operations can be supported in O(log "n") time per operation; for pairing heaps these bounds are amortized. Applications. External sorting. One example application of the double-ended priority queue is external sorting. 
In an external sort, there are more elements than can be held in the computer's memory. The elements to be sorted are initially on a disk and the sorted sequence is to be left on the disk. The external quick sort is implemented using the DEPQ as follows: as many elements as will fit are first read into the DEPQ, forming the "middle group"; each remaining input element is then compared against the current minimum and maximum of this group, with elements no larger than the minimum written to the left segment of the disk, elements no smaller than the maximum written to the right segment, and every other element inserted into the DEPQ after removing either its minimum or its maximum to make room; finally, the left and right segments are sorted recursively, and the middle group is written out in sorted order between them.
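To make the dual structure method above concrete, here is a minimal Python sketch of a DEPQ built from two heaps, with lazy deletion standing in for the correspondence pointers; the class name and design are illustrative assumptions rather than a reference implementation:

```python
import heapq
from itertools import count

class DEPQ:
    """Dual-structure DEPQ: a min-heap and a max-heap hold the same
    items; stale twins are discarded lazily instead of being linked
    by correspondence pointers."""
    def __init__(self):
        self._min, self._max = [], []
        self._live = {}          # id -> priority, for items still present
        self._ids = count()      # insertion order also breaks priority ties

    def put(self, priority):
        i = next(self._ids)
        self._live[i] = priority
        heapq.heappush(self._min, (priority, i))
        heapq.heappush(self._max, (-priority, i))

    def _prune(self, heap):
        # Drop entries whose twin was already removed via the other end.
        while heap and heap[0][1] not in self._live:
            heapq.heappop(heap)

    def remove_min(self):
        self._prune(self._min)
        priority, i = heapq.heappop(self._min)
        del self._live[i]
        return priority

    def remove_max(self):
        self._prune(self._max)
        negp, i = heapq.heappop(self._max)
        del self._live[i]
        return -negp

q = DEPQ()
for p in (3, 1, 4, 1, 5):
    q.put(p)
print(q.remove_min(), q.remove_max())  # 1 5
```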
3223724
abstract_algebra
In Lie theory, an area of mathematics, the Kazhdan–Margulis theorem is a statement asserting that a discrete subgroup in semisimple Lie groups cannot be too dense in the group. More precisely, in any such Lie group there is a uniform neighbourhood of the identity element such that every lattice in the group has a conjugate whose intersection with this neighbourhood contains only the identity. This result was proven in the 1960s by David Kazhdan and Grigory Margulis. Statement and remarks. The formal statement of the Kazhdan–Margulis theorem is as follows. "Let $G$ be a semisimple Lie group: there exists an open neighbourhood $U$ of the identity $e$ in $G$ such that for any discrete subgroup $\Gamma \subset G$ there is an element $g \in G$ satisfying $g\Gamma g^{-1} \cap U = \{ e \}$. " Note that in general Lie groups this statement is far from being true; in particular, in a nilpotent Lie group, for any neighbourhood of the identity there exists a lattice in the group which is generated by its intersection with the neighbourhood: for example, in $\mathbb R^n$, the lattice $\varepsilon \mathbb Z^n$ satisfies this property for $\varepsilon > 0$ small enough. Proof. The main technical result of Kazhdan–Margulis, which is interesting in its own right and from which the better-known statement above follows immediately, is the following. "Given a semisimple Lie group without compact factors $G$ endowed with a norm $|\cdot|$, there exists $c > 1$, a neighbourhood $U_0$ of $e$ in $G$, a compact subset $E \subset G$ such that, for any discrete subgroup $\Gamma \subset G$ there exists a $g \in E$ such that $|g\gamma g^{-1}| \ge c|\gamma|$ for all $\gamma \in \Gamma \cap U_0$. " The neighbourhood $U_0$ is obtained as a Zassenhaus neighbourhood of the identity in $G$: the theorem then follows by standard Lie-theoretic arguments. There also exist other proofs. There is one proof which is more geometric in nature and which can give more information, and there is a third proof, relying on the notion of invariant random subgroups, which is considerably shorter. Applications. Selberg's hypothesis. One of the motivations of Kazhdan–Margulis was to prove the following statement, known at the time as "Selberg's hypothesis" (recall that a lattice is called "uniform" if its quotient space is compact): "A lattice in a semisimple Lie group is non-uniform if and only if it contains a unipotent element. " This result follows from the more technical version of the Kazhdan–Margulis theorem and the fact that only unipotent elements can be conjugated arbitrarily close (for a given element) to the identity. Volumes of locally symmetric spaces. A corollary of the theorem is that the locally symmetric spaces and orbifolds associated to lattices in a semisimple Lie group cannot have arbitrarily small volume (given a normalisation for the Haar measure). For hyperbolic surfaces this is due to Siegel, and there is an explicit lower bound of $\pi / 21$ for the smallest covolume of a quotient of the hyperbolic plane by a lattice in $\mathrm{PSL}_2(\mathbb R)$ (see Hurwitz's automorphisms theorem). For hyperbolic three-manifolds the lattice of minimal volume is known and its covolume is about 0.0390. In higher dimensions the problem of finding the lattice of minimal volume is still open, though it has been solved when restricting to the subclass of arithmetic groups. Wang's finiteness theorem. 
Together with local rigidity and finite generation of lattices, the Kazhdan–Margulis theorem is an important ingredient in the proof of Wang's finiteness theorem. "If $G$ is a simple Lie group not locally isomorphic to $\mathrm{SL}_2(\mathbb R)$ or $\mathrm{SL}_2(\mathbb C)$ with a fixed Haar measure and $v>0$ there are only finitely many lattices in $G$ of covolume less than $v$. " Notes.
5001558
abstract_algebra
Shared independent set of two matroids In combinatorial optimization, the matroid intersection problem is to find a largest common independent set in two matroids over the same ground set. If the elements of the matroid are assigned real weights, the weighted matroid intersection problem is to find a common independent set with the maximum possible weight. These problems generalize many problems in combinatorial optimization including finding maximum matchings and maximum weight matchings in bipartite graphs and finding arborescences in directed graphs. The matroid intersection theorem, due to Jack Edmonds, says that there is always a simple upper bound certificate, consisting of a partitioning of the ground set amongst the two matroids, whose value (sum of respective ranks) equals the size of a maximum common independent set. Based on this theorem, the matroid intersection problem for two matroids can be solved in polynomial time using matroid partitioning algorithms. Examples. Let "G" = ("U","V","E") be a bipartite graph. One may define a partition matroid "MU" on the ground set "E", in which a set of edges is independent if no two of the edges have the same endpoint in "U". Similarly one may define a matroid "MV" in which a set of edges is independent if no two of the edges have the same endpoint in "V". Any set of edges that is independent in both "MU" and "MV" has the property that no two of its edges share an endpoint; that is, it is a matching. Thus, the largest common independent set of "MU" and "MV" is a maximum matching in "G". Similarly, if each edge has a weight, then the maximum-weight independent set of "MU" and "MV" is a maximum weight matching in "G". Algorithms. There are several polynomial-time algorithms for weighted matroid intersection, with different run-times. The run-times are given in terms of $n$ - the number of elements in the common base-set, $r$ - the maximum between the ranks of the two matroids, $T$ - the number of operations required for a circuit-finding oracle, and $k$ - the number of elements in the intersection (in case we want to find an intersection of a specific size $k$). Extensions. Maximizing weight subject to cardinality. In a variant of weighted matroid intersection, called "(Pk)", the goal is to find a common independent set with the maximum possible weight among all such sets with cardinality "k", if such a set exists. This variant, too, can be solved in polynomial time. Three matroids. The matroid intersection problem becomes NP-hard when three matroids are involved, instead of only two. One proof of this hardness result uses a reduction from the Hamiltonian path problem in directed graphs. Given a directed graph "G" with "n" vertices, and specified nodes "s" and "t", the Hamiltonian path problem is the problem of determining whether there exists a simple path of length "n" − 1 that starts at "s" and ends at "t". It may be assumed without loss of generality that "s" has no incoming edges and "t" has no outgoing edges. Then, a Hamiltonian path exists if and only if there is a set of "n" − 1 elements in the intersection of three matroids on the edge set of the graph: two partition matroids ensuring that the in-degree and out-degree of the selected edge set are both at most one, and the graphic matroid of the undirected graph formed by forgetting the edge orientations in "G", ensuring that the selected edge set has no cycles. Matroid parity. 
Another computational problem on matroids, the matroid parity problem, was formulated by Lawler as a common generalization of matroid intersection and non-bipartite graph matching. However, although it can be solved in polynomial time for linear matroids, it is NP-hard for other matroids, and requires exponential time in the matroid oracle model. Valuated matroids. A valuated matroid is a matroid equipped with a value function "v" on the set of its bases, with the following "exchange property": for any two distinct bases $A$ and $B$, if $a\in A\setminus B$, then there exists an element $b\in B\setminus A$ such that both $(A \setminus \{ a \}) \cup \{b\}$ and $(B \setminus \{ b \}) \cup \{a\}$ are bases, and $v(A)+v(B) \leq v(B \setminus \{ b \} \cup \{a\}) + v(A \setminus \{ a \} \cup \{b\})$. Given a weighted bipartite graph "G" = ("X"+"Y", "E") and two valuated matroids, one on "X" with bases set "BX" and valuation "vX", and one on "Y" with bases "BY" and valuation "vY", the valuated independent assignment problem is the problem of finding a matching "M" in "G", such that "MX" (the subset of "X" matched by "M") is a base in "BX", "MY" is a base in "BY", and subject to this, the sum $w(M) + v_X(M_X)+v_Y(M_Y)$ is maximized. The weighted matroid intersection problem is a special case in which the matroid valuations are constant, so we only seek to maximize $w(M)$ subject to "MX" being a base in "BX" and "MY" being a base in "BY". Murota presents a polynomial-time algorithm for this problem.
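As a sanity check on the bipartite-matching example given earlier, the following Python sketch (a brute-force illustration for tiny instances, not one of the polynomial-time algorithms discussed above) encodes the two partition matroids "MU" and "MV" directly and searches for a largest common independent set; all names in it are hypothetical.

```python
from itertools import combinations

def independent_in_partition_matroid(edges, side):
    # In the partition matroid M_U (side=0) or M_V (side=1), a set of
    # edges is independent when no two edges share an endpoint on that side.
    endpoints = [e[side] for e in edges]
    return len(endpoints) == len(set(endpoints))

def max_common_independent_set(all_edges):
    # Exhaustive search, largest subsets first: the first edge set that
    # is independent in both partition matroids is a maximum matching.
    for r in range(len(all_edges), 0, -1):
        for subset in combinations(all_edges, r):
            if (independent_in_partition_matroid(subset, 0)
                    and independent_in_partition_matroid(subset, 1)):
                return list(subset)
    return []

# Bipartite graph with U = {u1, u2}, V = {v1, v2}; edges as (u, v) pairs.
edges = [("u1", "v1"), ("u1", "v2"), ("u2", "v1")]
print(max_common_independent_set(edges))  # [('u1', 'v2'), ('u2', 'v1')]
```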
2690437
abstract_algebra
Superstrong approximation is a generalisation of strong approximation in algebraic groups "G", to provide spectral gap results. The spectrum in question is that of the Laplacian matrix associated to a family of quotients of a discrete group Γ; and the gap is that between the first and second eigenvalues (normalisation so that the first eigenvalue corresponds to constant functions as eigenvectors). Here Γ is a subgroup of the rational points of "G", but need not be a lattice: it may be a so-called thin group. The "gap" in question is a lower bound (absolute constant) for the difference of those eigenvalues. A consequence and equivalent of this property, potentially holding for Zariski dense subgroups Γ of the special linear group over the integers, and in more general classes of algebraic groups "G", is that the sequence of Cayley graphs for reductions Γ"p" modulo prime numbers "p", with respect to any fixed set "S" in Γ that is a symmetric set and generating set, is an expander family. In this context "strong approximation" is the statement that "S" when reduced generates the full group of points of "G" over the prime fields with "p" elements, when "p" is large enough. It is equivalent to the Cayley graphs being connected (when "p" is large enough), or that the locally constant functions on these graphs are constant, so that the eigenspace for the first eigenvalue is one-dimensional. Superstrong approximation therefore is a concrete quantitative improvement on these statements. Background. Property (τ) is an analogue in discrete group theory of Kazhdan's property (T), and was introduced by Alexander Lubotzky. For a given family of normal subgroups "N" of finite index in Γ, one equivalent formulation is that the Cayley graphs of the groups Γ/"N", all with respect to a fixed symmetric set of generators "S", form an expander family. Therefore superstrong approximation is a formulation of property (τ), where the subgroups "N" are the kernels of reduction modulo large enough primes "p". The Lubotzky–Weiss conjecture states (for special linear groups and reduction modulo primes) that an expansion result of this kind holds independent of the choice of "S". For applications, it is also relevant to have results where the modulus is not restricted to being a prime. Proofs of superstrong approximation. Results on superstrong approximation have been found using techniques on approximate subgroups, and growth rate in finite simple groups. Notes.
4374501
abstract_algebra
Data structure for integer priorities A bucket queue is a data structure that implements the priority queue abstract data type: it maintains a dynamic collection of elements with numerical priorities and allows quick access to the element with minimum (or maximum) priority. In the bucket queue, the priorities must be integers, and it is particularly suited to applications in which the priorities have a small range. A bucket queue has the form of an array of buckets: an array data structure, indexed by the priorities, whose cells contain collections of items with the same priority as each other. With this data structure, insertion of elements and changes of their priority take constant time. Searching for and removing the minimum-priority element takes time proportional to the number of buckets or, by maintaining a pointer to the most recently found bucket, in time proportional to the difference in priorities between successive operations. The bucket queue is the priority-queue analogue of pigeonhole sort (also called bucket sort), a sorting algorithm that places elements into buckets indexed by their priorities and then concatenates the buckets. Using a bucket queue as the priority queue in a selection sort gives a form of the pigeonhole sort algorithm. Bucket queues are also called bucket priority queues or bounded-height priority queues. When used for quantized approximations to real number priorities, they are also called untidy priority queues or pseudo priority queues. They are closely related to the calendar queue, a structure that uses a similar array of buckets for exact prioritization by real numbers. Applications of the bucket queue include computation of the degeneracy of a graph, fast algorithms for shortest paths and widest paths for graphs with weights that are small integers or are already sorted, and greedy approximation algorithms for the set cover problem. The quantized version of the structure has also been applied to scheduling and to marching cubes in computer graphics. The first use of the bucket queue was in a shortest path algorithm by Robert B. Dial in 1969. Operation. Basic data structure. A bucket queue can handle elements with integer priorities in the range from 0 or 1 up to some known bound C, and operations that insert elements, change the priority of elements, or extract (find and remove) the element that has the minimum (or maximum) priority. It consists of an array A of container data structures; in most sources these containers are doubly linked lists but they could alternatively be dynamic arrays or dynamic sets. The container in the pth array cell "A"["p"] stores the collection of elements whose priority is p. A bucket queue handles its operations as follows: to insert an element, add it to the container for its priority; to change the priority of an element, remove it from the container for its old priority and add it to the container for its new priority; and to extract a minimum- (or maximum-) priority element, search sequentially for the first non-empty container and remove any one of its elements. In this way, insertions and priority changes take constant time, and extracting the minimum or maximum priority element takes time "O"("C"). Optimizations. As an optimization, the data structure can start each sequential search for a non-empty bucket at the most recently-found non-empty bucket instead of at the start of the array. This can be done in either of two different ways, lazy (delaying these sequential searches until they are necessary) or eager (doing the searches ahead of time). The choice of when to do the search affects which of the data structure operations is slowed by these searches. Dial's original version of the structure used a lazy search. This can be done by maintaining an index L that is a lower bound on the minimum priority of any element currently in the queue. 
When inserting a new element, L should be updated to the minimum of its old value and the new element's priority. When searching for the minimum priority element, the search can start at L instead of at zero, and after the search L should be left equal to the priority that was found in the search. Alternatively, the eager version of this optimization keeps L updated so that it always points to the first non-empty bucket. When inserting a new element with a priority smaller than L, the data structure sets L to the new priority, and when removing the last element from a bucket with priority L, it performs a sequential search through larger indexes until finding a non-empty bucket and setting L to the priority of the resulting bucket. In either of these two variations, each sequential search takes time proportional to the difference between the old and new values of L. This could be significantly faster than the "O"("C") time bound for the searches in the un-optimized version of the data structure. In many applications of priority queues such as Dijkstra's algorithm, the minimum priorities form a monotonic sequence, allowing a monotone priority queue to be used. In these applications, for both the lazy and eager variations of the optimized structure, the sequential searches for non-empty buckets cover disjoint ranges of buckets. Because each bucket is in at most one of these ranges, their numbers of steps add to at most C. Therefore, in these applications, the total time for a sequence of n operations is "O"("n" + "C"), rather than the slower "O"("nC") time bound that would result without this optimization. A corresponding optimization can be applied in applications where a bucket queue is used to find elements of maximum priority, but in this case it should maintain an index that upper-bounds the maximum priority, and the sequential search for a non-empty bucket should proceed downwards from this upper bound. Another optimization (already given by ) can be used to save space when the priorities are monotonic and, throughout the course of an algorithm, always fall within a range of r values rather than extending over the whole range from 0 to C. In this case, one can index the array by the priorities modulo r rather than by their actual values. The search for the minimum priority element should always begin at the previous minimum, to avoid priorities that are higher than the minimum but have lower moduli. In particular, this idea can be applied in Dijkstra's algorithm on graphs whose edge lengths are integers in the range from 1 to r. Because creating a new bucket queue involves initializing an array of empty buckets, this initialization step takes time proportional to the number of priorities. A variation of the bucket queue described by Donald B. Johnson in 1981 instead stores only the non-empty buckets in a linked list, sorted by their priorities, and uses an auxiliary search tree to quickly find the position in this linked list for any new buckets. It takes time "O"(log log "C") to initialize this variant structure, constant time to find an element with minimum or maximum priority, and time "O"(log log "D") to insert or delete an element, where D is the difference between the nearest smaller and larger priorities to the priority of the inserted or deleted element. Example. For example, consider a bucket queue with four priorities, the numbers 0, 1, 2, and 3. It consists of an array $A$ whose four cells each contain a collection of elements, initially empty. 
For the purposes of this example, $A$ can be written as a bracketed sequence of four sets: $A = [\emptyset, \emptyset, \emptyset, \emptyset]$. Consider a sequence of operations in which we insert two elements $x$ and $y$ with the same priority 1, insert a third element $z$ with priority 3, change the priority of $x$ to 3, and then perform two extractions of the minimum-priority element. Applications. Graph degeneracy. A bucket queue can be used to maintain the vertices of an undirected graph, prioritized by their degrees, and repeatedly find and remove the vertex of minimum degree. This greedy algorithm can be used to calculate the degeneracy of a given graph, equal to the largest degree of any vertex at the time of its removal. The algorithm takes linear time, with or without the optimization that maintains a lower bound on the minimum priority, because each vertex is found in time proportional to its degree and the sum of all vertex degrees is linear in the number of edges of the graph. Dial's algorithm for shortest paths. In Dijkstra's algorithm for shortest paths in directed graphs with edge weights that are positive integers, the priorities are monotone, and a monotone bucket queue can be used to obtain a time bound of "O"("m" + "dc"), where m is the number of edges, d is the diameter of the network, and c is the maximum (integer) link cost. This variant of Dijkstra's algorithm is also known as Dial's algorithm, after Robert B. Dial, who published it in 1969. The same idea also works, using a quantized bucket queue, for graphs with positive real edge weights when the ratio of the maximum to minimum weight is at most c. In this quantized version of the algorithm, the vertices are processed out of order, compared to the result with a non-quantized priority queue, but the correct shortest paths are still found. In these algorithms, the priorities will only span a range of width "c" + 1, so the modular optimization can be used to reduce the space to "O"("n" + "c"). A variant of the same algorithm can be used for the widest path problem. In combination with methods for quickly partitioning non-integer edge weights into subsets that can be assigned integer priorities, it leads to near-linear-time solutions to the single-source single-destination version of the widest path problem. Greedy set cover. The set cover problem has as its input a family of sets. The output should be a subfamily of these sets, with the same union as the original family, including as few sets as possible. It is NP-hard, but has a greedy approximation algorithm that achieves a logarithmic approximation ratio, essentially the best possible unless P = NP. This approximation algorithm selects its subfamily by repeatedly choosing a set that covers the maximum possible number of remaining uncovered elements. A standard exercise in algorithm design asks for an implementation of this algorithm that takes linear time in the input size, which is the sum of sizes of all the input sets. This may be solved using a bucket queue of sets in the input family, prioritized by the number of remaining elements that they cover. Each time that the greedy algorithm chooses a set as part of its output, the newly covered set elements should be subtracted from the priorities of the other sets that cover them; over the course of the algorithm the number of these changes of priorities is just the sum of sizes of the input sets. The priorities are monotonically decreasing integers, upper-bounded by the number of elements to be covered. 
Each choice of the greedy algorithm involves finding the set with the maximum priority, which can be done by scanning downwards through the buckets of the bucket queue, starting from the most recent previous maximum value. The total time is linear in the input size. Scheduling. Bucket queues can be used to schedule tasks with deadlines, for instance in packet forwarding for internet data with quality of service guarantees. For this application, the deadlines should be quantized into discrete intervals, and tasks whose deadlines fall into the same interval are considered to be of equivalent priority. A variation of the quantized bucket queue data structure, the calendar queue, has been applied to scheduling of discrete-event simulations, where the elements in the queue are future events prioritized by the time within the simulation that the events should happen. In this application, the ordering of events is critical, so the priorities cannot be approximated. Therefore, the calendar queue performs searches for the minimum-priority element in a different way than a bucket queue: in the bucket queue, any element of the first non-empty bucket may be returned, but instead the calendar queue searches all the elements in that bucket to determine which of them has the smallest non-quantized priority. To keep these searches fast, this variation attempts to keep the number of buckets proportional to the number of elements, by adjusting the scale of quantization and rebuilding the data structure when it gets out of balance. Calendar queues may be slower than bucket queues in the worst case (when many elements all land in the same smallest bucket) but are fast when elements are uniformly distributed among buckets causing the average bucket size to be constant. Fast marching. In applied mathematics and numerical methods for the solution of differential equations, untidy priority queues have been used to prioritize the steps of the fast marching method for solving boundary value problems of the Eikonal equation, used to model wave propagation. This method finds the times at which a moving boundary crosses a set of discrete points (such as the points of an integer grid) using a prioritization method resembling a continuous version of Dijkstra's algorithm, and its running time is dominated by its priority queue of these points. It can be sped up to linear time by rounding the priorities used in this algorithm to integers, and using a bucket queue for these integers. As in Dijkstra's and Dial's algorithms, the priorities are monotone, so fast marching can use the monotone optimization of the bucket queue and its analysis. However, the discretization introduces some error into the resulting calculations.
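Tying the basic operations and the lazy lower-bound optimization described above together, here is a minimal Python sketch of a min-bucket queue; the class and method names are illustrative, and the trace at the end replays the four-priority example given earlier (insert x and y at priority 1 and z at priority 3, move x to priority 3, then extract twice).

```python
class BucketQueue:
    # Minimal sketch of a min-bucket queue with the lazy-search
    # optimization: L is a lower bound on the smallest occupied priority.
    def __init__(self, C):
        self.buckets = [set() for _ in range(C + 1)]  # priorities 0..C
        self.L = 0

    def insert(self, item, priority):
        self.buckets[priority].add(item)
        self.L = min(self.L, priority)     # keep L a valid lower bound

    def change_priority(self, item, old, new):
        self.buckets[old].discard(item)
        self.insert(item, new)

    def extract_min(self):
        # Sequential search from the last known lower bound; assumes
        # the queue is non-empty.
        while not self.buckets[self.L]:
            self.L += 1
        return self.buckets[self.L].pop()

q = BucketQueue(3)
q.insert("x", 1); q.insert("y", 1); q.insert("z", 3)
q.change_priority("x", 1, 3)
print(q.extract_min())  # "y", the only remaining priority-1 element
print(q.extract_min())  # "x" or "z", both now at priority 3
```

In a monotone use such as Dial's algorithm the index L only moves forward, so the searches across a whole run of n operations add up to "O"("n" + "C"), as discussed above.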
4987290
abstract_algebra
Automorphism of a group, ring, or algebra given by the conjugation action of one of its elements In abstract algebra an inner automorphism is an automorphism of a group, ring, or algebra given by the conjugation action of a fixed element, called the "conjugating element". They can be realized via simple operations from within the group itself, hence the adjective "inner". These inner automorphisms form a subgroup of the automorphism group, and the quotient of the automorphism group by this subgroup is defined as the outer automorphism group. Definition. If G is a group and g is an element of G (alternatively, if G is a ring, and g is a unit), then the function $\begin{align} \varphi_g\colon G&\to G \\ \varphi_g(x)&:= g^{-1}xg \end{align}$ is called (right) conjugation by g (see also conjugacy class). This function is an endomorphism of G: for all $x_1,x_2\in G,$ $\varphi_g(x_1 x_2) = g^{-1} x_1 x_2g = \left(g^{-1} x_1 g\right)\left(g^{-1} x_2 g\right) = \varphi_g(x_1)\varphi_g(x_2),$ where the second equality is given by the insertion of the identity between $x_1$ and $x_2.$ Furthermore, it has a left and right inverse, namely $\varphi_{g^{-1}}.$ Thus, $\varphi_g$ is bijective, and so an isomorphism of G with itself, i.e. an automorphism. An inner automorphism is any automorphism that arises from conjugation. When discussing right conjugation, the expression $g^{-1}xg$ is often denoted exponentially by $x^g.$ This notation is used because composition of conjugations satisfies the identity: $\left(x^{g_1}\right)^{g_2} = x^{g_1g_2}$ for all $g_1, g_2 \in G.$ This shows that right conjugation gives a right action of G on itself. Inner and outer automorphism groups. The composition of two inner automorphisms is again an inner automorphism, and with this operation, the collection of all inner automorphisms of G is a group, the inner automorphism group of G denoted Inn("G"). Inn("G") is a normal subgroup of the full automorphism group Aut("G") of G. The outer automorphism group, Out("G") is the quotient group $\operatorname{Out}(G) = \operatorname{Aut}(G) / \operatorname{Inn}(G).$ The outer automorphism group measures, in a sense, how many automorphisms of G are not inner. Every non-inner automorphism yields a non-trivial element of Out("G"), but different non-inner automorphisms may yield the same element of Out("G"). Saying that conjugation of x by a leaves x unchanged is equivalent to saying that a and x commute: $a^{-1}xa = x \iff xa = ax.$ Therefore the existence and number of inner automorphisms that are not the identity mapping is a kind of measure of the failure of the commutative law in the group (or ring). An automorphism of a group G is inner if and only if it extends to every group containing G. By associating the element "a" ∈ "G" with the inner automorphism "f"("x") = "x""a" in Inn("G") as above, one obtains an isomorphism between the quotient group "G" / Z("G") (where Z("G") is the center of G) and the inner automorphism group: $G\,/\,\mathrm{Z}(G) \cong \operatorname{Inn}(G).$ This is a consequence of the first isomorphism theorem, because Z("G") is precisely the set of those elements of G that give the identity mapping as corresponding inner automorphism (conjugation changes nothing). Non-inner automorphisms of finite p-groups. A result of Wolfgang Gaschütz says that if G is a finite non-abelian p-group, then G has an automorphism of p-power order which is not inner. It is an open problem whether every non-abelian p-group G has an automorphism of order p. 
The latter question has a positive answer whenever G satisfies any of several known structural conditions. Types of groups. The inner automorphism group of a group G, Inn("G"), is trivial (i.e., consists only of the identity element) if and only if G is abelian. The group Inn("G") is cyclic only when it is trivial. At the opposite end of the spectrum, the inner automorphisms may exhaust the entire automorphism group; a group whose automorphisms are all inner and whose center is trivial is called complete. This is the case for all of the symmetric groups on n elements when n is not 2 or 6. When "n" = 6, the symmetric group has a unique non-trivial class of non-inner automorphisms, and when "n" = 2, the symmetric group, despite having no non-inner automorphisms, is abelian, giving a non-trivial center, disqualifying it from being complete. If the inner automorphism group of a perfect group G is simple, then G is called quasisimple. Lie algebra case. An automorphism of a Lie algebra 𝔊 is called an inner automorphism if it is of the form Ad"g", where Ad is the adjoint map and g is an element of a Lie group whose Lie algebra is 𝔊. The notion of inner automorphism for Lie algebras is compatible with the notion for groups in the sense that an inner automorphism of a Lie group induces a unique inner automorphism of the corresponding Lie algebra. Extension. If G is the group of units of a ring, A, then an inner automorphism on G can be extended to a mapping on the projective line over A by the group of units of the matrix ring, M2("A"). In particular, the inner automorphisms of the classical groups can be extended in that way.
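The isomorphism "G" / Z("G") ≅ Inn("G") can be checked directly on a small example. The following Python sketch (illustrative only) enumerates the conjugation maps of the symmetric group "S"3, whose center is trivial, and compares their number with the index of the center.

```python
from itertools import permutations

# Permutations are tuples p with p[i] the image of i, and composition
# (p * q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))  # the symmetric group S3

def inner_automorphism(g):
    # phi_g(x) = g^{-1} x g, returned as a mapping from G to G.
    return {x: compose(compose(inverse(g), x), g) for x in G}

distinct = {tuple(sorted(inner_automorphism(g).items())) for g in G}
center = [g for g in G if all(compose(g, x) == compose(x, g) for x in G)]
# S3 has trivial center, so Inn(S3) should have |G| / |Z(G)| = 6 elements.
print(len(distinct), len(G) // len(center))  # 6 6
```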
31031
abstract_algebra
Function that is invariant under all permutations of its variables In mathematics, a function of $n$ variables is symmetric if its value is the same no matter the order of its arguments. For example, a function $f\left(x_1,x_2\right)$ of two arguments is a symmetric function if and only if $f\left(x_1,x_2\right) = f\left(x_2,x_1\right)$ for all $x_1$ and $x_2$ such that $\left(x_1,x_2\right)$ and $\left(x_2,x_1\right)$ are in the domain of $f.$ The most commonly encountered symmetric functions are polynomial functions, which are given by the symmetric polynomials. A related notion is alternating polynomials, which change sign under an interchange of variables. Aside from polynomial functions, tensors that act as functions of several vectors can be symmetric, and in fact the space of symmetric $k$-tensors on a vector space $V$ is isomorphic to the space of homogeneous polynomials of degree $k$ on $V.$ Symmetric functions should not be confused with even and odd functions, which have a different sort of symmetry. Symmetrization. Given any function $f$ in $n$ variables with values in an abelian group, a symmetric function can be constructed by summing values of $f$ over all permutations of the arguments. Similarly, an anti-symmetric function can be constructed by summing over even permutations and subtracting the sum over odd permutations. These operations are of course not invertible, and could well result in a function that is identically zero for nontrivial functions $f.$ The only general case where $f$ can be recovered if both its symmetrization and antisymmetrization are known is when $n = 2$ and the abelian group admits a division by 2 (inverse of doubling); then $f$ is equal to half the sum of its symmetrization and its antisymmetrization. Applications. U-statistics. In statistics, an $n$-sample statistic (a function in $n$ variables) that is obtained by bootstrapping symmetrization of a $k$-sample statistic, yielding a symmetric function in $n$ variables, is called a U-statistic. Examples include the sample mean and sample variance.
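The symmetrization and antisymmetrization described above are easy to carry out numerically. This Python sketch assumes real-valued functions (so that the abelian group is ordinary addition) and also checks the "n" = 2 recovery of "f" from its two parts; all function names are illustrative.

```python
from itertools import permutations

def symmetrize(f, n):
    # Sum of f over all permutations of its n arguments.
    def g(*args):
        return sum(f(*(args[i] for i in p)) for p in permutations(range(n)))
    return g

def antisymmetrize(f, n):
    # Even permutations added, odd permutations subtracted.
    def sign(p):
        s, p = 1, list(p)
        for i in range(len(p)):
            while p[i] != i:              # fix position p[i] by a swap
                j = p[i]
                p[i], p[j] = p[j], p[i]
                s = -s
        return s
    def g(*args):
        return sum(sign(p) * f(*(args[i] for i in p))
                   for p in permutations(range(n)))
    return g

f = lambda x, y: x * y * y                # not symmetric: f(1,2) != f(2,1)
s, a = symmetrize(f, 2), antisymmetrize(f, 2)
print(s(1, 2), s(2, 1))                   # 6 6
print(a(1, 2), a(2, 1))                   # 2 -2
print((s(1, 2) + a(1, 2)) / 2)            # 4.0, which equals f(1, 2)
```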
2624253
abstract_algebra
An equivalent impedance is an equivalent circuit of an electrical network of impedance elements which presents the same impedance between all pairs of terminals as did the given network. This article describes mathematical transformations between some passive, linear impedance networks commonly found in electronic circuits. There are a number of very well known and often used equivalent circuits in linear network analysis. These include resistors in series, resistors in parallel and the extension to series and parallel circuits for capacitors, inductors and general impedances. Also well known are the Norton and Thévenin equivalent current generator and voltage generator circuits respectively, as is the Y-Δ transform. None of these are discussed in detail here; the individual linked articles should be consulted. The number of equivalent circuits that a linear network can be transformed into is unbounded. Even in the most trivial cases this can be seen to be true, for instance, by asking how many different combinations of resistors in parallel are equivalent to a given combined resistor. The number of series and parallel combinations that can be formed grows exponentially with the number of resistors, "n". For large "n" the size of the set has been found by numerical techniques to be approximately $2.53^n$ and analytically strict bounds are given by a Farey sequence of Fibonacci numbers. This article could never hope to be comprehensive, but there are some generalisations possible. Wilhelm Cauer found a transformation that could generate all possible equivalents of a given rational, passive, linear one-port, or in other words, any given two-terminal impedance. Transformations of 4-terminal, especially 2-port, networks are also commonly found and transformations of yet more complex networks are possible. The vast scale of the topic of equivalent circuits is underscored in a story told by Sidney Darlington. According to Darlington, a large number of equivalent circuits were found by Ronald M. Foster, following his and George Campbell's 1920 paper on non-dissipative four-ports. In the course of this work they looked at the ways four ports could be interconnected with ideal transformers and maximum power transfer. They found a number of combinations which might have practical applications and asked the AT&T patent department to have them patented. The patent department replied that it was pointless just patenting some of the circuits if a competitor could use an equivalent circuit to get around the patent; they should patent all of them or not bother. Foster therefore set to work calculating every last one of them. He arrived at an enormous total of 83,539 equivalents (577,722 if different output ratios are included). This was too many to patent, so instead the information was released into the public domain in order to prevent any of AT&T's competitors from patenting them in the future. 2-terminal, 2-element-kind networks. A single impedance has two terminals to connect to the outside world, hence can be described as a 2-terminal, or a one-port, network. Despite the simple description, there is no limit to the number of meshes, and hence complexity and number of elements, that the impedance network may have. 2-element-kind networks are common in circuit design; filters, for instance, are often LC-kind networks and printed circuit designers favour RC-kind networks because inductors are less easy to manufacture. Transformations are simpler and easier to find than for 3-element-kind networks. 
One-element-kind networks can be thought of as a special case of two-element-kind. It is possible to use the transformations in this section on a certain few 3-element-kind networks by substituting a network of elements for element "Z"n. However, this is limited to a maximum of two impedances being substituted; the remainder will not be a free choice. All the transformation equations given in this section are due to Otto Zobel. 3-element networks. One-element networks are trivial and two-element, two-terminal networks are either two elements in series or two elements in parallel, also trivial. The smallest number of elements that is non-trivial is three, and there are two 2-element-kind non-trivial transformations possible, one being both the reverse transformation and the topological dual, of the other. Example 3 shows the result is a Π-network rather than an L-network. The reason for this is that the shunt element has more capacitance than is required by the transform so some is still left over after applying the transform. If the excess were instead, in the element nearest the transformer, this could be dealt with by first shifting the excess to the other side of the transformer before carrying out the transform. Terminology.
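The exponential growth quoted above can be observed directly. The following Python sketch (illustrative, using exact rational arithmetic) enumerates every resistance obtainable from "n" equal unit resistors by series and parallel combination alone; bridge-style networks, which the general equivalence problem also allows, are deliberately excluded.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def resistances(n):
    # All distinct resistances buildable from n unit resistors using
    # only series and parallel combinations.
    if n == 1:
        return frozenset([Fraction(1)])
    values = set()
    for k in range(1, n // 2 + 1):        # split n resistors into k + (n-k)
        for a in resistances(k):
            for b in resistances(n - k):
                values.add(a + b)               # series combination
                values.add(a * b / (a + b))     # parallel combination
    return frozenset(values)

for n in range(1, 7):
    print(n, len(resistances(n)))
# The printed counts grow roughly geometrically, in line with the
# approximate 2.53^n estimate quoted above for large n.
```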
2324264
abstract_algebra
Method for finding kth smallest value In computer science, a selection algorithm is an algorithm for finding the $k$th smallest value in a collection of ordered values, such as numbers. The value that it finds is called the $k$th order statistic. Selection includes as special cases the problems of finding the minimum, median, and maximum element in the collection. Selection algorithms include quickselect and the median of medians algorithm. When applied to a collection of $n$ values, these algorithms take linear time, $O(n)$ as expressed using big O notation. For data that is already structured, faster algorithms may be possible; as an extreme case, selection in an already-sorted array takes time $O(1)$. Problem statement. An algorithm for the selection problem takes as input a collection of values, and a number $k$. It outputs the $k$th smallest of these values, or, in some versions of the problem, a collection of the $k$ smallest values. For this to be well-defined, it should be possible to sort the values into an order from smallest to largest; for instance, they may be integers, floating-point numbers, or some other kind of object with a numeric key. However, they are not assumed to have been already sorted. Often, selection algorithms are restricted to a comparison-based model of computation, as in comparison sort algorithms, where the algorithm has access to a comparison operation that can determine the relative ordering of any two values, but may not perform any other kind of arithmetic operations on these values. To simplify the problem, some works on this problem assume that the values are all distinct from each other, or that some consistent tie-breaking method has been used to assign an ordering to pairs of items with the same value as each other. Another variation in the problem definition concerns the numbering of the ordered values: is the smallest value obtained by $k=0$, as in zero-based numbering of arrays, or is it obtained by $k=1$, following the usual English-language conventions for the smallest, second-smallest, etc.? This article follows the conventions used by Cormen et al., according to which all values are distinct and the minimum value is obtained from $k=1$. With these conventions, the maximum value, among a collection of $n$ values, is obtained by $k=n$. When $n$ is an odd number, the median of the collection is obtained by $k=(n+1)/2$. When $n$ is even, there are two choices for the median, obtained by rounding this choice of $k$ down or up, respectively: the "lower median" with $k=n/2$ and the "upper median" with $k=n/2+1$. Algorithms. Sorting and heapselect. As a baseline algorithm, selection of the $k$th smallest value in a collection of values can be performed by the following two steps: sort the collection, and then return the element in the $k$th position of the sorted order. The time for this method is dominated by the sorting step, which requires $\Theta(n\log n)$ time using a comparison sort. Even when integer sorting algorithms may be used, these are generally slower than the linear time that may be achieved using specialized selection algorithms. Nevertheless, the simplicity of this approach makes it attractive, especially when a highly-optimized sorting routine is provided as part of a runtime library, but a selection algorithm is not. For inputs of moderate size, sorting can be faster than non-random selection algorithms, because of the smaller constant factors in its running time. This method also produces a sorted version of the collection, which may be useful for other later computations, and in particular for selection with other choices of $k$. 
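A direct Python rendering of this two-step baseline, using the 1-indexed convention above in which $k=1$ yields the minimum:

```python
def select_by_sorting(values, k):
    # Sort, then take the element in position k (1-indexed).
    # Theta(n log n) time, dominated by the sorting step.
    return sorted(values)[k - 1]

assert select_by_sorting([5, 1, 4, 2, 3], 2) == 2   # second-smallest
```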
For a sorting algorithm that generates one item at a time, such as selection sort, the scan can be done in tandem with the sort, and the sort can be terminated once the $k$th element has been found. One possible design of a consolation bracket in a single-elimination tournament, in which the teams who lost to the eventual winner play another mini-tournament to determine second place, can be seen as an instance of this method. Applying this optimization to heapsort produces the heapselect algorithm, which can select the $k$th smallest value in time $O(n+k\log n)$. This is fast when $k$ is small relative to $n$, but degenerates to $O(n\log n)$ for larger values of $k$, such as the choice $k=n/2$ used for median finding. Pivoting. Many methods for selection are based on choosing a special "pivot" element from the input, and using comparisons with this element to divide the remaining $n-1$ input values into two subsets: the set $L$ of elements less than the pivot, and the set $R$ of elements greater than the pivot. The algorithm can then determine where the $k$th smallest value is to be found, based on a comparison of $k$ with the sizes of these sets. In particular, if $k\le|L|$, the $k$th smallest value is in $L$, and can be found recursively by applying the same selection algorithm to $L$. If $k=|L|+1$, then the $k$th smallest value is the pivot, and it can be returned immediately. In the remaining case, the $k$th smallest value is in $R$, and more specifically it is the element in position $k-|L|-1$ of $R$. It can be found by applying a selection algorithm recursively, seeking the value in this position in $R$. As with the related pivoting-based quicksort algorithm, the partition of the input into $L$ and $R$ may be done by making new collections for these sets, or by a method that partitions a given list or array data type in-place. Details vary depending on how the input collection is represented. The time to compare the pivot against all the other values is $O(n)$. However, pivoting methods differ in how they choose the pivot, which affects how big the subproblems in each recursive call will be. The efficiency of these methods depends greatly on the choice of the pivot. If the pivot is chosen badly, the running time of this method can be as slow as $O(n^2)$. Factories. The deterministic selection algorithms with the smallest known numbers of comparisons, for values of $k$ that are far from $1$ or $n$, are based on the concept of "factories", introduced in 1976 by Arnold Schönhage, Mike Paterson, and Nick Pippenger. These are methods that build partial orders of certain specified types, on small subsets of input values, by using comparisons to combine smaller partial orders. As a very simple example, one type of factory can take as input a sequence of single-element partial orders, compare pairs of elements from these orders, and produce as output a sequence of two-element totally ordered sets. The elements used as the inputs to this factory could either be input values that have not been compared with anything yet, or "waste" values produced by other factories. The goal of a factory-based algorithm is to combine together different factories, with the outputs of some factories going to the inputs of others, in order to eventually obtain a partial order in which one element (the $k$th smallest) is larger than some $k-1$ other elements and smaller than another $n-k$ others. A careful design of these factories leads to an algorithm that, when applied to median-finding, uses at most $2.942n$ comparisons. 
For other values of $k$, the number of comparisons is smaller. Parallel algorithms. Parallel algorithms for selection have been studied since 1975, when Leslie Valiant introduced the parallel comparison tree model for analyzing these algorithms, and proved that in this model selection using a linear number of comparisons requires $\Omega(\log\log n)$ parallel steps, even for selecting the minimum or maximum. Researchers later found parallel algorithms for selection in $O(\log\log n)$ steps, matching this bound. In a randomized parallel comparison tree model it is possible to perform selection in a bounded number of steps and a linear number of comparisons. On the more realistic parallel RAM model of computing, with exclusive read exclusive write memory access, selection can be performed in time $O(\log n)$ with $O(n/\log n)$ processors, which is optimal both in time and in the number of processors. With concurrent memory access, slightly faster parallel time is possible in general, and the $\log n$ term in the time bound can be replaced by $\log k$. Sublinear data structures. When data is already organized into a data structure, it may be possible to perform selection in an amount of time that is sublinear in the number of values. As a simple case of this, for data already sorted into an array, selecting the $k$th element may be performed by a single array lookup, in constant time. For values organized into a two-dimensional array of size $m\times n$, with sorted rows and columns, selection may be performed in time $O\bigl(m\log(2n/m)\bigr)$, or faster when $k$ is small relative to the array dimensions. For a collection of $m$ one-dimensional sorted arrays, with $k_i$ items less than the selected item in the $i$th array, the time is $O\bigl(m+\sum_{i=1}^{m}\log(k_i+1)\bigr)$. Selection from data in a binary heap takes time $O(k)$. This is independent of the size $n$ of the heap, and faster than the $O(k\log n)$ time bound that would be obtained from best-first search. This same method can be applied more generally to data organized as any kind of heap-ordered tree (a tree in which each node stores one value in which the parent of each non-root node has a smaller value than its child). This method of performing selection in a heap has been applied to problems of listing multiple solutions to combinatorial optimization problems, such as finding the k shortest paths in a weighted graph, by defining a state space of solutions in the form of an implicitly defined heap-ordered tree, and then applying this selection algorithm to this tree. In the other direction, linear time selection algorithms have been used as a subroutine in a priority queue data structure related to the heap, improving the time for extracting its $k$th item from $O(\log n)$ to $O(\log^* n+\log k)$; here $\log^* n$ is the iterated logarithm. For a collection of data values undergoing dynamic insertions and deletions, the order statistic tree augments a self-balancing binary search tree structure with a constant amount of additional information per tree node, allowing insertions, deletions, and selection queries that ask for the $k$th element in the current set to all be performed in $O(\log n)$ time per operation. Going beyond the comparison model of computation, faster times per operation are possible for values that are small integers, on which binary arithmetic operations are allowed. 
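For contrast with the $O(k)$ bound quoted above for binary heaps, the following Python sketch implements the simpler best-first traversal that it improves upon: the sketch explores an array-encoded binary min-heap, touches only $O(k)$ heap nodes, and runs in $O(k\log k)$ time independent of the heap's size.

```python
import heapq

def kth_smallest_in_heap(heap, k):
    # Best-first search over an array-encoded binary min-heap: the
    # children of position i sit at 2i+1 and 2i+2. The frontier holds
    # candidate nodes whose parents have already been output.
    frontier = [(heap[0], 0)]                # (value, position) pairs
    for _ in range(k - 1):
        value, i = heapq.heappop(frontier)
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(heap):
                heapq.heappush(frontier, (heap[child], child))
    return heapq.heappop(frontier)[0]

data = [1, 3, 2, 7, 4, 5, 9]                 # already a valid min-heap
print(kth_smallest_in_heap(data, 4))         # 4, the fourth-smallest
```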
It is not possible for a streaming algorithm with memory sublinear in both $n$ and $k$ to solve selection queries exactly for dynamic data, but the count–min sketch can be used to solve selection queries approximately, by finding a value whose position in the ordering of the elements (if it were added to them) would be within $\varepsilon n$ steps of $k$, for a sketch whose size is within logarithmic factors of $1/\varepsilon$. Lower bounds. The $O(n)$ running time of the selection algorithms described above is necessary, because a selection algorithm that can handle inputs in an arbitrary order must take that much time to look at all of its inputs. If any one of its input values is not compared, that one value could be the one that should have been selected, and the algorithm can be made to produce an incorrect answer. Beyond this simple argument, there has been a significant amount of research on the exact number of comparisons needed for selection, both in the randomized and deterministic cases. Selecting the minimum of $n$ values requires $n-1$ comparisons, because the $n-1$ values that are not selected must each have been determined to be non-minimal, by being the largest in some comparison, and no two of these values can be largest in the same comparison. The same argument applies symmetrically to selecting the maximum. The next simplest case is selecting the second-smallest. After several incorrect attempts, the first tight lower bound on this case was published in 1964 by Soviet mathematician Sergey Kislitsyn. It can be shown by observing that selecting the second-smallest also requires distinguishing the smallest value from the rest, and by considering the number $p$ of comparisons involving the smallest value that an algorithm for this problem makes. Each of the $p$ items that were compared to the smallest value is a candidate for second-smallest, and $p-1$ of these values must be found larger than another value in a second comparison in order to rule them out as second-smallest. With $n-1$ values being the larger in at least one comparison, and $p-1$ values being the larger in at least two comparisons, there are a total of at least $n+p-2$ comparisons. An adversary argument, in which the outcome of each comparison is chosen in order to maximize $p$ (subject to consistency with at least one possible ordering) rather than by the numerical values of the given items, shows that it is possible to force $p$ to be at least $\log_2 n$. Therefore, the worst-case number of comparisons needed to select the second smallest is $n+\lceil\log_2 n\rceil-2$, the same number that would be obtained by holding a single-elimination tournament with a run-off tournament among the values that lost to the smallest value. However, the expected number of comparisons of a randomized selection algorithm can be better than this bound; for instance, selecting the second-smallest of six elements requires seven comparisons in the worst case, but may be done by a randomized algorithm with an expected number of 6.5 comparisons. More generally, selecting the $k$th element out of $n$ requires at least $n+\min(k,n-k)-O(1)$ comparisons, in the average case, matching the number of comparisons of the Floyd–Rivest algorithm up to its $o(n)$ term. The argument is made directly for deterministic algorithms, with a number of comparisons that is averaged over all possible permutations of the input values. 
By Yao's principle, it also applies to the expected number of comparisons for a randomized algorithm on its worst-case input. For deterministic algorithms, it has been shown that selecting the $k$th element requires $\bigl(1+H(k/n)\bigr)n+\Omega(\sqrt n)$ comparisons, where $H(x)=x\log_2\frac1x + (1-x)\log_2\frac1{1-x}$ is the binary entropy function. The special case of median-finding has a slightly larger lower bound on the number of comparisons, at least $(2+\varepsilon)n$, for $\varepsilon\approx 2^{-80}$. Exact numbers of comparisons. Knuth supplies the following triangle of numbers summarizing pairs of $n$ and $k$ for which the exact number of comparisons needed by an optimal selection algorithm is known. The $n$th row of the triangle (starting with $n=1$ in the top row) gives the numbers of comparisons for inputs of $n$ values, and the $k$th number within each row gives the number of comparisons needed to select the $k$th smallest value from an input of that size. The rows are symmetric because selecting the $k$th smallest requires exactly the same number of comparisons, in the worst case, as selecting the $k$th largest.
0
1    1
2    3    2
3    4    4    3
4    6    6    6    4
5    7    8    8    7    5
6    8   10   10   10   8    6
7    9   11   12   12   11   9    7
8   11   12   14   14   14   12   11   8
9   12   14   15   16   16   15   14   12   9
Most, but not all, of the entries on the left half of each row can be found using the formula $n-k+(k-1)\bigl\lceil\log_2(n+2-k)\bigr\rceil$. This describes the number of comparisons made by a method of Abdollah Hadian and Milton Sobel, related to heapselect, that finds the smallest value using a single-elimination tournament and then repeatedly uses a smaller tournament among the values eliminated by the eventual tournament winners to find the next successive values until reaching the $k$th smallest. Some of the larger entries were proven to be optimal using a computer search. Language support. Very few languages have built-in support for general selection, although many provide facilities for finding the smallest or largest element of a list. A notable exception is the Standard Template Library for C++, which provides a templated <code>nth_element</code> method with a guarantee of expected linear time. Python's standard library (since 2.4) includes <code>heapq.nsmallest</code> and <code>heapq.nlargest</code> subroutines for returning the smallest or largest elements from a collection, in sorted order. Different versions of Python have used different algorithms for these subroutines. As of Python version 3.13, the implementation maintains a binary heap, limited to holding $k$ elements, and initialized to the first $k$ elements in the collection. Then, each subsequent item of the collection may replace the largest or smallest element in the heap (respectively for <code>heapq.nsmallest</code> and <code>heapq.nlargest</code>) if it is smaller or larger than this element. The worst-case time for this implementation is $O(n\log k)$, worse than the $O(n+k\log n)$ that would be achieved by heapselect. However, for random input sequences, there are likely to be few heap updates and most input elements are processed with only a single comparison. Since 2017, Matlab has included <code>maxk()</code> and <code>mink()</code> functions, which return the maximal (minimal) $k$ values in a vector as well as their indices. 
The Matlab documentation does not specify which algorithm these functions use or what their running time is. History. Quickselect was presented without analysis by Tony Hoare in 1965, and first analyzed in a 1971 technical report by Donald Knuth. The first known linear time deterministic selection algorithm is the median of medians method, published in 1973 by Manuel Blum, Robert W. Floyd, Vaughan Pratt, Ron Rivest, and Robert Tarjan. They trace the formulation of the selection problem to work of Charles L. Dodgson (better known as Lewis Carroll) who in 1883 pointed out that the usual design of single-elimination sports tournaments does not guarantee that the second-best player wins second place, and to work of Hugo Steinhaus circa 1930, who followed up this same line of thought by asking for a tournament design that can make this guarantee, with a minimum number of games played (that is, comparisons).
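The pivoting scheme described earlier is easy to sketch in Python. This version is illustrative only: it uses a random pivot, as in quickselect, and a three-way partition so that duplicate values are handled, running in expected linear time but quadratic time in the worst case.

```python
import random

def quickselect(values, k):
    # Partition around a random pivot, then recurse into the part that
    # contains the k-th smallest value (k = 1 gives the minimum).
    pivot = random.choice(values)
    L = [v for v in values if v < pivot]
    E = [v for v in values if v == pivot]
    R = [v for v in values if v > pivot]
    if k <= len(L):
        return quickselect(L, k)                  # k-th smallest lies in L
    if k <= len(L) + len(E):
        return pivot                              # it is the pivot itself
    return quickselect(R, k - len(L) - len(E))    # re-indexed within R

print(quickselect([6, 2, 9, 1, 7, 4], 3))  # 4, the third-smallest
```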
215860
abstract_algebra
In mathematics, especially in the area of algebra known as representation theory, the representation ring (or Green ring after J. A. Green) of a group is a ring formed from all the (isomorphism classes of the) finite-dimensional linear representations of the group. Elements of the representation ring are sometimes called virtual representations. For a given group, the ring will depend on the base field of the representations. The case of complex coefficients is the most developed, but the case of algebraically closed fields of characteristic "p" where the Sylow "p"-subgroups are cyclic is also theoretically approachable. Formal definition. Given a group "G" and a field "F", the elements of its representation ring "R""F"("G") are the formal differences of isomorphism classes of finite dimensional linear "F"-representations of "G". For the ring structure, addition is given by the direct sum of representations, and multiplication by their tensor product over "F". When "F" is omitted from the notation, as in "R"("G"), then "F" is implicitly taken to be the field of complex numbers. Succinctly, the representation ring of "G" is the Grothendieck ring of the category of finite-dimensional representations of "G". Characters. Any representation defines a character χ:"G" → C. Such a function is constant on conjugacy classes of "G", a so-called class function; denote the ring of class functions by "C"("G"). If "G" is finite, the homomorphism "R"("G") → "C"("G") is injective, so that "R"("G") can be identified with a subring of "C"("G"). For fields "F" whose characteristic divides the order of the group "G", the homomorphism "R""F"("G") → "C"("G") defined by Brauer characters is no longer injective. For a compact connected group, "R"("G") is isomorphic to the subring of "R"("T") (where "T" is a maximal torus) consisting of those class functions that are invariant under the action of the Weyl group (Atiyah and Hirzebruch, 1961). For the general compact Lie group, see Segal (1968). λ-ring and Adams operations. Given a representation of "G" and a natural number "n", we can form the "n"-th exterior power of the representation, which is again a representation of "G". This induces an operation λ"n" : "R"("G") → "R"("G"). With these operations, "R"("G") becomes a λ-ring. The "Adams operations" on the representation ring "R"("G") are maps Ψ"k" characterised by their effect on characters χ: $\Psi^k \chi (g) = \chi(g^k) \ . $ The operations Ψ"k" are ring homomorphisms of "R"("G") to itself, and on representations ρ of dimension "d" $\Psi^k (\rho) = N_k(\Lambda^1\rho,\Lambda^2\rho,\ldots,\Lambda^d\rho) \ $ where the Λ"i"ρ are the exterior powers of ρ and "N""k" is the "k"-th power sum expressed as a function of the "d" elementary symmetric functions of "d" variables.
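The characterising identity $\Psi^k \chi (g) = \chi(g^k)$ can be verified numerically in the simplest case. This Python sketch (illustrative only) works with the irreducible characters of a cyclic group of order 6, written additively as exponents modulo "n", and confirms that $\Psi^2$ carries the character χ1 to χ2.

```python
import cmath

n = 6  # the cyclic group C_n, with elements written as exponents m = 0..n-1

def chi(j):
    # The irreducible character chi_j of C_n: chi_j(m) = e^{2 pi i j m / n}.
    return lambda m: cmath.exp(2 * cmath.pi * 1j * j * m / n)

def adams(k, character):
    # Adams operation on class functions: (Psi^k chi)(g) = chi(g^k);
    # in C_n the k-th power of the element g^m is g^{k m}.
    return lambda m: character(k * m % n)

# Psi^2 sends chi_j to chi_{2j mod n}, so Psi^2 chi_1 should equal chi_2.
lhs, rhs = adams(2, chi(1)), chi(2)
print(all(abs(lhs(m) - rhs(m)) < 1e-12 for m in range(n)))  # True
```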
2089755
abstract_algebra
Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations. Given a structured object "X" of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if "X" is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object "X" is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry). In general, every kind of structure in mathematics will have its own kind of symmetry, many of which are mentioned above. Symmetry in geometry. The types of symmetry considered in basic geometry include reflectional symmetry, rotation symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article Symmetry (geometry). Symmetry in calculus. Even and odd functions. Even functions. Let "f"("x") be a real-valued function of a real variable; then "f" is even if the following equation holds for all "x" and "-x" in the domain of "f": $ f(x) = f(-x) $ Geometrically speaking, the graph of an even function is symmetric with respect to the "y"-axis, meaning that its graph remains unchanged after reflection about the "y"-axis. Examples of even functions include |"x"|, "x"2, "x"4, cos("x"), and cosh("x"). Odd functions. Again, let "f" be a real-valued function of a real variable; then "f" is odd if the following equation holds for all "x" and "-x" in the domain of "f": $ -f(x) = f(-x) $ That is, $ f(x) + f(-x) = 0 \, . $ Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are "x", "x"3, sin("x"), sinh("x"), and erf("x"). Integrating. The integral of an odd function from −"A" to +"A" is zero, provided that "A" is finite and that the function is integrable (e.g., has no vertical asymptotes between −"A" and "A"). The integral of an even function from −"A" to +"A" is twice the integral from 0 to +"A", provided that "A" is finite and the function is integrable (e.g., has no vertical asymptotes between −"A" and "A"). This also holds true when "A" is infinite, but only if the integral converges. Symmetry in linear algebra. Symmetry in matrices. In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix "A" is symmetric if $A = A^{T}.$ By the definition of matrix equality, which requires that the entries in all corresponding positions be equal, equal matrices must have the same dimensions (as matrices of different sizes or shapes cannot be equal). Consequently, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as "A" = ("a""ij"), then "a""ij" = "a""ji", for all indices "i" and "j". For example, the following 3×3 matrix is symmetric: $\begin{bmatrix} 1 & 7 & 3\\ 7 & 4 & -5\\ 3 & -5 & 6\end{bmatrix}$ Every square diagonal matrix is symmetric, since all off-diagonal entries are zero. 
Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative (as the sketch above checks numerically). In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them. Symmetry in abstract algebra. Symmetric groups. The symmetric group "S""n" (on a finite set of "n" symbols) is the group whose elements are all the permutations of the "n" symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself. Since there are "n"! ("n" factorial) possible permutations of a set of "n" symbols, it follows that the order (i.e., the number of elements) of the symmetric group "S""n" is "n"!. Symmetric polynomials. A symmetric polynomial is a polynomial "P"("X"1, "X"2, ..., "X""n") in "n" variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, "P" is a "symmetric polynomial" if for any permutation σ of the subscripts 1, 2, ..., "n", one has "P"("X"σ(1), "X"σ(2), ..., "X"σ("n")) = "P"("X"1, "X"2, ..., "X""n"). Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view, the elementary symmetric polynomials are the most fundamental symmetric polynomials. The fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every "symmetric" polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial. Examples. In two variables "X"1 and "X"2, one has symmetric polynomials such as $X_1 + X_2$ and $X_1 X_2$, and in three variables "X"1, "X"2 and "X"3, one has, for example, the symmetric polynomial $X_1 X_2 X_3$. Symmetric tensors. In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments: $T(v_1,v_2,\dots,v_r) = T(v_{\sigma 1},v_{\sigma 2},\dots,v_{\sigma r})$ for every permutation σ of the symbols {1, 2, ..., "r"}. Alternatively, an "r"th order symmetric tensor represented in coordinates as a quantity with "r" indices satisfies $T_{i_1i_2\dots i_r} = T_{i_{\sigma 1}i_{\sigma 2}\dots i_{\sigma r}}.$ The space of symmetric tensors of rank "r" on a finite-dimensional vector space "V" is naturally isomorphic to the dual of the space of homogeneous polynomials of degree "r" on "V". Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on "V". A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics. Galois theory. Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say "A" and "B", that $A^2 + 5B^3 = 7$.
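Permutation-invariance of this kind can be spot-checked mechanically. The sketch below (illustrative only; the helper and the sample points are invented) confirms that an elementary symmetric polynomial is unchanged by every permutation of its arguments, while an expression of the shape $A^2 + 5B^3$ is not, which is exactly the kind of distinction Galois theory is concerned with:
<syntaxhighlight lang="python">
from itertools import permutations

def invariant_under_all_permutations(poly, pts):
    """Check whether `poly` takes the same value at each sample point
    after every permutation of its arguments (a spot check, not a proof)."""
    return all(
        poly(*(pt[i] for i in perm)) == poly(*pt)
        for pt in pts
        for perm in permutations(range(len(pts[0])))
    )

e2 = lambda x, y, z: x*y + x*z + y*z       # elementary symmetric polynomial e_2
g  = lambda a, b, c: a**2 + 5*b**3 - c     # echoes A^2 + 5B^3: not symmetric

pts = [(1, 2, 3), (0, -1, 4), (2, 2, 5)]
print(invariant_under_all_permutations(e2, pts))   # True
print(invariant_under_all_permutations(g, pts))    # False
</syntaxhighlight>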
The central idea of Galois theory is to consider those permutations (or rearrangements) of the roots having the property that "any" algebraic equation satisfied by the roots is "still satisfied" after the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. Thus, Galois theory studies the symmetries inherent in algebraic equations. Automorphisms of algebraic objects. In abstract algebra, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object. Symmetry in representation theory. Symmetry in quantum mechanics: bosons and fermions. In quantum mechanics, bosons have representatives that are symmetric under permutation operators, and fermions have antisymmetric representatives. This implies the Pauli exclusion principle for fermions. In fact, the Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state $\scriptstyle |x \rangle$ and the other in state $\scriptstyle |y\rangle$: $ |\psi\rangle = \sum_{x,y} A(x,y)\, |x,y\rangle, $ and antisymmetry under exchange means that "A"("x","y") = −"A"("y","x"). This implies that "A"("x","x") = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity "A"("x","y") is not a matrix but an antisymmetric rank-two tensor. Conversely, if the diagonal quantities "A"("x","x") are zero "in every basis", then the wavefunction component: $ A(x,y)=\langle \psi|x,y\rangle = \langle \psi | ( |x\rangle \otimes |y\rangle ) $ is necessarily antisymmetric. To prove it, consider the matrix element: $ \langle\psi| ((|x\rangle + |y\rangle)\otimes(|x\rangle + |y\rangle)) \,$ This is zero, because the two particles have zero probability to both be in the superposition state $\scriptstyle |x\rangle + |y\rangle$. But this is equal to $ \langle \psi |x,x\rangle + \langle \psi |x,y\rangle + \langle \psi |y,x\rangle + \langle \psi | y,y \rangle . \,$ The first and last terms on the right hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey: $ \langle \psi|x,y\rangle + \langle\psi |y,x\rangle = 0 , \,$ or $ A(x,y)=-A(y,x) . \,$ Symmetry in set theory. Symmetric relation. A relation is called symmetric if, whenever it holds from A to B, it also holds from B to A. Note that symmetry is not the exact opposite of antisymmetry. Symmetry in metric spaces. Isometries of a space. An isometry is a distance-preserving map between metric spaces. Given a metric space, or a set and scheme for assigning distances between elements of the set, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional space, two geometric figures are congruent if they are related by an isometry: related by either a rigid motion, or a composition of a rigid motion and a reflection.
Up to a relation by a rigid motion, they are equal if related by a direct isometry. Isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc. Symmetries of differential equations. A symmetry of a differential equation is a transformation that leaves the differential equation invariant. Knowledge of such symmetries may help solve the differential equation. A Lie symmetry of a system of differential equations is a continuous symmetry of the system of differential equations. Knowledge of a Lie symmetry can be used to simplify an ordinary differential equation through reduction of order. For ordinary differential equations, knowledge of an appropriate set of Lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration. Symmetries may be found by solving a related set of ordinary differential equations. Solving these equations is often much simpler than solving the original differential equations. Symmetry in probability. In the case of a finite number of possible outcomes, symmetry with respect to permutations (relabelings) implies a discrete uniform distribution. In the case of a real interval of possible outcomes, symmetry with respect to interchanging sub-intervals of equal length corresponds to a continuous uniform distribution. In other cases, such as "taking a random integer" or "taking a random real number", there are no probability distributions at all symmetric with respect to relabellings or to exchange of equally long subintervals. Other reasonable symmetries do not single out one particular distribution, or in other words, there is not a unique probability distribution providing maximum symmetry. There is one type of isometry in one dimension that may leave the probability distribution unchanged, namely reflection in a point, for example zero. A possible symmetry for randomness with positive outcomes is that reflection symmetry applies to the logarithm of the outcome, i.e., the outcome and its reciprocal have the same distribution. However, this symmetry does not single out any particular distribution uniquely. For a "random point" in a plane or in space, one can choose an origin, and consider a probability distribution with circular or spherical symmetry, respectively.
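The logarithmic symmetry just mentioned is easy to see in a simulation (a minimal sketch assuming a standard lognormal outcome, which is only one of many distributions with this property): the empirical quantiles of the outcome and of its reciprocal agree.
<syntaxhighlight lang="python">
import math, random

random.seed(1)
# X = exp(Z) with Z ~ Normal(0, 1): Z is symmetric about zero,
# so X and its reciprocal 1/X should share the same distribution.
xs = sorted(math.exp(random.gauss(0.0, 1.0)) for _ in range(100_000))
recip = sorted(1.0 / x for x in xs)

# Compare a few empirical quantiles of X and 1/X.
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    i = int(q * len(xs))
    print(f"q={q}: X = {xs[i]:.3f}   1/X = {recip[i]:.3f}")
</syntaxhighlight>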
658686
abstract_algebra
Distributive lattice whose elements are stable matchings In mathematics, economics, and computer science, the lattice of stable matchings is a distributive lattice whose elements are stable matchings. For a given instance of the stable matching problem, this lattice provides an algebraic description of the family of all solutions to the problem. It was originally described in the 1970s by John Horton Conway and Donald Knuth. By Birkhoff's representation theorem, this lattice can be represented as the lower sets of an underlying partially ordered set. The elements of this set can be given a concrete structure as rotations, with cycle graphs describing the changes between adjacent stable matchings in the lattice. The family of all rotations and their partial order can be constructed in polynomial time, leading to polynomial time solutions for other problems on stable matching including the minimum or maximum weight stable matching. The Gale–Shapley algorithm can be used to construct two special lattice elements, its top and bottom element. Every finite distributive lattice can be represented as a lattice of stable matchings. The number of elements in the lattice can vary from $e^{-1}n\ln n$ in the average case to exponentially many in the worst case. Computing the number of elements is #P-complete. Background. In its simplest form, an instance of the stable matching problem consists of two sets of the same number of elements to be matched to each other, for instance doctors and positions at hospitals. Each element has a preference ordering on the elements of the other type: the doctors each have different preferences for which hospital they would like to work at (for instance based on which cities they would prefer to live in), and the hospitals each have preferences for which doctors they would like to have working for them (for instance based on specialization or recommendations). The goal is to find a matching that is "stable": no pair of a doctor and a hospital prefer each other to their assigned match. Versions of this problem are used, for instance, by the National Resident Matching Program to match American medical students to hospitals. In general, there may be many different stable matchings. For example, suppose there are three doctors (A,B,C) and three hospitals (X,Y,Z) which have preferences of: A: YXZ   B: ZYX   C: XZY   X: BAC   Y: CBA   Z: ACB There are three stable solutions to this matching arrangement: the doctor-optimal matching (A–Y, B–Z, C–X), in which every doctor receives their first-choice hospital; an intermediate matching (A–X, B–Y, C–Z), in which every participant receives their second choice; and the hospital-optimal matching (A–Z, B–X, C–Y), in which every hospital receives its first-choice doctor. The lattice of stable matchings organizes this collection of solutions, for any instance of stable matching, giving it the structure of a distributive lattice. Structure. Partial order on matchings. The lattice of stable matchings is based on the following weaker structure, a partially ordered set whose elements are the stable matchings. Define a comparison operation $\le$ on the stable matchings, where $P\le Q$ if and only if all doctors prefer matching $Q$ to matching $P$: either they have the same assigned hospital in both matchings, or they are assigned a better hospital in $Q$ than they are in $P$. If the doctors disagree on which matching they prefer, then $P$ and $Q$ are incomparable: neither one is $\le$ the other. The same comparison operation can be defined in the same way for any two sets of elements, not just doctors and hospitals. The choice of which of the two sets of elements to use in the role of the doctors is arbitrary.
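For an instance this small, both the stable matchings and the comparison operation can be verified by brute force. The following sketch (Python; helper names such as is_stable and le are invented) enumerates all 3! = 6 matchings of the example above, recovers its three stable solutions, and prints the order relations among them:
<syntaxhighlight lang="python">
from itertools import permutations

doctors   = {'A': 'YXZ', 'B': 'ZYX', 'C': 'XZY'}   # preference strings, best first
hospitals = {'X': 'BAC', 'Y': 'CBA', 'Z': 'ACB'}

def is_stable(match):                    # match: dict doctor -> hospital
    inv = {h: d for d, h in match.items()}
    for d, h in match.items():
        for better in doctors[d][:doctors[d].index(h)]:
            # d strictly prefers `better` to h; blocking pair if mutual
            if hospitals[better].index(d) < hospitals[better].index(inv[better]):
                return False
    return True

stable = [m for m in (dict(zip(doctors, p)) for p in permutations('XYZ'))
          if is_stable(m)]
print(len(stable), "stable matchings:", stable)   # the three listed above

def le(P, Q):   # P <= Q iff every doctor likes their hospital in Q at least as well
    return all(doctors[d].index(Q[d]) <= doctors[d].index(P[d]) for d in doctors)

for P in stable:
    for Q in stable:
        if P is not Q and le(P, Q):
            print(P, "<=", Q)
</syntaxhighlight>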
Swapping the roles of the doctors and hospitals reverses the ordering of every pair of elements, but does not otherwise change the structure of the partial order. Then this ordering gives the matchings the structure of a partially ordered set. To do so, it must obey the following three properties: reflexivity, antisymmetry, and transitivity. For stable matchings, all three properties follow directly from the definition of the comparison operation. Top and bottom elements. Define the best match of an element $x$ of a stable matching instance to be the element $y$ that $x$ most prefers, among all the elements that can be matched to $x$ in a stable matching, and define the worst match analogously. Then no two elements can have the same best match. For, suppose to the contrary that doctors $x$ and $x'$ both have $y$ as their best match, and that $y$ prefers $x$ to $x'$. Then, in the stable matching that matches $x'$ to $y$ (which must exist by the definition of the best match of $x'$), $x$ and $y$ would be an unstable pair, because $y$ prefers $x$ to $x'$ and $x$ prefers $y$ to any other partner in any stable matching. This contradiction shows that assigning all doctors to their best matches gives a matching. It is a stable matching, because any unstable pair would also be unstable for one of the matchings used to define best matches. As well as assigning all doctors to their best matches, it assigns all hospitals to their worst matches. In the partial ordering on the matchings, it is greater than all other stable matchings. Symmetrically, assigning all doctors to their worst matches and assigning all hospitals to their best matches gives another stable matching. In the partial order on the matchings, it is less than all other stable matchings. The Gale–Shapley algorithm gives a process for constructing stable matchings, which can be described as follows: until a matching is reached, the algorithm chooses an arbitrary hospital with an unfilled position, and that hospital makes a job offer to the doctor it most prefers among the ones it has not already made offers to. If the doctor is unemployed or has a less-preferred assignment, the doctor accepts the offer (and resigns from their other assignment if it exists). The process always terminates, because each hospital makes at most one offer to each doctor. When it terminates, the result is a stable matching, the one that assigns each hospital to its best match and that assigns all doctors to their worst matches. An algorithm that swaps the roles of the doctors and hospitals (in which unemployed doctors send job applications to their next preference among the hospitals, and hospitals accept applications either when they have an unfilled position or when they prefer the new applicant, firing the doctor they had previously accepted) instead produces the stable matching that assigns all doctors to their best matches and each hospital to its worst match. Lattice operations. Given any two stable matchings $P$ and $Q$ for the same input, one can form two more matchings $P\vee Q$ and $P\wedge Q$ in the following way: In $P\vee Q$, each doctor gets their best choice among the two hospitals they are matched to in $P$ and $Q$ (if these differ) and each hospital gets its worst choice. In $P\wedge Q$, each doctor gets their worst choice among the two hospitals they are matched to in $P$ and $Q$ (if these differ) and each hospital gets its best choice. Then both $P\vee Q$ and $P\wedge Q$ are matchings.
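Transcribed directly (a sketch reusing the doctors preference table and the dictionary representation of the earlier sketch), the two operations act element-wise on the doctors' assignments:
<syntaxhighlight lang="python">
def join(P, Q):   # P ∨ Q: each doctor keeps the better of their two hospitals
    return {d: min(P[d], Q[d], key=doctors[d].index) for d in doctors}

def meet(P, Q):   # P ∧ Q: each doctor keeps the worse of their two hospitals
    return {d: max(P[d], Q[d], key=doctors[d].index) for d in doctors}

P = {'A': 'Y', 'B': 'Z', 'C': 'X'}   # doctor-optimal matching
R = {'A': 'Z', 'B': 'X', 'C': 'Y'}   # hospital-optimal matching
print(join(P, R), meet(P, R))
# In this tiny instance the three stable matchings form a chain, so the
# join and meet of two stable matchings simply return the higher and lower one.
</syntaxhighlight>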
It is not possible, for instance, for two doctors to be matched to the same hospital in $P\vee Q$, for regardless of which of the two doctors is preferred by the hospital, that doctor and hospital would form an unstable pair in whichever of $P$ and $Q$ they are not already matched in. Because the doctors are all matched to distinct hospitals in $P\vee Q$, the hospitals must also all be matched. The same reasoning applies symmetrically to $P\wedge Q$. Additionally, both $P\vee Q$ and $P\wedge Q$ are stable. There cannot be a pair of a doctor and hospital who prefer each other to their match, because the same pair would necessarily also be an unstable pair for at least one of $P$ and $Q$. Lattice properties. The two operations $P\vee Q$ and $P\wedge Q$ form the join and meet operations of a finite distributive lattice. In this context, a finite lattice is defined as a partially ordered finite set in which there is a unique minimum element and a unique maximum element, in which every two elements have a unique least element greater than or equal to both of them (their join) and every two elements have a unique greatest element less than or equal to both of them (their meet). In the case of the operations $P\vee Q$ and $P\wedge Q$ defined above, the join $P\vee Q$ is greater than or equal to both $P$ and $Q$ because it was defined to give each doctor their preferred choice, and because these preferences of the doctors are how the ordering on matchings is defined. It is below any other matching that is also above both $P$ and $Q$, because any such matching would have to give each doctor an assigned match that is at least as good. Therefore, it fits the requirements for the join operation of a lattice. Symmetrically, the operation $P\wedge Q$ fits the requirements for the meet operation. Because they are defined using an element-wise minimum or element-wise maximum in the preference ordering, these two operations obey the same distributive laws obeyed by the minimum and maximum operations on linear orderings: for every three different matchings $P$, $Q$, and $R$, $P\wedge(Q\vee R)=(P\wedge Q)\vee (P\wedge R)$ and $P\vee(Q\wedge R)=(P\vee Q)\wedge (P\vee R).$ Therefore, the lattice of stable matchings is a distributive lattice. Representation by rotations. Birkhoff's representation theorem states that any finite distributive lattice can be represented by a family of finite sets, with intersection and union as the meet and join operations, and with the relation of being a subset as the comparison operation for the associated partial order. More specifically, these sets can be taken to be the lower sets of an associated partial order. In the general form of Birkhoff's theorem, this partial order can be taken as the induced order on a subset of the elements of the lattice, the join-irreducible elements (elements that cannot be formed as joins of two other elements). For the lattice of stable matchings, the elements of the partial order can instead be described in terms of structures called "rotations". Suppose that two different stable matchings $P$ and $Q$ are comparable and have no third stable matching between them in the partial order. (That is, $P$ and $Q$ form a pair of the covering relation of the partial order of stable matchings.) Then the set of pairs of elements that are matched in one but not both of $P$ and $Q$ (the symmetric difference of their sets of matched pairs) is called a rotation. It forms a cycle graph whose edges alternate between the two matchings.
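In the dictionary representation used in the sketches above, a rotation is just the symmetric difference of the two pair sets; in the running example the intermediate and doctor-optimal matchings happen to form a covering pair:
<syntaxhighlight lang="python">
def rotation(low, high):
    """Pairs matched in exactly one of the two matchings: the symmetric
    difference of their pair sets.  For a covering pair of stable
    matchings, these pairs alternate around a single cycle."""
    return set(low.items()) ^ set(high.items())

M = {'A': 'X', 'B': 'Y', 'C': 'Z'}   # the intermediate matching
P = {'A': 'Y', 'B': 'Z', 'C': 'X'}   # the doctor-optimal matching above it
print(sorted(rotation(M, P)))
# [('A','X'), ('A','Y'), ('B','Y'), ('B','Z'), ('C','X'), ('C','Z')]
# a single 6-cycle A-X-C-Z-B-Y-A whose edges alternate between M and P
</syntaxhighlight>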
Equivalently, the rotation can be described as the set of changes that would need to be performed to change the lower of the two matchings into the higher one (with lower and higher determined using the partial order). If two different stable matchings are separately the higher matching for the same rotation, then so is their meet. It follows that for any rotation, the set of stable matchings that can be the higher of a pair connected by the rotation has a unique lowest element. This lowest matching is join irreducible, and this gives a one-to-one correspondence between rotations and join-irreducible stable matchings. If the rotations are given the same partial ordering as their corresponding join-irreducible stable matchings, then Birkhoff's representation theorem gives a one-to-one correspondence between lower sets of rotations and all stable matchings. The set of rotations associated with any given stable matching can be obtained by changing the given matching by rotations downward in the partial ordering, choosing arbitrarily which rotation to perform at each step, until reaching the bottom element, and listing the rotations used in this sequence of changes. The stable matching associated with any lower set of rotations can be obtained by applying the rotations to the bottom element of the lattice of stable matchings, choosing arbitrarily which rotation to apply when more than one can apply. Every pair $(x,y)$ of elements of a given stable matching instance belongs to at most two rotations: one rotation that, when applied to the lower of two matchings, removes other assignments to $x$ and $y$ and instead assigns them to each other, and a second rotation that, when applied to the lower of two matchings, removes pair $(x,y)$ from the matching and finds other assignments for those two elements. Because there are $n^2$ pairs of elements, there are $O(n^2)$ rotations. Mathematical properties. Universality. Beyond being a finite distributive lattice, there are no other constraints on the lattice structure of stable matchings. This is because, for every finite distributive lattice $L$, there exists a stable matching instance whose lattice of stable matchings is isomorphic to $L$. More strongly, if a finite distributive lattice has $k$ elements, then it can be realized using a stable matching instance with at most $k^2-k+4$ doctors and hospitals. Number of lattice elements. The lattice of stable matchings can be used to study the computational complexity of counting the number of stable matchings of a given instance. From the equivalence between lattices of stable matchings and arbitrary finite distributive lattices, it follows that this problem has equivalent computational complexity to counting the number of elements in an arbitrary finite distributive lattice, or to counting the antichains in an arbitrary partially ordered set. Computing the number of stable matchings is #P-complete. In a uniformly-random instance of the stable marriage problem with $n$ doctors and $n$ hospitals, the average number of stable matchings is asymptotically $e^{-1}n\ln n$. In a stable marriage instance chosen to maximize the number of different stable matchings, this number can be at least $2^{n-1}$, and is also upper-bounded by an exponential function of $n$ (significantly smaller than the naive factorial bound on the number of matchings). Algorithmic consequences.
The family of rotations and their partial ordering can be constructed in polynomial time from a given instance of stable matching, and provides a concise representation of the family of all stable matchings, which can for some instances be exponentially larger when listed explicitly. This allows several other computations on stable matching instances to be performed efficiently. Weighted stable matching and closure. If each pair of elements in a stable matching instance is assigned a real-valued weight, it is possible to find the minimum or maximum weight stable matching in polynomial time. One possible method for this is to apply linear programming to the order polytope of the partial order of rotations, or to the stable matching polytope. An alternative, combinatorial algorithm is possible, based on the same partial order. From the weights on pairs of elements, one can assign weights to each rotation, where a rotation that changes a given stable matching to another one higher in the partial ordering of stable matchings is assigned the change in weight that it causes: the total weight of the higher matching minus the total weight of the lower matching. By the correspondence between stable matchings and lower sets of rotations, the total weight of any matching is then equal to the total weight of its corresponding lower set, plus the weight of the bottom element of the lattice of matchings. The problem of finding the minimum or maximum weight stable matching becomes in this way equivalent to the problem of finding the minimum or maximum weight lower set in a partially ordered set of polynomial size, the partially ordered set of rotations. This optimal lower set problem is equivalent to an instance of the closure problem, a problem on vertex-weighted directed graphs in which the goal is to find a subset of vertices of optimal weight with no outgoing edges. The optimal lower set is an optimal closure of a directed acyclic graph that has the elements of the partial order as its vertices, with an edge from $\alpha$ to $\beta$ whenever $\alpha\le\beta$ in the partial order. The closure problem can, in turn, be solved in polynomial time by transforming it into an instance of the maximum flow problem. Minimum regret. The regret of a participant in a stable matching is the distance of their assigned match from the top of their preference list, and the regret of a stable matching is the maximum regret of any participant. One can find the minimum-regret stable matching by a simple greedy algorithm that starts at the bottom element of the lattice of matchings and then repeatedly applies any rotation that reduces the regret of a participant with maximum regret, until this would cause some other participant to have greater regret. Median stable matching. The elements of any distributive lattice form a median graph, a structure in which any three elements $P$, $Q$, and $R$ (here, stable matchings) have a unique median element $m(P,Q,R)$ that lies on a shortest path between any two of them. It can be defined as: $m(P,Q,R)=(P\wedge Q)\vee(P\wedge R)\vee(Q\wedge R)=(P\vee Q)\wedge(P\vee R)\wedge(Q\vee R).$ For the lattice of stable matchings, this median can instead be taken element-wise, by assigning each doctor the median in the doctor's preferences of the three hospitals matched to that doctor in $P$, $Q$, and $R$ and similarly by assigning each hospital the median of the three doctors matched to it.
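Element-wise, this is a short computation (a sketch continuing the running example; median_match is an invented helper):
<syntaxhighlight lang="python">
def median_match(P, Q, R):
    """Assign each doctor the median, in their own preference order,
    of the three hospitals they receive in P, Q, and R."""
    return {d: sorted([P[d], Q[d], R[d]], key=doctors[d].index)[1]
            for d in doctors}

P = {'A': 'Y', 'B': 'Z', 'C': 'X'}
M = {'A': 'X', 'B': 'Y', 'C': 'Z'}
R = {'A': 'Z', 'B': 'X', 'C': 'Y'}
print(median_match(P, M, R))   # {'A': 'X', 'B': 'Y', 'C': 'Z'}, i.e. M itself
</syntaxhighlight>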
More generally, any set of an odd number of elements of any distributive lattice (or median graph) has a median, a unique element minimizing its sum of distances to the given set. For the median of an odd number of stable matchings, each participant is matched to the median element of the multiset of their matches from the given matchings. For a set of an even number of stable matchings, this can be disambiguated by choosing the assignment that matches each doctor to the higher of the two median elements, and each hospital to the lower of the two median elements. In particular, this leads to a definition for the median matching in the set of all stable matchings. However, for some instances of the stable matching problem, finding this median of all stable matchings is NP-hard.
5854721
abstract_algebra
In four-dimensional Euclidean geometry, the truncated 16-cell honeycomb (or cantic tesseractic honeycomb) is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. It is constructed from 24-cell and truncated 16-cell facets. Related honeycombs. The [3,4,3,3] Coxeter group generates 31 permutations of uniform tessellations; 28 are unique in this family and three are shared in the [4,3,3,4] and [4,3,31,1] families. The alternation (13) is also repeated in other families. The [4,3,3,4] Coxeter group generates 31 permutations of uniform tessellations, 21 with distinct symmetry and 20 with distinct geometry. The expanded tesseractic honeycomb (also known as the stericated tesseractic honeycomb) is geometrically identical to the tesseractic honeycomb. Three of the symmetric honeycombs are shared in the [3,4,3,3] family. Two alternations (13) and (17), and the quarter tesseractic (2) are repeated in other families. The [4,3,31,1] Coxeter group generates 31 permutations of uniform tessellations, 23 with distinct symmetry and 4 with distinct geometry. There are two alternated forms: the alternations (19) and (24) have the same geometry as the 16-cell honeycomb and snub 24-cell honeycomb respectively. There are ten uniform honeycombs constructed by the ${\tilde{D}}_4$ Coxeter group, all repeated in other families by extended symmetry, seen in the graph symmetry of rings in the Coxeter–Dynkin diagrams. The 10th is constructed as an alternation. As subgroups in Coxeter notation: [3,4,(3,3)*] (index 24), [3,3,4,3*] (index 6), [1+,4,3,3,4,1+] (index 4), [31,1,3,4,1+] (index 2) are all isomorphic to [31,1,1,1]. The ten permutations are listed with their highest extended symmetry relation: See also. Regular and uniform honeycombs in 4-space: Notes.
3822349
abstract_algebra
Technique for drawing non-planar graphs In the mathematical field of graph theory, planarization is a method of extending graph drawing methods from planar graphs to graphs that are not planar, by embedding the non-planar graphs within a larger planar graph. Planarization may be performed by using any method to find a drawing (with crossings) for the given graph, and then replacing each crossing point by a new artificial vertex, causing each crossed edge to be subdivided into a path. The original graph will be represented as an immersion minor of its planarization. In incremental planarization, the planarization process is split into two stages. First, a large planar subgraph is found within the given graph. Then, the remaining edges that are not already part of this subgraph are added back one at a time, and routed through an embedding of the planar subgraph. When one of these edges crosses an already-embedded edge, the two edges that cross are replaced by two-edge paths, with a new artificial vertex that represents the crossing point placed at the middle of both paths. In some cases a third local optimization stage is added to the planarization process, in which edges with many crossings are removed and re-added in an attempt to improve the planarization. Finding the largest planar subgraph. Using incremental planarization for graph drawing is most effective when the first step of the process finds as large a planar graph as possible. Unfortunately, finding the planar subgraph with the maximum possible number of edges (the "maximum planar subgraph" problem) is NP-hard, and MaxSNP-hard, implying that there probably does not exist a polynomial time algorithm that solves the problem exactly or that approximates it arbitrarily well. In an "n"-vertex connected graph, the largest planar subgraph has at most 3"n" − 6 edges, and any spanning tree forms a planar subgraph with "n" − 1 edges. Thus, it is easy to approximate the maximum planar subgraph within an approximation ratio of one-third, simply by finding a spanning tree. A better approximation ratio, 4/9, is known, based on a method for finding a large partial 2-tree as a subgraph of the given graph. Alternatively, if it is expected that the planar subgraph will include almost all of the edges of the given graph, leaving only a small number "k" of non-planar edges for the incremental planarization process, then one can solve the problem exactly by using a fixed-parameter tractable algorithm whose running time is linear in the graph size but non-polynomial in the parameter "k". The problem may also be solved exactly by a branch and cut algorithm, with no guarantees on running time, but with good performance in practice. This parameter "k" is known as the skewness of the graph. There has also been some study of a related problem, finding the largest planar induced subgraph of a given graph. Again, this is NP-hard, but fixed-parameter tractable when all but a few vertices belong to the induced subgraph. A tight bound of 3"n"/(Δ + 1) on the size of the largest planar induced subgraph is known, as a function of "n", the number of vertices in the given graph, and Δ, its maximum degree; the proof of this bound leads to a polynomial time algorithm for finding an induced subgraph of this size. Adding edges to a planarization. Once a large planar subgraph has been found, the incremental planarization process continues by considering the remaining edges one by one.
As it does so, it maintains a planarization of the subgraph formed by the edges that have already been considered. It adds each new edge to a planar embedding of this subgraph, forming a drawing with crossings, and then replaces each crossing point with a new artificial vertex subdividing the two edges that cross. In some versions of this procedure, the order for adding edges is arbitrary, but it is also possible to choose the ordering to be a random permutation, running the same algorithm several times and returning the best planarization that it finds. In the simplest form of this process, the planar embedding of the planarized subgraph is not allowed to change while new edges are added. To add each new edge in a way that minimizes the number of crossings it forms, one can use a shortest path algorithm in the dual graph of the current embedding to find the shortest sequence of faces of the embedding and edges to be crossed that connects the endpoints of the new edge to each other. This process takes polynomial time per edge. Fixing the embedding of the planarized subgraph is not necessarily optimal in terms of the number of crossings that result. In fact, there exist graphs that are formed by adding one edge to a planar subgraph, where the optimal drawing has only two crossings but where fixing the planar embedding of the subgraph forces a linear number of crossings to be created. As a compromise between finding the optimal planarization of a planar subgraph plus one edge, and keeping a fixed embedding, it is possible to search over all embeddings of the planarized subgraph and find the one that minimizes the number of crossings formed by the new edge.
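The elementary planarization step, replacing a crossing by an artificial degree-four vertex, can be sketched as a set transformation (an illustrative Python fragment; edges are modelled as frozensets of endpoints, crossings as pairs of edges, and each edge is assumed to participate in at most one crossing):
<syntaxhighlight lang="python">
def planarize(edges, crossings):
    """Replace each crossing (e, f) by a new artificial vertex, splitting
    both crossed edges into two-edge paths.  `edges` is a set of frozensets
    {u, v}; `crossings` pairs up edges that cross in some fixed drawing."""
    edges = set(edges)
    next_id = 0
    for e, f in crossings:
        x = ('crossing', next_id)        # fresh artificial vertex
        next_id += 1
        edges.discard(e)
        edges.discard(f)
        for u in e | f:                  # connect x to all four endpoints
            edges.add(frozenset({u, x}))
    return edges

# K5 drawn with its single unavoidable crossing, say edge {0,2} over {1,3}:
K5 = {frozenset({i, j}) for i in range(5) for j in range(i + 1, 5)}
planar = planarize(K5, [(frozenset({0, 2}), frozenset({1, 3}))])
print(len(planar))   # 12 edges: 10 original - 2 crossed + 4 new half-edges
</syntaxhighlight>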
4404625
abstract_algebra
Comparison-based sorting algorithm In computer science, smoothsort is a comparison-based sorting algorithm. A variant of heapsort, it was invented and published by Edsger Dijkstra in 1981. Like heapsort, smoothsort is an in-place algorithm with an upper bound of "O"("n" log "n"), but it is not a stable sort. The advantage of smoothsort is that it comes closer to "O"("n") time if the input is already sorted to some degree, whereas heapsort averages "O"("n" log "n") regardless of the initial sorted state. Overview. Like heapsort, smoothsort organizes the input into a priority queue and then repeatedly extracts the maximum. Also like heapsort, the priority queue is an implicit heap data structure (a heap-ordered implicit binary tree), which occupies a prefix of the array. Each extraction shrinks the prefix and adds the extracted element to a growing sorted suffix. When the prefix has shrunk to nothing, the array is completely sorted. Heapsort maps the binary tree to the array using a top-down breadth-first traversal of the tree; the array begins with the root of the tree, then its two children, then four grandchildren, and so on. Every element has a well-defined depth below the root of the tree, and every element except the root has its parent earlier in the array. Its height above the leaves, however, depends on the size of the array. This has the disadvantage that every element must be moved as part of the sorting process: it must pass through the root before being moved to its final location. Smoothsort uses a different mapping, a bottom-up depth-first post-order traversal. A left child is followed by the subtree rooted at its sibling, and a right child is followed by its parent. Every element has a well-defined height above the leaves, and every non-leaf element has its "children" earlier in the array. Its depth below the root, however, depends on the size of the array. The algorithm is organized so the root is at the end of the heap, and at the moment that an element is extracted from the heap it is already in its final location and does not need to be moved. Also, a sorted array is already a valid heap, and many sorted intervals are valid heap-ordered subtrees. More formally, every position i is the root of a unique subtree, whose nodes occupy a contiguous interval that ends at i. An initial prefix of the array (including the whole array) might be such an interval corresponding to a subtree, but in general decomposes as a union of a number of successive such subtree intervals, which Dijkstra calls "stretches". Any subtree without a parent (i.e. rooted at a position whose parent lies beyond the prefix under consideration) gives a stretch in the decomposition of that interval, which decomposition is therefore unique. When a new node is appended to the prefix, one of two cases occurs: either the position is a leaf and adds a stretch of length 1 to the decomposition, or it combines with the last two stretches, becoming the parent of their respective roots, thus replacing the two stretches by a new stretch containing their union plus the new (root) position. Dijkstra noted that the obvious rule would be to combine stretches if and only if they have equal size, in which case all subtrees would be perfect binary trees of size $2^k-1$. However, he chose a different rule, which gives more possible tree sizes. This has the same asymptotic efficiency, but gains a small constant factor in efficiency by requiring fewer stretches to cover each interval.
The rule Dijkstra uses is that the last two stretches are combined if and only if their sizes are consecutive Leonardo numbers "L"("i"+1) and "L"("i") (in that order), which numbers are recursively defined, in a manner very similar to the Fibonacci numbers, as: $L(0) = L(1) = 1$ and $L(k+2) = L(k+1) + L(k) + 1.$ As a consequence, the size of any subtree is a Leonardo number. The sequence of stretch sizes decomposing the first n positions, for any n, can be found in a greedy manner: the first size is the largest Leonardo number not exceeding n, and the remainder (if any) is decomposed recursively. The sizes of stretches are decreasing, strictly so except possibly for two final sizes 1, and avoiding successive Leonardo numbers except possibly for the final two sizes. In addition to each stretch being a heap-ordered tree, the roots of the trees are maintained in sorted order. This effectively adds a third child (which Dijkstra calls a "stepson") to each root linking it to the preceding root. This combines all of the trees together into one global heap, with the global maximum at the end. Although the location of each node's stepson is fixed, the link only exists for tree roots, meaning that links are removed whenever trees are merged. This is different from ordinary children, which are linked as long as the parent exists. In the first (heap growing) phase of sorting, an increasingly large initial part of the array is reorganized so that the subtree for each of its stretches is a max-heap: the entry at any non-leaf position is at least as large as the entries at the positions that are its children. In addition, all roots are at least as large as their stepsons. In the second (heap shrinking) phase, the maximal node is detached from the end of the array (without needing to move it) and the heap invariants are re-established among its children. (Specifically, among the newly created stepsons.) Practical implementations frequently need to compute Leonardo numbers "L"("k"). Dijkstra provides clever code which uses a fixed number of integer variables to efficiently compute the values needed at the time they are needed. Alternatively, if there is a finite bound N on the size of arrays to be sorted, a precomputed table of Leonardo numbers can be stored in "O"(log "N") space. Operations. While the two phases of the sorting procedure are opposite to each other as far as the evolution of the sequence-of-heaps structure is concerned, they are implemented using one core primitive, equivalent to the "sift down" operation in a binary max-heap. Sifting down. The core sift-down operation (which Dijkstra calls "trinkle") restores the heap invariant when it is possibly violated only at the root node. If the root node is less than any of its children, it is swapped with its greatest child and the process repeated with the root node in its new subtree. The difference between smoothsort and a binary max-heap is that the root of each stretch must be ordered with respect to a third "stepson": the root of the preceding stretch. So the sift-down procedure starts with a series of four-way comparisons (the root node and three children) until the stepson is not the maximal element, then a series of three-way comparisons (the root plus two children) until the root node finds its final home and the invariants are re-established. Each tree is a full binary tree: each node has two children or none. There is no need to deal with the special case of one child which occurs in a standard implicit binary heap.
(But the special case of stepson links more than makes up for this saving.) Because there are "O"(log "n") stretches, each of which is a tree of depth "O"(log "n"), the time to perform each sifting-down operation is bounded by "O"(log "n"). Growing the heap region by incorporating an element to the right. When an additional element is considered for incorporation into the sequence of stretches (list of disjoint heap structures) it either forms a new one-element stretch, or it combines the two rightmost stretches by becoming the parent of both their roots and forming a new stretch that replaces the two in the sequence. Which of the two happens depends only on the sizes of the stretches currently present (and ultimately only on the index of the element added); Dijkstra stipulated that stretches are combined if and only if their sizes are "L"("k"+1) and "L"("k") for some k, i.e., consecutive Leonardo numbers; the new stretch will have size "L"("k"+2). In either case, the new element must be sifted down to its correct place in the heap structure. Even if the new node is a one-element stretch, it must still be sorted relative to the preceding stretch's root. Optimization. Dijkstra's algorithm saves work by observing that the full heap invariant is required at the end of the growing phase, but it is not required at every intermediate step. In particular, the requirement that an element be greater than its stepson is only important for the elements which are the final tree roots. Therefore, when an element is added, compute the position of its future parent. If this is within the range of remaining values to be sorted, act as if there is no stepson and only sift down within the current tree. Shrinking the heap region by separating the rightmost element from it. During this phase, the form of the sequence of stretches goes through the changes of the growing phase in reverse. No work at all is needed when separating off a leaf node, but for a non-leaf node its two children become roots of new stretches, and need to be moved to their proper place in the sequence of roots of stretches. This can be obtained by applying sift-down twice: first for the left child, and then for the right child (whose stepson was the left child). Because half of all nodes in a full binary tree are leaves, this performs an average of one sift-down operation per node. Optimization. It is already known that the newly exposed roots are correctly ordered with respect to their normal children; it is only the ordering relative to their stepsons which is in question. Therefore, while shrinking the heap, the first step of sifting down can be simplified to a single comparison with the stepson. If a swap occurs, subsequent steps must do the full four-way comparison. Analysis. Smoothsort takes "O"("n") time to process a presorted array, "O"("n" log "n") in the worst case, and achieves nearly-linear performance on many nearly-sorted inputs. However, it does not handle all nearly-sorted sequences optimally. Using the count of inversions as a measure of un-sortedness (the number of pairs of indices i and j with "i" < "j" and "A"["i"] > "A"["j"]; for randomly sorted input this is approximately $n^2/4$), there are possible input sequences with "O"("n" log "n") inversions which cause it to take Ω("n" log "n") time, whereas other adaptive sorting algorithms can solve these cases in "O"("n" log log "n") time. The smoothsort algorithm needs to be able to hold in memory the sizes of all of the trees in the Leonardo heap.
Since they are sorted by order and all orders are distinct, this is usually done using a bit vector indicating which orders are present. Moreover, since the largest order is at most "O"(log "n"), these bits can be encoded in "O"(1) machine words, assuming a transdichotomous machine model. Note that "O"(1) machine words is not the same thing as "one" machine word. A 32-bit vector would only suffice for sizes less than "L"(32) = 7049155. A 64-bit vector will do for sizes less than "L"(64) = 34335360355129 ≈ $2^{45}$. In general, it takes $1/\log_2(\varphi) \approx 1.44$ bits of vector per bit of size. Poplar sort. A simpler algorithm inspired by smoothsort is poplar sort. Named after the rows of trees of decreasing size often seen in Dutch polders, it performs fewer comparisons than smoothsort for inputs that are not mostly sorted, but cannot achieve linear time for sorted inputs. The significant change made by poplar sort is that the roots of the various trees are "not" kept in sorted order; there are no "stepson" links tying them together into a single heap. Instead, each time the heap is shrunk in the second phase, the roots are searched to find the maximum entry. Because there are n shrinking steps, each of which must search "O"(log "n") tree roots for the maximum, the best-case run time for poplar sort is "O"("n" log "n"). The authors also suggest using perfect binary trees rather than Leonardo trees to provide further simplification, but this is a less significant change. The same structure has been proposed as a general-purpose priority queue under the name post-order heap, achieving "O"(1) amortized insertion time in a structure simpler than an implicit binomial heap. Applications. The musl C library uses smoothsort for its implementation of <code>qsort()</code>.
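The Leonardo-number bookkeeping described above is easy to reproduce (a short sketch of the greedy stretch decomposition, not of Dijkstra's constant-space version; the helper names are invented):
<syntaxhighlight lang="python">
def leonardo_numbers(limit):
    """Leonardo numbers L(0) = L(1) = 1, L(k+2) = L(k+1) + L(k) + 1,
    listed up to `limit`."""
    L = [1, 1]
    while L[-1] + L[-2] + 1 <= limit:
        L.append(L[-1] + L[-2] + 1)
    return L

def stretch_sizes(n):
    """Greedy decomposition of an n-element prefix into stretch sizes:
    repeatedly take the largest Leonardo number not exceeding what is left."""
    sizes = []
    while n > 0:
        biggest = max(x for x in leonardo_numbers(n) if x <= n)
        sizes.append(biggest)
        n -= biggest
    return sizes

print(stretch_sizes(21))   # [15, 5, 1]: decreasing Leonardo numbers
</syntaxhighlight>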
47738
abstract_algebra