# Hamming weight

The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length. For the most typical case, a string of bits, this is the number of 1's in the string, or the digit sum of the binary representation of a given number and the ℓ₁ norm of a bit vector. In this binary case, it is also called the population count,[1] popcount, sideways sum,[2] or bit summation.[3]

The Hamming weight is named after Richard Hamming although he did not originate the notion.[7] The Hamming weight of binary numbers was already used in 1899 by James W. L. Glaisher to give a formula for the number of odd binomial coefficients in a single row of Pascal's triangle.[8] Irving S. Reed introduced a concept, equivalent to Hamming weight in the binary case, in 1954.[9]

Hamming weight is used in several disciplines including information theory, coding theory, and cryptography. Examples of applications of the Hamming weight include:

The population count of a bitstring is often needed in cryptography and other applications. The Hamming distance of two words A and B can be calculated as the Hamming weight of A xor B.[1]
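The XOR identity can be sketched directly in C; the bit-at-a-time popcount below is for illustration only (faster methods follow later in the article):

```c
#include <stdint.h>

/* Naive popcount: examine one bit per iteration. */
static unsigned weight(uint32_t x)
{
    unsigned n = 0;
    while (x) {
        n += x & 1u;
        x >>= 1;
    }
    return n;
}

/* The Hamming distance of a and b is the Hamming weight of a XOR b. */
unsigned hamming_distance(uint32_t a, uint32_t b)
{
    return weight(a ^ b);
}
```

For example, 1100 and 1010 differ in the middle two positions, and indeed the weight of their XOR (0110) is 2.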

The problem of how to implement it efficiently has been widely studied. A single operation for the calculation, or parallel operations on bit vectors are available on some processors. For processors lacking those features, the best solutions known are based on adding counts in a tree pattern. For example, to count the number of 1 bits in the 16-bit binary number a = 0110 1100 1011 1010, these operations can be done:
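In C, this cascade of masked shifts and adds might be sketched as follows (the mask constants are the standard alternating 0101…, 0011…, … patterns):

```c
#include <stdint.h>

/* Count the 1 bits of a 16-bit word by adding counts in a tree pattern:
   first within 2-bit fields, then 4-bit, 8-bit, and the whole word. */
unsigned popcount16(uint16_t a)
{
    unsigned x = a;
    x = (x & 0x5555) + ((x >> 1) & 0x5555); // counts of each 2-bit field
    x = (x & 0x3333) + ((x >> 2) & 0x3333); // counts of each 4-bit field
    x = (x & 0x0F0F) + ((x >> 4) & 0x0F0F); // counts of each byte
    x = (x & 0x00FF) + ((x >> 8) & 0x00FF); // total
    return x;
}
```

For a = 0110 1100 1011 1010 (0x6CBA) this returns 9.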

Here, the operations are written as in the C programming language: `x >> y` means shift `x` right by `y` bits, `x & y` is the bitwise AND of `x` and `y`, and `+` is ordinary addition. The best algorithms known for this problem build on this tree-of-adders idea.[1]

The fastest such variant for 64-bit words (`uint64_t`, the unsigned 64-bit integer type defined in C99) uses 12 arithmetic operations, one of which is a multiply.
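A sketch of that multiply-based variant (the mask constants extend the 16-bit patterns above to 64 bits; the final multiply sums the per-byte counts into the top byte):

```c
#include <stdint.h>

static const uint64_t m1  = 0x5555555555555555; // binary: 0101...
static const uint64_t m2  = 0x3333333333333333; // binary: 00110011...
static const uint64_t m4  = 0x0f0f0f0f0f0f0f0f; // binary: 4 zeros, 4 ones, ...
static const uint64_t h01 = 0x0101010101010101; // sum of 256^0, 256^1, 256^2, ...

/* 12 arithmetic operations, one of which is a multiply. */
int popcount64(uint64_t x)
{
    x -= (x >> 1) & m1;             // put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); // put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;        // put count of each 8 bits into those 8 bits
    return (x * h01) >> 56;         // top byte now holds the sum of all byte counts
}
```

The multiply by `h01` replaces a further log-depth cascade of shifts and adds, which is why it wins on machines with fast multiplication.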

The above implementations have the best worst-case behavior of any known algorithm. However, when a value is expected to have few nonzero bits, it may instead be more efficient to use algorithms that count these bits one at a time. As Wegner described in 1960,[13] the bitwise AND of x with x − 1 differs from x only in zeroing out the least significant nonzero bit: subtracting 1 changes the rightmost string of 0s to 1s, and changes the rightmost 1 to a 0. If x originally had n bits that were 1, then after only n iterations of this operation, x will be reduced to zero. The following implementation is based on this principle.

This approach uses 3 arithmetic operations and 1 comparison/branch per "1" bit in x.
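Wegner's method might be sketched as:

```c
#include <stdint.h>

/* Each iteration clears the least significant set bit, so the loop body
   runs exactly once per 1 bit -- efficient when x is sparse. */
int popcount_sparse(uint64_t x)
{
    int count = 0;
    while (x) {
        x &= x - 1; // zero out the least significant nonzero bit
        count++;
    }
    return count;
}
```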

If greater memory usage is allowed, we can calculate the Hamming weight faster than the above methods. With unlimited memory, we could simply create a large lookup table of the Hamming weight of every 64-bit integer. If we can store a lookup table of the Hamming weight of every 16-bit integer, we can compute the Hamming weight of a 32-bit integer as the sum of the table entries for its two 16-bit halves.
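A minimal sketch of the 16-bit table approach (the table name and the initialization scheme are illustrative; the table itself is 64 KiB):

```c
#include <stdint.h>

/* wordbits[i] holds the Hamming weight of i, for all 16-bit i. */
static uint8_t wordbits[1 << 16];

/* Fill the table once, e.g. at startup: the weight of i is its low bit
   plus the weight of i >> 1, which was computed earlier in the loop. */
void init_wordbits(void)
{
    for (uint32_t i = 1; i < (1 << 16); i++)
        wordbits[i] = (uint8_t)((i & 1) + wordbits[i >> 1]);
}

/* Sum the table entries for the two 16-bit halves of a 32-bit word. */
int popcount32_table(uint32_t x)
{
    return wordbits[x & 0xFFFF] + wordbits[x >> 16];
}
```

Two table lookups and an add per word is fast, but the table can evict useful data from the cache, so this trade-off is workload-dependent.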

Muła et al.[14] have shown that a vectorized version of popcount64b can run faster than dedicated instructions (e.g., popcnt on x64 processors).

In error-correcting coding, the minimum Hamming weight, commonly referred to as the minimum weight wmin of a code, is the weight of the lowest-weight nonzero code word. The weight w of a code word is the number of 1s in the word. For example, the word 11001010 has a weight of 4.

In a linear block code the minimum weight is also the minimum Hamming distance (dmin) and determines the error-correction capability of the code: if wmin = n, then dmin = n, and the code can correct up to ⌊(dmin − 1)/2⌋ errors.[15]

Some C compilers provide intrinsic functions that provide bit counting facilities. For example, GCC (since version 3.4 in April 2004) includes a builtin function `__builtin_popcount` that will use a processor instruction if available or an efficient library implementation otherwise.[16] LLVM-GCC has included this function since version 1.5 in June 2005.[17]

In the C++ standard library, the bit-array data structure `bitset` has a `count()` method that counts the number of bits that are set. In C++20, a new header `<bit>` was added, containing functions `std::popcount` and `std::has_single_bit`, taking arguments of unsigned integer types.

In Java, the growable bit-array data structure `BitSet` has a `BitSet.cardinality()` method that counts the number of bits that are set. In addition, there are `Integer.bitCount(int)` and `Long.bitCount(long)` functions to count bits in primitive 32-bit and 64-bit integers, respectively. Also, the `BigInteger` arbitrary-precision integer class has a `BigInteger.bitCount()` method that counts bits.

In Python, the `int` type has a `bit_count()` method to count the number of bits set. This functionality was added in Python 3.10, released in 2021.[18]

In Common Lisp, the function `logcount`, given a non-negative integer, returns the number of 1 bits. (For negative integers it returns the number of 0 bits in 2's complement notation.) In either case the integer can be a BIGNUM.

Starting in GHC 7.4, the Haskell base package has a `popCount` function available on all types that are instances of the `Bits` class (available from the `Data.Bits` module).[19]

The MySQL dialect of SQL provides `BIT_COUNT()` as a built-in function.[20]

Fortran 2008 has the standard, intrinsic, elemental function `popcnt` returning the number of nonzero bits within an integer (or integer array).[21]

Some programmable scientific pocket calculators feature special commands to calculate the number of set bits, e.g. `#B` on the HP-16C[3][22] and WP 43S,[23][24] `#BITS`[25][26] or `BITSUM`[27][28] on HP-16C emulators, and `nBITS` on the WP 34S.[29][30]