Benford's law

Benford's law, also called the Newcomb–Benford law, the law of anomalous numbers, or the first-digit law, is an observation about the frequency distribution of leading digits in many real-life sets of numerical data. The law states that in many naturally occurring collections of numbers, the leading digit is likely to be small. In sets that obey the law, the number 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time. If the digits were distributed uniformly, they would each occur about 11.1% of the time. Benford's law also makes predictions about the distribution of second digits, third digits, digit combinations, and so on.
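The first-digit frequencies quoted above come from the formula P(d) = log10(1 + 1/d) for d = 1, ..., 9. A minimal sketch that reproduces them:

```python
import math

# Benford's law: probability that d appears as the leading significant digit.
def benford_probability(d: int) -> float:
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(f"leading digit {d}: {benford_probability(d):.1%}")
# Prints ~30.1% for 1 and ~4.6% for 9, matching the figures above;
# a uniform distribution would give ~11.1% for each digit.
```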

Gompertz–Makeham law of mortality

The Gompertz–Makeham law states that the human death rate is the sum of an age-dependent component (the Gompertz term), which increases exponentially with age, and an age-independent component (the Makeham term). In a protected environment where external causes of death are rare, the age-independent mortality component is often negligible. In this case the formula simplifies to a Gompertz law of mortality. In 1825, Benjamin Gompertz proposed an exponential increase in death rates with age.
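In symbols, the law gives the hazard (force of mortality) at age x as mu(x) = A·e^(Bx) + C, with the exponential term due to Gompertz and the constant C due to Makeham. A minimal sketch, with illustrative rather than fitted parameter values:

```python
import math

# Gompertz-Makeham hazard: mu(x) = A * exp(B * x) + C.
# A * exp(B * x) is the age-dependent Gompertz term; C is the Makeham term.
# Parameter values below are illustrative only, not fitted to real mortality data.
def hazard(age: float, A: float = 2e-5, B: float = 0.1, C: float = 5e-4) -> float:
    return A * math.exp(B * age) + C

for age in (30, 60, 90):
    print(f"age {age}: hazard {hazard(age):.5f}")
# Setting C = 0 recovers the pure Gompertz law of mortality.
```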

Heaps' law

In linguistics, Heaps' law is an empirical law which describes the number of distinct words in a document as a function of the document length. It can be formulated as V(n) = K·n^β, where V(n) is the number of distinct words in a document of n words and K and β are free parameters determined empirically; for English text corpora, K typically lies between 10 and 100 and β between 0.4 and 0.6.
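A minimal sketch of measuring this empirically, using a deliberately naive whitespace tokenizer:

```python
# Track vocabulary growth V(n) as a function of document length n in words.
def vocabulary_growth(text: str) -> list[tuple[int, int]]:
    seen: set[str] = set()
    points = []
    for n, word in enumerate(text.lower().split(), start=1):
        seen.add(word)
        points.append((n, len(seen)))  # (document length n, distinct words V(n))
    return points

# Fitting V(n) = K * n**beta to these points (e.g., linear regression of
# log V against log n) typically gives beta around 0.4-0.6 for English text.
```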

Long tail

In statistics and business, a long tail of some distributions of numbers is the portion of the distribution having many occurrences far from the "head" or central part of the distribution. The distribution could involve popularities, random numbers of occurrences of events with various probabilities, etc. The term is often used loosely, either with no definition or with an arbitrary one, but precise definitions are possible.

Lotka's law

Lotka's law, named after Alfred J. Lotka, is one of a variety of special applications of Zipf's law. It describes the frequency of publication by authors in any given field. It states that the number of authors making n contributions in a given period is about 1/n^a of the number making a single contribution, where a nearly always equals two, i.e., an approximate inverse-square law: the number of authors publishing a certain number of articles is a fixed ratio to the number of authors publishing a single article. As the number of articles published increases, authors producing that many publications become less frequent. There are 1/4 as many authors publishing two articles within a specified time period as there are single-publication authors, 1/9 as many publishing three articles, 1/16 as many publishing four articles, etc. Though the law itself covers many disciplines, the actual ratios involved are discipline-specific.
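A minimal sketch of the inverse-square case (a = 2), reproducing the 1/4, 1/9, 1/16 ratios above:

```python
# Lotka's law: authors with n publications number about C / n**a, where C is
# the count of single-publication authors and a is close to 2 in most fields.
def lotka_authors(single_authors: int, n: int, a: float = 2.0) -> float:
    return single_authors / n ** a

for n in range(1, 5):
    print(f"{n} publication(s): {lotka_authors(144, n):.0f} authors")
# With 144 single-publication authors: 36 with two, 16 with three, 9 with four.
```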

Pareto principle

The Pareto principle states that for many outcomes roughly 80% of consequences come from 20% of the causes. Other names for this principle are the 80/20 rule, the law of the vital few, or the principle of factor sparsity.
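One way to make the 80/20 figure concrete: under a Pareto distribution with tail index alpha, the top fraction p of the population accounts for a share p^(1 - 1/alpha) of the total, and an exact 80/20 split corresponds to alpha = log 5 / log 4 ≈ 1.16. A minimal sketch of that calculation:

```python
import math

# Share of the total accounted for by the top fraction p of a Pareto
# distribution with tail index alpha (from the Pareto Lorenz curve).
def top_share(p: float, alpha: float) -> float:
    return p ** (1 - 1 / alpha)

alpha = math.log(5) / math.log(4)  # ~1.161, the exact 80/20 case
print(f"alpha = {alpha:.3f}: top 20% accounts for {top_share(0.2, alpha):.0%}")
```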

Power law

In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another. For instance, considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four.
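A minimal sketch of this scale invariance, using the square-area example (a power law with exponent 2):

```python
# A power law f(x) = c * x**a satisfies f(k * x) = k**a * f(x):
# scaling the input by k scales the output by k**a, whatever x is.
def area(side: float) -> float:
    return side ** 2  # exponent a = 2

for side in (1.0, 3.0, 10.0):
    ratio = area(2 * side) / area(side)
    print(f"side {side}: doubling it multiplies the area by {ratio:g}")
# Prints 4 every time: the relative change is independent of the initial size.
```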

Rank-size distribution

Rank-size distribution is the distribution of size by rank, in decreasing order of size. For example, if a data set consists of items of sizes 5, 100, 5, and 8, the rank-size distribution is 100, 8, 5, 5. This is also known as the rank-frequency distribution, when the source data are from a frequency distribution. These distributions are particularly of interest when the data vary significantly in scale, such as city size or word frequency. They frequently follow a power law distribution, or less well-known ones such as a stretched exponential function or parabolic fractal distribution, at least approximately, for certain ranges of ranks.
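A minimal sketch that reproduces the worked example above:

```python
# Rank-size distribution: sizes sorted in decreasing order, paired with ranks.
def rank_size(sizes: list[float]) -> list[tuple[int, float]]:
    return list(enumerate(sorted(sizes, reverse=True), start=1))

print(rank_size([5, 100, 5, 8]))
# [(1, 100), (2, 8), (3, 5), (4, 5)]
```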

Regression toward the mean

In statistics, regression toward the mean is the phenomenon whereby, if a sample point of a random variable is extreme, a future point is likely to be closer to the mean or average on further measurements. To avoid making incorrect inferences, regression toward the mean must be considered when designing scientific experiments and interpreting data. Historically, what is now called regression toward the mean was also called reversion to the mean and reversion to mediocrity.
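A minimal simulation that exhibits the effect, assuming two equally noisy measurements of a stable underlying quantity:

```python
import random

random.seed(0)
selected = []
for _ in range(100_000):
    true_value = random.gauss(0, 1)           # stable underlying quantity
    first = true_value + random.gauss(0, 1)   # first noisy measurement
    second = true_value + random.gauss(0, 1)  # second noisy measurement
    if first > 2:                             # keep only extreme first results
        selected.append((first, second))

mean_first = sum(f for f, _ in selected) / len(selected)
mean_second = sum(s for _, s in selected) / len(selected)
print(f"first: {mean_first:.2f}, second: {mean_second:.2f}")
# The second mean is markedly lower: selecting on an extreme first measurement,
# not any causal change, produces the drop toward the overall mean of 0.
```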

Safety in numbers

Safety in numbers is the hypothesis that, by being part of a large physical group or mass, an individual is less likely to be the victim of a mishap, accident, attack, or other bad event. Some related theories also argue that mass behaviour can reduce accident risks, such as in traffic safety – in this case, the safety effect creates an actual reduction of danger, rather than just a redistribution over a larger group.

Zipf's law

Zipf's law is an empirical law formulated using mathematical statistics that refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution, one of a family of related discrete power law probability distributions. The Zipf distribution is related to the zeta distribution, but is not identical to it.
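In its classic form, the law says the frequency of the item of rank r is proportional to 1/r^s with s close to 1, so the second most common item occurs about half as often as the most common. A minimal sketch of the normalized distribution over N items:

```python
# Zipf distribution over N items: P(rank r) = (1 / r**s) / H, where H is the
# generalized harmonic number normalizing the weights. The zeta distribution
# is the related infinite-support case (which requires s > 1).
def zipf_pmf(N: int, s: float = 1.0) -> list[float]:
    weights = [1 / r ** s for r in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

print([round(p, 3) for p in zipf_pmf(5)])
# Rank 2 has about half the probability of rank 1 when s = 1.
```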