# Standard probability space

In probability theory, a standard probability space, also called a Lebesgue–Rokhlin probability space or just a Lebesgue space (the latter term is ambiguous), is a probability space satisfying certain assumptions introduced by Vladimir Rokhlin in 1940. Rokhlin showed that the unit interval endowed with the Lebesgue measure has important advantages over general probability spaces, and can be used as a probability space for all practical purposes in probability theory. The dimension of the unit interval is not an obstacle, as was already clear to Norbert Wiener, who constructed the Wiener process (also called Brownian motion) in the form of a measurable map from the unit interval to the space of continuous functions.

## Short history

The theory of standard probability spaces was started by von Neumann in 1932[1] and shaped by Vladimir Rokhlin in 1940.[2]

Nowadays standard probability spaces may be (and often are) treated in the framework of descriptive set theory, via standard Borel spaces. This approach is based on the isomorphism theorem for standard Borel spaces. An alternative approach of Rokhlin, based on measure theory, neglects null sets, in contrast to descriptive set theory. Standard probability spaces are used routinely in ergodic theory.[3][4]

## Definition

One of several well-known equivalent definitions of standardness is given below, after some preparations. All probability spaces are assumed to be complete.

### Isomorphism

An isomorphism between two probability spaces is an invertible map such that both it and its inverse are measurable and measure preserving. Two probability spaces are isomorphic if there exists an isomorphism between them.

### Isomorphism modulo zero

Two probability spaces ${\displaystyle \textstyle (\Omega _{1},{\mathcal {F}}_{1},P_{1})}$, ${\displaystyle \textstyle (\Omega _{2},{\mathcal {F}}_{2},P_{2})}$ are isomorphic ${\displaystyle \textstyle \operatorname {mod} \,0}$, if there exist null sets ${\displaystyle \textstyle A_{1}\subset \Omega _{1}}$, ${\displaystyle \textstyle A_{2}\subset \Omega _{2}}$ such that the probability spaces ${\displaystyle \textstyle \Omega _{1}\setminus A_{1}}$, ${\displaystyle \textstyle \Omega _{2}\setminus A_{2}}$ are isomorphic (being endowed naturally with sigma-fields and probability measures).

### Standard probability space

A probability space is standard, if it is isomorphic ${\displaystyle \textstyle \operatorname {mod} \,0}$ to an interval with Lebesgue measure, a finite or countable set of atoms, or a combination (disjoint union) of both.
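For instance (a hypothetical example, directly instantiating the definition), a probability space carrying half of its mass uniformly on an interval and half on a single atom ${\displaystyle \textstyle a}$ is standard, being the combination

```latex
\[
  \bigl([0,\tfrac12],\ \operatorname{mes}\bigr)\ \sqcup\ \{a\},
  \qquad P(\{a\}) = \tfrac12 ,
\]
```

an interval of total Lebesgue measure ${\displaystyle \textstyle 1/2}$ disjointly united with one atom of mass ${\displaystyle \textstyle 1/2}$.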

In some presentations the measure is assumed finite, not necessarily probabilistic; in some, atoms are not allowed.

## Examples of non-standard probability spaces

### A naive white noise

The space of all functions ${\displaystyle \textstyle f:{\mathbb {R} }\to {\mathbb {R} }}$ may be thought of as the product ${\displaystyle \textstyle {\mathbb {R} }^{\mathbb {R} }}$ of a continuum of copies of the real line ${\displaystyle \textstyle {\mathbb {R} }}$. One may endow ${\displaystyle \textstyle {\mathbb {R} }}$ with a probability measure, say, the standard normal distribution ${\displaystyle \textstyle \gamma =N(0,1)}$, and treat the space of functions as the product ${\displaystyle \textstyle ({\mathbb {R} },\gamma )^{\mathbb {R} }}$ of a continuum of identical probability spaces ${\displaystyle \textstyle ({\mathbb {R} },\gamma )}$. The product measure ${\displaystyle \textstyle \gamma ^{\mathbb {R} }}$ is a probability measure on ${\displaystyle \textstyle {\mathbb {R} }^{\mathbb {R} }}$. Many non-experts are inclined to believe that ${\displaystyle \textstyle \gamma ^{\mathbb {R} }}$ describes the so-called white noise.

However, it does not. For white noise, its integral from 0 to 1 should be a random variable distributed N(0, 1). In contrast, the integral (from 0 to 1) of ${\displaystyle \textstyle f\in \textstyle ({\mathbb {R} },\gamma )^{\mathbb {R} }}$ is undefined. Even worse, f fails to be almost surely measurable. Still worse, the probability of f being measurable is undefined. And the worst thing: if X is a random variable distributed (say) uniformly on (0, 1) and independent of f, then f(X) is not a random variable at all (it lacks measurability). The underlying reason: every set in the product σ-algebra is determined by countably many coordinates, while measurability of f is not.

### A perforated interval

Let ${\displaystyle \textstyle Z\subset (0,1)}$ be a set whose inner Lebesgue measure is equal to 0, but outer Lebesgue measure is equal to 1 (thus, ${\displaystyle \textstyle Z}$ is nonmeasurable in the extreme). There exists a probability measure ${\displaystyle \textstyle m}$ on ${\displaystyle \textstyle Z}$ such that ${\displaystyle \textstyle m(Z\cap A)=\operatorname {mes} (A)}$ for every Lebesgue measurable ${\displaystyle \textstyle A\subset (0,1)}$. (Here ${\displaystyle \textstyle \operatorname {mes} }$ is the Lebesgue measure.) Events and random variables on the probability space ${\displaystyle \textstyle (Z,m)}$ (treated ${\displaystyle \textstyle \operatorname {mod} \,0}$) are in a natural one-to-one correspondence with events and random variables on the probability space ${\displaystyle \textstyle ((0,1),\operatorname {mes} )}$. Many non-experts are inclined to conclude that the probability space ${\displaystyle \textstyle (Z,m)}$ is as good as ${\displaystyle \textstyle ((0,1),\operatorname {mes} )}$.

However, it is not. A random variable ${\displaystyle \textstyle X}$ defined by ${\displaystyle \textstyle X(\omega )=\omega }$ is distributed uniformly on ${\displaystyle \textstyle (0,1)}$. The conditional measure, given ${\displaystyle \textstyle X=x}$, is just a single atom (at ${\displaystyle \textstyle x}$), provided that ${\displaystyle \textstyle ((0,1),\operatorname {mes} )}$ is the underlying probability space. However, if ${\displaystyle \textstyle (Z,m)}$ is used instead, then the conditional measure does not exist when ${\displaystyle \textstyle x\notin Z}$.

A perforated circle is constructed similarly. Its events and random variables are the same as on the usual circle. The group of rotations acts on them naturally. However, it fails to act on the perforated circle.

### A superfluous measurable set

Let ${\displaystyle \textstyle Z\subset (0,1)}$ be as in the previous example. Sets of the form ${\displaystyle \textstyle (A\cap Z)\cup (B\setminus Z),}$ where ${\displaystyle \textstyle A}$ and ${\displaystyle \textstyle B}$ are arbitrary Lebesgue measurable sets, form a σ-algebra ${\displaystyle \textstyle {\mathcal {F}};}$ it contains the Lebesgue σ-algebra and ${\displaystyle \textstyle Z.}$ The formula

${\displaystyle \displaystyle m{\big (}(A\cap Z)\cup (B\setminus Z){\big )}=p\,\operatorname {mes} (A)+(1-p)\operatorname {mes} (B)}$

gives the general form of a probability measure ${\displaystyle \textstyle m}$ on ${\displaystyle \textstyle {\big (}(0,1),{\mathcal {F}}{\big )}}$ that extends the Lebesgue measure; here ${\displaystyle \textstyle p\in [0,1]}$ is a parameter. To be specific, we choose ${\displaystyle \textstyle p=0.5.}$ Many non-experts are inclined to believe that such an extension of the Lebesgue measure is at least harmless.
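One can check directly that ${\displaystyle \textstyle m}$ is indeed a probability measure extending the Lebesgue measure: taking ${\displaystyle \textstyle A=B=C}$ for a Lebesgue measurable ${\displaystyle \textstyle C}$, and then ${\displaystyle \textstyle A=(0,1)}$, ${\displaystyle \textstyle B=\emptyset }$, gives

```latex
\[
  m(C) = p\,\operatorname{mes}(C) + (1-p)\,\operatorname{mes}(C) = \operatorname{mes}(C),
  \qquad
  m(Z) = p\,\operatorname{mes}\bigl((0,1)\bigr) = p .
\]
```

In particular, with ${\displaystyle \textstyle p=0.5}$ the set ${\displaystyle \textstyle Z}$ acquires measure ${\displaystyle \textstyle 0.5}$, even though its Lebesgue inner and outer measures are 0 and 1.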

However, it is the perforated interval in disguise. The map

${\displaystyle \displaystyle f(x)={\begin{cases}0.5x&{\text{for }}x\in Z,\\0.5+0.5x&{\text{for }}x\in (0,1)\setminus Z\end{cases}}}$

is an isomorphism between ${\displaystyle \textstyle {\big (}(0,1),{\mathcal {F}},m{\big )}}$ and the perforated interval corresponding to the set

${\displaystyle \displaystyle Z_{1}=\{0.5x:x\in Z\}\cup \{0.5+0.5x:x\in (0,1)\setminus Z\}\,,}$

another set of inner Lebesgue measure 0 but outer Lebesgue measure 1.

## A criterion of standardness

Standardness of a given probability space ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ is equivalent to a certain property of a measurable map ${\displaystyle \textstyle f}$ from ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ to a measurable space ${\displaystyle \textstyle (X,\Sigma ).}$ Notably, the answer (standard, or not) does not depend on the choice of ${\displaystyle \textstyle (X,\Sigma )}$ and ${\displaystyle \textstyle f}$. This fact is quite useful: one may adapt the choice of ${\displaystyle \textstyle (X,\Sigma )}$ and ${\displaystyle \textstyle f}$ to the given ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$; there is no need to examine all cases. It may be convenient to examine a random variable ${\displaystyle \textstyle f:\Omega \to {\mathbb {R} },}$ a random vector ${\displaystyle \textstyle f:\Omega \to {\mathbb {R} }^{n},}$ a random sequence ${\displaystyle \textstyle f:\Omega \to {\mathbb {R} }^{\infty },}$ or a sequence of events ${\displaystyle \textstyle (A_{1},A_{2},\dots )}$ treated as a sequence of two-valued random variables, ${\displaystyle \textstyle f:\Omega \to \{0,1\}^{\infty }.}$

Two conditions will be imposed on ${\displaystyle \textstyle f}$ (to be injective, and generating). Below it is assumed that such ${\displaystyle \textstyle f}$ is given. The question of its existence will be addressed afterwards.

The probability space ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ is assumed to be complete (otherwise it cannot be standard).

### A single random variable

A measurable function ${\displaystyle \textstyle f:\Omega \to {\mathbb {R} }}$ induces a pushforward measure: the probability measure ${\displaystyle \textstyle \mu }$ on ${\displaystyle \textstyle {\mathbb {R} },}$ defined by

${\displaystyle \displaystyle \mu (B)=P{\big (}f^{-1}(B){\big )}}$    for Borel sets ${\displaystyle \textstyle B\subset {\mathbb {R} }.}$

(It is nothing but the distribution of the random variable.) The image ${\displaystyle \textstyle f(\Omega )}$ is always a set of full outer measure,

${\displaystyle \displaystyle \mu ^{*}{\big (}f(\Omega ){\big )}=1,}$

but its inner measure can differ (see a perforated interval). In other words, ${\displaystyle \textstyle f(\Omega )}$ need not be a set of full measure ${\displaystyle \textstyle \mu .}$
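The pushforward formula ${\displaystyle \textstyle \mu (B)=P(f^{-1}(B))}$ can be estimated by Monte Carlo on the probability space ${\displaystyle \textstyle ((0,1),\operatorname {mes} )}$; the map ${\displaystyle \textstyle f(\omega )=\omega ^{2}}$ below is an arbitrary choice for illustration, not anything from the theory above.

```python
import random

# Monte Carlo sketch of the pushforward measure mu(B) = P(f^{-1}(B))
# on the probability space ((0,1), Lebesgue measure).

def f(omega):
    return omega ** 2  # a measurable map (0,1) -> R (illustrative choice)

def pushforward_prob(borel_set_indicator, n=100_000, seed=0):
    """Estimate mu(B) = P(f^{-1}(B)) by sampling omega uniformly on (0,1)."""
    rng = random.Random(seed)
    hits = sum(borel_set_indicator(f(rng.random())) for _ in range(n))
    return hits / n

# mu((0, 1/4)) = P(omega^2 < 1/4) = P(omega < 1/2) = 1/2
print(pushforward_prob(lambda y: 0 < y < 0.25))  # close to 0.5
```

Here the image ${\displaystyle \textstyle f(\Omega )=(0,1)}$ happens to be of full measure; the perforated interval shows this can fail.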

A measurable function ${\displaystyle \textstyle f:\Omega \to {\mathbb {R} }}$ is called generating if ${\displaystyle \textstyle {\mathcal {F}}}$ is the completion of the σ-algebra of inverse images ${\displaystyle \textstyle f^{-1}(B),}$ where ${\displaystyle \textstyle B\subset {\mathbb {R} }}$ runs over all Borel sets.

Caution.   The following condition is not sufficient for ${\displaystyle \textstyle f}$ to be generating: for every ${\displaystyle \textstyle A\in {\mathcal {F}}}$ there exists a Borel set ${\displaystyle \textstyle B\subset {\mathbb {R} }}$ such that ${\displaystyle \textstyle P(A\Delta f^{-1}(B))=0.}$ (${\displaystyle \textstyle \Delta }$ means symmetric difference).

Theorem. Let a measurable function ${\displaystyle \textstyle f:\Omega \to {\mathbb {R} }}$ be injective and generating. Then the following two conditions are equivalent:

* ${\displaystyle \textstyle f(\Omega )}$ is a set of full measure ${\displaystyle \textstyle \mu }$;
* ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ is a standard probability space.

### A random vector

The same theorem holds for any ${\displaystyle {\mathbb {R} }^{n}\,}$ (in place of ${\displaystyle {\mathbb {R} }\,}$). A measurable function ${\displaystyle f:\Omega \to {\mathbb {R} }^{n}\,}$ may be thought of as a finite sequence of random variables ${\displaystyle X_{1},\dots ,X_{n}:\Omega \to {\mathbb {R} },\,}$ and ${\displaystyle f\,}$ is generating if and only if ${\displaystyle {\mathcal {F}}\,}$ is the completion of the σ-algebra generated by ${\displaystyle X_{1},\dots ,X_{n}.\,}$

### A random sequence

The theorem still holds for the space ${\displaystyle {\mathbb {R} }^{\infty }\,}$ of infinite sequences. A measurable function ${\displaystyle f:\Omega \to {\mathbb {R} }^{\infty }\,}$ may be thought of as an infinite sequence of random variables ${\displaystyle X_{1},X_{2},\dots :\Omega \to {\mathbb {R} },\,}$ and ${\displaystyle f\,}$ is generating if and only if ${\displaystyle {\mathcal {F}}\,}$ is the completion of the σ-algebra generated by ${\displaystyle X_{1},X_{2},\dots .\,}$

### A sequence of events

In particular, if the random variables ${\displaystyle X_{n}\,}$ take on only two values 0 and 1, we deal with a measurable function ${\displaystyle f:\Omega \to \{0,1\}^{\infty }\,}$ and a sequence of sets ${\displaystyle A_{1},A_{2},\ldots \in {\mathcal {F}}.\,}$ The function ${\displaystyle f\,}$ is generating if and only if ${\displaystyle {\mathcal {F}}\,}$ is the completion of the σ-algebra generated by ${\displaystyle A_{1},A_{2},\dots .\,}$

In the pioneering work of Rokhlin, sequences ${\displaystyle A_{1},A_{2},\ldots \,}$ that correspond to injective, generating ${\displaystyle f\,}$ are called bases of the probability space ${\displaystyle (\Omega ,{\mathcal {F}},P)\,}$. A basis is called complete mod 0 if ${\displaystyle f(\Omega )\,}$ is of full measure ${\displaystyle \mu .\,}$ Rokhlin proved that if a probability space is complete mod 0 with respect to some basis, then it is complete mod 0 with respect to every other basis, and he defined Lebesgue spaces by this completeness property.

The four cases treated above are mutually equivalent, and can be united, since the measurable spaces ${\displaystyle {\mathbb {R} },\,}$ ${\displaystyle {\mathbb {R} }^{n},\,}$ ${\displaystyle {\mathbb {R} }^{\infty }\,}$ and ${\displaystyle \{0,1\}^{\infty }\,}$ are mutually isomorphic; they all are standard measurable spaces (in other words, standard Borel spaces).
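The isomorphism between the unit interval and ${\displaystyle \{0,1\}^{\infty }\,}$ underlying this equivalence is essentially binary expansion. A minimal numerical sketch (the truncation to finitely many digits is for illustration only; dyadic rationals, a null set, are where injectivity fails, which does not matter mod 0):

```python
# Sketch of the map between [0,1) and {0,1}^infinity via binary digits.

def bits(x, n):
    """First n binary digits of x in [0,1)."""
    out = []
    for _ in range(n):
        x *= 2
        d = int(x)
        out.append(d)
        x -= d
    return out

def number(bs):
    """Map a finite bit sequence back to a point of [0,1)."""
    return sum(b / 2 ** (k + 1) for k, b in enumerate(bs))

x = 0.390625  # = 0.0110010000... in binary (a dyadic point, so exact in floats)
bs = bits(x, 10)
print(bs)           # [0, 1, 1, 0, 0, 1, 0, 0, 0, 0]
print(number(bs))   # recovers 0.390625
```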

Existence of an injective measurable function from ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ to a standard measurable space ${\displaystyle \textstyle (X,\Sigma )}$ does not depend on the choice of ${\displaystyle \textstyle (X,\Sigma ).}$ Taking ${\displaystyle \textstyle (X,\Sigma )=\{0,1\}^{\infty }}$ we get the property well known as being countably separated (also called separable by some authors).

Existence of a generating measurable function from ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ to a standard measurable space ${\displaystyle \textstyle (X,\Sigma )}$ also does not depend on the choice of ${\displaystyle \textstyle (X,\Sigma ).}$ Taking ${\displaystyle \textstyle (X,\Sigma )=\{0,1\}^{\infty }}$ we get the property well known as being countably generated (mod 0).

| Probability space | Countably separated | Countably generated | Standard |
| --- | --- | --- | --- |
| Interval with Lebesgue measure | Yes | Yes | Yes |
| Naive white noise | No | No | No |
| Perforated interval | Yes | Yes | No |

Every injective measurable function from a standard probability space to a standard measurable space is generating. This property does not hold for the non-standard probability space dealt with in the subsection "A superfluous measurable set" above.

Caution.   The property of being countably generated is invariant under mod 0 isomorphisms, but the property of being countably separated is not. In fact, a standard probability space ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ is countably separated if and only if the cardinality of ${\displaystyle \textstyle \Omega }$ does not exceed continuum. A standard probability space may contain a null set of any cardinality, thus, it need not be countably separated. However, it always contains a countably separated subset of full measure.

## Equivalent definitions

Let ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ be a complete probability space such that the cardinality of ${\displaystyle \textstyle \Omega }$ does not exceed continuum (the general case is reduced to this special case, see the caution above).

### Via absolute measurability

Definition.   ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ is standard if it is countably separated, countably generated, and absolutely measurable.

"Absolutely measurable" means: measurable in every countably separated, countably generated probability space containing it.

### Via perfectness

Definition.   ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ is standard if it is countably separated and perfect.

"Perfect" means that for every measurable function from ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ to ${\displaystyle {\mathbb {R} }\,}$ the image measure is regular. (Here the image measure is defined on all sets whose inverse images belong to ${\displaystyle \textstyle {\mathcal {F}}}$, irrespective of the Borel structure of ${\displaystyle {\mathbb {R} }\,}$.)


## Verifying the standardness

Every probability distribution on the space ${\displaystyle \textstyle {\mathbb {R} }^{n}}$ turns it into a standard probability space. (Here, a probability distribution means a probability measure defined initially on the Borel sigma-algebra and completed.)

The same holds on every Polish space.

For example, the Wiener measure turns the Polish space ${\displaystyle \textstyle C[0,\infty )}$ (of all continuous functions ${\displaystyle \textstyle [0,\infty )\to {\mathbb {R} },}$ endowed with the topology of local uniform convergence) into a standard probability space.
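A sample from the Wiener measure can be approximated numerically: by Donsker's theorem, a random walk with Gaussian steps of variance ${\displaystyle \textstyle dt}$ approximates a Brownian path. A minimal sketch (step count and seed are arbitrary choices for this illustration):

```python
import math
import random

# Approximate a sample path from the Wiener measure on C[0,1]
# by a Gaussian random walk with n steps of variance dt = 1/n.

def brownian_path(n=1000, seed=42):
    rng = random.Random(seed)
    dt = 1.0 / n
    w = [0.0]  # Brownian motion starts at 0
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return w

w = brownian_path()
print(len(w), w[0])  # n + 1 sample points, starting at 0.0
```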

Another example: for every sequence of random variables, their joint distribution turns the Polish space ${\displaystyle \textstyle {\mathbb {R} }^{\infty }}$ (of sequences; endowed with the product topology) into a standard probability space.

(Thus, the idea of dimension, very natural for topological spaces, is utterly inappropriate for standard probability spaces.)

The product of two standard probability spaces is a standard probability space.

The same holds for the product of countably many spaces.

A measurable subset of a standard probability space is again a standard probability space (it is assumed that the set is not a null set, and is endowed with the conditional measure).

Every probability measure on a standard Borel space turns it into a standard probability space.

## Using the standardness

### Regular conditional probabilities

In the discrete setup, the conditional probability is another probability measure, and the conditional expectation may be treated as the (usual) expectation with respect to the conditional measure. In the non-discrete setup, conditioning is often treated indirectly, since the condition may have probability 0 (see conditional expectation). As a result, a number of well-known facts have special 'conditional' counterparts: linearity of the expectation; Jensen's inequality; Hölder's inequality; the monotone convergence theorem; etc.

Given a random variable ${\displaystyle \textstyle Y}$ on a probability space ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$, it is natural to try to construct a conditional measure ${\displaystyle \textstyle P_{y}}$, that is, the conditional distribution of ${\displaystyle \textstyle \omega \in \Omega }$ given ${\displaystyle \textstyle Y(\omega )=y}$. In general this is impossible. However, for a standard probability space ${\displaystyle \textstyle (\Omega ,{\mathcal {F}},P)}$ this is possible, and well known as a canonical system of measures, which is basically the same as conditional probability measures, disintegration of measure, and regular conditional probabilities.

The conditional Jensen's inequality is just the (usual) Jensen's inequality applied to the conditional measure. The same holds for many other facts.
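In the discrete setup this is easy to see explicitly: the conditional measure ${\displaystyle \textstyle P_{y}}$ is an ordinary probability measure, and conditional Jensen is ordinary Jensen under ${\displaystyle \textstyle P_{y}}$. A small sketch with a made-up joint distribution and the convex function ${\displaystyle \textstyle g(x)=x^{2}}$:

```python
from fractions import Fraction as F

# A made-up joint distribution P on pairs (x, y); Y is the second coordinate.
P = {(1, 0): F(1, 8), (2, 0): F(3, 8), (1, 1): F(1, 4), (3, 1): F(1, 4)}

def conditional_measure(y):
    """The conditional measure P_y: an ordinary probability measure on x-values."""
    z = sum(p for (x, yy), p in P.items() if yy == y)   # P(Y = y)
    return {x: p / z for (x, yy), p in P.items() if yy == y}

g = lambda x: x * x   # a convex function

for y in (0, 1):
    Py = conditional_measure(y)
    mean = sum(x * p for x, p in Py.items())        # E[X | Y = y]
    mean_g = sum(g(x) * p for x, p in Py.items())   # E[g(X) | Y = y]
    assert g(mean) <= mean_g                        # conditional Jensen, per value of y
    print(y, mean, mean_g)
```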

### Measure preserving transformations

Given two probability spaces ${\displaystyle \textstyle (\Omega _{1},{\mathcal {F}}_{1},P_{1})}$, ${\displaystyle \textstyle (\Omega _{2},{\mathcal {F}}_{2},P_{2})}$ and a measure preserving map ${\displaystyle \textstyle f:\Omega _{1}\to \Omega _{2}}$, the image ${\displaystyle \textstyle f(\Omega _{1})}$ need not cover the whole ${\displaystyle \textstyle \Omega _{2}}$; it may miss a null set. It may seem that ${\displaystyle \textstyle P_{2}(f(\Omega _{1}))}$ has to be equal to 1, but it is not so: the outer measure of ${\displaystyle \textstyle f(\Omega _{1})}$ is equal to 1, but the inner measure may differ. However, if the probability spaces ${\displaystyle \textstyle (\Omega _{1},{\mathcal {F}}_{1},P_{1})}$, ${\displaystyle \textstyle (\Omega _{2},{\mathcal {F}}_{2},P_{2})}$ are standard, then ${\displaystyle \textstyle P_{2}(f(\Omega _{1}))=1}$. If ${\displaystyle \textstyle f}$ is also one-to-one, then every ${\displaystyle \textstyle A\in {\mathcal {F}}_{1}}$ satisfies ${\displaystyle \textstyle f(A)\in {\mathcal {F}}_{2}}$ and ${\displaystyle \textstyle P_{2}(f(A))=P_{1}(A)}$; therefore ${\displaystyle \textstyle f^{-1}}$ is measurable (and measure preserving).
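A classical measure preserving map from ${\displaystyle \textstyle ((0,1),\operatorname {mes} )}$ to itself is the doubling map ${\displaystyle \textstyle T(x)=2x{\bmod {1}}}$: the preimage of an interval consists of two intervals of half the length. A Monte Carlo sketch on one sample interval (the interval, sample size, and seed are arbitrary choices for this illustration):

```python
import random

# The doubling map T(x) = 2x mod 1 preserves Lebesgue measure on [0,1):
# for an interval B, mes(T^{-1}(B)) = mes(B).

def T(x):
    return (2 * x) % 1.0

def measure_of_preimage(a, b, n=200_000, seed=1):
    """Estimate mes({x : a < T(x) < b}) by uniform sampling."""
    rng = random.Random(seed)
    return sum(a < T(rng.random()) < b for _ in range(n)) / n

# T^{-1}((0.2, 0.7)) = (0.1, 0.35) U (0.6, 0.85), total length 0.5 = mes((0.2, 0.7))
print(measure_of_preimage(0.2, 0.7))  # close to 0.5
```

Note that ${\displaystyle \textstyle T}$ is measure preserving but not one-to-one; the statements above about invertibility do not apply to it.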

There is a coherent way to ignore the sets of measure 0 in a measure space. Striving to get rid of null sets, mathematicians often use equivalence classes of measurable sets or functions. Equivalence classes of measurable subsets of a probability space form a normed complete Boolean algebra, called the measure algebra (or metric structure). Every measure preserving map ${\displaystyle \textstyle f:\Omega _{1}\to \Omega _{2}}$ leads to a homomorphism ${\displaystyle \textstyle F}$ of measure algebras; basically, ${\displaystyle \textstyle F(B)=f^{-1}(B)}$ for ${\displaystyle \textstyle B\in {\mathcal {F}}_{2}}$.

It may seem that every homomorphism of measure algebras has to correspond to some measure preserving map, but it is not so. However, for standard probability spaces each ${\displaystyle \textstyle F}$ corresponds to some ${\displaystyle \textstyle f}$.

## Notes

1. von Neumann (1932).
2. Published in short in 1947, in detail in 1949 in Russian and in 1952 in English, reprinted in 1962. An unpublished text of 1940 is also mentioned in the literature. "The theory of Lebesgue spaces in its present form was constructed by V. A. Rokhlin."
3. "In this book we will deal exclusively with Lebesgue spaces."
4. "Ergodic theory on Lebesgue spaces" is the subtitle of one such book.
