Ulam matrix

In mathematical set theory, an Ulam matrix is an array of subsets of a cardinal number with certain properties. Ulam matrices were introduced by Stanislaw Ulam in his 1930 work on measurable cardinals: they may be used, for example, to show that a real-valued measurable cardinal is weakly inaccessible.[1]

Definition

Suppose that $\kappa$ and $\lambda$ are cardinal numbers, and let $\mathcal F$ be a $\lambda$-complete filter on $\lambda$. An Ulam matrix is a collection of subsets $A_{\alpha\beta}$ of $\lambda$, indexed by $\alpha \in \kappa$ and $\beta \in \lambda$, such that:

  • If $\beta, \gamma \in \lambda$ and $\beta \ne \gamma$, then $A_{\alpha\beta}$ and $A_{\alpha\gamma}$ are disjoint for every $\alpha \in \kappa$.
  • For each $\beta \in \lambda$, the union of the sets in column $\beta$, $\bigcup\left\{A_{\alpha\beta} : \alpha \in \kappa\right\}$, is in the filter $\mathcal F$.
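
To illustrate the definition, here is a sketch of the classical construction, essentially the one in Jech,[1] for the simplest case $\kappa = \omega$ and $\lambda = \omega_1$, with $\mathcal F$ the filter of co-countable subsets of $\omega_1$. Every ordinal $\eta < \omega_1$ is countable, so for each $\eta$ one may choose a surjection $f_\eta : \omega \to \eta + 1$ and set

$$A_{\alpha\beta} = \{\eta < \omega_1 : f_\eta(\alpha) = \beta\} \qquad (\alpha \in \omega,\ \beta \in \omega_1).$$

For fixed $\alpha$ the sets $A_{\alpha\beta}$ are pairwise disjoint, since $f_\eta(\alpha)$ is a single ordinal, and for fixed $\beta$,

$$\bigcup\left\{A_{\alpha\beta} : \alpha \in \omega\right\} = \{\eta < \omega_1 : \beta \in \operatorname{ran} f_\eta\} \supseteq \{\eta < \omega_1 : \eta \ge \beta\},$$

whose complement is contained in the countable set $\beta$, so the union lies in $\mathcal F$. The same construction works with any successor cardinal $\nu^+$ in place of $\omega_1$ and $\nu$ in place of $\omega$.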

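The application mentioned in the introduction runs, in outline, as follows (this is only a sketch of the standard argument). Suppose $\mu$ were a $\nu^+$-additive measure on all subsets of $\nu^+$ with $\mu(\nu^+) = 1$, vanishing on singletons and hence, by additivity, on every set of size at most $\nu$. Take an Ulam matrix with $\kappa = \nu$, $\lambda = \nu^+$, and $\mathcal F$ the filter of sets whose complement has size at most $\nu$. Each column union $\bigcup\left\{A_{\alpha\beta} : \alpha \in \nu\right\}$ then has measure $1$, so by additivity some entry $A_{\alpha(\beta)\beta}$ of column $\beta$ has positive measure. Since there are $\nu^+$ columns but only $\nu$ rows, some single row contains $\nu^+$ pairwise disjoint sets of positive measure; but for each $n$ a measure of total mass $1$ admits fewer than $n$ pairwise disjoint sets of measure greater than $1/n$, hence at most countably many pairwise disjoint sets of positive measure, a contradiction. Thus no successor cardinal carries such a measure, so a real-valued measurable cardinal must be a limit cardinal; since it is also regular (by a similar additivity argument), it is weakly inaccessible.
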
References

  1. Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (Third Millennium ed.), Berlin, New York: Springer-Verlag, p. 131, ISBN 978-3-540-44085-7