Iterative proportional fitting
The iterative proportional fitting procedure (IPF or IPFP, also known as biproportional fitting or biproportion in statistics or economics (input-output analysis, etc.), RAS algorithm[1] in economics, raking in survey statistics, and matrix scaling in computer science) is the operation of finding the fitted matrix [math]\displaystyle{ X }[/math] which is the closest to an initial matrix [math]\displaystyle{ Z }[/math] but with the row and column totals of a target matrix [math]\displaystyle{ Y }[/math] (which provides the constraints of the problem; the interior of [math]\displaystyle{ Y }[/math] is unknown). The fitted matrix is of the form [math]\displaystyle{ X=PZQ }[/math], where [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] are diagonal matrices such that [math]\displaystyle{ X }[/math] has the margins (row and column sums) of [math]\displaystyle{ Y }[/math]. Several algorithms can be used to perform biproportion: entropy maximization,[2][3] information loss (cross-entropy) minimization,[4] or RAS, which consists of factoring the matrix rows to match the specified row totals and then factoring its columns to match the specified column totals; each step usually disturbs the previous step's match, so these steps are repeated in cycles, re-adjusting the rows and columns in turn, until all specified marginal totals are satisfactorily approximated. All of these algorithms give the same solution.[5] In three- or more-dimensional cases, adjustment steps are applied for the marginals of each dimension in turn, and the steps are likewise repeated in cycles.
History
IPF has been "re-invented" many times, the earliest by Kruithof in 1937[6] in relation to telephone traffic ("Kruithof’s double factor method"), by Deming and Stephan in 1940[7] for adjusting census crosstabulations, and by G.V. Sheleikhovskii for traffic, as reported by Bregman.[8] (Deming and Stephan proposed IPFP as an algorithm leading to a minimizer of the Pearson X-squared statistic, which Stephan later showed it does not in general minimize.)[9] Early proofs of uniqueness and convergence came from Sinkhorn (1964),[10] Bacharach (1965),[11] Bishop (1967),[12] and Fienberg (1970).[13] Bishop's proof that IPFP finds the maximum likelihood estimator for any number of dimensions extended a 1959 proof by Brown for 2×2×2… cases. Fienberg's proof by differential geometry exploits the method's constant crossproduct ratios for strictly positive tables. Csiszár (1975)[14] found necessary and sufficient conditions for general tables having zero entries. Pukelsheim and Simeone (2009)[15] give further results on convergence and error behavior.
An exhaustive treatment of the algorithm and its mathematical foundations can be found in the book by Bishop et al. (1975).[16] Idel (2016)[17] gives a more recent survey.
Other general algorithms can be modified to yield the same limit as the IPFP, for instance the Newton–Raphson method and the EM algorithm. In most cases, IPFP is preferred due to its computational speed, low storage requirements, numerical stability and algebraic simplicity.
Applications of IPFP have grown to include trip distribution models (such as the Fratar or Furness methods) and other applications in transportation planning (Lamond and Stewart), survey weighting, synthesis of cross-classified demographic data, adjusting input–output models in economics, estimating expected quasi-independent contingency tables, biproportional apportionment systems of political representation, and preconditioning in linear algebra.[18]
Biproportion
Biproportion, whatever the algorithm used to solve it, is the following concept: matrix [math]\displaystyle{ Z }[/math] and matrix [math]\displaystyle{ Y }[/math] are known real nonnegative matrices of dimension [math]\displaystyle{ n\times m }[/math]; the interior of [math]\displaystyle{ Y }[/math] is unknown, and a matrix [math]\displaystyle{ X }[/math] is sought such that [math]\displaystyle{ X }[/math] has the same margins as [math]\displaystyle{ Y }[/math], i.e. [math]\displaystyle{ Xs=Ys }[/math] and [math]\displaystyle{ s'X=s'Y }[/math] ([math]\displaystyle{ s }[/math] being the sum vector), and such that [math]\displaystyle{ X }[/math] is close to [math]\displaystyle{ Z }[/math] according to a given criterion, the fitted matrix being of the form [math]\displaystyle{ X=K(Z,Y)=PZQ }[/math], where [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] are diagonal matrices.
With the information loss (cross-entropy) criterion, the problem reads:
[math]\displaystyle{ \min \sum_i\sum_j x_{ij}\log(x_{ij}/z_{ij}) }[/math] s.t. [math]\displaystyle{ \sum_j x_{ij}=y_{i.} }[/math], ∀[math]\displaystyle{ i }[/math], and [math]\displaystyle{ \sum_i x_{ij}=y_{.j} }[/math], ∀[math]\displaystyle{ j }[/math]. The Lagrangian is [math]\displaystyle{ L=\sum_i\sum_j x_{ij}\log(x_{ij}/z_{ij})-\sum_i p_i(y_{i.}-\sum_j x_{ij})-\sum_j q_j(y_{.j}-\sum_i x_{ij}) }[/math].
Setting [math]\displaystyle{ \partial L/\partial x_{ij}=\log(x_{ij}/z_{ij})+1+p_i+q_j=0 }[/math] gives
[math]\displaystyle{ x_{ij}=z_{ij} \exp(-(1+p_i+q_j)) }[/math], ∀[math]\displaystyle{ i,j }[/math],
which, after setting [math]\displaystyle{ P_i=\exp(-(1+p_i)) }[/math] and [math]\displaystyle{ Q_j=\exp(-q_j) }[/math], yields
[math]\displaystyle{ x_{ij}=P_i z_{ij} Q_j }[/math], ∀[math]\displaystyle{ i,j }[/math], i.e., [math]\displaystyle{ X=PZQ }[/math],
with [math]\displaystyle{ P_i=y_{i.}(\sum_j z_{ij}Q_j)^{-1} }[/math], ∀[math]\displaystyle{ i }[/math], and [math]\displaystyle{ Q_j=y_{.j}(\sum_i z_{ij}P_i)^{-1} }[/math], ∀[math]\displaystyle{ j }[/math]. [math]\displaystyle{ P_i }[/math] and [math]\displaystyle{ Q_j }[/math] form a system that can be solved iteratively:
[math]\displaystyle{ P_i^{(t+1)}=y_{i.}(\sum_j z_{ij}Q_j^{(t)})^{-1} }[/math], ∀[math]\displaystyle{ i }[/math], and [math]\displaystyle{ Q_j^{(t+1)}=y_{.j}(\sum_i z_{ij}P_i^{(t+1)})^{-1} }[/math], ∀[math]\displaystyle{ j }[/math].
The solution [math]\displaystyle{ X }[/math] is independent of the initialization chosen (i.e., we can begin with [math]\displaystyle{ Q_j^{(0)}=1 }[/math], ∀[math]\displaystyle{ j }[/math], or with [math]\displaystyle{ P_i^{(0)}=1 }[/math], ∀[math]\displaystyle{ i }[/math]). If the matrix [math]\displaystyle{ Z }[/math] is "indecomposable", then this process has a unique fixed point because it is derived from a program whose objective function is convex and continuously differentiable on a compact set. In some cases the solution may not exist: see de Mesnard's example cited by Miller and Blair (Miller R.E. & Blair P.D. (2009) Input-Output Analysis: Foundations and Extensions, Second edition, Cambridge (UK): Cambridge University Press, pp. 335–336, freely available).
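A minimal NumPy sketch of this fixed-point iteration, using the seed matrix and target margins of the worked example given later in this article; the loop length and variable names are illustrative choices, not part of the original formulation:

```python
import numpy as np

# Seed matrix Z and the target margins of Y (taken from the worked example below).
Z = np.array([[40., 30., 20., 10.],
              [35., 50., 100., 75.],
              [30., 80., 70., 120.],
              [20., 30., 40., 50.]])
row_targets = np.array([150., 300., 400., 150.])   # y_i.
col_targets = np.array([200., 300., 400., 100.])   # y_.j

# Fixed-point iteration on the diagonal factors P and Q.
Q = np.ones(Z.shape[1])
for _ in range(200):
    P = row_targets / (Z @ Q)          # P_i = y_i. / sum_j z_ij Q_j
    Q = col_targets / (Z.T @ P)        # Q_j = y_.j / sum_i z_ij P_i
X = np.diag(P) @ Z @ np.diag(Q)        # X = P Z Q

print(np.round(X.sum(axis=1), 4))      # approximately row_targets
print(np.round(X.sum(axis=0), 4))      # approximately col_targets
```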
Some properties (see de Mesnard (1994)):
Lack of information: if [math]\displaystyle{ Z }[/math] brings no information, i.e., [math]\displaystyle{ z_{ij}=z }[/math], ∀[math]\displaystyle{ i,j }[/math], then [math]\displaystyle{ x_{ij}=P_i\,z\,Q_j }[/math], so the solution depends only on the margins of [math]\displaystyle{ Y }[/math] (it is the independence table [math]\displaystyle{ x_{ij}=y_{i.}y_{.j}/y_{..} }[/math]).
Idempotency: [math]\displaystyle{ X=K(Z,Y)=Z }[/math] if [math]\displaystyle{ Y }[/math] has the same margins as [math]\displaystyle{ Z }[/math].
Composition of biproportions: [math]\displaystyle{ K(K(Z,Y_1),Y_2)=K(Z,Y_2) }[/math]; more generally, [math]\displaystyle{ K(...K(K(Z,Y_1),Y_2)...,Y_N)=K(Z,Y_N) }[/math].
Zeros: a zero in [math]\displaystyle{ Z }[/math] is projected as a zero in [math]\displaystyle{ X }[/math]. Thus, a block-diagonal matrix is projected as a block-diagonal matrix and a triangular matrix is projected as a triangular matrix.
Theorem of separable modifications: if [math]\displaystyle{ Z }[/math] is premultiplied by a diagonal matrix and/or postmultiplied by a diagonal matrix, then the solution is unchanged.
Theorem of "unicity": if [math]\displaystyle{ K^{q} }[/math] is any non-specified algorithm with [math]\displaystyle{ \hat{X}=K^{q}(Z,Y)=UZV }[/math], [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math] being unknown, then [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math] can always be changed into the standard form of [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math]. The proof relies on some of the above properties, particularly the theorem of separable modifications and the composition of biproportions.
Algorithm 1 (classical IPF)
Given a two-way (I × J)-table [math]\displaystyle{ x_{ij} }[/math], we wish to estimate a new table [math]\displaystyle{ \hat{m}_{ij} = a_i b_j x_{ij} }[/math] for all i and j such that the marginals satisfy [math]\displaystyle{ \sum_j \hat{m}_{ij}\ = u_{i}, }[/math] and [math]\displaystyle{ \sum_i \hat{m}_{ij}\ = v_{j} }[/math].
Choose initial values [math]\displaystyle{ \hat{m}_{ij}^{(0)} := x_{ij} }[/math], and for [math]\displaystyle{ \eta \geq 1 }[/math] set
- [math]\displaystyle{ \hat{m}_{ij}^{(2\eta - 1)} = \frac{\hat{m}_{ij}^{(2\eta-2)}u_{i}}{\sum_{k=1}^J \hat{m}_{ik}^{(2\eta-2)}} }[/math]
- [math]\displaystyle{ \hat{m}_{ij}^{(2\eta)} = \frac{\hat{m}_{ij}^{(2\eta-1)}v_{j}}{\sum_{k=1}^I \hat{m}_{kj}^{(2\eta-1)}}. }[/math]
Repeat these steps until row and column totals are sufficiently close to u and v.
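These two update steps translate directly into a few lines of NumPy; the seed table and targets below are those of the worked example later in the article, and the stopping rule is an arbitrary illustrative choice:

```python
import numpy as np

# Seed table x and target margins u (rows) and v (columns); both sets of targets sum to 1000.
x = np.array([[40., 30., 20., 10.],
              [35., 50., 100., 75.],
              [30., 80., 70., 120.],
              [20., 30., 40., 50.]])
u = np.array([150., 300., 400., 150.])
v = np.array([200., 300., 400., 100.])

m = x.copy()
for _ in range(100):                        # each pass = one odd (row) and one even (column) step
    m *= (u / m.sum(axis=1))[:, None]       # scale rows to match u
    m *= (v / m.sum(axis=0))[None, :]       # scale columns to match v
    if np.allclose(m.sum(axis=1), u, atol=1e-8):
        break                               # rows still match after the column step: converged

print(np.round(m, 2))
```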
Notes:
- For the RAS form of the algorithm, define the diagonalization operator [math]\displaystyle{ diag: \mathbb{R}^k \longrightarrow \mathbb{R}^{k\times k} }[/math], which produces a (diagonal) matrix with its input vector on the main diagonal and zeros elsewhere. Then, for each row adjustment, let [math]\displaystyle{ R^{\eta}=diag(\frac{u_i}{\sum_j m_{ij}^{(2\eta-2)}}) }[/math], from which [math]\displaystyle{ M^{2\eta - 1}=R^{\eta}M^{2\eta-2} }[/math]. Similarly, each column adjustment's [math]\displaystyle{ S^{\eta}=diag(\frac{v_j}{\sum_i m_{ij}^{(2\eta-1)}}) }[/math], from which [math]\displaystyle{ M^{2\eta}=M^{2\eta-1}S^{\eta} }[/math]. Reducing the operations to the necessary ones, it can easily be seen that RAS does the same as classical IPF. In practice, one would not implement actual matrix multiplication with the whole R and S matrices; the RAS form is more a notational than a computational convenience, as the sketch below illustrates.
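The same cycles written in the RAS matrix notation; building the full diagonal matrices is purely for exposition (a sketch, not an efficient implementation):

```python
import numpy as np

x = np.array([[40., 30., 20., 10.],
              [35., 50., 100., 75.],
              [30., 80., 70., 120.],
              [20., 30., 40., 50.]])
u = np.array([150., 300., 400., 150.])     # target row sums
v = np.array([200., 300., 400., 100.])     # target column sums

M = x.copy()
for _ in range(100):
    R = np.diag(u / M.sum(axis=1))         # row-adjustment factors as a diagonal matrix
    M = R @ M
    S = np.diag(v / M.sum(axis=0))         # column-adjustment factors as a diagonal matrix
    M = M @ S

print(np.round(M, 2))                      # same limit as the element-wise updates above
```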
Algorithm 2 (factor estimation)
Assume the same setting as in the classical IPFP. Alternatively, we can estimate the row and column factors separately: Choose initial values [math]\displaystyle{ \hat{b}_j^{(0)} := 1 }[/math], and for [math]\displaystyle{ \eta \geq 1 }[/math] set
- [math]\displaystyle{ \hat{a}_i^{(\eta)} = \frac{u_{i}}{\sum_j\ x_{ij}\hat{b}_j^{(\eta-1)}}, }[/math]
- [math]\displaystyle{ \hat{b}_j^{(\eta)} = \frac{v_{j}}{\sum_i\ x_{ij}\hat{a}_i^{(\eta)}} }[/math]
Repeat these steps until successive changes of a and b are sufficiently negligible (indicating the resulting row- and column-sums are close to u and v).
Finally, the result matrix is [math]\displaystyle{ \hat{m}_{ij} = \hat{a}_i^{(\eta)}\hat{b}_j^{(\eta)}x_{ij} }[/math]
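A NumPy sketch of the factor-estimation variant on the same illustrative data; only the two factor vectors are updated inside the loop, and the fitted table is formed once at the end:

```python
import numpy as np

x = np.array([[40., 30., 20., 10.],
              [35., 50., 100., 75.],
              [30., 80., 70., 120.],
              [20., 30., 40., 50.]])
u = np.array([150., 300., 400., 150.])
v = np.array([200., 300., 400., 100.])

b = np.ones(x.shape[1])
for _ in range(100):
    a = u / (x @ b)              # a_i = u_i / sum_j x_ij b_j
    b_new = v / (x.T @ a)        # b_j = v_j / sum_i x_ij a_i
    if np.allclose(b_new, b, atol=1e-10):
        b = b_new
        break
    b = b_new

m_hat = a[:, None] * x * b[None, :]    # m̂_ij = a_i b_j x_ij, built only once at the end
print(np.round(m_hat, 2))
print(np.round(m_hat.sum(axis=1), 2), np.round(m_hat.sum(axis=0), 2))
```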
Notes:
- The two variants of the algorithm are mathematically equivalent, as can be seen by formal induction. With factor estimation, it is not necessary to actually compute each cycle's [math]\displaystyle{ \hat{m}_{ij}^{(\eta)} }[/math].
- The factorization is not unique, since [math]\displaystyle{ m_{ij} = a_i b_j x_{ij} = (\gamma a_i)(\frac{1}{\gamma}b_j)x_{ij} }[/math] for all [math]\displaystyle{ \gamma \gt 0 }[/math], as the short numerical check below illustrates.
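The non-uniqueness of the factors is easy to confirm numerically; all values below are arbitrary illustrations:

```python
import numpy as np

# Rescaling the factors by any gamma > 0 leaves the fitted table unchanged.
x = np.array([[40., 30.], [20., 10.]])
a = np.array([1.5, 1.2])        # illustrative row factors
b = np.array([0.8, 1.1])        # illustrative column factors
gamma = 3.0

m1 = a[:, None] * x * b[None, :]
m2 = (gamma * a)[:, None] * x * (b / gamma)[None, :]
print(np.allclose(m1, m2))      # True
```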
Discussion
The loosely demanded 'similarity' between the fitted table M and the initial table X can be explained as follows: IPFP (and thus RAS) maintains the crossproduct ratios, i.e.
- [math]\displaystyle{ \frac{m^{(\eta)}_{ij}m^{(\eta)}_{hk}}{m^{(\eta)}_{ik}m^{(\eta)}_{hj}} = \frac{x_{ij}x_{hk}}{x_{ik}x_{hj}}\ \forall\ \eta \geq 0\text{ and }i\neq h,\quad j\neq k }[/math]
since [math]\displaystyle{ m^{(\eta)}_{ij} = a_i^{(\eta)}b_j^{(\eta)}x_{ij}. }[/math]
This property is sometimes called structure conservation and directly leads to the geometrical interpretation of contingency tables and the proof of convergence in the seminal paper of Fienberg (1970).
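A quick NumPy check of this structure-conservation property on the worked example's data (the cell indices are chosen arbitrarily):

```python
import numpy as np

x = np.array([[40., 30., 20., 10.],
              [35., 50., 100., 75.],
              [30., 80., 70., 120.],
              [20., 30., 40., 50.]])
u = np.array([150., 300., 400., 150.])
v = np.array([200., 300., 400., 100.])

m = x.copy()
for _ in range(200):
    m *= (u / m.sum(axis=1))[:, None]
    m *= (v / m.sum(axis=0))[None, :]

# The odds ratio of any 2x2 subtable is unchanged by the fit.
i, h, j, k = 0, 2, 1, 3
before = (x[i, j] * x[h, k]) / (x[i, k] * x[h, j])
after = (m[i, j] * m[h, k]) / (m[i, k] * m[h, j])
print(np.isclose(before, after))   # True
```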
Direct factor estimation (algorithm 2) is generally the more efficient way to solve IPF: Whereas a form of the classical IPFP needs
- [math]\displaystyle{ IJ(2+J) + IJ(2+I) = I^2J + IJ^2 + 4IJ \, }[/math]
elementary operations in each iteration step (including a row and a column fitting step), factor estimation needs only
- [math]\displaystyle{ I(1+J) + J(1+I) = 2IJ + I + J \, }[/math]
operations, being at least one order of magnitude faster than classical IPFP.
IPFP can be used to estimate expected quasi-independent (incomplete) contingency tables, with [math]\displaystyle{ u_i = x_{i+}, v_j = x_{+j} }[/math], and [math]\displaystyle{ m^{0}_{ij}=1 }[/math] for included cells and [math]\displaystyle{ m^{0}_{ij}=0 }[/math] for excluded cells. For fully independent (complete) contingency tables, estimation with IPFP concludes exactly in one cycle.
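A sketch of this use on a small invented 3×3 table whose diagonal is treated as structurally zero; the data and the choice of excluded cells are assumptions for illustration only:

```python
import numpy as np

# Observed incomplete table: the diagonal cells are excluded from the model (structural zeros).
x = np.array([[0., 12., 7.],
              [9., 0., 14.],
              [6., 11., 0.]])
include = ~np.eye(3, dtype=bool)

u = x.sum(axis=1)              # u_i = x_{i+}
v = x.sum(axis=0)              # v_j = x_{+j}

m = include.astype(float)      # seed: 1 for included cells, 0 for excluded cells
for _ in range(1000):          # no row or column is entirely excluded here, so no division by zero
    m *= (u / m.sum(axis=1))[:, None]
    m *= (v / m.sum(axis=0))[None, :]

print(np.round(m, 3))          # expected counts under quasi-independence
```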
Comparison with the NM-method
Similar to the IPF, the NM-method is also an operation of finding a matrix [math]\displaystyle{ X }[/math] which is the “closest” to matrix [math]\displaystyle{ Z }[/math] ([math]\displaystyle{ Z \in \mathbb{N}^{n \times m } }[/math]) while its row totals and column totals are identical to those of a target matrix [math]\displaystyle{ Y }[/math] [math]\displaystyle{ ( Y \in \mathbb{N}^{n \times m }) }[/math].
However, there are differences between the NM-method and the IPF. For instance, the NM-method defines closeness of matrices of the same size differently from the IPF.[19] Also, the NM-method was developed to solve for matrix [math]\displaystyle{ X }[/math] in problems where matrix [math]\displaystyle{ \boldsymbol{Z} }[/math] is not a sample from the population characterized by the row totals and column totals of matrix [math]\displaystyle{ Y }[/math], but represents another population.[19] In contrast, matrix [math]\displaystyle{ \boldsymbol{Z} }[/math] is a sample from this population in problems where the IPF is applied as the maximum likelihood estimator.
Macdonald (2023)[20] concurs with the conclusion of Naszodi (2023)[21] that the IPF is suitable for sampling correction tasks, but not for the generation of counterfactuals. Similarly to Naszodi, Macdonald also questions whether the row and column proportional transformations of the IPF preserve the structure of association within a contingency table that allows us to study social mobility.
Existence and uniqueness of MLEs
Necessary and sufficient conditions for the existence and uniqueness of MLEs are complicated in the general case (see Haberman 1974[22]), but sufficient conditions for 2-dimensional tables are simple:
- the marginals of the observed table do not vanish (that is, [math]\displaystyle{ x_{i+} \gt 0,\ x_{+j} \gt 0 }[/math]) and
- the observed table is inseparable (i.e. the table does not permute to a block-diagonal shape).
If unique MLEs exist, IPFP exhibits linear convergence in the worst case (Fienberg 1970), but exponential convergence has also been observed (Pukelsheim and Simeone 2009). If a direct estimator (i.e. a closed form of [math]\displaystyle{ (\hat{m}_{ij}) }[/math]) exists, IPFP converges after 2 iterations. If unique MLEs do not exist, IPFP converges toward the so-called extended MLEs by design (Haberman 1974), but convergence may be arbitrarily slow and often computationally infeasible.
If all observed values are strictly positive, existence and uniqueness of MLEs and therefore convergence is ensured.
Example
Consider the following table, given with the row- and column-sums and targets.
|        | 1   | 2   | 3   | 4   | TOTAL | TARGET |
|--------|-----|-----|-----|-----|-------|--------|
| 1      | 40  | 30  | 20  | 10  | 100   | 150    |
| 2      | 35  | 50  | 100 | 75  | 260   | 300    |
| 3      | 30  | 80  | 70  | 120 | 300   | 400    |
| 4      | 20  | 30  | 40  | 50  | 140   | 150    |
| TOTAL  | 125 | 190 | 230 | 255 | 800   |        |
| TARGET | 200 | 300 | 400 | 100 | 1000  |        |
For executing the classical IPFP, we first adjust the rows:
|        | 1      | 2      | 3      | 4      | TOTAL   | TARGET |
|--------|--------|--------|--------|--------|---------|--------|
| 1      | 60.00  | 45.00  | 30.00  | 15.00  | 150.00  | 150    |
| 2      | 40.38  | 57.69  | 115.38 | 86.54  | 300.00  | 300    |
| 3      | 40.00  | 106.67 | 93.33  | 160.00 | 400.00  | 400    |
| 4      | 21.43  | 32.14  | 42.86  | 53.57  | 150.00  | 150    |
| TOTAL  | 161.81 | 241.50 | 281.58 | 315.11 | 1000.00 |        |
| TARGET | 200    | 300    | 400    | 100    | 1000    |        |
The first step exactly matched row sums, but not the column sums. Next we adjust the columns:
|        | 1      | 2      | 3      | 4      | TOTAL   | TARGET |
|--------|--------|--------|--------|--------|---------|--------|
| 1      | 74.16  | 55.90  | 42.62  | 4.76   | 177.44  | 150    |
| 2      | 49.92  | 71.67  | 163.91 | 27.46  | 312.96  | 300    |
| 3      | 49.44  | 132.50 | 132.59 | 50.78  | 365.31  | 400    |
| 4      | 26.49  | 39.93  | 60.88  | 17.00  | 144.30  | 150    |
| TOTAL  | 200.00 | 300.00 | 400.00 | 100.00 | 1000.00 |        |
| TARGET | 200    | 300    | 400    | 100    | 1000    |        |
Now the column sums exactly match their targets, but the row sums no longer match theirs. After completing three cycles, each with a row adjustment and a column adjustment, we get a closer approximation:
|        | 1      | 2      | 3      | 4      | TOTAL   | TARGET |
|--------|--------|--------|--------|--------|---------|--------|
| 1      | 64.61  | 46.28  | 35.42  | 3.83   | 150.13  | 150    |
| 2      | 49.95  | 68.15  | 156.49 | 25.37  | 299.96  | 300    |
| 3      | 56.70  | 144.40 | 145.06 | 53.76  | 399.92  | 400    |
| 4      | 28.74  | 41.18  | 63.03  | 17.03  | 149.99  | 150    |
| TOTAL  | 200.00 | 300.00 | 400.00 | 100.00 | 1000.00 |        |
| TARGET | 200    | 300    | 400    | 100    | 1000    |        |
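The tables above can be reproduced with a short NumPy script; printed values may differ in the last digit because the intermediate tables above are rounded to two decimals:

```python
import numpy as np

seed = np.array([[40., 30., 20., 10.],
                 [35., 50., 100., 75.],
                 [30., 80., 70., 120.],
                 [20., 30., 40., 50.]])
row_targets = np.array([150., 300., 400., 150.])
col_targets = np.array([200., 300., 400., 100.])

m = seed.copy()
for cycle in range(3):                              # three row/column cycles, as in the tables above
    m *= (row_targets / m.sum(axis=1))[:, None]     # row adjustment
    m *= (col_targets / m.sum(axis=0))[None, :]     # column adjustment

print(np.round(m, 2))                               # matches the last table above up to rounding
print(np.round(m.sum(axis=1), 2))                   # row sums close to the targets
```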
Implementation
The R package mipfp (currently in version 3.1) provides a multi-dimensional implementation of the traditional iterative proportional fitting procedure.[23] The package allows the updating of an N-dimensional array with respect to given target marginal distributions (which, in turn, can be multi-dimensional).
Python has an equivalent package, ipfn,[24][25] which can be installed via pip. The package supports numpy and pandas input objects.
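A sketch of how the ipfn package can be called on the two-dimensional example from this article; the call pattern follows the package's documented usage, but argument names and defaults may differ between versions, so treat it as an outline rather than a definitive reference:

```python
import numpy as np
from ipfn import ipfn   # pip install ipfn

seed = np.array([[40., 30., 20., 10.],
                 [35., 50., 100., 75.],
                 [30., 80., 70., 120.],
                 [20., 30., 40., 50.]])
row_targets = np.array([150., 300., 400., 150.])
col_targets = np.array([200., 300., 400., 100.])

# aggregates lists the target marginals; dimensions lists the axes each marginal constrains.
aggregates = [row_targets, col_targets]
dimensions = [[0], [1]]

fitted = ipfn.ipfn(seed, aggregates, dimensions).iteration()
print(np.round(fitted, 2))
```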
See also
- Data cleansing
- Data editing
- NM-method
- Triangulation (social science) for quantitative and qualitative study data enhancement.
References
- ↑ Bacharach, M. (1965). "Estimating Nonnegative Matrices from Marginal Data". International Economic Review (Blackwell Publishing) 6 (3): 294–310. doi:10.2307/2525582.
- ↑ Jaynes E.T. (1957) Information theory and statistical mechanics, Physical Review, 106: 620-30.
- ↑ Wilson A.G. (1970) Entropy in urban and regional modelling. London: Pion LTD, Monograph in spatial and environmental systems analysis.
- ↑ Kullback S. & Leibler R.A. (1951) On information and sufficiency, Annals of Mathematics and Statistics, 22 (1951) 79-86.
- ↑ de Mesnard, L. (1994). "Unicity of Biproportion". SIAM Journal on Matrix Analysis and Applications 15 (2): 490–495. doi:10.1137/S0895479891222507. https://www.researchgate.net/publication/243095013_Unicity_of_Biproportion
- ↑ Kruithof, J (February 1937). "Telefoonverkeersrekening (Calculation of telephone traffic)". De Ingenieur 52 (8): E15-E25. https://resolver.kb.nl/resolve?urn=dts:2980078:mpeg21.
- ↑ "On a Least Squares Adjustment of a Sampled Frequency Table When the Expected Marginal Totals are Known". Annals of Mathematical Statistics 11 (4): 427–444. 1940. doi:10.1214/aoms/1177731829.
- ↑ Lamond, B. and Stewart, N.F. (1981) Bregman's balancing method. Transportation Research 15B, 239-248.
- ↑ Stephan, F. F. (1942). "Iterative method of adjusting frequency tables when expected margins are known". Annals of Mathematical Statistics 13 (2): 166–178. doi:10.1214/aoms/1177731604.
- ↑ Sinkhorn, Richard (1964). "A Relationship Between Arbitrary Positive Matrices and Doubly Stochastic Matrices". Annals of Mathematical Statistics 35 (2): 876–879.
- ↑ Bacharach, Michael (1965). "Estimating Nonnegative Matrices from Marginal Data". International Economic Review 6 (3): 294–310.
- ↑ Bishop, Y. M. M. (1967). "Multidimensional contingency tables: cell estimates". PhD thesis. Harvard University.
- ↑ Fienberg, S. E. (1970). "An Iterative Procedure for Estimation in Contingency Tables". Annals of Mathematical Statistics 41 (3): 907–917. doi:10.1214/aoms/1177696968.
- ↑ Csiszár, I. (1975). "I-Divergence Geometry of Probability Distributions and Minimization Problems". Annals of Probability 3 (1): 146–158. doi:10.1214/aop/1176996454.
- ↑ Pukelsheim, F.; Simeone, B. (2009). "On the Iterative Proportional Fitting Procedure: Structure of Accumulation Points and L1-Error Analysis". http://opus.bibliothek.uni-augsburg.de/volltexte/2009/1368/.
- ↑ Bishop, Y. M. M.; Fienberg, S. E.; Holland, P. W. (1975). Discrete Multivariate Analysis: Theory and Practice. MIT Press. ISBN 978-0-262-02113-5. https://archive.org/details/discretemultivar00bish.
- ↑ Martin Idel (2016) A review of matrix scaling and Sinkhorn’s normal form for matrices and positive maps arXiv preprint https://arxiv.org/pdf/1609.06349.pdf
- ↑ Bradley, A.M. (2010). Algorithms for the equilibration of matrices and their application to limited-memory quasi-Newton methods. Ph.D. thesis, Institute for Computational and Mathematical Engineering, Stanford University.
- ↑ 19.0 19.1 Naszodi, A.; Mendonca, F. (2021). "A new method for identifying the role of marital preferences at shaping marriage patterns". Journal of Demographic Economics 1 (1): 1–27. doi:10.1017/dem.2021.1.
- ↑ Macdonald, K. (2023). "The marginal adjustment of mobility tables, revisited". OSF: 1–19. https://osf.io/z4u2d/download.
- ↑ Naszodi, A. (2023). "The iterative proportional fitting algorithm and the NM-method: solutions for two different sets of problems". arXiv:2303.05515 [econ.GN].
- ↑ Haberman, S. J. (1974). The Analysis of Frequency Data. Univ. Chicago Press. ISBN 978-0-226-31184-5. https://archive.org/details/analysisoffreque0000habe.
- ↑ Barthélemy, Johan; Suesse, Thomas. "mipfp: Multidimensional Iterative Proportional Fitting". CRAN. https://cran.r-project.org/web/packages/mipfp/index.html.
- ↑ "ipfn: pip". https://pypi.org/project/ipfn/.
- ↑ "ipfn: github". https://github.com/Dirguis/ipfn.
Original source: https://en.wikipedia.org/wiki/Iterative_proportional_fitting