Convergence of the Iterative Scaling Procedure for Non‐Negative Matrices
Author(s) - Oliver Pretzel
Publication year - 1980
Publication title - Journal of the London Mathematical Society
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.441
H-Index - 62
eISSN - 1469-7750
pISSN - 0024-6107
DOI - 10.1112/jlms/s2-21.2.379
Subject(s) - convergence , scaling , matrices , combinatorics , mathematics
The iterative scaling procedure (ISP), also known as the iterative proportional fitting procedure, was first suggested in 1940 by Deming and Stephan [4]. It consists in modifying a non-negative matrix to achieve specified row and column sums by alternately multiplying the rows and columns to adjust their sums to the desired values. Since 1940 the procedure has been the subject of numerous papers by both statisticians and pure mathematicians; Fienberg [6] contains an admirable review of the papers up to 1970. Sinkhorn [11] first stressed the idea of diagonal equivalence in 1964, and it is clear that, by the nature of the process, the iterates are diagonally equivalent to the original matrix. That is also true for the limit if no non-zero cell of the initial matrix tends to zero (an easy proof of this is included in the present paper for completeness).

Suppose that all the desired sums are 1. In that case the limit matrix is doubly stochastic, and the structure of all doubly stochastic matrices is known (Birkhoff [1]); thus it is possible to determine which cells have to be zero in the limit. Sinkhorn and Knopp [14] give necessary and sufficient conditions for convergence in the doubly stochastic case and show further that only those cells converge to zero that have to do so. That paper forms the starting point for the present one. In 1972 Sinkhorn [12] proved that in the same case one could set all cells that tended to zero equal to zero at the start without changing the limit. The corresponding theorem, proved differently, plays an important role in this paper.

After the paper by Sinkhorn and Knopp, the attention of pure mathematicians shifted away from the ISP itself and towards the concept of diagonal equivalence. In 1968 Brualdi [2] gave necessary and sufficient conditions for the existence of a matrix with a given pattern of non-zero cells and given row and column sums. Several papers, of greater or lesser generality, use these conditions to show that if the initial matrix A has a suitable pattern, then there exists a unique matrix diagonally equivalent to A with the desired row and column sums (see [3], [5], [7], [8], [9] and [13]). These proofs are essentially topological in nature, using fixed-point or optimum theorems. The norms they introduce are closely related to the auxiliary functions defined in Sinkhorn and Knopp and in this paper.

We prove here that a necessary and sufficient condition for the convergence of the ISP is that there exists a matrix with the given totals and with zeros at least wherever the initial matrix has zeros. Such a matrix is used explicitly in the proof, rather than Brualdi's conditions. In the course of the proof we also show that the minimum possible number of cells tend to zero, and that if these cells are set to zero initially, then the process converges to the same limit, which is then diagonally equivalent to the new initial matrix. These results constitute Theorem 1. In a second section they are extended to the case in which certain column sums are left unspecified (Theorem 2).

The author would like to express his gratitude to Datum e.V. of Bonn for suggesting and supporting the research contained in this paper, and to the referee for his helpful comments.
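As an illustration of the alternating row and column scaling described above, the following is a minimal sketch in Python/NumPy. The function name, stopping tolerance, iteration cap, and example matrix are illustrative assumptions and are not taken from the paper; whether the iteration converges at all is governed by the zero-pattern condition of Theorem 1.

import numpy as np

def iterative_scaling(A, row_sums, col_sums, tol=1e-10, max_iter=10_000):
    """Alternately rescale the rows and columns of a non-negative matrix A
    so that its row and column sums approach row_sums and col_sums."""
    X = np.array(A, dtype=float)
    row_sums = np.asarray(row_sums, dtype=float)
    col_sums = np.asarray(col_sums, dtype=float)
    for _ in range(max_iter):
        # Multiply each row so that its sum becomes the desired row total
        # (rows that are identically zero are left unchanged).
        r = X.sum(axis=1)
        X *= np.divide(row_sums, r, out=np.ones_like(r), where=r > 0)[:, None]
        # Multiply each column so that its sum becomes the desired column total.
        c = X.sum(axis=0)
        X *= np.divide(col_sums, c, out=np.ones_like(c), where=c > 0)[None, :]
        # After the column step the row sums may again be off; stop once they
        # are within the tolerance.
        if np.max(np.abs(X.sum(axis=1) - row_sums)) < tol:
            break
    return X

# Doubly stochastic example: all desired row and column sums equal to 1.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [2.0, 0.0, 1.0]])
B = iterative_scaling(A, row_sums=np.ones(3), col_sums=np.ones(3))
print(B.sum(axis=0), B.sum(axis=1))  # both close to [1, 1, 1]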
