COMMUN. STATIST. - THEORY METH., 19(11), 4315-4329 (1990)

PROBABILITY AND STATISTICS
IN THE SERVICE OF COMPUTER SCIENCE:
ILLUSTRATIONS USING THE ASSIGNMENT PROBLEM

J. Michael Steele
Wharton School, Department of Statistics
University of Pennsylvania
Philadelphia PA 19104

Key Words and Phrases: assignment problem; simulated annealing; stochastic optimization; linear programming with random costs; statistical mechanics.

ABSTRACT
Recent work on the assignment problem is surveyed with the aim of illustrating the contribution that stochastic thinking can make to problems of interest to computer scientists. The assignment problem is thus examined in connection with the analysis of greedy algorithms, marriage lemmas, linear programming with random costs, randomization based matching, stochastic programming, and statistical mechanics. (The survey is based on the invited presentation given during the "Statistics Days at FSU" in March 1990.)

1. Introduction.

The role of probability and statistics in computer science has been expanding vigorously for almost thirty years. It is no longer possible to survey that role honestly in a single article, so the path taken here is rather to look hard at one concrete problem that illustrates many different parts that probability and statistics can play.
The problem chosen for the task is the Assignment Problem. Before formally introducing this problem, it seems appropriate to review its qualifications as an illustrative vehicle. It is simple to state and has been extensively studied, but its most compelling charm for the present purpose is the breadth of topics to which it is naturally connected. The assignment problem allows one to discuss a number of beautiful ideas that tie together topics as diverse as linear programming and statistical mechanics.
A final benefit of studying the assignment problem is that it provides natural illustrations of a basic distinction that is sometimes eclipsed. The probabilistic analysis of algorithms is quite distinct from the analysis of probabilistic algorithms. Despite the polysyllabic overlap, these activities have radically different tools, aims, and assumptions.


Copyright © 1991 by Marcel Dekker, Inc.



2. The Assignment Problem.

Consider the task of assigning n jobs to n machines while minimizing the total processing time required to do the n jobs. We assume the required processing times (or more generally the processing costs) are specified by n² real numbers c_ij, where c_ij is viewed as the cost of performing job i on machine j. Formally, the Assignment Problem is the task of determining a permutation π that solves

min_π Σ_{i=1}^n c_{iπ(i)}.

In the sections that follow we will consider two basic stochastic models for this problem; one is given by choosing the c_ij as independent identically distributed random variables, the second is given by choosing the off-diagonal values of c_ij as the interpoint distances of a set of n points in R^d. Each of these models is examined in the contexts of heuristic and optimal algorithms.

3. Simplest Stochastic Model.

No doubt the simplest stochastic model for the assignment problem is given by considering the c_ij to be independent random variables with the uniform distribution on [0, 1]. Walkup (1979) investigated this problem and discovered the striking fact that the expected cost of the minimal assignment is bounded independently of n. More precisely, Walkup showed the cost A_n of the minimal assignment satisfies EA_n < 3.
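Readers with a computer at hand can check this striking fact empirically before seeing any proof. The following sketch is ours (Python, assuming the SciPy library, whose linear_sum_assignment routine solves the assignment problem exactly); it estimates EA_n by Monte Carlo and produces values comfortably under Walkup's bound of 3.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def estimate_EA(n, trials=200, seed=0):
        """Monte Carlo estimate of EA_n for i.i.d. uniform [0,1] costs."""
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(trials):
            c = rng.random((n, n))                 # cost matrix (c_ij)
            rows, cols = linear_sum_assignment(c)  # exact optimal assignment
            total += c[rows, cols].sum()           # A_n for this realization
        return total / trials

    for n in (10, 50, 200):
        print(n, estimate_EA(n))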
Shortly we will take up the issue of bounding (or even determining) EA_n, but it is first worth examining some naive approaches to choosing π and estimating EA_n.

3.1 Greedy Assignments.

Greedy algorithms are among the most studied methods in combinatorial optimization, and in many instances the natural greedy algorithms are indeed optimal. The theory of matroids has developed in good measure because it provides a systematic view of situations for which greedy algorithms yield genuine optima.
In the case of the assignment problem, the greedy approach is not particularly successful, but it still merits a quick look since it provides one of the few instances where a complete analysis is easy. Moreover, examining the failures of the greedy algorithms helps to reveal the subtlety behind the analysis of the optimal solution.

Local Greedy Assignment.
Consider the assignment process that successively examines each job i, 1 ≤ i ≤ n, and makes an assignment to the free machine for which c_ij is minimal, i.e. π(i) is chosen so that c_{iπ(i)} = min{c_{ij} : j ≠ π(k) for all k < i}. Under the hypothesis that the (c_ij) are independent and uniformly distributed on [0, 1], the i'th assignment equals the minimum



of n - i + 1 independent uniformly distributed random variables, so the expected cost of the i'th assignment is exactly 1/(n - i + 2). The total expected cost of the assignment is therefore asymptotic to log n.
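A short simulation makes the calculation concrete. The sketch below is ours (Python with NumPy); it implements the local greedy rule and compares the average cost over repeated trials with the exact expectation Σ_{i=1}^n 1/(n - i + 2).

    import numpy as np

    def local_greedy_cost(c):
        """Assign each job i in turn to the cheapest still-free machine."""
        n = c.shape[0]
        free = list(range(n))
        cost = 0.0
        for i in range(n):
            j = min(free, key=lambda m: c[i, m])  # cheapest free machine
            cost += c[i, j]
            free.remove(j)
        return cost

    rng = np.random.default_rng(1)
    n, trials = 100, 200
    avg = np.mean([local_greedy_cost(rng.random((n, n))) for _ in range(trials)])
    exact = sum(1.0 / (n - i + 2) for i in range(1, n + 1))  # ~ log n
    print(avg, exact)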

Global Greedy Assignment.
A less naive heuristic for the assignment problem chooses matches successively out of the set of all possible matches by choosing the pair i and j for which c_ij is the least cost among the unmatched pairs. This method looks promising since the expected cost of the first match is just 1/(n² + 1), rather than the 1/(n + 1) of the local greedy assignment. Still, this early success soon falters, and the global greedy heuristic again leads to an expected total cost of order log n.
To see why this is so requires a little work. To develop a recursion via a first-step analysis, we let a_n(t) denote the expected cost of the assignment produced by the global greedy heuristic under the assumption that the costs c_ij of the n² job-processor pairs are i.i.d. and uniformly distributed on the restricted interval [t, 1]. To solve the original problem, we therefore need to determine a_n(0).
If m_k(t, u) denotes the density of the minimum of k independent random variables with the uniform distribution on [t, 1], it is easy to see that

a_n(t) = ∫_t^1 u m_{n²}(t, u) du + ∫_t^1 a_{n-1}(u) m_{n²}(t, u) du.

Next, by scaling we have a_n(t) = nt + (1 - t)a_n(0), so writing a_n for a_n(0) and substituting into the integral recursion, we find

a_n = ∫_0^1 u m_{n²}(0, u) du + ∫_0^1 ((n - 1)u + (1 - u)a_{n-1}) m_{n²}(0, u) du

    = n/(1 + n²) + a_{n-1} n²/(1 + n²).
This recursion easily shows that a_n is exactly of order log n, a result that goes back to Kurtzberg (1962).
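The recursion is also easy to check by simulation. In the sketch below (ours, Python with NumPy), the first print gives the empirical average cost of the global greedy heuristic and the second gives a_n computed from the recursion; the two agree within sampling error, and both grow like log n.

    import numpy as np

    def global_greedy_cost(c):
        """Repeatedly match the cheapest remaining (job, machine) pair."""
        c = c.copy()
        n = c.shape[0]
        cost = 0.0
        for _ in range(n):
            i, j = np.unravel_index(np.argmin(c), c.shape)
            cost += c[i, j]
            c[i, :] = np.inf   # job i is now assigned
            c[:, j] = np.inf   # machine j is now taken
        return cost

    rng = np.random.default_rng(2)
    n, trials = 100, 100
    print(np.mean([global_greedy_cost(rng.random((n, n))) for _ in range(trials)]))

    a = 0.0                    # a_m = m/(1+m^2) + a_{m-1} m^2/(1+m^2)
    for m in range(1, n + 1):
        a = m / (1 + m * m) + a * m * m / (1 + m * m)
    print(a)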

3.2 Methods Based on Marriages.

Hall's matching theorem was christened the "Marriage Lemma" by H. Weyl (1949), and his colorful phrasing of Hall's theorem has become standard: "If a set of n men and n women has the property that any subset of k women knows at least k men, then it is possible to marry the n women to the n men so that each woman marries a man she knows." We thus have at hand a sufficient condition for the existence of a perfect matching in a bipartite graph. The sufficient condition of Hall's theorem is also obviously necessary.
There are several ways the Marriage Lemma can be turned toward the assignment problem. Perhaps the simplest idea is to consider all of the pairs (i, j) such that c_ij ≤ t and then ask when it is possible to extract a perfect matching from the resulting graph. If the graph satisfies the condition of the Marriage Lemma, we obtain an assignment that has cost bounded by nt; thus if we let B_t denote the event that



Σ_{1≤j≤n} 1(c_ij ≤ t for some i ∈ S) ≥ |S|   for all S ⊆ {1, 2, ..., n},

then we have the bound

EA_n ≤ nt + nP(B_t^c).

This bound can be used to improve on the O(log n) bound that was obtained by the greedy algorithm, but it seems to lack the force to show EA_n is bounded independently of n. The difficulty comes from the requirement that we consider all S in our construction of B_t. Fortunately, there is a mild variant of the Marriage Lemma that lets us restrict attention to a smaller collection of subsets. With this new condition in hand one can indeed show the uniform boundedness of EA_n.
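Before turning to the variant, we note that the bound itself is easy to evaluate numerically, since by König's theorem the Hall condition in the definition of B_t holds exactly when the threshold graph has a perfect matching. The sketch below is ours (Python, assuming the NetworkX library and its Hopcroft-Karp bipartite matching routine); with t of order (log n)/n it reproduces a bound of order log n, in keeping with the remarks above.

    import networkx as nx
    import numpy as np

    def prob_no_perfect_matching(n, t, trials=200, seed=3):
        """Estimate P(B_t^c): the graph {(i,j) : c_ij <= t} has no
        perfect matching (equivalently, Hall's condition fails)."""
        rng = np.random.default_rng(seed)
        failures = 0
        for _ in range(trials):
            c = rng.random((n, n))
            G = nx.Graph()
            G.add_nodes_from(range(n))            # jobs
            G.add_nodes_from(range(n, 2 * n))     # machines
            G.add_edges_from((i, n + j) for i in range(n)
                             for j in range(n) if c[i, j] <= t)
            match = nx.bipartite.hopcroft_karp_matching(G, range(n))
            if len(match) < 2 * n:                # dict lists both endpoints
                failures += 1
        return failures / trials

    n = 50
    t = 2 * np.log(n) / n
    print("EA_n <=", n * t + n * prob_no_perfect_matching(n, t))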

Symmetric Marriage Lemma and Checking Boundedness.
It is possible to give a symmetric version of the marriage lemma that has the benefit of avoiding the largest values of |S|. The symmetrized version says: "If for each 0 < k ≤ ⌈n/2⌉ it happens that each set of k women knows at least k men and each set of k men knows at least k women, then there is a matching between the men and the women."
It is not hard to prove the Symmetric Marriage Lemma using Hall's theorem; in fact, it can be obtained with help from almost any of the matching theory tools. For example, one can prove it using an induction after the style of the well-known proof of the marriage lemma due to Halmos and Vaughan (1950), or one can extract it from general results like Menger's theorem or the Max-Flow-Min-Cut theorem.
To prove EA_n is uniformly bounded we modify our earlier construction to take advantage of the new lemma. Thus, we let C_t denote the event that for all S ⊆ {1, 2, ..., n} satisfying |S| ≤ ⌈n/2⌉ we have the inequalities

Σ_{1≤j≤n} 1(c_ij ≤ t for some i ∈ S) ≥ |S|

and

Σ_{1≤i≤n} 1(c_ij ≤ t for some j ∈ S) ≥ |S|.

If Z(t, j) denotes the number of distinct elements in a sample of size N drawn with replacement from {1, 2, ..., n}, where N has the binomial distribution with parameters nj and t, then

P(C_t^c) ≤ 2 Σ_{j=1}^{⌈n/2⌉} (n choose j) P(Z(t, j) < j).

By choosing t of order 1/n, it is routine (but tedious) to show that EA_n is bounded by 10. One can try to squeeze more out of this argument, say by noting

EA_n ≤ min_t n(t + P(C_t^c)),

but the underlying construction has built-in constraints that seem likely to keep it from



equalling the best known bounds for EA_n. Consequently, attention will be directed toward other methods.

Walkup's Method.
The preceding calculation gives us the same qualitative understanding of EA_n that Walkup obtained, but Walkup's method gives a quantitatively superior bound. Partly for this reason, we briefly sketch Walkup's original approach; but, since there are methods that are even more quantitatively precise, the main attraction in Walkup's method now comes from the further light it throws on the use of marriage lemmas.
The last variant of the Marriage Lemma that we recall says: "If there is a k ≥ 1 such that each man knows exactly k women and each woman knows exactly k men, then it is possible to marry the men and the women." In other words, any regular bipartite graph contains a perfect matching. To make use of this variant we would like to find a way to construct a regular bipartite graph that carries with it the information in our weighted graph. Such a construction is not presently known, but the idea of examining regular graphs is definitely fruitful. It turns out that we need to look toward regular directed graphs.
We first show there is a pleasant way to associate a random variable with each vertex so that these vertex variables can be related usefully to our uniform edge variables. We first note that if independent random variables Y and Z are distributed so that P(Y ≤ λ) = P(Z ≤ λ) = 1 - (1 - λ)^{1/2} = F(λ), 0 ≤ λ ≤ 1, then X = min(Y, Z) is uniformly distributed on [0, 1]. Now, if we associate with each vertex an independent random variable with distribution F and associate to each edge the minimum of the two associated vertex variables, we obtain a model of a random graph with edge weights that are uniformly distributed. Alas, the new edge variables are no longer independent; nevertheless, the process shows promise.
To get a rigorous approach to A_n under the original independent uniform edge weight model we apparently need to introduce a slightly more complex structure. Thus, for each 1 ≤ i, j ≤ n we consider independent variables {Y_ij} and {Z_ij} with the distribution F and use them to build a random regular directed bipartite digraph G. We write A = (a_1, a_2, ..., a_n) and B = (b_1, b_2, ..., b_n) for the vertex set bipartition. We then take (a_i, b_j) to be a directed edge of G if Y_ij is among the d smallest elements of {Y_ik : 1 ≤ k ≤ n}, and finally we take a directed edge (b_j, a_i) if Z_ij is among the d smallest of {Z_kj : 1 ≤ k ≤ n}.
This construction is now relatively easy to analyze. If Y_i(k) is the k'th smallest element of {Y_ij : 1 ≤ j ≤ n}, and U_(k) is the k'th smallest of n independent uniformly distributed variables, then EY_i(k) ≤ 2EU_(k), and hence EY_i(k) ≤ 2k/(n + 1).
With these observations in place, we just need a good bound on the probability p(n, d) that a random regular bipartite digraph contains a perfect matching. Thus we have the amusing situation that the last marriage lemma we use is a random marriage lemma. By a counting argument and traditional estimations, Walkup (1980) showed:

1 - p(n, 2) ≤ (5n)^{-1}



and

1 - p(n, d) ≤ (d/n)^{(d+1)(d-2)}   for d ≥ 3.

Given these relations, it becomes an easy calculation to show that the expected cost of the optimal assignment is bounded by 3.
Although the analysis given here has always focused on the uniform distribution, this has been for the sake of convenience. In Frenk, van Houweninge, and Rinnooy Kan (1982, 1986), one finds the Walkup approach to the assignment problem can be developed as well for a broad class of distributions. In fact, even the early papers of Kurtzberg (1962) and Borovkov (1962) considered nonuniformly distributed variables.
So far we have seen how the two greedy algorithms can be bested by estimates based on marriage lemmas of several flavors (basic, symmetric, regular, and diregular stochastic). This should create some appreciation of the remarkable bound that one obtains by using the tools of linear programming.

3.3 Linear Programming Bounds.

The province of linear programming is the set of problems that can be formulated in terms of the minimization of a linear function subject to constraints:

minimize    Σ_{j=1}^n c_j x_j

subject to  Σ_{j=1}^n a_ij x_j = b_i,   i = 1, 2, ..., m          (LP)

            x_j ≥ 0,   j = 1, 2, ..., n.

The assignment problem is easily cast in this framework. We first consider the problem:
minimize    Σ_{i,j} c_ij x_ij

subject to  Σ_{j=1}^n x_ij = 1,   i = 1, 2, ..., n                (AP)

            Σ_{i=1}^n x_ij = 1,   j = 1, 2, ..., n,

and x_ij ≥ 0.

If we now further require the x_ij to be integers, it is clear that the linear programming problem AP is an appropriate reformulation of the assignment problem. Without that extra integrality constraint it is a priori conceivable that the optimal solution to AP is not a zero-one incidence vector. It turns out that we are actually in luck and the vertices of the convex polytope determined by the constraints of AP are always integral. We will not digress on the details here, but this fact follows from a beautiful (but elementary) result from the theory of polyhedra: If A is the vertex-edge incidence matrix of a bipartite graph



G, then every square submatrix of A has determinant 0, 1, or -1.
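A small numerical illustration may be welcome here. The sketch below is ours (Python, assuming SciPy); it solves the relaxation AP with linprog for a random cost matrix, checks that the optimal vertex is indeed a zero-one incidence vector, and confirms that its value agrees with the exact combinatorial optimum.

    import numpy as np
    from scipy.optimize import linprog, linear_sum_assignment

    n = 6
    rng = np.random.default_rng(4)
    c = rng.random((n, n))

    # Equality constraints of AP: each row and column of (x_ij) sums to 1.
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row-sum constraint for job i
        A_eq[n + i, i::n] = 1.0            # column-sum constraint for machine i
    b_eq = np.ones(2 * n)

    res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    x = res.x.reshape(n, n)

    rows, cols = linear_sum_assignment(c)
    print("LP value:", res.fun, " exact value:", c[rows, cols].sum())
    print("vertex is zero-one:", bool(np.allclose(x, np.round(x))))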

Dyer, Frieze, McDiarmid Inequality

What we need is a way to bound the expectation of the value z*(c_1, c_2, ..., c_n) of the linear programming problem LP, where the cost coefficients c_1, c_2, ..., c_n are nonnegative random variables, but where the constraint coefficients a_ij, 1 ≤ i ≤ m, 1 ≤ j ≤ n, and constraint levels b_i, 1 ≤ i ≤ m, are fixed constants.
The bound on Ez* is computed in terms of a fixed feasible solution x̄_j, 1 ≤ j ≤ n, where by "fixed" we mean that the x̄_j are determined only by consideration of (a_ij) and (b_i). Thus, the vector (x̄_j) is not permitted to depend on the specific realization of the random cost coefficients (c_j).
Theorem (Dyer, Frieze, McDiarmid). If c_j, 1 ≤ j ≤ n, are independent and uniformly distributed on [0, 1], and x̄_j, 1 ≤ j ≤ n, is a fixed feasible solution of LP, then the value z* of the optimal solution satisfies

Ez* ≤ m max{x̄_j : 1 ≤ j ≤ n}.                                    (DFM)

Application to the Assignment Problem
Nothing could be simpler than to apply the Dyer, Frieze, McDiarmid inequality to the assignment problem AP. We first note that we have 2n nontrivial constraints, so m = 2n. Further, if we let x̄_ij = 1/n for all 1 ≤ i, j ≤ n, then (x̄_ij) is clearly a feasible solution. Hence, the right hand side of DFM works out to be just 2, and the proof of Karp's bound is complete. The method that underlies the proof of the DFM inequality is extremely general and far from being exhausted. The key idea, "conditioning on a basis," was evident already in Karp (1987), but the understanding of the method was greatly advanced through the generalizations given in Dyer, Frieze, and McDiarmid (1986) and the related paper McDiarmid (1986).
In order to review the DFM technique it will be useful to have at hand the basics of the simplex algorithm of linear programming. Probabilists and statisticians who are familiar with traditional undergraduate expositions of the simplex algorithm may be relieved to see how succinct a reformulation is possible.

Blitzkrieg Simplex Theory
The simplex method for solving linear programming problems (like LP) can be boiled down to the fact that one can rewrite LP in a form that either suggests a further reformulation or else renders the solution obvious. It takes only a few lines to give a useful and rigorous development of this claim.
First, we rephrase LP in tidier matrix form, so it now reads

minimize z = cx

subject to Ax = b

x ≥ 0.

Here, of course, A is an m × n matrix, x is an n-dimensional column vector, b is an m-




dimensional column vector, and c is an n-dimensional row vector. We will sometimes need to write A = [a_1, a_2, ..., a_n], where a_j denotes the j'th column of A.

A fundamental construct underlying the reformulation is the notion of a basis. This is just an index subset B ⊆ {1, 2, ..., n} of cardinality m with the property that the corresponding columns of A are linearly independent. By custom, the variables x_i such that i ∈ B are called the basic variables. We will denote the complement of B by N and call the x_i such that i ∈ N the nonbasic variables.
The first step of our reformulation takes us from the equation Ax = b to one that expresses the basic variables as a linear combination of the nonbasic variables and the constraint vector b. Thus, separating x and A into components consisting of the basic and nonbasic variables and collecting these on different sides of the equation, we write

A_B x_B = b - A_N x_N,

where A_B denotes the matrix consisting of the columns {a_i : i ∈ B}. Other subscripted variables are defined analogously.
It is useful to have an expression for x_B; and, since our definition of the basis requires the linear independence of the columns of the square matrix A_B, we can write

x_B = A_B^{-1} b - A_B^{-1} A_N x_N.

Next we express the objective function z = cx in a form that reflects the division into basic and nonbasic variables. Certainly, we can write z = c_B x_B + c_N x_N; but, by using our formula for x_B, we can clear away the basic variables entirely and write the original problem LP in the very useful dictionary form:

x_B = A_B^{-1} b - A_B^{-1} A_N x_N                               (DF)

z = c_B A_B^{-1} b + (c_N - c_B A_B^{-1} A_N) x_N.

We are now in position to see the point of this maneuvering. Suppose we are lucky and all the coefficients of the nonbasic variables in the formula for z in DF are nonnegative, i.e. suppose that

c_j ≥ c_B A_B^{-1} a_j   for all j ∈ N.                           (OC)

In that case, the formula for z given by the second row of the dictionary makes it clear that one can minimize z over all choices of x_N ≥ 0 by letting x_N = 0. Finally, for x = (x_N, x_B) given by x_N = 0 and x_B = A_B^{-1} b to be a genuine optimal solution to the problem LP we only need to check the second part of feasibility, x_B ≥ 0.
We will call OC the optimality criterion for the linear program LP. For our theoretical needs, the key insight behind the simplex method is that it gives an explicit condition for a basis to correspond to an optimal solution.
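The criterion is also pleasantly mechanical to verify. The following sketch is ours (Python with NumPy; the tiny example problem is invented for illustration): given a basis B, it forms the dictionary quantities and tests both feasibility and OC.

    import numpy as np

    def check_basis(A, b, c, B):
        """Form the dictionary for basis B and test the criterion OC."""
        m, n = A.shape
        N = [j for j in range(n) if j not in B]
        A_B, A_N = A[:, B], A[:, N]
        x_B = np.linalg.solve(A_B, b)        # basic solution (x_N = 0)
        y = c[B] @ np.linalg.inv(A_B)        # the row vector c_B A_B^{-1}
        reduced = c[N] - y @ A_N             # c_j - c_B A_B^{-1} a_j, j in N
        return np.all(x_B >= 0) and np.all(reduced >= 0)

    # Minimize x_1 + 2 x_2 subject to x_1 + x_2 = 1, x >= 0.
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([1.0, 2.0])
    print(check_basis(A, b, c, [0]))   # the basis with the first variable: True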
Naturally, for this criterion to be useful algorithmically, more work is needed. For example, if one of the nonbasic variables x_i has a negative coefficient in the second equation



of DF, we need to rewrite DF with x_i as a basic variable. Since the basis B has exactly m elements, we therefore need to move some variable x_l from B to N to make room for x_i. The rules for such changing of the basis are part of the fundamentals of linear programming, but they need not concern us. The only additional facts we need concerning linear programming are those that support the probabilistic use of the stopping rule OC.
The first result is a classic existence theorem that tells us there is an optimal basic solution whenever the problem LP is feasible and bounded; i.e., in all the cases that matter, there is an optimal solution that satisfies OC. The second result tells us that in the cases of interest to us the optimal basic solution is unique. More precisely, we call on the following two results:

Basis Existence Theorem. If a linear program in the form LP has an optimal solution, then it has an optimal basic solution, i.e. a solution of the form x = (x_N, x_B) where B is a basis and x_N = 0.

Basis Uniqueness Theorem. If a linear program in the form LP has an optimal solution, then for any δ > 0 there is an ε with |ε| ≤ δ such that the corresponding problem with cost vector c_ε = c + ε has a unique optimal basic solution.

Sketch of the Conditioning Argument
We now have at hand everything needed to give a "one line" sketch of the proof of the DFM inequality. Let x̄ be a fixed feasible solution and write down the conditional expectation of z* given that B is the optimal basis. By the optimality criterion OC and linearity, this expectation ends up depending on the calculation of the conditional expectation of a uniformly distributed random variable conditional on its being larger than a given value. This is an easy calculation. Finally, one just does honest arithmetic on the resulting identity, collecting the expectations of interest on one side and then using the crudest of bounds on the rest.
Naturally, this sketch is far from complete, but study of the paper of Dyer, Frieze, and McDiarmid (1986) will show that it is nevertheless honest. The crudeness of the sketch may help to show that a great deal of unexhausted potential remains in Karp's idea of "conditioning on a basis."

4. Randomized Algorithm for Matching.

So far we have discussed how one can analyze the behavior of the objective function of an optimization problem if one imposes a probability model on the inputs. There is another way that probability comes into the theory of algorithms: sometimes it is beneficial to make randomized steps in the course of an algorithm. The best known algorithms with this feature are probably those based on "Simulated Annealing" as initiated by Kirkpatrick, Gelatt, and Vecchi (1983). The simulated annealing paradigm is especially intriguing because it is rather easy to implement, and it applies to almost any problem. One drawback to simulated annealing is that it is not easy to analyze, and the analytical results that are available are of a qualitative nature. Typically, we can say that simulated annealing will converge, but we cannot say when.
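To make the paradigm concrete, here is a minimal sketch (ours; the transposition neighborhood and the geometric cooling schedule are illustrative choices, not recommendations from the literature) of simulated annealing applied to the assignment problem itself.

    import math
    import random

    def anneal_assignment(c, steps=200_000, T0=1.0, alpha=0.9999, seed=6):
        """Anneal over permutations pi to reduce sum_i c[i][pi[i]]:
        propose a random transposition, accept by the Metropolis rule."""
        rng = random.Random(seed)
        n = len(c)
        pi = list(range(n))
        cost = sum(c[i][pi[i]] for i in range(n))
        T = T0
        for _ in range(steps):
            i, j = rng.randrange(n), rng.randrange(n)
            delta = (c[i][pi[j]] + c[j][pi[i]]) - (c[i][pi[i]] + c[j][pi[j]])
            if delta < 0 or rng.random() < math.exp(-delta / T):
                pi[i], pi[j] = pi[j], pi[i]    # accept the swap
                cost += delta
            T *= alpha                         # geometric cooling
        return pi, cost

    rng = random.Random(7)
    n = 50
    c = [[rng.random() for _ in range(n)] for _ in range(n)]
    print(anneal_assignment(c)[1])

Running such a scheme long enough typically brings the cost near the optimum, but, in keeping with the remarks above, nothing in the recipe tells us how long is long enough.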



Sasaki and Hajek (1988) studied simulated annealing in the context of Euclidean matching, and their analysis provides a rare look at how simulated annealing performs on a problem for which we have good polynomial time algorithms. The conclusion of their analysis is that simulated annealing does not even lead to a polynomial time algorithm for the matching problem, much less to an algorithm that is competitive with the earlier methods. This analysis provides perhaps the strongest negative result in the theory of simulated annealing.
An interesting contrast to the failings of simulated annealing for minimal matching is that other randomized algorithms for the problem actually provide the fastest known practical methods. One such randomized method, due to Mulmuley, Vazirani, and Vazirani, is particularly attractive because it is also well suited for implementation on parallel machines (c.f. Lovász and Plummer (1986)). The MVV method is a bit involved to detail here; but the essential idea is to examine what can be learned about matchings by considering certain determinants (or more precisely, Pfaffians) in indeterminate variables. Now, determinants in indeterminate variables would seem to be computationally unattractive since they would seem a priori to require time and space of order n!.
There are several tricks that help one get around the apparent difficulty of computing with indeterminate variables. First, it turns out to be possible to take advantage of the fact that one can evaluate determinants in "real" variables in time of order only n³ (where n is the number of rows of the matrix). Second, if we replace the indeterminate variables with independent random uniform [0, 1] variables, then with probability one a polynomial in these variables evaluates to zero if and only if the polynomial is identically zero. In other words, as far as the presence or absence of polynomial identities is concerned, independent random uniforms can be used to create surrogates for indeterminate variables.
The last idea is that for N sufficiently large, a random choice from {1, 2, ..., N} can be used to capture the qualitative features of a random uniform on [0, 1]. The following technical lemma is the tool one uses to provide a rigorous analysis of algorithms based on these ideas.

4.1 Schwartz Lemma.

The lemma in question has nothing to do with the well known lemma on conformal mappings, but rather is a tool that helps make computational sense out of arguments that might otherwise seem to call on the use of real numbers (a computational taboo). Although we could skip the proof, it is brief and contains a charming induction.

Schwartz's Lemma. Suppose f(x_1, x_2, ..., x_n) is a nonzero polynomial of degree not greater than k. If X_i, 1 ≤ i ≤ n, are independent random variables uniformly distributed on the finite set {1, 2, ..., N}, then

P(f(X_1, X_2, ..., X_n) = 0) ≤ kn/N.



Proof. Without loss of generality, we can assume that f can be written as a polynomial in the one variable x_1 but with coefficients that are polynomials in the variables x_2, x_3, ..., x_n:

f(x_1, x_2, ..., x_n) = f_0(x_2, x_3, ..., x_n) + x_1 f_1(x_2, x_3, ..., x_n) + ... + x_1^k f_k(x_2, x_3, ..., x_n).

If A = {f(X_1, X_2, ..., X_n) = 0} and B = {f_i(X_2, X_3, ..., X_n) = 0 for all 0 ≤ i ≤ k}, then

P(A) ≤ P(B) + P(A | B^c)

     ≤ k(n - 1)/N + k/N,

where we have applied induction to bound P(B), and the bound on P(A | B^c) follows from the fact that when we hold x_2, x_3, ..., x_n fixed and vary x_1, then f can have at most k roots. From our bound and induction on n, the lemma is thus proved.
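To tie the lemma back to the matching algorithms that motivated it, here is a sketch (ours; Python, with exact arithmetic modulo a large prime p standing in for the "real" numbers) of the basic randomized test for a perfect matching in a bipartite graph. The determinant of the matrix below is a polynomial of degree at most n in the indeterminate entries, and it is identically zero exactly when no perfect matching exists; by the lemma, a random substitution produces a false zero with probability at most n/(p - 1) per trial.

    import random

    def det_mod_p(M, p):
        """Determinant of a square integer matrix modulo a prime p,
        computed exactly by Gaussian elimination."""
        M = [row[:] for row in M]
        n, det = len(M), 1
        for k in range(n):
            pivot = next((r for r in range(k, n) if M[r][k] % p), None)
            if pivot is None:
                return 0
            if pivot != k:
                M[k], M[pivot] = M[pivot], M[k]
                det = -det
            det = det * M[k][k] % p
            inv = pow(M[k][k], p - 2, p)       # inverse mod p (p prime)
            for r in range(k + 1, n):
                f = M[r][k] * inv % p
                for j in range(k, n):
                    M[r][j] = (M[r][j] - f * M[k][j]) % p
        return det % p

    def has_perfect_matching(adj, p=2_147_483_647, trials=5):
        """adj[i][j] = True when job i is compatible with machine j."""
        n = len(adj)
        for _ in range(trials):
            M = [[random.randrange(1, p) if adj[i][j] else 0
                  for j in range(n)] for i in range(n)]
            if det_mod_p(M, p) != 0:
                return True        # a nonzero determinant is a certificate
        return False               # wrong with prob. <= (n/(p-1))^trials

    print(has_perfect_matching([[True, True], [True, False]]))   # True
    print(has_perfect_matching([[True, False], [True, False]]))  # False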

5. Euclidean Matching and Assigning

For any set {z_1, z_2, ..., z_n} of n points in the Euclidean plane, a matching is a collection of ⌊n/2⌋ disjoint pairs of points. By the weight of a matching we will denote the sum of the Euclidean distances between the elements of the pairs in the matching. The algorithms for finding a minimal weight matching are among the most studied in the theory of combinatorial optimization. The problem is a formally tractable one (i.e. there are polynomial time algorithms), but the deterministic algorithms for minimal matching are not particularly fast, and practical problems regularly arise that outstrip available computational resources.
The most widely implemented deterministic methods for finding a minimal weight matching have running times of order n³. There are algorithms that are faster than this. In fact, one can show that minimal matching is of the same theoretical complexity as matrix multiplication, so one can perform minimal matching at least as fast as O(n^{log₂ 7}), recalling Strassen's fast matrix multiplication method. So far, these newer deterministic methods have not excelled in practice, so probabilistic and heuristic algorithms remain of considerable interest.
Parallel to our analysis of the assignment problem proper, we can give an appealing heuristic for the Euclidean matching problem by proceeding myopically. In this case the resulting globally greedy algorithm successively matches the two closest unmatched points. Even naive implementations of this greedy heuristic are faster than traditional matching algorithms, but by making use of specialized data structures Bentley and Saxe (1980) provided an implementation of the greedy algorithm which has a worst case running time of O(n^{3/2} log n), which is better than any known exact algorithm for determining a minimal matching. A naive version is sketched below.
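The following sketch (ours, Python with NumPy) gives the naive quadratic implementation; the normalization by √n anticipates the limit results discussed next.

    import numpy as np

    def greedy_matching_cost(pts):
        """Globally greedy matching: repeatedly join the two closest
        unmatched points (naive version, not Bentley-Saxe)."""
        n = len(pts)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        pairs = sorted((d[i, j], i, j)
                       for i in range(n) for j in range(i + 1, n))
        matched, cost = set(), 0.0
        for w, i, j in pairs:
            if i not in matched and j not in matched:
                matched.update((i, j))
                cost += w
        return cost

    rng = np.random.default_rng(5)
    for n in (100, 400, 1600):
        pts = rng.random((n, 2))
        print(n, greedy_matching_cost(pts) / np.sqrt(n))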
Let G_n and O_n denote the respective weights of the greedy and optimal matchings of an n-set {z_1, ..., z_n} ⊂ R². There are several known relations between G_n and O_n. In




particular there is a ratio bound due to Reingold and Tarjan (1981), the order of which cannot be improved.
In Papadimitriou (1978), the behavior of O_n was considered for points {z_1, z_2, ..., z_n} which were chosen at random from [0, 1]², and it was outlined how the techniques used in Beardwood, Halton, and Hammersley (1959) to study the traveling salesman problem could be modified to prove

O_n / n^{1/2} → c > 0 almost surely as n → ∞.

Avis, Davis, and Steele (1988) considered the corresponding issues for the greedy algorithm and proved results that imply that for G_n, the cost of the matching obtained by the greedy heuristic, one has:

G_n / n^{1/2} → c' > 0 almost surely as n → ∞.

The results of Avis, Davis, and Steele (1988) actually provide for so-called complete convergence, a mode of convergence that is stronger than convergence with probability one and that is less sensitive to modelling considerations when applied. The methods used in Avis, Davis, and Steele also apply to random variables with general distributions in R^d. In R^d the variables G_n are asymptotic to n^{(d-1)/d}.

Other Euclidean Matching Problems
There are several variants of the Euclidean matching problem that have interesting connections with other problems of the theory of algorithms. One of these is the two-sample matching problem studied in Ajtai, Komlós, and Tusnády (1984). A second development concerns the connection between bin packing and "up-right" matching. The latter problem is well discussed in Shor (1986), where one finds a discussion of several matching variants and related literature.

6. Advances via Statistical Mechanics.

Simulated annealing had its origin in statistical mechanics, and the design of its earliest applications rested on considerations of physical analogy. While many now regard simulated annealing as just one among many possible stochastic relaxations of the greedy algorithm, the insights that statistical mechanics can provide to the theory of optimization are far from exhausted.
Mézard and Parisi (1987, 1988) have in fact made remarkable use of ideas from the theory of spin glasses to make important progress on the assignment problem. Although this work proceeds within a framework that is not easily linked with that used here, these authors provide interesting arguments for the limit relation

EA_n → π²/6.



They also give a derivation of a corresponding (though somewhat less explicit) limit relation for the Euclidean costs O_n. The explicitness of these results is almost shocking when one considers the energy that has been put into the upper bounds on EA_n that we have reviewed and the corresponding hard work that has gone into proving lower bounds (see Lazarus (1979) and especially Goemans and Kodialam (1989)).
These methods of Mézard and Parisi offer many directions for further study, and they surely suggest that the ideas of statistical mechanics are likely to feature prominently in subsequent work in combinatorial optimization.

7. Concluding Remarks.

We began with the analysis of the assignment problem under the uniform model, and we saw how several useful techniques fared in action. After the simple analysis of two greedy algorithms, we saw that a deeper understanding of the matching problem was possible using variations on the Marriage Lemma. Before leaving the uniform model we finally saw that remarkable fresh insights could be won by use of linear programming tools, and our sketch of the proof of the Dyer, Frieze, McDiarmid theorem motivated a blitzkrieg derivation of the simplex method.
Afterwards, we looked more broadly at matching. In particular, we reviewed a probabilistic algorithm that is solidly practical and examined a sampling of probabilistic results that are available under the Euclidean model. Finally, we looked at what the ideas of statistical mechanics can contribute to matching and the assignment problem. This led us to consider the method of simulated annealing and the remarkable developments of Mézard and Parisi.
Clearly, more was said here about the service that probability provides to computer science than about the service that statistics provides. To some extent this reflects the influence of the assignment problem as a vehicle of exposition. Nevertheless, something is also revealed about the three disciplines. While there have been some striking applications of statistics (or more broadly data analysis) in computer science, the influence of statistics seems to trail that of probability.
To round out the story just a bit, we recall two illustrative applications of data analysis in computer science. The first is Bentley's use of data analytic considerations in the art of practical code speedups, c.f. Bentley (1982). The second is Colin Mallows' masterful use of data analysis in the solution of John Conway's challenge problem, c.f. Mallows (1989). Even briefly noted, these examples serve to recall that real statistics can also make compelling contributions to computer science.
Perhaps the most remembered suggestion made at the FSU 30th Birthday Party ("Statistics Days at FSU") was Thomas Cover's proposal that such happy events roll across the nation as different departments celebrate historical milestones. While one might question the suitability of birthday gifts on such occasions, a natural choice might be the soon-to-appear monograph Probability Theory of Combinatorial Optimization, Steele (1991). This volume treats in greater depth almost all of the topics surveyed here.



ACKNOWLEDGEMENT
This research has been supported in part by the following grants: ARO DAAL03-89-G-0092.P0001, NSF DMS-8812868, AFOSR-89-0301.A, and NSA MDA-904-89-H-2034. I would also like to thank Professors Anjani Jain and Abba Krieger for their comments on an earlier draft of this survey.

BIBLIOGRAPHY

Ajtai, M., Komlós, J., and Tusnády, G. (1984). "On optimal matchings," Combinatorica, 4, 259-264.

Avis, D., Davis, D., and Steele, J.M. (1988). "Probabilistic analysis of a greedy heuristic for Euclidean matching," Probability in the Engineering and Information Sciences, 2, 143-156.

Beardwood, J., Halton, J.H., and Hammersley, J.M. (1959). "The shortest path through many points," Proc. Cambridge Philos. Soc., 55, 299-327.

Bentley, J.L. (1982). Writing Efficient Programs, Prentice-Hall, Englewood Cliffs, NJ.

Bentley, J.L. and Saxe, J.B. (1980). "Decomposable searching problems I: static to dynamic transformations," J. Algorithms, 1, 301-358.

Borovkov, A.A. (1962). "A probabilistic formulation of two economic problems," Soviet Math., 3, 1403-1406.

Donath, W.E. (1969). "Algorithm and average-value bounds for assignment problems," IBM J. Res. Develop., 13, 380-386.

Dyer, M.E., Frieze, A.M., and McDiarmid, C.J.H. (1986). "On linear programs with random costs," Math. Programming, 35.

Frenk, J.B.G., Van Houweninge, M., and Rinnooy Kan, A.H.G. (1986). "Order statistics and the linear assignment problem," Technical Report 886091A, Econometric Institute, Erasmus University Rotterdam, The Netherlands.

Frenk, J.B.G., Van Houweninge, M., and Rinnooy Kan, A.H.G. (1982). "Asymptotic properties of assignment problems," Technical Report 8823310, Econometric Institute, Erasmus University Rotterdam, The Netherlands.

Goemans, M.X. and Kodialam, M.S. (1989). "A Lower Bound on the Expected Cost of an Optimal Assignment," Technical Report, M.I.T. Operations Research Center, Cambridge, MA.

Hall, P. (1935). "On representatives of subsets," J. London Math. Soc., 10, 26-30.

Halmos, P. and Vaughan, H.E. (1950). "The marriage problem," Amer. J. Math., 72, 214-215.

Karp, R.M. (1987). "An upper bound on the expected cost of an optimal assignment," in Discrete Algorithms and Complexity: Proceedings of the Japan-US Joint Seminar (D. Johnson et al., eds.), Academic Press, New York, NY.

Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P. (1983). "Optimization by simulated annealing," Science, 220, 671-680.

Kurtzberg, J.M. (1962). "On approximation methods for the assignment problem," J. ACM, 9, 419-439.



Lazarus, A. (1979). "The Assignment Problem with Uniform (0, 1) Cost Matrix," Unpublished B.A. Thesis, Department of Mathematics, Princeton University, Princeton, NJ.

Lovász, L. and Plummer, M.D. (1986). Matching Theory, North-Holland Mathematical Studies, New York.

Mallows, C. (1989). "Conway's challenge problem," Technical report, AT&T Bell Laboratories, Murray Hill, NJ.

Mézard, M. and Parisi, G. (1987). "On the solution of the random link matching problem," J. Physique, 48, 1451-1459.

Mézard, M. and Parisi, G. (1988). "The Euclidean matching problem," J. Phys. France, 49, 2019-2025.

Mézard, M., Parisi, G., and Virasoro, M.A. (1989). Spin Glass Theory and Beyond, Lecture Notes in Physics, 9, World Scientific, Singapore.

McDiarmid, C. (1986). "On the greedy algorithm with random costs," Mathematical Programming, 36, 245-255.

Reingold, E.M. and Tarjan, R.E. (1981). "On the greedy heuristic for complete matching," SIAM J. Comput., 10, 676-681.

Papadimitriou, C.H. (1978). "The probabilistic analysis of matching heuristics," Proc. 15th Annual Conference Comm. Contr. Computing, Univ. Illinois, Champaign, IL.

Shor, P.W. (1986). "The average case analysis of some on-line algorithms for bin packing," Combinatorica, 6, 179-200.

Sasaki, G.H. and Hajek, B. (1988). "The Time Complexity of Simulated Annealing," J. Assoc. Comput. Mach., 35, 387-403.

Steele, J.M. (1991). Probability Theory of Combinatorial Optimization, Wadsworth Advanced Books, Carmel, CA.

Walkup, D.W. (1980). "Matchings in random regular bipartite digraphs," Discrete Math., 31, 59-64.

Walkup, D.W. (1979). "On the expected value of a random assignment problem," SIAM J. on Computing, 8, 440-442.

Weyl, H. (1949). "Almost periodic invariant vector sets in a metric vector space," Amer. J. Math., 71, 178205.

Received April 1990; Revised September 1990.

Recommended by P. E. Lin, The Florida State University, Tallahassee, FL.

Refereed by Robert J. Serfling, Johns Hopkins University, Baltimore, MD.