The Annals of Probability
1982, Vol. 10, No. 3, 548-553

OPTIMAL TRIANGULATION OF RANDOM SAMPLES
IN THE PLANE

BY J. MICHAEL STEELE¹

Stanford University

Let $T_n$ denote the length of the minimal triangulation of $n$ points chosen independently and uniformly from the unit square. It is proved that $T_n/\sqrt{n}$ converges almost surely to a positive constant. This settles a conjecture of György Turán.

1. Introduction. Triangulation of finite sets occurs naturally in several areas of mathematical analysis, and the object of this article is to take some beginning steps in the probabilistic study of the triangulation process. The main result obtained is a proof of a conjecture of György Turán (1980) on the rate of growth of the minimal triangulation of $n$ points independently and uniformly distributed in the unit square.
Turán's conjecture was initially motivated by an envisioned analogue to Karp's probabilistic algorithm for the Traveling Salesman Problem (Karp, 1977). This in turn was motivated in part by the relatively recent development of practical fixed point algorithms. (See, for example, Karamardian, 1977.)
In view of the apparently special nature of the process studied here, it seems worth noting that the method employed can be used in several problems which deal with growth rates in geometric probability. The present technique will hopefully form a useful complement to the theory of subadditive Euclidean functionals (Steele, 1980).
To state the minimization problem precisely, first let $S = \{x_1, x_2, \ldots, x_n\}$ denote a finite set of points $x_i \in \mathbb{R}^2$. By a triangulation of $S$, we will mean a decomposition of the convex hull of $S$ into triangles such that each $x_i \in S$ is a vertex of some triangle. It is not required that each vertex of the triangulation necessarily be an element of $S$.
By the length of a triangulation we mean the sum of the lengths of all of the edges in the triangulation. The central quantity of interest here is $T(x_1, x_2, \ldots, x_n)$, which denotes the minimum length over all possible triangulations of $S$.
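The length functional is elementary to evaluate once a triangulation is given explicitly. The following Python sketch (an illustration added here, not part of the original analysis; the list-of-triangles input format is an arbitrary choice) computes the length of a triangulation, counting each edge shared by two triangles only once, as in the definition above.

```python
import math
from itertools import combinations

def triangulation_length(triangles):
    """Length of a triangulation given as a list of vertex triples.

    Each triangle is a tuple of three (x, y) points; an edge shared by
    two triangles is counted only once, as in the definition of T.
    """
    edges = set()
    for tri in triangles:
        for p, q in combinations(tri, 2):
            edges.add(frozenset((p, q)))
    return sum(math.dist(p, q) for p, q in edges)
```

For instance, splitting the unit square along one diagonal gives a two-triangle triangulation of length $4 + \sqrt{2}$, the boundary plus the diagonal.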
The following result, originally conjectured by György Turán, will be proved in the next two sections.

THEOREM 1. If $T_n = T(X_1, X_2, \ldots, X_n)$ where the $X_i$, $1 \le i < \infty$, are independent and uniformly distributed in $[0,1]^2$, then

$$\lim_{n\to\infty} T_n/\sqrt{n} = \beta$$

with probability one for some constant $\beta > 0$.

In the fourth section, the assumption of uniform distribution is relaxed, and some special results are obtained. The section also mentions several open problems.

2. Mean asymptotics. To establish a limit as in Theorem 1 it is almost essential to first show $ET_n \sim \beta\sqrt{n}$. This will be done here by means of a Poisson "smoothing" and a Tauberian argument. By $\Pi$ we denote a planar Poisson point process with intensity 1, so

Received June 1980; revised May 1981.
¹Research supported in part by NSF Grant No. MCS 78-07736.
AMS 1980 subject classifications. Primary 60F15, 60D05; secondary 68C05, 68E10.
Key words and phrases. Triangulation, probabilistic algorithm, subadditive Euclidean functionals, jackknife, Efron-Stein inequality.


for each Borel $A \subset \mathbb{R}^2$, $\Pi(A)$ is a set of points uniformly distributed in $A$. Further, the cardinality $|\Pi(A)|$ is a Poisson random variable with parameter $\lambda(A)$, the Lebesgue measure of $A$.

LEMMA 2.1. The expectation $\phi(t) = E(T(\Pi[0,t]^2))$ satisfies

$$\phi(t) \le m^2\phi(t/m) + Cmt\log t$$

for some constant $C$, all $m \in \mathbb{Z}^+$ and $t \in (3, \infty)$.

PROOF. Let $[0,1]^2$ be decomposed into $m^2$ cells $Q_j$, $1 \le j \le m^2$, of edge length $1/m$, and set $tQ_j = \{x : x = ty,\ y \in Q_j\}$. Also, for any set of points $A \subset \mathbb{R}^2$ let $e(A)$ denote the cardinality of the set of extreme points of the convex hull of $A$. The main observation is that

(2.1) $\quad T(\Pi[0,t]^2) \le \sum_{j=1}^{m^2} T(\Pi(tQ_j)) + 4\cdot 2^{1/2} m^{-1} t \sum_{j=1}^{m^2} e(\Pi(tQ_j)) + 2(m+1)t.$

To establish (2.1) we note that a triangulation of $\Pi[0,t]^2$ can be obtained by taking the minimal triangulations of each $\Pi(tQ_j)$, and then completing these to a general triangulation as follows:
(a) Adjoin each of the boundaries $\partial(tQ_j)$, $1 \le j \le m^2$, and thus add a total length of $2(m+1)t$, and
(b) extend the triangulation of $\Pi(tQ_j)$ to a triangulation of $\Pi(tQ_j) \cup \{\text{extreme points of } tQ_j\}$. This second action has a total cost bounded by $4\cdot(2^{1/2}/m)\cdot t\cdot e(\Pi(tQ_j))$, since there are certainly no more than $4e(\Pi(tQ_j))$ lines of length (uniformly) bounded by $t\cdot 2^{1/2}/m$ which are necessary to complete the triangulation within $tQ_j$.
We now take expectations in (2.1) and note that by Euclidean translation invariance

$$ET(\Pi(tQ_j)) = ET(\Pi[0, t/m]^2) = \phi(t/m), \quad 1 \le j \le m^2.$$
If the $X_i$ are independent and uniformly distributed in $[0,1]^2$, Rényi and Sulanke (1963) proved that

(2.2) $\quad E\,e(X_1, X_2, \ldots, X_n) \le A \log n, \quad n \ge 3,$ for a constant $A$.

for a constant A. By elementary bounds on the Poisson distribution, this shows Ee(I1[0, tl)  A' log t for all t k 3, and some constant A'. Returning to (2.1) with these bounds, one completes the proof of Lemma 2.1.

LEMMA 2.2. If a nondecreasing function $\phi(t)$ satisfies
(2.3) $\quad \phi(t) \le m^2\phi(t/m) + Cmt\log t$
for some $C > 0$ and all $m \in \mathbb{Z}^+$ and $t \ge t_0$, then

$$\lim_{t\to\infty} \phi(t)/t^2 = \liminf_{t\to\infty} \phi(t)/t^2 = \beta.$$

PROOF. From (2.3) we have

$$\frac{\phi(mt)}{(mt)^2} \le \frac{\phi(t)}{t^2} + \frac{C\log(mt)}{t},$$

and setting $\psi(t) = \phi(t) + t^{3/2}$ we have

(2.4) $\quad \frac{\psi(mt)}{(mt)^2} \le \frac{\psi(t)}{t^2}$

for all $m \le t^{-1}\exp(\sqrt{t}/2C)$. If we let

$$A_t = \Big\{m : \frac{\psi(mt)}{(mt)^2} \le \frac{\psi(t)}{t^2}\Big\} = \{m_1 < m_2 < \cdots < m_k < \cdots\},$$


550 J. MICHAEL STEELE

then by (2.4) applied recursively we see $A_t$ contains all integers which are of the form $2^a 3^b$ with $a, b \in \mathbb{Z}^+$. From this last observation and elementary number theory one can check that

$$\lim_{k\to\infty} m_{k+1}/m_k = 1.$$

Now, for any $\varepsilon > 0$ we can choose a $t_0$ such that

$$\psi(t_0)/t_0^2 \le \beta + \varepsilon.$$

But for any $s$, we can find $m_k$ and $m_{k+1}$ in $A_{t_0}$ such that $m_k t_0 \le s < m_{k+1} t_0$ and apply the monotonicity of $\psi$ to obtain

$$\frac{\psi(s)}{s^2} \le \frac{\psi(m_{k+1}t_0)}{(m_k t_0)^2} = \frac{\psi(m_{k+1}t_0)}{(m_{k+1}t_0)^2}\cdot\frac{m_{k+1}^2}{m_k^2} \le \frac{\psi(t_0)}{t_0^2}\cdot\frac{m_{k+1}^2}{m_k^2},$$

which yields

$$\limsup_{s\to\infty} \psi(s)/s^2 \le \beta + \varepsilon.$$

This last result implies $\psi(s) \sim \beta s^2$, and since $\phi(s) = \psi(s) - s^{3/2} = \psi(s) + o(s^2)$ the proof is complete.
All that remains is the Tauberian argument to complete the main result of this section.

LEMMA 2.3. $ET_n \sim \beta\sqrt{n}$.

PROOF. From the definition of $\phi(t)$ and the rescaling of $[0,1]^2$ to $[0,t]^2$, one can calculate that

$$\phi(t) = t\sum_{n=0}^{\infty} (ET_n)\, e^{-t^2} t^{2n}/n!.$$
The two previous lemmas can now be translated by a change of variables to
(2.5) $\quad \sum_{n=0}^{\infty} (ET_n)\, e^{-u} u^n/n! \sim \beta\sqrt{u}$ as $u \to \infty$.

By the Abelian theorem for Borel summability (Doetsch, 1943), (2.5) implies that as $x \uparrow 1$, one has

(2.6) $\quad \sum_{n=1}^{\infty} (ET_n - ET_{n-1})\, x^n \sim \beta\,\Gamma(3/2)\,(1-x)^{-1/2}.$

But now the (carefully chosen) definition of $T_n$ implies $ET_{n+1} \ge ET_n$, so the Karamata Tauberian theorem (Feller, 1971) is applicable. This means that (2.6) also implies the basic result,

$$ET_n = \sum_{k=1}^{n} \big(E(T_k) - E(T_{k-1})\big) \sim \beta\sqrt{n}.$$

3. Stabilizing variances. Rather than attempt to bound the variance of $T_n$ directly, it seems useful to introduce a close approximation whose variance is more easily bounded. We therefore consider the new random variables $T_n' = T(X_1, X_2, \ldots, X_n, (0,0), (0,1), (1,0), (1,1))$; thus, instead of just triangulating $\{X_1, X_2, \ldots, X_n\} \subset [0,1]^2$, one also includes the corners in the triangulation.
To check that $T_n'$ is close to $T_n$, one just notes that

(3.1) $\quad 0 \le T_n' - T_n \le 4\sqrt{2}\, e(X_1, X_2, \ldots, X_n) + 4 \quad \text{for } X_i \in [0,1]^2,\ 1 \le i \le n.$

The first inequality in (3.1) follows from the definition of $T_n$, and the second inequality follows from the same consideration applied in the proof of Lemma 2.1. Since in the case of $X_i$ i.i.d. uniform on $[0,1]^2$ we have $E\,e(X_1, X_2, \ldots, X_n) \le A \log n$, the inequalities in (3.1) and Lemma 2.3 give in that case the asymptotic equivalence,

(3.2) $\quad ET_n' \sim ET_n \sim \beta\sqrt{n}$ as $n \to \infty$.



The key tool in obtaining the required variance bound of this section is the Efron-Stein inequality (Efron and Stein, 1981). If $S(Y_1, Y_2, \ldots, Y_{n-1})$ is any real symmetric function of the independent identically distributed random vectors $Y_j$, $1 \le j < \infty$, we set $S_i = S(Y_1, Y_2, \ldots, Y_{i-1}, Y_{i+1}, \ldots, Y_n)$ and $S_\bullet = n^{-1}\sum_{i=1}^{n} S_i$. The Efron-Stein jackknife inequality then says

(3.3) $\quad \operatorname{Var} S(Y_1, Y_2, \ldots, Y_{n-1}) \le E\sum_{i=1}^{n} (S_i - S_\bullet)^2.$
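It may help to see (3.3) in action on a toy statistic before it is applied to $T'$. The Monte Carlo sketch below (an added illustration; the choice $S = \max$, the sample size, and all numerical parameters are arbitrary) estimates both sides of the inequality for $S = \max$ of two of three i.i.d. Uniform(0,1) variables; a direct calculation gives the exact values $\operatorname{Var} S = 1/18$ for the left side and $1/15$ for the right.

```python
import random
import statistics

def efron_stein_demo(trials=50_000, seed=1):
    """Monte Carlo check of the Efron-Stein inequality (3.3) for the
    symmetric statistic S = max on two of three i.i.d. Uniform(0,1)'s.

    Returns (lhs, rhs): the estimated Var S(Y1, Y2) and the estimated
    E sum_i (S_i - S_dot)^2, where S_i omits the i-th of Y1, Y2, Y3.
    """
    rng = random.Random(seed)
    n = 3
    s_vals, es_terms = [], []
    for _ in range(trials):
        y = [rng.random() for _ in range(n)]
        s_vals.append(max(y[:-1]))  # S(Y_1, ..., Y_{n-1})
        jack = [max(y[:i] + y[i + 1:]) for i in range(n)]  # S_i
        s_dot = sum(jack) / n  # S_.
        es_terms.append(sum((s_i - s_dot) ** 2 for s_i in jack))
    return statistics.pvariance(s_vals), statistics.fmean(es_terms)
```

With these parameters the two estimates settle near 1/18 and 1/15 respectively, with the jackknife bound on the larger side, as (3.3) requires.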

This inequality will now be applied to the random variable $T'(X_1, X_2, \ldots, X_n)$.

LEMMA 3.1. If $X_i$, $1 \le i < \infty$, are independent and identically distributed with support contained in $[0,1]^2$, then

$$\operatorname{Var} T_{n-1}' \le 6\sqrt{2}\, ET_n', \quad n \ge 2.$$

PROOF. We first note that in the Efron-Stein inequality the right side is not decreased if $S_\bullet$ is replaced by any other function of $Y_1, Y_2, \ldots, Y_n$. Applying the resulting inequality to $T_n'$ we have

(3.4) $\quad \operatorname{Var} T_{n-1}' \le E\sum_{i=1}^{n} \big(T'(X_1, X_2, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n) - T_n'\big)^2.$

There are two ways to bound the differences in (3.4). If we extend a minimal triangulation of $\{(0,0), (1,0), (0,1), (1,1), X_1, X_2, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n\}$ to a triangulation of all of $\{(0,0), (1,0), (0,1), (1,1), X_1, X_2, \ldots, X_n\}$, then an increment of at most $3\sqrt{2}$ is made. This crude bound comes from the fact that $X_i$ is contained in some triangle $\Delta$ of the first triangulation, and $X_i$ can be connected to the three vertices of $\Delta$ at a cost less than $3\sqrt{2}$.
A second way of bounding the differences in (3.4) comes from the fact that if $X_i$ is removed from a minimal triangulation $\mathcal{T}$ of the full set, the decrease in the cost is at most $\sum_{j \in I(i)} |X_i - X_j|$, where $I(i) = \{j : (X_i, X_j) \text{ is an edge of } \mathcal{T}\}$.
From the trivial inequality $\min(a^2, b^2) \le ab$ applied to the two preceding bounds and (3.4), we obtain

(3.5) $\quad \operatorname{Var} T_{n-1}' \le E\sum_{i=1}^{n} 3\sqrt{2} \sum_{j \in I(i)} |X_i - X_j|.$

The reason this is effective is that

(3.6) $\quad \sum_{i=1}^{n} \sum_{j \in I(i)} |X_i - X_j| \le 2T_n',$

because each edge of the minimal triangulation $\mathcal{T}$ appears at most twice in the double sum. From (3.5) and (3.6) the lemma follows immediately.
Now it is possible to complete the proof of Theorem 1. First recall that when the $X_i$, $1 \le i < \infty$, are independent and uniformly distributed on $[0,1]^2$ the relation (3.2), $ET_n' \sim \beta\sqrt{n}$, is valid. By Lemma 3.1, Chebyshev's inequality, and a $2\varepsilon$ argument, we then see that the probabilities $P(|T_{n_k}'/\sqrt{n_k} - \beta| \ge \varepsilon)$ are summable for $n_k = k^3$, $1 \le k < \infty$. By Markov's inequality and the Rényi-Sulanke bound (2.2), the probabilities $P(e(X_1, X_2, \ldots, X_{n_k}) \ge \varepsilon\sqrt{n_k})$ are also summable. The inequalities (3.1) therefore imply that

(3.7) $\quad \lim_{k\to\infty} T_{n_k}/\sqrt{n_k} = \beta$

with probability one. Since $T_n$ is nondecreasing and $n_{k+1}/n_k \to 1$, the subsequence limit (3.7) suffices to prove the existence of the limit in Theorem 1.
To check that $\beta$ is indeed positive we just note that

(3.8) $\quad 2T_n \ge \sum_{i=1}^{n} \min\{|X_j - X_i| : 1 \le j \le n,\ j \ne i\},$

since each $X_i$ must be joined to some other $X_j$ by an edge of the triangulation. Now an easy calculation shows that there is a constant $a > 0$ so that $E\min\{|X_j - X_i| : 1 \le j \le n,\ j \ne i\} \ge an^{-1/2}$, and hence that $ET_n \ge \beta'\sqrt{n}$ for a $\beta' > 0$.
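The nearest-neighbor sum appearing in (3.8) is straightforward to compute directly. The sketch below (an added illustration, not from the paper) evaluates it by brute force in $O(n^2)$ time.

```python
import math

def nearest_neighbor_sum(points):
    """Sum over i of min_{j != i} |X_i - X_j|, the quantity that
    bounds 2*T_n from below in (3.8); brute-force O(n^2) scan."""
    return sum(
        min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        for i, p in enumerate(points)
    )
```

For i.i.d. uniform points in the unit square, averaging `nearest_neighbor_sum` over repeated samples and dividing by $\sqrt{n}$ gives an empirical view of the constant $a$ in the lower bound above.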



4. General distributions and open problems. If one assumes only that the $X_i$, $1 \le i < \infty$, are independent and identically distributed with $E|X_i|^2 < \infty$, it still seems inevitable that $\lim_{n\to\infty} T_n/\sqrt{n}$ exists with probability one. The natural approach to such an extension, as used in the case of subadditive Euclidean functionals (Steele, 1980), does not seem to help in this context even when the $X_i$ have bounded support. The problem seems to arise from the fact that the functional $T_n = T(X_1, X_2, \ldots, X_n)$ does not have the "simple subadditivity" property.
Nevertheless, the next lemma shows that $T_n'$ does have one of the crucial properties used in extending the theory of subadditive Euclidean functionals to general distributions, the "scale boundedness" property.

LEMMA 4.1. There is a constant $C$ such that

$$T'(x_1, x_2, \ldots, x_n) \le C\sqrt{n} \quad \text{for all } n \ge 1 \text{ and } x_i \in [0,1]^2.$$

PROOF. For $n \ge 1$ one sets $\phi(n) = \max T'(x_1, x_2, \ldots, x_n)$ where the maximum is over all possible choices of $x_i \in [0,1]^2$. We also set $\phi(0) = 4$. The key observation is that

(4.1) $\quad \phi(n) \le \max_{a+b+c+d=n} \tfrac{1}{2}\big(\phi(a) + \phi(b) + \phi(c) + \phi(d)\big).$

This inequality is easily proved by decomposing $[0,1]^2$ into four quadrants, correcting for the scale of the quadrants, and being careful of the cases which leave some quadrants empty. By induction one can then prove $\phi(n) \le 30\sqrt{n}$. (In fact, $C = 2\cdot 4\cdot(2 - \sqrt{3})^{-1} \approx 29.8$ will suffice.)
As an application of the preceding lemma, one can prove the following.

THEOREM 2. If $X_i$, $1 \le i < \infty$, are independent and identically distributed with compact support, then
(4.2) $\quad \lim_{n\to\infty} T(X_1, X_2, \ldots, X_n)/\sqrt{n} = 0$ a.s.
if, and only if, the distribution of the $\{X_i\}$ is singular with respect to Lebesgue measure.

PROOF. First suppose the $X_i$ have compact support $S$ with Lebesgue measure zero, and then choose a finite cover of $S$ by closed squares $A_j$, $1 \le j \le m$, with disjoint interiors. We can further suppose that $\sum_{j=1}^{m} \lambda(A_j) \le \varepsilon$, where $\lambda(A_j)$ denotes the Lebesgue measure of $A_j$ and $|\partial A_j|$ denotes the length of its boundary $\partial A_j$.
For each $j$, let $E_j$ denote the (four) extreme points of $A_j$, and complete the set of edges $\bigcup_{j=1}^{m} \partial A_j$ to a triangulation of $\bigcup_{j=1}^{m} E_j$. Let $B$ be the total length of the resulting triangulation.
We then have the elementary bound

$$T(X_1, X_2, \ldots, X_n) \le \sum_{j=1}^{m} T\big(E_j \cup \{X_i : X_i \in A_j\}\big) + B.$$

By Lemma 4.1 with a change of scale to squares of side $\lambda(A_j)^{1/2}$ we obtain

$$T(X_1, X_2, \ldots, X_n) \le C\sum_{j=1}^{m} \lambda(A_j)^{1/2}\, \big|E_j \cup \{X_i : X_i \in A_j\}\big|^{1/2} + B,$$

so by Schwarz's inequality

$$T(X_1, X_2, \ldots, X_n) \le C\Big(\sum_{j=1}^{m} \lambda(A_j)\Big)^{1/2} (4m + n)^{1/2} + B.$$

Since $\sum_{j} \lambda(A_j) < \varepsilon$ was arbitrary, the first half of the theorem is thus proved. To prove the converse we note as before that

(4.3) $\quad 2T_n \ge \sum_{i=1}^{n} \min\{|X_i - X_j| : 1 \le j \le n,\ j \ne i\}.$

By the Lebesgue density theorem one can show (without too much difficulty) that if the $X_i$ do not have a purely singular distribution, then there is a constant $\delta > 0$ so that

(4.4) $\quad E\min\{|X_i - X_j| : 1 \le j \le n,\ j \ne i\} \ge \delta n^{-1/2}.$

But if (4.2) holds, the scale boundedness of Lemma 4.1 and the bounded convergence theorem imply $ET_n = o(\sqrt{n})$. This is incompatible with (4.4), so the theorem is proved.
The fact that $T_n/\sqrt{n} \to 0$ a.s. when the $\{X_i\}$ are singular with compact support suggests that more refined results might be possible in the singular case. This suspicion is confirmed by several special results, of which the following is typical.

THEOREM 3. If $X_i$, $1 \le i < \infty$, are independent and uniformly distributed on the unit circle, then

(4.5) $\quad T_n - ET_n = o(\log^{\alpha} n)$ a.s.

for any constant $\alpha > 1/2$, and

(4.6) $\quad \gamma_1 \log n \le ET_n \le \gamma_2 \log n,$

for some constants $0 < \gamma_1 < \gamma_2 < \infty$.

The proof of (4.5) is not difficult and can be based on the Efron-Stein inequality, which in this case shows $\operatorname{Var} T_n = O(n^{-1})$. The bounds in (4.6) are not difficult if one does not aim for good values of $\gamma_1$ and $\gamma_2$. Since it is very likely that one actually has $ET_n \sim \gamma \log n$, there is little loss in omitting the proofs of the bounds (4.6).
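When the sample lies in convex position, as it does almost surely for points on the unit circle, an optimal triangulation that uses only the sample points as vertices can be computed exactly by the classical $O(n^3)$ dynamic program for convex polygons; since the definition of $T$ permits extra (Steiner) vertices, this quantity is only an upper bound for $T_n$. The following sketch (an added illustration, not from the paper) implements that dynamic program.

```python
import math
from functools import lru_cache

def convex_triangulation_length(pts):
    """Minimum-length triangulation of points given in convex (cyclic)
    order, restricted to triangulations whose vertices are the given
    points: boundary length plus an optimal set of diagonals, found by
    the classical interval dynamic program for convex polygons."""
    n = len(pts)

    def d(i, j):
        return math.dist(pts[i], pts[j])

    @lru_cache(maxsize=None)
    def diag(i, j):
        # minimal total diagonal length to triangulate the sub-polygon
        # pts[i], pts[i+1], ..., pts[j]
        if j - i < 3:
            return 0.0
        best = math.inf
        for k in range(i + 1, j):
            cost = diag(i, k) + diag(k, j)
            if k > i + 1:            # (i, k) is a genuine diagonal
                cost += d(i, k)
            if j > k + 1:            # (k, j) is a genuine diagonal
                cost += d(k, j)
            best = min(best, cost)
        return best

    perimeter = sum(d(i, (i + 1) % n) for i in range(n))
    return perimeter + diag(0, n - 1)
```

Applying this to points drawn uniformly on the unit circle, sorted by angle, gives an empirical handle on the logarithmic growth in (4.6): the perimeter term stays near $2\pi$ while the diagonal term grows slowly with $n$.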

Acknowledgment. The author wishes to thank Joe Marhoul, Larry Shepp, and a very careful referee for many useful comments. In particular, the proof given here of Lemma 2.2 corrects an earlier error and uses an idea suggested by the referee.

REFERENCES

DOETSCH, G. (1943). Theorie und Anwendung der Laplace-Transformation, 191-192. Dover, New York.
EFRON, B. and STEIN, C. (1981). The jackknife estimate of variance. Ann. Statist. 9 586-596.
FELLER, W. (1971). An Introduction to Probability Theory and Its Applications, II, 447-448. Wiley, New York.
KARAMARDIAN, S. (1977). Fixed Points: Algorithms and Applications (in collaboration with C. B. Garcia). Academic, New York.
KARP, R. M. (1977). Probabilistic analysis of partitioning algorithms for the traveling salesman problem in the plane. Math. Oper. Res. 2 209-224.
RÉNYI, A. and SULANKE, R. (1963). Über die konvexe Hülle von n zufällig gewählten Punkten. Z. Wahrsch. Verw. Gebiete 2 75-84.
STEELE, J. M. (1980). Subadditive Euclidean functionals and nonlinear growth in geometric probability. Ann. Probab. 9 365-376.
TURÁN, G. (1980). Informal talks, Stanford-Hungarian Academy Conference, Budapest, March 1980.

STANFORD UNIVERSITY
DEPARTMENT OF STATISTICS
STANFORD, CA 94305
