The Annals of Statistics
1986, Vol. 14, No. 2, 753-758

AN EFRON-STEIN INEQUALITY FOR NONSYMMETRIC STATISTICS

BY J. M. STEELE

Princeton University

If $S(x_1, x_2, \ldots, x_n)$ is any function of $n$ variables and if $X_i, \hat X_i$, $1 \le i \le n$, are $2n$ i.i.d. random variables, then

$$\operatorname{var} S \le \tfrac{1}{2} E \sum_{i=1}^{n} (S - S_i)^2,$$

where $S = S(X_1, X_2, \ldots, X_n)$ and $S_i$ is given by replacing the $i$th observation with $\hat X_i$: $S_i = S(X_1, X_2, \ldots, \hat X_i, \ldots, X_n)$. This is applied to sharpen known variance bounds in the long common subsequence problem.

1. Introduction. In Efron and Stein (1981) the following result was established: If $S(x_1, x_2, \ldots, x_{n-1})$ is a symmetric function of $n - 1$ variables and $X_1, X_2, \ldots, X_n$ are independent, identically distributed random variables, then for $S_i = S(X_1, X_2, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n)$ and $S_\bullet = \frac{1}{n}\sum_{i=1}^{n} S_i$, one has

$$(1.1)\qquad \operatorname{var} S(X_1, X_2, \ldots, X_{n-1}) \le E \sum_{i=1}^{n} (S_i - S_\bullet)^2.$$

This inequality was motivated by a desire to understand the nature of the bias in the jackknife estimate of variance, but it has also proved useful in the probabilistic analysis of algorithms; see Steele (1981, 1982b). There have been extensions of the Efron-Stein inequality to the case where one drops out more than one observation from $S$ (Bhargava (1980)), and there have been new proofs of the result by Karlin and Rinott (1982) and Vitale (1984).

The purpose of this note is to establish an analogue to (1.1) which is not burdened by a symmetry hypothesis. It will be proved using the Hilbert space technique introduced in this context by Vitale (1984).

Finally the inequality is applied to a problem of string comparisons by means of long common subsequences, a problem considered at length in Sankoff and Kruskal (1983). The best known bound on the variance of the longest common subsequence is improved, and a new $k$-string comparison problem is introduced.

2. Main results. Let $S(x_1, x_2, \ldots, x_n)$ be any function of $n$ arguments and consider the statistics formed by

$$S = S(X_1, X_2, \ldots, X_n) \quad\text{and}\quad S_i = S(X_1, X_2, \ldots, X_{i-1}, \hat X_i, X_{i+1}, \ldots, X_n),$$

where the $X_i$ and $\hat X_i$ are $2n$

Received June 1985; revised July 1985.
¹Research partially supported by NSF Grant DMS-8414069.
AMS 1980 subject classifications. Primary 60E15; secondary 62H20.
Key words and phrases. Efron-Stein inequality, variance bounds, tensor product basis, long common subsequences.



independent random variables with the distribution $F$. In other words, the $S_i$ are formed by redrawing the $i$th datum independently and then recalculating $S$. We will prove the following inequality:

$$(2.1)\qquad \operatorname{var} S \le \tfrac{1}{2} E \sum_{i=1}^{n} (S - S_i)^2.$$

First we can check that there is no loss of generality in assuming that $ES^2 < \infty$. To do this, consider new variables which resample the first $i$ observations, so $\tilde S_i = S(\hat X_1, \hat X_2, \ldots, \hat X_{i-1}, \hat X_i, X_{i+1}, \ldots, X_n)$, where the $X_i$ and $\hat X_i$, $1 \le i \le n$, are $2n$ i.i.d. random variables. Setting $\tilde S_0 = S$, one has by Schwarz's inequality that

$$(2.2)\qquad \operatorname{var}(S) \le \tfrac{1}{2} E(S - \tilde S_n)^2 = \tfrac{1}{2} E\Big(\sum_{i=0}^{n-1} (\tilde S_i - \tilde S_{i+1})\Big)^2 \le \frac{n}{2} \sum_{i=0}^{n-1} E(\tilde S_i - \tilde S_{i+1})^2.$$

Since $\tilde S_i - \tilde S_{i+1}$ has the same distribution as $S - S_{i+1}$, we see from (2.2) that the right-hand side of (2.1) is infinite unless $\operatorname{var} S < \infty$. This shows (2.1) holds when $ES^2 = \infty$ and thus lets us focus on the case $ES^2 < \infty$.

By elementary Hilbert space theory we know that there are functions $\phi_i$ such that $\phi_0(x) \equiv 1$ and $E\phi_k(X)\phi_l(X) = \delta_{kl}$; i.e., we choose $\phi_k$, $0 \le k < \infty$, to be an orthonormal basis for $L^2(dF)$. Further, if we let $k = (k_1, k_2, \ldots, k_n)$ denote a multi-index, then the variables defined by

$$\phi_k(X) = \phi_k(X_1, X_2, \ldots, X_n) = \prod_{i=1}^{n} \phi_{k_i}(X_i)$$

are orthonormal and there are constants $c(k)$ such that

$$(2.3)\qquad S(X_1, X_2, \ldots, X_n) = \sum_k c(k)\phi_k(X)$$

holds almost everywhere. Here we have just expressed $S$ in what is sometimes called the tensor product basis for $L^2(dF\,dF \cdots dF)$. By orthonormality we have $ES^2 = \sum_k c^2(k)$ and $\operatorname{var} S = \sum_{k \ne 0} c^2(k)$. All we need now is to relate these identities to the right-hand side of (2.1).

Without loss of generality we can assume that $ES = 0$, so $c(k) = 0$ if $k = (0, 0, \ldots, 0)$. We first note that $E(S - S_i)^2 = 2ES^2 - 2ESS_i$. When $\hat X_i$ is substituted into (2.3) to give an expansion for $S_i$ we see that

$$(2.4)\qquad E(SS_i) = \sum_{k:\, k_i = 0} c^2(k),$$

since the orthonormality of the $\phi_{k_i}(X_i)$ and the independence of $X_1, X_2, \ldots, X_n$ and $\hat X_i$ cause all other summands to have expectation zero. Summing over $i$ we have

$$(2.5)\qquad \sum_{i=1}^{n} E(S - S_i)^2 = 2nES^2 - 2\sum_k c^2(k)z(k),$$

k

where $z(k) = \operatorname{card}\{i : k_i = 0\}$, i.e., $z(k)$ is equal to the number of indices $k_i$ of the multi-index $k$ which equal zero. Since we were able to assume that $c(0) = 0$, we have $z(k) \le n - 1$ for all $c(k) \ne 0$. This is a crucial observation, which applied to (2.5) gives us

n

(2.6) E (S Si)' > 2nES2

2(n 'C2 (k) = 2ES2

k

which completes the proof of the main result.
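Inequality (2.1) is easy to probe numerically. The sketch below is a Monte Carlo illustration only, not part of the paper's argument; the statistic $S$ is a toy nonsymmetric function chosen purely for demonstration, and the sample size and trial count are arbitrary.

```python
import random

def S(xs):
    # Weighted maximum: deliberately NOT symmetric in its arguments.
    return max((i + 1) * x for i, x in enumerate(xs))

def check_inequality(n=5, trials=20000, seed=1):
    rng = random.Random(seed)
    s_vals, diff_sq = [], []
    for _ in range(trials):
        x = [rng.random() for _ in range(n)]  # i.i.d. Uniform(0,1) sample
        s = S(x)
        s_vals.append(s)
        # Redraw the ith coordinate independently and recompute S,
        # exactly as in the definition of S_i.
        diff_sq.append(sum((s - S(x[:i] + [rng.random()] + x[i + 1:])) ** 2
                           for i in range(n)))
    mean = sum(s_vals) / trials
    var_s = sum((v - mean) ** 2 for v in s_vals) / trials
    bound = 0.5 * sum(diff_sq) / trials  # estimate of (1/2) E sum (S - S_i)^2
    return var_s, bound
```

With these settings the estimated variance comes out well below the estimated right-hand side, as (2.1) predicts.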

One can easily extend this result to the situation where one redraws $m$ observations, i.e., one considers statistics in which $m$ of the $X_i$ are replaced by the corresponding $\hat X_i$. We let $[m]$ denote a subset of $\{1, 2, \ldots, n\}$ of cardinality $m$ and denote by $S_{[m]}$ the statistic obtained by replacing $X_{i_j}$ by $\hat X_{i_j}$ for $i_j \in [m]$. The extension we seek for (2.1) is the following:

$$(2.7)\qquad \operatorname{var} S \le \frac{1}{2}\binom{n-1}{m-1}^{-1} E \sum_{[m]} (S - S_{[m]})^2.$$

The observations that we can assume $ES^2 < \infty$, $ES = 0$, and $S = \sum_k c(k)\phi_k(X)$ go on just as before. Now in calculating $ESS_{[m]}$ we get $\sum_{k \in G[m]} c^2(k)$, where $G[m]$ is the set of all $k$ such that $k_i = 0$ if $i \in [m]$. Hence, using $E(S - S_{[m]})^2 = 2ES^2 - 2ESS_{[m]}$ and summing over all $m$-subsets $[m]$ contained in $\{1, \ldots, n\}$, we get

$$\tfrac{1}{2} \sum_{[m]} E(S - S_{[m]})^2 = \binom{n}{m} ES^2 - \sum_k c^2(k)\binom{z(k)}{m} \ge \binom{n}{m} ES^2 - \binom{n-1}{m} \sum_k c^2(k) = \binom{n-1}{m-1} ES^2.$$

This completes the proof of inequality (2.7).

Before attending to applications it is worth recording two key remarks.

(1) The arguments $x_i$ of $S$ need not be real numbers, or for that matter even vectors, although the case of vectors is by far the most important. The proofs of (2.1) and (2.7) depend on the structure of the arguments $x_i$ only in the very shallow sense that we need $L^2(dF)$ to have a countable basis.

(2) The inequalities (2.1) and (2.7) are both sharp, as one can see from choosing $S(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} X_i$.

3. Applications to long common subsequences. A benefit of the two variance bounds of the last section is that they provide reasonably tight bounds on the variances of statistics which may be computationally difficult or even intractable. One illustration comes from the theory of string comparison.

The length of the longest common subsequence of two random strings is a statistic which arises in an amazing variety of fields, from biology to computer science (see, e.g., Sankoff and Kruskal (1983)). Formally, we consider $2n$ independent, identically distributed random variables $X_i$ and $X_i'$, $1 \le i \le n$, which take values in a finite alphabet $\mathscr{A}$, and let

$$L_n := \max\{k : X_{i_1} = X'_{j_1}, X_{i_2} = X'_{j_2}, \ldots, X_{i_k} = X'_{j_k}, \text{ where } 1 \le i_1 < i_2 < \cdots < i_k \le n \text{ and } 1 \le j_1 < j_2 < \cdots < j_k \le n\}.$$
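For readers who want to experiment, $L_n$ itself is computable by the textbook dynamic program; this is standard material, included only as a convenience, not part of the paper's argument.

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of sequences x and y,
    via the standard O(len(x)*len(y))-time, O(len(y))-space dynamic program."""
    prev = [0] * (len(y) + 1)
    for xi in x:
        curr = [0]
        for j, yj in enumerate(y, start=1):
            # Extend a match, or carry forward the best shorter solution.
            curr.append(prev[j - 1] + 1 if xi == yj else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[len(y)]
```

For example, `lcs_length("ABCBDAB", "BDCABA")` returns 4.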

Chvátal and Sankoff (1975) initiated the asymptotic study of $EL_n$, proved that $EL_n \sim cn$, and established some bounds on $c$. Much subsequent work has been done on $c$ by Deken (1979) and Gordon and Arratia (unpublished).

The most intriguing special case is that of coin-flip sequences, that is, where the $X_i$ and $X_i'$ are independent Bernoulli random variables with success probability $\frac{1}{2}$. In that case the value $c = 2/(1 + \sqrt{2})$ is consistent with all of the known bounds and computational experience. This tidy expression was put forth by Richard Arratia, and subsequently we found the following suggestive heuristic.

By a "good $k$-pair" we will denote any pair of subsequences of length $k$ of the $X_i$ and the $X_j'$ which coincide. We let $Z$ denote the total number of good $k$-pairs which can be found in the two $n$-strings of the $X_i$ and $X_j'$, $1 \le i \le n$. The expectation of $Z$ is easily determined,

$$E(Z) = \binom{n}{k}^2 2^{-k},$$

and, by considering ratios of successive choices of $k$, it is easy to see that $E(Z)$ is unimodal and the mode occurs for that integer value of $k$ nearest $n/(1 + \sqrt{2})$.
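The mode calculation can be checked exactly with integer arithmetic. The helper below is an illustration (not from the paper): it locates the maximizer of $E(Z) = \binom{n}{k}^2 2^{-k}$ by comparing consecutive terms.

```python
from math import comb

def mode_of_EZ(n):
    # E(Z_k) = C(n, k)^2 * 2^(-k). Comparing consecutive terms exactly:
    # E(Z_{k+1}) > E(Z_k)  iff  C(n, k+1)^2 > 2 * C(n, k)^2,
    # so walk k upward while the terms are still increasing.
    k = 0
    while k < n and comb(n, k + 1) ** 2 > 2 * comb(n, k) ** 2:
        k += 1
    return k
```

For $n = 50$ this gives 21, the integer nearest $50/(1 + \sqrt{2}) \approx 20.71$.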

We can also get a handle on the number of good $k$-pairs by noting that every $k$-subsequence of the longest common subsequence gives rise to a good $k$-pair, so there are at least $\binom{L_n}{k}$ of them. Now, by the usual unimodality of the binomial coefficients, as $k$ varies this sequence is unimodal with mode equal to the nearest integer to $L_n/2$. The heuristic leap of faith is that in expectation these two modes are within a distance of $o(n)$ of each other. A proof of that leap would prove that Arratia's suggested value for $c$ is the correct one.

A second interesting problem concerning $L_n$ is the conjecture of Chvátal and Sankoff (1975) that $\operatorname{var}(L_n) = o(n^{2/3})$. It was shown in Steele (1982a) that $\operatorname{var}(L_n) \le (n^{1/2} + 1)^2$. As an illustration of the power of (2.1), we can now give an easy proof of the stronger result,

$$(3.1)\qquad \operatorname{var} L_n \le n;$$

in fact we can show

$$(3.2)\qquad \operatorname{var} L_n \le n\Big(1 - \sum_a p_a^2\Big),$$

where $p_a = P(X_i = a) = P(X_i' = a)$ for $a \in \mathscr{A}$. Since $\sum_{a \in \mathscr{A}} p_a^2 > 0$, (3.2) improves on (3.1). To get (3.2) from (2.1) we consider $S = L(X_1, X_2, \ldots, X_n, X_1', X_2', \ldots, X_n')$, viewed as a (nonsymmetric) function of $2n$ variables. Changing any one of those variables will change $S$ by at most one. Moreover if, say, $X_i$ is replaced by $\hat X_i$, then $P(X_i = \hat X_i) = \sum_a p_a^2$, so $P(S = S_i) \ge \sum_a p_a^2$. These two facts give us $E(S - S_i)^2 \le 1 - \sum_a p_a^2$, and there are $2n$ such summands, so we have established (3.2). This is a long way from the conjectured $\operatorname{var} L_n = o(n^{2/3})$, but it is the sharpest known result. The ease with which it comes from (2.1) is surprising if one initially studies $L_n$ from a combinatorial perspective like that of Chvátal and Sankoff (1975).
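The gap between bound and truth is easy to see empirically. The following Monte Carlo sketch is illustrative only (the string length $n$, trial count, and seed are arbitrary choices): it estimates $\operatorname{var}(L_n)$ for fair coin-flip strings, to be compared with the bound $n(1 - \sum_a p_a^2) = n/2$ from (3.2).

```python
import random

def lcs_length(x, y):
    # Standard dynamic program for the longest common subsequence length.
    prev = [0] * (len(y) + 1)
    for xi in x:
        curr = [0]
        for j, yj in enumerate(y, start=1):
            curr.append(prev[j - 1] + 1 if xi == yj else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[len(y)]

def estimate_variance(n=15, trials=4000, seed=7):
    """Monte Carlo estimate of var(L_n) for two fair coin-flip n-strings."""
    rng = random.Random(seed)
    vals = [lcs_length([rng.randint(0, 1) for _ in range(n)],
                       [rng.randint(0, 1) for _ in range(n)])
            for _ in range(trials)]
    mean = sum(vals) / trials
    return sum((v - mean) ** 2 for v in vals) / trials
```

In such experiments the empirical variance sits far below $n/2$, consistent with the conjecture that (3.2) is far from the truth.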

It is tempting to try to improve (3.2) by use of the change-$m$ inequality (2.7). To do so would require improving the naive bound

$$E(S - S_{[m]})^2 \le m^2,$$

since, with $S$ viewed as a function of $2n$ variables, this bound leads only to the variance bound $\frac{1}{2}\binom{2n-1}{m-1}^{-1}\binom{2n}{m}m^2 = mn$. This bound is worse than (3.1) for every fixed $m \ge 2$, and trying to choose $m$ optimally for fixed $n$ does not help, since the bound is minimized at $m = 1$. One seems to need new combinatorial insights to improve the naive bound on $(S - S_{[m]})^2$, and thus to use (2.7) with effect.
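To make the arithmetic concrete: with $S$ a function of $2n$ variables there are $\binom{2n}{m}$ subsets $[m]$, so the naive per-subset bound $m^2$ turns (2.7) into $\frac{1}{2}\binom{2n-1}{m-1}^{-1}\binom{2n}{m}m^2$, and since $\binom{2n}{m}/\binom{2n-1}{m-1} = 2n/m$ this collapses to $mn$. A quick exact check of the simplification (an illustration, not from the paper):

```python
from math import comb

def naive_changem_bound(n, m):
    # Right side of (2.7) for a statistic of 2n arguments under the naive
    # per-subset bound E(S - S_[m])^2 <= m^2: there are C(2n, m) subsets,
    # giving (1/2) * C(2n-1, m-1)**(-1) * C(2n, m) * m**2.
    return comb(2 * n, m) * m * m / (2 * comb(2 * n - 1, m - 1))
```

The function returns exactly $m \cdot n$ for every admissible pair, minimized over $m$ at $m = 1$.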

It is likely that (2.7) is in fact sharper than (2.1). Karlin and Rinott (1982) found that to be the case with the Bhargava (1980) version of the original Efron-Stein inequality, and there is no reason to expect our version to break with tradition.

The hard part of using (2.7) is not a lack of sharpness but rather an excess of complexity. One has to find a way to get strong information on how $L_n$ changes as one changes a substantial part of the sample. This is harder than getting a decent bound on the possible variation due to changing a single observation.

The longest common subsequence problem has a natural analogue for $k$ strings for which much of the preceding theory goes through with little change. One benefit of the $k$-string analogue is that it gives a second handle on the constant $c$.

To define the simplest instance of the $k$-sequence problem we consider $k = 3$; let $Y_i = (X_i, X_i', X_i'')$ and set

$$L_n := \max\{t : X_{i_1} = X'_{j_1} = X''_{k_1}, \ldots, X_{i_t} = X'_{j_t} = X''_{k_t}\},$$

where $1 \le i_1 < i_2 < \cdots < i_t \le n$, $1 \le j_1 < \cdots < j_t \le n$, and $1 \le k_1 < \cdots < k_t \le n$. As before, one has

$$\lim_{n \to \infty} EL_n/n = c_3,$$

and the same proof given by Deken (1979) shows that $L_n/n \to c_3$ almost surely.
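The three-string functional is just as computable as the two-string one; a minimal dynamic-programming sketch (standard material, included only for experimenting with $c_3$, not from the paper):

```python
def lcs3_length(x, y, z):
    """Length of the longest common subsequence of three sequences,
    by the natural three-dimensional dynamic program (O(n^3) time)."""
    nx, ny, nz = len(x), len(y), len(z)
    L = [[[0] * (nz + 1) for _ in range(ny + 1)] for _ in range(nx + 1)]
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            for k in range(1, nz + 1):
                if x[i - 1] == y[j - 1] == z[k - 1]:
                    # All three symbols match: extend the common subsequence.
                    L[i][j][k] = L[i - 1][j - 1][k - 1] + 1
                else:
                    # Otherwise drop one symbol from one of the strings.
                    L[i][j][k] = max(L[i - 1][j][k], L[i][j - 1][k], L[i][j][k - 1])
    return L[nx][ny][nz]
```

For example, `lcs3_length("ABCD", "ACBD", "ABDC")` returns 3 (a common subsequence is "ABD").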

It would be of interest to relate $c_3$ to $c$, and one is tempted to speculate that $c_3 = c^2$ (and, more generally, that $c_k = c^{k-1}$). Computational evidence does not yet rule this out. The application of (2.7) to this new functional gives $\operatorname{var} L_n \le \frac{3}{2} n (1 - \sum_a p_a^2)$ in the case $k = 3$. Again, this seems difficult to improve.

Acknowledgments. I am indebted to Michael Waterman and Louis Gordon for stimulating this work, and to Richard Arratia for his comments on $c = 2/(1 + \sqrt{2})$.

REFERENCES

BHARGAVA, R. P. (1980). A property of the jackknife estimation of the variance when more than one observation is omitted. Technical Report No. 140, Dept. of Statistics, Stanford University.


CHVÁTAL, V. and SANKOFF, D. (1975). Longest common subsequences of two random sequences. J. Appl. Probab. 12 306-315.

DEKEN, J. P. (1979). Some limit results for longest common subsequences. Discrete Math. 26 17-31.

EFRON, B. and STEIN, C. (1981). The jackknife estimate of variance. Ann. Statist. 9 586-596.

KARLIN, S. and RINOTT, Y. (1982). Applications of ANOVA type decompositions for comparisons of conditional variance statistics including jackknife estimates. Ann. Statist. 10 485-501.

SANKOFF, D. and KRUSKAL, J. B., eds. (1983). Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison. Addison-Wesley, Reading, Mass.

STEELE, J. M. (1981). Complete convergence of short paths and Karp's algorithm for the TSP. Math. Oper. Res. 6 374-378.

STEELE, J. M. (1982a). Long common subsequences and the proximity of two random strings. SIAM J. Appl. Math. 42 731-737.

STEELE, J. M. (1982b). Optimal triangulations of random samples in the plane. Ann. Probab. 10 548-553.

VITALE, R. A. (1984). An expansion for symmetric statistics and the Efron-Stein inequality. In Inequalities in Statistics and Probability. IMS Lecture Notes-Monograph Series 5 (Y. L. Tong, ed.) 112-114. IMS, Hayward, Calif.

DEPARTMENT OF STATISTICS
PRINCETON UNIVERSITY
PRINCETON, NEW JERSEY 08544
