
The Annals of Probability
1981, Vol. 9, No. 6, 937-947



OPTIMAL SEQUENTIAL SELECTION OF A MONOTONE SEQUENCE FROM A RANDOM SAMPLE

BY STEPHEN M. SAMUELS AND J. MICHAEL STEELE

Purdue University and Stanford University

The length of the longest monotone increasing subsequence of a random sample of size n is known to have expected value asymptotic to 2n^{1/2}. We prove that it is possible to make sequential choices which give an increasing subsequence of expected length asymptotic to (2n)^{1/2}. Moreover, this rate of increase is proved to be asymptotically best possible.

1. Introduction. A central theme in the theory of optimal stopping is that many stochastic tasks can be performed almost as well by someone unable to foresee the future as by a prophet. In one classic example, the "secretary problem", the task is to stop at the largest of n sequentially observed independent identically distributed observations X_1, X_2, ..., X_n. Without clairvoyance one attains X_T, where T is some stopping time, but a prophet is always able to attain max_{1≤i≤n} X_i.
That the prophet's advantage is rather modest follows from the well-known fact that no matter how large n is there is a stopping time T_n for which

(1.1) P(X_{T_n} = max_{1≤i≤n} X_i) > e^{-1}

(see, e.g., Gilbert and Mosteller (1966)).
The stochastic task we consider here is more complex than the secretary problem, and the central theme is illustrated in a different way from (1.1). To set the problem, let {X_i : 1 ≤ i < ∞} denote independent random variables with continuous distribution F. The basic object of interest is

L_n = max{k : X_{i_1} > X_{i_2} > ... > X_{i_k} with 1 ≤ i_1 < i_2 < ... < i_k ≤ n},

the length of the longest monotone decreasing subsequence of the sample {X_1, X_2, ..., X_n}. (We could equally well have considered increasing subsequences, but the notation will be simpler this way.) The variable L_n has been studied extensively, and it is now known that

(1.2) EL_n ~ 2n^{1/2}.

The first result, EL_n ~ cn^{1/2}, was obtained by Hammersley (1972) via an ingenious use of the planar Poisson process. Baer and Brock (1968) had conjectured earlier, on the basis of computer simulations, that c = 2. By a delicate variational argument Logan and Shepp (1977) proved that c is at least 2, and by a similar method Veršik and Kerov (1977) established that c equals 2.
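The asymptotics in (1.2) are easy to probe numerically. The sketch below is our illustration, not part of the paper: it computes the length of the longest strictly decreasing subsequence by patience sorting on negated values, an O(n log n) method, and compares the Monte Carlo mean of L_n with 2n^{1/2}.

```python
import bisect
import random

def lds_length(xs):
    # Length of the longest strictly decreasing subsequence of xs,
    # via patience sorting applied to the negated values.
    tails = []                      # tails[k] = smallest end of a negated run of length k+1
    for x in xs:
        j = bisect.bisect_left(tails, -x)
        if j == len(tails):
            tails.append(-x)
        else:
            tails[j] = -x
    return len(tails)

random.seed(1)
n, reps = 10_000, 10
mean = sum(lds_length([random.random() for _ in range(n)])
           for _ in range(reps)) / reps
print(mean / n ** 0.5)              # approaches 2 slowly as n grows
```

The printed ratio sits somewhat below 2 because the correction term in EL_n is of order n^{1/6}, so the convergence in (1.2) is slow.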
How well can one sequentially choose a monotone decreasing subsequence using only stopping times? Formally, we call a sequence of stopping times τ_1, τ_2, ... a policy if (1) they are adapted to {X_i : 1 ≤ i < ∞}, (2) 1 ≤ τ_1 < τ_2 < ..., and (3) X_{τ_1} > X_{τ_2} > ... . The class of all policies is denoted by S, and our main problem is to determine

u_n = sup_{τ∈S} E(max{k : τ_k ≤ n}).

The quantity u_n is the largest expected length of a monotone decreasing subsequence

Received April 23, 1979; revised April 23, 1980.
The research was supported in part by Grant N00014-76-C-0475 (Office of Naval Research).
AMS 1970 subject classifications. Primary 62L15; secondary 60G40.
Key words and phrases. Monotone subsequence, optimal stopping, subadditive process.



which can be achieved by sequential selection. Although u_n would a priori depend on F, one can easily check that it is the same for all continuous F. Moreover, the optimal policy for a given F can be obtained via the probability integral transform from the policy for the uniform distribution on [0, 1].
Our main result is the following theorem:

(1.3) u_n ~ (2n)^{1/2}.

The intuitive content of this result is another illustration of the central theme; the prophet asymptotically outperforms the intelligent (but nonclairvoyant) individual by only a factor of 2^{1/2}. One should also note that the naive individual who too eagerly reports each successive record low achieves an expected length of only E ∑_{k=1}^n 1/k ~ log n and thus does much worse than the prophet or the intelligent individual.
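The gap between the record-low strategy and the optimal rate is easy to see numerically; the following sketch is our illustration, not the paper's. Greedily selecting every record low among n = 1000 uniforms yields about H_n = ∑_{k=1}^n 1/k ≈ 7.5 selections, far below (2n)^{1/2} ≈ 44.7.

```python
import random

random.seed(2)
n, reps = 1000, 2000
total = 0
for _ in range(reps):
    low, count = 1.0, 0
    for _ in range(n):                 # greedily take every record low
        x = random.random()
        if x < low:
            low, count = x, count + 1
    total += count
h_n = sum(1.0 / k for k in range(1, n + 1))
print(total / reps, h_n)               # sample mean vs H_1000 = 7.49; both O(log n), far below sqrt(2n)
```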
Our proof that u_n ~ (2n)^{1/2} begins with a simple algorithm for computing u_n based on an integral equation obtained by dynamic programming. Standing alone, the integral equation seems to be ineffective, so in Section 3 we prove by a subadditivity argument that u_n/n^{1/2} has a limit. A sequence of efficient, but suboptimal, policies is then given in Section 4 which shows lim u_n/n^{1/2} ≥ 2^{1/2}. The crux of the proof is in Section 5, where the integral equation is finally used to show lim u_n/n^{1/2} ≤ 2^{1/2}, after first establishing an essential regularity property of the solution by a probabilistic argument.
The proof outlined above yields several results en route. In particular, we obtain results on optimal selection when the sample size is random. These results as well as comments on a related problem are collected in Section 6.
In the final section we are fortunate to be able to include a result due to Burgess Davis on the sequential selection of a decreasing subsequence from a random permutation. This result teams up with the main theorem of this paper to settle a second conjecture given in the computational paper of Baer and Brock (1968). We would like to thank Professor Davis for his kind suggestion that his result be included in the present paper.

2. An algorithm for computing the optimal expected length. First of all, as we remarked in the introduction, we may assume without loss of generality that the common distribution of the observations X_1, X_2, ... is uniform on (0, 1).
Let us define, for each t ∈ (0, 1],

(2.1) S_t = {τ = (τ_1, τ_2, ...) ∈ S : X_{τ_i} < t, i = 1, 2, ...},

the class of policies which only select observations smaller than t. We also let

(2.2) u_n(t) = sup_{τ∈S_t} E(max{k : τ_k ≤ n}).


Note that

(2.3) u_n = u_n(1)

and

(2.4) u_1(t) = P(X_1 ≤ t) = t.

We also record the trivial fact: u_0(t) ≡ 0.
Now fix t and consider n + 1 available observations. Because of the stationarity of the X_i's, the maximal conditional expected subsequence length, given X_1, will be just u_n(t) if X_1 is not selected and 1 + u_n(X_1) if X_1 is selected (in which case necessarily X_1 < t). Since the optimal policy must do whichever maximizes the conditional expectation, we have the algorithm:

(2.5) u_{n+1}(t) = u_n(t) P(X_1 ≥ t) + E[max{u_n(t), 1 + u_n(X_1)} 1(X_1 < t)]
              = (1 - t) u_n(t) + ∫_0^t max{u_n(t), 1 + u_n(s)} ds.


It follows easily that for each n and t, the policy which achieves u_n(t) is

(2.6) τ_1 = min{i : X_i < t and 1 + u_{n-i}(X_i) ≥ u_{n-i}(t)}
             (> n, arbitrary, if no such i ≤ n),
      τ_{k+1} = min{i > τ_k : X_i < X_{τ_k} and 1 + u_{n-i}(X_i) ≥ u_{n-i}(X_{τ_k})}
             (> n, arbitrary, if no such i ≤ n or if τ_k > n).

Since (2.4) and (2.5) imply that each u_n(·) is strictly increasing on (0, 1), we can implicitly define the functions t_n*(t) by the following relations:

(2.7) u_n(t_n*(t)) = u_n(t) - 1 if u_n(t) ≥ 1,
      t_n*(t) = 0 if u_n(t) ≤ 1.

Equation (2.6) now simplifies to the following:

(2.8a) τ_1 = min{i : t_{n-i}*(t) ≤ X_i < t},
(2.8b) τ_{k+1} = min{i > τ_k : t_{n-i}*(X_{τ_k}) ≤ X_i < X_{τ_k}}.

One would naturally like to show u_n ~ (2n)^{1/2} directly from (2.5), but this does not seem possible. The integral equation (2.5) becomes effective only after substantial quantitative information about u_n(t) is obtained. Here we should remark that


(2.9) u_n(t) = ∑_{k=1}^n (1/k) ∑_{j=k}^n C(n, j) t^j (1 - t)^{n-j}

for all t small enough so that the right side of (2.9) is less than or equal to one. This can be shown either directly from the integral equation or by noting that when u_n(t) ≤ 1 the optimal policy is to select all successive minima among those X_i's, i = 1, 2, ..., n, which are smaller than t. The right side of (2.9) is the expected number of such minima.
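The recursion (2.5) is also easy to iterate numerically. The following sketch is ours, not from the paper, and the grid size and iteration count are arbitrary choices: it discretizes t on a uniform grid, approximates the integral by the trapezoid rule, and exhibits u_n(1) growing like (2n)^{1/2}.

```python
import numpy as np

m = 500                                   # grid size (arbitrary choice)
t = np.linspace(0.0, 1.0, m + 1)
dt = 1.0 / m
u = np.zeros(m + 1)                       # u_0(t) = 0
for n in range(50):                       # produce u_1, ..., u_50
    # big[i, j] = max{u_n(t_i), 1 + u_n(s_j)}
    big = np.maximum(u[:, None], 1.0 + u[None, :])
    # cum[i, j] = trapezoid approximation of the integral of big[i, .] over (0, s_j)
    cum = np.hstack([np.zeros((m + 1, 1)),
                     np.cumsum((big[:, 1:] + big[:, :-1]) * dt / 2.0, axis=1)])
    integral = cum[np.arange(m + 1), np.arange(m + 1)]   # evaluated at s = t_i
    u = (1.0 - t) * u + integral          # the recursion (2.5)
print(u[-1], (2 * 50) ** 0.5)             # u_50(1) is below, but of the order of, sqrt(100) = 10
```

One step from u_0 ≡ 0 reproduces u_1(t) = t exactly, which is a convenient check on the discretization.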

3. Existence of the limit. To show that u_n/n^{1/2} has a limit, we shall first prove that a limit exists for an analogous planar Poisson process problem and then show that the two problems are asymptotically similar. The proof is a version of Hammersley's subadditivity idea, made somewhat simpler because we deal only with expectations rather than with random variables.

3a. The planar Poisson process problem. Let Z_1, Z_2, ... be i.i.d., each exponentially distributed with mean one, and independent of the X_i's, which are i.i.d. uniform on (0, 1). Let S(λ) be the class of policies τ = (τ_1, τ_2, ...) with
(a) each τ_i adapted to {(Z_i, X_i) : 1 ≤ i < ∞},
(b) 1 ≤ τ_1 < τ_2 < ...,
(c) X_{τ_1} > X_{τ_2} > ...;

and let w(λ) = sup_{τ∈S(λ)} E(max{k : Z_1 + ... + Z_{τ_k} ≤ λ}).

In other words, we observe a Poisson process with arrival rate one on an interval of length λ. At each arrival time we are allowed to observe a random variable uniform on (0, 1) and independent of its predecessors, and the object is to select a decreasing subsequence of maximal expected length.
What makes this problem so appealing is the well-known fact that, if we choose p and t, each in (0, 1), then the following two processes are also Poisson:
(a) those arrival times in (0, pλ) for which the corresponding X_i's are ≥ t;
(b) those arrival times in (pλ, λ) for which the corresponding X_i's are < t.
These processes have expected numbers of arrivals p(1 - t)λ and (1 - p)tλ respectively. It follows, by considering the subclass of S(λ) consisting of those τ's with

X_{τ_i} ≥ t if Z_1 + ... + Z_{τ_i} ≤ pλ,
X_{τ_i} < t if Z_1 + ... + Z_{τ_i} > pλ,

that

(3.1) w(λ) ≥ w(p(1 - t)λ) + w((1 - p)tλ).

If we now define

φ(λ) = w(λ^2)

and choose λ = (r + s)^2, p = 1 - t = r/(r + s), (3.1) becomes

(3.2) φ(r + s) ≥ φ(r) + φ(s).

By the elementary lemma on subadditive sequences (Fekete (1923), or Khinchin (1957)), this implies the existence of

(3.3) γ = lim_{t→∞} φ(t)/t = sup_{t>0} φ(t)/t.
The finiteness of γ follows from the inequality

w(λ) ≤ ∑_{k=0}^∞ e^{-λ} (λ^k/k!) EL_k ≤ M ∑_{k=0}^∞ e^{-λ} (λ^k/k!) k^{1/2} ≤ M λ^{1/2};

the second inequality holds for some constant M by (1.2). The last inequality is just an application of the familiar inequality E|X| ≤ (EX^2)^{1/2}.
Thus we have shown that there is a finite γ > 0 such that

(3.4) γ = lim_{t→∞} w(t^2)/t = lim_{t→∞} w(t)/t^{1/2}.

3b. Asymptotic similarity of the two processes. We will now use some elementary estimates to show that u_n ~ w(n). With the usual notation P(k; λ) = ∑_{j=0}^k e^{-λ} λ^j/j! and

P̄(k; λ) = 1 - P(k - 1; λ),

we will prove the inequalities:

(3.5) u_{[(1-ε)λ]} P̄([(1-ε)λ]; λ) ≤ w(λ),
(3.6) w(λ) ≤ u_{[(1+ε)λ]} P([(1+ε)λ]; λ) + λ P̄([(1+ε)λ]; λ).

The first inequality holds since the optimal policy for n = [(1 - ε)λ] observations, used on the Poisson process (paying no heed to the Z_i's), would yield an expected length of at least u_{[(1-ε)λ]} whenever there are at least [(1 - ε)λ] arrivals.
For the second inequality let N be the number of arrivals and let L be the subsequence length obtained when the optimal policy in S(λ) is used. We then have

w(λ) = E(L | N ≤ [(1 + ε)λ]) P([(1 + ε)λ]; λ) + E(L 1(N > [(1 + ε)λ])).

We note E(L | N ≤ [(1 + ε)λ]) ≤ u_{[(1+ε)λ]} and, trivially, E(L | N) ≤ N. Inequality (3.6) then follows since

E(N 1(N > [(1 + ε)λ])) = λ P̄([(1 + ε)λ]; λ).

From the fact (3.4) that w(λ) ~ γλ^{1/2} and elementary bounds on the Poisson distribution, one now easily deduces from (3.5) and (3.6) that u_n ~ γn^{1/2}.
We remark that it is not hard to extend this result to show that for each t ∈ (0, 1] one has u_n(t)/(nt)^{1/2} → γ.


4. A lower bound for the limit. For any constant α > 0, and for each n, we consider the policy τ(n) = τ(n; α), where

τ_1(n) = min{j : X_j ≥ 1 - α/n^{1/2}},
τ_{k+1}(n) = min{j > τ_k(n) : X_j ∈ [X_{τ_k(n)} - α/n^{1/2}, X_{τ_k(n)})}.

We shall show that for these policies

(4.1) lim inf_{n→∞} n^{-1/2} E(max{k : τ_k(n) ≤ n}) ≥ min(α, 2/α).

The right side of (4.1) is maximized for α = 2^{1/2}, which shows that lim inf_n u_n/n^{1/2} ≥ 2^{1/2}. (Of course, when we complete the proof of the theorem, (4.1) with α = 2^{1/2} will imply that the policies τ(n; 2^{1/2}) are asymptotically optimal.)
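As a sanity check on (4.1), the policy τ(n; α) is trivial to simulate; the sketch below is ours, not the paper's. It accepts the first observation in [1 - α/n^{1/2}, 1) and thereafter any observation lying within α/n^{1/2} below the last accepted one.

```python
import random

def policy_length(xs, a):
    # Number of selections made by the policy tau(n; a) of Section 4.
    n = len(xs)
    w = a / n ** 0.5                # window width a / n^{1/2}
    last, count = 1.0, 0            # treat the "previous selection" as 1 initially
    for x in xs:
        if last - w <= x < last:    # x falls in the current acceptance window
            last, count = x, count + 1
    return count

random.seed(3)
n, a, reps = 10_000, 2 ** 0.5, 20
est = sum(policy_length([random.random() for _ in range(n)], a)
          for _ in range(reps)) / reps
print(est, (2 * n) ** 0.5)          # the mean is close to sqrt(2n) ~ 141.4
```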
What makes the policies τ(n) easy to evaluate is the fact that if X_{τ_k(n)} ≥ α/n^{1/2}, then τ_{k+1}(n) - τ_k(n) and X_{τ_k(n)} - X_{τ_{k+1}(n)} are conditionally independent and independent of {τ_1(n), ..., τ_k(n); X_{τ_1(n)}, ..., X_{τ_k(n)}}, with geometric (p = α/n^{1/2}) and uniform on (0, α/n^{1/2}) distributions respectively. Hence we now let {Y_k : k = 1, 2, ...} and {Z_k : k = 1, 2, ...} be independent sequences of i.i.d. random variables with these geometric and uniform distributions respectively. Also set S_k = Y_1 + ... + Y_k and S'_k = Z_1 + ... + Z_k, and define

M_n = max{k : S_k ≤ n and S'_k ≤ 1 - α/n^{1/2}}.

We first observe

(4.2) EM_n ≤ E(max{k : τ_k(n) ≤ n}).

Now, for any ε > 0, Chebyshev's inequality gives

P(S_k ≤ n) = 1 + O(n^{-1/2}) if k ≤ (1 - ε) α n^{1/2}
           = O(n^{-1/2}) if k ≥ (1 + ε) α n^{1/2},

and

P(S'_k ≤ 1 - α/n^{1/2}) = 1 + O(n^{-1/2}) if k ≤ (1 - ε)(2/α) n^{1/2}
                        = O(n^{-1/2}) if k ≥ (1 + ε)(2/α) n^{1/2}.

The O(n^{-1/2}) terms are uniformly small in the indicated ranges, so, by the independence of the two sequences,

EM_n = ∑_{k=1}^n P(S_k ≤ n) P(S'_k ≤ 1 - α/n^{1/2}) ~ n^{1/2} min(α, 2/α).

By (4.2), this shows that (4.1) holds, completing the proof.

5. An upper bound for the limit. Now that we know that lim_n u_n/n^{1/2} exists and is at least 2^{1/2}, to complete the proof of the theorem it will suffice to show that

(5.1) lim inf u_n/n^{1/2} ≤ 2^{1/2}.

Our proof of (5.1) hinges on showing that

(5.2) u_n(t)/t^{1/2} is increasing in t for each n.

The derivative of u_n(t)/t^{1/2} is

t^{-1/2} {u'_n(t) - (2t)^{-1} u_n(t)},

so, to prove (5.2), we must show that for each t ∈ (0, 1)

(5.3) u_n(t + δ) - u_n(t) ≥ (1/2)(δ/t) u_n(t) + o(δ) as δ ↓ 0.

This inequality will be proved by selecting a suboptimal member of S_{t+δ} (as defined in (2.1)) and showing that this policy improves on the optimal policy in S_t by an amount equal to the right side of (5.3).
What we actually do is a bit more complicated than this and involves showing that u_n(t) is also the maximal expected subsequence length in a problem where the number of available observations is random with a binomial distribution.

5a. Optimal selection with binomially many observations. Since the policy in S_t which achieves u_n(t), as given by (2.8), ignores the actual values of all X_i's which are greater than t, and since the other X_i's are, conditionally, i.i.d. uniform on (0, t), we could just as well replace the X_i's by a sequence of observable coin tosses with probability t of heads, letting each toss which gives heads be accompanied by the next in a sequence of i.i.d. random variables uniform on (0, t).
To exploit this observation let Y_1, Y_2, ... and X_1, X_2, ... be independent sequences of i.i.d. random variables, the Y's Bernoulli (t) and the X's uniform on (0, 1). Let U_t be the class of policies adapted to the (Y_i, X_i)'s which select a monotone decreasing subsequence by selecting only X_i's for which Y_i = 1 and which totally ignore all X_i's for which Y_i = 0. Then for the u_n(t) defined by (2.2) we have the representation

sup_{τ∈U_t} E(max{k : τ_k ≤ n}) = u_n(t).

Since we have made the X_i's uniform on (0, 1) rather than on (0, t) (this is to avoid confusion in what follows), the policy τ = (τ_1, τ_2, ...) which achieves u_n(t) becomes

τ_1 = min{i : Y_i = 1 and t_{n-i}*(t) ≤ tX_i < t},
τ_{k+1} = min{i > τ_k : Y_i = 1 and t_{n-i}*(tX_{τ_k}) ≤ tX_i < tX_{τ_k}}.
Now suppose we introduce a second coin toss at each stage, letting Y'_1, Y'_2, ... be i.i.d. Bernoulli (t') and independent of the Y_i's and the X_i's, and we allow policies adapted to the {Y'_i} as well, but maintain the requirement that all X_i's for which Y_i = 0 must be ignored. Then clearly what we have introduced is external randomization; those policies which depend in some way on the {Y'_i} are simply randomized policies, and, of course, none of these can improve on the best nonrandomized policies. In particular, any policy which ignores all X_i's for which either Y_i = 0 or Y'_i = 0 is really a policy in U_{tt'}; hence the expected length of the subsequence of X_1, ..., X_n which it selects is no greater than u_n(tt'). Just such a policy will be needed in proving (5.3).

5b. Monotonicity of u_n(t)/t^{1/2}. We now fix n and let the {Y_i} be Bernoulli (t + δ) and the {Y'_i} be Bernoulli (t/(t + δ)). First note that P(Y_i = Y'_i = 1) = t. We consider two randomized policies τ and τ' in U_{t+δ}. The first is to be equivalent to the optimal policy (for given n) in U_t, while the second is to be a slight modification of the first. Specifically, we let τ = (τ_1, τ_2, ...) with

τ_1 = min{i : Y_i = Y'_i = 1 and t_{n-i}*(t) ≤ tX_i < t},
τ_{k+1} = min{i > τ_k : Y_i = Y'_i = 1 and t_{n-i}*(tX_{τ_k}) ≤ tX_i < tX_{τ_k}}.
We want τ' to agree with τ up to the first i ≤ n, if any, at which Y_i = 1, Y'_i = 0, and at which X_i would have been selected by τ if Y'_i had been 1. We want τ' to select this X_i, but thereafter to continue to behave like τ. We thus define

I = min{i : Y_i = 1, Y'_i = 0, t_{n-i}*(tX_{σ_i}) ≤ tX_i < tX_{σ_i}}
  (= ∞ if no such i ≤ n),

where

σ_i = max{τ_k : τ_k < i}
    (= 0, with X_0 ≡ 1, if τ_1 ≥ i).


We then let τ' = (τ'_1, τ'_2, ...) where

τ'_k = τ_k if τ_k < I,
τ'_{k+1} = I if τ_k < I < τ_{k+1},
τ'_{k+1} = min{i > τ'_k : Y_i = Y'_i = 1 and t_{n-i}*(tX_{τ'_k}) ≤ tX_i < tX_{τ'_k}} thereafter.
Now, for convenience, we let L and L' be the lengths of the subsequences of X_1, ..., X_n selected by τ and τ' respectively. Then L = L' on {I > n},

EL = u_n(t) and EL' ≤ u_n(t + δ),

so

(5.4) u_n(t + δ) - u_n(t) ≥ E(L' - L | I ≤ n) P(I ≤ n).

Furthermore, from (2.2) and the definitions of τ and τ',

(5.5) E(L' - L | I = i, X_{σ_i} = x) = 1 + E(u_{n-i}(tX_i) | I = i, X_{σ_i} = x) - u_{n-i}(tx).

(Note: (5.5) is valid even when i = n, since we have set u_0(t) ≡ 0.)
From (5.4) and (5.5) we see that to establish (5.3) it suffices to prove

(5.6) P(I ≤ n) ≤ (δ/t) u_n(t) + o(δ) as δ ↓ 0

and, for each i ≤ n,

(5.7) E(u_{n-i}(tX_i) | I = i, X_{σ_i} = x) ≥ u_{n-i}(tx) - 1/2.

To prove (5.6) we first remark that, since t/(t + δ) → 1 as δ ↓ 0, we have

(5.8) P(I ≤ n) = ∑_{i=1}^n P(Y_i = 1, Y'_i = 0, A_i) + o(δ),

where A_i denotes the event {t_{n-i}*(tX_{σ_i}) ≤ tX_i < tX_{σ_i}}. A_i is independent of {Y_i = 1, Y'_i = 0}, so

P(Y_i = 1, Y'_i = 0, A_i) = P(Y_i = 1, Y'_i = 0) P(A_i)
                         = (δ/t) P(Y_i = Y'_i = 1, A_i)
                         = (δ/t) P(X_i is selected by τ).

Putting this back into (5.8) we have

P(I ≤ n) = (δ/t) ∑_{i=1}^n P(X_i is selected by τ) + o(δ) = (δ/t) u_n(t) + o(δ),

which is (5.6).
To prove (5.7), we first remark that the conditional distribution of tX_i given I = i and X_{σ_i} = x is uniform on (t_{n-i}*(tx), tx). Also we note that

(5.9) u_{n-i}(t_{n-i}*(tx)) ≥ u_{n-i}(tx) - 1,

with equality holding unless u_{n-i}(tx) < 1. Hence, if u_{n-i}(·) were linear on the interval (t_{n-i}*(tx), tx), (5.7) would hold, and would in fact be an equality if u_{n-i}(tx) ≥ 1. So the most natural way to establish (5.7) is to prove the following lemma:

LEMMA 5.2. For each n, u_n(·) is concave.

PROOF. We proceed by induction, and first note the lemma is true for n = 1 because u_1(t) = t.


Using (2.7), we rewrite (2.5) as

u_{n+1}(t) = t + (1 - t) u_n(t) + t_n*(t)(u_n(t) - 1) + ∫_{t_n*(t)}^t u_n(s) ds.

Formally we have

u'_{n+1}(t) = 1 - u_n(t) + (1 - t) u'_n(t) + t_n*'(t)(u_n(t) - 1) + t_n*(t) u'_n(t) + u_n(t) - t_n*'(t) u_n(t_n*(t)).

Now u_n(t_n*(t)) = (u_n(t) - 1)^+ and t_n*'(t) = 0 if u_n(t) < 1. Hence, whether or not u_n(t) ≥ 1, we have

u'_{n+1}(t) = 1 + (1 - (t - t_n*(t))) u'_n(t)

and

u''_{n+1}(t) = (1 - (t - t_n*(t))) u''_n(t) - (t - t_n*(t))' u'_n(t).

If u_n(·) is concave (i.e., u''_n ≤ 0), then t - t_n*(t) is increasing. Since t - t_n*(t) < 1 and u'_n(·) > 0, we conclude that u''_{n+1}(t) ≤ 0, i.e., u_{n+1}(·) is concave.
The validity of the foregoing differentiations is also easily established by induction. To begin, note u_1(t) = t and t_1*(t) ≡ 0. Next, the differentiability of u_n and u'_n implies t_n*(·) is differentiable on {t : u_n(t) > 1}; in fact, we have t_n*'(t) = u'_n(t)/u'_n(t_n*(t)). By means of (2.5) one even more easily sees the required differentiability of u_n(·).
This completes the proof of the lemma, from which we obtain (5.7).

5c. Completion of the proof. At last we are ready to prove (5.1). We define

(5.10) c_n = u_n/n^{1/2} = u_n(1)/n^{1/2},

and note that it only remains to show lim inf c_n ≤ 2^{1/2}. By (5.2),

(5.11) u_n(t) ≤ c_n(nt)^{1/2}, t ∈ (0, 1].


Let t_n* abbreviate t_n*(1), and define s_n* analogously by

c_n(n s_n*)^{1/2} = c_n n^{1/2} - 1, so that

(5.12) s_n* = 1 - 2 c_n^{-1} n^{-1/2} + c_n^{-2} n^{-1}.

Now (5.10) and (5.11) imply that

(5.13) s_n* ≤ t_n*,

so, rewriting (2.5), with t = 1, as

u_{n+1} = u_n + ∫_{t_n*}^1 (u_n(t) - (u_n - 1)) dt,

we conclude from (5.10), (5.11), and (5.13) that

(5.14) u_{n+1} ≤ c_n n^{1/2} + ∫_{s_n*}^1 (c_n(nt)^{1/2} - (c_n n^{1/2} - 1)) dt.

This is perhaps the central inequality in the proof, and it is made possible by (5.2). The remainder of the proof demands only straightforward analytical manipulation of the right side of (5.14).


Evaluating the right side of (5.14) we get

(5.15) u_{n+1} ≤ c_n n^{1/2} + (2/3) c_n n^{1/2} (1 - s_n*^{3/2}) - (1 - s_n*)(c_n n^{1/2} - 1).

Substituting (5.12) and the Taylor series expansion

1 - s_n*^{3/2} = (3/2)(1 - s_n*) - (3/8)(1 - s_n*)^2 + O((1 - s_n*)^3)

into the right side of (5.15) gives

(5.16) u_{n+1} ≤ c_n n^{1/2} + c_n^{-1} n^{-1/2} + O(n^{-1})
              = (n + 1)^{1/2} {c_n n^{1/2} (n + 1)^{-1/2} + c_n^{-1} n^{-1/2} (n + 1)^{-1/2} + O(n^{-3/2})}.

Now direct computation shows that

n^{1/2} (n + 1)^{-1/2} = 1 - (1/2 - δ_n)/(n^{1/2} (n + 1)^{1/2}),

where δ_n > 0 and δ_n → 0 as n → ∞. Hence (5.16) gives

u_{n+1} ≤ (n + 1)^{1/2} {c_n + n^{-1/2} (n + 1)^{-1/2} (c_n^{-1} - c_n/2 + o(1))}.
From (5.10), with n + 1 in place of n, we have

(5.17) c_{n+1} ≤ c_n + n^{-1/2} (n + 1)^{-1/2} {c_n^{-1} - c_n/2 + o(1)}.

This is exactly what we need to show that lim inf c_n ≤ 2^{1/2} and thereby complete the proof.
We use the fact that c^{-1} - c/2 is decreasing in c and zero for c = 2^{1/2}. Choose ε > 0 and n_1 large enough so that for all n ≥ n_1 the o(1) in (5.17) satisfies

o(1) < δ_ε, where δ_ε = (1/2){(1/2)(2^{1/2} + ε) - (2^{1/2} + ε)^{-1}} > 0.

Then for all n ≥ n_1,

c_n > 2^{1/2} + ε implies c_{n+1} < c_n - δ_ε n^{-1/2} (n + 1)^{-1/2}.


Since

∑_n n^{-1/2} (n + 1)^{-1/2} = ∞,

c_{n_0} > 2^{1/2} + ε for some n_0 ≥ n_1 implies c_{n_0+m} < 2^{1/2} + ε for some m. Since ε > 0 is arbitrary, this shows that lim inf c_n ≤ 2^{1/2}, as required to complete the proof.
We should remark that exactly the same argument can be used to prove that

u_n(t)/(nt)^{1/2} → 2^{1/2}

for every t in (0, 1].

6. Random sample size and an open problem. As an easy consequence of u_n ~ (2n)^{1/2} one can obtain several results on subsequence selection when the underlying sample size N is random. In particular we now define u_N by

(6.1) u_N = sup_{τ∈S} E(max{k : τ_k ≤ N}),

where S consists of those strategies adapted to {X_i : 1 ≤ i < ∞} but not adapted to N. When N is Poisson or binomial (with fixed p) one can easily show that as EN → ∞ we have

(6.2) u_N ~ (2EN)^{1/2};

in fact one can check that the same result holds whenever EN → ∞ and Var N = O(EN). (To compare these results with the asymptotic relations of Sections 3a and 5a one needs to note that the classes of policies applied there were quite different from those used in (6.1), since they were also adapted to the relevant Poisson or binomial processes.)
We now consider the next most complex case, where N has the geometric distribution P(N = k) = p(1 - p)^k, k = 0, 1, 2, .... The condition Var N = O(EN) does not hold, so as before the natural analysis begins with dynamic programming.
Analogously to (2.2) we define

u_N(t) = sup_{τ∈S_t} E(max{k : τ_k ≤ N}).

One can easily check in this case that

u_{N(p)}(t) = u_{N(p')}(1)

with p' = p/{p + t(1 - p)}. If we define f(p) = u_{N(p)}(1) we are led to a single integral equation:

f(p) = (1 - p) ∫_0^1 max{f(p), 1 + f(p/{p + t(1 - p)})} dt.

As we similarly noted at the end of Section 2, we can solve the equation for sufficiently extreme values. In this case we know

f(p) = log(p^{-1}) if p ≥ e^{-1}.

This observation just says that if f(p) ≤ 1 the optimal policy is to select all record values, exactly as one does in the fixed sample size problem when u_n(t) ≤ 1.
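The closed form f(p) = log(p^{-1}) for p ≥ e^{-1} can be checked by simulation. The sketch below is ours, not the paper's, and it assumes the convention P(N = k) = p(1 - p)^k for k = 0, 1, 2, ..., under which the expected number of record lows is ∑_{k≥1} (1 - p)^k/k = log(p^{-1}).

```python
import math
import random

random.seed(4)
p, reps = 0.5, 200_000                 # p >= 1/e, so f(p) = log(1/p) should apply
total = 0
for _ in range(reps):
    n = 0                              # draw N with P(N = k) = p(1 - p)^k, k = 0, 1, 2, ...
    while random.random() > p:
        n += 1
    low, count = 1.0, 0
    for _ in range(n):                 # select every successive minimum (record low)
        x = random.random()
        if x < low:
            low, count = x, count + 1
    total += count
print(total / reps, math.log(1 / p))   # the two values agree closely
```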
One would like to determine the asymptotic behavior of f(p) as p → 0. We conjecture, but have not been able to prove, that as p → 0

f(p) ~ cp^{-1/2}

for a constant

c < 2^{1/2}.

7. A result of Burgess Davis on selections from permutations. If one considers a random permutation of the set {1, 2, ..., n}, then the distribution of the length of the longest decreasing subsequence is the same as that in a random sample of size n from a uniform distribution. In contrast, the length of the optimal sequentially selected decreasing subsequence is stochastically larger in the first case.
We let l_n denote the expected length of the longest decreasing subsequence which can be chosen sequentially from a random permutation. The main result of this paper immediately implies that

(7.1) lim inf_n l_n/n^{1/2} ≥ 2^{1/2}.

Part of the interest of this observation stems from the fact that the study of the l_n's was already a primary objective in Baer and Brock (1968), where "natural" is used as a synonym for "sequential." On the basis of substantial computation Baer and Brock even conjectured that l_n ~ (2n)^{1/2}. The truth of this conjecture is an immediate consequence of our main result u_n ~ (2n)^{1/2} and the previously unpublished theorem due to Burgess Davis which is proved below.

THEOREM (Burgess Davis). l_n ~ u_n.

PROOF. First suppose X_i, 1 ≤ i < ∞, are i.i.d. uniform on [0, 1]. For any ε > 0 and 0 < δ < 1, let I_k^{(n)} denote the interval ((k - 1)ε/n^{1/2}, kε/n^{1/2}] and let Y_k^{(n)} denote the cardinality of the set {i : 1 ≤ i ≤ n, X_i ∈ I_k^{(n)}}. Consider the events

A_n = {min_{1≤k≤n^{1/2}/ε} Y_k^{(n)} ≥ (1 - δ) ε n^{1/2}}

and note that elementary binomial estimates show P(A_n) → 1 as n → ∞.


Now assign "pretend ranks" as follows: if X_i ∈ I_k^{(n)} and X_i is one of the first [(1 - δ)εn^{1/2}] elements of I_k^{(n)}, we choose its "pretend rank" at random from those integers between (k - 1)(1 - δ)εn^{1/2} and k(1 - δ)εn^{1/2} which are not already assigned. We then ignore all X_i's in I_k^{(n)} after the first [(1 - δ)εn^{1/2}]. We note that on A_n the sequence of "pretend ranks" is simply a permutation of the integers {k : 1 ≤ k ≤ (1 - δ)n}.
Now use the optimal random permutation policy on the set of "pretend ranks" to select a subsequence with decreasing pretend ranks. Delete from the subsequence all X_i's which are not smaller than all their predecessors in the subsequence. This gives a decreasing subsequence.
We now claim that the expected cardinality of the resulting subsequence is at least P(A_n) l_m - εn^{1/2} with m = [(1 - δ)n]. First note that on A_n the expected length before deletions is l_m. Let J_i^{(n)} denote the interval I_k^{(n)} which contains the smallest selected element from {X_1, X_2, ..., X_i}. For X_{i+1} to be an observation deleted from the selected subsequence it is necessary that X_{i+1} ∈ J_i^{(n)}. By Boole's inequality and the independence of X_{i+1} and J_i^{(n)}, we have that the expected number of deleted observations is at most

∑_{i=0}^{n-1} P(X_{i+1} ∈ J_i^{(n)}) = n(ε/n^{1/2}) = εn^{1/2}.

This proves u_n ≥ P(A_n) l_m - εn^{1/2}, i.e., l_m/n^{1/2} ≤ P(A_n)^{-1}(u_n/n^{1/2} + ε). By (7.1) and the arbitrariness of ε and δ the theorem is proved.


REFERENCES

BAER, R. M. and BROCK, P. (1968). Natural sorting over permutation spaces. Math. Comp. 22 385-410.
FEKETE, M. (1923). Über die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten. Math. Z. 17 228-249.
GILBERT, J. P. and MOSTELLER, F. (1966). Recognizing the maximum of a sequence. J. Amer. Statist. Assoc. 61 35-73.
HAMMERSLEY, J. M. (1972). A few seedlings of research. Proc. Sixth Berkeley Symp. Math. Statist. Prob. 1 345-394.
KHINCHIN, A. I. (1957). Mathematical Foundations of Information Theory. Dover, New York.
LOGAN, B. F. and SHEPP, L. A. (1977). A variational problem for random Young tableaux. Advances in Math. 26 206-222.
VERŠIK, A. M. and KEROV, S. V. (1977). Asymptotics of the Plancherel measure of the symmetric group and the limiting form of the Young tables. Soviet Math. Dokl. 18 527-531.