Time- and Space-Efficient Algorithms for Least Median of Squares Regression

DIANE L. SOUVAINE and J. MICHAEL STEELE*

* Diane L. Souvaine is Assistant Professor, Department of Computer Science, Rutgers University, New Brunswick, NJ 08903. J. Michael Steele is Professor, Department of Civil Engineering and Operations Research, Program in Engineering Statistics, Princeton University, Princeton, NJ 08544. Souvaine's research was supported in part by National Science Foundation Grant MCS-8303926. Steele's research was supported in part by National Science Foundation Grant DMS-8414069.

© 1987 American Statistical Association
Journal of the American Statistical Association
September 1987, Vol. 82, No. 399, Theory and Methods

The least median of squared residuals regression line (or LMS line) is that line y = ax + b for which the median of the squared residuals |yi − axi − b|² is minimized over all choices of a and b. If we rephrase the traditional ordinary least squares (OLS) problem as finding the a and b that minimize the mean of |yi − axi − b|², one can see that in a formal sense LMS just replaces a "mean" by a "median." This way of describing LMS regression does not do justice to the remarkable properties of LMS. In fact, LMS regression behaves in ways that distinguish it greatly from OLS as well as from many other methods for robustifying OLS (see, e.g., Rousseeuw 1984). As illustrations given here show, the LMS regression line should provide a valuable tool for studying those data sets in which the usual linear model assumptions are violated by the presence of some (not too small) groups of data values that behave distinctly from the bulk of the data. This feature of LMS regression is illustrated by the fit given in Figure 1 and the residual plots of Figures 2a and 2b.

The LMS regression line is an attractive tool for data analysis, but it is not easy to compute. Steele and Steiger (1986) established that the function f(a, b) = median |yi − axi − b|² can have on the order of n² local minima, so typical local methods have little hope of finding the global minimum of f. The main objective of this article is to provide algorithms that do minimize f and are efficient in terms of both time and space.

Two algorithms are given here that determine the LMS regression line for n points in the plane. Both of these algorithms draw their strength from the systematic use of affine duality, and one objective pursued here is the exposition of the technique of affine duality so that it will become more commonly considered by statisticians.

The first algorithm uses the so-called sweep-line technique, and it runs in worst-case time complexity O(n² log n), using only O(n) space. The second algorithm depends on the recent development of data structures that permit the economical searching of the arrangement determined by n lines. It requires only O(n²) time, but the space needed is raised to O(n²).

KEY WORDS: Duality; Sweep-line technique; Global optimization; Arrangement of lines; Arrangement searching.

1. THE REGRESSION PROBLEM

One aim of robust regression is to fit a line to a set of data in such a way that inspection of the residuals will detect distinctive behavior of even a large minority of the observations. From that point of view the solid line of Figure 1 provides a far more effective fit than does the dotted line that represents the ordinary least squares (OLS) fit. A careful analysis of the residuals from the dotted line might reveal the differences in the two clumps of data, but any glance at the residuals from the solid line will shout out that the data with positive x-coordinates are "special" from the perspective of the majority. This point is illustrated by the residual plots of Figures 2a and 2b.

Figure 1. Benefits of High Breakdown Regression.

Our main purpose here is to provide two new and efficient algorithms for calculating the robust fit that Rousseeuw (1984) called the least median of squares (LMS) regression line. This fitting process has excellent behavior from the point of view just indicated. In particular, Rousseeuw compared LMS with six important robust regression competitors and found substantial advantage to the LMS method.

To introduce the computational issues associated with LMS regression, we first let (xi, yi) (1 ≤ i ≤ n) denote n points in the plane. The slope and intercept of the LMS fitted line are those values α* and β* that minimize

f(α, β) = median(|yi − αxi − β|²). (1.1)

As Rousseeuw showed, the LMS procedure does an excellent job following the majority. One formal method for expressing and quantifying this virtue relies on the notion of the breakdown point of an estimator (see, e.g., Donoho and Huber 1983). Roughly speaking, the breakdown point of an estimator is the smallest percentage of the observations that can be changed so as to make the estimate arbitrarily large. In the case of estimates of location, we can see that the breakdown point of the mean is 0%, since moving even one point out of n can cause an arbitrarily large change. Similarly, one can check that the breakdown point of the median is 50%. It is also easy to see that these same figures persist in the regression case; that is, OLS has breakdown point 0% and LMS regression has breakdown point 50%.

The first mentions of an algorithm for LMS regression are in Rousseeuw (1984) and Leroy and Rousseeuw (1984). The first formal study was given in Steele and Steiger (1986), where it was shown (a) for (xi, yi) in general position, f(α, β) has Ω(n²) local minima (Ω stands for "order of at least" just as O stands for "order of at most," and for this result general position means that no three points are collinear and no four points determine parallel lines), and (b) exact minimization of f is reducible to a finite search that has worst-case time complexity of O(n³).

Although an O(n³) algorithm is practical on modest data sets, faster algorithms are needed if the LMS fitting procedure is to be useful as an interactive tool on data sets of even moderate size. The main algorithm we give here runs in time O(n²). Because lower bounds for geometric algorithms are extremely rare (and generally very crude), it is to be expected that establishing the optimality of the O(n²) result will be difficult. Still, in all likelihood, there is no algorithm for LMS regression that is faster than O(n²), and applications of LMS regression may have to live with
that constraint. One should note that an approximate algorithm such as that of Leroy and Rousseeuw (1984) may give useful fits at a computational cost that may be substantially smaller than O(n²). Such approximate algorithms may be the only practical approach for LMS multiple regression.

We also give a second algorithm in which more careful attention is paid to the space requirements of LMS regression. Although our O(n²) algorithm requires O(n²) space, we are able to give an O(n² log n) algorithm that requires only O(n) space. There are certainly machines and problems for which the second algorithm would actually run faster because of the lessened overhead of memory management.

The algorithms we provide use two important new tools from computational geometry. These tools are (a) affine duality and (b) data structures that facilitate searching arrangements. The benefit of this feature is that these tools work together very well and should prove useful in many other statistical problems that deal with lines and points.

The algorithms given here have been tailored to be as simple as possible given the aim of attaining the achieved orders of space and time complexity. Obviously any implementation of these algorithms would entail numerous minor (or not so minor) speedups and improvements; in particular, we could have been much more stingy with our pointers. This situation rests in comfort with the widely endorsed philosophy of getting the right algorithm first and then tuning for high performance. One can consult Bentley (1982) for many elegant illustrations of this viewpoint.

Technical Remark. The word median needs to be unambiguously defined in the case of even n. Our convention for n even will be to take the high median, or the value of rank m = 1 + n/2 in an ordered list u1 ≤ u2 ≤ ··· ≤ un. This convention is made for specificity and clarity; it has no material impact on the analysis of our algorithms.
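To make the convention concrete, the following short Python sketch (ours, not code from the article) evaluates the criterion f(a, b) of Equation (1.1) using the high median just described:

```python
# Illustrative sketch (not from the article): evaluate the LMS criterion
# f(a, b) = median |y_i - a*x_i - b|^2 with the "high median" convention,
# i.e., for even n take the value of rank 1 + n/2 in the sorted list.

def high_median(values):
    """Return the high median of a list of numbers."""
    u = sorted(values)
    n = len(u)
    # 1-based rank is 1 + n/2 for even n and (n + 1)/2 for odd n;
    # both correspond to the 0-based index n // 2.
    return u[n // 2]

def lms_criterion(x, y, a, b):
    """f(a, b): high median of the squared residuals."""
    return high_median([(yi - a * xi - b) ** 2 for xi, yi in zip(x, y)])
```

The standard library function `statistics.median_high` implements the same high-median convention and could be used instead of the helper above.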
Figure 2. (a) Residuals From Robust Line. (b) Residuals From OLS Line.
2. GEOMETRIC TRANSFORMATIONS: POINT/LINE DUALITY

There have been powerful applications of transformations in geometry since the work of Steiner and Poncelet, but the thesis of Brown (1979) opened up the floodgate of applications to computational geometry (see, e.g., Chazelle, Guibas, and Lee 1983; Dobkin and Souvaine 1986; Edelsbrunner, O'Rourke, and Seidel 1983).

In statistical work the point/line duality was used most recently in Johnstone and Velleman (1985), which gives an algorithm for a different robust regression method, the resistant line. In data analysis and in simple linear regression, duality was also used by Emerson and Hoaglin (1983), Dolby (1960), and Daniels (1954).

The transformation that we will apply is the mapping T, which takes the point (a, b) to the line y = ax + b and takes the line y = ax + b to the point (−a, b). The minus sign in (−a, b) may be a little surprising, but it is precisely what is required to make T a bona fide duality.

Under this transformation, the point P that is determined by two lines L1 and L2 is mapped to the line TP, which is determined by the two points TL1 and TL2. Likewise the line L determined by two points P1 and P2 is mapped to the point TL determined by the two lines TP1 and TP2. These relations lie at the base of duality, and they are shared by the classical transformation S of Poncelet, which maps the point (a, b) to the line ax + by + 1 = 0 and maps the line ax + by + 1 = 0 to the point (a, b). The duality applied here is not as symmetrical as Poncelet's; in particular, we note S² = I, but since y = ax + b goes to (−a, b) we have T² ≠ I.

The duality given by S may have more algebraic appeal, but T has additional ordering properties that make it more useful in computational problems where order plays a role. The first important ordering property is that the transformation T preserves the relationship of "above" and "below" for points and lines. That is, if P is above L then TP is above TL. The transformation T also preserves the vertical distance between points and lines; TL is above TP by exactly the same distance that L is above P, and so on. This property is naturally crucial in a problem where a median residual is to be minimized over a selection of lines.
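The mapping T and its distance-preserving property can be sketched in a few lines of Python (our illustration, not code from the article); here a line y = cx + d is represented by the pair (c, d):

```python
# Illustrative sketch (not from the article): the duality T of Section 2.
# T sends the point (a, b) to the line y = a*x + b, and the line
# y = c*x + d to the point (-c, d).

def T_point(p):
    """Map the point (a, b) to the line y = a*x + b, stored as (slope, intercept)."""
    a, b = p
    return (a, b)

def T_line(line):
    """Map the line y = c*x + d to the point (-c, d)."""
    c, d = line
    return (-c, d)

def vertical_gap(p, line):
    """Signed vertical distance by which the point p lies above the line."""
    x, y = p
    c, d = line
    return y - (c * x + d)

# The order-preserving property: the point TL lies above the line TP by
# exactly the amount that the line L lies above the point P, so the sign
# flips relative to vertical_gap(P, L), which measures P above L.
```

For example, with P = (2, 5) and L: y = 3x + 1, the line is above the point by 2, and in the dual the point TL = (−3, 1) is above the line TP: y = 2x + 5 by the same amount.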
The second important invariance property of T is that if the slope of L is larger than the slope of L′, then the x-coordinate of TL is smaller than the x-coordinate of TL′. This is trivial from the definition, but is a very nice relation in practical applications.

We should make one final distinction about the way we will use duality and the transformation T. We will make no use of the so-called ideal objects, like "the point at infinity," which help give the projective plane much of its charm. All of our computations depend on ordinary objects in the finite plane.

3. EQUIOSCILLATION AND ITS DUAL

To articulate a finite problem that can be precisely dualized, it is worthwhile to introduce some terminology relevant to the local structure of LMS regressions. First, for any α and β the line l(α, β) = {(x, y): y = αx + β} defines residuals ri(α, β), which we can typically write as ri without fear of confusion. We say the line l(α, β) bisects three distinct points (xij, yij) (j = 1, 2, 3) if all of the rij are of the same magnitude r but not all have the same sign. If xi1 < xi2 < xi3 and ri1 = −ri2 = ri3, we say l(α, β) equioscillates with respect to the points.

It was proved in Steele and Steiger (1986) that the LMS regression line must be an equioscillating line relative to some triple of data points. Since there are at most (n choose 3) such lines, a naive algorithm would be to examine them all. Obviously our O(n²) algorithm must work differently.

Given an equioscillating line l(α, β), there are two related lines with slope α: the line L1 determined by the points P1 = (xi1, yi1) and P3 = (xi3, yi3), and the line L2 that goes through the point P2 = (xi2, yi2). A key property of the LMS regression line l(α*, β*) is that the number K of points between L1 and L2 must satisfy

K = (n − 4)/2 for n even, K = (n − 5)/2 for n odd, (3.1)

provided n > 3 (compare the main lemma of Steele and Steiger 1986). Both the idea of equioscillation and the enclosure property given by Equation (3.1) are illustrated in Figure 3.

One simple method of determining the LMS regression line can now be given. For each triple of data points P1, P2, and P3 that are ordered by x-coordinate, we first determine lines L1 and L2, as before. Next we find a triple with exactly K data points between L1 and L2 such that the vertical distance between L1 and L2 is minimized. The LMS regression line is the line that goes exactly between L1 and L2.

We can now see the rephrasing of our problem obtained by applying T to the data points and the lines they determine. The condition that there are K points between the lines L1 and L2 becomes the condition that there are K lines between the points TL1 and TL2. Further, TL1 is the point determined by the two lines TP1 and TP3; TL2 is the point on the line TP2 that has the same x-coordinate as the point TL1.

Figure 3. Equioscillation. For the given set of data points of size n = 20, the LMS regression line equioscillates relative to P1, P2, and P3. Note that L1, the LMS regression line, and L2 are equally spaced.
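The "one simple method" just described translates directly into a brute-force search over x-ordered triples. The Python sketch below (ours, deliberately unoptimized, and not code from the article) follows that recipe literally:

```python
# Illustrative sketch (not from the article): brute-force LMS fit via the
# triple search of Section 3. For each x-ordered triple, L1 is the line
# through P1 and P3, and L2 is the parallel line through P2. Triples with
# exactly K data points strictly between L1 and L2 are candidates, and the
# answer is the line halfway between the closest such pair.

def naive_lms_line(points):
    pts = sorted(points)                        # order by x-coordinate
    n = len(pts)
    K = (n - 4) // 2 if n % 2 == 0 else (n - 5) // 2
    best = None                                 # (gap, slope, intercept)
    for i in range(n - 2):
        for j in range(i + 1, n - 1):
            for k in range(j + 1, n):
                (x1, y1), (x2, y2), (x3, y3) = pts[i], pts[j], pts[k]
                if x3 == x1:
                    continue                    # degenerate triple
                a = (y3 - y1) / (x3 - x1)       # common slope of L1 and L2
                b1 = y1 - a * x1                # L1 through P1 and P3
                b2 = y2 - a * x2                # L2 through P2
                lo, hi = min(b1, b2), max(b1, b2)
                between = sum(1 for (x, y) in pts if lo < y - a * x < hi)
                gap = hi - lo                   # vertical distance of L1, L2
                if between == K and (best is None or gap < best[0]):
                    best = (gap, a, (b1 + b2) / 2.0)
    if best is None:
        raise ValueError("no triple with exactly K points between L1 and L2")
    _, a, b = best
    return a, b
```

This is the finite search that the faster algorithms of Sections 4-6 improve upon; as written it is far slower than the O(n³) bound of Steele and Steiger (1986), since it spends O(n) counting points for each triple.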
Our algorithmic problem in the dual can now be expressed as follows: Given n lines Li (1 ≤ i ≤ n) in general position in the plane with intersection points Pij (1 ≤ i < j ≤ n), find that line L* and intersection point P* such that, among all line-point pairs (L, P) that have exactly K of the Li cutting the vertical segment S joining L and P, the pair (L*, P*) has the smallest vertical distance. [For statistical purposes there is no loss in assuming that our n lines are in general position in the sense that no pair of lines are parallel and no more than two meet at any given point. If this were not the case, a statistically subliminal perturbation of the Li could make it so. This perturbation (if required) can be achieved with a preprocessing cost of order O(n). In addition, with minor adjustments to our algorithms, one can remove the stipulation that the lines be in general position.]

If this dual problem is solved, then in constant time we can interpret the solution as a solution to the primal.

4. SWEEP-LINE ALGORITHM AND SPACE EFFICIENCY

We are now in a position to give our first duality-based algorithm for computing the least median of residuals regression line. We will suppose for this section that the duality transformation has been performed, and we proceed to solve the dual problem spelled out previously.

The technique that we apply is a type of sweeping of the plane by combinatorially moving a vertical "sweep-line" from the left to the right, stopping at each of the Pij to perform some computation. This is probably the first application of these techniques in the area of regression, but Shamos and Hoey (1976) and Bentley and Ottman (1979) showed that sweep-line techniques are applicable to a variety of problems in computational geometry. Edelsbrunner and Welzl (1986) recently applied techniques similar to those presented here to achieve fast algorithms for such problems as half-planar range estimation, the k-nearest neighbors search problem, and minimum area triangulation.

The computational objectives of this first algorithm are (a) to illustrate the power of duality in a statistical problem and (b) to provide a reasonably fast algorithm that is optimally space efficient. The second objective requires some care and a few subtleties. In the first place, by "space efficient" we mean an algorithm that requires O(n) space. Since the problem is of this size, one cannot do better.

The sweep-line algorithm requires two off-the-shelf data structures. The first of these, to be called LIST, will maintain our set of n lines in a geometrically meaningful order. LIST can be implemented as a simple linked list, but we will want to add some additional pointers that will help LIST to interact with the second data structure.

Our second structure will be used to store a certain subset of n − 1 out of the set of n(n − 1)/2 intersection points Pij. The structure we create will permit us to access the "smallest" element in time O(1) and permits the insertion or deletion of an element in time O(log n). The ordering we will use to give meaning to "smallest" is that Pij precedes Pkl provided the x-coordinate of Pij is smaller than the x-coordinate of Pkl. To implement this second structure we can use a heap (see, e.g., Aho, Hopcroft, and Ullman 1974). We will refer to our particular structure as HEAP.

The initialization of these structures will show how they will be used and will help make the algorithm transparent. We begin by noting that to the left of all of the intersection points, the lines are themselves ordered by slope; that is, the line with smallest slope is above all of the other lines, and so on. Consequently, we place all of the Li into LIST in increasing order of slope. We also augment the structure holding the Li by storing along with each of the Li a set of four pointers, Up1, UpK, Down1, and DownK, which will point to the lines that are 1 above Li, K above Li, and so forth in the ordering given by LIST. Since lines too near the top or the bottom will not have lines satisfying the required conditions, we set the pointers to the null value to flag this situation.

The current order of the lines in LIST, increasing order of slope, is exactly equivalent to decreasing order of L0-intercept, where L0: x = A is any vertical line strictly to the left of all of the Pij. To compute a potential value for A, we must locate the leftmost intersection point Pij. But the leftmost Pij must be an intersection of two lines adjacent in LIST. Consequently, we pass through LIST and for each adjacent pair Li and Lj in LIST, we install Pij in HEAP. We do this while respecting the ordering of the Pij by x-coordinates, and we end up with a heap of size n − 1 with the leftmost of the Pij installed at the root. As an important adjunct to the heap-building process, we make a double set of pointers that associate the elements of HEAP and the elements of LIST. Specifically, for each adjacent pair Li and Lj in LIST and each corresponding intersection point in HEAP, we create pointers from Li and Lj to Pij and pointers from Pij to Li and Lj.

Once HEAP has been constructed, we fix the line L0: x = A by setting A to some arbitrary value less than the x-coordinate of the point at the root of HEAP. Next we introduce another vertical line L, which we call the sweep-line, and we initialize L to L0. The line L is not essential to the algorithm, but it helps articulate a program invariant that makes verifications clearer.

With the preprocessing and initialization phase complete, we should make a preliminary tally of resources consumed. First, because of the sorting according to slope, LIST requires time O(n log n) to build and space O(n) to store. Since HEAP is also of size only O(n), it can be created in time O(n) and space O(n) using well-known methods.

The algorithm for computing the LMS regression line is now almost trivial using the data structures we have built. The picture to keep in mind is that of sweeping the vertical line L from L0 until it has passed through all of the points. What we will do is manage LIST and HEAP in such a way that (a) LIST always provides the ordering of the Li according to decreasing order of L-intercept, (b) the pointers stored in LIST remain consistent with their initial definition, and (c) HEAP always stores as its minimal element the intersection point Pij that has smallest x-coordinate of any intersection point to the right of L.
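The initialization just described, sorting the dual lines by slope and seeding a heap with the intersections of adjacent pairs, might be sketched as follows (our illustration, not code from the article; lines are again (slope, intercept) pairs):

```python
# Illustrative sketch (not from the article): the LIST/HEAP setup of
# Section 4. LIST holds the lines in increasing order of slope, which is
# their top-to-bottom order far to the left of every crossing; HEAP holds
# the intersection of each adjacent pair, keyed by x-coordinate, so the
# leftmost intersection sits at the root.
import heapq

def intersection(l1, l2):
    """Intersection point of two non-parallel lines y = a*x + b."""
    (a1, b1), (a2, b2) = l1, l2
    x = (b2 - b1) / (a1 - a2)
    return (x, a1 * x + b1)

def initialize_sweep(lines):
    LIST = sorted(lines)                        # increasing slope
    HEAP = []
    for i in range(len(LIST) - 1):
        x, _ = intersection(LIST[i], LIST[i + 1])
        # entries (x, i, i + 1) order the heap by x-coordinate
        heapq.heappush(HEAP, (x, i, i + 1))
    return LIST, HEAP
```

In a full implementation the heap entries would carry the cross-pointers into LIST described above, so that each of the O(n²) sweep events can be processed with O(log n) heap work.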
As L reaches each new intersection point Pij, we compute the vertical distance to the lines UpK and DownK and compare the results with the previous optimum. Then we exchange the relative positions of Li and Lj in LIST. A key geometric observation, parallel to the idea used to compute L0, is that from the whole set of n(n − 1)/2 intersection points Pij (1 ≤ i < j ≤ n), the one that is nearest to the sweep-line L must be determined by a pair of lines {Li, Lj} which are adjacent in the updated LIST. This observation permits us to keep a set of candidates for the "next point to explore," which is at most of cardinality n − 1 and means that for each of the O(n²) new positions of L, we need update only three points of HEAP at a cost of O(log n) time each. Thus the entire algorithm runs in time O(n² log n).

5. DATA STRUCTURES FOR SEARCHING ARRANGEMENTS

To improve upon the O(n² log n) time complexity of our first algorithm, we introduce the idea of a hammock (Chazelle 1984). A hammock is a graph G constructed from the n given lines Li (1 ≤ i ≤ n) and an auxiliary pair of vertical lines L and R, called "left" and "right." If m is the number of intersections among the Li occurring between the two lines L and R, the hammock has a vertex set V of cardinality m + 2n formed by adding the set of intersection points on the vertical lines to the set of intersection points enclosed by L and R. The edge set E of G consists of those pairs of V that are joined by a segment of the arrangement. That is, vi and vj are joined if they are on the same line and there is no line that separates them. An illustration of a hammock is given in Figure 4a.

The key computational fact we need about hammocks is that Chazelle (1984) introduced an algorithm that computes either the edge list representation or the adjacency list representation of a hammock in time O(n log n + m) and space O(n + m). For our application, we must pick L and R so that the hammock contains all of the intersection points Pij (1 ≤ i < j ≤ n). Assuming as before that all n lines are in general position, we may conclude that m = n(n − 1)/2 and thus that the hammock will require O(n²) time and space.
Figure 4. (a) Small Hammock. (b) Adjacency List for the Hammock. (c) Entrance/Exit Rules.
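To make the hammock's vertex count concrete, here is a small Python sketch (ours, not from the article) that enumerates the m + 2n vertices for lines in general position:

```python
# Illustrative sketch (not from the article): the vertex set of a hammock.
# Given n lines (slope, intercept) in general position and two vertical
# lines x = left and x = right chosen to enclose every crossing, the vertex
# set has m + 2n points: the m pairwise crossings plus the 2n points where
# the lines meet the two verticals.

def hammock_vertices(lines, left, right):
    V = []
    n = len(lines)
    for i in range(n):
        a, b = lines[i]
        V.append((left, a * left + b))      # boundary vertex (degree 2 or 3)
        V.append((right, a * right + b))
        for j in range(i + 1, n):
            c, d = lines[j]
            x = (d - b) / (a - c)           # crossing of lines i and j
            if left < x < right:
                V.append((x, a * x + b))    # interior vertex (degree 4)
    return V
```

With all crossings enclosed, the list has length n(n − 1)/2 + 2n, matching the m + 2n count used in the complexity analysis; the edge set would then be filled in by Chazelle's construction.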
Furthermore, all of the vertices of G that are on L or R have degree 2 or 3, and all other vertices have degree 4.

The adjacency list representation of G, our choice, contains a linked list of the vertices such that each vertex v also has a pointer to a linked list containing the vertices adjacent to v in counterclockwise order. An illustration of the adjacency list representation corresponding to the hammock of Figure 4a is given in Figure 4b. In this bounded-degree situation, the linked adjacency list can be replaced with just a simple four-element array. We shall tag the four edges associated with a vertex not on L or R with labels like those in Figure 4c.

A final point about Chazelle's O(n²) construction of a complete hammock is that it is reasonably easy to implement and there is not a substantial amount of overhead being concealed in the constants of the O(n²) bound. A contrasting example is Strassen's O(n^2.81) algorithm for fast matrix multiplication, which has seen little practical implementation despite its remarkable theoretical importance (see, e.g., Aho et al. 1974).

6. TIME-EFFICIENT LMS REGRESSIONS

In addition to the hammock just described, we will need one further data structure before we can detail our second algorithm. Just to have a name, we will call this additional structure S. If Qi (1 ≤ i ≤ n) are the points of intersection of the Li on L, we require that (a) S be a linked list of the Li ordered by decreasing y-coordinate of Qi; (b) for each Li we have defined a pointer called Kbelow(Li), which points to the line Lj that is K + 1 below Li in the list S; and (c) we have defined a second pointer Kabove(Li), which points to the line Lk that is K + 1 above Li in S. If one of the lines Lj or Lk fails to exist, it is understood that the corresponding pointer takes on the null value.

A notion that helps the understanding of our second procedure is that of a strand. By a strand we mean that connected sequence of intervals that is obtained by starting at a point Qi, following the only non-L edge of the arrangement from Qi, and then successively following the traversal rule given in Figure 4c, making the unique permissible transition at each of the Pij until we arrive at R. The key property of a pair of strands W and W′ is that any vertical line between two points of W and W′ will always cross exactly the same number of lines Li (1 ≤ i ≤ n).

The algorithm is now easy. For each Qi (1 ≤ i ≤ n) we construct the strand Wi that originates from Qi. Using the pointers in S we find the Qj and Qk that are K + 1 above or K + 1 below Qi (if they exist). We then consider the associated strands Wj and Wk.

For specificity let us suppose that Wj is a strand that is K + 1 below Wi. We will traverse Wi by starting at Qi and taking the unique admissible edge out of each successive intersection point until we reach R. At each step of this process we increment our position in Wj until we locate the edge E of Wj that lies directly below our current intersection point P in Wi. We then compute the vertical distance between P and E. As we traverse Wi we retain a record of the minimum value found and the point-edge pair that produces that minimum.

Although the details have just been spelled out for Wi and the strand K + 1 below Wi, we obviously can process the strand that is K + 1 above Wi in the same way. (Typically one has either a strand K + 1 above or a strand K + 1 below, but it is also possible to have both.)

After each strand Wi has been processed, we can then at cost O(n) examine the minimal point-edge pairs associated with the Wi (1 ≤ i ≤ n). This will permit us to find the global minimum, that is, the point-edge pair (P*, E*) such that the vertical distance between P* and E* is minimal in the class of all point-edge pairs P and E such that K lines pass between P and E. This pair (P*, E*) is (of course) a solution to our dual version of the LMS regression problem.

Before leaving this algorithm we should note where resources are consumed. The creation of the basic hammock had time cost O(n²) and space cost O(n²). Creating S added the more modest space cost of O(n) and time cost O(n log n). There were n strands traversed, and each of these traversals required time O(n), so the cumulative time cost of the core algorithm was O(n²). The resources consumed over the whole process are, therefore, of order O(n²) for both space and time.

7. CONCLUDING REMARKS

The most obvious question that we have not fully addressed is that of multiple regression. The techniques of this article should extend, although the expositional and analytical overhead increases substantially. The fundamental tool for searching a higher-dimensional arrangement has been provided in Edelsbrunner et al. (1983), and the basic device of duality remains intact. The running time of any exact LMS regression in d dimensions is likely to be no faster than O(n^d). This fact makes the study of heuristic alternatives to LMS regression an open and interesting area for further study.

Two other issues of importance are (a) making these algorithms dynamic in the sense of being able to add or delete observations cheaply and maintain an LMS regression line, and (b) providing analogies to LMS regression that accommodate weights for different data points. The importance of these issues comes from the possibility of real-time applications of robust regression as part of filtering systems.

[Received March 1986. Revised December 1986.]

REFERENCES

Aho, A. V., Hopcroft, J. E., and Ullman, J. D. (1974), The Design and Analysis of Computer Algorithms, Reading, MA: Addison-Wesley.

Bentley, J. L. (1982), Writing Efficient Programs, Prentice-Hall Software Series, Englewood Cliffs, NJ: Prentice-Hall.

Bentley, J. L., and Ottman, T. A. (1979), "Algorithms for Reporting and Counting Geometric Intersections," IEEE Transactions on Computers, C-28, 643-647.

Brown, K. Q. (1979), "Geometric Transforms for Fast Geometric Algorithms," Technical Report CMU-CS-80-101, Carnegie-Mellon University, Dept. of Computer Science.

Chazelle, B. (1984), "Intersecting Is Easier Than Sorting," in Proceedings of the 16th Association for Computing Machinery Symposium on the Theory of Computing, pp. 125-135.

Chazelle, B., Guibas, L. J., and Lee, D. T. (1983), "The Power of Geometric Duality," in Proceedings of the 24th IEEE Foundations of Computer Science, pp. 217-225.

Daniels, H. E. (1954), "A Distribution-Free Test for Regression Parameters," Annals of Mathematical Statistics, 25, 499-513.

Dobkin, D. P., and Souvaine, D. L. (1986), "Computational Geometry: A User's Guide," in Advances in Robotics I: Algorithmic and Geometric Aspects of Robotics, eds. J. T. Schwartz and C. K. Yap, Hillsdale, NJ: Lawrence Erlbaum.

Dolby, J. L. (1960), "Graphical Procedures for Fitting the Best Line to a Set of Points," Technometrics, 2, 477-481.

Donoho, D. L., and Huber, P. J. (1983), "The Notion of Breakdown Point," in A Festschrift for Erich L. Lehmann, eds. P. J. Bickel, K. Doksum, and J. L. Hodges, Belmont, CA: Wadsworth, pp. 157-184.

Edelsbrunner, H., O'Rourke, J., and Seidel, R. (1983), "Constructing Arrangements of Lines and Hyperplanes With Applications," in Proceedings of the 24th IEEE Foundations of Computer Science, pp. 83-91.

Edelsbrunner, H., and Welzl, E. (1986), "Constructing Belts in Two-Dimensional Arrangements With Applications," SIAM Journal on Computing, 15, 271-284.

Emerson, J. D., and Hoaglin, D. C. (1983), "Resistant Lines for y Versus x," in Understanding Robust and Exploratory Data Analysis, eds. D. C. Hoaglin, F. Mosteller, and J. Tukey, New York: John Wiley.

Johnstone, I. M., and Velleman, P. F. (1985), "The Resistant Line and Related Regression Methods," Journal of the American Statistical Association, 80, 1041-1054.

Leroy, A., and Rousseeuw, P. J. (1984), "PROGRESS: A Program for Robust Regression," Report 201, Centrum voor Statistiek en Operationeel Onderzoek, University of Brussels.

Rousseeuw, P. J. (1984), "Least Median of Squares Regression," Journal of the American Statistical Association, 79, 871-880.

Shamos, M., and Hoey, D. (1976), "Geometric Intersection Problems," in Proceedings of the 17th IEEE Foundations of Computer Science, pp. 208-215.

Steele, J. M., and Steiger, W. L. (1986), "Algorithms and Complexity for Least Median of Squares Regression," Discrete Applied Mathematics, 13, 509-517.