Nonlinear Estimators and Tail Bounds for Dimension Reduction in $\ell_1$ Using Cauchy Random Projections
Abstract
For dimension reduction in $\ell_1$,^{1} the method of Cauchy random projections multiplies the original data matrix $\mathbf{A} \in \mathbb{R}^{n\times D}$ with a random matrix $\mathbf{R} \in \mathbb{R}^{D\times k}$ ($k \ll \min(n, D)$) whose entries are i.i.d. samples of the standard Cauchy $C(0,1)$. Because of the impossibility results, one cannot hope to recover the pairwise $\ell_1$ distances in $\mathbf{A}$ from $\mathbf{B} = \mathbf{A}\mathbf{R}$ using linear estimators without incurring large errors. However, nonlinear estimators are still useful for certain applications in data stream computation, information retrieval, learning, and data mining.
^{1} Revised May 29, 2021. The original version, titled Practical Procedures for Dimension Reduction in $\ell_1$, is available as a technical report in the Stanford Statistics archive (report No. 200604, June, 2006).
We propose three types of nonlinear estimators: the bias-corrected sample median estimator, the bias-corrected geometric mean estimator, and the bias-corrected maximum likelihood estimator. The sample median estimator and the geometric mean estimator are asymptotically (as $k\to\infty$) equivalent, but the latter is more accurate at small $k$. We derive explicit tail bounds for the geometric mean estimator and establish an analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in $\ell_1$, which is weaker than the classical JL lemma for dimension reduction in $\ell_2$.
Asymptotically, both the sample median estimator and the geometric mean estimator are about $80\%$ efficient compared to the maximum likelihood estimator (MLE). We analyze the moments of the MLE and propose approximating the distribution of the MLE by an inverse Gaussian.
Li, Hastie, and Church
Keywords: Dimension reduction, $\ell_1$ norm, Cauchy random projections, JL bound
1 Introduction
This paper focuses on dimension reduction in $\ell_1$, in particular on the method based on Cauchy random projections (Indyk, 2000), which is a special case of linear random projections.
The idea of linear random projections is to multiply the original data matrix $\mathbf{A} \in \mathbb{R}^{n\times D}$ with a random projection matrix $\mathbf{R} \in \mathbb{R}^{D\times k}$, resulting in a projected matrix $\mathbf{B} = \mathbf{A}\mathbf{R} \in \mathbb{R}^{n\times k}$. If $k \ll \min(n, D)$, then it should be much more efficient to compute certain summary statistics (e.g., pairwise distances) from $\mathbf{B}$ as opposed to $\mathbf{A}$. Moreover, $\mathbf{B}$ may be small enough to reside in physical memory while $\mathbf{A}$ is often too large to fit in the main memory.
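As a concrete illustration of this single matrix multiply, the following minimal sketch projects placeholder data (all dimensions and variable names here are ours, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n, D, k = 100, 10_000, 50           # n data points in D dimensions, reduced to k
A = rng.standard_normal((n, D))     # original data matrix (placeholder data)
R = rng.standard_normal((D, k))     # random projection matrix
B = A @ R                           # projected data matrix: one matrix multiply

print(B.shape)                      # (100, 50)
```

Each row of B is a k-dimensional summary of the corresponding D-dimensional row of A.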
The choice of the random projection matrix $\mathbf{R}$ depends on which norm we would like to work with. Indyk (2000) proposed constructing $\mathbf{R}$ from i.i.d. samples of $p$-stable distributions for dimension reduction in $\ell_p$ ($0 < p \le 2$). In the stable distribution family (Zolotarev, 1986), normal is 2-stable and Cauchy is 1-stable. Thus, we will call random projections for $\ell_2$ and $\ell_1$ normal random projections and Cauchy random projections, respectively.
In normal random projections (Vempala, 2004), we can estimate the original pairwise $\ell_2$ distances of $\mathbf{A}$ directly using the corresponding $\ell_2$ distances of $\mathbf{B}$ (up to a normalizing constant). Furthermore, the Johnson-Lindenstrauss (JL) lemma (Johnson and Lindenstrauss, 1984) provides the performance guarantee. We will review normal random projections in more detail in Section 2.
For Cauchy random projections, we should not use the $\ell_1$ distance in $\mathbf{B}$ to approximate the original $\ell_1$ distance in $\mathbf{A}$, as the Cauchy distribution does not even have a finite first moment. The impossibility results (Brinkman and Charikar, 2003; Lee and Naor, 2004; Brinkman and Charikar, 2005) have proved that one cannot hope to recover the $\ell_1$ distance using linear projections and linear estimators (e.g., the sample mean) without incurring large errors. Fortunately, the impossibility results do not rule out nonlinear estimators, which may still be useful in certain applications in data stream computation, information retrieval, learning, and data mining.
Indyk (2000) proposed using the sample median (instead of the sample mean) in Cauchy random projections and described its application in data stream computation. In this study, we provide three types of nonlinear estimators: the bias-corrected sample median estimator, the bias-corrected geometric mean estimator, and the bias-corrected maximum likelihood estimator. The sample median estimator and the geometric mean estimator are asymptotically equivalent (i.e., both are about $80\%$ as efficient as the maximum likelihood estimator), but the latter is more accurate at small sample size $k$. Furthermore, we derive explicit tail bounds for the bias-corrected geometric mean estimator and establish an analog of the JL lemma for dimension reduction in $\ell_1$.
This analog of the JL lemma for $\ell_1$ is weaker than the classical JL lemma for $\ell_2$, as the geometric mean estimator is a nonconvex norm and hence is not a metric. Many efficient algorithms, such as some sublinear-time (using superlinear memory) nearest neighbor algorithms (Shakhnarovich et al., 2005), rely on the metric properties (e.g., the triangle inequality). Nevertheless, nonlinear estimators may still be useful in important scenarios.

Estimating distances online
The original data matrix $\mathbf{A}$ requires $O(nD)$ storage space and hence is often too large for physical memory. The storage cost of all pairwise distances is $O(n^2)$, which may also be too large for the memory. For example, in information retrieval, $n$ could be the total number of word types or documents at Web scale. To avoid page faults, it may be more efficient to estimate the distances on the fly from the projected data matrix $\mathbf{B}$ in the memory.
Computing all pairwise distances
In distance-based clustering and classification applications, we need to compute all pairwise distances in $\mathbf{A}$, at the cost of time $O(n^2 D)$. Using Cauchy random projections, the cost can be reduced to $O(nDk + n^2 k)$. Because $k \ll \min(n, D)$, the savings could be enormous.
Linear scan nearest neighbor searching
We can always search for the nearest neighbors by linear scans. When working with the projected data matrix $\mathbf{B}$ (which is in the memory), the cost of searching for the nearest neighbor of one data point is time $O(nk)$, which may still be significantly faster than sublinear algorithms working with the original data matrix $\mathbf{A}$ (which is often on the disk).
We briefly comment on coordinate sampling, another strategy for dimension reduction. Given a data matrix $\mathbf{A}$, one can randomly sample $k$ columns from $\mathbf{A}$ and estimate the summary statistics (including $\ell_1$ and $\ell_2$ distances). Despite its simplicity, there are two major disadvantages in coordinate sampling. First, there is no performance guarantee. For heavy-tailed data, we may have to choose a very large $k$ in order to achieve sufficient accuracy. Second, large datasets are often highly sparse, for example, text data (Dhillon and Modha, 2001) and market-basket data (Aggarwal and Wolf, 1999; Strehl and Ghosh, 2000). Li and Church (2005) and Li et al. (2006a) provide an alternative coordinate sampling strategy, called Conditional Random Sampling (CRS), suitable for sparse data. For non-sparse data, however, methods based on linear random projections are superior.
The rest of the paper is organized as follows. Section 2 reviews linear random projections. Section 3 summarizes the main results for three types of nonlinear estimators. Section 4 presents the sample median estimators. Section 5 concerns the geometric mean estimators. Section 6 is devoted to the maximum likelihood estimators. Section 7 concludes the paper.
2 Introduction to Linear Random Projections
We give a review of linear random projections, including normal and Cauchy random projections.
Denote the original data matrix by $\mathbf{A} \in \mathbb{R}^{n\times D}$, i.e., $n$ data points in $D$ dimensions. Let $u_i \in \mathbb{R}^{D}$ be the $i$th row of $\mathbf{A}$. Let $\mathbf{R} \in \mathbb{R}^{D\times k}$ be a random matrix whose entries $r_{ts}$ are i.i.d. samples of some random variable $r$. The projected data matrix is $\mathbf{B} = \mathbf{A}\mathbf{R} \in \mathbb{R}^{n\times k}$. Let $v_i \in \mathbb{R}^{k}$ be the $i$th row of $\mathbf{B}$. Then $v_i = \mathbf{R}^{\mathrm{T}} u_i$, with entries $v_{i,j} = \sum_{t=1}^{D} r_{tj}\, u_{i,t}$, $j = 1, 2, \ldots, k$, where $r_{\cdot j}$ denotes the $j$th column of $\mathbf{R}$.
For simplicity, we focus on the leading two rows, $u_1$ and $u_2$, in $\mathbf{A}$, and the leading two rows, $v_1$ and $v_2$, in $\mathbf{B}$. Define $x_j$ to be
(1) $x_j = v_{1,j} - v_{2,j} = \sum_{t=1}^{D} r_{tj}\left(u_{1,t} - u_{2,t}\right), \qquad j = 1, 2, \ldots, k.$
If we sample $r$ i.i.d. from a stable distribution (Zolotarev, 1986; Indyk, 2000), then the $x_j$'s are also i.i.d. samples of the same stable distribution, with a different scale parameter.
2.1 Normal Random Projections
When $r$ is sampled from the standard normal, i.e., $r_{ts} \sim N(0,1)$, i.i.d., then
(2) $x_j = \sum_{t=1}^{D} r_{tj}\left(u_{1,t}-u_{2,t}\right) \sim N\!\left(0,\ \sum_{t=1}^{D}\left(u_{1,t}-u_{2,t}\right)^2\right), \qquad j = 1, 2, \ldots, k,$
because a weighted sum of normals is also normal.
Denote the squared $\ell_2$ distance between $u_1$ and $u_2$ by $d_{\ell_2} = \|u_1 - u_2\|_2^2 = \sum_{t=1}^{D}\left(u_{1,t}-u_{2,t}\right)^2$. We can estimate $d_{\ell_2}$ from the sample squared $\ell_2$ distance:
(3) $\hat d_{\ell_2} = \frac{1}{k}\sum_{j=1}^{k} x_j^2 = \frac{1}{k}\sum_{j=1}^{k}\left(v_{1,j}-v_{2,j}\right)^2.$
It is easy to show that (e.g., (Vempala, 2004; Li et al., 2006b))
(4) $\mathrm{E}\left(\hat d_{\ell_2}\right) = d_{\ell_2}, \qquad \mathrm{Var}\left(\hat d_{\ell_2}\right) = \frac{2}{k}\, d_{\ell_2}^2,$
and, for $0 < \epsilon < 1$,
(5) $\Pr\left(\left|\hat d_{\ell_2} - d_{\ell_2}\right| \ge \epsilon\, d_{\ell_2}\right) \le 2\exp\left(-k\left(\frac{\epsilon^2}{4} - \frac{\epsilon^3}{6}\right)\right).$
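The unbiasedness and the $\frac{2}{k} d_{\ell_2}^2$ variance are easy to check by simulation. The sketch below (all data and parameters are placeholders of ours) repeats the projection many times for one fixed pair of points:

```python
import numpy as np

rng = np.random.default_rng(42)

D, k, trials = 1_000, 100, 500
u1 = rng.uniform(-1, 1, D)
u2 = rng.uniform(-1, 1, D)
d2 = np.sum((u1 - u2) ** 2)              # true squared l2 distance

est = np.empty(trials)
for i in range(trials):
    R = rng.standard_normal((D, k))      # fresh projection each trial
    x = (u1 - u2) @ R                    # x_j = v_{1,j} - v_{2,j}
    est[i] = np.mean(x ** 2)             # sample squared l2 distance

print(est.mean() / d2)                   # close to 1: unbiased
print(est.var() / (2 * d2 ** 2 / k))     # close to 1: variance matches
```

Both printed ratios concentrate around 1 as the number of trials grows.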
We would like to bound the error probability, uniformly over all pairs of data points, by some $\delta > 0$. Since there are in total $n$ data points, we need to bound the tail probabilities simultaneously for all $n(n-1)/2 < n^2/2$ pairs. By the Bonferroni union bound, it suffices if
(6) $\frac{n^2}{2}\,\Pr\left(\left|\hat d_{\ell_2} - d_{\ell_2}\right| \ge \epsilon\, d_{\ell_2}\right) \le \delta.$
Using (5), it suffices if
(7) $n^2 \exp\left(-k\left(\frac{\epsilon^2}{4} - \frac{\epsilon^3}{6}\right)\right) \le \delta,$ i.e.,
(8) $k \ge \frac{2\log n - \log \delta}{\epsilon^2/4 - \epsilon^3/6}.$
Therefore, we obtain one version of the JL lemma:
If $k \ge \frac{2\log n - \log\delta}{\epsilon^2/4 - \epsilon^3/6}$, then with probability at least $1-\delta$, the squared $\ell_2$ distance between any pair of data points (among $n$ data points) can be approximated within a $1 \pm \epsilon$ fraction of the truth, using the squared $\ell_2$ distance of the projected data after normal random projections.
Many versions of the JL lemma have been proved (Johnson and Lindenstrauss, 1984; Frankl and Maehara, 1987; Indyk and Motwani, 1998; Arriaga and Vempala, 1999; Dasgupta and Gupta, 2003; Indyk, 2000, 2001; Achlioptas, 2003; Arriaga and Vempala, 2006; Ailon and Chazelle, 2006).
Note that we do not have to sample $r$ from the standard normal for dimension reduction in $\ell_2$. For example, we can sample $r$ from certain sub-Gaussian distributions (Indyk and Naor, 2006), in particular, the following sparse projection distribution:
(9) $r = \sqrt{s} \times \begin{cases} -1 & \text{with probability } \frac{1}{2s}, \\ 0 & \text{with probability } 1 - \frac{1}{s}, \\ +1 & \text{with probability } \frac{1}{2s}. \end{cases}$
When $s \le 3$, Achlioptas (2003) proved the JL lemma for the above sparse projection, which can also be shown by sub-Gaussian analysis (Li et al., 2006c). Recently, Li et al. (2006d) proposed very sparse random projections using $s \gg 3$ in (9), based on two practical considerations:

The dimension $D$ should be very large, otherwise there would be no need for dimension reduction.

The original $\ell_2$ distance should make engineering sense, in that the second (or higher) moments should be bounded (otherwise various term-weighting schemes will be applied).
Based on these two practical assumptions, the projected data are asymptotically normal at a fast rate of convergence as $D \to \infty$. Of course, very sparse random projections do not have worst-case performance guarantees.
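A sketch of sampling from (9) and using it to estimate a squared $\ell_2$ distance; the function name and test data are ours, and $s = 3$ corresponds to the Achlioptas construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_projection(D, k, s, rng):
    """Entries: sqrt(s) * {+1 w.p. 1/(2s), 0 w.p. 1 - 1/s, -1 w.p. 1/(2s)}."""
    vals = rng.choice(np.array([1.0, 0.0, -1.0]), size=(D, k),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return np.sqrt(s) * vals

D, k, s = 5_000, 500, 3
u1 = rng.uniform(-1, 1, D)
u2 = rng.uniform(-1, 1, D)
d2 = np.sum((u1 - u2) ** 2)

R = sparse_projection(D, k, s, rng)
x = (u1 - u2) @ R
print(np.mean(x ** 2) / d2)       # close to 1; two thirds of R's entries are 0
```

With $s = 3$, only about one third of the entries of the projection matrix are nonzero, so the matrix multiply can be made correspondingly cheaper.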
2.2 Cauchy Random Projections
In Cauchy random projections, we sample $r$ i.i.d. from the standard Cauchy distribution, i.e., $r_{ts} \sim C(0,1)$. By the 1-stability of the Cauchy distribution (Zolotarev, 1986), we know that
(10) $x_j = v_{1,j} - v_{2,j} \sim C\!\left(0,\ \sum_{t=1}^{D}\left|u_{1,t}-u_{2,t}\right|\right), \qquad j = 1, 2, \ldots, k.$
That is, the projected differences are also Cauchy random variables, with the scale parameter being the $\ell_1$ distance, $d = \|u_1 - u_2\|_1 = \sum_{t=1}^{D}\left|u_{1,t}-u_{2,t}\right|$, in the original space.
Recall that a Cauchy random variable $z \sim C(0, d)$ has the density
(11) $f(z) = \frac{d}{\pi\left(z^2 + d^2\right)}, \qquad -\infty < z < \infty.$
The easiest way to see the 1-stability is via the characteristic function,
(12) $\mathrm{E}\left(\exp\left(\sqrt{-1}\, z t\right)\right) = \exp\left(-d\,|t|\right),$
(13) $\mathrm{E}\left(\exp\left(\sqrt{-1}\,\left(\sum_{j=1}^{m} c_j z_j\right) t\right)\right) = \exp\left(-d\,\sum_{j=1}^{m}\left|c_j\right|\,|t|\right),$
for $z_1$, $z_2$, …, $z_m$, i.i.d. $C(0, d)$, and any constants $c_1$, $c_2$, …, $c_m$.
Therefore, in Cauchy random projections, the problem boils down to estimating the Cauchy scale parameter $d$ of $C(0,d)$ from $k$ i.i.d. samples $x_j \sim C(0,d)$. Unfortunately, unlike in normal random projections, we can no longer estimate $d$ from the sample mean (i.e., $\frac{1}{k}\sum_{j=1}^{k}\left|x_j\right|$) because $\mathrm{E}\left(\left|x_j\right|\right) = \infty$.
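Both the 1-stability and the failure of the sample mean are easy to see numerically. In this sketch (all parameters are ours), the median of the $|x_j|$'s lands near the $\ell_1$ distance, while the absolute sample mean is dominated by a few huge samples:

```python
import numpy as np

rng = np.random.default_rng(7)

D, k = 2_000, 10_000
u1 = rng.uniform(0, 1, D)
u2 = rng.uniform(0, 1, D)
d = np.sum(np.abs(u1 - u2))          # true l1 distance = the Cauchy scale

R = rng.standard_cauchy((D, k))      # entries i.i.d. C(0, 1)
x = (u1 - u2) @ R                    # by 1-stability, x_j ~ C(0, d)

print(np.mean(np.abs(x)) / d)        # unstable: E|x_j| is infinite
print(np.median(np.abs(x)) / d)      # close to 1
```

Rerunning with different seeds, the first ratio fluctuates wildly while the second stays near 1, which is exactly why the median (not the mean) is the natural starting point.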
Although the impossibility results (Lee and Naor, 2004; Brinkman and Charikar, 2005) have ruled out estimators that are metrics, there is enough information to recover $d$ from the $k$ samples $x_j \sim C(0,d)$ with high accuracy. For example, Indyk (2000) proposed using the sample median as an estimator. The problem with the sample median estimator is its inaccuracy at small $k$ and the difficulty in deriving the explicit tail bounds needed for determining the sample size $k$.
This study focuses on deriving better estimators and explicit tail bounds for Cauchy random projections. Our main results are summarized in the next section, before we present the detailed derivations. Casual readers may skip the derivations after Section 3.
3 Main Results
We propose three types of nonlinear estimators: the bias-corrected sample median estimator ($\hat d_{me,c}$), the bias-corrected geometric mean estimator ($\hat d_{gm}$), and the bias-corrected maximum likelihood estimator ($\hat d_{MLE,c}$). $\hat d_{me,c}$ and $\hat d_{gm}$ are asymptotically equivalent, but the latter is more accurate at small sample size $k$. In addition, we derive explicit tail bounds for $\hat d_{gm}$, from which an analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in $\ell_1$ follows. Asymptotically, both $\hat d_{me,c}$ and $\hat d_{gm}$ are about $80\%$ efficient compared to the maximum likelihood estimator $\hat d_{MLE,c}$. We propose accurate approximations to the distribution and tail bounds of $\hat d_{MLE,c}$, while exact closed-form answers are not attainable.
3.1 The Bias-corrected Sample Median Estimator
Denoted by $\hat d_{me,c}$, the bias-corrected sample median estimator is
(14) $\hat d_{me,c} = \frac{\hat d_{me}}{b_{me}},$
where
(15) $\hat d_{me} = \mathrm{median}\left\{\left|x_j\right|,\ j = 1, 2, \ldots, k\right\},$
(16) $b_{me} = \frac{(2m+1)!}{(m!)^2}\int_0^1 \tan\left(\frac{\pi t}{2}\right) t^m (1-t)^m\, dt.$
Here, for convenience, we only consider $k = 2m+1$, $m$ = 1, 2, 3, …
Some key properties of $\hat d_{me,c}$:

$\mathrm{E}\left(\hat d_{me,c}\right) = d$, i.e., $\hat d_{me,c}$ is unbiased.

When $k = 2m+1$, the variance of $\hat d_{me,c}$ is
(17) $\mathrm{Var}\left(\hat d_{me,c}\right) = d^2\left(\frac{b_{me,2}}{b_{me}^2} - 1\right), \qquad b_{me,2} = \frac{(2m+1)!}{(m!)^2}\int_0^1 \tan^2\left(\frac{\pi t}{2}\right) t^m (1-t)^m\, dt,$ if $m \ge 2$.

As $k \to \infty$, $\hat d_{me,c}$ converges to a normal in distribution
(18) $\sqrt{k}\left(\hat d_{me,c} - d\right) \xrightarrow{D} N\!\left(0,\ \frac{\pi^2}{4} d^2\right).$
3.2 The Bias-corrected Geometric Mean Estimator
Denoted by $\hat d_{gm}$, the bias-corrected geometric mean estimator is defined as
(19) $\hat d_{gm} = \cos^{k}\!\left(\frac{\pi}{2k}\right)\prod_{j=1}^{k}\left|x_j\right|^{1/k}, \qquad k > 1.$
Important properties of $\hat d_{gm}$ include:

This estimator is a nonconvex norm, i.e., the $\ell_{\lambda}$ norm with $\lambda \to 0$.

It is unbiased, i.e., $\mathrm{E}\left(\hat d_{gm}\right) = d$.

Its variance is (for $k > 2$)
(20) $\mathrm{Var}\left(\hat d_{gm}\right) = d^2\left(\frac{\cos^{2k}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{\pi}{k}\right)} - 1\right) = \frac{\pi^2}{4}\frac{d^2}{k} + O\left(\frac{1}{k^2}\right).$

For $0 < \epsilon < 1$, its tail bounds can be represented in exponential forms
(21) $\Pr\left(\hat d_{gm} \ge (1+\epsilon) d\right) \le \exp\left(-\frac{k\,\epsilon^2}{8(1+\epsilon)}\right),$ (22) $\Pr\left(\hat d_{gm} \le (1-\epsilon) d\right) \le \exp\left(-\frac{k\,\epsilon^2}{8}\right).$
These exponential tail bounds yield an analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in $\ell_1$:
If $k \ge \frac{8(1+\epsilon)}{\epsilon^2}\left(2\log n - \log\delta\right)$, then with probability at least $1-\delta$, the $\ell_1$ distance between any pair of data points (among $n$ data points) can be recovered by $\hat d_{gm}$ within a $1 \pm \epsilon$ fraction of the truth.
3.3 The Bias-corrected Maximum Likelihood Estimator
Denoted by $\hat d_{MLE,c}$, the bias-corrected maximum likelihood estimator is
(23) $\hat d_{MLE,c} = \hat d_{MLE}\left(1 - \frac{1}{k}\right),$
where $\hat d_{MLE}$ solves the nonlinear MLE equation
(24) $\sum_{j=1}^{k}\frac{1}{x_j^2 + \hat d_{MLE}^2} - \frac{k}{2\,\hat d_{MLE}^2} = 0.$
Some properties of $\hat d_{MLE,c}$:

It is nearly unbiased, $\mathrm{E}\left(\hat d_{MLE,c}\right) = d + O\left(\frac{1}{k^2}\right)$.

Its asymptotic variance is
(25) $\mathrm{Var}\left(\hat d_{MLE,c}\right) = \frac{2d^2}{k} + O\left(\frac{1}{k^2}\right),$
i.e., as $k \to \infty$, $\hat d_{me,c}$ and $\hat d_{gm}$ attain only $\frac{2d^2/k}{\pi^2 d^2/(4k)} = \frac{8}{\pi^2} \approx 81\%$ of its efficiency.

Its distribution can be accurately approximated by an inverse Gaussian, at least in the small-deviation range. Based on the inverse Gaussian approximation, we suggest the following approximate tail bound
(26)
which has been verified by simulations in the tail probability ranges of practical interest.
4 The Sample Median Estimators
Recall that in Cauchy random projections, $\mathbf{B} = \mathbf{A}\mathbf{R}$, we denote the leading two rows in $\mathbf{A}$ by $u_1$, $u_2$, and the leading two rows in $\mathbf{B}$ by $v_1$, $v_2$. Our goal is to estimate the $\ell_1$ distance $d = \|u_1 - u_2\|_1$ from the samples $x_j = v_{1,j} - v_{2,j} \sim C(0, d)$, $j = 1, 2, \ldots, k$, i.i.d.
It is easy to show (e.g., Indyk (2000)) that the population median of $|x_j|$ is $d$. Therefore, it is natural to consider estimating $d$ from the sample median,
(27) $\hat d_{me} = \mathrm{median}\left\{\left|x_j\right|,\ j = 1, 2, \ldots, k\right\}.$
As illustrated in the following lemma (proved in Appendix A), the sample median estimator $\hat d_{me}$ is asymptotically unbiased and normal. For small samples (i.e., small $k$), however, $\hat d_{me}$ is severely biased.
The sample median estimator $\hat d_{me}$, defined in (27), is asymptotically unbiased and normal:
(28) $\sqrt{k}\left(\hat d_{me} - d\right) \xrightarrow{D} N\!\left(0,\ \frac{\pi^2 d^2}{4}\right).$
When $k = 2m+1$, $m$ = 1, 2, 3, …, the $t$th moment of $\hat d_{me}$ can be represented as
(29) $\mathrm{E}\left(\hat d_{me}^{\,t}\right) = d^{\,t}\,\frac{(2m+1)!}{(m!)^2}\int_0^1 \tan^t\left(\frac{\pi y}{2}\right) y^m (1-y)^m\, dy.$
If $t \le m$, then $\mathrm{E}\left(\hat d_{me}^{\,t}\right) < \infty$.
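The severity of the small-sample bias is easy to reproduce. The sketch below (parameters ours) averages the sample median estimator over many independent replications; the mean-to-truth ratio empirically estimates the bias factor and visibly shrinks toward 1 as $k$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)

d, trials = 2.0, 10_000
ratios = {}
for k in (5, 21, 1001):
    x = d * rng.standard_cauchy((trials, k))    # trials-by-k table of C(0, d)
    est = np.median(np.abs(x), axis=1)          # sample median estimator per row
    ratios[k] = est.mean() / d                  # empirical E(d_me) / d
    print(k, ratios[k])                         # > 1, decreasing in k
```

At $k = 5$ the upward bias is on the order of tens of percent, while at $k = 1001$ it is negligible.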
For simplicity, we only consider $k = 2m+1$ when evaluating the bias of $\hat d_{me}$.
Once we know $\mathrm{E}\left(\hat d_{me}\right)$, we can remove the bias of $\hat d_{me}$ using
(30) $\hat d_{me,c} = \frac{\hat d_{me}}{b_{me}},$
where the bias correction factor is
(31) $b_{me} = \frac{\mathrm{E}\left(\hat d_{me}\right)}{d} = \frac{(2m+1)!}{(m!)^2}\int_0^1 \tan\left(\frac{\pi t}{2}\right) t^m (1-t)^m\, dt.$
$b_{me}$ can be numerically evaluated and tabulated, at least for small $m$.^{2}
^{2} It is possible to express $b_{me}$ as an infinite sum. Note that $\frac{(2m+1)!}{(m!)^2}\, t^m(1-t)^m$, $0 \le t \le 1$, is the probability density of a Beta($m+1$, $m+1$) distribution.
Obviously, $\hat d_{me,c}$ is unbiased, i.e., $\mathrm{E}\left(\hat d_{me,c}\right) = d$. Its variance would be
(32) $\mathrm{Var}\left(\hat d_{me,c}\right) = d^2\left(\frac{b_{me,2}}{b_{me}^2} - 1\right), \qquad b_{me,2} = \frac{(2m+1)!}{(m!)^2}\int_0^1 \tan^2\left(\frac{\pi t}{2}\right) t^m (1-t)^m\, dt \quad (m \ge 2).$
Of course, $\hat d_{me}$ and $\hat d_{me,c}$ are asymptotically equivalent, i.e., $b_{me} \to 1$ as $k \to \infty$.
Figure 1 plots $b_{me}$ as a function of $m$, indicating that $\hat d_{me}$ is severely biased when $k$ is small; as $k$ increases, the bias becomes negligible. Note that, because $b_{me} > 1$, the bias correction not only removes the bias of $\hat d_{me}$ but also reduces its variance.
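As a cross-check, $b_{me}$ can be evaluated by direct numerical integration. The sketch below assumes the Beta-type representation $b_{me} = \mathrm{E}\left[\tan(\pi T/2)\right]$ with $T \sim \mathrm{Beta}(m+1, m+1)$, which follows from the density of the sample median of $k = 2m+1$ folded Cauchy samples; the function name and grid size are ours:

```python
import math
import numpy as np

def b_me(m, grid=200_000):
    """Bias factor E(d_me) / d for k = 2m + 1, by midpoint-rule integration."""
    t = (np.arange(grid) + 0.5) / grid                 # midpoints of (0, 1)
    coef = math.factorial(2 * m + 1) / math.factorial(m) ** 2
    # tan(pi t / 2) * (1 - t)^m stays bounded near t = 1, so the rule is safe
    return coef * np.mean(np.tan(np.pi * t / 2) * t**m * (1 - t) ** m)

for m in (1, 2, 5, 10, 25):
    print(m, b_me(m))          # decreases toward 1 as k = 2m + 1 grows
```

The computed factors exceed 1 for every finite $m$ and decay toward 1, matching the qualitative behavior described above.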
The sample median is a special case of sample quantile estimators (Fama and Roll, 1968, 1971). For example, one version of the quantile estimators given by McCulloch (1986) would be
(33) $\hat d_{q} = \frac{\hat q_{.75} - \hat q_{.25}}{2},$
where $\hat q_{.75}$ and $\hat q_{.25}$ are the .75 and .25 sample quantiles of $\{x_j\}_{j=1}^{k}$, respectively.
Our simulations indicate that $\hat d_{me,c}$ actually slightly outperforms $\hat d_q$. This is not surprising: $\hat d_q$ works for any Cauchy distribution, whose location parameter does not have to be zero, while $\hat d_{me,c}$ takes advantage of the fact that the Cauchy location parameter is always zero in our case.
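A quick sketch comparing the interquartile estimator (33) with the sample median of the absolute values (test parameters ours):

```python
import numpy as np

rng = np.random.default_rng(11)

d, k = 2.0, 100_000
x = d * rng.standard_cauchy(k)            # x_j ~ C(0, d)

q75, q25 = np.quantile(x, [0.75, 0.25])
d_q = (q75 - q25) / 2                     # interquartile estimator
d_me = np.median(np.abs(x))               # sample median estimator

print(d_q, d_me)                          # both close to d = 2
```

For $C(0, d)$ the .75 and .25 quantiles are $\pm d$, so both statistics target the same scale parameter.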
5 The Geometric Mean Estimators
This section derives estimators based on the geometric mean, which are more accurate than the sample median estimators. The geometric mean estimators allow us to derive tail bounds in explicit forms and (consequently) an analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in $\ell_1$.
Recall, our goal is to estimate $d$ from the i.i.d. samples $x_j \sim C(0, d)$, $j = 1, 2, \ldots, k$. To help derive the geometric mean estimators, we first study two nonlinear estimators based on the fractional moment, i.e., $\mathrm{E}\left(|x_j|^{\lambda}\right)$ ($0 < \lambda < 1$), and the logarithmic moment, i.e., $\mathrm{E}\left(\log|x_j|\right)$, respectively, as presented in the following lemma. See the proof in Appendix B.
Assume $x \sim C(0, d)$. Then
(34) $\mathrm{E}\left(|x|^{\lambda}\right) = \frac{d^{\lambda}}{\cos\left(\lambda\pi/2\right)}, \qquad -1 < \lambda < 1,$
(35) $\mathrm{E}\left(\log|x|\right) = \log d,$
(36) $\mathrm{Var}\left(\log|x|\right) = \frac{\pi^2}{4},$
from which we can derive two biased estimators of $d$ from the i.i.d. samples $x_j \sim C(0, d)$:
(37) $\hat d_{\lambda} = \left(\cos\left(\frac{\lambda\pi}{2}\right)\frac{1}{k}\sum_{j=1}^{k}\left|x_j\right|^{\lambda}\right)^{1/\lambda},$
(38) $\hat d_{\log} = \exp\left(\frac{1}{k}\sum_{j=1}^{k}\log\left|x_j\right|\right),$
whose asymptotic variances are, respectively,
(39) $\mathrm{Var}\left(\hat d_{\lambda}\right) = \frac{d^2}{k}\,\frac{1}{\lambda^2}\left(\frac{\cos^2\left(\lambda\pi/2\right)}{\cos\left(\lambda\pi\right)} - 1\right) + O\left(\frac{1}{k^2}\right), \qquad -\frac{1}{2} < \lambda < \frac{1}{2},\ \lambda \ne 0,$
(40) $\mathrm{Var}\left(\hat d_{\log}\right) = \frac{\pi^2}{4}\,\frac{d^2}{k} + O\left(\frac{1}{k^2}\right).$
The term $\frac{1}{\lambda^2}\left(\frac{\cos^2(\lambda\pi/2)}{\cos(\lambda\pi)} - 1\right)$ decreases with decreasing $|\lambda|$, reaching a limit
(41) $\lim_{\lambda \to 0}\ \frac{1}{\lambda^2}\left(\frac{\cos^2(\lambda\pi/2)}{\cos(\lambda\pi)} - 1\right) = \frac{\pi^2}{4}.$
In other words, the variance of $\hat d_{\lambda}$ converges to that of $\hat d_{\log}$ as $\lambda$ approaches zero.
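The limit in (41) can be verified numerically; this small sketch (function name ours) evaluates the variance coefficient for a few values of $\lambda$:

```python
import numpy as np

def var_factor(lam):
    """Leading variance coefficient of the fractional-moment estimator."""
    return (np.cos(lam * np.pi / 2) ** 2 / np.cos(lam * np.pi) - 1) / lam**2

for lam in (0.4, 0.25, 0.1, 0.01):
    print(lam, var_factor(lam))      # decreases toward pi^2 / 4 = 2.4674...
```

Note that the coefficient is only defined for $|\lambda| < 1/2$ (so that $\cos(\lambda\pi) > 0$), and it stays strictly above $\pi^2/4$ for $\lambda \ne 0$.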
Note that $\hat d_{\log}$ can in fact be written as the geometric mean:
(42) $\hat d_{\log} = \exp\left(\frac{1}{k}\sum_{j=1}^{k}\log\left|x_j\right|\right) = \prod_{j=1}^{k}\left|x_j\right|^{1/k}.$
$\hat d_{\lambda}$ is a nonconvex norm ($0 < \lambda < 1$) because $\lambda < 1$; $\hat d_{\log}$ is also a nonconvex norm (the $\ell_{\lambda}$ norm as $\lambda \to 0$). Neither $\hat d_{\lambda}$ nor $\hat d_{\log}$ satisfies the triangle inequality.
We propose $\hat d_{gm}$, the bias-corrected geometric mean estimator:
(43) $\hat d_{gm} = \cos^{k}\!\left(\frac{\pi}{2k}\right)\prod_{j=1}^{k}\left|x_j\right|^{1/k}.$
The following lemma derives the moments of $\hat d_{gm}$, proved in Appendix C.
$\hat d_{gm}$ is unbiased, with the variance (valid when $k > 2$)
(44) $\mathrm{Var}\left(\hat d_{gm}\right) = d^2\left(\frac{\cos^{2k}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{\pi}{k}\right)} - 1\right).$
The third and fourth central moments are (for $k > 3$ and $k > 4$, respectively)
(45) $\mathrm{E}\left(\hat d_{gm} - d\right)^3 = d^3\left(\frac{\cos^{3k}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{3\pi}{2k}\right)} - 3\,\frac{\cos^{2k}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{\pi}{k}\right)} + 2\right),$
(46) $\mathrm{E}\left(\hat d_{gm} - d\right)^4 = d^4\left(\frac{\cos^{4k}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{2\pi}{k}\right)} - 4\,\frac{\cos^{3k}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{3\pi}{2k}\right)} + 6\,\frac{\cos^{2k}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{\pi}{k}\right)} - 3\right).$
The higher (third or fourth) moments may be useful for approximating the distribution of $\hat d_{gm}$. In Section 6, we will show how to approximate the distribution of the maximum likelihood estimator by matching the first four moments (in the leading terms). We could apply a similar technique to approximate $\hat d_{gm}$. Fortunately, we do not have to do so, because we are able to derive the exact tail bounds of $\hat d_{gm}$ in the following lemma, which is proved in Appendix D.
(47) $\Pr\left(\hat d_{gm} \ge (1+\epsilon) d\right) \le \frac{\cos^{k t_1^*}\left(\frac{\pi}{2k}\right)}{\cos^{k}\left(\frac{t_1^*\pi}{2k}\right)\left(1+\epsilon\right)^{t_1^*}}, \qquad \epsilon > 0,$
where
(48) $t_1^* = \frac{2k}{\pi}\tan^{-1}\left(\frac{2}{\pi}\left(\log\left(1+\epsilon\right) - k\log\cos\left(\frac{\pi}{2k}\right)\right)\right).$
(49) $\Pr\left(\hat d_{gm} \le (1-\epsilon) d\right) \le \frac{\left(1-\epsilon\right)^{t_2^*}}{\cos^{k t_2^*}\left(\frac{\pi}{2k}\right)\cos^{k}\left(\frac{t_2^*\pi}{2k}\right)}, \qquad 0 < \epsilon < 1,$
where
(50) $t_2^* = \frac{2k}{\pi}\tan^{-1}\left(\frac{2}{\pi}\left(k\log\cos\left(\frac{\pi}{2k}\right) - \log\left(1-\epsilon\right)\right)\right).$
By restricting $0 < \epsilon < 1$, the tail bounds can be written in exponential forms:
(51) $\Pr\left(\hat d_{gm} \ge (1+\epsilon) d\right) \le \exp\left(-\frac{k\,\epsilon^2}{8(1+\epsilon)}\right),$
(52) $\Pr\left(\hat d_{gm} \le (1-\epsilon) d\right) \le \exp\left(-\frac{k\,\epsilon^2}{8}\right).$
An analog of the JL bound for $\ell_1$ follows from the exponential tail bounds (51) and (52): Using $\hat d_{gm}$ with $k \ge \frac{8(1+\epsilon)}{\epsilon^2}\left(2\log n - \log\delta\right)$, then with probability at least $1 - \delta$, the $\ell_1$ distance between any pair of data points among the $n$ data points can be recovered within a $1 \pm \epsilon$ fraction of the truth.
Remarks on the lemma: (1) We can replace the constant “8” in the lemma with better (i.e., smaller) constants for specific values of $\epsilon$. For example, for small enough $\epsilon$, we can replace “8” by “5”. See the proof of the lemma. (2) This lemma is weaker than the classical JL lemma for dimension reduction in $\ell_2$, as reviewed in Section 2.1. The classical JL lemma for $\ell_2$ ensures that the interpoint $\ell_2$ distances of the projected data points are close enough to the original distances, while our lemma merely says that the projected data points contain enough information to reconstruct the original $\ell_1$ distances. On the other hand, the geometric mean estimator is a nonconvex norm, and therefore it does contain some information about the geometry. We leave it for future work to explore the possibility of developing efficient algorithms using the geometric mean estimator.
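Before turning to the simulations, the unbiasedness of $\hat d_{gm}$ and its variance can be checked directly. In the sketch below (parameters ours), the uncorrected geometric mean overshoots $d$ by the deterministic factor $\cos^{-k}(\pi/2k)$, while the corrected estimator centers on $d$:

```python
import numpy as np

rng = np.random.default_rng(5)

d, k, trials = 2.0, 50, 20_000
x = d * rng.standard_cauchy((trials, k))            # rows of k i.i.d. C(0, d)

raw = np.exp(np.mean(np.log(np.abs(x)), axis=1))    # plain geometric mean
gm = np.cos(np.pi / (2 * k)) ** k * raw             # bias-corrected estimator

print(raw.mean() / d)                               # overshoots 1
print(gm.mean() / d)                                # close to 1
print(gm.var() / (np.pi**2 * d**2 / (4 * k)))       # near 1 for moderate k
```

The last ratio compares the empirical variance with the leading term $\frac{\pi^2}{4}\frac{d^2}{k}$; for moderate $k$ it sits slightly above 1, consistent with the $O(1/k^2)$ remainder.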
Figure 2 presents the simulated histograms of $\hat d_{gm}$ for several values of $k$. The histograms reveal some characteristics shared by the maximum likelihood estimator we will discuss in the next section:

Supported on $(0, \infty)$, $\hat d_{gm}$ is positively skewed.

The distribution of $\hat d_{gm}$ is still “heavy-tailed.” However, in the region not too far from the mean, the distribution of $\hat d_{gm}$ may be well captured by a gamma (or a generalized gamma) distribution. For large $k$, even a normal approximation may suffice.
Figure 3 compares $\hat d_{gm}$ with the sample median estimators $\hat d_{me}$ and $\hat d_{me,c}$, in terms of the mean square errors. $\hat d_{gm}$ is considerably more accurate than $\hat d_{me}$ at small $k$, and the bias correction significantly reduces the mean square errors of $\hat d_{me}$.
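The comparison in Figure 3 can be reproduced in a few lines; this sketch (sample sizes ours) estimates the mean square errors of the plain sample median and the bias-corrected geometric mean:

```python
import numpy as np

rng = np.random.default_rng(9)

d, trials = 1.0, 100_000
mse = {}
for k in (5, 11, 21):
    x = d * rng.standard_cauchy((trials, k))
    d_me = np.median(np.abs(x), axis=1)
    d_gm = np.cos(np.pi / (2 * k)) ** k * np.exp(np.mean(np.log(np.abs(x)), axis=1))
    mse[k] = (np.mean((d_me - d) ** 2), np.mean((d_gm - d) ** 2))
    print(k, mse[k])          # the geometric mean is more accurate at small k
```

At very small $k$ the gap is large, mostly because of the median's bias; as $k$ grows the two estimators become asymptotically equivalent.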
6 The Maximum Likelihood Estimators
This section is devoted to analyzing the maximum likelihood estimators (MLE), which are “asymptotically optimum.” In comparison, the sample median estimators and geometric mean estimators are not optimum. Our contribution in this section includes the higher-order analysis for the bias and moments, and accurate closed-form approximations to the distribution of the MLE.
The method of maximum likelihood is widely used. For example, Li et al. (2006b) applied the maximum likelihood method to normal random projections and provided an improved estimator of the $\ell_2$ distance by taking advantage of the marginal information.
The Cauchy distribution is often considered a “challenging” example because of the “multiple roots” problem when estimating the location parameter (Barnett, 1966; Haas et al., 1970). In our case, since the location parameter is always zero, much of the difficulty is avoided.
Recall our goal is to estimate $d$ from the i.i.d. samples $x_j \sim C(0, d)$, $j = 1, 2, \ldots, k$. The joint log-likelihood of $\{x_j\}_{j=1}^{k}$ is
(53) $L(d) = k\log d - \sum_{j=1}^{k}\log\left(x_j^2 + d^2\right) - k\log\pi,$
whose first and second derivatives (w.r.t. $d$) are
(54) $L'(d) = \frac{k}{d} - \sum_{j=1}^{k}\frac{2d}{x_j^2 + d^2},$
(55) $L''(d) = -\frac{k}{d^2} - \sum_{j=1}^{k}\frac{2\left(x_j^2 - d^2\right)}{\left(x_j^2 + d^2\right)^2}.$
The maximum likelihood estimator of $d$, denoted by $\hat d_{MLE}$, is the solution to $L'(d) = 0$, i.e.,
(56) $\sum_{j=1}^{k}\frac{1}{x_j^2 + \hat d_{MLE}^2} - \frac{k}{2\,\hat d_{MLE}^2} = 0.$
Because $L''\left(\hat d_{MLE}\right) < 0$, $\hat d_{MLE}$ indeed maximizes the joint likelihood and is the only solution to the MLE equation (56). Solving (56) numerically is not difficult (e.g., a few iterations using Newton’s method). For better accuracy, we recommend the following bias-corrected estimator:
(57) $\hat d_{MLE,c} = \hat d_{MLE}\left(1 - \frac{1}{k}\right).$
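A minimal numerical sketch (ours) of solving (56): the equation can be rewritten as $\sum_j \left(x_j^2 - d^2\right)/\left(x_j^2 + d^2\right) = 0$, whose left side is strictly decreasing in $d$ and changes sign on $\left[\min_j |x_j|,\ \max_j |x_j|\right]$, so simple bisection suffices; the multiplicative $O(1/k)$ bias correction is applied at the end:

```python
import numpy as np

rng = np.random.default_rng(13)

def cauchy_scale_mle(x, iters=200):
    """Solve sum_j (x_j^2 - d^2) / (x_j^2 + d^2) = 0 for d by bisection.

    The left side is strictly decreasing in d, nonnegative at d = min|x_j|
    and nonpositive at d = max|x_j|, so the root is unique and bracketed.
    """
    a2 = x ** 2
    lo, hi = np.sqrt(a2.min()), np.sqrt(a2.max())
    for _ in range(iters):
        mid = (lo + hi) / 2
        if np.sum((a2 - mid**2) / (a2 + mid**2)) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

d, k = 5.0, 500
x = d * rng.standard_cauchy(k)
d_mle = cauchy_scale_mle(x)
d_mle_c = d_mle * (1 - 1 / k)     # multiplicative O(1/k) bias correction
print(d_mle, d_mle_c)             # both close to d = 5
```

Bisection is used here purely for robustness of the sketch; in practice a few Newton iterations started from, say, the sample median estimate converge much faster.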
The following lemma concerns the asymptotic moments of $\hat d_{MLE}$ and $\hat d_{MLE,c}$, proved in Appendix E. Both $\hat d_{MLE}$ and $\hat d_{MLE,c}$ are asymptotically unbiased and normal. The first four moments of $\hat d_{MLE}$ are
(58) $\mathrm{E}\left(\hat d_{MLE}\right) = d + \frac{d}{k} + O\left(\frac{1}{k^2}\right),$
(59) $\mathrm{Var}\left(\hat d_{MLE}\right) = \frac{2d^2}{k} + O\left(\frac{1}{k^2}\right),$
(60) $\mathrm{E}\left(\hat d_{MLE} - \mathrm{E}\left(\hat d_{MLE}\right)\right)^3 = O\left(\frac{1}{k^2}\right),$
(61) $\mathrm{E}\left(\hat d_{MLE} - \mathrm{E}\left(\hat d_{MLE}\right)\right)^4 = \frac{12 d^4}{k^2} + O\left(\frac{1}{k^3}\right).$
The first four moments of $\hat d_{MLE,c}$ are
(62) $\mathrm{E}\left(\hat d_{MLE,c}\right) = d + O\left(\frac{1}{k^2}\right),$
(63) $\mathrm{Var}\left(\hat d_{MLE,c}\right) = \frac{2d^2}{k} + O\left(\frac{1}{k^2}\right),$
(64) $\mathrm{E}\left(\hat d_{MLE,c} - \mathrm{E}\left(\hat d_{MLE,c}\right)\right)^3 = O\left(\frac{1}{k^2}\right),$
(65) $\mathrm{E}\left(\hat d_{MLE,c} - \mathrm{E}\left(\hat d_{MLE,c}\right)\right)^4 = \frac{12 d^4}{k^2} + O\left(\frac{1}{k^3}\right).$
The order $\frac{1}{k}$ term of the variance, i.e., $\frac{2d^2}{k}$, is well known, e.g., (Haas et al., 1970). We derive the bias-corrected estimator $\hat d_{MLE,c}$ and the higher-order moments using stochastic Taylor expansions (Bartlett, 1953; Shenton and Bowman, 1963; Ferrari et al., 1996; Cysneiros et al., 2001).
We will propose an inverse Gaussian distribution to approximate the distribution of $\hat d_{MLE,c}$, by matching the first four moments (at least in the leading terms).
6.1 A Numerical Example
The maximum likelihood estimators are tested on MSN Web crawl data, a term-by-document matrix of Web pages. We conduct Cauchy random projections and estimate the $\ell_1$ distances between words. In this experiment, we compare the empirical and (asymptotic) theoretical moments, using one pair of words. Figure 4 illustrates that the bias correction is effective and that the (asymptotic) formulas for the first four moments of $\hat d_{MLE,c}$ in the lemma above are accurate, especially when $k$ is not too small.