An Optimal Lower Bound on the Communication Complexity of Gap-Hamming-Distance
Abstract
We prove an optimal lower bound on the randomized communication complexity of the much-studied Gap-Hamming-Distance problem. As a consequence, we obtain essentially optimal multi-pass space lower bounds in the data stream model for a number of fundamental problems, including the estimation of frequency moments.
The Gap-Hamming-Distance problem is a communication problem, wherein Alice and Bob receive n-bit strings x and y, respectively. They are promised that the Hamming distance between x and y is either at least n/2 + √n or at most n/2 − √n, and their goal is to decide which of these is the case. Since the formal presentation of the problem by Indyk and Woodruff (FOCS, 2003), it had been conjectured that the naïve protocol, which uses n bits of communication, is asymptotically optimal. The conjecture was shown to be true in several special cases, e.g., when the communication is deterministic, or when the number of rounds of communication is limited.
The proof of our aforementioned result, which settles this conjecture fully, is based on a new geometric statement regarding correlations in Gaussian space, related to a result of C. Borell (1985). To prove this geometric statement, we show that random projections of not-too-small sets in Gaussian space are close to a mixture of translated normal variables.
1 Introduction
Communication complexity is a much-studied topic in computational complexity, deriving its importance both from the basic nature of the questions it asks and the wide range of applications of its results, covering, for instance, lower bounds on circuit depth (see, e.g., [KW88]) and on query times for static data structures (see, e.g., [MNSW95, Pǎt08]). In the basic setup, which is all that concerns us here, each of two players, Alice and Bob, receives a binary string as input. Their goal is to compute some function of the two strings, using a protocol that involves exchanging a small number of bits. Since communication complexity is often applied as a lower bound technique, much of the work in the area attempts to rule out the existence of a nontrivial protocol. For many functions, this amounts to proving an Ω(n) lower bound on the number of bits any successful protocol must exchange, n being the common length of Alice's and Bob's input strings. Proofs tend to be considerably more challenging, and more broadly applicable, when the protocol is allowed to be randomized and err with some small constant probability (such as 1/3) on each input.
For a detailed coverage of the basics of the field, as well as a number of applications, we refer the reader to the textbook of Kushilevitz and Nisan [KN97]. For the reader’s convenience, we review the most basic notions in Section 2.
In this paper, we focus specifically on the GapHammingDistance problem (henceforth abbreviated as ghd), which was first formally studied by Indyk and Woodruff [IW03] in the context of proving space lower bounds for the Distinct Elements problem in the data stream model. We also consider some closely related variants of ghd.
The Problem and the Main Result.
In the Gap-Hamming-Distance problem GHD_{n,t,g}, Alice and Bob receive binary strings x, y ∈ {0,1}^n, respectively. They wish to decide whether x and y are “close” or “far” in the Hamming sense, with a certain gap separating the definitions of “close” and “far.” Specifically, the players must output 1 if Δ(x, y) ≥ t + g and 0 if Δ(x, y) ≤ t − g, where Δ denotes Hamming distance; if neither of these holds, they may output either 0 or 1. Clearly, this problem becomes easier as the gap g increases. Of special interest is the case when t = n/2 and g = √n; these parameters are natural, and as we shall show later using elementary reductions, understanding the complexity of the problem with these parameters leads to a complete understanding of the problem for essentially all other gap sizes and threshold locations. Furthermore, applications of ghd, such as the ones considered by Indyk and Woodruff [IW03], need precisely this natural setting of parameters. Henceforth, we shall simply write “ghd” to denote GHD_{n, n/2, √n}.
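To make the promise concrete, here is a minimal Python sketch of the problem with the natural parameters t = n/2 and g = √n (the function and variable names are ours, for illustration only):

```python
import math

def hamming(x, y):
    """Hamming distance between two equal-length bit vectors."""
    return sum(a != b for a, b in zip(x, y))

def ghd(x, y, t, g):
    """Gap-Hamming-Distance: 1 if the strings are far, 0 if close,
    and None on inputs that violate the promise (any answer is legal)."""
    d = hamming(x, y)
    if d >= t + g:
        return 1
    if d <= t - g:
        return 0
    return None

n = 16
t, g = n // 2, int(math.sqrt(n))   # the natural setting: t = n/2, g = sqrt(n)
x = [0] * n
far = [1] * n                      # distance n >= t + g, so output 1
close = [0] * n                    # distance 0 <= t - g, so output 0
answers = (ghd(x, far, t, g), ghd(x, close, t, g))
```

The trivial protocol, in which Alice sends x in full, clearly uses n bits; the results below show that, up to constant factors, randomized protocols can do no better.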
Our main result states, simply, that this problem does not have a nontrivial protocol. Here is a somewhat informal statement; a fully formal version appears as Theorem 2.6.
Theorem 1.1 (Main Theorem, Informal).
If a randomized protocol solves ghd, then it must communicate a total of Ω(n) bits.
In fact, the technique we use to prove this theorem yields the stronger result that the same hardness holds even if Alice and Bob are given uniformly random and independent inputs in {0,1}^n. The cleanness of this “hard distribution” is potentially important in applications. We state this result formally in Theorem 2.7.
Relation to Prior Work.
Theorem 1.1 is the logical conclusion of a moderately long line of research. This was begun in the aforementioned work of Indyk and Woodruff [IW03], who showed a linear lower bound on the communication complexity of a somewhat artificial variant of ghd in the one-way model, i.e., in the model where the communication is required to consist of just one message from Alice to Bob. Woodruff [Woo04] soon followed up with an Ω(n) bound for ghd itself, still in the one-way model; the proof used rather intricate combinatorial constructions and computations. Jayram et al. [JKS08] later provided a rather different and much simpler proof, by a reduction from the index problem. Their reduction was geometric, in the sense that they exploited a natural correspondence between Hamming space and Euclidean space; this correspondence has proved fruitful in further work on the problem, including this work. Recently, Woodruff [Woo09] and Brody and Chakrabarti [BC09] gave direct combinatorial proofs of the one-way Ω(n) bound.
All of this work left open an important question: what can be said about the complexity of ghd when two-way communication is allowed? It has been conjectured, at least since the formalization of the problem in 2003, that Ω(n) is still the right answer, i.e., that ghd has no nontrivial protocol, irrespective of the communication pattern.
Until 2009, our understanding of this matter was limited to two “folklore” results. Firstly, the deterministic communication complexity of GHD_{n, n/2, g} can be shown to be Ω(n), even allowing two-way communication and a gap as large as g = εn, for a small enough constant ε > 0. This follows by directly demonstrating that its communication matrix contains no large monochromatic rectangles (see, e.g., [Woo07]). Secondly, a simple reduction from disjointness to GHD_{n,t,g} shows that its randomized (two-way) communication complexity is Ω(n/g); notice that the corresponding bound for ghd (where g = √n) is Ω(√n). Meanwhile, we have an upper bound of O(min{n, n²/g²}), via the simple (and one-way) protocol that samples sufficiently many coordinates of x and y to give the right answer with high probability. It remained a significant challenge to improve upon either tradeoff, even for just two rounds of communication.
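The sampling upper bound is easy to simulate; in the sketch below (constants and names are ours) the players use shared randomness to compare k = Θ(n²/g²) sampled coordinates, which for g = √n is Θ(n) and hence no better than the naïve protocol:

```python
import random

def sampling_protocol(x, y, t, k, rng):
    """Estimate the Hamming distance from k sampled coordinates
    (chosen with shared randomness) and compare against the threshold t."""
    n = len(x)
    idx = [rng.randrange(n) for _ in range(k)]   # shared random coordinates
    mismatches = sum(x[i] != y[i] for i in idx)
    est = mismatches * n / k                     # estimated Hamming distance
    return 1 if est >= t else 0                  # 1 = "far", 0 = "close"

rng = random.Random(0)
n, g = 256, 16                 # gap g = sqrt(n)
t = n // 2
k = 4 * n * n // (g * g)       # Theta(n^2 / g^2) samples; here k = 4n
correct = 0
for _ in range(50):
    x = [rng.randrange(2) for _ in range(n)]
    # flip exactly t+g (resp. t-g) coordinates to land on the promise boundary
    far = [b if i >= t + g else 1 - b for i, b in enumerate(x)]
    close = [b if i >= t - g else 1 - b for i, b in enumerate(x)]
    correct += (sampling_protocol(x, far, t, k, rng) == 1)
    correct += (sampling_protocol(x, close, t, k, rng) == 0)
success_rate = correct / 100
```

With these parameters the empirical mismatch rate concentrates well within the g/n separation, so the protocol is correct with high probability; for larger gaps the sample count, and hence the communication, drops as n²/g².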
Recently, Brody and Chakrabarti [BC09] made progress on the conjecture, proving it for randomized protocols with two-way communication, but only a constant number of rounds of communication. In fact, they showed that in a k-round protocol, at least one message must have length n/2^{O(k²)}. They achieved this via a round elimination argument. At a high level, they showed that if the first message in a ghd protocol is too short, the work done by the rest of the messages can be used to solve a “smaller” instance of ghd, by exploiting some combinatorial properties of Hamming space. More recently, Brody et al. [BCR10] improved the bound to Ω(n/(k² log k)), still using a round elimination argument, but exploiting geometric properties of Hamming and Euclidean space instead. We refer the reader to the discussion in [BCR10] for details, including a comparison of the two arguments.
Our main theorem completes this picture, confirming the main outstanding conjecture about ghd. Moreover, a straightforward reduction (Prop. 4.4) yields the more general result that the randomized complexity of GHD_{n,t,g} is Ω(min{n, n²/g²}). Our lower bound proof is significantly different in approach from all of the aforementioned ones. We now give a high-level overview.
The Technique.
Part of the difficulty in establishing our result is that many of the known techniques for proving communication complexity lower bounds seem unable to prove bounds better than Ω(√n). These include the classic rectangle-based methods of discrepancy and corruption. (We assume that the reader has some familiarity with these basic techniques in communication complexity, which are discussed in detail in the textbook of Kushilevitz and Nisan [KN97]. Some authors use terms like “one-sided discrepancy” and “rectangle bound” when describing the technique that we, following Beame et al. [BPSW06], have termed “corruption.”) These methods fail for reasons described below. They also include certain linear algebraic approaches, such as the factorization norms method of Linial and Shraibman [LS07] and the pattern matrix method of Sherstov [She08], because these methods lower bound quantum communication complexity. The trouble is that ghd does have an efficient constant-error quantum communication protocol, with Õ(√n) communication, as can be seen by combining a query complexity upper bound due to Nayak and Wu [NW99] with a communication-to-query reduction, as in Buhrman et al. [BCW98] or Razborov [Raz02].
Instead, what does work is a suitable generalization of the corruption method. Recall that the standard corruption method proceeds as follows. First, one observes that every protocol that communicates c bits induces a partition of the communication matrix into at most 2^c disjoint near-monochromatic rectangles. In order to show a lower bound of Ω(n), one then needs to prove that any rectangle containing at least a 2^{−Θ(n)} fraction of the 0-inputs must also contain (or be “corrupted” by) a not-much-smaller fraction of the 1-inputs (or vice versa). In other words, one shows that large near-monochromatic rectangles do not exist, from which the desired lower bound follows. It should be noted that proving such a property could be a challenging task. Indeed, this is the main technical contribution of Razborov's proof of the Ω(n) lower bound on the randomized communication complexity of the disjointness problem [Raz90].
This idea appears not to give a lower bound better than Ω(√n) on the randomized communication complexity of ghd because its communication matrix does contain “annoying” rectangles that are both large and near-monochromatic. This can be seen, e.g., by considering all inputs (x, y) with x_i = y_i = 0 for i ≤ c√n, where c is a large constant: the resulting rectangle contains a 2^{−Θ(√n)} fraction of all inputs (it is large), but a much smaller fraction of 1-inputs (it is nearly monochromatic).
Our generalization considers not just 0-inputs and 1-inputs, but also a carefully selected set of “joker” inputs, whose corresponding outputs are immaterial. Loosely speaking, we show that if a large rectangle contains many more 0-inputs than 1-inputs, then the fraction of joker inputs it contains must be even larger than the fraction of 0-inputs it contains (by some constant factor). This property (call it the “joker property”) implies that even though annoying rectangles exist, their union cannot contain more than a constant fraction of the 0-inputs. In particular, there is no way to partition the 0-inputs into near-monochromatic rectangles, and a lower bound of Ω(n) follows.
This simplesounding idea seems to have considerable power. Indeed, the method we have presented above can be seen as a special case of the ideas behind the “smooth rectangle bound” recently introduced by Klauck [Kla10] and systematized by Jain and Klauck [JK10]. Formally, when we prove a communication lower bound using corruptionwithjokers as above, we are essentially lower bounding the smooth rectangle bound of the underlying function. For a careful understanding of this matter, based on linear programming duality, we refer the reader to Jain and Klauck [JK10].
Of course, there remains the task of proving the joker property referred to above. It turns out that the statement we need boils down to roughly the following: for arbitrary sets A, B ⊆ {0,1}^n that are not too small (say, of size at least 2^{(1−ε)n} for a small constant ε > 0), if x is chosen uniformly from A and y is chosen uniformly from B, then the Hamming distance Δ(x, y) is not too concentrated around n/2; a precise statement appears as Corollary 3.8. The proof uses a Gaussian noise correlation inequality (Theorem 3.5, proved using analytic methods); this inequality and its proof are the main technical contributions of the paper and should be of independent interest.
Data Stream and Other Consequences.
The original motivation for studying ghd was a specific application to the Distinct Elements problem on data streams. Specifically, given a stream (sequence) of m elements, each from [n] = {1, …, n}, we wish to estimate, to within a 1 ± ε factor, the number of distinct elements in it, while using space sublinear in m and n. A long line of research has culminated in a randomized algorithm [KNW10] that computes such an estimate (failing with probability at most 1/3, say) in one pass over the stream, using O(ε^{−2} + log n) bits of space. A space lower bound of Ω(log n) has been known for a while [AMS99] and is easily seen to apply to multi-pass algorithms. But the dependence of the lower bound on ε is a longer story.
An easy reduction (implicit in Indyk and Woodruff [IW03]) shows that a lower bound of s on the maximum message length of a bounded-round protocol for ghd (on Θ(ε^{−2})-bit inputs) would imply an Ω(s) space lower bound on multi-pass algorithms for the Distinct Elements problem. Thus, the one-way Ω(n) lower bound for ghd implied a tight Ω(ε^{−2}) lower bound for one-pass streaming algorithms. The results of Brody and Chakrabarti [BC09] and Brody et al. [BCR10] extended this to p-pass algorithms, giving lower bounds of ε^{−2}/2^{O(p²)} and Ω(ε^{−2}/(p² log p)), respectively.
Our main result improves this pass/space tradeoff, giving a space lower bound of Ω(ε^{−2}/p) for p-pass algorithms. As is easy to see, this is tight up to factors logarithmic in n and 1/ε. Further, since the communication lower bound for ghd can be shown to hold under a uniform input distribution, this space lower bound can be shown to hold even for rather benign models of random uncorrelated data [Woo09].
Suitable reductions from ghd imply similar space lower bounds for several other data stream problems, such as estimating frequency moments [Woo04] and empirical entropy [CCM10]. One can also derive appropriate lower bounds for a certain class of distributed computing problems known as functional monitoring [ABC09]. We note that the second frequency moment (equivalently, the Euclidean norm) can be interpreted as the self-join size of a table in a database, and is an especially important primitive needed in many numerical streaming tasks such as regression and low-rank approximation.
Subsequent Developments.
Since the preliminary announcement of our results [CR11], there has been much additional research related to ghd. One line of research has provided alternative proofs of our main result. Vidick [Vid11] gave a proof that followed the same overall outline as ours, but had an alternative proof of the joker property, based on matrix-analytic and second-moment methods. More recently, Sherstov [She12] gave a proof that changed the outline itself, working with a closely related problem called gap-orthogonality that has the advantage of being amenable to the basic corruption method. Further, by using an inequality due to Talagrand, Sherstov was able to work with the discrete problem directly rather than passing to Gaussian space.
Other lines of research have applied the optimal bound on the communication complexity of ghd to obtain results on a diverse array of topics, including differential privacy [MMP10], distributed functional monitoring [WZ12], property testing [BBM11], and data aggregation in networks [KO11]. Furthermore, Woodruff and Zhang [WZ12] have given a new proof of optimal multipass space lower bounds for Distinct Elements without appealing to our lower bound for ghd.
2 Corruption, a Generalization, and the Main Theorem
2.1 Preliminaries
Consider a communication problem given by a (possibly partial) function f : X × Y → {0, 1, ∗}; we let f take the value “∗” at inputs for which we do not care about the output given. For a communication protocol, P, involving two players, Alice and Bob, we write P(x, y) to denote the output of P when Alice receives x ∈ X and Bob receives y ∈ Y. If P is randomized, this is a random variable. We say that P computes f with error at most ε if

Pr[P(x, y) ≠ f(x, y)] ≤ ε, for all (x, y) ∈ f^{−1}(0) ∪ f^{−1}(1).
When the function f is understood from the context, we use err(P) ≤ ε to denote “P computes f with error at most ε.” For a deterministic protocol P and a distribution μ on X × Y, we define

err_μ(P) = Pr_{(x,y)∼μ}[(x, y) ∈ f^{−1}(0) ∪ f^{−1}(1) and P(x, y) ≠ f(x, y)].
For a protocol P, let cost(P) denote the worst-case number of bits communicated by P. We let R_ε(f) and D_{μ,ε}(f) denote the ε-error randomized and ε-error distributional communication complexities of f, respectively; i.e.,

R_ε(f) = min{cost(P) : P is a randomized protocol with err(P) ≤ ε},
D_{μ,ε}(f) = min{cost(P) : P is a deterministic protocol with err_μ(P) ≤ ε}.
We also put R(f) = R_{1/3}(f) and D_μ(f) = D_{μ,1/3}(f).
2.2 Rectangles and Corruption
Consider a two-player communication problem given by a function f : X × Y → {0, 1, ∗}. A set R ⊆ X × Y is said to be a rectangle if R = A × B for some A ⊆ X and B ⊆ Y. A fundamental property of communication protocols is the following.
Fact 2.1 (Rectangle property; see, e.g., [KN97]).
Let P be a deterministic communication protocol that takes inputs in X × Y, produces an output in {0, 1}, and communicates c bits. Then, for each z ∈ {0, 1}, there exist pairwise disjoint rectangles R₁^z, …, R_{m_z}^z, with m_z ≤ 2^c, such that

{(x, y) ∈ X × Y : P(x, y) = z} = R₁^z ∪ ⋯ ∪ R_{m_z}^z.
The rectangles R₁^z, …, R_{m_z}^z are called the z-rectangles of P.
Let us focus on problems with Boolean output, i.e., with f mapping into {0, 1}. The discrepancy method for proving lower bounds on D_{μ,ε}(f) consists of choosing a suitable distribution μ on X × Y and showing that for every rectangle R, the quantity |μ(R ∩ f^{−1}(1)) − μ(R ∩ f^{−1}(0))| is “exponentially” small. For several functions, this method is unable to prove a strong enough lower bound; the canonical example is disj. A generalization that handles disj, and several other functions, is the corruption method [Raz90, Kla03, BPSW06], which consists of showing, instead, that for every “large” rectangle R, we have λ₁(R) ≥ α·λ₀(R), for a constant α > 0, where λ_b is a probability distribution on the b-inputs f^{−1}(b), for b ∈ {0, 1}. Intuitively, we are arguing that any large rectangle that contains many 0s must be corrupted by the presence of many 1s. The largeness of R is often enforced indirectly by writing the inequality in the following manner, where typically 1/δ grows exponentially with n:
λ₁(R) ≥ α·λ₀(R) − δ.   (1)
An inequality of this form allows us to conclude an Ω(log(1/δ)) lower bound on D_{λ,ε}(f) for a suitable distribution λ and sufficiently small error ε. (Rather than present a full proof, we note that this follows as a special case of Theorem 2.2, below.) By the easy direction of Yao's lemma, this implies R_ε(f) = Ω(log(1/δ)).
2.3 Corruption With Jokers, and the Smooth Rectangle Bound
We now introduce a suitable generalization of the corruption method, which, as we shall soon see, implies that D_{λ,ε}(ghd) = Ω(n), for suitable λ and ε. The corresponding technical challenge is met using a new Gaussian noise correlation inequality that we prove in Section 3. Our generalization can be captured within the very recent smooth rectangle bound framework [Kla10, JK10]. However, we believe that there is merit in singling out the method we use, because it appears wieldier than the smooth rectangle bound, which is more technically involved.
The key idea is that, in addition to the distributions λ₀ and λ₁ on the 0-inputs and 1-inputs to f, we consider an auxiliary distribution on “joker” inputs. Strictly speaking, we just have a “joker distribution” λ_J (in the sequel, when we apply the technique to ghd, the distributions λ₀, λ₁, and λ_J will be sharply concentrated on pairwise disjoint sets of inputs, which we can think of as the interesting 0-inputs, the interesting 1-inputs, and the joker inputs, respectively), and it does not matter how λ_J relates to λ₀ and λ₁, but it is crucial that the inequality below gives a negative weight to λ_J, and is therefore a weakening of (1).
λ₁(R) ≥ α·λ₀(R) − β·λ_J(R) − δ.   (2)
We shall in fact allow a little flexibility in our choice of λ₀ and λ₁ by requiring only that these be supported “mostly” on 0-inputs and 1-inputs. Also, we shall extend our theory to partial functions, since ghd is one. The next theorem captures our lower bound technique.
Theorem 2.2.
For all α, β, ε₀ ≥ 0 such that α(1 − ε₀) > β + ε₀, there exist ε > 0 and c₀ > 0 such that the following holds. Let f : X × Y → {0, 1, ∗} be a partial function, where X and Y are finite sets. Suppose that there exist distributions λ₀, λ₁, λ_J on X × Y, and a real number δ > 0, such that

(1) for b ∈ {0, 1}, λ_b is mostly supported on the b-inputs, i.e., λ_b(f^{−1}(b)) ≥ 1 − ε₀, and

(2) inequality (2) holds for all rectangles R ⊆ X × Y.
Then, for the distribution λ = ½(λ₀ + λ₁), we have D_{λ,ε}(f) ≥ log₂(c₀/δ). In particular, we have R_ε(f) ≥ log₂(c₀/δ).
Proof.
Consider a deterministic protocol P that computes f with some error ε (to be fixed later) under λ, and uses c bits of communication. Let R₁, …, R_m be the disjoint 0-rectangles of P, as given by Fact 2.1; note that m ≤ 2^c. Let S = R₁ ∪ ⋯ ∪ R_m and T = (X × Y) ∖ S. Notice that S is exactly the set of inputs on which P outputs 0. Thus, for b ∈ {0, 1}, we have
λ₀(S) ≥ 1 − 2ε − ε₀  and  λ₁(S) ≤ 2ε + ε₀,   (3)
where the last step uses Condition (1).
Instantiating inequality (2) with each R_i and summing the resulting inequalities, we get
λ₁(S) ≥ α·λ₀(S) − β·λ_J(S) − m·δ.   (4)
Noting that m ≤ 2^c and applying (3) to the λ₀ and λ₁ terms in (4), we obtain

2ε + ε₀ ≥ α(1 − 2ε − ε₀) − β·λ_J(S) − 2^c·δ.
Further, noting that λ_J(S) ≤ 1, and rearranging terms, we obtain

2ε + ε₀ ≥ α(1 − 2ε − ε₀) − β − 2^c·δ.

Rearranging further, we get

2^c·δ ≥ α(1 − 2ε − ε₀) − β − 2ε − ε₀.
By virtue of the hypothesis on α, β, and ε₀, we may choose ε small enough to make the right-hand side of the above inequality positive, and equal to c₀ > 0, say. Doing so gives us c ≥ log₂(c₀/δ), as desired.
Notice that the “hard distribution” λ = ½(λ₀ + λ₁) is explicitly specified, once the distributions involved in Condition (2) are made explicit. ∎
We could, alternately, have proved Theorem 2.2 by demonstrating that the given conditions imply that the smooth rectangle bound of is . We have chosen to give the above proof instead, because it is more elementary, avoiding the technical details of the latter bound, and because it was discovered independently by the first named author.
2.4 Application to GHD: the Main Theorem
The Gap-Hamming-Distance problem is formalized as the computation of the partial function GHD_{n,t,g} : {0,1}^n × {0,1}^n → {0, 1, ∗} defined as follows:

GHD_{n,t,g}(x, y) = 1 if Δ(x, y) ≥ t + g;  0 if Δ(x, y) ≤ t − g;  ∗ otherwise.
It will be useful to have some flexibility in the choice of the location of the threshold, t, and the size of the gap, g. It is not hard to see that all settings with t = n/2 ± O(√n) and g = Θ(√n) lead to “equally hard” problems, asymptotically; we prove this formally in Lemma 4.2.
Rather than working with ghd directly, it proves convenient to consider the partial function f = GHD_{n,t,g} with a slightly off-center threshold t = n/2 − Θ(√n) and a gap g = Θ(√n), both governed by a large enough constant c to be determined later. We shall now come up with distributions and constants that satisfy the conditions of Theorem 2.2: Condition (1) turns out to be easy to verify, and verifying Condition (2), as mentioned above, is a significant technical challenge that we deal with in Section 3.
Definition 2.3.
For ρ ∈ [0, 1], let ν_ρ denote the distribution of (x, y) ∈ {0,1}^n × {0,1}^n defined by the following randomized procedure: pick x ∈ {0,1}^n uniformly at random, and then pick y by independently flipping each bit of x with probability (1 − ρ)/2. Notice that ν₀ is the uniform distribution on {0,1}^n × {0,1}^n.
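A short Python sketch of this sampling procedure (assuming the parameterization in which each bit is flipped with probability (1 − ρ)/2, so that ρ = 0 yields the uniform distribution; names are ours):

```python
import random

def sample_nu(n, rho, rng):
    """Pick x uniformly in {0,1}^n, then obtain y by flipping each bit
    of x independently with probability (1 - rho) / 2."""
    x = [rng.randrange(2) for _ in range(n)]
    p_flip = (1 - rho) / 2
    y = [b ^ (1 if rng.random() < p_flip else 0) for b in x]
    return x, y

rng = random.Random(0)
n, rho = 10000, 0.2
x, y = sample_nu(n, rho, rng)
dist = sum(a != b for a, b in zip(x, y))
# Delta(x, y) is Binomial(n, (1 - rho)/2): mean n(1 - rho)/2 = 4000 here,
# with fluctuations of order sqrt(n); rho = 0 recovers the uniform case.
```

Under this parameterization the Hamming distance concentrates around n(1 − ρ)/2, so taking ρ = Θ(1/√n) shifts the typical distance by Θ(√n) below n/2, i.e., by roughly the gap size.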
We shall need the following two lemmas. The first of these follows easily from standard tail estimates for the binomial distribution, or even just the Chebyshev bound; we omit its proof. The second is formally proved at the end of Section 3.
Lemma 2.4.
For all c > 0 there exists an ε₀ < 1/2 such that, for large enough n, we have
λ₀(f^{−1}(0)) ≥ 1 − ε₀ and λ₁(f^{−1}(1)) ≥ 1 − ε₀, where λ₀ = ν_ρ for a suitable ρ = Θ(1/√n) and λ₁ = ν₀. ∎
Lemma 2.5.
For all c > 0 there exist constants α, β, ε′ > 0 such that, for large enough n, every rectangle R ⊆ {0,1}^n × {0,1}^n satisfies λ₁(R) ≥ α·λ₀(R) − β·λ_J(R) − 2^{−ε′n}, where λ_J = ν_{ρ′} for a suitable ρ′ = Θ(1/√n).
To derive the lower bound on ghd, we apply Theorem 2.2 to f with the distributions λ₀, λ₁, and λ_J of Lemmas 2.4 and 2.5, the constants α, β, and ε₀ provided by those lemmas, and δ = 2^{−ε′n}. Note that this choice of constants satisfies the hypothesis of Theorem 2.2. By Lemmas 2.4 and 2.5, we see that Conditions (1) and (2), respectively, of Theorem 2.2 are met; the inequality in Lemma 2.5 is easily seen to be the corresponding instantiation of (2).
Thus, applying Theorem 2.2, we conclude that there exist absolute constants ε > 0 and c₁ > 0 such that, for large enough n, we have R_ε(f) ≥ c₁·n. Combining this with Lemma 4.2 (proved in Section 4) to adjust for the slightly off-center threshold and the size of the gap, and applying standard error reduction techniques, we obtain the following asymptotically optimal lower bound for ghd.
Theorem 2.6 (Main Theorem).
R(ghd) = Ω(n). ∎
In applications of a communication lower bound, it is often helpful to have a good understanding of the “hard input distribution” that achieves the lower bound. One slightly unsatisfactory aspect of our proof above is that the hard distribution for ghd that it implies is not too clean. With a little additional work, however, we can show that the uniform input distribution is hard for ghd, once we require a small enough error bound. This is stated in the following theorem, whose proof appears in Section 4.
Theorem 2.7 (Hardness Under Uniform Distribution).
There exists an absolute constant ε > 0 for which D_{U,ε}(ghd) = Ω(n), where U is the uniform distribution on {0,1}^n × {0,1}^n.
3 An Inequality on Correlation under Gaussian Noise
We now turn to the proof of Lemma 2.5, for which we need some technical machinery that we now develop. We begin with some preliminaries.
Some Probability Distributions.
Let σ denote the uniform (Haar) distribution on S^{n−1}, the unit sphere in ℝⁿ. Let γ denote the standard Gaussian distribution on ℝ, with density function (2π)^{−1/2} e^{−x²/2}, and let γ_n denote the n-dimensional standard Gaussian distribution, with density (2π)^{−n/2} e^{−‖x‖²/2}. For a set A ⊆ ℝⁿ, when we write, e.g., γ_n(A), we tacitly assume that A is measurable. For a set A we denote by γ_n|_A the distribution γ_n conditioned on being in A. We say that a pair (X, Y) is a ρ-correlated Gaussian pair if its distribution is that obtained by choosing X from γ_n and then setting Y = ρX + √(1 − ρ²)·Z, where Z is an independent sample from γ_n. It is easy to verify that if (X, Y) is a ρ-correlated Gaussian pair, then so is (Y, X); in particular, Y is distributed as γ_n.
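Concretely, such a pair can be sampled as follows (a sketch assuming the standard parameterization Y = ρX + √(1 − ρ²)·Z; names are ours):

```python
import math, random

def correlated_gaussian_pair(n, rho, rng):
    """Sample (X, Y): X standard Gaussian in R^n, and Y = rho*X + sqrt(1-rho^2)*Z
    with Z an independent standard Gaussian. Then Y is also standard Gaussian,
    and E[X_i Y_i] = rho in every coordinate."""
    x = [rng.gauss(0, 1) for _ in range(n)]
    z = [rng.gauss(0, 1) for _ in range(n)]
    c = math.sqrt(1 - rho * rho)
    y = [rho * xi + c * zi for xi, zi in zip(x, z)]
    return x, y

rng = random.Random(1)
n, rho = 200000, 0.6
x, y = correlated_gaussian_pair(n, rho, rng)
emp_corr = sum(a * b for a, b in zip(x, y)) / n   # should be close to rho
emp_var_y = sum(b * b for b in y) / n             # should be close to 1
```

The symmetry claim in the text corresponds to the fact that (X, Y) and (Y, X) have the same (jointly Gaussian) covariance structure.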
Relative Entropy.
We recall some basic information theory for continuous probability distributions. For clarity, we eschew a fully rigorous treatment, which would introduce a considerable amount of extra complexity through its formalism, and instead refer the interested reader to the textbook of Gray [Gra90]. Given two probability distributions P and Q, we define the relative entropy of P with respect to Q as

D(P ‖ Q) = ∫ P(x) log(P(x)/Q(x)) dx.
It is well known (and not difficult to show) that the relative entropy is always nonnegative and is zero iff the two distributions are essentially equal. We will also need Pinsker's inequality, which says that the statistical distance between two distributions P and Q is at most √(D(P ‖ Q)/2) (see, e.g., [Gra90, Lemma 5.2.8]). Since we will only consider the relative entropy with respect to the Gaussian distribution, we introduce the notation

D(X) := D(P ‖ γ),
where X is a real-valued random variable with distribution P. We define D(X) for ℝⁿ-valued random variables similarly, using γ_n. These quantities can be thought of as measuring the “distance from Gaussianity.” They can be seen, in some precise sense, as additive inverses of entropy, and as such satisfy many of the familiar properties of entropy. For instance, it is easy to verify that for any sequence of random variables X₁, …, X_n we have the chain rule

D(X₁, …, X_n) = D(X₁) + D(X₂ | X₁) + ⋯ + D(X_n | X₁, …, X_{n−1}),
where, for random variables X and W, we use the notation D(X | W) to denote the expectation over W of the distance from Gaussianity of the distribution of X conditioned on W.
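As a one-dimensional sanity check of these definitions, the relative entropy of N(μ, 1) with respect to γ has the closed form μ²/2 (in nats), and Pinsker's inequality can be verified by direct numerical integration (the quadrature code is ours):

```python
import math

def phi(x, mu=0.0):
    """Density of N(mu, 1)."""
    return math.exp(-((x - mu) ** 2) / 2) / math.sqrt(2 * math.pi)

mu = 0.5
kl_closed = mu * mu / 2          # D(N(mu,1) || N(0,1)) = mu^2 / 2 in nats

# Riemann sums for the same relative entropy and for the statistical
# (total variation) distance between N(mu,1) and N(0,1).
dx = 0.001
xs = [-10 + i * dx for i in range(20001)]
kl_num = sum(phi(x, mu) * math.log(phi(x, mu) / phi(x)) * dx for x in xs)
tv = 0.5 * sum(abs(phi(x, mu) - phi(x)) * dx for x in xs)
# Pinsker: tv <= sqrt(kl / 2); here roughly 0.197 <= 0.25.
```

Note that Pinsker's inequality in the form quoted above is for relative entropy measured in nats; with logarithms in another base, the constant changes accordingly.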
3.1 Projections of Sets in Gaussian Space
Our main technical result is a statement about the projections of sets in Gaussian space. More precisely, let A ⊆ ℝⁿ be any set of not too small measure, say, γ_n(A) ≥ 2^{−δn} for some constant δ > 0. What can we say about the projections (or one-dimensional marginals) of γ_n|_A, i.e., the set of distributions of ⟨u, X⟩, where X ∼ γ_n|_A, as the (fixed) vector u ranges over the unit sphere S^{n−1}?
Related questions have appeared in the literature. The first is in work by Sudakov [Sud78] and Diaconis and Freedman [DF84] (see also [Bob03] for a more recent exposition), who showed that for any random variable X in ℝⁿ with zero mean and identity covariance matrix whose norm is concentrated around √n, almost all its projections are close to the standard normal distribution. A second related result is by Klartag [Kla07] who, building on the previous result but with considerable additional work, showed that almost all projections of the uniform distribution over a (properly normalized) convex body are close to the standard normal distribution. (For the special case of the cube, this essentially follows from the central limit theorem.)
Our setting is different, as we do not put any restrictions on the set A (such as convexity) apart from its measure not being too small (and clearly without any requirement on the measure one cannot say anything about its projections). Another important difference is that in our setting the projections are not necessarily normal. To see why, take A = {x ∈ ℝⁿ : ||x₁| − β√n| ≤ 1} for an appropriate β = β(δ) > 0, a set with Gaussian measure roughly 2^{−δn}, half of which is on vectors with x₁ ≈ β√n and the other half on vectors with x₁ ≈ −β√n. It follows that the projection of γ_n|_A on a unit vector u is distributed more or less like the mixture of two normal variables, one centered around β√n·⟨u, e₁⟩ and the other centered around −β√n·⟨u, e₁⟩, both with variance roughly 1. For unit vectors u with |⟨u, e₁⟩| ≫ 1/√n (a set of measure Ω(1)), this distribution is very far from any normal distribution.
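This two-slab example is easy to simulate (an idealized sketch of ours: we place the conditioned measure exactly at x₁ = ±β√n and ignore the slab width, and the constants are our own choices):

```python
import math, random

rng = random.Random(2)
n, beta, c = 900, 1.0, 3.0
m = beta * c              # the two mixture centers sit at +-beta*c
u1 = c / math.sqrt(n)     # first coordinate of the unit direction u

samples = []
for _ in range(50000):
    x1 = beta * math.sqrt(n) * rng.choice([-1.0, 1.0])  # the two "slabs" of A
    # contribution of the remaining n-1 coordinates: N(0, 1 - u1^2)
    rest = math.sqrt(1 - u1 * u1) * rng.gauss(0, 1)
    samples.append(u1 * x1 + rest)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# A balanced mixture of N(-3, ~1) and N(+3, ~1): variance close to
# 1 + m^2 = 10, far more spread out than any single standard normal.
```

The bimodal shape (two translated normals) rather than a single Gaussian is exactly the phenomenon that Theorem 3.1 below says is, in a sense, the worst that can happen.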
Our main theorem below shows that the general situation is similar: for any set A of not too small measure, almost all projections of γ_n|_A are close to being mixtures of translated normal variables of variance 1. One implication of this (which is essentially all we will use later) is that for any A ⊆ ℝⁿ of not too small measure, and any set U ⊆ S^{n−1} of directions whose measure is also not too small, the inner product ⟨u, x⟩ for x chosen from γ_n|_A and u chosen uniformly from U is not too concentrated around any fixed value; in fact, it must be at least as “spread out” as a standard normal variable (and possibly much more).
Theorem 3.1.
For all ε > 0 there exists a δ > 0 such that, for large enough n, the following holds. Let A ⊆ ℝⁿ be such that γ_n(A) ≥ 2^{−δn}. Then, for all but an ε-measure of unit vectors u ∈ S^{n−1}, the distribution of ⟨u, X⟩, where X ∼ γ_n|_A, is equal to the distribution of Y + Z for some random variables Y and Z satisfying
The proof is based on the following two lemmas. The first one below shows that for any set A whose measure is not too small, and any orthonormal basis, most of the projections of γ_n|_A on the basis vectors are close to normal. In fact, the statement is somewhat stronger, as it allows us to condition on previous projections (and this will be crucially used).
Lemma 3.2.
For all δ, ε > 0 and large enough n the following holds. For all sets A ⊆ ℝⁿ with γ_n(A) ≥ 2^{−δn} and all orthonormal bases u₁, …, u_n of ℝⁿ, at least a (1 − ε) fraction of the indices i ∈ [n] satisfy

D(X_i | X₁, …, X_{i−1}) ≤ δ/ε,

where X_j = ⟨u_j, X⟩, with X ∼ γ_n|_A.
Proof.
By definition, D(X) = D(γ_n|_A ‖ γ_n) = log(1/γ_n(A)) ≤ δn. Thus, since (X₁, …, X_n) is the vector X written in the orthonormal basis u₁, …, u_n, using the chain rule for relative entropy, we have

D(X₁) + D(X₂ | X₁) + ⋯ + D(X_n | X₁, …, X_{n−1}) = D(X₁, …, X_n) = D(X) ≤ δn.

Hence, for at least a (1 − ε) fraction of the indices i, we have D(X_i | X₁, …, X_{i−1}) ≤ δ/ε. ∎
The second lemma is due to Raz [Raz99] and shows that any not-too-small subset of the sphere contains “nearly orthogonal” vectors. The idea of Raz's proof is the following. First, a simple averaging argument shows that there is a not-too-small measure of vectors u ∈ S^{n−1} satisfying the property that the measure of U inside the unit sphere formed by the intersection of S^{n−1} and the subspace orthogonal to u is not much smaller than the measure of U itself. Second, by the isoperimetric inequality, almost all vectors in S^{n−1} are within small distance of U. Together, we obtain a vector u as above that is within small distance of U. We take v₁ to be the closest vector in U to u and repeat the argument recursively with the intersection of S^{n−1} and the subspace orthogonal to v₁.
Definition 3.3.
A sequence of unit vectors v₁, …, v_k is called δ-orthogonal if, for all i ∈ [k], the squared norm of the projection of v_i on span(v₁, …, v_{i−1}) is at most δ.
Lemma 3.4 ([Raz99, Lemma 4.4]).
For all δ > 0, k ≥ 1, and large enough n, the following holds. Every U ⊆ S^{n−1} of Haar measure at least 2^{−δn} contains a δ-orthogonal sequence v₁, …, v_k.
Proof of Theorem 3.1.
Let U ⊆ S^{n−1} be an arbitrary set of unit vectors of measure at least ε. We will prove the theorem by showing that at least one vector u ∈ U satisfies the condition stated in the theorem.
By Lemma 3.4, there is a sequence of vectors v₁, …, v_k ∈ U that is δ-orthogonal. Let ṽ₁, …, ṽ_k be their Gram-Schmidt orthogonalization, i.e., each ṽ_i is defined to be the normalized projection of v_i on the space orthogonal to span(v₁, …, v_{i−1}). Notice that, by definition, we can write each v_i as

v_i = a_{i,1} ṽ₁ + ⋯ + a_{i,i} ṽ_i

for some real coefficients a_{i,j}. Moreover, by assumption, a_{i,i}² ≥ 1 − δ.
Let X₁, …, X_k be the random variables representing ⟨ṽ_j, X⟩ when X is chosen from γ_n|_A. By applying Lemma 3.2 to any completion of ṽ₁, …, ṽ_k to an orthonormal basis, we see that there exists an index i ≤ k for which

D(X_i | X₁, …, X_{i−1}) ≤ δ/ε.

(In fact, a constant fraction of the indices satisfy this.) It remains to notice that we can write ⟨v_i, X⟩ as

⟨v_i, X⟩ = a_{i,i} X_i + Σ_{j<i} a_{i,j} X_j,
which satisfies the condition in the theorem, with Z taken to be a_{i,i} X_i and Y taken to be the above sum Σ_{j<i} a_{i,j} X_j. Here we are using the fact that Y is a function of X₁, …, X_{i−1}, which implies that D(Z | Y) ≤ D(Z | X₁, …, X_{i−1}), since conditioning cannot decrease relative entropy. ∎
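The Gram-Schmidt step used in this proof can be sketched as follows (a toy ℝ³ instance of ours; the paper's vectors live on S^{n−1} and are only nearly orthogonal to begin with):

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors: each output vector is the normalized
    projection of the corresponding input on the space orthogonal to its
    predecessors."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

# Three nearly orthogonal unit vectors (a small "delta-orthogonal" family).
vs = [[1.0, 0.0, 0.0], [0.1, 1.0, 0.0], [0.1, 0.1, 1.0]]
vs = [[vi / math.sqrt(sum(c * c for c in v)) for vi in v] for v in vs]
basis = gram_schmidt(vs)
dots = [sum(a * b for a, b in zip(basis[i], basis[j]))
        for i in range(3) for j in range(3)]
```

Because the inputs are nearly orthogonal, each v_i has a coefficient close to 1 on its own orthogonalized vector, which is exactly the property the proof exploits.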
3.2 The Correlation Inequality
We now turn to our main technical result, which is given by the following theorem.
Theorem 3.5.
For all ε > 0 there exists a δ > 0 such that, for all large enough n, the following holds. For all sets A, B ⊆ ℝⁿ with γ_n(A), γ_n(B) ≥ 2^{−δn}, we have that
As will become evident in the proof, pairs (x, y) for which |⟨x, y⟩| is small contribute much less to the left-hand side than to the right-hand side. Hence the theorem essentially amounts to showing that ⟨x, y⟩ is not too concentrated around zero, and precisely such an anti-concentration statement is given by Theorem 3.1.
We point out the following easy corollary (which is in fact equivalent to Theorem 3.5).
Corollary 3.6.
For all ε > 0 there exists a δ > 0 such that, for all large enough n, the following holds. For any sets A, B ⊆ ℝⁿ with γ_n(A), γ_n(B) ≥ 2^{−δn}, where A (or B) is centrally symmetric (i.e., A = −A), we have that
Remark.
Without the symmetry assumption, this probability can be considerably smaller. For instance, take A and B to be two opposing halfspaces, i.e., A = {x ∈ ℝⁿ : x₁ ≥ t} and B = {x ∈ ℝⁿ : x₁ ≤ −t}, with t chosen so that γ_n(A) = γ_n(B) is as small as the hypotheses allow. Then the probability above can be seen to be much smaller than in the symmetric case. In fact, C. Borell [Bor85] showed that for any given measures of A and B, two opposing halfspaces of the corresponding measures exactly achieve the minimum of the probability above. It would be interesting to obtain a strengthening of Corollary 3.6 of a similar tight nature. See [Bar01] for a short related discussion.
Recall that cosh(y) = (e^y + e^{−y})/2. The following technical claim shows that if the distribution of a random variable X is close to the normal distribution (in relative entropy), then the expectation of cosh(aX) is at least roughly e^{a²/2}. Notice that if X is standard normal, this expectation is

E[cosh(aX)] = E[e^{aX}] = e^{a²/2},

where in the first equality we used the symmetry of the normal distribution, and the second follows from an easy direct calculation of the integral (just complete the square in the exponent).
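The complete-the-square identity E[e^{aX}] = e^{a²/2} for X ∼ N(0, 1), and hence the same value for the symmetrized expectation E[cosh(aX)], can be checked by direct quadrature (the helper function is ours):

```python
import math

def gauss_expect(f, lo=-12.0, hi=12.0, steps=48000):
    """Midpoint-rule approximation of E[f(X)] for X ~ N(0, 1)."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        total += f(x) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * dx
    return total

a = 1.5
lhs = gauss_expect(lambda x: math.cosh(a * x))
rhs = math.exp(a * a / 2)   # completing the square gives E[cosh(aX)] = e^{a^2/2}
```

The truncation at |x| = 12 is harmless here because the integrand decays like e^{a|x| − x²/2}.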
Claim 3.7.
For all η > 0 there exists a δ > 0 such that, for any probability distribution P on the reals satisfying D(P ‖ γ) ≤ δ, and any a ∈ ℝ with |a| at most a suitable constant, we have E_{X∼P}[cosh(aX)] ≥ (1 − η)·e^{a²/2}.
Proof.
Set M = M(η, a) to be a suitable truncation threshold, so that for all x with |x| ≤ M the integrand is under control,
where in the third inequality we use the fact that cosh(y) ≥ 1 for all y. Next, since the statistical distance between P and γ is at most √(δ/2) (by Pinsker's inequality), we have that
for small enough . ∎
Proof of Theorem 3.5.
Let τ > 0 be a small enough constant (depending only on the parameters of the theorem) to be determined later. By choosing a small enough δ, and using the concentration of the Gaussian measure around the sphere of radius √n (see, e.g., [Bal97, Lecture 8]), we can guarantee that A′, defined as

A′ = {x ∈ A : (1 − τ)√n ≤ ‖x‖ ≤ (1 + τ)√n},

satisfies γ_n(A′) ≥ γ_n(A)/2, and similarly for B′. We can write