Excited random walk in three dimensions has positive speed
1. Introduction
Excited random walk is a model of a random walk which, whenever it encounters a new vertex, receives a push toward a specific direction, call it the “right”, while when it reaches a vertex it “already knows”, it performs a simple random walk. This model was suggested in [BW] and has since received much attention, see [V, Z]. The reason for the interest is that it sits very naturally between two classical models: random walk in random environment and reinforced random walk. A reinforced random walk is a walk on a graph (say ) that, whenever it passes through an edge, changes the weight of this edge, usually positively (i.e. the edge now has a greater probability to be chosen when the random walk next reaches one of its end points) but possibly also negatively, the extreme case being the “bridge-burning random walk” which can never traverse the same edge twice. The problem appears naturally in brain research in connection with the evolution of neural networks. Reinforced random walk models are notoriously difficult to analyze, and even the question whether the simplest once-reinforced random walk on is recurrent or transient is open. See [K90, KR99, PV99, DKL02] for some known results.
Random walk in a random environment is also a model in which the environment is random, but independently of the walk. For example, one may toss a coin at every point of to decide whether at this point the walk will have a push to the left or to the right, and then perform a random walk on the resulting weighted graph. The independence of the walk from the environment turns out to provide powerful leverage, and many very precise results are known. See e.g. the book [H95].
Excited random walk has, seemingly, all the difficulties of reinforced random walk: the environment depends on the walk, and in a dynamic way. However, it has two significant advantages. The first is the inherent directedness: the drift of excited random walk is always in the same direction, and in particular, it can be coupled with a simple random walk so that the excited walk is always to the right of the simple random walk. The second is the projection to a simple random walk of lower dimension. Thus, for example, for the excited random walk in three dimensions, its projection on the two directions orthogonal to our “right” is a simple two-dimensional random walk, up to a time change.
Thus, for example, it is clear that the three-dimensional excited random walk is transient. Indeed, since a two-dimensional simple random walk visits an order of vertices, the three-dimensional excited random walk must visit at least vertices. This means, roughly, that ( denoting the first, “left-right” coordinate of ), and in particular that drifts to the right and returns to every point only a finite number of times (in the two-dimensional case this argument does not work — see [BW] for a proof of this fact). The purpose of this note is to improve this obvious remark. We shall show that the factor is only an artifact of this argument; namely, we shall prove
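The transience heuristic of this paragraph can be written out schematically. In the display below the notation is mine, not the paper's: R_n denotes the range of the projected two-dimensional walk and delta the expected push in the e_1 direction received at a new vertex; it relies only on the standard fact that the range of a two-dimensional simple random walk of length n is of order n/log n.

```latex
% Sketch of the heuristic (my notation): the walk meets at least
% as many new vertices as the range R_n of its two-dimensional
% projection, and each new vertex contributes an expected drift
% of \delta to the first coordinate, while already-visited
% vertices contribute zero on average.
\[
  \mathbb{E}\bigl[X_n \cdot e_1\bigr]
  \;\gtrsim\; \delta \,\mathbb{E}[R_n]
  \;\geq\; \frac{c\,\delta\, n}{\log n}.
\]
```

The logarithmic factor here is exactly the one the theorem removes.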
Theorem 1.
Let be an excited random walk. Then
The corresponding problem in two dimensions remains open.
I believe that the lower limit above is in fact a limit. We will not prove this, but our techniques show great independence between different parts of an excited random walk, so it stands to reason that it should not be difficult.
An important element of the proof is a two dimensional result which might be of independent interest — indeed we already have another application for it, [ABK]. It reads
Theorem 2.
Let and be two independent simple random walks on starting from , of length and of length for some . Then
(1)  
(note that the event has the elegant expression: has at least “holes” in a vicinity of zero — if finds them then they must exist! The theorem is sharp in the following sense: with probability , covers all of . The (easy) proof of sharpness will only be sketched below.) In particular, it shows that
1.1. Around the proof
The proof of theorem 1 is only three pages long, but its inductive nature, the number of parameters and their interdependencies make it somewhat opaque. Therefore I feel compelled to make some vague comments in preparation for the actual argument. The basic argument is a block decomposition. This approach has been tried before, but a straightforward attack does not work. If you divide your time span into blocks of length and allow yourself to “lose a factor of ” in each, you are left with the following obstacle: once you have one really bad block (which will happen, if is large enough), it is difficult to say anything useful about the next block. And then about the block following it. And so on. Hence cannot be independent of — it has to be at least to get something. Thus the basic block approach gives (roughly) , but not a constant.
The argument here tries to work around this problem by a “restart mechanism”, namely some way to continue after encountering a bad block. This mechanism, roughly, throws away a big chunk in this case, initializes the process anew using two-dimensional considerations, and is then forced to “pay” only a little for the bad block — and of course, bad blocks happen very rarely. The “big chunk” above, denote it by , where , is simply an intermediate-size block. Since multiple layers are needed to get an actual constant, the easiest way to describe the structure is inductively. Thus the reader should probably keep in mind, while reading the proof, that it really describes a multilayer structure where layer is used to restart the estimates in the rare event that a block at the th level failed.
Acknowledgement.
I wish to thank Itai Benjamini for many useful discussions.
1.2. Excited random walk — notations
Let . An excited random walk (in three dimensions) is a random sequence of points in with the distribution defined as follows. . Denote . Then

If for some ( is “visited”), then is one of the six neighbors of in with probability each.

Otherwise ( is “new”), the probability is for and for . The other neighbors have probability each.
In both cases, the random choice is independent of the past, except for the position and whether the vertex is visited or new.
If is any set and is a point, then an excited random walk starting from is an excited random walk such that , and such that if then rule 1 above applies to it regardless of the past of , i.e. all vertices in are considered “visited”.
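For concreteness, the two rules above can be sketched in code. This is only an illustrative simulation, not part of the proof; since the numerical values are elided in the text, it assumes the parametrization with a bias parameter p in (1/6, 1/3]: at a new vertex the step is +e1 with probability p, -e1 with probability 1/3 - p, and each of the four remaining neighbors with probability 1/6. The argument `initial_visited` plays the role of the set of a priori "visited" vertices.

```python
import random

# The six unit steps in Z^3; e1 = (1, 0, 0) is the "right" direction.
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def excited_walk(n, p=0.25, start=(0, 0, 0), initial_visited=(), rng=random):
    """Simulate n steps of a 3d excited random walk started from `start`.

    At a vertex already "visited" the walk takes a uniform step among
    the six neighbors (rule 1); at a new vertex it steps +e1 with
    probability p and -e1 with probability 1/3 - p, the four remaining
    neighbors having probability 1/6 each (rule 2, assumed values).
    """
    assert 1 / 6 < p <= 1 / 3
    visited = set(initial_visited)
    path = [start]
    for _ in range(n):
        x = path[-1]
        if x in visited:
            step = rng.choice(STEPS)            # rule 1: simple random walk
        else:
            u = rng.random()
            if u < p:
                step = (1, 0, 0)                # the push to the "right"
            elif u < 1 / 3:
                step = (-1, 0, 0)
            else:
                step = rng.choice(STEPS[2:])    # the four orthogonal neighbors
        visited.add(x)                          # x is now "visited"
        path.append(tuple(a + b for a, b in zip(x, step)))
    return path
```

Passing a nonempty `initial_visited` gives the walk "excited from a set" of the definition above: on that set rule 1 applies regardless of the past.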
1.3. Standard notations
The notations and relate to absolute constants, which may be different from place to place. Sometimes we shall number them for clarity. will usually pertain to constants which are “large enough” and to constants which are “small enough”. The notation is shorthand for . The notations and have no particular additional mathematical content over and . We only use them to stress that at a specific point the estimate is very rough, and that this is OK because it is enough for our purposes.
For a subset , we denote by the inner boundary, namely all vertices with at least one neighbor outside .
For a number , will denote the largest integer and will denote the smallest integer .
2. Simple random walk in two dimensions — the phenomenon.
Lemma 1.
Let and let . Let and let be random walks starting from and stopped on . Then
Proof.
Lemma 2.
Let and let for some sufficiently big. Let , let and let be random walks starting from and conditioned to hit at . Then
Proof.
This follows as lemma 1 when one remembers the following fact: if is an (unconditioned) random walk starting from and stopped on , and if is any event that depends only on the portions of inside , then
(2) 
This is well known; see e.g. [BK, lemma A.5] (the result there is for dimension but the same proof holds for dimension with minimal changes). The constant comes from the constants implicit in the notation in (2). ∎
Definition.
Let be a random walk with some stopping time and let be some ball. Define stopping times by and
(3) 
Let . We call the number of visits to . In many cases we will have walks with stopping times . In this case we define and in the same manner, and call the total number of visits to . Note that it is possible for (some of) the random walks to start inside the ball . In this case and this is considered the first visit.
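In code, this bookkeeping of visits might look as follows. This is a hypothetical helper, not from the paper; it assumes a Euclidean ball and that a visit ends as soon as the walk exits the ball — if the intended definition separates visits by exiting a larger concentric ball, only the re-entry test changes.

```python
def num_visits(path, center, radius):
    """Count visits of a walk to the ball B(center, radius).

    A "visit" is a maximal excursion inside the ball: it starts at an
    entrance time T_i and lasts until the walk leaves the ball; the
    next entrance starts visit i + 1.  If the walk starts inside the
    ball, that counts as the first visit.
    """
    def inside(x):
        return sum((a - c) ** 2 for a, c in zip(x, center)) <= radius ** 2

    visits = 0
    was_inside = False
    for x in path:
        now_inside = inside(x)
        if now_inside and not was_inside:
            visits += 1           # a new entrance time T_i
        was_inside = now_inside
    return visits
```

The total number of visits of several walks is then just the sum of `num_visits` over the walks.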
Lemma 3.
Let and let for some sufficiently big. Let , and let be random walks starting from and conditioned to hit in . Then
(4) 
Proof.
Clearly, we may assume is sufficiently large (in the sense that ) and pay only in the constant in (4); and since the probability is decreasing in that means we may also assume is sufficiently large. Let for some that will be fixed, together with , only later (however, the implicit constants and are assumed to be fixed after and and may depend on their values). We do need to remark at this stage that is also “sufficiently large”, i.e. during the proof we will only add restrictions that increase it. Let satisfy that are disjoint and that . Clearly, we may assume . Examine the ’th walk (for a while, everything below will depend on this but we will not repeat this fact every time). Define stopping times as follows: and
(5) 
We have left out the which might be formally needed in the definition of since, clearly, the same holds for both and — may only change when changes. It is easy to see that for some sufficiently small,
(i.e. the estimate holds independently of the past). Notice that this uses (2) to overcome the conditioning over the past. Hence, easily,
(6) 
(not necessarily the same , of course). Let be the stopping time when exits . It is well known that is approximately , with an exponentially decreasing tail, i.e.
(7) 
(the only difficulty is that is conditioned, and (2) doesn’t apply. Again, we refer to a proof of a high dimensional analog result, [BK, lemma A.8]).
We now return to the notation . Define . The same argument that gave (6) will give, with another sum,
(8) 
while a sum over (7) would give
(9) 
Picking sufficiently large such that this would be , and combining with (8), we get an estimate for :
(10) 
Denote this event by . Let denote the space of vectors where are integers and the are points in some and respectively. For every denote by the event that and that . Since clearly determines whether happened or not, define to be the collection of all ’s such that ensures . Let be the event that for all and all and let .
If the total number of visits of some ball by the ’s is , we may apply lemma 2, if only , where is from lemma 2, i.e. if , and we get
(11) 
Notice that is, in effect, the sum over all balls of the total number of visits of the ’s. Since the number of balls is , and since says that , we get that for at least half of the balls the number of visits is . We now choose so that (11) is satisfied for at least half of the balls. Further, conditioning by , all the balls are independent, and using standard estimates for independent variables we get, for sufficiently large,
Together with (10) we get
Hence we only need to check when , but this happens, for sufficiently large, when , so we may now choose and the lemma is proved. ∎
Remark.
Clearly, the same proof yields the stronger estimate
However, we will not need it here.
Lemma 4.
Let , let and let . Let and and let be random walks starting from and conditioned to hit in . Let satisfy and for all . Let denote the total number of visits of the ’s to . Then
Proof.
The proof is a simple variation on the classic estimate for sums of independent variables. Let be the number of visits of to . The same harmonic potential estimates as in lemma 1 show that for any point in , the probability to hit before hitting is (here we use (2)). This means that each has an exponential distribution with the tail decreasing faster than , or in other words,
(12) 
for some sufficiently small.
Remark.
The value is of course arbitrary — it can be replaced by any but the constants and from the formulation of the lemma depend on this .
Proof of theorem 2.
As usual, we assume is sufficiently large, as we may. The constant will be fixed last, at the very end of the proof. In particular we assume so that we have no problem dividing by . Let . Clearly,
(13) 
Denote this “bad” event by .
Examine the number of visits of to . Let be some point and let be a random walk starting from , and let be the first time when . Clearly, if then and then the usual harmonic potential argument gives
Clearly, this implies that if then with probability this is the last visit of to . Hence we see that the number of visits has an exponentially decreasing tail, and in particular, for any constant ,
(14) 
where the various ’s depend on . We shall fix later on. This bad event (denote it by ) is the one with the largest probability, and the reason that the factor appears in (1). Let
(15) 
Define the stopping times using (3) for the ball , and from now on we shall examine instead of . Similarly define to be the stopping time when exits and replace with . More precisely, we shall show that
(16) 
Let therefore , , be the event that (notice that ). Conditioning by we get independent walks, each one conditioned to exit at a given point. We wish to use lemma 4 with . First we note that
(17) 
so for sufficiently small and sufficiently large we get , and we may apply lemma 4 in a meaningful way and get, for every satisfying ,
(18) 
where is the number of visits to . Pick sufficiently large such that the probability above is .
Examine the set
The proof of the theorem will follow from the interactions of and with the balls , (note that they are disjoint). As in the proof of lemma 3, we want to be able to consider the events inside each as independent. Define therefore stopping times for , similarly to (5), i.e.
and define , where , to be the event that and that and , or in other words, that the total number of visits of to the balls , , is . Denote the collection of these ’s by Z. Examine first .
1. and the balls .
Since and all other we get that for all and . Hence, since with (18) and our choice of we get
(19) 
Denote this event by . Let be the subset of all ’s such that implies (clearly, if happened then we can calculate the number of visits to every and know whether happened or not). Conditioning by , we get that all balls are independent, and we may use lemma 3 for every , if only . Remembering (15) and (17), and comparing the exponents, we see that this will hold (for sufficiently large) if only
which, again, holds if only is sufficiently small. The conclusion of lemma 3 now reads
(20)  
2. and the balls .
The last conclusion, (20), says in effect that many balls have “large holes” (the ’s) in them. Here we shall complement this with proving that passes through many ’s and at least in one of them, through a sizable part of the hole.
Easily, if is a random walk starting from and stopped at , then , and if
Hence the probability not to intersect has an exponentially decreasing tail, as long as we are still within the annulus. In particular, if
Examine now an annulus where , intersects a ball . Taking we get a sequence of disjoint annuli, and then . We get that with probability
and in particular, if is the set of ’s such that intersects , then
(21) 
Denote this event by . (Footnote: this estimate is actually quite bad. The true expected value of is , analogous to the fact that a random walk of length passes through approximately distinct points. However, it will do for our needs.) Define to be the subset of ’s that ensure that did not happen.
3. The interaction between and .
Next examine one , . For every , the harmonic potential argument shows that
hence, if then , and of course . This shows that
Remembering (20) and the independence of and gives
for some sufficiently small. Remembering the definition of (see below (21)) we get
(remember the definition of , (15)). Throwing in (19) and (21) and summing over all we get
However, this event is what we need in (16)! Indeed, directly from the definitions,
so we need only explain why . Using (17) we see that this is equivalent to, for sufficiently large,
(22) 
which holds for sufficiently small. Finally we may fix the value of , get (16) and hence the theorem. ∎
Remarks.

As explained in the introduction, the theorem is sharp in the sense that with probability , covers all of . Roughly, the proof is as follows: lemma 1 can be reversed to show that the probability to cover any point in is . Similarly, the argument leading to (14) can be reversed to show that the probability to have visits to is , and these two together give the result.
3. Excited random walk in three dimensions
The theorem will follow very easily from the following lemma. In effect, the lemma is stronger than the theorem. The reason we need this stronger formulation is its inductive proof.
Lemma 5.
Let . Let be any configuration of visited vertices. Let . Let be an excited random walk starting from of length . Then
where the numbers satisfy a recursive condition ensuring that .
The mystery number is simply in the middle between the of theorem 2 and . Since the of theorem 2 was an arbitrary number , so is this . For the impatient, the recursive condition on the is (30) below where is defined in (24) and where is some constant. It clearly ensures .
Proof.
All the constants during the proof will depend on , but we will not repeat this fact and only write or instead of and . The lemma will be proved by induction, so assume the lemma holds for any (we shall explain how to deal with the case , indeed with all sufficiently small , at the end). Due to this fact we need to pay special attention to the constant , to ensure that it is indeed a constant and does not increase with — hence none of the and below will depend implicitly on .
Our first observation is that one can couple, in the obvious way, the excited random walk in the interval to a regular three-dimensional random walk (meaning, realize them on the same probability space) such that for . For we can use a simple estimate of binomial variables to say that
(23) 
Hence the same holds for and we conclude that we do not need to know anything about — with very large probability, does not intersect and we need to investigate only its intersections with .
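A coupling of an excited walk with a simple walk can be sketched as follows. This uses the same assumed parametrization as before (p for +e1 and 1/3 - p for -e1 at new vertices) and realizes the standard monotone coupling mentioned in the introduction, where the excited walk is always to the right of the simple walk; it is not necessarily the exact construction intended at this step of the proof. Both walks read the same uniform variable each step, and at a new vertex the excited walk remaps the top part of the -e1 interval to +e1.

```python
import random

# The six unit steps in Z^3; e1 = (1, 0, 0) is the "right" direction.
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def coupled_walks(n, p=0.25, rng=None):
    """Couple an excited walk X with a simple walk W on shared randomness.

    Each step both walks read the same uniform u.  The simple walk maps
    the six intervals of length 1/6 to the six steps.  At a new vertex
    the excited walk remaps the sub-interval of mass p - 1/6 at the top
    of the -e1 interval to +e1, realizing the assumed bias: +e1 with
    probability p, -e1 with probability 1/3 - p.  The walks thus differ
    only by extra +e1 steps of X, so the first coordinate of X dominates
    that of W while the other two coordinates always agree.
    """
    assert 1 / 6 < p <= 1 / 3
    rng = rng or random.Random()
    x = w = (0, 0, 0)
    X, W = [x], [w]
    visited = set()
    for _ in range(n):
        u = rng.random()
        step_w = STEPS[int(u * 6)]          # simple random walk step
        if x not in visited and 1 / 2 - p <= u < 1 / 3:
            step_x = (1, 0, 0)              # remapped: excited walk pushed right
        else:
            step_x = step_w                 # otherwise the steps coincide
        visited.add(x)
        x = tuple(a + b for a, b in zip(x, step_x))
        w = tuple(a + b for a, b in zip(w, step_w))
        X.append(x)
        W.append(w)
    return X, W
```

Note that at a remapped time the simple walk steps -e1 while the excited walk steps +e1, so the difference of first coordinates only ever grows, and the orthogonal coordinates of the two walks are identical at all times.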
Let
(24) 
and let for . Let
Theorem 2 allows us to estimate since if the projections of the ’s on the second and third coordinates are large then they themselves definitely will be. Denote the projection by .