
How many real zeroes does a random polynomial have?

August 25, 2013

Finding the real zeroes of a polynomial (with real coefficients) is perhaps one of the oldest and most important problems in mathematics. There are many theorems which give estimates for the number of real roots. A famous example is Descartes’ rule of signs:

“The number of positive zeroes is at most the number of sign changes in the coefficient sequence.”

For example, x^3 + x^2 - x - 1 has at most one positive real zero. As a matter of fact, it has exactly one positive zero. On the other hand, I do not know of a good way to give a lower bound on the number of real zeroes.
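(As a small aside, here is a minimal numerical check of this example. It uses numpy, and the 1e-9 tolerance for deciding which computed roots count as real is an arbitrary choice.)

    # Check Descartes' rule on p(x) = x^3 + x^2 - x - 1.
    import numpy as np

    coeffs = [1, 1, -1, -1]  # x^3 + x^2 - x - 1, leading coefficient first

    # Number of sign changes in the (nonzero) coefficient sequence.
    nonzero = [c for c in coeffs if c != 0]
    sign_changes = sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

    # Positive real zeroes, found numerically.
    roots = np.roots(coeffs)
    positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]

    print("sign changes:", sign_changes)           # 1
    print("positive real zeroes:", positive_real)  # approximately [1.0]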

In general, the number of real zeroes can be seen as a function of the coefficients. If one chooses these coefficients randomly, then this number becomes a random variable. Waring was the first to study the distribution of this random variable (he considered cubic polynomials), but systematic study only started in the 1940s with a series of works of Littlewood and Offord, following earlier considerations of Bloch and Polya. This series of papers has become the defining work of Offord’s career.

The precise setting of the problem is as follows. Let c_0, \dots, c_n be deterministic numbers which may depend on n, and \xi_0, \dots, \xi_n be iid copies of a real random variable \xi with mean zero and variance one. Let N_n be the number of real zeroes of the random polynomial f_n(z) = \sum_{i=0}^n c_i \xi_i z^i.

One would like to study the distribution of N_n.  Another, more difficult question is to study the distribution of the real zeroes themselves.
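(To get a first feeling for N_n, one can of course simply simulate. The rough sketch below, which is my own illustration and not taken from any of the papers discussed here, estimates E(N_n) in the case c_0 = \cdots = c_n = 1 with Bernoulli atoms; the tolerance used to decide which numerically computed roots are real is an ad hoc choice.)

    # Monte Carlo estimate of E(N_n) for f_n(z) = sum_i xi_i z^i with +-1 atoms.
    import numpy as np

    rng = np.random.default_rng(0)

    def count_real_zeroes(coeffs, tol=1e-8):
        """Count the roots whose imaginary part is numerically negligible."""
        roots = np.roots(coeffs)
        return int(np.sum(np.abs(roots.imag) < tol))

    def estimate_EN(n, trials=2000):
        total = 0
        for _ in range(trials):
            xi = rng.choice([-1.0, 1.0], size=n + 1)  # iid Bernoulli atoms, c_i = 1
            total += count_real_zeroes(xi)
        return total / trials

    for n in [20, 50, 100]:
        print(n, estimate_EN(n))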

From the 1940s to the 1960s, many leading researchers (including Bloch, Polya, Littlewood, Offord, Erdos, Kac, etc.) obtained significant results bounding N_n (either in probability or in expectation), for various atom variables \xi. In particular, Kac obtained an asymptotically sharp estimate of E(N_n) in the case where \xi is real gaussian and c_0 = c_1 = \cdots = c_n = 1: in this case, the expectation of N_n is approximately C \log n with C = 2/\pi. His approach led to the general Kac-Rice formula, which is an important tool in the field. Several decades later, in 1995, Edelman and Kostlan came up with a nice geometric approach that allows one to compute the expectation of N_n for a general sequence of c_i, given that the atom variable \xi is still gaussian.
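For the record (the formulas below are reconstructed from memory, so the reader should consult the Edelman-Kostlan paper for the precise statements): in the gaussian case with c_0 = \cdots = c_n = 1, Kac's formula reads

E(N_n) = \frac{1}{\pi} \int_{-\infty}^{\infty} \sqrt{ \frac{1}{(x^2-1)^2} - \frac{(n+1)^2 x^{2n}}{(x^{2n+2}-1)^2} } \, dx,

and this integral is asymptotically (2/\pi) \log n. The Edelman-Kostlan approach gives, for a general coefficient sequence c_i (still with gaussian atoms),

E(N_n) = \frac{1}{\pi} \int_{-\infty}^{\infty} \sqrt{ \frac{\partial^2}{\partial x \, \partial y} \log K(x,y) \Big|_{x=y=t} } \, dt, \qquad K(x,y) = \sum_{i=0}^n c_i^2 x^i y^i,

which can be interpreted as 1/\pi times the length of the projection of the moment curve t \mapsto (c_0, c_1 t, \dots, c_n t^n) onto the unit sphere.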

Motivated by the universality phenomenon, one would conjecture that, given a “reasonable” coefficient sequence, any result for N_n in the gaussian case should hold for a general variable \xi with the same mean and variance. However, the Edelman-Kostlan proof relies heavily on the geometric fact that a gaussian vector is rotationally symmetric. Even in the case when all c_i = 1, extending Kac’s result to other random variables is a highly non-trivial task and has required a considerable amount of effort. In 1947, Kac extended his famous result in the gaussian case to the case when \xi has the uniform distribution on the interval [-1,1]. In 1956, Erdos and Offord extended the result to the Bernoulli case (\xi is uniform on the set \{-1,+1\}). Stevens in 1969 pushed the extension to a quite wide class of distributions, and around the same time Ibragimov and Maslova extended Kac’s result to all mean-zero distributions in the domain of attraction of the normal law, under the extra assumption that Pr(\xi = 0) = 0. A few years later, Maslova computed the variance of N_n and furthermore established a central limit theorem.

Let us now turn to the general case when c_i can be rather arbitrary (in particular, the c_i can depend on both i and n). A deeper level of understanding of N_n is to determine the correlation functions of the real zeroes. These functions are essential tools in the study of random point processes. In simple cases, such as when the process is Poisson (one simply puts n independent random points on [-1,1], say), the correlation between the points is trivial (there is none). However, when the points come from a complex system (such as the zeroes of a random polynomial or the eigenvalues of a random matrix), the situation is highly sophisticated, as the points correlate strongly and nearby points exert a repelling force. (This is the reason why researchers like to use such processes for modeling in physics.) In the polynomial case, the correlation functions provide information about the interaction between real zeroes in a very short interval, where one expects to see only O(1) zeroes. In principle, from there one can determine all moments of the number of zeroes in that short interval. Given this local information, in most cases one can determine all moments of the global variable N_n. For example, by cutting the real line into small intervals, one obtains the expectation of N_n by the additivity of expectations. For more details, including the precise definition of correlation functions, we refer to Hough et al. Thus, a more ambitious goal is to go for the universality of the correlation functions directly, instead of proving universality for N_n.
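To fix notation (this is the standard definition, essentially as in Hough et al.): for a simple point process on the real line, the k-point correlation function \rho_k is determined by the requirement that

E \sum \phi(x_{i_1}, \dots, x_{i_k}) = \int_{\mathbb{R}^k} \rho_k(y_1, \dots, y_k) \, \phi(y_1, \dots, y_k) \, dy_1 \cdots dy_k

for every continuous, compactly supported test function \phi, where the sum on the left runs over all k-tuples of distinct points of the process. In particular, taking k = 1 and letting \phi approximate the indicator of an interval I gives E N(I) = \int_I \rho_1(y) \, dy; summing this over a partition of the real line into short intervals recovers E(N_n) by additivity of expectation, as described above.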

In the gaussian case, the correlation functions can be computed using the Kac-Rice formula. (As a matter of fact, this formula is available in an abstract form for any atom variable \xi; but its explicit calculation is extremely hard, and one needs strong properties of gaussian vectors to see it through.) Universality for correlation functions was established by Bleher and Di in 2004 for random binomial polynomials (c_i := {n \choose i}^{1/2}), given that the atom variable \xi has a continuous distribution function which is sufficiently smooth. Their method relies on the Kac-Rice formula, which gives an abstract form for the correlation functions of all orders. As mentioned above, it is very hard to evaluate this formula in non-gaussian cases. Using analytic arguments, Bleher and Di showed that the difference between the value of the formula in the gaussian case and in a general case is negligible. The authors mentioned that a similar argument would also work for the Weyl polynomials (c_i := \sqrt{1/i!}), another well-studied class of random polynomials. On the other hand, their proof requires the analytic assumption on the distribution of \xi in an essential way and so does not apply to discrete cases, such as Bernoulli. Furthermore, the exact formula for the coefficients c_i ({n \choose i}^{1/2} and \sqrt{1/i!}, respectively) also plays a role, and it is not clear if one can extend the result to other sequences.
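For reference, the abstract form of the Kac-Rice formula mentioned above is (under suitable assumptions on the joint density p_t of (f_n(t), f_n'(t)))

E N_n(a,b) = \int_a^b \int_{\mathbb{R}} |y| \, p_t(0, y) \, dy \, dt,

where N_n(a,b) denotes the number of real zeroes in (a,b); there are analogous, more complicated expressions for the higher correlation functions in terms of the joint densities of the values and derivatives of f_n at several points. In the gaussian case these joint densities are explicit gaussians, which is what makes the calculation tractable; for a general atom variable they are not.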

In a recent attempt, Tao and I proposed another way to prove universality. What we did was to prove a general replacement principle which yields universality of the correlation functions, given some assumptions about the (joint) distribution of the values of f_n(z) (for a general sequence c_0, \dots, c_n) and some easy bounds on the correlation functions in the gaussian case. It is not too difficult to show that these requirements are satisfied in the Weyl and binomial cases. For Kac’s polynomial (c_i = 1) it requires a little more work, in particular the application of Inverse Littlewood-Offord theory and a quantitative version of Gromov’s growth theorem. The method also works for many other distributions; in fact we believe it works for most “natural” sequences. Our result can be seen as a two moment theorem for random polynomials (by comparison with our earlier four moment theorem for random Hermitian matrices). However, the approach is quite different and more general. As a matter of fact, the polynomials considered in the paper do not need to have independent coefficients, so our method can also be applied to the characteristic polynomial of a random matrix.

While the above discussion sounds rather analytical, our proof, somewhat surprisingly, is essentially combinatorial in spirit. Here are two new, simple (take-home) arguments that I like a lot. Our starting idea is to use sampling to show that two quantities are approximately the same. In particular, we can approximate an integral by a random Riemann sum. As is well known, this approximation is good if we have sufficiently many sampling points and the “variance” of the function in question is small, so that Chebyshev’s inequality can be used. The first twist in the proof here is that our function has a large variance. To fight this problem, we apply a smoothing process, which replaces a function f by f - g, where g is chosen carefully so that \int g = 0 and f - g is much smoother than f; this reduces the “variance” significantly. Another new, useful idea is a new kind of small ball probability estimate (of Littlewood-Offord type) that uses the lacunary structure of the coefficient sequence.
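Here is a toy numerical illustration of the smoothing idea (my own example, not taken from the paper; the particular f and g below are chosen purely for illustration): subtracting an oscillatory g with \int g = 0 leaves the integral unchanged but shrinks the spread of the random Riemann sum considerably.

    # Toy illustration: estimate \int_0^1 f by a random Riemann sum,
    # before and after subtracting an oscillatory g with \int_0^1 g = 0.
    import numpy as np

    rng = np.random.default_rng(0)

    f = lambda t: np.exp(t) + 5 * np.sin(2 * np.pi * 50 * t)  # "rough" function
    g = lambda t: 5 * np.sin(2 * np.pi * 50 * t)              # integrates to 0 on [0,1]
    # f - g = exp(t) is smooth, and \int_0^1 (f - g) = \int_0^1 f = e - 1.

    def random_riemann_sum(h, n_points=200, n_trials=500):
        """Return mean and spread of the estimator: average of h over uniform samples."""
        estimates = [h(rng.uniform(0.0, 1.0, n_points)).mean() for _ in range(n_trials)]
        return float(np.mean(estimates)), float(np.std(estimates))

    print("true value              :", np.e - 1)
    print("direct (mean, spread)   :", random_riemann_sum(f))
    print("smoothed (mean, spread) :", random_riemann_sum(lambda t: f(t) - g(t)))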


7 comments
  1. abc_learner

    An interesting problem!
    Dear Professor, I am a new mathematics student in Vietnam. I am also very fond of combinatorial topics. While looking around I found the book Additive Combinatorics by you and Prof. Tao. Skimming through the chapters, I saw a lot of material I have not learned yet, so may I ask which books I should read beforehand in order to understand your book? Thank you very much for this post.

    • You can start by reading basic books on: Probability (Feller), Harmonic Analysis (Stein), Number Theory (Hardy), Graph Theory (West or Bollobas).

      Actually, Additive Combinatorics is self-contained, meaning that all the notions are defined from scratch; it is just that the pace is relatively fast, so you cannot simply skim through it.

      • abc_learner

        Thank you very much for the guidance and advice. I wish you and your family good health.

  2. Hong

    Dear Prof. Vu,

    Is there any generalization of Descartes’ rule of signs for random polynomials?
    In other words, I am interested in the following question.

    How many real positive zeroes does a random polynomial have?

    Could you please give me a reference on this?

    Thank you!

    Yours,
    Hong

    • If you only care about polynomials with gaussian coefficients, the density function of the real roots is known, via Kac’s formula. Thus, in principle, you can compute the expected number of real roots in any interval. See, for instance, the Edelman-Kostlan paper in the Bulletin of the AMS.

    • My recent paper with Tao (Universality of Random polynomials) extends Kac’s result to general ensembles.

  3. Hong

    Thanks for your suggestion!
