36

A $0,1$ polynomial is a polynomial with coefficients from $\{0,1\}$. I investigate the number of roots of such polynomials.
We are talking about real roots, and multiple roots are counted only once.
It was found numerically that the minimal degrees $\nu(n)$ of such polynomials with a given number $n$ of roots are (as of 07.03):
(In light of @Timothy Chow's answer, a $\sqrt{\nu(n)}/n$ column has been added.)

| $n$ | $\nu(n)$ | $\sqrt{\nu(n)}/n$ | Example |
|---|---|---|---|
| 1 | 1 | 1.0 | $x$ |
| 2 | 2 | 0.707 | $x^2+x$ |
| 3 | 7 | 0.882 | $x^7+x^4+x^2+x$ |
| 4 | 10 | 0.791 | $x^{10}+x^9+x^7+x^4+x^2+x$ |
| 5 | 19 | 0.872 | $x^{19}+x^{18}+x^{16}+x^{11}+x^9+x^6+x^5+x^4+x^2+x$ |
| 6 | 28 | 0.882 | $x^{28} + x^{27} + x^{25} + x^{20} + x^{18} + x^{14} + x^{13} + x^{11} + x^{9} + x^{8} + x^{7} + x^{4} + x^{2} + x$ |
| 7 | $\le 41$ | $\le 0.915$ | $x^{41} + x^{40} + x^{38} + x^{35} + x^{31} + x^{30} + x^{29} + x^{24} + x^{20} + x^{18} + x^{17} + x^{15} + x^{12} + x^{11} + x^7 + x^4 + x^2 + x$ |
| (7) | $\le 45$ | $\le 0.958$ | $x^{45} + x^{44} + x^{42} + x^{37} + x^{35} + x^{33} + x^{31} + x^{30} + x^{28} + x^{26} + x^{24} + x^{23} + x^{22} + x^{20} + x^{18} + x^{16} + x^{15} + x^{13} + x^{11} + x^{9} + x^{4} + x^{2} + x$ |
| 8 | $\le 52$ | $\le 0.901$ | $x^{52} + x^{51} + x^{49} + x^{46} + x^{40} + x^{38} + x^{37} + x^{36} + x^{35} + x^{33} + x^{31} + x^{29} + x^{27} + x^{26} + x^{24} + x^{22} + x^{20} + x^{18} + x^{17} + x^{16} + x^{15} + x^{13} + x^{7} + x^{4} + x^{2} + x$ |
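
For anyone who wants to double-check a table entry, here is a minimal sketch (verification only, assuming SymPy is available; this is not how the table was computed) that counts the distinct real roots of the $n=6$ example:

from sympy import Symbol, Poly

x = Symbol('x')
exps = [28, 27, 25, 20, 18, 14, 13, 11, 9, 8, 7, 4, 2, 1]   # the nu(6) = 28 example above
f = Poly(sum(x**e for e in exps), x)
print(len(set(f.real_roots())))                              # 6 (multiple roots would be counted once)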

And are there some theoretical results about real roots of $0,1$ polynomials?

Updates

1

OEIS' editors decided to separate these results into a new sequence:
A368824
Anyone can edit and update it!

2

All polynomials with the maximum number of roots, except $p_1$ (which is understandable) and $p_7$ (which I think will be corrected), have roots $0$ and $-1$, and their plots on $[-1,0]$ look like this:

[plot of the extremal polynomials on $[-1,0]$]

I've got a mad idea that there is some differential equation
$P(x)y''(x) + Q(x)y'(x) + R(x)y(x)=0$
with boundary conditions $y(0)=y(-1)=0$ which is related to these polynomials...

3

A solution was added for $n=7$ with degree $\nu=41$; see Fred's answer below. This polynomial has a root at $x=-1$, as expected. I have not yet deleted the previous solution with $\nu=45$.

Fred Hucht
  • 2,705
  • 6
    Perhaps it's just an instance of Guy's strong law of small numbers. – Max Alekseyev Jan 05 '24 at 12:41
  • To preserve the pattern of each polynomial containing the previous one, for 5 roots there's $p_5(x) = x^{19} + x^{18} + x^{16} + x^{13} + p_4(x)$ – Command Master Jan 05 '24 at 17:31
  • @MaxAlekseyev, yes, I've thought about it, so the match with A336903 may be wrong. But anyway it looks like there is some law behind this subject – Denis Ivanov Jan 05 '24 at 17:54
  • @CommandMaster Hmm, I tried, but I can't find a direct pattern of behaviour =( – Denis Ivanov Jan 05 '24 at 18:09
  • My 2¢, reducing the problem to a purely combinatorial one: the number of real roots of $\sum_0^n a_kx^k=\prod(x-\alpha_k)$ (roots are pairwise distinct) is $e_+-e_-$ where $e_+$ is the number of positive eigenvalues of $(s_{i+j-2})_{i,j=1}^n$ and $e_-$ is the number of its negative eigenvalues. Here $s_m=\sum\alpha_k^m$ is a power-sum polynomial. You can compute them via Newton identities, but it seems unreasonably hard for a large enough number of nonzero coefficients. Brute force is helpless here, I guess. – te4 Jan 05 '24 at 18:39
  • 12
    Well, it's not A336903. A random search found $x^{38}+x^{37}+x^{35}+x^{33}+x^{32}+x^{30}+x^{29}+x^{28}+x^{26}+x^{24}+x^{17}+x^{15}+x^{11}+x^{10}+x^{9}+x^{8}+x^{7}+x^{4}+x^{2}+x$ with $6$ real roots. – Robert Israel Jan 05 '24 at 18:53
  • Dear @te4, yes I think in that direction, so we need some smart filtration tools – Denis Ivanov Jan 05 '24 at 18:53
  • @DenisIvanov maybe it's https://oeis.org/A309805 ? :) – kerzol Jan 05 '24 at 23:37
  • @kerzol, Yes, I saw it, but no! Some guys from the Julia community scanned up to degree 25; post updated – Denis Ivanov Jan 06 '24 at 06:55
  • 12
    The lexicographically smallest 0-1-polynomial with $6$ real roots is $x^{28} + x^{27} + x^{25} + x^{20} + x^{18} + x^{14} + x^{13} + x^{11} + x^{9} + x^{8} + x^{7} + x^{4} + x^{2} + x$, so $\nu(6)=28$. – Peter Mueller Jan 06 '24 at 09:14
  • 2
    @PeterMueller cool! Could you elaborate a bit on how you found that and how you know it is the lexicographically smallest, perhaps in an answer? – Max Horn Jan 06 '24 at 14:50
  • 1
    The function mapping $d$ to the maximal number of roots of such a polynomial would maybe be more natural to consider (and contains strictly more information). If you know its first values (say, about 30), you might try to locate it on OEIS. – YCor Jan 07 '24 at 10:32
  • 3
    @YCor Yes, that's A362344 – Command Master Jan 07 '24 at 12:12
  • The first paragraph makes it sound like $\nu(n)$ is the number of roots, when it's actually the degree, and $n$ is the number of roots. – Gerry Myerson Jan 07 '24 at 19:42
  • 1
    @GerryMyerson Thank you, fixed it – Denis Ivanov Jan 08 '24 at 06:13
  • 1
    Indeed, OEIS $f(n)=A362344(n)$ (thanks @CommandMaster) is more natural to define, since $\nu$ is just derived from $f$ as $\nu(m)=\inf\{n:f(n)=m\}$, and since practically computing $\nu$ involves computing $f$. For instance if you computed $\nu(6)=28$, it probably means you computed $f(n)$ for all (even) $n\le 28$. Surprisingly OEIS currently only provides $f(n)$ for $n\le 19$ and it means it could be easily improved. – YCor Jan 08 '24 at 10:25
  • 1
    @YCor, yes, I posted a proposal just to improve A362344 with the new data. But it's not "surprising", 'cause after $\nu(5)$ it takes a lot of calculations =) – Denis Ivanov Jan 08 '24 at 18:02
  • 1
    Fascinating question. Long ago, I asked in a similar vein about the largest complex root of a $\pm1$-polynomial. That one shows surprisingly self-similar patterns. No idea whether it is relevant here, but who knows? – Wolfgang Jan 10 '24 at 09:03
  • 1
    The approximation doesn't seem very significant - the polynomials $x^n + x^{n+1}$ are much better approximators – Command Master Jan 10 '24 at 13:46
  • @Wolfgang, thank you! I'm now a little obsessed with $0,1$-polys (o_O) so all ideas are interesting! – Denis Ivanov Jan 11 '24 at 18:51
  • @CommandMaster, yes, you are right; I've reformulated the observation about the behavior on $[-1, 0]$ more precisely – Denis Ivanov Jan 11 '24 at 18:53
  • 1
    @DenisIvanov See my answer below, which reduces $\nu(7)$ to be ${}\leq 41$. Note that the two found polynomials have zeroes at $x=-1$. – Fred Hucht Mar 01 '24 at 19:23

6 Answers

30

And are there some theoretical results about real roots of 0,1 polynomials?

Such polynomials are called Newman polynomials. One commonly cited reference is Zeros of polynomials with 0,1 coefficients by A. Odlyzko and B. Poonen, L’Enseignement Mathématique 39 (1993), 317–348. In the paper Littlewood-type problems on [0,1] by P. Borwein, T. Erdélyi, and G. Kós, Proc. London Math. Soc. 79 (1999), 22–46 (DOI), the authors show:

There is an absolute constant $c > 0$ such that every polynomial $p$ of the form $$p(x) = \sum_{j=0}^n a_j x^j, \quad |a_j| \le 1, \quad |a_0| = |a_n| = 1, \quad a_j \in \mathbb{C}$$ has at most $c\sqrt{n}$ real zeros.

This result gives some information about the question you asked, though it does not answer it completely. There is a large literature on Newman polynomials and their close cousins, the Littlewood polynomials (whose coefficients are $0$ or $\pm1$); see for example K. G. Hare and J. Jankauskas, On Newman and Littlewood polynomials with prescribed number of zeros inside the unit disk (arXiv:1910.13994) or T. Erdélyi, On the zeros of polynomials with Littlewood-type coefficient constraints (Michigan Math. J. 49 (2001), 97–111) and the references therein.
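
For comparison with the question's table, here is a tiny exact brute force (not part of the answer; a slow SymPy sketch, practical only for the first few degrees) over all $0,1$ polynomials, printing the largest number of distinct real zeros in each degree together with its ratio to $\sqrt n$:

from sympy import Symbol, Poly

x = Symbol('x')
for n in range(1, 11):                               # already takes a little while for n = 10
    best = 0
    for m in range(2**n, 2**(n + 1)):                # leading coefficient a_n = 1
        f = Poly([int(b) for b in bin(m)[2:]], x)    # bits are a_n, ..., a_0
        best = max(best, len(set(f.real_roots())))
    print(n, best, round(best / n**0.5, 2))          # values consistent with the nu(n) table in the question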

Timothy Chow
  • 78,129
  • 6
    Do you know of any non-trivial (substantially above the logarithm of the degree) lower bound on the possible number of real roots? (For $0,\pm 1$ the order $\sqrt n$ is sharp, but the $0,1$ case seems way more restrictive to me unless I miss something.) – fedja Jan 07 '24 at 16:39
  • 2
    @fedja: My intuition is that, if $f(x)$ is a ${-1, 0, 1}$ polynomial with $c \sqrt{n}$ positive roots, then $f(x^{10})$ is a ${-1, 0, 1}$ polynomial with $c' \sqrt{n}$ positive roots, and then we can replace each $-x^{10 m}$ term with $x^{10m+1}$ and get a ${ 0,1 }$ polynomial whose roots are roughly the negatives of the roots of $f(x^{10})$. I can't test this, since I don't know what the sharp polynomials for the ${ -1, 0, 1 }$ case look like, but does this not work? – David E Speyer Jan 08 '24 at 19:34
  • @DavidESpeyer That depends on how pronounced the crossings are. The idea seems good, but the devil is in the details, as usual :-) – fedja Jan 08 '24 at 19:42
  • 2
    In the meantime, in the interest of giving a lower bound, although fedja will consider it trivial, $\prod_{j=0}^{n-1} (x^{3 \cdot 4^j}+x^{4^j}+1)$ has all coefficients $0$ or $1$, with degree $4^n-1$ and $n$ real roots. – David E Speyer Jan 09 '24 at 01:17
  • 1
    I did some digging in the literature to try and figure out where it is established that there are ${ -1, 0, 1 }$ polynomials of degree $n$ with $c \sqrt{n}$ roots, and I failed. Bloch and Polya "On the Roots of Certain Algebraic Equations" (1932) prove that we can achieve a single root at $1$ of multiplicity $c n^{1/2}/\sqrt{\log n}$ and that we can achieve $c n^{1/4}/\sqrt{\log n}$ distinct real roots. Erdélyi "Extensions of the Bloch-Pólya theorem on the number of real zeros of polynomials" (2008) improves the latter to $c n^{1/4}$ distinct real roots, while vastly generalizing it. – David E Speyer Jan 09 '24 at 02:22
  • The Borwein, Erdelyi and Kos paper cited in the answer above does state that $c \sqrt{n}$ distinct real zeroes are achievable with $a_0=a_n=1$ and $|a_j| \leq 1$. However, perhaps foolishly, I cannot identify where it proves this result. – David E Speyer Jan 09 '24 at 02:31
  • @DavidESpeyer "I cannot identify where it proves this result." It is easier to prove it than to find it in the literature. If you don't mind $a_0,a_n=\pm 1$ instead of $1$, I can post a short simple proof, if you are interested. – fedja Jan 09 '24 at 13:28
  • @fedja Yes please! – David E Speyer Jan 09 '24 at 14:15
  • 2
    @DavidESpeyer Done. Unfortunately I couldn't squeeze it into a comment box, so I posted it as an "answer" despite the fact that it is not an answer at all. I hope the OP and mods won't scold me too harshly :-) – fedja Jan 09 '24 at 22:19
  • 1
    @fedja I emailed Jankauskas and Erdélyi and neither of them was aware of a nontrivial lower bound. Jankauskas speculated that a random Newman polynomial would have $c\sqrt{n}$ distinct roots but didn't claim to have well-founded reasons for this speculation. – Timothy Chow Jan 10 '24 at 04:20
  • Nah, a random Newman polynomial will have just $\log n$ real roots. Read Ken Soze arXiv:1601.04858 :-) – fedja Jan 10 '24 at 05:28
  • For what it is worth, here is my other idea about constructing Newman polynomials with many real roots. Find a polynomial $f(x) = \sum f_k x^k$ with coefficients in $[0,1]$ and $f_0 = f_n=1$ and many real roots. (I can't even do this.) Then let $g(x)$ be a random polynomial where $g_k = 1$ with probability $f_k$ and $0$ with probability $1-f_k$. If we have $x_1 < x_2 < \cdots < x_m$ with $f(x_i)$ alternating in sign, then the expected values of $g(x_i)$ alternate in sign, and it seems possible we could prove that it has those signs with nonzero probability. – David E Speyer Jan 10 '24 at 14:20
  • @DavidESpeyer I've told you already: the alternance will most likely be minuscule and the expected values will be very close to $0$, so the variance will kill the sign changes more likely than not. But yeah, let's think of $[0,1]$ first, indeed, and let's check formally how large alternance heights are already impossible. That looks like a good start of the siege. :-) – fedja Jan 10 '24 at 16:15
  • 1
    @fedja Re-reading Jankauskas's email, I think I misinterpreted it on my first read. He said his first guess for the lower bound was $c\sqrt{n}$. But separately, he commented that for these types of problems, the random case is often close to the extremal case. So the bottom line seems to be that the problem is wide open. – Timothy Chow Jan 11 '24 at 22:38
  • OK, let's try to make it less wide open then :-) – fedja Jan 12 '24 at 02:35
  • @DavidESpeyer "Find a polynomial $f(x)=\sum f_kx^k$ with coefficients in $[0,1]$ and $f_0=f_n=1$ and many real roots." I can do it for roughly speaking $n^{1/3}$ roots. Is that many enough? The alternance is really minuscule, as I warned you, but, perhaps, you have some bright ideas about how to overcome that . :-) – fedja Jan 13 '24 at 03:50
  • @fedja I don't have any bright ideas. Still, it seems worth recording what you know. I think at this point the question should be treated as "can we figure out anything?" – David E Speyer Jan 13 '24 at 23:32
  • @DavidESpeyer Well, I have added another "non-answer". So far I'm pretty sure that everything I'm posting is well-known, just hard to locate in the literature, but I agree with you that having it all in one place may help an interested reader. The only question is if we have any interested readers (i.e., the readers who are willing to think of this problem for more than a couple of weeks) :-) – fedja Jan 14 '24 at 04:53
  • @DavidESpeyer OK, the obvious lower bound is $c\frac{\log^2 n}{\log\log n} $ now. Can we improve it to something more decent? – fedja Jan 15 '24 at 04:09
  • 1
    @DavidESpeyer An example with $m$ real roots and somewhat smaller degree is $\prod_{i=1}^m(X^{3^i}+X^{(3^i-1)/2} +1)$. Of course, that is still far off what we expect. – Peter Mueller Jan 16 '24 at 19:49
  • @PeterMueller Well, so far I'm not sure what I really expect. Of course, we are a bit above $\log n$ now, but the numbers for the low degrees we have do not seem to lead to any definite conclusions IMHO. So, what would be your expectation? – fedja Jan 16 '24 at 21:37
  • 1
    @DavidESpeyer I added the lower bound $\frac{\sqrt n}{\log n}$ for the $0,\pm 1$ case. It still falls slightly short of $\sqrt n$ I claimed in my remark, but it is certainly not $n^{1/4}$ you quoted. :-) – fedja Jan 17 '24 at 13:09
  • 1
    @DavidESpeyer I added the lower bound of $\sqrt n$ for the $0,\pm 1$ case now, thus fully justifying my original remark. Any news on your side? We (I and Zachary) are still stuck on the $0,1$ case so any fresh ideas will be certainly appreciated. – fedja Feb 15 '24 at 02:03
  • @DavidESpeyer We tried to use one of your suggestions but could only squeeze the lower bound of $\log^2 n$ from it so far (see the last edit to one of my "non-answers"). Take a look and let us know if you had something smarter in mind :-) – fedja Feb 16 '24 at 21:59
  • @fedja Glad to hear you are working on this. I thought about it on an off for about a week last month but not since then; I doubt I know anything that you don't. I'll read what you wrote, though. – David E Speyer Feb 16 '24 at 22:18
  • @DavidESpeyer Yeah, that is what I'm mainly afraid of when answering on MO: by the moment I manage to figure something out, nobody cares anymore, so why to bother writing things down? (Figuring things out is a pleasure in itself, but writing down the details? Br-r-r-r). Thanks for your continued interest though :-) – fedja Feb 17 '24 at 01:59
  • @TimothyChow OK, we have finally made it "slightly less open", proving the lower bound of $cn^{1/6}$ for the $0,1$ case. I'll try to post it as a separate "half-answer" when I have some free time but I'm curious if you had some thoughts about the problem yourself. – fedja Feb 21 '24 at 00:02
  • 1
    @fedja I have not been thinking about this problem, but $cn^{1/6}$ sounds like a publishable paper to me. I'm sure people like Jankauskas and Erdélyi will find it interesting. – Timothy Chow Feb 21 '24 at 00:11
  • 1
    @TimothyChow Yep, I communicated to Tamas Erdelyi already. He was certainly interested. I didn't have the honor to get introduced to Jankauskas at any point of my life, but you are certainly welcome to attract his attention to his thread if you feel like it. As to "publishable", probably yes, but a) I'd like to push the technique a bit further for that and b) writing a paper is a long painstaking process and there are way too many other people eager to publish each and every small thing they happen to stumble upon, so I'm rather hesitant in this respect. But I'll post it here, of course. – fedja Feb 21 '24 at 00:25
  • @DavidESpeyer We have pure $1/5$ now and are optimistic about getting $1/4$ in a couple of days or so. The ideas are the same but the technical details are slightly different. I'll try to produce a set of handwritten notes again and will continue typing once I have a clear idea of what exactly I want to say :-) As usual, fresh ideas and suggestions from anybody who still cares about the question are highly appreciated :-) – fedja Feb 23 '24 at 01:34
  • @DavidESpeyer It looks like we are up to $cn^{1/2}$ and the case is closed. We'll verify it tomorrow when both of us are not so sleepy. It will be quite a headache to design a presentation that can be posted on MO though, but I'll try... – fedja Feb 24 '24 at 02:52
  • @fedja Congratulations! Now it really sounds like you should publish a paper. – Timothy Chow Feb 24 '24 at 03:36
  • Congratulations! And I agree with Timothy Chow, you should probably just write a paper and drop a link here. – David E Speyer Feb 24 '24 at 03:52
  • @DavidESpeyer That's an option, but it will take even more time so I'm not sure I'll finish before the WW3, which is approaching quite fast. What I'll do soon enough is to post the third set of handwritten notes (which should be readable for those who understood the first two) and a general overview of the argument :-) – fedja Feb 24 '24 at 04:59
  • 1
    @DavidESpeyer OK, I cleaned up my final answer and posted the entire argument in (I hope) a readable form. Feel free to comment, improve, or just ask questions. It was a pleasure talking to you, Timothy, and Peter about this little project. It was also quite fun to finally figure it out. My curiosity is now satisfied, but there are many related questions that are still unanswered, so I hope someone will solve them too :-) – fedja Mar 01 '24 at 05:09
20

This is not an answer, just a response to David's request to present a construction of a polynomial $P(x)=\sum_{j=0}^d a_jx^j$ of arbitrarily high degree $d$ whose coefficients are real, bounded by $1$ in absolute value, satisfy $|a_0|=|a_d|=1$, and whose number of distinct real roots is of order $\sqrt{d}$. I believe that I've also seen the statement about pure $0,\pm 1$ coefficients somewhere, but my attempt to retrieve the proof from my memory or to reinvent it failed, so it might be a hallucination.

We will use the standard "pigeonholing with a minor correction in the end" mumbo-jumbo.

Take big $n>m,k$.

Let $P(x)=e\,xT(x)$ where $T$ is the Chebyshev polynomial of degree $m$ adjusted to the interval $[e^{-1},1]$. Then $P$ has no free term, is of degree $m+1$, and has the alternance points $y_1,\dots,y_{m+1}$ of heights at least $1$. The coefficients of $P$ are bounded by $K^m$ for some absolute $K>0$. Let $x_j=y_j^{1/k}$.

Consider all polynomials $Q$ of degree $3n$ with coefficients $\beta_s\in\{0,e^{(s/(3n))^2}\}$. Since $|Q(x_j)|\le 3en$ for every $Q$ and every $j$, and the total number of $Q$-s is $2^{3n}$ we can (by pigeonholing) find a set of $2^{2n}$ $Q$-s such that their values at each $x_j$ are confined to some intervals $I_j$ of length $6en2^{-n/m}$.

Fix one such $Q_0$ in that set. Note that the number of $Q$ whose coefficients differ from those of $Q_0$ only within an interval of indices of length $n$ is at most $n2^n\ll 2^{2n}-1$. Thus we can find $Q_1$ in the same set such that $$ R(x)=Q_0(x)-Q_1(x)=\sum_{s=a}^b \alpha_s x^s $$ with $b>a+n$, $\alpha_s\in\{0,\pm e^{(s/(3n))^2}\}$, $\alpha_a,\alpha_b\ne 0$, $|R(x_j)|\le 6en2^{-n/m}$ for all $j$.

Dividing $R(x)$ by $e^{(a/(3n))^2}x^a$, and taking into account that $x_j\ge e^{-1/k}$ for all $j$, we get a polynomial $S(x)$ of degree $d=b-a\in[n,3n]$ with coefficients $\gamma_s\in\{0,\pm e^{\psi(s)}\}$ where $\psi(s)=[(s+a)/(3n)]^2-[a/(3n)]^2$ and such that $|S(x_j)|\le 6en 2^{-n/m}e^{3n/k}$.

The key point is that $\psi$ is strictly convex and $0$ at $0$, so $$ e^{\psi(s)}\le e^{s\psi(d)/d}-\kappa n^{-2} $$ for $1\le s\le d-1$ with some $\kappa>0$. Thus we can correct the intermediate coefficients by about $n^{-2}$ and still have them under the geometric progression joining $|\gamma_0|=1$ and $|\gamma_d|=e^{\psi(d)}$. Then, by a trivial linear change of variable, we'll turn $\gamma_d$ into $\pm 1$ and make all intermediate coefficients $\le 1$.

We will use this correction possibility to turn smallness at $x_j$ into an alternance. To this end, we will just add $ 6en 2^{-n/m}e^{3n/k}P(x^k)$. The coefficients of this correction are bounded by $6en 2^{-n/m}e^{3n/k}K^m$, it has no free term and the degree $(m+1)k$.

Now choose $k\approx \sqrt n$ and $m\approx \sigma\sqrt n$. Then if $\sigma<1$, our correction stays inside the degree span of $S$, and $$ 6en 2^{-n/m}e^{3n/k}K^m\approx 6en 2^{-\sigma^{-1}\sqrt n}e^{3\sqrt n}K^{\sigma\sqrt n}\, $$ which for small $\sigma>0$ is less than $e^{-\sqrt n}\ll \kappa n^{-2}$, so our correction has admissible size too. If you want both endpoint coefficients to be $1$ and they are $1$ and $-1$ in the polynomial we constructed, just multiply by $1-x^{d+1}$ doubling the degree.

That's it. You can see, however, that the alternance is minuscule, so you should exercise utmost care with your suggested replacement of negative coefficients, if it can work at all!

Edit: For completeness, let me also give a short proof that a polynomial $P(x)=\sum_{j=0}^na_j x^j$ with real coefficients $a_j$ satisfying $|a_j|\le 1$, $|a_0|=|a_n|=1$ cannot have more than $C\sqrt n$ real roots.

By considering $P(\pm x)$, $x^nP(\pm 1/x)$, we see that it suffices to bound the number of roots on $(0,1]$.

Recall that the Chebyshev polynomial $T_d(y)=\frac 12[(y-\sqrt{y^2-1})^d+(y+\sqrt{y^2-1})^d]$ with even $d$ has all its roots real, is bounded by $1$ on $[-1,1]$, is decreasing on $(-\infty,-1]$ and is greater than $4$ at $-1-\frac C{d^2}$ for some absolute constant $C>0$. After proper rescaling and taking $Cd^{-2}\approx\delta\in(0,\frac 12)$, it turns into a polynomial $S_\delta(z)$ on $[0,1]$ such that the degree of $S$ is $\le C\delta^{-1/2}$, $|S(z)|\le S(0)=1$ for every $z\in[0,1]$ and $|S(z)|\le \frac 14$ when $z\in[\delta,1]$.

Now choose $\delta=\frac 1n$ and put $$ Q(z)=S_\delta(z/n)S_{2\delta}(z/n)S_{4\delta}(z/n)\dots S_{2^m\delta}(z/n) $$ where $2^m\delta\in[\frac 14,\frac 12)$. Then $Q$ still has all roots real, the degree of $Q$ is at most $C\sqrt n\sum_{j\ge 0}2^{-j/2}$, $Q(0)=1$ and $Q(k)\le 4^{-q-1}$ for $k\ge 2^q$ for $0\le q\le m$. The last estimate implies that $\sum_{k=1}^n |Q(k)|<1=Q(0)$ for not too small $n$. Hence, in the polynomial $$ \widetilde P(x)=\sum_{k=0}^n a_kQ(k)x^k $$ the constant term dominates the sum of all other ones, so $\widetilde P$ has no roots on $(0,1]$.

Now write the factorization $Q(z)=a\prod_s(z-\lambda_s)$, $\lambda_s\in\mathbb R$. Then $$ \widetilde P=a\left[\prod_s D_{\lambda_s}\right]P $$ where $D_\lambda f=xf'-\lambda f$ is the differential operator that acts on the polynomials by multiplying the $k$-th coefficient by $k-\lambda$.

Representing $D_\lambda f=x^{\lambda+1}\frac{d}{dx}[x^{-\lambda}f]$ and using Rolle's theorem, we conclude that each application of $D_\lambda$ can reduce the number of roots of $f$ on $(0,1]$ by at most $1$. But the degree of $Q$ is only $C\sqrt n$ and we have lost all roots of $P$ on $(0,1]$ when passing from it to $\widetilde P$. The end.
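
As a small side illustration (not part of the proof, numpy assumed, arbitrary test polynomial), one can check numerically that $D_\lambda$ indeed multiplies the $k$-th coefficient by $k-\lambda$:

import numpy as np

a = np.array([1., 1., 0., 1., 0., 0., 1., 1.])   # arbitrary coefficients a_0, ..., a_7
lam = 2.5                                        # arbitrary lambda
b = (np.arange(len(a)) - lam) * a                # claimed coefficients of D_lambda f

xs = np.linspace(0.1, 1.0, 7)
f = np.polyval(a[::-1], xs)                      # numpy's polyval expects a_n first
fp = np.polyval(np.polyder(a[::-1]), xs)
print(np.allclose(xs * fp - lam * f, np.polyval(b[::-1], xs)))   # True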

fedja
  • 59,730
  • 1
    Would you write, at the beginning, what you are proving? – YCor Jan 09 '24 at 23:31
  • 1
    @YCor Technically it was just a response to David Speyer that I failed to fit into the comment box and he knew perfectly well what we were talking about, but OK, I added a few words describing what the request was made for :-) – fedja Jan 09 '24 at 23:43
  • 1
    Thanks, this is is helpful to everybody! – YCor Jan 09 '24 at 23:50
  • 1
    @YCor Once you think that it "is helpful to everybody", would you like me to post here a short version of the proof of the upper $\sqrt n$ bound too, or you think that reading original papers David referenced is more beneficial for the "everybody" you had in mind? :-) – fedja Jan 10 '24 at 21:22
  • 2
    Yes, I'd be interested and I believe it would be useful to others too. – YCor Jan 11 '24 at 00:09
  • 1
    @YCor At your service, Sir! :-) – fedja Jan 11 '24 at 05:16
16

Still, it seems worth recording what you know. I think at this point the question should be treated as "can we figure out anything?" – David E Speyer

Well, OK. Trying to figure out anything is exactly what we are now doing with Zachary Chase, and below is a small observation.

Let $\alpha>0$. Suppose that $n$ is even and we want to construct a polynomial $P_a(x)=\sum_{j=0}^n a_jx^j$ with $a_0=a_n=\alpha$, $a_j\in[0,1]$ for $1\le j\le n-1$ having roots $-1,-x,-x^2,\dots,-x^m$ where $x\in(0,1)$ is some number close to $1$.

It is possible if and only if the convex set $$ E=\{(a_0,a_n,P_a(-x^k), 0\le k\le m):\\ a_0,a_n\in \mathbb R, a_j\in[0,1], 1\le j\le n-1\}\subset \mathbb R^{m+3} $$ contains the point $z=(\alpha,\alpha,0,\dots,0)$. So suppose that it does not. Then there is a non-trivial linear functional $\psi$ on $\mathbb R^{m+3}$ that is non-negative on $E-z$, i.e., we can find some coefficients, not all $0$, such that $$ u(a_0-\alpha)+v(a_n-\alpha)+\sum_{k=0}^m w_kP_a(-x^k)\ge 0 $$ for every admissible choice of the coefficient vector $a$.

Introduce the polynomial $Q(z)=\sum_{k=0}^m w_k z^k$. Then the condition above can be rewritten as $$ -u\alpha-v\alpha+(u+Q(1))a_0+(v+Q(x^n))a_n+\sum_{j=1}^{n-1}(-1)^jQ(x^j)a_j\ge 0\,. $$ Now, since $a_0$ and $a_n$ are free to run over the entire real line, we must have $u+Q(1)=v+Q(x^n)=0$. As to the rest of the expression, its minimum equals $$ U=\sum_{j=1}^{n-1} \min[(-1)^jQ(x^j),0]\,. $$ Thus we must have $$ \alpha(Q(1)+Q(x^n))+U\ge 0 $$ for some not identically $0$ polynomial $Q$ of degree at most $m$.

Now, when $x$ is really close to $1$, the points $x^j$, $j=1,\dots,n-1$ make an almost equispaced net on the interval $I=[x^n,1]$. Let $\mu=\max_I|Q|=|Q(z)|$ for some $z\in I$. Then, by Markov's inequality, $|Q'|\le\frac 2{|I|}m^2\mu$ on $I$, so there is an interval of length $\frac{|I|}{4m^2}$ containing $z$ on which $Q$ preserves sign and is at least $\mu/2$ in absolute value. Since the powers $x^j$ are separated by about $|I|/n$, this interval contains about $cn/m^2$ powers $x^j$ with odd and even $j$. Choosing the parity appropriately, we get $U\le -c\frac{n}{m^2}\mu$.

On the other hand, $Q(1)+Q(x^n)\le 2\mu$. Hence we run into a contradiction when $2\alpha m^2\le cn$.

The conclusion: For every fixed $\alpha\ge 1$ and even $n$, there exists a polynomial $P_a(x)$ with $a_0=a_n=\alpha$, $a_j\in[0,1]$ ($j=1,\dots,n-1$) having $m$ distinct roots on $[-1,0)$, provided that $m^2\le c\alpha^{-1} n$.

It means that the non-negativity of the coefficients doesn't impose any substantial additional restrictions on the number of roots compared to the boundedness alone and the whole issue is the discretization from $[0,1]$ to $\{0,1\}$.

Edit I'll continue dumping obvious observations here. We'll now construct a polynomial with coefficients $0$ and $1$ having about $\frac{\log^2 n}{\log\log n}$ roots. While by itself this lower bound is rather pathetic, it still means that one should, probably, concentrate for a while on driving the lower bound up rather than the upper bound down.

The key observation is that $\sqrt[3]2<\frac 43$ by Bernoulli, so $$ -1+\sqrt[3]{\frac 12}+\frac 12-\left(\frac 12\right)^3-\left(\frac 12\right)^9 \\ >-1+\frac 34+\frac 12-\frac 18-\frac 18=0\,. $$ The immediate conclusion is that if $a=(-1,-1,1,1,-1,-1,1,1,\dots)$ and $p_j$ is an increasing sequence of positive integers such that $p_{j+1}\ge 3p_j$ for all $j$, then the polynomial $$ 1+\sum_{k=1}^u a_kx^{p_k} $$ has at least about $u/2$ sign changes on $(0,1)$. To turn it into $0,1$ polynomial with roots on $(-1,0)$, we need also to ensure that the parity of $p_j$ agrees with the sign of $a_j$, of course.
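
Here is a small numerical illustration of this observation (not part of the argument; the exponents below are my greedy choice of the smallest admissible ones of the right parity): the resulting $0,1$ polynomial visibly alternates in sign at the points $-2^{-1/p_k}$.

u = 8
a = [(-1) ** (k // 2 + 1) for k in range(u)]      # the pattern -1, -1, +1, +1, -1, -1, ...
p = []
for k in range(u):
    q = 1 if k == 0 else 3 * p[-1]                # enforce p_{k+1} >= 3 p_k
    while (-1) ** q != a[k]:                      # enforce parity of p_k = sign of a_k
        q += 1
    p.append(q)                                   # p = [1, 3, 10, 30, 91, 273, 820, 2460]

def F(t):                                         # the 0,1 polynomial 1 + sum_k t^{p_k}
    return 1 + sum(t ** e for e in p)

signs = [1 if F(-(0.5 ** (1.0 / e))) > 0 else -1 for e in p]
print(signs)     # [1, -1, -1, 1, 1, -1, -1, 1]: four sign changes, i.e. at least u/2 roots in (-1, 0)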

Now fix the (large) target degree $n$. Let $q,u,v$ be positive integers. Let $I_k=[3\cdot 6^{k-1} q, 6^k q]$ ($k=1,\dots,u$).

We will now choose $uv$ pairwise distinct integers $p_{ij}$ ($i=1,\dots,v, j=1,\dots,u$) so that for fixed $i$ one has $p_{ij}\in I_j$ for all $j$ and $p_{ij}$ has appropriate parity to be used in the above construction. We will also require that every positive integer $p$ can be written as the sum of at most $v$ integers $p_{ij}$ in at most one way. To ensure that such choice is possible by the mindless "just-take-what-is-still-available" algorithm, it is enough to require that $(uv)^{2v-1}<q$.

Now just put $P_i(x)=1+\sum_{j=1}^u x^{p_{ij}}$ and $P(x)=\prod_{i=1}^v P_i(x)$. Then $P$ is a $0,1$ polynomial of degree $\le 6^u qv$ with about $uv/2$ roots. To keep the degree under $n$ and to satisfy the previous condition, we choose $q\approx\sqrt n$, $u\approx c\log n$, $v\approx c\frac{\log n}{\log\log n}$ with sufficiently small $c>0$.

To be completely honest, I should also ensure that the roots of different $P_i$ are different too, but with that much freedom in choosing $p_{ij}$ that is rather trivial, so I'll leave it as an exercise to the interested readers.

Edit 2: Some more "Mathematische Banalen". This time we will show the existence of a polynomial with coefficients $0,\pm 1$ having $c\frac{\sqrt n}{\log n}$ zeroes on $[0,1]$. This still falls a bit short of $\sqrt n$, but is way better than $n^{1/4}$ David found in the literature for this case.

We start with a few observations about such polynomials. First, if $P$ is such a polynomial of degree $n$, then $$ \int_0^1 |P(x)|\,dx\ge n^{-C\sqrt n} $$ for large $n$ unless $P$ is identically $0$.

Indeed, if we use the same polynomial $Q$ and the differential operators $D_\lambda$, we will see that applying $D_{\lambda_s}$ with various $\lambda_s\in[0,n]$ to $P$ at most $C\sqrt n$ times, we'll get a polynomial $\widetilde P$ in which the first non-zero coefficient is $\pm 1$ and the sum of the absolute value of all other coefficients is below $1/2$. Then $\int_0^1|\widetilde P(x)|\,dx\ge\frac 12\int_0^1 x^n\,dx=\frac 1{2(n+1)}$.

However, due to Markov's inequality $\|P'\|_\infty\le 2n^2\|P\|_\infty$, we have that the $L^1$ and the $L^\infty$ norms of $P$ are equivalent up to a factor $Cn^2$. Moreover, $|P|\ge \frac 12\|P\|_\infty$ on an interval of length $cn^{-2}$ around the point of the maximum of $|P|$. Thus, we have $\|P'\|_1\le Cn^4\|P\|_1$ and $\|P\|_\infty\le Cn^2 \int_{[0,1]\setminus E}|P|$ for every set $E$ of measure $|E|\le cn^{-2}$.

The first conclusion shows that every application of $D_\lambda$ increases the $L^1$ norm of a polynomial of degree $n$ at most $Cn^4$ times, which immediately implies our first observation.

Now consider all polynomials $R$ of degree $n$ with coefficients $0,1$ and for each of them compute their first $m+1$ moments $\int_0^1 R(x)x^k\,dx$ ($k=0,\dots,m$). Those are numbers in $[-n-1,n+1]$. Using the pigeonhole principle, as usual, and shamelessly exploiting the fact that the difference set is exactly what we need, we find a polynomial $P$ with coefficients $0,\pm 1$ that is not identically $0$ but satisfies $$ \left|\int_0^1 P(x)x^k\,dx\right|\le 2(n+1)2^{-\frac nm}\,. $$ for all $k=0,\dots,m$.

Now assume that $P$ has only $u<m$ distinct roots $r_j$ on $[0,1]$ at which a crossing occurs. Consider $q(x)=\prod_j(x-r_j)$. That is a polynomial of degree $u<m$ with the sum of absolute values of its coefficients at most $2^m$. Thus $$ \left|\int_0^1 P(x)q(x)\,dx\right|\le 2(n+1)2^{-\frac nm}2^m\,. $$

On the other hand, $Pq$ preserves sign and $|q|\ge (c'n^{-2})^m$ outside a set $E\subset \mathbb R$ of measure $|E|<cn^{-2}$ (Cartan's lemma). Hence the LHS equals $$ \int_0^1|P||q|\ge\int_{[0,1]\setminus E}|P||q|\ge (c'n^{-2})^m cn^{-2}\|P\|_\infty\ge (c'n^{-2})^m cn^{-2} n^{-C\sqrt n}\,, $$ so if $$ (c'n^{-2})^m cn^{-2} n^{-C\sqrt n}>2(n+1)2^{-\frac nm}2^m\,, $$ which happens for $m=c\frac{\sqrt n}{\log n}$, we get a contradiction.

Next Edit:

Let's now construct (or, rather, prove the existence of) a non-zero polynomial $P$ with coefficients $0,\pm 1$ of degree at most $n$ that has at least $c\sqrt n$ roots on $(0,1)$. This construction will, finally, justify my casual remark in the beginning of this long discussion (which I still hope to continue) and show that my memory, albeit failing, can still be occasionally trusted somewhat. If you ask me when, where, and by whom this argument was first invented, I have no idea.

We'll proceed as before but do everything way more carefully. First, we shall show that if $a>0$ is a small number, then every polynomial $P(z)=\sum_{k=0}^n a_kz^k$ with coefficients $a_k\in [-1,1]$ and $a_0=\pm 1$ satisfies $\|P\|_{L^0(I)}\ge \exp(-C/a)$ where $I=[1-2a,1-a]$ and for a non-negative function $f$ on an interval $J$, $\|f\|_{L^0(J)}=\exp\left[\frac{1}{|J|}\int_J\log| f|\right]$ is the geometric mean of $f$ on $J$.

This is achieved by considering the domain $\Omega$ that is a disk centered at some point $z_0\in(0,\frac 13)$ (say, $z_0=\frac 16$) of radius $1-|z_0|$ with a slit $[1-3a,1]$ (the blue circle on the figure below).

[figure: the disk $\Omega$ with the slit $[1-3a,1]$]

Applying Jensen's inequality, we see that $$ 0 =\log|P(0)|\le\int_{\partial\Omega}\log|P|d\omega $$ where $\omega$ is the harmonic measure on $\partial\Omega$ with respect to $0$. Now we first estimate the integral of $\log_+|P|$ using the trivial bound $|P(z)|\le\frac 1{1-|z|}$. We split $\partial\Omega$ into the slit part $S$ and the circle part $C$. Note that the conformal mapping of $\Omega$ to the unit disk is given by an explicit formula, so the density of $\omega$ can be found exactly, but I still prefer a back-of-the-envelope computation that isn't algebraically heavy even if it is a bit lengthier.

First, we consider the harmonic function $u(z)=\log\frac 1{|1-z|}$ in $\Omega$. we have $$ 0=u(0)=\int_S\log\frac 1{|1-z|}\,d\omega(z)+\int_C\log\frac 1{|1-z|}\,d\omega(z)\,. $$ Since $\log\frac 1{|1-z|}\ge -\log 2$ on $C$, we conclude that $$ \int_S\log_+|P|\,d\omega\le \int_S\log\frac 1{1-|z|}\,d\omega(z) \\ =\int_S\log\frac 1{|1-z|}\,d\omega(z)\le\log 2\,. $$ Now note that on $C$, the harmonic measure $\omega$ is dominated by the harmonic measure for the disk without a slit, which has bounded density with respect to the Lebesgue measure on the circumference, so the integral of $\log\frac{1}{1-|z|}$ with respect to $\omega$ is uniformly bounded by some constant depending on (fixed) $z_0$ but not on the size of the slit. Thus, $\int_{\partial\Omega}\log_+|P|\,d\omega\le C$ independently of $a$ and, therefore, $\int_I \log_-|P|\,d\omega\le \int_{\partial\Omega}\log_-|P|\,d\omega\le C$ as well.

Now we need a clearer idea of what $\omega$ is on $I$. First, map the disk conformally to the upper half-plane so that $0$ is mapped to $i$ and $1$ to $0$, say. Then the slit $S$ will be mapped to $[0,hi]$ for $h$ comparable to $3a$, the mapping will be bi-Lipschitz on the slit, and $\omega$ will be mapped to the harmonic measure $\omega'$ in the half-plane with the new slit.

Now, we use the usual $w\mapsto\sqrt{w^2+h^2}$. Then we'll get $d\omega'(w)\approx \frac{|w|}{\sqrt{|w^2+h^2|}}|dw|$. When $z$ is in $I$, the corresponding point $w$ is in the "middle part" of $[0,hi]$, so the factor in front of $|dw|$ is comparable to $1$. Thus, on $I$, we have $d\omega(z)\approx |dz|$ and we conclude that $\int_{I}\log_-|P(x)|\,dx\ge -C$, i.e., $\|P\|_{L^0(I)}\ge \exp[-C/|I|]=\exp[-C/a]$.

If $a_0=0$ and $a_k\in\{0,\pm 1\}$, consider the least $m\in[0,n]$ for which $a_m=\pm 1$. The factor $x^m\ge(1-2a)^n\ge \exp[-3an]$ on $I$, so in this case $$ \|P\|_{L^0(I)}\ge\exp\left[-\tfrac Ca-3an\right]\,. $$ We shall now fix $a=\frac 1{\sqrt n}$, so $$ \|P\|_{L^0(I)}\ge\exp\left[-C\sqrt n\right]\,. $$ This part has been definitely known to Borwein and Erdelyi. Now we play the same game as before but only on the interval $I$. Formally, for a polynomial $P$, we define $$ \widetilde P(t)=P(1-a-at), \qquad t\in[0,1]\,. $$ The above result states now that for every non-zero $P$ with coefficients $0,\pm 1$, we have $$ \|\widetilde P\|_{L^0([0,1])}\ge \exp[-C\sqrt n]\,. $$ Now, noting that $\|\widetilde p\|_{L^\infty([0,1])}\le n$ for any $p$ with coefficients $0,1$ and using the pigeonhole as before, we can find a non-zero polynomial $P$ with coefficients $0,\pm 1$ such that $$ \left|\int_0^1 \widetilde P(t) t^\ell\,dt\right|\le n2^{-n/m},\qquad \ell=0,1,\dots,m-1\,. $$ Assuming that $\widetilde P$ has only $m-1$ (or fewer) sign changes on $[0,1]$ (i.e., that $P$ has fewer than $m$ zeroes on $I$), we again construct $Q(t)=\prod_j(t-t_j)$ as before and note that $\widetilde PQ$ preserves sign on $[0,1]$. Also, $\|Q\|_{L^0([0,1])}\ge\exp[-Cm]$ (each factor $|t-t_j|$ has geometric mean uniformly bounded from below. So, we get the chain of inequalities $$ \exp[-C\sqrt n-Cm]\le \|\widetilde P Q\|_{L^0([0,1])}\le\|\widetilde P Q\|_{L^1([0,1])} \\ =\left|\int_0^1 \widetilde P Q\right|\le 2^m n2^{-n/m}\,, $$ and when $m=c\sqrt n$ with small enough $c>0$, we get a contradiction.

Next Edit

The construction of a polynomial with coefficients $0,\pm 1$ having $m$ roots on $(-1,0)$ of degree $Cm^2$ allows one also to construct a polynomial with coefficients $0,1$ of degree $n$ with $c\log^2 n$ roots on $(-1,0)$, giving a slight improvement over the previous "trivial bound" $c\log^2 n/\log\log n$.

To carry the construction out, note first of all that when looking for the sign changes of $P$ in the construction for the $0,\pm 1$ case, we can ignore the humps of height $\exp(-C_1 m)$ with large constant $C_1$ because they can be responsible only for a tiny portion of the $L^1$-norm of $\widetilde PQ$ ($e^{-C_1 m}$ is much less than $e^{-Cm-C\sqrt{Cm^2}}$). Thus, the polynomial $P$ has not only $m$ roots, but also an alternance of size $m$ and of height $e^{-C_1 m}$.

Now take $P_1(x)=P(x^N)$. Then this alternance occurs on the interval $J=[-1,-1+\frac 1N]$ (actually even a bit shorter one, but it is irrelevant). Consider the polynomial $T(x)=\prod_{k:3^k\le \sqrt N}(1+x^{3^k})-1$. Note that $T$ has coefficients $0,1$ and $|T(x)+1|\le N^{-c\log N}=e^{-c\log^2 N}$ on $J$. Thus, if this number is below $e^{-C_1 m}/(Cm^2)$, we can replace every negative power $-x^k$ in $P_1$ by $x^kT(x)$ without disturbing the alternance too much. That happens exactly when $m<c_1\log^2 N$. The rest should be obvious.
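
A quick numerical sanity check of this gadget (illustration only; $N$ is chosen arbitrarily and numpy is assumed):

import numpy as np

N = 10**4
exps = [3**k for k in range(20) if 3**k <= N**0.5]         # [1, 3, 9, 27, 81]

coeffs = np.array([1.0])                                    # expand prod (1 + x^{3^k}) by convolution
for e in exps:
    f = np.zeros(e + 1)
    f[0] = f[-1] = 1.0
    coeffs = np.polymul(coeffs, f)
print(sorted(set(coeffs.astype(int))))                      # [0, 1]: T = (T+1) - 1 is a 0,1 polynomial, T(0) = 0

xs = np.linspace(-1, -1 + 1/N, 100)                         # evaluate factor by factor to avoid cancellation
print(np.abs(np.prod([1 + xs**e for e in exps], axis=0)).max())   # |T(x)+1| stays below ~1e-15 on J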

Note that this construction is based exactly on David's idea to take whatever we have and to replace negative coefficients by something else afterwards. This particular realization of the idea raises the degree way too much, however. Any suggestions how to do it better? Remember that we cannot raise the multiplicity of zero at $-1$ substantially (see our discussion with Peter Mueller), so some other approach is needed.

fedja
  • 59,730
14

Update 1: I've now used the multiprocessing library for the parallelization of the computation, and accordingly updated the Python code.


Update 2: We have $\nu(7)\le 45$ and $\nu(8)\le 52$, as the polynomials \begin{equation} x^{45} + x^{44} + x^{42} + x^{37} + x^{35} + x^{33} + x^{31} + x^{30} + x^{28} + x^{26} + x^{24} + x^{23} + x^{22} + x^{20} + x^{18} + x^{16} + x^{15} + x^{13} + x^{11} + x^{9} + x^{4} + x^{2} + x \end{equation} and \begin{equation} x^{52} + x^{51} + x^{49} + x^{46} + x^{40} + x^{38} + x^{37} + x^{36} + x^{35} + x^{33} + x^{31} + x^{29} + x^{27} + x^{26} + x^{24} + x^{22} + x^{20} + x^{18} + x^{17} + x^{16} + x^{15} + x^{13} + x^{7} + x^{4} + x^{2} + x \end{equation} have $7$ and $8$ real roots, respectively.

These polynomials are the lexicographically smallest ones of the form $xh(x)$, where $h$ is equal to its reciprocal (i.e., $h$ is palindromic).


Below is the Python code which, for each sufficiently small integer $n$, computes the maximum number of distinct real roots of degree $n$ polynomials with coefficients $0$ and $1$ (which is OEIS A362344), together with a lexicographically smallest example. For instance, we see within a few minutes that $\nu(6)=28$. (Initially, I had written a SageMath script using Sturm's theorem; however, the `polsturm` function from pari/gp is somewhat faster.)

The Python script makes use of parallelization. The line `cpus = 17` says that $17$ cores will be used and can of course be adjusted.

The script identifies the binary expansion $p=\sum a_i2^i$ with the polynomial $\sum a_ix^i$.

In case that someone wants to look for a pattern, here is a list of all $0$-$1$-polynomials of degree $28$ with $6$ distinct real roots. To save space, we give the list as the numbers $p$ (see above): 437545878, 437594646, 437594838, 437619606, 437643798, 440494998, 440875158, 443812758, 443836950, 443837334, 443910678, 443911062, 443916438, 443923062, 443923350, 444033942, 450177558, 462908310. (In SageMath, if K is a polynomial ring, and p is one of these numbers, K(p.bits()) yields the corresponding polynomial.)
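
To make the encoding concrete, the first listed number decodes as follows (a short illustration that needs no pari):

p = 437545878
print([i for i, b in enumerate(bin(p)[-1:1:-1]) if b == '1'])
# [1, 2, 4, 7, 8, 9, 11, 13, 14, 18, 20, 25, 27, 28] -- exactly the nu(6) = 28 example from the question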

And here is the Python code (which requires the library cypari2):

import cypari2
from multiprocessing import Pool
pari = cypari2.Pari()

cpus = 17 ################

def check(i):
    best, bestp, bestf = 0, 0, ''
    # scan the degree-n polynomials whose code p lies in the residue class i mod cpus
    for p in range(2**n + i, 2**(n+1), cpus):
        l = []
        for j, b in enumerate(bin(p)[-1:1:-1]):
            if b == '1':
                l.append(f'x^{j}')
        s = '+'.join(l)
        m = pari(s).polsturm()          # number of distinct real roots (Sturm)
        if m > best:
            best, bestp, bestf = m, p, s
    return best, bestp, bestf

n = 1
while True:
    n += 1
    with Pool(cpus) as P:
        res = P.map(check, range(cpus))
    best, bestp = 0, 2**(n+1)
    for m, p, s in res:
        if m > best or (m == best and p < bestp):
            best, bestp, bestf = m, p, s
    print(f'deg = {n}, m = {best}, p = {bestp}, bestf = {bestf}')

Peter Mueller
  • 20,764
  • 3
    Have you considered sending the data to the OEIS? As mentioned above, the function $\nu$ is closely related to https://oeis.org/A362344 . – Marco Golla Jan 08 '24 at 13:10
  • 3
    @MarcoGolla Yes, I might do that. But I'll wait a little to see if someone produces better results. The present code can go up to degree $31$ within a few hours on $17$ cores. There is some room for optimizations though. But with the present brute force approach, I don't believe that one can go beyond degree $40$. – Peter Mueller Jan 08 '24 at 16:23
  • 1
  • @PeterMueller, thank you for the cool work! I've got some doubts about $p_7$, 'cause all the other polys have a root at $-1$ (including $p_8$!), but $p_7$ doesn't – Denis Ivanov Jan 11 '24 at 18:06
  • 1
    @DenisIvanov Indeed, I do not expect that $p_7$ and $p_8$ are examples with minimal degree. Thus I edited your table, replacing $45(?)$ and $52(?)$ with $\le45$ and $\le52$, respectively. By the way, the present state of the calculation shows $\nu(7)\ge35$. – Peter Mueller Jan 11 '24 at 21:23
  • Pythonic way would be l = [f'x^{j}' for j, b in enumerate(bin(p)[-1:1:-1]) if b == '1'] – Max Alekseyev Jan 14 '24 at 15:51
  • It would be also interesting to consider the 0,1-polynomials of minimal degree having the prescribed number of roots on the open interval $(-1,0)$. Can you compute a few of those? – fedja Jan 15 '24 at 05:29
  • 1
    @fedja The minimal degrees for having $1$, $2$ or $3$ roots in $(-1, 0)$ are $3$, $10$, and $21$ with the examples ${1+x+x^3}$, ${1+x+x^3+x^8+x^{10}, 1+x+x^3+x^5+x^6+x^8+x^{10}}$ and ${1+x+x^3+x^8+x^{10}+x^{12}+x^{17}+x^{19}+x^{21}, 1+x+x^3+x^6+x^7+x^8+x^{10}+x^{11}+x^{12}+x^{14}+x^{17}+x^{19}+x^{21}, 1+x+x^3+x^8+x^{10}+x^{12}+x^{14}+x^{15}+x^{17}+x^{19}+x^{21}, 1+x+x^3+x^8+x^{10}+x^{12}+x^{15}+x^{16}+x^{17}+x^{19}+x^{21}}$, respectively. – Peter Mueller Jan 15 '24 at 10:07
  • Excellent, thank you! I f you could also do 4 and 5, it would be absolutely great :-) – fedja Jan 15 '24 at 13:51
  • @fedja With the present exhaustive brute force approach, it will be difficult. All $0,1$-polynomials of degree $\le31$ have at most $3$ roots in $(-1, 0)$. Presently the program searches degree $32$, but that may take a while. – Peter Mueller Jan 15 '24 at 16:30
  • @PeterMueller, I've found that if $p$ is prime then its poly can have only 1 or 2 roots, never 3 or more. Do your calculations confirm this? Is there a simple explanation? – Denis Ivanov Jan 16 '24 at 06:13
  • @DenisIvanov This does not hold. The polynomials for $p=50539$ (degree $15$) and $p=6836587$ (degree $22$) have $3$ and $4$ real roots. – Peter Mueller Jan 16 '24 at 08:01
  • @PeterMueller, thank you, I see! Are these the first appearances? – Denis Ivanov Jan 16 '24 at 08:09
  • @DenisIvanov Yes, in both cases. – Peter Mueller Jan 16 '24 at 08:45
  • @PeterMueller I also tried to compute the minimal degree $D(m)$ of a 0,1-polynomial having a root of multiplicity $m$ at $-1$. The results for small $m$ are a bit discouraging: the sequence of degrees for $m=1,\dots,5$ is $1,4,9,20,41$, the same that can be obtained by the trivial degree doubling procedure (change the string $A$ to $AA$ or $A0A$ depending on the parity of the length). However, for $m=5$, we get an admissible string that breaks this pattern: $B\bar B$ where $B=110001111110111111011$ and $\bar B$ is the reflection of $B$. It would be really interesting to see if $D(6)<84$. – fedja Jan 19 '24 at 16:48
  • 1
    @fedja $D(6)=76$, assumed for $A0\bar A$ where $A=11001101111111111110111111111111110011$. (The leftmost bit corresponds to the constant term in my notation.) – Peter Mueller Jan 19 '24 at 18:48
  • @PeterMueller Thanks a lot! That gives some (but only some) hope that the growth of $D(m)$ is subexponential. What is also interesting is that most coefficients are 1's with just a few sparse zeroes. I'm not really sure what to make of this observation though... – fedja Jan 19 '24 at 18:56
  • 1
    @fedja And here an example $A\bar A$ of degree $149$ with $m=7$, again with $1$'s in the majority: $A=110001111110110110111111111111011111111111111110110011111111111111011111111$. – Peter Mueller Jan 20 '24 at 09:25
  • @PeterMueller Interesting. I wonder what algorithm you used. I just mindlessly set it up as a binary linear program and used a standard LP solver, but this allowed me to go up only to 55. Of course, if we decide to restrict the number of zeroes and to impose the palindromic structure, I can get higher, but still not as high as you did :-) – fedja Jan 20 '24 at 13:42
  • @fedja In fact, I first tried this simple approach (possibly the one you did): For $f=\sum a_kx^k$ we get the binary linear system $\sum_{k=i}^n\binom{k}{i}a_k(-1)^k=0$, $0\le i\le m-1$. For $m=6$, this got stuck for $n=\rm{deg}(f)$ somewhat below $60$. But this idea works much better: The coefficient vectors of the multiples $f$ of $(x+1)^m$ form a $\mathbb Z$-module. Using the block Korkine–Zolotarev lattice basis reduction I get a generating set $v_1,\dots,v_r\in\mathbb Z^{n+1}$ with small integer entries. Now solve for integers $x_1,\dots,x_r$ such that each entry of $\sum x_iv_i$ is $0$ or $1$. – Peter Mueller Jan 20 '24 at 21:00
  • ... I use the MIP solvers Gurobi and Copt (both provide a free academic licence). Without further ideas, I won't get beyond $m=7$. In fact, my program still has not decided whether $D(7)=148$ or $149$. However, it found $13$ examples of degree $149$ so far. (Some of them are not palindromic.) They all have exactly $128$ terms! And the unique example for $D(6)=76$ has $64$ terms. – Peter Mueller Jan 20 '24 at 21:10
  • One cannot edit comments :-( The binary linear system above should be $\sum_{k=i}^n\binom{k}{i}a_k(-1)^k=0$, of course. – Peter Mueller Jan 20 '24 at 21:13
  • @PeterMueller Yeah, all examples of minimal degree I know also have $2^m$ terms. This is rather suggestive but for the life of mine I cannot figure out why :-) – fedja Jan 20 '24 at 21:32
  • @PeterMueller I'm a hopeless, patented, and ultimate imbecile: the ratio must be a polynomial with integer coefficients, so the sum of coefficients of $P$ (a.k.a. the number of 1's and the value at $+1$) must be divisible by $(1+1)^m$. That explains everything... Back to roots on $(-1,0)$ now :-) – fedja Jan 20 '24 at 22:00
  • @PeterMueller If you are still interested in the problem, I have one more computational request: Let $A_n$ be the family of polynomials $p$ of degree $\le n$ with coefficients $0,1$ such that $p(0)=1$. Define $f_n(t)=\inf_{p\in A_n}\log\max_{[-1+t,-1+2t+\frac 1n]}|p|$, $0\le t\le\frac 14$, say. I would love to see the graph of $f_n$ for not too small $n$. I don't need super-high precision: 10-20% relative error is fine. But I really want to see the general shape. Thanks in advance! – fedja Feb 17 '24 at 02:15
  • @PeterMueller I posted a code (in Asymptote) that finds an explicit polynomial of degree $40m^2$ with $m-1$ roots. With the standard float point precision, I can go only to $m=11$. Beyond that the rounding errors dominate. I wonder if you want to give it a try with arbitrary precision arithmetic. :-) – fedja Feb 28 '24 at 18:39
11

At last, here is a proper answer. The maximal possible number of real zeroes of a polynomial of degree $n$ with coefficients $0$ and $1$ is comparable to $\sqrt n$ for large $n$.

The overview of the proof

We have already seen in one of the non-answers that $C\sqrt n$ is an upper bound even for a much wider class of polynomials. Now our task is just to outline the construction of a polynomial with about $m$ zeroes of degree about $m^2$ for each large $m$. It will be more convenient for us to make the change of variable $x\mapsto -x$ and to consider the polynomials $$ P_n(x)=1+\sum_{k=1}^n (-1)^k\varepsilon_k x^k,\quad \varepsilon_k\in\{0,1\} $$ with even $n\ge 0$ on the interval $(0,1)$.

Fix $m$. The way we'll force $P$ to have about $m$ zeroes is via the following

Lemma. Let $f$ be a real-valued continuous function on an interval $I$. Split $I$ into $m-1$ equal subintervals $I_k$ (so that we have $m$ endpoints). Assume that $\frac 1{|I|}\int_I\log_-|f|\le\beta$ and $\frac 1{|I_k|}\left|\int_{I_k}f\right|<e^{-2\beta}$ for all $k$. Then $f$ has at least $\frac{m-1}2$ zeroes on $I$.

Proof. The inequality $\frac 1{|I_k|}\int_{I_k}\log_-|f|> 2\beta$ can hold for at most $\frac{m-1}2$ intervals $I_k$. For every remaining interval, we have $$ \frac 1{|I_k|}\int_{I_k}|f|\ge \exp\left[\frac 1{|I_k|}\int_{I_k}\log|f|\right] \\ \ge\exp\left[-\frac 1{|I_k|}\int_{I_k}\log_-|f|\right]\ge e^{-2\beta}>\frac 1{|I_k|}\left|\int_{I_k} f\right| $$ ensuring at least one crossing on $I_k$.

We have already estimated the average $\frac 1a\int_{[1-2a,1-a]}\log_-|P_n|$ by $C/a$ regardless of $n$. So now our task is just to make the integrals over its $m-1$ subintervals small. For some technical reasons to be explained below, it will be convenient to apply the above zero-forcing lemma not to $P_n$ itself but to $f(x)=x^{L-1}P_n(x)$ with some integer $L>1$. Then $\frac 1{|I|}\int_I\log_-|f|\le \frac Ca+3La$, say, and if $x_j$ are the splitting points, we will just need to make the $m$ values $Q(x_j)$ of the polynomial $$ Q_n(x)=\frac 1L+\sum_{k=1}^n (-1)^k\frac{\varepsilon_k}{L+k} x^k $$ smaller in absolute value than $e^{-2(C/a)-6La}\frac a{2(m-1)}$ (the antiderivative of $f$ is just $x^LQ(x)$). The most natural way to do it is to create an infinite series of the above kind converging to $0$ at each $x_j$. Notice that if this infinite series converges to $0$ at all, it does so at the geometric speed, more precisely, like $a^{-1}e^{-an}$ because $|x_j|^k\le e^{-ak}$ for all $j,k$ involved. So as soon as $n> 2Ca^{-2}+6L+a^{-1}\log[2(m-1)a^{-2}]$, we will be winning. In the end of the story $a$ will be of order $m^{-1}$ and $L$ be of order $m$, so we'll need to use $n$ of order $m^2$.

Greedy algorithm and trapping.

Let's first consider a simpler problem: creating a series $$ P(x)=1+\sum_{k=1}^\infty (-1)^k\varepsilon_k x^k $$ that converges to $0$ at just one point $x$. That can be achieved for $x$ close to $1$ by a simple greedy algorithm (take the next available term to go in the direction opposite to the current sum), but let us create a little theory about how and why this greedy algorithm works.

Let $A$ be the operator of multiplication by $1/x$ on the real line. Then our series is just $$ P(x)=1+\sum_{k=1}^\infty (-1)^k\varepsilon_k A^{-k}1\,. $$ For making it converge to $0$, it is enough to build a bounded trap $T$, star-shaped with respect to the origin, such that we can force $P_n(x)\in A^{-n}T$ along some not too sparse sequence of $n$ (we'll use the arithmetic progression with even step $M$ starting at $0$). That, in turn, is feasible to do by a straightforward induction if $1\in T$ and we have the following decomposition property:

For every $v\in T$, and every even $n$, there exist $\varepsilon_{n+k}\in\{0,1\}$, $k=1,\dots,M$ such that $$ A^M v+\sum_{k=1}^M (-1)^k\varepsilon_{n+k} A^{M-k} 1\in T\,. $$ In this simplest case we can take $T=[-2,2]$, $M=2$ and, provided that $x$ is sufficiently close to $1$, make the following choice:

If $v\in [-1,1]$, put $\varepsilon_{n+1}=\varepsilon_{n+2}=0$;

If $v>1$, put $\varepsilon_{n+1}=1, \varepsilon_{n+2}=0$;

If $v<-1$, put $\varepsilon_{n+1}=0, \varepsilon_{n+2}=1$

Notice that for this choice not only shall we get back to $T$, but we will even get to $\frac 34T=[-\frac 32,\frac 32]$. This contractive property is irrelevant now but will be important in the story about $Q$.
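
Here is a tiny numerical sketch of this one-point rule (illustration only; the value of $x$, the number of steps, and the printing interval are arbitrary):

x = 0.97                                 # some point sufficiently close to 1
v = 1.0                                  # P_0 = 1, so v = P_0(x)/x^0 = 1 lies in T = [-2, 2]
for n in range(0, 2001, 2):
    if abs(v) <= 1:
        e1, e2 = 0, 0
    elif v > 1:
        e1, e2 = 1, 0
    else:
        e1, e2 = 0, 1
    v = v / x**2 - e1 / x + e2           # v_{n+2}; stays in (3/4)T by the three-case rule
    if n % 400 == 0:
        print(n + 2, abs(v) * x**(n + 2))    # |P_{n+2}(x)| decays geometrically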

For the series $Q(x)$, we will demand that $Q_n(x)\in \frac 1{n+L}A^{-n}T$ instead. The decomposition property then changes to the following:

For every $v\in T$, and every even $n$, there exist $\varepsilon_{n+k}\in\{0,1\}$, $k=1,\dots,M$ such that $$ A^M v+\sum_{k=1}^M (-1)^k\varepsilon_{n+k} \frac{n+L}{n+k+L}A^{M-k} 1\in \frac{n+L}{n+M+L}T\,. $$

so now the contraction requirement is explicit. It gradually fades away but in the beginning, when $n=0$, it is crucial and dictates the presence and the choice of $L$. Since in our simple example we had the contraction factor of $\frac 34$, we must have $\frac{L}{L+M}=\frac L{L+2}\ge \frac 34$, i.e., $L\ge 6$ or something like that.

The case of $m$ points.

The reader shouldn't get surprised now that we introduce the diagonal linear operator $A=\operatorname{diag}[x_j^{-1}]$, consider the problem (we'll do $Q$ immediately) of finding the infinite series $$ Q=\frac 1L[1]+\sum_{k=1}^\infty (-1)^k\frac{\varepsilon_k}{L+k} A^{-k}[1] $$ converging to $[0]$ in $\mathbb R^m$ ($[\gamma]$ denotes the $m$-dimensional vector with all coordinates equal to $\gamma\in\mathbb R$) and reduce it to finding a bounded trap $T\subset\mathbb R^m$, star-shaped with respect to the origin, such that $[1]\in T$ and

For every $v\in T$, and every even $n$, there exist $\varepsilon_{n+k}\in\{0,1\}$, $k=1,\dots,M$ such that $$ A^M v+\sum_{k=1}^M (-1)^k\varepsilon_{n+k} \frac{n+L}{n+k+L}A^{M-k}[1]\in \frac{n+L}{n+M+L}T\,. $$

We will from now on use the notation $v_k=A^{-k}[1]$, so the last inclusion can be written as $$ A^M v+\sum_{k=1}^M (-1)^k\varepsilon_{n+k} \frac{n+L}{n+k+L}v_{k-M}\in \frac{n+L}{n+M+L}T\,. $$ The key to our proof is the following lemma by Newman (essentially the same lemma as in the Borwein-Erdelyi paper but in the discrete setting):

Newman's Decomposition Lemma

For every $\delta>0$, there is $\eta>0$ such that for every $x_j\in(0,1)$ ($j=1,\dots,m$) satisfying $\prod_j x_j>1-\eta$, there exist real coefficients $a_k$, $k\ge 0$ with $\sum_{k\ge 0}|a_k|<\delta$ and such that $$ v_{-1}-1=\sum_{k=0}^\infty a_k\xi^k v_k $$ where $\xi=e^{-\eta/m}$.

We will postpone the proof of the Newman decomposition lemma and its discussion to the end and now derive the desired inclusion from it assuming that $\delta>0$ is small enough (note that it will not depend on $m$ nevertheless, so $\eta>0$ we'll finally use will be just some absolute constant and we'll be able to choose $a=\frac{\eta}{2m}$ as promised).

First, observe that $\frac{n+L}{n+M+L}\ge \frac{L}{M+L}=\left(1 +\frac{M}{L}\right)^{-1}\ge e^{-\eta/m}=\xi$ if $L>\eta^{-1}M m$ (so, with fixed $M$, which will be just $2$ in the end, we can, indeed, choose $L$ to be a constant multiple of $m$ as promised). Thus, it will suffice to ensure that $$ A^M v+\sum_{k=1}^M (-1)^k\varepsilon_{n+k} \frac{n+L}{n+k+L}v_{k-M}\in \xi T\,. $$ We now fix the Newman decomposition and put $U_k=\sum_{\ell\ge k}|a_\ell|$, $k\ge 0$. Note that $\delta>U_0\ge U_1\ge\dots\ge 0$. We will be interested in the representations $$ v=\omega v_0+\sum_{k\ge 1}\lambda_k \xi^k v_k $$ with $\omega,\lambda_k\in\mathbb R$ and $|\lambda_k|\le\Lambda U_k$ where $\Lambda>0$ is some constant to be chosen later.

Using Newman's decomposition and taking a vector $v$ that allows the above representation, we can write $$ Av=\omega(v_{-1}-v_0)+(\omega+\xi\lambda_1)v_0+\xi\sum_{k\ge 1}\xi^k\lambda_{k+1}v_k \\ =(\omega+a_0\omega+\xi\lambda_1)v_0+\sum_{k\ge 1}\xi^k(\xi\lambda_{k+1}+\omega a_k)v_k \\ =\omega' v_0+\sum_{k\ge 1}\xi^k\lambda'_{k}v_k\,. $$ Note now that $$ |\lambda'_k|\le \xi \Lambda U_{k+1}+|\omega||a_k| \le\max(\xi\Lambda,|\omega|)(U_{k+1}+|a_k|)=\max(\xi\Lambda,|\omega|)U_k\,, $$ so if $|\omega|\le\xi \Lambda$, the resulting representation is not only again admissible, but improves by a factor of $\xi$. Of course, that improvement in $\lambda_k$'s comes at a cost of possibly increasing $|\omega|$, but that cost is not too high: we have $$ |\omega'-\omega|\le |a_0||\omega|+\xi\Lambda U_1\le \max(|\omega|,\xi\Lambda)(|a_0|+U_1) \\ =\max(|\omega|,\xi\Lambda)U_0\le\delta \max(|\omega|,\xi\Lambda)\,. $$ Let now $v(0)=v$ have an admissible representation with $|\omega|\le\Omega$ and let $$ v(m)=A^m v+\sum_{k=1}^m (-1)^k\varepsilon_{n+k} \frac{n+L}{n+k+L}v_{k-m}\,. $$ Then $v(m)=Av(m-1)+(-1)^m\varepsilon_{n+m}\frac{n+L}{n+m+L}v_0$.

Let $\omega_m$ be the coefficient of $v_0$ in the decomposition of $v(m)$. Then we have the recursion $$ \omega_m=\omega'_{m-1}+(-1)^m\varepsilon_{n+m}\frac{n+L}{n+m+L} \\ =\omega_{m-1}+(-1)^m\varepsilon_{n+m}+E_m $$ where $$ |E_m|\le \delta \max(|\omega_{m-1}|,\xi\Lambda)+\frac{m}{n+m+L} \\ \le \delta \max(|\omega_{m-1}|,\xi\Lambda)+\frac M{M+L} \,. $$

Assume now that $|\omega_0|\le\Omega$, $\Omega\ge M(1+\delta\Lambda)$ and $2\Omega\le e^{-\eta}\Lambda$. Notice that the last two conditions can be easily satisfied by choosing any $\Omega>2M$, then any $\Lambda>6\Omega$ and, finally, $\delta<\Lambda^{-1}$ and $\eta=\min(1,\eta(\delta))$ where $\eta(\delta)$ is given by the Newman decomposition lemma. In this case we can easily prove by induction that $|\omega_m|\le \Omega+2m\le 2\Omega$ for $m=0,\dots,M$ regardless of our choice of $\varepsilon_{n+k}$. Thus, all the $v(m)$ have admissible representations and we get the estimate $$ |E_m|\le\delta\Lambda+\frac{M}{M+L}\,. $$ But then $$ \omega_M=\omega_0+\sum_{k=1}^M (-1)^k\varepsilon_{n+k}+E $$ where $|E|\le M\delta\Lambda+\frac{M^2}{M+L}<\frac 12$, provided that $\delta<\frac 1{4M\Lambda}$ and $L>4M^2$. It remains to note that if we take $M=2$ and $\Omega\ge 2$, then by choosing $\varepsilon_{n+k}$ appropriately (actually, just by pushing towards $0$ as hard as we can at each step), we can ensure that the expression on the right-hand side is between $-\Omega+\frac 12$ and $\Omega-\frac 12$. If $\Omega-\frac 12<e^{-\eta}\Omega$ (another smallness condition for $\eta>0$), then we can use as our trap $T$ the set of all vectors $v$ having an admissible representation with $|\omega|\le\Omega$ and finish the proof modulo the Newman decomposition lemma.
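
To make the "push towards $0$" rule a bit more tangible, here is a deliberately oversimplified scalar toy (just an illustration, not part of the proof, and written in Python rather than Asymptote): a greedy sign choice keeps a drifting quantity bounded in the presence of arbitrary errors of size at most $\frac 12$, which is the mechanism behind the choice of the $\varepsilon_{n+k}$ above.

    import random

    # Scalar toy of the greedy rule: omega_m = omega_{m-1} + (-1)^m * eps_m + E_m,
    # with eps_m in {0,1} chosen greedily and an arbitrary error |E_m| <= 1/2.
    random.seed(0)
    omega, worst = 1.7, 0.0
    for m in range(1, 10000):
        step = (-1) ** m                        # at odd m we may subtract 1, at even m we may add 1
        eps = 1 if step * omega < -0.5 else 0   # take the step only if it moves omega towards 0
        omega += step * eps + random.uniform(-0.5, 0.5)
        worst = max(worst, abs(omega))
    print(worst)                                # stays below 2, however long we run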

Proof of the Newman decomposition lemma (compare to the argument in the Borwein-Erdelyi paper)

Define $$ B(z)=\prod_{j=1}^m\frac{1-\xi x_j z}{\xi x_j(\xi x_j-z) }, \quad Q_\ell(z)=1-\frac{\ell+1}{\ell}z^{-1}+\frac 1\ell z^{-\ell-1}, $$ where $\ell$ is a positive integer. Put $$ c_k=\oint_{|z|=1}B(z)Q_\ell(z)z^k\frac{dz}{2\pi i}\,. $$ Then we have $$ \sum_{k\ge 0}c_k\xi^k x_j^k= \oint_{|z|=1}B(z)Q_\ell(z)\frac 1{1-\xi x_j z}\frac{dz}{2\pi i}=-\frac{1}{\xi x_j} $$ by the residue theorem (move the contour out to $\infty$ and notice that the factor $1-\xi x_j z$ in the denominator cancels with the same factor in the numerator of $B(z)$, so there are no singularities outside the unit disk). Thus we can try to take $a_0=-\xi c_0-1$, $a_k=-\xi c_k$ for $k\ge 1$.

To show that $\sum_{k\ge 0}|a_k|$ is small, we can shift the contour to the circle $C$ with diameter $[-\frac 12,1]$, say. Note that on $C$, the ratio $\frac{|1-z|^2}{1-|z|}$ stays bounded and that $Q_\ell$ has a root of multiplicity $2$ at $1$, so $\frac{|Q_\ell(z)|}{1-|z|}$ is bounded on $C$ as well. Also, $|B(z)|\le |B(0)|=\prod_{j=1}^m\frac{1}{(\xi x_j)^2}\le e^{4\eta}$ and, thereby, we have $$ |c_k|\le C_k=e^{4\eta}\int_{C}|Q_\ell(z)||z|^k\frac{|dz|}{2\pi} $$ and $$ \sum_{k\ge 0}C_k=e^{4\eta}\int_{C}\frac{|Q_\ell(z)|}{1-|z|}\frac{|dz|}{2\pi}\le C(\ell)<+\infty\,. $$ Note also that the summable majorant $C_k$ does not depend on the choice of $x_j$, only on $\ell$ and $\eta$.

On the other hand, we have $$ \int_{|z|=1}|B(z)-1|^2 \frac{|dz|}{2\pi}=\int_{|z|=1}|B(z)|^2\frac{|dz|}{2\pi}-1\le e^{4\eta}-1\, $$ so as $\eta\to 0$, $B$ tends to $1$ at least in $L^2$ on the unit circle and, thereby, for every fixed $k\ge 0$, $$ c_k\to \oint_{|z|=1}Q_\ell(z)z^k\frac{dz}{2\pi i}=\begin{cases} -\frac{\ell+1}\ell, & k=0; \\ \frac 1\ell, &k=\ell; \\ 0 &\text{otherwise}; \end{cases} $$ and the dominated convergence theorem implies that as $\eta\to 0$, the sum $\sum_{k\ge 0}|a_k|\to\frac 2\ell$, which can be made as small as we want by choosing $\ell$ large enough.
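
Since the construction is completely explicit, it is easy to sanity-check numerically. Here is a rough sketch in Python/NumPy (the same computation as in the Asymptote code below, but stripped to the bare minimum; the parameter values $m=3$, $\eta=0.05$, $\ell=16$, $N=4096$ are chosen just for illustration): it computes the $c_k$ by a discretized contour integral, forms the $a_k$, and checks both the size of $\sum_k|a_k|$ and the identity $x_j^{-1}-1=\sum_{k\ge 0}a_k\xi^k x_j^k$.

    import numpy as np

    m, eta, ell, N = 3, 0.05, 16, 4096
    x = np.exp(-np.linspace(1.0, 2.0, m) * eta / (2 * m))   # points in (0,1), close to 1
    xi = np.exp(-eta / m)

    z = np.exp(2j * np.pi * np.arange(N) / N)               # nodes on the unit circle
    B = np.prod([(1 - xi * xj * z) / (xi * xj * (xi * xj - z)) for xj in x], axis=0)
    Q = 1 - (ell + 1) / ell / z + 1 / (ell * z ** (ell + 1))

    # c_k = contour integral of B(z) Q_ell(z) z^k dz/(2 pi i); the extra factor z below
    # accounts for dz/(2 pi i) on the unit circle, the inverse FFT does the averaging
    c = np.fft.ifft(B * Q * z)

    a = -xi * c
    a[0] -= 1                                                # a_0 = -xi c_0 - 1

    k = np.arange(N)
    print(np.sum(np.abs(a)))                                 # sum_k |a_k| (tends to 2/ell as eta -> 0)
    for xj in x:
        print(1 / xj - 1, np.sum(a * (xi * xj) ** k).real)   # the two numbers agree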

That finishes my official answer. It is still a bit sketchy in places but I believe that it shouldn't be too hard to recover all the details. Nevertheless, feel free to ask as many questions as you need if something is still unclear. The Borwein-Erdelyi paper I referred to can be found here and the code for actually finding the polynomial with many roots (in Asymptote; we'll try to translate it into Mathematica later to work with higher precision required for $m>11$) is below. Thanks to everybody who managed to read up to this point for their attention and patience. I'll turn to something else now. :-)

int m=11;  int MM=40*m^2; int N=2^(floor(log(MM)/log(2))+5); int Q=MM;
real t=0.1;
real[] x;
int[] cf; cf[0]=1;

for(int k=0; k<m;++k) x[k]=exp(-(1.5+0.5*cos(pi*k/(m-1)))*t/m);   // anchor points x_j=exp(-a_j) with a_j in [t/m, 2t/m]

pair zeta=(exp(-t/m),0);

real[] V; int l=16;

pair B(pair z) {
  pair s=(z-(l+1)/l+1/z^l/l);                                        // z*Q_l(z)
  for(int i=0;i<m;++i) s*=(1-zeta*x[i]*z)/(zeta*x[i]-z)/zeta/x[i];   // multiply in the factors of B(z)
  return s;                                                          // = z*Q_l(z)*B(z); the extra z absorbs dz/(2 pi i)
}

pair[] FF(pair[] u, int n) {   // recursive radix-2 transform: FF(u,n)[k] = sum_q u[q]*expi(2*pi*k*q/n)
  pair[] v;
  if(n==1) return copy(u);
  pair[] ueven,uodd;
  for(int k=0;k<n/2;++k){ueven[k]=u[2*k]; uodd[k]=u[2*k+1];}
  int m=quotient(n,2);
  pair[] veven=FF(ueven,m), vodd=FF(uodd,m);
  for(int k=0;k<n;++k) v[k]=veven[k%m]+expi(2*pi*k/n)*vodd[k%m];
  return v;
}

pair[] UU;

for(int q=0;q<N;++q) { UU[q]=B(expi(2*pi*q/N)); }     // samples of z*Q_l(z)*B(z) on the unit circle
pair[] VV=-zeta/N*FF(UU,N);                           // VV[k] ~ -zeta*c_k (discretized contour integral)
for(int k=0;k<N;++k) {VV[k]*=zeta^k; V[k]=VV[k].x;}   // V[k] ~ a_k*zeta^k, up to the a_0 adjustment below

V[0]-=1;

//pause();

real nm=0; for(int k=0;k<=Q;++k) nm+=abs(V[k]/zeta^k); write(nm);

real a=x[1]; real s=0; for(int k=0;k<=Q;++k) s+=V[k]*a^k; write(1/a-1,s);

int L=5*m; real[] v; v[0]=1; for(int s=1; s<=Q;++s) v[s]=0;

// one step of the iteration: apply A in the (omega, lambda_k)-coordinates via the Newman decomposition (V[] holds a_k*zeta^k)
for(int mm=1;mm<MM;++mm) { real a=v[0]; for(int s=0;s<Q;++s) v[s]=v[s+1]; v[Q]=0; v[0]+=a; for(int i=0;i<=Q;++i) v[i]+=a*V[i];

real thresh=0;

// greedy choice of the next coefficient: at odd mm we may subtract 1, at even mm add 1, always pushing v[0] towards 0
if(v[0]>thresh && mm%2==1) {cf[mm]=-1;} else if(v[0]<-thresh && mm%2==0) {cf[mm]=1;} else cf[mm]=0;

// rescale by (mm+L)/(mm+L-1) and add the chosen coefficient to the v_0-component
for(int s=0;s<=Q;++s) v[s]*=(mm+L)/(mm+L-1); v[0]+=cf[mm];

if(mm%100==0) write(mm, v[0], (1-t/m)^mm); }

import graph;

size(400,400,IgnoreAspect);

real f(real x) {
  real s=0;
  for(int k=0;k<MM;++k) s+=cf[k]*exp(-t*x*k/m); //(k+L);
  return 10.0^15*s;   // the 10^15 factor just rescales the tiny values for plotting
}

draw(graph(f,1,2,1000)^^(1,0)--(2,0)--(2,1));

The largest number of roots we can get with the standard floating-point precision ($\approx 10^{-15}$) is $10$ ($m=11$ anchor points). Beyond that, the rounding errors dominate. If someone wants to try this code with higher precision, that would be interesting to see.
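
A possible starting point for the higher-precision experiment (an illustrative sketch only, in Python/mpmath rather than the Mathematica port mentioned above): once the coefficient array cf has been computed, the final evaluation of $f(x)=\sum_k \mathrm{cf}[k]\,e^{-txk/m}$ can be redone in extended precision so that sign changes far below $10^{-15}$ remain visible; for the full benefit, the main loop over mm (the arrays v and V) would of course have to be carried out in extended precision as well.

    import mpmath as mp

    mp.mp.dps = 50                                   # 50 decimal digits

    def f(x, cf, t, m):
        x = mp.mpf(x)
        return mp.fsum(c * mp.exp(-t * x * k / m) for k, c in enumerate(cf) if c != 0)

    # placeholder values just to make the sketch runnable; in practice cf, t, m come
    # from (a port of) the construction above
    cf, t, m = [1, -1, 0, 1], 0.1, 11
    print(f(1.5, cf, t, m))
    # a crude root count would then scan, say, [1,2] for sign changes of f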

fedja
  • 59,730

Using a different approach outlined below, I found the two dual polynomials of degree $\nu=41$, \begin{align} P_7(x)&=x+x^{2}+x^{4}+x^{7}+x^{11}+x^{12}+x^{15}+x^{17}+x^{18}\\ &\quad{}+x^{20}+x^{24}+x^{29}+x^{30}+x^{31}+x^{35}+x^{38}+x^{40}+x^{41}\,, \tag{1a}\label{eq:1a}\\ \bar P_7(x)&=x+x^{2}+x^{4}+x^{7}+x^{11}+x^{12}+x^{13}+x^{18}+x^{22}\\ &\quad{}+x^{24}+x^{25}+x^{27}+x^{30}+x^{31}+x^{35}+x^{38}+x^{40}+x^{41}\,, \tag{1b}\label{eq:1b} \end{align} with $n=7$ distinct real roots, such that now $$\tag{2}\label{eq:2} \nu(7) \leq 41. $$ Note that $P_7(x) = x^{42} \bar P_7(1/x)$. The (restricted) search took only a few minutes on my iMac using Mathematica; the code will be added below later.
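
The claims are easy to double-check independently. The following small SymPy snippet (just a verification sketch, not the Mathematica code mentioned above) counts the distinct real roots of $P_7$ and reproduces the duality $P_7(x)=x^{42}\bar P_7(1/x)$:

    import sympy as sp

    x = sp.symbols('x')
    exps_P7 = [1, 2, 4, 7, 11, 12, 15, 17, 18, 20, 24, 29, 30, 31, 35, 38, 40, 41]
    P7 = sum(x ** k for k in exps_P7)

    # distinct real roots (sympy returns exact root objects, repeated by multiplicity)
    print(len(set(sp.real_roots(P7))))        # 7, as stated above

    # x^42 * P7(1/x) should reproduce the dual polynomial of Eq. (1b)
    print(sp.expand(x ** 42 * P7.subs(x, 1 / x)))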

I made the following assumptions:

  1. I only considered $(b_k=0,1)$-polynomials
    $$\tag{3}\label{eq:3} P(x)=\sum_{k=0}^\nu b_k x^k $$ with $P(0)=P(-1)=0$, such that $b_0=0$ and $\sum_{k>0 \text{ even}} b_k=\sum_{k>0 \text{ odd}} b_k$. We have two trivial and $n-2$ nontrivial roots.
  2. Define $b_{-k}=b_{\nu+1-k}$. From the OP's polynomials with $n<7$, I observed a pattern: with growing $n$, more and more of the first and last terms become symmetric, $$\tag{4}\label{eq:4} b_k=b_{-k} \,\, \forall \,\, k\leq k_n. $$ If this held for all $k$, it would give the palindromic polynomials used by @Peter Mueller.
  3. The asymptotic set of nonvanishing coefficients $b_{k_m}=1$ for given $n$ is assumed to be given by $$\tag{5}\label{eq:5} k_m=\frac{(m-1)\,m}{2}+1=\{1, 1, 2, 4, 7, 11, \ldots \}, \quad m=0,\ldots,n-2\,, $$ such that the distance between consecutive nonzero $b_k$ grows linearly, $k_{m+1}-k_m=m$. In other words, the $k_m-1$ are the triangular numbers.

For $n=7$ we get $k_{n-2}=11$, so that at $\nu=41$ $$\tag{6}\label{eq:6} P_7(x)=x+x^{2}+x^{4}+x^{7}+x^{11} +\left[\sum_{k=12}^{30} b_k x^k \right] +x^{31}+x^{35}+x^{38}+x^{40}+x^{41}, $$ which leaves only $75582$ candidates instead of $2^{41}$. Among these, the two candidates in Eqs. (1) have $n=7$ distinct real roots. The polynomials with $\nu<41$ had a maximum of six roots, so $\nu(7)=41$ under the assumptions 1-3. I conjecture that this is also correct without the assumptions.
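
For concreteness, here is a small Python sketch of the restricted search (my reconstruction under the assumptions 1-3, not the Mathematica code): it enumerates the free coefficients in Eq. (6) subject to $P(-1)=0$ and counts the candidates; each candidate can then be fed to the real-root counter from the previous snippet. Plugging in the $n=8$ frame from the update below reproduces the $184756$ quoted there.

    from itertools import combinations

    fixed = [1, 2, 4, 7, 11, 31, 35, 38, 40, 41]   # the symmetric "frame" of Eq. (6)
    free = range(12, 31)                           # the 19 free exponents in the brackets

    # keep only exponent sets with P(-1)=0, i.e. equally many odd and even exponents
    candidates = [c for r in range(len(free) + 1)
                    for c in combinations(free, r)
                    if sum((-1) ** k for k in list(c) + fixed) == 0]
    print(len(candidates))                         # 75582, as stated above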

For $n=6$, where $k_{n-2}=7$, the OP's polynomial does not have the assumed symmetry. However, for $\nu=28$ eight polynomials are found with this method and symmetry, from $3432$ candidates. One solution reads \begin{align}\tag{7}\label{eq:7} P_6(x)&=x + x^2 + x^4 + x^7 + (x^{13} + x^{15} + x^{16} + x^{18} + x^{20} + x^{21} )\\ &\quad {}+ x^{22} + x^{25} + x^{27} + x^{28}, \end{align} where the parentheses mark the "free" part.

The asymptotic series \begin{align} \tag{8a}\label{eq:8a} P_\infty(x)&=\sum_{m=1}^\infty x^{k_m}=\sum_{m=1}^\infty x^{\frac 1 2(m-1)m+1} \\ &=x + x^2 + x^4 + x^7 + x^{11} + x^{16} + x^{22} + x^{29} + \ldots \end{align} can be summed in closed form for $\lvert x \rvert < 1$, with the result $$\tag{8b}\label{eq:8b} P_\infty(x)=\frac{1}{2} x^{7/8} \vartheta_2\left(0,\sqrt{x}\right)\,. $$ Here, $\vartheta_2$ denotes the Jacobi theta function; see the included plot.

Plot of polynomials
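
The closed form in Eq. (8b) can be spot-checked numerically, e.g. with mpmath (the evaluation point $x=0.7$ and the truncation are arbitrary choices for this check):

    import mpmath as mp

    mp.mp.dps = 30
    x = mp.mpf('0.7')

    # truncated series sum_{m>=1} x^{(m-1)m/2 + 1}; terms beyond m ~ 60 are negligible here
    series = sum(x ** ((m - 1) * m // 2 + 1) for m in range(1, 80))

    # closed form (1/2) x^{7/8} theta_2(0, sqrt(x)) of Eq. (8b)
    closed = x ** mp.mpf('7/8') * mp.jtheta(2, 0, mp.sqrt(x)) / 2

    print(series)
    print(closed)       # agrees with the truncated series to working precision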

Update 02.03.24

Well, there seems to be a catch in this approach: Continuing to $n=8$ and $\nu=52$, with $184756$ candidates, I cannot find a solution under the assumptions 1-3. It seems that $k_n$ grows too fast, leaving too few degrees of freedom ($DOF=20$) in $$ P_8(x)=x+x^{2}+x^{4}+x^{7}+x^{11}+x^{16} +\left[\sum_{k=17}^{36} b_k x^k \right] +x^{37}+x^{42}+x^{46}+x^{49}+x^{51}+x^{52}. $$

Fred Hucht
  • 2,705
  • 1
    Nice! Just to check: you count all zeroes on the real line, including the "trivial" one at $0$, not just those in $(-1,0)$ like in my posts, right? – fedja Mar 01 '24 at 20:15
  • 1
    Yes, exactly as defined by the OP. – Fred Hucht Mar 01 '24 at 20:34
  • 1
    Got it. And if I ask you to find a polynomial with $9$ zeroes on $(-1,0)$ alone (not necessarily of the minimal possible degree, just some polynomial), will your approach work and if it will, what degree will that yield? I tried to use the truncations of the Jacobi theta function you mentioned in my early attempts but didn't get far. On the one hand I'm glad I didn't because the result we got with Zachary applies to a much more general situation than $0,1$, but on the other hand I still feel that your approach may be better suited to this particular case. :-) – fedja Mar 01 '24 at 21:41
  • @fedja I think 9 roots on $(-1,0)$ is even too hard for this approach, as it requires $n=20$. – Fred Hucht Mar 02 '24 at 07:05
  • 2
    @FredHucht Concerning the Update 02.03.24: For $n=8$ there isn't even a solution for $\nu\le 59$ of the shape you are looking for (where I even dropped the requirement $P(-1)=0$ from the Assumption 1). – Peter Mueller Mar 12 '24 at 10:17
  • @PeterMueller Thanks for the info. As I already wrote, obviously the progression proposed in assumption 3 grows too fast. – Fred Hucht Mar 14 '24 at 10:00