
Suppose that p is a prime number and x_1 is an integer in the range [1, p - 1]. If x_k \not = 1, define x_{k + 1} := p \bmod x_k. Clearly, x_{k + 1} < x_k and x_{k + 1} > 0 (because p is prime), so there exists some n (depending on x_1) such that x_n = 1. Is it true that n = O(\log p)?
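
For concreteness, here is a minimal Python sketch of the dynamic; the names steps_to_one and worst_case are mine, just for experimentation.

    def steps_to_one(p, x1):
        """Length n of the sequence x_1, x_2, ..., x_n = 1 given by
        x_{k+1} = p mod x_k. Assumes p is prime and 1 <= x1 <= p - 1."""
        x, n = x1, 1
        while x != 1:
            x = p % x
            n += 1
        return n

    def worst_case(p):
        """Maximal n over all starting values x_1 in [1, p - 1]."""
        return max(steps_to_one(p, x1) for x1 in range(1, p))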

For reference, it is quite easy to prove that n = O(\sqrt p). Let q_i := [\frac{p}{x_i}], i.e. p = q_i x_i + x_{i + 1} for every i \in [1, n - 1]. Because q_i x_i + x_{i + 1} = p = q_{i + 1} x_{i + 1} + x_{i + 2} for i \in [1, n - 2], while x_i > x_{i + 1} > x_{i + 2}, the sequence q_i is increasing; on the other hand, x_i is decreasing. Also q_i x_i < q_i x_i + x_{i + 1} = p for every i \in [1, n - 1], therefore q_i < \sqrt{p} or x_i < \sqrt{p} for each i \in [1, n - 1]. Because q_i increases, there can be at most [\sqrt{p}] indices i with q_i < \sqrt{p}; in the same way, because x_i decreases, there can be at most [\sqrt{p}] indices i with x_i < \sqrt{p}. Therefore n - 1 \leqslant 2[\sqrt{p}], i.e. n = O(\sqrt p).
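
To see the two monotone sequences concretely, here is a tiny sketch that prints the pairs (q_i, x_i) along one trajectory; the choice p = 1009, x_1 = 777 is arbitrary.

    def trajectory(p, x1):
        """Print the pairs (q_i, x_i) from the argument above."""
        x = x1
        while x != 1:
            print(p // x, x)   # (q_i, x_i): q_i increases, x_i decreases
            x = p % x          # x_{i+1} = p mod x_i

    trajectory(1009, 777)      # q_i: 1, 4, 12, 27, 100, 112; x_i: 777, 232, 81, 37, 10, 9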

Some ideas:

  1. Maybe it makes sense to think backwards, i.e. ask how long a sequence y can be if y_1 = 1 and each y_{i + 1} is some divisor of p - y_i that is bigger than y_i (in this notation x_i = y_{n + 1 - i}).
  2. One more way to look at this is to notice that the question asks how deep a rooted tree on p - 1 vertices can be, with root at 1 and edges between x and p \bmod x for x = 2, 3, \ldots, p - 1 (see the sketch after this list). Maybe it is possible to prove that this tree is somewhat balanced?
  3. Two heuristics suggest that the number of steps is indeed O(\log p): the first assumes that p \bmod x is a uniformly random integer in the range [1, x - 1], and the second uses the "backwards" point of view and assumes that p - y_i is divisible by t with probability t^{-1}. Of course, neither heuristic is rigorous, but they show that n = O(\log p) is at least a reasonable statement to consider.
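
As a possible companion to idea 2: the parent of each x \geqslant 2 in that tree is p \bmod x, which is strictly smaller than x, so the depth of the tree (that is, n - 1 maximized over x_1) can be computed in O(p) time by processing vertices in increasing order. A minimal sketch (names mine):

    def tree_depth(p):
        """Depth of the rooted tree on {1, ..., p - 1} with root 1,
        where the parent of each x >= 2 is p mod x. Equals max over x_1 of n - 1."""
        depth = [0] * p              # depth[1] = 0; parent p % x is always < x
        for x in range(2, p):
            depth[x] = depth[p % x] + 1
        return max(depth)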

In fact, any ideas for proving a bound better than O(\sqrt{p}) would be appreciated.

Kaban-5
  • Let m be the least common multiple of the first n numbers. Primes which are -1 mod m will need n - 1 iterations if the first x is n. However, for given p and alpha less than 1, one can count how many x have p mod x greater than alpha x, and then estimate how likely it is to stay away from such x. Gerhard "Thinks Logarithmic Bound Quite Likely" Paseman, 2018.07.18. – Gerhard Paseman Jul 18 '18 at 15:32
  • I'm curious why you take p to be prime. Why not start with an arbitrary integer m? – Joe Silverman Jul 18 '18 at 20:27
  • Gee, I'd forgotten all about that question. Maybe I will see if I can improve the idea I posted. Gerhard "That Will Buy Many Mochas" Paseman, 2018.07.18. – Gerhard Paseman Jul 19 '18 at 02:45
  • Joe Silverman: the original motivation was yet another (well known, not invented by me) algorithm that finds x^{-1} \bmod p in O(\log p) arithmetic operations: x [\frac{p}{x}] + (p \bmod x) = p \equiv 0 \pmod p, therefore x^{-1} \equiv -[\frac{p}{x}] (p \bmod x)^{-1} \pmod p. This yields a recursive formula for x^{-1} \bmod p, as long as both the left and right parts make sense, which is not true for composite p. (A sketch of this recursion appears after these comments.)

    I agree that the question I asked makes sense for composite p.

    – Kaban-5 Jul 19 '18 at 10:32
  • Max Alekseyev: the linked post indeed shows that there are upper bounds better than O(\sqrt p), but it does not say anything even about O(poly(\log n)). I am slightly confused about what to do with the "This question may already have an answer here:" pop-up window: clearly, the linked post does not fully solve my problem, yet my question is just a special case of the linked question. Should I just ignore this pop-up? – Kaban-5 Jul 19 '18 at 13:18
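
Referring to the inversion algorithm mentioned in the comments above, here is a minimal sketch of that recursion (assuming p prime); its recursion depth is exactly the n asked about in the question.

    def mod_inverse(x, p):
        """x^{-1} mod p via x^{-1} = -[p/x] * (p mod x)^{-1} (mod p).
        Assumes p is prime and 1 <= x <= p - 1."""
        if x == 1:
            return 1
        return (-(p // x) * mod_inverse(p % x, p)) % p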

2 Answers


Let g(p) be the maximal value of n for x_1 \in \{1,\dotsc,p-1\}, and put h(p) = g(p)/\log(p). Here is a plot of (p, h(p)) for the first 2000 primes, together with the functions 13/\log(x) and 25/\log(x).

[plot: (p, h(p)) for the first 2000 primes, with the curves 13/\log(x) and 25/\log(x)]

The six points along the top correspond to the primes 5879, 9743, 10427, 13679, 16673, 16979 with g(p)=25. I could not find any other special properties of these primes, and OEIS does not recognise the list. For 20 primes centred at the Mersenne prime 524287, the minimum and maximum values of h(p) are 1.898 and 2.354. In general, there does not seem to be anything special about the Mersenne primes, contrary to what I thought might be the case.
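For reproducibility, here is one way the (p, h(p)) data behind such a plot might be generated; this is my own sketch, not necessarily the code used for the figure.

    from math import log

    def primes_up_to(limit):
        """Sieve of Eratosthenes."""
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return [i for i, is_p in enumerate(sieve) if is_p]

    def g(p):
        """Maximal n over x_1 in {1, ..., p - 1}."""
        depth = [0] * p                  # depth[x]: steps from x down to 1
        for x in range(2, p):
            depth[x] = depth[p % x] + 1
        return max(depth) + 1            # + 1 counts the final term x_n = 1

    primes = primes_up_to(18000)[:2000]  # the first 2000 primes (18000 is a safe upper bound)
    data = [(p, g(p) / log(p)) for p in primes]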

  • You might note that for each of your listed primes p, p + 1 has many divisors, more than for Mersenne primes, giving lots of chances to "slow" the dynamic down. One observes something similar for natural numbers n in place of p, with n+1, n+2, n+3, etc. having many divisors. Gerhard "Probably Some Nice Congruence Conditions" Paseman, 2018.07.19. – Gerhard Paseman Jul 20 '18 at 05:51
  • @GerhardPaseman I did observe that, but when I checked systematically for the number of factors of p+1, the listed primes did not stand out. – Neil Strickland Jul 20 '18 at 06:08

Here is an idea to pursue.

Fix p. I will call p \bmod x (your dynamic) d(x), and I look at H, the set of x less than p with 2d(x) \gt x. The dynamic can spend only a logarithmic number of steps outside H in total (each such step at least halves the value, and the value never increases), so now we ask how attractive H can be to the dynamic.

H looks like a collection of intervals. Dividing everything by p, the set is like (1/2, 2/3) union (1/3, 2/5) union intervals of the form (1/k, 2/(2k-1)), for as many k as you care to take. Its total length is a constant fraction of p, so H is pretty large in logarithmic terms.
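
Here is a small Python sketch of mine for experimenting with H and with the set of points that stay in H for several consecutive steps, the notion used later in this answer.

    def H(p):
        """x in [2, p - 1] with 2 * d(x) > x, where d(x) = p mod x."""
        return {x for x in range(2, p) if 2 * (p % x) > x}

    def stays_in_H(p, k):
        """Those x whose iterates x, d(x), ..., d^{k-1}(x) all lie in H,
        i.e. the points that visit H at least k times in a row."""
        Hset = H(p)
        good = set()
        for x in Hset:
            y, ok = x, True
            for _ in range(k - 1):
                y = p % y
                if y not in Hset:
                    ok = False
                    break
            if ok:
                good.add(x)
        return good

    # Example: fraction of [2, p) covered by H, and by two visits in a row
    p = 10007
    print(len(H(p)) / p, len(stays_in_H(p, 2)) / p)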

However, we don't have to be afraid of H. If d alternates between H and its complement, it still reaches 1 in a logarithmic number of steps. (Indeed, we can take reaching some number in (1, \log p) as the goal for finishing in logarithmic time, so I am not going to worry about the very end of the dynamic.) So let us consider when x is in H and d(x) is also in H.

Suppose x and x+1 are in H. Then d(x) and d(x+1) differ by more than 1 when x \lt p/2. So the number of x less than p/2 which visit H at least twice in a row is less than half the number of x less than p/2 which visit H at least once under d. If we start from a point less than p/2, then the chance it lands in H is small, and the chance it lands in H twice in a row by iterating d gets very small. The idea is to analyze this region below p/2 and show that landing in H at least k times in a row is less than (1/2)^k times as likely as landing once. I leave the details to others.

Now we turn our attention to the region 2x \gt p. The question now is how long one can iterate d and stay above p/2; but if x \gt p/2 then d(x) = p - x \lt p/2, so this happens for no x less than p. So the ticket is to show that the subsets of H which are visited by d at least k times in a row shrink exponentially in k.

Gerhard "Seems Better Than Square Root" Paseman, 2018.07.18.

  • Other than that d(x) is nonzero, I did not use profanity (primality, spellcheck!) anywhere. If you don't mind stopping at 0, you can use this argument for p not prime. If you do a literature search, you might consider the more general version. Gerhard "Maybe Easier Looking For Bigger" Paseman, 2018.07.18. – Gerhard Paseman Jul 18 '18 at 20:39
  • It seems that H is not as nice as I hoped. It behaves as expected for x bigger than (some small multiple of) sqrt(p), but is less predictable for smaller x. The argument can be used to show that a number not far from sqrt(p) can be reached in O((log p)^2) steps, but more is needed to get to one. Gerhard "Square Root Seems Bigger Now" Paseman, 2018.07.18. – Gerhard Paseman Jul 19 '18 at 06:41
  • Hello. I tried to understand your answer, and it seems that you more or less claim two things (let d(1) := 1 so that d is everywhere defined): that |\bigcap_{i = 0}^k d^i (H)| \leqslant 2^{-k} |H|, and that this implies n = O(\log p). The first statement is almost true for k = 1 (up to terms that are negligible compared to |H|; I will call these "edge effects"), but I failed to see how it generalizes to bigger k. The computer says that the "edge effects" grow stronger as k grows: even |\bigcap_{i = 0}^k d^i (H)| - 50 is not \leqslant 2^{-k} |H|. (A sketch for reproducing this computation appears after these comments.) – Kaban-5 Jul 19 '18 at 10:47
  • And I don't see how to get any bound significantly better than 2^{-k} p + k \log_2 p even if |\bigcap_{i = 0}^k d^i (H)| \leqslant 2^{-k} |H| did hold; the minimum of this expression over k is of order \log^2 p. – Kaban-5 Jul 19 '18 at 10:48
  • Yes, the effects when x goes below sqrt(p) are considerable. I think it can be rescued to show that about log p steps above sqrt p suffice. (If not, then H_k is nonempty for some k bigger than log p, which is at odds with H_k losing at least half of its members each time k increases, but I don't have this rigorously.) Either that, or I am complicating your simple sqrt bound argument above. Gerhard "Thanks For Considering This Idea" Paseman, 2018.07.19. – Gerhard Paseman Jul 19 '18 at 14:56
  • There are effects, but asymptotically it looks like the size of H is more than twice the size of H_2, which eventually is more than twice the size of H_3 (as p grows; there are exceptions for small p), so the idea may be good after all, if not as convenient. If I get a proof I will post it here. Gerhard "An Asymptotic Approach To Proof?" Paseman, 2018.07.19. – Gerhard Paseman Jul 20 '18 at 01:40
  • I thought about your suggestions quite a lot, but there seems to be a serious conceptual problem: proving that H_k := \bigcap_{j = 0}^k d^j (H) is significantly smaller than H_{k - 1} requires H_{k - 1} to be quite big by itself: for example, H_{k - 1} \cap S_m being big enough for at least one m, where S_m = (\frac{p}{m + 1}, \frac{2p}{2m + 1}) \cap \mathbb{N} (alternatively, S_m = \{ x \in H : [\frac{p}{x}] = m \}). This is because it is hard to understand the relation between d(x) and d(y) when x \in S_a, y \in S_b and a \not = b. – Kaban-5 Jul 20 '18 at 18:38
  • When x and y are small, it is indeed hard to understand the relation between d(x) and d(y). When x and y are significantly larger than 2sqrt(p), that part of H is like a collection of intervals (1/k, 2/(2k-1)), and H_2 (with x and d(x) both in H and both greater than, say, 2sqrt(p)) is a larger collection of smaller intervals which has about half the numbers above 2sqrt(p) that H has above 2sqrt(p). Below 2sqrt(p), I also don't understand H, but I suspect similar relations hold between H, H_2, and further iterates. Gerhard "The Way Is Not Clear" Paseman, 2018.07.20. – Gerhard Paseman Jul 20 '18 at 18:47
  • Therefore it is hard to deal with the possible case when H_{k - 1} has only one element in each S_m: we can't use arguments like "elements of H_{k - 1} \cap S_m run far from each other after applying d, so some of them miss H" or "H_k \cap S_m can't have too many elements coming from H_{k - 1} \cap S_r for any r < m after applying d". Such counting methods would therefore have trouble disproving statements like "there exist \alpha > 0, p, x_1 such that [\frac{p}{x_k}] = k for each k < p^{\alpha}". – Kaban-5 Jul 20 '18 at 18:49
  • Moreover, it should be noted that the continuous version of the problem (x_1 \in [0, 1], x_{k + 1} = 1 \bmod x_k in the sense that 1 = q x_k + x_{k + 1} for some integer q and x_{k + 1} \in [0, x_k)) can have x_k decrease really slowly: d((\frac{1}{k + 1}, \frac{2}{2k + 1})) = (\frac{1}{2k + 1}, \frac{1}{k + 1}) \supset (\frac{1}{k + 2}, \frac{2}{2k + 3}). Therefore it is possible that x_k \in (\frac{1}{k + 1}, \frac{2}{2k + 1}) for every k. So figuring out why this is impossible in the discrete version of the problem requires looking at more than just the high-level picture. – Kaban-5 Jul 20 '18 at 19:02
  • In answer to your last comment: I don't argue with the "about half" statement; I agree with it. The problem is in the "about" part: as the H_k grow smaller, the error terms become harder to ignore: even arbitrarily small segments in [0, 1] notation may contain integers in [1, p] notation. It is more or less the same problem as with x \ll \sqrt{p}: when [\frac{p}{x}] = k \gg \sqrt{p}, the corresponding segment where x lies is (\frac{1}{k + 1}, \frac{1}{k}) in (0, 1) notation; it has length \approx p k^{-2} \ll 1 in [1, p] notation, but it does indeed contain an integer. – Kaban-5 Jul 20 '18 at 19:11
  • OK. How about this: suppose for small r and c that p = rx + (x - c), so d(x) = x - c; then p = (r + 1)(x - c) + rc, so d(d(x)) = rc when rc is small enough that x is larger than (r+1)c. (Otherwise d(d(x)) is less than rc.) So r "shifts" to a larger value after less than two iterations, for small r. For large r, it is not as clear. Further, it may be possible to show that r shifts by 1 or by other small amounts for only log(p) many values of small r. Of course, something different is needed for large r. Gerhard "Time To Write Some More" Paseman, 2018.07.20. – Gerhard Paseman Jul 20 '18 at 19:16
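
For anyone who wants to reproduce the computations mentioned in these comments, here is a small Python sketch of mine computing the sizes of H_k = \bigcap_{i = 0}^k d^i(H), with d(1) := 1 as above.

    def intersection_sizes(p, k_max):
        """Sizes of H_k = intersection of d^i(H) for i = 0..k, for k = 0..k_max,
        where d(x) = p mod x for x >= 2 and d(1) := 1."""
        def d(x):
            return 1 if x == 1 else p % x
        H = {x for x in range(2, p) if 2 * (p % x) > x}
        image, running = set(H), set(H)      # image = d^i(H), running = H_k
        sizes = [len(running)]
        for _ in range(k_max):
            image = {d(x) for x in image}
            running &= image
            sizes.append(len(running))
        return sizes

    print(intersection_sizes(10007, 6))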