I've been greatly interested in the study of functional equations for some time now, and I've learnt many different techniques for their solution. Currently I have been studying superfunctions and fractional iteration of functions. All of these subjects have led me to three main equations, all tied together. The first one, which I stumbled across in an effort to extend tetration (the fourth operation, above exponentiation) to the reals, is called Abel's equation, $$ f(g(x))=f(x)+1. $$ I studied it and searched through many works on methods for its solution, but in all of them I could find neither a general method nor a simple, step-by-step, systematic solution procedure. With the other two, i.e. Böttcher's equation, $$ f(g(x))=f(x)^n, $$ and Schröder's equation, $$ f(g(x))=sf(x), $$ I ran into the same problems. So my question is: is there an easy-to-follow, systematic procedure that I can use to solve any one of these equations to get the family or families of functions that satisfy the relations, and if so, where can I find it?
-
Well, my question and answer https://mathoverflow.net/questions/45608/does-the-formal-power-series-solution-to-ffx-sin-x-converge may be relevant. Some of the material I found is at http://zakuski.math.utsa.edu/~jagy/Iteration.cgi – Will Jagy Aug 29 '23 at 00:51
-
Could you perhaps clarify the question a bit? I assume in the first problem, $f$ is a given function (perhaps with some expected properties, such as being continuous, or...?) and we search for a $g$ satisfying the equation? Or perhaps the other way around? In the other equations, are $n$ resp. $s$ fixed constants? – Max Horn Aug 29 '23 at 08:12
-
Yes, if $g(x)$ is given, is there a way to find a continuous, infinitely differentiable function, or set of functions, that satisfy one of the three equations above? – Anthony Corsi Aug 29 '23 at 23:13
1 Answer
A rough heuristic which I use for these is below:
Abel's Equation:
$$ f(g(x)) = f(x) + 1 \rightarrow f(g(x))-f(x)=1 $$
I haven't found an easy way to always give a closed form / series representation for this. But if $g$ is sufficiently "nice" you can find $f$ by doing the following: consider some point $x=p$, then the sequence of points $x_1 = g(p),\ x_2= g(g(p)),\ x_3= g(g(g(p))),\ \dots,\ x_k = g^{k}(p),\ \dots$; let $f(x_k) = k$, and then interpolate this set of points as a curve.
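A minimal sketch of this orbit-interpolation idea (my own illustrative code, not from the answer; the choice $g(x)=2x$ with base point $p=1$, and plain linear interpolation between orbit points, are assumptions for the demo):

```python
def abel_by_interpolation(g, p, x, steps=60):
    """Approximate a solution of Abel's equation by setting f(g^k(p)) = k
    on the orbit p, g(p), g(g(p)), ... and interpolating linearly between
    consecutive orbit points (assumes the orbit is increasing and brackets x)."""
    orbit = [p]
    for _ in range(steps):
        orbit.append(g(orbit[-1]))
    for k in range(len(orbit) - 1):
        if orbit[k] <= x <= orbit[k + 1]:
            t = (x - orbit[k]) / (orbit[k + 1] - orbit[k])
            return k + t            # f(orbit[k]) = k, interpolated toward k+1
    raise ValueError("x not bracketed by the computed orbit")

g = lambda x: 2 * x                 # example: g(x) = 2x, standard point p = 1
f = lambda x: abel_by_interpolation(g, 1, x)

print(f(8.0))                       # 3.0, since 8 = g(g(g(1)))
print(abs(f(g(5.0)) - (f(5.0) + 1)) < 1e-9)   # Abel's equation holds
```

For this particular $g$ the linear interpolant even satisfies the equation exactly between orbit points; for a general $g$ a smoother interpolation scheme would be needed.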
Another way to view the same procedure is to let $f$ be the function that counts how many times you need to apply $g^{-1}$ (the function inverse, not the reciprocal!) to a value $x$ to get back to a standard value $p$. So if $g = x+1$ then $g^{-1} = x-1$, and if we let the standard value be $p=0$, then $f$ asks how many times you have to apply $x-1$ to a value $x$ to get it to $0$. If $x$ is an integer then this is just $x$ times, so $f = x$ over the integers (we then just say $f = x$ for all inputs to simplify).
We can let $g = 2x$ and $p=1$; if $x$ is a power of $2$, $x=2^n$, then we can have $f = \log_2(x)$. So we have defined $f$ for all powers of $2$, and we can now just use that same definition over all real/complex numbers.
If we let $g=x^2$ and $p=2$, and if $x$ is a power of a power of $2$, so $x=2^{2^{n}}$, then $f=\log_2(\log_2(x))$. Etc...
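Both toy solutions above can be verified directly (a quick numerical sanity check of my own, not part of the original answer):

```python
import math

log2 = lambda x: math.log(x, 2)

# g(x) = 2x, p = 1  =>  f(x) = log2(x) satisfies f(2x) = f(x) + 1
for x in (3.0, 10.0, 0.5):
    assert abs(log2(2 * x) - (log2(x) + 1)) < 1e-12

# g(x) = x^2, p = 2  =>  f(x) = log2(log2(x)) satisfies f(x^2) = f(x) + 1
f = lambda x: log2(log2(x))
for x in (3.0, 10.0, 1.5):
    assert abs(f(x ** 2) - (f(x) + 1)) < 1e-12

print("both Abel solutions check out")
```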
Schröder's Equation:
$$ f(g(x)) - sf(x) = 0 $$
This is a little trickier to solve. First we need a function $H$ such that $H(g^{k}(x)) \rightarrow 0$ as $k \rightarrow \infty$ and as $k \rightarrow -\infty$, for some set of $x$. If $g^{-1}$ isn't well defined this can be a little subtle, but there is usually a way to make this work. Also, $H$ needs to approach $0$ "fast", depending on the choice of $s$.
Then we consider the function
$$ f(x) = \cdots + s^2H(g^{-2}(x)) + sH(g^{-1}(x)) + H(x) + s^{-1}H(g(x)) + s^{-2}H(g^2(x)) + \cdots = \sum_{k=-\infty}^{\infty} s^{-k} H(g^{k}(x)) $$
Note that $s^k$ denotes ordinary exponentiation, while $g^k$ denotes $k$-fold function composition.
Clearly this obeys the desired functional equation, and it WILL converge if $H$ goes to $0$ sufficiently fast under repeated applications of $g$ and $g^{-1}$.
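To spell out why it obeys the equation: applying $g$ inside the sum just shifts the summation index (substitute $j = k+1$):

$$ f(g(x)) = \sum_{k=-\infty}^{\infty} s^{-k} H(g^{k+1}(x)) = \sum_{j=-\infty}^{\infty} s^{-(j-1)} H(g^{j}(x)) = s\sum_{j=-\infty}^{\infty} s^{-j} H(g^{j}(x)) = s\,f(x). $$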
We give an example now. Suppose we wish to solve
$$ F(x^2) - 2F(x) = 0 $$
In this case $g(x) = x^2$ and $g^{-1}(x)=x^{\frac{1}{2}} = e^{\frac{\ln(x)}{2}}$ (we can use the complex exponential with a branch cut to define this in the complex plane).
Now, depending on the absolute value of $x$, we have differing behavior:
$$ g^{+\infty}(x) = \begin{cases} \infty & \text{if $|x|>1$} \\ 0 & \text{if $|x|< 1$} \\ 1 & \text{if $x$ is a $2^n$-th root of unity} \\ \text{undefined} & \text{otherwise} \end{cases} $$ $$ g^{-\infty}(x) = \begin{cases} 0 & \text{if $x = 0$} \\ 1 & \text{otherwise} \end{cases} $$
So a good $H$ goes to $0$ quickly as $x \rightarrow 0, 1, \infty$. We can pick basically whatever we want as long as that condition holds. Consider for example $H(x) = x^2(1-x)^2e^{-x^2}$.
Then a solution can be written quite simply as
$$ f(x) = \cdots + 8x^{\frac{1}{4}}\left(1-x^{\frac{1}{8}}\right)^2e^{-x^{\frac{1}{4}}} + 4x^{\frac{1}{2}}\left(1-x^{\frac{1}{4}}\right)^2e^{-x^{\frac{1}{2}}} + 2x\left(1-x^{\frac{1}{2}}\right)^2e^{-x} + x^2(1-x)^2e^{-x^2} + \frac{1}{2}x^4(1-x^2)^2e^{-x^4} + \frac{1}{4}x^8(1-x^4)^2e^{-x^8} + \cdots = \sum_{k=-\infty}^{\infty} 2^{-k}\,x^{2^{k+1}}\left(1 - x^{2^k}\right)^2 e^{-x^{2^{k+1}}} $$
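Truncating the sum makes this easy to check numerically (my own sketch, not from the original answer; the cutoff $K=40$ and the guard that skips astronomically large forward iterates, whose terms are effectively $0$, are ad hoc choices):

```python
import math

def H(y):
    # the chosen H from the example: H(y) = y^2 (1-y)^2 e^{-y^2}
    return y**2 * (1 - y)**2 * math.exp(-y**2)

def f(x, K=40):
    # truncated sum_{k=-K}^{K} 2^{-k} H(x^{2^k}); note g^k(x) = x^{2^k} for g(x) = x^2
    total = 0.0
    for k in range(-K, K + 1):
        if (2.0 ** k) * math.log(x) > 50:   # g^k(x) is huge; e^{-y^2} kills the term
            continue
        total += 2.0 ** (-k) * H(x ** (2.0 ** k))
    return total

x = 1.7
print(f(x * x), 2 * f(x))    # the two values agree up to truncation error
```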
This is some curve and it obeys our target equation. Of course $f(x) = \ln(x)$ is a much simpler solution! BUT, the point is that there are really uncountably many other exotic solutions that can be built up by changing $H$ (a fun exploration might be to ask which $H$ produces the natural log?)
Böttcher's Equation:
Böttcher's equation (this is my first time seeing it) is basically just the multiplicative version of Schröder's equation...
If we have an $H$ that goes to $1$ sufficiently quickly along $g^k$ and $g^{-k}$ as $k\rightarrow \infty$, then the following should work:
$$ f(x) = \cdots \, H(g^{-2}(x))^{n^2}\, H(g^{-1}(x))^{n}\, H(x)\, H(g(x))^{\frac{1}{n}}\, H(g^2(x))^{\frac{1}{n^2}} \cdots = \prod_{k=-\infty}^{\infty} H(g^{k}(x))^{n^{-k}} $$
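As a sanity check of the multiplicative version (my own construction, not from the answer: take $g(x)=x^2$, $n=2$, and build a multiplicative $H$ by exponentiating the additive one from the Schröder example, so that $H \to 1$ exactly where the old $H \to 0$):

```python
import math

def h(y):                     # the additive H from the Schröder example
    return y**2 * (1 - y)**2 * math.exp(-y**2)

def H(y):                     # multiplicative H: tends to 1 where h tends to 0
    return math.exp(h(y))

def f(x, K=40, n=2):
    # truncated prod_{k=-K}^{K} H(x^{2^k})^{n^{-k}}  for g(x) = x^2
    total = 1.0
    for k in range(-K, K + 1):
        if (2.0 ** k) * math.log(x) > 50:   # H of a huge iterate is ~1, skip
            continue
        total *= H(x ** (2.0 ** k)) ** (n ** (-k))
    return total

x = 1.7
print(f(x * x), f(x) ** 2)    # agree: f(x^2) = f(x)^2 up to truncation error
```

Since $\prod_k H(g^k(x))^{n^{-k}} = \exp\bigl(\sum_k n^{-k} \log H(g^k(x))\bigr)$, this particular choice is literally the exponential of a Schröder-type solution with $s = n$.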
Further Considerations:
These same tricks apply for ANY associative functional equation: if $T$ is a binary associative operator and $U, V$ are operators that distribute over $T$, then $T(U[f],V[f]) = \text{Id}_{T}$ supports an $H$-construction like the ones we did for Schröder and Böttcher.
Now it might be tempting to consider functional equations with multiple functional shifts such as
$$ f(g_1(x)) + f(g_2(x)) + f(x) = 0 $$
These are substantially more difficult to solve. Basically, instead of looking at the chain $\dots, g^{-k}, \dots, g^{-2}, g^{-1}, \mathrm{id}, g, g^{2}, \dots, g^{k}, \dots$ you need to look at the Cayley diagram formed by $g_1, g_2$ (and this might shock you, but this diagram is almost NEVER a free group diagram).
Once you have nailed down the diagram, you need to find an $H$ that decays properly along all the limit directions of that diagram, and then you can sum.

-
This was very helpful, but there are a few things I don't understand. In Schröder's equation, how does one find the $H$ function used in the series for $f(x)$? Also, why is the $H$ function needed, considering that without it the series representation of $f(x)$ would still satisfy the Schröder equation? – Anthony Corsi Aug 29 '23 at 16:31
-
The $H$ is selected to make the series converge. Formally speaking (without worrying about convergence) you can do whatever you want including foregoing the use of $H$ but you usually want to be able to graph / talk about your function over R or C. I will include some examples later to explain further – Sidharth Ghoshal Aug 29 '23 at 17:02
-
I understand, my main question is given an arbitrary g how does one solve for H – Anthony Corsi Aug 29 '23 at 22:40
-
You don't really solve for $H$, the space of $H$'s is very large, you just need to find ANY $H$ that decays to 0 fast enough – Sidharth Ghoshal Aug 30 '23 at 10:17
-
See the example I just added: we derive that any $H$ that goes to 0 quickly at $0,1,\infty$ gives us a solution, and so we sort of manufacture an $H$. But it's hardly "solving"; it's basically just making an $H$ that meets those criteria. – Sidharth Ghoshal Aug 30 '23 at 10:47
-
One last thing: I was playing around with this method to solve various Schröder equations, and one I ran into was f(x^2+1)=sf(x). Trying to construct an H that met the criteria was exceedingly hard. After reviewing the problem I noticed that the infinite iteration of the inverse function sqrt(x-1) (we'll call this function G) was undefined over R. I then considered G as a holomorphic function, but even then I could not figure out where it mapped the reals or if it even converged to a complex number. In a case like this how does one construct H or figure out G? (ps: thanks for all the help) – Anthony Corsi Aug 30 '23 at 18:48
-
So it’s easy to see that $g = x^2+1$ tends to infinity when iterated, so $H$ should go to 0 at infinity. $L = g^{-1} = \sqrt{x-1}$ is quite a bit trickier. This potentially has two fixed points, $x_0 = \phi$ and $x_1 = 1-\phi$, where $\phi$ is the golden ratio (which can be seen by solving $\sqrt{x-1} = x$). Using a result from here: https://math.stackexchange.com/questions/421269/attractive-and-repulsive-fixed-points we can check if $|L’(x_i)| < 1$ for either point. Wolfram Alpha confirms both points satisfy that, so, up to convention with how you define the square root – Sidharth Ghoshal Aug 30 '23 at 21:54
-
ONE of them must be an attractor, and so letting $H$ go to 0 at that fixed point along with infinity gives you a function which will converge at least on a decent chunk of the complex plane (though maybe not the whole thing). $H = (x^2 - x + 1)^2e^{-x^2}$ then seems like a good place to start. – Sidharth Ghoshal Aug 30 '23 at 21:57
-
In general you want to take the set of fixed points of the Riemann sphere under the maps $g$ or $g^{-1}$ and let $H$ go to 0 rapidly for those points. – Sidharth Ghoshal Aug 30 '23 at 22:01
-
Ahhh, typo: $\phi, 1-\phi$ should be $\frac{1\pm i\sqrt{3}}{2}$ respectively, which are the negatives of the two nontrivial cube roots of unity. Again, estimating $|L’|$ gives us that both points are attractors, and so a candidate $H=(x^2-x+1)e^{-x^2}$ still works! – Sidharth Ghoshal Aug 30 '23 at 22:15
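For completeness, a numerical sketch of this last case (my own, not from the thread): it uses the principal branch of the complex square root, $s = 2$, truncation at $K = 40$, and the squared candidate $H = (x^2-x+1)^2 e^{-x^2}$ from the earlier comment. The square matters here: the backward iterates approach the fixed point at rate $|L'| = \frac{1}{2|x^*|} = \frac{1}{2}$, so $H$ vanishing quadratically there is what makes the $s^k$-weighted terms decay for $s = 2$.

```python
import cmath

s = 2                                   # the Schröder multiplier (my choice)
g = lambda y: y * y + 1
g_inv = lambda y: cmath.sqrt(y - 1)     # principal branch of the square root

def H(y):
    if abs(y) > 20:                     # e^{-y^2} has crushed the term by here
        return 0.0
    return (y * y - y + 1) ** 2 * cmath.exp(-y * y)

def f(x, K=40):
    # truncated sum_{k=-K}^{K} s^{-k} H(g^k(x)), iterates computed stepwise
    total = H(complex(x))
    y_fwd = y_bwd = complex(x)
    for k in range(1, K + 1):
        if abs(y_fwd) < 1e6:            # stop squaring once the term is dead
            y_fwd = g(y_fwd)
        y_bwd = g_inv(y_bwd)            # heads toward the fixed point (1+i*sqrt(3))/2
        total += s ** (-k) * H(y_fwd) + s ** k * H(y_bwd)
    return total

x = 0.3
print(abs(f(g(x)) - s * f(x)))          # tiny: the series solves f(x^2+1) = 2 f(x)
```

A different branch of the square root, or a starting point outside the basin of the attracting fixed point, can of course break the convergence.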