Functional Equations: The Algebra Behind Iteration

The Thesis

Here’s a claim: many things in mathematics and physics that seem gratuitously complicated are secretly about functional equations. And functional equations are, indeed, genuinely complicated—not because mathematicians made them hard, but because they touch something fundamental about composition and iteration.

What’s a functional equation? It’s an equation where the unknown is a function rather than a number. You’re probably used to solving $x^2 - 5x + 6 = 0$ for $x$. A functional equation asks you to solve something like:

\[f(x + y) = f(x) + f(y)\]

for the function $f$. (Answer: $f(x) = cx$ for any constant $c$, assuming continuity. Without continuity? Monsters.)
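A sketch of why continuity forces linearity: additivity pins $f$ down on the rationals,

\[f(nx) = n f(x) \quad\Rightarrow\quad f\!\left(\frac{p}{q}\right) = \frac{p}{q}\, f(1),\]

so $f(x) = f(1)\,x$ for every rational $x$, and continuity extends this to all of $\mathbb{R}$. Drop continuity and a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$ lets you build additive functions whose graphs are dense in the plane; those are the monsters.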

The functional equations I want to talk about today are specifically about iteration and composition. When you compose a function with itself—$f \circ f$, $f \circ f \circ f$, etc.—strange algebraic structures emerge. The simplest questions become hard:

  • Given $g$, find $f$ such that $f \circ f = g$ (the “functional square root”)
  • Describe all functions that commute: $f \circ g = g \circ f$
  • Interpolate between integer iterates: what’s $f^{1/2}$ or $f^\pi$?

These aren’t contrived puzzles. They show up in:

  • Dynamical systems and chaos
  • Renormalization group flows (where the beta function is an infinitesimal generator)
  • Formal power series and combinatorics
  • Fractional calculus and continuous iteration

The punchline is that solving these equations requires thinking about eigenvalues and conjugacy—the same ideas from linear algebra, but for nonlinear maps.

The Schröder Equation: Linearizing Iteration

Let’s start with the most important functional equation for dynamics. Suppose you have a function $f(x)$ and you want to understand its iterates $f^n = f \circ f \circ \cdots \circ f$ ($n$ times).

Schröder’s equation asks: can we find a change of coordinates $\phi$ such that iteration becomes linear?

\[\boxed{\phi(f(x)) = \lambda \, \phi(x)}\]

If you can solve this, then:

\[\phi(f^n(x)) = \lambda^n \phi(x)\]

So in the $\phi$-coordinates, iteration is just multiplication by $\lambda^n$. To compute $f^n(x)$, you’d do:

\[f^n(x) = \phi^{-1}(\lambda^n \phi(x))\]

This is exactly like diagonalizing a matrix. If $A = PDP^{-1}$ where $D$ is diagonal, then $A^n = PD^n P^{-1}$. Schröder’s equation does this for nonlinear maps.
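If you want the analogy in executable form, here's a minimal numpy check (the matrix is my own arbitrary example):

```python
# Minimal sketch: diagonalize A, then A^n = P D^n P^(-1).
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # any diagonalizable matrix will do
evals, P = np.linalg.eig(A)  # columns of P are eigenvectors

n = 5
An = P @ np.diag(evals**n) @ np.linalg.inv(P)
print(np.allclose(An, np.linalg.matrix_power(A, n)))  # True
```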

When Does It Work?

Near a fixed point $f(x_0) = x_0$ with multiplier $\lambda = f'(x_0)$, Koenigs' theorem gives a local analytic solution of Schröder's equation whenever $|\lambda|$ is neither $0$ nor $1$. (The case $|\lambda| = 1$ is delicate: for $\lambda$ on the unit circle but not a root of unity, solvability is the small-divisors problem.) The $\lambda$ in the equation is exactly this multiplier.

Example: Take $f(x) = 2x + x^2$ near $x = 0$. We have $f(0) = 0$ and $f'(0) = 2$. Schröder's equation becomes $\phi(2x + x^2) = 2\phi(x)$.

You can solve this by expanding $\phi(x) = x + a_2 x^2 + a_3 x^3 + \cdots$ and matching coefficients:

\[\phi(2x + x^2) = 2x + x^2 + a_2(2x + x^2)^2 + \cdots = 2\phi(x) = 2x + 2a_2 x^2 + \cdots\]

Matching $x^2$: $1 + 4a_2 = 2a_2$, so $a_2 = -1/2$.

And so on. The series converges, and you get a genuine conjugacy to linear dynamics. In fact this example has a closed form: $f(x) = (1+x)^2 - 1$, so $\phi(x) = \ln(1+x)$ satisfies $\phi(f(x)) = \ln\big((1+x)^2\big) = 2\phi(x)$, and its Taylor series $x - x^2/2 + x^3/3 - \cdots$ matches the coefficients we just computed.
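Here's a small sympy sketch of the coefficient-matching (the truncation order is an arbitrary choice of mine):

```python
# Solve phi(2x + x^2) = 2 phi(x) order by order, then compare with log(1 + x).
import sympy as sp

x = sp.symbols('x')
N = 8                               # truncation order for this sketch
f = 2*x + x**2
a = sp.symbols(f'a2:{N}')           # unknown coefficients a2 ... a7
phi = x + sum(a[k]*x**(k + 2) for k in range(N - 2))

# Expand the residual of Schroeder's equation as a series in x
resid = sp.series(phi.subs(x, f) - 2*phi, x, 0, N).removeO()
sol = sp.solve([resid.coeff(x, k) for k in range(2, N)], a)

print(phi.subs(sol))                      # x - x**2/2 + x**3/3 - ...
print(sp.series(sp.log(1 + x), x, 0, N))  # same coefficients
```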

The Hard Part

What if $\lambda = 1$? Then Schröder’s equation has no nontrivial solutions (you’d need $\phi(f(x)) = \phi(x)$, meaning $\phi$ is constant on orbits). This is where Abel’s equation comes in.

Abel’s Equation: When Schröder Fails

When $f'(x_0) = 1$ (a “parabolic” fixed point), we need a different approach. Abel’s equation asks for a function $\alpha$ such that:

\[\boxed{\alpha(f(x)) = \alpha(x) + 1}\]

This turns iteration into addition:

\[\alpha(f^n(x)) = \alpha(x) + n\]

So $f^n(x) = \alpha^{-1}(\alpha(x) + n)$.

Connection: If you have a Schröder function $\phi$ with $\phi(f(x)) = \lambda \phi(x)$, then $\alpha(x) = \log_\lambda \phi(x)$ solves Abel’s equation. So Abel and Schröder are related by a logarithm, exactly like the relationship between exponential growth ($y' = \lambda y$) and linear growth ($y' = c$).
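One line verifies it:

\[\alpha(f(x)) = \log_\lambda \phi(f(x)) = \log_\lambda\!\big(\lambda\,\phi(x)\big) = 1 + \log_\lambda \phi(x) = \alpha(x) + 1.\]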

Example: The Parabolic Case

Consider $f(x) = x + x^2$ near $x = 0$. Here $f(0) = 0$, $f'(0) = 1$. Schröder doesn’t work.

Abel’s equation: $\alpha(x + x^2) = \alpha(x) + 1$.

For small $x$, iteration looks like: $x_1 = x + x^2 \approx x$, $x_2 \approx x + 2x^2$, … The orbit moves slowly away from 0, with $x_n \approx x/(1 - nx)$ for small $x$.

The Abel function turns out to be $\alpha(x) = -1/x + \log x + O(x)$ as $x \to 0^+$ (the $\log$ term comes from matching at the next order). Check the leading term: $-1/f(x) = -1/(x + x^2) = -\frac{1}{x} \cdot \frac{1}{1+x} = -1/x + 1 + O(x)$, so $-1/x$ shifts by $1$ under $f$, exactly as Abel’s equation demands; the $\log$ pieces on the two sides differ only by $\log(1+x) = O(x)$.
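A quick numeric sanity check of that expansion (starting point and iteration count are arbitrary choices of mine):

```python
# For f(x) = x + x^2, alpha(x) = -1/x + log(x) should increase by ~1 per step.
import math

def f(x):
    return x + x*x

def alpha(x):
    return -1.0/x + math.log(x)   # leading terms of the Abel function

x = 0.01
a0 = alpha(x)
for n in range(6):
    print(n, round(alpha(x) - a0, 4))  # prints ~0, 1, 2, 3, 4, 5
    x = f(x)
```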

The Jabotinsky Matrix: Formal Power Series Sorcery

Now here’s where it gets algebraically interesting. Suppose we work with formal power series:

\[f(x) = \sum_{n=1}^\infty a_n x^n, \quad a_1 \neq 0\]

Composition of such series is a group operation (assuming $a_1 \neq 0$ so there’s a compositional inverse). Can we understand this group algebraically?

Jabotinsky’s idea: represent composition as matrix multiplication.

If $f(x) = \sum a_n x^n$ and $g(x) = \sum b_n x^n$, then $(f \circ g)(x) = \sum c_n x^n$ where the $c_n$ depend on $a_i$ and $b_j$ in a complicated but polynomial way.

Define the Jabotinsky matrix $J_f$ whose $i$-th row lists the coefficients of the $i$-th power of $f$:

\[(J_f)_{ij} = [x^j] \, f(x)^i\]

where $[x^j]$ means “coefficient of $x^j$”. Then:

\[J_{f \circ g} = J_f \cdot J_g\]

Composition becomes matrix multiplication! The group of formal diffeomorphisms embeds into infinite matrices.
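Here's a truncated-matrix sketch in sympy (truncation order and test series are my choices). Up to the truncation order the identity holds exactly, because $g$ has no constant term, so only finitely many powers of $g$ contribute to each coefficient:

```python
# (J_f)[i][j] = coefficient of x^j in f(x)^i; composition = matrix product.
import sympy as sp

x = sp.symbols('x')
N = 6  # truncation order

def jab(f):
    rows = []
    for i in range(1, N):
        p = sp.expand(sp.series(f**i, x, 0, N).removeO())
        rows.append([p.coeff(x, j) for j in range(1, N)])
    return sp.Matrix(rows)

f = 2*x + x**2
g = x + 3*x**3

lhs = jab(sp.expand(f.subs(x, g)))  # J_{f o g}
rhs = jab(f) * jab(g)               # J_f J_g
print(lhs == rhs)                   # True
```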

The Lie Algebra

Matrix groups have Lie algebras. What’s the Lie algebra of the composition group?

A one-parameter subgroup $f_t$ satisfies $f_{s+t} = f_s \circ f_t$. Differentiating at $t = 0$:

\[\frac{d}{dt}\bigg|_{t=0} f_t(x) = v(x)\]

This vector field $v$ generates the flow $f_t$. The relationship is:

\[\frac{\partial f_t}{\partial t} = v(f_t(x))\]

or equivalently, the Abel function $\alpha$ for $f_t$ satisfies $v(x) \cdot \alpha'(x) = 1$.

The Lie algebra elements are vector fields! Composition of diffeomorphisms corresponds to exponentiating vector fields.
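For a concrete specimen, take $v(x) = x^2$ (my example, chosen to tie back to the parabolic case): solving $v(x)\,\alpha'(x) = 1$ gives $\alpha(x) = -1/x$, and the flow is

\[f_t(x) = \alpha^{-1}\big(\alpha(x) + t\big) = \frac{x}{1 - tx},\]

which satisfies $f_s \circ f_t = f_{s+t}$ on the nose, and whose time-one map $x/(1-x) = x + x^2 + \cdots$ agrees to second order with the $f(x) = x + x^2$ example and its $x_n \approx x/(1 - nx)$ orbits.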

Connection to Physics: The Beta Function

Consider a renormalization group flow. You have a coupling constant $g$ that runs with scale $\mu$. Under a scale transformation $\mu \to e^t \mu$, the coupling transforms:

\[g(e^t \mu) = f_t(g(\mu))\]

where $f_t$ is some flow map. By the group property, $f_{s+t} = f_s \circ f_t$.

The beta function is exactly the generator of this one-parameter family:

\[\boxed{\beta(g) = \frac{d}{dt}\bigg|_{t=0} f_t(g) = \mu \frac{dg}{d\mu}}\]

This is the infinitesimal version of “how does the coupling change under scale transformations.”

The beta function is the vector field that generates functional composition of the RG flow.

The functional equation perspective clarifies several things:

  1. Fixed points: $\beta(g_*) = 0$ means $g_*$ is a fixed point of the flow. The eigenvalue of $d\beta/dg$ at $g_*$ determines stability.
  2. Scheme dependence: Different renormalization schemes give different $f_t$, hence different $\beta$. But fixed points and their stability (eigenvalues) are conjugacy invariants.
  3. Why it’s hard: The RG flow is a dynamical system in infinite dimensions (the space of all couplings). Functional equations in this setting are genuinely complicated.
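As a toy illustration (my construction; $b$ is an arbitrary one-loop coefficient, not tied to any particular theory), the one-loop flow $\beta(g) = -b\,g^3$ integrates in closed form, and you can check both the group law and that the generator recovers $\beta$:

```python
# One-loop toy: beta(g) = -b g^3 integrates to f_t(g) = g / sqrt(1 + 2 b g^2 t).
import math

b = 0.7  # arbitrary coefficient for this sketch

def flow(t, g):
    return g / math.sqrt(1.0 + 2.0*b*g*g*t)

g0, s, t = 1.3, 0.4, 0.9
print(flow(s + t, g0), flow(s, flow(t, g0)))  # equal: f_{s+t} = f_s o f_t

eps = 1e-6
print((flow(eps, g0) - g0)/eps, -b*g0**3)     # difference quotient ~ beta(g0)
```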

Fractional Iteration: Can You Take the Square Root of a Function?

Here’s a fun question: given $f(x)$, can you find $g(x)$ such that $g \circ g = f$? This is the “functional square root” or “half-iterate” of $f$.

Schröder’s equation gives the answer (when it works):

\[g(x) = \phi^{-1}(\sqrt{\lambda} \, \phi(x))\]

More generally, fractional iterates:

\[f^t(x) = \phi^{-1}(\lambda^t \phi(x))\]

This interpolates between $f^0 = \text{id}$ and $f^1 = f$ smoothly.
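For the example from earlier this is completely explicit: with $\phi(x) = \ln(1+x)$ and $\lambda = 2$, the fractional iterates of $f(x) = 2x + x^2$ are $f^t(x) = (1+x)^{2^t} - 1$. A two-line check:

```python
# Half-iterate of f(x) = 2x + x^2 via its Schroeder function phi(x) = log(1+x).
def f_t(t, x):
    return (1.0 + x)**(2.0**t) - 1.0

x = 0.3
y = f_t(0.5, f_t(0.5, x))     # apply the half-iterate twice
print(y, 2*x + x*x)           # equal up to rounding: (1+x)^(sqrt2*sqrt2) = (1+x)^2
```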

The catch: $\sqrt{\lambda}$ has two values, so there are (at least) two square roots. And if $\lambda < 0$, you need complex Schröder functions. And at parabolic points, it’s worse.

A Classic Hard Problem

What’s the half-iterate of $f(x) = e^x$?

This is surprisingly hard. $e^x$ has no real fixed point at all ($e^x > x$ for every real $x$), so there’s no real multiplier to feed into Schröder’s equation; the fixed points of $e^z$ live in the complex plane. People have constructed various solutions using different analytic continuations (Kneser’s real-analytic construction, which works through a complex fixed point, is the classic), but there’s no canonical answer.

The difficulty of this problem—finding $g$ with $g(g(x)) = e^x$—illustrates how these innocent-looking equations contain real complexity.

Julia’s Equation and Commutators

One more for the road. Julia’s equation asks: when do two functions commute under composition?

\[f \circ g = g \circ f\]

For formal power series near a fixed point, this is highly constrained. If $f$ and $g$ both fix $0$ with multipliers $\lambda$ and $\mu$:

  • If $\lambda$ and $\mu$ are multiplicatively independent (no relation $\lambda^m = \mu^n$ for nonzero integers $m, n$), then $f$ and $g$ commute only if they’re both iterates of some common function $h$.

  • If $\lambda$ and $\mu$ are roots of unity, the analysis is more delicate.

This connects to centralizers in the diffeomorphism group: the centralizer of $f$ (functions commuting with $f$) is generically just the iterates of $f$ itself.
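A quick illustration of the generic case (my example): two time-slices of the flow $x \mapsto x/(1 - tx)$ from the Lie-algebra section commute, precisely because both are iterates of a common flow:

```python
# Two slices of one flow commute: f_s o f_t = f_t o f_s = f_{s+t}.
def flow(t):
    return lambda x: x / (1.0 - t*x)

f, g = flow(0.3), flow(1.1)
x = 0.2
print(f(g(x)), g(f(x)), flow(1.4)(x))  # all equal
```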

Summary: The Functional Equation Zoo

  • Schröder: $\phi(f(x)) = \lambda \phi(x)$. Linearizes iteration; works when $|f'(x_0)| \neq 0, 1$.
  • Abel: $\alpha(f(x)) = \alpha(x) + 1$. Turns iteration into addition; works when $f'(x_0) = 1$.
  • Böttcher: $\phi(f(x)) = \phi(x)^d$. Handles superattracting (critical) fixed points; works when $f(x) = x^d + \cdots$ with $d \geq 2$.
  • Julia: $f(g(x)) = g(f(x))$. Characterizes commutants; conditions vary.

The Jabotinsky matrix packages all of this into infinite-dimensional linear algebra.

And whenever something in dynamics or physics seems needlessly complicated, there’s a decent chance it’s secretly a functional equation in disguise. Now you know what to look for.


Further reading: the classic reference is Kuczma, Choczewski & Ger’s “Iterative Functional Equations”. For the RG connection, anything by Jean Zinn-Justin. For Jabotinsky matrices and formal power series, look up the work of Eri Jabotinsky or more modern expositions in Berg & Duistermaat.