Functional Square Root

Right. Another intellectual curiosity. Let's not confuse this with the root of a function, which is about finding where a function equals zero. This is about finding a function's ancestry.

In the world of mathematics, a functional square root, or as some call it, a half iterate, is precisely what it sounds like: the square root of a function under the operation of function composition. If you have a function g, its functional square root is some other function f that, when you apply it to itself, gives you g. Formally, this means f(f(x)) = g(x) must hold true for all values of x in the relevant domain. It's the mathematical equivalent of asking, "What process, when performed twice, yields this result?" A simple question that, predictably, leads to complicated answers.

Notation

Because mathematicians occasionally enjoy ambiguity, the notations for expressing that f is a functional square root of g are not entirely standardized. You might see it written as f = g[1/2] or as f = g^(1/2). This notation is a specific instance of the more general notation for an iterated function. However, it conveniently leaves open the possibility of misinterpretation, easily confused with taking the function to a multiplicative power, just as f² = f ∘ f can be mistaken for the far more mundane x ↦ f(x)². One must pay attention.
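
If the prose leaves any doubt, here is a minimal sketch (Python; the helper names are mine and purely illustrative) contrasting the two readings of "f squared":

```python
# Minimal sketch of the notational ambiguity: "f squared" as composition
# versus "f squared" as a pointwise power. Helper names are illustrative.

def compose(f, g):
    """Return the composition f ∘ g, i.e. x ↦ f(g(x))."""
    return lambda x: f(g(x))

f = lambda x: x + 1

f_composed = compose(f, f)           # f² in the iteration sense: f(f(x))
f_pointwise = lambda x: f(x) ** 2    # x ↦ f(x)², the multiplicative reading

print(f_composed(3))    # 5  -> (3 + 1) + 1
print(f_pointwise(3))   # 16 -> (3 + 1)²
```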

History

This isn't a new problem; people have been wrestling with it for centuries.

  • The challenge of finding the functional square root of the exponential function was notably explored by Hellmuth Kneser in 1950.[1] This specific case, now commonly known as a half-exponential function, wasn't just an academic exercise in finding a function f such that f(f(x)) = eˣ. Kneser's foundational work later became crucial for extending the monstrous concept of tetration to non-integer heights in 2017. After all, if iterating a function an integer number of times is a solved problem, the next logical step is to wonder what it means to iterate it half a time.

  • Long before Kneser, the specific case of f(f(x)) = x was investigated over the real numbers ℝ. These solutions are known as the involutions of the reals. The problem was first studied in detail by Charles Babbage in 1815, and the relation f(f(x)) = x is now called Babbage's functional equation.[2] An involution is a function that is its own inverse: apply it twice, and you're back where you started. A particular solution Babbage found is f(x) = (b − x)/(1 + cx) for bc ≠ −1. More profoundly, Babbage observed that for any given solution f, its functional conjugate Ψ⁻¹ ∘ f ∘ Ψ by an arbitrary invertible function Ψ is also a solution. In the more abstract language of modern algebra, this means the group of all invertible functions on the real line acts on the subset of solutions to Babbage's equation by conjugation. It's a machine for generating an infinity of solutions from a single one, and both claims are easy to check numerically, as the sketch after this list shows.
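
Both of Babbage's observations lend themselves to a quick numerical check. A minimal sketch (Python; the particular values of b and c and the choice of Ψ are arbitrary illustrations of mine, nothing canonical):

```python
import math

# Babbage's particular solution f(x) = (b - x) / (1 + c*x), with bc != -1.
b, c = 2.0, 3.0
f = lambda x: (b - x) / (1 + c * x)

# An arbitrary invertible map Psi and its inverse, used for conjugation.
psi = lambda x: x ** 3 + x            # strictly increasing, hence invertible
def psi_inv(y, lo=-10.0, hi=10.0):
    """Invert psi by bisection on [lo, hi] (numerical sketch only)."""
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if psi(mid) < y else (lo, mid)
    return (lo + hi) / 2

conjugated = lambda x: psi_inv(f(psi(x)))     # Psi⁻¹ ∘ f ∘ Psi

for x in (0.1, 0.5, 1.7):
    # Each row prints three numbers that should all agree with x:
    # the starting point, f applied twice, and the conjugate applied twice.
    print(x, f(f(x)), conjugated(conjugated(x)))
```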

Solutions

If you're looking for a systematic procedure to generate functional n-roots for any n (be it real, negative, or even infinitesimal) for functions g : ℂ → ℂ, your best bet lies in the solutions to Schröder's equation.[3][4][5] This provides a formal, albeit often complex, path forward. Beyond that, an infinite number of trivial solutions can be constructed if the domain of the root function f is permitted to be sufficiently larger than that of g. Such solutions, while technically correct, often feel like cheating.
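
To make the Schröder route slightly less abstract, here is a rough numerical sketch (Python; the function names and tolerances are mine, and this is a back-of-the-envelope construction rather than a rigorous one). For a map g with an attracting fixed point at 0 and multiplier s = g′(0), 0 < s < 1, one can approximate a Schröder (Koenigs) function Ψ satisfying Ψ(g(x)) = s·Ψ(x) by the limit gⁿ(x)/sⁿ, then take f = Ψ⁻¹(√s · Ψ(x)). Applied to g(x) = x/(2 − x), which happens to be one of the examples below, it reproduces the closed-form square root listed there.

```python
import math

def half_iterate(g, s, x, depth=60, lo=0.0, hi=0.999):
    """Rough numerical half-iterate of g near an attracting fixed point at 0
    with multiplier s = g'(0), 0 < s < 1.  Approximates a Schroeder function
    via the Koenigs limit Psi(x) ≈ g^∘depth(x) / s**depth, then returns
    Psi⁻¹(sqrt(s)·Psi(x)) by bisection.  A sketch, not a robust implementation."""
    def psi(t):
        for _ in range(depth):
            t = g(t)
        return t / s ** depth

    target = math.sqrt(s) * psi(x)
    for _ in range(200):              # invert psi (increasing on [lo, hi])
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if psi(mid) < target else (lo, mid)
    return (lo + hi) / 2

# Example: g(x) = x / (2 - x) has a fixed point at 0 with g'(0) = 1/2.
g = lambda x: x / (2 - x)
f_closed = lambda x: x / (math.sqrt(2) + x * (1 - math.sqrt(2)))  # from the Examples list below

x = 0.3
fx = half_iterate(g, 0.5, x)
print(fx, f_closed(x))                   # both ≈ 0.23257
print(half_iterate(g, 0.5, fx), g(x))    # f(f(x)) ≈ g(x) ≈ 0.17647
```

The same scaffolding gives any other real iterate by replacing √s with sᵗ, which is precisely the sense in which Schröder's equation manufactures n-roots for arbitrary n.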

Examples

  • f(x) = 2x² is a functional square root of g(x) = 8x⁴. This one is simple enough. f(f(x)) = 2(f(x))² = 2(2x²)² = 2(4x⁴) = 8x⁴. (All three examples get a numerical spot-check in the sketch following this list.)

  • A functional square root of the nth Chebyshev polynomial, g(x) = Tₙ(x), is f(x) = cos(√n · arccos(x)). Note that this is not, in general, a polynomial. A lesson in expectations.

  • f(x) = x / (√2 + x(1 - √2)) serves as a functional square root of g(x) = x / (2 - x).
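
A quick numerical spot-check of all three (Python; a sketch, not a proof, and the test points are arbitrary). One caveat: with principal branches, cos(√n · arccos(x)) only composes cleanly where √n · arccos(x) stays within [0, π], so the Chebyshev check picks a point in that range.

```python
import math

# Example 1: f(x) = 2x² should square (under composition) to g(x) = 8x⁴.
f1 = lambda x: 2 * x ** 2
g1 = lambda x: 8 * x ** 4
print(f1(f1(1.3)), g1(1.3))        # both ≈ 22.8488

# Example 2: f(x) = cos(√n·arccos(x)) vs. the Chebyshev polynomial T_n.
# With principal branches the identity holds where √n·arccos(x) ≤ π;
# n = 2 and x = 0.3 stay inside that range.
n = 2
f2 = lambda x: math.cos(math.sqrt(n) * math.acos(x))
T2 = lambda x: 2 * x ** 2 - 1
print(f2(f2(0.3)), T2(0.3))        # both ≈ -0.82

# Example 3: f(x) = x / (√2 + x(1 − √2)) vs. g(x) = x / (2 − x).
r2 = math.sqrt(2)
f3 = lambda x: x / (r2 + x * (1 - r2))
g3 = lambda x: x / (2 - x)
print(f3(f3(0.3)), g3(0.3))        # both ≈ 0.17647
```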

Figure: Iterates of the sine function (blue), in the first half-period. The half-iterate (orange), i.e. the sine's functional square root; the functional square root of that, the quarter-iterate (black), above it; and further fractional iterates up to the 1/64th iterate. The functions below sine are six integer iterates, starting with the second iterate (red) and ending with the 64th. The green envelope triangle represents the limiting null iterate, the sawtooth function serving as the starting point leading to the sine function. The dashed line is the negative first iterate, i.e. the inverse of sine (arcsin).

The image provides a visual exploration of this concept for the sine function.

  • sin[2](x) = sin(sin(x)) [red curve]
  • sin[1](x) = sin(x) = rin(rin(x)) [blue curve]
  • sin[1/2](x) = rin(x) = qin(qin(x)) [orange curve]. This solution isn't unique; -rin would also satisfy the condition sin = (-rin) ∘ (-rin).
  • sin[1/4](x) = qin(x) [black curve above the orange curve]
  • sin[–1](x) = arcsin(x) [dashed curve]

Using this extended framework, sin[1/2](1) can be shown to have an approximate value of 0.90871.[6] A piece of information you now possess.
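
For those who want to see where such a number could come from, one hands-on approach near the origin is to match power-series coefficients: write f(x) = x + f₂x² + f₃x³ + … and solve f(f(x)) = sin(x) order by order. The sketch below (Python, exact rational arithmetic; the function names are mine) does exactly that. Truncated series of this sort are trustworthy only for small x, so it does not by itself reproduce the quoted value at x = 1, but it does exhibit the half-iterate's leading behaviour.

```python
from fractions import Fraction as Fr
from math import factorial, sin

ORDER = 9  # truncate all series at x^ORDER

def mul(a, b):
    """Truncated product of two power series (lists of coefficients)."""
    c = [Fr(0)] * (ORDER + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= ORDER:
                    c[i + j] += ai * bj
    return c

def compose(outer, inner):
    """Truncated series of outer(inner(x)); inner must lack a constant term."""
    result = [Fr(0)] * (ORDER + 1)
    power = [Fr(1)] + [Fr(0)] * ORDER           # inner(x)^0
    for k, ck in enumerate(outer):
        if k > 0:
            power = mul(power, inner)           # inner(x)^k
        if ck:
            result = [r + ck * p for r, p in zip(result, power)]
    return result

# g(x) = sin(x) = x - x³/3! + x⁵/5! - ...
g = [Fr(0)] * (ORDER + 1)
for k in range(1, ORDER + 1, 2):
    g[k] = Fr((-1) ** (k // 2), factorial(k))

# Solve f(f(x)) = g(x) term by term: the x^n coefficient of f∘f equals
# 2·f_n plus terms involving only lower-order coefficients of f.
f = [Fr(0), Fr(1)] + [Fr(0)] * (ORDER - 1)
for m in range(2, ORDER + 1):
    f[m] = (g[m] - compose(f, f)[m]) / 2

print(f[:6])        # leading coefficients: 0, 1, 0, -1/12, 0, -1/160
x = 0.2
fx = sum(float(c) * x ** k for k, c in enumerate(f))
ffx = sum(float(c) * fx ** k for k, c in enumerate(f))
print(ffx, sin(x))  # both ≈ 0.198669 at this small x
```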

(See [7]. For the notation, see the archived document.)

See also